Filter Implementation

Posted: Tue Apr 09, 2024 3:44 pm
by stellarpower
Hi,

Was wondering if anyone knows any details about the filters in the Muse, what sort they are or how they are implemented.

I was under the impression that they are done on the hardware, but now I'm wondering: are the bandpowers perhaps computed within the SDK? Has anyone managed to glean any details about the algorithm that's being used?

Edit: to clarify, when I say filter, I am referring to the frequency bands - not artefact rejection, although I'd obviously be interested to find out how they achieve that too!

Thanks a lot

Re: Filter Implementation

Posted: Sat Aug 10, 2024 1:25 pm
by Peter Gamma
The SDK is not available anymore. As far as I know, LabStreamingLayer is the most widely used option with the Muse. Then there is OSC streaming with the Mind Monitor. Raw data is also streamed, so you can choose whatever software you want with the Muse. Arnaud Delorme, for instance, used EEGLAB with the Muse; you can find it on YouTube. But he imported .csv files from the Mind Monitor. Delorme also used NeurofeedbackLab with the Muse, which is, as far as I know, based on BCILAB.

Re: Filter Implementation

Posted: Thu Oct 10, 2024 8:42 pm
by blue-j
So nice to join this forum! I'm looking forward to connecting with everybody. Just wanted to share that Muse does have an SDK still:

https://choosemuse.com/pages/developers

Not sure what your platform is, but mine is macOS, and the SDK for it uses Objective-C. : (

The SDK is pretty well documented though, and that is quite useful in and of itself. Personally, we just forked MuseLSL. We haven't had success adding any channels, though.

Pretty sure the bandpowers are computed off the device. I believe that really should happen after preprocessing anyway.
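For what it's worth, here's a minimal sketch of how off-device band power is typically computed from raw samples: take a power spectral density of a window of the signal, then integrate it over each frequency band. This is NOT Muse's actual (undocumented) algorithm, just the standard textbook approach; the 256 Hz sampling rate matches what the Muse headband streams, and the band edges are the conventional ones.

```python
import numpy as np

def band_power(signal, fs, band):
    """Absolute power in a frequency band via a simple periodogram.

    Not Muse's actual algorithm (which is undocumented) -- just a
    typical off-device approach: FFT the raw samples, then integrate
    the power spectral density over the band of interest.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= band[0]) & (freqs < band[1])
    # Rectangle-rule integration over the selected bins.
    return psd[mask].sum() * (freqs[1] - freqs[0])

# Synthetic "EEG": a dominant 10 Hz (alpha) oscillation plus noise,
# sampled at 256 Hz as the Muse streams.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, (8, 13))   # conventional alpha band
beta = band_power(eeg, fs, (13, 30))   # conventional beta band
print(alpha > beta)  # True: the 10 Hz component dominates
```

In practice you'd window the signal (e.g. Welch's method) and preprocess first, which is why doing it after artefact rejection makes sense.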

- J