Play sound in real-time when Alpha is "dominant"

topannuity
Posts: 2
Joined: Sat Oct 03, 2020 7:54 am

Play sound in real-time when Alpha is "dominant"

Post by topannuity »

Hi James-
I searched the board using the terms "sound" and "alpha" and didn't find what I'm looking (hoping) for.
Is there any way to make MM play a sound (beep, hum, buzz, etc.) in real time (i.e., during a session) when a user-determined condition is satisfied (e.g., Alpha or Delta is dominant or reaches a preset threshold)? This would make MM a very useful biofeedback app for training with eyes shut.
Thanks in advance! And thanks for creating this program/app.

PS - I wonder if that would cause a legal conflict for you with the Muse marketers, if they perceive this added function in MM as impinging on what they are selling. I love the Muse headband but hate the Muse app, which is inflexible and not very user friendly.
James
Site Admin
Posts: 1110
Joined: Wed Jan 02, 2013 9:06 pm

Re: Play sound in real-time when Alpha is "dominant"

Post by James »

Sorry, there are no audio options in the app.

I looked into adding audio a few times, but I have decided not to do it.

There are many subjective interpretations of what good meditation is. Personally I think that high relative Alpha corresponds to relaxation, but there are many other theories, and I don't want to offend anyone by saying one is the best when in reality I'm just a programmer, not a neuroscientist!
Instead, I decided to focus on making Mind Monitor a purely facts-based app. This way people can interpret the data however they like.

If people want to do something with the data, like create audio feedback, then this is possible using the OSC data streaming function to send the data out to another app. If this is something you'd be interested in, check out the data spec and sample apps here:
https://mind-monitor.com/FAQ.php#oscspec
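
Just to illustrate what's on the receiving end of that stream, here is a minimal sketch using the python-osc package (an assumption for the example; check the OSC address names and the port against the spec above and your Mind Monitor settings):

Code:

# Minimal sketch: listen for Mind Monitor's OSC stream on the local network.
# Requires the python-osc package (pip install python-osc).
from pythonosc import dispatcher, osc_server

def alpha_handler(address, *values):
    # The payload format (one averaged value vs. one value per sensor)
    # depends on Mind Monitor's OSC settings; see the spec linked above.
    print(address, values)

disp = dispatcher.Dispatcher()
disp.map("/muse/elements/alpha_absolute", alpha_handler)

# Port 5000 is the OSC target port used elsewhere in this thread; match it to your settings.
server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 5000), disp)
print("Listening for Mind Monitor OSC on port 5000...")
server.serve_forever()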

For example, yesterday I got sent this project, which I'm sure the creator wouldn't mind me sharing:
https://www.boraaydintug.com/experiment ... essive-eeg
https://vimeo.com/418996156/aaf1dad475

The main issue I ran into when testing audio feedback was deciding what audio to play, why, and how. There are so many options! You can change pitch, frequency and tone for so many different reasons. Each of the four physical sensors has 5 waves, so that's 20 different brainwave values to choose from, plus a number of combinations for grouping them (e.g. left/right Alpha, front Beta). Mind Monitor already has way too many options in the settings screen, and I think adding a ton of audio options would make it far too confusing for the average user.

Lastly, when I was testing audio feedback, I personally found it very distracting to have a changing tone playing while trying to relax. I think the way the Interaxon Calm app currently does it with soundscapes is very good, so I would recommend their app if you're looking for an audio-only, eyes-shut experience.
topannuity
Posts: 2
Joined: Sat Oct 03, 2020 7:54 am

Re: Play sound in real-time when Alpha is "dominant"

Post by topannuity »

Thanks, James, for giving so much of your time to answer my question in such detail. :)
paulbrennan
Posts: 17
Joined: Thu Jul 16, 2020 7:51 am

Re: Play sound in real-time when Alpha is "dominant"

Post by paulbrennan »

Hi folks,

This is something I've been puzzling over for a while and I can't seem to figure it out (my tech knowledge probably isn't up to it). The way I've figured out how to do this, using things like MAX, requires two devices: a phone with Mind Monitor and a laptop with MAX or some other sound processor. It also requires a fair bit of setup with IP addresses, etc. It does work, but it's not a robust or portable approach. Myndlift, on a commercial level, manages to do this all in a single app, so I wonder if there is a simpler way to do it.

Is it technically necessary to have two devices, one running MM and another running a sound generator? Can't the OSC data be routed on the same device?

Also, a browser would be a better solution, but when I looked at acquiring the OSC data in a browser, it seemed pretty tricky to get a UDP connection into the browser; it's not something browsers typically permit (or maybe I'm wrong about that).

Myndlift does this really well (and with the extra electrode) so it's the obvious solution, but it's expensive!

Thanks, Paul
Customer
Posts: 6
Joined: Sat May 01, 2021 11:37 am

Re: Play sound in real-time when Alpha is "dominant"

Post by Customer »

James wrote: Sat Oct 03, 2020 11:31 am Sorry, there are no audio options in the app. [...]
Is this a joke? I've paid for the app twice and now realise I've been screwed twice, because you deliberately chose not to give users an option to turn the passive display into the best brain feedback software available. What sound to make? How about a bell, a fart, anything! What frequencies? You give us the option to choose which channels display which frequencies; just provide an alert option next to each one.

Example: Channel 2 has Gamma selected; when the amplitude drops under a given value you get a beep, when it goes back up you get a bell, and the same with the other channels. Done. The best biofeedback app on the planet.
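
For what it's worth, that rule on its own only needs a handful of lines on top of the OSC streaming James links above. A rough sketch (the threshold and the sound file names are made-up placeholders):

Code:

# Rough sketch of the rule described above: one sound when the value crosses
# below a threshold, another when it crosses back above it.
from playsound import playsound

gamma_threshold = 0.4   # placeholder value (relative Gamma, 0..1)
previous_value = None

def gamma_alert(gamma_relative):
    global previous_value
    if previous_value is not None:
        if previous_value >= gamma_threshold > gamma_relative:
            playsound("beep.mp3")   # crossed downwards
        elif previous_value <= gamma_threshold < gamma_relative:
            playsound("bell.mp3")   # crossed upwards
    previous_value = gamma_relative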
James
Site Admin
Posts: 1110
Joined: Wed Jan 02, 2013 9:06 pm

Re: Play sound in real-time when Alpha is "dominant"

Post by James »

If you're looking for an audio feedback meditation experience, then I recommend you use Interaxon's Muse Calm app. That is what it is designed to do and I think it does a great job. Mind Monitor is focused on delivering data without subjective interpretation.

With that said, just for you, I have spent the evening creating a Python program to do exactly what you have asked. It uses the OSC streaming option built into Mind Monitor.

On your computer, make sure your firewall is set to allow UDP traffic on port 5000, then run the program.
In Mind Monitor, set the "OSC Streaming Target IP" to your computer's local Wi-Fi IP and press the stream button to start it sending data.

When run, it:
* Calculates and graphs the relative waves.
* Plays a sound file if relative Alpha reaches a pre-set threshold.
* Displays in the console whether the headband is correctly fitted.
RelativeGraph.jpg
You can find the code in my new Python example repository here: https://github.com/Enigma644/MindMonitorPython
Specifically this sample is for you: https://github.com/Enigma644/MindMonito ... eedback.py

I have endeavoured to make the code as user friendly as possible, so it can be edited and expanded upon easily without a high level of programming skill.
For example, in the code you will see:

Code:

#Audio Variables
alpha_sound_threshold = 0.6
sound_file = "bell.mp3"
If you want to change this to play a different audio file, or change the threshold, just change these values.

The test to see whether a sound should be played is in this function:

Code:

#Audio test
def test_alpha_relative():
    alpha_relative = rel_waves[2]
    if (alpha_relative>alpha_sound_threshold):
        print ("BEEP! Alpha Relative: "+str(alpha_relative))
        playsound(sound_file)
If you want to test Gamma instead of Alpha, the relative waves are all stored in the rel_waves array; you just need to change rel_waves[2] to rel_waves[4].
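For instance, a Gamma version of that function (just a sketch modelled on the sample above, not something that is in the repository) might look like:

Code:

#Audio test (Gamma variant, modelled on the Alpha sample above)
gamma_sound_threshold = 0.6   # example value, same scale as the Alpha threshold

def test_gamma_relative():
    gamma_relative = rel_waves[4]   # index 4 = Gamma, see the ordering below
    if (gamma_relative > gamma_sound_threshold):
        print("BEEP! Gamma Relative: " + str(gamma_relative))
        playsound(sound_file)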

The waves are stored in the array in this order: 0=delta, 1=theta, 2=alpha, 3=beta, 4=gamma, which is set by the number at the end of each dispatcher mapping:

Code:

    dispatcher.map("/muse/elements/delta_absolute", abs_handler,0)
    dispatcher.map("/muse/elements/theta_absolute", abs_handler,1)
    dispatcher.map("/muse/elements/alpha_absolute", abs_handler,2)
    dispatcher.map("/muse/elements/beta_absolute",  abs_handler,3)
    dispatcher.map("/muse/elements/gamma_absolute", abs_handler,4)
Naz
Posts: 1
Joined: Wed Jun 30, 2021 3:19 pm

Re: Play sound in real-time when Alpha is "dominant"

Post by Naz »

I excitedly downloaded this app when it first came out. Since December of 2019 I have kind of fallen out of practice, and I’ve recently decided to get back on the biofeedback horse.

I’ve read through this thread and I appreciate the depth and effort you’ve put into your responses.

The reason I wanted this software was to be able to build my own biofeedback training to suit my own specific training interests.

I have done some 8-channel Alpha wave training and enjoyed it. The Muse is great, and can assist many with their meditative goals. However, there is much opportunity for real-time EEG feedback that goes beyond the meditative aspect. Isolating and training the brain with feedback in each brain frequency is like going to the “brain gym”.

What I would like to do is:

The Muse has 5 sensor locations: right and left temporal, right and left frontal, and a mid-frontal. The temporals and frontals form pairs; with that pointed out, the following is the capability I wish to have.

I would like to isolate each brainwave; let’s go with Alpha. I would like to be able to pick a sound (from a MIDI instrument, a synth, or any digital source) for each of the 5 different sensor inputs...

Example:

A flute sound for the right temporal
A flute sound for the left temporal (but at a different pitch)

An oboe sound for the right frontal
An oboe sound for the left frontal (but at a different pitch)

A trumpet sound for the center frontal sensor.

Five different sounds, one for each corresponding sensor. Then, when hemi-coherence happens between, say, the temporals or the frontals, the respective sounds jump up an octave to serve as an audible signal that both regions are “firing” at the same time, in coherence.

Also, very importantly: a user can define their baseline levels and filter out anything that is less than a set gain over that baseline. This serves as a metric that can be worked on with ongoing training.

Additionally, as the brainwave amplitude at each sensor increases, so does the volume of the respective sound.

So in effect you are creating musical notes with the selected brainwave you wish to train.

What works for making large Alpha is not necessarily the same as what makes large Theta. Being able to receive real-time feedback in a musical way would allow users to figure out what works to make Alpha and then focus on making more of it, and then in another training session figure out what makes Theta and, with the feedback, learn how to make more of that.

This idea is not necessarily for meditative purposes, but rather for being able to run through each frequency and learn how to intentionally create and strengthen one’s ability to produce it.

This would serve as a method of naturally exercising the brain’s neurons to be stronger and give the trainee the ability to deliberately exercise delta, theta, alpha, beta and gamma (or at least attempt gamma, as I understand the slightest twitch can cause artifacts).

Creating the brainwaves internally is more desirable and appealing than forcing them into the brain with binaural beats or something like that.

Having the coding foundation built around the 5 sensor inputs would allow you to adapt to future home-use EEGs that may offer 8 channels (and give the opportunity for 4 sets of hemi-coherence and a musical choir of 12 simultaneous user-defined tones at once).

An advanced feature would be to disable the individual sounds and only permit audio feedback when pairs of coherence are achieved.

Experienced trainees could just do “touch-ups” or warm-ups with coherence alone in each of the respective brainwaves weekly, to keep their strengthened brains “toned”.

The above is the non-subjective, user-parametered biofeedback I would like the Mind Monitor software to be able to do, and to also work with any future hardware (that offers additional occipital and central channels).

I believe what I am asking for is possible, but I hope it’s also realistic. I think in time this level of control will be expected as a standard feature for all serious home EEG devices.
James
Site Admin
Posts: 1110
Joined: Wed Jan 02, 2013 9:06 pm

Re: Play sound in real-time when Alpha is "dominant"

Post by James »

You're welcome to do all that yourself using the code samples from my post above ;-)

FYI, the Muse only has 4 sensors. The three center electrodes don't really count, as you don't get brainwave values from them; they're just used to ground the signal.
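
For what it's worth, a very rough sketch of how part of that per-sensor mapping could sit on top of the OSC samples, using the four Muse sensors (the base pitches and the coherence tolerance below are made-up example values, not anything from the app):

Code:

# Hypothetical sketch: map per-sensor relative Alpha values to tone parameters.
# Sensor names follow the Muse headband: TP9, AF7, AF8, TP10.
BASE_PITCH = {"TP9": 220.0, "AF7": 330.0, "AF8": 392.0, "TP10": 262.0}  # Hz, arbitrary
COHERENCE_TOLERANCE = 0.05  # treat left/right values this close as "coherent" (made up)

def tones_for_alpha(alpha):
    """alpha: dict of sensor name -> relative Alpha (0..1). Returns sensor -> [freq, volume]."""
    tones = {}
    for sensor, value in alpha.items():
        freq = BASE_PITCH[sensor]
        volume = max(0.0, min(1.0, value))      # volume tracks amplitude
        tones[sensor] = [freq, volume]
    # Jump the pair up an octave when a left/right pair is (roughly) coherent
    for left, right in (("TP9", "TP10"), ("AF7", "AF8")):
        if abs(alpha[left] - alpha[right]) < COHERENCE_TOLERANCE:
            tones[left][0] *= 2
            tones[right][0] *= 2
    return tones

print(tones_for_alpha({"TP9": 0.42, "AF7": 0.61, "AF8": 0.60, "TP10": 0.44}))

Turning those numbers into actual notes (MIDI, a synth, etc.) would be a separate step on top.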
Customer
Posts: 6
Joined: Sat May 01, 2021 11:37 am

Re: Play sound in real-time when Alpha is "dominant"

Post by Customer »

James wrote: Wed Jun 30, 2021 5:27 pm You're welcome to do all that yourself using the code samples from my post above ;-)

FYI, the Muse only has 4 sensors. The three center electrodes don't really count, as you don't get brainwave values from them; they're just used to ground the signal.
Are you serious? What did you write that code for, when you know Muse Direct has been taken off the market? The person that replied to you was another customer who didn't get what he was looking for when paying for your app. I don't want a relaxation app; I want a custom brain monitoring app, which is what yours is supposed to be. A ping at a selected L/R bandwidth on this screen is good enough, just 1 ping. When Gamma reaches 60 dB it goes PING. That's it. Is that THAT hard to implement in the app?! If you want me to pay you more, ask for it; I've already been screwed twice. I don't have a Mac, I have an iPhone, and I want it in the Mind Monitor app. There's already a FREE (FREE!!) active-feedback monitoring app for Muse for Alpha, but it isn't parametric, so it's useless to me; I want Gamma, NOT Alpha.
James
Site Admin
Posts: 1110
Joined: Wed Jan 02, 2013 9:06 pm

Re: Play sound in real-time when Alpha is "dominant"

Post by James »

I'm not sure why you're talking about Interaxon's Muse Direct? FYI, I created Mind Monitor years before the now-discontinued Muse Direct existed.

Regardless, you don't need Muse Direct to run the code above. It runs with Mind Monitor's built-in data streaming. But yes, you'd need a Mac or PC to run it on, and if you don't have that, then I'm sorry, it won't work for you.

I'm not adding audio in, sorry.
#1 - I don't want to try to compete with the Muse Calm app. That's not what this app is about.

#2 - Every time someone asks about audio feedback, they want something different, and I don't want to add in a ton of options. The settings menu already has too many things in it. There's a LOT more to think about than just making a ping at X. For example, if your brainwave is almost exactly at 60 dB, let's say wobbling up and down between 59 and 61 dB, do you keep pinging the noise constantly? Bear in mind the brainwaves are calculated 10 times a second, so in just one second the value could cross the threshold a bunch of times. Playing the sound over itself would be super annoying, so now you need a cooldown time before it fires again (there's a rough sketch of what that looks like after these points), but how long a cooldown? Make a setting for it? That's another option clogging up the menu. Things like that just build up in complexity, and I don't think it'd be very user friendly or easy to use, so I don't want to do that, sorry. If I was making a full screen Mac/PC app where I had a ton of space for on-screen buttons and drop-down menus, then maybe, but this is a phone app that needs to fit in a few inches of space, and I want it to be a nice clean UI that's simple and easy to read.

#3 - The Muse headband completely saturates the Bluetooth connection with data. If you try to play audio at the same time through Bluetooth earbuds/headphones, it can cause data loss (with some devices). I know Interaxon had a hard time with this when Apple updated the AirPods with the "HD" audio codec. Mind Monitor is all about the data, so I'd rather not introduce something that could cause issues with data collection.
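
To make that trade-off concrete, here is roughly what even the simplest debounced take on the earlier threshold test looks like (a sketch only; the cooldown length is an arbitrary example value, and bell.mp3 is the sound file from the sample above):

Code:

# Sketch: threshold alert with a cooldown, so a value hovering around the
# threshold (updated ~10 times a second) doesn't retrigger constantly.
import time
from playsound import playsound

alpha_sound_threshold = 0.6
cooldown_seconds = 5.0          # arbitrary example value
last_played = 0.0

def alpha_alert(alpha_relative):
    global last_played
    if alpha_relative > alpha_sound_threshold and time.time() - last_played > cooldown_seconds:
        last_played = time.time()
        print("BEEP! Alpha Relative: " + str(alpha_relative))
        playsound("bell.mp3")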

I'm always happy to refund anyone who buys the app and finds it's not what they expected. If you buy through Google, email me your Google receipt with the order ID and I can refund it myself. If you buy through Apple or Amazon, you need to go through the refund process on their App Stores, but I always approve all refund requests that they forward on to me for confirmation, regardless of how long ago the purchase was.