Generating Music with just my Brainwaves | Brain-Computer Interfaces

How I used Brain-Computer Interfaces to Create Music from my Brainwaves

Bagavan Marakathalingasivam
10 min read · Mar 19, 2022

I don’t know about you, but I love playing the piano. I’ve been playing for around 5 years now and have found that it’s a great way to stay relaxed and have fun! However, one thing I’ve tried so hard to do is create my own music… but it never ends up sounding good…

Centuries ago, we started creating music by just hitting random things together and making sounds. It sounds kind of funny, but if you think about it… we’re still doing the same thing… except the music we make now is a lot more complicated and generally sounds a lot better 😅

But still, after centuries of evolution, we’re stuck creating music the same way. What if we could get to a point where we didn’t need to do anything except think? What if we could somehow create music… from our brainwaves? Well, that’s exactly what I did!

In this article, I’m going over a project where I used brain-computer interfaces to create music from my brainwaves.

Now, to build this project, we first need to understand what brain-computer interfaces are and how we can use them…


So What Are Brain-Computer Interfaces?

Whenever you’re doing anything, whether it’s talking, playing video games, or even reading this article, billions of neurons in your brain are constantly firing. A brain-computer interface (BCI) is a device that’s used to capture these brain signals, analyze them, and translate them into a specific output or command (in our case, music). Now, a lot of the methods used for capturing brain signals with BCIs are invasive or semi-invasive, meaning you’d have to undergo some kind of surgery to implant electrodes into your brain. That isn’t practical for this project, or for scaling any project/product. That’s why one of the most common ways of collecting this data is through a non-invasive method known as electroencephalography (EEG).

Electroencephalography (or EEG for short) collects electrical activity from the brain by placing electrodes (small metal discs with thin wires) onto the scalp. These electrodes detect small changes in electrical charge caused by the activity of your brain.

Understanding how EEG works | Source

The number of electrodes you need and where you place them depend heavily on the amount/type of data you are trying to collect. If we just want general brainwave data, then it doesn’t matter too much where we place the electrodes (and we generally just need one or two).

If you want to learn more about how BCIs and EEG work, feel free to check out my other article here.

So what BCI are you using for this project?

I Used OpenBCI’s Ganglion Board to Collect the EEG data

Now that we know what BCIs are and where we will be placing the electrodes… where are we going to get them from?

The BCI that I used was OpenBCI’s Ganglion Board. This is a biosensing device that can collect different types of electrical signals, not just from the brain but from anywhere on the body (which can come in handy for other projects). As for the electrodes, I used OpenBCI’s headband kit. OpenBCI has great documentation on how to set up this kit, so I would recommend checking that out here.

There are also a ton of other BCIs you can use for this project; it isn’t limited to OpenBCI (there are headsets like the Muse 2, Emotiv EPOC, Neurosity Crown, etc.).

Electrode Placement

As I mentioned, the placement of the electrodes doesn’t matter too much. However, for this project I placed mine on the Fp1 and Fp2 locations, with A1 and A2 (the earlobes) as reference points. The reference electrodes help make sure the data we are collecting is accurate and isn’t just random noise.

One of the cool things about OpenBCI is that it also comes with its own open-source GUI for visualizing the EEG data. In the screenshot below, we can see the EEG data visualized as waves (right side), along with the FFT and head plots (left side), which tell us a bit more about our data.

Visualizing the EEG data stream using OpenBCI’s GUI

Now that we have everything set up, let’s take a look at how we’re converting our brainwaves into music!

Diving into the code for this project!

So now that we’ve set up the board, let’s get into the build. I’ll be going over the five main steps: importing libraries, collecting data from the board, signal filtering/data processing, converting the data into sound, and creating the soundtrack.

Feel free to check out this GitHub repository for the full code!

Step 1: Import the necessary libraries

We’re going to be building this program in Python, so we first need to import the libraries we’ll need.

import time
import brainflow
import numpy as np
from midiutil import MIDIFile
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from brainflow.board_shim import (BoardShim, BrainFlowInputParams,
                                  LogLevels, BoardIds)
from brainflow.data_filter import (DataFilter, FilterTypes, AggOperations,
                                   WindowFunctions, DetrendOperations, NoiseTypes)

The main library I want to focus on here is BrainFlow.

BrainFlow is a library intended to obtain, parse and analyze EEG, EMG, ECG and other kinds of data from biosensors. (taken from their documentation)

This library is the core of our program because it allows us to connect our board to the code, as well as do things like signal processing, using its BoardShim and DataFilter classes.

We also imported libraries like numpy, matplotlib, and pandas for handling and visualizing the data.

The midiutil library is what we’re using to convert the data into different sounds and compile them together into one MIDI file.

Step 2: Getting Data from our Board

Now that we’ve imported all our libraries, let’s start collecting this data from our board.

def main():
    params = BrainFlowInputParams()
    board_id = BoardIds.SYNTHETIC_BOARD.value
    board = BoardShim(board_id, params)

    # Board-specific info: which rows of the data array hold EEG,
    # the sampling rate, and the timestamp channel
    eeg_channels = BoardShim.get_eeg_channels(board_id)
    sampling_rate = BoardShim.get_sampling_rate(board_id)
    timestamp = BoardShim.get_timestamp_channel(board_id)

    # Start streaming, record for 10 seconds, then grab everything collected
    board.prepare_session()
    board.start_stream()
    BoardShim.log_message(LogLevels.LEVEL_INFO.value,
                          'start sleeping in the main thread')
    time.sleep(10)
    # Nearest power of two to the sampling rate (handy for FFT-based analysis)
    nfft = DataFilter.get_nearest_power_of_two(sampling_rate)
    data = board.get_board_data()
    board.stop_stream()
    board.release_session()

We start by defining a main function and declaring some variables. The first section of the code connects to the board (we’ll be using BrainFlow’s synthetic board so that you can follow along without any hardware) and grabs some information about the stream, like the EEG channel indices, the sampling rate, and the timestamp channel.
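If you have an actual Ganglion board, you can swap the synthetic board out for the real one. Here’s a rough sketch of what that change looks like (the serial port value below is just a placeholder; check BrainFlow’s docs for what your dongle/OS needs):

# Sketch: connecting to a real Ganglion instead of the synthetic board.
# The serial port below is a placeholder, not a value from the original project.
params = BrainFlowInputParams()
params.serial_port = "/dev/ttyACM0"   # e.g. "COM3" on Windows
board_id = BoardIds.GANGLION_BOARD.value
board = BoardShim(board_id, params)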

After declaring all the variables, the second half of the Step 2 snippet starts the data stream. It collects data for a set amount of time (10 seconds) and then stores everything that was recorded in the data variable. The eeg_channels list contains the index of every row in data that holds EEG data, so we can use those indices to see what the signal looks like for each channel:

Plotting the raw data for the first four channels
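The exact plotting code isn’t shown in the article, but a plot like the one above can be produced with matplotlib along these lines (a rough sketch, assuming data and eeg_channels come from the snippet above):

import matplotlib.pyplot as plt

# Plot the raw signal for the first four EEG channels, one subplot each
fig, axes = plt.subplots(4, 1, sharex=True, figsize=(10, 6))
for ax, channel in zip(axes, eeg_channels[:4]):
    ax.plot(data[channel])
    ax.set_ylabel(f"channel {channel}")
axes[-1].set_xlabel("sample index")
plt.tight_layout()
plt.show()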

Step 3: Signal Processing on the Data

Now, if we look at the data, it might be hard to understand what’s happening, but I’ll tell you right now that this data isn’t just our brain signals. You see, EEG captures electrical signals from the scalp, and literally everything we do creates electrical activity: moving our eyes, clenching our jaws, even our heartbeats all get picked up by the BCI device. That’s exactly why we need to preprocess and filter the data so that we keep only our brainwaves.

for count, channel in enumerate(eeg_channels):
    # Bandpass filter: keep frequencies around 7 Hz
    DataFilter.perform_bandpass(data[channel],
                                BoardShim.get_sampling_rate(board_id),
                                7.0,   # center frequency (Hz)
                                3.0,   # bandwidth (Hz)
                                4,     # filter order (assumed; not shown in the original snippet)
                                FilterTypes.BUTTERWORTH.value, 0)
    # Notch filter: remove 50 Hz powerline noise
    DataFilter.remove_environmental_noise(data[channel],
                                          sampling_rate,
                                          NoiseTypes.FIFTY.value)

This part of the code goes through each channel of the raw data and applies a bandpass filter and a notch filter using the DataFilter class.

But what do these filters do?

The bandpass filter lets a certain range of frequencies through and attenuates everything else. In this case, we’re using the filter with a center frequency of 7 Hz and a bandwidth of 3 Hz, so frequencies close to 7 Hz pass through and anything well below or above that band is attenuated. This gets rid of things like heartbeats, eye movement, etc.

There’s also the notch filter. This is the opposite of a bandpass filter: it attenuates power over a specific, narrow range of frequencies. Here it’s used to eliminate powerline noise, which sits at 50 Hz or 60 Hz depending on where you live.

Now that we’ve filtered our data, we can see that it’s a lot “cleaner” and easier to understand (and we can be more confident that what’s left comes from our brainwaves and not other artifacts).

Plotting the processed data from the first four channels
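If you want to sanity-check the filtering, BrainFlow can also estimate the power spectrum of a channel (this is one thing the nfft value computed in Step 2 can be used for). A minimal sketch, assuming data, eeg_channels, sampling_rate, and nfft come from the earlier snippets; the frequency bands compared here are just illustrative:

# Estimate the power spectral density of one filtered channel with Welch's method
psd = DataFilter.get_psd_welch(data[eeg_channels[3]], nfft, nfft // 2,
                               sampling_rate, WindowFunctions.HANNING.value)

# Compare power inside vs. well outside the band we kept
in_band_power = DataFilter.get_band_power(psd, 5.0, 9.0)
high_freq_power = DataFilter.get_band_power(psd, 20.0, 40.0)
print(in_band_power, high_freq_power)  # the first number should dominate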

Let’s move on to converting this data into sound!

Step 4: Converting the Data into Sound

Now, the idea behind this project is to convert the value of each sample into a sound, but since we have over 2000 samples, playing a note for every single one probably isn’t going to sound good and would be overkill. That’s why we need to reduce the number of samples we have, but it isn’t as simple as taking the first 100.

Instead, I decided to take the value of every 50th sample and store it in a new list:

# Downsample: keep every 50th sample from the chosen channel
new_data = []
for index, value in enumerate(data[eeg_channels[3]]):
    if index % 50 == 0:
        new_data.append(value)

Here, I’m using data from the 4th EEG channel. This was just a personal choice; you can try it out with different channels and see what they sound like!

Plotting the new data from the 4th channel

Now we can see that we’re left with just about 50 samples, which is plenty for generating the music.

The actual “music generation” part of this project is pretty simple: it’s a series of if statements that convert the value of each sample into a specific note. So essentially, if the sample value is between 0 and 1, the program will play middle C; if it’s between 1 and 2, it’ll play a different note, and so on.

# Map each downsampled value to a MIDI note number (60 = middle C)
musical_notes = []
for i in new_data:
    if 0 < i < 1:
        musical_notes.append(60)
    if -1 < i < 0:
        musical_notes.append(59)
    if -2 < i < -1:
        musical_notes.append(58)
    if i < -2:
        musical_notes.append(57)
    if 1 < i < 2:
        musical_notes.append(62)
    if 2 < i < 3:
        musical_notes.append(63)
    if 3 < i < 4:
        musical_notes.append(64)
    if 4 < i < 5:
        musical_notes.append(65)
    if i > 5:
        musical_notes.append(66)

In this snippet of code, we’re iterating through our data and adding a note to the new list depending on each sample’s value (as mentioned previously) via the musical_notes.append() lines. The numbers ranging from 57 to 66 are MIDI note numbers, where 60 is middle C (so 59 is one semitone lower, 61 is one semitone higher, etc.). Since we’re writing the music to a MIDI file, the notes have to be represented as numbers like these.
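If the numbers are hard to read, here’s a small helper (not part of the original project, just for illustration) that converts a MIDI note number into a note name, using the common convention where 60 is C4:

# Hypothetical helper for readability: MIDI note number -> note name
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_note_name(number: int) -> str:
    octave = number // 12 - 1          # 60 -> C4 under this convention
    return f"{NOTE_NAMES[number % 12]}{octave}"

print([midi_note_name(n) for n in (57, 59, 60, 62, 66)])
# ['A3', 'B3', 'C4', 'D4', 'F#4']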

Step 5: Creating the Soundtrack

After collecting our data, processing it, and converting it into notes, we need to put the notes together into one file. That’s exactly what these next few lines of code do:

track    = 0
channel  = 0
time     = 0    # In beats
duration = 1    # In beats
tempo    = 250  # In BPM
volume   = 100  # 0-127, as per the MIDI standard

MyMIDI = MIDIFile(1)  # One track, defaults to format 1 (tempo track automatically created)
MyMIDI.addTempo(track, time, tempo)

for pitch in musical_notes:
    MyMIDI.addNote(track, channel, pitch, time, duration, volume)
    time = time + 1

with open("le-music.mid", "wb") as output_file:
    MyMIDI.writeFile(output_file)

We’re using the midiutil library to do this. The variables initialized at the beginning determine what the overall piece will sound like: how loud the track is, the tempo each note is played at, and so on. We then iterate through musical_notes and add each pitch to the MyMIDI object, advancing the time by one beat per note. Finally, the last part of the code saves the MIDI file to disk.
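The article doesn’t show how to play the resulting file, but one simple way to listen to it from Python is with pygame, assuming pygame is installed and your platform’s mixer supports MIDI playback (a sketch, not part of the original project):

import pygame

# Play the generated MIDI file and wait until it finishes
pygame.mixer.init()
pygame.mixer.music.load("le-music.mid")
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():
    pygame.time.wait(100)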

Now that we’ve finally finished, let’s take a look at what our output sounds like 👇

What our program outputted

Conclusion and Next Steps

Now, the output isn’t as good as a composition from Mozart, but it’s still cool to see that it was created from our brainwaves. In terms of next steps, I’m currently looking into how this application could be leveraged to assist with mental health and stress: essentially, a closed feedback loop that classifies someone’s emotional state in real time and generates music from their brainwaves to help shift that state. This would need 5 electrodes: two placed at the AF7 and AF8 locations on the forehead, one at Fpz on the forehead, and two more at the TP9 and TP10 locations (see the diagram below).

(left) The diagram shows where to place electrodes Source | (right) Visual representation of a high-level overview of this project Source

I can’t believe we’re already at the end of this article! I hope you learned and built something new! If you have any questions, feel free to reach out to me on LinkedIn or Twitter :)
