Generating Music with just my Brainwaves | Brain-Computer Interfaces

In this article, I’m going over a project where I used brain-computer interfaces to create music from my brainwaves.


So What Are Brain-Computer Interfaces?

Understanding how EEG works

So what BCI are you using for this project?

I Used OpenBCI’s Ganglion Board to Collect the EEG Data

Electrode Placement

Visualizing the EEG data stream using OpenBCI’s GUI
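One note before the code walkthrough: the snippets below stream from BrainFlow’s synthetic board, so they run without any hardware attached. With the actual Ganglion (using the BLED112 dongle), the connection step would look roughly like this minimal sketch; the serial port value is only an example and depends on where the dongle shows up on your machine:

import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

params = BrainFlowInputParams()
params.serial_port = "/dev/ttyACM0"  # example only; something like "COM3" on Windows
board = BoardShim(BoardIds.GANGLION_BOARD.value, params)
board.prepare_session()
board.start_stream()
time.sleep(5)                    # stream for a few seconds
data = board.get_board_data()    # grab everything collected so far
board.stop_stream()
board.release_session()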

Now that we have everything set up, let’s take a look at how we’re converting our brainwaves into music!

Diving into the code for this project!

Step 1: Import the necessary libraries

import time
import brainflow
import numpy as np
from midiutil import MIDIFile
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from brainflow.board_shim import (BoardShim, BrainFlowInputParams,
                                  LogLevels, BoardIds)
from brainflow.data_filter import (DataFilter, FilterTypes, AggOperations,
                                   WindowFunctions, DetrendOperations, NoiseTypes)

Step 2: Getting Data from our Board

def main():
    params = BrainFlowInputParams()
    # The synthetic board is used here so the code runs without hardware;
    # see the Ganglion connection sketch above for the real board.
    board_id = BoardIds.SYNTHETIC_BOARD.value
    board = BoardShim(board_id, params)
    eeg_channels = BoardShim.get_eeg_channels(board_id)
    sampling_rate = BoardShim.get_sampling_rate(board_id)
    timestamp = BoardShim.get_timestamp_channel(board_id)
    board.prepare_session()
    board.start_stream()
    BoardShim.log_message(LogLevels.LEVEL_INFO.value,
                          'start sleeping in the main thread')
    time.sleep(10)  # record roughly ten seconds of EEG
    nfft = DataFilter.get_nearest_power_of_two(sampling_rate)  # handy for PSD calculations
    data = board.get_board_data()  # 2-D numpy array: one row per channel
    board.stop_stream()
    board.release_session()

Plotting the raw data for the first four channels
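The plotting code itself isn’t shown in the snippet above. A minimal sketch using the matplotlib import from Step 1 might look like this; the plot_channels helper name is just for illustration, and the call would sit inside main() right after get_board_data():

def plot_channels(data, channels, title):
    # One subplot per requested channel, sharing the x axis (sample index)
    fig, axes = plt.subplots(len(channels), 1, sharex=True, figsize=(10, 6))
    for ax, ch in zip(axes, channels):
        ax.plot(data[ch])
        ax.set_ylabel('channel {}'.format(ch))
    fig.suptitle(title)
    plt.show()

# inside main():
#     plot_channels(data, eeg_channels[:4], 'Raw EEG data')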

Step 3: Signal Processing on the Data

    # ...continuing inside main(): clean up each EEG channel in place
    for count, channel in enumerate(eeg_channels):
        # Applying bandpass filter
        DataFilter.perform_bandpass(data[channel], sampling_rate,
                                    7.0, 3.0,  # centre frequency and bandwidth in Hz
                                    4,         # filter order (assumed; required by perform_bandpass)
                                    FilterTypes.BUTTERWORTH.value, 0)
        # Notch filter to remove 50 Hz mains interference
        DataFilter.remove_environmental_noise(data[channel], sampling_rate,
                                              NoiseTypes.FIFTY.value)

Plotting the processed data from the first four channels
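The same illustrative plot_channels helper from Step 2 can be reused here on the filtered array, e.g. plot_channels(data, eeg_channels[:4], 'Filtered EEG data').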

Step 4: Converting the Data into Sound

    # Downsample: keep every 50th sample of the fourth EEG channel
    new_data = []
    for index, value in enumerate(data[eeg_channels[3]]):
        if index % 50 == 0:
            new_data.append(value)

Plotting the new data from the 4th channel

    # Map each downsampled amplitude to a MIDI note number (middle C = 60)
    musical_notes = []
    for i in new_data:
        if 0 < i < 1:
            musical_notes.append(60)
        elif -1 < i < 0:
            musical_notes.append(59)
        elif -2 < i < -1:
            musical_notes.append(58)
        elif i < -2:
            musical_notes.append(57)
        elif 1 < i < 2:
            musical_notes.append(62)
        elif 2 < i < 3:
            musical_notes.append(63)
        elif 3 < i < 4:
            musical_notes.append(64)
        elif 4 < i < 5:
            musical_notes.append(65)
        elif i > 5:
            musical_notes.append(66)
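Since numpy is already imported, the same mapping could also be written more compactly with np.digitize. The sketch below is an approximation rather than a drop-in replacement: values that fall exactly on a boundary, which the strict inequalities above skip, still get assigned a note here.

    # Alternative sketch: thresholds as bin edges, one note per bin
    edges = [-2, -1, 0, 1, 2, 3, 4, 5]
    notes = [57, 58, 59, 60, 62, 63, 64, 65, 66]
    musical_notes = [notes[i] for i in np.digitize(new_data, edges)]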

Step 5: Creating the Soundtrack

    # MIDI settings for the output file
    track = 0
    channel = 0
    note_time = 0   # start of the next note, in beats (kept separate from the time module)
    duration = 1    # in beats
    tempo = 250     # in BPM
    volume = 100    # 0-127, as per the MIDI standard

    MyMIDI = MIDIFile(1)  # one track; defaults to format 1 (tempo track automatically created)
    MyMIDI.addTempo(track, note_time, tempo)

    # Add one note per mapped value, each starting one beat after the previous
    for pitch in musical_notes:
        MyMIDI.addNote(track, channel, pitch, note_time, duration, volume)
        note_time = note_time + 1

    with open("le-music.mid", "wb") as output_file:
        MyMIDI.writeFile(output_file)

What our program outputted
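Since all of the code above lives inside main(), the script only needs the usual entry-point guard at the end to run from top to bottom:

if __name__ == "__main__":
    main()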

Conclusion and Next Steps

(left) Diagram showing where to place the electrodes | (right) A high-level visual overview of this project
