Abbey Road Red Talk Recap: A Primer on Brain-Computer Interfaces for Music

25th May 2022

In our music tech incubator’s 12th Red Talk, we explored neural interfaces, also known as Brain-Computer Interfaces (BCIs), and how they relate to music from both a creative and an experiential perspective.


Red’s Karim Fanous and David Fong uncover what a BCI is, how BCIs have evolved and how their signals can be used for music creation.

 

What is a BCI?


A Brain-Computer Interface is a computer-based system which acquires brain signals originating from the Central Nervous System, analyses them and translates them into commands that are relayed to an output device to carry out a desired action.

When the brain is working, it generates electricity. Brain-Computer Interfaces rely on electroencephalogram (EEG) signals: recordings of the brain’s electrical activity detected through electrodes, typically placed on the scalp.

 

The Evolution of BCIs


Our talk began with the history of BCIs, presented by David Fong, who set the scene in 1875, when British physician and physiologist Richard Caton laid the groundwork for EEG research by detecting electrical impulses from the surfaces of rabbit and monkey brains.

German doctor and researcher Hans Berger pioneered the method for recording EEG signals in 1924 and identified the first alpha and beta wave activity from the brain. Alpha waves appear when a person is in a relaxed state, while beta waves appear when a person is in an alert or focused state. Berger coined the term EEG and published the first paper on the subject in 1929, describing the changes he observed in EEG signals due to attention, mental effort and cerebral injuries.
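For the technically curious, alpha and beta are simply frequency bands (roughly 8–12 Hz and 13–30 Hz respectively), so a BCI can tell them apart by comparing how much power an EEG trace carries in each band. Here is a minimal sketch in Python, assuming NumPy and SciPy, a 256 Hz sampling rate and conventional band boundaries; none of these specifics come from the talk.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate in Hz

def band_power(eeg: np.ndarray, low: float, high: float) -> float:
    """Sum the power spectral density between two frequencies."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
    mask = (freqs >= low) & (freqs < high)
    return float(psd[mask].sum())

def dominant_state(eeg: np.ndarray) -> str:
    """Crudely label a window of EEG samples as relaxed or focused
    by comparing alpha-band and beta-band power."""
    alpha = band_power(eeg, 8.0, 12.0)   # relaxed
    beta = band_power(eeg, 13.0, 30.0)   # alert or focused
    return "relaxed (alpha)" if alpha > beta else "focused (beta)"
```

Real systems use longer analysis windows, per-user calibration and far more careful statistics, but the underlying comparison is this simple.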

The 1970s saw the first research on BCIs commence at the University of California, Los Angeles, with leading researcher Jacques Vidal coining the term. In 1977 he demonstrated the first application of a BCI, which enabled control of a cursor-like graphical object on a computer screen.

As research into BCIs intensified, researchers began to consider how they could be deployed commercially. In 2001, Cyberkinetics, in collaboration with Brown University, launched the first commercially available BCI, NeuroPort, a device that monitored brain activity in order to identify micro-seizures in patients.

Jumping to the present day, Neuralink, founded by Elon Musk, has a different goal: eventually implanting chips into paralyzed humans to allow them to control electronic devices like phones or computers.

Before testing the method on humans, which it aims to do in 2022, Neuralink implanted a chip into the motor cortex of a nine-year-old macaque. The macaque was first taught to play Pong using a joystick, receiving rewards when it used the joystick correctly. As the macaque played, data from the implanted chip was analysed to determine which brain signals corresponded to different joystick actions. The joystick was then disconnected to see whether the macaque could play Pong by simply thinking about moving its hands up or down, which it managed successfully. While this experiment showed the potential of implanting BCIs into living things, it also revealed the potential dangers: several monkeys did not survive the testing.

BCIs are also attracting a lot of interest from investors, with financing tripling in 2021 relative to 2019. Big funding rounds included the aforementioned Neuralink’s $205 million Series C in July 2021; a $12 million seed round for Cognixion, developer of wearable devices with integrated BCI technology; and a $20 million seed round for Paradromics, developer of BCIs that help people with disorders ranging from paralysis to speech impediments.

 

How do we obtain useful brain signals?


Our first speaker was Eduardo Miranda, Professor in Computer Music at the University of Plymouth, whose work has long focused on how BCIs can give musicians control over a musical instrument during a performance, or over which musical ideas are chosen when composing new music.

Eduardo highlighted that signals captured by BCIs are far noisier than the electrical activity measured from other muscles and organs, because a lot of noise is introduced in the measurement process. For instance, when taking EEG readings from a human scalp, hair applies a lot of filtering to the original signal. Some processing can be applied to improve the purity of the signal, such as effectively reverse-engineering the captured signal to estimate what the original signal from the source was. Capturing, cleaning and parsing brainwave data is a complex and challenging process.
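As an illustration of one common clean-up step, a band-pass filter can strip out the slow electrode drift and high-frequency noise that swamp raw EEG readings. This is a minimal sketch assuming SciPy and a 256 Hz sampling rate, not a description of Eduardo’s own pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed EEG sampling rate in Hz

def bandpass_eeg(raw: np.ndarray, low: float = 1.0, high: float = 40.0) -> np.ndarray:
    """Band-pass filter: removes slow electrode drift below `low` Hz
    and muscle/measurement noise above `high` Hz."""
    b, a = butter(4, [low, high], btype="bandpass", fs=FS)
    return filtfilt(b, a, raw)  # filter forwards and backwards to avoid phase shift
```

Filtering is only a first step; techniques such as independent component analysis are typically used afterwards to separate genuine brain activity from eye blinks and muscle artefacts.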

 

Professor Eduardo Miranda

 

Using BCI signals for music creation


While some musicians have experimented with turning brainwaves into music directly by converting the BCI signal into an audio signal, the results sound fairly chaotic in their raw form. Musicians have tried to manipulate these signals to make them more musical, but Eduardo thinks a more promising approach is to use BCIs to control music devices.
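To make that distinction concrete, the control approach maps a feature of the brain signal onto a parameter of an instrument rather than playing the signal itself. The sketch below, an illustrative assumption rather than anything demonstrated in the talk, rescales an alpha-band power reading into the 0–127 range of a MIDI controller:

```python
import numpy as np

def alpha_to_midi_cc(alpha_power: float,
                     floor: float = 0.0, ceiling: float = 50.0) -> int:
    """Linearly rescale an alpha-band power reading (units and range
    are assumed) into the 0-127 range of a MIDI control-change value."""
    norm = (alpha_power - floor) / (ceiling - floor)
    return int(np.clip(norm, 0.0, 1.0) * 127)

# e.g. feed a band-power estimate like the one sketched earlier into a
# synthesiser's filter-cutoff controller once per analysis window
```

The musical result then comes from the synthesiser, with the brain signal acting as a noisy expression pedal rather than as the sound source.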

Eduardo has carried out several landmark experiments including one exploring how musicians suffering from locked-in syndrome could select musical ideas using BCI readings.

He did this by harnessing a phenomenon called visual evoked potential: when someone looks at a light flashing at a specific frequency, their brain produces activity at that same frequency, which can be read by a BCI.

Eduardo’s experiment placed lights flashing at different frequencies beside the musicians, so that the BCI could infer which light each musician was looking at and trigger the corresponding musical phrase. A string quartet then played the phrases in the order they were selected, allowing the musicians to effectively compose a piece of music in real time (watch a performance here).
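The frequency-detection step at the heart of this technique can be sketched as a simple spectral comparison: look for whichever candidate flicker frequency dominates the EEG spectrum. The flicker frequencies, sampling rate and phrase lookup below are illustrative assumptions, not the values used in Eduardo’s experiment:

```python
import numpy as np

FS = 256                             # assumed EEG sampling rate in Hz
FLICKER_HZ = [7.0, 9.0, 11.0, 13.0]  # one flashing light per musical phrase

def detect_gazed_light(eeg_window: np.ndarray) -> int:
    """Return the index of the flicker frequency with the most spectral
    energy, i.e. the light the subject is presumed to be looking at."""
    spectrum = np.abs(np.fft.rfft(eeg_window))
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    scores = []
    for f in FLICKER_HZ:
        near = np.abs(freqs - f) < 0.5  # narrow band around each candidate
        scores.append(spectrum[near].sum())
    return int(np.argmax(scores))

# phrase = musical_phrases[detect_gazed_light(window)]  # hypothetical lookup
```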

Eduardo carried out another experiment making use of visual evoked potential. This one used BCI readings to trigger one of four sound effects, a bird chirping, a gunshot, a telephone and a bell, depending on which icon, each flashing at a specific frequency, the user wearing a BCI was looking at (watch it here).

 

Conclusion


Following David and Eduardo’s presentations, we enjoyed guest appearances from Abbey Road Red alumnus Micah Brown, founder of BrainRap, who is developing an assistive lyrics tool for songwriters and freestylers powered by AI and BCIs, and Senaida Ng, co-founder of MiSynth, who is developing a BCI-powered software synthesizer for creators. The talk closed with an in-the-round audience discussion.

