Helen Sartory's hot takes from Abbey Road’s first Hackathon


18th January 2019
Helen Sartory is an artist-producer and emerging technologies advisor currently involved in a range of projects across startup incubation, music technology and immersive performance. She recently joined the team at Abbey Road Red to help produce their first Hackathon. Read about her experiences from the 36-hour tech-fest.
 
I joined the team at Abbey Road Red to help produce their first Hackathon - a 36-hour marathon of technology, coding, music and caffeine, and absolutely one of the most rewarding projects I’ve worked on so far in music tech. I’ve just about recovered, so now seemed like a good time to unload my brain into a post about what I personally took away from the event - apart from 100 new friends!
 
 
There were outstanding entries from teams from all around the world, but as a musician with a lean towards immersive performance, I was really excited to see those focused on innovating the way music is created and performed - specifically those using Chirp, one of our technology sponsors, which allows data to be sent between nearby devices using sound (via audible or near-ultrasonic frequencies). Incorporating devices that aren’t traditionally accessible in live performance, like phones, or building robust performance systems that can be triggered wirelessly, is a hugely exciting shift to see, and a sign that we might be moving beyond the traditional PA system - a key hurdle for performing in non-traditional venues and formats.
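If you’re curious how data-over-sound actually works under the hood, here’s a toy sketch of the core idea in TypeScript using the Web Audio API. To be clear, this is my own rough illustration of the principle, not the real Chirp SDK: each half-byte of a payload maps to one of 16 tones in the near-ultrasonic band, and a receiver would run an FFT over microphone input to reverse the mapping.

```typescript
// Toy illustration of data-over-sound (NOT the Chirp SDK): each 4-bit
// symbol maps to one of 16 near-ultrasonic frequencies and is played
// as a short sine tone via the Web Audio API.

const BASE_FREQ = 17500; // Hz - near-ultrasonic, barely audible to most adults
const FREQ_STEP = 100;   // Hz between adjacent symbol frequencies
const SYMBOL_MS = 80;    // duration of each tone

function sendPayload(ctx: AudioContext, payload: Uint8Array): void {
  let t = ctx.currentTime;
  for (const byte of payload) {
    // Split each byte into two 4-bit symbols (values 0-15)
    for (const nibble of [byte >> 4, byte & 0x0f]) {
      const osc = ctx.createOscillator();
      osc.frequency.value = BASE_FREQ + nibble * FREQ_STEP;
      osc.connect(ctx.destination);
      osc.start(t);
      osc.stop(t + SYMBOL_MS / 1000);
      t += SYMBOL_MS / 1000;
    }
  }
}

// Usage: encode a short trigger ID and play it through the speaker
// (browsers require a user gesture before an AudioContext will start).
const ctx = new AudioContext();
sendPayload(ctx, new TextEncoder().encode("cue-42"));
```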

Two of my favourite projects, La Vaca Cega and AHRChoir (which was built in 2 hours!), both created tools based on Chirp that transform an audience into a distributed synthesiser / speaker using their phones, an idea I’m really excited about (and attempted to build at the SXSW Hackathon in 2017 - shoutout to team CrowdPlay!). The demos were magical to experience and I think there’s huge potential to add some geo-fencing and start creating 3D sound effects from within a crowd.
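To make the “crowd as distributed synthesiser” idea a bit more concrete, here’s a hypothetical sketch of what each audience phone might run: it picks a note when it joins, then plays it whenever a trigger arrives. The onTrigger hook is my assumption, standing in for whatever received-payload callback the data-over-sound SDK actually provides.

```typescript
// Hypothetical sketch of a phone-as-synth-voice client. `onTrigger` is a
// stand-in for the receive callback a data-over-sound SDK would provide.

const NOTES_HZ = [261.63, 329.63, 392.0, 523.25]; // C major chord tones
const myNote = NOTES_HZ[Math.floor(Math.random() * NOTES_HZ.length)];

function playNote(ctx: AudioContext, freq: number, seconds = 1): void {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = freq;
  // Fade out exponentially to avoid a click when the oscillator stops
  gain.gain.setValueAtTime(0.5, ctx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + seconds);
  osc.connect(gain).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + seconds);
}

// Assumed hook: called when a trigger payload is decoded from the room audio.
function onTrigger(ctx: AudioContext): void {
  playNote(ctx, myNote);
}
```

Because each phone picks its note independently, a full crowd lands on a spread chord rather than one big unison - and layering geo-fencing on top of something like this is where those 3D effects could come in.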

Sonic Breadcrumbs also used Chirp to create an audio-triggered “scavenger hunt”, which encouraged users to explore the Studios (along a storyline that followed a stressed-out Abbey Road runner!) - another application that has really exciting implications for performance layouts and interactivity.
 
 
If you haven’t met Alex Glow and her amazing 3D printed owl Archimedes, google them immediately! Her project, Hidden Lyrics, used Chirp commands embedded into existing tracks to trigger a series of owl-y “dance” moves (adorable…), which had my brain turning over all the different ways we might be able to use audio to control physical objects and environments.
 
 
Another big theme that I loved seeing in action was projects that created new interfaces for music creation, especially those aimed at musicians with restricted movement - I can’t help but think that the wonderful Kris Halpin’s Mi.Mu. Glove demonstration on Day One was a big influence on this.

In the physical domain, the AutoMelodiChord and the Theremin T-Shirt were great examples of capacitance- and gesture-controlled instruments, and Face Beat utilised face-tracking modules to build a mind-boggling sound-triggering system that reacted to facial expressions in real time. On the other end of the spectrum, XRSynth approached sound design in a super innovative way - putting virtual objects with different sound properties into an interactive 3D environment, which has huge applications in both audio-visual performance and AR / VR.
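For a flavour of how expression-driven triggering might hang together, here’s a hypothetical sketch (my own, not anything from the actual Face Beat project): a face-tracking loop yields a 0-1 “smile” score - getSmileScore is a stand-in for whatever face-tracking library you’d use - and crossing a threshold on a rising edge fires a percussive hit.

```typescript
// Hypothetical expression-to-sound trigger. `getSmileScore` is a stand-in
// for any face-tracking library that can rate an expression from 0 to 1.
declare function getSmileScore(): number;

const THRESHOLD = 0.7;
let wasSmiling = false;

function percussiveHit(ctx: AudioContext): void {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  // Quick downward pitch sweep + fast decay reads as a kick-drum-ish hit
  osc.frequency.setValueAtTime(180, ctx.currentTime);
  osc.frequency.exponentialRampToValueAtTime(50, ctx.currentTime + 0.15);
  gain.gain.setValueAtTime(1, ctx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 0.2);
  osc.connect(gain).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.2);
}

function pollExpressions(ctx: AudioContext): void {
  const smiling = getSmileScore() > THRESHOLD;
  if (smiling && !wasSmiling) percussiveHit(ctx); // trigger on the rising edge only
  wasSmiling = smiling;
  requestAnimationFrame(() => pollExpressions(ctx));
}
```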
 
 
I could go on and on, as every team’s submission blew my mind in some way, but I’ll leave it there for now. A big shout-out to everyone who took part and kept spirits high (even at 4am!), our wonderful sponsors, and massive thanks to the team at Abbey Road for all your help. I can’t wait to do it all over again next year!
 
 
