Adapting the Human Machine: Examining the use of bio-based interactivity in contemporary new music performance
In my first semester teaching computer music at Cornell, I decided to take a small pilgrimage to Asheville, NC, to visit what was then the Moog Music factory, largely owing to our shared histories and, admittedly, a bit of professional flex. Since Asheville and Ithaca are not close, I booked a couple of shows along the way for good measure. The only problem was that I'd spent most of the pandemic playing guitar, and most venues expected me to do something electronic; as it happens, I struggle to do both well at the same time. By the time I arrived in Asheville, I was a bit of a mess as the company rep delivered their speech on how Moog ushered in the sound of the future while demonstrating a 50-year-old instrument. I left confused, both by my inability to juggle instrumentation and by what the new sound of the future could possibly be. In an effort to confront both issues, I turned to biometric inputs as a potentially viable solution, starting with brainwave monitors and branching out from there.

Over the course of this talk, we'll discuss historical examples of biometrically informed composition and its relation to my own practice, and explore hands-on examples and suggestions for incorporating these devices into your own work.
Special thanks to the Office of Alumni Engagement for their support of this lecture.
Open to all Oberlin students