music technology portfolio

Thank you for reviewing my application to your graduate program in music technology! On this webpage I have collected and explained three works that represent my background in music composition and theory, research and informatics, writing, programming, and interaction design. If you run into any issues accessing or using these works, please do not hesitate to email me for assistance or a demo video. I am also happy to provide additional code and writing samples if you need them. I hope you enjoy looking through these projects as much as I enjoyed putting them together!

i.

Unity as an Interactive Binaural DAW:

the nature of intelligent life is to destroy itself

This experience works best in fullscreen with headphones! If the embedding isn’t working, you can also play the composition here. You can peruse the orchestral score here, the code here, and an earlier blog post with additional context for this project here.

What is this?

This is one part of my 2022 MA/BA thesis in music theory and composition at Yale University. My main project involved designing and building an ambisonic (3D audio) array at the Center for Collaborative Arts and Media (CCAM). I composed a ten-minute piece for the Yale Symphony Orchestra (YSO), recorded it with a bespoke multichannel microphone rig, and mixed it in ambisonic space. I was proud of how my piece turned out, but I wanted the experience to be accessible to people who couldn’t physically come to the CCAM to hear it. So, as I was preparing the ambisonic mix, I began to explore Unity as a workstation for interactive binaural audio. People really connected with the gamified medium, and so I presented my composition in this form to the Computing and the Arts department. I came back to the project earlier this year in order to clean up the art, code, and interaction design, and that’s what you see here!

What did I learn?

Technologically: My first pass at this project was also my first experience coding in C# and dealing with a number of HCI and computer graphics concepts like shaders and interfaces. It was an opportunity to get hands-on with the interwoven systems of interaction design, testing how frontend and backend decisions affect the emotional valence of the resulting experience. And of course, I dove down the sound spatialization rabbit hole, learning about HRTFs, ambisonic file encoding, spatial perception, and spatial DAWs. When I returned to this project earlier this year, I felt that I could approach these topics with more confidence.
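If you're curious what those spatialization cues actually look like, here is a toy sketch, written in Python with NumPy rather than in the Unity project's C#, of the two simplest cues that HRTF-based spatializers model: interaural time and level differences. The function name and constants are illustrative placeholders, not anything from my code.

```python
# Toy illustration of interaural time and level differences (ITD/ILD),
# two of the cues that full HRTF-based spatializers model much more completely.
# All names and constants here are hypothetical placeholders.
import numpy as np

def pan_binaural(mono, azimuth_deg, sr=48000, head_radius=0.09, c=343.0):
    """Crude ITD/ILD pan of a mono signal toward azimuth (+90 = hard right)."""
    az = np.radians(azimuth_deg)
    itd = (head_radius / c) * (abs(az) + np.sin(abs(az)))        # Woodworth ITD estimate (s)
    delay = int(round(itd * sr))                                  # interaural delay in samples
    gain_near = 1.0
    gain_far = 10 ** (-6 * abs(np.sin(az)) / 20)                  # roughly 6 dB of max ILD
    near = mono * gain_near
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * gain_far
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)

# Example: a 440 Hz tone placed 60 degrees to the listener's right.
t = np.linspace(0, 1.0, 48000, endpoint=False)
stereo = pan_binaural(0.2 * np.sin(2 * np.pi * 440 * t), azimuth_deg=60)
```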

Musically: My composition was heavily inspired by Joseph Schwantner’s …and the mountains rising nowhere and John Luther Adams’ Become Ocean, works whose attention to instrument pairings and texture pays off at grand emotional scales. I was ambitious with my use of extended technique, asking the brass to blow bubbles with their mouthpieces, the low strings to rustle their instruments, and the whole orchestra to vocalize as part of the overall texture. There’s also a structured group improvisation, where I explore how performer agency shapes density and time. I spent extra time recording demo videos and working with individual YSO members in order to iron out the performance of these experimental elements. I also worked with the recording engineer at the Yale School of Music to figure out how to modify the standard orchestral microphone rig so that the resulting stems would work in a spatial multichannel mix.

Personally: This project put me in charge of a yearlong collaboration between the CCAM, the Yale Music Department, and the Yale Symphony Orchestra. I practiced my communication skills constantly, resolving conflicts related to schedules, budgets, logistics, and policy. It was challenging and rewarding to work with so many people in service of an overarching vision, to methodically plan and execute a project with so many different moving parts. It was also an important exercise in adapting to feedback from my mentors and my listeners, putting extra effort into the elements of my work that resonated most with my community.

ii.

Discovering harmonic structures in real time with a new musical instrument:

Filtered Point-Symmetry (FiPS)

Download a zip file with this patch (titled FiPS 2024) and its subpatches here, and read my introductory explainer here. Read my 2019 final paper for the seminar “The Math of Tonality” on relaxing the symmetry constraint of FiPS here.

What is this?

This is a Max/MSP/Jitter patch that simulates the ‘filtered point-symmetry’ (FiPS) theory of musical harmony. The theory, rooted in John Clough and Jack Douthett’s work on maximally even sets, proposes a function that generates pitch-class sets which divide the octave as evenly as possible. These sets, in standard tuning, coincide with familiar musical structures like the diatonic scales and chords. My patch includes an introductory explainer that can get you up to speed on how this system works, with links to relevant publications. I first encountered FiPS in 2019, in a seminar on math and tonality. I was taking a class on Max/MSP/Jitter at the same time, and I figured it could be fun to create a final project for the latter course that explores ideas from the former. I came up with an experimental MIDI controller that outputs maximally even filtered sets, with degrees of freedom relating to the cardinality, filters, and pitch context of those sets. The version you see is slightly modified from the original, with updated jit.gen graphics and a more navigable, customizable user interface modeled after familiar hardware layouts.
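To make the core idea concrete, here is a minimal Python sketch of the maximal-evenness formula from Clough and Douthett's work that FiPS builds on; the function name and demo values are my own shorthand rather than anything taken from the patch.

```python
# A minimal sketch of the Clough and Douthett "J-function": it spaces d pitch
# classes as evenly as possible within a c-note chromatic universe, offset by m.
# Function name and demo values are illustrative, not taken from the patch.
def maximally_even(c, d, m=0):
    """Return the maximally even d-note set in a c-note chromatic, offset by m."""
    return sorted({(k * c + m) // d for k in range(d)})

print(maximally_even(12, 7, m=6))   # [0, 2, 4, 6, 7, 9, 11] -> a diatonic collection
print(maximally_even(12, 5, m=0))   # [0, 2, 4, 7, 9]        -> a pentatonic collection
print(maximally_even(7, 3, m=0))    # [0, 2, 4]              -> a triad, as degrees of a 7-note scale
```

Each call spaces d pitch classes as evenly as possible within a c-note universe, which is the job each FiPS filter performs; stacking filters (12 through 7 through 3, say) is how the theory recovers chords inside scales inside the chromatic.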

What did I learn?

Technologically: This turned out to be a pretty complicated patch, and Max’s visual programming environment calls for a high standard of organization. I approached it as an exercise in abstraction and consistency. One fun problem I ran across was how to create a nested for-loop that updates in real time. Max doesn’t offer a conventional for-loop construct, so I ended up using two ticks on different timescales: the render tick, which fires on every frame of the simulation, and a metronome tick, which fires at regular intervals governed by Max’s built-in millisecond clock. I realized that playing with the render speed and metronome speed gave some interesting and unexpected results, so I decided to include control over these values as part of the user interface!
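As a rough analogue outside of Max, the sketch below shows the same two-clock nesting in Python; the tick intervals, callback names, and stopping condition are placeholders rather than values from the patch.

```python
# A rough Python analogue of the two-clock trick: a fast "render" tick drives
# the inner loop while a slower "metro" tick advances the outer loop.
# Intervals and callbacks are hypothetical placeholders.
import time

RENDER_MS, METRO_MS = 16, 250        # placeholder tick intervals (ms)
inner, outer = 0, 0
last_render = last_metro = time.monotonic()

def on_render_tick(i, o):
    pass                             # e.g. redraw the scene for state (i, o)

def on_metro_tick(o):
    pass                             # e.g. advance the harmonic configuration

while outer < 8:                     # stop after a few outer iterations for the demo
    now = time.monotonic()
    if (now - last_render) * 1000 >= RENDER_MS:
        inner += 1                   # inner loop: one step per render frame
        on_render_tick(inner, outer)
        last_render = now
    if (now - last_metro) * 1000 >= METRO_MS:
        inner = 0                    # outer loop: reset the inner counter and advance
        outer += 1
        on_metro_tick(outer)
        last_metro = now
    time.sleep(0.001)                # avoid pegging the CPU in this toy version
```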

Musically: A lot of interesting new music deals with curating the unfamiliar, with taking comfortable structures and stretching them in unexpected directions. In designing this simulation to also work as an instrument, I had to think about the composers and performers who might want to use it, the questions they might want to answer with it: What if a musical octave were defined by some arbitrary interval? What if our chromatic universe had an arbitrarily high number of pitch-classes to choose from, but our scales still had seven notes? How can rhythm and harmony serve as forces of constructive and destructive interference for one another? Conversely, I had to think a lot about what restrictions to impose in order to make the interface feel expressive.

Personally: I find the politics of FiPS interesting. Much of the literature centers on analyzing or generating Western music, on a special relationship between Western art music and values of symmetry and order. I wanted to create this simulation to showcase just how broad this generative space actually is, how much unfamiliar, dissonant, wacky music can come from the exact same pursuit of symmetry. In general, I think interactive experiences like this can do much to separate the claim ‘we can generate these structures with algorithm X’ from the claim ‘the values underlying algorithm X also underlie the structures it is able to generate’.

iii.

An MIR framework for analyzing music as a form of cultural evolution:

Latent evolutionary signatures

Read the published manuscript here, and peruse the code here.

What is this?

This is a research paper I co-first-authored with Drs. Jonathan Warrell and Leonidas Salichos at the Gerstein Laboratory. It was published this spring in the Journal of the Royal Society Interface. Broadly, the paper proposes an ‘evolutionary’ variational autoencoder (VAE) that approaches common music information retrieval (MIR) tasks like period and genre prediction from the assumption that, under certain conditions, musical structure is subject to evolutionary processes akin to mutation and selective pressure. This approach is inspired by the world of bioinformatics, in which similar evolutionary models have been successful in classifying different kinds of cancer. We find that our model outperforms similar models working on the same dataset (the McGill Billboard dataset of pop music from 1958 to 1991), and that our latent signatures align with documented trends in popular music history. We propose that a general class of evolutionary models can glean insight from many different kinds of cultural artifacts.
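For a sense of the kind of backbone the paper builds on, here is a generic sketch of a VAE with a classification head attached to its latent space, written in PyTorch with placeholder dimensions and losses; this is deliberately not the published evolutionary model, which adds structure to the latent space that this toy version lacks.

```python
# Generic sketch of a VAE whose latent space also feeds a classifier head.
# PyTorch, dimensions, and loss weights are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentSignatureVAE(nn.Module):
    def __init__(self, n_features=128, n_latent=16, n_classes=4):
        super().__init__()
        self.encoder = nn.Linear(n_features, 64)
        self.mu = nn.Linear(64, n_latent)
        self.logvar = nn.Linear(64, n_latent)
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))
        self.classifier = nn.Linear(n_latent, n_classes)   # e.g. period or genre labels

    def forward(self, x):
        h = F.relu(self.encoder(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.decoder(z), self.classifier(z), mu, logvar

def loss_fn(x, x_hat, logits, y, mu, logvar, beta=1.0):
    recon = F.mse_loss(x_hat, x)                                       # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())      # KL to the prior
    return recon + beta * kl + F.cross_entropy(logits, y)              # plus the prediction task
```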

What did I learn?

Technologically: I wrote nearly all the code for this project, meaning that I spent over a year studying Bayesian statistics, principles of machine learning, and the relevant Python and MATLAB libraries. It is hard to overstate just how much I read, learned, and practiced in order to bring this research to life, and although I have much more growing to do as a programmer and engineer, I’m incredibly proud of how my efforts turned out.

Musically: I also contributed much of the introduction and discussion text for this project, drawing on my prior studies at the Yale Music Department to compile a musicology reading list for my collaborators. It was an interesting and difficult task to draw from some very humanities-flavored disciplines in the service of a very straightforward, STEM-flavored research project. It led to interesting discussions on how we want to define music, how we relate our specific symbolic dataset to other cultural artifacts, and how we want to delineate the boundaries of the evolutionary analog between biological and musical structures.

Personally: This was my first major research project after college, and it cemented my passion for research. Both of my collaborators moved on from the lab before the paper was finished, and so I stepped into a leadership position in order to meet our revision and resubmission deadlines. It was a rewarding process to work in that dynamic: to ask questions and learn from people much more experienced than me, but also to find the confidence to contribute my own voice and expertise. By the time we published our final manuscript, I felt that I had matured significantly as both a scientist and an academic collaborator.