Generative Music: Phasing Music Connect-the-Dots

I recently downloaded a country music sample pack for another project, and I’ve wanted to use it to make something akin to Henry Flynt’s Hillbilly Tape Music. Building on a project from another class a few weeks back, I took a fiddle sample and played it through two separate players, with playback speed determined by mouse position while the mouse is pressed (it plays back at a normal rate otherwise). In the interest of creating a “proper piece of music” with a beginning, middle, and end (rather than just a weird interactive piece that plays until the user gets bored and closes the tab), I drew an array of dots to the canvas and instructed the user to connect them all before stopping the piece. This ensures that the user explores the entire interface and all the various combinations of playback speed, while still leaving room to play the piece their own way. Thus, the piece relies on both a generative mechanical system (the out-of-phase playback) and a generative social system (the indeterminate pattern of connecting the dots).
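The core of that mechanic fits in a few lines of p5.js. Here’s a minimal sketch of the idea (the file name and rate ranges are placeholders, not the values from my actual sketch):

```javascript
let playerA, playerB;

function preload() {
  // "fiddle.wav" stands in for the actual sample
  playerA = loadSound('fiddle.wav');
  playerB = loadSound('fiddle.wav');
}

function setup() {
  createCanvas(400, 400);
  playerA.loop();
  playerB.loop();
}

function draw() {
  if (mouseIsPressed) {
    // each axis bends one player's speed, so the two copies drift apart
    playerA.rate(map(mouseX, 0, width, 0.5, 2));
    playerB.rate(map(mouseY, 0, height, 0.5, 2));
  } else {
    // back to normal speed, but now out of phase
    playerA.rate(1);
    playerB.rate(1);
  }
}
```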

https://editor.p5js.org/mhorwich/full/BJoSPYUaX

The Code of Music 7: Midterm (Sound Over Time)

I can’t remember where I first read it, but the definition of music that I’ve found most useful in my practice is “the deliberate organization of sound over time.” That definition, which cuts to the core of what is both magical and universal about music, served as the loose inspiration for this project, which bears the working title “Sound Over Time.” 

In this piece, I used the second(), minute(), and hour() functions of p5.js and mapped each one to the pitch of an oscillator. Each oscillator plays through an array of 60 notes (a little over eight octaves of a C major scale), with its note changing every second, minute, and hour, respectively. The piece never repeats over the course of the day and starts over from the beginning every night at midnight.
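Here’s a minimal sketch of that mapping in p5.js (the scale construction is a reconstruction for illustration, not the exact array from my sketch):

```javascript
let notes = [];
let secOsc, minOsc, hourOsc;

function setup() {
  createCanvas(400, 400);
  // build 60 notes of C major, a little over eight octaves (starting pitch is illustrative)
  const major = [0, 2, 4, 5, 7, 9, 11]; // semitone steps of a major scale
  for (let i = 0; i < 60; i++) {
    notes.push(midiToFreq(24 + floor(i / 7) * 12 + major[i % 7]));
  }
  secOsc = new p5.Oscillator('sine');
  minOsc = new p5.Oscillator('sine');
  hourOsc = new p5.Oscillator('sine');
  [secOsc, minOsc, hourOsc].forEach((osc) => osc.start());
}

function draw() {
  // second() and minute() run 0-59, hour() runs 0-23; each indexes the note array
  secOsc.freq(notes[second()]);
  minOsc.freq(notes[minute()]);
  hourOsc.freq(notes[hour()]);
}
```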

In the interest of making the work interactive, and of allowing myself and the user to hear the whole piece without waiting a full day, I added a series of sliders in the top right corner that can be used to override the global clock time. 
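One minimal way to wire up an override like this, using a hypothetical sentinel value of -1 to mean “follow the real clock”:

```javascript
let secSlider;

function setup() {
  // createSlider(min, max, startValue, step)
  secSlider = createSlider(-1, 59, -1, 1);
}

function currentSecond() {
  // if the slider has been moved off -1, it overrides the clock
  return secSlider.value() >= 0 ? secSlider.value() : second();
}
```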

Play it here: https://editor.p5js.org/mhorwich/full/BJbLg3soQ

Steal the code: https://editor.p5js.org/mhorwich/sketches/BJbLg3soQ

The Code of Music 6: Sampling

This is a placeholder post. I’ve got a lot to say about Sampulator and Keezy, but I’ve been having trouble getting online today, so it’ll have to wait.


For my project on sampling, I wanted to use a recording of a collective improvisation that I took last month in Germany. I recorded myself and a group of four other people (most of whom I had never met before) playing music in a stairwell for over an hour. We had a guitar and a ukulele, but mostly everyone just clapped and sang. It’s hard to put into words what a powerful experience this was, and I wanted to zoom in on a few minutes toward the end, where I improvised a new arrangement of a song I wrote nearly ten years ago.

The irony is not lost on me that I was in Germany for a web audio conference and my biggest musical takeaway was an acoustic jam session. With that in mind, I wanted to take this document and filter it through the medium of web audio. When you push play, two recordings begin playing simultaneously — a Tone.Player and a Tone.GrainPlayer. When the mouse is pressed, the playback rate of each recording is mapped to the x- and y-axes, and when the mouse is released, each snaps back to regular time, but the recordings are (almost necessarily) out of phase.
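In rough Tone.js terms, the setup looks something like this (the file name is a placeholder, and the mappings are simplified):

```javascript
// "jam.mp3" stands in for the stairwell recording
const player = new Tone.Player('jam.mp3').toDestination();
const grainPlayer = new Tone.GrainPlayer('jam.mp3').toDestination();

function startBoth() {
  // called from the play button
  player.start();
  grainPlayer.start();
}

function mouseDragged() {
  // each axis bends one recording's speed
  player.playbackRate = map(mouseX, 0, width, 0.5, 2);
  grainPlayer.playbackRate = map(mouseY, 0, height, 0.5, 2);
}

function mouseReleased() {
  // snap back to regular time; by now the two copies are out of phase
  player.playbackRate = 1;
  grainPlayer.playbackRate = 1;
}
```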

https://editor.p5js.org/mhorwich/full/SkAumv9jm

Generative Music: Bias in Machine Learning

Coming into school several hours later than I had planned today, I ran into Sukanya, who mentioned writing her homework blog post about a (terrifying) company called Faception. I remembered reading about them last year in Allison Parrish’s Electronic Rituals, Oracles and Fortune Telling class, in an article called “Physiognomy’s New Clothes,” so I returned to that syllabus and found an article I had never gotten around to reading: “How Algorithms Rule Our Working Lives.”

The article, adapted from a book called Weapons of Math Destruction, focuses primarily on a company called Kronos (why do all these companies have such ominous names?) that develops systems to assist in the hiring process for large companies like chain stores and restaurants. Part of that screening is a legally dubious “personality quiz” of the kind psychiatrists use to diagnose and treat personality disorders. The quiz flags applicants according to their answers, effectively weeding out any potential hires who have ever shown signs of mental illness.

[note: going back to add more to this; just wanted to hit “save” so I have something to post on the homework wiki, even if it’s incomplete]

The Code of Music 4: Synthesis

This sketch is still relatively bare-bones, but it incorporates an idea that I think has potential. The basic premise is two synths, an AM synth and an FM synth, which play up and down a scale along the x- and y-axes, respectively. The mic input is then mapped to a Chebyshev distortion, which warps the waveform whenever the mic picks up a signal.
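Roughly, the signal routing looks like this (the scale, distortion order, and mic mapping here are illustrative stand-ins, not my exact values):

```javascript
const cheby = new Tone.Chebyshev(30).toDestination();
const amSynth = new Tone.AMSynth().connect(cheby);
const fmSynth = new Tone.FMSynth().connect(cheby);
const mic = new p5.AudioIn();
const scale = ['C3', 'D3', 'E3', 'G3', 'A3', 'C4', 'D4', 'E4'];

function setup() {
  createCanvas(400, 400);
  mic.start();
}

function draw() {
  // louder mic input mixes in more of the distorted signal
  cheby.wet.value = constrain(mic.getLevel() * 5, 0, 1);
}

function mousePressed() {
  // x picks the AM synth's note, y picks the FM synth's
  const xi = floor(constrain(map(mouseX, 0, width, 0, scale.length), 0, scale.length - 1));
  const yi = floor(constrain(map(mouseY, 0, height, 0, scale.length), 0, scale.length - 1));
  amSynth.triggerAttackRelease(scale[xi], '8n');
  fmSynth.triggerAttackRelease(scale[yi], '8n');
}
```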

sketch: https://editor.p5js.org/full/rkk0BwF5Q

code: https://editor.p5js.org/mhorwich/sketches/rkk0BwF5Q

Design Pitch: Music in Social VR

The short version: 

My proposal is a two-participant virtual reality experience where users make music together by creating, altering, and moving sonic objects in virtual three-dimensional space. 

The longer version: 

In this proposed VR experience, two users enter an abstract landscape, embodying humanoid but amorphous avatars. Their voices are audible from the headset microphones, but processed through enough reverb to make words more or less indistinguishable. They are able to instantiate and otherwise interact with an evolving array of small game objects, each of which is mapped to a sound that either loops for the lifecycle of that object or triggers when that object interacts with a collider. Objects are destroyed gradually over time and through various “gameplay” interactions. If the maximum number of objects is already in play, the oldest object is destroyed when a new one is instantiated. The total number of objects that can exist at once changes over the course of the experience, with just a few in the beginning and gradually more over time until they slowly fade out at the end; the whole experience lasts around four minutes.
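The object cap itself is simple. Here’s an engine-agnostic sketch of the logic, with hypothetical method names:

```javascript
// fadeOutAndDestroy() is a hypothetical cleanup method
let sonicObjects = [];
let maxObjects = 3; // grows over the course of the piece, shrinks at the end

function spawn(newObject) {
  if (sonicObjects.length >= maxObjects) {
    const oldest = sonicObjects.shift(); // the oldest object makes room
    oldest.fadeOutAndDestroy();
  }
  sonicObjects.push(newObject);
}
```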

The background/inspiration/etc.

This project draws inspiration primarily from three sources: 

  1. Collaborative electronic musical interfaces like Reactable and Orbit, which encourage musical exploration in a way that’s playful and inviting without sacrificing depth
  2. Existing VR musical interfaces like The Music Room and Soundscape VR, which, in my opinion, fall into the unsurprising but slightly disappointing trap of mimicking real-life spaces (for the former) and traditional real-world layouts (for the latter)
  3. Abstract, non-objective-based games like Panoramical, where the “goal” of the game is simply to enjoy the beauty of the sounds and colors you’re creating

The goal is to blend these influences into a discrete, song-length experience, brief but infinitely repeatable with different results. All figures — things like basic 3D geometries and various free Unity assets with simple animations — would be sufficiently abstract that the experience can transport any user, with no dependence on prior knowledge or context. Any two people, regardless of background, can come together for a brief moment to create something beautiful and ephemeral that only they will experience.

Generative Music: Drunken Mess

drunkenMess is an experiment with creating an instrument that plays itself. I designed the original Max patch as the sound source for a NIME project, but I wanted to hear what it would sound like to play before mapping it to the physical controller, so I mapped the various parameters (mostly pitch and filter) to a series of randomly triggered drunk walks. 
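The Max patch doesn’t translate directly to text, but the drunk walk at its heart is simple. Here’s the gist in JavaScript, using p5.js helpers and made-up parameter ranges:

```javascript
// a drunk walk: nudge a value by a small random step, clamped to a range
let cutoff = 1000; // e.g. a filter cutoff in Hz

function drunkStep(value, stepSize, lo, hi) {
  return constrain(value + random(-stepSize, stepSize), lo, hi);
}

function walk() {
  cutoff = drunkStep(cutoff, 50, 100, 5000);
  setTimeout(walk, random(50, 500)); // retrigger at a random interval
}

function setup() {
  walk(); // kick off the walk once p5 is ready
}
```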

Here are the results: 

The Code of Music 3: Melody

When creating a web audio interface, my primary goal, above clarity or depth of control, is immersiveness. Can the user get lost in the experience, even if they don’t entirely understand how it works? 

I love composing with phrases of different lengths because it provides the most variation with the fewest notes. Just a few short loops of different lengths, repeating out of phase, can play for hours without exactly repeating. It’s just a matter of finding a few samples that sound nice out of phase and adding some simple adjustable parameters to make the experience interactive.

In this piece, three different synthesizers cycle through different parts of a six-octave pentatonic scale, each with its own pattern: one alternates up through the lowest octave, another alternates down through the next two octaves, and a third takes a random walk through the top three. The playback rate of each pattern is then mapped to a slider, so each instrument can be adjusted from 1/4x to 16x its original speed. The result is an endless cascade of strange inverted chords and melodies in shifting, interlocking rhythms.
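One layer of that setup looks roughly like this in Tone.js (the notes and interval are illustrative):

```javascript
const synth = new Tone.Synth().toDestination();
const lowOctave = ['C2', 'D2', 'E2', 'G2', 'A2']; // one slice of the scale

// 'alternateUp' walks up the array; other layers might use 'alternateDown' or 'randomWalk'
const pattern = new Tone.Pattern((time, note) => {
  synth.triggerAttackRelease(note, '8n', time);
}, lowOctave, 'alternateUp');

pattern.interval = '8n';
pattern.start(0);
Tone.Transport.start();

// a slider then scales this layer's speed anywhere from 0.25x to 16x
function setSpeed(multiplier) {
  pattern.playbackRate = multiplier;
}
```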

https://vimeo.com/292779658

Syncing animations with the music in a satisfying way was a complicated process, so again I went with atmosphere over accuracy. The flashing colors and pulsing circles are mapped to the amplitude and phrase position of the various synth patterns; although, as with the sounds, it all blends together into a dreamy wash. 

sketch: https://editor.p5js.org/full/HJkp693t7

source code: https://editor.p5js.org/mhorwich/sketches/HJkp693t7

The Code of Music 2: Rhythm

Our lesson on time signatures reminded me of a song my guitar teacher showed me when I was a teenager. 

I think it’s in 13/12? Or 11/12? Most of it anyway — the entire groove changes so often it’s hard to tell what’s going on. Somehow they seem to make time stop for an instant at the end of every phrase without ever losing the forward momentum. Almost twenty years since I first heard it and it still blows my mind. 


I’m always drawn to web audio interfaces that shy away from emulating traditional instruments and embrace the affordances of their programming environment. Web apps like Groove Pizza and the various Chrome Music Lab experiments pair playful JavaScript animations with a simple, inviting onboarding process for the user, who may or may not have any idea what a step sequencer is or how it works. Google’s Infinite Drum Machine further embraces the instrument’s home on the internet by including an unimaginably vast library of samples (along with gorgeous data visualization and a helpful tagging system).

Still, these all look and feel like the millennial grandchildren of the 808. There’s always a step sequencer somewhere along the bottom of the page. Step sequencers are great, but there are other under-explored ways to represent the organization of sound over time.

Type Drummer offers one interesting solution. Each key is mapped to a different drum sample, which is pushed to a loop once typed, and the user is prompted simply to “type something!” Music starts immediately and maintains a steady groove regardless of the text entered. There are no other controls, and no legend explaining which keys make which sounds, so the user is invited to explore the one interaction — an interaction that any internet user, regardless of musical ability, would understand innately — in as many ways as possible.
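As far as I can tell from the outside, the mechanic boils down to something like this (my own Tone.js reconstruction, not Type Drummer’s actual code):

```javascript
// sample file names are placeholders; the real app has a sound for every key
const samples = {
  a: new Tone.Player('kick.wav').toDestination(),
  s: new Tone.Player('snare.wav').toDestination(),
};
const typed = [];
let step = 0;

document.addEventListener('keydown', (e) => {
  if (samples[e.key]) typed.push(e.key); // every valid keypress joins the loop
});

// the loop keeps a steady groove no matter what's been typed
new Tone.Loop((time) => {
  if (typed.length === 0) return;
  samples[typed[step % typed.length]].start(time);
  step++;
}, '8n').start(0);

Tone.Transport.start();
```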


Gonna update with explanation soon, but here’s the sketch I made as a study in rhythm: https://editor.p5js.org/full/Sy6B7P2Ym

And here’s the source code:  https://editor.p5js.org/mhorwich/sketches/Sy6B7P2Ym