Music To Quiet The Mind

I’ve been struggling all week to find a way to talk to my proverbial elephant. Mostly, I’ve been struggling to find the time — I’m spread too thin for the first half of this semester, and I can’t get my head around the idea of taking ten minutes out of my day to meditate or journal or perform any other basic self-care ritual to recalibrate my brain. Partly, I know that I’d mostly be recalibrating to optimize my work habits, and I feel vaguely resentful of that. 

I clicked through every link under “Techniques for Talking to the Rest of You/Us” looking for an answer, but found myself getting increasingly cynical about the whole process. Nestled at the bottom of a list of “12 online meditation tools to help you be more present, more mindful, and more effective” on Huffington Post, I found my answer. 

If you’d rather listen to music than sit in silence, try Eckhart Tolle’s “Music To Quiet The Mind,” available on Spotify. The album is a compilation of Eckhart Tolle’s favorite songs to inspire serenity and stillness. Listen to the relaxing songs when you need to calm down at work, or press play when you get home at the end of the day and want to simply experience the “power of now.”

Hey, I would rather listen to music than sit in silence! That’s kinda my whole thing! 
 
I’m listening to Tolle’s compilation as I write this. It’s pretty good! Sort of a sampler plate of what could broadly be defined as “new age music” — the kind of thing I delivered literally by the hundreds in my time as Compilations Manager at the Orchard, but never thought much of at the time. The music was cheap to produce, which meant it was abundant enough for us to acquire the rights cheaply and repackage endlessly. The same twenty songs could be sold as music for meditation, relaxation, massage, yoga, studying, sleep, or childbirth, often without even re-sequencing the track list. 
 
Since beginning my thesis research, I’ve come (somewhat unexpectedly) to take new age music more seriously, enthralled by exactly the characteristics that once led me to dismiss it. I appreciate the utilitarian functionality of it — that the same music works perfectly well for any occasion, as long as the listener isn’t really meant to focus on the music. Moreover, I appreciate that more or less anyone who can tap on a synthesizer can make the music for themselves.  
 
The effects that playing music has on the mind have been the central focus of my thesis — particularly, the effects of playing music with others. This has led me to the works of music therapist Kenneth Bruscia, who designed systems of clinical musical improvisation for developing creativity, decisiveness, and interpersonal skills. It has also led me back to the work of musician-turned-neuroscientist Daniel Levitin, who theorizes that music evolved in tandem with human consciousness, buttressing the cognitive developments that essentially made society possible. 
 
My project involves connecting people over the internet to play music together online. As the internet grows increasingly fractious and argumentative, I think there’s something particularly powerful about using it as a site to actively undermine the discord it has bred — to encourage collaboration rather than competition. 

Facebook Corpus

Having previously downloaded a JSON of every post I’ve ever made on Facebook, I spent this week trying to parse it into something interesting and/or coherent. 

It took me a while to figure out how the JSON is formatted, and how to access the parts of it that I needed (there’s a lot of metadata that was beyond the scope of my interest at this point), but I managed to extract all the text from my posts into a .txt document, which I was then able to use as a corpus to explore various language models. 
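The extraction boils down to something like this minimal sketch. The filename and the data/post keys are assumptions based on the layout of my export; Facebook changes the archive format periodically, so check them against your own download.

    import json

    # Load the archive's posts file (filename and keys may differ in your export)
    with open("your_posts_1.json", encoding="utf-8") as f:
        posts = json.load(f)

    lines = []
    for post in posts:
        # When a post has text, it's nested under data -> post
        for item in post.get("data", []):
            text = item.get("post")
            if text:
                lines.append(text)

    with open("corpus.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(lines))

    print(f"Extracted {len(lines)} text posts")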

I started with a Word2Vec analysis, but couldn’t figure out how to turn those data points into anything particularly compelling. Then I tried running some experiments with RiTa.js, which were perhaps not the most scientific approach I could have taken, but the results they yielded were much more fun. 
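For reference, a minimal Word2Vec pass over a corpus like this looks roughly as follows. This is a sketch using gensim (my assumption of tooling, not necessarily the exact code or parameters I used; gensim 4.x renamed the old size argument to vector_size).

    from gensim.models import Word2Vec

    # Very rough tokenization: one "sentence" per line, lowercased and split on spaces
    with open("corpus.txt", encoding="utf-8") as f:
        sentences = [line.lower().split() for line in f if line.strip()]

    # Train a small Word2Vec model on the Facebook corpus
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=2, epochs=20)

    # Poke at the vector space: which words live near "show"?
    print(model.wv.most_similar("show", topn=10))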

First, I ran a fourth-order Markov chain on the .txt file, generating ten sentences at a time. It does an impressive job of capturing my tone, which seems to be a blend of ill-advised political rants, music industry shit-posting, and begging people to come to my shows. 
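I ran that experiment with RiTa.js, but the same idea translates readily to Python. Here is a rough stand-in using the markovify library (a sketch, not the RiTa code itself; on a small corpus a fourth-order model will often fail to find a sentence, hence the None check).

    import markovify

    with open("corpus.txt", encoding="utf-8") as f:
        text = f.read()

    # state_size=4 gives four words of context, akin to a fourth-order chain
    model = markovify.Text(text, state_size=4)

    # Generate ten sentences; make_sentence returns None when it can't find one
    for _ in range(10):
        sentence = model.make_sentence(tries=100)
        if sentence:
            print(sentence)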

 

Following that experiment, I ran a Key Word In Context example on some of my more commonly used words (as previously mentioned, I swear too much). 
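Key word in context is simple enough to hand-roll if you want to play along at home. A minimal sketch (the keyword here is just a placeholder; swap in whichever word you overuse):

    def kwic(text, keyword, window=5):
        # Print each occurrence of keyword with a few words of context on either side
        words = text.split()
        key = keyword.lower()
        for i, w in enumerate(words):
            if w.lower().strip(".,!?\"'") == key:
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                print(f"{left:>40} [{w}] {right}")

    with open("corpus.txt", encoding="utf-8") as f:
        kwic(f.read(), "show")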

 

I’m looking forward to diving deeper into this with a slightly more rigorous approach; but for now I have a fun examination of how I present myself to my immediate circle on the internet. 

Multi-Tasking

This was one of those weeks where every piece of technology I tried to use failed on me for one reason or another. I had been planning to collaborate with Jesse on this project, and we cycled through two different EEG readers, muscle sensors, and several JavaScript libraries (PoseNet, WebGazer, etc.), all to no avail. By the time we got the Pupil Labs eye-tracker working, we had decided to each go our own direction with it. 

By then it was Sunday night, so I figured that if I could do something else productive while tracking my pupils, I’d be making the most of my time. With that in mind, I put on the eye-tracking glasses and pulled up last week’s reading assignment.

One thing that surprised me was how small my pupils were while I read — I had assumed they would be more dilated, given how focused I was on the task at hand (I barely look away from the screen for a second during the ten-minute recording); but I guess staring directly into a screen that’s mostly white light nullified any dilating effect my concentration might otherwise have had. 

The other noteworthy part of my experiment was when a friend of mine walked into the room where I was reading — I only look up for less than a second, but my eye movement suddenly gets much more rapid and strained, like I’m working harder to concentrate on the screen rather than on the person standing behind it. 

I was hoping to turn this eye-tracking video into some kind of video art — there’s something hypnotic about watching my pupil flutter back and forth as I read — but for now I’ll settle for observing the behavior without aestheticizing it. 

Rubric

Full disclosure: I cringe every time I use the words “my work” to refer to music/art/whatever that I’ve made. That said, I’ve now been making music/art/whatever for long enough to have started noticing through lines in my work (*cringe*), and I think I’m slowly developing a sense of how to articulate them. 

My primary medium is sound, and within that medium I’m drawn to synthesizing disparate stylistic elements into new forms. Those forms vary widely, but there are a few points that I most try to emphasize: 

  1. It should be emotionally resonant. I’m tempted to use the word “beautiful,” but I often incorporate elements that are harsh, lo-fi, chintzy, or otherwise “ugly” in one sense or another.
  2. It should make effective use of the materials. Whatever hardware, software, sounds, or environment I use, they should be in service of each other and the work in general in the clearest and most direct way possible. 
  3. There should be room for interpretation beyond my own intention. 

 

Half_a_Lifetime.JSON

In the summer of 2004, a few weeks before I left for my freshman year of college, my friend Sam showed me a new site he had just discovered called The Facebook. Sam was headed off to the University of Chicago, which had just been added to the small-but-growing list of schools whose students could register profiles on the website. He showed me his profile and the profiles of a few of his classmates, and I didn’t think much of it; but a few weeks later, when my school was added to the list, I created a profile of my own.

That was nearly fifteen years ago. I’m about to turn 33, which means I’ve been on Facebook for just a little under half of my life. 

When I say I’ve been on Facebook for about a decade and a half, I mean ON Facebook. As features piled on, it got harder and harder to turn away. Over time it became my primary method for interacting with others online. Facebook knows a LOT about me (or, at the very least, a lot about the version of Me that I perform on Facebook). Still, for all the time I’ve spent on there, I had never really taken the time to find out what they actually know about me, never even so much as clicking the tab labelled “Your Facebook Information,” until this weekend. 

The entire corpus is too much information to even comprehend — forget about knowing what to do with it — so I started with just my posts. While I waited for the JSON file of every Facebook post I had ever made to download, I clicked through them online, going all the way back to September 2006. I was amused by how little my “Facebook voice” has changed over the years — I didn’t need to parse through vast quantities of data to recognize that I swore a lot and used too many subjunctive clauses and mostly typed in all lowercase letters, just as I still do in 2019. 

Once I got the JSON file, all 1.6 million lines of it, I got stuck. There are 2,429 posts with text, which was what I had planned on examining, but over 118,000 posts total. Some of them were posts I wasn’t even aware I was making — like Spotify quietly announcing every song I played for seven years to some hidden part of my feed. There’s so much I want to explore in this data, but honestly I don’t know my way around a JSON file well enough to get at it all effectively. I’m hoping to spend more time parsing through it before class tomorrow, but for now I’m excited (and a little unnerved) that the data is there to explore. 
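For what it’s worth, even the basic counts can be pulled out of the file with a few lines, along the lines of this sketch (the filename and the data/post nesting are assumptions based on my particular export):

    import json

    with open("your_posts_1.json", encoding="utf-8") as f:
        posts = json.load(f)

    # A post "has text" if any of its data items carries a post field
    with_text = sum(
        1 for post in posts
        if any(item.get("post") for item in post.get("data", []))
    )

    print(f"{len(posts)} posts total, {with_text} with text")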

Exposing the Rest of You (or at least Me)

“How is it possible to get angry at yourself: who, exactly, is mad at whom?”

-Incognito

I spend at least part of every day angry at myself. I don’t know how universal this is, but I’m sure I’m not the only one. The reason for that anger varies day to day, but it’s usually related to succumbing to some behavioral habit that illuminates the gap between who I am and who I want to be. “Why am I like this?” I wonder in these moments of self-directed anger; but I wonder even more standing outside these moments and reflecting on them. Who is this idiot living in my head who ordered another drink and slept too late and half-assed an important assignment? And who is this asshole living in my head who won’t give the poor dummy a break? Why do both of them seem to want me dead or in prison, and why do they both sound so much like me?

There’s no horror quite like being unable to trust your own mind; and yet, even at its healthiest, the human mind is so inherently untrustworthy, and mental health is so precarious. Are depression, anxiety and addiction spandrels that ended up in our system as byproducts of other survival mechanisms? Are these simply bugs that need to be worked out or overcome? Or is it conceivable that they could be features, providing some un-obvious benefits beyond their obvious detriments?

I can’t remember who told me this — it might have come up in this class last week; one of my software’s glitches is that I have amazing auditory recall but I’m terrible at remembering where I heard things — but someone recently told me about a woman who suffered from chronic, sometimes debilitating anxiety, until her life fell apart. After losing her mother and her job in short succession, she found herself oddly unfazed by the profound loss. She simply went about the work of putting her life back together — funeral arrangements, job applications and so on — methodically and with little emotional turmoil. After a lifetime of her brain telling her that everything around her was on fire, she had conditioned herself to turn it off, and was able to go about her business at a time when a more “mentally healthy” person would wallow in despair.

This isn’t to say that it’s a good idea to walk around constantly afraid for your life, but there are clearly times when the bug is what keeps the software running. The stereotype of the tortured artist, however dangerous or problematic, has its roots in reality, with depression affecting about half of musicians and painters and some whopping supermajority of poets. Glitchy software can be beautiful, but it’s usually not very stable. And when the bug starts to affect hardware performance, it should be addressed. 

Anyone can break out of unhealthy habits with the right resources, but putting in the actual work of getting better is another story. We go for what’s familiar, even if we know what’s familiar is hurting us. The effort it would take to retrain the body and mind to be healthier doesn’t seem worth the payoff.

In 2010, when I first moved to New York, I was lying in bed one night when a song came into my head, and I had the dumb idea that I think eventually led me to ITP. I heard the whole song perfectly, four-part harmonies and all, but I knew I’d never be able to play it all quite the way I heard it — at least not before it slipped from my mind, replaced by the mistakes I’d make trying to replicate it along the way. “Wouldn’t it be cool,” I thought, “if I could just play a whole digital audio workstation with my brain?”

The idea, essentially, was to remove the friction of the creative process completely, to simply think great works of art into existence. Some combination of sufficiently complicated machine learning and brainwave detection could probably, hypothetically, make it a reality at some point, possibly even within my lifetime. But for now, artists still need to work. And the more we use technology to streamline the process, the more artists need to learn about those technologies.

My work at ITP has centered around the idea of building tools that make musical expression as effortless and intuitive as possible. The logical conclusion of that work points to a musical interface that responds to involuntary and subconscious inputs — blinking, breathing, pulse, brain activity. One could create music simply as a byproduct of living as an organism. With immediate biometric feedback in the form of sound, the user becomes acutely aware of their mind’s and body’s activity; and, moreover, they become virtuosic at this instrument by gaining control of themselves, both physically and mentally.

Generative Music: Three Trios

Exploring Magenta as a tool for musical expression (and, more generally, the field of generative music as a whole), I’m intrigued by a few recurring concepts: 

  1. The concept of musical “wrongness” in generative compositions — notes and rhythms that sound out of place within the context of the piece, to what extent those are bugs or features, and to what extent they need to be sanded smooth to create a “successful” piece
  2. The line between human creativity and computer “creativity” (sorry for all the scare quotes) and how far an algorithm can take a piece of music before a human needs to intervene to turn it into something recognizable as music

My previous experiment with MelodyRNN showed me that, while the generated output doesn’t quite stand on its own, it makes for excellent source material for sampling. I’ve always liked working with improvisations from other (human) musicians as source material for sampling — it immediately takes the piece in a more interesting direction when the building blocks are not the size and shape I would have constructed myself. With that in mind, I turned to the 16-bar trio models in MusicVAE to compose my piece for me. 

Using the example MIDI files from the pre-trained dataset, I generated three different interpolations at three different temperatures (0.1, 0.5, 1.5). I imported the nine resultant MIDI files to Ableton and played them all at the same time — sort of a digital riff on Ornette Coleman’s Free Jazz double quartet — and mapped each to a different patch for melody, bass or drums. I then added a bit of randomization to the velocity to make it sound more expressive. 
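For anyone who wants to try this at home, the generation step in Magenta’s Python API looks roughly like the sketch below. It’s written from memory, so the checkpoint path and input MIDI filenames are placeholders, and the interpolate signature is worth checking against the MusicVAE documentation.

    import note_seq
    from magenta.models.music_vae import configs
    from magenta.models.music_vae.trained_model import TrainedModel

    # Load the pre-trained 16-bar trio model (melody + bass + drums)
    config = configs.CONFIG_MAP["hierdec-trio_16bar"]
    model = TrainedModel(config, batch_size=4,
                         checkpoint_dir_or_path="hierdec-trio_16bar.tar")

    # Two example trio sequences to interpolate between (placeholder filenames)
    start = note_seq.midi_file_to_note_sequence("trio_a.mid")
    end = note_seq.midi_file_to_note_sequence("trio_b.mid")

    # Three outputs per temperature; 256 steps covers 16 bars of 16th notes
    for temp in (0.1, 0.5, 1.5):
        seqs = model.interpolate(start, end, num_steps=3, length=256, temperature=temp)
        for i, seq in enumerate(seqs):
            note_seq.sequence_proto_to_midi_file(seq, f"trio_{temp}_{i}.mid")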

The results are strange as hell, but far from unlistenable:

The three melody tracks
The three bass tracks
The three drum tracks

Generative Music: EnyaBot

It’s been a long week, and I felt like I needed some new age music. Luckily, the Lakh MIDI Dataset has an absolutely mammoth collection of Enya MIDI files (75+). It seemed like a perfect little dataset to train on for my exploration of Magenta’s MelodyRNN. 

I tried transposing them all into the same key, but in the MIDI transcriptions most of the songs were written in C major with an uncountable number of accidental sharps and flats; so I decided to roll the dice and see what MelodyRNN could do. 
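In case it helps anyone attempting the same thing, the transposition idea amounts to something like this sketch (using music21, which is an assumption rather than whatever tool I actually reached for; the filenames are placeholders, and analyze('key') guesses a key from the pitch content instead of trusting those C major key signatures):

    from music21 import converter, interval, pitch

    def transpose_to_c(path, out_path):
        # Analyze the actual pitch content for a key, then shift the tonic to C
        score = converter.parse(path)
        key = score.analyze("key")
        shift = interval.Interval(key.tonic, pitch.Pitch("C"))
        score.transpose(shift).write("midi", fp=out_path)

    transpose_to_c("enya_track.mid", "enya_track_in_c.mid")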

I ran the network on 77 files (some of which were different transcriptions of the same song) for 4,000 steps, with a loss of ~0.6 and perplexity of ~1.8. I then ran the model to produce 10 outputs with 128 steps each. The results were off-kilter but musically satisfying. 

The two tracks below are two iterations of the resultant MIDI files. “enyaBot Piano” plays through each of the 10 files once on a GarageBand piano, while “enyaBot” loops each of the MIDI tracks one at a time on one of GarageBand’s ten “Classic” synthesizers. 

A screenshot of the arrangement for “enyaBot”

I was pleasantly surprised by the results. Even where the output sounds musically “wrong,” it sounds wrong in a consistent (and fairly interesting) way. I find something particularly compelling about the way the computer plays back these “wrong” notes — with no variation in velocity, every note is given the same weight, which makes it sound like it’s playing mistakes with 100% confidence. 

Octovox

The Octovox is a four-player live vocal processing unit. Each player stands on one side of a truncated square pyramid, and each side has five inputs controlling two voices — one tracks the pitch of the player’s voice and maps it to a synthesizer, while the other retunes their voice to an algorithmically determined note. On the right of each side, a slider controls the volume of the pitch-tracking synthesizer and a knob adjusts its cutoff filter; on the left, a slider adjusts the volume of the vocoder and a knob adjusts the delay level; and a button in the center cycles through the Markov chain that determines the pitch of the vocoder. 
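The Markov chain itself lives inside the Max patch, but the button-press logic is easy to sketch outside it. Here’s a hypothetical Python analogue (the transition weights and pitch set are made up for illustration; the patch’s actual table is different):

    import random

    # Transition table over MIDI pitches (illustrative weights, not the patch's)
    TRANSITIONS = {
        60: {62: 0.4, 64: 0.4, 67: 0.2},   # C4 -> D4 / E4 / G4
        62: {60: 0.3, 65: 0.5, 67: 0.2},
        64: {62: 0.5, 67: 0.5},
        65: {64: 0.6, 60: 0.4},
        67: {60: 0.5, 64: 0.3, 65: 0.2},
    }

    current = 60

    def on_button_press():
        # Advance the chain one step and return the new vocoder target pitch
        global current
        choices, weights = zip(*TRANSITIONS[current].items())
        current = random.choices(choices, weights=weights)[0]
        return current

    # Simulate eight button presses
    print([on_button_press() for _ in range(8)])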

Here’s a picture of the Max patch: 

Here’s a picture of the circuitry inside (which should probably come with some kind of trigger warning):   

There are plenty of live videos of this instrument in use from the NIME performance (probably the most thoroughly documented three minutes of my life), but I’d rather share this video of rehearsal / play-testing from the night before the show, as I feel it captures the exploratory nature of the instrument even better than the live performance: 

The purpose of the Octovox is to encourage exploration and improvisation using the voice as an instrument. It’s a continuation of the same line of thinking that has inspired almost all of my work this semester (and, honestly, almost all of my work at ITP). One of Mimi’s questions during our final critiques cut to the core of that intention beautifully — when she asked if the instrument was intended more for “singers” or “non-singers,” I realized my goal was to demonstrate that there’s fundamentally no difference.