Compositional prompts

One of the challenges in creating Theory for Producers (or any online learning experience) is to build community. When you’re in a classroom with people, community emerges naturally, but on the web it’s harder. We’re using email to remind students to stay engaged over time, but we don’t want to end up in their spam folders. To make our emails welcome rather than intrusive, we decided to do Weekly Challenges, one-line prompts for music creation. Participants post their responses in our SoundCloud group.

I’ve been doing something similar with guitar students for a long time, in person rather than via email, for example with the one-note groove. In coming up with more prompts, I’ve been drawing on my recent foray into prose scores, inspired by the example of Pauline Oliveros.

Pauline Oliveros

Really, you could think of my collection of prompts as very short and simple prose scores. Please feel free to use these, for yourself, for students, or for any other purpose. All I ask is that you drop me a line to tell me how you used them.

The Prompts

One Note Groove: Create a melody using only one pitch.

Two Note Groove: Create a melody using only two distinct pitches.

Three Note Groove: Create a melody using only three distinct pitches.

Four Note Groove: Create a melody using only four distinct pitches.

Arpeggio Groove: Create a melody using only a single column of the aQWERTYon.

Call And Response: Create a melody that includes a call phrase and response phrase.

Repeat Four Times: Create a melody consisting of a phrase that repeats identically four times.

Repeat Eight Times: Create a melody consisting of a phrase that repeats identically eight times.

Repeat Sixteen Times: Create a melody consisting of a phrase that repeats identically sixteen times.

Narrow Range: Create a melody that only uses the notes between C and E-flat.

Angular: Create a melody where no interval between one note and the next is smaller than a fifth.

Avoid The Root: Create a melody using any of the notes in a scale except the root.

Avoid The Triad: Create a melody using any of the notes in a scale except the root, third and fifth.

Dissonance: Create the “ugliest” melody you can.

Avoid The Tonic: Create a chord progression using any chords from a scale except for the tonic.

Fourths: Create a melody and/or chords using only the interval of a perfect fourth.

Universal Solvent: Create a blues scale melody over non-blues accompaniment.

Emotional Extremes I: Create the happiest melody you can.

Emotional Extremes II: Create the saddest melody you can.

Palindrome: Create a melody consisting of a sequence of notes, then that same sequence backwards.

Pattern Sequence: Create a melody by moving a “shape” to different locations on the aQWERTYon.

Minimalism: Create a melody that is mostly silence.

Maximalism: Create a melody containing no gaps or pauses.

Melodic Adaptation: Take an existing melody and adapt it into a new one by keeping the rhythms but changing the pitches.

Rhythmic Adaptation: Take an existing melody and adapt it into a new one by keeping the pitches but changing the rhythms.

Birdsong: Recreate a bird call as closely as you can.

Speech Melody: Recreate the pitches of a spoken phrase.

Musical simples – Teenage Dream

I’m working with Soundfly on the next installment of Theory For Producers, our ultra-futuristic online music theory course. The first unit covered the black keys of the piano and the pentatonic scales. The next one will talk about the white keys and the diatonic modes. We were gathering examples, and we needed to find a well-known pop song that uses Lydian mode. My usual go-to example for Lydian is “Possibly Maybe” by Björk. But the course already uses a Björk tune for a different example, and the Soundfly guys quite reasonably wanted something a little more millennial-friendly anyway. We decided to use Katy Perry’s “Teenage Dream” instead.

A couple of years ago, Slate ran an analysis of this tune by Owen Pallett. It’s an okay explanation, but it doesn’t delve too deep. We thought we could do better.

Here’s my transcription of the chorus:

When you look at the melody, this would seem to be a straightforward use of the B-flat major scale. However, the chord changes tell a different story. The tune doesn’t ever use a B-flat major chord. Instead, it oscillates back and forth between E-flat and F. In this harmonic context, the melody doesn’t belong to the plain vanilla B-flat major scale at all, but rather the dreamy and modernist E-flat Lydian mode. The graphic below shows the difference.

Teenage Dream Eb Lydian circles

Both scales use the same seven pitches: B-flat, C, D, E-flat, F, G, and A. The only difference between the two is which note you consider to be “home base.” Let’s consider B-flat major first.

To make chords from a scale, you pick any note, and then go clockwise around the scale, skipping every other degree. The chords are named for the note you start on. If you start on the fourth note, E-flat, you get the IV chord (the other two notes are G and B-flat). If you start on the fifth note, F, you get the V chord (the other two notes are A and C). In a major key, IV and V are very important chords. They’re called the subdominant and dominant chords, respectively, and they both create a feeling of suspense. You can resolve the suspense by following either one with the I chord. The weird thing about “Teenage Dream” is that if you think about it as being in B-flat, then it never lands on the I chord at all. It just oscillates back and forth between IV and V. The suspense never gets resolved.
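Here’s a minimal Python sketch of that chord-building procedure (the function and note spellings are mine, purely for illustration):

```python
# A minimal sketch of diatonic chord-building: start on any scale degree,
# then take every other degree, wrapping around the scale like a circle.
BFLAT_MAJOR = ["Bb", "C", "D", "Eb", "F", "G", "A"]

def triad(scale, degree):
    """Build a three-note chord starting on `degree` (1-indexed)."""
    i = degree - 1
    return [scale[(i + step) % len(scale)] for step in (0, 2, 4)]

print(triad(BFLAT_MAJOR, 4))  # IV chord: ['Eb', 'G', 'Bb']
print(triad(BFLAT_MAJOR, 5))  # V chord:  ['F', 'A', 'C']
print(triad(BFLAT_MAJOR, 1))  # I chord:  ['Bb', 'D', 'F'], never heard in the song
```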

If we think of “Teenage Dream” as being in E-flat Lydian, then the E-flat chord is I, which makes more sense. The function of the F chord in this context isn’t clearly defined by music theory, but it does sound good. Lydian is very similar to the major scale, with only one difference: while the fourth note of E-flat major is A-flat, the fourth note of E-flat Lydian is A natural. That raised fourth gives Lydian mode its otherworldly sound. The F chord gets its airborne quality from that raised fourth.
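If you want to verify the one-note difference yourself, here’s a small sketch (with note spellings simplified to a single flat-friendly set) that builds both scales from their whole- and half-step patterns:

```python
# A sketch contrasting Eb major with Eb Lydian, built from semitone steps.
# The only difference is the raised fourth degree: Ab becomes A natural.
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
MAJOR = [2, 2, 1, 2, 2, 2, 1]   # whole- and half-step pattern of major
LYDIAN = [2, 2, 2, 1, 2, 2, 1]  # same pattern with a raised fourth

def spell(root_pc, steps):
    pcs = [root_pc]
    for step in steps[:-1]:  # the last step just returns to the octave
        pcs.append((pcs[-1] + step) % 12)
    return [NOTE_NAMES[pc] for pc in pcs]

print(spell(3, MAJOR))   # ['Eb', 'F', 'G', 'Ab', 'Bb', 'C', 'D']
print(spell(3, LYDIAN))  # ['Eb', 'F', 'G', 'A', 'Bb', 'C', 'D']
```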

Click here to play over “Teenage Dream” using the aQWERTYon. The two chords can be played on the letters Z-A-Q and X-S-W. For comparison, try playing it with B-flat major. Read more about the aQWERTYon here.

“Teenage Dream” is not the only well-known song to use the Lydian I-II progression. Other high-profile examples include “Dreams” by Fleetwood Mac and “Jane Says” by Jane’s Addiction, both built on the same two chords. Try singing any of these songs over any of the others; they all fit seamlessly.

The chorus of “Teenage Dream” uses a striking rhythm on the phrases “you make me”, “teenage dream”, and “I can’t sleep”. The song is in 4/4 time, like nearly all contemporary pop tracks, but that chorus rhythm has a feeling of three about it. It’s no illusion. The words “you” and “make” in the first line are each three eighth notes long, dividing the measure’s eight eighth notes into groups of 3+3+2. This rhythm is called Tresillo, and it’s one of the building blocks of Afro-Cuban drumming.

tresillo

Tresillo is the front half of son clave. It’s extraordinarily common in American vernacular music, especially in accompaniment patterns. You hear Tresillo in the bassline to “Hound Dog” and countless other fifties rock songs; in the generic acoustic guitar strumming pattern used by singer-songwriters everywhere; and in the kick and snare pattern characteristic of reggaetón. Tresillo is ubiquitous in jazz, and in the dance music of India and the Middle East.
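Here’s a quick sketch that prints the Tresillo onsets against a grid of eight eighth notes, next to a straight quarter-note pulse for comparison:

```python
# A sketch of Tresillo as onsets in a grid of eight eighth notes: 3 + 3 + 2.
def grid(onsets, slots=8):
    return "".join("x" if i in onsets else "." for i in range(slots))

straight = {0, 2, 4, 6}  # plain quarter notes: every other eighth note
tresillo = {0, 3, 6}     # groups of 3, 3, and 2 eighth notes

print(grid(straight))  # x.x.x.x.
print(grid(tresillo))  # x..x..x.
```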

“Teenage Dream” alternates the Tresillo with a funky syncopated rhythm pattern that skips the first beat of the measure. When you listen to the line “feel like I’m livin’ a”, there’s a hole right before the word “feel”. That hole is the downbeat, which is the usual place to start a phrase. When you avoid the obvious beat, you surprise the listener, which grabs their attention. The drums underneath this melody hammer relentlessly away on the strong beats, so it’s easy to parse out the rhythmic sophistication. Katy Perry songs have a lot of empty calories, but they taste as good as they do for a reason.

Milo meets Beethoven

For his birthday, Milo got a book called Welcome to the Symphony by Carolyn Sloan. We finally got around to showing it to him recently, and now he’s totally obsessed.

Welcome To The Symphony by Carolyn Sloan

The book has buttons along the side which you can press to hear little audio samples. They include each orchestra instrument playing a short Beethoven riff. All of the string instruments play the same “bum-bum-bum-BUMMM” so you can compare the sounds easily. All the winds play a different little phrase, and the brass another. The book itself is fine and all, but the thing that really hooked Milo is triggering the riffs one after another, Ableton-style, and singing merrily along.

Milo got primed to enjoy this book by two coincidental things. One is that in his preschool, they’ve been listening to Peter and the Wolf a lot, dancing to it, acting it out, etc. They use a YouTube video that shows both the story and the instruments side by side, so Milo has very clear ideas of what the oboe, clarinet, and the rest all look and sound like. When he saw them in the orchestra book, he recognized them all immediately.

The other thing is this weird computer animated cartoon called Taratabong, which is about anthropomorphic musical instruments. Milo has been watching it on YouTube a bunch, to the point of wanting me to pretend to be different characters and “talk” to him (which is an entertaining challenge for me–how do you have a conversation as a snare drum?) So Milo also recognizes different instruments in the orchestra book as Taratabong characters.

Milo has now voluntarily watched a YouTube video of the entire first movement of Beethoven’s Fifth conducted by Leonard Bernstein, several times. That’s like nine minutes of classical music, which for a three-year-old is equivalent to nine hours. He sings along to all the riffs he recognizes, announces each instrument as he sees it, and tells me about how Leonard Bernstein is Grandfather from Peter and the Wolf. I want to emphasize that we haven’t pushed him into any of this. If you read this blog, you know that I’m an outspoken anti-fan of Beethoven. We just put this stuff under Milo’s nose, and if he hadn’t been interested, we wouldn’t have pushed it.

The classical music tribe expresses continual anguish about how hard it is to draw people into the music. Having inadvertently created a budding Beethoven lover, I have a few insights to offer. Milo got connected to the music through multiple media simultaneously, in multiple settings. He was exposed initially in the context of stories about animals and cartoon characters. That exposure happened in the context of acting and dancing, not passive sitting or being lectured to. And when he did start listening, it was via playback devices that he controls completely: YouTube Kids on the iPad, and the buttons on the book.

Of all these different music experiences, the Ableton-like sample triggering is the one that has most seized Milo’s enthusiasm. Sometimes he wants to read the book and play the sounds when the text indicates. Sometimes he wants to systematically listen through each sound, singing along and acting out the instruments. Sometimes he just jams out, playing the excerpts in different orders and in different rhythms. I suspect he’d be even happier if he could get the sounds to loop. He wants to sing along, but the little phrases are half over before he can even get oriented. If the phrases looped in a musical-sounding way, I bet he would dig in much deeper.

This is not Milo’s first experience triggering sample playback. Before he even turned two, we spent a lot of time playing around with an APC40.

APC40

Milo adores the lights and colors, and instantly grasped how the volume faders work. In general, though, the APC experience was too complicated for him. It was too easy to make it stop working, to lose the connection between button pushes and the music changing, and to generally get lost in the interface. (I have some of those same problems!) The orchestra book has the advantage of being vastly simpler and more predictable.

There’s a page in the book that shows Beethoven with quill pen, writing the music. (Milo is continually disappointed not to see Beethoven himself in any of the performance videos.) Interestingly, Milo has started using the phrase “writing music” as a synonym for “playing music”, either from an instrument or from iTunes. He seems not to know or care about the distinction between playing back pre-recorded music and creating new music. This conflation of writing and playing music was likely helped by the time Milo has spent with the aQWERTYon, an interface developed by the NYU MusEDLab for performing music on the computer keyboard.

aQWERTYon screencap

Milo isn’t extremely interested in the musical aspect of the aQWERTYon. He calls it “ABCs” and is mostly interested in using it to type his favorite letters. He also enjoys singing the alphabet song while playing semi-randomly along.

The MusEDLab’s work is motivated by the fact that computers make it enormously easier for total novices to participate actively in music. If Beethoven symphonies can be played with as toys, participated in as games, and connected to meaningful stories and activities, then it’s inevitable that kids are going to want to get involved. If I had experienced Beethoven as raw material for my own expression, I’d probably feel quite differently about him.

Ultralight Beam

The first song on Kanye West’s Life Of Pablo album, and my favorite so far, is the beautiful, gospel-saturated “Ultralight Beam.” See Kanye and company perform it live on SNL.

Ultralight Beam

The song uses only four chords, but they’re an interesting four: C minor, E-flat major, A-flat major, and G7. To find out why they sound so good together, let’s do a little music theory.

“Ultralight Beam” is in the key of C minor, and three of the four chords come from the C natural minor scale, shown below. Click the image to play the scale in the aQWERTYon (requires Chrome).

Ultralight Beam C natural minor

To make a chord, start on any scale degree, then skip two degrees clockwise, and then skip another two, and so on. To make C minor, you start on C, then jump to E-flat, and then to G. To make E-flat major, you start on E-flat, then jump to G, and then to B-flat. And to make A-flat major, you start on A-flat, then jump to C, and then to E-flat. Simple enough so far.

The C natural minor scale shares its seven notes with the E-flat major scale:

Ultralight Beam Eb major circles

All we’ve really done here is rotate the circle three slots counterclockwise. All the relationships stay the same, and you can form the same chords in the same way. The two scales are so closely related that if you noodle around on C natural minor long enough, it just starts sounding like E-flat major. Try it!

The last of the four chords in “Ultralight Beam” is G7, and to make it, we need a note that isn’t in C natural minor (or E-flat major): the leading tone, B natural. If you take C natural minor and replace B-flat with B natural, you get a new scale: C harmonic minor.

Ultralight Beam C harmonic minor

If you make a chord starting on G from C natural minor, you get G minor (G, B-flat, D). The chord sounds fine, and you could use it with the other three above without offending anyone. But if you make the same chord using C harmonic minor, you get G major (G, B, D). This is a much more dramatic and exciting sound. If you stack one more third on top, you get G7 (G, B, D, F), known as the dominant chord in C minor. In the diagram below, the G7 chord is in blue, and C minor is in green.

Ultralight Beam C harmonic minor with V7 chord
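Here’s the same substitution as a Python sketch (note spellings simplified): build the chord on G from each version of the scale, and watch B-flat become B natural:

```python
# A sketch of the chord on G built from C natural minor vs. C harmonic minor.
C_NATURAL_MINOR = ["C", "D", "Eb", "F", "G", "Ab", "Bb"]
C_HARMONIC_MINOR = ["C", "D", "Eb", "F", "G", "Ab", "B"]  # Bb raised to B

def chord(scale, degree, size=3):
    """Stack thirds: take every other scale degree, wrapping around."""
    i = degree - 1
    return [scale[(i + 2 * n) % 7] for n in range(size)]

print(chord(C_NATURAL_MINOR, 5))      # ['G', 'Bb', 'D'] -> G minor
print(chord(C_HARMONIC_MINOR, 5))     # ['G', 'B', 'D']  -> G major
print(chord(C_HARMONIC_MINOR, 5, 4))  # ['G', 'B', 'D', 'F'] -> G7
```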

Can you hear how much more intensely that B natural pulls to C than B-flat did? That’s what gives the song its drama, and what puts it unambiguously in C minor rather than E-flat major.

“Ultralight Beam” has a nice chord progression, but that isn’t its most distinctive feature. The thing that jumps out most immediately is the unusual beat. Nearly all hip-hop is in 4/4 time, where each measure is subdivided into four beats, and each of those four beats is subdivided into four sixteenth notes. “Ultralight Beam” uses 12/8 time, which was prevalent in the first half of the twentieth century but is a rarity now. Each measure still has four beats in it, but each beat is subdivided into three eighth notes rather than four sixteenths.

four-four vs twelve-eight
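As a toy illustration (nothing from the actual track), here’s the same comparison in code: four beats per measure either way, but each beat splits into three parts instead of four:

```python
# A toy comparison of 4/4 and 12/8: four beats per bar in both meters,
# but each beat subdivides into four sixteenths vs. three eighth notes.
def bar(beats=4, subdivisions=4):
    return " ".join("1" + "." * (subdivisions - 1) for _ in range(beats))

print("4/4: ", bar(4, 4))  # 1... 1... 1... 1...
print("12/8:", bar(4, 3))  # 1.. 1.. 1.. 1..
```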

The track states this rhythm very obliquely. The drum track consists almost entirely of silence. The vocals and other instruments skip lightly around the beat. Chance The Rapper’s verse in particular pulls against the meter in all kinds of complex ways.

The song’s structure is unusual too, a wide departure from the standard “verse-hook-verse-hook”.

Ultralight Beam song structure

The intro is six bars long: two bars of ambient voices, then four bars over the chord progression. The song proper begins with just the first half of the chorus (known in hip-hop circles as the hook). Kanye has an eight-bar verse, followed by the first full chorus. Kelly Price gets the next eight-bar verse. So far, so typical. But then, where you expect the next chorus, The-Dream gets his four-bar verse, followed by Chance The Rapper’s ecstatic sixteen-bar verse. Next is what feels like the last chorus, but that’s followed by Kirk Franklin’s four-bar verse, and then a four-bar outro with just the choir singing haunting single words. It’s strange, but it works. Say what you want about Kanye as a public figure, but as a musician, he is in complete control of his craft.

Inside the aQWERTYon

The MusEDLab and Soundfly just launched Theory For Producers, an interactive music theory course. The centerpiece of the interactive component is a MusEDLab tool called the aQWERTYon. You can try it by clicking the image below.

aQWERTYon screencap

In this post, I’ll talk about why and how we developed the aQWERTYon.

One of our core design principles is to work within our users’ real-world technological limitations. We build tools in the browser so they’ll be platform-independent and accessible anywhere there’s internet access (and where there isn’t internet access, we’ve developed the “MusEDLab in a box.”) We want to find out what musical possibilities there are in a typical computer with no additional software or hardware. That question led us to investigate ways of turning the standard QWERTY keyboard into a beginner-friendly instrument. We were inspired in part by GarageBand’s Musical Typing feature.

GarageBand musical typing

If you don’t have a MIDI controller, Apple thoughtfully made it possible for you to use your computer keyboard to play GarageBand’s many software instruments. You get an octave and a half of piano, plus other useful controls: pitch bend, modulation, sustain, octave shifting and simple velocity control. Many DAWs offer something similar, but Apple’s system is the most sophisticated I’ve seen.

Handy though it is, Musical Typing has some problems as a user interface. The biggest one is the poor fit between the piano keyboard layout and the grid of computer keys. Typing the letter A plays the note C. The rest of that row is the white keys, and the one above it is the black keys. You can play the chromatic scale by alternating A row, Q row, A row, Q row. That basic pattern is easy enough to figure out. However, you quickly get into trouble, because there’s no black key between E and F. The QWERTY keyboard gives no visual reminder of that fact, so you just have to remember it. Unfortunately, the “missing” black key happens to be the letter R, which is GarageBand’s keyboard shortcut for recording. So what inevitably happens is that you’re hunting for E-flat or F-sharp and you accidentally start recording over whatever you were doing. I’ve been using the program for years and still do this routinely.
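To make the mismatch concrete, here’s a rough Python sketch of the Musical Typing layout (reconstructed from memory, so treat the details as approximate):

```python
# A rough sketch of GarageBand's Musical Typing layout (approximate).
# Bottom row = white keys, top row = black keys. Note the gap at R:
# there is no black key between E and F, so R maps to nothing. In
# GarageBand, it doubles as the record shortcut instead.
WHITE = {"A": "C", "S": "D", "D": "E", "F": "F", "G": "G", "H": "A", "J": "B", "K": "C"}
BLACK = {"W": "C#", "E": "D#", "T": "F#", "Y": "G#", "U": "A#"}  # no R!

def note_for_key(key):
    return WHITE.get(key) or BLACK.get(key)  # None for R and other gaps

print(note_for_key("E"))  # D#
print(note_for_key("R"))  # None: you wanted a black key, you got the record button
```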

Rather than recreating the piano keyboard on the computer, we drew on a different metaphor: the accordion.

The accordion: the user interface metaphor of the future!

We wanted to have chords and scales arranged in an easily discoverable way, like the way you can easily figure out the chord buttons on the accordion’s left hand. The QWERTY keyboard is really a staggered grid four keys tall and between ten and thirteen keys wide, plus assorted modifier and function keys. We decided to use the columns for chords and the rows for scales.

For the diatonic scales and modes, the layout is simple. The bottom row gives the notes in the scale starting on scale degree 1. The second row has the same scale shifted over to start on degree 3. The third row starts the scale on degree 5, and the top row starts on degree 1 an octave up. If this sounds confusing when you read it, try playing it; your ears will immediately pick up the pattern. Notes in the same column form the diatonic chords, with their Roman numerals conveniently matching the number keys. There are no wrong notes, so even just mashing keys at random will sound at least okay. Typing your name usually sounds pretty cool, and picking out melodies is a piece of cake. Playing diagonal columns, like Z-S-E-4, gives you chords voiced in fourths. The same layout approach works great for any seven-note scale: all of the diatonic modes, plus the modes of harmonic and melodic minor.
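Here’s a minimal sketch of that layout logic (a simplification for illustration, not the aQWERTYon’s actual code): each row is the same scale rotated to start on a different degree, and reading up a column spells a diatonic chord:

```python
# A sketch of the aQWERTYon's diatonic layout: four rows of the same scale,
# starting on scale degrees 1, 3, 5, and 1 an octave up (octaves omitted
# here, since this sketch only tracks pitch classes).
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def row(scale, start_degree, length=10):
    """One keyboard row: the scale beginning on `start_degree` (1-indexed)."""
    i = start_degree - 1
    return [scale[(i + n) % 7] for n in range(length)]

# Print from the top row (number keys) down to the bottom row (Z row).
for degree in (8, 5, 3, 1):
    print(" ".join(f"{note:>2}" for note in row(C_MAJOR, degree)))

# Reading the first column from the bottom row up gives C-E-G-C:
# the I chord, conveniently lined up under the number key 1.
```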

Pentatonics work pretty much the same way as seven-note scales, except that the columns stack in fourths rather than thirds. The octatonic and diminished scales lay out easily as well. The real layout challenge lay in one strange but crucial exception: the blues scale. Unlike other scales, you can’t just stagger the blues scale pitches in thirds to get meaningful chords. The melodic and harmonic components of blues are more or less unrelated to each other. Our original idea was to put the blues scale on the bottom row of keys, and then use the others to spell out satisfying chords on top. That made it extremely awkward to play melodies, however, since the keys don’t form an intelligible pattern of intervals. Our compromise was to create two different blues modes: one with the chords, for harmony exploration, and one just repeating the blues scale in octaves for melodic purposes. Maybe a better solution exists, but we haven’t figured it out yet.

When you select a different root, all the pitches in the chords and scales are automatically changed as well. Even if the aQWERTYon had no other features or interactivity, this would still make it an invaluable music theory tool. But root selection raises a bigger question: what do you do about all the real-world music that uses more than one scale or mode? Totally uniform modality is unusual, even in simple pop songs. You can access notes outside the currently selected scale by pressing the shift keys, which transpose the entire keyboard up or down a half step. But what would be really great is if the scale settings could change dynamically. Imagine listening to a jazz tune with the scale always set to match whatever chord is going by at that moment: you could blow over complex changes effortlessly. We’ve discussed manually placing markers in YouTube videos that tell the aQWERTYon when to change its settings, but that would be labor-intensive. We’re hoping to discover an algorithmic method for placing markers automatically.
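The manual version of that idea is easy to prototype. Here’s a hypothetical sketch (the marker format and the timestamps are invented for illustration) of time-stamped scale changes:

```python
# A hypothetical sketch of time-stamped scale markers for a video:
# at each timestamp (in seconds), the aQWERTYon would switch scales.
import bisect

MARKERS = [
    (0.0, ("C", "dorian")),
    (12.5, ("F", "mixolydian")),
    (25.0, ("Bb", "lydian")),
]

def scale_at(seconds):
    """Return the most recent marker at or before the given time."""
    times = [t for t, _ in MARKERS]
    i = bisect.bisect_right(times, seconds) - 1
    return MARKERS[max(i, 0)][1]

print(scale_at(15.0))  # ('F', 'mixolydian')
```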

The other big design challenge we face is how to present all the different scale choices in a way that doesn’t overwhelm our core audience of non-expert users. One solution would just be to limit the scale choices. We already do that in the Soundfly course, in effect; when you land on a lesson, the embedded aQWERTYon is preset to the appropriate scale and key, and the user doesn’t even see the menus. But we’d like people to be able to explore the rich sonic diversity of the various scales without confronting them with technical Greek terms like “Lydian dominant”. Right now, the scales are categorized as Major, Minor and Other, but those terms aren’t meaningful to beginners. We’ve been discussing how we could organize the scales by mood or feeling, maybe from “brightest” to “darkest.” But how do you assign a mood to a scale? Do we just do it arbitrarily ourselves? Crowdsource mood tags? Find some objective sorting method that maps onto most listeners’ subjective associations? Some combination of the above? It’s an active area of research for us.

This issue of categorizing scales by mood has relevance for the original use case we imagined for the aQWERTYon: teaching film scoring. The idea behind the integrated video window was that you would load a video clip, set a mode, and then improvise some music that fit the emotional vibe of that clip. The idea of playing along with YouTube videos of songs came later. One could teach more general open-ended composition with the aQWERTYon, and in fact our friend Matt McLean is doing exactly that. But we’re attracted to film scoring as a gateway because it’s a more narrowly defined problem. Instead of just “write some music”, the challenge is “write some music with a particular feeling to it that fits into a scene of a particular length.”

Would you like to help us test and improve the aQWERTYon, or to design curricula around it? Would you like to help fund our programmers and designers? Please get in touch.

Theory for Producers

I’m delighted to announce the launch of a new interactive online music course called Theory for Producers: The Black Keys. It’s a joint effort by Soundfly and the NYU MusEDLab, representing the culmination of several years’ worth of design and programming. We’re super proud of it.

Theory for Producers: The Black Keys

The course makes the abstractions of music theory concrete by presenting them in the form of actual songs you’re likely to already know. You can play and improvise along with the examples right in the web browser using the aQWERTYon, which turns your computer keyboard into an easily playable instrument. You can also bring the examples into programs like Ableton Live or Logic for further hands-on experimentation. We’ve spiced up the content with videos and animations, along with some entertaining digressions into the Stone Age and the auditory processing abilities of frogs.

So what does it mean that this is music theory for producers? We’re organizing the material in a way that’s easiest and most relevant to people using computers to create the dance music of the African diaspora: techno, hip-hop, and their various pop derivatives. This music carries most of its creative content outside of harmony: in rhythm, timbre, and repetitive structure. The harmony is usually static, sitting on a loop of a few chords or just a single mode. Alongside the standard (Western) major and minor scales, you’re just as likely to encounter more “exotic” (non-Western) sounds.

Music theory classes and textbooks typically begin with the C major scale, because it’s the easiest scale to represent and read in music notation. However, C major is not necessarily the most “basic” or fundamental scale for our intended audience. Instead, we start with E-flat minor pentatonic, otherwise known as the black keys on the piano. The piano metaphor is ubiquitous both in electronic music hardware and software, and pentatonics are even easier to play on piano than diatonic scales. E-flat minor pentatonic is more daunting in notated form than C major, but since dance and hip-hop producers tend not to be able to read music anyway, that’s no obstacle. And if producers want to use keys other than E-flat minor (or G-flat major), they can keep playing the black keys and then transpose the MIDI later.
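That last trick works because MIDI transposition is just adding a constant to every note number. A minimal sketch (the riff and the target key are made up for illustration):

```python
# A minimal sketch of transposing a black-key riff (Eb minor pentatonic)
# into another key by shifting every MIDI note number by a constant.
black_key_riff = [63, 66, 68, 70, 75]  # Eb, Gb, Ab, Bb, Eb: all black keys

def transpose(notes, semitones):
    return [note + semitones for note in notes]

# Play it on the black keys, ship it in C minor: shift down three semitones.
print(transpose(black_key_riff, -3))  # [60, 63, 65, 67, 72] = C, Eb, F, G, C
```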

The Black Keys is just the first installment in Theory For Producers. Next, we’ll do The White Keys, otherwise known as the modes of C major. We’re planning to start that course not with C major itself, but with G Mixolydian mode, because it’s a more familiar sound in Afrodiasporic music than straight major. After that, we’ll do a course about chords, and one about rhythm. We hope you sign up!

Update: oh hey, we’re on Lifehacker

Teaching reflections

Here’s what happened in my life as an educator this past semester, and what I have planned for the coming semester.

Montclair State University Intro To Music Technology

I wonder how much longer “music technology” is going to exist as a subject. They don’t teach “piano technology” or “violin technology.” It makes sense to teach specific areas like audio recording or synthesis or signal theory as separate classes. But “music technology” is such a broad term as to be meaningless. The unspoken assumption is that we’re teaching “musical practices involving a computer,” but even that is both too big and too small to structure a one-semester class around. On the one hand, every kind of music involves computers now. On the other hand, to focus just on the computer part is like teaching a word processing class that’s somehow separate from learning how to write.

MSU Intro to Music Tech

The newness and vagueness of the field of study gives me and my fellow music tech educators wide latitude to define our subject matter. I see my job as providing an introduction to pop production and songwriting. The tools we use for the job at Montclair are mostly GarageBand and Logic, but I don’t spend a lot of time on the mechanics of the software itself. Instead, I teach music: How do you express yourself creatively using sample libraries, or MIDI, or field recordings, or pre-existing songs? What kinds of rhythms, harmonies, timbres and structures make sense aesthetically when you’re assembling these materials in the DAW? Where do you get ideas? How do you listen to recorded music analytically? Why does Thriller sound so much better than any other album recorded in the eighties? We cover technical concepts as they arise in the natural course of producing and listening. My hope is that they’ll be more relevant and memorable that way.

Having now taught three semesters of Intro to Music Tech at MSU, my format is starting to gel. The students spend most of the semester creating tracks. They do one using only the loops that come with GarageBand, one using only MIDI and software instruments, one that includes a field recording they made with their phones, and so on. I started having them remix each other’s tracks this past semester, and it was such a smash hit that I’m going to have future classes do a whole series of peer remixes.

Montclair is a fairly traditional conservatory. For many students, my class is the only time in their college careers they get to make music according to their own sensibilities and tastes. It’s also usually the only time they engage critically with recordings, or electronic dance music, or hip-hop, or pop song forms, or sampling, or mixing and audio processing. I’m glad to be able to fill these vacuums, but I wish I had more than one semester to do it in.

Aside from creative music-making, the students do a couple of presentations, one on a song they think is interesting, and one on a topic of their choice. They also write blog posts about the process of creating their tracks. This last assignment is a persistent obstacle, since no one seems to share my enthusiasm for process documentation. Next semester I’m going to try introducing some of the cooperative/competitive spirit of the peer remixes by having them write reviews of each other’s tracks. Maybe that will get them to invest their writing with the same creativity they put into the music assignments.

Montclair State Advanced Computer Music Composition

This past fall I got to teach my first advanced class, and it went amazingly well. We used Ableton Live, my DAW of choice, and the guys (it was all guys) banged out tracks at a rapid clip for the entire semester. As with the intro class, I spent most of the time on the creative process, and dealt with Ableton functionality and audio engineering topics as they came up.

Tristan gets his FFT on

Each assignment came with some kind of tight technical restriction, but no stylistic restrictions. As with the intro class, the advanced dudes did tracks using only existing loops, only MIDI, and found sound. They did peer remixing and self remixing as well. The two hardest and most interesting assignments were to create a new track using only samples of an existing track, and then to create a new track using only a single five-second Duke Ellington sample. (These assignments were inspired heavily by the Disquiet Junto.) The more tightly I constrained the students, the more ingenuity they displayed. Listen for yourself:

As with the intro class, I tried to have the advanced dudes document their process with blog posts. As with the intro class, they showed zero interest. In the future, I’ll have to get more creative with the writing component. Also, I’d like to not have the class be entirely male.

NYU Music Education Technology Practicum

This class is meant to be a grounding in music tech for future music teachers. I’m even more time-constrained at NYU than at Montclair, and I teach in a regular classroom rather than a computer lab. While my class time at Montclair is mostly devoted to music-making, at NYU I’m forced to do more lectures, demos and listening sessions. It is very far from ideal. I have no idea how NYU can charge so much money without offering the music students such a basic-seeming amenity as a room with computers in it. However, NYU does have one advantage over Montclair as a teaching environment, which is that I can hold a couple of class sessions in an extremely fancy recording studio.

Catherine and Joseph in the Dolan Studio

I mostly take the same approach at NYU as I do at Montclair, and use most of the same assignments. The major difference is that the NYU kids do a critical listening project, where they pick a recording and graph out its musical structure and spatial layout. It’s a difficult exercise, but an invaluable one. I did it in grad school, and it improved my analytical listening abilities significantly. We used to do the same assignment at Montclair, but the students were really not into it, like to the point of refusing to do it, so sadly we had to drop it from the syllabus. I hope we can find a way to reinstate it.

This past semester, the majority of my NYU kids were music business majors, which was pretty great. They came in with less musical experience than the education majors–sometimes with none at all–but they had less to unlearn, and they threw themselves confidently into producing tracks. This coming semester I have a bunch more music business kids. I’m attracting them because my class is the only one at Steinhardt that does intro-level creative music making in the pop idiom. I’m clearly filling a vacuum, and I’m hoping that I’m just the thin edge of the wedge, both for my own sake and for the future music educators of NYU.

Interface designs

The NYU Music Experience Design Lab is baking education into a suite of creative music making and learning tools. As my friend and colleague Adam Bell likes to say, purchasers of a computer are purchasing a music education. We’re trying to make that education a better and more enjoyable one, whether our users are in formal classroom settings or playing around on their own. You can read about the lab’s various projects here. My own contributions are largely conceptual, though I’ve also devoted a lot of attention to making useful and inspiring presets.

Cold Sweat on the Groove Pizza

The Ed Sullivan Fellows Program

This winter, the MusEDLab is launching a brand new initiative, mentoring a group of young people from challenging circumstances in music and technology. I’ll be teaching the music side, doing a custom-tailored version of my intro class syllabus. Sullivan Fellows will also work with my colleagues in the lab on programming and design projects. This summer, we’ll have a showcase event as part of the 2016 IMPACT Conference. The goal is to help the Fellows get launched in careers in music and/or technology. I’ll be writing a lot more about this in the coming weeks.

Online courses with Soundfly

The MusEDLab is working with Soundfly, a music ed startup, on some new interactive online courses. The first is called Music Theory For Bedroom Producers, and we expect to launch it next month. I wrote a lot of the materials, and am appearing in some videos. Soundfly has ace designers, animators and programmers, so expect a rich multimedia experience. More on this as it gets closer.

Everything else

For the past few years, I’ve been a teaching artist with NYU’s IMPACT workshop. Below, you can see some participants making beats on an iPad. The workshop is a crash course not just in music, but in theater, dance, video, and the intersection of all of the above. I’m still very much figuring out my role in the whole thing, but so is everyone involved.

Mobile music at IMPACT

I continue to teach private lessons, do freelance production and composition, do some consulting, write for online publications, and generally keep hustling for gigs. If you’d like to have me do any of these things, be in touch.

Space Oddity: from song to track

If you’ve ever wondered what it is that a music producer does exactly, David Bowie’s “Space Oddity” is a crystal clear example. To put it in a nutshell, a producer turns this:

Into this:

It’s also interesting to listen to the first version of the commercial recording, which is better than the demo, but still nowhere near as majestic as the final version. The Austin Powers flute solo is especially silly.

Should we even consider these three recordings to be the same piece of music? On the one hand, they’re all the same melody and chords and lyrics. On the other hand, if the song only existed in its demo form, or in the awkward Austin Powers version, it would never have made the impact that it did. Some of the impact of the final version lies in better recording techniques and equipment, but it’s more than that. The music takes on a different meaning in the final version. It’s bigger, trippier, punchier, tighter, more cinematic, more transporting, and in general about a thousand times more effective.

The producer’s job is to marshal the efforts of songwriters, arrangers, performers and engineers to create a good-sounding recording. (The producer might also be a songwriter, arranger, performer, and/or engineer.) Producers are to songs what directors are to movies, or showrunners are to television.

When you’re thinking about a piece of recorded music, you’re really thinking about three different things:

  1. The underlying composition, the part that can be represented on paper. Albin Zak calls this “the song.”
  2. The performance of the song.
  3. The finished recording, after overdubbing, mixing, editing, effects, and all the rest. Albin Zak calls this “the track.”

I had always assumed that Tony Visconti produced “Space Oddity,” since he produced a ton of other Bowie classics. As it turns out, though, Visconti was underwhelmed by the song, so he delegated it to his assistant, Gus Dudgeon. So what is it that Gus Dudgeon did precisely? First let’s separate out what he didn’t do.

You can hear from the demo that the chords, melody and lyrics were all in place before Bowie walked into the studio. They’re the parts reproduced by the subway busker I heard singing “Space Oddity” this morning. The demo includes a vocal arrangement that’s similar to the final one, aside from some minor phrasing changes. The acoustic guitar and Stylophone are in place as well. (I had always thought it was an oboe, but no, that droning sound is a low-tech synth.)

Gus Dudgeon took a song and a partial arrangement, and turned it into a track. He oversaw the addition of electric guitar, bass, drums, strings, woodwinds, and keyboards. He coached Bowie and the various studio musicians through their performances, selected the takes, and decided on effects like echoes and reverb. He supervised the mixing, which not only sets the relative loudness of the various sounds, but also affects their perceived location and significance. In short, he designed the actual sounds that you hear.

If you want to dive deep into the track, you’re in luck, because Bowie officially released the multitrack stems. Some particular points of interest:

  • The bassist, Herbie Flowers, was a rookie. The “Space Oddity” session was his first. He later went on to create the staggeringly great dual bass part in Lou Reed’s “Walk On The Wild Side.”
  • The strings were arranged and conducted by the multifaceted Paul Buckmaster, who a few years later would work with Miles Davis on the conception of On The Corner. Buckmaster’s cello harmonics contribute significantly to the psychedelic atmosphere–listen to the end of the stem labeled “Extras 1.”
  • The live strings are supplemented by Mellotron, played by future Yes keyboardist Rick Wakeman, he of the flamboyant gold cape.
  • Tony Visconti plays some flute and unspecified woodwinds, including the distinctive saxophone run that leads into the instrumental sections.

You can read a detailed analysis of the recording on the excellent Bowiesongs blog.

The big difference between the sixties and the present is that the track has assumed ever-greater importance relative to the song and the performance. In the age of MIDI and digital audio editing, live performance has become a totally optional component of music. The song is increasingly inseparable from the sounds used to realize it, especially in synth-heavy music like hip-hop and EDM. This shift gives the producer ever-greater importance in the creative process. There is really no such thing as a “demo” anymore, since anyone with a computer can produce finished-sounding tracks in their bedroom. If David Bowie were a kid now, he’d put together “Space Oddity” in GarageBand or FL Studio, with a lavish soundscape as part of the conception from the beginning.

I want my students to understand that the words “producer” and “musician” are becoming synonymous. I want them to know that they can no longer focus solely on composition or performance and wait for someone else to craft a track around them. The techniques used to make “Space Oddity” were esoteric and expensive to realize at the time. Now, they’re easily within reach. But while the technology is more accessible, you still have to have the ideas. This is why it’s so valuable to study great producers like Tony Visconti and Gus Dudgeon: they’re a goldmine of sonic inspiration.

See also: a broader appreciation of Bowie.