Theory for Producers

I’m delighted to announce the launch of a new interactive online music course called Theory for Producers: The Black Keys. It’s a joint effort by Soundfly and the NYU MusEDLab, representing the culmination of several years’ worth of design and programming. We’re super proud of it.

Theory for Producers: The Black Keys

The course makes the abstractions of music theory concrete by presenting them in the form of actual songs you’re likely to already know. You can play and improvise along with the examples right in the web browser using the aQWERTYon, which turns your computer keyboard into an easily playable instrument. You can also bring the examples into programs like Ableton Live or Logic for further hands-on experimentation. We’ve spiced up the content with videos and animations, along with some entertaining digressions into the Stone Age and the auditory processing abilities of frogs.

So what does it mean that this is music theory for producers? We’re organizing the material in a way that’s easiest and most relevant to people using computers to create the dance music of the African diaspora: techno, hip-hop, and their various pop derivatives. This music carries most of its creative content outside of harmony: in rhythm, timbre, and repetitive structure. The harmony is usually static, sitting on a loop of a few chords or just a single mode. Alongside the standard (Western) major and minor scales, you’re just as likely to encounter more “exotic” (non-Western) sounds.

Music theory classes and textbooks typically begin with the C major scale, because it’s the easiest scale to represent and read in music notation. However, C major is not necessarily the most “basic” or fundamental scale for our intended audience. Instead, we start with E-flat minor pentatonic, otherwise known as the black keys on the piano. The piano metaphor is ubiquitous both in electronic music hardware and software, and pentatonics are even easier to play on piano than diatonic scales. E-flat minor pentatonic is more daunting in notated form than C major, but since dance and hip-hop producers tend not to be able to read music anyway, that’s no obstacle. And if producers want to use keys other than E-flat minor (or G-flat major), they can keep playing the black keys and then transpose the MIDI later.
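For the curious, here’s a minimal sketch of what “keep playing the black keys and then transpose the MIDI later” means in note-number terms. It’s my own illustration rather than anything from the course, and it assumes standard MIDI numbering with middle C at 60:

```python
# Illustrative only: the five black keys from Eb4 upward, which spell
# E-flat minor pentatonic (Eb, Gb, Ab, Bb, Db).
EB_MINOR_PENTATONIC = [63, 66, 68, 70, 73]

def transpose(notes, semitones):
    """Shift every MIDI note number by the same interval."""
    return [n + semitones for n in notes]

# Record a performance on the black keys, then shift it afterward.
# Moving everything down three semitones gives C minor pentatonic.
print(transpose(EB_MINOR_PENTATONIC, -3))  # [60, 63, 65, 67, 70] = C, Eb, F, G, Bb
```

Any DAW’s MIDI transpose function performs the same arithmetic for you; the point is that a black-keys performance is just a list of note numbers waiting to be shifted into whatever key you need.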

The Black Keys is just the first installment in Theory for Producers. Next, we’ll do The White Keys, otherwise known as the modes of C major. We’re planning to start that course not with C major itself, but with G Mixolydian mode, because it’s a more familiar sound in Afrodiasporic music than straight major. After that, we’ll do a course about chords, and one about rhythm. We hope you’ll sign up!

Update: oh hey, we’re on Lifehacker

Teaching reflections

Here’s what happened in my life as an educator this past semester, and what I have planned for the coming semester.

Montclair State University Intro To Music Technology

I wonder how much longer “music technology” is going to exist as a subject. They don’t teach “piano technology” or “violin technology.” It makes sense to teach specific areas like audio recording or synthesis or signal theory as separate classes. But “music technology” is such a broad term as to be meaningless. The unspoken assumption is that we’re teaching “musical practices involving a computer,” but even that is both too big and too small to structure a one-semester class around. On the one hand, every kind of music involves computers now. On the other hand, to focus just on the computer part is like teaching a word processing class that’s somehow separate from learning how to write.

MSU Intro to Music Tech

The newness and vagueness of the field of study gives me and my fellow music tech educators wide latitude to define our subject matter. I see my job as providing an introduction to pop production and songwriting. The tools we use for the job at Montclair are mostly GarageBand and Logic, but I don’t spend a lot of time on the mechanics of the software itself. Instead, I teach music: How do you express yourself creatively using sample libraries, or MIDI, or field recordings, or pre-existing songs? What kinds of rhythms, harmonies, timbres and structures make sense aesthetically when you’re assembling these materials in the DAW? Where do you get ideas? How do you listen to recorded music analytically? Why does Thriller sound so much better than any other album recorded in the eighties? We cover technical concepts as they arise in the natural course of producing and listening. My hope is that they’ll be more relevant and memorable that way.

Having now taught three semesters of Intro to Music Tech at MSU, I find my format starting to gel. The students spend most of the semester creating tracks. They do one using only the loops that come with GarageBand, one using only MIDI and software instruments, one that includes a field recording they made with their phones, and so on. I started having them remix each other’s tracks this past semester, and it was such a smash hit that I’m going to have future classes do a whole series of peer remixes.

Montclair is a fairly traditional conservatory. For many students, my class is the only time in their college careers they get to make music according to their own sensibilities and tastes. It’s also usually the only time they engage critically with recordings, or electronic dance music, or hip-hop, or pop song forms, or sampling, or mixing and audio processing. I’m glad to be able to fill these vacuums, but I wish I had more than one semester to do it in.

Aside from creative music-making, the students do a couple of presentations, one on a song they think is interesting, and one on a topic of their choice. They also write blog posts about the process of creating their tracks. This last assignment is a persistent obstacle, since no one seems to share my enthusiasm for process documentation. Next semester I’m going to try introducing some of the cooperative/competitive spirit of the peer remixes by having them write reviews of each other’s tracks. Maybe that will get them to invest their writing with the same creativity they put into the music assignments.

Montclair State Advanced Computer Music Composition

This past fall I got to teach my first advanced class, and it went amazingly well. We used Ableton Live, my DAW of choice, and the guys (it was all guys) banged out tracks at a rapid clip for the entire semester. As with the intro class, I spent most of the time on the creative process, and dealt with Ableton functionality and audio engineering topics as they came up.

Tristan gets his FFT on

Each assignment came with some kind of tight technical restriction, but no stylistic restrictions. As with the intro class, the advanced dudes did tracks using only existing loops, only MIDI, and found sound. They did peer remixing and self remixing as well. The two hardest and most interesting assignments were to create a new track using only samples of an existing track, and then to create a new track using only a single five-second Duke Ellington sample. (These assignments were inspired heavily by the Disquiet Junto.) The more tightly I constrained the students, the more ingenuity they displayed. Listen for yourself:

As with the intro class, I tried to have the advanced dudes document their process with blog posts. As with the intro class, they showed zero interest. In the future, I’ll have to get more creative with the writing component. Also, I’d like to not have the class be entirely male.

NYU Music Education Technology Practicum

This class is meant to be a grounding in music tech for future music teachers. I’m even more time-constrained at NYU than at Montclair, and I teach in a regular classroom rather than a computer lab. While my class time at Montclair is mostly devoted to music-making, at NYU I’m forced to do more lectures, demos and listening sessions. It is very far from ideal. I have no idea how NYU can charge so much money without offering such a basic-seeming amenity as a room with computers in it for the music students. However, NYU does have one advantage over Montclair as a teaching environment, which is that I can hold a couple of class sessions in an extremely fancy recording studio.

Catherine and Joseph in the Dolan Studio

I mostly take the same approach at NYU as I do at Montclair, and use most of the same assignments. The major difference is that the NYU kids do a critical listening project, where they pick a recording and graph out its musical structure and spatial layout. It’s a difficult exercise, but an invaluable one. I did it in grad school, and it improved my analytical listening abilities significantly. We used to do the same assignment at Montclair, but the students were really not into it, like to the point of refusing to do it, so sadly we had to drop it from the syllabus. I hope we can find a way to reinstate it.

This past semester, the majority of my NYU kids were music business majors, which was pretty great. They came in with less musical experience than the education majors–sometimes with none at all–but they had less to unlearn, and they threw themselves confidently into producing tracks. This coming semester I have a bunch more music business kids. I’m attracting them because my class is the only one at Steinhardt that does intro-level creative music making in the pop idiom. I’m clearly filling a vacuum, and I’m hoping that I’m just the thin edge of the wedge, both for my own sake and the future music educators of NYU.

Interface designs

The NYU Music Experience Design Lab is baking education into a suite of creative music making and learning tools. As my friend and colleague Adam Bell likes to say, purchasers of a computer are purchasing a music education. We’re trying to make that education a better and more enjoyable one, whether our users are in formal classroom settings or playing around on their own. You can read about the lab’s various projects here. My own contributions are largely conceptual, though I’ve also devoted a lot of attention to making useful and inspiring presets.

Cold Sweat on the Groove Pizza

The Ed Sullivan Fellows Program

This winter, the MusEDLab is launching a brand-new initiative: mentoring a group of young people from challenging circumstances as they learn music and technology. I’ll be teaching the music side, doing a custom-tailored version of my intro class syllabus. Sullivan Fellows will also work with my colleagues in the lab on programming and design projects. This summer, we’ll have a showcase event as part of the 2016 IMPACT Conference. The goal is to help the Fellows launch careers in music and/or technology. I’ll be writing a lot more about this in the coming weeks.

Online courses with Soundfly

The MusEDLab is working with Soundfly, a music ed startup, on some new interactive online courses. The first is called Music Theory For Bedroom Producers, which we expect to launch next month. I wrote a lot of the materials, and I appear in some videos. Soundfly has ace designers, animators and programmers, so expect a rich multimedia experience. More on this as it gets closer.

Everything else

For the past few years, I’ve been a teaching artist with NYU’s IMPACT workshop. Below, you can see some participants making beats on an iPad. The workshop is a crash course not just in music, but in theater, dance, video, and the intersection of all of the above. I’m still very much figuring out my role in the whole thing, but so is everyone involved.

Mobile music at IMPACT

I continue to teach private lessons, do freelance production and composition, do some consulting, write for online publications, and generally keep hustling for gigs. If you’d like to have me do any of these things, be in touch.

Music education at the grownups’ table

I was asked by Alison Armstrong to comment on this Time magazine op-ed by Todd Stoll, the vice president of education at Jazz at Lincoln Center. Before I do, let me give some context: Todd Stoll is a friend and colleague of Wynton Marsalis, and he shares some of Wynton’s ideas about music.

Wynton Marsalis

Wynton Marsalis has some strong views about jazz, its historical significance, and its present condition. He holds jazz to be “America’s classical music,” the highest achievement of our culture, and the sonic embodiment of our best democratic ideals. The man himself is a brilliant practitioner of the art form. I’ve had the pleasure of hearing him play live several times, and he’s always a riveting improvisor. However, his definition of the word “jazz” is a narrow one. For Wynton Marsalis, jazz history ends in about 1965, right before Herbie Hancock traded in his grand piano for a Fender Rhodes. All the developments after that–the introduction of funk, rock, pop, electronic music, and hip-hop–are bastardizations of the music.

Wynton Marsalis’ public stature has given his philosophy enormous weight, which has been a mixed bag for jazz culture. On the one hand, he has been a key force in getting jazz the institutional recognition that it was denied for too many years. On the other hand, the form of jazz that Wynton advocates for is a museum piece, a time capsule of the middle part of the twentieth century. When jazz gained the legitimacy of “classical music,” it also became burdened with classical music’s stuffiness, pedantry, and disconnection from the broader culture. As the more innovative jazz artists try to keep pace with the world, they can find themselves more hindered by Wynton than helped.

So, with all that in mind, let’s see what Todd Stoll has to say about the state of music education in America.

No Child Left Behind, the largest attempt at education reform in our nation’s history, resulted in a massive surge in the testing of our kids and an increased focus in “STEM” (science, technology, engineering and math). While well-meaning, this legislation precipitated a gradual and massive decline of students participating in music and arts classes, as test prep and remedial classes took precedence over a broader liberal arts education, and music education was often reduced, cut, or relegated to after school.

Testing culture is a Bad Thing, no question there.

Taken on face value, Every Student Succeeds bodes well for music education and the National Association for Music Education, which spent thousands of hours lobbying on behalf of music teachers everywhere. The new act removes “adequate yearly progress” benchmarks and includes music and arts as part of its definition of a “well-rounded education.” It also refers to time spent teaching music and arts as “protected time.”

That is a Good Thing.

Music and arts educators now have some leverage for increased funding, professional development, equipment, staffing, prioritized scheduling of classes, and a more solid foothold when budgets get tight and cuts are being discussed. I can almost hear the discussions—”We can’t cut a core class now, can we?” In other words, music is finally at the grown-ups table with subjects like science, math, social studies and language arts.

Yes! Great. But how did music get sent to the kids’ table in the first place? How did we come to regard it as a luxury, or worse, a frivolity? How do we learn to value it more highly, so the next time that a rage for quantitative assessment sweeps the federal government, we won’t go through the same cycle all over again?

Now that we’re at the table, we need a national conversation to redefine the depth and quality of the content we teach in our music classes. We need a paradigm shift in how we define outcomes in our music students. And we need to go beyond the right notes, precise rhythms, clear diction and unified phrasing that have set the standard for the past century.

True. The standard music curriculum in America is very much stuck in the model of the nineteenth century European conservatory. There’s so much more we could be doing to awaken kids’ innate musicality.

We should define learning by a student’s intimate knowledge of composers or artists—their personal history, conception and the breadth and scope of their output.

Sure! This sounds good.

Students should know the social and cultural landscape of the era in which any piece was written or recorded, and the circumstances that had an influence.

Stoll is pushing back here against the outdated notion of “absolute music,” the idea that the best music is “pure,” that it transcends the grubby world of politics and economics and fashion. We definitely want kids to know that music comes from a particular time and place, and that it responds to particular forces and pressures.

We should teach the triumphant mythology of our greatest artists—from Louis Armstrong to Leonard Bernstein, from Marian Anderson to Mary Lou Williams, and others.

Sure, students should know who black and female and Jewish musicians are. Apparently, however, our greatest artists all did their work before 1965.

Students should understand the style and conception of a composer or artist—what are the aesthetics of a specific piece, the notes that have meaning? They should know the influences and inputs that went into the creation of a piece and how to identify those.

Very good idea. I’m a strong believer in the evolutionary biology model of music history. Rather than doing a chronological plod through the Great Men (and now Women), I like the idea of picking a musical trope and tracing out its family tree.

There should be discussion of the definitive recording of a piece, and students should make qualitative judgments on such against a rubric defined by the teacher that easily and broadly gives definition and shape to any genre.

The Wynton Marsalis version of jazz has turned out to be a good fit for academic culture, because there are Canonical Works by Great Masters. In jazz, the canonical work is a recording rather than a score, but the scholarly approach can be the same. This model is problematic for an improvised, largely aural, and dance-oriented tradition like jazz, to say the least, but it is progress to be talking about recording as an art form unto itself.

Selected pieces should illuminate the general concepts of any genre—the 6/8 march, the blues, a lyrical art song, counterpoint, AABA form, or call and response—and students should be able to understand these and know their precise location within a score and what these concepts represent.

Okay. Why? I mean, these are all fine things to learn and teach. But they only become meaningful through use. A kid might rightly question whether their knowledge of lyrical art song or AABA form has anything to do with anything. Once a kid tries writing a song, these ideas suddenly become a lot more pertinent.

We should embrace the American arts as a full constituent in our programs—not the pop-tinged sounds of The Voice or Glee but our music: blues, folk, spirituals, jazz, hymns, country and bluegrass, the styles that created the fabric of our culture and concert works by composers who embraced them.

This is where Stoll and I part company. Classical pedagogues have earned a bad reputation for insisting that kids like the wrong music. Stoll is committing the same sin here. Remember, kids: Our Music is not your music. You are supposed to like blues, folk, spirituals, jazz, hymns, country and bluegrass. Those are the styles that created the fabric of our culture. And they inspired concert works by composers, so that really makes them legit. Music that was popular in your lifetime, or your parents’ lifetime, is suspect.

Students should learn that the written score is a starting point. It’s the entry into a world of discovery and aspiration that can transform their lives; it’s deeper than notes. We should help them realize that a lifetime of discovery in music is a worthwhile and enjoyable endeavor.

Score-centrism is a bad look from anyone, and it’s especially disappointing from a jazz guy. What does this statement mean to a kid immersed in rock or hip-hop, where nothing is written down? The score should be presented as what it is: one starting point among many. You can have a lifetime of discovery in music without ever reading a note. I believe that notation is worth teaching, but it’s worth teaching as a means to an end, not as an end unto itself.

These lessons will require new skills, extra work outside of class, more research, and perhaps new training standards for teachers. But, it’s not an insurmountable task, and it is vital, given the current strife of our national discourse.

If we can agree on the definitive recording of West Side Story, we can bridge the partisan divide!

Our arts can help us define who we are and tell us who we can be. They can bind the wounds of racism, compensate for the scourge of socio-economic disadvantage, and inoculate a new generation against the fear of not knowing and understanding those who are different from themselves.

I want this all to be true. But there is some magical thinking at work here, and magical thinking is not going to help us when budgets get cut. I want the kids to have the opportunity to study Leonard Bernstein and Marian Anderson. I’d happily toss standardized testing overboard to free up the time and resources. I believe that doing so will result in better academic outcomes. And I believe that music does make better citizens. But how does it do that? Saying that we need school music in order to instill Reverence for the Great Masters is weak sauce, even if the list of Great Masters now has some women and people of color on it. We need to be able to articulate specifically why music is of value to kids.

I believe that we have a good answer already: the point of music education should be to build emotionally stronger people. Done right, music promotes flow, deep attention, social bonding, and resilience. As Steve Dillon puts it, music is “a powerful weapon against depression.” Kids who are centered, focused, and able to regulate their moods are going to be better students, better citizens, and (most importantly!) happier humans. That is why it’s worth using finite school resources to teach music.

The question we need to ask is: what methods of music education best support emotional development in kids? I believe that the best approach is to treat every kid as a latent musician, and to help them develop as such, to make them producers rather than consumers. If a kid’s musicality can be nurtured best through studying jazz, great! That approach worked well for me, because my innermost musical self turns out to have a lot of resonance with Ellington and Coltrane. If a kid finds meaning in Beethoven, also great. But if the key to a particular kid’s lock is hip-hop or trance or country, music education should be equipped to support them too. Pointing young people to music they might otherwise miss out on is a good idea. Stifling them under the weight of a canon is not.

Please stop saying “consuming music”

In the wake of David Bowie’s death, I went on iTunes and bought a couple of his tracks, including the majestic “Blackstar.” In economic terms, I “consumed” this song. I am a “music consumer.” I made an emotional connection to a dying man who has been a creative inspiration of mine for more than twenty years, via “consumption.” That does not feel like the right word, at all. When did we even start saying “music consumers”? Why did we start? It makes my skin crawl.

The Online Etymology Dictionary says that the verb “to consume” descends from Latin consumere, which means “to use up, eat, waste.” That last sense of the word speaks volumes about America, our values, and specifically, our pathological relationship with music.

The synonyms for “consume” listed in my computer’s thesaurus include: devour, ingest, swallow, gobble up, wolf down, guzzle, feast on, gulp down, polish off, dispose of, pig out on, swill, expend, deplete, exhaust, waste, squander, drain, dissipate, fritter away, destroy, demolish, lay waste, wipe out, annihilate, devastate, gut, ruin, wreck. None of these are words I want to apply to music.

I’m happy to spend money on music. I’m not happy to be a consumer of it. When I consume something, like electricity or food, then it’s gone, and can’t be used by anyone else. But having bought that David Bowie song from iTunes, I can listen to it endlessly, play it for other people, put it in playlists, mull it over when I’m not listening to it, sample it, remix it, mash it up with other songs.

What word should we use for buying songs from iTunes, or streaming them on Spotify, or otherwise spending money on them? (Or being advertised to around them?) Well, what’s wrong with “buying” or “streaming”? I’m happy to call myself a “music buyer” or “music streamer.” There’s no contradiction there between the economic activity and the creative one.

My colleagues in the music business world have developed a distressing habit of using “consuming” to describe any music listening experience. This is the sense of the word that I’m most committed to abolishing. Not only is it nonsensical, but it reduces the act of listening to the equivalent of eating a bag of potato chips. Listening is not a passive activity. It requires imaginative participation (and, in more civilized cultures than ours, dancing). Listening is a form of musicianship–the most important kind, since it’s a prerequisite for all of the others. Marc Sabatella says:

For the purposes of this primer, we are all musicians. Some of us may be performing musicians, while most of us are listening musicians. Most of the former are also the latter.

I mean, you would hope. Thomas Regelski goes further. He challenges the assumption that the deepest understanding of music comes from performing or composing it. Performing and composing are valuable and delightful experiences, and they can inform a rich musical understanding. But they aren’t the only way to access meaning at the deepest level. Listening alone can do it. Some of the best music scholarship I’ve read comes from “non-musicians.” Listening is a creative act. You couldn’t come up with a less apt term for it than “consumption.” Please stop saying it.

Space Oddity: from song to track

If you’ve ever wondered what it is that a music producer does exactly, David Bowie’s “Space Oddity” is a crystal clear example. To put it in a nutshell, a producer turns this:

Into this:

It’s also interesting to listen to the first version of the commercial recording, which is better than the demo, but still nowhere near as majestic as the final version. The Austin Powers flute solo is especially silly.

Should we even consider these three recordings to be the same piece of music? On the one hand, they’re all the same melody and chords and lyrics. On the other hand, if the song only existed in its demo form, or in the awkward Austin Powers version, it would never have made the impact that it did. Some of the impact of the final version lies in better recording techniques and equipment, but it’s more than that. The music takes on a different meaning in the final version. It’s bigger, trippier, punchier, tighter, more cinematic, more transporting, and in general about a thousand times more effective.

The producer’s job is to marshal the efforts of songwriters, arrangers, performers and engineers to create a good-sounding recording. (The producer might also be a songwriter, arranger, performer, and/or engineer.) Producers are to songs what directors are to movies, or what showrunners are to television.

When you’re thinking about a piece of recorded music, you’re really talking about three different things:

  1. The underlying composition, the part that can be represented on paper. Albin Zak calls this “the song.”
  2. The performance of the song.
  3. The finished recording, after overdubbing, mixing, editing, effects, and all the rest. Albin Zak calls this “the track.”

I had always assumed that Tony Visconti produced “Space Oddity,” since he produced a ton of other Bowie classics. As it turns out, though, Visconti was underwhelmed by the song, so he delegated it to his assistant, Gus Dudgeon. So what is it that Gus Dudgeon did precisely? First let’s separate out what he didn’t do.

You can hear from the demo that the chords, melody and lyrics were all in place before Bowie walked into the studio. They’re the parts reproduced by the subway busker I heard singing “Space Oddity” this morning. The demo includes a vocal arrangement that’s similar to the final one, aside from some minor phrasing changes. The acoustic guitar and Stylophone are in place as well. (I had always thought it was an oboe, but no, that droning sound is a low-tech synth.)

Gus Dudgeon took a song and a partial arrangement, and turned it into a track. He oversaw the addition of electric guitar, bass, drums, strings, woodwinds, and keyboards. He coached Bowie and the various studio musicians through their performances, selected the takes, and decided on effects like echoes and reverb. He supervised the mixing, which not only sets the relative loudness of the various sounds, but also affects their perceived location and significance. In short, he designed the actual sounds that you hear.

If you want to dive deep into the track, you’re in luck, because Bowie officially released the multitrack stems. Some particular points of interest:

  • The bassist, Herbie Flowers, was a rookie. The “Space Oddity” session was his first. He later went on to create the staggeringly great dual bass part in Lou Reed’s “Walk On The Wild Side.”
  • The strings were arranged and conducted by the multifaceted Paul Buckmaster, who a few years later would work with Miles Davis on the conception of On The Corner. Buckmaster’s cello harmonics contribute significantly to the psychedelic atmosphere–listen to the end of the stem labeled “Extras 1.”
  • The live strings are supplemented by Mellotron, played by future Yes keyboardist Rick Wakeman, he of the flamboyant gold cape.
  • Tony Visconti plays some flute and unspecified woodwinds, including the distinctive saxophone run that leads into the instrumental sections.

You can read a detailed analysis of the recording on the excellent Bowiesongs blog.

The big difference between the sixties and the present is that the track has assumed ever-greater importance relative to the song and the performance. In the age of MIDI and digital audio editing, live performance has become a totally optional component of music. The song is increasingly inseparable from the sounds used to realize it, especially in synth-heavy music like hip-hop and EDM. This shift gives the producer an ever-larger role in the creative process. There is really no such thing as a “demo” anymore, since anyone with a computer can produce finished-sounding tracks in their bedroom. If David Bowie were a kid now, he’d put together “Space Oddity” in GarageBand or FL Studio, with a lavish soundscape part of the conception from the beginning.

I want my students to understand that the words “producer” and “musician” are becoming synonymous. I want them to know that they can no longer focus solely on composition or performance and wait for someone else to craft a track around them. The techniques used to make “Space Oddity” were esoteric and expensive to realize at the time. Now, they’re easily within reach. But while the technology is more accessible, you still have to have the ideas. This is why it’s so valuable to study great producers like Tony Visconti and Gus Dudgeon: they’re a goldmine of sonic inspiration.

See also: a broader appreciation of Bowie.