Theory for Producers

I’m delighted to announce the launch of a new interactive online music course called Theory for Producers: The Black Keys. It’s a joint effort by Soundfly and the NYU MusEDLab, representing the culmination of several years’ worth of design and programming. We’re super proud of it.

Theory for Producers: The Black Keys

The course makes the abstractions of music theory concrete by presenting them in the form of actual songs you’re likely to already know. You can play and improvise along with the examples right in the web browser using the aQWERTYon, which turns your computer keyboard into an easily playable instrument. You can also bring the examples into programs like Ableton Live or Logic for further hands-on experimentation. We’ve spiced up the content with videos and animations, along with some entertaining digressions into the Stone Age and the auditory processing abilities of frogs.

So what does it mean that this is music theory for producers? We’re organizing the material in a way that’s easiest and most relevant to people using computers to create the dance music of the African diaspora: techno, hip-hop, and their various pop derivatives. This music carries most of its creative content outside of harmony: in rhythm, timbre, and repetitive structure. The harmony is usually static, sitting on a loop of a few chords or just a single mode. Alongside the standard (Western) major and minor scales, you’re just as likely to encounter more “exotic” (non-Western) sounds.

Music theory classes and textbooks typically begin with the C major scale, because it’s the easiest scale to represent and read in music notation. However, C major is not necessarily the most “basic” or fundamental scale for our intended audience. Instead, we start with E-flat minor pentatonic, otherwise known as the black keys on the piano. The piano metaphor is ubiquitous both in electronic music hardware and software, and pentatonics are even easier to play on piano than diatonic scales. E-flat minor pentatonic is more daunting in notated form than C major, but since dance and hip-hop producers tend not to be able to read music anyway, that’s no obstacle. And if producers want to use keys other than E-flat minor (or G-flat major), they can keep playing the black keys and then transpose the MIDI later.
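If you want to see the black-keys trick in concrete MIDI terms, here’s a minimal Python sketch. The note numbers are standard MIDI; the helper and variable names are just mine for illustration:

```python
# The black keys in one octave, as MIDI note numbers:
# Eb, Gb, Ab, Bb, Db -- in other words, E-flat minor pentatonic.
BLACK_KEYS = [51, 54, 56, 58, 61]  # Eb3, Gb3, Ab3, Bb3, Db4

def transpose(notes, semitones):
    """Shift a list of MIDI notes by some number of semitones."""
    return [n + semitones for n in notes]

# Played a riff on the black keys but want it in A minor pentatonic?
# A is six semitones above Eb, so shift everything up a tritone.
riff_in_eb = [51, 54, 56, 58, 56, 54]
riff_in_a = transpose(riff_in_eb, 6)

print(riff_in_eb)  # [51, 54, 56, 58, 56, 54]
print(riff_in_a)   # [57, 60, 62, 64, 62, 60] -> A, C, D, E, D, C
```

Any DAW piano roll will do this transposition for you; the point is that it really is just addition.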

The Black Keys is just the first installment in Theory For Producers. Next, we’ll do The White Keys, otherwise known as the modes of C major. We’re planning to start that course not with C major itself, but with G Mixolydian mode, because it’s a more familiar sound in Afrodiasporic music than straight major. After that, we’ll do a course about chords, and one about rhythm. We hope you sign up!

Update: oh hey, we’re on Lifehacker

Teaching reflections

Here’s what happened in my life as an educator this past semester, and what I have planned for the coming semester.

Montclair State University Intro To Music Technology

I wonder how much longer “music technology” is going to exist as a subject. They don’t teach “piano technology” or “violin technology.” It makes sense to teach specific areas like audio recording or synthesis or signal theory as separate classes. But “music technology” is such a broad term as to be meaningless. The unspoken assumption is that we’re teaching “musical practices involving a computer,” but even that is both too big and too small to structure a one-semester class around. On the one hand, every kind of music involves computers now. On the other hand, to focus just on the computer part is like teaching a word processing class that’s somehow separate from learning how to write.

MSU Intro to Music Tech

The newness and vagueness of the field of study gives me and my fellow music tech educators wide latitude to define our subject matter. I see my job as providing an introduction to pop production and songwriting. The tools we use for the job at Montclair are mostly GarageBand and Logic, but I don’t spend a lot of time on the mechanics of the software itself. Instead, I teach music: How do you express yourself creatively using sample libraries, or MIDI, or field recordings, or pre-existing songs? What kinds of rhythms, harmonies, timbres and structures make sense aesthetically when you’re assembling these materials in the DAW? Where do you get ideas? How do you listen to recorded music analytically? Why does Thriller sound so much better than any other album recorded in the eighties? We cover technical concepts as they arise in the natural course of producing and listening. My hope is that they’ll be more relevant and memorable that way.

Having now taught three semesters of Intro to Music Tech at MSU, my format is starting to gel. The students spend most of the semester creating tracks. They do one using only the loops that come with GarageBand, one using only MIDI and software instruments, one that includes a field recording they made with their phones, and so on. I started having them remix each other’s tracks this past semester, and it was such a smash hit that I’m going to have future classes do a whole series of peer remixes.

Montclair is a fairly traditional conservatory. For many students, my class is the only time in their college careers they get to make music according to their own sensibilities and tastes. It’s also usually the only time they engage critically with recordings, or electronic dance music, or hip-hop, or pop song forms, or sampling, or mixing and audio processing. I’m glad to be able to fill these vacuums, but I wish I had more than one semester to do it in.

Aside from creative music-making, the students do a couple of presentations, one on a song they think is interesting, and one on a topic of their choice. They also write blog posts about the process of creating their tracks. This last assignment is a persistent obstacle, since no one seems to share my enthusiasm for process documentation. Next semester I’m going to try introducing some of the cooperative/competitive spirit of the peer remixes by having them write reviews of each other’s tracks. Maybe that will get them to invest their writing with the same creativity they put into the music assignments.

Montclair State Advanced Computer Music Composition

This past fall I got to teach my first advanced class, and it went amazingly well. We used Ableton Live, my DAW of choice, and the guys (it was all guys) banged out tracks at a rapid clip for the entire semester. As with the intro class, I spent most of the time on the creative process, and dealt with Ableton functionality and audio engineering topics as they came up.

Tristan gets his FFT on

Each assignment came with some kind of tight technical restriction, but no stylistic restrictions. As with the intro class, the advanced dudes did tracks using only existing loops, only MIDI, and found sound. They did peer remixing and self remixing as well. The two hardest and most interesting assignments were to create a new track using only samples of an existing track, and then to create a new track using only a single five-second Duke Ellington sample. (These assignments were inspired heavily by the Disquiet Junto.) The more tightly I constrained the students, the more ingenuity they displayed. Listen for yourself:

As with the intro class, I tried to have the advanced dudes document their process with blog posts. As with the intro class, they showed zero interest. In the future, I’ll have to get more creative with the writing component. Also, I’d like to not have the class be entirely male.

NYU Music Education Technology Practicum

This class is meant to be a grounding in music tech for future music teachers. I’m even more time-constrained at NYU than at Montclair, and I teach in a regular classroom rather than a computer lab. While my class time at Montclair is mostly devoted to music-making, at NYU I’m forced to do more lectures, demos and listening sessions. It is very far from ideal. I have no idea how NYU can charge so much money without offering such a basic-seeming amenity as a room with computers in it for the music students. However, NYU does have one advantage over Montclair as a teaching environment, which is that I can hold a couple of class sessions in an extremely fancy recording studio.

Catherine and Joseph in the Dolan Studio

I mostly take the same approach at NYU as I do at Montclair, and use most of the same assignments. The major difference is that the NYU kids do a critical listening project, where they pick a recording and graph out its musical structure and spatial layout. It’s a difficult exercise, but an invaluable one. I did it in grad school, and it improved my analytical listening abilities significantly. We used to do the same assignment at Montclair, but the students were really not into it, like to the point of refusing to do it, so sadly we had to drop it from the syllabus. I hope we can find a way to reinstate it.

This past semester, the majority of my NYU kids were music business majors, which was pretty great. They came in with less musical experience than the education majors–sometimes with none at all–but they had less to unlearn, and they threw themselves confidently into producing tracks. This coming semester I have a bunch more music business kids. I’m attracting them because my class is the only one at Steinhardt that does intro-level creative music making in the pop idiom. I’m clearly filling a vacuum, and I’m hoping that I’m just the thin edge of the wedge, both for my own sake and the future music educators of NYU.

Interface designs

The NYU Music Experience Design Lab is baking education into a suite of creative music making and learning tools. As my friend and colleague Adam Bell likes to say, purchasers of a computer are purchasing a music education. We’re trying to make that education a better and more enjoyable one, whether our users are in formal classroom settings or playing around on their own. You can read about the lab’s various projects here. My own contributions are largely conceptual, though I’ve also devoted a lot of attention to making useful and inspiring presets.

Cold Sweat on the Groove Pizza

The Ed Sullivan Fellows Program

This winter, the MusEDLab is launching a brand new initiative, mentoring a group of young people from challenging circumstances in music and technology. I’ll be teaching the music side, doing a custom-tailored version of my intro class syllabus. Sullivan Fellows will also work with my colleagues in the lab on programming and design projects. This summer, we’ll have a showcase event as part of the 2016 IMPACT Conference. The goal is to help the Fellows get launched in careers in music and/or technology. I’ll be writing a lot more about this in the coming weeks.

Online courses with Soundfly

The MusEDLab is working with a music ed startup on some new interactive online courses. The first is called Music Theory For Bedroom Producers, and we expect to launch next month. I wrote a lot of the materials, and am appearing in some videos. Soundfly has ace designers, animators and programmers, so expect a rich multimedia experience. More on this as it gets closer.

Everything else

For the past few years, I’ve been a teaching artist with NYU’s IMPACT workshop. Below, you can see some participants making beats on an iPad. The workshop is a crash course not just in music, but in theater, dance, video, and the intersection of all of the above. I’m still very much figuring out my role in the whole thing, but so is everyone involved.

Mobile music at IMPACT

I continue to teach private lessons, do freelance production and composition, do some consulting, write for online publications, and generally keep hustling for gigs. If you’d like to have me do any of these things, be in touch.

Music education at the grownups’ table

I was asked by Alison Armstrong to comment on this Time magazine op-ed by Todd Stoll, the vice president of education at Jazz at Lincoln Center. Before I do, let me give some context: Todd Stoll is a friend and colleague of Wynton Marsalis, and he shares some of Wynton’s ideas about music.

Wynton Marsalis

Wynton Marsalis has some strong views about jazz, its historical significance, and its present condition. He holds jazz to be “America’s classical music,” the highest achievement of our culture, and the sonic embodiment of our best democratic ideals. The man himself is a brilliant practitioner of the art form. I’ve had the pleasure of hearing him play live several times, and he’s always a riveting improviser. However, his definition of the word “jazz” is a narrow one. For Wynton Marsalis, jazz history ends in about 1965, right before Herbie Hancock traded in his grand piano for a Fender Rhodes. All the developments after that–the introduction of funk, rock, pop, electronic music, and hip-hop–are bastardizations of the music.

Wynton Marsalis’ public stature has given his philosophy enormous weight, which has been a mixed bag for jazz culture. On the one hand, he has been a key force in getting jazz the institutional recognition that it was denied for too many years. On the other hand, the form of jazz that Wynton advocates for is a museum piece, a time capsule of the middle part of the twentieth century. When jazz gained the legitimacy of “classical music,” it also became burdened with classical music’s stuffiness, pedantry, and disconnection from the broader culture. As the more innovative jazz artists try to keep pace with the world, they can find themselves more hindered by Wynton than helped.

So, with all that in mind, let’s see what Todd Stoll has to say about the state of music education in America.

No Child Left Behind, the largest attempt at education reform in our nation’s history, resulted in a massive surge in the testing of our kids and an increased focus in “STEM” (science, technology, engineering and math). While well-meaning, this legislation precipitated a gradual and massive decline of students participating in music and arts classes, as test prep and remedial classes took precedence over a broader liberal arts education, and music education was often reduced, cut, or relegated to after school.

Testing culture is a Bad Thing, no question there.

Taken on face value, Every Student Succeeds bodes well for music education and the National Association for Music Education, which spent thousands of hours lobbying on behalf of music teachers everywhere. The new act removes “adequate yearly progress” benchmarks and includes music and arts as part of its definition of a “well-rounded education.” It also refers to time spent teaching music and arts as “protected time.”

That is a Good Thing.

Music and arts educators now have some leverage for increased funding, professional development, equipment, staffing, prioritized scheduling of classes, and a more solid foothold when budgets get tight and cuts are being discussed. I can almost hear the discussions—”We can’t cut a core class now, can we?” In other words, music is finally at the grown-ups table with subjects like science, math, social studies and language arts.

Yes! Great. But how did music get sent to the kids’ table in the first place? How did we come to regard it as a luxury, or worse, a frivolity? How do we learn to value it more highly, so the next time that a rage for quantitative assessment sweeps the federal government, we won’t go through the same cycle all over again?

Now that we’re at the table, we need a national conversation to redefine the depth and quality of the content we teach in our music classes. We need a paradigm shift in how we define outcomes in our music students. And we need to go beyond the right notes, precise rhythms, clear diction and unified phrasing that have set the standard for the past century.

True. The standard music curriculum in America is very much stuck in the model of the nineteenth century European conservatory. There’s so much more we could be doing to awaken kids’ innate musicality.

We should define learning by a student’s intimate knowledge of composers or artists—their personal history, conception and the breadth and scope of their output.

Sure! This sounds good.

Students should know the social and cultural landscape of the era in which any piece was written or recorded, and the circumstances that had an influence.

Stoll is referring here to the outdated notion of “absolute music,” the idea that the best music is “pure,” that it transcends the grubby world of politics and economics and fashion. We definitely want kids to know that music comes from a particular time and place, and that it responds to particular forces and pressures.

We should teach the triumphant mythology of our greatest artists—from Louis Armstrong to Leonard Bernstein, from Marian Anderson to Mary Lou Williams, and others.

Sure, students should know who black and female and Jewish musicians are. Apparently, however, our greatest artists all did their work before 1965.

Students should understand the style and conception of a composer or artist—what are the aesthetics of a specific piece, the notes that have meaning? They should know the influences and inputs that went into the creation of a piece and how to identify those.

Very good idea. I’m a strong believer in the evolutionary biology model of music history. Rather than doing a chronological plod through the Great Men (and now Women), I like the idea of picking a musical trope and tracing out its family tree.

There should be discussion of the definitive recording of a piece, and students should make qualitative judgments on such against a rubric defined by the teacher that easily and broadly gives definition and shape to any genre.

The Wynton Marsalis version of jazz has turned out to be a good fit for academic culture, because there are Canonical Works by Great Masters. In jazz, the canonical work is a recording rather than a score, but the scholarly approach can be the same. This model is problematic for an improvised, largely aural, and dance-oriented tradition like jazz, to say the least, but it is progress to be talking about recording as an art form unto itself.

Selected pieces should illuminate the general concepts of any genre—the 6/8 march, the blues, a lyrical art song, counterpoint, AABA form, or call and response—and students should be able to understand these and know their precise location within a score and what these concepts represent.

Okay. Why? I mean, these are all fine things to learn and teach. But they only become meaningful through use. A kid might rightly question whether their knowledge of lyrical art song or AABA form has anything to do with anything. Once a kid tries writing a song, these ideas suddenly become a lot more pertinent.

We should embrace the American arts as a full constituent in our programs—not the pop-tinged sounds of The Voice or Glee but our music: blues, folk, spirituals, jazz, hymns, country and bluegrass, the styles that created the fabric of our culture and concert works by composers who embraced them.

This is where Stoll and I part company. Classical pedagogues have earned a bad reputation for insisting that kids like the wrong music. Stoll is committing the same sin here. Remember, kids: Our Music is not your music. You are supposed to like blues, folk, spirituals, jazz, hymns, country and bluegrass. Those are the styles that created the fabric of our culture. And they inspired concert works by composers, so that really makes them legit. Music that was popular in your lifetime, or your parents’ lifetime, is suspect.

Students should learn that the written score is a starting point. It’s the entry into a world of discovery and aspiration that can transform their lives; it’s deeper than notes. We should help them realize that a lifetime of discovery in music is a worthwhile and enjoyable endeavor.

Score-centrism is a bad look from anyone, and it’s especially disappointing from a jazz guy. What does this statement mean to a kid immersed in rock or hip-hop, where nothing is written down? The score should be presented as what it is: one starting point among many. You can have a lifetime of discovery in music without ever reading a note. I believe that notation is worth teaching, but it’s worth teaching as a means to an end, not as an end unto itself.

These lessons will require new skills, extra work outside of class, more research, and perhaps new training standards for teachers. But, it’s not an insurmountable task, and it is vital, given the current strife of our national discourse.

If we can agree on the definitive recording of West Side Story, we can bridge the partisan divide!

Our arts can help us define who we are and tell us who we can be. They can bind the wounds of racism, compensate for the scourge of socio-economic disadvantage, and inoculate a new generation against the fear of not knowing and understanding those who are different from themselves.

I want this all to be true. But there is some magical thinking at work here, and magical thinking is not going to help us when budgets get cut. I want the kids to have the opportunity to study Leonard Bernstein and Marian Anderson. I’d happily toss standardized testing overboard to free up the time and resources. I believe that doing so will result in better academic outcomes. And I believe that music does make better citizens. But how does it do that? Saying that we need school music in order to instill Reverence for the Great Masters is weak sauce, even if the list of Great Masters now has some women and people of color on it. We need to be able to articulate specifically why music is of value to kids.

I believe that we have a good answer already: the point of music education should be to build emotionally stronger people. Done right, music promotes flow, deep attention, social bonding, and resilience. As Steve Dillon puts it, music is “a powerful weapon against depression.” Kids who are centered, focused, and able to regulate their moods are going to be better students, better citizens, and (most importantly!) happier humans. That is why it’s worth using finite school resources to teach music.

The question we need to ask is: what methods of music education best support emotional development in kids? I believe that the best approach is to treat every kid as a latent musician, and to help them develop as such, to make them producers rather than consumers. If a kid’s musicality can be nurtured best through studying jazz, great! That approach worked great for me, because my innermost musical self turns out to have a lot of resonance with Ellington and Coltrane. If a kid finds meaning in Beethoven, also great. But if the key to a particular kid’s lock is hip-hop or trance or country, music education should be equipped to support them too. Pointing young people to music they might otherwise miss out on is a good idea. Stifling them under the weight of a canon is not.

Please stop saying “consuming music”

In the wake of David Bowie’s death, I went on iTunes and bought a couple of his tracks, including the majestic “Blackstar.” In economic terms, I “consumed” this song. I am a “music consumer.” I made an emotional connection to a dying man who has been a creative inspiration of mine for more than twenty years, via “consumption.” That does not feel like the right word, at all. When did we even start saying “music consumers”? Why did we start? It makes my skin crawl.

The Online Etymology Dictionary says that the verb “to consume” descends from Latin consumere, which means “to use up, eat, waste.” That last sense of the word speaks volumes about America, our values, and specifically, our pathological relationship with music.

The synonyms for “consume” listed in my computer’s thesaurus include: devour, ingest, swallow, gobble up, wolf down, guzzle, feast on, gulp down, polish off, dispose of, pig out on, swill, expend, deplete, exhaust, waste, squander, drain, dissipate, fritter away, destroy, demolish, lay waste, wipe out, annihilate, devastate, gut, ruin, wreck. None of these are words I want to apply to music.

I’m happy to spend money on music. I’m not happy to be a consumer of it. When I consume something, like electricity or food, then it’s gone, and can’t be used by anyone else. But having bought that David Bowie song from iTunes, I can listen to it endlessly, play it for other people, put it in playlists, mull it over when I’m not listening to it, sample it, remix it, mash it up with other songs.

What word should we use for buying songs from iTunes, or streaming them on Spotify, or otherwise spending money on them? (Or being advertised to around them?) Well, what’s wrong with “buying” or “streaming”? I’m happy to call myself a “music buyer” or “music streamer.” There’s no contradiction there between the economic activity and the creative one.

My colleagues in the music business world have developed a distressing habit of using “consuming” to describe any music listening experience. This is the sense of the word that I’m most committed to abolishing. Not only is it nonsensical, but it reduces the act of listening to the equivalent of eating a bag of potato chips. Listening is not a passive activity. It requires imaginative participation (and, in more civilized cultures than ours, dancing). Listening is a form of musicianship–the most important kind, since it’s a prerequisite for all of the others. Marc Sabatella says:

For the purposes of this primer, we are all musicians. Some of us may be performing musicians, while most of us are listening musicians. Most of the former are also the latter.

I mean, you would hope. Thomas Regelski goes further. He challenges the assumption that the deepest understanding of music comes from performing or composing it. Performing and composing are valuable and delightful experiences, and they can inform a rich musical understanding. But they aren’t the only way to access meaning at the deepest level. Listening alone can do it. Some of the best music scholarship I’ve read comes from “non-musicians.” Listening is a creative act. You couldn’t come up with a less apt term for it than “consumption.” Please stop saying it.

Space Oddity: from song to track

If you’ve ever wondered what it is that a music producer does exactly, David Bowie’s “Space Oddity” is a crystal clear example. To put it in a nutshell, a producer turns this:

Into this:

It’s also interesting to listen to the first version of the commercial recording, which is better than the demo, but still nowhere near as majestic as the final version. The Austin Powers flute solo is especially silly.

Should we even consider these three recordings to be the same piece of music? On the one hand, they’re all the same melody and chords and lyrics. On the other hand, if the song only existed in its demo form, or in the awkward Austin Powers version, it would never have made the impact that it did. Some of the impact of the final version lies in better recording techniques and equipment, but it’s more than that. The music takes on a different meaning in the final version. It’s bigger, trippier, punchier, tighter, more cinematic, more transporting, and in general about a thousand times more effective.

The producer’s job is to marshal the efforts of songwriters, arrangers, performers and engineers to create a good-sounding recording. (The producer might also be a songwriter, arranger, performer, and/or engineer.) Producers are to songs what directors are to movies, or showrunners are to television.

When you talk about a piece of recorded music, you’re really talking about three different things:

  1. The underlying composition, the part that can be represented on paper. Albin Zak calls this “the song.”
  2. The performance of the song.
  3. The finished recording, after overdubbing, mixing, editing, effects, and all the rest. Albin Zak calls this “the track.”

I had always assumed that Tony Visconti produced “Space Oddity,” since he produced a ton of other Bowie classics. As it turns out, though, Visconti was underwhelmed by the song, so he delegated it to his assistant, Gus Dudgeon. So what is it that Gus Dudgeon did precisely? First let’s separate out what he didn’t do.

You can hear from the demo that the chords, melody and lyrics were all in place before Bowie walked into the studio. They’re the parts reproduced by the subway busker I heard singing “Space Oddity” this morning. The demo includes a vocal arrangement that’s similar to the final one, aside from some minor phrasing changes. The acoustic guitar and Stylophone are in place as well. (I had always thought it was an oboe, but no, that droning sound is a low-tech synth.)

Gus Dudgeon took a song and a partial arrangement, and turned it into a track. He oversaw the addition of electric guitar, bass, drums, strings, woodwinds, and keyboards. He coached Bowie and the various studio musicians through their performances, selected the takes, and decided on effects like echoes and reverb. He supervised the mixing, which not only sets the relative loudness of the various sounds, but also affects their perceived location and significance. In short, he designed the actual sounds that you hear.

If you want to dive deep into the track, you’re in luck, because Bowie officially released the multitrack stems. Some particular points of interest:

  • The bassist, Herbie Flowers, was a rookie. The “Space Oddity” session was his first. He later went on to create the staggeringly great dual bass part in Lou Reed’s “Walk On The Wild Side.”
  • The strings were arranged and conducted by the multifaceted Paul Buckmaster, who a few years later would work with Miles Davis on the conception of On The Corner. Buckmaster’s cello harmonics contribute significantly to the psychedelic atmosphere–listen to the end of the stem labeled “Extras 1.”
  • The live strings are supplemented by Mellotron, played by future Yes keyboardist Rick Wakeman, he of the flamboyant gold cape.
  • Tony Visconti plays some flute and unspecified woodwinds, including the distinctive saxophone run that leads into the instrumental sections.

You can read a detailed analysis of the recording on the excellent Bowiesongs blog.

The big difference between the sixties and the present is that the track has assumed ever-greater importance relative to the song and the performance. In the age of MIDI and digital audio editing, live performance has become a totally optional component of music. The song is increasingly inseparable from the sounds used to realize it, especially in synth-heavy music like hip-hop and EDM. This shift gives the producer ever-greater importance in the creative process. There is really no such thing as a “demo” anymore, since anyone with a computer can produce finished-sounding tracks in their bedroom. If David Bowie were a kid now, he’d put together “Space Oddity” in GarageBand or FL Studio, with a lavish soundscape part of the conception from the beginning.

I want my students to understand that the words “producer” and “musician” are becoming synonymous. I want them to know that they can no longer focus solely on composition or performance and wait for someone else to craft a track around them. The techniques used to make “Space Oddity” were esoteric and expensive to realize at the time. Now, they’re easily within reach. But while the technology is more accessible, you still have to have the ideas. This is why it’s so valuable to study great producers like Tony Visconti and Gus Dudgeon: they’re a goldmine of sonic inspiration.

See also: a broader appreciation of Bowie.

A DIY video about DIY recording

For the benefit of Play With Your Music participants and anyone else we end up teaching basic audio production to, MusEDLab intern Robin Chakrabarti and I created this video on recording audio in less-than-ideal environments.

This video is itself quite a DIY production, shot and edited in less than twenty-four hours, with minimal discussion beforehand and zero rehearsal. Robin ran the camera, framed and planned shots, and did the editing as well. We were operating from a loose script, but the details of the video ended up being substantially improvised as we reacted to the room. For example, we discovered that the room opened onto a loud air conditioning unit that could be somewhat quieted by drawing a curtain. That became one of the more informative parts of the video. Also, while we had planned to do a shot in the bathroom to talk about its natural reverb, we discovered that the hallway had fairly interesting reverb of its own, which inspired a useful segment about standing waves.

Maybe the best improv moment came when someone inadvertently burst into the room where we were shooting. It could have been a ruined take, but we salvaged it by using it to address the idea that it’s hard to cordon off non-studio spaces to get the isolation you need.

Improvisation is such a valuable life skill. We shouldn’t make every kid learn how to read music notation, with improvisation as an optional side topic. We should make sure that everyone knows how to improvise, and then if people want to go on and learn to read, great.

The great music interface metaphor shift

I’m working on a long paper right now with my colleague at Montclair State University, Adam Bell. The premise is this: In the past, metaphors came from hardware, which software emulated. In the future, metaphors will come from software, which hardware will emulate.

The first generation of digital audio workstations has taken its metaphors from multitrack tape, the mixing desk, keyboards, analog synths, printed scores, and so on. Even the purely digital audio waveforms and MIDI clips behave like segments of tape. Sometimes the metaphors are graphically abstracted, as they are in Pro Tools. Sometimes the graphics are more literal, as in Logic. Propellerhead Reason is the most skeuomorphic software of them all. This image from the Propellerhead web site makes the intent of the designers crystal clear; the original analog synths dominate the image.

Reason with its inspiration

In Ableton Live, by contrast, hardware follows software. The metaphor behind Ableton’s Session View is a spreadsheet. Many of the instruments and effects have no hardware predecessor.

Loops in session view

Controllers like the APC and Push are designed to emulate Session View.

Ableton Push

Another software-centric user interface can be found in iZotope’s Iris. It enables you to selectively filter a sample by using Photoshop-like selection tools on a Fourier transform visualization.

iZotope Iris
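Iris itself is closed-source, but the core idea (treat the spectrogram like an image and mute everything outside your selection) is easy to sketch. Here’s a rough Python analogue using SciPy’s STFT; the rectangular selection is a crude stand-in for Iris’s lasso and brush tools, and the function name is mine:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_select(audio, sr, t_range, f_range, nperseg=2048):
    """Keep only the time/frequency rectangle (t_range in seconds,
    f_range in Hz) and silence the rest -- a crude stand-in for
    drawing a selection box on a spectrogram."""
    freqs, times, Z = stft(audio, fs=sr, nperseg=nperseg)
    mask = np.zeros_like(Z, dtype=bool)
    f_idx = (freqs >= f_range[0]) & (freqs <= f_range[1])
    t_idx = (times >= t_range[0]) & (times <= t_range[1])
    mask[np.ix_(f_idx, t_idx)] = True
    _, filtered = istft(np.where(mask, Z, 0), fs=sr, nperseg=nperseg)
    return filtered

# Example: a three-second mix of two test tones at 44.1 kHz
sr = 44100
t = np.linspace(0, 3, 3 * sr, endpoint=False)
audio = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 2000 * t)

# Keep only the low tone, and only during the middle second
low_mid = spectral_select(audio, sr, t_range=(1.0, 2.0), f_range=(0, 500))
```

Real spectral editors do much fancier selection and resynthesis than this, but the mask-then-invert shape of the operation is the same.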

Music is slow to embrace the idea of hardware designed to fit software rather than the other way around. Video games have followed this paradigm for decades. While there are some specialized controllers emulating car dashboards or guns or musical instruments, most game controllers are highly abstracted collections of buttons and knobs and motion sensors.

I was born in 1975, and I’m part of the last age cohort to grow up using analog tape. The kids now are likely to have never even seen a tape recorder. Hardware metaphors are only useful to people who are familiar with the hardware. Novel software metaphors take time to learn, especially if they stand for novel concepts. I’m looking forward to seeing what metaphors we dream up in the future.

Remix as compositional critique

This month I’ve been teaching music production and composition as part of NYU’s IMPACT program. A participant named Michelle asked me to critique some of her original compositions. I immediately said yes, and then immediately wondered how I was actually going to do it. I always want to evaluate music on its own terms, and to do that, I need to know what the terms are. I barely know Michelle. I’ve heard her play a little classical piano and know that she’s quite good, but beyond that, I don’t know her musical culture or intentions or style. Furthermore, she’s from China, and her English is limited.

I asked Michelle to email me audio files, and also MIDI files if she had them. Then I had an epiphany: I could just remix her MIDIs, and give my critique totally non-verbally.

Remix as compositional critique

Michelle sent me three MIDI files that she had created with Cubase, and I imported them into Ableton. The first two pieces sounded like Chinese folk music arranged in a western pop-classical style, with a lot of major pentatonic scales. This is very far away from my native musical territory, and I didn’t want to challenge Michelle’s melodic or harmonic choices. Instead, I decided to start by replacing her instrument sounds with hipper ones. Cubase has reasonably good built-in sounds, but sampled orchestral instruments played via MIDI are always going to sound goofy. Unless your work is going to be performed by humans, it makes more sense to use synths that sound their best in a robotic context.

I took the most liberty with Michelle’s drum patterns, which I replaced with harder, funkier beats. Classical musicians don’t get a lot of exposure to Afrocentric rhythm. Symphonic percussion is mostly a tasteful background element, and the classical tribe tends to treat all drums that way. For the pop idiom, you want a strong beat in the foreground.

Michelle’s third track had more of a jazz-funk vibe, and now we were speaking my language. Once again, I replaced the orchestra sounds with groovy synths. I also replaced the entire complex percussion arrangement with a single sampled breakbeat. Then I dove into the parts to make them more idiomatic. I get the sense that conservatory students in Shanghai aren’t listening to a lot of James Brown. Michelle had written an intricately contrapuntal bassline, which was full of good ideas, but was way too linear and eventful to suit the style. I isolated a few nice hooks and looped them. The track started feeling a lot tighter and funkier, so I did some similar looping and simplification of horn parts. The goal was to keep Michelle’s very hip melodic ideas intact, but to present them in a more economical setting.

My remix chops are well honed through continual practice, and I think I was pretty successful in my interpretations of Michelle’s tracks. She agreed, and the many exclamation points in her email indicated her delight at hearing her music in this new light. That felt good.

Upon reflection, I’m realizing that all of my remixes have been a kind of compositional critique: “This part right here is really fresh, but have you considered putting this kind of beat underneath it? And what if we skip this part, and slow the whole thing down a little? How about we change the chords over here, and put a new ending on?” Usually I’m remixing the work of strangers, so the conversation is indirect, but it’s still taking place inside my head.

The remix technique solves a problem that’s bothered me for my entire music teaching life: how do you evaluate someone else’s creative work? There is no objective standard for judging the quality of music. All evaluation is a statement of taste. But as a teacher, you still want to make judgments. How do you do that when you’re just expressing your own arbitrary preferences?

One method for critiquing compositions is to harden your aesthetic whims into a dogmatic set of rules, and then apply them to everyone else. I studied jazz as an undergrad with Andy Jaffe. As far as Andy is concerned, all music aspires to the melodies of Duke Ellington, the rhythms of Horace Silver and the harmonies of John Coltrane. Fair enough, but my own tastes aren’t so tightly defined.

I like the remix idea because it isn’t evaluation at all. It’s a way of entering a conversation about alternative musical choices. If I remix your tune, you might feel like my version is an improvement, that it gets at what you were intending to say better than you knew how to say it. That’s the reaction that Michelle gave me, and it’s naturally the one that I want. Of course, you might also feel like I missed the point of your idea, that my version sounds awful. Fair enough. Neither of us is wrong. The beauty of digital audio is that there doesn’t need to be a last word; music can be rearranged and remixed indefinitely.

Update: a guy on Twitter had a brilliant suggestion: do the remix critique during class, so students can see your process, make suggestions, ask questions. Other people have asked me, “Wouldn’t remixing every single student composition take a lot of time?” Yes, I guess it would, but if you do it during class, that addresses the issue nicely.

How to write a pop song

My students are currently hard at work writing pop songs, many of them for the first time. For their benefit, and for yours, I thought I’d write out a beginner’s guide to contemporary songwriting. First, some points of clarification:

  1. This post only talks about the instrumental portion of the song, known as the track. I don’t deal with vocal parts or lyric writing here.
  2. This is not a guide to writing a great pop song. It’s a guide to writing an adequate one. Your sense of what makes a song good will probably differ from mine, whereas most of us can agree on what makes a song adequate. To make a good song, you’ll probably need to pump out a bunch of bad ones first to get the hang of the process.
  3. This is not a guide to writing a hit pop song. I have no idea how to do that. If you’re aiming for the charts, I refer you to the wise words of the KLF.
  4. You’ll notice that I seem to be talking a lot here about production, and that I never mention actual writing. This is because in 2014, songwriting and production are the same creative act. There is no such thing as a “demo” anymore. The world expects your song to sound finished. Also, most of the creativity in contemporary pop styles lies in rhythm, timbre and arrangement. Complex chord progressions and intricate melodies are neither necessary nor even desirable. It’s all in the beats and grooves.

To make a track, you’ll need a digital audio workstation (DAW) and a loop library. I’ll be using GarageBand, but you can use the same methods in Ableton Live, Logic, Reason, Pro Tools, etc. I produced this track for illustration purposes, and will be referring to it throughout the post:

Step one: gather loops

Put together four or eight bars’ worth of loops that all sound good together. Feel free to use the loops that come with your software; they’re probably a fine starting point. You can also generate your loops by recording instruments and singing, or by sequencing MIDI, or by sampling existing songs. Even if you aren’t working in an electronic medium, you can still gather loops: guitar parts, keyboard parts, bass riffs, drum patterns. Think of this set of loops as the highest-energy part of your song, the last chorus or what have you.

For my example track, I exclusively used GarageBand’s factory loops, which are mostly great if you tweak them a little. I selected a hip-hop beat, some congas, a shaker, a synth bass, some synth chords, and a string section melody. All of these loops are audio samples, except for the synth chord part, which is a MIDI sequence. I customized the synth part so that instead of playing the same chord four times, it makes a little progression that fits the bassline: I – I – bVI – bVII.

loops
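If it helps to see that tweak concretely, here’s the I – I – bVI – bVII progression spelled out as MIDI note numbers in Python. I’m illustrating in C for simplicity, and the triad helper is mine, not anything GarageBand gives you:

```python
# I - I - bVI - bVII, one chord per bar, as MIDI note numbers.
# Illustrated in C (middle C = 60); transpose to taste.
def triad(root, quality="major"):
    """Build a simple close-voiced triad on a MIDI root note."""
    third = 4 if quality == "major" else 3
    return [root, root + third, root + 7]

C, A_FLAT, B_FLAT = 60, 56, 58
progression = [triad(C), triad(C), triad(A_FLAT), triad(B_FLAT)]
# [[60, 64, 67], [60, 64, 67], [56, 60, 63], [58, 62, 65]]
```

Each inner list is one bar of the synth chord loop; in the DAW you’d just draw these into the piano roll.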

Step two: duplicate your loops a bunch of times

I copied my set of loops fifteen times, so the whole tune is 128 bars long. It doesn’t matter at this point exactly how many times you copy everything, so long as you have three or four minutes worth of loops to work with. You can always copy and paste more track if you need to. GarageBand users: note that by default, the song length is set ridiculously short. You’ll need to drag the right edge of your song to give yourself enough room.

16 copies

Step three: create structure via selective deletion

This is the hard part, and it’s where you do the most actual “songwriting.” Remember how I said that your set of loops was going to be the highest-energy part of the song? You’re going to create all of the other sections by removing stuff. Different subsets of your loop collection will form your various sections: intro, verses, choruses, breakdown, outtro, and so on. These sections should probably be four, eight, twelve or sixteen bars long.

Here’s the structure I came up with on my first pass:

song

I made a sixteen-bar intro with the synth chords entering first, then the percussion, then the hip-hop drums. The entrance of the bass is verse one. The entrance of the strings is chorus one. For verse two, everything drops out except the drums, congas and bass. Chorus two is twice the length of chorus one, with the keyboard chords out for the first half. Then there’s a breakdown, eight bars of just the bass, and another eight of the bass and drums. Next, there are three more choruses, the first minus the keyboard chords again, the next two with everything (my original loop collection). Finally, there’s a long outtro, with parts exiting every four or eight bars.
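If it helps to think of “structure via selective deletion” as data, here’s one way to encode an arrangement like that in Python. Each section is just a bar count plus the subset of loops left standing; the loop names and the bar counts for the sections I didn’t spell out above are my own guesses:

```python
LOOPS = {"drums", "congas", "shaker", "bass", "synth_chords", "strings"}

# (section, bars, loops that survive the deleting)
ARRANGEMENT = [
    ("intro",        16, {"synth_chords", "congas", "shaker", "drums"}),
    ("verse 1",       8, LOOPS - {"strings"}),
    ("chorus 1",      8, LOOPS),
    ("verse 2",       8, {"drums", "congas", "bass"}),
    ("chorus 2",     16, LOOPS),              # chords sit out the first half
    ("breakdown",    16, {"bass", "drums"}),  # bass alone, then bass and drums
    ("chorus 3",      8, LOOPS - {"synth_chords"}),
    ("choruses 4-5", 16, LOOPS),
    ("outtro",       32, {"strings"}),        # parts exit every four or eight bars
]

total_bars = sum(bars for _, bars, _ in ARRANGEMENT)
print(total_bars)  # 128, matching the sixteen copies of the eight-bar loop set
```

Once the structure is data, it’s easy to audit: you can see at a glance which loops carry each section and where the energy is supposed to drop.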

Even experienced songwriters find structure difficult. I certainly do. Building your structure will likely require a lot of trial and error. For inspiration, I recommend analyzing the structure of songs you like, and imitating them. Here’s my collection of particularly interesting song structures. My main piece of advice here is to keep things repetitive. If the groove is happening, people will happily listen to it for three or four minutes with minimal variation.

Trained musicians frequently feel anxious that their song isn’t “interesting” enough, and work hard to pack it with surprises. That’s the wrong idea. Let your grooves breathe. Let the listener get comfortable. This is pop music, it should be gratifying on the first listen. If you feel like your song won’t work without all kinds of intricate musical drama, you should probably just find a more happening set of loops.

Step four: listen and iterate

After leaving my song alone for a couple of days, some shortcomings leaped out at me. The energy was building and dissipating in an awkward, unsatisfying way, and the string part was too repetitive to carry the whole melodic foreground. I decided to rebuild the structure from scratch. I also added another loop, a simple guitar riff. I then cut both the string and guitar parts in half, so the front half of the string loop calls, and the back half of the guitar loop answers. This worked hugely better. Here’s the finished product, the one you hear above:

final song

My final structure goes as follows: the intro is synth chords and guitar, quickly joined by the percussion, then the drum loop. Verse one adds the bass. Chorus one adds the strings, so now we’re at full power. Verse two is a dramatic drop in energy, just the conga and strings, joined halfway through by the drums. Chorus two adds the bass and guitar back in. The breakdown section is eight bars of drums and bass, then eight more bars adding in the strings and percussion. The drums and percussion drop out for a bar right at the end of the section to create some punctuation. Verse three is everything but the synth chords. Choruses three and four are everything. The outtro is a staggered series of exits, rhythm section first, until the guitar and strings are left alone.

So there you have it. Once you’ve committed to your musical ideas, let your song sit for a few days and then go back and listen to it with an ear for mix and space. Try some effects, if you haven’t yet. Reverb and echo/delay always sound cool. Chances are your mix is going to be weak. My students almost always need to turn up their drums and turn down their melodic instruments. Try to push things to completion, but don’t make yourself crazy. Get your track to a place where it doesn’t totally embarrass you, put it on the web, and go start another one.

If reading this inspires you to make a track, please put a link to it in the comments, I’d love to hear it.

Composing for controllerism

My first set of attempts at controllerism used samples of the Beatles and Michael Jackson. For the next round, I thought it would be good to try to create something completely from scratch. So this is my first piece of music created specifically with controllerism in mind.

The APC40 has forty trigger pads. You can use more than forty loops, but it’s a pain. I created eight loops that fit well together, and then made four additional variations of each one. That gave me a set of loops that fit tidily onto the APC40 grid. The instruments are 808 drum machine, latin percussion, wood blocks, blown tube, synth bass, bells, arpeggiated synth and an ambient pad.

40 loops
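Here’s the grid bookkeeping as a quick Python sketch: eight instruments, each with its original loop plus four variations, which fills the APC40’s eight-by-five pad grid exactly. The clip names are just my labels:

```python
INSTRUMENTS = ["808", "latin_perc", "woodblocks", "blown_tube",
               "synth_bass", "bells", "arp_synth", "pad"]
VARIATIONS = 5  # the original loop plus four variations of each

# One clip per pad: columns are instruments (tracks), rows are scenes.
clip_grid = [[f"{inst}_v{row + 1}" for inst in INSTRUMENTS]
             for row in range(VARIATIONS)]

print(len(INSTRUMENTS) * VARIATIONS)  # 40, one loop per trigger pad
```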

I tried to design my loops so that all of them would be mutually musically compatible. I didn’t systematically test them, because that would have required trying thousands of combinations. Instead, I decided to randomly generate a song using Ableton’s Follow Actions to see if anything obviously unmusical leapt out at me. The first attempt was not a success — hearing all eight loops all the time was too much information. I needed a way to introduce some space. Eventually I hit on the idea of adding empty clips to each column that would be randomly sprinkled in.

40 loops plus blanks
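Ableton’s Follow Actions handle the randomization inside Live, but the logic is simple enough to approximate outside it. Here’s a hedged Python sketch of the same idea: each track repeatedly picks one of its clips at random, and the empty clips I added give every track a chance to drop out for a while. Real Follow Actions fire per clip rather than per section, so treat this as the gist, not the mechanism:

```python
import random

INSTRUMENTS = ["808", "latin_perc", "woodblocks", "blown_tube",
               "synth_bass", "bells", "arp_synth", "pad"]
CLIP_VARIATIONS = 5  # the loop variations in each track's column
BLANKS = 3           # empty clips, so a track sometimes falls silent

def random_section():
    """Pick one clip (or a blank) per track, roughly like a Follow
    Action set to jump to any clip in the same column."""
    section = {}
    for inst in INSTRUMENTS:
        choices = [f"{inst}_v{i + 1}" for i in range(CLIP_VARIATIONS)]
        choices += [None] * BLANKS  # None means silence for this track
        section[inst] = random.choice(choices)
    return section

# Let the song write itself: thirty-two sections of random scenes.
generated_song = [random_section() for _ in range(32)]
```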

It is exceptionally relaxing watching a song write itself while you sit there drinking coffee.

The computer plays itself

The result was a mix of pleasing and not-so-pleasing. I edited the random sequence into a more coherent shape:

Edited randomness

Even with my editing, the result was not too hot. But it was useful to have something to react against. Finally, with all the prep behind me, it was time to play all this stuff live on the APC. Here’s the very first take of improv I did.

raw improv

I let it sit for a couple of days while I was preoccupied with other things, and when I finally listened back, I was pleasantly surprised. Here it is, minimally edited:

The piece has a coherent shape, with lifts and lulls, peaks and valleys. It’s quite different from the way I’d structure a piece of music by my usual method of drawing loops on the screen. It’s less symmetrical and orderly, but it makes an intuitive sense of its own. I’ve been looking for a way to reconcile my love of jazz with my love of electronic dance music for many years now. I think I’ve finally found it. For my next controllerist opus, I’m going to blend samples and my own MIDI loops, and have more odd-length loops. And maybe I’ll play these things for an audience too.