Music education at the grownups’ table

I was asked by Alison Armstrong to comment on this Time magazine op-ed by Todd Stoll, the vice president of education at Jazz at Lincoln Center. Before I do, let me give some context: Todd Stoll is a friend and colleague of Wynton Marsalis, and he shares some of Wynton’s ideas about music.

Wynton Marsalis

Wynton Marsalis has some strong views about jazz, its historical significance, and its present condition. He holds jazz to be “America’s classical music,” the highest achievement of our culture, and the sonic embodiment of our best democratic ideals. The man himself is a brilliant practitioner of the art form. I’ve had the pleasure of hearing him play live several times, and he’s always a riveting improviser. However, his definition of the word “jazz” is a narrow one. For Wynton Marsalis, jazz history ends in about 1965, right before Herbie Hancock traded in his grand piano for a Fender Rhodes. All the developments after that (the introduction of funk, rock, pop, electronic music, and hip-hop) are bastardizations of the music.

Wynton Marsalis’ public stature has given his philosophy enormous weight, which has been a mixed bag for jazz culture. On the one hand, he has been a key force in getting jazz the institutional recognition that it was denied for too many years. On the other hand, the form of jazz that Wynton advocates for is a museum piece, a time capsule of the middle part of the twentieth century. When jazz gained the legitimacy of “classical music,” it also became burdened with classical music’s stuffiness, pedantry, and disconnection from the broader culture. As the more innovative jazz artists try to keep pace with the world, they can find themselves more hindered by Wynton than helped.

So, with all that in mind, let’s see what Todd Stoll has to say about the state of music education in America.

No Child Left Behind, the largest attempt at education reform in our nation’s history, resulted in a massive surge in the testing of our kids and an increased focus in “STEM” (science, technology, engineering and math). While well-meaning, this legislation precipitated a gradual and massive decline of students participating in music and arts classes, as test prep and remedial classes took precedence over a broader liberal arts education, and music education was often reduced, cut, or relegated to after school.

Testing culture is a Bad Thing, no question there.

Taken on face value, Every Student Succeeds bodes well for music education and the National Association for Music Education, which spent thousands of hours lobbying on behalf of music teachers everywhere. The new act removes “adequate yearly progress” benchmarks and includes music and arts as part of its definition of a “well-rounded education.” It also refers to time spent teaching music and arts as “protected time.”

That is a Good Thing.

Music and arts educators now have some leverage for increased funding, professional development, equipment, staffing, prioritized scheduling of classes, and a more solid foothold when budgets get tight and cuts are being discussed. I can almost hear the discussions—”We can’t cut a core class now, can we?” In other words, music is finally at the grown-ups table with subjects like science, math, social studies and language arts.

Yes! Great. But how did music get sent to the kids’ table in the first place? How did we come to regard it as a luxury, or worse, a frivolity? How do we learn to value it more highly, so the next time that a rage for quantitative assessment sweeps the federal government, we won’t go through the same cycle all over again?

Now that we’re at the table, we need a national conversation to redefine the depth and quality of the content we teach in our music classes. We need a paradigm shift in how we define outcomes in our music students. And we need to go beyond the right notes, precise rhythms, clear diction and unified phrasing that have set the standard for the past century.

True. The standard music curriculum in America is very much stuck in the model of the nineteenth century European conservatory. There’s so much more we could be doing to awaken kids’ innate musicality.

We should define learning by a student’s intimate knowledge of composers or artists—their personal history, conception and the breadth and scope of their output.

Sure! This sounds good.

Students should know the social and cultural landscape of the era in which any piece was written or recorded, and the circumstances that had an influence.

Stoll is referring here to the outdated notion of “absolute music,” the idea that the best music is “pure,” that it transcends the grubby world of politics and economics and fashion. We definitely want kids to know that music comes from a particular time and place, and that it responds to particular forces and pressures.

We should teach the triumphant mythology of our greatest artists—from Louis Armstrong to Leonard Bernstein, from Marian Anderson to Mary Lou Williams, and others.

Sure, students should know who black and female and Jewish musicians are. Apparently, however, our greatest artists all did their work before 1965.

Students should understand the style and conception of a composer or artist—what are the aesthetics of a specific piece, the notes that have meaning? They should know the influences and inputs that went into the creation of a piece and how to identify those.

Very good idea. I’m a strong believer in the evolutionary biology model of music history. Rather than doing a chronological plod through the Great Men (and now Women), I like the idea of picking a musical trope and tracing out its family tree.

There should be discussion of the definitive recording of a piece, and students should make qualitative judgments on such against a rubric defined by the teacher that easily and broadly gives definition and shape to any genre.

The Wynton Marsalis version of jazz has turned out to be a good fit for academic culture, because there are Canonical Works by Great Masters. In jazz, the canonical work is a recording rather than a score, but the scholarly approach can be the same. This model is problematic for an improvised, largely aural, and dance-oriented tradition like jazz, to say the least, but it is progress to be talking about recording as an art form unto itself.

Selected pieces should illuminate the general concepts of any genre—the 6/8 march, the blues, a lyrical art song, counterpoint, AABA form, or call and response—and students should be able to understand these and know their precise location within a score and what these concepts represent.

Okay. Why? I mean, these are all fine things to learn and teach. But they only become meaningful through use. A kid might rightly question whether their knowledge of lyrical art song or AABA form has anything to do with anything. Once a kid tries writing a song, these ideas suddenly become a lot more pertinent.

We should embrace the American arts as a full constituent in our programs—not the pop-tinged sounds of The Voice or Glee but our music: blues, folk, spirituals, jazz, hymns, country and bluegrass, the styles that created the fabric of our culture and concert works by composers who embraced them.

This is where Stoll and I part company. Classical pedagogues have earned a bad reputation for insisting that kids like the wrong music. Stoll is committing the same sin here. Remember, kids: Our Music is not your music. You are supposed to like blues, folk, spirituals, jazz, hymns, country and bluegrass. Those are the styles that created the fabric of our culture. And they inspired concert works by composers, so that really makes them legit. Music that was popular in your lifetime, or your parents’ lifetime, is suspect.

Students should learn that the written score is a starting point. It’s the entry into a world of discovery and aspiration that can transform their lives; it’s deeper than notes. We should help them realize that a lifetime of discovery in music is a worthwhile and enjoyable endeavor.

Score-centrism is a bad look from anyone, and it’s especially disappointing from a jazz guy. What does this statement mean to a kid immersed in rock or hip-hop, where nothing is written down? The score should be presented as what it is: one starting point among many. You can have a lifetime of discovery in music without ever reading a note. I believe that notation is worth teaching, but it’s worth teaching as a means to an end, not as an end unto itself.

These lessons will require new skills, extra work outside of class, more research, and perhaps new training standards for teachers. But, it’s not an insurmountable task, and it is vital, given the current strife of our national discourse.

If we can agree on the definitive recording of West Side Story, we can bridge the partisan divide!

Our arts can help us define who we are and tell us who we can be. They can bind the wounds of racism, compensate for the scourge of socio-economic disadvantage, and inoculate a new generation against the fear of not knowing and understanding those who are different from themselves.

I want this all to be true. But there is some magical thinking at work here, and magical thinking is not going to help us when budgets get cut. I want the kids to have the opportunity to study Leonard Bernstein and Marian Anderson. I’d happily toss standardized testing overboard to free up the time and resources. I believe that doing so will result in better academic outcomes. And I believe that music does make better citizens. But how does it do that? Saying that we need school music in order to instill Reverence for the Great Masters is weak sauce, even if the list of Great Masters now has some women and people of color on it. We need to be able to articulate specifically why music is of value to kids.

I believe that we have a good answer already: the point of music education should be to build emotionally stronger people. Done right, music promotes flow, deep attention, social bonding, and resilience. As Steve Dillon puts it, music is “a powerful weapon against depression.” Kids who are centered, focused, and able to regulate their moods are going to be better students, better citizens, and (most importantly!) happier humans. That is why it’s worth using finite school resources to teach music.

The question we need to ask is: what methods of music education best support emotional development in kids? I believe that the best approach is to treat every kid as a latent musician, and to help them develop as such, to make them producers rather than consumers. If a kid’s musicality can be nurtured best through studying jazz, great! That approach worked great for me, because my innermost musical self turns out to have a lot of resonance with Ellington and Coltrane. If a kid finds meaning in Beethoven, also great. But if the key to a particular kid’s lock is hip-hop or trance or country, music education should be equipped to support them too. Pointing young people to music they might otherwise miss out on is a good idea. Stifling them under the weight of a canon is not.

Please stop saying “consuming music”

In the wake of David Bowie’s death, I went on iTunes and bought a couple of his tracks, including the majestic “Blackstar.” In economic terms, I “consumed” this song. I am a “music consumer.” I made an emotional connection to a dying man who has been a creative inspiration of mine for more than twenty years, via “consumption.” That does not feel like the right word, at all. When did we even start saying “music consumers”? Why did we start? It makes my skin crawl.

The Online Etymology Dictionary says that the verb “to consume” descends from Latin consumere, which means “to use up, eat, waste.” That last sense of the word speaks volumes about America, our values, and specifically, our pathological relationship with music.

The synonyms for “consume” listed in my computer’s thesaurus include: devour, ingest, swallow, gobble up, wolf down, guzzle, feast on, gulp down, polish off, dispose of, pig out on, swill, expend, deplete, exhaust, waste, squander, drain, dissipate, fritter away, destroy, demolish, lay waste, wipe out, annihilate, devastate, gut, ruin, wreck. None of these are words I want to apply to music.

I’m happy to spend money on music. I’m not happy to be a consumer of it. When I consume something, like electricity or food, then it’s gone, and can’t be used by anyone else. But having bought that David Bowie song from iTunes, I can listen to it endlessly, play it for other people, put it in playlists, mull it over when I’m not listening to it, sample it, remix it, mash it up with other songs.

What word should we use for buying songs from iTunes, or streaming them on Spotify, or otherwise spending money on them? (Or being advertised to around them?) Well, what’s wrong with “buying” or “streaming”? I’m happy to call myself a “music buyer” or “music streamer.” There’s no contradiction there between the economic activity and the creative one.

My colleagues in the music business world have developed a distressing habit of using “consuming” to describe any music listening experience. This is the sense of the word that I’m most committed to abolishing. Not only is it nonsensical, but it reduces the act of listening to the equivalent of eating a bag of potato chips. Listening is not a passive activity. It requires imaginative participation (and, in more civilized cultures than ours, dancing). Listening is a form of musicianship, the most important kind, since it’s a prerequisite for all of the others. Marc Sabatella says:

For the purposes of this primer, we are all musicians. Some of us may be performing musicians, while most of us are listening musicians. Most of the former are also the latter.

I mean, you would hope. Thomas Regelski goes further. He challenges the assumption that the deepest understanding of music comes from performing or composing it. Performing and composing are valuable and delightful experiences, and they can inform a rich musical understanding. But they aren’t the only way to access meaning at the deepest level. Listening alone can do it. Some of the best music scholarship I’ve read comes from “non-musicians.” Listening is a creative act. You couldn’t come up with a less apt term for it than “consumption.” Please stop saying it.

Space Oddity: from song to track

If you’ve ever wondered what it is that a music producer does exactly, David Bowie’s “Space Oddity” is a crystal clear example. To put it in a nutshell, a producer turns this:

Into this:

It’s also interesting to listen to the first version of the commercial recording, which is better than the demo, but still nowhere near as majestic as the final version. The Austin Powers flute solo is especially silly.

Should we even consider these three recordings to be the same piece of music? On the one hand, they’re all the same melody and chords and lyrics. On the other hand, if the song only existed in its demo form, or in the awkward Austin Powers version, it would never have made the impact that it did. Some of the impact of the final version lies in better recording techniques and equipment, but it’s more than that. The music takes on a different meaning in the final version. It’s bigger, trippier, punchier, tighter, more cinematic, more transporting, and in general about a thousand times more effective.

The producer’s job is to marshal the efforts of songwriters, arrangers, performers and engineers to create a good-sounding recording. (The producer might also be a songwriter, arranger, performer, and/or engineer.) Producers are to songs what directors are to movies, or showrunners are to television.

When you’re thinking about a piece of recorded music, you’re really talking about three different things:

  1. The underlying composition, the part that can be represented on paper. Albin Zak calls this “the song.”
  2. The performance of the song.
  3. The finished recording, after overdubbing, mixing, editing, effects, and all the rest. Albin Zak calls this “the track.”

I had always assumed that Tony Visconti produced “Space Oddity,” since he produced a ton of other Bowie classics. As it turns out, though, Visconti was underwhelmed by the song, so he delegated it to his assistant, Gus Dudgeon. So what is it that Gus Dudgeon did precisely? First let’s separate out what he didn’t do.

You can hear from the demo that the chords, melody and lyrics were all in place before Bowie walked into the studio. They’re the parts reproduced by the subway busker I heard singing “Space Oddity” this morning. The demo includes a vocal arrangement that’s similar to the final one, aside from some minor phrasing changes. The acoustic guitar and Stylophone are in place as well. (I had always thought it was an oboe, but no, that droning sound is a low-tech synth.)

Gus Dudgeon took a song and a partial arrangement, and turned it into a track. He oversaw the addition of electric guitar, bass, drums, strings, woodwinds, and keyboards. He coached Bowie and the various studio musicians through their performances, selected the takes, and decided on effects like echoes and reverb. He supervised the mixing, which not only sets the relative loudness of the various sounds, but also affects their perceived location and significance. In short, he designed the actual sounds that you hear.

If you want to dive deep into the track, you’re in luck, because Bowie officially released the multitrack stems. Some particular points of interest:

  • The bassist, Herbie Flowers, was a rookie. The “Space Oddity” session was his first. He later went on to create the staggeringly great dual bass part in Lou Reed’s “Walk On The Wild Side.”
  • The strings were arranged and conducted by the multifaceted Paul Buckmaster, who a few years later would work with Miles Davis on the conception of On The Corner. Buckmaster’s cello harmonics contribute significantly to the psychedelic atmosphere; listen to the end of the stem labeled “Extras 1.”
  • The live strings are supplemented by Mellotron, played by future Yes keyboardist Rick Wakeman, he of the flamboyant gold cape.
  • Tony Visconti plays some flute and unspecified woodwinds, including the distinctive saxophone run that leads into the instrumental sections.

You can read a detailed analysis of the recording on the excellent Bowiesongs blog.

The big difference between the sixties and the present is that the track has assumed ever-greater importance relative to the song and the performance. In the age of MIDI and digital audio editing, live performance has become a totally optional component of music. The song is increasingly inseparable from the sounds used to realize it, especially in synth-heavy music like hip-hop and EDM. This shift gives the producer an ever-larger role in the creative process. There is really no such thing as a “demo” anymore, since anyone with a computer can produce finished-sounding tracks in their bedroom. If David Bowie were a kid now, he’d put together “Space Oddity” in GarageBand or FL Studio, with a lavish soundscape part of the conception from the beginning.

I want my students to understand that the words “producer” and “musician” are becoming synonymous. I want them to know that they can no longer focus solely on composition or performance and wait for someone else to craft a track around them. The techniques used to make “Space Oddity” were esoteric and expensive to realize at the time. Now, they’re easily within reach. But while the technology is more accessible, you still have to have the ideas. This is why it’s so valuable to study great producers like Tony Visconti and Gus Dudgeon: they’re a goldmine of sonic inspiration.

See also: a broader appreciation of Bowie.

A DIY video about DIY recording

For the benefit of Play With Your Music participants and anyone else we end up teaching basic audio production to, MusEDLab intern Robin Chakrabarti and I created this video on recording audio in less-than-ideal environments.

This video is itself quite a DIY production, shot and edited in less than twenty-four hours, with minimal discussion beforehand and zero rehearsal. Robin ran the camera, framed and planned the shots, and did the editing as well. We were operating from a loose script, but the details of the video ended up being substantially improvised as we reacted to the room. For example, we discovered that the room opened onto a loud air conditioning unit that could be somewhat quieted by drawing a curtain. That became one of the more informative parts of the video. And while we had planned to do a shot in the bathroom to talk about its natural reverb, we discovered that the hallway had fairly interesting reverb of its own, which inspired a useful segment about standing waves.

Maybe the best improv moment came when someone inadvertently burst into the room where we were shooting. It could have been a ruined take, but we salvaged it by using it to address the idea that it’s hard to cordon off non-studio spaces to get the isolation you need.

Improvisation is such a valuable life skill. We shouldn’t make every kid learn how to read music notation, with improvisation as an optional side topic. We should make sure that everyone knows how to improvise, and then if people want to go on and learn to read, great.

The great music interface metaphor shift

I’m working on a long paper right now with my colleague at Montclair State University, Adam Bell. The premise is this: In the past, metaphors came from hardware, which software emulated. In the future, metaphors will come from software, which hardware will emulate.

The first generation of digital audio workstations took its metaphors from multitrack tape, the mixing desk, keyboards, analog synths, printed scores, and so on. Even the purely digital audio waveforms and MIDI clips behave like segments of tape. Sometimes the metaphors are graphically abstracted, as they are in Pro Tools. Sometimes the graphics are more literal, as in Logic. Propellerhead Reason is the most skeuomorphic software of them all. This image from the Propellerhead web site makes the designers’ intent crystal clear: the original analog synths dominate the frame.

Reason with its inspiration

In Ableton Live, by contrast, hardware follows software. The metaphor behind Ableton’s Session View is a spreadsheet. Many of the instruments and effects have no hardware predecessor.

Loops in session view

Controllers like the APC and Push are designed to emulate Session View.

Ableton Push

Another software-centric user interface can be found in iZotope’s Iris. It enables you to selectively filter a sample by using Photoshop-like selection tools on a Fourier transform visualization.

iZotope Iris
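
For anyone curious about what that kind of spectral selection amounts to under the hood, here is a minimal sketch in Python with numpy and scipy: take a short-time Fourier transform, keep only a rectangular time-frequency region, and resynthesize. This is just an illustration of the concept, not Iris’s actual implementation; the file name, frequency band, and time window are invented.

```python
# Rough sketch of the idea behind spectral selection: take a short-time
# Fourier transform, keep only a rectangular time-frequency region, and
# resynthesize the audio. This is not how Iris is implemented; the file
# name, frequency band, and time window are invented, and a 16-bit mono
# WAV file is assumed.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

fs, audio = wavfile.read("sample.wav")
audio = audio.astype(np.float32) / 32768.0      # assume 16-bit PCM

# Forward short-time Fourier transform
f, t, Z = stft(audio, fs=fs, nperseg=2048)

# "Select" a rectangle in the spectrogram: 500-2000 Hz, 1.0-3.0 seconds
freq_mask = (f >= 500) & (f <= 2000)
time_mask = (t >= 1.0) & (t <= 3.0)
selection = np.outer(freq_mask, time_mask)

# Keep only the selected region (invert the mask to cut it out instead)
Z_filtered = np.where(selection, Z, 0)

# Resynthesize and save
_, out = istft(Z_filtered, fs=fs, nperseg=2048)
wavfile.write("sample_filtered.wav", fs, out.astype(np.float32))
```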

Music is slow to embrace the idea of hardware designed to fit software rather than the other way around. Video games have followed this paradigm for decades. While there are some specialized controllers emulating car dashboards or guns or musical instruments, most game controllers are highly abstracted collections of buttons and knobs and motion sensors.

I was born in 1975, and I’m part of the last age cohort to grow up using analog tape. The kids now are likely to have never even seen a tape recorder. Hardware metaphors are only useful to people who are familiar with the hardware. Novel software metaphors take time to learn, especially if they stand for novel concepts. I’m looking forward to seeing what metaphors we dream up in the future.

Remix as compositional critique

This month I’ve been teaching music production and composition as part of NYU’s IMPACT program. A participant named Michelle asked me to critique some of her original compositions. I immediately said yes, and then immediately wondered how I was actually going to do it. I always want to evaluate music on its own terms, and to do that, I need to know what the terms are. I barely know Michelle. I’ve heard her play a little classical piano and know that she’s quite good, but beyond that, I don’t know her musical culture or intentions or style. Furthermore, she’s from China, and her English is limited.

I asked Michelle to email me audio files, and also MIDI files if she had them. Then I had an epiphany: I could just remix her MIDIs, and give my critique totally non-verbally.


Michelle sent me three MIDI files that she had created with Cubase, and I imported them into Ableton. The first two pieces sounded like Chinese folk music arranged in a western pop-classical style, with a lot of major pentatonic scales. This is very far away from my native musical territory, and I didn’t want to challenge Michelle’s melodic or harmonic choices. Instead, I decided to start by replacing her instrument sounds with hipper ones. Cubase has reasonably good built-in sounds, but sampled orchestral instruments played via MIDI are always going to sound goofy. Unless your work is going to be performed by humans, it makes more sense to use synths that sound their best in a robotic context.

I took the most liberty with Michelle’s drum patterns, which I replaced with harder, funkier beats. Classical musicians don’t get a lot of exposure to Afrocentric rhythm. Symphonic percussion is mostly a tasteful background element, and the classical tribe tends to treat all drums that way. For the pop idiom, you want a strong beat in the foreground.

Michelle’s third track had more of a jazz-funk vibe, and now we were speaking my language. Once again, I replaced the orchestra sounds with groovy synths. I also replaced the entire complex percussion arrangement with a single sampled breakbeat. Then I dove into the parts to make them more idiomatic. I get the sense that conservatory students in Shanghai aren’t listening to a lot of James Brown. Michelle had written an intricately contrapuntal bassline, which was full of good ideas, but was way too linear and eventful to suit the style. I isolated a few nice hooks and looped them. The track started feeling a lot tighter and funkier, so I did some similar looping and simplification of horn parts. The goal was to keep Michelle’s very hip melodic ideas intact, but to present them in a more economical setting.
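
For the curious, here is roughly what those two moves, swapping an orchestral patch for a synth and looping a short bass hook, look like if you do them in code rather than in Ableton. This is a sketch using the pretty_midi library, not my actual workflow; the file name, track index, tempo, and bar boundaries are all hypothetical.

```python
# Sketch of the two remix moves described above, done in code instead of
# in Ableton: (1) swap an orchestral patch for a synth, and (2) isolate a
# short bass hook and loop it. The file name, instrument indices, tempo,
# and bar boundaries are all hypothetical.
import pretty_midi

pm = pretty_midi.PrettyMIDI("michelle_track3.mid")   # hypothetical file

# 1. Re-voice: change a General MIDI "String Ensemble 1" part (program 48)
#    to "Lead 2 (sawtooth)" (program 81) so it reads as a synth line.
for inst in pm.instruments:
    if inst.program == 48 and not inst.is_drum:
        inst.program = 81

# 2. Loop a hook: keep only bars 5-6 of the bass part and repeat them.
bass = pm.instruments[1]            # assume the bass is the second track
bar_len = 4 * 60.0 / 100.0          # 4 beats at an assumed 100 bpm
start, end = 4 * bar_len, 6 * bar_len
hook = [n for n in bass.notes if start <= n.start < end]

looped = []
for rep in range(8):                # repeat the two-bar hook eight times
    offset = rep * (end - start)
    for n in hook:
        looped.append(pretty_midi.Note(
            velocity=n.velocity, pitch=n.pitch,
            start=n.start - start + offset, end=n.end - start + offset))
bass.notes = looped

pm.write("michelle_track3_remix.mid")
```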

My remix chops are well honed through continual practice, and I think I was pretty successful in my interpretations of Michelle’s tracks. She agreed, and indicated her delight at hearing her music in this new light with many exclamation points in her email. That felt good.

Upon reflection, I’m realizing that all of my remixes have been a kind of compositional critique: “This part right here is really fresh, but have you considered putting this kind of beat underneath it? And what if we skip this part, and slow the whole thing down a little? How about we change the chords over here, and put a new ending on?” Usually I’m remixing the work of strangers, so the conversation is indirect, but it’s still taking place inside my head.

The remix technique solves a problem that’s bothered me for my entire music teaching life: how do you evaluate someone else’s creative work? There is no objective standard for judging the quality of music. All evaluation is a statement of taste. But as a teacher, you still want to make judgments. How do you do that when all you’re really expressing is a difference in arbitrary preferences?

One method for critiquing compositions is to harden your aesthetic whims into a dogmatic set of rules, and then apply them to everyone else. I studied jazz as an undergrad with Andy Jaffe. As far as Andy is concerned, all music aspires to the melodies of Duke Ellington, the rhythms of Horace Silver and the harmonies of John Coltrane. Fair enough, but my own tastes aren’t so tightly defined.

I like the remix idea because it isn’t evaluation at all. It’s a way of entering a conversation about alternative musical choices. If I remix your tune, you might feel like my version is an improvement, that it gets at what you were intending to say better than you knew how to say it. That’s the reaction that Michelle gave me, and it’s naturally the one that I want. Of course, you might also feel like I missed the point of your idea, that my version sounds awful. Fair enough. Neither of us is wrong. The beauty of digital audio is that there doesn’t need to be a last word; music can be rearranged and remixed indefinitely.

Update: a guy on Twitter had a brilliant suggestion: do the remix critique during class, so students can see your process, make suggestions, ask questions. Other people have asked me, “Wouldn’t remixing every single student composition take a lot of time?” Yes, I guess it would, but if you do it during class, that addresses the issue nicely.

How to write a pop song

My students are currently hard at work writing pop songs, many of them for the first time. For their benefit, and for yours, I thought I’d write out a beginner’s guide to contemporary songwriting. First, some points of clarification:

  1. This post only talks about the instrumental portion of the song, known as the track. I don’t deal with vocal parts or lyric writing here.
  2. This is not a guide to writing a great pop song. It’s a guide to writing an adequate one. Your sense of what makes a song good will probably differ from mine, whereas most of us can agree on what makes a song adequate. To make a good song, you’ll probably need to pump out a bunch of bad ones first to get the hang of the process.
  3. This is not a guide to writing a hit pop song. I have no idea how to do that. If you’re aiming for the charts, I refer you to the wise words of the KLF.
  4. You’ll notice that I seem to be talking a lot here about production, and that I never mention actual writing. This is because in 2014, songwriting and production are the same creative act. There is no such thing as a “demo” anymore. The world expects your song to sound finished. Also, most of the creativity in contemporary pop styles lies in rhythm, timbre and arrangement. Complex chord progressions and intricate melodies are neither necessary nor even desirable. It’s all in the beats and grooves.

To make a track, you’ll need a digital audio workstation (DAW) and a loop library. I’ll be using GarageBand, but you can use the same methods in Ableton Live, Logic, Reason, Pro Tools, etc. I produced this track for illustration purposes, and will be referring to it throughout the post:

Step one: gather loops

Put together four or eight bars worth of loops that all sound good together. Feel free to use the loops that come with your software; they’re probably a fine starting point. You can also generate your loops by recording instruments and singing, or by sequencing MIDI, or by sampling existing songs. Even if you aren’t working in an electronic medium, you can still gather loops: guitar parts, keyboard parts, bass riffs, drum patterns. Think of this set of loops as the highest-energy part of your song, the last chorus or what have you.

For my example track, I exclusively used GarageBand’s factory loops, which are mostly great if you tweak them a little. I selected a hip-hop beat, some congas, a shaker, a synth bass, some synth chords, and a string section melody. All of these loops are audio samples, except for the synth chord part, which is a MIDI sequence. I customized the synth part so that instead of playing the same chord four times, it makes a little progression that fits the bassline: I – I – bVI – bVII.

loops
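
If you would rather sequence that chord loop in code than click it in, here is a minimal sketch with the pretty_midi library. The key, tempo, and voicings are my own assumptions (they aren’t specified above); in C, the I – I – bVI – bVII progression comes out as C, C, Ab, Bb.

```python
# The synth chord loop as MIDI data: I - I - bVI - bVII, one chord per bar.
# The key, tempo, and voicings are assumptions; in C that progression is
# C, C, Ab, Bb.
import pretty_midi

TEMPO = 100                      # assumed tempo
BAR = 4 * 60.0 / TEMPO           # length of a 4/4 bar in seconds
CHORDS = [
    [60, 64, 67],                # C  (I)
    [60, 64, 67],                # C  (I)
    [56, 60, 63],                # Ab (bVI)
    [58, 62, 65],                # Bb (bVII)
]

pm = pretty_midi.PrettyMIDI(initial_tempo=TEMPO)
synth = pretty_midi.Instrument(program=81)   # GM Lead 2 (sawtooth)

for bar, chord in enumerate(CHORDS):
    for pitch in chord:
        synth.notes.append(pretty_midi.Note(
            velocity=90, pitch=pitch,
            start=bar * BAR, end=(bar + 1) * BAR))

pm.instruments.append(synth)
pm.write("chord_loop.mid")
```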

Step two: duplicate your loops a bunch of times

I copied my set of loops fifteen times, so the whole tune is 128 bars long. It doesn’t matter at this point exactly how many times you copy everything, so long as you have three or four minutes worth of loops to work with. You can always copy and paste more track if you need to. GarageBand users: note that by default, the song length is set ridiculously short. You’ll need to drag the right edge of your song to give yourself enough room.

16 copies

Step three: create structure via selective deletion

This is the hard part, and it’s where you do the most actual “songwriting.” Remember how I said that your set of loops was going to be the highest-energy part of the song? You’re going to create all of the other sections by removing stuff. Different subsets of your loop collection will form your various sections: intro, verses, choruses, breakdown, outro, and so on. These sections should probably be four, eight, twelve or sixteen bars long.
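
One way to make the selective-deletion idea concrete is to treat the arrangement as a grid: every section is just the full loop set with some loops muted. Here is a small sketch along those lines, using the loop names from the example track above; the sections and lengths shown are a simplified illustration, not the exact arrangement described below.

```python
# "Structure via selective deletion": start from the set of all loops
# (the full-energy chorus) and define every other section as a subset.
# The loop names come from the example track; the sections and lengths
# here are a simplified, hypothetical sketch.
ALL_LOOPS = {"drums", "congas", "shaker", "bass", "chords", "strings"}

SECTIONS = [
    ("intro",     8, {"chords", "congas"}),
    ("verse 1",   8, {"drums", "congas", "shaker", "bass", "chords"}),
    ("chorus 1",  8, ALL_LOOPS),
    ("verse 2",   8, {"drums", "congas", "bass"}),
    ("chorus 2", 16, ALL_LOOPS),
    ("breakdown", 8, {"bass"}),
    ("outro",    16, {"chords", "strings"}),
]

for name, bars, active in SECTIONS:
    muted = ALL_LOOPS - active
    print(f"{name:10s} {bars:3d} bars  mute: {', '.join(sorted(muted)) or 'nothing'}")
```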

Here’s the structure I came up with on my first pass:

song

I made a sixteen-bar intro with the synth chords entering first, then the percussion, then the hip-hop drums. The entrance of the bass is verse one. The entrance of the strings is chorus one. For verse two, everything drops out except the drums, congas and bass. Chorus two is twice the length of chorus one, with the keyboard chords out for the first half. Then there’s a breakdown: eight bars of just the bass, and another eight of the bass and drums. Next, there are three more choruses, the first minus the keyboard chords again, the next two with everything (my original loop collection). Finally, there’s a long outro, with parts exiting every four or eight bars.

Even experienced songwriters find structure difficult. I certainly do. Building your structure will likely require a lot of trial and error. For inspiration, I recommend analyzing the structure of songs you like, and imitating them. Here’s my collection of particularly interesting song structures. My main piece of advice here is to keep things repetitive. If the groove is happening, people will happily listen to it for three or four minutes with minimal variation.

Trained musicians frequently feel anxious that their song isn’t “interesting” enough, and work hard to pack it with surprises. That’s the wrong idea. Let your grooves breathe. Let the listener get comfortable. This is pop music, it should be gratifying on the first listen. If you feel like your song won’t work without all kinds of intricate musical drama, you should probably just find a more happening set of loops.

Step four: listen and iterate

After leaving my song alone for a couple of days, some shortcomings leaped out at me. The energy was building and dissipating in an awkward, unsatisfying way, and the string part was too repetitive to carry the whole melodic foreground. I decided to rebuild the structure from scratch. I also added another loop, a simple guitar riff. I then cut both the string and guitar parts in half, so the front half of the string loop calls, and the back half of the guitar loop answers. This worked hugely better. Here’s the finished product, the one you hear above:

final song

My final structure goes as follows: the intro is synth chords and guitar, quickly joined by the percussion, then the drum loop. Verse one adds the bass. Chorus one adds the strings, so now we’re at full power. Verse two is a dramatic drop in energy, just the conga and strings, joined halfway through by the drums. Chorus two adds the bass and guitar back in. The breakdown section is eight bars of drums and bass, then eight more bars adding in the strings and percussion. The drums and percussion drop out for a bar right at the end of the section to create some punctuation. Verse three is everything but the synth chords. Choruses three and four are everything. The outro is a staggered series of exits, rhythm section first, until the guitar and strings are left alone.

So there you have it. Once you’ve committed to your musical ideas, let your song sit for a few days and then go back and listen to it with an ear for mix and space. Try some effects, if you haven’t yet. Reverb and echo/delay always sound cool. Chances are your mix is going to be weak. My students almost always need to turn up their drums and turn down their melodic instruments. Try to push things to completion, but don’t make yourself crazy. Get your track to a place where it doesn’t totally embarrass you, put it on the web, and go start another one.

If reading this inspires you to make a track, please put a link to it in the comments, I’d love to hear it.

Composing for controllerism

My first set of attempts at controllerism used samples of the Beatles and Michael Jackson. For the next round, I thought it would be good to try to create something completely from scratch. So this is my first piece of music created specifically with controllerism in mind.

The APC40 has forty trigger pads. You can use more than forty loops, but it’s a pain. I created eight loops that fit well together, and then made four additional variations of each one. That gave me a set of loops that fit tidily onto the APC40 grid. The instruments are 808 drum machine, latin percussion, wood blocks, blown tube, synth bass, bells, arpeggiated synth and an ambient pad.

40 loops

I tried to design my loops so that all of them would be mutually musically compatible. I didn’t systematically test them, because that would have required trying thousands of combinations. Instead, I decided to randomly generate a song using Ableton’s Follow Actions to see if anything obviously unmusical leapt out at me. The first attempt was not a success — hearing all eight loops all the time was too much information. I needed a way to introduce some space. Eventually I hit on the idea of adding empty clips to each column that would be randomly sprinkled in.

40 loops plus blanks
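
To make the mechanics concrete, here is a little simulation of that random-arrangement setup in Python: eight columns, each holding five loop variations plus a few blank clips, with a random clip chosen per column on every pass, which is roughly what the Follow Actions were doing. The number of blanks, the pass length, and the sixteen passes are invented for illustration.

```python
# Simulation of the random-arrangement idea: each of the eight instrument
# columns holds five loop variations plus some blank clips, and on every
# pass each column jumps to a random clip (roughly what the Ableton Follow
# Actions were set up to do). The blank count and number of passes are
# assumptions made for this sketch.
import random

INSTRUMENTS = ["808 drums", "latin perc", "wood blocks", "blown tube",
               "synth bass", "bells", "arp synth", "ambient pad"]
VARIATIONS = 5        # five variations of each loop
BLANKS = 3            # assumed number of empty clips per column

def random_scene():
    """Pick one clip (or a blank) per instrument column."""
    scene = {}
    for name in INSTRUMENTS:
        slot = random.randrange(VARIATIONS + BLANKS)
        scene[name] = f"{name} v{slot + 1}" if slot < VARIATIONS else "--"
    return scene

# Generate a sixteen-pass "song" and print the arrangement grid
for p in range(16):
    scene = random_scene()
    print(f"pass {p + 1:2d}: " + " | ".join(scene[i] for i in INSTRUMENTS))
```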

It is exceptionally relaxing watching a song write itself while you sit there drinking coffee.

The computer plays itself

The result was a mix of pleasing and not-so-pleasing. I edited the random sequence into a more coherent shape:

Edited randomness

Even with my editing, the result was not too hot. But it was useful to have something to react against. Finally, with all the prep behind me, it was time to play all this stuff live on the APC. Here’s the very first take of improv I did.

raw improv

I let it sit for a couple of days while I was preoccupied with other things, and when I finally listened back, I was pleasantly surprised. Here it is, minimally edited:

The piece has a coherent shape, with lifts and lulls, peaks and valleys. It’s quite different from the way I’d structure a piece of music by my usual method of drawing loops on the screen. It’s less symmetrical and orderly, but it makes an intuitive sense of its own. I’ve been looking for a way to reconcile my love of jazz with my love of electronic dance music for many years now. I think I’ve finally found it. For my next controllerist opus, I’m going to blend samples and my own MIDI loops, and have more odd-length loops. And maybe I’ll play these things for an audience too.

How should we be teaching music technology?

This semester, I had the pleasure of leading an independent study for two music students at Montclair State University. One was Matt Skouras, a grad student who wants to become the music tech teacher in a high school. First of all, let me just say that if you’re hiring for such a position in New Jersey, you should go right ahead and hire Matt, he’s an exceptionally serious and well-versed musician and technologist. But the reason for this post is a question that Matt asked me after our last meeting yesterday: What should he be studying in order to teach music tech?

Matt is a good example of a would-be music tech teacher. He’s a classical trumpet player by training who has found little opportunity to use that skill after college. Wanting to keep his life as a musician moving forward, he started learning guitar, and, in his independent study with me, has been producing adventurous laptop music with Ableton Live. Matt is a broad-minded listener and a skilled audio engineer, but his exposure to non-classical music is limited in the way that’s typical of people who came up through the classical pipeline. It was at Matt’s request that I put together this electronic music tasting menu.

So. How to answer Matt’s question? How does one go about learning to teach music technology? My first impulse was to say, I don’t know, but if you find out, please tell me. The answer I gave him was less flip: that the field is still taking shape, and it evolves rapidly as the technology does. Music tech is a broad and sprawling subject, and you could approach it from any number of different philosophical and technical angles. I’ll list a few of them here.

Teach the technology itself

NYU’s Music Technology program takes this approach. You learn the foundations of audio engineering and signal processing from the ones and zeroes up. The production of actual music is a secondary concern. The one required electronic composition class is rooted squarely in the modernist Euroclassical tradition (though since I took it, pop music has made some inroads as well). If you want to learn about the culture, history and aesthetics of non-academic music, NYU’s Music Tech program is not the place to do it.

Use new tools to teach traditional repertoire and concepts

Most music teachers in the US are operating in the Euroclassical tonal tradition. Notation software and the DAW can make teaching and learning that material a lot more engaging. I have my NYU music ed students read Barb Freedman’s excellent book, Teaching Music Through Composition. If you want to teach the basics of Western common-practice era composition and theory in an interactive, creativity-oriented way, Barb’s method is a great one.

Teaching Music Through Composition by Barb Freedman

The big problem here is not in Barb’s execution, but rather the philosophical assumptions underlying it. I don’t believe that Euroclassical tradition is the right way to bring most kids into active music-making. Barb’s methods are battle-tested and effective, but I think we should be using those methods in the service of different musical ends.

Use technology as a transmission vector for Afrocentric dance music

You can use the computer to make any kind of music, and people do, but there is a particular set of practices most naturally suited to it: hip-hop, techno, and their various pop derivatives. I put this music front and center in my music tech classes, for a couple of reasons. The big one is its systematic neglect by music education. The African diaspora is a more salient influence on American music at this point than Euroclassical, but you’d never guess it from looking at our syllabi or standards.

The other reason I use an Afrofuturistic approach is that this is the music that sounds the best when you make it with computers. Classical music sounds dreadful in synthesized form. Hip-hop and EDM sound terrific. Copy and paste is the defining gesture of digital audio editing, and it fits the loop-centrism of Afrocentric pop perfectly.

With due respect to my music tech professors, I don’t believe that most musicians need to know the details of timestretching algorithms or MP3 encoding. The kids don’t really need to be taught how to use a DAW or a mic or a preamp; all of those things are amply documented for the curious. What musicians need to be taught is how to use these tools for expressive purposes. They need to know how to use recordings as raw material for new music, how to program synths and drums in a way that sounds good, the best aesthetic practices for loop structures. I believe that these practices are valuable for musicians working in any idiom, not just pop. I have my students remix each others’ tracks, so they can discuss each others’ musical ideas in the language of music itself. It works much better than any verbal discourse possibly could.

Take a historical view of music technology

My Montclair State colleague Adam Bell shares my musical values, but he puts them into practice somewhat differently. Rather than focus on the musical present, he likes to bring his students on a journey through the last hundred years of technology, taking in audio recording and manipulation, electronic and electroacoustic music, film scoring, and yes, rock and pop. For example, he has students explore musique concrète by recording environmental sounds on their phones, and then editing them in the DAW. He’s an enthusiastic proponent of maker culture, and has the kids create DIY custom electronic music interfaces using the Makey Makey and LittleBits. He wants the students to explore the expressive possibilities of technology, not just as users of tools, but as designers of them as well. I’m working on absorbing more of this approach into my own.

Examine all of the above methods critically

My mentor figure, Alex Ruthmann, is an expert on many music tech pedagogies and philosophies, and he takes a thirty-five-thousand-foot overview of them all. While he teaches music technology and methods for teaching music technology, his main mission is to look critically at all of the myriad ways that people teach it to find out what they conceal and reveal, what their unstated values and goals are, and how the various methods have emerged and interacted throughout history. After all, while music technology is a new subject, it isn’t completely new, and forward thinkers have been teaching it for many decades.

NYU’s music education program has been taking steps recently to make technology more of a priority. NYU brought Alex on board with the express goal of bridging the gap between music tech and music ed. My own NYU class is another step toward preparing future music teachers to do music tech.

There is no single best approach

So where does all this leave Matt, and other would-be teachers of music tech? The bad news is that there is no clearly defined set of practices to learn, no equivalent to Orff or Suzuki or Kodály. The good news is that we’re left with a lot of freedom to define our mission in our own terms. It’s a freedom that few music teachers enjoy, and we might as well take advantage of the opportunity to innovate.

Music theory on Hacker News

This fascinating thread about music theory on Hacker News showed up recently in my blog pingbacks.

Hackers

Two posts in particular caught my eye. First, kev6168 had this eminently reasonable request:

I wish there is a lecture in the format of “One hundred songs for one hundred music concepts”. The songs should be stupidly simple (children’s songs, simple pops, short classical pieces, etc.). Each lesson concentrates on only _one_ concept appeared in the piece, explains what it is, how it is used, why it is used in that way, and how its application makes nice sound, etc. Basically, show me the _effect_ for each concept in a simple real world setting. Then more importantly, give me exercises to write a few bars of music using this concept, no matter how bad my writings are, as long as I am applying the new knowledge…

[A]rmed with only a few concepts, a newbie [coder] can start to write simple programs from the very beginning, in the forms of piecemeal topic-focused little exercises. The result of each exercise is a fully functioning program. I wish I can find a similarly structured music theory course that uses the same approach. Also, are there projects in music which are similar to ProjectEuler.net or the likes, where you can do focused practice on specific topic? I would be happy to pay for those services.

This represents a pedagogical opportunity, not to mention a market opportunity. The NYU Music Experience Design Lab is hard at work on creating just such a resource. It’s going to be called Play With Your Music: Theory, and we hope to get it launched this summer. If you want to help us with it, get in touch.

Deeper in the thread, TheOtherHobbes has a broader philosophical point.

Pop has become a massive historical experiment in perceptual psychology. The most popular bands can literally fill a stadium – something common practice music has never done.

While that doesn’t mean pop is better in some absolute sense, it does suggest it’s doing something right for many listeners.

If your training is too rigidly classical it actively stops you being able to hear and understand what that right thing is, because you’re too busy concentrating on a small subset of the many details in the music.

This is a point that I spend a lot of energy pursuing, but I hadn’t explicitly framed it in terms of perceptual psychology. It gets at some bigger questions: Why do people like music at all? Even though pop can indeed draw huge crowds, it’s mostly a recorded art form. How does that work? What does it mean that we’re so attracted to roboticized voices? A lot to think about.