Why hip-hop is interesting

The title of this post is also the title of a tutorial I’m giving at ISMIR 2016 with Jan Van Balen and Dan Brown. The conference is organized by the International Society for Music Information Retrieval, and it’s the fanciest of its kind. You may be wondering what Music Information Retrieval is. MIR is a specialized field in computer science devoted to teaching computers to understand music, so they can transcribe it, organize it, find connections and similarities, and, maybe, eventually, create it.

So why are we going to talk to the MIR community about hip-hop? So far, the field has mostly studied music using the tools of Western classical music theory, which emphasizes melody and harmony. Hip-hop songs don’t tend to have much going on in either of those areas, which makes the genre seem like it’s either too difficult to study or just too boring. But the MIR community needs to find ways to engage with this music, if for no other reason than that hip-hop is the most-listened-to genre in the world, at least among Spotify listeners.

Hip-hop has been getting plenty of scholarly attention lately, but most of it has been coming from cultural studies. Which is fine! Hip-hop is culturally interesting. When humanities people do engage with hip-hop as an art form, they tend to focus entirely on the lyrics, treating them as a subgenre of African-American literature that just happens to be performed over beats. And again, that’s cool! Hip-hop lyrics have literary interest. If you’re interested in the lyrical side, we recommend this video analyzing the rhyming techniques of several iconic emcees. But what we want to discuss is why hip-hop is musically interesting, a subject which academics have given approximately zero attention to.

Much of what I find exciting (and difficult) about hip-hop can be found in Kanye West’s song “Famous” from his album The Life Of Pablo.

The song comes with a video, a ten-minute art film that shows Kanye in bed sleeping after a group sexual encounter with his wife, his former lover, his wife’s former lover, his father-in-law turned mother-in-law, various of his friends and collaborators, Bill Cosby, George Bush, Taylor Swift, and Donald Trump. There’s a lot to say about this, but it’s beyond the scope of our presentation, and of my ability to verbalize thoughts. The song has some problematic lyrics. Kanye drops the n-word in the very first line and calls Taylor Swift a bitch in the second. He also speculates that he might have sex with her, and that he made her famous. I find his language difficult and objectionable, but that too is beyond the scope. Instead, I’m going to focus on the music itself.

“Famous” has a peculiar structure, shown in the graphic below.

The track begins with a six-bar intro, Rihanna singing over a subtle gospel-flavored organ accompaniment in F-sharp major. She’s singing a few lines from “Do What You Gotta Do” by Jimmy Webb. This song has been recorded many times, but for Kanye’s listeners, the most significant version is Nina Simone’s.

Next comes a four-bar groove, a more aggressive organ part over a drum machine beat, with Swizz Beatz exclaiming on top. The beat is a minimal funk pattern on just kick and snare, treated with cavernous artificial reverb. The organ riff is in F-sharp minor, an abrupt mode change this early in the song. It’s sampled from the closing section of “Mi Sono Svegliato E…Ho Chiuso Gli Occhi” by Il Rovescio della Medaglia, an Italian prog-rock band I had never heard of until I looked the sample up just now. The song is itself built around quotes of Bach’s Well-Tempered Clavier–Kanye loves sampling material built from samples.

Verse one continues the same groove, with Kanye alternating between aggressive rap and loosely pitched singing. Rap is widely supposed not to be melodic, but this idea collapses immediately under scrutiny. The border between rapping and singing is fluid, and most emcees cross it effortlessly. Even in “straight” rapping, though, the pitch sequences are deliberate and meaningful. The pitches might not fall on the piano keys, but they are melodic nonetheless.

The verse is twelve bars long, which is unusual; hip-hop verses are almost always eight or sixteen bars. The hook (the hip-hop term for chorus) comes next, Rihanna singing the same Jimmy Webb/Nina Simone quote over the F-sharp major organ part from the intro. Swizz Beatz does more interjections, including a quote of “Wake Up Mr. West,” a short skit on Kanye’s album Late Registration in which DeRay Davis imitates Bernie Mac.

Verse two, like verse one, is twelve bars on the F-sharp minor loop. At the end, you think Rihanna is going to come back in for the hook, but she only delivers the pickup. The section abruptly shifts into an F-sharp major groove over fuller drums, including a snare that sounds like a socket wrench. The lead vocal is a sample of “Bam Bam” by Sister Nancy, which is a familiar reference for hip-hop fans–I recognize it from “Lost Ones” by Lauryn Hill and “Just Hangin’ Out” by Main Source. The chorus means “What a bum deal.” Sister Nancy’s track is itself sample-based–like many reggae songs, it uses a pre-existing riddim or instrumental backing, and the chorus is a quote of the Maytals.

Kanye doesn’t just sample “Bam Bam”, he also reharmonizes it. Sister Nancy’s original is a I – bVII progression in C Mixolydian. Kanye pitch-shifts the vocal to fit it over a I – V – IV – V progression in F-sharp major. He doesn’t just transpose the sample up or down a tritone; instead, he keeps the pitches close by changing their chord function. Here’s Sister Nancy’s original:

And here’s Kanye’s version:

The pitch shifting gives Sister Nancy the feel of a robot from the future, while the low-fidelity recording places her in the past. It’s a virtuoso sample flip.
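If you want to see the mechanics, here’s a toy Python sketch of the general idea. It is not Kanye’s actual process, and the three-note melody fragment is invented, but it shows how snapping each note to the nearest tone of the new chord keeps the vocal’s pitches close to where they started:

```python
# A toy sketch of chord-function-preserving pitch shifting (not Kanye's
# actual process): move each melody note to the nearest tone of the
# target chord, rather than transposing the whole line by a fixed interval.

def snap_to_chord(midi_note, chord_pitch_classes):
    """Return the pitch closest to midi_note whose pitch class is in the chord."""
    candidates = [midi_note + offset for offset in range(-6, 7)
                  if (midi_note + offset) % 12 in chord_pitch_classes]
    return min(candidates, key=lambda n: (abs(n - midi_note), n))

F_SHARP_MAJOR = {6, 10, 1}   # pitch classes F#, A#, C#
melody = [72, 76, 79]        # an invented C-E-G fragment over the original C chord
print([snap_to_chord(n, F_SHARP_MAJOR) for n in melody])
# [73, 78, 78]: each note moves a step or two instead of jumping a tritone
```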

After 24 bars of the Sister Nancy groove, the track ends with the Jimmy Webb hook again. But this time it isn’t Rihanna singing. Instead, it’s a sample of Nina Simone herself. It reminds me of Kanye’s song “Gold Digger”, which includes Jamie Foxx imitating Ray Charles, followed by a sample of Ray Charles himself. Kanye is showing off here. It would be a major coup for most producers to get Rihanna to sing on a track, and it would be an equally major coup to be able to license a Nina Simone sample, not to mention the chutzpah required to even want to sample such a sacred and iconic figure. Few people besides Kanye could afford to use both Rihanna and Nina Simone singing the same hook, and no one else would dare. I don’t think it’s just a conspicuous show of industry clout, either; Kanye wants you to feel the contrast between Rihanna’s heavily processed purr and Nina Simone’s stark, preacherly tone.

Here’s a diagram of all the samples and samples of samples in “Famous.”

In this one track, we have a dense interplay of rhythms, harmonies, timbres, vocal styles, and intertextual meaning, not to mention the complexities of cultural context. This is why hip-hop is interesting.

You probably have a good intuitive idea of what hip-hop is, but there’s plenty of confusion around the boundaries. What are the elements necessary for music to be hip-hop? Does it need to include rapping over a beat? When blues, rock, or R&B singers rap, should we retroactively consider that to be hip-hop? What about spoken-word poetry? Does hip-hop need to include rapping at all? Do singers like Mary J. Blige and Aaliyah qualify as hip-hop? Is Run-DMC’s version of “Walk This Way” by Aerosmith hip-hop or rock? Is “Love Lockdown” by Kanye West hip-hop or electronic pop? Do the rap sections of “Rapture” by Blondie or “Shake It Off” by Taylor Swift count as hip-hop?

If a single person can be said to have laid the groundwork for hip-hop, it’s James Brown. His black pride, sharp style, swagger, and blunt directness prefigure the rapper persona, and his records are a bottomless source of classic beats and samples. The HBO James Brown documentary is a must-watch.

Wikipedia lists hip-hop’s origins as including funk, disco, electronic music, dub, R&B, reggae, dancehall, rock, jazz, toasting, performance poetry, spoken word, signifyin’, The Dozens, griots, scat singing, and talking blues. People use the terms hip-hop and rap interchangeably, but hip-hop and rap are not the same thing. The former is a genre; the latter is a technique. Rap long predates hip-hop–you can hear it in classical, rock, R&B, swing, jazz fusion, soul, funk, country, and especially blues, especially especially the subgenre of talking blues. Meanwhile, it’s possible to have hip-hop without rap. Nearly all current pop and R&B are outgrowths of hip-hop. Turntablists and controllerists have turned hip-hop into a virtuoso instrumental music.

It’s sometimes said that rock is European harmony combined with African rhythm. Rock began as dance music, and rhythm continues to be its most important component. This is even more true of hip-hop, where harmony is minimal and sometimes completely absent. More than any other music of the African diaspora, hip-hop is a delivery system for beats. These beats have undergone some evolution over time. Early hip-hop was built on funk, the product of what I call The Great Cut-Time Shift, as the underlying pulse of black music shifted from eighth notes to sixteenth notes. Current hip-hop is driving a Second Great Cut-Time Shift, as the average tempo slows and the pulse moves to thirty-second notes.
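As a rough back-of-the-envelope illustration of what those shifts mean (the tempos below are invented round numbers, not measurements): when the tempo drops and the subdivision doubles, the density of the surface pulse stays in the same general range.

```python
# A back-of-the-envelope illustration of the cut-time shifts, using
# invented round-number tempos: as the tempo drops and the subdivision
# doubles, the surface pulse density stays in the same general range.

def pulses_per_second(bpm, notes_per_beat):
    return bpm / 60 * notes_per_beat

print(pulses_per_second(120, 2))  # eighth-note pulse at 120 bpm: 4.0 per second
print(pulses_per_second(96, 4))   # sixteenth-note funk pulse at 96 bpm: 6.4 per second
print(pulses_per_second(70, 8))   # thirty-second-note trap hi-hats at 70 bpm: ~9.3 per second
```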

Like all other African-American vernacular music, hip-hop uses extensive syncopation, most commonly in the form of a backbeat. You can hear the blues musician Taj Mahal teach a German audience how to clap on the backbeat. (“Schvartze” is German for “black.”) Hip-hop has also absorbed a lot of Afro-Cuban rhythms, most prominently the omnipresent son clave, which turns up everywhere: in the drums, of course, but also in the rhythms of bass, keyboards, horns, and vocals. You can hear son clave in the snare drum part in “WTF” by Missy Elliott.
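For reference, here’s the 3-2 son clave laid out on a sixteen-step sequencer grid, the same kind of representation the Groove Pizza (described below) uses:

```python
# The 3-2 son clave on a sixteen-step grid, one cell per sixteenth note.

SON_CLAVE = {0, 3, 6, 10, 12}   # step positions of the five clave hits

def render(steps, length=16):
    """Draw a one-bar step-sequencer row: X = hit, . = rest."""
    return "".join("X" if i in steps else "." for i in range(length))

print(render(SON_CLAVE))   # X..X..X...X.X...
```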

The NYU Music Experience Design Lab created the Groove Pizza app to help you visualize and interact with rhythms like the ones in hip-hop beats. You can use it to explore classic beats or more contemporary trap beats. Hip-hop beats come from three main sources: drum machines, samples, or (least commonly) live drummers.

Hip-hop was a DJ medium before emcees became the main focus. Party DJs in the disco era looped the funkiest, most rhythm-intensive sections of the records they were playing, and sometimes improvised toasts on top. Sampling and manipulating recordings has become effortless in the computer age, but doing it with vinyl records requires considerable technical skill. In the movie Wild Style, you can see Grandmaster Flash beat juggle and scratch “God Make Me Funky” by the Headhunters and “Take Me To The Mardi Gras” by Bob James (though the latter song had to be edited out of the movie for legal reasons).

The creative process of making a modern pop recording is very different from composing on paper or performing live. Hip-hop is an art form about tracks, and the creativity is only partially in the songs and the performances. A major part of the art form is the creation of sound itself. The timbre and space make the best tracks come alive as much as any of the “musical” components do. The recording studio gives you control over the finest nuances of the music that live performers can only dream of. Most of the music consists of synths and samples that are far removed from a “live performance.” The digital studio erases the distinction between composition, improvisation, performance, recording, and mixing. The best popular musicians are the ones most skilled at “playing the studio.”

Hip-hop has drawn much inspiration from the studio techniques of dub producers, who perform mixes of pre-existing multitrack tape recordings by literally playing the mixing desk. When you watch The Scientist mix Ted Sirota’s “Heavyweight Dub,” you can see him shaping the track by turning different instruments up and down and by turning the echo effect on and off. Like dub, hip-hop is usually created from scratch in the studio. Brian Eno describes the studio as a compositional tool, and hip-hop producers would agree.

Aside from the human voice, the most characteristic sounds in hip-hop are the synthesizer, the drum machine, the turntable, and the sampler. The skills needed by a hip-hop producer are quite different from the ones involved in playing traditional instruments or recording on tape. Rock musicians and fans are quick to judge electronic musicians like hip-hop producers for not being “real musicians” because sequencing electronic instruments appears to be easier to learn than guitar or drums. Is there something lazy or dishonest about hip-hop production techniques? Is the guitar more of a “real” instrument than the sampler or computer? Are the Roots “better” musicians because they incorporate instruments?

Maybe we discount the creative prowess of hip-hop producers because we’re unfamiliar with their workflow. Fortunately, there’s a growing body of YouTube videos that document various aspects of the process.

Before affordable digital samplers became available in the late 1980s, early hip-hop DJs and producers did most of their audio manipulation with turntables. Record scratching demands considerable skill and practice, and it has evolved into a virtuoso form analogous to bebop saxophone or metal guitar shredding.

Hip-hop is built on a foundation of existing recordings, repurposed and recombined. Samples might be individual drum hits, or entire songs. Even hip-hop tracks without samples very often started with them; producers often replace copyrighted material with soundalike “original” beats and instrumental performances for legal reasons. Turntables and samplers make it possible to perform recordings like instruments.

The Amen break, a six-second drum solo, is one of the most important samples of all time. It’s been used in uncountably many hip-hop songs, and is the basis for entire subgenres of electronic music. Ali Jamieson gives an in-depth exploration of the Amen.
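To get a feel for what producers do with a break like this, here’s a minimal chopping sketch. The filename is a placeholder, and the slice order is an arbitrary invention rather than any particular artist’s pattern:

```python
# A minimal breakbeat-chopping sketch: slice a drum break into sixteen
# equal steps and resequence them. "amen.wav" is a placeholder filename,
# and the new slice order is arbitrary.

import numpy as np
import soundfile as sf   # pip install soundfile

audio, sr = sf.read("amen.wav")
step = len(audio) // 16
slices = [audio[i * step:(i + 1) * step] for i in range(16)]

new_order = [0, 1, 2, 3, 8, 9, 2, 3, 4, 5, 6, 7, 12, 13, 14, 15]
sf.write("amen_chopped.wav",
         np.concatenate([slices[i] for i in new_order]), sr)
```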

There are few artistic acts more controversial than sampling. Is it a way to enter into a conversation with other artists? An act of liberation against the forces of corporatized mass culture? A form of civil disobedience against a stifling copyright regime? Or is it a bunch of lazy hacks stealing ideas, profiting off other musicians’ hard work, and devaluing the concept of originality? Should artists be able to control what happens to their work? Is complete originality desirable, or even possible?

We look to hip-hop to tell us the truth, to be real, to speak to feelings that normally go unspoken. At the same time, we expect rappers to be larger than life, to sound impossibly good at all times, and to live out a fantasy life. And many of our favorite artists deliberately alter their appearance, race, gender, nationality, and even species. To make matters more complicated, we mostly experience hip-hop through recordings and videos, where artificiality is the nature of the medium. How important is authenticity in this music? To what extent is it even possible?

The “realness” debate in hip-hop reached its apogee with the controversy over Auto-Tune. Studio engineers have been using computer software to correct singers’ pitch since the early 1990s, but the practice only became widely known when T-Pain overtly used exaggerated Auto-Tune as a vocal effect rather than a corrective. The “T-Pain effect” makes it impossible to sing a wrong note, though at the expense of making the singer sound like a robot from the future. Is this the death of singing as an art form? Is it cheating to rely on software like this? Does it bother you that Kanye West can have hits as a singer when he can barely carry a tune? Does it make a difference to learn that T-Pain has flawless pitch when he turns off the Auto-Tune?
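The core of hard pitch correction is surprisingly simple arithmetic. Here’s a toy sketch that snaps a detected frequency to the nearest equal-tempered semitone; the real Auto-Tune adds pitch detection, scale selection, adjustable retune speed, and clean resynthesis on top of this:

```python
# A toy version of hard pitch correction: snap a detected frequency
# to the nearest equal-tempered semitone.

import math

def snap_to_semitone(freq_hz):
    midi = 69 + 12 * math.log2(freq_hz / 440.0)    # Hz -> fractional MIDI note
    return 440.0 * 2 ** ((round(midi) - 69) / 12)  # nearest note -> Hz

print(snap_to_semitone(450.0))  # a sharp A4 (450 Hz) snaps back to 440.0
```

Applied instantly and audibly, this snapping is the T-Pain effect; applied slowly and gently, it becomes transparent correction.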

Hip-hop is inseparable from its social, racial and political environment. For example, you can’t understand eighties hip-hop without understanding New York City in the pre-Giuliani era. Eric B and Rakim capture it perfectly in the video for “I Ain’t No Joke.”

Given that hip-hop is the voice of the most marginalized people in America and the world, why is it so compelling to everyone else? Timothy Brennan argues that the musical African diaspora of which hip-hop is a part helps us resist imperialism through secular devotion. Brennan thinks that America’s love of African musical practice is related to an interest in African spiritual practice. We’re unconsciously drawn to the musical expression of African spirituality as a way of resisting oppressive industrial capitalism and Western hegemony. It isn’t just the defiant stance of the lyrics that’s doing the resisting. The beats and sounds themselves are doing the major emotional work, restructuring our sense of time, imposing a different grid system onto our experience. I would say that makes for some pretty interesting music.

Composing in the classroom

The hippest music teachers help their students create original music. But what exactly does that mean? What even is composition? In this post, I take a look at two innovators in music education and try to arrive at an answer.

Matt McLean is the founder of the amazing Young Composers and Improvisers Workshop. He teaches his students composition using a combination of Noteflight, an online notation editor, and the MusEDLab’s own aQWERTYon, a web app that turns your regular computer keyboard into an intuitive musical interface.

http://www.yciw.net/1/the-interface-i-wish-noteflight-had-is-here-aqwertyon/

Matt explains:

Participating students in YCIW as well as my own students at LREI have been using Noteflight for over 6 years to compose music for chamber orchestras, symphony orchestras, jazz ensembles, movie soundtracks, video game music, school band and more – hundreds of compositions.

Before the advent of the aQWERTYon, students needed to enter music into Noteflight either by clicking with the mouse or by playing notes in with a MIDI keyboard. The former method is accessible but slow; the latter method is fast but requires some keyboard technique. The aQWERTYon combines the accessibility of the mouse with the immediacy of the piano keyboard.
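I haven’t studied the aQWERTYon’s source code, but the core idea can be sketched in a few lines: map a row of QWERTY keys onto consecutive degrees of a chosen scale, so that every available key is in key. The scale, root note, and key row below are arbitrary choices for illustration:

```python
# A sketch of the aQWERTYon idea (not its actual implementation): map a
# keyboard row onto consecutive degrees of a scale, so every key is in key.

A_MINOR_PENTATONIC = [0, 3, 5, 7, 10]   # pitch classes relative to the root
ROW = "asdfghjkl"                       # one QWERTY row, low notes to high
ROOT = 57                               # A3 as a MIDI note number

def key_to_midi(key):
    degree = ROW.index(key)
    octave, step = divmod(degree, len(A_MINOR_PENTATONIC))
    return ROOT + 12 * octave + A_MINOR_PENTATONIC[step]

print([key_to_midi(k) for k in "asdf"])   # [57, 60, 62, 64]
```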

For the first time there is a viable way for every student to generate and notate her ideas in a tactile manner with an instrument that can be played by all. We founded Young Composers & Improvisors Workshop so that every student can have the experience of composing original music. Much of my time has been spent exploring ways to emphasize the “experiencing” part of this endeavor. Students had previously learned parts of their composition on instruments after their piece was completed. Also, students with piano or guitar skills could work out their ideas prior to notating them. But efforts to incorporate MIDI keyboards or other interfaces with Noteflight in order to give students a way to perform their ideas into notation always fell short.

The aQWERTYon lets novices try out ideas the way that more experienced musicians do: by improvising with an instrument and reacting to the sounds intuitively. It’s possible to compose without using an instrument at all, using a kind of sudoku-solving method, but it’s not likely to yield good results. Your analytical consciousness, the part of your mind that can write notation, is also its slowest and dumbest part. You really need your emotions, your ear, and your motor cortex involved. Before computers, you needed considerable technical expertise to be able to improvise musical ideas, and remember them long enough to write them down. The advent of recording and MIDI removed a lot of the friction from the notation step, because you could preserve your ideas just by playing them. With the aQWERTYon and interfaces like it, you can do your improvisation before learning any instrumental technique at all.

Student feedback suggests that kids like being able to play along to previously notated parts as a way to find new parts to add to their composition. As a teacher I am curious to measure the effect of students being able to practice their ideas at home using aQWERTYon and then sharing their performances before using their idea in their composition. It is likely that this will create a stronger connection between the composer and her musical idea than if she had only notated it first.

Those of us who have been making original music in DAWs are familiar with the pleasures of creating ideas through playful jamming. It feels like a major advance to put that experience in the hands of elementary school students.

Matt uses progressive methods to teach a traditional kind of musical expression: writing notated scores that will then be performed live by instrumentalists. Matt’s kids are using futuristic tools, but the model for their compositional technique is the one established in the era of Beethoven.

Beethoven

(I just now noticed that the manuscript Beethoven is holding in this painting is in the key of D-sharp. That’s a tough key to read!)

Other models of composition exist. There’s the Lennon and McCartney method, which doesn’t involve any music notation. Like most untrained rock musicians, the Beatles worked from lyric sheets with chords written on them as a mnemonic. The “lyrics plus chords” method continues to be the standard for rock, folk and country musicians. It’s a notation system that’s only really useful if you already have a good idea of how the song is supposed to sound.

Lennon and McCartney writing

Lennon and McCartney originally wrote their songs to be performed live for an audience. They played in clubs for several years before ever entering a recording studio. As their career progressed, however, the Beatles stopped performing live, and began writing with the specific goal of creating studio recordings. Some of those later Beatles tunes would be difficult or impossible to perform live. Contemporary artists like Missy Elliott and Pharrell Williams have pushed the Beatles’ idea to its logical extreme: songs existing entirely within the computer as sequences of samples and software synths, with improvised vocals arranged into shape after being recorded. For Missy and Pharrell, creating the score and the finished recording are one and the same act.

Pharrell and Missy Elliott in the studio

Is it possible to teach the Missy and Pharrell method in the classroom? Alex Ruthmann, MusEDLab founder and my soon-to-be PhD advisor, documented his method for doing so in 2007.

As a middle school general music teacher, I’ve often wrestled with how to engage my students in meaningful composing experiences. Many of the approaches I’d read about seemed disconnected from the real-world musicality I saw daily in the music my students created at home and what they did in my classes. This disconnect prompted me to look for ways of bridging the gap between the students’ musical world outside music class and their in-class composing experiences.

It’s an axiom of constructivist music education that students will be most motivated to learn music that’s personally meaningful to them. There are kids out there for whom notated music performed on instruments is personally meaningful. But the musical world outside music class usually follows the Missy and Pharrell method.

[T]he majority of approaches to teaching music with technology center around notating musical ideas and are often rooted in European classical notions of composing (for example, creating ABA pieces, or restricting composing tasks to predetermined rhythmic values). These approaches require students to have a fairly sophisticated knowledge of standard music notation and a fluency working with rhythms and pitches before being able to explore and express their musical ideas through broader musical dimensions like form, texture, mood, and style.

Noteflight imposes some limitations on these musical dimensions. Some forms, textures, moods and styles are difficult to capture in standard notation. Some are impossible. If you want to specify a particular drum machine sound combined with a sampled breakbeat, or an ambient synth pad, or a particular stereo image, standard notation is not the right tool for the job.

Common approaches to organizing composing experiences with synthesizers and software often focus on simplified classical forms without regard to whether these forms are authentic to the genre or to technologies chosen as a medium for creation.

There is nothing wrong with teaching classical forms. But when making music with computers, the best results come from making the music that’s idiomatic to computers. Matt McLean goes to extraordinary lengths to have student compositions performed by professional musicians, but most kids will be confined to the sounds made by the computer itself. Classical forms and idioms sound awkward at best when played by the computer, but electronic music sounds terrific.

The middle school students enrolled in these classes came without much interest in performing, working with notation, or studying the classical music canon. Many saw themselves as “failed” musicians, placed in a general music class because they had not succeeded in or desired to continue with traditional performance-based music classes. Though they no longer had the desire to perform in traditional school ensembles, they were excited about having the opportunity to create music that might be personally meaningful to them.

Here it is, the story of my life as a music student. Too bad I didn’t go to Alex’s school.

How could I teach so that composing for personal expression could be a transformative experience for students? How could I let the voices and needs of the students guide lessons for the composition process? How could I draw on the deep, complex musical understandings that these students brought to class to help them develop as musicians and composers? What tools could I use to quickly engage them in organizing sound in musical and meaningful ways?

Alex draws parallels between writing music and writing English. Both are usually done alone at a computer, and both pose a combination of technical and creative challenges.

Musical thinking (thinking in sound) and linguistic thinking (thinking using language phrases and ideas) are personal creative processes, yet both occur within social and cultural contexts. Noting these parallels, I began to think about connections between the whole-language approach to writing used by language arts teachers in my school and approaches I might take in my music classroom.

In the whole-language approach to writing, students work individually as they learn to write, yet are supported through collaborative scaffolding–support from their peers and the teacher. At the earliest stages, students tell their stories and attempt to write them down using pictures, drawings, and invented notation. Students write about topics that are personally meaningful to them, learning from their own writing and from the writing of their peers, their teacher, and their families. They also study literature of published authors. Classes that take this approach to teaching writing are often referred to as “writers’ workshops”… The teacher facilitates [students’] growth as writers through minilessons, share sessions, and conferring sessions tailored to meet the needs that emerge as the writers progress in their work. Students’ original ideas and writings often become an important component of the curriculum. However, students in these settings do not spend their entire class time “freewriting.” There are also opportunities for students to share writing in progress and get feedback and support from teacher and peers. Revision and extension of students’ writing occur throughout the process. Lessons are not organized by uniform, prescriptive assignments, but rather are tailored to the students’ interests and needs. In this way, the direction of the curriculum and successive projects are informed by the students’ needs as developing writers.

Alex set about creating an equivalent “composers’ workshop,” combining composition, improvisation, and performing with analytical listening and genre studies.

The broad curricular goal of the composers’ workshop is to engage students collaboratively in:

  • Organizing and expressing musical ideas and feelings through sound with real-world, authentic reasons for and means of composing
  • Listening to and analyzing musical works appropriate to students’ interests and experiences, drawn from a broad spectrum of sources
  • Studying processes of experienced music creators through listening to, performing, and analyzing their music, as well as being informed by accounts of the composition process written by these creators.

Alex recommends production software with strong loop libraries so students can make high-level musical decisions with “real” sounds immediately.

While students do not initially work directly with rhythms and pitch, working with loops enables students to begin composing through working with several broad musical dimensions, including texture, form, mood, and affect. As our semester progresses, students begin to add their own original melodies and musical ideas to their loop-based compositions through work with synthesizers and voices.

As they listen to musical exemplars, I try to have students listen for the musical decisions and understand the processes that artists, sound engineers, and producers make when crafting their pieces. These listening experiences often open the door to further dialogue on and study of the multiplicity of musical roles that are a part of creating today’s popular music. Having students read accounts of the steps that audio engineers, producers, songwriters, film-score composers, and studio musicians go through when creating music has proven to be informative and has helped students learn the skills for more accurately expressing the musical ideas they have in their heads.

Alex shares my belief in project-based music technology teaching. Rather than walking through the software feature-by-feature, he plunges students directly into a creative challenge, trusting them to pick up the necessary software functionality as they go. Rather than tightly prescribe creative approaches, Alex observes the students’ explorations and uses them as opportunities to ask questions.

I often ask students about their composing and their musical intentions to better understand how they create and what meanings they’re constructing and expressing through their compositions. Insights drawn from these initial dialogues help me identify strategies I can use to guide their future composing and also help me identify listening experiences that might support their work or techniques they might use to achieve their musical ideas.

Some musical challenges are more structured–Alex does “genre studies” where students have to pick out the qualities that define techno or rock or film scores, and then create using those idioms. This is especially useful for younger students who may not have a lot of experience listening closely to a wide range of music.

Rather than devoting entire classes to demonstrations or lectures, Alex prefers to devote the bulk of classroom time to working on the projects, offering “minilessons” to smaller groups or individuals as the need arises.

Teaching through minilessons targeted to individuals or small groups of students has helped to maintain the musical flow of students’ compositional work. As a result, I can provide more individual feedback and support to students as they compose. The students themselves also offer their own minilessons to peers when they have been designated to teach more about advanced features of the software, such as how to record a vocal track, add a fade-in or fade-out, or copy their musical material. These technology skills are taught directly to a few students, who then become the experts in that skill, responsible for teaching other students in the class who need the skill.

Not only does the peer-to-peer learning help with cultural authenticity, but it also gives students invaluable experience with the role of teacher.

One of my first questions is usually, “Is there anything that you would like me to listen for or know about before I listen?” This provides an opportunity for students to seek my help with particular aspects of their composing process. After listening to their compositions, I share my impressions of what I hear and offer my perspective on how to solve their musical problems. If students choose not to accept my ideas, that’s fine; after all, it’s their composition and personal expression… Use of conferring by both teacher and students fosters a culture of collaboration and helps students develop skills in peer scaffolding.

Alex recommends creating an online gallery of class compositions. This has become easier to implement since 2007 with the explosion of blog platforms like Tumblr, audio hosting tools like SoundCloud, and video hosts like YouTube. There are always going to be privacy considerations with such platforms, but there is no shortage of options to choose from.

Once a work is online, students can listen to and comment on these compositions at home outside of class time. Sometimes students post pieces in progress, but for the most part, works are posted when deemed “finished” by the composer. The online gallery can also be set up so students can hear works written by participants in other classes. Students are encouraged to listen to pieces published online for ideas to further their own work, to make comments, and to share these works with their friends and family. The real-world publishing of students’ music on the Internet seems to contribute to their motivation.

Assessing creative work is always going to be a challenge, since there’s no objective basis to assess it on. Alex looks at how well a student composer has met the goal of the assignment, and how well they have achieved their own compositional intent.

The word “composition” is problematic in the context of contemporary computer-based production. It carries the cultural baggage of Western Europe, the idea of music as having a sole identifiable author (or authors). The sampling and remixing ethos of hip-hop and electronica is closer to the traditions of non-European cultures where music may be owned by everyone and no one. I’ve had good results bringing remixing into the classroom, having students rework each others’ tracks, or beginning with a shared pool of audio samples, or doing more complex collaborative activities like musical shares. Remixes are a way of talking about music via the medium of music, and remixes of remixes can make for some rich and deep conversation. The word “composition” makes less sense in this context. I prefer the broader term “production”, which includes both the creation of new musical ideas and the realization of those ideas in sound.

So far in this post, I’ve presented notation-based composition and loop-based production as if they’re diametrical opposites. In reality, the two overlap, and can be easily combined. A student can create a part as a MIDI sequence and then convert it to notation, or vice versa. The school band or choir can perform alongside recorded or sequenced tracks. Instrumental or vocal performances can be recorded, sampled, and turned into new works. Electronic productions can be arranged for live instruments, and acoustic pieces can be reconceived as electronica. If a hip-hop track can incorporate a sample of Duke Ellington, there’s no reason that sample couldn’t be performed by a high school jazz band. The possibilities are endless.

Milo meets Beethoven

For his birthday, Milo got a book called Welcome to the Symphony by Carolyn Sloan. We finally got around to showing it to him recently, and now he’s totally obsessed.

Welcome To The Symphony by Carolyn Sloan

The book has buttons along the side which you can press to hear little audio samples. They include each orchestra instrument playing a short Beethoven riff. All of the string instruments play the same “bum-bum-bum-BUMMM” so you can compare the sounds easily. All the winds play a different little phrase, and the brass another. The book itself is fine and all, but the thing that really hooked Milo is triggering the riffs one after another, Ableton-style, and singing merrily along.

Milo got primed to enjoy this book by two coincidental things. One is that in his preschool, they’ve been listening to Peter and the Wolf a lot, dancing to it, acting it out, etc. They use a YouTube video that shows both the story and the instruments side by side, so Milo has very clear ideas of what the oboe, clarinet, and the rest all look and sound like. When he saw them in the orchestra book, he recognized them all immediately.

The other thing is this weird computer-animated cartoon called Taratabong, which is about anthropomorphic musical instruments. Milo has been watching it on YouTube a bunch, to the point of wanting me to pretend to be different characters and “talk” to him (which is an entertaining challenge for me–how do you have a conversation as a snare drum?) So Milo also recognizes different instruments in the orchestra book as Taratabong characters.

Milo has now voluntarily watched a YouTube video of the entire first movement of Beethoven’s Fifth conducted by Leonard Bernstein, several times. That’s like nine minutes of classical music, which for a three-year-old is equivalent to nine hours. He sings along to all the riffs he recognizes, announces each instrument as he sees it, and tells me about how Leonard Bernstein is Grandfather from Peter and the Wolf. I want to emphasize that we haven’t pushed him into any of this. If you read this blog, you know that I’m an outspoken anti-fan of Beethoven. We just put this stuff under Milo’s nose, and if he hadn’t been interested, we wouldn’t have pushed it.

The classical music tribe expresses continual anguish about how hard it is to draw people into the music. Having inadvertently created a budding Beethoven lover, I have a few insights to offer. Milo got connected to the music through multiple media simultaneously, in multiple settings. He was exposed initially in the context of stories about animals and cartoon characters. That exposure happened in the context of acting and dancing, not passive sitting or being lectured to. And when he did start listening, it was via playback devices that he controls completely: YouTube Kids on the iPad, and the buttons on the book.

Of all these different music experiences, the Ableton-like sample triggering is the one that has most seized Milo’s enthusiasm. Sometimes he wants to read the book and play the sounds when the text indicates. Sometimes he wants to systematically listen through each sound, singing along and acting out the instruments. Sometimes he just jams out, playing the excerpts in different orders and in different rhythms. I suspect he’d be even happier if he could get the sounds to loop. He wants to sing along, but the little phrases are half over before he can even get oriented. If the phrases looped in a musical-sounding way, I bet he would dig in much deeper.

This is not Milo’s first experience triggering sample playback. Before he even turned two, we spent a lot of time playing around with an APC 40.

APC40

Milo adores the lights and colors, and instantly grasped how the volume faders work. In general, though, the APC experience was too complicated for him. It was too easy to make it stop working, to lose the connection between button pushes and the music changing, and to generally get lost in the interface. (I have some of those same problems!) The orchestra book has the advantage of being vastly simpler and more predictable.

There’s a page in the book that shows Beethoven with quill pen, writing the music. (Milo is continually disappointed not to see Beethoven himself in any of the performance videos.) Interestingly, Milo has started using the phrase “writing music” as a synonym for “playing music”, either from an instrument or from iTunes. He seems not to know or care about the distinction between playing back pre-recorded music and creating new music. This conflation of writing and playing music was likely helped by the time Milo has spent with the aQWERTYon, an interface developed by the NYU MusEDLab for performing music on the computer keyboard.

aQWERTYon screencap

Milo isn’t extremely interested in the musical aspect of the aQWERTYon. He calls it “ABCs” and is mostly interested in using it to type his favorite letters. He also enjoys singing the alphabet song while playing semi-randomly along.

The MusEDLab’s work is motivated by the fact that computers make it enormously easier for total novices to participate actively in music. If Beethoven symphonies can be played with as toys, participated in as games, and connected to meaningful stories and activities, then it’s inevitable that kids are going to want to get involved. If I had experienced Beethoven as raw material for my own expression, I’d probably feel quite differently about him.

Please stop saying “consuming music”

In the wake of David Bowie’s death, I went on iTunes and bought a couple of his tracks, including the majestic “Blackstar.” In economic terms, I “consumed” this song. I am a “music consumer.” I made an emotional connection to a dying man who has been a creative inspiration of mine for more than twenty years, via “consumption.” That does not feel like the right word, at all. When did we even start saying “music consumers”? Why did we start? It makes my skin crawl.

The Online Etymology Dictionary says that the verb “to consume” descends from Latin consumere, which means “to use up, eat, waste.” That last sense of the word speaks volumes about America, our values, and specifically, our pathological relationship with music.

The synonyms for “consume” listed in my computer’s thesaurus include: devour, ingest, swallow, gobble up, wolf down, guzzle, feast on, gulp down, polish off, dispose of, pig out on, swill, expend, deplete, exhaust, waste, squander, drain, dissipate, fritter away, destroy, demolish, lay waste, wipe out, annihilate, devastate, gut, ruin, wreck. None of these are words I want to apply to music.

I’m happy to spend money on music. I’m not happy to be a consumer of it. When I consume something, like electricity or food, then it’s gone, and can’t be used by anyone else. But having bought that David Bowie song from iTunes, I can listen to it endlessly, play it for other people, put it in playlists, mull it over when I’m not listening to it, sample it, remix it, mash it up with other songs.

What word should we use for buying songs from iTunes, or streaming them on Spotify, or otherwise spending money on them? (Or being advertised to around them?) Well, what’s wrong with “buying” or “streaming”? I’m happy to call myself a “music buyer” or “music streamer.” There’s no contradiction there between the economic activity and the creative one.

My colleagues in the music business world have developed a distressing habit of using “consuming” to describe any music listening experience. This is the sense of the word that I’m most committed to abolishing. Not only is it nonsensical, but it reduces the act of listening to the equivalent of eating a bag of potato chips. Listening is not a passive activity. It requires imaginative participation (and, in more civilized cultures than ours, dancing). Listening is a form of musicianship–the most important kind, since it’s a prerequisite for all of the others. Marc Sabatella says:

For the purposes of this primer, we are all musicians. Some of us may be performing musicians, while most of us are listening musicians. Most of the former are also the latter.

I mean, you would hope. Thomas Regelski goes further. He challenges the assumption that the deepest understanding of music comes from performing or composing it. Performing and composing are valuable and delightful experiences, and they can inform a rich musical understanding. But they aren’t the only way to access meaning at the deepest level. Listening alone can do it. Some of the best music scholarship I’ve read comes from “non-musicians.” Listening is a creative act. You couldn’t come up with a less apt term for it than “consumption.” Please stop saying it.