The Groove Pizzeria

For his NYU music technology master’s thesis, Tyler Bisson created a web app called Groove Pizzeria, a polyrhythmic/polymetric extension of the Groove Pizza. Click the image to try it for yourself.

<img src="https://i2.wp.com/www.ethanhein.com/wp/wp-content/uploads/2019/05/Groove-Pizzaria.png?resize=680%2C392" alt="Groove Pizzeria" width="680" height="392" class="alignnone size-large wp-image-18497" />

Note that the Groove Pizzeria is still a prototype, and it doesn’t yet have the full feature set that the Groove Pizza does. As of this writing, there are no presets, no saving, no exporting of audio or MIDI, and no changing drum kits. You can record the Groove Pizzeria’s output using Audio Hijack, however.

Like the Groove Pizza, the Groove Pizzeria is based on the idea of the rhythm necklace, a circular representation of musical rhythm. The Groove Pizza is a set of three concentric rhythm necklaces, each of which controls one drum sound, e.g. kick, snare and hi-hat. The Groove Pizzeria gives you two sets of concentric rhythm necklaces, each of which can have its own time duration and subdivisions. This means that you can use the Groove Pizzeria to make polyrhythm and polymeter.

The words “polyrhythm” and “polymeter” are frequently used interchangeably, but they are not the same thing. Tyler’s thesis contains the clearest definition of the terms that I know of, which I paraphrase here.

  • Polyrhythm is two or more concurrent loops of equal duration. Each loop consists of a set of evenly-spaced subdivisions or rhythmic onsets. The loops contain different numbers of onsets, meaning that the subdivisions of each loop are not the same length. Finally, the ratio of the number of onsets in the two loops is not a whole number (otherwise one loop would just be an even subdivision of the other). When people talk about 4:3 or 5:2 polyrhythm, this is what they mean. In Western music, polyrhythms usually only occur for short time spans in the form of tuplets, but in West African drumming, polyrhythms are a core structural feature. 
  • Polymeter is two or more concurrent loops of different duration. The subdivisions in each loop have the same duration, but each loop contains a different number of them. Polymeter is much more common in Western music than polyrhythm, though you mostly see it over short time spans, in the form of hemiola or syncopation.
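The two definitions translate directly into arithmetic. Here is a sketch in Python (my own illustration, not part of the Groove Pizzeria) that computes the onset times for each case:

```python
# Onset times in seconds for two concurrent loops.
# Polyrhythm: equal loop duration, different onset counts.
# Polymeter: equal step duration, different loop lengths.

def polyrhythm(loop_duration, onsets_a, onsets_b):
    """Two loops of the same duration, evenly divided into different step counts."""
    a = [i * loop_duration / onsets_a for i in range(onsets_a)]
    b = [i * loop_duration / onsets_b for i in range(onsets_b)]
    return a, b

def polymeter(step_duration, steps_a, steps_b):
    """Two loops whose steps are the same length but whose total lengths differ."""
    a = [i * step_duration for i in range(steps_a)]  # loop repeats after steps_a steps
    b = [i * step_duration for i in range(steps_b)]  # loop repeats after steps_b steps
    return a, b

# 5:4 polyrhythm over a two-second loop: 0.4-second steps against 0.5-second steps
print(polyrhythm(2.0, 5, 4))
# 5:4 polymeter with half-second steps: a 2.5-second loop against a 2.0-second loop
print(polymeter(0.5, 5, 4))
```

Notice that the polyrhythm’s two loops line up again after a single cycle, while the polymeter’s loops only realign after four repeats of the longer loop and five of the shorter one.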

With these two definitions in mind, let’s take a look at the Groove Pizzeria interface. For each loop, you can control both the number of subdivisions (the number of onsets) and the duration of each subdivision. The basic time unit in the Groove Pizzeria is one sixteenth note. Each of the “teeth” on the outer radius of each circle represents the duration of one sixteenth note. If you change the Time Units setting, you make the sixteenth notes shorter, and the radius of the circle gets smaller to preserve the spacing between the teeth of the loop. The easiest way to understand the difference is just to draw some rhythm patterns on the grid, play with the sliders, and see what happens. Notice that the Groove Pizzeria visualizes the compound pattern formed by the two loops in the top left corner of the screen.

Here’s a 5:4 polyrhythm created by taking two loops that are the same length and dividing them into five and four steps respectively:

Simple 5 against 4 polyrhythm on the Groove Pizzeria

If you want a 5:4 polymeter rather than a polyrhythm, then you will need to adjust the number of time units in each loop as well. (The patterns aren’t perfectly symmetric so you can hear where they start and end.)

Simple 5 against 4 polymeter

Here’s a less exotic sound, a 4:3 polymeter, also known as hemiola. On the left is a 4/4 hip-hop pattern. On the right, I made a 12-beat-long pattern that repeats four times in the same amount of time as it takes the hip-hop pattern to repeat three times.

4 vs 3 polymeter, also known as hemiola

Here’s a less familiar sound, an 11:5 polyrhythm. On the left, I made the closest thing to a hip-hop pattern that’s possible in 11/8 time, and on the right I made a simple quintuplet pattern. This will probably sound weird to you at first, but if you listen to it for a while, it will eventually start to make a wonky kind of sense.

11 against 5 polyrhythm

How about some real-world examples? Genuine polyrhythm is unusual in popular music, but it’s not unheard of. James Blake uses a quintuplet hi-hat pattern in his song “Unluck.”

Here’s my Groove Pizzeria representation of this beat. On the left is the kick and snare playing a straight quarter note pattern in 4/4, and on the right is the hi-hat pattern (though it’s not playing back on a hi-hat sound.)

Hip-hop producers sometimes use polyrhythms to create specific varieties of swing. On drum machines, swing (sometimes called shuffle) shortens and lengthens each alternate beat. At zero swing, also known as 1:1 swing, the beats within each pair are the same length. At maximum swing, the first beat in each pair will be twice as long as the second beat in the pair. This is known as 2:1 swing, sometimes called “triplet swing” because it’s as if the first beat is two triplets long, while the second is one triplet long. In real life, you usually want your swing setting somewhere between these two extremes. (Click here for a more detailed explanation of swing.)

One way to get a swing ratio in between 1:1 and 2:1 is to use a quintuplet grid. If you think of the first three quintuplets in each group as one “beat” in a pair and the last two as the other “beat,” you get the equivalent of 3:2 swing. Slynk explains how to set this up in Ableton:

Here’s a neo soul groove I made using quintuplet swing:

Neo soul quintuplet swing groove

For an even narrower swing ratio, you can use septuplet swing. It’s the same idea, except now you’re grouping together the first four septuplets into one “beat” in the pair, and the last three septuplets into the other “beat”. This gives you a 4:3 swing ratio. This is pretty close to no swing at all, but it’s noticeably “off,” in a way that gives you a nice J Dilla “drunken drummer” feel. Slynk explains again:

Here’s a neo soul groove I made using septuplet swing:

Neo soul septuplet swing groove
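To summarize the arithmetic behind all of these swing flavors: for an a:b swing ratio, the first “beat” of each pair takes up a/(a+b) of the pair, which is exactly where the off-beat note lands. Here is a quick Python sketch of my own (not taken from any DAW):

```python
from fractions import Fraction

def swing_offset(a, b):
    """Position of the second note of a swung pair, as a fraction
    of the whole pair, for an a:b swing ratio."""
    return Fraction(a, a + b)

print(swing_offset(1, 1))  # 1/2: no swing, straight subdivisions
print(swing_offset(2, 1))  # 2/3: full triplet swing
print(swing_offset(3, 2))  # 3/5: quintuplet swing (3+2 grouping)
print(swing_offset(4, 3))  # 4/7: septuplet swing (4+3 grouping)
```

Since 4/7 ≈ 0.571 sits only a little past the unswung 1/2, septuplet swing is the subtlest of the bunch, which is why it produces that barely-off “drunken drummer” feel.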

Beyond complex rhythms, the Groove Pizzeria can teach another useful musical concept called event fusion. When a rhythm gets fast enough, you stop hearing individual beats and start to hear a continuous thrum. The transition happens at around twenty beats per second. If you play the rhythm even faster, the thrum becomes a steady pitch, and the higher the tempo, the higher the pitch. Here’s how you can experiment with event fusion yourself. First, put a clap on every sixteenth note. Next, reduce the number of time units to a small number (5 is fine) and set the tempo to 300 bpm. Now reduce the number of steps. Listen for the point when the claps fuse into a single tone. You can control the pitch of this tone by changing the number of steps.

Event fusion at extreme tempo
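The fusion threshold is easy to compute: the onset rate in Hz is the tempo in beats per minute divided by 60, times the number of onsets per beat. A tiny sketch of my own (not the Groove Pizzeria’s internals):

```python
def onset_rate_hz(bpm, onsets_per_beat):
    """How many onsets per second a pattern produces at a given tempo."""
    return bpm / 60 * onsets_per_beat

# Sixteenth-note claps (four per beat) at 300 bpm:
print(onset_rate_hz(300, 4))  # 20.0 Hz, right at the edge of event fusion
```

Push the rate past 20 Hz and the claps start to read as a pitch; doubling the rate raises the resulting tone by an octave.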

If you think of more interesting music learning or creation applications for the Groove Pizzeria, please let me know. Happy drumming!

The orchestra hit as a possible future for classical music

In my paper about whiteness in music education, I tried to make a point about sampling classical music that my professor was (rightly) confused about. So I’m going to use this post to unpack the idea some more. I was arguing that, while we should definitely decanonize the curriculum, that doesn’t mean we need to stop teaching Western classical music entirely; we just need to teach it differently. Rather than seeing the canonical masterpieces as being carved in marble, we should use them as raw material for the creation of new music.

When I think about a happy future for classical music, I think of the orchestra hit in “Planet Rock” by Afrika Bambaataa and the Soulsonic Force, a sample that came packaged with the Fairlight CMI.

Fairlight CMI

The orchestra hit is a sample of “The Firebird” by Igor Stravinsky.

This sample is the subject of an amazing musicology paper by Robert Fink: The story of ORCH5, or, the classical ghost in the hip-hop machine. If you don’t feel like reading the paper, there’s also this delightful video on the subject.

Why would Afrika Bambaataa (or any other hip-hop musician) want to appropriate the sound of the symphony orchestra? Maybe producers use it just because it sounds cool, but Fink sees a deeper meaning in the sound’s Afrofuturism.

A key aspect of the Afro-futurist imagination lies in a complex identification with the science-fiction Other, with alienness, on the part of an Afro-diasporic culture still dominated by the dark legacy of subjugation to more technologically advanced colonialism… [I]n the sound-world of electro-funk, it is European art music that is cast, consciously or not, in the role of ancient, alien power source (351-352).

Ancient alien power sources are a deathless science fiction trope. Think of the vibranium meteor in Black Panther, bugger technology in the Ender’s Game series, Spice in Dune, Endurium and the Crystal Planet in Starflight, and the fifth element in The Fifth Element (a movie that makes zero sense, but that does creatively combine classical music and techno.) The world that gave rise to the classical canon no longer exists, outside of music schools and similar institutions. But its remnants are everywhere. Why not repurpose them for the making of future music?

Jazz musicians have done plenty of creative repurposing of classical music. My favorite examples are Django Reinhardt’s take on a Bach concerto and the Ellington Nutcracker. Classical music’s biggest influence on jazz is mostly behind the scenes, in the training that many musicians received before jazz was taught formally, in Charlie Parker’s love of Stravinsky and Miles Davis’ admiration for Stockhausen, and in John Coltrane’s study of Nicolas Slonimsky. For creators of hip-hop and electronic dance music, the notes and the concepts aren’t as useful as the recordings. It’s all the lush and varied timbres of classical music that have the most to offer the world now.

“Planet Rock” was only the first of many hip-hop songs to sample classical music. “Blue Flowers” by Dr Octagon samples Bartok’s Violin Concerto #2.

I also love Kelis’ sample of The Magic Flute, and The Streets’ sample of the New World Symphony. Here’s a Spotify playlist with many more examples.

There are also a few performance ensembles attempting to bridge the rap-classical divide. For example, the daKAH Hip Hop Orchestra performs rap classics live.

The idea of reproducing sampled recordings with instruments would seem to me to miss the point of sampling–that sitar riff in “Bonita Applebum” isn’t just a sequence of pitches, it’s a specific timbre from a specific recording. But I appreciate the spirit.

A much better idea is to bring the alien power source of the orchestra to bear on the creation of new works. The producer Max Wheeler wrote Grown: a Grime Opera, which combines emcees and DJs with a large orchestral ensemble. I think it’s a fantastic idea, and it’s well executed. (Though I’m not totally objective here; I’ve met Max personally and like him.)

My own interest lies mostly in the possibilities of sampling and remixing. Joseph Schloss, in his must-read book Making Beats, says that producers listen to records “as if potential breaks have been unlooped and hidden randomly throughout the world’s music. It is the producer’s job to find them.” We have barely scratched the surface of the classical canon’s unlooped breaks and hooks. Vassily Kalinnikov’s Symphony number one includes a gorgeous four-chord progression that could well be the saddest chord progression ever. But it’s buried among a ton of other material, and Kalinnikov only repeats it once. This, to me, is a tragic waste. I want to hear that progression repeated many more times than that. Fortunately, thanks to the magic of Ableton Live, I can!

I have more classical music remixes here.

The Music Experience Design Lab has been creating a series of apps called Variation Playgrounds, which let you playfully remix classical works in the browser.

MusEDLab Variation Playground

The Variation Playgrounds are visually beautiful and cool, but sonically they’re unsatisfying, because they use fake-sounding MIDI versions of the music. Like I said above, the real creative potential for classical remixing isn’t in the notes, it’s in the timbres and textures, all the sonic nuance that you can only get from humans playing instruments.

It would be nice if classical music institutions took a liberal attitude toward sampling. (Most of the canonical works are in the public domain, but the recordings are owned by the record label or organization that made them.) Even better, music organizations could start creating sample libraries. There’s an existing model to follow, the New World Symphony remix contest run by the Deutsches Symphonie-Orchester Berlin. The DSO posted a bunch of pristinely recorded excerpts on SoundCloud and encouraged the internet to go to town. That is the world I want to live in.

So here’s my fantasy scenario: classical institutions create sample libraries for every canonical work. They categorize the samples by instrument, key, and tempo, along with scores, MIDI files, background information, video of the performances, and whatever other context might be of interest. They use a licensing scheme that automatically grants sample clearances in exchange for some reasonable fee or revenue-sharing scheme. They encourage transparency of sources: “Hey trap producers! Here are some suitably bleak sounds. Be sure to link back to us from your SoundCloud page.” Classical music might be a tough sell for casual music listeners, but producers listen to a lot of unusual things, and we listen closely. We might not be inclined to buy concert tickets, but we might eagerly comb through recordings with the right invitation.

I recognize that this idea is kind of a tough sell. My observation of classical institutions is that they aren’t particularly interested in fostering the production of more beat-driven electronic music; they want people to learn to appreciate the canon as it is. I don’t have much investment in that goal. My goal as a progressive music educator is to help young people find their own musical truths, through discovery or invention. Most music educators still see their goal as being the preservation of the canon, and are either indifferent or actively hostile toward the music that the kids like. I think the odds of keeping the canon alive are better if it maintains cultural relevance, if it isn’t just “musical spinach” that you eat because it’s somehow good for you. I don’t believe classical music to be any more intrinsically nutritious than anything else (it’s packed with melody and harmony, but deficient in other necessary musical vitamins, like groove.) But if preserving the canon is your goal, then sampling producers might be powerful allies.

Freedom ’90

Since George Michael died, I’ve been enjoying all of his hits, but none of them more than this one. Listening to it now, it’s painfully obvious how much it’s about George Michael’s struggles with his sexual orientation. I wonder whether he was being deliberately coy in the lyrics, or if he just wasn’t yet fully in touch with his identity. Being gay in the eighties must have been a nightmare.

This is the funkiest song that George Michael ever wrote, which is saying something. Was he the funkiest white British guy in history? Quite possibly. 

The beat

There are five layers to the drum pattern: a simple closed hi-hat from a drum machine, some programmed bongos and congas, a sampled tambourine playing lightly swung sixteenth notes, and finally, once the full groove kicks in, the good old Funky Drummer break. I include a Noteflight transcription of all that stuff below, but don’t listen to it; it sounds comically awful.

George Michael uses the Funky Drummer break on at least two of the songs on Listen Without Prejudice Vol 1. Hear him discuss the break and how it informed his writing process in this must-watch 1990 documentary.

The intro and choruses

Harmonically, this is a boilerplate C Mixolydian progression: the chords built on the first, seventh and fourth degrees of the scale. You can hear the same progression in uncountably many classic rock songs.

C Mixolydian chords

For a more detailed explanation of this scale and others like it, check out Theory For Producers.
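You can derive these chords mechanically by stacking alternating scale degrees. Here is a Python sketch (my own illustration of the theory, not anything from the song’s production):

```python
# Triads on the first, seventh and fourth degrees of C Mixolydian.
C_MIXOLYDIAN = ["C", "D", "E", "F", "G", "A", "Bb"]

def triad(scale, degree):
    """Stack thirds (every other scale note) above the given 1-based degree."""
    root = degree - 1
    return [scale[(root + step) % len(scale)] for step in (0, 2, 4)]

for degree in (1, 7, 4):
    print(triad(C_MIXOLYDIAN, degree))
# ['C', 'E', 'G']  -> C major
# ['Bb', 'D', 'F'] -> B-flat major
# ['F', 'A', 'C']  -> F major
```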

The rhythm is what makes this groove so fresh. It’s an Afro-Cuban pattern full of syncopation and hemiola. Here’s an abstraction of it on the Groove Pizza. If you know the correct name of this rhythm, please tell me in the comments!

The verses

There’s a switch to plain vanilla C major, the chords built on the fifth, fourth and root of the scale.

C major chords

Like the chorus, this is standard issue pop/rock harmonically speaking, but it also gets its life from a funky Latin rhythm. It’s a kind of clave pattern, five hits spread more or less evenly across the sixteen sixteenth notes in the bar. Here it is on the Groove Pizza.
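“Five hits spread more or less evenly across the sixteen sixteenth notes” is the recipe for a Euclidean rhythm. Here is one simple rounding-based way to space the hits, sketched in Python; I’m not claiming it reproduces George Michael’s exact pattern, only the even-spreading idea:

```python
def spread_evenly(onsets, steps):
    """Place `onsets` hits as evenly as possible across `steps` positions."""
    return [round(i * steps / onsets) for i in range(onsets)]

print(spread_evenly(5, 16))  # [0, 3, 6, 10, 13] -- a bossa-nova-like clave
```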

The prechorus and bridge

This section unexpectedly jumps over to C minor, and now things get harmonically interesting. The chords are built around a descending chromatic bassline: C, B, B-flat, A. It’s a simple idea with complicated implications, because the four chords are drawn from three different scales between them. First, we have the tonic triad in C natural minor, no big deal there. Next comes the V chord in C harmonic minor. Then we’re back to C natural minor, but with the seventh in the bass. Finally, we go to the IV chord in C Dorian mode. Really, all that we’re doing is stretching C natural minor to accommodate a couple of new notes: B natural in the second chord, and A natural in the fourth one.

C minor - descending chromatic bassline
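Spelled out, one reading of the four chords looks like this (following the analysis above; the voicings on the record itself may differ):

```python
# Chords over the descending C, B, B-flat, A bassline.
progression = [
    ("Cm",    ["C", "Eb", "G"], "C"),   # tonic triad in C natural minor
    ("G/B",   ["G", "B", "D"],  "B"),   # V chord borrowed from C harmonic minor
    ("Cm/Bb", ["C", "Eb", "G"], "Bb"),  # tonic again, with the seventh in the bass
    ("F/A",   ["F", "A", "C"],  "A"),   # IV chord from C Dorian
]
print([bass for _, _, bass in progression])  # ['C', 'B', 'Bb', 'A']
```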

The rhythm here is similar but not identical to the clave-like pattern in the verse–the final chord stab is a sixteenth note earlier. See and hear it on the Groove Pizza.

I don’t have the time to transcribe the whole bassline, but it’s absurdly tight and soulful. The album credits list bass played both by Deon Estus and by George Michael himself. Whichever one of them laid this down, they nailed it.

Song structure

“Freedom ’90” has an exceedingly peculiar structure for a mainstream pop song. The first chorus doesn’t hit until almost two minutes in, which is an eternity–most pop songs are practically over at that point. The graphic below shows the song segments as I marked them in Ableton.

Freedom '90 structure

The song begins with a four bar instrumental intro, nothing remarkable about that. But then it immediately moves into an eight bar section that I have trouble classifying. It’s the spot that would normally be occupied by verse one, but this part uses the chorus harmony and is different from the other verses. I labeled it “intro verse” for lack of a better term. (Update: upon listening again, I realized that this section is the backing vocals from the back half of the chorus. Clever, George Michael!) Then there’s an eight bar instrumental break, before the song has really even started. George Michael brings you on board with this unconventional sequence because it’s all so catchy, but it’s definitely strange.

Finally, twenty bars in, the song settles into a more traditional verse-prechorus-chorus loop. The verses are long, sixteen bars. The prechorus is eight bars, and the chorus is sixteen. You could think of the chorus as being two eight bar sections, the part that goes “All we have to do…” and the part that goes “Freedom…” but I hear it as all one big section.

After two verse-prechorus-chorus units, there’s a four bar breakdown on the prechorus chord progression. This leads into a sixteen bar bridge, still following the prechorus form. Finally, the song ends with a climactic third chorus, which repeats and fades out as an outro. All told, the song runs over six minutes. That’s enough time (and musical information) for two songs by a lesser artist.

A word about dynamics: just from looking at the audio waveform, you can see that “Freedom ’90” has very little contrast in loudness and fullness over its duration. It starts sparse, but once the Funky Drummer loop kicks in at measure 13, the sound stays constantly big and full until the breakdown and bridge. These sections are a little emptier without the busy piano part. The final chorus is a little bigger than the rest of the song because there are more vocals layered in, but that still isn’t a lot of contrast. I guess George Michael decided that the groove was so hot, why mess with it by introducing contrast for the sake of contrast? He was right to feel that way.

Deconstructing the bassline in Herbie Hancock’s “Chameleon”

If you have even a passing interest in funk, you will want to familiarize yourself with Herbie Hancock’s “Chameleon.” And if you are preoccupied and dedicated to the preservation of the movement of the hips, then the bassline needs to be a cornerstone of your practice.

Chameleon - circular bass

Here’s a transcription I did in Noteflight – huge props to them for recently introducing sixteenth note swing.

And here’s how it looks in the MIDI piano roll:

The “Chameleon” bassline packs an incredible amount of music into just two bars. To understand how it’s put together, it’s helpful to take a look at the scale that Herbie built the tune around, the B-flat Dorian mode. Click the image below to play it on the aQWERTYon. I recommend doing some jamming with it over the song before you move on.

B-flat Dorian

Fun fact: this scale contains the same pitches as A-flat major. If you find that fact confusing, then feel free to ignore it. You can learn more about scales and modes in my Soundfly course.
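If you’d rather verify the fun fact than take it on faith, comparing the two collections as sets makes it obvious. A throwaway Python check of my own:

```python
# B-flat Dorian and A-flat major as pitch collections.
Bb_DORIAN = ["Bb", "C", "Db", "Eb", "F", "G", "Ab"]
Ab_MAJOR  = ["Ab", "Bb", "C", "Db", "Eb", "F", "G"]

# Same seven pitches; only the starting note (the tonic) differs.
print(set(Bb_DORIAN) == set(Ab_MAJOR))  # True
```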

The chord progression

The opening section of “Chameleon” is an endless loop of two chords, B♭-7 and E♭7. You build both of them using the notes in B-flat Dorian. To make B♭-7, start on the root of the scale, B-flat. Skip over the second scale degree to land on the third, D-flat. Skip over the fourth scale degree to land on the fifth, F. Then skip over the sixth to land on the seventh, A-flat. If you want to add extensions to the chord, just keep skipping scale degrees, like so:

B-flat Dorian mode chords

To make E♭7, you’re going to use the same seven pitches in the same order, but you’re going to treat E-flat as home base rather than B-flat. You could think of this new scale as being E-flat Mixolydian, or B-flat Dorian starting on E-flat; they’re perfectly interchangeable. Click to play E-flat Mixolydian on the aQWERTYon. You build your E♭7 chord like so:

B-flat Dorian mode chords on E-flat
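The skip-a-degree recipe works the same from either starting note. Here is a sketch of both chords in Python (my own illustration):

```python
# Build seventh chords by taking every other note of B-flat Dorian.
Bb_DORIAN = ["Bb", "C", "Db", "Eb", "F", "G", "Ab"]

def seventh_chord(scale, root):
    """Stack four notes, skipping every other scale degree, starting from `root`."""
    start = scale.index(root)
    return [scale[(start + 2 * i) % len(scale)] for i in range(4)]

print(seventh_chord(Bb_DORIAN, "Bb"))  # ['Bb', 'Db', 'F', 'Ab'] -> B-flat minor 7
print(seventh_chord(Bb_DORIAN, "Eb"))  # ['Eb', 'G', 'Bb', 'Db'] -> E-flat 7
```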

Once you’ve got the sound of B♭-7 and E♭7 in your head, let’s try an extremely simplified version of the bassline.

Chord roots only

At the most basic level, the “Chameleon” bassline exists to spell out the chord progression in a rhythmically interesting way. (This is what all basslines do.) Here’s a version of the bassline that removes all of the notes except the ones on the first beat of each bar. They play the roots of the chords, B-flat and E-flat.

That’s boring, but effective. You can never go wrong playing chord roots on the downbeat.

Simple arpeggios

Next, we’ll hear a bassline that plays all of the notes in B♭-7 and E♭7 one at a time. When you play chords in this way, they’re called arpeggios.

The actual arpeggios

The real “Chameleon” bassline plays partial arpeggios–they don’t have all of the notes from each chord. Also, the rhythm is a complicated and interesting one.

Below, you can explore the rhythm in the Groove Pizza. The orange triangle shows the rhythm of the arpeggio notes, played on the snare. The yellow quadrilateral shows the rhythm of the walkups, played on the kick–we’ll get to those below.

The snare rhythm has a hit every three sixteenth notes. It’s a figure known in Afro-Latin music as tresillo, which you hear absolutely everywhere in all styles of American popular and vernacular music. Tresillo also forms the front half of the equally ubiquitous son clave. (By the way, you can also use the Groove Pizza to experiment with the “Chameleon” drum pattern.)
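In sixteenth-note positions (numbering from zero), the relationship between tresillo and son clave looks like this:

```python
# Tresillo: a hit every three pulses within an eight-pulse half bar (3+3+2).
tresillo = [0, 3, 6]
# Son clave spans the full sixteen sixteenth notes of the bar.
son_clave = [0, 3, 6, 10, 12]

# The front half of son clave (pulses 0-7) is exactly the tresillo:
print([p for p in son_clave if p < 8])  # [0, 3, 6]
```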

As for the pitches: Instead of going root-third-fifth-seventh, the bassline plays partial arpeggios. The figure over B♭-7 is just the root, seventh and root again, while the one over E♭7 is the root, fifth and seventh.

Adding the walkups

Now let’s forget about the arpeggios for a minute and go back to just playing the chord roots on the downbeats. The bassline walks up to each of these notes via the chromatic scale, that is, every pitch on the piano keyboard.

Chromatic walkups are a great way to introduce some hip dissonance into your basslines, because they can include notes that aren’t in the underlying scale. In “Chameleon” the walkups include A natural and D natural. Both of these notes sound really weird if you sustain them over B-flat Dorian, but in the context of the walkup they sound perfectly fine.
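Here is a small Python sketch of the walkup idea (the flat-side note spellings are my own choice):

```python
# The chromatic scale, spelled with flats to match B-flat Dorian.
CHROMATIC = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def walkup(target, steps=3):
    """The chromatic notes walking up to (but not including) `target`."""
    t = CHROMATIC.index(target)
    return [CHROMATIC[(t - k) % 12] for k in range(steps, 0, -1)]

print(walkup("Bb"))  # ['G', 'Ab', 'A'] -- A natural is outside B-flat Dorian
print(walkup("Eb"))  # ['C', 'Db', 'D'] -- D natural is outside it too
```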

Putting it all together

The full bassline consists of the broken arpeggios anticipated by the walkups.

If you’re a guitarist or bassist, you can play this without even shifting position. Use your index on the third fret, your middle on the fourth fret, your ring on the fifth fret, and your pinkie on the sixth fret.

              .          . .
G|----------.-3----------3-6--|
D|----------6-----------------|
A|---------------3-4-5-6------|
E|--3-4-5-6-------------------|

If you’ve got this under your fingers, maybe you’d like to figure out the various keyboard and horn parts. They aren’t difficult, but you’ll need one more scale, the B-flat blues scale. Click the image to jam with it over the song and experience how great it sounds.

B-flat blues

There you have it, one of the cornerstones of funk. Good luck getting it out of your head!

A participant ethnography of the Ed Sullivan Fellows program

Note: I refer to mentors by their real names, and to participants by pseudonyms.

Ed Sullivan Fellows (ESF) is a mentorship and artist development program run by the NYU Steinhardt Music Experience Design Lab. It came about by a combination of happenstances. I had a private music production student named Rob Precht, who had found my blog via a Google search. He and I usually held our lessons in the lab’s office space. Over the course of a few months, Rob met people from the lab and heard about our projects. He found us sufficiently inspiring that he approached us with an idea. He wanted to give us a grant to start a program that would help young people from under-resourced communities get a start in the music industry. He asked us to name it after his grandfather, Ed Sullivan, whose show had been crucial to launching the careers of Elvis, the Beatles, and the Jackson 5. While Rob’s initial idea had been to work with refugees who had relocated to New York, we agreed to shift the focus to native New York City residents, since our connections and competencies were stronger there.

Ed Sullivan Fellows

The Ed Sullivan Fellows program is run by Jamie Ehrenfeld, a graduate of NYU’s music education program, a founding member of the Music Experience Design Lab, and a music teacher at Eagle Academy in Brownsville. Like many members of the lab, she straddles musical worlds, bringing her training in classical voice to her work mentoring rappers and R&B singers. Participants are young men and women between the ages of 15 and 20, mostly low-SES people of color. They meet on Saturday afternoons at NYU Steinhardt to write and record songs; to get mentorship on the music business, marketing and branding; and to socialize. We had originally conceived of ESF as a series of formally organized classes, but it became immediately obvious that such a structure was going to be impractical. While there is a regular core of attendees, their lives are complicated, and there is no way to predict who will show up week to week or when they will arrive and leave. Instead, sessions have taken on a clubhouse feel, a series of ad-hoc jam sessions, cyphers, informal talks, and open-ended creativity. Conversations are as likely to focus on participants’ emotions, politics, social life and identity as they are on anything pertaining to music.

There is a “core squad” of nineteen regular ESF participants, and an additional thirty occasional attendees. Many are students at Eagle Academy and members of their social networks. This group is mostly black and Latino. Another smaller group attends City-As-School. Only three Fellows total are white. The Fellows are mostly male, partially because many of them attend an all-male school, and partially because of hip-hop’s skewed gender dynamics generally. There are six core mentors (including myself) and another sixteen peripheral mentors. Some are young black men and women from the Fellows’ social networks, and the rest are NYU people, or are socially connected to the lab. All of the mentors are musicians, but otherwise come from a variety of backgrounds: education, business, software development, design.

The ethnomusicologist Thomas Turino draws a distinction between “participatory” and “presentational” cultures of music performance. Presentational performances include all of the music you would hear at a professional concert: classical, jazz, rock, and so on. Participatory performances include campfire singalongs, jam sessions, and drum circles, where there is little to no distinction between performers and audience. We tend to regard presentational performance as “real” music. Turino argues that we undervalue participatory music subcultures, because they are some of the few cultural spaces in America where monetary profit is not a primary value, and where our jobs and economic status are not our major identifying characteristics. While the ostensible goal of ESF is developing young artists professionally, the actual music-making that takes place is highly participatory in nature.

Hip-hop is not the only style of music that the Fellows create. Some are singer-songwriters in a pop, R&B or gospel style. Still, hip-hop is the default, the unifying thread, and the common vocabulary. Among the forty-six Fellows, there are twenty-three emcees, nineteen singers, eighteen producers, thirteen live instrumentalists, and twenty-nine improvisors who are comfortable participating in a live jam. (These categories are not mutually exclusive.) Among the mentors, there are three emcees, four singers, four producers, six live instrumentalists, and eight improvisors. Of the twenty-three Fellows who self-identify as rappers, sixteen can freestyle, improvising lyrics on the spot, a formidably challenging musical practice. Participation in cyphers and jams is a core part of the ESF ethos.

The Fellows are familiar with the drug-influenced mumble rappers who currently dominate the charts, but their sensibilities are more clear-eyed, narrative, and direct. For example, Rashad cites Chance The Rapper as his major musical inspiration. Chance has a densely intellectual flow with an irrepressible sunniness, and raps about his life, his community, politics, and his relationship with God. Other Fellows express outspoken admiration for Kendrick Lamar, who is less cheerful and optimistic, but who also has a strong social and political conscience. Like most current hip-hop artists, ESF participants favor beats that are extremely slow and sparse, with electronic drums playing stuttering subdivisions of the beat accompanied by disjointed samples or soft textural ambience on top. I try to keep current with hip-hop, as much as a 41-year-old white dad can, but this music continues to surprise me with how futuristic it sounds. It has a science-fictional dystopian quality, but for all its iciness, the funk heartbeat remains.

ESF meets and works in spaces belonging to NYU Steinhardt’s music technology and music education departments: primarily a conference room and recording studio, spilling over into various labs and classrooms as needed. During the week, these spaces host classes and presentations, and are otherwise occupied by NYU students, who socialize in low-key ways or work on their laptops. While NYU’s culture is informal, it is still an academic institution, and the predominant feeling in the common space is quiet and productive. On the rare occasion when music is played on the conference area PA system during the week, it is part of a class or lecture. During ESF sessions, by contrast, the PA plays hip-hop beats, sometimes looped endlessly for long periods of time, and usually at party volumes. The Fellows have an unreserved social style, and the feeling when they occupy the space is more one of play than of work.

ESF periodically records in the James Dolan studio, named for the owner of Madison Square Garden, himself an enthusiastic amateur musician and at one time the parent of two NYU music technology students. He noticed that the school’s recording facilities were old and run-down, so he essentially gave Steinhardt a blank check to build a state-of-the-art studio. Ten million dollars later, NYU boasts one of the best studios in New York, with top-of-the-line equipment and immaculate acoustics. The monitor speakers alone cost twenty thousand dollars; the mixing desk cost on the order of a hundred thousand. When the Fellows record, they are assisted by well-trained and highly competent student engineers. I always feel like more of a “real musician” whenever I work in there, and clearly it has a similar effect on the Fellows. For all its luxuriousness, though, the Dolan studio was designed to capture live performances using Pro Tools, not for creative hip-hop production with Logic or FL Studio or Ableton Live. The Fellows would be better served by a group of smaller, less grandiose spaces equipped with the software and hardware designed specifically for their methods.

For all its progressiveness, New York is one of the most racially segregated cities in America. NYU students and ESF participants live within a few miles of each other, but occupy very different social worlds. Nearly all of the Fellows are black or Latino, and all are of low socioeconomic status. NYU students are ethnically diverse, but this is because of the prevalence of international students; the students of color are predominantly Asian. NYU is an extremely expensive private institution, and unlike Ivy League schools, it does not have a large endowment that it can use for financial aid and scholarships. While not all NYU students are wealthy, a substantial percentage certainly are, and an air of casual privilege pervades. NYU music technology students know hip-hop, and some are aficionados, but their tastes center more on indie rock, electronica, and experimental music. Music education students mostly inhabit the self-contained classical world, or the similarly insular subculture of musical theater.

The ESF working style, in or out of the studio, is low-key, social, casual, and, at times, indistinguishable from simply hanging out. This is well in keeping with the broader norms of hip-hop. For all its apparent lack of focus, this ad-hoc working style is richly generative of original music. After extended socializing, the Fellows tend to make their creative choices quickly and decisively, and for the most part are confident and relaxed performers. The same is broadly true of other hip-hop musicians I have worked with.

While the music emerges seamlessly out of playful fraternizing, this is not to say that it is always effortless. The Fellows are not all expert musicians, and they sometimes show dissatisfaction or frustration with their music. Also, they vary in their willingness to share their ideas, especially the unfinished or insufficiently polished ones. That said, I cannot recall seeing anyone in ESF display anxiety. This is a conspicuous difference from NYU’s music students, for whom anxiety is a dominant emotion in their creative spaces, especially the recording studio. During one session I led for some of my NYU undergraduate students, one woman came close to a panic attack from simply sitting in the control room listening to her peers recording. Classical music students face continual and strict scrutiny, and the studio represents the harshest scrutiny of all—an error that might go unnoticed in a live performance is painfully obvious on a recording.

Due to family obligations, I am not able to be a regular participant in ESF. When I can attend sessions, I teach audio engineering, work with the Fellows on mixing and editing their tracks, give creative feedback, or most commonly, make myself available and see what happens. Today it will be the latter. I arrive at 2 pm, the session’s scheduled start time. Jamie is there, as is another mentor, Amber, an NYU music education student. There are only two Fellows present, Juan and Marcus, and no one is making any music yet. Most of the Fellows will arrive late, and while the session is supposed to end at 6, Jamie tells me that “they’ll still be kicking it at 6:30 or 7:00… You can’t fight their body clock.”

Juan and Marcus join me at the table where I am sitting with my computer. They talk about the new Kendrick Lamar album and other recent developments in the rap world. Then Juan mentions that he is presently homeless due to a fight with his mother. (He is not the only homeless ESF participant.) There was apparently some police involvement, and a restraining order was issued. As a result, Juan missed a performance, so now on top of everything else, he will not be able to get booked at the venue again. He tells us this with the same wry detachment he used to talk about the new Kendrick. Either this happens to him routinely, or he is putting a brave face on a very bad situation, or both. The subject changes to whether a mutual friend is gay. Then Juan sings something, and Marcus asks, “You know the guy who sings that song?” Juan replies, “Who, Chris Brown?” Marcus says, “Yeah, you should let him sing it.” This is just friendly trash talk; Juan sings beautifully.

Three more Fellows drift in at 3:00 and gather in a far corner of the room. They plug a laptop into the PA and play a beat they’re working on. It is a four-bar loop, endlessly repeating, with jazzy major seventh chords on piano over a drum machine. The three guys let it run while they shoot the breeze. As other Fellows arrive, they make a point of greeting me, shaking my hand firmly or fist bumping me, whether they have met me before or not. They look at their phones, noodle on the piano, and talk. It appears that nothing whatsoever is happening here, but I know from experience that it is all part of the process. After spending 45 minutes just letting their loop run, the group in the corner begins scrolling through different drum sounds. Then they quickly lay down a synth bassline on the MIDI controller. A notebook is produced, and songwriting begins in earnest.

Jamie and I are the only white people present. She and Amber continue to hang out, since the Fellows presently do not need any guidance. Amber complains about NYU’s music curriculum, that she is forced to study serialist composition. “I take all these music classes and only one involves me writing songs.” Jamie responds, “I got a whole music degree here and have never written a song.” She is committed to making expression the center of ESF; she wants everyone to write songs, to manifest themselves as creative and empowered beings. Kigan, another mentor, listens to us critique the Eurocentrism of the music academy, and is appalled to learn that universities did not begin to consider jazz an acceptable subject of study until decades after its peak cultural relevance. Kigan says that trap music now is what jazz was in the 1930s, that it’s where all the creativity is happening. He is not even referring to rap when he says this; he means the instrumental component of the music. He recommends a producer named Flosstradamus to me, and I make a note to look him up on SoundCloud later.

At 4:30, there is another beat looping on the speaker system. This one is in a minor key, with a mysterious vocal sample that sounds like aliens chanting. The beat is trap style, an extremely slow tempo with hi-hats stuttering in double time. Juan begins freestyling effortlessly over it. Another Fellow plays a line on the upright piano. Amber begins writing out a song structure on the whiteboard. Kigan and Jamie eat pizza and continue chatting. The energy in the room has picked up undeniably, even if it still seems unfocused.

Jamie and I talk about a grant proposal she is working on. She tries to articulate the value of what is happening here. “Saturdays are not the program. The space is not the program. The interactions are the program.” She wants to give ESF a sense of “accountability,” though she knows that this goal will run up against the chaotic reality of the Fellows’ lives. Rather than imposing some kind of discipline, she wants to foster intrinsic motivation from the sense of community: “Oh man, I saw on Facebook Live that you guys had a great session.” She ponders doing a “reboot” after Labor Day. Until then, the periodic recording sessions in the Dolan studio will continue to be natural anchor points. Jamie has also been bringing the Fellows to hackathons at Spotify and Splice–she wants them to imagine themselves someday working at those kinds of companies.

Alex Ruthmann, the director of the Music Experience Design Lab, is on the Steinhardt music education faculty, and has already started thinking of ways to integrate ESF with the official curriculum. The worlds of ESF and NYU have much to offer each other. NYU has its facilities and equipment, its expert faculty, its glamorous central location, and the accumulated expertise of all those well-trained musicians and composers and engineers. ESF has none of the material wealth or the privilege. But the Fellows are part of hip-hop, the single most important driver of America’s musical culture. A recent study conducted by Spotify concluded that hip-hop is the most-listened-to genre of music on their service, not just in the United States, but everywhere in the world. It is astonishing to me that our country’s most marginalized young people are producing its most valued music. I hope that the academy learns to value their ideas as much as mass culture does.

Learning music from Ableton

Ableton recently launched a delightful web site that teaches the basics of beatmaking, production and music theory using elegant interactives. If you’re interested in music education, creation, or user experience design, you owe it to yourself to try it out.

Ableton - Learning Music site

One of the site’s co-creators is Dennis DeSantis, who wrote Live’s unusually lucid documentation, as well as Ableton’s first book, a highly recommended collection of strategies for music creation (not just in the electronic idiom).

Dennis DeSantis - Making Music

The other co-creator is Jack Schaedler, who also created this totally gorgeous interactive digital signal theory primer.

If you’ve been following the work of the NYU Music Experience Design Lab, you might notice some strong similarities between Ableton’s site and our tools. That’s no coincidence. Dennis and I have been having an informal back and forth on the role of technology in music education for a few years now. It’s a relationship that’s going to get a step more formal this fall at the 2017 Loop Conference – more details on that as it develops.

Meanwhile, Peter Kirn’s review of the Learning Music site raises some probing questions about why Ableton might be getting involved in education in the first place. But first, he makes some broad statements about the state of the musical world that are worth repeating in full.

I think there’s a common myth that music production tools somehow take away from the need to understand music theory. I’d say exactly the opposite: they’re more demanding.

Every musician is now in the position of composer. You have an opportunity to arrange new sounds in new ways without any clear frame from the past. You’re now part of a community of listeners who have more access to traditions across geography and essentially from the dawn of time. In other words, there’s almost no choice too obvious.

The music education world has been slow to react to these new realities. We still think of composition as an elite and esoteric skill, one reserved for a small class of highly trained specialists. Before computers, this was a reasonable enough attitude to have, because it was mostly true. Not many of us can learn an instrument well enough to compose with it, then learn to notate our ideas. Even fewer of us will be able to find musicians to perform those compositions. But anyone with an iPhone and twenty dollars worth of apps can make original music using an infinite variety of sounds, and share that music online with anyone willing to listen. My kids started playing with iOS music apps when they were one year old. With the technical barriers to musical creativity falling away, the remaining challenge is gaining an understanding of music itself, how it works, why some things sound good and others don’t. This is the challenge that we as music educators are suddenly free to take up.

There’s an important question to ask here, though: why Ableton?

To me, the answer to this is self-evident. Ableton has been in the music education business since its founding. Like Adam Bell says, every piece of music creation software is a de facto education experience. Designers of DAWs might even be the most culturally impactful music educators of our time. Most popular music is made by self-taught producers, and a lot of that self-teaching consists of exploring DAWs like Ableton Live. The presets, factory sounds and affordances of your DAW powerfully inform your understanding of musical possibility. If DAW makers are going to be teaching the world’s producers, I’d prefer if they do it intentionally.

So far, there has been a divide between “serious” music making tools like Ableton Live and the toy-like iOS and web apps that my kids use. If you’re sufficiently motivated, you can integrate them all together, but it takes some skill. One of the most interesting features of Ableton’s web site, then, is that each interactive tool includes a link that will open up your little creation in a Live session. Peter Kirn shares my excitement about this feature.

There are plenty of interactive learning examples online, but I think that “export” feature – the ability to integrate with serious desktop features – represents a kind of breakthrough.

Ableton Live is a superb creation tool, but I’ve been hesitant to recommend it to beginner producers. The web site could change my mind about that.

So, this is all wonderful. But Kirn points out a dark side.

The richness of music knowledge is something we’ve received because of healthy music communities and music institutions, because of a network of overlapping ecosystems. And it’s important that many of these are independent. I think it’s great that software companies are getting into the action, and I hope they continue to do so. In fact, I think that’s one healthy part of the present ecosystem.

It’s the rest of the ecosystem that’s worrying – the one outside individual brands and what they support. Public music education is getting squeezed in different ways all around the world. Independent content production is, too, even in advertising-supported publications like this one, but more so in other spheres. Worse, I think education around music technology hasn’t even begun to be reconciled with traditional music education – in the sense that people with specialties in one field tend not to have any understanding of the other. And right now, we need both – and both are getting their resources squeezed.

This might feel like I’m going on a tangent, but if your DAW has to teach you how harmony works, it’s worth asking the question – did some other part of the system break down?

Yes it did! Sure, you can learn the fundamentals of rhythm, harmony, and form from any of a thousand schools, courses, or books. But there aren’t many places you can go to learn about it in the context of Beyoncé, Daft Punk, or A Tribe Called Quest. Not many educators are hip enough to include the Sleng Teng riddim as one of the fundamentals. I’m doing my best to rectify this imbalance–that’s what my Soundfly courses are for. But I join Peter Kirn in wondering why it’s left to private companies to do this work. Why isn’t school music more culturally relevant? Why do so many educators insist that kids like the wrong music? Why is it so common to get a music degree without ever writing a song? Why is the chasm between the culture of school music and music generally so wide?

Like Kirn, I’m distressed that school music programs are getting their budgets cut. But there’s a reason that’s happening, and it isn’t that politicians and school boards are philistines. Enrollment in school music is declining in places where the budgets aren’t being cut, and even where schools are offering free instruments. We need to look at the content of school music itself to see why it’s driving kids away. Both the content of school music programs and the people teaching them are whiter than the student population. Even white kids are likely to be alienated from a Eurocentric curriculum that doesn’t reflect America’s increasingly Afrocentric musical culture. The large ensemble model that we imported from European conservatories is incompatible with the riot of polyglot individualism in the kids’ earbuds.

While music therapists have been teaching songwriting for years, it’s rare to find it in school music curricula. Production and beatmaking are even more rare. Not many adults can play oboe in an orchestra, but anyone with a guitar or keyboard or smartphone can write and perform songs. Music performance is a wonderful experience, one I wish were available to everyone, but music creation is on another level of emotional meaning entirely. It’s like the difference between watching basketball on TV and playing it yourself. It’s a way to understand your own innermost experiences and the innermost experiences of others. It changes the way you listen to music, and the way you approach any kind of art for that matter. It’s a tool that anyone should be able to have in their kit. Ableton is doing the music education world an invaluable service; I hope more of us follow their example.

Why hip-hop is interesting

The title of this post is also the title of a tutorial I’m giving at ISMIR 2016 with Jan Van Balen and Dan Brown. The conference is organized by the International Society for Music Information Retrieval, and it’s the fanciest of its kind. You may be wondering what Music Information Retrieval is. MIR is a specialized field in computer science devoted to teaching computers to understand music, so they can transcribe it, organize it, find connections and similarities, and, maybe, eventually, create it.

So why are we going to talk to the MIR community about hip-hop? So far, the field has mostly studied music using the tools of Western classical music theory, which emphasizes melody and harmony. Hip-hop songs don’t tend to have much going on in either of those areas, which makes the genre seem like it’s either too difficult to study, or just too boring. But the MIR community needs to find ways to engage this music, if for no other reason than the fact that hip-hop is the most-listened-to genre in the world, at least among Spotify listeners.

Hip-hop has been getting plenty of scholarly attention lately, but most of it has been coming from cultural studies. Which is fine! Hip-hop is culturally interesting. When humanities people do engage with hip-hop as an art form, they tend to focus entirely on the lyrics, treating them as a subgenre of African-American literature that just happens to be performed over beats. And again, that’s cool! Hip-hop lyrics have literary interest. If you’re interested in the lyrical side, we recommend this video analyzing the rhyming techniques of several iconic emcees. But what we want to discuss is why hip-hop is musically interesting, a subject which academics have given approximately zero attention to.

Much of what I find exciting (and difficult) about hip-hop can be found in Kanye West’s song “Famous” from his album The Life Of Pablo.

The song comes with a video, a ten minute art film that shows Kanye in bed sleeping after a group sexual encounter with his wife, his former lover, his wife’s former lover, his father-in-law turned mother-in-law, various of his friends and collaborators, Bill Cosby, George Bush, Taylor Swift, and Donald Trump. There’s a lot to say about this, but it’s beyond the scope of our presentation, and my ability to verbalize thoughts. The song has some problematic lyrics. Kanye drops the n-word in the very first line and calls Taylor Swift a bitch in the second. He also speculates that he might have sex with her, and that he made her famous. I find his language difficult and objectionable, but that too is beyond the scope. Instead, I’m going to focus on the music itself.

“Famous” has a peculiar structure, shown in the graphic below.

The track begins with a six-bar intro, Rihanna singing over a subtle gospel-flavored organ accompaniment in F-sharp major. She’s singing a few lines from “Do What You Gotta Do” by Jimmy Webb. This song has been recorded many times, but for Kanye’s listeners, the most significant one is by Nina Simone.

Next comes a four-bar groove, a more aggressive organ part over a drum machine beat, with Swizz Beatz exclaiming on top. The beat is a minimal funk pattern on just kick and snare, treated with cavernous artificial reverb. The organ riff is in F-sharp minor, which is an abrupt mode change so early in the song. It’s sampled from the closing section of “Mi Sono Svegliato E…Ho Chiuso Gli Occhi” by Il Rovescio della Medaglia, an Italian prog-rock band I had never heard of until I looked the sample up just now. The song is itself built around quotes of Bach’s Well-Tempered Clavier–Kanye loves sampling material built from samples.

Verse one continues the same groove, with Kanye alternating between aggressive rap and loosely pitched singing. Rap is widely supposed not to be melodic, but this idea collapses immediately under scrutiny. The border between rapping and singing is fluid, and most emcees cross it effortlessly. Even in “straight” rapping, though, the pitch sequences are deliberate and meaningful. The pitches might not fall on the piano keys, but they are melodic nonetheless.

The verse is twelve bars long, which is unusual; hip-hop verses are almost always eight or sixteen bars. The hook (the hip-hop term for chorus) comes next, Rihanna singing the same Jimmy Webb/Nina Simone quote over the F-sharp major organ part from the intro. Swizz Beatz does more interjections, including a quote of “Wake Up Mr. West,” a short skit on Kanye’s album Late Registration in which DeRay Davis imitates Bernie Mac.

Verse two, like verse one, is twelve bars on the F-sharp minor loop. At the end, you think Rihanna is going to come back in for the hook, but she only delivers the pickup. The section abruptly shifts into an F-sharp major groove over fuller drums, including a snare that sounds like a socket wrench. The lead vocal is a sample of “Bam Bam” by Sister Nancy, which is a familiar reference for hip-hop fans–I recognize it from “Lost Ones” by Lauryn Hill and “Just Hangin’ Out” by Main Source. The chorus means “What a bum deal.” Sister Nancy’s track is itself sample-based–like many reggae songs, it uses a pre-existing riddim or instrumental backing, and the chorus is a quote of the Maytals.

Kanye doesn’t just sample “Bam Bam”, he also reharmonizes it. Sister Nancy’s original is a I – bVII progression in C Mixolydian. Kanye pitch shifts the vocal to fit it over a I – V – IV – V progression in F-sharp major. He doesn’t just transpose the sample up or down a tritone; instead, he keeps the pitches close by changing their chord function. Here’s Sister Nancy’s original:

And here’s Kanye’s version:

The pitch shifting gives Sister Nancy the feel of a robot from the future, while the lo-fidelity recording places her in the past. It’s a virtuoso sample flip.
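If you want to play with the idea yourself, here’s a rough Python sketch of the arithmetic. To be clear, the note names and shift amounts below are my own illustration, not anything taken from Kanye’s actual session; the point is just the difference between bluntly transposing a sample by a tritone (six semitones) and nudging each pitch by a semitone or two so it lands on a new chord function.

```python
# Pitch classes: 0 = C, 1 = C#, ... 11 = B (one octave, no register)
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(pitch_class, semitones):
    """Shift a pitch class by some number of semitones, wrapping at the octave."""
    return (pitch_class + semitones) % 12

c = NOTE_NAMES.index("C")

# C Mixolydian to F-sharp major is a tritone apart, so a "dumb" transposition
# would drag every sung note six semitones away:
print(NOTE_NAMES[transpose(c, 6)])  # F#

# Keeping the pitches close instead means tiny shifts -- each note stays
# within a semitone of where Sister Nancy sang it, but takes on a different
# function in the new key:
for shift in (0, 1, -1):
    print(NOTE_NAMES[transpose(c, shift)])  # C, C#, B
```

This is the trick that lets the vocal sit in F-sharp major without sounding like it was yanked across the keyboard.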

After 24 bars of the Sister Nancy groove, the track ends with the Jimmy Webb hook again. But this time it isn’t Rihanna singing. Instead, it’s a sample of Nina Simone herself. It reminds me of Kanye’s song “Gold Digger“, which includes Jamie Foxx imitating Ray Charles, followed by a sample of Ray Charles himself. Kanye is showing off here. It would be a major coup for most producers to get Rihanna to sing on a track, and it would be an equally major coup to be able to license a Nina Simone sample, not to mention requiring the chutzpah to even want to sample such a sacred and iconic figure. Few people besides Kanye could afford to use both Rihanna and Nina Simone singing the same hook, and no one else would dare. I don’t think it’s just a conspicuous show of industry clout, either; Kanye wants you to feel the contrast between Rihanna’s heavily processed purr and Nina Simone’s stark, preacherly tone.

Here’s a diagram of all the samples and samples of samples in “Famous.”

In this one track, we have a dense interplay of rhythms, harmonies, timbres, vocal styles, and intertextual meaning, not to mention the complexities of cultural context. This is why hip-hop is interesting.

You probably have a good intuitive idea of what hip-hop is, but there’s plenty of confusion around the boundaries. What are the elements necessary for music to be hip-hop? Does it need to include rapping over a beat? When blues, rock, or R&B singers rap, should we retroactively consider that to be hip-hop? What about spoken-word poetry? Does hip-hop need to include rapping at all? Do singers like Mary J. Blige and Aaliyah qualify as hip-hop? Is Run-DMC’s version of “Walk This Way” by Aerosmith hip-hop or rock? Is “Love Lockdown” by Kanye West hip-hop or electronic pop? Do the rap sections of “Rapture” by Blondie or “Shake It Off” by Taylor Swift count as hip-hop?

If a single person can be said to have laid the groundwork for hip-hop, it’s James Brown. His black pride, sharp style, swagger, and blunt directness prefigure the rapper persona, and his records are a bottomless source of classic beats and samples. The HBO James Brown documentary is a must-watch.

Wikipedia lists hip-hop’s origins as including funk, disco, electronic music, dub, R&B, reggae, dancehall, rock, jazz, toasting, performance poetry, spoken word, signifyin’, The Dozens, griots, scat singing, and talking blues. People use the terms hip-hop and rap interchangeably, but hip-hop and rap are not the same thing. The former is a genre; the latter is a technique. Rap long predates hip-hop–you can hear it in classical, rock, R&B, swing, jazz fusion, soul, funk, country, and especially blues, especially especially the subgenre of talking blues. Meanwhile, it’s possible to have hip-hop without rap. Nearly all current pop and R&B are outgrowths of hip-hop. Turntablists and controllerists have turned hip-hop into a virtuoso instrumental music.

It’s sometimes said that rock is European harmony combined with African rhythm. Rock began as dance music, and rhythm continues to be its most important component. This is even more true of hip-hop, where harmony is minimal and sometimes completely absent. More than any other music of the African diaspora, hip-hop is a delivery system for beats. These beats have undergone some evolution over time. Early hip-hop was built on funk, the product of what I call The Great Cut-Time Shift, as the underlying pulse of black music shifted from eighth notes to sixteenth notes. Current hip-hop is driving a Second Great Cut-Time Shift, as the average tempo slows and the pulse moves to thirty-second notes.
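To put rough numbers on the cut-time shifts (the tempos here are illustrative guesses, not measurements of any particular track): when the tempo halves and the pulse subdivision doubles, the surface rate of the hi-hats stays in the same ballpark, or even speeds up, which is why a 70 BPM trap beat can feel busier than a 100 BPM funk groove.

```python
def subdivisions_per_second(bpm, subdivisions_per_beat):
    """How many pulse subdivisions go by each second at a given tempo."""
    return bpm / 60 * subdivisions_per_beat

# Hypothetical comparison: a funk groove at 100 BPM with a sixteenth-note
# pulse (4 subdivisions per beat) versus a trap groove at 70 BPM with
# hi-hats rattling in thirty-second notes (8 subdivisions per beat).
funk = subdivisions_per_second(100, 4)
trap = subdivisions_per_second(70, 8)
print(round(funk, 2), round(trap, 2))  # 6.67 9.33
```

The slower tempo actually delivers more events per second, which matches how the music feels: glacial on the downbeats, frantic on the surface.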

Like all other African-American vernacular music, hip-hop uses extensive syncopation, most commonly in the form of a backbeat. You can hear the blues musician Taj Mahal teach a German audience how to clap on the backbeat. (“Schvartze” is German for “black.”) Hip-hop has also absorbed a lot of Afro-Cuban rhythms, like the omnipresent son clave. This traditional Afro-Cuban rhythm is everywhere in hip-hop: in the drums, of course, but also in the rhythms of bass, keyboards, horns, vocals, and everywhere else. You can hear son clave in the snare drum part in “WTF” by Missy Elliott.

The NYU Music Experience Design Lab created the Groove Pizza app to help you visualize and interact with rhythms like the ones in hip-hop beats. You can use it to explore classic beats or more contemporary trap beats. Hip-hop beats come from three main sources: drum machines, samples, or (least commonly) live drummers.
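Under the hood, a rhythm necklace like the Groove Pizza’s is just a cycle of equal time-steps, some of which are hits. Here’s a minimal Python sketch; the 16-step layout and the particular step numbers for son clave are my own rendering, shown flat rather than as a circle.

```python
# Son clave (3-2) on a 16-step grid: hits on steps 1, 4, 7, 11, and 13.
# (Zero-indexed below; the Groove Pizza would draw these as points on a circle.)
SON_CLAVE = {0, 3, 6, 10, 12}

def necklace(hits, steps=16):
    """Render one cycle of the pattern: 'x' for a hit, '.' for a rest."""
    return "".join("x" if i in hits else "." for i in range(steps))

print(necklace(SON_CLAVE))  # x..x..x...x.x...
```

Looping the string end-to-start is what makes it a “necklace” rather than a line: the pattern has no inherent beginning, which is exactly the insight the circular representation makes visible.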

Hip-hop was a DJ medium before emcees became the main focus. Party DJs in the disco era looped the funkiest, most rhythm-intensive sections of the records they were playing, and sometimes improvised toasts on top. Sampling and manipulating recordings has become effortless in the computer age, but doing it with vinyl records requires considerable technical skill. In the movie Wild Style, you can see Grandmaster Flash beat juggle and scratch “God Make Me Funky” by the Headhunters and “Take Me To The Mardi Gras” by Bob James (though the latter song had to be edited out of the movie for legal reasons.)

The creative process of making a modern pop recording is very different from composing on paper or performing live. Hip-hop is an art form about tracks, and the creativity is only partially in the songs and the performances. A major part of the art form is the creation of sound itself. It’s the timbre and space that makes the best tracks come alive as much as any of the “musical” components. The recording studio gives you control over the finest nuances of the music that live performers can only dream of. Most of the music consists of synths and samples that are far removed from a “live performance.” The digital studio erases the distinction between composition, improvisation, performance, recording and mixing. The best popular musicians are the ones most skilled at “playing the studio.”

Hip-hop has drawn much inspiration from the studio techniques of dub producers, who perform mixes of pre-existing multitrack tape recordings by literally playing the mixing desk. When you watch The Scientist mix Ted Sirota’s “Heavyweight Dub,” you can see him shaping the track by turning different instruments up and down and by turning the echo effect on and off. Like dub, hip-hop is usually created from scratch in the studio. Brian Eno describes the studio as a compositional tool, and hip-hop producers would agree.

Aside from the human voice, the most characteristic sounds in hip-hop are the synthesizer, the drum machine, the turntable, and the sampler. The skills needed by a hip-hop producer are quite different from the ones involved in playing traditional instruments or recording on tape. Rock musicians and fans are quick to judge electronic musicians like hip-hop producers for not being “real musicians” because sequencing electronic instruments appears to be easier to learn than guitar or drums. Is there something lazy or dishonest about hip-hop production techniques? Is the guitar more of a “real” instrument than the sampler or computer? Are the Roots “better” musicians because they incorporate instruments?

Maybe we discount the creative prowess of hip-hop producers because we’re unfamiliar with their workflow. Fortunately, there’s a growing body of YouTube videos documenting various aspects of the process.

Before affordable digital samplers became available in the late 1980s, early hip-hop DJs and producers did most of their audio manipulation with turntables. Record scratching demands considerable skill and practice, and it has evolved into a virtuoso form analogous to bebop saxophone or metal guitar shredding.

Hip-hop is built on a foundation of existing recordings, repurposed and recombined. Samples might be individual drum hits, or entire songs. Even hip-hop tracks without samples very often started with them; producers often replace copyrighted material with soundalike “original” beats and instrumental performances for legal reasons. Turntables and samplers make it possible to perform recordings like instruments.

The Amen break, a six-second drum solo, is one of the most important samples of all time. It’s been used in uncountably many hip-hop songs, and is the basis for entire subgenres of electronic music. Ali Jamieson gives an in-depth exploration of the Amen.

There are few artistic acts more controversial than sampling. Is it a way to enter into a conversation with other artists? An act of liberation against the forces of corporatized mass culture? A form of civil disobedience against a stifling copyright regime? Or is it a bunch of lazy hacks stealing ideas, profiting off other musicians’ hard work, and devaluing the concept of originality? Should artists be able to control what happens to their work? Is complete originality desirable, or even possible?

We look to hip-hop to tell us the truth, to be real, to speak to feelings that normally go unspoken. At the same time, we expect rappers to be larger than life, to sound impossibly good at all times, and to live out a fantasy life. And many of our favorite artists deliberately alter their appearance, race, gender, nationality, and even species. To make matters more complicated, we mostly experience hip-hop through recordings and videos, where artificiality is the nature of the medium. How important is authenticity in this music? To what extent is it even possible?

The “realness” debate in hip-hop reached its apogee with the controversy over Auto-Tune. Studio engineers have been using computer software to correct singers’ pitch since the early 1990s, but the practice only became widely known when T-Pain overtly used exaggerated Auto-Tune as a vocal effect rather than a corrective. The “T-Pain effect” makes it impossible to sing a wrong note, though at the expense of making the singer sound like a robot from the future. Is this the death of singing as an art form? Is it cheating to rely on software like this? Does it bother you that Kanye West can have hits as a singer when he can barely carry a tune? Does it make a difference to learn that T-Pain has flawless pitch when he turns off the Auto-Tune?

Hip-hop is inseparable from its social, racial and political environment. For example, you can’t understand eighties hip-hop without understanding New York City in the pre-Giuliani era. Eric B and Rakim capture it perfectly in the video for “I Ain’t No Joke.”

Given that hip-hop is the voice of the most marginalized people in America and the world, why is it so compelling to everyone else? Timothy Brennan argues that the musical African diaspora of which hip-hop is a part helps us resist imperialism through secular devotion. Brennan thinks that America’s love of African musical practice is related to an interest in African spiritual practice. We’re unconsciously drawn to the musical expression of African spirituality as a way of resisting oppressive industrial capitalism and Western hegemony. It isn’t just the defiant stance of the lyrics that’s doing the resisting. The beats and sounds themselves are doing the major emotional work, restructuring our sense of time, imposing a different grid system onto our experience. I would say that makes for some pretty interesting music.

Visualizing trap beats with the Groove Pizza

In a previous post, I used the Groove Pizza to visualize some classic hip-hop beats. But the kids are all about trap beats right now, which work differently from the funk-based boom-bap of my era.

IT'S A TRAP

From the dawn of jazz until about 1960, African-American popular music was based on an eighth note pulse. The advent of funk brought with it a shift to the sixteenth note pulse. Now we’re undergoing another shift, as Southern hip-hop is moving the rest of popular music over to a 32nd note pulse. The tempos have been slowing down as the beat subdivisions get finer. This may all seem like meaningless abstraction, but the consequences become real if you want to program beats of your own.

Back in the 90s, the template for a hip-hop beat looked like a planet of 16th notes orbited by kicks and snares. Click the image below to hear a simple “planet funk” pattern in the Groove Pizza. Each slice of the pizza is a sixteenth note, and the whole pizza is one bar long.
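A pizza like this can be thought of as a stack of binary rings, one per drum sound. Here’s a minimal Python sketch of a 16-slice “planet funk” layout; the specific hits are an assumption on my part (hats on every sixteenth, kick on beat one, snares on beats two and four), since the linked pattern isn’t spelled out in the text.

```python
# A "planet funk" pizza sketched as three 16-slice rhythm necklaces.
# The exact hits are an assumption: hats on every sixteenth, a kick
# on beat one, snares on beats two and four (slices 4 and 12).
SLICES = 16

hats  = [1] * SLICES
kick  = [1 if i == 0 else 0 for i in range(SLICES)]
snare = [1 if i in (4, 12) else 0 for i in range(SLICES)]

def necklace_row(name, hits):
    """Render one necklace as x's (hits) and .'s (rests)."""
    return f"{name:>5}: " + "".join("x" if h else "." for h in hits)

for name, hits in [("hat", hats), ("kick", kick), ("snare", snare)]:
    print(necklace_row(name, hits))
```

Because the lists are circular, slice 0 is both the start and the end of the loop, which is exactly what the pizza’s round shape is showing you.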

Planet Funk - 16th notes

(Music readers can also view it in Noteflight.)

You can hear the sixteenth note hi-hat pulse clearly in “So Fresh So Clean” by OutKast.

So Fresh So Clean

View in Noteflight

Trap beats have the same basic skeleton as older hip-hop styles: a kick on beat one, snares on beats two and four, and hi-hats on some or all of the beats making up the underlying pulse. However, in trap, that pulse is twice as fast as in 90s hip-hop, 32nd notes rather than sixteenths. This poses an immediate practical problem: a lot of drum machines don’t support such a fine grid resolution. For example, the interface of the ubiquitous TR-808 is sixteen buttons, one for each sixteenth note. On the computer, it’s less of an issue because you can set the grid resolution to be whatever you want, but even so, 32nd notes are a hassle. So what do you do?

The trap producer’s workaround is to double the song tempo, thereby turning sixteenths into effective 32nds. To get a trap beat at 70 beats per minute, you set the tempo to 140. Your 808 grid becomes half a bar of 32nd notes, rather than a full bar of sixteenths. And instead of putting your snares on beats two and four, you put them on beat three.
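The arithmetic of the workaround is simple enough to sketch in a few lines of Python. The 70/140 bpm figures come from the example above; the helper function is mine, not anything in the Groove Pizza or a real DAW.

```python
# Sketch of the trap tempo-doubling trick: to get an implied tempo
# with a 32nd-note grid on a 16-step machine, run the session at
# double speed. The 70/140 bpm numbers are the example from the text.

def session_settings(implied_bpm):
    """Return the doubled session tempo and where the lone backbeat
    snare lands (beat 3 of the doubled bar, not beats 2 and 4)."""
    session_bpm = implied_bpm * 2
    snare_beat = 3  # halfway through the doubled bar
    return session_bpm, snare_beat

bpm, snare = session_settings(70)
print(bpm, snare)  # prints: 140 3
```

At the doubled tempo, one pass through the sixteen steps covers only half a bar of the groove as you actually hear it, which is why the snare moves to beat three.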

Here’s a generic trap beat I made. Each pizza slice is a 32nd note, and the whole pizza is half a bar.

View in Noteflight

Trap beats don’t use swing. Instead, they create rhythmic interest through syncopation, accenting unexpected weak beats. On the Groove Pizza, the weak beats are the ones in between the north, south, east and west. Afro-Cuban music is a good source of syncopated patterns. The snare pattern in the last quarter of my beat is a rotation of son clave, and the kick pattern is somewhat clave-like as well.
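Rotating a rhythm necklace is easy to express in code, since a rotation is just a circular shift of the pattern. The sixteen-step 3-2 son clave below (hits on steps 0, 3, 6, 10, 12) is the standard pattern; which particular rotation my beat uses, and the shift amount shown, are illustrative rather than taken from the text.

```python
# A rhythm-necklace rotation, as in the snare pattern described above.
# The 16-step son clave (hits on steps 0, 3, 6, 10, 12) is the
# standard 3-2 pattern; the rotation amount here is illustrative.

def rotate(necklace, steps):
    """Rotate a circular pattern by `steps` positions."""
    return necklace[steps:] + necklace[:steps]

son_clave = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]

print(rotate(son_clave, 3))
```

Every rotation keeps the same spacing between hits, so the pattern stays recognizably clave-like no matter where in the bar it starts.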

It's A Trap - last bar

Now let’s take a look at two real-life trap beats. First, there’s the inescapable “Trap Queen” by Fetty Wap.

Here’s a simplified version of the beat. (“Trap Queen” uses a few 64th notes on the hi-hat, which you can’t yet do on the Groove Pizza.)

Trap Queen simplified

View in Noteflight

The beat has an appealing symmetry. In each half bar, the kick and snare each play a strong beat and a weak beat. The hi-hat pattern is mostly sixteenth notes, with just a few thirty-second notes as embellishments. The location of those embellishments changes from one half-bar to the next. It’s a simple technique, and it’s effective.

My other real-world example is “Panda” by Desiigner.

Here’s the beat on the GP, once again simplified a bit.

View in Noteflight

Unlike my generic trap beat, “Panda” doesn’t have any hi-hats on the 32nd notes at all. It feels more like an old-school sixteenth note pulse at a very slow tempo. The really “trappy” part comes at the very end, with a quick pair of kick drums on the last two 32nd notes. While the lawn-sprinkler effect of doubletime hi-hats has become a cliche, doubletime kick rolls are still startlingly fresh (at least to my ears).

To make authentic trap beats, you’ll need a more full-featured tool than the Groove Pizza. For one thing, you need 64th notes and triplets. Also, trap isn’t just about the placement of the drum hits; it’s about specific sounds. In addition to closed hi-hats, you need open hi-hats and crash cymbals. You want more than one snare or handclap, and maybe multiple kicks too. You’d also want to be able to alter the pitch of your drums. The best resource to learn more, as always, is the music itself.

Composing in the classroom

The hippest music teachers help their students create original music. But what exactly does that mean? What even is composition? In this post, I take a look at two innovators in music education and try to arrive at an answer.

Matt McLean is the founder of the amazing Young Composers and Improvisers Workshop. He teaches his students composition using a combination of Noteflight, an online notation editor, and the MusEDLab‘s own aQWERTYon, a web app that turns your regular computer keyboard into an intuitive musical interface.

http://www.yciw.net/1/the-interface-i-wish-noteflight-had-is-here-aqwertyon/

Matt explains:

Participating students in YCIW as well as my own students at LREI have been using Noteflight for over 6 years to compose music for chamber orchestras, symphony orchestras, jazz ensembles, movie soundtracks, video game music, school band and more – hundreds of compositions.

Before the advent of the aQWERTYon, students needed to enter music into Noteflight either by clicking with the mouse or by playing notes in with a MIDI keyboard. The former method is accessible but slow; the latter method is fast but requires some keyboard technique. The aQWERTYon combines the accessibility of the mouse with the immediacy of the piano keyboard.
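The core trick, mapping a row of QWERTY keys onto a scale so that every keypress lands in key, can be sketched in a few lines of Python. The key row and C major scale here are illustrative assumptions, not the aQWERTYon’s actual layout.

```python
# The core idea of an aQWERTYon-style interface, sketched: map a row
# of QWERTY keys onto a scale so any keypress yields an in-key note.
# The key row and the C major mapping are illustrative assumptions,
# not the app's actual layout.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]
KEY_ROW = "asdfghjk"  # one physical row of keys

def key_to_note(key):
    """Return the scale note for a key, wrapping up at the octave."""
    i = KEY_ROW.index(key)
    return C_MAJOR[i % len(C_MAJOR)]

print(key_to_note("a"), key_to_note("k"))  # prints: C C
```

Since wrong notes are impossible by construction, the player can focus on rhythm and contour, which is precisely the accessibility-plus-immediacy combination described above.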

For the first time there is a viable way for every student to generate and notate her ideas in a tactile manner with an instrument that can be played by all. We founded Young Composers & Improvisors Workshop so that every student can have the experience of composing original music. Much of my time has been spent exploring ways to emphasize the “experiencing” part of this endeavor. Students had previously learned parts of their composition on instruments after their piece was completed. Also, students with piano or guitar skills could work out their ideas prior to notating them. But efforts to incorporate MIDI keyboards or other interfaces with Noteflight in order to give students a way to perform their ideas into notation always fell short.

The aQWERTYon lets novices try out ideas the way that more experienced musicians do: by improvising with an instrument and reacting to the sounds intuitively. It’s possible to compose without using an instrument at all, using a kind of sudoku-solving method, but it’s not likely to yield good results. Your analytical consciousness, the part of your mind that can write notation, is also its slowest and dumbest part. You really need your emotions, your ear, and your motor cortex involved. Before computers, you needed considerable technical expertise to be able to improvise musical ideas, and remember them long enough to write them down. The advent of recording and MIDI removed a lot of the friction from the notation step, because you could preserve your ideas just by playing them. With the aQWERTYon and interfaces like it, you can do your improvisation before learning any instrumental technique at all.

Student feedback suggests that kids like being able to play along to previously notated parts as a way to find new parts to add to their composition. As a teacher I am curious to measure the effect of students being able to practice their ideas at home using aQWERTYon and then sharing their performances before using their idea in their composition. It is likely that this will create a stronger connection between the composer and her musical idea than if she had only notated it first.

Those of us who have been making original music in DAWs are familiar with the pleasures of creating ideas through playful jamming. It feels like a major advance to put that experience in the hands of elementary school students.

Matt uses progressive methods to teach a traditional kind of musical expression: writing notated scores that will then be performed live by instrumentalists. Matt’s kids are using futuristic tools, but the model for their compositional technique is the one established in the era of Beethoven.

Beethoven

(I just now noticed that the manuscript Beethoven is holding in this painting is in the key of D-sharp. That’s a tough key to read!)

Other models of composition exist. There’s the Lennon and McCartney method, which doesn’t involve any music notation. Like most untrained rock musicians, the Beatles worked from lyric sheets with chords written on them as a mnemonic. The “lyrics plus chords” method continues to be the standard for rock, folk and country musicians. It’s a notation system that’s only really useful if you already have a good idea of how the song is supposed to sound.

Lennon and McCartney writing

Lennon and McCartney originally wrote their songs to be performed live for an audience. They played in clubs for several years before ever entering a recording studio. As their career progressed, however, the Beatles stopped performing live, and began writing with the specific goal of creating studio recordings. Some of those later Beatles tunes would be difficult or impossible to perform live. Contemporary artists like Missy Elliott and Pharrell Williams have pushed the Beatles’ idea to its logical extreme: songs existing entirely within the computer as sequences of samples and software synths, with improvised vocals arranged into shape after being recorded. For Missy and Pharrell, creating the score and the finished recording are one and the same act.

Pharrell and Missy Elliott in the studio

Is it possible to teach the Missy and Pharrell method in the classroom? Alex Ruthmann, MusEDLab founder and my soon-to-be PhD advisor, documented his method for doing so in 2007.

As a middle school general music teacher, I’ve often wrestled with how to engage my students in meaningful composing experiences. Many of the approaches I’d read about seemed disconnected from the real-world musicality I saw daily in the music my students created at home and what they did in my classes. This disconnect prompted me to look for ways of bridging the gap between the students’ musical world outside music class and their in-class composing experiences.

It’s an axiom of constructivist music education that students will be most motivated to learn music that’s personally meaningful to them. There are kids out there for whom notated music performed on instruments is personally meaningful. But the musical world outside music class usually follows the Missy and Pharrell method.

[T]he majority of approaches to teaching music with technology center around notating musical ideas and are often rooted in European classical notions of composing (for example, creating ABA pieces, or restricting composing tasks to predetermined rhythmic values). These approaches require students to have a fairly sophisticated knowledge of standard music notation and a fluency working with rhythms and pitches before being able to explore and express their musical ideas through broader musical dimensions like form, texture, mood, and style.

Noteflight imposes some limitations on these musical dimensions. Some forms, textures, moods and styles are difficult to capture in standard notation. Some are impossible. If you want to specify a particular drum machine sound combined with a sampled breakbeat, or an ambient synth pad, or a particular stereo image, standard notation is not the right tool for the job.

Common approaches to organizing composing experiences with synthesizers and software often focus on simplified classical forms without regard to whether these forms are authentic to the genre or to technologies chosen as a medium for creation.

There is nothing wrong with teaching classical forms. But when making music with computers, the best results come from making the music that’s idiomatic to computers. Matt McLean goes to extraordinary lengths to have student compositions performed by professional musicians, but most kids will be confined to the sounds made by the computer itself. Classical forms and idioms sound awkward at best when played by the computer, but electronic music sounds terrific.

The middle school students enrolled in these classes came without much interest in performing, working with notation, or studying the classical music canon. Many saw themselves as “failed” musicians, placed in a general music class because they had not succeeded in or desired to continue with traditional performance-based music classes. Though they no longer had the desire to perform in traditional school ensembles, they were excited about having the opportunity to create music that might be personally meaningful to them.

Here it is, the story of my life as a music student. Too bad I didn’t go to Alex’s school.

How could I teach so that composing for personal expression could be a transformative experience for students? How could I let the voices and needs of the students guide lessons for the composition process? How could I draw on the deep, complex musical understandings that these students brought to class to help them develop as musicians and composers? What tools could I use to quickly engage them in organizing sound in musical and meaningful ways?

Alex draws parallels between writing music and writing English. Both are usually done alone at a computer, and both pose a combination of technical and creative challenges.

Musical thinking (thinking in sound) and linguistic thinking (thinking using language phrases and ideas) are personal creative processes, yet both occur within social and cultural contexts. Noting these parallels, I began to think about connections between the whole-language approach to writing used by language arts teachers in my school and approaches I might take in my music classroom.

In the whole-language approach to writing, students work individually as they learn to write, yet are supported through collaborative scaffolding: support from their peers and the teacher. At the earliest stages, students tell their stories and attempt to write them down using pictures, drawings, and invented notation. Students write about topics that are personally meaningful to them, learning from their own writing and from the writing of their peers, their teacher, and their families. They also study literature of published authors. Classes that take this approach to teaching writing are often referred to as “writers’ workshops”… The teacher facilitates [students’] growth as writers through minilessons, share sessions, and conferring sessions tailored to meet the needs that emerge as the writers progress in their work. Students’ original ideas and writings often become an important component of the curriculum. However, students in these settings do not spend their entire class time “freewriting.” There are also opportunities for students to share writing in progress and get feedback and support from teacher and peers. Revision and extension of students’ writing occur throughout the process. Lessons are not organized by uniform, prescriptive assignments, but rather are tailored to the students’ interests and needs. In this way, the direction of the curriculum and successive projects are informed by the students’ needs as developing writers.

Alex set about creating an equivalent “composers’ workshop,” combining composition, improvisation, and performing with analytical listening and genre studies.

The broad curricular goal of the composers’ workshop is to engage students collaboratively in:

  • Organizing and expressing musical ideas and feelings through sound with real-world, authentic reasons for and means of composing
  • Listening to and analyzing musical works appropriate to students’ interests and experiences, drawn from a broad spectrum of sources
  • Studying processes of experienced music creators through listening to, performing, and analyzing their music, as well as being informed by accounts of the composition process written by these creators

Alex recommends production software with strong loop libraries so students can make high-level musical decisions with “real” sounds immediately.

While students do not initially work directly with rhythms and pitch, working with loops enables students to begin composing through working with several broad musical dimensions, including texture, form, mood, and affect. As our semester progresses, students begin to add their own original melodies and musical ideas to their loop-based compositions through work with synthesizers and voices.

As they listen to musical exemplars, I try to have students listen for the musical decisions and understand the processes that artists, sound engineers, and producers make when crafting their pieces. These listening experiences often open the door to further dialogue on and study of the multiplicity of musical roles that are a part of creating today’s popular music. Having students read accounts of the steps that audio engineers, producers, songwriters, film-score composers, and studio musicians go through when creating music has proven to be informative and has helped students learn the skills for more accurately expressing the musical ideas they have in their heads.

Alex shares my belief in project-based music technology teaching. Rather than walking through the software feature-by-feature, he plunges students directly into a creative challenge, trusting them to pick up the necessary software functionality as they go. Rather than tightly prescribe creative approaches, Alex observes the students’ explorations and uses them as opportunities to ask questions.

I often ask students about their composing and their musical intentions to better understand how they create and what meanings they’re constructing and expressing through their compositions. Insights drawn from these initial dialogues help me identify strategies I can use to guide their future composing and also help me identify listening experiences that might support their work or techniques they might use to achieve their musical ideas.

Some musical challenges are more structured: Alex does “genre studies” where students have to pick out the qualities that define techno or rock or film scores, and then create using those idioms. This is especially useful for younger students who may not have a lot of experience listening closely to a wide range of music.

Rather than devoting entire classes to demonstrations or lectures, Alex prefers to devote the bulk of classroom time to working on the projects, offering “minilessons” to smaller groups or individuals as the need arises.

Teaching through minilessons targeted to individuals or small groups of students has helped to maintain the musical flow of students’ compositional work. As a result, I can provide more individual feedback and support to students as they compose. The students themselves also offer their own minilessons to peers on advanced features of the software, such as how to record a vocal track, add a fade-in or fade-out, or copy their musical material. These technology skills are taught directly to a few students, who then become the experts in that skill, responsible for teaching other students in the class who need the skill.

Not only does the peer-to-peer learning help with cultural authenticity, but it also gives students invaluable experience with the role of teacher.

One of my first questions is usually, “Is there anything that you would like me to listen for or know about before I listen?” This provides an opportunity for students to seek my help with particular aspects of their composing process. After listening to their compositions, I share my impressions of what I hear and offer my perspective on how to solve their musical problems. If students choose not to accept my ideas, that’s fine; after all, it’s their composition and personal expression… Use of conferring by both teacher and students fosters a culture of collaboration and helps students develop skills in peer scaffolding.

Alex recommends creating an online gallery of class compositions. This has become easier to implement since 2007 with the explosion of blog platforms like Tumblr, audio hosting tools like SoundCloud, and video hosts like YouTube. There are always going to be privacy considerations with such platforms, but there is no shortage of options to choose from.

Once a work is online, students can listen to and comment on these compositions at home outside of class time. Sometimes students post pieces in progress, but for the most part, works are posted when deemed “finished” by the composer. The online gallery can also be set up so students can hear works written by participants in other classes. Students are encouraged to listen to pieces published online for ideas to further their own work, to make comments, and to share these works with their friends and family. The real-world publishing of students’ music on the Internet seems to contribute to their motivation.

Assessing creative work is always going to be a challenge, since there’s no objective basis to assess it on. Alex looks at how well a student composer has met the goal of the assignment, and how well they have achieved their own compositional intent.

The word “composition” is problematic in the context of contemporary computer-based production. It carries the cultural baggage of Western Europe, the idea of music as having a sole identifiable author (or authors.) The sampling and remixing ethos of hip-hop and electronica are closer to the traditions of non-European cultures where music may be owned by everyone and no one. I’ve had good results bringing remixing into the classroom, having students rework each others’ tracks, or beginning with a shared pool of audio samples, or doing more complex collaborative activities like musical shares. Remixes are a way of talking about music via the medium of music, and remixes of remixes can make for some rich and deep conversation. The word “composition” makes less sense in this context. I prefer the broader term “production”, which includes both the creation of new musical ideas and the realization of those ideas in sound.

So far in this post, I’ve presented notation-based composition and loop-based production as if they’re diametrical opposites. In reality, the two overlap, and can be easily combined. A student can create a part as a MIDI sequence and then convert it to notation, or vice versa. The school band or choir can perform alongside recorded or sequenced tracks. Instrumental or vocal performances can be recorded, sampled, and turned into new works. Electronic productions can be arranged for live instruments, and acoustic pieces can be reconceived as electronica. If a hip-hop track can incorporate a sample of Duke Ellington, there’s no reason that sample couldn’t be performed by a high school jazz band. The possibilities are endless.

Rohan lays beats

The Ed Sullivan Fellows program is an initiative by the NYU MusEDLab connecting up-and-coming hip-hop musicians to mentors, studio time, and creative and technical guidance. Our session this past Saturday got off to an intense start, talking about the role of young musicians of color in a world of police brutality and Black Lives Matter. The Fellows are looking to Kendrick Lamar and Chance The Rapper to speak social and emotional truths through music. It’s a brave and difficult job they’ve taken on.

Eventually, we moved from heavy conversation into working on the Fellows’ projects, which this week involved branding and image. I was at kind of a loose end in this context, so I set up the MusEDLab’s Push controller and started playing around with it. Rohan, one of the Fellows, immediately gravitated to it, and understandably so.

Indigo lays beats

Rohan tried out a few drum sounds, then some synths. He quickly discovered a four-bar synth loop that he wanted to build a track around. He didn’t have any Ableton experience, however, so I volunteered to be his co-producer and operate the software for him.

We worked out some drum parts, first with a hi-hat and snare from the Amen break, and then a kick, clap and more hi-hats from Ableton’s C78 factory instrument. For bass, Rohan wanted that classic booming hip-hop sound you hear coming from car stereos in Brooklyn. He spotted the Hip-Hop Sub among the presets. We fiddled with it and he continued to be unsatisfied until I finally just put a brutal compressor on it, and then we got the sound he was hearing in his head.

While we were working, I had my computer connected to a Bluetooth speaker that was causing some weird and annoying system behavior. At one point, iTunes launched itself and started playing a random song under Rohan’s track, “I Can’t Realize You Love Me” by Duke Ellington and His Orchestra, featuring The Harlem Footwarmers and Sid Garry.

Rohan liked the combination of his beat and the Ellington song, so I sampled the opening four bars and added them to the mix. It took me several tries to match the keys, and I still don’t think I really nailed it, but the hip-hop kids have broad tolerance for chord clash, and Rohan was undisturbed.

Once we had the loops assembled, we started figuring out an arrangement. It took me a minute to figure out that when Rohan refers to a “bar,” he means a four-measure phrase. He’s essentially conflating hypermeasures with measures. I posted about it on Twitter later and got some interesting responses.

In a Direct Message, Latinfiddler also pointed out that Latin music calls two measures a “bar” because that’s the length of one cycle of the clave.

Thinking about it further, there’s yet another reason to conflate measures with hypermeasures, which is the broader cut-time shift taking place in hip-hop. All of the young hip-hop beatmakers I’ve observed lately work at half the base tempo of their DAW session. Rohan, being no exception, had the session tempo set to 125 bpm, but programmed a beat with an implied tempo of 62.5 bpm. He and his cohort put their backbeats on beat three, not beats two and four, so they have a base grid of thirty-second notes rather than sixteenth notes. A similar shift took place in the early 1960s when the swung eighth notes of jazz rhythm gave way to the swung sixteenth notes of funk.

Here’s Rohan’s track as of the end of our session:

By the time we were done working, the rest of the Fellows had gathered around and started freestyling. The next step is to record them rapping and singing on top. We also need to find someone to mix it properly. I understand aspects of hip-hop very well, but I mix amateurishly at best.

All the way around, I feel like I learn a ton about music whenever I work with young hip-hop musicians. They approach the placement of sounds in the meter in ways that would never occur to me. I’m delighted to be able to support them technically in realizing their ideas; it’s a privilege for me.