Design for Real Life – QWERTYBeats research

Writing assignment for Design For The Real World with Claire Kearney-Volpe and Diana Castro – research about a new rhythm interface for blind and low-vision novice musicians

Definition

I propose a new web-based accessible rhythm instrument called QWERTYBeats. Traditional instruments are highly accessible to blind and low-vision musicians. Electronic music production tools are not. I look at the history of accessible instruments and software interfaces, give an overview of current electronic music hardware and software, and discuss the design considerations underlying my project.

QWERTYBeats logo

Historical overview

Acoustic instruments give rich auditory and haptic feedback, and pose little obstacle to blind musicians. We need look no further for proof than the long history of iconic blind musicians like Ray Charles and Stevie Wonder. Even sighted instrumentalists rarely look at their instruments once they have attained a sufficient level of proficiency. Print music notation is not accessible, but Braille music notation has existed since the Braille system’s inception. Also, a great many musicians, both blind and sighted, play entirely by ear anyway.

Most of the academic literature around accessibility issues in music education focuses on wider adoption of and support for Braille notation. See, for example, Rush, T. W. (2015). Incorporating Assistive Technology for Students with Visual Impairments into the Music Classroom. Music Educators Journal, 102(2), 78–83. For electronic music, notation is rarely if ever a factor.

Electronic instruments pose some new accessibility challenges. They may use graphical interfaces with nested menus, complex banks of knobs and patch cables, and other visual control surfaces. Feedback may be given entirely with LED lights and small text labels. Nevertheless, blind users can master these devices with sufficient practice, memorization and assistance. For example, Stevie Wonder has incorporated synthesizers and drum machines in most of his best-known recordings.

Most electronic music creation is currently done not with instruments, but rather using specialized software applications called digital audio workstations (DAWs). Keyboards and other controllers are mostly used to access features of the software, rather than as standalone instruments. The most commonly-used DAWs include Avid Pro Tools, Apple Logic, Ableton Live, and Steinberg Cubase. Mobile DAWs are more limited than their desktop counterparts, but are nevertheless becoming robust music creation tools in their own right. Examples include Apple GarageBand and Steinberg Cubasis. Notated music is commonly composed using score editing software like Sibelius and Finale, whose functionality increasingly overlaps with DAWs, especially in regard to MIDI sequencing.

DAWs and notation editors pose steep accessibility challenges due to their graphical and spatial interfaces, not to mention their sheer complexity. In class, we were given a presentation by Leona Godin, a blind musician who records and edits audio using Pro Tools by means of VoiceOver. While it must have taken a heroic effort on her part to learn the program, Leona demonstrates that it is possible. However, some DAWs pose insurmountable problems even to very determined blind users because they do not use standard operating system elements, making them inaccessible via screen readers.

Technological interventions

There are no mass-market electronic music interfaces specifically geared toward blind or low-vision users. In this section, I discuss one product frequently hailed for its “accessibility” in the colloquial rather than blindness-specific sense, along with some more experimental and academic designs.

Ableton Push

Push layout for IMPACT Faculty Showcase

Ableton Live has become the DAW of choice for electronic music producers. Low-vision users can zoom in to the interface and modify the color scheme. However, Live is inaccessible via screen readers.

In recent years, Ableton has introduced a hardware controller, the Push, which is designed to make the software experience more tactile and instrument-like. The Push combines an eight-by-eight grid of LED-lit touch pads with banks of knobs, buttons and touch strips. It makes it possible to create, perform and record a piece of music from scratch without looking at the computer screen. In addition to drum programming and sampler performance, the Push also has an innovative melodic mode which maps scales onto the grid in such a way that users cannot play a wrong note. Other comparable products exist; see, for example, the Native Instruments Maschine.

There are many pad-based drum machines and samplers. Live’s main differentiator is its Session view, where the pads launch clips: segments of audio or MIDI that can vary in length from a single drum hit to an entire song. Clip launching is tempo-synced, so when you trigger a clip, playback is delayed until the start of the next measure (or whatever the quantization interval is). Clip launching is a forgiving and beginner-friendly performance method, because it removes the possibility of playing something out of rhythm. Like other DAWs, Live also gives rhythmic scaffolding in its software instruments by means of arpeggiators, delay and other tempo-synced features.
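The quantized launch rule is simple to state in code. Here is a minimal sketch (the function name is mine, not Ableton’s): given the transport position and a grid size in beats, a triggered clip starts at the next grid boundary.

```javascript
// Quantized clip launching: a clip triggered mid-measure does not start
// immediately; playback is deferred to the next quantization boundary.
// Times are in beats; quantization is the grid size in beats
// (e.g. 4 = one measure of 4/4, 1 = a single beat).
function nextLaunchTime(currentBeat, quantization) {
  return Math.ceil(currentBeat / quantization) * quantization;
}

// Pressing a pad at beat 5.3 with one-measure quantization schedules
// the clip for beat 8; a press exactly on a boundary launches immediately.
nextLaunchTime(5.3, 4); // → 8
nextLaunchTime(8, 4);   // → 8
```

This is what makes clip launching forgiving: whatever the player’s timing within the bar, the musical result always lands on the grid.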

The Push is a remarkable interface, but it has some shortcomings for blind users. First of all, it is expensive: $800 for the entry-level version, or $1400 with the full-featured software suite. Much of its feedback is visual, in the form of LED screens and color-coded lighting on the pads. It switches between multiple modes that can be challenging to distinguish even for sighted users. And, like the software it accompanies, the Push is highly complex, with a steep learning curve unsuited to novice users, blind or sighted.

The aQWERTYon

Most DAWs enable users to perform MIDI instruments on the QWERTY keyboard. The most familiar example is the Musical Typing feature in Apple GarageBand.

GarageBand musical typing

Musical Typing makes it possible to play software instruments without an external MIDI controller, which is convenient and useful. However, its layout counterintuitively follows the piano keyboard, which is an awkward fit for the computer keyboard. There is no easy way to distinguish the black and white keys, and even expert users find themselves inadvertently hitting the keyboard shortcut for recording while hunting for F-sharp.

The aQWERTYon is a web interface developed by the NYU Music Experience Design Lab specifically intended to address the shortcomings of Musical Typing.

aQWERTYon screencap

Rather than emulating the piano keyboard, the aQWERTYon draws its inspiration from the chord buttons of an accordion. It fills the entire keyboard with harmonically related notes in a way that supports discovery by naive users. Specifically, it maps scales across the rows of keys, staggered by intervals such that each column forms a chord within the scale. Root notes and scales can be set from pulldown menus within the interface, or preset using URL parameters. It can be played as a standalone instrument, or as a MIDI controller in conjunction with a DAW. Here is a playlist of music I created using the aQWERTYon and GarageBand or Ableton Live:

The aQWERTYon is a completely tactile experience. Sighted users can carefully match keys to note names using the screen, but more typically approach the instrument by feel, seeking out patterns on the keyboard by ear. A blind user would need assistance loading the aQWERTYon initially and setting the scale and root note parameters, but otherwise, it is perfectly accessible. The present project was motivated in large part by a desire to make exploration of rhythm as playful and intuitive as the aQWERTYon makes exploring chords and scales.
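The layout described above (scales across rows, columns forming chords) can be sketched in code. This is a simplified reconstruction, not the aQWERTYon’s actual source; I assume each row is staggered two scale degrees (a diatonic third) above the row below it, so that any column stacks into a triad.

```javascript
// Sketch of an aQWERTYon-style layout (simplified; the real app's mapping
// may differ). Each row holds the scale in order; successive rows are
// staggered by two scale degrees, so a column read bottom-to-top spells
// a stacked-thirds chord within the scale.
const C_MAJOR = [0, 2, 4, 5, 7, 9, 11]; // semitone offsets from the root

function noteAt(row, column, rootMidi = 60, scale = C_MAJOR) {
  const degree = column + 2 * row;            // stagger each row by a third
  const octave = Math.floor(degree / scale.length);
  const step = degree % scale.length;
  return rootMidi + 12 * octave + scale[step];
}

// Column 0, rows 0-2: MIDI 60, 64, 67 — a C major triad.
[0, 1, 2].map(row => noteAt(row, 0)); // → [60, 64, 67]
```

Column 0 yields C–E–G, column 1 yields D–F–A, and so on up the scale, which is why naive exploration by feel keeps producing harmonically sensible results.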

Soundplant

The QWERTY keyboard can be turned into a simple drum machine quite easily using a free program called Soundplant. The user simply drags an audio file onto a graphical key to have it triggered by the corresponding physical key. I was able to create a TR-808 kit in a matter of minutes:

Soundplant with 808 samples

Once it is set up and configured, Soundplant can be as effortlessly accessible as the aQWERTYon. However, it does not give the user any rhythmic assistance. Drumming in perfect time is an advanced musical skill, and playing drum machine samples out of time is not much more satisfying than banging on a metal bowl with a spoon. An ideal drum interface would offer beginners some of the rhythmic scaffolding and support that Ableton provides via Session view, arpeggiators, and the like.

The Groove Pizza

Drum machines and their software counterparts offer an alternative form of rhythmic scaffolding. The user sequences patterns in a time-unit box system or piano roll, and the computer performs those patterns flawlessly. The MusEDLab’s Groove Pizza app is a web-based drum sequencer that wraps the time-unit box system into a circle.

Groove Pizza - Bembe

The Groove Pizza was designed to make drum programming more intuitive by visualizing the symmetries and patterns inherent in musical-sounding rhythms. However, it is totally unsuitable for blind or low-vision users. Interaction is only possible through the mouse pointer or touch, and there are no standard user interface elements that can be parsed by screen readers.

Before ever considering designing for the blind, the MusEDLab had already considered the Groove Pizza’s limitations for younger children and users with special needs: there is no “live performance” mode, and there is always some delay in feedback between making a change in the drum pattern and hearing the result. We have been considering ways to make a rhythm interface that is more immediate, performance-oriented and tactile. One possible direction would be to create a hardware version of the Groove Pizza; indeed, one of the earliest prototypes was a hardware version built by Adam November out of a pizza box. However, hardware design is vastly more complex and difficult than software, so for the time being, software promises more immediate results.

Haenselmann-Lemelson-Effelsberg MIDI sequencer

This experimental interface is described in Haenselmann, T., Lemelson, H., & Effelsberg, W. (2011). A zero-vision music recording paradigm for visually impaired people. Multimedia Tools and Applications, 5, 1–19.

Haenselmann-Lemelson-Effelsberg MIDI sequencer

The authors create a new mode for a standard MIDI keyboard that maps piano keys to DAW functions like playback, quantization, track selection, and so on. They also add “earcons” (auditory icons) to give sonic feedback for functions that normally give only graphical feedback. For example, one earcon sounds when recording is enabled; another sounds for regular playback. This interface sounds promising, but there are significant obstacles to its adoption. While the authors have released the source code as a free download, a would-be user must be able to compile and run it. And that presumes they could obtain the code in the first place: the download link given in the paper is inactive. It is an all-too-common fate of academic projects to never see widespread use. By posting our projects on the web, the MusEDLab hopes to avoid this outcome.

Statement

Music education philosophy

My project is animated by a constructivist philosophy of music education, which operates by the following axiomatic assumptions:

  • Learning by doing is better than learning by being told.
  • Learning is not something done to you, but rather something done by you.
  • You do not get ideas; you make ideas. You are not a container that gets filled with knowledge and new ideas by the world around you; rather, you actively construct knowledge and ideas out of the materials at hand, building on top of your existing mental structures and models.
  • The most effective learning experiences grow out of the active construction of all types of things, particularly things that are personally or socially meaningful, that you develop through interactions with others, and that support thinking about your own thinking.

If an activity’s challenge level is beyond your ability, you experience anxiety. If your ability at the activity far exceeds the challenge, the result is boredom. Flow happens when challenge and ability are well-balanced, as seen in this diagram adapted from Csikszentmihalyi.

Flow

Music students face significant obstacles to flow at the left side of the Ability axis. Most instruments require extensive practice before it is possible to make anything that resembles “real” music. Electronic music presents an opportunity here, because even a complete novice can quickly produce music with a high degree of polish. It is empowering to use technologies that make it impossible to do anything wrong; they free you to begin exploring what you find to sound right. Beginners can be scaffolded in their pitch explorations with MIDI scale filters, Auto-Tune, and the configurable software keyboards in apps like Thumbjam and Animoog. Rhythmic scaffolding is rarer, but it can be had via Ableton’s quantized clip launcher, MIDI arpeggiators, and the Note Repeat feature on many drum machines.

QWERTYBeats proposal

My project takes drum machine Note Repeat as its jumping-off point. When Note Repeat is activated, holding down a drum pad triggers the corresponding sound at a particular rhythmic interval: quarter notes, eighth notes, and so on. On the Ableton Push, Note Repeat automatically syncs to the global tempo, making it effortless to produce musically satisfying rhythms. However, this mode has a major shortcoming: it applies globally to all of the drum pads. To my knowledge, no drum machine makes it possible to have, say, the snare drum playing every dotted eighth note while the hi-hat simultaneously plays every sixteenth note.

I propose a web application called QWERTYBeats that maps drums to the computer keyboard as follows:

  • Each row of the keyboard triggers a different drum/beatbox sound (e.g. kick, snare, closed hi-hat, open hi-hat).
  • Each column retriggers the sample at a different rhythmic interval (e.g. quarter note, dotted eighth note).
  • Circles dynamically divide into “pie slices” to show rhythmic values.

The rhythm values are listed below by column, each description followed by the note’s duration expressed as a fraction of a beat.

  1. quarter note (1)
  2. dotted eighth note (3/4)
  3. quarter note triplet (2/3)
  4. eighth note (1/2)
  5. dotted sixteenth note (3/8)
  6. eighth note triplet (1/3)
  7. sixteenth note (1/4)
  8. dotted thirty-second note (3/16)
  9. sixteenth note triplet (1/6)
  10. thirty-second note (1/8)
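Those fractions convert directly into retrigger intervals. A minimal sketch, assuming each fraction is measured against the quarter note (one beat) at the current tempo:

```javascript
// Retrigger interval per column (index 0 = the quarter-note column),
// as a fraction of a beat, mirroring the list above.
const COLUMN_FRACTIONS = [1, 3/4, 2/3, 1/2, 3/8, 1/3, 1/4, 3/16, 1/6, 1/8];

// One beat (quarter note) lasts 60/bpm seconds.
function repeatIntervalSeconds(columnIndex, bpm) {
  return (60 / bpm) * COLUMN_FRACTIONS[columnIndex];
}

// At 120 BPM: quarter note = 0.5 s, dotted eighth = 0.375 s,
// sixteenth = 0.125 s.
repeatIntervalSeconds(0, 120); // → 0.5
repeatIntervalSeconds(1, 120); // → 0.375
repeatIntervalSeconds(6, 120); // → 0.125
```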

By simply holding down different combinations of keys, users can attain complex syncopations and polyrhythms. If the app is synced to the tempo of a DAW or music playback, the user can perform good-sounding rhythms over any song that is personally meaningful to them.
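Holding down multiple keys is where the implementation gets subtle: the operating system fires auto-repeated keydown events while a key is held, and those must not restart the repeat loop. A sketch of the key-tracking logic (the function and its return values are mine, purely illustrative):

```javascript
// Track which keys are physically held so each press starts exactly one
// tempo-synced repeat loop. Returns 'start' on a fresh press, 'stop' on
// release, and null for OS auto-repeat events, which are ignored.
function trackKey(heldKeys, eventType, keyCode) {
  if (eventType === 'keydown') {
    if (heldKeys.has(keyCode)) return null; // auto-repeat: already held
    heldKeys.add(keyCode);
    return 'start';
  }
  heldKeys.delete(keyCode);
  return 'stop';
}

// Browser wiring (sketch; startRepeat/stopRepeat are hypothetical):
// document.addEventListener('keydown', e => {
//   if (trackKey(held, 'keydown', e.code) === 'start') startRepeat(e.code);
// });
// document.addEventListener('keyup', e => {
//   trackKey(held, 'keyup', e.code); stopRepeat(e.code);
// });
```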

The column layout leaves some unused keys in the upper right corner of the keyboard: “-”, “=”, “[”, “]”, etc. These can be reserved for setting the tempo and other UI elements.

The app defaults to Perform Mode, but clicking Make New Kit opens Sampler mode, where users can import or record their own drum sounds:

  • Keyboard shortcuts enable the user to select a sound, audition it, record, set start and end point, and set its volume level.
  • A login/password system enables users to save kits to the cloud where they can be accessed from any computer. Kits get unique URL identifiers, so users can also share them via email or social media.

It is my goal to make the app accessible to users with the widest possible diversity of abilities.

  • The entire layout will use plain text, CSS and JavaScript to support screen readers.
  • All user interface elements can be accessed via the keyboard: tab to change the keyboard focus, menu selections and parameter changes via the up and down arrows, and so on.

Perform Mode:

QWERTYBeats concept images - Perform mode

Sampler Mode:

sampler-mode

Mobile version

The present thought is to divide up the screen into a grid mirroring the layout of the QWERTY keyboard. User testing will determine whether this will produce a satisfying experience.

QWERTYDrum - mobile

Prototype

I created a prototype of the app using Ableton Live’s Session View.

QWERTYBeats - Ableton prototype

Here is a sample performance:

There is not much literature examining the impact of drum programming and other electronic rhythm sequencing on students’ subsequent ability to play acoustic drums, or to keep time more accurately in general. I can report anecdotally that my own time spent sequencing and programming drums improved my drumming and timekeeping enormously (and mostly inadvertently). I will continue to seek further support for the hypothesis that electronically assisted rhythm creation builds unassisted rhythmic ability. In the meantime, I am eager to prototype and test QWERTYBeats.

Why hip-hop is interesting

The title of this post is also the title of a tutorial I’m giving at ISMIR 2016 with Jan Van Balen and Dan Brown. The conference is organized by the International Society for Music Information Retrieval, and it’s the fanciest of its kind. You may be wondering what Music Information Retrieval is. MIR is a specialized field in computer science devoted to teaching computers to understand music, so they can transcribe it, organize it, find connections and similarities, and, maybe, eventually, create it.

So why are we going to talk to the MIR community about hip-hop? So far, the field has mostly studied music using the tools of Western classical music theory, which emphasizes melody and harmony. Hip-hop songs don’t tend to have much going on in either of those areas, which makes the genre seem like it’s either too difficult to study, or just too boring. But the MIR community needs to find ways to engage this music, if for no other reason than the fact that hip-hop is the most listened-to genre in the world, at least among Spotify listeners.

Hip-hop has been getting plenty of scholarly attention lately, but most of it has been coming from cultural studies. Which is fine! Hip-hop is culturally interesting. When humanities people do engage with hip-hop as an art form, they tend to focus entirely on the lyrics, treating them as a subgenre of African-American literature that just happens to be performed over beats. And again, that’s cool! Hip-hop lyrics have literary interest. If you’re interested in the lyrical side, we recommend this video analyzing the rhyming techniques of several iconic emcees. But what we want to discuss is why hip-hop is musically interesting, a subject which academics have given approximately zero attention to.

Much of what I find exciting (and difficult) about hip-hop can be found in Kanye West’s song “Famous” from his album The Life Of Pablo.

The song comes with a video, a ten minute art film that shows Kanye in bed sleeping after a group sexual encounter with his wife, his former lover, his wife’s former lover, his father-in-law turned mother-in-law, various of his friends and collaborators, Bill Cosby, George Bush, Taylor Swift, and Donald Trump. There’s a lot to say about this, but it’s beyond the scope of our presentation, and my ability to verbalize thoughts. The song has some problematic lyrics. Kanye drops the n-word in the very first line and calls Taylor Swift a bitch in the second. He also speculates that he might have sex with her, and that he made her famous. I find his language difficult and objectionable, but that too is beyond the scope. Instead, I’m going to focus on the music itself.

“Famous” has a peculiar structure, shown in the graphic below.

The track begins with a six-bar intro, Rihanna singing over a subtle gospel-flavored organ accompaniment in F-sharp major. She’s singing a few lines from “Do What You Gotta Do” by Jimmy Webb. This song has been recorded many times, but for Kanye’s listeners, the most significant one is by Nina Simone.

Next comes a four-bar groove, a more aggressive organ part over a drum machine beat, with Swizz Beatz exclaiming on top. The beat is a minimal funk pattern on just kick and snare, treated with cavernous artificial reverb. The organ riff is in F-sharp minor, which is an abrupt mode change so early in the song. It’s sampled from the closing section of “Mi Sono Svegliato E…Ho Chiuso Gli Occhi” by Il Rovescio della Medaglia, an Italian prog-rock band I had never heard of until I looked the sample up just now. The song is itself built around quotes of Bach’s Well-Tempered Clavier–Kanye loves sampling material built from samples.

Verse one continues the same groove, with Kanye alternating between aggressive rap and loosely pitched singing. Rap is widely supposed not to be melodic, but this idea collapses immediately under scrutiny. The border between rapping and singing is fluid, and most emcees cross it effortlessly. Even in “straight” rapping, though, the pitch sequences are deliberate and meaningful. The pitches might not fall on the piano keys, but they are melodic nonetheless.

The verse is twelve bars long, which is unusual; hip-hop verses are almost always eight or sixteen bars. The hook (the hip-hop term for chorus) comes next, Rihanna singing the same Jimmy Webb/Nina Simone quote over the F-sharp major organ part from the intro. Swizz Beatz does more interjections, including a quote of “Wake Up Mr. West,” a short skit on Kanye’s album Late Registration in which DeRay Davis imitates Bernie Mac.

Verse two, like verse one, is twelve bars on the F-sharp minor loop. At the end, you think Rihanna is going to come back in for the hook, but she only delivers the pickup. The section abruptly shifts into an F-sharp major groove over fuller drums, including a snare that sounds like a socket wrench. The lead vocal is a sample of “Bam Bam” by Sister Nancy, which is a familiar reference for hip-hop fans–I recognize it from “Lost Ones” by Lauryn Hill and “Just Hangin’ Out” by Main Source. The chorus means “What a bum deal.” Sister Nancy’s track is itself sample-based–like many reggae songs, it uses a pre-existing riddim or instrumental backing, and the chorus is a quote of the Maytals.

Kanye doesn’t just sample “Bam Bam”, he also reharmonizes it. Sister Nancy’s original is a I – bVII progression in C Mixolydian. Kanye pitch shifts the vocal to fit it over a I – V – IV – V progression in F-sharp major. He doesn’t just transpose the sample up or down a tritone; instead, he keeps the pitches close by changing their chord function. Here’s Sister Nancy’s original:

And here’s Kanye’s version:

The pitch shifting gives Sister Nancy the feel of a robot from the future, while the lo-fidelity recording places her in the past. It’s a virtuoso sample flip.

After 24 bars of the Sister Nancy groove, the track ends with the Jimmy Webb hook again. But this time it isn’t Rihanna singing. Instead, it’s a sample of Nina Simone herself. It reminds me of Kanye’s song “Gold Digger”, which includes Jamie Foxx imitating Ray Charles, followed by a sample of Ray Charles himself. Kanye is showing off here. It would be a major coup for most producers to get Rihanna to sing on a track, and it would be an equally major coup to be able to license a Nina Simone sample, not to mention requiring the chutzpah to even want to sample such a sacred and iconic figure. Few people besides Kanye could afford to use both Rihanna and Nina Simone singing the same hook, and no one else would dare. I don’t think it’s just a conspicuous show of industry clout, either; Kanye wants you to feel the contrast between Rihanna’s heavily processed purr and Nina Simone’s stark, preacherly tone.

Here’s a diagram of all the samples and samples of samples in “Famous.”

In this one track, we have a dense interplay of rhythms, harmonies, timbres, vocal styles, and intertextual meaning, not to mention the complexities of cultural context. This is why hip-hop is interesting.

You probably have a good intuitive idea of what hip-hop is, but there’s plenty of confusion around the boundaries. What are the elements necessary for music to be hip-hop? Does it need to include rapping over a beat? When blues, rock, or R&B singers rap, should we retroactively consider that to be hip-hop? What about spoken-word poetry? Does hip-hop need to include rapping at all? Do singers like Mary J. Blige and Aaliyah qualify as hip-hop? Is Run-DMC’s version of “Walk This Way” by Aerosmith hip-hop or rock? Is “Love Lockdown” by Kanye West hip-hop or electronic pop? Do the rap sections of “Rapture” by Blondie or “Shake It Off” by Taylor Swift count as hip-hop?

If a single person can be said to have laid the groundwork for hip-hop, it’s James Brown. His black pride, sharp style, swagger, and blunt directness prefigure the rapper persona, and his records are a bottomless source of classic beats and samples. The HBO James Brown documentary is a must-watch.

Wikipedia lists hip-hop’s origins as including funk, disco,
electronic music, dub, R&B, reggae, dancehall, rock, jazz, toasting, performance poetry, spoken word, signifyin’, The Dozens, griots, scat singing, and talking blues. People use the terms hip-hop and rap interchangeably, but hip-hop and rap are not the same thing. The former is a genre; the latter is a technique. Rap long predates hip-hop–you can hear it in classical, rock, R&B, swing, jazz fusion, soul, funk, country, and especially blues, especially especially the subgenre of talking blues. Meanwhile, it’s possible to have hip-hop without rap. Nearly all current pop and R&B are outgrowths of hip-hop. Turntablists and controllerists have turned hip-hop into a virtuoso instrumental music.

It’s sometimes said that rock is European harmony combined with African rhythm. Rock began as dance music, and rhythm continues to be its most important component. This is even more true of hip-hop, where harmony is minimal and sometimes completely absent. More than any other music of the African diaspora, hip-hop is a delivery system for beats. These beats have undergone some evolution over time. Early hip-hop was built on funk, the product of what I call The Great Cut-Time Shift, as the underlying pulse of black music shifted from eighth notes to sixteenth notes. Current hip-hop is driving a Second Great Cut-Time Shift, as the average tempo slows and the pulse moves to thirty-second notes.

Like all other African-American vernacular music, hip-hop uses extensive syncopation, most commonly in the form of a backbeat. You can hear the blues musician Taj Mahal teach a German audience how to clap on the backbeat. (“Schvartze” is German for “black.”) Hip-hop has also absorbed a lot of Afro-Cuban rhythms, like the omnipresent son clave. This traditional Afro-Cuban rhythm is everywhere in hip-hop: in the drums, of course, but also in the rhythms of bass, keyboards, horns, vocals, and everywhere else. You can hear son clave in the snare drum part in “WTF” by Missy Elliott.
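On a sixteen-step grid, the same time-unit box representation discussed earlier, son clave in its 3-2 orientation can be sketched like this:

```javascript
// 3-2 son clave on a 16-step (sixteenth-note) grid: five hits fall on
// steps 0, 3, 6, 10, and 12; zeros are rests.
const sonClave = [1,0,0,1, 0,0,1,0, 0,0,1,0, 1,0,0,0];

// Rendered in x/. notation: "x..x..x...x.x..."
const pattern = sonClave.map(s => (s ? 'x' : '.')).join('');
```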

The NYU Music Experience Design Lab created the Groove Pizza app to help you visualize and interact with rhythms like the ones in hip-hop beats. You can use it to explore classic beats or more contemporary trap beats. Hip-hop beats come from three main sources: drum machines, samples, or (least commonly) live drummers.

Hip-hop was a DJ medium before emcees became the main focus. Party DJs in the disco era looped the funkiest, most rhythm-intensive sections of the records they were playing, and sometimes improvised toasts on top. Sampling and manipulating recordings has become effortless in the computer age, but doing it with vinyl records requires considerable technical skill. In the movie Wild Style, you can see Grandmaster Flash beat juggle and scratch “God Make Me Funky” by the Headhunters and “Take Me To The Mardi Gras” by Bob James (though the latter song had to be edited out of the movie for legal reasons).

The creative process of making a modern pop recording is very different from composing on paper or performing live. Hip-hop is an art form about tracks, and the creativity is only partially in the songs and the performances. A major part of the art form is the creation of sound itself. It’s the timbre and space that makes the best tracks come alive as much as any of the “musical” components. The recording studio gives you control over the finest nuances of the music that live performers can only dream of. Most of the music consists of synths and samples that are far removed from a “live performance.” The digital studio erases the distinction between composition, improvisation, performance, recording and mixing. The best popular musicians are the ones most skilled at “playing the studio.”

Hip-hop has drawn much inspiration from the studio techniques of dub producers, who perform mixes of pre-existing multitrack tape recordings by literally playing the mixing desk. When you watch The Scientist mix Ted Sirota’s “Heavyweight Dub,” you can see him shaping the track by turning different instruments up and down and by turning the echo effect on and off. Like dub, hip-hop is usually created from scratch in the studio. Brian Eno describes the studio as a compositional tool, and hip-hop producers would agree.

Aside from the human voice, the most characteristic sounds in hip-hop are the synthesizer, the drum machine, the turntable, and the sampler. The skills needed by a hip-hop producer are quite different from the ones involved in playing traditional instruments or recording on tape. Rock musicians and fans are quick to judge electronic musicians like hip-hop producers for not being “real musicians” because sequencing electronic instruments appears to be easier to learn than guitar or drums. Is there something lazy or dishonest about hip-hop production techniques? Is the guitar more of a “real” instrument than the sampler or computer? Are the Roots “better” musicians because they incorporate instruments?

Maybe we discount the creative prowess of hip-hop producers because we’re unfamiliar with their workflow. Fortunately, there’s a growing body of YouTube videos that document various aspects of the process:

Before affordable digital samplers became available in the late 1980s, early hip-hop DJs and producers did most of their audio manipulation with turntables. Record scratching demands considerable skill and practice, and it has evolved into a virtuoso form analogous to bebop saxophone or metal guitar shredding.

Hip-hop is built on a foundation of existing recordings, repurposed and recombined. Samples might be individual drum hits, or entire songs. Even hip-hop tracks without samples very often started with them; producers often replace copyrighted material with soundalike “original” beats and instrumental performances for legal reasons. Turntables and samplers make it possible to perform recordings like instruments.

The Amen break, a six-second drum solo, is one of the most important samples of all time. It’s been used in uncountably many hip-hop songs, and is the basis for entire subgenres of electronic music. Ali Jamieson gives an in-depth exploration of the Amen.

There are few artistic acts more controversial than sampling. Is it a way to enter into a conversation with other artists? An act of liberation against the forces of corporatized mass culture? A form of civil disobedience against a stifling copyright regime? Or is it a bunch of lazy hacks stealing ideas, profiting off other musicians’ hard work, and devaluing the concept of originality? Should artists be able to control what happens to their work? Is complete originality desirable, or even possible?

We look to hip-hop to tell us the truth, to be real, to speak to feelings that normally go unspoken. At the same time, we expect rappers to be larger than life, to sound impossibly good at all times, and to live out a fantasy life. And many of our favorite artists deliberately alter their appearance, race, gender, nationality, and even species. To make matters more complicated, we mostly experience hip-hop through recordings and videos, where artificiality is the nature of the medium. How important is authenticity in this music? To what extent is it even possible?

The “realness” debate in hip-hop reached its apogee with the controversy over Auto-Tune. Studio engineers have been using computer software to correct singers’ pitch since the early 1990s, but the practice only became widely known when T-Pain overtly used exaggerated Auto-Tune as a vocal effect rather than a corrective. The “T-Pain effect” makes it impossible to sing a wrong note, though at the expense of making the singer sound like a robot from the future. Is this the death of singing as an art form? Is it cheating to rely on software like this? Does it bother you that Kanye West can have hits as a singer when he can barely carry a tune? Does it make a difference to learn that T-Pain has flawless pitch when he turns off the Auto-Tune?

Hip-hop is inseparable from its social, racial and political environment. For example, you can’t understand eighties hip-hop without understanding New York City in the pre-Giuliani era. Eric B and Rakim capture it perfectly in the video for “I Ain’t No Joke.”

Given that hip-hop is the voice of the most marginalized people in America and the world, why is it so compelling to everyone else? Timothy Brennan argues that the musical African diaspora of which hip-hop is a part helps us resist imperialism through secular devotion. Brennan thinks that America’s love of African musical practice is related to an interest in African spiritual practice. We’re unconsciously drawn to the musical expression of African spirituality as a way of resisting oppressive industrial capitalism and Western hegemony. It isn’t just the defiant stance of the lyrics that’s doing the resisting. The beats and sounds themselves are doing the major emotional work, restructuring our sense of time, imposing a different grid system onto our experience. I would say that makes for some pretty interesting music.

Visualizing trap beats with the Groove Pizza

In a previous post, I used the Groove Pizza to visualize some classic hip-hop beats. But the kids are all about trap beats right now, which work differently from the funk-based boom-bap of my era.

IT'S A TRAP

From the dawn of jazz until about 1960, African-American popular music was based on an eighth note pulse. The advent of funk brought with it a shift to the sixteenth note pulse. Now we’re undergoing another shift, as Southern hip-hop is moving the rest of popular music over to a 32nd note pulse. The tempos have been slowing down as the beat subdivisions get finer. This may all seem like meaningless abstraction, but the consequences become real if you want to program beats of your own.

Back in the 90s, the template for a hip-hop beat looked like a planet of 16th notes orbited by kicks and snares. Click the image below to hear a simple “planet funk” pattern in the Groove Pizza. Each slice of the pizza is a sixteenth note, and the whole pizza is one bar long.

Planet Funk - 16th notes

(Music readers can also view it in Noteflight.)

You can hear the sixteenth note hi-hat pulse clearly in “So Fresh So Clean” by OutKast.

So Fresh So Clean

View in Noteflight

Trap beats have the same basic skeleton as older hip-hop styles: a kick on beat one, snares on beats two and four, and hi-hats on some or all of the beats making up the underlying pulse. However, in trap, that pulse is twice as fast as in 90s hip-hop, 32nd notes rather than sixteenths. This poses an immediate practical problem: a lot of drum machines don’t support such a fine grid resolution. For example, the interface of the ubiquitous TR-808 is sixteen buttons, one for each sixteenth note. On the computer, it’s less of an issue because you can set the grid resolution to be whatever you want, but even so, 32nd notes are a hassle. So what do you do?

The trap producer’s workaround is to double the song tempo, thereby turning sixteenths into effective 32nds. To get a trap beat at 70 beats per minute, you set the tempo to 140. Your 808 grid becomes half a bar of 32nd notes, rather than a full bar of sixteenths. And instead of putting your snares on beats two and four, you put them on beat three.
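The arithmetic behind this workaround is easy to check. Here's a quick back-of-the-envelope sketch in Python (the `beat_time` function is my own, not part of any DAW's API):

```python
def beat_time(bpm, bar, beat, beats_per_bar=4):
    """Absolute time in seconds of a (bar, beat) position, 1-indexed."""
    beats_elapsed = (bar - 1) * beats_per_bar + (beat - 1)
    return beats_elapsed * 60.0 / bpm

# At 70 bpm, backbeat snares fall on beats 2 and 4 of a single bar.
slow = [beat_time(70, 1, 2), beat_time(70, 1, 4)]

# At the doubled session tempo of 140 bpm, the same physical moments
# are beat 3 of two consecutive bars.
fast = [beat_time(140, 1, 3), beat_time(140, 2, 3)]

print(all(abs(a - b) < 1e-9 for a, b in zip(slow, fast)))  # True
```

Beat 3 at 140 bpm lands at exactly the same moments as beats 2 and 4 at 70 bpm, which is why the workaround is invisible to the listener.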

Here’s a generic trap beat I made. Each pizza slice is a 32nd note, and the whole pizza is half a bar.

View in Noteflight

Trap beats don’t use swing. Instead, they create rhythmic interest through syncopation, accenting unexpected weak beats. On the Groove Pizza, the weak beats are the ones in between the north, south, east and west. Afro-Cuban music is a good source of syncopated patterns. The snare pattern in the last quarter of my beat is a rotation of son clave, and the kick pattern is somewhat clave-like as well.
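If you want to experiment with clave rotations yourself, the pattern math is simple. Here's a Python sketch; the step placement is the standard 3-2 son clave on a sixteenth-note grid, though which particular rotation I used in my beat isn't spelled out here:

```python
# Son clave (3-2) on a 16-step grid: 1 = hit, 0 = rest.
# The canonical hits fall on steps 0, 3, 6, 10 and 12.
SON_CLAVE = [1 if i in (0, 3, 6, 10, 12) else 0 for i in range(16)]

def rotate(pattern, steps):
    """Rotate a step pattern to the left by the given number of steps."""
    steps %= len(pattern)
    return pattern[steps:] + pattern[:steps]

# Every rotation keeps the same five hits and the same gaps between
# them, so it stays recognizably clave-like wherever it starts.
for n in range(16):
    assert sum(rotate(SON_CLAVE, n)) == 5
```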

It's A Trap - last bar

Now let’s take a look at two real-life trap beats. First, there’s the inescapable “Trap Queen” by Fetty Wap.

Here’s a simplified version of the beat. (“Trap Queen” uses a few 64th notes on the hi-hat, which you can’t yet do on the Groove Pizza.)

Trap Queen simplified

View in Noteflight

The beat has an appealing symmetry. In each half bar, the kick and snare each play a strong beat and a weak beat. The hi-hat pattern is mostly sixteenth notes, with just a few thirty-second notes as embellishments. The location of those embellishments changes from one half-bar to the next. It’s a simple technique, and it’s effective.

My other real-world example is “Panda” by Desiigner.

Here’s the beat on the GP, once again simplified a bit.

View in Noteflight

Unlike my generic trap beat, “Panda” doesn’t have any hi-hats on the 32nd notes at all. It feels more like an old-school sixteenth note pulse at a very slow tempo. The really “trappy” part comes at the very end, with a quick pair of kick drums on the last two 32nd notes. While the lawn-sprinkler effect of doubletime hi-hats has become a cliche, doubletime kick rolls are still startlingly fresh (at least to my ears).

To make authentic trap beats, you’ll need a more full-featured tool than the Groove Pizza. For one thing, you need 64th notes and triplets. Also, trap isn’t just about the placement of the drum hits; it’s about specific sounds. In addition to closed hi-hats, you need open hi-hats and crash cymbals. You want more than one snare or handclap, and maybe multiple kicks. You’d also want to be able to alter the pitch of your drums. The best resource to learn more, as always, is the music itself.

Composing in the classroom

The hippest music teachers help their students create original music. But what exactly does that mean? What even is composition? In this post, I take a look at two innovators in music education and try to arrive at an answer.

Matt McLean is the founder of the amazing Young Composers and Improvisers Workshop. He teaches his students composition using a combination of Noteflight, an online notation editor, and the MusEDLab‘s own aQWERTYon, a web app that turns your regular computer keyboard into an intuitive musical interface.

http://www.yciw.net/1/the-interface-i-wish-noteflight-had-is-here-aqwertyon/

Matt explains:

Participating students in YCIW as well as my own students at LREI have been using Noteflight for over 6 years to compose music for chamber orchestras, symphony orchestras, jazz ensembles, movie soundtracks, video game music, school band and more – hundreds of compositions.

Before the advent of the aQWERTYon, students needed to enter music into Noteflight either by clicking with the mouse or by playing notes in with a MIDI keyboard. The former method is accessible but slow; the latter method is fast but requires some keyboard technique. The aQWERTYon combines the accessibility of the mouse with the immediacy of the piano keyboard.
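The core idea can be sketched in a few lines of Python. To be clear, the key assignments and scale below are illustrative assumptions of mine, not the aQWERTYon's actual layout:

```python
# Illustrative sketch of the idea behind the aQWERTYon: map a row of
# QWERTY keys onto the notes of a scale, so that any keypress is
# automatically "in key". The specific keys and scale here are
# assumptions, not the app's real mapping.
C_MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitones above the root

def key_to_midi(key, row="asdfghjk", root=60):
    """Map a home-row key to a MIDI note in C major (root 60 = middle C)."""
    degree = row.index(key)           # position of the key in the row
    octave, step = divmod(degree, 7)  # wrap into the next octave up
    return root + 12 * octave + C_MAJOR_STEPS[step]

print(key_to_midi("a"))  # 60 (middle C)
print(key_to_midi("s"))  # 62 (D)
```

However the real app assigns keys, the design principle is the same: the mapping, not the player, is responsible for staying in the scale.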

For the first time there is a viable way for every student to generate and notate her ideas in a tactile manner with an instrument that can be played by all. We founded Young Composers & Improvisors Workshop so that every student can have the experience of composing original music. Much of my time has been spent exploring ways to emphasize the “experiencing” part of this endeavor. Students had previously learned parts of their composition on instruments after their piece was completed. Also, students with piano or guitar skills could work out their ideas prior to notating them. But efforts to incorporate MIDI keyboards or other interfaces with Noteflight in order to give students a way to perform their ideas into notation always fell short.

The aQWERTYon lets novices try out ideas the way that more experienced musicians do: by improvising with an instrument and reacting to the sounds intuitively. It’s possible to compose without using an instrument at all, using a kind of sudoku-solving method, but it’s not likely to yield good results. Your analytical consciousness, the part of your mind that can write notation, is also its slowest and dumbest part. You really need your emotions, your ear, and your motor cortex involved. Before computers, you needed considerable technical expertise to be able to improvise musical ideas, and remember them long enough to write them down. The advent of recording and MIDI removed a lot of the friction from the notation step, because you could preserve your ideas just by playing them. With the aQWERTYon and interfaces like it, you can do your improvisation before learning any instrumental technique at all.

Student feedback suggests that kids like being able to play along to previously notated parts as a way to find new parts to add to their composition. As a teacher I am curious to measure the effect of students being able to practice their ideas at home using aQWERTYon and then sharing their performances before using their idea in their composition. It is likely that this will create a stronger connection between the composer and her musical idea than if she had only notated it first.

Those of us who have been making original music in DAWs are familiar with the pleasures of creating ideas through playful jamming. It feels like a major advance to put that experience in the hands of elementary school students.

Matt uses progressive methods to teach a traditional kind of musical expression: writing notated scores that will then be performed live by instrumentalists. Matt’s kids are using futuristic tools, but the model for their compositional technique is the one established in the era of Beethoven.

Beethoven

(I just now noticed that the manuscript Beethoven is holding in this painting is in the key of D-sharp. That’s a tough key to read!)

Other models of composition exist. There’s the Lennon and McCartney method, which doesn’t involve any music notation. Like most untrained rock musicians, the Beatles worked from lyric sheets with chords written on them as a mnemonic. The “lyrics plus chords” method continues to be the standard for rock, folk and country musicians. It’s a notation system that’s only really useful if you already have a good idea of how the song is supposed to sound.

Lennon and McCartney writing

Lennon and McCartney originally wrote their songs to be performed live for an audience. They played in clubs for several years before ever entering a recording studio. As their career progressed, however, the Beatles stopped performing live, and began writing with the specific goal of creating studio recordings. Some of those later Beatles tunes would be difficult or impossible to perform live. Contemporary artists like Missy Elliott and Pharrell Williams have pushed the Beatles’ idea to its logical extreme: songs existing entirely within the computer as sequences of samples and software synths, with improvised vocals arranged into shape after being recorded. For Missy and Pharrell, creating the score and the finished recording are one and the same act.

Pharrell and Missy Elliott in the studio

Is it possible to teach the Missy and Pharrell method in the classroom? Alex Ruthmann, MusEDLab founder and my soon-to-be PhD advisor, documented his method for doing so in 2007.

As a middle school general music teacher, I’ve often wrestled with how to engage my students in meaningful composing experiences. Many of the approaches I’d read about seemed disconnected from the real-world musicality I saw daily in the music my students created at home and what they did in my classes. This disconnect prompted me to look for ways of bridging the gap between the students’ musical world outside music class and their in-class composing experiences.

It’s an axiom of constructivist music education that students will be most motivated to learn music that’s personally meaningful to them. There are kids out there for whom notated music performed on instruments is personally meaningful. But the musical world outside music class usually follows the Missy and Pharrell method.

[T]he majority of approaches to teaching music with technology center around notating musical ideas and are often rooted in European classical notions of composing (for example, creating ABA pieces, or restricting composing tasks to predetermined rhythmic values). These approaches require students to have a fairly sophisticated knowledge of standard music notation and a fluency working with rhythms and pitches before being able to explore and express their musical ideas through broader musical dimensions like form, texture, mood, and style.

Noteflight imposes some limitations on these musical dimensions. Some forms, textures, moods and styles are difficult to capture in standard notation. Some are impossible. If you want to specify a particular drum machine sound combined with a sampled breakbeat, or an ambient synth pad, or a particular stereo image, standard notation is not the right tool for the job.

Common approaches to organizing composing experiences with synthesizers and software often focus on simplified classical forms without regard to whether these forms are authentic to the genre or to technologies chosen as a medium for creation.

There is nothing wrong with teaching classical forms. But when making music with computers, the best results come from making the music that’s idiomatic to computers. Matt McLean goes to extraordinary lengths to have student compositions performed by professional musicians, but most kids will be confined to the sounds made by the computer itself. Classical forms and idioms sound awkward at best when played by the computer, but electronic music sounds terrific.

The middle school students enrolled in these classes came without much interest in performing, working with notation, or studying the classical music canon. Many saw themselves as “failed” musicians, placed in a general music class because they had not succeeded in or desired to continue with traditional performance-based music classes. Though they no longer had the desire to perform in traditional school ensembles, they were excited about having the opportunity to create music that might be personally meaningful to them.

Here it is, the story of my life as a music student. Too bad I didn’t go to Alex’s school.

How could I teach so that composing for personal expression could be a transformative experience for students? How could I let the voices and needs of the students guide lessons for the composition process? How could I draw on the deep, complex musical understandings that these students brought to class to help them develop as musicians and composers? What tools could I use to quickly engage them in organizing sound in musical and meaningful ways?

Alex draws parallels between writing music and writing English. Both are usually done alone at a computer, and both pose a combination of technical and creative challenges.

Musical thinking (thinking in sound) and linguistic thinking (thinking using language phrases and ideas) are personal creative processes, yet both occur within social and cultural contexts. Noting these parallels, I began to think about connections between the whole-language approach to writing used by language arts teachers in my school and approaches I might take in my music classroom.

In the whole-language approach to writing, students work individually as they learn to write, yet are supported through collaborative scaffolding: support from their peers and the teacher. At the earliest stages, students tell their stories and attempt to write them down using pictures, drawings, and invented notation. Students write about topics that are personally meaningful to them, learning from their own writing and from the writing of their peers, their teacher, and their families. They also study literature of published authors. Classes that take this approach to teaching writing are often referred to as “writers’ workshops”… The teacher facilitates [students’] growth as writers through minilessons, share sessions, and conferring sessions tailored to meet the needs that emerge as the writers progress in their work. Students’ original ideas and writings often become an important component of the curriculum. However, students in these settings do not spend their entire class time “freewriting.” There are also opportunities for students to share writing in progress and get feedback and support from teacher and peers. Revision and extension of students’ writing occur throughout the process. Lessons are not organized by uniform, prescriptive assignments, but rather are tailored to the students’ interests and needs. In this way, the direction of the curriculum and successive projects are informed by the students’ needs as developing writers.

Alex set about creating an equivalent “composers’ workshop,” combining composition, improvisation, and performing with analytical listening and genre studies.

The broad curricular goal of the composers’ workshop is to engage students collaboratively in:

  • Organizing and expressing musical ideas and feelings through sound with real-world, authentic reasons for and means of composing
  • Listening to and analyzing musical works appropriate to students’ interests and experiences, drawn from a broad spectrum of sources
  • Studying processes of experienced music creators through listening to, performing, and analyzing their music, as well as being informed by accounts of the composition process written by these creators.

Alex recommends production software with strong loop libraries so students can make high-level musical decisions with “real” sounds immediately.

While students do not initially work directly with rhythms and pitch, working with loops enables students to begin composing through working with several broad musical dimensions, including texture, form, mood, and affect. As our semester progresses, students begin to add their own original melodies and musical ideas to their loop-based compositions through work with synthesizers and voices.

As they listen to musical exemplars, I try to have students listen for the musical decisions and understand the processes that artists, sound engineers, and producers make when crafting their pieces. These listening experiences often open the door to further dialogue on and study of the multiplicity of musical roles that are a part of creating today’s popular music. Having students read accounts of the steps that audio engineers, producers, songwriters, film-score composers, and studio musicians go through when creating music has proven to be informative and has helped students learn the skills for more accurately expressing the musical ideas they have in their heads.

Alex shares my belief in project-based music technology teaching. Rather than walking through the software feature-by-feature, he plunges students directly into a creative challenge, trusting them to pick up the necessary software functionality as they go. Rather than tightly prescribe creative approaches, Alex observes the students’ explorations and uses them as opportunities to ask questions.

I often ask students about their composing and their musical intentions to better understand how they create and what meanings they’re constructing and expressing through their compositions. Insights drawn from these initial dialogues help me identify strategies I can use to guide their future composing and also help me identify listening experiences that might support their work or techniques they might use to achieve their musical ideas.

Some musical challenges are more structured–Alex does “genre studies” where students have to pick out the qualities that define techno or rock or film scores, and then create using those idioms. This is especially useful for younger students who may not have a lot of experience listening closely to a wide range of music.

Rather than devoting entire classes to demonstrations or lectures, Alex prefers to devote the bulk of classroom time to working on the projects, offering “minilessons” to smaller groups or individuals as the need arises.

Teaching through minilessons targeted to individuals or small groups of students has helped to maintain the musical flow of students’ compositional work. As a result, I can provide more individual feedback and support to students as they compose. The students themselves also offer their own minilessons to peers when they desire to teach more about advanced features of the software, such as how to record a vocal track, add a fade-in or fade-out, or copy their musical material. These technology skills are taught directly to a few students, who then become the experts in that skill, responsible for teaching other students in the class who need the skill.

Not only does the peer-to-peer learning help with cultural authenticity, but it also gives students invaluable experience with the role of teacher.

One of my first questions is usually, “Is there anything that you would like me to listen for or know about before I listen?” This provides an opportunity for students to seek my help with particular aspects of their composing process. After listening to their compositions, I share my impressions of what I hear and offer my perspective on how to solve their musical problems. If students choose not to accept my ideas, that’s fine; after all, it’s their composition and personal expression… Use of conferring by both teacher and students fosters a culture of collaboration and helps students develop skills in peer scaffolding.

Alex recommends creating an online gallery of class compositions. This has become easier to implement since 2007 with the explosion of blog platforms like Tumblr, audio hosting tools like SoundCloud, and video hosts like YouTube. There are always going to be privacy considerations with such platforms, but there is no shortage of options to choose from.

Once a work is online, students can listen to and comment on these compositions at home outside of class time. Sometimes students post pieces in progress, but for the most part, works are posted when deemed “finished” by the composer. The online gallery can also be set up so students can hear works written by participants in other classes. Students are encouraged to listen to pieces published online for ideas to further their own work, to make comments, and to share these works with their friends and family. The real-world publishing of students’ music on the Internet seems to contribute to their motivation.

Assessing creative work is always going to be a challenge, since there’s no objective basis to assess it on. Alex looks at how well a student composer has met the goal of the assignment, and how well they have achieved their own compositional intent.

The word “composition” is problematic in the context of contemporary computer-based production. It carries the cultural baggage of Western Europe, the idea of music as having a sole identifiable author (or authors). The sampling and remixing ethos of hip-hop and electronica are closer to the traditions of non-European cultures where music may be owned by everyone and no one. I’ve had good results bringing remixing into the classroom, having students rework each others’ tracks, or beginning with a shared pool of audio samples, or doing more complex collaborative activities like musical shares. Remixes are a way of talking about music via the medium of music, and remixes of remixes can make for some rich and deep conversation. The word “composition” makes less sense in this context. I prefer the broader term “production”, which includes both the creation of new musical ideas and the realization of those ideas in sound.

So far in this post, I’ve presented notation-based composition and loop-based production as if they’re diametrical opposites. In reality, the two overlap, and can be easily combined. A student can create a part as a MIDI sequence and then convert it to notation, or vice versa. The school band or choir can perform alongside recorded or sequenced tracks. Instrumental or vocal performances can be recorded, sampled, and turned into new works. Electronic productions can be arranged for live instruments, and acoustic pieces can be reconceived as electronica. If a hip-hop track can incorporate a sample of Duke Ellington, there’s no reason that sample couldn’t be performed by a high school jazz band. The possibilities are endless.

Rohan lays beats

The Ed Sullivan Fellows program is an initiative by the NYU MusEDLab connecting up-and-coming hip-hop musicians to mentors, studio time, and creative and technical guidance. Our session this past Saturday got off to an intense start, talking about the role of young musicians of color in a world of police brutality and Black Lives Matter. The Fellows are looking to Kendrick Lamar and Chance The Rapper to speak social and emotional truths through music. It’s a brave and difficult job they’ve taken on.

Eventually, we moved from heavy conversation into working on the Fellows’ projects, which this week involved branding and image. I was at kind of a loose end in this context, so I set up the MusEDLab’s Push controller and started playing around with it. Rohan, one of the Fellows, immediately gravitated to it, and understandably so.

Indigo lays beats

Rohan tried out a few drum sounds, then some synths. He quickly discovered a four-bar synth loop that he wanted to build a track around. He didn’t have any Ableton experience, however, so I volunteered to be his co-producer and operate the software for him.

We worked out some drum parts, first with a hi-hat and snare from the Amen break, and then a kick, clap and more hi-hats from Ableton’s C78 factory instrument. For bass, Rohan wanted that classic booming hip-hop sound you hear coming from car stereos in Brooklyn. He spotted the Hip-Hop Sub among the presets. We fiddled with it and he continued to be unsatisfied until I finally just put a brutal compressor on it, and then we got the sound he was hearing in his head.

While we were working, I had my computer connected to a Bluetooth speaker that was causing some weird and annoying system behavior. At one point, iTunes launched itself and started playing a random song under Rohan’s track, “I Can’t Realize You Love Me” by Duke Ellington and His Orchestra, featuring The Harlem Footwarmers and Sid Garry.

Rohan liked the combination of his beat and the Ellington song, so I sampled the opening four bars and added them to the mix. It took me several tries to match the keys, and I still don’t think I really nailed it, but the hip-hop kids have broad tolerance for chord clash, and Rohan was undisturbed.

Once we had the loops assembled, we started figuring out an arrangement. It took me a minute to figure out that when Rohan refers to a “bar,” he means a four-measure phrase. He’s essentially conflating hypermeasures with measures. I posted about it on Twitter later and got some interesting responses.

In a Direct Message, Latinfiddler also pointed out that Latin music calls two measures a “bar” because that’s the length of one cycle of the clave.

Thinking about it further, there’s yet another reason to conflate measures with hypermeasures, which is the broader cut-time shift taking place in hip-hop. All of the young hip-hop beatmakers I’ve observed lately work at half the base tempo of their DAW session. Rohan, being no exception, had the session tempo set to 125 bpm, but programmed a beat with an implied tempo of 62.5 bpm. He and his cohort put their backbeats on beat three, not beats two and four, so they have a base grid of thirty-second notes rather than sixteenth notes. A similar shift took place in the early 1960s when the swung eighth notes of jazz rhythm gave way to the swung sixteenth notes of funk.
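Rohan's vocabulary translates cleanly into arithmetic. This Python sketch (the function names are mine, purely for illustration) captures both the half-time tempo relationship and his four-measure "bars":

```python
def implied_tempo(session_bpm):
    """Beatmakers in this half-time style hear the groove at half
    the DAW's session tempo."""
    return session_bpm / 2

def fellows_bar_to_measures(bar):
    """In Rohan's usage, a "bar" is a four-measure phrase (a
    hypermeasure): bar 1 spans DAW measures 1-4, bar 2 spans 5-8,
    and so on."""
    start = (bar - 1) * 4 + 1
    return list(range(start, start + 4))

print(implied_tempo(125))          # 62.5
print(fellows_bar_to_measures(2))  # [5, 6, 7, 8]
```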

Here’s Rohan’s track as of the end of our session:

By the time we were done working, the rest of the Fellows had gathered around and started freestyling. The next step is to record them rapping and singing on top. We also need to find someone to mix it properly. I understand aspects of hip-hop very well, but I mix amateurishly at best.

All the way around, I feel like I learn a ton about music whenever I work with young hip-hop musicians. They approach the placement of sounds in the meter in ways that would never occur to me. I’m delighted to be able to support them technically in realizing their ideas; it’s a privilege for me.

Ilan meets the Fugees

My youngest private music production student is a kid named Ilan. He makes moody trip-hop and deep house using Ableton Live. For our session today, Ilan came in with a downtempo, jazzy hip-hop instrumental. I helped him refine and polish it, and then we talked about his ideas for what kind of vocal might work on top. He wanted an emcee to flow over it, so I gave him my folder of hip-hop acapellas I’ve collected. The first one he tried was “Fu-Gee-La [Refugee Camp Remix]” by the Fugees.

I had it all warped out already, so all he had to do was drag and drop it into his session and press play. It sounded great, so he ran with it. Here’s what he ended up with:

At this point, let me clarify something. To his knowledge, Ilan had never heard “Fu-Gee-La” before using it in his track. His first exposure was the acapella over his own instrumental. His track is quite a bit faster than the original (well, technically, it’s slower, but the kids these days like their rapping doubletime). Also, we needed to pitch the acapella down a minor third to match the key of Ilan’s instrumental. As of this writing, he has heard his remix about a thousand more times than the original.
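For the curious, the pitch math is straightforward: in equal temperament each semitone is a factor of the twelfth root of two, so a minor third down works out to a ratio of about 0.84. A quick Python sketch:

```python
# A minor third is three semitones; in equal temperament each
# semitone is a frequency ratio of 2**(1/12).
def semitone_ratio(semitones):
    """Frequency ratio for a shift of the given number of semitones
    (negative = down)."""
    return 2 ** (semitones / 12)

down_minor_third = semitone_ratio(-3)
print(round(down_minor_third, 4))  # 0.8409

# Naive resampling at this rate would also slow the vocal down by the
# same factor; a DAW like Live warps pitch and time independently, so
# the acapella drops a minor third while staying locked to the new tempo.
```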

And now, let’s consider the Fugees’ “original” song. Ilan used the acapella from a remix, not from the original original, which makes a difference since the remix has some different lyrics. The Fugees’ original original is not itself totally original. It contains several samples, including liberal interpolations of Teena Marie, and a quote from “Shakiyla (JRH)” by Poor Righteous Teachers, which itself contains several samples.

Hip-hop’s sampling culture was still radical back in the 90s when “Fu-Gee-La” was released, but has since become absorbed into mainstream sensibilities. Ilan is ambitious and talented, but his sensibilities are well in keeping with those of most of his millennial peers. So it’s worth looking into his norms and values around authorship and ownership. During our session, he was interested in the Fugees song simply as raw material for his own creativity, not as a self-contained work that needed to be “appreciated” first (or ever). Ilan’s concerns about where he sources his sounds come down one hundred percent to expediency. He buys sounds from the Ableton web site because that’s easy. The same goes for buying tracks from iTunes, if they surface with a quick search. Otherwise Ilan just does YouTube to mp3 conversion. I’ve never heard him voice any concern about the idea of intellectual property, or any desire to seek anyone’s permission.

So here we have a young musician who created an original track, and then after the fact layered in a commercially released hip-hop vocal track on a whim. If that one hadn’t worked, he would have just dropped in another one chosen more or less at random. This kind of effortless drag-and-drop remixing requires some facility with Ableton Live, which is expensive and has a learning curve. But this practice is easier than it was five years ago, and is only going to get easier. Music educators: are we ready for a world where this kind of creativity is so accessible? Rights holders: do you know just how little the kids know or care about the concept of musical intellectual property? And musicians: have you experienced the pleasure and inspiration of freely mixing your ideas with everyone else’s? This is a crazy time we live in.

Project-based music technology teaching

I use a project-based approach to teaching music technology. Technical concepts stick with you better if you learn them in the course of making actual music. Here’s the list of projects I assign to my college classes and private students. I’ve arranged them from easiest to hardest. The first five projects are suitable for a beginner-level class using any DAW; my beginners use GarageBand. The last two projects are more advanced and require a DAW with sophisticated editing tools and effects, like Ableton Live. If you’re a teacher, feel free to use these (and let me know if you do). Same goes for all you bedroom producers and self-teachers.

The projects are agnostic as to musical content, style or genre. However, the computer is best suited to making electronic music, and most of these projects work best in the pop/hip-hop/techno sphere. Experimental, ambient or film music approaches also work well. Many of them draw on the Disquiet Junto. Enjoy.

Tristan gets his FFT on

Loops

Assignment: Create a song using only existing loops. You can use these or these, or restrict yourself to the loops included with your DAW. Do not use any additional sounds or instruments.

For beginners, I like to separate this into two separate assignments. First, create a short (two or four bar) phrase using four to six instrument loops and beats. Then use that set of loops as the basis of a full length track, by repeating, and by having sounds enter and exit.

Concepts:

  • Basic DAW functions
  • Listening like a producer
  • Musical form and song structures
  • Intellectual property, copyright and authorship

Hints:

  • MIDI loops are easier to edit and customize than audio loops.
  • Try slicing audio loops into smaller segments. Use only the front or back half of the loop. Or rearrange segments into a different order.

final song

MIDI

Assignment: Create a piece of music using MIDI and software instruments. Do not record or import any audio. You can use MIDI from any source, including: playing keyboards, drum pads or other interfaces; drawing in the piano roll; importing scores from notation programs; downloading MIDI files from the internet (for example, from here); or using the Audio To MIDI function in your DAW. 

I don’t treat this as a composition exercise (unless students want to make it one.) Feel free to use an existing piece of music. The only requirement is that the end result has to sound good. Simply dragging a classical or pop MIDI into the DAW is likely to sound terrible unless you put some thought into your instrument choices. If you do want to create something original, try these compositional prompts.

Concepts:

  • MIDI recording and editing
  • Quantization, swing, and grooves
  • “Real” vs “fake” instruments
  • Synthesized vs sampled sounds
  • Drum programming
  • Interfaces and controllers

Hints:

  • For beginners, see this post on beatmaking fundamentals.
  • Realism is unattainable. Embrace the fakeness.
  • Find a small segment of a classical piece and loop it.
  • Rather than playing back a Bach keyboard piece on piano or harpsichord, set your instrument to drums or percussion, and get ready for joy.

Montclair State Music Tech 101

Found sound

Assignment: Record a short environmental sound and incorporate it into a piece of music. You can edit and process your found sound as you see fit. Variation: use existing sounds from Freesound.

Concepts:

  • Audio recording, editing, and effects
  • The musical potential of “non-musical” sounds

Hints:

  • Students usually record their sounds with their phones, and the resulting recording quality is usually bad. Try using EQ, compression, delay, reverb, distortion, and other effects to mitigate or enhance poor sound quality and background noise.

pyt stems

Peer remix

Assignment: Remix a track by one of your classmates (or friends, or a stranger on the internet.) Feel free to incorporate other pieces of music as well. Follow your personal definition of the word “remix.” That might mean small edits and adjustments to the mix and effects, or a radical reworking leading to complete transformation of the source material.

There are endless variations on the peer remix. Try the “metaremix,” where students remix each other’s remixes, to the nth degree as time permits. Also, do group remix activities like Musical Shares or FX Roulette.

Concepts:

  • Collaboration and authorship
  • Sampling
  • Mashups
  • Evolution of musical ideas
  • Musical critique using musical language

Hints:

  • A change in tempo can have dramatic effects on the mood and feel of a track.
  • Adding sounds is the obvious move, but don’t be afraid to remove things too.

Self remix

Assignment: Remix one of your own projects, using the same guidelines as the peer remix. This is a good project for the end of the semester/term.

Song transformation

Assignment: Take an existing song and turn it into a new song. Don’t use any additional sounds or MIDI.

Concepts:

  • Advanced audio editing and effects
  • Musical form and structure
  • The nature of originality

Hints:

  • You can transform short segments simply by repeating them out of context. For example, try taking single chords or lyrical phrases and looping them.

Serato

Shared sample

Assignment: Take a short audio sample (five seconds or less) and build a complete piece of music out of it. Do not use any other sounds. This is the most difficult assignment here, and the most rewarding one if you can pull it off successfully.

Concepts:

  • Advanced audio editing and effects
  • Musical form and structure
  • The nature of originality

Hints:

  • Pitch shifting and timestretching are your friends.
  • Short bursts of noise can be tuned up and down to make drums.
  • Extreme timestretching produces great ambient textures.

Mobile music at IMPACT

Writing assignments

I like to have students document their process in blog posts. I ask: What sounds and techniques did you use? Why did you use them? Are you happy with the end result? Given unlimited time and expertise, what changes would you make? Do you consider this to be a valid form of musical creativity?

This semester I also asked students to write reviews of each other’s work in the style of their preferred music publication. In the future, I plan to have students write a review of an imaginary track, and then assign other students to try to create the track being described.

The best way to learn how to produce good recordings is to do critical listening exercises. I assign students to create musical structure and space graphs in the spirit of William Moylan.

Further challenges

The projects above were intended to be used for a one-semester college class. If I were teaching over a longer time span or I needed more assignments, I would draw from the Disquiet Junto, Making Music by Dennis DeSantis, or the Oblique Strategies cards. Let me know in the comments if you have additional recommendations.

Beatmaking fundamentals

I’m currently working with the Ed Sullivan Fellows program, an initiative of the NYU MusEDLab where we mentor up-and-coming rappers and producers. Many of them are working with beats they got from YouTube or SoundCloud. That’s fine for working out ideas, but to get to the next level, the Fellows need to be making their own beats. Partially this is for intellectual property reasons, and partially it’s because the quality of mp3s you get from YouTube is not so good. Here’s a collection of resources and ideas I collected for them, and that you might find useful too.

Sullivan Fellows - beatmaking with FL Studio

What should you use?

There are a lot of digital audio workstations (DAWs) out there. All of them have the same basic set of functions: a way to record and edit audio, a MIDI sequencer, and a set of samples and software instruments. My DAW of choice is Ableton Live. Most of the Sullivan Fellows favor FL Studio. Mac users naturally lean toward GarageBand and Logic. Other common tools for hip-hop producers include Reason, Pro Tools, Maschine, and in Europe, Cubase.

Traditional DAWs are not the only option. Soundtrap is a DAW that’s similar to GarageBand, but with the enormous advantage that it runs entirely in the web browser. It also offers some nifty features like built-in Auto-Tune at a fraction of the usual price. The MusEDLab’s own Groove Pizza is an accessible browser-based drum sequencer. Looplabs is another intriguing browser tool.

Mobile apps are not as robust or full-featured as desktop DAWs yet, but some of them are getting there. The iOS version of GarageBand is especially tasty. Figure makes great techno loops, though you’ll need to assemble them into songs using another tool. The Launchpad app is a remarkably easy and intuitive one. See my full list of recommendations.

Sullivan Fellows - beatmaking with iOS GarageBand

Where do you get sounds?

DAW factory sounds

Every DAW comes with a sample library and a set of software instruments. Pros: they’re royalty-free. Cons: they tend to be generic-sounding and overused. Be sure to tweak the presets.

Sample libraries and instrument packs

The internet is full of third-party sound libraries. They range widely in price and quality. Pros: like DAW factory sounds, library sounds are royalty-free, with a far wider variety available. Cons: the best libraries are expensive.

Humans playing instruments

You could record music the way it was played from the Stone Age through about 1980. Pros: you get human feel, creativity, improvisation, and distinctive instrumental timbres and techniques. Cons: humans are expensive and impractical to record well.

Your record collection

With DJ-oriented tools like Ableton, it’s effortless to pull sounds out of any existing recording. Pros: bottomless inspiration, and the ability to connect emotionally to your listener through sounds that are familiar and meaningful to them. Cons: if you want to charge money, you will probably need permission from the copyright holders, and that can be difficult and expensive. Even giving tracks away on the internet can be problematic. I’ve been using unauthorized samples for years and have never been in any trouble, but I’ve had a few SoundCloud takedowns.

Sullivan Fellows - beatmaking with Pro Tools

What sounds do you need?

Drums

Most hip-hop beats revolve around the components of the standard drum kit: kicks, snares, hi-hats (open and closed), crash cymbals, ride cymbals, and toms. Handclaps and finger snaps have become part of the standard drum palette as well. There are two kinds of drum sounds, synthetic (“fake”) and acoustic (“real”).

Synthetic drums are the heart and soul of hip-hop (and most other pop and dance music at this point.) There are tons of software and hardware drum machines out there, but there are three in particular you should be aware of.

  • Roland TR-808: If you could only have one drum machine for hip-hop creation, this would be the one. Every DAW contains sampled or simulated 808 sounds, sometimes labeled “old-skool” or something similar. It’s an iconic sound for good reason.
  • Roland TR-909: A cousin of the 808 that’s traditionally used more for techno. Still, you can get great hip-hop sounds out of it too. Your DAW is certain to contain some 909 sounds, often labeled with some kind of dance music terminology.
  • LinnDrum: The sound of the 80s. Think Prince, or Hall And Oates. Not as ubiquitous in DAWs as the 808 and 909, but pretty common.

Acoustic drums are less common in hip-hop, though not unheard of; just ask Questlove.

Some hip-hop producers use live drummers, but it’s much easier to use sampled acoustic drums. Samples are also a good source of Afro-Cuban percussion sounds like bongos, congas, timbales, cowbells, and so on. Also consider using “non-musical” percussion sounds: trash can lids, pots and pans, basketballs bouncing, stomping on the floor, and so on.

And how do you learn where to place these drum sounds? Try the specials on the Groove Pizza. Here’s an additional hip-hop classic to experiment with: the beat from “Nas Is Like” by Nas.

Groove Pizza - Nas Is Like

Bass

Hip-hop uses synth bass the vast majority of the time. Your DAW comes with a variety of synth bass sounds, including the simple sine wave sub, the P-Funk Moog bass, dubstep wobbles, and many others. For more unusual bass sounds, try very low-pitched piano or organ. Bass guitar isn’t extremely common in current hip-hop, but it’s worth a try. If you want a 90s Tribe Called Quest vibe, try upright bass.

In the past decade, some hip-hop producers have followed Kanye West’s example and used tuned 808 kick drums to play their basslines. Kanye has used it on all of his albums since 808s and Heartbreak. It’s an amazing solution; those 808 kicks are huge, and if they’re carrying the bassline too, then your low end can be nice and open. Another interesting alternative is to have no bassline at all. It worked for Prince!

And what notes should your bass be playing? If you have chords, the obvious thing is to have the bass playing the roots. You can also have the bass play complicated countermelodies. We made a free online course called Theory for Producers to help you figure these things out.

Chords

Usually your chords are played on some combination of piano, electric piano, organ, synth, strings, guitar, or horns. Vocal choirs are nice too. Once again, consult Theory for Producers for inspiration. Be sure to try out chords with the aQWERTYon, which was specifically designed for this very purpose.

Leads

The same instruments that you use for chords also work fine for melodies. In fact, you can think of melodies as chords stretched out horizontally, and conversely, you can think of chords as melodies stacked up vertically.

FX

For atmosphere in your track, ambient synth pads are always effective. Also try non-musical sounds like speech, police sirens, cash registers, gun shots, birds chirping, movie dialog, or whatever else your imagination can conjure. Make sure to visit Freesound.org – you have to sign up, but it’s worth it. Above all, listen to other people’s tracks, experiment, and trust your ears.

The evolution of the Groove Pizza

The Groove Pizza is a playful tool for creating grooves using math concepts like shapes, angles, and patterns. Here’s a beat I made just now. Try it yourself!

This post explains how and why we designed Groove Pizza.

What it does

The Groove Pizza represents beats as concentric rhythm necklaces. The circle represents one measure. Each slice of the pizza is a sixteenth note. The outermost ring controls the kick drum; the middle one controls the snare; and the innermost one plays cymbals.

Connecting the dots on a given ring creates shapes, like the square formed by the snare drum in the pattern below.
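As a purely illustrative sketch (the data structure and names below are mine, not the Groove Pizza’s actual code), a pattern like this can be modeled as three rings of sixteen booleans, one per slice, and printed as rows of time-unit boxes:

```python
# Hypothetical model of a Groove Pizza pattern: three concentric rings
# (kick, snare, hi-hat), each a list of 16 booleans, one per slice.
SLICES = 16

groove = {
    "kick":  [i % 4 == 0 for i in range(SLICES)],    # four on the floor
    "snare": [i in (4, 12) for i in range(SLICES)],  # backbeats on 2 and 4
    "hihat": [i % 2 == 0 for i in range(SLICES)],    # straight eighths
}

def render(groove):
    """Print each ring as a row of time-unit boxes (x = hit, . = rest)."""
    return "\n".join(
        f"{name:>5}: " + "".join("x" if hit else "." for hit in ring)
        for name, ring in groove.items()
    )

print(render(groove))
```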

Groove Pizza - jazz swing

The pizza can play time signatures other than 4/4 by changing the number of slices. Here’s a twelve-slice pizza playing an African bell pattern.

Groove Pizza - Bembe

You can explore the geometry of musical rhythm by dragging shapes onto the circular grid. Patterns that are visually appealing tend to sound good, and patterns that sound good tend to look cool.

Groove Pizza - shapes

Herbie Hancock did some user testing for us, and he suggested that we make it possible to show the interior angles of the shapes.

Groove Pizza - angles

Groove Pizza History

The ideas behind the Groove Pizza began in my master’s thesis work in 2013 at NYU. For his NYU senior thesis, Adam November built web and physical prototypes. In late summer 2015, Adam wrote what would become the Groove Pizza 1.0 (GP1), with a library of drum patterns that he and I curated. The MusEDLab has been user testing this version for the past year, both with kids and with music and math educators in New York City.

In January 2016, the Music Experience Design Lab began developing the Groove Pizza 2.0 (GP2) as part of the MathScienceMusic initiative.

MathScienceMusic Groove Pizza Credits:

  • Original Ideas: Ethan Hein, Adam November & Alex Ruthmann
  • Design: Diana Castro
  • Software Architect: Kevin Irlen
  • Creative Code Guru: Matthew Kaney
  • Backend Code Guru: Seth Hillinger
  • Play Testing: Marijke Jorritsma, Angela Lau, Harshini Karunaratne, Matt McLean
  • Odds & Ends: Asyrique Thevendran, Jamie Ehrenfeld, Jason Sigal

The learning opportunity

The goals of the Groove Pizza are to help novice drummers and drum programmers get started; to create a gentler introduction to beatmaking with more complex tools like Logic or Ableton Live; and to use music to open windows into math and geometry. The Groove Pizza is intended to be simple enough to be learned easily without prior experience or formal training, but it must also have sufficient depth to teach substantial and transferable skills and concepts, including:

  • Familiarity with the component instruments in a drum beat and the ability to pick them individually out of the sound mass.
  • A repertoire of standard patterns and rhythmic motifs. Understanding of where to place the kick, snare, hi-hats and so on to produce satisfying beats.
  • Awareness of different genres and styles and how they are distinguished by their different degrees of syncopation, customary kick drum patterns and claves, tempo ranges and so on.
  • An intuitive understanding of the difference between strong and weak beats and the emotional effect of syncopation.
  • Acquaintance with the concept of hemiola and other more complex rhythmic devices.

Marshall (2010) recommends “folding musical analysis into musical experience.” Programming drums in pop and dance idioms makes the rhythmic abstractions concrete.

Visualizing rhythm

Western music notation is fairly intuitive on the pitch axis, where height on the staff corresponds clearly to pitch height. On the time axis, however, Western notation is less easily parsed—horizontal space need not have any bearing at all on time values. A popular alternative is the “time-unit box system,” a kind of rhythm tablature used by ethnomusicologists. In a time-unit box system, each pulse is represented by a square. Rhythmic onsets are shown as filled boxes.

Clave patterns in TUBS

Nearly all electronic music production interfaces use the time-unit box scheme, including grid sequencers and the MIDI piano roll.

Ableton TUBS

A row of time-unit boxes can also be wrapped in a circle to form a rhythm necklace. The Groove Pizza is simply a set of rhythm necklaces arranged concentrically.

Circular rhythm visualization offers a significant advantage over linear notation: it more clearly shows metrical function. We can define meter as “the grouping of perceived beats or pulses into equivalence classes” (Forth, Wiggin & McLean, 2010, 521). Linear musical concepts like small-scale melodies depend mostly on relationships between adjacent events, or at least closely spaced events. But periodicity and meter depend on relationships between nonadjacent events. Linear representations of music do not show meter directly; looking at the page gives no indication that the first and third beats of a measure of 4/4 time are functionally related, as are the second and fourth beats.

However, when we wrap the musical timeline into a circle, meter becomes much easier to parse. Pairs of metrically related beats are directly opposite one another on the circle. Rotational and reflectional symmetries give strong clues to metrical function generally. For example, this illustration of 2-3 son clave adapted from Barth (2011) shows an axis of reflective symmetry between the fourth and twelfth beats of the pattern. This symmetry is considerably less obvious when viewed in more conventional notation.
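This symmetry is easy to verify computationally. The sketch below is my own illustration, with slices numbered from zero: it searches for reflection axes of the 2-3 son clave on a sixteen-slice circle, and finds exactly one, the axis through slices 3 and 11 (the fourth and twelfth beats).

```python
# Illustrative check of reflectional symmetry on a rhythm necklace:
# 2-3 son clave on a 16-slice circle, onsets numbered from zero.
N = 16
son_clave_23 = {2, 4, 8, 11, 14}

def reflections(onsets, n):
    """Return each k for which the reflection i -> (k - i) mod n maps the
    necklace onto itself; that axis passes through k/2 and (k + n)/2."""
    return [k for k in range(n)
            if {(k - i) % n for i in onsets} == set(onsets)]

# The only solution is k = 6: the axis runs through slices 3 and 11.
print(reflections(son_clave_23, N))
```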

Son clave symmetry

The Groove Pizza adds a layer of dynamic interaction to circular representation. Users can change time signatures during playback by adding or removing slices. In this way, very complex metrical shifts can be performed by complete novices. Furthermore, each rhythm necklace can be rotated during playback, enabling a rhythmic modularity characteristic of the most sophisticated Afro-Latin and jazz rhythms. Rotational rhythmic transformation typically requires sophisticated music-reading and performance skills to understand and execute, but it is effortlessly accessible to Groove Pizza users.
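Rotation itself is just modular arithmetic. A minimal sketch, using the twelve-slice standard bell pattern (the onset numbers are one common transcription, my assumption here):

```python
# Rotate a rhythm necklace by r slices, as the Groove Pizza permits
# during playback.
def rotate(onsets, r, n):
    """Shift every onset r slices around an n-slice circle."""
    return {(i + r) % n for i in onsets}

bembe = {0, 2, 4, 5, 7, 9, 11}   # one transcription of the standard bell
print(sorted(rotate(bembe, 5, 12)))
```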

Visualizing swing

We traditionally associate swing with jazz, but it is omnipresent in American vernacular music: in rock, country, funk, reggae, hip-hop, EDM, and so on. For that reason, swing is a standard feature of notation software, MIDI sequencers, and drum machines. However, while swing is crucial to rhythmic expressiveness, it is rarely visualized in any explicit way, in notation or in software interfaces. Sequencers will sometimes show swing by displacing events on the MIDI piano roll, but the user must place those events first. The grid itself generally does not show swing.

The Groove Pizza uses a novel (and to our knowledge unprecedented) graphical representation of swing on the background grid, not just on the musical events. The slices alternately expand and contract in width according to the amount of swing specified. At 0% swing, the wedges are all of uniform width. At 50% swing, the odd-numbered slice in each pair is twice as long as the following even-numbered slice. As the user adjusts the swing slider, the slices dynamically change their width accordingly.
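The geometry can be sketched as follows. The endpoints (uniform widths at 0% swing, a 2:1 long-short pair at 50%) come from the description above; the linear long-to-short ratio in between is my assumption, not necessarily the Groove Pizza’s actual interpolation.

```python
# Slice widths under swing: each odd/even pair keeps a constant total
# width, but the first slice of the pair stretches as the second shrinks.
def slice_widths(n, swing):
    """Widths of n slices (fractions of the circle) at a given swing,
    where swing runs from 0.0 (straight) to 0.5 (2:1 long-short pairs)."""
    pair = 2.0 / n                  # combined width of each slice pair
    ratio = 1 + 2 * swing           # long:short ratio: 1:1 at 0, 2:1 at 0.5
    long_w = pair * ratio / (1 + ratio)
    short_w = pair - long_w
    return [long_w if i % 2 == 0 else short_w for i in range(n)]

straight = slice_widths(16, 0.0)
swung = slice_widths(16, 0.5)
```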

Straight 16ths vs swing 16ths

Our swing visualization system also addresses the issue of whether swing should be applied to eighth notes or sixteenths. In the jazz era, swing was understood to apply to eighth notes. However, since the 1960s, swing is more commonly applied to sixteenth notes, reflecting a broader shift from eighth note to sixteenth note pulse in American vernacular music. To hear the difference, compare the swung eighth note pulse of “Rockin’ Robin” by Bobby Day (1958) with the sixteenth note pulse of “I Want You Back” by the Jackson Five (1969). Electronic music production tools like Ableton Live and Logic default to sixteenth-note swing. However, notation programs like Sibelius, Finale and Noteflight can only apply swing to eighth notes.

The Groove Pizza supports both eighth and sixteenth swing simply by changing the slice labeling. The default labeling scheme is agnostic, simply numbering the slices sequentially from one. In GP1, users can choose to label a sixteen-slice pizza either as one measure of sixteenth notes or two measures of eighth notes. The grid looks the same either way; only the labels change.

Drum kits

With one drum sound per ring, the number of sounds available to the user is limited by the number of rings that can reasonably fit on the screen. In my thesis prototype, we were able to accommodate six sounds per “drum kit.” GP1 was reduced to five rings, and GP2 has only three rings, prioritizing simplicity over musical versatility.

GP1 offers three drum kits: Acoustic, Hip-Hop, and Techno. The Acoustic kit uses samples of a real drum kit; the Hip-Hop kit uses samples of the Roland TR-808 drum machine; and the Techno kit uses samples of the Roland TR-909. GP2 adds two additional kits: Jazz (an acoustic drum kit played with brushes), and Afro-Latin (congas, bell, and shaker.) Preset patterns automatically load with specific kits selected, but the user is free to change kits after loading.

In GP1, sounds can be mixed and matched at will, so the user can, for example, combine the acoustic kick with the hip-hop snare. In GP2, kits cannot be customized. Customizable kits would present a wider variety of sonic choices. However, placing strict limits on the sounds available has its own creative advantage: it eliminates option paralysis and forces users to concentrate on creating interesting patterns, rather than struggling to choose from a long list of sounds.

It became clear in the course of testing that open and closed hi-hats need not occupy separate rings, since it is not desirable to ever have them sound at the same time. (While drum machines are not bound by the physical limitations of human drummers, our rhythmic traditions are.) In future versions of the GP, we plan to place closed and open hi-hats together on the same ring. Clicking a beat in the hi-hat ring will place a closed hi-hat; clicking it again will replace it with an open hi-hat; and a third click will return the beat to silence. We will use the same mechanic to toggle between high and low cowbells or congas.
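The planned click mechanic is a simple three-state cycle, which might look like this (a sketch only; the state names are mine):

```python
# Three-state hi-hat cell: each click advances
# silence -> closed -> open -> silence.
HAT_STATES = ["off", "closed", "open"]

def click(state):
    """Return the state a hi-hat cell moves to when clicked."""
    return HAT_STATES[(HAT_STATES.index(state) + 1) % len(HAT_STATES)]
```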

Preset patterns

In keeping with the constructivist value of working with authentic cultural materials, the exercises in the Groove Pizza are based on rhythms drawn from actual music. Most of the patterns are breakbeats—drums and percussion sampled from funk, rock and soul recordings that have been widely repurposed in electronic dance and hip-hop music. There are also generic rock, pop and dance rhythms, as well as an assortment of traditional Afro-Cuban patterns.

The GP1 offers a broad selection of preset patterns. The GP2 uses a smaller subset of these presets.

Breakbeats

  • The Winstons, “Amen, Brother” (1969)
  • James Brown, “Cold Sweat” (1967)
  • James Brown, “The Funky Drummer” (1970)
  • Bobby Byrd, “I Know You Got Soul” (1971)
  • The Honeydrippers, “Impeach The President” (1973)
  • Skull Snaps, “It’s A New Day” (1973)
  • Joe Tex, “Papa Was Too” (1966)
  • Stevie Wonder, “Superstition” (1972)
  • Melvin Bliss, “Synthetic Substitution” (1973)

Afro-Cuban

  • Bembé—also known as the “standard bell pattern”
  • Rumba clave
  • Son clave (3-2)
  • Son clave (2-3)

Pop

  • Michael Jackson, “Billie Jean” (1982)
  • Boots-n-cats—a prototypical disco pattern, e.g. “Funkytown” by Lipps Inc (1979)
  • INXS, “Need You Tonight” (1987)
  • Uhnntsss—the standard “four on the floor” pattern common to disco and electronic dance music

Hip-hop

  • Lil Mama, “Lip Gloss” (2008)
  • Nas, “Nas Is Like” (1999)
  • Digable Planets, “Rebirth Of Slick (Cool Like Dat)” (1993)
  • OutKast, “So Fresh, So Clean” (2000)
  • Audio Two, “Top Billin’” (1987)

Rock

  • Pink Floyd, “Money” (1973)
  • Peter Gabriel, “Solsbury Hill” (1977)
  • Billy Squier, “The Big Beat” (1980)
  • Aerosmith, “Walk This Way” (1975)
  • Queen, “We Will Rock You” (1977)
  • Led Zeppelin, “When The Levee Breaks” (1971)

Jazz

  • Bossa nova, e.g. “The Girl From Ipanema” by Antônio Carlos Jobim (1964)
  • Herbie Hancock, “Chameleon” (1973)
  • Miles Davis, “It’s About That Time” (1969)
  • Jazz spang-a-lang—the standard swing ride cymbal pattern
  • Jazz waltz—e.g. “My Favorite Things” as performed by John Coltrane (1961)
  • Dizzy Gillespie, “Manteca” (1947)
  • Horace Silver, “Song For My Father” (1965)
  • Paul Desmond, “Take Five” (1959)
  • Herbie Hancock, “Watermelon Man” (1973)

Mathematical applications

The most substantial new feature of GP2 is “shapes mode.” The user can drag shapes onto the grid and rotate them to create geometric drum patterns: triangle, square, pentagon, hexagon, and octagon. Placing shapes in this way creates maximally even rhythms that are nearly always musically satisfying (Toussaint 2011). For example, on a sixteen-slice pizza, the pentagon forms rumba or bossa nova clave, while the hexagon creates a tresillo rhythm. As a general matter, the way that a rhythm “looks” gives insight into the way it sounds, and vice versa.
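Maximally even rhythms are easy to generate by rounding, a construction equivalent (up to rotation) to the Euclidean-algorithm method Toussaint describes. This sketch is an illustration of the math, not the Groove Pizza’s actual shapes-mode code:

```python
# Place k onsets as evenly as possible among n slices: onset i lands at
# floor(i * n / k). The result is maximally even, up to rotation.
def euclidean(k, n):
    """Return k maximally even onset positions on an n-slice circle."""
    return sorted((i * n) // k for i in range(k))

print(euclidean(5, 16))   # a rotation of the bossa nova clave
print(euclidean(3, 8))    # a rotation of the tresillo rhythm
print(euclidean(4, 16))   # the square: four on the floor
```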

Because of the way it uses circle geometry, the Groove Pizza can be used to teach or reinforce the following subjects:

  • Fractions
  • Ratios and proportional relationships
  • Angles
  • Polar vs Cartesian coordinates
  • Symmetry: rotations, reflections
  • Frequency vs duration
  • Modular arithmetic
  • The unit circle in the complex plane

Specific kinds of music can help to introduce specific mathematical concepts. For example, Afro-Cuban patterns and other grooves built on hemiola are useful for graphically illustrating the concept of least common multiples. When presented with a kick playing every four slices and a snare playing every three slices, a student can both see and hear how they will line up every twelve slices. Bamberger and diSessa (2003) describe the “aha” moment that students have when they grasp this concept in a music context. One student in their study is quoted as describing the twelve-beat cycle “pulling” the other two beats together. Once students grasp least common multiples in a musical context, they have a valuable new inroad into a variety of scientific and mathematical concepts: harmonics in sound analysis, gears, pendulums, tiling patterns, and much else.
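The kick-against-snare example can be stated in a few lines of code (my illustration of the arithmetic, with periods counted in slices):

```python
# A kick every 4 slices against a snare every 3 slices: the two parts
# realign every lcm(4, 3) = 12 slices.
from math import lcm

kick_period, snare_period = 4, 3
cycle = lcm(kick_period, snare_period)

# Slices where both drums hit, over two full cycles.
together = [t for t in range(2 * cycle)
            if t % kick_period == 0 and t % snare_period == 0]
print(cycle, together)
```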

In addition to eighth and sixteenth notes, GP1 users can also label the pizza slices as fractions or angles, both Cartesian and polar. Users can thereby describe musical concepts in mathematical terms, and vice versa. Conveniently, a sixteenth note spans the polar angle π/8, one sixteenth of the circle. One could go even further with polar mode and use it as the unit circle on the complex plane. From there, lessons could move into powers of e, the relationship between sine and cosine waves, and other more advanced topics. The Groove Pizza could thereby be used to lay the groundwork for concepts in electrical engineering, signal processing, and anything else involving wave mechanics.
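The slice-to-angle arithmetic is straightforward; here is a sketch (numbering slices from zero is my convention):

```python
# A sixteen-slice pizza divides the circle's 2*pi radians into sixteen
# equal slices, so each sixteenth note spans 2*pi/16 = pi/8 radians.
from math import pi

def slice_angle(i, n=16):
    """Polar angle from the start of the measure to the start of slice i."""
    return 2 * pi * i / n
```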

Future work

The Groove Pizza does not offer any tone controls like duration, pitch, EQ and the like. This choice was due to a combination of expediency and the push to reduce option paralysis. However, velocity (loudness) control is a high-priority future feature. While nuanced velocity control is not necessary for the artificial aesthetic of electronic dance music, a basic loud/medium/soft toggle would make the Groove Pizza a more versatile tool.

The next step beyond preset patterns is to offer drum programming exercises or challenges. In exercises, users are presented with a pattern. They may alter this pattern as they see fit by adding and removing drum hits, and by rotating instrument parts within their respective rings. There are constraints of various kinds, to ensure that the results are appealing and musical-sounding. The constraints are tighter for more basic exercises, and looser for more advanced ones. For example, we might present users with a locked four-on-the-floor kick pattern, and ask them to create a satisfying techno beat using the snares and hi-hats. We also plan to create game-like challenges, where users are given the sound of a beat and must figure out how to represent it on the circular grid.

The Groove Pizza would be more useful for the purposes of trigonometry and circle geometry if it were presented slightly differently. Presently, the first beat of each pattern is at twelve o’clock, with playback running clockwise. However, angles are usually represented as originating at three o’clock and increasing in a counterclockwise direction. To create “math mode,” the radial grid would need to be reflected left-to-right and rotated ninety degrees.
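That transformation can be written as a single angle map; here is my sketch of the idea, in radians:

```python
# The pizza measures angles clockwise from twelve o'clock; math convention
# measures counterclockwise from three o'clock. Converting one to the
# other is a reflection plus a quarter-turn rotation.
from math import pi

def clock_to_math(theta):
    """Map a clockwise-from-twelve angle to counterclockwise-from-three."""
    return (pi / 2 - theta) % (2 * pi)
```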


Theory for Producers: the White Keys

I’m pleased to announce the second installment of Theory For Producers, jointly produced by Soundfly and the MusEDLab. The first part discussed the scales you can play on the black keys of the piano. This one talks about three of the scales you get from the white keys. The next segment will deal with four additional white-key scales. Go try it!

If you’re a music educator or theory nerd, and would like to read more about the motivation behind the course design, read on.

Some of my colleagues in the music teaching world are puzzled by the order in which we’re presenting concepts. Theory resources almost always start with the C major scale, but we start with E-flat minor pentatonic. While it’s harder to represent in notation, E-flat minor pentatonic is easier to play and learn by ear, and for our target audience, that’s the most important consideration.

Okay, fine, the pentatonics are simple, so it makes sense to start with them. But surely we would begin the white-key part with the major scale, right? Nope! We start with Mixolydian mode. In electronica, hip-hop, rock, and pop, Mixolydian is more “basic” than major is. The sound of the flat seventh is more native to this music than the leading tone, and V-I cadences are rare or absent. I once had a student complain that the major scale makes everything sound like “Happy Birthday.” Our Mixolydian example, a Michael Jackson tune, was chosen to make our audience of producers feel culturally at home, and to signal that we value the dance music of the African diaspora over the folk and classical music of Western Europe.

After Mixolydian, we discuss Lydian mode. While it’s a fairly exotic scale, we chose to address it before major because it’s more forgiving to improvise with: Lydian doesn’t have any “wrong” notes. In major, you have to be careful with the fourth, because it has strong functional connotations and because it clashes with the third. In Lydian, you can play the notes in any order and any combination without fear of hitting a clunker. And exotic though it may be, Lydian does pop up in a few well-known songs, like a recent Katy Perry hit.

Finally, we do get to major, using David Bowie and Queen. Even here, though, we downplay functional harmony, treating major as just another mode. Our song example uses a I-IV-V chord progression, but it runs over a static riff bassline, which makes it float rather than resolve.
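The relationship among these three modes is easy to see computationally: each is a rotation of the same whole/half-step pattern. A small sketch, not part of the course materials:

```python
# Derive the three "major-sounding" diatonic modes by rotating the
# major scale's step pattern. Pitch classes are semitones above the
# tonic, so the single note that differs in each mode is easy to spot.
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole, whole, half, whole, whole, whole, half

def mode_pitch_classes(rotation):
    steps = MAJOR_STEPS[rotation:] + MAJOR_STEPS[:rotation]
    pcs = [0]
    for step in steps[:-1]:
        pcs.append(pcs[-1] + step)
    return pcs

major = mode_pitch_classes(0)       # [0, 2, 4, 5, 7, 9, 11]
mixolydian = mode_pitch_classes(4)  # [0, 2, 4, 5, 7, 9, 10] -- flat seventh
lydian = mode_pitch_classes(3)      # [0, 2, 4, 6, 7, 9, 11] -- raised fourth
```

Mixolydian differs from major only in the seventh (10 semitones instead of 11), and Lydian only in the fourth (6 instead of 5), which is why each mode is so closely related to major yet so distinct in flavor.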

This class only deals with the three major diatonic modes. We’ll get to the minor ones (natural minor, Dorian, Phrygian and Locrian) in the third class. We debated doing minor first, but there are more of the minor modes, and they’re more complicated.

We also debated whether or not to talk about chords. The chord changes in our examples are minimal, but they’re present. We ultimately decided to stick to horizontal scales only for the time being, and to treat chords separately. We plan to go back through all of the modes and talk about the chord progressions characteristic of each one. For example, with Mixolydian, we’ll talk about I-bVII-IV; with Lydian we’ll do I-II; and with major we’ll do all the permutations of I, IV, V and vi.
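Those characteristic progressions can be spelled out mechanically by stacking scale thirds on each root degree. A sketch, again with illustrative names only:

```python
# Spell the I - bVII - IV progression in Mixolydian as pitch-class
# triads (semitones above the tonic). Each chord stacks two scale
# thirds on its root degree.
MIXOLYDIAN = [0, 2, 4, 5, 7, 9, 10]

def triad(mode, degree):
    """Triad on a 0-indexed scale degree, as pitch classes."""
    return [mode[(degree + i) % 7] for i in (0, 2, 4)]

one = triad(MIXOLYDIAN, 0)       # [0, 4, 7]  -- major triad on the tonic
flat_vii = triad(MIXOLYDIAN, 6)  # [10, 2, 5] -- major triad on the flat seventh
four = triad(MIXOLYDIAN, 3)      # [5, 9, 0]  -- major triad on the fourth
```

All three come out as plain major triads, which is part of why I-bVII-IV sounds so natural in rock and pop.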

Once again, we know it’s unconventional to deal with modes so thoroughly before even touching any chords, but for our audience, we think this approach will make more sense. Electronic music is not big on complex harmony, but it is big on modes.