Freedom ’90

Since George Michael died, I’ve been enjoying all of his hits, but none of them more than this one. Listening to it now, it’s painfully obvious how much it’s about George Michael’s struggles with his sexual orientation. I wonder whether he was being deliberately coy in the lyrics, or if he just wasn’t yet fully in touch with his identity. Being gay in the eighties must have been a nightmare.

This is the funkiest song that George Michael ever wrote, which is saying something. Was he the funkiest white British guy in history? Quite possibly. 

The beat

There are five layers to the drum pattern: a simple closed hi-hat from a drum machine, some programmed bongos and congas, a sampled tambourine playing lightly swung sixteenth notes, and finally, once the full groove kicks in, the good old Funky Drummer break. I include a Noteflight transcription of all that stuff below, but don’t listen to it; it sounds comically awful.

George Michael uses the Funky Drummer break on at least two of the songs on Listen Without Prejudice Vol 1. Hear him discuss the break and how it informed his writing process in this must-watch 1990 documentary.

The intro and choruses

Harmonically, this is a boilerplate C Mixolydian progression: the chords built on the first, seventh and fourth degrees of the scale. You can hear the same progression in uncountably many classic rock songs.

C Mixolydian chords
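
If it helps to see the construction in code, here’s a minimal Python sketch of those three chords. The scale spelling and the `triad` helper are my own illustration, not anything from the song itself:

```python
# C Mixolydian: the C major scale with a flatted seventh (B-flat).
C_MIXOLYDIAN = ["C", "D", "E", "F", "G", "A", "Bb"]

def triad(scale, degree):
    """Build a triad on a 1-based scale degree by stacking
    every other scale note above it."""
    i = degree - 1
    return [scale[(i + step) % len(scale)] for step in (0, 2, 4)]

# The chords built on the first, seventh and fourth degrees:
for degree in (1, 7, 4):
    print(degree, triad(C_MIXOLYDIAN, degree))
# 1 ['C', 'E', 'G']    I    (C major)
# 7 ['Bb', 'D', 'F']   bVII (B-flat major)
# 4 ['F', 'A', 'C']    IV   (F major)
```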

For a more detailed explanation of this scale and others like it, check out Theory For Producers.

The rhythm is what makes this groove so fresh. It’s an Afro-Cuban pattern full of syncopation and hemiola. Here’s an abstraction of it on the Groove Pizza. If you know the correct name of this rhythm, please tell me in the comments!

The verses

There’s a switch to plain vanilla C major, the chords built on the fifth, fourth and root of the scale.

C major chords

Like the chorus, this is standard-issue pop/rock, harmonically speaking, but it also gets its life from a funky Latin rhythm. It’s a kind of clave pattern: five hits spread more or less evenly across the sixteen sixteenth notes in the bar. Here it is on the Groove Pizza.

The prechorus and bridge

This section unexpectedly jumps over to C minor, and now things get harmonically interesting. The chords are built around a descending chromatic bassline: C, B, B-flat, A. It’s a simple idea with complicated implications, because those four chords draw on three different scales between them. First, we have the tonic triad in C natural minor, no big deal there. Next comes the V chord in C harmonic minor. Then we’re back to C natural minor, but with the seventh in the bass. Finally, we go to the IV chord in C Dorian mode. Really, all that we’re doing is stretching C natural minor to accommodate a couple of new notes: B natural in the second chord, and A natural in the fourth one.

C minor - descending chromatic bassline
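
Here’s one plausible way to spell those four chords in code, following the description above. The slash-chord labels are my shorthand for which scale tone sits in the bass, not a transcription of the actual keyboard voicings:

```python
# One chord per note of the descending bassline: C, B, Bb, A.
progression = [
    ("C",  ["C", "Eb", "G"], "i in C natural minor (Cm)"),
    ("B",  ["G", "B", "D"],  "V in C harmonic minor (G/B)"),
    ("Bb", ["C", "Eb", "G"], "i with the seventh in the bass (Cm/Bb)"),
    ("A",  ["F", "A", "C"],  "IV in C Dorian (F/A)"),
]

for bass, notes, label in progression:
    print(f"bass {bass:>2}: {'-'.join(notes):<8} {label}")
```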

The rhythm here is similar but not identical to the clave-like pattern in the verse–the final chord stab is a sixteenth note earlier. See and hear it on the Groove Pizza.

I don’t have the time to transcribe the whole bassline, but it’s absurdly tight and soulful. The album credits list bass played both by Deon Estus and by George Michael himself. Whichever one of them laid this down, they nailed it.

Song structure

“Freedom ’90” has an exceedingly peculiar structure for a mainstream pop song. The first chorus doesn’t hit until almost two minutes in, which is an eternity–most pop songs are practically over by that point. The graphic below shows the song segments as I marked them in Ableton.

Freedom '90 structure

The song begins with a four bar instrumental intro, nothing remarkable about that. But then it immediately moves into an eight bar section that I have trouble classifying. It’s the spot that would normally be occupied by verse one, but this part uses the chorus harmony and is different from the other verses. I labeled it “intro verse” for lack of a better term. (Update: upon listening again, I realized that this section is the backing vocals from the back half of the chorus. Clever, George Michael!) Then there’s an eight bar instrumental break, before the song has really even started. George Michael brings you on board with this unconventional sequence because it’s all so catchy, but it’s definitely strange.

Finally, twenty bars in, the song settles into a more traditional verse-prechorus-chorus loop. The verses are long, sixteen bars. The prechorus is eight bars, and the chorus is sixteen. You could think of the chorus as being two eight bar sections, the part that goes “All we have to do…” and the part that goes “Freedom…” but I hear it as all one big section.

After two verse-prechorus-chorus units, there’s a four bar breakdown on the prechorus chord progression. This leads into a sixteen bar bridge, still following the prechorus form. Finally, the song ends with a climactic third chorus, which repeats and fades out as an outro. All told, the song is over six minutes. That’s enough time (and musical information) for two songs by a lesser artist.

A word about dynamics: just from looking at the audio waveform, you can see that “Freedom ’90” has very little contrast in loudness and fullness over its duration. It starts sparse, but once the Funky Drummer loop kicks in at measure 13, the sound stays constantly big and full until the breakdown and bridge. These sections are a little emptier without the busy piano part. The final chorus is a little bigger than the rest of the song because there are more vocals layered in, but that still isn’t a lot of contrast. I guess George Michael decided that the groove was so hot, why mess with it by introducing contrast for the sake of contrast? He was right to feel that way.

Careless Whisper

The infamous saxophone riff in “Careless Whisper” is one of the most infectious earworms in musical history. Love it or hate it, there is no getting it out of your head. In honor of the late George Michael, let’s take a look at what makes it work.

Careless Whisper MIDI


Play the riff yourself using your computer keyboard!

Press these keys to get the riff:

Careless Whisper aQW score
So why is the riff so impossible to forget? Its melodic structure certainly jumps right out at you. The first three phrases are descending lines spelling out chords using similar rhythms. The fourth phrase is an ascending line running up a scale, using a very different rhythm.

First let’s take a closer look at those rhythms. The first three phrases are heavily syncopated. After the downbeats, every single note in each pattern falls on a weak beat. The fourth phrase is less syncopated; it’s a predictable pattern of eighth notes. But because your ear has become used to the pattern of the first three phrases, the straighter rhythm in the fourth one feels more “syncopated”: it defies your expectation.

Now let’s consider the harmonic content. The left diagram below shows the D natural minor scale on the chromatic circle. The right diagram shows it on the circle of fifths. Scale tones have a white background, while non-scale tones are greyed out.

Three of the four phrases in the “Careless Whisper” riff are arpeggios, the notes from a chord played one at a time. Here’s how you make the chords.

  • Take the D natural minor scale. Start on the root (D). Skip the second (E) and land on the third (F). Skip the fourth (G) and land on the fifth (A). Skip the sixth (B-flat) and land on the seventh (C). Finally, skip the root (D) and land on the ninth (E). These pitches – D, F, A, C, and E – make a D minor 9 (Dm9) chord. Now look at the first bar of the sax riff. All the pitches in D minor 9 are there except for C.
  • If you do the same process, but starting on G, you get the pitches G, B-flat, D, F, A, C, which make up a G minor 11 chord. The second phrase has most of those pitches.
  • Do the same process starting on B-flat, and you get B-flat, D, F, and A, making a B-flat major 7 (B♭maj7) chord. The third phrase has all of these pitches.

Careless Whisper D natural minor scale chords
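
The skip-every-other-note process lends itself to a few lines of code. This is a sketch of the same idea, assuming only the D natural minor scale given above; the `stack_thirds` helper is my own naming:

```python
# D natural minor: D E F G A Bb C
D_NATURAL_MINOR = ["D", "E", "F", "G", "A", "Bb", "C"]

def stack_thirds(scale, root, count):
    """Start on a root and keep landing on every other scale note."""
    i = scale.index(root)
    return [scale[(i + 2 * n) % len(scale)] for n in range(count)]

print(stack_thirds(D_NATURAL_MINOR, "D", 5))   # D F A C E    -> Dm9
print(stack_thirds(D_NATURAL_MINOR, "G", 6))   # G Bb D F A C -> Gm11
print(stack_thirds(D_NATURAL_MINOR, "Bb", 4))  # Bb D F A     -> Bbmaj7
```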

The fourth phrase is different from the others. Rather than outlining an arpeggio, it runs up the D natural minor scale from A to A. This sequence of pitches (A, B-flat, C, D, E, F, G, A) is also known as the A Phrygian mode. The half-step interval between A and B-flat gives Phrygian its exotic quality.

This riff certainly is catchy. It’s also notoriously corny, and to many people’s ears, quite annoying. Why? Some of it is the timbre. The use of unrestrainedly passionate alto sax through heavy reverb was briefly in vogue in the 1980s, and then fell permanently out of style. To my ears, though, the real problem is the chord progression. In D minor, both Gm11 and B♭maj7 are subdominants, and functionally they’re interchangeable. Jazz musicians like me hear them as being essentially the same chord. It would be hipper to replace the Gm with G7, or the B♭maj7 with B♭7. The A minor in the last bar is weak too; it would be more satisfying to replace the C with C-sharp, to make D harmonic minor. But your mileage may vary.

Enjoy my mashup of this track with “Calabria 2007” by Enur featuring Natasja.

Learning music from Ableton

Ableton recently launched a delightful web site that teaches the basics of beatmaking, production and music theory using elegant interactives. If you’re interested in music education, creation, or user experience design, you owe it to yourself to try it out.

Ableton - Learning Music site

One of the site’s co-creators is Dennis DeSantis, who wrote Live’s unusually lucid documentation, as well as Ableton’s first book, a highly-recommended collection of strategies for music creation (not just in the electronic idiom.)

Dennis DeSantis - Making Music

The other co-creator is Jack Schaedler, who also created this totally gorgeous interactive digital signal theory primer.

If you’ve been following the work of the NYU Music Experience Design Lab, you might notice some strong similarities between Ableton’s site and our tools. That’s no coincidence. Dennis and I have been having an informal back and forth on the role of technology in music education for a few years now. It’s a relationship that’s going to get a step more formal this fall at the 2017 Loop Conference – more details on that as it develops.

Meanwhile, Peter Kirn’s review of the Learning Music site raises some probing questions about why Ableton might be getting involved in education in the first place. But first, he makes some broad statements about the state of the musical world that are worth repeating in full.

I think there’s a common myth that music production tools somehow take away from the need to understand music theory. I’d say exactly the opposite: they’re more demanding.

Every musician is now in the position of composer. You have an opportunity to arrange new sounds in new ways without any clear frame from the past. You’re now part of a community of listeners who have more access to traditions across geography and essentially from the dawn of time. In other words, there’s almost no choice too obvious.

The music education world has been slow to react to these new realities. We still think of composition as an elite and esoteric skill, one reserved only for a small class of highly trained specialists. Before computers, this was a reasonable enough attitude to have, because it was mostly true. Not many of us can learn an instrument well enough to compose with it, then learn to notate our ideas. Even fewer of us will be able to find musicians to perform those compositions. But anyone with an iPhone and twenty dollars worth of apps can make original music using an infinite variety of sounds, and share that music online with anyone willing to listen. My kids started playing with iOS music apps when they were one year old. With the technical barriers to musical creativity falling away, the remaining challenge is gaining an understanding of music itself, how it works, why some things sound good and others don’t. This is the challenge that we as music educators are suddenly free to take up.

There’s an important question to ask here, though: why Ableton?

To me, the answer to this is self-evident. Ableton has been in the music education business since its founding. Like Adam Bell says, every piece of music creation software is a de facto education experience. Designers of DAWs might even be the most culturally impactful music educators of our time. Most popular music is made by self-taught producers, and a lot of that self-teaching consists of exploring DAWs like Ableton Live. The presets, factory sounds and affordances of your DAW powerfully inform your understanding of musical possibility. If DAW makers are going to be teaching the world’s producers, I’d prefer if they do it intentionally.

So far, there has been a divide between “serious” music making tools like Ableton Live and the toy-like iOS and web apps that my kids use. If you’re sufficiently motivated, you can integrate them all together, but it takes some skill. One of the most interesting features of Ableton’s web site, then, is that each interactive tool includes a link that will open up your little creation in a Live session. Peter Kirn shares my excitement about this feature.

There are plenty of interactive learning examples online, but I think that “export” feature – the ability to integrate with serious desktop features – represents a kind of breakthrough.

Ableton Live is a superb creation tool, but I’ve been hesitant to recommend it to beginner producers. The web site could change my mind about that.

So, this is all wonderful. But Kirn points out a dark side.

The richness of music knowledge is something we’ve received because of healthy music communities and music institutions, because of a network of overlapping ecosystems. And it’s important that many of these are independent. I think it’s great that software companies are getting into the action, and I hope they continue to do so. In fact, I think that’s one healthy part of the present ecosystem.

It’s the rest of the ecosystem that’s worrying – the one outside individual brands and what they support. Public music education is getting squeezed in different ways all around the world. Independent content production is, too, even in advertising-supported publications like this one, but more so in other spheres. Worse, I think education around music technology hasn’t even begun to be reconciled with traditional music education – in the sense that people with specialties in one field tend not to have any understanding of the other. And right now, we need both – and both are getting their resources squeezed.

This might feel like I’m going on a tangent, but if your DAW has to teach you how harmony works, it’s worth asking the question – did some other part of the system break down?

Yes it did! Sure, you can learn the fundamentals of rhythm, harmony, and form from any of a thousand schools, courses, or books. But there aren’t many places you can go to learn about it in the context of Beyoncé, Daft Punk, or A Tribe Called Quest. Not many educators are hip enough to include the Sleng Teng riddim as one of the fundamentals. I’m doing my best to rectify this imbalance–that’s what my Soundfly courses are for. But I join Peter Kirn in wondering why it’s left to private companies to do this work. Why isn’t school music more culturally relevant? Why do so many educators insist that the kids like the wrong music? Why is it so common to get a music degree without ever writing a song? Why is the chasm between the culture of school music and music generally so wide?

Like Kirn, I’m distressed that school music programs are getting their budgets cut. But there’s a reason that’s happening, and it isn’t that politicians and school boards are philistines. Enrollment in school music is declining in places where the budgets aren’t being cut, and even where schools are offering free instruments. We need to look at the content of school music itself to see why it’s driving kids away. Both the content of school music programs and the people teaching them are whiter than the student population. Even white kids are likely to be alienated from a Eurocentric curriculum that doesn’t reflect America’s increasingly Afrocentric musical culture. The large ensemble model that we imported from European conservatories is incompatible with the riot of polyglot individualism in the kids’ earbuds.

While music therapists have been teaching songwriting for years, it’s rare to find it in school music curricula. Production and beatmaking are even more rare. Not many adults can play oboe in an orchestra, but anyone with a guitar or keyboard or smartphone can write and perform songs. Music performance is a wonderful experience, one I wish were available to everyone, but music creation is on another level of emotional meaning entirely. It’s like the difference between watching basketball on TV and playing it yourself. It’s a way to understand your own innermost experiences and the innermost experiences of others. It changes the way you listen to music, and the way you approach any kind of art for that matter. It’s a tool that anyone should be able to have in their kit. Ableton is doing the music education world an invaluable service; I hope more of us follow their example.

Why hip-hop is interesting

The title of this post is also the title of a tutorial I’m giving at ISMIR 2016 with Jan Van Balen and Dan Brown. The conference is organized by the International Society for Music Information Retrieval, and it’s the fanciest of its kind. You may be wondering what Music Information Retrieval is. MIR is a specialized field in computer science devoted to teaching computers to understand music, so they can transcribe it, organize it, find connections and similarities, and, maybe, eventually, create it.

So why are we going to talk to the MIR community about hip-hop? So far, the field has mostly studied music using the tools of Western classical music theory, which emphasizes melody and harmony. Hip-hop songs don’t tend to have much going on in either of those areas, which makes the genre seem like it’s either too difficult to study, or just too boring. But the MIR community needs to find ways to engage this music, if for no other reason than the fact that hip-hop is the most listened-to genre in the world, at least among Spotify listeners.

Hip-hop has been getting plenty of scholarly attention lately, but most of it has been coming from cultural studies. Which is fine! Hip-hop is culturally interesting. When humanities people do engage with hip-hop as an art form, they tend to focus entirely on the lyrics, treating them as a subgenre of African-American literature that just happens to be performed over beats. And again, that’s cool! Hip-hop lyrics have literary interest. If you’re interested in the lyrical side, we recommend this video analyzing the rhyming techniques of several iconic emcees. But what we want to discuss is why hip-hop is musically interesting, a subject which academics have given approximately zero attention to.

Much of what I find exciting (and difficult) about hip-hop can be found in Kanye West’s song “Famous” from his album The Life Of Pablo.

The song comes with a video, a ten minute art film that shows Kanye in bed sleeping after a group sexual encounter with his wife, his former lover, his wife’s former lover, his father-in-law turned mother-in-law, various of his friends and collaborators, Bill Cosby, George Bush, Taylor Swift, and Donald Trump. There’s a lot to say about this, but it’s beyond the scope of our presentation, and my ability to verbalize thoughts. The song has some problematic lyrics. Kanye drops the n-word in the very first line and calls Taylor Swift a bitch in the second. He also speculates that he might have sex with her, and that he made her famous. I find his language difficult and objectionable, but that too is beyond the scope. Instead, I’m going to focus on the music itself.

“Famous” has a peculiar structure, shown in the graphic below.

The track begins with a six bar intro, Rihanna singing over a subtle gospel-flavored organ accompaniment in F-sharp major. She’s singing a few lines from “Do What You Gotta Do” by Jimmy Webb. This song has been recorded many times, but for Kanye’s listeners, the most significant one is by Nina Simone.

Next comes a four-bar groove, a more aggressive organ part over a drum machine beat, with Swizz Beatz exclaiming on top. The beat is a minimal funk pattern on just kick and snare, treated with cavernous artificial reverb. The organ riff is in F-sharp minor, which is an abrupt mode change so early in the song. It’s sampled from the closing section of “Mi Sono Svegliato E…Ho Chiuso Gli Occhi” by Il Rovescio della Medaglia, an Italian prog-rock band I had never heard of until I looked the sample up just now. The song is itself built around quotes of Bach’s Well-Tempered Clavier–Kanye loves sampling material built from samples.

Verse one continues the same groove, with Kanye alternating between aggressive rap and loosely pitched singing. Rap is widely supposed not to be melodic, but this idea collapses immediately under scrutiny. The border between rapping and singing is fluid, and most emcees cross it effortlessly. Even in “straight” rapping, though, the pitch sequences are deliberate and meaningful. The pitches might not fall on the piano keys, but they are melodic nonetheless.

The verse is twelve bars long, which is unusual; hip-hop verses are almost always eight or sixteen bars. The hook (the hip-hop term for chorus) comes next, Rihanna singing the same Jimmy Webb/Nina Simone quote over the F-sharp major organ part from the intro. Swizz Beatz does more interjections, including a quote of “Wake Up Mr. West,” a short skit on Kanye’s album Late Registration in which DeRay Davis imitates Bernie Mac.

Verse two, like verse one, is twelve bars on the F-sharp minor loop. At the end, you think Rihanna is going to come back in for the hook, but she only delivers the pickup. The section abruptly shifts into an F-sharp major groove over fuller drums, including a snare that sounds like a socket wrench. The lead vocal is a sample of “Bam Bam” by Sister Nancy, which is a familiar reference for hip-hop fans–I recognize it from “Lost Ones” by Lauryn Hill and “Just Hangin’ Out” by Main Source. The chorus means “What a bum deal.” Sister Nancy’s track is itself sample-based–like many reggae songs, it uses a pre-existing riddim or instrumental backing, and the chorus is a quote of the Maytals.

Kanye doesn’t just sample “Bam Bam”, he also reharmonizes it. Sister Nancy’s original is a I – bVII progression in C Mixolydian. Kanye pitch shifts the vocal to fit it over a I – V – IV – V progression in F-sharp major. He doesn’t just transpose the sample up or down a tritone; instead, he keeps the pitches close by changing their chord function. Here’s Sister Nancy’s original:

And here’s Kanye’s version:

The pitch shifting gives Sister Nancy the feel of a robot from the future, while the lo-fi recording places her in the past. It’s a virtuoso sample flip.

After 24 bars of the Sister Nancy groove, the track ends with the Jimmy Webb hook again. But this time it isn’t Rihanna singing. Instead, it’s a sample of Nina Simone herself. It reminds me of Kanye’s song “Gold Digger”, which includes Jamie Foxx imitating Ray Charles, followed by a sample of Ray Charles himself. Kanye is showing off here. It would be a major coup for most producers to get Rihanna to sing on a track, and it would be an equally major coup to be able to license a Nina Simone sample, not to mention requiring the chutzpah to even want to sample such a sacred and iconic figure. Few people besides Kanye could afford to use both Rihanna and Nina Simone singing the same hook, and no one else would dare. I don’t think it’s just a conspicuous show of industry clout, either; Kanye wants you to feel the contrast between Rihanna’s heavily processed purr and Nina Simone’s stark, preacherly tone.

Here’s a diagram of all the samples and samples of samples in “Famous.”

In this one track, we have a dense interplay of rhythms, harmonies, timbres, vocal styles, and intertextual meaning, not to mention the complexities of cultural context. This is why hip-hop is interesting.

You probably have a good intuitive idea of what hip-hop is, but there’s plenty of confusion around the boundaries. What are the elements necessary for music to be hip-hop? Does it need to include rapping over a beat? When blues, rock, or R&B singers rap, should we retroactively consider that to be hip-hop? What about spoken-word poetry? Does hip-hop need to include rapping at all? Do singers like Mary J. Blige and Aaliyah qualify as hip-hop? Is Run-DMC’s version of “Walk This Way” by Aerosmith hip-hop or rock? Is “Love Lockdown” by Kanye West hip-hop or electronic pop? Do the rap sections of “Rapture” by Blondie or “Shake It Off” by Taylor Swift count as hip-hop?

If a single person can be said to have laid the groundwork for hip-hop, it’s James Brown. His black pride, sharp style, swagger, and blunt directness prefigure the rapper persona, and his records are a bottomless source of classic beats and samples. The HBO James Brown documentary is a must-watch.

Wikipedia lists hip-hop’s origins as including funk, disco, electronic music, dub, R&B, reggae, dancehall, rock, jazz, toasting, performance poetry, spoken word, signifyin’, The Dozens, griots, scat singing, and talking blues. People use the terms hip-hop and rap interchangeably, but hip-hop and rap are not the same thing. The former is a genre; the latter is a technique. Rap long predates hip-hop–you can hear it in classical, rock, R&B, swing, jazz fusion, soul, funk, country, and especially blues, especially especially the subgenre of talking blues. Meanwhile, it’s possible to have hip-hop without rap. Nearly all current pop and R&B are outgrowths of hip-hop. Turntablists and controllerists have turned hip-hop into a virtuoso instrumental music.

It’s sometimes said that rock is European harmony combined with African rhythm. Rock began as dance music, and rhythm continues to be its most important component. This is even more true of hip-hop, where harmony is minimal and sometimes completely absent. More than any other music of the African diaspora, hip-hop is a delivery system for beats. These beats have undergone some evolution over time. Early hip-hop was built on funk, the product of what I call The Great Cut-Time Shift, as the underlying pulse of black music shifted from eighth notes to sixteenth notes. Current hip-hop is driving a Second Great Cut-Time Shift, as the average tempo slows and the pulse moves to thirty-second notes.

Like all other African-American vernacular music, hip-hop uses extensive syncopation, most commonly in the form of a backbeat. You can hear the blues musician Taj Mahal teach a German audience how to clap on the backbeat. (“Schvartze” is German for “black.”) Hip-hop has also absorbed a lot of Afro-Cuban rhythms, like the omnipresent son clave. This traditional Afro-Cuban rhythm is everywhere in hip-hop: in the drums, of course, but also in the rhythms of bass, keyboards, horns, vocals, and everywhere else. You can hear son clave in the snare drum part in “WTF” by Missy Elliott.
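
If you find step grids easier to read than notation, here’s a quick sketch of son clave laid out on the same sixteen-step grid the Groove Pizza uses. The step positions are the standard 3-2 son clave; the rendering is my own:

```python
# 3-2 son clave on a 16-step (sixteenth-note) grid, 0-indexed.
# The "3 side" is steps 0, 3 and 6; the "2 side" is steps 10 and 12.
SON_CLAVE = {0, 3, 6, 10, 12}

grid = "".join("X" if step in SON_CLAVE else "." for step in range(16))
print(grid)  # X..X..X...X.X...
```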

The NYU Music Experience Design Lab created the Groove Pizza app to help you visualize and interact with rhythms like the ones in hip-hop beats. You can use it to explore classic beats or more contemporary trap beats. Hip-hop beats come from three main sources: drum machines, samples, or (least commonly) live drummers.

Hip-hop was a DJ medium before emcees became the main focus. Party DJs in the disco era looped the funkiest, most rhythm-intensive sections of the records they were playing, and sometimes improvised toasts on top. Sampling and manipulating recordings has become effortless in the computer age, but doing it with vinyl records requires considerable technical skill. In the movie Wild Style, you can see Grandmaster Flash beat juggle and scratch “God Make Me Funky” by the Headhunters and “Take Me To The Mardi Gras” by Bob James (though the latter song had to be edited out of the movie for legal reasons.)

The creative process of making a modern pop recording is very different from composing on paper or performing live. Hip-hop is an art form about tracks, and the creativity is only partially in the songs and the performances. A major part of the art form is the creation of sound itself. It’s the timbre and space that make the best tracks come alive as much as any of the “musical” components. The recording studio gives you control over the finest nuances of the music that live performers can only dream of. Most of the music consists of synths and samples that are far removed from a “live performance.” The digital studio erases the distinction between composition, improvisation, performance, recording and mixing. The best popular musicians are the ones most skilled at “playing the studio.”

Hip-hop has drawn much inspiration from the studio techniques of dub producers, who perform mixes of pre-existing multitrack tape recordings by literally playing the mixing desk. When you watch The Scientist mix Ted Sirota’s “Heavyweight Dub,” you can see him shaping the track by turning different instruments up and down and by turning the echo effect on and off. Like dub, hip-hop is usually created from scratch in the studio. Brian Eno describes the studio as a compositional tool, and hip-hop producers would agree.

Aside from the human voice, the most characteristic sounds in hip-hop are the synthesizer, the drum machine, the turntable, and the sampler. The skills needed by a hip-hop producer are quite different from the ones involved in playing traditional instruments or recording on tape. Rock musicians and fans are quick to judge electronic musicians like hip-hop producers for not being “real musicians” because sequencing electronic instruments appears to be easier to learn than guitar or drums. Is there something lazy or dishonest about hip-hop production techniques? Is the guitar more of a “real” instrument than the sampler or computer? Are the Roots “better” musicians because they incorporate instruments?

Maybe we discount the creative prowess of hip-hop producers because we’re unfamiliar with their workflow. Fortunately, there’s a growing body of YouTube videos that document various aspects of the process.

Before affordable digital samplers became available in the late 1980s, early hip-hop DJs and producers did most of their audio manipulation with turntables. Record scratching demands considerable skill and practice, and it has evolved into a virtuoso form analogous to bebop saxophone or metal guitar shredding.

Hip-hop is built on a foundation of existing recordings, repurposed and recombined. Samples might be individual drum hits, or entire songs. Even hip-hop tracks without samples very often started with them; producers often replace copyrighted material with soundalike “original” beats and instrumental performances for legal reasons. Turntables and samplers make it possible to perform recordings like instruments.

The Amen break, a six-second drum solo, is one of the most important samples of all time. It’s been used in uncountably many hip-hop songs, and is the basis for entire subgenres of electronic music. Ali Jamieson gives an in-depth exploration of the Amen.

There are few artistic acts more controversial than sampling. Is it a way to enter into a conversation with other artists? An act of liberation against the forces of corporatized mass culture? A form of civil disobedience against a stifling copyright regime? Or is it a bunch of lazy hacks stealing ideas, profiting off other musicians’ hard work, and devaluing the concept of originality? Should artists be able to control what happens to their work? Is complete originality desirable, or even possible?

We look to hip-hop to tell us the truth, to be real, to speak to feelings that normally go unspoken. At the same time, we expect rappers to be larger than life, to sound impossibly good at all times, and to live out a fantasy life. And many of our favorite artists deliberately alter their appearance, race, gender, nationality, and even species. To make matters more complicated, we mostly experience hip-hop through recordings and videos, where artificiality is the nature of the medium. How important is authenticity in this music? To what extent is it even possible?

The “realness” debate in hip-hop reached its apogee with the controversy over Auto-Tune. Studio engineers have been using computer software to correct singers’ pitch since the early 1990s, but the practice only became widely known when T-Pain overtly used exaggerated Auto-Tune as a vocal effect rather than a corrective. The “T-Pain effect” makes it impossible to sing a wrong note, though at the expense of making the singer sound like a robot from the future. Is this the death of singing as an art form? Is it cheating to rely on software like this? Does it bother you that Kanye West can have hits as a singer when he can barely carry a tune? Does it make a difference to learn that T-Pain has flawless pitch when he turns off the Auto-Tune?

Hip-hop is inseparable from its social, racial and political environment. For example, you can’t understand eighties hip-hop without understanding New York City in the pre-Giuliani era. Eric B and Rakim capture it perfectly in the video for “I Ain’t No Joke.”

Given that hip-hop is the voice of the most marginalized people in America and the world, why is it so compelling to everyone else? Timothy Brennan argues that the musical African diaspora of which hip-hop is a part helps us resist imperialism through secular devotion. Brennan thinks that America’s love of African musical practice is related to an interest in African spiritual practice. We’re unconsciously drawn to the musical expression of African spirituality as a way of resisting oppressive industrial capitalism and Western hegemony. It isn’t just the defiant stance of the lyrics that’s doing the resisting. The beats and sounds themselves are doing the major emotional work, restructuring our sense of time, imposing a different grid system onto our experience. I would say that makes for some pretty interesting music.

Beatmaking fundamentals

I’m currently working with the Ed Sullivan Fellows program, an initiative of the NYU MusEDLab where we mentor up-and-coming rappers and producers. Many of them are working with beats they got from YouTube or SoundCloud. That’s fine for working out ideas, but to get to the next level, the Fellows need to be making their own beats. Partly this is for intellectual property reasons, and partly it’s because the quality of the mp3s you get from YouTube is not so good. Here’s a collection of resources and ideas I collected for them, and that you might find useful too.

Sullivan Fellows - beatmaking with FL Studio

What should you use?

There are a lot of digital audio workstations (DAWs) out there. All of them have the same basic set of functions: a way to record and edit audio, a MIDI sequencer, and a set of samples and software instruments. My DAW of choice is Ableton Live. Most of the Sullivan Fellows favor FL Studio. Mac users naturally lean toward GarageBand and Logic. Other common tools for hip-hop producers include Reason, Pro Tools, Maschine, and in Europe, Cubase.

Traditional DAWs are not the only option. Soundtrap is a DAW that’s similar to GarageBand, but with the enormous advantage that it runs entirely in the web browser. It also offers some nifty features like built-in Auto-Tune at a fraction of the usual price. The MusEDLab’s own Groove Pizza is an accessible browser-based drum sequencer. Looplabs is another intriguing browser tool.

Mobile apps are not as robust or full-featured as desktop DAWs yet, but some of them are getting there. The iOS version of GarageBand is especially tasty. Figure makes great techno loops, though you’ll need to assemble them into songs using another tool. The Launchpad app is a remarkably easy and intuitive one. See my full list of recommendations.

Sullivan Fellows - beatmaking with iOS GarageBand

Where do you get sounds?

DAW factory sounds

Every DAW comes with a sample library and a set of software instruments. Pros: they’re royalty-free. Cons: they tend to be generic-sounding and overused. Be sure to tweak the presets.

Sample libraries and instrument packs

The internet is full of third-party sound libraries. They range widely in price and quality. Pros: like DAW factory sounds, library sounds are royalty-free, and a far wider variety is available. Cons: the best libraries are expensive.

Humans playing instruments

You could record music the way it was made from the Stone Age through about 1980: by having people play instruments. Pros: you get human feel, creativity, improvisation, and distinctive instrumental timbres and techniques. Cons: humans are expensive and impractical to record well.

Your record collection

Using more DJ-oriented tools like Ableton, it’s perfectly effortless to pull sounds out of any existing recording. Pros: bottomless inspiration, and the ability to connect emotionally to your listener through sounds that are familiar and meaningful to them. Cons: if you want to charge money, you will probably need permission from the copyright holders, and that can be difficult and expensive. Even giving tracks away on the internet can be problematic. I’ve been using unauthorized samples for years and have never been in any trouble, but I’ve had a few SoundCloud takedowns.

Sullivan Fellows - beatmaking with Pro Tools

What sounds do you need?

Drums

Most hip-hop beats revolve around the components of the standard drum kit: kicks, snares, hi-hats (open and closed), crash cymbals, ride cymbals, and toms. Handclaps and finger snaps have become part of the standard drum palette as well. There are two kinds of drum sounds, synthetic (“fake”) and acoustic (“real”).

Synthetic drums are the heart and soul of hip-hop (and most other pop and dance music at this point.) There are tons of software and hardware drum machines out there, but there are three in particular you should be aware of.

  • Roland TR-808: If you could only have one drum machine for hip-hop creation, this would be the one. Every DAW contains sampled or simulated 808 sounds, sometimes labeled “old-skool” or something similar. It’s an iconic sound for good reason.
  • Roland TR-909: A cousin of the 808 that’s traditionally used more for techno. Still, you can get great hip-hop sounds out of it too. Your DAW is certain to contain some 909 sounds, often labeled with some kind of dance music terminology.
  • LinnDrum: The sound of the 80s. Think Prince, or Hall And Oates. Not as ubiquitous in DAWs as the 808 and 909, but pretty common.

Acoustic drums are less common in hip-hop, though not unheard of; just ask Questlove.

Some hip-hop producers use live drummers, but it’s much easier to use sampled acoustic drums. Samples are also a good source of Afro-Cuban percussion sounds like bongos, congas, timbales, cowbells, and so on. Also consider using “non-musical” percussion sounds: trash can lids, pots and pans, basketballs bouncing, stomping on the floor, and so on.

And how do you learn where to place these drum sounds? Try the specials on the Groove Pizza. Here’s an additional hip-hop classic to experiment with: the beat from “Nas Is Like” by Nas.

Groove Pizza - Nas Is Like

Bass

Hip-hop uses synth bass the vast majority of the time. Your DAW comes with a variety of synth bass sounds, including the simple sine wave sub, the P-Funk Moog bass, dubstep wobbles, and many others. For more unusual bass sounds, try very low-pitched piano or organ. Bass guitar isn’t extremely common in current hip-hop, but it’s worth a try. If you want a 90s Tribe Called Quest vibe, try upright bass.

In the past decade, some hip-hop producers have followed Kanye West’s example and used tuned 808 kick drums to play their basslines. Kanye has used it on all of his albums since 808s and Heartbreak. It’s an amazing solution; those 808 kicks are huge, and if they’re carrying the bassline too, then your low end can be nice and open. Another interesting alternative is to have no bassline at all. It worked for Prince!

And what notes should your bass be playing? If you have chords, the obvious thing is to have the bass playing the roots. You can also have the bass play complicated countermelodies. We made a free online course called Theory for Producers to help you figure these things out.

Chords

Usually your chords are played on some combination of piano, electric piano, organ, synth, strings, guitar, or horns. Vocal choirs are nice too. Once again, consult Theory for Producers for inspiration. Be sure to try out chords with the aQWERTYon, which was specifically designed for this very purpose.

Leads

The same instruments that you use for chords also work fine for melodies. In fact, you can think of melodies as chords stretched out horizontally, and conversely, you can think of chords as melodies stacked up vertically.

FX

For atmosphere in your track, ambient synth pads are always effective. Also try non-musical sounds like speech, police sirens, cash registers, gun shots, birds chirping, movie dialog, or whatever else your imagination can conjure. Make sure to visit Freesound.org – you have to sign up, but it’s worth it. Above all, listen to other people’s tracks, experiment, and trust your ears.

Musical simples – Teenage Dream

I’m working with Soundfly on the next installment of Theory For Producers, our ultra-futuristic online music theory course. The first unit covered the black keys of the piano and the pentatonic scales. The next one will talk about the white keys and the diatonic modes. We were gathering examples, and we needed to find a well-known pop song that uses Lydian mode. My usual go-to example for Lydian is “Possibly Maybe” by Björk. But the course already uses a Björk tune for a different example, and the Soundfly guys quite reasonably wanted something a little more millennial-friendly anyway. We decided to use Katy Perry’s “Teenage Dream” instead.

A couple of years ago, Slate ran an analysis of this tune by Owen Pallett. It’s an okay explanation, but it doesn’t delve too deep. We thought we could do better.

Here’s my transcription of the chorus:

When you look at the melody, this would seem to be a straightforward use of the B-flat major scale. However, the chord changes tell a different story. The tune doesn’t ever use a B-flat major chord. Instead, it oscillates back and forth between E-flat and F. In this harmonic context, the melody doesn’t belong to the plain vanilla B-flat major scale at all, but rather the dreamy and modernist E-flat Lydian mode. The graphic below shows the difference.

Teenage Dream Eb Lydian circles

Both scales use the same seven pitches: B-flat, C, D, E-flat, F, G, and A. The only difference between the two is which note you consider to be “home base.” Let’s consider B-flat major first.

To make chords from a scale, you pick any note, and then go clockwise around the scale, skipping every other degree. The chords are named for the note you start on. If you start on the fourth note, E-flat, you get the IV chord (the other two notes are G and B-flat.) If you start on the fifth note, F, you get the V chord (the other two notes are A and C.) In a major key, IV and V are very important chords. They’re called the subdominant and dominant chords, respectively, and they both create a feeling of suspense. You can resolve the suspense by following either one with the I chord. The weird thing about “Teenage Dream” is that if you think about it as being in B-flat, then it never lands on the I chord at all. It just oscillates back and forth between IV and V. The suspense never gets resolved.

If we think of “Teenage Dream” as being in E-flat Lydian, then the E-flat chord is I, which makes more sense. The function of the F chord in this context isn’t clearly defined by music theory, but it does sound good. Lydian is very similar to the major scale, with only one difference: while the fourth note of E-flat major is A-flat, the fourth note of E-flat Lydian is A natural. That raised fourth gives Lydian mode its otherworldly sound. The F chord gets its airborne quality from that raised fourth.
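
Here’s a small Python sketch of that relationship, using the same skip-every-other-degree logic described above. The scale spellings and the `triad` helper are my own illustration:

```python
# The two scales contain the same seven notes; only "home base" differs.
BB_MAJOR  = ["Bb", "C", "D", "Eb", "F", "G", "A"]
EB_LYDIAN = ["Eb", "F", "G", "A", "Bb", "C", "D"]
print(sorted(BB_MAJOR) == sorted(EB_LYDIAN))  # True

def triad(scale, degree):
    i = degree - 1
    return [scale[(i + step) % 7] for step in (0, 2, 4)]

# IV and V of B-flat major...
print(triad(BB_MAJOR, 4), triad(BB_MAJOR, 5))
# ...are the same two chords as I and II of E-flat Lydian:
print(triad(EB_LYDIAN, 1), triad(EB_LYDIAN, 2))
# ['Eb', 'G', 'Bb'] ['F', 'A', 'C'] both times
```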

Click here to play over “Teenage Dream” using the aQWERTYon. The two chords can be played on the letters Z-A-Q and X-S-W. For comparison, try playing it with B-flat major. Read more about the aQWERTYon here.

“Teenage Dream” is not the only well-known song to use the Lydian I-II progression. Other high profile examples include “Dreams” by Fleetwood Mac and “Jane Says” by Jane’s Addiction, both built on the same kind of two-chord oscillation. Try singing any of these songs over any of the others; they all fit seamlessly.

The chorus of “Teenage Dream” uses a striking rhythm on the phrases “you make me”, “teenage dream”, and “I can’t sleep”. The song is in 4/4 time, like nearly all contemporary pop tracks, but that chorus rhythm has a feeling of three about it. It’s no illusion. The words “you” and “make” in the first line are each three eighth notes long. It sounds like an attempt to divide the eight eighth notes into groups of three. This rhythm is called Tresillo, and it’s one of the building blocks of Afro-Cuban drumming.

tresillo

Tresillo is the front half of son clave. It’s extraordinarily common in American vernacular music, especially in accompaniment patterns. You hear Tresillo in the bassline to “Hound Dog” and countless other fifties rock songs; in the generic acoustic guitar strumming pattern used by singer-songwriters everywhere; and in the kick and snare pattern characteristic of reggaetón. Tresillo is ubiquitous in jazz, and in the dance music of India and the Middle East.
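
To make the “front half of son clave” relationship concrete, here’s a sketch that derives Tresillo’s onsets from its 3+3+2 durations; the step arithmetic is mine:

```python
from itertools import accumulate

# Tresillo divides a bar of eight eighth notes as 3 + 3 + 2.
durations = [3, 3, 2]
onsets = [0] + list(accumulate(durations))[:-1]
print(onsets)  # [0, 3, 6] -> eighth notes 1, 4 and 7 of the bar

# Son clave lives on a 16-step sixteenth-note grid, with onsets at
# steps 0, 3, 6, 10 and 12. Its first eight steps -- 0, 3, 6 -- trace
# the same 3+3+2 shape, which is exactly what it means to say that
# Tresillo is the front half of son clave.
son_clave = [0, 3, 6, 10, 12]
print([step for step in son_clave if step < 8])  # [0, 3, 6]
```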

“Teenage Dream” alternates the Tresillo with a funky syncopated rhythm pattern that skips the first beat of the measure. When you listen to the line “feel like I’m livin’ a”, there’s a hole right before the word “feel”. That hole is the downbeat, which is the usual place to start a phrase. When you avoid the obvious beat, you surprise the listener, which grabs their attention. The drums underneath this melody hammer relentlessly away on the strong beats, so it’s easy to parse out the rhythmic sophistication. Katy Perry songs have a lot of empty calories, but they taste as good as they do for a reason.

Theory for Producers

I’m delighted to announce the launch of a new interactive online music course called Theory for Producers: The Black Keys. It’s a joint effort by Soundfly and the NYU MusEDLab, representing the culmination of several years’ worth of design and programming. We’re super proud of it.

Theory for Producers: The Black Keys

The course makes the abstractions of music theory concrete by presenting them in the form of actual songs you’re likely to already know. You can play and improvise along with the examples right in the web browser using the aQWERTYon, which turns your computer keyboard into an easily playable instrument. You can also bring the examples into programs like Ableton Live or Logic for further hands-on experimentation. We’ve spiced up the content with videos and animations, along with some entertaining digressions into the Stone Age and the auditory processing abilities of frogs.

So what does it mean that this is music theory for producers? We’re organizing the material in a way that’s easiest and most relevant to people using computers to create the dance music of the African diaspora: techno, hip-hop, and their various pop derivatives. This music carries most of its creative content outside of harmony: in rhythm, timbre, and repetitive structure. The harmony is usually static, sitting on a loop of a few chords or just a single mode. Alongside the standard (Western) major and minor scales, you’re just as likely to encounter more “exotic” (non-Western) sounds.

Music theory classes and textbooks typically begin with the C major scale, because it’s the easiest scale to represent and read in music notation. However, C major is not necessarily the most “basic” or fundamental scale for our intended audience. Instead, we start with E-flat minor pentatonic, otherwise known as the black keys on the piano. The piano metaphor is ubiquitous both in electronic music hardware and software, and pentatonics are even easier to play on piano than diatonic scales. E-flat minor pentatonic is more daunting in notated form than C major, but since dance and hip-hop producers tend not to be able to read music anyway, that’s no obstacle. And if producers want to use keys other than E-flat minor (or G-flat major), they can keep playing the black keys and then transpose the MIDI later.
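
The “play the black keys, transpose the MIDI later” workflow is easy to express in code. Here’s a sketch in terms of MIDI note numbers; the riff and the helper function are hypothetical examples of mine:

```python
# Pitch classes of the piano's black keys: Db=1, Eb=3, Gb=6, Ab=8, Bb=10.
# Together they form E-flat minor pentatonic (or G-flat major pentatonic).
BLACK_KEYS = {1, 3, 6, 8, 10}

def transpose(midi_notes, semitones):
    """Shift a black-key performance into any other key after the fact."""
    return [note + semitones for note in midi_notes]

riff = [63, 66, 68, 70, 68, 66]  # an E-flat minor pentatonic noodle
print(all(note % 12 in BLACK_KEYS for note in riff))  # True
print(transpose(riff, -3))  # the same riff, now in C minor pentatonic
```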

The Black Keys is just the first installment in Theory For Producers. Next, we’ll do The White Keys, otherwise known as the modes of C major. We’re planning to start that course not with C major itself, but with G Mixolydian mode, because it’s a more familiar sound in Afrodiasporic music than straight major. After that, we’ll do a course about chords, and one about rhythm. We hope you sign up!

Update: oh hey, we’re on Lifehacker

Teaching reflections

Here’s what happened in my life as an educator this past semester, and what I have planned for the coming semester.

Montclair State University Intro To Music Technology

I wonder how much longer “music technology” is going to exist as a subject. They don’t teach “piano technology” or “violin technology.” It makes sense to teach specific areas like audio recording or synthesis or signal theory as separate classes. But “music technology” is such a broad term as to be meaningless. The unspoken assumption is that we’re teaching “musical practices involving a computer,” but even that is both too big and too small to structure a one-semester class around. On the one hand, every kind of music involves computers now. On the other hand, to focus just on the computer part is like teaching a word processing class that’s somehow separate from learning how to write.

MSU Intro to Music Tech

The newness and vagueness of the field of study gives me and my fellow music tech educators wide latitude to define our subject matter. I see my job as providing an introduction to pop production and songwriting. The tools we use for the job at Montclair are mostly GarageBand and Logic, but I don’t spend a lot of time on the mechanics of the software itself. Instead, I teach music: How do you express yourself creatively using sample libraries, or MIDI, or field recordings, or pre-existing songs? What kinds of rhythms, harmonies, timbres and structures make sense aesthetically when you’re assembling these materials in the DAW? Where do you get ideas? How do you listen to recorded music analytically? Why does Thriller sound so much better than any other album recorded in the eighties? We cover technical concepts as they arise in the natural course of producing and listening. My hope is that they’ll be more relevant and memorable that way.

Having now taught three semesters of Intro to Music Tech at MSU, my format is starting to gel. The students spend most of the semester creating tracks. They do one using only the loops that come with GarageBand, one using only MIDI and software instruments, one that includes a field recording they made with their phones, and so on. I started having them remix each other’s tracks this past semester, and it was such a smash hit that I’m going to have future classes do a whole series of peer remixes.

Montclair is a fairly traditional conservatory. For many students, my class is the only time in their college careers they get to make music according to their own sensibilities and tastes. It’s also usually the only time they engage critically with recordings, or electronic dance music, or hip-hop, or pop song forms, or sampling, or mixing and audio processing. I’m glad to be able to fill these vacuums, but I wish I had more than one semester to do it in.

Aside from creative music-making, the students do a couple of presentations, one on a song they think is interesting, and one on a topic of their choice. They also write blog posts about the process of creating their tracks. This last assignment is a persistent obstacle, since no one seems to share my enthusiasm for process documentation. Next semester I’m going to try introducing some of the cooperative/competitive spirit of the peer remixes by having them write reviews of each other’s tracks. Maybe that will get them to invest their writing with the same creativity they put into the music assignments.

Montclair State Advanced Computer Music Composition

This past fall I got to teach my first advanced class, and it went amazingly well. We used Ableton Live, my DAW of choice, and the guys (it was all guys) banged out tracks at a rapid clip for the entire semester. As with the intro class, I spent most of the time on the creative process, and dealt with Ableton functionality and audio engineering topics as they came up.

Tristan gets his FFT on

Each assignment came with some kind of tight technical restriction, but no stylistic restrictions. As with the intro class, the advanced dudes did tracks using only existing loops, only MIDI, and found sound. They did peer remixing and self remixing as well. The two hardest and most interesting assignments were to create a new track using only samples of an existing track, and then to create a new track using only a single five-second Duke Ellington sample. (These assignments were inspired heavily by the Disquiet Junto.) The more tightly I constrained the students, the more ingenuity they displayed. Listen for yourself:

As with the intro class, I tried to have the advanced dudes document their process with blog posts. As with the intro class, they showed zero interest. In the future, I’ll have to get more creative with the writing component. Also, I’d like to not have the class be entirely male.

NYU Music Education Technology Practicum

This class is meant to be a grounding in music tech for future music teachers. I’m even more time-constrained at NYU than at Montclair, and I teach in a regular classroom rather than a computer lab. While my class time at Montclair is mostly devoted to music-making, at NYU I’m forced to do more lectures, demos and listening sessions. It is very far from ideal. I have no idea how NYU can charge so much money without offering such a basic-seeming amenity as a room with computers in it for the music students. However, NYU does have one advantage over Montclair as a teaching environment, which is that I can hold a couple of class sessions in an extremely fancy recording studio.

Catherine and Joseph in the Dolan Studio

I mostly take the same approach at NYU as I do at Montclair, and use most of the same assignments. The major difference is that the NYU kids do a critical listening project, where they pick a recording and graph out its musical structure and spatial layout. It’s a difficult exercise, but an invaluable one. I did it in grad school, and it improved my analytical listening abilities significantly. We used to do the same assignment at Montclair, but the students were really not into it, like to the point of refusing to do it, so sadly we had to drop it from the syllabus. I hope we can find a way to reinstate it.

This past semester, the majority of my NYU kids were music business majors, which was pretty great. They came in with less musical experience than the education majors–sometimes with none at all–but they had less to unlearn, and they threw themselves confidently into producing tracks. This coming semester I have a bunch more music business kids. I’m attracting them because my class is the only one at Steinhardt that does intro-level creative music making in the pop idiom. I’m clearly filling a vacuum, and I’m hoping that I’m just the thin edge of the wedge, both for my own sake and the future music educators of NYU.

Interface designs

The NYU Music Experience Design Lab is baking education into a suite of creative music making and learning tools. As my friend and colleague Adam Bell likes to say, purchasers of a computer are purchasing a music education. We’re trying to make that education a better and more enjoyable one, whether our users are in formal classroom settings or playing around on their own. You can read about the lab’s various projects here. My own contributions are largely conceptual, though I’ve also devoted a lot of attention to making useful and inspiring presets.

Cold Sweat on the Groove Pizza

The Ed Sullivan Fellows Program

This winter, the MusEDLab is launching a brand new initiative: mentoring a group of young people from challenging circumstances as they get started in music and technology. I’ll be teaching the music side, using a custom-tailored version of my intro class syllabus. Sullivan Fellows will also work with my colleagues in the lab on programming and design projects. This summer, we’ll have a showcase event as part of the 2016 IMPACT Conference. The goal is to help the Fellows launch careers in music and/or technology. I’ll be writing a lot more about this in the coming weeks.

Online courses with Soundfly

The MusEDLab is working with Soundfly, a music ed startup, on some new interactive online courses. The first is called Music Theory For Bedroom Producers, and we expect to launch it next month. I wrote a lot of the materials, and am appearing in some videos. Soundfly has ace designers, animators and programmers, so expect a rich multimedia experience. More on this as it gets closer.

Everything else

For the past few years, I’ve been a teaching artist with NYU’s IMPACT workshop. Below, you can see some participants making beats on an iPad. The workshop is a crash course not just in music, but in theater, dance, video, and the intersection of all of the above. I’m still very much figuring out my role in the whole thing, but so is everyone involved.

Mobile music at IMPACT

I continue to teach private lessons, do freelance production and composition, do some consulting, write for online publications, and generally keep hustling for gigs. If you’d like to have me do any of these things, get in touch.

Space Oddity: from song to track

If you’ve ever wondered what it is that a music producer does exactly, David Bowie’s “Space Oddity” is a crystal clear example. To put it in a nutshell, a producer turns this:

Into this:

It’s also interesting to listen to the first version of the commercial recording, which is better than the demo, but still nowhere near as majestic as the final version. The Austin Powers flute solo is especially silly.

Should we even consider these three recordings to be the same piece of music? On the one hand, they’re all the same melody and chords and lyrics. On the other hand, if the song only existed in its demo form, or in the awkward Austin Powers version, it would never have made the impact that it did. Some of the impact of the final version lies in better recording techniques and equipment, but it’s more than that. The music takes on a different meaning in the final version. It’s bigger, trippier, punchier, tighter, more cinematic, more transporting, and in general about a thousand times more effective.

The producer’s job is to marshal the efforts of songwriters, arrangers, performers and engineers to create a good-sounding recording. (The producer might also be a songwriter, arranger, performer, and/or engineer.) Producers are to songs what directors are to movies, or showrunners are to television.

When you talk about a piece of recorded music, you’re really talking about three different things (there’s a toy code sketch of the distinction after this list):

  1. The underlying composition, the part that can be represented on paper. Albin Zak calls this “the song.”
  2. The performance of the song.
  3. The finished recording, after overdubbing, mixing, editing, effects, and all the rest. Albin Zak calls this “the track.”
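For the programming-minded, here’s a toy Python sketch of that three-layer distinction. The class and field names are my own illustration, not Zak’s terminology:

```python
# A toy model of the song/performance/track distinction.
# All names here are illustrative, not Zak's own terms.
from dataclasses import dataclass, field

@dataclass
class Song:
    """The part that can be represented on paper."""
    lyrics: str
    chords: list[str]   # e.g. ["C", "Em", "F"]
    melody: list[int]   # e.g. MIDI note numbers

@dataclass
class Performance:
    """One rendition of the song by particular people at a particular time."""
    song: Song
    performers: list[str]
    tempo_bpm: float

@dataclass
class Track:
    """The finished recording: chosen takes, overdubs, mix, and effects."""
    performances: list[Performance]
    effects: list[str] = field(default_factory=list)  # e.g. ["echo", "reverb"]
```

The demo and the final version share one Song, but they’re different Performances and radically different Tracks.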

I had always assumed that Tony Visconti produced “Space Oddity,” since he produced a ton of other Bowie classics. As it turns out, though, Visconti was underwhelmed by the song, so he delegated it to his assistant, Gus Dudgeon. So what exactly did Gus Dudgeon do? First, let’s separate out what he didn’t do.

You can hear from the demo that the chords, melody and lyrics were all in place before Bowie walked into the studio. They’re the parts reproduced by the subway busker I heard singing “Space Oddity” this morning. The demo includes a vocal arrangement that’s similar to the final one, aside from some minor phrasing changes. The acoustic guitar and Stylophone are in place as well. (I had always thought it was an oboe, but no, that droning sound is a low-tech synth.)

Gus Dudgeon took a song and a partial arrangement, and turned it into a track. He oversaw the addition of electric guitar, bass, drums, strings, woodwinds, and keyboards. He coached Bowie and the various studio musicians through their performances, selected the takes, and decided on effects like echoes and reverb. He supervised the mixing, which not only sets the relative loudness of the various sounds, but also affects their perceived location and significance. In short, he designed the actual sounds that you hear.
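To make “relative loudness” and “perceived location” concrete, here’s a minimal numpy sketch of what faders and pan pots do to mono stems. It’s a toy illustration, not a recreation of Dudgeon’s actual mix:

```python
# A toy two-stem "mix": gain sets loudness, constant-power panning sets
# left-right placement. The sine-wave stems are stand-ins for real tracks.
import numpy as np

def place(mono, gain_db, pan):
    """Position a mono stem in the stereo field.
    pan: -1.0 = hard left, 0.0 = center, 1.0 = hard right."""
    gain = 10 ** (gain_db / 20)        # decibels to linear amplitude
    angle = (pan + 1) * np.pi / 4      # map [-1, 1] onto [0, pi/2]
    left = mono * gain * np.cos(angle)
    right = mono * gain * np.sin(angle)
    return np.stack([left, right], axis=-1)

sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
guitar = np.sin(2 * np.pi * 220 * t)   # fake "guitar" stem
drone = np.sin(2 * np.pi * 440 * t)    # fake "Stylophone" stem

# Guitar a little left and prominent; drone off to the right and quieter.
mix = place(guitar, -3, -0.4) + place(drone, -12, 0.6)
```

Nudge the gains and pans and the same two performances become a noticeably different-sounding record.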

If you want to dive deep into the track, you’re in luck, because Bowie officially released the multitrack stems. Some particular points of interest:

  • The bassist, Herbie Flowers, was a rookie. The “Space Oddity” session was his first. He later went on to create the staggeringly great dual bass part in Lou Reed’s “Walk On The Wild Side.”
  • The strings were arranged and conducted by the multifaceted Paul Buckmaster, who a few years later would work with Miles Davis on the conception of On The Corner. Buckmaster’s cello harmonics contribute significantly to the psychedelic atmosphere–listen to the end of the stem labeled “Extras 1.”
  • The live strings are supplemented by Mellotron, played by future Yes keyboardist Rick Wakeman, he of the flamboyant gold cape.
  • Tony Visconti plays some flute and unspecified woodwinds, including the distinctive saxophone run that leads into the instrumental sections.

You can read a detailed analysis of the recording on the excellent Bowiesongs blog.

The big difference between the sixties and the present is that the track has assumed ever-greater importance relative to the song and the performance. In the age of MIDI and digital audio editing, live performance has become a totally optional component of music. The song is increasingly inseparable from the sounds used to realize it, especially in synth-heavy music like hip-hop and EDM. This shift gives the producer ever-greater importance in the creative process. There is really no such thing as a “demo” anymore, since anyone with a computer can produce finished-sounding tracks in their bedroom. If David Bowie were a kid now, he’d put together “Space Oddity” in GarageBand or FL Studio, with a lavish soundscape as part of the conception from the beginning.

I want my students to understand that the words “producer” and “musician” are becoming synonymous. I want them to know that they can no longer focus solely on composition or performance and wait for someone else to craft a track around them. The techniques used to make “Space Oddity” were esoteric and expensive to realize at the time. Now, they’re easily within reach. But while the technology is more accessible, you still have to have the ideas. This is why it’s so valuable to study great producers like Tony Visconti and Gus Dudgeon: they’re a goldmine of sonic inspiration.

See also: a broader appreciation of Bowie.