Ultralight Beam

The first song on Kanye West’s The Life of Pablo album, and my favorite so far, is the beautiful, gospel-saturated “Ultralight Beam.” See Kanye and company perform it live on SNL.


The song uses only four chords, but they’re an interesting four: C minor, E-flat major, A-flat major, and G7. To find out why they sound so good together, let’s do a little music theory.

“Ultralight Beam” is in the key of C minor, and three of the four chords come from the C natural minor scale, shown below. Click the image to play the scale in the aQWERTYon (requires Chrome).

Ultralight Beam C natural minor

To make a chord, start on any scale degree, move two slots clockwise (skipping over the note in between), and then move two more, and so on. To make C minor, you start on C, then jump to E-flat, and then to G. To make E-flat major, you start on E-flat, then jump to G, and then to B-flat. And to make A-flat major, you start on A-flat, then jump to C, and then to E-flat. Simple enough so far.
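
If it helps to see that recipe spelled out, here’s a minimal sketch in Python; the note spellings and the triad() helper are mine, not anything taken from the aQWERTYon.

```python
# A minimal sketch of the "move two slots, skip the note in between" recipe,
# using the C natural minor scale.
C_NATURAL_MINOR = ["C", "D", "Eb", "F", "G", "Ab", "Bb"]

def triad(scale, root_index):
    """Stack two more chord tones on a root, skipping one scale degree each time."""
    return [scale[(root_index + step) % len(scale)] for step in (0, 2, 4)]

print(triad(C_NATURAL_MINOR, 0))  # ['C', 'Eb', 'G']  -> C minor
print(triad(C_NATURAL_MINOR, 2))  # ['Eb', 'G', 'Bb'] -> E-flat major
print(triad(C_NATURAL_MINOR, 5))  # ['Ab', 'C', 'Eb'] -> A-flat major
```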

The C natural minor scale shares its seven notes with the E-flat major scale:

Ultralight Beam Eb major circles

All we’ve really done here is rotate the circle three slots counterclockwise. All the relationships stay the same, and you can form the same chords in the same way. The two scales are so closely related that if you noodle around on C natural minor long enough, it just starts sounding like E-flat major. Try it!
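
If you’d rather verify that in code than by ear, here’s a quick check (again, my own illustration):

```python
# The two scales contain exactly the same seven notes, and one is just a
# rotation of the other.
C_NATURAL_MINOR = ["C", "D", "Eb", "F", "G", "Ab", "Bb"]
EB_MAJOR        = ["Eb", "F", "G", "Ab", "Bb", "C", "D"]

print(set(C_NATURAL_MINOR) == set(EB_MAJOR))                  # True
print(EB_MAJOR == C_NATURAL_MINOR[2:] + C_NATURAL_MINOR[:2])  # True
```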

The last of the four chords in “Ultralight Beam” is G7, and to make it, we need a note that isn’t in C natural minor (or E-flat major): the leading tone, B natural. If you take C natural minor and replace B-flat with B natural, you get a new scale: C harmonic minor.

Ultralight Beam C harmonic minor

If you make a chord starting on G from C natural minor, you get G minor (G, B-flat, D). The chord sounds fine, and you could use it with the other three above without offending anyone. But if you make the same chord using C harmonic minor, you get G major (G, B, D). This is a much more dramatic and exciting sound. If you add one more chord tone, you get G7 (G, B, D, F), known as the dominant chord in C minor. In the diagram below, the G7 chord is in blue, and C minor is in green.

Ultralight Beam C harmonic minor with V7 chord

Feel how much more intensely that B natural pulls to C than B-flat did? That’s what gives the song its drama, and what puts it unambiguously in C minor rather than E-flat major.
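
Here’s the same chord-building sketch applied to G in both scales, to make the B-flat versus B-natural difference concrete (the stack() helper is mine):

```python
# The only difference between the two scales is B-flat versus B natural, but
# it changes the chord on G from minor to major, and gives us the G7 chord.
C_NATURAL_MINOR  = ["C", "D", "Eb", "F", "G", "Ab", "Bb"]
C_HARMONIC_MINOR = ["C", "D", "Eb", "F", "G", "Ab", "B"]

def stack(scale, root_index, size):
    """Stack `size` chord tones, skipping one scale degree each time."""
    return [scale[(root_index + 2 * i) % len(scale)] for i in range(size)]

print(stack(C_NATURAL_MINOR, 4, 3))   # ['G', 'Bb', 'D']      -> G minor
print(stack(C_HARMONIC_MINOR, 4, 3))  # ['G', 'B', 'D']       -> G major
print(stack(C_HARMONIC_MINOR, 4, 4))  # ['G', 'B', 'D', 'F']  -> G7
```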

“Ultralight Beam” has a nice chord progression, but that isn’t its most distinctive feature. The thing that jumps out most immediately is the unusual beat. Nearly all hip-hop is in 4/4 time, where each measure is divided into four beats, and each of those four beats is subdivided into four sixteenth notes. “Ultralight Beam” uses 12/8 time, which was prevalent in the first half of the twentieth century but is a rarity now. Each measure still has four beats in it, but each beat is subdivided into three eighth notes rather than four sixteenth notes.

four-four vs twelve-eight
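
If it helps to see the arithmetic, here’s a rough sketch of the two grids, assuming a tempo of 60 beats per minute so that each beat lasts exactly one second:

```python
BEAT_SECONDS = 1.0

# 4/4: four beats, each split into four sixteenth notes (16 slots per bar).
four_four = [b + i / 4 * BEAT_SECONDS for b in range(4) for i in range(4)]

# 12/8: still four beats, but each split into three eighth notes (12 slots).
twelve_eight = [b + i / 3 * BEAT_SECONDS for b in range(4) for i in range(3)]

print(len(four_four), len(twelve_eight))        # 16 12
print([round(t, 2) for t in twelve_eight[:6]])  # [0.0, 0.33, 0.67, 1.0, 1.33, 1.67]
```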

The track states this rhythm very obliquely. The drum track consists almost entirely of silence. The vocals and other instruments skip lightly around the beat. Chance The Rapper’s verse in particular pulls against the meter in all kinds of complex ways.

The song’s structure is unusual too, a wide departure from the standard “verse-hook-verse-hook”.

Ultralight Beam song structure

The intro is six bars long: two bars of ambient voices, then four bars over the chord progression. The song proper begins with just the first half of the chorus (known in hip-hop circles as the hook). Kanye has an eight-bar verse, followed by the first full chorus. Kelly Price gets the next eight-bar verse. So far, so typical. But then, where you expect the next chorus, The-Dream gets his four-bar verse, followed by Chance The Rapper’s ecstatic sixteen-bar verse. Next is what feels like the last chorus, but that’s followed by Kirk Franklin’s four-bar verse, and then a four-bar outro with just the choir singing haunting single words. It’s strange, but it works. Say what you want about Kanye as a public figure, but as a musician, he is in complete control of his craft.

Inside the aQWERTYon

The MusEDLab and Soundfly just launched Theory For Producers, an interactive music theory course. The centerpiece of the interactive component is a MusEDLab tool called the aQWERTYon. You can try it by clicking the image below.

aQWERTYon screencap

In this post, I’ll talk about why and how we developed the aQWERTYon.

One of our core design principles is to work within our users’ real-world technological limitations. We build tools in the browser so they’ll be platform-independent and accessible anywhere there’s internet access (and where there isn’t internet access, we’ve developed the “MusEDLab in a box.”) We want to find out what musical possibilities there are in a typical computer with no additional software or hardware. That question led us to investigate ways of turning the standard QWERTY keyboard into a beginner-friendly instrument. We were inspired in part by GarageBand’s Musical Typing feature.

GarageBand musical typing

If you don’t have a MIDI controller, Apple thoughtfully made it possible for you to use your computer keyboard to play GarageBand’s many software instruments. You get an octave and a half of piano, plus other useful controls: pitch bend, modulation, sustain, octave shifting and simple velocity control. Many DAWs offer something similar, but Apple’s system is the most sophisticated I’ve seen.

Handy though it is, Musical Typing has some problems as a user interface. The biggest one is the poor fit between the piano keyboard layout and the grid of computer keys. Typing the letter A plays the note C. The rest of that row is the white keys, and the one above it is the black keys. You can play the chromatic scale by alternating A row, Q row, A row, Q row. That basic pattern is easy enough to figure out. However, you quickly get into trouble, because there’s no black key between E and F. The QWERTY keyboard gives no visual reminder of that fact, so you just have to remember it. Unfortunately, the “missing” black key happens to be the letter R, which is GarageBand’s keyboard shortcut for recording. So what inevitably happens is that you’re hunting for E-flat or F-sharp and you accidentally start recording over whatever you were doing. I’ve been using the program for years and still do this routinely.
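
For the curious, here’s my reconstruction of that mapping as a couple of Python dictionaries; it illustrates the layout described above, but it is not Apple’s code:

```python
# White keys sit on the A row, black keys on the Q row, and the letter R,
# where the nonexistent black key between E and F would fall, is left out
# because GarageBand uses it to start recording.
WHITE_KEYS = dict(zip("ASDFGHJK", ["C", "D", "E", "F", "G", "A", "B", "C"]))
BLACK_KEYS = dict(zip("WETYU", ["C#", "D#", "F#", "G#", "A#"]))

def note_for(key):
    return WHITE_KEYS.get(key) or BLACK_KEYS.get(key)

print(note_for("A"))  # C
print(note_for("R"))  # None: no note here, just an accidental recording
```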

Rather than recreating the piano keyboard on the computer, we drew on a different metaphor: the accordion.

The accordion: the user interface metaphor of the future!

We wanted to have chords and scales arranged in an easily discoverable way, like the way you can easily figure out the chord buttons on the accordion’s left hand. The QWERTY keyboard is really a staggered grid four keys tall and between ten and thirteen keys wide, plus assorted modifier and function keys. We decided to use the columns for chords and the rows for scales.

For the diatonic scales and modes, the layout is simple. The bottom row gives the notes in the scale starting on scale degree 1. The second row has the same scale shifted over to start on scale degree 3. The third row starts the scale on scale degree 5, and the top row starts on scale degree 1 an octave up. If this sounds confusing when you read it, try playing it; your ears will immediately pick up the pattern. Notes in the same column form the diatonic chords, with their roman numerals conveniently matching the number keys. There are no wrong notes, so even just mashing keys at random will sound at least okay. Typing your name usually sounds pretty cool, and picking out melodies is a piece of cake. Playing diagonal columns, like Z-S-E-4, gives you chords voiced in fourths. The same layout approach works great for any seven-note scale: all of the diatonic modes, plus the modes of harmonic and melodic minor.
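
Here’s a minimal sketch of that layout logic in MIDI note numbers; the helper names and the eight-key row width are my own assumptions, not the aQWERTYon’s actual implementation:

```python
# Rows begin the scale on degrees 1, 3, 5, and 1-an-octave-up; each column
# then spells a diatonic chord. Middle C = MIDI note 60.
C_MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitones above the root
ROW_STARTS = [0, 2, 4, 7]               # scale-degree offsets for the four rows

def scale_note(root_midi, steps, degree):
    octave, index = divmod(degree, len(steps))
    return root_midi + 12 * octave + steps[index]

def layout(root_midi=60, steps=C_MAJOR_STEPS, width=8):
    return [[scale_note(root_midi, steps, start + col) for col in range(width)]
            for start in ROW_STARTS]

grid = layout()
print([row[0] for row in grid])  # [60, 64, 67, 72]: column 1 spells a C major chord
print([row[1] for row in grid])  # [62, 65, 69, 74]: column 2 spells D minor, and so on
```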

Pentatonics work pretty much the same way as seven-note scales, except that the columns stack in fourths rather than fifths. The octatonic and diminished scales lay out easily as well. The real layout challenge lay in one strange but crucial exception: the blues scale. Unlike other scales, you can’t just stagger the blues scale pitches in thirds to get meaningful chords. The melodic and harmonic components of blues are more or less unrelated to each other. Our original idea was to put the blues scale on the bottom row of keys, and then use the others to spell out satisfying chords on top. That made it extremely awkward to play melodies, however, since the keys don’t form an intelligible pattern of intervals. Our compromise was to create two different blues modes: one with the chords, for harmony exploration, and one just repeating the blues scale in octaves for melodic purposes. Maybe a better solution exists, but we haven’t figured it out yet.

When you select a different root, all the pitches in the chords and scales are automatically changed as well. Even if the aQWERTYon had no other features or interactivity, this would still make it an invaluable music theory tool. But root selection raises a bigger question: what do you do about all the real-world music that uses more than one scale or mode? Totally uniform modality is unusual, even in simple pop songs. You can access notes outside the currently selected scale by using the shift keys, which transpose the entire keyboard up or down a half step. But it would be even better if the scale settings could change dynamically. Imagine listening to a jazz tune with the scale always set to match whatever chord was going by at that moment: you could blow over complex changes effortlessly. We’ve discussed manually placing markers in YouTube videos that tell the aQWERTYon when to change its settings, but that would be labor-intensive. We’re hoping to discover an algorithmic method for placing markers automatically.
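
To make the marker idea concrete, here’s a purely hypothetical sketch; the data format and the settings_at() function are invented for illustration:

```python
# A list of timestamped scale changes that a player could follow along with
# a video. These particular roots and modes are made-up example data.
MARKERS = [
    (0.0,  ("C", "dorian")),
    (16.0, ("F", "mixolydian")),
    (24.0, ("Bb", "lydian")),
]

def settings_at(seconds, markers=MARKERS):
    """Return the most recent (root, scale) marker at or before `seconds`."""
    current = markers[0][1]
    for time, setting in markers:
        if time <= seconds:
            current = setting
    return current

print(settings_at(18.5))  # ('F', 'mixolydian')
```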

The other big design challenge we face is how to present all the different scale choices in a way that doesn’t overwhelm our core audience of non-expert users. One solution would just be to limit the scale choices. We already do that in the Soundfly course, in effect; when you land on a lesson, the embedded aQWERTYon is preset to the appropriate scale and key, and the user doesn’t even see the menus. But we’d like people to be able to explore the rich sonic diversity of the various scales without confronting them with technical Greek terms like “Lydian dominant”. Right now, the scales are categorized as Major, Minor and Other, but those terms aren’t meaningful to beginners. We’ve been discussing how we could organize the scales by mood or feeling, maybe from “brightest” to “darkest.” But how do you assign a mood to a scale? Do we just do it arbitrarily ourselves? Crowdsource mood tags? Find some objective sorting method that maps onto most listeners’ subjective associations? Some combination of the above? It’s an active area of research for us.
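
For what it’s worth, here’s one candidate for an objective sorting method, offered only as an illustration and not as anything the aQWERTYon currently does: score each mode by the sum of its semitone offsets from the root, which happens to order the modes of the major scale from Lydian (brightest) down to Locrian (darkest).

```python
# Raised degrees push the score up ("brighter"); lowered degrees pull it
# down ("darker").
MODES = {
    "Lydian":     [0, 2, 4, 6, 7, 9, 11],
    "Ionian":     [0, 2, 4, 5, 7, 9, 11],
    "Mixolydian": [0, 2, 4, 5, 7, 9, 10],
    "Dorian":     [0, 2, 3, 5, 7, 9, 10],
    "Aeolian":    [0, 2, 3, 5, 7, 8, 10],
    "Phrygian":   [0, 1, 3, 5, 7, 8, 10],
    "Locrian":    [0, 1, 3, 5, 6, 8, 10],
}

for name in sorted(MODES, key=lambda m: sum(MODES[m]), reverse=True):
    print(name, sum(MODES[name]))  # Lydian 39 down through Locrian 33
```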

This issue of categorizing scales by mood has relevance for the original use case we imagined for the aQWERTYon: teaching film scoring. The idea behind the integrated video window was that you would load a video clip, set a mode, and then improvise some music that fit the emotional vibe of that clip. The idea of playing along with YouTube videos of songs came later. One could teach more general open-ended composition with the aQWERTYon, and in fact our friend Matt McLean is doing exactly that. But we’re attracted to film scoring as a gateway because it’s a more narrowly defined problem. Instead of just “write some music”, the challenge is “write some music with a particular feeling to it that fits into a scene of a particular length.”

Would you like to help us test and improve the aQWERTYon, or to design curricula around it? Would you like to help fund our programmers and designers? Please get in touch.

Theory for Producers

I’m delighted to announce the launch of a new interactive online music course called Theory for Producers: The Black Keys. It’s a joint effort by Soundfly and the NYU MusEDLab, representing the culmination of several years’ worth of design and programming. We’re super proud of it.

Theory for Producers: The Black Keys

The course makes the abstractions of music theory concrete by presenting them in the form of actual songs you’re likely to already know. You can play and improvise along with the examples right in the web browser using the aQWERTYon, which turns your computer keyboard into an easily playable instrument. You can also bring the examples into programs like Ableton Live or Logic for further hands-on experimentation. We’ve spiced up the content with videos and animations, along with some entertaining digressions into the Stone Age and the auditory processing abilities of frogs.

So what does it mean that this is music theory for producers? We’re organizing the material in a way that’s easiest and most relevant to people using computers to create the dance music of the African diaspora: techno, hip-hop, and their various pop derivatives. This music carries most of its creative content outside of harmony: in rhythm, timbre, and repetitive structure. The harmony is usually static, sitting on a loop of a few chords or just a single mode. Alongside the standard (Western) major and minor scales, you’re just as likely to encounter more “exotic” (non-Western) sounds.

Music theory classes and textbooks typically begin with the C major scale, because it’s the easiest scale to represent and read in music notation. However, C major is not necessarily the most “basic” or fundamental scale for our intended audience. Instead, we start with E-flat minor pentatonic, otherwise known as the black keys on the piano. The piano metaphor is ubiquitous both in electronic music hardware and software, and pentatonics are even easier to play on piano than diatonic scales. E-flat minor pentatonic is more daunting in notated form than C major, but since dance and hip-hop producers tend not to be able to read music anyway, that’s no obstacle. And if producers want to use keys other than E-flat minor (or G-flat major), they can keep playing the black keys and then transpose the MIDI later.
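
Here’s a tiny sketch of that workflow in MIDI note numbers; the note list and the transpose() helper are mine, for illustration only:

```python
# E-flat minor pentatonic is just the five black keys.
EB_MINOR_PENTATONIC = [63, 66, 68, 70, 73]  # Eb4, Gb4, Ab4, Bb4, Db5 in MIDI

def transpose(notes, semitones):
    """Shift a recorded MIDI phrase into another key after the fact."""
    return [note + semitones for note in notes]

# Move an E-flat minor lick down three semitones to land in C minor.
print(transpose(EB_MINOR_PENTATONIC, -3))  # [60, 63, 65, 67, 70]
```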

The Black Keys is just the first installment in Theory For Producers. Next, we’ll do The White Keys, otherwise known as the modes of C major. We’re planning to start that course not with C major itself, but with G Mixolydian mode, because it’s a more familiar sound in Afrodiasporic music than straight major. After that, we’ll do a course about chords, and one about rhythm. We hope you sign up!

Update: oh hey, we’re on Lifehacker

Teaching reflections

Here’s what happened in my life as an educator this past semester, and what I have planned for the coming semester.

Montclair State University Intro To Music Technology

I wonder how much longer “music technology” is going to exist as a subject. They don’t teach “piano technology” or “violin technology.” It makes sense to teach specific areas like audio recording or synthesis or signal theory as separate classes. But “music technology” is such a broad term as to be meaningless. The unspoken assumption is that we’re teaching “musical practices involving a computer,” but even that is both too big and too small to structure a one-semester class around. On the one hand, every kind of music involves computers now. On the other hand, to focus just on the computer part is like teaching a word processing class that’s somehow separate from learning how to write.

MSU Intro to Music Tech

The newness and vagueness of the field of study gives me and my fellow music tech educators wide latitude to define our subject matter. I see my job as providing an introduction to pop production and songwriting. The tools we use for the job at Montclair are mostly GarageBand and Logic, but I don’t spend a lot of time on the mechanics of the software itself. Instead, I teach music: How do you express yourself creatively using sample libraries, or MIDI, or field recordings, or pre-existing songs? What kinds of rhythms, harmonies, timbres and structures make sense aesthetically when you’re assembling these materials in the DAW? Where do you get ideas? How do you listen to recorded music analytically? Why does Thriller sound so much better than any other album recorded in the eighties? We cover technical concepts as they arise in the natural course of producing and listening. My hope is that they’ll be more relevant and memorable that way.

Having now taught three semesters of Intro to Music Tech at MSU, my format is starting to gel. The students spend most of the semester creating tracks. They do one using only the loops that come with GarageBand, one using only MIDI and software instruments, one that includes a field recording they made with their phones, and so on. I started having them remix each other’s tracks this past semester, and it was such a smash hit that I’m going to have future classes do a whole series of peer remixes.

Montclair is a fairly traditional conservatory. For many students, my class is the only time in their college careers they get to make music according to their own sensibilities and tastes. It’s also usually the only time they engage critically with recordings, or electronic dance music, or hip-hop, or pop song forms, or sampling, or mixing and audio processing. I’m glad to be able to fill these vacuums, but I wish I had more than one semester to do it in.

Aside from creative music-making, the students do a couple of presentations, one on a song they think is interesting, and one on a topic of their choice. They also write blog posts about the process of creating their tracks. This last assignment is a persistent obstacle, since no one seems to share my enthusiasm for process documentation. Next semester I’m going to try introducing some of the cooperative/competitive spirit of the peer remixes by having them write reviews of each other’s tracks. Maybe that will get them to invest their writing with the same creativity they put into the music assignments.

Montclair State Advanced Computer Music Composition

This past fall I got to teach my first advanced class, and it went amazingly well. We used Ableton Live, my DAW of choice, and the guys (it was all guys) banged out tracks at a rapid clip for the entire semester. As with the intro class, I spent most of the time on the creative process, and dealt with Ableton functionality and audio engineering topics as they came up.

Tristan gets his FFT on

Each assignment came with some kind of tight technical restriction, but no stylistic restrictions. As with the intro class, the advanced dudes did tracks using only existing loops, only MIDI, and found sound. They did peer remixing and self remixing as well. The two hardest and most interesting assignments were to create a new track using only samples of an existing track, and then to create a new track using only a single five-second Duke Ellington sample. (These assignments were inspired heavily by the Disquiet Junto.) The more tightly I constrained the students, the more ingenuity they displayed. Listen for yourself:

As with the intro class, I tried to have the advanced dudes document their process with blog posts. As with the intro class, they showed zero interest. In the future, I’ll have to get more creative with the writing component. Also, I’d like to not have the class be entirely male.

NYU Music Education Technology Practicum

This class is meant to be a grounding in music tech for future music teachers. I’m even more time-constrained at NYU than at Montclair, and I teach in a regular classroom rather than a computer lab. While my class time at Montclair is mostly devoted to music-making, at NYU I’m forced to do more lectures, demos and listening sessions. It is very far from ideal. I have no idea how NYU can charge so much money without offering such a basic-seeming amenity as a room with computers in it for the music students. However, NYU does have one advantage over Montclair as a teaching environment, which is that I can hold a couple of class sessions in an extremely fancy recording studio.

Catherine and Joseph in the Dolan Studio

I mostly take the same approach at NYU as I do at Montclair, and use most of the same assignments. The major difference is that the NYU kids do a critical listening project, where they pick a recording and graph out its musical structure and spatial layout. It’s a difficult exercise, but an invaluable one. I did it in grad school, and it improved my analytical listening abilities significantly. We used to do the same assignment at Montclair, but the students were really not into it, like to the point of refusing to do it, so sadly we had to drop it from the syllabus. I hope we can find a way to reinstate it.

This past semester, the majority of my NYU kids were music business majors, which was pretty great. They came in with less musical experience than the education majors–sometimes with none at all–but they had less to unlearn, and they threw themselves confidently into producing tracks. This coming semester I have a bunch more music business kids. I’m attracting them because my class is the only one at Steinhardt that does intro-level creative music making in the pop idiom. I’m clearly filling a vacuum, and I’m hoping that I’m just the thin edge of the wedge, both for my own sake and the future music educators of NYU.

Interface designs

The NYU Music Experience Design Lab is baking education into a suite of creative music making and learning tools. As my friend and colleague Adam Bell likes to say, purchasers of a computer are purchasing a music education. We’re trying to make that education a better and more enjoyable one, whether our users are in formal classroom settings or playing around on their own. You can read about the lab’s various projects here. My own contributions are largely conceptual, though I’ve also devoted a lot of attention to making useful and inspiring presets.

Cold Sweat on the Groove Pizza

The Ed Sullivan Fellows Program

This winter, the MusEDLab is launching a brand new initiative, mentoring a group of young people from challenging circumstances in music and technology. I’ll be teaching the music side, doing a custom-tailored version of my intro class syllabus. Sullivan Fellows will also work with my colleagues in the lab on programming and design projects. This summer, we’ll have a showcase event as part of the 2016 IMPACT Conference. The goal is to help the Fellows get launched in careers in music and/or technology. I’ll be writing a lot more about this in the coming weeks.

Online courses with Soundfly

The MusEDLab is working with a music ed startup on some new interactive online courses. The first is called Music Theory For Bedroom Producers, and we expect to launch next month. I wrote a lot of the materials, and am appearing in some videos. Soundfly has ace designers, animators and programmers, so expect a rich multimedia experience. More on this as it gets closer.

Everything else

For the past few years, I’ve been a teaching artist with NYU’s IMPACT workshop. Below, you can see some participants making beats on an iPad. The workshop is a crash course not just in music, but in theater, dance, video, and the intersection of all of the above. I’m still very much figuring out my role in the whole thing, but so is everyone involved.

Mobile music at IMPACT

I continue to teach private lessons, do freelance production and composition, do some consulting, write for online publications, and generally keep hustling for gigs. If you’d like to have me do any of these things, be in touch.

Music education at the grownups’ table

I was asked by Alison Armstrong to comment on this Time magazine op-ed by Todd Stoll, the vice president of education at Jazz at Lincoln Center. Before I do, let me give some context: Todd Stoll is a friend and colleague of Wynton Marsalis, and he shares some of Wynton’s ideas about music.

Wynton Marsalis

Wynton Marsalis has some strong views about jazz, its historical significance, and its present condition. He holds jazz to be “America’s classical music,” the highest achievement of our culture, and the sonic embodiment of our best democratic ideals. The man himself is a brilliant practitioner of the art form. I’ve had the pleasure of hearing him play live several times, and he’s always a riveting improvisor. However, his definition of the word “jazz” is a narrow one. For Wynton Marsalis, jazz history ends in about 1965, right before Herbie Hancock traded in his grand piano for a Fender Rhodes. All the developments after that–the introduction of funk, rock, pop, electronic music, and hip-hop–are bastardizations of the music.

Wynton Marsalis’ public stature has given his philosophy enormous weight, which has been a mixed bag for jazz culture. On the one hand, he has been a key force in getting jazz the institutional recognition that it was denied for too many years. On the other hand, the form of jazz that Wynton advocates for is a museum piece, a time capsule of the middle part of the twentieth century. When jazz gained the legitimacy of “classical music,” it also became burdened with classical music’s stuffiness, pedantry, and disconnection from the broader culture. As the more innovative jazz artists try to keep pace with the world, they can find themselves more hindered by Wynton than helped.

So, with all that in mind, let’s see what Todd Stoll has to say about the state of music education in America.

No Child Left Behind, the largest attempt at education reform in our nation’s history, resulted in a massive surge in the testing of our kids and an increased focus in “STEM” (science, technology, engineering and math). While well-meaning, this legislation precipitated a gradual and massive decline of students participating in music and arts classes, as test prep and remedial classes took precedence over a broader liberal arts education, and music education was often reduced, cut, or relegated to after school.

Testing culture is a Bad Thing, no question there.

Taken on face value, Every Student Succeeds bodes well for music education and the National Association for Music Education, which spent thousands of hours lobbying on behalf of music teachers everywhere. The new act removes “adequate yearly progress” benchmarks and includes music and arts as part of its definition of a “well-rounded education.” It also refers to time spent teaching music and arts as “protected time.”

That is a Good Thing.

Music and arts educators now have some leverage for increased funding, professional development, equipment, staffing, prioritized scheduling of classes, and a more solid foothold when budgets get tight and cuts are being discussed. I can almost hear the discussions—”We can’t cut a core class now, can we?” In other words, music is finally at the grown-ups table with subjects like science, math, social studies and language arts.

Yes! Great. But how did music get sent to the kids’ table in the first place? How did we come to regard it as a luxury, or worse, a frivolity? How do we learn to value it more highly, so the next time that a rage for quantitative assessment sweeps the federal government, we won’t go through the same cycle all over again?

Now that we’re at the table, we need a national conversation to redefine the depth and quality of the content we teach in our music classes. We need a paradigm shift in how we define outcomes in our music students. And we need to go beyond the right notes, precise rhythms, clear diction and unified phrasing that have set the standard for the past century.

True. The standard music curriculum in America is very much stuck in the model of the nineteenth century European conservatory. There’s so much more we could be doing to awaken kids’ innate musicality.

We should define learning by a student’s intimate knowledge of composers or artists—their personal history, conception and the breadth and scope of their output.

Sure! This sounds good.

Students should know the social and cultural landscape of the era in which any piece was written or recorded, and the circumstances that had an influence.

Stoll is referring here to the outdated notion of “absolute music,” the idea that the best music is “pure,” that it transcends the grubby world of politics and economics and fashion. We definitely want kids to know that music comes from a particular time and place, and that it responds to particular forces and pressures.

We should teach the triumphant mythology of our greatest artists—from Louis Armstrong to Leonard Bernstein, from Marian Anderson to Mary Lou Williams, and others.

Sure, students should know who black and female and Jewish musicians are. Apparently, however, our greatest artists all did their work before 1965.

Students should understand the style and conception of a composer or artist—what are the aesthetics of a specific piece, the notes that have meaning? They should know the influences and inputs that went into the creation of a piece and how to identify those.

Very good idea. I’m a strong believer in the evolutionary biology model of music history. Rather than doing a chronological plod through the Great Men (and now Women), I like the idea of picking a musical trope and tracing out its family tree.

There should be discussion of the definitive recording of a piece, and students should make qualitative judgments on such against a rubric defined by the teacher that easily and broadly gives definition and shape to any genre.

The Wynton Marsalis version of jazz has turned out to be a good fit for academic culture, because there are Canonical Works by Great Masters. In jazz, the canonical work is a recording rather than a score, but the scholarly approach can be the same. This model is problematic for an improvised, largely aural, and dance-oriented tradition like jazz, to say the least, but it is progress to be talking about recording as an art form unto itself.

Selected pieces should illuminate the general concepts of any genre—the 6/8 march, the blues, a lyrical art song, counterpoint, AABA form, or call and response—and students should be able to understand these and know their precise location within a score and what these concepts represent.

Okay. Why? I mean, these are all fine things to learn and teach. But they only become meaningful through use. A kid might rightly question whether their knowledge of lyrical art song or AABA form has anything to do with anything. Once a kid tries writing a song, these ideas suddenly become a lot more pertinent.

We should embrace the American arts as a full constituent in our programs—not the pop-tinged sounds of The Voice or Glee but our music: blues, folk, spirituals, jazz, hymns, country and bluegrass, the styles that created the fabric of our culture and concert works by composers who embraced them.

This is where Stoll and I part company. Classical pedagogues have earned a bad reputation for insisting that kids like the wrong music. Stoll is committing the same sin here. Remember, kids: Our Music is not your music. You are supposed to like blues, folk, spirituals, jazz, hymns, country and bluegrass. Those are the styles that created the fabric of our culture. And they inspired concert works by composers, so that really makes them legit. Music that was popular in your lifetime, or your parents’ lifetime, is suspect.

Students should learn that the written score is a starting point. It’s the entry into a world of discovery and aspiration that can transform their lives; it’s deeper than notes. We should help them realize that a lifetime of discovery in music is a worthwhile and enjoyable endeavor.

Score-centrism is a bad look from anyone, and it’s especially disappointing from a jazz guy. What does this statement mean to a kid immersed in rock or hip-hop, where nothing is written down? The score should be presented as what it is: one starting point among many. You can have a lifetime of discovery in music without ever reading a note. I believe that notation is worth teaching, but it’s worth teaching as a means to an end, not as an end unto itself.

These lessons will require new skills, extra work outside of class, more research, and perhaps new training standards for teachers. But, it’s not an insurmountable task, and it is vital, given the current strife of our national discourse.

If we can agree on the definitive recording of West Side Story, we can bridge the partisan divide!

Our arts can help us define who we are and tell us who we can be. They can bind the wounds of racism, compensate for the scourge of socio-economic disadvantage, and inoculate a new generation against the fear of not knowing and understanding those who are different from themselves.

I want this all to be true. But there is some magical thinking at work here, and magical thinking is not going to help us when budgets get cut. I want the kids to have the opportunity to study Leonard Bernstein and Marian Anderson. I’d happily toss standardized testing overboard to free up the time and resources. I believe that doing so will result in better academic outcomes. And I believe that music does make better citizens. But how does it do that? Saying that we need school music in order to instill Reverence for the Great Masters is weak sauce, even if the list of Great Masters now has some women and people of color on it. We need to be able to articulate specifically why music is of value to kids.

I believe that we have a good answer already: the point of music education should be to build emotionally stronger people. Done right, music promotes flow, deep attention, social bonding, and resilience. As Steve Dillon puts it, music is “a powerful weapon against depression.” Kids who are centered, focused, and able to regulate their moods are going to be better students, better citizens, and (most importantly!) happier humans. That is why it’s worth using finite school resources to teach music.

The question we need to ask is: what methods of music education best support emotional development in kids? I believe that the best approach is to treat every kid as a latent musician, and to help them develop as such, to make them producers rather than consumers. If a kid’s musicality can be nurtured best through studying jazz, great! That approach worked great for me, because my innermost musical self turns out to have a lot of resonance with Ellington and Coltrane. If a kid finds meaning in Beethoven, also great. But if the key to a particular kid’s lock is hip-hop or trance or country, music education should be equipped to support them too. Pointing young people to music they might otherwise miss out on is a good idea. Stifling them under the weight of a canon is not.

Space Oddity: from song to track

If you’ve ever wondered what it is that a music producer does exactly, David Bowie’s “Space Oddity” is a crystal clear example. To put it in a nutshell, a producer turns this:

Into this:

It’s also interesting to listen to the first version of the commercial recording, which is better than the demo, but still nowhere near as majestic as the final version. The Austin Powers flute solo is especially silly.

Should we even consider these three recordings to be the same piece of music? On the one hand, they’re all the same melody and chords and lyrics. On the other hand, if the song only existed in its demo form, or in the awkward Austin Powers version, it would never have made the impact that it did. Some of the impact of the final version lies in better recording techniques and equipment, but it’s more than that. The music takes on a different meaning in the final version. It’s bigger, trippier, punchier, tighter, more cinematic, more transporting, and in general about a thousand times more effective.

The producer’s job is to marshal the efforts of songwriters, arrangers, performers and engineers to create a good-sounding recording. (The producer might also be a songwriter, arranger, performer, and/or engineer.) Producers are to songs what directors are to movies, or showrunners are to television.

When you’re thinking about a piece of recorded music, you’re really talking about three different things:

  1. The underlying composition, the part that can be represented on paper. Albin Zak calls this “the song.”
  2. The performance of the song.
  3. The finished recording, after overdubbing, mixing, editing, effects, and all the rest. Albin Zak calls this “the track.”

I had always assumed that Tony Visconti produced “Space Oddity,” since he produced a ton of other Bowie classics. As it turns out, though, Visconti was underwhelmed by the song, so he delegated it to his assistant, Gus Dudgeon. So what is it that Gus Dudgeon did precisely? First let’s separate out what he didn’t do.

You can hear from the demo that the chords, melody and lyrics were all in place before Bowie walked into the studio. They’re the parts reproduced by the subway busker I heard singing “Space Oddity” this morning. The demo includes a vocal arrangement that’s similar to the final one, aside from some minor phrasing changes. The acoustic guitar and Stylophone are in place as well. (I had always thought it was an oboe, but no, that droning sound is a low-tech synth.)

Gus Dudgeon took a song and a partial arrangement, and turned it into a track. He oversaw the addition of electric guitar, bass, drums, strings, woodwinds, and keyboards. He coached Bowie and the various studio musicians through their performances, selected the takes, and decided on effects like echoes and reverb. He supervised the mixing, which not only sets the relative loudness of the various sounds, but also affects their perceived location and significance. In short, he designed the actual sounds that you hear.

If you want to dive deep into the track, you’re in luck, because Bowie officially released the multitrack stems. Some particular points of interest:

  • The bassist, Herbie Flowers, was a rookie. The “Space Oddity” session was his first. He later went on to create the staggeringly great dual bass part in Lou Reed’s “Walk On The Wild Side.”
  • The strings were arranged and conducted by the multifaceted Paul Buckmaster, who a few years later would work with Miles Davis on the conception of On The Corner. Buckmaster’s cello harmonics contribute significantly to the psychedelic atmosphere–listen to the end of the stem labeled “Extras 1.”
  • The live strings are supplemented by Mellotron, played by future Yes keyboardist Rick Wakeman, he of the flamboyant gold cape.
  • Tony Visconti plays some flute and unspecified woodwinds, including the distinctive saxophone run that leads into the instrumental sections.

You can read a detailed analysis of the recording on the excellent Bowiesongs blog.

The big difference between the sixties and the present is that the track has assumed ever-greater importance relative to the song and the performance. In the age of MIDI and digital audio editing, live performance has become a totally optional component of music. The song is increasingly inseparable from the sounds used to realize it, especially in synth-heavy music like hip-hop and EDM. This shift gives the producer ever-greater importance in the creative process. There is really no such thing as a “demo” anymore, since anyone with a computer can produce finished-sounding tracks in their bedroom. If David Bowie were a kid now, he’d put together “Space Oddity” in GarageBand or FL Studio, with a lavish soundscape part of the conception from the beginning.

I want my students to understand that the words “producer” and “musician” are becoming synonymous. I want them to know that they can no longer focus solely on composition or performance and wait for someone else to craft a track around them. The techniques used to make “Space Oddity” were esoteric and expensive to realize at the time. Now, they’re easily within reach. But while the technology is more accessible, you still have to have the ideas. This is why it’s so valuable to study great producers like Tony Visconti and Gus Dudgeon: they’re a goldmine of sonic inspiration.

See also: a broader appreciation of Bowie.