My first set of attempts at controllerism used samples of the Beatles and Michael Jackson. For the next round, I thought it would be good to try to create something completely from scratch. So this is my first piece of music created specifically with controllerism in mind.
The APC40 has forty trigger pads. You can use more than forty loops, but it’s a pain. I created eight loops that fit well together, then made four additional variations of each one, for forty clips that fit tidily onto the APC40 grid. The instruments are an 808 drum machine, Latin percussion, wood blocks, blown tube, synth bass, bells, arpeggiated synth and an ambient pad.
I tried to design my loops so that all of them would be mutually musically compatible. I didn’t systematically test them, because that would have required trying thousands of combinations. Instead, I decided to randomly generate a song using Ableton’s Follow Actions to see if anything obviously unmusical leapt out at me. The first attempt was not a success — hearing all eight loops all the time was too much information. I needed a way to introduce some space. Eventually I hit on the idea of adding empty clips to each column that would be randomly sprinkled in.
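The randomized arrangement idea can be simulated in a few lines of code. This is a toy sketch of the process described above, not Ableton's actual Follow Actions engine; the track names come from the post, while the weighting of empty clips is my own guess.

```python
import random

# Toy simulation of a randomized arrangement: eight tracks, each with
# five clips (the original loop plus four variations), plus weighted
# empty clips to sprinkle in some space.

TRACKS = ["808", "Latin perc", "wood blocks", "blown tube",
          "synth bass", "bells", "arp synth", "pad"]
CLIPS_PER_TRACK = 5   # original loop + four variations
EMPTY_WEIGHT = 2      # extra chances of silence per track (my guess)

def random_scene():
    """Pick one clip (or silence, as None) per track for one section."""
    choices = list(range(CLIPS_PER_TRACK)) + [None] * EMPTY_WEIGHT
    return {track: random.choice(choices) for track in TRACKS}

def random_song(sections=16):
    """Generate a whole arrangement as a list of scenes."""
    return [random_scene() for _ in range(sections)]

song = random_song()
for i, scene in enumerate(song[:4]):
    playing = [t for t, clip in scene.items() if clip is not None]
    print(f"section {i}: {len(playing)} of {len(TRACKS)} tracks playing")
```

Raising `EMPTY_WEIGHT` thins out the texture, which is exactly the "space" problem the empty clips solved.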
It is exceptionally relaxing watching a song write itself while you sit there drinking coffee.
The result was a mix of pleasing and not-so-pleasing. I edited the random sequence into a more coherent shape:
Even with my editing, the result was not too hot. But it was useful to have something to react against. Finally, with all the prep behind me, it was time to play all this stuff live on the APC. Here’s the very first take of improv I did.
I let it sit for a couple of days while I was preoccupied with other things, and when I finally listened back, I was pleasantly surprised. Here it is, minimally edited:
The piece has a coherent shape, with lifts and lulls, peaks and valleys. It’s quite different from the way I’d structure a piece of music by my usual method of drawing loops on the screen. It’s less symmetrical and orderly, but it makes an intuitive sense of its own. I’ve been looking for a way to reconcile my love of jazz with my love of electronic dance music for many years now. I think I’ve finally found it. For my next controllerist opus, I’m going to blend samples and my own MIDI loops, and have more odd-length loops. And maybe I’ll play these things for an audience too.
My students are currently hard at work writing pop songs, many of them for the first time. For their benefit, and for yours, I thought I’d write out a beginner’s guide to contemporary songwriting. First, some points of clarification:
To make a track, you’ll need a digital audio workstation (DAW) and a loop library. I’ll be using GarageBand, but you can use the same methods in Ableton Live, Logic, Reason, Pro Tools, etc. I produced this track for illustration purposes, and will be referring to it throughout the post:
Put together four or eight bars’ worth of loops that all sound good together. Feel free to use the loops that come with your software; they’re probably a fine starting point. You can also generate your loops by recording instruments and singing, or by sequencing MIDI, or by sampling existing songs. Even if you aren’t working in an electronic medium, you can still gather loops: guitar parts, keyboard parts, bass riffs, drum patterns. Think of this set of loops as the highest-energy part of your song, the last chorus or what have you.
For my example track, I exclusively used GarageBand’s factory loops, which are mostly great if you tweak them a little. I selected a hip-hop beat, some congas, a shaker, a synth bass, some synth chords, and a string section melody. All of these loops are audio samples, except for the synth chord part, which is a MIDI sequence. I customized the synth part so that instead of playing the same chord four times, it makes a little progression that fits the bassline: I – I – bVI – bVII.
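That little progression can be spelled out in MIDI note numbers. The post doesn't name the key, so C is assumed here for illustration, and the bVI and bVII are taken as major triads (the usual reading of flat Roman numerals):

```python
# The I - I - bVI - bVII progression as MIDI triads, assuming the key
# of C (the original post doesn't specify a key).
NOTE = {"C": 60, "Ab": 56, "Bb": 58}   # middle C = MIDI 60

def major_triad(root):
    """Root, major third, perfect fifth."""
    return [root, root + 4, root + 7]

progression = ["C", "C", "Ab", "Bb"]   # I, I, bVI, bVII in C
chords = [major_triad(NOTE[name]) for name in progression]
print(chords)   # → [[60, 64, 67], [60, 64, 67], [56, 60, 63], [58, 62, 65]]
```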
I copied my set of loops fifteen times, so the whole tune is 128 bars long. It doesn’t matter at this point exactly how many times you copy everything, so long as you have three or four minutes’ worth of loops to work with. You can always copy and paste more material if you need to. GarageBand users: note that by default, the song length is set ridiculously short. You’ll need to drag the right edge of your song to give yourself enough room.
This is the hard part, and it’s where you do the most actual “songwriting.” Remember how I said that your set of loops was going to be the highest-energy part of the song? You’re going to create all of the other sections by removing stuff. Different subsets of your loop collection will form your various sections: intro, verses, choruses, breakdown, outro, and so on. These sections should probably be four, eight, twelve or sixteen bars long.
Here’s the structure I came up with on my first pass:
I made a sixteen-bar intro with the synth chords entering first, then the percussion, then the hip-hop drums. The entrance of the bass is verse one. The entrance of the strings is chorus one. For verse two, everything drops out except the drums, congas and bass. Chorus two is twice the length of chorus one, with the keyboard chords out for the first half. Then there’s a breakdown, eight bars of just the bass, and another eight of the bass and drums. Next, there are three more choruses, the first minus the keyboard chords again, the next two with everything (my original loop collection). Finally, there’s a long outro, with parts exiting every four or eight bars.
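One way to see this subtractive approach is as a table of sections, where each section is just a subset of the full loop collection. Here is that first-pass structure sketched as data; the section lengths not stated in the text (like the eight-bar choruses and the outro) are my guesses, chosen to add up to the 128 bars of copied loops:

```python
# The first-pass arrangement, sketched as data: each section is a
# subset of the full loop collection, with its length in bars.
ALL = {"drums", "congas", "shaker", "bass", "synth chords", "strings"}

structure = [
    ("intro",       16, {"synth chords", "congas", "shaker", "drums"}),
    ("verse 1",      8, ALL - {"strings"}),
    ("chorus 1",     8, ALL),
    ("verse 2",      8, {"drums", "congas", "bass"}),
    ("chorus 2a",    8, ALL - {"synth chords"}),
    ("chorus 2b",    8, ALL),
    ("breakdown 1",  8, {"bass"}),
    ("breakdown 2",  8, {"bass", "drums"}),
    ("chorus 3",     8, ALL - {"synth chords"}),
    ("chorus 4",     8, ALL),
    ("chorus 5",     8, ALL),
    ("outro",       32, {"strings"}),   # parts exit every four or eight bars
]

# every section only removes loops, never adds new ones
assert all(loops <= ALL for _, _, loops in structure)
print(sum(bars for _, bars, _ in structure))   # → 128
```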
Even experienced songwriters find structure difficult. I certainly do. Building your structure will likely require a lot of trial and error. For inspiration, I recommend analyzing the structure of songs you like, and imitating them. Here’s my collection of particularly interesting song structures. My main piece of advice here is to keep things repetitive. If the groove is happening, people will happily listen to it for three or four minutes with minimal variation.
Trained musicians frequently feel anxious that their song isn’t “interesting” enough, and work hard to pack it with surprises. That’s the wrong idea. Let your grooves breathe. Let the listener get comfortable. This is pop music, it should be gratifying on the first listen. If you feel like your song won’t work without all kinds of intricate musical drama, you should probably just find a more happening set of loops.
After leaving my song alone for a couple of days, some shortcomings leaped out at me. The energy was building and dissipating in an awkward, unsatisfying way, and the string part was too repetitive to carry the whole melodic foreground. I decided to rebuild the structure from scratch. I also added another loop, a simple guitar riff. I then cut both the string and guitar parts in half, so the front half of the string loop calls, and the back half of the guitar loop answers. This worked hugely better. Here’s the finished product, the one you hear above:
My final structure goes as follows: the intro is synth chords and guitar, quickly joined by the percussion, then the drum loop. Verse one adds the bass. Chorus one adds the strings, so now we’re at full power. Verse two is a dramatic drop in energy, just the conga and strings, joined halfway through by the drums. Chorus two adds the bass and guitar back in. The breakdown section is eight bars of drums and bass, then eight more bars adding in the strings and percussion. The drums and percussion drop out for a bar right at the end of the section to create some punctuation. Verse three is everything but the synth chords. Choruses three and four are everything. The outro is a staggered series of exits, rhythm section first, until the guitar and strings are left alone.
So there you have it. Once you’ve committed to your musical ideas, let your song sit for a few days and then go back and listen to it with an ear for mix and space. Try some effects, if you haven’t yet. Reverb and echo/delay always sound cool. Chances are your mix is going to be weak. My students almost always need to turn up their drums and turn down their melodic instruments. Try to push things to completion, but don’t make yourself crazy. Get your track to a place where it doesn’t totally embarrass you, put it on the web, and go start another one.
If reading this inspires you to make a track, please put a link to it in the comments, I’d love to hear it.
This month I’ve been teaching music production and composition as part of NYU’s IMPACT program. A participant named Michelle asked me to critique some of her original compositions. I immediately said yes, and then immediately wondered how I was actually going to do it. I always want to evaluate music on its own terms, and to do that, I need to know what the terms are. I barely know Michelle. I’ve heard her play a little classical piano and know that she’s quite good, but beyond that, I don’t know her musical culture or intentions or style. Furthermore, she’s from China, and her English is limited.
I asked Michelle to email me audio files, and also MIDI files if she had them. Then I had an epiphany: I could just remix her MIDIs, and give my critique totally non-verbally.
Michelle sent me three MIDI files that she had created with Cubase, and I imported them into Ableton. The first two pieces sounded like Chinese folk music arranged in a western pop-classical style, with a lot of major pentatonic scales. This is very far away from my native musical territory, and I didn’t want to challenge Michelle’s melodic or harmonic choices. Instead, I decided to start by replacing her instrument sounds with hipper ones. Cubase has reasonably good built-in sounds, but sampled orchestral instruments played via MIDI are always going to sound goofy. Unless your work is going to be performed by humans, it makes more sense to use synths that sound their best in a robotic context.
I took the most liberty with Michelle’s drum patterns, which I replaced with harder, funkier beats. Classical musicians don’t get a lot of exposure to Afrocentric rhythm. Symphonic percussion is mostly a tasteful background element, and the classical tribe tends to treat all drums that way. For the pop idiom, you want a strong beat in the foreground.
Michelle’s third track had more of a jazz-funk vibe, and now we were speaking my language. Once again, I replaced the orchestra sounds with groovy synths. I also replaced the entire complex percussion arrangement with a single sampled breakbeat. Then I dove into the parts to make them more idiomatic. I get the sense that conservatory students in Shanghai aren’t listening to a lot of James Brown. Michelle had written an intricately contrapuntal bassline, which was full of good ideas, but was way too linear and eventful to suit the style. I isolated a few nice hooks and looped them. The track started feeling a lot tighter and funkier, so I did some similar looping and simplification of horn parts. The goal was to keep Michelle’s very hip melodic ideas intact, but to present them in a more economical setting.
My remix chops are well honed through continual practice, and I think I was pretty successful in my interpretations of Michelle’s tracks. She agreed, and indicated her delight at hearing her music in this new light with many exclamation points in her email. That felt good.
Upon reflection, I’m realizing that all of my remixes have been a kind of compositional critique: “This part right here is really fresh, but have you considered putting this kind of beat underneath it? And what if we skip this part, and slow the whole thing down a little? How about we change the chords over here, and put a new ending on?” Usually I’m remixing the work of strangers, so the conversation is indirect, but it’s still taking place inside my head.
The remix technique solves a problem that’s bothered me for my entire music teaching life: how do you evaluate someone else’s creative work? There is no objective standard for judging the quality of music. All evaluation is a statement of taste. But as a teacher, you still want to make judgments. How do you do that when you’re just expressing differences in your arbitrary preferences?
One method for critiquing compositions is to harden your aesthetic whims into a dogmatic set of rules, and then apply them to everyone else. I studied jazz as an undergrad with Andy Jaffe. As far as Andy is concerned, all music aspires to the melodies of Duke Ellington, the rhythms of Horace Silver and the harmonies of John Coltrane. Fair enough, but my own tastes aren’t so tightly defined.
I like the remix idea because it isn’t evaluation at all. It’s a way of entering a conversation about alternative musical choices. If I remix your tune, you might feel like my version is an improvement, that it gets at what you were intending to say better than you knew how to say it. That’s the reaction that Michelle gave me, and it’s naturally the one that I want. Of course, you might also feel like I missed the point of your idea, that my version sounds awful. Fair enough. Neither of us is wrong. The beauty of digital audio is that there doesn’t need to be a last word; music can be rearranged and remixed indefinitely.
Update: a guy on Twitter had a brilliant suggestion: do the remix critique during class, so students can see your process, make suggestions, ask questions. Other people have asked me, “Wouldn’t remixing every single student composition take a lot of time?” Yes, I guess it would, but if you do it during class, that addresses the issue nicely.
I’m working on a long paper right now with my colleague at Montclair State University, Adam Bell. The premise is this: In the past, metaphors came from hardware, which software emulated. In the future, metaphors will come from software, which hardware will emulate.
The first generation of digital audio workstations has taken its metaphors from multitrack tape, the mixing desk, keyboards, analog synths, printed scores, and so on. Even purely digital objects like audio waveforms and MIDI clips behave like segments of tape. Sometimes the metaphors are graphically abstracted, as they are in Pro Tools. Sometimes the graphics are more literal, as in Logic. Propellerhead Reason is the most skeuomorphic software of them all. This image from the Propellerhead web site makes the intent of the designers crystal clear; the original analog synths dominate the image.
In Ableton Live, by contrast, hardware follows software. The metaphor behind Ableton’s Session View is a spreadsheet. Many of the instruments and effects have no hardware predecessor.
Controllers like the APC and Push are designed to emulate Session View.
Another software-centric user interface can be found in iZotope’s Iris. It enables you to selectively filter a sample by using Photoshop-like selection tools on a Fourier transform visualization.
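The idea behind that kind of spectral selection can be sketched with a plain Fourier transform. This is a rough illustration of the concept, not iZotope's actual implementation (Iris works on a full spectrogram with free-form selections; this sketch just masks frequency bins):

```python
import numpy as np

# Sketch of spectral selection: transform a signal to the frequency
# domain, keep only the bins inside a "selected" region, transform back.

sr = 8000                       # sample rate (arbitrary for the demo)
t = np.arange(sr) / sr          # one second of time
# two sine tones: 440 Hz and 1760 Hz
signal = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 1760 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / sr)

# "select" everything below 1000 Hz, like lassoing a region in Iris
mask = freqs < 1000
filtered = np.fft.irfft(spectrum * mask, n=len(signal))

# the 1760 Hz tone is removed, the 440 Hz tone survives
out = np.abs(np.fft.rfft(filtered))
print(out[440] > 1000, out[1760] < 1e-6)   # → True True
```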
Music has been slow to embrace the idea of hardware designed to fit software rather than the other way around. Video games have followed this paradigm for decades. While there are some specialized controllers emulating car dashboards or guns or musical instruments, most game controllers are highly abstracted collections of buttons and knobs and motion sensors.
I was born in 1975, and I’m part of the last age cohort to grow up using analog tape. The kids now are likely to have never even seen a tape recorder. Hardware metaphors are only useful to people who are familiar with the hardware. Novel software metaphors take time to learn, especially if they stand for novel concepts. I’m looking forward to seeing what metaphors we dream up in the future.
For the benefit of Play With Your Music participants and anyone else we end up teaching basic audio production to, MusEDLab intern Robin Chakrabarti and I created this video on recording audio in less-than-ideal environments.
This video is itself quite a DIY production, shot and edited in less than twenty-four hours, with minimal discussion beforehand and zero rehearsal. Robin ran the camera, framed and planned shots and did the editing as well. We were operating from a loose script, but the details of the video ended up being substantially improvised as we reacted to the room. For example, we discovered that the room opened onto a loud air conditioning unit that could be somewhat quieted by drawing a curtain. That became one of the more informative parts of the video. Also, while we had planned to do a shot in the bathroom to talk about its natural reverb, we also discovered that the hallway had fairly interesting reverb of its own, and it inspired a useful segment about standing waves.
Maybe the best improv moment came when someone inadvertently burst into the room where we were shooting. It could have been a ruined take, but we salvaged it by using it to address the idea that it’s hard to cordon off non-studio spaces to get the isolation you need.
Improvisation is such a valuable life skill. We shouldn’t make every kid learn how to read music notation, with improvisation as an optional side topic. We should make sure that everyone knows how to improvise, and then if people want to go on and learn to read, great.
I’m a proud member of the NYU Music Experience Design Lab, a research group that crosses the disciplines of music education, technology, and design. Here’s an overview of our many ongoing projects.
My other major personal involvement is in the aQWERTYon, which turns any computer keyboard into a futuristic MIDI controller.
You can choose from a variety of scale and chord mappings, and then jam or compose with the confidence that you can’t play a wrong note. You can use our built-in sound library, or you can play software instruments from Logic, GarageBand, Ableton, and so on.
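The core trick is simple: instead of mapping keys to fixed pitches, map them to scale degrees, so every key lands on an in-scale note. Here is a toy illustration of that idea; the actual aQWERTYon key layout and scale list differ, so the row and scale below are hypothetical:

```python
# Toy version of the "no wrong notes" mapping: a row of QWERTY keys
# walks up a chosen scale, wrapping into the next octave.

C_MINOR_PENT = [0, 3, 5, 7, 10]   # semitone offsets of C minor pentatonic

def key_to_midi(key, row="asdfghjkl", root=60, scale=C_MINOR_PENT):
    """Return the MIDI note for a typed key, walking up the scale."""
    degree = row.index(key)                  # position along the key row
    octave, step = divmod(degree, len(scale))
    return root + 12 * octave + scale[step]

print([key_to_midi(k) for k in "asdfg"])   # → [60, 63, 65, 67, 70]
```

Swapping in a different `scale` list changes the whole keyboard at once, which is what makes choosing "a variety of scale and chord mappings" cheap to implement.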
The aQWERTYon and Groove Pizza are core components of a new learning tool called Play With Your Music: Theory, part of the Play With Your Music series. They were originally conceived as MOOCs, but have since evolved into online learning communities. All of the recording, mixing, editing and performance interfaces run in the web browser, so you don’t need any additional hardware or software to participate.
The lab has a close relationship with the Urban Arts Partnership. We’re creating web tools in support of Fresh Ed, an amazing initiative that teaches various humanities subjects using hip-hop. We’re talking to them about incorporating the Groove Pizza into their work as well. And we play host to Smartbomb Labs; last summer a kid designed a biology game starring a character named Homie O. Stasis.
Other folks in the lab are working with the Chamber Music Society of Detroit to create a set of chamber music engagement tools. I’m particularly fond of the extreme POV string quartet videos, giving you the chance to see and hear a Haydn performance from the players’ perspectives. There’s also a cool musical puzzle card game.
Finally, lab director Alex Ruthmann has been consulting on a new multitrack audio format called OOID, which lets you hear a song’s instruments and vocals in isolation or the mix of your choosing, and also layers in video, lyrics, and even guitar chords. It’s sadly not available in the US yet, but if you’re in Europe, you can download away.
If you are an educator, coder, or designer, and you want to get involved, be in touch. If you’re a philanthropist or grantmaker and you want to support us, definitely be in touch. Also, we’ll be spinning off a business arm this winter; if you’d like to become an investor, be in touch as well.
I’m part of a research group at NYU called the Music Experience Design Lab. One of our projects is called Play With Your Music, a series of online interactive music courses. We’re currently developing the latest iteration, called Play With Your Music: Theory. Each module presents a “musical simple,” a short and memorable loop of melody or rhythm. Each simple is a window into one or more music theory concepts. Users can learn and play with the simples using a new interface called the aQWERTYon, which maps scales and chords to the regular computer keyboard.
We’re presenting the simples in a variety of formats, including YouTube videos, standard music notation, MIDI, data visualization, and our custom aQWERTYon notation.
The goal is to teach theory through creative engagement with meaningful real-world music. We also want to put more emphasis on rhythm, which traditional music theory pedagogy tends to neglect. I’ve put some prototypes up, and I invite you to take a look and play around.
There’s a lot of work to do to make this vision a reality, and we’re looking for partners to help us do it. Specifically, here’s what we’d like to do in the coming year:
If you’d like to get involved, or you want to offer some feedback, please let me know.
This fascinating thread about music theory on Hacker News showed up recently in my blog pingbacks.
Two posts in particular caught my eye. First, kev6168 had this eminently reasonable request:
I wish there is a lecture in the format of “One hundred songs for one hundred music concepts”. The songs should be stupidly simple (children’s songs, simple pops, short classical pieces, etc.). Each lesson concentrates on only _one_ concept appeared in the piece, explains what it is, how it is used, why it is used in that way, and how its application makes nice sound, etc. Basically, show me the _effect_ for each concept in a simple real world setting. Then more importantly, give me exercises to write a few bars of music using this concept, no matter how bad my writings are, as long as I am applying the new knowledge…
[A]rmed with only a few concepts, a newbie [coder] can start to write simple programs from the very beginning, in the forms of piecemeal topic-focused little exercises. The result of each exercise is a fully functioning program. I wish I can find a similarly structured music theory course that uses the same approach. Also, are there projects in music which are similar to ProjectEuler.net or the likes, where you can do focused practice on specific topic? I would be happy to pay for those services.
This represents a pedagogical opportunity, not to mention a market opportunity. The NYU Music Experience Design Lab is hard at work on creating just such a resource. It’s going to be called Play With Your Music: Theory, and we hope to get it launched this summer. If you want to help us with it, get in touch.
Deeper in the thread, TheOtherHobbes has a broader philosophical point.
Pop has become a massive historical experiment in perceptual psychology. The most popular bands can literally fill a stadium – something common practice music has never done.
While that doesn’t mean pop is better in some absolute sense, it does suggest it’s doing something right for many listeners.
If your training is too rigidly classical it actively stops you being able to hear and understand what that right thing is, because you’re too busy concentrating on a small subset of the many details in the music.
This is a point that I spend a lot of energy pursuing, but I hadn’t explicitly framed it in terms of perceptual psychology. It gets at some bigger questions: Why do people like music at all? Even though pop can indeed draw huge crowds, it’s mostly a recorded art form. How does that work? What does it mean that we’re so attracted to roboticized voices? A lot to think about.