Composing in the classroom

The hippest music teachers help their students create original music. But what exactly does that mean? What even is composition? In this post, I take a look at two innovators in music education and try to arrive at an answer.

Matt McLean is the founder of the amazing Young Composers and Improvisers Workshop. He teaches his students composition using a combination of Noteflight, an online notation editor, and the MusEDLab’s own aQWERTYon, a web app that turns your regular computer keyboard into an intuitive musical interface.

http://www.yciw.net/1/the-interface-i-wish-noteflight-had-is-here-aqwertyon/

Matt explains:

Participating students in YCIW as well as my own students at LREI have been using Noteflight for over 6 years to compose music for chamber orchestras, symphony orchestras, jazz ensembles, movie soundtracks, video game music, school band and more – hundreds of compositions.

Before the advent of the aQWERTYon, students needed to enter music into Noteflight either by clicking with the mouse or by playing notes in with a MIDI keyboard. The former method is accessible but slow; the latter method is fast but requires some keyboard technique. The aQWERTYon combines the accessibility of the mouse with the immediacy of the piano keyboard.

For the first time there is a viable way for every student to generate and notate her ideas in a tactile manner with an instrument that can be played by all. We founded Young Composers & Improvisors Workshop so that every student can have the experience of composing original music. Much of my time has been spent exploring ways to emphasize the “experiencing” part of this endeavor. Students had previously learned parts of their composition on instruments after their piece was completed. Also, students with piano or guitar skills could work out their ideas prior to notating them. But efforts to incorporate MIDI keyboards or other interfaces with Noteflight in order to give students a way to perform their ideas into notation always fell short.

The aQWERTYon lets novices try out ideas the way that more experienced musicians do: by improvising with an instrument and reacting to the sounds intuitively. It’s possible to compose without using an instrument at all, using a kind of sudoku-solving method, but it’s not likely to yield good results. Your analytical consciousness, the part of your mind that can write notation, is also its slowest and dumbest part. You really need your emotions, your ear, and your motor cortex involved. Before computers, you needed considerable technical expertise to be able to improvise musical ideas, and remember them long enough to write them down. The advent of recording and MIDI removed a lot of the friction from the notation step, because you could preserve your ideas just by playing them. With the aQWERTYon and interfaces like it, you can do your improvisation before learning any instrumental technique at all.

Student feedback suggests that kids like being able to play along to previously notated parts as a way to find new parts to add to their composition. As a teacher I am curious to measure the effect of students being able to practice their ideas at home using aQWERTYon and then sharing their performances before using their idea in their composition. It is likely that this will create a stronger connection between the composer and her musical idea than if she had only notated it first.

Those of us who have been making original music in DAWs are familiar with the pleasures of creating ideas through playful jamming. It feels like a major advance to put that experience in the hands of elementary school students.

Matt uses progressive methods to teach a traditional kind of musical expression: writing notated scores that will then be performed live by instrumentalists. Matt’s kids are using futuristic tools, but the model for their compositional technique is the one established in the era of Beethoven.

Beethoven

(I just now noticed that the manuscript Beethoven is holding in this painting is in the key of D-sharp. That’s a tough key to read!)

Other models of composition exist. There’s the Lennon and McCartney method, which doesn’t involve any music notation. Like most untrained rock musicians, the Beatles worked from lyric sheets with chords written on them as a mnemonic. The “lyrics plus chords” method continues to be the standard for rock, folk and country musicians. It’s a notation system that’s only really useful if you already have a good idea of how the song is supposed to sound.

Lennon and McCartney writing

Lennon and McCartney originally wrote their songs to be performed live for an audience. They played in clubs for several years before ever entering a recording studio. As their career progressed, however, the Beatles stopped performing live, and began writing with the specific goal of creating studio recordings. Some of those later Beatles tunes would be difficult or impossible to perform live. Contemporary artists like Missy Elliott and Pharrell Williams have pushed the Beatles’ idea to its logical extreme: songs existing entirely within the computer as sequences of samples and software synths, with improvised vocals arranged into shape after being recorded. For Missy and Pharrell, creating the score and the finished recording are one and the same act.

Pharrell and Missy Elliott in the studio

Is it possible to teach the Missy and Pharrell method in the classroom? Alex Ruthmann, MusEDLab founder and my soon-to-be PhD advisor, documented his method for doing so in 2007.

As a middle school general music teacher, I’ve often wrestled with how to engage my students in meaningful composing experiences. Many of the approaches I’d read about seemed disconnected from the real-world musicality I saw daily in the music my students created at home and what they did in my classes. This disconnect prompted me to look for ways of bridging the gap between the students’ musical world outside music class and their in-class composing experiences.

It’s an axiom of constructivist music education that students will be most motivated to learn music that’s personally meaningful to them. There are kids out there for whom notated music performed on instruments is personally meaningful. But the musical world outside music class usually follows the Missy and Pharrell method.

[T]he majority of approaches to teaching music with technology center around notating musical ideas and are often rooted in European classical notions of composing (for example, creating ABA pieces, or restricting composing tasks to predetermined rhythmic values). These approaches require students to have a fairly sophisticated knowledge of standard music notation and a fluency working with rhythms and pitches before being able to explore and express their musical ideas through broader musical dimensions like form, texture, mood, and style.

Noteflight imposes some limitations on these musical dimensions. Some forms, textures, moods and styles are difficult to capture in standard notation. Some are impossible. If you want to specify a particular drum machine sound combined with a sampled breakbeat, or an ambient synth pad, or a particular stereo image, standard notation is not the right tool for the job.

Common approaches to organizing composing experiences with synthesizers and software often focus on simplified classical forms without regard to whether these forms are authentic to the genre or to technologies chosen as a medium for creation.

There is nothing wrong with teaching classical forms. But when making music with computers, the best results come from making the music that’s idiomatic to computers. Matt McLean goes to extraordinary lengths to have student compositions performed by professional musicians, but most kids will be confined to the sounds made by the computer itself. Classical forms and idioms sound awkward at best when played by the computer, but electronic music sounds terrific.

The middle school students enrolled in these classes came without much interest in performing, working with notation, or studying the classical music canon. Many saw themselves as “failed” musicians, placed in a general music class because they had not succeeded in or desired to continue with traditional performance-based music classes. Though they no longer had the desire to perform in traditional school ensembles, they were excited about having the opportunity to create music that might be personally meaningful to them.

Here it is, the story of my life as a music student. Too bad I didn’t go to Alex’s school.

How could I teach so that composing for personal expression could be a transformative experience for students? How could I let the voices and needs of the students guide lessons for the composition process? How could I draw on the deep, complex musical understandings that these students brought to class to help them develop as musicians and composers? What tools could I use to quickly engage them in organizing sound in musical and meaningful ways?

Alex draws parallels between writing music and writing English. Both are usually done alone at a computer, and both pose a combination of technical and creative challenges.

Musical thinking (thinking in sound) and linguistic thinking (thinking using language phrases and ideas) are personal creative processes, yet both occur within social and cultural contexts. Noting these parallels, I began to think about connections between the whole-language approach to writing used by language arts teachers in my school and approaches I might take in my music classroom.

In the whole-language approach to writing, students work individually as they learn to write, yet are supported through collaborative scaffolding: support from their peers and the teacher. At the earliest stages, students tell their stories and attempt to write them down using pictures, drawings, and invented notation. Students write about topics that are personally meaningful to them, learning from their own writing and from the writing of their peers, their teacher, and their families. They also study literature of published authors. Classes that take this approach to teaching writing are often referred to as “writers’ workshops”… The teacher facilitates [students’] growth as writers through minilessons, share sessions, and conferring sessions tailored to meet the needs that emerge as the writers progress in their work. Students’ original ideas and writings often become an important component of the curriculum. However, students in these settings do not spend their entire class time “freewriting.” There are also opportunities for students to share writing in progress and get feedback and support from teacher and peers. Revision and extension of students’ writing occur throughout the process. Lessons are not organized by uniform, prescriptive assignments, but rather are tailored to the students’ interests and needs. In this way, the direction of the curriculum and successive projects are informed by the students’ needs as developing writers.

Alex set about creating an equivalent “composers’ workshop,” combining composition, improvisation, and performing with analytical listening and genre studies.

The broad curricular goal of the composers’ workshop is to engage students collaboratively in:

  • Organizing and expressing musical ideas and feelings through sound with real-world, authentic reasons for and means of composing
  • Listening to and analyzing musical works appropriate to students’ interests and experiences, drawn from a broad spectrum of sources
  • Studying processes of experienced music creators through listening to, performing, and analyzing their music, as well as being informed by accounts of the composition process written by these creators.

Alex recommends production software with strong loop libraries so students can make high-level musical decisions with “real” sounds immediately.

While students do not initially work directly with rhythms and pitch, working with loops enables students to begin composing through working with several broad musical dimensions, including texture, form, mood, and affect. As our semester progresses, students begin to add their own original melodies and musical ideas to their loop-based compositions through work with synthesizers and voices.

As they listen to musical exemplars, I try to have students listen for the musical decisions and understand the processes that artists, sound engineers, and producers make when crafting their pieces. These listening experiences often open the door to further dialogue on and study of the multiplicity of musical roles that are a part of creating today’s popular music. Having students read accounts of the steps that audio engineers, producers, songwriters, film-score composers, and studio musicians go through when creating music has proven to be informative and has helped students learn the skills for more accurately expressing the musical ideas they have in their heads.

Alex shares my belief in project-based music technology teaching. Rather than walking through the software feature-by-feature, he plunges students directly into a creative challenge, trusting them to pick up the necessary software functionality as they go. Rather than tightly prescribe creative approaches, Alex observes the students’ explorations and uses them as opportunities to ask questions.

I often ask students about their composing and their musical intentions to better understand how they create and what meanings they’re constructing and expressing through their compositions. Insights drawn from these initial dialogues help me identify strategies I can use to guide their future composing and also help me identify listening experiences that might support their work or techniques they might use to achieve their musical ideas.

Some musical challenges are more structured: Alex does “genre studies” where students have to pick out the qualities that define techno or rock or film scores, and then create using those idioms. This is especially useful for younger students who may not have a lot of experience listening closely to a wide range of music.

Rather than devoting entire classes to demonstrations or lectures, Alex prefers to devote the bulk of classroom time to working on the projects, offering “minilessons” to smaller groups or individuals as the need arises.

Teaching through minilessons targeted to individuals or small groups of students has helped to maintain the musical flow of students’ compositional work. As a result, I can provide more individual feedback and support to students as they compose. The students themselves also offer their own minilessons to peers when they have been designated to teach more about advanced features of the software, such as how to record a vocal track, add a fade-in or fade-out, or copy their musical material. These technology skills are taught directly to a few students, who then become the experts in that skill, responsible for teaching other students in the class who need the skill.

Not only does the peer-to-peer learning help with cultural authenticity, but it also gives students invaluable experience with the role of teacher.

One of my first questions is usually, “Is there anything that you would like me to listen for or know about before I listen?” This provides an opportunity for students to seek my help with particular aspects of their composing process. After listening to their compositions, I share my impressions of what I hear and offer my perspective on how to solve their musical problems. If students choose not to accept my ideas, that’s fine; after all, it’s their composition and personal expression… Use of conferring by both teacher and students fosters a culture of collaboration and helps students develop skills in peer scaffolding.

Alex recommends creating an online gallery of class compositions. This has become easier to implement since 2007 with the explosion of blog platforms like Tumblr, audio hosting tools like SoundCloud, and video hosts like YouTube. There are always going to be privacy considerations with such platforms, but there is no shortage of options to choose from.

Once a work is online, students can listen to and comment on these compositions at home outside of class time. Sometimes students post pieces in progress, but for the most part, works are posted when deemed “finished” by the composer. The online gallery can also be set up so students can hear works written by participants in other classes. Students are encouraged to listen to pieces published online for ideas to further their own work, to make comments, and to share these works with their friends and family. The real-world publishing of students’ music on the Internet seems to contribute to their motivation.

Assessing creative work is always going to be a challenge, since there’s no objective basis to assess it on. Alex looks at how well a student composer has met the goal of the assignment, and how well they have achieved their own compositional intent.

The word “composition” is problematic in the context of contemporary computer-based production. It carries the cultural baggage of Western Europe, the idea of music as having a sole identifiable author (or authors). The sampling and remixing ethos of hip-hop and electronica is closer to the traditions of non-European cultures, where music may be owned by everyone and no one. I’ve had good results bringing remixing into the classroom, having students rework each others’ tracks, or beginning with a shared pool of audio samples, or doing more complex collaborative activities like musical shares. Remixes are a way of talking about music via the medium of music, and remixes of remixes can make for some rich and deep conversation. The word “composition” makes less sense in this context. I prefer the broader term “production”, which includes both the creation of new musical ideas and the realization of those ideas in sound.

So far in this post, I’ve presented notation-based composition and loop-based production as if they’re diametrical opposites. In reality, the two overlap, and can be easily combined. A student can create a part as a MIDI sequence and then convert it to notation, or vice versa. The school band or choir can perform alongside recorded or sequenced tracks. Instrumental or vocal performances can be recorded, sampled, and turned into new works. Electronic productions can be arranged for live instruments, and acoustic pieces can be reconceived as electronica. If a hip-hop track can incorporate a sample of Duke Ellington, there’s no reason that sample couldn’t be performed by a high school jazz band. The possibilities are endless.

Rohan lays beats

The Ed Sullivan Fellows program is an initiative by the NYU MusEDLab connecting up-and-coming hip-hop musicians to mentors, studio time, and creative and technical guidance. Our session this past Saturday got off to an intense start, talking about the role of young musicians of color in a world of police brutality and Black Lives Matter. The Fellows are looking to Kendrick Lamar and Chance The Rapper to speak social and emotional truths through music. It’s a brave and difficult job they’ve taken on.

Eventually, we moved from heavy conversation into working on the Fellows’ projects, which this week involved branding and image. I was at kind of a loose end in this context, so I set up the MusEDLab’s Push controller and started playing around with it. Rohan, one of the Fellows, immediately gravitated to it, and understandably so.

Indigo lays beats

Rohan tried out a few drum sounds, then some synths. He quickly discovered a four-bar synth loop that he wanted to build a track around. He didn’t have any Ableton experience, however, so I volunteered to be his co-producer and operate the software for him.

We worked out some drum parts, first with a hi-hat and snare from the Amen break, and then a kick, clap and more hi-hats from Ableton’s C78 factory instrument. For bass, Rohan wanted that classic booming hip-hop sound you hear coming from car stereos in Brooklyn. He spotted the Hip-Hop Sub among the presets. We fiddled with it and he continued to be unsatisfied until I finally just put a brutal compressor on it, and then we got the sound he was hearing in his head.

While we were working, I had my computer connected to a Bluetooth speaker that was causing some weird and annoying system behavior. At one point, iTunes launched itself and started playing a random song under Rohan’s track, “I Can’t Realize You Love Me” by Duke Ellington and His Orchestra, featuring The Harlem Footwarmers and Sid Garry.

Rohan liked the combination of his beat and the Ellington song, so I sampled the opening four bars and added them to the mix. It took me several tries to match the keys, and I still don’t think I really nailed it, but the hip-hop kids have broad tolerance for chord clash, and Rohan was undisturbed.

Once we had the loops assembled, we started figuring out an arrangement. It took me a minute to figure out that when Rohan refers to a “bar,” he means a four-measure phrase. He’s essentially conflating hypermeasures with measures. I posted about it on Twitter later and got some interesting responses.

In a Direct Message, Latinfiddler also pointed out that Latin music calls two measures a “bar” because that’s the length of one cycle of the clave.

Thinking about it further, there’s yet another reason to conflate measures with hypermeasures, which is the broader cut-time shift taking place in hip-hop. All of the young hip-hop beatmakers I’ve observed lately work at half the base tempo of their DAW session. Rohan, being no exception, had the session tempo set to 125 bpm, but programmed a beat with an implied tempo of 62.5 bpm. He and his cohort put their backbeats on beat three, not beats two and four, so they have a base grid of thirty-second notes rather than sixteenth notes. A similar shift took place in the early 1960s when the swung eighth notes of jazz rhythm gave way to the swung sixteenth notes of funk.
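The half-time arithmetic above can be sketched out in a few lines. This is just an illustration of the relationship described, using the 125 bpm figure from Rohan’s session; the function names are my own.

```python
# A minimal sketch of the half-time relationship described above.
# At a DAW session tempo of 125 bpm with backbeats on beat three,
# the felt tempo is half the session tempo, and the DAW's
# sixteenth-note grid reads as thirty-second notes at the felt tempo.

def implied_tempo(session_bpm: float) -> float:
    """Felt tempo of a beat programmed in half time."""
    return session_bpm / 2

def felt_grid(session_grid: int) -> int:
    """Grid subdivision relative to the felt tempo (16ths become 32nds)."""
    return session_grid * 2

print(implied_tempo(125))  # 62.5
print(felt_grid(16))       # 32
```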

Here’s Rohan’s track as of the end of our session:

By the time we were done working, the rest of the Fellows had gathered around and started freestyling. The next step is to record them rapping and singing on top. We also need to find someone to mix it properly. I understand aspects of hip-hop very well, but I mix amateurishly at best.

All the way around, I feel like I learn a ton about music whenever I work with young hip-hop musicians. They approach the placement of sounds in the meter in ways that would never occur to me. I’m delighted to be able to support them technically in realizing their ideas; it’s a privilege for me.

Ilan meets the Fugees

My youngest private music production student is a kid named Ilan. He makes moody trip-hop and deep house using Ableton Live. For our session today, Ilan came in with a downtempo, jazzy hip-hop instrumental. I helped him refine and polish it, and then we talked about his ideas for what kind of vocal might work on top. He wanted an emcee to flow over it, so I gave him my folder of hip-hop acapellas I’ve collected. The first one he tried was “Fu-Gee-La [Refugee Camp Remix]” by the Fugees.

I had it all warped out already, so all he had to do was drag and drop it into his session and press play. It sounded great, so he ran with it. Here’s what he ended up with:

At this point, let me clarify something. To his knowledge, Ilan had never heard “Fu-Gee-La” before using it in his track. His first exposure was the acapella over his own instrumental. His track is quite a bit faster than the original (well, technically, it’s slower, but the kids these days like their rapping double-time). Also, we needed to pitch the acapella down a minor third to match the key of Ilan’s instrumental. As of this writing, he has heard his remix about a thousand more times than the original.
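For the curious, the pitch math here is simple equal-temperament arithmetic. A sketch (assuming naive resampling, where pitch and playback speed change together; Live’s warping actually decouples the two):

```python
def semitone_ratio(semitones: float) -> float:
    """Frequency (or playback-rate) ratio for an equal-temperament
    pitch shift; negative semitones shift down."""
    return 2 ** (semitones / 12)

# Pitching the acapella down a minor third (three semitones):
print(round(semitone_ratio(-3), 4))  # 0.8409

# An octave up doubles the frequency:
print(semitone_ratio(12))  # 2.0
```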

And now, let’s consider the Fugees’ “original” song. Ilan used the acapella from a remix, not from the original original, which makes a difference since the remix has some different lyrics. The Fugees’ original original is not itself totally original. It contains several samples, including liberal interpolations of Teena Marie, and a quote from “Shakiyla (JRH)” by Poor Righteous Teachers, which itself contains several samples.

Hip-hop’s sampling culture was still radical back in the 90s when “Fu-Gee-La” was released, but has since become absorbed into mainstream sensibilities. Ilan is ambitious and talented, but his sensibilities are well in keeping with those of most of his millennial peers. So it’s worth looking into his norms and values around authorship and ownership. During our session, he was interested in the Fugees song simply as raw material for his own creativity, not as a self-contained work that needed to be “appreciated” first (or ever). Ilan’s concerns about where he sources his sounds come down one hundred percent to expediency. He buys sounds from the Ableton web site because that’s easy. The same goes for buying tracks from iTunes, if they surface with a quick search. Otherwise, Ilan just does YouTube-to-mp3 conversion. I’ve never heard him voice any concern about the idea of intellectual property, or any desire to seek anyone’s permission.

So here we have a young musician who created an original track, and then after the fact layered in a commercially released hip-hop vocal track on a whim. If that one hadn’t worked, he would have just dropped in another one chosen more or less at random. This kind of effortless drag-and-drop remixing requires some facility with Ableton Live, which is expensive and has a learning curve. But this practice is easier than it was five years ago, and is only going to get easier. Music educators: are we ready for a world where this kind of creativity is so accessible? Rights holders: do you know just how little the kids know or care about the concept of musical intellectual property? And musicians: have you experienced the pleasure and inspiration of freely mixing your ideas with everyone else’s? This is a crazy time we live in.

Beatmaking fundamentals

I’m currently working with the Ed Sullivan Fellows program, an initiative of the NYU MusEDLab where we mentor up-and-coming rappers and producers. Many of them are working with beats they got from YouTube or SoundCloud. That’s fine for working out ideas, but to get to the next level, the Fellows need to be making their own beats. Partially this is for intellectual property reasons, and partially it’s because the quality of the mp3s you get from YouTube is not so good. Here’s a collection of resources and ideas I collected for them, and that you might find useful too.

Sullivan Fellows - beatmaking with FL Studio

What should you use?

There are a lot of digital audio workstations (DAWs) out there. All of them have the same basic set of functions: a way to record and edit audio, a MIDI sequencer, and a set of samples and software instruments. My DAW of choice is Ableton Live. Most of the Sullivan Fellows favor FL Studio. Mac users naturally lean toward GarageBand and Logic. Other common tools for hip-hop producers include Reason, Pro Tools, Maschine, and in Europe, Cubase.

Traditional DAWs are not the only option. Soundtrap is a DAW that’s similar to GarageBand, with the enormous advantage that it runs entirely in the web browser. It also offers some nifty features like built-in Auto-Tune at a fraction of the usual price. The MusEDLab’s own Groove Pizza is an accessible browser-based drum sequencer. Looplabs is another intriguing browser tool.

Mobile apps are not as robust or full-featured as desktop DAWs yet, but some of them are getting there. The iOS version of GarageBand is especially tasty. Figure makes great techno loops, though you’ll need to assemble them into songs using another tool. The Launchpad app is a remarkably easy and intuitive one. See my full list of recommendations.

Sullivan Fellows - beatmaking with iOS GarageBand

Where do you get sounds?

DAW factory sounds

Every DAW comes with a sample library and a set of software instruments. Pros: they’re royalty-free. Cons: they tend to be generic-sounding and overused. Be sure to tweak the presets.

Sample libraries and instrument packs

The internet is full of third-party sound libraries. They range widely in price and quality. Pros: like DAW factory sounds, library sounds are royalty-free, with a far wider variety available. Cons: the best libraries are expensive.

Humans playing instruments

You could record music the way it was played from the Stone Age through about 1980. Pros: you get human feel, creativity, improvisation, and distinctive instrumental timbres and techniques. Cons: humans are expensive and impractical to record well.

Your record collection

Using more DJ-oriented tools like Ableton, it’s perfectly effortless to pull sounds out of any existing recording. Pros: bottomless inspiration, and the ability to connect emotionally to your listener through sounds that are familiar and meaningful to them. Cons: if you want to charge money, you will probably need permission from the copyright holders, and that can be difficult and expensive. Even giving tracks away on the internet can be problematic. I’ve been using unauthorized samples for years and have never been in any trouble, but I’ve had a few SoundCloud takedowns.

Sullivan Fellows - beatmaking with Pro Tools

What sounds do you need?

Drums

Most hip-hop beats revolve around the components of the standard drum kit: kicks, snares, hi-hats (open and closed), crash cymbals, ride cymbals, and toms. Handclaps and finger snaps have become part of the standard drum palette as well. There are two kinds of drum sounds, synthetic (“fake”) and acoustic (“real”).

Synthetic drums are the heart and soul of hip-hop (and most other pop and dance music at this point). There are tons of software and hardware drum machines out there, but there are three in particular you should be aware of.

  • Roland TR-808: If you could only have one drum machine for hip-hop creation, this would be the one. Every DAW contains sampled or simulated 808 sounds, sometimes labeled “old-skool” or something similar. It’s an iconic sound for good reason.
  • Roland TR-909: A cousin of the 808 that’s traditionally used more for techno. Still, you can get great hip-hop sounds out of it too. Your DAW is certain to contain some 909 sounds, often labeled with some kind of dance music terminology.
  • LinnDrum: The sound of the 80s. Think Prince, or Hall & Oates. Not as ubiquitous in DAWs as the 808 and 909, but pretty common.

Acoustic drums are less common in hip-hop, though not unheard of; just ask Questlove.

Some hip-hop producers use live drummers, but it’s much easier to use sampled acoustic drums. Samples are also a good source of Afro-Cuban percussion sounds like bongos, congas, timbales, cowbells, and so on. Also consider using “non-musical” percussion sounds: trash can lids, pots and pans, basketballs bouncing, stomping on the floor, and so on.

And how do you learn where to place these drum sounds? Try the specials on the Groove Pizza. Here’s an additional hip-hop classic to experiment with: the beat from “Nas Is Like” by Nas.

Groove Pizza - Nas Is Like

Bass

Hip-hop uses synth bass the vast majority of the time. Your DAW comes with a variety of synth bass sounds, including the simple sine wave sub, the P-Funk Moog bass, dubstep wobbles, and many others. For more unusual bass sounds, try very low-pitched piano or organ. Bass guitar isn’t extremely common in current hip-hop, but it’s worth a try. If you want a 90s Tribe Called Quest vibe, try upright bass.

In the past decade, some hip-hop producers have followed Kanye West’s example and used tuned 808 kick drums to play their basslines. Kanye has used the technique on all of his albums since 808s & Heartbreak. It’s an amazing solution; those 808 kicks are huge, and if they’re carrying the bassline too, then your low end can be nice and open. Another interesting alternative is to have no bassline at all. It worked for Prince!

And what notes should your bass be playing? If you have chords, the obvious thing is to have the bass playing the roots. You can also have the bass play complicated countermelodies. We made a free online course called Theory for Producers to help you figure these things out.

Chords

Usually your chords are played on some combination of piano, electric piano, organ, synth, strings, guitar, or horns. Vocal choirs are nice too. Once again, consult Theory for Producers for inspiration. Be sure to try out chords with the aQWERTYon, which was specifically designed for this very purpose.

Leads

The same instruments that you use for chords also work fine for melodies. In fact, you can think of melodies as chords stretched out horizontally, and conversely, you can think of chords as melodies stacked up vertically.

FX

For atmosphere in your track, ambient synth pads are always effective. Also try non-musical sounds like speech, police sirens, cash registers, gun shots, birds chirping, movie dialog, or whatever else your imagination can conjure. Make sure to visit Freesound.org – you have to sign up, but it’s worth it. Above all, listen to other people’s tracks, experiment, and trust your ears.

The evolution of the Groove Pizza

The Groove Pizza is a playful tool for creating grooves using math concepts like shapes, angles, and patterns. Here’s a beat I made just now. Try it yourself!

 
This post explains how and why we designed Groove Pizza.

What it does

The Groove Pizza represents beats as concentric rhythm necklaces. The circle represents one measure. Each slice of the pizza is a sixteenth note. The outermost ring controls the kick drum; the middle one controls the snare; and the innermost one plays cymbals.
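As a rough mental model, each ring can be thought of as a list of sixteen on/off steps. The data structures and names below are my own sketch, not the app’s actual code:

```python
# A minimal sketch of the Groove Pizza's data model: three concentric
# rhythm necklaces, one per instrument, each divided into sixteen slices.
SLICES = 16  # one measure of sixteenth notes

# 1 = a drum hit on that slice, 0 = silence
beat = {
    "kick":   [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
    "snare":  [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
    "cymbal": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
}

def hits_at(step):
    """Return the instruments sounding on a given slice of the pizza."""
    return [drum for drum, ring in beat.items() if ring[step % SLICES]]

print(hits_at(0))  # kick and cymbal land together on the downbeat
```

Playback is then just a loop over the slices, wrapping around at the end of the measure.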

Connecting the dots on a given ring creates shapes, like the square formed by the snare drum in the pattern below.

Groove Pizza - jazz swing

The pizza can play time signatures other than 4/4 by changing the number of slices. Here’s a twelve-slice pizza playing an African bell pattern.

Groove Pizza - Bembe

You can explore the geometry of musical rhythm by dragging shapes onto the circular grid. Patterns that are visually appealing tend to sound good, and patterns that sound good tend to look cool.

Groove Pizza - shapes

Herbie Hancock did some user testing for us, and he suggested that we make it possible to show the interior angles of the shapes.

Groove Pizza - angles

Groove Pizza History

The ideas behind the Groove Pizza began in my master’s thesis work in 2013 at NYU. For his NYU senior thesis, Adam November built web and physical prototypes. In late summer 2015, Adam wrote what would become the Groove Pizza 1.0 (GP1), with a library of drum patterns that he and I curated. The MusEDLab has been user testing this version for the past year, both with kids and with music and math educators in New York City.

In January 2016, the Music Experience Design Lab began developing the Groove Pizza 2.0 (GP2) as part of the MathScienceMusic initiative.

MathScienceMusic Groove Pizza Credits:

  • Original Ideas: Ethan Hein, Adam November & Alex Ruthmann
  • Design: Diana Castro
  • Software Architect: Kevin Irlen
  • Creative Code Guru: Matthew Kaney
  • Backend Code Guru: Seth Hillinger
  • Play Testing: Marijke Jorritsma, Angela Lau, Harshini Karunaratne, Matt McLean
  • Odds & Ends: Asyrique Thevendran, Jamie Ehrenfeld, Jason Sigal

The learning opportunity

The goals of the Groove Pizza are to help novice drummers and drum programmers get started; to create a gentler introduction to beatmaking with more complex tools like Logic or Ableton Live; and to use music to open windows into math and geometry. The Groove Pizza is intended to be simple enough to be learned easily without prior experience or formal training, but it must also have sufficient depth to teach substantial and transferable skills and concepts, including:

  • Familiarity with the component instruments in a drum beat and the ability to pick them individually out of the sound mass.
  • A repertoire of standard patterns and rhythmic motifs. Understanding of where to place the kick, snare, hi-hats and so on to produce satisfying beats.
  • Awareness of different genres and styles and how they are distinguished by their different degrees of syncopation, customary kick drum patterns and claves, tempo ranges and so on.
  • An intuitive understanding of the difference between strong and weak beats and the emotional effect of syncopation.
  • Acquaintance with the concept of hemiola and other more complex rhythmic devices.

Marshall (2010) recommends “folding musical analysis into musical experience.” Programming drums in pop and dance idioms makes the rhythmic abstractions concrete.

Visualizing rhythm

Western music notation is fairly intuitive on the pitch axis, where height on the staff corresponds clearly to pitch height. On the time axis, however, Western notation is less easily parsed—horizontal space need not have any bearing at all on time values. A popular alternative is the “time-unit box system,” a kind of rhythm tablature used by ethnomusicologists. In a time-unit box system, each pulse is represented by a square. Rhythmic onsets are shown as filled boxes.

Clave patterns in TUBS

Nearly all electronic music production interfaces use the time-unit box system scheme, including grid sequencers and the MIDI piano roll.

Ableton TUBS

A row of time-unit boxes can also be wrapped in a circle to form a rhythm necklace. The Groove Pizza is simply a set of rhythm necklaces arranged concentrically.
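The two representations are easy to relate in code. Here is a small sketch (the function names are mine) that renders a pattern as a row of time-unit boxes and then as the onset angles of a rhythm necklace:

```python
def tubs(pattern, onset="■", rest="·"):
    """Render a rhythm as a time-unit box row: one box per pulse."""
    return "".join(onset if hit else rest for hit in pattern)

def necklace_angles(pattern):
    """Wrap the same row into a circle: each onset becomes a point on the
    circumference, starting at twelve o'clock and moving clockwise."""
    n = len(pattern)
    return [i * 360 / n for i, hit in enumerate(pattern) if hit]

son_clave = [1,0,0,1,0,0,1,0,0,0,1,0,1,0,0,0]  # 3-2 son clave
print(tubs(son_clave))             # ■··■··■···■·■···
print(necklace_angles(son_clave))  # [0.0, 67.5, 135.0, 225.0, 270.0]
```

The same onset list drives both views; only the geometry changes.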

Circular rhythm visualization offers a significant advantage over linear notation: it more clearly shows metrical function. We can define meter as “the grouping of perceived beats or pulses into equivalence classes” (Forth, Wiggin & McLean, 2010, 521). Linear musical concepts like small-scale melodies depend mostly on relationships between adjacent events, or at least closely spaced events. But periodicity and meter depend on relationships between nonadjacent events. Linear representations of music do not show meter directly. Simply by looking at the page, there is no indication that the first and third beats of a measure of 4/4 time are functionally related, as are the second and fourth beats.

However, when we wrap the musical timeline into a circle, meter becomes much easier to parse. Pairs of metrically related beats are directly opposite one another on the circle. Rotational and reflectional symmetries give strong clues to metrical function generally. For example, this illustration of 2-3 son clave adapted from Barth (2011) shows an axis of reflective symmetry between the fourth and twelfth beats of the pattern. This symmetry is considerably less obvious when viewed in more conventional notation.

Son clave symmetry
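This kind of symmetry is easy to verify computationally. In the sketch below (my own, using 0-indexed slices), reflecting slice i about an axis parameter k maps it to (k − i) mod n; the pattern is symmetric about that axis when the reflection maps the onset set onto itself:

```python
def reflection_axes(pattern):
    """Return every axis parameter k for which the necklace is
    reflectively symmetric: reflection sends slice i to (k - i) mod n.
    The axis itself passes through position k/2 (and k/2 + n/2)."""
    n = len(pattern)
    onsets = {i for i, hit in enumerate(pattern) if hit}
    return [k for k in range(n)
            if {(k - i) % n for i in onsets} == onsets]

# 2-3 son clave, 0-indexed: onsets on slices 2, 4, 8, 11, and 14
son_clave_23 = [0,0,1,0,1,0,0,0,1,0,0,1,0,0,1,0]
print(reflection_axes(son_clave_23))  # [6]: the axis through slices 3 and 11
```

The single axis through slices 3 and 11 corresponds to the fourth and twelfth beats in 1-indexed counting, matching Barth’s illustration.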

The Groove Pizza adds a layer of dynamic interaction to circular representation. Users can change time signatures during playback by adding or removing slices. In this way, very complex metrical shifts can be performed by complete novices. Furthermore, each rhythm necklace can be rotated during playback, enabling a rhythmic modularity characteristic of the most sophisticated Afro-Latin and jazz rhythms. Rotational rhythmic transformations typically require very sophisticated music-reading and performance skills to understand and execute, but they are effortlessly accessible to Groove Pizza users.

Visualizing swing

We traditionally associate swing with jazz, but it is omnipresent in American vernacular music: in rock, country, funk, reggae, hip-hop, EDM, and so on. For that reason, swing is a standard feature of notation software, MIDI sequencers, and drum machines. However, while swing is crucial to rhythmic expressiveness, it is rarely visualized in any explicit way, in notation or in software interfaces. Sequencers will sometimes show swing by displacing events on the MIDI piano roll, but the user must place those events first. The grid itself generally does not show swing.

The Groove Pizza uses a novel (and to our knowledge unprecedented) graphical representation of swing on the background grid, not just on the musical events. The slices alternately expand and contract in width according to the amount of swing specified. At 0% swing, the wedges are all of uniform width. At 50% swing, the odd-numbered slice in each pair is twice as long as the following even-numbered slice. As the user adjusts the swing slider, the slices dynamically change their width accordingly.

Straight 16ths vs swing 16ths
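One plausible way to compute the slice widths is the interpolation below. The formula is my own guess at the app’s behavior, but it matches the stated endpoints: uniform wedges at 0% swing, a 2:1 pair at 50%.

```python
def slice_widths(n_slices, swing):
    """Width in degrees of each slice, for swing from 0.0 (straight)
    to 0.5, at which each odd-numbered slice is twice as wide as the
    even-numbered slice that follows it."""
    pair = 2 * 360 / n_slices                        # arc spanned by a slice pair
    long_frac = (1 + 2 * swing) / (2 * (1 + swing))  # long slice's share of the pair
    return [pair * (long_frac if i % 2 == 0 else 1 - long_frac)
            for i in range(n_slices)]

print(slice_widths(16, 0.0)[:2])  # uniform 22.5-degree wedges
print(slice_widths(16, 0.5)[:2])  # roughly 30 and 15 degrees: a 2:1 pair
```

Redrawing the grid from these widths on every slider change is what makes the swing visible in the background, not just in the events.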

Our swing visualization system also addresses the issue of whether swing should be applied to eighth notes or sixteenths. In the jazz era, swing was understood to apply to eighth notes. However, since the 1960s, swing is more commonly applied to sixteenth notes, reflecting a broader shift from eighth note to sixteenth note pulse in American vernacular music. To hear the difference, compare the swung eighth note pulse of “Rockin’ Robin” by Bobby Day (1958) with the sixteenth note pulse of “I Want You Back” by the Jackson Five (1969). Electronic music production tools like Ableton Live and Logic default to sixteenth-note swing. However, notation programs like Sibelius, Finale and Noteflight can only apply swing to eighth notes.

The Groove Pizza supports both eighth and sixteenth swing simply by changing the slice labeling. The default labeling scheme is agnostic, simply numbering the slices sequentially from one. In GP1, users can choose to label a sixteen-slice pizza either as one measure of sixteenth notes or two measures of eighth notes. The grid looks the same either way; only the labels change.

Drum kits

With one drum sound per ring, the number of sounds available to the user is limited by the number of rings that can reasonably fit on the screen. In my thesis prototype, we were able to accommodate six sounds per “drum kit.” GP1 was reduced to five rings, and GP2 has only three rings, prioritizing simplicity over musical versatility.

GP1 offers three drum kits: Acoustic, Hip-Hop, and Techno. The Acoustic kit uses samples of a real drum kit; the Hip-Hop kit uses samples of the Roland TR-808 drum machine; and the Techno kit uses samples of the Roland TR-909. GP2 adds two additional kits: Jazz (an acoustic drum kit played with brushes) and Afro-Latin (congas, bell, and shaker). Preset patterns automatically load with specific kits selected, but the user is free to change kits after loading.

In GP1, sounds can be mixed and matched at will, so the user can, for example, combine the acoustic kick with the hip-hop snare. In GP2, kits cannot be customized. A wider variety of sounds would present a wider variety of sonic choices. However, placing strict limits on the sounds available has its own creative advantage: it eliminates option paralysis and forces users to concentrate on creating interesting patterns, rather than struggling to choose from a long list of sounds.

It became clear in the course of testing that open and closed hi-hats need not occupy separate rings, since it is never desirable to have them sound at the same time. (While drum machines are not bound by the physical limitations of human drummers, our rhythmic traditions are.) In future versions of the GP, we plan to place closed and open hi-hats together on the same ring. Clicking a beat in the hi-hat ring will place a closed hi-hat; clicking it again will replace it with an open hi-hat; and a third click will return the beat to silence. We will use the same mechanic to toggle between high and low cowbells or congas.
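That click cycle is a tiny three-state machine, sketched here (the names are mine):

```python
# Planned hi-hat toggle: silence -> closed hat -> open hat -> silence.
STATES = [None, "closed", "open"]

def click(state):
    """Advance one hi-hat cell to its next state on each click."""
    return STATES[(STATES.index(state) + 1) % len(STATES)]

cell = None
cell = click(cell)  # "closed"
cell = click(cell)  # "open"
cell = click(cell)  # back to silence (None)
```

The same cycle generalizes to any pair of mutually exclusive sounds, such as high and low cowbells.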

Preset patterns

In keeping with the constructivist value of working with authentic cultural materials, the exercises in the Groove Pizza are based on rhythms drawn from actual music. Most of the patterns are breakbeats—drums and percussion sampled from funk, rock and soul recordings that have been widely repurposed in electronic dance and hip-hop music. There are also generic rock, pop and dance rhythms, as well as an assortment of traditional Afro-Cuban patterns.

The GP1 offers a broad selection of preset patterns. The GP2 uses a smaller subset of these presets.

Breakbeats

  • The Winstons, “Amen, Brother” (1969)
  • James Brown, “Cold Sweat” (1967)
  • James Brown, “The Funky Drummer” (1970)
  • Bobby Byrd, “I Know You Got Soul” (1971)
  • The Honeydrippers, “Impeach The President” (1973)
  • Skull Snaps, “It’s A New Day” (1973)
  • Joe Tex, “Papa Was Too” (1966)
  • Stevie Wonder, “Superstition” (1972)
  • Melvin Bliss, “Synthetic Substitution” (1973)

Afro-Cuban

  • Bembé—also known as the “standard bell pattern”
  • Rumba clave
  • Son clave (3-2)
  • Son clave (2-3)

Pop

  • Michael Jackson, “Billie Jean” (1982)
  • Boots-n-cats—a prototypical disco pattern, e.g. “Funkytown” by Lipps Inc (1979)
  • INXS, “Need You Tonight” (1987)
  • Uhnntsss—the standard “four on the floor” pattern common to disco and electronic dance music

Hip-hop

  • Lil Mama, “Lip Gloss” (2008)
  • Nas, “Nas Is Like” (1999)
  • Digable Planets, “Rebirth Of Slick (Cool Like Dat)” (1993)
  • OutKast, “So Fresh, So Clean” (2000)
  • Audio Two, “Top Billin’” (1987)

Rock

  • Pink Floyd, “Money” (1973)
  • Peter Gabriel, “Solsbury Hill” (1977)
  • Billy Squier, “The Big Beat” (1980)
  • Aerosmith, “Walk This Way” (1975)
  • Queen, “We Will Rock You” (1977)
  • Led Zeppelin, “When The Levee Breaks” (1971)

Jazz

  • Bossa nova, e.g. “The Girl From Ipanema” by Antônio Carlos Jobim (1964)
  • Herbie Hancock, “Chameleon” (1973)
  • Miles Davis, “It’s About That Time” (1969)
  • Jazz spang-a-lang—the standard swing ride cymbal pattern
  • Jazz waltz—e.g. “My Favorite Things” as performed by John Coltrane (1961)
  • Dizzy Gillespie, “Manteca” (1947)
  • Horace Silver, “Song For My Father” (1965)
  • Paul Desmond, “Take Five” (1959)
  • Herbie Hancock, “Watermelon Man” (1973)

Mathematical applications

The most substantial new feature of GP2 is “shapes mode.” The user can drag shapes onto the grid and rotate them to create geometric drum patterns: triangle, square, pentagon, hexagon, and octagon. Placing shapes in this way creates maximally even rhythms that are nearly always musically satisfying (Toussaint 2011). For example, on a sixteen-slice pizza, the pentagon forms rumba or bossa nova clave, while the hexagon creates a tresillo rhythm. As a general matter, the way that a rhythm “looks” gives insight into the way it sounds, and vice versa.
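Maximally even rhythms can be generated directly; Toussaint showed they coincide with Euclidean rhythms. Here is a minimal sketch (rounding-based rather than the Bjorklund algorithm, and my own construction, but it spaces the onsets maximally evenly): five hits on sixteen slices, the pentagon, lands on the bossa nova clave.

```python
def maximally_even(onsets, slices, rotation=0):
    """Distribute `onsets` hits as evenly as possible among `slices`
    grid positions by rounding the ideal fractional positions."""
    hits = {(round(i * slices / onsets) + rotation) % slices
            for i in range(onsets)}
    return [1 if s in hits else 0 for s in range(slices)]

# Five hits on sixteen slices: hits land on slices 0, 3, 6, 10, and 13
print(maximally_even(5, 16))
```

Rotating the shape corresponds to passing a nonzero `rotation`, which is exactly what dragging the polygon around the pizza does.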

Because of the way it uses circle geometry, the Groove Pizza can be used to teach or reinforce the following subjects:

  • Fractions
  • Ratios and proportional relationships
  • Angles
  • Polar vs Cartesian coordinates
  • Symmetry: rotations, reflections
  • Frequency vs duration
  • Modular arithmetic
  • The unit circle in the complex plane

Specific kinds of music can help to introduce specific mathematical concepts. For example, Afro-Cuban patterns and other grooves built on hemiola are useful for graphically illustrating the concept of least common multiples. When presented with a kick playing every four slices and a snare playing every three slices, a student can both see and hear how they will line up every twelve slices. Bamberger and diSessa (2003) describe the “aha” moment that students have when they grasp this concept in a music context. One student in their study is quoted as describing the twelve-beat cycle “pulling” the other two beats together. Once students grasp least common multiples in a musical context, they have a valuable new inroad into a variety of scientific and mathematical concepts: harmonics in sound analysis, gears, pendulums, tiling patterns, and much else.
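The arithmetic is simple to verify directly. In this sketch, a kick every four slices and a snare every three slices coincide only at multiples of their least common multiple:

```python
from math import lcm  # available in Python 3.9+

kick_period, snare_period = 4, 3
cycle = lcm(kick_period, snare_period)
print(cycle)  # 12: the two parts realign every twelve slices

together = [s for s in range(2 * cycle)
            if s % kick_period == 0 and s % snare_period == 0]
print(together)  # [0, 12]: simultaneous hits only at multiples of 12
```

Hearing the composite pattern repeat every twelve slices is the musical version of the “pulling together” that Bamberger and diSessa’s student described.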

In addition to eighth and sixteenth notes, GP1 users can also label the pizza slices as fractions or as angles, in either Cartesian or polar terms. Users can thereby describe musical concepts in mathematical terms, and vice versa: on a sixteen-slice pizza, each sixteenth note subtends a polar angle of 2π/16, or π/8. One could go even further with polar mode and use it as the unit circle on the complex plane. From there, lessons could move into powers of e, the relationship between sine and cosine waves, and other more advanced topics. The Groove Pizza could thereby be used to lay the groundwork for concepts in electrical engineering, signal processing, and anything else involving wave mechanics.
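A sketch of the alternative labelings (the exact label formatting is my own, not GP1’s):

```python
import math

def slice_labels(n_slices):
    """For each slice boundary: its fraction of the measure, and its
    angle in degrees and radians measured from the start of the bar."""
    return [(f"{i}/{n_slices}", i * 360 / n_slices, i * 2 * math.pi / n_slices)
            for i in range(n_slices)]

frac, deg, rad = slice_labels(16)[1]
print(frac, deg)            # 1/16 22.5 -- one sixteenth note into the bar
print(rad == math.pi / 8)   # True: a sixteenth note subtends pi/8 radians
```

Switching label modes changes only this presentation layer; the underlying grid is identical.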

Future work

The Groove Pizza does not offer any tone controls like duration, pitch, EQ and the like. This choice was due to a combination of expediency and the push to reduce option paralysis. However, velocity (loudness) control is a high-priority future feature. While nuanced velocity control is not necessary for the artificial aesthetic of electronic dance music, a basic loud/medium/soft toggle would make the Groove Pizza a more versatile tool.

The next step beyond preset patterns is to offer drum programming exercises or challenges. In exercises, users are presented with a pattern. They may alter this pattern as they see fit by adding and removing drum hits, and by rotating instrument parts within their respective rings. There are constraints of various kinds to ensure that the results are appealing and musical-sounding. The constraints are tighter for more basic exercises, and looser for more advanced ones. For example, we might present users with a locked four-on-the-floor kick pattern, and ask them to create a satisfying techno beat using the snares and hi-hats. We also plan to create game-like challenges, where users are given the sound of a beat and must figure out how to represent it on the circular grid.

The Groove Pizza would be more useful for the purposes of trigonometry and circle geometry if it were presented slightly differently. Presently, the first beat of each pattern is at twelve o’clock, with playback running clockwise. However, angles are usually represented as originating at three o’clock and increasing in a counterclockwise direction. To create “math mode,” the radial grid would need to be reflected left-to-right and rotated ninety degrees.
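The conversion between the two conventions amounts to a single reflection plus rotation, sketched here (the function name is mine):

```python
def math_mode_angle(slice_index, n_slices):
    """Convert a Groove Pizza slice position (beat one at twelve o'clock,
    running clockwise) to a standard math-convention angle in degrees
    (zero at three o'clock, increasing counterclockwise)."""
    clockwise_from_top = slice_index * 360 / n_slices
    return (90 - clockwise_from_top) % 360

print(math_mode_angle(0, 16))  # 90.0: beat one sits at twelve o'clock
print(math_mode_angle(4, 16))  # 0.0: a quarter turn later, at three o'clock
```

Negating the angle performs the reflection, and adding 90 degrees performs the rotation, so both steps collapse into one formula.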

References

Ankney, K.L. (2012). Alternative representations for musical composition. Visions of Research in Music Education, 20.

Bamberger, J., & DiSessa, A. (2003). Music As Embodied Mathematics: A Study Of A Mutually Informing Affinity. International Journal of Computers for Mathematical Learning, 8(2), 123–160.

Bamberger, J. (1996). Turning Music Theory On Its Ear. International Journal of Computers for Mathematical Learning, 1: 33-55.

Bamberger, J. (1994). Developing Musical Structures: Going Beyond the Simples. In R. Atlas & M. Cherlin (Eds.), Musical Transformation and Musical Intuition. Ovenbird Press.

Barth, E. (2011). Geometry of Music. In Greenwald, S. and Thomley, J., eds., Essays in Encyclopedia of Mathematics and Society. Ipswich, MA: Salem Press.

Bell, A. (2013). Oblivious Trailblazers: Case Studies of the Role of Recording Technology in the Music-Making Processes of Amateur Home Studio Users. Doctoral dissertation, New York University.

Benadon, F. (2007). A Circular Plot for Rhythm Visualization and Analysis. Music Theory Online, Volume 13, Issue 3.

Demaine, E.; Gomez-Martin, F.; Meijer, H.; Rappaport, D.; Taslakian, P.; Toussaint, G.; Winograd, T.; & Wood, D. (2009). The Distance Geometry of Music. Computational Geometry 42, 429–454.

Forth, J.; Wiggin, G.; & McLean, A. (2010). Unifying Conceptual Spaces: Concept Formation in Musical Creative Systems. Minds & Machines, 20:503–532.

Magnusson, T. (2010). Designing Constraints: Composing and Performing with Digital Musical Systems. Computer Music Journal, Volume 34, Number 4, pp. 62 – 73.

Marrington, M. (2011). Experiencing Musical Composition In The DAW: The Software Interface As Mediator Of The Musical Idea. The Journal on the Art of Record Production, (5).

Marshall, W. (2010). Mashup Poetics as Pedagogical Practice. In Biamonte, N., ed. Pop-Culture Pedagogy in the Music Classroom: Teaching Tools from American Idol to YouTube. Lanham, MD: Scarecrow Press.

McClary, S. (2004). Rap, Minimalism and Structures of Time in Late Twentieth-Century Culture. In Warner, D. ed., Audio Culture. London: Continuum International Publishing Group.

Monson, I. (1999). Riffs, Repetition, and Theories of Globalization. Ethnomusicology, Vol. 43, No. 1, 31-65.

New York State Learning Standards and Core Curriculum — Mathematics

Ruthmann, A. (2012). Engaging Adolescents with Music and Technology. In Burton, S. (Ed.). Engaging Musical Practices: A Sourcebook for Middle School General Music. Lanham, MD: R&L Education.

Thibeault, M. (2011). Wisdom for Music Education from the Recording Studio. General Music Today, 20 October 2011.

Thompson, P. (2012). An Empirical Study Into the Learning Practices and Enculturation of DJs, Turntablists, Hip-Hop and Dance Music Producers. Journal of Music, Technology & Education, Volume 5, Number 1, 43 – 58.

Toussaint, G. (2013). The Geometry of Musical Rhythm. Cleveland: Chapman and Hall/CRC.

____ (2005). The Euclidean algorithm generates traditional musical rhythms. Proceedings of BRIDGES: Mathematical Connections in Art, Music, and Science, Banff, Alberta, Canada, July 31 to August 3, 2005, pp. 47-56.

____ (2004). A comparison of rhythmic similarity measures. Proceedings of ISMIR 2004: 5th International Conference on Music Information Retrieval, Universitat Pompeu Fabra, Barcelona, Spain, October 10-14, 2004, pp. 242-245.

____ (2003). Classification and phylogenetic analysis of African ternary rhythm timelines. Proceedings of BRIDGES: Mathematical Connections in Art, Music, and Science, University of Granada, Granada, Spain July 23-27, 2003, pp. 25-36.

____ (2002). A mathematical analysis of African, Brazilian, and Cuban clave rhythms. Proceedings of BRIDGES: Mathematical Connections in Art, Music and Science, Townson University, Towson, MD, July 27-29, 2002, pp. 157-168.

Whosampled.com. “The 10 Most Sampled Breakbeats of All Time.”

Wiggins, J. (2001). Teaching for musical understanding. Rochester, Michigan: Center for Applied Research in Musical Understanding, Oakland University.

Wilkie, K.; Holland, S.; & Mulholland, P. (2010). What Can the Language of Musicians Tell Us about Music Interaction Design? Computer Music Journal, Vol. 34, No. 4, 34-48.

Inside the aQWERTYon

The MusEDLab and Soundfly just launched Theory For Producers, an interactive music theory course. The centerpiece of the interactive component is a MusEDLab tool called the aQWERTYon. You can try it by clicking the image below.

aQWERTYon screencap

In this post, I’ll talk about why and how we developed the aQWERTYon.

One of our core design principles is to work within our users’ real-world technological limitations. We build tools in the browser so they’ll be platform-independent and accessible anywhere there’s internet access (and where there isn’t internet access, we’ve developed the “MusEDLab in a box.”) We want to find out what musical possibilities there are in a typical computer with no additional software or hardware. That question led us to investigate ways of turning the standard QWERTY keyboard into a beginner-friendly instrument. We were inspired in part by GarageBand’s Musical Typing feature.

GarageBand musical typing

If you don’t have a MIDI controller, Apple thoughtfully made it possible for you to use your computer keyboard to play GarageBand’s many software instruments. You get an octave and a half of piano, plus other useful controls: pitch bend, modulation, sustain, octave shifting and simple velocity control. Many DAWs offer something similar, but Apple’s system is the most sophisticated I’ve seen.

Handy though it is, Musical Typing has some problems as a user interface. The biggest one is the poor fit between the piano keyboard layout and the grid of computer keys. Typing the letter A plays the note C. The rest of that row is the white keys, and the one above it is the black keys. You can play the chromatic scale by alternating A row, Q row, A row, Q row. That basic pattern is easy enough to figure out. However, you quickly get into trouble, because there’s no black key between E and F. The QWERTY keyboard gives no visual reminder of that fact, so you just have to remember it. Unfortunately, the “missing” black key happens to be the letter R, which is GarageBand’s keyboard shortcut for recording. So what inevitably happens is that you’re hunting for E-flat or F-sharp and you accidentally start recording over whatever you were doing. I’ve been using the program for years and still do this routinely.
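From memory, the Musical Typing layout looks roughly like the mapping below. This is my reconstruction and may not match GarageBand exactly, but it shows how the gap at R arises:

```python
# White keys along the home row, black keys on the row above.
WHITE_KEYS = {"A": "C", "S": "D", "D": "E", "F": "F",
              "G": "G", "H": "A", "J": "B", "K": "C"}
BLACK_KEYS = {"W": "C#", "E": "D#", "T": "F#", "Y": "G#", "U": "A#"}
# "R" is absent: there is no black key between E and F, and
# GarageBand uses R as its record shortcut instead.

def note_for(key):
    """Pitch name for a typed key, or None if the key plays nothing."""
    key = key.upper()
    return WHITE_KEYS.get(key) or BLACK_KEYS.get(key)

print(note_for("a"))  # C
print(note_for("r"))  # None -- the "missing" black key
```

Because the gap exists only in the music, not on the keyboard, nothing warns your fingers before they hit R.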

Rather than recreating the piano keyboard on the computer, we drew on a different metaphor: the accordion.

The accordion: the user interface metaphor of the future!

We wanted to have chords and scales arranged in an easily discoverable way, like the way you can easily figure out the chord buttons on the accordion’s left hand. The QWERTY keyboard is really a staggered grid four keys tall and between ten and thirteen keys wide, plus assorted modifier and function keys. We decided to use the columns for chords and the rows for scales.

For the diatonic scales and modes, the layout is simple. The bottom row gives the notes in the scale starting on scale degree 1. The second row has the same scale shifted over to start on degree 3. The third row starts the scale on degree 5, and the top row starts on degree 1 an octave up. If this sounds confusing when you read it, try playing it; your ears will immediately pick up the pattern. Notes in the same column form the diatonic chords, with their Roman numerals conveniently matching the number keys. There are no wrong notes, so even just mashing keys at random will sound at least okay. Typing your name usually sounds pretty cool, and picking out melodies is a piece of cake. Playing diagonal columns, like Z-S-E-4, gives you chords voiced in fourths. The same layout approach works great for any seven-note scale: all of the diatonic modes, plus the modes of harmonic and melodic minor.
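Here is that layout logic in code. The note names, function names, and ten-key row width are my assumptions; the point is just how stacking the same scale at offsets 1, 3, 5, and the octave makes every column a diatonic chord:

```python
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole/half steps of the major scale
NOTE_NAMES = ["C","C#","D","D#","E","F","F#","G","G#","A","A#","B"]

def scale_pitches(root_midi, steps, count):
    """`count` successive scale pitches (as MIDI numbers) from a root."""
    pitches, p = [], root_midi
    for i in range(count):
        pitches.append(p)
        p += steps[i % len(steps)]
    return pitches

def aqwertyon_rows(root_midi=60, row_len=10):
    """Four keyboard rows: the scale starting on degrees 1, 3, 5,
    and degree 1 an octave up. Each column then spells a chord."""
    scale = scale_pitches(root_midi, MAJOR_STEPS, row_len + 14)
    return [scale[offset:offset + row_len] for offset in (0, 2, 4, 7)]

rows = aqwertyon_rows()
# Column 0 (keys Z, A, Q, 1) is the I chord:
print([NOTE_NAMES[p % 12] for p in (r[0] for r in rows)])  # ['C', 'E', 'G', 'C']
```

Changing the root or the step pattern regenerates the whole grid, which is why root and scale selection come almost for free in this design.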

Pentatonics work pretty much the same way as seven-note scales, except that the columns stack in fourths rather than fifths. The octatonic and diminished scales lay out easily as well. The real layout challenge lay in one strange but crucial exception: the blues scale. Unlike other scales, you can’t just stagger the blues scale pitches in thirds to get meaningful chords. The melodic and harmonic components of blues are more or less unrelated to each other. Our original idea was to put the blues scale on the bottom row of keys, and then use the others to spell out satisfying chords on top. That made it extremely awkward to play melodies, however, since the keys don’t form an intelligible pattern of intervals. Our compromise was to create two different blues modes: one with the chords, for harmony exploration, and one just repeating the blues scale in octaves for melodic purposes. Maybe a better solution exists, but we haven’t figured it out yet.

When you select a different root, all the pitches in the chords and scales are automatically changed as well. Even if the aQWERTYon had no other features or interactivity, this would still make it an invaluable music theory tool. But root selection raises a bigger question: what do you do about all the real-world music that uses more than one scale or mode? Totally uniform modality is unusual, even in simple pop songs. You can access notes outside the currently selected scale by pressing the shift keys, which transposes the entire keyboard up or down a half step. But what would be really great is if we could get the scale settings to change dynamically. Wouldn’t it be great if you were listening to a jazz tune, and the scale was always set to match whatever chord was going by at that moment? You could blow over complex changes effortlessly. We’ve discussed manually placing markers in YouTube videos that tell the aQWERTYon when to change its settings, but that would be labor-intensive. We’re hoping to discover an algorithmic method for placing markers automatically.

The other big design challenge we face is how to present all the different scale choices in a way that doesn’t overwhelm our core audience of non-expert users. One solution would just be to limit the scale choices. We already do that in the Soundfly course, in effect; when you land on a lesson, the embedded aQWERTYon is preset to the appropriate scale and key, and the user doesn’t even see the menus. But we’d like people to be able to explore the rich sonic diversity of the various scales without confronting them with technical Greek terms like “Lydian dominant”. Right now, the scales are categorized as Major, Minor and Other, but those terms aren’t meaningful to beginners. We’ve been discussing how we could organize the scales by mood or feeling, maybe from “brightest” to “darkest.” But how do you assign a mood to a scale? Do we just do it arbitrarily ourselves? Crowdsource mood tags? Find some objective sorting method that maps onto most listeners’ subjective associations? Some combination of the above? It’s an active area of research for us.

This issue of categorizing scales by mood has relevance for the original use case we imagined for the aQWERTYon: teaching film scoring. The idea behind the integrated video window was that you would load a video clip, set a mode, and then improvise some music that fit the emotional vibe of that clip. The idea of playing along with YouTube videos of songs came later. One could teach more general open-ended composition with the aQWERTYon, and in fact our friend Matt McLean is doing exactly that. But we’re attracted to film scoring as a gateway because it’s a more narrowly defined problem. Instead of just “write some music”, the challenge is “write some music with a particular feeling to it that fits into a scene of a particular length.”

Would you like to help us test and improve the aQWERTYon, or to design curricula around it? Would you like to help fund our programmers and designers? Please get in touch.