Hip-hop teaches confidence lessons

I’m working on a paper about music education and hip-hop, and I’m going to use this post to work out some thoughts.

My wife and I spent our rare date night going to see Black Panther at BAM. It was uplifting. Many (most?) black audience members came dressed in full Afrofuturistic splendor. A group of women in our section were especially decked out:

Black Panther audience members at BAM

I was admiring their outfits and talking about how I wasn’t expecting such an emotional response to the movie. One of the women said it was as big a deal for them as the election of Barack Obama in 2008. I know representation is important, but this seems like it’s more than just seeing black faces on the movie screen. Black Twitter is talking about how this movie is different because it isn’t about overcoming historical pain or present-day hardship; it’s about showing black people as powerful, rich, technologically advanced, and above all, serenely confident.

Black Panther is heavily overdetermined, like all superhero movies. But I’m especially interested in the way we could read it as a metaphor for music, with the Wakandans representing African musical traditions and Erik Killmonger representing the global rise of hip-hop. I see Killmonger this way not only because he’s American, but because so many of his qualities and mannerisms remind me of the role of hip-hop in the public imagination. He’s stylish, effortlessly charismatic, and seemingly indifferent to anyone else’s approval. He’s funny, too, not in the warm and good-natured way that Shuri is, but in a more aggressive and sarcastic way. He’s both arrogant and vulnerable, using implacable cool to conceal deep hurt. And he wants to remake the world by fomenting black revolution, by any means necessary. The Wakandans, meanwhile, are uncomplicatedly strong, self-possessed, and at ease with their own power. But they are also withdrawn from the world, fearing that getting involved in other people’s struggles will destroy what makes their culture so unique and beautiful.

I want to emphasize that this reading is based solely on my watching the movie and reading Twitter. I have no special insight into the writers’ or actors’ intentions. But they do seem to be saying something about how the African diaspora in America has attained global reach and influence while also registering the malign effects of capitalism and imperialist violence. It’s significant that Killmonger isn’t just a criminal capitalist like Klaue; he honed his murder chops as a member of the US military. The American empire taught him how to kill mercilessly, and now he wants to use that same force to bring the empire down. I’m thinking here about the Public Enemy poster in his dad’s Oakland apartment, the one with the crosshairs. I was terrified of Public Enemy back in the late 80s, as I’m sure was the point of their imagery.

I am not a moralist about hip-hop’s violent content. I don’t believe that portraying something is the same thing as endorsing it, or that listening to music directly causes antisocial behavior. It’s too easy to blame rappers for being bad influences while giving a pass to The Sopranos or Breaking Bad. The only difference between Walter White and any gangsta rapper’s persona is whiteness. But just like I wouldn’t let my young children watch Breaking Bad, I’m not eager to have them listen to Lil Wayne either. And it’s going to be difficult to explain and contextualize all the harder rap songs in my iTunes library when the time comes (though I guess no harder than explaining why I love violent prestige cable dramas).

I spend so much time defending hip-hop from its detractors that I haven’t given a lot of thought to why I think it’s so beautiful and great. Usually when I do, I point to formal aspects of the music: the grooves, the hypnotic quality of electronic beats, the intertextuality and timbral invention of sample-based production, and the spectacular verbal and vocal virtuosity of the best emcees. But there are more basic emotional reasons why I’m a hip-hop fan. When I listen to the music, I hear effortless cool, the power that comes from strong emotions held in reserve, and a defiant sense of pride. I hear confidence, and that is a quality I have been severely deficient in for most of my life. I have become more confident as I’ve gotten older, but when I was younger I was desperately awkward and socially anxious, and that part of me is never far from the surface. I need swagger lessons, and hip-hop is an excellent teacher. I am not unusual among white rap fans in feeling this way.

It’s totally weird that the wealthiest and most powerful population of humans in history should be so unsure of ourselves, and it’s equally surprising that we should be looking to the musical expression of our country’s most marginalized and oppressed minority group for help. All of America’s popular music has its origins in the African diaspora, but hip-hop is remarkable for the fact that most of its prominent and commercially successful artists are black. Imagine if the Roma utterly dominated Europe’s musical culture. There are plenty of Europeans who love Django Reinhardt, but not the way that Americans love Kanye West. I’m sure white Americans listen to rap for all kinds of reasons. But I believe that many of us are mostly drawn to it for confidence lessons.

I teach in a couple of music schools, and if I had to pick one adjective to describe the students, “confident” would not be it. Last spring, I was present for two recording sessions in NYU’s James Dolan Studio on two successive days. The Friday session was with NYU undergrads in my Music Education Technology Practicum class, a crash course in audio production for future music teachers. The Saturday session was with CORE (formerly known as Ed Sullivan Fellows), a community mentorship program for young rappers and producers. There were some stark socioeconomic differences between the two groups. NYU music education students are mostly white and Asian, and they tend to come from privileged backgrounds. They are mostly classical musicians, with a small minority playing jazz. The CORE members are nearly all black and Latinx, and are uniformly of low SES. They are almost all rappers or beatmakers, though some also work in the singer-songwriter or R&B idioms. Everyone in both sessions was recording material of their own choice, but while the NYU students all chose existing repertoire (classical pieces, jazz standards, musical theater songs), the rappers’ music was all original. I might naively have expected the NYU students to be confident and the rappers to be nervous, since the NYU students were “on their own turf,” while the rappers were in a new and unfamiliar environment. But the opposite turned out to be true.

During the NYU students’ session, the anxiety in the room was palpable. Recording can be stressful under the best of circumstances—the environment is daunting and clinical, like being under a microscope, and the clock is always ticking. But this was more than performance anxiety; one of the students was on the verge of panic just sitting and listening in the control room. The next day, by contrast, I was surprised to find that the rap kids evinced little to no anxiety whatsoever. They were similarly new to the studio, and under the same pressures, but if anyone felt any nerves, they didn’t show it. The atmosphere was casual and relaxed, even to a fault. A greater sense of urgency might have made for a more productive session. But anxiety was no obstacle. This was all the more remarkable given that they were recording originals. Instead of making them nervous about exposing their own feelings and ideas, it apparently added to their confidence.

The CORE kids are sometimes shy about opening up their material to scrutiny, especially if they consider it to be unfinished. But they will perform or play back finished work with remarkably little hesitation for their age. I wasn’t willing to play my original songs for people until deep into my twenties, and I wasn’t willing to sing them myself until my thirties. Meanwhile, the most proficient CORE emcees are sure enough of themselves to effortlessly freestyle in front of an audience. I have never in my life had the courage to do that.

Shamus Khan’s Privilege is a study of the ease taught by elite schools to their students. He argues that traditional markers of upper-class status like tailored suits or a taste for classical music no longer function; in an era of (supposed) meritocracy, the elite must prove that they deserve their privilege because of their talents, abilities, and hard work. “Class” can be learned by anyone, but ease has to be carefully enculturated over time. I mention all of this because the third chapter of the book begins with an epigraph by Jay-Z, from T.I.’s song “Swagga Like Us”:

But I can’t teach you my swag
You can pay for school but you can’t buy class

The whole point of Khan’s book is that the One Percent use exclusive institutions like St Paul’s to reproduce their privilege across generations. So what is Jay-Z doing in the book? He might be a member of the elite now, but he certainly wasn’t born to it. Khan talks about the way that white St Paul’s students treat POC as arbiters of cultural prestige, which is synonymous with authenticity. To be a real member of the elite, you can’t be a snob; now you have to be an omnivore, in touch with “common people’s” music, and that means hip-hop. You have to both know Jay-Z’s music and be able to emulate his swagger if you want to grow up to run the country.

I’m planning to devote my dissertation research to hip-hop educators, to the ways that they think about preparing the next generation of artists, and to the ways that their approach differs from traditional music pedagogy. In particular, I’m interested in the improvisation-centered approach of Toni Blackman. Of all the mentors involved with the CORE program, Toni has the most unusual resume. She is the first Hip-Hop Cultural Envoy with the State Department, and has traveled to forty-six countries to give talks and perform. She has been a teaching artist for a variety of other institutions as well, ranging from the Soros Foundation to local community groups. Toni has a particular method based on the cypher, a circle of emcees in which everyone takes turns freestyling. Toni uses the cypher as a way to help her students develop not just their flow, but their emotional well-being. In person, she has the calm, attentive affect of a good therapist, which is effectively what she is. I was unsurprised to learn that Toni does public speaking coaching for politicians and businesspeople as her “day job”—she is a professional teacher of confidence, inside or outside the context of hip-hop.

Etymology Online tells me that the word “confidence” comes from the Latin confidentem, meaning “firmly trusting” or “bold.” A confident person inspires “full trust or reliance.” This certainly describes Toni. At her keynote talk at last summer’s NYU IMPACT Conference, she wanted to do some freestyling, as she does in all of her presentations. She asked someone in the audience to come up and beatbox for her. It was 9:30 in the morning and no one was jumping to volunteer, so I finally raised my hand. I had never beatboxed in public before, but Toni knows how to empower people, even nerdy white dads. It felt great up there, effortless in fact, like all peak music experiences do. I was up there to earn Toni’s approval, while simultaneously feeling like I already had it, just for sticking my neck out and performing. If I ever have the courage to do a cypher, it will probably be under Toni’s leadership.

During the same conference, the CORE participants did a showcase concert. It was mostly the kids doing their own songs, along with appearances by a few mentors and pros. The concert began with a cypher: everyone in the concert came onstage, and while the band put down a groove, they took turns freestyling verses. I struggle to imagine a group of conservatory students beginning a recital by all improvising a piece off the tops of their heads, but the CORE kids pulled it off with effortless cool. I still remember one verse in its entirety. It was by Lady Logic, who is a bit older than most of the other CORE participants, but still very young. She rapped:

I’m begging your pardon, ain’t no snakes in my garden
I’m begging your pardon, ain’t no snakes in my garden
I’m begging your pardon, ain’t no snakes in my garden
I’m begging your pardon, ain’t no snakes in my garden

She didn’t come up with this line off the top of her head; I was told later that it’s something she has used in verses before. But she had the audacity to stand up there and just repeat it four times. It didn’t sound like she couldn’t think of anything else to say; it sounded like she knew the right line to use, and that it would only get better and more impactful with repetition. And she was right: it slayed.

Most music educators believe themselves to be teaching confidence. But very often, they are trying to force kids to make particular kinds of music that are remote from the kids’ own interests and sensibilities. I recently had two white music teachers from a majority-black school visit my music technology class at Montclair State University. My lesson that day was on drum programming, on what makes a good beat. In a semi-joking tone, I warned the class that I was going to make a racist generalization, that Europeans like music that’s harmonically interesting and rhythmically boring, while Africans like music that’s rhythmically interesting and harmonically boring. After class, the older of the two visiting teachers wanted to talk to me about that comment. He leads his school’s chorus, and they sing Christmas carols around the school every year. While they were singing “Angels We Have Heard On High,” the girls in the chorus kept trying to add a beat by stomping and clapping. I was about to say what a great idea that was, when he said, “Of course I made them stop. I mean, ‘Angels We Have Heard On High’ with a dubstep beat?” He meant to commiserate with me about how rhythm-obsessed black students are, and how hard it is to get them to focus on making music the “right” way. A version of this interaction plays out in music classrooms across America every day.

The CORE program is run by Jamie Ehrenfeld, a graduate of NYU’s music education program, who now teaches at Eagle Academy, an all-boys school in Brownsville. Like me, she had a left-wing Jewish upbringing with a strong social justice component. Most of the CORE participants are Eagle students whom she recruited, or their friends. One is Keith (not his real name), a tall, quiet kid with a serious demeanor. He raps a little, but his main interest is beatmaking. Since finishing high school, he has been camped out in different studio spaces and computer labs at NYU, assiduously teaching himself Logic and making tracks. I’m interested in learning more about his creative process. One afternoon recently, Keith was hanging out in the Music Experience Design Lab office with Jamie, and I had a chance to talk to him at length.

I have a general idea how Keith learned his musical skills: informally, socially, along with his peers. However, I was curious whether he had any more formal experience, in school or church or privately. At first he said no, but after some prompting he mentioned that he played in a steel pan ensemble with his dad, who is Trinidadian. I responded that steel pan counts. But Keith has that side of his musical life compartmentalized; it belongs to his dad, while beatmaking is all his own. I’d love to listen to Keith’s tracks in progress, and ask him about his creative choices at a granular level. But this is going to require building up more of a relationship with him. I figured I would start somewhere less sensitive, by asking about his favorite artists. He immediately mentioned Chance the Rapper, who is popular with other CORE participants too. Keith also likes Kendrick Lamar, but that’s like a rock fan saying they like the Beatles; it’s not a distinctive or interesting preference. Keith didn’t offer any more names until Jamie prodded him to bring up Mali Music (an American singer, not a national genre), and “Bust Your Windows” by Jazmine Sullivan. This is all music that Jamie described to me as being “for the cookout,” songs you play when your grandmother and little brother are present. Chance is perfect cookout music, what with his rhymes about “soil as soft as Mama’s hands.”

Keith and his friends also like a lot of music that’s not suitable for the cookout, that’s full of guns, drugs, and sex. After he left to go make beats, Jamie told me about some other rappers that he and his friends listen to, like 22 Gz and Nas Blixky. This is the most commercially successful kind of hip-hop at the moment, and it’s the kind that cultural conservatives blame for corrupting our nation’s youth. Some hip-hop heads are dismayed by it too. Tricia Rose blames commercial pressures for emphasizing the most destructive aspects of the music, and suppressing its consciousness-raising aspects.

By ignoring the extraordinary commercial penetration of hip-hop, and I use that word advisedly … what we’ve allowed to happen is to render meaningful criticism of the commercial takeover of a black cultural form designed not only to liberate, but to create critical consciousness and turned it into the cultural arm of predatory capitalism in the last thirty years.

Toni Blackman isn’t thrilled about misogynistic and violent lyrics, either, but she understands those songs’ appeal. She has described a particularly appalling Lil Wayne song as being “meditative,” “trance-like,” and “addictive.” I feel the contradiction too, both attracted to and repelled by the hardest edges of rap. For example, I feel equal parts awe and horror at “Got Your Money” by Ol’ Dirty Bastard, which includes this lyric:

I don’t have no trouble with you fucking me
But I have a little problem with you not fucking me

I choose to find that line funny, which helps me feel better about the fact that I walk around involuntarily repeating it to myself on a regular basis. Hip-hop has mostly been a youth music so far, and like all American youth musics, one of its purposes is to shock authority figures. As authority figures get harder to shock, musicians have to up their rhetorical firepower. It takes confidence to defy authority. There’s a ridiculous amount of cognitive dissonance involved in a privileged white person like me listening to music that was designed to help non-privileged non-white people cope with being oppressed by the likes of me. I’m hoping to use my dissertation to get out of my own head on these issues, and learn to see them more from rappers’ own perspectives.

Design for Real Life – QWERTYBeats research

Writing assignment for Design For The Real World with Claire Kearney-Volpe and Diana Castro – research about a new rhythm interface for blind and low-vision novice musicians

Definition

I propose a new web-based accessible rhythm instrument called QWERTYBeats. Traditional instruments are highly accessible to blind and low-vision musicians. Electronic music production tools are not. I look at the history of accessible instruments and software interfaces, give an overview of current electronic music hardware and software, and discuss the design considerations underlying my project.

QWERTYBeats logo

Historical overview

Acoustic instruments give rich auditory and haptic feedback, and pose little obstacle to blind musicians. We need look no further for proof than the long history of iconic blind musicians like Ray Charles and Stevie Wonder. Even sighted instrumentalists rarely look at their instruments once they have attained a sufficient level of proficiency. Print music notation is not accessible, but Braille music notation has existed since the script’s inception. Also, a great many musicians both blind and sighted play entirely by ear anyway.

Most of the academic literature around accessibility issues in music education focuses on wider adoption of and support for Braille notation. See, for example, Rush, T. W. (2015). Incorporating Assistive Technology for Students with Visual Impairments into the Music Classroom. Music Educators Journal, 102(2), 78–83. For electronic music, notation is rarely if ever a factor.

Electronic instruments pose some new accessibility challenges. They may use graphical interfaces with nested menus, complex banks of knobs and patch cables, and other visual control surfaces. Feedback may be given entirely with LED lights and small text labels. Nevertheless, blind users can master these devices with sufficient practice, memorization and assistance. For example, Stevie Wonder has incorporated synthesizers and drum machines in most of his best-known recordings.

Most electronic music creation is currently done not with instruments, but rather using specialized software applications called digital audio workstations (DAWs). Keyboards and other controllers are mostly used to access features of the software, rather than as standalone instruments. The most commonly-used DAWs include Avid Pro Tools, Apple Logic, Ableton Live, and Steinberg Cubase. Mobile DAWs are more limited than their desktop counterparts, but are nevertheless becoming robust music creation tools in their own right. Examples include Apple GarageBand and Steinberg Cubasis. Notated music is commonly composed using score editing software like Sibelius and Finale, whose functionality increasingly overlaps with DAWs, especially in regard to MIDI sequencing.

DAWs and notation editors pose steep accessibility challenges due to their graphical and spatial interfaces, not to mention their sheer complexity. In class, we were given a presentation by Leona Godin, a blind musician who records and edits audio using Pro Tools by means of VoiceOver. While it must have taken a heroic effort on her part to learn the program, Leona demonstrates that it is possible. However, some DAWs pose insurmountable problems even to very determined blind users because those programs do not use standard operating system interface elements, making them inaccessible to screen readers.

Technological interventions

There are no mass-market electronic interfaces specifically geared toward blind or low-vision users. In this section, I discuss one product frequently hailed for its “accessibility” in the colloquial rather than blindness-specific sense, along with some more experimental and academic designs.

Ableton Push

Push layout for IMPACT Faculty Showcase

Ableton Live has become the DAW of choice for electronic music producers. Low-vision users can zoom in to the interface and modify the color scheme. However, Live is inaccessible via screen readers.

In recent years, Ableton has introduced a hardware controller, the Push, which is designed to make the software experience more tactile and instrument-like. The Push combines an eight-by-eight grid of LED-lit touch pads with banks of knobs, buttons and touch strips. It makes it possible to create, perform and record a piece of music from scratch without looking at the computer screen. In addition to drum programming and sampler performance, the Push also has an innovative melodic mode which maps scales onto the grid in such a way that users cannot play a wrong note. Other comparable products exist; see, for example, the Native Instruments Maschine.

There are many pad-based drum machines and samplers. Live’s main differentiator is its Session view, where the pads launch clips: segments of audio or MIDI that can vary in length from a single drum hit to an entire song. Clip launching is tempo-synced, so when you trigger a clip, playback is delayed until the start of the next measure (or whatever the quantization interval is). Clip launching is a forgiving and beginner-friendly performance method, because it removes the possibility of playing something out of rhythm. Like other DAWs, Live also gives rhythmic scaffolding in its software instruments by means of arpeggiators, delay and other tempo-synced features.
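To make the mechanism concrete, here is a minimal sketch of quantized launching in JavaScript with the Web Audio API. This is my own illustration rather than Ableton’s code, and all of the names (launchQuantized, sessionStart) are invented:

```javascript
// Minimal sketch of quantized clip launching (illustrative, not Ableton's code).
// The clip does not start when the pad is pressed; it starts at the next bar line.
const ctx = new AudioContext();
const bpm = 120;
const beatsPerBar = 4;
const barLength = (60 / bpm) * beatsPerBar; // seconds per bar

// sessionStart: the AudioContext time at which bar 1 began
function launchQuantized(buffer, sessionStart) {
  const elapsed = ctx.currentTime - sessionStart;
  // Round up to the next bar boundary, so entrances can never be off-grid.
  const nextBar = sessionStart + Math.ceil(elapsed / barLength) * barLength;
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(nextBar);
}
```

The same rounding trick applied at a finer interval gives the tighter quantization settings.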

The Push is a remarkable interface, but it has some shortcomings for blind users. First of all, it is expensive, $800 for the entry-level version and $1400 for the full-featured software suite. Much of its feedback is visual, in the form of LED screens and color-coded lighting on the pads. It switches between multiple modes which can be challenging to distinguish even for sighted users. And, like the software it accompanies, the Push is highly complex, with a steep learning curve unsuited to novice users, blind or sighted.

The aQWERTYon

Most DAWs enable users to perform MIDI instruments on the QWERTY keyboard. The most familiar example is the Musical Typing feature in Apple GarageBand.

GarageBand musical typing

Musical Typing makes it possible to play software instruments without an external MIDI controller, which is convenient and useful. However, its layout counterintuitively follows the piano keyboard, which is an awkward fit for the computer keyboard. There is no easy way to distinguish the black and white keys, and even expert users find themselves inadvertently hitting the keyboard shortcut for recording while hunting for F-sharp.

The aQWERTYon is a web interface developed by the NYU Music Experience Design Lab specifically intended to address the shortcomings of Musical Typing.

aQWERTYon screencap

Rather than emulating the piano keyboard, the aQWERTYon draws its inspiration from the chord buttons of an accordion. It fills the entire keyboard with harmonically related notes in a way that supports discovery by naive users. Specifically, it maps scales across the rows of keys, staggered by intervals such that each column forms a chord within the scale. Root notes and scales can be set from pulldown menus within the interface, or preset using URL parameters. It can be played as a standalone instrument, or as a MIDI controller in conjunction with a DAW. Here is a playlist of music I created using the aQWERTYon and GarageBand or Ableton Live:

The aQWERTYon is a completely tactile experience. Sighted users can carefully match keys to note names using the screen, but more typically approach the instrument by feel, seeking out patterns on the keyboard by ear. A blind user would need assistance loading the aQWERTYon initially and setting the scale and root note parameters, but otherwise, it is perfectly accessible. The present project was motivated in large part by a desire to make exploration of rhythm as playful and intuitive as the aQWERTYon makes exploring chords and scales.
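For the technically curious, the row-and-column logic described above fits in a few lines of JavaScript. This is my reconstruction of the idea rather than the MusEDLab’s actual source, and the two-degree stagger per row is an assumption; it happens to make each column stack thirds:

```javascript
// Sketch of an aQWERTYon-style mapping: each row carries the scale, and each
// row is offset by two scale degrees from the row below, so that every
// column stacks thirds into a chord.
const rows = [
  ['z','x','c','v','b','n','m',',','.','/'],
  ['a','s','d','f','g','h','j','k','l',';'],
  ['q','w','e','r','t','y','u','i','o','p'],
  ['1','2','3','4','5','6','7','8','9','0'],
];
const majorScale = [0, 2, 4, 5, 7, 9, 11]; // semitones above the root

function buildKeyMap(rootMidi) {
  const map = {};
  rows.forEach((row, rowIndex) => {
    row.forEach((key, col) => {
      const degree = col + rowIndex * 2; // stagger each row by a third
      const octave = Math.floor(degree / majorScale.length);
      map[key] = rootMidi + octave * 12 + majorScale[degree % majorScale.length];
    });
  });
  return map;
}

// buildKeyMap(60) puts middle C on the z key, and the column z, a, q, 1
// spells a C major seventh chord.
```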

Soundplant

The QWERTY keyboard can be turned into a simple drum machine quite easily using a free program called Soundplant. The user simply drags an audio file onto an on-screen key, and that sound is then triggered by the corresponding physical key. I was able to create a TR-808 kit in a matter of minutes:

Soundplant with 808 samples

After it is set up and configured, Soundplant can be as effortlessly accessible as the aQWERTYon. However, it does not give the user any rhythmic assistance. Drumming in perfect time is an advanced musical skill, and playing drum machine samples out of time is not much more satisfying than banging on a metal bowl with a spoon. An ideal drum interface would offer beginners some of the rhythmic scaffolding and support that Ableton provides via Session view, arpeggiators, and the like.
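The core of Soundplant-style triggering is nothing more than a keydown handler, which is worth seeing in order to appreciate exactly what is missing. Here is a browser sketch of the same idea; the sample file names are placeholders:

```javascript
// Browser sketch of Soundplant-style triggering: each key fires its sample
// the instant it is pressed, with no rhythmic assistance of any kind.
const ctx = new AudioContext();
const kit = {}; // key -> AudioBuffer

async function loadSample(key, url) {
  const response = await fetch(url);
  kit[key] = await ctx.decodeAudioData(await response.arrayBuffer());
}

loadSample('a', '808-kick.wav');  // placeholder file names
loadSample('s', '808-snare.wav');

document.addEventListener('keydown', (event) => {
  const buffer = kit[event.key];
  if (!buffer || event.repeat) return; // ignore held-key auto-repeat
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(); // plays now, exactly when the key goes down
});
```

Whether the hits land in time is entirely up to the player, which is precisely the problem for novices.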

The Groove Pizza

Drum machines and their software counterparts offer an alternative form of rhythmic scaffolding. The user sequences patterns in a time-unit box system or piano roll, and the computer performs those patterns flawlessly. The MusEDLab’s Groove Pizza app is a web-based drum sequencer that wraps the time-unit box system into a circle.

Groove Pizza - Bembe

The Groove Pizza was designed to make drum programming more intuitive by visualizing the symmetries and patterns inherent in musical-sounding rhythms. However, it is totally unsuitable for blind or low-vision users. Interaction is only possible through the mouse pointer or touch, and there are no standard user interface elements that can be parsed by screen readers.

Before ever considering designing for the blind, the MusEDLab had already considered the Groove Pizza’s limitations for younger children and users with special needs: there is no “live performance” mode, and there is always some delay in feedback between making a change in the drum pattern and hearing the result. We have been considering ways to make a rhythm interface that is more immediate, performance-oriented and tactile. One possible direction would be to create a hardware version of the Groove Pizza; indeed, one of the earliest prototypes was a hardware version built by Adam November out of a pizza box. However, hardware design is vastly more complex and difficult than software, so for the time being, software promises more immediate results.

Haenselmann-Lemelson-Effelsberg MIDI sequencer

This experimental interface is described in Haenselmann, T., Lemelson, H., & Effelsberg, W. (2011). A zero-vision music recording paradigm for visually impaired people. Multimedia Tools and Applications, 5, 1–19.

Haenselmann-Lemelson-Effelsberg MIDI sequencer

The authors create a new mode for a standard MIDI keyboard that maps piano keys to DAW functions like playback, quantization, track selection, and so on. They also add “earcons” (auditory icons) to give sonic feedback when functions are activated that would normally give only graphical feedback. For example, one earcon sounds when recording is enabled; another sounds for regular playback. This interface sounds promising, but there are significant obstacles to its adoption. While the authors have released the source code as a free download, that requires a would-be user to be able to compile and run it. And that presumes they could obtain the code in the first place; the download link given in the paper is inactive. It is an all-too-common fate of academic projects to never get widespread usage. By posting our projects on the web, the MusEDLab hopes to avoid this outcome.

Statement

Music education philosophy

My project is animated by a constructivist philosophy of music education, which operates by the following axiomatic assumptions:

  • Learning by doing is better than learning by being told.
  • Learning is not something done to you, but rather something done by you.
  • You do not get ideas; you make ideas. You are not a container that gets filled with knowledge and new ideas by the world around you; rather, you actively construct knowledge and ideas out of the materials at hand, building on top of your existing mental structures and models.
  • The most effective learning experiences grow out of the active construction of all types of things, particularly things that are personally or socially meaningful, that you develop through interactions with others, and that support thinking about your own thinking.

If an activity’s challenge level is beyond your ability, you experience anxiety. If your ability at the activity far exceeds the challenge, the result is boredom. Flow happens when challenge and ability are well-balanced, as seen in this diagram adapted from Csikszentmihalyi.

Flow

Music students face significant obstacles to flow at the left side of the Ability axis. Most instruments require extensive practice before it is possible to make anything that resembles “real” music. Electronic music presents an opportunity here, because even a complete novice can quickly produce music with a high degree of polish. It is empowering to use technologies that make it impossible to do anything wrong; it frees you to begin exploring what you find to sound right. Beginners can be scaffolded in their pitch explorations with MIDI scale filters, Auto-Tune, and the configurable software keyboards in apps like Thumbjam and Animoog. Rhythmic scaffolding is rarer, but it can be had via Ableton’s quantized clip launcher, MIDI arpeggiators, and the Note Repeat feature on many drum machines.
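The simplest of those pitch scaffolds, a MIDI scale filter, is only a few lines of logic. Here is a sketch in JavaScript; the snapping rule (ties resolve downward) is my own arbitrary choice:

```javascript
// Sketch of a MIDI scale filter: snap any incoming note number to the
// nearest pitch in the chosen scale, so no "wrong" notes are possible.
const majorScale = [0, 2, 4, 5, 7, 9, 11]; // semitone offsets from the root

function snapToScale(note, root = 60) {
  const offset = ((note - root) % 12 + 12) % 12;
  let best = 0;
  let bestDist = Infinity;
  // Consider each degree in this octave plus the root an octave up, so
  // notes near the octave boundary can snap upward.
  for (const step of [...majorScale, 12]) {
    const dist = Math.abs(offset - step);
    if (dist < bestDist) { bestDist = dist; best = step; }
  }
  return note - offset + best;
}

// snapToScale(61) === 60: C-sharp snaps down to C in C major.
```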

QWERTYBeats proposal

My project takes drum machine Note Repeat as its jumping-off point. When Note Repeat is activated, holding down a drum pad triggers the corresponding sound at a particular rhythmic interval: quarter notes, eighth notes, and so on. On the Ableton Push, Note Repeat automatically syncs to the global tempo, making it effortless to produce musically satisfying rhythms. However, this mode has a major shortcoming: it applies globally to all of the drum pads. To my knowledge, no drum machine makes it possible to simultaneously have, say, the snare drum playing every dotted eighth note while the hi-hat plays every sixteenth note.

I propose a web application called QWERTYBeats that maps drums to the computer keyboard as follows:

  • Each row of the keyboard triggers a different drum/beatbox sound (e.g. kick, snare, closed hi-hat, open hi-hat).
  • Each column retriggers the sample at a different rhythmic interval (e.g. quarter note, dotted eighth note).
  • Circles dynamically divide into “pie slices” to show rhythmic values.

The rhythm values are listed below by column, with each description followed by the interval’s duration as a fraction of a beat.

  1. quarter note (1)
  2. dotted eighth note (3/4)
  3. quarter note triplet (2/3)
  4. eighth note (1/2)
  5. dotted sixteenth note (3/8)
  6. eighth note triplet (1/3)
  7. sixteenth note (1/4)
  8. dotted thirty-second note (3/16)
  9. sixteenth note triplet (1/6)
  10. thirty-second note (1/8)

By simply holding down different combinations of keys, users can produce complex syncopations and polyrhythms. If the app is synced to the tempo of a DAW or other music playback, the user can perform good-sounding rhythms over any song that is personally meaningful to them.
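To make the proposal concrete, here is a sketch of the core scheduling logic, again assuming the Web Audio API; the column fractions are the values listed above, and every name and structural choice is provisional:

```javascript
// Sketch of the QWERTYBeats core: while a key is held, its drum sound
// retriggers at the rhythmic interval assigned to that key's column.
const ctx = new AudioContext();
let bpm = 120;

// column index -> fraction of a quarter note, matching the list above
const columnFractions = [1, 3/4, 2/3, 1/2, 3/8, 1/3, 1/4, 3/16, 1/6, 1/8];

const held = new Map(); // key -> timer id; each held key runs its own loop

function trigger(buffer) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start();
}

function keyDown(key, buffer, column) {
  if (held.has(key)) return; // ignore keyboard auto-repeat
  const intervalMs = (60 / bpm) * columnFractions[column] * 1000;
  trigger(buffer); // sound immediately, then repeat while held
  held.set(key, setInterval(() => trigger(buffer), intervalMs));
}

function keyUp(key) {
  clearInterval(held.get(key));
  held.delete(key);
}
```

A real implementation would schedule hits ahead of time against ctx.currentTime rather than relying on setInterval, which drifts. But the structure, one independent retrigger loop per held key, is exactly what distinguishes this design from global Note Repeat.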

The column layout leaves some unused keys in the upper right corner of the keyboard: “-”, “=”, “[”, “]”, “\”, etc. These can be reserved for setting the tempo and other UI elements.

The app defaults to Perform Mode, but clicking Make New Kit opens Sampler Mode, where users can import or record their own drum sounds:

  • Keyboard shortcuts enable the user to select a sound, audition it, record, set start and end point, and set its volume level.
  • A login/password system enables users to save kits to the cloud where they can be accessed from any computer. Kits get unique URL identifiers, so users can also share them via email or social media.

It is my goal to make the app accessible to users with the widest possible diversity of abilities.

  • The entire layout will use plain text, CSS and JavaScript to support screen readers.
  • All user interface elements can be accessed via the keyboard: tab to change the keyboard focus, menu selections and parameter changes via the up and down arrows, and so on (see the sketch below).
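As a sketch of what the second point implies, here is a screen-reader-friendly tempo control using standard ARIA slider semantics; the element id, markup, and ranges are placeholders:

```javascript
// Sketch of a keyboard-accessible tempo control using ARIA slider semantics.
// Assumes markup like: <div id="tempo" tabindex="0" role="slider"></div>
const tempoEl = document.getElementById('tempo');
let bpm = 120;
tempoEl.setAttribute('aria-valuemin', '40');
tempoEl.setAttribute('aria-valuemax', '240');

function setTempo(value) {
  bpm = value;
  tempoEl.setAttribute('aria-valuenow', String(value)); // announced by screen readers
  tempoEl.textContent = `${value} bpm`; // plain-text display
}

setTempo(bpm);

tempoEl.addEventListener('keydown', (event) => {
  if (event.key === 'ArrowUp') setTempo(Math.min(bpm + 1, 240));
  if (event.key === 'ArrowDown') setTempo(Math.max(bpm - 1, 40));
});
```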

Perform Mode:

QWERTYBeats concept images - Perform mode

Sampler Mode:

sampler-mode

Mobile version

The present plan is to divide the screen into a grid mirroring the layout of the QWERTY keyboard. User testing will determine whether this produces a satisfying experience.

QWERTYDrum - mobile

Prototype

I created a prototype of the app using Ableton Live’s Session View.

QWERTYBeats - Ableton prototype

Here is a sample performance:

There is not much literature examining the impact of drum programming and other electronic rhythm sequencing on students’ subsequent ability to play acoustic drums, or to keep time more accurately in general. I can report anecdotally that my own time spent sequencing and programming drums improved my drumming and timekeeping enormously (and mostly inadvertently). I will continue to seek further support for the hypothesis that electronically assisted rhythm creation builds unassisted rhythmic ability. In the meantime, I am eager to prototype and test QWERTYBeats.

Composing in the classroom

The hippest music teachers help their students create original music. But what exactly does that mean? What even is composition? In this post, I take a look at two innovators in music education and try to arrive at an answer.

Matt McLean is the founder of the amazing Young Composers and Improvisers Workshop. He teaches his students composition using a combination of Noteflight, an online notation editor, and the MusEDLab’s own aQWERTYon, a web app that turns your regular computer keyboard into an intuitive musical interface.

http://www.yciw.net/1/the-interface-i-wish-noteflight-had-is-here-aqwertyon/

Matt explains:

Participating students in YCIW as well as my own students at LREI have been using Noteflight for over 6 years to compose music for chamber orchestras, symphony orchestras, jazz ensembles, movie soundtracks, video game music, school band and more – hundreds of compositions.

Before the advent of the aQWERTYon, students needed to enter music into Noteflight either by clicking with the mouse or by playing notes in with a MIDI keyboard. The former method is accessible but slow; the latter method is fast but requires some keyboard technique. The aQWERTYon combines the accessibility of the mouse with the immediacy of the piano keyboard.

For the first time there is a viable way for every student to generate and notate her ideas in a tactile manner with an instrument that can be played by all. We founded Young Composers & Improvisors Workshop so that every student can have the experience of composing original music. Much of my time has been spent exploring ways to emphasize the “experiencing” part of this endeavor. Students had previously learned parts of their composition on instruments after their piece was completed. Also, students with piano or guitar skills could work out their ideas prior to notating them. But efforts to incorporate MIDI keyboards or other interfaces with Noteflight in order to give students a way to perform their ideas into notation always fell short.

The aQWERTYon lets novices try out ideas the way that more experienced musicians do: by improvising with an instrument and reacting to the sounds intuitively. It’s possible to compose without using an instrument at all, using a kind of sudoku-solving method, but it’s not likely to yield good results. Your analytical consciousness, the part of your mind that can write notation, is also its slowest and dumbest part. You really need your emotions, your ear, and your motor cortex involved. Before computers, you needed considerable technical expertise to be able to improvise musical ideas, and remember them long enough to write them down. The advent of recording and MIDI removed a lot of the friction from the notation step, because you could preserve your ideas just by playing them. With the aQWERTYon and interfaces like it, you can do your improvisation before learning any instrumental technique at all.

Student feedback suggests that kids like being able to play along to previously notated parts as a way to find new parts to add to their composition. As a teacher I am curious to measure the effect of students being able to practice their ideas at home using aQWERTYon and then sharing their performances before using their idea in their composition. It is likely that this will create a stronger connection between the composer and her musical idea than if she had only notated it first.

Those of us who have been making original music in DAWs are familiar with the pleasures of creating ideas through playful jamming. It feels like a major advance to put that experience in the hands of elementary school students.

Matt uses progressive methods to teach a traditional kind of musical expression: writing notated scores that will then be performed live by instrumentalists. Matt’s kids are using futuristic tools, but the model for their compositional technique is the one established in the era of Beethoven.

Beethoven

(I just now noticed that the manuscript Beethoven is holding in this painting is in the key of D-sharp. That’s a tough key to read!)

Other models of composition exist. There’s the Lennon and McCartney method, which doesn’t involve any music notation. Like most untrained rock musicians, the Beatles worked from lyric sheets with chords written on them as a mnemonic. The “lyrics plus chords” method continues to be the standard for rock, folk and country musicians. It’s a notation system that’s only really useful if you already have a good idea of how the song is supposed to sound.

Lennon and McCartney writing

Lennon and McCartney originally wrote their songs to be performed live for an audience. They played in clubs for several years before ever entering a recording studio. As their career progressed, however, the Beatles stopped performing live, and began writing with the specific goal of creating studio recordings. Some of those later Beatles tunes would be difficult or impossible to perform live. Contemporary artists like Missy Elliott and Pharrell Williams have pushed the Beatles’ idea to its logical extreme: songs existing entirely within the computer as sequences of samples and software synths, with improvised vocals arranged into shape after being recorded. For Missy and Pharrell, creating the score and the finished recording are one and the same act.

Pharrell and Missy Elliott in the studio

Is it possible to teach the Missy and Pharrell method in the classroom? Alex Ruthmann, MusEDLab founder and my soon-to-be PhD advisor, documented his method for doing so in 2007.

As a middle school general music teacher, I’ve often wrestled with how to engage my students in meaningful composing experiences. Many of the approaches I’d read about seemed disconnected from the real-world musicality I saw daily in the music my students created at home and what they did in my classes. This disconnect prompted me to look for ways of ‘bridging the gap’ between the students’ musical world outside music class and their in-class composing experiences.

It’s an axiom of constructivist music education that students will be most motivated to learn music that’s personally meaningful to them. There are kids out there for whom notated music performed on instruments is personally meaningful. But the musical world outside music class usually follows the Missy and Pharrell method.

[T]he majority of approaches to teaching music with technology center around notating musical ideas and are often rooted in European classical notions of composing (for example, creating ABA pieces, or restricting composing tasks to predetermined rhythmic values). These approaches require students to have a fairly sophisticated knowledge of standard music notation and a fluency working with rhythms and pitches before being able to explore and express their musical ideas through broader musical dimensions like form, texture, mood, and style.

Noteflight imposes some limitations on these musical dimensions. Some forms, textures, moods and styles are difficult to capture in standard notation. Some are impossible. If you want to specify a particular drum machine sound combined with a sampled breakbeat, or an ambient synth pad, or a particular stereo image, standard notation is not the right tool for the job.

Common approaches to organizing composing experiences with synthesizers and software often focus on simplified classical forms without regard to whether these forms are authentic to the genre or to technologies chosen as a medium for creation.

There is nothing wrong with teaching classical forms. But when making music with computers, the best results come from making the music that’s idiomatic to computers. Matt McLean goes to extraordinary lengths to have student compositions performed by professional musicians, but most kids will be confined to the sounds made by the computer itself. Classical forms and idioms sound awkward at best when played by the computer, but electronic music sounds terrific.

The middle school students enrolled in these classes came without much interest in performing, working with notation, or studying the classical music canon. Many saw themselves as “failed” musicians, placed in a general music class because they had not succeeded in or desired to continue with traditional performance-based music classes. Though they no longer had the desire to perform in traditional school ensembles, they were excited about having the opportunity to create music that might be personally meaningful to them.

Here it is, the story of my life as a music student. Too bad I didn’t go to Alex’s school.

How could I teach so that composing for personal expression could be a transformative experience for students? How could I let the voices and needs of the students guide lessons for the composition process? How could I draw on the deep, complex musical understandings that these students brought to class to help them develop as musicians and composers? What tools could I use to quickly engage them in organizing sound in musical and meaningful ways?

Alex draws parallels between writing music and writing English. Both are usually done alone at a computer, and both pose a combination of technical and creative challenges.

Musical thinking (thinking in sound) and linguistic thinking (thinking using language phrases and ideas) are personal creative processes, yet both occur within social and cultural contexts. Noting these parallels, I began to think about connections between the whole-language approach to writing used by language arts teachers in my school and approaches I might take in my music classroom.

In the whole-language approach to writing, students work individually as they learn to write, yet are supported through collaborative scaffolding – support from their peers and the teacher. At the earliest stages, students tell their stories and attempt to write them down using pictures, drawings, and invented notation. Students write about topics that are personally meaningful to them, learning from their own writing and from the writing of their peers, their teacher, and their families. They also study literature of published authors. Classes that take this approach to teaching writing are often referred to as “writers’ workshops”… The teacher facilitates [students’] growth as writers through minilessons, share sessions, and conferring sessions tailored to meet the needs that emerge as the writers progress in their work. Students’ original ideas and writings often become an important component of the curriculum. However, students in these settings do not spend their entire class time “freewriting.” There are also opportunities for students to share writing in progress and get feedback and support from teacher and peers. Revision and extension of students’ writing occur throughout the process. Lessons are not organized by uniform, prescriptive assignments, but rather are tailored to the students’ interests and needs. In this way, the direction of the curriculum and successive projects are informed by the students’ needs as developing writers.

Alex set about creating an equivalent “composers’ workshop,” combining composition, improvisation, and performing with analytical listening and genre studies.

The broad curricular goal of the composers’ workshop is to engage students collaboratively in:

  • Organizing and expressing musical ideas and feelings through sound with real-world, authentic reasons for and means of composing
  • Listening to and analyzing musical works appropriate to students’ interests and experiences, drawn from a broad spectrum of sources
  • Studying processes of experienced music creators through listening to, performing, and analyzing their music, as well as being informed by accounts of the composition process written by these creators.

Alex recommends production software with strong loop libraries so students can make high-level musical decisions with “real” sounds immediately.

While students do not initially work directly with rhythms and pitch, working with loops enables students to begin composing through working with several broad musical dimensions, including texture, form, mood, and affect. As our semester progresses, students begin to add their own original melodies and musical ideas to their loop-based compositions through work with synthesizers and voices.

As they listen to musical exemplars, I try to have students listen for the musical decisions and understand the processes that artists, sound engineers, and producers make when crafting their pieces. These listening experiences often open the door to further dialogue on and study of the multiplicity of ‘musical roles’ that are a part of creating today’s popular music. Having students read accounts of the steps that audio engineers, producers, songwriters, film-score composers, and studio musicians go through when creating music has proven to be informative and has helped students learn the skills for more accurately expressing the musical ideas they have in their heads.

Alex shares my belief in project-based music technology teaching. Rather than walking through the software feature-by-feature, he plunges students directly into a creative challenge, trusting them to pick up the necessary software functionality as they go. Rather than tightly prescribe creative approaches, Alex observes the students’ explorations and uses them as opportunities to ask questions.

I often ask students about their composing and their musical intentions to better understand how they create and what meanings they’re constructing and expressing through their compositions. Insights drawn from these initial dialogues help me identify strategies I can use to guide their future composing and also help me identify listening experiences that might support their work or techniques they might use to achieve their musical ideas.

Some musical challenges are more structured–Alex does “genre studies” where students have to pick out the qualities that define techno or rock or film scores, and then create using those idioms. This is especially useful for younger students who may not have a lot of experience listening closely to a wide range of music.

Rather than devoting entire classes to demonstrations or lectures, Alex prefers to devote the bulk of classroom time to working on the projects, offering “minilessons” to smaller groups or individuals as the need arises.

Teaching through minilessons targeted to individuals or small groups of students has helped to maintain the musical flow of students’ compositional work. As a result, I can provide more individual feedback and support to students as they compose. The students themselves also offer their own minilessons to peers when they have been designated to teach more about advanced features of the software, such as how to record a vocal track, add a fade-in or fade-out, or copy their musical material. These technology skills are taught directly to a few students, who then become the experts in that skill, responsible for teaching other students in the class who need the skill.

Not only does the peer-to-peer learning help with cultural authenticity, but it also gives students invaluable experience with the role of teacher.

One of my first questions is usually, “Is there anything that you would like me to listen for or know about before I listen?” This provides an opportunity for students to seek my help with particular aspects of their composing process. After listening to their compositions, I share my impressions of what I hear and offer my perspective on how to solve their musical problems. If students choose not to accept my ideas, that’s fine; after all, it’s their composition and personal expression… Use of conferring by both teacher and students fosters a culture of collaboration and helps students develop skills in peer scaffolding.

Alex recommends creating an online gallery of class compositions. This has become easier to implement since 2007 with the explosion of blog platforms like Tumblr, audio hosting tools like SoundCloud, and video hosts like YouTube. There are always going to be privacy considerations with such platforms, but there is no shortage of options to choose from.

Once a work is online, students can listen to and comment on these compositions at home outside of class time. Sometimes students post pieces in progress, but for the most part, works are posted when deemed “finished” by the composer. The online gallery can also be set up so students can hear works written by participants in other classes. Students are encouraged to listen to pieces published online for ideas to further their own work, to make comments, and to share these works with their friends and family. The real-world publishing of students’ music on the Internet seems to contribute to their motivation.

Assessing creative work is always going to be a challenge, since there’s no objective basis to assess it on. Alex looks at how well a student composer has met the goal of the assignment, and how well they have achieved their own compositional intent.

The word “composition” is problematic in the context of contemporary computer-based production. It carries the cultural baggage of Western Europe, the idea of music as having a sole identifiable author (or authors). The sampling and remixing ethos of hip-hop and electronica is closer to the traditions of non-European cultures where music may be owned by everyone and no one. I’ve had good results bringing remixing into the classroom, having students rework each other’s tracks, or beginning with a shared pool of audio samples, or doing more complex collaborative activities like musical shares. Remixes are a way of talking about music via the medium of music, and remixes of remixes can make for some rich and deep conversation. The word “composition” makes less sense in this context. I prefer the broader term “production,” which includes both the creation of new musical ideas and the realization of those ideas in sound.

So far in this post, I’ve presented notation-based composition and loop-based production as if they’re diametrical opposites. In reality, the two overlap, and can be easily combined. A student can create a part as a MIDI sequence and then convert it to notation, or vice versa. The school band or choir can perform alongside recorded or sequenced tracks. Instrumental or vocal performances can be recorded, sampled, and turned into new works. Electronic productions can be arranged for live instruments, and acoustic pieces can be reconceived as electronica. If a hip-hop track can incorporate a sample of Duke Ellington, there’s no reason that sample couldn’t be performed by a high school jazz band. The possibilities are endless.

Rohan lays beats

The Ed Sullivan Fellows program is an initiative by the NYU MusEDLab connecting up-and-coming hip-hop musicians to mentors, studio time, and creative and technical guidance. Our session this past Saturday got off to an intense start, talking about the role of young musicians of color in a world of police brutality and Black Lives Matter. The Fellows are looking to Kendrick Lamar and Chance The Rapper to speak social and emotional truths through music. It’s a brave and difficult job they’ve taken on.

Eventually, we moved from heavy conversation into working on the Fellows’ projects, which this week involved branding and image. I was at kind of a loose end in this context, so I set up the MusEDLab’s Push controller and started playing around with it. Rohan, one of the Fellows, immediately gravitated to it, and understandably so.

Indigo lays beats

Rohan tried out a few drum sounds, then some synths. He quickly discovered a four-bar synth loop that he wanted to build a track around. He didn’t have any Ableton experience, however, so I volunteered to be his co-producer and operate the software for him.

We worked out some drum parts, first with a hi-hat and snare from the Amen break, and then a kick, clap and more hi-hats from Ableton’s C78 factory instrument. For bass, Rohan wanted that classic booming hip-hop sound you hear coming from car stereos in Brooklyn. He spotted the Hip-Hop Sub among the presets. We fiddled with it and he continued to be unsatisfied until I finally just put a brutal compressor on it, and then we got the sound he was hearing in his head.

While we were working, I had my computer connected to a Bluetooth speaker that was causing some weird and annoying system behavior. At one point, iTunes launched itself and started playing a random song under Rohan’s track, “I Can’t Realize You Love Me” by Duke Ellington and His Orchestra, featuring The Harlem Footwarmers and Sid Garry.

Rohan liked the combination of his beat and the Ellington song, so I sampled the opening four bars and added them to the mix. It took me several tries to match the keys, and I still don’t think I really nailed it, but the hip-hop kids have broad tolerance for chord clash, and Rohan was undisturbed.

Once we had the loops assembled, we started figuring out an arrangement. It took me a minute to figure out that when Rohan refers to a “bar,” he means a four-measure phrase. He’s essentially conflating hypermeasures with measures. I posted about it on Twitter later and got some interesting responses.

In a Direct Message, Latinfiddler also pointed out that Latin music calls two measures a “bar” because that’s the length of one cycle of the clave.

Thinking about it further, there’s yet another reason to conflate measures with hypermeasures, which is the broader cut-time shift taking place in hip-hop. All of the young hip-hop beatmakers I’ve observed lately work at half the base tempo of their DAW session. Rohan, being no exception, had the session tempo set to 125 bpm, but programmed a beat with an implied tempo of 62.5 bpm. He and his cohort put their backbeats on beat three, not beats two and four, so they have a base grid of thirty-second notes rather than sixteenth notes. A similar shift took place in the early 1960s when the swung eighth notes of jazz rhythm gave way to the swung sixteenth notes of funk.
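To make the math concrete, here’s the half-time relationship as a small arithmetic sketch (an illustration of the general pattern, not anything from Rohan’s session file):

```python
# Worked example of the half-time relationship in young producers' sessions.
session_bpm = 125                # tempo the DAW session is set to
implied_bpm = session_bpm / 2    # tempo the beat actually implies: 62.5

# One sixteenth note at the session tempo...
sixteenth = 60 / session_bpm / 4       # 0.12 seconds
# ...lasts exactly as long as one thirty-second note at the implied tempo.
thirty_second = 60 / implied_bpm / 8   # 0.12 seconds

print(sixteenth, thirty_second)  # 0.12 0.12
```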

Here’s Rohan’s track as of the end of our session:

By the time we were done working, the rest of the Fellows had gathered around and started freestyling. The next step is to record them rapping and singing on top. We also need to find someone to mix it properly. I understand aspects of hip-hop very well, but I mix amateurishly at best.

All the way around, I feel like I learn a ton about music whenever I work with young hip-hop musicians. They approach the placement of sounds in the meter in ways that would never occur to me. I’m delighted to be able to support them technically in realizing their ideas; it’s a privilege for me.

Compositional prompts

One of the challenges in creating Theory for Producers (or any online learning experience) is building community. When you’re in a classroom with people, community emerges naturally, but on the web it’s harder. We’re using email to remind students to stay engaged over time, but we don’t want to end up in their spam folders. To make our emails welcome rather than intrusive, we decided to do Weekly Challenges, one-line prompts for music creation. Participants post their responses in our SoundCloud group.

I’ve been doing something similar with guitar students for a long time, in person rather than via email, for example with the one-note groove. In coming up with more prompts, I’ve been drawing on my recent foray into prose scores, inspired by the example of Pauline Oliveros.

Pauline Oliveros

Really, you could think of my collection of prompts as very short and simple prose scores. Please feel free to use these, for yourself, for students, or for any other purpose. All I ask is that you drop me a line to tell me how you used them.

The Prompts

One Note Groove: Create a melody using only one pitch.

Two Note Groove: Create a melody using only two distinct pitches.

Three Note Groove: Create a melody using only three distinct pitches.

Four Note Groove: Create a melody using only four distinct pitches.

Arpeggio Groove: Create a melody using only a single column of the aQWERTYon.

Call And Response: Create a melody that includes a call phrase and response phrase.

Repeat Four Times: Create a melody consisting of a phrase that repeats identically four times.

Repeat Eight Times: Create a melody consisting of a phrase that repeats identically eight times.

Repeat Sixteen Times: Create a melody consisting of a phrase that repeats identically sixteen times.

Narrow Range: Create a melody that only uses the notes between C and E-flat.

Angular: Create a melody where no interval between one note and the next is smaller than a fifth.

Avoid The Root: Create a melody using any of the notes in a scale except the root.

Avoid The Triad: Create a melody using any of the notes in a scale except the root, third and fifth.

Dissonance: Create the “ugliest” melody you can.

Avoid The Tonic: Create a chord progression using any chords from a scale except for the tonic.

Fourths: Create a melody and/or chords using only the interval of a perfect fourth.

Universal Solvent: Create a blues scale melody over non-blues accompaniment.

Emotional Extremes I: Create the happiest melody you can.

Emotional Extremes II: Create the saddest melody you can.

Palindrome: Create a melody consisting of a sequence of notes, then that same sequence backwards.

Pattern Sequence: Create a melody by moving a “shape” to different locations on the aQWERTYon.

Minimalism: Create a melody that is mostly silence.

Maximalism: Create a melody containing no gaps or pauses.

Melodic Adaptation: Take an existing melody and adapt it into a new one by keeping the rhythms but changing the pitches.

Rhythmic Adaptation: Take an existing melody and adapt it into a new one by keeping the pitches but changing the rhythms.

Birdsong: Recreate a bird call as closely as you can.

Speech Melody: Recreate the pitches of a spoken phrase.

Inside the aQWERTYon

The MusEDLab and Soundfly just launched Theory For Producers, an interactive music theory course. The centerpiece of the interactive component is a MusEDLab tool called the aQWERTYon. You can try it by clicking the image below.

aQWERTYon screencap

In this post, I’ll talk about why and how we developed the aQWERTYon.

One of our core design principles is to work within our users’ real-world technological limitations. We build tools in the browser so they’ll be platform-independent and accessible anywhere there’s internet access (and where there isn’t internet access, we’ve developed the “MusEDLab in a box.”) We want to find out what musical possibilities there are in a typical computer with no additional software or hardware. That question led us to investigate ways of turning the standard QWERTY keyboard into a beginner-friendly instrument. We were inspired in part by GarageBand’s Musical Typing feature.

GarageBand musical typing

If you don’t have a MIDI controller, Apple thoughtfully made it possible for you to use your computer keyboard to play GarageBand’s many software instruments. You get an octave and a half of piano, plus other useful controls: pitch bend, modulation, sustain, octave shifting and simple velocity control. Many DAWs offer something similar, but Apple’s system is the most sophisticated I’ve seen.

Handy though it is, Musical Typing has some problems as a user interface. The biggest one is the poor fit between the piano keyboard layout and the grid of computer keys. Typing the letter A plays the note C. The rest of that row plays the white keys, and the row above it plays the black keys. You can play the chromatic scale by alternating A row, Q row, A row, Q row. That basic pattern is easy enough to figure out. However, you quickly get into trouble, because there’s no black key between E and F. The QWERTY keyboard gives no visual reminder of that fact, so you just have to remember it. Unfortunately, the “missing” black key happens to be the letter R, which is GarageBand’s keyboard shortcut for recording. So what inevitably happens is that you’re hunting for E-flat or F-sharp and you accidentally start recording over whatever you were doing. I’ve been using the program for years and still do this routinely.
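To see the problem at a glance, here’s the key-to-note mapping as I understand it, sketched in Python (approximate and simplified; this isn’t Apple’s actual implementation):

```python
# Sketch of GarageBand-style Musical Typing: the A row plays white keys,
# the Q row plays black keys, starting from C.
white = {'A': 'C', 'S': 'D', 'D': 'E', 'F': 'F',
         'G': 'G', 'H': 'A', 'J': 'B', 'K': 'C'}
black = {'W': 'C#', 'E': 'D#', 'T': 'F#', 'Y': 'G#', 'U': 'A#'}

# The hole in the top row: there is no black key between E and F, so the
# R key plays nothing -- and it doubles as the record shortcut.
assert 'R' not in black
```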

Rather than recreating the piano keyboard on the computer, we drew on a different metaphor: the accordion.

The accordion: the user interface metaphor of the future!

We wanted to have chords and scales arranged in an easily discoverable way, like the way you can easily figure out the chord buttons on the accordion’s left hand. The QWERTY keyboard is really a staggered grid four keys tall and between ten and thirteen keys wide, plus assorted modifier and function keys. We decided to use the columns for chords and the rows for scales.

For the diatonic scales and modes, the layout is simple. The bottom row gives the notes in the scale starting on scale degree 1. The second row has the same scale shifted over to start on scale degree 3. The third row starts the scale on scale degree 5, and the top row starts on scale degree 1 an octave up. If this sounds confusing when you read it, try playing it; your ears will immediately pick up the pattern. Notes in the same column form the diatonic chords, with their Roman numerals conveniently matching the number keys. There are no wrong notes, so even mashing keys at random will sound at least okay. Typing your name usually sounds pretty cool, and picking out melodies is a piece of cake. Playing diagonal columns, like Z-S-E-4, gives you chords voiced in fourths. The same layout approach works great for any seven-note scale: all of the diatonic modes, plus the modes of harmonic and melodic minor.
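The row logic is easy to express in code. Here’s a minimal sketch of how the diatonic layout could be generated; this is my reconstruction for illustration, not the aQWERTYon’s actual source:

```python
# Reconstruction of the diatonic row layout. Pitches are semitone offsets
# from the root; C major is [0, 2, 4, 5, 7, 9, 11].
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]

def make_rows(scale, keys_per_row=10):
    """Four rows: the scale starting on degrees 1, 3, 5, and 1 an octave up.
    Columns then spell out the diatonic chords."""
    def row(start_degree, octave_shift=0):
        pitches = []
        for i in range(keys_per_row):
            octave, step = divmod(start_degree + i, len(scale))
            pitches.append(scale[step] + 12 * (octave + octave_shift))
        return pitches
    return [row(0), row(2), row(4), row(0, octave_shift=1)]

bottom, third, fifth, top = make_rows(C_MAJOR)
# Column 0 is a root-position I chord with the octave on top:
print(bottom[0], third[0], fifth[0], top[0])  # 0 4 7 12
```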

Pentatonics work pretty much the same way as seven-note scales, except that the columns stack in fourths rather than fifths. The octatonic and diminished scales lay out easily as well. The real layout challenge lay in one strange but crucial exception: the blues scale. Unlike other scales, you can’t just stagger the blues scale pitches in thirds to get meaningful chords. The melodic and harmonic components of blues are more or less unrelated to each other. Our original idea was to put the blues scale on the bottom row of keys, and then use the others to spell out satisfying chords on top. That made it extremely awkward to play melodies, however, since the keys don’t form an intelligible pattern of intervals. Our compromise was to create two different blues modes: one with the chords, for harmony exploration, and one just repeating the blues scale in octaves for melodic purposes. Maybe a better solution exists, but we haven’t figured it out yet.

When you select a different root, all the pitches in the chords and scales are automatically changed as well. Even if the aQWERTYon had no other features or interactivity, this would still make it an invaluable music theory tool. But root selection raises a bigger question: what do you do about all the real-world music that uses more than one scale or mode? Totally uniform modality is unusual, even in simple pop songs. You can access notes outside the currently selected scale by pressing the shift keys, which transposes the entire keyboard up or down a half step. But what would be really great is if we could get the scale settings to change dynamically. Wouldn’t it be great if you were listening to a jazz tune, and the scale was always set to match whatever chord was going by at that moment? You could blow over complex changes effortlessly. We’ve discussed manually placing markers in YouTube videos that tell the aQWERTYon when to change its settings, but that would be labor-intensive. We’re hoping to discover an algorithmic method for placing markers automatically.
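If we do go the marker route, the underlying data could be as simple as a list of timestamped scale settings that the player consults during playback. Here’s a hypothetical sketch; none of these names come from our actual codebase:

```python
# Hypothetical timestamped scale markers for a video (times in seconds).
markers = [
    {"time": 0.0,  "root": "C", "scale": "major"},
    {"time": 12.5, "root": "D", "scale": "dorian"},
    {"time": 24.0, "root": "G", "scale": "mixolydian"},
]

def setting_at(t, markers):
    """Return the most recent marker at or before time t."""
    current = markers[0]
    for m in markers:
        if m["time"] <= t:
            current = m
    return current

print(setting_at(15.0, markers))  # D dorian
```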

The other big design challenge we face is how to present all the different scale choices in a way that doesn’t overwhelm our core audience of non-expert users. One solution would just be to limit the scale choices. We already do that in the Soundfly course, in effect; when you land on a lesson, the embedded aQWERTYon is preset to the appropriate scale and key, and the user doesn’t even see the menus. But we’d like people to be able to explore the rich sonic diversity of the various scales without confronting them with technical Greek terms like “Lydian dominant”. Right now, the scales are categorized as Major, Minor and Other, but those terms aren’t meaningful to beginners. We’ve been discussing how we could organize the scales by mood or feeling, maybe from “brightest” to “darkest.” But how do you assign a mood to a scale? Do we just do it arbitrarily ourselves? Crowdsource mood tags? Find some objective sorting method that maps onto most listeners’ subjective associations? Some combination of the above? It’s an active area of research for us.
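One candidate objective method, for what it’s worth: rank each mode by the sum of its semitone offsets from the root, since sharper scale degrees tend to read as brighter. Here’s a sketch of that idea (one possible approach, not a settled design):

```python
# Rank the diatonic modes by "brightness": the sum of each mode's semitone
# offsets from the root. Sharper modes score higher.
MODES = {
    "Lydian":     [0, 2, 4, 6, 7, 9, 11],
    "Ionian":     [0, 2, 4, 5, 7, 9, 11],
    "Mixolydian": [0, 2, 4, 5, 7, 9, 10],
    "Dorian":     [0, 2, 3, 5, 7, 9, 10],
    "Aeolian":    [0, 2, 3, 5, 7, 8, 10],
    "Phrygian":   [0, 1, 3, 5, 7, 8, 10],
    "Locrian":    [0, 1, 3, 5, 6, 8, 10],
}

for name, offsets in sorted(MODES.items(), key=lambda kv: -sum(kv[1])):
    print(name, sum(offsets))
# Lydian 39, Ionian 38, Mixolydian 37, Dorian 36,
# Aeolian 35, Phrygian 34, Locrian 33
```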

This issue of categorizing scales by mood has relevance for the original use case we imagined for the aQWERTYon: teaching film scoring. The idea behind the integrated video window was that you would load a video clip, set a mode, and then improvise some music that fit the emotional vibe of that clip. The idea of playing along with YouTube videos of songs came later. One could teach more general open-ended composition with the aQWERTYon, and in fact our friend Matt McLean is doing exactly that. But we’re attracted to film scoring as a gateway because it’s a more narrowly defined problem. Instead of just “write some music”, the challenge is “write some music with a particular feeling to it that fits into a scene of a particular length.”

Would you like to help us test and improve the aQWERTYon, or to design curricula around it? Would you like to help fund our programmers and designers? Please get in touch.