The aQWERTYon pitch wheels and the future of music theory visualization

The MusEDLab will soon be launching a revamped version of the aQWERTYon with some enhancements to its visual design, including a new scale picker. Beyond our desire to make our stuff look cooler, the scale picker represents a challenge that we’ve struggled with since the earliest days of aQW development. On the one hand, we want to offer users a wide variety of intriguing and exotic scales to play with. On the other hand, our audience of beginner and intermediate musicians is likely to be horrified by a list of terms like “Lydian dominant mode.” I recently had the idea to represent all the scales as colorful icons, like so:

Read more about the rationale and process behind this change here. In this post, I’ll explain what the icons mean, and how they can someday become the basis for a set of new interactive music theory visualizations.

Musical pitches rise and fall linearly, but pitch class is circular. When you go up or down the chromatic scale, the note names “wrap around” every twelve notes. This naming convention reflects the fact that we hear notes an octave apart as being “the same”, probably because they share so many overtones. (Non-human primates hear octaves as being equivalent too.)

chromatic circle
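
If you think of notes as numbers, the wrap-around is just arithmetic mod 12. Here's a rough sketch in Python (the MIDI note numbers and the flat-based spellings are just illustrative conventions, not anything the aQWERTYon itself uses):

```python
# Pitch classes "wrap around" every twelve semitones: notes an octave
# apart (12 semitones) reduce to the same pitch class mod 12.
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def pitch_class(midi_note):
    """Reduce a MIDI note number to its pitch class (0-11, counting from C)."""
    return midi_note % 12

# Middle C (60), the C an octave above (72), and the C below (48)
# all share pitch class 0, which is why we hear them as "the same" note.
print(NOTE_NAMES[pitch_class(60)])  # C
print(NOTE_NAMES[pitch_class(72)])  # C
```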

The note names and numbers are all based on the C major scale, which is Western music’s “default setting.” The scale notes C, D, E, F, G, A and B (the white keys on the piano) are the “normal” notes. (Why do they start on C and not A? I have no idea.) You get D-flat, E-flat, G-flat, A-flat and B-flat (the black keys on the piano) by lowering (flatting) their corresponding white key notes. Alternatively, you can get the black key notes by raising or sharping the white key notes, in which case they’ll be called C-sharp, D-sharp, F-sharp, G-sharp, and A-sharp. (Let’s just briefly acknowledge that the imagery of the “normal” white and “deviant” black keys is just one of many ways that Western musical culture is super racist, and move on.)

You can represent any scale on the chromatic circle just by “switching” notes on and off. For example, if you activate the notes C, D, E-flat, F, G, A-flat and B, you get C harmonic minor. (Alternatively, you could just deactivate D-flat, E, G-flat, A, and B-flat.) Here’s how the scale looks when you write it this way:

C harmonic minor - monochrome

This is how I conceive scales in my head, as a pattern of activated and deactivated chromatic scale notes. As a guitarist, it’s the most intuitive way to think about them, because each box on the circular grid corresponds to a fret, so you can read the fingering pattern right off the circle. When I think “harmonic minor,” I don’t think of note names, I think “pattern of notes and gaps with one unusually wide gap.”
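
In code, you could model this on/off conception as a set of chromatic pitch classes, numbered 0 through 11 starting from C. Here's a minimal Python sketch (the x/. rendering is just my own shorthand for the activated and deactivated boxes):

```python
# A scale as a pattern of "switched on" chromatic notes, indexed 0-11 from C.
# C harmonic minor: C, D, Eb, F, G, Ab, B -> pitch classes 0 2 3 5 7 8 11.
C_HARMONIC_MINOR = {0, 2, 3, 5, 7, 8, 11}

def as_pattern(scale):
    """Render a scale as twelve on/off boxes around the chromatic circle."""
    return "".join("x" if pc in scale else "." for pc in range(12))

# Note the unusually wide gap between Ab (8) and B (11) near the end.
print(as_pattern(C_HARMONIC_MINOR))  # x.xx.x.xx..x
```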

Another beauty of the circle view is that you can get the other eleven harmonic minor scales just by rotating the note names while keeping the pattern of activated/deactivated notes the same. If I want E-flat harmonic minor, I just have to grab the outer ring and rotate it counterclockwise a few notches:

E-flat harmonic minor
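
That rotation is just transposition: add the same number of semitones to every note, mod 12. A hypothetical sketch (rotating three notches here corresponds to moving the root up three semitones, from C to E-flat):

```python
def rotate(scale, steps):
    """Transpose a scale by rotating its pattern around the chromatic circle."""
    return {(pc + steps) % 12 for pc in scale}

# C harmonic minor: C, D, Eb, F, G, Ab, B.
C_HARMONIC_MINOR = {0, 2, 3, 5, 7, 8, 11}

# Rotating three notches gives E-flat harmonic minor: Eb F Gb Ab Bb Cb(B) D.
EB_HARMONIC_MINOR = rotate(C_HARMONIC_MINOR, 3)
print(sorted(EB_HARMONIC_MINOR))  # [2, 3, 5, 6, 8, 10, 11]
```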

My next thought was to color-code the scale tones to give an indication of their sound and function:

C harmonic minor scale necklace

Here’s how the color scheme works:

  • Green – major, natural, sharp, augmented
  • Blue – minor, flat, diminished
  • Purple – perfect (neither major nor minor)
  • Grey – not in the scale

Scales with more green in them sound “happier” or brighter. Scales with more blue sound “sadder” or darker. Scales with a mixture of blue and green (like harmonic minor) will have a more complex and ambiguous feeling.
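
If you wanted to compute the coloring rather than draw it by hand, here's one possible encoding in Python. One assumption to flag: the tritone (pitch class 6) is ambiguous under this scheme — blue if you spell it as a diminished fifth, green as an augmented fourth — and I've arbitrarily treated it as blue here.

```python
COLORS = {0: "purple", 5: "purple", 7: "purple",             # perfect
          2: "green", 4: "green", 9: "green", 11: "green",   # major/natural
          1: "blue", 3: "blue", 6: "blue", 8: "blue", 10: "blue"}  # minor/flat

def color_scale(scale):
    """Color each chromatic slot: scale tones by quality, the rest grey."""
    return [COLORS[pc] if pc in scale else "grey" for pc in range(12)]

# C harmonic minor mixes green (D, B) and blue (Eb, Ab): an ambiguous mood.
print(color_scale({0, 2, 3, 5, 7, 8, 11}))
```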

My ambition with the pitch wheels is not just to make the aQWERTYon’s scale menu more visually appealing. I’d eventually like them to be an interactive way to visualize chords too. Followers of this blog will notice a strong similarity between the circular scale and the rhythm necklaces that inspired the Groove Pizza. Just as symmetries and patterns on the rhythm necklace can tell you a lot about how beats work, symmetries and patterns on the scale necklace can tell you how harmony works. So here’s my dream for the aQWERTYon’s future theory visualization interface. If you load the app and set it to C harmonic minor, here’s how it would look. To the right is a staff notation view with the appropriate key signature.

When you play a note, it would change color on the keyboard and the wheel, and appear on the staff. The app would also tell you which scale degree it is (in this case, the seventh).

If you play two notes simultaneously, in this case the third and seventh notes in C Mixolydian mode, the app would draw a line between the two notes on the circle:

If you play three notes at a time, like the first, fourth and fifth notes in C Lydian, you’d get a triangle.

If your three notes spell out a chord, like the second, fourth and sixth notes in C Phrygian mode, the app would recognize it and show the chord symbol on the staff.
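
A chord recognizer like this could work by trying each played note as a candidate root and checking the resulting interval pattern against a table of triad shapes. A simplified Python sketch (it only knows the four basic triads, and spells every root with flats):

```python
TRIAD_SHAPES = {(0, 4, 7): "major", (0, 3, 7): "minor",
                (0, 3, 6): "diminished", (0, 4, 8): "augmented"}
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def identify_triad(pitch_classes):
    """Try each note as the root; return a chord name if the shape matches."""
    for root in pitch_classes:
        shape = tuple(sorted((pc - root) % 12 for pc in pitch_classes))
        if shape in TRIAD_SHAPES:
            return NOTE_NAMES[root] + " " + TRIAD_SHAPES[shape]
    return None

# The 2nd, 4th and 6th degrees of C Phrygian (Db, F, Ab) spell a Db major triad.
print(identify_triad({1, 5, 8}))  # Db major
```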

The pattern continues if you play four notes at a time:

Or five notes at a time:

By rotating the outer ring of the pitch wheel, you could change the root of the scale, as I showed above with C harmonic minor. And if you rotated the inner ring, showing the scale degrees, you could get different modes of the scale. Modes are one of the most difficult concepts in music theory. That is, they’re difficult until you learn to imagine them as rotations of the scale necklace, at which point they become nothing harder than a memorization exercise.
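
Viewed as code, a mode really is nothing more than a rotation. If you write a scale as its pattern of steps between successive degrees, rotating that list gives you its modes. A small sketch:

```python
def mode(scale_steps, degree):
    """Rotate a scale's step pattern to start from a different degree (0-based)."""
    return scale_steps[degree:] + scale_steps[:degree]

# Major scale as semitone steps between degrees: whole whole half, etc.
MAJOR = [2, 2, 1, 2, 2, 2, 1]

# Starting from the 2nd degree gives Dorian mode.
print(mode(MAJOR, 1))  # [2, 1, 2, 2, 2, 1, 2]

# Starting from the 6th degree gives natural minor (Aeolian mode).
print(mode(MAJOR, 5))  # [2, 1, 2, 2, 1, 2, 2]
```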

I’m designing this system to be used with the aQWERTYon, but there’s no reason it couldn’t take ordinary MIDI input as well. Wouldn’t it be nice to have this in a window in your DAW or notation program?

Music theory is hard. There’s a whole Twitter account devoted to retweeting students’ complaints about it. Some of this difficulty is due to the intrinsic complexity of modern harmony. But a lot of it is due to terminology and notation. Our naming system for notes and chords is a set of historically contingent kludges. No rational person would design it this way from the ground up. Thanks to path dependency, we’re stuck with it, much like we’re stuck with English grammar and the QWERTY keyboard layout. Fortunately, technology gives us a lot of new ways to make all the arcana more accessible, by showing multiple representations simultaneously and by making those representations discoverable through playful tinkering.

Do you find this idea exciting? Would you like it to be functioning software, and not just a bunch of flat images I laboriously made by hand? Help the MusEDLab find a partner to fund the developer and designer time. A grant or gift would work, and we’d also be open to exploring a commercial partnership. The aQW has been a labor of volunteer love for the lab so far, and it’s already one of the best music theory pedagogy tools on the internet. But development would go a lot faster if we could fund it properly. If you have ideas, please be in touch!

Update: Will Kuhn’s response to this post.

Affordances and Constraints

Note-taking for User Experience Design with June Ahn

Don Norman discusses affordances and constraints in The Design of Everyday Things, Chapter Four: Knowing What To Do.

Don Norman - The Design of Everyday Things

User experience design is easy in situations where there’s only one thing that the user can possibly do. But as the possibilities multiply, so do the challenges. We can deal with new things using information from our prior experiences, or by being instructed. The best-designed things include the instructions for their own use, like video games whose first levels act as tutorials, or doors with handles that communicate how you should operate them by their shape and placement.

We use affordances and constraints to learn how things work. Affordances suggest the range of possibilities, and constraints limit the alternatives. Constraints include:

  • Physical limitations. Door keys can only be inserted into keyholes vertically, but you can still insert the key upside down. Car keys work in both orientations.
  • Semantic constraints. We know that red lights mean stop and green lights mean go, so we infer that a red light means a device is off or inoperative, and a green light means it’s on or ready to function. We have a slow cooker that uses lights in the opposite way and it screws me up every time.
  • Cultural constraints. Otherwise known as conventions. (Not sure how these are different from semantic constraints.) Somehow we all know without being told that we’re supposed to face forward in the elevator. Google Glass was an epic failure because its early adopters ran into the cultural constraint of people not liking to be photographed without consent.
  • Logical constraints. The arrangement of knobs controlling your stove burners should match the arrangement of the burners themselves.

The absence of constraints makes things confusing. Norman gives examples of how much designers love rows of identical switches which give no clues as to their function. Distinguishing the switches by shape, size, or grouping might not look as elegant, but would make it easier to remember which one does what thing.

Helpful designs use visibility (making the relevant parts visible) and feedback (giving actions an immediate and obvious effect). Everyone hates the power buttons on iMacs because they’re hidden on the back, flush with the case. Feedback is an important way to help us distinguish the functional parts from the decorative ones. Propellerhead’s Reason is an annoying program because its skeuomorphic design puts as many decorative elements on the screen as functional ones. Ableton Live is easier to use because everything on the screen is functional.

When you can’t make things visible, you can give feedback via sound. Pressing a Mac’s power button doesn’t immediately cause the screen to light up, but that’s okay, because it plays the famous startup sound. Norman’s examples of low-tech sound feedback include the “zzz” sound of a functioning zipper, a tea kettle’s whistle, and the various sounds that machines make when they have mechanical problems. The problem with sound as feedback is that it can be intrusive and annoying.

The term “affordance” is the source for a lot of confusion. Norman tries to clarify it in his article “Affordance, Conventions and Design.” He makes a distinction between real and perceived affordances. Anything that appears on a computer screen is a perceived affordance. The real affordances of a computer are its physical components: the screen itself, the keyboard, the trackpad. The MusEDLab was motivated to create the aQWERTYon by considering the computer’s real affordances for music making. Most software design ignores the real affordances and only considers the perceived ones.

Designers of graphical user interfaces rely entirely on conceptual models and cultural conventions. (Consider how many programs use a graphic of a floppy disk as a Save icon, and now compare to the last time you saw an actual floppy disk.) For Norman, graphics are perceived affordances by definition.

Joanna McGrenere and Wayne Ho try to nail the concept down harder in “Affordances: Clarifying and Evolving a Concept.” The term was coined by the perceptual psychologist James J. Gibson in his book The Ecological Approach to Visual Perception. For Gibson, affordances exist independent of the actor’s ability to perceive them, and don’t depend on the actor’s experiences and culture. For Norman, affordances can include both perceived and actual properties, which to me makes more sense. If you can’t figure out that an affordance exists, then what does it matter if it exists or not?

Norman collapses two distinct aspects of design: an object’s utility and the way that users learn or discover that utility. But are designing affordances and designing the information about the affordances the same thing? McGrenere and Ho say no, that it’s the difference between usefulness versus usability. They complain that the HCI community has focused on usability at the expense of usefulness. Norman says that a scrollbar is a learned convention, not a real affordance. McGrenere and Ho disagree, because the scrollbar affords scrolling in a way that’s built into the software, making it every bit as much a real affordance as if it were a physical thing. The learned convention is the visual representation of the scrollbar, not the basic fact of it.

The best reason to distinguish affordances from their communication or representation is that sometimes the communication gets in the way of the affordance itself. For example, novice software users need graphical user interfaces, while advanced users prefer text commands and keyboard shortcuts. A beginner needs to see all the available commands, while a pro prefers to keep the screen free of unnecessary clutter. Ableton Live is a notoriously beginner-unfriendly program because it prioritizes visual economy and minimalism over user handholding. A number of basic functions are either invisible or so tiny as to be effectively invisible. Apple’s GarageBand welcomes beginners with photorealistic depictions of everything, but its lack of keyboard shortcuts makes it feel like wearing oven mitts for expert users. For McGrenere and Ho, the same feature of one of these programs can be an affordance or anti-affordance depending on the user.

Learning music from Ableton

Ableton recently launched a delightful web site that teaches the basics of beatmaking, production and music theory using elegant interactives. If you’re interested in music education, creation, or user experience design, you owe it to yourself to try it out.

Ableton - Learning Music site

One of the site’s co-creators is Dennis DeSantis, who wrote Live’s unusually lucid documentation, as well as Ableton’s first book, a highly-recommended collection of strategies for music creation (not just in the electronic idiom).

Dennis DeSantis - Making Music

The other co-creator is Jack Schaedler, who also created this totally gorgeous interactive digital signal theory primer.

If you’ve been following the work of the NYU Music Experience Design Lab, you might notice some strong similarities between Ableton’s site and our tools. That’s no coincidence. Dennis and I have been having an informal back and forth on the role of technology in music education for a few years now. It’s a relationship that’s going to get a step more formal this fall at the 2017 Loop Conference – more details on that as it develops.

Meanwhile, Peter Kirn’s review of the Learning Music site raises some probing questions about why Ableton might be getting involved in education in the first place. But first, he makes some broad statements about the state of the musical world that are worth repeating in full.

I think there’s a common myth that music production tools somehow take away from the need to understand music theory. I’d say exactly the opposite: they’re more demanding.

Every musician is now in the position of composer. You have an opportunity to arrange new sounds in new ways without any clear frame from the past. You’re now part of a community of listeners who have more access to traditions across geography and essentially from the dawn of time. In other words, there’s almost no choice too obvious.

The music education world has been slow to react to these new realities. We still think of composition as an elite and esoteric skill, one reserved only for a small class of highly trained specialists. Before computers, this was a reasonable enough attitude to have, because it was mostly true. Not many of us can learn an instrument well enough to compose with it, then learn to notate our ideas. Even fewer of us will be able to find musicians to perform those compositions. But anyone with an iPhone and twenty dollars worth of apps can make original music using an infinite variety of sounds, and share that music online to anyone willing to listen. My kids started playing with iOS music apps when they were one year old. With the technical barriers to musical creativity falling away, the remaining challenge is gaining an understanding of music itself, how it works, why some things sound good and others don’t. This is the challenge that we as music educators are suddenly free to take up.

There’s an important question to ask here, though: why Ableton?

To me, the answer to this is self-evident. Ableton has been in the music education business since its founding. Like Adam Bell says, every piece of music creation software is a de facto education experience. Designers of DAWs might even be the most culturally impactful music educators of our time. Most popular music is made by self-taught producers, and a lot of that self-teaching consists of exploring DAWs like Ableton Live. The presets, factory sounds and affordances of your DAW powerfully inform your understanding of musical possibility. If DAW makers are going to be teaching the world’s producers, I’d prefer that they do it intentionally.

So far, there has been a divide between “serious” music making tools like Ableton Live and the toy-like iOS and web apps that my kids use. If you’re sufficiently motivated, you can integrate them all together, but it takes some skill. One of the most interesting features of Ableton’s web site, then, is that each interactive tool includes a link that will open up your little creation in a Live session. Peter Kirn shares my excitement about this feature.

There are plenty of interactive learning examples online, but I think that “export” feature – the ability to integrate with serious desktop features – represents a kind of breakthrough.

Ableton Live is a superb creation tool, but I’ve been hesitant to recommend it to beginner producers. The web site could change my mind about that.

So, this is all wonderful. But Kirn points out a dark side.

The richness of music knowledge is something we’ve received because of healthy music communities and music institutions, because of a network of overlapping ecosystems. And it’s important that many of these are independent. I think it’s great that software companies are getting into the action, and I hope they continue to do so. In fact, I think that’s one healthy part of the present ecosystem.

It’s the rest of the ecosystem that’s worrying – the one outside individual brands and what they support. Public music education is getting squeezed in different ways all around the world. Independent content production is, too, even in advertising-supported publications like this one, but more so in other spheres. Worse, I think education around music technology hasn’t even begun to be reconciled with traditional music education – in the sense that people with specialties in one field tend not to have any understanding of the other. And right now, we need both – and both are getting their resources squeezed.

This might feel like I’m going on a tangent, but if your DAW has to teach you how harmony works, it’s worth asking the question – did some other part of the system break down?

Yes it did! Sure, you can learn the fundamentals of rhythm, harmony, and form from any of a thousand schools, courses, or books. But there aren’t many places you can go to learn about them in the context of Beyoncé, Daft Punk, or A Tribe Called Quest. Not many educators are hip enough to include the Sleng Teng riddim as one of the fundamentals. I’m doing my best to rectify this imbalance–that’s what my Soundfly courses are for. But I join Peter Kirn in wondering why it’s left to private companies to do this work. Why isn’t school music more culturally relevant? Why do so many educators insist that “you kids like the wrong music”? Why is it so common to get a music degree without ever writing a song? Why is the chasm between the culture of school music and music generally so wide?

Like Kirn, I’m distressed that school music programs are getting their budgets cut. But there’s a reason that’s happening, and it isn’t that politicians and school boards are philistines. Enrollment in school music is declining in places where the budgets aren’t being cut, and even where schools are offering free instruments. We need to look at the content of school music itself to see why it’s driving kids away. Both the content of school music programs and the people teaching them are whiter than the student population. Even white kids are likely to be alienated from a Eurocentric curriculum that doesn’t reflect America’s increasingly Afrocentric musical culture. The large ensemble model that we imported from European conservatories is incompatible with the riot of polyglot individualism in the kids’ earbuds.

While music therapists have been teaching songwriting for years, it’s rare to find it in school music curricula. Production and beatmaking are even more rare. Not many adults can play oboe in an orchestra, but anyone with a guitar or keyboard or smartphone can write and perform songs. Music performance is a wonderful experience, one I wish were available to everyone, but music creation is on another level of emotional meaning entirely. It’s like the difference between watching basketball on TV and playing it yourself. It’s a way to understand your own innermost experiences and the innermost experiences of others. It changes the way you listen to music, and the way you approach any kind of art for that matter. It’s a tool that anyone should be able to have in their kit. Ableton is doing the music education world an invaluable service; I hope more of us follow their example.

Research proposal – Hip-Hop Pedagogy

Final paper for Principles of Empirical Research with Catherine Voulgarides

Research questions

Jamie Ehrenfeld is a colleague of mine in the NYU Music Experience Design Lab. She graduated from NYU’s music education program, and now teaches music at Eagle Academy in Brownsville. Like many members of the lab, she straddles musical worlds, bringing her training in classical voice to her work mentoring rappers and R&B singers. We often talk about our own music learning experiences. In one such discussion, Jamie remarked: “I got a music degree without ever writing a song” (personal communication, April 29 2017). Across her secondary and undergraduate training, she had no opportunity to engage with the creative processes behind popular music. Her experience is hardly unusual. There is a wide and growing divide between the culture of school music and the culture of music generally. Music educators are steeped in the habitus of classical music, at a time when our culture is increasingly defined by the music of the African diaspora: hip-hop, R&B, electronic dance music, and rock.

The music academy’s near-exclusive focus on Western classical tradition places it strikingly at odds with the world that our students inhabit. In this paper, I examine the ideological basis for this divide. Why does the music academy generally and the training of music educators in particular hold so closely to the traditions of Western European classical music? Why has the music academy been slow to embrace African diasporic vernacular musics? Why does it outspokenly reject hip-hop? What racial and class forces drive the divide between music educators and the culture of their students? How might we make music education more culturally responsive? How can music educators support students in developing their own musical creativity via songwriting and beatmaking? What assumptions about musical and educational values must we challenge in order to do so?

Framing of research topic

Music education scholars commonly use “non-Western” as a shorthand for music outside the European classical tradition. This might lead one to naively believe that hip-hop is non-Western music. But it arose in the United States, so how can that be? Are our racial and ethnic minorities part of our civilization, or are they not? While the American cultural mainstream has increasingly embraced black musical styles, the music education field has not followed suit. As an example, consider a meme posted to a group for music teachers on Facebook. The meme’s original author is unknown. The caption was something like, “Typical middle school/high school student.” I will leave the person who posted it to Facebook anonymous, because they no doubt meant well.

You kids like the wrong music

The meme-maker is dismayed that young people do not care how little their music adheres to the stylistic norms of the Western European classical tradition. The author dismisses contemporary popular music and cannot imagine why anyone else might enjoy it. The condescending presumption is that young people do not “really” enjoy pop, that they are being tricked into it by marketing and image, and that they are too lazy and ignorant to make critical choices. The choice of the word “molester” is a remarkable one, with its connotation of sexual violence. Classically trained educators feel their culture to be under attack, with their own students leading the charge.

Eurocentrism in American music education

In examining educational practice, we must look for the “hidden curriculum” (Anyon, 1980), the ideological content that comes along with the ostensible curricular goals. For example, The Complete Musician by Steven Laitz (2015) is a widely used college-level theory text. (I used a similar book of Laitz’s to fulfill my own graduate music theory requirement.) The title asserts an all-encompassing scope, but the text only discusses Western classical harmony and counterpoint. Other elements of music, like rhythm or timbre, receive cursory treatment at most. African diasporic and non-Western musics are not mentioned. The hidden curriculum here is barely even hidden. McClary (2000) asks why the particular musical conventions that emerged in Europe during the eighteenth and nineteenth centuries appealed so much to musicians and audiences, what needs they satisfied, and what cultural functions they performed. We might ask, since those conventions no longer appeal to most musicians or audiences, whose needs are being satisfied by school music? What cultural functions is it performing?

America has embraced every black musical form from ragtime through trap. But while our laws and culture have become less overtly racist over time, the oppression of people of color continues, African-Americans especially. For example, while they are no more likely to use drugs than white people, black people are many times more likely to be incarcerated for it. A white applicant with a felony drug conviction is more likely to get a callback for an entry-level job than a black applicant with no criminal record at all (Pager, 2007). Our large cities are extraordinarily segregated, with black neighborhoods isolated and concentrated (Massey & Denton, 1993). Perhaps this isolation has contributed to the evolution of hip-hop and its radical break with European-descended musical practices. Perry (2004) argues that, while hip-hop is a hybrid music, it is nevertheless a fundamentally black one due to four central characteristics:

(1) its primary language is African American Vernacular English (AAVE); (2) it has a political location in society distinctly ascribed to black people, music, and cultural forms; (3) it is derived from black American oral culture; and (4) it is derived from black American musical traditions (Perry 2004, 10).

The white mainstream adores the music while showering the people who created it with contempt (Perry 2004, 27).

Black music versus white educators

If the popular mainstream is dominated by innovations in black music, the field of music education is unified by its extraordinary whiteness, both demographically and musically. Prospective teachers tend to be white, and come from suburban, low-poverty areas (Doyle, 2014). There is corresponding disproportionality among participants in formal music classes and ensembles—privileged groups are overrepresented, while less-privileged groups are underrepresented. This is true for white students versus students of color, high-SES students versus low-SES students, native English speakers versus English language learners, students whose parents have more versus less education, and so on (Elpus & Abril, 2011). Some of the disparity is due to the fact that schools in less privileged communities are less likely to offer music in the first place. But the disparities hold true among schools that do offer music, and persist even when schools supply free instruments. Lack of access alone cannot explain the overwhelming whiteness and privilege of most participants in school music.

A great deal of research shows enrollment in school music declining precipitously over the past few decades. Budget cuts alone cannot explain this decline, since enrollment in other arts courses has not declined as much (Kratus, 2007). As America’s student population becomes less white, its Eurocentric music education culture is evidently becoming steadily less appealing. Finney (2007) attributes the gap between music educators and their students to differing musical codes. “Teachers tend to use elaborated codes derived from Western European ‘elite’ culture, whereas students use vernacular codes… Students and teachers are therefore in danger of standing on opposite sides of a musical and linguistic chasm with few holding the key to unlock the other’s code” (18). Williams (2011) points to the large ensemble model of school music that was imported to the United States from the European conservatory tradition in the early twentieth century, and which has barely changed since. Music educators teach what they learned, and what they learned is likely to have been the conservatory-style large ensemble.

Is the solution to expand the canon of “acceptable” music to include more artists of color? A typical undergraduate music history curriculum now tacks Duke Ellington or Charlie Parker onto the end of the succession of white European composers. But the canon is a political entity, not just an aesthetic one. If we try to expand the canon to include a greater diversity of musics, we will fail to challenge the basic fact of its existence and its role in academic culture. “[T]he canon is an epistemology; it is a way of understanding the world that privileges certain aesthetic criteria and that organizes a narrative about the history and development of music around such criteria and based on that understanding of the world. In other words, the canon is an ideology more than a specific repertory” (Madrid 2017, 125). Diversity is of no help if we simply use it to perpetuate privilege and power inequalities. “What does it mean when the tools of a racist patriarchy are used to examine the fruits of that same patriarchy? It means that only the most narrow parameters of change are possible and allowable” (Lorde 1984, 110). Rather than making incremental changes to the canon, we must ask how we can re-orient the basic assumptions of music education, its mission, its values, and its goals.

Literature review

In this section, I examine the present state of music education scholarship addressing the racial and class dynamics of music education, as well as the rise of culturally responsive pedagogies, particularly surrounding hip-hop.

Who is school music for?

By excluding entire categories of music and musicianship from the official curriculum, music educators send powerful and lasting messages to students (and everyone else) about what our society values and what it does not (Bledsoe, 2015). I am living proof; my own experiences with school music left me bored and alienated, and I came to the conclusion that I was not a musician at all. It took me years of self-guided practice to disabuse myself of that notion. I have had endless conversations with non-classical musicians at every level about how they do not regard themselves as “real” or “legitimate” musicians, no matter how professionally or creatively accomplished they may be. Fortunately, school music is not the only vector for music education. Most popular musicians learn informally from peers or on their own, a method that has become easier thanks to the internet. Still, the stigma of “failure” is a heavy psychological burden to overcome.

School music is usually competitive. There is a competitive process to become part of an ensemble, and those ensembles compete intramurally in much the same way that sports teams do. Conservatories that produce professional musicians need to be competitive. But should we continue to model all school music on the conservatory? The similarity between school ensembles and sports teams should trouble us. Schools are not obligated to let everyone play varsity football, regardless of ability. However, we do believe that schools should teach everyone reading and math. Our efforts to support struggling readers and math learners may be inadequate or even counterproductive, but at least we try to meet all students’ needs, and we certainly do not exclude low performers from studying these subjects entirely.

Some music teachers appear to exhibit the attitude of a physician who complains that all the patients in the waiting room are sick! In other words, they prefer to work only with the talented, ‘musically healthy’ few, when it is those who are in the most need of intervention who deserve at least equal attention (Regelski 2009, 32).

What if we held music teachers to the standards of math teachers rather than football coaches? We might follow the model of physical education classes and public health initiatives, prioritizing lifetime wellness over the identification and training of elite athletes only (Dillon, 2007).

Music and identity

In traditional aesthetic approaches to the Eurocentric canon, the locus of musical expressivity and meaning is held to reside entirely within the music itself. Listeners’ subjective experiences are not considered significant; our job is to decipher the formal relationships that the composer has encoded into the score. By contrast, Elliott and Silverman (2015) argue that we should take an embodied approach to musical understanding, seeing music as an enactive process emerging from the performance and listeners’ experience of it in social/emotional context. In the embodied approach, we see music as a tool for listeners to make their own meaning, to build their identity, and to communicate and modulate their emotions, all by means of bodily and social lived experience (van der Schyff, Schiavio & Elliott, 2016). Music is “a device for ordering the self” (DeNora 2000, 73). The role of music in building individual and group identity and a sense of belonging is especially critical in adolescence, when its ability to release or control difficult emotions may be literally lifesaving (Campbell, Connell & Beegle, 2007).

Music can also be the organizing principle behind new cultures and subcultures, a locus for tribal self-identification. Turino (2016) proposes that participatory music cultures offer an alternative form of citizenship, with the potential to be fundamental to our sense of self and a cornerstone of our happiness.

Fostering creative expression

Ruthmann (2007) suggests that we teach music the way that English teachers teach writing: use creative prompts that encourage students to develop individual authentic voices capable of expressing their own ideas and thoughts. Like writing generally, songwriting is hardly an elite or specialized practice. All young children spontaneously make up songs, which can sometimes be strangely catchy. My son wrote his first song at age four without any prompting or assistance, inspired by an episode of Thomas The Tank Engine (Pomykala-Hein, 2017). For many young people, music is entirely comprised of songs (Kratus, 2016). But after elementary school, school music is more about “pieces” than songs, symptomatic of the broader gap between in-school and out-of-school music cultures.

While music therapists have long taught songwriting, it is a rare practice in school music curricula. Kratus advocates songwriting for its therapeutic benefits, and for its lifelong learning benefits as well. Few adults have the opportunity to play oboe in an orchestra, but anyone with a guitar or keyboard or smartphone can write and perform songs. Historically, the technology for writing English has been dramatically more accessible than the technology for writing music, but that is changing rapidly. The software and hardware for recording, producing and composing music become cheaper and more user-friendly with each passing year. The instrumental backing track for “Pride” by Kendrick Lamar (2017) was produced by the eighteen-year-old Steve Lacy entirely on his iPhone. What are the other creative possibilities inherent in the devices students carry in their pockets and backpacks?

The psychological benefits of songwriting extend beyond musical learning. Like other art media, songwriting is an opportunity to practice what Sennett (2008) calls “craftsmanship,” defined as “the desire to do a job well for its own sake.” Craftsmanship is a habit of mind that “serves the computer programmer, the doctor, and the artist; parenting improves when it is practiced as a skilled craft, as does citizenship” (Sennett 2008, 9). Musical performers exercise craftsmanship as well, but not along as many different dimensions as songwriters and producers do.

Music creation is also a potential site of ethical development. We treat our favorite songs as imaginary people who we feel loving toward and protective of. This kind of idealization is akin to what we do “when we constitute others as persons, or when we invest others with personhood” (Elliott & Silverman 2015, 190). We imagine a personhood for the music, and we try to make that personhood real. In so doing, we learn how to create personhood for each other, and for ourselves. The point of musical education should not just be training in music, but developing ethical people through music (Bowman 2007, 2016). We can consider musical sensitivity to be a particular form of emotional sensitivity, and musical intelligence to be a particular application of emotional intelligence. Musical problem solving is an excellent simulator for social problem solving generally. Both in music and in life, the challenges are ambiguous, contingent, and loaded with irreconcilable contradiction. Performance and interpretation entail some musical problem-solving, but in the classical ensemble model that is typically the purview of the conductor. Songwriting poses musical problem-solving challenges to all who attempt it.

Hip-hop pedagogies

Brian Eno (2004) observes that the recording studio is a creative medium unto itself, one with different requirements for musicality from composition or performance. Indeed, no “composing” or “performing” need ever take place in modern studio practice. Eno is a case in point: while he has produced a string of famous and revered recordings, he does not consider himself to be adept at any instrument, and cannot read or write notation. The digital studio has collapsed the distinction between musicians, composers, and engineers (Bell, 2014). The word “producer” is a useful descriptor for creators working across such role boundaries. In the analog recording era, producers were figures like Quincy Jones, executive managers of a commercial process. However, the term “producer” has come to describe anyone creating recorded music in any capacity, including songwriting, beatmaking, MIDI sequencing, and audio manipulation. We might expand the word further to include anyone who actively creates music, be it recorded, notated or live. To be a producer is a category of behavior, not a category of person.

Contemporary popular music is produced more than it is performed. This is nowhere more true than in the case of hip-hop, which in its instrumental aspect is almost entirely “postperformance” (Thibeault, 2010). The processes of producers like J Dilla and Kanye West resemble those of Brian Eno far more than those of Quincy Jones. This dramatic break with traditional musical practice poses major challenges for educators trained in the classical idiom, but it also presents new opportunities for culturally relevant and critically engaged pedagogy. Hip-hop-based education is mostly discussed in the urban classroom context, aimed toward “at-risk” youth (Irby & Hall, 2011). However, as hip-hop has expanded from its black urban origins to define the rest of mainstream musical culture, so too can it move into the educational mainstream.

There are several ways to incorporate hip-hop into education. Pedagogies with hip-hop connect hip-hop cultures and school experiences, using hip-hop as a bridge. Pedagogies about hip-hop engage teachers and students with critical perspectives on issues within the music and its culture, using hip-hop as a lens. Pedagogies of hip-hop apply hip-hop worldviews and practices within education settings (Kruse, 2016). Music educators can use hip-hop to enhance cultural relevance and connect to the large and growing percentage of students who identify as part of hip-hop culture. However, it is the use of hip-hop practices that most interests me as a research direction.

We should avoid using hip-hop as bait to get kids interested in “legitimate” music. Instead, we can apply the hip-hop ethos of authentic, culturally engaged expression to music education generally. Kratus (2007) points out that large ensembles are some of the last remaining school settings where the teaching model maintains a top-down autocratic structure, untouched by the cognitive revolution. This method does not create independently functioning musicians. How might we find ways for students to engage in music on their own cultural and technological terms? One method is to use sampling and remixing of familiar music as an entry point into creation. This is the approach taken by Will Kuhn (personal communication, 2017), who teaches high school students to build songs entirely out of pieces of existing songs. Students can then replace those appropriated samples with material of their own.

Hip-hop has many controversial aspects, but none provokes the ire of legacy musicians more than the practice of sampling. There is a widespread perception that sampling is nothing more than a way to avoid learning instruments or hiring musicians. This may be true in some instances, but it is easy to identify examples of artists who went to considerable expense and trouble to license samples when they did not need to do so. For example, while Ahmir “Questlove” Thompson of the Roots is a highly regarded drummer, he still uses sampled breakbeats in his productions. Why would he prefer a sample to his own playing? In hip-hop, “[e]xisting recordings are not randomly or instrumentally incorporated so much as they become the simultaneous subject and object of a creative work” (Cutler 2004, 154). Samples have specific timbral qualities that evoke specific memories and associations, situating the music in webs of intertextual reference.

Rice (2003) encourages non-music educators to draw on the practice of sampling. Students might approach cultural artifacts and texts the way that producers approach recorded music, looking for fragments that might be appropriated and repurposed to form the basis of new works.

The pedagogical sampler, with a computer or without a computer, allows cultural criticism to save isolated moments and then juxtapose them as a final product. The student writer looks at the various distinct moments she has collected and figures out how these moments together produce knowledge. Just as DJs often search for breaks and cuts in the music that reveal patterns, so, too, does the student writer look for a pattern as a way to unite these moments into a new alternative argument and critique (465).

Rice advocates what he calls the “whatever” principle of sampling. In the hip-hop context, “whatever” can have two meanings. First, there is the conventional sense of the word, that everything is on the table, that anything goes. There is also the slang sense of “whatever” as a statement of defiance, indifference, and dismissal. In a pedagogical context, the “whatever” principle encourages us to be accepting of what is new and unexpected, and be dismissive of what is fake or irrelevant. As Missy Elliott (2002) puts it: “Whatever, let’s just have fun. It’s hip-hop, man, this is hip-hop.”

I asked Jamie Ehrenfeld what kind of material she might have written if she had written songs while getting her music degree. She responded:

I would think of bits of music in my head and then associate them with some other song I’d already heard and felt like nothing I could think of was really original, and I didn’t get that it’s okay that in writing a song having some elements of other songs can come together to make something new, and that actually being original is more of what existing pieces you weave together in addition to ‘original’ thought (personal communication, April 28 2017).

In other words, the sampling ethos might have validated the intuitive creative processes she was already spontaneously carrying out, whether she had realized those impulses in the form of digitally produced recordings or pencil-and-paper scores.

Can a work based on samples be wholly original? Perhaps not. But hip-hop slang offers a different standard of quality that may be more apposite: the idea of freshness. The word “fresh” has several senses. It can mean new or different; well-rested, energetic, and healthy-looking; or, applied to food, water, or air, clean and appealing. “Fresh” is also a dated slang term for impudence or impertinence. In hip-hop culture, “fresh” is one among many synonyms for “cool,” but it could be referencing any of the various original senses of the word: new, refreshing, appetizing, attractive, or sassy. Rather than evaluating music in terms of its originality, we might judge music by its freshness (Hein, 2015). A track that includes samples cannot be wholly original by definition, but it can be fresh. It is this sense of making new meaning out of existing resources that animates the Fresh Ed curriculum (Miles et al., 2015), a culturally responsive teaching resource created by the Urban Arts Partnership. Rather than treating students as receptacles for information, Fresh Ed places new knowledge in familiar contexts, for example in the form of rap songs. When students are able to draw on their prior knowledge and cultural competencies, they are better equipped to engage and think critically.

Proposed methods

Luker (2008) describes the case that chooses you, or that you sample yourself into (131). My own trajectory as a musician and educator has made me an exemplar of the shortcomings of Eurocentric music pedagogy and the benefits of personal creativity through producing and songwriting; certainly it feels like this case chose me. Since my own motivations are borne out of subjective experience, and since my research questions were provoked by the experiences of others like me, my research into those questions must necessarily follow an interpretivist paradigm. In choosing methods aligning to that paradigm, I want to identify one that supports the use of music creation itself as a tool for inquiry into music pedagogy. One such method is Eisner’s (1991) model of educational inquiry by means of connoisseurship and criticism. Connoisseurship is the “ability to make fine-grained discriminations among complex and subtle qualities” (Eisner 1991, 63). Criticism is judgment that illuminates and interprets the qualities of a practice in order to transform it. As a subjective researcher, I am obliged to systematically identify my subjectivity (Peshkin, 1988), and I view my role as connoisseur and critic in music as a source of clarity rather than bias.

Ethnography

An interpretivist paradigm is well supported by methods of ethnography, since participant observation and unstructured interviews dovetail exactly with a subjectivist epistemology. Ethnographers typically allow their methods to evolve over the course of the study, and can only define their procedures in retrospect, in the form of a narrative of what actually happened, rather than a detailed plan ahead of time. This form of research is iterative, like agile software development. Data comes in the form of interpretations of interpretations of interpretations, and in that sense is a “fiction”—not in the sense that it is counterfactual (we hope), but in the original sense of the word, a thing that is constructed. We must involve our imagination in constructing our interpretive fictions (Geertz, 1973).

Institutional ethnographers examine work settings and processes, combining observation with discourse analysis of texts, documents and procedures. The goal is to show how people in the workplace align their activities with structures that may originate elsewhere (DeVault, 2006). This method asks us to seek out “ruling relations” (Smith 2005, 11), textually mediated connections and organizations shaping everyday life, especially those that are the most taken for granted. In so doing, we examine the ways that texts bind small social groups into institutions, and bind those together into larger power structures. This method is well suited to a profession like music teaching.

Taber (2010) combines autoethnography with institutional ethnography to tell the story of her own experience in the military, as an entry point into understanding the experience of other women. She questions whether researching the lives of others was a way to hide from her own problematic experience, and chooses instead to foreground her internal conflicts, using a “reflexivity of discomfort” (19). This is emblematic of the institutional ethnographic practice of examining aspects of organizations that their inhabitants find problematic, troubling or contradictory. Since the story of my own music education is one of internal conflict and discomfort, I expect a similar method to Taber’s to yield rich results.

Naturally, an inquiry into music education will involve some ethnomusicology. Given how technologically mediated hip-hop and other contemporary forms are, it will be useful to take on the lens of “technomusicology” (Marshall, 2017). Music educators who feel pressured to use computers in their practice quickly run up against the fact that digital audio tools are a poor fit for classical music. However, these tools are the most natural medium for hip-hop and other electronic dance musics. The technological and cultural issues are inseparable.

Hip-hop grows out of orality and African-American Vernacular English. Therefore, it is prone to being dismissed by scholars working in a literate value system. Similarly, it is all too common to view AAVE through the lens of deprivationism, as a failure to learn “correct” English. To overcome this spurious attitude, we can employ an ethnopoetic approach. Speakers of AAVE are only linguistically “impoverished” because we institutionally deem them to be so, not because they have any difficulty communicating or expressing themselves (McDermott & Varenne, 1995). By the same token, classical music culture sees the lack of complex harmony and melody in African diasporic music like hip-hop as a shortcoming, a poverty of musical means. But the hip-hop aesthetic puts a premium on rhythm and timbre, and harmony functions mostly as a way to signpost locations within the cyclical metrical structure. In learning to value hip-hop on its own terms, we broaden our ability to understand other musical and cultural value systems as well.

Participatory research

Participatory research methods like cooperative inquiry and participatory action research treat research participants as collaborators, rather than as objects of study. The related method of constructivist instructional design puts these principles into action in the form of new technologies, experiences and curricula, the educational equivalent of critical theorists’ activism. When teachers and designers act as researchers, they function as participant observers. While I am an avid hip-hop fan and a dedicated student of it, I am ultimately a tourist. My research will therefore necessarily be incomplete unless it is a collaborative effort with members of hip-hop culture.

Instructional design as participatory research follows a Reflective and Recursive Design and Development (R2D2) model, based on the principles of recursion, nonlinearity and reflection (Willis, 2007). Designers test and prototype continually alongside users, and feed the results back into the next design iteration. This process for developing instructional material enables end users and experts to work jointly toward the end product. This loop of feedback and iteration is an example of reflective practice, made up of the “arts” of problem framing, implementation, and improvisation (Schön, 1987). These same arts are the ones used in musical problem-solving, both as a practitioner and educator. The Music Experience Design Lab follows a participatory design methodology in developing our technologies for music learning and expression, and the idea of using the same techniques to examine the broader social context of our work is quite appealing to me.

Narrative inquiry

There may be universal physical truths, but mental, emotional and social truths are contextual and particular. To examine these truths, then, we need verstehen, understanding of context, both historical and contemporary (Willis, 2007). To that end, we can draw on phenomenology, asking how humans view themselves and the world around them. This perspective attends to experience “from the neck down,” not just to cognition. We need to understand the bodily sensations of numbness, anxiety or anger that too many students feel in the music classroom, knowing that something is wrong but not knowing how to name it. For example, I spent my graduate music theory seminar in a continual low boil of rage, and it was only years later that I was able to point to the white supremacist ideology animating the curriculum as the source of this intense emotion. A number of my fellow musicians aligned with black music have described the same feelings. It is a primary research goal of mine to give those feelings a name and a clear target, so they can be put to work in the service of systemic change.

Bruner (1991) cites Vygotsky’s dictum that cultural products like language mediate our thought and shape our representations of reality. (This is certainly true of music.) Constructionists assume that we produce reality through the social exchange of meanings. We use language not as isolated individuals, but within social groups, organizations, institutions and cultures. Within our contexts, we speak as we understand it to be appropriate to speak (Galasinski & Ziólkowska, 2013). As narratives accrue into traditions, they take on a life of their own that can outlive their original context—this is a likely explanation for the persistence of classical music habitus far beyond the conservatory.

Close readers of narrative must study not only the syntactic content of the words themselves, but also their literary qualities, their tone (Riessman, 2008). There is a close parallel here with musicology. When we compare Julie Andrews’ performance of “My Favorite Things” in The Sound Of Music (1965) with the one recorded by John Coltrane (1961), it is like comparing the same text spoken by two very different speakers. We can perform a neat inverse of this process by examining the same musical performance across contexts; for example, comparing Tom Scott’s recording of Balin and Kantner’s “Today” (1967) with the sample of that recording that forms the centerpiece of Pete Rock & C.L. Smooth’s “They Reminisce Over You (T.R.O.Y.)” (1992). Here, the same performance gives rise to different musical meanings in different settings. We should be similarly attentive to the performative and contextual aspects of narrative.

Validity and reliability

If we are examining attitudes and interpretations rather than more easily observable “facts,” how do we ensure validity and reliability? In place of a search for straightforward logical explanations, we can instead build a case on Lyotardian paralogy, and “let contradictions remain in tension” (Lather 1993, 679), like the unresolved tritones enriching the blues and jazz. We should not expect to find tree-shaped hierarchies of explanation, but instead hold ourselves to a “rhizomatic” standard of validity. “Rather than a linear progress, rhizomatics is a journey among intersections, nodes, and regionalizations through a multi-centered complexity” (Lather 1993, 680). We can understand the complexities of music and schooling and race to have the topology of a network, not a tree. We should expect that when we pull on any part of the network, we will encounter a tangle.

In my research thus far, I have instinctively used reciprocity to treat my interviews more as two-way conversations. Such judicious use of self-disclosure can give rise to richer data. We can attain further reciprocity by showing participants field notes and drafts, building in “member checks” early on to ensure trustworthiness throughout the process. As feminist researchers, Harrison, MacGibbon and Morton (2001) treat attention to the emotional aspects of the research, and to the relationships it entails, as a key criterion of trustworthiness. This kind of emotionally aware collaborative/shared authorship aligns naturally with participatory research, and with hip-hop pedagogy. Larson (1997) argues that narrative inquiry gains greater validity by having the story-giver reflect on the transcript and analysis so they can revise or go deeper into their story. If a lived experience is an iceberg, then its initial retelling may just describe the tip. It takes reflection to bring more of the iceberg to the surface. We may therefore do better to examine a few icebergs thoroughly than to survey many tips.

Sample data and future research

Ed Sullivan Fellows (ESF) is a mentorship and artist development program run by the NYU Steinhardt Music Experience Design Lab. Participants are young men and women between the ages of 15 and 20, mostly low-SES people of color. They meet on Saturday afternoons at NYU to write and record songs; to get mentorship on the music business, marketing and branding; and to socialize. Sessions have a clubhouse feel, a series of ad-hoc jam sessions, cyphers, informal talks, and open-ended creativity. Conversations are as likely to focus on participants’ emotions, politics, social life and identity as they are on anything pertaining to music. I intend to conduct my research among hip-hop educators like Jamie and the other ESF mentors. They teach music concepts like song structure and harmony, but their project is much larger: to provide emotional support, to build resilience and confidence, to foster social connections across class and racial lines. Hein (2017) is a set of preliminary observations on ESF, showing the close connection between its musical and social values.

Conclusion

If music education is failing to address the needs of the substantial majority of students, it should be no wonder that enrollment and societal support are declining.

Every ‘failure’ to succeed in competition, every drop-out, and every student who is relieved to have compulsory music study behind them (including lessons enforced by parental fiat) represents not just a lack of ‘conversion’ to musical ‘virtue’ but gives such future members of the public compelling reason to doubt whether their music education has served any lasting purpose or value (Regelski 2009, 12).

Music educators’ advocacy efforts are mostly devoted to preserving existing methods and policies. However, these same methods and practices are driving music education’s irrelevance. At some point, advocacy starts to look less like a high-minded push for society’s interest, and more like an effort on behalf of music teachers’ self-interest.

Most (if not all) people have an inborn capacity and intrinsic motivation for engaging in music. However, that capacity and motivation need to be activated and nurtured by “musically and educationally excellent teachers and… inspiring models of musicing in contexts of welcoming, sustaining, and educative musical settings, including home and community contexts” (Elliott & Silverman 2015, 240). To restrict this opportunity to “talented” students is anti-democratic in Dewey’s sense. Good music education serves particular human needs. One of those needs is aesthetic contemplation and appreciation of the Eurocentric canon. But there are many other legitimate ends that music education can pursue. In order to meet more students’ musical needs, we must embrace the musical culture of the present, and confront all the challenges of race and class that this entails.

References

Anyon, J. (1980). Social Class and the Hidden Curriculum of Work. The Journal of Education, 162(1), 67–92.

Bell, A. P. (2014). Trial-by-fire: A case study of the musician–engineer hybrid role in the home studio. Journal of Music, Technology & Education, 7(3), 295–312.

Bledsoe, R. (2015). Music Education for All? General Music Today, 28(2), 18–22.

Bowman, W. (2007). Who is the “We”? Rethinking Professionalism in Music Education. Action, Criticism, and Theory for Music Education, 6(4), 109–131.

Bowman, W. (2016). Artistry, Ethics, and Citizenship. In D. Elliott, M. Silverman, & W. Bowman (Eds.), Artistic Citizenship: Artistry, Social Responsibility, and Ethical Praxis. New York: Oxford University Press.

Campbell, P. S., Connell, C., & Beegle, A. (2007). Adolescents’ expressed meanings of music in and out of school. Journal of Research in Music Education, 55(3), 220–236.

Cutler, C. (2004). Plunderphonia. In C. Cox & D. Warner (Eds.), Audio culture: Readings in modern music (pp. 138–156). London: Continuum International Publishing Group.

DeNora, T. (2000). Music in everyday life. New York: Cambridge University Press.

DeVault, M. L. (2006). Introduction: What is Institutional Ethnography? Social Problems, 53(3), 294–298.

Dillon, S. (2007). Music, Meaning and Transformation: Meaningful Music Making for Life. Cambridge Scholars Publishing.

Doyle, J. L. (2014). Cultural relevance in urban music education: a synthesis of the literature. Applications of Research in Music Education, 32(2), 44–51.

Eisner, E. (1991). The enlightened eye: Qualitative inquiry and the enhancement of educational practice. Toronto: Macmillan.

Elliott, D. J., & Silverman, M. (2015). Music Matters: A Philosophy of Music Education (2nd ed.). Oxford: Oxford University Press.

Elpus, K., & Abril, C. R. (2011). High School Music Ensemble Students in the United States: A Demographic Profile. Journal of Research in Music Education, 59(2), 128–145.

Eno, B. (2004). The Studio As Compositional Tool. In C. Cox & D. Warner (Eds.), Audio culture: Readings in modern music (pp. 127–130). London: Continuum International Publishing Group.

Ester, D. P., & Turner, K. (2009). The impact of a school loaner-instrument program on the attitudes and achievement of low-income music students. Contributions to Music Education, 36(1), 53–71.

Finney, J. (2007). Music Education as Identity Project in a World of Electronic Desires. In J. Finney & P. Burnard (Eds.), Music education with digital technology. London: Bloomsbury Academic.

Harrison, J., MacGibbon, L., & Morton, M. (2001). Regimes of Trustworthiness in Qualitative Research: The Rigors of Reciprocity. Qualitative Inquiry, 7(3), 323–345.

Hein, E. (2015). Mad Fresh. NewMusicBox. Retrieved March 24, 2015, from http://www.newmusicbox.org/articles/mad-fresh/

Hein, E. (2017). A participant ethnography of the Ed Sullivan Fellows program. Retrieved May 9, 2017, from http://www.ethanhein.com/wp/2017/a-participant-ethnography-of-the-ed-sullivan-fellows-program/

Irby, D. J., & Hall, H. B. (2011). Fresh Faces, New Places: Moving Beyond Teacher-Researcher Perspectives in Hip-Hop-Based Education Research. Urban Education, 46(2), 216–240.

Kratus, J. (2007). Music Education at the Tipping Point. Music Educators Journal, 94(2), 42–48.

Kratus, J. (2016). Songwriting: A new direction for secondary music education. Music Educators Journal, 102(3), 60–65.

Kruse, A. J. (2016). Toward hip-hop pedagogies for music education. International Journal of Music Education, 34(2), 247–260.

Laitz, S. G. (2015). The complete musician: An integrated approach to tonal theory, analysis, and listening (4th ed.). Oxford University Press.

Lather, P. (1993). Fertile Obsession: Validity after Poststructuralism. The Sociological Quarterly, 34(4), 673–693.

Lorde, A. (1984). The master’s tools will never dismantle the master’s house. Sister Outsider: Essays and Speeches by Audre Lorde, 110–113.

Luker, K. (2008). Salsa Dancing into the Social Sciences: Research in an Age of Info-glut. Cambridge: Harvard University Press.

Madrid, A. L. (2017). Diversity, Tokenism, Non-Canonical Musics, and the Crisis of the Humanities in U.S. Academia, 7(2), 124–129.

Marshall, W. (2017). Technomusicology | Harvard Extension School. Retrieved May 8, 2017, from http://www.extension.harvard.edu/academics/courses/technomusicology/24318

Massey, D. S., & Denton, N. A. (1993). American apartheid: segregation and the making of the underclass. Cambridge: Harvard University Press.

McClary, S. (2000). Conventional Wisdom: The Content of Musical Form. University of California Press.

McDermott, R., & Varenne, H. (1995). Culture as Disability. Anthropology and Education Quarterly, 26(3), 324–348.

Miles, J., Hogan, E., Boland, B., Ehrenfeld, J., & Berry, L. (2015). Fresh Ed: A Field Guide to Culturally Responsive Pedagogy. New York: Urban Arts Partnership. Retrieved from http://freshed.urbanarts.org/fresh-field-guide/

Perry, I. (2004). Prophets of the Hood. Duke University Press.

Peshkin, A. (1988). In Search of Subjectivity–One's Own. Educational Researcher, 17–21.

Pomykala-Hein, M. (2017). Searching [Online musical score]. Retrieved May 5, 2017, from https://www.noteflight.com/scores/view/180d4db69af3646e6e70fae8002648d7f2048a7d

Regelski, T. A. (2009). The Ethics of Music Teaching as Profession and Praxis. Visions of Research in Music Education, 13(2009), 1–34.

Rice, J. (2016). The 1963 hip-hop machine: Hip-hop pedagogy as composition. College Composition and Communication, 54(3), 453–471.

Ruthmann, A. (2007). The Composers’ Workshop: An Approach to Composing in the Classroom. Music Educators Journal, 93(4), 38.

Schön, D. (1987). Teaching artistry through reflection in action. In Educating the reflective practitioner: Educating the reflective practitioner for teaching and learning in the professions (pp. 22–40). San Francisco: Jossey-Bass.

Sennett, R. (2008). The Craftsman. New Haven: Yale University Press.

Smith, D. (2005). Institutional Ethnography: A Sociology for People. Walnut Creek, CA: AltaMira Press.

Taber, N. (2010). Institutional ethnography, autoethnography, and narrative: an argument for incorporating multiple methodologies. Qualitative Research, 10(1), 5–25. http://doi.org/10.1177/1468794109348680

Thibeault, M. (2010). Hip-Hop, Digital Media, and the Changing Face of Music Education. General Music Today, 24(1), 46–49. http://doi.org/10.1177/1048371310379097

Turino, T. (2016). Music, Social Change, and Alternative Forms of Citizenship. In D. Elliott, M. Silverman, & W. Bowman (Eds.), Artistic Citizenship: Artistry, Social Responsibility, and Ethical Praxis (p. 616). New York: Oxford University Press.

van der Schyff, D., Schiavio, A., & Elliott, D. J. (2016). Critical ontology for an enactive music pedagogy. Action, Criticism, and Theory for Music Education, 15(5), 81–121.

Williams, D. A. (2011). The Elephant in the Room. Music Education: Navigating the Future, 98(1), 51–57.

Willis, J. W. (2007). Foundations of Qualitative Research: Interpretive and Critical Approaches. Thousand Oaks, CA: Sage.

Wise, R. (1965). The Sound of Music. United States: 20th Century Fox.

Discography

Balin, M. and Kantner, P. (1967). Today [recorded by Tom Scott and The California Dreamers]. On The Honeysuckle Breeze [LP]. Santa Monica: Impulse! (1967)

Elliott, Missy (2002). Work It. On Under Construction [CD]. New York: Goldmind/Elektra. (November 12, 2002)

Lamar, Kendrick (2017). Pride. On DAMN. [CD/streaming]. Santa Monica, CA: Top Dawg/Aftermath/Interscope. (April 14, 2017)

Pete Rock & C.L. Smooth (1992). They Reminisce Over You (T.R.O.Y.). On Mecca and the Soul Brother [LP]. New York: Untouchables/Elektra. (April 2, 1992)

Rodgers, Richard and Hammerstein, Oscar (1959). My Favorite Things [recorded by John Coltrane]. On My Favorite Things [LP]. New York: Atlantic. (March, 1961)

Design for Real Life – QWERTYBeats research

Writing assignment for Design For The Real World with Claire Kearney-Volpe and Diana Castro – research about a new rhythm interface for blind and low-vision novice musicians

Definition

I propose a new web-based accessible rhythm instrument called QWERTYBeats. Traditional instruments are highly accessible to blind and low-vision musicians; electronic music production tools are not. I look at the history of accessible instruments and software interfaces, give an overview of current electronic music hardware and software, and discuss the design considerations underlying my project.

QWERTYBeats logo

Historical overview

Acoustic instruments give rich auditory and haptic feedback, and pose little obstacle to blind musicians. We need look no further for proof than the long history of iconic blind musicians like Ray Charles and Stevie Wonder. Even sighted instrumentalists rarely look at their instruments once they have attained a sufficient level of proficiency. Printed music notation is not accessible, but Braille music notation has existed since the writing system's inception. And a great many musicians, blind and sighted alike, play entirely by ear anyway.

Most of the academic literature around accessibility issues in music education focuses on wider adoption of and support for Braille notation. See, for example, Rush, T. W. (2015). Incorporating Assistive Technology for Students with Visual Impairments into the Music Classroom. Music Educators Journal, 102(2), 78–83. For electronic music, notation is rarely if ever a factor.

Electronic instruments pose some new accessibility challenges. They may use graphical interfaces with nested menus, complex banks of knobs and patch cables, and other visual control surfaces. Feedback may be given entirely with LED lights and small text labels. Nevertheless, blind users can master these devices with sufficient practice, memorization and assistance. For example, Stevie Wonder has incorporated synthesizers and drum machines in most of his best-known recordings.

Most electronic music creation is currently done not with instruments, but rather with specialized software applications called digital audio workstations (DAWs). Keyboards and other controllers are mostly used to access features of the software, rather than as standalone instruments. The most commonly used DAWs include Avid Pro Tools, Apple Logic, Ableton Live, and Steinberg Cubase. Mobile DAWs are more limited than their desktop counterparts, but are nevertheless becoming robust music creation tools in their own right. Examples include Apple GarageBand and Steinberg Cubasis. Notated music is commonly composed using score editing software like Sibelius and Finale, whose functionality increasingly overlaps with DAWs, especially in regard to MIDI sequencing.

DAWs and notation editors pose steep accessibility challenges due to their graphical and spatial interfaces, not to mention their sheer complexity. In class, we were given a presentation by Leona Godin, a blind musician who records and edits audio using Pro Tools by means of VoiceOver. While it must have taken a heroic effort on her part to learn the program, Leona demonstrates that it is possible. However, some DAWs pose insurmountable problems even to very determined blind users because they do not use standard operating system elements, making them inaccessible via screen readers.

Technological interventions

There are no mass-market electronic interfaces specifically geared toward blind or low-vision users. In this section, I discuss one product frequently hailed for its “accessibility” in the colloquial rather than blindness-specific sense, along with some more experimental and academic designs.

Ableton Push

Push layout for IMPACT Faculty Showcase

Ableton Live has become the DAW of choice for electronic music producers. Low-vision users can zoom in to the interface and modify the color scheme. However, Live is inaccessible via screen readers.

In recent years, Ableton has introduced a hardware controller, the Push, which is designed to make the software experience more tactile and instrument-like. The Push combines an eight-by-eight grid of LED-lit touch pads with banks of knobs, buttons and touch strips. It makes it possible to create, perform and record a piece of music from scratch without looking at the computer screen. In addition to drum programming and sampler performance, the Push also has an innovative melodic mode which maps scales onto the grid in such a way that users cannot play a wrong note. Other comparable products exist; see, for example, the Native Instruments Maschine.

There are many pad-based drum machines and samplers. Live’s main differentiator is its Session view, where the pads launch clips: segments of audio or MIDI that can vary in length from a single drum hit to an entire song. Clip launching is tempo-synced, so when you trigger a clip, playback is delayed until the start of the next measure (or whatever the quantization interval is). Clip launching is a forgiving and beginner-friendly performance method, because it removes the possibility of playing something out of rhythm. Like other DAWs, Live also gives rhythmic scaffolding in its software instruments by means of arpeggiators, delay and other tempo-synced features.
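The scheduling logic behind quantized clip launching is conceptually simple: a triggered clip does not start immediately, but at the next quantization boundary. A minimal sketch (a hypothetical function, not Live's actual API):

```javascript
// Given the current transport position in beats and a quantization
// interval (e.g. 4 beats = one measure of 4/4), a newly triggered
// clip starts at the next boundary, never mid-interval.
function nextLaunchTime(nowBeats, quantBeats) {
  // Already exactly on a boundary: launch immediately.
  if (nowBeats % quantBeats === 0) return nowBeats;
  // Otherwise round up to the next multiple of the interval.
  return Math.ceil(nowBeats / quantBeats) * quantBeats;
}

// Trigger a clip 2.5 beats into the song with one-measure
// quantization: it waits until beat 4.
console.log(nextLaunchTime(2.5, 4)); // 4
// With one-beat quantization it waits only until beat 3.
console.log(nextLaunchTime(2.5, 1)); // 3
```

However sloppily you mash the pad, the clip lands on the grid, which is exactly the forgiveness that makes the technique beginner-friendly.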

The Push is a remarkable interface, but it has some shortcomings for blind users. First of all, it is expensive, $800 for the entry-level version and $1400 for the full-featured software suite. Much of its feedback is visual, in the form of LED screens and color-coded lighting on the pads. It switches between multiple modes which can be challenging to distinguish even for sighted users. And, like the software it accompanies, the Push is highly complex, with a steep learning curve unsuited to novice users, blind or sighted.

The aQWERTYon

Most DAWs enable users to perform MIDI instruments on the QWERTY keyboard. The most familiar example is the Musical Typing feature in Apple GarageBand.

GarageBand musical typing

Musical Typing makes it possible to play software instruments without an external MIDI controller, which is convenient and useful. However, its layout counterintuitively follows the piano keyboard, which is an awkward fit for the computer keyboard. There is no easy way to distinguish the black and white keys, and even expert users find themselves inadvertently hitting the keyboard shortcut for recording while hunting for F-sharp.

The aQWERTYon is a web interface developed by the NYU Music Experience Design Lab specifically intended to address the shortcomings of Musical Typing.

aQWERTYon screencap

Rather than emulating the piano keyboard, the aQWERTYon draws its inspiration from the chord buttons of an accordion. It fills the entire keyboard with harmonically related notes in a way that supports discovery by naive users. Specifically, it maps scales across the rows of keys, staggered by intervals such that each column forms a chord within the scale. Root notes and scales can be set from pulldown menus within the interface, or preset using URL parameters. It can be played as a standalone instrument, or as a MIDI controller in conjunction with a DAW. Here is a playlist of music I created using the aQWERTYon and GarageBand or Ableton Live:

The aQWERTYon is a completely tactile experience. Sighted users can carefully match keys to note names using the screen, but more typically approach the instrument by feel, seeking out patterns on the keyboard by ear. A blind user would need assistance loading the aQWERTYon initially and setting the scale and root note parameters, but otherwise, it is perfectly accessible. The present project was motivated in large part by a desire to make exploration of rhythm as playful and intuitive as the aQWERTYon makes exploring chords and scales.
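The row-and-column mapping described above can be sketched in a few lines. The scale data and the stagger of two scale steps per row are illustrative assumptions, not the aQWERTYon's actual source code:

```javascript
// Map QWERTY rows and columns to scale notes, staggering each row
// so that the notes in a column stack into a chord. Degrees are
// zero-based scale steps; a stagger of 2 steps per row makes each
// column a stacked-thirds chord (root, third, fifth, seventh).
const C_MAJOR = [0, 2, 4, 5, 7, 9, 11]; // semitones above the root

function noteForKey(row, column, rootMidi = 60) {
  // Each row starts a third (2 scale steps) above the one below it.
  const degree = column + row * 2;
  const octave = Math.floor(degree / C_MAJOR.length);
  const step = degree % C_MAJOR.length;
  return rootMidi + octave * 12 + C_MAJOR[step];
}

// Column 0 across four rows yields C-E-G-B: a Cmaj7 chord.
const chord = [0, 1, 2, 3].map(row => noteForKey(row, 0));
console.log(chord); // [60, 64, 67, 71]
```

Because every key is guaranteed to be in the scale, random exploration always sounds plausible, which is what makes the layout safe for naive users.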

Soundplant

The QWERTY keyboard can be turned into a simple drum machine quite easily using a free program called Soundplant. The user simply drags audio files onto a graphical key to have it triggered by that physical key. I was able to create a TR-808 kit in a matter of minutes:

Soundplant with 808 samples

After it is set up and configured, Soundplant can be as effortlessly accessible as the aQWERTYon. However, it does not give the user any rhythmic assistance. Drumming in perfect time is an advanced musical skill, and playing drum machine samples out of time is not much more satisfying than banging on a metal bowl with a spoon. An ideal drum interface would offer beginners some of the rhythmic scaffolding and support that Ableton provides via Session view, arpeggiators, and the like.

The Groove Pizza

Drum machines and their software counterparts offer an alternative form of rhythmic scaffolding. The user sequences patterns in a time-unit box system or piano roll, and the computer performs those patterns flawlessly. The MusEDLab’s Groove Pizza app is a web-based drum sequencer that wraps the time-unit box system into a circle.

Groove Pizza - Bembe

The Groove Pizza was designed to make drum programming more intuitive by visualizing the symmetries and patterns inherent in musical-sounding rhythms. However, it is totally unsuitable for blind or low-vision users. Interaction is only possible through the mouse pointer or touch, and there are no standard user interface elements that can be parsed by screen readers.

Before ever considering designing for the blind, the MusEDLab had already considered the Groove Pizza’s limitations for younger children and users with special needs: there is no “live performance” mode, and there is always some delay in feedback between making a change in the drum pattern and hearing the result. We have been considering ways to make a rhythm interface that is more immediate, performance-oriented and tactile. One possible direction would be to create a hardware version of the Groove Pizza; indeed, one of the earliest prototypes was a hardware version built by Adam November out of a pizza box. However, hardware design is vastly more complex and difficult than software, so for the time being, software promises more immediate results.

Haenselmann-Lemelson-Effelsberg MIDI sequencer

This experimental interface is described in Haenselmann, T., Lemelson, H., & Effelsberg, W. (2011). A zero-vision music recording paradigm for visually impaired people. Multimedia Tools and Applications, 5, 1–19.

Haenselmann-Lemelson-Effelsberg MIDI sequencer

The authors create a new mode for a standard MIDI keyboard that maps piano keys to DAW functions like playback, quantization, track selection, and so on. They also add “earcons” (auditory icons) to give sonic feedback for functions that normally give only graphical feedback. For example, one earcon sounds when recording is enabled; another sounds for regular playback. This interface sounds promising, but there are significant obstacles to its adoption. While the authors have released the source code as a free download, a would-be user would still need to compile and run it. And that presumes they could obtain the code in the first place; the download link given in the paper is inactive. It is an all-too-common fate of academic projects never to see widespread use. By posting our projects on the web, the MusEDLab hopes to avoid this outcome.

Statement

Music education philosophy

My project is animated by a constructivist philosophy of music education, which operates by the following axiomatic assumptions:

  • Learning by doing is better than learning by being told.
  • Learning is not something done to you, but rather something done by you.
  • You do not get ideas; you make ideas. You are not a container that gets filled with knowledge and new ideas by the world around you; rather, you actively construct knowledge and ideas out of the materials at hand, building on top of your existing mental structures and models.
  • The most effective learning experiences grow out of the active construction of all types of things, particularly things that are personally or socially meaningful, that you develop through interactions with others, and that support thinking about your own thinking.

If an activity’s challenge level is beyond your ability, you experience anxiety. If your ability far exceeds the challenge, the result is boredom. Flow happens when challenge and ability are well balanced, as seen in this diagram adapted from Csikszentmihalyi.

Flow

Music students face significant obstacles to flow at the left side of the Ability axis. Most instruments require extensive practice before it is possible to make anything that resembles “real” music. Electronic music presents an opportunity here, because even a complete novice can quickly produce music with a high degree of polish. It is empowering to use technologies that make it impossible to do anything wrong; they free you to explore what sounds right. Beginners can be scaffolded in their pitch explorations with MIDI scale filters, Auto-Tune, and the configurable software keyboards in apps like ThumbJam and Animoog. Rhythmic scaffolding is rarer, but it can be had via Ableton’s quantized clip launcher, MIDI arpeggiators, and the Note Repeat feature on many drum machines.

QWERTYBeats proposal

My project takes drum machine Note Repeat as its jumping-off point. When Note Repeat is activated, holding down a drum pad triggers the corresponding sound at a particular rhythmic interval: quarter notes, eighth notes, and so on. On the Ableton Push, Note Repeat automatically syncs to the global tempo, making it effortless to produce musically satisfying rhythms. However, this mode has a major shortcoming: it applies globally to all of the drum pads. To my knowledge, no drum machine makes it possible to simultaneously have, say, the snare drum playing every dotted eighth note while the hi-hat plays every sixteenth note.

I propose a web application called QWERTYBeats that maps drums to the computer keyboard as follows:

  • Each row of the keyboard triggers a different drum/beatbox sound (e.g. kick, snare, closed hi-hat, open hi-hat).
  • Each column retriggers the sample at a different rhythmic interval (e.g. quarter note, dotted eighth note).
  • Circles dynamically divide into “pie slices” to show rhythmic values.

The rhythmic values are listed below by column, with each description followed by the note duration as a fraction of a quarter note (one beat).

  1. quarter note (1)
  2. dotted eighth note (3/4)
  3. quarter note triplet (2/3)
  4. eighth note (1/2)
  5. dotted sixteenth note (3/8)
  6. eighth note triplet (1/3)
  7. sixteenth note (1/4)
  8. dotted thirty-second note (3/16)
  9. sixteenth note triplet (1/6)
  10. thirty-second note (1/8)
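Given a tempo, each column's fraction converts directly into a retrigger interval in seconds. A minimal sketch of that conversion (function and constant names are my own, not a finished design):

```javascript
// Fractions of a quarter note for each column, as listed above.
const COLUMN_FRACTIONS = [
  1,      // quarter note
  3 / 4,  // dotted eighth note
  2 / 3,  // quarter note triplet
  1 / 2,  // eighth note
  3 / 8,  // dotted sixteenth note
  1 / 3,  // eighth note triplet
  1 / 4,  // sixteenth note
  3 / 16, // dotted thirty-second note
  1 / 6,  // sixteenth note triplet
  1 / 8,  // thirty-second note
];

// Seconds between retriggers for a given column at a given tempo.
function retriggerInterval(column, bpm) {
  const beatSeconds = 60 / bpm; // duration of one quarter note
  return beatSeconds * COLUMN_FRACTIONS[column];
}

// At 120 bpm a quarter note lasts 0.5 s, so column 1 retriggers
// every 0.5 s and column 7 (sixteenth notes) every 0.125 s.
console.log(retriggerInterval(0, 120)); // 0.5
console.log(retriggerInterval(6, 120)); // 0.125
```

Holding a snare key in the dotted-eighth column (3/4) against a hi-hat key in the sixteenth column (1/4) yields exactly the 3-against-1 polyrhythm described below, with no timing skill required from the player.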

By simply holding down different combinations of keys, users can attain complex syncopations and polyrhythms. If the app is synced to the tempo of a DAW or music playback, the user can perform good-sounding rhythms over any song that is personally meaningful to them.

The column layout leaves some unused keys in the upper right corner of the keyboard: “-”, “=”, “[”, “]”, etc. These can be reserved for setting the tempo and other UI elements.

The app defaults to Perform Mode, but clicking Make New Kit opens Sampler mode, where users can import or record their own drum sounds:

  • Keyboard shortcuts enable the user to select a sound, audition it, record, set start and end point, and set its volume level.
  • A login/password system enables users to save kits to the cloud where they can be accessed from any computer. Kits get unique URL identifiers, so users can also share them via email or social media.

It is my goal to make the app accessible to users with the widest possible diversity of abilities.

  • The entire layout will use plain text, CSS and JavaScript to support screen readers.
  • All user interface elements can be accessed via the keyboard: tab to change the keyboard focus, menu selections and parameter changes via the up and down arrows, and so on.

Perform Mode:

QWERTYBeats concept images - Perform mode

Sampler Mode:

sampler-mode

Mobile version

The present thought is to divide the screen into a grid mirroring the layout of the QWERTY keyboard. User testing will determine whether this produces a satisfying experience.

QWERTYDrum - mobile

Prototype

I created a prototype of the app using Ableton Live’s Session View.

QWERTYBeats - Ableton prototype

Here is a sample performance:

There is not much literature examining the impact of drum programming and other electronic rhythm sequencing on students’ subsequent ability to play acoustic drums, or to keep time more accurately in general. I can report anecdotally that my own time spent sequencing and programming drums improved my drumming and timekeeping enormously (and mostly inadvertently). I will continue to seek further support for the hypothesis that electronically assisted rhythm creation builds unassisted rhythmic ability. In the meantime, I am eager to prototype and test QWERTYBeats.

Composing in the classroom

The hippest music teachers help their students create original music. But what exactly does that mean? What even is composition? In this post, I take a look at two innovators in music education and try to arrive at an answer.

Matt McLean is the founder of the amazing Young Composers and Improvisers Workshop. He teaches his students composition using a combination of Noteflight, an online notation editor, and the MusEDLab’s own aQWERTYon, a web app that turns your regular computer keyboard into an intuitive musical interface.

http://www.yciw.net/1/the-interface-i-wish-noteflight-had-is-here-aqwertyon/

Matt explains:

Participating students in YCIW as well as my own students at LREI have been using Noteflight for over 6 years to compose music for chamber orchestras, symphony orchestras, jazz ensembles, movie soundtracks, video game music, school band and more – hundreds of compositions.

Before the advent of the aQWERTYon, students needed to enter music into Noteflight either by clicking with the mouse or by playing notes in with a MIDI keyboard. The former method is accessible but slow; the latter method is fast but requires some keyboard technique. The aQWERTYon combines the accessibility of the mouse with the immediacy of the piano keyboard.

For the first time there is a viable way for every student to generate and notate her ideas in a tactile manner with an instrument that can be played by all. We founded Young Composers & Improvisors Workshop so that every student can have the experience of composing original music. Much of my time has been spent exploring ways to emphasize the “experiencing” part of this endeavor. Students had previously learned parts of their composition on instruments after their piece was completed. Also, students with piano or guitar skills could work out their ideas prior to notating them. But efforts to incorporate MIDI keyboards or other interfaces with Noteflight in order to give students a way to perform their ideas into notation always fell short.

The aQWERTYon lets novices try out ideas the way that more experienced musicians do: by improvising with an instrument and reacting to the sounds intuitively. It’s possible to compose without using an instrument at all, using a kind of sudoku-solving method, but it’s not likely to yield good results. Your analytical consciousness, the part of your mind that can write notation, is also its slowest and dumbest part. You really need your emotions, your ear, and your motor cortex involved. Before computers, you needed considerable technical expertise to be able to improvise musical ideas, and remember them long enough to write them down. The advent of recording and MIDI removed a lot of the friction from the notation step, because you could preserve your ideas just by playing them. With the aQWERTYon and interfaces like it, you can do your improvisation before learning any instrumental technique at all.

Student feedback suggests that kids like being able to play along to previously notated parts as a way to find new parts to add to their composition. As a teacher I am curious to measure the effect of students being able to practice their ideas at home using aQWERTYon and then sharing their performances before using their idea in their composition. It is likely that this will create a stronger connection between the composer and her musical idea than if she had only notated it first.

Those of us who have been making original music in DAWs are familiar with the pleasures of creating ideas through playful jamming. It feels like a major advance to put that experience in the hands of elementary school students.

Matt uses progressive methods to teach a traditional kind of musical expression: writing notated scores that will then be performed live by instrumentalists. Matt’s kids are using futuristic tools, but the model for their compositional technique is the one established in the era of Beethoven.

Beethoven

(I just now noticed that the manuscript Beethoven is holding in this painting is in the key of D-sharp. That’s a tough key to read!)

Other models of composition exist. There’s the Lennon and McCartney method, which doesn’t involve any music notation. Like most untrained rock musicians, the Beatles worked from lyric sheets with chords written on them as a mnemonic. The “lyrics plus chords” method continues to be the standard for rock, folk and country musicians. It’s a notation system that’s only really useful if you already have a good idea of how the song is supposed to sound.

Lennon and McCartney writing

Lennon and McCartney originally wrote their songs to be performed live for an audience. They played in clubs for several years before ever entering a recording studio. As their career progressed, however, the Beatles stopped performing live, and began writing with the specific goal of creating studio recordings. Some of those later Beatles tunes would be difficult or impossible to perform live. Contemporary artists like Missy Elliott and Pharrell Williams have pushed the Beatles’ idea to its logical extreme: songs existing entirely within the computer as sequences of samples and software synths, with improvised vocals arranged into shape after being recorded. For Missy and Pharrell, creating the score and the finished recording are one and the same act.

Pharrell and Missy Elliott in the studio

Is it possible to teach the Missy and Pharrell method in the classroom? Alex Ruthmann, MusEDLab founder and my soon-to-be PhD advisor, documented his method for doing so in 2007.

As a middle school general music teacher, I’ve often wrestled with how to engage my students in meaningful composing experiences. Many of the approaches I’d read about seemed disconnected from the real-world musicality I saw daily in the music my students created at home and what they did in my classes. This disconnect prompted me to look for ways of bridging the gap between the students’ musical world outside music class and their in-class composing experiences.

It’s an axiom of constructivist music education that students will be most motivated to learn music that’s personally meaningful to them. There are kids out there for whom notated music performed on instruments is personally meaningful. But the musical world outside music class usually follows the Missy and Pharrell method.

[T]he majority of approaches to teaching music with technology center around notating musical ideas and are often rooted in European classical notions of composing (for example, creating ABA pieces, or restricting composing tasks to predetermined rhythmic values). These approaches require students to have a fairly sophisticated knowledge of standard music notation and a fluency working with rhythms and pitches before being able to explore and express their musical ideas through broader musical dimensions like form, texture, mood, and style.

Noteflight imposes some limitations on these musical dimensions. Some forms, textures, moods and styles are difficult to capture in standard notation. Some are impossible. If you want to specify a particular drum machine sound combined with a sampled breakbeat, or an ambient synth pad, or a particular stereo image, standard notation is not the right tool for the job.

Common approaches to organizing composing experiences with synthesizers and software often focus on simplified classical forms without regard to whether these forms are authentic to the genre or to technologies chosen as a medium for creation.

There is nothing wrong with teaching classical forms. But when making music with computers, the best results come from making the music that’s idiomatic to computers. Matt McLean goes to extraordinary lengths to have student compositions performed by professional musicians, but most kids will be confined to the sounds made by the computer itself. Classical forms and idioms sound awkward at best when played by the computer, but electronic music sounds terrific.

The middle school students enrolled in these classes came without much interest in performing, working with notation, or studying the classical music canon. Many saw themselves as “failed” musicians, placed in a general music class because they had not succeeded in or desired to continue with traditional performance-based music classes. Though they no longer had the desire to perform in traditional school ensembles, they were excited about having the opportunity to create music that might be personally meaningful to them.

Here it is, the story of my life as a music student. Too bad I didn’t go to Alex’s school.

How could I teach so that composing for personal expression could be a transformative experience for students? How could I let the voices and needs of the students guide lessons for the composition process? How could I draw on the deep, complex musical understandings that these students brought to class to help them develop as musicians and composers? What tools could I use to quickly engage them in organizing sound in musical and meaningful ways?

Alex draws parallels between writing music and writing English. Both are usually done alone at a computer, and both pose a combination of technical and creative challenges.

Musical thinking (thinking in sound) and linguistic thinking (thinking using language phrases and ideas) are personal creative processes, yet both occur within social and cultural contexts. Noting these parallels, I began to think about connections between the whole-language approach to writing used by language arts teachers in my school and approaches I might take in my music classroom.

In the whole-language approach to writing, students work individually as they learn to write, yet are supported through collaborative scaffolding-support from their peers and the teacher. At the earliest stages, students tell their stories and attempt to write them down using pictures, drawings, and invented notation. Students write about topics that are personally meaningful to them, learning from their own writing and from the writing of their peers, their teacher, and their families. They also study literature of published authors. Classes that take this approach to teaching writing are often referred to as “writers’ workshops”… The teacher facilitates [students’] growth as writers through minilessons, share sessions, and conferring sessions tailored to meet the needs that emerge as the writers progress in their work. Students’ original ideas and writings often become an important component of the curriculum. However, students in these settings do not spend their entire class time “freewriting.” There are also opportunities for students to share writing in progress and get feedback and support from teacher and peers. Revision and extension of students’ writing occur throughout the process. Lessons are not organized by uniform, prescriptive assignments, but rather are tailored to the students’ interests and needs. In this way, the direction of the curriculum and successive projects are informed by the students’ needs as developing writers.

Alex set about creating an equivalent “composers’ workshop,” combining composition, improvisation, and performing with analytical listening and genre studies.

The broad curricular goal of the composers’ workshop is to engage students collaboratively in:

  • Organizing and expressing musical ideas and feelings through sound with real-world, authentic reasons for and means of composing
  • Listening to and analyzing musical works appropriate to students’ interests and experiences, drawn from a broad spectrum of sources
  • Studying processes of experienced music creators through listening to, performing, and analyzing their music, as well as being informed by accounts of the composition process written by these creators.

Alex recommends production software with strong loop libraries so students can make high-level musical decisions with “real” sounds immediately.

While students do not initially work directly with rhythms and pitch, working with loops enables students to begin composing through working with several broad musical dimensions, including texture, form, mood, and affect. As our semester progresses, students begin to add their own original melodies and musical ideas to their loop-based compositions through work with synthesizers and voices.

As they listen to musical exemplars, I try to have students listen for the musical decisions and understand the processes that artists, sound engineers, and producers make when crafting their pieces. These listening experiences often open the door to further dialogue on and study of the multiplicity of musical roles that are a part of creating today’s popular music. Having students read accounts of the steps that audio engineers, producers, songwriters, film-score composers, and studio musicians go through when creating music has proven to be informative and has helped students learn the skills for more accurately expressing the musical ideas they have in their heads.

Alex shares my belief in project-based music technology teaching. Rather than walking through the software feature-by-feature, he plunges students directly into a creative challenge, trusting them to pick up the necessary software functionality as they go. Rather than tightly prescribe creative approaches, Alex observes the students’ explorations and uses them as opportunities to ask questions.

I often ask students about their composing and their musical intentions to better understand how they create and what meanings they’re constructing and expressing through their compositions. Insights drawn from these initial dialogues help me identify strategies I can use to guide their future composing and also help me identify listening experiences that might support their work or techniques they might use to achieve their musical ideas.

Some musical challenges are more structured: Alex does “genre studies” where students have to pick out the qualities that define techno or rock or film scores, and then create using those idioms. This is especially useful for younger students who may not have a lot of experience listening closely to a wide range of music.

Rather than devoting entire classes to demonstrations or lectures, Alex prefers to devote the bulk of classroom time to working on the projects, offering “minilessons” to smaller groups or individuals as the need arises.

Teaching through minilessons targeted to individuals or small groups of students has helped to maintain the musical flow of students’ compositional work. As a result, I can provide more individual feedback and support to students as they compose. The students themselves also offer their own minilessons to peers when they have been designated to teach more advanced features of the software, such as how to record a vocal track, add a fade-in or fade-out, or copy their musical material. These technology skills are taught directly to a few students, who then become the experts in that skill, responsible for teaching other students in the class who need the skill.

Not only does the peer-to-peer learning help with cultural authenticity, but it also gives students invaluable experience with the role of teacher.

One of my first questions is usually, “Is there anything that you would like me to listen for or know about before I listen?” This provides an opportunity for students to seek my help with particular aspects of their composing process. After listening to their compositions, I share my impressions of what I hear and offer my perspective on how to solve their musical problems. If students choose not to accept my ideas, that’s fine; after all, it’s their composition and personal expression… Use of conferring by both teacher and students fosters a culture of collaboration and helps students develop skills in peer scaffolding.

Alex recommends creating an online gallery of class compositions. This has become easier to implement since 2007 with the explosion of blog platforms like Tumblr, audio hosting tools like SoundCloud, and video hosts like YouTube. There are always going to be privacy considerations with such platforms, but there is no shortage of options to choose from.

Once a work is online, students can listen to and comment on these compositions at home outside of class time. Sometimes students post pieces in progress, but for the most part, works are posted when deemed “finished” by the composer. The online gallery can also be set up so students can hear works written by participants in other classes. Students are encouraged to listen to pieces published online for ideas to further their own work, to make comments, and to share these works with their friends and family. The real-world publishing of students’ music on the Internet seems to contribute to their motivation.

Assessing creative work is always going to be a challenge, since there’s no objective basis for assessment. Alex looks at how well a student composer has met the goal of the assignment, and how well they have achieved their own compositional intent.

The word “composition” is problematic in the context of contemporary computer-based production. It carries the cultural baggage of Western Europe, the idea of music as having a sole identifiable author (or authors). The sampling and remixing ethos of hip-hop and electronica is closer to the traditions of non-European cultures where music may be owned by everyone and no one. I’ve had good results bringing remixing into the classroom, having students rework each others’ tracks, or beginning with a shared pool of audio samples, or doing more complex collaborative activities like musical shares. Remixes are a way of talking about music via the medium of music, and remixes of remixes can make for some rich and deep conversation. The word “composition” makes less sense in this context. I prefer the broader term “production”, which includes both the creation of new musical ideas and the realization of those ideas in sound.

So far in this post, I’ve presented notation-based composition and loop-based production as if they’re diametrical opposites. In reality, the two overlap, and can be easily combined. A student can create a part as a MIDI sequence and then convert it to notation, or vice versa. The school band or choir can perform alongside recorded or sequenced tracks. Instrumental or vocal performances can be recorded, sampled, and turned into new works. Electronic productions can be arranged for live instruments, and acoustic pieces can be reconceived as electronica. If a hip-hop track can incorporate a sample of Duke Ellington, there’s no reason that sample couldn’t be performed by a high school jazz band. The possibilities are endless.

Rohan lays beats

The Ed Sullivan Fellows program is an initiative by the NYU MusEDLab connecting up-and-coming hip-hop musicians to mentors, studio time, and creative and technical guidance. Our session this past Saturday got off to an intense start, talking about the role of young musicians of color in a world of police brutality and Black Lives Matter. The Fellows are looking to Kendrick Lamar and Chance The Rapper to speak social and emotional truths through music. It’s a brave and difficult job they’ve taken on.

Eventually, we moved from heavy conversation into working on the Fellows’ projects, which this week involved branding and image. I was at kind of a loose end in this context, so I set up the MusEDLab’s Push controller and started playing around with it. Rohan, one of the Fellows, immediately gravitated to it, and understandably so.

Indigo lays beats

Rohan tried out a few drum sounds, then some synths. He quickly discovered a four-bar synth loop that he wanted to build a track around. He didn’t have any Ableton experience, however, so I volunteered to be his co-producer and operate the software for him.

We worked out some drum parts, first with a hi-hat and snare from the Amen break, and then a kick, clap and more hi-hats from Ableton’s C78 factory instrument. For bass, Rohan wanted that classic booming hip-hop sound you hear coming from car stereos in Brooklyn. He spotted the Hip-Hop Sub among the presets. We fiddled with it and he continued to be unsatisfied until I finally just put a brutal compressor on it, and then we got the sound he was hearing in his head.

While we were working, I had my computer connected to a Bluetooth speaker that was causing some weird and annoying system behavior. At one point, iTunes launched itself and started playing a random song under Rohan’s track, “I Can’t Realize You Love Me” by Duke Ellington and His Orchestra, featuring The Harlem Footwarmers and Sid Garry.

Rohan liked the combination of his beat and the Ellington song, so I sampled the opening four bars and added them to the mix. It took me several tries to match the keys, and I still don’t think I really nailed it, but the hip-hop kids have broad tolerance for chord clash, and Rohan was undisturbed.

Once we had the loops assembled, we started figuring out an arrangement. It took me a minute to figure out that when Rohan refers to a “bar,” he means a four-measure phrase. He’s essentially conflating hypermeasures with measures. I posted about it on Twitter later and got some interesting responses.

In a Direct Message, Latinfiddler also pointed out that Latin music calls two measures a “bar” because that’s the length of one cycle of the clave.

Thinking about it further, there’s yet another reason to conflate measures with hypermeasures, which is the broader cut-time shift taking place in hip-hop. All of the young hip-hop beatmakers I’ve observed lately work at half the base tempo of their DAW session. Rohan, being no exception, had the session tempo set to 125 bpm, but programmed a beat with an implied tempo of 62.5 bpm. He and his cohort put their backbeats on beat three, not beats two and four, so they have a base grid of thirty-second notes rather than sixteenth notes. A similar shift took place in the early 1960s when the swung eighth notes of jazz rhythm gave way to the swung sixteenth notes of funk.
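The half-time relationship is simple arithmetic. Here's a sketch (the function name and return shape are mine, not from any DAW) of how a session tempo maps to the felt tempo and grid:

```python
# Half-time hip-hop feel: the DAW session runs at twice the implied tempo,
# so the finest practical grid doubles from sixteenth notes to thirty-second notes.

def implied_feel(session_bpm: float) -> dict:
    """Describe the felt tempo and grid of a half-time beat."""
    return {
        "session_bpm": session_bpm,
        "implied_bpm": session_bpm / 2,  # felt tempo is half the session tempo
        "backbeat": 3,                   # snare/clap on beat 3 of the session's 4/4 bar,
                                         # i.e. beats 2 and 4 at the felt tempo
        "grid": "1/32",                  # 32nds in session time = 16ths in felt time
    }

print(implied_feel(125))  # Rohan's session: 125 bpm on the grid, felt tempo 62.5 bpm
```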

Here’s Rohan’s track as of the end of our session:

By the time we were done working, the rest of the Fellows had gathered around and started freestyling. The next step is to record them rapping and singing on top. We also need to find someone to mix it properly. I understand aspects of hip-hop very well, but I mix amateurishly at best.

All the way around, I feel like I learn a ton about music whenever I work with young hip-hop musicians. They approach the placement of sounds in the meter in ways that would never occur to me. I’m delighted to be able to support them technically in realizing their ideas; it’s a privilege for me.


Inside the aQWERTYon

The MusEDLab and Soundfly just launched Theory For Producers, an interactive music theory course. The centerpiece of the interactive component is a MusEDLab tool called the aQWERTYon. You can try it by clicking the image below.

aQWERTYon screencap

In this post, I’ll talk about why and how we developed the aQWERTYon.

One of our core design principles is to work within our users’ real-world technological limitations. We build tools in the browser so they’ll be platform-independent and accessible anywhere there’s internet access (and where there isn’t internet access, we’ve developed the “MusEDLab in a box.”) We want to find out what musical possibilities there are in a typical computer with no additional software or hardware. That question led us to investigate ways of turning the standard QWERTY keyboard into a beginner-friendly instrument. We were inspired in part by GarageBand’s Musical Typing feature.

GarageBand musical typing

If you don’t have a MIDI controller, Apple thoughtfully made it possible for you to use your computer keyboard to play GarageBand’s many software instruments. You get an octave and a half of piano, plus other useful controls: pitch bend, modulation, sustain, octave shifting and simple velocity control. Many DAWs offer something similar, but Apple’s system is the most sophisticated I’ve seen.

Handy though it is, Musical Typing has some problems as a user interface. The biggest one is the poor fit between the piano keyboard layout and the grid of computer keys. Typing the letter A plays the note C. The rest of that row is the white keys, and the one above it is the black keys. You can play the chromatic scale by alternating A row, Q row, A row, Q row. That basic pattern is easy enough to figure out. However, you quickly get into trouble, because there’s no black key between E and F. The QWERTY keyboard gives no visual reminder of that fact, so you just have to remember it. Unfortunately, the “missing” black key happens to be the letter R, which is GarageBand’s keyboard shortcut for recording. So what inevitably happens is that you’re hunting for E-flat or F-sharp and you accidentally start recording over whatever you were doing. I’ve been using the program for years and still do this routinely.
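The layout described above can be sketched as a lookup table. This is my reconstruction from the description here, not Apple's actual key bindings, which may differ in details like octave placement:

```python
# Approximate Musical Typing layout: the A row holds the white keys from C,
# the Q row holds the black keys, and R sits over the "gap" between E and F
# where no black key exists -- so R is free to trigger recording instead.
WHITE_ROW = {"A": "C", "S": "D", "D": "E", "F": "F", "G": "G", "H": "A", "J": "B", "K": "C"}
BLACK_ROW = {"W": "C#", "E": "D#", "T": "F#", "Y": "G#", "U": "A#"}  # note: no entry for R

def note_for(key: str):
    """Return the note a key plays, or None for the E/F gap (R = record!)."""
    key = key.upper()
    return WHITE_ROW.get(key) or BLACK_ROW.get(key)

print(note_for("E"))  # D#
print(note_for("R"))  # None -- hunting for a black key here starts recording instead
```

The bug-prone part is visible in the data: nothing in the QWERTY grid marks the missing black key, so the mapping silently skips a column.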

Rather than recreating the piano keyboard on the computer, we drew on a different metaphor: the accordion.

The accordion: the user interface metaphor of the future!

We wanted to have chords and scales arranged in an easily discoverable way, like the way you can easily figure out the chord buttons on the accordion’s left hand. The QWERTY keyboard is really a staggered grid four keys tall and between ten and thirteen keys wide, plus assorted modifier and function keys. We decided to use the columns for chords and the rows for scales.

For the diatonic scales and modes, the layout is simple. The bottom row gives the notes in the scale starting on scale degree 1. The second row has the same scale shifted over to start on scale degree 3. The third row starts the scale on scale degree 5, and the top row starts on scale degree 1 an octave up. If this sounds confusing when you read it, try playing it; your ears will immediately pick up the pattern. Notes in the same column form the diatonic chords, with their roman numerals conveniently matching the number keys. There are no wrong notes, so even just mashing keys at random will sound at least okay. Typing your name usually sounds pretty cool, and picking out melodies is a piece of cake. Playing diagonal columns, like Z-S-E-4, gives you chords voiced in fourths. The same layout approach works great for any seven-note scale: all of the diatonic modes, plus the modes of harmonic and melodic minor.
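The row-stagger scheme is easy to express in code. This is my own reconstruction, not the aQWERTYon source: given a seven-note scale, each row starts the scale on a different degree, and reading up a column stacks the scale in thirds, yielding a diatonic chord:

```python
# aQWERTYon-style row layout for a seven-note scale (illustrative sketch).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # MIDI pitches C4..B4

def row(scale, start_degree, width=10):
    """One keyboard row: the scale from start_degree, wrapping up by octaves."""
    out = []
    for i in range(width):
        idx = start_degree - 1 + i
        out.append(scale[idx % 7] + 12 * (idx // 7))
    return out

# Rows start on scale degrees 1, 3, 5, and 1-an-octave-up (bottom row first).
rows = [row(C_MAJOR, d) for d in (1, 3, 5, 8)]

# A column = one note from each row = a diatonic chord with doubled root.
print([r[0] for r in rows])  # [60, 64, 67, 72] -> C E G C, a C major triad
print([r[1] for r in rows])  # [62, 65, 69, 74] -> D F A D, a D minor triad
```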

Pentatonics work pretty much the same way as seven-note scales, except that the columns stack in fourths rather than fifths. The octatonic and diminished scales lay out easily as well. The real layout challenge lay in one strange but crucial exception: the blues scale. Unlike other scales, you can’t just stagger the blues scale pitches in thirds to get meaningful chords. The melodic and harmonic components of blues are more or less unrelated to each other. Our original idea was to put the blues scale on the bottom row of keys, and then use the others to spell out satisfying chords on top. That made it extremely awkward to play melodies, however, since the keys don’t form an intelligible pattern of intervals. Our compromise was to create two different blues modes: one with the chords, for harmony exploration, and one just repeating the blues scale in octaves for melodic purposes. Maybe a better solution exists, but we haven’t figured it out yet.

When you select a different root, all the pitches in the chords and scales are automatically changed as well. Even if the aQWERTYon had no other features or interactivity, this would still make it an invaluable music theory tool. But root selection raises a bigger question: what do you do about all the real-world music that uses more than one scale or mode? Totally uniform modality is unusual, even in simple pop songs. You can access notes outside the currently selected scale by pressing the shift keys, which transposes the entire keyboard up or down a half step. But what would be really great is if we could get the scale settings to change dynamically. Wouldn’t it be great if you were listening to a jazz tune, and the scale was always set to match whatever chord was going by at that moment? You could blow over complex changes effortlessly. We’ve discussed manually placing markers in YouTube videos that tell the aQWERTYon when to change its settings, but that would be labor-intensive. We’re hoping to discover an algorithmic method for placing markers automatically.
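Under the hood, both root selection and the shift keys reduce to transposing the whole grid by a fixed number of semitones. A minimal sketch (my own, not the aQWERTYon's actual implementation):

```python
# Root selection and shift keys as transposition (illustrative sketch).
NOTE_OFFSETS = {"C": 0, "Db": 1, "D": 2, "Eb": 3, "E": 4, "F": 5,
                "Gb": 6, "G": 7, "Ab": 8, "A": 9, "Bb": 10, "B": 11}

def transpose(pitches, root="C", shift=0):
    """Shift a list of MIDI pitches to a new root, plus shift-key half steps."""
    return [p + NOTE_OFFSETS[root] + shift for p in pitches]

c_major = [60, 62, 64, 65, 67, 69, 71]
print(transpose(c_major, root="Eb"))  # the same scale shape, three half steps up
print(transpose(c_major, shift=1))    # shift key held: everything up a half step
```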

The other big design challenge we face is how to present all the different scale choices in a way that doesn’t overwhelm our core audience of non-expert users. One solution would just be to limit the scale choices. We already do that in the Soundfly course, in effect; when you land on a lesson, the embedded aQWERTYon is preset to the appropriate scale and key, and the user doesn’t even see the menus. But we’d like people to be able to explore the rich sonic diversity of the various scales without confronting them with technical Greek terms like “Lydian dominant”. Right now, the scales are categorized as Major, Minor and Other, but those terms aren’t meaningful to beginners. We’ve been discussing how we could organize the scales by mood or feeling, maybe from “brightest” to “darkest.” But how do you assign a mood to a scale? Do we just do it arbitrarily ourselves? Crowdsource mood tags? Find some objective sorting method that maps onto most listeners’ subjective associations? Some combination of the above? It’s an active area of research for us.

This issue of categorizing scales by mood has relevance for the original use case we imagined for the aQWERTYon: teaching film scoring. The idea behind the integrated video window was that you would load a video clip, set a mode, and then improvise some music that fit the emotional vibe of that clip. The idea of playing along with YouTube videos of songs came later. One could teach more general open-ended composition with the aQWERTYon, and in fact our friend Matt McLean is doing exactly that. But we’re attracted to film scoring as a gateway because it’s a more narrowly defined problem. Instead of just “write some music”, the challenge is “write some music with a particular feeling to it that fits into a scene of a particular length.”

Would you like to help us test and improve the aQWERTYon, or to design curricula around it? Would you like to help fund our programmers and designers? Please get in touch.