Samuel Halligan’s awesome Pop-Up Piano for Ableton Live

I recently met a gentleman named Samuel Halligan, who, among other things, makes music education utilities using Max For Live. One of them is called Pop-Up Piano. If you use Max or Ableton and you could use some help learning music theory, you should go and download it immediately. It’s a Max For Live device that you can place on any MIDI track in Ableton, or just open as a Max standalone. The concept is simple: as you play notes on a MIDI controller, the Pop-Up Piano shows you their names and notates them on the staff. You can also set a particular key and scale, and then the Pop-Up Piano will show you whether the notes you’re playing fall within that scale. In the image below, I’m holding down the notes C and E-flat, the second and fourth notes in the B-flat harmonic minor scale.

Pop-Up Piano

Samuel made this thing to help pianists navigate the Ableton Push. But I could see this being useful for any musician. I’m going to use it in my intro-level music theory course that I’m teaching at the New School this fall. I’d be interested to hear from any theory pedagogues out there how you would structure lessons or assignments around this tool.

I’ve done some work around music theory visualization with the NYU MusEDLab. My fondest wish would be to combine the visualization scheme of the Scale Wheel, the immediacy and physical playability of the aQWERTYon, and the music-theoretic depth of the Pop-Up Piano.

Scale Wheel - Bb harmonic minor

In a perfect world, this combination of instrument and music-theoretic Rosetta stone would exist both as a web app and as a DAW-native plugin. Samuel is working on adding more note representations to the Pop-Up Piano, including guitar tab, so that’s awesome. What do you say, developers? Let’s make this happen!

Affordances and Constraints

Note-taking for User Experience Design with June Ahn

Don Norman discusses affordances and constraints in The Design of Everyday Things, Chapter Four: Knowing What To Do.

Don Norman - The Design of Everyday Things

User experience design is easy in situations where there’s only one thing that the user can possibly do. But as the possibilities multiply, so do the challenges. We can deal with new things using information from our prior experiences, or by being instructed. The best-designed things include the instructions for their own use, like video games whose first levels act as tutorials, or doors whose handles communicate by their shape and placement how you should operate them.

We use affordances and constraints to learn how things work. Affordances suggest the range of possibilities, and constraints limit the alternatives. Constraints include:

  • Physical limitations. Door keys can only be inserted into keyholes vertically, but you can still insert the key upside down. Car keys work in both orientations.
  • Semantic constraints. We know that red lights mean stop and green lights mean go, so we infer that a red light means a device is off or inoperative, and a green light means it’s on or ready to function. We have a slow cooker that uses lights in the opposite way and it screws me up every time.
  • Cultural constraints. Otherwise known as conventions. (Not sure how these are different from semantic constraints.) Somehow we all know without being told that we’re supposed to face forward in the elevator. Google Glass was an epic failure because its early adopters ran into the cultural constraint of people not liking to be photographed without consent.
  • Logical constraints. The arrangement of knobs controlling your stove burners should match the arrangement of the burners themselves.

The absence of constraints makes things confusing. Norman gives examples of how much designers love rows of identical switches which give no clues as to their function. Distinguishing the switches by shape, size, or grouping might not look as elegant, but would make it easier to remember which one does what thing.

Helpful designs use visibility (making the relevant parts visible) and feedback (giving actions an immediate and obvious effect). Everyone hates the power buttons on iMacs because they’re hidden on the back, flush with the case. Feedback is an important way to help us distinguish the functional parts from the decorative ones. Propellerheads Reason is an annoying program because its skeuomorphic design puts as many decorative elements on the screen as functional ones. Ableton Live is easier to use because everything on the screen is functional.

When you can’t make things visible, you can give feedback via sound. Pressing a Mac’s power button doesn’t immediately cause the screen to light up, but that’s okay, because it plays the famous startup sound. Norman’s examples of low-tech sound feedback include the “zzz” sound of a functioning zipper, a tea kettle’s whistle, and the various sounds that machines make when they have mechanical problems. The problem with sound as feedback is that it can be intrusive and annoying.

The term “affordance” is the source for a lot of confusion. Norman tries to clarify it in his article “Affordance, Conventions and Design.” He makes a distinction between real and perceived affordances. Anything that appears on a computer screen is a perceived affordance. The real affordances of a computer are its physical components: the screen itself, the keyboard, the trackpad. The MusEDLab was motivated to create the aQWERTYon by considering the computer’s real affordances for music making. Most software design ignores the real affordances and only considers the perceived ones.

Designers of graphical user interfaces rely entirely on conceptual models and cultural conventions. (Consider how many programs use a graphic of a floppy disk as a Save icon, and now compare to the last time you saw an actual floppy disk.) For Norman, graphics are perceived affordances by definition.

Joanna McGrenere and Wayne Ho try to nail the concept down harder in “Affordances: Clarifying and Evolving a Concept.” The term was coined by the perceptual psychologist James J. Gibson in his book The Ecological Approach to Visual Perception. For Gibson, affordances exist independent of the actor’s ability to perceive them, and don’t depend on the actor’s experiences and culture. For Norman, affordances can include both perceived and actual properties, which to me makes more sense. If you can’t figure out that an affordance exists, then what does it matter if it exists or not?

Norman collapses two distinct aspects of design: an object’s utility and the way that users learn or discover that utility. But are designing affordances and designing the information about the affordances the same thing? McGrenere and Ho say no, that it’s the difference between usefulness and usability. They complain that the HCI community has focused on usability at the expense of usefulness. Norman says that a scrollbar is a learned convention, not a real affordance. McGrenere and Ho disagree, because the scrollbar affords scrolling in a way that’s built into the software, making it every bit as much a real affordance as if it were a physical thing. The learned convention is the visual representation of the scrollbar, not the basic fact of it.

The best reason to distinguish affordances from their communication or representation is that sometimes the communication gets in the way of the affordance itself. For example, novice software users need graphical user interfaces, while advanced users prefer text commands and keyboard shortcuts. A beginner needs to see all the available commands, while a pro prefers to keep the screen free of unnecessary clutter. Ableton Live is a notoriously beginner-unfriendly program because it prioritizes visual economy and minimalism over user handholding. A number of basic functions are either invisible or so tiny as to be effectively invisible. Apple’s GarageBand welcomes beginners with photorealistic depictions of everything, but its lack of keyboard shortcuts makes expert users feel like they’re working in oven mitts. For McGrenere and Ho, the same feature of one of these programs can be an affordance or an anti-affordance depending on the user.

Freedom ’90

Since George Michael died, I’ve been enjoying all of his hits, but none of them more than this one. Listening to it now, it’s painfully obvious how much it’s about George Michael’s struggles with his sexual orientation. I wonder whether he was being deliberately coy in the lyrics, or if he just wasn’t yet fully in touch with his identity. Being gay in the eighties must have been a nightmare.

This is the funkiest song that George Michael ever wrote, which is saying something. Was he the funkiest white British guy in history? Quite possibly. 

The beat

There are five layers to the drum pattern: a simple closed hi-hat from a drum machine, some programmed bongos and congas, a sampled tambourine playing lightly swung sixteenth notes, and finally, once the full groove kicks in, the good old Funky Drummer break. I include a Noteflight transcription of all that stuff below, but don’t listen to it; it sounds comically awful.

George Michael uses the Funky Drummer break on at least two of the songs on Listen Without Prejudice Vol 1. Hear him discuss the break and how it informed his writing process in this must-watch 1990 documentary.

The intro and choruses

Harmonically, this is a boilerplate C Mixolydian progression: the chords built on the first, seventh and fourth degrees of the scale. You can hear the same progression in uncountably many classic rock songs.

C Mixolydian chords

For a more detailed explanation of this scale and others like it, check out Theory For Producers.
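To see where those chords come from, here is a minimal JavaScript sketch (mine, not from any of the tools mentioned here) that builds the triad on any degree of a scale by stacking thirds within the scale:

```javascript
// C Mixolydian as semitone offsets from the root: C D E F G A Bb
const C_MIXOLYDIAN = [0, 2, 4, 5, 7, 9, 10];
const NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"];

function triadOnDegree(scale, degree) {
  // Take the 1st, 3rd and 5th notes counting up the scale from the degree.
  return [0, 2, 4].map((step) => {
    const semitone = scale[(degree + step) % scale.length];
    return NOTE_NAMES[semitone];
  });
}

// The I, bVII and IV chords of the "Freedom '90" chorus:
console.log(triadOnDegree(C_MIXOLYDIAN, 0)); // [ 'C', 'E', 'G' ]  = C major
console.log(triadOnDegree(C_MIXOLYDIAN, 6)); // [ 'Bb', 'D', 'F' ] = Bb major
console.log(triadOnDegree(C_MIXOLYDIAN, 3)); // [ 'F', 'A', 'C' ]  = F major
```

The flat seventh is what makes the difference: in plain C major, the triad on the seventh degree is B diminished, but Mixolydian’s B-flat turns it into the B-flat major chord you hear in the chorus.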

The rhythm is what makes this groove so fresh. It’s an Afro-Cuban pattern full of syncopation and hemiola. Here’s an abstraction of it on the Groove Pizza. If you know the correct name of this rhythm, please tell me in the comments!

The verses

There’s a switch to plain vanilla C major, the chords built on the fifth, fourth and root of the scale.

C major chords

Like the chorus, this is standard issue pop/rock harmonically speaking, but it also gets its life from a funky Latin rhythm. It’s a kind of clave pattern, five hits spread more or less evenly across the sixteen sixteenth notes in the bar. Here it is on the Groove Pizza.

The prechorus and bridge

This section unexpectedly jumps over to C minor, and now things get harmonically interesting. The chords are built around a descending chromatic bassline: C, B, B-flat, A. It’s a simple idea with complicated implications, because the four chords draw on three different scales between them. First, we have the tonic triad in C natural minor, no big deal there. Next comes the V chord in C harmonic minor. Then we’re back to C natural minor, but with the seventh in the bass. Finally, we go to the IV chord in C Dorian mode. Really, all that we’re doing is stretching C natural minor to accommodate a couple of new notes: B natural in the second chord, and A natural in the fourth one.

C minor - descending chromatic bassline

The rhythm here is similar but not identical to the clave-like pattern in the verse; the final chord stab comes a sixteenth note earlier. See and hear it on the Groove Pizza.

I don’t have the time to transcribe the whole bassline, but it’s absurdly tight and soulful. The album credits list bass played both by Deon Estus and by George Michael himself. Whichever one of them laid this down, they nailed it.

Song structure

“Freedom ’90” has an exceedingly peculiar structure for a mainstream pop song. The first chorus doesn’t hit until almost two minutes in, which is an eternity; most pop songs are practically over at that point. The graphic below shows the song segments as I marked them in Ableton.

Freedom '90 structure

The song begins with a four bar instrumental intro, nothing remarkable about that. But then it immediately moves into an eight bar section that I have trouble classifying. It’s the spot that would normally be occupied by verse one, but this part uses the chorus harmony and is different from the other verses. I labeled it “intro verse” for lack of a better term. (Update: upon listening again, I realized that this section is the backing vocals from the back half of the chorus. Clever, George Michael!) Then there’s an eight bar instrumental break, before the song has really even started. George Michael brings you on board with this unconventional sequence because it’s all so catchy, but it’s definitely strange.

Finally, twenty bars in, the song settles into a more traditional verse-prechorus-chorus loop. The verses are long, sixteen bars. The prechorus is eight bars, and the chorus is sixteen. You could think of the chorus as being two eight bar sections, the part that goes “All we have to do…” and the part that goes “Freedom…” but I hear it as all one big section.

After two verse-prechorus-chorus units, there’s a four bar breakdown on the prechorus chord progression. This leads into a sixteen bar bridge, still following the prechorus form. Finally, the song ends with a climactic third chorus, which repeats and fades out as an outro. All told, the song is over six minutes. That’s enough time (and musical information) for two songs by a lesser artist.

A word about dynamics: just from looking at the audio waveform, you can see that “Freedom ’90” has very little contrast in loudness and fullness over its duration. It starts sparse, but once the Funky Drummer loop kicks in at measure 13, the sound stays constantly big and full until the breakdown and bridge. These sections are a little emptier without the busy piano part. The final chorus is a little bigger than the rest of the song because there are more vocals layered in, but that still isn’t a lot of contrast. I guess George Michael decided that the groove was so hot, why mess with it by introducing contrast for the sake of contrast? He was right to feel that way.

Learning music from Ableton

Ableton recently launched a delightful web site that teaches the basics of beatmaking, production and music theory using elegant interactives. If you’re interested in music education, creation, or user experience design, you owe it to yourself to try it out.

Ableton - Learning Music site

One of the site’s co-creators is Dennis DeSantis, who wrote Live’s unusually lucid documentation, and also Ableton’s first book, a highly-recommended collection of strategies for music creation (not just in the electronic idiom).

Dennis DeSantis - Making Music

The other co-creator is Jack Schaedler, who also created this totally gorgeous interactive digital signal theory primer.

If you’ve been following the work of the NYU Music Experience Design Lab, you might notice some strong similarities between Ableton’s site and our tools. That’s no coincidence. Dennis and I have been having an informal back and forth on the role of technology in music education for a few years now. It’s a relationship that’s going to get a step more formal this fall at the 2017 Loop Conference – more details on that as it develops.

Meanwhile, Peter Kirn’s review of the Learning Music site raises some probing questions about why Ableton might be getting involved in education in the first place. But first, he makes some broad statements about the state of the musical world that are worth repeating in full.

I think there’s a common myth that music production tools somehow take away from the need to understand music theory. I’d say exactly the opposite: they’re more demanding.

Every musician is now in the position of composer. You have an opportunity to arrange new sounds in new ways without any clear frame from the past. You’re now part of a community of listeners who have more access to traditions across geography and essentially from the dawn of time. In other words, there’s almost no choice too obvious.

The music education world has been slow to react to these new realities. We still think of composition as an elite and esoteric skill, one reserved for a small class of highly trained specialists. Before computers, this was a reasonable enough attitude to have, because it was mostly true. Not many of us can learn an instrument well enough to compose with it, then learn to notate our ideas. Even fewer of us will be able to find musicians to perform those compositions. But anyone with an iPhone and twenty dollars worth of apps can make original music using an infinite variety of sounds, and share that music online with anyone willing to listen. My kids started playing with iOS music apps when they were one year old. With the technical barriers to musical creativity falling away, the remaining challenge is gaining an understanding of music itself: how it works, why some things sound good and others don’t. This is the challenge that we as music educators are suddenly free to take up.

There’s an important question to ask here, though: why Ableton?

To me, the answer to this is self-evident. Ableton has been in the music education business since its founding. Like Adam Bell says, every piece of music creation software is a de facto education experience. Designers of DAWs might even be the most culturally impactful music educators of our time. Most popular music is made by self-taught producers, and a lot of that self-teaching consists of exploring DAWs like Ableton Live. The presets, factory sounds and affordances of your DAW powerfully inform your understanding of musical possibility. If DAW makers are going to be teaching the world’s producers, I’d prefer if they do it intentionally.

So far, there has been a divide between “serious” music making tools like Ableton Live and the toy-like iOS and web apps that my kids use. If you’re sufficiently motivated, you can integrate them all together, but it takes some skill. One of the most interesting features of Ableton’s web site, then, is that each interactive tool includes a link that will open up your little creation in a Live session. Peter Kirn shares my excitement about this feature.

There are plenty of interactive learning examples online, but I think that “export” feature – the ability to integrate with serious desktop features – represents a kind of breakthrough.

Ableton Live is a superb creation tool, but I’ve been hesitant to recommend it to beginner producers. The web site could change my mind about that.

So, this is all wonderful. But Kirn points out a dark side.

The richness of music knowledge is something we’ve received because of healthy music communities and music institutions, because of a network of overlapping ecosystems. And it’s important that many of these are independent. I think it’s great that software companies are getting into the action, and I hope they continue to do so. In fact, I think that’s one healthy part of the present ecosystem.

It’s the rest of the ecosystem that’s worrying – the one outside individual brands and what they support. Public music education is getting squeezed in different ways all around the world. Independent content production is, too, even in advertising-supported publications like this one, but more so in other spheres. Worse, I think education around music technology hasn’t even begun to be reconciled with traditional music education – in the sense that people with specialties in one field tend not to have any understanding of the other. And right now, we need both – and both are getting their resources squeezed.

This might feel like I’m going on a tangent, but if your DAW has to teach you how harmony works, it’s worth asking the question – did some other part of the system break down?

Yes it did! Sure, you can learn the fundamentals of rhythm, harmony, and form from any of a thousand schools, courses, or books. But there aren’t many places you can go to learn about it in the context of Beyoncé, Daft Punk, or A Tribe Called Quest. Not many educators are hip enough to include the Sleng Teng riddim as one of the fundamentals. I’m doing my best to rectify this imbalance–that’s what my Soundfly courses are for. But I join Peter Kirn in wondering why it’s left to private companies to do this work. Why isn’t school music more culturally relevant? Why do so many educators insist that the kids like the wrong music? Why is it so common to get a music degree without ever writing a song? Why is the chasm between the culture of school music and music generally so wide?

Like Kirn, I’m distressed that school music programs are getting their budgets cut. But there’s a reason that’s happening, and it isn’t that politicians and school boards are philistines. Enrollment in school music is declining in places where the budgets aren’t being cut, and even where schools are offering free instruments. We need to look at the content of school music itself to see why it’s driving kids away. Both the content of school music programs and the people teaching them are whiter than the student population. Even white kids are likely to be alienated from a Eurocentric curriculum that doesn’t reflect America’s increasingly Afrocentric musical culture. The large ensemble model that we imported from European conservatories is incompatible with the riot of polyglot individualism in the kids’ earbuds.

While music therapists have been teaching songwriting for years, it’s rare to find it in school music curricula. Production and beatmaking are even more rare. Not many adults can play oboe in an orchestra, but anyone with a guitar or keyboard or smartphone can write and perform songs. Music performance is a wonderful experience, one I wish were available to everyone, but music creation is on another level of emotional meaning entirely. It’s like the difference between watching basketball on TV and playing it yourself. It’s a way to understand your own innermost experiences and the innermost experiences of others. It changes the way you listen to music, and the way you approach any kind of art for that matter. It’s a tool that anyone should be able to have in their kit. Ableton is doing the music education world an invaluable service; I hope more of us follow their example.

Design for Real Life – QWERTYBeats research

Writing assignment for Design For The Real World with Claire Kearney-Volpe and Diana Castro – research about a new rhythm interface for blind and low-vision novice musicians

Definition

I propose a new web-based accessible rhythm instrument called QWERTYBeats. Traditional instruments are highly accessible to blind and low-vision musicians. Electronic music production tools are not. I look at the history of accessible instruments and software interfaces, give an overview of current electronic music hardware and software, and discuss the design considerations underlying my project.

QWERTYBeats logo

Historical overview

Acoustic instruments give rich auditory and haptic feedback, and pose little obstacle to blind musicians. We need look no further for proof than the long history of iconic blind musicians like Ray Charles and Stevie Wonder. Even sighted instrumentalists rarely look at their instruments once they have attained a sufficient level of proficiency. Standard music notation is not accessible, but Braille music notation has existed since the writing system’s inception. Also, a great many musicians, both blind and sighted, play entirely by ear anyway.

Most of the academic literature around accessibility issues in music education focuses on wider adoption of and support for Braille notation. See, for example, Rush, T. W. (2015). Incorporating Assistive Technology for Students with Visual Impairments into the Music Classroom. Music Educators Journal, 102(2), 78–83. For electronic music, notation is rarely if ever a factor.

Electronic instruments pose some new accessibility challenges. They may use graphical interfaces with nested menus, complex banks of knobs and patch cables, and other visual control surfaces. Feedback may be given entirely with LED lights and small text labels. Nevertheless, blind users can master these devices with sufficient practice, memorization and assistance. For example, Stevie Wonder has incorporated synthesizers and drum machines in most of his best-known recordings.

Most electronic music creation is currently done not with instruments, but rather using specialized software applications called digital audio workstations (DAWs). Keyboards and other controllers are mostly used to access features of the software, rather than as standalone instruments. The most commonly-used DAWs include Avid Pro Tools, Apple Logic, Ableton Live, and Steinberg Cubase. Mobile DAWs are more limited than their desktop counterparts, but are nevertheless becoming robust music creation tools in their own right. Examples include Apple GarageBand and Steinberg Cubasis. Notated music is commonly composed using score editing software like Sibelius and Finale, whose functionality increasingly overlaps with DAWs, especially in regard to MIDI sequencing.

DAWs and notation editors pose steep accessibility challenges due to their graphical and spatial interfaces, not to mention their sheer complexity. In class, we were given a presentation by Leona Godin, a blind musician who records and edits audio using Pro Tools by means of VoiceOver. While it must have taken a heroic effort on her part to learn the program, Leona demonstrates that it is possible. However, some DAWs pose insurmountable problems even to very determined blind users because they do not use standard operating system elements, making them inaccessible via screen readers.

Technological interventions

There are no mass-market electronic interfaces specifically geared toward blind or low-vision users. In this section, I discuss one product frequently hailed for its “accessibility” in the colloquial rather than blindness-specific sense, along with some more experimental and academic designs.

Ableton Push

Push layout for IMPACT Faculty Showcase

Ableton Live has become the DAW of choice for electronic music producers. Low-vision users can zoom in to the interface and modify the color scheme. However, Live is inaccessible via screen readers.

In recent years, Ableton has introduced a hardware controller, the Push, which is designed to make the software experience more tactile and instrument-like. The Push combines an eight by eight grid of LED-lit touch pads with banks of knobs, buttons and touch strips. It makes it possible to create, perform and record a piece of music from scratch without looking at the computer screen. In addition to drum programming and sampler performance, the Push also has an innovative melodic mode which maps scales onto the grid in such a way that users cannot play a wrong note. Other comparable products exist; see, for example, the Native Instruments Maschine.

There are many pad-based drum machines and samplers. Live’s main differentiator is its Session view, where the pads launch clips: segments of audio or MIDI that can vary in length from a single drum hit to the length of an entire song. Clip launching is tempo-synced, so when you trigger a clip, playback is delayed until the start of the next measure (or whatever the quantization interval is). Clip launching is a forgiving and beginner-friendly performance method, because it removes the possibility of playing something out of rhythm. Like other DAWs, Live also gives rhythmic scaffolding in its software instruments by means of arpeggiators, delay and other tempo-synced features.
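The scheduling logic behind quantized launching is simple to sketch. Here is a minimal JavaScript illustration (my own, not Ableton’s code) using the Web Audio clock, assuming the transport started at time zero:

```javascript
// Quantized clip launching: defer playback to the next bar line.
// Assumes an AudioContext and a preloaded AudioBuffer for the clip.
function launchQuantized(audioCtx, clipBuffer, bpm, beatsPerBar = 4) {
  const barDuration = (60 / bpm) * beatsPerBar; // bar length in seconds
  const now = audioCtx.currentTime;
  // Round up to the start of the next bar.
  const startTime = Math.ceil(now / barDuration) * barDuration;

  const source = audioCtx.createBufferSource();
  source.buffer = clipBuffer;
  source.connect(audioCtx.destination);
  source.start(startTime); // however early you trigger, the clip waits for the bar line
}
```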

The Push is a remarkable interface, but it has some shortcomings for blind users. First of all, it is expensive, $800 for the entry-level version and $1400 for the full-featured software suite. Much of its feedback is visual, in the form of LED screens and color-coded lighting on the pads. It switches between multiple modes which can be challenging to distinguish even for sighted users. And, like the software it accompanies, the Push is highly complex, with a steep learning curve unsuited to novice users, blind or sighted.

The aQWERTYon

Most DAWs enable users to perform MIDI instruments on the QWERTY keyboard. The most familiar example is the Musical Typing feature in Apple GarageBand.

GarageBand musical typing

Musical Typing makes it possible to play software instruments without an external MIDI controller, which is convenient and useful. However, its layout counterintuitively follows the piano keyboard, which is an awkward fit for the computer keyboard. There is no easy way to distinguish the black and white keys, and even expert users find themselves inadvertently hitting the keyboard shortcut for recording while hunting for F-sharp.

The aQWERTYon is a web interface developed by the NYU Music Experience Design Lab specifically intended to address the shortcomings of Musical Typing.

aQWERTYon screencap

Rather than emulating the piano keyboard, the aQWERTYon draws its inspiration from the chord buttons of an accordion. It fills the entire keyboard with harmonically related notes in a way that supports discovery by naive users. Specifically, it maps scales across the rows of keys, staggered by intervals such that each column forms a chord within the scale. Root notes and scales can be set from pulldown menus within the interface, or preset using URL parameters. It can be played as a standalone instrument, or as a MIDI controller in conjunction with a DAW. Here is a playlist of music I created using the aQWERTYon and GarageBand or Ableton Live:

The aQWERTYon is a completely tactile experience. Sighted users can carefully match keys to note names using the screen, but more typically approach the instrument by feel, seeking out patterns on the keyboard by ear. A blind user would need assistance loading the aQWERTYon initially and setting the scale and root note parameters, but otherwise, it is perfectly accessible. The present project was motivated in large part by a desire to make exploration of rhythm as playful and intuitive as the aQWERTYon makes exploring chords and scales.
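To make the row-stagger idea concrete, here is a rough JavaScript sketch of the kind of mapping described above. This is my own illustration of the concept, not the MusEDLab’s actual implementation; the row order and the stagger of a third are assumptions:

```javascript
// Four QWERTY rows, bottom to top, as they might be filled with notes.
const ROWS = ["zxcvbnm,./", "asdfghjkl;", "qwertyuiop", "1234567890"];

// C major as semitone offsets from the root, repeated over octaves as needed.
const SCALE = [0, 2, 4, 5, 7, 9, 11];

function scaleNote(rootMidi, degree) {
  const octave = Math.floor(degree / SCALE.length);
  return rootMidi + 12 * octave + SCALE[degree % SCALE.length];
}

// Stagger each row by a third (two scale degrees), so that any column
// stacks thirds vertically and therefore forms a chord within the scale.
function buildKeyMap(rootMidi = 48) {
  const keyToMidi = {};
  ROWS.forEach((row, rowIndex) => {
    [...row].forEach((key, colIndex) => {
      keyToMidi[key] = scaleNote(rootMidi, colIndex + 2 * rowIndex);
    });
  });
  return keyToMidi;
}

const map = buildKeyMap();
// The leftmost column stacks up a Cmaj7: MIDI 48, 52, 55, 59.
console.log(map["z"], map["a"], map["q"], map["1"]); // 48 52 55 59
```

Because every row is offset by a third, every column stacks thirds, so a naive user sweeping down any column lands on a seventh chord in the key no matter where they are on the keyboard.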

Soundplant

The QWERTY keyboard can be turned into a simple drum machine quite easily using a free program called Soundplant. The user simply drags an audio file onto a graphical key to have it triggered by the corresponding physical key. I was able to create a TR-808 kit in a matter of minutes:

Soundplant with 808 samples

After it is set up and configured, Soundplant can be as effortlessly accessible as the aQWERTYon. However, it does not give the user any rhythmic assistance. Drumming in perfect time is an advanced musical skill, and playing drum machine samples out of time is not much more satisfying than banging on a metal bowl with a spoon. An ideal drum interface would offer beginners some of the rhythmic scaffolding and support that Ableton provides via Session view, arpeggiators, and the like.

The Groove Pizza

Drum machines and their software counterparts offer an alternative form of rhythmic scaffolding. The user sequences patterns in a time-unit box system or piano roll, and the computer performs those patterns flawlessly. The MusEDLab’s Groove Pizza app is a web-based drum sequencer that wraps the time-unit box system into a circle.

Groove Pizza - Bembe

The Groove Pizza was designed to make drum programming more intuitive by visualizing the symmetries and patterns inherent in musical-sounding rhythms. However, it is totally unsuitable for blind or low-vision users. Interaction is only possible through the mouse pointer or touch, and there are no standard user interface elements that can be parsed by screen readers.
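Accessibility aside, the circular mapping itself is just a bit of trigonometry. As a rough sketch (mine, not the Groove Pizza source code), laying a sixteen-step pattern around a circle looks like this:

```javascript
// Place each step of a 16-step drum pattern around a circle,
// with step 0 at twelve o'clock, proceeding clockwise.
function stepPositions(steps = 16, radius = 100, cx = 0, cy = 0) {
  const positions = [];
  for (let step = 0; step < steps; step++) {
    // Screen coordinates: y grows downward, so -PI/2 puts step 0 at the top.
    const angle = (step / steps) * 2 * Math.PI - Math.PI / 2;
    positions.push({
      step,
      x: cx + radius * Math.cos(angle),
      y: cy + radius * Math.sin(angle),
    });
  }
  return positions;
}

// Drawing lines between a rhythm's onsets produces the polygons that make
// its symmetries visible: a four-on-the-floor kick becomes a square.
console.log(stepPositions()[4]); // { step: 4, x: 100, y: ~0 } -- three o'clock
```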

Before ever thinking about design for the blind, the MusEDLab had already noted the Groove Pizza’s limitations for younger children and users with special needs: there is no “live performance” mode, and there is always some delay between making a change in the drum pattern and hearing the result. We have been considering ways to make a rhythm interface that is more immediate, performance-oriented and tactile. One possible direction would be to create a hardware version of the Groove Pizza; indeed, one of the earliest prototypes was a hardware version built by Adam November out of a pizza box. However, hardware design is vastly more complex and difficult than software, so for the time being, software promises more immediate results.

Haenselmann-Lemelson-Effelsberg MIDI sequencer

This experimental interface is described in Haenselmann, T., Lemelson, H., & Effelsberg, W. (2011). A zero-vision music recording paradigm for visually impaired people. Multimedia Tools and Applications, 5, 1–19.

Haenselmann-Lemelson-Effelsberg MIDI sequencer

The authors create a new mode for a standard MIDI keyboard that maps piano keys to DAW functions like playback, quantization, track selection, and so on. They also add “earcons” (auditory icons) to give sonic feedback when particular functions have been activated that normally only give graphical feedback. For example, one earcon sounds when recording is enabled; another sounds for regular playback. This interface sounds promising, but there are significant obstacles to its adoption. While the authors have released the source code as a free download, that requires a would-be user to be able to compile and run it. This is presuming that they could access the code in the first place; the download link given in the paper is inactive. It is an all-too-common fate of academic projects to never get widespread usage. By posting our projects on the web, the MusEDLab hopes to avoid this outcome.

Statement

Music education philosophy

My project is animated by a constructivist philosophy of music education, which operates by the following axiomatic assumptions:

  • Learning by doing is better than learning by being told.
  • Learning is not something done to you, but rather something done by you.
  • You do not get ideas; you make ideas. You are not a container that gets filled with knowledge and new ideas by the world around you; rather, you actively construct knowledge and ideas out of the materials at hand, building on top of your existing mental structures and models.
  • The most effective learning experiences grow out of the active construction of all types of things, particularly things that are personally or socially meaningful, that you develop through interactions with others, and that support thinking about your own thinking.

If an activity’s challenge level is beyond your ability, you experience anxiety. If your ability at the activity far exceeds the challenge, the result is boredom. Flow happens when challenge and ability are well-balanced, as seen in this diagram adapted from Csikszentmihalyi.

Flow

Music students face significant obstacles to flow at the left side of the Ability axis. Most instruments require extensive practice before it is possible to make anything that resembles “real” music. Electronic music presents an opportunity here, because even a complete novice can produce music with a high degree of polish quickly. It is empowering to use technologies that make it impossible to do anything wrong; it frees you to begin exploring what you find to sound right. Beginners can be scaffolded in their pitch explorations with MIDI scale filters, Auto-Tune, and the configurable software keyboards in apps like Thumbjam and Animoog. Rhythmic scaffolding is more rare, but it can be had via Ableton’s quantized clip launcher, by MIDI arpeggiators, and using the Note Repeat feature on many drum machines.

QWERTYBeats proposal

My project takes drum machine Note Repeat as its jumping-off point. When Note Repeat is activated, holding down a drum pad triggers the corresponding sound at a particular rhythmic interval: quarter notes, eighth notes, and so on. On the Ableton Push, Note Repeat automatically syncs to the global tempo, making it effortless to produce musically satisfying rhythms. However, this mode has a major shortcoming: it applies globally to all of the drum pads. To my knowledge, no drum machine makes it possible to simultaneously have, say, the snare drum playing every dotted eighth note while the hi-hat plays every sixteenth note.

I propose a web application called QWERTYBeats that maps drums to the computer keyboard as follows:

  • Each row of the keyboard triggers a different drum/beatbox sound (e.g. kick, snare, closed hi-hat, open hi-hat).
  • Each column retriggers the sample at a different rhythmic interval (e.g. quarter note, dotted eighth note).
  • Circles dynamically divide into “pie slices” to show rhythmic values.

The rhythm values are shown below by column, with each description followed by the corresponding interval expressed as a fraction of a beat (see the code sketch after this list).

  1. quarter note (1)
  2. dotted eighth note (3/4)
  3. quarter note triplet (2/3)
  4. eighth note (1/2)
  5. dotted sixteenth note (3/8)
  6. eighth note triplet (1/3)
  7. sixteenth note (1/4)
  8. dotted thirty-second note (3/16)
  9. sixteenth note triplet (1/6)
  10. thirty-second note (1/8)
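Converting those fractions into actual retrigger intervals is simple arithmetic. Here is a minimal JavaScript sketch (the names are my own, not a spec):

```javascript
// Retrigger interval for each column, as a fraction of one beat.
const COLUMN_FRACTIONS = [1, 3 / 4, 2 / 3, 1 / 2, 3 / 8, 1 / 3, 1 / 4, 3 / 16, 1 / 6, 1 / 8];

// Milliseconds between retriggers for a column at a given tempo.
function retriggerMs(column, bpm) {
  const beatMs = 60000 / bpm; // one quarter note in milliseconds
  return beatMs * COLUMN_FRACTIONS[column];
}

console.log(retriggerMs(0, 120)); // quarter note at 120 bpm: 500 ms
console.log(retriggerMs(1, 120)); // dotted eighth: 375 ms
console.log(retriggerMs(6, 120)); // sixteenth note: 125 ms
```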

By simply holding down different combinations of keys, users can attain complex syncopations and polyrhythms. If the app is synced to the tempo of a DAW or music playback, the user can perform good-sounding rhythms over any song that is personally meaningful to them.
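Here is a rough sketch of the performance loop itself, reusing the retriggerMs() helper above. The key-to-sound mapping is a placeholder, and a real implementation would schedule hits on the Web Audio clock rather than setInterval, which drifts; the structure is what matters:

```javascript
const BPM = 120;

// Placeholder key map: each key gets a sound (by row) and a column index.
// A real layout would cover the whole keyboard.
const KEY_MAP = {
  q: { sound: "kick", column: 0 },      // kick, quarter notes
  a: { sound: "snare", column: 1 },     // snare, dotted eighths
  z: { sound: "closedHat", column: 6 }, // hi-hat, sixteenths
};

const activeLoops = new Map(); // physical key -> timer ID

function playSample(sound) {
  // Placeholder: trigger sample playback, e.g. via a Web Audio buffer source.
  console.log("hit:", sound);
}

document.addEventListener("keydown", (event) => {
  if (event.repeat || activeLoops.has(event.key)) return; // ignore OS auto-repeat
  const mapping = KEY_MAP[event.key];
  if (!mapping) return;
  playSample(mapping.sound); // sound the first hit immediately...
  const timer = setInterval(
    () => playSample(mapping.sound), // ...then retrigger at the column's interval
    retriggerMs(mapping.column, BPM)
  );
  activeLoops.set(event.key, timer);
});

document.addEventListener("keyup", (event) => {
  if (!activeLoops.has(event.key)) return;
  clearInterval(activeLoops.get(event.key));
  activeLoops.delete(event.key);
});
```

With this mapping, holding q and z together yields a kick on every quarter note against hi-hats on every sixteenth; adding a at the dotted-eighth column produces exactly the kind of layered polyrhythm that a global Note Repeat cannot.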

The column layout leaves some unused keys in the upper right corner of the keyboard: “-”, “=”, “[”, “]”, “\”, etc. These can be reserved for setting the tempo and other UI elements.

The app defaults to Perform Mode, but clicking Make New Kit opens Sampler mode, where users can import or record their own drum sounds:

  • Keyboard shortcuts enable the user to select a sound, audition it, record, set start and end point, and set its volume level.
  • A login/password system enables users to save kits to the cloud where they can be accessed from any computer. Kits get unique URL identifiers, so users can also share them via email or social media.

It is my goal to make the app accessible to users with the widest possible diversity of abilities; a sketch of this approach follows the list below.

  • The entire layout will use plain text, CSS and JavaScript to support screen readers.
  • All user interface elements can be accessed via the keyboard: tab to change the keyboard focus, menu selections and parameter changes via the up and down arrows, and so on.
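As a small illustration of that second bullet (my own sketch, not a finished design), building controls out of native form elements gets keyboard operation and screen reader support without extra work:

```javascript
// Native form controls are keyboard-operable and screen-reader-friendly
// out of the box: tab moves focus, arrow keys change the value.
const label = document.createElement("label");
label.htmlFor = "tempo";
label.textContent = "Tempo (BPM)";

const tempoInput = document.createElement("input");
tempoInput.type = "number";
tempoInput.id = "tempo";
tempoInput.min = "40";
tempoInput.max = "240";
tempoInput.value = "120";

document.body.append(label, tempoInput);

// The change event fires whether the value came from the keyboard,
// the mouse, or assistive technology.
tempoInput.addEventListener("change", () => {
  console.log("new tempo:", Number(tempoInput.value));
});
```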

Perform Mode:

QWERTYBeats concept images - Perform mode

Sampler Mode:

sampler-mode

Mobile version

The present thought is to divide up the screen into a grid mirroring the layout of the QWERTY keyboard. User testing will determine whether this will produce a satisfying experience.

QWERTYDrum - mobile

Prototype

I created a prototype of the app using Ableton Live’s Session View.

QWERTYBeats - Ableton prototype

Here is a sample performance:

There is not much literature examining the impact of drum programming and other electronic rhythm sequencing on students’ subsequent ability to play acoustic drums, or to keep time more accurately in general. I can report anecdotally that my own time spent sequencing and programming drums improved my drumming and timekeeping enormously (and mostly inadvertently). I will continue to seek further support for the hypothesis that electronically assisted rhythm creation builds unassisted rhythmic ability. In the meantime, I am eager to prototype and test QWERTYBeats.

Rohan lays beats

The Ed Sullivan Fellows program is an initiative by the NYU MusEDLab connecting up-and-coming hip-hop musicians to mentors, studio time, and creative and technical guidance. Our session this past Saturday got off to an intense start, talking about the role of young musicians of color in a world of police brutality and Black Lives Matter. The Fellows are looking to Kendrick Lamar and Chance The Rapper to speak social and emotional truths through music. It’s a brave and difficult job they’ve taken on.

Eventually, we moved from heavy conversation into working on the Fellows’ projects, which this week involved branding and image. I was at kind of a loose end in this context, so I set up the MusEDLab’s Push controller and started playing around with it. Rohan, one of the Fellows, immediately gravitated to it, and understandably so.

Indigo lays beats

Rohan tried out a few drum sounds, then some synths. He quickly discovered a four-bar synth loop that he wanted to build a track around. He didn’t have any Ableton experience, however, so I volunteered to be his co-producer and operate the software for him.

We worked out some drum parts, first with a hi-hat and snare from the Amen break, and then a kick, clap and more hi-hats from Ableton’s C78 factory instrument. For bass, Rohan wanted that classic booming hip-hop sound you hear coming from car stereos in Brooklyn. He spotted the Hip-Hop Sub among the presets. We fiddled with it and he continued to be unsatisfied until I finally just put a brutal compressor on it, and then we got the sound he was hearing in his head.

While we were working, I had my computer connected to a Bluetooth speaker that was causing some weird and annoying system behavior. At one point, iTunes launched itself and started playing a random song under Rohan’s track, “I Can’t Realize You Love Me” by Duke Ellington and His Orchestra, featuring The Harlem Footwarmers and Sid Garry.

Rohan liked the combination of his beat and the Ellington song, so I sampled the opening four bars and added them to the mix. It took me several tries to match the keys, and I still don’t think I really nailed it, but the hip-hop kids have broad tolerance for chord clash, and Rohan was undisturbed.

Once we had the loops assembled, we started figuring out an arrangement. It took me a minute to figure out that when Rohan refers to a “bar,” he means a four-measure phrase. He’s essentially conflating hypermeasures with measures. I posted about it on Twitter later and got some interesting responses.

In a Direct Message, Latinfiddler also pointed out that Latin music calls two measures a “bar” because that’s the length of one cycle of the clave.

Thinking about it further, there’s yet another reason to conflate measures with hypermeasures, which is the broader cut-time shift taking place in hip-hop. All of the young hip-hop beatmakers I’ve observed lately work at half the base tempo of their DAW session. Rohan, being no exception, had the session tempo set to 125 bpm, but programmed a beat with an implied tempo of 62.5 bpm. He and his cohort put their backbeats on beat three, not beats two and four, so they have a base grid of thirty-second notes rather than sixteenth notes. A similar shift took place in the early 1960s when the swung eighth notes of jazz rhythm gave way to the swung sixteenth notes of funk.

Here’s Rohan’s track as of the end of our session:

By the time we were done working, the rest of the Fellows had gathered around and started freestyling. The next step is to record them rapping and singing on top. We also need to find someone to mix it properly. I understand aspects of hip-hop very well, but I mix amateurishly at best.

All the way around, I feel like I learn a ton about music whenever I work with young hip-hop musicians. They approach the placement of sounds in the meter in ways that would never occur to me. I’m delighted to be able to support them technically in realizing their ideas; it’s a privilege for me.

Ilan meets the Fugees

My youngest private music production student is a kid named Ilan. He makes moody trip-hop and deep house using Ableton Live. For our session today, Ilan came in with a downtempo, jazzy hip-hop instrumental. I helped him refine and polish it, and then we talked about his ideas for what kind of vocal might work on top. He wanted an emcee to flow over it, so I gave him my folder of hip-hop acapellas I’ve collected. The first one he tried was “Fu-Gee-La [Refugee Camp Remix]” by the Fugees.

I had it all warped out already, so all he had to do was drag and drop it into his session and press play. It sounded great, so he ran with it. Here’s what he ended up with:

At this point, let me clarify something. To his knowledge, Ilan had never heard “Fu-Gee-La” before using it in his track. His first exposure was the acapella over his own instrumental. His track is quite a bit faster than the original (well, technically, it’s slower, but the kids these days like their rapping doubletime). Also, we needed to pitch the acapella down a minor third to match the key of Ilan’s instrumental. As of this writing, he has heard his remix about a thousand more times than the original.

And now, let’s consider the Fugees’ “original” song. Ilan used the acapella from a remix, not from the original original, which makes a difference since the remix has some different lyrics. The Fugees’ original original is not itself totally original. It contains several samples, including liberal interpolations of Teena Marie, and a quote from “Shakiyla (JRH)” by Poor Righteous Teachers, which itself contains several samples.

Hip-hop’s sampling culture was still radical back in the 90s when “Fu-Gee-La” was released, but has since become absorbed into mainstream sensibilities. Ilan is ambitious and talented, but his sensibilities are well in keeping with those of most of his millennial peers. So it’s worth looking into his norms and values around authorship and ownership. During our session, he was interested in the Fugees song simply as raw material for his own creativity, not as a self-contained work that needed to be “appreciated” first (or ever). Ilan’s concern about where he sources his sounds comes down one hundred percent to expediency. He buys sounds from the Ableton web site because that’s easy. The same goes for buying tracks from iTunes, if they surface with a quick search. Otherwise Ilan just does YouTube to mp3 conversion. I’ve never heard him voice any concern about the idea of intellectual property, or any desire to seek anyone’s permission.

So here we have a young musician who created an original track, and then after the fact layered in a commercially released hip-hop vocal track on a whim. If that one hadn’t worked, he would have just dropped in another one chosen more or less at random. This kind of effortless drag-and-drop remixing requires some facility with Ableton Live, which is expensive and has a learning curve. But this practice is easier than it was five years ago, and is only going to get easier. Music educators: are we ready for a world where this kind of creativity is so accessible? Rights holders: do you know just how little the kids know or care about the concept of musical intellectual property? And musicians: have you experienced the pleasure and inspiration of freely mixing your ideas with everyone else’s? This is a crazy time we live in.

Project-based music technology teaching

I use a project-based approach to teaching music technology. Technical concepts stick with you better if you learn them in the course of making actual music. Here’s the list of projects I assign to my college classes and private students. I’ve arranged them from easiest to hardest. The first five projects are suitable for a beginner-level class using any DAW–my beginners use GarageBand. The last two projects are more advanced and require a DAW with sophisticated editing tools and effects, like Ableton Live. If you’re a teacher, feel free to use these (and let me know if you do). Same goes for all you bedroom producers and self-teachers.

The projects are agnostic as to musical content, style or genre. However, the computer is best suited to making electronic music, and most of these projects work best in the pop/hip-hop/techno sphere. Experimental, ambient or film music approaches also work well. Many of them draw on the Disquiet Junto. Enjoy.

Tristan gets his FFT on

Loops

Assignment: Create a song using only existing loops. You can use these or these, or restrict yourself to the loops included with your DAW. Do not use any additional sounds or instruments.

For beginners, I like to separate this into two separate assignments. First, create a short (two or four bar) phrase using four to six instrument loops and beats. Then use that set of loops as the basis of a full length track, by repeating, and by having sounds enter and exit.

Concepts:

  • Basic DAW functions
  • Listening like a producer
  • Musical form and song structures
  • Intellectual property, copyright and authorship

Hints:

  • MIDI loops are easier to edit and customize than audio loops.
  • Try slicing audio loops into smaller segments. Use only the front or back half of the loop. Or rearrange segments into a different order.

final song

MIDI

Assignment: Create a piece of music using MIDI and software instruments. Do not record or import any audio. You can use MIDI from any source, including: playing keyboards, drum pads or other interfaces; drawing in the piano roll; importing scores from notation programs; downloading MIDI files from the internet (for example, from here); or using the Audio To MIDI function in your DAW. 

I don’t treat this as a composition exercise (unless students want to make it one). Feel free to use an existing piece of music. The only requirement is that the end result has to sound good. Simply dragging a classical or pop MIDI file into the DAW is likely to sound terrible unless you put some thought into your instrument choices. If you do want to create something original, try these compositional prompts.

Concepts:

  • MIDI recording and editing
  • Quantization, swing, and grooves
  • “Real” vs “fake” instruments
  • Synthesized vs sampled sounds
  • Drum programming
  • Interfaces and controllers

Hints:

  • For beginners, see this post on beatmaking fundamentals.
  • Realism is unattainable. Embrace the fakeness.
  • Find a small segment of a classical piece and loop it.
  • Rather than playing back a Bach keyboard piece on piano or harpsichord, set your instrument to drums or percussion, and get ready for joy.

Montclair State Music Tech 101

Found sound

Assignment: Record a short environmental sound and incorporate it into a piece of music. You can edit and process your found sound as you see fit. Variation: use existing sounds from Freesound.

Concepts:

  • Audio recording, editing, and effects
  • The musical potential of “non-musical” sounds

Hints:

  • Students usually record their sounds with their phones, and the resulting recording quality is usually bad. Try using EQ, compression, delay, reverb, distortion, and other effects to mitigate or enhance poor sound quality and background noise.

pyt stems

Peer remix

Assignment: Remix a track by one of your classmates (or friends, or a stranger on the internet). Feel free to incorporate other pieces of music as well. Follow your personal definition of the word “remix.” That might mean small edits and adjustments to the mix and effects, or a radical reworking leading to complete transformation of the source material.

There are endless variations on the peer remix. Try the “metaremix,” where students remix each other’s remixes, to the nth degree as time permits. Also, do group remix activities like Musical Shares or FX Roulette.

Concepts:

  • Collaboration and authorship
  • Sampling
  • Mashups
  • Evolution of musical ideas
  • Musical critique using musical language

Hints:

  • A change in tempo can have dramatic effects on the mood and feel of a track.
  • Adding sounds is the obvious move, but don’t be afraid to remove things too.

Self remix

Assignment: Remix one of your own projects, using the same guidelines as the peer remix. This is a good project for the end of the semester/term.

Song transformation

Assignment: Take an existing song and turn it into a new song. Don’t use any additional sounds or MIDI.

Concepts:

  • Advanced audio editing and effects
  • Musical form and structure
  • The nature of originality

Hints:

  • You can transform short segments simply by repeating them out of context. For example, try taking single chords or lyrical phrases and looping them.

Serato

Shared sample

Assignment: Take a short audio sample (five seconds or less) and build a complete piece of music out of it. Do not use any other sounds. This is the most difficult assignment here, and the most rewarding one if you can pull it off successfully.

Concepts:

  • Advanced audio editing and effects
  • Musical form and structure
  • The nature of originality

Hints:

  • Pitch shifting and timestretching are your friends.
  • Short bursts of noise can be tuned up and down to make drums.
  • Extreme timestretching produces great ambient textures.

Mobile music at IMPACT

Writing assignments

I like to have students document their process in blog posts. I ask: What sounds and techniques did you use? Why did you use them? Are you happy with the end result? Given unlimited time and expertise, what changes would you make? Do you consider this to be a valid form of musical creativity?

This semester I also asked students to write reviews of each other’s work in the style of their preferred music publication. In the future, I plan to have students write a review of an imaginary track, and then assign other students to try to create the track being described.

The best way to learn how to produce good recordings is to do critical listening exercises. I assign students to create musical structure and space graphs in the spirit of William Moylan.

Further challenges

The projects above were intended to be used for a one-semester college class. If I were teaching over a longer time span or I needed more assignments, I would draw from the Disquiet Junto, Making Music by Dennis DeSantis, or the Oblique Strategies cards. Let me know in the comments if you have additional recommendations.

Milo meets Beethoven

For his birthday, Milo got a book called Welcome to the Symphony by Carolyn Sloan. We finally got around to showing it to him recently, and now he’s totally obsessed.

Welcome To The Symphony by Carolyn Sloan

The book has buttons along the side which you can press to hear little audio samples. They include each orchestra instrument playing a short Beethoven riff. All of the string instruments play the same “bum-bum-bum-BUMMM” so you can compare the sounds easily. All the winds play a different little phrase, and the brass another. The book itself is fine and all, but the thing that really hooked Milo is triggering the riffs one after another, Ableton-style, and singing merrily along.

Milo got primed to enjoy this book by two coincidental things. One is that in his preschool, they’ve been listening to Peter and the Wolf a lot, dancing to it, acting it out, etc. They use a YouTube video that shows both the story and the instruments side by side, so Milo has very clear ideas of what the oboe, clarinet and so on all look and sound like. When he saw them in the orchestra book, he recognized them all immediately.

The other thing is this weird computer animated cartoon called Taratabong, which is about anthropomorphic musical instruments. Milo has been watching it on YouTube a bunch, to the point of wanting me to pretend to be different characters and “talk” to him (which is an entertaining challenge for me–how do you have a conversation as a snare drum?) So Milo also recognizes different instruments in the orchestra book as Taratabong characters.

Milo has now voluntarily watched a YouTube video of the entire first movement of Beethoven’s Fifth conducted by Leonard Bernstein, several times. That’s like nine minutes of classical music, which for a three-year-old is equivalent to nine hours. He sings along with all the riffs he recognizes, announces each instrument as he sees it, and tells me about how Leonard Bernstein is Grandfather from Peter and the Wolf. I want to emphasize that we haven’t pushed him into any of this. If you read this blog, you know that I’m an outspoken anti-fan of Beethoven. We just put this stuff under Milo’s nose, and if he hadn’t been interested, we wouldn’t have pushed it.

The classical music tribe expresses continual anguish about how hard it is to draw people into the music. Having inadvertently created a budding Beethoven lover, I have a few insights to offer. Milo got connected to the music through multiple media simultaneously, in multiple settings. He was exposed initially in the context of stories about animals and cartoon characters. That exposure happened in the context of acting and dancing, not passive sitting or being lectured to. And when he did start listening, it was via playback devices that he controls completely: YouTube Kids on the iPad, and the buttons on the book.

Of all these different music experiences, the Ableton-like sample triggering is the one that has most seized Milo’s enthusiasm. Sometimes he wants to read the book and play the sounds when the text indicates. Sometimes he wants to systematically listen through each sound, singing along and acting out the instruments. Sometimes he just jams out, playing the excerpts in different orders and in different rhythms. I suspect he’d be even happier if he could get the sounds to loop. He wants to sing along, but the little phrases are half over before he can even get oriented. If the phrases looped in a musical-sounding way, I bet he would dig in much deeper.

This is not Milo’s first experience triggering sample playback. Before he even turned two, we spent a lot of time playing around with an APC40.

APC40

Milo adored the lights and colors, and instantly grasped how the volume faders work. In general, though, the APC experience was too complicated for him. It was too easy to make it stop working, to lose the connection between button pushes and the music changing, and to generally get lost in the interface. (I have some of those same problems!) The orchestra book has the advantage of being vastly simpler and more predictable.

There’s a page in the book that shows Beethoven with a quill pen, writing the music. (Milo is continually disappointed not to see Beethoven himself in any of the performance videos.) Interestingly, Milo has started using the phrase “writing music” as a synonym for “playing music,” whether from an instrument or from iTunes. He seems not to know or care about the distinction between playing back pre-recorded music and creating new music. This conflation of writing and playing was likely helped by the time Milo has spent with the aQWERTYon, an interface developed by the NYU MusEDLab for performing music on the computer keyboard.

aQWERTYon screencap

Milo isn’t especially interested in the musical aspect of the aQWERTYon. He calls it “ABCs” and mostly uses it to type his favorite letters. He also enjoys singing the alphabet song while playing along semi-randomly.
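For readers who haven’t tried it, the core idea of the aQWERTYon is mapping rows of letter keys to scale degrees, so that every key you can press is in the scale. Here’s a toy Python sketch of one way such a mapping could work; the key row, scale, and root note are my assumptions here, not the lab’s actual implementation.

```python
# A toy sketch of mapping a QWERTY row to scale degrees.
# Assumptions: the home row, C major, and middle C as the root.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets from the root

def row_to_midi(keys, root=60):
    """Assign each key the next scale degree, wrapping up an octave."""
    mapping = {}
    for i, key in enumerate(keys):
        octave, degree = divmod(i, len(C_MAJOR))
        mapping[key] = root + 12 * octave + C_MAJOR[degree]
    return mapping

print(row_to_midi("asdfghjkl"))
# {'a': 60, 's': 62, 'd': 64, 'f': 65, 'g': 67, 'h': 69, 'j': 71, 'k': 72, 'l': 74}
```

The appealing property is that there are no wrong notes: whatever a kid (or a novice of any age) mashes, it stays in key.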

The MusEDLab’s work is motivated by the fact that computers make it enormously easier for total novices to participate actively in music. If Beethoven symphonies can be played with as toys, participated in as games, and connected to meaningful stories and activities, then it’s inevitable that kids are going to want to get involved. If I had experienced Beethoven as raw material for my own expression, I’d probably feel quite differently about him.

Teaching reflections

Here’s what happened in my life as an educator this past semester, and what I have planned for the coming semester.

Montclair State University Intro To Music Technology

I wonder how much longer “music technology” is going to exist as a subject. They don’t teach “piano technology” or “violin technology.” It makes sense to teach specific areas like audio recording or synthesis or signal theory as separate classes. But “music technology” is such a broad term as to be meaningless. The unspoken assumption is that we’re teaching “musical practices involving a computer,” but even that is both too big and too small to structure a one-semester class around. On the one hand, every kind of music involves computers now. On the other hand, to focus just on the computer part is like teaching a word processing class that’s somehow separate from learning how to write.

MSU Intro to Music Tech

The newness and vagueness of the field give me and my fellow music tech educators wide latitude to define our subject matter. I see my job as providing an introduction to pop production and songwriting. The tools we use at Montclair are mostly GarageBand and Logic, but I don’t spend a lot of time on the mechanics of the software itself. Instead, I teach music: How do you express yourself creatively using sample libraries, or MIDI, or field recordings, or pre-existing songs? What kinds of rhythms, harmonies, timbres and structures make sense aesthetically when you’re assembling these materials in the DAW? Where do you get ideas? How do you listen to recorded music analytically? Why does Thriller sound so much better than any other album recorded in the eighties? We cover technical concepts as they arise in the natural course of producing and listening. My hope is that they’ll be more relevant and memorable that way.

Having now taught three semesters of Intro to Music Tech at MSU, my format is starting to gel. The students spend most of the semester creating tracks. They do one using only the loops that come with GarageBand, one using only MIDI and software instruments, one that includes a field recording they made with their phones, and so on. I started having them remix each other’s tracks this past semester, and it was such a smash hit that I’m going to have future classes do a whole series of peer remixes.

Montclair is a fairly traditional conservatory. For many students, my class is the only time in their college careers they get to make music according to their own sensibilities and tastes. It’s also usually the only time they engage critically with recordings, or electronic dance music, or hip-hop, or pop song forms, or sampling, or mixing and audio processing. I’m glad to be able to fill these vacuums, but I wish I had more than one semester to do it in.

Aside from creative music-making, the students do a couple of presentations, one on a song they think is interesting, and one on a topic of their choice. They also write blog posts about the process of creating their tracks. This last assignment is a persistent obstacle, since no one seems to share my enthusiasm for process documentation. Next semester I’m going to try introducing some of the cooperative/competitive spirit of the peer remixes by having them write reviews of each other’s tracks. Maybe that will get them to invest their writing with the same creativity they put into the music assignments.

Montclair State Advanced Computer Music Composition

This past fall I got to teach my first advanced class, and it went amazingly well. We used Ableton Live, my DAW of choice, and the guys (it was all guys) banged out tracks at a rapid clip for the entire semester. As with the intro class, I spent most of the time on the creative process, and dealt with Ableton functionality and audio engineering topics as they came up.

Tristan gets his FFT on

Each assignment came with some kind of tight technical restriction, but no stylistic restrictions. As with the intro class, the advanced dudes did tracks using only existing loops, only MIDI, and found sound. They did peer remixing and self remixing as well. The two hardest and most interesting assignments were to create a new track using only samples of an existing track, and to create a new track using only a single five-second Duke Ellington sample. (These assignments were heavily inspired by the Disquiet Junto.) The more tightly I constrained the students, the more ingenuity they displayed. Listen for yourself:

As with the intro class, I tried to have the advanced dudes document their process with blog posts. As with the intro class, they showed zero interest. In the future, I’ll have to get more creative with the writing component. Also, I’d like to not have the class be entirely male.
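If you’re curious what the single-sample constraint looks like mechanically, the basic move is chopping the excerpt into slices and rearranging them. Here’s a rough Python sketch using pydub; the file name, slice count, and the reverse-every-other-slice gimmick are all invented for illustration, not what any student actually did.

```python
# A rough sketch of the "one five-second sample" constraint:
# chop the excerpt into equal slices and rearrange them.
# "ellington_5s.wav" and the slice count are placeholder assumptions.
import random
from pydub import AudioSegment

sample = AudioSegment.from_file("ellington_5s.wav")

SLICES = 8
step = len(sample) // SLICES
slices = [sample[i * step:(i + 1) * step] for i in range(SLICES)]

random.seed(1)                      # repeatable shuffle
order = random.sample(range(SLICES), SLICES)

track = AudioSegment.empty()
for i in order:
    piece = slices[i]
    if i % 2:                       # reverse every other source slice
        piece = piece.reverse()
    track += piece

track.export("ellington_rework.wav", format="wav")
```

Everything in the result is still “only Ellington,” which is the point of the constraint: the ingenuity has to come from arrangement and processing rather than from new material.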

NYU Music Education Technology Practicum

This class is meant to be a grounding in music tech for future music teachers. I’m even more time-constrained at NYU than at Montclair, and I teach in a regular classroom rather than a computer lab. While my class time at Montclair is mostly devoted to music-making, at NYU I’m forced to do more lectures, demos and listening sessions. It is very far from ideal. I have no idea how NYU can charge so much tuition and not offer its music students so basic an amenity as a room with computers in it. However, NYU does have one advantage over Montclair as a teaching environment: I can hold a couple of class sessions in an extremely fancy recording studio.

Catherine and Joseph in the Dolan Studio

I mostly take the same approach at NYU as I do at Montclair, and use most of the same assignments. The major difference is that the NYU kids do a critical listening project, where they pick a recording and graph out its musical structure and spatial layout. It’s a difficult exercise, but an invaluable one. I did it in grad school, and it improved my analytical listening abilities significantly. We used to do the same assignment at Montclair, but the students were so resistant, to the point of outright refusal, that we sadly had to drop it from the syllabus. I hope we can find a way to reinstate it.

This past semester, the majority of my NYU kids were music business majors, which was pretty great. They came in with less musical experience than the education majors, sometimes with none at all, but they had less to unlearn, and they threw themselves confidently into producing tracks. This coming semester I have a bunch more music business kids. I’m attracting them because my class is the only one at Steinhardt that does intro-level creative music-making in the pop idiom. I’m clearly filling a vacuum, and I’m hoping that I’m just the thin edge of the wedge, both for my own sake and for the sake of NYU’s future music educators.

Interface designs

The NYU Music Experience Design Lab is baking education into a suite of creative music making and learning tools. As my friend and colleague Adam Bell likes to say, purchasers of a computer are purchasing a music education. We’re trying to make that education a better and more enjoyable one, whether our users are in formal classroom settings or playing around on their own. You can read about the lab’s various projects here. My own contributions are largely conceptual, though I’ve also devoted a lot of attention to making useful and inspiring presets.

Cold Sweat on the Groove Pizza

The Ed Sullivan Fellows Program

This winter, the MusEDLab is launching a brand-new initiative: mentoring a group of young people from challenging circumstances as they learn music and technology. I’ll be teaching the music side, using a custom-tailored version of my intro class syllabus. Sullivan Fellows will also work with my colleagues in the lab on programming and design projects. This summer, we’ll have a showcase event as part of the 2016 IMPACT Conference. The goal is to help the Fellows launch careers in music and/or technology. I’ll be writing a lot more about this in the coming weeks.

Online courses with Soundfly

The MusEDLab is working with Soundfly, a music ed startup, on some new interactive online courses. The first is called Music Theory For Bedroom Producers, and we expect it to launch next month. I wrote a lot of the materials, and I appear in some of the videos. Soundfly has ace designers, animators and programmers, so expect a rich multimedia experience. More on this as the launch gets closer.

Everything else

For the past few years, I’ve been a teaching artist with NYU’s IMPACT workshop. Below, you can see some participants making beats on an iPad. The workshop is a crash course not just in music, but in theater, dance, video, and the intersection of all of the above. I’m still very much figuring out my role in the whole thing, but so is everyone involved.

Mobile music at IMPACT

I continue to teach private lessons, do freelance production and composition, do some consulting, write for online publications, and generally keep hustling for gigs. If you’d like to have me do any of these things, be in touch.