Project-based music technology teaching

I use a project-based approach to teaching music technology. Technical concepts stick with you better if you learn them in the course of making actual music. Here’s the list of projects I assign to my college classes and private students. I’ve arranged them from easiest to hardest. The first five projects are suitable for a beginner-level class using any DAW–my beginners use GarageBand. The last two projects are more advanced and require a DAW with sophisticated editing tools and effects, like Ableton Live. If you’re a teacher, feel free to use these (and let me know if you do). Same goes for all you bedroom producers and self-teachers.

The projects are agnostic as to musical content, style or genre. However, the computer is best suited to making electronic music, and most of these projects work best in the pop/hip-hop/techno sphere. Experimental, ambient or film music approaches also work well. Many of them draw on the Disquiet Junto. Enjoy.

Tristan gets his FFT on

Loops

Assignment: Create a song using only existing loops. You can use these or these, or restrict yourself to the loops included with your DAW. Do not use any additional sounds or instruments.

For beginners, I like to split this into two separate assignments. First, create a short (two or four bar) phrase using four to six instrument loops and beats. Then use that set of loops as the basis of a full-length track, by repeating, and by having sounds enter and exit.

Concepts:

  • Basic DAW functions
  • Listening like a producer
  • Musical form and song structures
  • Intellectual property, copyright and authorship

Hints:

  • MIDI loops are easier to edit and customize than audio loops.
  • Try slicing audio loops into smaller segments. Use only the front or back half of the loop. Or rearrange segments into a different order.

final song

MIDI

Assignment: Create a piece of music using MIDI and software instruments. Do not record or import any audio. You can use MIDI from any source, including: playing keyboards, drum pads or other interfaces; drawing in the piano roll; importing scores from notation programs; downloading MIDI files from the internet (for example, from here); or using the Audio To MIDI function in your DAW. 

I don’t treat this as a composition exercise (unless students want to make it one.) Feel free to use an existing piece of music. The only requirement is that the end result has to sound good. Simply dragging a classical or pop MIDI into the DAW is likely to sound terrible unless you put some thought into your instrument choices. If you do want to create something original, try these compositional prompts.

Concepts:

  • MIDI recording and editing
  • Quantization, swing, and grooves
  • “Real” vs “fake” instruments
  • Synthesized vs sampled sounds
  • Drum programming
  • Interfaces and controllers

Hints:

  • For beginners, see this post on beatmaking fundamentals.
  • Realism is unattainable. Embrace the fakeness.
  • Find a small segment of a classical piece and loop it.
  • Rather than playing back a Bach keyboard piece on piano or harpsichord, set your instrument to drums or percussion, and get ready for joy.

Montclair State Music Tech 101

Found sound

Assignment: Record a short environmental sound and incorporate it into a piece of music. You can edit and process your found sound as you see fit. Variation: use existing sounds from Freesound.

Concepts:

  • Audio recording, editing, and effects
  • The musical potential of “non-musical” sounds

Hints:

  • Students usually record their sounds with their phones, and the resulting recording quality is usually bad. Try using EQ, compression, delay, reverb, distortion, and other effects to mitigate or enhance poor sound quality and background noise.

pyt stems

Peer remix

Assignment: Remix a track by one of your classmates (or friends, or a stranger on the internet.) Feel free to incorporate other pieces of music as well. Follow your personal definition of the word “remix.” That might mean small edits and adjustments to the mix and effects, or a radical reworking leading to complete transformation of the source material.

There are endless variations on the peer remix. Try the “metaremix,” where students remix each other’s remixes, to the nth degree as time permits. Also, do group remix activities like Musical Shares or FX Roulette.

Concepts:

  • Collaboration and authorship
  • Sampling
  • Mashups
  • Evolution of musical ideas
  • Musical critique using musical language

Hints:

  • A change in tempo can have dramatic effects on the mood and feel of a track.
  • Adding sounds is the obvious move, but don’t be afraid to remove things too.

Self remix

Assignment: Remix one of your own projects, using the same guidelines as the peer remix. This is a good project for the end of the semester/term.

Song transformation

Assignment: Take an existing song and turn it into a new song. Don’t use any additional sounds or MIDI.

Concepts:

  • Advanced audio editing and effects
  • Musical form and structure
  • The nature of originality

Hints:

  • You can transform short segments simply by repeating them out of context. For example, try taking single chords or lyrical phrases and looping them.

Serato

Shared sample

Assignment: Take a short audio sample (five seconds or less) and build a complete piece of music out of it. Do not use any other sounds. This is the most difficult assignment here, and the most rewarding one if you can pull it off successfully.

Concepts:

  • Advanced audio editing and effects
  • Musical form and structure
  • The nature of originality

Hints:

  • Pitch shifting and timestretching are your friends.
  • Short bursts of noise can be tuned up and down to make drums.
  • Extreme timestretching produces great ambient textures.

Mobile music at IMPACT

Writing assignments

I like to have students document their process in blog posts. I ask: What sounds and techniques did you use? Why did you use them? Are you happy with the end result? Given unlimited time and expertise, what changes would you make? Do you consider this to be a valid form of musical creativity?

This semester I also asked students to write reviews of each other’s work in the style of their preferred music publication. In the future, I plan to have students write a review of an imaginary track, and then assign other students to try to create the track being described.

The best way to learn how to produce good recordings is to do critical listening exercises. I assign students to create musical structure and space graphs in the spirit of William Moylan.

Further challenges

The projects above were intended to be used for a one-semester college class. If I were teaching over a longer time span or I needed more assignments, I would draw from the Disquiet Junto, Making Music by Dennis DeSantis, or the Oblique Strategies cards. Let me know in the comments if you have additional recommendations.

The evolution of the Groove Pizza

The Groove Pizza is a playful tool for creating grooves using math concepts like shapes, angles, and patterns. Here’s a beat I made just now. Try it yourself!

 
This post explains how and why we designed the Groove Pizza.

What it does

The Groove Pizza represents beats as concentric rhythm necklaces. The circle represents one measure. Each slice of the pizza is a sixteenth note. The outermost ring controls the kick drum; the middle one controls the snare; and the innermost one plays cymbals.
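
Under the hood, the grid is a very small data structure. Here is a minimal sketch of that representation in Python; it is my own illustration of the description above, not the Groove Pizza’s actual code.

```python
# Sketch of the Groove Pizza grid as data (illustrative only, not the app's source).
# Each ring is a list of booleans, one per slice; True means that drum hits on that slice.

SLICES = 16  # one measure of sixteenth notes

pizza = {
    "kick":   [False] * SLICES,  # outermost ring
    "snare":  [False] * SLICES,  # middle ring
    "cymbal": [False] * SLICES,  # innermost ring
}

# A basic rock beat: kick on beats 1 and 3, snare on 2 and 4, cymbals on every eighth note.
for i in (0, 8):
    pizza["kick"][i] = True
for i in (4, 12):
    pizza["snare"][i] = True
for i in range(0, SLICES, 2):
    pizza["cymbal"][i] = True

def sounding(slice_index):
    """Return the instruments that sound on a given slice during playback."""
    return [name for name, ring in pizza.items() if ring[slice_index % SLICES]]

print(sounding(0))  # ['kick', 'cymbal']
print(sounding(4))  # ['snare', 'cymbal']
```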

Connecting the dots on a given ring creates shapes, like the square formed by the snare drum in the pattern below.

Groove Pizza - jazz swing

The pizza can play time signatures other than 4/4 by changing the number of slices. Here’s a twelve-slice pizza playing an African bell pattern.

Groove Pizza - Bembe

You can explore the geometry of musical rhythm by dragging shapes onto the circular grid. Patterns that are visually appealing tend to sound good, and patterns that sound good tend to look cool.

Groove Pizza - shapes

Herbie Hancock did some user testing for us, and he suggested that we make it possible to show the interior angles of the shapes.

Groove Pizza - angles

Groove Pizza History

The ideas behind the Groove Pizza began in my master’s thesis work in 2013 at NYU. For his NYU senior thesis, Adam November built web and physical prototypes. In late summer 2015, Adam wrote what would become the Groove Pizza 1.0 (GP1), with a library of drum patterns that he and I curated. The MusEDLab has been user-testing this version for the past year, both with kids and with music and math educators in New York City.

In January 2016, the Music Experience Design Lab began developing the Groove Pizza 2.0 (GP2) as part of the MathScienceMusic initiative.

MathScienceMusic Groove Pizza Credits:

  • Original Ideas: Ethan Hein, Adam November & Alex Ruthmann
  • Design: Diana Castro
  • Software Architect: Kevin Irlen
  • Creative Code Guru: Matthew Kaney
  • Backend Code Guru: Seth Hillinger
  • Play Testing: Marijke Jorritsma, Angela Lau, Harshini Karunaratne, Matt McLean
  • Odds & Ends: Asyrique Thevendran, Jamie Ehrenfeld, Jason Sigal

The learning opportunity

The goals of the Groove Pizza are to help novice drummers and drum programmers get started; to offer a gentler on-ramp to beatmaking than complex tools like Logic or Ableton Live; and to use music to open windows into math and geometry. The Groove Pizza is intended to be simple enough to be learned easily without prior experience or formal training, but it must also have sufficient depth to teach substantial and transferable skills and concepts, including:

  • Familiarity with the component instruments in a drum beat and the ability to pick them individually out of the sound mass.
  • A repertoire of standard patterns and rhythmic motifs. Understanding of where to place the kick, snare, hi-hats and so on to produce satisfying beats.
  • Awareness of different genres and styles and how they are distinguished by their different degrees of syncopation, customary kick drum patterns and claves, tempo ranges and so on.
  • An intuitive understanding of the difference between strong and weak beats and the emotional effect of syncopation.
  • Acquaintance with the concept of hemiola and other more complex rhythmic devices.

Marshall (2010) recommends “folding musical analysis into musical experience.” Programming drums in pop and dance idioms makes the rhythmic abstractions concrete.

Visualizing rhythm

Western music notation is fairly intuitive on the pitch axis, where height on the staff corresponds clearly to pitch height. On the time axis, however, Western notation is less easily parsed—horizontal space need not have any bearing at all on time values. A popular alternative is the “time-unit box system,” a kind of rhythm tablature used by ethnomusicologists. In a time-unit box system, each pulse is represented by a square. Rhythmic onsets are shown as filled boxes.

Clave patterns in TUBS

Nearly all electronic music production interfaces use the time-unit box system, including grid sequencers and the MIDI piano roll.

Ableton TUBS

A row of time-unit boxes can also be wrapped in a circle to form a rhythm necklace. The Groove Pizza is simply a set of rhythm necklaces arranged concentrically.
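
The relationship between the two representations is easy to express in code. The sketch below is my own illustration (not taken from the Groove Pizza or any of the other tools mentioned): it renders a rhythm as time-unit boxes, then wraps the same rhythm onto a circle by converting each pulse into an angle.

```python
def tubs(onsets, pulses):
    """Render a rhythm as time-unit boxes: filled box = onset, empty box = rest."""
    return "".join("x" if i in onsets else "." for i in range(pulses))

def necklace_angles(onsets, pulses):
    """Wrap the same rhythm onto a circle: each onset becomes an angle in degrees,
    measured clockwise from twelve o'clock as on the Groove Pizza."""
    return [360 * i / pulses for i in sorted(onsets)]

son_clave_32 = {0, 3, 6, 10, 12}          # 3-2 son clave on a sixteen-pulse grid
print(tubs(son_clave_32, 16))             # x..x..x...x.x...
print(necklace_angles(son_clave_32, 16))  # [0.0, 67.5, 135.0, 225.0, 270.0]
```

Nothing about the rhythm changes when it is wrapped; only the geometry does, which is exactly what makes the nonadjacent relationships discussed below easier to see.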

Circular rhythm visualization offers a significant advantage over linear notation: it more clearly shows metrical function. We can define meter as “the grouping of perceived beats or pulses into equivalence classes” (Forth, Wiggins & McLean, 2010, 521). Linear musical concepts like small-scale melodies depend mostly on relationships between adjacent events, or at least closely spaced events. But periodicity and meter depend on relationships between nonadjacent events. Linear representations of music do not show meter directly. Nothing on the page indicates that the first and third beats of a measure of 4/4 time are functionally related, as are the second and fourth beats.

However, when we wrap the musical timeline into a circle, meter becomes much easier to parse. Pairs of metrically related beats are directly opposite one another on the circle. Rotational and reflectional symmetries give strong clues to metrical function generally. For example, this illustration of 2-3 son clave adapted from Barth (2011) shows an axis of reflective symmetry between the fourth and twelfth beats of the pattern. This symmetry is considerably less obvious when viewed in more conventional notation.

Son clave symmetry
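
That symmetry claim is easy to verify computationally. In the sketch below, slices are numbered from zero, so the “fourth and twelfth beats” become slices 3 and 11; the indexing of the clave pattern is my assumption, not taken from Barth.

```python
def reflect(onsets, axis_slice, pulses=16):
    """Reflect a rhythm necklace across the axis through axis_slice and the slice
    directly opposite it on the circle."""
    return {(2 * axis_slice - i) % pulses for i in onsets}

# 2-3 son clave on a sixteen-slice pizza, slices numbered from zero.
son_clave_23 = {2, 4, 8, 11, 14}

# Reflection across the axis through slices 3 and 11 maps the pattern onto itself.
print(reflect(son_clave_23, axis_slice=3) == son_clave_23)  # True

# Rotating by half a measure, by contrast, turns it into 3-2 son clave.
print(sorted((i + 8) % 16 for i in son_clave_23))  # [0, 3, 6, 10, 12]
```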

The Groove Pizza adds a layer of dynamic interaction to circular representation. Users can change time signatures during playback by adding or removing slices. In this way, very complex metrical shifts can be performed by complete novices. Furthermore, each rhythm necklace can be rotated during playback, enabling a rhythmic modularity characteristic of the most sophisticated Afro-Latin and jazz rhythms. Rotational rhythmic transformation normally demands sophisticated music-reading and performance skills to understand and execute, but it is effortlessly accessible to Groove Pizza users.

Visualizing swing

We traditionally associate swing with jazz, but it is omnipresent in American vernacular music: in rock, country, funk, reggae, hip-hop, EDM, and so on. For that reason, swing is a standard feature of notation software, MIDI sequencers, and drum machines. However, while swing is crucial to rhythmic expressiveness, it is rarely visualized in any explicit way, in notation or in software interfaces. Sequencers will sometimes show swing by displacing events on the MIDI piano roll, but the user must place those events first. The grid itself generally does not show swing.

The Groove Pizza uses a novel (and to our knowledge unprecedented) graphical representation of swing on the background grid, not just on the musical events. The slices alternately expand and contract in width according to the amount of swing specified. At 0% swing, the wedges are all of uniform width. At 50% swing, the odd-numbered slice in each pair is twice as long as the following even-numbered slice. As the user adjusts the swing slider, the slices dynamically change their width accordingly.

Straight 16ths vs swing 16ths
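
The slice geometry is simple to compute. In the sketch below, the mapping from the swing percentage to the long/short ratio is my assumption (linear, reaching 2:1 at 50%); the Groove Pizza’s actual curve may differ.

```python
def slice_widths(swing, pulses=16):
    """Angular width in degrees of each slice for a given swing amount (0.0 to 0.5).

    Assumes a linear mapping: at swing 0.0 all slices are equal, and at swing 0.5
    the first slice of each pair is twice as wide as the one that follows it.
    """
    ratio = 1 + 2 * swing             # long/short ratio: 1:1 at 0%, 2:1 at 50%
    pair = 2 * 360 / pulses           # combined width of each long+short pair
    long_width = pair * ratio / (ratio + 1)
    return [long_width if i % 2 == 0 else pair - long_width for i in range(pulses)]

print(slice_widths(0.0)[:4])  # [22.5, 22.5, 22.5, 22.5] -- straight sixteenths
print(slice_widths(0.5)[:4])  # [30.0, 15.0, 30.0, 15.0] -- full swing
```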

Our swing visualization system also addresses the issue of whether swing should be applied to eighth notes or sixteenths. In the jazz era, swing was understood to apply to eighth notes. However, since the 1960s, swing is more commonly applied to sixteenth notes, reflecting a broader shift from eighth note to sixteenth note pulse in American vernacular music. To hear the difference, compare the swung eighth note pulse of “Rockin’ Robin” by Bobby Day (1958) with the sixteenth note pulse of “I Want You Back” by the Jackson Five (1969). Electronic music production tools like Ableton Live and Logic default to sixteenth-note swing. However, notation programs like Sibelius, Finale and Noteflight can only apply swing to eighth notes.

The Groove Pizza supports both eighth and sixteenth swing simply by changing the slice labeling. The default labeling scheme is agnostic, simply numbering the slices sequentially from one. In GP1, users can choose to label a sixteen-slice pizza either as one measure of sixteenth notes or two measures of eighth notes. The grid looks the same either way; only the labels change.

Drum kits

With one drum sound per ring, the number of sounds available to the user is limited by the number of rings that can reasonably fit on the screen. In my thesis prototype, we were able to accommodate six sounds per “drum kit.” GP1 was reduced to five rings, and GP2 has only three rings, prioritizing simplicity over musical versatility.

GP1 offers three drum kits: Acoustic, Hip-Hop, and Techno. The Acoustic kit uses samples of a real drum kit; the Hip-Hop kit uses samples of the Roland TR-808 drum machine; and the Techno kit uses samples of the Roland TR-909. GP2 adds two additional kits: Jazz (an acoustic drum kit played with brushes), and Afro-Latin (congas, bell, and shaker.) Preset patterns automatically load with specific kits selected, but the user is free to change kits after loading.

In GP1, sounds can be mixed and matched at will, so the user can, for example, combine the acoustic kick with the hip-hop snare. In GP2, kits cannot be customized. A wider variety of sounds would offer more sonic possibilities. However, placing strict limits on the sounds available has its own creative advantage: it eliminates option paralysis and forces users to concentrate on creating interesting patterns, rather than struggling to choose from a long list of sounds.

It became clear in the course of testing that open and closed hi-hats do not need separate rings, since it is never desirable to have them sound at the same time. (While drum machines are not bound by the physical limitations of human drummers, our rhythmic traditions are.) In future versions of the GP, we plan to place closed and open hi-hats together on the same ring. Clicking a beat in the hi-hat ring will place a closed hi-hat; clicking it again will replace it with an open hi-hat; and a third click will return the beat to silence. We will use the same mechanic to toggle between high and low cowbells or congas.
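
In code, the planned click behavior is just a three-state cycle. A tiny sketch of the mechanic, not the app’s implementation:

```python
HIHAT_STATES = ["silent", "closed", "open"]

def click(state):
    """Advance one step through the silent -> closed -> open -> silent cycle."""
    return HIHAT_STATES[(HIHAT_STATES.index(state) + 1) % len(HIHAT_STATES)]

state = "silent"
for _ in range(4):
    state = click(state)
    print(state)  # closed, open, silent, closed
```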

Preset patterns

In keeping with the constructivist value of working with authentic cultural materials, the exercises in the Groove Pizza are based on rhythms drawn from actual music. Most of the patterns are breakbeats—drums and percussion sampled from funk, rock and soul recordings that have been widely repurposed in electronic dance and hip-hop music. There are also generic rock, pop and dance rhythms, as well as an assortment of traditional Afro-Cuban patterns.

The GP1 offers a broad selection of preset patterns. The GP2 uses a smaller subset of these presets.

Breakbeats

  • The Winstons, “Amen, Brother” (1969)
  • James Brown, “Cold Sweat” (1967)
  • James Brown, “The Funky Drummer” (1970)
  • Bobby Byrd, “I Know You Got Soul” (1971)
  • The Honeydrippers, “Impeach The President” (1973)
  • Skull Snaps, “It’s A New Day” (1973)
  • Joe Tex, “Papa Was Too” (1966)
  • Stevie Wonder, “Superstition” (1972)
  • Melvin Bliss, “Synthetic Substitution” (1973)

Afro-Cuban

  • Bembé—also known as the “standard bell pattern”
  • Rumba clave
  • Son clave (3-2)
  • Son clave (2-3)

Pop

  • Michael Jackson, “Billie Jean” (1982)
  • Boots-n-cats—a prototypical disco pattern, e.g. “Funkytown” by Lipps Inc (1979)
  • INXS, “Need You Tonight” (1987)
  • Uhnntsss—the standard “four on the floor” pattern common to disco and electronic dance music

Hip-hop

  • Lil Mama, “Lip Gloss” (2008)
  • Nas, “Nas Is Like” (1999)
  • Digable Planets, “Rebirth Of Slick (Cool Like Dat)” (1993)
  • OutKast, “So Fresh, So Clean” (2000)
  • Audio Two, “Top Billin’” (1987)

Rock

  • Pink Floyd, “Money” (1973)
  • Peter Gabriel, “Solsbury Hill” (1977)
  • Billy Squier, “The Big Beat” (1980)
  • Aerosmith, “Walk This Way” (1975)
  • Queen, “We Will Rock You” (1977)
  • Led Zeppelin, “When The Levee Breaks” (1971)

Jazz

  • Bossa nova, e.g. “The Girl From Ipanema” by Antônio Carlos Jobim (1964)
  • Herbie Hancock, “Chameleon” (1973)
  • Miles Davis, “It’s About That Time” (1969)
  • Jazz spang-a-lang—the standard swing ride cymbal pattern
  • Jazz waltz—e.g. “My Favorite Things” as performed by John Coltrane (1961)
  • Dizzy Gillespie, “Manteca” (1947)
  • Horace Silver, “Song For My Father” (1965)
  • Paul Desmond, “Take Five” (1959)
  • Herbie Hancock, “Watermelon Man” (1973)

Mathematical applications

The most substantial new feature of GP2 is “shapes mode.” The user can drag shapes onto the grid and rotate them to create geometric drum patterns: triangle, square, pentagon, hexagon, and octagon. Placing shapes in this way creates maximally even rhythms that are nearly always musically satisfying (Toussaint 2011). For example, on a sixteen-slice pizza, the pentagon forms rumba or bossa nova clave, while the hexagon creates a tresillo rhythm. As a general matter, the way that a rhythm “looks” gives insight into the way it sounds, and vice versa.
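
The Groove Pizza arrives at maximal evenness by snapping regular polygons to the grid, but the same family of patterns can be generated with the Euclidean rhythm construction Toussaint describes. The sketch below is one common formulation, not the app’s code; it yields a rotation of the same necklace that the Bjorklund algorithm produces.

```python
def euclidean_rhythm(onsets, pulses):
    """Spread `onsets` hits as evenly as possible across `pulses` slices."""
    return [1 if (i * onsets) % pulses < onsets else 0 for i in range(pulses)]

# Five onsets in sixteen slices: the "pentagon," a rotation of the bossa nova clave.
print(euclidean_rhythm(5, 16))  # [1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]

# Six onsets in sixteen slices: the "hexagon," the tresillo (3+3+2) pattern doubled.
print(euclidean_rhythm(6, 16))  # [1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```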

Because of the way it uses circle geometry, the Groove Pizza can be used to teach or reinforce the following subjects:

  • Fractions
  • Ratios and proportional relationships
  • Angles
  • Polar vs Cartesian coordinates
  • Symmetry: rotations, reflections
  • Frequency vs duration
  • Modular arithmetic
  • The unit circle in the complex plane

Specific kinds of music can help to introduce specific mathematical concepts. For example, Afro-Cuban patterns and other grooves built on hemiola are useful for graphically illustrating the concept of least common multiples. When presented with a kick playing every four slices and a snare playing every three slices, a student can both see and hear how they will line up every twelve slices. Bamberger and diSessa (2003) describe the “aha” moment that students have when they grasp this concept in a music context. One student in their study is quoted as describing the twelve-beat cycle “pulling” the other two beats together. Once students grasp least common multiples in a musical context, they have a valuable new inroad into a variety of scientific and mathematical concepts: harmonics in sound analysis, gears, pendulums, tiling patterns, and much else.
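
A quick sketch of the arithmetic behind that “aha” moment (Python 3.9+ for math.lcm):

```python
from math import lcm

# A kick every four slices and a snare every three slices realign every twelve slices.
print(lcm(4, 3))  # 12

# On a twelve-slice pizza, the two parts coincide only on slice 0, once per cycle.
cycle = lcm(4, 3)
print([i for i in range(cycle) if i % 4 == 0 and i % 3 == 0])  # [0]
```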

In addition to eighth and sixteenth notes, GP1 users can also label the pizza slices as fractions, or as Cartesian or polar coordinates. Users can thereby describe musical concepts in mathematical terms, and vice versa. It is an intriguing coincidence that, in polar terms, a sixteenth note spans an angle of 2π/16. One could go even further with polar mode and use it as the unit circle on the complex plane. From there, lessons could move into powers of e, the relationship between sine and cosine waves, and other more advanced topics. The Groove Pizza could thereby be used to lay the groundwork for concepts in electrical engineering, signal processing, and anything else involving wave mechanics.

Future work

The Groove Pizza does not offer any tone controls such as duration, pitch, or EQ. This choice was due to a combination of expediency and the push to reduce option paralysis. However, velocity (loudness) control is a high-priority future feature. While nuanced velocity control is not necessary for the artificial aesthetic of electronic dance music, a basic loud/medium/soft toggle would make the Groove Pizza a more versatile tool.

The next step beyond preset patterns is to offer drum programming exercises or challenges. In exercises, users are presented with a pattern. They may alter this pattern as they see fit by adding and removing drum hits, and by rotating instrument parts within their respective rings. Constraints of various kinds ensure that the results are appealing and musical-sounding. The constraints are tighter for more basic exercises, and looser for more advanced ones. For example, we might present users with a locked four-on-the-floor kick pattern, and ask them to create a satisfying techno beat using the snares and hi-hats. We also plan to create game-like challenges, where users are given the sound of a beat and must figure out how to represent it on the circular grid.

The Groove Pizza would be more useful for the purposes of trigonometry and circle geometry if it were presented slightly differently. Presently, the first beat of each pattern is at twelve o’clock, with playback running clockwise. However, angles are usually represented as originating at three o’clock and increasing in a counterclockwise direction. To create “math mode,” the radial grid would need to be reflected left-to-right and rotated ninety degrees.

References

Ankney, K.L. (2012). Alternative representations for musical composition. Visions of Research in Music Education, 20.

Bamberger, J., & DiSessa, A. (2003). Music As Embodied Mathematics: A Study Of A Mutually Informing Affinity. International Journal of Computers for Mathematical Learning, 8(2), 123–160.

Bamberger, J. (1996). Turning Music Theory On Its Ear. International Journal of Computers for Mathematical Learning, 1: 33-55.

Bamberger, J. (1994). Developing Musical Structures: Going Beyond the Simples. In R. Atlas & M. Cherlin (Eds.), Musical Transformation and Musical Intuition. Ovenbird Press.

Barth, E. (2011). Geometry of Music. In Greenwald, S. and Thomley, J., eds., Essays in Encyclopedia of Mathematics and Society. Ipswich, MA: Salem Press.

Bell, A. (2013). Oblivious Trailblazers: Case Studies of the Role of Recording Technology in the Music-Making Processes of Amateur Home Studio Users. Doctoral dissertation, New York University.

Benadon, F. (2007). A Circular Plot for Rhythm Visualization and Analysis. Music Theory Online, Volume 13, Issue 3.

Demaine, E.; Gomez-Martin, F.; Meijer, H.; Rappaport, D.; Taslakian, P.; Toussaint, G.; Winograd, T.; & Wood, D. (2009). The Distance Geometry of Music. Computational Geometry 42, 429–454.

Forth, J.; Wiggins, G.; & McLean, A. (2010). Unifying Conceptual Spaces: Concept Formation in Musical Creative Systems. Minds & Machines, 20:503–532.

Magnusson, T. (2010). Designing Constraints: Composing and Performing with Digital Musical Systems. Computer Music Journal, Volume 34, Number 4, pp. 62 – 73.

Marrington, M. (2011). Experiencing Musical Composition In The DAW: The Software Interface As Mediator Of The Musical Idea. The Journal on the Art of Record Production, (5).

Marshall, W. (2010). Mashup Poetics as Pedagogical Practice. In Biamonte, N., ed. Pop-Culture Pedagogy in the Music Classroom: Teaching Tools from American Idol to YouTube. Lanham, MD: Scarecrow Press.

McClary, S. (2004). Rap, Minimalism and Structures of Time in Late Twentieth-Century Culture. In Warner, D. ed., Audio Culture. London: Continuum International Publishing Group.

Monson, I. (1999). Riffs, Repetition, and Theories of Globalization. Ethnomusicology, Vol. 43, No. 1, 31-65.

New York State Learning Standards and Core Curriculum — Mathematics

Ruthmann, A. (2012). Engaging Adolescents with Music and Technology. In Burton, S. (Ed.). Engaging Musical Practices: A Sourcebook for Middle School General Music. Lanham, MD: R&L Education.

Thibeault, M. (2011). Wisdom for Music Education from the Recording Studio. General Music Today, 20 October 2011.

Thompson, P. (2012). An Empirical Study Into the Learning Practices and Enculturation of DJs, Turntablists, Hip-Hop and Dance Music Producers. Journal of Music, Technology & Education, Volume 5, Number 1, 43–58.

Toussaint, G. (2013). The Geometry of Musical Rhythm. Cleveland: Chapman and Hall/CRC.

____ (2005). The Euclidean algorithm generates traditional musical rhythms. Proceedings of BRIDGES: Mathematical Connections in Art, Music, and Science, Banff, Alberta, Canada, July 31 to August 3, 2005, pp. 47-56.

____ (2004). A comparison of rhythmic similarity measures. Proceedings of ISMIR 2004: 5th International Conference on Music Information Retrieval, Universitat Pompeu Fabra, Barcelona, Spain, October 10-14, 2004, pp. 242-245.

____ (2003). Classification and phylogenetic analysis of African ternary rhythm timelines. Proceedings of BRIDGES: Mathematical Connections in Art, Music, and Science, University of Granada, Granada, Spain July 23-27, 2003, pp. 25-36.

____ (2002). A mathematical analysis of African, Brazilian, and Cuban clave rhythms. Proceedings of BRIDGES: Mathematical Connections in Art, Music and Science, Townson University, Towson, MD, July 27-29, 2002, pp. 157-168.

Whosampled.com. “The 10 Most Sampled Breakbeats of All Time.”

Wiggins, J. (2001). Teaching for musical understanding. Rochester, Michigan: Center for Applied Research in Musical Understanding, Oakland University.

Wilkie, K.; Holland, S.; & Mulholland, P. (2010). What Can the Language of Musicians Tell Us about Music Interaction Design? Computer Music Journal, Vol. 34, No. 4, 34-48.

Theory for Producers: the White Keys

I’m pleased to announce the second installment of Theory For Producers, jointly produced by Soundfly and the MusEDLab. The first part discussed the scales you can play on the black keys of the piano. This one talks about three of the scales you get from the white keys. The next segment will deal with four additional white-key scales. Go try it!

Theory for Producers: the White Keys

If you’re a music educator or theory nerd, and would like to read more about the motivation behind the course design, read on.

Some of my colleagues in the music teaching world are puzzled by the order in which we’re presenting concepts. Theory resources almost always start with the C major scale, but we start with E-flat minor pentatonic. While it’s harder to represent in notation, E-flat minor pentatonic is easier to play and learn by ear, and for our target audience, that’s the most important consideration.

Okay, fine, the pentatonics are simple, so it makes sense to start with them. But surely we would begin the white-key installment with the major scale, right? Nope! We start with Mixolydian mode. In electronica, hip-hop, rock, and pop, Mixolydian is more “basic” than major is. The sound of the flat seventh is more native to this music than the leading tone, and V-I cadences are rare or absent. I once had a student complain that the major scale makes everything sound like “Happy Birthday.” Our Mixolydian example, a Michael Jackson tune, was chosen to make our audience of producers feel culturally at home, and to signal that we value the dance music of the African diaspora over the folk and classical music of Western Europe.

After Mixolydian, we discuss Lydian mode. While it’s a pretty exotic scale, we chose to address it before major because it’s more forgiving to improvise with–Lydian doesn’t have any “wrong” notes. In major, you have to be careful about the fourth, because it has strong functional connotations, and because it conflicts hard with the third. In Lydian, you can play notes in any order and any combination without fear of hitting a clunker. Also, exotic though it may be, Lydian does pop up in a few well-known songs, like in a recent Katy Perry hit.

Finally, we do get to major, using David Bowie and Queen. Even here, though, we downplay functional harmony, treating major as just another mode. Our song example uses a I-IV-V chord progression, but it runs over a static riff bassline, which makes it float rather than resolve.

This class only deals with the three major diatonic modes. We’ll get to the minor ones (natural minor, Dorian, Phrygian and Locrian) in the third class. We debated doing minor first, but there are more of the minor modes, and they’re more complicated.

We also debated whether or not to talk about chords. The chord changes in our examples are minimal, but they’re present. We ultimately decided to stick to horizontal scales only for the time being, and to treat chords separately. We plan to go back through all of the modes and talk about the chord progressions characteristic of each one. For example, with Mixolydian, we’ll talk about I-bVII-IV; with Lydian we’ll do I-II; and with major we’ll do all the permutations of I, IV, V and vi.

Once again, we know it’s unconventional to deal with modes so thoroughly before even touching any chords, but for our audience, we think this approach will make more sense. Electronic music is not big on complex harmony, but it is big on modes.

Milo meets Beethoven

For his birthday, Milo got a book called Welcome to the Symphony by Carolyn Sloan. We finally got around to showing it to him recently, and now he’s totally obsessed.

Welcome To The Symphony by Carolyn Sloan

The book has buttons along the side which you can press to hear little audio samples. They include each orchestra instrument playing a short Beethoven riff. All of the string instruments play the same “bum-bum-bum-BUMMM” so you can compare the sounds easily. All the winds play a different little phrase, and the brass another. The book itself is fine and all, but the thing that really hooked Milo is triggering the riffs one after another, Ableton-style, and singing merrily along.

Milo got primed to enjoy this book by two coincidental things. One is that in his preschool, they’ve been listening to Peter and the Wolf a lot, dancing to it, acting it out, and so on. They use a YouTube video that shows both the story and the instruments side by side, so Milo has very clear ideas of what the oboe, clarinet, and the rest all look and sound like. When he saw them in the orchestra book, he recognized them all immediately.

The other thing is this weird computer animated cartoon called Taratabong, which is about anthropomorphic musical instruments. Milo has been watching it on YouTube a bunch, to the point of wanting me to pretend to be different characters and “talk” to him (which is an entertaining challenge for me–how do you have a conversation as a snare drum?) So Milo also recognizes different instruments in the orchestra book as Taratabong characters.

Milo has now voluntarily watched a YouTube video of the entire first movement of Beethoven’s Fifth conducted by Leonard Bernstein, several times. That’s like nine minutes of classical music, which for a three-year-old is equivalent to nine hours. He sings along to all the riffs he recognizes, announces each instrument as he sees it, and tells me about how Leonard Bernstein is Grandfather from Peter and the Wolf. I want to emphasize that we haven’t pushed him into any of this. If you read this blog, you know that I’m an outspoken anti-fan of Beethoven. We just put this stuff under Milo’s nose, and if he hadn’t been interested, we wouldn’t have pushed it.

The classical music tribe expresses continual anguish about how hard it is to draw people into the music. Having inadvertently created a budding Beethoven lover, I have a few insights to offer. Milo got connected to the music through multiple media simultaneously, in multiple settings. He was exposed initially in the context of stories about animals and cartoon characters. That exposure happened in the context of acting and dancing, not passive sitting or being lectured to. And when he did start listening, it was via playback devices that he controls completely: YouTube Kids on the iPad, and the buttons on the book.

Of all these different music experiences, the Ableton-like sample triggering is the one that has most seized Milo’s enthusiasm. Sometimes he wants to read the book and play the sounds when the text indicates. Sometimes he wants to systematically listen through each sound, singing along and acting out the instruments. Sometimes he just jams out, playing the excerpts in different orders and in different rhythms. I suspect he’d be even happier if he could get the sounds to loop. He wants to sing along, but the little phrases are half over before he can even get oriented. If the phrases looped in a musical-sounding way, I bet he would dig in much deeper.

This is not Milo’s first experience triggering sample playback. Before he even turned two, we spent a lot of time playing around with an APC 40.

APC40

Milo adores the lights and colors, and instantly grasped how the volume faders work. In general, though, the APC experience was too complicated for him. It was too easy to make it stop working, to lose the connection between button pushes and the music changing, and to generally get lost in the interface. (I have some of those same problems!) The orchestra book has the advantage of being vastly simpler and more predictable.

There’s a page in the book that shows Beethoven with quill pen, writing the music. (Milo is continually disappointed not to see Beethoven himself in any of the performance videos.) Interestingly, Milo has started using the phrase “writing music” as a synonym for “playing music”, either from an instrument or from iTunes. He seems not to know or care about the distinction between playing back pre-recorded music and creating new music. This conflation of writing and playing music was likely helped by the time Milo has spent with the aQWERTYon, an interface developed by the NYU MusEDLab for performing music on the computer keyboard.

aQWERTYon screencap

Milo isn’t extremely interested in the musical aspect of the aQWERTYon. He calls it “ABCs” and is mostly interested in using it to type his favorite letters. He also enjoys singing the alphabet song while playing semi-randomly along.

The MusEDLab’s work is motivated by the fact that computers make it enormously easier for total novices to participate actively in music. If Beethoven symphonies can be played with as toys, participated in as games, and connected to meaningful stories and activities, then it’s inevitable that kids are going to want to get involved. If I had experienced Beethoven as raw material for my own expression, I’d probably feel quite differently about him.

Inside the aQWERTYon

The MusEDLab and Soundfly just launched Theory For Producers, an interactive music theory course. The centerpiece of the interactive component is a MusEDLab tool called the aQWERTYon. You can try it by clicking the image below.

aQWERTYon screencap

In this post, I’ll talk about why and how we developed the aQWERTYon.

One of our core design principles is to work within our users’ real-world technological limitations. We build tools in the browser so they’ll be platform-independent and accessible anywhere there’s internet access (and where there isn’t internet access, we’ve developed the “MusEDLab in a box.”) We want to find out what musical possibilities there are in a typical computer with no additional software or hardware. That question led us to investigate ways of turning the standard QWERTY keyboard into a beginner-friendly instrument. We were inspired in part by GarageBand’s Musical Typing feature.

GarageBand musical typing

If you don’t have a MIDI controller, Apple thoughtfully made it possible for you to use your computer keyboard to play GarageBand’s many software instruments. You get an octave and a half of piano, plus other useful controls: pitch bend, modulation, sustain, octave shifting and simple velocity control. Many DAWs offer something similar, but Apple’s system is the most sophisticated I’ve seen.

Handy though it is, Musical Typing has some problems as a user interface. The biggest one is the poor fit between the piano keyboard layout and the grid of computer keys. Typing the letter A plays the note C. The rest of that row is the white keys, and the one above it is the black keys. You can play the chromatic scale by alternating A row, Q row, A row, Q row. That basic pattern is easy enough to figure out. However, you quickly get into trouble, because there’s no black key between E and F. The QWERTY keyboard gives no visual reminder of that fact, so you just have to remember it. Unfortunately, the “missing” black key happens to be the letter R, which is GarageBand’s keyboard shortcut for recording. So what inevitably happens is that you’re hunting for E-flat or F-sharp and you accidentally start recording over whatever you were doing. I’ve been using the program for years and still do this routinely.

Rather than recreating the piano keyboard on the computer, we drew on a different metaphor: the accordion.

The accordion: the user interface metaphor of the future!

We wanted to have chords and scales arranged in an easily discoverable way, like the way you can easily figure out the chord buttons on the accordion’s left hand. The QWERTY keyboard is really a staggered grid four keys tall and between ten and thirteen keys wide, plus assorted modifier and function keys. We decided to use the columns for chords and the rows for scales.

For the diatonic scales and modes, the layout is simple. The bottom row gives the notes in the scale starting on scale degree 1. The second row has the same scale shifted over to start on degree 3. The third row starts the scale on degree 5, and the top row starts on degree 1 an octave up. If this sounds confusing when you read it, try playing it; your ears will immediately pick up the pattern. Notes in the same column form the diatonic chords, with their roman numerals conveniently matching the number keys. There are no wrong notes, so even just mashing keys at random will sound at least okay. Typing your name usually sounds pretty cool, and picking out melodies is a piece of cake. Playing diagonal columns, like Z-S-E-4, gives you chords voiced in fourths. The same layout approach works great for any seven-note scale: all of the diatonic modes, plus the modes of harmonic and melodic minor.
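
Here is the row-and-column logic in code form. It is a sketch of the mapping just described, using C Mixolydian as an example seven-note mode; the constants and function names are my own illustration, not the aQWERTYon’s source.

```python
# Rows start the scale on degrees 1, 3, 5, and 1 an octave up; columns stack into chords.
ROW_START_DEGREES = [0, 2, 4, 7]  # bottom row first, as zero-based scale-degree indices

def key_to_midi(row, column, scale, root=60):
    """MIDI pitch for the key at (row, column), given a seven-note scale in semitones."""
    degree = ROW_START_DEGREES[row] + column
    octave, step = divmod(degree, len(scale))
    return root + 12 * octave + scale[step]

mixolydian = [0, 2, 4, 5, 7, 9, 10]  # C Mixolydian over a C root (MIDI 60)

# One column = one diatonic chord. Column 0 gives C, E, G, C: the I chord.
print([key_to_midi(row, 0, mixolydian) for row in range(4)])  # [60, 64, 67, 72]

# Column 1 gives the chord on the second degree: D, F, A, D (the ii chord).
print([key_to_midi(row, 1, mixolydian) for row in range(4)])  # [62, 65, 69, 74]
```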

Pentatonics work pretty much the same way as seven-note scales, except that the columns stack in fourths rather than fifths. The octatonic and diminished scales lay out easily as well. The real layout challenge lay in one strange but crucial exception: the blues scale. Unlike other scales, you can’t just stagger the blues scale pitches in thirds to get meaningful chords. The melodic and harmonic components of blues are more or less unrelated to each other. Our original idea was to put the blues scale on the bottom row of keys, and then use the others to spell out satisfying chords on top. That made it extremely awkward to play melodies, however, since the keys don’t form an intelligible pattern of intervals. Our compromise was to create two different blues modes: one with the chords, for harmony exploration, and one just repeating the blues scale in octaves for melodic purposes. Maybe a better solution exists, but we haven’t figured it out yet.

When you select a different root, all the pitches in the chords and scales are automatically changed as well. Even if the aQWERTYon had no other features or interactivity, this would still make it an invaluable music theory tool. But root selection raises a bigger question: what do you do about all the real-world music that uses more than one scale or mode? Totally uniform modality is unusual, even in simple pop songs. You can access notes outside the currently selected scale by pressing the shift keys, which transposes the entire keyboard up or down a half step. But what would be really great is if we could get the scale settings to change dynamically. Wouldn’t it be great if you were listening to a jazz tune, and the scale was always set to match whatever chord was going by at that moment? You could blow over complex changes effortlessly. We’ve discussed manually placing markers in YouTube videos that tell the aQWERTYon when to change its settings, but that would be labor-intensive. We’re hoping to discover an algorithmic method for placing markers automatically.

The other big design challenge we face is how to present all the different scale choices in a way that doesn’t overwhelm our core audience of non-expert users. One solution would just be to limit the scale choices. We already do that in the Soundfly course, in effect; when you land on a lesson, the embedded aQWERTYon is preset to the appropriate scale and key, and the user doesn’t even see the menus. But we’d like people to be able to explore the rich sonic diversity of the various scales without confronting them with technical Greek terms like “Lydian dominant”. Right now, the scales are categorized as Major, Minor and Other, but those terms aren’t meaningful to beginners. We’ve been discussing how we could organize the scales by mood or feeling, maybe from “brightest” to “darkest.” But how do you assign a mood to a scale? Do we just do it arbitrarily ourselves? Crowdsource mood tags? Find some objective sorting method that maps onto most listeners’ subjective associations? Some combination of the above? It’s an active area of research for us.

This issue of categorizing scales by mood has relevance for the original use case we imagined for the aQWERTYon: teaching film scoring. The idea behind the integrated video window was that you would load a video clip, set a mode, and then improvise some music that fit the emotional vibe of that clip. The idea of playing along with YouTube videos of songs came later. One could teach more general open-ended composition with the aQWERTYon, and in fact our friend Matt McLean is doing exactly that. But we’re attracted to film scoring as a gateway because it’s a more narrowly defined problem. Instead of just “write some music”, the challenge is “write some music with a particular feeling to it that fits into a scene of a particular length.”

Would you like to help us test and improve the aQWERTYon, or to design curricula around it? Would you like to help fund our programmers and designers? Please get in touch.

Theory for Producers

I’m delighted to announce the launch of a new interactive online music course called Theory for Producers: The Black Keys. It’s a joint effort by Soundfly and the NYU MusEDLab, representing the culmination of several years’ worth of design and programming. We’re super proud of it.

Theory for Producers: The Black Keys

The course makes the abstractions of music theory concrete by presenting them in the form of actual songs you’re likely to already know. You can play and improvise along with the examples right in the web browser using the aQWERTYon, which turns your computer keyboard into an easily playable instrument. You can also bring the examples into programs like Ableton Live or Logic for further hands-on experimentation. We’ve spiced up the content with videos and animations, along with some entertaining digressions into the Stone Age and the auditory processing abilities of frogs.

So what does it mean that this is music theory for producers? We’re organizing the material in a way that’s easiest and most relevant to people using computers to create the dance music of the African diaspora: techno, hip-hop, and their various pop derivatives. This music carries most of its creative content outside of harmony: in rhythm, timbre, and repetitive structure. The harmony is usually static, sitting on a loop of a few chords or just a single mode. Alongside the standard (Western) major and minor scales, you’re just as likely to encounter more “exotic” (non-Western) sounds.

Music theory classes and textbooks typically begin with the C major scale, because it’s the easiest scale to represent and read in music notation. However, C major is not necessarily the most “basic” or fundamental scale for our intended audience. Instead, we start with E-flat minor pentatonic, otherwise known as the black keys on the piano. The piano metaphor is ubiquitous both in electronic music hardware and software, and pentatonics are even easier to play on piano than diatonic scales. E-flat minor pentatonic is more daunting in notated form than C major, but since dance and hip-hop producers tend not to be able to read music anyway, that’s no obstacle. And if producers want to use keys other than E-flat minor (or G-flat major), they can keep playing the black keys and then transpose the MIDI later.

The Black Keys is just the first installment in Theory For Producers. Next, we’ll do The White Keys, otherwise known as the modes of C major. We’re planning to start that course not with C major itself, but with G Mixolydian mode, because it’s a more familiar sound in Afrodiasporic music than straight major. After that, we’ll do a course about chords, and one about rhythm. We hope you sign up!

Update: oh hey, we’re on Lifehacker

Teaching reflections

Here’s what happened in my life as an educator this past semester, and what I have planned for the coming semester.

Montclair State University Intro To Music Technology

I wonder how much longer “music technology” is going to exist as a subject. They don’t teach “piano technology” or “violin technology.” It makes sense to teach specific areas like audio recording or synthesis or signal theory as separate classes. But “music technology” is such a broad term as to be meaningless. The unspoken assumption is that we’re teaching “musical practices involving a computer,” but even that is both too big and too small to structure a one-semester class around. On the one hand, every kind of music involves computers now. On the other hand, to focus just on the computer part is like teaching a word processing class that’s somehow separate from learning how to write.

MSU Intro to Music Tech

The newness and vagueness of the field of study gives me and my fellow music tech educators wide latitude to define our subject matter. I see my job as providing an introduction to pop production and songwriting. The tools we use for the job at Montclair are mostly GarageBand and Logic, but I don’t spend a lot of time on the mechanics of the software itself. Instead, I teach music: How do you express yourself creatively using sample libraries, or MIDI, or field recordings, or pre-existing songs? What kinds of rhythms, harmonies, timbres and structures make sense aesthetically when you’re assembling these materials in the DAW? Where do you get ideas? How do you listen to recorded music analytically? Why does Thriller sound so much better than any other album recorded in the eighties? We cover technical concepts as they arise in the natural course of producing and listening. My hope is that they’ll be more relevant and memorable that way.

Having now taught three semesters of Intro to Music Tech at MSU, my format is starting to gel. The students spend most of the semester creating tracks. They do one using only the loops that come with GarageBand, one using only MIDI and software instruments, one that includes a field recording they made with their phones, and so on. I started having them remix each other’s tracks this past semester, and it was such a smash hit that I’m going to have future classes do a whole series of peer remixes.

Montclair is a fairly traditional conservatory. For many students, my class is the only time in their college careers they get to make music according to their own sensibilities and tastes. It’s also usually the only time they engage critically with recordings, or electronic dance music, or hip-hop, or pop song forms, or sampling, or mixing and audio processing. I’m glad to be able to fill these vacuums, but I wish I had more than one semester to do it in.

Aside from creative music-making, the students do a couple of presentations, one on a song they think is interesting, and one on a topic of their choice. They also write blog posts about the process of creating their tracks. This last assignment is a persistent obstacle, since no one seems to share my enthusiasm for process documentation. Next semester I’m going to try introducing some of the cooperative/competitive spirit of the peer remixes by having them write reviews of each other’s tracks. Maybe that will get them to invest their writing with the same creativity they put into the music assignments.

Montclair State Advanced Computer Music Composition

This past fall I got to teach my first advanced class, and it went amazingly well. We used Ableton Live, my DAW of choice, and the guys (it was all guys) banged out tracks at a rapid clip for the entire semester. As with the intro class, I spent most of the time on the creative process, and dealt with Ableton functionality and audio engineering topics as they came up.

Tristan gets his FFT on

Each assignment came with some kind of tight technical restriction, but no stylistic restrictions. As with the intro class, the advanced dudes did tracks using only existing loops, only MIDI, and found sound. They did peer remixing and self remixing as well. The two hardest and most interesting assignments were to create a new track using only samples of an existing track, and then to create a new track using only a single five-second Duke Ellington sample. (These assignments were inspired heavily by the Disquiet Junto.) The more tightly I constrained the students, the more ingenuity they displayed. Listen for yourself:

As with the intro class, I tried to have the advanced dudes document their process with blog posts. As with the intro class, they showed zero interest. In the future, I’ll have to get more creative with the writing component. Also, I’d like to not have the class be entirely male.

NYU Music Education Technology Practicum

This class is meant to be a grounding in music tech for future music teachers. I’m even more time-constrained at NYU than at Montclair, and I teach in a regular classroom rather than a computer lab. While my class time at Montclair is mostly devoted to music-making, at NYU I’m forced to do more lectures, demos and listening sessions. It is very far from ideal. I have no idea how NYU can charge so much money without offering such a basic-seeming amenity as a room with computers in it for the music students. However, NYU does have one advantage over Montclair as a teaching environment, which is that I can hold a couple of class sessions in an extremely fancy recording studio.

Catherine and Joseph in the Dolan Studio

I mostly take the same approach at NYU as I do at Montclair, and use most of the same assignments. The major difference is that the NYU kids do a critical listening project, where they pick a recording and graph out its musical structure and spatial layout. It’s a difficult exercise, but an invaluable one. I did it in grad school, and it improved my analytical listening abilities significantly. We used to do the same assignment at Montclair, but the students were really not into it, like to the point of refusing to do it, so sadly we had to drop it from the syllabus. I hope we can find a way to reinstate it.

This past semester, the majority of my NYU kids were music business majors, which was pretty great. They came in with less musical experience than the education majors–sometimes with none at all–but they had less to unlearn, and they threw themselves confidently into producing tracks. This coming semester I have a bunch more music business kids. I’m attracting them because my class is the only one at Steinhardt that does intro-level creative music making in the pop idiom. I’m clearly filling a vacuum, and I’m hoping that I’m just the thin edge of the wedge, both for my own sake and the future music educators of NYU.

Interface designs

The NYU Music Experience Design Lab is baking education into a suite of creative music making and learning tools. As my friend and colleague Adam Bell likes to say, purchasers of a computer are purchasing a music education. We’re trying to make that education a better and more enjoyable one, whether our users are in formal classroom settings or playing around on their own. You can read about the lab’s various projects here. My own contributions are largely conceptual, though I’ve also devoted a lot of attention to making useful and inspiring presets.

Cold Sweat on the Groove Pizza

The Ed Sullivan Fellows Program

This winter, the MusEDLab is launching a brand new initiative, mentoring a group of young people from challenging circumstances in music and technology. I’ll be teaching the music side, doing a custom-tailored version of my intro class syllabus. Sullivan Fellows will also work with my colleagues in the lab on programming and design projects. This summer, we’ll have a showcase event as part of the 2016 IMPACT Conference. The goal is to help the Fellows get launched in careers in music and/or technology. I’ll be writing a lot more about this in the coming weeks.

Online courses with Soundfly

The MusEDLab is working with a music ed startup on some new interactive online courses. The first is called Music Theory For Bedroom Producers, and we expect to launch next month. I wrote a lot of the materials, and am appearing in some videos. Soundfly has ace designers, animators and programmers, so expect a rich multimedia experience. More on this as it gets closer.

Everything else

For the past few years, I’ve been a teaching artist with NYU’s IMPACT workshop. Below, you can see some participants making beats on an iPad. The workshop is a crash course not just in music, but in theater, dance, video, and the intersection of all of the above. I’m still very much figuring out my role in the whole thing, but so is everyone involved.

Mobile music at IMPACT

I continue to teach private lessons, do freelance production and composition, do some consulting, write for online publications, and generally keep hustling for gigs. If you’d like to have me do any of these things, be in touch.