2016 LVC Summer Course Projects & Links

Getting started

Scratch + Makey Makey

Scratch Jazz Tutorial

Play With Your Music

Soundtrap/MusEDLab Creating

Young Composers and Improvisers Workshop

Music Theory for Bedroom Producers Courses

MusEDLab Apps & Tools

OIID Partnership Apps

iPad Apps Shared and Explored

  • Blob Chorus
  • Young Person’s Guide to the Orchestra
  • MadPad
  • Music Maker Jam
  • OIID
  • Figure
  • ThumbJam
  • Reflector
  • Singing Fingers
  • Soundtrap
  • GarageBand (smart instruments and jam session mode)

Bibliography and Readings for Technological Trends in Music Education – Fall 2013

Bibliography for MPAME-GE 2035 – Technological Trends in Music Education: Designing Technologies and Experiences for Music Making, Learning and Engagement – NYU Fall 2013

Making it Easier to be Musical in Scratch

One of our summer research projects has focused on refining the audio, sound, and music blocks and strategies for Scratch 2.0, the visual programming environment for kids developed by the Lifelong Kindergarten Group at the MIT Media Lab. Out of the box, Scratch provides some basic sound and audio functionality via the following blocks, found on the left-hand side of the interface:

Scratch Sound Blocks

These blocks allow the user to play audio files selected from a built-in set of sounds or from user-imported MP3 or WAV files, play MIDI drum and instrument sounds and rests, and change and set the musical parameters of volume, tempo, pitch, and duration. Most Scratch projects that involve music utilize the “play sound” blocks for triggering sound effects or playing MP3s in the background of interactive animation or game projects.

This makes a lot of sense. Users have sound effects and music files that have meaning to them, and these blocks make it easy to insert them into their projects where they want.

What’s NOT easy in Scratch for most kids is making meaningful music with a series of “play note”, “rest for”, and “play drum” blocks. These blocks provide access to music at the phoneme rather than the morpheme level of sound. Or, as Jeanne Bamberger puts it, at the smallest musical representations (individual notes, rests, and rhythms) rather than the simplest ones (motives, phrases, sequences), from the perspective of children’s musical cognition. To borrow a metaphor from chemistry, it is the difference between the atomic/elemental and molecular levels of music.

Working at the level of individual notes, rests, and rhythms requires quite a lot of musical understanding and fluency; it can be hard to “start at the very beginning.” One needs to understand and be able to dictate proportional rhythm, and to divine musical meta-dimensions such as key, scale, and meter by ear. One also needs to be fluent in the chromatic division of the octave and know that in MIDI, “middle C” = note value 60. In computer science parlance, the musical blocks included with Scratch are “low level,” requiring a lot of prior knowledge and understanding to work with.
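
To make that prior-knowledge burden concrete, here is a minimal sketch of the MIDI pitch arithmetic in Python (the names are ours, not Scratch’s): middle C is note 60, and every semitone above it adds one.

```python
# MIDI pitch arithmetic a Scratch user must internalize to use "play note".
MIDDLE_C = 60
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11, 12]  # semitone offsets of a major scale

c_major_scale = [MIDDLE_C + step for step in MAJOR_SCALE_STEPS]
print(c_major_scale)  # [60, 62, 64, 65, 67, 69, 71, 72]
```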

To help address this challenge within Scratch, our research group has been researching ways of making it easier for users to get musical ideas into Scratch, exploring what musical data structures might look like in Scratch, and developing custom blocks for working at a higher, morpheme level of musical abstraction. The new version of Scratch (2.0) enables power users to create their own blocks, and we’ve used that mechanism for many of our approaches. If you want to jump right into the work, you can visit our Performamatics @ NYU Scratch Studio to view, play with, and remix our code.

Here’s a quick overview of some of the strategies/blocks we’ve developed:

  • Clap Engine – The user claps a rhythm live into Scratch using the computer’s built-in microphone. If the claps are loud enough, Scratch samples the time each clap occurred and stores it in one list, and stores the intensity of the clap in a second list. These lists are then available to the user as a means of “playing back” the claps: the recorded rhythm and clap intensities can be mapped to built-in drum sounds, melodic notes, or audio samples. The advantage of this project is that human performance timing is maintained, and we’ve provided the necessary back-end code to make it easy for users to play back what they’ve recorded. (A Python sketch of this record/playback logic appears after this list.)
  • Record Melody in List – This project presents a strategy developed by a participant in one of our interdisciplinary Performamatics workshops for educators. The user records a diatonic melody in C major using the home row of the computer keyboard; the melody performed is then added to a list in Scratch, which can then be played back. This project (as of now) records only the pitch information, not the rhythm. It makes it easier for users to get melodies into a computational representation (i.e., a Scratch list) for manipulation and playback. (See the key-to-pitch sketch after this list.)
  • play chord from root pitch block – This custom block enables the user to input a root pitch (e.g., middle C = 60), a chord type (e.g., major, minor, dim7, etc.), and a duration to generate a root-position chord above the chosen root note. Playing a chord now takes only one “play chord” block rather than 8-9 blocks. (The interval arithmetic behind it is sketched after this list.)
  • play drum beats block – This block enables the user to input a string of symbols representing a rhythmic phrase. Modeled after the drum notation in the Gibber JavaScript live coding environment, the user works at the rhythmic motive or phrase level by editing symbols that the Scratch program interprets as rhythmic sounds. (A sketch of this string-parsing idea appears after this list.)
  • play ‘ya’ beats block – This block is very similar in design to the ‘play drum beats’ block in that it works with short strings of text, but it triggers a recorded sound file instead. The symbols used to rhythmically trigger audio samples in this block are modeled after Georgia Tech’s EarSketch project for teaching Python through hip-hop beats.
  • Musical Typing with Variable Duration – This project solves a problem our group faced for a long time. If one connects a computer keyboard key to a play note block, an interesting behavior occurs: the note is played, but as the key is held down, the note restarts over and over in rapid-fire succession. To solve this, we needed code that would “debounce” the computer key inputs while keeping the sound sustained until the key is released. We did this with a piece of Scratch code that “waits until the key is not pressed,” followed by a “stop all” command to stop the sounds. It’s a bit of a hack, but it works. (An edge-triggered version of this loop is sketched after this list.)
  • MIDI Scratch Alpha Keyboard – This project implements the new Scratch 2.0 extension mechanism to add external MIDI functionality. The project uses a new set of custom MIDI blocks to trigger sounds in either the browser’s built-in Java synthesizer or any software or hardware synthesizer or sampler in or connected to your computer. With these blocks, you now have full-quality sampled sounds, stereo pan control, access to MIDI continuous controllers and pitch bend, and fine-grained note on/note off. Read more about this on our research page. (A rough Python equivalent of these messages appears below.)
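
The Scratch code for these strategies is block-based, but the underlying logic translates readily to text. First, a rough Python sketch of the Clap Engine’s two-list record/playback idea; `microphone_loudness()` and `play_hit()` are hypothetical stand-ins for Scratch’s loudness sensing and sound blocks.

```python
import time
import random  # only used to fake microphone input in this sketch

clap_times = []        # when each clap occurred (seconds since recording began)
clap_intensities = []  # how loud each clap was

def microphone_loudness():
    """Hypothetical stand-in for Scratch's 'loudness' sensor (0-100)."""
    return random.randint(0, 100)

def play_hit(intensity):
    """Hypothetical stand-in for a Scratch 'play drum' / 'play sound' block."""
    print(f"hit! intensity={intensity}")

def record_claps(duration=4.0, threshold=40):
    start = time.time()
    while time.time() - start < duration:
        loudness = microphone_loudness()
        if loudness > threshold:                    # loud enough to count as a clap
            clap_times.append(time.time() - start)  # list 1: timing
            clap_intensities.append(loudness)       # list 2: intensity
            time.sleep(0.1)   # crude guard so one clap isn't counted twice
        time.sleep(0.01)

def play_back():
    """Replay the claps with the human performance timing preserved."""
    previous = 0.0
    for t, intensity in zip(clap_times, clap_intensities):
        time.sleep(t - previous)
        play_hit(intensity)
        previous = t

record_claps()
play_back()
```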
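
The Record Melody in List strategy reduces to a key-to-pitch lookup plus an append. In this sketch, a typed string stands in for live key presses, and the exact key-to-pitch mapping is our assumption for illustration.

```python
# Home-row keys mapped to a C major scale starting at middle C (MIDI 60).
HOME_ROW_PITCHES = {
    "a": 60, "s": 62, "d": 64, "f": 65,
    "g": 67, "h": 69, "j": 71, "k": 72,
}

melody = []  # the "Scratch list" holding the recorded pitches

for key in "dfgfd":  # a typed string stands in for live key presses
    if key in HOME_ROW_PITCHES:
        melody.append(HOME_ROW_PITCHES[key])

print(melody)  # [64, 65, 67, 65, 64] -- pitch only, no rhythm, as noted above
```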
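
The play chord from root pitch block stores an interval recipe per chord type and adds it to the root. A minimal sketch (the exact set of chord types in our block may differ):

```python
# Interval recipes (in semitones above the root) for a few chord types.
CHORD_INTERVALS = {
    "major": [0, 4, 7],
    "minor": [0, 3, 7],
    "dim7":  [0, 3, 6, 9],
}

def chord_from_root(root, chord_type):
    """Return the MIDI notes of a root-position chord above `root`."""
    return [root + interval for interval in CHORD_INTERVALS[chord_type]]

print(chord_from_root(60, "major"))  # [60, 64, 67] -- a C major triad
```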
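
The play drum beats and play ‘ya’ beats blocks both walk a string of symbols one beat at a time. Our blocks follow Gibber’s and EarSketch’s notations; this sketch invents a simpler two-symbol alphabet (‘x’ = hit, ‘.’ = rest) just to show the parsing idea.

```python
import time

def play_drum():
    """Hypothetical stand-in for a Scratch 'play drum' block."""
    print("boom")

def play_drum_beats(pattern, beat_seconds=0.25):
    """Interpret a string of symbols as a rhythmic phrase:
    'x' triggers a drum hit, '.' is a rest; each symbol lasts one beat."""
    for symbol in pattern:
        if symbol == "x":
            play_drum()
        time.sleep(beat_seconds)

play_drum_beats("x.x.x..x")
```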
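
The Musical Typing debounce is, in programming terms, an edge-triggered polling loop: start the sound once on key-down, then wait for key-up before stopping it. Here a scripted sequence of key states stands in for Scratch’s “key pressed?” sensor.

```python
import time

# Scripted key states standing in for Scratch's "key pressed?" sensor:
# key down, held across several polls, then released.
key_states = iter([False, True, True, True, True, False, False])

def key_is_pressed():
    return next(key_states, False)

def note_on():   # stand-in for starting a sound
    print("note on")

def note_off():  # stand-in for Scratch's "stop all" hack
    print("note off")

for _ in range(3):               # main polling loop
    if key_is_pressed():
        note_on()                # fire once on the key-down edge
        while key_is_pressed():  # "wait until the key is not pressed"
            time.sleep(0.01)
        note_off()               # release: stop the sustained sound
    time.sleep(0.01)
```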
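
Finally, the MIDI extension itself is implemented against Scratch 2.0’s extension mechanism, but for readers who think in text, the equivalent messages look like this in Python using the mido library (our choice for illustration, not what the extension uses):

```python
import mido

# Open the default MIDI output port (a software or hardware synth).
out = mido.open_output()

# Fine-grained note on/note off with velocity, as the custom blocks expose.
out.send(mido.Message('note_on', note=60, velocity=80))        # middle C, medium-loud
out.send(mido.Message('pitchwheel', pitch=2048))               # bend the note upward
out.send(mido.Message('control_change', control=10, value=0))  # CC 10: pan hard left
out.send(mido.Message('note_off', note=60))
```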

I hope you find these strategies & blocks useful in your own Scratch/Computing+Music work.

Designing Technology & Experiences for Music Making, Learning, & Engagement

This Fall I will be teaching a graduate course at NYU called Designing Technologies and Experiences for Music Making, Learning, and Engagement. The course is heavily inspired by the Hack Day process, applied over the span of a semester-long course. Students from across the many programs within the NYU Department of Music and Performing Arts Professions will work individually and in teams to develop a technology and/or experience that they will iterate at least twice over the course of the semester with a specified audience/group of stakeholders. Students will read articles about and case studies of best practices in music education, meaningful engagement, experience design, technology development, and entrepreneurship, and will meet regularly with guest presenters from industry and education. At the end of the course, students will present their projects to a panel of music educators and industry representatives for feedback. Selected students will have the opportunity to compete for scholarships to work with my research group and some of the industry sponsors during the Spring 2014 semester to potentially license and commercialize their ideas and projects.

In this course, we will implement a research & development process designed by Andrew R. Brown called Software Development as (Music Education) Research (SoDaR). SoDaR was piloted and used throughout the development of the Jam2Jam networked media jamming software project led by the late Steve Dillon, and it actively involves the end users of a piece of software in the design process at all stages. The field of music education technology is only now starting to move in this direction; in the past, educators were often marketed music technologies designed for professional musicians (e.g., professional keyboard synthesizers, Finale, Sibelius, Reason, Pro Tools, Ableton, etc.). It’s notable that relatively new technologies such as Noteflight, MusicFirst, and MusicDelta have engaged educators in the design and refinement of their tools and see music educators and students as their primary user audience.