2016 LVC Summer Course Projects & Links

Getting started

Scratch + Makey Makey

Scratch Jazz Tutorial

Play With Your Music

Soundtrap/MusEDLab Creating

Young Composers and Improvisers Workshop

Music Theory for Bedroom Producers Courses

MusEDLab Apps & Tools

OIID Partnership Apps

iPad Apps Shared and Explored

  • Blob Chorus
  • Young Person's Guide to the Orchestra
  • MadPad
  • Music Maker Jam
  • OIID
  • Figure
  • ThumbJam
  • Reflector
  • Singing Fingers
  • Soundtrap
  • GarageBand (smart instruments and jam session mode)

Exciting music and technology related sessions at the upcoming MayDay Group colloquium

This week Brent Talbot and the folks at Gettysburg College will be hosting the 26th MayDay Group Colloquium. Unfortunately, I will not be able to attend due to our summer Performamatics.org computational/musical thinking workshop, but the lineup of sessions is absolutely great. For those of you interested in the latest critical scholarship in music education and technology, here are some of the related session abstracts:

Eva Egolf (PhD, NYU 2014) –

New York City Electronic Dance Music Club DJs: Learning Processes and Cross-Disciplinary Collaboration

Early 1980s dance music genres, such as house and techno, have transformed and fractured into a myriad of subgenres and practices. Collectively, these genres are often grouped under the moniker electronic dance music (EDM). EDM is performed in dance clubs by a disc jockey (DJ), and the DJ is the hero of EDM. Within approximately the last five years, EDM has gained popularity in the United States, with increased radio airtime and prominent DJs such as Tiesto commanding an average of $250,000 per appearance for a two-hour club set (Greenburg, 2012). Despite the commercial success of EDM, the learning processes and musical understanding of DJs have largely gone unexplored by the music education community.

The purpose of this paper is to explore the processes of learning among EDM club DJs in New York City and the ways in which they collaborate across multiple disciplines. This paper provides an overview of musical understanding as it exists among DJ participants, illuminates the processes by which DJs acquire this understanding, and examines the context in which musical understanding is learned. This case study of five DJs who predominantly work in underground gay dance clubs employs semi-structured interviews and observations of performances as the primary methods of inquiry. The frameworks of musical understanding (Elliott, 1995), informal learning (Green, 2002), and situated learning (Lave & Wenger, 1991) were used to guide the analysis of data.

Learning among participant DJs involved multifaceted processes. Self-taught, solitary learning was significant among DJs as they engaged with trial-and-error approaches. Additionally, group learning (Green, 2002) figured into learning experiences as participants engaged with peers. Situated learning processes (Lave & Wenger, 1991) emerged as some participants accessed experienced DJs and formed casual mentoring relationships. Community music schools specializing in DJing helped some participants in the early stages of learning to DJ. Participants described these schools as providing basic initial instruction, and found they needed to learn more outside of school through solitary practice, peers, and performing experience. K-12 and university school music experiences were, from the perspective of participants, minimally helpful in learning how to DJ. This paper outlines these learning experiences, which are of interest to the music education community.

In addition to describing the processes of learning, this paper explores the context of clubbing in underground gay clubs in New York City. In this context are instances of collaboration between music makers and practitioners from disciplines outside of music. Understanding the historical/cultural context of DJing and clubbing was valued among participants in several ways. An understanding of the standards and traditions associated with the context were essential to DJs in their music making efforts. Also, while describing the context of music making, participants emphasized the influence of  houses in New York City clubbing communities. These are social and professional alliances that are named after prominent fashion houses, such as the House of Aviance, and the House of Extravaganza. One function of the houses in the clubbing community is that they facilitate collaboration across multiple disciplines as members of the house work together to “throw parties” at various clubs. In doing this, houses draw on a range of expertise from lighting designers, singers, dancers, DJs, web designers, and promoters. The house, as a professional and social affiliation facilitates this multi-discipline collaboration.

This paper has implications for the field of music education as it explores the processes by which this group of musicians learn. The musical understanding of club DJs is presented as distinctive with an emphasis on music making within the context of New York City dance clubs. In addition to the context influencing music making activities, the context includes the cross-discipline collaborative engagements of the house.

 

Adam P. Bell (PhD, NYU 2013; Montclair State University)

The DAW Double-Edged Sword

The shift to software-enabled recording has significantly reduced the cost of entry-level equipment, which has improved the quality and capacity of home recording…Software and code have made possible a regime of more distributed musical creativity, which represents a democratization of technology. (Leyshon, 2009, p. 1325)

The 2009 global report of the National Association of Music Merchants details that computer-based recording experienced a financial boom between 1999 and 2008. The computer music market rose almost 200 percent to become a 400-million-dollar industry. This trend coincides with the proliferation of digital audio workstations (DAWs) that were made available to the consumer and “prosumer” markets commencing in the early 2000s, exemplifying the “democratization” of which Leyshon speaks. With regard to music education, an intriguing result of this trend is that the skillset typically associated with the trade of audio engineering has been placed into the hands of musicians, which prompts two lines of questioning: First, what and how do musicians learn from the audio engineering community? Second, what are the learning implications for music-makers utilizing DAWs?

Historical accounts document that the occupation of audio engineering emerged from recording practices developed in home studios (e.g., Horning, 2002), typified by acts of “technological enthusiasm” and “tinkering” (Waksman, 2004). Pioneers in the field such as Les Paul and Joe Meek honed their techniques in the bedrooms and basements of their home studios (Buskin, 2007; Cleveland, 2001). They conceptualized the recording studio as a musical instrument, a concept that was popularized by the mid-1960s as “pop music soon discovered the potential of the studio as a place to make music rather than just to record it” (Clarke, 2007, p. 54). The postulation of recording engineer Dave Pensado captures the sentiment of those who laud the practice of home recording: “In general, the creativity that emerges from home studios almost always surpasses that of an expensive studio” (as cited in Simons, 2004, p. 10).

Despite its popularity and the access it creates for creativity, computer-based composition has its limitations. Jennings (2007) rightly charges that these programs “subliminally direct the actions of users, in both musical and non-musical ways” (p. 78). Programs make assumptions and covertly steer users by limiting options. Limiting the possibilities in the process of composition leads users of the same software to compose in a generic manner, producing generic outcomes; the software itself becomes the genre. Programs that rely on preset sounds limit diversity and constrain the styles composed. Both Mellor (2008) and Latartara (2011) found that the design of some DAWs privileges certain user actions over others, such as composing section by section, looping audio segments, and overdubbing.

Drawing on primary and secondary sources of observational and interview data, my proposed paper presentation will problematize the DAW and its place in music education. Eschewing a “how to” or “here’s what’s great about this” approach, my paper will instead probe software design considerations of popular DAWs and examine the implied underlying educational philosophies of their creators. What do these decisions imply about what is important in a computer-centric music education and what course has been charted for the tech-dependent music learner of the future?

 

Janice Waldron – University of Windsor

Going “Digitally Native”: Music Learning and Teaching in a Brave New World

The convergence of the Internet and mobile phones with social networks – what new media scholars deem “networked technologies” – has been the subject of much debate over the past three decades. In this paper, I consider what new media researchers have already discerned regarding networked technologies; most importantly, that more significant than any given technology itself is how we use it, the effect(s) its use has on us, and the relationships we form through it and with the technology. Because this has obvious implications for music teaching and learning, the discussion is an important one, especially so as it has remained largely unaddressed by music education scholars.

New media scholar Sherry Turkle (1995, 2011) contends that “the computer offers us new opportunities as a medium that embodies our ideas and expresses our diversity” (p. 31), but she also recognizes that people’s interactions with computers can have unintended and ambiguous effects, because intentions of use do not reside within the computer; they are instead determined by how people interact with their machines, perceive their relationship to them, and, over time, develop expectations of what those machines can or should do:

“We construct our technologies, and our technologies construct us and our times. We become the objects we look upon but they become what we make of them (p. 46). . . People think they are getting an instrumentally useful product, and there is little question that they are. But now it is in their home and they interact with it every day. And it turns out they are also getting an object that teaches them a new way of thinking and encourages them to develop new expectations about the kinds of relationships they and their children will have with machines.” (p. 49)

There are several key points to take away from Turkle. The first and most important is that technological determinism is a false modernist construct of where technology leads, because what any given technology is capable of becomes clear only during the process of its use, including how it is used and what it is used for. The act of use itself changes the technology, the person interacting with it, and the expectations that such exchanges create for a culture and society. Simply put, a machine’s maker and/or programmer cannot possibly predict what can be produced with a connected computer and a creative individual – or individuals – manipulating it.

Secondly, Turkle’s argument that 1) children develop new expectations of what their machines can and/or should do, and 2) that computer use changes the way people think is particularly prescient today; writing in 1995, Turkle was discussing the relationship of people to their newly acquired personal home computers. Her argument is even more fitting now given the omnipresence of smartphones along with the expectation that  “there’s an app for everything.”

As a profession, music education researchers and practitioners have tended to focus on technology as a knowable “thing” – i.e., hardware and/or software with its “practical classroom applications” – and not on the greater epistemological issues underlying its use. Further, we have been slow to examine how using networked technologies could change our beliefs about music teaching and learning in the larger sphere. How will we engage musically in a meaningful way with a generation of students – “digital natives” – who have grown up technologically “tethered”? How will these different “ways of knowing” change music learning and teaching now and in the not-so-distant future?

 

Tom Malone – UMass Lowell

Turning the Tables Back: Pedagogy and Praxis from Hip-Hop’s First Generation DJs

The world has changed, again. Once Western classical music was the music of the world’s elite, the internationally rich and famous. Today, at exclusive nightclubs from Ibiza to Goa, it is electronic dance music that provides the soundtrack to people’s fantasies of wealth and material success, and it has made the DJ an international superstar. House, turntablism, dubstep, Hip Hop, and EDM are just some of the musical styles in which this cultural figure, standing mysteriously behind a table of equipment and moving some faders and knobs, creates a pulsing tapestry of beats, loops, and timbres – and often gets paid quite well to do it. But it wasn’t always jets, limousines, and endless red carpets – in fact, the true pioneers of DJ culture had no such advantages. They were three young Caribbean-American teenagers growing up 40 years ago in one of America’s most impoverished and toughest urban centers, the South Bronx. This paper offers history and social critique alongside musical examples from a community music initiative that teaches the art of mixing soul and funk records to create the extended breaks and beats that formed the musical foundation of Hip Hop in the 1970s. For music educators seeking an authentic and interactive way to teach the roots of Hip Hop and today’s electronic dance music culture, teaching with turntables offers the chance to learn about and interact with the artists of past and present and to remix their music in real time.

When one begins a discussion about teaching with turntables, some are quick to assume that the focus will be on “scratching” or “turntablism,” but there is a form that is much earlier and more fundamental: break-mixing, or simply mixing. This style does not center on aggressive or virtuosic scratch solos, but rather on establishing a smooth and even flow between two turntables and on setting up a new groove by extending and combining one or more segments of pre-existing tracks. This is an analog form of ‘looping’ and live remixing that requires an extensive knowledge of soul and funk records, skill in manipulating the vinyl and the turntables without losing the beat, and a clear sense of musical pulse and form. Like many valuable pursuits and musical domains, this art can be learned through patient, focused practice and access to quality teaching and materials. The present author has derived these teaching strategies through studies of video, recordings, and interviews with the first generation of Hip Hop’s DJs, focusing on the essential art of the ‘break-mix’ as a core and teachable component that links today’s EDM and DJ culture to the very first Black and Caribbean innovators who devised the art.

Like rock and roll, jazz, ragtime, and blues before it, Hip Hop is an American musical culture, of largely African and Caribbean descent, that has grown to become a ubiquitous part of our global musical and cultural landscape. Music educators cannot afford to ignore it – yet when and how shall a serious attempt be made to teach it? This article proposes the “break-mix” with two turntables as an essential component of both Hip Hop and EDM culture that can be taught authentically and creatively in a school or community setting. This allows students to do more than learn about influential Hip Hop, soul, and funk artists from books, media, and video; they can directly engage with this music in a hands-on way, creating their own beats and breaks in the same way that Hip Hop’s pioneers did almost 40 years ago. Furthermore, since the pioneers of this style are still living, they can serve as culture-bearers to motivated music educators who would prefer to delve into and respect the Caribbean-American cultural bedrock of DJ culture rather than complain about the material excess of today’s software-based celebrity “push-button” DJs widely promoted in corporate and commercial media.

Play With Your Music 2.0 – Featuring the music of Peter Gabriel

NYU Steinhardt, P2PU, and Peter Gabriel Team Up to Launch Free Online Audio Production Course and Community 

Music enthusiasts of all ability levels who want to learn how to mix their own songs using the newest tools on the web can now learn online from the experts for free as part of “Play With Your Music” #PWYM – an online community and course offered through the cooperative efforts of the music technology program at the NYU Steinhardt School, recording artist Peter Gabriel and Peer 2 Peer University. Signup is open now at playwithyourmusic.org, and the course will commence on May 16, 2014.

Students will explore creative music production by working with two of Peter Gabriel’s famous multitrack recordings – “In Your Eyes” and “Sledgehammer.” Learners will use the latest web-based production tools, analyze songs of their choice, and mix, remix, and share their work, helping to build an audience for their music. The course will be augmented by interviews with musicians who’ve performed and recorded with Peter Gabriel, including Jerry Marotta, percussionist on Gabriel’s So album, from which both tracks were taken, and Kevin Killen, Gabriel’s sound engineer. Supplemental instructional content on audio FX techniques (Alex U. Case, UMass Lowell) and songwriting (Phil Galdston, NYU Steinhardt) will also be available.

“I am a big fan of what the Peer 2 Peer University is trying to do to open up education to the world, and am very happy to be supporting this music project with NYU Steinhardt. It should make the world of music production much more accessible for those who want to explore it,” said Gabriel.

“This course creates a unique opportunity for anybody with a computer and an Internet connection to learn the ins and outs of creative music production through working directly with multitrack recordings of music legend Peter Gabriel,” said Alex Ruthmann, Associate Professor of Music Education and Music Technology at NYU and lead designer of Play With Your Music.

The course builds upon the success of the first iteration of #PWYM, which attracted 5,000 learners in the fall of 2013. New learners will explore multitrack recordings in Soundation’s online Digital Audio Workstation, supported by the active Google+ community of over 1,700 members and the bustling SoundCloud group, which has over 30,000 listens. “We are particularly excited about designing examples for online learning that look nothing like a typical course, but take advantage of the loose and distributed nature the web has to offer,” said P2PU learning lead Vanessa Gennarelli.

The “Play With Your Music” course in partnership with Peter Gabriel is free and open to all participants. For information on how to sign up, visit playwithyourmusic.org.

About NYU Steinhardt Department of Music and Performing Arts Professions:

Steinhardt’s Department of Music and Performing Arts Professions was established in 1925. Today, 1,600 students majoring in renowned music and performing arts programs are guided by 400 faculty. The department’s degree programs—baccalaureate through doctorate—share the School’s spirit of openness and innovation that encourages the pursuit of high artistic and academic goals. Music and Performing Arts Professions serves as NYU’s “school” of music and is a major research and practice center in music technology, music business, music composition, film scoring, songwriting, music performance practices, performing arts therapies, and the performing arts-in-education (music, dance, and drama).

About P2PU:

Peer 2 Peer University (www.p2pu.org) is a community-driven education project that organizes learning outside of institutional walls and gives learners recognition for their achievements. P2PU creates a model for lifelong learning alongside traditional formal higher education. Leveraging the internet and educational materials openly available online, P2PU enables high-quality low-cost education opportunities.

About Peter Gabriel:

Peter Gabriel is an English singer-songwriter, musician, and humanitarian activist who rose to fame as the lead vocalist of the progressive rock band Genesis. His 1986 solo album, So, is his most commercially successful, selling five million copies in America. A six-time Grammy winner, Gabriel has won numerous awards throughout his career. In recognition of his many years of human rights activism, he received the Man of Peace award from the Nobel Peace Prize Laureates in 2006, and in 2008, TIME magazine named Gabriel one of the 100 most influential people in the world. Gabriel was inducted into the Rock and Roll Hall of Fame as a member of Genesis in 2010 and as a solo artist in New York on April 10, 2014.

Report – Play With Your Music MOOC 1.0 – P2PU/MusEDLab Whitepaper

If someone says “online course” you might think of a very passive experience, like watching lectures, taking multiple-choice quizzes, or reading out of an e-book. At P2PU we focus on passion, play, and projects, and we were seeing less “play” in the exercises and tutorials in the online education landscape. We wanted to change the conversation, to encourage creative expression and nurture originality. We believed we could offer a learning experience that would include mastering new skills, certainly, but do so as a byproduct of creative projects.

Play With Your Music is about learning music while playing with music. It’s a hands-on introduction to the creative processes of audio engineers and producers. The first six-week course provided an introduction to critical listening, strategies for learning from recordings, musical uses of audio effects, mixing and remixing. In November 2013, over 5,000 participants signed up to participate. They worked together in small “Learning Ensembles” of 30-40 people to make their own mixes and remixes, and shared their sounds along the way via SoundCloud.com.

From the outset, the #PWYM team–Dr. Alex Ruthmann (NYU), Ethan Hein (NYU), Vanessa Gennarelli (P2PU) and Dirk Uys (P2PU)– approached the course as a design-based research project. We were interested in ways to build a sense of “belonging” to a community in an online space. Our hypothesis was that small groups with shared musical interests or tastes would sustain engagement, support peer learning, and prompt deeper feedback. We wanted folks to feel like they had a “crew” they could depend on.

We wanted to develop not just a great learning experience, but a design-driven research project to grow knowledge around peer-driven learning online. In this paper we reflect on the aims, the experience, and the lessons learned from PWYM.


Read the full report at http://reports.p2pu.org/reports/PWYM/

Bibliography and Readings for Technological Trends in Music Education – Fall 2013

Bibliography for MPAME-GE 2035 – Technological Trends in Music Education: Designing Technologies and Experiences for Music Making, Learning and Engagement – NYU Fall 2013

Fall 2013 Course – Designing Technologies & Experiences for Music Making, Learning and Engagement

MPAME-GE 2035 – 3 Units – Technological Trends in Music Education

Designing Technologies & Experiences for Music Making, Learning and Engagement

Thursdays – 4:55pm-6:35pm – EDUC 877 – Fall 2013

In this course students will work individually and in teams in the design of music technologies and/or experiences for music making, learning and engagement. The course will begin with an introduction to emerging trends in music technology and education, especially related to web- and mobile-based musical experiences and principles of making music with new media. Innovations in and applications of music production, musical interaction, technology design, musical experience design, user-centered design & engagement, scaffolded learning, musical metadata, pedagogies of play and making, and music entrepreneurship will also be explored.

Students will identify an audience of end users (e.g., students, fans, friends, adults, community members, professional musicians, etc.) with whom they will collaborate in the design of their technology and/or experience. Students will participate in an iterative design process with their chosen audience, moving at least twice through a prototyping, implementation, and revision cycle of their music technology and/or experience design.

At the end of the semester, students will present their projects to the class and to an external panel of educators, technology developers, music industry professionals, and venture capitalists. This panel will provide feedback on each project and award small seed-funding grants to selected students and/or teams to work with Dr. Ruthmann in the Spring 2014 semester to potentially license, commercialize or distribute their projects with community and industry partners.

Link to Course Bibliography 

Link to Course Prezi

 

About the Instructor

S. Alex Ruthmann is a researcher, educator, and musician whose research and practice explore new media musicianship, creative computing, the creative processes of young musical creators, and the development of music and media technologies for use in school- and community-based youth programs. He and his collaborators are the recipients of two National Science Foundation grants exploring the interdisciplinary teaching of computational and musical thinking. Ruthmann currently serves as President of the Association for Technology in Music Instruction, Past Chair of the Creativity special research interest group of the Society for Research in Music Education, Co-Editor of the International Journal of Education & the Arts, and Associate Editor of the Journal of Music, Technology, and Education. He also serves on the editorial/advisory boards of the British Journal of Music Education and International Journal of Music Education: Practice. Ruthmann received an interdisciplinary B.Mus. degree from the University of Michigan in Music and Technology, and M.Mus. and Ph.D. degrees in Music Education from Oakland University. Active on social media, he curates posts on music learning, teaching, and technology as @alexruthmann on Twitter and on his research blog http://experiencingaudio.org/.

Any questions about the course may be directed to alex.ruthmann@nyu.edu.

 

 

Making it Easier to be Musical in Scratch

One of our summer research projects has focused on the refinement of audio, sound, and music blocks and strategies for Scratch 2.0, the visual programming environment for kids developed by the Lifelong Kindergarten Group at the MIT Media Lab. Out of the box, Scratch provides some basic sound and audio functionality via the following blocks, shown on the left-hand side:

Scratch Sound Blocks

These blocks allow the user to play audio files selected from a built-in set of sounds or from user-imported MP3 or WAV files, play MIDI drum and instrument sounds and rests, and change and set the musical parameters of volume, tempo, pitch, and duration. Most Scratch projects that involve music utilize the “play sound” blocks for triggering sound effects or playing MP3s in the background of interactive animation or game projects.

This makes a lot of sense. Users have sound effects and music files that have meaning to them, and these blocks make it easy to insert them into their projects where they want.

What’s NOT easy in Scratch for most kids is making meaningful music with a series of “play note”, “rest for”, and “play drum” blocks. These blocks provide access to music at the phoneme rather than morpheme level of sound. Or, as Jeanne Bamberger puts it, at the smallest musical representations (individual notes, rests, and rhythms) rather than the simplest musical representations (motives, phrases, sequences) from the perspective of children’s musical cognition. To borrow a metaphor from chemistry, yet another comparison would be the atomic/elemental vs. molecular levels of music.

Working at the individual note, rest, and rhythm level requires quite a lot of musical understanding and fluency. It can often be hard to “start at the very beginning.” One needs to understand and be able to dictate proportional rhythm, as well as to divine musical metadimensions by ear such as key, scale, and meter. Additionally, one needs to be fluent in chromatic divisions of the octave, and to know that in MIDI, “middle C” = the note value 60. In computer science parlance, one could describe the musical blocks included with Scratch as “low level,” requiring a lot of prior knowledge and understanding to work with.
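To make that prior-knowledge burden concrete, here is the MIDI pitch arithmetic in a few lines of Python (used here only as illustrative pseudocode – Scratch itself is block-based, and the function name is mine, not a Scratch block): every pitch is an integer, middle C (C4) is 60, and each semitone adds 1.

```python
# MIDI pitch arithmetic: middle C = 60, one semitone = +1.
# Offsets of each note name within an octave (C = 0).
OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
           "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def midi_note(name, octave):
    """Return the MIDI note number for a note name and octave (C4 = 60)."""
    return OFFSETS[name] + (octave + 1) * 12

print(midi_note("C", 4))  # middle C -> 60
print(midi_note("A", 4))  # concert A (440 Hz) -> 69
```

This is exactly the mental arithmetic a child must already have internalized before a bare “play note” block becomes musically meaningful.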

To help address this challenge within Scratch, our research group has been researching ways of making it easier for users to get musical ideas into Scratch, exploring what musical data structures might look like in Scratch, and developing custom blocks for working at a higher, morpheme level of musical abstraction. The new version of Scratch (2.0) enables power users to create their own blocks, and we’ve used that mechanism for many of our approaches. If you want to jump right in to the work, you can view our Performamatics @ NYU Scratch Studio, play with, and remix our code.

Here’s a quick overview of some of the strategies/blocks we’ve developed:

  • Clap Engine – The user claps a rhythm live into Scratch using the built-in microphone on the computer. If the claps are loud enough, Scratch samples the time the clap occurred and stores that in one list, as well as the intensity of the clap in a second list. These lists are then available to the user as a means for “playing back” the claps. The recorded rhythm and clap intensities can be mapped to built in drum sounds, melodic notes, or audio samples. The advantage of this project is that human performance timing is maintained, and we’ve provided the necessary back-end code to make it easy for users to play back what they’ve recorded in.
  • Record Melody in List – This project is a presentation of a strategy developed by a participant in one of our interdisciplinary Performamatics workshops for educators. The user can record a diatonic melody in C major using the home row on the computer keyboard. The melody performed is then added to a list in Scratch, which can then be played back. This project (as of now) records only the pitch information, not the rhythm. It makes it easier for users to get melodies into computational representation (i.e., a Scratch list) for manipulation and playback.
  • play chord from root pitch block – This custom block enables the user to input a root pitch (e.g., middle C = 60), a scale type (e.g., major, minor, dim7, etc.), and a duration to generate a root position chord above the chosen root note. Playing a chord now takes only one “play chord” block, rather than 8-9 blocks.
  • play drum beats block – This block enables the user to input a string of symbols representing a rhythmic phrase. Modelled after the drum notation in the Gibber Javascript live coding environment, the user works at the rhythmic motive or phrase level by editing symbols that the Scratch program interprets as rhythmic sounds.
  • play ‘ya’ beats block – This block is very similar in design to the ‘play drum beats’ block in that it works with short strings of text, but it instead triggers a recorded sound file. The symbols used to rhythmically trigger audio samples in this block are modelled after Georgia Tech’s EarSketch project for teaching Python through Hip-Hop beats.
  • Musical Typing with Variable Duration – This project solves a problem our group faced for a long time. If one connects a computer keyboard key to a play note block, an interesting behavior occurs: the note plays, but as the key is held down the note restarts repeatedly in rapid-fire succession. To solve this, we needed code that would “debounce” the computer key inputs but keep sustaining the sound until the key is released. We did this with a piece of Scratch code that “waits until the key is not pressed,” followed by a “stop all” command to stop the sounds. It’s a bit of a hack, but it works.
  • MIDI Scratch Alpha Keyboard – This project implements the new Scratch 2.0 Extension Mechanism to add external MIDI functionality. The project uses a new set of custom MIDI blocks to trigger sounds in either the browser’s built-in Java synthesizer, or any software or hardware synthesizer or sampler in or connected to your computer. With these blocks, you now have access to full-quality sampled sounds, stereo pan control, MIDI continuous controllers and pitch bend, and fine-grained note on/note off. Read more about this on our research page.
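To make two of these ideas concrete outside of Scratch, here is a minimal Python sketch of the logic behind the “play chord from root pitch” and “play drum beats” blocks. The function names, chord table, and rhythm symbols are hypothetical illustrations, not the actual Scratch code, and the sketch computes the data rather than producing sound.

```python
# Chord qualities as semitone offsets above the root (root position).
# Only three qualities are shown; others would be added the same way.
CHORD_INTERVALS = {
    "major": [0, 4, 7],
    "minor": [0, 3, 7],
    "dim7":  [0, 3, 6, 9],
}

def chord_from_root(root, quality):
    """Return the MIDI note numbers of a root-position chord."""
    return [root + i for i in CHORD_INTERVALS[quality]]

def parse_drum_string(pattern, hit="x", rest="."):
    """Interpret a Gibber-style symbol string as (step, is_hit) pairs,
    one step per symbol, which a player loop could then trigger."""
    return [(i, ch == hit) for i, ch in enumerate(pattern)]

# Middle C = MIDI 60, as in the post.
print(chord_from_root(60, "major"))   # [60, 64, 67]
print(parse_drum_string("x.x.x..x"))
```

Working at the level of a chord quality or a rhythm string is exactly the shift these blocks aim for: the user edits one compact representation instead of wiring up 8–9 individual note blocks.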

I hope you find these strategies & blocks useful in your own Scratch/Computing+Music work.
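As a footnote to the MIDI Scratch Alpha Keyboard above: the fine-grained note on/note off that the custom blocks expose ultimately comes down to standard three-byte MIDI channel messages. This hypothetical Python sketch builds those bytes without opening a real MIDI port:

```python
# A MIDI channel voice message is three bytes: a status byte
# (0x90 = note on, 0x80 = note off; low nibble = channel 0-15),
# then the note number and the velocity, each 0-127.

def note_on(note, velocity=100, channel=0):
    """Build a note-on message as raw bytes."""
    return bytes([0x90 | channel, note, velocity])

def note_off(note, channel=0):
    """Build the matching note-off message (velocity 0)."""
    return bytes([0x80 | channel, note, 0])

print(note_on(60).hex())   # '903c64'
print(note_off(60).hex())  # '803c00'
```

Because note on and note off are separate messages, a block set built on them can sustain a note for exactly as long as a key is held – the behavior the “Musical Typing with Variable Duration” project had to approximate.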

Hack Days as a Model for Innovation in Schools of Music

This past weekend saw the first Music Education Hack event hosted by the Spotify streaming music service and the NYC Department of Education’s iZone/InnovateNYC program. I’ve been to several music-themed Hack Days in the past, but this was the first event focusing specifically on hacking new technologies in service to music education.

This post is the first of several reflecting on the Music Ed Hack experience. Since the concept of a Hack Day may be foreign to many of my readers, I will start this post off with a description of what a Hack Day actually is, and put forward a vision of how collegiate schools of music (and even K-12 schools) could adopt this model as a way of building community among their students and reinforcing that music is a living, creative art. I’d love to hear what you think about that.

What is a Hack Day?

Hack Days and Hackathons are now common events within large technology companies like Google, at technology startups, and in major technology innovation hubs like New York City, Austin, Boston, and Silicon Valley. The purpose of these events is to spawn innovation by giving coders/programmers anywhere from 24 hours to a couple of days to work in teams to create a new product or technology, often around a specific theme or problem. These events are often sponsored by a single host company or a group of companies. The structure of these events is pretty similar: interested coders assemble at a particular time and are introduced to the theme/challenge of the hack. The coders then often listen to short presentations/demos from sponsor companies about their Application Programming Interfaces (APIs). Most of the “hacking” that happens at these events is in the web-based and online realms, rather than the hardware space. However, every Hack Day I’ve attended around music has had some people playing with hardware such as Arduino boards, Microsoft Kinect controllers, and, most recently, Leap Motion, for building new physical interfaces.

After the API presentations finish, there is often an “open call” for collaboration where attendees can get up in front of the group and float their idea in hopes of enticing other interested attendees to join their team. Once that’s finished, the newly formed teams have approximately 24 hours to create their “hack.” Many teams work through the night, are well fed, and have opportunities to meet with developers and technical experts from the sponsor companies for advice on how to build their designs.

These events are not only for pure coders and developers. Graphic and web designers, entrepreneurs, musicians, and other interested people often show up and join teams to lend their expertise in user experience, marketing, or knowledge of the application context. After about 24 hours of hacking, the deadline arrives and teams submit their “hacks,” presenting them as live demonstrations in front of an audience of programmers and other interested people. The demo sessions are often also open to the general public for those interested parties who don’t want to pull an all-nighter with the programming teams. There is palpable excitement during these demo sessions (and throughout the whole Hack Day, really). The audience gets to see brand-new, emerging technologies, and the teams finally get a release of energy in sharing their ideas with the crowd.

The sponsors of the Hack Day, along with companies that provide API support for the event, often give out prizes to the teams that create the best hack or make the best use of their APIs. These prizes range from cash sums upwards of $10,000, to iPad minis, web credits, and concert tickets. Yes, there is a corporate/competitive context surrounding these Hack Days, but as a participant in a few of them, I can also say that there is a strong intrinsic reward in creating something new that solves a challenge or puts forward a new idea. Prizes aside, most hacks never directly turn into a marketable product or service. However, they do influence future product design, and a few do make it to the startup phase.

Hack Days as a Model for Innovation in Schools of Music?

I often wonder what a parallel event might look like in the formal music school space. Would it be a 24-hour challenge to bring together composers, producers, and performers to create/improvise/produce new chamber works? What could be gained from such an approach as an alternative to traditional band/choir/orchestra/chamber music festivals and competitions in high schools and in schools of music? I think it would be very cool to structure a new-music-festival hack day in every collegiate school of music as a way of building community and reinforcing music as a living, creative art. Students across all music majors could compete for scholarships, or even sponsored prizes from publishers, instrument manufacturers, digital equipment companies, and music services – or participate just for the intrinsic fun of the event. Students would have 24 hours to form teams, create, rehearse, and refine their pieces. The demo sessions would take the form of a concert of the newly created pieces.

As happens at a technology-based Hack Day, some demos fail to come together and others blow the audience away. “Failure” is seen as a necessary, positive learning experience within the tech/startup world. In order to reap big rewards, big risks need to be taken, and these Hack Days are small, semi-controlled, safe settings for those failures to occur. Sure, not every piece created would be a masterpiece, but isn’t that ok? Isn’t there a lot to be learned through trying and putting your ideas out there? Will the “musical academy” allow for this kind of disruptive innovation within its walls? Can it afford not to?

My favorite hacks from Music Education Hack 2013

Music Education Hack 2013 saw the presentation of 44 hacks created by around 200 participants throughout the event. As explained in my first post about Hack Days, not every hack (or presentation of the hack!) is successful. However, there is never a shortage of cool, new, and innovative ideas. Some appear at the demo session fully realized, while others remain mere glimpses of what might come in the future. Nonetheless, it’s an exhilarating experience just to attend one of these events, let alone engage as a participant.

Here’s a list of my 14 favorites, in no particular order: (Note: Not every hack has a working demo).

  • Exemplify – (FIRST PRIZE – $10,000) – Online tool for teaching students around a streaming piece of music. Exemplify uses a variety of APIs to automatically provide historical-context articles about the piece of music or its composer, offers a built-in comment or quiz tool tied to specific times within a song, enables the teacher to pause the song, and more.
  • Poke-a-Text – (EchoNest API PRIZE – iPad Mini) – Teach grammar while listening to music. The user selects a favorite song and the app presents streaming phrases from the song’s lyrics with varying degrees of grammatical correctness. The user selects the version of each lyric line they think is grammatically correct and their choices are graded. Scores can be sent back to the teacher to monitor progress.
  • Rock Steady – Mobile phone app for pulse/rhythm training in the context of your favorite song. Using the built in accelerometer in your phone, control the tempo of your favorite song or try to follow along. Keeps accuracy score. A cool way of practicing pulse in a contextually meaningful way.
  • JamAlong – This is a Spotify app that creates a simple diatonic xylophone interface that automatically maps onto the key of your favorite song within Spotify. It queries the key and mode of the song using a variety of APIs, and maps out the diatonic scale that best matches the song. The user can “jam” with their favorite tunes automatically through playing a virtual diatonic xylophone mapped to the solfege of the particular mode with their computer mouse.
  • Spotifact – This app enables a teacher to create affinity groups based on musical preferences. Have multiple friends go to the demo and enter “hack” as the class code. The app links to Facebook and joins users together into groups based on listening preferences as identified in Facebook. Use this to form groups within large gatherings of people. The app can run on mobile devices in the web browser.
  • Map That Music – App for learning geography through music and vice versa. Listen to a Spotify song and guess the country of origin. Also, explore a world map to hear songs from that particular country. A concept similar to a prior Music Hack Day hack by Paul Lamere: Roadtrip Mixtape.
  • RosettaTone – (THIRD PRIZE – $1000 in Amazon AWS credits) – Teaching a foreign language through music videos. Users watch foreign language music videos with live lyric translation in original and second language.
  • Kashual – Trigonometry functions mapped to music synthesis, with interactive performance controls. See the actual functions for various musical samples, and adjust the mathematical function values to create new tones. Play those tones on a virtual keyboard. Inspired by a direct request from a NYC high school math teacher.
  • Parrot Lunaire – (Peachnote API PRIZE – $100 gift certificate to Carnegie Hall) – Search the classical musical score corpus by singing or playing in the theme.
  • AirTrainer – Leap Motion kinaesthetic tone-matching program created with Max/MSP. Move your finger up or down in the air to play and match tones by ear and hand.
  • SuperTonic – Active listening app with Noteflight. Students click a button during “interesting” parts of a song. Creates an interactive graph for the teacher to use to document listener engagement.
  • Teach Beats – App for linking buskers in NYC with students who want to take lessons from them.
  • TapeTest – (NYU SPECIAL PRIZE to an Educator/Hacker – Nick Jaworski – a MaKey MaKey kit) – a simple web-based app for teachers to assign students to record and submit playing tests for individual playing assessments.
  • Remixing Your Musical World: The MaKey MaKey Musical Construction Kit  – (SECOND PRIZE – $2000 in Amazon AWS Credits & 25 hours of mentoring from NYC Dev Shop) – A musical construction kit based on the MaKey MaKey and MIT’s Scratch visual programming language. The hack completed at Music Ed Hack was the MaKey MaKey Chord Board – a demo project for exploring chords and their inversions creatively.
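JamAlong’s core idea – laying out a diatonic scale to match a song’s key and mode – is easy to sketch. The following Python is a hypothetical illustration (only two modes shown), not the hack’s actual code, which queried the key and mode from music APIs:

```python
# Whole/half-step interval patterns for two diatonic modes,
# in semitones from one scale degree to the next.
MODE_STEPS = {
    "major": [2, 2, 1, 2, 2, 2, 1],
    "minor": [2, 1, 2, 2, 1, 2, 2],  # natural minor
}

def diatonic_scale(tonic, mode):
    """Return 8 MIDI notes, tonic through the octave, for the mode.
    These would be mapped onto the bars of a virtual xylophone."""
    notes = [tonic]
    for step in MODE_STEPS[mode]:
        notes.append(notes[-1] + step)
    return notes

# A song detected as C major (tonic = MIDI 60):
print(diatonic_scale(60, "major"))  # [60, 62, 64, 65, 67, 69, 71, 72]
```

With the scale precomputed this way, every bar the user can click is guaranteed to be “in key” with the song, which is what makes jamming along feel safe for a novice.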

My role at the Music Ed Hack event was first as a prize sponsor. My research group at NYU along with MaKey MaKey sponsored an educator/hacker prize of a MaKey MaKey kit awarded to the educator(s) who served as active collaborators on a great hack. I wanted to especially encourage and award educators who got their hands dirty in providing active input into the development and realization of a hack. Congratulations again to Nick Jaworski for his involvement in the Tape Test app! In addition, all educator participants were invited to participate in my research group’s Music Technology Educator Meetup (link coming soon) to be held monthly at NYU starting in October. Each educator who actively participated in Music Education Hack and attends the monthly meetup will also receive a MaKey MaKey for use in their classrooms.

I also attended as an observer and as an informal mentor at the event. Spotify and the NYC Department of Education assembled a great team of formal mentors, including music educators Barbara Freedman from the Greenwich, CT schools, Robert Lamont from Gramercy Arts High School in NYC, and Darla Hanley from Berklee. A full list of mentors can be found by scrolling to the bottom of this page: http://musicedhack.com/.

I am also extremely proud of my summer co-op scholar research students Graham Allen and Matt Cohen from UMass Lowell! Their hack won 2nd prize for their MaKey MaKey Chord Board, a part of the MaKey MaKey Musical Construction Kit that my research group is currently developing. What’s significant about their hack from my perspective is that they did all of their development using a MaKey MaKey kit ($50) and $12 worth of common household items, and did all of the software programming in Scratch 2.0 – a free, web-based visual programming environment developed for kids by the Lifelong Kindergarten Group at the MIT Media Lab. All of the materials they assembled for the hack are meant to be remixed and reused by students. All of the code and hardware they created can be viewed and customized by students freely, encouraging users to think and create musically, computationally, and mathematically while exploring engineering design. Graham and Matt competed against professional developers from all around the country using professional tools, and came in 2nd with their project and a great presentation.

Their project is as much STEM (Science, Technology, Engineering, & Math) as it is Music and Design. I think it’s a great interdisciplinary model for educators at all levels. The Scratch environment enables K-12 educators to bring the Hack Day process to their own students. Not only are students exploring creating, performing, responding, and connecting (the new Arts Standards framework), but they are also working as instrument builders, designers, and engineers. If you are interested in other ideas for exploring expanded and reformed visions of music education pedagogy and curriculum, check out Evan Tobias’s work exploring ways of teaching popular music through producing, songwriting, and composing. Our Experiencing Audio research group plans to release the MaKey MaKey Musical Construction Kit plans as a completely open-source, open-hardware project, while also making it available for purchase in the near future. We’re also working with various Music API providers to create custom Scratch 2.0 blocks enabling Scratch users to “hack” their own music apps using commercial APIs.

I really hope that Music Education Hack will become an annual event.

Designing Technology & Experiences for Music Making, Learning, & Engagement

This Fall I will be teaching a graduate course at NYU called Designing Technologies and Experiences for Music Making, Learning, and Engagement. This course is heavily inspired by the Hack Day process, applied over the span of a semester-long course. Students from across the many programs within the NYU Department of Music and Performing Arts Professions will work individually and in teams to develop a technology and/or experience that they will iterate at least twice over the course of the semester with a specified audience/group of stakeholders. Students will read articles about and case studies of best practices in music education, meaningful engagement, experience design, technology development, and entrepreneurship, and will meet regularly with guest presenters from industry and education. At the end of the course, students will present their projects to a panel of music educators and industry representatives for feedback. Selected students will have the opportunity to compete for scholarships to work with my research group and some of the industry sponsors during the Spring 2014 semester to potentially license and commercialize their ideas and projects.

In this course, we will be implementing a research & development process designed by Andrew R. Brown called Software Development as (Music Education) Research (SoDaR). This process was piloted and used throughout the development of the Jam2Jam networked media jamming software project led by the late Steve Dillon. It actively involves the end users of a particular piece of software in the design process at all stages. The field of music education technology is only now starting to move in this direction; in the past, educators were often marketed music technologies designed for professional musicians (e.g., professional keyboard synthesizers, Finale, Sibelius, Reason, Pro Tools, Ableton). It’s notable that the relatively new technologies Noteflight, MusicFirst, and MusicDelta have engaged educators in the design and refinement of their tools, and see music educators and students as their primary user audience.