Inspired by Oskar Fischinger, one of the pioneering artists in the graphical sound field, ShadowPlay is an interactive sound-generating experience in which users create their own unique music by moving their bodies in front of a Microsoft Kinect camera. The captured body images are shown as shadows and projected onto the screen, along with basic colored shapes representing the sound.
Learning an instrument and using it to play harmonious music is not easy, especially for people who know little about music theory. With the aim of making music creation more accessible and creative for everyone, and of sparking non-musicians' interest in music, ShadowPlay provides an interactive sound-generating experience in which users create their own unique music by moving their bodies in front of a Microsoft Kinect camera.
The project is inspired by Oskar Fischinger's ornament sound experiments. Fischinger was one of the pioneering artists to produce what was later called "graphical sound" by running strips of film bearing hand-drawn ornamental patterns through a projector. Conducted in the 1930s, the experiment was an early successful attempt to create sound from images. Would it reveal new possibilities to bring modern technology into this visual-to-sound conversion and re-create it in a contemporary context with an engaging audiovisual experience? ShadowPlay is an experiment that explores possible answers to this question.

Compared with the original experiment, ShadowPlay adds far more interactive elements. Body movement is the main means of interaction, and there are no specific rules or requirements for generating music, which lowers the threshold for interaction and makes it more accessible: as long as users can move their bodies, they can have an immersive experience. In contrast to pre-drawn film strips and predictable sound waves, musical notes are generated in real time as users make different body movements. Drawing on signal processing and music theory, all sounds are synthesized by the computer from parameters derived from the user's position and movement data. For example, the horizontal position is mapped to the pitch of the root note of the compound sound; on top of that root note, different chords are played in both a crisp timbre and a long, drawn-out soft one, which adds richness to the music. The width of the shadow determines the waveform of the root note: narrower shadows produce the sharper notes of triangle or square waves, while a wide shadow, created by a fully stretched body, produces smoother sine waves (a minimal sketch of this mapping appears below). This sound generation mechanism keeps the core of Fischinger's experiment, producing sound by interpreting shapes as sound waves, while adding flexibility and playability.

As for the visuals, the captured body images are shown as black shadows against a white background and projected onto the screen along with basic colored shapes representing the sound. A black-and-white interface with random glitches reproduces the feel of the 1930s and brings users back to Fischinger's time. Meanwhile, the colorful shapes that Fischinger's music animations were famous for appear on the shadows, showing users the position of the detection point on their body and giving clues as to how their movement is affecting the sound.
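To make that mapping concrete, the following is a minimal sketch, not the project's actual implementation: the pitch set in ROOT_FREQS, the width thresholds, and the simple triad voicing are all assumptions, and the real system reads position and width data from the Kinect in real time.

```python
import numpy as np

SAMPLE_RATE = 44100

# Hypothetical pitch set for the root note; the actual scale and range
# used by ShadowPlay are not specified in the text.
ROOT_FREQS = [220.0, 246.9, 277.2, 329.6, 370.0, 440.0]

def pick_root(x_norm: float) -> float:
    """Map a normalized horizontal position (0..1) to a root-note frequency."""
    idx = min(int(x_norm * len(ROOT_FREQS)), len(ROOT_FREQS) - 1)
    return ROOT_FREQS[idx]

def pick_waveform(width_norm: float):
    """Narrow shadows -> sharper square/triangle waves; wide shadows -> sine."""
    if width_norm < 0.33:
        return lambda p: np.sign(np.sin(p))                # square wave
    if width_norm < 0.66:
        return lambda p: 2 / np.pi * np.arcsin(np.sin(p))  # triangle wave
    return np.sin                                          # sine wave

def synthesize(x_norm: float, width_norm: float, seconds: float = 0.25):
    """Render one short buffer: root note plus a simple major triad above it."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    root = pick_root(x_norm)
    wave = pick_waveform(width_norm)
    # Root note in the width-selected timbre; chord tones as softer sines.
    buf = wave(2 * np.pi * root * t)
    for ratio in (5 / 4, 3 / 2):          # major third and perfect fifth
        buf += 0.4 * np.sin(2 * np.pi * root * ratio * t)
    return 0.3 * buf / np.abs(buf).max()  # normalize to a safe amplitude
```

In a real-time setting, a loop would call synthesize with the latest Kinect readings each frame and stream the buffers to the audio device; the soft, long-drawn-out chord layer described above would be handled by a separate sustained voice rather than these short buffers.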
This interactive music generation project targets audiences of all ages. As they experience the music-making process, users also discover the underlying relationship between their body movements and the sound generated. They can also learn about Oskar Fischinger's achievements in the graphical sound field on an information page easily accessible from the main game page. In a broad sense, the project can serve as an introduction to music theory, touching on topics such as chords and waveforms, and can spark an interest in music among non-musicians, especially children, while they have fun producing their own unique music.
Tags: #soundGeneration #motionSensing #graphicalSound