Code of Music A22: Final Project – Alex Wang

Task:

Create a generative or interactive musical piece. It can be web-based, a physical device, or exist in space. In your presentation, discuss musical, design, and programmatic aspects of your project.

Video:

Inspiration and concept:

I am really inspired by the Chinese electronic music artist Howie Lee. In his music video titled "Tomorrow cannot be waited", there is a scene from around 1:30 to 2:05 where he appears to be performing live, controlling the sounds we hear with the movement of his hands.

I thought this was interesting because one of the major downfalls of electronic instruments is the lack of real-time control over timbre, which makes them less viable for live performance. However, with more advanced human-computer interaction interfaces, these kinds of live performances are becoming feasible.

I remembered one of my previous projects, titled "ML Dance": a rhythm game built on ml5.js PoseNet pose recognition. I thought that if I changed it from a game into something more music-oriented, it could be useful for musicians looking to do real-time audiovisual performance. Unlike the hardware seen in the video, it requires nothing extra: no sensors, no depth-sensing camera, etc.

Combining the concept of real-time timbre control through body movement with machine-learning-powered pose recognition, I came up with a new project that I named "Expression". The idea is that by adding Tone.js I can now control the sound of each individual musical element, such as changing its volume or low-pass filtering it. I can also create musically synced animation that adds to the visual aspect of the project.

A description of your main music and design decisions:

The music is a song that I produced; the vocals and lyrics are by a classmate taking the same music tech class as me. I chose this song because I like the simplicity of my arrangement choices: the track breaks down into just a few instruments: drums, vocals, plucks, and bass. It is also more convenient to use my own track, since I would not have access to the stem files of a song from the internet. Even if I did download another song's stems, I would need the sounds unprocessed, because the raw audio signal has to go into my program, where Tone.js applies the new effects. (If I fed in an already low-passed sound and tried to remove the low-pass within Tone.js, it wouldn't work.)

As for my design decisions, most of the work was already done when I originally planned this project as a rhythm game. I spent a lot of time filtering the machine learning model's outputs for a smooth interaction, and made many pose-interaction-specific design decisions, which can be found here

Aside from the work already implemented in that project, I added a lot more when converting it from a game into a more artistic piece. First, I added visuals for the lyrics and synced them by calculating timestamps. I also added ripple visuals taken from my rhythm project (Luminosus); the ripples are synced to the music and also respond dynamically to how much the user is low-passing the pluck sound. Finally, I added a white rectangle layer to match the rising white noise of a typical EDM build-up, and created a series of animation-styled images during the drop to keep the user interested.

An overview of how it works:

By sending PoseNet my camera capture, it returns values corresponding to its estimated pose positions. I use those values not only to update your virtual avatar, but also to control low-pass filters from Tone.js. I import my track's stems individually, grouping the sounds into four main tracks: three that can be controlled by the low-pass filters, and one where everything else plays in the background. I also run Tone.js's Transport.scheduleRepeat at the same time to sync the trigger times of animations, such as spawning ripples at a 16th-note pace or having the lyric subtitles appear and disappear at the right time.
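
A minimal sketch of how these pieces could fit together (the stem file name, tracked body part, and mapping ranges are hypothetical; it assumes p5.js, ml5.js PoseNet, and Tone.js are loaded):

let poses = [];
let pluckFilter, pluckPlayer;

function setup() {
  createCanvas(640, 480);
  const video = createCapture(VIDEO);
  video.hide();

  // PoseNet streams pose estimates for each camera frame.
  const poseNet = ml5.poseNet(video);
  poseNet.on('pose', (results) => { poses = results; });

  // One stem routed through its own low-pass filter.
  pluckFilter = new Tone.Filter(20000, 'lowpass').toDestination();
  pluckPlayer = new Tone.Player('stems/plucks.mp3').connect(pluckFilter);

  // Fire time-synced animation every 16th note.
  Tone.Transport.scheduleRepeat((time) => spawnRipple(), '16n');
}

function draw() {
  background(0);
  if (poses.length > 0) {
    // Map the right wrist's height to the cutoff frequency:
    // hand raised = bright and open, hand lowered = heavily low-passed.
    const wrist = poses[0].pose.rightWrist;
    const cutoff = map(wrist.y, height, 0, 100, 20000);
    pluckFilter.frequency.rampTo(cutoff, 0.1); // smooth out pose jitter
  }
}

function spawnRipple() { /* ripple drawing omitted */ }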

Challenges:

I believe the most challenging parts were already sorted out when I initially created this as a game: problems such as PoseNet being inaccurate, syncing gameplay with music, and designing an interface that is not controlled with the classic mouse and keyboard. However, when translating things from my old project into what I have now, I still encountered many annoying errors while adding Tone.js and changing some of my original code to match this new way of working with audio files.

Future Work:

I originally envisioned this project having much more complex control over timbre, not just a low-pass filter. But as I worked toward my goal, it made more sense to have simple low-pass controls over multiple musical elements. Perhaps with the right music, where the main element of the song relies heavily on changes in timbre, I could switch from controlling multiple elements to using your whole body to control one single sound. With a more sophisticated way of calculating these changes, not just the x/y coordinates of your hand but the distances between all your body parts and their relationships could each affect a slight change in timbre. I also plan to emphasize more user-controlled visuals, similar to the ripple generated by the pluck sound, as opposed to pre-made animations synced to the music. I want this project to be about the expression of whoever is using it, not my expression as the producer and programmer. Incorporating more accurate means of tracking, with sensors or wearable technologies, could also greatly benefit this project and even make it a usable tool for musicians looking to add synthetic sounds.

Code of Music A21: Harmony Project – Alex Wang

Task:

Harmony Project: Design and implement an exploration of harmony using code. This could be an interactive piece or a fixed composition. Record a 20-second snippet of the piece in action*. Collaborations between two students are encouraged.

P5 Editor Link:

https://editor.p5js.org/alexwang/present/3iZ2RjcdR

Final Product:

Inspiration and concept:

Interest in music is becoming more and more common, and people can enjoy playing music without a solid understanding of music theory. A quick Google search will tell you the chords to your favorite song, and which individual notes and voicings are suitable for playing each chord. However, it does not go the other way: when you are experimenting with harmony yourself, there is no one to tell you which chord you are playing. I personally find this frustrating, not being able to name a chord I landed on. This is why I wanted to create an interface where the program tells you which chord you are playing, regardless of voicing.

A description of your main music and design decisions:

I did not include any music in this project, mainly because I wanted it to be something useful rather than artistic; I also kept the overall visual design simple for the same reason. Playing a note changes the color of the corresponding block on the keyboard and displays the pitch of the note above it. Playing three or more notes displays the name of the chord you are playing, if it is a non-ambiguous chord.

An overview of how it works:

The program works by storing all triads and seventh chords in an array. I use another array of boolean values to track which notes are currently being played. After sorting the played notes in order (which solves the voicing issue) and eliminating duplicates from octaves of the same note, I compare that set of notes against the library of known triads and seventh chords to see if anything matches.
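
A minimal sketch of that matching logic (the chord list below is only a stub of the full library; pitch classes run 0-11 with C = 0):

const CHORDS = [
  { name: 'C major', notes: [0, 4, 7] },
  { name: 'A minor', notes: [0, 4, 9] },
  { name: 'Dm7',     notes: [0, 2, 5, 9] },
  // ...the rest of the triads and seventh chords, stored sorted
];

// keysDown[p] is true while pitch p (e.g. a MIDI note number) is held.
function identifyChord(keysDown) {
  // Collapse octaves to pitch classes, drop duplicates, and sort:
  // this is what makes the match independent of voicing.
  const played = [...new Set(
    keysDown.flatMap((down, pitch) => (down ? [pitch % 12] : []))
  )].sort((a, b) => a - b);

  for (const chord of CHORDS) {
    if (chord.notes.length === played.length &&
        chord.notes.every((n, i) => n === played[i])) {
      return chord.name;
    }
  }
  return null; // ambiguous or unknown combination
}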

Challenges:

This project turned out way more complex than I thought it would be. Many chords do not have one specific name, and many chords share the exact same notes. For example, Asus4 (A, D, E) and Dsus2 (D, E, A) contain identical notes. Some chords don't have names at all: a chord like D-A-C can be interpreted differently depending on the context of the music. It could be a Dm7 by adding an F, a D7 by adding an F# (enharmonically Gb), or even a C6/9 by adding E and G. Chords like this can only be identified with the full context of the music: the chords played before and after it in the progression, the key of the song, and the notes played by other instruments (the bass usually indicates the root).

Future Work:

With these challenges in mind, I would like to explore harmony more in future work. Perhaps I could use algorithms to infer the most probable chord that a set of notes suggests, or even use machine learning to learn the perception of harmony from the context of the music. These could all be interesting directions for future versions of chord-analyzing software.

Code of Music A19: Harmony example – Alex Wang

Task:

Post a link to, or a blog post about, a song whose harmony appeals to you

Song:

Hip Dipper by Arch Echo is a very harmonically interesting song; it blends multiple genres and takes a very creative approach to composition. A short break in the song, starting at around 35 seconds, introduces a brief piano solo where the harmony sounds very jazzy. I do not know the exact chords, but it sounds like more than just triads.

Code of Music A18: Tone.js Sound Design – Alex Wang

Task:

Using code, design two different timbres, applying synthesis / sampling techniques of your choice. 

Detuned Porta Lead:

Link: https://editor.p5js.org/alexwang/present/KvXuYGjqd (Use number keys)

The first sound I decided to make is a detuned lead with my own attempt at creating a portamento, or glide, effect.

First I created six oscillators with different fundamental waveforms, then shifted each oscillator away from the core frequency, creating a detuned sound with oscillators ±5 Hz from the center. I also added some chorus effect to make the sound bigger.

Then I created my own glide effect by using the lerp() function to interpolate between the previous note's pitch and the current pitch, along with a new way of triggering attack and release so that the pitch constantly updates (in the draw loop) but still triggers properly.
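
A minimal sketch of both ideas together (the waveform choices, note frequencies, and smoothing factor are placeholders; it assumes p5.js and Tone.js):

const DETUNE_HZ = 5;
const types = ['sawtooth', 'square', 'triangle', 'sine', 'sawtooth', 'square'];
let oscillators = [];
let currentFreq = 440;
let targetFreq = 440;

function setup() {
  createCanvas(200, 200);
  const chorus = new Tone.Chorus(4, 2.5, 0.5).toDestination().start();
  oscillators = types.map((type) => {
    const osc = new Tone.Oscillator(440, type).connect(chorus);
    osc.volume.value = -18; // keep six stacked voices from clipping
    return osc;
  });
}

function draw() {
  // The glide: each frame, move a fraction of the way toward the
  // target pitch instead of jumping there instantly.
  currentFreq = lerp(currentFreq, targetFreq, 0.1);
  oscillators.forEach((osc, i) => {
    // Spread voices +-5 Hz around the core frequency.
    const offset = (i % 2 === 0 ? 1 : -1) * DETUNE_HZ;
    osc.frequency.value = currentFreq + offset;
  });
}

function keyPressed() {
  // Number keys choose the next pitch; draw() glides toward it.
  const notes = { '1': 261.63, '2': 293.66, '3': 329.63 };
  if (key in notes) {
    targetFreq = notes[key];
    oscillators.forEach((osc) => osc.start());
  }
}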

Bell Pluck:

For my second sound I decided to try manipulating a sample: the iconic "ding" sound commonly used in video editing. I added several effects available in Tone.js, such as auto-pan, ping-pong delay, and reverb.

Link: https://editor.p5js.org/alexwang/present/UoFsOZZ5g (ASDFG)
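
A rough sketch of that effect chain (the sample file name, key bindings, and settings are guesses, not the project's actual values):

// Sample -> reverb -> ping-pong delay -> auto-pan -> speakers.
const autoPan = new Tone.AutoPanner('4n').toDestination().start();
const pingPong = new Tone.PingPongDelay('8n', 0.4).connect(autoPan);
const reverb = new Tone.Reverb(3).connect(pingPong);
const bell = new Tone.Player('ding.mp3').connect(reverb);

// Keys A-G retrigger the sample at different playback rates,
// which repitches the "ding".
const rates = { a: 0.8, s: 0.9, d: 1.0, f: 1.1, g: 1.2 };
document.addEventListener('keydown', async (e) => {
  if (e.key in rates) {
    await Tone.start(); // audio must begin from a user gesture
    bell.playbackRate = rates[e.key];
    bell.start();
  }
});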

Code of Music A15: Interesting Timbre + Synth Sound Design – Alex Wang

Task:

1. Find a piece of music you find interesting in terms of timbre.

2. Using the synth at the end of the tutorial, design three different sounds. You can share them by creating screen capture videos.

Song:

The song I chose is by a Chinese electronic music artist named TSAR. I chose it because the main focus of the piece is its timbre: the melody and rhythm are simple, yet the track stays really engaging through the constant changes in timbre (0:34-1:03).

Sound Design:

For this assignment I created three different sounds that I thought sounded good. I usually work with Serum, which gives a lot more options and freedom, but I found this web synth very fun to play with as well.

For convenience I grouped all three recordings into one video but I will explain them separately.

Pluck:

I made a plucky sound using basic saw and square waves, adjusting the ADSR so that the sound dies away like a percussive instrument. I then added an LFO so that the note repeats like a gated pad, and manually adjusted the low-pass filter to make it sound like part of a generic EDM song.
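
A rough Tone.js approximation of this patch (the actual sound was built in the tutorial's GUI synth; here a repeating Transport event stands in for the LFO gate, and every value is a guess):

const filter = new Tone.Filter(800, 'lowpass').toDestination();
const pluck = new Tone.Synth({
  oscillator: { type: 'sawtooth' },
  // No sustain and a quick decay make the note die like a struck drum.
  envelope: { attack: 0.005, decay: 0.3, sustain: 0, release: 0.1 }
}).connect(filter);

// Retrigger on a steady grid to imitate the gated-pad repeat.
Tone.Transport.scheduleRepeat((time) => {
  pluck.triggerAttackRelease('C4', '16n', time);
}, '8n');
Tone.Transport.start();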

Reese Bass:

The second sound I made is my take on the Reese bass, which is also a very common sound in electronic music. I messed around with the detune to make the saw sound wider and fuller, then gave the low-pass filter some resonance to add a bit of distortion to the sound.
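
A rough Tone.js take on the same idea (all values are guesses):

// Stacked detuned saws give the wide, beating low end; a resonant
// low-pass (high Q) adds the slightly distorted edge.
const reeseFilter = new Tone.Filter({ frequency: 300, type: 'lowpass', Q: 8 })
  .toDestination();
const reese = new Tone.FatOscillator('A1', 'sawtooth', 25); // spread in cents
reese.count = 2; // two detuned copies
reese.connect(reeseFilter).start();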

FX:

The last sound I made is a modulated sound where the ADSR envelope affects the LFO rate, a technique usually found in white noise risers or dubstep bass design. The resulting modulation moves from very fast to a constant rate, corresponding to the attack-to-sustain section of the envelope.
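
A rough Tone.js sketch of the idea, using a tremolo whose rate is ramped down over the "attack" to stand in for the envelope-controlled LFO (values are guesses):

const tremolo = new Tone.Tremolo(20, 1).toDestination().start();
const noise = new Tone.Noise('white').connect(tremolo).start();

// Sweep the modulation rate from very fast down to a steady rate,
// mirroring the attack-to-sustain motion described above.
tremolo.frequency.rampTo(4, 1.5);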

Video clip of sound design: