Code of Music A3: AV Instruments – Alex Wang

Task:

Create a digital, audio-visual, sample-based instrument.

Final Result:

https://editor.p5js.org/alexwang/present/Tmu87zpB

Controls:

elevator samples: qwers123

drums: mkjio

Video Demo:

Process:

I was inspired by the elevator music discussion we had in class and decided to create a sample-based instrument around sliced pieces of the elevator music.

I first downloaded the song and moved it to my DAW, where I manually matched the tempo and began slicing the song into individual parts.

Then I coded a Launchpad-style keyboard sample trigger, adding a keyReleased() function to stop the sample. This chokes samples that are not meant to play for their whole duration; drums and other percussive sounds with a short tail are not included in this check.

Then I added animations to match the elevator theme. Each slice has its own animation that is controlled by the currentTime() function, as sketched below.
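A minimal sketch of the trigger/choke and playback-driven animation idea, assuming p5.sound and one slice loaded from a placeholder path (the file name and key binding are illustrative, not the actual project code):

let slice1; // one sustained elevator-music slice

function preload() {
  slice1 = loadSound('slices/slice1.mp3'); // hypothetical file path
}

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(0);
  if (slice1.isPlaying()) {
    // drive the slice's animation from its playback position
    let progress = slice1.currentTime() / slice1.duration();
    ellipse(width / 2, height / 2, progress * width);
  }
}

function keyPressed() {
  if (key === 'q') {
    slice1.stop(); // restart cleanly if it is already playing
    slice1.play();
  }
}

function keyReleased() {
  if (key === 'q') {
    slice1.stop(); // choke the sustained slice; short drum hits would skip this check
  }
}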

Original idea sketch:

I originally only wanted four animations, one for each of the sample slices, but in the end I also added flashes of white for both the kick and the snare to make the visuals more appealing.

Source:

elevator music: https://www.youtube.com/watch?v=xy_NKN75Jhw

drum samples: from online sources like cymatics.fm; some were FL Studio stock samples

Code of Music A2: Catalogue of Attributes – Alex Wang

The music that I chose to analyze is an electronic track by a Chinese producer called PLSM, a very inspirational musician who blends many styles of music in his song titled “Cosmic Homesick”.

Catalogue of Attributes:

Tempo: around 140 bpm

Genre: influences from many genres, such as trap, future bass, and jazz

Structure: follows a standard mainstream EDM format, with an intro, build-up, drop, and outro.

Rhythm: trap-style half-time beat, unusual time signatures during the verse (15/4), polyrhythmic pads.

Sounds: mostly synthesizer-generated sounds, such as a distorted supersaw with heavy automation on volume and low-pass filtering.

A lot of keyboards such as piano and Rhodes. I believe there is a combination of heavy compression and digital reverb, because of the large amount of sustain even when there is not much velocity on the piano breaks. There are also quick keyboard riffs of the kind usually found in jazz improvisations.

A mixture of electronic and acoustic drums. I think there are multiple layers of snare: a snappy one with no tail and a longer one that resembles a clap. The hi-hats are written in a trap style, with quick bursts of 16th notes and changes in pitch, and the kick has a strong high-end transient.

Many FX sounds, such as rising white noise, distorted synth flams, and pads oscillating in a polyrhythmic pattern during the verse (1:40). There is also a repetitive vocal-like sound during the drop.

Interactive Music Experience Concept:

I have not been exposed to many interactive music experiences before, but I believe it is very important that the interaction does not stop the flow of the music in a disruptive way. A basic concept I came up with to match the music I chose is to have interactive objects appear alongside the corresponding elements of the music, and have those objects be interactable in some way.

For example, the vocal chop sound from the song could have a corresponding avatar or object, and interacting with that object could affect the sound from the song (distortion, EQ, or other effects that can be automated).

The reason I chose FX sounds like the vocal chop is that the main melody or harmonic elements of the music can easily disrupt the flow of the music when altered by an unintended effect.

The interaction portion can be done through motion tracking, a physical controller, or even traditional mouse-and-keyboard interaction.

rough sketch:

VR Production and Demo Experience – Alex Wang

The VR production project is a great way to get hands-on with production techniques and put the theories we learned in class into practice. I had a great time working with VR production and learned a lot of VR-related knowledge along the way. Since I have a certain level of proficiency with video editing in Premiere, I thought the post-production phase of this project would be fairly simple. However, I ended up spending the whole weekend to accomplish what I had in mind. I learned that when working with new forms of media such as VR, there are not as many resources at our disposal, since it is still a developing area. Not only is there a lack of support from different video applications, but VR-specific technologies, such as the Insta360 camera we used in class, can have many problems, since the algorithms behind VR are still being perfected.

The production process itself was also much more complicated than I thought it would be. I had to do many effects by hand just so that the left and right eye would have the same effect. While it is very simple to fool the eyes in a traditional video, having post-production effects align is another story. I ended up manually masking the sky on both eyes to achieve a successful special effect, and also creating a separate mask for the ground to make the colors look natural, as opposed to just putting filters over the lens. If I get the chance to do this project again, I will definitely film the day and night footage in one shooting session, without moving the camera.

When we first got access to the 2K files from previous classes, I experimented in Premiere to find out what I could and could not do:

adding greenscreen effects and avatars

Another problem I had during production was the alignment of the day and night footage, which was filmed at the same spot but at slightly different angles. I had a feeling this would not work well, based on the degrees-of-freedom talk we had in class about how extremely difficult it is to find the exact same pan/tilt angle as a previous shot. Yet through some experiments with post-production, it seems I can manipulate the footage to some degree to make it better.

One attempt I made was to adjust the footage in Premiere so that it would line up. However, changing one section of the VR video throws the rest of the alignment off, making it impossible to get a perfect alignment without the original footage being correct, or without heavy Photoshop editing like the “911” call of Avatar, which is expensive, time-consuming, and not ideal.

alignment attempt

On second thought, I decided to only Photoshop one frame of the night footage to lay over the day footage, instead of using a whole video, which is extremely hard to manipulate frame by frame by hand.

showing day and night footage

Aside from all the technical difficulties with video editing, the biggest problem I had with this project was handling huge data files. The 8K 30 fps footage from the 360 Pro is a completely different thing from the 2K files I had been experimenting with. The IMA studio has decent computers (32 GB of RAM, a 1080 GPU), yet the 8K footage was too much for the computer to even function normally. However, production was a lot smoother after downgrading from 8K to 4K, though the downgrading process took a few hours as well. I believe that once the hardware catches up, we will have a much easier time with VR production, because it is a very processing-power-intensive form of media.

video encoding constantly freezing

As for demoing, I was not able to do so during the IMA show because I had other projects to demo during the show. But I did demo the video to many of my friends during the production phase to get suggestions and feedback. It seemed like most people are interested in VR and its possibilities, and I believe that the hyperreal theme I decided to go with is a perfect fit for the VR platform. Since no other form of media has the same amount of immersiveness and impact, VR combined with spatialized audio is the best way to convince the human body to really engage with what it is watching.

creating a hyperrealistic world

glitch effect and audio

I am very grateful that I had the chance to formally learn the theories of VR and AR in an academic setting, since I learned most of my audio and video knowledge on my own. I think this class is a great way to get started in VR production, but most importantly it got me thinking with a VR mindset. Before formally learning the VR/AR terminology and theories, I had a rough idea of how the technology works, aligning the left and right eyes with their corresponding difference in position depending on an object's distance from you, but now I can use the correct terms, such as disparity and parallax. One exchange I had with Professor Naimark changed the way I think about VR. When I proposed the idea of using machine learning to replace humans with solid silhouettes, I thought the VR would look correct as long as the left and right eye positions were correctly matched. However, the response I got really surprised me: I was told that even if they look spatially correct, they will still be flat figures in the correct spot. A very obvious observation if you think about it, yet so abstract in concept. This is why actually getting to try VR production in this class is a great way for us to explore and learn how things work in the world of virtual reality. I will definitely continue to think about human perception with the concepts I learned in this class, and I am sure these theories will be beneficial to the projects I work on in the future.

MLNI Final Project – Alex Wang

Task:

Develop a user interface on the web or mobile platform utilizing any machine learning models covered during the class. Students are strongly encouraged to explore the diverse tracking/transferring methods and find their own ways to interact effectively with body movement or webcam images. Successful student outcomes can be created in various forms. Projects can be anything from a visual that is manipulated by the models to bringing such visuals into augmented/virtual reality; a drawing tool; an informative web application; an entertaining game; a virtual dance performance; or creating/playing a musical instrument on the web. More details about the final project will be discussed during class and at the concept presentations.

My Project:

ML Dance is a web-based interactive rhythm game implemented in p5.js that uses machine learning models to track the player’s body position. The player controls a virtual avatar in real time simply by moving their body in front of the camera; accurate tempo syncing and real-time motion tracking allow competitive gameplay while keeping the interaction natural and fun. Unlike other dance games on the market, the game does not require extra hardware such as depth-sensing cameras or remote controls. By using machine learning models, the game becomes much more accessible, requiring only a laptop with an ordinary camera.

Inspiration:

There are already many dance games and rhythm games that go beyond the interaction of just clicking buttons. Traditional games like Taiko: Drum Master create interactive gameplay with a simple technique: a physical drum for the player to smack.


Taiko: Drum Master (2004)

Early dance games were only possible through pressure-sensitive plates on the floor, which cannot capture whole-body movement, since arm and head movements are not tracked.

Dance Dance Revolution (1998)

More recently, dance games started to adopt body-tracking hardware such as the Microsoft Kinect or the Nintendo Wii.


Just Dance (2009)

My goal is to create an interactive dance game that is more accessible to players without Wii or Kinect hardware, while also adapting gameplay modes from non-dance rhythm games whose game design I thought was successful (osu! and Beat Saber).

osu (2007)

I would like to use a note display system similar to that of osu!; I believe the closing circle is a good timing indicator in a rhythm game where notes do not move across the screen.

Beat Saber (2018)

Beat Saber is a VR rhythm game that has been very popular among VR titles today. Because it can track hand positions with controllers, it is able to color-code the notes blue or red. Since I am also able to track hand positions, I decided this would be a great feature to implement in my game.

Application of this Project/Why I chose this Topic:

The reason I think this project makes sense is that dance games are a great way to help people get exercise. Most games today only use a mouse and keyboard, so gamers sit in front of computers all day, which is very bad for their health. While there are already many games like Just Dance and Beat Saber that let the gamer get up and move, not everyone owns a VR setup or a body-tracking game console. However, most gamers own a laptop with a camera, making machine learning the perfect way to make this kind of game accessible to the public.

Final Product Demo Video:

Development Process:

Setting up game system:

Posenet position update:

First I imported the PoseNet model from ml5, then used its output to control the position of an avatar so the player knows their position on the canvas. At first I used the raw value from the model output, which was glitchy. I improved this by taking only one pose value (as opposed to multiple pose detections) and adding a linear-interpolation filter to smooth the transition.

Instead of jumping directly to the model value, I update the current avatar position by only a percentage of the difference each frame, which avoids the glitch effect caused by noise in the model outputs.
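A minimal sketch of this smoothing idea, assuming the ml5.js PoseNet model and a single tracked keypoint (the variable names, keypoint choice, and lerp factor are illustrative assumptions, not the project's actual code):

let video, poseNet;
let targetX = 0, targetY = 0; // latest (noisy) model output
let avatarX = 0, avatarY = 0; // smoothed position that is actually drawn

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video); // assumes the ml5.js library is loaded
  poseNet.on('pose', (results) => {
    if (results.length > 0) {
      // take only the first detected pose, here the nose keypoint
      targetX = results[0].pose.nose.x;
      targetY = results[0].pose.nose.y;
    }
  });
}

function draw() {
  background(0);
  // move only a fraction of the way toward the model value each frame
  avatarX = lerp(avatarX, targetX, 0.2);
  avatarY = lerp(avatarY, targetY, 0.2);
  ellipse(avatarX, avatarY, 40);
}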

Beat Matching:

I made sure that all notes are exactly on time with the music, regardless of the computer's processing speed. This is crucial to making a good rhythm game, even though many popular rhythm games ignore it. After some research, I learned that the best way to synchronize the song with the game is to check the current time of the audio file being played, and I did some simple math to calculate the time value of every note.

formula below:

let beat = 60/bpm;
let bar = beat*4;

Since the update speed of the computer cannot be controlled, the current music time will rarely land exactly on a beat. So I store the value from the previous frame, and as soon as the current value (the music time modulo the beat length) drops below the previous one, I recognize it as the next beat of the music.

if (music.currentTime()%beat < previous)

This makes sure that I can accurately track the timing of any song as long as its BPM value is known.
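A fuller sketch of this beat-tracking logic, assuming p5.sound and a known BPM (the file name and tempo here are placeholders):

let music;
const bpm = 128;        // placeholder tempo
const beat = 60 / bpm;  // seconds per beat
const bar = beat * 4;   // seconds per bar
let previous = 0;       // phase within the beat on the previous frame

function preload() {
  music = loadSound('song.mp3'); // hypothetical file
}

function draw() {
  if (music.isPlaying()) {
    let phase = music.currentTime() % beat;
    if (phase < previous) {
      onBeat(); // the phase wrapped around, so a new beat just started
    }
    previous = phase;
  }
}

function onBeat() {
  console.log('beat at ' + music.currentTime().toFixed(3) + ' s');
}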

Note object:

I created a class for note objects so that they can be displayed on the screen. I pass in position and type as parameters, so each note knows where to spawn and which color it should be. I then sync it to the music by shrinking its size over the course of a bar.

I then created another class called points, which animates whether the player scored or missed, as well as how much they scored.
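A minimal sketch of what such a note class could look like, shrinking over one bar (the class and field names are illustrative, not the project's actual code):

class Note {
  constructor(x, y, type, spawnTime) {
    this.x = x;
    this.y = y;
    this.type = type;           // e.g. 0 = red (left hand), 1 = blue (right hand)
    this.spawnTime = spawnTime; // music time (seconds) when the note appeared
  }

  display(currentTime, bar) {
    // shrink from full size to zero over the course of one bar
    let progress = constrain((currentTime - this.spawnTime) / bar, 0, 1);
    let size = lerp(140, 0, progress);
    fill(this.type === 0 ? color(255, 80, 80) : color(80, 80, 255));
    ellipse(this.x, this.y, size);
  }
}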

Collision detection:

I created a function called check() that takes both the coordinates of the user and the coordinates of the note, and checks whether the distance between them is small enough to be considered a score.

function check(target, px, py) {
  let distance = dist(target.x, target.y, px, py);
  if (distance <= 70) {
    score += (100 + floor(combo / 10) * 10);
    combo += 1;
    scorefx.play();
    animations.push(new points(target.x, target.y, (100 + floor(combo / 10) * 10)));
  } else {
    combo = 0;
    animations.push(new points(target.x, target.y, 0));
    missfx.play();
  }
}

Note mapping:

I manually map all the notes, just like most rhythm games do, creating a JSON-style variable that stores both the time and the x/y coordinates of each note, so that the note object can be created at the right moment. I also created different variables for different colored notes, so that multiple notes can appear on the screen at the same time, and different sets of variables for different songs, so the game can hold any number of songs while using the same system.
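A sketch of what such a mapping could look like, with made-up beat numbers and coordinates; spawnNotes assumes a global notes array, the music object, and a Note class like the sketch above:

// Each entry stores when (in beats) and where a note should spawn.
let songOneRed = [
  { beat: 4,  x: 160, y: 200 },
  { beat: 8,  x: 480, y: 200 },
  { beat: 12, x: 320, y: 320 }
];
let songOneBlue = [
  { beat: 6,  x: 480, y: 140 },
  { beat: 10, x: 160, y: 140 }
];

// Called on every new beat: spawn any notes scheduled for that beat.
function spawnNotes(noteMap, type, currentBeat) {
  for (let n of noteMap) {
    if (n.beat === currentBeat) {
      notes.push(new Note(n.x, n.y, type, music.currentTime()));
    }
  }
}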

UI Design:

adding a visualizer:

I took visualizer code by Fabian Kober (see attributions) and modified it to fit my project; the visualizer responds to the audio being played. By adjusting its values and thresholds I made it more visually pleasing, and I also evened out the values across the spectrum so that the treble frequencies do not differ hugely from the bass frequencies.
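One possible way to even out the spectrum, shown purely as an illustration rather than the actual modification to Kober's code, assuming p5.sound's FFT:

let fft;

function setup() {
  createCanvas(640, 480);
  fft = new p5.FFT(0.8, 256);
}

function draw() {
  background(0);
  let spectrum = fft.analyze(); // 256 amplitude values, low frequencies first
  noStroke();
  fill(255);
  for (let i = 0; i < spectrum.length; i++) {
    // boost higher bins so treble bars are comparable in height to bass bars
    let boost = map(i, 0, spectrum.length, 1, 3);
    let level = constrain(spectrum[i] * boost, 0, 255);
    rect(i * 2, height, 2, -map(level, 0, 255, 0, 200));
  }
}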

adding a breathing color:

The game title as well as the ring of the visualizer all breathe with a constantly changing color. I achieved this by using the frameCount variable and the sin() function to adjust the RGB values.

col = color(
  150 + sin(map(frameCount, 0, 500, 0, 2 * 3.14159)) * 100,
  150 + cos(map(frameCount, 0, 500, 0, 2 * 3.14159)) * 100,
  150 + (0.75 + sin(map(frameCount, 0, 500, 0, 2 * 3.14159))) * 100,
  100
);

High score display:

All scores larger than 0 are stored in a list of scores, which is displayed with a for loop at the end of the game; the game automatically removes the lowest score if the list size exceeds 10. If the score of the game that just ended is on the list, the game highlights it to indicate that it belongs to the current player. I avoided highlighting multiple entries by only marking one matching score in the list.
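A sketch of this top-10 list logic (the function and variable names are placeholders, not the project's actual code):

let highScores = [];

function recordScore(finalScore) {
  if (finalScore > 0) {
    highScores.push(finalScore);
    highScores.sort((a, b) => b - a); // highest first
    if (highScores.length > 10) {
      highScores.pop(); // drop the lowest score once the list exceeds 10
    }
  }
}

function drawHighScores(currentScore) {
  let highlighted = false;
  for (let i = 0; i < highScores.length; i++) {
    // highlight only the first entry matching the current player's score
    if (!highlighted && highScores[i] === currentScore) {
      fill(255, 255, 0);
      highlighted = true;
    } else {
      fill(255);
    }
    text(highScores[i], 40, 60 + i * 30);
  }
}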

Pose toggled buttons:

To avoid mouse/keyboard interaction, all commands in the game are triggered by hand position. A button is triggered by holding the right hand over it for a period of time, and the progress is made visible by drawing a rect over the button.
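A sketch of this dwell-to-trigger idea (the button position, hold time, and the startGame() stub are hypothetical):

let holdFrames = 0;
const framesToTrigger = 90; // roughly 1.5 seconds at 60 fps
const btn = { x: 500, y: 60, w: 120, h: 60 };

// Called every frame with the tracked right-hand position.
function updateButton(handX, handY) {
  let over = handX > btn.x && handX < btn.x + btn.w &&
             handY > btn.y && handY < btn.y + btn.h;
  holdFrames = over ? holdFrames + 1 : 0;

  // draw the button outline and a bar that fills as the hand dwells on it
  stroke(255);
  noFill();
  rect(btn.x, btn.y, btn.w, btn.h);
  noStroke();
  fill(0, 255, 0, 150);
  rect(btn.x, btn.y, btn.w * (holdFrames / framesToTrigger), btn.h);

  if (holdFrames >= framesToTrigger) {
    holdFrames = 0;
    startGame();
  }
}

function startGame() {
  // hypothetical stand-in for whatever the button actually triggers
  console.log('button triggered');
}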

Particles:

I added a simple particle system to enhance the visuals of the game. I modified the sample code from an in-class exercise to attach red and blue particles to the two hands, leaving a trail when the user moves.

Interfaces:

Aside from the actual gameplay, there is a main menu page with options to toggle the camera on and off or to switch in-game avatars. There is also a song-selection page for choosing the music you want to play, as well as a high-score page that is only displayed when a song finishes playing.

Aesthetics:

I decided to go for an arcade style of visuals; the font and the avatars are all inspired by pixelated graphics.

The hand sprite is my remake of the Windows hand cursor icon.

The face sprite is my remake of the sprite used in Space Invaders.

Attribution:

background music:

(I created and produced all music/sound effects used in this game,  except for the hand clap sound file)

menu – Aqua

songs – Pixelation, Takeaway

score sound effect and miss sound effects are generated using a synthesizer

clap sound effect

Arcade font

Visualizer code is a modified version of Circular Audio Visualizer by Fabian Kober

Model: PoseNet (ml5)

Week 11: Top 4 VR/AR news of the week – Alex Wang

Task:

VR/AR News of the Week has closed for the season with 92 entries, all of which we’ve seen at least briefly in class. Some of the stories will be looked back on in 5 years and be considered accurate, prophetic, powerful. Others will be looked back on in 5 years and be considered off-track, clueless, and ridiculous. PLEASE SELECT YOUR TOP 4 OF EACH. Post with a detailed description of why you’ve selected them in Documentation and be prepared to present.

 Accurate, prophetic, powerful:

1. FACEBOOK Brain Interface

I believe that the FACEBOOK acquisition of CTRL Labs is a big step towards the development of brain-controlled VR. Though a truly functional brain interface still seems many years away from the technology we have right now, I do believe that this will be one of the most powerful and game-changing technologies related to virtual reality.

2. Massage Chair VR

I am a big fan of massages, so I really do see potential in this combination. A major advantage of VR is the freedom over the environment you are surrounded by, making it very easy for VR to create certain moods as opposed to more traditional mediums. That includes a very relaxed and zen feeling. Having a comfortable lake environment around you, or even some kind of space environment, could definitely elevate the whole massage experience.

3. Dream Walk

Dream Walk seems like a silly idea at first, but I think it has great potential if used properly. Having VR on while you are walking is just too dangerous; if this product blended in AR techniques, it could be a great utility tool. Imagine having maps and text messages in your eyes while walking down the street, or even arrows on the ground guiding you to your destination and warnings when cars are nearby. The potential is limitless.

4. VR Skin

Aside from VR brain interfaces, the next big thing for making the virtual more realistic is haptics. VR skins, gloves, vests, or whatever it takes to fool your senses is the next thing to achieve after getting the visuals right. Through examples we explored in class, I am aware that even the slightest changes in the senses can greatly contribute to an immersive experience and make VR feel more real. This is why I believe VR skin and other haptic products will be a big deal in the next 5 years.

Off-track, clueless, and ridiculous:

1. Also FACEBOOK Brain Interface

As much as I like the idea of brain-controlled VR technology, I also think CTRL Labs is one of the most off-track companies. As most science fiction and mainstream culture predicts, the only way to achieve brain-powered VR is by plugging a cord into your body; movies like The Matrix or eXistenZ, which we watched in class, both suggest this kind of technology. I do not deny the potential of other approaches to brain-computer interaction, but I just don’t buy CTRL Labs’ wristband idea. I do not believe that pulse is enough information to execute brain commands accurately.

2. Virtual Graffiti

I do not see virtual graffiti as something that will be popular, nor do I see it as a good application of AR technology. Though Pokemon Go was very successful at launch, I would argue that most of its success came from the Pokemon brand rather than the innovative technology. Virtual graffiti not only lacks a big brand to back it up, it also lacks a rich interactive experience like the Pokemon Go game. This is why I do not believe that virtual graffiti will be successful.

3. Phone based VR is over

I agree with many points made in the article about how computer-based VR will take over phone-based VR, but I also believe that 5 years from now phone-based VR will become the trend. This is because the phone VR we have today does not have the proper setup needed for a smooth experience: there is both a lack of controllers and a lack of processing power to match the needs of a VR program. There are already many attempts at making phone gaming more professional, with multiple gaming-gear companies shifting their focus to phones built specifically for gaming. There are many advantages to phone VR over computer VR: there are no wires attached, and it is much more convenient and smaller in size. The controller problem can also be solved with the help of machine learning. FACEBOOK is already implementing gesture detection through the front cameras on their Oculus devices, and this is perfect for phones, since most phones have cameras on the back; the new iPhone even has multiple cameras on the back.

4. VR Concerts

I am very passionate about music, but I do not think VR concerts are going to work. There are many obstacles to achieving a good concert experience; the sound of a concert is very hard to replicate. It would require expensive audio gear along with VR headsets to replicate the kind of sound you hear at a concert, as well as big subwoofers to replicate the bodily vibrations from bass frequencies. Even if all of that is properly replicated, the whole point of seeing a concert live, as opposed to listening to a studio-quality recording at home, is to see the artist in real life. A VR imitation would lack the ability to interact with the performers, defeating the whole point of seeing a concert.