Final Reflection by Ryan

It has been quite a short semester. I have really enjoyed the VR class, and I wish I could take this course every semester and make my own VR video every year. Producing a VR video is an amazing experience; it feels completely different from making a normal video, and it gives me a greater sense of achievement. Every time I load my video onto the Oculus Go to check how it works, I am amazed at what I have made and how it has turned into something real enough that I feel I am inside it. I am very thankful that I chose this course, for our amazing professor Naimark and learning assistant Dave, and for all my classmates, especially Amy and John, my teammates on the final project.

For the production part, shooting came first. Taking the footage did not cause many problems; people passing by hardly paid attention to the camera during the shoot. The only real problem we met, from my perspective, was the antenna. For the first take I bent the antenna so low that the camera captured it in the footage, and we could not remove it in Premiere, which was a fairly big problem. One point worth mentioning is the setting in the phone app for the Insta360 Pro 2: we needed to select 360 3D, or the footage could not be stitched correctly. There were also problems exporting the footage from the SD cards; although we did not run into corrupted footage ourselves, some of the other teams did, which was upsetting. We also had to check the settings in the stitcher so that the stitched video came out in two parts, with the left-eye view placed on top and the right-eye view on the bottom.

For the Premiere part, John and I worked for days on the final effects. I found out that if we want to apply effects that are not in Premiere's immersive video folder, we need to put those effects on a separate layer over the video. If we stack other effects directly with the plane-to-sphere effect, the video gets shrunk and distorted, which troubled us for several hours. I then tried applying posterize and a few other effects, and these worked well, except that a bar appeared in the middle of the video saying the effect needs GPU acceleration. That problem is still unsolved; I have already enabled GPU-accelerated rendering in the settings, so I suspect it is either a GPU issue or that we applied too many of these effects to the video. Since our theme is data, we wanted an effect similar to the movie The Matrix, and the VR glitch effect fit our idea perfectly. I then played with keyframes to animate masks: we drew masks over all the buildings so the effect would hit certain buildings in sequence. Because the video only lasts one minute, each mask had to cover many buildings at once, or there would not have been enough time for the buildings to turn glitchy one by one. The hardest mask to draw was the sky, since we had to trace the outline of every building, which took a lot of time and was very hard to do in detail. Thanks to the camera staying in one spot, we only needed a single sky mask, because the sky never moves. After finishing the glitch effect, we adjusted the HSB of the video and found a color combination of purple and green; with all the buildings turned green and the glitch on top, everything looked very close to a Matrix scene or an apocalyptic scene. Playing with keyframes and all kinds of effects always brings me surprising results, and it did in this VR video too: after applying everything and loading the video into the headset, I was amazed and proud of what we had done. We still needed sound to make things even better, but the spatial audio workstation did not work very well, so we failed to make spatial audio for the video and only did some sound correction and manipulation. That part was mainly done by John, and I appreciate him a lot.

For the demo part, since I had another project to show, I only spent a small amount of time demoing. Still, I tried my best to show the video to as many people as possible and received a lot of good feedback. Everyone I showed it to was amazed and enjoyed the video. I felt very proud every time they praised it; receiving positive feedback face-to-face from viewers is exactly the sense of achievement a video's producer hopes for. Even though some viewers had trouble watching the video because of the controller and the Oculus Go interface, everyone enjoyed it very much.

In conclusion, I want to say that everything ended so fast. I really wanted to enjoy it more; it is a pity that I could only make one video, and I wanted to do more with it. I could have made something that satisfied me more, but still, what we made was quite a success. Next time I make a VR video, I will definitely do better, since I now have the experience and will not make the same mistakes. I will also try new things beyond just adding effects. I really wanted to add interaction to the video, but we did not have enough time for that, so we left it out. I will definitely explore using Unity for the interactive part of VR videos, which will bring more fun and possibilities; I want something that is not just an experience but something to interact with. I love this course, for everything we achieved and for the amazing faculty and classmates.

Final Blog Post for Final Project by Ryan Yuan

Project Name: Interferit

For this project, I worked alone, and the final piece is a new interface for a musical instrument.

Based on the research I had done before production started, the project was going to be related to music. The keyboard MIDI player I had found and the website of the International Conference on New Interfaces for Musical Expression, a.k.a. NIME, gave me a lot of inspiration for making a musical instrument. As Professor Eric mentioned in class, conductive tape could be an interesting way of interaction, so I finally decided to make a musical instrument whose interaction is built entirely on conductive tape. The concept is to build a circuit out of the tape: the sensor part connects to an input pin on the Arduino board and the trigger part connects to ground, so touching a tape key with the trigger closes the circuit and registers a press.

After settling on the way of interaction, I started thinking about how the interface should look. I am very interested in Japanese history, and I have recently been playing a Japanese game set in the Warring States period, so I was inspired to make the interface relate to that history. Each family in the Warring States period had its own family crest, and these crests all look different and carry their own meanings. The Oda and Tokugawa families are the two most famous families of that period, since they are the two that unified the country, and I like both of them very much, so I wanted to adapt their family crests into my project. That is why my final piece is a combination of their two crests.

The idea of the whole project is not only a musical instrument; it is also about connection. For the physical interaction part, the two family crests stand for the actual connection between the two families in history. For the virtual interaction part, I wanted to add a visualization to the instrument: while a user is playing, effects appear on the screen, and those effects are passed on in some way to the next user, creating a connection between people. So I thought of a water-ripple effect, since in real life a ripple, once triggered, lasts for a while and spreads outward. In Processing, the ripples are realized by pixel manipulation: when a ripple is triggered at one pixel, its value is passed on to the surrounding pixels, which makes it look like it is spreading. I also thought about how to control where the ripples are triggered on the screen, and settled on color tracking with the camera capture. The concept is that if an object has a certain RGB value and I only track the pixels close to that value, the result looks as if I am doing object tracking. So I got a traditional Japanese ghost mask, which is red, not only to fit the Japanese style and make the tracking work, but also to fit the concept that we are ghosts in the camera's vision, bringing interference to the pixels.
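To make the ripple technique concrete before the full code at the end of this post, here is a minimal, self-contained Processing sketch of the same idea; the sketch size, the damping value, and the mouse trigger are illustrative choices rather than the project's exact values.

// Minimal water-ripple sketch: two brightness buffers; each frame a pixel
// becomes the average of its four neighbours from the previous frame minus
// its own previous value, then is damped slightly.
float[][] current;
float[][] previous;
float dampening = 0.99;

void setup() {
  size(400, 400);
  current = new float[width][height];
  previous = new float[width][height];
}

void mousePressed() {
  // seed a ripple at the mouse (the project seeds it at the tracked mask position instead)
  previous[constrain(mouseX, 1, width - 2)][constrain(mouseY, 1, height - 2)] = 1000;
}

void draw() {
  loadPixels();
  for (int i = 1; i < width - 1; i++) {
    for (int j = 1; j < height - 1; j++) {
      current[i][j] = (previous[i-1][j] + previous[i+1][j]
                     + previous[i][j-1] + previous[i][j+1]) / 2 - current[i][j];
      current[i][j] *= dampening;
      pixels[i + j * width] = color(current[i][j]);
    }
  }
  updatePixels();
  // swap buffers so the wave keeps propagating next frame
  float[][] temp = previous;
  previous = current;
  current = temp;
}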

Now, about why I named the project Interferit: it is a word that combines interfere and inherit. Interference means we interfere with the pixels, and that interference, namely the water ripples, is passed on to the next user on the screen, which is the inherit part.

For the production part, first the physical component: the instrument itself. I found pictures of the two family crests online, but there were so many gaps in them that laser cutting them directly would have produced a lot of scattered pieces. So I had to connect those pieces together. I struggled with this part: I wanted to join them in Illustrator, but I barely know how to draw in Illustrator, so I wasted hours on the problem with nothing to show for it. I finally finished the work in Photoshop and then imported the file into Illustrator for laser cutting. I wanted to bend the crests so the physical part would look three-dimensional rather than just two flat pictures to tap on, which fits the idea of an interface designed by me. Since the icons had to be bent in different directions, wood could not be used for the cutting, as it only bends one way, so I used acrylic board for the icons. I used the heat gun in the fab lab to do the bending: I heated the part that needed to bend, and once it was hot enough it bent because of the properties of the plastic, and then I could adjust the angle until it looked the way I wanted. Then I used AB glue to stick the two icons together and finish the physical part. Next I needed to stick the conductive tape onto it to make the keys for the interaction. I first laid thirty strips of tape on the instrument to make thirty keys, each connected by a wire to the breadboard. I also borrowed an Arduino Mega board to make sure I had enough input pins. The result, however, was that with so many wires it was hard to attach them to the tape; wires fell off frequently, and some were not well connected. Also, because of the conductivity and resistance of the conductive tape, some keys were not sensitive. In the end, only nineteen keys survived. While connecting the wires to the board it was very hard to keep them tidy, since there were so many, and it also took me a long time to figure out which key corresponded to which pin when writing the code. For the trigger, I got two rubber gloves, which are easy to put on even though they are hard to take off, and the wires attached to them do not fall off easily because they are stuck firmly to the gloves. I only use the glove wires to touch the tape keys, rather than tape touching tape, because of the resistance problem.

For the coding part, the basic idea is that each input pin is mapped to a one-shot sound file. The notes run from C4 to C6, eleven notes in total; there are also three keys for the drum set and bass, and two keys for switching between modes, a Japanese style and a futuristic electronic one. The water-ripple effect is realized by pixel manipulation combined with camera-based color tracking. Each key also uses a simple software latch so that a sustained touch triggers its sound only once, as shown in the sketch below. The full code is the last part of this documentation.
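A minimal, self-contained demo of that latch pattern follows; here the mouse button stands in for a conductive-tape key reading 0, and the println stands in for playing the key's sample and seeding a ripple, which is what the full code does once per key.

// Latch pattern used for every key: the input is polled every frame, and the
// boolean makes a sustained touch trigger the action only once per press.
boolean down = true;

void setup() {
  size(200, 200);
}

void draw() {
  if (mousePressed) {        // in the project: if (sensorValues[pin] == 0)
    if (down) {
      down = false;          // latch until the key is released
      println("play one-shot note and seed one ripple");
    }
  } else {
    down = true;             // released: re-arm for the next touch
  }
}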

As for reflection, the design of the project fits the idea of a new interface, but the interaction is not sensitive enough because of the difficulty of building a circuit out of conductive tape. It is therefore hard to really play the instrument, and people also find it hard to understand the function of the mask and the meaning of the ripples on the screen, so the concept does not come across as clearly as I had imagined. This project was a trial at making a new interface for a musical instrument, and next time I need to reconsider a better way of interaction. As for the meaning of the project, I want to show people a new interface for a musical instrument and let them play with it, and some users may also pick up on the concept of connecting with others through the Processing visuals.

     

CODE for Processing:

import processing.serial.*;
import processing.sound.*;
import processing.video.*;

ThreadsSystem ts;//threads
SoundFile c1;
SoundFile c2;
SoundFile c3;
SoundFile e1;
SoundFile e2;
SoundFile f1;
SoundFile f2;
SoundFile b1;
SoundFile b2;
SoundFile a1;
SoundFile a2;
SoundFile taiko;
SoundFile rim;
SoundFile tamb;
SoundFile gong;
SoundFile hintkoto;
SoundFile hintpeak;
Capture video;
PFont t;

int cols = 200;//water ripples
int rows = 200;
float[][] current;
float[][] previous;
boolean downc1 = true;
boolean downc2 = true;
boolean downc3 = true;
boolean downe1 = true;
boolean downe2 = true;
boolean downf1 = true;
boolean downf2 = true;
boolean downb1 = true;
boolean downb2 = true;
boolean downa1 = true;
boolean downa2 = true;
boolean downtaiko = true;
boolean downtamb = true;
boolean downrim = true;
boolean title = true;
boolean downtitle = true;
boolean koto = true;
boolean peak = false;
boolean downleft = true;
boolean downright = true;

float dampening = 0.999;

color trackColor; //tracking head
float threshold = 25;
float havgX;
float havgY;

String myString = null;//serial communication
Serial myPort;
int NUM_OF_VALUES = 34; /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues; /** this array stores values from Arduino **/

void setup() {
fullScreen();
//size(800, 600);

ts = new ThreadsSystem();//threads

cols = width;//water ripples
rows = height;
current = new float[cols][rows];
previous = new float[cols][rows];

setupSerial();//serialcommunication

String[] cameras = Capture.list();
printArray(cameras);
video = new Capture(this, cameras[11]);
video.start();
trackColor = color(255, 0, 0);

c1 = new SoundFile(this, "kotoc.wav");//loading sound
e1 = new SoundFile(this, "kotoE.wav");
f1 = new SoundFile(this, "kotoF.wav");
b1 = new SoundFile(this, "kotoB.wav");
a1 = new SoundFile(this, "kotoA.wav");
c2 = new SoundFile(this, "kotoC2.wav");
e2 = new SoundFile(this, "kotoE2.wav");
f2 = new SoundFile(this, "kotoF2.wav");
b2 = new SoundFile(this, "kotoB2.wav");
a2 = new SoundFile(this, "kotoA2.wav");
c3 = new SoundFile(this, "kotoC3.wav");
rim = new SoundFile(this, "rim.wav");
tamb = new SoundFile(this, "tamb.wav");
taiko = new SoundFile(this, "taiko.wav");
gong = new SoundFile(this, "gong.wav");
hintpeak = new SoundFile(this, "hintpeak.wav");
hintkoto = new SoundFile(this, "hintkoto.wav");

}

void captureEvent(Capture video) {
video.read();
}

//void mousePressed() {
// if(down){
// down = false;
// int fX = floor(havgX);
// int fY = floor(havgY);
// for ( int i = 0; i < 5; i++){
// current[fX+i][fY+i] = random(500,1000);
// }
// }

// sound.play();
//}

//void mouseReleased() {
// if(!down) {
// down = true;
// }
//}

void draw() {
background(0);

/////setting up serial communication/////
updateSerial();
//printArray(sensorValues);
//println(sensorValues[0]);
int fX = floor(havgX);
int fY = floor(havgY);

if(sensorValues[2] == 0){
if(downleft){
downleft = false;
if(koto){
koto = false;
peak = true;
//if(koto){
c1 = new SoundFile(this, "pc1.wav");//loading sound
e1 = new SoundFile(this, "pe1.wav");
f1 = new SoundFile(this, "pf1.wav");
b1 = new SoundFile(this, "pb1.wav");
a1 = new SoundFile(this, "pa1.wav");
c2 = new SoundFile(this, "pc2.wav");
e2 = new SoundFile(this, "pe2.wav");
f2 = new SoundFile(this, "pf2.wav");
b2 = new SoundFile(this, "pb2.wav");
a2 = new SoundFile(this, "pa2.wav");
c3 = new SoundFile(this, "pc3.wav");
rim = new SoundFile(this, "Snare.wav");
tamb = new SoundFile(this, "HH Big.wav");
taiko = new SoundFile(this, "Kick drum 80s mastered.wav");

}
hintpeak.play();
}
}
if(sensorValues[2] != 0){
if(!downleft) {
downleft = true;
}
}

if(sensorValues[4] == 0){
if(downright){
downright = false;
if(peak){
koto = true;
peak = false;
c1 = new SoundFile(this, "kotoc.wav");//loading sound
e1 = new SoundFile(this, "kotoE.wav");
f1 = new SoundFile(this, "kotoF.wav");
b1 = new SoundFile(this, "kotoB.wav");
a1 = new SoundFile(this, "kotoA.wav");
c2 = new SoundFile(this, "kotoC2.wav");
e2 = new SoundFile(this, "kotoE2.wav");
f2 = new SoundFile(this, "kotoF2.wav");
b2 = new SoundFile(this, "kotoB2.wav");
a2 = new SoundFile(this, "kotoA2.wav");
c3 = new SoundFile(this, "kotoC3.wav");
rim = new SoundFile(this, "rim.wav");
tamb = new SoundFile(this, "tamb.wav");
taiko = new SoundFile(this, "taiko.wav");
gong = new SoundFile(this, "gong.wav");

}
hintkoto.play();
}
}
if(sensorValues[4] != 0){
if(!downright) {
downright = true;
}
}

if(sensorValues[0] == 0){
//println("trigger");
println(title);
if(downtitle){
downtitle = false;
if(title){
title = false;
}
else if(!title){
title = true;
}
}
if(!downtitle){
downtitle = true;
}
}

if(sensorValues[19] == 0){//c1
//println(down);
//println(sensorValues[19]);
if(downc1){
downc1 = false;
println("yes");
println(downc1);
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
c1.play();
}
}
if(sensorValues[19] != 0){
if(!downc1) {
downc1 = true;
}
}

if(sensorValues[26] == 0){//e1
if(downe1){
downe1 = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
e1.play();
}
}
if(sensorValues[26] != 0){
if(!downe1) {
downe1 = true;
}
}

if(sensorValues[31] == 0){//f1
if(downf1){
downf1 = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
f1.play();
}
}
if(sensorValues[31] != 0){
if(!downf1) {
downf1 = true;
}
}

if(sensorValues[20] == 0){//b1
if(downb1){
downb1 = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
b1.play();
}
}
if(sensorValues[20] != 0){
if(!downb1) {
downb1 = true;
}
}

if(sensorValues[9] == 0){//a1
if(downa1){
downa1 = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
a1.play();
}
}
if(sensorValues[9] != 0){
if(!downa1) {
downa1 = true;
}
}

if(sensorValues[15] == 0){//c3
if(downc3){
downc3 = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
c3.play();
}
}
if(sensorValues[15] != 0){
if(!downc3) {
downc3 = true;
}
}

if(sensorValues[23] == 0){//c2
if(downc2){
downc2 = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
c2.play();
}
}
if(sensorValues[23] != 0){
if(!downc2) {
downc2 = true;
}
}

if(sensorValues[16] == 0){//e2
if(downe2){
downe2 = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
e2.play();
}
}
if(sensorValues[16] != 0){
if(!downe2) {
downe2 = true;
}
}

if(sensorValues[11] == 0){//f2
if(downf2){
downf2 = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
f2.play();
}
}
if(sensorValues[11] != 0){
if(!downf2) {
downf2 = true;
}
}

if(sensorValues[12] == 0){//b2
if(downb2){
downb2 = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
b2.play();
}
}
if(sensorValues[12] != 0){
if(!downb2) {
downb2 = true;
}
}

if(sensorValues[17] == 0){//a2
if(downa2){
downa2 = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
a2.play();
}
}
if(sensorValues[17] != 0){
if(!downa2) {
downa2 = true;
}
}

if(sensorValues[7] == 0){//rim
if(downrim){
downrim = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
rim.play();
}
}
if(sensorValues[7] != 0){
if(!downrim) {
downrim = true;
}
}

if(sensorValues[8] == 0){//taiko
if(downtaiko){
downtaiko = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
taiko.play();
}
}
if(sensorValues[8] != 0){
if(!downtaiko) {
downtaiko = true;
}
}

if(sensorValues[28] == 0){//tamb
if(downtamb){
downtamb = false;
for ( int i = 0; i < 10; i++){
current[fX+i][fY+i] = random(255,500);
}
//ts.addThreads();
//ts.run();
tamb.play();
}
}
if(sensorValues[28] != 0){
if(!downtamb) {
downtamb = true;
}
}
////water ripples/////
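// Each pixel becomes the average of its four neighbours from the previous
// frame minus its own previous value, then is damped slightly; this is a
// standard 2D ripple approximation, drawn as grayscale brightness.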
loadPixels();
for ( int i = 1; i < cols - 1; i++) {
for ( int j = 1; j < rows - 1; j++) {
current[i][j] = (
previous[i-1][j] +
previous[i+1][j] +
previous[i][j+1] +
previous[i][j-1]) / 2 -
current[i][j];
current[i][j] = current[i][j] * dampening;
int index = i + j * cols;
pixels[index] = color(current[i][j]);
}
}
updatePixels();
float[][] temp = previous;
previous = current;
current = temp;

/////drawing threads///
ts.addThreads();
ts.run();

////head tracking///
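// Color tracking: every camera pixel whose color is within the threshold of
// trackColor (red, matching the mask) is marked, and the average position of
// those pixels becomes the tracked point (havgX, havgY).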
video.loadPixels();
threshold = 80;

float avgX = 0;
float avgY = 0;

int count = 0;

// Begin loop to walk through every pixel
for (int x = 0; x < video.width; x++ ) {
for (int y = 0; y < video.height; y++ ) {
int loc = x + y * video.width;
// What is current color
color currentColor = video.pixels[loc];
float r1 = red(currentColor);
float g1 = green(currentColor);
float b1 = blue(currentColor);
float r2 = red(trackColor);
float g2 = green(trackColor);
float b2 = blue(trackColor);

float d = distSq(r1, g1, b1, r2, g2, b2);

float hX = map(x,0,video.width,0,width);
float hY = map(y,0,video.height,0,height);

if (d < threshold*threshold) {
stroke(255,0,0);
strokeWeight(1);
point(hX, hY);
avgX += x;
avgY += y;
count++;
}
}
}

// We only consider a pixel a match if its color distance is below the threshold (set to 80 above).
// The threshold is arbitrary; adjust it depending on how accurate you need the tracking to be.
if (count > 0) {
avgX = avgX / count;
avgY = avgY / count;

havgX = map(avgX,0,video.width,0,width);
//havgY = map(avgY,0,video.height,0,height);
// Draw a circle at the tracked pixel
fill(255,0,0,100);
noStroke();
textSize(50);
if(koto){
text("koto",havgX,havgY);
}
if(peak){
text("peak",havgX,havgY);
}
}

if(title){
textSize(200);
fill(255,0,0);
text("Interferit",width*0.3,height/2);
textSize(40);
text("Wear on the mask and the claws, using your enchanted vessel to interfere the world of pixels!",width*0.03,height*0.7);
//println(1);
}
if(!title){
fill(0);
}
}

float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) +(z2-z1)*(z2-z1);
return d;
}

//void keyPressed() {
// background(0);
//}

void setupSerial() {
printArray(Serial.list());
myPort = new Serial(this, Serial.list()[ 0 ], 9600);
// Check the list of ports printed above, find the port named
// "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----",
// and, if needed, replace the index 0 in Serial.list()[0] with that port's index.

myPort.clear();
// Throw out the first reading,
// in case we started reading in the middle of a string from the sender.
myString = myPort.readStringUntil( 10 ); // 10 = '\n' Linefeed in ASCII
myString = null;

sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
while (myPort.available() > 0) {
myString = myPort.readStringUntil( 10 ); // 10 = '\n' Linefeed in ASCII
if (myString != null) {
String[] serialInArray = split(trim(myString), ",");
if (serialInArray.length == NUM_OF_VALUES) {
for (int i=0; i<serialInArray.length; i++) {
sensorValues[i] = int(serialInArray[i]);
}
}
}
}
}

VR/AR NEWS PREDICTIONS BY RYAN

TOP 4 for accurate, prophetic, powerful

  1. Is 5G a Game Changer for VR and AR? (yes!)
    Sifting Reality From Hype: What 5G Does (and Doesn’t) Mean for VR & AR

5G boosts communication speed and can transfer much more data, which is a big step up from 4G, because VR and AR need huge amounts of pixel data to be transferred and processed all the time; to keep everything working smoothly for users, fast transfer speeds are definitely needed.

2. New wearable skin lets you touch things in VR and be touched, too

Haptic interaction is a lot better than a hand controller, where interaction is limited to the shape of the controller and its touchpads and buttons. Wearable skin gives you more ways of controlling things in VR through your sense of touch, and it can also convey pressure and force, for example when the user is grabbing or catching something.

3. Brain interface

Using a brain interface is a very powerful way of controlling VR devices, as it extends the ways we can interact: we could do everything based on signals from the brain. I am thinking of using this for a deep-dive experience, where you can sense everything in the VR world as in real life; that could be dangerous, but it is very interesting.

4. Create An Entire Home Gym With Oculus Quest

The scale of VR experiences can be bigger than what we have now: we can use the space around us in far more ways than we can imagine, and turn our rooms into any place we want. Turning the home into a gym is, from my perspective, just a first trial of a bigger change. There will be many more designs for home VR. This is just a beginning.

TOP 4 for off-track, clueless, and ridiculous.

1. Phone-based VR is officially over

I do not think phone-based VR is over yet, even though today's phones do not have the specs to support a VR application, in terms of power and many other factors. Phone hardware will keep improving, so there is a real possibility that the power problem will be solved and the processing power will get a lot better.

2. Virtual graffiti

I just don't like it. I cannot think of a practical way to interact with such an application; nobody will keep holding up their phone to look around and paint, so it does not make sense to me. The content of the paintings that users make would also be hard to regulate.

3. Facebook: We Don’t Collect Or Store Quest Camera Or Guardian Data

Impossible; I just do not believe this kind of news. When we use the VR headset, the cameras are collecting data about our whole body, and that is directly tied to user privacy.

4. Facebook announces Horizon, a VR massive-multiplayer world

This is going to be a failure. The avatars look creepy, and the way Mark described the world makes me unsure: he promises a lot of freedom to control everything around you, but first, I do not think it is possible to make everything controllable in a virtual world, especially in a VR application; and second, moderation in Horizon is going to be a problem, from how to detect violations to what the penalties are, and plenty more besides.

Recitation 10: OOP

These two videos show the particles I made for the OOP workshop. The idea is that the particles are first generated at random positions on the screen and then follow the cursor wherever it goes, orbiting around it.

I use the map() function in the first example to set the randomly generated positions of the particles; in the second example it is used to set the opacity of each particle according to its distance from the cursor.
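The second example's code is not included below, but its opacity mapping works roughly like the following minimal sketch; the 0 to 300 pixel distance range and the static particles are assumptions for illustration, not the exact values from the video.

// Sketch of the second example's idea: particles closer to the cursor are
// drawn more opaque, and farther ones fade out, using map() on the distance.
int numParticles = 50;
PVector[] positions = new PVector[numParticles];

void setup() {
  size(800, 600);
  for (int i = 0; i < numParticles; i++) {
    positions[i] = new PVector(random(width), random(height));
  }
}

void draw() {
  background(0);
  noStroke();
  for (int i = 0; i < numParticles; i++) {
    float d = dist(positions[i].x, positions[i].y, mouseX, mouseY);
    // map distance (0 to 300 px) to opacity (255 to 0), clamped outside that range
    float alpha = constrain(map(d, 0, 300, 255, 0), 0, 255);
    fill(255, alpha);
    ellipse(positions[i].x, positions[i].y, 8, 8);
  }
}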

The following code is for the first video, including three files.

Particles 

ParticleSystem ps;

void setup() {
size(1920,1080);
ps = new ParticleSystem();
ps.addParticle();
}

void draw() {

background(0);

ps.run();

}

Particle1

class Particle1{

PVector curPosition;
PVector accelerationX;
PVector accelerationY;
PVector position;
PVector velocity;
int r = 2;
color c;

Particle1(PVector curPos, PVector pos, PVector v, color parc){
//acceleration = new PVector(0.05,0.05);
position = pos.get();
velocity = v.get();
curPosition = curPos.get();
accelerationX = new PVector(random(0,2),0);
accelerationY = new PVector(0,random(0,2));
c = parc;
}

// Method to update position
void update() {
if(position.x <= 0){
velocity = new PVector(velocity.x*=-1,velocity.y);
float prey = position.y;
position = new PVector(1,prey);
}
if(position.x >= width){
velocity = new PVector(velocity.x*=-1,velocity.y);
float prey = position.y;
position = new PVector(width-1,prey);
}
if(position.y <= 0){
velocity = new PVector(velocity.x,velocity.y*=-1);
float prex = position.x;
position = new PVector(prex,1);
}
if(position.y >= height){
velocity = new PVector(velocity.x,velocity.y*=-1);
float prex = position.x;
position = new PVector(prex,height-1);
}

velocity.mult(0.99);//slowing particles down

if(position.y < curPosition.y - r){
velocity.add(accelerationY);
}
else if (position.y > curPosition.y + r){
velocity.sub(accelerationY);
}else if(position.y >= curPosition.y - r && position.y <= curPosition.y){
velocity.add(accelerationY.mult(2));
}else{
velocity.sub(accelerationY.mult(2));
}
if(position.x < curPosition.x - r){
velocity.add(accelerationX);
}
else if (position.x > curPosition.x + r){
velocity.sub(accelerationX);
}else if(position.x >= curPosition.x - r && position.x <= curPosition.x){
velocity.add(accelerationX.mult(2));
}else{
velocity.sub(accelerationX.mult(2));
}
position.add(velocity);
}

void run() {
update();
push();
display();
pop();
}

void push() {
pushMatrix();
}

void pop() {
popMatrix();
}

// Method to display
void display() {
noStroke();
fill(c);
translate(position.x,position.y);
ellipse(0,0,8,8);
}

PVector getPos(){
return position;
}

PVector getV(){
return velocity;
}

}

ParticleSystem

class ParticleSystem {
ArrayList<Particle1> particles1;
PVector position;
PVector velocity;
color c;
color[] co = new color[10];
PVector[] pos = new PVector[10];
PVector[] v = new PVector[10];
int pNum = 10;

ParticleSystem() {

particles1 = new ArrayList<Particle1>();
for(int i = 0; i<pNum; i++){
position = new PVector(random(width*1/4,width*3/4), random(height/4,height*3/4));
velocity = new PVector(random(-5,5),random(-5,5));
c = color(random(0,255), random(0,255), random(0,255));
pos[i] = position;
v[i] = velocity;
co[i] = c;
}
}

void addParticle() {
for (int i = 0; i<pNum; i++){
float x = map(mouseX,0,width,500,600);
particles1.add(new Particle1(new PVector(x,mouseY), pos[i],v[i],co[i]));
}
}

void run() {
for (int i = 0; i<pNum; i++) {
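// Rebuild each particle every frame from its stored position and velocity,
// passing in the current mouse position so it steers toward the latest cursor.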
Particle1 p1 = particles1.get(i);
p1 = new Particle1(new PVector(mouseX,mouseY), pos[i],v[i],co[i]);
p1.run();
pos[i] = p1.getPos();
v[i] = p1.getV();
}
//printArray(particles);
println(particles1.size());
}
}

Essay for Final Project

Glowing Sound Visualizer

I have done some research on sound visualization. The most popular approaches use sand or water, and there are also approaches using light and fire. Light is hard to control under many conditions and depends on the projection, fire looks cool but is too dangerous, and water is also hard to control since it can ruin the electronics if you are not careful, so I will do the visualization with sand. This will be realized with a Chladni plate: a plate that resonates with an amplifier, and when sand is spread on it, the sand forms a pattern that depends on the sound frequency. I also intend to use glow-in-the-dark sand, so that the device glows as the plate resonates with the amplifier. To make it look even cooler, I am going to build it into a photo frame as an infinity mirror box, so the glowing pattern of the sand is reflected over and over inside the box. At the same time, the device will be controlled by an interface made in Processing, and I am also considering interaction based on a Makey Makey kit, building my own sensors to make it more interesting and interactive. The problem is to figure out what the interaction should be and how the interface should look. I think the intended users might be families who want an artistic amplifier to decorate their rooms, or stages that need special lighting effects.
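As a rough idea of what the Processing control interface could be, here is a minimal sketch that maps the mouse's horizontal position to the frequency of a sine tone, which would be sent from the computer's audio output into the amplifier driving the plate; the 50 to 1500 Hz range and the mouse control are assumptions for illustration, since the real interface and interaction are still to be designed.

import processing.sound.*;

// Minimal Chladni-plate control sketch: a sine oscillator whose frequency
// follows the mouse, with the current frequency shown on screen.
SinOsc sine;
float freq = 440;

void setup() {
  size(800, 200);
  sine = new SinOsc(this);
  sine.play();
}

void draw() {
  background(0);
  // assumed plate range of 50-1500 Hz, controlled by mouseX
  freq = map(mouseX, 0, width, 50, 1500);
  sine.freq(freq);
  fill(255);
  textSize(32);
  text(nf(freq, 0, 1) + " Hz", 20, height / 2);
}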

First, I am going to finish the visualizer as soon as possible. I need to get a photo frame to use as the plate that holds the sand, and it should be flat and well balanced; then I need an amplifier to combine with the frame, and I will test whether the sand changes its pattern at different frequencies. This is the most important part, since it directly determines the visual side of the project. Once the visualizer works, I need to work on the interface and the interaction; the two should be related so the piece is understandable, and making it interactive is also very important. I will do more research on music interfaces and on more interesting ways of interacting with sound. The visualizer should be finished around the beginning of December at the latest, so that everything can be done in time.

I think visualizing sound physically is very interesting, and actually seeing sound instead of just hearing it could be inspiring; offering a different way to feel sound will be an amazing experience. The interaction is direct: you change the sound frequency through the interface, and the pattern of the sand changes accordingly, so the user feels the sound not only by hearing it but also by seeing it. I'm inspired by Nigel Stanford's video Cymatics, which shows different ways of visualizing sound; what I am doing is re-creating one of those ways, adding more ways to experience it, such as visual effects, and making it more interactive rather than just playing specific notes at specific frequencies. If the project turns out successful, I will try to build on it and make it more practical, for people who want another way to feel sound in their lives.