Week 1 - Reflection on the reading by Yuru Chen (Lindsay)

From the reading, I got a strong sense of the power we have as humans who are able to communicate. Unlike Vashti and the other passengers in the air-ship, who did not think communication was important at all, Kuno is a positive example because he realized that shutting ourselves off from other people does us no good. I feel the same way as Kuno, because I understand the importance of communication. As far as I'm concerned, communication is like a bridge connecting us to everything else in the world. If we make good use of this bridge, it can take us anywhere we want to go. The Machine in the reading, on the contrary, is an example of the "prison" in our heads. People who lack the ability to communicate are confined to this prison. Of course, the very first step is always scary. But when we are brave enough to take that first step and get involved, we find that a world with communication is so much more beautiful than that little "prison" in our heads.

Communication is not only about exchanging ideas or thoughts with other people; it is also a way to improve ourselves and explore new areas of knowledge. If we refuse to communicate and keep everything to ourselves, we stay trapped in our own little world. Only if we are able to communicate can we find ways to acquire new knowledge and thus improve ourselves.

You Know You Have Magic! by Yuru Chen-Marcela

You Know You Have Magic!

by Yuru Chen and Molly He

For my final, I worked with Molly He. Molly and I created a "magic" drawing project. The basic concept is that the camera detects the color we want it to track and then shows what the user draws on the canvas. On top of that, a "Follow Me!" sign moves around the screen to tell the user what gesture to make to trigger the car.
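This is not our actual project code, but a minimal Processing sketch of the kind of moving guide label we mean; the circular path, radius, and speed here are made-up values for illustration only.

// Minimal sketch of a moving "Follow Me!" guide label.
// The circular path, radius, and speed are illustrative values, not our project's.
float angle = 0;

void setup() {
  size(480, 270);
  textSize(24);
  textAlign(CENTER, CENTER);
}

void draw() {
  background(0);
  // move the label along a circle so the user knows what gesture to trace
  float x = width / 2 + cos(angle) * 100;
  float y = height / 2 + sin(angle) * 100;
  fill(255);
  text("Follow Me!", x, y);
  angle += 0.03;
}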

Here is the demonstration of our project:

Here is a picture of our car:

Basically, we wanted our users to hold a "magic stick," like in Harry Potter, and we needed the camera to detect its position, so we thought of color tracking. Based on that, we put a red spot on the tip of the stick so that the camera could detect it. We also wanted the user to make a certain gesture, that is, to draw a specific shape that would serve as a spell to trigger the car, so we took the most direct approach and added a moving "Follow Me!" text on the screen to indicate what the user should do.

For the Processing part, we used HSV color tracking in OpenCV to track the color. We had other options, one of which was to compare each pixel's color with the color we wanted to track, use the average position of the matching pixels, and draw on the screen with a PGraphics top layer. However, as we went on we found that this method was very unstable because the camera could not find the preset color reliably, so we replaced it with OpenCV, which solved the color-tracking problem. The first criterion we used to choose the tracking method was feasibility. Before we started the project, I reached out to several professors and staff members to ask what method would work best for our project; they suggested several ways to achieve our goal, from which I first chose the average-position approach and then, after running into that problem, switched to OpenCV. The second criterion was accuracy. Because we were tracking a specific color, distractions in the background could influence the tracking, so accuracy was the second thing we considered during the process.
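To make the first (average-position) method more concrete, here is a minimal Processing sketch of that approach as I understand it; the target color and the distance threshold are placeholder values, not our final settings.

import processing.video.*;

// Sketch of the first method we tried: average position of pixels close to a
// preset target color. The target color and the distance threshold below are
// placeholder values, not our final settings.
Capture cam;
color target = color(255, 0, 0);   // the red tip of the "magic stick"
float threshold = 60;              // how close a pixel's color must be to count

void setup() {
  size(480, 270);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
  cam.loadPixels();

  float sumX = 0, sumY = 0;
  int count = 0;
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      color c = cam.pixels[y * cam.width + x];
      // distance between this pixel's color and the target color
      float d = dist(red(c), green(c), blue(c), red(target), green(target), blue(target));
      if (d < threshold) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }

  if (count > 0) {
    // the average position of all matching pixels is the tracked spot
    ellipse(sumX / count, sumY / count, 16, 16);
  }
}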

As for failures: after we completed the color-tracking part, we wanted to make the interface look better, so we thought of covering the image captured by the camera with an image of parchment. However, we could not get that to work, so we kept what we had before. As for successes, we had trouble triggering the car. We asked one of the professors and he suggested that we use a boolean. We preset all the pixels as "false"; if a pixel detects a different color, it becomes "true"; we then count the "true" pixels, and when the number reaches 40, it triggers the car. This was a very important success, because if this step had not worked we could not have continued with the Arduino part. During the user testing session, because we had not finished the entire project yet, we replaced the car-triggering part with a little ellipse appearing in the corner of the canvas, standing in for the car movement we initially wanted. However, this made users very confused about what our project was for. Also, some users suggested that it would be better if we could make the previously drawn lines disappear, because the canvas looked very messy with everything we had drawn on it, which made it hard to tell where the tracked spot was. Based on the users' feedback, we made the following changes: 1) we added our car and successfully triggered it with pixel detection; 2) we decided not to use topLayer and instead used storingInput to make the previously drawn lines disappear. Judging from our final result, these adaptations were very effective.
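Here is a minimal sketch of the pixel-counting trigger idea; the count of 40 is the trigger condition we actually used, but the target color, the color threshold, and the serial port index are assumptions made for this example.

import processing.video.*;
import processing.serial.*;

// Sketch of the boolean pixel-counting trigger: mark pixels that match the
// tracked color as "true", count them, and when the count reaches 40 send a
// byte to the Arduino to start the car. The target color, the color threshold,
// and the serial port index are assumptions for this example.
Capture cam;
Serial port;
color target = color(255, 0, 0);
float threshold = 60;

void setup() {
  size(480, 270);
  cam = new Capture(this, width, height);
  cam.start();
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
  cam.loadPixels();

  int trueCount = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    color c = cam.pixels[i];
    // a pixel is "true" when its color is close enough to the target color
    boolean matches = dist(red(c), green(c), blue(c),
                           red(target), green(target), blue(target)) < threshold;
    if (matches) {
      trueCount++;
    }
  }

  // 40 matching pixels was our trigger condition
  if (trueCount >= 40) {
    port.write(1);   // tell the Arduino to start the car
  }
}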

In conclusion, the goal of our project is to give users an interesting and interactive experience. By asking users to make certain gestures to trigger the car, we create an interactive "communication" between the machine and the user. This result aligns with my definition of interaction in that the user and the machine react to each other. However, if I had more time, I would add a sand table on top of the car, which would cover the car up and also show the route of the car's movement. Molly and I initially wanted to make the car draw based on what the users draw, but because of the limited time we could not finish that part; with more time we would definitely make that possible and make the whole project cooler. We would also improve the accuracy of the color tracking, because even though we switched from calculating color distances to OpenCV, it is still not very stable, so I would fix this problem as well.

During the process we ran into several failures, and the most memorable one came while we were working on the color-tracking part. We stayed in the studio for at least four hours working on it, and in the end we found out it was because we hadn't changed the canvas size at the very beginning, a very small mistake that takes patience to find. Therefore, the biggest takeaway I got from this class is that you need to be patient and always careful with the details. From what I achieved, I also learned to never be afraid to try new things, because you might gain extra experience from them. The key lesson of my project experience is that saying is always easier than doing: if you want to achieve something, you need to put in effort, take it seriously, be patient, and work for it. Finally, if I were to develop my project further, I would try to use Bluetooth to connect the car to the computer. That way, I could put the car somewhere farther away, say, in another room: when the user draws in one room, the car would draw in the other. I think this is meaningful because it could make our lives more convenient.

Recitation 11 workshop by Yuru Chen

For this recitation I chose to go to the media manipulation workshop.

We were asked to pick a TV show, music video, or opening video and recreate it using images and videos from the web. For this exercise I chose the Marvel opening sequence as the original background video; when the mouse is pressed, the video changes to an image of Game of Thrones rendered as small shapes that change with the position of the mouse.

Here is the demo of my exercise:

Processing Code:

import processing.video.*;

PImage photo;
Movie myMovie;

void setup() {
  size(480, 270);
  myMovie = new Movie(this, "marvel.mp4");
  myMovie.play();
  photo = loadImage("GOT2.jpg");
}

void draw() {
  if (mousePressed) {
    image(photo, 0, 0);
    int rectSize = 5;
    int w = photo.width;
    int h = photo.height;
    photo.loadPixels();
    for (int y = 0; y < h; y = y + rectSize) {
      for (int x = 0; x < w; x = x + rectSize) {
        // color of the image pixel at this grid position
        int index = (y * w) + x;
        fill(photo.pixels[index]);
        ellipse(x, y, rectSize, rectSize);

        // distance to the mouse controls the size and rotation of the rectangle
        float d = dist(x, y, mouseX, mouseY); // mouseX, mouseY could be replaced by values from Arduino
        d = map(d, 0, sqrt(width * width + height * height), 1, rectSize * 2);
        float angle = map(d, 1, rectSize * 2, 0, PI);

        pushMatrix();
        translate(x, y);
        rotate(angle);
        rect(0, 0, d, 3 * d);
        popMatrix();
      }
    }
  } else {
    if (myMovie.available()) {
      myMovie.read();
    }
    image(myMovie, 0, 0);
  }
}

Recitation 10: Media controller by Yuru Chen

For this recitation I used Arduino to control the image shown in Processing.

Here is the demo:

Arduino code:

// IMA NYU Shanghai
// Interaction Lab
// This code sends one value from Arduino to Processing

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(A0) / 4;
  Serial.write(sensorValue);

  // too fast communication might cause some latency in Processing
  // this delay resolves the issue
  delay(10);
}

Processing code:

import processing.video.*;
import processing.serial.*;

Capture myCam;
Serial myPort;
int valueFromArduino;

//PImage myImg;

int size; // grid size, set from the value sent by the Arduino

void setup() {
  size(480, 300);
  //myImg = loadImage("1.jpg");
  myCam = new Capture(this, 480, 300);
  myCam.start();
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[5], 9600);
  myPort.clear();
}

void draw() {
  if (myCam.available()) {
    myCam.read();
  }

  background(0);
  noStroke();
  myCam.loadPixels();

  // keep only the most recent value sent from the Arduino
  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }
  println(valueFromArduino);

  // the potentiometer value switches between a coarse and a fine grid
  if (valueFromArduino < 100) {
    size = 50;
  } else {
    size = 10;
  }

  // draw a grid of rotated ellipses colored from the camera pixels
  for (int x = 0; x < myCam.width; x = x + size) {
    for (int y = 0; y < myCam.height; y = y + size) {
      int i = (y * myCam.width) + x;
      fill(myCam.pixels[i]);

      // the Arduino value takes the place of mouseX/mouseY as the point the shapes react to
      float d = dist(x, y, valueFromArduino, valueFromArduino / 2);
      d = map(d, 0, sqrt(width * width + height * height), 1, size * 2);
      float angle = map(d, 1, size * 2, 0, PI);

      pushMatrix();
      translate(x, y);
      rotate(angle);
      ellipse(0, 0, 3 * d, d);
      popMatrix();
    }
  }

  //colorMode(HSB, 100);
  //fill(x*100/width, y*100/height, 100);
}

I think the technology used in my project is very interactive, because what we see on the screen is controlled by the potentiometer, which is in turn controlled by the user.

Recitation 9: Final Project Process by Yuru Chen

Step 1:

I critiqued three projects during the recitation. First, we looked at Sam Li's project proposal. Basically, she is going to create a device with buttons that can turn people's feelings about art pieces into tangible expressions reflected on Mona Lisa's face. We all thought this was a very good idea and liked it very much. However, to improve the project, we suggested that she use analog input to express the users' feelings instead of only digital inputs like buttons; that would reflect the users' feelings better, since it offers more choices and a wider range.

Second was Alex, who is going to create an archery game with a bow and arrows. He is going to use distance sensors to sense the position of the user and use VR for the game's interface. This is very cool, since Alex said he was going to make the game 3D. We think he should figure out how to use those sensors to make the bow and arrow move along with the user.

The last one was Tiana, who is thinking of making a data-consumption GIF controlled with a potentiometer. Basically, her project uses Arduino to track the time users spend on their electronic devices and sends GIFs to remind them. We suggested that she could use another kind of alarm to remind the user, for example a car or something that shakes, instead of only a GIF.

What I find interesting about their concepts is that they all have different definitions of what interaction is. Each of them is trying to create something that aligns with their own concept, which I think is really interesting.

I think their projects involve not only seeing and thinking but also listening or physically moving in order to interact with the technology.

Step 2

For my own project, I received the following suggestions:

  1. Consider using a distance sensor to capture the user's gestures and draw on the screen (see the sketch after this list).
  2. Consider creating a frame within which users can draw, to make the project more specific and user-friendly.
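To make the first suggestion more concrete, here is a minimal Processing-side sketch, assuming an Arduino sends a distance reading (0-255) over serial with Serial.write(); the serial port index and the mapping to a horizontal drawing position are assumptions made only for illustration.

import processing.serial.*;

// Sketch of the distance-sensor suggestion: an Arduino is assumed to send a
// 0-255 distance reading with Serial.write(); the value is mapped to a
// horizontal drawing position. Port index and mapping are illustrative only.
Serial myPort;
int distanceValue;

void setup() {
  size(480, 270);
  background(255);
  myPort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  // keep only the newest reading
  while (myPort.available() > 0) {
    distanceValue = myPort.read();
  }
  // a nearer or farther hand moves the drawing point left or right
  float x = map(distanceValue, 0, 255, 0, width);
  fill(0);
  noStroke();
  ellipse(x, height / 2, 8, 8);
}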

According to my group members' feedback, the most successful part of my proposal is the idea of drawing in the air and having the drawing appear on the computer screen. The least successful part is that the sensor I chose is not good enough to achieve my goal. I agree with this feedback, because I also think I should consider more carefully which sensor to use to make my project better.

After receiving this feedback, I am thinking of adding a frame for users to draw in, as my group members suggested. I will also try to use a distance sensor, or other feasible approaches, to achieve my goal.