Interaction Lab Midterm project: test your eyesight by yourself —— Skye (Spring 2018)

Midterm project: Testing your EYE-Q!

Partner: Louis

Goals and original ideas

For our midterm project, we wanted to build a device that imitates the process of an eyesight test. The idea comes from the experience of getting an eyesight test in the hospital. There, a doctor has to help us do the test, which we find inefficient since we have to wait in line. Also, having someone watch you while you take an eyesight test may make some people, especially those with poor eyesight, feel uncomfortable. Now that digital devices can be applied everywhere, we think a device that allows patients to do an eyesight test by themselves can make the process more private and efficient.

The original thought was that when the test starts, one image from an eyesight test chart will show on the screen, and users can use a remote control to choose the corresponding direction.

One of the images looks like this (it indicates down):

The direction will be random, and the size of the image will change from big to small. Every time users make a correct choice, the size becomes smaller, while if users make a wrong choice, the size stays the same but the direction changes to a different one. The chart of the ideal process looks like this:

Material:

  • 1* Arduino Kit and its contents, including:
  • 1 * Breadboard
  • 1 * Arduino Uno
  • 1 * remote control
  • 1 * USB A to B Cable
  • Jumper Cables (Hook-up Wires)
  • 1 * infrared receiver
  • 6 * big buttons
  • 6 * 220K resistors

Process and problems:

Since Louis was occupied with other things at this time, I was mostly responsible for this part.

Since we had not learned how to use the remote control in class, I checked some instructions online and consulted Nick for detailed information. The circuit I built looked like this:

It took me a lot of time to get all the data into Arduino and Processing. To imitate the process of an eyesight test as closely as possible, I searched online for the exact sizes of the eyesight chart images. The information for a 5-meter eyesight test chart looks like this (sorry, it is in Chinese…):

Then I found one problem: these sizes are meant for human eyes looking at a physical chart, while on a computer screen it is pixels that make up the images, so the displayed size will differ. In order to make the image on the screen look exactly the same size as on a physical chart, I needed to convert the physical size (in millimeters) into pixels. To do that, I found a website that converts mm to pixels. The link is here: https://www.unitconverters.net/typography/millimeter-to-pixel-x.htm
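
As a reference for how such a conversion works, here is a minimal Processing sketch of the idea. The 96 DPI value is only an assumption (a common default for converters like this one); a real screen's DPI has to be measured, which is exactly why the on-screen size did not match at first:

// Rough sketch: convert a chart size in millimeters to pixels.
// ASSUMPTION: the screen renders at 96 dots per inch; real screens vary.
float mmToPixels(float mm, float dpi) {
  return mm / 25.4 * dpi;   // 25.4 mm per inch
}

void setup() {
  // e.g. a 72.7 mm optotype at an assumed 96 DPI
  println(mmToPixels(72.7, 96));  // about 274.8 px, close to the first value in the size[] array below
}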

I thought that would give the exact result. However, after the professor tested it, it turned out that the actual size shown on the screen was still not the same as the intended physical size, and there was no way to convert it precisely. So I used a mathematical workaround: instead of changing the size of the images, I used the fact that the visual size depends on the ratio of image size to viewing distance, and changed the distance from 5 meters to 3.7 meters so that the visual size would be the same. In order to reach that distance, the professor helped me borrow a long USB cable from the resource center, which looks like this:
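
The arithmetic behind this is a simple proportion: the visual angle is (approximately) proportional to size divided by distance, so scaling the viewing distance by the same factor as the on-screen size keeps the test equivalent. A hedged sketch of the calculation, where 0.74 is the approximate ratio implied by moving from 5 m to 3.7 m rather than a value we measured directly:

// Sketch of the distance correction (values are illustrative):
// if the images appear at some fraction of their intended physical size,
// move the viewer closer by the same fraction to keep the visual angle.
float standardDistance = 5.0;          // meters, the distance the chart sizes assume
float displayedOverIntended = 0.74;    // ASSUMED ratio; 5 m * 0.74 = 3.7 m as used in the project
float equivalentDistance = standardDistance * displayedOverIntended;
println(equivalentDistance);           // about 3.7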

After this test, the professor gave me another suggestion: the remote control was not convenient to use, as it is too small and may confuse users about which button to press. Also, using a remote control was not very interesting as an interaction. Meanwhile, from my other tests, I found that the data sent by the remote control was not stable: it could send several values at once, causing the image to change several times even though the button was pressed only once. Considering all of this, I decided to replace the remote control with buttons, hoping this would solve these problems. Since I wanted to make the operating device like a box, the buttons in my kit were too small, so I borrowed some big buttons from the resource center. The buttons and the circuit I built looked like this:

After getting all the components and data working, I moved on to the logic in Processing. This was the most challenging part for me, as I am not good at coding.

For the code, in order to reach our ideal outcome, we first made a cover screen with some instructions and named one button “begin”. Every time the “begin” button is pressed, the test begins and images show up. Then we divided the code into two parts: the right-choice part and the wrong-choice part. We dealt with the “right” part first. (I was still responsible for this part since my partner had not returned.) I tried three approaches. First, I simply used “if/else” and boolean conditions to classify all the variables, but this did not work at all. After consulting our fellows (Nick and Leon), I added a switch statement and used several “if/else” conditions to match all the corresponding values. This let the image become smaller each time a right choice was made; however, it also revealed another problem: a button can also send several values in a row while it is held down, which meant the data was still not stable. To further clean up the data, I consulted another fellow (Lewis). I learned to record the previous value and compare against it, so the image is not affected by the repeated values sent by the buttons. With that, the right part worked. I also wrote some text on the screen to show the eyesight level for each image size. The outcome looked like this:
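
The core of that fix is to react only when the value coming from Arduino changes, not on every frame it is received. Here is a minimal Processing sketch of the idea (the function and variable names are illustrative; the full project code below does the same thing with its pval variable):

// Minimal sketch: act on a button value only once per press.
// prev remembers what Arduino sent last frame, so a value that is
// repeated while the button is held down does not trigger again.
int prev = 0;

void handleInput(int valueFromArduino) {
  if (valueFromArduino == 3 && prev != 3) {
    // a fresh "up" answer arrived: advance one level here
  }
  prev = valueFromArduino;   // remember the value for the next frame
}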

Then I moved on to the “wrong” part. By that time my partner had come back, so we did the following work together. We thought the logic for “wrong” would be similar to the “right” part, just with different variables. Unfortunately, it turned out there was some logical conflict between the two parts. We spent a whole day on it and still could not figure it out. Since we did not have enough time, we decided to simply replace the “wrong” part with a “stop” button: whenever users cannot tell the direction (which may indicate a wrong decision), they can press “stop” to check their eyesight level. We knew this was not the best solution, but we had no idea how to work out the logic at the time…
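
For reference, a hedged sketch of how the originally planned wrong-choice branch could look in Processing, following the rule described earlier (keep the size, change the direction). This is only an illustration of the idea, not the logic we actually shipped; handleAnswer and pressed are hypothetical names:

// Sketch of the planned "wrong choice" behaviour (not in the final project).
// "pressed" stands for the direction name decoded from the Arduino value,
// e.g. 1 -> "left", 2 -> "right", 3 -> "up", 4 -> "down" as in the code below.
void handleAnswer(String pressed) {
  if (pressed.equals(D)) {
    counter += 1;            // correct: smaller image, higher eyesight level
    s += 0.1;
  }
  // correct or not, show a new random direction; on a wrong answer the size stays
  i = round(random(0, 3));
  D = direction[i];
}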

After adding the ‘stop’ button, we added an end screen for our project to show the result. We wanted the test to start again when ‘begin’ is pressed again, but we could only make it return to the first image instead of the cover. This is another shortcoming.

Context and decoration:

After barely finishing the functional part of our project, we continued with decoration and context. (I was responsible for this part while Louis did the final modifications.) I used a box to hold all the circuits and made it into an operating device, which looks like this:

To let users know what to do, I drew the direction images and the “begin” and “stop” signs beside the corresponding buttons. To make the instructions clearer and give our project a full context, I made two storyboards with specific instructions, which looked like this:

Also, as you can see, to make our project entertaining as well as functional, we added Minion characters to the context (because they wear glasses…). We also added these characters on the screen.

Outcome:

Here is the outcome tested by one of our friends:

User Testing:

For our project: most of the people who used it found it useful and easy to operate. They could immediately understand what they were supposed to do and found the process very similar to their experience of eyesight tests. Some of them really liked the Minion figures because they are cute. Some appreciated the idea of adapting interactive devices for medical use. Many people tried it again and again to see whether the result matched their tests in hospitals. One user liked that she did not associate our device with anything like Arduino, because the decoration kept her focused on the function.

Here is one video of user testing:

Here are some suggestions we received for improving our project:

  • Give feedback about wrong choices, and show the result after several wrong trials, to prevent people from guessing answers.
  • Make the buttons closer together so that users can operate them without looking (like a game controller).
  • If this is to be applied in real life, take special conditions into consideration, such as color blindness and other eye diseases.
  • Do more research about precision and make sure the distance is correct.
  • Add instructions and logic for covering one eye.
  • Take different ages into consideration and make it more applicable for kids.

Considering all these suggestions, we conclude that to improve our project, first of all, we need to fix the logic for wrong choices and add more indications. Also, to provide a better user experience, we should make the controller more comfortable to operate and consider different user requirements. Finally, we should keep working to make the device more accurate.

For someone else’s project: the project I remember most was a drawing book. It was a book with several tabs on it, and the inputs were light sensors inside the tabs. Each time you open a tab, you can do a certain kind of drawing on the screen, and when you close the tab, the image disappears. The idea was very interesting, because it offered another way of drawing. However, there were some problems during my test. One main problem was that the light sensors were very sensitive, which made the data unstable and affected the outcome. I think using other components like buttons or pressure sensors could be a better idea. Secondly, the user instructions were not very clear: when I looked at the device, I was not quite sure what to do until the developer told me. I think there should be more thought about the appropriate components to use and the user experience.

Source Code:

//Arduino
const int buttonPinBegin = 6;
const int buttonPinLeft = 7;
const int buttonPinUp = 8;
const int buttonPinDown = 9;
const int buttonPinRight = 10;
const int buttonPinStop = 5;

int buttonStateBegin = 1;
int buttonStateLeft = 1;
int buttonStateUp = 1;
int buttonStateDown = 1;
int buttonStateRight = 1;
int buttonStateStop = 1;

int Direction = 0;

void setup() {
Serial.begin(9600);
pinMode(6, INPUT);
pinMode(7, INPUT);
pinMode(8, INPUT);
pinMode(9, INPUT);
pinMode(10, INPUT);
pinMode(5, INPUT);
}
void loop() {
buttonStateBegin = digitalRead(6);
buttonStateLeft = digitalRead(7);
buttonStateUp = digitalRead(8);
buttonStateDown = digitalRead(9);
buttonStateRight = digitalRead(10);
buttonStateStop = digitalRead(5);

if (buttonStateRight == HIGH) {
//Serial.println(2);
Direction = 2;
} else if (buttonStateLeft == HIGH) {
Direction = 1;
} else if (buttonStateUp == HIGH) {
Direction = 3;
} else if (buttonStateDown == HIGH) {
Direction = 4;
} else if (buttonStateBegin== HIGH) {
// Serial.println('6');
Direction = 5;
} else if (buttonStateStop == HIGH) {
Direction = 6;
}else{
Direction =0;
}
//if (Direction != 0){
//Serial.println(buttonStateUp);
Serial.write(Direction);
Direction =0;
// irrecv.resume(); // Receive the next value

delay(100);
}

// Processing
import processing.serial.*;
Serial myPort;
int valueFromArduino;
PImage leftimg, rightimg, upimg, downimg;
PImage cover,end;

PImage [] img ;
String [] direction ={"up", "down", "right", "left"};
//the size of “E” from 4.0 to 5.3
float [] size = {274.847244094, 218.305511811, 173.404724409, 137.763779528, 109.417322835, 86.929133858,
69.051968504, 54.840944882, 43.577952756, 34.620472441, 27.477165354, 21.845669291, 17.348031496, 13.757480315} ;
int i= round(random (0, 3));
float s= 4.0;
int counter = 0;
String D;
int error=0;
int pval=0;
int transparency = 255;

int yPos, yPos1 = -500;
int xPos, xPos1, xPos2, xPos3 = -100;
int interval = 1750;
boolean state = false;
boolean startover=false;

void setup() {
size (displayWidth, displayHeight);
//fullScreen();
background (0);
printArray(Serial.list());
myPort = new Serial(this, Serial.list()[3], 9600);
// Images must be in the "data" directory to load correctly
img = new PImage [4];
img [0]= leftimg = loadImage("E left.jpeg");
img [1]= rightimg = loadImage("E right.jpeg");
img[2] = upimg = loadImage("E up.jpeg");
img[3]= downimg = loadImage("E down.jpeg");
D = "0";
cover=loadImage("cover.jpg");
cover.resize(displayWidth, displayHeight);
end=loadImage("end.png");
}

void draw() {
//receiving data from Arduino
while ( myPort.available() > 0) {
valueFromArduino = myPort.read();
println (valueFromArduino);
}

background(cover);
textSize (100);
fill (0, 0, 0,150);
noStroke();
textMode (CENTER);
tint(255, transparency);
text ("Press 'BEGIN' to start !", 200, height*6/7);
imageMode(CENTER);

if (valueFromArduino==5& pval!=5) {
//valueFromArduino = 0;
background(255);
imageMode(CENTER);
D = direction[i];
textSize (50);
}
fill(255);

switch (D) {
case "up":
background(255);
image(img[2], width/2, height/2, size[counter], size[counter]);
break;
case "down":
background(255);
image(img [3], width/2, height/2, size[counter], size[counter]);
break;
case "left":
background(255);
image(img [0], width/2, height/2, size[counter], size[counter]);
break;
case "right":
background(255);
image(img [1], width/2, height/2, size[counter], size[counter]);
break;

}
//Correctly guessed "up"
if (D=="up" && valueFromArduino==3 && pval!=3) {
pval=9;
counter+=1;
s=s+0.1;
i= round(random (0, 3));
D = direction[i];
//Correctly guessed "down"
} else if (D=="down" && valueFromArduino==4 && pval!=4) {
pval=9;
counter+=1;
s=s+0.1;
i= round(random (0, 3));
D = direction[i];
//Correctly guessed "left"
} else if (D=="left" && valueFromArduino==1 && pval!=1) {
pval=9; // reset like the other branches so one press only advances one level
counter+=1;
s=s+0.1;
i= round(random (0, 3));
D = direction[i];
//Correctly guessed "right"
} else if (D=="right" && valueFromArduino==2 && pval!=2) {
pval=9;
counter+=1;
s=s+0.1;
i= round(random (0, 3));
D = direction[i];
}
//STOP Pressed or 14 Correct Answers
if (valueFromArduino==6 || counter == 14) {
state=true;
}

if (state==true){

background (#FCE58F);
pushMatrix();
fill(0, 0, 0,150);
textSize(90);
text (" THIS IS YOUR EYESIGHT LEVEL:", 10, height/3);
popMatrix();
textSize(300);
fill(255,0,0,150);
text(s,200, height*7/10);
fill(0, 0, 0,150);
textSize(80);
text (" PRESS 'BEGIN' TO TRY AGAIN!", 90, height*7/8);
image(end, width/2, height/10);
startover=true;
}
//Press BEGIN to start over
if (startover==true && valueFromArduino==5& pval!=5) {
background(255);
imageMode(CENTER);
counter=0;
D = direction[i];
textSize (50);
s = 4.0;
state=false;

}

pval=valueFromArduino;
}

Interaction Lab Final Project: Kaleidoshare —— Skye Gao (Spring 2018)

Final project: Kaleidoshare

Partner: Louis Veazey

Idea and inspirations:

For our final project, I came up with the idea of building a device based on the elements of a kaleidoscope. The inspiration comes from my own experience of playing with a kaleidoscope in childhood. Seeing the changing, amazing patterns created by our own hands was a fantastic experience, which is also meaningful for children’s artistic appreciation and imagination. With a traditional kaleidoscope, however, the experience is quite private and temporary: one can only use one eye to see the pattern due to the small scale of the kaleidoscope, and it is hard to share the outcome with others because of its instability. Considering all of this, I wanted to combine the physical principles and artistic elements of the kaleidoscope with digital tools to improve the user experience, making it more interactive, multisensory, shareable and memorable.

Materials: 

  • 1* Arduino Kit and its contents, including:
  • 1 * Breadboard
  • 1 * Arduino Uno
  • 1 * USB A to B Cable
  • Jumper Cables (Hook-up Wires)
  • 1 * DC motor
  • 2 * 1K resistors
  • 1 * 10K resistor
  • 1 * big button
  • 9 * 220K resistors
  • 3 * mirrors

Working process:

After discussing with Louis, we both agreed the idea was feasible, so we started to work on it. We divided the project into two parts: the functional part, namely making the basic components and code work together, and the experiential part, the physical components and enclosure that provide the ideal user experience.

Before anything else, we made a design for how the whole device would work and built a 3D model to present the idea. The pictures look like this:

The idea is that the back of the box (where the star is) will be the computer screen, where we use Processing to present images. On the other side of the box will be a hubless wheel, which people look through to see the screen and rotate to change the patterns. Between the screen and the wheel there will be a triangular prism of mirrors to reflect the images. With this design, we tried to combine the physical principle of a traditional kaleidoscope with digital media to create a new experience.

Also, to make the kaleidoscope shareable and memorable, we thought of adding a button to save the images on the screen and send them somewhere users could keep and share their favorite images.

We got some inspiration for the rotation idea and the structure of the box from two YouTube videos; here are the links:

Demo & DC motor as input:

So we started with the functional part. Learning from our midterm project, this time we tried to keep the code as simple as possible. We found a demo on YouTube that shows exactly the effect of a kaleidoscope; here is the link:

We planned to use an Arduino input to play/stop the video, creating the effect that users are controlling the change of the patterns. 🌚 YES, WE PLAYED A TRICK.
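
In Processing terms the trick boils down to advancing the looping video only while the value coming from Arduino indicates that the wheel is turning. A minimal sketch of that gating, using the Processing Video library (the variable names are illustrative; the full code below does the same thing with change2):

// Minimal sketch: only advance the kaleidoscope video while the wheel turns.
import processing.video.*;

Movie myMovie;
int rotation = 0;   // value received from Arduino; 0 means the wheel is still
                    // (in the real project this is updated from the serial port)

void setup() {
  size(640, 360);
  myMovie = new Movie(this, "3.mp4");   // the kaleidoscope demo video
  myMovie.loop();
}

void draw() {
  if (rotation != 0 && myMovie.available()) {
    myMovie.read();                      // only take new frames while rotating
  }
  image(myMovie, 0, 0);                  // otherwise the last frame stays frozen
}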

As for the input, to match the rotating interaction, we first tried a rotary sensor; however, the rotary sensor from the equipment room can only turn from 0 to about 300 degrees, so it would not work. We did some research and found a tutorial on YouTube about how to use a DC motor as an input. Here is the link to the tutorial: Using a DC motor as an analog input (including the circuit and the sample code). Following the tutorial, we set up the circuit as well as the code. Below is the circuit we built:

Wheels and gears:

After completing the circuit and code, we began to think about how to build the experiential part. Since we want users to rotate a wheel to control the input, the wheel needs to drive the motor. We did some research, and our initial plan was to use a toothed belt to connect them. Here is the ideal model and what we found in our research:

To do this, we first bought a wheel-like part online, together with some rubber bands, and I also 3D-printed a gear for the motor, like below:


But this did not work: the rubber band did not have enough friction to make both components move together, and the 3D-printed gear did not fit the motor shaft. So we changed our plan. As Nick suggested, we could use gears to make them work together, and we could laser-cut the gears. So we did some further research on how to make wooden gears. We first tried to calculate and draw the gears in Illustrator, but Louis (the fellow) suggested we use a website called Gear Generator to design them. Here is how we did the research and used Gear Generator:

According to our design for the box, we needed another wheel for the users to hold and rotate, so we designed three hubless wheels: one wide, one thin, and one with gear teeth. These three would be put together like a sandwich, and we would use the board to hold the middle one so that the whole gear assembly would stand. Here is our laser cutting design:

And here is how the gears mesh (the motor shaft sits in the small hole of the smaller gear and is driven by the larger gear). We put the “sandwich” together and made a small holder for the large wheel using wires:

Since there was a lot of friction between the board and the wheels, they did not rotate smoothly, so we tried several materials to make it work better. Here are the materials we tried (including sandpaper, machine oil and paper tape; the sandpaper worked best):

After we set everything up, the display looked like this:

Website sharing, screen capture & QR code:

Next, we started to work on the screen capture and sharing. We first used the mousePressed() function to test the screen capture, and it worked well. Our initial plan was to send the picture directly to users’ emails; however, we considered that not everyone here has Gmail or a VPN, but everyone has WeChat. So rather than using email, we decided to use a QR code. However, it was difficult to connect Processing with WeChat directly. After asking Professor Rudi, we decided to upload the images to a website first and then make a QR code for that website. Neither of us knew how to set up a web server, so the professor offered to help: he built the website for us and shared the code for uploading from Processing. (The code credited to the professor is noted in the source code below.)
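
The quick test we started from is only a couple of lines; a sketch of it is below (here the frame is just saved locally, whereas the final code also uploads it over FTP):

// Quick test: capture the current frame whenever the mouse is pressed.
void mousePressed() {
  saveFrame("kal-######.png");   // writes kal-000001.png, kal-000002.png, ... next to the sketch
}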

Once we had the website, we created the QR code. We also made ourselves a logo and named the project Kaleidoshare (kaleidoscope + share), because we wanted the experience to be shareable. Here are our logo and the original design of the QR code:

Then we tried using a button to control the screen capture. We found a big button in the storage room, which seemed to be a perfect match. We initially planned to build a box to hold the button, but that would have hurt the visual integrity of the project, so we decided to cut a hole in the board to hold the button instead. The button and the box looked like this:

We also wanted to engrave our logo on the board, all by laser cutting. What’s more, considering that users might not get a signal about whether they had captured an image or not, we added LEDs to the board in the shape of an arrow, so that every time someone presses the button, the arrow lights up, pointing at the QR code. Below are the laser cutting process and outcome, as well as a video of the display:

Video for button display

Mirrors:

Then we bought three mirrors online and glued them together; here is the effect (IT IS BEAUTIFUL!!!) 😍

User experience and Vlog making:

After finishing the whole process, we made a video of the user experience; here is the link:

User Experience: https://youtu.be/AMHgh1sSBr4

Also, since we recorded the main steps, we made a vlog of our project process; here is the link:

Process Video: https://youtu.be/bzdadfvhV6E Hope you enjoy the video, and do not forget to give it a thumbs up 👍😛!

FINAL SHOW & User feedback:

On Friday, we set up our device before the final show. However, a lot of bugs suddenly appeared. First, the button was not working well: every time it was pressed, it sent several values, which caused several images to be saved and uploaded. The button also seemed to interfere with the DC motor. We spent a lot of time trying to restrict the data on both the Arduino and Processing sides, and with Nick’s help we finally got it under control. The second thing was that since the motor was suspended in the air, it easily came loose and caused errors; at that point we could do nothing but fix it with more tape. The last thing was the QR code: just an hour before the show, a user test showed that the QR code had expired!!! 😱 It was impossible to laser-cut another board at that point, so we decided to make another QR code, print it out and cover the old one. To make them look the same, I adjusted the color of the new QR code so it looked exactly like the wooden one. Here are the original code on the board and the new code:


We showed the changed code to the professor and fellows, and they did not even realize we had changed it. 😛 ANOTHER TRICK. TIP: Chinese domestic QR-generating websites usually do not charge, while some of the ones you find through Google charge a lot! Here we share the link to a good QR-generating website (it is in Chinese, which may be a concern).

Also, since the QR code is the only way to reach the website, once people no longer have the code they cannot visit the website anymore. So we printed out some QR codes for people to take away with them.

At the show, perhaps because our project is visually attractive, a lot of people stopped by and gave it a try. I think all of them loved it, and just as they commented, this kaleidoscope is interactive, artistic, and memorable. We are glad we successfully brought our expectations to reality. Since so many people stopped by and we needed to introduce the project every time, we did not have time to take more videos. Here are the video and pictures we took:

(above you can see the code we prepared for users to take away)

Also, we gathered some valuable observations of the users; here are a few:

  • People usually got attracted by the effect of the changing patterns, especially when somebody else was playing with the device.
  • Some people were confused at first about what to do with the device (rotating the wheel and pressing the button).
  • The images shown on the website were not arranged in order, so it was hard for users to find the image they captured.
  • When people rotated the wheel within a narrow range, the images on screen did not change smoothly.
  • The motor still came loose sometimes.
  • Some people handled it too forcefully and the whole device shook a lot.

Future improvement:

Based on the observations from our working process as well as the show, we propose the following future improvements:

  • Allow each user to select their own shapes, images, colors, etc. to create more diverse patterns.
  • Rearrange the order of the images shown on the website, and allow each user to pair his/her name with the saved picture so that they can easily distinguish their own picture from others’.
  • Show an on-screen indication when the button is pressed.
  • Try a similar concept with a small viewing hole to create a different experience.
  • Make the website look better.
  • Build a better-looking and more stable frame for the device (including the motor, mirrors and other components).
  • Show clearer user instructions.
  • As one of our users suggested, use gears with more teeth and a smaller pitch so that the drive is more sensitive.

Conclusion:

We devoted a lot of effort and time to this project. We are happy with the outcome and glad to see people really enjoy the experience. Our sincere thanks go to:

  • Professor Rudi for the Website and instructions!❤️
  • All the fellows who helped us!❤️
  • All the audience members who gave us precious suggestions!❤️

A group photo of me, my partner and Professor Rudi!

Remember to watch our vlog lol! 😛

-THE END-

Source Code:

//*code for Arduino
int playerPosition = 0;
int buttonState = 0;
const int buttonPin = 13;
const int ledPin1 = 2 ;
const int ledPin2 = 3;
const int ledPin3 = 4;
const int ledPin4 = 5 ;
const int ledPin5 = 6 ;
const int ledPin6 = 7 ;
const int ledPin7 = 8 ;
const int ledPin8 = 9;
const int ledPin9 = 10;
bool Signal = false;
int change2;
int buttonState2 = 0; // current state of the button
int lastButtonState = 0;

void setup() {
// put your setup code here, to run once:
Serial.begin(9600);
pinMode(13, INPUT);
pinMode(A1, INPUT);
pinMode(2, OUTPUT);
pinMode(3, OUTPUT);
pinMode(4, OUTPUT);
pinMode(5, OUTPUT);
pinMode(6, OUTPUT);
pinMode(7, OUTPUT);
pinMode(8, OUTPUT);
pinMode(9, OUTPUT);
pinMode(10, OUTPUT);
}

void loop() {
//put your main code here, to run repeatedly:
byte change;
buttonState = digitalRead(13);
change = (analogRead (A1) - 511) / 2;
change = (analogRead (A1) - 511) / 2;
playerPosition += change; // accumulate only after change has been read
// Serial.print(change);
// modify the signal from motor
if (change == 0) {
change2 = 0;
}
else if (change > 0 && change < 100) {
change2 = 100;
} else if (change >= 100 && change != 255) {
change2 = 200;
}
//only when the button state changes will sent a signal to processing
if (buttonState == 0 && lastButtonState == 1) {
Signal = true;
}
else {
Signal = false;
}
if (Signal == true) {
change = 0;
change2 = 0;
}

//control leds
if (buttonState == HIGH) {
//Signal = true;
change = 0;
change2 = 0;
digitalWrite(2, HIGH);
digitalWrite(3, HIGH);
digitalWrite(4, HIGH);
digitalWrite(5, HIGH);
digitalWrite(6, HIGH);
digitalWrite(7, HIGH);
digitalWrite(8, HIGH);
digitalWrite(9, HIGH);
digitalWrite(10, HIGH);
}
else {
//Signal = false;
//change2 = 0;

// change = (analogRead (A1) – 511) / 2;
// change = (analogRead (A1) – 511) / 2;
// if (change == 0) {
// change2 = 0;
// }
// else if (change > 0 && change < 100) {
// change2 = 100;
// } else if (change >= 100 && change != 255) {
// change2 = 200;

digitalWrite(2, LOW);
digitalWrite(3, LOW);
digitalWrite(4, LOW);
digitalWrite(5, LOW);
digitalWrite(6, LOW);
digitalWrite(7, LOW);
digitalWrite(8, LOW);
digitalWrite(9, LOW);
digitalWrite(10, LOW);
}
// Serial.print(Signal);
//delay(100);
//Signal = 0;

// Serial.print(buttonState);
// Serial.print(‘,’);
// Serial.println(lastButtonState);
// delay(100);

// sent and test data
Serial.write(change2);
Serial.write(Signal);
delay(1);

// Serial.print(change);
// Serial.print(‘,’);
// Serial.print(change2);
// Serial.print(‘,’);
// Serial.println(Signal);
// delay(150);

//return the last state
lastButtonState = buttonState;
}

// *code for Processing
import processing.serial.*;
import processing.video.*;
// The FTP classes used in saveftp() below (FTPClient, FTPMessageCollector, FTPConnectMode,
// FTPTransferType) come from the edtFTPj/Free library, which has to be added to the sketch
import com.enterprisedt.net.ftp.*;
Movie myMovie;
Serial myPort;
int change;
int change2;
int Signal;
int[] valueFromArduino= new int [2];

FTPClient ftp; // Declare a new FTPClient
String[] files; // Declare an array to hold directory listings
//boolean saved = false;

void setup() {
size (displayWidth, displayHeight);
background(0);
//frameRate(30);
myMovie = new Movie(this, "3.mp4");
//myMovie = new Movie(this, "4.mp4");
// myMovie.frameRate(2);
myMovie.loop();
printArray(Serial.list());
myPort = new Serial(this, Serial.list()[3], 9600);
//myMovie.resize(displayWidth, displayHeight);
}

//void signal() {
////println(millis());
// saveftp();
//}

void draw() {
// control the display of the video
while ( myPort.available()>0) {
for (int i=0; i<2; i ++) {
valueFromArduino[i]=myPort.read();
}
if (myMovie.available()) {
if (change2!=0 && change2!=1) {
//if (change!=0&&change!=255&&change!=1&&change!=200 &&change!=2&&change!=3&&Signal!=-1) {
myMovie.read();
imageMode(CENTER);
image(myMovie, displayWidth/2, (displayHeight/2)-25);
}
}
// insert a function for screen capture
buttonsave();
//read data from Arduino
change2 = valueFromArduino[0];
Signal= valueFromArduino[1];
println ("change2 : "+ change2);
println ("Signal : "+Signal);
}
}
void buttonsave() {
if (Signal==1||change2==1) {
saveftp();
print("saved to ftp");
//saved = true;
}
}
//use mouse press for test
//void mousePressed(){
// saveftp();
//}

//captured images saving and uploading *credits go to Professor Rodolfo Cossovich*
void saveftp() {
String name = "kal-"+millis() + ".png";
saveFrame("/Users/xinyigao/" + name);

try
{

// set up a new ftp client
ftp = new FTPClient();
ftp.setRemoteHost("plobot.com"); // ie. ftp.site.com

// set up listener
FTPMessageCollector listener = new FTPMessageCollector();
ftp.setMessageListener(listener);

// connect to the ftp client
println ("Connecting");
ftp.connect();

// login to the ftp client
println ("Logging in");
ftp.login("ixlab2018@plobot.com", "ixlab2018");

// set up in passive mode
println ("Setting up passive, binary transfers");
ftp.setConnectMode(FTPConnectMode.PASV);

// set up for binary transfers
ftp.setType(FTPTransferType.BINARY);

// copy binary file to server and overwrite the existing file
println ("Putting file");
ftp.put(".//"+name, ".//images/"+name, false);
// Shut down client
println ("Quitting client");
ftp.quit();

// Print out the listener messages
String messages = listener.getLog();
println ("Listener log:");

// End message - if you get to here it must have worked
println(messages);
println (" complete");
}
catch (Exception e)
{

//Print out the type of error
println("Error "+e);
}
}

Comic Project Documentation (Fall 2018) —— Skye Gao (Chen)

Assignment: Interactive comic project

Professor: Ann Chen

Date: 10/13/2018

Link: http://imanas.shanghai.nyu.edu/~xg679/Commlab/comicproject/

Story idea & synopsis:

Instead of making a purely entertaining comic, Emily and I wanted to make our comic meaningful and let people really connect to it. Therefore, we chose the topic of gay love.

The comic story follows a timeline and tells the story of two boys from the same neighborhood. The story begins with the boys meeting in the backyard, becoming best friends and feeling naturally connected. When they grow older, they go to school together, and their relationship is criticized a lot by the people around them. They face a lot of challenges but get through them together. Finally, both they and other people embrace their identities.

We think this experience may represent the experience of many gay people, and we hope this story will help people understand more about the community.

Process:

In order to make the comic easy to understand and really connect with people, we decided to make the layout as simple as possible. We wanted: 1) a single linear storyline; 2) simple but clear drawings and interactions; 3) minimal text. I prepared and drew all the comic assets

(thanks to Frank for his iPad). It took almost a week to finish all the drawings.

With all the assets prepared, we started the structuring and coding. The initial plan included:

  1. Scroll to change the panels.
  2. Insert two interactions: 1) on the panel with the crowd, users click to see what people are saying (speech bubbles with text); 2) swipe to get rid of the depressive thoughts (messy clouds).

In the process of coding, we meet several challenges and made a series of change.

  1. For the scrolling effect, considering users’ reading habits, we were concerned that people might not notice there is an interaction on a given panel and thus miss it. To make sure the interaction is seen, we decided to let users click to change the panel instead of scrolling. Here we referred to the slideshow example from W3Schools. Jingyi helped us make it happen.
  2. For the interaction of swiping away the clouds, we found that an erasing effect would require the p5 canvas, which had not been covered in class yet. So we made an adjustment: instead of swiping, users click on the clouds to make them disappear by changing their opacity. To make it clearer, we attached an image of a duster to the mouse pointer, adapting an example from Stack Overflow. We put a button on the panel to let people pick up the duster and then clean the clouds. As the clouds disappear, the image behind them shows up, in which the two boys are helping each other. Another student from the lab (whose name I don’t know) and Konrad helped with this part.
  3. For the interaction of clicking to see bubbles, we initially used a button that made the bubbles show up. However, after talking to some fellows, we got feedback that the button seemed unnecessary and broke the consistency of the story. So we found a better plan: blank bubbles show up first, and when users click on them, the text inside is revealed. To do so we referred to the rotate-photos examples we learned in class. Frank and Jingyi helped us implement the code.
  4. We had a lot of panels, which made the comic a little long, so we deleted or combined some panels to make the story more effective.

Future improvement:

  1. We want to add background music to the comic, and make it change according to the story.
  2. We want to make the swiping interaction happen.
  3. We may want to add more interactions and narrative to the comic to make it more attractive.

Week 11: Interactive Video Project (Fall 2018)—— Skye Gao (Chen)

Assignment: Interactive Video project

Project name: Robota

Partners: Skye, Zane and Candy

Professor: Ann Chen

Date: 11/19/2018

Link: http://imanas.shanghai.nyu.edu/~xg679/Commlab/videoproject/

Description: 

For our video project, our group used stop motion to tell a story about a small robot. The robot comes to life at the beginning when it is charged (by the user). It goes exploring the world for a while and meets some other toys. It thinks the other toys should be alive like it is, so it tries to play with them. But it turns out that when its battery runs out, it returns to being a dull toy as well.

The original idea for the video as well as the website was to have a crank on the robot, which the user would scroll to wind up at the beginning. Once the crank was wound up, the video would start playing and run to the end without other interruptions.

The mood of the story is meant to be a little sad. Through a robot’s story, the purpose of our project is to make people think about the meaning of life that humans endow robots with, and thus to reflect on the relationship between humans and robots.

Process:

We first wrote down our main storyline and visualized it with a storyboard.

(Storyline)

(Storyboard)

   

As our main character is a small robot, we ordered the robot model on Taobao. However, the delivery had some problems, so we did not get the robot until Wednesday.

Since our original idea for the interaction was scrolling to wind up a crank, we 3D-printed a crank from an online model. However, when we started shooting, we met several challenges.

First of all, we found it really hard to attach the crank to the robot. Since we were doing stop motion, we needed the crank to be stable yet adjustable on the robot. Since tape would appear in the scene (which we did not want), we tried to drill a hole in the robot; however, the robot is made of hard plastic, so after several trials we had to give up on that idea.

So, to keep our storyline, instead of using a crank to power the robot we decided to use a charger and modified the story a little. When users see the website, they need to click on the plug to activate the video.

Also, we could not get a 500mm camera lens for the first two days, but we finally managed to get one.

The whole shooting process took about three days; we used Dragonframe to shoot the stop motion. During the process, we added some scenes and deleted others in consideration of the practicality and consistency of the whole video. Here are some shots from the process:

After we finished shooting, Candy and I added sound effects to the video while Zane added background music and worked on the website. The sound effects we used are mostly from iMovie. We cut, reversed and modified tones to create the effects we wanted.

Considering the time limit, the final outcome of our project is quite simple: users just click the plug to start the video and watch it to the end.

Reflection:

All three of us contributed to the project with dedication. I really like the final outcome of the video. However, from self-reflection as well as audience feedback, we also have a lot to improve:

Because of all the issues mentioned before, we started shooting really late, which is the main reason we did not have enough time to implement more interactions. Also, because video quality is compromised when files are sent through WeChat and email, I had to send the audio to Zane to combine with the video. Through this process the sound-effect levels were not adjusted well, so during the presentation the sound effects were almost inaudible.

So, for future improvement, I think we can add more interactions during the video, like scrolling to make the robot land on the ground and clicking to help it climb up the books. Also, we need to add more text or scenes to make the story more explicit, since people were confused by the ending.

Communication Lab Final project (Fall 2018)——Skye Gao (Chen)

Final project: “A friend of my friend died yesterday”

Partner: Alex Zhang

Professor: Ann Chen

Date: 12/11/2018

Link: http://imanas.shanghai.nyu.edu/~xg679/Commlab/CommlabFinal/desktop/

  1. Concept and design

Our project dives into the topic of digital death and its presence on various social media platforms. With media communication permeating our everyday lives, fundamental topics of humanity such as death, loss, grief, and mourning are increasingly transferred to and negotiated in media environments. Social media platforms, with their strong focus on emotions and interactions, are especially prone to expressions of grief and mourning, and provide an expansion of traditional mourning spaces in society. Holding the question of whether social media facilitates or misses certain norms in comparison to traditional mourning rituals, our project aims to approach this controversial topic by visually displaying pieces from various social media sites which represent the incident of, aftermath of, and social reactions to one’s death online.

With the pre-context “A friend of my friend died yesterday”, the project sits in an intermediate space between artistic fiction and real-life commentary. It consists of mock-ups of the interfaces of the macOS desktop, Facebook, YouTube and WeChat, which are connected by imitating the user experience of the macOS notification system. By re-creating this experience, we aim to emphasize the digital nature of our project and thus immerse the audience in the context we designed.

The whole project starts off with a preface set in a window on the “desktop”. The introduction includes the fictional context “A friend of my friend died yesterday” and several questions related to it, which we expect the audience to keep in mind while going through the following content. We also place three notification boxes in the corner of the page which navigate the user to the next stage.

The following part of our project consists of mock-up pages of three social media platforms: WeChat, YouTube, and Facebook. We created GIF pieces which represent the incident of, aftermath of, and social reactions to one’s death online, and arranged them according to the layout of each interface. Users can click the hover boxes on an interface page to see the detailed content of each piece.

The project is not given an ending because we want it to be an open discourse, and we expect the audience to think deeply about death in the digital age based on the fundamental questions in the preface.

Sources:

“Do Not Click ‘Like’ When Somebody Has Died: The Role of Norms for Mourning Practices in Social Media” by Anna J. M. Wagner.

“User Experience of a Heartbreak” by Sarah Hallacher.

2.  Process

I researched and chose the project idea at the very beginning because this is a topic I had been interested in for a long time. When we shared our ideas in class, Alex showed great interest in my proposal, so we decided to work on it together.

In my proposal, I planned to design a storyline to present the concept. However, after the professor showed us another project, “User Experience of a Heartbreak” by Sarah Hallacher, which also deals with a social media related topic, we were inspired by its way of displaying information and decided to take this approach for our project’s form. Based on that, we first discussed what kind of information we would include in our project and sketched it. At first, we had pieces from more social media platforms like Instagram and Google Chat; however, considering the content and workload, we narrowed the scope down to the three platforms presented.

I first started working on the Facebook page to ensure the feasibility of our plan. I used Canca.com and Marvel.com to edit images and used an online GIF generator to make the GIF pieces. Then I mocked up the Facebook page in the same way and used the hover-to-show-text example from W3Schools to indicate the entrance for each display.

Then I imitated the desktop interface of macOS while Alex worked on the YouTube page, creating two videos: one about the memories of the deceased, made up of edited photos, and the other a piece about the funeral.

In our first round of user testing, we presented the macOS interface and the Facebook page. We got the following main comments about our content and visual layout, and made changes accordingly.

  • The first version of our desktop (below) did not look like a desktop but like a slide, so we later changed it to the current version.
  • The Facebook interface as well as the descriptions we added were confusing, but we did not change much in the end considering the time limitation. Also, we expected that when the remaining parts of the project were finished, it would make more sense to the user.
  • Our idea was not fully expressed just by going through these parts, so we later added an introduction at the beginning.

After the user test, we continued with the remaining parts for WeChat and YouTube and made modifications according to the feedback we had already received. While Alex continued building the YouTube page, I changed the background of the desktop and added a preface to the macOS interface with a typing effect, which I learned from this online example.

3. Feedback and future improvement.

From both the user tests and the presentation, people really appreciated our concept of death and digital life and the editing work we put into creating the GIFs and the social media interfaces. However, we also got the following critiques, which are mostly related to how our idea is conveyed:

  • The project is too open-ended, so people get confused about whether it is a real narrative or just a general commentary, which makes them unable to follow the flow at first.
  • The setting “a friend of my friend” is too ambiguous.
  • Despite the heavy content, it doesn’t really have an emotional draw because it’s too generic.
  • It might not have enough visual cues to make sense.
  • There seems to be a disconnect as to whether it’s offering insight as to how one should react to death in the modern age or is offering a commentary of how people do react in the modern age, which is similar to the first one.
  • The descriptions beside the images are not consistent, and some of them are not complementary.

So in relation to these critiques, people also provided corresponding solutions:

  • The general flow seems to be through questions, however, it might make more sense if it flowed through a specific narrative or developed questions more relevant to a specific narrative.
  • It can be better to create a more linear storyline rather than just a demonstration along with possibly a guide towards how to function.
  • Adding text or names could create a better personal connection.
  • It would have more of an impact if it was more exploratory of a specific narrative which would, in its execution, develop a deeper concept and message.
  • The level of message and emotion of each message should be more consistent.

Drawing together all these comments, we think the main problem of our project is how the messages and ideas are conveyed. The audience’s flow of thought should be clearly guided and progressed by our project. Thus, for future improvement, we should focus on organizing the flow of our project better. As the audience suggested, we can arrange the information around a storyline or place it in a specific scenario, which can better help users make a personal connection. Also, we need to rephrase the description/interpretation for each image so that users can understand it. Moreover, we think implementing more complementary messages into the display can make the content of our project richer and more profound.