Interaction Lab: Stupid Pet Trick Project —— Skye (Spring 2018)

Skye Gao

Professor Rudi

Stupid pet trick project

February 23rd

PROJECT SUMMARY: For this project, we were required to use a digital/analog sketch to build a simple interactive device. It must respond to a physical action or series of actions a person takes, and it must be amusing, surprising, or otherwise engaging. For my project, I decided to choose my equipment first and then design a circumstance for my device.

MATERIALS:

  • 1 * Arduino Kit and its contents, including:
  • 1 * Breadboard
  • 1 * Arduino Uno
  • 1 * Buzzer
  • 1 * 1M (1 megohm) Resistor
  • 1 * Vibration Sensor
  • 1 * USB A to B Cable
  • Jumper Cables (Hook-up Wires)

BUILDING PROJECT: I planned to use a sensor as my interactive input, and the sensor I wanted to use was a vibration sensor. I tried to build a circuit in which the vibration sensor controls a buzzer. The circuit I drew looks like this:

There were two ideas I had for the circumstance in which the device works:

The initial idea was to make a toy (I have a toy bunny) that makes different sounds when people spank it, shake it, or touch it gently. The idea came from my last recitation, where I also used the vibration sensor to control the buzzer: when I knocked on the sensor, the buzzer made noises, which was quite funny.

The circuit was easy to build; my problem was how to write the sketch properly so that the outcome would work as I expected. For the first trial I used if conditions to map the degree of vibration to a corresponding noise. The professor helped me try two kinds of code, shown below:

/* using map -> it doesn't change the frequency of the speaker, but the delay */

// map it to the range of the analog out:
outputValue = map(sensorValue, 0, 1023, 0, 880);
// change the analog out value:
if (outputValue >= 20) {
analogWrite(analogOutPin, outputValue);
delay(outputValue);
} else {
noTone (9);
}

/*using tone / noTone with IF */

if (sensorValue >= 150) {
tone(9, 440);
delay(100);
} else if (sensorValue > 50 && sensorValue <= 150) {
tone(9, 24.50);
} else {
noTone(9);
}

The outcome was not ideal. When I shook the bunny at different intensities I could hear the tone change, but the sound was not consistent or well matched to my shaking. The different sounds got mixed together even when I shook it with quite different force. To figure out why, I checked the serial monitor: the values coming from the sensor were not stable. A gentle shake could give a value over 100, while a fairly strong shake could range anywhere from 20 to 150. Given this problem, the professor suggested that I take the instability of the vibration sensor into consideration. I also needed to think about how to let the people trying it understand what I want them to do, i.e. how hard they are expected to touch the bunny and what responses they should get for their action.
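
One possible way to soften that jitter, which I did not end up using, would be to average several quick readings before reacting. A minimal sketch of the idea, assuming the vibration sensor stays on analog pin A0 as in my circuit:

const int sensorPin = A0;   // vibration sensor (assumed to be on A0, as in my circuit)

void setup() {
  Serial.begin(9600);
}

void loop() {
  long sum = 0;
  for (int i = 0; i < 10; i++) {   // take 10 quick samples
    sum += analogRead(sensorPin);
    delay(2);
  }
  int smoothed = sum / 10;         // the average jumps around much less than a single reading
  Serial.println(smoothed);
}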

Considering the unstable readings, I found it hard to let my audience understand my idea, and, more seriously, I could not be sure what the outcome would be, so the idea was not reliable. I needed to come up with a better one.

I did not want to throw away all my previous thoughts, so my second idea was based on the first one. At this stage, my top priority was to find a stable way to use the vibration sensor, i.e. to get a consistent reading for every action. The solution came as a flash of inspiration: while I was staring at the serial monitor trying to think of a solution, I happened to lay the bunny on the desk and push on his belly without thinking. I suddenly noticed that when I did nothing with the bunny, the readings stayed around 0, while every time I pushed his belly they looked like this:

The incoming readings seemed much more stable this time, which made me really excited. I no longer needed to worry that the bunny would make noise by himself, since the "mute" and "unmute" states were clearly separated. So I continued to think about a suitable and vivid context for this device.

Seeing the bunny lying there while I pushed on his belly, I realized the action looked a lot like saving a person with CPR. So I thought it would be a great idea to name this project "Save the Bunny with CPR". People push on the bunny's belly to "save" him, and there is a response each time they push, as well as a response at the end to indicate whether they have saved him successfully or not.

To achieve this effect, I did not change my circuit but modified my sketch based on the previous one. As can be seen in the photo above, the sensor value reaches over 100 on a push, so I kept the if condition and set the threshold to 80: when the sensor value is over 80 (i.e. you push really hard), the buzzer makes a sound. That is the feedback for each individual push. This part of the code is below:

if (sensorValue >80 && sensorValue <= 150) {
tone (9,1500);
count = count + 1;
} else {
noTone (9);
count=count;
}

As a response to the whole procedure, I decided to use a counter to track how many times people push, so that once they reach a certain number of pushes, the buzzer plays a short melody with the tone() function to indicate the result. This way people stay interested in continuing and fully experience the procedure of saving somebody (instead of trying once or twice and losing interest), and they get a reward for it, which I found more realistic and inspiring.

This part of the code, as well as the whole sketch, is shown in the source code at the end of this post.
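
Stripped down, the counting logic looks like this (the numbers are the ones I actually used; the complete sketch, including the melody itself, is in that source code):

if (sensorValue > 80 && sensorValue <= 150) {
  tone(9, 1500);       // a short beep as feedback for one push
  count = count + 1;   // remember one more valid push
} else {
  noTone(9);
}
if (count == 50) {     // after 50 pushes the bunny is "saved"
  // play the eight-note reward melody with tone(), then start over
  count = 0;
}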

Initially I set the accepted sensor range to between 80 and 150, based on the readings shown in the serial monitor and my personal sense of a pushing force likely to feel similar to doing CPR. The readings I used as a reference are below:

I ran into one problem while testing the range. At first it worked well for a while, which was really exciting. However, at some point the readings became unstable again: when the device stayed still the values hovered around 10, and when I pushed it they only reached about 20, no matter how much force I used. It looked like this:

Since it had worked for a while, the code should have been fine, so the problem was probably in the circuit. After checking the circuit again, I found that the sensor connection had come loose, probably when I moved the whole device, and that was the cause of the problem. After reconnecting it, the output went back to normal and worked just as I expected!!

DECORATION: The next part was the decoration (my favorite part). Since the context for my device is "save the bunny with CPR", I wanted people to understand at a glance that they are supposed to push on the bunny's belly as if giving CPR. So I drew a display board with big letters saying "SOMEBODY SAVE THE BUNNY with CPR!!!", plus a doctor and a nurse to further suggest that the bunny is having a medical emergency. When people see this board, they can relate it to emergencies in daily life, and the words tell them what to do. To show exactly what to do, I also drew a heart and attached it to the bunny's belly (which conveniently covers the sensor), so people know where to put their hands. And just for fun, I put crosses over the bunny's eyes to show he is dying. The finished decoration looks like this:

What's more, I wanted my project to carry more meaning than just being an entertaining device. So I looked up the standard CPR procedure and attached it to my display board, so that while people are playing with my project they can also learn how to use CPR to save a life in the real world. I found this educational and practical.

(Learn how to do CPR!):

SHOW TIME: With that, my project was complete, and next came the show. The project and its decoration did turn out to be attractive: many people came to my table to give it a try and asked me questions about the design. Here is my project at the show:

FEEDBACK: I got several pieces of feedback from my peers as well as professors.

  1. People said the project is really cute and interesting!! 🙂
  2. Regarding the response melody, some people immediately understood that they had successfully saved the bunny, but others interpreted the melody as "fail" (probably because of the melody itself and the crosses on the bunny's eyes), so I had to tell them they had succeeded every time the melody played. One suggestion I got was to change the melody to a more cheerful one, which is very reasonable.
  3. Some people started with a really soft touch because they were not sure what to do, and I had to tell them to push harder. I may need to add this to the display board so people understand they need to push hard (though I also think learning it through trial is part of the point). One related suggestion was to add feedback for each individual push, like an LED that lights up each time the bunny is pushed. (But there IS a sound each time the sensor vibrates past the threshold… maybe they did not hear it; maybe a visual cue works better than sound?)
  4. Some people asked how they would know whether they succeeded or not. That was also a concern of mine during the design. It would be better to have another melody or some other signal to indicate failure as well as success, but that would be much more complicated and I did not know how to do it at the time. I think I will find a way to figure it out in the future.
  5. One professor found my project interesting and suggested putting it to practical use… (he said I could put it in the health centre). That would be really nice, but I think I first need to fix the drawbacks brought up in the feedback. Also, for practical use the device should be more accurate and closer to reality; ideally it would give people an experience as similar as possible to the real CPR procedure, so that it is genuinely educational and applicable. (Though being a toy like this is fine too… I think…)

CONCLUSION: This project was really interesting and inspiring. It is my first project and I really enjoyed the process of completing it. There remain many problems for me to figure out and improve, and I think that is my goal for the next stage of studying IMA!

Source Code:

/*
Melody

Plays a melody

circuit:
– 8 ohm speaker on digital pin 8

created 21 Jan 2010
modified 30 Aug 2011
by Tom Igoe

This example code is in the public domain.

http://www.arduino.cc/en/Tutorial/Tone
*/

#include "pitches.h"

// notes in the melody:
int melody[] = {
NOTE_C4, NOTE_G3, NOTE_G3, NOTE_A3, NOTE_G3, 0, NOTE_B3, NOTE_C4
};

// note durations: 4 = quarter note, 8 = eighth note, etc.:
int noteDurations[] = {
4, 8, 8, 4, 4, 4, 4, 4
};
// These constants won’t change. They’re used to give names to the pins used:
const int analogInPin = A0; // Analog input pin that the potentiometer is attached to
const int analogOutPin = 9; // Analog output pin that the LED is attached to

int sensorValue = 0; // value read from the pot
int outputValue = 0; // value output to the PWM (analog out)
int count = 0;

void setup() {
// initialize serial communications at 9600 bps:
Serial.begin(9600);
pinMode(9,OUTPUT);
}

void loop() {
// read the analog in value:
sensorValue = analogRead(analogInPin);
// map it to the range of the analog out:
//outputValue = map(sensorValue, 0, 1023, 0, 50);
// change the analog out value:
//analogWrite(analogOutPin, outputValue);

if (sensorValue >80 && sensorValue <= 150) {
tone (9,1500);
count = count + 1;
} else {
noTone (9);
count=count;
}
if (count == 50){
// iterate over the notes of the melody:
for (int thisNote = 0; thisNote < 8; thisNote++) {

// to calculate the note duration, take one second divided by the note type.
//e.g. quarter note = 1000 / 4, eighth note = 1000/8, etc.
int noteDuration = 1000 / noteDurations[thisNote];
tone(9, melody[thisNote], noteDuration);

// to distinguish the notes, set a minimum time between them.
// the note’s duration + 30% seems to work well:
int pauseBetweenNotes = noteDuration * 1.30;
delay(pauseBetweenNotes);
// stop the tone playing:
noTone(9);
}// no need to repeat the melody.
count = 0;
delay(100);
}
// print the results to the Serial Monitor:
Serial.print("sensor = ");
Serial.print(sensorValue);
Serial.print("\t output = ");
Serial.println(outputValue);

// wait 2 milliseconds before the next loop for the analog-to-digital
// converter to settle after the last reading:
delay(2);
}

Interaction Lab Midterm project: test your eyesight by yourself —— Skye (Spring 2018)

Midterm project: Testing your EYE-Q!

Partner: Louis

Goals and original ideas

For the midterm project, we wanted to build a device that imitates the process of an eyesight test. The idea comes from the experience of having our eyesight tested in the hospital: a doctor has to run the test for us, which is not very efficient since we have to wait in line. Also, having someone watch you during an eyesight test can make some people, especially those with poor eyesight, feel uncomfortable. Now that digital devices can be applied everywhere, we think a device that lets patients run their own eyesight test could make the process more private and efficient.

The original thought was that when the test starts, one image from the eyesight test chart is shown on the screen, and users use a remote control to choose the corresponding direction.

One of the images looks like this (it means "down"):

The direction is random, and the size of the image changes from big to small. Every time the user makes a correct choice, the image becomes smaller; if the user makes a wrong choice, the size stays the same but the direction changes. A chart of the ideal process looks like this:

Material:

  • 1 * Arduino Kit and its contents, including:
  • 1 * Breadboard
  • 1 * Arduino Uno
  • 1 * Remote control
  • 1 * USB A to B Cable
  • Jumper Cables (Hook-up Wires)
  • 1 * Infrared receiver
  • 6 * Big buttons
  • 6 * 220K resistors

Process and problems:

Since Louis was occupied with other things at this time, I was mostly responsible for this part.

Since we had not learned how to use the remote control in class, I checked some instructions online and consulted Nick for the details. The circuit I built looked like this:

It took me a lot of time to enter all the data into Arduino and Processing. To imitate the process of eyesight testing as closely as possible, I searched online for the exact sizes of the eyesight chart images. The information for a 5-meter eyesight test chart looked like this (sorry, it is in Chinese…):

Then I found one problem: these sizes are physical sizes for human eyes, while on a computer screen everything is made of pixels, which means the image size would come out differently. To make sure the image on the screen looks exactly the same size to the eye, I needed to convert the physical sizes into pixels. To do that, I found a website that converts mm to pixels. The link is here: https://www.unitconverters.net/typography/millimeter-to-pixel-x.htm
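
To give one concrete example of the conversion (assuming the converter's default of 96 pixels per inch, which the numbers in the size[] array in my source code below appear to be based on): the largest optotype, for acuity level 4.0, is listed as 72.7 mm at a 5-meter distance, and 72.7 / 25.4 × 96 ≈ 274.85 pixels, which is the first value in that array. The smallest one (level 5.3) is about 3.64 mm, or roughly 13.76 pixels.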

I thought this would give the exact result. However, after some tests by the professor, it turned out that the actual size shown on the screen was still not the same as the intended physical size, and there was no precise way to convert it. So I used a mathematical workaround. Instead of changing the size of the images, I calculated the proportion between distance and image size and changed the viewing distance from 5 meters to 3.7 meters, so that the visual size stays the same (presumably the images rendered smaller than specified because the screen's real pixel density is higher than 96 ppi; moving the viewer closer by the same proportion, 3.7 / 5 = 0.74, compensates for that). To reach that distance, the professor helped me borrow a long USB cable from the resource center, which looks like this:

After this test, the professor gave me another suggestion: the remote control was not convenient to use, as it is too small and may leave users confused about which button to press. The experience of using a remote control was also not very interesting as interaction. Meanwhile, in my other tests I found that the data sent by the remote control was not stable enough: it could send several codes at once and make the image change several times even though the button was only pressed once. Considering all of this, I decided to replace the remote control with buttons and hoped that would solve these problems. Since I wanted the operating device to be like a box, the buttons in my kit were too small, so I borrowed some big buttons from the resource center. The buttons and the circuit I built looked like this:

After all the components and data were in place, I moved on to the logic in Processing. This was the most challenging part for me, as I am not good at coding.

For the coding, in order to reach our ideal outcome, we first made a cover screen with some instructions and named one button "begin": every time "begin" is pressed, the test starts and the images show up. Then we divided the coding into two parts: the right-choice part and the wrong-choice part. We dealt with the "right" part first (I was still responsible for this part since my partner had not returned yet). I tried three approaches. First, I simply used "if/else" conditions and booleans to classify all the variables, but that did not work at all. After consulting our fellows (Nick and Leon), I added a switch statement and several "if/else" conditions to match all the corresponding values. This version did make the image smaller each time a right choice was made, but it exposed another problem: a button can also send several values in a row while it is held down, so the data was still not stable. To filter the data further, I consulted another fellow (Lewis) and learned to compare each incoming value with the previous one and reset it after handling it, so that repeated readings from the buttons do not trigger the logic more than once. With that, the right part worked. I also added text on the screen to show the eyesight level corresponding to each image size. The outcome is shown below.
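
The same trick in miniature, on the Arduino side (a sketch for illustration only; our actual fix lives in the Processing code at the end, which compares valueFromArduino with the previous value pval before reacting). The pin number here is just an example; our real buttons sit on pins 5 to 10:

const int buttonPin = 7;   // example pin, not our exact wiring
int lastState = LOW;

void setup() {
  Serial.begin(9600);
  pinMode(buttonPin, INPUT);
}

void loop() {
  int state = digitalRead(buttonPin);
  if (state == HIGH && lastState == LOW) {
    Serial.write(1);       // report the press exactly once per physical push
  }
  lastState = state;       // remember this reading for the next pass
  delay(10);               // a small delay also helps ignore contact bounce
}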

Then I moved on to the "wrong" part. By then my partner had come back, so we did the following work together. We assumed the logic for "wrong" would be similar to the "right" part, just with different variables. Unfortunately, it turned out there was some logical conflict between the two parts. We spent a whole day on it and still could not figure it out. Since we did not have enough time, we decided to simply replace the "wrong" logic with a "stop" button: whenever users cannot make out the direction (which probably indicates a wrong answer), they press "stop" to check their eyesight level. We knew this was not the best solution, but we had no idea how to work out the logic at the time…

After adding the "stop" button, we added an end screen to show the result. We wanted the test to start over after "begin" is pressed again, but we could only make it return to the first image instead of the cover. That is another shortcoming.

Context and decoration:

After barely finishing the functional part of the project, we moved on to the decoration and context. (I was responsible for this part while Louis was doing the final modifications.) I used a box to hold all the circuits and turned it into the operating device, which looks like this:

To let users know what to do, I drew the chart images and the "begin" and "stop" labels beside the corresponding buttons. To make the instructions clearer and give the project a full context, I also made two storyboards with specific instructions, which looked like this:

Also, as you can see, to make the project entertaining as well as functional, we added the Minion characters to our context (because they wear glasses…). We also added these characters on the screen.

Outcome:

Here is the outcome tested by one of our friends:

User Testing:

For our project: most of the people who used it found it useful and easy to operate. They could immediately understand what they were supposed to do and found the process very similar to their experience of eyesight testing. Some really liked the Minion figures because they are cute. Some appreciated the idea of adapting interactive devices for medical use. Many people tried it again and again to see whether the result matched their tests in hospitals. One user liked that nothing about the device made her think of Arduino, because the decoration let her focus on the function.

Here is one video of user testing:

Here some suggestions that we received for improving our project:

  • give some feedback about wrong choices, and show the result after several wrong attempts, to prevent people from guessing the answers
  • make the buttons closer together so that users can operate them without looking at them (like a game controller)
  • to apply this in real life, take special conditions into consideration, such as color blindness and other eye diseases
  • do more research on precision and make sure the distance is correct
  • add instructions and logic for covering one eye
  • take different ages into consideration and make it more suitable for kids

Considering all these suggestions, we conclude that to improve our project we first of all need to fix the logic for wrong choices and add more indications. To provide a better user experience, we should also make the controller more comfortable to operate and consider different user requirements. Finally, we should keep working to make the device more accurate.

For someone else's project: the project I remember most was a drawing book. It was a book with several tabs on it, and the inputs were light sensors under the tabs. Each time you open a tab, you can do a certain kind of drawing on the screen, and when you close the tab the images disappear. The idea was very interesting, since it offered another way of drawing. However, there were some problems during my test. The main one was that the light sensors were very sensitive, which made the data unstable and affected the outcome; I think using other components like buttons or pressure sensors could be a better idea. Secondly, the user instructions were not very clear: when I looked at the device, I was not sure what to do until the developer told me. I think more thought should go into choosing the appropriate components and into the user experience.

Source Code:

//Arduino
const int buttonPinBegin = 6;
const int buttonPinLeft = 7;
const int buttonPinUp = 8;
const int buttonPinDown = 9;
const int buttonPinRight = 10;
const int buttonPinStop = 5;

int buttonStateBegin = 1;
int buttonStateLeft = 1;
int buttonStateUp = 1;
int buttonStateDown = 1;
int buttonStateRight = 1;
int buttonStateStop = 1;

int Direction = 0;

void setup() {
Serial.begin(9600);
pinMode(6, INPUT);
pinMode(7, INPUT);
pinMode(8, INPUT);
pinMode(9, INPUT);
pinMode(10, INPUT);
pinMode(5, INPUT);
}
void loop() {
buttonStateBegin = digitalRead(6);
buttonStateLeft = digitalRead(7);
buttonStateUp = digitalRead(8);
buttonStateDown = digitalRead(9);
buttonStateRight = digitalRead(10);
buttonStateStop = digitalRead(5);

if (buttonStateRight == HIGH) {
//Serial.println(2);
Direction = 2;
} else if (buttonStateLeft == HIGH) {
Direction = 1;
} else if (buttonStateUp == HIGH) {
Direction = 3;
} else if (buttonStateDown == HIGH) {
Direction = 4;
} else if (buttonStateBegin== HIGH) {
// Serial.println(‘6’);
Direction = 5;
} else if (buttonStateStop == HIGH) {
Direction = 6;
}else{
Direction =0;
}
//if (Direction != 0){
//Serial.println(buttonStateUp);
Serial.write(Direction);
Direction =0;
// irrecv.resume(); // Receive the next value

delay(100);
}

// Processing
import processing.serial.*;
Serial myPort;
int valueFromArduino;
PImage leftimg, rightimg, upimg, downimg;
PImage cover,end;

PImage [] img ;
String[] direction = {"up", "down", "right", "left"};
//the size of “E” from 4.0 to 5.3
float [] size = {274.847244094, 218.305511811, 173.404724409, 137.763779528, 109.417322835, 86.929133858,
69.051968504, 54.840944882, 43.577952756, 34.620472441, 27.477165354, 21.845669291, 17.348031496, 13.757480315} ;
int i= round(random (0, 3));
float s= 4.0;
int counter = 0;
String D;
int error=0;
int pval=0;
int transparency = 255;

int yPos, yPos1 = -500;
int xPos, xPos1, xPos2, xPos3 = -100;
int interval = 1750;
boolean state = false;
boolean startover=false;

void setup() {
size (displayWidth, displayHeight);
//fullScreen();
background (0);
printArray(Serial.list());
myPort = new Serial(this, Serial.list()[3], 9600);
// Images must be in the "data" directory to load correctly
img = new PImage[4];
img[0] = leftimg = loadImage("E left.jpeg");
img[1] = rightimg = loadImage("E right.jpeg");
img[2] = upimg = loadImage("E up.jpeg");
img[3] = downimg = loadImage("E down.jpeg");
D = "0";
cover = loadImage("cover.jpg");
cover.resize(displayWidth, displayHeight);
end = loadImage("end.png");
}

void draw() {
//receiving data from Arduino
while ( myPort.available() > 0) {
valueFromArduino = myPort.read();
println (valueFromArduino);
}

background(cover);
textSize (100);
fill (0, 0, 0,150);
noStroke();
textMode (CENTER);
tint(255, transparency);
text("Press 'BEGIN' to start!", 200, height*6/7);
imageMode(CENTER);

if (valueFromArduino == 5 && pval != 5) {
//valueFromArduino = 0;
background(255);
imageMode(CENTER);
D = direction[i];
textSize (50);
}
fill(255);

switch (D) {
case "up":
background(255);
image(img[2], width/2, height/2, size[counter], size[counter]);
break;
case "down":
background(255);
image(img [3], width/2, height/2, size[counter], size[counter]);
break;
case "left":
background(255);
image(img [0], width/2, height/2, size[counter], size[counter]);
break;
case "right":
background(255);
image(img [1], width/2, height/2, size[counter], size[counter]);
break;

}
//Correctly guessed “up”
if (D == "up" && valueFromArduino == 3 && pval != 3) {
pval=9;
counter+=1;
s=s+0.1;
i= round(random (0, 3));
D = direction[i];
//Correctly guessed “down”
} else if (D == "down" && valueFromArduino == 4 && pval != 4) {
pval=9;
counter+=1;
s=s+0.1;
i= round(random (0, 3));
D = direction[i];
//Correctly guessed “left”
} else if (D == "left" && valueFromArduino == 1 && pval != 1) {
counter+=1;
s=s+0.1;
i= round(random (0, 3));
D = direction[i];
//Correctly guessed “right”
} else if (D == "right" && valueFromArduino == 2 && pval != 2) {
pval=9;
counter+=1;
s=s+0.1;
i= round(random (0, 3));
D = direction[i];
}
//STOP Pressed or 14 Correct Answers
if (valueFromArduino==6 || counter == 14) {
state=true;
}

if (state==true){

background (#FCE58F);
pushMatrix();
fill(0, 0, 0,150);
textSize(90);
text(" THIS IS YOUR EYESIGHT LEVEL:", 10, height/3);
popMatrix();
textSize(300);
fill(255,0,0,150);
text(s,200, height*7/10);
fill(0, 0, 0,150);
textSize(80);
text(" PRESS 'BEGIN' TO TRY AGAIN!", 90, height*7/8);
image(end, width/2, height/10);
startover=true;
}
//Press BEGIN to start over
if (startover == true && valueFromArduino == 5 && pval != 5) {
background(255);
imageMode(CENTER);
counter=0;
D = direction[i];
textSize (50);
s = 4.0;
state=false;

}

pval=valueFromArduino;
}

Interaction Lab Final Project: Kaleidoshare —— Skye Gao (Spring 2018)

Final project: Kaleidoshare

Partner: Louis Veazey

Idea and inspirations:

For our final project, I came up with the idea of building a device around the elements of a kaleidoscope. The inspiration comes from my own experience of playing with a kaleidoscope in childhood. Seeing changeable, amazing patterns come from our own hands was a fantastic experience, and it is also meaningful for children's artistic appreciation and imagination. For a traditional kaleidoscope, though, the user experience is quite private and temporary: you can only look at the pattern with one eye because of the small scale, and it is hard to share the outcome with others because the pattern is so unstable. Considering all of this, I wanted to combine the physical principle and artistic elements of the kaleidoscope with digital tools to advance the user experience, making it more interactive, multisensory, shareable and memorable.

Materials: 

  • 1* Arduino Kit and its contents, including:
  • 1 * Breadboard
  • 1 * Arduino Uno
  • 1 * USB A to B Cable
  • Jumper Cables (Hook-up Wires)
  • 1 * DC motor
  • 2 * 1K resistors
  • 1 * 10K resistor
  • 1 * Big button
  • 9 * 220K resistors
  • 3 * Mirrors

Working process:

After discussing it with Louis, we agreed the idea was feasible, so we started to work on it. We divided the project into two parts: the functional part, i.e. making the basic components and code work together, and the experiential part, i.e. the physical components and outer structure that provide the ideal user experience.

Before anything else, we made a design for how the whole device would work, and built a 3D model to present the idea. The pictures are below.

The idea is that the back of the box (where the star is) holds the computer screen, where Processing presents the images. On the other side of the box is a hubless wheel, which people look through to see the screen and rotate to change the patterns. Between the screen and the wheel sits a triangular mirror prism that reflects the images. With this design we tried to combine the physical principle of the traditional kaleidoscope with digital media to create a new kind of experience.

Also, to make the kaleidoscope shareable and memorable, we thought of adding a button that saves the image on the screen and delivers it to people, so they can keep and share their favorite patterns.

We got some inspiration for the rotating mechanism and the structure of the box from two YouTube videos; here are the links:

Demo & DC motor as input:

We started with the functional part. Learning from our midterm project, this time we tried to keep the code as simple as possible. We found a demo on YouTube that displays exactly the effect of a kaleidoscope; here is the link:

We planned to use an Arduino input to play/pause the video, creating the effect that users are controlling the change of the patterns. 🌚 YES, WE PLAYED A TRICK.

As for the input, to match the feeling of rotating, we first tried a rotary sensor; however, the rotary sensor in the ER only turns from 0 to about 300 degrees, so that could not work. We did some research and found a YouTube tutorial on how to use a DC motor as an input. Here is the link for the tutorial: Using a DC motor as an analog input (including the circuit and sample code). Following the tutorial, we set up the circuit and the code. The circuit we built is shown below.
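
The heart of that sample code, as I understand it, is just reading the hand-turned motor as a little generator. A minimal sketch of the idea (assuming the motor circuit is biased so the reading rests near the middle of the 0-1023 range on pin A1, which is how our final sketch below treats it):

void setup() {
  Serial.begin(9600);
  pinMode(A1, INPUT);
}

void loop() {
  int reading = analogRead(A1);       // sits near 511 while the motor is not being turned
  int change = (reading - 511) / 2;   // signed estimate of how fast, and which way, it is spun
  Serial.println(change);
  delay(10);
}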

Wheels and gears:

After completing the circuit and code, we began to think about how to build the experiential part. Since we want users to rotate a wheel to control the input, the wheel needs to drive the motor. We did some research, and our initial plan was to connect them with a toothed belt. Here is the ideal model and what we found in our research:

To do this, we first bought a wheel-like part online, along with some rubber bands, and I also 3D-printed a gear for the motor, shown below:


But this did not work: the rubber band did not have enough friction to make the two components move together, and the 3D-printed gear did not fit the motor shaft. So we changed our plan. As Nick suggested, we could use gears to couple them, and we could laser-cut the gears. We did some further research on how to make wooden gears. We first tried to calculate and draw the gears in Illustrator, but Louis (the fellow) suggested we use a website called Gear Generator to design them. Here is how we did the research and used Gear Generator:

According to our design for the box, we needed another wheel for users to hold and rotate, so we designed three hubless wheels: one wide, one thin, and one with gear teeth. These three are stacked together like a sandwich, and the board holds the middle one so the whole gear assembly stands up. Here is our laser-cutting design:

And here is how the gears mesh (the motor shaft sits in the small hole of the smaller gear, which is driven by the larger gear). We put the "sandwich" together and made a small holder for the large wheel out of wire:

Since there was a lot of friction between the board and the wheels and they were not rotating smoothly, we tried several materials to make them run better. Here are the materials we tried (including sandpaper, machine oil and paper tape; the sandpaper worked best):

After we set everything up, the display looked like this:

Website sharing, screen capture & QR code:

As the next step, we started to work on screen capture and sharing. We first used the mousePressed() function to test the screen capture, and it worked well. Our initial plan was to send the picture directly to the user's email; however, not everyone here has Gmail or a VPN, while everyone has WeChat. So instead of email we decided to use a QR code. Connecting Processing to WeChat directly was difficult, though. After asking Professor Rudi, we settled on uploading the images to a website first and then making a QR code that points to the website. Since neither of us knew how to set up a web server, the professor offered to help: he built the website for us and shared the Processing code for uploading. (The parts of the code credited to the professor are noted in the source code below.)

Once we had the website, we created the QR code. We even made ourselves a logo and named the project Kaleidoshare (kaleidoscope + share), because we wanted the experience to be shared. Here are our logo and the original design of the QR code:

         

Then we tried to use a button to trigger the screen capture. We found a big button in the storage room, which seemed like a perfect match. We initially tried to build a box to hold the button, but that would hurt the integrity of the project, so we decided to cut a hole in the board to hold the button instead. The button and the box looked like this:

We also wanted to engrave our logo on the board, all by laser cutting. What's more, since users might not get any signal telling them whether the capture succeeded, we added LEDs to the board in the shape of an arrow, so that every time someone presses the button, the arrow lights up and points at the QR code. Below are the laser-cutting process and the results, as well as a video of the display:

Video for button display

Mirrors:

Then we bought three mirrors online and glued them together. Here is the effect (IT IS BEAUTIFUL!!!) 😍:

User experience and Vlog making:

After finishing the whole process, we made a video of the user experience; here is the link:

User Experience: https://youtu.be/AMHgh1sSBr4

Also, since we had recorded the main steps, we made a vlog of our project process; here is the link:

Process Video: https://youtu.be/bzdadfvhV6E Hope you enjoy this video and do not forget to give it a thumbs up 👍😛!

FINAL SHOW & User feedback:

On Friday we set up our device before the final show, and suddenly a lot of bugs appeared. First, the button was not behaving well: every press sent several values, which caused several images to be saved and uploaded. The button also seemed to interfere with the DC motor signal. We spent a lot of time restricting the incoming data on both the Arduino and Processing sides, and with Nick's help we finally got it under control. Second, since the motor hangs in the air, it easily came loose and caused errors; at that point we could do nothing but add more tape to hold it. The last thing was the QR code: just an hour before the show, a user test showed that the QR code had expired!!! 😱 It was impossible to laser-cut another board at that point, so we made a new QR code, printed it out and covered the old one. To make them look the same, I adjusted the color of the printed QR code so it looked exactly like the engraved wooden one. Here are the original code on the board and the new code:


We showed the changed code to the professor and fellows, and they did not even realize we had changed it. 😛 ANOTHER TRICK. TIP: Chinese domestic QR-generating websites usually do not charge, while some sites found through Google charge a lot! Here we share the link to a good QR-generating website (it is in Chinese, which may be a concern).

Also, since the QR code is the only way to reach the website, people who do not have the code cannot visit it again later. So we printed out some copies of the QR code for people to take away with them.

At the show, perhaps because our project is visually attractive, a lot of people stopped by and gave it a try. I think all of them loved it, and just as they commented, this kaleidoscope is interactive, artistic, and memorable. We are glad we brought all our expectations into reality. Since so many people stopped by and we had to introduce the project each time, we did not have time to take many videos. Here are the video and pictures we took:

(above you can see the code we prepared for users to take away)

We also gathered some valuable observations of the users; here are a few:

  • People were usually attracted by the effect of the changing patterns, especially when somebody else was playing with the device.
  • Some people were confused at first about what to do with the device (both rotating the wheel and pressing the button).
  • The images shown on the website were not arranged in order, so it was hard for people to find the image they had captured.
  • When people rotated the wheel through a small range, the image on screen did not change smoothly.
  • The motor still came loose sometimes.
  • Some people handled it too roughly and the whole device shook a lot.

Future improvement:

Based on the observations from our working process as well as from the show, we propose the following future improvements:

  • Allow each user to select their own shapes, images, colors, etc. to generate more diverse patterns
  • Rearrange the order of the images shown on the website, and allow each user to pair their name with the saved picture so they can easily tell their own picture from others'
  • Show an on-screen indication when the button is pressed
  • Try a similar concept with a small viewing hole to offer a different experience
  • Improve the look of the website
  • Build a better-looking and more stable frame for the device (including the motor, mirrors and other components)
  • Show clearer user instructions
  • As one user suggested, use gears with more teeth and a smaller pitch so the drive is more sensitive

Conclusion:

We devoted a lot of effort and time to this project. We are glad about the outcome and happy to see people really enjoy the experience. Our sincere thanks go to:

  • Professor Rudi for the Website and instructions!❤️
  • All the fellows who helped us!❤️
  • All the audiences who gave us precious suggestions!❤️

A group photo of me, my partner and Professor Rudi!

Remember to watch our vlog lol! 😛

-THE END-

Source Code:

//*code for Arduino
int playerPosition = 0;
int buttonState = 0;
const int buttonPin = 13;
const int ledPin1 = 2 ;
const int ledPin2 = 3;
const int ledPin3 = 4;
const int ledPin4 = 5 ;
const int ledPin5 = 6 ;
const int ledPin6 = 7 ;
const int ledPin7 = 8 ;
const int ledPin8 = 9;
const int ledPin9 = 10;
bool Signal = false;
int change2;
int buttonState2 = 0; // current state of the button
int lastButtonState = 0;

void setup() {
// put your setup code here, to run once:
Serial.begin(9600);
pinMode(13, INPUT);
pinMode(A1, INPUT);
pinMode(2, OUTPUT);
pinMode(3, OUTPUT);
pinMode(4, OUTPUT);
pinMode(5, OUTPUT);
pinMode(6, OUTPUT);
pinMode(7, OUTPUT);
pinMode(8, OUTPUT);
pinMode(9, OUTPUT);
pinMode(10, OUTPUT);
}

void loop() {
//put your main code here, to run repeatedly:
byte change;
playerPosition += change;
// Serial.print(change);
buttonState = digitalRead(13);
change = (analogRead(A1) - 511) / 2;
change = (analogRead(A1) - 511) / 2;
// modify the signal from motor
if (change == 0) {
change2 = 0;
}
else if (change > 0 && change < 100) {
change2 = 100;
} else if (change >= 100 && change != 255) {
change2 = 200;
}
//only when the button state changes will sent a signal to processing
if (buttonState == 0 && lastButtonState == 1) {
Signal = true;
}
else {
Signal = false;
}
if (Signal == true) {
change = 0;
change2 = 0;
}

//control leds
if (buttonState == HIGH) {
//Signal = true;
change = 0;
change2 = 0;
digitalWrite(2, HIGH);
digitalWrite(3, HIGH);
digitalWrite(4, HIGH);
digitalWrite(5, HIGH);
digitalWrite(6, HIGH);
digitalWrite(7, HIGH);
digitalWrite(8, HIGH);
digitalWrite(9, HIGH);
digitalWrite(10, HIGH);
}
else {
//Signal = false;
//change2 = 0;

// change = (analogRead (A1) – 511) / 2;
// change = (analogRead (A1) – 511) / 2;
// if (change == 0) {
// change2 = 0;
// }
// else if (change > 0 && change < 100) {
// change2 = 100;
// } else if (change >= 100 && change != 255) {
// change2 = 200;

digitalWrite(2, LOW);
digitalWrite(3, LOW);
digitalWrite(4, LOW);
digitalWrite(5, LOW);
digitalWrite(6, LOW);
digitalWrite(7, LOW);
digitalWrite(8, LOW);
digitalWrite(9, LOW);
digitalWrite(10, LOW);
}
// Serial.print(Signal);
//delay(100);
//Signal = 0;

// Serial.print(buttonState);
// Serial.print(‘,’);
// Serial.println(lastButtonState);
// delay(100);

// sent and test data
Serial.write(change2);
Serial.write(Signal);
delay(1);

// Serial.print(change);
// Serial.print(‘,’);
// Serial.print(change2);
// Serial.print(‘,’);
// Serial.println(Signal);
// delay(150);

//return the last state
lastButtonState = buttonState;
}

// *code for Processing
import processing.serial.*;
import processing.video.*;
import com.enterprisedt.net.ftp.*; // assumed: FTPClient, FTPMessageCollector, FTPConnectMode and FTPTransferType come from the edtFTPj library
Movie myMovie;
Serial myPort;
int change;
int change2;
int Signal;
int[] valueFromArduino= new int [2];

FTPClient ftp; // Declare a new FTPClient
String[] files; // Declare an array to hold directory listings
//boolean saved = false;

void setup() {
size (displayWidth, displayHeight);
background(0);
//frameRate(30);
myMovie = new Movie(this, "3.mp4");
//myMovie = new Movie(this, “4.mp4”);
// myMovie.frameRate(2);
myMovie.loop();
printArray(Serial.list());
myPort = new Serial(this, Serial.list()[3], 9600);
//myMovie.resize(displayWidth, displayHeight);
}

//void signal() {
////println(millis());
// saveftp();
//}

void draw() {
// control the display of the video
while ( myPort.available()>0) {
for (int i=0; i<2; i ++) {
valueFromArduino[i]=myPort.read();
}
if (myMovie.available()) {
if (change2!=0 && change2!=1) {
//if (change!=0&&change!=255&&change!=1&&change!=200 &&change!=2&&change!=3&&Signal!=-1) {
myMovie.read();
imageMode(CENTER);
image(myMovie, displayWidth/2, (displayHeight/2)-25);
}
}
// insert a function for screen capture
buttonsave();
//read data from Arduino
change2 = valueFromArduino[0];
Signal= valueFromArduino[1];
println("change2 : " + change2);
println("Signal : " + Signal);
}
}
void buttonsave() {
if (Signal==1||change2==1) {
saveftp();
print("saved to ftp");
//saved = true;
}
}
//use mouse press for test
//void mousePressed(){
// saveftp();
//}

//captured images saving and uploading *credits go to Professor Rodolfo Cossovich*
void saveftp() {
String name = "kal-" + millis() + ".png";
saveFrame("/Users/xinyigao/" + name);

try
{

// set up a new ftp client
ftp = new FTPClient();
ftp.setRemoteHost("plobot.com"); // ie. ftp.site.com

// set up listener
FTPMessageCollector listener = new FTPMessageCollector();
ftp.setMessageListener(listener);

// connect to the ftp client
println("Connecting");
ftp.connect();

// login to the ftp client
println("Logging in");
ftp.login("ixlab2018@plobot.com", "ixlab2018");

// set up in passive mode
println("Setting up passive, ASCII transfers");
ftp.setConnectMode(FTPConnectMode.PASV);

// set up for ASCII transfers
ftp.setType(FTPTransferType.BINARY);

// copy BINARY file to server and overwrite the existing file
println("Putting file");
ftp.put(".//" + name, ".//images/" + name, false);
// Shut down client
println("Quitting client");
ftp.quit();

// Print out the listener messages
String messages = listener.getLog();
println("Listener log:");

// End message – if you get to here it must have worked
println(messages);
println(" complete");
}
catch (Exception e)
{

//Print out the type of error
println("Error " + e);
}
}

Comic Project Documentation (Fall 2018) —— Skye Gao (Chen)

Assignment: Interactive comic project

Professor: Ann Chen

Date: 10/13/2018

Link: http://imanas.shanghai.nyu.edu/~xg679/Commlab/comicproject/

Story idea & synopsis:

Instead of making a purely entertaining comic, Emily and I wanted to make our comic meaningful and let people really connect to it. Therefore, we chose the topic of gay love.

The comic follows a timeline and tells the story of two boys from the same neighborhood. It begins with the boys meeting in the backyard, becoming best friends and feeling naturally connected. When they grow older, they go to school together, and their relationship is heavily criticized by the people around them. They face a lot of challenges but get through them together. In the end, both they and the people around them embrace their identities.

We think this experience may represent the experience of many gay people, and we hope this story will help people understand more about the community.

Process:

In order to make the comic easy to understand and really connect with people, we decided to keep the layout as simple as possible. We wanted: 1) a single linear storyline, 2) simple but clear drawings and interactions, and 3) minimal text. I prepared and drew all the comic assets (thanks to Frank for his iPad). It took almost a week to finish all the drawings.

With all the assets prepared, we started the structuring and coding. The initial plan included:

  1. Scroll to change the panels.
  2. Insert two interactions: 1) on the panel with the crowd, users click to see what people are saying (speech bubbles with text); 2) swipe to get rid of the depressive thoughts (messy clouds).

In the process of coding, we meet several challenges and made a series of change.

  1. For the scrolling effect, considering users' reading habits, we were concerned that people might not realize there was an interaction on a given panel and so might miss it. We therefore decided to let users click to change panels instead of scrolling. Here we referred to the slideshow example from W3Schools, and Jingyi helped us make it happen.
  2. For the interaction of swiping away the clouds, we found that a true erasing effect would require the p5 canvas, which had not been covered in class yet. So we adjusted the design: instead of swiping, users click on the clouds, which disappear by having their opacity changed. To make this clearer, we attached an image of a duster to the mouse pointer, adapting an example from Stack Overflow. We put a button on the panel that lets people pick up the duster and then clean away the clouds. As the clouds disappear, the image behind them shows up, in which the two are helping each other. Another student from the lab (whose name I don't know) and Konrad helped with this part.
  3. For the click-to-see-bubbles interaction, we first used a button that made the bubbles show up when clicked. However, after talking to some fellows, we got feedback that the button felt unnecessary and broke the consistency of the story. So we found a better plan: blank bubbles are shown from the start, and when users click on a bubble, the text inside is revealed. To do this we referred to the rotate-photos examples we learned in class. Frank and Jingyi helped us implement the code.
  4. We had a lot of panels, which made the comic a little long, so we deleted or combined some of them to make the story more effective.

Future improvement:

  1. We want to add background music to the comic and make it change along with the story.
  2. We want to make the swiping interaction happen.
  3. We may add more interactions and narrative to the comic to make it more engaging.

Week 11: Interactive Video Project (Fall 2018) —— Skye Gao (Chen)

Assignment: Interactive Video project

Project name: Robota

Partners: Skye, Zane and Candy

Professor: Ann Chen

Date: 11/19/2018

Link: http://imanas.shanghai.nyu.edu/~xg679/Commlab/videoproject/

Description: 

For our video project, our group used stop motion to tell a story about a small robot. The robot comes to life at the beginning when it is charged (by the user). It goes exploring the world for a while and meets some other toys. It assumes the other toys must be alive like itself, so it tries to play with them. But in the end, when its battery runs out, it too goes back to being a lifeless toy.

The original idea for the video and the website was to have a crank on the robot that the user scrolls to wind up at the start. Once the crank is wound up, the video starts playing and runs to the end without other interruptions.

The mood of the story is meant to be a little sad. Through a robot narrative, the purpose of our project is to get people thinking about the meaning of the "life" that humans give to robots, and so to reflect on the relationship between humans and robots.

Process:

We first wrote down our main storyline and visualized it with a storyboard.

(Storyline)

(Storyboard)

   

Since our main character is a small robot, we ordered the robot model on Taobao. However, there were some delivery problems, so we did not get the robot until Wednesday.

Since our original interaction idea was scrolling to wind up a crank, we 3D-printed a crank from an online model. However, when we started shooting, we ran into several challenges.

First of all, we found it really hard to attach the crank to the robot. Since we were doing stop motion, we needed the crank to be stable yet adjustable on the robot. Because tape would appear in the scene (which we did not want), we tried to drill a hole in the robot, but the robot is made of hard plastic, so after several trials we had to give up on the idea.

To keep our storyline, instead of using a crank to power the robot we decided to use a charger and modified the story a little, so that when users see the website, they need to click on a plug to activate the video.

Also, we could not get the 500mm camera lens for the first two days, but we finally managed to get one in the end.

The whole shooting process took about three days; we used Dragonframe to shoot the stop motion. Along the way we added some scenes and deleted others for the sake of practicality and the consistency of the whole video. Here are some shots from the process:

After we finished shooting, Candy and I added sound effects to the video while Zane added background music and worked on the website. The sound effects we used are mostly from iMovie; we cut, reversed and modified them to create the effects we wanted.

Given the time limit, the final outcome of our project is quite simple: users just click the plug to start the video and watch it to the end.

Reflection:

All three of us contributed to the project with dedication, and I really like the final outcome of the video. However, from self-reflection as well as the audience's feedback, we also have a lot to improve.

Because of all the issues mentioned above, we started shooting really late, which is the main reason we did not have enough time to implement more interaction. Also, because video quality gets compromised when sent through WeChat and email, I had to send just the audio to Zane to combine with the video. Through this process the sound-effect levels were not adjusted well, so during the presentation the sound effects were almost inaudible.

For future improvement, I think we can add more interaction during the video, like scrolling to make the robot land on the ground and clicking to help it climb up the books. We also need to add more text or scenes to make the story more explicit, since people got confused by the ending.