Final Blog Post for Final Project by Ryan Yuan

Project Name: Interferit

For this project I worked alone, and the final work is a new musical instrument interface.

Based on the research I had done before production started, I knew the project would be related to music. The keyboard MIDI player I had found, along with the website of The International Conference on New Interfaces for Musical Expression (NIME), gave me a lot of inspiration for making a musical instrument. As Professor Eric mentioned in class, conductive tape could be an interesting means of interaction, so I decided to make a musical instrument whose interaction is built entirely on conductive tape. The concept is to build a circuit out of the tape: the sensor side of each key connects to an input port on the Arduino board, and the trigger side connects to ground.

After settling on the way of interaction, I started thinking about how the interface would look. I am very interested in Japanese history, and I had recently been playing a Japanese game set in the Warring States Period, so I wanted the interface to relate to that history. Each family in the Warring States Period had its own family crest, and these crests all look different from one another and carry their own meanings. The Oda and Tokugawa families are the two most famous families of that period, as they are the two that united the whole country, and I like them very much, so I wanted to adapt their family crests into my project. This is why my final piece is a combination of the two family crests.

The idea of the whole project is not only a musical instrument; it is also about connection. For the physical part, the two family crests indicate the actual connection between the two families in history. For the virtual part, I wanted to visualize the instrument: while a user plays, effects appear on the screen, and these effects are passed on in some way to the next user, creating a connection between players. So I thought of a water-ripple effect, since in real life a ripple lasts for a while and propagates outward once triggered. In Processing, the ripples are realized through pixel manipulation: when a ripple is triggered at one pixel, its value is spread to neighboring pixels so that it looks like it is propagating. I also had to decide how to control where ripples are triggered on screen, so I thought of color tracking using camera capture. The concept is that if an object has a certain RGB value, and I set a condition to track only the pixels close to that color, the result looks like object tracking. So I got a traditional Japanese ghost mask, which is red. It not only fits the Japanese style and enables the tracking function, but also fits the concept: in the camera's vision we are the ghosts, and we bring interference to the pixels.
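The ripple propagation described above can be distilled into a short sketch. This is plain Java mirroring the two-buffer logic used later in the Processing code; the array sizes and the dampening value here are just illustrative:

```java
public class RippleSketch {
    // One step of the classic two-buffer water-ripple algorithm: each cell
    // becomes the sum of its four neighbours' previous values divided by two,
    // minus its own previous value, then is damped so ripples fade over time.
    static void step(float[][] previous, float[][] current, float dampening) {
        int cols = current.length, rows = current[0].length;
        for (int i = 1; i < cols - 1; i++) {
            for (int j = 1; j < rows - 1; j++) {
                current[i][j] = (previous[i - 1][j] + previous[i + 1][j]
                        + previous[i][j - 1] + previous[i][j + 1]) / 2
                        - current[i][j];
                current[i][j] *= dampening;
            }
        }
    }

    public static void main(String[] args) {
        float[][] prev = new float[9][9];
        float[][] cur = new float[9][9];
        prev[4][4] = 100;              // "trigger" a ripple at the centre
        step(prev, cur, 0.99f);
        System.out.println(cur[3][4]); // a neighbour picks up half the energy, damped
    }
}
```

In the real sketch the two buffers are swapped every frame, which is what makes the energy travel outward over time instead of staying put.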

As for why I named the project Interferit: it is a word combining interfere and inherit. We interfere with the pixels, and this interference, the water ripples, is passed on to the next user on the screen, which is the inherit part.

For the production, I'll start with the physical component: the instrument itself. I found pictures of the two family crests online, but there were so many gaps in them that laser-cutting them directly would have produced a lot of scattered pieces, so I had to connect the parts together first. I struggled with this step: I wanted to join the parts in Adobe Illustrator, but I barely know how to draw in Illustrator, so I wasted hours and got nowhere. I finally finished the work in Photoshop and then imported the file into Illustrator for laser cutting. I wanted the crests to be bent so the physical part would look three-dimensional rather than two flat pictures to tap on, which fits the idea of an interface of my own design. Since each crest had to bend in different directions, wood could not be used, as it only bends one way, so I cut the crests from acrylic board instead. I used the heat gun in the fab lab to do the bending: I heated the part that needed to bend, and once it was hot enough the acrylic bent thanks to the properties of the plastic, letting me adjust the angle until it looked the way I wanted. Then I used AB glue to join the two crests and finish the physical form. Next I needed to stick the conductive tape onto it to make the keys to tap for the interaction. I first laid down thirty strips of tape to make thirty keys, each connected by a wire to the breadboard, and I borrowed an Arduino Mega board to make sure I had enough input ports. But with so many wires it was hard to attach them to the tape reliably; wires often fell off or made poor contact, and because of the conductivity and resistance of the tape, some keys were not sensitive. In the end, only nineteen keys survived.
While connecting the wires to the board, it was very hard to keep them tidy since there were so many, and it took me a long time to figure out which key corresponded to which port for the coding. For the trigger, I got two rubber gloves, since they are easy to put on even though they are hard to take off, and the wires stuck onto the gloves do not fall off easily. I only use the wires on the gloves to touch the tape keys, rather than tape-to-tape contact, because of the resistance problem.

For the coding part, the basic idea is that each port is mapped to a one-shot sound file. There are notes from C4 to C6, eleven notes in total, plus three keys for drum set and bass, and two keys for switching between modes: a Japanese style and a futuristic electronic one. The water-ripple effect is realized by pixel manipulation combined with computer-vision object tracking. The last part of the documentation is the code.
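The per-key logic in the code below boils down to edge detection: a key fires once when its reading first drops to 0 (tape touched) and re-arms when it is released. A minimal sketch of that pattern in plain Java (the class and method names are illustrative, not the sketch's actual names):

```java
public class KeyEdge {
    private boolean armed = true;

    // Returns true exactly once per press: when the reading goes to 0 while
    // the key is armed. A non-zero reading (key released) re-arms it, so
    // holding the key down does not retrigger the sound every frame.
    public boolean update(int reading) {
        if (reading == 0) {
            if (armed) {
                armed = false;
                return true; // fire the sound / seed the ripple exactly once
            }
        } else {
            armed = true;
        }
        return false;
    }

    public static void main(String[] args) {
        KeyEdge key = new KeyEdge();
        int[] readings = {1, 0, 0, 0, 1, 0, 1}; // two distinct presses
        int triggers = 0;
        for (int r : readings) {
            if (key.update(r)) triggers++;
        }
        System.out.println(triggers); // 2
    }
}
```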

For the reflection: the design fits the idea of a new interface, but the interaction is not sensitive enough because of the difficulty of building a reliable circuit from conductive tape, so it is hard to really play the instrument. People also found it hard to understand the function of the mask and the meaning of the ripples on the screen, so the concept did not come across as clearly as I had imagined. This project is a first attempt at a new musical instrument interface, and I need to reconsider a better means of interaction next time. As for the meaning of the project, I want to show people a new instrument interface and let them play with it, and some users may grasp the concept of connection with others through the Processing image.

     

CODE for Processing:

import processing.serial.*;
import processing.sound.*;
import processing.video.*;

ThreadsSystem ts; // threads (class defined in a separate Processing tab, not shown here)
SoundFile c1;
SoundFile c2;
SoundFile c3;
SoundFile e1;
SoundFile e2;
SoundFile f1;
SoundFile f2;
SoundFile b1;
SoundFile b2;
SoundFile a1;
SoundFile a2;
SoundFile taiko;
SoundFile rim;
SoundFile tamb;
SoundFile gong;
SoundFile hintkoto;
SoundFile hintpeak;
Capture video;
PFont t;

int cols = 200;//water ripples
int rows = 200;
float[][] current;
float[][] previous;
boolean downc1 = true;
boolean downc2 = true;
boolean downc3 = true;
boolean downe1 = true;
boolean downe2 = true;
boolean downf1 = true;
boolean downf2 = true;
boolean downb1 = true;
boolean downb2 = true;
boolean downa1 = true;
boolean downa2 = true;
boolean downtaiko = true;
boolean downtamb = true;
boolean downrim = true;
boolean title = true;
boolean downtitle = true;
boolean koto = true;
boolean peak = false;
boolean downleft = true;
boolean downright = true;

float dampening = 0.999;

color trackColor; //tracking head
float threshold = 25;
float havgX;
float havgY;

String myString = null;//serial communication
Serial myPort;
int NUM_OF_VALUES = 34; /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues; /** this array stores values from Arduino **/

void setup() {
  fullScreen();
  //size(800, 600);

  ts = new ThreadsSystem(); // threads

  cols = width; // water ripples
  rows = height;
  current = new float[cols][rows];
  previous = new float[cols][rows];

  setupSerial(); // serial communication

  String[] cameras = Capture.list();
  printArray(cameras);
  video = new Capture(this, cameras[11]); // camera index is machine-specific; pick yours from the printed list
  video.start();
  trackColor = color(255, 0, 0); // track red (the ghost mask)

  c1 = new SoundFile(this, "kotoc.wav"); // loading sounds
  e1 = new SoundFile(this, "kotoE.wav");
  f1 = new SoundFile(this, "kotoF.wav");
  b1 = new SoundFile(this, "kotoB.wav");
  a1 = new SoundFile(this, "kotoA.wav");
  c2 = new SoundFile(this, "kotoC2.wav");
  e2 = new SoundFile(this, "kotoE2.wav");
  f2 = new SoundFile(this, "kotoF2.wav");
  b2 = new SoundFile(this, "kotoB2.wav");
  a2 = new SoundFile(this, "kotoA2.wav");
  c3 = new SoundFile(this, "kotoC3.wav");
  rim = new SoundFile(this, "rim.wav");
  tamb = new SoundFile(this, "tamb.wav");
  taiko = new SoundFile(this, "taiko.wav");
  gong = new SoundFile(this, "gong.wav");
  hintpeak = new SoundFile(this, "hintpeak.wav");
  hintkoto = new SoundFile(this, "hintkoto.wav");
}

void captureEvent(Capture video) {
  video.read();
}

//void mousePressed() {
// if(down){
// down = false;
// int fX = floor(havgX);
// int fY = floor(havgY);
// for ( int i = 0; i < 5; i++){
// current[fX+i][fY+i] = random(500,1000);
// }
// }

// sound.play();
//}

//void mouseReleased() {
// if(!down) {
// down = true;
// }
//}

void draw() {
  background(0);

  ///// serial communication /////
  updateSerial();
  //printArray(sensorValues);
  //println(sensorValues[0]);
  int fX = floor(havgX); // tracked head position, used as the ripple origin
  int fY = floor(havgY);

  if (sensorValues[2] == 0) { // left mode key: switch to the electronic "peak" sound set
    if (downleft) {
      downleft = false;
      if (koto) {
        koto = false;
        peak = true;
        c1 = new SoundFile(this, "pc1.wav"); // load the electronic sound set
        e1 = new SoundFile(this, "pe1.wav");
        f1 = new SoundFile(this, "pf1.wav");
        b1 = new SoundFile(this, "pb1.wav");
        a1 = new SoundFile(this, "pa1.wav");
        c2 = new SoundFile(this, "pc2.wav");
        e2 = new SoundFile(this, "pe2.wav");
        f2 = new SoundFile(this, "pf2.wav");
        b2 = new SoundFile(this, "pb2.wav");
        a2 = new SoundFile(this, "pa2.wav");
        c3 = new SoundFile(this, "pc3.wav");
        rim = new SoundFile(this, "Snare.wav");
        tamb = new SoundFile(this, "HH Big.wav");
        taiko = new SoundFile(this, "Kick drum 80s mastered.wav");
      }
      hintpeak.play();
    }
  }
  if (sensorValues[2] != 0) {
    if (!downleft) {
      downleft = true;
    }
  }

  if (sensorValues[4] == 0) { // right mode key: switch back to the koto sound set
    if (downright) {
      downright = false;
      if (peak) {
        koto = true;
        peak = false;
        c1 = new SoundFile(this, "kotoc.wav"); // reload the koto sound set
        e1 = new SoundFile(this, "kotoE.wav");
        f1 = new SoundFile(this, "kotoF.wav");
        b1 = new SoundFile(this, "kotoB.wav");
        a1 = new SoundFile(this, "kotoA.wav");
        c2 = new SoundFile(this, "kotoC2.wav");
        e2 = new SoundFile(this, "kotoE2.wav");
        f2 = new SoundFile(this, "kotoF2.wav");
        b2 = new SoundFile(this, "kotoB2.wav");
        a2 = new SoundFile(this, "kotoA2.wav");
        c3 = new SoundFile(this, "kotoC3.wav");
        rim = new SoundFile(this, "rim.wav");
        tamb = new SoundFile(this, "tamb.wav");
        taiko = new SoundFile(this, "taiko.wav");
        gong = new SoundFile(this, "gong.wav");
      }
      hintkoto.play();
    }
  }
  if (sensorValues[4] != 0) {
    if (!downright) {
      downright = true;
    }
  }

  if (sensorValues[0] == 0) { // title key: toggle the title screen
    if (downtitle) {
      downtitle = false;
      title = !title;
    }
  } else {
    if (!downtitle) {
      downtitle = true; // re-arm once the key is released, so each press toggles exactly once
    }
  }

  if (sensorValues[19] == 0) { // c1
    if (downc1) {
      downc1 = false;
      for (int i = 0; i < 10; i++) {
        // seed a ripple at the tracked head position, clamped to the array bounds
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      c1.play();
    }
  }
  if (sensorValues[19] != 0) {
    if (!downc1) {
      downc1 = true;
    }
  }

  if (sensorValues[26] == 0) { // e1
    if (downe1) {
      downe1 = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      e1.play();
    }
  }
  if (sensorValues[26] != 0) {
    if (!downe1) {
      downe1 = true;
    }
  }

  if (sensorValues[31] == 0) { // f1
    if (downf1) {
      downf1 = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      f1.play();
    }
  }
  if (sensorValues[31] != 0) {
    if (!downf1) {
      downf1 = true;
    }
  }

  if (sensorValues[20] == 0) { // b1
    if (downb1) {
      downb1 = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      b1.play();
    }
  }
  if (sensorValues[20] != 0) {
    if (!downb1) {
      downb1 = true;
    }
  }

  if (sensorValues[9] == 0) { // a1
    if (downa1) {
      downa1 = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      a1.play();
    }
  }
  if (sensorValues[9] != 0) {
    if (!downa1) {
      downa1 = true;
    }
  }

  if (sensorValues[15] == 0) { // c3
    if (downc3) {
      downc3 = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      c3.play();
    }
  }
  if (sensorValues[15] != 0) {
    if (!downc3) {
      downc3 = true;
    }
  }

  if (sensorValues[23] == 0) { // c2
    if (downc2) {
      downc2 = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      c2.play();
    }
  }
  if (sensorValues[23] != 0) {
    if (!downc2) {
      downc2 = true;
    }
  }

  if (sensorValues[16] == 0) { // e2
    if (downe2) {
      downe2 = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      e2.play();
    }
  }
  if (sensorValues[16] != 0) {
    if (!downe2) {
      downe2 = true;
    }
  }

  if (sensorValues[11] == 0) { // f2
    if (downf2) {
      downf2 = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      f2.play();
    }
  }
  if (sensorValues[11] != 0) {
    if (!downf2) {
      downf2 = true;
    }
  }

  if (sensorValues[12] == 0) { // b2
    if (downb2) {
      downb2 = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      b2.play();
    }
  }
  if (sensorValues[12] != 0) {
    if (!downb2) {
      downb2 = true;
    }
  }

  if (sensorValues[17] == 0) { // a2
    if (downa2) {
      downa2 = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      a2.play();
    }
  }
  if (sensorValues[17] != 0) {
    if (!downa2) {
      downa2 = true;
    }
  }

  if (sensorValues[7] == 0) { // rim
    if (downrim) {
      downrim = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      rim.play();
    }
  }
  if (sensorValues[7] != 0) {
    if (!downrim) {
      downrim = true;
    }
  }

  if (sensorValues[8] == 0) { // taiko
    if (downtaiko) {
      downtaiko = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      taiko.play();
    }
  }
  if (sensorValues[8] != 0) {
    if (!downtaiko) {
      downtaiko = true;
    }
  }

  if (sensorValues[28] == 0) { // tamb
    if (downtamb) {
      downtamb = false;
      for (int i = 0; i < 10; i++) {
        current[constrain(fX + i, 0, cols - 1)][constrain(fY + i, 0, rows - 1)] = random(255, 500);
      }
      tamb.play();
    }
  }
  if (sensorValues[28] != 0) {
    if (!downtamb) {
      downtamb = true;
    }
  }
  //// water ripples ////
  loadPixels();
  for (int i = 1; i < cols - 1; i++) {
    for (int j = 1; j < rows - 1; j++) {
      // each cell becomes the damped neighbour average minus its own previous value
      current[i][j] = (previous[i - 1][j] +
                       previous[i + 1][j] +
                       previous[i][j + 1] +
                       previous[i][j - 1]) / 2 -
                       current[i][j];
      current[i][j] = current[i][j] * dampening;
      int index = i + j * cols;
      pixels[index] = color(current[i][j]);
    }
  }
  updatePixels();
  // swap the two buffers for the next frame
  float[][] temp = previous;
  previous = current;
  current = temp;

  //// drawing threads ////
  ts.addThreads();
  ts.run();

  //// head tracking ////
  video.loadPixels();
  threshold = 80;

  float avgX = 0;
  float avgY = 0;
  int count = 0;

  // walk through every camera pixel and collect the ones close to the tracked color
  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      int loc = x + y * video.width;
      color currentColor = video.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      float d = distSq(r1, g1, b1, r2, g2, b2);

      float hX = map(x, 0, video.width, 0, width);
      float hY = map(y, 0, video.height, 0, height);

      if (d < threshold * threshold) {
        stroke(255, 0, 0);
        strokeWeight(1);
        point(hX, hY);
        avgX += x;
        avgY += y;
        count++;
      }
    }
  }

  // A pixel counts as "found" when its squared color distance is under
  // threshold squared; adjust the threshold for stricter or looser tracking.
  if (count > 0) {
    avgX = avgX / count;
    avgY = avgY / count;

    havgX = map(avgX, 0, video.width, 0, width);
    //havgY = map(avgY, 0, video.height, 0, height);
    // label the tracked position with the current mode
    fill(255, 0, 0, 100);
    noStroke();
    textSize(50);
    if (koto) {
      text("koto", havgX, havgY);
    }
    if (peak) {
      text("peak", havgX, havgY);
    }
  }

  if (title) {
    textSize(200);
    fill(255, 0, 0);
    text("Interferit", width * 0.3, height / 2);
    textSize(40);
    text("Put on the mask and the claws, and use your enchanted vessel to interfere with the world of pixels!", width * 0.03, height * 0.7);
  }
  if (!title) {
    fill(0);
  }
}

float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  // squared Euclidean distance in RGB space (no sqrt needed for comparisons)
  float d = (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1) + (z2 - z1) * (z2 - z1);
  return d;
}

//void keyPressed() {
// background(0);
//}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----"
  // and replace PORT_INDEX above with the index number of the port.

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil(10); // 10 = '\n', linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10); // 10 = '\n', linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

“Who’s Ordering your Food” – Anica Yao – Marcela

Project Name: Who’s ordering your food?
Partner: Xueping
Final Creation

Project Photo
Final Show Ongoing ~

Codes: Arduino + Processing

Conception and Design:

The interaction of our project consists of two parts: Processing (a series of scenarios) + Arduino (a menu made of buttons).

In this project, the users are allowed to explore by themselves, and the final feedback/report they get is based on what they choose. In our proposal, we wanted to make an educational device informing people to keep a healthy diet by balancing all their nutrient intakes. But thanks to Prof. Marcela, we realized that idea would only be accepted by a small range of audiences: people who already care about nutritional balance. People have their own standards for a healthy diet. In other words, our concept was too narrow to make our device an inspiring one.

Brainstorm

The three scenarios are created to make a comparison. In the first one, there is no limit: the users can choose whatever they want from the menu. (People want to try curry rice simply because it's been a while since they last had it.) In the second one, there are three clips from a weekly vlog filmed by a famous YouTuber (I will put the link down below). It indicates that people nowadays can easily be influenced by social media. Psychologically speaking, most people tend to follow what others do: when famous YouTubers promote their healthy lifestyles, the audience follows suit without considering whether the recipes suit them, or whether the recipes are healthy at all. In our project, people need to press "v" to watch the clips. In the last one, news and scientific reports pop up on the screen with a corresponding voiceover. It is as if we are immersed in a world filled with information, real or fake, and we tend to believe the so-called "scientific facts" even when they are not facts. The most important lesson we learned from the midterm project is to create an all-dimensional experience for the user, in which audio is usually indispensable. So here, on both the visual and audio sides, we wanted to make it more realistic. The experience is like reading a newspaper or checking the daily news: you can choose to read an item thoroughly or just skip it, but you can't resist the information pouring onto you. That's how we developed the third scenario. Finally, based on whether the user has changed their dish during the process, we give them feedback.

For the visual design, we made a menu with buttons on top so that it feels like a restaurant, and we chose a consistent cartoon style. We made some visuals (pictures and texts), but due to time constraints we didn't draw all the dishes. We definitely want to finish them next time to make it more aesthetic.

We could have used more words, but too much text would be overwhelming and would feel more like a lecture than an interactive experience in which people stay inside the "conversations" all the way through.

Fabrication and Production:

The most significant and challenging part of our production was putting all the scenarios in a single Processing sketch. We used if statements to track the scenario number, delay() to make transitions, and refreshed the background in between. It was also important to decide when to display each image and when to play each sound; for example, it happened that a new image covered the old one while the old sound continued to play, so everything had to be put under the correct conditions.
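The scenario-tracking idea above amounts to a small state machine. Here is a sketch of that flow in plain Java; the scene names, numbering, and transition rule are illustrative stand-ins, not our exact code:

```java
public class ScenarioFlow {
    // Illustrative scene numbers: free menu, vlog scenario, news scenario, report.
    static final int MENU = 0, VLOG = 1, NEWS = 2, REPORT = 3;
    private int scene = MENU;

    public int scene() { return scene; }

    // Advance to the next scenario; in the sketch this is where the
    // background is refreshed and the right images/sounds are switched in.
    public void next() {
        if (scene < REPORT) scene++;
    }

    // The replay/reset button on the report page starts the flow over.
    public void reset() { scene = MENU; }

    public static void main(String[] args) {
        ScenarioFlow flow = new ScenarioFlow();
        flow.next(); // menu -> vlog scenario
        flow.next(); // vlog -> news scenario
        flow.next(); // news -> report
        System.out.println(flow.scene()); // 3
        flow.reset();
        System.out.println(flow.scene()); // 0
    }
}
```

Keeping one integer as the single source of truth for the current scene is what lets each draw() frame put every image and sound "under the correct conditions."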

During the user test, we had only finished the third scenario. We received lots of helpful feedback from classmates and professors:
(1) Better to have clearer instructions. (This is actually a tricky one. We do want to make it clear, but not too obvious, or the experience becomes responsive rather than interactive. So later we added some hints and notes.)
(2) The information given may be overwhelming? (We had thought of this problem, which is partly why we added the voice; it also makes the scenario more realistic. Later, people told us it works better now that more visuals, more elements (sound, video, image), and more interactions are involved.)
(3) The menu could be better designed. (We think so too. With more time, I would make it look more like a sheet or a book, or add more decorations to the box we have.)
(4) Make it a little game. (That's also a very good starting point.)
(5) Create a replay/reset button so that the user can play again. (Yes! We later added one on the report page.)

All the feedback is very helpful for our following production decisions. We made some improvements:

We try to put the user into a conversation, where they are making choices not only for themselves but also for Lisa, the girl in the story. When the user makes decisions for her, it also suggests that it is not you but social media that influences your food decisions. Besides, a conversation like this can be part of daily life, so users quickly familiarize themselves with what's going on. We want the conversation to turn the scenarios into storytelling rather than just a "lecture."

For the fabrication part, we used the laser cutter to make the box. The dishes are made of cardboard with pictures on top.


Conclusions:

The goal of our project is to convey that we can easily be affected by social media in our food choices, even when pursuing a healthy lifestyle. You are the receiver of all kinds of information, and you think you are making your own decisions. But think about it: Are you ordering your food? Or is someone else ordering for you? Are you really deciding independently? This project never tries to tell people the best nutritional balance for staying healthy. Instead, everyone gets a personalized experience from the device. In the report at the end, if you held on to your original choices, that's a good thing, unless you chose junk food three times, in which case you are reminded to pick a healthier alternative. As we observed, most people changed their minds easily after receiving that information. We hope this project makes them realize the invisible power of social media.

Our project generally aligns with my definition of interaction: a process in which an actor receives and processes information from another through a certain medium and then responds accordingly. That is sufficient for a basic interaction, but to be a successful one, in my opinion, the experience should (1) be self-explanatory, clear, and obvious, (2) put the user in a continuous loop of responses, and (3) be multi-dimensional, with visuals, audio, and other factors involved so that the user is more engaged. I think we still need to make our project more self-explanatory. That could be achieved through other forms of interaction, like recognizing the users' gestures, which would be closer to daily life, or by providing more hints rather than just text. I think we did well on (2) and (3), but there is still room for improvement.

Since the users need to stay focused on the scenario, I'm glad to see they were more than willing to navigate all the way through. Sometimes they got confused about which key to press next, and we didn't have a very detailed, personalized report at the end (we had hoped to base the feedback on every food combination, but due to technical constraints we found that difficult to realize). Still, some of our friends said they really saw the difference and the improvements we made after the user test.

The lesson we learned from our setbacks and failures is to think thoroughly about the ideas we want to show rather than only the particular techniques for interaction. But once we start building the interactions, it is easy to neglect details: for example, we exported a video in the wrong size or format, or forgot a bracket. Therefore, we also need to budget time for these possible mistakes besides the main parts.

Another setback worth reflecting on is how to convey the information more explicitly and quickly. So we designed a plan B for the third scenario: we let the user choose whether to go through every piece of news or simply skim for the gist. The latter is meant to create a feeling of information explosion, mimicking a realistic environment filled with all kinds of information: all the news and big headlines keep moving while the voiceover is a combined soundtrack of varied voices reading different news items and slogans. In this case, even users without the patience to read the news one by one still get the main ideas quickly.

To conclude, we want people to realize the influence of social media on their decision making. Studies show that when we order food, we are affected or misled by many external factors, and social media is the major one in today's world. If we blindly follow whatever others eat or whatever a so-called scientific report says, we may run into real food safety issues. Prevailing pop culture, including live streams and articles, profits mostly by driving traffic and catching people's eyes; some of it is factual, and some really is not, but consumers tend to believe whatever they see or hear. Blindly following social media can lead to disorders, obesity, or heart disease. In our project, by asking people "who's ordering your food," we want them to think twice about their food choices, and about their decision making in general.

Puppet – Eric Shen – Eric

Project name: Puppet

CONCEPTION AND DESIGN

        Originally, our understanding of the interaction between the users and our project focused only on showing our theme of social expectation. To explain the theme, we wanted the user to play the forces in society that push people to behave in certain ways and impose social expectations on others, while the puppet represents the people who are controlled and made to meet those expectations. Therefore, the initial interaction we thought of was to let users use the keyboard and mouse to pose a simulated image of the puppet in Processing. After that, the real puppet on the stage would strike several random poses and finally settle into the pose the user set. To make the real puppet move with Arduino, we chose four servo motors to control the puppet's legs and arms. Our criteria were that a motor should be easy to attach things to and should rotate precisely within a certain angle. We considered stepper motors, but they are hard to attach things to, and each one needs its own power supply; with several of them the project would draw a huge amount of power, and a fault in the circuit could be dangerous. For those reasons we gave up on stepper motors. To help users resonate with our somewhat sad theme, we also needed a puppet that was not funny or childish; after a long search, we settled on this particular puppet.

The Puppet

When selecting the material for the stage, we first thought of laser-cutting a delicate box. Yet the stage also had to contain all the components, including the Arduino, the puppet, and the servo motors, which meant it would be large, and laser-cutting it would use too much material from the fabrication room. So eventually we used a carton box as both the stage and the container for the components.
We also used 3D printing to make the parts that attach the servos to the strings of the puppet more stable. We first built them from cardboard, but they bent too easily and couldn't withstand the tension of the strings.

With cardboard
With 3D printing
The basic concept

FABRICATION AND PRODUCTION

        According to our original plan, one of the most important steps was to make the real puppet move in sync with the digital puppet in Processing. After my partner and I had both finished the Arduino and Processing code, we started testing how data could be sent from Processing to the Arduino. At first I thought I needed to work out how to map the values I had in Processing onto the angles of the servo motors; then I realized I could simply create four new variables in Processing holding the angles and transfer them directly to the Arduino to drive the servo motors, which turned out to work.
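Sending four pose variables from Processing to Arduino amounts to serializing them into one line that the receiving sketch can split back apart. A minimal sketch of that framing in plain Java (the variable names and the comma-separated format are illustrative, not necessarily our exact protocol):

```java
public class PoseMessage {
    // Pack four servo angles (degrees) into one newline-terminated,
    // comma-separated line, a common framing for serial communication.
    static String pack(int leftArm, int rightArm, int leftLeg, int rightLeg) {
        return leftArm + "," + rightArm + "," + leftLeg + "," + rightLeg + "\n";
    }

    // The receiving side splits the line on commas and parses the angles
    // back to integers; the same logic in Java, for illustration.
    static int[] unpack(String line) {
        String[] parts = line.trim().split(",");
        int[] angles = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            angles[i] = Integer.parseInt(parts[i]);
        }
        return angles;
    }

    public static void main(String[] args) {
        String msg = pack(90, 45, 120, 60);
        int[] angles = unpack(msg);
        System.out.println(angles[2]); // 120
    }
}
```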
After we got past this most significant technical problem, we sought advice from the fellows. They pointed out that it might be hard for users to perceive and understand our theme of social expectation through such a simple interaction. In addition, we presented almost exactly the same thing physically and in Processing, which was not only of little use but even a bit unnecessary and redundant. They asked a question we couldn't answer: why would anyone interact with the computer to move the puppet, instead of just controlling it physically? The latter would be more interesting and easier to understand. After getting their suggestions, we reflected on the project. I realized it actually contradicted the very definition of a successful interactive project I had given in my earlier research: the interaction was mundane, the users would know exactly how their input influences the puppet, and the piece was more a display of things than an interactive work. After due consideration, we decided to make the curtain Arduino-controlled as well so that users could interact with it, and to render the Processing puppet in black and white, projected onto the stage so it could be read as the real puppet's shadow.

        During the user test session, the technical basis of our project completely changed. After we explained the theme, Professor Marcela said that it was intriguing and plausible, but that with our original plan we couldn't convey it logically. Her first suggestion was similar to the fellows': we should not display almost exactly the same thing on both Arduino and Processing, and we should use the cross to interact with the project instead of merely using the keyboard and mouse. That way, the project makes more sense and the interaction is more interesting and perceivable. She also put forward an interesting idea: we could use the webcam to capture the user's face and place it on the puppet's face, showing users that they are also being controlled while trying to control others, which makes the logic of the theme clear. Another useful piece of advice from this session was to give the puppet a voice and write some lines for it to make the theme clearer. 
We had been transferring data from Processing to Arduino, but now we needed to switch to transferring data from Arduino to Processing. The sensor a fellow recommended was the accelerometer, and some strange things happened after I applied it to the project. When I tested two servo motors at a time, on the x-axis or the y-axis, they worked fine. But when I tested all four servo motors together, the code ran well for a while, and then the Arduino Uno would die and could no longer connect to the computer. This happened one day before the presentation. Professor Marcela and Tristan both came to help and examined the code and the circuit; both were fine. After we spent a long time trying to find the problem together, they suggested I either swap in another accelerometer or switch to tilt switch sensors. Even after I changed every component in the circuit, it still failed to run normally. Eventually, I gave up on the accelerometer and used two tilt switch sensors to control the movement of the arms and legs respectively. The logic is: if the left arm rises, the right arm falls, and vice versa. Though a tilt switch only provides digital output, it gives the servo motors stable rotation because the angle of each rotation is fixed. 
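That mirroring rule can be sketched on its own. Here is a minimal plain-Java version of the idea; the angle values are illustrative, not the ones from our actual Arduino code:

```java
public class ArmMirror {
    // A tilt switch only reports on/off, so each reading maps to one of two
    // fixed servo angles, and the right arm always takes the opposite pose.
    static int[] armAngles(boolean tiltedLeft) {
        int left  = tiltedLeft ? 150 : 30;   // illustrative servo angles
        int right = tiltedLeft ? 30  : 150;  // always the opposite of the left
        return new int[] { left, right };
    }
}
```

Because each reading snaps to a fixed angle, the rotation stays stable even though the sensor cannot report anything in between.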
Another difficulty was mapping the transferred data in Processing to a certain range so that the movement of the legs and arms matched the real puppet. After a lot of testing and calculation, we made it work. A further problem was that the animation in Processing was not smooth enough: the legs and arms would appear to jump to a certain position. Tristan then introduced me to a function called lerp(), which solved the problem, and I applied the same method to control the movement of the strings. 
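lerp() simply moves a value a fixed fraction of the remaining distance toward its target on every frame, which is what makes the motion ease in smoothly. A quick sketch of the same formula in plain Java; the target position and easing numbers here are illustrative:

```java
public class LerpDemo {
    // Same formula as Processing's lerp(): interpolate between start and stop.
    static float lerp(float start, float stop, float amt) {
        return start + (stop - start) * amt;
    }

    public static void main(String[] args) {
        float pos = 0;       // current position of a limb
        float target = 570;  // position requested by the sensor
        float amt = 0.22f;   // easing factor (we used 0.22 in our sketch)
        for (int frame = 0; frame < 5; frame++) {
            pos = lerp(pos, target, amt);  // each frame closes 22% of the gap
            System.out.printf("frame %d: %.1f%n", frame, pos);
        }
    }
}
```

Instead of jumping straight to the target, the value approaches it in ever-smaller steps, so the limb glides into place.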

The outlook

The instruction

The explanation

CONCLUSIONS:

        The goal of our project is to show users the theme of social expectation. In society, there is a phenomenon of people trying to impose their social expectations on others. But while they impose their expectations on others, they themselves are also being controlled and made to meet others' expectations of them. In my preparatory research and analysis, my definition of a successful interactive project was that the interaction between users and the project should be straightforward, so that users can tell how their interactions affect the project; that the project should have many forms of interaction instead of merely one; and that the project should have a specific meaning. From my perspective, our project mostly aligns with this definition. The audience can tell the logic behind the movement of the puppet from the tilting of the cross. Besides, the meaning of our project, social expectation, is clear and has an explainable logic. Where it falls short of the definition is that it contains only one form of interaction, tilting the cross, if we don't take the process of taking a selfie into account. The users' interaction is that they hold and tilt the cross, trying to figure out how it controls the puppet, while listening to the background music and the monologue of the puppet. The one thing we did not expect was that the audience would neglect our projection on the wall because they focused so much on the puppet inside the box. If we had more time, we would make the instructions clearer. Moreover, we would probably make the whole interaction longer so that users have time to reflect on what is going on and figure out the theme by themselves. 
Another improvement would be to project the Processing animation inside the stage, so the audience can see what is going on in Processing and so that Arduino and Processing are better integrated. From building this project, I learned that things will not always go as you expect, just like what happened with the accelerometer; what we can do is be patient and either find an alternative or find out what is going wrong. 

The Whole Process

Code for Arduino


Code for Processing 

import processing.sound.*;
import processing.video.*;
import processing.serial.*;

SoundFile sound;   // the puppet's voice
SoundFile sound1;  // background music
Capture cam;
PImage cutout = new PImage(160, 190);

String myString = null;
Serial myPort;
int NUM_OF_VALUES = 2;   /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;      /** this array stores values from Arduino **/

PImage background;
PImage body;
PImage arml;
PImage armr;
PImage stringlr;
PImage stringar;
PImage stringal;
PImage legl;
PImage stringll;
PImage legr;
float yal=100;
float yll=0;
float yar=0;
float ylr=0;
float leftangle=PI/4;
float rightangle=-PI/4;
float leftleg = 570;
float rightleg = 570;
float armLerp = 0.22;
float legLerp = 0.22;
float pointleftx =-110;
float pointlefty =148;
PImage body2;
boolean playSound = true;
void setup() {
  size(850, 920);
  setupSerial();
  cam = new Capture(this, 640, 480);
  cam.start(); 
  background = loadImage("background.png");
  body=loadImage("body.png");
  arml=loadImage("arml.png");
  stringal=loadImage("stringal.png");
  armr=loadImage("armr.png");
  legl=loadImage("legl.png");
  stringll=loadImage("stringll.png");
  legr=loadImage("legr.png");
  stringar=loadImage("stringar.png");
  stringlr=loadImage("stringlr.png");
  body2 =loadImage("body2.png");
  sound = new SoundFile(this, "voice.mp3");
  sound1 = new SoundFile(this, "bgm.mp3");
  sound1.play();
  sound1.amp(0.3);
 
  
}


void draw() {
  updateSerial();
  printArray(sensorValues);
  if (millis()<15000) {
    if (cam.available()) { 
      cam.read();
    } 
    imageMode(CENTER);

    int xOffset = 220;
    int yOffset = 40;

    for (int x=0; x<cutout.width; x++) {
      for (int y=0; y<cutout.height; y++) {
        color c = cam.get(x+xOffset, y+yOffset);
        cutout.set(x, y, c);
      }
    }

    background(0);
    image(cutout, width/2, height/2);

    fill(255);
    textSize(30);
    textAlign(CENTER);
    text("Place your face in the square", width/2, height-100);
    text(15 - (millis()/1000), width/2, height-50);
  } else { 
    if (playSound) {
      // play the voice
      sound.play();
      // and prevent it from playing again by setting the boolean to false
      playSound = false;
    } 
    imageMode(CORNER);
    image(background, 0, 0, width, height);
    image(legl, 325, leftleg, 140, 280);  
    image(legr, 435, rightleg, 85, 270);
    image(body, 0, 0, width, height);
    if (millis() >= 43000) {
      image(cutout, 355, 95);
      image(body2, 0, 0, width, height);
      sound.amp(0);
    }
    arml();
    armr();
    //stringarmleft();
    image(stringal, 255, yal, 30, 470);
    image(stringll, 350, yll, 40, 600);
    image(stringar, 605, yar, 30, 475);
    image(stringlr, 475, ylr, 40, 600);

    // use the incoming values from the Arduino
    int a = sensorValues[0];  // x-axis tilt: drives the arms
    int b = sensorValues[1];  // y-axis tilt: drives the legs
    float targetleftangle = PI/4 + radians(a/2);
    float targetrightangle = -PI/4 + radians(a/2);
    float targetleftleg = 570 + b*1.6;
    float targetrightleg = 570 - b*1.6;

    leftangle = lerp(leftangle, targetleftangle, armLerp);
    rightangle = lerp(rightangle, targetrightangle, armLerp);
    leftleg = lerp(leftleg, targetleftleg, legLerp);
    rightleg = lerp(rightleg, targetrightleg, legLerp);

    float targetpointr = -100 - a*1.1;
    float targetpointl = -120 + a*1.1;
    float targetpointr1 = -50 + b*1.3;
    float targetpointr2 = -50 - b*1.3;
    yal = lerp(yal, targetpointr, armLerp);
    yar = lerp(yar, targetpointl, armLerp);
    yll = lerp(yll, targetpointr1, legLerp);
    ylr = lerp(ylr, targetpointr2, legLerp);

    //delay(10);
  }
}

void arml() {
  pushMatrix();
  translate(375, 342);
  rotate(leftangle);
  image(arml, -145, -42, 190, 230);
  fill(255, 0, 0);
  noStroke();

  popMatrix();
}



void armr() {
  //fill(0);
  //ellipse(500,345,10,10);
  pushMatrix();
  translate(490, 345);
  rotate(rightangle);
  //rotate(millis()*PI/800);
  image(armr, -18, -30, 190, 200); 
  popMatrix();
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[ 11 ], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----" 
  // and replace PORT_INDEX above with the index number of the port.

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}



void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n'  Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Whack A Thumb – Connor S – Inmi

Conception and Design:

As I began considering my final project, the hardest part was deciding what theme or type of interaction it would involve. From the beginning I was fairly set on creating a game of some kind, usable by a wide range of audiences and without an overly complex design, in the hope of not confusing or alienating anyone from the experience of using whatever it ended up being. At first, my thoughts revolved around the idea of accessibility, and more specifically, providing a user experience that would go unquestioned because of how self-evident it is in its design, aesthetic, and usability. Following what I understood and appreciated about good design, I wanted to reintroduce and expand upon something the majority of users would be familiar with, which is part of the reason I went with a thumb-war-type game. Another reason I chose a thumb war as my inspiration is that a standard thumb war between two people literally fits in the palm of your hand, and I wanted my project to be as unintimidating and intuitive as possible. 

Fabrication and Production:

I knew my project would eventually be some sort of handheld device, but I wanted to be sure not to come up with an overly complex or ambitious design only to hit a roadblock because of the logistics of making it. I concluded that to make a functioning “thumb war”, I would need: (1) some sort of handle to emulate the experience of holding a hand as one would in a thumb war, and (2) a thumb that could move at a brisk enough pace to create a challenge. With this design concept in mind, I started to consider the various tools at my disposal. I was lucky enough to find a piece of pipe in the fabrication room that I could cut to use as a sturdy handle. I wanted to create a simple-to-understand yet challenging user experience, and concluded that an effective way to achieve this would be to attach a thumb to a servo motor, mimicking some of the challenge and quick thinking associated with pinning a real thumb. My first iteration included a flat, makeshift thumb made from cardboard, and while this worked in terms of movement and durability, multiple people suggested I 3D model a more realistic thumb to add to the experiential and immersive aspect of my project. Even though I ran into a bit of trouble with the printer at first…

…I would consider the decision to include a more realistically sized and shaped thumb a good one. 

 Since this would be a game, I needed to incorporate something that could indicate the user had successfully pinned the thumb, which is how the pressure sensor came into focus. The pressure sensor serves as an appropriately sized target for pinning the opposing thumb, but a potential problem with this idea was that someone could accidentally trigger the sensor without successfully pinning the thumb. Since nobody playing the game would ever hold their thumb flat on the surface for an extended time while the game is afoot, I decided to add a condition in my code requiring the sensor to be held for at least 3 seconds to trigger a victory. After completing a decent amount of the code and wiring, I realized I would need somewhere to store the Arduino, breadboard, and all the wires, which is how I arrived at this rough sketch: 
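The 3-second condition can be implemented without blocking the loop by remembering when the press started and comparing timestamps, the way millis() is normally used on an Arduino. A minimal plain-Java sketch of that logic; the names and structure are mine, not the actual project code:

```java
public class HoldDetector {
    static final long HOLD_MS = 3000;  // the sensor must stay pressed this long
    private long pressedSince = -1;    // -1 means the sensor is not pressed

    // Call once per loop with the current time and the sensor reading;
    // returns true once the press has lasted HOLD_MS without a release.
    public boolean update(long nowMs, boolean pressed) {
        if (!pressed) {
            pressedSince = -1;                       // released: reset the timer
            return false;
        }
        if (pressedSince < 0) pressedSince = nowMs;  // press just started
        return nowMs - pressedSince >= HOLD_MS;
    }
}
```

An accidental tap resets the timer on release, so only a deliberate, sustained pin registers as a victory.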

 

After my code was settled, the physical production process was relatively straightforward. I needed to feed wires through holes cut into the top of the bottom container and top surface, while also ensuring the project remained simple and clean in appearance. 

I thought this would be a good way to house all the necessary components, while not taking up an inordinate amount of space, or detracting from the design’s simplicity with uncovered wires. As shown in the photo above, there would have been a lot of visually unpleasing aspects of the project visible if not for the admittedly bulky, but necessary bottom container.         

Conclusions: 

The initial goal of my project was to create an interactive experience with an intuitive, simple design that would not alienate anyone due to its over-complexity, and I feel that with those criteria in mind, I successfully laid the groundwork for how a professionally produced thumb war game could operate and look. My definition of interaction was centered mainly on the concept of user friendliness in the form of prompts and responses provided by the design in question, and I think my project achieved these goals to varying extents. I think “Whack A Thumb” was user friendly in the sense that almost anyone could understand the concept, its uses, and the game play mechanics after a quick inspection.  While my biggest regret is that I spent too little time on ensuring the physical project would retain its structural integrity, I still have faith in my project’s concept and presentation. If I had more time I would make sure the device is made entirely from a durable yet light-weight material, and include a more ergonomically designed handle. 

Looking back on the process of planning, constructing, and implementing my project, I learned that, while the basic concepts behind its design and function were well-formulated, there is no substitute for repeated and thorough user testing, which helps produce a fully polished final product. A seemingly foolproof concept will almost always run into unexpected trouble when the idea is implemented in an uncontrolled environment. If I could add a footnote to the definition of “interaction” I proposed at the beginning of the semester, I would stress that the principles of interaction are observable between the user and the project, not between the user, the project, and the creator. Implementation matters a lot, and even if something “works” for the designer, it certainly will not work to the same degree for the user.        

Happiness Vending Machine – Jiayi Liang(Mary) – Rudi

PROJECT TITLE

Happiness Vending Machine

CONCEPTION AND DESIGN:

For my final project, I worked on connecting my project to society. As unique individuals, I think every one of us has a duty to take care of each other as a community. My project is called “Happiness Vending Machine”, an interactive vending machine that sells happiness. Happiness here means “those happy moments that can make someone feel happy”: a small incident in life, an interesting picture, or a beautiful song. We could put this vending machine in big shopping malls or on university campuses to spread happiness and comfort more people.

The vending machine focuses on the prevailing mental strain of urban life: people often feel stressed and upset in their daily lives. According to my research, there is a popular new trend called “丧文化”, “disheartenment culture”, which refers to people's pessimistic attitudes towards life. Take me as an example. When I was working on my EAP final project, I could not help complaining about the huge workload and difficult tasks that seemed impossible to fulfill. I kept saying, “I feel like I am a waste”, “Why did I choose NYUSH?”, “I will definitely get an F”. However, I ignored that there was also something enjoyable in the project. It was nice to cooperate with my group members. It was fun to interact with others to gather information useful for my research. After I finished my presentation, I felt a sense of self-pride. These were all good moments in my finals period, but I chose to ignore them and fell into a mood of sorrow. 


↑A super popular image that stands for 丧文化 —— a middle-aged man lying on the sofa with a sad face

Thus, I came up with the idea that maybe I could collect those ignored but joyful things in our lives to comfort people who face the same dilemma I did. Then I worked on finding ways to share those happy moments, and found that a vending machine could be a good medium for this interaction. Usually we see vending machines selling food or drinks, but it is rare to see one selling an abstract product like happiness, which can draw users' attention and arouse their curiosity. The vending machine here is a metaphor. You don't actually need to pay money for it; you just use your interaction, raising your hand in front of the sensor, to buy it. This machine creates a utopia where you don't have to pay a great cost to get happiness. It is the one and only vending machine that doesn't require money, just like a fairy tale, which is healing for those of us who live in a busy metropolis.

Recently, a kind of toy called the blind box has become popular among young people. Not knowing which cute figure is actually in the box, buyers feel a sense of surprise when unpacking the product, which adds playfulness to the process of purchasing. Thus, I decided to apply the same randomness to my happiness vending machine, which makes buyers feel more surprised.

FABRICATION AND PRODUCTION:

My first step was to collect happy moments from my friends. I sent a message on WeChat, and many people shared their own moments with me. The process of collecting them had actually already healed me. Their contributions range widely, and through the process I felt more engaged with my social circle. 

Then I turned these contributions into cute and adorable pictures, classifying the information into three categories: A. simple sentences, B. photos, C. songs.

A.

B.

C.

see more on google docs: 

https://docs.google.com/document/d/1UwE5T8zDDEsuJ1LK87foyfl99ZrMSl5MsjROrVcUBrs/edit

the songs

https://music.163.com/#/song?id=1336856864

https://music.163.com/#/song?id=1366036963

https://music.163.com/#/song?id=1314353013

I built my vending machine as a wooden model. All users need to do is stand in front of the machine, raise a hand to collect “happy power”, and wait for a little while. Then they can press a button to purchase a happy moment shared by others, paying with the happy power they have collected. I made an instruction front page to guide the users.

To make my project work, there were two main pieces of code to write. The first counts the happy power. The second makes the buttons work: if you press one, the machine shows you a picture.

For the first task, I read the distance sensor's analog output to get its value. If the sensor senses something getting closer, the code adds one point to the total happy power. 

Code for task 1 in arduino:

The second task was much more complicated. I set a variable called “button state”: if the first button is pressed, the button state goes to 1, the second to 2, the third to 3, and the fourth to 4. At first I got lost using an example while loop from the internet, and ran into the problem that the distance sensor only worked while I was pressing a button. A fellow told me I should just use if statements to reach my goal. Another problem was that I had misconnected three pins of the buttons, so the buttons did not work for a long period; luckily, I found the mistake at last. The biggest problem was that if I kept pressing a button, the image kept changing. I added one more variable holding the previous button state to solve it: the picture only changes when the button state differs from the previous one. (This also led to one more problem, that I couldn't press the same button twice, so I added a “return” button.) I also added some lines so that once a button is pressed, the happy power is reduced by 20.
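The previous-state check is a standard edge-detection pattern: react only when the reading differs from the last loop. A minimal sketch of that logic in plain Java; the names and the starting power value are illustrative, not my actual Arduino code:

```java
public class ButtonEdge {
    private int previousState = 0;  // 0 = nothing pressed, 1..4 = a button
    private int happyPower = 100;   // illustrative starting balance

    public int getHappyPower() { return happyPower; }

    // Returns the picture slot to show (1..4), or 0 when nothing new happened.
    public int update(int buttonState) {
        int show = 0;
        if (buttonState != previousState && buttonState != 0) {
            show = buttonState;  // the state just changed: trigger exactly once
            happyPower -= 20;    // each purchase costs 20 happy power
        }
        previousState = buttonState;  // remember the reading for the next loop
        return show;
    }
}
```

Because releasing all buttons drives the state back to 0 in this variant, pressing the same button twice in a row also triggers again, which is one way around the problem I solved with the “return” button.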

The complete arduino code:

I use Processing to store the different pictures. Once Processing sees that the button state from the Arduino has changed, it randomly shows a relevant picture (or song). I use arrays to store the information, edited the songs into clips of about 15 seconds, and added some sound effects to make it sound like a vending machine.
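The random choice amounts to one array per category plus a single random index. A plain-Java sketch of the idea, using made-up file names rather than my actual assets:

```java
public class HappyPicker {
    // One array per category: A. sentences, B. photos, C. songs (names made up)
    static final String[][] CATEGORIES = {
        { "sentence1.png", "sentence2.png", "sentence3.png" },
        { "photo1.png", "photo2.png" },
        { "song1.mp3", "song2.mp3", "song3.mp3" },
    };

    // Pick a random item from the category matching the pressed button.
    static String pick(int category, java.util.Random rng) {
        String[] pool = CATEGORIES[category];
        return pool[rng.nextInt(pool.length)];
    }
}
```

Each button maps to one category, so every purchase of the same kind can still surprise the buyer with a different item.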

Finally, my project works like this.

Conclusion:

In conclusion, my project focuses on people's mental issues, their pessimistic attitudes towards life. The happiness vending machine sells happy moments to people in order to heal and comfort them. My project aligns with our definition of interaction because there is a procedure of “input, process and output”, as Crawford claims. The vending machine interacts with the audience by noticing the users' action to collect “happy power”, receiving their request to buy happiness, and offering them a funny picture or piece of music as output. What is unique about my project is that it raises people's awareness of the tiny but certain beautiful moments in their lives. An adorable cartoon image makes them feel more relaxed and happier. This vending machine creates a utopian space, different from busy and boring social life. In front of the machine, you can escape the busy life and confusing material things, gaining pure happiness without paying money. All you need to do is spend a little time relaxing and enjoying the happy moments.

However, there is still a lot of room for revision. I received feedback that it might be better to put more interesting labels on the buttons instead of just ABC, for example, “if you are angry, press this”. I will also add more description at the beginning to let users realize what this machine is for.

For subsequent projects, I will add one more function to make the vending machine more interactive: buyers will be able to upload their own happy moments to the machine, so the project can also work as an illustrated handbook recording people's various kinds of happiness.

References:

http://en.people.cn/n3/2017/0712/c90782-9240823.html

The Art of Interactive Design, Crawford (pages 1-5)

Blind Boxes

https://baike.baidu.com/item/%E4%B8%A7%E6%96%87%E5%8C%96/19892924?fr=aladdin