Recitation 10 Documentation – Jackson Simon

For this recitation, the workshop session, I decided to attend the one on Serial Communication by Mr. Young. I felt it was quite important for my project, since I would need to communicate from Arduino to Processing and from Processing to Arduino at the same time.

I learned how to make a sensor value from the Arduino influence Processing (for example, in my Final Project, an accelerometer influenced whether the game would continue in Processing).

In this simple example shown in the video below, I connect an infrared sensor to the Arduino, map its values from 0-1023 down to 0-50, and read them in Processing.

It ended up being useful (at the start of my project, before I decided to start the game a different way than with an infrared sensor): when a person walked in front of the sensor, it would start the game in Processing and trigger the audio from Processing.
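The mapping step above can be sketched on its own. Arduino's map() is a simple linear interpolation done in integer math; the standalone class below is just an illustration of that formula (the class name and values are hypothetical, assuming the 0-1023 to 0-50 range from the example):

```java
// Hypothetical standalone sketch of Arduino's integer map() function,
// assuming the IR sensor reports 0-1023 and we want 0-50 as in the example.
public class MapDemo {
    // Same formula Arduino's map() uses (integer math, result truncates).
    static long map(long x, long inMin, long inMax, long outMin, long outMax) {
        return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
    }

    public static void main(String[] args) {
        System.out.println(map(0, 0, 1023, 0, 50));    // far away -> 0
        System.out.println(map(512, 0, 1023, 0, 50));  // midpoint -> 25
        System.out.println(map(1023, 0, 1023, 0, 50)); // closest -> 50
    }
}
```

Note that because the math is integer math, the result truncates rather than rounds, which is usually fine for sensor thresholds like this.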

Familiar Faces-Christina Bowllan-Inmi Lee


For our final project, Isabel and I wanted to address why people do not naturally befriend others from different cultures. While this of course does not apply to everyone, we realized that in our school, people who speak the same language often hang out together, and people from the same country do as well. This answers one part of the question, but the real problem is that we fail to realize that people from other cultures are more similar to us than we think; we all have hobbies, hometowns, things we love to do, foods that we miss, and struggles within our lives. In order to illustrate this, we interviewed several workers within our school, such as the aiyis and the halal food workers, because we wanted to share their stories, since they are a group in our school that we often overlook.

827 (Video 1)

829 (Video 2)

In order to create this project, we had three different sensor spots: one was a house, one was a radio, and one was a key card swiper. When the user pushed the key inside the house, audio about the workers' home experience would play; the radio played miscellaneous sound clips about what they missed from their hometowns or what they do in Shanghai over the weekends; and the card swiper randomized their faces in the Processing image. We decided to create these different physical structures because we wanted each to represent a different aspect of their lives, and we created the Processing image to show people that our stories are not that different from one another's; after all, we all have eyes, a nose, and a mouth. We tried to make the interaction resemble what people do in their everyday lives, or, if they do not use a radio, for example, to build structures the users would already know how to interact with. On the whole, this worked: people knew how to use the card for the swiper and push the radio button, but for some reason, people did not understand what they should do with the key. To construct each part, we did a lot of laser cutting, as this is what our house and radio were made out of. This proved to be a great method because the boxes were really easy to put together, they looked clean, and the radio could hold our Arduino as well. In the early stages we had thought about 3D printing, but it would have been hard to fit a sensor inside that material. For the card swiper, it would have been too difficult to build all of the pieces for laser cutting, so we designed it using cardboard, which proved effective. We were able to tape up the various sides, and it held the sensor in place very well, so the interaction between Processing and Arduino was spot on!

Above shows how our final project ended up, but it did not start this way. Our initial idea was to hang four different types of gloves on the wall, which would represent people from different backgrounds and classes. The user was meant to high-five the gloves, which would change the randomized face simulation to show that if we cooperate and get to know one another, we can understand that our two worlds are not that different. For user testing, we had the gloves and the randomized face simulation, but the interaction was a bit basic. At first we wanted to put LED lights on each glove so that people would have more of a reason to interact with our game, but the project in general was not conveying our meaning. Users found the project cool and liked seeing pictures of their friends change on the screen, but they did not recognize the high-five element as showing cooperation or the bigger idea. The main feedback we got was that we needed to be more specific about what it means for people from all backgrounds to come together.

At this point, we decided to create what was our final project and focus in on a certain group of people to show that we have shared identities. So, while the gloves were great, we did not end up using them, and we created the house, radio and card swiper to show different connection points between people. 

For our project, we wanted to show people that we are not so different after all, and we used the different workers within our school to illustrate this idea. Our project definitely aligned with my definition; we did have "a cyclic process in which two actors, think and speak" (Crawford 3), and we created the kind of meaningful interaction we should strive for in this class. Ultimately, I think people did understand our project through the final version we created, but if we continued working on it, of course we could make some changes. For example, we could add subtitles to the interviews so that English speakers understand, and Tristan had a good idea to add a spotlight so people know which interaction to focus on. Also, as I mentioned above, people did not really know what to do with the key. It ended up working out, because I believe that slowly understanding what to do with each part resembles what it's like to get to know someone, but this was not our intended interaction. I have learned from doing this project that "all. good. things. take. time". I am so used to "cranking out" work in school and then not looking at it again, so it became tedious having to fix different dilemmas here and there. But once I did the interviews and constructed the card swiper by myself, I felt a wave of confidence, and that motivated me to keep working on the project. Overall, people should care about our project because if you care about building a cohesive and unified community and improving school spirit, this is an unavoidable first step.

CODE

Arduino Code:

// IMA NYU Shanghai
// Interaction Lab
// For sending multiple values from Arduino to Processing

void setup() {
Serial.begin(9600);
}

void loop() {
int sensor1 = digitalRead(9);
int sensor2 = digitalRead(7);
int sensor3 = digitalRead(8);

// keep this format
Serial.print(sensor1);
Serial.print(","); // put comma between sensor values
Serial.print(sensor2);
Serial.print(",");
Serial.print(sensor3);
Serial.println(); // add linefeed after sending the last sensor value

// too fast communication might cause some latency in Processing
// this delay resolves the issue.
delay(100);
}

Processing Code:

// IMA NYU Shanghai
// Interaction Lab
// For receiving multiple values from Arduino to Processing

/*
 * Based on the readStringUntil() example by Tom Igoe
 * https://processing.org/reference/libraries/serial/Serial_readStringUntil_.html
 */

import processing.serial.*;
import processing.video.*; 
import processing.sound.*;
SoundFile sound;
SoundFile sound2;

String myString = null;
Serial myPort;


int NUM_OF_VALUES = 3;   /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;      /** this array stores values from Arduino **/
int[] prevSensorValues;


int maxImages = 7; // Total # of images
int imageIndex = 0; // Initial image to be displayed
int maxSound= 8;
int maxSound2= 10;
boolean playSound = true;
// Declaring three arrays of images.
PImage[] a = new PImage[maxImages]; 
PImage[] b = new PImage[maxImages]; 
PImage[] c = new PImage[maxImages]; 
//int [] d = new int [maxSound];
//int [] e = new int [maxSound2];
ArrayList<SoundFile> d = new ArrayList<SoundFile>();
ArrayList<SoundFile> e = new ArrayList<SoundFile>();

void setup() {

  setupSerial();
  size(768, 1024);
  prevSensorValues = new int[NUM_OF_VALUES];

  imageIndex = constrain(imageIndex, 0, maxImages - 1); // keep the index inside the image arrays
  // load all sounds and images from the data folder into each array
  for (int i = 0; i < maxSound; i++ ) {
    d.add(new SoundFile(this, "family" + i + ".wav"));
  }
  for (int i = 0; i < maxSound2; i ++ ) {

    e.add(new SoundFile(this, "fun" + i + ".wav"));
  }
  for (int i = 0; i < a.length; i ++ ) {
    a[i] = loadImage( "eye" + i + ".jpg" );
  }
  for (int i = 0; i < b.length; i ++ ) {
    b[i] = loadImage( "noses" + i + ".jpg" );
  }
  for (int i = 0; i < c.length; i ++ ) {
    c[i] = loadImage( "mouths" + i + ".jpg" );
  }
}


void draw() {
  updateSerial();
  // printArray(sensorValues);
  image(a[imageIndex], 0, 0);
  image(b[imageIndex], 0, height/2*1);
  image(c[imageIndex], 0, height/1024*656);




  // use the values like this!
  // sensorValues[0] 
  // add your code
  if (sensorValues[2]!=prevSensorValues[2]) { // card swiper
    println("yes");
    // all three arrays have the same length, so one random index
    // picks matching eye, nose, and mouth images
    imageIndex = int(random(a.length));
  }
  if (sensorValues[1]!=prevSensorValues[1]) {
    //imageIndex += 1;
    println("yes");
    
    int soundIndex = int(random(d.size()));//pick a random number from array
    sound = d.get(soundIndex); //just like d[soundIndex]
    
    if (playSound == true) {
      // play the sound
      sound.play();
      // and prevent it from playing again on the very next trigger
      playSound = false;
    } else {
      // the previous trigger played a sound; make the sound
      // playable again by setting the boolean back to true
      playSound = true;
    }
  }
  if (sensorValues[0]!=prevSensorValues[0]) {
    //imageIndex += 1;
    println("yes");
  
    int soundIndex = int(random(e.size()));
    sound2 = e.get(soundIndex); //just like e[soundIndex]
    if (playSound == true) {
      // play the sound
      sound2.play();
      // and prevent it from playing again by setting the boolean to false
      playSound = false;
    } else {
      
      playSound = true;
    }
  }

  prevSensorValues[0] = sensorValues[0];
  println(sensorValues[0], prevSensorValues[0]);
  println (",");
  prevSensorValues[1] = sensorValues[1];
  println(sensorValues[1], prevSensorValues[1]);
  println (",");
  prevSensorValues[2] = sensorValues[2];
  println(sensorValues[2], prevSensorValues[2]);

}



void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[ 1 ], 9600);
  // WARNING!
  // You may get an error here if the port index is wrong.
  // Check the printed list of ports above,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----",
  // and replace the index (currently 1) in Serial.list()[ 1 ].

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}



void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n'  Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}
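The parsing that updateSerial() performs (read one comma-separated line, trim the linefeed, convert each field to an int) can be checked on its own in plain Java. The class name below is just for illustration:

```java
// Hypothetical standalone version of the parsing inside updateSerial():
// take one line like "1,0,1\n" from the serial port and turn it into ints.
public class SerialParse {
    static int[] parseLine(String line, int numValues) {
        String[] parts = line.trim().split(",");
        if (parts.length != numValues) {
            return null; // incomplete line, ignore it (as the sketch does)
        }
        int[] values = new int[numValues];
        for (int i = 0; i < numValues; i++) {
            values[i] = Integer.parseInt(parts[i].trim());
        }
        return values;
    }

    public static void main(String[] args) {
        int[] v = parseLine("1,0,1\n", 3);
        System.out.println(v[0] + " " + v[1] + " " + v[2]); // prints "1 0 1"
    }
}
```

Checking that the split produced exactly NUM_OF_VALUES fields, as the sketch does, is what protects against reading a partial line mid-transmission.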

Watch Your Words- Rodrigo Reyes- Eric

Watch Your Words

CONCEPTION AND DESIGN:

We wanted a model that would actually inform the user of what they had to do as they went through the game, instead of giving all the information at once at the beginning. In fact, we gave very specific instructions at the beginning that were at the same time not fully complete. Once users were asked what the last word they saw was and were given four word choices, they knew what they had to do. Another aspect of the project that we really wanted people to focus on, as the name implies, is the words themselves. Our project is meant to improve eyesight and reading skills as well as memory. On every level, there was an array of nine words from which the code would randomly pick four. The words were programmed to blink in a pattern where only one word out of the four would show on screen at a time, something inspired by games I played for dyslexia as a kid. The pattern was meant to make sure the user follows the words. Each level coherently had words about a specific topic; for instance, on the beach level, we had shells, sea, and sunscreen. To add to this, we had a soundtrack for each level; on the beach level we had the sound of the sea. The soundtracks were meant to give the user a sensory experience that would enhance their eidetic memory. To simplify the already complex process of this project, Sarah and I opted for a fairly simple design for the Arduino part. I made an Arduino case with four buttons that we got from the Shanghai Electronic Market (really cool LED buttons). To us and to the users who tested our project, the buttons worked because the instructions in the code pointed users to the buttoned case to answer. To make it obvious, I engraved the case with A, B, C, and D for each button. It was a very personal project for me because we made it wishing I had had something like it to help me deal with dyslexia as I grew up.
We also wanted to include something beyond memory and reading as we already had it, so we wanted to play with incorporating coloring into the equation. Nevertheless, we had a time constraint.

 

FABRICATION AND PRODUCTION:


During the User Testing Session, Sarah and I concretized our idea for the project. We were not so sure how to make the "game" feel challenging for people while still being enjoyable and easy to understand. People told Sarah and me separately to make it like a "multiple choice quiz" so players would feel pressured to answer correctly. At the time of the testing session, we did not have a background track included in the code; we had planned a soundtrack but did not have time to include it. People really stressed to us how much sound would help the project. People in the testing session also wanted us to have "cool" buttons for the visual aesthetic of the project, especially the Fellows. I finally got LED buttons from the electronic market that Eric recommended. Because the design we already had for the Arduino case was way too small, and because of our time constraint, we could not incorporate the LED part of the buttons.

  

CONCLUSIONS:

As I have previously stated, interaction is a conversation, a dialogue, a transmission, and a tool that requires an input, a process, and an output. For it to be a more complex interaction, the input, process, and output must be able to go back and forth; by its nature, you need a relationship. To add to what I said before, I would like to stress one key element: understanding. In the communication that interaction facilitates, there must be an understanding of how and what to communicate. For this final project, we wanted people to communicate with the computer so that they could get a different experience each time (the words are randomized). We wanted this dialogue to feel unique and compelling each time. The buttons on the Arduino could easily have been dropped; we could have just used the computer mouse or keyboard. I think we could have done a better job with the interface's design. I learned from this that we could have planned even more and invested time into building something specifically meant for people with Alzheimer's, as Eric suggested. I love the idea of using technology as a means to bring art and social thought together. An insightful element of the project experience was working on something that was constantly subjected to others' opinions. During user testing we got a lot of valuable opinions. Being open to hearing everything, even when it modifies your initial idea and your "art", matters because, in the end, you do it for the people. In fact, our idea of a video game to help people with dyslexia is something I would like to propose to a tech company someday.

Final Project: Truth about Truth – Chloe (Yuqing) Wang – Rudi


My idea of how this installation would look changed a lot during my development process. At first I wanted it to be a complicated series of questions and answers with data collected from user interactions. But after talking to various people about my project, I gathered different opinions and came up with the final version.

Artist Statement

Contemporary media has the power to shape the way we think. We are all blind when it comes to truth. In this environment, how do we determine right from wrong, black from white? In what ways do we interpret something as a fact, and how likely are we to view something or someone without prejudice? Truth about Truth is an installation intended to remind individual observers of the importance of not categorizing and defining others from only one perspective, and of not letting our decisions be influenced by media portrayals or by what is merely on the surface. However, this work is also open to all interpretations.

Conception and Design

As written in my final essay, I based my project on two media theories: "Selective Exposure" and the "Third-Person Effect". Selective Exposure means that people tend to read and accept news that is in accordance with their beliefs. The Third-Person Effect is the concept that individuals believe others are easily influenced by the media while they themselves are not. With this project, I wished users could slowly explore by themselves and come to their own understanding of what this project is about.

The Three photos

Changing the images to any group photo would make sense, but in this case I chose three images with opposing groups of people in them. If we look at these images without cropping out the individual faces, we automatically give each of them a definition, a job title, and decide whether they are good or bad people.

1. Hong Kong Protest: I got the idea for my project because of the Hong Kong protests. For me, the protests did not happen only on the news; they impacted some of my closest friends who were studying in Hong Kong. As someone sandwiched between the two sides of this conflict, I wish to maintain a neutral perspective. Each person holds their own perspective when looking at this image. By cropping out the individual faces, I wish to emphasize individuals' roles in this whole situation. Interestingly, although the image I use doesn't explicitly say that it is Hong Kong, many people automatically relate the image to the Hong Kong protests. This shows that people have a pre-set image of what the protesters and the police are like.

Image1 (combined two photos)

2. Andy Lau: I think this image successfully reflects my main idea of media portrayals and the star-making process. Andy Lau is one of the most well-known and commercially successful Chinese singers. His fame was built by mass media in China in the 1990s. Being the focus of this image, Andy Lau is still a recognizable figure even when cropped out. The media has made us (at least the Chinese users) connect Andy Lau's face with fame. People go crazy when they see him. So in this image, Andy Lau is separated from everyone else: he is a commercial phenomenon, while the others are consumers of this fame.

image2

3. Occupy Wall Street in 2011: There are not many protester-police opposition images available, so I chose an image of the 2011 Occupy Wall Street protest. This protest was about economic inequalities in the U.S. Many were arrested as the conflicts between police and protesters grew intense. This image keeps the topic of the project from being limited to just China. Whose side are you on when the protest is happening in New York? How do people's perspectives change when the context of the image is not so obvious and recent?

image3

The Vintage Monitor: For this project, I didn't want my project shown directly on my computer screen. My initial idea was to laser-cut a box that could cover my computer so it would look like a TV screen. Then I wanted an old TV set that was smaller, with buttons on the sides, to create a more intuitive user experience. I also considered getting a newer monitor, but it still did not fit the aesthetics and goals I wanted to achieve with this project. What I found in the second-hand market, however, was a huge monitor with a rather high resolution. With the monitor, I wanted to make the users feel as if they were controlling a surveillance camera to observe individuals, as the set-up of this machine only allows you to look at one face at a time unless you press the button. There is another layer of meaning. Nowadays, with our smartphones and laptops, we feel like we control the world: we know everything that is happening, and we can easily reach out to those we want to talk to or comment on things we want to comment on. At the time this monitor was in use, however, information was only starting to be transferred faster. This machine also shows that social media today only gives us fragments of information. You still cannot fully understand someone just by looking at a small fragment of their face.

Love at first sight

The Case: The clear case for the Arduino is also a hint at the title of my project. I remember in class we talked about putting an input into a black box and receiving an output without knowing what happens in between. This transparent case shows that there is a lot going on underneath the deceptive surface. I think it was necessary in this project to have a see-through wire case.

The Transparent Case

Sound Effects: The sound effect that plays as the user turns the knob signifies a surveillance camera focusing on individual faces. Although this interpretation might add another layer to the project, it has made the project more engaging for the users. Although I received recommendations to add background music to my project, I realized that if the project were in an environment like the IMA show, adding background noise would not improve the whole experience.

Fabrication and Production

I started my project based on what I had done in one of the recitations, where users control two potentiometers to reveal parts of an image. At first, I wanted to make a similar interaction device with other sensors, but I realized that my ideas could be better expressed if the device only revealed faces. So I chose the first police-and-protesters image and cropped out the people's faces with Gravit, labeling them image1 to image13; the last image is the original photo. I also used two push buttons and one potentiometer for the actions of revealing the whole image, changing the image, and changing individual faces. One problem I encountered was that the images all had different sizes. Rather than changing the canvas size every time, I fitted all the images to the same size so the image change was smoother.

Cropped out faces

User Testing

The user testing process helped me foresee many problems that could occur once the project was done, and many people gave me critical suggestions. During user testing, I only had one image and two buttons. People wanted more images and more ways to interact with the machine. I also realized that people tend to push the buttons first and focus only on the screen; the light-up buttons did not signify "press me" to the users. Everyone interpreted the project as focusing on individuals as parts of a whole. Some people thought that one image was enough to explore and made a strong statement, while others wished to see more images so the project would have a more diverse or complete storyline. Furthermore, most liked having the full image revealed only when the knob was turned to the end; because of this suggestion, I changed this part for the final version. However, I did not realize that adding more images already made the installation more complicated. I should have had the image revealed more often as the user turned the knob.

Video: User-testing version

To organize the wires while still showing them for aesthetic reasons, I laser-cut a clear box that fits the breadboard and the Arduino and sits in front of the monitor without blocking the screen. I also made sure to cut three holes sized for the two buttons and the potentiometer. Then I 3D printed a knob for the potentiometer. However, when I assembled the case with the buttons, the potentiometer was not very stable, so I soldered it and put tape underneath to support it.

Here are some photos of the building process:

Building Process

In one of the images there are 13 faces, while the other two have 7. To make sure the potentiometer spreads the angle it is turned evenly across the number of images, I used the map() function in Processing.

  prevknobValueMapped= int(map(sensorValues[0], 0, 1023, 13, 1 ));
  prevknobValueMapped2= int(map(sensorValues[0], 0, 1023, 6, 1 ));
  prevknobValueMapped3= int(map(sensorValues[0], 0, 1023, 5, 1 ));
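Note that map() in Processing returns a float, and the output ranges above run backwards (e.g. 13 down to 1), so truncating with int() decides which face a given knob position selects. A quick check of that arithmetic in plain Java, using the same linear interpolation that map() performs (class name is illustrative):

```java
// Hypothetical check of the reversed map() call used above.
public class KnobMap {
    // Same linear interpolation Processing's map() performs, returning a float.
    static float map(float value, float inMin, float inMax, float outMin, float outMax) {
        return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
    }

    public static void main(String[] args) {
        // Knob fully counter-clockwise selects image 13, fully clockwise selects 1.
        System.out.println((int) map(0, 0, 1023, 13, 1));    // 13
        System.out.println((int) map(1023, 0, 1023, 13, 1)); // 1
        System.out.println((int) map(512, 0, 1023, 13, 1));  // somewhere in the middle
    }
}
```

Because of the truncation, the top value (13 here) is only hit right at the end of the knob's travel, which matches the behavior of revealing the full image only when the knob is turned all the way.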

Because there are three different images, each with different faces cropped out, I had to separate them into three groups of images in three individual folders (pic1, pic2, pic3) inside the data folder. To retrieve the images from those folders, I used

  for (int i=0; i<14; i++) {
    photos[i]=loadImage("pic1/image"+i+".jpg");
  }

and

    if ((sensorValues[1] == 1) && (knobValueMapped == 13)) { // sv1 = big picture button, sv0 = knob
      // note: calling loadImage() inside draw() reloads the file from disk every frame;
      // preloading this image in setup() would be smoother
      image(loadImage("pic1/image14.jpg"), 0, 0);
    }

The two push buttons correspond to the two digital values, and the potentiometer corresponds to the analog value. The buttons only have two values, 1 and 0, indicating whether or not they are being pressed.
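Because a button reads 1 for as long as it is held, code that should react once per press usually compares the current reading with the previous one, which is the same prev-value pattern used elsewhere in this documentation. A minimal sketch of that edge detection in plain Java (names and readings are illustrative):

```java
// Hypothetical rising-edge detector: fire once when a button goes 0 -> 1.
public class EdgeDetect {
    static boolean risingEdge(int previous, int current) {
        return previous == 0 && current == 1;
    }

    public static void main(String[] args) {
        int[] readings = {0, 1, 1, 1, 0, 1}; // simulated button samples
        int prev = 0;
        int presses = 0;
        for (int r : readings) {
            if (risingEdge(prev, r)) {
                presses++; // react once per press, even if the button is held
            }
            prev = r;
        }
        System.out.println(presses); // prints 2
    }
}
```

Without this check, holding a button down would trigger the action on every frame of draw().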

This is the code for the second group of images:

    if ((sensorValues[1] == 1 ) && (knobValueMapped == 6)) {
      image(loadImage("pic2/image7.jpg"), 0, 0);
    }
    if (prevknobValueMapped2 != knobValueMapped) {
      sound.play();
    }

Conclusion
I wanted to show that contemporary media has the power to shape the way we think, and that individuals matter. During my midterm project, I defined games as the most interactive forms of art; however, they are only interactive because they are immersive. The process of developing my ideas for this project and finally getting it done has been quite a journey for me. There were many times when I got lost in my own ideas. However, the end result of my project was successful for me. All the users I interviewed came to the conclusion that individuals are more important in the big picture. Most people I saw were able to figure out how this installation operated if they focused on it for a longer period of time. However, I still needed to improve the intuitiveness of the machine. Many people got lost when using it, and even when I put up a note explaining how to use it, people still would not read it. Although the machine's "complication" can make people want to explore it more, some people do not have enough patience to explore, or they simply did not see the buttons lighting up and did not interpret that as the time to press them. If I had more time, I would add more images and try to make the operation of my project simpler. From this project, I realized that everyone thinks differently; it is quite difficult to arrange three input values in a way that fits everyone's habits. I used some images that can be considered controversial in this project, but there were no words accompanying the images, so there is no forced understanding of my project. Also, because I used three "irrelevant" images, the project covers quite a diverse topic and can be applied to more areas, which traces back to my initial struggle of being neutral and not taking sides.

Here are all the versions of my code for the final project: https://gist.github.com/Chloeolhc/2662115e00a0ab9844e73ec224b66792

Here is a video of the final presentation of my project:

Sources:

https://www.mpweekly.com/entertainment/focus/local/20170120-33977

https://www.thepeninsulaqatar.com/article/02/09/2019/Hong-Kong-students-rally-peacefully-after-weekend-of-protest-violence

https://www.flickr.com/photos/akinloch/6207968006

https://www.channelnewsasia.com/news/asia/hong-kong-protests-police-children-bullied-data-leak-yuen-long-11746034

Familiar Faces-Isabel Brack- Inmi

Overview:

Image of our project set up

Project display including Processing images, house and key, radio, and card swiper with NYU card. (Not Pictured paper English Bios)

Throughout this project and the phases of design, fabrication, and production, our project completely transformed. Originally, our project was a game/activity in which people would place their hands against different gloves/hands. These hands were linked to the Processing sketch, which would rapidly change faces split into three sections: the eyes, nose, and mouth. These faces randomly cycled through the array each time the hands were pressed. Originally, the eyes section used the webcam to show the user's face mixed in with the different students' faces. However, during user testing we realized that the interaction itself was fairly boring and lacked a greater meaning and experience. During our user testing, we received lots of feedback about the physical difficulties with our project: the live webcam's accessibility based on height, the connection between users' actions and the meaning not being explicit enough, and the interaction itself being non-immersive and simplified. We received overall positive responses to the message and theme of our project, which is to try to understand and get to know different groups of people at our school that most students don't fully understand. In particular, we got feedback from professors and students about incorporating sound/interviews to allow people to tell their own stories. The project we presented on Thursday is an interactive project intended to share the stories of the workers at NYUSH with the student body and faculty, who often overlook them as people and classify them solely as school staff. This project involved a Processing element to control sound and visuals, which used different interview clips that we conducted and different faces that we cut up and assembled into three sections like our original project. Christina conducted most of the interviews and took the pictures along with doing fabrication, and we both contributed to design.
I wrote the original code and modified it for this project, adding in sound arrays with some help from various professors, fellows, and LAs. I also fabricated the physical project, creating the buttons, and helped Christina with the general fabrication of each element. In addition, I wired the circuit and cut the audio and photo images to put into the different arrays. Our original inspiration came from face-swapping interaction technology like Snapchat filters and different face-swapping programs; however, we adapted this technology to better fit our goal, which was sharing the stories of workers who are often overlooked. Also, I came across a similar code to mine (in a picture-array context) which was inspiring for my code, specifically reminding me to place constraints on the picture index. The use of his code was more interesting, though, as he planned to raise awareness for sexual assault survivors through his interaction project, which began my thinking process on how to articulate a story through Processing.

CONCEPTION AND DESIGN:

Once we changed our plan after user testing, informed by the responses we received, we decided to create three objects, each representing one element of the story we wanted to tell about the aiyis and workers at NYUSH. This was largely shaped by suggestions from user testing about what people would like to hear and see: interviews, mostly in Chinese, about the workers' jobs, lives, where they are from, and so on. People also liked the idea of seeing different faces mashed up, which conveyed both the individuality of each worker's story and a group identity representing the workers as a whole, whom NYUSH students often overlook and generalize. We chose a radio to represent the workers' stories, with a wired-in button controlling an array of sound files from interviews in which we asked about their everyday lives at work and outside of school. The one issue we did not account for was that once our class saw and heard what the radio did, they used only the radio and disregarded the card swiper and the house and key for a few minutes. The second element was the card swiper, which included a 2D laser-cut keycard designed to look like a worker's NYUSH card. Each time a new "worker" swiped in, the card randomized the faces in the Processing sketch. This element was meant to bring a real piece of the workers' routine into the interaction and associate it with our school and its staff. The last physical element was a house and key: when the user inserted the key, audio about the workers' families, homes, and hometowns played, providing a personal connection and deeper background on each worker.
This third element was directly shaped by feedback from user testing: people wanted deeper background on each person, to understand their identity and not just their face. During prototyping we used cardboard and real gloves for the original project, but after we changed ideas we had little time to prototype, so we went straight to laser cutting a box for the radio and a keycard for the swiper. We built the card swiper itself from cardboard, clear tape, and black paint, with a button at the bottom that sent a 1 to the Arduino every time the keycard applied pressure. For the house and key we used my room key and a laser-cut box painted as a house. We believed laser cutting would give us a fast, clean, professional-looking product while still letting us modify the final look, painting and adding details to transform the box into an old-time radio. We rejected laser cutting the card swiper because it would have required too many pieces and made it too complicated to add the pressure plate; instead we opted for cardboard and tape, which still gave a fairly finished look with a much quicker assembly process. The flexible cardboard also made it easier to reach the button in the bottom of the swiper. For the house and the radio, the laser-cut boxes were cleaner, and we could glue every side but one for easy access to the Arduino and the switch inside the house.

FABRICATION AND PRODUCTION:

Full Sketch:

sketch of design

In fabricating and producing our final project we went through a lot of trial and error to get the two pressure plates (a technique we learned from Building a DIY Dance Dance Revolution) to work as planned. Both the house and the card swiper had switches in them made from two wires, tinfoil, and cardboard. Each wire attached to a piece of tinfoil, and a cardboard buffer kept the two pieces of foil from touching; when the keycard or key applied pressure, the foil connected and sent a 1 to the Arduino. Making a buffer that was not too thick, so that light pressure would still trigger the button, took several attempts. Building the radio box and the house was the easiest part, as the laser cutting went well and all the pieces lined up. User testing completely changed the fabrication and physical output of our final product: although the code for the first and second projects is quite similar apart from the sound arrays, the physical side changed completely. Our design and production were mostly driven by making the project accessible and by connecting the physical objects to the meaning of the piece more directly. We created the house and the radio to represent the workers' backgrounds and stories, and we centered the meaning of the project on getting to know the workers of our school who are often overlooked. The card swiper, house, and radio were also accessible to audiences of any height, which is why we removed the live webcam. I believe these changes connected the meaning of the project to the physical interaction and matched the interface to the output, especially the card swiper with the faces and the radio with the stories.
Where our project could continue to progress is language accessibility: compared to most other projects, ours was geared toward Chinese speakers and learners, and it would benefit from adding subtitles to the pictures, as Tristan suggested during our presentation. The printed biographies carried useful information, but the paper interface did not match the digital Processing display.

Working Card Swiper: photos change as card swipes

Working House Key: as the house key is fully inserted into the door lock the sound array of background information from different workers plays.

The Radio: as the radio button is pushed, a random interview clip plays in which a worker explains her working conditions and how long she has lived in Shanghai.

processing display

CONCLUSIONS:

Although our project and its meaning changed a lot throughout this process, our final goal was to share the stories of the workers and aiyis at our school who are often misunderstood, overlooked, and even ignored. Many students, both Chinese and international, don't have the chance, or don't make the effort, to get to know the workers. We wanted to create an easily accessible interface for NYUSH students and faculty to hear the staff's stories told in their own voices, along with the familiar faces that people often recognize but don't really know. Through interviews with many different workers at the school, including aiyis, Lanzhou workers, and Sproutworks workers, we hoped to share their stories and their faces with our audience. Under our original definition of interaction from What Exactly is Interactivity?, our project used input, processing, and output, with two actors in each interaction. The input was pressing a button or using the card or key; the processing happened in the Arduino and Processing code, which communicated over serial; and the output was the sound clips and the changing faces. Beyond that definition, our project also created a larger interaction that made people experience and think about what these workers are saying and what their stories are, hopefully learning their names and a bit about them. We hope the interaction included not just pushing the buttons and using the key and card, but also understanding the workers' stories and the broader message through an immersive sound experience.
This project had no major user testing beyond a few people we found, because our final project changed completely after User Testing, but the interaction by our audience was mostly as expected: people used the different elements of our project to hear many audio interviews with different workers and seemed eager to keep listening and using the face changer. Once the audience used the card swiper and the key they became more intrigued and continued to use each element, although it took a while for them to switch to elements other than the radio. Overall, I would take many of the suggestions we heard to improve the project, including adding an English element and differentiating the buttons to help the audience understand that there are three different options. I would also like to make the piece more experiential and interactive beyond buttons, perhaps letting people click on faces or swap them on a touch screen to hear the different stories, though this idea is not fully worked out. Because of the setback of our first project, which we discovered in user testing, I have learned to sketch, design, prototype, and fabricate much faster and more efficiently, which is an important skill. I have also learned the value of enjoying your project and its message: the first project's failure was probably partly due to my not understanding its purpose, while the second was much more successful because I enjoyed working on it and understood its meaning. The "so what" of our project is that NYUSH students should not overlook the staff who work tirelessly to keep the building running; beyond that, the workers should be recognized for their work and their stories, as students often see them as just workers and not full people.
One of the most interesting things I learned while conducting interviews was that all but one interviewee came from a province other than Shanghai, which means many of these workers are not only separated from their families but also deal with the harsh realities of China's hukou system.

Arduino Code:

// IMA NYU Shanghai
// Interaction Lab
// For sending multiple values from Arduino to Processing

void setup() {
  Serial.begin(9600);
  // the tinfoil switches are read as plain digital inputs
  pinMode(9, INPUT);
  pinMode(7, INPUT);
  pinMode(8, INPUT);
}

void loop() {
  int sensor1 = digitalRead(9);
  int sensor2 = digitalRead(7);
  int sensor3 = digitalRead(8);

  // keep this format
  Serial.print(sensor1);
  Serial.print(","); // put comma between sensor values
  Serial.print(sensor2);
  Serial.print(",");
  Serial.print(sensor3);
  Serial.println(); // add linefeed after sending the last sensor value

  // too fast communication might cause some latency in Processing;
  // this delay resolves the issue.
  delay(100);
}

Processing Code:

// IMA NYU Shanghai
// Interaction Lab
// For receiving multiple values from Arduino to Processing

/*
 * Based on the readStringUntil() example by Tom Igoe
 * https://processing.org/reference/libraries/serial/Serial_readStringUntil_.html
 */

import processing.serial.*;
import processing.video.*; 
import processing.sound.*;
SoundFile sound;
SoundFile sound2;

String myString = null;
Serial myPort;


int NUM_OF_VALUES = 3;   /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;      /** this array stores values from Arduino **/
int[] prevSensorValues;


int maxImages = 7; // Total # of images
int imageIndex = 0; // Initial image to be displayed
int maxSound= 8;
int maxSound2= 10;
boolean playSound = true;
// Declaring three arrays of images.
PImage[] a = new PImage[maxImages]; 
PImage[] b = new PImage[maxImages]; 
PImage[] c = new PImage[maxImages]; 
// the sound clips live in ArrayLists so random clips can be fetched with get()
ArrayList<SoundFile> d = new ArrayList<SoundFile>();
ArrayList<SoundFile> e = new ArrayList<SoundFile>();

void setup() {

  setupSerial();
  size(768, 1024);
  prevSensorValues = new int[NUM_OF_VALUES];

  // load all the sound clips and images into their arrays
  // (all files must be in the sketch's data folder)
  for (int i = 0; i < maxSound; i++ ) {
    d.add(new SoundFile(this, "family" + i + ".wav"));
  }
  for (int i = 0; i < maxSound2; i ++ ) {

    e.add(new SoundFile(this, "fun" + i + ".wav"));
  }
  for (int i = 0; i < a.length; i ++ ) {
    a[i] = loadImage( "eye" + i + ".jpg" );
  }
  for (int i = 0; i < b.length; i ++ ) {
    b[i] = loadImage( "noses" + i + ".jpg" );
  }
  for (int i = 0; i < c.length; i ++ ) {
    c[i] = loadImage( "mouths" + i + ".jpg" );
  }
}


void draw() {
  updateSerial();
  // printArray(sensorValues);
  // draw the three face sections stacked vertically
  image(a[imageIndex], 0, 0);                // eyes
  image(b[imageIndex], 0, height/2);         // nose
  image(c[imageIndex], 0, height*656/1024);  // mouth

  // card swiper: pick a new random face
  // (a, b, and c all hold maxImages entries, so one index covers all three)
  if (sensorValues[2] != prevSensorValues[2]) {
    imageIndex = int(random(a.length));
  }
  // house key: play a random clip about family and hometowns
  if (sensorValues[1] != prevSensorValues[1]) {
    int soundIndex = int(random(d.size())); // pick a random clip from the array
    sound = d.get(soundIndex);              // just like d[soundIndex]
    if (playSound) {
      sound.play();
      // a sensor change fires on both press and release, so toggling
      // the boolean plays one clip per full press
      playSound = false;
    } else {
      playSound = true;
    }
  }
  // radio button: play a random clip about weekends and life in Shanghai
  if (sensorValues[0] != prevSensorValues[0]) {
    int soundIndex = int(random(e.size()));
    sound2 = e.get(soundIndex); // just like e[soundIndex]
    if (playSound) {
      sound2.play();
      playSound = false; // skip the release half of the press
    } else {
      playSound = true;
    }
  }

  // remember this frame's readings for change detection next frame
  for (int i = 0; i < NUM_OF_VALUES; i++) {
    prevSensorValues[i] = sensorValues[i];
  }

}



void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[ 1 ], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----" 
  // and replace PORT_INDEX above with the index number of the port.

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}



void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n'  Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}