Recitation 9: Media Controller by Ashley Zhu

In today's recitation, we used the Arduino as a controller to drive media output in Processing. This exercise was simpler than last week's, where we worked with AtoP and PtoA using multiple values.

Steps

For my task, I used a potentiometer to control the translation and rotation of a sunset photo I took. After putting the image into the sketch's data folder, I loaded it and used the pushMatrix() and popMatrix() functions so the image rotates as the potentiometer is twisted. I looked at a few of the example codes given to us in class earlier this week and adapted the movie-rotation code to rotate my image instead. I also set the matching pin in the Arduino code so the two programs could connect.
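
Distilled from the full code at the end of this post, this is the core transform pattern: translate to the pivot point first, then rotate, then draw at the origin. This is a minimal stand-alone sketch of that pattern; it assumes the serial value has already been stored in valueFromArduino (0-255, as the Arduino code sends it), without the serial plumbing.

PImage img1;
int valueFromArduino;  // updated from serial in the full code below

void setup() {
  size(1086, 724);
  img1 = loadImage("SUNSET.jpeg");
  imageMode(CENTER);  // rotate around the image center, not its corner
}

void draw() {
  background(0);  // clear the previous frame so the image doesn't smear
  pushMatrix();
  translate(width/2, height/2);                            // move the origin to the pivot
  rotate(radians(map(valueFromArduino, 0, 255, 0, 360)));  // one full turn across the pot's range
  image(img1, 0, 0);
  popMatrix();
}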

Reading

Reading the article “Computer Vision for Artists and Designers” by Golan Levin made me reflect on technology and on the course of Interaction Lab as a whole, since in this class we manipulate media with technology. As computer programming develops, the ways people use it evolve as well. Where people once mostly coded hardware, or software for websites, businesses, or other mediums, today coding can also be used to display art and media. One quote in the article stood out to me: “many more software techniques exist, at every level of sophistication, for detecting, recognizing, and interacting with people and other objects of interest” (Levin). This interested me because the interaction between technology and art is made possible by exactly these developments in detection and interaction, which allow the audience to communicate with the work. I was inspired by how the article showcases many different art projects that present art in a new fashion, through technology. In my own exercise, I combined media with interaction using technology, which is interesting not only to look at but also to create, and it showed me how technology can be a medium for communicating with an audience in an amusing, modern way.

Video

Final Code: Arduino

void setup() {
  Serial.begin(9600);
}

void loop() {
  // read the potentiometer and scale the 10-bit reading down to one byte
  int sensor1 = analogRead(A0);
  int sensorValue = map(sensor1, 0, 1023, 0, 255);
  Serial.write(sensorValue);

  delay(10);
}

Final Code: Processing

// IMA NYU Shanghai
// Interaction Lab
// This code receives one value from Arduino to Processing 

import processing.serial.*;


Serial myPort;
int valueFromArduino;
PImage img1;
color mouseColor;
float mouseR, mouseG, mouseB;

void setup() {
  size(1086, 724);
  img1 = loadImage("SUNSET.jpeg");
  background(0);

  printArray(Serial.list());
  // this prints out the list of all available serial ports on your computer.

  myPort = new Serial(this, Serial.list()[13], 9600);  // index 13 matches my computer; check the printed list for yours
}


void draw() {
  // to read the value from the Arduino
  while ( myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }
  println(valueFromArduino);//This prints out the values from Arduino
  
  pushMatrix();
  translate(100, 100);  
  rotate(radians(map(valueFromArduino, 0, 255, 0, 500)));  // the Arduino sends 0-255
  
  image(img1, 100, 100, width/(valueFromArduino+1), height/(valueFromArduino+1));  // shrink as the value grows; +1 avoids dividing by zero
  popMatrix();
  
  // sample the color of the original image under the mouse
  mouseColor = img1.get(mouseX, mouseY);
  mouseR = red(mouseColor);
  mouseG = green(mouseColor);
  mouseB = blue(mouseColor);
  println(mouseR + " " + mouseG + " " + mouseB);
  set(width/2, height/2, mouseColor);  // paint that color at the canvas center
}

Final Project Process: Essay — Cindy Xie

FEEL ME, CATCH ME

  • PROJECT STATEMENT OF PURPOSE

The work I’m going to make is a game for three players: one “ghost” and two “people”. No player knows the others’ identities. The ghost’s goal is to find and identify the two “people”, while the “people” try to avoid the “ghost” and, at the same time, find their companion. Each player wears a headset; when a “ghost” and a “person” come close, they hear a rapid heartbeat, and when two “people” come close, the heartbeat they hear is gentler. In this way, players can tell who is a “person” and who is a “ghost”. The game is aimed at everyone; anyone can play it.

The inspiration for this work came from another idea of mine. I originally wanted to make a teddy bear that could produce a simulated heartbeat vibration during a remote hug, but that idea overlapped with a product called Pillow Talk, so I kept exploring heartbeat vibration and remote sensing. At the same time, there is a famous game called Dead by Daylight.

I refer to its game design. One player in that game is a killer who wants to kill all the other players, and the other players have to work together against the killer. Although my game simplifies a lot of the content, I have applied the core idea of Dead by Daylight to real life, using heartbeat simulation to give players the tension and stimulation that the virtual game cannot provide.

  • PROJECT PLAN
  • Equipment:

Arduino, an infrared emitter and receiver for Arduino, headphones, independent power supplies, speakers, etc.

  • Description:

This work needs to react differently depending on the distance between players, so I need infrared sensors to emit and receive infrared light and estimate that distance. Meanwhile, I need Processing to control the heartbeat sound. I will use what I learned in class to connect the Processing program with Arduino: Arduino handles the emitting and receiving of infrared light, that is, the distance estimation, while Processing plays the heartbeat at a rate that depends on that distance. So I will likely need the Arduino-to-Processing (A to P) code. I also need to fit the Arduino inside the headset, which means it needs a separate power source, and I need small speakers inside the headphones to play the sound.
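
As a rough sketch of the Processing side, here is one way the distance-to-heartbeat mapping could work. It assumes the Arduino writes one distance byte (0-255) with Serial.write(), and that a sample named heartbeat.wav sits in the sketch's data folder; both the protocol and the file name are placeholders, not the final design.

// Play a heartbeat sample faster as the measured distance shrinks.
import processing.serial.*;
import processing.sound.*;

Serial myPort;
SoundFile heartbeat;
int distance = 255;  // latest reading; "far away" by default
int lastBeat = 0;    // time of the last beat, in milliseconds

void setup() {
  size(200, 200);
  heartbeat = new SoundFile(this, "heartbeat.wav");
  myPort = new Serial(this, Serial.list()[0], 9600);  // pick your port index
}

void draw() {
  while (myPort.available() > 0) {
    distance = myPort.read();
  }
  // close (0) -> beat every 300 ms; far (255) -> beat every 1500 ms
  int interval = int(map(distance, 0, 255, 300, 1500));
  if (millis() - lastBeat > interval) {
    heartbeat.play();
    lastBeat = millis();
  }
}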

  • Steps:
  1. Buy the equipment above on Taobao or borrow it from the lab. (DDL: 11/24)
  2. Write Arduino code to emit and receive infrared light, and build the circuit. (DDL: 11/30)
  3. Use Processing to mimic the vibration of a beating heart. (DDL: 12/3)
  4. Connect the circuit made in step 2 to the Processing code. (DDL: 12/5)
  5. Assemble the circuit into the headphones. (DDL: 12/7)
  6. Add independent power to both circuits so they work without computers. (DDL: 12/10)
  7. Test it in real life and gather suggestions from others.
  • CONTEXT AND SIGNIFICANCE

In the midterm, our project was a library noise sensor. At that time, my definition of interaction focused more on communication between humans and machines: analyzing human behavior and responding mechanically mattered more than simply receiving and outputting. After a period of study and the unique experience of the field trip, I have a deeper understanding of interaction. Early on, I found some products designed for long-distance couples, such as vibrating bracelets and message boxes. These products are not only meaningful but also have aesthetic value, and they achieve interpersonal communication through the communication between people and machines; that is, the machine becomes a bridge for interpersonal interaction.

I think this product reflects my thinking process well. First, it satisfies the most basic definition of interaction: receive, process, and output. At the same time, it has a certain playability, with entertainment value as well as practical value. It also serves as a bridge between players, letting them feel the scene of a game in real life and establish an unusual connection, one built not on language but on hearing, on feeling, and on observing others’ emotions.

Although the idea for my product originated from the game Dead by Daylight, applying the game to daily life can promote teamwork, while providing real stimulation and tension that ordinary games cannot. I think my product could be used in large team activities, such as ice-breaking events, where we could add more players and perhaps more roles. Because no player knows the others’ identities, and they can only judge who is a “ghost” or a “person” after hearing the vibration and watching each other’s facial expressions, the game helps people become familiar with each other quickly. Used in ice-breaking and similar activities, the product can deepen teamwork and help team members get to know each other fast.

Final Project Essay — Jiayi Liang (Mary)

Project Title: Animal Mirror

Project Statement of Purpose

This project is an interactive mirror that changes a person’s face into an animal image. When you first look at the mirror, it reflects your real face. After you knock on the mirror, the image gradually turns into an animal face, like the magic mirror in Snow White. The mirror customizes a unique animal-like image for every user according to their personal features, which can be used as a personal avatar.

Nowadays, many people are confused by one simple but sophisticated question: who am I? I think my project “Animal Mirror” can help confused users reach a clearer understanding of their personal identity by giving them a cute and vivid animal avatar. Different animals will stand for different features; for example, herbivores embody the women and carnivores embody the men. What’s more, the project challenges humans’ arrogant assumption that they are superior to animals: on the contrary, it uses animals to describe humans.

Project Plan

My project aims to use facial recognition technology to replace a human face with an animal figure. To make it happen, I have laid out several steps, each with its own deadline. The first step is to design basic animal images: I want to create sample animals such as rabbits, lions, wolves and cats, and randomly vary the fur color according to the user. Step 1 is due 11.23. The second step is to make the frame of the mirror; I want to laser-cut a wooden frame. Step 2 is due 11.26. The third step is to write the code for the main part: replacing people’s faces with the animal figures I have created. From my research online, the combined use of Processing and FaceOSC should let me realize what I want to do. The coding work will be complicated, so this is due 12.4. Step 4 is to make the mirror feel more like a magic mirror: I will write code so that when the sound sensor hears a knock, the Arduino sends a message to Processing and the animal face appears. Step 4 is due 12.6. In the following days I will add extra effects to the mirror, for example changing the background into a forest. The final project will be a mirror whose animal-like reflection moves along with the user, which should make the audience understand that the animal in the mirror stands for themselves.
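
As a rough sketch of step 3, this is roughly how Processing could receive FaceOSC’s data through the oscP5 library and pin an image to the face. It assumes FaceOSC is running and sending to port 8338 (its default); the file name rabbit.png is a placeholder.

// Draw an animal image over the face position FaceOSC reports.
import oscP5.*;

OscP5 oscP5;
PImage animal;
float posX, posY, faceScale = 1;
int found = 0;

void setup() {
  size(640, 480);
  animal = loadImage("rabbit.png");  // placeholder animal image
  imageMode(CENTER);
  oscP5 = new OscP5(this, 8338);     // FaceOSC's default port
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/found")) found = m.get(0).intValue();
  if (m.checkAddrPattern("/pose/position")) {
    posX = m.get(0).floatValue();
    posY = m.get(1).floatValue();
  }
  if (m.checkAddrPattern("/pose/scale")) faceScale = m.get(0).floatValue();
}

void draw() {
  background(0);
  if (found > 0) {
    // scale the animal with the detected face so it tracks the user
    image(animal, posX, posY, 60 * faceScale, 60 * faceScale);
  }
}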

Context and Significance

I was originally inspired by a Japanese cartoon called Beastars. The story mainly tells of the conflicts between herbivores and carnivores, and I think it reflects things happening in real life: the animals can stand for different social groups, such as men, women, Black people, white people and so on, a very common device in books, comics and films, for example Zootopia, Animal Farm and Maus. That gave me the idea of using animals to stand for personal identities.

Then I worked on how to deliver the concept in an interactive way. I researched several projects and found the interactive mirror very useful. One such mirror is used as a virtual dressing room in a department store: when I change position, the clothes I wear on the screen change too. This gave me the idea that a mirror could strengthen users’ empathy when they see the animal figures. When users look at the mirror, they are seeing themselves; after they do something, for example knock on the mirror, their face turns into an animal figure. With facial recognition, the animal will move vividly along with the user’s face, which makes them believe the animal is an embodiment of themselves.

The Animoji feature on iPhone lets users choose cute images as personal avatars. It tells me that people love using cartoon images to stand for themselves, which gives me confidence in my project.

Thus, based on my research, my proposal is to use a mirror that reflects people’s different social identities (black and white, male and female, tall and short, etc.) by turning their faces into animal faces.

My project aligns with our definition of interaction because there is a procedure of “input, process and output”, as Crawford claimed. The mirror interacts with the audience by seeing their faces, recognizing and analyzing them, and changing them into animal faces as output. The procedure is cyclical because it runs constantly. What is unique about my project is that it raises people’s awareness of their social identities in a funny and approachable way: by giving them a vivid animal figure, they get to know more about their personal features and their social status. Compared with Snapchat, which only uses facial recognition to decorate selfies, this project reflects something deeper about self-recognition. It is intended for every member of society, offering everyone a chance to review their identity.

For subsequent projects, I will make a kind of illustrated handbook recording the different animal images, which shows the diversity of a society. In this way, the project can teach people to accept, tolerate and respect a diverse society.

Work Cited

www.youtube.com/watch?v=UhOzN2z3wtl

www.youtube.com/watch?v=vFqOjoQcIro

Crawford, Chris. The Art of Interactive Design, pages 1-5.

Recitation 8 – Tao Wen

Exercise one:

Regarding the code, there is one thing to notice: the map() function. Without remapping the raw sensor range onto the canvas size, the circle goes out of the frame.
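
A minimal demonstration of the point: a raw analog reading spans 0-1023, which is wider than the canvas, so without map() the circle leaves the frame. Here rawReading stands in for sensorValues[0] from the class serial template.

int rawReading = 800;  // pretend this came from the potentiometer

void setup() {
  size(500, 500);
}

void draw() {
  background(0);
  float x = map(rawReading, 0, 1023, 0, width);  // squeeze 0-1023 into 0-width
  ellipse(x, height/2, 50, 50);
}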

Exercise one – Etch A Sketch:

The difference between this and the last step is that this one uses the line() function. I first tried to build the line out of tiny ellipses, but then not the whole trajectory shows up on the canvas. And when using line(), it is not enough to just connect (x1, y1) and (x2, y2), otherwise nothing meaningful is drawn: you have to connect from the previous coordinates to the current ones, just like we do with pmouseX and pmouseY.

Concerning the interaction experience, I don’t like it at all, probably because I’m not used to drawing by thinking about horizontal and vertical coordinates. However, I think it is a useful kind of algorithmic thinking when designing drawing-related projects.

Code that matters (Arduino):

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);
  int mapped1 = map(sensor1, 0, 1023, 0, 500);
  int mapped2 = map(sensor2, 0, 1023, 0, 500);
  Serial.print(mapped1);
  Serial.print(",");  // comma separator so Processing can split the two values
  Serial.print(mapped2);
  Serial.println();
}

Code that matters (Processing):

void draw() {
  updateSerial();
  printArray(sensorValues);

  println("sensor1 =", sensorValues[0]);
  println("sensor2 =", sensorValues[1]);

  // draw from the previous point to the new one, like pmouseX/pmouseY
  line(p1, p2, sensorValues[0], sensorValues[1]);
  p1 = sensorValues[0];  // remember this frame's point for the next frame
  p2 = sensorValues[1];
}

Exercise two:

The idea is to create a piano keyboard, which requires two variables: the on-and-off state and the tone, controlled by the mouse press and mouseX respectively. Since I wasn’t familiar with the coding, the interaction experience is not ideal: the five sounds, put together, are quite jarring to play. I would like to expand on this idea: the user could choose a music style (e.g. Japanese, Chinese pentatonic, Arabian), the tones of the keys would switch accordingly, and then the user (ideally an amateur) could explore and create their own melodies.

Code that matters (Processing):

void draw() {
  background(0);
  values[0] = pmouseX;  // horizontal mouse position picks the tone
  if (mousePressed) {
    values[1] = 1;      // pressing the mouse switches the tone on
  } else {
    values[1] = 0;
  }
  printArray(values);
  sendSerialData();
  echoSerialData(200);
}

Code that matters (Arduino):

int melody[] = {
  262, 349, 196, 440, 4186, 2093, 3136, 175
};
int freq;

void setup() {
  Serial.begin(9600);
  pinMode(13, OUTPUT);
}

void loop() {
  getSerialData();

  // map the 0-500 position onto the eight notes; the top index is 7, so the
  // highest value can't read past the end of the array
  freq = melody[int(map(values[0], 0, 500, 0, 7))];
  if (values[1] == 1) {
    tone(13, freq);
  } else {
    noTone(13);
  }
}

FINAL PROJECT ESSAY

Veggie-table

From my previous definition of interactivity, I decided that interactivity means not only the actual interaction, but also understanding how a machine works and what its purpose is, and leaving the interaction having learned something. That means a successful interactive project, for me, has to address a problem the user may have and attempt to fix it.

The issue I try to tackle in this project comes from people’s diets and the lack of vegetables and other healthy foods in some people’s daily intake. The target audience is people roaming around grocery stores buying their groceries. Some people walk by the vegetable section without batting an eye; my project aims to draw them in and encourage them to buy more vegetables.

My project requires simple sensors on the Arduino side and visuals from Processing. A person walks through the vegetable aisle and sees a screen with a bunch of different vegetables laid out in front of it. The screen says “Pick one up and see what happens”, and a sensor detects when a certain vegetable is picked up. The screen then displays that vegetable’s nutrition facts, as well as a bar that fills up as the customer picks up more and more vegetables. If the customer picks up a sufficient amount of vegetables, the screen says “Congratulations! You now have a balanced diet!”
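
As a sketch of how the Processing side could work, the snippet below fills a progress bar from pickup events. It assumes the Arduino writes one byte per pickup with Serial.write(1); the protocol and the goal of five vegetables are placeholder assumptions.

// Fill a progress bar as vegetables are picked up.
import processing.serial.*;

Serial myPort;
int pickups = 0;
int goal = 5;  // placeholder target for a "balanced diet"

void setup() {
  size(600, 200);
  myPort = new Serial(this, Serial.list()[0], 9600);  // pick your port index
}

void draw() {
  background(255);
  while (myPort.available() > 0) {
    if (myPort.read() == 1) pickups++;  // one byte per pickup event
  }
  float w = map(min(pickups, goal), 0, goal, 0, width - 40);
  fill(0, 180, 0);
  rect(20, 80, w, 40);  // the bar grows with each vegetable picked up
  fill(0);
  if (pickups >= goal) {
    text("Congratulations! You now have a balanced diet!", 20, 60);
  }
}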

What my project basically aims to do is give information about health and encourage better choices in an interactive and aesthetically pleasing way. The way most people learn about nutrition in grocery stores today is by reading the nutrition facts label on the back of the packaging. Besides being boring and tedious to read, some people may not even know what those nutrients are good for, or what the recommended daily amounts are. Instead of making people read the back of a label, my project will display the health benefits or drawbacks of a vegetable or an unhealthy food on the screen, while encouraging the person to buy more healthy foods rather than unhealthy ones. To make the project hit home, I plan to use live facial tracking in Processing to place the person’s head on top of a body: if the person picks more healthy foods, the body gets fitter and more health benefits of eating well are displayed, whereas the opposite happens if the person picks up more unhealthy foods.
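
The body-swapping logic could be as simple as bucketing a running score into a few states, as in this sketch; the three image names and the thresholds are placeholders.

// Choose a body image from a running health score.
PImage[] bodies = new PImage[3];
int healthScore = 0;  // e.g. +1 per healthy pickup, -1 per unhealthy one

void setup() {
  size(400, 600);
  bodies[0] = loadImage("unfit.png");    // placeholder file names
  bodies[1] = loadImage("average.png");
  bodies[2] = loadImage("fit.png");
}

void draw() {
  background(255);
  int index = constrain((healthScore + 3) / 3, 0, 2);  // bucket the score into three body states
  image(bodies[index], 0, 0, width, height);
}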

I plan to brainstorm visual ideas and sketch out the project until the 27th, then spend until December 4th creating the Processing visuals, then combine them with the Arduino sensors and the different foods until the 7th, and lastly go through final testing and last-minute changes before showing the final product.

From what I’ve researched, there does not seem to be any physical version of my project idea; there are only interactive websites that display the health benefits and drawbacks of foods in a visually pleasing way for children. Because I plan to make my project large and physical, and to target older audiences rather than younger ones with harsher and more personal imagery, I think its impact on one’s diet will be greater than that of the simple online healthy-food games I’ve found.