“Who’s Ordering your Food” – Anica Yao – Marcela

Project Name: Who's Ordering Your Food?

Partner: Xueping

Final Creation

Project Photo

Final Show Ongoing ~

Code: Arduino + Processing

Conception and Design:

The interaction of our project consists of two parts: Processing (a series of scenarios) and Arduino (a menu made of physical buttons).

In this project, the users are free to explore by themselves, and the final feedback/report they get is based on what they choose. In our proposal, we wanted to make an educational device that teaches people to eat a healthy diet by balancing their nutrient intake. But thanks to Prof. Marcela's feedback, we realized that idea would only appeal to a small range of audiences: people who already care about nutritional balance. People have their own standards for what a healthy diet is, so our original concept was too narrow to make the device an inspiring one.

Brainstorm

The three scenarios are designed as a comparison. In the first one, there is no outside influence: the users can choose whatever they want from the menu (someone might pick curry rice simply because it has been a while since they last had it). In the second one, there are three clips from a weekly vlog filmed by a famous YouTuber (I will put the link down below). It shows how easily people can be influenced by social media. Psychologically speaking, most people tend to follow what others do: when famous YouTubers promote their healthy lifestyles, the audience follows suit without considering whether the recipes suit them, or whether the recipes are healthy at all. In our project, people press "v" to watch the clips. In the last scenario, news and scientific reports pop up on the screen with a corresponding voiceover. It feels like being surrounded by a world full of information, both real and fake, and we tend to believe the so-called "scientific facts" even when they are not facts at all. The most important lesson we learned from the midterm project is to create a multi-dimensional experience for the user, and audio is an essential part of that most of the time. So here we use both visuals and audio to make it feel more real. The experience is like reading a newspaper or checking the daily news: you can read an article thoroughly or just skip it, but you cannot stop the information from pouring onto you. That is how we developed the third scenario. Finally, based on whether the user changed their dish during the process, we give them a piece of feedback.

For the visual design, we made a menu with buttons on top so that it feels like a restaurant. We deliberately chose a consistent cartoon style. We made some visuals (pictures and text), but due to time constraints we did not draw all of the dishes. We definitely want to finish them next time to make it more aesthetically pleasing.

We could have used more words, but we think that would be too much to take in; it would feel more like a lecture than an interactive experience in which people stay inside the "conversation" the whole time.

Fabrication and Production:

The most significant and challenging part of the production was putting all the scenarios into a single Processing sketch. We use if statements to keep track of the scenario number, delay() to make transitions, and we refresh the background in between. It is also important to decide when to display an image or play a sound: for example, it happened that a new image covered the old one while the old sound kept playing. Putting each element under the correct condition matters.
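As an illustration (not our full project code), a stripped-down sketch of the single-sketch idea might look like this, with a hypothetical scene variable and placeholder text standing in for our actual visuals and sounds:

int scene = 1;   // hypothetical scene counter: 1, 2, or 3

void setup() {
  size(800, 600);
}

void draw() {
  background(255);                      // refresh the background every frame
  fill(0);
  if (scene == 1) {
    text("Scenario 1: free choice", 50, 50);
  } else if (scene == 2) {
    text("Scenario 2: YouTube vlogs", 50, 50);
  } else if (scene == 3) {
    text("Scenario 3: news reports", 50, 50);
  }
}

void keyPressed() {
  if (key == ' ' && scene < 3) {
    scene = scene + 1;                  // advance to the next scenario
  }
}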

During the user test, we had only finished the third scenario. We received lots of helpful feedback from classmates and professors:
(1) Better to have clearer instructions. (This is actually a tricky one. We want to make things clear, but not too obvious, or the experience becomes responsive rather than interactive. So later we added some hints and notes.)
(2) The information given may be overwhelming. (We had thought about this problem, which is why we also added the voice; it makes the scenario more realistic, too. Later, people told us it felt better because more visuals, more elements (sound, video, image), and more interactions were involved.)
(3) The menu could be better designed. (We think so too. If we had more time, I would make it more like a sheet or a book, or add more decoration to the box we have.)
(4) Make it a little game. (That is also a very good starting point.)
(5) Create a replay/reset button so that the user can play it again. (Yes! We later added that on the report page; see the sketch after this list.)
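Here is a minimal sketch of the replay/reset idea, assuming a hypothetical scene counter and "finished" flag rather than our actual project code:

int scene = 1;
boolean finished = false;

void setup() {
  size(400, 300);
}

void draw() {
  background(255);
  fill(0);
  if (finished) {
    text("Report page - press 'r' to replay", 40, height/2);
  } else {
    text("Scenario " + scene + " - press SPACE to continue", 40, height/2);
  }
}

void keyPressed() {
  if (!finished && key == ' ') {
    if (scene < 3) {
      scene = scene + 1;     // step through the three scenarios
    } else {
      finished = true;       // show the report page
    }
  } else if (finished && key == 'r') {
    scene = 1;               // replay: go back to the first scenario
    finished = false;
  }
}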

All the feedback was very helpful for our subsequent production decisions. We made some improvements:

We try to put the user into a conversation, in which he or she makes choices not only for himself or herself but also for the girl in the story, Lisa. When the user makes decisions for that girl, it also suggests that it is not you but social media that influences your food decisions. Besides, a conversation like this can be part of daily life, so the users quickly become familiar with what is going on. We want the conversation to make the scenarios feel like storytelling rather than a "lecture".

For the fabrication part, we used the laser cutter to make the box. The dishes are made of cardboard with pictures on top.


Conclusions:

The goal of our project is to convey that our food choices, even when we are trying to live healthily, can be easily affected by social media. You are the receiver of all kinds of information, and you think you are making your own decisions. But think about it: are you ordering your food, or is someone else ordering it for you? Are you really deciding independently? This project was never meant to tell people the best nutritional balance for staying healthy. Instead, every user gets a personalized experience from the device. Based on the report at the end, if you stuck with your original choices, that is a good thing, unless you chose junk food all three times, in which case you are reminded to consider a healthier alternative. As we observed, most people easily changed their minds after receiving the information in the scenarios. We hope this project makes them realize the invisible power of social media.

Our project generally aligns with my definition of interaction: a process in which an actor receives and processes information from another through a certain medium and then responds accordingly. That is enough to describe a basic interaction, but to be a successful one, in my opinion, the experience should (1) be self-explanatory, clear, and obvious, (2) put the user in a continuous loop of responses, and (3) be multi-dimensional, with visuals, audio, and other elements involved so that the user is more engaged. I think we still need to make our project more self-explanatory. That could be achieved through other forms of interaction, such as recognizing the user's gestures, which may be closer to daily life, or by providing more hints rather than just text. I think we did well on (2) and (3), but there is still room for improvement.

Since the users need to stay focused on the scenarios, I am glad to see they were more than willing to navigate all the way through. Although they sometimes got confused about which key to press next, and we did not have a very detailed, personalized report at the end (we expected the feedback to be based on every food combination, but due to technical constraints we found that difficult to realize), some of our friends said they really saw the difference and the improvements we made after the user test.

What we learned from our setbacks and failures is that we need to think thoroughly about the ideas we want to convey rather than only the particular techniques for interaction. But once we start building the interactions, we easily neglect details: for example, we exported a video in the wrong size/format, or forgot a bracket. Therefore, we also need to leave time for these possible mistakes, not just the main parts.

Another thing we reflected on is how to convey the information more explicitly and quickly. So we designed a plan B for the third scenario: the user chooses whether to go through every piece of news or simply skim the headlines to get the gist. The latter is meant to create the feeling of an information explosion, mimicking a realistic environment filled with all kinds of information: all the news items and big headlines keep moving while the voiceover is a combined soundtrack of different voices reading different news and slogans. In this case, even if users are not patient enough to read the news one by one, they still get the main ideas quickly. A rough sketch of the moving-headlines idea is below.
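This sketch uses placeholder strings, not our actual news clips or soundtrack; it only shows how headlines could scroll at different speeds:

String[] headlines = {
  "New study: skipping breakfast helps you lose weight?",
  "Influencer shares her 800-calorie day",
  "Experts warn against detox teas"
};
float[] xPos = new float[3];

void setup() {
  size(800, 400);
  textSize(24);
  for (int i = 0; i < headlines.length; i++) {
    xPos[i] = width + i * 250;          // stagger the starting positions
  }
}

void draw() {
  background(255);
  fill(0);
  for (int i = 0; i < headlines.length; i++) {
    text(headlines[i], xPos[i], 100 + i * 100);
    xPos[i] -= 2 + i;                   // each headline scrolls at its own speed
    if (xPos[i] < -textWidth(headlines[i])) {
      xPos[i] = width;                  // wrap around and keep scrolling
    }
  }
}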

To conclude, we want people to realize the influence of social media on their decision making. Studies show that when we order food, we are affected or misled by many external factors, and social media is the major one in today's world. If we blindly follow whatever others eat or whatever a so-called scientific report says, we may run into food-safety and health issues. The prevailing pop culture, including live streams and articles, profits mostly by driving traffic and catching people's eyes. Some of it is factual, and some really is not, yet consumers tend to believe whatever they see or hear. Blindly following social media may lead to eating disorders, obesity, or heart disease. In our project, by asking people "who's ordering your food", we want them to think twice about their food choices, and about their decision making in general.

Recitation 10: Object Oriented Programming workshop – By Anica Yao

Reflection

I wanted to enhance my skills and then apply them to my final project, so I chose this OOP workshop. First, we created a class called "Emoji" in another tab. In this class, we defined the variables, the constructor Emoji() (later I learned it is written as just "Emoji" because a constructor shares the class's name; you can think of it as a shortened form of "Emoji Emoji()"), display(), and move(). After that, in the first tab, we created each new emoji in the setup() function and used them in the draw() function. To improve the code, we added a mouse-press interaction and put all the emojis in an ArrayList, which has a different format from the arrays I learned before.
After class, I realized this approach works well when we need to draw many things that follow the same pattern. Since the pictures in our final project do not all behave the same way, we may not need it there. Still, I made some improvements to the code.

Processing Codes

ArrayList<Shape> shapeList;

void setup() {
  size(600, 600);
  background(255);
  shapeList = new ArrayList<Shape>();

  //draw the new shape
  for (int i=0; i<50; i++) {
    shapeList.add(new Shape(random(width), random(height), color(0, 0, random(255),200)));
  }
}

void draw() {
  //background(255);  
  for(int i=0; i<shapeList.size();i++){
    Shape temp = shapeList.get(i);
    temp.display(); // same idea as accessing emojiList[i] in the workshop example
    temp.move(); 
  }
}

void mousePressed(){
  // map the mouse position into the central area of the window
  float x = map(mouseX, 0, width, width/4, 3*width/4);
  float y = map(mouseY, 0, height, height/4, 3*height/4);
  shapeList.add(new Shape(x, y, color(random(255), random(255), random(255))));
}
// Second tab: Shape
class Shape {
  //only define them w/o value. The default values are all 0 here
  float x, y;
  float size;
  color clr;
  float spdX;

  Shape(float startingX, float startingY, color startingColor) { // constructor: shares the name of the class
    // use parameter names different from x and y so they don't shadow the fields above
    x = startingX;
    y = startingY;
    size = random(50, 200);
    clr = startingColor;
    spdX = random(-10, 10);
  }

  void display() {
    noStroke();
    fill(clr);
    square(x,y,size);
  }

  void move() {
    x += spdX;
    if (x<0 || x>width) {
      spdX = -spdX;
    } 
  }
}

Final Creation

The sketch draws blue squares moving horizontally; when they touch the edges, they bounce back. When you press the mouse, a new square with a random color pops up and also begins to move.

Recitation 9: Media Controller – by Anica Yao

In this project, I connected a button/switch to a video of a subway. Pressing and holding the button plays the video; otherwise, the video pauses. There are three points I needed to pay attention to:
(1) Before playing the video, I need to declare the Movie object and load the file first, and, if necessary, draw the frame.
(2) At first I could not show the first frame (the screen was all black); thanks to Jintian's help, I learned that I need to draw the first frame in setup().
(3) The video did not play smoothly. After I changed the value of delay(), it played better.
In the article "Computer Vision for Artists and Designers", I learned that computer vision matters not only in the physical world but also in multimedia authoring tools. In my project, I only sent a single value from Arduino to Processing to control whether the video plays or pauses. But with multivariable serial communication, it is possible to use physical components to adjust both visuals and audio (e.g., the tint of the image, or the speed, frequency, and volume of the sound). Besides, capturing and processing the video's pixels might also be a worthwhile method to start with. In my opinion, Processing is more a bridge than a destination: it should process the physical information from the video and create computer vision beyond the original content, which carries more of the art and the interactive experience.

Processing Codes: 

// IMA NYU Shanghai
// Interaction Lab
// This code receives one value from Arduino to Processing 

import processing.serial.*;
import processing.video.*;
Movie myMovie;

Serial myPort;
int valueFromArduino;


void setup() {
  size(1000, 600);
  myMovie = new Movie(this, "Pexels.mov");
  //myMovie.play();

  if (myMovie.available()) {
    myMovie.read();                      // read the first frame of the file
  }
  myMovie.play();                        // play the video
  image(myMovie, 0, 0, width, height);   // draw the first frame

  printArray(Serial.list());
  // this prints out the list of all available serial ports on your computer.

  myPort = new Serial(this, Serial.list()[ 7 ], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----" 
  // and replace PORT_INDEX above with the index number of the port.
}


void draw() {
  // to read the value from the Arduino
  while ( myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }
  println(valueFromArduino);//This prints out the values from Arduino
  background(valueFromArduino); // only reads 0-255. if over range it reset to 0 again

  if (myMovie.available()) {
    myMovie.read();
  }
  if (valueFromArduino == 1) {
    myMovie.play();
  } else {
    myMovie.pause();
  }
  image(myMovie, 0, 0, width, height); // draw the current frame
}
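As a follow-up to the reflection above about multivariable serial communication, here is a small sketch (not part of the recitation code) that uses mouseX and mouseY as stand-ins for two Arduino values to control both the tint and the volume of the same clip:

import processing.video.*;

Movie myMovie;

void setup() {
  size(1000, 600);
  myMovie = new Movie(this, "Pexels.mov");   // same clip as above
  myMovie.loop();
}

void draw() {
  if (myMovie.available()) {
    myMovie.read();
  }
  float brightness = map(mouseX, 0, width, 0, 255);   // stand-in for one sensor value
  float loudness = map(mouseY, 0, height, 1, 0);      // stand-in for another sensor value
  tint(brightness);                // visual control: darken or brighten the frame
  myMovie.volume(loudness);        // audio control: quieter toward the bottom
  image(myMovie, 0, 0, width, height);
}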

Notes:
Video by Danilo Obradović from Pexels

Final Project Essay – by Anica Yao

Project name: Who's ordering your food?

Partner: Xueping

Project Statement of Purpose:

This project targets anyone who wants to improve their diet. Through it, we expect users to realize how easily we can be affected and misled by our surroundings, such as social media and the people eating with us. Initially, we got the inspiration from observing that many classmates care about their diets but do not have an overall sense of what they should eat. They like watching YouTubers share their dieting experiences or looking for quick tutorials on TikTok, many of which are sponsored by businesses and published purely for promotion. Some friends are also used to checking the calories whenever they buy food.

The first piece of research was about choosing the proper target audience. In "Gender differences in food choice: the contribution of health beliefs and dieting", the results show that "women were more likely than men to report avoiding high-fat foods, eating fruit and fiber, and limiting salt (to a lesser extent) in almost all of the 23 countries. They were also more likely to be dieting and attached greater importance to healthy eating." The authors also conclude that "further research is needed to understand the additional factors that could promote men's participation in simple healthy eating practices." So we may choose women as the primary audience, considering that they are more exposed to this side of social media. But in general, the project aims to raise awareness of reasonable food choices for both men and women. After all, social media can also provide some genuine health inspiration.

The second piece of research (in Chinese) analyzes the weight-loss industry in China. It finds that many businesses promote their products in various forms. Some of them are effective in the short run but turn out to be harmful to our health in the long run (e.g., digestive disorders). More importantly, many consumers are not actually overweight; they just think they are not skinny enough compared to famous models. This research gave us insight into how fake news misleads consumers, and how consumers make food choices based simply on visual impressions.

We did other related research as well. Since it covers similar topics, and due to space constraints, I will put the links at the end of the essay.

Project Plan:

Generally, we want to build three scenarios in Processing, and the users are expected to make their own food choices with Arduino. At the beginning, a girl (a comic character) sits alone in a restaurant. She is looking at the menu and about to order. Possible interactions here are flipping the pages or talking with the waitress. Then she can press the buttons that represent various dishes.

After the opening scene, the user enters the first scenario, in which dieting videos by some YouTubers are played. We are considering adding a captured webcam picture/live video, or showing the girl's reaction on screen. After that, the user is asked to make the food choice again. To give the users some psychological cues, feedback is sent in the form of a voice message or images according to the food they choose. (Here we can create an array so that the notes are not given at random but are determined by each person's choice; see the sketch below.) The second and third scenarios are similar; they focus more on the influence of news/reports and of the person you eat with. For the week after Thanksgiving break, we plan to have all the video clips ready (and the arrays created, if possible) and do the fabrication work; the week after that, we will start on the coding.
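A minimal sketch of that feedback-array idea, with hypothetical placeholder notes and a stand-in choice index, could look like this:

String[] notes = {
  "You kept your own choice - nice!",
  "Did you pick the salad because the vlogger did?",
  "Junk food three times in a row - maybe try an alternative?"
};

int choice = 0;   // stands in for the index computed from the user's button presses

void setup() {
  size(600, 200);
  textSize(18);
}

void draw() {
  background(255);
  fill(0);
  text(notes[choice], 40, height/2);   // the note is looked up, not chosen at random
}

void keyPressed() {
  choice = (choice + 1) % notes.length;   // cycle through the notes for testing
}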

Context and Significance:

After doing the preparatory research and analysis, I find that to create a successful interactive experience, the project should (1) be self-explanatory, clear, and obvious, (2) put the user in a continuous loop of responses, and (3) be multi-dimensional, with visuals, audio, and other elements involved so that the user is more engaged. Also, reflecting on our midterm project, we decided to use various media like sound or touch to enrich the user experience. Our current idea is built on the first one in our project proposal, "Doctor's training kitchen". As discussed with Prof. Marcela, although it has educational significance, we need to leave more space for the users to discover the meaning by themselves, rather than directly feeding it to them, in which case the interaction is not fully involved. Although we generally give a note whenever the user makes a new food choice, the notes differ from user to user, creating a "personalized experience". Through this project, we hope more people can realize how their food choices are influenced by their surroundings, as well as the potential harms, like digestive disorders or malnutrition, driven by social media.

Notes and Resources:

  1. https://www.ncbi.nlm.nih.gov/pubmed/15053018
  2. https://www.healthline.com/health/social-media-choices#discussion-vs.-isolation
  3. https://yourhealthjournal.com/the-influence-of-media-on-our-food-choices/
  4. https://wenku.baidu.com/view/1e3c8503bed5b9f3f90f1c2c.html (Chinese)
  5. http://www.xinhuanet.com/fortune/2018-06/27/c_1123040998.htm (Chinese)
  6. https://health.clevelandclinic.org/can-social-media-influence-what-your-child-eats/
  7. https://link.springer.com/article/10.1007/S10964-012-9898-9
  8. https://link.springer.com/article/10.1176/appi.ap.30.3.257

Recitation 8: Serial Communication by Anica Yao

Exercise 1: Make a Processing Etch A Sketch

In this exercise, the values of two potentiometers are transmitted from Arduino to Processing. At first, what I drew was a series of dots, each appearing at the assigned (x, y) position. In the process, I found Processing could not respond quickly, so I adjusted the delay() and it got better. Then I tried to draw continuous lines, just as in an Etch A Sketch. The challenge was how to keep track of the previous position; I ended up storing it before calling updateSerial(). It works well, and it is also quite fun.

The schematic: 

The changes in the code:
– In Arduino:

  delay(100);        // delay in between reads for stability

– In Processing:

int NUM_OF_VALUES = 2;   /** important. YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/


float px;
float py;

void draw() {
  // remember the previous position (scaled to the window) before updating
  px = sensorValues[0]/2;
  py = sensorValues[1]/2;
  updateSerial();           // important: read the newest values from Arduino
  printArray(sensorValues);

  strokeWeight(5);
  stroke(255);
  // draw a segment from the previous position to the new one
  line(px, py, sensorValues[0]/2, sensorValues[1]/2);
}

What I noticed is that I need to convert the range of sensorValues from 0-1023 to the sketch's coordinate range (e.g., width and height) so that the drawing does not go off the edge. A sketch of how draw() could look using map() is below.
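For example, assuming the same template variables as above (sensorValues, px, py, and updateSerial()), draw() could scale the readings with map() like this:

void draw() {
  updateSerial();
  // scale the raw readings (0-1023) into the window's coordinate range
  float x = map(sensorValues[0], 0, 1023, 0, width);
  float y = map(sensorValues[1], 0, 1023, 0, height);
  strokeWeight(5);
  stroke(255);
  line(px, py, x, y);
  px = x;   // remember this point as the start of the next segment
  py = y;
}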

Here’s the final creation:

Exercise 2: Make a musical instrument with Arduino

This one was relatively easy compared to the first exercise. I did not use potentiometers in this circuit. Instead, I used mouseX as the frequency of the buzzer and mouseY as the duration.

Here’s the schematic:

Here’re the changes I made in the codes:

In Processing:
  values[0] = mouseX;
  values[1] = mouseY;
In Arduino:
  tone(9, value[0],value[1]);

Final creation: