PROJECT PROPOSAL by Xueping

The Aviator 

The Aviator is a multiplayer interactive game that trains and tests people's coordination and multitasking abilities through an engaging one-on-one match. Players move their arms and hands to change the position of the aviator shown on screen, avoiding obstacles or collecting props, while at the same time blowing into a tube to control the aviator's speed. Buttons are pressed when players want to use a prop. The game can also be used for team-building, with each member taking on one responsibility (for example, a group of three in which one person watches the screen and gives orders while the other two, blindfolded with eye masks, control the direction and the speed).

Multitasking and team-building games are popular and needed in both schools and workplaces, but there are few device-based interactive games for small groups. Most multitasking games are online games designed for individuals (focused on eye-hand coordination), and team-building games usually need a larger area (often outdoors) and more people. With the game we designed, users can play these games indoors in pairs or small groups, and more body parts are involved in the multitasking, which increases the difficulty but also pushes users to get a little exercise. There are many existing aviator-style games, but they only train eye-hand coordination or are purely for fun (one example is linked below).

https://tympanus.net/Tutorials/TheAviator/

Doctor’s Training Kitchen

This is an interactive game that begins with the user entering their height into the device and stepping onto a scale. The computed BMI is indicated by an animated image showing the user's health status (skinny/overweight/fit). A recommended recipe then pops up on the screen, and the user can learn to cook it step by step with guidance. The user's performance (stirring, mashing, cutting, adding seasoning, etc.) is measured with sensors and taken into account to produce a final score that shows how healthy and suitable the food they made is for them.

The game is targeted at people who live on their own, especially young people who do not really know what food is good for them and are too lazy to cook or to learn to cook. Current cooking games on the market are mostly simulation games that do not teach recipes or the process of making any dish. The only game I found that involves step-by-step food making is Cooking MAMA (as in the video shown below), but everything the user does happens on the screen, so users do not learn many real skills. Since our design (if possible) uses flex sensors and force sensors to capture hand movements and actions, it resembles what people do in reality, trains users better in the cooking skills for that particular dish, and offers an evaluation of their work. It will help more young people stay healthy by helping them learn recipes that are good for them and how to cook them.

The Best Detective

This is a party game for a group of users. Photos of animals, scenes, or objects are shown on screen as dots, but they are initially covered by a dark layer that one chosen player needs to wipe away. The chosen player wipes the layer off bit by bit while the other players guess what the picture shows. The chances to wipe are limited, and only a restricted length of stroke can be wiped away each time. The winner (whoever guesses correctly) can order a victim to do one thing or answer a question, and then he or she becomes the next wiper.

There are wipe-and-guess games for individuals already, and I think it will be more interesting to put the idea in a party setting, so that collective effort is involved and the interaction and competition between players add to the experience. But since I find existing games relatively easy, and assume it will be even easier in a group setting, we changed the pictures to be presented as dots. This makes the game somewhat harder and more entertaining.

Recitation 7: Functions and Arrays by Xueping

Question 1:

In your own words, please explain the difference between having your for loop from Step 2 in setup() as opposed to in draw().

The above video shows the result when the loop is placed in setup().

It is clear that when the for loop is in setup(), the monkeys show up with random colors and positions and then never move or change. But when the loop is in draw(), since the code in draw() runs over and over, the monkeys continuously change their position, size and color. The following video shows the result when the for loop is placed in draw().
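Besides the videos, here is a minimal toy sketch (not the recitation code itself, and drawCircles() is just a helper I made up for illustration) showing the same idea: the randomized drawing runs once in setup() and stays frozen, while moving the call into draw() re-randomizes it every frame.

// Toy sketch: the same randomized drawing behaves differently in setup() vs. draw().
void setup() {
  size(400, 400);
  background(255);
  drawCircles();     // runs once: the circles are drawn a single time and stay fixed
}

void draw() {
  // Comment out the call in setup() and uncomment these two lines
  // to see new random circles appear on every frame instead.
  // background(255);
  // drawCircles();
}

void drawCircles() {
  for (int i = 0; i < 20; i++) {
    fill(random(255), random(255), random(255));
    ellipse(random(width), random(height), 40, 40);
  }
}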

Question 2:

What is the benefit of using arrays?  How might you use arrays in a potential project?

The benefit of using arrays is that they make coding much easier, with neater and shorter code, when we want to run the same function with many different values. For instance, if we want photos of different people all showing up and doing similar moves on the screen, it is easier to use arrays to make them do the same moves. When the variables are things like color, size, or position, and especially when we want random values, using arrays avoids repetitive work and saves time.

The main problem I met in doing this recitation work was how to draw the ears of the monkey face, which need semi-circles. I'm still struggling with the last two parameters of the arc() function…
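Just as a note to myself: the last two parameters of arc() are the start and stop angles, in radians, measured clockwise starting from the 3 o'clock direction. A small separate sketch (not part of the monkey code) drawing the four basic semi-circles:

void setup() {
  size(400, 200);
  background(255);
  noFill();
  arc(60, 100, 80, 80, 0, PI);                      // bottom half: 3 o'clock to 9 o'clock
  arc(160, 100, 80, 80, PI, TWO_PI);                // top half: 9 o'clock back around to 3 o'clock
  arc(260, 100, 80, 80, -HALF_PI, HALF_PI);         // right half: 12 o'clock to 6 o'clock
  arc(360, 100, 80, 80, HALF_PI, PI + HALF_PI);     // left half: 6 o'clock to 12 o'clock
}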

int monkeys = 50;
float[] x = new float [monkeys];
float[] y = new float [monkeys];

float[] speedX =  new float [monkeys];
float[] speedY =  new float [monkeys];

float[] size =  new float [monkeys];
//float[] Color =  new float [monkeys];
color[] c = new color[monkeys];


void setup() {
  pixelDensity(2); 
  size(800, 800);
  background(254);
  for(int i=0; i<x.length; i++){
    x[i] = random(width);
    y[i] = random(height);
    speedX[i] = random(-10,10);
    speedY[i] = random(-10,10);
    size[i] = random(30,200);
    c[i] = color(random(255),random(255),random(255));
  }
}

void draw() {
  background(254);
  for (int i=0; i<x.length; i++) {
    display(x[i], y[i], size[i], c[i]);
    // move the monkey
    x[i] = x[i] + speedX[i];
    y[i] = y[i] + speedY[i];
    // check the edges and bounce back
    if (x[i] > width || x[i] < 0) {
      speedX[i] = -speedX[i];
    }
    if (y[i] > height || y[i] < 0) {
      speedY[i] = -speedY[i];
    }
  }
}

void keyPressed() {
  // give every monkey a new random color and speed when any key is pressed
  for (int i=0; i<x.length; i++) {
    c[i] = color(random(255), random(255), random(255));
    speedX[i] = random(-10, 10);
    speedY[i] = random(-10, 10);
  }
}

void mousePressed() {
  // give every monkey a new random size when the mouse is clicked
  for (int i=0; i<x.length; i++) {
    size[i] = random(30, 200);
  }
}
 
void display(float u, float v, float size, color c) {
  // draw one monkey face centered at (u, v) with the given size and body color
  noStroke();
  fill(c);
  ellipse(u, v, size, size);                                      // head
  fill(255);
  ellipse(u-0.2*size, v-0.1*size, 0.5*size, 0.5*size);            // left eye patch
  ellipse(u+0.2*size, v-0.1*size, 0.5*size, 0.5*size);            // right eye patch
  ellipse(u, v+0.1*size, 0.6*size, 0.6*size);                     // muzzle
  fill(c);
  triangle(u, v+0.15*size, u-0.08*size, v+0.1*size, u+0.08*size, v+0.1*size);  // nose
  rect(u-0.3*size, v-0.12*size, 0.2*size, 0.07*size);             // left eye
  rect(u+0.1*size, v-0.12*size, 0.2*size, 0.07*size);             // right eye
  // ears, built from arcs (the semi-circles mentioned above)
  arc(u-0.5*size, v-0.01*size, 0.25*size, 0.5*size, 0, PI+QUARTER_PI);
  arc(u-0.5*size, v-0.01*size, 0.25*size, 0.5*size, PI+QUARTER_PI, 2*PI);
  arc(u+0.5*size, v-0.01*size, 0.25*size, 0.5*size, PI+QUARTER_PI, 2*PI+QUARTER_PI);
  arc(u+0.5*size, v-0.01*size, 0.25*size, 0.5*size, 2*PI, 3*PI);
  arc(u, v+0.2*size, 0.25*size, 0.25*size, 0, PI);                // mouth
}

PREPARATORY RESEARCH AND ANALYSIS by Xueping

The Chronus exhibition, like most other exhibitions of technology-based artwork, creates a much more involving environment for visitors. At first glance, the movement in the works, even without interaction, already catches people's attention and interest (perhaps because human beings are also predators, so moving things naturally catch our attention, as evolutionary theories explain). The sound and media, combined with the moving components and the installation as a whole, stimulate visitors' senses, so it creates an intense sensory experience, more than just the sight of an installation. And this kind of experience pushes people to figure out the ideas behind the work and develop their own understanding.

This is an interactive project that I really liked. When I first saw it, my laptop's speakers happened not to be working, so I thought it was a boring idea with only tumbling shapes, and that the interaction was too simple and not interesting enough. But when I got the sound back, the whole interaction became engaging and interesting, since the speed and pattern of the tumbling shapes echo the tempo of the notes being played as the user rotates the screen plate. Users can create their own piece of music and enjoy their creation with their eyes, ears and limbs (which they need to use to create the music).

The Butterfly Effect is also a successful and useful interactive project. What the designers set out to do is give people a fun, new experience based on their blowing. For me, it visualizes a person's vital (lung) capacity through a much more interesting and pleasant experience. I would definitely want this aesthetic, lively interactive device at my vital capacity test, rather than a cold machine showing numbers I don't really understand.

The two interactive digital installations shown in the video form a contrast in themselves: the first attracts more children, who interact with it, while the other attracts fewer because it shows little response to what the people on it do. The users' intention to interact is not valued in the latter, and people therefore lose interest in it. The little girl who happily interacted with the fish also tried running across the latter work to see a difference, but saw none.

I used to define interaction as a mutual reaction in which at least one party gets a personal and unique experience. By this definition, I mean that the interactive experience for the user (if it is with a device) should be involving and meaningful. An interactive experience will be successful only if it is user-centered, which means it considers the user's point of view, making it clear to them how to interact while at the same time ensuring the interaction is interesting and meaningful for them. As the idea of the “Norman Door” suggests, a door is only successfully designed when a person can almost subconsciously tell whether to push or to pull, instead of having to try or read a sign. The most common feedback we received in the user test for our midterm project was that it was hard to figure out how to interact, and the inability to form an interaction, or the resulting poor experience, ruins the interaction. So knowing who the users will be and what they really need is necessary. As Norman says, “the work starts with understanding people's needs and capabilities. The goal is to devise solutions for those needs, making sure that the end results are understandable, affordable, and, most of all, effective.” The latter work in the mall certainly fails to satisfy people's needs, as the interaction is neither comprehensible nor effective. Meanwhile, the Butterfly Effect is very clear, and if it were actually used for vital capacity tests, it would fit a social need and improve the current experience. At least from my point of view, it would be a great people-centered design with this use.

  • Norman, Don. “People-Centered (Not Tech-Driven) Design.” jnd.org, 26 July 2019, jnd.org/people-centered-not-tech-driven-design/.

Recitation 6: Processing Animation by Xueping

For the first part of the recitation, I used the work I made in Recitation 5 and tried to add some interaction to it. Basically, what I tried to do was use the keyboard and the mouse to change the color of the background.

The following are the code screenshots for this part.

Code screenshots: part 1, part 2, part 3.
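Since the actual code is only in the screenshots, here is a minimal sketch of the same idea (just an illustration, not the original code from the screenshots): the background color becomes random when a key is pressed, and depends on the click position when the mouse is pressed.

color bg = color(255);   // current background color

void setup() {
  size(600, 600);
}

void draw() {
  background(bg);
  ellipse(width/2, height/2, 200, 200);   // the drawing from Recitation 5 would go here
}

void keyPressed() {
  bg = color(random(255), random(255), random(255));     // new random background on any key
}

void mousePressed() {
  bg = color(mouseX % 256, mouseY % 256, random(255));   // background depends on where the mouse was clicked
}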

Homework part:

I learned a lot about ellipseMode while trying to do the last step, keeping my circle inside the canvas. Although it turned out in the end that the modes do not help with this particular task, I still think they might be quite useful. But I need to be very careful when changing modes, since the parameters of ellipse() mean different things under each ellipseMode.

Just to keep a reminder for myself:

Better to keep the statements that update the variables at the end of draw(), so the if-statements that limit the change are checked before the actual update (see the sketch after these notes).

If mouseX and mouseY are used as inputs but stored in variables with other names, the assignment needs to happen inside draw(); otherwise it only records the mouse position once, at the start.

pixelDensity(2);    // useful when the sketch looks blurry on a high-resolution (retina) display
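A minimal bouncing-circle sketch that follows both reminders above (the edge check happens before the update at the end of draw(), and mouseX is read inside draw()):

float x, y;                     // circle position
float speedX = 4, speedY = 3;   // movement per frame
float d = 60;                   // diameter

void setup() {
  size(600, 600);
  x = width/2;
  y = height/2;
}

void draw() {
  background(255);
  // the diameter follows the mouse, so mouseX must be read inside draw()
  d = map(mouseX, 0, width, 20, 120);
  ellipse(x, y, d, d);
  // check the edges first...
  if (x - d/2 < 0 || x + d/2 > width)  speedX = -speedX;
  if (y - d/2 < 0 || y + d/2 > height) speedY = -speedY;
  // ...then update the position at the end of the loop
  x += speedX;
  y += speedY;
}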

About ellipseMode:

The default mode is ellipseMode(CENTER), which interprets the first two parameters of ellipse() as the shape’s center point, while the third and fourth parameters are its width and height.

ellipseMode(RADIUS) also uses the first two parameters of ellipse() as the shape’s center point, but uses the third and fourth parameters to specify half of the shape’s width and height.

ellipseMode(CORNER) interprets the first two parameters of ellipse() as the upper-left corner of the shape, while the third and fourth parameters are its width and height.

ellipseMode(CORNERS) interprets the first two parameters of ellipse() as the location of one corner of the ellipse’s bounding box, and the third and fourth parameters as the location of the opposite corner.
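A small sketch drawing a 100-pixel-wide circle under each mode, to make the parameter differences concrete:

void setup() {
  size(500, 200);
  background(255);
  noFill();

  ellipseMode(CENTER);          // (x, y) = center, (w, h) = full width and height
  ellipse(100, 100, 100, 100);

  ellipseMode(RADIUS);          // (x, y) = center, (w, h) = half the width and height
  ellipse(200, 100, 50, 50);    // same visual size as the circle above

  ellipseMode(CORNER);          // (x, y) = upper-left corner of the bounding box
  ellipse(250, 50, 100, 100);

  ellipseMode(CORNERS);         // two opposite corners of the bounding box
  ellipse(350, 50, 450, 150);
}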

Plastic Sea – Xueping – Marcela

Little Light
Final Looks

CONTEXT AND SIGNIFICANCE

Whether in our group project (the Watch Alpha) or in other presentations I have seen, what is clear is that most devices are user-centered. Each individual gets a unique experience when they use the device, and that specificity is, to some extent, what really characterizes interaction: if all people received similar replies, the device would probably only be performing an encoded reaction rather than an actual interaction. As seen in the work of previous classmates, the reason some of the games they created were well received, especially the group games, is that the interaction with the device also involves the experience of interacting with other players, which makes every round a unique interaction. Interaction is more than a simple reaction to someone's act; it should have meaning, or at least practical use. The works shown in the wildlife interactive media exhibition (see the videos below), though not offering a wide range of different responses, still create great interaction and provide good feedback, because people never know what might happen or show up before they see the response, and there is more than one possible response; so for each individual, the experience is still personal and unique. Our project takes the idea of these wildlife interactive media artworks and transfers it to sea animal protection. We also make plastic use, an everyday encounter, the trigger, so that people feel more obliged to make practical changes. The primary idea was to create a saving act, like those wildlife artworks do, to form “rewarding” conditioning, but for practical reasons we ended up implementing the “killing” process to form “punishing” conditioning. The target audience is the general public, but the project is most suitable for younger students and children, as the interaction is simple and easy for them to understand, with vivid, lovely sea animals, and it is these young people who most need to realize the urgent need to protect sea animals, since they are more willing to change their behavior and have a strong influence on the people around them.

CONCEPTION AND DESIGN:

Our plan was to have users throw model trash into supposed bins inserted in the sea and then see how the sea animals' life status changes. The device is therefore designed in two parts: the sea animal condition part (the reacting part) and the trash sensing part (the triggering part), so the consequence of the user's behavior is very clear and obvious. DC motors with an H-bridge to change direction were chosen to control the sea animals so they can present different states. The sea animals are tied to a white acrylic tree-like model with clay decorations suggesting a coral reef, which raises the animals and makes them more visible in context, helping users understand. For practical reasons, and because of feedback from the user test, we gave up the idea of letting users throw model trash into bins, since users got confused about how to interact when they were holding model trash. Instead, we built a user-friendly cardboard vending machine, which makes it much clearer to users what they can do. Users can take any bottle from the machine and see what happens, and the bouncing (alive) vs. still (dead) movement acts as feedback. But if the device were larger and we had more time to learn the coding, it might still be a good idea to use trash throwing as the trigger: using a force sensor or weight sensor to record the accumulated weight of the different types of model trash thrown into the boxes (with a specific weight assigned according to the type each piece represents), and showing the animals' status at certain weights, is for me still an idea with better interaction, because people can try different compositions of trash and have a more personal experience. Within my understanding and previous study of interaction, I still assume people regard more personal experiences with a device as more interactive, because it suggests that what the user has done contributes more to the result, and this sense of self-value also pushes users to contribute more to the interaction.

FABRICATION AND PRODUCTION:

Because we are trying to keep people aware of sea animal protection, a small sea-life environment model needed to be created so that users get an interactive visual image. This part needed more fabrication, while the trash sensor had more to do with circuit building. We encountered a lot of failures while trying to use the 3D printer and the laser cutter. The problems with the laser cutter were mainly due to deviations in size: the machine cuts a little extra off the edges, so the pieces could not be tightly connected to each other. But this problem can easily be fixed by changing the length or size.

One panel

The major failure in our production was the bearings made with the 3D printer. Their tiny size is a possible contributor to the failure, but the fact that none of the holes came out hollow really disappointed us, and though we tried to fix the problem with a drill, it did not work very well, so we had to give up on the bearings and tie the strings directly to the motors, which leads to relatively unstable interaction because the strings intertwine. Apart from the fabrication problems, circuit building went pretty well. In the user test we realized that the rotation speed of the motors, even without the 12 V supply, was too fast, and that the sensors were not responding well inside the box, causing bad interaction because of delays and responses that could not be perceived. We fixed this by switching to slower motors and moving the sensors out into a separate external sensing space (as discussed in the design part). Water was also added to the plastic bottle props to add weight and help the sensors function better.

Sketch 1
Sketch 2

CONCLUSIONS:

The goal of our project is to raise public awareness of how plastic use and pollution harm sea animals by having the interactive device show some of the bad consequences. The project aligns with my definition in that it shows responsive results when detecting different acts; however, for me it is not yet a good interaction, because the responses are limited and the device needs to be restarted every time one loop ends. So if I had more time, I would still tend to change back to the trash throwing and weighing idea, as I think it is more thought-provoking, more interesting, and gives users a more creative individual experience, and in that sense is more interactive. Currently, the project's interaction is more responsive than interactive, although it succeeds in letting users figure out how to interact and in getting across the ideas we are promoting through their interaction with the device. The whole production process, the user tests, and the Q&A in the final presentation helped me think more from the user's standpoint and try to make the “less plastic pollution = saving sea animals” message more explicit for users. I learned to give indications of the project's main focus and implicit guidance (e.g. coral decorations indicating that the whole platform is the ocean, and a vending machine sign suggesting that taking bottles away means usage). These indications and implicit guidance are much better than having nothing at all, which confuses users, or having step-by-step instructions, which ruins the whole interaction process. Getting those hints, forming a personal understanding, and using that interpretation to interact makes users more involved in the whole interaction process and thus gives them more take-aways.

Environmental and animal protection is a globally valued, urgent issue. An educational interactive device helps visualize and animate the possible consequences of our casual daily acts and raises awareness. Since it targets teenagers and children in particular, its influence can be broad. The project broadened my knowledge of coding, circuit building and fabrication. I learned to use more components and tools during the process, although many of the attempts ended in failure. But failure, and especially negative user feedback, is what helps me learn and improve. My personal understanding of interaction was deepened through this whole creative process, and especially by the ways users actually chose to interact with the device, when I found out that users always have their own interpretations and creative thoughts. This is what sometimes ruins the interactive experience, especially when they interpret things the opposite way, but it is also the charm of interactive art, since everyone has their own experience. That is something I will consider and take more seriously when working on my next project.

References:

https://www.youtube.com/watch?v=7mvX2XYQSNk

https://pixabay.com/vectors/shark-jaws-sea-fish-ocean-white-305004/

https://pixabay.com/vectors/animal-fish-ocean-sea-seahorse-2027685/

http://poofycheeks.com/2019/05/sea-turtle-svg-dxf-png/

https://learn.adafruit.com/force-sensitive-resistor-fsr/using-an-fsr

– Class 10 – Thursday Oct 10 – DC Motor Control (Direction) / Stepper Motor Control and High Current Loads

https://www.creativeapplications.net/python/aweigh-open-navigation-system-inspired-by-insect-eyes/

http://www.smartshanghai.com/event/55029