Final Project–The epic of us–Ketong Chen–Inmi

  For this final project, I teamed up with Tya to build "The epic of us". I learned a lot during the process of making the project, and I want to thank my professor, my best partner Tya, and all the assistants who gave me a lot of help.

CONCEPTION AND DESIGN:

  Our project is a board game in which two players role-play the leaders of two civilizations, and we wanted them to be tempted to attack each other for the sake of their country's development. We first thought about using an FSR sensor to build a device for the user to hit, with the force of the hit determining the damage to the other civilization. To use the FSR sensor, we needed to figure out how to store the maximum value it senses during a certain period of time. But we later found the sensor hard to control (the same problem we had met during the midterm project). Since we already had two buttons for the players to begin the game, we decided to use the buttons for all interactions: the number of times a button is pressed determines the degree of damage.

  To make the game more engaging, we put 48 LEDs on our board, which nearly drove us crazy. We first tried to use a chip called 74HC595 to control 8 LEDs with 3 pins from the Arduino (since we needed 48 LEDs and there were not enough pins), but after several days of struggling, it did not work. Though really discouraged, we still wanted LEDs on the board to show the steps of the players. Finally, we used an Arduino Mega, which has 54 digital pins in total, to connect the LEDs. When connecting the LEDs, the Mega did not work at first for some reason; after asking a fellow, I learned that I should not connect pins 0 and 1 to LEDs, since they are used for serial communication between the Mega and the computer. Also, the LEDs were not stable and the wires kept falling off, so we had to check and fix them frequently.

  As for the material of the board, we first used cardboard, but we were not satisfied with it because it was a little soft and not delicate enough. In the end we used wood, and to make the board larger, we stuck two wood boards together.
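The two pieces of logic described above, storing the peak FSR value over a window and turning a button-press count into damage, are small enough to sketch. Here is an illustrative version in plain Java (the real project would run as Arduino code); the method names and the damage cap of 10 are my own choices for the example, not the project's actual values.

```java
// Sketch of the sensor/button logic described above.
// maxReading: track the peak FSR value over a sampling window.
// damageFor: map a button-press count to a damage value.
public class GameLogic {
  // Return the largest sample seen in the window (0 if none).
  public static int maxReading(int[] samples) {
    int max = 0;
    for (int s : samples) {
      if (s > max) max = s;
    }
    return max;
  }

  // Each press adds one damage point, capped at a hypothetical 10.
  public static int damageFor(int presses) {
    return Math.min(presses, 10);
  }
}
```

On a real Arduino, `maxReading` would instead run incrementally inside `loop()`, comparing each `analogRead` result against the stored maximum.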

FABRICATION AND PRODUCTION:

  We laser cut many pictures onto our board to decorate it and to show the development of human societies. Converting all the pictures into a form the machine can recognize was a very annoying job. I asked a fellow for help, and the link here was very useful. Due to version differences between Illustrator and Photoshop, I first had difficulty deleting the white color in the pictures, but I later figured it out. During user testing, because we had failed to figure out the 74HC595 chip, we did not yet have LEDs to show the steps of the players, so we had to use two physical characters and let the users move them by themselves. The problem was that the game board was separate from the computer screen, so when users moved their characters they had to turn away to look at the board, which caused them to miss some instructions on the screen. We were therefore determined to add the LEDs and combine the screen and the board into a better game, so we later laser cut a bigger board with holes in it to hold the LEDs. Also, while watching users play the game, we found that the instructions changed too quickly for someone playing for the first time, so we later made some adjustments to make them more readable.

CONCLUSIONS:

  My earlier definition of interaction was a cyclic process that requires at least two objects (animate or inanimate are both accepted), each with its own input-analyze-output process, where the whole process should be meaningful. Our project aims to make people aware that a better way to develop is to collaborate rather than fight each other; if people fight over the resources they want, they will go astray together. Since it is a board game, it has a cyclic process and a meaning behind it, which aligns with my definition of interaction. When people interacted with our project, they did not hesitate to attack each other, and they were surprised to see both civilizations destroyed in the end. Later, however, someone pointed out that we did not give the players a clear instruction that they could choose not to attack. We wanted people to choose between attacking and not attacking, but it turned out they never had the intention to choose. That is the point we need to think about further and improve. Usually, people will not do what you expect them to do, and that is why there is always room for improvement. If we had more time, we would make it clear that players have the choice not to attack other civilizations. From my perspective, I am happy that we conveyed our idea to people. I hope our project has brought fun and deep thoughts.

The picture for our project:

Final Reflection on “Self Censor” by Kat Van Sligtenhorst

The development of the project involved a lot of back and forth, trying to balance the strong and sometimes abrasive message and the technological aspect, which needed to be simple enough to let the interaction speak for itself. For the survey, I originally had a list of 20 statements, which I edited down to 14 after some users said that the experience stretched on a little long. The goal was for them to be relatively short and simple, both so they could hold users' attention and so they were answerable by the target demographic, most of whom could be assumed to have a basic working knowledge of the subjects. Within this process, I determined how the monitor should respond to certain answers, and the most effective ways of subtly giving users the sensation that they were being surveilled or that there were "right" or "wrong" responses. I wanted this idea to build over time to create feelings of unease and panic, without ever outright stopping a user from answering as they wished. Although the camera light comes on immediately, most users don't notice, and the most dramatic reminder of surveillance is when a live video feed is actually flashed onscreen. This made people duck out of view and even want to stop the simulation. This is the danger of self-censorship: that varying methods and degrees of conditioning will push someone to omit, adjust, or change their opinion entirely. In the final step before beginning to code, I sourced images to flash in between statements, some of which were pulled from news coverage, and others that I took myself while in Hong Kong.

As far as other design choices, I knew early on that I wanted the response mechanism to be relatively simple, something where a user can choose only between a yes button and a no button. This is, first, to mimic casting a vote, although I wrapped the ballot box in trash bags to show that an individual opinion does not particularly matter when the CCP is creating policy. This is also to show the lack of much gray area in China in terms of what is acceptable to discuss. It seems that, more often than not, issues are clearly divided into safe topics and taboo topics, and there is very much a right or wrong stance to take. I originally coded the replies with keyPressed just to get everything working, then moved to the push buttons. I also had to adjust the code so that users could only record one answer for each statement, and added a counter so that the "safe" and "warning" tallies would correspond with the responses. For the Processing interface, I used simple, black and white typewriter text as well as red flashes, photos, and live video feed (without really storing data) to, again, reinforce the "safe" or "warning" answers. The other consideration was to use more physical elements to drive the interaction, like placing little protestor or police figurines on pressure sensors in order to express an opinion. Ultimately, I decided that less was more, because I didn't want to distract from the intensity of the statements and the need to make definitive, perhaps controversial, responses to them.
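The one-answer-per-statement guard and the running "safe"/"warning" tallies can be sketched in a few lines. This is illustrative plain Java rather than the actual Processing sketch, and the class and method names are my own.

```java
// Sketch of the answer-recording logic: one answer per statement,
// with running "safe" and "warning" tallies.
public class Survey {
  private final boolean[] answered;   // has this statement been answered?
  private int safeCount = 0;
  private int warningCount = 0;

  public Survey(int statements) {
    answered = new boolean[statements];
  }

  // Record an answer; ignore repeat presses for the same statement.
  // Returns true if the answer was counted.
  public boolean record(int statement, boolean isSafe) {
    if (answered[statement]) return false;   // already answered
    answered[statement] = true;
    if (isSafe) safeCount++; else warningCount++;
    return true;
  }

  public int safeCount()    { return safeCount; }
  public int warningCount() { return warningCount; }
}
```

In the actual sketch, the same guard would sit in the button-handling code so that holding or double-pressing a button never skews the tallies.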


In the end, my project did align with my definition of interactivity as something that goes through multiple cycles of input and output between audience and product, and ultimately challenges the user in some way. Based on my observations of people using it, the experience was intuitive in its design and impactful in its message. It could be argued that the project is not interactive enough, and that the yes and no push buttons could be changed to something more meaningful or engaging. If I had more time to develop this idea, I would like to make it even more immersive, incorporating sound or light displays to draw attention to the user for “wrong” answers.

The most challenging moment of development was when my computer crashed and I lost the majority of my code. This happened on the Friday morning of user testing, which meant I was unable to get the full experience, although I did have a few people try my rewritten code on Monday and Tuesday. All in all, though, I learned a lot about the nuances of Processing, serial communication, and developing an interactive experience that blends technology and social issues in order to challenge an audience.

As I said in my earlier essay, I think this project is unique in that it addresses a particular group of people, who face all the nuances and challenges of attending the first joint US-Sino university. Our student body is in a position to both observe the affairs of China and to bring international perspectives and standards into our considerations of these issues. We have a distinct ability on our campus to discuss and debate topics that are taboo in wider Chinese society. Therefore, my goal was to take real-world current events and issues that are of huge concern to students in our position and force users to reconsider not only why they believe what they do, but how strong those beliefs really are when they are challenged, explicitly or implicitly. Watching people go through the survey, I believe my project did achieve its goal.

Final Project- Emily Wright

Map(Tweeter)

Purpose of Project

The idea for this project came from my experience going through the American school system. In my school district, there was no focus on current events in any of the classes, despite knowledge of current events being an important part of education. Because of this, I wished to introduce a more interactive way to present news. My original plan was to take the most current events I could find on Google and program Processing to display whatever news story I chose. While this would have worked, I wanted information that was as recent as possible. The project "Moon" by Ai Weiwei and Olafur Eliasson gave me the inspiration to include news from regular people around the world, and from this we chose Twitter as our news source. The project was targeted more toward children, as it has a very whimsical look to it. It would be best used in a classroom setting where children can actively see where in the world they are receiving news from. This builds knowledge of current events, and it also helps children further develop geography skills.

Process of Creation

Coding- 

To integrate Twitter's information into our project, we used the API that connects Twitter with Processing. We had a difficult time getting this to work. In the beginning, we could not figure out how to get permission to use the API at all. After this, it was a matter of integrating the components of the project into the API code. We had to integrate the buttons, the LEDs, and the whistle sound, and code the Processing interface to look nice. The most interesting part of using the Twitter API was that we could place any keyword we wanted into the code, and it would find the most recent tweet having to do with that keyword. This means the project could be tweaked in many ways to serve more specific purposes. We actually thought about focusing our entire project on climate, but we decided to keep the keyword as "news" in order to generate more tweets. This was the most interactive project I have made because of the program's ability to search for a keyword and then find the most recent tweet. It aligns perfectly with my definition of interaction: two parties able to receive information, think about it, and then respond. Overall, the coding proved to be the most difficult part of this project, but it produced very cool results once we figured out how it worked.
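In the project, the lookup itself went through the Twitter API library, but the core behavior, keeping only the most recent tweet that mentions the keyword, can be sketched without any network access. Everything below (the class, the method names, and the timestamp representation) is my own illustration, not the library's API.

```java
import java.util.List;

// Sketch of "find the most recent tweet matching a keyword".
// Each tweet is modeled as a (text, timestamp) pair; a larger
// timestamp means a newer tweet.
public class TweetPicker {
  public static class Tweet {
    public final String text;
    public final long timestamp;
    public Tweet(String text, long timestamp) {
      this.text = text;
      this.timestamp = timestamp;
    }
  }

  // Return the newest tweet whose text contains the keyword
  // (case-insensitive), or null if none matches.
  public static Tweet mostRecent(List<Tweet> tweets, String keyword) {
    Tweet best = null;
    for (Tweet t : tweets) {
      if (t.text.toLowerCase().contains(keyword.toLowerCase())
          && (best == null || t.timestamp > best.timestamp)) {
        best = t;
      }
    }
    return best;
  }
}
```

Swapping the keyword here is exactly what made the project so flexible: changing "news" to "climate" changes the whole feed without touching the rest of the code.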

Physical Project- 

We were originally going to create the physical enclosure by laser cutting a box, but the map we used was far too large for that. So we decided to use cardboard, which meant our original plan of the project being something you could step on would not work. This proved to be the better option, because the project would last longer through testing and presenting. After adding supports to the bottom, the project was very sturdy. The only problem was integrating the buttons and LEDs; a lot of hot glue was necessary.

Fabrication- 

Our original fabrication plan was to print a compass and have it spin with a servo motor. We had the compass printed, but when we came back for it after finishing the rest of the project, it was nowhere to be found. In a mild state of panic, we decided to use the old printed parts from our midterm project to create another compass. While we were disappointed to have lost the original, our makeshift compass did the job.

User Testing

Our physical project did not change very much after user testing. The buttons for Australia and the world did not work, so we had to fix that. The main change came in the interface the viewer saw in Processing. We originally had the webcam working so that the tweet would pop up next to the user's face. The idea behind this was to highlight the inclusivity of Twitter, that everyday people are able to voice their opinions. This was not received as well as we had hoped during user testing. We loved the feedback we received, and it definitely moved our project to a higher level. Following a suggestion, we decided to change the interface to resemble the physical map and have the tweets pop up over the continent that the user pressed. This gave the project more cohesion, and I think it paid off.

Conclusions

I really enjoyed making this project. The interaction between the user and the project was interesting because it took the familiar idea of Twitter and put it into a new kind of interaction. Our final project received very good feedback; people were interested in continuing to interact with it because of the constant updates of information. My continuation of this project would be to make the physical display more like our original idea of a carpet. I would also like to continue working with the Twitter API, to see what kinds of projects can be made with it and what other ways we can spread news.

Recitation 7 Functions and Arrays

Nov 7

Step 1:

Make a function, similar to the one here, which displays some graphic of your own design.  It should take parameters such as x position, y position, and color to display the graphic in the desired way.  Your graphic should involve at least three different shapes.  Feel free to expand this function as much as you want, and make sure to run your function. 

Step 2: 

Create a for loop in the setup() to display 100 instances of your graphic in a variety of positions and colors.  Make sure to use the display function you created in Step 1.  Then move your for loop to the draw() loop, and note the difference.

int Arrows = 100;

float posX[] = new float [Arrows];
float posY[] = new float [Arrows];
float size[] = new float [Arrows];
color c[] = new color [Arrows];

void setup(){
  size(600, 600);
  background(233,227,255);
  for (int i=0; i<Arrows; i++) {
    posX[i] = random(width);
    posY[i] = random(height);
    size[i] = random(4, 7);
    c[i] = color(random(255), random(255), random(255));
  }
  
  for (int i=0; i<Arrows; i++) {
    shoot(posX[i], posY[i], size[i], c[i]);
  }
  
}

void shoot(float x, float y, float size, color c){
 strokeWeight(3);
 fill(c);
 rect(x,y,size*10,size);
 fill(255);
 triangle(x,y-size,x,y+size*2,x-size*3,y+size*0.5);
 fill(0);
 line(x+size*8,y,x+size*10,y-size*2);
 line(x+size*8,y+size,x+size*10,y+size*2);
 line(x+size*10,y,x+size*12,y-size*2);
 line(x+size*10,y+size,x+size*12,y+size*2);
}

Step 3:

Create three Arrays to store the x, y, and color data.  In setup(), fill the arrays with data using a for loop, then in draw() use them in another for loop to display 100 instances of your graphic (that’s two for loops total).  You can use this example to help you do this.  Make sure to use the display function you created in Step 1, and if you’ve added any other parameters to your display function you should create new arrays for them as well.

int Arrows = 100;

float posX[] = new float [Arrows];
float posY[] = new float [Arrows];
float size[] = new float [Arrows];
color c[] = new color [Arrows];

float speedX[] = new float [Arrows];


void setup(){
  size(600, 600);
  background(233,227,255);
  // fill the arrays once, in setup(), as the step requires
  for (int i=0; i<Arrows; i++) {
    posX[i] = random(width);
    posY[i] = random(height);
    size[i] = random(4, 7);
    c[i] = color(random(255), random(255), random(255));
    speedX[i] = random(-3, 0);
  }
}


void draw(){
  background(233,227,255);
  // use the stored array data every frame
  for (int i=0; i<Arrows; i++) {
    shoot(posX[i], posY[i], size[i], c[i]);
    posX[i] = posX[i] + speedX[i];
  }
}

void shoot(float x, float y, float size, color c){
 strokeWeight(3);
 fill(c);
 rect(x,y,size*10,size);
 fill(255);
 triangle(x,y-size,x,y+size*2,x-size*3,y+size*0.5);
 fill(0);
 line(x+size*8,y,x+size*10,y-size*2);
 line(x+size*8,y+size,x+size*10,y+size*2);
 line(x+size*10,y,x+size*12,y-size*2);
 line(x+size*10,y+size,x+size*12,y+size*2);
}

Step 4:

Add individual movement to each instance of your graphic by modifying the content of the x and y arrays.  Make sure that your graphics stay on the canvas (hint: use an if statement).

int Arrows = 100;

float posX[] = new float [Arrows];
float posY[] = new float [Arrows];
float size[] = new float [Arrows];
color c[] = new color [Arrows];

float speedX[] = new float [Arrows];


void setup(){
  size(600, 600);
  background(233,227,255);
    for (int i=0; i<Arrows; i++) {
    posX[i] = random(width);
    posY[i] = random(height);
    size[i] = random(4, 7);
    c[i] = color(random(255), random(255), random(255));
    speedX[i] = random(-3,0);
   }

}


void draw(){
  background(233,227,255);
  for (int i=0; i<Arrows; i++) {
    shoot(posX[i], posY[i], size[i], c[i]);
    posX[i] = posX[i] + speedX[i];

    // reverse direction when an arrow reaches the edge of the canvas
    if (posX[i] > width || posX[i] < 0) {
      speedX[i] = -speedX[i];
    }
  }
}

void shoot(float x, float y, float size, color c){
 strokeWeight(3);
 fill(c);
 rect(x,y,size*10,size);
 fill(255);
 triangle(x,y-size,x,y+size*2,x-size*3,y+size*0.5);
 fill(0);
 line(x+size*8,y,x+size*10,y-size*2);
 line(x+size*8,y+size,x+size*10,y+size*2);
 line(x+size*10,y,x+size*12,y-size*2);
 line(x+size*10,y+size,x+size*12,y+size*2);
}

Question 1:

In your own words, please explain the difference between having your for loop from Step 2 in setup() as opposed to in draw().

When the for loop is in setup(), the array of arrows is drawn only once, which results in a still image. When the for loop is in draw(), the arrows are redrawn every frame, whether or not the background is refreshed inside draw().

Question 2:

What is the benefit of using arrays?  How might you use arrays in a potential project?

When a large number of the same kind of drawing is needed, using arrays saves a lot of effort and keeps the code tidy and clean. Also, because all the information is stored in the arrays, one only has to change the parameters in the arrays when many changes are wanted. I will use arrays when many images or sounds are needed, so that I don't have to load them into Processing one by one.

Final Project Documentation – Jackson Simon

Auditory ‘Temple Run’ – Jackson Simon – Rodolfo Cossovich

Conception and Design:

I started off just really wanting to create a game, and through conversations with Rudi, "Temple Run" came up. I was not about to create a replica of an already-made game, but I had the idea that "Temple Run" could be turned into a pure audio game. Audio for each direction would let the user know where to go, instead of visual aids. At first, this audio game was intended to try to help visually impaired people in some way (perhaps helping them know which way they could move in day-to-day life, while walking down the street, with the directions said out loud). However, this was quite presumptuous, seeing as I do not know anyone who has lost, or never had, their eyesight; therefore, I could not accurately figure out the best way to aid them. The game then turned into an inclusive game, allowing both people who can see and those who cannot to be on the same playing field and enjoy themselves.

Fabrication and Production:

In the beginning (for user testing), I neglected to emphasize the experience of being blind for those who indeed were not. In fact, I started with a joystick as the means of going up, down, left, or right. After the feedback I received, it was clear that I needed to change the way of interacting with the game, and after conversations with Rudi, I decided to use an accelerometer attached to a headset with 'blinding' glasses. This amplified the dulling of the senses. I realize now that a gyroscope might have been easier, and more successful, at reading the directions (now tied to the way the user moved their head) and better for usability. I believe changing how the user chooses directions, while having their eyesight dulled (for users who are not visually impaired), created a sort of equal ground for playing games. Plus, it made the game more fun and interactive in a different way than just a simple joystick.
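The head-tracking boils down to classifying two tilt readings into one of four directions, or none inside a dead zone. Here is that classification as plain Java; the axis convention and the dead-zone threshold are my assumptions for illustration, not the project's exact values.

```java
// Sketch: classify accelerometer tilt into a game direction.
// x tilts left/right, y tilts up/down; readings inside the
// dead zone (|value| <= threshold) count as "center".
public class HeadTilt {
  public static final double DEAD_ZONE = 0.2;  // assumed threshold

  // Returns "left", "right", "up", "down", or "center".
  public static String direction(double x, double y) {
    if (Math.abs(x) <= DEAD_ZONE && Math.abs(y) <= DEAD_ZONE) {
      return "center";
    }
    // Pick the dominant axis so diagonal tilts resolve cleanly.
    if (Math.abs(x) >= Math.abs(y)) {
      return x > 0 ? "right" : "left";
    }
    return y > 0 ? "down" : "up";
  }
}
```

Tuning the dead zone is exactly the kind of adjustment that would have helped with the readings being "a little off at times": too small and the game jitters, too large and deliberate head movements get swallowed.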

Conclusion:

This game was meant to be able to be played by visually impaired and non-visually impaired people alike. My definition of interaction doesn’t necessarily involve a back and forth: it could be, for example, just reading a book, the words interacting with your brain, however, your brain doesn’t necessarily interact with the book. In the case of my game, there is a back and forth: the sound with the user, the user with the accelerometer (and by extension the game itself). Therefore, my game adheres to my definition but also expands it since there is more than just a singular interaction (which I believe is all that is needed for something to be called an interactive exchange). The audience therefore receives a stimulation, and causes a stimulation themselves. 

If I had more time, there are some definite improvements I could have made. For example, improving the 'blinding' of non-visually-impaired people (even after tweaking the glasses multiple times, you could still sort of see through the corner of your eye), and making sure the directional readings were as accurate as could be (they worked well, and you could definitely get ten points by going in the right direction ten times, but they were still a little off at times). The project taught me that the user's experience of a game is paramount. I got complaints about the discomfort of the headset (which definitely could have been made nicer), which leads to people not wanting to wear it: which means they wouldn't play the game! If I were to make another game, similar or not to this project, I would put more focus on the experience (even though some people did enjoy it and have fun at the IMA Show) and not just the idea behind the game (even if that is still important).

So what? Why should people care about this project? It definitely did accomplish my goal to a certain extent: a level playing field, no matter if you can see or not. I feel that equalizing the way games are played, while enhancing user experience, is a goal all games should strive for.