Touch the Stars – Lana Henrich – Eric Parren

Touch the Stars: Final Interaction Lab Project

Conception and Design

When my partner (Caren Yim) and I initially began thinking about our final project, we had the idea of creating a game called Ready, Set, Guess!, in which two players would compete to see who could identify a blurring image on the screen first. However, after the recitation and lecture sessions in which we received other people's feedback, we decided to change our idea. During recitation, we were told that our game didn't seem 'innovative' enough, and that the interaction we had planned (hitting buttons corresponding to the colors of answer choices on the screen) was too limited. One of the teachers advised us that creating a new, effective game would be very hard to achieve, and that it might make more sense for us to create something with richer interactive elements. In lecture, we were further advised against making games, and we noticed that almost everyone else in our class also wanted to create one. The feedback we received was accurate: hitting a button in a sort of 'game show'-style game was not interactive enough. After this lecture, my partner and I met to discuss our project further.

We wanted to differentiate our project from the others in our class, and thought about switching to something more creative and artistic. Caren and I remembered the "Eyewriter", a project we had read and heard about in class, which combined tracking technology with art. Earlier this year, I went to the Yayoi Kusama art exhibit in Shanghai, and was very inspired by the experience of being there. Caren and I therefore decided to create an art project that could be displayed in a museum for users to interact with. We were further inspired by the saying "reach for the stars", and wanted to create an art exhibit in which people could have the relaxing, interactive experience of brushing their hands through a galaxy of stars and planets. We thought the name Touch the Stars was fitting, and got to work. After asking a teaching fellow for suggestions on motion-tracking technology, we decided on LeapMotion. We knew it would be a challenge to work with, since we had never used it in class (unlike the webcam tracking we had previously learned about), but it suited our project best: we wanted only someone's bare hand to be tracked, without making them wear gloves or a light, since museum visitors would not want the trouble of putting something on just to interact with an exhibit. The motion tracker would simply follow the movements of a visitor's hand and give them a calming, fun virtual experience.

Fabrication and Production

We began our production process by experimenting with code to see how we could best utilize LeapMotion in our project. We looked at example sketches on openprocessing.org, found this one by Konstantin Makhmutov: https://www.openprocessing.org/sketch/529835, and asked a fellow whether we were allowed to use example code to modify and build on. We really liked the fluidity of the movement of the dots in this sketch, so we got to work on incorporating its particles into our own code. Because the sketch was written in a different language than the one we had learned, it took a lot of trial and error to port the particles over. Our most challenging task was working with LeapMotion itself, as we had never used it before and the setup instructions on the LeapMotion website were a bit confusing to us. It took a few days of long hours at the lab, plus assistance from the fellows, to finally get hand tracking to move the stars in our code. We also wanted some 'shooting stars' in the background behind the moving stars, as well as planets, to give the project more of a galaxy feel. Creating the appearance of slowly moving background stars was a challenge: the sketch kept glitching when we ran it, with the stars moving either very quickly or not at all. To incorporate planets into the background, we drew them by hand in Adobe Illustrator and then loaded them into our code as images. We initially thought the planets should move with the rest of the stars, but after coding this we saw that the interaction was more fluid, and made more sense, with the planets static in the background. The planets were also much bigger than the stars, and larger bodies would not plausibly move as quickly as shooting stars, so keeping them in the background felt right.
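For anyone curious how the core interaction works, here is a minimal, self-contained sketch of that leader-follower particle idea, using the mouse as a stand-in for the tracked fingertip. The constants 0.5, 0.1, 0.95, and 1024 come from our code at the end of this post; everything else is simplified:

int n = 2000;                       // fewer particles than the real project, for clarity
float[] px = new float[n];
float[] py = new float[n];
float[] vx = new float[n];
float[] vy = new float[n];

void setup() {
  size(800, 600);
  stroke(255);
  for (int i = 1; i < n; i++) {
    px[i] = random(width);
    py[i] = random(height);
  }
}

void draw() {
  background(0);
  // the leader particle (index 0) eases toward the pointer
  vx[0] = vx[0] * 0.5 + (mouseX - px[0]) * 0.1;
  vy[0] = vy[0] * 0.5 + (mouseY - py[0]) * 0.1;
  px[0] += vx[0];
  py[0] += vy[0];
  // every other particle is pulled toward the leader,
  // more weakly the farther away it is (inverse squared distance)
  for (int i = 1; i < n; i++) {
    float pull = 1024 / (sq(px[0] - px[i]) + sq(py[0] - py[i]));
    vx[i] = vx[i] * 0.95 + (vx[0] - vx[i]) * pull;
    vy[i] = vy[i] * 0.95 + (vy[0] - vy[i]) * pull;
    px[i] += vx[i];
    py[i] += vy[i];
    point(px[i], py[i]);
  }
}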

We decided to add a button labeled "PRESS to shoot to a new galaxy" in order to: 1. allow the stars to be reset to their original positions, so that, if this were in an art exhibit, each user could reset their galaxy; and 2. change the color and position of the stars and planets, creating a slightly different experience for each user. Changing the code so that the planets changed positions but otherwise remained static was difficult, but we eventually solved it with a boolean flag. Since we imagined this project in an art exhibit, we wanted a simple box with a big, glowing button. During our presentation (and in a real exhibit), we would want the lights off to create a better 'space' experience, so a glowing button, big and bright enough for people to see and in proportion to the projection of our galaxy, made sense. Sam, in our lecture class, was kind enough to let us borrow a button she had used for her midterm project. We laser-cut the box so we would have a solid, simple shape to hold all our wires and mount the button in. Below is a picture of the box sketch in Illustrator, as well as a sketch of our planets while we were making them:

 
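Here is a minimal sketch of the reset logic described above, using a key press as a stand-in for the Arduino button signal (the real project checks valueFromArduino == 0 instead):

boolean state = true;               // true = planets frozen in place
float planetX, planetY;

void setup() {
  size(800, 600);
  planetX = random(width);
  planetY = random(height);
}

void draw() {
  background(0);
  if (!state) {
    // for one frame after a press: pick new positions, then freeze again
    planetX = random(width);
    planetY = random(height);
    state = true;
  }
  fill(200, 120, 255);
  noStroke();
  ellipse(planetX, planetY, 60, 60);
}

void keyPressed() {
  // stand-in for the real check: valueFromArduino == 0 when the glowing button is pressed
  state = false;
}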

In order to further create the experience of our project being an exhibit in an art museum, we created a 'name plate', like those which hang next to paintings, to display on one of the screens next to our project during our presentation. A picture of it is below:

Because we envisioned our project as part of an art exhibit, we got to thinking about how it could remain meaningful even after the exhibit closed and enough people had interacted with it. We thought about how to make the project more than just something artistic: something actually useful for people beyond the entertainment, relaxation, and experience it provided. We started researching physical hand-therapy treatments and the issues patients face with them, and found many sources online intended to provide more fun and engaging ways for people to do their hand-therapy workouts. We realized that the movements one makes to interact with our project are similar to those recommended by physical therapists for people with hand injuries or impairments. After further research into these injuries and their treatments, we concluded that our project could work well as a tool to keep kids with hand injuries (or other reasons for hand exercises) engaged, incentivized, and motivated to do their hand exercises. The conditions we found whose physical therapy involves some of the motions incorporated into our project are: trigger finger, golfer's elbow, fine motor skills deficiencies, and strokes. Because LeapMotion is a small and portable motion tracker, our program could also let people use the project from the comfort of their own home. Below are images and a video of our project in action:

 

Conclusion

My main goal with this project was to create a personal and interactive experience, something which could go into an art exhibit. My definition of interaction is the interplay between technology and humans, giving people a chance to virtually experience something that would otherwise not be possible. I think Touch the Stars aligns with this definition. Though the project doesn't reward the user with an end 'achievement' after interacting with it, I have broadened my definition of interaction: it does not always have to fulfill a useful purpose, but can simply be a tool for expanding creativity and art. My audience agreed (both during user testing and afterward) that the project provides an interactive, relaxing, and new artistic experience. If I had had more time, I would love to have created more 'galaxies' to play with, so that each press of the button produced a completely new experience. From completing this project, I have learned a lot about coding and interaction: it is difficult, and often the things you believe will be easiest pose the greatest challenges. However, with a lot of time, a willingness to try new things and experiment, and openness to new ideas, there is no limit to what kind of projects can be created. Furthermore, I believe my project is a good blend of creativity (an interactive art exhibit in a museum) and purpose (something people can use in their own homes for their own therapeutic and relaxation purposes).

Touch the Stars in Action

Below is the code from our project. 

import processing.sound.*;
SoundFile file;
import processing.serial.*;

PImage photo;
PImage bg;

Serial myPort;
int valueFromArduino;
int particlesQuantity = 9000;
float fingerX, fingerY;
boolean drawHands = false;
float [] positionX = new float [particlesQuantity];
float [] positionY = new float [particlesQuantity];
float [] speedX = new float [particlesQuantity];
float [] speedY = new float[particlesQuantity];
ArrayList<Planet> planetlist = new ArrayList<Planet>();
int NUM_OF_IMAGES = 8;
Planet mars;

PImage [] images = new PImage[NUM_OF_IMAGES];
Star[] stars = new Star[800];
float speed;

boolean state = false;

void setup() {
  size(1440, 900);
  bg = loadImage("sky.jpeg");
  frameRate(60);
  file = new SoundFile(this, "IMA.mp3");
  file.loop();

  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[1], 9600);

  for (int i = 0; i < NUM_OF_IMAGES; i++) {
    images[i] = loadImage("planet" + i + ".png");
    planetlist.add(new Planet(0, 0, images[i]));
  }

  stroke(random(100, 225), random(100), random(100, 225));

  for (int particle = 1; particle < particlesQuantity; particle++) {
    positionX[particle] = random(0, width);
    positionY[particle] = random(0, height);
  }

  for (int i = 0; i < stars.length; i++) {
    stars[i] = new Star();
  }

  positionX[0] = 0;
  positionY[0] = 0;

  setupLeapMotion();
}

void draw() {
  speed = 3;
  background(bg);

  // while state is false (just after a button press), scatter the planets
  for (int i = 0; i < planetlist.size(); i++) {
    if (state == false) {
      planetlist.get(i).rand();
    }
  }

  // then lock them in place and draw them
  for (int i = 0; i < planetlist.size(); i++) {
    state = true;
    if (state == true) {
      planetlist.get(i).show();
    }
  }

  updateLeapMotion();

  // the leader particle (index 0) eases toward the tracked fingertip,
  // which allows the hand to move the particles through LeapMotion
  speedX[0] = speedX[0] * 0.5 + (fingerX - positionX[0]) * 0.1;
  speedY[0] = speedY[0] * 0.5 + (fingerY - positionY[0]) * 0.1;

  positionX[0] += speedX[0];
  positionY[0] += speedY[0];

  for (int particle = 1; particle < particlesQuantity; particle++) {
    // attraction toward the leader falls off with the squared distance
    float whatever = 1024 / (sq(positionX[0] - positionX[particle]) + sq(positionY[0] - positionY[particle]));

    speedX[particle] = speedX[particle] * 0.95 + (speedX[0] - speedX[particle]) * whatever;
    speedY[particle] = speedY[particle] * 0.95 + (speedY[0] - speedY[particle]) * whatever;

    positionX[particle] += speedX[particle];
    positionY[particle] += speedY[particle];

    // bounce off the screen edges
    if ((positionX[particle] < 0 && speedX[particle] < 0) || (positionX[particle] > width && speedX[particle] > 0)) {
      speedX[particle] = -speedX[particle];
    }

    if ((positionY[particle] < 0 && speedY[particle] < 0) || (positionY[particle] > height && speedY[particle] > 0)) {
      speedY[particle] = -speedY[particle];
    }

    point(positionX[particle], positionY[particle]);
  }

  // draw the slowly drifting background stars around the screen center
  pushMatrix();
  translate(width/2, height/2);
  for (int i = 0; i < stars.length; i++) {
    stars[i].update();
    stars[i].show();
  }
  popMatrix();

  //for (int i = 0; i < NUM_OF_IMAGES; i++){
  //  image(images[i], random(0, 1440), random(0, 900));
  //}
  //for (int i = 0; i < NUM_OF_IMAGES; i++) {
  //  Planet p = images.get(i);
  //  //Planet(x, y, "planet.jpg");
  //  image(p.img, p.x, p.y);
  //}

  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }
  println(valueFromArduino); // this prints out the values from Arduino

  //if (valueFromArduino >= 10){
  //  for (int i = 0; i < planetlist.size(); i++){
  //    planetlist.get(i).rand();
  //  }
  //}

  // button pressed: reset the galaxy with new colors and positions
  if (valueFromArduino == 0) {
    state = false;

    for (int particle = 1; particle < particlesQuantity; particle++) {
      stroke(random(100, 225), random(500), random(100, 225));
      positionX[particle] = random(0, width);
      positionY[particle] = random(0, height);
    }
  }
}

import de.voidplus.leapmotion.*;

LeapMotion leap;

void setupLeapMotion() {
leap = new LeapMotion(this);
}

void updateLeapMotion() {
// …
int fps = leap.getFrameRate();

// ========= HANDS =========

for (Hand hand : leap.getHands ()) {

// —– BASICS —–

int hand_id = hand.getId();
PVector hand_position = hand.getPosition();
PVector hand_stabilized = hand.getStabilizedPosition();
PVector hand_direction = hand.getDirection();
PVector hand_dynamics = hand.getDynamics();
float hand_roll = hand.getRoll();
float hand_pitch = hand.getPitch();
float hand_yaw = hand.getYaw();
boolean hand_is_left = hand.isLeft();
boolean hand_is_right = hand.isRight();
float hand_grab = hand.getGrabStrength();
float hand_pinch = hand.getPinchStrength();
float hand_time = hand.getTimeVisible();
PVector sphere_position = hand.getSpherePosition();
float sphere_radius = hand.getSphereRadius();

// —– SPECIFIC FINGER —–

Finger finger_thumb = hand.getThumb();
// or hand.getFinger("thumb");
// or hand.getFinger(0);

Finger finger_index = hand.getIndexFinger();
// or hand.getFinger("index");
// or hand.getFinger(1);

Finger finger_middle = hand.getMiddleFinger();
// or hand.getFinger("middle");
// or hand.getFinger(2);

Finger finger_ring = hand.getRingFinger();
// or hand.getFinger("ring");
// or hand.getFinger(3);

Finger finger_pink = hand.getPinkyFinger();
// or hand.getFinger("pinky");
// or hand.getFinger(4);

// —– DRAWING —–

if (drawHands) hand.draw();
// hand.drawSphere();

// save the position in the global variables
fingerX = finger_index.getPosition().x;
fingerY = finger_index.getPosition().y;

// ========= ARM =========

if (hand.hasArm()) {
Arm arm = hand.getArm();
float arm_width = arm.getWidth();
PVector arm_wrist_pos = arm.getWristPosition();
PVector arm_elbow_pos = arm.getElbowPosition();
}

// ========= FINGERS =========

for (Finger finger : hand.getFingers()) {
// Alternatives:
// hand.getOutstrechtedFingers();
// hand.getOutstrechtedFingersByAngle();

// —– BASICS —–

int finger_id = finger.getId();
PVector finger_position = finger.getPosition();
PVector finger_stabilized = finger.getStabilizedPosition();
PVector finger_velocity = finger.getVelocity();
PVector finger_direction = finger.getDirection();
float finger_time = finger.getTimeVisible();

// Let's test this first!
// fill(255, 0, 0);
// ellipse(finger_position.x, finger_position.y, 10, 10);

// —– SPECIFIC FINGER —–

switch(finger.getType()) {
case 0:
// System.out.println("thumb");
break;
case 1:
// System.out.println("index");
break;
case 2:
// System.out.println("middle");
break;
case 3:
// System.out.println("ring");
break;
case 4:
// System.out.println("pinky");
break;
}

// —– SPECIFIC BONE —–

Bone bone_distal = finger.getDistalBone();
// or finger.get("distal");
// or finger.getBone(0);

Bone bone_intermediate = finger.getIntermediateBone();
// or finger.get("intermediate");
// or finger.getBone(1);

Bone bone_proximal = finger.getProximalBone();
// or finger.get("proximal");
// or finger.getBone(2);

Bone bone_metacarpal = finger.getMetacarpalBone();
// or finger.get("metacarpal");
// or finger.getBone(3);

// —– DRAWING —–

// finger.draw(); // = drawLines()+drawJoints()
// finger.drawLines();
// finger.drawJoints();

// —– TOUCH EMULATION —–

int touch_zone = finger.getTouchZone();
float touch_distance = finger.getTouchDistance();

switch(touch_zone) {
case -1: // None
break;
case 0: // Hovering
// println("Hovering (#"+finger_id+"): "+touch_distance);
break;
case 1: // Touching
// println("Touching (#"+finger_id+")");
break;
}
}

// ========= TOOLS =========

for (Tool tool : hand.getTools ()) {

// —– BASICS —–

int tool_id = tool.getId();
PVector tool_position = tool.getPosition();
PVector tool_stabilized = tool.getStabilizedPosition();
PVector tool_velocity = tool.getVelocity();
PVector tool_direction = tool.getDirection();
float tool_time = tool.getTimeVisible();

// —– DRAWING —–

// tool.draw();

// —– TOUCH EMULATION —–

int touch_zone = tool.getTouchZone();
float touch_distance = tool.getTouchDistance();

switch(touch_zone) {
case -1: // None
break;
case 0: // Hovering
// println("Hovering (#"+tool_id+"): "+touch_distance);
break;
case 1: // Touching
// println("Touching (#"+tool_id+")");
break;
}
}
}

// ========= DEVICES =========

for (Device device : leap.getDevices ()) {
float device_horizontal_view_angle = device.getHorizontalViewAngle();
float device_verical_view_angle = device.getVerticalViewAngle();
float device_range = device.getRange();
}
}

// ========= CALLBACKS =========

void leapOnInit() {
  // println("Leap Motion Init");
}
void leapOnConnect() {
  // println("Leap Motion Connect");
}
void leapOnFrame() {
  // println("Leap Motion Frame");
}
void leapOnDisconnect() {
  // println("Leap Motion Disconnect");
}
void leapOnExit() {
  // println("Leap Motion Exit");
}

class Planet {
  PImage photo;
  float xpos;
  float ypos;

  //PImage [] images = new PImage[NUM_OF_IMAGES];

  Planet(float _x, float _y, PImage img) {
    xpos = _x;
    ypos = _y;
    photo = img;
    //photo = loadImage(planetName);
  }

  void update(float x, float y) {
    xpos = x;
    ypos = y;
  }

  void show() {
    image(photo, xpos, ypos);
  }

  void rand() {
    xpos = random(0, width);
    ypos = random(0, height);
  }
}

class Star {
  float x;
  float y;
  float position;
  float Pvalue;

  Star() {
    x = random(-width/2, width/2);
    y = random(-height/2, height/2);
    position = random(width/2);
  }

  void update() {
    // move the star "closer" each frame; respawn it once it reaches the viewer
    position = position - speed;
    if (position < 1) {
      position = width/2;
      x = random(-width/2, width/2);
      y = random(-height/2, height/2);
    }
  }

  void show() {
    pushStyle();
    fill(255);
    noStroke();
    // perspective division: the closer the star, the farther it drifts from center
    float newX = map(x / position, 0, 1, 0, width/2);
    float newY = map(y / position, 0, 1, 0, height/2);
    ellipse(newX, newY, 2, 2);
    popStyle();
  }
}

The song we used as background music for our project (bensound.com):

Recitation 11, Media Manipulation – Lana Henrich

Media Manipulation Workshop

This week, I went to Leon’s media manipulation workshop. We reviewed the basics of importing images, videos, and live camera feeds into Processing, and learned how to rework media using tools in Processing.

Video Assignment

It was my first time working with pre-filmed videos in Processing, and I browsed Vimeo.com for non-copyrighted footage I liked. For our recitation assignment, we were supposed to rework existing videos into a new one. I decided to work with the general idea of SpongeBob SquarePants, using footage of oceans and beaches with remixes of SpongeBob songs in the background. To do this, I found videos of tropical places on Vimeo.com and combined them in Processing, using stop(), noLoop(), jump(), duration(), and speed() to control playback of the imported videos. For the background music, I found an acoustic cover of the SpongeBob ending song, looked up how to import audio in the Processing reference, and played the song underneath the video. Manipulating videos in Processing was more complicated than I had expected. I ran into trouble when trying to cut from one part of a video to the next, but I consulted the video-manipulation notes and found the correct functions to use. Importing the song also did not work at first, because the downloaded file was not in the 'data' folder I had created; once I moved the mp3 into the data folder, it worked.
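As a rough illustration, here is a minimal sketch of the kind of playback control I used, assuming placeholder file names "ocean.mp4" and "cover.mp3" sitting in the sketch's data folder:

import processing.video.*;
import processing.sound.*;

Movie clip;
SoundFile song;

void setup() {
  size(640, 360);
  clip = new Movie(this, "ocean.mp4");          // placeholder video in the data folder
  clip.loop();
  clip.speed(0.5);                              // slow the footage to half speed
  song = new SoundFile(this, "cover.mp3");      // placeholder song in the data folder
  song.loop();
}

void movieEvent(Movie m) {
  m.read();                                     // read each new frame as it arrives
}

void draw() {
  image(clip, 0, 0, width, height);
}

void mousePressed() {
  clip.jump(random(clip.duration()));           // cut to a random point in the footage
}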

Video Assignment Result

Below is the video I created using footage from Vimeo.com and a song from YouTube.com:

Additional Notes

Because a major component of my Final Project will be image manipulation in Processing, I decided to note which functions may be most useful to my partner and me as we finish our project (a short sketch using them follows the list). They are:

  • loadImage() (to load our pictures into Processing),
  • resize() (to adjust the width and height of our images so that they are all the same size and change smoothly),
  • and filter() (to tweak our images to better fit our game).
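Here is a minimal sketch of those three functions together (the file name "guess0.png" is just a placeholder):

PImage img;

void setup() {
  size(800, 600);
  img = loadImage("guess0.png");   // placeholder file name in the data folder
  img.resize(400, 300);            // force every image to the same dimensions
  img.filter(BLUR, 4);             // blur the image so players have to guess
}

void draw() {
  background(0);
  image(img, width/2 - img.width/2, height/2 - img.height/2);
}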

What I learned in this workshop will be very useful for my project, as my partner and I will need to pixelate images in Processing. We will also have to display two separate elements on the screen at the same time, the image itself and the answer choices, so this workshop was good practice and preparation for the coding we will have to do for the Final Project.

Media Controller – Recitation 10, Lana Henrich

Moving an Image

Documentation

This week, our goal for recitation was to connect Arduino and Processing in order to manipulate media (an image or video). I decided to use a potentiometer, because I knew it could drive several different kinds of image movement. I created a total of three different movements using the same image and potentiometer, adjusting only the code each time. The wiring of the single potentiometer stayed the same while the code changed: the potentiometer controlled the x-axis in two of my sketches and the y-axis in the third. I wanted to see how many different manipulations could be created from the same foundation of code and circuit.

The first manipulation I coded was scrolling left to right across a zoomed-in version of the image by twisting the potentiometer. The wiring was pretty simple, though I had to look back at last week's PowerPoint to figure out how to load the image into Processing. I also made use of the example code we were provided on the recitation website.
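A minimal sketch of this first manipulation, assuming the Arduino sends one byte (0 to 255) from the potentiometer, and that the placeholder image name and serial port index match your setup:

import processing.serial.*;

Serial myPort;
PImage photo;
int sensorValue;

void setup() {
  size(800, 600);
  photo = loadImage("beach.jpg");                      // placeholder image, wider than the window
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[1], 9600);   // the port index may differ on your machine
}

void draw() {
  while (myPort.available() > 0) {
    sensorValue = myPort.read();                       // one byte, 0 to 255, from the Arduino
  }
  background(0);
  // map the potentiometer reading to a horizontal scroll across the zoomed-in image
  float x = map(sensorValue, 0, 255, 0, -(photo.width - width));
  image(photo, x, 0);
}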

The second manipulation I coded was moving the image from top to bottom on the screen. I also drew a background for this interaction, so that the image could start off-screen and be brought on-screen using the potentiometer.

The third and final manipulation was the most challenging and complicated. In my other two interactions, the image was a bit too zoomed in for someone to distinguish what it was supposed to show, so here I adjusted the scale as well. This was my most successful code of the recitation, and it also made the movement of the image faster. It was interesting to see how many different movements I could create in Processing and Arduino using just a single potentiometer.

Reflection about Interaction and Technology

In today's society, interactions between humans and technology are becoming more prominent and more important to modern ways of life. The projects I created today used one main piece of physical hardware (the potentiometer connected to the Arduino), while the adjustments that produced the different interactions were all made in software. Technology was used in my project to combine a human's physical interaction with the potentiometer (twisting it) with the virtual element of a moving image on a computer screen. The code I wrote served as the link between the virtual world of the computer and image and the real world of the potentiometer and its user. The user controls the visual representation in Processing, deciding how far to turn the potentiometer in order to manipulate the image.

Final Project Feedback – Recitation 9, Lana Henrich

Final Project Feedback

Other Projects

Project 1

Kyle's project idea is an interactive art piece in which waves and flowers are projected onto a wall (with the lights off), and the projected images move based on a person moving in front of sensors. The main feedback I gave Kyle was to make sure the project is interactive enough, meaning that the user's movements should trigger interesting, noteworthy, and unique responses within the projection. I think this concept is really interesting because he said he would incorporate music into the installation as well, meaning the project responds with both audio and visual feedback, making it a fully interactive art piece.

Project 2

Santi's idea for his final project is a globe with holes in it, where every time you throw a bottle cap into one of the holes, the globe turns by a few degrees. The goal is to get the globe to complete a full 360-degree turn. I like this idea because it has an environmentally conscious purpose, and it is cool to create a project that can incentivize recycling and make it fun rather than tedious. The comments we gave Santi were to make the globe turn more than a few degrees each time a bottle cap is thrown in, so that it doesn't take 30 bottle caps to turn the globe fully, and so that the rotation is visible enough to give users an experience every time they use it.

Project 3

The third presenter wanted to create a musical instrument that someone blows into, triggering pressure sensors so that sounds come out of a speaker. Hand movement and the strength with which air is blown into the instrument are supposed to adjust the volume and pitch of the sound produced. The main advice I gave was to make sure the producible sounds are pleasant to hear, since someone with no musical experience would usually struggle to create nice sounds with such a device. This can be done by limiting the sounds to a certain range, so that extremely loud or pitchy sounds cannot be produced.

My Project

Feedback Given by my Group

The feedback I received was: to replace the button board with cards and a sensor, to add an on-screen (or off-screen) timer so players feel more pressure and excitement to guess faster, and to add a countdown to the beginning of the game. According to my group members, the most successful part of my proposal is that the game is multi-player and can be played by anyone, making it fun for many people. They said the least successful part was the buttons and how simple the guessing is, suggesting that I replace the buttons with something that makes the game a bit more challenging. Furthermore, they suggested I simplify how I code the project, since coding all the different categories, rounds of categories, and buttons may be complicated. I agree that a major advantage of my project is the multi-player interaction, and that the coding may be tricky to figure out. While I am open to switching out the button board and making the game a bit more complicated, I think it will be important for my partner and me to just start the wiring and coding process and see what we can accomplish and what we might need to edit. We will keep these suggestions in mind as we go forward. I will definitely discuss with my partner how to make the project more challenging so that it is not too easy to play, which could include adding more difficult categories than just "animals" and "celebrities". Perhaps we could also make each round more difficult, with round 1 the easiest and round 3 the hardest, keeping the game challenging and interesting to play.

Ready, Set, Guess! – Final Project, Lana Henrich

Ready, Set, Guess!

A. Project Title

The title of our project will be "Ready, Set, Guess!"

B. Project Statement of Purpose

Ready, Set, Guess! will be an interactive two-player game in which players see a blurred image pop up on the screen (along with three answer choices for what the image could be) and compete to see who can press the button corresponding to the correct answer first. Each round of the game will feature a different category on which the images are based, giving players a clue about what they are looking at. Due to the partially digital nature of our game, the software can continually receive updates with new categories and images, keeping players entertained as each round changes. Our project is intended for audiences of all ages, as an easy-to-play game for all demographics that does not require a lengthy explanation of rules or reading an instruction manual.

C. Project Plan 


Our project aims to be a game that people of all ages can play: the rounds' varied categories will make it fun for people with different interests and keep it interesting for returning players. The game will begin with each player standing in front of a board of three color-coded buttons. Once the players start the game on the computer, a blurry image composed of slightly moving ellipses will appear on the screen, along with three options for what the picture could be. Each round will be based on a category, such as celebrities, animals, or famous paintings. When the image is projected, the answer choices will be listed at the bottom of the screen, with each button color corresponding to an answer choice. Whichever player presses the button corresponding to the correct answer first wins the round, and whoever wins two out of three rounds wins the game. Thanks to the varying categories, players with all kinds of interests and knowledge can play our game knowing there will be something in it for them. (A small sketch of the 'blurry image of moving ellipses' idea follows.)
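A minimal sketch of that blurring effect, assuming a placeholder image file "celebrity.jpg" in the data folder:

PImage img;

void setup() {
  size(800, 600);
  img = loadImage("celebrity.jpg");   // placeholder image in the data folder
  img.resize(width, height);
  noStroke();
  background(0);
}

void draw() {
  // each frame, sample random pixels and draw soft dots of their color;
  // the jitter keeps the dots slightly moving, the dot size keeps the image blurry
  for (int i = 0; i < 200; i++) {
    int x = (int) random(img.width);
    int y = (int) random(img.height);
    fill(img.get(x, y), 120);
    ellipse(x + random(-3, 3), y + random(-3, 3), 18, 18);
  }
}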

The most important thing about our game will be the execution, as we want it to feel like an actual application which people could download (and buy the corresponding button boards for) to play. The first thing we will need to do is put together the two circuits for the button boards, which we will complete before the end of April. Once the circuits are done, we will start coding the project, which will be the most time-consuming and challenging part of our game. The difficulties will be in setting up the different rounds and the options displayed at the beginning of each round, creating time-sensitive buttons which detect who presses the correct button first, and creating a "start" button that lets players begin the game. We might look up example code in the public Arduino and Processing repositories to get ideas for how to begin. We will work on the code in two sessions, one in the first week of May and one in the second. Once our coding is complete and the project works as intended, we can laser-cut boards for our equipment, including the buttons and Arduino. From there, we can color-code the buttons (if not given already-colored ones), assemble and label the board, and make the project look clean and presentable. Once the project is complete, we will playtest it ourselves a few times to find any minor improvements to the user's experience with our game.
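A minimal sketch of the time-sensitive 'who pressed first' logic, using keyboard keys as stand-ins for the two button boards (the real version would read the button states over serial from the Arduino instead):

char[] p1Keys = {'a', 's', 'd'};    // stand-ins for player 1's three buttons
char[] p2Keys = {'j', 'k', 'l'};    // stand-ins for player 2's three buttons
int correctAnswer = 1;              // index of the correct choice this round
boolean roundOver = false;

void setup() {
  size(400, 200);
  textAlign(CENTER, CENTER);
}

void draw() {
  background(0);
  text(roundOver ? "Round over!" : "Press your button!", width/2, height/2);
}

void keyPressed() {
  if (roundOver) return;            // only the first correct press counts
  for (int i = 0; i < 3; i++) {
    if (key == p1Keys[i] && i == correctAnswer) {
      println("Player 1 wins the round!");
      roundOver = true;
    }
    if (key == p2Keys[i] && i == correctAnswer) {
      println("Player 2 wins the round!");
      roundOver = true;
    }
  }
}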

D. Context and Significance

During my preparatory research, my favorite project was a real-life version of the Mario mushroom, which jumps out of a box when the box is pushed from below. I liked this project a lot because it is entertaining and creative, something many people would enjoy. That project and the one I am creating for my final both align with my definition of interaction, focusing on user experience: the most important aspect of the design and coding is that users enjoy the project and have a memorable journey navigating it. The Mario project and Ready, Set, Guess! are both self-explanatory enough to use without extensive instructions, and can both be enjoyed by a variety of age groups. What is unique about my project is that it is a half-digital, half-physical game. If it were an actual game sold and distributed, it could constantly be updated with new categories and rounds, with no major additional purchases needed on behalf of the user. This would keep our game fun for a long time, as the range of categories and rounds accounts for people with different interests and needs. The game can be played with family or friends, by people of all genders, demographics, and ages. After successful completion, this project would be ideal as a computer or smartphone application, where people could choose to play the original way with the button boards, or fully digitally. Additionally, people looking for a game like this could read our blog post and recreate the project, personalizing their own versions to fit their needs and wants. This game idea is valuable because it creates something that can be enjoyed by many different people, and can entertain large groups as well as just a pair of players.