Final Project Blog Post by Tiana Lui

Facebook Car Chase – Tiana Lui – Professor Rudi

Conception and Design

At the beginning, we wanted our users to interact with our project by chasing a car, so we planned to build an electric car that would communicate with our Arduino via Bluetooth. On the car, we also installed a rotating servo motor with an ultrasonic sensor mounted on top, so that the servo could sweep the sensor and let the car avoid obstacles within 180 degrees around it. However, due to limitations and difficulty with setting up Bluetooth, we altered our design to have the car drag the user's laptop away whenever they opened Facebook, making the ultrasonic sensor and servo motor unnecessary. Here, we learned that it is sometimes necessary to innovate and find a way around obstacles; the limitations of this project forced us to think creatively and problem-solve. For the physical design of the car, our main concern was covering its wiring, so we laser cut and decorated a wooden box. Laser cutting was a fast and appealing way to shape our car, and it was forgiving of alterations (which we ended up needing to make because of the car's irregular wires and bumps). We decided not to 3D print because we weren't completely sure of the car's dimensions.

Fabrication and Production

The most significant steps in our production process were building the car, building the Chrome extension, coding in p5, coding in Arduino, connecting Arduino and p5 together, working around the Bluetooth failure, and debugging after user testing.

As the car came with no instructions, Kathy followed a video tutorial online to build it, testing the car's functions one by one to make sure everything was assembled correctly. Then, referencing code online, she built a function to control the wheels of the car (a sketch of what such a function can look like appears below). This step was crucial because the car was part of the fundamental idea of our project.
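Kathy's exact wiring isn't reproduced here, but as a rough illustration, a wheel-control function for a two-motor car might look like the following minimal sketch. The L298N-style driver and all pin numbers are my assumptions, not necessarily what our car used.

const int LEFT_FWD = 5;    // assumed PWM pins on an Uno,
const int LEFT_BACK = 6;   // wired to an L298N-style dual motor driver
const int RIGHT_FWD = 9;
const int RIGHT_BACK = 10;

void setup() {
  pinMode(LEFT_FWD, OUTPUT);
  pinMode(LEFT_BACK, OUTPUT);
  pinMode(RIGHT_FWD, OUTPUT);
  pinMode(RIGHT_BACK, OUTPUT);
}

// speed runs from -255 (full reverse) to 255 (full forward)
void setWheel(int fwdPin, int backPin, int speed) {
  if (speed >= 0) {
    analogWrite(fwdPin, speed);
    analogWrite(backPin, 0);
  } else {
    analogWrite(fwdPin, 0);
    analogWrite(backPin, -speed);
  }
}

// drive both wheels; equal speeds go straight, unequal speeds turn
void drive(int leftSpeed, int rightSpeed) {
  setWheel(LEFT_FWD, LEFT_BACK, leftSpeed);
  setWheel(RIGHT_FWD, RIGHT_BACK, rightSpeed);
}

void loop() {
  drive(200, 200); // example: drive forward at a constant speed
}

Passing unequal speeds to drive() is also how steering would have worked in our original obstacle-avoidance design.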

The other fundamental step was the Chrome extension. By referencing code online, figuring things out, and asking Leon Eckert for help, I was able to create a Chrome extension that set a 10-second timer whenever someone opened a social media site such as Facebook, and then redirected the user to a p5 page I coded to display GIFs reminding them to get back to work.

The third crucial step, by far the hardest, was serial communication between p5 and Arduino. While the concept was taught in class, this step was challenging because I had no experience serially communicating from p5 to Arduino specifically, so I was lost as to how to solve coding errors when they appeared. With the help of my CS friend, I pinpointed the problem: I had not set up the serial server properly. Once that was fixed, I was able to serially communicate between Arduino and p5.
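Our actual command protocol isn't recorded here, but to give a sense of the Arduino side of that link, the sketch below assumes p5 sends single-character commands over the serial port; 'g' and 's' are made-up command names, and the pin number is an assumption.

const int MOTOR_PIN = 5; // assumed enable pin on the motor driver

void setup() {
  Serial.begin(9600);        // must match the baud rate used on the p5 side
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    char cmd = Serial.read();
    if (cmd == 'g') {                // p5 saw Facebook open: run the car
      digitalWrite(MOTOR_PIN, HIGH);
    } else if (cmd == 's') {         // timer over: stop the car
      digitalWrite(MOTOR_PIN, LOW);
    }
  }
}

On the p5 side, a separate serial server has to be running before the sketch can open the port, which was exactly the setup step I had missed.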

Bluetooth, normally a very simple step, failed to work despite following the code in online tutorials. However, this failure pushed us to alter our project: instead of making the car run freely on the ground, we had it drag the user's laptop. This proved to be an even better idea than our original one, since it would actually pry the laptop away from the user, forcing them to take a quick break.

Our car failed to work during user testing, which I found very strange, because it had worked that morning with the exact same code. After user testing, we took note of the suggestions that fit our situation and set to work fixing the car. The failure taught me how to track down errors: isolate different sections of the code and test them one by one. When something complex breaks and you have no idea where the error is, you have to verify each function separately. I spent the next few days debugging, function by function, to see what was wrong with my code. By chance, I happened to pick up the car and it started moving; it was then that I realized the problem was the hardware, not my code. After that realization, I made sure all the hardware was in place and implemented the suggestions given during user testing (taking a demo video, removing the alert shown when users were redirected to my p5 page, securing the cord to our laptop, and placing wheels below the laptop so friction wouldn't disconnect the car from it).
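The same isolate-and-test idea applies to hardware: a throwaway sketch that exercises only the motor, with no serial logic at all, separates wiring problems from code problems. The pin number below is an assumption.

// Throwaway hardware test: pulse the motor on and off forever.
// If the car doesn't move while this runs, the fault is in the
// wiring, battery, or driver board rather than in the project code.
const int MOTOR_PIN = 5; // assumed enable pin on the motor driver

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  digitalWrite(MOTOR_PIN, HIGH); // motor on for one second
  delay(1000);
  digitalWrite(MOTOR_PIN, LOW);  // motor off for one second
  delay(1000);
}

Had we run a test like this first, we would have found the loose battery much sooner.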

Conclusions

The goal of our project was to remind users how long they had been using social media and to make them take a quick break from their laptops. Our project aligned with our definition of interaction in that the laptop receives information from us (which website we accessed and, if it was social media, for how long), processes that information, and then outputs information back to the user (a redirect to another webpage). The user is then forced to chase a physical car, another form of physical interaction in which the user sees the car dragging their laptop, interprets this threat to their computer, and takes action by chasing after it. Since our project was more of a daily-use product than a presentation piece, our audience did not necessarily know to open Facebook on the computer. However, once they opened Facebook, their reaction matched our expectation: they were surprised and tried to stop the computer from running away. If we had more time, we would implement more functions. For example, in my Chrome extension, instead of triggering a 10-second timer every time a user opened Facebook, I would build a timer that pauses itself when the user switches tabs to a non-social-media website. For the car, perhaps we could make it sensitive to the edges of the table, so that it wouldn't accidentally fall off (a rough sketch of this idea follows).
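As a sketch of that edge-detection idea: a downward-facing IR reflectance sensor could tell the table surface apart from empty space. The sensor module, its polarity, and the pin numbers here are all assumptions.

// Hypothetical table-edge guard for the car.
// Assumes a downward-facing IR reflectance sensor that reads HIGH
// over the table and LOW past the edge (polarity varies by module).
const int EDGE_SENSOR = 2; // assumed digital input pin
const int MOTOR_PIN = 5;   // assumed enable pin on the motor driver

void setup() {
  pinMode(EDGE_SENSOR, INPUT);
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  if (digitalRead(EDGE_SENSOR) == HIGH) {
    digitalWrite(MOTOR_PIN, HIGH); // table below: keep driving
  } else {
    digitalWrite(MOTOR_PIN, LOW);  // edge detected: stop the car
  }
}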

This project really tested and pushed our boundaries. We encountered obstacles that essentially would prevent the operation of our project. During those times, I sometimes wondered if our project was even doable and whether we should switch to a different project. However, by being persistent, asking for help, and thinking outside of the box, we were able to solve the problems and accomplish more than we ever had. Being able to solve those problems was very rewarding. Just as this project has enabled us to reflect and grow, we hope that our project will also make others think and reflect on how they are spending their time.

Final Project User Testing by Tiana Lui

During user testing, one project that caught my eye simulated a road trip. It was very engaging because the user was given many options to choose from, leading to many surprising experiences. For example, I was very impressed by the night-time option, which turned on the car's lights, two LED strips. I also appreciated the extra thought put into making the interaction more car-like: the user had to press a pedal to start the experience.

Other projects that impressed me were one that mapped your hand gestures onto a Processing canvas and let you save or delete your sketch, and a hat-based game in which the user had to physically move up and down to eat jellyfish and gain points. The project that involved drawing in the air reminded me of Google's Tilt Brush, and the game project's graphics and presentation were very polished.

During user testing, our car refused to move, which kept us from testing as fully as we would have liked. However, by demonstrating how our project was supposed to function, we received feedback that we'll be implementing. Because our project is software that activates when a user opens social media, it is difficult to make its function obvious in a presentation setting. One student suggested that we include an opening page telling the user to browse Facebook in a new tab. Another piece of advice was that some projects come across better on video than in a presentation setting, and that we should record a video reenacting how a user would interact with our project. A few people also told us to use a shorter cord and to make sure the cord wouldn't disconnect from the computer, since our goal was to drag the laptop with the car. Marcela suggested that we put our laptop on wheels so the cord would stay connected.

One thing we also need to make clear is that our project is more a statement about how much social media we consume. It penalizes access to social media by physically dragging the laptop away, and it admonishes users every time they open social media with GIFs and self-reflection tactics.

Post user-testing

I spent the past few days trying to debug our car, re-coding both at home and in the lab. I found it very strange that our project was not running, as it had worked in the morning with the exact same code. At first I assumed it was a serial communication code error, since I kept getting an error message about my port. Today, I debugged step by step and concluded that my code was fine. By chance, I happened to reposition the battery, and the car started working again. However, when I retested a few times afterward, I got the error message again and was also unable to re-upload code onto the Arduino; there were problems with the USB port I had been using. After this debugging, I have determined that the errors in our project come from hardware rather than code.

Recitation 11: Workshops by Tiana Lui

For this week's recitation, I went to Leon's presentation on media manipulation. We were tasked with editing a title sequence from a TV show, and because Game of Thrones is currently running its final season, I chose to edit the Game of Thrones intro.

During recitation, Leon showed us how to add a red filter to our video. My initial thought was to pair the red filter with a horror theme. I planned to select violent clips of each actor and let the user reveal those clips when the main Game of Thrones intro displayed that actor's name. However, the videos I found online were too long. I also wanted to make image animations and on-click functions for Emilia Clarke (Daenerys). Because Daenerys is known as the Mother of Dragons (龙母), I wanted to create a function where dragons pop up in the video when the user clicks a button.

In my final video, I added four functions. If the user clicks the mouse, a video of Tyrion plays. If the user presses 'n', an article about Nikolaj Coster-Waldau appears. If the user presses 'c', the Fandom wiki page for Cersei opens. Lastly, the up arrow key displays dragons on the screen.

Code

import processing.video.*;

Movie GOT;
Movie Tyrion;
PFont f;
int time;
PImage dragon;
boolean nIsOpen = false;
boolean cIsOpen = false;
int size = floor(random(1, 200));

void setup() {
  size(1000, 700);
  GOT = new Movie(this, "GOT.mp4");
  Tyrion = new Movie(this, "Tyrion.mp4");
  GOT.play();
  dragon = loadImage("dragon.png");
}

void draw() {
  time = millis();
  if (mousePressed) {
    // after 3 seconds, a mouse press pauses the intro and plays the Tyrion clip untinted
    if (time > 3000) {
      Tyrion.play();
      if (Tyrion.available()) {
        Tyrion.read();
        GOT.pause();
        noTint();
        image(Tyrion, 0, 0);
      }
    }
  } else if (keyPressed) {
    // 'n' opens the Nikolaj Coster-Waldau article, once
    if (key == 'n') {
      if (nIsOpen == false) {
        link("https://www.vanityfair.com/hollywood/2019/05/game-of-thrones-why-jaime-leaves-brienne-for-cersei-kill-her-nikolaj-interview");
        nIsOpen = true;
      }
    }
    // 'c' opens the Cersei wiki page, once
    if (key == 'c') {
      if (cIsOpen == false) {
        link("https://gameofthrones.fandom.com/wiki/Cersei_Lannister");
        cIsOpen = true;
      }
    }
  } else {
    // otherwise keep playing the intro with a red tint
    if (GOT.available()) {
      GOT.read();
      Tyrion.pause();
      tint(255, 0, 0);
      image(GOT, 0, 0);
    }
  }
  if (!mousePressed) {
    GOT.play();
    image(GOT, 0, 0);
    // the up arrow key stamps a dragon at a random position
    if (keyPressed) {
      if (key == CODED) {
        if (keyCode == UP) {
          image(dragon, random(0, 1000), random(0, 700));
        }
      }
    }
  }
}

Recitation 10: Media Controller by Tiana Lui

Credit to Johannes Vermeer for Girl With a Pearl Earring painting. 

I created a Processing sketch that controls the left-right movement of the eyes in Girl With a Pearl Earring by manipulating a potentiometer wired to an Arduino. The user inputs values by twisting the potentiometer knob; these values are transmitted to Processing via serial communication, and Processing converts them into position outputs for the eyes.

At first, I was considering storing the color pixel information for the eyes into an array, then moving that array by using the translate function. However, Rudi suggested that I create a separate image for the eyes of the painting, with a transparent background, then move that image of eyes by changing the image’s position.

In Photoshop, I selected the eyes and pasted them in place into a new, transparent document of the same size. Next, I dragged the two images, the original and the eyes, into Processing and began coding.

In Arduino, I created an int variable to receive readings from analog pin A0, then mapped that sensor value to the range 1-255 using the map function.

Arduino Code

// IMA NYU Shanghai
// Interaction Lab
// For sending multiple values from Arduino to Processing

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  sensor1 = map(sensor1, 1, 1023, 1, 255);
  // keep this format
  Serial.print(sensor1);
  Serial.println(); // add linefeed after sending the last sensor value
  // too fast communication might cause some latency in Processing
  // this delay resolves the issue.
  delay(100);
}

In Processing, I created two PImage variables, one for the original image and one for the eyes. I also created two variables to hold the previous potentiometer reading and the current one. I compared the two values to decide whether to move the eyes left or right, then moved the eyes image one pixel horizontally and half a pixel vertically.

Code for first Processing Sketch

// IMA NYU Shanghai
// Interaction Lab
// For receiving multiple values from Arduino to Processing

/*
 * Based on the readStringUntil() example by Tom Igoe
 * https://processing.org/reference/libraries/serial/Serial_readStringUntil_.html
 */

import processing.serial.*;

String myString = null;
Serial myPort;

PImage img1;
PImage img2;
int initialSensorVal;
int finalSensorVal;
float x = 0; // floats, so the eyes can also move by half a pixel vertically
float y = 0;

int NUM_OF_VALUES = 1;   /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;      /** this array stores values from Arduino **/

void setup() {
  size(735, 1000);
  background(0);
  setupSerial();
  img1 = loadImage("girlWithPearlEarring.jpg");
  img2 = loadImage("girlWithPearlEarringEyes.png");
  image(img1, 0, 0, width, height);
}

void draw() {
  image(img1, 0, 0, width, height); // redraw the base painting each frame
  updateSerial();
  printArray(sensorValues);
  finalSensorVal = sensorValues[0];
  // value rising: shift the eyes one way
  if (initialSensorVal < finalSensorVal) {
    image(img2, x++, y += .5, width, height);
  }
  // value falling: shift the eyes the other way
  if (initialSensorVal > finalSensorVal) {
    image(img2, x--, y -= .5, width, height);
  }
  initialSensorVal = sensorValues[0];
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[3], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----"
  // and replace PORT_INDEX above with the index number of the port.
  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil(10);  // 10 = '\n' Linefeed in ASCII
  myString = null;
  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10); // 10 = '\n' Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

The problem with this Processing sketch was that the potentiometer had to start at around 150 for the eyes to be able to move both left and right; if it started at 0, the eyes could only move to the right.

I created a second Processing sketch where, instead of using if statements to determine left or right, I mapped the potentiometer's values onto the range I wanted the image to move in. The catch to this approach is that the potentiometer also dictates the starting position of the eyes, so for the eyes to line up with the original image at the start, the potentiometer has to be set at a certain value as well.

2nd Processing Sketch Code

// IMA NYU Shanghai
// Interaction Lab
// For receiving multiple values from Arduino to Processing

/*
 * Based on the readStringUntil() example by Tom Igoe
 * https://processing.org/reference/libraries/serial/Serial_readStringUntil_.html
 */

import processing.serial.*;

String myString = null;
Serial myPort;

PImage img1;
PImage img2;
float initialSensorVal;
float finalSensorVal;
float x = 0;
float y = 0;
// how far the eyes are allowed to travel, in pixels
float leftBound = -10;
float rightBound = 2;
float topBound = 1;
float bottomBound = 0;

int NUM_OF_VALUES = 1;   /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;      /** this array stores values from Arduino **/

void setup() {
  size(735, 1000);
  background(0);
  setupSerial();
  img1 = loadImage("girlWithPearlEarring.jpg");
  img2 = loadImage("girlWithPearlEarringEyes.png");
  image(img1, 0, 0, width, height);
}

void draw() {
  image(img1, 0, 0, width, height); // redraw the base painting each frame
  updateSerial();
  printArray(sensorValues);
  finalSensorVal = sensorValues[0];
  // map the sensor reading directly to the eye position
  x = map(sensorValues[0], 1, 255, leftBound, rightBound);
  y = map(sensorValues[0], 1, 255, bottomBound, topBound);
  image(img2, x, y, width, height);
  initialSensorVal = sensorValues[0];
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[3], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----"
  // and replace PORT_INDEX above with the index number of the port.
  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil(10);  // 10 = '\n' Linefeed in ASCII
  myString = null;
  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10); // 10 = '\n' Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

In my project, technology allowed physical inputs to manipulate digital images, which would have been unthinkable to me before. While my project did not use computer vision, the algorithms that let computers make intelligent assumptions about images and videos, the mere fact that Processing and Arduino exist is a testament to the improvements in software development tools that let student programmers experiment artistically. Tools with gentler learning curves have enabled more people to create interactive art with technology.

Recitation 9: Final Project Process by Tiana Lui

Step 1

Sam Li

Sam’s project aims to get people to have a more fun, entertaining, and interactive experience with fine art. Sam’s project uses an illustration/representation of the Mona Lisa on a screen, accompanied by buttons users can press. By pressing buttons, users can choose how they feel and what they think about the art, and a corresponding visual will change the display on the screen.

The idea of making viewing and learning about fine art more engaging was very interesting to me. The loop in which the user touches buttons or potentiometers, the computer receives and processes that input, and the screen outputs the corresponding facial expression aligned with both her definition of interaction and mine.

I interpreted Sam's project as giving the user the ability to express their thoughts on the artwork, so I suggested that Sam give users more options to express their opinions, perhaps with an input box instead of buttons. However, Sam clarified that her project was less about letting users express their thoughts on the Mona Lisa, and more about letting them play with Mona Lisa's facial features. The group also suggested that instead of using buttons, she could alter Mona Lisa's expression frame by frame, which would make her project more complex and interesting.

Alex

Alex wanted to create a project in which the user could interact with technology in a way that feels natural. He came up with a computer bow and arrow game, accompanied by a physical glove and button system that would mimic the experience of shooting an arrow.

Alex's concept of making interaction with objects as natural as possible was very interesting to me, especially because my definition of interaction had evolved in a similarly human-centric way (I interpret making objects interactive as giving them human qualities, bringing them closer to human behavior and actions). However, Alex's project seemed very challenging, especially if he aimed both to align it with his definition of interaction (interaction should be natural) and to build its physical component. As a group, we advised Alex to come up with a more detailed plan for how he would implement his project.

Lindsay Chen

Lindsay wanted to give people the ability to draw in the air. Her project reminded me of Google Tilt Brush, a program that enables the artist to draw in the air and see their creation in VR. I like the concept of giving artists another dimension to explore and generate new art. However, as a group, we were worried about how she would track the user's position in the air and translate those values into a sketch in Processing. We suggested limiting the range the user could draw in, perhaps by using two or more distance sensors and mapping the distance between the user's hand and each sensor into Processing. Lindsay's definition of interaction was, "interaction is a process of performance where people communicate with the machine in a specific language, and can only be completed with both the people and the machine together." Her project fits her definition because drawing in the air has a performance-like quality: the user uses their body to communicate with the computer. Compared to Lindsay's definition, mine is less focused on human performance and goes into more detail on giving objects human-like qualities.

Step 2

Feedback for my project: make the project more clearly about data consumption by tracking Facebook usage time rather than overall computer usage time, and consider making an object that vibrates rather than a car, because it would be hard to communicate information from the computer to the car without a USB cord.

Most successful part, agree/disagree: using GIFs and an entertaining physical component to remind users of how long they have been consuming social media.

Least successful part, agree/disagree: tracking the user's overall computer time. If I were to track that, I would reframe my project as a computer-usage tracker rather than a data-consumption project.

How will this feedback influence your project?

I have decided to focus on tracking a user's use of Facebook, instead of tracking how long they spend on the computer. If possible, I will also freeze the user's ability to access Facebook once they have used it for a specific amount of time.

I will not be incorporating the feedback about not making the car, because we have figured out that we can use Bluetooth to transmit information to the car.