Recitation 8: Serial Communication by Justin Wu

Exercise 1:

In this exercise, we used the notes from this week's lectures. By connecting Arduino and Processing through serial communication, I built a circuit with two potentiometers, each serving a different purpose: the potentiometer in the middle controlled the y-axis position of the ellipse, while the other controlled its x-axis position. Although it is not as complex as the Etch A Sketch example, the idea behind this circuit is identical and relies on the same mechanism. Exercise One uses the simplest interaction: depending on the image each user wants to create, they can twist the potentiometers to their liking and produce a unique piece.

Pictures and Videos:

Code:

Arduino:

// IMA NYU Shanghai
// Interaction Lab
// For sending multiple values from Arduino to Processing

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  // keep this format
  Serial.print(sensor1);
  Serial.print(","); // put a comma between sensor values
  Serial.println(sensor2);

  // too-fast communication might cause some latency in Processing;
  // this delay resolves the issue.
  delay(100);
}

Processing: 

// IMA NYU Shanghai
// Interaction Lab
// For receiving multiple values from Arduino to Processing

/*
* Based on the readStringUntil() example by Tom Igoe
* https://processing.org/reference/libraries/serial/Serial_readStringUntil_.html
*/

import processing.serial.*;

String myString = null;
Serial myPort;

int NUM_OF_VALUES = 2; /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues; /** this array stores values from Arduino **/

void setup() {
  size(500, 500);
  background(0); // if background() is not called in draw(), circles draw on top of everything
  setupSerial();
}

void draw() {
  updateSerial();
  printArray(sensorValues);

  fill(255);
  ellipse(map(sensorValues[0], 0, 1023, 0, width),
          map(sensorValues[1], 0, 1023, 0, height), 50, 50);
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[5], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // Then check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----"
  // and replace PORT_INDEX above with the index number of that port.

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil(10); // 10 = '\n', linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10); // 10 = '\n', linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Exercise 2:

For Exercise 2, I decided to use the mousePressed() function to play a random note. Initially I tried to incorporate the tone() function on the Processing side, but after consulting Leon, I realized there was no use for it there. Instead, I focused on using if-else statements and the transmitted values to index an array of notes. Although I got everything working, there was one issue: the notes were not playing long enough to be differentiated from one another, so we had to adjust the duration of each note. Following that adjustment, everything went smoothly. Exercise 2 involves a more unpredictable interaction: depending on how long each user presses the mouse, they generate a different note with a different duration as well.

Pictures and Videos:

Code:

Arduino:

// IMA NYU Shanghai
// Interaction Lab

/**
This example is to send multiple values from Processing to Arduino.
You can find the Processing example file in the same folder which works with this Arduino file.
Please note that the echo case (when char c is 'e' in the getSerialData function below)
checks if Arduino is receiving the correct bytes from the Processing sketch
by sending the values array back to the Processing sketch.
**/

#define NUM_OF_VALUES 2 /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/

/** DO NOT REMOVE THESE **/
int tempValue = 0;
int valueIndex = 0;

/* This is the array of values storing the data from Processing. */
int values[NUM_OF_VALUES];

// frequencies (Hz) of the notes to choose from:
int melody[] = {
  31, 1100, 123, 578, 882, 311, 4978
};

// note durations: 4 = quarter note, 8 = eighth note, etc.:
int noteDurations[] = {
  4, 8, 8, 4, 4, 4, 4, 4
};

boolean state = false;

void setup() {
  Serial.begin(9600);
  pinMode(13, OUTPUT);
  pinMode(9, OUTPUT);
}

void loop() {
  getSerialData();

  // add your code here
  // use elements in the values array: values[0], values[1]
  if (values[0] == 1) {
    digitalWrite(13, HIGH);
  } else {
    digitalWrite(13, LOW);
  }

  //Serial.println(randomNumber);
  if (values[0] == 1 && state == false) {
    tone(9, melody[values[1]], 5000);
    state = true;
  } else {
    noTone(9);
    //state = false;
  }

  // earlier attempts:
  // tone(8, melody[random(0, 7)], noteDuration);
  // if (values[0] == 1) {
  //   tone(8, melody[random(0, 7)], noteDuration);
  // } else {
  //   tone(8, melody[random(0, 7)], noteDuration);
  // }
}

// receive serial data from Processing
void getSerialData() {
  if (Serial.available()) {
    char c = Serial.read();
    // a switch-case checks the value of the variable in the switch statement
    // (in this case, the char c), then runs the case that matches that value.
    // For more information, visit the reference page: https://www.arduino.cc/en/Reference/SwitchCase
    switch (c) {
      // if the char c from Processing is a number between 0 and 9
      case '0' ... '9':
        // save the value of char c into tempValue,
        // simultaneously shifting the digits already saved in tempValue
        // so the digits received through char c remain coherent
        // (if this does not make sense and you would like to know more, send me an email!)
        tempValue = tempValue * 10 + c - '0';
        break;
      // if the char c from Processing is a comma,
      // the following values of char c are for the next element in the values array
      case ',':
        values[valueIndex] = tempValue;
        // reset tempValue
        tempValue = 0;
        // increment valueIndex by 1
        valueIndex++;
        break;
      // if the char c from Processing is the character 'n',
      // it signals the end of the data
      case 'n':
        // save tempValue;
        // this will be the last element in the values array
        values[valueIndex] = tempValue;
        // reset tempValue and valueIndex
        // to clear out the values array for the next round of readings from Processing
        tempValue = 0;
        valueIndex = 0;
        break;
      // if the char c from Processing is the character 'e',
      // it signals the Arduino to send Processing the elements saved in the values array;
      // this case is triggered and processed by the echoSerialData function in the Processing sketch
      case 'e': // to echo
        for (int i = 0; i < NUM_OF_VALUES; i++) {
          Serial.print(values[i]);
          if (i < NUM_OF_VALUES - 1) {
            Serial.print(',');
          } else {
            Serial.println();
          }
        }
        break;
    }
  }
}

Arduino (the Melody example we referenced):

/*
Melody

Plays a melody

circuit:
– 8 ohm speaker on digital pin 8

created 21 Jan 2010
modified 30 Aug 2011
by Tom Igoe

This example code is in the public domain.

http://www.arduino.cc/en/Tutorial/Tone
*/

#include "pitches.h"

// notes in the melody:
int melody[] = {
  NOTE_C4, NOTE_G3, NOTE_G3, NOTE_A3, NOTE_G3, 0, NOTE_B3, NOTE_C4
};

// note durations: 4 = quarter note, 8 = eighth note, etc.:
int noteDurations[] = {
  4, 8, 8, 4, 4, 4, 4, 4
};

void setup() {
  // iterate over the notes of the melody:
  for (int thisNote = 0; thisNote < 8; thisNote++) {

    // to calculate the note duration, take one second divided by the note type,
    // e.g. quarter note = 1000 / 4, eighth note = 1000 / 8, etc.
    int noteDuration = 1000 / noteDurations[thisNote];
    tone(8, melody[thisNote], noteDuration);

    // to distinguish the notes, set a minimum time between them.
    // the note's duration + 30% seems to work well:
    int pauseBetweenNotes = noteDuration * 1.30;
    delay(pauseBetweenNotes);
    // stop the tone playing:
    noTone(8);
  }
}

void loop() {
  // no need to repeat the melody.
}

Justin Wu’s Preparatory Research and Analysis

Preparatory Research and Analysis

A)

Throughout this semester, my definition of interaction has changed massively. At the start of the semester, I interpreted interaction as two parties bouncing ideas, theories, and movements off each other. Then, while executing my midterm project, I came to understand interaction as a process that should also create a (1+1>2) effect. However, over the last two weeks, my definition of interaction has changed yet again. Building on my midterm definition, interaction should still bring something new to the table, but it should also arise from collaboration and cooperation. Through the lectures, we learned how to combine Arduino and Processing, and that combination itself represents the (1+1>2) effect. Interaction should also involve creativity; it should encourage users and creators to envision a project or idea that is one of a kind. Like the random variable in Processing, we should strive to make every interaction unpredictable but enjoyable.

B)

While researching for our final project, I came across two inspiring projects online.
First, I found a light-up leather arm brace created by Tim Deagan on makezine.com.
Tim Deagan laments that while cosplay fashion has found a way to incorporate wearable microcontrollers, daily-wear fashion still has not. In his project, he found a way to incorporate an Adafruit Gemma and RGB NeoPixels into a leather arm brace. The arm brace has four light-up panels and uses touch sensors to trigger each of them. Tim Deagan's project aligns with my definition of interaction. Not only does he show creativity by trying to break fashion boundaries, he also uses rechargeable batteries inside the arm brace to make charging easier. Tim Deagan successfully combines technology and daily wear and creates a (1+1>2) effect; therefore, his project lines up with my definition of interaction.

Light Up Leather Arm Braces

The second project I found online was an LED Nixie display described in an article by Florian Schaffer.
Schaffer explains how the original creator, Connor Nishijima, uses an Arduino powered at 5V to drive a numeric display of multicolor LEDs, and he attempts to teach readers how to assemble their own LED display quickly. Although Connor Nishijima does build a beautiful project, it does not align with my definition of interaction. The display lets users change the digits illuminated, and it does incorporate creativity through its laser-cut numeral plates, but it does not create a (1+1>2) effect. Users get exactly what they want out of this project and nothing more. Therefore, Connor Nishijima's project does not match my interpretation of interaction.

LED “Nixie” Display

D)

The standard definition of interaction is a reciprocal action or influence, which means both parties are expected to contribute collaboratively. However, after reading articles online, I realized interaction should also carry a sense of unpredictability.

In "Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers," Golan Levin and his collaborators try to redefine people's idea of computer vision technologies. They admit that people feel signal processing and artificial intelligence are limited to military and law-enforcement purposes, but they still try to break that boundary. They have enlisted multiple interactive media artists to front the effort: they want people to understand that computer vision should not be limiting but empowering, and they want to spread computer vision around the world and apply it in every field. Reading this article forced me to polish my idea of interaction. I realized interaction should not stay within the box; it should encourage people to think outside of it.

After reading Judith Shulevitz's article "Alexa, Should We Trust You?" in The Atlantic, I realized how powerful interaction can be. Shulevitz discusses how smart speakers and their creators have broken conventional boundaries and used interaction to elevate their importance. Amazon and Google have pushed their versions of smart speakers to the world vigorously over the last two years. While their marketing works, they have also used computer programming and artificial intelligence to let users truly interact with the speakers. Through a smooth voice and an intelligent computer system, customers slowly build a relationship with their smart speaker, asking more questions over time. Google and Amazon did what Golan Levin and his collaborators had hoped to do: they used computing to truly impact our globe and our living conditions. My idea of interaction has changed thanks to these two articles. I present my final definition of interaction as a process that includes two parties who bounce ideas off each other while using creativity to create a (1+1>2) effect.
Golan Levin and Judith Shulevitz are pioneers of the computer vision movement, but Google and Amazon are the trailblazers who experimented with using AI technology to affect our everyday lives. We should all be amazed and inspired by how a smart speaker can interact with us, answer our questions, and almost act as another vital family member when it is nothing more than a plastic cylinder. Interaction still requires two parties, in this case a smart speaker and a user, but it should not just be a collaboration; it should be a relationship that people will yearn for in the future.

http://www.flong.com/texts/essays/essay_cvad/

https://www.theatlantic.com/magazine/archive/2018/11/alexa-how-will-you-change-us/570844/

Recitation 7: Processing Animation by Justin Wu

Reflection:

During this recitation, I decided to upgrade my previous project. To include some level of interaction, I added the mousePressed() function. At first, I was undecided about how to implement it in my code, but with Nick's help, I decided to use mousePressed() to change the colors of my ellipses with every click.
Initially, I wanted to turn all sixteen ellipses black with each click using an if-else statement, but as I proceeded I changed my plan. Instead of assigning a value to each color, I named the color variable (circleColor). Using circleColor, I randomly assign a different color, bounded by its RGB color values, with each click. In general, it was refreshing to see the functions already built into Processing, and it was an eye-opener to see how I can use built-in functions such as mousePressed() and if-else alongside my own functions such as circleColor() to code my project.

Video:

Code:

  

You Will Never Catch Me Mom! Justin Wu (Marcela)


Before this midterm project, we all participated in another group project. In that initial project, we created a futuristic product that could help users identify skin issues while also offering different product samples for testing. That project, named "iMirror," required a human user to interact with the artificial intelligence inside the mirror. Watching my group mates demonstrate the interaction, and watching my peers talk to another peer behind the mirror (acting as the AI), pushed me to deepen my understanding of interaction. Initially, I understood interaction as two parties bouncing ideas, theories, and movements off each other and acting on one another's decisions, but after the group project, I understood that interaction should also create a (1+1>2) effect. Interaction should create a more significant impact than simple addition; it should bring something new to the table. On that note, we decided not to recreate any of our previous group projects and instead to recreate a childhood memory we share. Our new project and concept differ from the other projects: they pay tribute not to our contemporary lifestyle or an imagined future but to a shared memory. Our group project is meant for everyone, and users (the targeted audience) get to relive what it meant to play the late-night hide-and-seek game with our parents in order to stay up late to watch television, play games, or do anything else we were not allowed to do.

How we envisioned our project

   

After concluding that our project should be centered on nostalgia, we quickly drafted different plans. We finally settled on the idea of mimicking a kid trying to stay up late for a variety of reasons. As kids, we always wanted to play an extra hour of video games, watch more cartoons, or play with our toys longer, but our parents always made us sleep early. Any kid in this situation would be frustrated by their parents checking on them at night; therefore, for this project, we wanted to implement automatic light sensors to help the kid stay up later. It is a simple idea that most of us did not have as kids, but we wanted to recreate those nervous memories. To use the automatic light sensors, we designed our project as a long hallway with an LED light between the two bedrooms. We also decided to use a Lego character to provide a three-dimensional feel. Because we chose the Lego character, we made a Lego handle (that the character stands on) that users slide to walk the mom to the kid's room. For that, we needed a thin material we could laser-cut into the handle's outline. After going through the available materials, we chose 3-millimeter wood panels so the laser-cutting process would be more efficient. The wood panels provided a more homely feeling than the acrylic panels while also being more time-efficient. On makercase.com, we decided to make an open box with finger-edge joints instead of flat or T-slot joints, giving us more flexibility in how we used the case after cutting. The finger-edge joints also let us combine the foundations of our house by interlocking the pieces.

Laser cutting finger edge joint boxes

The most physically demanding part of our midterm project was the fabrication and production process, as it required precise measurements and meticulous planning. During the user-testing session, although we had a working prototype, we had no fabrication yet, and many users pointed out that the vague directions and rough design detracted from their experience. Therefore, after the session, we immediately started planning a presentable project and created a list of objectives. First, our initial prototype was constructed from paper and plastic, and it was not a neat design. Hence, following the session, I consulted one of the teaching assistants and decided to use makercase.com to help create our house, as it was an easy way to work out the configuration. makercase.com gave us many fabrication options. After figuring out the dimensions of our house, I consulted my groupmates Roger and Julie about the configuration. We realized that to improve our project, we needed a complete design that could accomplish several objectives: first, a two-story house that stores the Arduino and breadboards on the first level with the bedrooms stacked on top; second, specific measurements that let our Lego figurine move freely. These ideas combined to help us create the newly constructed house.

Sketch of new design

 

Second, we had to address the manual-reset problem. During the testing session, many users were confused by our need to reset the project manually, and we were frustrated with ourselves for not thinking of it beforehand. Many users would try to move the mum to the kid's bedroom again before we could manually reset the Arduino. We needed a way to automatically reset the Arduino whenever the mum walks back to her bedroom, so the experience stays coherent. With the help of the teaching assistants, we decided to define different stages that signify different parts of the experience. The first stage is when the mum is in her bedroom, before the light in the hallway, and everything is calm and peaceful. The second stage is when the mum blocks the LED light; this triggers our code to turn all the lights off and make the kid fall back into his bed. The third stage is when the mum reaches the kid's door to check on him. The final stage is when the mum walks back past the LED light to return to her bedroom. Stage four triggers our code to return to stage one and automatically reset the whole project.

Updated model with automatic reset

In general, with this midterm project we tried to recreate a fraction of our childhood by recalling how we all tried to dodge our parents' supervision. Our project incorporates interaction: users not only react differently to the different stages, they also get an unexpected sense of nostalgia. The project aligns with my sense of interaction because, in addition to enjoying the experience, users come across the (1+1>2) effect once they recall acting the same way as children. During our group's short presentation, many people were fascinated by our idea and eager to test how we turned our lives into a real-life model. It was incredibly relieving to see our audience inquire about how we made the project and the intricate details behind it. It was also rewarding to see people resonate with our motivation, because we managed to make our users think back to their own childhood memories. On that note, if we had more time, I would create a losing scenario for the kid. In our current model, the kid can dodge the mother's checkup on every trial, but it would be more realistic if the kid could get caught, because we have all been caught before. During these two weeks, our group went through a series of highs and lows, and it was both rewarding and punishing to be there for all of it. I took away the importance of splitting tasks according to our different strengths: as a three-person group, we were able to divide and conquer, each taking care of different tasks efficiently, which expedited the process.
Most importantly, I also learned the importance of staying patient. During our fabrication session, the laser-cutting machine stopped working, and we let our emotions get the better of us. I started to panic and wonder whether we would be able to complete our project. However, with Leon's help, we got the machine working again and resumed fabrication. In short, our midterm group project brought us back to our childhood memories; only this time, it was a lot more demanding and challenging to create them. Although it might seem like just another model, I believe our users should care about this experience because it displays not only detailed planning but also noteworthy programming skills. Most importantly, this model provides an experience that coincides with everyone's early days.

Final Project

Recitation 6: Processing Basics (Justin Wu)

In this week's recitation, I found one of Damien Hirst's art pieces from his "Spot" series and used it as inspiration for an image created in Processing. Attached below is the original piece.

I chose Damien Hirst for many reasons. Despite being named one of the Young British Artists in the 1990s, he has been accused of plagiarism on several occasions. However, these rumors never damaged his career; in fact, he is now the richest living artist in the United Kingdom. I chose this painting, part of his Spot series, because it was actually executed by his assistants. I am intrigued whether creating a set of randomly colored circles can captivate people the way he did.

3 Methylthymidine by Damien Hirst (White)

Image result for damien hirst spot

My Interpretation of Damien Hirst’s “3 Methylthymidine”

The code used in Processing

I wanted to replicate the painting in Processing while changing the brightness of the colors and the canvas color. To begin, I chose a different canvas color, something that would bring more liveliness to the painting. In his original painting, Damien Hirst used a color similar to the whitesmoke RGB color; I went with beige instead, believing it would make the colored circles pop more. Next, I had to find the exact coordinates to properly space the circles. With the help of the teaching assistants on site, I found feasible coordinates. Once I had them, I picked the colors I wanted to implement in Processing, using a website to find the desired values. (https://www.rapidtables.com/web/color/white-color.html)

My final creation differs from the motif for obvious reasons. First, the dimensions of the two pieces are different: instead of a vertical rectangle, I used a horizontal rectangle to create a flatter visual experience. I also used a different canvas color to bring a different life to the piece. However, for the most part, the colors implemented in the two pieces are similar, though they may have different shades. Overall, drawing in Processing is a good way to realize my designs; I was able to run the drawing every time I added a new line of code, which gave me the flexibility to experiment.