Recitation 10: Workshop by Alexander Cleveland

Serial Communication

Although I was not present during this week's recitation, I chose to use the serial communication workshop with Young as a baseline for learning. I chose this workshop because I wanted to revisit past examples of my work to better understand how I can use serial communication in the future. Together with the workshop slides by Eszter and Jessica, I found both workshops very helpful after reading through their notes and slides online.

Example of Serial Communication Using the Map Function

Because there is no specific exercise for this workshop, I've chosen to showcase the mapping skills I've used throughout this class through the image-tint sketch I previously worked on. The code for the image tinting is as follows:

My Code

Arduino

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);           // potentiometer reading: 0-1023
  int a = map(sensor1, 0, 1023, 0, 255);  // scale it down to a single byte
  Serial.write(a);                        // send only the mapped byte to Processing
  // Serial.println(sensor1);             // debug: view the raw reading

  // too fast communication might cause some latency in Processing;
  // this delay resolves the issue.
  delay(10);
}

Processing

import processing.serial.*;

PImage img;
Serial myPort;
int valueFromArduino;

void setup() {
  size(600, 600);
  img = loadImage("HK.jpeg");
  imageMode(CENTER);
  background(255);

  printArray(Serial.list());
  // pick the index of your Arduino's port from the printed list
  myPort = new Serial(this, Serial.list()[16], 9600);
}

void draw() {
  if (img.width > 0) {
    image(img, 300, 300, width, height);

    // drain the buffer and keep only the most recent byte from Arduino
    while (myPort.available() > 0) {
      valueFromArduino = myPort.read();
    }
    println(valueFromArduino); // this prints out the values from Arduino
  }

  if (valueFromArduino > 150) {
    tint(0, 0, 255, 150);
    image(img, 250, 0);
  } else {
    tint(0, 255, 0);
  }
}

Video example

Through this code, I created a mapping value in the Arduino sketch with "int a = map(sensor1, 0, 1023, 0, 255)", correlating "a" with the value of sensor1 (the potentiometer). The second and third arguments of map() are the lowest and highest possible values the potentiometer can produce (0 and 1023), and the fourth and fifth arguments are my lowest and highest target values (0 and 255). Passing sensor1 as the first argument ties all of these ranges to the live reading. Through serial communication, the mapped value was transferred to Processing. The corresponding Processing code is "if (valueFromArduino > 150) { tint(0, 0, 255, 150); image(img, 250, 0); }", with an else branch that applies "tint(0, 255, 0)". The 0-255 range matches the target values I created in my Arduino map, and by printing "println(valueFromArduino)" it was possible to confirm that the two programs were working together.
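The scaling that map() performs is just linear interpolation between two ranges. Below is a minimal plain-Java sketch of the same formula Arduino's integer map() uses; MapDemo and mapRange are illustrative names of my own, not part of Arduino or Processing.

```java
// Re-implementation of Arduino's integer map() for illustration only.
public class MapDemo {
    static long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
        // linear interpolation with integer division, as Arduino does it
        return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
    }

    public static void main(String[] args) {
        System.out.println(mapRange(0, 0, 1023, 0, 255));    // lowest reading -> 0
        System.out.println(mapRange(512, 0, 1023, 0, 255));  // midpoint -> 127
        System.out.println(mapRange(1023, 0, 1023, 0, 255)); // highest reading -> 255
    }
}
```

So a potentiometer turned halfway produces roughly 127, which lands in the else branch of the Processing sketch, while readings past about 60% of the dial cross the 150 threshold and trigger the blue tint.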

Recitation 10—Tao Wen

The idea is that when the user clicks on each character in the picture, a picture featuring that character is shown, like fortune-telling. Given enough time, I would also make a flashing effect.

PImage image1, image2, image3;
float x = 400;
float y = 300;
float r = 110;
int state = 0;

void setup() {
  size(800, 600);
  image1 = loadImage("fox.jpg");
  image2 = loadImage("dad1.jpg");
  image3 = loadImage("dad2.jpg");
}

void draw() {
  background(0);
  imageMode(CENTER);
  image(image1, width/2, height/2);
  fill(235, 174, 52);
  ellipse(pmouseX, pmouseY, 30, 30);
  noStroke();
  fill(0, 0, 0, 0); // fully transparent circle marking the clickable area
  ellipse(x, y, r, r);
  // squared-distance test: was the click inside the character's circle?
  if (mousePressed && ((mouseX-x)*(mouseX-x) + (mouseY-y)*(mouseY-y) <= r*r)) {
    state = 1;
  }
  if (state == 1) {
    flash();
  }
  delay(30);
}

void flash() {
  imageMode(CENTER);
  image(image2, width/2, height/3, 400, 300);
}
  
  

Recitation10 Object Oriented Programming by Hangkai Qian

To be honest, I didn't really understand OOP when I attended the lecture; even though I spent a long time on the Ball example from class, I still couldn't figure it out, so I attended the OOP workshop. There I learned that a class has variables, a constructor, and functions. I then used them in the Processing part of my final game: a helicopter flies up, and pillars act as obstacles. I had great difficulty building these blocks because I couldn't come up with a way to generate infinite blocks. Finally, referring to the example from the lecture, I worked it out using an ArrayList, adding a new item every second and using a for loop with i < p.size() to draw and move the different pillars.

Here is my code:

ArrayList<pillar> p = new ArrayList<pillar>();
// the helicopter object h and the boolean end belong to the helicopter
// part of the sketch, which is omitted here (see the P.S. below)

void setup() {
  size(350, 800);
}

void draw() {
  int m = millis();
  stroke(40);

  h.drawheli();
  h.moveX();

  for (int i = 0; i < p.size(); i++) {
    p.get(i).drawPillar();
    p.get(i).movePillar();

    // game over when the helicopter overlaps a pillar at the same height
    if (p.get(i).yPos == h.y) {
      if (h.x > p.get(i).xPos && h.x < p.get(i).xPos + p.get(i).xh) {
        end = true;
      } else {
        end = false;
      }
    }
  }

  // add a new pillar roughly once per second
  if (m % 1000 < 15) {
    p.add(new pillar());
  }
}

class pillar {
  float xPos = 0;
  float yPos = 0;
  float xh; // length of the pillar

  pillar() {
    xh = round(random(400));
    yPos = 0;
    xPos = random(1400/3.7);
  }

  void drawPillar() {
    stroke(252, 0, 0);
    strokeWeight(15);
    line(xPos, yPos, xPos + xh, yPos);
  }

  void movePillar() {
    yPos = yPos + 10;
  }
}
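One caveat about the spawn condition: "m % 1000 < 15" only fires if a draw() call happens to land inside that 15 ms window each second, so depending on frame timing a spawn can occasionally be skipped or doubled. A frame-rate-independent alternative is to remember when the last pillar was spawned. Below is a plain-Java sketch of that idea; Spawner, update, and the other names are illustrative, not from the original code.

```java
import java.util.ArrayList;
import java.util.List;

// Timer-based spawning: add an item whenever a full interval has elapsed,
// regardless of how often update() is called.
public class Spawner {
    long lastSpawn = 0;
    final long intervalMs = 1000;
    final List<Integer> pillars = new ArrayList<>();

    void update(long nowMs) {
        if (nowMs - lastSpawn >= intervalMs) {
            pillars.add(pillars.size()); // stand-in for p.add(new pillar())
            lastSpawn = nowMs;
        }
    }

    public static void main(String[] args) {
        Spawner s = new Spawner();
        // simulate 3.5 seconds of draw() calls arriving every 16 ms
        for (long t = 0; t <= 3500; t += 16) {
            s.update(t);
        }
        System.out.println(s.pillars.size()); // 3 spawns: t = 1008, 2016, 3024
    }
}
```

In the Processing sketch this would mean keeping a "lastSpawn" variable and comparing it against millis() in draw() instead of using the modulo test.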

Here is the video.

P.S. The above is only the code for the pillars; in my video, however, I included not only the pillars but the helicopter as well.

Recitation 9: Media Controller by Min Jee (Lisa) Moon

For recitation 9, we were able to extend our learning of the communication between Arduino and Processing. Because we were free to choose what we wanted to do with media, I decided to build the camera function that would be needed for my project.

In order to make a camera inside a phone screen, I took a screenshot of the phone's camera screen and covered up the non-camera part of the sketch with that image.

Camera section

Whenever I pressed the button connected to the Arduino (it would have been cooler if the image could be taken with the on-screen camera button, but we needed to use an Arduino component), Processing would take a screenshot of the sketch and save it to a specific location.

However, there was a shortcoming.

example shot

As you can see above, because Processing takes a screenshot of the entire sketch window, the camera-app frame shows up in the saved file as well, which is different from the usual images inside a phone's gallery.

Below is the code.

Processing:

// IMA NYU Shanghai
// Interaction Lab
// This code receives one value from Arduino to Processing 
import processing.serial.*;
import processing.video.*; 

int photoNum = 0;

Serial myPort;
int valueFromArduino;

PImage camera;
Capture cam;

void setup() {
  size(335, 690);
  background(0);

  printArray(Serial.list());
  // this prints out the list of all available serial ports on your computer.

  myPort = new Serial(this, Serial.list()[5], 9600);
  // If you get an error here, check the printed list of ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----",
  // and replace the index above (5) with that port's index number.
  camera = loadImage("images/camera.png");
  setCamera();
}


void draw() {
  // to read the value from the Arduino
  while ( myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }
  showCamera();
  //println(valueFromArduino);
  if(valueFromArduino == 1){
    save("gallery/"+str(photoNum)+".png");
    photoNum++;
    //println("pressed");
  }
}

void setCamera() { 
  cam = new Capture(this, 640, 480);
  cam.start(); 
} 

void showCamera() { 
  if (cam.available()) { 
   cam.read(); 
  }   
  noStroke();
  for (int i=0; i<500; i++) {
    int size = int( random(10, 30) );
    int x = int( random(width) );
    int y = int( random(80, 480) );
    // get the pixel color
    color c = cam.get(x, y);
    // draw a circle with the color
    fill(c);
    ellipse(width-x, y+40, size, size);
 }
 image(camera, 0, 0);
}

Arduino:

// IMA NYU Shanghai
// Interaction Lab
// This code sends one value from Arduino to Processing

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = digitalRead(A0); // button state: 0 or 1
  Serial.write(sensorValue);

  // too fast communication might cause some latency in Processing;
  // this delay resolves the issue.
  delay(10);
}

Below is the demo video.
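One refinement worth noting about the saving logic: because draw() calls save() on every frame while valueFromArduino is 1, holding the button produces many near-identical files. Saving only on the rising edge (the transition from 0 to 1) gives exactly one photo per press. Below is a plain-Java sketch of that idea; EdgeTrigger and onSample are illustrative names of mine, not from the original code.

```java
// Rising-edge detection: act only when the sampled value changes 0 -> 1,
// not on every sample where it happens to be 1.
public class EdgeTrigger {
    int prev = 0;
    int photoNum = 0;

    // returns the file name to save, or null when no new press occurred
    String onSample(int value) {
        String name = null;
        if (value == 1 && prev == 0) { // rising edge
            name = "gallery/" + photoNum + ".png";
            photoNum++;
        }
        prev = value;
        return name;
    }

    public static void main(String[] args) {
        EdgeTrigger t = new EdgeTrigger();
        int[] samples = {0, 1, 1, 1, 0, 1}; // held for three frames, then pressed again
        for (int v : samples) {
            String n = t.onSample(v);
            if (n != null) System.out.println(n);
        }
        // prints gallery/0.png and gallery/1.png: two presses, two photos
    }
}
```

In the Processing sketch this would mean keeping a "previous value" variable next to valueFromArduino and saving only when the old value was 0 and the new one is 1.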

While I was doing the recitation workshop and working on my final project, reading Computer Vision for Artists and Designers inspired and motivated me. I grew up in a country with a very serious bullying problem: approximately 63% of people have experienced bullying (including cyber-bullying), and 19% have attempted suicide because of it. Reading through this week's reading, I found two pieces really interesting.

The first was Suicide Box by the Bureau of Inverse Technology. As previously mentioned, the country where I was born and raised has the highest suicide rate among OECD countries. Though it is a little sad that this piece counts people jumping off the bridge, merely tallying numbers as each person loses his or her life, I think that knowing the exact statistics may help others change the environment and reduce the number of suicide attempts.

The other was the stills from Cheese, an installation by Christian Möller. This was especially interesting because I was thinking of projecting the user's face onto a trash bin, giving the user the feeling of being viewed as an (emotional) trash bin. Cheese analyzes the face currently showing on the screen and can tell whether the person is smiling. If I could analyze a person's face the same way, I would be able to portray only the person's face on the trash-bin-looking figure. This piece therefore made me a little sad and motivated at the same time.