Recitation 9: Media Controller - Andrew Xie

In this recitation, I chose to use a photoresistor to control the brightness of the picture, but the effect was not very noticeable. The pictures are of Hobbits.


Code

PImage img;

void setup() {
  size(600, 404);
  img = loadImage("hobbit.jpg");
  // note: tint() only affects image() calls, so it has no visible
  // effect on the ellipses drawn in draw() below
  tint(0, 0, 255, 150);
}

void draw() {
  for (int i = 0; i < 100; i++) {
    int size = int(random(1, 20));
    int x = int(random(img.width));
    int y = int(random(img.height));
    // get the pixel color at (x, y)
    color c = img.get(x, y);
    // draw a circle with that color
    fill(c);
    ellipse(x, y, size, size);
  }
}

void mousePressed() {
  point(mouseX, mouseY);
}

int photocellPin = 2; // photoresistor connected to analog pin A2
int ledPin = 13;      // LED connected to digital pin D13
int val = 0;          // variable to store the photoresistor reading

void setup() {
  // open and configure the serial port
  Serial.begin(9600);
  // set digital pin ledPin as an output
  pinMode(ledPin, OUTPUT);
}

void loop() {
  val = analogRead(photocellPin); // read the photoresistor value
  // print val to the serial port, useful when debugging
  Serial.println(val);
  if (val <= 112) {
    digitalWrite(ledPin, HIGH);
  } else {
    digitalWrite(ledPin, LOW);
  }
}

The reading uses light and shadow to realize human-computer interaction, which inspired me to think about how to use multimedia in my project, for example using sound as a medium to trigger interaction between the user and the machine.

Recitation 10: Workshops

For this recitation, I chose the object oriented programming workshop. We learned about objects, classes, and arrays, especially ArrayList.

For the exercise, I made an animation based on the code we wrote during class and created a class for the Spider-Man symbol. In the background, many of these symbols move and bounce around. For interactivity, every time I click the mouse, one more white symbol is added to the screen, and a pink one is added every time a key is pressed. I also used the map() function to make sure that new white symbols only start near the center of the screen.

ArrayList<Spider> sList;

void setup() {
  size(1600, 800);
  sList = new ArrayList<Spider>();
  for (int i = 0; i < 100; i++) {
    sList.add(new Spider(random(width), random(height),
      color(random(100, 255), random(0, 50), random(0, 85)), color(0)));
  }
}

void draw() {
  background(0);
  for (int i = 0; i < sList.size(); i++) {
    Spider temp = sList.get(i);
    temp.display();
    temp.move();
  }
}

void mousePressed() {
  float xx = map(mouseX, 0, width, width/4, width/2);
  float yy = map(mouseY, 0, height, height/4, height/2);
  sList.add(new Spider(xx, yy, color(255), color(0)));
}

void keyPressed() {
  float x = random(width);
  float y = random(height);
  sList.add(new Spider(x, y, color(255), #EA219A));
}
class Spider {
  float x, y;
  float size;
  color clr;
  float spdX;
  float spdY;
  color str;

  Spider(float startingX, float startingY, color startingColor, color startingStr) {
    x = startingX;
    y = startingY;
    size = random(50, 100);
    clr = startingColor;
    str = startingStr;
    spdX = random(0, 6);
    spdY = random(0, 10);
  }

  void display() {
    fill(clr);
    noStroke();
    ellipse(x, y, size, size);
    stroke(str);
    strokeWeight(size/17);
    fill(255);
    arc(x - size/5, y - size/6, size/3, size/1.5, QUARTER_PI, PI, CHORD);
    arc(x + size/5, y - size/6, size/3, size/1.5, 0, QUARTER_PI + HALF_PI, CHORD);
  }

  void move() {
    x += spdX;
    y += spdY;
    // reverse direction when a spider reaches the edge of the canvas
    if (x >= width || y >= height) {
      spdX = -spdX;
      spdY = -spdY;
    }
    if (x <= 0 || y <= 0) {
      spdX = -spdX;
      spdY = -spdY;
    }
  }
}

Recitation 10: Object Oriented Programming Workshop by Leah Bian

For this recitation, we first had a quick workshop about the map() function. I reviewed when to use this function and what its syntax is. After this workshop, we were asked to choose a workshop to attend. The choices included media manipulation, serial communication, and object oriented programming. In our final project, we will focus on how to send data between Arduino and Processing, and how to draw and control the image of the marionette in Processing. Therefore, my partner attended the workshop about serial communication, and I chose the workshop about object oriented programming.
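As a refresher, map(value, start1, stop1, start2, stop2) linearly re-maps a value from one range to another. Here is a minimal plain-Java sketch of the same arithmetic; the class and variable names are my own, chosen for illustration, not part of any recitation code:

```java
public class MapDemo {
    // The linear re-mapping that Processing's map() performs: the value's
    // relative position within [start1, stop1] is scaled into [start2, stop2].
    static float map(float value, float start1, float stop1, float start2, float stop2) {
        return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
    }

    public static void main(String[] args) {
        // mouseX = 400 on an 800-pixel-wide canvas, limited to the middle band
        System.out.println(map(400, 0, 800, 800 / 6f, 800 - 800 / 6f));
        // a 10-bit sensor reading (0..1023) scaled to a 0..255 color value
        System.out.println(map(512, 0, 1023, 0, 255));
    }
}
```

Note that map() does not clamp: a value outside [start1, stop1] maps outside [start2, stop2] as well.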

In the workshop, Tristan gave us a detailed explanation of what "object" means and what its parts are, including class and instance. We then went over the process of writing object oriented code from the bigger parts (class, instance) to the smaller ones (variables, constructor, functions). Using emoji faces as an example, we started to work on the code together. Based on the code that we wrote during recitation, I started to create my own animation as an exercise.

We needed to use classes and objects, and the animation had to include some level of interactivity. As requirements, we needed to use the map() function and an ArrayList. I decided to use the mousePressed function as the way of interaction: the shapes are created at the mouse's position. After finding a basic star-drawing function online, I modified the code to meet the requirements. Drawing a star calls the vertex() function ten times. I let the stars fall to the bottom of the screen by setting yspeed to random(3,7). I wrote a bounce function so that if the stars hit the left or right boundary, they turn to the opposite direction. Finally, I used the map() function to limit the area where the stars can be created.

This is my Processing code (2 parts):

ArrayList<Stars> stars = new ArrayList<Stars>();

void setup() {
  size(800, 600);
  noStroke();
}

void draw() {
  background(130,80,180);
  for (int i=0; i<stars.size(); i++) {
    Stars s = stars.get(i); 
    s.move();
    s.bounce();
    s.display();
  }
  float x=map(mouseX,0,width,width/6,width-width/6);
  float y=map(mouseY,0,height,height/6,height-height/6);
  if (mousePressed==true) {
    stars.add( new Stars(x,y));
  }
}

class Stars {
  float x, y, size;
  color clr;
  float xspeed, yspeed;

  Stars(float tempX, float tempY) {
    x = tempX;
    y = tempY;
    size = random(10, 100);
    clr = color(255, random(180,255), random(50,255));
    xspeed = random(-3, 3);
    yspeed = random(3, 7);
  }

  void display() {
    fill(clr);
    beginShape();
    vertex(x, y);
    vertex(x+14, y+30);
    vertex(x+47, y+35);
    vertex(x+23, y+57);
    vertex(x+29, y+90);
    vertex(x, y+75);
    vertex(x-29, y+90);
    vertex(x-23, y+57);
    vertex(x-47, y+35);
    vertex(x-14, y+30);
    endShape(CLOSE);
  }

  void move() {
    x += xspeed;
    y += yspeed;
  }

  void bounce() {
    if (x < 0) {
      xspeed = -xspeed;
    } else if (x > width) {
      xspeed = -xspeed;
    }
  }
}

Interaction Lab Documentation 9 - Kurt Xu

Documentation

In this section, we dig deeper into Arduino's capability to manipulate moving pictures, whether from existing files, a webcam, or websites, through Processing.

For the whole project, the key idea is the communication between Processing and Arduino. Since the transmission is from Arduino to Processing, we assign values in Arduino and transfer the variables to Processing as follows:

1. Start the serial library:

import processing.serial.*;

Serial myPort;

void setup() {
  myPort = new Serial(this, Serial.list()[ PORT_INDEX ], 9600);
}
The PORT_INDEX depends on the port that the Arduino occupies, which can be identified with this function:

printArray(Serial.list());

In my sketch I used three potentiometers to separately control the speed, the shade, and the location of the video (if any key is pressed).

The main problem I faced is that the potentiometers are not accurate enough to change the speed of the video, so I divided the reading by 100, and through the map() function I fixed the ranges of the location and the shade to (0, 138) and (0, 255) respectively.
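To make that scaling concrete, here is a small plain-Java sketch of the arithmetic as I read it from the description above; the class name and the sample reading are mine, not the original code. The raw 0..1023 potentiometer value is divided down for speed, and re-mapped with map()'s linear formula for location and shade:

```java
public class PotScaling {
    // the linear formula behind Processing's map()
    static float map(float value, float start1, float stop1, float start2, float stop2) {
        return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
    }

    public static void main(String[] args) {
        int raw = 1023;                             // example analogRead() result (0..1023)
        float speed = raw / 100.0f;                 // coarse reading divided down to 0..10.23
        float location = map(raw, 0, 1023, 0, 138); // fixed to the range (0, 138)
        float shade = map(raw, 0, 1023, 0, 255);    // fixed to the range (0, 255)
        System.out.println(speed + " " + location + " " + shade);
    }
}
```

Dividing by 100 coarsens the speed control on purpose, so small potentiometer jitter no longer produces visible speed changes.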

Reflection:

In recent years, art creation involving technology has become more and more popular. Artists are trying to expand the ways they can express themselves through their works, and computers are expanding how they recognize their operators, from bare keyboard typing to more multidimensional input such as sound, motion, and even video itself. Computer vision is a term that captures what I mentioned above, and it is widely used "to track people's activities" (VI, 9). I am deeply interested in projects that stimulate the development of interaction between human and computer, which should be a trend in the coming decades.

As for the project we did in the recitation, it is actually semi-computer vision, since the computer translates our manipulation and then manipulates the video accordingly. To improve it, we could expand the ways of input, making them less intentional, and endow the computer with more autonomy, that is, allow it to process more on its own.

Work Cited:

Levin, Golan. "Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers." AI & Society, vol. 20, no. 4, 2006, pp. 462-482.


Recitation 9: Media Controller by Eric Shen

In this recitation we were asked to create a Processing sketch that controls media by manipulating that media's attributes, using a physical controller made with Arduino. I chose to use a physical controller to manipulate a video shown in our class. Two potentiometers are used in my Arduino circuit: one controls the position of the video and the other controls its speed.

Code: 

The Arduino part is the same as the example:

// IMA NYU Shanghai
// Interaction Lab
// For sending multiple values from Arduino to Processing

const int buttonPin = 8;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  // keep this format
  Serial.print(sensor1);
  Serial.print(","); // put comma between sensor values
  Serial.print(sensor2);
  Serial.println(); // add linefeed after sending the last sensor value

  // too fast communication might cause some latency in Processing
  // this delay resolves the issue.
  delay(100);
}

Processing: 

// IMA NYU Shanghai
// Interaction Lab
// For receiving multiple values from Arduino to Processing

/*
 * Based on the readStringUntil() example by Tom Igoe
 * https://processing.org/reference/libraries/serial/Serial_readStringUntil_.html
 */

import processing.serial.*;

String myString = null;
Serial myPort;


int NUM_OF_VALUES = 2;   /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;      /** this array stores values from Arduino **/


import processing.video.*;
Movie myMovie;

void setup() {
  size(800, 800);
  background(0);
  myMovie = new Movie(this, "dancing.mp4");
  myMovie.loop();
  setupSerial();
}

void movieEvent(Movie movie) {
  movie.read();
}

void draw() {
  background(0);
  updateSerial();
  // sensorValues[1] moves the video, sensorValues[0] changes its speed
  float c = map(sensorValues[1], 0, 1023, 0, 800);
  image(myMovie, c, c);
  float newSpeed = map(sensorValues[0], 0, 1023, 0.1, 5);
  myMovie.speed(newSpeed);
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[13], 9600);
  // The index (13 here) must match your Arduino's port:
  // check the printed list for "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----"
  // and replace the index number with that port's index.

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}



void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n'  Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Writing reflection:

A variety of computer vision techniques are introduced in the reading, which inspired me with ways of using technology in my project. In the article, Myron Krueger, the creator of Videoplace, states that technology should be used as a supportive tool, which resonates with my understanding of a great interactive project. As for my project, I use technology to improve the users' experience of and interaction with the project, and to help the audience identify with its theme. One particular project in the reading that really impressed me is the interactive software of Messa di Voce, which can visualize the characteristics of the voice. Therefore, I think technology can be used to connect different senses together, and this notion is used in my final project too.

Reference

Levin, Golan. "Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers." AI & Society, vol. 20, no. 4, 2006, pp. 462-482.