Recitation 9: Media Controller – Jiayi Liang (Mary)

In this week’s recitation, I was asked to work individually to create a Processing sketch that controls media using a physical controller made with Arduino. I chose to use potentiometers to control an image.

My Processing Code:

import processing.serial.*;

PImage img1;
String myString = null;
Serial myPort;

int NUM_OF_VALUES = 2; /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues; /** this array stores values from Arduino **/

void setup() {
  size(800, 800);
  background(0);
  setupSerial();
  img1 = loadImage("angel.png");
  imageMode(CENTER);
}

void draw() {
  updateSerial();
  printArray(sensorValues);
  background(255);
  // set the tint before drawing so the transparency actually applies to the image
  tint(255, sensorValues[0] / 3);
  image(img1, 400, 400, sensorValues[0], sensorValues[0]);
  filter(BLUR, sensorValues[1] / 100.0); // divide by 100.0 to get a float blur radius
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[3], 9600);

  myPort.clear();
  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10); // 10 = '\n' (linefeed) in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

I use the first potentiometer to change the size and the transparency, and the second potentiometer to change the amount of blur.

Since I have already practiced using Arduino to control Processing, this week’s recitation task was quite simple. All I needed to do was use PImage to load an image and use tint(), filter(BLUR), etc. to edit it. If I had more time, I would try to load more pictures and make the characters seem to interact with each other by changing their positions and sizes, as sketched below.
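As a rough illustration of that idea, here is a minimal sketch, using the mouse as a stand-in for the two potentiometer values and a hypothetical second image “devil.png”, showing how two characters could slide toward each other and grow with the sensor readings:

PImage imgA, imgB;

void setup() {
  size(800, 800);
  imageMode(CENTER);
  imgA = loadImage("angel.png"); // the image from the recitation
  imgB = loadImage("devil.png"); // hypothetical second character
}

void draw() {
  background(255);
  // mouseX/mouseY stand in for sensorValues[0] and sensorValues[1] (0 to 1023)
  float v0 = map(mouseX, 0, width, 0, 1023);
  float v1 = map(mouseY, 0, height, 0, 1023);
  // one potentiometer slides the characters toward each other...
  float offset = map(v0, 0, 1023, width / 2, 0);
  // ...the other scales them both
  float s = map(v1, 0, 1023, 50, 400);
  image(imgA, width / 2 - offset, height / 2, s, s);
  image(imgB, width / 2 + offset, height / 2, s, s);
}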

Reflection:

After reading Computer Vision for Artists and Designers, I got a lot of inspiration. The article introduces various types of computer vision techniques. The project mentioned in the article that interests me most is Messa di Voce’s interactive software, which visualizes sound: when a user speaks, the sound he or she makes is transformed into an image. This reminds me of a writing device I learned in high school, synaesthesia, which associates different senses with each other: I can describe a song as blue to show that it is sorrowful, or call a girl’s smile sweet to show that she is cute. In my own project, I can use the same kind of device, engaging people’s different senses to enrich the interaction process.

Recitation 9: Media Controller – Ariana Alvarez

For this week’s recitation, we were assigned to manipulate media in Processing through Arduino. I decided to explore live webcam video in Processing and manipulate the tint() of the live image with the help of a potentiometer and an infrared distance sensor on the Arduino.

For my first attempt at the code, I was able to change the opacity of the tint applied to the live video through the potentiometer. However, since I wasn’t redrawing the background in Processing, each semi-transparent frame was layered on top of the previous one, which led to an even more interesting blurred-trail effect that gave any user on the screen a kind of ghostly look.
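The ghostly look comes from never clearing the frame: each semi-transparent frame only partially covers the last one. A minimal sketch of just that effect (webcam only, no Arduino) might look like this:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  // no background() call: each frame is layered over the last,
  // so anything that moves leaves ghostly trails
  if (cam.available()) {
    cam.read();
  }
  tint(255, 40); // low alpha; new frames only partially cover old ones
  image(cam, 0, 0);
}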

The second manipulation was inspired by the reading “Computer Vision for Artists and Designers”, which mentions how algorithms and computing media have been used to detect motion, especially “the movements of people”. I therefore developed a code that changed the tint of the image from blue to red depending on how close an individual stood to the infrared distance sensor on the Arduino.
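A sketch of that blue-to-red idea, with mouseX standing in for the mapped distance reading, could use lerpColor() to blend smoothly between the two tints instead of the hard threshold in the commented-out code below:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  // mouseX stands in for the infrared distance reading (0 to 255 after mapping)
  float v = map(mouseX, 0, width, 0, 1);
  // blend smoothly from blue (far) to red (near)
  color c = lerpColor(color(0, 153, 204), color(255, 0, 0), v);
  tint(c);
  image(cam, 0, 0);
}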

For the code of both iterations of media manipulation, I intended to use the multiple-values example so that both Arduino sensors would be connected to Processing at once. I first got it working with the one-value example, but when I changed it to multiple values the two sensors did not work efficiently while running simultaneously, so I switched back to the one-value code for the time provided. I am attaching the code I used for both Arduino and Processing below.

I was inspired by the way technology was used in my project, especially in the sense that I felt as if I had created an object that could help enhance security systems in stores. It is similar to the idea of the game system LimboTime, which was developed for participants to try to pass below an imaginary line; if an individual crosses above the line, the game rings an alarm. Likewise, in my second media manipulation, if a person moved past the allowed distance, the colors in the webcam image started to change.

Code from Arduino

void setup() {
  Serial.begin(9600);
}

void loop() {
  int pin1 = analogRead(A0);
  int sensorValue = map(pin1, 0, 1023, 0, 255);
  Serial.write(sensorValue);

  // too fast communication might cause some latency in Processing
  // this delay resolves the issue.
  delay(10);
}

Code from Processing

import processing.serial.*;
import processing.video.*;

Serial myPort;
int valueFromArduino;

Capture cam;

void setup() { 
  size(1280, 480); 
  cam = new Capture(this, 640, 480);
  cam.start(); 

  printArray(Serial.list());
  // this prints out the list of all available serial ports on your computer.

  myPort = new Serial(this, Serial.list()[4], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----" 
  // and replace PORT_INDEX above with the index number of the port.
} 

void draw() { 
  //background(255);
  if (cam.available()) { 
    cam.read(); 
  } 

  // read the value from the Arduino first, so the tint below uses it this frame
  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }

  int x = valueFromArduino;
  tint(0, 153, 204, x); // tint must be set before image() to take effect
  scale(-1, 1);
  image(cam, -640, 0);  // mirrored copy on the left half
  scale(-1, 1);
  image(cam, 640, 0);   // unmirrored copy on the right half

  //  if (valueFromArduino < 100) {
  //    tint(0, 153, 204);
  //  } else { 
  //    tint(255, 0, 0);
  //  }

  println(valueFromArduino); // this prints out the values from Arduino
}

*Side note: I have been trying to include images of the circuit and pictures of the media manipulation, but the site is not allowing me to do so (it says there is an error with the images). I’ll try again tomorrow, and if it is still not possible I’ll send them by e-mail directly to Rudi.*

Recitation 9 by Jackson Pruitt

Moving image with a varied tint.

Processing:

import processing.serial.*;
import processing.video.*;
Movie myMovie;

Serial myPort;
int valueFromArduino;

void setup() {
  size(500, 500);
  background(0);

  myMovie = new Movie(this, "dancing.mp4");
  myMovie.play();

  printArray(Serial.list());
  // this prints out the list of all available serial ports on your computer.

  myPort = new Serial(this, Serial.list()[12], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----"
  // and replace PORT_INDEX above with the index number of the port.
}

void draw() {
  // to read the value from the Arduino
  while ( myPort.available() > 0) {
    valueFromArduino = myPort.read();
    // fill (250);
    //ellipse(200,200,valueFromArduino,valueFromArduino);
  }

  if (myMovie.available()) {
    myMovie.read();
  }
  tint(valueFromArduino, 0, 0);
  image(myMovie, 0, 0);

  println(valueFromArduino);//This prints out the values from Arduino
}

Arduino:

// IMA NYU Shanghai
// Interaction Lab
// This code sends one value from Arduino to Processing

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(A0) / 4;
  Serial.write(sensorValue);

  // too fast communication might cause some latency in Processing
  // this delay resolves the issue.
  delay(10);
}

Reflection:

In the reading Computer Vision for Artists and Designers, there is a quote that reads, “Processing is one such environment, which, through an abundance of graphical capabilities, is extremely well-suited to the electronic arts and visual design communities. Used worldwide by students, artists, designers, architects, and researchers for learning, prototyping, and production, Processing obtains live video through a QuickTime-based interface, and allows for fast manipulations of pixel buffers with a Java-based scripting language.” Levin, Reas, and Fry are articulating the potential Processing has in the visual media arts, which is exactly what we took part in during the recitation exercise. Although I was only able to complete the task using one sensor to manipulate one aspect of the video, I feel that this could be further utilized in more advanced software technology, such as automating color grading in film or photography.

Recitation 9 by Hangkai Qian

In my recitation, I planned to use two potentiometers to control an image in Processing. The first potentiometer controls the size of the image, and the second controls the degree of blur.

At first, after I read the PowerPoint in the folder, I used the resize() function to change the size of the image; its syntax is photo.resize(w, h). However, a problem appeared: when I resized the picture several times, it blurred a little. I learned that resize() permanently overwrites the image’s pixels, so repeated resizing degrades the picture. Therefore, I switched to another approach that does not change the image itself:

image(photo, 0, 0, a, a);
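To make the contrast concrete, here is a small sketch (the filename is a placeholder, and the mouse stands in for the potentiometer) showing why draw-time scaling never degrades the source, while resize() would:

PImage photo;

void setup() {
  size(500, 500);
  photo = loadImage("photo.jpeg"); // placeholder filename
}

void draw() {
  background(0);
  float a = map(mouseX, 0, width, 10, 500);
  // image() scales only this draw call; photo's pixels stay untouched,
  // so rescaling every frame never blurs the source
  image(photo, 0, 0, a, a);
  // by contrast, calling photo.resize(int(a), int(a)) here would
  // overwrite photo's pixels each frame and degrade it cumulatively
}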

Second, when I first used the code that transfers data from Arduino to Processing, I got an ArrayIndexOutOfBoundsException. At first, I thought it was because the numbers from Arduino exceeded the range Processing could handle. However, when I asked Rudi, he said the parameter of the blur filter simply couldn’t be that big. So I used

float a = map(sensorValues[1], 0, 1024, 0, 40);

to make the number from Arduino smaller.
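As a quick check of what that line does: map() rescales linearly, so a sensor reading halfway through 0 to 1024 lands halfway through 0 to 40. This worked example runs on its own in Processing (static mode):

// map(value, inLow, inHigh, outLow, outHigh) rescales linearly
float a = map(512, 0, 1024, 0, 40);
println(a); // prints 20.0
// without the mapping, filter(BLUR, 1023) would ask for a huge
// blur radius, far too heavy to compute every frame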

Here is my arduino code.

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  // keep this format
  Serial.print(sensor1);
  Serial.print(","); // put comma between sensor values
  Serial.print(sensor2);
  Serial.println(); // add linefeed after sending the last sensor value

  // too fast communication might cause some latency in Processing
  // this delay resolves the issue.
  delay(100);
}

Here is my Processing code:

import processing.serial.*;

String myString = null;
Serial myPort;
PImage photo;

int NUM_OF_VALUES = 2; /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues; /** this array stores values from Arduino **/

void setup() {
  size(500, 500);
  background(0);
  photo = loadImage("12deeaa706628e537518aa533d6c8658.jpeg");
  setupSerial();
}

void draw() {
  background(0);
  updateSerial();
  printArray(sensorValues);

  image(photo, 0, 0, sensorValues[0], sensorValues[0]);
  float a = map(sensorValues[1], 0, 1024, 0, 40);
  println("senmap!", a);
  filter(BLUR, a);
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[4], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----"
  // and replace PORT_INDEX above with the index number of the port.

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil(10); // 10 = '\n' (linefeed) in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10); // 10 = '\n' (linefeed) in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Here is my circuit:

[image: circuit]

Here is my video:

[video: r9]

Reading

In this week’s reading, the author discusses the development of computer image analysis. It shows that computer vision can be used for much more than military and law-enforcement purposes. One of the most impressive projects based on this technology is called “Standards and Double Standards”: it recognizes the people in the room and rotates belts so that they always face the audience, with the belts standing in for people facing the viewer.

What makes me think is that the game I want to make could use this technique to track the movement of the player, so that their body movement moves the stick on the screen, though I’m not sure I can manage to understand it.
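One simple technique from the reading that might be a starting point is brightest-point tracking: if the player holds a small flashlight, the sketch can follow it and move the on-screen stick accordingly. This is only a sketch of the idea (webcam only, no game logic):

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
  // scan every pixel for the brightest one; with a flashlight in the
  // player's hand this gives a crude but workable position tracker
  cam.loadPixels();
  float maxB = -1;
  int bx = 0, by = 0;
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      float b = brightness(cam.pixels[y * cam.width + x]);
      if (b > maxB) {
        maxB = b;
        bx = x;
        by = y;
      }
    }
  }
  // the "stick" follows the tracked point
  fill(255, 0, 0);
  ellipse(bx, by, 20, 20);
}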

Source: Computer Vision for Artists and Designers

Final Project Essay: Chloe Wang

Project Title: Truth about Truth

My project is called “Truth about Truth”. It is an interactive art installation that aims to share with the audience my thoughts on how a fact is conceived by us, and how difficult it is to be “right”, or to know the whole truth, when we receive information from the outlets of our choosing. The project will show an image that is blurred or blocked except for one clear spot that appears when someone actively interacts with it. An image will stay on the screen for 15 seconds, and the person interacting with it then has to choose a storyline of what they think is in the image. I want to show that our individualized decision-making process is influenced by our different cultural backgrounds, which define what truth is for us.

During my preparatory research, I found Daniel Rozin’s “Mirror” installations most inspiring. His installations made of various objects mirror the face or shadow of the person in front of them. This way, when there are no observers, his installations have no meaning; only the presence of an observer gives the art meaning. I wanted to make an installation that responds to the viewer’s movement. When someone walks close to the frame, parts of the image become clear, and the person’s movement changes the focused area on the canvas. The canvas is black in the beginning; when someone walks into the frame, a small shape appears and shows part of the photo. As the viewer moves a magnifier or a flashlight, the shape moves and reveals other parts of the image. With this effect, the viewer can navigate the image and complete it in their mind. They need to figure out what they think is in the image and decide in the end. There are no right or wrong answers, but in the end the gathered data will be shown as a result.
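As a minimal sketch of this reveal effect (the filename is hypothetical, and the mouse stands in for the tracked flashlight or magnifier), Processing’s image mask can hide everything except a movable spot:

PImage photo;
PGraphics maskLayer;

void setup() {
  size(800, 600);
  photo = loadImage("truth.jpg"); // hypothetical filename
  photo.resize(width, height);    // fit the canvas once, up front
  maskLayer = createGraphics(width, height);
}

void draw() {
  background(0);
  // paint a white circle where the viewer "shines the flashlight";
  // the mouse stands in for a tracked magnifier or light
  maskLayer.beginDraw();
  maskLayer.background(0);
  maskLayer.noStroke();
  maskLayer.fill(255);
  maskLayer.ellipse(mouseX, mouseY, 150, 150);
  maskLayer.endDraw();
  // white areas of the mask reveal the photo; black stays hidden
  PImage revealed = photo.copy();
  revealed.mask(maskLayer);
  image(revealed, 0, 0);
}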

General visualization of my project: [sketch: Final Project idea]

The Hong Kong protests happening this year have pushed me to think about the relationship between media and our beliefs. It seems as if everyone is taking a side on this issue; the news on Weibo and the news on Twitter display completely opposite sides of the violence in Hong Kong. As someone sandwiched between a Chinese identity and Western ideologies, I find it quite difficult to choose one extreme on this spectrum. Yet I am not completely unbiased.

For this project, I was inspired by the ideas of “selective exposure” and the “third-person effect”. Selective exposure is “individuals’ tendency to isolate themselves by selecting only attitude-consistent news” (Knobloch-Westerwick and Johnson, 2014). It is based on Festinger’s cognitive dissonance theory, which holds that “people have an inner need to ensure that their beliefs and behaviors are consistent. Inconsistent or conflicting beliefs lead to disharmony, which people strive to avoid” (Cherry, 2019). The third-person effect means that “each individual reasons: I will not be influenced, but they (the third persons) may well be persuaded” (Davison, 1983). I hope the completed project can reflect on these theories and remind us that maybe it is better if we can accept that the other side of the rhetoric exists. Instead of rejecting or judging it, we can think about why this rhetoric exists and why opinions on it are divided.

Recently I watched a documentary called “Of Fathers and Sons”. It shows the story of a radical Islamist family, a group of people we would not read about in the news we access. It is scary to see how the children were trained to become terrorists, and their numbness to violence; on the other hand, the documentary shows the love between family members. Watching it will not change our perception of terrorism, but at least it gives us insight into why they exist. In general, my goal is not to justify or judge any actions. Rather, I want to emphasize that although we know things are not all black and white, we tend to believe whatever aligns with our existing beliefs, which then keeps reinforcing those beliefs.

Cherry, K. (2019) Cognitive Dissonance and Ways to Resolve It, Verywell Mind. Available at: https://www.verywellmind.com/what-is-cognitive-dissonance-2795012 (Accessed: 24 November 2019).

Davison, W. P. (1983) ‘The Third-Person Effect in Communication’, Public Opinion Quarterly, 47(1), pp. 1–15. doi: 10.1086/268763.

Knobloch-Westerwick, S. and Johnson, B. K. (2014) ‘Selective Exposure for Better or Worse: Its Mediating Role for Online News’ Impact on Political Participation’, Journal of Computer-Mediated Communication, 19(2), pp. 184–196. doi: 10.1111/jcc4.12036.

Wired (2019) ‘This Artist Makes Kinetic “Mirrors” That Echo Your Movements’. Available at: https://www.wired.com/story/daniel-rozin-mechanical-mirrors/ (Accessed: 24 November 2019).

Derki, T. (2018) Of Fathers and Sons – Official Trailer. Available at: https://www.youtube.com/watch?v=Zd0bRdYb8AI&feature=youtu.be (Accessed: 26 November 2019).
 
WIRED (no date) How This Guy Makes Amazing Mechanical Mirrors | Obsessed | WIRED. Available at: https://www.youtube.com/watch?v=kV8v2GKC8WA&feature=youtu.be (Accessed: 26 November 2019).