Recitation 9: Media Controller by Kat Van Sligtenhorst

For this recitation, I wanted to use two potentiometers to change the size and location of an image in Processing. Here is my code for Arduino:

void setup() {
  Serial.begin(9600);
}

void loop() {
  // read both potentiometers (0-1023)
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  // send the two values as one comma-separated line
  Serial.print(sensor1);
  Serial.print(",");
  Serial.print(sensor2);
  Serial.println();

  delay(100);
}

And here is my code for Processing:

import processing.serial.*;

String myString = null;
Serial myPort;
PImage img;
int NUM_OF_VALUES = 2;   // number of values sent from the Arduino
int[] sensorValues;

void setup() {
  size(800, 800);
  img = loadImage("hongkong.jpg");
  setupSerial();
}

void draw() {
  background(0);
  updateSerial();
  printArray(sensorValues);

  // first potentiometer moves the image horizontally, second scales it
  float a = map(sensorValues[0], 0, 1023, 0, 800);
  float b = map(sensorValues[1], 0, 1023, 400, 800);
  image(img, a, 200, b, b);
}

void setupSerial() {
  printArray(Serial.list());
  // myPort = new Serial(this, Serial.list()[0], 9600);  // or open the port by its index in the printed list
  myPort = new Serial(this, "/dev/tty.usbmodem1411", 9600);  // baud rate must match Serial.begin() on the Arduino
  myPort.clear();
  // throw away the first (possibly partial) line
  myString = myPort.readStringUntil(10);  // 10 = ASCII linefeed '\n'
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10);  // read one full line
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}
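To show what the two map() calls in draw() are doing without the Arduino plugged in, here is a small stand-in sketch of my own: mouseX and mouseY take the place of the two potentiometer readings, so the image slides horizontally with one value and grows with the other, just like it does with the sensors. This is only a testing sketch, and it assumes the same hongkong.jpg file is in the sketch's data folder.

// Stand-in test: mouseX/mouseY replace the two potentiometer values
PImage img;

void setup() {
  size(800, 800);
  img = loadImage("hongkong.jpg");  // same image as the main sketch
}

void draw() {
  background(0);
  // pretend the mouse position is the pair of sensor readings (0-1023)
  float fakeSensor1 = map(mouseX, 0, width, 0, 1023);
  float fakeSensor2 = map(mouseY, 0, height, 0, 1023);

  // the same mappings used in the real sketch
  float a = map(fakeSensor1, 0, 1023, 0, 800);    // horizontal position
  float b = map(fakeSensor2, 0, 1023, 400, 800);  // image size
  image(img, a, 200, b, b);
}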

While my final project is focused more on challenging users mentally than on engaging them in physical motion and interaction, I still found Computer Vision for Artists and Designers to be a really interesting read. Christian Möller’s project, “Cheese,” was the most useful to me: if I were to incorporate the computer vision ideas from this text into my own project, it would be a great way to gauge users’ emotions as they went through the survey. He focused on smiles, but if his “emotion recognition system” could also detect unease or discomfort, that would be an excellent addition to my project. The section on motion detection also gave me something to consider, since I could use this strategy to activate the live video feed whenever a user enters the voting booth. It’s fascinating that the reading mentions, “Techniques exist which can create real-time reports about people’s identities, locations, gestural movements, facial expressions, gait characteristics, gaze directions, and other characteristics,” all of which tie into China’s surveillance state. If I wanted to do a project that expands the critique of the Chinese government beyond the message of self-censorship, it would be really interesting to give users the experience of all of the above, particularly for audiences in other countries that do not deal with such heavy surveillance in their day-to-day lives.
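To make that motion-detection idea concrete, here is a minimal sketch of the frame-differencing approach the reading describes, assuming Processing’s Video library and a webcam. The threshold value and the boothActive flag are placeholders I would tune for the actual installation, not code from this recitation.

// Minimal frame-differencing sketch: turn on the live feed when motion is detected
import processing.video.*;

Capture cam;
PImage prevFrame;                 // previous frame, kept for comparison
float motionThreshold = 4000000;  // tune this by watching the printed motion values
boolean boothActive = false;      // would switch the voting-booth feed on

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  prevFrame = createImage(width, height, RGB);
}

void draw() {
  if (cam.available()) {
    // remember the last frame before reading the new one
    prevFrame.copy(cam, 0, 0, cam.width, cam.height, 0, 0, cam.width, cam.height);
    cam.read();
  }

  cam.loadPixels();
  prevFrame.loadPixels();

  // sum the per-pixel brightness differences between the two frames
  float totalMotion = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    totalMotion += abs(brightness(cam.pixels[i]) - brightness(prevFrame.pixels[i]));
  }

  // enough change between frames = someone has entered the booth
  boothActive = totalMotion > motionThreshold;
  println(totalMotion, boothActive);

  if (boothActive) {
    image(cam, 0, 0);   // show the live feed only while motion is detected
  } else {
    background(0);      // otherwise keep the screen dark
  }
}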

(I was having trouble uploading the screen recordings, so I will go back and add those later.)

Credits:

Levin, G. “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers.” Journal of Artificial Intelligence and Society, Vol. 20.4. Springer Verlag, 2006.
