Recitation 10: Media Controller
Instructor: Marcela
This was by far one of the more difficult recitations I've had. I wanted to model my sketch after the Hokusai wave exercise we did in class, where randomly generated bubbles detected the pixel color of the area beneath them. I envisioned an image manipulation in which the bubbles would change size based on the potentiometer, combining the Hokusai exercise with the ellipse exercise that changed size and location based on the potentiometer. I had a bit of trouble starting off, as I mistakenly used the code that imports multiple Arduino values into Processing, when I only needed the code for importing one value from Arduino to Processing. This confused me for a while because I wasn't sure how to work around all the additional code that was unnecessary. After receiving guidance from Leon, I understood how I had been making it more complicated than it needed to be.

We then explored how to make the bubbles cover the entire image rather than just a specific area. Using map(), we mapped the value received from the Arduino's potentiometer so that the bubbles grew and shrank evenly across the whole canvas. Leon also explained the importance of the println() function: it prints the incoming values to the console and gives us immediate feedback on whether the potentiometer is actually working. At one point my potentiometer was broken (but we didn't know it), and we were confused about what was wrong with the code. After adding println(), we were able to figure out that the physical component was broken, not the code.
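The two things Leon helped me with, the map() call and the println() check, come down to just a few lines. Below is a minimal sketch isolating that pattern on its own, assuming the Arduino is on serial port index 5 (the same index as in my full sketch further down) and sends one byte at a time:

import processing.serial.*;

Serial myPort;
int valueFromArduino;

void setup() {
  size(500, 500);
  myPort = new Serial(this, Serial.list()[5], 9600);
}

void draw() {
  background(0);
  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();  // keep only the most recent byte
  }
  println(valueFromArduino);  // confirms the potentiometer is actually sending data
  // Spread the 0-255 byte range evenly over a 0-50 diameter.
  int mappedVal = int(map(valueFromArduino, 0, 255, 0, 50));
  ellipse(width / 2, height / 2, mappedVal, mappedVal);
}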
Reflection:
Computer vision is heavily integrated into most of our daily lives. Image manipulation is easily accessible and can be driven by motion and object tracking. In the reading, the author explains how "a rudimentary scheme for object tracking, ideal for tracking the location of a single illuminated point (such as a flashlight), finds the location of the single brightest pixel in every fresh frame of video." This kind of methodology helps explain how filters and image manipulation, particularly face filters, alter or add elements to what the camera sees. With my project, if we imagine that a tracked point such as a person's face took the place of the potentiometer, its position could control the size of the bubbles that manipulate a live camera feed.
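To make the quote concrete, here is a minimal sketch of that brightest-pixel scheme, assuming the Processing video library is installed and a webcam is attached; it is only an illustration of the reading, not part of my recitation code:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);

  // Scan every pixel of the frame and remember the brightest one.
  cam.loadPixels();
  float maxBrightness = -1;
  int brightestX = 0;
  int brightestY = 0;
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      color c = cam.pixels[y * cam.width + x];
      if (brightness(c) > maxBrightness) {
        maxBrightness = brightness(c);
        brightestX = x;
        brightestY = y;
      }
    }
  }

  // Mark the tracked point (e.g. a flashlight) with a circle.
  noFill();
  stroke(255, 0, 0);
  ellipse(brightestX, brightestY, 30, 30);
}

In the same spirit, the tracked position (or a detected face location) could replace the potentiometer value in my sketch and drive the bubble size over a live camera feed.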
Processing:
import processing.serial.*;

PImage cow;
Serial myPort;
int valueFromArduino;
int size = 10; // left over from an earlier version; not used below

void setup() {
  size(500, 500);
  background(0);
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[5], 9600); // index 5 is my Arduino's port
  cow = loadImage("IMG_0292.jpg");
  cow.resize(width, height);
  noStroke();
}

void draw() {
  // Keep only the most recent byte sent from the Arduino.
  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }
  println(valueFromArduino); // debugging: confirms the potentiometer is sending values

  // Map the incoming byte (0-255) to a bubble diameter (0-50).
  int mappedVal = int(map(valueFromArduino, 0, 255, 0, 50));

  image(cow, 0, 0);

  // Cover the whole image with a grid of bubbles, each colored by the
  // pixel underneath it and sized by the potentiometer.
  for (int y = 0; y <= height; y = y + 50) {
    for (int x = 0; x <= width; x = x + 50) {
      color c = get(x, y);
      fill(c);
      ellipse(x, y, mappedVal, mappedVal);
    }
  }
}
Arduino:
void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0); // analogRead returns 0-1023
  Serial.write(sensor1 / 4);    // scale down to 0-255 so it fits in the single byte Serial.write sends
  delay(100);
}