The way technology is used in my project is a kind of negotiation with the physical conditions of the world, although, because the inputs are potentiometers whose output is already legible to the computer, the signal does not need much further processing (apart from the "map" step) to become easily legible to vision algorithms. The two inputs are used to control the playback speed and the color tone (tint) of the video clip being played. I tend to view my project as a very basic use of computer vision algorithms. On the input side, the project is so direct that the response feels merely reactive, whereas motion-detection devices or other sensors might create richer interaction. On the output side, since the video already exists and the input only changes its speed and color tone, the piece limits creativity and weakens the interactive experience.
Code for Arduino
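The Arduino sketch itself is missing from the post. Judging from the Processing side (two values, comma-separated, newline-terminated, 9600 baud), it was presumably along these lines; the pin assignments A0/A1 and the 50 ms delay are assumptions, not from the original:

```cpp
// Hypothetical Arduino sketch reconstructed from the serial protocol that
// the Processing code expects. Pin choices A0/A1 are assumptions.
void setup() {
  Serial.begin(9600);               // must match the 9600 in setupSerial()
}

void loop() {
  int tintValue  = analogRead(A0);  // 0-1023, drives tint() in Processing
  int speedValue = analogRead(A1);  // 0-1023, mapped to playback speed
  Serial.print(tintValue);
  Serial.print(",");
  Serial.println(speedValue);       // println appends '\n' (ASCII 10)
  delay(50);                        // modest send rate keeps the serial stream readable
}
```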
Code for Processing
import processing.serial.*;
import processing.video.*;

String myString = null;
Serial myPort;
int NUM_OF_VALUES = 2;   // two potentiometer values sent from Arduino
int[] sensorValues;

Movie myMovie;

void setup() {
  size(360, 640);
  myMovie = new Movie(this, "Lilly.mp4");
  myMovie.loop();
  setupSerial();
}

void draw() {
  if (myMovie.available()) {
    myMovie.read();
  }
  updateSerial();
  printArray(sensorValues);
  // First potentiometer controls the color tone of the frame
  tint(sensorValues[0]/5, sensorValues[0]/1.5, sensorValues[0]/2);
  image(myMovie, 0, 0);
  // Second potentiometer controls playback speed, from 0.5x to 5x
  float newSpeed = map(sensorValues[1], 0, 1023, 0.5, 5);
  myMovie.speed(newSpeed);
}

void setupSerial() {
  printArray(Serial.list());
  // Index 5 is machine-specific; pick the port your Arduino appears on
  myPort = new Serial(this, Serial.list()[5], 9600);
  myPort.clear();
  // Discard the first (possibly partial) line of serial input
  myString = myPort.readStringUntil(10);
  myString = null;
  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10);  // 10 = '\n', linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}