Final Project – Step 4: Final Blog Post – Lillie Yao

CREATIVE MOTION – LILLIE YAO – INMI

CONCEPTION AND DESIGN:

We wanted to create an interactive project where users would see reflections of themselves through a light matrix, instead of on something obvious like a camera feed or a mirror. Since we wanted to make it very obvious what users were doing, we decided to put the matrix (our output) right in front of the camera (our input). In the beginning we were going to place them side by side, but we realized that would pull attention away from the matrix, since people tend to look at their own reflection more than at the light, no matter how obvious the light may be.

During our brainstorming period, we researched different light boards, since we knew we wanted a surface of light displays instead of single LED lights. We also thought that programming and wiring 64 or more individual LEDs would be very complicated. We ended up using the Rainbowduino and an 8×8 LED Matrix Super Bright to create the light board, with the Rainbowduino also serving as our Arduino/breadboard. Although we researched a number of different light sources, the only one available to us was the Rainbowduino with the LED matrix. I'm sure there would have been better options, especially because we hoped to have a bigger LED board.
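
As a rough illustration of what driving this hardware looks like, here is a minimal wiring-test sketch, assuming Seeed's Rainbowduino v3.0 library (the same library our full code below uses); it just sweeps random colors across the 8×8 grid:

#include <Rainbowduino.h>

void setup() {
  Rb.init();
}

void loop() {
  // sweep a random color across the 8x8 matrix, one pixel at a time
  for (unsigned char y = 0; y < 8; y++) {
    for (unsigned char x = 0; x < 8; x++) {
      Rb.setPixelXY(x, y, random(0xFFFFFF));
      delay(30);
    }
  }
  Rb.blankDisplay(); // clear and start over
}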

FABRICATION AND PRODUCTION:

During user testing we got feedback and suggestions from our classmates about what they thought would make our project better. A lot of classmates wished the LED display were bigger, so that the interaction would feel more like an experience rather than just a small board in front of their eyes. In an attempt to change that, we wanted to tile multiple LED boards together to make a bigger screen. Soon after attempting it, we realized that you can't easily chain multiple of these LED boards and make them work together; each board operates independently. As stated in our conception and design, we researched several other types of LED boards, but the materials better suited for our project were not available to us in the short time after user testing.

After realizing we still had to use a single LED matrix board, we decided to design our fabrication so that it would magnify the LED board. We made a polygonal shape out of clear acrylic, with a box at one end that the LED matrix fits snugly into. We chose clear acrylic for laser cutting because we thought a transparent look would suit our design better than any other material. In my mind, I pictured the LED light reflecting off the different surfaces, and it would appear more interesting if the product were see-through. We really didn't think there was a better option, because the other laser-cutting materials were too dull and 3D printing wouldn't have worked for a hollow design. After fabrication, a fellow (whose name I unfortunately forgot) gave us the idea to put the matrix INSIDE our polygon so that the light would reflect within it. This truly changed our project, because we could now use our fabrication design in a different and better way than before.

Sketching for fabrication:

Laser cutting:

Original fabrication:

 

Changed fabrication so the light would shine through:

Another suggestion from user testing was that users wished the LED board faced them instead of facing up, because the board was hard to see when it was not pointed at the user. We therefore made the fabrication piece a polygon, so that it would be easy to angle it to the side to face the user.

Lastly, we got a great suggestion to implement sound to make the project more interesting: rather than just seeing light, users would also trigger different sounds as they move. After getting this feedback, we coded different sounds into our project that are triggered when you move in different places. This really changed the project, because we got to use both sound and light to create art, which in my opinion made the project more well-rounded.

Sketches for LED board and pixels on camera:

After presenting our final project, we got feedback that some of the sounds were too much, and that it would be better to use all musical instruments instead of animal noises, snapshot sounds, and so on. Since we both really wanted to present our product at the IMA show, we changed the sounds to all instrument sounds before the show, so that it would be lighter on the ears and less confusing for users. I think this helped our project a lot: many people really loved it at the IMA show, and even Chancellor Yu got to interact with it!

Chancellor Yu interacting with our project!!!!:

CONCLUSIONS:

The goal of my project was to create something that users could interact with and have fun with at the same time: something with a direct input/output relationship that users can simply play with. As a creator, it felt really cool to build something people can both interact with and enjoy.

My product aligns with my original definition of interaction because it has both an input and an output, and it keeps running whether or not there is input. The camera input keeps detecting changes in motion whether or not someone is moving. At the same time, my definition stated that interaction is a continuous loop from input to output: if there is an input, there will definitely be an output. In my project, any change in motion changes the lights on the matrix and triggers a sound at the same time.

The audience's response was pretty close to what I expected. The one thing my partner and I didn't really think about was that once users see their own reflection, they tend to focus on that instead of the changing lights. I often found myself having to explain the project; if users did figure it out on their own, it took them a while. Other than that, audience reactions matched my expectations.

User Reactions:

If I had more time to improve my project, I would definitely reconsider the "experience" aspect I wanted to implement. During our final presentation, Eric said that if we really wanted to make it an experience, we needed to factor in a lot of different things. To make it more of an experience, I would place speakers around the piece to amplify the sound, project the camera input onto a bigger screen, and make the LED board bigger.

From the setbacks and failures of my project, I learned that there's always room for improvement, even if you think there isn't enough time. I learned that there will always be projects and parts of other people's work that are better than yours, but you should never compare other people's capabilities to your own. After taking this class and seeing all of the work I have done, I am very happy with my accomplishments. I would never have thought during our brainstorming period that this project would come to life, but I'm really glad we made it work! I'm really glad we were able to create a fun and interactive work of art where users see themselves and make art with light as well as music and sound!

Arduino/Rainbowduino Code:

#include <Rainbowduino.h>
#include <ctype.h>

char valueFromProcessing;

void setup() {
  Rb.init();
  Serial.begin(9600);
}

// Letters 'A'..'P' map to the sixteen 2x2 blocks of the 8x8 matrix,
// mirrored so that 'D' is the top-left block and 'A' the top-right,
// down to 'P' at the bottom-left and 'M' at the bottom-right.
// An uppercase letter fills the block with a random color;
// the matching lowercase letter turns it off.
void loop() {
  while (Serial.available()) {
    valueFromProcessing = Serial.read();
    char u = toupper(valueFromProcessing);
    if (u >= 'A' && u <= 'P') {
      unsigned char x = 6 - 2 * ((u - 'A') % 4);
      unsigned char y = 2 * ((u - 'A') / 4);
      if (isupper(valueFromProcessing)) {
        Rb.fillRectangle(x, y, 2, 2, random(0xFFFFFF));
      } else {
        Rb.fillRectangle(x, y, 2, 2, 0x000000);
      }
    }
  }
}

Processing Code:

import processing.video.*;
import processing.serial.*;
import processing.sound.*;

Serial myPort;
Capture cam;
PImage prev;
boolean p[];
int circleSize = 10;

// Region letters as laid out on the camera image, left to right, top to bottom.
// Sending the lowercase letter tells the Arduino to turn that block off.
char[] letters = {
  'D', 'C', 'B', 'A',
  'H', 'G', 'F', 'E',
  'L', 'K', 'J', 'I',
  'P', 'O', 'N', 'M' };
SoundFile[] sounds = new SoundFile[16]; // null entries stay silent

void setup() {
  size(800, 600);
  cam = new Capture(this, 800, 600);
  cam.start();
  prev = cam.get();
  p = new boolean[width * height];

  myPort = new Serial(this, Serial.list()[2], 9600);

  sounds[0] = new SoundFile(this, "guitar.wav"); // D
  sounds[1] = new SoundFile(this, "g4.wav");     // C
  sounds[2] = new SoundFile(this, "f4.wav");     // B
  sounds[3] = sounds[0];                         // A reuses the guitar sample
  sounds[4] = new SoundFile(this, "a4.wav");     // H
  sounds[7] = new SoundFile(this, "e4.wav");     // E
  sounds[8] = new SoundFile(this, "b4.wav");     // L
  sounds[11] = new SoundFile(this, "d4.wav");    // I
  sounds[12] = new SoundFile(this, "c5.wav");    // P
  sounds[15] = new SoundFile(this, "c4.wav");    // M
}

// A grid point counts as "moving" when it and all of its in-bounds neighbors
// one grid step away changed since the previous frame.
boolean moved(int x, int y) {
  for (int dy = -circleSize; dy <= circleSize; dy += circleSize) {
    for (int dx = -circleSize; dx <= circleSize; dx += circleSize) {
      int nx = x + dx;
      int ny = y + dy;
      if (nx < 0 || ny < 0 || nx >= cam.width || ny >= cam.height) continue;
      if (!p[nx + ny * cam.width]) return false;
    }
  }
  return true;
}

void draw() {
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
  }
  // Mirror the camera image so it reads like a reflection.
  translate(cam.width, 0);
  scale(-1, 1);
  image(cam, 0, 0);

  int w = cam.width;
  int h = cam.height;

  // Mark every grid point whose pixel changed since the last frame.
  for (int y = 0; y < h; y += circleSize) {
    for (int x = 0; x < w; x += circleSize) {
      int i = x + y * w;
      p[i] = (cam.pixels[i] != prev.pixels[i]);
    }
  }

  // Pixelated overlay: moving cells keep the camera color, still cells go black.
  for (int y = circleSize; y < h - circleSize; y += circleSize) {
    for (int x = circleSize; x < w - circleSize; x += circleSize) {
      if (moved(x, y)) {
        fill(cam.pixels[x + y * w]);
      } else {
        fill(0);
      }
      rect(x, y, circleSize, circleSize);
    }
  }

  // Count motion inside each of the sixteen 200 x 150 px regions. Enough
  // motion lights the matching 2x2 LED block (uppercase letter) and plays
  // the region's sound; otherwise the block is switched off (lowercase).
  for (int r = 0; r < 16; r++) {
    int x0 = (r % 4) * 200;
    int y0 = (r / 4) * 150;
    int count = 0;
    for (int y = y0 + circleSize; y < y0 + 150; y += circleSize) {
      for (int x = x0 + circleSize; x < x0 + 200; x += circleSize) {
        if (moved(x, y)) {
          count++;
        }
      }
    }
    println(letters[r] + ": " + count);
    if (count > 100) {
      myPort.write(letters[r]);
      if (sounds[r] != null && !sounds[r].isPlaying()) {
        sounds[r].play();
      }
    } else {
      myPort.write(Character.toLowerCase(letters[r]));
    }
  }

  prev = cam.get();
}



Space Piglet Off Balance – Feifan Li – Marcela 

CONCEPTION AND DESIGN:

When my partner and I were making design decisions, we first wanted our project to be a piece with a deep meaning. Influenced by my peers' ideas, which all seemed very "profound," we also wanted to make a statement piece, an experience that would educate people in the end. We proposed ideas like a game that would educate people about the unnecessary nature of social media and various other apps, but we struggled to create an engaging experience that would show our purpose. After consulting with Professor Marcela, my partner and I realized that the more important thing about our project should be the experience itself. If we could create a new type of experience that is really interactive and engaging, we did not have to attach some grand purpose to the project. Inspired by Marcela, we decided to focus on new forms of interaction, that is, new experiences. Cathy suggested having the user stand on a balance board, and it turned out to be a great idea for our game, going beyond the usual keyboard experience. Our idea was to engage the user's entire body instead of merely part of it, like the fingers, and the balance board is a great tool for that. Furthermore, the balance board is originally gym equipment, which adds another layer of meaning to our game: have fun while training your balance! To suit the swinging nature of the board, we decided to use an accelerometer to detect the user's movement. We wanted a game that would require the user to constantly swing the board while keeping their balance, which is a fun way of interacting.

FABRICATION AND PRODUCTION:

In the production process, we first worked on the digital fabrication. At first my partner and I were thinking about 3D printing the balance board ourselves, but after consulting with Andy we realized the materials we had might not be strong enough to sustain a user's weight. So we purchased a balance board and decided to laser-cut a small box to put on it. We hid the Arduino and the sensor inside the box. The box protects the electronics from the user's feet, which proved very important later in user testing.

After the digital fabrication, we focused on testing the sensor and the code for the game. We were completely unfamiliar with accelerometers, but thanks to Tristan and Marcela's help, we found a guide online and kept trying to get the sensor right. At first we were unable to detect any changes in the x and y values no matter what we tried. We later discovered that one pin of the sensor was not working. After swapping in a new sensor and following the guide, we got the sensor working properly.
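
As a sketch of what the Arduino side of that boils down to (assuming an ADXL335-style analog accelerometer with its X and Y outputs on A0 and A1; the pin choices are illustrative, not our exact wiring), the board just streams tilt readings to Processing over serial:

// Streams x/y tilt readings from an analog accelerometer to Processing.
const int xPin = A0; // illustrative wiring
const int yPin = A1;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int x = analogRead(xPin); // roughly 0-1023, centered near 512 when level
  int y = analogRead(yPin);
  Serial.print(x);
  Serial.print(',');
  Serial.println(y);
  delay(50);
}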

Then we focused on designing the game. We wanted to use the round shape of the board in the game too, so at first we were thinking of a life-saving float at sea. We also made the main figure a piglet from Winnie the Pooh. Later, Marcela reminded us that we needed to create a scenario for our game to better engage the user. So we created a space scenario in which the balance board is a futuristic space vehicle and the piglet, wearing a space helmet, is exploring space. Where the piglet goes is controlled by the user standing on the board: whichever direction the user leans, the piglet moves the same way. The moving board on screen is the safe zone that the asteroids cannot hit. The piglet would love to have some fun outside the board, but it cannot stay outside for too long because it needs to breathe: after 5 seconds outside the board it dies of suffocation, and if it gets hit by a moving asteroid it dies instantly. To create the atmosphere of this scenario, we decided the background should be space. But inserting an image directly made the Processing sketch run too slowly. We tried to fix the problem, but Tristan told us there is no easy solution. So I made a relatively simple but nice black background with shiny yellow stars scattered across it, which lets our game run smoothly. Thanks to my partner Cathy, we managed to get the code for the bouncing asteroids right. Although the sketch looks simple, it has the basic elements of a space game.
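
One way to get that kind of lightweight background is to draw the stars once into an off-screen buffer and reuse it every frame instead of loading a large image; the following Processing sketch illustrates the idea (a simplified sketch, not our exact code):

PGraphics stars;

void setup() {
  size(800, 600);
  stars = createGraphics(width, height);
  stars.beginDraw();
  stars.background(0); // black space
  stars.noStroke();
  stars.fill(255, 255, 0); // shiny yellow stars
  for (int i = 0; i < 150; i++) {
    stars.ellipse(random(width), random(height), random(1, 4), random(1, 4));
  }
  stars.endDraw();
}

void draw() {
  image(stars, 0, 0); // cheap per-frame background
  // ...game drawing goes here...
}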

The user-testing session was very helpful. Users liked our idea of the moving board, and we collected a lot of feedback for modification: the traveling piglet should move faster to make the game more engaging, the moving board could shrink as time goes by, the board's movement boundary could be expanded, and the wire placement should be switched to make it easier to stand on the board. About the game concept, users also suggested creating a scenario with more elements, like bonus scores when the piglet "reaches" certain stars, and displaying the time limit for the piglet staying outside the board. Marcela also suggested adding sound to the game for our presentation, which added a lot. We took almost all of this advice and modified our game based on it; the changes made it more complete and engaging. One user also suggested making the fabricated box smaller and moving the Arduino onto the table instead of the board, but we did not follow that, because time was limited and we thought it was more important to focus on the game itself.

CONCLUSIONS:

The goal of my project is to create a new game experience that involves the movement of the user's entire body. By standing on the balance board to control the piglet, the user tries to stay balanced while engaging with the game to keep the piglet alive as long as possible. The project aligns with my definition of interaction in that the user must constantly react to the changing locations of the board and asteroids to adjust the piglet's position. What stands out in our project is that the extended experience is really novel: users have fun interacting with the game while wobbling on the board. Ultimately, at the final show, many users tested our game and liked the balance-board idea. Some offered suggestions for improving details of the game setting and code, but most found standing on the board very entertaining. One area we can keep working on is that the board's movement becomes predictable once the user gets familiar with the game. We could randomize the board's movement or speed it up, dividing the game into several rounds of increasing speed, so that it becomes more exciting and challenging.

In the production process, I came to realize the importance of exploring on our own. For example, the accelerometer, which I had no idea how to use, could actually be mastered by self-learning online. A project can be complicated and what we learn in class may be insufficient, but we can always explore how things work ourselves and seek help. The ability to self-learn is important in creating something of our own.

From the successes and failures of the final project, I also realized the importance of user "experience." For both the project conception and the game design, we need to create a scenario that serves the experience of the project. It is the experience that matters most to the user, and creating new forms of interaction really brings new experiences and new feelings. What we can further explore is how to inspire such new experiences, whether by involving different parts of the human body, as we did in our project, or by other means. The combination of the human body and new technology is very interesting, and it speaks to a new human need in the context of today's technology: people constantly want new forms of interaction that extend their conventional experiences.

Final Project Reflection: Snack Facts by Eleanor Wade

Snack Facts – Eleanor Wade – Marcela Godoy

CONCEPTION AND DESIGN:

When considering how my users were going to interact with this project, I kept in mind my research on the consumption of animals and animal products, as well as the typical experience of a grocery store. To recreate the feeling of the vast array of options consumers are presented with at a supermarket, I used a color scanner and foods with colored tags, giving users the experience of checking out at a typical store. After making their decisions and selecting products from the shelves (meats with red tags, animal products with blue tags, and plant-based foods with green tags), users scan them to see an overwhelming assortment of pictures and quick facts about the process from industrialized factory farm to table, and the differing environmental impacts of each choice. The color sensor was critical to my design and conception of this experience because it is not only a hands-on, interesting action but also one clearly linked to the feeling of checking out at a grocery store. It is my hope that many users will associate this feeling of blindly making decisions with the pictures that appear on the screen. While the shelves were made of cardboard, I also included many collected plastic packages of the kind commonly used in grocery stores. This helped to further explore the question of how we process and package our foods for convenience without fully understanding the consequences of these choices. Other materials I used, such as real foods (a carton of milk, jam, bread, sausages, cookies), were an effort to make the experience slightly more realistic. Additionally, the few edible foods I provided were very beneficial in completing the experience, adding the interactive elements of taste and smell. These materials, particularly the real, edible foods, were central to the interactive aspect of my project: in addition to using the color sensor, being presented with both plant-based and animal-based products further made "customers" question the choices they make every day. By associating a specific taste with the exposed realities of our food systems, this project used layers of interactivity to educate people about the environmental impacts of their food choices.

FABRICATION AND PRODUCTION:

The most significant steps in my production process started with building on my previous research about animal products and talking with Marcela about the best ways to create an interactive and educational experience involving food. After deciding to use the color sensor, I drew on my work from a previous recitation with this sensor to work through the Arduino-to-Processing communication. Marcela was exceptionally helpful with the coding of both sides and with extending the project by adding a collage of photos from my research. I definitely struggled with how to translate the specific numerical values associated with each color and how to connect them to groups of photos. User testing proved very beneficial: I was able to engage with users as they experienced my project and receive feedback, such as problems with the clarity of the text (I later changed this to pictures only, rather than facts) and the speed of the shifting pictures. Users/"customers" also commented on the action of selecting individual products to scan, and on the role the edible foods played in the overall interactivity of the project. Because of this, I made an effort to select real foods pertinent to the decisions we make at every meal. In terms of design, using sample-sized foods also echoed the free samples commonly found at grocery stores. While the many changes I made after user testing were effective, I think it would have been even better to clarify the images I used and to fix the distortion; even after many different alterations, that remained especially difficult.
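
To give a sense of that translation step, here is a simplified Processing sketch of the idea, not the project's exact code; it assumes the Arduino sends one "r,g,b" line per scan, and the image file names are illustrative. Each scan is classified by its dominant color channel, which picks a photo from the matching group:

import processing.serial.*;

Serial myPort;
PImage[] meatPics, dairyPics, plantPics;
PImage current;

void setup() {
  size(800, 600);
  myPort = new Serial(this, Serial.list()[0], 9600);
  myPort.bufferUntil('\n');
  // illustrative file names; each group holds the photos for one tag color
  meatPics = new PImage[] { loadImage("meat1.jpg"), loadImage("meat2.jpg") };
  dairyPics = new PImage[] { loadImage("dairy1.jpg"), loadImage("dairy2.jpg") };
  plantPics = new PImage[] { loadImage("plant1.jpg"), loadImage("plant2.jpg") };
}

void serialEvent(Serial port) {
  String line = port.readStringUntil('\n');
  if (line == null) return;
  int[] rgb = int(split(trim(line), ','));
  if (rgb.length != 3) return;
  // the dominant channel decides which group of photos to draw from
  if (rgb[0] > rgb[1] && rgb[0] > rgb[2]) {
    current = meatPics[int(random(meatPics.length))]; // red tag: meats
  } else if (rgb[2] > rgb[0] && rgb[2] > rgb[1]) {
    current = dairyPics[int(random(dairyPics.length))]; // blue tag: animal products
  } else {
    current = plantPics[int(random(plantPics.length))]; // green tag: plant-based
  }
}

void draw() {
  background(0);
  if (current != null) {
    image(current, 0, 0, width, height);
  }
}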

Digital Fabrication:

3D printing:  https://www.thingiverse.com/thing:2304545

I decided to create a 3D printed mushroom because it represents the produce commonly found at a grocery store or supermarket. I chose to 3D print this piece rather than laser-cut something because I had found it relatively easy and beneficial to make the shelves out of cardboard, as well as the scanner housing that contains the Arduino and breadboard for the color sensor.

CONCLUSIONS:

The primary focus of my project is to educate people about the larger consequences and implications of their food choices. Through the interactive concept of using a scanner to trigger images specific to food production, I hope to demonstrate the consequences of dietary choices and the larger implications of industrialized agriculture and animal farming. The results align with my definition of interaction because users not only engage with a supermarket-checkout-style scanner but are also presented with real, edible foods, furthering the understanding that what you eat matters. The response to seeing unpleasant or informative images deepens the interaction: users learn something new and associate these facts with foods they consume regularly. If I had more time, I would fix the distortion of the images and add sound (specifically the screams of animals living on factory farms, as well as a confirmation sound after each scan) to engage audiences in the experience on a more complete level. This project has taught me many valuable things, for example the potential of technology and design to enhance our understanding of the world and shift ideologies about even the most basic aspects of life, such as food. When users can experience projects that appeal to more than one sense, the project as a whole is enhanced. Regarding my accomplishments, I am pleased to have used creative technology to introduce people to realities of food systems they may otherwise have been very disconnected from. Ultimately, this project combines visual cues with taste and smell to demonstrate compelling methods of interaction and to help bridge the gap between us and how our food is produced. Audiences and customers should care about this experience because it demonstrates the exceptionally detrimental consequences of eating animals and animal products, and translates these very common interactions with food and grocery stores into tangible, straightforward pieces of information.

BIBLIOGRAPHY OF SOURCES:

“5 Ways Eating More Plant-Based Foods Benefits the Environment.” One Green Planet, 21 Aug. 2015, https://www.onegreenplanet.org/environment/how-eating-more-plant-based-foods-benefits-the-environment/.
https://search.credoreference.com/content/entry/abcfoodsafety/avian_flu/0. Accessed 29 Oct. 2018.
“Dairy | Industries | WWF.” World Wildlife Fund, https://www.worldwildlife.org/industries/dairy. Accessed 4 Dec. 2019.
Eating Animals Quotes by Jonathan Safran Foer. https://www.goodreads.com/work/quotes/3149322-eating-animals. Accessed 3 Dec. 2019.
Flu Season: Factory Farming Could Cause A Catastrophic Pandemic | HuffPost. https://www.huffingtonpost.com/kathy-freston/flu-season-factory-farmin_b_410941.html. Accessed 29 Oct. 2018.
“Milk’s Impact on the Environment.” World Wildlife Fund, https://www.worldwildlife.org/magazine/issues/winter-2019/articles/milk-s-impact-on-the-environment?utm_campaign=magazine&utm_medium=email&utm_source=magazine&utm_content=1911-e. Accessed 4 Dec. 2019.
Moskin, Julia, et al. “Your Questions About Food and Climate Change, Answered.” The New York Times, 30 Apr. 2019. NYTimes.com, https://www.nytimes.com/interactive/2019/04/30/dining/climate-change-food-eating-habits.html.
Nijdam, Durk, et al. “The Price of Protein: Review of Land Use and Carbon Footprints from Life Cycle Assessments of Animal Food Products and Their Substitutes.” Food Policy, vol. 37, no. 6, Dec. 2012, pp. 760–70. DOI.org (Crossref), doi:10.1016/j.foodpol.2012.08.002.
Ocean Destruction – The Commercial Fishing Industry Is Killing Our Oceans. http://bandeathnets.com/. Accessed 3 Dec. 2019.
Siegle, Lucy. “What’s the Environmental Impact of Milk?” The Guardian, 13 Aug. 2009. www.theguardian.com, https://www.theguardian.com/environment/2009/aug/07/milk-environmental-impact.
“The Case for Plant Based.” UCLA Sustainability, https://www.sustain.ucla.edu/our-initiatives/food-systems/the-case-for-plant-based/. Accessed 4 Dec. 2019.
The Ecology of Disease and Health | Wiley-Blackwell Companions to Anthropology: A Companion to Medical Anthropology – Credo Reference. https://search.credoreference.com/content/entry/wileycmean/the_ecology_of_disease_and_health/0. Accessed 29 Oct. 2018.
“WATCH: Undercover Investigations Expose Animal Abusers.” Mercy For Animals, 5 Jan. 2015, https://mercyforanimals.org/investigations.
What Is The Environmental Impact Of The Fishing Industry? – WorldAtlas.Com. https://www.worldatlas.com/articles/what-is-the-environmental-impact-of-the-fishing-industry.html. Accessed 3 Dec. 2019.
Zee, Bibi van der. “What Is the True Cost of Eating Meat?” The Guardian, 7 May 2018. www.theguardian.com, https://www.theguardian.com/news/2018/may/07/true-cost-of-eating-meat-environment-health-animal-welfare.

LED Biking Jacket – Sagar Risal – Rudi

When first creating this jacket I wasn't really invested in the jacket material itself, since I knew I just needed any jacket that could hold all the circuitry. I did recognize that the jacket would have to be of a thinner material so the LEDs could shine through, so I thought it might be a good idea to put a bigger jacket over it to hide the wiring. In the end I didn't use the bigger jacket, since I thought I did a good job of hiding the sewing I did. I also knew the user would need a way to turn on the indicators that was easy to use and safe while biking. For this reason I added gloves to the jacket so that the user had access to buttons, and I placed the buttons near the fingers so the user could very easily press them to indicate which direction they were going. I thought about purchasing a real biking jacket with actual biking gloves to make the outfit feel more legitimate, but the overall cost steered me toward a cheap jacket instead. With a lot more time, and money, I would have loved to integrate the wires inside the actual jacket material so they wouldn't show.

While creating the jacket there were three main humps: the actual design of the jacket, how I would make it, and how it would look; the communication between Arduino and Processing and how it would play into the jacket; and how the lights would look. These were all huge parts of the production and of how the jacket would turn out in the end. One of my biggest early struggles was how to have LEDs on the back of the jacket showing animations. At first I thought an LED matrix on the back would be a good idea, but after looking at the schematics and how much time it would take, I decided it would be better to use four LED strips and control each strip individually to make the desired animations. That made it a lot easier to build the jacket and to code the animations. The decision shaped how I proceeded with the project, since the matrix idea was more Processing-based, while four LED strips are handled mostly by the Arduino. I would say I ended up succeeding on all three humps, except that I ran out of time to code the ability for the user to change the colors of the jacket, which I thought would have been a really nice addition.

Matrix Sketch / Four LED Strips Sketch:


At user testing I was missing the majority of my project, and though most students liked the idea of LEDs on a jacket and being able to control them for the practical purpose of riding a bike, many of the teachers wanted me to add more elements to it, which is where one of my proudest successes came from. Ironically, the success wasn't the jacket itself but the Processing interface through which one interacts with it. I was really proud of the interface because of how it complemented the theme of the jacket and how nicely it showed the interaction between the user and the jacket through the computer. The interface let me play more with the jacket's theme and with how the user could interact with it, while still serving the jacket's practical use.

When I set out to make this project, I wanted a jacket that bikers could wear so that when they bike at night they can be seen by cars and by other bikers who, especially at night, can't tell which direction someone is turning. After witnessing many biking accidents here in China, and being in a couple myself, I noticed that most happen at night, when visibility is low and riding is difficult because bikes share the road with pedestrians and electric scooters. I wanted an easy and cool way for bikers to safely traverse the streets without feeling invisible. I also wanted the rider to interact with the jacket itself, which is why I added the buttons, as well as an interface where the user can change the animations to what they want. I have always defined interaction in levels rather than as something you either have or don't. Obviously, just pressing buttons on a jacket isn't much of an interaction, but in the bigger picture, being able to choose the animations on your jacket, and the way the jacket's use lends itself to interacting with people on the streets while biking, shows that by controlling what the clothes on your body do, you are interacting with far more than just two buttons on your gloves.

During my final presentation many of my peers enjoyed the project, but they also offered recommendations that I myself had wanted to include but couldn't because of time. These included a brake light, as well as feedback for the user when pressing the buttons, so the user has some indication that the lights are working correctly, since they cannot see the lights themselves. These recommendations were very helpful for thinking about how to improve the jacket. With more time I would have loved to add more customization options and to implement these recommendations. I would also have loved to improve the look of the jacket itself, so that it looks and feels like a regular bike jacket but with LEDs.
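
For instance, the suggested brake light could be as simple as a third button that floods a strip solid red while held. The sketch below is only an illustration of the idea under assumed wiring (pin 10 for the button and pin 7 for one strip are hypothetical); the real jacket drives four strips, as in the full code that follows:

#include <FastLED.h>
#define NUM_LEDS 18

CRGB leds[NUM_LEDS];
const int brakeButton = 10; // hypothetical third button

void setup() {
  FastLED.addLeds<WS2812, 7, GRB>(leds, NUM_LEDS);
  pinMode(brakeButton, INPUT_PULLUP);
}

void loop() {
  if (digitalRead(brakeButton) == LOW) {
    fill_solid(leds, NUM_LEDS, CRGB::Red); // solid red while the button is held
  } else {
    fill_solid(leds, NUM_LEDS, CRGB::Black);
  }
  FastLED.show();
}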

One thing I definitely learned from this project is that combining technology with fashion, or clothes in general, takes a lot of time, effort, and patience. Not everything works the first time, and there are many factors to weigh when designing a meaningful way to put technology into clothes. The whole process is tiring but very rewarding when you pull it off, making the technology work meaningfully and look good. Clothing and technology, while very different things, are a lot more similar than one might think. As we use technology more and more in our daily lives, it is natural to start adapting it to our clothes. The more comfortable we get with implementing technology in what we wear, the easier daily life can become, with simple tasks done from our clothes instead of our phones or other devices. My LED biking jacket shows that something as simple as a jacket with lights can help solve issues of safety on the road, while offering a different style to the bikers who use it. As wearable technology improves, people will be able to interact more easily with more of their daily lives through the simple act of wearing their clothes. These interactions with what we wear not only can look really cool, but can also have a big impact on how we interact with each other in the future.

Arduino Code: 

#include <FastLED.h>
#define LED_PIN 7
#define LED_PIN_2 6
#define LED_PIN_3 5
#define LED_PIN_4 4
#define NUM_LEDS 18

#define NUM_OF_VALUES 3 /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
/* This is the array of values storing the data from Processing. */
int values[NUM_OF_VALUES];
int valueIndex = 0;
int tempValue = 0;

CRGB leds[NUM_LEDS];
CRGB leds_2[NUM_LEDS];
CRGB leds_3[NUM_LEDS];
CRGB leds_4[NUM_LEDS];

int leftButton = 8;
int rightButton = 9;

void setup() {
Serial.begin(9600);
values[0] = 1;
values[1] = 1;
values[2] = 1;
FastLED.addLeds<WS2812, LED_PIN, GRB>(leds, NUM_LEDS);
FastLED.addLeds<WS2812, LED_PIN_2, GRB>(leds_2, NUM_LEDS);
FastLED.addLeds<WS2812, LED_PIN_3, GRB>(leds_3, NUM_LEDS);
FastLED.addLeds<WS2812, LED_PIN_4, GRB>(leds_4, NUM_LEDS);

//LEFT SIGNAL
pinMode(leftButton, INPUT_PULLUP);

//RIGHT SIGNAL
pinMode(rightButton, INPUT_PULLUP);

}

void loop() {

getSerialData();

if (digitalRead(leftButton) == LOW) {
//Play left animation
if (values[0] == 1) {
Left1();
Left1();
Left1();

}
if (values[0] == 2) {
Left2();
Left2();
Left2();

}
if (values[0] == 3) {
Left3();
Left3();
Left3();

}
}
else if (digitalRead(rightButton) == LOW) {
//Play right animation
if (values[2] == 1) {
Right1();
Right1();
Right1();

}
if (values[2] == 2) {
Right2();
Right2();
Right2();

}
if (values[2] == 3) {
Right3();
Right3();
Right3();

}
}
else {
if (values[1] == 1) {
Forward1();
}
if (values[1] == 2) {
Forward2();
}
if (values[1] == 3) {
Forward3();
}
}

}

void Direction1() {
// chase red down the strip, then chase it back off
// (NUM_LEDS - 1 is the last valid index of an 18-LED strip)
for (int i = NUM_LEDS - 1; i >= 0; i--) {
leds[i] = CRGB(255, 0, 0);
FastLED.show();
delay(40);
}
for (int i = NUM_LEDS - 1; i >= 0; i--) {
leds[i] = CRGB(0, 0, 0);
FastLED.show();
delay(40);
}
}

void Direction2() {
// same chase as Direction1, but running the other way
for (int i = 0; i < NUM_LEDS; i++) {
leds[i] = CRGB(255, 0, 0);
FastLED.show();
delay(40);
}
for (int i = 0; i < NUM_LEDS; i++) {
leds[i] = CRGB(0, 0, 0);
FastLED.show();
delay(40);
}
}

void Blink() {
// light the whole strip red, then turn it all off again
for (int i = 0; i < NUM_LEDS; i++) {
leds[i] = CRGB(255, 0, 0);
}
FastLED.show();
delay(500);
for (int i = 0; i < NUM_LEDS; i++) {
leds[i] = CRGB(0, 0, 0);
}
FastLED.show();
delay(500);
}

void getSerialData() {
while (Serial.available() > 0) {
char c = Serial.read();
//switch-case checks the value of the variable in the switch function
//in this case the char c, then runs the case that fits the value of the variable
//for more information, visit the reference page: https://www.arduino.cc/en/Reference/SwitchCase
switch (c) {
//if the char c from Processing is a number between 0 and 9
case '0' ... '9':
//save the value of char c to tempValue
//while shifting the existing digits saved in tempValue
//so that the digits received through char c remain coherent
tempValue = tempValue * 10 + c - '0';
break;
//if the char c from Processing is a comma,
//the following chars are for the next element in the values array
case ',':
values[valueIndex] = tempValue;
//reset tempValue
tempValue = 0;
//increment valueIndex by 1
valueIndex++;
break;
//if the char c from Processing is the character 'n',
//which signals the end of the data
case 'n':
//save the tempValue
//this will be the last element in the values array
values[valueIndex] = tempValue;
//reset tempValue and valueIndex
//to clear out the values array for the next round of readings from Processing
tempValue = 0;
valueIndex = 0;
Flash();
Flash();
Flash();
break;
//if the char c from Processing is the character 'e',
//it signals the Arduino to send Processing the elements saved in the values array
//this case is triggered and processed by the echoSerialData function in the Processing sketch
case 'e': // to echo
for (int i = 0; i < NUM_OF_VALUES; i++) {
Serial.print(values[i]);
if (i < NUM_OF_VALUES - 1) {
Serial.print(',');
}
else {
Serial.println();
}
}
break;
}
}
}

Processing Code: 

import processing.serial.*;

int NUM_OF_VALUES = 3; /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/

Serial myPort;
String myString;

// This is the array of values you might want to send to Arduino.
int values[] = {1, 1, 1};
char screen = 'H';
PImage imgLeft, imgMenu, imgForward, imgRight;

void setup() {

size(1440, 900);

printArray(Serial.list());
myPort = new Serial(this, Serial.list()[ 5 ], 9600);
// check the list of the ports,
// find the port “/dev/cu.usbmodem—-” or “/dev/tty.usbmodem—-”
// and replace PORT_INDEX above with the index of the port

myPort.clear();
// Throw out the first reading,
// in case we started reading in the middle of a string from the sender.
myString = myPort.readStringUntil( 10 ); // 10 = ‘\n’ Linefeed in ASCII
myString = null;
imgLeft = loadImage(“LeftReal.jpg”);
imgMenu = loadImage(“Menu.jpg”);
imgForward = loadImage(“Forward.jpg”);
imgRight = loadImage(“Right.jpg”);
}

void mousePressed() {
if (screen == 'H') {
mousePressHome();
} else if (screen == 'L') {
mousePressLeft();
} else if (screen == 'F') {
mousePressForward();
} else if (screen == 'R') {
mousePressRight();
}

//sendSerialData();
}

void sendSerialData() {
String data = "";
for (int i=0; i<values.length; i++) {
data += values[i];
//if i is less than the index number of the last element in the values array
if (i < values.length-1) {
data += ","; // add splitter character "," between each values element
}
//if it is the last element in the values array
else {
data += "n"; // add the end-of-data character "n"
}
}
//write to Arduino
myPort.write(data);
}

void echoSerialData(int frequency) {
//write character ‘e’ at the given frequency
//to request Arduino to send back the values array
if (frameCount % frequency == 0) myPort.write('e');

String incomingBytes = "";
while (myPort.available() > 0) {
//add on all the characters received from the Arduino to the incomingBytes string
incomingBytes += char(myPort.read());
}
//print what Arduino sent back to Processing
print( incomingBytes );
}

void draw()//Title Screen
{
if (screen == 'H') {
drawHome();
} else if (screen == 'L') {
drawLeft();
} else if (screen == 'F') {
drawForward();
} else if (screen == 'R') {
drawRight();

}

// echoSerialData(20);
}

void keyPressed() {
printArray(values);
sendSerialData();
}

Creative Motion – Yu Yan (Sonny) – Inmi

Conception and Design:

During the brainstorming phase, my partner Lillie and I wanted to build an interactive project that lets users create digital paintings with nothing but their motions. The interaction takes users' movements as the input and the image displayed on a digital device as the output. Our inspiration came from a Leap Motion interactive art exhibit. At first, we thought about using multiple sensors with Arduino to catch the movements and displaying the image in Processing. However, after trying several sensors and doing some research, we found there was no sensor suitable for our needs, and even if there were, it would have taken a huge amount of time to build the circuit and understand the code. So we turned to our instructor for help and did further research into alternatives. Finally, we decided to use the webcam in Processing as our "sensor" to catch the input (users' movements) and to build an LED board with Arduino to display the output (the painting). We chose the webcam because it is easier to capture images from a camera than from a sensor, the color values detected from the camera are more accurate, and the code is not too difficult to learn with the help of the IMA fellows. However, when we were figuring out the Arduino part, we found it hard to build the circuit using single-colored LEDs and connect all of them on the breadboard. With further research, we found that an 8×8 LED matrix could replace the single-colored LEDs and also generate more colors. But the first few matrices we tried were not satisfactory, because we didn't know how to connect them to the Arduino board and couldn't find solutions online. (We found a video that we thought would help us understand how to connect the LED matrix to the Arduino, but it didn't.) We also found sample code to test the LED matrix, but since we were unable to connect it to the Arduino, that code was useless as well. Moreover, those matrices could only generate three colors, which didn't meet our needs.

Since we wanted to allow users to create paintings with more diversity, we tried to find an LED matrix that could display rainbow colors. After consulting other IMA fellows, we found that the Rainbowduino works with one particular kind of LED matrix and can display rainbow colors, and its code is also easy to comprehend. So eventually, we decided to use the Rainbowduino and the LED matrix on the Arduino side as our output device, and the webcam in Processing as our input detector.
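For readers unfamiliar with the Rainbowduino, its appeal is that Seeed’s Rainbowduino library lets you address each LED of the 8*8 matrix by its (x, y) coordinate and an RGB color. The sketch below is a minimal illustration of that idea, assuming the standard library API; it is not our project code.

#include <Rainbowduino.h>

void setup() {
  Rb.init(); // initialize the Rainbowduino driver
}

void loop() {
  Rb.blankDisplay(); // clear all 64 LEDs
  // light the LED at column 3, row 5 with an RGB color
  Rb.setPixelXY(3, 5, 0, 150, 255);
  delay(500);
}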

Fabrication and Production:

One of the most significant steps in our production process, in terms of failures, was the coding phase. While choosing materials for the output device, we tried quite a few kinds of LED matrices and looked at their code, and we discovered that the code for the earlier matrices was too complex to comprehend: we needed to set different variables for different rows and columns of LEDs, which was quite confusing at times. After we decided to use the Rainbowduino, the Arduino code became much easier because we could use coordinates to address each single LED. With the help of the IMA fellows, we managed to write code that satisfied our needs. This experience taught us that choosing suitable equipment is crucial to a project: a good choice brings great convenience and saves a lot of time.

Another significant step was the feedback we received during the user testing session. On the positive side, many users showed interest in our project and thought it was really cool when it displayed different colors. They found the interaction with the piece intriguing and liked that their movements could light up the LEDs in different colors. This feedback matched our initial goal of giving users an opportunity to create their own art with their motions. However, there were still some things we could improve. First of all, one user said that the way the LEDs lit up could be a little confusing because it did not clearly show where the user was moving. That was because, at first, we did not separate the x-axis and the y-axis for each section of LEDs. The following sketches and video help explain the situation.

To solve this issue, we modified our code and separated the x-axis and the y-axis for each section, so that moving in one area lights up only that section without triggering the others. After we showed the modified project to the user who gave us this comment, he said the experience was better and that he could see himself moving in the LED matrix more clearly. Second, the experience of interaction could feel too thin and monotonous, which made it hard to convey our message to users. Since the interaction consisted only of moving one’s body and lighting up the same positions on the LED matrix in different colors, it might feel too insubstantial for an interactive project. Marcela and Inmi suggested that adding some sounds could make it more attractive and more meaningful, so we took their advice. In addition to lighting up a section of LEDs when the user moves into the corresponding area, we added a sound file to each section and made it play along with the lighting of the corresponding LEDs. The following sketches, together with the simplified code sketch below, illustrate how we defined each section and its sound file.
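As a simplified, hypothetical illustration of this sectioning logic, the Processing-style helper below divides the camera frame into a grid and maps a pixel position to a single section, keeping the x and y calculations separate so that only one section responds. The grid size and the function name are placeholders, not our actual code.

int GRID_COLS = 2; // placeholder section counts
int GRID_ROWS = 2;

// map a pixel position in the camera frame to one section index,
// so motion in one area triggers only that section
int sectionFor(int x, int y, int frameW, int frameH) {
  int col = constrain(x * GRID_COLS / frameW, 0, GRID_COLS - 1);
  int row = constrain(y * GRID_ROWS / frameH, 0, GRID_ROWS - 1);
  return row * GRID_COLS + col; // sections numbered 0 to GRID_COLS*GRID_ROWS-1
}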


Initially, we used several random sounds such as “kick” and “snare” because we wanted to bring more diversity into our project. But during the presentation, some users commented that the sounds were too random and chaotic when they all played at once, and one of them mentioned that the “snapshot” sound made her uncomfortable. So for the final IMA show, we changed all the sound files to different piano notes. This made the sound more harmonious and comfortable to hear while users interact with the project. Third, some users mentioned that the LED matrix was too small, so they sometimes neglected it and paid more attention to the computer screen instead. At first, we thought about connecting more LED matrices together to make a bigger screen, but we didn’t manage to do that. So instead of magnifying the LED matrix, we made the computer screen less visible and the LED matrix more prominent by putting the matrix into our fabrication box. The result turned out much better than before, and we drew users’ attention to the LED matrix instead of the computer screen.
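To show how a sound can be tied to each section, here is a minimal sketch using the Processing Sound library. The file names (one piano note per section) are placeholders, not our actual assets.

import processing.sound.*;

SoundFile[] notes = new SoundFile[4];

void setup() {
  size(640, 480);
  // placeholder file names: one piano note per section
  String[] files = { "C4.wav", "E4.wav", "G4.wav", "C5.wav" };
  for (int i = 0; i < notes.length; i++) {
    notes[i] = new SoundFile(this, files[i]);
  }
}

// call this when motion is detected in a given section
void triggerSection(int section) {
  if (!notes[section].isPlaying()) {
    notes[section].play(); // play the note tied to that section
  }
}

void draw() {
  // motion detection would go here; see the frame-differencing sketch above
}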

By contrast, the fabrication process was one of the most significant steps of our project in terms of success. Before we settled on the final polygon shape, we came up with a few other shapes as well. As with my midterm project, we chose laser cutting and glued the layers together to build the shape. Since we wanted to make something cool and make the most of our material, we chose a transparent acrylic board. We also found that a polygon conveys a sense of geometric beauty, so we made our box a polygon. At first, we intended to just put the LED matrix on top of the polygon, but one of the IMA fellows suggested putting it at the bottom so that the light would reflect through the acrylic and look prettier. Thanks to this advice, it turned out to be a really cool project!


Conclusions:

For our final project, our goal has always been to allow people to create their own art using their motions and to encourage them to create art in different forms. Although we changed our single output (painting) to multiple outputs (painting and music), the goal of creating art with motion remains the same. Initially, we defined interaction as a continuous communication between two or more corresponding elements, an iterative process involving actions and feedback. Our project aligned with this definition by creating a constant communication between the project and its users and by providing immediate feedback to users’ motions. However, the experience of interacting with the piece was still not fully satisfactory: since we could not magnify the LED matrix, it was too small to notice easily, and we did not create the best possible experience for users. Fortunately, most users understood that they could change the image and create different sounds with their motions. They thought it was a really interesting and interactive project that they could play with for a long time; some even tried to play a full song after they discovered the location of each note. If we had more time, we would definitely build a bigger LED board to make it easier for users to experience creating art with their motions.

The setbacks and obstacles we encountered all seem a fair part of completing a project; the most important thing is to learn from them. What I learned is that we should humbly take people’s comments about our project and turn them into useful improvements and motivation. I also noticed that I still didn’t pay enough attention to the experience of the project. Since experience is one of the most vital parts of an interactive project, it should always be the first consideration. At the same time, I learned that the reason many people like our project is that it displays their presence and is controlled by them: users are in charge of everything the project displays. This shows that we created a tight and effective communication between the project and its users. Furthermore, making the most of our materials is also very important; sometimes it can change the whole project and turn it into a more complete version.

Since many people still hold the idea that art can only be created in a limited set of forms, we want to break that idea by giving them tools to create new forms of art and inspiring them to think outside the box. Art is limitless and full of potential. By showing that motion can also create different forms of art, this project is not only a recreation but also an invitation for people to generate more creative ideas about new forms of art and free their imagination. It also helps people become aware of their ability, their “power”, to control the creation of art. “Be bold, be creative, and be limitless.” This is the message we want to convey to our audience.

The code for Arduino is here. And the code for Processing is here.

Now, let’s have a look at how our users interact with our project!