Final Essay by Kat Van Sligtenhorst

The Self Censor

In my research for this project, I found several meaningful interactive exhibits whose designs or messages helped me develop my own. The message I wanted to convey was sparked by a project I found during the research stage: “The Listening Post” by Ben Rubin and Mark Hansen used “a real-time visualization of thousands of ongoing online conversations” displayed across numerous screens, giving viewers the sense that they were being watched or monitored even in spaces they had thought were safe or anonymous. For the physical design, I drew on my research of an exhibit called “The Nautilus,” which used a sea of human-height, touch-activated poles to create music. I also incorporated takeaways from a conversation with Rudi and Marcela about a past student’s project, which attempted to recreate the experience of a foreigner going through the TSA at a US airport. I knew that I wanted my project to feel immersive, so that the physical space where the interaction took place added to the user’s experience. I therefore decided to construct a voting booth that users could step into, where they would face a monitor and a series of yes-or-no questions about Chinese politics and their impressions of censorship. Given the recent press on the topic of self-censorship at NYU Shanghai, the goal of this project is to push students to consider what they will or won’t say, and why. The statement made by this interaction is particularly relevant to attendees of this university, as we live and study in China and, in doing so, give up certain personal freedoms.

The interactive monitor will receive input from the user regarding their opinions on Chinese politics and censorship. They will be asked true/false or yes/no questions, and then instructed to press a key (T/F or Y/N) corresponding with their answer. In between questions, the screen will flash images of controversial events and human rights abuses in China that may be disturbing or unsettling, like riot police in Hong Kong or reeducation camps in Xinjiang. After several responses that are negative towards Chinese interests, the screen will display an excerpt from the student policy handbook that informs the user that NYUSH will not protect them in the case that their online activity violates PRC law. As detailed in the handbook, “The University shall not use its powers to interfere with the rights of a student beyond the University environment. Conduct that occurs off-campus, online, over social media, or outside the context of a University program or activity, should generally be subject only to the consequences of public authority and/or opinion.” In contrast, I will also include sections of NYUSH policy that guarantee students freedom of expression and the right to protest. Questions will be asked regarding collective action, such as, “You draft an Instagram post asking your peers to rally in support of Hong Kong in the campus cafe next week. Press Y to send and N to delete.” Finally, the camera will switch on and briefly display live video of the person taking the poll, so they are aware they are being surveilled. At the end, I will prompt them to record their name to see if people are willing to link themselves to the answers they have chosen, something like, “Thank you for your participation in this survey. Please type your first and last name and press enter. To remain anonymous, press the spacebar.” I want to observe how successful the deterrents are in conditioning the students to choose less inflammatory answers. So far, I have made my full list of questions for the survey, as well as the order of the images to go along with them. I have also written the code for both the questions and the images, and aim to have the live video portion worked into the code by this week. I will ask some people to test it out on December 6 so I can take the weekend to make adjustments based on their experiences.
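Since the interaction is at its core a keypress-driven survey, the underlying Processing loop is simple. Below is a minimal sketch of that flow, with hypothetical placeholder questions and an arbitrary threshold; my actual code also sequences the images and handbook excerpts between questions:

import processing.video.*;

// hypothetical placeholder questions; the real survey list is longer
String[] questions = {
  "You draft an Instagram post asking your peers to rally in support of Hong Kong. Press Y to send and N to delete.",
  "I feel free to discuss Chinese politics online. Press Y or N."
};
int current = 0;            // index of the question on screen
int riskyAnswers = 0;       // answers that run against official narratives
boolean showCamera = false; // flips on for the surveillance reveal
Capture cam;

void setup() {
  size(800, 600);
  textAlign(CENTER, CENTER);
  textSize(22);
  cam = new Capture(this, 640, 480); // webcam hidden in the booth
}

void draw() {
  background(0);
  if (showCamera) {
    if (cam.available()) cam.read();
    image(cam, 0, 0, width, height); // live video of the participant
  } else if (current < questions.length) {
    text(questions[current], 40, 40, width - 80, height - 80);
  } else {
    text("Thank you for your participation in this survey.", 40, 40, width - 80, height - 80);
  }
}

void keyPressed() {
  if (current >= questions.length) return;
  if (key == 'y' || key == 'Y') riskyAnswers++; // simplified answer coding for this sketch
  current++;
  if (riskyAnswers >= 3 && !showCamera) { // after several risky answers, reveal the camera
    showCamera = true;
    cam.start();
  }
}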

While the technology behind this project is relatively simple, I think the message will be impactful. It is unique in that it addresses a particular group of people, who face all the nuances and challenges of attending the first joint US-Sino university. Our student body is in a position both to observe the affairs of China and to bring international perspectives and standards into our considerations of these issues. We have a distinct ability on our campus to discuss and debate topics that are taboo in wider Chinese society. This project is therefore significant in that it takes real-world events and issues of huge concern to students in our position and forces us to reconsider not only why we believe what we do, but how strong those beliefs really are when they are challenged, explicitly or implicitly. It aligns with my definition of interactivity as something that goes through multiple cycles of input and output between audience and product, and ultimately challenges the user in some way. Subsequent iterations of this project could easily be tailored to different audiences and topics based on current sociopolitical issues. For example, if I were to present it in the US, I could focus on impeachment, working in quotes from Trump or from witness testimony during the impeachment proceedings. I could also tailor it to the issue of voter suppression, making it difficult for members of certain demographics to make it through the survey successfully. All in all, I think the format is an effective way to push people to challenge their beliefs and convictions, and to recognize the corruption of governments in any location.

Recitation 9 Media Controller by Barry Wang

In this week’s recitation, I ran a brief test of a sensor that we would like to use in our final project. I checked out an accelerometer and used it to control the speed at which a video plays. We hooked the accelerometer up to the Arduino and established communication between Arduino and Processing over serial.

The tilt of the accelerometer along its x-axis sets the playback speed: if I tilt it to the left, the video plays in reverse; if I hold it level, the video pauses; if I tilt it to the right, the video plays forward. We got this working, which proved that the sensor is fully functional for our final project.

Here is a short test video:

Code on Arduino:

#include <LSM303D.h>
#include <Wire.h>
#include <SPI.h>

#define SPI_CS 10
#define ACCELE_SCALE 2  // accelerometer full-scale range, in g

/* Global variables */
int accel[3];       // raw acceleration values
int mag[3];         // raw magnetometer values
float realAccel[3]; // calculated acceleration values, in g
float heading, tiltHeading;
int v;              // value sent to Processing over serial

void setup()
{
  char rtn = 0;
  Serial.begin(9600); // Serial is used for debugging
  // Serial.println("\r\npower on");
  rtn = Lsm303d.initI2C();
  //rtn = Lsm303d.initSPI(SPI_CS);
  if (rtn != 0) // initialize the LSM303D, using the full-scale range above
  {
    // Serial.println("\r\nLSM303D is not found");
    while (1);
  }
  else
  {
    // Serial.println("\r\nLSM303D is found");
  }
}

void loop()
{
  // Serial.println("\r\n**************");
  Lsm303d.getAccel(accel);       // get the acceleration values and store them in the accel array
  while (!Lsm303d.isMagReady()); // wait for the magnetometer readings to be ready
  Lsm303d.getMag(mag);           // get the magnetometer values, store them in mag

  for (int i = 0; i < 3; i++)
  {
    realAccel[i] = accel[i] / pow(2, 15) * ACCELE_SCALE; // calculate real acceleration values, in units of g
  }
  heading = Lsm303d.getHeading(mag);
  tiltHeading = Lsm303d.getTiltHeading(mag, realAccel);
  v = int(realAccel[0] * 10) + 10; // map roughly -1g..1g on the x-axis to 0..20
  // Serial.println(v);
  Serial.write(v);

  delay(10); // delay for serial readability
}

Code on Processing:

import processing.video.*;
import processing.serial.*;

Movie myMovie;
Serial Port;
float value;

void setup() {
  size(480, 480);
  background(0);
  printArray(Serial.list());
  myMovie = new Movie(this, "dancing.mp4");
  myMovie.loop();
  Port = new Serial(this, "COM11", 9600);
}

void movieEvent(Movie movie) {
  myMovie.read();
}

void draw() {
  while (Port.available() > 0) {
    value = Port.read(); // one byte per reading, 0-20 from the Arduino
    println(value);
  }
  image(myMovie, 0, 0);
  // the center of the range (10) maps to speed 0, so the video pauses when the sensor is flat
  float newSpeed = map(value, 0, 20, -1, 1);
  myMovie.speed(newSpeed);
}

Reflection:

In the Computer Vision reading, the part that engaged me most is Myron Krueger’s point that the “entire human body ought to have a role in our interactions with computers.” This is exactly what my partner and I are trying to realize in our final project. In Krueger’s Videoplace project, the interaction is carried out through motion capture. Though we cannot reach that level yet, what we can do is fit sensors into wearables like gloves and glasses to create a similar interaction process. By doing so, the traditional way of interacting through mouse and keyboard is greatly improved, and that is definitely a direction we want to pursue in the future.

Recitation 9: Media Controller by Feifan Li

Introduction

In this recitation we were required to control media in a Processing sketch using a physical controller made with Arduino. First I used two potentiometers to control the location of the image I chose. Then I changed the function of one potentiometer to control the size of the image.

Controlling the Location:

Arduino Code (from the example):

// IMA NYU Shanghai
// Interaction Lab
// For sending multiple values from Arduino to Processing

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  // keep this format
  Serial.print(sensor1);
  Serial.print(","); // put comma between sensor values
  Serial.print(sensor2);
  Serial.println(); // add linefeed after sending the last sensor value

  // too fast communication might cause some latency in Processing
  // this delay resolves the issue.
  delay(100);
}

Processing Code (based on the example):

// IMA NYU Shanghai
// Interaction Lab
// For receiving multiple values from Arduino to Processing

/*
 * Based on the readStringUntil() example by Tom Igoe
 * https://processing.org/reference/libraries/serial/Serial_readStringUntil_.html
 */

import processing.serial.*;

String myString = null;
Serial myPort;

PImage img;

int NUM_OF_VALUES = 2;   /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;      /** this array stores values from Arduino **/

void setup() {
  size(500, 500);
  background(0);
  img=loadImage("figure0.jpeg");

  setupSerial();
}


void draw() {
  background(0);
  updateSerial();
  printArray(sensorValues);
  float a = map(sensorValues[0], 0, 1023, 0, 280); // horizontal position
  float b = map(sensorValues[1], 0, 1023, 0, 280); // vertical position, reused as the red tint channel
  tint(b, 180, 205);
  image(img, a, b);
}



void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[ 2 ], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----" 
  // and replace PORT_INDEX above with the index number of the port.

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}



void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n'  Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

I used tint() in my code and tied the image’s location to its color: when the image moves up and down, its color changes simultaneously.

Controlling Location as well as Size:

Arduino Code is the same as above.

Processing Code:

// IMA NYU Shanghai
// Interaction Lab
// For receiving multiple values from Arduino to Processing

/*
 * Based on the readStringUntil() example by Tom Igoe
 * https://processing.org/reference/libraries/serial/Serial_readStringUntil_.html
 */

import processing.serial.*;

String myString = null;
Serial myPort;

PImage img;

int NUM_OF_VALUES = 2;   /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;      /** this array stores values from Arduino **/

void setup() {
  size(500, 500);
  background(0);
  img=loadImage("figure0.jpeg");

  setupSerial();
}


void draw() {
  background(0);
  updateSerial();
  printArray(sensorValues);
  float a = map(sensorValues[0], 0, 1023, 0, 255); // horizontal position
  float b = map(sensorValues[1], 0, 1023, 0, 500); // image size, reused as the red tint channel
  tint(b, 180, 205);
  image(img, a, 100, b, b);
}



void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[ 2 ], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----" 
  // and replace PORT_INDEX above with the index number of the port.

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}



void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n'  Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

I changed the function of one potentiometer, and it now controls the size of the image. Maybe I should add one more potentiometer so that I could control both the size and the location in both directions, as sketched below.
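A rough sketch of that three-potentiometer version, assuming a third potentiometer wired to A2, one more analogRead value added to the Arduino loop, and NUM_OF_VALUES changed to 3; only draw() would need to change:

// assumes the Arduino also prints a third value from analogRead(A2),
// and NUM_OF_VALUES above is set to 3
void draw() {
  background(0);
  updateSerial();
  float x = map(sensorValues[0], 0, 1023, 0, 500);  // horizontal position
  float y = map(sensorValues[1], 0, 1023, 0, 500);  // vertical position
  float s = map(sensorValues[2], 0, 1023, 50, 500); // image size
  image(img, x, y, s, s);
}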

Reading Reflection

This week’s reading refers to Myron Krueger’s legendary Videoplace, developed in the 1970s out of his “deeply felt belief that the entire human body ought to have a role in our interactions with computers.” This example inspires me in doing my final project. My partner and I are trying to create a new interactive experience, and the old way of interacting with the computer through keyboard or mouse can no longer satisfy us. To expand the boundary of interaction, we look to the human body. We plan to use the motion of human bodies as input in our final project, and we think a game involving entire human bodies (preferably two or more players) would be more interactive and fun. In Krueger’s words, we can create a “multi-person virtual reality.” It is fascinating to learn the history and pioneering work of human-body interaction with the computer from the reading.

Work Cited

Levin, Golan, and Collaborators. “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers.” AI & Society, vol. 20, no. 4, Springer Verlag, 2006, pp. 462-482.

Recitation 9: Media Controller by Eric Shen

In this recitation we were asked to create a Processing sketch that controls media by manipulating that media’s attributes using a physical controller made with Arduino. I chose to use a physical controller to manipulate a video shown in our class. Two potentiometers are used on the Arduino side: one controls the position of the video and the other controls its playback speed.

Code: 

The Arduino part is the same as the example:

// IMA NYU Shanghai
// Interaction Lab
// For sending multiple values from Arduino to Processing

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  // keep this format
  Serial.print(sensor1);
  Serial.print(","); // put comma between sensor values
  Serial.print(sensor2);
  Serial.println(); // add linefeed after sending the last sensor value

  // too fast communication might cause some latency in Processing
  // this delay resolves the issue.
  delay(100);
}

Processing: 

// IMA NYU Shanghai
// Interaction Lab
// For receiving multiple values from Arduino to Processing

/*
 * Based on the readStringUntil() example by Tom Igoe
 * https://processing.org/reference/libraries/serial/Serial_readStringUntil_.html
 */

import processing.serial.*;
import processing.video.*;

String myString = null;
Serial myPort;

int NUM_OF_VALUES = 2;   /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;      /** this array stores values from Arduino **/

Movie myMovie;

void setup() {
  size(800, 800);
  background(0);
  myMovie = new Movie(this, "dancing.mp4");
  myMovie.loop();
  setupSerial();
}

void movieEvent(Movie movie) {
  myMovie.read();
}

void draw() {
  background(0);
  updateSerial();
  float c = map(sensorValues[1], 0, 1023, 0, 800); // second potentiometer: position
  image(myMovie, c, c);
  float newSpeed = map(sensorValues[0], 0, 1023, 0.1, 5); // first potentiometer: speed
  myMovie.speed(newSpeed);
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[13], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----" 
  // and replace PORT_INDEX above with the index number of the port.

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}



void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n'  Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Writing reflection:

A variety of computer vision techniques are introduced in the reading, which inspired me with ways technology could be used in my project. In the article, Myron Krueger, the creator of Videoplace, states that technology should be used as a supportive tool, which resonates with my understanding of a great interactive project. In my project, I use technology to improve the users’ experience and interaction and to help the audience identify with the theme. One particular project in the reading that really impressed me is Messa di Voce, whose interactive software visualizes the characteristics of the voice. Therefore, I think technology can be used to connect different senses together, and this notion is used in my final project too.

Reference

Levin, Golan. “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers.” AI & Society, vol. 20, no. 4, 2006, pp. 462-482.

Recitation 9: Media Controller – by Anica Yao

In this project, I connected a button/switch to a video of the subway. Long-pressing the button plays the video; otherwise, it pauses. There were three points I needed to pay attention to:
(1) Before playing the video, I needed to define the movie and load the file first and, if necessary, draw the frame.
(2) At first I couldn’t display the first frame (the screen was all black); thanks to Jintian’s help, I learned that I needed to draw the first frame in setup().
(3) The video didn’t play smoothly; after I changed the value of delay() in the Arduino sketch, it went better.
In the article “Computer Vision for Artists and Designers,” I learned that computer vision is important not only in the physical world but also in multimedia authoring tools. In my project, I simply used a single value sent from Arduino to Processing to control whether the video played or paused. But with multi-value serial communication, it is possible to use physical components to adjust both visual and audio attributes (e.g. the tint, the speed, and the frequency and volume of the sound). Besides, video pixel capture might also be a worthy method to start with. In my opinion, Processing is more a bridge than a destination: it should process the physical information along with the video and create computer vision beyond the original content of the video, which carries more artistic and interactive experience.
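As a sketch of that idea, here is a hypothetical two-value version that drives both a movie’s speed and a sound’s volume at once; it assumes the processing.sound library, a placeholder sound file, and the same comma-separated two-value Arduino sketch used in the posts above:

import processing.serial.*;
import processing.video.*;
import processing.sound.*;

Serial myPort;
Movie myMovie;
SoundFile mySound;
int[] sensorValues = new int[2];

void setup() {
  size(800, 600);
  myMovie = new Movie(this, "Pexels.mov");       // placeholder video
  mySound = new SoundFile(this, "ambience.wav"); // hypothetical sound file
  myMovie.loop();
  mySound.loop();
  myPort = new Serial(this, Serial.list()[0], 9600); // pick your port index
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  image(myMovie, 0, 0, width, height);
  myMovie.speed(map(sensorValues[0], 0, 1023, 0.1, 2)); // first sensor: playback speed
  mySound.amp(map(sensorValues[1], 0, 1023, 0, 1));     // second sensor: volume
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n'); // expects "value1,value2"
  if (line == null) return;
  String[] parts = split(trim(line), ',');
  if (parts.length == 2) {
    sensorValues[0] = int(parts[0]);
    sensorValues[1] = int(parts[1]);
  }
}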

Processing Code:

// IMA NYU Shanghai
// Interaction Lab
// This code receives one value from Arduino to Processing 

import processing.serial.*;
import processing.video.*;
Movie myMovie;

Serial myPort;
int valueFromArduino;


void setup() {
  size(1000, 600);
  myMovie = new Movie(this, "Pexels.mov");

  if (myMovie.available()) {
    myMovie.read(); // read the file
  }
  myMovie.play();                      // play the video
  image(myMovie, 0, 0, width, height); // draw the first frame

  printArray(Serial.list());
  // this prints out the list of all available serial ports on your computer.

  myPort = new Serial(this, Serial.list()[ 7 ], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----" 
  // and replace PORT_INDEX above with the index number of the port.
}


void draw() {
  // read the most recent value from the Arduino
  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }
  println(valueFromArduino); // prints out the value from Arduino
  background(valueFromArduino); // read() only returns 0-255; out-of-range values wrap to 0

  if (myMovie.available()) {
    myMovie.read();
  }
  if (valueFromArduino == 1) {
    myMovie.play();
  } else {
    myMovie.pause();
  }
  image(myMovie, 0, 0, width, height); // draw the frame
}
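
The Arduino side is not listed above; a minimal sketch of what it could send, assuming the button is wired to digital pin 8 with a pull-down resistor:

const int buttonPin = 8;

void setup() {
  pinMode(buttonPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  int state = digitalRead(buttonPin); // 1 while the button is held, 0 otherwise
  Serial.write(state);                // send a single raw byte for myPort.read() in Processing
  delay(50); // point (3) above: tuning this delay affected how smoothly the video played
}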

Notes:
Video by Danilo Obradović from Pexels