Final Project Essay: Alexander Cleveland

A. Title 

侬说啥 (What did you say?)

B. Project Statement of Purpose

In this project, my partner and I aim to educate young children about the dying languages and dialects of China's provinces. Unlike most Western countries, China has many languages and regional variations spoken throughout the country. Even though Mandarin is the most common, some cities, such as Shanghai, also have their own dialect, "Shanghainese" (上海话) (Asia Society). To preserve dialects such as Shanghainese, my partner and I have devised a plan to educate younger children about the many dialects spoken throughout China. Many articles make it evident that languages other than Mandarin are being phased out because the government requires people to use Mandarin in official scenarios (the Gaokao exam and government forms, for example) (The Atlantic). It is important to maintain China's vast array of cultures through spoken language. We will appeal to the younger generation through a puzzle game that displays a map of China. When a child takes one of the provinces (a single puzzle piece) out, the screen and audio begin playing a clip from a native speaker of that region. Each province we are able to source will feature a native speaker from NYU Shanghai introducing themselves. This way children can listen, practice, and, most importantly, become aware of the diversity of languages in China. Hopefully, through this project, my partner and I can inspire classmates, teachers, mentors, and children to keep dialects alive in China.

C.  Project Plan

The main goal of this project is to preserve the cultural traditions of the different languages and dialects across China's provinces. To appeal to younger kids through education, we've created a puzzle game that pairs with a screen, a joystick, and audio speakers. The first component is a map of China, split into puzzle pieces along provincial boundaries. Each province piece will be wired to the screen so that when it is taken out of the puzzle, a video plays of a native speaker from that province introducing themselves. The joystick will be linked to the screen and control a cursor that moves over a virtual map of China. As the cursor hovers over a province, the voice of that dialect begins to whisper, creating intrigue for the participant. My partner and I plan to film and record students from NYU Shanghai who come from different provinces, such as Guangdong, Shanghai, Beijing, Hebei, and Sichuan. We realize that not every province is represented at our school, so we will gather as many as possible. The search for students who speak dialects other than Mandarin has been difficult, though, because of the government's Mandarin-only education policy in recent years. We have designed the base of the puzzle and plan to laser-cut it, while we will 3D-print the model of China and its pieces to fit within the base. We also need to connect the sensor wires to the puzzle pieces once they are fully printed. Afterward, we will link the joystick to the on-screen graphics using the Arduino and Processing skills we've learned to create the virtual map.
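As a rough illustration of the piece-removal logic described above, here is a minimal C sketch. All names, the province list, and the sensor encoding are placeholders, not our actual wiring: a contact sensor under each province is assumed to read 1 while the piece is seated and 0 once it is lifted, and the first lifted piece selects which native-speaker clip to play.

```c
#include <assert.h>
#include <stdio.h>

#define NUM_PROVINCES 5

/* Placeholder clip names, one per wired province. */
static const char *CLIPS[NUM_PROVINCES] = {
    "guangdong.mp4", "shanghai.mp4", "beijing.mp4",
    "hebei.mp4", "sichuan.mp4"
};

/* Return the index of the first lifted piece (sensor reads 0),
 * or -1 if every piece is still seated. */
int liftedPiece(const int sensors[], int n) {
    for (int i = 0; i < n; i++) {
        if (sensors[i] == 0) {
            return i;
        }
    }
    return -1;
}
```

When `liftedPiece` returns a non-negative index, the screen would start the matching entry in `CLIPS`; when it returns -1, the map idles and the joystick's whisper behavior takes over.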

D. Context and Significance

In my preparatory research for this project, I stumbled upon a similar project named Phonemica, which also aims to preserve Chinese dialects by physically traveling to those places and recording the people there. I found it very inspiring that there are other people who are passionate about preserving the local cultures of China. Our project differs from Phonemica, though, in that we are in one location, recording people at our school. We are also trying to appeal to a younger generation by making it a game rather than a website. Another project that inspired me was the musical bikes just a block away from our school. The premise of that project was interactivity through music and exercise: only when all four people were riding the bikes would the whole song play. I think our puzzle and joystick create a similar incentive, because the voice only whispers when the cursor hovers over a province, and only once the child takes the puzzle piece out will the full phrase play. The joystick creates an incentive to take out a puzzle piece, just as the songs created an incentive for all four people to be exercising on the bikes. I think our project is unique in that we are enlisting the help of the whole school to guide our educational process. It's not only my partner and I working to preserve Chinese dialects, but people from those regions too. To me, it aligns with my definition of interaction because it involves a child playing with a puzzle, which translates a result to the screen and helps that child learn. It's a conversation through the physicality of the puzzle and the encouragement to repeat the phrase after watching the video. Just as Chris Crawford explains in "What Exactly Is Interactivity?", interactivity has to do with conversations moving back and forth between the user and the machine, not just once, but multiple times in a fluid back-and-forth process (Crawford).
I agree with Crawford and think that my project's interaction is a conversational learning process geared toward children. Hopefully, this project can inspire others in the future to build similar models, such as Phonemica's, to help maintain cultural identities in China. Educational tools such as speech-repetition games, apps, and computer games could be used in schools around China, tailored to each province. Seeing as the future is digital, I hope Chinese dialects can survive the new wave of technology.

Works Cited

Asia Society: https://asiasociety.org/china-learning-initiatives/many-dialects-china

Phonemica: http://www.phonemica.net/

Dix au carre

The Atlantic: https://www.theatlantic.com/china/archive/2013/06/on-saving-chinas-dying-languages/276971/

Chris Crawford, "What Exactly Is Interactivity?" (in-class reading)

Week 12 Assignment: Document Final Concept – Lishan Qin

Background

When I was young, I was fascinated by the magic world created by J.K. Rowling in Harry Potter. She created so many bizarre objects in that world of magic that I still find remarkable today. "The Daily Prophet", a newspaper in the Harry Potter world, is the main inspiration for my final project. The Daily Prophet is a printed newspaper enchanted so that the images on the paper appear to move. It inspired me to create an interactive newspaper with an "AI editor", in which not only do the images on the newspaper update every second according to the video captured by the webcam, but the passages also change according to the image. In my final project, I will use style transfer to make the user's face appear on the newspaper and utilize im2txt to change the words of the passages according to what the user is doing. I will build an interactive newspaper that constantly reports the user's actions.

Motivation

Even with the development of social media, which allows new information to spread almost every second, it still requires people behind the screen to type, collect, and then post the news. However, if an AI editor could document, write, and edit the news for us, the newspaper's real-time capability to spread information would be even better. Thus, I want to create an interactive, self-editing newspaper in which an AI writes news about the actions of the people it sees by generating sentences on its own.

Reference

I'll refer to the im2txt model on GitHub (https://github.com/runwayml/p5js/tree/master/im2txt) to create the video captions. This model generates sentences describing the objects and actions the webcam video captures. I will run the model in Runway, which will send the resulting caption to the HTML page so that I can manipulate the outcome. Since some of the captions aren't very accurate, I still need to find ways to improve on that.

Week 12 Assignment: Document Final Concept – Ziying Wang (Jamie)

Background

Dancing With Strangers is a development of my midterm project, Dancing With a Stranger. In my midterm project, I used the PoseNet model to mirror human movements on the screen and exchange the controls of the players' legs. With my final project, Dancing With Strangers, I hope to create a communal dancing platform that enables every user to log on from their own terminal and have all the movements mirrored on the same platform. As for the figure displayed on the screen, I plan to build an abstract figure based on the coordinates provided by PoseNet. The figure will illustrate the movements of the human body but will not look like a skeleton or a contour.

Motivation

My motivation for this final project is similar to my midterm project's: interaction with electronic devices can pull us closer, but it can also drive us apart, so using these devices to strengthen the connections between people becomes necessary. Dancing, in every culture, is the best way to bring different people together, and a communal dancing platform can achieve this goal. The stick figure I created for my midterm project was too specific; in a way, being specific means assimilation. Yet people are very different, and they move differently, so I don't want to use a common-sense stick figure to illustrate body movement. Abstraction provides diversity: without the boundary of the human torso, people can express themselves more freely.

Reference

To build the communal dancing platform, I'm using a Firebase server as a data collector that records live data sent by different users from different terminals.

For my abstract figure, I'm deeply inspired by the artist and designer Zach Lieberman. One of his series depicts human body movement in a very abstract way: it tracks the speed of the movements, and the patterns illustrate this change by changing their size. With simple lines, Bézier curves, and patterns, he creates various dancing shapes that are aesthetically pleasing. I plan to achieve similar results in my final project.

Some works by Zach Lieberman

Final Project: Essay (November 26, 2019) – Jackson McQueeney

Vocal Art (With Kevin Nader)

This project will make artistic interpretations of the users' voices, basing each color's saturation on the volume of the voice. The project will translate vocal input (from a microphone) into colors and shapes (again, based on volume). After one user uses the project, the interpretation of their voice will remain drawn on screen and will be added to by the next user.

This project aims to interpret the voices of its users and compound these interpretations across users. The first user starts with a blank canvas, while each subsequent user builds on top of it. The Arduino code will process the input data and send it to Processing, which will generate the output. This project does not require much physical fabrication, the only physical component being the microphone's Arduino circuit. Once that part is complete, my partner and I will write the Arduino and Processing code. Given the simplicity of the physical aspect of the project, my partner and I could also design an aesthetically pleasing physical apparatus to house the circuit.
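As a sketch of the volume-to-saturation idea, assuming the microphone amplitude arrives as a 10-bit Arduino ADC reading (0 to 1023) and the color saturation runs 0 to 255, the mapping could be a clamped linear rescale like this (the ranges and function name are assumptions, not our final calibration):

```c
#include <assert.h>

/* Rescale a 0..1023 microphone amplitude into a 0..255 saturation,
 * clamping out-of-range readings first. */
int volumeToSaturation(int amplitude) {
    if (amplitude < 0) amplitude = 0;
    if (amplitude > 1023) amplitude = 1023;
    return amplitude * 255 / 1023;  /* integer linear rescale */
}
```

A louder voice would then produce a more saturated color, while silence leaves the canvas addition nearly gray.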

Over the course of the semester, we have had to define and redefine "interaction" numerous times. I originally defined it as "the communication between two actors" for the group project, then changed it to "the communication between two or more actors through mutually influential inputs and outputs" for the midterm project. As of now, my definition is "the communication between two or more organic or machine actors, facilitated by an interchange of inputs and outputs between actors". Interactivity relies on the mutual exchange of inputs and outputs between human and non-human actors. These updates to my definition were influenced by Crawford's "The Art of Interactive Design", in which he defines interaction "in terms of a conversation: a cyclic process in which two actors alternately listen, think, and speak" (1).

Vocal Art aligns with my updated definition of interaction because it consists of a machine actor and potentially several human actors that consistently communicate and build upon previous outputs. My partner may have different ideas about this project’s significance, but I see it as a collaborative art piece that complicates itself and constantly changes based on its perception of its audience. I don’t think there is a specific intended audience, simply anyone who wishes to add their voice to the collective canvas.

Recitation 8: Serial Communication (November 12, 2019) by Jackson McQueeney

Part 1: Etch-A-Sketch

Arduino Schematic:

Arduino Code:

void setup() {
  Serial.begin(9600);
}

void loop() {
  // read the two analog inputs and scale them to the 500x500 canvas
  int sensorValue1 = analogRead(A0);
  int sensorValue2 = analogRead(A1);
  int mappedvalue1 = map(sensorValue1, 0, 1023, 0, 500);
  int mappedvalue2 = map(sensorValue2, 0, 1023, 0, 500);
  // send the pair as "x,y\n" for Processing to parse
  Serial.print(mappedvalue1);
  Serial.print(",");
  Serial.print(mappedvalue2);
  Serial.println();
  delay(1);
}

Processing Code:

import processing.serial.*;
String myString = null;
Serial myPort;
int x2;
int y2;
int NUM_OF_VALUES = 2;   
int[] sensorValues;      

void setup() {
  size(500, 500);
  background(0);
  setupSerial();
}

void draw() {
  updateSerial();
  printArray(sensorValues);
  stroke(255);
  // draw from the previously received point to the new one
  line(x2, y2, sensorValues[0], sensorValues[1]);
  // remember this point for the next frame
  x2 = sensorValues[0];
  y2 = sensorValues[1];
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[5], 9600);
  myPort.clear();
  myString = myPort.readStringUntil( 10 ); 
  myString = null;
  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n'  Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Result:

Part 2: Musical Instrument
Arduino Schematic:

Arduino Code:

#define NUM_OF_VALUES 2

int tempValue = 0;
int valueIndex = 0;

int values[NUM_OF_VALUES];

void setup() {
  Serial.begin(9600);
  pinMode(13, OUTPUT);
}

void loop() {
  getSerialData();

  // values[0] = mouseX from Processing (used as the tone's pitch),
  // values[1] = mouseY. The original check of values[2] read past the
  // end of the 2-element array, so gate the tone on the pitch instead.
  if (values[0] > 0) {
    tone(9, values[0]);
  } else {
    noTone(9);
  }
}


void getSerialData() {
  if (Serial.available()) {
    char c = Serial.read();
    switch (c) {
      case '0'...'9':

        tempValue = tempValue * 10 + c - '0';
        break;
      case ',':
        values[valueIndex] = tempValue;
        tempValue = 0;
        valueIndex++;
        break;
      case 'n':
        values[valueIndex] = tempValue;

        tempValue = 0;
        valueIndex = 0;
        break;

      case 'e':
        for (int i = 0; i < NUM_OF_VALUES; i++) {
          Serial.print(values[i]);
          if (i < NUM_OF_VALUES - 1) {
            Serial.print (",");
          }
          else {
            Serial.println();
          }
        }
        break;
    }
  }
}
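The switch in getSerialData() is effectively a small state machine over the characters Processing sends: digit characters accumulate a number, ',' commits it and moves to the next slot, and 'n' commits the last value and resets for the next message. As a sanity check, the same parsing logic can be written and tested off-board in plain C (the 'e' echo case is omitted here since it needs the serial port; the struct and function names are my own):

```c
#include <assert.h>

#define NUM_OF_VALUES 2

/* Parser state mirroring the Arduino globals. */
typedef struct {
    int values[NUM_OF_VALUES];
    int tempValue;
    int valueIndex;
} Parser;

/* Handle one incoming character, exactly like the switch in
 * getSerialData(). */
void feedOne(Parser *p, char c) {
    if (c >= '0' && c <= '9') {
        p->tempValue = p->tempValue * 10 + (c - '0');
    } else if (c == ',') {
        p->values[p->valueIndex] = p->tempValue;
        p->tempValue = 0;
        p->valueIndex++;
    } else if (c == 'n') {
        p->values[p->valueIndex] = p->tempValue;
        p->tempValue = 0;
        p->valueIndex = 0;
    }
}

/* Feed a whole message, character by character. */
void feed(Parser *p, const char *s) {
    while (*s) {
        feedOne(p, *s++);
    }
}
```

Feeding the string "123,45n" leaves 123 and 45 in the two slots, matching what the Arduino sketch stores when Processing sends those bytes.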

Processing Code:

import processing.serial.*;

int NUM_OF_VALUES = 2;

Serial myPort;
String myString;

int values[] = new int[NUM_OF_VALUES];

void setup() {
  size(500, 500);
  background(0);

  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[5], 9600);

  myPort.clear(); 
  myString = myPort.readStringUntil(10); 
  myString = null;
}

void draw() {
  noStroke();
  fill(0, 30);
  rect(0, 0, 500, 500);

  strokeWeight(5);
  stroke(255);
  line(pmouseX, pmouseY, mouseX, mouseY);

  values[0] = mouseX;
  values[1] = mouseY;

  sendSerialData();
  echoSerialData(200);
}

void sendSerialData() {
  String data = "";
  for (int i=0; i<values.length; i++) {
    data += values[i];
    if (i < values.length-1) {
      data += ",";
    } else {
      data += "n";
    }
  }

  myPort.write(data);
}

void echoSerialData(int frequency) {
  if (frameCount % frequency == 0) myPort.write('e');

  String incomingBytes = "";
  while (myPort.available() > 0) {

    incomingBytes += char(myPort.read());
  }

  print(incomingBytes);
}