CONCEPTION AND DESIGN:
I read a short story by Mu Shiying called "Shanghai Fox-Trot". In it, he uses sounds and dialogue to portray Shanghai in the 1930s. While reading, I could imagine a woman in a red skirt dancing the fox-trot through the grey city. Following her, we see people in every condition of life, from the poorest to the most extravagant.
So I tried to make us the dancers. Our steps and arm lifts trigger different sounds, and when we bend our bodies the quality of the sound changes, as if we are fed up with routine life and trying to find a new way out.
In particular, I chose body movement as the medium of interaction. I thought this would increase the users' sense of participation and allow a wide range of responses, since everyone moves differently. The sensors that detect steps are placed on the floor, and I arranged the step points along a curve so that the user feels like they are dancing through the city. The second sensor works as a digital signal that detects position, so I planned to tie it to the arm, where there is a clear change of position. These two types of sensors work in pairs and trigger sounds that relate to each other. The third sensor detects bending and is worn on the knee, the joint we use most frequently and the one with the largest range of motion; its analog value changes the playback speed of the sounds.
During user testing, I received many suggestions about the logical connection between the movements, sounds, and frames. Users also advised me to build a storyline across the different steps. I thought these were good suggestions, so I increased the number of steps from three to five and remade the sounds and videos.
The final version was as follows.
Square 1: traffic flow + a car horn, representing the way to work.
Square 2: typing on a keyboard + throwing away waste paper, representing work.
Square 3: a metro running + the metro doors closing, representing the way home.
Square 4: rapid steps in the rain + raindrops falling on an umbrella, representing difficulties and a bad mood.
Square 5: parents chatting + clinking glasses, representing home.
FABRICATION AND PRODUCTION:
1. Selection of components:
I chose force-sensing resistors (FSRs) to detect the steps. However, after the fabrication design (which I discuss later), they could have been replaced by copper tape contacts. Copper tape would be simpler and more stable on the coding side, since a contact only sends 0 or 1 instead of an analog value that needs a threshold, but because of the time limit I gave up that option.
I chose the flex sensor to detect bending. At first I placed the sensor under the knee, but during user testing I found that users bent their knees so hard that they risked breaking the sensor. So I moved it to the front of the knee, where the bend is physically limited, although the sensitivity decreased because the value range was reduced.
I chose a tilt sensor to tie on the arm; it can be read in the same way as a switch. (A short sketch of how these three kinds of readings drive the program is shown below.)
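To make the roles of the three sensor types concrete, here is a minimal Processing sketch. The numeric values are hypothetical placeholders standing in for the data that, in the real project, arrives over serial from the Arduino, and the thresholds and flex range are assumptions for illustration only.

// hypothetical stand-in values; in the project these arrive over serial from Arduino
int fsrValue  = 340;   // analog, 0-1023: is someone standing on this square?
int tiltValue = 1;     // digital, 0 or 1: is the arm raised?
int flexValue = 780;   // analog: how far the knee is bent

void setup() {
  size(200, 200);
}

void draw() {
  // FSR: an analog reading compared against a threshold
  // (a copper-tape contact would simply send 0 or 1 and need no threshold)
  boolean onSquare = fsrValue >= 100;

  // tilt sensor: read exactly like a switch
  boolean armRaised = (tiltValue == 1);

  // flex sensor: a continuous value mapped to playback speed
  float speed = map(flexValue, 760, 800, 0.5, 5);

  println(onSquare, armRaised, speed);
}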
2. Sounds and videos:
Most of the sounds and videos came from the "Jianying" (CapCut) app. For the "home" piece, I asked some friends to record their grandparents speaking in their local dialects. I used a recording of my grandpa telling a story about the two of us as the main audio track so that it would not confuse the audience, and I used a video of my grandpa's birthday celebration from when I was a kid. The inspiration came from my grandpa, and I thought it would evoke a sense of warmth and gathering.
3. Coding:
The first task was to design how the sound would change. My original plan was to change the pitch of the sound (the harder the flex, the more strident the sound). I tried using the SawWave example as a reference but found that the frequency function cannot be used on my sound files. With Andy's help, I learned that this is because a recorded sound file contains many overlapping waves, which are hard to separate and edit one by one with a single frequency function. Fortunately, changing the playback speed could also make my point. The next challenge was managing the different conditions. The lesson here was to always consider the previous state: checking "sound.isPlaying() == false" keeps the sound from restarting on every frame, so it only starts again once the previous playback is over.
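To illustrate this pattern in isolation, here is a minimal Processing sketch using the processing.sound library. The file name "example.wav" and the flex range are placeholders, and the mouse stands in for the flex sensor purely for demonstration.

import processing.sound.*;

SoundFile sound;

void setup() {
  size(400, 400);
  sound = new SoundFile(this, "example.wav");  // placeholder file name
}

void draw() {
  background(0);
  // simulate the flex reading with the mouse; in the project it comes from Arduino
  float flexValue = map(mouseX, 0, width, 760, 800);

  // start the sound only when the previous playback has finished,
  // otherwise it would restart on every frame
  if (sound.isPlaying() == false) {
    sound.play();
  }

  // map the bend to playback speed instead of editing the frequency directly
  float speed = map(flexValue, 760, 800, 0.5, 5);
  sound.rate(constrain(speed, 0.5, 5));
}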
Arduino code:
// the tilt sensor is attached to digital pin 2:
int tiltsensor = 2;
void setup() {
pinMode(tiltsensor, INPUT);
Serial.begin(9600);
}
void loop() {
// to send values to Processing assign the values you want to send
//FSR1:
int sensorValue1 = analogRead(A0);
//flex sensor:
int sensorValue2 = analogRead(A1);
//tilt sensor:
int sensorValue3 = digitalRead(tiltsensor);
//FSR2:
int sensorValue4 = analogRead(A2);
//FSR3:
int sensorValue5 = analogRead(A3);
//FSR4:
int sensorValue6 = analogRead(A4);
//FSR5:
int sensorValue7 = analogRead(A5);
// send the values keeping this format
Serial.print(sensorValue1);
Serial.print(",");
Serial.print(sensorValue2);
Serial.print(",");
Serial.print(sensorValue3);
Serial.print(",");
Serial.print(sensorValue4);
Serial.print(",");
Serial.print(sensorValue5);
Serial.print(",");
Serial.print(sensorValue6);
Serial.print(",");
Serial.print(sensorValue7);
Serial.println();
delay(20); // delay in between reads for stability
// too fast communication might cause some latency in Processing
// this delay resolves the issue
// end of example sending values
}
Processing code:
import processing.serial.*;
import processing.sound.*;
import processing.video.*;

Serial serialPort;

SoundFile sound1;
SoundFile sound2;
SoundFile sound3;
SoundFile sound4;
SoundFile sound5;
SoundFile sound6;
SoundFile sound7;
SoundFile sound8;
SoundFile sound9;
SoundFile sound10;

Movie video1;
Movie video2;
Movie video3;
Movie video4;
Movie video5;
Movie plus1;
Movie plus2;
Movie plus3;
Movie plus4;
Movie plus5;

int NUM_OF_VALUES_FROM_ARDUINO = 7; /* CHANGE THIS ACCORDING TO YOUR PROJECT */
int threshold;

/* This array stores values from Arduino */
int arduino_values[] = new int[NUM_OF_VALUES_FROM_ARDUINO];

float speed;
float flex = 760;
float FLEX = 800;

void setup() {
  fullScreen();
  background(0);
  sound1 = new SoundFile(this, "car.WAV");
  sound2 = new SoundFile(this, "horn.wav");
  sound3 = new SoundFile(this, "writing&typing.WAV");
  sound4 = new SoundFile(this, "throw.WAV");
  sound5 = new SoundFile(this, "metro.WAV");
  sound6 = new SoundFile(this, "door_close.WAV");
  sound7 = new SoundFile(this, "running2.WAV");
  sound8 = new SoundFile(this, "umbrella.WAV");
  sound9 = new SoundFile(this, "talking.WAV");
  sound10 = new SoundFile(this, "cheering.WAV");
  video1 = new Movie(this, "cars.mov");
  video2 = new Movie(this, "keyboard_v.mov");
  video3 = new Movie(this, "metro_v.mov");
  video4 = new Movie(this, "raining_video.mov");
  video5 = new Movie(this, "birthday.mov");
  plus1 = new Movie(this, "horn_v.mov");
  plus2 = new Movie(this, "throw_v.mov");
  plus3 = new Movie(this, "door_close_v.mov");
  plus4 = new Movie(this, "umbrella_v.mov");
  plus5 = new Movie(this, "gathering.mov");
  frameRate(30);
  video1.loop();
  video2.loop();
  video3.loop();
  video4.loop();
  video5.loop();
  plus1.loop();
  plus2.loop();
  plus3.loop();
  plus4.loop();
  plus5.loop();

  printArray(Serial.list());
  // put the name of the serial port your Arduino is connected
  // to in the line below - this should be the same as you're
  // using in the "Port" menu in the Arduino IDE
  serialPort = new Serial(this, "/dev/cu.usbmodem1101", 9600);
}

void draw() {
  // receive the values from Arduino
  getSerialData();

  int Step1 = arduino_values[0];
  float Step2 = arduino_values[3];
  float Step3 = arduino_values[4];
  float Step4 = arduino_values[5];
  float Step5 = arduino_values[6];
  int button = arduino_values[2];

  // basic state 1
  // when standing on the square, the sound and the video play.
  if (Step1 >= 100) {
    speed = map(arduino_values[1], flex, FLEX, 0.5, 5);
    if (sound1.isPlaying() == false) {
      sound1.play();
    }
    if (video1.available()) {
      video1.read();
    }
    image(video1, 0, 0, width, height);
    // raise the arm, new sound and new video add to the basic state
    if (button == 1) {
      if (sound2.isPlaying() == false) {
        sound2.play();
      }
      if (plus1.available()) {
        plus1.read();
      }
      image(plus1, 0, 0, width, height);
      tint(255, 100);
    } else {
      sound2.stop();
    }
  } else {
    sound1.stop();
    sound2.stop();
  }
  // bend the knee, the speed of the sound and video change.
  if (arduino_values[1] > flex) {
    sound1.rate(speed);
    video1.speed(speed);
  } else {
    // back to the basic state
    speed = 1;
    sound1.rate(speed);
    video1.speed(speed);
  }

  // basic state 2
  if (Step2 >= 100) {
    speed = map(arduino_values[1], flex, FLEX, 0.5, 5);
    if (sound3.isPlaying() == false) {
      //sound3.amp(0.8);
      sound3.play();
    }
    if (video2.available()) {
      video2.read();
    }
    image(video2, 0, 0, width, height);
    if (button == 1) {
      if (sound4.isPlaying() == false) {
        sound4.play();
      }
      if (plus2.available()) {
        plus2.read();
      }
      image(plus2, 0, 0, width, height);
      tint(255, 128);
    } else {
      sound4.stop();
    }
  } else {
    sound3.stop();
  }
  if (arduino_values[1] > flex) {
    sound3.rate(speed);
    video2.speed(speed);
  } else {
    speed = 1;
    sound3.rate(speed);
    video2.speed(speed);
  }

  // basic state 3
  if (Step3 >= 300) {
    if (sound5.isPlaying() == false) {
      //sound5.amp(0.8);
      sound5.play();
    }
    if (video3.available()) {
      video3.read();
    }
    image(video3, 0, 0, width, height);
    if (button == 1) {
      if (sound6.isPlaying() == false) {
        sound6.play();
      }
      if (plus3.available()) {
        plus3.read();
      }
      image(plus3, 0, 0, width, height);
      tint(255, 130);
    } else {
      sound6.stop();
    }
  } else {
    sound5.stop();
  }
  if (arduino_values[1] > flex) {
    speed = map(arduino_values[1], flex, FLEX, 0.5, 5);
    sound5.rate(speed);
    video3.speed(speed);
  } else {
    speed = 1;
    sound5.rate(speed);
    video3.speed(speed);
  }

  // basic state 4
  if (Step4 >= 50) {
    if (sound7.isPlaying() == false) {
      //sound5.amp(0.8);
      sound7.play();
    }
    if (video4.available()) {
      video4.read();
    }
    image(video4, 0, 0, width, height);
    if (button == 1) {
      if (sound8.isPlaying() == false) {
        sound8.play();
        sound7.amp(0.3);
      }
      if (plus4.available()) {
        plus4.read();
      }
      image(plus4, 0, 0, width, height);
      tint(255, 130);
    } else {
      sound8.stop();
      sound7.amp(1);
    }
  } else {
    sound7.stop();
  }
  if (arduino_values[1] > flex) {
    speed = map(arduino_values[1], flex, FLEX, 0.5, 5);
    sound7.rate(speed);
    video4.speed(speed);
  } else {
    speed = 1;
    sound7.rate(speed);
    video4.speed(speed);
  }

  // basic state 5
  if (Step5 >= 80) {
    if (sound9.isPlaying() == false) {
      sound9.play();
    }
    if (video5.available()) {
      video5.read();
    }
    image(video5, 0, 0, width, height);
    if (button == 1) {
      if (sound10.isPlaying() == false) {
        sound10.play();
      }
      if (plus5.available()) {
        plus5.read();
      }
      image(plus5, 0, 0, width, height);
      tint(255, 130);
    } else {
      sound10.stop();
    }
  } else {
    sound9.stop();
  }
  if (arduino_values[1] > flex) {
    speed = map(arduino_values[1], flex, FLEX, 0.6, 5);
    sound9.rate(speed);
    video5.speed(speed);
  } else {
    speed = 1;
    sound9.rate(speed);
    video5.speed(speed);
  }
}

// the helper function below receives the values from Arduino
// in the "arduino_values" array from a connected Arduino
// running the "serial_AtoP_arduino" sketch
// (You won't need to change this code.)
void getSerialData() {
  while (serialPort.available() > 0) {
    String in = serialPort.readStringUntil( 10 );  // 10 = '\n' Linefeed in ASCII
    if (in != null) {
      print("From Arduino: " + in);
      String[] serialInArray = split(trim(in), ",");
      if (serialInArray.length == NUM_OF_VALUES_FROM_ARDUINO) {
        for (int i=0; i<serialInArray.length; i++) {
          arduino_values[i] = int(serialInArray[i]);
        }
      }
    }
  }
}
My original plan was to cover each FSR with an overlay (which is why I chose force sensors instead of copper tape). However, feedback from the user test showed that this setup was tiring to assemble and that exposed sensors looked untidy. As an improvement, I attached all the FSRs to one piece of cardboard and covered them with 1.5 mm wood boards. Between each wood board and the cardboard I used foam tape, both to hold them together and to make the difference between the pressed and unpressed states clearer. I also added some decoration to hint at the storyline and the interactive function.
For the wearable sensors, I sewed the tilt sensor onto an elastic strap that attaches to the arm and sewed the flex sensor onto a kneepad.
Finally, I hid the Arduino and breadboard in a box.
CONCLUSIONS:
For the interactive part, I think the project is quite successful, because my users were all surprised when they found their movements tied to a digital response. Compared with my midterm project, the interaction is richer because the response is no longer triggered only by fingers; the whole body is involved. Each movement also has its own meaning: the sound and scene that appear when raising the arm correspond to what would happen if we really raised our arm in that situation. Beyond simple communication between people and computers, telling a story through an interactive device is a whole new experience.
For the storytelling, I found that users tended to stay longer on the last square, as if pausing to think back over what has happened across a whole day, and I enjoyed connecting with the audience through such an ordinary little thing. At the IMA show, some users asked "what is it used for?", so I had to explain the background, my aims, and the story again and again. I don't think this points to a flaw in my design or setup, though. I believe we don't need to express our thoughts too directly in a project; both as the designer and as the audience, I enjoy the moment of sudden realization. If users take more time to think about the project, we might find more connections with each other.
From this experience, I learned that appearance is as important as functionality. Previously, I paid more attention to the code and components. After finishing the final project, I found that a beautiful and neat appearance enhances both the attractiveness and the integrity of the project, and it also makes it easier to check whether something is wrong with the circuit.
If I had more time, I would fix the existing problems first. Replacing the FSRs with copper tape would avoid having to readjust the project every time I start it, and repositioning the projector would help the audience see the scene more clearly. I would also like to improve the code so the program does not get stuck while running. To strengthen the expression of the project, I would add more wearable sensors to enrich the ways the output can change, and add more sounds to see whether the sounds of city life can be composed into a piece of music, meaning we are not only finding a way out of routine life but also finding joy in it.
DISASSEMBLY:
Acknowledgement:
Thanks to my grandparents for the Beijing and Chaoshan dialect recordings; to Hong Jiayi and her grandparents for the Beijing dialect; to Liu Yuqing and their grandpa for the Xi'an dialect; to Wu Zhenzhen and her mother for the Shanghai dialect; to Shi Xiaoyu and Liu Yiqiao for Cantonese; and to Zhao Yichen and her grandmother for the Nanjing dialect.