Final Project

  1. FANtabulous-Maggie Wang-Gottfried Haider
  2. Chinese fans carry a rich cultural history and many connotations, yet they are usually presented in static displays, and fan culture deserves more dynamic and engaging forms of expression. Our project aims to enable intriguing interactions between the audience and the scene, creating a multi-sensory immersive experience around Chinese fans. Through movements such as fanning, patting, and splitting, participants use a fan as an interactive prop to experience the fusion and harmony between the fan and traditional culture. We wanted to make a project that requires interaction between the fan and a projection so that the audience can experience the charm of Chinese fan culture. We constructed a projection screen and placed several sensors behind it: when the hidden sensors detect the fan's movements, they send signals to Processing, which controls the images, so the way the user moves the fan changes what appears on the screen. Users mainly pat and fan (to make the butterflies move or to split a rock), both of which are conventional fan movements. To accomplish this, we used Processing to create the various projection scenes and placed the sensors behind the screen. The topic of the project, Chinese fan culture, was inspired by our preparatory research, during which we found a project about Chinese oil-paper umbrellas.
    It's an interactive project that introduces new ways to engage with traditional Chinese oil-paper umbrellas by incorporating technology, interactivity, and modern design elements. That project uses AR and VR technology to give users an immersive experience of Chinese oil-paper umbrellas and the beauty and cultural heritage behind them. Similarly, we can use Arduino and Processing to achieve a comparable effect: users get their hands on a Chinese fan and explore the scene by making different movements. The idea of using a projection and moving the fan in front of the screen also comes from the Interactive Light Painting - Pu Gong Ying Tu (Dandelion Painting) project, in which an image of dandelions is printed on a piece of fabric.
    The sensors and LEDs sit behind the fabric. It uses a distance sensor: when a person comes close enough and exhales on a dandelion, the white LEDs, which represent the seed heads, fly away. This gave us the inspiration to use the fan to affect the images on the screen. In our project, users can engage with the fan in several ways, including fanning, patting, and splitting. By moving the fan in different ways, the user sends a message to the installation, and the installation replies through changes in the image, so there is real communication between the user and the machine. During the user-testing session, we received feedback that the instructions weren't clear and we had to explain to users what to do with the fan. Another problem was that the sensors weren't sensitive enough. We had also added sound effects, but the scene was so noisy that users could barely hear them.

    Based on this feedback, we made several adaptations. To start with, we added instructions by putting text beside each related image. When the user starts fanning and the sensor detects it, the text changes, confirming that the fanning is working. We also added a bar to indicate how much wind is being produced. For the sound problem, we borrowed a speaker to create a more immersive environment. Taking the suggestion that we decorate the frame, we stuck flowers on the wooden frame to fit the painting. We also kept adjusting the sensor data to make the readings more precise. These turned out to be effective adaptations.
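    One part of "adjusting the data" is simply steadying it. As a minimal sketch of the idea (not our exact code), the Arduino snippet below averages several quick flex readings before sending one value to Processing; the A0 pin matches our wiring, while the sample count and delays are illustrative assumptions.

// A minimal sketch: steady a jumpy flex reading by averaging several
// quick samples before sending it over serial. Only the A0 pin matches
// our actual wiring; the sample count and delays are illustrative.
const int flexPin = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  long sum = 0;
  for (int i = 0; i < 8; i++) {  // take 8 quick samples
    sum += analogRead(flexPin);
    delay(2);
  }
  int flexValue = sum / 8;       // the averaged reading jitters less
  Serial.println(flexValue);
  delay(50);
}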

     

  3. The first important fabrication step was making the frame. We sought help from Andy with the wooden frame: he cut the boards to the desired size, we glued them together, and then we attached the fabric to the frame. The major step after that was attaching the sensors. This sounded easy, but it turned out to be really hard. We spent a lot of time figuring out how to mount the flex sensor so that it could detect the movement of the fabric. We had a perfect image in mind, but reality didn't match it. At first we tried sticking the sensor directly onto the fabric, but the change in the serial-monitor readings wasn't obvious enough. We tried placing it under the stick, but that didn't work well either. We considered switching to another sensor if this couldn't be worked out, but finally decided to stick with the flex sensor. We came up with the idea of using a board to stabilize it: I reused a board left over from the popcorn session, stuck the sensor to the board, stuck the boards together, and attached the other end of the sensor to the fabric. The result turned out to be very good.

    Besides the flex sensor, we also used a distance sensor and a vibration sensor in a similar way. I placed the distance sensor in the middle of the stick, and we measured the readings when we put our hands in the middle of the stone, then widened the accepted range to get a more reliable effect. For the vibration sensor, I discovered that tapping it made a huge difference in the readings, so we simply stuck it on the fabric. To make its function more obvious, we printed an image of a snowflake, stuck it on the fabric, and added the text "Hit it" next to it. However, during the presentation I could see that the text was too small for users to notice, and we still had to explain how to use the project.

    There were many ways we could have drawn the visuals in Processing, but we decided to use what we learned in class: the butterfly is a video, the splitting stone is a photo covered by two photos, and the tree and the snow are coded. We managed to reach a relatively good visual effect in the simplest way.

    An important decision we made was to leave the fan free and attach everything, sensors and actuators included, to the screen. We did this because we wanted users to feel like they were actually fanning; cables and noticeable sensors on the fan would have ruined the feeling we were trying to create. The goal was to blend the sensors into the screen so that users wouldn't feel they were activating a sensor; instead, they would simply use the fan to manipulate the image. We wanted users to be unable to tell how we made it happen, and the result was rather successful.

    Another major problem we faced was how to stabilize the screen. Thanks to Gohai's advice and help, we hung the screen from the sticks using fishing line. One drawback was that we could no longer move the project anywhere else, and it took us a long time to place it in the right position. The placement of the projector was also tricky: it was hard to settle on a specific spot, and we kept adjusting it to get a fuller projection. Our cables were suspended in the air and fell apart easily, and we couldn't lay them on the table, so I taped them all down.
    One thing I learned from another group was that I could braid all the cables together. I also used hot glue on the breadboard, because the cables attached to it fell out easily as well; I learned that from another group too. The original design had only two sensors, the flex sensor and the distance sensor. We were sure about the flex sensor, because it was the best way we could think of to create the effect of fanning. One problem with the flex sensor, though, is that it responds to the movement of the whole fabric: when we fan, the data it collects is similar across the different parts of the fabric, so we couldn't use it alone to control all the different parts of the painting. We therefore needed other sensors to create more interactions. We considered the distance sensor, the light sensor, and a laser sensor. The distance sensor was perfect for the action of splitting the stone; the one-shot trigger idea behind it is sketched after this paragraph. The light sensor wasn't a good choice since we were using projection. We decided on the vibration sensor at the last minute. It was effective, but it also had problems: it felt separated from the other two sensors, so even with instructions, users weren't sure how to make it snow. It might have been better to find more ways to get different data from simply fanning different parts of the fabric.
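    To make the "widened range" concrete, here is a minimal, hedged sketch of the one-shot trigger for the stone split: it reacts only when the distance reading first enters the window, rather than on every reading while the fan stays there. The pins follow our wiring (trig on A2, echo on A1); the 50-70 cm window stands in for the range we actually measured.

// A sketch of a one-shot trigger for the stone split: react only when
// the distance reading first enters the widened window. Pins follow our
// wiring (trig on A2, echo on A1); the 50-70 cm window is illustrative.
const int trigPin = A2;
const int echoPin = A1;
int prevDistance = 999;  // start well outside the window

int readDistance() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH);
  return duration * 0.034 / 2;  // echo time (us) to centimeters
}

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  int distance = readDistance();
  bool inWindow = distance > 50 && distance < 70;
  bool wasInWindow = prevDistance > 50 && prevDistance < 70;
  if (inWindow && !wasInWindow) {
    Serial.println("split!");  // fires once per entry into the window
  }
  prevDistance = distance;
  delay(50);
}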

  4. The original goal was to make a project that creates interaction between the fan, which represents the user, and the projection, which represents the computer, so that the audience can experience the charm of Chinese fan culture. I think we managed to reach that goal. Our projection is beautiful in my eyes, and the audience used the fan in different ways to trigger the various effects on the screen: when people fan, the leaves fall and the butterflies fly up; when they place the fan between the halves of the stone, the stone splits; when they hit the snowflake, it snows. The audience can communicate with the screen: they grasp the meaning of the movements, and the screen provides timely feedback. One improvement we could make is to render the stone splitting more obvious, for instance by creating a prompt or a scenario for the audience: the stone could glow, and the user would need to split it to see what's inside.

    We could also make better use of text to guide the user through the different steps, instead of letting them act randomly. That way the interaction would be more organized, and the audience would feel more like they were completing a mission. What we should keep perfecting is the precision of the sensors and their response speed; there are still ways to make the interaction more fluent. What's more, we could think about how to stick to the fanning action, the most basic action of a fan, and build the interactions on it alone so that the other parts wouldn't feel separated: the audience could accomplish every effect by fanning different areas of the screen. That way the project would make more sense and be easier to understand. The creation process was difficult and time-consuming, but one takeaway is that I didn't expect us to accomplish this project in such limited time, and the result matches our ideal image of it. In our midterm project, our first plan was impractical and we had to move to plan B; for this final project, we were lucky that our first decision worked out well. I integrated the knowledge I learned throughout the semester to complete this project. It matters because our idea of combining Arduino and Processing was successful: we used Processing for the images and the sound effects, and we came up with an innovative way to use the fan to drive the sensors. What is most important is that we explored traditional Chinese culture and integrated it into our own project.

  5.

Processing:

import processing.video.*;
import processing.serial.*;
import processing.sound.*;
PImage photo1;
PImage photo2;
PImage photo3;
Serial serialPort;
SoundFile sound;
SoundFile rock;
SoundFile leafsound;
int NUM_OF_VALUES_FROM_ARDUINO = 3;
int arduino_values[] = new int[NUM_OF_VALUES_FROM_ARDUINO];
float startTime;
int prev_values[] = new int[NUM_OF_VALUES_FROM_ARDUINO];
float dropX, dropY;
float x=0;
int y=0;
int z=0;
int dropCount = 200;

int dropFrame = 0;
Movie myMovie;
color dropColor = color(255);
ArrayList<Branch> branches = new ArrayList<Branch>();
ArrayList<Leaf> leaves = new ArrayList<Leaf>();
int maxLevel = 9;
Drop[] drops = new Drop[dropCount];
int dropState = 0; // 0 = not snowing, 1 = snowing
void setup() {
size(1200, 1000);
background(0);
//sound = new SoundFile(this, "bgm2.mp3");
//sound.amp(0.3);
//sound.loop();
rock = new SoundFile(this, "stonespilt.mp3");
leafsound = new SoundFile(this, "leafsound.mp3");

photo1 = loadImage("mountain.png");
photo2 = loadImage("stone1.png");
photo3 = loadImage("stone2.png");
printArray(Serial.list());
serialPort = new Serial(this, "/dev/cu.usbmodem1101", 9600);
myMovie = new Movie(this, "butterfly.mp4");
myMovie.loop();
colorMode(HSB, 100);
generateNewTree();
for (int i = 0; i < drops.length; i++) {
drops[i] = new Drop(random(width), random(-200, -50));
}
}

void draw() {
background(0);
getSerialData();

//pushMatrix();
translate(width, 0);
scale(-1, 1);

if (myMovie.available()) {
myMovie.read();
image(photo1, 0, 100, 1200, 600);
image(photo2, 400-y, 280, 300, 400);
image(photo3, 650+y, 280, 300, 400);
image(myMovie, 50, 500+z*5, 80, 80);
image(myMovie, 150, 500+z*5, 130, 130);
image(myMovie, 100, 400+z*5, 115, 115);

if (arduino_values[0]<120) {
z=-125+arduino_values[0];
}

if (arduino_values[0] < 120 && leafsound.isPlaying() == false) {

leafsound.play();
}

if (arduino_values[1]>55 && prev_values[1] >= 50 && arduino_values[1]<70 &&rock.isPlaying() == false) {
y=200;
rock.play();
startTime=millis();
}
if (millis()-startTime>=4000) {
y=0;
}

for (int i = 0; i < branches.size(); i++) {
Branch branch = branches.get(i);
branch.move();
branch.display();
}
for (int i = leaves.size()-1; i > -1; i--) {
Leaf leaf = leaves.get(i);
leaf.move();
leaf.display();

leaf.destroyIfOutBounds();
}
}
if (arduino_values[0]<120) {
for (Leaf leaf : leaves) {
PVector explosion = new PVector(leaf.pos.x, leaf.pos.y);
explosion.normalize();
explosion.setMag(0.01);
leaf.applyForce(explosion);
leaf.dynamic = true;
}

}

// copy the current values into the previous values
// so that next time in draw() we have access to them
// (this runs every frame, outside the if above)
for (int i=0; i < NUM_OF_VALUES_FROM_ARDUINO; i++) {
prev_values[i] = arduino_values[i];
}
// a hit on the vibration sensor starts the snow
if (arduino_values[2]>30 && dropState == 0) {
dropState = 1;
}

if ( dropState ==1 ) {
for (int i = 0; i < drops.length; i++) {
drops[i].display();
drops[i].update();
}
dropFrame = dropFrame + 1;
}
if (dropFrame == 300) {
dropFrame = 0;
dropState = 0;
}

//popMatrix();
pushMatrix();
fill(255, 0, 255);
textSize(30);
text("Explore the 3 triggers in this picture!", 10, 50);
rect(200, 800, (138-arduino_values[0])*2, 10); // wind-strength bar driven by the flex reading
noFill();
stroke(255, 0, 255);
rect(200, 800, (138-70)*2, 10); // outline marking the bar's maximum
//textSize(20);
//text("Wind Strength", 860, 910);
if (arduino_values[0]<120) {
textSize(20);
//fill(129,99,99);
text("They are flying!", 140, 600);
text("They are falling!", 1000, 840);
} else {
textSize(20);
text("Help the butterflies fly!", 140, 600);
text("Fan off the leaves!", 1000, 840);
}
if (arduino_values[1]>60 && prev_values[1] >= 50 && arduino_values[1]<70) {
//text("Good job!", 500, 450);
} else {
text("Split the stone!", 500+x, 450+x);
x=random(-3, 3);
}
popMatrix();
}

void generateNewTree() {
branches.clear();
leaves.clear();
float rootLength = 150;
branches.add(new Branch(width/1.2, height, width/1.2, height-rootLength, 0, null));
subDivide(branches.get(0));
}

void subDivide(Branch branch) {
ArrayList<Branch> newBranches = new ArrayList<Branch>();
int newBranchCount = (int)random(1, 4);
switch(newBranchCount) {
case 2:
newBranches.add(branch.newBranch(random(-45.0, -10.0), 0.8));
newBranches.add(branch.newBranch(random(10.0, 45.0), 0.8));
break;
case 3:
newBranches.add(branch.newBranch(random(-45.0, -15.0), 0.7));
newBranches.add(branch.newBranch(random(-10.0, 10.0), 0.8));
newBranches.add(branch.newBranch(random(15.0, 45.0), 0.7));
break;
default:
newBranches.add(branch.newBranch(random(-45.0, 45.0), 0.75));
break;
}
for (Branch newBranch : newBranches) {
branches.add(newBranch);

if (newBranch.level < maxLevel) {
subDivide(newBranch);
} else {
float offset = 5.0;
for (int i = 0; i < 5; i++) {
leaves.add(new Leaf(newBranch.end.x+random(-offset, offset), newBranch.end.y+random(-offset, offset), newBranch));
}
}
}
}
class Branch {
PVector start;
PVector end;
PVector vel = new PVector(0, 0);
PVector acc = new PVector(0, 0);
PVector restPos;
int level;
Branch parent = null;
float restLength;

Branch(float _x1, float _y1, float _x2, float _y2, int _level, Branch _parent) {
this.start = new PVector(_x1, _y1);
this.end = new PVector(_x2, _y2);
this.level = _level;
this.restLength = dist(_x1, _y1, _x2, _y2);
this.restPos = new PVector(_x2, _y2);
this.parent = _parent;
}

void display() {
stroke(10, 30, 20+this.level*4);
strokeWeight(maxLevel-this.level+1);
if (this.parent != null) {
line(this.parent.end.x, this.parent.end.y, this.end.x, this.end.y);
} else {
line(this.start.x, this.start.y, this.end.x, this.end.y);
}
}
Branch newBranch(float angle, float mult) {
PVector direction = new PVector(this.end.x, this.end.y);
direction.sub(this.start);
float branchLength = direction.mag();
float worldAngle = degrees(atan2(direction.x, direction.y))+angle;
direction.x = sin(radians(worldAngle));
direction.y = cos(radians(worldAngle));
direction.normalize();
direction.mult(branchLength*mult);

PVector newEnd = new PVector(this.end.x, this.end.y);
newEnd.add(direction);

return new Branch(this.end.x, this.end.y, newEnd.x, newEnd.y, this.level+1, this);
}

void sim() {
PVector airDrag = new PVector(this.vel.x, this.vel.y);
float dragMagnitude = airDrag.mag();
airDrag.normalize();
airDrag.mult(-1);

airDrag.mult(0.05*dragMagnitude*dragMagnitude);

PVector spring = new PVector(this.end.x, this.end.y);
spring.sub(this.restPos);
float stretchedLength = dist(this.restPos.x, this.restPos.y, this.end.x, this.end.y);
spring.normalize();
float elasticMult = map(this.level, 0, maxLevel, 0.1, 0.2);
spring.mult(-elasticMult*stretchedLength);
}
void move() {
this.sim();
this.vel.mult(0.95);
if (this.vel.mag() < 0.05) {
this.vel.mult(0);
}
this.vel.add(this.acc);
this.end.add(this.vel);
this.acc.mult(0);
}
}

class Leaf {
PVector pos;
PVector originalPos;
PVector vel = new PVector(0, 0);
PVector acc = new PVector(0, 0);
float diameter;
float opacity;
float hue;
float sat;
PVector offset;
boolean dynamic = false;
Branch parent;

Leaf(float _x, float _y, Branch _parent) {
this.pos = new PVector(_x, _y);
this.originalPos = new PVector(_x, _y);
this.parent = _parent;
this.offset = new PVector(_parent.restPos.x-this.pos.x, _parent.restPos.y-this.pos.y);
if (leaves.size() % 5 == 0) {
this.hue = 2;
} else {
this.hue = random(75.0, 95.0);
this.sat = 50;
}
}
void display() {
noStroke();
fill(this.hue, 40, 100, 40);
ellipse(this.pos.x, this.pos.y, 5, 5);
}

void bounds() {
if (! this.dynamic) {
return;
}

float ground = height-this.diameter*0.5;

if (this.pos.y > height || this.pos.x > width || this.pos.y < 0 || this.pos.x < 0) {
this.vel.y = 0;
//this.vel.x *= 0.95;
this.vel.x = 0;
this.acc.x = 0;
this.acc.y = 0;
//this.pos.y = ground;
this.pos = this.originalPos;
this.dynamic = false;
}
}

void applyForce(PVector force) {
this.acc.add(force);
}

void move() {
if (this.dynamic) {
PVector gravity = new PVector(0, 0.005);
this.applyForce(gravity);

this.vel.add(this.acc);
this.pos.add(this.vel);
this.acc.mult(0);

this.bounds();
} else {
this.pos.x = this.parent.end.x+this.offset.x;
this.pos.y = this.parent.end.y+this.offset.y;
}
}
void destroyIfOutBounds() {
if (this.dynamic) {
if (this.pos.x < 0 || this.pos.x > width) {
leaves.remove(this);
}
}
}
}
float distSquared(float x1, float y1, float x2, float y2) {
return (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1);
}
class Drop {
float x, y, ySpeed;

Drop(float x_, float y_) {
x = x_;
y = y_;
ySpeed = random(5, 15);
}

void display() {
noStroke();
fill(dropColor);
ellipse(x, y, 5, 10);
}

void update() {
y += ySpeed;
// If the raindrop reaches the bottom of the screen, reset its position
if (y > height) {
y = random(-200, -50);
x = random(width);
}
}
}

void getSerialData() {
while (serialPort.available() > 0) {
String in = serialPort.readStringUntil( 10 ); // 10 = '\n', linefeed in ASCII
if (in != null) {
print("From Arduino: " + in);
String[] serialInArray = split(trim(in), ",");
if (serialInArray.length == NUM_OF_VALUES_FROM_ARDUINO) {
for (int i=0; i<serialInArray.length; i++) {
arduino_values[i] = int(serialInArray[i]);
}
}
}
}
}

Arduino:

const int flexPin = A0;
const int trigPin = A2;
const int echoPin = A1;
long duration;
int distance;
int flexValue;
void setup() {
Serial.begin(9600);
pinMode(trigPin, OUTPUT);
pinMode(echoPin, INPUT);
}
void loop() {
delay(20);
// digitalWrite(trigPin, LOW);
// delayMicroseconds(2);
// digitalWrite(trigPin, HIGH);
// delayMicroseconds(10);
// digitalWrite(trigPin, LOW);
// duration = pulseIn(echoPin, HIGH);
// distance = duration * 0.034 / 2;
// Serial.println(distance);
// delay(50);
// int flexValue= analogRead(A0);
// int distence= analogRead(A1);
flexValue = analogRead(flexPin);
delay(200);
digitalWrite(trigPin, LOW);
delayMicroseconds(2);
digitalWrite(trigPin, HIGH);
delayMicroseconds(10);
digitalWrite(trigPin, LOW);
duration = pulseIn(echoPin, HIGH);
distance = duration * 0.034 / 2;
int val = analogRead(3); // vibration sensor on A3
Serial.print(flexValue);
Serial.print(',');
Serial.print(distance);
Serial.print(',');
Serial.println(val, DEC);
delay(50);
}
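Each pass through loop() therefore sends one comma-separated line, flex value, distance, then vibration value, which getSerialData() in the Processing sketch splits back into the three integers. A line might look like this (the numbers are illustrative):

312,62,14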

Picture reference:

Tree code reference: https://blog.csdn.net/weixin_38937890/article/details/95176710

Video references:
https://www.bilibili.com/video/BV1t3411g7PT/?spm_id_from=333.880.my_history.page.click
https://www.bilibili.com/video/BV1GW41157FV/?spm_id_from=333.880.my_history.page.click&vd_source=f866fba9037e3c6bb5c31a3be8244b62



Midterm Project Report

  1. Squeeze Competition-Maggie Wang-Gohai
  2. CONTEXT AND SIGNIFICANCE.    The speed game we made in recitation gave me inspiration: the two competitors can do the same movements and interact with one another. I also researched an automatic plant-watering project, in which a moisture sensor detects the humidity; when there isn't enough water, the sensor sends out a signal and a water pump moves water into the pot. The sensor detects and responds to the environment and to human activity, and this is my idea of interaction: two players, or a sensor and a player, exchange information and react to each other. Our project is unique in how we make use of the moisture sensor. We don't put it in the soil; instead, we put it in a box of water to detect the water level (since the school has no water-level sensor). Our project is set in a scenario where an arsonist starts a fire and a firefighter tries to put it out. The target audience is two people who want to compete with each other.
  3. CONCEPTION AND DESIGN.   To start with, I wanted the users to know when the game starts and ends, so we added buzzers that ring twice at the beginning and at the end. We attached an LED strip to show the progress and the result: during the game, the LEDs blink between blue and red in a wave to show who has the advantage (one way to code this tug-of-war display is sketched after the CONCLUSIONS section below). This design directly shows the users how they're doing in the game, and the direction the LEDs move shows whether they're drawing enough water or need to squeeze more water into the box.

    To help users understand the scenario, we drew paintings of an arsonist and a firefighter, stuck them on cardboard, and placed them on the box. The LED strip suits our project because it's more noticeable than single LEDs and shows the progress of the squeezing clearly. To squeeze the water, we specifically chose square water bottles because they're easier to squeeze and make less noise than round ones. We used the tubes Gohai provided us, since they're stronger than straws. Another option might have been stretching balloons over the water bottles, so that pressing the balloon pushes the water out; we rejected it because we couldn't find a balloon that was flexible enough.

  4. FABRICATION AND PRODUCTION.    For the box and the sensor, we drilled the holes and placed the sensor quickly, but we struggled with how to move the water in and out. At first we tried to use a water pump, but we couldn't get it to work. I came up with the idea of using water bottles and straws to squeeze the water in and out. We first tried round bottles, but they made too much noise, so we decided on square water bottles, which are easier to squeeze. We spent a lot of time trying out the right bottle, drilling holes, and fastening the tubes and the bottles together.
    During the user-testing session, users pointed out that the box was too low and that participants didn't really understand what was going on: when they saw the two bottles, they didn't know what to do. So I added arrowheads on the tubes to indicate which way the water should go, and we also used the LED strip's indications to guide users through the squeezing process. We originally had only the words "arsonist" and "firefighter", but Gohai suggested we put up paintings or photos to demonstrate the roles more clearly. The adaptations we made were effective: users knew what to do without our explanations. Besides the fabrication, we also worked jointly on the code and electronics, and it took us a lot of time to get the code right.
  5. CONCLUSIONS.   The goal of our project was to explore the moisture sensor through the two-player game Squeeze Competition. The result shows interaction because the project uses the sensor, the buzzer, and the LEDs to communicate with the users. However, all the users can do is squeeze the bottles; it would be better if they had more ways of communicating. My audience interacted with the project well. The problem was with the bottles: they can be hard to squeeze, and water can spill out. The truth is that it's not the case that the harder you squeeze, the more water you move; the key is the air pressure. Many users mistakenly believed they needed to squeeze hard to move the water, so hard that they damaged the bottle. If there were more time, I would find an alternative to the water bottle, such as the rubber bulb used for measuring blood pressure: it might be easier to squeeze, the outcome would be more obvious, and no water would spill out. Our idea wasn't easy to accomplish. One thing I learned from the setbacks is to accept that our anticipation may not work out and we may need to move to plan B. We decided to explore the messiest element when it comes to electronic devices, water, and fortunately we didn't screw up. It was fantastic that we could use sensors and electronic devices to detect water.
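As referenced in CONCEPTION AND DESIGN above, here is a minimal sketch of one way the tug-of-war LED display could be coded. It assumes an addressable strip driven by the Adafruit_NeoPixel library on pin 6 with 30 pixels and the moisture sensor on A0; these pins, counts, and the direct mapping are illustrative assumptions, not our exact implementation.

#include <Adafruit_NeoPixel.h>

// illustrative wiring: 30-pixel addressable strip on pin 6,
// water-level (moisture) sensor on A0
const int STRIP_PIN = 6;
const int NUM_PIXELS = 30;
const int SENSOR_PIN = A0;

Adafruit_NeoPixel strip(NUM_PIXELS, STRIP_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.show(); // start with all pixels off
}

void loop() {
  // map the water-level reading onto the strip: the boundary between
  // blue (firefighter) and red (arsonist) moves as the level changes
  int level = analogRead(SENSOR_PIN);               // 0..1023
  int boundary = map(level, 0, 1023, 0, NUM_PIXELS);
  for (int i = 0; i < NUM_PIXELS; i++) {
    if (i < boundary) {
      strip.setPixelColor(i, strip.Color(0, 0, 255)); // firefighter side
    } else {
      strip.setPixelColor(i, strip.Color(255, 0, 0)); // arsonist side
    }
  }
  strip.show();
  delay(50);
}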

Video:

Group Research Project

This artifact is derived from Doris's proposal inspired by The Ones Who Walk Away from Omelas. It builds on the interactive mirror we found in the "Research" phase of the project. The mirror can reflect the viewer, and it can also show the sad child who suffers for the happiness of the people of Omelas. The viewer can have a conversation with the mirror.

Doris painted the sketch for the mirror. We started by making as many reels as we could, then stuck them together to create the frame; I spent most of my time on this part. After making the frame, I worked out how we could build the base of the mirror to make it more stable. We weren't sure how to make the top, and Sean made great efforts to cut the cardboard into a curved shape for it. We wanted to make a mirror about 1.8 meters tall, like a normal mirror, but while actually building the artifact we realized this was too much work, so I proposed that we make the mirror as tall as we could and put it on a box. The biggest challenge for us was making it stand: the bottom was still a big reel, which meant it couldn't stand on its own. We worked together to build a cardboard triangle to hold it and used tape to strengthen it, and eventually we made the mirror stand. Our group decided to make the mirror with reels to make it thicker and more beautiful to look at.

Our basic script is that there is a carnival in Omelas, with two pairs of viewers and reflections. The viewer looks into the mirror; the mirror shows the reflection of the viewer first, but it gradually changes into the image of the sad child. For the performance, I suggested that the girls could take selfies. The viewer could introduce the background and have conversations with the mirror holder, and our performance was intended to provoke the audience's thinking about the reasons behind happiness. In my opinion, the successes of the artifact are as follows: it stands out because it is unlike any of the artifacts from the other groups; it is one of the biggest artifacts presented in our section and good-looking on the outside; it is stable and can stand on its own; and it fits into the narrative. As for its failures, even with the button we added, it may seem more magical than technical. Our original idea was to put sensors inside the mirror, but we didn't have enough time to accomplish that.

Throughout the whole process, our teamwork was great: we didn't have many communication problems, and we did everything, including the mirror itself, as a team.

The group named Door-ry left an impression on me. Their artifact is inspired by The Veldt: a computer detects the user's mind, the user thinks of a place he wants to go, and when he opens the door, he is in that place. He can feel, touch, and eat there, and can even bring something back to real life. The artifact is relevant to the original story: it's like the nursery, but it includes more sensations and interactivity. It meets the criteria of the assignment well, being an interactive invention that fits into the narrative. They managed to make many small props relevant to the travel destinations, which was interesting. However, I feel the idea of this project is a common one; it's not very unique, but the implementation was great. The computer was well made, and they designed a door the user can use to travel to other places. The user travels to three places and demonstrates the usage of the artifact well. With props like little pyramids and a Dory picture stuck on fish-shaped cardboard, they show how interactive the artifact is. For further improvement: I remember the script saying the computer is the device and the door is just a portal, but I didn't see a strong connection between them, and their performance could be better if they demonstrated the device more clearly.

https://drive.google.com/drive/folders/1661WgiI36BlYwgUcssxdlBvu9iB4E1Qn?usp=share_link