Interaction Lab Final Project: 8. Report -Guide: Escape from the Maze – Jiaxiang Yuan

A. PROJECT TITLE – YOUR NAME – YOUR INSTRUCTOR’S NAME
Guide: Escape from the maze
Jiaxiang (Eric) Yuan
Eric Parren

B. CONCEPTION AND DESIGN:
The original concept of this project was to create both a virtual maze and a physical maze within the same structure, requiring players to solve the puzzle of one maze using information provided by the other. In the virtual maze, a character follows the guidance the player performs in the physical maze. Both mazes contain areas covered by darkness, and players must guide the character out through careful observation and persistent experimentation.

I conceived this idea when exploring a new interaction format that goes beyond the typical computer-keyboard interaction by connecting Processing and Arduino. In traditional maze games, players often control the character using keys like “w, a, s, d,” promoting trial and error to find the way out. However, I chose a different approach by constructing an identical physical maze and requiring players to press force sensors within the maze to guide the character. If the player makes a wrong move by pressing the wrong sensor, the character gets blocked, and the player must locate the character in the maze again, adding a time cost. This unique maze game encourages players to focus on observing the maze’s structure before controlling the character, providing them with a novel gaming experience.

While searching for examples on YouTube, I found the idea of creating a dark maze, which aligned well with my original concept (inspired by this video: Processing – ‘Ghost Maze’ game – YouTube). The concept of darkness brought to mind visually impaired individuals. I recalled a charity event associated with the mobile game Identity V, where visually impaired people were symbolized as wizards and wands represented white canes ("Light Up Every Child's 'Inner Light'": Identity V visually-impaired-themed charity event trailer, Identity V official website, 163.com). This inspired the use of a 3D-printed wand to activate a force sensor within the physical maze. The project aims to illustrate the challenges faced by visually impaired individuals in an engaging game format, fostering awareness of and compassion for the visually impaired.

Attached are images of the 3D-printed wand:


The model for the wand is sourced from here: Harry Potter Wand by Alexbrn – Thingiverse

However, during the construction phase, I encountered challenges in designing intricate puzzles that necessitate connection and interaction between the two mazes. Additionally, my coding skills limited my ability to make multiple covered areas appear simultaneously, as detailed in the Fabrication and Production section. Recognizing the difficulty and not wanting to discourage players, I decided to modify the puzzle. Instead of intricate connections between the mazes, I opted for a scenario where the character in the virtual maze has limited sight, while the player can view the entire maze structure. This adjustment not only streamlined my workload but also made the game more comprehensible and enjoyable for the player.

Attached are the blueprints I created for my project:

(overall size and shape)


(forest)


(ruins)


(cave)


(ocean)

The mazes were generated using the Maze Generator at https://mazegenerator.net/

– Red spot: Force sensors (players guide the character by pressing these sensors).
– Green spot: Special events.
– Black/Brown spot: Tunnel (character transfers to another tunnel when walking on it).
– Blue spot: Deep water (character “dies” and returns to the start of the ocean part of the maze when walking on deep water).

Each of the four maze parts has unique features:
1. Forest: Designed to teach players how to play the game.
2. Ruins: Maximizes the use of tunnels.
3. Cave: Character’s sight decreases in this area.
4. Ocean: Maximizes the use of deep water.

List of events:
– Torch: Doubles the sight in the cave area.
– House: Removes darkness and shows the position of touch sensors on the screen.
– Cat: Guides the way in the ruins area.

When designing the maze, I adhered to the rule that pressing the wrong sensor at any position would lead to a wrong path and a collision with a wall. To add challenge, I strategically placed events opposite the correct path, making it harder for players to find. The maze structure was adapted to fit each area’s features. For example, Ruins had separate blocks with one or two tunnels, simplifying the game level. Cave had a linear layout to reduce confusion, and Ocean ensured every shortest route led the character into deep water.

My designs were instinctive, as I lacked systematic game-design knowledge, and some structures of the maze may confuse players. With more knowledge before designing, I could have created more precise and interesting mazes.

I expected users to carefully observe the maze structure, explore freely, and try different events and features. However, during user testing, participants randomly pressed sensors without considering the maze structure. They quickly lost patience, finding the maze too challenging. To address this, I added instructions before the game and each area:






And also the images for the event:



(all the background images are generated by Stable Diffusion with LORA “M_Pixel 像素人人” M_Pixel 像素人人 – v3.0 (morden_pixel) 现代像素 | Stable Diffusion LoRA | Civitai)

The modifications proved effective, as players clearly understood the objective of guiding the character out of the maze during the presentation and IMA Show. Additionally, the inclusion of images enhanced the immersive environment for players.

C. FABRICATION AND PRODUCTION:
When selecting sensors, I experimented with both touch sensors and pressure sensors. I discovered that the touch sensor’s values were rather unstable, and objects other than fingers had difficulty raising its value significantly. Its size also posed a problem, as it was difficult to fit between the two walls of the maze. Consequently, I opted for force (pressure) sensors.

Attached is an image of the force sensors along with 1-meter-long wires:

Given that I needed to install 47 pressure sensors, and one Arduino Mega 2560 provides only 16 analog input pins, I decided to employ 4 Arduino Mega boards to manage the circuit. The boards communicate with each other through their RX/TX pins. (I learned how to establish communication between multiple Arduinos from this tutorial: Arduino 2 Arduino Communication VIA Serial Tutorial – YouTube.) Below is the main code for communication:

while (Serial1.available()) {
  delay(1);
  if (Serial1.available() > 0) {
    char c = Serial1.read();
    if (isControl(c)) {
      break;
    }
    readString1 += c;  // accumulate incoming characters into readString1
  }
}

Q1 = readString1;
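The framing logic above (accumulating characters until a control character such as a newline arrives) can be sketched in portable C++ with the serial port replaced by a plain string; `readFrame` and the sample data are my own names and assumptions, not part of the Arduino sketch:

```cpp
#include <cctype>
#include <string>

// Consume characters from `stream` starting at `pos`, stopping at the first
// control character, mirroring the Serial1 loop above.
std::string readFrame(const std::string& stream, size_t& pos) {
    std::string frame;
    while (pos < stream.size()) {
        char c = stream[pos++];
        if (iscntrl(static_cast<unsigned char>(c))) break;  // end of one message
        frame += c;
    }
    return frame;
}
```

Calling it repeatedly on a buffer like "120,0,853\n44,1023\n" yields one message per call, just as the master receives one reading string from each slave.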

To streamline the workload, I utilized 3 Arduino Mega 2560 boards (referred to as slaves) to receive signals from pressure sensors. Additionally, one Arduino Mega 2560 (referred to as the master) was employed to collect signals from the slaves and transmit them to Processing. Below is the complete code for the Arduino boards:
https://drive.google.com/file/d/1gUjY9kesq36heEaFDPm6Si2EAdmMDHJi/view?usp=drive_link

https://drive.google.com/file/d/1l0cQwGEZuBZ8QlnaXohmttdRkc_Qljo3/view?usp=drive_link

https://drive.google.com/file/d/1z_7DL5cl0-ETMQGHGzjJzXwXtIdOowve/view?usp=drive_link

https://drive.google.com/file/d/19qhUFeGvHh9pqCsPHBMm8QsEeQeG1igt/view?usp=drive_link

When working on my midterm project – Stepper Destroyer, my partner and I encountered various issues with wires, such as loose connections and insufficient length. To address these concerns, I purchased 1-meter-long wires for my final project, which proved to be effective. The extended length ensured that I didn’t have to worry about wires being unplugged due to force from other wires. I simply needed to ensure all wires were connected correctly and used a hot glue gun to secure them. Attached is a picture of the entire circuit:

Even though it may appear messy, the setup functions well. I connected 47 force sensors to 4 Arduino Mega 2560 boards and interlinked the Mega boards through their TX/RX pins. Given the straightforward nature of the connections, I believe a circuit diagram is not necessary.

As for the physical maze, I printed out the four areas and affixed them to cardboard. Attached are the images:




These images are the same in both mazes (physical and virtual). I made them myself using GIMP, which I learned for this project.

(The background images are from: OpenGameArt.org )
Forest background: Tower Defense – Grass Background | OpenGameArt.org
Ruins background: Handpainted Stone Tile Textures | OpenGameArt.org
Cave background: Handpainted Stone Floor Texture | OpenGameArt.org
Ocean background: Water | OpenGameArt.org
Water Textures | OpenGameArt.org

Other images:
House: hut | OpenGameArt.org
Tunnel: Cave entrance | OpenGameArt.org
Cat: 香炉 · 免费素材图片 (pexels.com)
Torch: [LPC] Animated Torch | OpenGameArt.org
Cat claw: Cat Claw PNG, Vector, PSD, and Clipart With Transparent Background for Free Download | Pngtree

Attached are the images of the building process of the physical maze:

I chose cardboard as my material because it is easy to cut, and I needed to install 47 force sensors on the maze, requiring the drilling of 47 holes. While I could have achieved this through laser-cutting with plywood, it would have demanded precise design work on Cuttle (https://cuttle.xyz/), and I believed it would consume too much time.

The sticks on the back are cardboard cut into 2.5-centimeter widths using a laser. Here is an image of my laser-cutting process:

I printed out the maze wallpaper and affixed it to the sticks. Subsequently, I used scissors to trim the sticks to the correct length and glued them onto the maze. Here are some photos documenting the process of affixing the walls:

Affixing the wallpaper, cutting the sticks, and gluing them onto the cardboard was indeed a time-consuming and tedious task. Taking this into consideration, I’ll opt for a smaller scale in my future projects.

For the wallpapers, I sourced them from OpenGameArt.org
Forest wallpaper: Large forest background | OpenGameArt.org
Ruins wallpaper: Bricks | OpenGameArt.org
Cave wallpaper: Seamless cave in parts | OpenGameArt.org
Ocean wallpaper: Sea background 1920×1080 | OpenGameArt.org



To conceal the wires and provide support for the cardboard, I used a laser-cutter to shape plywood pieces, embedding them along the sides of the maze. I opted for plywood due to its greater strength compared to cardboard.

Our fabrication lab assistant, Da Ling, provided assistance in creating two stands for my project. These stands were essential to prevent the cardboard from breaking under the weight of my laptop and the force applied during pressing. Attached is a picture of the stand:

I secured the Arduinos and breadboards in an orderly manner by using hot glue. This arrangement facilitated the easy connection of the power supply and the linking of the Arduino (Master) with Processing. Here are the photos of the setup:

Subsequently, I added cardboard at the bottom to fully conceal the wires. Here is a photo of the finalized setup:

I utilized 3D printing to create small statues representing events and tunnels in the physical maze. All the models were sourced from https://www.thingiverse.com/.
Here is a photo of the 3D printing process:

I acquired the 3D models from the following sources:
Tunnel: Cave Entrance by PrintedEncounter – Thingiverse
Torch: Torch by techappsgoodson – Thingiverse
Cat: Sitting cat low poly by Vincent6m – Thingiverse
House: Little Cottage (TabletopRPG house) by BSGMiniatures – Thingiverse

Subsequently, I 3D-printed these models, adjusting their size to fit within the physical maze. However, the resizing led to printing problems, causing “spaghetti” failures in the 3D printer and making the removal of supports difficult. To mitigate these issues, I applied glue to the printing plate and enlarged the bases. After several attempts, I successfully printed all the models.

During user testing, individuals with long nails expressed difficulty in directly pressing the pressure sensors. In response, I printed out some chess pieces to represent the character in the physical maze. Here is a photo of the chess pieces:

I obtained the models for the statues from this source: 3D Printable Chess Pieces by sivakaman – Thingiverse

The initial idea of using a 3D-printed wand to press the sensor failed due to the wand’s thinness, making it challenging to apply sufficient pressure to the sensor area. As a result, I recommended direct pressing with fingers.

Following user testing, where some participants indicated confusion about the starting and ending points of the game, I addressed this concern by adding “start” and “end” signs. The images were sourced from the following locations:
Startsymbol Auf Weißem Hintergrund Stock Vektor Art und mehr Bilder von Anfang – Anfang, Einzelner Gegenstand, Futuristisch – iStock (istockphoto.com)
End Sign – Road End Sign , SKU: K-6498 (roadtrafficsigns.com)
Here are the photos of the signs:

In response to feedback during user testing, where some testers reported difficulty in clearly seeing the tunnel model in the Ruins area of the physical maze, I addressed this concern by adding red flags to mark the tunnels. Here is the photo of the updated design:

For the code in Processing, I used this code for receiving the signals from Arduino:

void getSerialData() {
  while (serialPort.available() > 0) {
    String in = serialPort.readStringUntil(10);  // 10 = '\n' (linefeed) in ASCII
    if (in != null) {
      print("From Arduino: " + in);
      String[] serialInArray = split(trim(in), ",");
      if (serialInArray.length == NUM_OF_VALUES_FROM_ARDUINO) {
        for (int i = 0; i < serialInArray.length; i++) {
          arduino_values[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

(I copied it from here: 10.1 – Serial Communication 2 | F23 – Google Slides)
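The split/trim/int() chain in that sketch has a direct equivalent in most languages; here is a sketch of the same parsing step in portable C++ (`parseValues` is my name, and the sample values are illustrative):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a comma-separated line such as "120,0,853" into integers,
// mirroring split(trim(in), ",") followed by int() in the sketch above.
std::vector<int> parseValues(const std::string& line) {
    std::vector<int> values;
    std::stringstream ss(line);
    std::string token;
    while (std::getline(ss, token, ',')) {
        values.push_back(std::stoi(token));
    }
    return values;
}
```

As in the Processing code, checking the result length against the expected number of values guards against partially received lines.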

I divided the maze into four areas, with each section corresponding to a stage in Processing. To set the scene, I created the background image at the bottom. Here is how it looks:



I represented the character using a white circle positioned based on its x, y coordinates. Additionally, I incorporated an image with a transparent circle in the middle to create the effect of darkness. Here is how it looks:




Thanks to Interaction Lab fellow Kevin for teaching me how to do it.

The images I used to cover the maze:


I created the images using GIMP.

However, while this method was the simplest I could think of, it presented a challenge. It made it impossible to have one uncovered area move with the character while keeping another uncovered area fixed. As a result, I abandoned my original idea of having both mazes with some areas covered and others not. Instead, I shifted the interaction to a scenario where the character has limited sight in the virtual maze, and the player can see the entire structure of the physical maze.

Regarding character movement, after discussions with my professors, Professor Andy Garcia provided an example using the bezier() and bezierPoint() functions. Professor Eric Parren helped me convert it into a custom function. However, I found that I still needed multiple `if` statements to create the character’s route, leading me to abandon this approach.

Later, I considered making the character appear directly at the position guided by the player. However, I deemed this approach less immersive and potentially more confusing. Instead, I opted for a similar but more complex method. I divided each area into cells (e.g., 10×20 or 10×30) and set the position of each cell using an array. I derived this idea from the maze generator (Maze Generator), which prompted me to input the number of cells for width and height when creating the maze. Here is the code:

int x_position_forest_ocean[] = new int[20];
int x_position_ruins_cave[] = new int[30];
int y_position_forest_ocean[] = new int[10];
int y_position_ruins_cave[] = new int[10];

for (int i = 0; i < 20; i++) {
  x_position_forest_ocean[i] = 105 + 90 * i;
}

for (int i = 0; i < 10; i++) {
  y_position_forest_ocean[i] = 135 + 90 * i;
}

for (int i = 0; i < 30; i++) {
  x_position_ruins_cave[i] = 90 + 60 * i;
}

for (int i = 0; i < 10; i++) {
  y_position_ruins_cave[i] = 270 + 60 * i;
}
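Each array above is simply a linear map from cell index to pixel coordinate: an origin offset plus the index times the cell size. The same mapping written as functions, using the forest/ocean constants from the loops above:

```cpp
// Forest/ocean grid: 90 px cells with the origin at (105, 135),
// matching the array-filling loops above.
int cellToPixelX(int i) { return 105 + 90 * i; }
int cellToPixelY(int j) { return 135 + 90 * j; }
```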

And I made the character move by reducing the frame rate to 4 and using multiple if statements. Here is an example:

if (position == 73) {
  if (signal == 17) {
    route = 57;
    position = 17;
    signal = 0;
  }
}

if (route == 57) {
  if (count == 0) {
    x = x_position_ruins_cave[17];
    y = y_position_ruins_cave[5];
  }
  if (count == 1) {
    x = x_position_ruins_cave[17];
    y = y_position_ruins_cave[4];
  }
  if (count == 2) {
    x = x_position_ruins_cave[16];
    y = y_position_ruins_cave[4];
  }
  if (count == 3) {
    x = x_position_ruins_cave[16];
    y = y_position_ruins_cave[3];
  }
  if (count == 4) {
    x = x_position_ruins_cave[16];
    y = y_position_ruins_cave[2];
  }
  count++;
  if (count == 5) {
    count = 0;
    route = 0;
  }
}
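The chain of count checks above can equivalently be expressed as a waypoint table: each route is a list of (x, y) cell indices, and count walks through it. A minimal sketch using route 57's waypoints from the code above (`Route` and `step` are my names, not from the project code):

```cpp
#include <utility>
#include <vector>

// A route is an ordered list of (x index, y index) waypoints.
using Route = std::vector<std::pair<int,int>>;

// Emit the next waypoint into (cellX, cellY); returns false once the route
// is exhausted, which is where the original code resets count and route to 0.
bool step(const Route& r, int& count, int& cellX, int& cellY) {
    if (count >= (int)r.size()) return false;
    cellX = r[count].first;
    cellY = r[count].second;
    count++;
    return true;
}
```

Route 57 would then be the single data entry {{17,5},{17,4},{16,4},{16,3},{16,2}}, replacing five if blocks; with one table per route, the thousands of hand-written branches shrink to a lookup.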

I assigned a unique number to every possible position of the character and every possible route. To simplify my coding, I created an x-y coordinate axis. Here are my blueprints:



And this is a video of how the character moves:

void keyPressed() {
  if (key == 'r' || key == 'R') {
    restart = true;
  }
}

This code lets players press “r” to restart, which returns the character to the start of its current area of the maze. I implemented this feature because a few bugs remained in the code, and time constraints limited extensive testing and debugging.

To enhance the gaming experience, I incorporated background music for each area, all sourced from OpenGameArt.org
Forest music: forest | OpenGameArt.org
Ruins music: Ancient Ruins | OpenGameArt.org
Cave music: Tecave | OpenGameArt.org
Ocean music: Ice Mountain | OpenGameArt.org

Due to the complex method I chose for character movement, where I manually defined each route, my Processing code runs to 11,475 lines, which made draw() exceed the size limit Java places on a single compiled method (the “code too large” error).

Professor Andy Garcia taught me to turn each state into a custom function to save space in draw(), which works well.
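Splitting a huge draw() into per-state functions is essentially a dispatch table. A minimal sketch of the pattern (the state numbers and function names here are mine, not the project's):

```cpp
#include <string>

// One function per game area; draw() then only dispatches on the state.
std::string drawForest() { return "forest"; }
std::string drawRuins()  { return "ruins"; }
std::string drawCave()   { return "cave"; }
std::string drawOcean()  { return "ocean"; }

std::string drawState(int state) {
    switch (state) {
        case 0:  return drawForest();
        case 1:  return drawRuins();
        case 2:  return drawCave();
        default: return drawOcean();
    }
}
```

Each area compiles as its own method, so no single method hits the compiler's size limit.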

Here is the full code for Processing:
https://drive.google.com/file/d/1ha_S5wtQmlBnEyINp5OY32bdkMIqjSWd/view?usp=drive_link

The data for Processing:
https://drive.google.com/drive/folders/1DKtQRijOefecR16PRM-a-vVHoXcJltmC?usp=drive_link

D. CONCLUSIONS
The objective of my project was to develop an immersive game that not only captivated users but also fostered awareness and empathy for the visually impaired. Although the emphasis leaned more towards the gaming experience rather than explicitly representing the challenges of the visually impaired, user testing and feedback from the IMA Show suggested that players found the game both challenging and fulfilling. However, upon revealing that the character they guided represented visually impaired individuals, players expressed surprise. Striking a balance between the gaming aspect and raising awareness posed a challenge, but I am content with the final outcome.

In the production phase, I initially didn’t anticipate writing such an extensive amount of code. Faced with the choice of reducing workload, learning advanced algorithms, or tackling the heavy but straightforward tasks, I opted for the latter due to my ambitious goal, time constraints and the uncertainty associated with learning complex algorithms. Despite the challenges, I successfully completed this sizable and intricate project as an individual effort within 14 days, which is a source of pride for me.

However, I believe there is considerable room for improvement in my project. While it boasts an imposing size and workload, it lacks the finer details. During the project’s design phase, I relied on instinct to infer player actions, making it appealing to experienced gamers who can swiftly grasp the game’s mechanics and passionately solve puzzles. Fortunately, at the IMA Show, I encountered two such users who were deeply engrossed in the game and provided glowing feedback.

This aligns with my vision of interaction, where users can input, process, and output information efficiently within my project. Users visually tracked position changes on the screen, considered the character’s location in the virtual and physical mazes, selected the appropriate sensor, and pressed it. The sensors transmitted signals to Processing, prompting the character to move, creating a continuous cycle of interaction. This linear, multi-step engagement aligns with my definition of a positive interactive experience.

However, some users found my project overly complex. At the IMA Show, there were individuals seeking something cool and advanced, displaying impatience with the detailed instructions. They hastily pressed sensors randomly, observed no immediate changes, and either left or sought guidance. Additionally, as my project is a game, it attracted many kids who, despite their enthusiasm, struggled to solve the puzzles independently. Consequently, I found myself needing to guide each child individually, resulting in exhaustion.

In my future project designs, I plan to prioritize simplicity while striking a balance between ambitious goals and intricate details. The aim is to create an aesthetically appealing project with a complex structure that is accessible to everyone. I envision a project that can be enjoyed without excessive deliberation, yet still offers a rewarding experience for those who appreciate careful consideration and creative design. Achieving this requires enhancing my skills and collaborating with an ambitious and skillful partner in the next endeavor.

For the current project, given more time, I would enhance the user experience by integrating 47 LEDs alongside the pressure sensors. Additionally, I would refine the structure of certain maze elements to ensure they make more sense to users. Furthermore, I’d optimize the sensitivity of some sensors to better respond to pressure. These adjustments aim to not only improve the project’s overall functionality but also enhance user engagement and satisfaction.

This is my first attempt at designing a game independently, and prior to embarking on this course, I had no coding skills. Throughout this project, I’ve not only gained technical expertise but also expanded my intellectual horizons. I’ve discovered that I possess the capability to bring my ambitious, and at times even seemingly crazy, ideas to fruition. However, it’s evident that there’s still a considerable journey ahead.

The substantial workload I assigned myself during this project serves as a stark reminder of my limited skill set and underscores the importance of teamwork. While I’ve been able to realize ambitious goals, the experience has highlighted the significance of paying attention to details as much as addressing complexity. Striking a balance is key – achieving ambitious objectives is gratifying, but the true measure of success lies in ensuring that all users can fully enjoy and comprehend my project.

E. DISASSEMBLY
I bought all the equipment and most materials myself (47 force sensors, 4 Arduino Mega 2560 boards, ten 100×100 cm cardboard sheets, and 200 one-meter male-to-male wires) and borrowed nothing from the Equipment Room or the Interaction Lab, so that I could take my project home. Attached are photos of my project after taking it to my dorm:


F. APPENDIX
the appearance of the physical maze:

















photos of playing the game:





the whole on-screen process of the game’s normal route:

shortcut 1:

shortcut 2:

event cat:

event house:

event torch:

fastest route (using restart):

wandering around (walking every wrong route):

Stepper Destroyer – Jiaxiang Yuan – Bang Xiao

CONTEXT AND SIGNIFICANCE
In my previous research, I defined interaction as two or more humans, animals, or non-living things inputting, processing, and outputting information with each other in a way all of them can sense, and good interactions as those which communicate high-value information in a short period of time. Interactive art products are of high quality if they immerse people in the interaction while offering food for thought, or if they benefit human lives. (See the analysis here.) This definition of high-quality interactive art inspired me to build a transformer that combines multiple functions in one artifact, offering a more immersive experience, as if the transformer were really an intelligent creature. What is unique about my project is that it provides users with a combined, multi-element experience: for example, when you turn the transformer’s rearview mirror, it responds with speech. Rather than strengthening one specific element, my project combines several for a more immersive whole. It is intended for the masses, hoping to convey the idea that through careful observation and exploration, everyone can find something interesting in daily objects, like a car that can transform into an autobot.

CONCEPTION AND DESIGN
To make the experience of the transformer more immersive, I gave up the idea of adding written instructions directly on the transformer and adopted a text-to-speech module to deliver spoken instructions. Moreover, I added several different sensors to simulate the process of fixing the transformer, which frees the interaction from merely driving it around and transforming it. To meet my expectations, I abandoned the nonlinear experience, originally designed to let users control the transformer more easily through an IR remote, because I found it was not very creative and did not help create an immersive experience between the users and the transformer. Instead, I designed a linear experience so that users feel they undergo a complete story: they meet the transformer, fix it, see its astonishing transformation, and drive it happily. As for materials, I used cardboard at first because it was the most convenient material at hand. However, I found that the stepper motors glued to the cardboard often got too hot and melted the glue. To solve this problem, Professor Andy Garcia taught me to 3D-print motor mounts for the stepper motors to prevent the heat from transferring to the glue. (See the motor mount here)


The other parts of the transformer were made of cardboard to cut down on weight so that they could be lifted by the stepper motors. 3D-printed plastic might have been a better choice, since it is light and insulates heat well, but I hadn’t learned how to model at that time.
And here are some pictures of our primary design of the transformer:
The design of the structure:

The design of how it is going to transform:

(We gained our inspiration here)

FABRICATION AND PRODUCTION
I think the success of my production process is that I made the transformer able to talk through a text-to-speech module. It really improved the transformer’s interactivity (enabling it to respond through speech) and made my project different from others. Here is my code for it:
#include "EMIC2.h"  // Emic 2 text-to-speech library

#define RX_PIN 5
#define TX_PIN 6
// RX and TX should normally be pins 0 and 1, but pins 5 and 6 work here.
EMIC2 emic;

void setup() {
  emic.begin(RX_PIN, TX_PIN);
  delay(1000);
  emic.setVoice(1);  // there are 10 different voices to choose from
  emic.setRate(150);
  emic.setVolume(18);
}

void loop() {
  emic.speak("Hi, I am Stepper destroyer created by Eric and Evan. If you see any stepper motor in front of me, you can press the button on my container and help me chase it. Anyway, nice to see you.");
  emic.speak("Could you do me a favor? I don't feel well today. Parts of my body may need to be fixed.");
}

I think the failure of the production process was trying to use 4 stepper motors at the same time. Making even one stepper motor move as instructed is complex enough. The stepper motor connects to its driver module through a specific cable (the one from the ER drops off easily; the one from the Interaction Lab is much better). Of the driver module’s five pins, three connect to the Arduino and the other two to 5V and GND, all through male-to-female wires, which drop off more easily than male-to-male wires. Moreover, the three pins have to be connected to plain digital pins rather than the PWM pins. (In theory the PWM pins should also work, but for reasons I don’t know they didn’t on the Arduino Mega.) The driver module is also connected to 12V, which I had to turn off most of the time; otherwise the stepper motor got too hot and stopped working. As for the code, it is impossible to make two stepper motors move at once with blocking calls like runToNewPosition(). So, at first my code was wrong:

stepper1.runToNewPosition(200);
stepper2.runToNewPosition(200);

(This made one stepper motor turn two rounds while the other did not work.)
Luckily, Professor Andy Garcia taught us a way to pretend that the two stepper motors are running at once. Here is the code:

for (int i = 0; i < 200; i++) {
  stepper1.runToNewPosition(-i);
  stepper2.runToNewPosition(i);
}
delay(1);
(The two stepper motors actually run one after the other, but so fast that they appear to run at the same time.)
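The interleaving trick can be simulated without hardware: two position variables advance one unit per loop iteration, so both reach their targets together rather than one after the other. A sketch with plain ints standing in for the motors (`interleavedRun` and the variable names are mine):

```cpp
// Simulate two motors stepped alternately, one unit each per iteration,
// mirroring the runToNewPosition(-i) / runToNewPosition(i) loop above.
void interleavedRun(int target, int& pos1, int& pos2) {
    for (int i = 1; i <= target; i++) {
        pos1 = -i;  // "stepper1" takes one step backward
        pos2 = i;   // "stepper2" takes one step forward
    }
}
```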
One stepper motor is complex enough, and we had four. In the production process, we encountered problems including:
– wires dropping off or getting burnt;
– pins connected incorrectly;
– the power being off, or the battery running out;
– stepper motors overheating and stopping;
– the glue melting, so a motor was no longer firmly stuck to the cardboard and could not move the wheels;
– motors that only ran when the wires sat in a specific position, due to poor wire quality;
– the power board being unable to supply enough power for four stepper motors.
Every time one stepper motor stopped working, we had to check every part of the circuit, which drained a lot of our time. Therefore, we named our transformer Stepper Destroyer, to show our dislike of stepper motors. But it was too late to change motors, since we had made the transformer so big that only stepper motors could lift its head and complete the transformation. (In the end, they didn’t.) I wrote most of the code for Stepper Destroyer; here it is:
(There are two Arduinos in Stepper Destroyer.)

Arduino Uno (or see the codes here)

#include <SoftwareSerial.h>
#include <SD.h>
#include <EMIC2.h>
#include <IRremote.h>

EMIC2 emic;  // text-to-speech object
IRsend irsend(3);
#define RX_PIN 5
#define TX_PIN 6
int first = 0;
int count1 = 0;
int count2 = 0;
int count3 = 0;
const int trigPin = 9;
const int echoPin = 10;
long duration;
int distance;
int melody[] = {
139, 165, 175, 196, 208, 233, 247
};
float noteDurations[] = {
4, 4, 4, 16, 8, 8, 16
};
void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  Serial.begin(9600);
  emic.begin(RX_PIN, TX_PIN);
  delay(1000);
  emic.setVoice(1);
  emic.setRate(150);
  emic.setVolume(18);
}
void loop() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  distance = duration * 0.034 / 2;  // ultrasonic echo time to centimeters
  Serial.print("Distance: ");
  Serial.println(distance);
  delay(10);
  if (distance < 20 && first < 1) {
    for (int thisNote = 0; thisNote < 7; thisNote++) {  // melody has 7 notes
      float noteDuration = 60000 / 183 * 4 / noteDurations[thisNote];
      tone(8, melody[thisNote], noteDuration);
      int pauseBetweenNotes = noteDuration * 1.05;
      delay(pauseBetweenNotes);
      noTone(8);
    }
    emic.speak("Hi, I am Stepper destroyer created by Eric and Evan. If you see any stepper motor in front of me, you can press the button on my container and help me chase it. Anyway, nice to see you.");
    emic.speak("Could you do me a favor? I don't feel well today. Parts of my body may need to be fixed.");
    first = first + 1;
  }
if (first == 1) {
int sensorValue1 = analogRead(A0);
Serial.println(sensorValue1);
int sensorValue2 = analogRead(A1);
Serial.println(sensorValue2);
int sensorValue3 = analogRead(A2);
Serial.println(sensorValue3);
if (sensorValue1 < 300 && count1 == 0) {

emic.speak("Thank you, my rearview mirror was opposite, that is why I can't see the back.");
count1 = 1;
delay(1000);
}
if (sensorValue1 400 && count2 == 0) {
emic.speak("Thank you, there was dirt on my antenna, now I can recieve the signals.");
count2 = 1;
delay(1000);
}
if (sensorValue3 > 300 && sensorValue3 < 700 && count3 == 0) {

emic.speak("Thank you, now my logo is straight and I feel better.");
count3 = 1;
delay(1000);
}
if (count1 == 1 && count2 == 1 && count3 == 1) {
for (int i = 0; i < 3; i++) {
irsend.sendNEC(0xF7C03F, 32);
delay(100);
}
first = 2 ;
emic.speak("Thank you so much, now I can show you my secret. Autobots roll out.");

}
}
}

Arduino Mega (or see the code here)

#include <IRremote.h>
#include <AccelStepper.h>

IRrecv IR(7); // IR receiver on pin 7

int DIR_PIN1 = 22;
int STEP_PIN1 = 24;
// Pin 3 just doesn't work.
int EN_PIN1 = 26;

int DIR_PIN2 = 28;
int STEP_PIN2 = 2;
int EN_PIN2 = 30;

int DIR_PIN3 = 45;
int STEP_PIN3 = 5;
int EN_PIN3 = 41;

int DIR_PIN4 = 35;
int STEP_PIN4 = 4;
int EN_PIN4 = 33;

int state = 1;
int button;
long startTime;
long presstime;
int first = 0;

AccelStepper stepper1(AccelStepper::DRIVER, STEP_PIN1, DIR_PIN1);
AccelStepper stepper2(AccelStepper::DRIVER, STEP_PIN2, DIR_PIN2);
AccelStepper stepper3(AccelStepper::DRIVER, STEP_PIN3, DIR_PIN3);
AccelStepper stepper4(AccelStepper::DRIVER, STEP_PIN4, DIR_PIN4);

void setup() {
  // Enable all four stepper drivers (LOW = enabled on these driver boards).
  pinMode(EN_PIN1, OUTPUT);
  digitalWrite(EN_PIN1, LOW);
  pinMode(EN_PIN2, OUTPUT);
  digitalWrite(EN_PIN2, LOW);
  pinMode(EN_PIN3, OUTPUT);
  digitalWrite(EN_PIN3, LOW);
  pinMode(EN_PIN4, OUTPUT);
  digitalWrite(EN_PIN4, LOW);

  stepper1.setMaxSpeed(30000);
  stepper1.setAcceleration(10000);
  stepper2.setMaxSpeed(30000);
  stepper2.setAcceleration(10000);
  stepper3.setMaxSpeed(30000);
  stepper3.setAcceleration(10000);
  stepper4.setMaxSpeed(30000);
  stepper4.setAcceleration(10000);

  IR.enableIRIn();
  Serial.begin(9600);
  pinMode(50, INPUT); // press-to-drive button
  pinMode(48, INPUT); // head-reverse button
}

void loop() {
  // When the Uno's IR signal arrives, raise the head (steppers 3 and 4).
  if (IR.decode()) {
    Serial.println(IR.decodedIRData.decodedRawData, HEX);
    if (IR.decodedIRData.decodedRawData == 0xE619BF00) {
      for (int i = 0; i < 50; i++) {
        stepper3.runToNewPosition(i);
        stepper4.runToNewPosition(i);
      }
      first = 1; // (restored: `first` was never set in the original listing, so the pin-48 reverse below could never run)
      delay(1);
    }
    IR.resume();
  }

  // Measure how long the button on pin 50 is held down...
  if (state == 1) {
    if (digitalRead(50) == HIGH) {
      state = 2;
      startTime = millis();
    }
  } else if (state == 2) {
    button = digitalRead(50);
    if (button == LOW) {
      presstime = millis() - startTime;
      Serial.println(presstime);
      state = 1;
      // ...and drive the wheels (steppers 1 and 2) that many steps.
      for (int i = 0; i < presstime; i++) {
        stepper1.runToNewPosition(-i);
        stepper2.runToNewPosition(i);
      }
      delay(1);
    }
    IR.resume();
  }

  // The button on pin 48 lowers the head again.
  if (digitalRead(48) == HIGH && first == 1) {
    for (int i = 0; i < 50; i++) {
      stepper3.runToNewPosition(-i);
      stepper4.runToNewPosition(-i);
    }
    first = 0;
    delay(1);
  }
}

(I checked these websites for reference when I wrote the code:
https://www.youtube.com/watch?v=0DgALFDwouA&t=56s
https://blog.codebender.cc/2014/02/20/emic2/
https://www.bilibili.com/video/BV1XW411H7Ps/?p=3&vd_source=637c310ef910b08826920d2defc0a6f5
https://howtomechatronics.com/tutorials/arduino/ultrasonic-sensor-hc-sr04/
https://docs.google.com/document/d/1MEgxjFO1iaVlIwwEjccmbOkdaQapE4WuOntilAkYfJA/edit#heading=h.15jiiuwqq4uk
)

My group worked together on the building and coloring of Stepper Destroyer. Bang Xiao was responsible for its mechanism, and I assisted him during the building process. During the user testing session, Stepper Destroyer had a wheel drop off and hadn't been fully assembled, so we could only describe its functions to our testers. They still gave us a lot of suggestions: adding a spiral spring (I later adopted a similar concept: the longer the button is pressed, the farther the transformer goes), using ropes to lift the head, shortening the lever, and using a rubber band plus a counterweight to help lift the head. We didn't have time to try all of these suggestions and only tested a few. In the end we adopted none of them, but we realized how hard lifting the head was, so we reversed the transformation: instead of lifting the head up into an autobot, we put the head down into a truck. That way we no longer had to worry about whether the stepper motors could lift the head. We had planned to add more functions to Stepper Destroyer, but we found that one Arduino's capacity wasn't enough for so much code, so we had to use multiple Arduinos. We didn't know how to make multiple Arduinos communicate with each other without IR receivers and emitters, and using multiple IR links seemed too complex and would make the physical wiring too difficult, so we gave up on adding more. Our skills limited how advanced we could make the transformer; next time we should focus more on creative ideas rather than piling on functions. (But generating a really creative idea, like the paper, scissors, stone machine, is so hard that next time we will probably still take the route of adding more functions. Without stepper motors.)

And here are the electronics:

(It looks complex, but it isn’t. The four stepper motors actually have the same wiring, so I am going to skip the introduction of how we connected the wires.)

CONCLUSIONS
The goal of my project is to convey the idea that through careful observation and exploration, everyone can find something interesting in daily objects. I think my project aligns with my definition of interaction, as it gives the user an immersive experience within a few minutes (when all the stepper motors are working well). Users were successfully guided to interact with Stepper Destroyer, which is great. However, the guidance needs improvement, since some users couldn’t grasp what Stepper Destroyer said the first time. If I had more time, I would improve the clarity of the words Stepper Destroyer speaks and make the pressure sensor (the antenna) more conspicuous. From the hard work of building Stepper Destroyer (three whole nights) I learned that I should weigh the workload before launching an ambitious project, and from completing it I learned that, to some extent, hard work pays off. Although our project didn’t meet our expectations, our hard work and ambition encouraged other groups to try more difficult and complex code and electronics, and that made it feel worthwhile.

Here is a full video of how my project works:

APPENDIX

Group Research Project: 5. Report——The Mirror

Introduction for the interactive artifact——The Mirror
by Yuan, Jiaxiang of Group E

The Idea
by Cheah, Nicole

The imaginary interactive artifact I would create for the short story, “The Ones Who Walk Away from Omelas” by Ursula K. Le Guin, would be an interactive two-sided ‘mirror’ art piece placed in the middle of the city of Omelas. The city is described as one that is quite happy on the surface, but one with a dark secret that lies underground– a child who suffers at the expense of everyone’s (the citizens of Omelas) happiness. Placing this ‘interactive mirror’ in the city would be more of a surprising exhibition that would shock the townspeople, as the child’s suffering is known to all but not spoken of. The guilt of this moral dilemma consumes certain people in the town and leads them to disappear and walk away to an unknown place– however, the installation of this ‘mirror’ could help create dialogue and open reflection among the townspeople and help them confront the ethical problem that is directly tied to the child’s suffering. To describe this ‘mirror’ in further detail: one side will show the city as it normally is (happy on the surface, with everyone ignorant of the child’s misery), and the other side a visual of the amount of pain the child suffers on a day-to-day basis. The artifact encourages introspection in the citizens, and I believe it could wake them up to confront the moral dilemma the town faces. Although I said before that this ‘mirror’ could prompt conversation and dialogue between the townspeople, since the subject is like the ‘elephant in the room’, a potential problem caused by the controversial exhibition and interactive artifact could be an overwhelming sense of guilt felt by the citizens when seeing and interacting with the piece.
As the child lives underground –under the city– the citizens may never have known or visualized how bad the conditions are for this one child; this artifact thus risks making more people want to walk off into the unknown land. I think there are many artifacts, and in fact art exhibitions, that act similarly to this interactive artifact for the town of Omelas. The article I have linked below from Goodness Exchange shows an exhibition of the work of Daniel Rozin, an NYU Associate Arts Professor and Israeli American artist. He creates mechanical mirrors out of strange materials that reflect a “new reflection” (Allerton) of the person looking into them. I find this project extremely interesting and eye-opening, as the technology Rozin has created is an inspiration for what I would like to base my artifact on, since the mirror for Omelas is supposed to reflect a kind of ‘alternate reality.’ Instead of showing a different or new reflection of a person, like Rozin’s work does, the interactive artifact for Omelas would instead reflect what I mentioned before: the dichotomy between the happy city and the child’s suffering– a citizen would see these visualizations when peering into the mirror.

The Final artifact


The front of the artifact is painted with the extremely sad boy, and the back of the artifact is the mirror.

The Performance

The Script
by O’Brien, Roman

It starts with the boy being bullied for a little bit.

Boy: “Please, I’ll be good, I promise I’ll be good.”

Bully: Shut up, filthy. You disgust me, you represent everything that is wretched.

[bully walks off stage]

boy, alone: But I’ll be good.

On the stage everyone stands with a partner whispering to them, the microphone is positioned so that the audience can hear the vague whispers and gossip.

The whispers stop as the focus is shifted onto the mirror in the center of the stage.

Someone at the side where the audience can’t see: ‘Hurry up, we have to get to the festival.’

12: ‘Okay okay let’s go.’
[skipping with an 8-year-old across stage until they see the mirror, maybe there’s bells chiming or they’re laughing or something and then they stop when they see the mirror]

8: I’ve never seen this mirror before

12: Do you see that? Inside, in there! It’s a boy!
[points]

8: Quick Quick we have to get him out

12: [ name of 8-year-old] we can’t

8: What do you mean we can’t?

12: I’ve seen him before, hasn’t your mom told you about him?

8: No… but why can’t we let him out?

[pause]

12: Think about it like this: You’re happy, right?

8: Well, sure of course

12: How do you know?

8: [shrugs]

12: When you see that boy, you know.

8: But he’s just like me, why can’t I be his friend? 🙁

12: [frantic] Hey hey! look! when you frown, I can’t see him…

8: Oh… But every time I smile because of something that makes me happy he comes back.

12: I wonder who put this mirror here, I’m not sure I like the way it makes me feel …

Horse: What are you kids doing here? Quickly, go up to the Green Fields for the procession.

12: yea, we should do that. Quickly [ name of 8-year-old], c’mon

[ walk off stage]

Horse, talking to himself/horse: Oh, what a lovely day it is to be joyous. What a lovely life . . . easy, quiet girl, can’t you see the beauty of the world around you? It’s the first day of summer, and the sun is shining so. What a gift it is to know joy.

Horse: Come with me, observe the beauty of your mane in the reflection.

[stands in front of the mirror]

[slowly realizes and gets disgusted]

[kids come back with their parents]

[parent one steps up, kids and parent two talk in a little circle farther on stage]

Horse: have you seen this mirror, this monstrosity?

Parent 1: The children were just telling us about it, but sir it isn’t so bad is it . . . It’s not as if you didn’t know.

Horse: yes, I knew, but do you think it should just be out like this . . . wouldn’t you rather forget at times?

Parent 1: Well, sure, but we shouldn’t

Horse: Look at me, aren’t I the picture of joy? Exuberance? And my horse, don’t you think thinking about . . . it. . . that, takes away from the festivities?

Parent 1: I think you should be wary of vapid irresponsible happiness, lest you end up like him, unable to be helped. . .

[bully enters again, kicking and spitting on kid in the mirror] [people watch in shock] [bully walks around stage as if to walk out of building and towards town square]

Bully: Shouldn’t you all be out enjoying life?

Horse: look, this mirror, we just saw you through it

Bully: ugh, not that poor wretched thing

Horse: I suppose so . . .

Bully: I think it’s good to go and give it a good kick sometimes, keep things in order

Horse: wouldn’t you rather just forget it exists at all?

Bully: don’t you understand you can’t just forget about it? You need to hurt it. You can’t let it know that it is important or even cared for, that will ruin everything.

Parent 2: you’re supposed to take those feelings and put them in your own children, don’t you understand?

Parent 1: This is just the way things are.

Bully: This mirror shows you how the world works

Horse: I do understand but I don’t like it, it horrifies me . . . I don’t think I can stay here, are we not just prisoners?

Parent 2: Maybe we are . . . but you can’t just leave Omelas. Too many young people have just left and never returned. You don’t know where you’re going, and it could be dangerous out there.

Horse: But how can I stay when I know that this is happening, when this mirror reminds me of the reality every time I am happy?

Parent 2: [exasperated] Why can’t you honor this boy’s sacrifice?

Horse: Because it is wrong

Parent 2: If that is wrong, then all of our joy, the nobility of our architecture, the poignancy of our music, the profundity of our science is also wrong.

Horse: I cannot agree with you

Parent 2: I cannot agree with you either, but please stay.

8: please don’t go away

12: please don’t walk away from Omelas

Horse: I’ll stay

Parent 2: I’ve never been able to talk about the boy with anyone before, I try to just forget it like you, but no one here wants to ignore it anymore.

12: maybe being able to talk about it is a good thing?

8: I think so too

Horse: maybe this mirror is what the people of Omelas need

Parent 2: There’s a chance it could make more people try to walk away, like you, but there’s also a chance that it makes people talk about the truth more.

Bully: [interjecting/interrupting, purposefully sounds awkward/ruins sentimental moment] SOOO…. Do you guys want me to move this from the middle of town or…?

Parent 1: No, no, I think you should leave it.

Horse: Yes, I agree, please leave it right there.

Props

A horsehead made of cardboard.

My Opinion on Our Artifact
I think the failure of our artifact is that we didn’t build a stand for it, so it could only turn around with the help of the turning chair. Without the stand, our artifact seemed less interactive to the audience.

If I could make an improvement to the artifact, I would not stick the mirror and the painting completely together. Instead, I would only stick the two parts together at the top, so that the turning chair wouldn’t block the view of some parts of the artifact.

I think the success of our artifact is that we clearly demonstrated the purpose and interaction of the mirror through our elaborate script. We showed the audience that the mirror turns whenever there is joy, revealing the cruel truth beneath the happiness of Omelas, as well as different people’s perspectives and opinions on the mirror and the imprisoned, bullied, and mistreated kid.

I define interaction as two or more humans, animals, or non-living things inputting, processing, and outputting information with each other in a way all of them can sense. I think the artifact relates to the established form of interaction I identified in my research: the mirror can sense people’s mood and decide whether to turn or not, and the users’ thoughts change when the mirror turns, which leads them to stay, reflect, or leave Omelas.

My Opinion on the Performance from Other Group

Goggle by Group A (a mood detecting glasses which tell the user moods by giving colors)

I think it does not meet the criteria of the assignment well. Although Goggle matches the established form of interaction I identified in my research, it doesn’t fit the universe of the story “The Ones Who Walk Away from Omelas”: people in Omelas should be mostly happy, but in their performance most of the people Goggle detected were feeling negative and kept complaining about their schoolwork, which does not make sense in Omelas. Moreover, they didn’t perform the possible new problems caused by Goggle, the most obvious being the loss of privacy. Their performance also didn’t show the purpose of the artifact clearly. As humans, most of us can easily tell others’ feelings through facial expressions, so Goggle’s functions are not for the masses; Goggle should be used by those who were born unable to detect others’ moods, but their performance didn’t demonstrate that. The user of Goggle went to each person and said something to comfort them, which a normal person could do just as well without Goggle. Last but not least, their performance didn’t display the basic function of Goggle, mood detecting. The user asked the other performers “What your feeling?” at the beginning of the performance, which actually weakened the function of Goggle: since you already knew their feelings, why ask them again? Therefore, their script is not logical enough.
As for their creativity, I think a mood detector is not a new idea, especially one that only detects mood and does nothing else. However, their artifact uses color to represent one’s mood, which I think is quite a creative idea. As for their performance, I think they showed their artifact clearly, but in a rather boring way: they only had one-to-one conversations, which is not appealing.
To improve their group project, I think they could revise their script to perform a more complex story. For example, the user of Goggle might detect the extremely upset kid in Omelas and call for a rescue. Moreover, they could make the user someone who has been unable to detect moods since birth, which adds more purpose to Goggle. Other than that, they could have more conversations with different characters to make the performance more vivid.

My Contribution to the Group
I acted as the bully in the performance and helped turn the mirror around.
I attended every meeting of the group and offered several creative ideas to make the project more feasible. For example, I recommended using a horsehead in place of a whole horse and using the turning chair to turn the mirror.
I made the prop, the cardboard horsehead, with Cajigal, Serena and Gao, Jingchen. Serena and I made most parts of the horsehead and glued them together with tape and a hot glue gun, and Jingchen drew and cut the sides of the horsehead.

Our reference

https://www.pinterest.com/pin/352758583313082437

The final prop

Teamwork atmosphere
Our teamwork was really dynamic. Everyone contributed to the group project and attended every group meeting. Communication within the team was good: we communicated through WeChat, and everyone replied to messages quickly.
Here is a photo of the group after performance:

Nicole was busy so she didn’t take a photo with us :(

The different roles
Jiaxiang: Bully
Jingran : 12(years old kid)
Serena : 8(years old kid)
Yihan : Parent 2
Nicole : Parent 1
Roman : Horse (the man riding a horse)

The task allocations
prop(horsehead): Jiaxiang, Serena, Jingran
the artifact’s painting: Serena, Jingran, Yihan, Nicole, Roman
purchase the mirror: Jingran
script: Roman
Idea: Nicole
performance: Serena, Jingran, Yihan, Nicole, Roman, Jiaxiang