WORRY BLOWER | Qiya Huang | Rodolfo Cossovich
CONCEPT AND DESIGN
In our project, we aimed to address the pervasive issue of worry and negative emotions by providing users with a tangible and cathartic outlet for their concerns. The concept emerged from a synthesis of two key elements: bubbles and worries. We observed how people are constantly burdened by worries in today’s fast-paced world and sought to create a symbolic representation where worries, like bubbles, could be released and ultimately dissipated.
Central to our design was the interaction between users and an animated head displayed on a monitor, filled with bubbles representing worries and negative emotions. We integrated face and speech recognition technologies so that users could communicate their worries verbally. A volume threshold feature allows users to "blow away" their worries by speaking out loud, reducing the number of bubbles on the screen. At the same time, the physical bubble machine begins to operate, blowing real bubbles at a rate that depends on the volume level. This interaction is intended to provide a tangible experience, symbolizing the process of releasing worries and watching them eventually disappear, like bubbles in the air.
During the user testing phase and final presentation, we received valuable feedback that significantly shaped the design. One aspect that stood out was how sensitive it can feel to voice one's concerns in public. Recognizing this, I realized we had overlooked part of the user experience and began weighing whether the project should lean toward a peaceful, soothing atmosphere or a more intense emotional release. This feedback prompted us to rethink the direction of the project.
FABRICATION AND PRODUCTION
During the fabrication and production phase of our project, we encountered both successes and challenges as we worked towards realizing our vision of a bubble-based worry release system. Our primary focus was on creating a functional bubble machine and integrating it with facial and voice recognition technologies, as well as developing the necessary Arduino circuits and code.
The fabrication process began with testing the fan to ensure it operated as needed.
Once confirmed, we followed a tutorial to construct the automatic bubble machine. However, our initial design hit a significant hurdle: the bubbles did not blow out easily because the holes were too large and the fan too weak. To address this, we iterated on the design, reducing the size of the holes and increasing their number. This adjustment proved successful, yielding a version of the bubble machine that reliably produced and released bubbles.
In terms of electronics, our Arduino and circuit setup went through two main iterations. We first tested the bubble machine's analog reactivity with a potentiometer before integrating it with the p5 environment. After confirming that, with the existing circuitry and code, the machine adjusted its operating speed according to the potentiometer readings, we replaced those readings with values sent from p5. This transition established a direct link between the user's input and the bubble machine's behavior.
In integrating facial and voice recognition functionalities into our project, we opted for the p5.js framework over Processing due to its compatibility with web-based applications and ease of implementation.
For facial recognition, we used a face-tracking approach from Kyle McDonald's cv-examples that reliably detects faces and facial features, letting users see their own face on the monitor. Based on the indexed feature points the tracker provides, we sketched out the eyebrows, eyes, nose, mouth, and face outline to draw a mirrored animated character on screen.
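A minimal sketch of that setup, assuming the clmtrackr library behind the cv-examples is loaded alongside p5.js; our actual character-drawing code is reduced here to a simple outline:

```javascript
let video;
let ctracker; // clmtrackr face tracker, as used in the cv-examples

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  ctracker = new clm.tracker();
  ctracker.init(); // older clmtrackr builds take a face model (e.g. pModel) here
  ctracker.start(video.elt);
}

function draw() {
  background(255);
  // mirror the canvas so the on-screen character moves like a reflection
  translate(width, 0);
  scale(-1, 1);
  const positions = ctracker.getCurrentPosition(); // indexed [x, y] feature points, or false
  if (positions) {
    // each index corresponds to a fixed landmark (face outline, brows, eyes, nose, mouth),
    // which is what lets us redraw the features as an animated character
    noFill();
    beginShape();
    for (const p of positions) {
      vertex(p[0], p[1]);
    }
    endShape();
  }
}
```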
For voice recognition, we employed the p5.speech.js library, which facilitated the transcription of user speech. By initializing a new instance of p5.SpeechRec and specifying the language (“en-US”), we could capture and process user utterances in real-time. The parseResult() function allowed us to extract and utilize the most recent word or phrase detected, enabling responsive interactions based on user input.
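A condensed sketch of that setup, assuming p5.speech.js is loaded; the body of parseResult() is simplified here to just pull out the newest word:

```javascript
let myRec;

function setup() {
  noCanvas();
  myRec = new p5.SpeechRec('en-US', parseResult); // language + result callback
  myRec.continuous = true;      // keep listening across utterances
  myRec.interimResults = false; // only deliver finished phrases
  myRec.start();                // the browser will ask for microphone permission
}

function parseResult() {
  // resultString holds the most recent transcription;
  // the last token is the newest word the user spoke
  const words = myRec.resultString.split(' ');
  const mostRecent = words[words.length - 1];
  console.log(mostRecent); // our sketch reacts to this word instead of logging it
}
```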
For bubble creation, I drew inspiration from previous creative coding projects while introducing novel interactions based on the goals of our project. One of the major innovations was to adjust the size of the bubbles on the screen according to the volume level of the user input. When the volume exceeds a pre-set threshold, the size of the bubble decreases until it disappears completely.
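The core of that behavior looks roughly like this, using p5.sound's AudioIn as the level meter; the threshold and shrink rate shown are illustrative stand-ins for the values we tuned by testing:

```javascript
let mic;
let bubbleSize = 80;   // current bubble diameter (illustrative starting value)
const THRESHOLD = 0.2; // illustrative volume threshold; tuned by ear in practice

function setup() {
  createCanvas(640, 480);
  mic = new p5.AudioIn(); // microphone input from the p5.sound library
  mic.start();            // browsers may require a click first; see userStartAudio()
}

function draw() {
  background(220);
  const vol = mic.getLevel(); // amplitude between 0.0 and 1.0
  if (vol > THRESHOLD && bubbleSize > 0) {
    bubbleSize -= 2; // loud enough: the worry bubble shrinks away
  }
  if (bubbleSize > 0) {
    ellipse(width / 2, height / 2, bubbleSize);
  }
}
```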
One of the most challenging aspects was establishing serial communication between p5.js and the Arduino. We initially tried several approaches, including the p5.serialcontrol app, but ended up with a more traditional method recommended by Prof. Rudi: coding the communication directly in the p5.js and Arduino sketches to ensure reliable data transfer between the two platforms.
The code snippet below illustrates how we successfully receive data from Arduino in p5.js. We use the p5.SerialPort object to establish a serial connection and listen for incoming data; when data arrives, the serialEvent() function is called to process it. Importantly, the data type used to process data in p5.js must match what Arduino sends. In our case, converting the received data with the Number() function was essential for successful communication; other conversions such as byte() and int() were not suitable for our application and resulted in communication failures.
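A minimal sketch of the receiving side, assuming the p5.serialport library (with its companion serial server running in the background) and a placeholder port name:

```javascript
let serial;     // p5.SerialPort object from the p5.serialport library
let inData = 0; // latest value received from Arduino

function setup() {
  createCanvas(400, 400);
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem14101'); // placeholder: use your Arduino's port name
  serial.on('data', serialEvent);        // run serialEvent() whenever bytes arrive
}

function serialEvent() {
  const inString = serial.readLine(); // read one newline-terminated message
  if (inString.length > 0) {
    // Number() was the conversion that worked for us;
    // byte() and int() produced values the rest of the sketch could not use
    inData = Number(inString.trim());
  }
}
```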
To send data from p5.js to Arduino, we mainly use serial.write(). In the Arduino code, we define the pin connected to the fan motor and set it as an output. Inside the loop() function, we continuously check for incoming data with Serial.available(); if data is available, we read it with Serial.read() and pass it to analogWrite() to control the fan motor's speed.
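A sketch of the sending side; the helper name and the mapping range are illustrative, and the Arduino counterpart behaves exactly as described above:

```javascript
// p5.js side: convert the microphone level to a fan speed and send it out.
// On the Arduino, loop() checks Serial.available(), reads the byte with
// Serial.read(), and hands it to analogWrite() on the fan motor's PWM pin.
function updateFan(vol) {
  // vol is mic.getLevel(), between 0.0 and 1.0; 0.3 is an illustrative ceiling
  const speed = floor(constrain(map(vol, 0, 0.3, 0, 255), 0, 255));
  serial.write(speed); // sends a single byte (0-255) for analogWrite() to use
}
```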
By refining our designs and overcoming technical hurdles, we were able to create a functional and impactful project that effectively fulfilled its intended purpose.
CONCLUSIONS
Our project aimed to provide users with a tangible and cathartic outlet for expressing and releasing worries through an interactive bubble-based system. We sought to create an experience where users could visually and physically engage with their emotions, ultimately experiencing a sense of relief and catharsis. In reflection, our project largely achieved its stated goals. We successfully implemented facial and voice recognition technologies, integrated them with the bubble machine, and enabled dynamic interaction between the user, the digital interface, and the real world. Users could see their mirrored faces, express their worries verbally, and witness the tangible representation of their emotions through the creation and release of bubbles.
From the setbacks and failures encountered during the project, I learned the importance of iterative design and the value of embracing challenges as opportunities for growth. Each obstacle we faced presented a chance to learn, adapt, and ultimately improve the project. These experiences underscored the iterative nature of design and the necessity of resilience in the face of challenges.
In conclusion, our project has demonstrated the power of interactive design in fostering emotional expression and connection. Through thoughtful integration of technologies and a user-centered approach, we were able to create an engaging and meaningful experience that resonated with our audience.
DISASSEMBLY
APPENDIX
Bubble Machine
- https://www.youtube.com/watch?v=CWFRqQhGa6w (mechanism)
- https://forum.arduino.cc/t/running-two-motors-in-a-loop/967922 (DC motor control)
- https://docs.google.com/presentation/d/1riWKSLQ8dvxT7iRjQZRR9aIbOMKsuxgG1sxSNWiaHiM/edit#slide=id.g2bd96f4d727_0_45 (controlling motors)
P5 Facial/Audio Recognition
- https://itp.nyu.edu/physcomp/labs/labs-serial-communication/two-way-duplex-serial-communication-using-p5js/ (the p5 serial approach we used)
- https://kylemcdonald.github.io/cv-examples/ (facial recognition)
- https://github.com/kylemcdonald/AppropriatingNewTechnologies/wiki/Week-2 (facial recognition 2)
- https://florianschulz.info/stt/ (speech-to-text library for Google Chrome)
- https://processing.org/reference/libraries/sound/AudioIn.html (audio library)
- http://learningprocessing.com/examples/chp20/example-20-09-mic-input (mic input)
- https://github.com/p5-serial/p5.serialserver (another p5 serial option)
- https://www.youtube.com/watch?v=MtO1nDoM41Y (Arduino to p5 / two-way communication)
- https://www.youtube.com/watch?v=EA3-k9mnLHs (Processing facial recognition)