A. MURAL MARATHON – Eric – Andy Garcia
B. CONCEPTION AND DESIGN
The concept of our project is a virtual graffiti installation that lets users draw on a computer screen (or, of course, on a projection) with a graffiti can that doesn't spray real paint. This creative experience transcends the limitations of paper or walls and gives the user more room for creativity. We thought it would be memorable if users could create or contribute their own small works of art: not only would the result be a meaningful, constantly evolving, dynamic piece of art, but the process could help users and anyone interacting with our project reduce stress and have fun with others. We kept in mind that mental health is something people need to pay attention to during finals week, so we generated this idea of entertainment-based interaction. We also considered that street graffiti is illegal not only in Shanghai, China, but in most other regions and countries, so we wanted to create a graffiti atmosphere in which users could relax and vent their stress. The initial interaction concept was inspired by Etch A Sketch and an exhibition at the K11 shopping center in Shanghai. After Nicole did this field research, we wanted to replicate and adapt the experience using what we had learned in Arduino and Processing.
In our preparations, we thought of virtual graffiti as an innovative form of expression that combines art and technology, connecting traditional graffiti with digital tools (in this case, Arduino and Processing). We wanted to create an immersive, personalized, and interactive art experience. Although technical and learning limitations kept us from achieving a high level of virtual graffiti, our project aims to give users a fun, interactive graffiti experience while letting them explore color (theory). It also helps users develop their thinking and their latent drawing talents: during the interaction, users need to consider what ideas to put on the canvas, how they want to paint, and how they want others to understand them. Based on this, we thought about how users could physically present their ideas to others during the interaction. That is, we wanted to set a scenario in which users could show the content through their own actions, freely express their creativity and ideas, and complete their own "performances". Our project presents the user's ideas both to themselves and to other audiences, creating a two-way conversation.
Our project design requires a canvas, which can be a computer screen or a projection. The centerpiece is a device that mimics a graffiti can and works through two ultrasonic sensors, which measure the distance from the device to the table and to the paper wall, respectively. When the sensors read distances within a calibrated range, dots, lines, circles, or other shapes appear on the canvas. A pressure sensor is also added, which the user presses while drawing. In the user testing session, the overall interaction was relatively smooth, and users could make virtual graffiti and change colors with the color selector. However, I found that the wires attached to the can occasionally blocked the sensor at the bottom of the can. Since the user needs to keep moving the can, the wires get dragged along, which prevents the user from accurately drawing the desired pattern. Some users also suggested optimizing how the pressure sensor is pressed: during user testing, the center of our can's nozzle was hollow, so users had to press the pressure sensor onto the narrow rim at the edge of the nozzle, or they could not draw at all. We collected a very large amount of feedback and suggestions (see the Appendix for videos of user testing, feedback and suggestions, and more detailed improvements), and from the many we picked out a few adaptations that best served our project goal. The top three adaptations were: the way the top pressure sensor is pressed, the overall setup (including the wires and the background brick wall), and the ambient sound. These three adaptations were effective in the final presentation because they improved the graffiti experience compared to user testing: users not only saw their graffiti but also heard the ambient sound associated with it. As for the rest of the ambient environment, although we added LED strips that simulate the flashing red and blue lights of a police car, plus background sounds, technical difficulties kept this from becoming part of the interaction that users could trigger.
C. FABRICATION AND PRODUCTION
Main materials and technologies used:
- 3D printing
- Laser cutting
- Projector x 1
- Button with built-in resistor x 5
- F/M & M/M jumper cables
- Arduino UNO x 2
- Ultrasonic sensor x 2
- Pressure sensor x 1
- Neopixel LED strip (60) x 2
Primary sketches:
We built a pen model in our sketches, but later in the production process we turned it into a graffiti can. This is because the graffiti can is more in line with the overall goal of our project and the atmosphere it creates.
C. 01 Ultrasonic sensor
The core of our project is a pair of ultrasonic sensors that detect the distance to the floor and to the wall and transmit the data to Processing. Before 3D printing, I made a model of a pen with cardboard, hot glue, and tape; it doesn't quite look like a pen because we only built the body. Two ultrasonic sensors were placed in the pen, one facing the ground and one facing the side. The idea of choosing ultrasonic sensors was inspired by our recitation on "mouse x & mouse y", which we converted into a physical form.
Since our form of interaction requires the user to hold and move an item to draw on the screen/projection, we used very long wires. To prevent tangling and other hard-to-diagnose issues, we used strips of paper tape to bundle the four wires of each ultrasonic sensor together and labeled on the tape which sensor detects the distance to the table and which detects the distance to the wall. Each sensor measures distance along a different axis; after we map() the values, Processing reads them from the Arduino over serial communication and draws lines on the canvas that follow the graffiti can as it moves, as sketched below.
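To make the pipeline concrete, here is a minimal, simplified Processing sketch of the idea (our full code is in the Appendix): two comma-separated distance values arrive per line over serial, get map()ed to screen coordinates, and connect to the previous point with a line. The port index and the calibration ranges (200–500 and 350–50, taken from our final code) would need adjusting for any other setup.

import processing.serial.*;

Serial port;
float x, y, oldx, oldy;

void setup() {
  size(800, 600);
  background(255);
  // assumes the Arduino is the first port in the list; adjust as needed
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  while (port.available() > 0) {
    String in = port.readStringUntil('\n');
    if (in != null) {
      String[] values = split(trim(in), ",");
      if (values.length == 2) {
        // wall distance drives x, table distance drives y (inverted range)
        x = map(float(values[0]), 200, 500, 0, width);
        y = map(float(values[1]), 350, 50, 0, height);
        line(oldx, oldy, x, y);  // connect to the previous point
        oldx = x;
        oldy = y;
      }
    }
  }
}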
C. 02 3D modeling and 3D printing
I modeled three different pens/cans in Tinkercad, and after discussing and comparing them with Nicole, we settled on the third version. Details of the process can be found in the Appendix. After measuring and adjusting to a size suitable for the user's hand, we printed a graffiti can and attached the two ultrasonic sensors with paper tape (hot glue was used in the final stage to secure them). The hole at the top holds the pressure sensor and lets wires pass through. We decided to 3D print our main device because we wanted this digital fabrication technology to serve not just as decoration but as an important part of the user interaction experience.
C. 03 map() and calibration
The appearance of the graffiti can:
This part was the core of our project and its most challenging step. Initial debugging was done with the help of Prof. Andy, who suggested that we start with a smaller range of distance values and keep the graffiti can fairly close to the table, instead of our initial idea of measuring the distance to the floor; the ultrasonic sensors were simply too far from the floor for our calibration. In our first calibration, the lines were not very smooth, not completely under the user's control, and somewhat laggy. I built a wall out of cardboard, foam board, and metal wire for one of the ultrasonic sensors to measure against: one sensor sits on the side of the graffiti can facing that wall, and the other sits on the bottom of the can facing the table. Meanwhile, we borrowed the smoothing function I had used in my midterm project and changed its factor to 0.08, because the ultrasonic sensor readings also need smoothing to produce smoother lines, as shown below.
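The smoothing is a standard exponential moving average. In our project it runs on the Arduino (see the Appendix code), but the idea is language-independent; here is a minimal Processing demo that uses mouseX as a stand-in for the raw sensor reading:

float smoothed = 0;
float SMOOTHING = 0.08;  // 0 < factor <= 1; smaller = smoother but slower to respond

void setup() {
  size(400, 200);
}

void draw() {
  background(255);
  // exponential moving average: blend 8% of the new reading into the old value
  smoothed = smoothed * (1.0 - SMOOTHING) + mouseX * SMOOTHING;
  fill(0);
  circle(mouseX, 80, 10);     // raw input
  fill(255, 0, 0);
  circle(smoothed, 120, 10);  // smoothed output trails behind, without jitter
}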
With the help of that wall, we calibrated the new distance values. Prof. Andy helped us change part of the code so that distances were measured in millimeters, which made our graffiti more stable, smoother, and less laggy than before. Below are the calibrations of the graffiti using the line and the circle, respectively. One small difficulty in this step was that the horizontal and vertical distances did not seem equally sensitive. Prof. Andy suggested that we tape our wall in place while measuring the distance between the wall and the computer, and record that distance for the subsequent calibration.
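For reference, the millimeter conversion in the Arduino code in the Appendix works out as follows. Sound travels at roughly 343 m/s, i.e. 0.343 mm per microsecond, and the echo covers the distance twice (out and back), so:

distance_mm = duration_us × 0.343 / 2 ≈ duration_us / 2.9 / 2

which is exactly the duration / 2.9 / 2 line in the Appendix code (since 1 / 0.343 ≈ 2.9).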
After that, to make the interaction closer to a real graffiti can, we added a pressure sensor at the position of the nozzle. We chose this sensor because it requires the user to apply a certain amount of pressure to trigger an interaction, just as a real graffiti can requires the user to press the nozzle with a finger. When we first added the sensor, pressing it while drawing a line produced circles alongside the line, with the circle's size determined by how hard the user pressed. Later, we changed it so that a line is drawn only while the user presses the pressure sensor; otherwise, nothing appears on the canvas. The thickness of the line is determined by how hard the user presses. This sensor also needed calibrating with map(), and with the ultrasonic sensors already calibrated, this step gave us no great difficulty; the core logic is sketched below.
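The pressure logic is essentially a threshold plus a map(). A minimal Processing sketch of the idea, using the mouse button as a stand-in for the sensor (the 100–800 input range comes from our calibration in the Appendix code):

float pressure;  // stands in for the analogRead() value sent from the Arduino

void setup() {
  size(800, 600);
  background(255);
}

void draw() {
  // simulate a pressure reading while the mouse button is held down
  pressure = mousePressed ? 450 : 0;

  if (pressure > 100) {                        // threshold: ignore light touches
    float w = map(pressure, 100, 800, 0, 50);  // harder press = thicker line
    strokeWeight(w);
    line(pmouseX, pmouseY, mouseX, mouseY);
  }
}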
C. 04 Color selector
In the next step, to make the project more interesting and add another form of interaction, we built a color selector into the device. First, we tested how the buttons should work and how to add them to our project, using the circuit connections in the first picture. We then replaced the buttons that required external resistors with buttons that have built-in resistors, which simplified our circuit. We also dropped our earlier idea of on-screen paint buckets for the user to choose from, in favor of physical buttons whose colors represent and control the colors on the canvas; the on-screen version would have been more complicated and would have required us to draw exact locations and graphics in Processing. After this, we re-soldered and reassembled the wiring into the existing circuit. The black button picks a random color, changing to a new random color every time it is pressed; the other buttons produce their corresponding colors. We appreciate Kevin's help with our Arduino code. We also let users erase their artwork (i.e., reset the background to white) if they were not happy with it, by pressing a key on the keyboard. A sketch of the random-color button and the keyboard reset follows.
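Our Appendix code maps each button to a fixed stroke color; for the random-color button, some edge detection is needed so the color changes once per press rather than on every frame it is held. A minimal sketch of both behaviors, with the mouse button standing in for the black button, prevButtonValue as our own illustrative helper, and 'r' as an illustrative reset key (not necessarily the key we actually used):

int buttonValue = 0;      // stands in for the button state sent from the Arduino
int prevButtonValue = 0;  // illustrative helper: last frame's state, for edge detection

void setup() {
  size(400, 400);
  background(255);
  strokeWeight(10);
}

void draw() {
  buttonValue = mousePressed ? 1 : 0;  // simulate the black button with the mouse

  // change color only on the press itself, not on every frame it is held
  if (buttonValue == 1 && prevButtonValue == 0) {
    stroke(random(255), random(255), random(255));
  }
  prevButtonValue = buttonValue;

  point(mouseX, mouseY);
}

void keyPressed() {
  if (key == 'r') {    // illustrative reset key
    background(255);   // wipe the canvas back to white
  }
}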
Here is a demo of what it looks like after we added the color selector to our setup.
C. 05 User testing
D. CONCLUSIONS
I think our project achieved that goal. First, we provided colorful graffiti tools so that users could experience the full power of color in their creations and feel motivated to actively explore and understand color theory. (We had a button with random color changes for users to choose from, and some users told us it made them feel like they were creating abstract art.) Second, we introduced interactive effects and an ambient environment, which sparked creativity through the user's interaction with the virtual canvas and made the experience more vivid and engaging. What's more, through simple hints and instructions we helped users develop their thinking during the graffiti process and gradually reveal their latent drawing talents. For example, many users mentioned that the project made them feel like abstract artists, and my roommate felt like a performance artist when he drew with it.
Our ideal expectation was that users, regardless of artistic background, would be able to enter the world of virtual graffiti easily through an intuitive, simple interface and tool. In the process they would choose different colors and express their creativity in the virtual environment, establishing an immersive creative experience. They would also have the opportunity for social interaction, with the option of discussing what they want to paint with those around them. This mode lets our users share the creative process with others, inspire and learn from each other, and spark creative collisions, finding their own unique experience in the graffiti space through this multi-layered interaction. In the end, the audience who used our project basically fulfilled the first expectation: they could roughly create what they wanted, such as kittens and human faces. However, some of them did not fulfill the expectation of social interaction, because they were focused on how to use the graffiti can and neglected the broader experience. This is a part of the user experience we could further optimize.
Therefore, if we had more time, we would improve the following details. First, users could choose among various painting tools when drawing, such as brushes, paints, and textures, and we could even let users make the "bupu/pffff" sound during the graffiti process to simulate the feeling of traditional painting. Second, to make creation more fun, we could introduce special effects and virtual objects; for example, users could add three-dimensional elements to the canvas to make their creations more vivid. Third, we need to make the LED strips respond to the background sound without interfering with the user's normal graffiti.
During the preparation of the project, I learned many methods and techniques for map()ping values and smoothing them. I had used these in my midterm project, but I am much more familiar with them after this final project. It is easy to have many interesting ideas, but the key lies in an idea's feasibility, and the most important thing in the end is turning a feasible idea into feasible fabrication and preparation. From the very first sketches to the final presentation, we adapted our ideas a lot. Nevertheless, our project goal did not change, and all the improvements, changes, additions, and deletions centered on that goal.
Another point is that my partner and I started preparing for our project very early; we 3D printed and prototyped early, which kept us from running late or feeling rushed.
Our main remaining issue is how to let the user interact with the ambient environment as well, rather than treating it as mere decoration of the project. Our goal, though, is still to create an immersive, personalized, and interactive graffiti art experience. This virtual graffiti project carries a transformative idea: its essence is not only to give users a fun and interactive graffiti experience, but also to create a dynamic and inclusive creative environment. The combination of intuitive tools, immersive interactions, and communication and sharing with others caters to a diverse audience that transcends artistic backgrounds. Our virtual graffiti is more than a pile of technology; it reflects a deep attention to our users' creativity and needs. Throughout the design and fabrication process, we focused on the user experience and tried to create a virtual art space that was both creative and easy to operate. Ideally, by providing a space where individuals can freely express themselves, we hope to unlock users' creative potential, promote the exploration of color theory, and encourage artistic thinking. That means not only considering the feasibility of the technical implementation, but also deeply understanding users' passion and expectations for artistic creation. This understanding influenced our decisions on tool (sensor) selection, prototyping, and interaction methods, making virtual graffiti a project that is truly close to users' needs.
This was true not only in this project and this course, but in other ways as well. For example, the project also helped me understand another course I took, on product management: as a product manager, I need to engage the audience and mind the logic behind the structure. After all, if a product manager comes across as unorganized or lacking empathy, how can I claim to understand the user if I don't even understand what the audience wants or needs to hear? Last but not least, our project is consistent with my understanding of interaction as a kind of communication, not in the way current flows only from the positive terminal to the negative terminal of a circuit, but in a multidirectional, dynamic, open-ended way. A case in point: the graffiti created by users is the result of open-ended conversation. In other words, our project is ostensibly about the structure and content of objects, but essentially about the behavior of users, including but not limited to graffiti. Users need to think about what ideas they want to put on the canvas, how they want to paint, and how they want others to understand them during the interaction. Our project presents users' ideas to themselves and to other audiences as if clear thinking were being visualized, creating a two-way conversation.
E. DISASSEMBLY
F. APPENDIX (with important content)
Final presentation:
Demo 1:
Demo 2:
Processing code:
import processing.serial.*;
import processing.sound.*;

Serial serialPort;
SoundFile sound;
// declare an Amplitude analysis object to detect the volume of sounds
Amplitude analysis;

int NUM_STRIPS = 2;
int NUM_LEDS = 60;  // How many LEDs in your strip?
color[][] leds = new color[NUM_STRIPS][NUM_LEDS];  // array of one color for each pixel

PImage brickwall;

int NUM_OF_VALUES_FROM_ARDUINO = 8;  /* CHANGE THIS ACCORDING TO YOUR PROJECT */
/* This array stores values from Arduino */
int arduino_values[] = new int[NUM_OF_VALUES_FROM_ARDUINO];

float oldx;
float oldy;

void setup() {
  //size(1400, 800);
  fullScreen();
  //background(255);
  frameRate(30);
  brickwall = loadImage("concrete wall.png");
  image(brickwall, 0, 0, width, height);

  sound = new SoundFile(this, "Police Siren Ambience in Busy City.mp3");
  //sound.loop();  // load and play a sound file in a loop

  // create the Amplitude analysis object
  analysis = new Amplitude(this);
  // use the soundfile as the input for the analysis
  analysis.input(sound);

  printArray(Serial.list());
  // put the name of the serial port your Arduino is connected
  // to in the line below - this should be the same as you're
  // using in the "Port" menu in the Arduino IDE
  serialPort = new Serial(this, "/dev/cu.usbmodem101", 9600);
  println("Loading mp3...");
}

void draw() {
  // receive the values from Arduino
  getSerialData();

  // use the values like this:
  float x = map(arduino_values[1], 200, 500, 0, width);
  float y = map(arduino_values[0], 350, 50, 0, height);
  float size = map(arduino_values[2], 100, 800, 0, 50);

  if (arduino_values[2] > 100) {
    strokeWeight(size);
    line(oldx, oldy, x, y);
    //circle(x, y, size);
  }
  oldx = x;
  oldy = y;

  if (arduino_values[3] == 1) {
    saveFrame("line-######.png");
  } else if (arduino_values[4] == 1) {
    stroke(#16F063);
  } else if (arduino_values[5] == 1) {
    image(brickwall, 0, 0, width, height);
  } else if (arduino_values[6] == 1) {
    stroke(255, 255, 143);
  } else if (arduino_values[7] == 1) {
    stroke(255, 0, 0);
  }

  if (sound.isPlaying() == false) {
    sound.loop();
  }
}

void getSerialData() {
  while (serialPort.available() > 0) {
    String in = serialPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
    if (in != null) {
      print("From Arduino: " + in);
      String[] serialInArray = split(trim(in), ",");
      if (serialInArray.length == NUM_OF_VALUES_FROM_ARDUINO) {
        for (int i = 0; i < serialInArray.length; i++) {
          arduino_values[i] = int(serialInArray[i]);
        }
      }
    }
  }
}
Arduino 1 code (2 ultrasonic sensors + 5 buttons + pressure sensor):
// constants won't change. They're used here to set pin numbers:
const int buttonPin1 = 2;   // the number of the pushbutton pin
const int buttonPin2 = 3;   // the number of the pushbutton pin
const int buttonPin3 = 4;   // the number of the pushbutton pin
const int buttonPin4 = 12;  // the number of the pushbutton pin
const int buttonPin5 = 13;  // the number of the pushbutton pin

// variables will change:
int buttonState1 = 0;  // variable for reading the pushbutton status
int buttonState2 = 0;
int buttonState3 = 0;
int buttonState4 = 0;
int buttonState5 = 0;

// first ultrasonic sensor
int TRIG_PIN1 = 9;
int ECHO_PIN1 = 10;
// second ultrasonic sensor
int TRIG_PIN2 = 5;
int ECHO_PIN2 = 6;

float SMOOTHING = 0.08;
long duration1, duration2;
long distance1, distance2;
float smoothed1, smoothed2;

void setup() {
  Serial.begin(9600);
  // initialize first sensor
  pinMode(TRIG_PIN1, OUTPUT);
  pinMode(ECHO_PIN1, INPUT);
  // initialize second sensor
  pinMode(TRIG_PIN2, OUTPUT);
  pinMode(ECHO_PIN2, INPUT);
  // initialize the pushbutton pins as inputs:
  pinMode(buttonPin1, INPUT);
  pinMode(buttonPin2, INPUT);
  pinMode(buttonPin3, INPUT);
  pinMode(buttonPin4, INPUT);
  pinMode(buttonPin5, INPUT);
}

void loop() {
  // read the state of the pushbuttons:
  buttonState1 = digitalRead(buttonPin1);
  buttonState2 = digitalRead(buttonPin2);
  buttonState3 = digitalRead(buttonPin3);
  buttonState4 = digitalRead(buttonPin4);
  buttonState5 = digitalRead(buttonPin5);

  int sensorValue = analogRead(A0);  // pressure sensor

  // first sensor
  digitalWrite(TRIG_PIN1, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN1, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN1, LOW);
  duration1 = pulseIn(ECHO_PIN1, HIGH);
  distance1 = duration1 / 2.9 / 2;  // echo time (us) to millimeters
  if (distance1 > 500) {
    distance1 = 0;  // discard out-of-range readings
  }
  smoothed1 = smoothed1 * (1.0 - SMOOTHING) + distance1 * SMOOTHING;

  // second sensor
  digitalWrite(TRIG_PIN2, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN2, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN2, LOW);
  duration2 = pulseIn(ECHO_PIN2, HIGH);
  distance2 = duration2 / 2.9 / 2;
  if (distance2 > 500) {
    // distance2 = 0;
  }
  smoothed2 = smoothed2 * (1.0 - SMOOTHING) + distance2 * SMOOTHING;

  // send all eight values as one comma-separated line
  Serial.print(smoothed1);  // floor distance
  Serial.print(",");        // put comma between sensor values
  Serial.print(smoothed2);  // side distance
  Serial.print(",");
  Serial.print(sensorValue);
  Serial.print(",");
  Serial.print(buttonState1);
  Serial.print(",");
  Serial.print(buttonState2);
  Serial.print(",");
  Serial.print(buttonState3);
  Serial.print(",");
  Serial.print(buttonState4);
  Serial.print(",");
  Serial.print(buttonState5);
  Serial.println();  // add linefeed after sending the last value
  //delay(1);  // delay in between reads for stability
}
Arduino 2 code (LED strips):
/// @file ArrayOfLedArrays.ino
/// @brief Set up two LED strips, both running from an array of arrays
/// @example ArrayOfLedArrays.ino

// ArrayOfLedArrays - see https://github.com/FastLED/FastLED/wiki/Multiple-Controller-Examples
// for more info on using multiple controllers. Here we set up two NEOPIXEL strips on two
// different pins, each strip getting its own CRGB array, stored together in an array of arrays.

#include <FastLED.h>

#define NUM_STRIPS 2
#define NUM_LEDS_PER_STRIP 60

CRGB leds[NUM_STRIPS][NUM_LEDS_PER_STRIP];

// int next_led = 0;   // 0..NUM_LEDS-1
// byte next_col = 0;  // 0..2
// byte next_rgb[3];   // temporary storage for next color

// For mirroring strips, all the "special" stuff happens just in setup. We
// just addLeds multiple times, once for each strip
void setup() {
  // Serial.begin(115200);
  // tell FastLED there's 60 NEOPIXEL leds on pin 8
  FastLED.addLeds<NEOPIXEL, 8>(leds[0], NUM_LEDS_PER_STRIP);
  // tell FastLED there's 60 NEOPIXEL leds on pin 7
  FastLED.addLeds<NEOPIXEL, 7>(leds[1], NUM_LEDS_PER_STRIP);
}

void loop() {
  // chase a red dot down the first strip...
  for (int i = 0; i < NUM_LEDS_PER_STRIP; i++) {
    leds[0][i] = CRGB::Red;
    FastLED.show();
    leds[0][i] = CRGB::Black;
    delay(1);
  }
  // ...then a blue dot down the second strip
  for (int i = 0; i < NUM_LEDS_PER_STRIP; i++) {
    leds[1][i] = CRGB::Blue;
    FastLED.show();
    leds[1][i] = CRGB::Black;
    delay(1);
  }
}
Tinkercad: link
Since I could not find suitable buttons for our project in Tinkercad, they are not shown in the diagram; their 5V and GND connections go to the "+" and "-" rails of the breadboard. I also could not find a pressure sensor that fits our circuit: we use a pressure sensor with three connections (5V, GND, and DIN), while the only one in Tinkercad has just two. So in the diagram I assume it has another connector, which goes to A0 of the first Arduino.
Other fabrication steps:
1) Laser cutting
Nicole made a simple box and cut one hole in the front and one in the back, which was done by drawing circles on two pieces of plywood in Cuttle.xyz. We then fit the circuit sections into that box to make the whole project more organized and tidy. We purposely made the two holes a little larger because we would later need to route more wires through them to the circuit.
2) 3D Modeling and 3D printing
Version 1:
At first I planned to print the tip of the pen together with the cylindrical body, but my partner suggested printing them as separate parts so we could decide later whether to assemble them, avoiding being stuck with a piece we could not change afterward. After measuring the sensors' dimensions with a ruler, I hollowed out the side of the model to accommodate the two ultrasonic sensors.
Inspired by Prof. Andy, we decided to re-model the pen as a graffiti can, which makes our form of interaction easier for users to recognize. We planned to replace the buttons at the top of the can with a pressure sensor, enabling another form of interaction. The functionality and connection of the two ultrasonic sensors remained unchanged.
3) User testing
Based on user testing, we got the following feedback:
- Satisfaction isn't there yet. Make the pressure sensor more stable; put something (a button, a cap, …) around it
- The way it is set up needs more refinement
- Wires need to be packaged up; too messy right now with all the wires
- From cardboard to brick wall, from white canvas to brick wall canvas so it feels like you are immersed in an alleyway spraying graffiti
- Keypress for reset ➡️ button for reset, like what we did for the 5 buttons, maybe replace one of the buttons for reset
- Ambient: police sirens; so the player feels pressure that they need to quickly graffiti in time before the ‘police’ arrive
- Maybe create another cardboard wall + make a hole in the wall and put the laptop inside so it's more immersive?
- Kevin suggestion: aside from police sounds, make street noise to make you feel like you are in the zone; he said to not use headphones and use speaker as there will be a group audience (so we can hook it up to class speakers or sth)
- Kevin also praised us and said spray can works so well + super immersive; however a bit awkward having the buttons on the left hand side (the placement for user testing, he suggested a new way to change colors)
- One user suggested adding a cursor on the Processing screen so you know where you are spraying; when he was using the spray can he felt as if he didn't know where he was spraying every time he started (the start of each stroke)
- LA suggested we should add the pfffsshshhh (sound effect of spray can spraying)
- Work more on fabrication + provide a few more cues on how to use the project (make instructions easier)
- Can pre-program images or letters onto the Processing screen so it is not just strokes being made but also images and letters ++ add patterns
- Fill up the whole screen for Processing; change dimensions to be full-screen– we can adjust this when we use the mini projector
- Potentially find a way how to capture a screenshot of someone’s art piece after he/she finishes drawing (how to screen save)
4) Refinement and progress after user testing
After user testing, I added a sponge block to the lip of the spray can; the cut-out in the sponge holds the internal pressure sensor. This way, the user presses the sensor through the sponge block, which feels much more comfortable. I also added a brick-wall background to our canvas, so the user draws on top of this background wall. In addition, our project lets users save their work by pressing the "s" key on the keyboard, because we considered users' desire to save and share their works as they do on social media; so we provided a simple save function, shown below.
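The save function is a one-liner around Processing's saveFrame(); a minimal sketch of the pattern we used (the drawing part is simplified here):

void setup() {
  size(400, 400);
  background(255);
}

void draw() {
  if (mousePressed) {
    line(pmouseX, pmouseY, mouseX, mouseY);
  }
}

void keyPressed() {
  if (key == 's') {
    // saveFrame() numbers the files automatically: line-0001.png, line-0002.png, ...
    saveFrame("line-######.png");
  }
}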
Processing sketches:
We used strong black tape to wrap up the tangle of wires, which lets the user graffiti without the wires getting in the way. The video shows a demo of graffiti on a brick wall using the neater wires and graffiti can. The brick wall in the video was small, and we later resized it to the size of our canvas. Also, for the user to be able to paint on top of the brick wall, loadImage() needs to be called in setup() instead of draw() (see the sketch after the next paragraph).
We decorated our side wall with prints, hot glue, and glue sticks to give it more of a "street vibe". We changed the black button that had produced a random color into a special purple button that resets the background canvas: when users finish a painting, or are not satisfied with it, pressing this button resets the canvas. At first we could only reset both the lines and the brick wall to a white background at the same time; with Prof. Andy's help, we used image(brickwall, 0, 0, width, height) instead of background() to remove the lines while keeping the brick wall. After that, we also changed the blue button from color control to saving the sketch, which lets users save their masterpieces. The reason for using buttons instead of the keyboard is that we wanted to go beyond traditional forms of human-computer interaction (mouse, keyboard, etc.).
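Both points, loading the image once in setup() and resetting with image() rather than background(), fit in a few lines. A minimal sketch, assuming a file named "concrete wall.png" in the sketch's data folder (as in our Appendix code) and using a key press in place of the purple button:

PImage brickwall;

void setup() {
  size(800, 600);
  brickwall = loadImage("concrete wall.png");  // load once, here, not in draw()
  image(brickwall, 0, 0, width, height);       // draw it once so strokes persist on top
}

void draw() {
  if (mousePressed) {
    line(pmouseX, pmouseY, mouseX, mouseY);    // strokes accumulate over the wall
  }
}

void keyPressed() {
  // redrawing the image erases the strokes but keeps the wall;
  // background(255) would have wiped both
  image(brickwall, 0, 0, width, height);
}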
We then added two Neopixel LED strips, which showed random colors during initial debugging. The effect we wanted was alternating red and blue to simulate police car lights. With the help of one of the LAs, Rachel, we tried different LED strip effects, finally chose the alternating red-and-blue one, and sped up the alternation in subsequent debugging. However, we found that the LED strips could not be controlled well by the background police sound once the strips were combined with the original circuit and code, and we had no good solution for this before the final presentation. As for the background sound, we had planned to use a buzzer at first, but then decided to play it from the computer: since our circuit was enclosed in a box, we had no good way to connect a buzzer to it.
After much debugging, we realized that, because of the delay, the graffiti became very laggy even when the LED strips worked properly. So we used two Arduinos: one controls the two ultrasonic sensors and the pressure sensor, and the other controls the LED strips. We also decided not to let the sound control the LED strips, while still keeping the sound. A non-blocking animation, as sketched below, might have avoided the problem on a single board.
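In hindsight, the lag comes from the LED loop blocking everything else with its per-LED FastLED.show() and delay() calls. We did not implement this, but a millis()-based chase, a common non-blocking Arduino pattern, might have let one board run both the sensors and the lights; a rough sketch under that assumption:

#include <FastLED.h>

#define NUM_LEDS_PER_STRIP 60

CRGB leds[2][NUM_LEDS_PER_STRIP];
int pos = 0;                 // which LED is currently lit
unsigned long lastStep = 0;  // time of the last animation step

void setup() {
  FastLED.addLeds<NEOPIXEL, 8>(leds[0], NUM_LEDS_PER_STRIP);
  FastLED.addLeds<NEOPIXEL, 7>(leds[1], NUM_LEDS_PER_STRIP);
}

void loop() {
  // advance the chase every 20 ms without blocking the rest of loop()
  if (millis() - lastStep >= 20) {
    lastStep = millis();
    leds[0][pos] = CRGB::Black;  // turn off the previous dot
    leds[1][pos] = CRGB::Black;
    pos = (pos + 1) % NUM_LEDS_PER_STRIP;
    leds[0][pos] = CRGB::Red;    // red dot on one strip,
    leds[1][pos] = CRGB::Blue;   // blue dot on the other
    FastLED.show();
  }
  // sensor reads and Serial prints could run here without being starved
}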
5) Decorating
In the final stage, we decorated the whole installation. Nicole designed the instructions and title of our project, which we printed out and put on the brick wall on the left side. Links to the three source files below:
Title: https://docs.google.com/drawings/d/1DUKhfBFcWt2QuO6f8scTq3QhIQZ2ZFh7XlPmtVDPg9A/edit
Instruction 1: https://docs.google.com/drawings/d/1D9LXN5fgwxpJXBT6t6BlFCo6KpR5hLG72k993TuamEU/edit
Instruction 2: https://docs.google.com/drawings/d/171ZWjKEG8O86lNaRsp-tlo0lyk_7Em-tLNNFiXwj_FQ/edit
6) Projector testing
We planned to use a projector, and before that, we roughly drew an area of arbitrary size on the whiteboard to figure out how to remap() the values.
After remapping the new distance values, we did initial testing with a projector. The projection worked somewhat better than the computer screen, but we did not use it in the final presentation because having two screens would have been a bit odd.
7) IMA Show
#1:
#2: a face
#3: we attracted a lot of kids!
The mother of one of the children marveled that just two Arduinos could do such a thing, and she examined how our circuits and sensors worked.
Our project also attracted the children's parents. One parent told me: "I have already purchased an Arduino for my children and want them to start learning about it… After visiting today's show, I realized how rare it is for a university to offer so many opportunities for kids and students; this is a school where people can really learn and be creative." I think that, after viewing our project and those of other students, they understood that the projects we create are meaningful and not just coursework. Not only did our graffiti project let the kids be creative, it also got their parents thinking about their children's education, even about having their kids come to NYU Shanghai someday.
#4: flowers