Recitation 7: Functions and Arrays by Alexander Cleveland

Introduction

In this recitation, I created the base outline for a function and added to it through multiple steps. These steps included using a for loop to create and fill different arrays. My goal for this recitation was to thoroughly understand every line of code that I was writing. I didn't want to copy and paste lines from the example without first knowing how they interacted with my function. This way, I could reuse the code in my project later if necessary.

Step One

I started step one by creating a simple motif using a square, a rectangle, and an ellipse. It alternated black and white colors to distinguish the shapes. I placed these shapes inside a display() function so the motif could be drawn on the canvas. The code for display() can be found below.

void display(float x, float y, color c) {
  fill(c);
  rect(x, y, 300, 300);

  fill(255);
  rect(x, y, 150, 200);

  fill(0);
  ellipse(x, y, 55, 55);
}

This step in the process just resulted in the shapes and colors being drawn on the canvas without any manipulation.

Step Two

In step two, I created a for loop inside setup() to draw my motif at 100 different positions at once. Inside setup(), my for loop began with the following header:

for (int i = 0; i < numberOfInstances; i++)

Before creating this loop, I defined "numberOfInstances" at the top of my code as 100. Each time the loop body runs, "i" increases by one because of the "i++" expression, and the loop keeps running until "i" is no longer less than "numberOfInstances", which in this case is 100. So this code draws 100 instances of my motif on the screen. But because it is in setup(), it only runs once, so I have to move it into draw() in the future.
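Putting this together with the display() function from step one, a minimal sketch of what step two could look like (the random ranges here are placeholders, not the exact numbers from my final code) is:

int numberOfInstances = 100;

void setup() {
  size(600, 600);
  background(255);
  rectMode(CENTER);
  for (int i = 0; i < numberOfInstances; i++) {
    // draw the motif once per pass; because this runs in setup(),
    // all 100 copies appear a single time and never move
    display(random(width), random(height), color(random(255)));
  }
}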

Step Three

In step three, I created three arrays to store the x, y, and color data for the original function I created. In order to do this, I had to start within setup() and fill them with data. Below is the code I had up until this point within setup().

void setup() {
  size(600, 600);
  background(255);
  rectMode(CENTER);
  for (int i = 0; i < numberOfInstances; i++) {
    xPositions[i] = random(0, 250);
    yPositions[i] = random(0, 255);
    cPositions[i] = color(random(0, 255));
  }
}

The for loop stayed the same, but in order to fill each "i" position with data, I set it equal to random values. It should be noted that in order to set up the arrays, I needed to include three lines at the beginning of my code, as follows.

float[] xPositions = new float[numberOfInstances];
float[] yPositions = new float[numberOfInstances];
color[] cPositions = new color[numberOfInstances];

These three lines declare the x, y, and c position arrays, each with a length of numberOfInstances (100). So, under setup(), I was essentially assigning random numbers to be placed at these addresses (xPositions[i], yPositions[i], and cPositions[i]). I needed to declare cPositions as a color array because otherwise its values wouldn't correspond to a random color, but to a random position instead. Then, in draw(), I used the arrays to make sure the function is displayed 100 times.

void draw() {
  background(255);
  for (int i = 0; i < numberOfInstances; i++) {
    if (xPositions[i] > 600) { xMovement[i] = -xMovement[i]; }
    if (yPositions[i] > 600) { yMovement[i] = -yMovement[i]; }

    display(xPositions[i], yPositions[i], cPositions[i]);

    xPositions[i] += xMovement[i];
    yPositions[i] += yMovement[i];
    cPositions[i] += cMovement[i];
  }
}

This can be seen in the original for loop tied to "numberOfInstances", which creates 100 repetitions of the function. The 100 different positions are selected by the variable "i" in the call display(xPositions[i], yPositions[i], cPositions[i]); inside the loop above.

Step Four

By this step, I had created 100 different instances of my motif across the screen with random numbers assigned as positions and colors. To add any movement to these instances, I had to declare xMovement, yMovement, and cMovement as float and color arrays of length numberOfInstances, so that in the for loop they could be used 100 times.

float[] xMovement = new float[numberOfInstances];
float[] yMovement = new float[numberOfInstances];
color[] cMovement = new color[numberOfInstances];

Under setup(), I then added three lines of code assigning a random amount of movement to each of the "x/y/cMovement" variables defined above.

xMovement[i] = random(0, 10);
yMovement[i] = random(11, 20);
cMovement[i] = color(random(0, 255));

After establishing the variables and the randomness, under draw() I used the += operator so that each position increases by its movement value every frame, ensuring the motifs keep moving from the random positions they started in.

xPositions[i] += xMovement[i];
yPositions[i] += yMovement[i];
cPositions[i] += cMovement[i];

Afterwards, in order to make sure they don't float off of the screen, I inserted two if statements just below the original for loop in draw().

if (xPositions[i] > 600) { xMovement[i] = -xMovement[i]; }
if (yPositions[i] > 600) { yMovement[i] = -yMovement[i]; }

By doing this, I was essentially saying that if the x or y positions reached a certain point on the canvas, the shapes would bounce off the side and come back into view. This is where setting xMovement to "-xMovement" makes the shape reflect back into the picture once it reaches 600, completing the step where the function cannot move off screen. All of my code can be viewed at the bottom of this post listed under "My Full Code."
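A similar pair of checks could also keep the shapes from drifting off the left and top edges; this is just a sketch of that idea, not part of my final code:

// reverse direction at every edge, not just the right and bottom ones
if (xPositions[i] > width || xPositions[i] < 0) { xMovement[i] = -xMovement[i]; }
if (yPositions[i] > height || yPositions[i] < 0) { yMovement[i] = -yMovement[i]; }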

A video of the completed project can be viewed below.

Questions

Question One

In your own words, please explain the difference between having your for loop from Step 2 in setup() as opposed to in draw().

When I used the for loop in setup(), the random placement happened only once during the course of the sketch, because setup() runs a single time. When I moved the for loop into draw(), it repeated every frame. In this case, it ran continuously because I did not insert any code saying "stop" once it hit a certain value; the code simply kept repeating over and over inside draw().
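As a small illustration of the difference (a sketch, not taken from my recitation code):

// Version 1: loop in setup() – 100 circles are drawn once and stay frozen
void setup() {
  size(600, 600);
  background(255);
  for (int i = 0; i < 100; i++) {
    ellipse(random(width), random(height), 10, 10);
  }
}

// Version 2: the same loop in draw() – uncomment this (and remove the loop
// above) to see new random circles every frame for as long as the sketch runs
// void draw() {
//   background(255);
//   for (int i = 0; i < 100; i++) {
//     ellipse(random(width), random(height), 10, 10);
//   }
// }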

Question Two

What is the benefit of using arrays? How might you use arrays in a potential project?

Arrays can be used to store different types of data within a sketch. As I did here, I stored colors and positions, which matched my position, movement, and color data. The benefit of arrays is their ability to store large quantities of this data and to let you repeat an operation over it however many times you need. For a potential project, I may use an array to store random color data and link it to something physical from the Arduino kit, so that when a button is hit, for example, the sketch displays a different random color value from the array every time.
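A minimal sketch of that idea, using a key press as a stand-in for the physical Arduino button, might look like this:

int numberOfColors = 10;
color[] randomColors = new color[numberOfColors];
int current = 0;

void setup() {
  size(600, 600);
  // fill the array with random colors once
  for (int i = 0; i < numberOfColors; i++) {
    randomColors[i] = color(random(255), random(255), random(255));
  }
}

void draw() {
  background(randomColors[current]);
}

void keyPressed() {
  // step to the next stored color; a physical button would trigger this instead
  current = (current + 1) % numberOfColors;
}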

My Full Code

int numberOfInstances = 100;

float[] xPositions = new float[numberOfInstances];
float[] yPositions = new float[numberOfInstances];
color[] cPositions = new color[numberOfInstances];

float[] xMovement = new float[numberOfInstances];
float[] yMovement = new float[numberOfInstances];
color[] cMovement = new color[numberOfInstances];

void setup() {
  size(600, 600);
  background(255);
  rectMode(CENTER);
  for (int i = 0; i < numberOfInstances; i++) {
    xPositions[i] = random(0, 250);
    yPositions[i] = random(0, 255);
    cPositions[i] = color(random(0, 255));

    xMovement[i] = random(0, 10);
    yMovement[i] = random(11, 20);
    cMovement[i] = color(random(0, 255));

    //void display(random(0,255), random(40, 90), random(80, 160))
    // display (width/2, height/5, color(125));
    // display (width/4, height/2, color(220));
    //}
  }
}

void draw() {
  background(255);
  for (int i = 0; i < numberOfInstances; i++) {
    if (xPositions[i] > 600) { xMovement[i] = -xMovement[i]; }
    if (yPositions[i] > 600) { yMovement[i] = -yMovement[i]; }

    display(xPositions[i], yPositions[i], cPositions[i]);

    xPositions[i] += xMovement[i];
    yPositions[i] += yMovement[i];
    cPositions[i] += cMovement[i];
  }
}

void display(float x, float y, color c) {
  fill(c);
  rect(x, y, 300, 300);

  fill(255);
  rect(x, y, 150, 200);

  fill(0);
  ellipse(x, y, 55, 55);
}

Final Project Proposal by Alexander Cleveland

Fly Superman Home

This project will be a game wherein a player tries to manage the height and speed of Superman by blowing into a sensor in order to get him home. The game will be shown in Processing and will have a Superman icon figure which stays in a stable position on the screen but appears to fly as the background moves forward. The top and bottom of the screen will be death zones which Superman needs to avoid to continue the game. In order to manage the height of Superman, the player will blow into a wind sensor, which determines how high or low Superman sits on screen. Within the game, there will be obstacles such as floating rocks that will force the player to adjust how hard they blow into the sensor. The longer they can blow into the sensor without killing Superman, the higher the score they will get. Throughout the game, different levels of progression will be paired with musical sounds to encourage the player.
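A very rough sketch of the core mechanic, with mouseY standing in for the wind-sensor reading (no sensor code exists yet, and all of the numbers here are placeholders), could look like this:

float supermanY;
int score = 0;
int deathZone = 60;  // thickness of the death zones at the top and bottom

void setup() {
  size(800, 400);
  supermanY = height / 2;
}

void draw() {
  background(135, 206, 235);

  // draw the death zones
  fill(200, 0, 0);
  rect(0, 0, width, deathZone);
  rect(0, height - deathZone, width, deathZone);

  // mouseY stands in for the wind sensor: "blowing" moves Superman up or down
  float target = mouseY;
  supermanY += (target - supermanY) * 0.05;

  // placeholder Superman icon
  fill(0, 0, 255);
  ellipse(100, supermanY, 30, 30);

  // survive a frame outside the death zones, gain a point
  if (supermanY > deathZone && supermanY < height - deathZone) {
    score++;
  }
  fill(0);
  text("Score: " + score, 10, 20);
}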

Research for this project was actually inspired by looking through my old iPhone games. Many years ago I often played a game called Jetpack Joyride. In that game, the player navigates a man on a jetpack through a laboratory, using their fingers to adjust how high or low the jetpack is flying. The premise of my game is similar to Jetpack Joyride, but takes on a different method of controlling the character. On the iPhone, it is much easier to control the character by tapping a finger. But in my project, it is significantly harder to guide the character to safety. It involves a physical aspect which is missing in many of the iPhone and computer games of today. I think this project is a game angled towards younger kids. In this, I am seeking to move kids away from the new-age games of today that don't involve physical interaction. As for impact, I think it creates a new medium of gaming that mixes old-style physicality with new-style technology and reward. Hopefully it can inspire future games to be interactive on both a physical and a digital level.

Re-earthing Traditions in China

In order to preserve traditional Chinese culture, this project will be focused on learning non-mainstream Chinese dialects of today and the past. The goal of this project is to raise awareness regarding the dialects dying out around China. Because Mandarin is being standardized as the official language, families are putting less emphasis on preserving their historical linguistic roots. For example, the Gaokao (高考), the college entrance exam, is only distributed in Mandarin. This excludes many of the dialects that come from provinces containing millions of people who have not traditionally used Mandarin. This issue has forced many families to abandon the use of their own dialect and focus solely on learning Mandarin. An example of this is in many southern provinces, such as Guangdong province with dialects like Cantonese.

In order to help preserve these traditions, this project will have a physical map which displays the different provinces of China. Each province will have a button on top of it. Once clicked, the button will link back to Processing and display a video or "how to" of a native speaker using that dialect, for example to introduce themselves. This project can be used as an educational resource for non-native or native speakers to practice or simply learn about Chinese in general. It will hopefully raise awareness about the subject and revitalize the importance of maintaining traditional dialect roots in a Mandarin-dominated society.

Rejection Recycling

In “Rejection Recycling”, I am creating a project that can detect when a paper or cardboard item is non-recyclable because it is contaminated with oil or other food substances. The original problem I am dealing with is that many things are often thought of as recyclable when they are in fact trash or belong in other categories. This applies to a wide variety of things having to do with food, but mostly paper and cardboard products. When someone places an item into the wrong sorting bin, there is no consequence at the time because it is difficult to monitor food waste everywhere. The only consequence is wasted energy at the recycling plant, which has to deal with misplaced trash. But with the help of my project, people will now understand where to put their food scraps or pizza boxes and help clean up the environment.

The premise of the idea derives from the many sensors we've used in past recitations. By placing sensors within the recycling bin, the bin will be able to autonomously detect whether an item is suitable for that category of waste. If it is suitable, then a large green check mark will appear on screen in Processing, accompanied by an encouraging "ding" sound. But if the waste is not suitable for recycling, then a large red X will appear on the screen along with an alarm sound. There will be a mechanism to reject the non-suitable material into a side bin where the subject can pick it up and place it into a suitable bin. This will be done through Arduino-driven physical motion connected to a trap door. In today's world, many people are uneducated about which materials need to be recycled and which don't. I hope to educate many on which items should be placed in recycling and to form societal habits which contribute to environmental protection. This product will be targeted at the mass public, because our environment affects all of us and recycling is the first step to reducing negative impacts around the world.
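A minimal sketch of the on-screen feedback side, with a key press standing in for the sensor's verdict since the detection hardware has not been chosen yet, might be:

boolean suitable = true;

void setup() {
  size(120, 120);
}

void draw() {
  background(255);
  if (suitable) {
    // green check mark for recyclable items
    stroke(0, 180, 0);
    strokeWeight(12);
    line(30, 60, 50, 80);
    line(50, 80, 90, 30);
  } else {
    // red X for contaminated items
    stroke(200, 0, 0);
    strokeWeight(12);
    line(30, 30, 90, 90);
    line(90, 30, 30, 90);
  }
}

void keyPressed() {
  // stand-in for the sensor reading: toggle between the two verdicts
  suitable = !suitable;
}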

Recitation 6: Processing Animation by Alexander Cleveland

My Work and Code in Recitation

During this recitation, I made a point to try and explore Processing through a couple of different variations. This meant trying to use and combine things I hadn't necessarily learned, which led to a lot of trial and error. A list of the functions I used (and tried to use but didn't include) is as follows, with a small sketch combining them just after the list:

  1. setup() (to initialize the sizing and parameters of the work)
  2. draw() (within this I set the background, the rectangle size, and the coloring of the rectangle)
  3. mousePressed() (an interactive element whereby clicking the mouse made the rectangle appear and then disappear off screen after some time)
  4. mouseReleased() (this ensured that whenever I released the mouse, the rectangle would stop moving)
  5. frameRate() (determined the speed of the image moving across the screen)
  6. noStroke() (took away the black border line of the rectangle)
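Putting those functions together, a minimal sketch of the exercise (assuming the rectangle drifts to the right only while the mouse is held; my actual colors and speeds differed) might look like this:

float rectX;
boolean moving = false;

void setup() {
  size(600, 600);
  frameRate(30);
  noStroke();
  rectX = -100;  // start just off the left edge
}

void draw() {
  background(255);
  if (moving) {
    rectX += 5;  // drift right while the mouse is held
  }
  fill(255, 0, 0);
  rect(rectX, height / 2, 100, 100);
}

void mousePressed() {
  // the rectangle appears (re-enters from the left) and starts moving
  rectX = -100;
  moving = true;
}

void mouseReleased() {
  // releasing the mouse stops the movement
  moving = false;
}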

Below is a list of failed commands I tried using but ended up deleting:

  1. pushMatrix(): as the Processing reference says, “Pushes the current transformation matrix onto the matrix stack. Understanding pushMatrix() and popMatrix() requires understanding the concept of a matrix stack.” I did not particularly understand the basics behind the matrix stack, so I was stuck and eventually deleted it after finding help.
  2. keyPressed(): I couldn’t quite figure out how to incorporate this with mousePressed() at the same time, so I ended up abandoning it because I liked the clicking of the mouse better.

My code for the moving colored rectangle is as follows:

A video of my sketch is as follows:

The code for my recitation homework is as follows:

float X;
float Y;
int speed = 8;
int d = 5;
float c;

void setup() {
  frameRate(50);
  size(600, 600);
  colorMode(HSB);
  X = width * .5;
  Y = height * .5;
  noStroke();
  smooth();
}

void draw() {
  background(255);
  if (c >= 255) c = 0;
  else c++;
  stroke(c, 255, 255);
  strokeWeight(30);
  noFill();

  ellipse(X, Y, d, d);

  d = d + speed;

  if (d > width || d < 10) {
    speed = -speed;
  }
}

A video of my homework can also be viewed below:

*Part of the inspiration for this code came from (https://gist.github.com/AHicks/5202330), and a lot of the help came from teaching assistant Eszter Vigh.

Preparatory Research and Analysis by Alexander Cleveland

A.

When I was finally able to visit the Chronus exhibit, I was overwhelmed by how much the artists did with so little material. In class I sometimes feel limited because of the small motors and Arduino boards we work with, and during the mid-term project I had absolutely no idea what to try and accomplish. Even though the goals were clearly laid out for us, my partner and I still had a difficult time coming up with a concept owing to the small materials we had on hand. I really enjoyed watching Autonomous System, the large house-like device propped up on wooden stilts (Ralf Baecker). Within the house, there was a large base-like structure attached to strings which extended out onto the outer shell of the house. Also attached to these strings were small weights. It worked like a pulley system where the weights would be raised and lowered at certain intervals programmed into the servo motors. It was amazing to me to see how much can be done with so little available to the artists. Projects like these serve as a great inspiration for my future work in Interaction Lab and beyond.

Compared to a normal art exhibit, I found a few differences that were far from subtle. When I step into a traditional art gallery, I am typically looking at the walls where the paintings are hung. But in the Chronus exhibition, the art was all around and in between us. The first project, Artificial Intuition, is all around you as you walk into the exhibition (Zhang Hua). The sensors are above and beside you so that the hands of the project can move and block you as you try to enter the exhibition. To me, this is the coolest aspect of interactive art exhibitions. So many famous museums around the world are centered around having their artwork displayed on a wall to maintain the look and feel of tradition. Or, if a piece is displayed in the middle of the room, like a sculpture, it is on a pedestal or a stand. In the Chronus exhibition, the art is all around you. It makes for an interactive and engaging experience that I enjoy far more than a traditional art museum. Some pictures from the Chronus exhibition can be viewed below.

“Artificial Intuition”

“Autonomous System”

B.

In the first interactive project I researched, Musical Bikes, participants ride four stationary bikes, which then trigger a song to play on the connected loudspeakers. Each bike is connected to one part of the song. For example, the far-left bike only plays the beat, while the far-right bike plays the vocals. The bikes do so no matter how fast the rider pedals, as long as they are pedaling. So, when four people are riding the bikes at the same time, the entire song is played at full speed through the loudspeakers. If one person stops, the other three-quarters of the song continue playing without that part (for example, the voice of the singer). Below is a picture of the project next to the Century Avenue metro stop.

“Musical Bikes”

I think this is a successful interactive project because it incorporates a user relationship with the machine. By user relationship, I mean that when the subject uses the machine, the machine reacts and then triggers a song. Going back to my original definition, “Interaction is a continuous conversation between two or more corresponding elements.” This fits along with the bike and song concept because as the subject rides the bike, part of the song plays. The machine reacts to the user riding the bike, thus constituting a conversation between the corresponding elements.

A separate interactive project I researched was Scott Clandinin’s “Interactive Mario Mushroom Block.” In this, there is a box similar to a Mario mystery box attached to a wall on a vertical shaft. When the subject hits the bottom of the box, the box jumps up and opens its roof. A mushroom comes out of the roof, just as it would in the Mario video game. I found this to be a successful version of interaction because it is a conversation between the human hitting the box and the mushroom popping out as a result. It is a textbook conversation between two corresponding elements. I also liked how it played off of a contemporary video game rather than only having the base mechanics with no design. As I’ve researched many projects, I’ve found design to be critical in attracting users. The same goes for the musical bikes; I first found out about them when I was 500 yards away and the music drew me closer.

A project that I think is less successful, but still within the realm of interaction, is “Expanded ID” by Anaisa Franco. Expanded ID is essentially a bench with a fingerprint scanner attached to it. In Franco’s words, “Expanded ID is a parametric interactive public art installation that scans the visitor finger print and transforms its unique shape in a generative animation, which pulses colorful 3D blocks of the user’s unique finger prints shapes.” (Anaisa Franco) Once the reader is done scanning the fingerprint, it projects the fingerprint in front of the bench. It does so in an array of colors and designs, illustrating the beauty and uniqueness of a fingerprint. Although this is technically interaction, it is not necessarily a conversation between two elements. Put simply, the bench just projects the subject’s fingerprint; there is not much more to it. While it is an interesting concept for an art installation, I don’t think it encapsulates all of the necessary qualities of interaction.

C.

“Interaction is a continuous conversation between two or more corresponding elements.” I think the base of this definition has held true throughout my learning, but the qualities of a good interactive project do not necessarily have to include a continuous element. Continuous, to me, would mean the project is constantly on while the user is using it. Take the Rain Room, for example: the person is walking around and the sensors use this data to turn off the water wherever the subject walks. This is a continuous project. Most projects I see don’t contain this continuous element but are still interactive. I think for it to be a sound project, it has to be a reactionary relationship between two or more elements. Just as Chris Crawford argues, interaction is a “cyclic process in which two actors alternately listen, think, and speak” (Crawford 5). For me, this element of a conversation is essential in interaction. When one person speaks, the other responds; this is how people converse. So in a project, when a subject walks, for example, the motion sensor sees that and reacts accordingly. Interaction is reactionary in the sense that the project senses the user and reacts based on the different inputs. The output depends on the type of input received by the project.

The biking interactive exhibition in point B contains a correspondence between two or more parties involved. One person can ride the bike and create a fourth of the song, but in order for it to be successful, all four people need to be riding the bikes. When this interaction works, people are engaged with the project and enjoy the end result. I think this relates to the overall theme of interaction, where people want to enjoy an end result. The interaction needs to be useful to the world, interesting, fun, or otherwise rewarding. On a deeper level, the bike project encourages exercise with fun music as the motivation. Expanded ID creates something cool, but not exactly useful to the world; the overall concept is abstracted from people’s basic necessities. As I define it now, and for the purpose of my final project, interaction is a reactionary relationship or conversation between two or more corresponding elements. To make an interactive project work, it needs to be a conversation wherein the project reacts to the subject. Whether it is programmed to react a certain way or not, it still needs to sense that the subject or element is present in order to begin. A conversation always starts with one element or person, and in this case, it needs to end with a result from the machine.

Works Cited

http://www.anaisafranco.com/expanded-id

Ralf Baecker, Autonomous System (Chronus Art Exhibition)

Zhang Hua, Artificial Intuition (Chronus Exhibition)

https://www.lacma.org/art/exhibition/rain-room (rain room)

https://dix2.com/#&gid=1&pid=10 (Musical Bikes)

Crawford, Chris. The Art of Interactive Design: A Euphonious and Illuminating Guide to Building Successful Software. San Francisco: No Starch Press, 2003.

Hit That Like Button – Alexander Cleveland – Marcela

Alexander Cleveland

Context and Significance
When taking into account my past group project, I thought mostly about the relationship between the user and the machine: what was the end result when the user interacted with the machine? In click wall, the audience would tap a panel on the wall and it would change color as a result. Thus, someone could eventually use this to create a piece of artwork, such as a silly design, by clicking the panels in a certain pattern. It had the potential to be used as an artistic medium, but also just for fun and games. This kind of relationship played into my group’s definition: “Interaction is a continuous conversation between two or more corresponding elements.” We also analyzed fire wall, which was similar to click wall in the sense that one was physically touching a wall. The fire wall instead played music and rippled whenever it was touched. It wasn’t as black and white as the interactive relationship in click wall. In fire wall, there was room for improvisation from the wall depending on where one touched and on the amount of pressure applied. Different pressures corresponded with different values of sound and physical reaction from the wall. I think fire wall significantly impacted our design because of the different reactive values it produced.

Our project involved hitting a 3D-printed “like” button, which would trigger three different sound and visual values based on how hard one hits it. We taped the button to the vibration sensor, which read the three different value ranges. I think what my partner and I created was similar to a carnival game where one hits a target with a hammer and, if they hit it hard enough, rings the bell at the top. Ours is different in the sense that even if one does not hit the highest value, there is still feedback from the red and yellow flashing lights along with the negative music. The feedback also came through a separate like button which was taped to a servo motor. The button would point thumbs down, in between, or thumbs up depending on how hard the target was hit. The carnival game only gives positive feedback when hit at the hardest value. Our product is a game designed for all ages to test their strength, although I think it is tailored to a younger audience.
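The three-range idea, sketched here in Processing rather than in the Arduino code we actually ran (the threshold numbers are made up for illustration), looks roughly like this:

int hitStrength = 0;  // stand-in for the vibration sensor reading (0–1023)

void setup() {
  size(300, 300);
}

void draw() {
  background(255);
  // hypothetical thresholds splitting the reading into three feedback levels
  if (hitStrength > 700) {
    fill(0, 200, 0);   // green: thumbs up, happy music
  } else if (hitStrength > 300) {
    fill(255, 200, 0); // yellow: in-between feedback
  } else {
    fill(200, 0, 0);   // red: thumbs down, negative music
  }
  ellipse(width / 2, height / 2, 100, 100);
}

void mousePressed() {
  // simulate a hit of random strength in place of the real sensor
  hitStrength = int(random(1024));
}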

Conception and Design
For the initial building materials, my partner and I used cardboard to encase the Arduino board, speaker, and breadboards. We also used cardboard for the hitting target and drew a bullseye to indicate where the participant should hit with their fist. The initial design of the hitting board on the left and the casing on the right can be viewed below.

But in the final version we used a 3D-printed plastic "like" button as our target because it was sturdier than cardboard. We thought about using rubber as a target, but the sensor wasn't reading the correct values, so we had to use a thinner material. We also used laser-cut wood for the final casing of the Arduino board to give it a more professional look and hide the wires.

The original cardboard encasing for the speaker muffled the sound too much, so we decided to build a small shelf on the outside of the wood case for the speaker. The speaker is on the left of the image below.

This was so the speaker could easily be attached to the Arduino and breadboard and still be on the outside to project quality sound. Originally, we also had a 3D-printed like button attached to a servo motor hanging off the casing. But we wanted a more independent look, separate from the casing, so we built a small tower with the like button and motor on top. This created a more significant and recognizable look for the like button.

Fabrication and Production
At first, my partner Kevin and I used a cardboard circle as our target to hit, but through user testing we both saw that it was quite uncomfortable for most people to pound on. It was also so big that many people missed the tiny sensor, and it fell apart quickly. A video of people hitting the board can be viewed below.

As is visible, the cardboard target was unusable because it was too big a target for people to hit the tiny sensor in the middle. The participant in the video could not even register an LED signal. So, we 3D printed a thin "like button" and used that as a target. The like button, made from plastic, held up better than the cardboard bullseye, and also presented a smaller target, which led to more accurate hits from participants. It was also more accurate in reading the vibrations than the thicker cardboard. Another piece of feedback we received was to provide better visual LED feedback. Initially, during testing, we only had the LEDs connected to the breadboard. Many people didn't understand the results because the LEDs were not visible in tandem with the hitting board. As a result, we created a tower in which the three LED lights were arranged vertically like a stop light. Above the lights was the other 3D-printed thumb connected to a servo. Thus, one could see two visual results right next to each other. An image of the tower and the final casing for the box can be viewed below.

Many people also criticized our speaker placement because it was almost impossible to hear the music when the speaker was covered. At first, we poked holes in the cardboard for sound to escape, but this was not enough. So we created a shelf on the outside of the final wooden casing for the speaker to rest on. This ensured clarity in the music that would play. Many of the participants were also confused about where to hit. Because of our title, "Hit That Like Button," many thought they should hit the like button on the servo motor. So instead of using a bullseye target as our hitting board, we eventually replaced it with another like button. A picture of our initial sketch design can be viewed below.

Conclusion
The goal of our project was to create an interactive relationship in which one tests their strength on a three-point scale of color and sound. This relationship dealt with a participant hitting the sensor and receiving negative, moderate, or positive feedback from the speaker, LEDs, and 3D-printed thumb. I think my results accurately reflect my original definition that "interaction is a continuous conversation between two or more corresponding elements." The conversation is between the user and the vibration sensor. By using the physical hardware as a translator of energy, the conversation is continued through to the lights, speaker, and motor. That said, I think the project disagrees with my own definition in the sense that it is not necessarily a "continuous" conversation. My audience interacted with it how we predicted, hitting the thumb over and over again until the lights went green and the Star Wars cantina music played. The more the audience played the game, the more they became enthralled with the fun of hitting it over and over again. The game creates a necessity for achieving the highest goal possible.

I think our biggest setback was figuring out which material would work best with the vibration sensor. We had to find the perfect medium which wouldn't hurt the participant, but would also be thin enough to correctly read the vibration values. This was difficult, but a 3D-printed plastic thumb worked well in the end. I think people should care about this project because we designed a different method of motivation from traditional carnival games. The hammer-and-bell game I described only rewards the participant if they reach the top prize. But our game motivates players to keep hitting it as hard as they can. If there were no red or yellow LED lights, then there wouldn't be nearly as much interest in continuing the game. But because we rated the game on a scale, it presented an addictive factor which kept the player engaged throughout the whole game. Below is the complete code for the project and a video showcasing the finished product.

Music Notes


LED Code

Speaker Code


The Final Product

Sources

https://create.arduino.cc/projecthub/natthakit-kim-kang/click-canvas-an-interactive-wall-04332c?ref=tag&ref_id=interactive&offset=0 

http://aaron-sherwood.com/works/firewall/