Game-box of Altering Elements – Turn Down For What – Serene Fan – Rudi

INDIVIDUAL REFLECTION

CONTEXT AND SIGNIFICANCE

When my partner and I were making a specific plan for our project, we referred to the reading “The Art of Interactive Design”. As it argues, an interactive design should involve input, processing, and output for every participant. This told us that our design should stimulate a conversation between the audience and our project: both sides engage in the cycle of “input, process, and output”, just as in a conversation. The artist team Time’s Up also offered me inspiration. Their artwork Gravitron aims to “have people consider ‘their dependence upon biomechanics, control, perception’ and cultural and technological structures that attempt to regulate them”. The thinking behind their design resonated strongly with our intention to embed thought-provoking cues in our project, and with our attempt to have people reflect on external influences they have long been experiencing but ignoring.

I shaped my definition of interaction from these inspirations: to be interactive is to be capable of conveying a message while receiving one at the same time. Another inspiration was a video (https://m.weibo.cn/5984336074/4311411788665983) edited by a Chinese vlogger named Yue Jing (井越). In his video, he first displays three separate scenes: a breakfast store, two elderly men playing traditional music in a park, and a night club. He then mixes them up by swapping the background music between the scenes. The prototype of our project was triangulated between his video and Time’s Up’s Gravitron: a stage on which people can freely rearrange elements to create interesting, hilarious, or even weird combinations. Expanding on that, we planned to build a physical product to stress the interactive function and amplify the effect of mixing up the elements. We also made the elements of music, light, and characters more independent from one another so the message would be clearer. When the box is presented to the audience, it conveys the message that they are free to organize the elements provided. After they build up a scene, they take on the role of conveying a message in turn: their expectation of how these elements should be arranged. A conversation then exists between the game-box and the audience.

The purpose of our project is to extract the basic elements from a specific scene so that the audience can reorganize them however they want. Music represents far more than the feeling it brings about, and the same is true of the other elements, such as the color of the light and the movements of the people around you. These elements are always connected to specific scenes in people’s daily lives. For instance, the jingle in FamilyMart has become a symbolic piece of music. Similarly, intense music reminds people of clubbing, while certain Chinese traditional music recalls parks in China, which are always filled with elderly people playing erhu in the morning. This stereotyping of everyday elements is what we are interested in. We let the audience experience the process of choosing and combining the elements provided to build up a scene. Through this, we draw their attention to their own stereotypes about which elements “belong” in daily scenes. Our project then provides an opportunity for the audience to explore unusual combinations of different elements, so they may gain some inspiration for dealing with the implanted stereotypes about the objects and people they come across in daily life.

CONCEPTION AND DESIGN

To give users as much freedom as possible, we set aside our earlier plan to pack all the elements of one scene together and trigger them with a single pushbutton. We realized that such a product might act as a superior informer rather than an equal participant in the interaction. The video about Norman doors also taught us to make at least the explicit purpose of our product clear to the audience: for example, the pushbuttons should be obvious, so the audience knows what to do with the product. One specific change we made concerned the position of the buttons. They were originally on the front of the box; to make them easier to push and to let the audience focus on the stage rather than the buttons, we moved them to the two sides of the box. I chose pushbuttons to activate all the circuits, a decision highly influenced by the Norman doors video. I applied a similar design criterion: make it as clear as possible. Sensors could have been another way to activate the circuits, but they would have distracted the audience from the purpose of the product. More importantly, I wanted to give the audience the sense of turning every part of the scene on and off, making it more obvious that they were building a scene by adding or removing elements.

FABRICATION AND PRODUCTION

Our production process went through several significant steps.

We first planned to make the character orbit by using gears. However, the motor was too weak to drive the gears, so we had to connect the motor directly to the character.

During the User Testing Session, we accepted many suggestions. The most useful suggestion we got was to change the position of our buttons to make them easier to push.

We also got advice on the movements of the characters. Specifically, many users suggested separating the switches for the characters, that is, one button for each character. It meant more work, but we accepted this advice to meet users’ needs; we see providing more buttons as offering more freedom to the users. On the other hand, there was advice we did not adopt, such as raising the LEDs to make them look fancier or adding a backdrop to the stage; although these ideas sounded attractive, they aligned less with our basic purpose. We also modified the design of the stage several times to make it bigger, to hold all the circuits, and to look better.

CONCLUSIONS

The goal of our project is to break a daily scene into several elements and give the audience an opportunity to reorganize them. By doing this, we wanted to see what kinds of stereotypes people hold about music, different characters, and the overall environment. When users were trying our project, some of them combined the two pieces of music we provided by turning the buttons on and off in a certain sequence, which was beyond our expectations and showed the users’ own re-creation. However, as I reflect on the project, I find that we ourselves had already pre-set the stereotypes by providing these particular elements. Perhaps we could give the audience more freedom and let them decide on the elements. Ultimately, there was genuine interaction between our project and the audience. If we had more time, we would try to provide more freedom, perhaps by giving the audience the tools to design the elements themselves. One important lesson I have learned from the setbacks that motivated our modifications is that designers should strike a balance between expressing themselves and putting themselves in the audience’s shoes. The ideal situation is that, once the audience gets interested in the product, they are willing to explore more about it, such as the background story, the hidden meaning, or the potential applications. Also, through the process of building our project, I taught myself basic skills such as editing music and arranging circuits in an orderly way, which I consider essential for future projects. After several rounds of designing and redesigning the circuits, I became more proficient with Arduino and electronics.

When I was planning the project with my partner, I tried to go beyond making a physical thing that would merely meet the professor’s requirements; I tried to assign deeper meaning to our project. We would be glad to see our project stimulate others’ thoughts, even slightly. Our purpose of extracting independent elements from reality and motivating the audience to reorganize them can be generalized as an intention to facilitate re-creation through creation and to draw people’s attention to the stereotypes embedded in daily life.

                                              

Week 07: Fireworks Project with Ruby | Jonathon Haley

Link to Fireworks

For this project, Ruby and I decided to make a fireworks simulator. Click anywhere on the screen, and fireworks will shoot up and explode. When a firework explodes, not only do you see vivid colors and glittering trails, but you also hear one of the fireworks sound samples that we created.

Ruby was in charge of creating and downloading the various sounds we used to build our fireworks sounds. She recorded a number of strange things, such as hitting a whiteboard and crumpling paper bags. Since there were some sounds, such as large explosions, that we could not replicate ourselves, as well as various other noises we decided to include (listen to the sounds closely…), Ruby (and to a lesser extent, I) also found sounds online at freesound.org that served our purposes.

We combined the recorded and downloaded sounds in Audacity, using a combination of trimming, amplification (volume adjustments for individual tracks), and reverb to create four very different-sounding fireworks samples.

I was in charge of the code and scripts for the fireworks website. My initial idea was to have a black screen (like the night sky at a real fireworks show); when you clicked an area of the screen, a GIF of real fireworks would play where you clicked, and at the same time one of our recorded sound samples would play. However, after some Google searching I found a better solution: JavaScript code at https://codepen.io/whqet/pen/Auzch that served our exact needs, creating fireworks on the screen whenever and wherever the mouse is clicked. I added this code to our script.js file. There is a dividing line I marked in the code with /// comment slashes: everything above it is script and functions that I wrote, and everything below it (besides calls to the functions above, plus tweaks to a few numbers) is from the link above. Following the original script writer’s implementation, I also added a <canvas> element to our index.html page, as well as a bit of CSS for the crosshair mouse cursor.
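To give a sense of the overall structure, here is a simplified sketch (not the CodePen code itself) of the skeleton: a full-screen canvas plus a click listener that hands the click coordinates to the firework-creation function. createFirework here is just a stand-in for the function provided by the linked script.

    // Simplified sketch: full-screen canvas with a crosshair cursor.
    const canvas = document.querySelector('canvas');
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;
    canvas.style.cursor = 'crosshair';

    canvas.addEventListener('click', (e) => {
      // Launch from the bottom center toward the point that was clicked;
      // createFirework is a placeholder for the CodePen script's function.
      createFirework(canvas.width / 2, canvas.height, e.clientX, e.clientY);
    });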

A big challenge I faced here was making the audio play in the background: not only did a random audio sample (from among the four we recorded) have to play every time the user created a firework, but it also needed to be panned along the x-axis to match the place the user clicked. After a lot of experimenting with different approaches, and with help from IMA fellow Tristan, I found the solution was to create an AudioContext object and a StereoPannerNode and connect the two. Ruby and I also decided to add a progress bar, which increments every time the user launches a firework. When the progress bar is filled, a burst of fireworks launches to random parts of the screen and explodes, triggering several different audio samples at once. To make this work, I had to partition much of my code into smaller functions that could be called once per firework.
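A rough sketch of the panning approach looks something like the following. It assumes context is an existing AudioContext and sampleBuffers is an array of four decoded AudioBuffers; both names are illustrative rather than the project’s actual code.

    function playSampleAt(x) {
      // Pick one of the four fireworks samples at random.
      const source = context.createBufferSource();
      source.buffer = sampleBuffers[Math.floor(Math.random() * sampleBuffers.length)];

      // Map the click's x position (0 .. window width) onto a pan of -1 .. 1.
      const panner = new StereoPannerNode(context, {
        pan: (x / window.innerWidth) * 2 - 1,
      });

      source.connect(panner).connect(context.destination);
      source.start();
    }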

Side note to the user: every time you fill up the progress bar, the fireworks become slightly thicker, with more particles exploding out of them. It’s not super noticeable (which is why we don’t point it out on the page), but if you fill it up enough times the fireworks become quite large.

The final problem I dealt with (although it came up fairly early in the process) was making the sound play in Chrome: although the visuals worked fine, you wouldn’t hear anything. This, I learned, was due to Chrome’s restrictions on autoplaying audio, as well as restrictions on playing audio from local files (possibly also related to the autoplay restrictions). To get around the first issue, I made the audio load only after the first time the user clicks somewhere on the screen, since Chrome allows audio to play after some sort of user interaction. With IMA fellow Dave’s help, I set up a local host (localhost:8000) and was able to run the website there with no problems. This was a perfect workaround, as it simulated how the website would run after being uploaded to the NAS server and allowed the audio to play as desired.
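A minimal sketch of the “load audio only after the first click” idea is below; the variable names are illustrative. (For the local host, one common option is to run something like python -m http.server 8000 in the project folder, which serves the site at localhost:8000, though any simple static server would do.)

    let context = null;

    document.addEventListener('click', () => {
      if (!context) {
        // First user interaction: now Chrome allows audio, so create the
        // AudioContext and start loading/decoding the samples here.
        context = new AudioContext();
      } else if (context.state === 'suspended') {
        context.resume();
      }
    });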

In the end, the audio (and the rest of the website) works in both Firefox and Chrome (Firefox didn’t even require running a local host!), but not in Safari. In Safari, when I try to instantiate a new AudioContext with the line “let context = new AudioContext();”, I get the error “Can’t find variable: AudioContext”. Apparently the AudioContext class isn’t accessible in Safari, so the audio for the website doesn’t work there (though the visuals work fine). I’m not sure of the exact reasons or workarounds, but it’s not too surprising, since different browsers do treat audio a bit differently (hence Chrome requiring a local host to play audio while Firefox didn’t).
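One workaround that is often suggested, which we did not test, is that older Safari versions expose the same class under a webkit prefix, so falling back to window.webkitAudioContext may make the constructor available:

    // Untested fallback: use the webkit-prefixed constructor if the standard
    // one is missing (as in older Safari versions).
    const AudioContextClass = window.AudioContext || window.webkitAudioContext;
    const context = AudioContextClass ? new AudioContextClass() : null;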

In conclusion, while our project may have been simple in concept, it actually took a great deal of work to imagine, create, and execute the various details – especially recording, finding, and combining different sounds to create the fireworks samples, and creating full audio functionality with panning, multiple simultaneous audio tracks, and cross-browser compatibility. I consider this project a success, and look forward to the next one.

Pooping Baby – Nate Hecimovich – Eric

For our midterm project, Isaac and I decided to construct a “pooping baby”. We did not take much inspiration from the group research project, as our product for that was too far-fetched to give us any practical ideas. Originally, we had wanted to make some form of interactive car that moved in reaction to its environment. Specifically, we wanted to build a motorized cat toy that would react as the cat played with it. However, as time progressed we realized the project was not feasible: we would have had to make multiple motors drive a car around, with an Arduino and wiring hidden away on the inside but sensors on the outside. We concluded that it would be simpler and more reasonable to make a baby that ate and pooped using servos.

We began by building a body for our creation. We settled on using the skeletal frame of an RC car as the primary support and used cardboard to produce the exterior. We then attached cardboard to a servo to replicate a mouth in a perpetual chewing motion.

The code itself was fairly simple, but our next major problem was the sensor. We had intended to use a pressure sensor, placing it inside and triggering the pooping mechanism when “food” landed on it. However, we found that the sensor was not sensitive enough to detect the food landing on it, so we considered an infrared or motion sensor instead. In the end, we decided to move the pressure sensor to the outside of the project and have it triggered manually by squeezing the baby’s lollipop.

After some final decorations, we had a working, pooping, alien-looking baby. In our user testing we were encouraged to use a different form of food and to clarify exactly how to make the baby poop. We decided it was impractical to change the food we used, but we did print a sign clarifying somewhat how to use the baby.

In conclusion, this project fits my definition of interaction in that it requires some form of communication or involvement from the user. The point of the baby is not simply to look at it but to feed it and make it poop. Because it requires human input to produce a reaction, I would consider it interactive. This project expanded my knowledge of laser cutting as well as coding and circuit construction. But in my opinion, the most interesting part was the design and giving the trash baby its appeal.

Week 7 – Audio Project: Comm 42’s Last Message – Milly Cai & Laura Huang

Here is the link:

https://imanas.shanghai.nyu.edu/~yc2966/comm42/index.html

Description:

In this project, Laura and I narrate a story from the little robot Comm 42’s point of view at the end of human history. It covers four important scenes in its life: the day it was finished and turned on for the first time; its first job, child care and housekeeping; its time with its human family; and the end of the human world due to disaster, war, pollution, and resources running out. The name Comm 42 combines “Comm Lab” and “42”, which comes from Douglas Adams’s science fiction novel The Hitchhiker’s Guide to the Galaxy, where 42 is the answer to life, the universe, and everything.

As for the timeline and interaction design, we set the end of the world as the present moment, with Comm 42 about to run out of power and trying to leave its last message. Users can click the turn-on button to wake up Comm 42, listen to its last message, and choose which piece of the story they want to hear. After they have listened to all four records, Comm 42 shuts down amid the noise.

Process:

In this project, I was mainly responsible for the visual assets and web design, while Laura was in charge of editing the audio. Although we divided our duties, we still did much of the work together: we developed the storyline together, performed the humans’ lines in the story, and searched for sound effects together. Laura also helped me a lot with debugging and with editing the sound-wave images.

Since we wanted to make the view more realistic, I created a 3D model of our little Comm 42 and rendered its front view. I also edited the robot’s appearance to make it look rusty and dirty. To complement the audio and enhance the expression of the sound, I added some glitch views, loading messages, and sound waves.

The original model for Comm 42 

Edited Look for Comm 42 

Glitch

 

Showing scenes

All View Assets

 

Screenshot for audio files

Screenshots for the web page: turn on

 

Screenshots for the web page: main page

 

Screenshots for the web page: back & reload

Screenshots for part of the code

   

Post-mortem: 

Because of my weak knowledge of JavaScript, I mainly used the setTimeout function to control the timing of the pictures and animations so that they lined up with the sound. However, this resulted in a great deal of manual work and many weird bugs. As for the audio, since the human voices in the project were performed by Laura and me, neither the actors’ emotion nor the post-editing reached our initial expectations (but it was still a lot of fun to act). In addition, for the robot voice we chose a voice from the iOS system; the problem with it is that it cannot vary its pace and tone to sound more human. For example, when Comm 42 is calling to the kids to watch out, it sounds too natural and calm. Next time, it would probably be better to have a human voice perform the lines first and then add a machine-like effect.
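For reference, the setTimeout-based sequencing described above looks roughly like the sketch below; the element ids, file names, and timings are illustrative, not the project’s real values.

    function playScene(audioEl, frames) {
      audioEl.play();
      // Schedule each visual change at a fixed offset from the start of the audio.
      frames.forEach(({ delay, src }) => {
        setTimeout(() => {
          document.getElementById('screen').src = src;
        }, delay);
      });
    }

    playScene(document.getElementById('scene1-audio'), [
      { delay: 0, src: 'img/loading.gif' },
      { delay: 2000, src: 'img/soundwave.gif' },
      { delay: 15000, src: 'img/glitch.gif' },
    ]);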

Week 07: Audio Project – Laura & Milly

Project name: Comm42’s last message

Link: http://imanas.shanghai.nyu.edu/~yh2330/audioproject/

Description: The audio project presents the last words of a robot called COMM42, which is about to run out of battery. The robot plays back four audio scenes saved on its memory card. The four scenes recall the invention and production of the robot, its experiences with human beings, and, in the end, the extinction of humanity.

Process: Milly and I came up with the idea of the robot together; we both liked the topic and thought it would be fun to create machine sounds and combine them with a robotic voice. We added remarks that we imagined the robot might make while working for human beings. We want to show that if humans don’t protect the earth, modern society may ultimately go extinct, and the only witness to this history may be the memory cards inside robots.

I wrote the general script of the story and edited all the audio sources. Milly was responsible for the visual part in HTML and did most of the coding. When recording the original sound sources, we at first wanted to use Google to read our script, but the voice was not very good. Then we found there were more choices in VoiceOver on the Mac, so we selected the robot voice from there. We chose a more human-like voice because it is clearer and more realistic given current AI technology; the purely robotic voice was too noisy and did not seem to fit the everyday scenes that follow.

I collected the sound sources from the Internet (https://www.freesound.org/browse/), including general environmental background sounds, detail sounds for the human characters, electrical sounds for the robot, and the main robot lines. I used an environmental sound throughout each scene, adjusting only its volume, so that it feels more realistic. I also searched for many detail sounds that match the characters’ words and inserted them into the story, for example the sound of an egg cracking and the sound of kids running and shouting. We ran into problems when recording the robot’s voice from the laptop, as there was some electrical hum from the computer; we adjusted the input level of the Tascam and lowered the computer’s output volume, and it got better. I did further processing in Audacity, adding noise reduction and adjusting volume levels. I also added more pauses to the robot’s speech so that it sounds more natural and comfortable. When combining the robot’s words with the background sound, I adjusted the volume and used the envelope tool to make them more harmonious. I found that the detail sounds should overlap the main soundtrack so that they don’t feel abrupt.

On the HTML page, we use buttons to control the playback of the audio files. We also display a sound-wave GIF while the robot is talking, and we added a back button and a replay button. Once the user has finished listening to all four records, the page jumps to the final scene. We achieve this with an if statement: each of the four scenes has a flag that changes from 0 to 1 once it has been played, and when the sum of the four flags reaches 4 we run the final-scene function. We also added some visual effects, such as loading text and a pattern on the screen, to match the audio.
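The completion check described above can be sketched like this (the flag and function names are illustrative; showFinalScene stands in for whatever our final-scene function is actually called):

    const played = [0, 0, 0, 0];

    function onScenePlayed(index) {
      played[index] = 1;
      // Sum the four flags; once every scene has been heard, run the ending.
      const sum = played.reduce((a, b) => a + b, 0);
      if (sum === 4) {
        showFinalScene(); // placeholder for the project's final-scene function
      }
    }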

Some possible improvements for our project: the volume of the robot is not constant across the four scenes, so we could record the voice in a better environment and keep a constant recording distance, or else adjust it in Audacity so the sound sits at a consistent, comfortable volume. For the HTML part, we had a subtle problem with the sound wave: a few times the GIF displayed at random moments. We tried to debug this problem but failed. We might also figure out better ways to change the images instead of using setTimeout, and improve the code. Overall, Milly and I worked well together, and we achieved our initial goal.