Audio Collage Project (Cecilia Cai)

http://imanas.shanghai.nyu.edu/~zc1151/commLab/AudioProject/intro.html

The above link is to our audio project. This time, I worked with Shirley to create and present the story of a man who lost his glasses while trying to make a pancake. We were initially inspired by cooking videos in which the host says nothing and instead amplifies the sounds of the cooking process. We found watching these videos and listening to the sounds of food very comforting, and we decided to record the sounds of making pancakes for our project.

We then started to think about how this idea could fit into a story, and about how to present the sounds. I recalled Christine, the MasterChef Season 3 winner, who is visually impaired but very talented at cooking. We watched a video of her cooking, wondering whether sound guides her. We realized that, although listening is indeed an important way for her to control her cooking, she mainly relies on touch, smell, and taste, so her example did not quite support our idea. Still, we wanted a context in which the sound itself is highlighted, since people normally do not pay much attention to sound alone. We therefore chose the setting of a nearsighted man who breaks his glasses but still wants to cook a pancake. We intentionally blurred the pictures to suggest that you don't necessarily have to see clearly to enjoy cooking, since the sounds themselves are very comforting.

For the visual images, we drew all the pictures ourselves: I designed the prototypes on paper while Shirley edited them on her iPad. For the audio clips, most are our own recordings, but we also made use of some clips downloaded from http://freesound.org. I edited those free clips with Audacity before using them in our program.

I mainly worked on the coding and audio editing, while Shirley helped set up the structure of the introduction page. This project was a great exploration for me, as I had never worked on audio itself before. I used to be able only to cut and combine audio clips with iMovie, but this time I played around a lot with Audacity, learning how to remove environmental noise, amplify the main part, change the speed and tempo, adjust the pitch and volume, make a sound seem nearer or farther, and so on. I also learned to polish the details, such as smoothing transitions and removing clipping.

I also found the programming part a bit challenging, as I had never worked with audio in JavaScript before. I learned the built-in attributes, events, and functions of audio elements by reading through the professor's class notes and browsing tutorials on Google. A problem I kept running into was that, when I called audio.play(), the console returned either an "Uncaught (in promise)" exception or an async error, although it occasionally worked. I looked it up and realized that this happens when the audio is set to play on load, before it is fully ready, and is a consequence of the browser's autoplay policy. To solve the problem, I changed the triggers so that the audio files play when the user clicks a button or image, instead of calling audio.play() directly in onload or mouseover handlers.
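A minimal sketch of that fix, not our actual project code: wrap play() so the rejected promise from the browser's autoplay policy is caught, and trigger it from a click. The names safePlay, startButton, and pancakeAudio are placeholders I made up here.

```javascript
// Wrap play() so an autoplay rejection doesn't surface as
// "Uncaught (in promise)" in the console.
function safePlay(audio) {
  var result = audio.play();
  if (result !== undefined) {
    // Modern browsers return a promise from play(); catch the rejection.
    result.catch(function (err) {
      console.log("play() was blocked, waiting for a user gesture:", err.name);
    });
  }
}

// Trigger playback from a click rather than onload/onmouseover:
// document.getElementById("startButton").onclick = function () {
//   safePlay(document.getElementById("pancakeAudio"));
// };
```

Because the call now happens inside a click handler, the browser treats it as a user gesture and allows playback.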

I also learned how to track the current time of an audio file. I initially tried to use audio.currentTime as a condition inside a function, only to find that the value did not keep updating. I realized that this is because currentTime is read only at the moment you check it; it does not refresh your variable automatically. Searching on Google, I found the ontimeupdate event, which fires repeatedly while the file is playing and lets you read the latest value of currentTime. This event was very helpful for one of the features of our project, as it tracks when to change the image and enabled me to create some simple animations. Besides this event, I also made use of the onended and onplay events to trigger corresponding functions.
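The image-switching idea can be sketched like this, with made-up cue times and file names rather than our real ones: a pure lookup from playback time in seconds to the image that should be showing, driven by ontimeupdate.

```javascript
// Invented cue points for illustration: which image to show from when.
var cues = [
  { time: 0, image: "ingredients.png" },
  { time: 4, image: "mixing.png" },
  { time: 9, image: "flipping.png" }
];

// Return the image whose cue time has most recently been passed.
function imageForTime(currentTime) {
  var current = cues[0].image;
  for (var i = 0; i < cues.length; i++) {
    if (currentTime >= cues[i].time) {
      current = cues[i].image;
    }
  }
  return current;
}

// Browser wiring: ontimeupdate fires repeatedly during playback,
// so the picture stays in sync with audio.currentTime.
// audio.ontimeupdate = function () {
//   document.getElementById("scene").src = imageForTime(audio.currentTime);
// };
```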

One of the functions I used most frequently in the code is setTimeout. I also used setInterval in a few places to create time-based animations. By delaying the calls to certain functions, I was able to make the audio play at the proper times, in sync with the changing images.
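As an illustration of the scheduling (the cue times and the showScene/nextSteamFrame names are hypothetical, not from our code): convert cue times in seconds into millisecond delays, then hand them to setTimeout, while setInterval handles anything that repeats.

```javascript
// Convert cue times in seconds into millisecond delays for setTimeout.
function toDelays(cueSeconds) {
  return cueSeconds.map(function (t) {
    return t * 1000;
  });
}

// Browser wiring: schedule each scene change relative to pressing start.
// toDelays([0, 3.5, 8]).forEach(function (delay, i) {
//   setTimeout(function () { showScene(i); }, delay);
// });

// setInterval repeats instead of firing once, e.g. for a looping animation:
// var steam = setInterval(nextSteamFrame, 200); // advance a frame every 200 ms
// ...and clearInterval(steam) stops it.
```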

Another problem I ran into involved the opening picture, which displays every ingredient needed for making a pancake. I originally tracked the mouse position in pixels to decide which object the user was clicking on, and triggered different functions accordingly. I later realized that when the window is enlarged or shrunk, the position of each object relative to the window changes, so the original pixel values I had set no longer matched anything. I then changed the tracked mouse position from pixels to percentages, and the tracking worked, because the position of a point always changes in proportion to the size of the window.
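The percentage trick can be sketched as follows; the "egg" hit region here is an invented example, not our project's real layout. A click position in pixels is divided by the image size, so the same region test works at any window size.

```javascript
// Convert a click position in pixels into percentages of the image size.
function toPercent(clickX, clickY, width, height) {
  return {
    x: (clickX / width) * 100,
    y: (clickY / height) * 100
  };
}

// Hypothetical hit test: say the egg sits in the top-left 20% x 30%.
function hitIngredient(pct) {
  if (pct.x < 20 && pct.y < 30) {
    return "egg";
  }
  return null;
}

// Browser wiring:
// image.onclick = function (e) {
//   var pct = toPercent(e.offsetX, e.offsetY,
//                       image.clientWidth, image.clientHeight);
//   var hit = hitIngredient(pct); // same region at any window size
// };
```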

Lastly, one of the CSS tricks I used came from a YouTube tutorial video. Adapting its code, I created the shape of an eye at the beginning of the story: it blinks when the mouse hovers over it, and plays a cartoonish blinking sound when clicked. For the animation, I created a span inside a div and used the :hover pseudo-class.

I really enjoyed working on this project, as I learned a lot about sound and gained plenty of experience with audio. The cooperation between my partner and me was smooth overall: Shirley mainly collected the audio and image materials, while I took responsibility for editing them, putting them together, and presenting them as a whole.
