The project can be found here
The code for this project can be found in this GitHub repo
Project Idea:
The idea (which Harry came up with, and which we refined together afterwards) was to create an audio-visual representation of the popular thought experiment:
“If a tree falls in a forest and no one is around to hear it, does it make a sound?”
As with other thought experiments such as Schrödinger’s Cat or Maxwell’s Demon, there is no clear way to build a prototype that demonstrates how the experiment actually plays out. We instead attempted an artistic interpretation of the experiment: a series of interactive pictures and sounds that tries to highlight the phenomenon the experiment describes.
The user initially hears and sees a tree being chopped down, and as they “move away”, they are flooded with more and more ambient sound and, eventually, the sound of a tree falling. The question is: was it the same tree the user saw at the onset?
Implementation:
The implementation was fairly straightforward: we have two main HTML pages, the title screen and the page with the images and the actual sound. The second page has a zoom button that serves multiple purposes (a sketch of the handler follows the list):
1) Reduces the size of the image with each click.
2) Swaps the image out for the next one once it reaches a certain size.
3) Reduces the volume of the sound with each click, reinforcing the sense that the surroundings are receding as you keep clicking.
The idea is that the closer you are to the tree, the more you hear sounds related to the tree and the less you hear of the ambience; as you zoom out, the ambient sound becomes more and more prevalent. Ultimately you hear a tree fall, but that sound is soon overshadowed by the sounds of the ambient surroundings.
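As a rough illustration, the click handler might look something like the sketch below. The element IDs, the 0.9 scale factor, and the `nextScene` helper are all assumptions made for illustration (the helper is stubbed here and sketched in full under Workload Split-Up); as noted later, the actual zoom method was hand-rolled and differs in its details.

```javascript
// A minimal sketch, assuming hypothetical element IDs ("zoom", "scene",
// "tree-audio", "ambient-audio") and a simple CSS scale factor.
const scene = document.getElementById("scene");
const treeAudio = document.getElementById("tree-audio");
const ambientAudio = document.getElementById("ambient-audio");

let scale = 1.0;
const SWAP_THRESHOLD = 0.4; // size at which the next image is swapped in

function nextScene() { /* swaps image and audio; sketched under Workload Split-Up */ }

document.getElementById("zoom").addEventListener("click", () => {
  // 1) shrink the image a little on every click
  scale *= 0.9;
  scene.style.transform = `scale(${scale})`;

  // 3) crossfade: tree sounds fade out, ambience fades in
  treeAudio.volume = Math.max(0, treeAudio.volume - 0.1);
  ambientAudio.volume = Math.min(1, ambientAudio.volume + 0.1);

  // 2) once small enough, swap in the next image and reset the scale
  if (scale <= SWAP_THRESHOLD) {
    scale = 1.0;
    scene.style.transform = "scale(1)";
    nextScene();
  }
});
```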
Another feature is the hover behaviour on the images: hovering swaps both the image and the soundtrack for their distorted counterparts. This adds an extra dimension to the project, letting us experiment with altered forms of the same piece that convey a different feel and emotion and make for a totally different experience. A possible implementation is sketched below.
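One possible shape for this, assuming each asset has a pre-rendered distorted counterpart (the IDs and file names below are made up, not the project’s actual assets):

```javascript
// A sketch of the on-hover distortion; "scene-distorted.jpg" and
// "track-distorted.mp3" are placeholder names for the distorted variants.
const scene = document.getElementById("scene");
const track = document.getElementById("tree-audio");

function distort() {
  scene.dataset.clean = scene.src;   // remember the clean versions
  track.dataset.clean = track.src;
  scene.src = "scene-distorted.jpg";
  track.src = "track-distorted.mp3";
  track.play();                      // playback restarts from the top here
}

function undistort() {
  scene.src = scene.dataset.clean;   // restore the clean versions
  track.src = track.dataset.clean;
  track.play();
}

scene.addEventListener("mouseenter", distort);
scene.addEventListener("mouseleave", undistort);
```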
Workload Split-Up:
With regards to splitting up the workload, Harry came up with the initial concept and we refined the implementation together. I was in charge of gathering the audio sources, which included several open-source tracks. I then imported these tracks into Audacity and mixed them: cutting certain portions out, adding effects such as distortion, and manually adjusting the amplitude and echo to suit the purposes of the project. Each track in the project (with the exception of the last one) mixes at least two different tracks into a cohesive piece that fits the setting of the scenario.
I also worked on some of the core JavaScript features, including the zoom on the image as well as the distortion on hover.
The zoom feature was quite troublesome to configure since I couldn’t find a proper standardized way to do it, so I went with a method I devised myself. It seemed to work for the most part, so we kept it. The distort and undistort functions worked perfectly smoothly, which was a great plus.
Making the audio and the images switch out was fairly easy: two parallel arrays and a bit of modulo arithmetic make the counter loop through the arrays in a circular manner, as in the sketch below.
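Under the same assumptions as the earlier sketches, that circular swap might look like this (the file names are placeholders):

```javascript
// Two parallel arrays and a modulo counter keep image and sound in step.
const images = ["close.jpg", "mid.jpg", "far.jpg", "horizon.jpg"];
const sounds = ["chop.mp3", "birds.mp3", "wind.mp3", "fall.mp3"];
let index = 0;

function nextScene() {
  index = (index + 1) % images.length;  // wraps back to 0 at the end
  document.getElementById("scene").src = images[index];
  const track = document.getElementById("tree-audio");
  track.src = sounds[index];
  track.play();
}
```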
Reflections:
Harry and I made it a point to keep this project as concept-driven as possible. We drew out the plans on paper first and translated the concepts to code as closely as possible. As a work partner, Harry was absolutely wonderful, and put a lot of thought and effort into making sure that the concept shone throughout the entire project. All in all, we are quite satisfied with the overall outcome.