Unity Exercise 2_Audio Visualization and Group Presentation 2
This time I used Unity to work on some audio visualization. Our final project's idea revolves around a 'corridor', and the style is closer to a fantasy-like one. Based on that, I think finding a way to link audio sources with visual patterns is a good starting point. Following a tutorial I found online, in this first exercise I divide the frequency spectrum into 512 bands, each represented by one cube. By reading the data contained in the audio file, each cube's height changes according to the corresponding frequency at that moment. The main takeaway here is learning how to read the audio data and then transform it into other controlling variables.
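For reference, here is a minimal sketch of that first exercise, assuming Unity's AudioSource.GetSpectrumData API (which the tutorial-style setup relies on). The field names like `cubes` and `heightScale` are my own placeholders, and you would assign 512 pre-created cube Transforms in the Inspector; this is a sketch of the idea, not the exact tutorial code.

```csharp
using UnityEngine;

// Sketch: one cube per frequency band, cube height follows that band's amplitude.
[RequireComponent(typeof(AudioSource))]
public class SpectrumCubes : MonoBehaviour
{
    public Transform[] cubes = new Transform[512]; // one cube per frequency band
    public float heightScale = 50f;                // exaggerates the small raw spectrum values

    private AudioSource audioSource;
    private readonly float[] spectrum = new float[512]; // size must be a power of two

    void Start()
    {
        audioSource = GetComponent<AudioSource>();
    }

    void Update()
    {
        // Fill `spectrum` with this frame's frequency data from channel 0.
        audioSource.GetSpectrumData(spectrum, 0, FFTWindow.Blackman);

        // Map each band's amplitude to the matching cube's height.
        for (int i = 0; i < cubes.Length; i++)
        {
            Vector3 scale = cubes[i].localScale;
            scale.y = Mathf.Max(0.1f, spectrum[i] * heightScale);
            cubes[i].localScale = scale;
        }
    }
}
```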
In my second exercise, I take it a step further. Here I use the data extracted from the audio source to control the intensity of four different spotlights as well as the heights of eight cubes. I also add a rotation function to the camera and let it move around the scene. Though I didn't split the view into two screens to create the 3-D effect (I reviewed that content; I just didn't want to add it to this exercise), I find the scene beautiful and its movement coherent.
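A rough sketch of that second exercise follows: the 512 spectrum samples are averaged into eight bands, the cubes and spotlights follow those bands, and a pivot slowly rotates the camera. The band grouping and the `intensityScale`, `heightScale`, and `orbitSpeed` values are assumptions for illustration, not the exact settings from my scene.

```csharp
using UnityEngine;

// Sketch: audio spectrum drives 4 spotlight intensities and 8 cube heights,
// while the camera pivot slowly orbits the scene.
[RequireComponent(typeof(AudioSource))]
public class AudioDrivenScene : MonoBehaviour
{
    public Light[] spotlights = new Light[4];    // four spotlights around the scene
    public Transform[] cubes = new Transform[8]; // eight cubes, one per band group
    public Transform cameraPivot;                // camera is parented to this pivot
    public float intensityScale = 20f;
    public float heightScale = 40f;
    public float orbitSpeed = 10f;               // degrees per second

    private AudioSource audioSource;
    private readonly float[] spectrum = new float[512];
    private readonly float[] bands = new float[8];

    void Start() => audioSource = GetComponent<AudioSource>();

    void Update()
    {
        audioSource.GetSpectrumData(spectrum, 0, FFTWindow.Blackman);

        // Average 64 raw samples into each of the 8 bands.
        for (int b = 0; b < 8; b++)
        {
            float sum = 0f;
            for (int i = 0; i < 64; i++) sum += spectrum[b * 64 + i];
            bands[b] = sum / 64f;
        }

        // Cubes follow their band; each spotlight follows every second band.
        for (int i = 0; i < cubes.Length; i++)
        {
            Vector3 s = cubes[i].localScale;
            s.y = Mathf.Max(0.1f, bands[i] * heightScale);
            cubes[i].localScale = s;
        }
        for (int i = 0; i < spotlights.Length; i++)
            spotlights[i].intensity = bands[i * 2] * intensityScale;

        // Slow orbit so the camera keeps moving around the scene.
        cameraPivot.Rotate(Vector3.up, orbitSpeed * Time.deltaTime);
    }
}
```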
The second part is about the group presentation. This time, we dove into more detail about the whole story and developed a storyboard frame by frame, sequence by sequence. Our idea is that a clone survival experiment is being conducted. Our main character, who is a clone, travels between different scenes, trying to escape the experiment safely. There are some unnatural elements and scenes throughout the story, hinting that this is an experiment conducted by others rather than the real world. In each scene, he finds a dead body. As the story develops, he gradually discovers the truth behind the experiment.
Here is the link to the storyboard we discussed; you can find more details about the story in it: https://docs.google.com/presentation/d/1VqHMKAKEUIv-1gRFhGCOqPtvrDj8J3qZRcRsvSzq9fI/edit#slide=id.p
————————————————————————————————
There are also some connections to my other course, 'Responsive Environment'. I think this is a good starting point for examining how different inputs can be manipulated to control visual content. In the 'stress-relieving' final project, we could use similar techniques to let the stress level, which is calculated from brainwave frequency, heartbeat, and other biological data, control multiple outputs at the same time to make the whole experience more immersive. Plus, this is what we built following the tutorial in class, and I find it quite inspiring for how the VFX system can be used to create a cool effect.
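As a purely hypothetical sketch of that 'one stress value drives everything' idea: a single normalized stress level could be mapped to light intensity, particle emission, and ambient sound at once. None of these names come from an existing implementation, and `stressLevel` here is a placeholder for whatever biosignal pipeline we end up building.

```csharp
using UnityEngine;

// Hypothetical sketch: one normalized stress value drives several outputs together.
public class StressDrivenOutputs : MonoBehaviour
{
    public Light ambientLight;
    public ParticleSystem particles;
    public AudioSource ambience;

    [Range(0f, 1f)] public float stressLevel; // placeholder; normally fed from biosignal data

    void Update()
    {
        float s = Mathf.Clamp01(stressLevel);

        // Map the single stress value to several outputs at once,
        // so light, particles, and sound all respond together.
        ambientLight.intensity = Mathf.Lerp(0.2f, 2f, s);

        var emission = particles.emission;
        emission.rateOverTime = Mathf.Lerp(5f, 100f, s);

        ambience.pitch = Mathf.Lerp(0.9f, 1.3f, s);
    }
}
```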