Final Project for RAPS

Title
Kaspar

Project Description
The name of my project, Kaspar, comes from Kaspar Hauser, a mysterious German youth who claimed to have spent most of his life in a dark cell, completely isolated from society, and who therefore had acquired almost no knowledge of any language by the time he was found on a street at the age of sixteen. Although the myth of Kaspar remains unresolved and is often said to be a fraud, it has inspired numerous works in literature and theater, including Peter Handke’s play Kaspar, which directly inspired the concept behind my project. Handke’s play opens with the lonely presence of Kaspar himself and develops through Kaspar’s interaction with abstract human voices that are meant to instruct him in speaking like a normal social being. Throughout the play, Kaspar struggles painstakingly to pronounce words, to grasp their meanings, and to pair the sounds of language with those meanings. Once Kaspar finally learns to speak, he dies.

Handke’s play is a critique of language’s powerful constraint on human expression and of the conventional way of thinking that follows the inner logic of language. The concrete subject of his critique is post-war Germany, where the adverse impact of the war persisted even after its end in the form of the spoken German language: there was a lack of words to appropriately address people and events, or to express the post-war trauma. I relate deeply to this inadequacy of language, a subject that has haunted most of my time in college. While language is broadly regarded as the most straightforward and efficient medium for communication, there are thoughts and emotions one cannot express through spoken words; at the same time, the supposed omnipotence of language may very well be a constraint on thought.

After all, language is nothing but the pairing of sounds, or signs, with meanings, yet it is respected as a useful tool that bonds almost all of us together. Most of the people I know, including myself, are much too used to treating language as something already developed and assigned, and therefore ignore the true nature of written and spoken words. The most important knowledge I acquired from RAPS is the realization that sounds and visuals are themselves effective and affective languages. I wanted to make my final project a tribute to this realization, and I later came up with the idea of deconstructing and alienating familiar language through software manipulation.

Perspective and Context
Live audiovisual performances are so rich in possibilities that performers can focus either on generating an immersive sensory experience or on creating a narrative through storytelling. While the ideas of live cinema, VJing, and live audiovisual performance according to Gabriel Menotti, Eva Fischer, and Ana Carvalho all focus on possible approaches to the representation of visual materials, I see my project as an experiment in which the theory of visual representation is applied to the performance of sounds. My live manipulation of the visuals during the performance responds to the practice of VJing as improvisation responsive to music, while the arrangement and deconstruction of sounds is closer to a narration than to the abstract experience of live cinema. Moreover, my project incorporates an improvisation with sounds comparable to a jam session with visual materials. The implementation is all about the subtle balance between representing a prepared idea using preexisting materials and creating new meanings by adding live manipulation to the performance.

Development & Technical Implementation
The execution of the project started with me working in a DAW to create the intro track. Rather than structured music, I wanted the intro to be an assemblage of different sounds, including the sound of broken glass (implying how I hear the deconstruction of things in a subtle manner…), scattered drum beats, and a ground layer. Further into the intro come distorted sound fragments taken from a recording of my own reading, which is also played later in the performance: I pitched, stretched, echoed, or reversed three samples of me reading the words “speak”, “talk”, and “walk” to create an effect of alienation and an eerie metaphor for machines learning to speak. The intro ends with a dramatic pitch-up of the ground-layer sound and a final drop of percussion. Since the intro is pre-recorded, I ran it directly through the mixer in Max without any extra sound effects; the same goes for the background of my reading section and the reversed orchestral sample. Besides the stereo output connected after the mixer module, which held most of the sounds used in the project (the pre-recorded materials and the SAMPLR modules I used for recording, playback, chopping, and pitching), I had one extra output for the microphone input running through flanger, chorus, and reverb effects, which kept the patch simpler to manipulate. I also attached effect modules after two of the samplers, which I triggered only toward the end of the performance. Finally, I made the playback speed adjustable for one of the pre-recorded materials so that I could switch between my own reading voice and a pitched-down, machine-like version, or directly show the alteration of the sound to the listeners.
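As a rough illustration of this routing only: the actual patch was built in Max/MSP, so the minimal Python sketch below (using the pyo library, with assumed file names and parameter values) simply mirrors the same signal chain of pre-recorded players, an adjustable-speed sampler, and a microphone channel running through flanger, chorus, and reverb.

```python
# Illustrative sketch of the signal routing described above (pyo library).
# The real patch lives in Max/MSP; file names and values here are placeholders.
from pyo import *

s = Server().boot()

# Pre-recorded materials go straight to the output, as in the Max mixer.
intro = SfPlayer("intro_track.wav", loop=False, mul=0.8).out()

# One sampler with adjustable playback speed: 1.0 is the natural reading
# voice, lower values give the pitched-down, machine-like version.
reading = SfPlayer("reading.wav", speed=1.0, loop=True, mul=0.7).out()

# Extra channel: microphone input -> flanger -> chorus -> reverb.
mic = Input(chnl=0)
lfo = Sine(freq=0.2, mul=0.002, add=0.004)        # slow modulation for the flanger delay
flanger = Delay(mic, delay=lfo, feedback=0.5)
chorus = Chorus(flanger, depth=1.5, feedback=0.25, bal=0.5)
reverb = Freeverb(chorus, size=0.85, damp=0.5, bal=0.4).out()

s.start()
# During the performance the speed attribute can be changed live, e.g.:
# reading.speed = 0.5   # switch to the machine-like voice
```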

In generating the visuals I planned three main chapters, each built from different effect modules in Vizzie. The first section is an abstract movement of glass shapes, which I created by running a picture through the modules. Aligning with the intro track, I gradually layered, darkened, and blurred the visuals by twisting different values on the modules live with a MIDI controller, until the image temporarily disappeared as I started to read. In the second stage, I layered the previous visuals with moving stripes created from the combination of two easemappers, as a response to the floating orchestral sound. In the final stage, I ran an image of the word “speak” through a ZAMPLR and a BRCOSR to play with the pixelation of the image and make it flicker in response to the tiny sound loops from the live recording.
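For illustration only, here is a small Python sketch (using Pillow, with an assumed image file and made-up knob values) of the darken-and-blur gesture that the MIDI knobs performed live on the Vizzie modules in the first chapter:

```python
# Sketch of the darken/blur gesture from the first visual chapter.
# The real work was done with Vizzie modules in Max; "glass.jpg" is assumed.
from PIL import Image, ImageEnhance, ImageFilter

def process_frame(path, knob_dark, knob_blur):
    """Map two 0-1 controller values to brightness and blur,
    roughly as the MIDI knobs drove the Vizzie modules."""
    img = Image.open(path)
    img = ImageEnhance.Brightness(img).enhance(1.0 - knob_dark)   # 1.0 -> fully dark
    img = img.filter(ImageFilter.GaussianBlur(radius=knob_blur * 20))
    return img

# Gradually turning both knobs up fades the image toward darkness,
# like the moment the visuals disappear before the reading starts.
process_frame("glass.jpg", knob_dark=0.7, knob_blur=0.5).save("frame_out.jpg")
```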

Prepared “nonsense” on the status of language: To recognize, in order to compromise / To receive, and repeat, / So that I could release, as a relief; / But I want to be free. / Hammer a sentence, sentence a word, / Reverse an orchestra, / Expression has no extra. / Be free, speak or not speak. 

Links to GitHub:

https://gist.github.com/HoiyanGuo/d9f216e62a7c32a68e223d295fea0489

https://gist.github.com/HoiyanGuo/9c60c98b1d9e5db1d01ba49422911896

Performance
The final performance was an incredible source of nightmares for a whole week before it eventually took place. Thinking back on the anxiety and nerves, I am glad that I did go onstage and finish the performance without escaping from it! I felt really nervous during the performance: when controlling the visual materials during the intro, I realized that one of the control knobs on the MIDI controller wasn’t properly assigned; when reading through the microphone, I was unsure how it sounded to the audience and couldn’t quite handle their confused facial expressions in my real-time imagination; another desperate moment struck me as I recorded myself and saw no waveform in the recording’s visualization. Luckily, I did have something recorded and could continue my performance smoothly until the end.

The two live performances for the class (the other one in the auditorium) taught me how differently sounds can strike when played in different spaces and on different sound systems. I should have taken more time exposing myself to different sound systems and environments as a warm-up, especially since live reading and live recording were such important components of my project. Moreover, I never timed myself during rehearsals and ended up giving the shortest performance with a limited amount of material. I will definitely pay more attention to timing and pacing in potential future projects. I also find it necessary to design an ending for any performance, since the ending of my project was not obvious to the audience.

Conclusion
The process of completing the project was a valuable experience in forming a creative relationship with software, or with technology in general. While executing my ideas with the software, I also had to experiment with it firsthand to explore different possibilities. Nonetheless, I regret that I only worked with what I already knew rather than using the project as an opportunity to learn something new and generate a fresh visual experience. I am aware that there is an imbalance between the visual and audio components, since I put much more focus on the sounds. I didn’t make the visual component necessary, or at least crucial, to the overall representation of my idea; it was there for the sake of the assignment’s requirement and didn’t add to the essential meaning of the whole project. But the experience helped me understand the difference between a truly audiovisual work and an audio or visual work merely accompanied by the other element.

I think the final outcome did successfully suggest an alienation of spoken language. However, while the deconstruction was achieved, I now see possibilities in further transforming the deconstructed pieces of sound, for example into generated music. I reversed an orchestral sample with the intention of implying a reversal of pre-existing order and the potential generation of new knowledge and aesthetics; however, I achieved this with an extra sound sample rather than by transforming the recorded sounds themselves. By cutting the sounds into tiny pieces and looping them frantically, I merely suggested a possibility rather than showing the formation of a new language. Still, the experience of working on the final project opened a new world to me: the infinite possibilities of the human voice and the alterations made possible by technology. Nothing is more natural and at the same time more complex than the human voice, and I believe that is where the secret to a new musical language lies. I feel very passionate about incorporating more alienated voices into my personal experiments with music. At last, I’d like to thank Professor Parren and my peers for this semester’s exciting journey and for putting up with my weirdness and overall inexperience with IMA. I can’t wait to see more wonderful works from them and to have more encounters in the future!

VR/AR Fundamentals Reflection

For my final project, I worked with Kenneth and Kennedy to present Extinction: A Commentary on Urbanization in Natural Settings, which showcases the animals of the African safari going extinct, until we see these animals again as an exhibit in the Shanghai Natural History Museum.

My primary focus was the visual post-production of the project. From the 360 video we shot inside the museum, I extracted some still frames and used Photoshop to change the environment into a full landscape. I did this several times, removing one animal with each iteration, until the final image was just an empty landscape. I then used Premiere to cut these images together with fade transitions to create the effect of the animals disappearing.
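The cut itself was done in Premiere; purely as an illustration, a similar fade sequence could be sketched in Python (assuming the moviepy 1.x API) with hypothetical file names standing in for the retouched Photoshop frames:

```python
# Illustrative sketch of the fade sequence; the real edit was done in Premiere.
# File names are hypothetical; assumes the moviepy 1.x API.
from moviepy.editor import ImageClip, concatenate_videoclips

frames = ["all_animals.png", "minus_one.png", "minus_two.png", "empty_landscape.png"]
fade = 1.5   # seconds of crossfade between stills

# Hold each still for a few seconds and crossfade into the next,
# so the animals appear to vanish one by one.
clips = [ImageClip(f).set_duration(4).crossfadein(fade) for f in frames]
sequence = concatenate_videoclips(clips, method="compose", padding=-fade)
sequence.write_videofile("extinction_sequence.mp4", fps=24)
```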

One of the difficulties I ran into while working on this project was creating a stereoscopic image. I attempted to edit both eyes individually, but there was so much discrepancy between the two images that it produced strange artifacting and graininess. Because of this, I chose to leave the final video in 2D until it fades back into the museum, which is in 3D.

I feared that the shift from 2D into 3D might look strange, but based on the expressions people made afterwards, most were quite pleased with our work. I knew that if I wanted honest criticism, the most genuine responses would come from children. Based on the results of the show, I can say most of the kids really enjoyed our project, and it was amazing to see their fascination as they watched the animals disappearing.

In terms of improvement, I hope there is another method for actually making the image stereoscopic. Perhaps there is a way, but my Photoshop skills are not at that level yet, and I would like to improve on that. Our 360 audio was also not functional, as it played a mono channel in both ears; the ambisonic sound was a last-minute decision and went in without much testing and debugging, so I wish we had started working on audio earlier. I also wish the first portion weren’t static. For example, I would have preferred to show moving animals as video rather than a still image. If I were to redo the project, I would have built my own environment and added animated 3D models, but I felt that my current skills with 3D game engines are not good enough to do this realistically.

VR Production Experience for “I Do.” – Kat Valachova

Name of the Project: “I Do.”

Partners: Molly He, Ben Tablada

Links:

The “1 minute project”:

https://youtu.be/NxazNNrrNXI

30-second documentation:

https://drive.google.com/drive/folders/1Mlm17z9e5ZCGfNhuu3wy7Nnh31xzAsmU

What worked well – I have to admit our marriage market experience idea worked much better than any of us anticipated. There were many times my partners and I felt the project was hopeless, or that the initial idea would have to be curbed because of the situation at hand. In the end, we didn’t have to sacrifice any part of the project idea, and the response of our test users was far better than we could have imagined at the beginning. We also ended up with much more amazing content than we expected once the actual shooting took place. Seeing people genuinely thrilled after experiencing our demo version during the IMA Show was something we all watched with wonder. I feel very lucky to have had the team I had for this project, because without the great effort and cooperation everybody showed, this project, with all of its challenges, would have been impossible.

I have learned a lot during both this project and the course as a whole. Before the course started, I had never even tried a VR headset, let alone created any VR content. I chose this class with great anticipation to learn as much as possible about VR, but I was also very worried. Lacking any knowledge, I imagined creating VR content as something along the lines of programming a game or shooting a very complex movie, in other words, humanly impossible. Thanks to this project, I discovered that VR is not as intimidating as it seemed at the beginning, and that it offers great possibilities for being creative and sharing that creativity with others in a very fun and unique way. With enough guidance, content creation is possible even for amateurs. I had never worked with Premiere Pro before, so besides shooting VR content for the first time, this was another challenge that taught me a lot. Overall, I learned so much about the whole video-based VR-making process, from shooting to editing, post-production, and user testing.

What I would do differently is the preparation before shooting. I believe we were well prepared in terms of the terrain: the place we wanted to shoot incorporated all of the angles and objects we wanted. The problem we did not think through was the local people, who ended up posing the biggest challenge. We should have chosen a different approach: communicated with them beforehand, gotten them on our side, and made them excited to be part of our project.

Another thing I would make sure of next time is checking the quality of the video and the presence of all of its components (such as the sound source we lost) during all stages of production, from stitching and transferring the project to rendering.

VR Production Experience by Aibike Begali

As the first IMA class I have ever taken, VR/AR Fundamentals, taught by Michael Naimark with Dave Santiano assisting, provided a solid basis and fundamental insight into the area of Interactive Media. More specifically, working with VR taught me how to produce a stereoscopic video and how to edit and manipulate it so that an enjoyable and exciting product could be obtained.

First, by constantly keeping an eye on world updates and news in the sphere of VR/AR, it was easy to immerse myself in current trends and to understand the basics of the VR/AR medium as a tool of expression and communication.

Secondly, shooting our own footage out in the streets of Shanghai with the Insta360 Pro camera provided by the university gave us a great opportunity to record and document our real-life surroundings in 360-degree video. Some slight obstacles arose with lighting and with transferring the footage to PC devices, due to the large file sizes and the importance of proper stitching.

Third, post-production required some in-depth understanding of how to manipulate stereoscopic video and spatial audio; however, with Dave’s intensive help, all the iterations and work in After Effects and Premiere Pro turned out to be entirely possible and even enjoyable. It made me think about how accessible VR projects are becoming to amateur creators and videographers as the user interfaces of such video-editing software develop.

Showtime at the IMA Show on December 13 showed us how powerful the VR medium is at teleporting people to designated locations and sharing the experience and message we aimed to convey.