RAPS | Final Documentation | Yutong Lin

White Crane, Old Dog, Fading Memory

Documentation

 

Project Description

The project is a tribute to my great grandmother in the form of a live cinema performance. It is a re-telling of her life story through my own reflection. By bringing her singing from almost twenty years ago together with my family's archival footage and my photography, I wish to remember and celebrate her by sharing her story.

The title, White Crane, Old Dog, Fading Memory, carries its own connotations. "White crane by the lake" is a recurring theme and metaphor in Nakhi folk music, while "old dog" is literally what I saw a lot of in the village where my great grandmother used to live: old dogs wandering around that made me think about the passage of time.

Perspective and Context

I was greatly inspired by The Light Surgeons, a London-based live cinema performance collective. True Fictions, one of their projects from about a decade ago, investigates myth, cinema, and American history. The trailer on Vimeo alone intrigued me: the amateur aesthetics, the music score, and the editing together create a powerful arrangement of media and a deeply engaging emotion. I can only imagine how sensory and breathtaking the performance must have been with the score played live.

From their work, I became fascinated by live cinema as an art form and as a means of personal expression.

Gabriel Menotti says in "Live Cinema" that live cinema has the potential to challenge the storytelling strategies of traditional theater-based or screen-based cinema; that is, to mythicize the content through multi-sensory input and the fluidity of visual arrangement and music-making. "Such extensive freedom of configuration favors works whose evocative structure is closer to poetry than to the prosaic linearity that distinguishes most movie genres, thus suggesting improvised, free-flowing abstractions" (Menotti 86). The "poetic" aspect of live cinema, loosely structured and widely sourced, can be made coherent by the magic of music if it is edited or controlled live properly.

Another aspect of live cinema that interests me is the creator's way of interacting with the audience. "The light projected onto the screen is like a campfire around which the public gathers to absorb the performer's tales. The performer is an actor whose main job would be to pursue communication through this process, keeping it meaningful for the audience" (Menotti 88).

The presence and visibility of the "director" can be read as a powerful statement of what he or she has to say. The after-party and mingling with the audience are also part of the feedback loop between the audience and the director.

Development & Technical Implementation

Link: https://gist.github.com/Amber-yutong-lin/28f089eab4176af7fb426653268958a8

Since my direction diverged slightly from what other people were doing, my workflow was also a little different.

Step #1 – music composition:

I composed my music in Logic Pro; it was my first time using it. I started by sampling drum sequences from GarageBand, but Eric suggested that too much sampling could result in a generic output that lacks personality, especially for listeners familiar with GarageBand or Logic. I therefore recomposed a more futuristic, ambient layer that goes well with my great grandmother's folk singing.

(The drum sequence was made by my friend Fiona Chen, with her consent to use it.)

Step #2 – scriptwriting and transcription:

 

I wrote a short prose piece about my memories of her from when I was younger.

Step #3 – text, image, and video curation

I chose the visual materials based on the script. The selection had begun even earlier, when I was reviewing my previous photographs and family footage.

Step #4 – pre-compositing and visual effects

This step was done in Cinema 4D, Max, and After Effects.

Step #5 – moving to Max for live performance

The live visual elements are mainly controlled through MUTIL8R and HUSALIR for saturation, contrast, and hue. By pushing the black-and-white images into highly contrasted ones, I wanted to visualize the color of memory and the hyperreal nature of cinema.
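
For illustration, the kind of scaling such live control involves can be sketched in Max's [js] object; the MIDI CC source here is an assumption rather than my exact setup:

    // Scale an incoming MIDI CC value (0-127) into the 0.-1. float range
    // that Vizzie parameter inlets expect.
    function msg_int(cc) {
        outlet(0, cc / 127.0); // e.g. patched into HUSALIR's saturation inlet
    }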

The audio effects were performed live; the reverb, echo, and filters were rehearsed to fit the visuals and the overarching layer of music.

Performance

The performance generally went well and achieved what I had envisioned. I think bringing folk music to an unconventional venue for it, a night club, can open up more conversation about different kinds of music and about ways to think about heritage.

However, I have to acknowledge that I changed my mind one day before the performance and skipped the run-through of my final patch, which greatly impaired my judgment of the overall experience. Next time, I have to give myself enough time to test on bigger screens and with different audio configurations. I asked a number of people who were at the performance and got some positive feedback, but they are my friends and acquaintances and didn't have to be critical. The subtitles for English readers didn't go over well: people said the projection contrast wasn't high enough to read the text, and some words needed to stay on screen longer. The audio filters could also have been better rehearsed and polished.

Conclusion

I really appreciated the chance to perform what I learned in RAPS. I had never thought about making my own music, since I have no background in it, but this class cultivated my passion for it, and I want to learn more about music-making.

I had also never thought about doing live performances. When I heard people say that they almost cried after experiencing my performance, I considered it a success. This class made me rethink VJing: it can be cooler than, and more than, "wallpaper"; it can be an engaging way to tell powerful stories.

Works Cited:

Menotti, Gabriel. "Live Cinema." The Audiovisual Breakthrough, edited by Ana Carvalho and Cornelia Lund.

Final Project for RAPS

Title
Kaspar

Project Description
The name of my project, Kaspar, comes from Kaspar Hauser, a mysterious German youth who claimed to have spent most of his life in a dark cell, completely isolated from society, and who therefore had acquired almost no knowledge of any language by the time he was discovered in a street at the age of sixteen. Although the mystery of Kaspar remains unresolved and his account is said by some to be a fraud, it has inspired numerous works of literature and theater, including Peter Handke's play Kaspar, which directly inspired the concept behind my project. Handke's play opens with the lonely presence of Kaspar himself and develops through Kaspar's interaction with abstract human voices meant to instruct him in speaking like a normal social being. Throughout the play, Kaspar struggles painstakingly to pronounce words, grasp their meanings, and pair the sounds of language with those meanings. As Kaspar eventually learns to speak, he dies.

Handke's play is a critique of language's powerful constraint on human expression and of the conventional way of thinking in accordance with the inner logic of language. The concrete subject of his critique is post-war Germany, where the adverse impact of war persisted even after its end in the form of the spoken German language: there was a lack of words to appropriately address people and events, or to express the post-war trauma. I relate deeply to this inadequacy of language, a subject that haunted much of my time in college. While language is broadly regarded as the most straightforward and efficient medium of communication, there are thoughts and emotions one cannot express through spoken words; at the same time, the omnipotence of language may very well be a constraint on thought.

After all, language is nothing but a pairing between sounds and meanings, or signs and meanings, yet it is respected as a useful tool that bonds almost all of us together. Most of the people I know, myself included, are much too used to treating language as something already developed and assigned, and therefore ignore the true nature of written and spoken words. The most important knowledge I acquired from RAPS is the realization that sounds and visuals are themselves effective and affective languages. I felt inspired to make my final project a tribute to this realization, and later came up with the idea of deconstructing and alienating familiar language through software manipulation.

Perspective and Context
Live audiovisual performance is so rich in possibilities that performers can focus either on generating an immersive sensory experience or on creating a narrative through storytelling. While the ideas of live cinema, VJing, and live audiovisual performance as described by Gabriel Menotti, Eva Fischer, and Ana Carvalho all concern possible approaches to the representation of visual materials, I see my project as an experiment in which the theory of visual representation is applied to the performance of sounds. My live manipulation of the visuals during the performance responds to the practice of VJing as improvisation responsive to music, while the arrangement and deconstruction of sounds is closer to a narration than to the abstract experience of live cinema. Moreover, my project incorporated an improvisation with sounds comparable to a jamming session with visual materials. The implementation is all about the subtle balance between representing a prepared idea using preexisting materials and creating new meanings by adding live manipulation to the performance.

Development & Technical Implementation
The execution of the project started with me working in a DAW to create the intro track. Rather than structured music, I wanted the intro to be an assemblage of different sounds: breaking glass (implying how I hear the deconstruction of things in a subtle manner…), scattered drum beats, and a ground layer. Further into the intro are distorted sound fragments taken from a recording of my own reading, which is also played later in the performance; I pitched, stretched, echoed, or reversed three samples of me reading the words "speak", "talk", and "walk" to create an effect of alienation and an eerie metaphor for machines learning to speak. The intro ends with a dramatic pitch-up of the ground layer and a final drop of percussion.

Since the intro is pre-recorded, I ran it directly through the mixer in Max without any extra sound effects, as with the background for my reading section and the reversed orchestral sound. The mixer module held most of the sounds used in the project: the pre-recorded materials and the SAMPLR modules I used for recording, playback, chopping, and pitching. Besides the stereo output connected after the mixer, I had one extra output for the microphone input, which ran through flanger, chorus, and reverb effects to keep that part of the patch simpler to manipulate. I also attached effect modules after two of the samplers, which I only triggered toward the end of the performance. Finally, I made the playback speed adjustable for one of the pre-recorded materials so that I could switch between my own reading voice and a pitched-down, machine-like version, or directly show the alteration of the sounds to the listeners.

In generating the visuals I planned three main chapters, each realized with different effect modules in Vizzie. The first section is an abstract movement of glass-like shapes, which I created by running a picture through the modules. Aligning with the intro track, I gradually layered, darkened, and blurred the visuals by twisting values on the modules live from a MIDI controller, until the image temporarily disappeared as I started to read. In the second stage, I layered the previous visuals with moving stripes created from the combination of two easemappers, as a response to the floating orchestral sound. In the final stage, I ran an image of the word "speak" through a ZAMPLR and a BRCOSR to play with the pixelation of the image and make it flicker in response to the tiny sound loops from the live recording.

Prepared “nonsense” on the status of language: To recognize, in order to compromise / To receive, and repeat, / So that I could release, as a relief; / But I want to be free. / Hammer a sentence, sentence a word, / Reverse an orchestra, / Expression has no extra. / Be free, speak or not speak. 

Links to GitHub:

https://gist.github.com/HoiyanGuo/d9f216e62a7c32a68e223d295fea0489

https://gist.github.com/HoiyanGuo/9c60c98b1d9e5db1d01ba49422911896

Performance
The final performance was an incredible source of nightmares for the whole week before it eventually took place. Given all the anxiety and nerves, I am glad that I went on stage and finished the performance without escaping from it! I felt really nervous throughout: when controlling the visual materials during the intro, I realized that one of the control knobs on the MIDI controller wasn't properly assigned; when reading through the microphone, I felt unsure how it sounded to the audience and couldn't quite handle their confused facial expressions in my realtime imagination; another desperate moment struck me heavily as I recorded myself and saw no waveform in the recording's visualization. Luckily, I actually did have something recorded and could smoothly continue my performance to the end.

The two live performances for the class (the other in the auditorium) taught me how differently sounds can strike when played in different spaces and on different sound systems. I should have taken more time exposing myself to different sound systems and sound environments as a warmup, especially since live reading and live recording were such important components of my project. Moreover, I never timed myself during rehearsals and ended up giving the shortest performance, with a limited amount of material; I will definitely pay more attention to timing and pacing in potential future projects. I also find it necessary to design an ending for any performance, since the ending of mine was not obvious to the audience.

Conclusion
The process of completing the project was a valuable experience in forming a creative relationship with software, and with technology in general. While executing ideas with software, I had to experiment firsthand to explore different possibilities. Nonetheless, I regret that I only worked with what I already knew rather than using the project as an opportunity to learn new tools and generate fresh visual experiences. I am aware of an imbalance between the visual and audio components, since I put much more focus on the sounds. I didn't make the visual component necessary, or even crucial, to the overall representation of my idea; it was there for the sake of the assignment's requirements and didn't add to the essential meaning of the whole project. But the experience helped me understand the difference between an audiovisual work and an audio or visual work merely accompanied by the other element.

I think the final outcome did successfully suggest an alienation of spoken language. While the deconstruction was achieved, however, I now see possibilities in further transforming the deconstructed pieces of sound, for example into generated music. I reversed an orchestral sample with the intention of implying a reversal of pre-existing order and the potential generation of new knowledge and aesthetics; yet I achieved it with an extra sound sample rather than by transforming the recorded sounds themselves. By cutting the sounds into tiny pieces and looping them wildly, I merely suggested a possibility rather than showing the formation of a new language. Working on the final project did open a new world to me: the infinite possibilities of the human voice under the alterations made possible by technology. Nothing is more natural and at the same time more complex than the human voice, and I believe that is where the secret to a new musical language lies. I feel very passionate about incorporating more alienated voices into my personal experiments with music. At last, I'd like to thank Professor Parren and my peers for an exciting semester and for putting up with my weirdness and overall inexperience with IMA; I can't wait for more wonderful works from them and for future encounters!

RAPS Final Project: opus 1 by Gabriel Chi

Title:
opus 1

Link to GitHub:

https://gist.github.com/gabrielchi/b91dac2eef3e0b33d1feb1e34c871305

Project Description

For my project, unlike some of the other students, I wanted to take a very abstract and contemporary approach to the audiovisual performance art form. I wanted to make an immersive audiovisual experience reminiscent of my favourite artists in the field, such as Ryoji Ikeda, but with my own twist. By combining my prior knowledge of music production with my moderate understanding of Max 8, I wanted to challenge myself to create something original and artistically unique, pushing myself out of my comfort zone.

When settling on a concrete concept behind the project, I approached it less as an art piece with a fully fleshed-out plot and story; rather, I wanted to take it head-on as a creative exercise in a discipline I had never worked in before, hence the name: opus 1 (work 1). The name not only presents itself, quite straightforwardly, as my first work, but also connotes that this is merely the first piece, and that there will be more "opuses" in the future.

Perspective and Context:

In terms of opus 1 and its relationship to the wider spectrum of audiovisual history, one can look at its relation to contemporary styles of the art form. Since we had mainly looked back at traditional artists within the field for our previous projects, such as Thomas Wilfred's Lumia, I thought it would be an interesting creative exercise to focus exclusively on contemporary influences.

Thus, the stylistic elements of the project are heavily influenced by abstract visual artists such as Ryoji Ikeda, who, in certain projects, works strictly in black and white when performing. Additionally, pieces such as loudthings by TelcoSystems were an important influence on portions of the audio that play through the earlier and latter parts of the project.

Development & Technical Implementation
 

For my project, I already had an idea of how I wanted to execute the visuals. After talking with the professor about how TelcoSystems made loudthings, I decided to build a large grid cube of 3D objects, place different cameras within the space, and trigger them live during the performance. With the professor's help, I realized that JavaScript can be incorporated directly into Max 8 via the [js] object, which made it easy to generate such a large number of 3D objects in a grid arrangement, mainly with nested for loops.
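
A minimal sketch of that idea in the [js] object follows; the context name "gridworld", the grid size, and the spacing are placeholder assumptions, not my exact script:

    // Sketch of the nested-for-loop idea in Max's [js] object. Call it by
    // sending "build" (e.g. from a loadbang) to the js object.
    var N = 6;         // objects per axis (placeholder assumption)
    var spacing = 1.0; // distance between neighbors (placeholder assumption)
    var shapes = [];   // keep references so the objects can be animated later

    function build() {
        var offset = (N - 1) * spacing / 2; // center the cube on the origin
        for (var x = 0; x < N; x++) {
            for (var y = 0; y < N; y++) {
                for (var z = 0; z < N; z++) {
                    var s = new JitterObject("jit.gl.gridshape", "gridworld");
                    s.shape = "sphere";
                    s.scale = [0.1, 0.1, 0.1];
                    s.position = [x * spacing - offset,
                                  y * spacing - offset,
                                  z * spacing - offset];
                    shapes.push(s);
                }
            }
        }
    }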

Additionally, by looking into jit.gl.camera, I was able to place cameras at different positions within and around the cube, allowing me to switch between perspectives of the cube in the 3D space. I also used jit.gl.skybox and a cubemap to make a larger cube with a texture on its inside, mainly to give the 3D space more depth and visual texture (I used a static texture to give the world an almost deep-space, starry-universe feel).
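
As a companion sketch, camera presets can be cut between by messaging the same kind of [js] object; the positions below are invented examples rather than my actual presets:

    // Cut between preset viewpoints by sending "view 0", "view 1", ...
    // "gridworld" is the same placeholder context name as above.
    var cam = new JitterObject("jit.gl.camera", "gridworld");
    var presets = [
        [0, 0, 10], // outside, framing the whole cube
        [0, 0, 0],  // inside, at the center of the grid
        [5, 5, 5]   // diagonal corner view
    ];

    function view(i) {
        if (i < 0 || i >= presets.length) return; // ignore bad indices
        cam.position = presets[i];
        cam.lookat = [0, 0, 0]; // keep the grid centered in frame
    }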

Finally, the method for creating and controlling the 3D objects was based on the 3D object exercise we did in class. This allowed me to load my own sourced 3D object and control its rotation on both the x and y axes. The rest of the patch mainly consisted of the various Vizzie effects used to alter the visuals during the performance.

 

For the audio, which was a large part of the project, I used Ableton to produce the entire 10-minute score. I found this task extremely hard, not only creating a 10-minute score but keeping it from sounding boring and completely uniform. The first and last sections of the project therefore sound very similar to works such as loudthings: a very abstract and surreal soundscape making use of unconventional samples such as locust noises. Reaching the middle of the audio, however, I decided to add my own personal twist to the sound: a chord progression, later accompanied by a drop and percussion, to differentiate the sections of the project. This was mainly done using a MIDI keyboard and the various beat packs I use for my other music production.

I think the hardest aspect of the audio, besides the aforementioned issues, was mastering a 10-minute soundscape. Not only was there a chord progression and a drum sequence to master and mix, but there were also many small elements throughout the soundscape, which took a lot of time and effort, and I still think I can improve on it. Making the soundtrack for this project was definitely a challenge and forced me to step outside my comfort zone.

For the performance patch, I wanted to make automating the patch in realtime as easy and intuitive as possible. When considering the different elements I was going to automate during the performance, I realized that I did not need as many as I had previously thought. I added bangs and triggers for each camera angle, making sure to take note of the different camera names (e.g. camera 1, 2, 3, etc.). Additionally, I added controls for the speed of movement of the 3D objects on the x and y axes. Finally, I included the audio for the project itself, the stereo mixer, and visual effects such as DELAYR and 2TONR for making visual shifts within the project.

Performance
Overall, I really enjoyed performing the project in an off-campus setting. Not only did it feel as if we were wholeheartedly showing our pieces to the public, but it also brought a certain pressure that pushed me to make sure everything was perfect during the performance. By having a show outside of school grounds, we could all challenge ourselves to step outside our comfort zones.

My own performance went smoothly in the sense that there were no technical difficulties. However, looking back at the recording, I could definitely make some improvements. Mainly, I noticed some latency between the projection and the buttons I was triggering. I did not notice it live because the small display had no such latency, and I had not accounted for the slight delay on the larger screen, which affected the portion of my project that had percussion.

Conclusion
I think this project was an extremely eye-opening experience for me, in both a creative and a technical sense. I realized afterward that audiovisual performance is indeed something I want to pursue further, as it has endless applications within countless artistic mediums and disciplines. By making and performing this final project, I was able to see that there is still much I have to learn, and countless technical skills I must master to perfect my work.

In the future, I would like to look into different audiovisual software and programs that could offer a more in-depth and intuitive take on the art form, as I sometimes found Max 8 unnecessarily complicated. By taking into consideration the mistakes and accomplishments of this final project, I hope to keep my focus and continue making more audiovisual works.

RAPS Project 3: Documentation – Kyle Brueggemann

A Transcendent Journey

Project Description

For our final project, my partner, Celine Yu, and I wanted to create a work that allows the viewer to transcend their current physical condition and that evokes spiritual thoughts and transformations. We created our work as a medium for expressing deep spiritual concepts in a way that is conceivable to a live audience. Our work is revolutionary in the way it uses audio and visuals to express a concept previously expressed only through words and ideas.

The spiritual experience we portray is one of ego-death and loss, followed by a finding of oneself and the rebuilding of one's identity from the bottom up. It depicts the confrontation of one's shadow, followed by the creation of a new identity that better matches one's true sense of self.

In our initial research, we were mainly inspired by Sigmund Freud's concept of the human psyche and its personality. We wanted to show the battle between the super-ego and the id, and how, when they battle constantly, the ego must intervene. Our performance is meant to show the tense conflict between these parts of our identity, and how, when they are destroyed, the true ego or self is revealed, and this part of ourselves must then choose to build its identity on its own.

We used the medium of realtime audiovisual performance to show how the ego disappears, adapts to its death, and is eventually resurrected. We paired these imaginative concepts with hallucinogenic imagery and audiovisual storytelling.

Perspective and Context

Our audiovisual performance finds its inspiration in spiritual concepts; however, for it to fully serve its purpose as a work that moves its audience, the concepts must be graspable without being too obscure. In such an abstract genre, the meaning will inevitably shift with each viewer's personal interpretation; one constant throughout the project, however, was making sure that our audio and visuals convey the effort of that spiritual transformation.

James Whitney, who drew inspiration for his audiovisual works from many Eastern philosophical concepts, went through similar creative processes. As William Moritz describes, "The abstract language of his art became 'non-objective' in the special sense of its refusal to view 'things' coldly as objects" (2). In the same spirit, I can relate our creative process to Whitney's: one that must use objects to confirm certain realities, but that must also use abstract ideas to reveal abstract concepts. The worries I had during the creation process echo Whitney's refusal to view objects merely as objects, and his perspective reflects our approach to expressing deep concepts in an abstract register. Similarly, "As he studied Eastern philosophies, James realized that certain cosmic principles did not yield easily to verbal explanations, but could be seen and 'discussed' through the abstract shapes in his films" (Moritz 2). In this same way, I believe our process of evoking ideas through this medium is a modern extension of his beliefs.

In relation to earlier problems in the audiovisual field that our project addresses, I believe we attained a strong level of connection between our audio and our visuals. On the mathematics of music, Moritz writes, "Many pieces of music may share exactly the same mathematics quantities, but the qualities that make one of them a memorable classic and another rather ordinary or forgettable involves other non-mathematical factors, such as orchestral tone color, nuance of mood and interpretation" (Moritz). An earlier artist who struggled with this was Mary Ellen Bute, who in her past works was "using gaudily-colored, percussive images of fireworks explosions during a soft, sensuous passage–perfectly timed mathematically, but unsuited to mood and tone color" (Moritz). I believe we improved on this problem by separating our audio into many parts, which allows our visuals to be affected by each individual audio track rather than by the mathematical output of the entire soundtrack. This adds a layer of complexity to the performance, but it lets us move beyond these problems in the audiovisual field. We also work beyond them by performing as a duo, which allows us to manipulate the visuals personally as the performance moves on.

Whether our connection to the rest of the audiovisual world is a shared inspiration and form of expression, or whether it is a way that we have advanced the field, I believe that our work definitely finds connection with other artists in this field. 

Development and Technical Implementation

GIST

Celine and I split up the workload: I completed the audio while she found the videos, and then we worked together on completing the patch.

Audio:

For the audio, I used Logic Pro X and experimented with all of its different sounds. I wanted to split our project into five distinct parts representing the process of ego deterioration and rebirth: excitement, confusion, nightmare, acceptance, and peace. For each section, I found sounds in the program that matched its mood, then stitched them together and experimented with how they shaped the work's transformation as a whole. I made sure to include dramatic transitions so that the audio evokes the story we were trying to tell.

Next, I pulled 12 individual audio tracks out of the main piece to be played by me throughout the performance. Separating these from the main track allowed their playback to individually trigger changes in our visuals. The main track plus the 12 separate tracks gave us 13 tracks to be played at set intervals across the performance. Each track was titled with the exact minute and second at which it had to be cued to match up with the main track's beat.

Filming:

For the visual part, instead of starting with a generator effect in our patch, we decided to use a base video that we could then manipulate. Celine took on the role of going around Shanghai and filming different colorful scenes that would be useful for manipulation within the patch. She ventured to various spots such as the Light Museum, the Bund, and Space Plus.

For our film, we concluded that we wanted something with a bokeh effect, because it would give us not only a colorful video but also something that could easily be manipulated into an abstract design for our specific scenes. We wanted something simple enough to take on many forms. In the end we had many different videos, but we decided to incorporate just one, because our patch was already running quite slowly once we brought it in.

The Patch:

The head:

When working on the patch, the first thing I did was incorporate a background visual, as well as a 3D model placed on top of it. Using a model found online, https://sketchfab.com/3d-models/helios-vaporwave-bust-f7a0fdfc6bef44b497e33257658764c8, I was able to load it into the patch using a jit.gl.model object attached to the drawing context named myworld. We decided on this head because we figured we could move it in many ways throughout the experience to replicate the emotional process one would go through in ego death. I set the model in the very middle of the scene and connected loadmess objects to its placement values so that the patch loads correctly every time. We had some issues with the head at first because we could not apply the texture correctly, but after discovering that we needed to name the texture correctly and source it from the right folder, it worked amazingly.
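
For reference, the same setup can be sketched in Max's [js] object, though our patch used a jit.gl.model box with loadmess objects rather than a script, and the file name below is a stand-in:

    // Load the bust into the "myworld" context and pin its start values.
    // "bust.obj" is a stand-in name; our patch used loadmess objects instead.
    var head = new JitterObject("jit.gl.model", "myworld");
    head.read("bust.obj");         // equivalent to sending "read bust.obj"
    head.position = [0, 0, 0];     // centered, like our loadmess values
    head.scale = [0.5, 0.5, 0.5];  // assumption: scaled to fit the frame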

To spice up our 3D model even more, I came up with the idea of animating its texture as an ever-changing rainbow. To do this, I connected the head's texture file to a husalir with a twiddlr attached, and sent the husalir's output to the jit.gl.texture object. The input of this effect let us toggle the head's color-changing effect on and off.

The background:

Using the video Celine filmed, we figured out how to convert it to the HAP format after many sessions of trial and error with the AVF Batch Converter. Once the video could be loaded without slowing our FPS too much, Celine took on the role of experimenting with different Vizzie modules to find the most desirable effects for manipulating the video. After her experimentation, and after consulting with me, we decided to use the pixl8r, interpol8r, sketchr, kaleidr, and husalir. After experimenting with these for a long time, we found the combinations of each Vizzie module that gave us the effects we desired most. We also felt the video moved too fast, so we manipulated the speed of the original clip by adding a "rate $1" message to our jit.movie object and attaching different messages to it, which let us slow the movie down as well as speed it up when needed.
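
The same speed control can be sketched in [js]; our patch used "rate $1" message boxes instead, the file name is a stand-in, and 0.2/1.0 are the slow and normal speeds we settled on:

    // Toggle playback speed between slow and normal; "bokeh.mov" is a
    // stand-in file name ("jit.qt.movie" would replace "jit.movie" on
    // much older Max versions).
    var movie = new JitterObject("jit.movie");
    movie.read("bokeh.mov");

    function slow()   { movie.rate = 0.2; } // the slowed-down speed we used
    function normal() { movie.rate = 1.0; } // native playback speed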

The Tunnel:

I mentioned to Celine that we had a beautiful background, but that the patch could use a bit more going on, especially with multiple scenes planned. I came up with the idea of creating a generated animation within the patch, in addition to the background video and the 3D head model. After playing around with Jitter, I settled on a torus, but with the audience's perspective placed inside it, rotating on its axis as if the audience were traveling through a never-ending tunnel.

It was quite difficult to position the torus animation precisely enough to achieve this tunnel effect, but once we did, we were quite satisfied. To further enhance the tunnel, we connected our original background output to a jit.gl.texture object, which was named as the texture of the tunnel. This added striking visuals to the tunnel and gave us plenty of ability to manipulate its look throughout the performance.

Combining the background, head, and tunnel:

Using a lumakeyr module, we then combined the 3D head model with the manipulated video; the lumakeyr allowed us to fade the head in and out of the main background video. We also wanted to be able to switch between the main background video and the tunnel animation. We originally had an xfade Vizzie module connected to both, but for some reason this made our patch crash quite frequently. Our professor helped us out by replacing the xfade module with a "prepend param xfade" object, which created the effect of the xfadr module without incorporating the entire module into the patch. For whatever reason, this change made our patch run much more smoothly.

Audio in Patch:

To add all the audio into the patch, I connected the 12 individual tracks plus the main track to two different pan mixers, which were then sent to a stereo output. I also added an audio2vizzie object and a smoothr object for every individual audio file we had; these are what allow our audio files to drive the visuals.

Audio to Visual:

As I had made a lot of the creative decisions in the design of the visuals, Celine took on the responsibility of deciding which audio would affect which visual throughout the patch. The process was one of trial and error: connecting the audio tracks to individual Vizzie modules and taking note of their effects.

During this process, our patch became quite messy and complicated, with many wires running all the way from our audio tracks to the Vizzie components on the other side of the patch. One main difficulty was tracking how each visual component would be turned on and off by the triggered sound values. We had to make sure that a visual transformation triggered by a sound would genuinely affect our overall output, since many Vizzie modules are switched on and off throughout the performance as other sounds play. To fix this, Celine came up with a genius solution: 1 and 0 messages triggered by the various audio tracks, which let the audio switch specific Vizzie effects on and off as needed throughout the performance.
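
Her actual mechanism used buttons and 1/0 message boxes, but the gating idea can be sketched compactly in Max's [js] object; the 0.1 threshold is an example value, not our setting:

    // Emit 1/0 only when an envelope value crosses a threshold.
    var threshold = 0.1; // example value, not our actual setting
    var state = 0;

    function msg_float(env) {   // envelope arriving from audio2vizzie/smoothr
        var next = (env > threshold) ? 1 : 0;
        if (next !== state) {
            state = next;
            outlet(0, state);   // 1 enables the Vizzie effect, 0 disables it
        }
    }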

Once this process was completed, our patch was quite complicated, but thankfully presentation mode came to our rescue, letting us create a screen with only the essentials of our performance.

MIDI Board:

Since I would control the audio and certain visual effects from the computer while Celine controlled the MIDI board throughout the performance, she mapped the board to the effects she wanted to alter. She connected it to the lumakeyr module so she could fade the head in, as well as to the head's various movement values. She also connected it to the xfadr that switched between the tunnel visual and the background video.

Performance

Because we could not find a MIDI board to practice with until very late, the time we had to run the performance all the way through was definitely not as long as I had hoped, but we managed to pull it off very well. During our practice sessions, we made many decisions about when to fade in the head, when to add color to it, and when to switch between the tunnel visual and the background visual. We also created a checklist of the things we wanted to alter throughout our 10-minute piece in order to follow the storyline of the audio.

For the performance, I had my phone open running a stopwatch so we would know when to cue certain effects, while Celine had her phone open to display the cues. We both took on big roles: Celine faded and played with a lot of the visuals and controlled the movement of the head, while I played all the sounds, controlled the speed of the video, and controlled the head's movement.

Everything went amazingly during our performance. We had a few moments where we were not entirely synchronized, but none of these small mistakes was obvious enough to affect the overall outcome. Everything worked and all of our visuals reacted the way we desired; we just could have been a bit more punctual at times. I believe that with more full run-throughs, we would have become more familiar with the different transitions and better able to hit them at exactly the right time.

Overall, I think Celine and I worked very well together during our performance. I felt really in touch creatively with how we were expressing ourselves through the piece, and I believe she felt the same way. We were quite nervous at first, but my confidence definitely built up as the performance went on.

Conclusion

I believe this truly was the best outcome for our project. Since the planning stage, we made a few changes to the final output, but the essence of the project stayed the same. From our initial proposal and trial performance, we were advised to make certain changes, such as reducing the number of scenes and doing more with the head so as not to reveal everything to the audience right away. We took these opinions into consideration by not including too many different elements, instead taking a smaller number of elements and making them more interactive during the performance. As for the head, we had originally planned to keep it in sight for the entire performance, but after our professor's suggestion not to reveal it right away, we found the best transitions for fading it in and out, adding color, and moving it around in the way that best fit what our original story intended to portray.

The division of work in our project functioned perfectly, as I am more versed in audio creation and Celine is more versed in filming. Once we created our individual parts, we came together to build a beautifully intricate, if extremely messy, Max patch that used practically every effect we learned in the class. I was extremely satisfied with how dedicated we both were to producing a beautiful and thought-provoking project, and it allowed us to put on a performance I'm very happy with. Aside from a few timing issues, we were able to do everything we wanted with the patch seamlessly, even if we were a bit on edge during the performance.

Our ability to work together is amazing, and I knew we would collaborate well based on our past success on the Communications Lab final project. Our strengths and weaknesses really balanced each other out throughout the entire creative process.

Regarding the emotional meaning behind the work, I believe we truly portrayed the meaningful spiritual transformation that comes with ego death and the rebuilding of identity that follows the loss of ego. Through the visual effects and transitions, accompanied by the story told through the audio, I hope our audience was able to feel the story of our work.

Regardless of a few stressful moments and some minor flaws in our performance, I'm entirely happy with what happened at Elevator. I enjoy the nightlife dynamic, and being part of it was really enjoyable. A few small changes, such as practicing more with the MIDI board and tidying up our presentation mode, would have streamlined our process even more, but there never is a perfect project. The overall process, from planning to creating to manipulating, was really enjoyable, and I think this is my favorite IMA project I have done so far.

I am grateful to have taken this IMA class and completed such an intensive project, because it introduced me to an art form I didn't even know existed. This art form is now my favorite form of expression, and completing this final project has given me real insight into what I would like to pursue in the future. Conveying such beautiful concepts through imagery and sound is extremely rewarding for me, and I'm glad I was able to share what Celine and I created with such a special audience. I had a great time with this project, and I know its effect on my aspirations will not fade anytime soon.

Works Cited

Moritz, William. Mary Ellen Bute: Seeing Sound. The Museum of Modern Art/Film Stills Archive.

Moritz, William. "Who's Who in Filmmaking: James Whitney." Sightlines, vol. 19, no. 2, 1986.

RAPS: Final Project Documentation – Celine Yu

Title: A Transcendent Journey 

Partner: Kyle Brueggemann 

Project Description: 

A Transcendent Journey reflects our desire to create a performance that transcends the range of normative, physical human experience toward a spiritual and psychological one. Our intention was to become pioneers in the realtime audiovisual performance scene by representing, through visual and auditory means, a concept previously understood only through schemas, thoughts, and words.

The concept we chose is none other than the three-part theory of personality founded and developed by Sigmund Freud, the father of psychoanalysis. Freud's single most enduring notion of the human psyche was first analyzed and explained in "The Ego and the Id" (1923), in which he discusses the three fundamental structures of the human personality: the id, ego, and superego.

According to Freud, the id is the primitive portion of the mind that harnesses sexual and aggressive drives, operating on instant gratification and oriented toward fantasy. The super-ego functions as the moral conscience, incorporating the values and morals of society learned from one's environment. Its function is to counteract the impulses of the id while persuading the ego (discussed below) to pursue moralistic goals rather than the forbidden drives of the id. Lastly, the ego is the part of the mind that mediates between the desires of the id and the super-ego, operating on reason and logic according to the reality principle. It is in charge of satisfying the id's demands in realistic ways while avoiding the negative social consequences that concern the super-ego.

While prominent, Freud's theory describes these pillars as mere concepts, not existing in any physical shape or form as parts of the brain. They are purely theorized systems, which people have learned about and internalized only through words and thoughts.

Kyle and I got to wondering how the mind would act if the ego were to disappear; that is, upon the death of the ego. How would the mind react? How would it adapt? As partners, we found this the perfect case for exercising our pioneering interest in audiovisual amalgamation. We were inspired to display the innate actions and behaviors of the id, super-ego, and ego by carrying them across into the sensory registers of sight and sound.

Perspective & Context: 

Pinpointing a project within the larger cultural context of audiovisual performance can be difficult, as we learned in our many readings and studies of the art genre. Nonetheless, I would say that our performance leans more toward Live Cinema. Live Cinema is just one category that falls under what Ana Carvalho depicts as the umbrella of the Live Audiovisual, one of many practices working beneath an "umbrella that extends to all manners of audiovisual performative expressions" (134).

Our project does not employ any large contraptions, Lumia projections, or elements like water and shadows as used in previous performances; rather, it focuses much more on storytelling and the conveyance of deeper representations. As Gabriel Menotti mentions, in Live Cinema the creator is given a much "larger degree of creative control over the performance" (95): there is significantly more leeway for artists to create what they desire and convey what they wish, considering that they do not need to follow momentary trends.

Though similar, our project should not be confused with VJing, for our emphasis on narration and communication articulates a much more personal and artistic essence for both the creator and the audience member. This is also why "many live cinema creators feel the need to separate themselves from the VJ scene altogether" (93). As live cinema designers and performers, Kyle and I had full creative control over the audio, the set, the visuals, and the fundamental concept behind the performance, with no reliance on exterior sources, giving us the upper hand throughout the entire showcase. By visualizing Freud's early theory of the human psyche, we were able to communicate a cognitive system through narration and storytelling, allowing audience members to fall deeper into the performance and understand its representations on a much more significant scale.

Adding to the historical significance, our project was heavily influenced by the words, art pieces, and beliefs of both the Whitney brothers and Jordan Belson. These audiovisual artists strove to eliminate association with the real world, replacing it with the truths that lie hidden not in the natural world but in the mind. With their Eastern-metaphysics-inspired visual art, both the Whitneys and Belson created ideal worlds, ones that relentlessly and perpetually explored "uncharted territory beyond the known world" (132) in order to reach a new perspective. Kyle and I reflected the same perception in the transcendent nature of our own project.

Development & Technical Implementation: 

Github: https://gist.github.com/kcb403/f694a176df373ad1c655bf9c54d261c0

Audio Creation:

Kyle volunteered to take on the audio because of his experience from other classes. He used Logic Pro X to create a base layer, followed our proposed sections, and stacked audio upon audio so that each section differed significantly from the others. Upon completion, Kyle selected 12 audio snippets for us to use in our patch. These sample layers, along with the main base layer, were exported and placed into the patch for further manipulation.

 

Filming – Base Video:

For the project, I was tasked with going around Shanghai and taking minute-long videos of things that stimulated me. I knew I wanted a collection of videos that used light, color, and focus to the fullest, which is why I went to a number of places I knew would fulfill these requirements: the Bund, the Light Museum, and the Space Plus club in Pudong. At these various places, my eyes were set on any source of light I could find.

In the midst of planning, Kyle and I became immediately infatuated with bokeh, the soft, out-of-focus quality a lens renders in a photographic image, or in this case, video. Aside from light, I was also on the hunt for any location where I could capture bokeh. I collected a handful of videos I admired, but after actually trying all of them in our patch, Kyle and I decided against our original plan of using multiple videos for backgrounds and scenes. Instead, we would use just one video throughout the entire performance, in a number of different ways, in order to maintain a steady FPS value and keep Max crashes to a minimum.

Patch (Video Base):

I was in charge of creating the effects for the base video with the help of Vizzie modules. I loaded the 3-minute video using the recommended HAP workflow and connected it to jit.movie. From there, I started experimenting with the Vizzie modules we had come to know, focusing on the effect and transformation categories.

The modules I chose for this project, in order, were the PIXL8R, INTERPOL8R, SKETCHR, KALEIDR, and HUSALIR. Aside from the KALEIDR and HUSALIR, I didn't particularly like the output of the INTERPOL8R, SKETCHR, or PIXL8R on their own; but as I played around, stacked them on top of one another, and learned the effects of toggling the invert/regular settings, I came to appreciate how many different visuals I could achieve from just five modules. Kyle and I felt that the video moved too quickly at times for our liking, so we manipulated the speed of the overall video by attaching a "rate $1" message to the corresponding jit.movie and using float messages to toggle the speed between 0.2 and 1. This gave us the freedom to move back and forth between a set slow and fast speed on the MIDI board.

Patch (3D Head):

Using our skills from the 3D model lesson this semester, Kyle and I thought it would be memorable and smart to incorporate a 3D head model in our performance, considering how our topic revolves around the human mind. The two of us looked at several models online before settling on an antique head, which we thought would fit perfectly with the nightmarish style of our second act. Kyle implemented the 3D object in the patch using a jit.gl.model attached to the context called myworld.

From there, we had the idea of manipulating the head's movement and placement during our performance, so we exposed the object's three major attributes in the patch: position, rotatexyz, and scale, through a separate jit.gl.multiple command. Kyle found a fixed position for the head right in the middle of the screen, and I mapped the speed and scale attributes of the head's various axes onto the MIDI board for our performance.

Furthermore, Kyle and I agreed that the grey-scale head was too plain for the visuals we had imagined, and Kyle solved the problem with a texture. With this texture loaded, we could manipulate the colors shifting across the face using our trusty Vizzie modules: a HUSALIR, plus a TWIDDLR controller, to manage the intensity, hue, saturation, and lightness of our rainbow texture in realtime.

Patch (Tunnel Background):

As mentioned earlier, after much discussion, Kyle and I thought it would get too repetitive and dull to just use a different video background and manipulate it for the third and final act of our performance. Kyle then played around even more with Jitter and was fascinated by the torus shape he was able to employ in the patch. With the torus, and after much arduous labor, Kyle showed me a tunnel visual he created by expanding and zooming in on the inside of the torus, or as we liked to call it, the donut.

I immediately liked the idea of using this tunnel-like imagery for how seamlessly it fit the introductory audio of that act, in addition to its originality. Kyle managed to find the perfect settings for the 'tunnel,' and after multiple crashes of the Max patch, we decided it would be best to set a loadbang, along with multiple messages, to automatically send the desired values to the tunnel's attributes when the patch opens.

Patch (Video Mixing):

We then mixed the video components and the 3D object together through a LUMAKEYR, attaching the 3D model as Video 1 and the other videos (base and tunnel) as Video 2. Kyle and I wanted to move seamlessly from the base video to the tunnel, which we achieved with an xfade Vizzie module. This component let me fade between the videos with just the twist of a knob on our MIDI board. Later on, we realized this xfade module was causing our patch to crash frequently; we sought assistance and were told to use a different form of the xfade component, which greatly helped our crashing problem. We replaced the connection between the MIDI board and the xfade module with a flonum component instead, which worked just as effortlessly as the previous option.

Patch (Audio Implementation):

We first laid out the 12 audio tracks carefully selected by Kyle in a straight line in the patch, with the base layer placed underneath them. We connected each of the 12 minor tracks to separate AUDIO2VIZZIE modules to convert the audio into Vizzie-readable values, then ran each through a SMOOTHR before attaching them to the Vizzie modules we had created in earlier sections of the project. Finally, each of the 13 audio tracks, including the base layer, fed into two large pan mixers, which led through a stereo output sending the integrated sound to our external speakers.
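
For intuition, the SMOOTHR stage behaves roughly like a one-pole low-pass filter on the control values; here is a minimal sketch of that idea in Max's [js] object, with an example coefficient rather than our actual setting:

    // One-pole smoothing of an incoming control value.
    var smoothed = 0;
    var coeff = 0.9; // example coefficient: closer to 1 = smoother, slower

    function msg_float(v) {
        smoothed = coeff * smoothed + (1 - coeff) * v;
        outlet(0, smoothed);
    }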

The process of connecting our pre-polished audio files with our Vizzie elements in the patch was quite possibly one of the most confusing and hectic processes of our entire project. 

I was put in charge of the audio-to-video connections. For this process, I used trial and error: I would listen to an audio track, understand its theme and its position in our three-act performance, and attach it to different Vizzie modules until I found the effect I admired most and thought fit the audio best.

The first few tracks went quite smoothly, as all I had to do was attach each one to the Vizzie effect I liked most. It got more and more difficult, however, as I realized just how hectic the patch was becoming while I dragged these connections from the upper-right corner to the bottom-left. I could have avoided this by cleaning up the patch and moving related modules closer together.

Another difficulty I struggled with: as I went through each track and its attachments, I realized that certain modules needed to be switched on or off during particular parts of the performance. I decided it would be best to replay the entire audio with the connections I had already made, and just as I expected, many of the tracks did not appear visually as planned, because certain modules had to be toggled on or off along with the track being played. Confusing, I know.

So I created 1/0 messages for all of the Vizzie modules, with buttons to go along with them, and attached each audio cue to its module through a button, so that a bang would trigger either the 1 or the 0 message box feeding the module. This, as you may imagine, only complicated the patch further. I continued like this through the entire piece until I was finally satisfied with how the whole 10-minute track sounded and looked.
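
That pile of 1/0 message boxes is really just a cue list: at a given time, send a 1 or a 0 to a given module. A minimal Python sketch of the same idea; the module names and timestamps are placeholders, not our actual cues.

```python
# (timestamp in seconds, module name, 1 = on / 0 = off).  The names and
# times here are placeholders; the real cues lived in our message boxes.
CUES = [
    (0.0,  "HUSALIR", 1),
    (12.5, "TWIDDLR", 1),
    (45.0, "HUSALIR", 0),
    (45.0, "tunnel",  1),
]

def cues_due(cues, prev_t, now_t):
    """Every cue whose timestamp falls in (prev_t, now_t]."""
    return [c for c in cues if prev_t < c[0] <= now_t]

state = {}
clock = -1.0                              # start just before 0 so 0.0s cues fire
for step in (10.0, 30.0, 50.0):           # pretend the stopwatch advanced
    for t, module, on in cues_due(CUES, clock, step):
        state[module] = on
        print(f"{t:6.1f}s  {module} -> {'on' if on else 'off'}")
    clock = step
```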

Kyle and I did, however, manage to give ourselves a clearer perspective by adding only the essentials to our Presentation Mode. This helped us in our performance, but I still think that we could have tidied up our main patch as well. 

MIDI Board:

From there, I made some difficult decisions about which settings in the patch I wanted to manipulate in realtime through the MIDI board. I wired everything up, connecting all of the base video effects to the sliders along the bottom of the board, and splitting the remaining sections of the board between the movement of the 3D head, certain xfaders, and so on. I then took the professor’s advice, stuck masking tape onto the MIDI board, and used a felt pen to write labels that would help me quickly identify each slider and knob if I blanked during the performance.
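
The whole mapping comes down to one rescale: each knob or slider sends a 7-bit MIDI control-change value (0–127), which gets stretched into whatever range the target parameter expects. A hedged Python sketch of that mapping; the CC numbers and parameter ranges below are invented for illustration.

```python
# CC number -> (parameter name, low, high); these assignments are invented,
# but the 0-127 -> min..max rescale is the whole trick.
MIDI_MAP = {
    1: ("base_brightness",  0.0,   1.0),
    2: ("xfade",            0.0,   1.0),   # base video <-> tunnel
    3: ("head_rotation", -180.0, 180.0),
}

def on_control_change(cc, value, params):
    """Rescale a 7-bit MIDI value into the parameter's own range."""
    if cc not in MIDI_MAP:
        return
    name, lo, hi = MIDI_MAP[cc]
    params[name] = lo + (value / 127.0) * (hi - lo)

params = {}
on_control_change(2, 64, params)   # knob at roughly its midpoint
on_control_change(3, 0, params)    # slider all the way down
print(params)                      # {'xfade': ~0.504, 'head_rotation': -180.0}
```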

Performance:

For our performance, Kyle and I wanted to keep the supplies and set-up as clean and minimal as possible. We had split our roles equally before the project: Kyle would be in charge of the computer, and therefore the audio, while I would be in charge of the MIDI board. These were the only two pieces of equipment we brought to the performance, aside from our much-needed USB-C adapter.

When practicing, Kyle and I put a lot of focus into scheduled actions, accounting for every single minute of the performance. As we practiced and changed a number of things in the patch and our arranged timetable of events, the two of us finalized a transcript of specific timestamps and their corresponding operations. By following this transcript on my phone, in concurrence with a stopwatch running on Kyle’s, we managed to pull off the performance just as we had practiced: in sync, in motion, and on schedule.

During the performance, things went quite smoothly; we started out well and ended on a good note. Throughout the 10 minutes, Kyle and I managed to follow the entire transcript while keeping an eye on the stopwatch. There were times, however, when we had to tap our phone screens to keep them awake. We could have avoided this by setting our displays never to sleep beforehand.

Aside from our phones, I think we performed well together. In the moment, and I think I can speak for Kyle as well, I was focused only on the stopwatch, the transcript, the monitor, and my MIDI board. I was too nervous to look at anything else: not Kyle, not the professor, not even the audience. Kyle and I worked together gracefully and pulled off the performance by constantly supporting each other through minor gestures, words of encouragement, and small thoughtful reminders.

There are, of course, always improvements to be made to any project or performance. A few times during our performance, I noticed that either Kyle or I acted too late or too early on an operation in our scheduled transcript. There were also times when I twisted a knob in the wrong direction, producing the opposite of the outcome we wanted on screen. I also think the head movements were sometimes too abrupt or too slow against the visuals and their audio counterpart; this could have been fixed by adjusting the rate at which the head moved. On the other hand, the labels taped onto the MIDI board helped a lot, especially since we were performing in the dark.

The problems I mentioned could have been avoided or minimized with more practice on our part: more practice with the transcript, more practice without stopping, and maybe even practice in the dark or with an audience. Nonetheless, despite these minor mistakes, Kyle and I managed to move past them without panicking or accidentally signaling to the audience that something had gone wrong.

Some of My Favourite Moments:

Conclusion:

Since the initial proposal for this final project, Kyle and I have made several changes to the presentation, production, and vision of the overall piece. While our core research and connection to realtime audiovisual performance did not change, our plan took a much more minimal path, as we had been advised during the proposal hearing. We were told to cut down the number of sections so we could truly focus on each one and get the best outcome from it, which is exactly what we did.

The creation process went smoothly in the beginning: Kyle worked on the audio compilation, which he thoroughly enjoys, while I got the chance to once again go out, explore Shanghai, and videotape whatever I found visually striking. Building the patch in Max may have been the most arduous portion of the entire project, but it was also one of the most stimulating. I was determined to fix every problem that arose and to use my knowledge and skills to the max. The performance at Elevator was the icing on the cake; I was super nervous before going up, but who wasn’t? I was lucky to have a hardworking partner like Kyle, and grateful that we had practiced enough to create a visually stimulating performance we were both satisfied with. There are, however, always improvements to be made, but for the most part I really enjoyed the entire journey, the entire “transcendent journey.”

I learned the importance of placing trust in your partner and of splitting roles so that each person can grow and take responsibility for their part of the greater whole. I discovered a deep interest in audiovisual performance and an appreciation for the arduous planning and work that goes into every VJing/audiovisual/live cinema performance.

I believe that together, Kyle and I succeeded in creating a three-act performance corresponding to the id, the ego, and the superego of Freudian theory. With three distinct sections like ours, I believe we were able to reflect the distinct attributes of the psychodynamic theory mentioned earlier.

I wish Kyle and I had taken the time to organize our main patch the way we organized our presentation mode. This is something we struggled with: due to time pressure and a lack of appreciation for its importance, we never kept the patch clean, and we kept getting confused and muddled by the chaotic imagery that was, and is, our patch. Kyle and I could also have benefited from more practice before the performance at Elevator; that way we would have been more comfortable with ourselves and felt more in sync and in tune with our music. During the performance we were somewhat stiff, which may also have been stage fright. Nonetheless, I am confident that more practice would have done our overall project far more good than harm.

Overall, I am very satisfied with the outcome Kyle and I created for this final project of Realtime Audiovisual Performances. Going in, I was hesitant and timid about the scale of the performance, but as I worked through everything step by step, alongside the schedule we had created, things didn’t seem as terrifying as they had in my head. It was a fun and stimulating process in which I learned even more about the planning behind, and impact of, realtime audiovisual performances.

Furthermore, I was glad to have produced a visually and aurally stimulating performance that realized a concept which had previously existed only as thoughts and words. In that respect I feel like something of a pioneer, which I think is an important attitude for anyone who wishes to become part of the realtime live audiovisual community.

Works Cited:

Carvalho, Ana. “Live Audiovisual Performance.” The Audiovisual Breakthrough, 2015, pp. 131–143.

Menotti, Gabriel. “Live Cinema.” The Audiovisual Breakthrough, edited by Cornelia Lund, 2015, pp. 83–108.

“Cosmic Consciousness.” Visual Music: Synesthesia in Art and Music Since 1900, Thames & Hudson, 2005, pp. 125–161.