Final Project for RAPS

Title
Kaspar

Project Description
The name of my project, Kaspar, comes from Kaspar Hauser, a mysterious German youth who claimed to have spent most of his life in a dark cell, completely isolated from society, and who therefore had acquired almost no knowledge of any language by the time he was discovered on a street at the age of sixteen. Although the myth of Kaspar remains unresolved and is often said to be a fraud, it has inspired numerous works of literature and theater, including Peter Handke’s play Kaspar, which directly inspired the concept behind my project. Handke’s play opens with the lonely presence of Kaspar himself and develops through his interaction with abstract human voices meant to instruct him in speaking like a normal social being. Throughout the play, Kaspar struggles painstakingly to pronounce words, grasp their meanings, and pair the sounds and meanings of language. As Kaspar finally learns to speak, he dies.

Handke’s play is a critique of language’s powerful constraint on human expression and of the conventional way of thinking that follows the inner logic of language. The concrete subject of his critique is post-war Germany, where the adverse impact of the war persisted in the form of the spoken German language itself: there was a lack of words to appropriately address people and events, or to express the post-war trauma. The inadequacy of language is something I relate to deeply, a subject that has haunted most of my time in college — while language is broadly regarded as the most straightforward and efficient medium for communication, there are thoughts and emotions one cannot express through spoken words; at the same time, the seeming omnipotence of language may very well be a constraint on thought.

After all, language is nothing but the pairing of sounds and meanings, or signs and meanings, yet it is respected as a useful tool that bonds almost all of us together. Most of the people I know, including myself, are much too used to treating language as something already developed or assigned, and therefore ignore the true nature of written and spoken words. The most important knowledge I acquired from RAPS is the realization that sounds and visuals are themselves effective and affective languages. I felt inspired to make my final project a tribute to this realization, and later came up with the idea of deconstructing and alienating familiar language through software manipulations.

Perspective and Context
Live audiovisual performances are so rich in possibilities that performers can focus either on generating an immersive sensory experience or on creating a narrative through storytelling. While the ideas of live cinema, VJing, and live audiovisual performance, as discussed by Gabriel Menotti, Eva Fischer, and Ana Carvalho, all focus on possible approaches to the representation of visual materials, I see my project as an experiment in which the theory of visual representation is applied to the performance of sounds. My live manipulation of the visuals during the performance responds to the practice of VJing as an improvisation responsive to music, while the arrangement and deconstruction of sounds is closer to a narration than to the abstract experience of live cinema. Moreover, my project incorporated an improvisation with sounds comparable to a jam session with visual materials. The implementation is all about the subtle balance between representing a prepared idea using pre-existing materials and creating new meanings by adding live manipulation to the performance.

Development & Technical Implementation
The execution of the project started with me working in a DAW to create the intro track. Rather than structured music, I wanted the intro to be an assemblage of different sounds, including the sound of breaking glass (implying how I hear the deconstruction of things in a subtle manner…), scattered drum beats, and a ground layer. Further into the intro are distorted sound fragments taken from a recording of my own reading, which is also played later in the performance — I pitched, stretched, echoed, or reversed three samples of me reading the words “speak”, “talk”, and “walk” to create an effect of alienation and an eerie metaphor for machines learning to speak. The intro ends with a dramatic pitch-up of the ground layer and a final drop of percussion. Since the intro is pre-recorded, I ran it directly through the mixer in Max without any extra sound effects; the same goes for the background of my reading section and the reversed orchestra sample. Besides the stereo output connected after the mixer module — which held most of the sounds used in the project: the pre-recorded materials and the SAMPLR modules I used for recording, playback, chopping, and pitching — I had one extra output for the microphone input, running through flanger, chorus, and reverb effects, to keep the patch simpler to manipulate. I also attached effect modules after two of the samplers, which I only triggered toward the end of the performance. I made the playback speed adjustable for one of the pre-recorded materials so that I could switch between my own reading voice and a pitched-down, machine-like version, or directly show the alteration of the sounds to the listeners.
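
To give a concrete, if simplified, sense of the kind of sample treatments described above, here is a minimal Python sketch, entirely outside of Max, of two of them: reversing a recording and pitching it down by changing its playback speed. The file names and the amount of pitch shift are hypothetical placeholders, not the actual project files.

```python
import numpy as np
import soundfile as sf

# Hypothetical recording of me reading the word "speak".
audio, sr = sf.read("speak.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)          # mix down to mono

# Reversal: simply play the samples backwards.
reversed_audio = audio[::-1]

# Pitch-down by resampling: stretching the sample to more points and playing
# it back at the original rate lowers the pitch (here by roughly 7 semitones)
# and slows it down, giving the "machine-like" version mentioned above.
ratio = 2 ** (-7 / 12)
new_len = int(len(audio) / ratio)
idx = np.linspace(0, len(audio) - 1, new_len)
pitched_down = np.interp(idx, np.arange(len(audio)), audio)

sf.write("speak_reversed.wav", reversed_audio, sr)
sf.write("speak_machine.wav", pitched_down, sr)
```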

In generating the visuals, I planned three main chapters, each using different effect modules in Vizzie. The first section is an abstract movement of glass-like shapes, which I created by running a picture through a chain of modules. Aligned with the intro track, I gradually layered, darkened, and blurred the visuals by twisting different values on the modules live on a MIDI controller, until the image temporarily disappeared as I started to read. In the second stage, I layered the previous visuals with a movement of stripes created from the combination of two easemappers, as a response to the floating orchestral sound. In the final stage, I ran an image of the word “speak” through a ZAMPLR and a BRCOSR to play with the pixelation of the image and make it flicker in response to the tiny sound loops from the live recording.
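
As a rough illustration of the pixelation idea in this last stage (the actual effect comes from the Vizzie modules, not from code like this), the sketch below pixelates a grayscale image more or less heavily depending on a control value, the way the image of the word “speak” flickered in response to the sound loops. The image array and the amplitude values are stand-ins, not project assets.

```python
import numpy as np

def pixelate(img: np.ndarray, amount: float) -> np.ndarray:
    """Pixelate a 2D grayscale image more heavily as `amount` goes from 0.0 to 1.0."""
    block = max(1, int(1 + amount * 31))                   # block size between 1 and 32 px
    small = img[::block, ::block]                          # keep one pixel per block
    big = np.repeat(np.repeat(small, block, axis=0), block, axis=1)
    return big[: img.shape[0], : img.shape[1]]             # trim back to the original size

# A stand-in for an image of the word "speak" loaded as a grayscale array.
word = np.random.rand(240, 320)

# Each frame's pixelation amount could be driven by the loudness of the loops.
flicker = [pixelate(word, a) for a in (0.0, 0.8, 0.2, 0.9)]
```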

Prepared “nonsense” on the status of language: To recognize, in order to compromise / To receive, and repeat, / So that I could release, as a relief; / But I want to be free. / Hammer a sentence, sentence a word, / Reverse an orchestra, / Expression has no extra. / Be free, speak or not speak. 

Links to GitHub:

https://gist.github.com/HoiyanGuo/d9f216e62a7c32a68e223d295fea0489

https://gist.github.com/HoiyanGuo/9c60c98b1d9e5db1d01ba49422911896

Performance
The final performance was an incredible source of nightmares for a whole week before it eventually took place. Thinking back on the anxiety and nerves, I am glad that I did go onstage and finish the performance without escaping from it! I was really nervous during the performance — when controlling the visual materials during the intro, I realized that one of the control knobs on the MIDI controller wasn’t properly assigned; when reading through the microphone, I was unsure how it sounded to the audience and couldn’t quite handle their confused facial expressions in my real-time imagination; another desperate moment struck me heavily when I recorded myself and saw no waveform in the recording’s visualization — luckily, I actually did have something recorded and could smoothly continue my performance to the end.

The two live performances for the class (the other one was in the auditorium) taught me how differently sounds can strike when played in different spaces and over different sound systems. I feel that I should have taken more time exposing myself to different sound systems and sound environments as a warm-up, especially since live reading and live recording were such important components of my project. Moreover, I never timed myself during the rehearsals and ended up having the shortest performance, with a limited amount of material. I will definitely pay more attention to a performance’s timing and pacing in potential future projects. I also find it necessary to design an ending for any performance, since the ending of my project was not obvious during the performance.

Conclusion
The process of accomplishing the project was a valuable experience in forming a creative relationship with software, or technology in general. While executing ideas with software, I had to experiment with it firsthand to explore different possibilities. Nonetheless, I regret that I only worked with what I already knew rather than using the project as an opportunity to learn new things and generate fresh visual experiences. I am aware that there is an imbalance between the visual and the audio components, since I actually put much more focus on the sounds. I didn’t make the visual component necessary, or even crucial, to the overall representation of my idea — it was there for the sake of the assignment’s requirements and didn’t add to the essential meaning of the whole project. But the experience helped me understand the difference between an audiovisual work and an audio or visual work merely accompanied by the other element.

I think the final outcome did successfully suggest an alienation of spoken language. However, while the deconstruction was achieved, I now see possibilities in further transforming the deconstructed pieces of sound, for example into generated music. I reversed an orchestral sample with the intention of implying a reversal of pre-existing order and the potential generation of new knowledge and aesthetics; however, I achieved this with an extra sound sample rather than by transforming the recorded sounds themselves. By cutting the sounds into tiny pieces and looping them wildly, I merely suggested a possibility rather than showing the formation of a new language. Still, the experience of working on the final project opened a new world to me — the infinite possibilities of the human voice combined with the alterations made possible by technology. Nothing is more natural and at the same time more complex than the human voice, and I believe that is where the secret to a new musical language lies. I feel very passionate about incorporating more alienated voices into my personal experiments with music. Lastly, I’d like to thank Professor Parren and my peers for this semester’s exciting journey and for putting up with my weirdness and overall inexperience with IMA — I can’t wait for more wonderful works from them and future encounters!

Reading Response: Live Cinema

According to Eva Fischer’s interpretation of the practice, VJing means the live manipulation of prepared footage, usually responsive to music selected and manipulated by DJs at a shared venue (106, 111, 112). VJing is mainly based on improvised manipulation of abstract visual content; however, the responsive and cooperative nature of the act usually leads to its being categorized as a secondary art practice, which causes VJ artists to turn away from being identified as merely VJs (113). My understanding of live cinema from Gabriel Menotti’s “Live Cinema” is based on the comparison between live cinema and VJing, as well as between live cinema and traditional cinematographic conventions. According to Mia Makela, a live cinema practitioner, live cinema differs from VJing in its degree of artistry as well as in the artist’s agency and overall control of the creative outcome (94). The performer is responsible for every aspect of the outcome rather than working in a secondary role, and is free from the need to prioritize either the other creators in the scene (DJ, lighting engineer, set producer) or the audience (95). Compared to conventional cinematographic approaches, live cinema performers have more freedom to choose between traditional linear storytelling and a more abstract and intuitive narration (87). In her article, Ana Carvalho suggests the term live audiovisual performance as a generic umbrella that covers all manner of audiovisual performative expressions, including VJing, live cinema, and others (134, 135). Its major characteristics are the liveness of the practice, the interconnection between the visual and audio experience, and its intermediality (131, 133, 139).

The Audiovisual Breakthrough, Fluctuating Images, http://www.ephemeral-expanded.net/audiovisualbreakthrough/. Accessed 12 Nov. 2019.

Assignment — Granular Synthesis

A main part of our class on granular synthesis was about running three fragmented clips from one video source in parallel into a common output and mixing them in interesting ways. The assignment focused on experimenting with the sounds that result from fragmenting the chosen video. In accomplishing the assignment, I added effects and filters to each of the three individual audio outputs and added a fourth audio input by inserting a sequencer, which I planned to use as a base layer.

In the first stage, I played with the twisters controlling the selected fragments and their lengths in order to look for a possible rhythmic line for further development. I chose a short sample of the broadcaster’s speaking voice for the first video input, a sample of repetitive noise for the second input, and for the third input I went back to the broadcaster’s voice but slowed down the speed to create an extended voice effect. After trying different modules, I decided on a few for the submitted patch — the RETUNER module for the first broadcaster sample, to make him sound digital and robotic; the REVERB 2 module and a low-pass filter for the second, abstract sound sample, to make it sound deep in space and clearer, without the disturbance of noise; and the FLANGER module to make the extended broadcaster voice sound more like a soundscape than a recognizable human voice. I only created a simple ground layer, because I had no clue how to incorporate a relatively complicated sound into the already fast-paced, busy, and overwhelming layer of sounds. I used a SEQUENCER, a KARPLUS oscillator, and a comb FILTER to make a typical synthesized sound that repeats a single note at a fixed interval, enriching the pre-existing short samples. I found myself constantly looking for sound effects that drastically alienate the familiar broadcaster voice from the original footage and transform it into something heavy and eerie; that is probably what makes granular synthesis so exciting — completely altering something by approaching the material at a different scale.
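
The core idea behind that “different scale” is easy to sketch outside of Max. Below is a bare-bones Python illustration, under my own simplifying assumptions, of granular playback: pick short windowed grains from a source at random positions and overlap-add them into a new stream. The source file name, grain size, and grain count are hypothetical and unrelated to the actual patch.

```python
import numpy as np
import soundfile as sf

# Hypothetical audio extracted from the chosen broadcast video.
audio, sr = sf.read("broadcast.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)

grain_len = int(0.05 * sr)              # 50 ms grains
hop = grain_len // 2                    # grains overlap by half on the output
window = np.hanning(grain_len)          # fade each grain in and out to avoid clicks

rng = np.random.default_rng(0)
n_grains = 400
out = np.zeros(hop * n_grains + grain_len)

for i in range(n_grains):
    start = rng.integers(0, len(audio) - grain_len)     # random position in the source
    grain = audio[start:start + grain_len] * window
    pos = i * hop
    out[pos:pos + grain_len] += grain                    # overlap-add into the output

sf.write("granular_out.wav", out / np.max(np.abs(out)), sr)
```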

Link to GitHub: https://gist.github.com/HoiyanGuo/a52409458567e4bfde65626580504aec

Response to VJ Culture / Hoiyan Guo

The article “VJing” by Eva Fischer and the documentary Video Out, produced by Meredith Finkelstein and Paul Vlachos, both point to a significant problem with VJing — its marginalized position as an art practice. According to Fischer, the strong attachment VJing shares with DJing is undeniable, as “it has developed in deep entwinement with DJing” (112). VJs generate visuals that respond to the rhythm, arrangement, and overall tone of the music, contributing to a consistent and immersive club environment in which sounds and visuals form one unity. However, the responsive nature of VJing causes VJs’ work to be treated as mere background, as wallpaper, rather than as artistic production in its own right.

I think there are actually more differences between VJing and DJing than there are shared similarities — while both practices are based on real-time manipulation and processing of content, improvisation is more deeply rooted in VJing, where one processes materials in a spontaneous, creative, and unique manner that goes beyond merely screening prepared footage. Although DJing also deals with real-time processing of musical materials, the selection of those materials based on personal taste is DJing’s most significant characteristic. While a VJ’s work takes the form of total abstraction and conceals the apparent connection to the analog or digital objects used to generate the visual content, a conventional DJ edits or filters the selected materials in order to bring them back to the foreground, but in a different context. In other words, I think the relationship between pre-existing materials and live manipulation is different in VJing and DJing.

The practices of VJing and DJing have reached beyond the club venue and now appear in places traditionally reserved for what’s known as “high art” — galleries, museums, theater houses… Although the two have different historical roots and development, the club setting is still their most representative conjunction. Clubbing in theory and clubbing in practice are completely different things. I think the most important side note is that the experience of a club is never a sheer appreciation of either the music or what’s offered visually. It’s the consumption of one integral atmosphere built from a little bit of everything — music, screens, lighting, people, moods… It’s a social activity but a personal one, and in the end it’s about the visitors themselves. Personally, I do agree with the distinction between VJs and visual artists. But at the end of the day, the question of the artistic status of VJs is in fact a question of what defines art and of how the relationship between an artist’s agency and the spectators’ reception should be balanced. And it’s an impossible question!

Fischer, Eva. “VJing.” The Audiovisual Breakthrough, Fluctuating Images, http://www.ephemeral-expanded.net/audiovisualbreakthrough/. Accessed 12 Nov. 2019.

Project 2 / Hoiyan Guo

Title:   To See, To Sea

Project Abstract:   A live audiovisual performance designed for a taste of life, oddness, and warmth.

Project Description:   Generating visual content through a combination of analog and digital manipulations was the key guiding principle during the creation of the project. Keeping Thomas Wilfred’s lumia in mind as the main inspiration and reference, we intended to create something organic and abstract. The preparation process started as we contemplated intensively which objects had the most potential to yield interesting visual content. After a few experiments with light sources, lenses, liquids, and leaves, we finally decided to base the visuals on interaction with a bowl of clear water, to which acrylics would be added during the performance. My group mate Phyllis contributed a lot to the arrangement and development of movements, coming up with the much-appreciated idea of using the performance as a narration of something emerging, growing, coming alive, and eventually disappearing. Overall, the performance aims to convey a sense of fluidity and vitality through an experience that can be found both odd and absorbing.

Perspective and Context: As mentioned above in the project description, we hoped the outcome would be a creative response to Wilfred’s lumia. The historical record of Wilfred’s lectures, where he publicly guided his spectators toward artistic and spiritual interpretations, helps us glimpse his intention as an artist. In the opening remarks of his 1933 lecture at the Grand Central Palace in midtown Manhattan, he shared his hope for his art in perhaps the most poetic way possible – to “transform this hall into the cabin of a fantastic dream ship capable of traveling through space with the speed of thought” (Orgeman 26). Wilfred’s vision for lumia was a transformative experience that invited spectators on a journey from realistic, physical space to an unfamiliar, outer-space dimension where they could contemplate the genesis of life from an alienated perspective while still feeling an innate and primitive attachment to the medium of light. In Keely Orgeman’s words, it is an experience that feels “as strange as floating through space yet as familiar as sunlight touching the skin” (27). The idea of something strange and alienated yet familiar inspired us in creating our own vision of lumia. Visually, the projection of liquids contained in a transparent bowl results in the plainest shape; however, through manipulating the effects digitally, the shape, color, and texture gradually become distinctly unfamiliar. Corresponding to the alienation effect in the visual presentation, we also redesigned the sound of water drops, which results in an even more digital impression. In short, we hope our performance offers a dream-ship-like experience like the one initially created by Wilfred, inviting our audience to forget their real situation and feel as if they had the perspective of a microscope, traveling through both time and space.

Orgeman, Keely, Thomas Wilfred, James Turrell, and Maibritt Borgen. Lumia: Thomas Wilfred and the Art of Light. 2017. Print.

Development and Technical Implementation: In summary, the creation process can be roughly divided into these steps – deciding which materials to use, arranging movements, creating a Max patch of Vizzie modules, creating soundtracks according to the movement arrangement and visual effects, combining the soundtracks with a new patch of BEAP modules, and coordinating the visual and audio content in rehearsals. All of the stages influence one another. In terms of the division of work, while all of us cooperated within each stage, each of us had a different focus – Phyllis on live manipulation of objects, Amber on digital control of the visual content, and me on digital control of the soundtracks.

Before creating a Max patch for sounds, I created two soundtracks in a DAW as the ground layer. The first piece is a minimal, ambient composition designed for the emergence and awakening stage of the performance; the second piece integrates an upbeat rhythmic part that grows stronger along with the changing visual image. Since fluidity is a main subject of the project, I also use a water-drop sample with the plan to trigger it during the live performance. To explain how the patch works – during the first part of the performance, the first soundtrack runs through a low-pass filter and a pan mixer, which I control with a MIDI controller during the performance in order to create a hovering soundscape. In the middle stage of the performance, I start playing the water-drop sample on a MIDI keyboard while the first soundtrack is still playing. As I trigger the water-drop sounds on different keys, Phyllis drops water into the bowl at the same time, connecting the visual and audio experience into one. As I gradually fade out the first piece of music, I start playing the second soundtrack. Amber makes sure that the visual effects become more apparent and layered as each stage of the music progresses. Approaching the end of the performance, I repeatedly play the water-drop sample while triggering more apparent effects, including chorus and delay.
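
For readers who want a concrete picture of those first two treatments, here is a small Python sketch, under my own assumptions rather than the actual BEAP patch, of a simple one-pole low-pass filter and an equal-power pan that is slowly swept to suggest the hovering movement. The file name, smoothing amount, and sweep rate are placeholders.

```python
import numpy as np
import soundfile as sf

# Hypothetical bounce of the first soundtrack.
audio, sr = sf.read("soundtrack_1.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)                  # filter and pan a mono signal

# One-pole low-pass filter: each output sample moves a small step toward the
# input; a smaller `alpha` gives a darker, more muffled sound.
alpha = 0.05
lowpassed = np.empty_like(audio)
prev = 0.0
for i, x in enumerate(audio):
    prev += alpha * (x - prev)
    lowpassed[i] = prev

# Equal-power pan swept left-right by a slow sine LFO (about 0.1 Hz).
t = np.arange(len(lowpassed)) / sr
pan = 0.5 + 0.5 * np.sin(2 * np.pi * 0.1 * t)   # 0 = hard left, 1 = hard right
left = lowpassed * np.cos(pan * np.pi / 2)
right = lowpassed * np.sin(pan * np.pi / 2)

sf.write("soundtrack_1_hover.wav", np.stack([left, right], axis=1), sr)
```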

Max patch for sounds: https://gist.github.com/HoiyanGuo/b56d4fd6c1e00e94277d079f0562e113Max patch for visual: https://gist.github.com/PhyllisFei/4ec4c3338de1da2f67211cbe38a5f5e0

Performance: In general, I think the in-class performance went well, with a few unwanted surprises – since most of our rehearsals happened in an open classroom or the lab, I needed some time to adapt to the performance environment, including getting comfortable controlling the laptop and MIDI keyboard in the dark and getting used to how the music sounded in the auditorium. Since the music was recorded in advance, what caused us the most panic was the digital visual effects. In the middle of the performance there was a sudden blackout and a few unplanned effects, since the MIDI controllers were all quite sensitive. Moreover, during the first few minutes of the performance, although I was constantly panning the soundtrack hoping to create a hovering effect, the panning was not easily heard. Other than that, I found it quite a fun experience performing with my friends!

Conclusion: Accomplishing the project was a valuable experience for me. I appreciate the combination of digital and non-digital approaches to generating visual content. The project helped me understand Max as a practical digital tool for polishing or altering materials that come from the physical world, rather than having everything created digitally. Moreover, with the help of my group mates, I feel that I have gained a stronger intuition for the coordination between images and sounds. The project inspires me to care about materials, texture, and their interaction with light, and teaches me to use technology as a new lens for looking at the world. In conclusion, I feel satisfied with the project outcome – it was imperfect indeed, but full of human touch.