RAPS Final Project Documentation — Thea (Sang)

Title
The Station of the Metro

Project description
The Station of the Metro is an attempt to re-present daily experience through real-time audiovisuals. It is based on video clips shot in a real metro station in Shanghai, integrated through a new, digital mode of presentation. In this way, traditional photography takes on new meanings and aesthetic value.

Our inspiration comes from Ezra Pound’s poem “In a Station of the Metro,” from which our project borrows its name. The original poem reads: “The apparition of these faces in the crowd; / Petals on a wet, black bough.” As a quintessential Imagist text, the poem treats its subject entirely through visual juxtaposition, so it is well suited to being interpreted through an audiovisual performance. Basically, we want to emphasize the loneliness of a single person within a noisy crowd and reveal the coldness behind a boisterous society. The overall atmosphere of our project is dark, cold, and depressive.

Perspective and Context

Our project The Station of the Metro belongs to the category of live cinema. According to Chris Allen, a member of The Light Surgeons, live cinema is a “deconstructed, exploded kind of filmmaking that involves narrative and storytelling” (91). Unlike VJing, which strongly emphasizes interaction between presenter and audience, live cinema presents a more personal view and artistic concept during the performance. Raindrops #7 and Strata #2, which we saw in class, were my favorites. I was surprised and excited to see how ordinary scenes from daily life could achieve such aesthetic value. Traditional video combined with digital, technical forms demonstrates strong vitality.

Quayola, “Strata #2,” 2009

What’s more, we were also inspired by glitch art. The sudden distortions in the video and the sudden bursts of white noise in the audio not only stay in harmony with the piece but also exhibit features of glitch art, a practice that emerged from digital art and continues to serve it.

Development & Technical Implementation
The development of our project can be divided into two stages: materials collection and technical implementation.
We started the first stage early, as we did not yet know what kinds of video we would need. On the one hand, we wanted to recreate the atmosphere of Ezra Pound’s “In a Station of the Metro”; on the other, we hoped to integrate our daily experience of a real station. Luckily, on our first shoot at Century Avenue station, we found a girl squatting in a corner while the crowd rushed past her without a glance. The quiet girl stood in sharp relief against the noisy crowd and conveyed a strong sense of loneliness. We were deeply impressed by this scene and found it was not the only one. Thus, we decided to emphasize this contrast between loneliness and noise throughout our project.

We found a lonely girl and a still boy in the station

I did most of the work in the collection stage, gathering both video and audio. Drawing on my previous experience in photography, I used time-lapse photography and turned thousands of photos into a video in which the crowds flow like water around a still person. I selected different locations to shoot, including the subway entrance, the platform, and the ticket gates, and I deliberately chose the morning and evening rush hours to achieve the strongest effect. Additionally, I filmed rain-streaked car windows that refracted stunningly beautiful lights; we planned to blend this footage with the station video to create an obscure, romantic, and idiosyncratic atmosphere.

I took time-lapse photos at the station entrance

I made a time-lapse video
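The arithmetic behind turning thousands of stills into a time-lapse can be sketched as follows; the photo count, capture interval, and playback frame rate used here are illustrative, not the actual shoot settings:

```python
def timelapse_stats(num_photos, capture_interval_s, playback_fps=24):
    """Summarize a time-lapse: real time covered, playback length, speed-up."""
    real_seconds = num_photos * capture_interval_s
    playback_seconds = num_photos / playback_fps
    speedup = real_seconds / playback_seconds  # = capture_interval_s * playback_fps
    return real_seconds, playback_seconds, speedup

# e.g. 2,880 photos taken every 2 s play back as a 2-minute clip at 48x speed:
real_s, play_s, speed = timelapse_stats(2880, 2, 24)  # (5760, 120.0, 48.0)
```

Dividing the photo count by the playback frame rate gives the clip length, which is why rush-hour crowds compress into a few seconds of flowing motion.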

In the early technical stage, Katie edited, spliced, cut, and lightly post-processed the raw video material in Premiere Pro, while I made a piece of background music in GarageBand. I first wrote a theme on the piano and then varied its volume, pitch, timbre, and rhythm. On top of the looped theme, I added different notes, including a bass line with long sustain and changing volume as well as silvery high pitches. Overall, the background music has no strong beat; it keeps a steady rhythm and conveys a sense of darkness and repression.

I made the background music for our project

In the later technical stage, we brought our background video and music into Max. For the video, Katie and I explored nearly every available effect and found several that suited our footage. We also spent a lot of time creating and adjusting the 3D models in our project and finding effects for them. We created three wonderful 3D models that fitted the project perfectly; however, since it was hard for us to switch precisely from one model to another, we regretfully kept one and gave up the other two. What’s more, to keep the 3D model visually consistent with the background video, we adjusted its size, lighting, and texture, and we were satisfied with the result. For the audio effects, I wanted a tight, dynamic relationship with the video. On the one hand, I changed the SIZE parameter of the audio and added white noise whenever the video shifted or certain video effects occurred; on the other hand, I added a sound like jewelry clinking whenever the 3D model appeared. Through this harmony between audio and video, we wanted to give our audience a sense of synesthesia.

3D model 1

 

3D model 2

3D model 3

audio patch
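The audio trigger rules described above, white noise on video shifts and effects and a chime when the 3D model appears, can be sketched outside Max roughly like this; the event and action names are hypothetical stand-ins for the actual patch:

```python
import random

# Hypothetical mapping from video events to audio actions.
AUDIO_RULES = {
    "video_shift":   "white_noise_burst",
    "video_effect":  "white_noise_burst",
    "model_appears": "chime",
}

def handle_event(event):
    """Return the audio action for a video event, or None if none is mapped."""
    return AUDIO_RULES.get(event)

def white_noise_burst(n_samples=8, seed=None):
    """Generate a short burst of white-noise samples in [-1, 1]."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]
```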

After these two stages, we worked out how to run everything in the right order. I wrote a basic script consisting of three timelines. First, I marked the important time points in the background video, such as video shifts and superimpositions. Then, according to this timeline, we jointly decided the time points of the different effects, including DELAY, ROTATE, and COLOR, as well as the moment the 3D model appears. Finally, I planned the audio effects based on the first two lines. Following this basic script, Katie and I rehearsed many times, constantly adding new elements and adjusting old ones.
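The three timelines above can be modeled as lists of timed cues merged into a single run sheet; every cue name and time below is illustrative, not our actual script:

```python
# One list of (seconds, channel, action) cues per timeline.
video_cues  = [(0, "video", "opening time-lapse"), (90, "video", "shift to platform")]
effect_cues = [(30, "effect", "DELAY on"), (95, "effect", "ROTATE + COLOR")]
audio_cues  = [(30, "audio", "white noise in"), (95, "audio", "chime for 3D model")]

def merge_script(*timelines):
    """Merge several cue lists into one script, ordered by time."""
    return sorted(cue for line in timelines for cue in line)

script = merge_script(video_cues, effect_cues, audio_cues)
for t, channel, action in script:
    print(f"{t:>4}s  [{channel}]  {action}")
```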

Performance
Video clips:

The overall performance went well, better than any of our rehearsals. I appreciated that we could perform in an underground club, where the room is completely dark and the sound equipment is excellent. Because our project was presented first, while the atmosphere was still relatively quiet, the performance landed better than we expected. We wanted to convey the dark, heavy, depressive mood within the fast-paced, crowded station, which fit perfectly with the dark, quiet underground club. In our presentation, I mainly controlled the audio effects while Katie controlled the video effects. I think I did much better than before: I carefully followed the video and its effects to adjust the audio. Thanks to our many rehearsals, the audio and video effects stayed in harmony and followed our script, and Katie and I showed a great tacit understanding during the performance. Although some effects did not perfectly follow the rhythm, our quick adaptation smoothed over the small mistakes. Even more excitingly, just 30 minutes before we left for the club, we found a new video effect that suited our project well, and it worked very nicely during the presentation.

However, some things still went unexpectedly. Although we had rehearsed many times, we did not achieve all the effects we planned. First, the audio patch crashed three times before the performance; I rushed to rebuild it, so some detailed settings deviated from the original. Second, the audio output device on my computer did not work steadily, and I worried about it throughout the presentation. Third, as we were the first presenters, both Katie and I were a little nervous and made some mistakes.
I enjoyed being a presenter of this real-time audiovisual performance. Although we were not fully confident about all the effects and felt a lot of tension, we gained happiness and satisfaction when we successfully finished our performance and received positive feedback from the audience. It was an exciting experience.

Conclusion:
It was a valuable experience for me to develop an audiovisual project and present it in public. I was proud that, when we finished the performance, some people came over to praise us. Although the project took a lot of time, the satisfaction and happiness I gained far exceeded the struggle. Unlike other projects made mainly within Max itself, we shot the most ordinary scenes of daily life as background video, post-processed them in Max, and combined them with 3D models. Personally, I cherish the idea that art comes from life and that life can become art. Through our project and presentation, an ordinary station revealed aesthetic value and unusual aspects. That is the most meaningful thing to me.
Besides, our cooperation was successful. We not only shared the work equally but also played to our strengths. More importantly, both of us had great passion and curiosity for Max; we kept exploring different effects until the very last minute and felt endless excitement whenever we achieved something new.
Of course, nothing is ever perfect, and we still have much to improve. On the one hand, we could be more confident during the presentation; honestly, our nerves caused a lot of tiny mistakes. On the other hand, the structure of our project could be much better. After watching the video of our presentation, I found that we did not sufficiently emphasize the contrast between a still person and the noisy crowd. Although we tried, the audience may not have perceived it from the presentation alone. This is an important lesson: I need to pay attention to delivering information coherently and clearly so my audience can follow and understand.

References:

Menotti, Gabriel, “Live Cinema,” in The Audiovisual Breakthrough (Fluctuating Images, 2015), 85.

Raindrops #7

Quayola, “Strata #2,” 2009

Realtime Audiovisual Performance: The DARK Space (Phyllis)

Title: The DARK Space

Designer/Performer: Phyllis Fei

Description

The DARK Space is a 10-minute real-time audiovisual performance that explores fluidity, flexibility, interaction, and transformation in abstract shapes (dots and lines) as well as their relationship with minimal music. It emphasizes the ideas of “freedom within limits” and “order within chaos” through the dynamic motion of simple shapes. It is a minimal piece that offers an immersive experience, aiming to leave open space for the audience’s imagination and interpretation.

Perspective and Context

The DARK Space lies in the genre of live audiovisual performance, and the use of technology (Max) allows for real-time production. As described in The Audiovisual Breakthrough (Fluctuating Images, 2015), it is a time-based, media-based, and performative “contemporary artistic expression of live manipulated sound and [visual].”1 It heavily focuses on improvisation: content is presented and modified simultaneously as the knobs on the MIDI controller are being turned.

Unlike VJs, who follow popular trends to deliver a simple but effective audiovisual experience in more commercial settings (such as club environments), The DARK Space seeks more personal and artistic practices while inviting more conceptual feedback from the audience.2 The piece leans toward “free-flowing abstractions” and is thus more conceptual than the VJing context. Like the genre of live cinema, The DARK Space asks for appreciative audiences from whom there arises “in-the-moment awareness, responsiveness, and expression”.3

  1. Carvalho, Ana, “Live Audiovisual Performance,” in The Audiovisual Breakthrough (Fluctuating Images, 2015), 131.
  2. Menotti, Gabriel, “Live Cinema,” in The Audiovisual Breakthrough (Fluctuating Images, 2015), 93.
  3. Menotti, 91.

Development & Technical Implementation

I started the entire production by carrying out a storyboard/scene design of the performance (see Figure 0.1 and 0.2).

Figure 0.1. Scene Design #1
Figure 0.2. Scene Design #2

This is the link to my Github gist.

Visual Production

I decided to work with particle systems for this project but had no previous knowledge of jitter… so I started by following an online tutorial by Federico Foderaro. The logic behind it is very similar to that in Processing; it was the language I wasn’t familiar with. To ensure the quality of the visual performance, I moved the heavy graphics into the shader world and luckily found the GL3 example library in Max, which was extremely helpful for my project. I also tried an example patch about attractors but decided not to include it. So I kept working on the simple particle-system patch I borrowed from the library.

After experimenting with the particle system, I picked out “lowForce”, “maxDist”, and “draw_mode” as parameters to manipulate in realtime, to change the motion or switch between the points and lines conditions. Colors were also added to the system with RGB deviation (BIG THANKS TO ERIC) to create more vitality and variation in the realtime visual production. Figure 1.1 is a screenshot of the particle system section in Max.

Figure 1.1. Particle System with Color Deviation (Max)
Figure 1.2. Particle System Visual
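For readers without Max, one particle update step can be sketched in plain code. This is only my reading of the patch, not its implementation: it assumes “lowForce” acts as a weak pull toward a center point and “maxDist” caps how far a particle may drift.

```python
import math

def step(pos, vel, center=(0.0, 0.0), low_force=0.01, max_dist=5.0, dt=1.0):
    """Advance one particle by one frame: weak attraction, then a radius clamp."""
    px, py = pos
    vx, vy = vel
    dx, dy = center[0] - px, center[1] - py
    dist = math.hypot(dx, dy)
    if dist > 1e-9:
        # Weak pull toward the center, scaled by low_force.
        vx += low_force * dx / dist * dt
        vy += low_force * dy / dist * dt
    px, py = px + vx * dt, py + vy * dt
    # Keep the particle inside the max_dist radius.
    d = math.hypot(px - center[0], py - center[1])
    if d > max_dist:
        scale = max_dist / d
        px = center[0] + (px - center[0]) * scale
        py = center[1] + (py - center[1]) * scale
    return (px, py), (vx, vy)
```

Under these assumptions, raising low_force tightens the swarm and raising max_dist lets it spread, which roughly matches how those knobs changed the motion on screen.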

I added Vizzie to the system to create the different layers of my visual story, following my storyboard. Here I basically placed the same scene from above four times on the screen, in different positions (see Figure 2.1 and 2.2). I let the MIDI controller adjust the opacity (fader) of each small rectangular section, creating more visual dynamics.

Figure 2.1. Vizzie
Figure 2.2. Vizzie Visual
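In miniature, the four-copy layout with per-section opacity faders behaves like this sketch, which treats a “frame” as a single brightness value; the quadrant names and fader values are hypothetical:

```python
QUADRANTS = ["top_left", "top_right", "bottom_left", "bottom_right"]

def composite_quadrants(frame_value, faders):
    """Apply each quadrant's opacity fader (0.0-1.0) to the shared frame."""
    return {q: frame_value * faders.get(q, 0.0) for q in QUADRANTS}

# Full opacity top-left, half opacity bottom-right, the rest faded to black:
layers = composite_quadrants(0.8, {"top_left": 1.0, "bottom_right": 0.5})
```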

I got quite interesting visual outcomes when changing the color composition, scale, and rotation parameters (see Figures 2.3, 2.4, and 2.5).

Figure 2.3
Figure 2.4
Figure 2.5

This is a screenshot of the entire visual production system I manipulated in the performance (see Figure 3).

Figure 3. Visual Production System

Audio Production

I made the base layer and some short sound samples in GarageBand and edited them before export. Below is a screenshot of my base layer sound design (see Figure 4).

Figure 4. Sound Design in GarageBand

Eric suggested using the Classroom Filter to filter out the high pitches so that the short sound samples would fit better into the base layer. Eric also helped me design a third sound sample to enhance the uncanny, mysterious, minimal audio experience. Below is a screenshot of the audio production (see Figure 5).

Figure 5. Audio Production System

Below is a screenshot of my entire patch which gathers both audio and visual production (see Figure 6).

Figure 6. Screenshot of the Entire Patch

The change in draw_mode triggers rapid signal sounds, and the fading of two of the four sections is tied to the mysterious sound in the performance.

Performance Setup

I assigned parameters (such as scale, position, rotation, forces, fader, etc.) to different MIDI knobs so that everything was easily under my control. The photo below shows how I improvised with my audiovisual system on a MIDI controller (see Figure 7). There was also a separate display screen during the performance so I could see the visuals I produced in realtime.

Figure 7. Realtime Control Interface: MIDI Controller
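A knob assignment like the one in Figure 7 can be sketched as a table from MIDI CC numbers to parameter ranges; the CC numbers and ranges below are illustrative, not the actual mapping used in the show:

```python
# Hypothetical CC-number -> (parameter, min, max) assignment.
KNOB_MAP = {
    1: ("scale",    0.1, 4.0),
    2: ("rotation", 0.0, 360.0),
    3: ("lowForce", 0.0, 0.1),
    4: ("fader",    0.0, 1.0),
}

def on_cc(cc_number, cc_value, state):
    """Scale a 7-bit CC value (0-127) into its parameter's range and store it."""
    if cc_number in KNOB_MAP:
        name, lo, hi = KNOB_MAP[cc_number]
        state[name] = lo + (hi - lo) * (cc_value / 127.0)
    return state

params = {}
on_cc(4, 127, params)  # fader knob fully up  -> fader = 1.0
on_cc(2, 0, params)    # rotation fully down  -> rotation = 0.0
```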

Performance
The performance went quite well during the showcase; I enjoyed it, but it could have been better. I think the sound and the visuals accompanied each other well in realtime and did offer people an immersive, minimal experience. As I discussed with Eric earlier, however, the sound could’ve been richer, with more variations. I was a bit upset about my ending (the last 20 seconds or so): I wasn’t able to find the right knob to turn the audio off, and it got a bit out of my control.

On the visual side, I wasn’t expecting the graphics to be that slow. I had worked with the system long enough to know that the visuals were much smoother while rehearsing. I had also produced far more beautiful scenes, with better color combinations and scaling/rotating/positioning parameters… Although I know the unexpected is an unavoidable part of live performance, and I received quite positive feedback from the audience, I still wish it could’ve been better.

Conclusion

It was my first time showing my work to an audience beyond NYU Shanghai, which was already amazing to me, and I deeply appreciate the great opportunity Eric offered :). I’ve found myself a big fan of audiovisual performance but had never imagined really doing it one day; I am surprised at how much I was able to achieve in both the visuals and the audio all on my own. I will definitely work with jitter more in the future and produce better audiovisual performances.

Reading Response Live Cinema

According to Eva Fischer’s interpretation of the practice, VJing means the live manipulation of prepared footage, usually responsive to music selected and manipulated by DJs at a shared venue (106, 111, 112). VJing is mainly based on the improvised manipulation of abstract visual content; however, the responsive and cooperative nature of the act usually leads to its being categorized as a secondary art practice, which causes VJ artists to turn away from being identified as merely VJs (113). My understanding of live cinema from Gabriel Menotti’s “Live Cinema” is based on the comparison between live cinema and VJing as well as between live cinema and traditional cinematographic conventions. According to Mia Makela, a live cinema practitioner, live cinema differs from VJing in its degree of artistry as well as in the artist’s agency and overall control of the creative outcome (94). The performer is responsible for every aspect of the outcome rather than playing a secondary role, and is free from the need to prioritize either the other creators in the scene (DJ, lighting engineer, set producer) or the audience (95). Compared with conventional cinematographic approaches, live cinema performers have more freedom to choose between traditional linear storytelling and a more abstract, intuitive narration (87). In her article, Ana Carvalho suggests the term live audiovisual performance as a generic umbrella that covers all manner of audiovisual performative expressions, including VJing, live cinema, and others (134, 135). Its major characteristics are the liveness of the practice and the interconnection between the visual and audio experience, as well as its intermediality (131, 133, 139).

The Audiovisual Breakthrough, Fluctuating Images, http://www.ephemeral-expanded.net/audiovisualbreakthrough/. Accessed 12 Nov. 2019.

Reading Response 8: Live Cinema – Celine Yu

Reading Response:

To differentiate between the terms VJing, Live Cinema, and Live Audiovisual Performance, we must understand the relations between them and their supposed hierarchical standings. Live Audiovisual Performance, as depicted by Ana Carvalho, works as an “umbrella that extends to all manner of audiovisual performative expressions” (134). This artistic umbrella harnesses under its wing expressions that include VJing, live cinema, expanded cinema, and visual music. The term itself is generic and vast, for it does not identify a single style, technique, or medium, which renders it complex at the same time. Its “live”, “audiovisual”, and “performative” features are grounded in a nature of improvisation. This sense of improvisation has developed alongside the increasing immediacy and ‘liveness’ of technology (cameras, mixers, software) for image and sound manipulation, which now permits capturing and presenting a performance simultaneously while an action is happening (134). The category is often commended for the opportunity it provides audience members across the globe to understand the numerous innovative expressions it entails.

Though similar, the practices of VJing and Live Cinema are crucially distinct under the wing of Live Audiovisual Performance. VJing, as we learned previously, runs parallel to the responsibilities of a disc jockey (DJ): VJs are rooted in the manipulation of live visuals as DJs are rooted in the manipulation of audio. VJing is, however, much more interaction-based than Live Cinema in its relationship between performer and audience. The act of VJing can be relatively restrictive compared to Live Cinema: VJs may have the artistic freedom of improvisation and a lower demand for narration, but for the most part they lack the upper hand in a performance. They have less control, having to rely on their fellow collaborators (lighting engineers, DJs, set producers) as well as the responses of audience members. Since VJing is so heavily interaction-based, the performative work of VJs is, more often than not, restricted to the monotonous setting of a nightclub, where they are treated like wallpaper and supporting acts.

Live Cinema, on the other hand, is a much more hands-on and demanding genre. In essence, the goals of Live Cinema are described as much more personal and artistic in the eyes of the creator as well as of the audience member on the receiving end. This is why “many live cinema creators feel the need to separate themselves from the VJ scene altogether” (93), for the goals of live cinema are relatively difficult to achieve in the club environment. The creator is given a much “larger degree of creative control over the performance” (95); there is far more leeway for artists to create what they want, given that they don’t need to follow trends and situational norms. Furthermore, compared to VJing, Live Cinema places much greater importance on narration and communication, where storytelling becomes a needed skill for articulating meaningful representations to the audience.

Examples

Live Audiovisual Performance

Ryoichi Kurokawa is a household name in the genre of Live Audiovisual Performance. His use of synthesized, impactful sounds playing in collaboration with distinct visuals does not carry a strong narrative sense but nonetheless works as a personal and artistic piece. His use of human figures, animal species, and other meaningful representations further conveys a sense of artistic expression to the audience.

Live Cinema

This performance by Ge-Suk Yeo can be categorized under the Live Cinema wing of live audiovisual performance for its use of concrete narrative aspects to form visual art. The theme of aquatic life and the narration of light beneath the dark seas are prominent in this performance.

VJing

This example of VJing is a standard performance that does not necessarily harness narrative components but makes use of the live manipulation of audio and stored visuals to create an atmosphere that lets those present become harmonious with, and drawn closer to, the performance.

Sources:

Carvalho, Ana. “Live Audiovisual Performance.” The Audiovisual Breakthrough, 2015, pp. 131–143.

Menotti, Gabriel. “Live Cinema.” The Audiovisual Breakthrough, Edited by Cornelia Lund, 2015, pp. 83–108.

Reading Response 8 – Live Cinema from Joyce

Live Cinema and VJing can be said to be two branches of live audiovisual performance. Through the reading, I can tell that a group of artists looks down on VJing – they are trying to build a hierarchy within the category of live audiovisual performance. This group is represented by Amy Alexander, Toby Harris, and Makela. According to “Live Cinema” by Gabriel Menotti and “Live Audiovisual Performance” by Ana Carvalho, Toby Harris articulates the monotony of everyday VJing presentations – “stuck in nightclubs and treated as wallpaper” (89). In contrast, live cinema is an art that “invites the audience to construct narrative and cultural critique” (89). What’s more, the connection between live cinema and narrative is verbalized in a statement by Chris Allen, a member of The Light Surgeons, who describes their work as a “deconstructed, exploded kind of filmmaking that involves narrative and storytelling” (91). In addition to Toby Harris’s differing descriptions of the two forms of live art, Makela adds a primary-and-secondary relationship between them. On a closer reading of Makela, she indeed asserts that cinema now includes “all forms of configuring moving images”. Nevertheless, she insists that live cinema is “in essence artistic” and can therefore be set apart from VJing. More explicitly, Makela even remarks that “many Live Cinema creators feel the need to separate themselves from the VJ scene altogether, in order to establish their own artistic goals, which would rarely find an appreciative audience in a club environment” (93). She is not only suggesting a hierarchy of values in the realm of audiovisual performance but also imposing discrimination on VJing. Another difference between VJing and live cinema is the position of the performer. Performing live cinema means not falling into contingent collaborations with whichever DJ, lighting engineer, or set producer might be on that day’s shift, as a VJ often has to.
In live cinema, the performer directs every aspect of the spectacle, never being relegated to a secondary role (95). Along with this dominant position, live cinema also carries other weight. To call a performance “live cinema” is more than invoking a background: it inscribes the performance in a tradition, supposedly dissolving any suspicion about its cultural relevance. Beyond implying a cultural background, live cinema also upholds a particular cultural meaning; the concept of live cinema can be useful not only for audiovisual performance but also for cinema itself (99). Live cinema’s distinction from other media actually becomes enforced, allowing it to keep a certain prominence instead of challenging cinema’s specificities (101).

The definition and meaning of the term live audiovisual performance seem much more general; both live cinema and VJing are categories under it. Yet it is general but complex: live audiovisual performance does not comprise a specific style, technique, or medium, but instead gathers a series of common elements that simultaneously identify a group of artistic expressions as well as specific works, which don’t necessarily fit within any of the particular expressions that constitute the group (131). Amy Alexander differentiates between VJing and live cinema but does not address live audiovisual performance as a practice with its own particular features (135). I have found some videos on the internet; the first one is Otolab’s Punto Zero.

Screenshot of Otolab’s Punto Zero

By setting the audience inside the range of the projection, the piece engages strongly with its audience, which is an explicit characteristic of live cinema. In contrast, a VJ performance from Dalanars University

Screenshot of VJ performance from Dalanars University

seems more static – not that the image itself is static, but there is no interaction between the audience and the piece; it is more like an interaction between the performer and the computer. As Kinetic Lights – Atom

has shown, live performance (VJing aside) often includes a physical component, a sense of storytelling, and an implied background. What’s more, Guy Sherwin’s ‘Man with Mirror’

indicates that “a thorough interpretation of live cinema would mean taking these and other elements that collaborate in the continuing production of moving images more seriously into account” (103). Combining these observations with the earlier criticisms of VJing, it would actually be more appropriate to construct flexible structures connecting the different works than to use a rigid series of definitions (141). The future development of live audiovisual performance should always be “pointing ahead to the next turn that will be provoked technologically, politically, aesthetically, or by affections between the elements of the community” (143).