Realtime Audiovisual Performance: The DARK Space (Phyllis)

Title: The DARK Space

Designer/Performer: Phyllis Fei

Description

The DARK Space is a 10-minute real-time audiovisual performance that explores fluidity, flexibility, interaction, and transformation in abstract shapes (aka dots and lines) as well as their relationship with minimal music. It emphasizes the ideas of “freedom within limits” and “order within chaos” through the dynamic motion of simple shapes. It is a minimal piece that creates an immersive experience, aiming to leave open space for imagination and interpretation by the audience.

Perspective and Context

The DARK Space lies in the genre of live audiovisual performance, and the use of technology (Max) allows for real-time production. As described in The Audiovisual Breakthrough (Fluctuating Images, 2015), it is a time-based, media-based, and performative “contemporary artistic expression of live manipulated sound and [visual].”1 It relies heavily on improvisation: content is presented and modified simultaneously, in the very moment the knobs on the MIDI controller are turned.

Unlike VJs, who follow popular trends to deliver simple but effective audiovisual experiences in more commercial settings (such as club environments), The DARK Space pursues a more personal and artistic practice while asking for more conceptual feedback from the audience.2 The piece consists of “free-flowing abstractions” and is thus more conceptual than the VJing context. Similar to the genre of live cinema, The DARK Space asks for appreciative audiences in whom there arises “in-the-moment awareness, responsiveness, and expression.”3

  1. Ana Carvalho, “Live Audiovisual Performance,” in The Audiovisual Breakthrough (Fluctuating Images, 2015), 131.
  2. Gabriel Menotti, “Live Cinema,” in The Audiovisual Breakthrough (Fluctuating Images, 2015), 93.
  3. Menotti, 91.

Development & Technical Implementation

I started the entire production by creating a storyboard/scene design for the performance (see Figures 0.1 and 0.2).

Figure 0.1. Scene Design #1
Figure 0.2. Scene Design #2

This is the link to my GitHub gist.

Visual Production

I decided to work with particle systems for this project but had no previous knowledge of Jitter… so I started by following an online tutorial by Federico Foderaro. The logic behind it is very similar to that of Processing; it was only the language that I was unfamiliar with. To ensure the quality of the visual performance, I moved the heavy graphics into the shader world and luckily found the GL3 example library in Max, which was extremely helpful for my project. I also tried an example patch about attractors but decided not to include it in my project, so I kept working on the simple particle system patch I borrowed from the library.

After experimenting with the particle system, I picked out “lowForce,” “maxDist,” and “draw_mode” as the parameters to manipulate in real time, to change the motion or switch between the point and line conditions. Colors were also added to the system with RGB deviation (BIG THANKS TO ERIC) to create more vitality and variation in the real-time visuals. Figure 1.1 is a screenshot of the particle system section in Max; a conceptual sketch of the logic follows the figures below.

Figure 1.1. Particle System with Color Deviation (Max)
Figure 1.2. Particle System Visual
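
For readers without Max open, the behavior can be sketched outside the patch in a few lines of Python. This is only a conceptual sketch: low_force, max_dist, and draw_mode mirror the patch parameters, but the force model, the damping value, and the color-deviation helper are illustrative assumptions of mine, not the GL3 library code.

```python
# Conceptual sketch of the particle behavior in the Max patch.
# low_force, max_dist, and draw_mode mirror the patch parameters;
# the force model and damping are illustrative assumptions, not the GL3 code.
import random

NUM_PARTICLES = 500
low_force = 0.02      # pull strength toward the attractor (MIDI knob)
max_dist = 0.5        # interaction radius (MIDI knob)
draw_mode = "points"  # or "lines": switches the render primitive

particles = [{"pos": [random.uniform(-1, 1) for _ in range(3)],
              "vel": [0.0, 0.0, 0.0]} for _ in range(NUM_PARTICLES)]

def step(attractor=(0.0, 0.0, 0.0)):
    """Advance one frame: pull particles within max_dist toward the attractor."""
    for p in particles:
        offset = [a - x for a, x in zip(attractor, p["pos"])]
        dist = sum(o * o for o in offset) ** 0.5
        if 0.0 < dist < max_dist:        # only act inside the radius
            for i in range(3):
                p["vel"][i] += low_force * offset[i] / dist
        for i in range(3):
            p["vel"][i] *= 0.98          # damping keeps the motion stable
            p["pos"][i] += p["vel"][i]

def particle_color(base=(0.8, 0.8, 0.9), deviation=0.2):
    """Per-particle RGB deviation around a base color, for variety."""
    return tuple(min(1.0, max(0.0, c + random.uniform(-deviation, deviation)))
                 for c in base)
```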

I added Vizzie to the system to create different layers of my visual story, following my storyboard. Here I basically placed the same scene from above four times on the screen, each at a different position (see Figures 2.1 and 2.2). I let the MIDI controller adjust the opacity (fader) of each small rectangular section, which created more visual dynamics; a sketch of the compositing idea follows the figures below.

Figure 2.1. Vizzie
Figure 2.2. Vizzie Visual
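
The layering amounts to simple alpha compositing: the same rendered scene is placed in four rectangular sections, each multiplied by its own fader before being drawn onto a black canvas. Below is a minimal sketch of that idea using NumPy; the quadrant layout and the example fader values are my own assumptions, not the exact Vizzie wiring.

```python
# Sketch of the Vizzie layering: the same scene drawn into four rectangular
# sections of a black canvas, each multiplied by its own opacity fader.
# The quadrant layout and example fader values are illustrative assumptions.
import numpy as np

def composite(scene, faders):
    """scene: (H, W, 3) float image in [0, 1]; faders: four opacities in [0, 1]."""
    h, w, _ = scene.shape
    canvas = np.zeros((h * 2, w * 2, 3), dtype=scene.dtype)
    slots = [(0, 0), (0, w), (h, 0), (h, w)]      # four rectangular sections
    for (y, x), fader in zip(slots, faders):
        canvas[y:y + h, x:x + w] = scene * fader  # fader acts as opacity over black
    return canvas

# e.g. composite(scene, [1.0, 0.5, 0.25, 0.0]) leaves one section fully dark.
```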

I got quite interesting visual outcomes when changing the color composition, scale, and rotation parameters (see Figures 2.3, 2.4, and 2.5).

Figure 2.3
Figure 2.4
Figure 2.5

This is a screenshot of the entire visual production system I manipulated during the performance (see Figure 3).

Figure 3. Visual Production System

Audio Production

I made the base layer and some short sound samples in GarageBand and edited them before exporting. Below is a screenshot of my base layer sound design (see Figure 4).

Figure 4. Sound Design in GarageBand

Eric suggested using the Classroom Filter to cut the high pitches so that the short sound samples would sit better within the base layer. Eric also helped me design a third sound sample to enhance the uncanny, mysterious, minimal audio experience. Below is a screenshot of the audio production (see Figure 5).

Figure 5. Audio Production System
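
Conceptually, what that filtering step does is low-pass the samples: content above a cutoff frequency is attenuated so the sample no longer pokes out of the base layer. Here is a generic one-pole low-pass sketch of the idea; it is not the internals of the actual filter module, and the cutoff value is a placeholder.

```python
# Generic one-pole low-pass sketch: attenuate content above a cutoff so a
# sample blends into the base layer. Not the actual filter module's internals;
# the cutoff and sample rate are placeholders.
import math

def lowpass(samples, cutoff_hz=800.0, sample_rate=44100):
    """Single-pole IIR low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out
```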

Below is a screenshot of my entire patch, which brings together both the audio and the visual production (see Figure 6).

Figure 6. Screenshot of the Entire Patch

The change in draw_mode triggers rapid signal sounds, and the fade levels of two of the four sections are driven by the mysterious sound during the performance.
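
Both cross-mappings are conceptually simple: a change in draw_mode fires a sound trigger, and the amplitude envelope of the mysterious sample drives two of the four faders. A sketch of the logic follows; trigger_signal_sound and the envelope input are hypothetical stand-ins for the corresponding Max connections, named purely for illustration.

```python
# Sketch of the two audio/visual cross-mappings described above.
# trigger_signal_sound and the envelope argument are hypothetical stand-ins
# for the corresponding Max connections, named here purely for illustration.
last_draw_mode = "points"

def on_draw_mode_change(new_mode, trigger_signal_sound):
    """Fire the rapid signal sound whenever draw_mode actually changes."""
    global last_draw_mode
    if new_mode != last_draw_mode:
        last_draw_mode = new_mode
        trigger_signal_sound()

def update_faders(faders, envelope):
    """Let the mysterious sound's amplitude envelope (0..1) drive two of the
    four section faders; the other two stay under manual MIDI control."""
    faders[1] = envelope
    faders[3] = 1.0 - envelope   # inverse motion for contrast (my assumption)
    return faders
```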

Performance Setup

I assigned parameters (such as scale, position, rotation, forces, faders, etc.) to different MIDI knobs so that everything was more easily under my control. The photo below shows the interface through which I improvised with my audiovisual system on a MIDI controller (see Figure 7). There was also a separate display screen during the performance so I could see the visuals I produced in real time.

Figure 7. Realtime Control Interface: MIDI Controller
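
Under the hood, each knob sends a MIDI control-change value between 0 and 127, which has to be rescaled to whatever range the assigned parameter expects. The sketch below shows that mapping; the CC numbers and parameter ranges are illustrative assumptions, not my controller's actual assignments.

```python
# Sketch of the MIDI mapping: each knob sends a control-change value in
# 0..127, which is rescaled to the assigned parameter's own range.
# The CC numbers and ranges below are illustrative assumptions.
PARAM_RANGES = {
    21: ("scale",     0.1, 4.0),
    22: ("rotation",  0.0, 360.0),
    23: ("low_force", 0.0, 0.1),
    24: ("max_dist",  0.0, 2.0),
    25: ("fader_1",   0.0, 1.0),
}

def on_midi_cc(cc_number, cc_value, params):
    """Rescale a raw CC value (0..127) onto the parameter assigned to that knob."""
    if cc_number in PARAM_RANGES:
        name, lo, hi = PARAM_RANGES[cc_number]
        params[name] = lo + (hi - lo) * cc_value / 127.0
    return params
```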

Performance

The performance went quite well during the showcase; I enjoyed it, though it could have been better. I think the sound and the visuals accompanied each other well in real time and did offer people an immersive, minimal experience. As I discussed with Eric, however, the sound could have been richer, with more variation. I was a bit upset about my ending (the last 20 seconds or so): I wasn’t able to find the right knob to turn the audio off, and it got a bit out of my control.

Visual-wise, I wasn’t expecting the graphics to be that slow. I had worked with the system long enough to know that the visuals were much smoother during rehearsal. I had also produced much more beautiful scenes with better color combinations and scaling/rotating/positioning parameters… Although I know that the unexpected is a crucial, unavoidable aspect of live performance, and I received quite positive feedback from the audience, I still wish it could have been better.

Conclusion

It was my first time showing my work to a larger audience beyond the NYU Shanghai community, which was already amazing to me. I am so grateful for the great opportunity that Eric offered :). I’ve found myself to be a big fan of audiovisual performance, but I had never imagined really doing it myself one day. I am surprised at how much I was able to achieve in both the visuals and the audio all on my own. I will definitely work with Jitter more in the future and produce better audiovisual performances.

Response 8: Live Cinema (Phyllis)

After reading chapters from The Audiovisual Breakthrough (Fluctuating Images, 2015), I find that live audiovisual performance is defined as time-based, media-based, and performative “contemporary artistic expressions of live manipulated sound and image.”1 It relies heavily on improvisation: content is captured and presented simultaneously while an action is happening, and the use of technology allows for real-time production. It is a generic, broad term that extends to all manner of audiovisual performative expressions, including practices such as VJing, live cinema, expanded cinema, and visual music.2 It is inclusive because it also covers works that fit within none of the particular expressions mentioned above, and “does not comprise a specific style, technique, or medium.”3 We are able to find “complex dynamics between the presence of the artists and the meaning for the final result presented to the audience.”4

Live cinema, according to Mia Makela, is similar to VJing but is shown in settings such as “a museum or theater.”5 However, it is more conceptual than the VJing context and suggests “instantaneous feedback between the creator and the public.”6 Live cinema can be developed with loose, linear narratives; it offers “extensive freedom of configuration” and thus suggests “improvised, free-flowing abstractions.”7 Artists such as Toby Harris employ “continuity within and between episodes [in live cinema], invit[ing] the audience to construct narrative and cultural critique.”8 Live cinema asks for appreciative audiences in whom there arises “in-the-moment awareness, responsiveness and expression.”9 I see an interesting hierarchy formed among the various live audiovisual performance styles: artists agree on live cinema being “in essence artistic,” and it can therefore be set apart from VJing. Unlike VJs, who follow popular trends to deliver simple but effective audiovisual experiences in more commercial settings (such as club environments), live cinema occupies “a place equivalent to that of film auteurs, whose goals ‘appear to be more personal and artistic,’” and asks for more conceptual feedback from the audience.10

VJ

This is a typical VJ performance, with audio and visuals manipulated in real time, designed for club environments. There is no specific deep meaning to the audio or the visuals, but together they form a “cool,” “high” audience experience.

Live Cinema

This is a live cinema show, The Forbidden Zone, by writer Duncan Macmillan, director Katie Mitchell, and video director Leo Warner. They use essentially all the apparatus that cinema/film production requires to produce the work, and it asks audiences to sit down and appreciate it just as they would a film. The visual effects are all manipulated in real time. Compared to the VJing example above, it is quite obvious that the live cinema performance is more thoughtful and artistic.

Live Audiovisual Performance

This is a kinetic light audiovisual performance by Robert Henke. It is achieved with live light projection that responds to the designed audio. It is abstract and conceptual, providing an immersive experience with its own flow.

References

1. Ana Carvalho, “Live Audiovisual Performance,” in The Audiovisual Breakthrough (Fluctuating Images, 2015), 131.

2. Carvalho, 135.

3. Carvalho, 131.

4. Carvalho, 133.

5. Gabriel Menotti, “Live Cinema,” in The Audiovisual Breakthrough (Fluctuating Images, 2015), 85.

6. Menotti, 87.

7. Menotti, 87.

8. Menotti, 89.

9. Menotti, 91.

10. Menotti, 93.

Assignment 5: Multi 3D Objects (Phyllis)

Here is the link to my gist.

Process

I searched through different 3D models and finally decided on a human head (downloaded from TurboSquid) for this assignment. I loaded it into Max through “read,” added 10 of them in total, and experimented with their motions based on Eric’s example patch. I made some adjustments to the frequency/scale/speed of the position, XYZ rotation, and scale of my head model. For rotation on the x-axis, I switched the parameter to “phase.” The final motion was 10 heads nodding at a high frequency with low-frequency shakes, while moving around along the x, y, and z axes; the sketch below outlines the idea.
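
The motion is essentially stacked sinusoids: a high-frequency oscillation on one rotation axis, a low-frequency one on another, plus slow positional drift, with a per-head phase offset so the ten heads don't move in lockstep. Here is a conceptual sketch; the frequencies and amplitudes are placeholder values of mine, not the ones in the patch.

```python
# Sketch of the ten-head motion: high-frequency nodding on the x-axis,
# low-frequency shaking on the y-axis, and slow positional drift, with a
# per-head phase offset. Frequencies and amplitudes are placeholder values.
import math

NUM_HEADS = 10

def head_transform(i, t):
    """Return (position, rotation_degrees) for head i at time t (seconds)."""
    phase = 2 * math.pi * i / NUM_HEADS                    # the "phase" parameter
    nod = 20 * math.sin(2 * math.pi * 3.0 * t + phase)     # fast x-axis nod
    shake = 10 * math.sin(2 * math.pi * 0.3 * t + phase)   # slow y-axis shake
    pos = (math.sin(0.5 * t + phase),                      # slow drift on x, y, z
           math.cos(0.4 * t + phase),
           math.sin(0.3 * t + phase))
    return pos, (nod, shake, 0.0)
```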

After finalizing the motions, I started to generate patterns for the model; however, I could only make changes to the background rather than the model itself. (Now that I understand how it works, I find myself stupid… 😑) The patterns were not being passed to the model. Eric explained to me how effects/patterns can be passed to the model in Jitter, and I finally understand!!! Then I worked with the patterns using 1PATTERNMAPPER, MAPPER, and HUSALIR, and produced the first output image (see Figure 1).

Figure 1

Demo for Figure 1

Eric also showed me how to switch between different patterns by adding more statements to the drawing function; the switch can then be made with a simple click. I modified my face with MUTIL8R and produced my second output image (see Figure 2).

Figure 2

Demo for Figure 2

Below (Figure 3) is a screenshot of my entire patch.

Figure 3

Reflection

  • I figured out why this assignment felt challenging at the beginning: I was not yet comfortable with Jitter. I was afraid of trying things out and felt that I wasn’t good at it (even though we’ve been working with Max for an entire semester). I need to step out of my comfort zone.

Assignment 4: Granular Synthesis Audio (Phyllis)

This is the link to my gist.

Goal: Add extra layers of synthesized audio to the patch to form a base layer of sound on top of which the sound from the videos plays. Add effects to the videos’ audio so that the clips become “playable” as musical entities. Make the base layer and the audio from the videos cohere as a whole.

Process

I started this assignment by synthesizing the base layer first. Considering that audio clips from videos are a bit different from music pieces (melody versus human tones), I synthesized a base layer that is more ambient/white-noise-ish than melodic. I composed a “wavy” layer with few variations in key in the SEQUENCER and adjusted its frequency with the OSCILLATOR to lower the pitch. To enrich the base layer, I added another layer driven by the same sequencer but with different effects: I used DFLFO and CLOUD to make the audio noisier and busier. The frequencies in OSCILLATOR and CLOUD were adjusted accordingly so that the base layer sounds more coherent. To my ear, the two layers clearly come from the same original audio sequence, and they change correspondingly.
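
The “noisier, busier” texture of a CLOUD-like stage comes from granular synthesis: short windowed grains are read from random positions in the source and summed back together. Below is a minimal sketch of that idea, not the module’s actual implementation; the grain size and density are placeholder assumptions.

```python
# Minimal granular-synthesis sketch of what a CLOUD-like stage contributes:
# short windowed grains read from random offsets in the source layer and
# summed into the output. Grain size and density are placeholder assumptions.
import math
import random

def granulate(source, grain_len=2048, num_grains=200, out_len=44100 * 4):
    out = [0.0] * out_len
    for _ in range(num_grains):
        src = random.randrange(0, len(source) - grain_len)  # random read point
        dst = random.randrange(0, out_len - grain_len)      # random write point
        for n in range(grain_len):
            # Hann window avoids clicks at the grain boundaries
            w = 0.5 * (1.0 - math.cos(2 * math.pi * n / grain_len))
            out[dst + n] += w * source[src + n]
    return out
```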

Then, I added separate effects to the three audio clips from the video. I used the SPECTRAL FILTER on the first selected clip to give it a spooky feeling, then used the COMPRESSOR to turn its volume up a little more. I enriched the second selected clip with FEEDBACK DELAY and also brought its volume up with the COMPRESSOR. I added LADDER, FREQUENCY SHIFTER, and LFO to the third selected clip.
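
Of those effects, the feedback delay is the easiest to sketch: the output is fed back into a delay line at reduced gain, producing decaying repeats. Here is a generic version of the idea, not the actual module; the delay time, feedback, and mix values are assumptions.

```python
# Generic feedback-delay sketch (decaying echoes) standing in for the
# FEEDBACK DELAY stage; delay time, feedback, and mix are assumptions.
def feedback_delay(samples, delay_samples=11025, feedback=0.5, mix=0.5):
    buf = [0.0] * delay_samples            # circular delay line
    out, idx = [], 0
    for x in samples:
        delayed = buf[idx]
        buf[idx] = x + feedback * delayed  # feed the echo back into the line
        idx = (idx + 1) % delay_samples
        out.append((1.0 - mix) * x + mix * delayed)
    return out
```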

Then I made some final changes to each effect so that everything fits together better.

The screenshot below shows how the entire patch looks.

Reflection

I also wanted to make the selected audio clips from the video play in sequence (much like how beats in a sound piece work), but I couldn’t figure out how to do so 😢. However, I now feel more comfortable playing around with sound in Max.

Response 7: VJ Culture (Phyllis)

The practice of VJing involves blending and mixing image/video loops, generating visual materials, creating collages and mixes, etc., with electronic music elements crucial to the entire production process. It is a collaborative, live practice of pure improvisation in which the visuals actively and constantly respond to the music. Unlike traditional movie screenings or music, the content of a VJ performance is produced in real time rather than pre-recorded; as a result, as Eva Fischer claims, the “visual is perceived and processed in a temporal sequence.”1 VJing, as Erika Fischer-Lichte points out, requires the physical “co-presence of actors and spectators.”2 VJs collaborate closely with musicians, sound artists, and DJs, creating immersive spaces in which the visuals are strongly influenced by and integrated with the structure and characteristics (rhythm, melody, style, etc.) of the music. VJing in this context is not simply a visual accompaniment to the music, but a live performative artwork that has music as its foundation and expands beyond the musical context. Therefore, I don’t quite agree with the concept of “visual wallpaper” and even feel offended by it.

I have been to some music festivals with live VJ performances; however, I personally think the VJing elements there were not as effective. Music festivals have atmospheres similar to clubs, where the audience is more volatile than visitors in art galleries. To put it bluntly, the artistic narratives of VJ performances are not what music festival audiences look for in their experience; they want anything that is “cool”/“high” and provides instantaneous stimuli. In other words, even though VJs may want to improvise conceptual and narrative works, those settings (music festivals/clubs) practically don’t allow them to do so. I guess this is why I failed to feel the richness of the VJing content and only felt the craziness of the visuals. I would prefer VJ works that are more concept-based and richer in themselves.

References

  1. Eva Fischer, “VJing,” in The Audiovisual Breakthrough (Fluctuating Images, 2015), 111.
  2. Fischer, “VJing,” 107.