Project 3 Documentation

from Joyce Zheng

Title

Wild 失控

Project Description

Our project is set against a contemporary backdrop in which people rely heavily on modern technology. We moved away from our initial cyberpunk concept, but we still try to express the loss of control over these technologies and the disasters that follow. The Kinect-controlled section, together with the sound of breaking glass and a flickering white screen, works as a transition: before it comes the prosperity of a society dependent on technology, and after it comes that society's collapse.

Sitting somewhere between live cinema and VJing, we are trying to tell a story rather than simply generate patterns. The videos we added make our real-time performance read more like a realistic story than pure fiction. By connecting reality with fiction, we want to raise awareness of how addicted we are to our phones, and how our lives are changed, and even controlled, by modern technology.

Our inspiration comes first from the videos we watched in class. The rainy-window piece gave us the initial idea of shooting our own footage in the real world. Real-time VJing and the real-time motion graphics that redefine the aesthetics of juggling are our biggest inspirations; they convinced us that the project should include something whose on-screen movement we can control in real time.

Perspective and Context

One of the most important parts of our project is the transition, where the sound fits the visuals most closely. Although our music was made separately from the visual part, we tried our best to make the two echo each other and hold the whole performance together. Looking at Belson's and Whitney's work, Belson tried to "break down the boundaries between painting and cinema, fictive space and real space, entertainment and art", and Belson and Jacobs moved visual music from the screen out into "three-dimensional space". While combining the audio and the video, we are also trying to interpret music in a more three-dimensional way.

Jordan Belson's work also gives us a significant frame for the performance. Film historian William Moritz comments that "Belson creates lush vibrant experiences of exquisite color and dynamic abstract phenomena evoking sacred celestial experiences". His mimesis of celestial bodies inspired us to place something at the center of the screen; it does not have to be anything specific, but it stands as a representation of something: perhaps the expansion of technology, or technology itself.

In Live Cinema by Gabriel Menotti and Live Audiovisual Performance by Ana Carvalho, Chris Allen describes live cinema as a "deconstructed, exploded kind of filmmaking that involves narrative and storytelling" (Carvalho 9). Our approach to the live performance corresponds to this concept: we break our idea into a few scenes and jump between them. Moreover, looking back at Otolab's Punto Zero, we can see how strongly their artwork interacts with the audience. Initially, we wanted to invite an audience member to interact with the Kinect and see how the visual effects change with their movement. Due to practical limitations, Tina performed that part instead, but if possible we would have chosen an actual audience member to do it.

Development & Technical Implementation

Gist: https://gist.github.com/Joyceyyzheng/b37b51ccdd9981e507905ddcbbda988d

For our project, Tina was responsible for the Processing side, including the flying green numbers and how they interact with the Kinect, and therefore with her. She also worked on combining the Max patch with Processing. I was responsible for generating the audio, building the Max patch, and processing the videos we shot together.
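Since the Processing sketch itself is Tina's work and is not reproduced here, the following is only a minimal sketch of the "flying green numbers" idea, written for illustration: the mouse position stands in for the Kinect-tracked body position, because her exact Kinect library and mapping are not part of this documentation.

// Minimal sketch of the "flying green numbers" idea (illustration only).
// The real version reads a tracked position from the Kinect;
// here mouseX stands in for that position as a placeholder.

int numDigits = 200;
float[] x = new float[numDigits];
float[] y = new float[numDigits];
float[] speed = new float[numDigits];
char[] digit = new char[numDigits];

void setup() {
  size(1280, 720);
  textSize(18);
  for (int i = 0; i < numDigits; i++) {
    x[i] = random(width);
    y[i] = random(height);
    speed[i] = random(1, 5);
    digit[i] = char('0' + int(random(10)));
  }
}

void draw() {
  background(0);
  fill(0, 255, 70);
  for (int i = 0; i < numDigits; i++) {
    // Digits fall and drift toward the tracked position
    // (mouse as a stand-in for the Kinect data).
    x[i] += (mouseX - x[i]) * 0.01;
    y[i] += speed[i];
    if (y[i] > height) {
      y[i] = 0;
      x[i] = random(width);
    }
    text(digit[i], x[i], y[i]);
  }
}

In the real sketch, the position tracked by the Kinect takes the place of mouseX, which is what lets the numbers respond to Tina's movement.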

As I mentioned previously, we developed our audio and video separately, which is one area I think we could improve a lot. I developed the audio mainly in GarageBand, mixing different tracks built from freesound.org samples together with various beats and bass lines. I tried an app called Breaking Machine to generate sound, but given the time we had, GarageBand turned out to be the better choice. I listened to a few real-time performances (e.g. 1, 2) and some cyberpunk-style music and decided on the style I wanted. The first version of the audio looked like this:

Fig.1. Screenshot of audio version 1

Tina's and our friend's reaction to this was that it was too plain: it was not strong enough and did not express enough of our emotion. Eric suggested adding some more intense sounds as well, such as breaking glass. With those suggestions and some later small adjustments, I selected samples I felt were right for the piece and arrived at the final audio. I decided to use a lot of choir sounds, since they feel really powerful, along with different kinds of bass intersecting with each other on the same track; overall, the goal of each revision was to make it stronger and more intense. At first, we wanted to bring the audio into Max so we could control it alongside the video, but we found that inside Max we could not see where in the track we were, and it was not feasible for us to control two patches at once (since one was not enough), so we left the audio in GarageBand.

Fig.2. Screenshot of audio version 2 -1
Fig.3. Screenshot of audio version 2 -2

For the Max patch, we turned to jit. objects instead of filters such as Easemapper. I searched for a few tutorials online and built the patch; the tutorials were really helpful, and the final video output produced by Max was something we wanted to use. I combined the two objects created in Max, and with Eric's help they could be rendered to the same screen simultaneously, which contributes to our final output. After that, we started adding videos and putting all of the outputs on one screen. One problem we ran into was that using the AVPlayer to play more than eight videos crashed the computer. At Eric's suggestion, we used a umenu instead, which was much better and allowed the project to run smoothly.
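For illustration, the same design change can be sketched in Processing (our actual fix was inside Max: one playback chain whose source is picked from a umenu of file names, instead of eight players running at once). The clip names below are placeholders, not our real file names.

import processing.video.*;

// One reusable player instead of eight simultaneous ones.
// File names are placeholders for the clips we actually used.
String[] clips = { "city.mp4", "hand.mp4", "message.mp4", "crash.mp4" };
Movie player;
int current = 0;

void setup() {
  size(1280, 720);
  player = new Movie(this, clips[current]);
  player.loop();
}

void draw() {
  image(player, 0, 0, width, height);
}

void movieEvent(Movie m) {
  m.read();
}

void keyPressed() {
  // Switch the source on demand instead of keeping every clip loaded at once.
  current = (current + 1) % clips.length;
  player.stop();
  player = new Movie(this, clips[current]);
  player.loop();
}

The point is the same in either environment: keep one playback chain alive and swap what it reads, rather than decoding every clip at the same time.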

Fig. 4. Two separate patches
Fig. 5. The original patch, before Tina's modifications
Fig. 6. Final patch screenshot
Fig. 7. Present mode patch

We shot the videos together and I processed them with different filters. Below are screenshots of the footage we used. It is all originally shot: the city, hand, email, and WeChat message clips express the prosperity of the city and how heavily people rely on technology, while the remaining clips show the world crashing down after the explosion. As a result, our Max patch works somewhat like a video generator: it brings together all of the input sources, including the Kinect input, the videos, and jit.world.

Fig. 8. Video 1 screenshot
Fig. 9. Video 2 screenshot
Fig. 10. Video 3 screenshot
Fig. 11. Video 4 screenshot
Fig. 12. Video 5 screenshot
Fig. 13. Video 6 screenshot
Fig. 14. Video 7 screenshot
Fig. 15. Video 8 screenshot

       

Performance

The performance at Elevator was an amazing experience. We were nervous, but overall everything went well. During the show, I was mainly responsible for controlling the videos and the audio, while Tina controlled the cube and the constantly changing 3D object. When she was performing with the Kinect, I took over control of the whole patch. There were a few problems during the performance. The first was that the sound was too loud, especially in the second half; I could feel the club shaking even on the stage. The second was that there were a few mistakes mid-performance: the Kinect does not work unless it is plugged directly into the computer, and Tina's laptop has only two USB-C ports, so we had to unplug the MIDIMIX, plug in the Kinect, and start Processing during the show, and then plug the MIDIMIX back in after Tina's Kinect section ended. Although we rehearsed several times, I still forgot to reload the videos, so during the flickering white screen you could still see the static video from Processing, which was a little unfortunate. Something surprising also happened: at the end, our cubes suddenly collapsed into a single cube (we still do not know why), and it looked quite good and echoed the concept of the "core" of technology as well.

When Tina was performing in front of the Kinect, I was hiding under the desk so that the camera would not capture me, which meant she could not hear me when I reminded her to lift her arms so the audience would know that she was the one performing in real time. Sadly, she did not understand me, and I think this part could have gone much better. I also think the effect would have been stronger if we had brought the audio into the Max patch. But our performance had to line up precisely with the music's breaking point, so we left the audio in the GarageBand interface and mainly worked on the real-time video during the performance.

Here is the video of the selected performance.

https://youtu.be/ZnhN2Ev6uZo

Conclusion

To be honest, my understanding of RAPS became clearer after watching everyone's performances. If I could do this again, I would not use any pre-recorded video elements, and would instead explore Max further and look at how other applications can be combined with it. The initial idea of the project came from a shared thought: something that mixes reality and fiction. From research through to implementation, we worked separately on different parts and combined them at the end, so the performance may not be entirely coherent either. We wanted the person on screen to be able to control the 3D object, or at least something beyond the numbers, but Tina said that is only possible on Windows, so we had to give it up.

However, we tried hard within our limited time and skills. I want to thank Tina, who worked with an application we had never learned before and created one of the best moments of our performance, and Eric for his help throughout the project. We explored another side of Max, learned how to shoot video so that it works better for a real-time audiovisual performance, and learned how to modify audio so that it is no longer just pieces of different songs stitched together.

For future improvements, I would work more on the audio and the Max patch. I am not sure whether it was influenced by the environment, but when I talked with a woman from another university, she felt that most of our performances and our music were not like club music: people could not dance or move their bodies to it, and she felt that was because "it is still a final presentation for a class". For this reason, I would generate more intense visuals, such as granular video through Max, and stronger rhythmic music with better beats and samples.

Sources: 

https://freesound.org/people/julius_galla/sounds/206176/

https://freesound.org/people/tosha73/sounds/495302/

https://freesound.org/people/InspectorJ/sounds/398159/

https://freesound.org/people/caboose3146/sounds/345010/

https://freesound.org/people/knufds/sounds/345950/

https://freesound.org/people/skyklan47/sounds/193475/

https://freesound.org/people/waveplay_old/sounds/344612/

https://freesound.org/people/wikbeats/sounds/211869/

https://freesound.org/people/family1st/sounds/44782/

https://freesound.org/people/InspectorJ/sounds/344265/

https://freesound.org/people/tullio/sounds/332486/

https://freesound.org/people/pcruzn/sounds/205814/

https://freesound.org/people/InspectorJ/sounds/415873/

https://freesound.org/people/InspectorJ/sounds/415924/

https://freesound.org/people/OGsoundFX/sounds/423120/

https://freesound.org/people/reznik_Krkovicka/sounds/320789/

https://freesound.org/people/davidthomascairns/sounds/343532/

https://freesound.org/people/deleted_user_3544904/sounds/192468/

https://www.youtube.com/watch?v=Klh9Hw-rJ1M&list=PLD45EDA6F67827497&index=83

https://www.youtube.com/watch?v=XVS_DnmRZBk

Reading Response 8: Live Cinema (Katie)

Both VJing and live cinema are kinds of audiovisual performance, or visual music, that emphasise "liveness". The differences between VJing and live cinema can be seen in two main respects: the audience's participation and the content.

For live cinema, as Gabriel Menotti and Ana Carvalho note, the performance takes place "in a setting such as a museum or theater. The public, instead of being absently lost amid multiple projections, is often 'sitting down and watching the performance attentively'" (85), while VJing is "stuck in nightclubs and treated as wallpaper" (89). So there is a huge difference in how the audience interacts with the performance in live cinema versus VJing. In terms of content, live cinema often has specific content; it can be "characterised by story telling" (87) in the cinema context, while VJing often acts as wallpaper in the club context.

Final project documentation—Katie

Title
In the Station of Metro

Project Description

Our project aims to depict the crowdedness and loneliness of the metro station. We were inspired by Ezra Pound's poem, "In a Station of the Metro":

In a Station of the Metro

The apparition of these faces in the crowd:

Petals on a wet, black bough

The metro station has become a very important part of people's daily lives. Often, people in the station are in a rush, with no emotion on their faces. My own experience of the metro is not so pleasant either: crowded with people and noise, it sometimes makes it hard for me even to breathe. We took inspiration from:

Raindrops #7

Quayola’s  “Strata #2,” 2009 

Perspective and Context

Our project fits into the historical context of live audiovisual performance and synesthesia. The audio and visuals are consistent with each other: when I think of the greyish images, I cannot imagine a bright melody, but instead link them to something sad and slow.

I think our performance is closer to live cinema than to VJing. It has specific content, like storytelling, that expects the audience to watch attentively, sitting or standing, rather than being a "visual wallpaper" the audience dances to.

Development & Technical Implementation

For the visuals, the two of us shot footage together in the metro station. We went at different times of day, in the morning, at noon, and in the evening, to try to capture what we wanted: being absorbed by streams of people. Then, based on the footage we had, we discussed and decided on the overall tone of the video: a rather depressed one. From there we worked separately on visuals and audio. I worked on the visuals.

We have two kinds of footage: one inside the metro station, and one of raindrops on glass. I first edited the footage in Premiere to form a basic timeline of approximately eight minutes. The first half of the video is purely crowd scenes; for the second half, I layered the raindrop image over the metro station image. In that second half we want to emphasize the loneliness of being in the metro, facing crowds of strangers, with cold and rain outside. Also, inspired by Quayola's "Strata #2", we wanted to create the effect of raindrops breaking through the glass, so I used 3D models to achieve it. The problem I faced at first was that I could not control the distribution and movement of the models. After getting help from the professor, I understood what each value and function means, experimented with the scale, size, and speed, and finally got something I wanted. Like this:

For other effects, we added slide to heighten the sense of crowdedness. The slide effect makes the black shadows connect to one another. Like this:

The rotate effect suits the raindrops very well, creating an image like a play of sparks:
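For readers who have not used the slide effect mentioned above, a rough Processing analogue of that temporal smoothing is sketched below; the real effect is a Max module, so this only approximates the trailing behavior that makes the shadows connect to one another.

// Rough analogue of the temporal "slide" smoothing used in the patch:
// each frame is only partly refreshed, so dark moving shapes leave trails
// that blur together and read as a denser crowd.

float x;

void setup() {
  size(640, 360);
  noStroke();
  background(255);
  x = 0;
}

void draw() {
  // Instead of clearing the frame, fade it slightly toward white.
  // A lower alpha here behaves like a stronger slide: longer trails.
  fill(255, 20);
  rect(0, 0, width, height);

  // A dark "figure" crossing the frame stands in for the silhouettes.
  fill(0);
  ellipse(x, height / 2, 40, 120);
  x = (x + 3) % width;
}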

This is the link to my patch: https://gist.github.com/JiayanLiu27/9b714a9ecbcdd7dbfcfbdd34c1117b58

This is my overall patch:

Performance

I was in charge of the video effects and Thea of the audio. We were super nervous before the performance because Thea's computer seemed to have some problems with the audio, and the program often shut itself down. Fortunately, everything went very well in the performance and there were no technical issues. One thing that could have been better is that the contrast and brightness of the colors on the projection screen were different from those on our own screens. We should have adjusted them a little so the audience could see the visuals more clearly.

In the performance, some parts certainly did not go the way they had in rehearsal. For example, there is one part where the visuals of different scenes change very quickly, with sound effects playing simultaneously, and the color of the video changes according to the sound. When we were performing, however, the color change got a little out of control and covered the visuals behind it.

As for the 3D models, once the slide effect was added they became hard to see. But overall I think it went very well.

Conclusion

This was my first time doing a live audiovisual performance, and I learned a lot in the process. The first lesson is to always have a backup plan, because there are a lot of uncertainties when performing live: the program may not run, the screen may shut down, and so on. For our group, it would have been better to borrow a MacBook from the IMA equipment room and prepare the patch on that computer to avoid the audio problems on the Windows machine.

I also think we could have taken more risks with this project. As it stands, it is a very safe one: we have a concrete video as the background, so things could not go very wrong even if the effects failed. For future projects, I would like to experiment with more abstract concepts and make some really wild visuals, for example exploring how different shapes and colors can transform.

RAPS | Final Documentation | Yutong Lin

White Crane, Old Dog, Fading Memory

Documentation

 

Project Description

The project is a tribute to my great-grandmother in the form of a live cinema performance. It is a retelling of her life story through my own reflection. By bringing her singing from almost twenty years ago together with my family's archival footage and my photography, I wish to remember and celebrate her by sharing her story.

The title, White Crane, Old Dog, Fading Memory, has its own connotations. "White crane by the lake" is a recurring theme and metaphor in Nakhi folk music, while "old dog" is literally what I saw a lot of in the village where my great-grandmother used to live; the old dogs wandering around the village made me think about the passage of time.

Perspective and Context

I was greatly inspired by The Light Surgeons, a London-based live cinema performance collective. True Fictions is one of their projects from about a decade ago; it investigates the concepts of myth, cinema, and American history. I was intrigued by the story just from the trailer on Vimeo. The amateur aesthetics, the score, and the editing together create a powerful arrangement of media and a deeply engaging emotion. I can only imagine how sensory and breathtaking the performance must have been with the score played live.

From their work, I became fascinated by live cinema as an art form and a means of personal expression.

Gabriel Menotti says in "Live Cinema" that live cinema has the potential to challenge the storytelling strategies of traditional theater-based or screen-based cinema, that is, to mythicize the content through multi-sensory input and the fluidity of visual arrangement and music-making. "Such extensive freedom of configuration favors works whose evocative structure is closer to poetry than to the prosaic linearity that distinguishes most movie genres, thus suggesting improvised, free-flowing abstractions" (Menotti 86). The "poetic" aspect of live cinema, loosely structured and widely sourced, can be made coherent by the magic of music, if it is edited or controlled live properly.

Another aspect of live cinema that interests me is the creator's or artist's way of interacting with the audience. "The light projected onto the screen is like a campfire around which the public gathers to absorb the performer's tales. The performer is an actor whose main job would be to pursue communication through this process, keeping it meaningful for the audience" (Menotti 88).

The presence and visibility of the "director" can be regarded as a powerful statement that helps us understand what he or she has to say. The after-party and mingling with the audience are also part of the feedback loop for both sides, the audience and the director.

Development & Technical Implementation

Link: https://gist.github.com/Amber-yutong-lin/28f089eab4176af7fb426653268958a8

Since my direction diverged slightly from what other people were doing, my workflow was also a little different.

Step #1 – music composition:

I composed my music in Logic Pro; it was my first time using it. I was sampling drum sequences from GarageBand, and Eric suggested that too much sampling could result in a generic output lacking personality, especially for listeners familiar with GarageBand or Logic. Therefore, I recomposed a more futuristic, ambient layer that goes well with my great-grandmother's folk singing.

(The drum sequence was made by my friend Fiona Chen and is used with her consent.)

Step #2 – scriptwriting and transcription:

 

I wrote a short prose piece about my memories of her from when I was younger.

Step #3 – text, image, and video curation

I chose the visual materials based on the script. The selection began even earlier, when I was reviewing my previous photographs and family footage.

Step #4 – pre-compositing and visual effects

This step was done in C4D, Max, and After Effects.

Step #5 – move to Max for live performance

The live visual elements are mainly controlled by MUTIL8R and HUSALIR for saturation, contrast, and hue. By pushing the originally black-and-white images into highly contrasted ones, I wish to visualize the color of memory and the hyperreality inherent in cinema.
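As a rough illustration of the kind of adjustment described above, here is a minimal Processing sketch of the general idea, not the actual Max modules; "archive.jpg" is a placeholder file name. Contrast is pushed by scaling brightness away from mid gray, and a hue tint is then applied to the originally monochrome image.

// Sketch of the contrast/hue treatment applied to the black-and-white images.
// "archive.jpg" is a placeholder for one of the family photographs.

PImage img;
float contrast = 2.0;   // >1 pushes values away from mid gray
float hueShift = 200;   // tint applied to the originally monochrome image

void setup() {
  size(800, 600);
  colorMode(HSB, 360, 100, 100);
  img = loadImage("archive.jpg");
  img.resize(width, height);
}

void draw() {
  loadPixels();
  img.loadPixels();
  for (int i = 0; i < img.pixels.length; i++) {
    float b = brightness(img.pixels[i]);                   // 0..100
    float c = constrain((b - 50) * contrast + 50, 0, 100); // raise contrast
    pixels[i] = color(hueShift, 40, c);                    // tint the result
  }
  updatePixels();
}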

The audio effects are played live. The reverb, echo, and filters are rehearsed to fit the visuals and the larger layer of music.

Performance

The performance generally went well and achieved what I had envisioned. I think bringing folk music into an unconventional venue for it, a nightclub, can open up more conversation about different kinds of music and about ways of thinking about heritage.

However, I have to acknowledge that I changed my mind one day before the performance and skipped the run-through of my final patch, which greatly affected my judgment of the overall experience. Next time, I need to give myself enough time to test on bigger screens and with different audio configurations. I asked a number of people who were at the performance for feedback. It was mostly positive, but since they are my friends and acquaintances, they did not have to be critical. Still, the subtitles for English readers did not work very well: people said the projection contrast was not high enough to read the letters, and some words needed to stay on screen longer. The audio filters could also have been better rehearsed and polished.

Conclusion

I really appreciated the chance to perform what I have learned in RAPS. I had never thought about making my own music, since I have no background in it, and my passion for it was cultivated during this class. I want to learn more about music-making.

I had also never thought about doing live performances. When I heard people say they almost cried after experiencing my performance, I considered it a success. This class made me rethink VJing: I think it can be cooler and more than "wallpaper"; it can be an engaging way to tell powerful stories.

Works cited: 

Menotti, Gabriel. "Live Cinema." The Audiovisual Breakthrough, edited by Ana Carvalho and Cornelia Lund.

VR in 5 Years

Accurate, Prophetic, Powerful

  • Is 5G a Game Changer for VR and AR? (yes!)

Considering the potential increase in speed, the low latency, and the computational power it makes accessible, 5G brings a lot of potential for streaming VR/AR content, another step toward a more untethered experience.

  • Create An Entire Home Gym With Oculus Quest

I think this applies more to cardio and floor workouts than to weightlifting, but having a coach guiding you in VR can give people the extra push they need.

  • VR Skin

Haptics are absolutely an important part of VR, and the sooner they migrate away from controllers, the better. This allows for a more realistic sensation of touch than moving around and holding things with a controller.

  • Adobe Appears! (in AR space)

I think AR cameras will definitely be regarded as important if we shift toward an AR-based society. Doing something like taking AR pictures in real time seems interesting.

Absolutely Silly

  • VR for Cows and Milk Production

Do I really need to say anything about this one? The headset doesn’t even cover the cow’s eyes correctly!

  • VR for Women in Labour

I feel that labour pains are so intense that a virtual environment would not help ease pre-existing pain.

  • VR Live Concerts

One of the best parts of a live concert is how analog it is: you get to meet and talk to other people, dance, and take in all the imperfections. Something is lost when that shifts into VR, and I think it loses its appeal.

  • Death of Mobile VR

Phone-based VR may be dying out, but that only considers the current generation of phones. I think there is still potential for phone-based VR in future devices that are designed with more of a VR focus.