Project 3 Documentation

from Joyce Zheng

Title

Wild 失控 (Out of Control)

Project Description

Our project is set against a modern backdrop in which people rely heavily on technology. We dropped the initial cyberpunk concept, but we still try to express the loss of control over these technologies and the disasters that follow. The Kinect-controlled section, together with the sound of breaking glass and a flickering white screen, works as a transition: everything before it depicts the prosperity of, and reliance on, technology in modern society, while everything after it depicts society’s collapse.

Like something intermediate between live cinema and VJing, we are trying to tell a story rather than just create patterns. The videos we added make our realtime performance read more like a realistic story than pure fiction. By connecting reality with fiction, we want to raise people’s awareness of how addicted we are to our phones, and how our lives are changed, and even controlled, by modern technologies.

Our inspiration comes first from the videos we watched in class. The raining-window piece gave us our initial idea of shooting our own footage in the real world. “Real-time VJ” and “Real-time motion graphics to redefine the aesthetics of juggling” are our biggest inspirations; from them we decided that the project should include something whose on-screen movement we can control in real time.

Perspective and Context

One of the most important parts of our project is the transition, where the sound fits the visuals most closely. Although our music was made separately from the visuals, we tried our best to make the two echo each other and hold the whole performance together. Looking at Belson and Whitney’s experience, Belson’s work tries to “break down the boundaries between painting and cinema, fictive space and real space, entertainment and art,” and Belson and Jacobs moved visual music off the screen and out into “three-dimensional space.” While combining the audio and the video, we are likewise trying to interpret music in a more three-dimensional way.

Jordan Belson’s work also gave us an important frame for the performance. Film historian William Moritz comments that “Belson creates lush vibrant experiences of exquisite color and dynamic abstract phenomena evoking sacred celestial experiences”. His imitation of celestial bodies inspired us to put something at the center of the screen; it does not have to be anything specific, but it stands for something: perhaps the expansion of technology, or technology itself.

According to Live Cinema by Gabriel Menotti and Live Audiovisual Performance by Ana Carvalho, Chris Allen describes live cinema as a “deconstructed, exploded kind of filmmaking that involves narrative and storytelling” (Carvalho 9). Our idea for the live performance corresponds to this concept: we translate our idea into a few scenes of breakdown and jump from one to another. Moreover, going back to Otolab’s Punto Zero, we can see that they set up strong interactions between the artwork and the audience. Initially, we wanted to invite an audience member to come up and interact with the Kinect, to see how the visual effects would change with their movement. Due to practical limitations, Tina performed that part instead, but we would have chosen a genuine audience member if possible.

Development & Technical Implementation

Gist: https://gist.github.com/Joyceyyzheng/b37b51ccdd9981e507905ddcbbda988d

For our project, Tina was responsible for the Processing side, including the flying green numbers and their interaction with the Kinect, and therefore with her. She also worked on connecting the Max patch and Processing. I was responsible for generating the audio and building the Max patch, and for processing the videos we shot together.
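To give a rough idea of what that Processing side looks like, here is a minimal sketch of the falling-green-numbers effect. It is only an illustration, not Tina’s actual sketch: the real one read hand positions from the Kinect, which I stand in for with mouseX here, and the column count and speeds are my own placeholder values.

    // Minimal "flying green numbers" sketch; mouseX stands in for the
    // Kinect hand position that drove the real version.
    int cols = 40;
    float[] ys;      // current y position of each falling digit column
    float[] speeds;  // per-column base fall speed

    void setup() {
      size(800, 600);
      ys = new float[cols];
      speeds = new float[cols];
      for (int i = 0; i < cols; i++) {
        ys[i] = random(height);
        speeds[i] = random(2, 8);
      }
      textSize(18);
      noStroke();
    }

    void draw() {
      fill(0, 40);                   // translucent black leaves fading trails
      rect(0, 0, width, height);
      fill(0, 255, 70);              // bright green digits
      float push = map(mouseX, 0, width, 0.5, 3.0);  // the "hand" speeds columns up
      for (int i = 0; i < cols; i++) {
        float x = i * width / (float) cols;
        text((int) random(10), x, ys[i]);  // a random digit 0-9 per column
        ys[i] += speeds[i] * push;
        if (ys[i] > height) ys[i] = 0;     // wrap back to the top
      }
    }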

As I mentioned previously, we developed our audio and video separately, which is one place where I think we could improve a lot. I developed the audio mainly in GarageBand, mixing tracks built from freesound.org samples with different beats and bass lines. I tried an app called Breaking Machine to generate sound, but given the time we had, GarageBand turned out to be the better choice. I listened to a few real-time performances (e.g. 1, 2) and some cyberpunk-style music and settled on the style I wanted. The first version of the audio looked like this:

Fig.1. Screenshot of audio version 1

Tina’s and our friends’ reaction was that it was too plain: it was not strong enough to carry our emotions. Eric suggested adding some more intense sounds as well, such as breaking glass. With those suggestions and some small later adjustments, I selected samples I thought fit the piece and arrived at the final audio. I decided to use a lot of choir sounds, since they sound really powerful, and layered different kinds of bass against each other on the same track. The general direction of the revision was to make everything stronger and more intense. At first we wanted to put the audio into Max so we could control it alongside the video, but we found that inside Max we could not see where in the track playback was, and it was not feasible for us to control two patches at once (since one was not enough), so we left the audio in GarageBand.

Fig.2. Screenshot of audio version 2 -1
Fig.3. Screenshot of audio version 2 -2

For the Max patch, we turned to the jit. objects instead of filters like Easemapper. I searched for a few tutorials online and built the patch; the tutorials were really helpful, and the final video Max produced was something we wanted to use. I combined the two objects that Max renders, and with Eric’s help they could be shown on the same screen simultaneously, which became part of our final output. After the combination, we started adding videos and putting all the outputs on one screen. One problem we ran into: we tried to use the AVPlayer to play more than eight videos at once, which crashed the computer. At Eric’s suggestion we used a umenu instead, which was much better and let the project run smoothly (a rough sketch of the umenu idea appears after the patch screenshots below).

Fig. 4. Two separate patches
Fig. 5. The original patch before operating by Tina
Fig. 6. Final patch screenshot
Fig. 7. Present mode patch
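The umenu fix boils down to keeping a single player alive and loading files into it on demand, instead of running eight decoders at once. As a loose sketch of that idea outside Max (not our actual patch), here is a Processing version using its video library; the clip names are placeholders:

    import processing.video.*;

    // One Movie object, reloaded on demand — the same idea as driving a
    // single player from a umenu instead of running eight players at once.
    String[] clips = { "city.mp4", "hands.mp4", "wechat.mp4" };  // placeholders
    Movie player;

    void setup() {
      size(1280, 720);
      loadClip(0);
    }

    void loadClip(int i) {
      if (player != null) player.stop();   // release the old clip first
      player = new Movie(this, clips[i]);
      player.loop();
    }

    void keyPressed() {
      // number keys 1-3 select a clip, like picking a umenu entry
      if (key >= '1' && key <= '3') loadClip(key - '1');
    }

    void movieEvent(Movie m) {
      m.read();
    }

    void draw() {
      if (player != null) image(player, 0, 0, width, height);
    }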

We shot the footage together, and I processed it with different filters. Below are screenshots of the videos we used. All of them are original footage: the city/hands/email/WeChat-message clips express the prosperity of the city and how heavily people rely on technology, while the other clips show the world collapsing after the explosion. Our Max patch therefore works somewhat like a video generator: it brings together all the input sources, including the Kinect input, the videos, and jit.world (a loose sketch of this mixing idea follows the screenshots below).

Fig. 8. Video 1 screenshot
Fig. 9. Video 2 screenshot
Fig. 10. Video 3 screenshot
Fig. 11. Video 4 screenshot
Fig. 12. Video 5 screenshot
Fig. 13. Video 6 screenshot
Fig. 14. Video 7 screenshot
Fig. 15. Video 8 screenshot
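In that spirit, a loose Processing analogue of the patch’s mixer role might composite a camera feed, a movie clip, and generated graphics in one window. This is only an illustration of the idea, not a translation of the Max patch, and the clip name is a placeholder:

    import processing.video.*;

    // Composite three input sources in one window: a movie, the camera,
    // and generated graphics — loosely mirroring the patch's role.
    Capture cam;
    Movie clip;

    void setup() {
      size(1280, 720);
      cam = new Capture(this);             // default camera
      cam.start();
      clip = new Movie(this, "city.mp4");  // placeholder clip
      clip.loop();
      textSize(24);
    }

    void captureEvent(Capture c) { c.read(); }
    void movieEvent(Movie m)     { m.read(); }

    void draw() {
      background(0);
      image(clip, 0, 0, width, height);    // base layer: the video
      tint(255, 120);                      // translucent camera layer on top
      image(cam, 0, 0, width, height);
      noTint();
      fill(0, 255, 70);
      text((int) random(10), random(width), random(height));  // generated layer
    }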


Performance

The performance at Elevator was an amazing experience. We were nervous, but overall everything went well. During the show I was mainly responsible for controlling the videos and the audio, while Tina controlled the cube and the constantly changing 3D object; while she was performing through the Kinect, I controlled the whole patch. There were a few problems during the performance. First, the sound was too loud, especially in the second half; I could feel the club shaking even on stage. Second, we made a few mistakes, because the Kinect only works when plugged directly into the computer, and Tina’s laptop has only two USB-C ports, so during the performance we had to unplug the MIDIMIX, plug in the Kinect, and start Processing, then plug the MIDIMIX back in after Tina’s Kinect section ended. Though we had rehearsed several times, I still forgot to reload the videos, so while the white screen was flickering you could still see the static video from Processing, which was a little awful. Something surprising also happened: at the end our cubes suddenly merged into a single cube (we still don’t know why), and it looked quite good and even echoed the concept of the “core” of technology.

While Tina was performing in front of the Kinect, I was hiding under the desk so the camera would not capture me, which meant Tina could not hear me when I reminded her to lift her arms so the audience could tell that she was the one performing in real time. Sadly she did not understand me, and I think that part could have gone much better. I also think the effect would be much better if we combined the audio into the Max patch; but our performance had to stay precisely synchronized with the music’s breaking point, so we left the audio in GarageBand and concentrated on the realtime video during the performance.

Here is the video of the selected performance.

https://youtu.be/ZnhN2Ev6uZo

Conclusion

To be honest, my concept of RAPS became clearer after watching everyone’s performances. If I could do this again, I would not use any video elements, and would instead explore Max further and look into combining other applications with it. The initial idea for the project came from an idea we shared: something that mixes reality and fiction. From research through to implementation, however, we worked separately on different parts and only combined them at the end, so the performance may not feel entirely coherent. We wanted the person on screen to be able to control the 3D object, or at least something beyond the numbers, but Tina said only Windows could achieve that, so we had to give it up.

However, we did try hard within our limited time and skills. I want to thank Tina, who worked with an application we had never learned and created one of the best moments of our performance, and Eric for his help throughout the project. We explored another world in Max: how to shoot video so it reads as RAPS rather than film, and how to modify audio so it is no longer just pieces of other music or familiar songs.

For future improvements I would work more on the audio and the Max patch. I am not sure how much the environment influenced this, but when I talked with a woman from another university, she felt that most of our performances, and our music, were not like club music: people could not dance or move their bodies to it, and she suspected this was because “it is still a final presentation for a class.” For this reason, I would generate more intense visuals, such as granular video in Max, and more strongly rhythmic music with better beats and samples.

Sources: 

https://freesound.org/people/julius_galla/sounds/206176/

https://freesound.org/people/tosha73/sounds/495302/

https://freesound.org/people/InspectorJ/sounds/398159/

https://freesound.org/people/caboose3146/sounds/345010/

https://freesound.org/people/knufds/sounds/345950/

https://freesound.org/people/skyklan47/sounds/193475/

https://freesound.org/people/waveplay_old/sounds/344612/

https://freesound.org/people/wikbeats/sounds/211869/

https://freesound.org/people/family1st/sounds/44782/

https://freesound.org/people/InspectorJ/sounds/344265/

https://freesound.org/people/tullio/sounds/332486/

https://freesound.org/people/pcruzn/sounds/205814/

https://freesound.org/people/InspectorJ/sounds/415873/

https://freesound.org/people/InspectorJ/sounds/415924/

https://freesound.org/people/OGsoundFX/sounds/423120/

https://freesound.org/people/reznik_Krkovicka/sounds/320789/

https://freesound.org/people/davidthomascairns/sounds/343532/

https://freesound.org/people/deleted_user_3544904/sounds/192468/

https://www.youtube.com/watch?v=Klh9Hw-rJ1M&list=PLD45EDA6F67827497&index=83

https://www.youtube.com/watch?v=XVS_DnmRZBk

Reading Response 8 – Live Cinema from Joyce

Live cinema and VJing can be seen as two branches of live audiovisual performance. Through the reading, I can tell that a group of artists, represented by Amy Alexander, Toby Harris, and Makela, discriminate against VJing: they are trying to build a hierarchy within the category of live audiovisual performance. According to Live Cinema by Gabriel Menotti and Live Audiovisual Performance by Ana Carvalho, Toby Harris articulates the monotony of everyday VJing presentations, “stuck in nightclubs and treated as wallpaper” (89). In contrast, live cinema is an art that “invites the audience to construct narrative and cultural critique” (89). What’s more, the connection between live cinema and narrative is made explicit in a statement by Chris Allen, a member of The Light Surgeons, who describes their work as a “deconstructed, exploded kind of filmmaking that involves narrative and storytelling” (91).

In addition to Toby Harris’s contrasting descriptions of the two forms of live art, Makela adds a primary-and-secondary relationship between them. On a closer reading of Makela, she does assert that cinema now includes “all forms of configuring moving images”. Nevertheless, she insists that live cinema is “in essence artistic”, and can therefore be set apart from VJing. More explicitly, Makela even remarks that “many Live Cinema creators feel the need to separate themselves from the VJ scene altogether, in order to establish their own artistic goals, which would rarely find an appreciative audience in a club environment” (93). She is not only suggesting a hierarchy of values in the realm of audiovisual performance but also imposing a stigma on VJing.

Another difference between VJing and live cinema is the position of the performer. Performing live cinema means not falling into contingent collaborations with whichever DJ, lighting engineer, or set producer happens to be on that day’s shift, as a VJ often must; in live cinema, the performer directs every aspect of the spectacle and is never relegated to a secondary role (95). Along with this directorial control, live cinema also claims other advantages. To call a performance “live cinema” is more than invoking a background: it inscribes the performance in a tradition, supposedly dissolving any suspicion that might exist about its cultural relevance. Beyond implying a cultural background, live cinema also upholds a particular cultural meaning and relevance, and the concept of live cinema can be useful not only for audiovisual performance but also for cinema itself (99). In practice, however, live cinema’s distinction from other media becomes enforced, allowing it to keep a certain prominence instead of challenging cinema’s specificities (101).

The definition and meaning of the term “live audiovisual performance” are much more general. Both live cinema and VJing are categories under live audiovisual performance. It is general, but also complex: live audiovisual performance does not comprise a specific style, technique, or medium, but instead gathers a series of common elements that simultaneously identify a group of artistic expressions as well as specific works, which do not necessarily fit within any of the particular expressions that constitute the group (131). Amy Alexander differentiates between VJing and live cinema but does not address live audiovisual performance as a practice with its own particular features (135). I found some videos on the internet; the first is Otolab’s Punto Zero.

Screenshot of Otolab’s Punto Zero

Setting the audience inside the range of the projection, the piece engages strongly with the audience, which is an explicit characteristic of live cinema. In contrast, a VJ performance from Dalanars University

Screenshot of VJ performance from Dalanars University

seems more static – not that the image itself is static, but that there is no interaction between the audience and the piece; it is more like an interaction between the performer and the computer. As Kinetic Lights – Atom

has shown, live performance (setting VJing aside) often includes a physical component, a feeling of telling a story, and an implied background. What’s more, Guy Sherwin’s ‘Man with Mirror’

indicates that “a thorough interpretation of live cinema would mean taking these and other elements that collaborate in the continuing production of moving images more seriously into account” (103). Combining these examples with the earlier criticisms of VJing, it would in fact be more appropriate to construct flexible structures connecting the different works rather than to use a rigid series of definitions (141). The future development of live audiovisual performance should always be “pointing ahead to the next turn that will be provoked technologically, politically, aesthetically, or by affections between the elements of the community” (143).

RAPS Assignment 5: Multi 3D Object from Joyce

Gist

(I uploaded the video through the media library, but it does not seem to work, so I have put the YouTube links below.)

For the patch, I ran into some difficulty applying a texture to the 3D model. The jit.gl.texture object needs to be given my object’s name, but I had not paid attention to that. Also, tex_map can be used directly, since it is an attribute of jit.gl.model, and jit.gl.texture can be used separately, with video generators feeding into it.

At first, I downloaded an .obj file of an eyeball, but the result did not turn out well: the eyeball’s texture is a separate .jpg file, so what Max read in was only a white ball with a strange shape. Eventually, I used a human body model instead. I tried to use the camera capture as the texture, but I could not distinguish anything on the body at all; if I use attracter and Easemapper with other Vizzie effect modules, the texture keeps changing, which looks better. When changing the number for jit.gl.multiple, I found that I could not make the bodies rotate individually, so I added a rotation effect. Moreover, I used Husaliar to improve the final visual effect.
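For comparison, the multiple-copies idea is easy to sketch outside Max. Below is a rough Processing equivalent, not a translation of my patch: it loads an .obj file and draws a grid of copies, giving each copy its own rotation phase (the per-copy rotation I could not get from jit.gl.multiple). The file names body.obj and skin.jpg are placeholders, and the texture only applies if the model has UV coordinates.

    // Rough analogue of jit.gl.multiple: one model, many textured copies.
    PShape body;
    PImage tex;

    void setup() {
      size(800, 600, P3D);
      body = loadShape("body.obj");  // placeholder model file
      tex = loadImage("skin.jpg");   // placeholder texture; model needs UVs
      body.setTexture(tex);
      noStroke();
    }

    void draw() {
      background(0);
      lights();
      float t = frameCount * 0.02;
      for (int i = 0; i < 5; i++) {
        for (int j = 0; j < 3; j++) {
          pushMatrix();
          translate(120 + i * 140, 150 + j * 160, -200);
          rotateY(t + i + j);   // phase offset: each copy rotates on its own
          scale(0.5);
          shape(body);
          popMatrix();
        }
      }
    }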

I am thinking about improving the project by creating a 3D object myself (though my earlier attempt did not turn out very well). One reason I did not try is that I cannot control the shape of the 3D object I create. Interestingly, adjusting the position, rotation angle, and scale parameters has a huge influence on the 3D object. The dim parameter also shapes the final output: I love the screen filling up with bodies, but it was a real mess, so I lowered the dim value. The 3D object is really interesting and versatile, and I think I am going to use it in my final project.

Below are screenshots of my patch and videos of the patch’s final output.

https://youtu.be/DA0xNvM9Pzk

https://youtu.be/QHGKm2XNQCA

3D model resource: https://free3d.com/3d-models/face

RAPS Assignment 4 from Joyce

Gist Link

Before adding effects to the audio from the video, I used Granular and Cloud to create extra layers of synthesized audio. I fed the Jongly sample into the granular module, since to my ear it fit the music best. Cloud can produce amazing sound (though a little noisy) once the offset and fatness are adjusted.

In the video for this assignment, the person’s movement looks like qigong practice in China. However, when only a second or less of the video is played, it looks like he is sleepwalking: according to the Cleveland Clinic, people who sleepwalk may sit up in bed and repeat movements such as rubbing their eyes or tugging on their pajamas. Though the person in the clip is awake, his repeated movements make the scene feel unreal. Among the effect modules I found an amazing one: Gigaverb. With the right parameters it produces an eerie, constantly layered echo, similar to the sound of a dream or of non-reality. Therefore, after working out the other functions, I added a Gigaverb to all three audio tracks. Reverb 1, Reverb 2, and Flanger are all effects that give the original audio an elusive quality.
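As a rough illustration of what a huge reverb does to a dry clip — outside Max, and with an ordinary reverb rather than Gigaverb’s own parameters — here is a minimal sketch using Processing’s sound library. The file name and the room/damp/wet values are placeholders:

    import processing.sound.*;

    // Route a clip through a large, mostly-wet reverb to get the
    // smeared, dream-like tail described above.
    SoundFile clip;
    Reverb reverb;

    void setup() {
      size(200, 200);
      clip = new SoundFile(this, "movement.wav");  // placeholder file
      reverb = new Reverb(this);
      reverb.process(clip);       // send the clip through the reverb
      reverb.set(0.9, 0.2, 0.8);  // big room, low damping, mostly wet
      clip.loop();
    }

    void draw() {
      // mouseX performs the dry/unreal balance live
      reverb.wet(map(mouseX, 0, width, 0.0, 1.0));
    }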

Reading Response 7 from Joyce

VJ, a new occupation that has appeared in recent years, can be read as “visual jockey” or “visual jammer.” “Visual jockey” comes from “video performance artists who create live visuals, in parallel with a disk jockey” (106). They are also called visual jammers simply because of the way they mix video. As the counterpart of the DJ, the VJ does with images largely what the DJ does with music. VJing usually blends various image formats, such as real video loops, generated visual material, and found footage from movies or photography, fragments them structurally, and creates collages and mixes from them. VJing has created a visual format that defies traditional forms of visual narration (108).

This new visual format is easy to distinguish, and it relies heavily on its liveness and performativity. The presence of the VJ is important, since without a person on stage performing, the audience cannot tell whether the visuals are being produced in real time. What’s more, according to Eva Fischer, much as with electronic music performances, which go well beyond the act of pressing the play button, producing a video clip that is played back and projected is not sufficient for something to be defined as a “VJ performance.” VJing — like any other performative format — stands for liveness, transience, and uniqueness. But even more than, for example, a live cinema performance, which is usually based on a dramaturgical audio-visual concept, VJing is pure improvisation (7). This means a VJ performance at a particular time and place is a live, performative event that is unique and can never be reproduced. It corresponds to one of the most valuable qualities of theater shows and musicals: fans buy tickets to the same show again and again because every performance is different, even two on the same day, one in the afternoon and one in the evening.

However, VJing still has a long way to go. The label “visual wallpaper” reflects the dissatisfaction of many VJs, who report being treated and perceived increasingly unfavorably by hosts and audiences compared with the musicians (113). VJs have long been treated unequally compared to other artists, since what they do, especially in clubs, is usually to create atmosphere rather than tell a story (115). To clarify their occupation, most VJs supplement it with a second or third job title. As a musical fan, I appreciate the visual art of VJing very much for its uniqueness and performativity, and I hope it can one day become an art form accepted by a wide audience.