RAPS Project 3 – Chenlan Yao (Ellen)

Title

Social Media

Project Description

As the name suggests, my project is about social media. Socializing patterns have changed a lot due to the development of mobile phones and social media. In China specifically, WeChat has become the most frequently used chatting app, and takeaway-ordering apps such as ELEME (饿了么) are becoming more and more popular. The development of social media and apps has changed the way we socialize: they offer us convenience as well as access to a wide range of information; however, they also control us to some extent, as we rely and focus on them so much that we have neglected and lost some important aspects of life. Therefore, I want to portray the relationship between people and the ways they socialize, in order to reflect critically on the impact of social media today.

Undoubtedly, the development of smartphones and social media has created many conveniences in our daily life. In my project, I use real-life examples (WeChat and ELEME) to reveal the connection between social media and people in a concrete way. By the same token, I drew inspiration from topology to demonstrate the interaction between social media and people in an abstract way. As a geometric concept, topology describes the relationships among things through lines and their ramifications: arcs, nodes, polygons, and so on, which makes it a good way to abstract the social network. Combining the concrete (real-life examples) and the abstract (topology) can reveal the complexity of the interaction between people and social media.

On the other hand, social media has also created problems, which is the other aspect I want to include in my project. I got inspiration from a YouTube video called “Deleting Social Media for 30 Days Changed My Life” by Yes Theory. A critical question was raised: “As we gain connectivity [from social media], what have we lost?” To explore how much people rely on social media nowadays, the host challenged himself to delete all social media apps from his phone for 30 days and see how he and his lifestyle changed. The video aims to make people aware of aspects of life that have been neglected because of dependence on social media; for instance, activities such as reading, working out, and hanging out with friends without checking one's phone can be easily achieved and enjoyed without social media. The host engaged more in his life instead of scrolling through his screen all day and achieving nothing, which reflects the downside that social media has brought us. Therefore, I wanted to cover both the advantages and the disadvantages of social media to show its impact in an integrated and critical way.

Perspective and Context

Considering that the performance would be conducted in an underground club, I wanted the audience to feel involved in my project and to do a typical VJing performance. According to Eva Fischer in the chapter on VJing, “the viewer response [in VJing] plays an important role. Interaction with the audience, albeit on a subconscious level, has a big influence on the result of the performance, which never occurs in the same manner twice” (115). In that atmosphere, I believe the more the audience gets involved in my performance, the better my project works. Therefore, I wanted both the audio and the video to offer a strong sense of impact and shock: the sound would contain popular trap beats and elements of pop music; the visuals would use simple patterns and shapes with high-contrast colors. Power and energy are what I wanted to use to shorten the distance between my performance and the audience and to let them enjoy themselves during the performance.

For some specific patterns in the visual part, I gained inspiration from Ryoji Ikeda and his works Formula and Data Matrix:

Formula: https://www.youtube.com/watch?v=OwuI0sGm0rk&t=493s

Data Matrix: https://www.youtube.com/watch?v=k3J4d4RbeWc&t=132s

I really appreciate how he works with simple lines and cubes. As the geometric concept of topology is an essential part of my project, the relationships and movement of the shapes in Ryoji Ikeda's work offered me some stylistic ideas.

Development & Technical Implementation


Audio Patch: https://gist.github.com/Ellen25/29b33ef48300661610a88c68ca7475d5

Video Patch: https://gist.github.com/Ellen25/3954b13e9a25fcbbd96fa636a34e7724

Audio part: Because I wanted the visual part to reflect the geometric relationships, which are quite abstract, the audio part is more realistic. Several real-life sounds are included (WeChat, ELEME) in order to resonate with the audience and make them feel closer to the performance. I asked my friend to pretend to be a delivery man and recorded his voice. The content is a typical takeaway-delivery scenario: “Hello, your take-out food is ready. I'm at the front door of the building. Can you come and get it? Wait, what? Put it in the storage room? Ok, sure. It will be in the storage room. Remember to pick it up. Thank you!” I am sure most people have encountered a situation like this, so the audience should be familiar enough with the content of the recording. I wanted to keep the voice and the words clear, so it is used at the very beginning of the performance. This real-life scenario works as an intro that leads the audience into the performance and makes sure they are immersed in the environment and atmosphere I created. At the same time, I asked my friend Bongani, who makes rap music, for one of his rap songs, ELEME. I cut a part of the song with the lyrics “say 你好, 你的外卖到了” (Say hello, your take-out food is ready) to add some rhythm to the delivery situation and to make the audio more energetic.

The following part is about WeChat. I recorded several sounds related to WeChat, including the notification sound, a WeChat phone call, a WeChat money transfer, a message being sent, and so on. These sounds were combined and composed in GarageBand, with some beats added to create a cadence. A friend also helped me edit a piece of the soundtrack in JL, and I mixed several elements together. The density of the sounds changes over time, from simple to complicated. It reflects the relationship between users and WeChat: at the beginning, WeChat helps the user communicate and offers convenience; after a while, the user becomes busier and busier with WeChat, and WeChat starts to control its user's daily life.

Screenshot: JL session

Screenshot: GarageBand session

After the WeChat part, I added a recording from the talk show Rock & Roast as a transition. The host described a scene in a nightclub in which a man took advantage of the background music and the atmosphere to socialize. The situation the host performed is ironic: he wanted to show the contrast between himself, a person too shy to socialize and unable to step out of his comfort zone, and the man in the nightclub, and to reflect on the socializing problems of young people today. I use this as the transition from the demonstration of real-life situations to a more reflective section. In the following part, I created some wavy, mysterious background tracks to take the atmosphere of the performance deeper, while adding some breaking trap beats to the base layer to keep it from becoming too dull.

At the end of the performance, I made another melody based on my own playing of the guzheng, a traditional Chinese instrument. Chinese styles and elements have become more and more popular in music and performance, and it feels special and cool to combine Chinese elements with modern techniques, which is undoubtedly one reason I kept it in my performance. What is more, I did not want my performance to end in a sad and heavy way; instead, I wanted to leave some space for the audience to refresh and reflect for a while, and to take their time to review what they had just experienced: how do they feel? What do they gain from social media? What do they lose?

Visual part: As I have mentioned, topology is essential for my project. Jitter's jit.gl objects were the best choice for composing the visuals, as they make it possible to draw and create geometric shapes. At the beginning, I was thinking about building various cool effects out of many different components in Max; however, Eric suggested that jit.gl already contains a lot for me to play with. Therefore, my Max patch is quite easy to understand: I copied and pasted several jit.gl worlds and overlapped them with each other to create different effects. By changing the drawing mode of the gridshape to lines and giving it a closed shape (sphere, cone, or circle), I could draw lines and wireframe shapes.
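The actual shapes were drawn in Max with jit.gl.gridshape, but the same wireframe-overlap idea can be sketched quickly in Processing. The sketch below is only an illustration of that idea, not my Max patch: two low-resolution wireframe spheres rotate at different speeds so that their lines overlap into new patterns.

```
// Illustration only: two overlapping wireframe spheres, approximating the
// "lines drawn on closed shapes" idea from the Max patch.
float angleA = 0;
float angleB = 0;

void setup() {
  size(800, 600, P3D);
  noFill();          // draw only the edges, like a lines drawing mode
  stroke(0, 255, 180);
  sphereDetail(12);  // low resolution keeps the individual lines visible
}

void draw() {
  background(0);
  translate(width / 2, height / 2);

  // First wireframe sphere
  pushMatrix();
  rotateY(angleA);
  sphere(150);
  popMatrix();

  // Second wireframe sphere, rotated differently so the lines overlap
  pushMatrix();
  rotateX(angleB);
  rotateZ(angleB * 0.5);
  sphere(150);
  popMatrix();

  angleA += 0.01;
  angleB += 0.017;
}
```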

Screenshot 1

Screenshot: video patch

In the WeChat part, I recorded several screen captures, including sending a WeChat message and having a WeChat phone call, and used them as the background. To create an increasingly chaotic effect over time, I also combined two EASEMAPPR modules to get a colorful, geometric background. The EASEMAPPRs can be applied to several elements to make plain shapes colorful at certain moments.

Screenshot: EASEMAPPR modules

A 3D model is included as well. I asked my friend to make a 3D model of “ASSC”, referring to Anti Social Social Club, for the reflective part (after the transition); however, the final effect did not meet my expectations, which is discussed further in the Performance section of this documentation.

Screenshot: 3D model

Performance

Overall, the performance went well. Few mistakes were made, which satisfied me a lot. Because I did not have enough time to manage too many knobs and variables, the audio part was mostly pre-made, and I focused more on generating the visual part during the performance.

Screenshot: audio patch

Scale, dim, and rotatexyz are the three parameters I manipulated most frequently during the performance. In the intro, I changed the dim of the gridshape to move the shape from lines to triangles and so on, matching the beat of the background music. In the following part, I adjusted the x and y scale and used rotatexyz to move the shapes and overlap them so that special colors and shapes could emerge. One thing I found effective was changing the MIXR mode, as the colors and effects differ greatly depending on it. With the help of the MIXR mode, I turned the screen capture of a WeChat phone call into the background of the performance, since it has a black background with a few colorful lines on it. When shapes from jit.gl overlapped with it, special shapes came out.
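For readers who want a feel for this kind of knob-to-parameter mapping outside of Max, here is a minimal sketch in Processing using the themidibus library. It is only an illustration under assumptions (the device index and controller number 16 are placeholders, and my performance did this inside Max, not Processing): one MIDI knob drives the rotation and scale of a wireframe sphere.

```
// Illustration only: one MIDI knob mapped to rotation and scale of a wireframe shape.
// My performance did this in Max; this Processing sketch just shows the general idea.
import themidibus.*;

MidiBus midi;
float knob = 0;  // last received controller value, 0-127

void setup() {
  size(800, 600, P3D);
  MidiBus.list();                  // print available MIDI devices to the console
  midi = new MidiBus(this, 0, -1); // input device 0 (assumed), no output
  noFill();
  stroke(255);
  sphereDetail(8);
}

void draw() {
  background(0);
  translate(width / 2, height / 2);
  float angle = map(knob, 0, 127, 0, TWO_PI);  // knob -> rotation
  float s = map(knob, 0, 127, 0.5, 2.0);       // knob -> scale
  rotateY(angle);
  scale(s);
  sphere(120);
}

// themidibus callback: fires whenever a controller (knob or fader) moves
void controllerChange(int channel, int number, int value) {
  if (number == 16) {  // controller number 16 is an assumption; match it to your device
    knob = value;
  }
}
```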

Screenshot: triangle shapes

Screenshot: performance

I planned a lot, but quite a few problems still came up during the performance. Eric said I was trying to do too many things at the same time. When I was practicing before the performance, the visuals ran smoothly even when I adjusted several knobs at once; however, the final result was not what I expected, as the output kept seizing up the whole time. I was nervous and worried about it on stage, but I just wanted to do something (I did not even know what I was doing at that moment) to save my performance, so I kept turning knobs and scrolling the screen, which only made things worse. Fortunately, a friend commented afterwards, “I didn't notice. I thought you meant to do that as an effect.”

I also tried to embed the 3D model into the project, as mentioned in the last part. However, because I failed to make a special texture for it, the 3D model turned out plain and quite strange in the context of the whole project, especially on a big screen. So I showed it only briefly and removed it quickly during the performance.

Conclusion

At the starting point of this project I brainstormed a lot of ideas that I wanted to put into it. I succeeded in focusing on some real-life examples (WeChat, ELEME) and making them as specific as I could; however, the narration of the whole performance needs improvement. The performance was divided into several parts, which did not flow fluently enough as a whole. I tried to add transitions and phone-call sounds to link these parts, but the emotional arc across the whole piece still needs work. I am not that confident about the final result, as my original intention was to make something that goes beyond simply being cool or lit and conveys points that can be discussed further.

For the technical part, I think I relied too much on software like GarageBand and JL rather than Max when making the audio. I need to spend more time developing audio in Max and adding effects to it. In the visual patch, I have too many things (five jit.worlds and several effects), and I believe there are ways to simplify it.

The experience of VJing was really interesting and I learned a lot, both technically and practically. A lot of accidents happened along the way: Max crashed frequently; some messages to objects that were theoretically correct did not work… but fortunately, I gained my first experience of VJing on stage in a real club instead of simply being in the audience as before. A music label planning to organize some performances even asked me to cooperate and perform with them. The experience really taught me a lot and helped me understand VJ culture and VJs' work better.

RAPS Final Project: Wild (Tina)

Title

Wild 失控

Project Description

Our project's name is Wild. We got the inspiration from the movie Blade Runner. In the movie, artificial intelligence is designed to be nearly the same as humans, and it is really hard to tell the difference between them. They are built by humans, yet they are stronger than humans; when they come to fight with humans for limited resources, humans are hurt by the creatures we created ourselves. It is similar to the first project I did in Interaction Lab: we designed a short scene play about a future memory tree from which people can upload and download memories and knowledge, but when one person gets too greedy and takes too much from the memory tree, he is hurt by it. In this project, we want to present humans' over-dependence on technology. Technology is a double-edged sword: it can make our life more convenient, but when we depend on it too much, it hurts us. The project is named “Wild” after the scene in which all the objects, as well as the music, lose control and go wild.

Perspective and Context

Our project consists of three parts. The first part is the music, which Joyce made using GarageBand. We wanted to build a contrast between how people rely on technology and what happens when we lose control of it, so we decided to make the first part of the music peaceful and mild; at the turning point, the music crashes and turns crazy and wild. In this way, we create a connection between the music and the visual scene. The second part is about the Kinect. I got the inspiration for using the Kinect to create interaction between the performer and the project from this contemporary ballet dance:

https://www.youtube.com/watch?v=NVIorQT-bus&feature=youtu.be 

I really love how the dancer's movements influence the objects and projection next to him, and how, at the same time, the objects influence him as well. Rather than simply controlling the piece, the performer himself becomes part of it; he and the project become a whole. Therefore, I decided to put myself into the performance. I originally planned to use the Kinect's depth function to control the movement of the object in Max, but because my computer is a Mac, many functions, including depth detection, could not be used. So in the end I just used the shape capture to capture the outline of my body. In this project, I use my hands to receive the 0s and 1s, which represents humans receiving digital information through technology (since the technologies we have developed so far are all based on the binary system). The digits disappear wherever I am, but later on, with an explosive sound in the background music, the digits lose control and move randomly across the screen.

The third part is the Max patch. In the beginning, we wanted to design an object that changes from regular movement to crazy movement, to represent the loss of control. But later we found it too boring and dull to use just a single object, so we decided to use more videos to enrich the content. We use a scene of a bustling city as the beginning of the project to indicate the development of the city; then green 0s and 1s drop from the sky, and the whole city is under the control of digits. With this, we want to show that the city runs with the help of technology. Then come several short clips of how we depend on technology in our daily life. Then the main object appears; it changes and moves regularly, following the music. Later, we add the scene from the Kinect, and with a sudden glass-breaking sound, we cover the whole screen with a white canvas very quickly, which represents the breaking point. After this, our core object goes wild: it turns from smooth to sharp and jagged, and the whole scene goes crazy. We add many different videos to show what happens when we rely on technology too much and it crashes.

Development & Technical Implementation

patch code

In our group, Joyce finished the music and I was in charge of the Kinect and Processing part, and then we developed the Max patch together. Through this process, I learned to connect different ideas into a more diverse project, and I really enjoyed cooperating with her.

While dealing with the Kinect, the biggest issue we met was the data transmission from the Kinect to the computer. The amount of data is too big for a USB-to-USB-C adapter, so when we tried to connect the Kinect through an adapter, nothing showed up on the screen. We tried several different adapters, but only one of them could transfer the data from the Kinect to my computer. So in the end, we switched from the MIDIMIX to the Kinect during the performance, and after the Kinect part, we switched back to the MIDIMIX.
Developing the Max patch was the most challenging part. According to Ana Carvalho and Cornelia Lund (eds.), the VJ often just plays videos and music that already exist. We wanted to avoid using too many video shots from the real world, so I created videos of smog and random numbers in Processing.

The code I used to connect the Kinect with Processing and Max

The code I used to create the random-number and smog scenes
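The exact code is in the screenshot above; as a rough, simplified sketch of just the random-number part (leaving out the smog scene and the Kinect masking), the falling green 0s and 1s could be produced in Processing like this:

```
// Simplified illustration of the falling green digits (not the exact project code).
int numDigits = 300;
float[] x = new float[numDigits];
float[] y = new float[numDigits];
float[] speed = new float[numDigits];

void setup() {
  size(1280, 720);
  textSize(18);
  for (int i = 0; i < numDigits; i++) {
    x[i] = random(width);
    y[i] = random(height);
    speed[i] = random(2, 8);
  }
}

void draw() {
  background(0);
  fill(0, 255, 70);  // Matrix-style green
  for (int i = 0; i < numDigits; i++) {
    text(int(random(2)), x[i], y[i]);  // each digit is randomly a 0 or a 1
    y[i] += speed[i];
    if (y[i] > height) {  // wrap back to the top at a new horizontal position
      y[i] = 0;
      x[i] = random(width);
    }
  }
}
```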

To create an adjustable object, we decided to use jit.gl to build a 3D object directly. We learned from a tutorial on YouTube and then improved on it.

Performance

It was the first time we went to a club to perform. We were a bit nervous at first, but we found that this was normal and everyone got nervous. During the performance, I was in charge of the movement of the center cube while Joyce switched the videos and the music. Basically everything went smoothly, but right before we started, Processing would not work normally and kept reporting an error. I guessed it was because we had been running both Processing and Max for too long, so we restarted it immediately and, luckily, it worked. In the middle of the performance, the music became a little loud. Such unexpected things happen a lot during performances, so learning how to deal with them calmly is something every live performer needs.

Another important thing is to organize the patch in a clear way. It is dark in the club, so when we want to use the MIDIMIX to control the patches, it is hard to see what we wrote on the tags. Assigning the knobs on the MIDIMIX according to the arrangement of the objects in the patch is therefore important.

At the end of the performance, when we adjusted the position and shape of the cube in the center, it did not change the way it had in rehearsal. We had planned to have only the circle rotating, but a small yellow cube appeared in the center, which actually made the project look even better. So, although we have no idea why it appeared there, we are still satisfied with it.

Conclusion

I think this is an interesting project that broadened my horizons. I learned to produce music with GarageBand and explored the use of the Kinect. But there are a lot of things we could do to make the project better.

First, about the Kinect: I want to try using p5 or Runway to capture the movement of the body, because I found that the stage was not as dark as I thought. I could also try the depth and other functions of the Kinect on a Windows laptop. I also want to turn the Kinect toward the audience so that there is more interaction. Maybe an underground club is not the most suitable place for this kind of interaction, but it is a fun direction from which to develop the project.

Second, about the Max part: we still use too many real-scene videos in our project. I think there is more we can do in the patch itself. For now, our development of the objects in Max relies simply on the functions inside Max, and it is hard for us to make a controllable and beautiful object directly in Max. I asked my classmates about their ways of using Max and found that both of the projects I love the most used js. Therefore, I want to try more ways of combining Max with other resources to create a more diverse project.

The biggest issue with our project is that we were trying to convey something “meaningful” to the audience, when actually there is no need to do so. Just as we learned at the beginning of the semester, synaesthesia varies because everyone has different life experiences and ways of thinking, so their feelings toward the same piece of music may differ. The meaning of art works the same way: the meaning of an art piece can vary from person to person, and it does not need to be educational. We can jump out of traditional thinking and create something crazier and more abstract. So, in the future, rather than just putting street views on the screen, I want to discover more forms for presenting the ideas I have. Also, while designing the music, we should keep in mind that the place we present the project is a club, so music with stronger beats and rhythm would be more suitable.

All in all, I think the connection between our music and our visuals is strong, and the audience's feedback tells us that we made an exciting live performance.

Work Cited

Carvalho, Ana, and Cornelia Lund, editors. The Audiovisual Breakthrough. Fluctuating Images, 2015. Chapters cited: “Live Cinema” by Gabriel Menotti and “Live Audiovisual Performance” by Ana Carvalho.

Project 3 Documentation

from Joyce Zheng

Title

Wild 失控

Project Description

Our project is set against a modern background in which people rely heavily on technology. We dropped the initial concept of cyberpunk, but we still try to express the loss of control over those technologies and the disasters that follow. The Kinect-controlled section and the sound of breaking glass with a flickering white screen work as the transition: before it comes the prosperity of, and reliance on, technology in modern society; after it comes the crash of that society.

Sitting somewhere between live cinema and VJing, we are trying to tell something rather than just create patterns. The videos we added make our realtime performance feel more like a realistic story than pure fiction. By connecting reality with fiction, we want to raise people's awareness of how addicted we are to our phones and how our lives are changed, and also controlled, by modern technologies.

Our inspirations first came from the videos we saw in class. The raining window gave us the initial idea of shooting different videos from real life. Real-time VJ and Real-time motion graphics to redefine the aesthetics of juggling are our biggest inspirations; from them we decided that the project should include something whose real-time movement on the screen we could control.

Perspective and Context

One of our project's most important parts is the transition, where the sound fits the visual effect most closely. Although our music was made separately from our visual part, we still tried our best to make them echo each other and hold the whole performance together. Drawing on Belson and Whitney's experience, Belson's work tries to “break down the boundaries between painting and cinema, fictive space and real space, entertainment and art”. Belson and Jacobs were moving visual music from the screen out into “three-dimensional space”. While combining the audio and the video, we are also trying to interpret music in a more three-dimensional way.

Jordan Belson's work also provides us with a significant frame for the performance. Film historian William Moritz comments that “Belson creates lush vibrant experiences of exquisite color and dynamic abstract phenomena evoking sacred celestial experiences”. His mimesis of celestial bodies inspired us to put something at the center: it does not have to be specific, but it works as a representation of something, maybe the expansion of technology, or technology itself.

According to Live Cinema by Gabriel Menotti and Live Audiovisual Performance by Ana Carvalho, Chris Allen describes live cinema as a “deconstructed, exploded kind of filmmaking that involves narrative and storytelling” (Carvalho 9). Our idea for the live performance corresponds to this concept: we translate our idea into a few broken scenes and jump between one and another. Moreover, going back to Otolab's Punto Zero, we can see that they set up strong interactions between their artwork and the audience. Initially, we wanted to invite an audience member to come and interact with the Kinect and see how the visual effects would change with their movement. Due to practical limitations, Tina performed that part instead, but we would choose an actual audience member to do this if possible.

Development & Technical Implementation

Gist: https://gist.github.com/Joyceyyzheng/b37b51ccdd9981e507905ddcbbda988d

For our project, Tina was responsible for the creation in Processing, including the flying green numbers and how they interact with the Kinect, and thus with her. She also worked on combining the Max patch with Processing. I was responsible for generating the audio and building the Max patch, and for processing the videos we shot together.

As I mentioned previously, we developed our audio and video separately, which is one place I think we could improve a lot. I mainly developed the audio in GarageBand, mixing different tracks containing audio samples from freesound.org with different beats and bass. I tried to use an app called Breaking Machine to generate sound, but later found that, given the time we had, GarageBand was the best choice. I listened to a few realtime performances (e.g. 1, 2) and some cyberpunk-style music and decided on the style I wanted. The first version of the audio looked like this:

Fig. 1. Screenshot of audio version 1

Tina's and our friends' reaction to this was that it was too plain: it is not strong, and thus does not express our emotions enough. Eric suggested that we add some more intense sounds as well, like the sound of breaking glass. With those suggestions and some later small adjustments, I selected audio samples that I thought would suit the piece and came up with the final audio. I decided to use a lot of choirs, since they sound really powerful, and different types of bass intersecting with each other on the same track. Overall, the revisions aimed to make the piece stronger and more intense. At first, we wanted to put the audio into Max so we could control it at the same time as the video, but later we found that inside Max we could not see where the track was in its playback, and it was not feasible for us to control two patches at the same time (since one was not enough), so we kept the audio in GarageBand.

Fig. 2. Screenshot of audio version 2-1
Fig. 3. Screenshot of audio version 2-2

For the Max patch, we turned to jit.gl objects instead of filters like Easemapper. I searched for a few tutorials online and built the patch. The tutorials were really helpful, and the final video result produced by Max is something we were happy to use. I combined the two objects that Max creates and, with Eric's help, they could be rendered on the same screen simultaneously, which contributes to our final output. After the combination, we started to add videos and put all the outputs on one screen. One problem we met was that we tried to use the AVPlayer to play more than eight videos, which made our computer crash. With Eric's suggestion, we used a umenu instead, which was much better and allowed us to run the project smoothly.
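The idea behind the umenu fix is to keep a single player and load one clip at a time instead of keeping many players open at once. A rough analog of that pattern in Processing (illustration only; the file names are placeholders, and our project did this with umenu in Max) might look like this:

```
// Illustration of the "one player, many clips" idea behind the umenu fix
// (our project did this in Max, not Processing; file names are placeholders).
import processing.video.*;

String[] clips = { "city.mp4", "hands.mp4", "wechat.mp4" };
Movie player;
int current = 0;

void setup() {
  size(1280, 720);
  loadClip(current);
}

void loadClip(int index) {
  if (player != null) player.stop();  // release the previous clip
  player = new Movie(this, clips[index]);
  player.loop();
}

void draw() {
  if (player != null) image(player, 0, 0, width, height);
}

void movieEvent(Movie m) {
  m.read();  // grab the next frame when it is ready
}

void keyPressed() {
  // switch clips with the right arrow key, like selecting another umenu entry
  if (key == CODED && keyCode == RIGHT) {
    current = (current + 1) % clips.length;
    loadClip(current);
  }
}
```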

Fig. 4. Two separate patches
Fig. 5. The original patch before operating by Tina
Fig. 6. Final patch screenshot
Fig. 7. Present mode patch

We shot the videos together and I processed them with different filters. Here are screenshots of the videos we used. They are all originally shot. We mainly want to express the prosperity of the city, and how heavily people rely on technology, through the city/hand/email/WeChat-message videos, and how the world crashes down after the explosion through the other videos. Our Max patch therefore works somewhat like a video generator: it brings together all the input sources, including the Kinect input, the videos, and the jit.world.

Fig. 8. Video 1 screenshot
Fig. 9. Video 2 screenshot
Fig. 10. Video 3 screenshot
Fig. 11. Video 4 screenshot
Fig. 12. Video 5 screenshot
Fig. 13. Video 6 screenshot
Fig. 14. Video 7 screenshot
Fig. 15. Video 8 screenshot


Performance

The performance in Elevator was an amazing experience. We were nervous, but generally everything went well. During the show, I was mainly responsible for controlling the videos and audio, while Tina was responsible for the cube and the constantly changing 3D object; when she was performing with the Kinect, I was responsible for the whole patch. There were a few problems during the performance. The first was that the sound was too loud, especially in the second half; I could feel the club shaking even on the stage. The second was that a few mistakes happened because the Kinect does not work unless it is plugged directly into the computer, and Tina's computer has only two USB-C sockets, so we had to unplug the MIDIMIX, plug in the Kinect, and start Processing during the performance, and then plug the MIDIMIX back in after the Kinect part ended. Though we rehearsed several times, I still forgot to reload the videos, so when the white screen was flickering you could still see the static video from Processing, which was a little terrible. Something surprising also happened: at the end, our cubes suddenly became a single cube (we still do not know why), and it looked pretty good and echoed the concept of the “core” of technology as well.

When Tina was performing in front of the Kinect, I was hiding under the desk so that I would not be captured by the camera, which meant Tina could not hear me when I reminded her to lift her arms up so the audience could tell that she was the one doing the realtime performance. Sadly she did not understand me, and I think that part could have gone much better. What is more, I think the effect would be much better if we combined the audio into the Max patch. But our performance had to stay precisely in sync with the music's breaking point, so we left the audio in GarageBand and mainly worked on the realtime video during the performance.

Here is the video of the selected performance.

https://youtu.be/ZnhN2Ev6uZo

Conclusion

To be honest, my concept of RAPS became much clearer after watching everyone's performance. If I could do this again, I would not use video footage at all, but would explore more of Max and its combination with other apps. The initial idea of the project came from our shared interest in something mixing reality and fiction. From research to implementation, we worked separately on different parts and combined them at the end, so the performance might not be fully coherent either. We wanted a human on screen to control the 3D object, or at least something not limited to the numbers, but Tina said only Windows could achieve that, so we had to give it up.

However, we did try hard with limited time and skills. I want to say thank you to Tina, who worked with a tool we had never learned before and created one of the best moments of our performance, and to Eric for his help throughout the project. We explored another side of Max, how we should shoot video to make it more “RAPS”, and how to modify the audio so that it is no longer just pieces of different music or familiar songs.

For future improvements, I would work more on the audio and the Max patch. I am not sure whether it was influenced by the environment or not, but when I talked with a lady from another university, she thought most of our performances, and our music, were not like club music; people cannot dance or move their bodies to it, and she felt it was because “it is still a final presentation for a class”. For this reason, I would generate more intense visuals, like granular videos, through Max, and stronger rhythmic music with better beats and audio samples.

Sources: 

https://freesound.org/people/julius_galla/sounds/206176/

https://freesound.org/people/tosha73/sounds/495302/

https://freesound.org/people/InspectorJ/sounds/398159/

https://freesound.org/people/caboose3146/sounds/345010/

https://freesound.org/people/knufds/sounds/345950/

https://freesound.org/people/skyklan47/sounds/193475/

https://freesound.org/people/waveplay_old/sounds/344612/

https://freesound.org/people/wikbeats/sounds/211869/

https://freesound.org/people/family1st/sounds/44782/

https://freesound.org/people/InspectorJ/sounds/344265/

https://freesound.org/people/tullio/sounds/332486/

https://freesound.org/people/pcruzn/sounds/205814/

https://freesound.org/people/InspectorJ/sounds/415873/

https://freesound.org/people/InspectorJ/sounds/415924/

https://freesound.org/people/OGsoundFX/sounds/423120/

https://freesound.org/people/reznik_Krkovicka/sounds/320789/

https://freesound.org/people/davidthomascairns/sounds/343532/

https://freesound.org/people/deleted_user_3544904/sounds/192468/

https://www.youtube.com/watch?v=Klh9Hw-rJ1M&list=PLD45EDA6F67827497&index=83

https://www.youtube.com/watch?v=XVS_DnmRZBk

Reading Response 8: Live Cinema (Katie)

Both VJing and live cinema are kinds of audiovisual performance, or visual music, that emphasise “liveness”. The differences between VJing and live cinema can be seen mainly in the setting, the audience's participation, and the content.

For live cinema, as Gabriel Menotti and Ana Carvalho mention, the performance happens “in a setting such as a museum or theater,” and the public, instead of being absently lost amid multiple projections, is often “sitting down and watching the performance attentively” (85), while VJing is “stuck in nightclubs and treated as wallpaper” (89). So there is a huge difference in how the audience interacts with the performance between live cinema and VJing. In terms of content, live cinema often has specific content: it can be “characterised by story telling” (87) in the cinema context, while VJ visuals often act as wallpaper in the club context.

Final project documentation—Katie

Title
In the Station of Metro

Project Description

Our project aims to depict the crowdedness and loneliness of the metro station. We were inspired by Ezra Pound's poem, In a Station of the Metro:

In a Station of the Metro

The apparition of these faces in the crowd:

Petals on a wet, black bough

The metro station has become a very important part of people's daily lives. Often, people in the metro station are in a rush, with no emotion on their faces. My own experience in the metro is not so pleasant either: crowded with people and noise, it sometimes even feels hard to breathe. We took inspiration from:

Raindrops #7

Quayola’s  “Strata #2,” 2009 

Perspective and Context

Our project fits into the historical context of live audiovisual performance and synesthesia. The audio and visuals are consistent: when I think of the greyish images, I cannot think of a bright melody, but link them to sad and slow ones.

I think our performance is more like live cinema than VJing. We have specific content, like storytelling, that expects the audience to watch attentively, sitting or standing, rather than “visual wallpaper” for the audience to dance to.

Development & Technical Implementation

For the visuals, the two of us shot footage together in the metro station. We chose different times of day to go, in the morning, at noon, and in the evening, to try to capture what we wanted: being absorbed by streams of people. Then, based on the footage we had, we discussed and decided on the overall tone of the video: a quite depressed one. After that, we worked separately on visuals and audio. I worked on the visuals.

We have two kinds of footage: one is inside the metro station, the other is raindrops on glass. I first edited the footage in Premiere to form a basic timeline of approximately eight minutes. The first half of the video is purely crowded scenes; for the second half, I placed the raindrop image on top of the metro station image. In the second half we want to emphasize the loneliness of being in the metro, facing crowds of strangers while it is cold and rainy outside. Also, inspired by Quayola's “Strata #2”, we wanted to create the effect of raindrops breaking through the glass, so I used 3D models to achieve it. The problem I faced at first was that I could not control the distribution and movement of the models. After getting help from the professor, I understood what each value and function means, so I experimented with the scale, size, and speed (a rough sketch of these parameters appears after the effect descriptions below) and finally had something I wanted, like this:

For other effects, we added a slide effect to add to the crowdedness. The slider makes the black shadows connect to one another, like this:

The rotate effect fits the raindrops very well, creating a spark-like image:
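As a rough illustration of the raindrop-model parameters mentioned above (distribution across the screen, scale, and falling speed), a minimal Processing particle sketch, not the actual Max patch, could look like this:

```
// Illustration only: the distribution, scale, and speed parameters of the
// falling raindrops as a simple particle loop (the real effect was built in Max).
int numDrops = 80;
float[] x = new float[numDrops];
float[] y = new float[numDrops];
float[] dropSize = new float[numDrops];
float[] speed = new float[numDrops];

void setup() {
  size(1280, 720);
  noStroke();
  for (int i = 0; i < numDrops; i++) {
    x[i] = random(width);        // distribution across the screen
    y[i] = random(-height, 0);   // start above the frame
    dropSize[i] = random(3, 12); // scale of each drop
    speed[i] = random(2, 10);    // falling speed
  }
}

void draw() {
  background(20);
  fill(180, 200, 255, 180);
  for (int i = 0; i < numDrops; i++) {
    ellipse(x[i], y[i], dropSize[i], dropSize[i] * 2);  // slightly stretched drop
    y[i] += speed[i];
    if (y[i] > height) {         // respawn above the frame
      y[i] = random(-100, 0);
      x[i] = random(width);
    }
  }
}
```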

This is the link to my patch: https://gist.github.com/JiayanLiu27/9b714a9ecbcdd7dbfcfbdd34c1117b58

This is my overall patch:

Performance

I was in charge of the video effects and Thea was in charge of the audio. We were super nervous before the performance because Thea's computer seemed to have some problems with the audio, and the program often shut down by itself. Fortunately, it all went very well in the performance and there were no technical issues. One thing that could have been better is that the contrast and brightness of the colors on the projection screen were different from those on our own screens. We should have adjusted them a little so the audience could see the visuals more clearly.

In the performance, there were certainly some parts that did not go like our rehearsals. For example, there is one part where the visuals of different scenes change very quickly, with a sound effect playing simultaneously, and the color of the video changes according to the sound. However, when we were performing, the color change got a little out of control and covered the visuals behind it.

As for the 3D models, once the slider effect was added they became hard to see. But I think it went very well overall.

Conclusion

This was my first time doing a live audiovisual performance, and I learned a lot in the process. The first thing is to always have a backup plan, because there are really a lot of uncertainties when performing live: the program does not run, the screen shuts down, and so on. For our group, I think it would have been better to borrow a MacBook from the IMA equipment room and prepare a patch on that computer to avoid the audio problem on the Windows computer.

Also, I think we could take more risks in this project. For now, I think this is a very safe one: we have a concrete video as the background, so things could not go very wrong even if the effects did not work. But for future projects, I would like to experiment with more abstract concepts and make some really crazy visuals, for example exploring how different shapes and colors can transform.