Reading response 8: live cinema (Katie)

Both VJing and live cinema are forms of audiovisual performance, or visual music, that emphasize "liveness." The differences between them can be seen mainly in two respects: the audience's participation and the content.

As Gabriel Menotti and Ana Carvalho put it, live cinema takes place "in a setting such as a museum or theater. The public, instead of being absently lost amid multiple projections, is often 'sitting down and watching the performance attentively'" (85), while VJ work is "stuck in nightclubs and treated as wallpaper" (89). So there is a huge difference between live cinema and VJing in how the audience engages with the performance. In terms of content, live cinema often has specific content: it can be "characterised by story telling" (87) in a cinema context, while VJ visuals often act as wallpaper in a club context.

Final project documentation—Katie

Title
In the Station of Metro

Project Description

Our project aims to depict the crowdedness and loneliness of the metro station. We were inspired by Ezra Pound's poem "In a Station of the Metro":

In a Station of the Metro

The apparition of these faces in the crowd:

Petals on a wet, black bough.

The metro station has become a very important part of people's daily lives. Often, people in the metro are in a rush, with no emotion on their faces. My own experience in the metro is not so pleasant either: crowded with people and noise, sometimes I even find it hard to breathe. We took inspiration from:

Raindrops #7

Quayola's "Strata #2" (2009)

Perspective and Context

Our project fits into the historical context of live audiovisual performance and synesthesia. The audio and the visuals are consistent with each other: when I look at the greyish images, I cannot imagine a bright melody, but instead link them to sad and slow ones.

I think our performance is more like live cinema than VJing. We have specific content, like storytelling, that expects our audience to watch attentively while sitting or standing, rather than a "visual wallpaper" that the audience dances to.

Development & Technical Implementation

For the visuals, the two of us shot footage together in the metro station. We chose different times of day to go, morning, noon, and evening, trying to capture what we wanted: the feeling of being absorbed by streams of people. Then, based on the footage we had, we discussed and decided on the overall tone of the video: a rather depressed one. After that, we worked separately on visuals and audio; I worked on the visuals.

We have two kinds of footage: one shot inside the metro station and one of raindrops on glass. I first edited the footage in Premiere to form a basic timeline of approximately eight minutes. The first half of the video is purely crowd scenes; for the second half, I layered the raindrop image onto the metro station image. In the second half we want to emphasize the loneliness of being in the metro, facing crowds of strangers while it is cold and rainy outside. Also, inspired by Quayola's "Strata #2," we wanted to create the effect of raindrops breaking through the glass, so I used 3D models to achieve this effect. The problem I faced at first was that I could not control the distribution and movement of the models. After getting help from the professor, I understood what each value and function means, so I experimented with the scale, size, and speed and finally got something I wanted. Like this:

For other effects, we added the slide effect to enhance the crowdedness; it makes the black shadows connect to one another. Like this:

The rotate effect fits the raindrops very well, creating an image like sparks at play:

This is the link to my patch: https://gist.github.com/JiayanLiu27/9b714a9ecbcdd7dbfcfbdd34c1117b58

This is my overall patch:

Performance

I was in charge of the video effects and Thea handled the audio. We were super nervous before the performance because Thea's computer seemed to have some problems with the audio, and the program often shut down by itself. Fortunately, everything went very well in the performance and there were no technical issues. One thing that could be better is that the contrast and brightness of the colors on the projection screen were different from those on our own screens. We should have adjusted them a little so the audience could see the visuals more clearly.

In the performance, some parts certainly did not go the way they did in our rehearsals. For example, there is one part where the visuals of different scenes change very quickly, accompanied by sound effects, and the color of the video changes according to the sound. However, when we were performing, the color change got a little out of control and covered the visuals behind it.

And for the 3D models, when we added the slide effect, they became hard to see. But overall I think it went very well.

Conclusion

This is my first time doing a live audiovisual performance, and I learned a lot in the process. The first thing is to always have a backup plan, because there are really a lot of uncertainties when performing live: the program does not run, the screen shuts down, etc. For our group, I think it would have been better to borrow a MacBook from the IMA equipment room and prepare a patch on that computer to avoid the audio problem on the Windows computer.

Also, I think we could take more risks in this project. For now, I think this is a very safe one: we have a concrete video as the background, so things could not go very wrong even if the effects did not work. But for future projects, I would like to experiment with more abstract concepts and make some really crazy visuals, for example exploring how different shapes and colors can transform.

Final project individual reflection (Katie)


Forest Box: build your own forest–Katie–Inmi

CONCEPTION AND DESIGN:

In terms of the interaction experience, our idea was to create something similar to VR: users perform physical interactions in the real world, which result in changes on the screen. We explored several options during the design process. Inspired by Zheng Bo's work 72 relations with the golden rod, at first we wanted to use an actual plant with multiple sensors attached to it, to let users explore the different relations they can have with plants. For example, we wanted to attach an air pressure sensor to the plant so that whenever someone blows on it, the video shown on the computer screen changes, and a pressure sensor that someone can step on.

But we ended up not choosing these options because the equipment room does not have most of the sensors we needed. We then selected the color sensor, touch sensor, distance sensor, and motion sensor. However, we did not think carefully about the physical interactions before hooking them up. The first problem was that the motion sensor did not work the way we wanted: it only senses motion but cannot identify specific gestures. As a result, it sometimes conflicted with the distance sensor, so we gave it up. We were then at a very awkward stage where we had three sensors hooked up and different videos running according to sensor values, but had difficulty linking them together.

After asking Inmi for advice, the final solution we came up with is a forest box that users can interact with; the visuals on the screen change according to the different kinds of interaction. If you place fire into the box, a forest fire video is shown on the screen. If you throw plastics into the box, a video of plastic waste is shown. If you pull up the trees, a video of a forest being cut down is shown. Through this kind of interaction, we want to convey the message that every individual's small harm to the earth can add up to huge damage. For the first scene, we use a camera with effects to attract the user's attention.

 

FABRICATION AND PRODUCTION:

The most significant step was to hook up the sensors with the Arduino and the Processing code and let them communicate. The most challenging part for me was figuring out the logic in my Processing code. I did not know how to start at first because there were too many conditions and outcomes, and I did not know how the if statements should be arranged to achieve the output I wanted. The most helpful thing was to draw a flow diagram of how each video would be played.
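As a rough illustration of that communication (not our exact code), here is a minimal Processing sketch that reads one sensor value per line over serial; the port index, baud rate, and single-value format are assumptions, and our actual project reads several sensors:

import processing.serial.*;

Serial port;
int sensorValue = 0;   // updated from the Arduino

void setup() {
  size(400, 400);
  printArray(Serial.list());                        // list ports to find the right one
  port = new Serial(this, Serial.list()[0], 9600);  // placeholder port index and baud rate
  port.bufferUntil('\n');                           // call serialEvent once per line
}

void draw() {
  background(0);
  fill(255);
  text("sensor: " + sensorValue, 20, 20);
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line != null) {
    sensorValue = int(trim(line));  // this value later decides which state/video to show
  }
}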

Then we defined a new variable called state, with an initial value of 1.

Then the logic becomes clear and the work becomes simpler: I just need to write down what each state does separately and then connect them together. Although the code within one state can be lengthy and difficult, the overall structure stays simple and clear to me.

For example, from state 1 to state 2:
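Roughly, the transition looks like the sketch below (not our exact code: the video file names, the single sensorValue, and the threshold are placeholders, and the real program has more states):

import processing.video.*;

Movie introVideo;
Movie fireVideo;
int state = 1;          // start in state 1
int sensorValue = 0;    // would be updated from the Arduino over serial

void setup() {
  size(1280, 720);
  introVideo = new Movie(this, "intro.mp4");  // placeholder file names
  fireVideo = new Movie(this, "fire.mp4");
  introVideo.loop();
}

void draw() {
  if (state == 1) {
    image(introVideo, 0, 0, width, height);
    if (sensorValue > 500) {    // placeholder condition, e.g. "fire detected"
      state = 2;                // switch states once the condition is met
      introVideo.stop();
      fireVideo.loop();
    }
  } else if (state == 2) {
    image(fireVideo, 0, 0, width, height);
  }
}

void movieEvent(Movie m) {
  m.read();
}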

Another important thing is that we wanted to switch from state 1 to state 2 with a keypress interaction. However, the video for state 2 only played while the key was held down; when you released the key, the state turned back to 1. To solve this problem, I created a boolean to toggle the video on and off.
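Here is a rough sketch of that boolean trigger (the key and file name are placeholders): pressing the key once latches the boolean, so releasing the key no longer drops us back to state 1.

import processing.video.*;

Movie stateTwoVideo;
int state = 1;
boolean videoOn = false;   // latched by the first keypress

void setup() {
  size(1280, 720);
  stateTwoVideo = new Movie(this, "state2.mp4");  // placeholder file name
}

void draw() {
  background(0);
  if (videoOn) {
    state = 2;   // stays in state 2 even after the key is released
    image(stateTwoVideo, 0, 0, width, height);
  }
}

void keyPressed() {
  if (key == ' ' && !videoOn) {   // placeholder key
    videoOn = true;               // latch on instead of relying on the key being held
    stateTwoVideo.loop();
  }
}

void movieEvent(Movie m) {
  m.read();
}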

At first, we wanted different users to run toward the screen from far away, wearing different costumes representing plastics and plants; whichever costume reached the screen first would determine which video to play. However, during user testing, our users said first that the costumes were of poor quality, and second that the process of running was not interactive enough. Our professor also said that there was no link between what the users were physically doing (running) and what was happening on the screen (playing educational videos).

So after thinking through this problem, we created a forest box to represent the forest, and one can interact with its different elements.

In this way, what the user is doing physically has some connection with what is shown on the screen.

 

CONCLUSIONS:

The goal of our project is to raise people's awareness of climate issues and to make them reflect on our daily actions. The results align with my definition of interaction in that the output on the screen is determined by the input (the users' physical interaction). They do not align with my definition in that there is no "thinking" process between the first and the second input: we have already given users the options of what they can do, so there is not much exploration. I think our audience interacted with our project the way we designed it.

But there are a lot of things we can improve. First, we could better design the physical interaction with the forest box and let users shape the box the way they want. For example, we could fill the box with soil and provide different kinds of plants and other decorations, so that different users can experience the act of planting a forest together. By placing multiple color sensors in different places on the box, the visuals would change according to the different trees being planted. Second, we could first cover the surface of the box with plastics to represent today's plastic waste; if the user gets rid of the plastics, the visuals on the screen change too. Third, we could draw the videos shown on the screen ourselves.

The most important thing I've learned is experience design. As I reflect on my project design process, I realize that, for me, it is better to first think of an experience rather than the theme of the project. Starting with a very big and broad theme makes it difficult for me to design the experience, but if you first think of an experience, for example a game, then it is easier to adapt that experience to your theme.

The second thing I've learned is coding skills. With more and more experience in coding, I think my coding logic has improved. For this project, we have many conditions that determine which video to play, so there are a lot of if statements. I felt like a mess at first and did not know how to start; then Tristan asked me to step out of the "coding" for a second and think about the logic by drawing a flow diagram. After doing this, I was much clearer about what I was going to do.

I think the climate issue is certainly a very serious one that everybody should care about, because climate change affects our daily lives. "Nature does not need humans. Humans need nature."

Recitation 10 media manipulation workshop (Katie)

In this recitation, I chose to attend the media manipulation workshop. The thing I wanted to work out in this workshop is how to switch scenes, for example from the webcam to a video. I wanted to use a keypress function to achieve this, but the problem was that the scene only changed while I pressed the key; after I released the key, it turned back to the webcam. To solve this problem, I added a boolean to trigger the video, as sketched below.
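Here is a rough Processing sketch of that webcam-to-video switch (not my exact recitation code); the camera settings and the clip.mp4 file name are placeholders.

import processing.video.*;

Capture cam;
Movie clip;
boolean showVideo = false;   // latched by the keypress

void setup() {
  size(1280, 720);
  cam = new Capture(this, width, height);
  cam.start();
  clip = new Movie(this, "clip.mp4");  // placeholder file name
}

void draw() {
  if (showVideo) {
    image(clip, 0, 0, width, height);  // stay on the video once triggered
  } else {
    image(cam, 0, 0, width, height);   // otherwise show the live webcam
  }
}

void keyPressed() {
  if (!showVideo) {
    showVideo = true;   // latch: releasing the key does not switch back
    clip.loop();
  }
}

void captureEvent(Capture c) {
  c.read();
}

void movieEvent(Movie m) {
  m.read();
}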

RAPS assignment 5 (Katie)

This is the final output of my patch:

 https://drive.google.com/file/d/1QzwV3WO8B6ogglVcj1f5o1jeDO9PvxAa/view?usp=sharing

This is the link to my patch

https://gist.github.com/JiayanLiu27/8bc52ca522e90c67fb6f7ea3341ac9cb

I downloaded a 3D model .obj file from a website and loaded it through the "read" message in my Max patch. Then I clicked the toggle to start capturing camera video and routing these visuals into a named texture object called "myface". In order to apply the visuals captured by the webcam to the texture of the 3D model, I clicked the little green button on the left of jit.gl.model, found the texture attribute, and added it to jit.gl.model. I then sent a "myface" message to "texture", which sets @texture myface on the jit.gl.model object.

This way, the visuals captured by the webcam are routed into the texture of the 3D model. The final step is to adjust the speed, scale, and position according to the output.
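As a side note, the same idea can also be sketched in Processing rather than Max; here is a rough analogue (not my Max patch), assuming a hypothetical model.obj that has texture coordinates:

import processing.video.*;

PShape model;
Capture cam;

void setup() {
  size(800, 600, P3D);
  model = loadShape("model.obj");        // placeholder .obj file
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  background(0);
  if (cam.available()) {
    cam.read();                          // grab the latest webcam frame
  }
  model.setTexture(cam);                 // route the live webcam image into the model's texture
  translate(width/2, height/2, 0);
  rotateY(frameCount * 0.01);            // slow rotation to see the textured model
  shape(model);
}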

This is my overall patch: