RAPS Assignment 5 (Katie)

This is the final output of my patch:

 https://drive.google.com/file/d/1QzwV3WO8B6ogglVcj1f5o1jeDO9PvxAa/view?usp=sharing

This is the link to my patch:

https://gist.github.com/JiayanLiu27/8bc52ca522e90c67fb6f7ea3341ac9cb

I downloaded a 3D model (an .obj file) from a website and loaded it through the “read” message in my Max patch. Then I clicked the toggle to start capturing camera video and routed this visual into a named texture object called “myface”. To apply the visuals captured by the webcam to the texture of the 3D model, I clicked the little green button on the left of the jit.gl.model object, found the texture attribute, and added it to the jit.gl.model. I then sent the message “myface” to “texture”, which sets @texture myface on the jit.gl.model.

This way, the visuals captured by the webcam are routed into the texture of the 3D model. The final step was to adjust the speed, scale, and position according to the output.
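Roughly, the signal chain looks like this (just a sketch of the idea, not the full patch; “myface” is the texture name from my patch, and model.obj stands in for whatever file was loaded with “read”):

    [toggle]                                  <- starts/stops the capture
       |
    [qmetro 33]                               <- bangs out frames at roughly 30 fps
       |
    [jit.grab]                                <- webcam input (needs an open message first)
       |
    [jit.gl.texture @name myface]             <- registers each frame as the named texture

    [jit.gl.model @file model.obj @texture myface]   <- model drawn with the webcam texture
    [jit.world]                               <- render context and output window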

This is my overall patch:

Assignment 5: Multi 3D Objects (Tina)

GitHub link

3D resource

I originally wanted to make a scene with different types of desserts, but after adding the first cake model, I found it was really hard to get the correct color I wanted to apply. I tried to use MAPPR and other color-controlling modules, but the results weren’t very satisfying.

Then I decided the color-changing “cake” looked more like something else instead. Therefore, I added an EASEMAPPR as the background and made the “cake”, along with some randomly changing lines, move on top. I will try to go further in figuring out how to adjust the colors and the movement the way I planned.
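One way to layer the two parts is to stretch the pattern across a videoplane behind the model (a rough sketch of the idea; cake.obj and the attribute values are placeholders, not my exact settings):

    [pattern texture from the Vizzie chain]
             |
    [jit.gl.videoplane @layer 0 @depth_enable 0 @scale 2. 1.5 1.]   <- fills the view as the background

    [jit.gl.model @file cake.obj @layer 1]                           <- the “cake” drawn on top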


The video:

The reason this assignment is so challenging is that it is really hard to understand the specific function of each statement. I went through the patches we made in class and also looked at some sample code, and through this I came to understand some of the functions. In the following weeks I really should spend more time exploring, and I think this part will be very useful for the final project.

Reading Response: VJ, Live Cinema, Live Audiovisual Performance – Thea (Sang)

Compared with VJing and Live Cinema, Live Audiovisual Performance is a more generic and broader concept. It can be applied to a group of artistic performances that share common features. Not only VJing and Live Cinema, but also other practices such as expanded cinema and visual music can be regarded as instances of Live Audiovisual Performance (135). Basically, Live Audiovisual Performance is an artistic expression of live manipulated sound and image, defined as time-based, media-based, and performative (131). It depends largely on technology and multiple installations to achieve “liveness”, “real time”, and “interaction”.

As for VJing and Live Cinema, I personally see both as branches of this broader concept. According to the article, however, Live Cinema seems to be held to a higher standard than VJing. VJing is usually performed in nightclubs, more as a supplement to the overall atmosphere. VJs need to achieve harmony and unity with other performers like DJs and lighting controllers, creating an atmosphere that entertains and energizes the audience. The VJing presentation is treated more as wallpaper, and people seldom emphasize its artistic value. Makela, who wrote “The Practice of Live Cinema”, also implies that the threshold for VJing is relatively low and that its artistic value is not regarded as highly as Live Cinema’s.

Different from VJing, which emphasizes the correspondence between hearing and vision, Live Cinema involves narrative and storytelling and “invite[s] the audience to construct narrative and cultural critique” (89). At the same time, Live Cinema is also a deconstructed, exploded kind of filmmaking, which separates it from traditional cinema. Whether in the selection and organization of source material or in the form of the performance, Live Cinema explores a deeper and wider territory than VJing and much bolder areas of cinema. I feel that Live Cinema is more like a complete and independent performance, with the aesthetic autonomy and vitality to touch audiences and interact with them. I appreciate its integrity: not only are the installations part of Live Cinema, but the performer cannot be replaced either, in order to maintain the aesthetic autonomy and independence of the performance.

However, I still do not think there is a natural hierarchy of value between VJing and Live Cinema. Both show the attitudes of their artists and can deliver unique aesthetic value. Although Live Cinema is relatively complex and lends itself to deeper ideas, I believe a simple VJing presentation, on the one hand, can also be abundant, changeable, and full of possibilities, even if it serves only entertainment purposes; on the other hand, in the hands of excellent artists it can also deliver great concepts and ideas.

RAPS Assignment 5 – Chenlan Yao (Ellen)

Link to my gist: https://gist.github.com/Ellen25/5f3deafb41a829c51a9f8d5a19dd3447

Sample video:

Process:

I started this assignment from the 1D model I made in class:

1D model


I developed it further by first adding another two dimensions to jit.gl.multiple and adding attributes to jit.gl.gridshape. After editing the 3D model, I also used EASEMAPPR and PATTERNMAPPR to create a pattern, with 2TONR generating its color. Once it was connected to the 3D model created with the jit.gl objects, the changing color pattern was applied to the model. The final patch looks like this:

assignment5 pattern
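As a rough text sketch of the signal flow (the Vizzie modules are the ones from my patch; the jit object names “pat”/“shape” and the 4 4 4 dimensions are just examples):

    [PATTERNMAPPR]                            <- generates the moving pattern
          |
    [2TONR]                                   <- recolors it with a two-tone palette
          |
    [jit.gl.texture @name pat]                <- registers the result as a named texture

    [jit.gl.gridshape @name shape @scale 0.1 0.1 0.1 @texture pat @automatic 0]

    [jit.noise 3 float32 4 4 4]               <- example per-instance positions
          |
    [jit.gl.multiple @targetname shape @glparams position @dim 4 4 4]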

I also tried to use a 3D model downloaded from the Internet, but failed. No matter how I adjusted the settings, the model didn’t show up clearly. I will try my best to figure it out later.

Assignment 5: Multi 3D Objects (Phyllis)

Here is the link to my gist

Process

I was searching through different 3D models and finally decided on a human head (downloaded from TurboSquid) for this assignment. I loaded it into Max through “read”, added 10 copies in total, and experimented with their motions based on Eric’s example patch. I made some adjustments to the frequency/scale/speed of the position, XYZ rotation, and scale of my head model. For rotation on the x-axis, I switched the parameter to “phase.” The motion was finalized as 10 heads nodding at a high frequency with low-frequency shakes, while moving around on the x, y, and z axes.
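The nodding/shaking motion boils down to sending rotatexyz messages driven by a fast and a slow oscillation, roughly like this (a sketch only; head.obj stands in for the TurboSquid model, and the frequencies and angles are made-up placeholder values, not the ones from Eric’s patch):

    [qmetro 33]
         |
    [counter]
         |
      [t i i]
       |      \
    [expr 20.*sin($i1*0.5)]   [expr 10.*sin($i1*0.05)]   <- fast nod / slow shake
       |                          |
      (x rotation)              (y rotation)
        \                        /
         [pak 0. 0. 0.]                                  <- x into the 1st inlet, y into the 2nd
               |
         [prepend rotatexyz]
               |
         [jit.gl.model @file head.obj]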

After finalizing the motions, I started to generate patterns for the model; however, I could only make changes to the background rather than the model itself. (Now that I understand how it works, I find myself stupid… 😑) The patterns were not being passed to the model. Eric explained to me how effects/patterns can be passed to the model in Jitter, and I finally understand!!! Then I worked with the patterns using PATTERNMAPPR, MAPPR, and HUSALIR, and produced the first output image (see Figure 1).

Figure 1

Demo for Figure 1

Eric also showed me how to switch between different patterns by adding more statements to the drawing function, so the switch can be made with a simple click. I modified my face with MUTIL8R and produced my second output image (see Figure 2).

Figure 2

Demo for Figure 2
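The switching itself is just a matter of sending a different texture name to the model from a message box, something like this (a sketch; “patternA”/“patternB” are placeholders for whatever names the textures were given):

    [texture patternA(     [texture patternB(     <- one click per message box
              \                 /
          [jit.gl.model @file head.obj]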

Below (Figure 3) is a screenshot of my entire patch.

Figure 3

Reflection

  • I figured out why I found this assignment challenging at the beginning: I was not comfortable with Jitter yet. So I was afraid of trying things out and felt that I’m not good at it (even though we’ve been working with Max for an entire semester). I need to step out of my comfort zone.