W2: Velostat and LilyPad

In-class exercise

Velostat sensor  

Testing the LilyPad

                                                     

Final output

Problem encountered: My partner and I had the circuit wired correctly, but the buzzer made no sound. We asked Marcela and found that, because we had made quite a long velostat sensor, its resistance was high and the analog readings were correspondingly low. The minimum sensor value set in the sample code was 50, but our maximum reading was only 17, so we changed minValue to 10 to let the buzzer sound.

int minValue = 10; //start playing at this value

Arduino Code:

//exercise 2 – Interactive Fashion
int speakerPin = 9;  // 0 if you are using ATtiny
int sensorPin = A3;  // Analog pin 3 if you are using ATtiny
int sensorValue = 0;
int minValue = 10;   // start playing at this value
int pitch = 10;      // adjust pitch of the sound

void setup() {
  Serial.begin(9600); // Serial Comm doesn't work with ATtiny
}

void loop() {
  sensorValue = analogRead(sensorPin);
  if (sensorValue > minValue) {
    makeNoise(speakerPin, sensorValue * pitch, 100);
  }
  Serial.println(sensorValue); // Serial Comm doesn't work with ATtiny
}

void makeNoise(unsigned char pin, int frequencyInHertz, long timeInMilliseconds) {
  long delayAmount = (long)(1000000 / frequencyInHertz);                     // delay between toggles, in microseconds
  long loopTime = (long)((timeInMilliseconds * 1000) / (delayAmount * 0.8)); // approximate number of on/off cycles
  for (int x = 0; x < loopTime; x++) {
    tone(pin, frequencyInHertz);
    delayMicroseconds(delayAmount);
    noTone(pin);
    delayMicroseconds(delayAmount);
  }
}
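
To pick minValue for a sensor of a different size or length, a minimal calibration sketch can help. The sketch below is my own rough version (assuming the same wiring as above, with the velostat voltage divider read on analog pin A3): open the Serial Monitor, press the sensor a few times, and set minValue just below the readings you see while pressing.

// calibration sketch – watch the Serial Monitor to find a sensible minValue
int calSensorPin = A3; // velostat voltage divider (same pin as above)
int lowest = 1023;     // lowest reading seen so far
int highest = 0;       // highest reading seen so far

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(calSensorPin);   // 0-1023
  if (reading < lowest)  lowest = reading;
  if (reading > highest) highest = reading;
  Serial.print("now: ");   Serial.print(reading);
  Serial.print("  min: "); Serial.print(lowest);
  Serial.print("  max: "); Serial.println(highest);
  delay(100);
}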
 
Group Assignment
Create a wearable device that alters your perception and changes your relationship to your environment: your personal, physical, and social world. No electronic devices or circuits are needed for this assignment. You are free to use any material; it does not need to be “soft”.
 
 
Final output
 
Smile and I teamed up. Our idea is to create a wearable that makes people reflect on how stress shapes our view of the world and how others perceive us. When we are under a lot of stress, the colors of the world change in our eyes, and that stress also leads others to see a twisted version of themselves. This wearable aims to alter the wearer's perceptions of touch and sight and to build a connection between mind, touch, and sight.
Our inspiration comes from the reflective and colorful fabric that we found in the fab lab. 
When I see these fabrics, I am reminded of colorful hanging pictures like the one shown above. They are reflective, and we can see ourselves in them as in mirrors. However, when I touch the fabric, the reflected human image becomes deformed.
 
Glasses are the most common wearable we think of when it comes to the sense of sight. Our first idea was to cover the glasses with colorful fabric. We wanted a mechanism that would let us change the color of the lenses (the fabric) through our own motion, but it wasn't easy to accomplish, so we simply covered them with colorful laser paper. We weren't satisfied with this design and are still thinking of ways to make the glasses more interactive.
 
Another part of the wearable is an eye mask. The blue fabric is the reflective one. We wanted human motion to twist the fabric so that the reflected image would be twisted as well. I experimented with sewing a cube into the fabric: when I drag the cube, the fabric twists. We hope to implement this mechanism in our wearable.
Here is the link to the video of the motion engaging the hand movement and sight we want to create. 
 
We covered it with a white cloth full of holes, which looks very much like a brain.
When thinking about the concept of stress, we wanted to use glow sticks (which we found plenty of on the tables in the studio after orientation) to represent it. When people squeeze the glow sticks, it signals that they are feeling stressed, and the squeezing motion moves the cube and therefore moves the eye mask.
There are definitely better ways to represent our ideas, and we are still working on them. This is only a rough prototype, but we want to explore how wearables can reveal the way certain mental states show up in our movements and how they influence our perceptions of the world and our senses.
 
Here is the video for our final output:
 
 
 
 

W1 Assignment Soft Circuit

In Class Exercise

Final output video:   Soft Circuit Functioning

Diagram of the soft circuit

Testing on breadboard

Final output

Sewing process and problems encountered:

Since I didn’t have experience sewing before, I struggled a lot with sewing at first. I asked Marcela and also referred to the tutorial video. However, it still took me a lot of effort to sew correctly.

After cutting the fabric into the shape of a rabbit, I had problems connecting the thread to the LEDs. I asked Marcela for help and sewed the LED into the fabric. The rabbit's mouth serves as the button: there are two layers of conductive fabric inside the mouth. When the button is pressed, the LED in the rabbit's eye lights up.

When I first sewed the connection from the battery to the LED, the LED lit up without me pressing the button. I immediately realized I had wired the circuit wrong and had not created a break in it, even though I thought I had. Going back over my sewing, I saw that I had connected the positive and negative terminals of the battery directly to the two leads of the LED. So I unpicked the thread linking the positive side of the battery and sewed it to the button instead. The break in the circuit is now the mouth: when I press the mouth, which is the button, the two layers of conductive fabric touch and the circuit is complete. That finally gave me the correct circuit path.

Another small problem is that I chose the softer white fabric this time, and it is very difficult to sew and to keep in shape. A stiffer fabric would probably work better.

Blog Post Questions

If you need to turn on two LEDs, what circuit would you use and why? Test the circuits and explain what is happening and why.

If I need to turn on two LEDs, I will use the first circuit, in which the LEDs are connected in parallel with the power source. In the parallel circuit, each LED sees the full battery voltage (3 V), so both have enough voltage to light up.

The second circuit, however, is a series connection: the two LEDs split the 3 V supplied by the battery between them, so neither receives enough voltage to light up.
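
As a rough check, assuming red LEDs with a forward voltage of about 2 V (a typical value, not measured for these particular LEDs):

In parallel, each LED sees the full 3 V, and 3 V > 2 V, so both conduct and light up.
In series, the pair needs roughly 2 V + 2 V = 4 V before it conducts, and 4 V > 3 V, so the coin cell cannot drive them and neither lights.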

Reading Reflections

  • What is fashion for you, and why are you interested in fashion?

For me, fashion is expression, attitude, and a form of self-care. It is a way to communicate my personality, interests, and creativity through clothing and accessories, and it reflects individual tastes and preferences. Fashion is also a means of self-care: dressing well and feeling confident in one's appearance can contribute to overall well-being. Fashion can empower people. These are the reasons I'm interested in it. Through fashion, I can sense people's different styles, lifestyles, and attitudes. Fashion is constantly changing, yet it sometimes cycles back on itself. It is a dynamic, ever-changing form of cultural expression that allows individuals to convey a sense of identity and belonging. Fashion is also tied to ecology, exploitation, and other issues, which makes it a topic worth researching.

  • What kinds of things does your clothing say about you and your values?

I don't have a specific clothing style, and I often wear different styles of clothes. However, when I was young I wore very cute, girly, pink clothes, and I don't anymore. I'd say this shows how my mindset and values have changed over time: I grew up, and this change marks a transformation of my self-identity. In Do Clothes Speak? What Makes Them Fashion?, Davis writes that “what we wear is characterized by tensions over gender roles, social status, and the expression of sexuality” (2). I can really identify with this. Now, through my clothes, I try to present myself as an independent, cool woman. The different styles of clothes I own in fact reflect the different sides of my character. Since I began street dancing, I tend to wear hip-hop and comfy clothing more, and this reveals my value of living freely and enjoying the moment.

  • What are your main learnings and takeaways from the readings?

One idea that provoked my thinking was Davis's discussion of the private self versus the public persona (6). He describes how clothing can be said to function as a visual language, and also like music (5). Clothing is a kind of code: what we wear is an input that produces an output, namely the impression we make on others. Fashion can certainly showcase our private self, since we can wear anything we want. However, as Chinese students we had to wear uniforms before university, and this reveals our public persona: the good student. Davis also argues that clothing's meanings are cultural (15); they are tied to social status and the economy. By looking into the fashion of a place or period, we can learn a lot about its cultural background. Fashion is not only about a distinctive style but is also linked to change and fabrication (16). This echoes my main takeaway from Ying Gao's Interactive Fashion Doubles as Cultural Critique. The key word in this reading is definitely “innovate”. To Ying, what matters is “the concept of the foreign, the dissimilar, and the different.” Now that fashion is connected to technology, there are many ways to innovate, whether from the angle of biology, architecture, or elsewhere. Fashion “encounters with time.” Her works, which use different sensors in unconventional ways, really inspired me; using sensors to interact with strangers and sense their gaze is genuinely innovative. I can learn from her spirit of breaking the norm.

References

Davis, Fred (1992) ‘Do Clothes Speak? What Makes Them Fashion?’ in Fashion, Culture and Identity. Chicago: The University of Chicago Press, pp. 1–18.

Ying Gao –  Interactive Fashion Doubles as Cultural Critique

Project 3 – Documentation Post

Title
Revival

Project Description

The theme of our project is explosion. The 10-minute piece depicts how an explosion unfolds from beginning to end. We start with the chemical reactions at the early stage of an explosion, gradually move into the huge blast itself, and finally end with the particles of the bomb. We were inspired by the movie Oppenheimer, as well as by other audiovisual performances that include stunning explosion scenes (see references). We wanted to create dynamic visuals with exciting audio to fit the scenario.

Perspective and Context

Our project fits into the historical context of visual music, abstract film, live cinema, and live audiovisual performances. The depiction of the explosion process, from chemical reactions to the eventual dispersion of bomb particles, resonates with the tradition of abstract film and visual music, where artists often explore non-representational and dynamic visual elements paired with sound to evoke emotional responses. Our project embraces this idea by translating the auditory and visual dynamics of an explosion into a synchronized audiovisual experience. Our intent to create dynamic visuals with exciting audio aligns with the principles of live cinema, emphasizing the immediacy and co-presence of image and sound.

We tried to follow the artistic and stylistic characteristics we learned about in the early abstract film section. Take Norman McLaren's work as an example: the synesthetic relationships between the visuals and the sound are very interesting, and the different shapes that appear signify the sound. This is what we tried to accomplish in our own work.

Furthermore, our project engages with the theoretical aspects discussed during the semester. From the chapters Live Cinema by Gabriel Menotti and Live Audiovisual Performance by Ana Carvalho in The Audiovisual Breakthrough (Fluctuating Images, 2015), I learned that Live Cinema tends to be more narrative-focused, integrating cinematic elements into live performance; that VJ-ing typically involves real-time visual manipulation during music performances or events, often using software to mix and manipulate visuals; and that Live Audiovisual Performance covers a broader spectrum, including both VJ-ing and Live Cinema, but often places a stronger emphasis on sound and visuals as equal components. Our project tells the narrative of an explosion and includes real-time audiovisual manipulation during the performance.

Development & Technical Implementation

We first had the idea of making a performance on the theme of explosion. Our first step was to create visuals for it. We started by filming some short videos and modifying them in Max to create abstract visuals. We experimented with many setups, including paint, ink, and hot glue, and filmed the clips. We then loaded them into Max to try out different effects together.

We documented the cool and useful effects we found in a Google Doc to prepare for building the visuals. When we felt we had enough material and effects for the 10-minute piece (we did not want to make it too complicated), we started outlining the project. Our initial idea was a narrative that treats the nuclear bomb as a living thing with a mind, watching itself form, explode, and disperse.

This is the timeline we created:

  1. 0:00 – 0:30 formation of the bomb: visual: bubble (preset 2); audio: sound in a science lab
  2. 0:30 – 1:10 transition to the final formation 1: WYPR cage effect, bubble spread; audio: stage opening sound
  3. 1:10 – 1:50 transition to the final formation 1: kaleido mixfader e2
  4. 1:50 – 2:30 completion of the bomb: visual: bubble (adjust mixfader 1 first, then m2); note that it is spreading at this point, so we can adjust the speed / add twisting
  5. 2:30 – 3:10 bombs gather: visual: set the spread effect to 0, use the zoomer together with the heartbeat
  6. 3:10 – 3:50 fuse: red lines (preset 3); can be twisted, and the cage effect can be added
  7. 3:50 – 4:30 explosion: red lines to 3D model (flicker effect)
  8. 4:30 – 5:20 3D explosion: adjust BRCOSR and PINCHR
  9. 5:20 – 6:20 explosion to ink
  10. 6:20 – 7:00 before restart: apply the fogger to the ink; old TV stuck effect
  11. 7:00 – 7:40 restart: ink to spreading bubble
  12. 7:40 – 8:20 restart finished: bubble to bomb
  13. 8:20 – 9:00 bomb crash: scrambler
  14. 9:00 – 10:00 end with particles

These are pictures of the original clips we filmed and the final output with the effects.

For the visuals, we also used the jit.mo model. Sophia adjusted some parameters, and we added effects to the model to create a variety of visuals.

For the 3d modeling part, I used a donut model I made before and modified it with other effects.

Below is the visual Max patch:

We made the visuals first and then composed the audio accordingly. While Sophia was adding effects to the visuals, I found and downloaded sound samples online. At first I tried to use the drum sequencer and the piano roll sequencer, but it was hard to achieve the effect I wanted. After Sophia had mostly finished the overall flow of the visuals, I started composing the audio in GarageBand. I used different piano sounds in the software to compose the piece according to the visual progression, and I also made use of the sound effects I had downloaded.

To find a base sound, I used the noise module in Max. It was useful because I could also reuse this sound for the fogger part of the visuals.

I first added a relatively soft sound as the beginning of the whole piece, then gradually layered in musical phrases, sound effects, and drum beats to correspond to the visuals. I intentionally placed obvious sound effects at each transition so that Sophia could have a better sense of timing when manipulating the MIDI controller. For the explosion part, I added strong, heavy beats and integrated the explosion samples to make the audio dynamic and explosive. In the ink part, the audio gives the audience a sense of flowing water, but each time a drop of ink falls into the water there is a drum beat. For the final part, the particles of the bomb, the audio turns sparkly, and I adjusted the frequency to get a mistier sound.

For the performance setup, we controlled the visual MIDI together, since we constantly needed to add video clips and effects, and I controlled the audio by myself. I already had a base track, so during the performance I added sound samples and adjusted the effect parameters. For the recorded samples, the visual controller manipulated the visuals to match the audio: for instance, Sophia changed the size of the bubble according to the sound of the heartbeat. I also played some short samples in response to the visuals.

The link to the Max patch: https://drive.google.com/drive/folders/1f5aYERLiX967pw6qx4fdpd_Oow1N7u9P?usp=share_link

Performance

The performance generally went well. We all made tiny mistakes out of nervousness, but nothing major. One regret is that the explosion effect I was most looking forward to didn't work: the explosion audio sample was too loud when I played it, so I didn't have time to manipulate the flicker effect on the visual MIDI controller. Also, our transitions were still not very smooth, and the screen went black for a short period.

In the performance, Sophia was in charge of most of the visual manipulation; I was in charge of some of the visual manipulation, all of the audio manipulation, and keeping track of time. As a group, we coordinated well.

Performing in a club with the big screen and the sound system felt completely different from our practice sessions. Since our theme is explosion and the visuals are dynamic, putting them on a big screen made them look much better and more immersive. The audio also sounded much better: the strong beats and the explosion sound effects felt heavier. The whole club setup helped create a more immersive experience of our performance.

Conclusion

In the research phase, we looked at many famous audiovisual works and drew a lot of inspiration from them. For the creation phase, we initially had many ideas for the visuals, including filming videos, modeling, and building directly in Max. However, one or two approaches turned out to be enough, so we mainly used the model in Max and two videos we recorded. One thing to improve is that we didn't really know how to build the visuals we wanted from scratch in Max, so we modified the jit.mo model; in the future we might try integrating more models of our own. I discovered that we could make amazing visual effects with just one model by modifying its parameters. Sometimes simplicity is good, and we can't accomplish that much in 10 minutes.

However, I think the main problem is that we used multiple visuals and the transitions between them were not smooth, and the connection between the audio and the visuals is not entirely coherent. There is still a lot we could do to make the audiovisual relationship more consistent. Also, since I pre-recorded the 10-minute base track, I only manipulated the sound effects and a few parameters live. This time the two of us had to control the visuals together because of their complexity; next time, I hope the audio can be more live as well.

Video reference:

https://www.youtube.com/watch?v=jgm58cbu0kw

https://vimeo.com/user14797579

https://www.youtube.com/watch?v=MzkfhANn3_Q

https://www.youtube.com/watch?v=08XBF5gNh5I

Assignment 5 – Multi 3D Objects


For this project, I loaded a tree model I made myself into the patch. I modified the speed, scale, and frequency and set the trees in motion. To add texture, I generated video content with a generator module and captured its output as a texture, so the different shapes and lines became textures on the trees. For the background, I uploaded a prerecorded video and added effects to it: KALEIDR made the background colorful, and I used ZOOMR to control the motion of the trees.

Link to my video and patch: https://drive.google.com/drive/folders/1Z-4uUhhSK5gtEWoTZRyofGF9B3xo5sjQ?usp=drive_link

Reflection 8 – Live Cinema

Prompt

Read the chapters on Live Cinema by Gabriel Menotti and Live Audiovisual Performance by Ana Carvalho from The Audiovisual Breakthrough (Fluctuating Images, 2015).

Armed with the knowledge you gained last week by reading the chapter on VJ-ing and by watching the documentary Video Out (Finkelstein & Vlachos, 2005), write a two to three paragraph reflection on the difference between VJ-ing, Live Cinema, and Live Audiovisual Performance. Reference one contemporary artist / group / collective from each category as an example to indicate the differences in context, methodology, and technology (if any!).

VJ-ing, Live Cinema, and Live Audiovisual Performance represent distinct facets of the audiovisual arts, each with its unique characteristics.

VJ-ing typically involves real-time visual manipulation during music performances or events, often using software to mix and manipulate visuals. It usually takes place in clubs, where the audience gets lost in the projections. One example is the collective AntiVJ, whose immersive installations and live performances challenge the audience's senses. In one of their best-known shows, in which current member Simon Geilfus performed with Murcof, stars, lines, and chemical-like particles were projected onto multiple screens to create a boundless three-dimensional sensory experience, taking the audience on a mind-blowing journey in which the light sources could barely be perceived.

Live Cinema tends to be more narrative-focused, integrating cinematic elements into live performance. Live cinema is exempt from cinema's chief constraints, namely narrative continuity and fixed spatial arrangement; even setting up the projections is incorporated "as part of the creative process," intensifying the practice's kinship with expanded cinema and interactive installation. Live cinema can therefore be characterized by storytelling. Examples can be found on the website of the Live Cinema Festival: the artist Paraadiso, for instance, creates captivating audiovisual experiences by merging scientific data visualization with electronic sound, a more interdisciplinary approach.

Live Audiovisual Performance encompasses a broader spectrum, including both VJ-ing and Live Cinema, but often places a stronger emphasis on the integration of sound and visuals as equal components. The term is applied to contemporary artistic expressions of live-manipulated sound and image, defined as time-based, media-based, and performative. It is complex because it does not denote a specific style, technique, or medium; instead it gathers a series of common elements that identify both a group of artistic expressions and specific works, which don't necessarily fit within any one of the particular expressions that constitute the group. One example is EKO by Kurt Hentschläger, an audiovisual performance with an LED wall and surround sound. EKO is performed live in the splendid void of pitch darkness: erasing the audience's perceptual boundaries, the absence of light is interrupted for only fractions of a second by bursts of micro-animated abstract forms.

Assignment 4 – Granular Synthesis Audio

Because of the stuttering nature of the video, I added a sequencer that follows the frequency of the video using the Vizzie converter. To create more audio, I used a drum sequencer and linked it to three MIDI cells: one is a kick sound and one is a shush sound. I added the shush sound because in one video the woman demonstrates a similar movement. The shush sound occurs once per cycle, while the kick sound repeats at a regular frequency. I modified the shush sound with the gigaverb and reverb modules to give it more reverb. I also added another audio sample that I found suitable for the video because of its fast tempo and matching frequency, and I modified it with a flanger effect, adjusting the center, width, and rate so that the original sample is less recognizable.

The audio I created has a very intense vibe and corresponds with the effect I added to the video. 

The link to the recorded video and the patch:

https://drive.google.com/drive/folders/13eOFslG3ewisbSDVn7S3BQ_OkYDjGiRM?usp=drive_link

Reflection 7 – VJ Culture

Prompt 

Watch the documentary Video Out (Finkelstein & Vlachos, 2005) and read the chapter on VJing by Eva Fischer from the collection The Audiovisual Breakthrough (Fluctuating Images, 2015).

Write a two to three-paragraph reflection on the practice of VJing, as described by the author, and how it relates to the notions of “liveness”,  “the performative”, and the concept of “visual wallpaper”. Relate it to your own experience with work you’ve seen by VJs in a concert/club/theater setting.

“VJing stands for liveness, transience, and uniqueness.” Unlike a music-video screening, VJing requires a performing, improvising VJ. “The performative character of a VJ performance is closely connected to the structural and formal influences of the music: composition, rhythm, the desire to create immersive spaces, and the use of samples, loops, or patterns, all of which can be compared to the development of electronic music and DJing.” As I understand it, VJing requires control and movement from the performers, or even the audience. Time and space matter for VJing because the same performance will never occur twice. It's also important to note that VJing can't be separated from music, because it is so closely related to DJing.
“VJing as a visual component, therefore, always refers to a responsive action, a cooperation with someone else.” The idea of “visual wallpaper” captures VJs' dissatisfaction with being treated as mere background decoration compared to DJs; in the documentary, some artists even moved from VJing to video art. From my perspective, this is largely because VJs almost always have to be part of joint performances with others, particularly DJs or audio performers. I agree with the author that the club environment contributes to this as well: the audience in a club won't focus on the screen as much as in a movie theater, which makes perfect sense. It's a pity that VJing doesn't get enough attention in performance venues, especially clubs.
The most recent experience I have had with audiovisual performance is the field trip to the live house. Most people went there to watch the performance, so many of them did pay attention to the VJing. The visual effects, the lighting, and the audio were very cool. It felt like everyone was standing to watch a performance, which was different from a club.

Project 2 – Documentation Post

Title

Atlantis

Project Description

The theme of our project is the lost kingdom of Atlantis. The Lumia we created depicts the beginning and the demise of Atlantis. We aim to use light and different materials to create visuals of the ocean and the aurora. Our group noticed that many Lumia shows create aurora-like visuals, which gave us the feeling of the ocean, so we decided to make visuals of the deep sea. The addition of materials and color shows the changes in the kingdom of Atlantis.

Perspective and Context

Lumia is the art of light. According to the reading A Radiant Manifestation in Space: Wilfred, Lumia and Light by Keely Orgeman (Yale University Press, 2017), Wilfred paid attention to the “tempo of movement, the intensity of color, and levels of brightness and darkness,” which are central to Lumia. Our project tried to establish a relationship between light and music, using light to explore complex concepts like the birth and decay of nature. The documentary Lumia mentioned that some audience members heard music while watching the Lumia even though the work was in fact silent; this shows that although it is a purely visual performance, it can engage other senses. Our project also aims to create an immersive experience.

The reading “Cosmic Consciousness” from Visual Music: Synaesthesia in Art and Music Since 1900 (Thames & Hudson, 2005) emphasizes a key feature of visual music: “creating a kind of ideal world, a universe of linked senses in which all elements–sound, shape, color, and motion–are absorbed into one another”. Our project tries to use light to create visuals that correspond with the music and engage all the senses.

Development & Technical Implementation

Our group started by finding materials to create visuals. We decided to use the basic materials in the lab, since they are already highly reflective and could create striking visuals. As we layered the materials and different fabrics together, visuals emerged that we thought were suitable for the project. We explored many different materials and also used plastic bottles and glass to generate visuals. The result gives the feeling of the deep ocean and of Atlantis. We start with one color for the beginning stage of Atlantis, then add more colors to show its development. At around 3 minutes the prosperous stage of Atlantis begins, with the most colors and the most visual effects in Max. Around 4 minutes, we gradually take the materials away and the colors fade, showing the fall of the kingdom. The audio corresponds with the visuals: it starts with a bubbling sound, more layers gradually build up, and finally everything dies down.

The first problem we encountered was the setup. Since we constantly needed to add and remove material, we first wanted to build a spinning installation so that the materials could reveal themselves. We also thought of making a cupboard-like installation with elastic thread so that we could pull the fabrics away by moving the thread. In the end, we decided to keep it simple: we made a screen from a semi-transparent, matte fabric, and to keep the wooden frame from affecting the visuals, we covered the frame with silver fabric.

The foundation of the setup is the reflective silver material and the red, blue, and green transparent paper. The light shines directly on the silver paper, and the visual is projected onto the screen.

Performance

The performance held some surprises. We rehearsed several times before the final review, but the first run-through had the best effect.

Since the webcam's resolution was too low, we decided to use a phone instead, but it disconnected twice over the two performances. Even with the phone, the visuals were still not as clear as expected. The light was also a bit dim, so the effect was not as good as we had hoped.

I was in charge of the audio. The overall effect was great, but some transitions were too sudden: when I turned on one module, the sound suddenly became very loud and affected the whole piece. The Max effects were cool, but we could have made them correspond with the audio more. Our group feels that real-time performance is truly full of the unexpected.

Team Work

During the creation process, we came up with the theme and visuals together. Once the visuals were set, Chanel was in charge of adding Max effects to them, while Sophia and I searched for audio, made a 5-minute piece, and added Max effects together. In the performance, I controlled the audio, Chanel controlled the visuals, and Joy and Sophia managed the setup. The team communicated well.

One drawback of collaborating on a performance like this is that the visuals and audio were made separately, so it was hard to make them fit perfectly with one another. The benefit is that the use of materials can be more elaborate and create more beautiful visuals.

Conclusion

Previous research on Lumia and abstract film gave us inspiration for this project, and the group's cooperation and communication were effective. Through this project, I learned that we can create beautiful visuals with a simple setup and simple materials. Real-time performance, however, is full of the unexpected: the execution didn't quite live up to our expectations, but we did our best. Most of the time the final performance will not be fully satisfying, and it can be hard to reproduce the visuals we thought were great during the creation process. I'd say our project is a beautiful Lumia but lacks some creativity, especially compared with the other groups' performances. Their use of materials other than fabric, including fireworks, paint, smoke, and their own faces, was very creative and inspiring. In the future, I'd like to try more things like that.

Reflection 6 – Graphic Scores

At the beginning of our audio piece, we try to create an atmosphere in which everything is just starting and life gradually appears, so we drew small Greek letters in the shape of waves to symbolize the sound of bubbles. The bubble sound then echoes, which is shown as larger grey waves in our graphic score.

After that (between 60 and 180 seconds), based on other Greek letters from “A Fragment of Atlantis” by Hellanicus of Lesbos, we changed the shapes of these letters to fit our sound as the long, mysterious sound of the deep ocean emerges after the bubbles. The next period (around 180–240 seconds) is a combination of the background sound (the deep ocean), the sound of collapse, and the sound of explosion. During this period, we use twisted Greek characters to show the conflict between the later-added sounds and the background sound.

Then (240–280 seconds) the sounds other than the background disappear and the background becomes quieter, which is shown as small Greek characters in the score. Finally (280–300 seconds), we drew the tiny wave-shaped Greek letters again, since the sound returns to bubbling and the piece ends in peace.

Reflection 5 – Cosmic Consciousness

Prompt

Read pages 120 to 175 of the chapter “Cosmic Consciousness” from Visual Music: Synaesthesia in Art and Music Since 1900 (Thames & Hudson, 2005), select two of the films we watched today by Belson, (one of) the Whitneys, Cuba and/or Schwartz, and write a two to three paragraph reflection on them. How are they similar? How are they different? What conceptual basis do they share? What are their influences? Etcetera. Also address how the Vortex Concert series and the later light shows by other groups have influenced our current experiences at pop concerts and dance music events.