Project 3 – Documentation Post

Title
Revival

Project Description

The theme of our project is an explosion. The 10-minute piece depicts the full arc of an explosion: we start with the chemical reactions at the early stage, gradually move into the huge blast itself, and end with the scattered particles of the bomb. We were inspired by the movie Oppenheimer, as well as other audiovisual performances that include stunning explosion scenes (see the video references below). We wanted to create dynamic visuals with exciting audio to fit this scenario.

Perspective and Context

Our project fits into the historical context of visual music, abstract film, live cinema, and live audiovisual performances. The depiction of the explosion process, from chemical reactions to the eventual dispersion of bomb particles, resonates with the tradition of abstract film and visual music, where artists often explore non-representational and dynamic visual elements paired with sound to evoke emotional responses. Our project embraces this idea by translating the auditory and visual dynamics of an explosion into a synchronized audiovisual experience. Our intent to create dynamic visuals with exciting audio aligns with the principles of live cinema, emphasizing the immediacy and co-presence of image and sound.

We tried to follow the artistic and stylistic characteristics we learned about in the early abstract film section. Take Norman McLaren's work as an example: the synesthetic relationship between visuals and sound is fascinating, with different shapes on screen signifying different sounds. This is what we tried to accomplish in our own work.

Furthermore, our project engages with the theoretical aspects discussed during the semester. From the chapters Live Cinema by Gabriel Menotti and Live Audiovisual Performance by Ana Carvalho in The Audiovisual Breakthrough (Fluctuating Images, 2015), I learned that Live Cinema tends to be more narrative-focused, integrating cinematic elements into live performance; that VJ-ing typically involves real-time visual manipulation during music performances or events, often using software to mix and manipulate visuals; and that Live Audiovisual Performance encompasses a broader spectrum, including both VJ-ing and Live Cinema, but often places a stronger emphasis on the integration of sound and visuals as equal components. Our project tells the narrative of an explosion and includes real-time audiovisual manipulation during the performance.

Development & Technical Implementation

We first had the idea of making a performance on the theme of an explosion. Our first step was to create visuals for it. We began by filming short videos and modifying them in Max to create abstract visuals. We experimented with many setups, including paint, ink, and hot glue, and filmed the clips. We then loaded them into Max to try out different effects together.

We documented the effects we found useful in a Google Doc to prepare for building the visuals. When we felt we had enough material and effects for the 10-minute piece (we did not want to make it too complicated), we started outlining the project. Our initial idea was a narrative that imagines the nuclear bomb as a living thing with a mind, watching itself form, explode, and disperse.

This is the timeline we created (a sketch of it as a code cue list follows the list):

  1. 0:00 – 0:30 formation of the bomb: visual: bubble (preset 2); audio: science-lab sounds
  2. 0:30 – 1:10 transition to the final formation 1: WYPR cage effect, bubble spread; audio: stage-opening sound
  3. 1:10 – 1:50 transition to the final formation 2: kaleido, mixfader e2
  4. 1:50 – 2:30 completion of the bomb: visual: bubble (adjust mixfader 1 first, then m2); note the visual is spreading here, so speed/twisting can be adjusted
  5. 2:30 – 3:10 bombs gather: visual: set the spread effect to 0; use zoomer together with the heartbeat
  6. 3:10 – 3:50 fuse: red lines (preset 3); can be twisted, the cage can be added
  7. 3:50 – 4:30 explosion: red lines to 3D model (flicker effect)
  8. 4:30 – 5:20 3D explosion: adjust BRCOSR and PINCHR
  9. 5:20 – 6:20 explosion to ink
  10. 6:20 – 7:00 before restart: apply fogger to the ink; old-TV-stuck effect
  11. 7:00 – 7:40 restart: ink to spreading bubble
  12. 7:40 – 8:20 restart finished: bubble to bomb
  13. 8:20 – 9:00 bomb crash: scrambler
  14. 9:00 – 10:00 end with particles
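
Purely as an illustration (this is not part of our Max patch), the cue sheet above can be encoded as a simple timed list in code, which would let the timekeeper look up the current cue at a glance during rehearsal. The structure and values below just restate the timeline:

```python
# Hypothetical encoding of the cue sheet above as (start_seconds, cue) pairs.
CUES = [
    (0,   "formation of the bomb: bubble (preset 2)"),
    (30,  "transition to final formation 1: WYPR cage, bubble spread"),
    (70,  "transition to final formation 2: kaleido, mixfader e2"),
    (110, "completion of the bomb: bubble, mixfader 1 then m2"),
    (150, "bombs gather: spread to 0, zoomer + heartbeat"),
    (190, "fuse: red lines (preset 3)"),
    (230, "explosion: red lines to 3D model (flicker)"),
    (270, "3D explosion: BRCOSR and PINCHR"),
    (320, "explosion to ink"),
    (380, "before restart: fogger on ink, old-TV stuck"),
    (420, "restart: ink to spreading bubble"),
    (460, "restart finished: bubble to bomb"),
    (500, "bomb crash: scrambler"),
    (540, "end with particles"),
]

def cue_at(seconds: float) -> str:
    """Return the cue that is active at a given time into the piece."""
    current = CUES[0][1]
    for start, name in CUES:
        if seconds >= start:
            current = name
    return current

print(cue_at(125))  # -> "completion of the bomb: ..."
```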

These are pictures of the original clips we filmed and the final output with the effects.

For the visuals, we also used a jit.mo model. Sophia adjusted some of its parameters, and we added effects on top of the model to create varied visuals.

For the 3D modeling part, I reused a donut model I had made previously and modified it with additional effects.

Below is the visual Max patch:

We made the visuals first and then composed the audio accordingly. While Sophia was adding effects to the visuals, I found and downloaded sound samples online. At first I tried the drum sequencer and the piano roll sequencer, but it was hard to achieve the effect I wanted. After Sophia had mostly finished the overall flow of the visuals, I composed the audio in GarageBand, using different piano sounds to follow the visual progression, and made use of the downloaded sound effects as well.

I first looked for a base sound and settled on the noise module in Max. This was convenient because the same sound could also drive the fogger section of the visuals.
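
The real base sound came straight from Max's noise module, but purely for illustration, a rough equivalent of the idea in Python (assuming numpy and the standard-library wave module) might look like this: white noise softened with a simple low-pass so it can sit underneath the piece.

```python
# A minimal sketch of a noise-based base sound (illustrative, not our patch).
import numpy as np
import wave

SR = 44100
DURATION = 10  # seconds here; the real base track ran the full 10 minutes

noise = np.random.uniform(-1.0, 1.0, SR * DURATION)

# One-pole low-pass to soften the hiss (coefficient chosen by ear).
alpha = 0.02
out = np.empty_like(noise)
level = 0.0
for i, x in enumerate(noise):
    level += alpha * (x - level)
    out[i] = level

pcm = (out / np.max(np.abs(out)) * 0.5 * 32767).astype(np.int16)
with wave.open("base_noise.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```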

I started with relatively soft audio at the beginning of the piece and gradually added musical phrases, sound effects, and drum beats to correspond to the visuals. I intentionally placed an obvious sound effect at each transition so that Sophia could have a better sense of time when manipulating the MIDI controller. For the explosion part, I added strong, heavy beats and integrated the explosion samples to make the audio dynamic and explosive. In the ink part, the audio gave the audience a sense of flowing water, with a drum beat each time a drop of ink fell into the water. For the final part, the particles of the bomb, the audio turned sparkly, and I adjusted the frequency for a mistier sound.

For the performance setup, we controlled the visual MIDI together, since we constantly needed to add video clips and effects, and I controlled the audio by myself. With the base sound already in place, I added the sound samples and adjusted the effect parameters during the performance. For the recorded samples, the visual controller responded to match the audio; for instance, Sophia changed the size of the bubble according to the sound of the heartbeat. I could also play short samples in response to the visuals.

The link to the Max patch: https://drive.google.com/drive/folders/1f5aYERLiX967pw6qx4fdpd_Oow1N7u9P?usp=share_link

Performance

The performance generally went well. We all made small mistakes due to nervousness, but no big ones. One regret is that the explosion effect I had most anticipated didn't work: the explosion audio sample was too loud when I played it, so I didn't have time to manipulate the flicker effect on the visual MIDI controller. Also, our transitions were still not very smooth, and the screen went black for a short period of time.

In the performance, Sophia was in charge of most of the visual manipulation. I was in charge of some visual manipulation, all of the audio manipulation, and keeping track of the time. As a group, we coordinated well.

Performing in a club with the big screen and proper audio setup felt completely different from practice. Since our theme was an explosion and our visuals were dynamic, the big screen made them look much better and more immersive. The audio also sounded much better: the strong beats and explosion effects felt heavier. The whole club setup helped create a more immersive experience for our performance.

Conclusion

In the research phase, we studied many well-known audiovisual works and drew a lot of inspiration from them. For the creation phase, we initially had many ideas for the visuals, including filming videos, modeling, and building directly in Max. In the end, one or two approaches proved enough, so we mainly used the model in Max and two videos we recorded. One thing to improve is that we didn't really know how to build the visuals we wanted from scratch in Max, so we modified the jit.mo model; in the future, we might integrate more models of our own. I discovered that we could make amazing visual effects using a single model and simply modifying its parameters. Sometimes simplicity is good, and we can't accomplish that much in 10 minutes anyway.

However, I think the main problem is that we used multiple visuals and the transitions between them were not smooth, so the connection between audio and visuals was not fully coherent. There is still a lot we can do to create a more consistent audiovisual performance. Also, since I pre-recorded the 10-minute base sound, I only manipulated the sound effects and some parameters live; this time the two of us had to control the visuals together because of their complexity. Next time, I hope the audio can be more live as well.

Video references:

https://www.youtube.com/watch?v=jgm58cbu0kw

https://vimeo.com/user14797579

https://www.youtube.com/watch?v=MzkfhANn3_Q

https://www.youtube.com/watch?v=08XBF5gNh5I

Assignment 5 – Multi 3D Objects


For this project, I loaded a tree model I made myself into the patch. I modified its speed, scale, and frequency and set it in motion. To add texture, I generated video content with a generator module and captured the output as a texture, so the different shapes and lines became textures on the trees. For the background, I uploaded a prerecorded video and added effects to it: KALEIDR made the background colorful, and I used ZOOMR to control the motion of the trees.
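
To make the motion concrete: conceptually, the speed, scale, and frequency settings drive a per-frame transform on the model. The sketch below is a hypothetical Python illustration of that idea; the names and values are mine, not from the patch.

```python
# Illustrative per-frame animation parameters for a model, driven by
# speed/scale settings, roughly the way jit.mo animates object attributes.
import math

FPS = 30
SPEED = 0.5        # oscillation frequency in Hz (hypothetical)
SCALE_BASE = 1.0
SCALE_DEPTH = 0.3  # how far the scale "breathes" around its base

def params_at(frame: int) -> dict:
    t = frame / FPS
    phase = 2 * math.pi * SPEED * t
    return {
        "scale": SCALE_BASE + SCALE_DEPTH * math.sin(phase),  # breathe in/out
        "rotate_y": (90.0 * t) % 360.0,                        # steady spin
    }

for frame in range(4):
    print(frame, params_at(frame))
```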

Link to my video and patch: https://drive.google.com/drive/folders/1Z-4uUhhSK5gtEWoTZRyofGF9B3xo5sjQ?usp=drive_link

Reflection 8 – Live Cinema

Prompt

Read the chapters on Live Cinema by Gabriel Menotti and Live Audiovisual Performance by Ana Carvalho from The Audiovisual Breakthrough (Fluctuating Images, 2015).

Armed with the knowledge you gained last week by reading the chapter on VJ-ing and by watching the documentary Video Out (Finkelstein & Vlachos, 2005), write a two to three paragraph reflection on the difference between VJ-ing, Live Cinema, and Live Audiovisual Performance. Reference one contemporary artist / group / collective from each category as an example to indicate the differences in context, methodology, and technology (if any!).

VJ-ing, Live Cinema, and Live Audiovisual Performance represent distinct facets of the audiovisual arts, each with its unique characteristics.

VJ-ing typically involves real-time visual manipulation during music performances or events, often using software to mix and manipulate visuals. It usually takes place in clubs, where the audience gets lost in the projections. One example is the collective AntiVJ, which creates immersive installations and live performances that challenge the audience's senses. In one of their best-known shows, current member Simon Geilfus performed with Murcof: stars, lines, and chemical particles were projected on multiple screens to create a seemingly boundless 3D sensory experience, taking the audience on a mind-blowing journey in which the light sources could barely be perceived.

Live Cinema tends to be more narrative-focused, integrating cinematic elements into live performance, yet it is exempt from cinema's chief constraints, namely narrative continuity and fixed spatial arrangement. Even setting up the projections is incorporated "as part of the creative process," intensifying the practice's kinship with expanded cinema and interactive installation. Live Cinema can be characterized by storytelling; examples can be found on the website of the Live Cinema Festival. The artist Paraadiso, for instance, creates captivating audiovisual experiences by merging scientific data visualization with electronic sound, showcasing a more interdisciplinary approach.

Live Audiovisual Performance encompasses a broader spectrum, including both VJ-ing and Live Cinema, but often places a stronger emphasis on the integration of sound and visuals as equal components. The term applies to contemporary artistic expressions of live-manipulated sound and image, defined as time-based, media-based, and performative. It is complex because it does not comprise a specific style, technique, or medium; instead, it gathers a series of common elements that simultaneously identify a group of artistic expressions as well as specific works, which don't necessarily fit within any one of the particular expressions that constitute the group. One example is EKO by Kurt Hentschläger, an audiovisual performance with an LED wall display and surround sound, performed live in the splendid void of pitch darkness. Erasing the audience's perceptual boundaries, the absence of light is interrupted for only fractions of a second by bursts of micro-animated abstract forms.

Assignment 4 – Granular Synthesis Audio

Due to the stuttering nature of the video, I added a sequencer that follows the frequency of the video using the Vizzie converter. To create more audio, I used a drum sequencer and linked it to three MIDI cells. One is a kick sound and one is a shush sound; I added the shush because in one video the woman demonstrates a similar movement. The shush occurs once per cycle, and the kick recurs at a fixed rate. I modified the shush with the gigaverb and reverb modules to give it more reverb. I also added another audio sample that suited the video, since it has a fast tempo and its frequency fits the footage; I processed it with a flanger and adjusted the center, width, and rate so that the original sample is less obvious.
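
For readers unfamiliar with the technique in the assignment title, here is a tiny, self-contained granular synthesis sketch in Python (a generic illustration of the method, not a reproduction of the Max patch): short windowed grains are cut from a source and overlap-added at random positions to form a new texture.

```python
# Minimal granular synthesis: chop, window, scatter, overlap-add.
import numpy as np

SR = 44100

def granulate(source, grain_ms=60, density=200, length_s=4.0, seed=0):
    rng = np.random.default_rng(seed)
    grain_len = int(SR * grain_ms / 1000)
    window = np.hanning(grain_len)          # fade each grain in and out
    out = np.zeros(int(SR * length_s))
    for _ in range(int(density * length_s)):
        src = rng.integers(0, len(source) - grain_len)
        dst = rng.integers(0, len(out) - grain_len)
        out[dst:dst + grain_len] += source[src:src + grain_len] * window
    return out / np.max(np.abs(out))        # normalize

# Example: granulate a 440 Hz test tone (a stand-in for a recorded sample).
t = np.linspace(0, 1, SR, endpoint=False)
cloud = granulate(np.sin(2 * np.pi * 440 * t))
```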

The audio I created has a very intense vibe and corresponds with the effect I added to the video. 

The link to the recorded video and the patch:

https://drive.google.com/drive/folders/13eOFslG3ewisbSDVn7S3BQ_OkYDjGiRM?usp=drive_link

Reflection 7 – VJ Culture

Prompt 

Watch the documentary Video Out (Finkelstein & Vlachos, 2005) and read the chapter on VJing by Eva Fischer from the collection The Audiovisual Breakthrough (Fluctuating Images, 2015).

Write a two to three-paragraph reflection on the practice of VJing, as described by the author, and how it relates to the notions of “liveness”,  “the performative”, and the concept of “visual wallpaper”. Relate it to your own experience with work you’ve seen by VJs in a concert/club/theater setting.

“VJing stands for liveness, transience, and uniqueness.” Unlike a music-video screening, VJing requires a performing, improvising VJ. “The performative character of a VJ performance is closely connected to the structural and formal influences of the music: composition, rhythm, the desire to create immersive spaces, and the use of samples, loops, or patterns, all of which can be compared to the development of electronic music and DJing.” As I understand it, VJing requires control and movement from the performers and even the audience. Time and space matter for VJing because the same performance will never occur twice. It's also important to note that VJing can't be separated from music, because it is closely tied to DJing.

“VJing as a visual component, therefore, always refers to a responsive action, a cooperation with someone else.” The notion of “visual wallpaper” captures the dissatisfaction VJs feel at being treated less favorably than DJs. In the documentary, some artists even move from VJing to video art. From my perspective, this is largely because VJs almost always perform as part of joint acts with others, particularly DJs and audio performers. I agree with the author that the club environment contributes to this as well: the audience in a club won't focus on the screen as much as in a movie theater, which makes perfect sense. It is a pity that VJing doesn't get enough attention in performance venues, especially clubs.

My most recent experience with audiovisual performance was the field trip to the live house. Most people went there to watch the performance, so many did pay attention to the VJing. The visual effects, the lighting, and the audio were very cool. Everyone was standing and watching the performance, which felt different from a club.

Project 2 – Documentation Post

Title

Atlantis

Project Description

The theme of our project is the lost kingdom of Atlantis. The Lumia we created depicts the beginning and the demise of Atlantis. We aimed to use light and different materials to create visuals of the ocean and the aurora. Our group noticed that many Lumia shows create an aurora-like visual, which gave us the feeling of the ocean, so we decided to make visuals of the deep sea. The addition of materials and color reflects the changes in the kingdom of Atlantis.

Perspective and Context

Lumia is the art of light. In the reading A Radiant Manifestation in Space: Wilfred, Lumia and Light by Keely Orgeman (Yale University Press, 2017), Wilfred is described as attending to the “tempo of movement, the intensity of color, and levels of brightness and darkness,” all of which are central to Lumia. Our project tried to establish a relationship between light and music, using light to explore complex concepts like the birth and decay of nature. The documentary LUMIA mentioned that some audience members heard music while watching a Lumia even though it was in fact silent; although Lumia is a purely visual performance, it can engage the other senses. Our project likewise aims to create an immersive experience.

The reading “Cosmic Consciousness” from Visual Music: Synaesthesia in Art and Music Since 1900 (Thames & Hudson, 2005) emphasizes a defining feature of visual music: “creating a kind of ideal world, a universe of linked senses in which all elements–sound, shape, color, and motion–are absorbed into one another”. Our project tries to use light to create visuals that correspond with the music and engage all the senses.

Development & Technical Implementation

Our group started by finding materials to create visuals. We decided to use the basic materials in the lab, since they already had beautiful reflections and could produce striking visuals. As we layered the materials and different fabrics together, visuals emerged that we felt suited the project: they gave us the feeling of the deep ocean and of Atlantis. We start with one color for the beginning stage of Atlantis, then add more colors to show its development. At around three minutes, the prosperous stage of Atlantis begins, with the most colors and Max visual effects. Around four minutes, we gradually take away the materials and the colors fade, showing the fall of the kingdom. The audio corresponds with the visuals: it starts with a bubbling sound, more audio layers gradually build up, and finally everything dies down.

The first problem we encountered was the setup. Since we constantly needed to add and remove material, we first wanted to build a spinning installation so the materials would present themselves. We also considered a cupboard-like installation strung with elastic thread, so we could pull fabrics away by moving the thread. In the end, we decided to keep it simple: we made a screen out of a semitransparent, matte fabric, and to keep the wooden frame from affecting the visuals, we covered the frame with silver fabric.

The foundation of the setup is the reflective silver material and the red, blue, and green transparent paper. The light shines directly on the silver paper, and the visual is projected onto the screen.

Performance

The performance was somewhat unexpected. We rehearsed several times before the final review, but the very first run had the best effect.

Since the webcam's resolution was too low, we decided to use our phone instead, but it disconnected twice across the two performances. Even with the phone, the visuals were not as clear as expected. The lighting was also a bit dim, which weakened the effect.

I was in charge of the audio. The overall effect was good, but some transitions were too sudden: when I turned on one module, the sound suddenly became very loud and affected the whole piece. The Max effects we added were cool, but we could have made them correspond with the audio more closely. Our group feels that real-time performance is truly full of the unexpected.

Team Work

During the creation process, we came up with the theme and visuals together. Once the visuals were set, Chanel was in charge of adding Max effects to them, while Sophia and I searched for audio, made a 5-minute piece, and added Max effects to it together. In the performance, I controlled the audio, Chanel controlled the visuals, and Joy and Sophia managed the setup. The team communicated well.

One drawback of collaborating on a performance like this is that the visuals and audio were made separately, so it was hard to make them fit perfectly with one another. The benefit is that the use of materials could be more elaborate, creating more beautiful visuals.

Conclusion

Our previous research on Lumia and abstract film inspired this project, and the group's cooperation and communication were effective. Through this project, I learned that we can create beautiful visuals with a simple setup and simple materials; however, real-time performance is full of the unexpected. The execution didn't quite live up to our expectations, but we did our best. Most of the time the final performance will not be fully satisfying, and it can be hard to reproduce the visuals we thought were great during the creation process. I'd say our project is a beautiful Lumia, but it lacks some creativity, especially compared with the other groups' performances: their use of materials beyond fabric, including fireworks, paint, smoke, and even faces, was very creative and inspiring. In the future, I'd like to try more experiments like that.

Reflection 6 – Graphic Scores

At the beginning of our audio piece, we tried to create an atmosphere in which everything is just beginning and life gradually appears, so we drew small, wave-shaped Greek letters to symbolize the sound of bubbles. The bubble sounds then echo, which is shown as larger grey waves in our graphic score.

After that (between 60s and 180s), drawing on other Greek letters as written in “A Fragment of Atlantis” by Hellanicus of Lesbos, we changed the shapes of the letters to fit our sound, as the long, mysterious sound of the deep ocean emerges after the bubbles. The next period (around 180s–240s) combines the background sound (the deep ocean), the sound of collapse, and the sound of explosion; here we used twisted Greek characters to show the conflict between the later-added sounds and the background.

Then (240s–280s) the sounds other than the background disappear and the background's volume drops, which we show as small Greek characters in the score. At last (280s–300s), we drew the tiny wave-shaped Greek letters again, since the sound returns to bubbling, ending in peace.

Reflection 5 – Cosmic Consciousness

Prompt

Read pages 120 to 175 of the chapter “Cosmic Consciousness” from Visual Music: Synaesthesia in Art and Music Since 1900 (Thames & Hudson, 2005) and select two of the films that we watched today by Belson, (one of) the Whitneys, Cuba and/or Schwartz, then write a two to three paragraph reflection on them. How are they similar? How are they different? What conceptual basis do they share? What are their influences? Etcetera. Also address how the Vortex Concert series and the later light shows by other groups have influenced our current experiences at pop concerts and dance music events.

 

Project 1 – Documentation Post

Title
Rhythm with patterns and light

Project Description

The project is about performing a generative composition in both sound and image, with the visuals corresponding to the audio. I started with the sound I generated: for the music, I combined piano notes with drum and kick beats, and added background music to make the audio more complex and atmospheric. I then used the audio data to generate visuals evoking space and the universe, matching the cosmic vibe of the music. I explored many generator modules to create a rich, colorful image.

Perspective and Context

Visual music is about relating visuals and audio so that viewers perceive a connection between what they hear and what they see. In my project, the visuals reflect the beat and the frequency of the music: the patterns change every time a piano note hits. I gained inspiration from one of the early abstract films we watched, Dots by Norman McLaren, in which the sound and visuals fit each other perfectly; we can sense the beat of the sound through the dots we see. From the reading on synesthesia, I took the idea that visual and auditory stimuli can be perceived as interconnected and mutually influencing. In my project, each sound I make with the sequencer and oscillator modules corresponds to a visual, so the viewer hears the notes change while watching. The project can thus give the viewer a different kind of experience across sight and hearing.

Development & Technical Implementation

My research process included readings on visual music, synesthesia, and abstract film. I gained inspiration from one of the early abstract films we watched, Dots by Norman McLaren, in which the sound and the visuals fit each other perfectly; we can sense the beat of the sound through the dots we see.

I tried out many different things. For starters, I spent time deciding which sequencers and oscillators to use to generate data and audio. I began with the Sequencer and the Piano Roll Sequencer, but the audio they generated felt too repetitive, so I added the Granular module and the Drum Sequencer.

The noisy sound generated by GRANULAR gives me an image of different particles merging together. The data from this audio goes into BFGENER8R. There were many patterns to choose from, and I tried every one of them, finally settling on the polygon-like pattern as the background visual. I control its zoom range and rotation so that the background moves and floats with the noisy sound. I added KARPLUS and GIGAVERB to give the piano notes stronger oscillation and reverb.

To generate the videos, I tried different mix/composite modules such as LUMAKEYR, which combines two videos using luma keying, but the output turned out too complex. I decided instead to use EASEMAPPER to generate the diamond-like pattern, with the data feeding the zoom and rotation angle: each time a piano note triggers, the diamonds rotate once, so the note triggers are visible in the movement of the patterns, which keep changing with the input data.

To color the patterns, I tried modules like POSTERIZR and COLORIZR. I wanted the colors of both patterns to follow the beat and frequency of the audio, but that didn't work well. Using MAPPR to change the pattern colors with the piano notes had a better effect; I hoped the color would change on each note hit, but the result was still not what I expected, so I eventually used the TWIDDLR module to change the color. In the final output, though, the lines weren't actually changing color.
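
The core mapping can be summarized in a few lines. The sketch below is plain Python, not the Vizzie patch: each note trigger advances the pattern's rotation by one step, and the note's pitch picks a hue, which is roughly what I hoped MAPPR would do for the colors. All names and values here are hypothetical.

```python
# Illustrative note-to-visual mapping: rotate on each trigger, color by pitch.
import colorsys

ROTATION_STEP = 45.0  # degrees per piano-note trigger (arbitrary choice)

class PatternState:
    def __init__(self):
        self.rotation = 0.0
        self.color = (1.0, 1.0, 1.0)

    def on_note(self, pitch: int):
        self.rotation = (self.rotation + ROTATION_STEP) % 360.0
        hue = (pitch % 12) / 12.0            # pitch class onto the hue wheel
        self.color = colorsys.hls_to_rgb(hue, 0.5, 1.0)

state = PatternState()
for pitch in [60, 64, 67, 72]:  # a C-major arpeggio as test input
    state.on_note(pitch)
    print(round(state.rotation), tuple(round(c, 2) for c in state.color))
```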

Another consideration was which pattern should represent the notes. I originally used a straight-line pattern, but it conflicted with the other line pattern being generated, so I changed it to diamond-like patterns so that the changes in the music could be seen more clearly.

This is the link to the original video output.

I used the drum sequencer, and its frequency data went into two MIDI destinations. For one track I used FLANGER and SYNC DELAY to make it more atmospheric. The other is the kick, which has a strong beat; its data goes into 1PATTERNMAPPR, which generates a linear light visual that marks each kick beat, and the zooming in and out of the line also follows the audio.
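
The kick-to-zoom relationship can also be sketched generically: follow the amplitude envelope of the kick track and map it onto a zoom factor, so the line visual swells on each beat. Again, this is an illustration under assumed values, not the actual patch:

```python
# Envelope follower driving a zoom parameter (illustrative sketch).
import numpy as np

SR = 44100

def envelope(signal, attack=0.001, release=0.1):
    """Peak follower: fast rise on hits, slow decay between them."""
    up = np.exp(-1.0 / (SR * attack))
    down = np.exp(-1.0 / (SR * release))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = up if x > level else down
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env

# Fake kick track: a 60 Hz burst that decays, repeating twice per second.
t = np.linspace(0, 2, 2 * SR, endpoint=False)
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-20 * (t % 0.5))

zoom = 1.0 + 0.5 * envelope(kick)  # the visual zooms to ~1.5x on each kick
```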

Presentation

Unfortunately, I didn't have the chance to present due to technical issues. Had I presented, I suspect the volume of my audio would have been too loud. This reminds me to always keep volume in mind when working with music.

Link:

https://drive.google.com/drive/folders/1FQSgO6XVgt40h_qB-80dhGGzsc5W1qkJ?usp=drive_link

my recorded video

Conclusion

The research on the readings and films gave me inspiration for this project. Even a single pattern can generate strong effects that correspond to the music. The previous exercises helped me explore the different types of modules, which enabled me to choose a Vizzie generator that produces the pattern I wanted and aligns with the music. From my creation process and the presentation, I discovered that different effect and filter modules can change the audio output significantly; there are many ways to manipulate a single piece of audio or video left for me to explore.

During the creation process, I felt Max was still difficult to use, and it was hard to achieve the effect I intended. The three visual effects do not align very well, and the audio composition could be improved. In the current video, the piano notes and the kick beats were too quiet to be heard clearly; volume is also hard to judge, because the recording does not sound the same as playback on my laptop. One problem is that I made the patch more complex than it needed to be. I tried to simplify it, but since I used three sequencers/oscillators and three Vizzie generator modules in total, the overall effect remained a little complicated; simplifying certain steps would have been better.

I changed and improved my patch multiple times because it was hard to settle on the best effect. As it stands, it is not clear how the visuals represent the audio. Instead of simply rotating the patterns on every beat, I could also vary the zoom, the position, the color, and so on. What I need to improve is the alignment of audio and visuals and their inner connection; it would be much better if the visuals corresponded with the audio more closely.

Reflection 4 – Midterm Preparation

Prompt

Watch the documentary LUMIA (stream/download here) by Meredith Finkelstein and Paul Vlachos about the life and work of visual music pioneer Thomas Wilfred and the people he inspired to follow in his footsteps. Then, read the text A Radiant Manifestation in Space: Wilfred, Lumia and Light by Keely Orgeman (Yale University Press, 2017) about the conceptual world Wilfred placed his work in and the technical details of his creations.

Write a two to three paragraph reflection on the concept of Lumia and the life and work of Thomas Wilfred, as you understand it from the documentary and the text. Reflect on the significance of his work, his conceptual framework(s), and his technical process.

Lumia is the art of light.

Thomas Wilfred envisioned Lumia as a unique art form. He wished to “separate Lumia from the light-organ tradition”, so this art form is not about establishing a literal correspondence between light and music. What inspires me is that Wilfred considered light not merely the medium of his art form but its fundamental conceit. He used light to explore complex concepts like human existence and other philosophical ideas, and the marvelous visualizations stirred the audience's sensations and imagination. Lumia broadens our understanding of what art can be and how it can interact with the audience. The documentary mentioned that some audience members heard music while watching a Lumia even though it was in fact silent; although Lumia is a purely visual performance, it can engage the other senses.

The technical process included selecting a composition and modifying its aspects, such as the “tempo of movement, the intensity of color, and levels of brightness and darkness”. It's interesting that these artists built everything at home: they constructed their theaters of light themselves, including the projectors and the light bulbs. Wilfred painted abstract color designs directly onto a light bulb and projected it. However, since the machinery was fragile, these works were hard to maintain; perhaps the best way to preserve them is through film.

As the title of the text suggests, Wilfred's manipulation of moving light contributed to the fusion of art and technology. Lumia explores the connection between light, sound, and human emotion, and it makes us wonder about the relationship between light and art.

Reference

A Radiant Manifestation in Space: Wilfred, Lumia and Light by Keely Orgeman (Yale University Press, 2017)