RAPS: Midterm Project Documentation – Celine Yu

Title: The Abyss 

Group Members: Gabriel Chi, Kyle Brueggemann, and Celine Yu 

Project Abstract: The Abyss is an audio-visual project that tests the limits of Lumia production using water and light. It builds a mysterious atmosphere that questions the stereotypical associations we hold around both the concrete and abstract perceptions of water. 

Project Description: 

When the three of us began researching ideas for our midterm project, we knew that we wanted to look back into our studies of the light art pioneer Thomas Wilfred. After rewatching his many works with the Clavilux machine, we agreed that they seemed to resemble the imagery of water. From there, we unanimously decided that the element of water would become the centerpiece of our project. 

The use of water is paired with a mysterious and unsettling atmosphere that seeks to challenge the status quo of both the connotation and the denotation of water. When one recalls the imagery of water, one most likely pictures a serene and tranquil ambiance. With this project, we wanted the chaotic renditions of chanting monks and ominous melodies to intrigue the audience as they view the imagery of water in motion. We wanted our combination of contrasting visuals and audio to challenge what most people associate with water in the real world, as it works to signify change and the passage of time. 

Perspective and Context: 

Though our project is rooted in the concrete subject of water, the outcomes we create with that input do not resemble, nor attempt to reference, reality outside of the screen. “The Abyss” is our take on abstract film and visual music: a non-narrative Lumia art performance in which auditory and visual information are related through mutual entailment, so that neither is articulated without the existence of the other, similar to the effect of synesthesia. 

Our concept for this project was heavily influenced by readings and documentaries on the works of Thomas Wilfred and his famous Clavilux Lumia machines. Wanting to follow in his footsteps, Gabriel, Kyle and I designed a self-contained light art machine that would focus on the three most important elements of the “8th Fine Art” as conveyed by Wilfred himself: form, color, and motion. To him, in any form of light art, “motion is a necessary dimension” (Orgeman 21), for it plays a crucial role in showcasing the true nature of light itself. While the elements of form and color were already in mind, the concept of motion had not yet been implemented in our project, at least not until after our research into Wilfred’s work. For this reason, we decided to introduce movement through our use of water, creating a physical Lumia box (containing the liquid) that we could manipulate in various directions and with varying force. 

Development and Technical Implementation:

The three main areas of our project’s development and technical setup are the audio, the physical Lumia box, and the visuals. 

Audio

Due to my inexperience with audio creation and manipulation in Ableton and Logic Pro X, my team members Gabriel and Kyle volunteered to create the background melody as well as the live filler manipulations. With the help of Ableton, Gabriel created the background file that would serve as the basis for the entire project, which was crucial in setting the overall theme. He sampled several layers of monks chanting and later combined them with tracks that resembled the movement of liquids (water). The resulting combination was then merged with an ambient synthesizer to further develop the prevailing atmosphere. Reverb and delay were then added to the composition to complete the base layer, a track that evokes both ethereal and peculiar feelings and imagery. 

While the unchanging background music was created with Ableton, the audio files we intended to manipulate during the live performance were developed in Logic Pro X. Kyle downloaded all of the audio files provided and sifted through each piece to locate the tracks that would best fit our intended direction. He identified sounds seemingly straight out of sci-fi horror films, meditation rooms, and zen-like atmospheres. Once we limited the chosen tracks to a maximum of five, Kyle began working with the in-application keyboard to create short melodies and snippets we could better utilize throughout the performance. These audio files were recorded and saved for later use within our Max patch. 

Physical Machine

The physical lightbox we created for this midterm project was, as mentioned in the last section, heavily inspired by the inventions of Thomas Wilfred. Wilfred’s own self-contained Lumia machines were revolutionary, and as a group we knew that we wanted to create a product with similar yet distinct features. Just as Wilfred went through many versions of the Clavilux, Gabriel, Kyle and I went through a significant amount of trial and error to arrive at our final design of a self-contained ‘Lumia’ box. The images that follow are diagrams and drawings that led up to the current version. We modeled the sizing and the walls, and even incorporated the idea of handles and wheels, in order to better integrate the physical box with the project as a whole. I will discuss the functionality and mechanics of the lightbox in the next paragraph. 

Attempts 1, 2, 3: 

Final Design: 

As this detailed diagram shows, there are four crucial factors in the creation and recording of our desired wave-like shadows: a light source, a water source, a base for light projection, and a live video camera. The box itself comes in two separate parts, a bottom and a top, with a removable middle layer in between. On the top of the box sits the light source, shining downwards through a long, narrow horizontal slit we made with the laser cutter. The idea of minimizing the light entering the otherwise fully enclosed box surfaced while we were still prototyping the project. We realized then that the light we were shining on the cases of water was too bright to allow any shadows or reflections to form. Thus, we created a restrictive opening that limited the amount of light shone onto the water below. 

Inside the box, sitting on top of the transparent middle layer, are three identically proportioned plastic cases filled to the brim with tap water. They lie side by side, using up the entire area of the middle section. As our main source of visual variety, the waves and movement of the enclosed water are projected downwards to the base of the lightbox by the light source mounted on top. 

The inside of the remaining section of the lightbox (underneath the water/middle layer) is covered in a white backdrop we sourced from the IMA resource center. We had initially conducted the experiments with the light shining down through the water onto the machine’s original wooden base, but the results proved mediocre. As a team, we reasoned that a white background would make the shadows and highlights of the reflections more visible and concrete; hence the white walls that combine to create the base layer. 

In order to maximize the stability of our audiovisual source footage, Gabriel, Kyle and I decided that it would be best to secure the video camera to the lightbox itself. For this attachment, we created yet another cut/slit on the back wall of our base layer, one big enough to fit the camera’s cords through and balance the camera across, but small enough to keep it from standing out to the audience’s eyes. The live camera, which sits within the base of the box, is tilted downwards to film the bottom layer’s wave projections of light and is taped down to secure its position. 

The video camera then sends the live footage of the reflections created upon physical manipulation of the lightbox into the computer containing our Max patch for coordination with our audio as well as visuals. 

Patchwork (Audio + Visual)

In order to incorporate the audio files we had created outside of Max, we loaded them through Cell MIDI components. Each of the tracks we intended to use during the live performance was fit into its own section and connected to the main drum sequencer at the top. We then sent each of the tracks through a pan mixer before sending them all out through a stereo output. To let the audio interact with our VIZZIE creations, we made sure that each Cell-to-MIDI component was converted through an AUDIO2VIZZIE component before being sent to separate SMOOTHR modules that would help ease the BEAP information into the VIZZIE section.  
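The audio-to-control conversion and smoothing step above can be sketched outside of Max. The following Python sketch is only an analogy for what AUDIO2VIZZIE and SMOOTHR do conceptually (take an amplitude envelope, then low-pass it into a slowly varying 0–1 control value); the function names and the 0.9 coefficient are my own illustrative assumptions, not Max internals.

```python
def envelope(samples):
    """Peak amplitude of one audio block, used as a 0-1 control value."""
    return max(abs(s) for s in samples)

def smooth(values, coeff=0.9):
    """One-pole smoother: y[n] = coeff * y[n-1] + (1 - coeff) * x[n]."""
    out, y = [], 0.0
    for x in values:
        y = coeff * y + (1 - coeff) * x
        out.append(y)
    return out

# Three blocks of audio samples -> three smoothed control values.
blocks = [[0.0, 0.1], [0.9, -0.8], [0.2, -0.1]]
controls = smooth([envelope(b) for b in blocks])
```

The smoothing matters for the same reason SMOOTHR did in our patch: raw audio envelopes jump around too quickly to drive visual parameters without flicker.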

For the visuals, the three of us worked hard to explore the VIZZIE components in relation to the audio through conversions of data and numbers, but we also agreed that we wanted to draw inspiration from our in-class assignments to build the basis of our realtime effects. 

With the GRABBR module set in place and connected to our live camera, we got to work on the visual manipulation. With physical movement checked off the list, we focused on the elements of color and form according to Wilfred’s standards. We first attached the OPER8R to combine two separate input videos, which we set as the original feed and a delayed version of it (via the DELAYR). This output is then sent through an effect module known as the ZAMPLR, which alters the color input as well as the horizontal and vertical gains of the incoming video. The result was then put through yet another effect module, the HUSALIR component, which alters the hue, saturation, and lightness of our video. The ZAMPLR and HUSALIR modules both had functions controlled by our audio components, whose information was converted over, allowing the audio to manipulate the hues as well as the gains of this first visual input. 
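The hue/saturation/lightness adjustment that HUSALIR performs can be sketched per pixel with Python’s standard `colorsys` module. This is a hypothetical stand-in, not the module’s actual implementation; the parameter names are my assumptions.

```python
import colorsys

def husalir(rgb, hue_shift=0.0, sat_scale=1.0, light_scale=1.0):
    """Shift hue (wrapping around the color wheel) and scale
    saturation/lightness of one RGB pixel, all in the 0-1 range."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    h = (h + hue_shift) % 1.0
    s = min(1.0, s * sat_scale)
    l = min(1.0, l * light_scale)
    return colorsys.hls_to_rgb(h, l, s)

# A third-of-the-wheel hue shift pushes pure red toward pure green.
shifted = husalir((1.0, 0.0, 0.0), hue_shift=1 / 3)
```

In our patch, the `hue_shift`-like parameter was one of the values the converted audio signal drove, which is what made the colors pulse with the sound.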

The above compilation is then combined with another color-heavy visual input, the 2TONR, through the MIXFADR module. For the 2TONR we chose the prevalent colors of blue/green, thinking they would best correspond with our focus on water. 
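The 2TONR + MIXFADR pairing boils down to two small per-pixel operations: map brightness onto a two-color palette, then crossfade with another frame. The sketch below assumes a simple linear mapping and linear crossfade; the exact palette values are illustrative, not the ones from our patch.

```python
def two_tone(gray, dark=(0.0, 0.2, 0.4), light=(0.2, 0.9, 0.6)):
    """Map a 0-1 grayscale value onto a dark-blue-to-green palette."""
    return tuple(d + gray * (l - d) for d, l in zip(dark, light))

def mixfade(pixel_a, pixel_b, fade):
    """Linear crossfade: fade = 0.0 -> all A, fade = 1.0 -> all B."""
    return tuple((1 - fade) * a + fade * b for a, b in zip(pixel_a, pixel_b))

toned = two_tone(0.5)                           # mid-gray onto the palette
blended = mixfade((1.0, 1.0, 1.0), toned, 0.5)  # half white, half toned
```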

To complete the full compilation of our video output, we sent the resulting visuals through a ROTATR module (in wrap mode) to add digital movement on top of the physical movement of the Lumia/water box. This finalized our video before it was sent out through a PROJECTR component that allowed us to view and showcase the audiovisual performance in realtime. 
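Geometrically, a rotation-with-wrap stage amounts to rotating each pixel coordinate around the frame center and folding anything that leaves the frame back in. This sketch uses a plain modulo for the wrap, which is an assumption about how such a mode behaves rather than a description of ROTATR itself.

```python
import math

def rotate_wrap(x, y, angle, cx=0.5, cy=0.5):
    """Rotate a normalized (0-1) pixel coordinate around the frame
    center (cx, cy), wrapping the result back into 0-1 space."""
    dx, dy = x - cx, y - cy
    rx = dx * math.cos(angle) - dy * math.sin(angle)
    ry = dx * math.sin(angle) + dy * math.cos(angle)
    return (cx + rx) % 1.0, (cy + ry) % 1.0

# A half-turn sends the right edge of the frame to the left edge.
edge = rotate_wrap(1.0, 0.5, math.pi)
```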

The gain, hue, saturation, offsets, rotations, and more were then mapped to our MIDI board for use during the actual performance. As we had learned in class, we thought it would be best to create a presentation mode. We selected only the aspects we had mapped onto the board as well as the drum sequencer we intended to use and change during the performance. This cleaned the entire patch up very quickly, allowing us to work with the most urgent aspects much more easily. 

GitHub Link: https://gist.github.com/kcb403/43ff09f15f59c622571d287b8c1f22ee

Performance: 

For our live performance in the auditorium, we knew that we needed to split our roles somewhat evenly, both in terms of time and in significance to the project outcome. We decided that there were three main factors in need of manipulation during the presentation: audio, video, and movement. While Kyle took responsibility for the live audio portion, Gabriel and I decided to switch between the video and movement roles throughout the 5-minute production. While I manipulated the live visuals, Gabriel physically manipulated the self-contained lightbox in realtime. Once the timer hit the 2:30 mark, we would swap positions and, ultimately, our roles in the performance. We had practiced the performance a number of times before the actual showing and were satisfied with the results; however, we ran into a major complication at the time of the actual event. 

Right as we were setting up for our live presentation, the Max patch we had prepared ahead of time unexpectedly, and unfortunately, crashed. As we raced to relaunch the application and set up the components in time, all three of us were anxious about the time remaining and the outcome of our project after this unanticipated hit. The audio was working perfectly fine as before; however, our visuals took a turn for the worse. Our color scheme, which had once encompassed a large ROYGBIV range with the 2TONR VIZZIE component, was now restrained to just neon blue and neon green. We tried our best to salvage the visuals but were met with defeat when we knew time was running out. We decided to go on with the show despite this unfortunate occurrence. 

During the performance, almost everything went as planned. The beginning started out smoothly, with a fade-in we tied to the gain controller. We wanted to achieve the same effect with our closing, but were unable to given the unforeseen complications. 

The overall performance was admirable and enjoyable as a whole; however, my fellow team members and I believe it was a pity that we were unable to present the final product we intended to showcase. We know that we could have prevented the situation by practicing even more on our own time and by rehearsing the sequence in the auditorium before the actual class. 

Conclusion:

Overall, I believe that as a team for this midterm project, Gabriel, Kyle and I worked amazingly together. Each of us was able to practice our audiovisual skills while showcasing the knowledge we were already more confident and familiar with. Together, we brainstormed various ideas and finalized our opinions and concepts faster than in any other team project I have been a member of. Our research into Thomas Wilfred was exhaustive, but I hope that in the future we can seek inspiration from a wider list of artists who have made a mark on the world of audiovisual art. In terms of creation, I believe our group could have benefited from further extending our visual components, making the work more cohesive as a whole and practicing even more effects that would pivot us closer to Wilfred’s three crucial elements of Lumia art. Finally, in relation to our execution during the performance, I think we did the best we could with the circumstances we were stuck with at the time. As mentioned in the previous section, we could have minimized the problem by practicing even more in order to prevent complications like ours from occurring in the future. 

From an individual perspective, I learned that I need more practice in my audio compilation skills as well as hands-on research into the VIZZIE modules and components of Max. I hope to improve upon these skills in the future and in the upcoming project. 

To end this documentation, I can honestly say that the project was a great lesson for me in terms of the teamwork it required and the visually stimulating perceptions it granted me throughout the process. I look forward to working with my partners again. 

Works Cited: 

Orgeman, Keely. “A Radiant Manifestation in Space: Wilfred, Lumia.” Lumia: Thomas Wilfred and the Art of Light, Yale University Press, 2017, pp. 21–47.

Midterm Project: Graphic Score – Celine Yu

Graphic Score: 

Creation:

My group members (Gabriel/Kyle) and I created our design digitally in the visual design application Procreate. Since the three of us were already well aware of the genre and stylistic aspects of our intended music, creating the graphic score was a fun experience.

The musical bars in the background of the graphic score represent the use of different synthesizers, acting as the backbones for our realtime audiovisual performance. The color scheme chosen for the graphic score is also closely related to the mediums we plan to incorporate into our performance. The different shades of blue and purple used within the score represent the usage of our star act: water.

Furthermore, as we analyze the graphic score, it is crucial to note the evolving imagery, which progresses from a flowing liquid form to a strong staccato presence on the right. The gradual darkening of the shades and hues reflects the actual progression of our music, which slowly increases in both intensity and volume. 

Near the right of the graphic score is a clear black line that separates the chaos on the left from the lone and peaceful imagery on the right. This final aspect of the graphic score marks the moment in which our musical score abruptly returns from an intensified passage to a calm and simple melody. 

Reading Response 5: Cosmic Consciousness – Celine Yu

Reflection:

Compared to prevailing themes at the time, brothers John and James Whitney wanted to embrace a new cosmic film experience through the implementation of new technology. Their creations through the motions of a pendulum allowed the duo to produce sounds and synchronized images that were electronic and aggressive yet inexplicably linked to one another. With a strong belief in Eastern metaphysics, the brothers combined the forces of art and science to create a new form of visual music. Ultimately, the Whitneys’ success in cosmic cinema acted as inspiration for the works of San Francisco-based Jordan Belson. Similar to John and James, Belson also held a strong belief in the importance of Eastern metaphysics on his own art. Both the Whitneys and Belson wished to eliminate any association with the real world by replacing it with the truths that lay not in the natural world but in fact the mind. Through their visual art, they strived to create ideal worlds, ones that constantly explored “uncharted territory beyond the known world” (132) as a means of reaching an abstract perspective.

Following in his seniors’ footsteps, Belson also strived towards the cosmic dimension. However, unlike the Whitneys who focused solely on the usage of new technology, Belson found that he could achieve the best results by combining the technologies of the past and future (old/new). To him, the combination consisted of standard animation, optical printing, lasers as well as liquid crystals. With these techniques, Belson created features that seemed to spring from natural phenomena, never using “images that bespoke of their origins” (148), whereas the Whitney brothers prioritized the scientific curiosity of atomic energy in visual music. This difference allowed the Whitneys to render their image down to its most foundational state, a point of light. With this atomic language, John and James were able to create and shape imagery that weaved together the aspects of high art, science as well as spirituality through congruent masterpieces of visual music. 

The Vortex Concert series, one of the first of its kind to tour over 100 performances, incorporated innovative audiovisual performances that immersed audience members through groundbreaking forms that altered their consciousness. The shows seemed to offer a level playing field in which high art and popular culture could coexist alongside the abstract and the representational to form impactful sensations. The ideal synaesthetic experience of the Vortex Concert series was, however, not the only experiment conducted to further the concept of visual music. From Elias Romero’s percussion-based, liquid-abundant shows to the electronic and vibrant Happenings of Sonics, the light show phenomenon spread rapidly across Europe and the U.S. The willing acceptance of deep immersion and sensory overload has crossed centuries, as visual music continues to thrive as a cultural phenomenon throughout modern pop culture. References to the pioneering light shows of the 1960s can still be found in various pop concerts and rave events whose goal is to blur the distinction between the auditory and visual senses of their viewers and listeners. 

Project 1 Documentation: Alien Abduction – Celine Yu

Title: Alien Abduction 

GitHub: https://github.com/cy1323/CelineYu/blob/master/Project%201:%20Alien%20Abduction

Project Description:

For my first project under this course, I was instructed to design a generative composition for an audiovisual sequencer. While I gathered plenty of information from both the Sound Art History course and the Early Abstract Film research, I also drew inspiration from a course I had taken during the Summer term: Chinese Science Fiction. In that course, I touched upon the wonders of alien narratives and the extraterrestrial characteristics and cliches found throughout mainstream media and its history. It was also very interesting to observe the “Area 51 Raid” just last month. I wanted to reflect this curiosity of mine in this first project by modeling the visuals after an ‘alien invasion’ or ‘alien abduction’ and the melodies after what I imagine the sound effects of such situations to be. 

Perspective and Context:

The close-knit amalgamation of audio and video in this project marks it as the symbiotic result of a generative audiovisual system. The array of colors used within the project mirrors the visuals that occur in the mind of someone living with synesthesia. This is how I believe my project fits into the context of synesthesia as an audiovisual system. As I’ve learned through research into synesthesia, the individual’s brain is cross-wired in a specific way that causes certain stimuli to create unique responses otherwise not seen in typically functioning minds. To these synesthetes, the sounds they hear are translated into colors through a process commonly known as ‘color hearing’. It is through a similar process that my project incorporates sound to instigate and create visual information. I view the project as a creation of visual music, for the system created here provides the music with another means of reaching audience members. Instead of merely affecting the audience’s auditory senses, the vibrant, colorful imagery heightens their visual senses as well. 

I’ve taken a lot of information from the readings we’ve completed in the past few weeks into this project of mine. The most crucial piece of information I found most inspiring was from abstract film artist and pioneer visual music specialist, Oskar Fischinger. Fischinger’s work stood out to me the most with his usage of carefully selected music and minimalistic designs. His priorities and beliefs as a creator are also important to me as an artist who is still in the process of mastering the realm of visual arts alongside sound. To Fischinger, art should always be a pleasurable experience and should constantly be a place where invention and creation occurs. To follow in his footsteps, I applied his beliefs to my own work this time around.

Development and Technical Implementation:

When it came to finally creating the actual project in Max, I knew that I needed to start with a solid melodic base before moving on to the visual portion. I thought the best way to start would be to apply all the information and skills I learned in class to the audio section. I started off with a drum sequencer and connected three separate Cells where I could implement individual audio pieces sourced from within Max itself. After a great deal of sampling, I finally decided to go with percussion such as snares, bass drums, and cymbals to create a strong but catchy beat.

Afterward, I created a piano roll sequencer to act as the generator and base for my second synthesizer in the patch. I tried my best to stick to the format we were taught in class and followed the process of an oscillator → mixer → filter → level control and, finally, an effect. This is the portion where I really tried to tie the audio to the context of an ‘alien abduction’. I edited and played around with the synthesizer until I reached a track that felt unordinary and ‘warped’.  
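The oscillator → mixer → filter → level chain above can be rendered as a toy offline sketch. This is a minimal illustration of the signal flow, not the BEAP modules themselves; the frequencies, the one-pole filter, and the gain value are all illustrative assumptions.

```python
import math

SR = 44100  # sample rate in Hz

def osc(freq, n):
    """A sine oscillator rendered as n samples."""
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def mix(a, b):
    """Average two signals (a two-input mixer at equal levels)."""
    return [(x + y) / 2 for x, y in zip(a, b)]

def lowpass(samples, coeff=0.95):
    """One-pole low-pass filter to darken the tone."""
    out, y = [], 0.0
    for x in samples:
        y = coeff * y + (1 - coeff) * x
        out.append(y)
    return out

def level(samples, gain=0.8):
    """Final level control before the effect stage."""
    return [gain * x for x in samples]

# Two slightly detuned oscillators give the 'warped' beating quality.
voice = level(lowpass(mix(osc(220.0, 64), osc(221.5, 64))))
```

The slight detune between the two oscillators is one simple way to get an uneasy, beating timbre of the kind the ‘alien’ theme called for.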

Upon completing these two audio portions, I combined them together through a Pan Mixer at the very bottom and lastly, connected them through ‘stereo’ for its output mode. 

Moving on from the audio, I finally began working on the visuals. (I learned my lesson here and connected AUDIO2VIZZIE/BEAPCONVERTR between BEAP and VIZZIE modules, as mentioned in the Presentation section.) Upon converting the data values from the audio components, I connected each of them to separate SMOOTHR controller modules to smooth the incoming VIZZIE data. I used these values to create the light beams with two 1EASEMAPPR modules and a LUMAKEYR module. This created the effect of lights shining down from a ‘UFO’ or ‘alien spacecraft’, as seen throughout several sci-fi films. This portion was connected to the main visual module, the 3PATTERNMAPPR, and put to the side. 
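The two ingredients of the beam effect above can be sketched in miniature: an easing curve (standing in for 1EASEMAPPR) reshapes a control value, and a luma key (standing in for LUMAKEYR) keeps only the bright pixels. Both functions here are rough stand-ins under my own assumptions, not the modules’ actual behavior.

```python
def ease_in_out(t):
    """Smoothstep easing: slow start, fast middle, slow end (0-1 in, 0-1 out)."""
    return t * t * (3 - 2 * t)

def luma_key(frame, threshold=0.6):
    """Keep pixels at or above the luma threshold; black out the rest."""
    return [px if px >= threshold else 0.0 for px in frame]

# An eased control value drives the key threshold, so the beam's cutoff
# moves smoothly rather than jumping with the raw audio data.
beam = luma_key([0.2, 0.7, 0.9, 0.5], threshold=ease_in_out(0.5))
```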

I wanted a few more effects and so went on to create a different path to the left. I started with a GRANULAR BEAP module. (I then added in the AUDIO2VIZZIE upon reminder.) I attached the data input to several VIZZIE generative modules such as ATTRACTR, OSCIL8R and TWIDDLR, which together would provide my visuals with a sense of movement. The effect modules FOGGR and BRCOSR were also attached to give the visuals more depth in relation to the audio tracks above. All of these modules were then connected to randomized placements on the larger 3PATTERNMAPPR mentioned before. Afterward, I created an output for the visuals by attaching the PROJECTR to the bottom of the 3PATTERNMAPPR. 

In the end, I came out with a result that I believe truly does fit within my designs of alien abduction. Its bright beams of light, as well as vibrant colors, reach an effect that resembles those found within sci-fi scenes. I am proud of the overall project. 


Video of Project: Screen Recording 2019-10-17 at 1.08.24 AM

Presentation:

For the presentation, I received a number of suggestions and comments on my audiovisual creation. While some showed their admiration for the bright visuals and others for the in-depth audio, I felt that the suggestions for improvement left the greatest impression on me. First was the suggestion from Professor Eric: the unspoken stigma that lingers behind using pre-recorded sounds in Max. He recommended that in the next project I work away from such recordings to prevent my creations from sounding like sample products, despite my intentions. I have noted this down and will take it into consideration in the next project. 

Another major suggestion I received concerns the relationship between the audible and visual contexts. My fellow classmates told me that in the next project, or if I wish to improve upon this one, I should look into more intricate and clear ways to delineate the mutually stimulating relationship between the two components. I completely agree with my peers’ comments, and I know that I can achieve better results with more practice and experience in the audiovisual medium. 

Lastly, I was notified during the presentation of a small mistake I had made in my Max patch. When converting between BEAP and VIZZIE modules, I had failed to convert their data values through the VIZZIECONVERTR and AUDIO2VIZZIE modules, damaging the project as a whole. I took this comment seriously and immediately corrected the mistake following the presentation that day.

Conclusion:

This entire hands-on experience with Max has allowed me to mature a lot as an audiovisual content creator and as an artist overall. I went into the project quite overwhelmed by the amount of work I needed to complete in an application I was still nowhere close to mastering. Luckily, I had a repertoire of readings, demos and research findings that granted me the information and inspiration needed to complete the assignment. With knowledge of synesthesia, audiovisual systems, audio synthesizers, and visual content, and with words of wisdom from creators such as visual art pioneer Oskar Fischinger, I created a piece that I am proud of. 

With the project, I learned the importance of planning ahead of time. Though I did have a general idea of the direction I wanted to head in, I was still very confused at certain points of the process and was easily frustrated when the effects I wanted to achieve were not displaying in front of me. I hope that in future projects, I can garner the courage to ask my fellow classmates and professor for assistance when I am truly confused and stuck on a problem.

This time around, I think the factor I succeeded at most was creating an intricate and detailed audio track. I spent a lot of time working with different modules and styles until I reached a track I was at least somewhat satisfied with. The part I thought needed improvement was my ability to separate certain audio features to correspond with specific visual modules and pieces. I also think it would have been beneficial to limit the colors involved in the visuals by using another set of modules. This would have given off a more ‘green’ and ‘dark’ concept that would fit the ‘alien abduction’ theme even more. After the presentations, I was really amazed by the artworks my fellow classmates had completed and was blown away by the fluidity and cohesiveness connecting their audio and visuals. I wish to improve upon this in my next assignment. 

Overall, given that this was my first real attempt at creating a cohesive audiovisual system, I am quite satisfied with the final results. I know that there are still many improvements to be made in the relationship between my audio and visual components, but I am confident that I will be able to create even better audiovisual systems in the future.

Reading Response 4: Lumia & Thomas Wilfred – Celine Yu

Thomas Wilfred is a pivotal figure in the history of luminous art for his contributions through Lumia, one of the most influential bodies of artwork of the 20th century, most specifically the 1920s. With origins transcending decades, Wilfred’s work has influenced, and still influences, hundreds of artists, engineers, and scientists in combining their skills to create visual art. As described by Keely Orgeman in her in-depth study of the artist in A Radiant Manifestation in Space: Wilfred, Lumia, and Light, Wilfred had expressed since childhood a passion for light and prisms. Obsessed with avant-garde artistic creation, he worked day and night to achieve critical acceptance, recognition, and success. With this hard work, he created and perfected the Clavilux, a cutting-edge device at the time, at the studio he shared with The Prometheans. Wilfred used it to create and project the abstract imagery he coined Lumia. With the Clavilux, Wilfred toured both the country and the world, bringing people a newfound sense of art combining movement, color, and light. 

With this self-proclaimed “8th Fine Art” of the world, Thomas Wilfred perceived movement as one of the most significant factors in each luminous piece. As Wilfred claimed, in any form of light art, “motion is a necessary dimension” (Orgeman 21), for it provides a means of introducing and showcasing to viewers the true nature of light. Wilfred found it crucial that this “force of energy that travels constantly through space” (Orgeman 21) be implemented into his work as its very basis. He achieved this with the Clavilux through the modern equivalents of “lightbulbs, motors, color records and reflective materials” (Orgeman 34).

It is interesting and intriguing to learn of Thomas Wilfred’s history and rise as an artist in the 20th century. The portion that stood out to me the most was the claim that Wilfred is one of the pioneers in combining art and science for creation and invention. Throughout the video, it is mentioned that while being an artist, Wilfred was often compared to a scientist. His studio resembled an inventor’s workshop and a scientist’s laboratory more than it did a traditional artist’s studio. He ultimately demonstrated this deep understanding of both the arts and the sciences through the Clavilux, for his “precise light-bending technique [would often mimic] that of physicists’ laboratory experiments” (Orgeman 34). Along with his many versions of the Clavilux and his story, Wilfred inspired many individuals such as Katherine Dreier, engineer Charles Dockum, piano prodigy Mary Hallock-Greenewalt, and many more. Wilfred’s belief in the achievement of spiritual liberation through art was, however, not carried on through the generations; it is evident today that this sense of spiritual understanding is neglected by modern artists. Nonetheless, the works of Thomas Wilfred opened doors to the now seemingly limitless range that art is capable of reaching, which ultimately includes the art of light.