
Project 3 Documentation

Our project “Glitch in the Simulation” plays on the idea that our reality is actually a digital simulation, and that the glitches we experience signal errors in the system. Inspired by genres such as glitchcore, jungle, liquid drum and bass, experimental hip hop, and electronic music, and by artists such as JPEGMafia, Machine Girl, midwxst, and Drain Gang, this project is our take on transcending reality, manipulating audio and visual contexts to produce that experience for the audience.

Our audio portion comes across as quite contemporary, but as we stated in the project proposal, some of our inspirations for audio technique and implementation are not recent at all. For technique, we explored the works of John Cage, particularly his experiments in manipulating and recontextualizing sound. Cage’s “Williams Mix” inspired the project’s audio effects, and it influenced the visuals as well.

Since we worked as a pair, the work was split right down the middle: I handled the audio segments and my partner handled the visual segments.

With all of those inspirations absorbed, I started developing the music. My vision for our audio performance was more concrete than abstract: a small set of short pieces that would be sequenced together over time. The pieces are based on contemporary artists such as JPEGMafia, 100 gecs, and Machine Girl, with small touches of John Cage added in the audio-effects passages.

I planned to use Ableton Live to perform our set, but connecting it to Max 8 and guaranteeing its stability was too risky. Instead, I created the audio first and then played the set back in Max, using Max almost like a DJ deck. My workflow was sequential in outline but looser in practice. I started by brainstorming and creating multiple Ableton projects with as many musical snippets as I could. Then, after sitting down with my partner and discussing which tracks fit the grand scheme of our project, I cut some and fleshed out the five that remained. After sequencing each song into sections and assembling the appropriate instrumentation, I exported all 58 tracks into a new Logic project and began mixing them, paying careful attention to the bass. Even in Logic, I made changes to the arrangement and feel of some sections, since being in another workspace changes your perspective on the sound.

After mixing, I exported each of the five sections as an audio file. I then brought those files into another Logic project to master them to an appropriate volume, and exported both the entire 10-minute track as a single file and the five sections as separate files. These went into a Max playlist, which I used as a switchboard for each part of the performance. In the Max patch, I further processed the sound with various effects and MIDI mappings to get that mashed-up “John Cage” quality. The audio output was then routed to the visuals, which reacted to it.
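The exact patch isn’t documented here, but the audio-to-visuals link boils down to an envelope follower: track the level of the incoming audio and use it as a control signal for a visual parameter. Below is a minimal, illustrative Python/numpy sketch of that idea, not the actual Max patch (in Max, objects like playlist~ and peakamp~ cover the playback and level tracking):

```python
# Illustrative sketch (not our actual Max patch): an RMS envelope
# follower of the kind that drives audio-reactive visuals.
# Assumes a mono float signal; numpy only.
import numpy as np

def rms_envelope(signal, frame_size=1024, hop=512):
    """Return one RMS level per hop -- a control signal for visuals."""
    levels = []
    for start in range(0, len(signal) - frame_size, hop):
        frame = signal[start:start + frame_size]
        levels.append(np.sqrt(np.mean(frame ** 2)))
    return np.asarray(levels)

# Example: a 440 Hz tone that fades in, mapped to a 0..1 brightness value
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t) * np.linspace(0, 1, t.size)
env = rms_envelope(tone)
brightness = env / env.max()   # normalized visual parameter
```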

The bass during the performance was unexpectedly body-shaking. I didn’t know it would hit that hard, especially during the last section. I think we could have rehearsed the scene changes and audio fades better. I also should have worn some sort of monitor, because I couldn’t hear my own music very well, which made cueing in effects difficult to pull off effectively.

Regardless, I think our communication during the performance was alright; I made sure my partner knew when to come in at the appropriate times. It was straightforward, and we essentially just followed the music. The same applies to the second performance, except that I could hear the music a little better that time, though my timing was still slightly off in some parts.

In conclusion, I think this project was successful. I learned more about how to link the audio and visual capabilities of Max to create a cohesive performance (Max is awesome software), and I got a taste of what it feels like to perform in a live club environment. My experience is with symphonic halls and street performance, so being in a different environment with different acoustics was interesting.

To improve for next time, we could strengthen the audio-visual synergy, for example by making the visuals more obviously reactive to the audio, with clear, delineated movements. On the audio side, I would add a vocal part, because it would give the performance the depth and layering usually heard at live shows, add to the energy, and make things less abstract.


Assignment 4 – Granular Synthesis Audio

https://drive.google.com/drive/folders/1AIKaTy5JpV1_T9z0G6dM-IOJYHI6edVA?usp=drive_link

For this assignment, I chose the Granular Synthesis synthesizer in BEAP, since granular synthesis is essentially the technique the group Granular Synthesis built their productions on. I also included a high-pitched synthesizer with a CV value tied to an LFO so that it roughly syncs with the visuals. For the audio effects, I added a generative reverb to simulate a wide space, contrasting with the claustrophobic visuals.
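The BEAP module handles the granulation internally, but the core idea is simple: chop a source sound into short, windowed grains and scatter overlapping copies of them in time. A minimal, illustrative numpy sketch of that idea (not the BEAP module itself):

```python
# Minimal granular synthesis sketch (illustrative, not the BEAP module):
# scatter short Hann-windowed grains from a source buffer into an output.
import numpy as np

def granulate(source, out_len, grain_len=2048, n_grains=400, seed=0):
    rng = np.random.default_rng(seed)
    window = np.hanning(grain_len)        # fades each grain in and out
    out = np.zeros(out_len)
    for _ in range(n_grains):
        src = rng.integers(0, len(source) - grain_len)   # where to read
        dst = rng.integers(0, out_len - grain_len)       # where to write
        out[dst:dst + grain_len] += source[src:src + grain_len] * window
    return out / np.max(np.abs(out))      # normalize

sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
source = np.sin(2 * np.pi * 220 * t)      # stand-in for any sampled sound
cloud = granulate(source, out_len=2 * sr) # a two-second grain cloud
```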


Reflection 8 – Live Cinema

VJ-ing, Live Cinema, and Live Audiovisual Performance all involve the live manipulation or creation of visuals, but they differ in how they approach the relationships between narrative, visuals, and music. VJ-ing is the most music-reactive, since VJs often work alongside a DJ; Live Cinema focuses on a narrative and on the coherence of that narrative; and Live Audiovisual Performance is more about creating a synesthetic experience, where audio and visuals are rolled into a single sense.

Gabriel Menotti characterizes live cinema by its thematic coherence and narrative. It often involves the live creation or manipulation of visuals, but with a stronger emphasis on storytelling or thematic exploration. This form might include elements of VJ-ing but is generally more structured and conceptual. As we saw in earlier classes, The Light Surgeons exemplify live cinema through multi-layered visual narratives that integrate documentary techniques, graphic design, and animation, making for a more subtle, contemplative experience.

Ana Carvalho describes A/V performance as a fusion of audio and visual elements created and performed live; she envisions a unification of the audience, the artist, and the performance. This practice can encompass elements of VJ-ing and Live Cinema but emphasizes the technique and execution of the performance. An artist like Ryoichi Kurokawa creates synesthetic experiences, such as his project Subassemblies, in which the visuals and sound design interact with one another.


Reflection 7 – VJ Culture

The main concept I took away from these two sources is that VJing is more about creating an atmosphere than performing something “concrete”: a VJ always works in tandem with a DJ, musician, or other audio performance, and a purely video performance is merely a silent video. VJing enhances an audio performance, or enhances the “vibe”. It’s that special something, like a secret ingredient in a dish.

I have never been to a club, so I can only relate my VJ experiences to what I have seen in class, whether from my classmates or from the performances we watched. I found the hibana group’s performances quite interesting, in particular ZPTPJ. The dazzling, quick visuals would not make any sense without the sound providing context. Furthermore, as the article discussed, the VJs on stage counterintuitively add to rather than take away from what the audience is observing, because you understand that there are physical movements associated with the visuals and audio. The atmosphere the performance creates feels ethereal, which I think is what they were going for, given their previous and current works.


Reflection 6 – Graphic Scores

graphic score project 2

The first score is the audio score. It was drawn on paper and then scanned. It reads from left to right, starting at 001 and ending at 005, indicating the length of the piece: five minutes. At the bottom right is a box containing “90 ^ 130”, giving two different BPMs at which the piece may be performed. The ^ stands in for the logical OR symbol (strictly, OR is written ∨ in logic, while ∧ denotes AND; the caret was the closest typeable mark), indicating that either value is a valid choice. Mathematical notation was used for its simplicity and concision in expressing a logical choice between two values.

The third dotted line from the left marks the midpoint of the performance. The sine wave inside the rectangular tunnel traversing left to right signifies a constant hum or noise that should be present in the background throughout the performance. The circles are small pops or subtle explosions, expressing bursts of auditory energy. The long stretching lines that form shapes signify pitch and effect modulation.

The second score is the visual score, which was developed from the audio score. I think both the audio score and the visual score are aural scores as presented in the reading Graphic Notation, because each is a transcription of the performance that can be read analytically, with visual cues denoting where and when actions should occur. Compared to scores such as John Cage’s BB in the reading, ours is not as esoteric or cryptic.


Reflection 3 – Early Abstract Films

It’s incredible that many of the abstract films we watched could pass for assignments in a visual effects course on Adobe After Effects or Cinema 4D. For example, I found “Optical Poem” by Oskar Fischinger intriguing because of its smooth color palettes and rudimentary movement.

Optical Poem was created in 1937 as a short collaboration between John Cage and Fischinger. A stop-motion approach was used to create continuity between frames: the objects seen on screen are paper cutouts translated across a flat plane, their movement planned beforehand on graph paper where Fischinger sketched the motion. Optical Poem is accompanied by Franz Liszt’s Second Hungarian Rhapsody, but it is not a strictly synaesthetic piece. The soundtrack is meant to emphasize the movement and to serve as a basis for the “Buddhist-inspired belief that all things have a sound” that Fischinger subscribed to.

Originally, Fischinger was a painter. He, along with Wassily Kandinsky, believed that non-objective imagery can communicate spiritually on the same abstract level as music. He pursued the moving image because of its similarity to music in the fourth dimension, time, something a static image cannot replicate. His motivation for creating Optical Poem, then, was to bridge the gap between visual and musical art.

Sources:

“Oskar Fischinger: An Optical Poem,” IdeelArt, https://www.ideelart.com/magazine/oskar-fischinger

 


Reflection 4 – Midterm Preparation

I found the documentary and text on Thomas Wilfred interesting because of how committed Wilfred was to creating this new art form. This is most evident in the documentary, in which multiple people express their praise and recollections of Wilfred through small stories of things he created or innovated. The text, on the other hand, discussed the conceptual frameworks within which Wilfred’s Lumia works existed, such as Einstein’s theory of relativity and light energy. The theory of light energy, which Wilfred lectured about in 1933, was also a focal point of his motivation for his works. The idea that light from the sun is the source of life and spirit is not unfounded; it is biologically grounded in nature.

What stood out to me most was the graphic depicting what Lumia is, defining the terms under which it exists and the kinds of performance through which it manifests. The graphic is straightforward and logical, something I have not seen in any other genre of art, mostly because the art I am familiar with, such as music or video, is a culturally and mentally understood construct that no one spells out explicitly. Due to the nature of Lumia, however, it is necessary to create a foundation on which people can understand and perform it, because it engages with philosophical ideas from physics.


Reflection 5 – Cosmic Consciousness

The two films I’d like to discuss are Calculated Movements by Larry Cuba and Lapis by John Whitney. In terms of differences, the pieces have dissimilar visual styles. Lapis is built from rings of many small colored circles forming a visual spiral, while Calculated Movements is entirely greyscale, with no circles, only line segments and paths creating movement. The two also approach synaesthesia from different angles: Lapis expresses it through color, while Calculated Movements expresses it through movement. As for similarities, both maintain a distinct visual pattern in the way they interact with their sound component, and both stay within their domain of circularity or linearity without deviating from it for their entire length.

The Vortex concert series and the light shows that followed have greatly influenced our experiences at music venues. I’m not a concert or club goer by any means, but having seen videos of those experiences, it is obvious that the visual articulations operate on the musical components being played, much as Lapis and Calculated Movements express their audio component through color or movement.

 


Project 1 – Phase Shift

My project was heavily inspired by John Cage, especially one of the examples shown in class where he played audio backwards and forwards in random sequences, producing distorted and warbled sound effects. I like beautiful chaos, so I wanted to create something along those lines.

Within the context of visual music, my project is somewhat synesthetic, because colors correspond to the instrumentation of the music. The brushed snare hits phase in and out in red. The blue lines move dynamically with the continuous drum hits running underneath all the synthy sounds. I approached this project with the intention of pulling a detectable signal out of noise, drawing inspiration from some of the drone tracks we listened to in class. So even though it’s meant to overwhelm, there should still be some semblance of organization.
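The color-to-instrument mapping can be sketched in a few lines. Below is an illustrative Python/numpy version of the general idea, not the actual Max patch: split each FFT frame into low, mid, and high bands and let each band’s energy drive one color channel (the band edges here are assumptions made for the sketch).

```python
# Illustrative instrumentation -> color mapping (not the actual Max patch):
# band energies from an FFT frame drive the RGB channels, e.g.
# lows (drums) -> red, mids -> green, highs (synths) -> blue.
import numpy as np

def frame_to_rgb(frame, sr=44100):
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    bands = [(20, 250), (250, 2000), (2000, 10000)]   # low / mid / high
    energy = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in bands])
    return energy / (energy.sum() + 1e-12)            # normalized RGB

# Example: a bass-heavy frame should push the red channel up
sr = 44100
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 60 * t)    # 60 Hz "drum" energy
print(frame_to_rgb(frame))            # red dominates
```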

As I have discussed before, I am interested in creating 3D objects out of 2D ones, so starting out I felt stuck with what I had, since the output was rather two-dimensional, as seen below.

To create three-dimensionality, I reflected the image onto itself, creating a sort of “corner of a wall” effect (see the sketch below). For the sound, I really like incorporating personally recorded audio into abstract art projects, because in a way it grounds the project in reality: you are manipulating the senses of the real world into something unique, so something concrete like a raw audio recording is welcome. I used recordings of myself playing the drums, playing the piano, and saying random things. I used the piano parts to create a synth that runs in the background of the track. The drum parts are scattered about at random, but still feel rhythmic due to the timbre of drums in general. In short, the visuals follow the audio and not the other way around.
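The reflection trick is a single image operation. Here is a minimal, illustrative numpy sketch of it (not the actual Jitter setup): mirror the frame against itself so the seam reads as the corner of a wall.

```python
# Illustrative "corner of a wall" reflection (numpy sketch, not the
# actual Jitter patch): mirror a frame against itself along one axis.
import numpy as np

def corner_reflect(frame):
    """Return the frame side by side with its horizontal mirror image."""
    mirrored = frame[:, ::-1]   # flip left-right
    return np.concatenate([frame, mirrored], axis=1)

# Example: a gradient test frame doubles into a symmetric "corner"
frame = np.tile(np.linspace(0.0, 1.0, 320), (240, 1))   # 240x320 ramp
corner = corner_reflect(frame)                          # 240x640 output
```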

I think my presentation went relatively well. I appreciated the feedback on the audio mix, something I considered a little while making the project but never fully. It is quite a dense mix in general, so it would take some tinkering to get it just right.

In conclusion, I think this project went relatively well. I discovered how to use reflections more effectively to create holographic/3D effects, and I succeeded in connecting the audio directly to the visuals. I could still improve on connecting the visuals back to the audio, so that anything that happens visually affects the audio in strange ways, whether through a camera or pre-recorded video.

 


Assignment 3 – Second Synth

I already have experience with audio synthesis and audio engineering, so I’m quite familiar with the effects and other things you can do with synths. However, I’m not used to the idea of control voltage, so that is something I wanted to play with. It’s interesting to see the entire synth laid out in front of you, because you have to build it yourself.

For this patch, I decided to create something techno. I started with the oscillator that lets you set four different waves to different harmonics, and added a chorus effect and some reverb. Then I added a second synth, the FM synth, and put a comb filter on it, because comb filters always sound cool. To generate notes, I used the MIDI piano-roll draw control to write them out, and I used an LFO to modulate some of the control-voltage inputs on the oscillators.
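Why do comb filters always sound cool? A feedback comb just mixes the signal with a delayed copy of itself, carving evenly spaced resonant peaks into the spectrum at multiples of sr/delay. A minimal, illustrative Python sketch (not the BEAP module):

```python
# Minimal feedback comb filter sketch (illustrative, not the BEAP module):
# y[n] = x[n] + feedback * y[n - delay], producing evenly spaced
# resonant peaks at multiples of sr / delay.
import numpy as np

def comb_filter(x, delay=100, feedback=0.85):
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + (feedback * y[n - delay] if n >= delay else 0.0)
    return y

sr = 44100
noise = np.random.default_rng(0).uniform(-1, 1, sr)  # 1 s of white noise
combed = comb_filter(noise)      # resonates near multiples of 441 Hz
combed /= np.max(np.abs(combed)) # normalize
```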

https://drive.google.com/drive/folders/1jzQKRwCGz6bUOXxdqd4v37xZWAXbFvPO?usp=drive_link