Our project “Glitch in the Simulation” plays on the idea that our reality is actually a digital simulation, and that the glitches we experience signal errors in the system. Inspired by genres such as glitchcore, jungle, liquid, experimental hip hop, and electronic music, and by artists such as JPEGMafia, Machine Girl, midwxst, and Drain Gang, this project is our take on transcending reality, manipulating audio and visual contexts to produce that experience for the audience.
Our audio portion comes across as quite contemporary, but as we stated in the project proposal, several of our inspirations for audio technique and implementation are not so recent. For technique, we explored the work of John Cage, particularly his experiments in manipulating and recontextualizing sound. His “Williams Mix” was an inspiration for the project’s audio effects, and it influenced the visuals as well.
Since we worked as a group of two, the work was split right down the middle: I handled the audio and my partner handled the visuals.
With all of our inspirations absorbed, I started developing the music. My vision for our audio performance was more concrete than abstract: a small set of short pieces sequenced together over time. The pieces are based on contemporary artists such as JPEGMafia, 100 gecs, and Machine Girl, with small touches of John Cage in the audio-effects sections.
I had planned to use Ableton Live to perform our set, but connecting it to Max 8 and guaranteeing its stability was too risky. Instead, I created the audio first, then played the set in Max, using Max almost like a DJ deck. My workflow for creating the music was sequential in its stages, though loose within each one. I started by brainstorming and creating multiple Ableton projects with as many musical snippets as I could. Then, after sitting down with my partner to discuss whether the tracks fit the grand scheme of the project, I cut some and polished the five that remained. After sequencing the whole piece into sections and assembling their instrumentation, I exported all 58 tracks into a new Logic project and began mixing them, paying careful attention to the bass. Even in Logic I kept changing the arrangement and feel of some sections, since working in a different workspace changes how you perceive the sound.
After mixing, I exported each of the five sections as an audio file, then brought those files into another Logic project to master them to an appropriate volume. From there I exported the full ten-minute track as a single file, as well as five separate files, one per section. These went into a Max playlist, which I used as a switchboard for each part of the performance. Within the Max patch, I further manipulated the sound with various effects and MIDI mappings to get that mashed-up, “John Cage” sound. The audio output was then routed to the visuals, which reacted to it.
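Since the Max patch itself is a visual program rather than text, here is a rough Python sketch of the two ideas it implements: a switchboard that maps a trigger to one of the five pre-rendered section files, and an envelope follower that turns the loudness of the playing audio into a 0–1 control value the visuals can follow. The file names, trigger mapping, and smoothing factor are all hypothetical; this illustrates the signal flow, not our actual patch.

```python
import math
import struct
import wave

# Hypothetical mapping of trigger pads/keys to the five exported sections.
SECTIONS = {i: f"section{i}.wav" for i in range(1, 6)}

BLOCK = 1024  # samples per analysis block (~23 ms at 44.1 kHz)

def intensity_stream(path):
    """Yield one smoothed 0-1 loudness value per block of 16-bit PCM audio.

    This mimics an envelope follower in Max: take the RMS of each block,
    then smooth it (like Max's [slide] object) so the visuals move with
    the music instead of flickering from frame to frame.
    """
    with wave.open(path, "rb") as wav:
        assert wav.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        smoothed = 0.0
        while frames := wav.readframes(BLOCK):
            samples = struct.unpack(f"<{len(frames) // 2}h", frames)
            rms = math.sqrt(sum(s * s for s in samples) / len(samples)) / 32768.0
            smoothed = 0.8 * smoothed + 0.2 * rms  # hypothetical smoothing factor
            yield min(1.0, smoothed * 2.0)

if __name__ == "__main__":
    # "Switchboard": a trigger selects a section, whose loudness then
    # drives a stand-in visual parameter (here, just a text bar).
    for level in intensity_stream(SECTIONS[1]):
        print("#" * int(level * 60))
```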
The bass during the performance was unexpectedly body-shaking; I didn’t know it would hit that hard, especially during the last section. I think we could have rehearsed the scene changes and audio fades better. I also should have worn some sort of monitoring device, because I couldn’t hear my own music very well, which made cued-in effects difficult to pull off effectively.
Regardless, I think our communication during the performance was alright; I made sure my partner knew when to come in at the appropriate times. It was straightforward, and we essentially just followed the music. The same applies to the second performance, except that I could hear the music a little better that time, though my timing was still slightly off in places.
In conclusion, I think this project was successful. I learned more about how to link the audio and visual capabilities of Max to create a cohesive performance (Max is awesome software), and I got a taste of what it feels like to perform in a live club environment. I have experience in symphonic halls and at outdoor street performances, so working in a different space with different acoustics was interesting.
To improve next time, I think we could strengthen the synergy between the audio and visuals, for example by making the visuals more “obviously” reactive to the audio, with clear, delineated movements. For the audio, I would have added a vocal part, because I think it would have given the performance the depth and extra layer usually heard at live shows, adding to the energy and making things less abstract.
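As one hypothetical way to get those clear, delineated movements, the visuals could respond to discrete onsets rather than continuous amplitude: fire a single trigger whenever the loudness jumps sharply above its recent average, so each hit produces one obvious visual event. The window size and jump ratio below are illustrative guesses, not tuned values.

```python
from collections import deque

def onset_triggers(levels, window=43, jump=1.5):
    """Yield True for blocks whose loudness spikes above the recent average.

    `levels` is any stream of 0-1 loudness values (for example, from the
    envelope follower sketched earlier). A True means "fire one discrete
    visual event now" rather than letting the visuals drift continuously.
    """
    recent = deque(maxlen=window)  # roughly one second of blocks
    armed = True  # re-arm only after the level falls back down
    for level in levels:
        avg = sum(recent) / len(recent) if recent else 0.0
        spike = level > jump * max(avg, 0.05)
        yield spike and armed
        armed = not spike  # suppress repeat triggers within one sustained hit
        recent.append(level)

# Usage sketch, pairing with the earlier envelope follower:
# for fire in onset_triggers(intensity_stream("section5.wav")):
#     if fire:
#         flash_visuals()  # hypothetical visual callback
```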
