Title
Rhythm with patterns and light
Project Description
The project is a generative composition performed in both sound and image, with visuals that correspond to the audio. I started with the sound: for the music, I combined piano notes with drum and kick beats, and added background music to make the audio more complex and atmospheric. Since the music evokes a sense of the universe, I used the audio data to generate visuals relating to space. I explored many generative modules to create a rich, colorful image.
Perspective and Context
Visual music is about relating visuals and audio so that viewers perceive a connection between what they hear and what they see. In my project, the visuals show the beat and the frequency of the music: the patterns change every time a piano note hits. I gained inspiration from one of the early abstract films we watched, Dots by Norman McLaren, in which the sound and the visuals fit each other perfectly; we can sense the beat of the sound through the dots we see. From the reading on synesthesia, I took the idea that visual and auditory stimuli can be perceived as interconnected and mutually influencing. In my project, each sound I make with the sequencer and oscillator modules corresponds to a visual, so viewers can hear the notes change as they watch. The project can thus give the viewer a different kind of experience in sight and hearing.
Development & Technical Implementation
My research process included readings on visual music and synesthesia and a survey of early abstract films, especially Norman McLaren's Dots, which inspired this project: its sound and visuals fit each other perfectly, and the beat of the sound can be sensed through the movement of the dots.
I tried out many different approaches. To start, I spent time deciding which sequencers and oscillators to use to generate data and audio. I began with the Sequencer and the Piano Roll Sequencer, but the audio they generated felt too repetitive, so I added the Granular module and the Drum Sequencer.
The noisy sound generated by GRANULAR gives me an image of different particles merging together. Its data goes into BFGENER8R, which offers many patterns; I tried every one of them and finally chose a polygonal pattern as the background visual. I control its zoom range and rotation so that it moves and floats with the noisy sound. I added KARPLUS and GIGAVERB to give the piano notes a stronger oscillation and reverb.

To combine the videos, I tried different mix-composite modules such as LUMAKEYR, which combines two videos using luma keying, but the output turned out too complex. I decided instead to use EASEMAPPER to generate a diamond-like pattern, with the data routed to zoom and rotation angle. Each time a piano note triggers, the diamonds rotate once, so the piano notes are visible in the movement of the patterns, and the patterns keep changing with the input data.

To color the patterns, I tried modules like POSTERIZR and COLORIZR. I intended to change the color of both patterns in response to the beat and the frequency of the audio, but this didn't work well. Using MAPPR to change the color in response to the piano notes had a better effect; I hoped the color would change each time a note hit, but the result was still not what I expected, so I eventually used the TWIDDLR module to change the color. Even so, in the final output the lines weren't actually changing color.
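The note-to-rotation mapping can be sketched outside Max. The following is a hypothetical Python illustration of the logic, not the actual Vizzie patch; the 45-degree step size and the simulated trigger loop are assumptions for illustration:

```python
def on_note_trigger(rotation, step_degrees=45.0):
    """Advance the diamond pattern's rotation by one step per piano note.

    `step_degrees` is an assumed value; in the patch the amount comes
    from the data EASEMAPPER receives.
    """
    return (rotation + step_degrees) % 360.0

# Simulate three piano-note triggers arriving from the sequencer.
rotation = 0.0
for _ in range(3):
    rotation = on_note_trigger(rotation)
print(rotation)  # 135.0
```

The key design point is that the rotation is event-driven: the pattern only moves when a note fires, which is what lets the viewer read the piano part off the visual.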
Another aspect I considered was the pattern used to represent the notes. I originally used a straight-line pattern, but it conflicted with the other line pattern being generated, so I changed it to diamond-like patterns, which makes the changes in the music more obvious.
Here is the link to the original video output.
I used the Drum Sequencer and routed its frequency data into two MIDI modules. For one voice, I used FLANGER and SYNC DELAY to make it more atmospheric. The other voice is the kick, which has a strong beat; its data goes into 1PATTERNMAPPR, which generates a linear light visual. This visual marks the occurrence of each kick beat, and the line's zooming in and out also represents the audio.
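The kick-to-zoom behavior can be approximated as a pulse that decays between beats. The sketch below is a conceptual Python stand-in for the zoom movement of the linear light visual, not the patch itself; the `base`, `peak`, and `decay` values are invented for illustration:

```python
import math

def kick_zoom(last_kick_time, t, base=1.0, peak=2.0, decay=4.0):
    """Zoom jumps to `peak` at each kick, then decays exponentially to `base`.

    All parameter values are assumptions; in the patch the zoom range
    is set by the data driving 1PATTERNMAPPR.
    """
    if t < last_kick_time:
        return base  # the kick hasn't happened yet
    return base + (peak - base) * math.exp(-decay * (t - last_kick_time))

print(kick_zoom(0.0, 0.0))            # 2.0 (at the moment of the kick)
print(round(kick_zoom(0.0, 2.0), 3))  # 1.0 (settled back to base)
```

This pulse-and-decay shape is what makes each kick visible as a flash of the line that then relaxes, rather than a static change.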
Presentation
Unfortunately, I didn’t have the chance to present due to technical issues. If I had, I suspect the volume of my audio would have been too loud. This reminds me that I should always keep volume levels in mind when working with music.
Link:
https://drive.google.com/drive/folders/1FQSgO6XVgt40h_qB-80dhGGzsc5W1qkJ?usp=drive_link
my recorded video
Conclusion
The research on the readings and films gave me inspiration for this project. Even a single pattern can produce strong effects when it corresponds well to the music. The previous exercises helped me explore the different types of modules, which let me choose the Vizzie generator that produces the pattern I want and aligns with the music. From my creation process, I discovered that different effect and filter modules can change the audio output significantly; there are many ways to manipulate a single audio or visual source left for me to explore.
During the creation process, I felt Max was still difficult to use, and it was hard to achieve the effects I intended. The three visual effects do not align very well, and the audio composition could be improved. In the current video, the piano notes and kick beats are too quiet to be heard clearly. Volume is also hard to judge because the recording does not sound the same as playback on my laptop.

One problem is that I made the patch more complex than it should be. I tried to simplify it, but with a total of three sequencers and oscillators and three Vizzie generator modules, the overall effect became a little complicated; simplifying certain steps would have been better. I changed and improved my patch multiple times because it was hard to decide on the relatively best effect. As it stands, it is not clear enough how the visuals represent the audio. Instead of simply rotating the patterns on every beat, I could also vary the zoom, the position, the color, and so on. What I need to improve is the alignment between audio and visual and their inner connection; the piece would be much better if the visuals corresponded more closely with the audio.