
Close-Up Film + Reading Response

This “unbroken belief” of the documentary form is represented in Abbas Kiarostami’s film through its use of fiction. The scenes with the actors (who, of course, play themselves) are not necessarily “true” to the exact events; even if they tried to be, recreating those events exactly would be a difficult task. So a dash of fiction adds a layer of sparkle to the truth, a way to captivate and leave a lasting, story-like impression rather than merely inform.

The truth informs the fiction in Close-Up in the same way that a group of children would recount the events of a situation to an adult. The difficulty lies in piecing together a work of fiction from a variety of truths told by different sources. Everyone in the film plays themselves in the fictional scenes, so the recreation of each instance must be a distillation of multiple viewpoints of the characters in that scene. The fiction is a derivative of reality and in this way separates itself from the truth itself. That process of distillation is what fiction is, in terms of the way it is used in this film.

The fiction informs the truth in the same way that a newspaper story informs a reader about a certain event. The film is a docufiction, and therefore has intertwined elements of fiction and reality. The reality is the court scene, while the fiction is the scenes intercut between the court proceedings. The story is told through a certain lens, the one the author or director/editor chooses, and it is in this way that the fiction “informs,” or more specifically alters, the truth. This is the only lens through which, in the moment of focus, the viewer or reader is able to understand and interpret the truth. The truth then becomes an element of the fiction rather than a separate aspect of reality in itself.

The role of a live broadcast is simply to broadcast live. The events don’t necessarily have to be taking place live; in fact, how would you know that the events on screen are even taking place live? A broadcaster can replay an event from the past that no one has heard about and pass it off as happening now, stripping away any indication of time on screen. A broadcast is an illusion that leads the viewer to interpret, but not fully grasp, a situation as it happens. For example, there was live broadcasting of the large-scale violence, riots, and protests that followed the killing of George Floyd. I was watching it live from thousands of miles away, and what was depicted was scary, to say the least. Fires, armed men, and an entire CNN news crew arrested live on camera while filming. It was insane. The role that this broadcasting plays is the same as any, as I stated before: to broadcast live, and to interpret the protests in any way you want. But the grasping and understanding of the situation comes later. Not now.


Shotgun (Visual Metaphor Project) – A Documentation

Concept

The basic concept behind this film, originally, was to tell a story about two lovers who break up. Eventually, the project turned into a character/personality-centric film instead of a conflict-driven story arc. I don’t think the film would have been the same if the couple were straight. The first reason the couple is gay is that I don’t know any female actors here in Hawaii right now; they’re all on the mainland. Another reason is that the actor I had for this film let me use his apartment, which was where the bulk of the filming was done. Getting started writing this film was a little difficult. I knew what I wanted, but I didn’t know how to put it into words. So I forced myself to write a script before storyboarding, since the filming was supposed to follow the narration.

Here is the script:

my_shotgun_script_d2

After the script, I compiled a storyboard.

One reason I wanted to explore this topic is the limited resources and time I had available to film. I created a story that was convenient for me to produce.

Locations

  • Various shots inside a 2001 Toyota Corolla
    • This is a car I had access to. It also has a vintage look and feel, which was a nice addition to the overall feel of the film.
  • Apartment Balcony
    • Scenic shot of the cityscape
  • Apartment Bathroom and Shower
  • Apartment Building Garage

Technicalities

Video Equipment

  • Canon T5i DSLR
  • Sigma 17-55mm lens
  • Sony A7S III
  • Sony 85mm prime lens
  • Sony 16-35mm lens
  • iPhone 12
  • Velbon Tripod
  • DJI RSC2
  • Generic light stands off Amazon
  • Godox RGB Mini Creative M1 LED Light

Audio Equipment

  • Avantone CK-7
  • MOTU M4 Audio Interface
  • Neumann NDH 20 Headphones
  • PreSonus Eris 3.5 Nearfield Monitors

Software

  • Adobe Premiere Pro CC 2021
  • Adobe Media Encoder CC 2021
  • Logic Pro X
  • Handbrake

Video Production

Shooting

All the apartment scenes, including the garage scene, were shot on the Sony A7S III; the cityscape wide shot with a pan toward the ground was shot on an iPhone 12; and the interior car shots with the rear-view mirror and windshield wipers were shot on the Canon T5i.

It is common practice to make a shot list when shooting short films, but with the small number of shots in this film, a shot list wasn’t necessary; I followed a basic outline for creating the shots. The interior car shots were finished first, about a week and a half before the apartment shots. The apartment shots had to be completed in three hours, because anything longer and I would have been charged for parking at the apartment complex.

The apartment shots consisted of:

  • The shower scenes
  • The parked car in the garage scene
  • Scenes with the lover and his ex

Now let’s talk about lighting. I wanted the lighting for this entire film to be simple to execute. I didn’t have many resources like studio lights or anything significant, so I had to work with the environment, which is not necessarily a bad thing.

A spark of inspiration for this film came when I was driving home from getting milk tea a week or two before the final project was explained in class. This tiny journey of getting milk tea and going back home takes around 30 minutes, and I do it around sundown. While driving, I noticed how well the sunlight hit the side of my face in the rear-view mirror, and I realized the lighting conditions were perfect for a nice-looking shot that I wanted to implement in a film. Hell, the shot was so nice I decided to base my entire film off of the visual metaphor of a car. So the car shot lighting is quite simple: I just used natural light.

The apartment shots were also quite simple. In terms of lighting gear, all I had available were my friend’s Godox RGB Mini Lights mentioned above. They are little powerhouses for sure. I mounted them on generic light stands off of Amazon.

I then placed one outside on the balcony at 50% brightness and a color temperature below daylight, and one inside, just outside the hallway leading to the exit door. This can be seen in the shot diagram below.

The hallway light is noted in purple. The balcony light, however, is not drawn at all. This is because during filming, I decided to put it there as an accent to highlight the talent on the balcony. This worked out quite well in the final cut. Here is an example shot with and without the light.

Here, a different spill light was used. The reason for not relying on that spill light elsewhere is that the shower/bathroom scene is shot in the same apartment, which is not something the viewer should realize; it would create questions in the storyline.

With limited resources, I wanted to control everything I possibly could in such a short shooting time frame. I decided to shoot this scene at night because that would give me greater control over the lighting, and it did. I didn’t have enough “interesting” locations, so I had to use the location I already had, the apartment, to good effect; that is why I added a pseudo-dolly shot filmed with the DJI stabilizer. This allowed us to move around the space in the apartment and generate some subtle visual interest, using the space as a dimensional playground instead of a stagnant, small room.

Originally, I had intended a somewhat different camera movement for the apartment scene, as seen in the shot diagram below:

As seen above, I designed two different but similar camera movements in case the apartment space could not accommodate the camera’s motion. During shooting, we changed the shot: instead of following the talent out into the hallway, the camera backs into the room off the hallway, the room where the light is, creating a look where the talent fights toward the door instead of toward the camera. Another aspect that changed was the blocking of the characters. The ex was originally going to be standing and moving around when the lover entered the apartment from the balcony, but this changed to the lover sitting down in the beginning, with a slow reveal to him as the camera dollied out. This is much more effective, as it sets the tone of the scene far better without the use of extravagant lighting, scene production, or color grading.

As seen above, I was originally going to have the camera move backward slowly while facing the actor. However, we changed this to a hard pan to the right, where the camera witnesses the lover being thrown out and falling onto the ground in the hallway. This is a much better shot selection: it uses the talent’s movement to express emotion, and the composition of the long hallway makes the shot feel lonely.

Also take note of the dialogue cues in the shot diagram. I used these as a general rule of thumb rather than an exact point of sound placement, since I edited the scene and felt out the words in post.

For the concluding shots that took place outside on the balcony at the end of the film, the shot diagram also looks a little different than the film.

In the diagram, the ex was originally going to be sitting at the table. However, this changed to the ex walking into the kitchen and placing his hand on the light to turn it on. I liked the lighting in the final shot better, since it puts the ex in the darkness and the lover in the light, already creating a contrast between the two characters before the fight that appears earlier in the film.

The shower scene was quite simple.

As seen above, the camera was originally supposed to move toward the mirror as the talent got into the shower, but this was nearly impossible: there was not enough space in the bathroom to accommodate that camera movement, and the intended shot was too long. Instead, we just went handheld and filmed the talent looking at himself in the mirror. Then, to get the inside-shower shots, I actually had to take a shower, just not naked, obviously. One person stood right next to the closed shower door with the camera and pointed it down at me. Another person, right behind the camera operator, held a single Godox light up to the ceiling to bounce light into the shower. This setup was effective; from the various cuts, it’s evident that the scene is sterile and meant to convey cleansing. For added effect, the shower was on the coldest setting, so I would actually shiver and not have to “fake it” with warm water. For close-ups of the cross on the chain, we opened the shower door and did slow handheld close-ups, pushing in and out on the chain.

This shot was taken by opening the car door and placing the camera right outside the shotgun passenger seat. The focal length is around 70mm to create a close feeling and keep everything else out of the shot; it was also cropped further in post. This shot is worth noting because it departed from the original storyboard, which depicted the windshield being filmed from the back passenger seat. I noticed the sun outside looked quite nice hitting the windshield with wiper solution on it, so I did this shot instead.

The workflow I had throughout the post-production process was a dance between three things: the video editing, the audio design, and the story narration.

Editing and Visual Design

I used Premiere to edit and export the final product. I did some simple audio design in Premiere, such as fading in and out of audio clips and basic rearrangement and deletion of clips. I left some of the raw clip sounds (no editing) in Premiere so I had more room to play with them when I exported the video file to import into Logic.

Screenshot of my Premiere workspace.

Color

In my opinion, color grading is a simple affair: get that video to look nice and appealing. It’s kind of like varnish on wood or the final polish before a product is sent to the shelves. Overdone, it can make a good cut look bad, but done just right, it can make a good cut look even better.

Therefore, I considered certain aspects of the image when color grading:

  • What are my lighting conditions?
  • What colors are already present in the frame?
  • What am I emphasizing?
  • What mood am I setting?

Here are some example frames of color grading from the video.

The color palette is fairly simple. There are drab tones throughout every shot in the film, made of brown, black, and gray. Things feel muted, even in the sunlit rear-view mirror shots. The only really eye-catching color is white, which is heavily emphasized in the shower scenes to convey intensity and shock.

Video Quality and Aspect Ratio

The Canon T5i can only shoot 1080p at 30fps, so I took advantage of the fact that the Sony shoots 4K at 60fps. Oh, what a joy it was to edit with 4K footage! 4K is bigger than the sequence dimensions, so I had to scale some shots down from 100 to 50 in the motion settings (3840×2160 at 50% is exactly 1920×1080, so it fills the frame with no quality loss). This can be seen in the frames above: the shots that look zoomed in are simply 4K footage not scaled down to 1080p, and therefore appear large. It may seem like a disadvantage, but it isn’t; the extra resolution leaves room to reframe and punch in without degrading the image.

The aspect ratio is 4:3. I had originally intended to use 16:9, but after accidentally exporting in 4:3, I decided to keep it. This actually gave me more options in post. It gave the film an overall “tight” and “intimate” feeling, and it also gave me more technical control over the movement of the one-take tracking dolly shot (we used the DJI rig for that, not a real dolly): because less of the image was in frame, there was leeway to toy with keyframes and move the clip around in space. This toying with movement was also possible thanks to the 4K footage and all that off-screen real estate.

Here is an example of what I mean.

All the happy feelings of editing in 4K escape you when Premiere starts to lag and the computer fans start screaming, especially on an Intel processor (step up your game, Intel). The solution for editing quickly and smoothly is to use proxies, a feature in many editing programs; in Premiere, they are generated through Adobe Media Encoder. The proxies for all of the footage I used were low-resolution 1024×540 versions of the original files. That means I was editing the sequence using the proxies, then exporting without them, using the full 4K and 1080p footage, to a 1080p, 24fps master. With these down-scaled versions, editing is a joy yet again.
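The proxies themselves came out of Adobe Media Encoder, but the same idea can be reproduced with any transcoder. Below is a minimal sketch, assuming ffmpeg is installed and that the source clips sit in a folder called footage (both assumptions for illustration, not part of the actual project workflow), of batch-generating 1024×540 proxy files from Python.

```python
# Hypothetical sketch: batch-generate 1024x540 proxies with ffmpeg.
# The project itself used Adobe Media Encoder's proxy presets; this only
# illustrates the concept. Folder and file names are made up.
import subprocess
from pathlib import Path

for clip in Path("footage").glob("*.mp4"):
    proxy = clip.with_name(clip.stem + "_proxy.mp4")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=1024:540",                             # downscale to the proxy resolution
        "-c:v", "libx264", "-preset", "fast", "-crf", "28",  # small, fast-decoding video
        "-c:a", "aac",                                       # keep audio so sync can still be checked
        str(proxy),
    ], check=True)
```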

Visually editing and assembling this piece was a little difficult at first. I had around 80-90% of the shots from the storyboard, so a key point in the story was missing: I had no idea what to do for the running scene. I didn’t want to show running, as that would defeat the concept of the narration. It’s a sin to describe what is happening on the screen in filmmaking; after all, the whole goal is to show, not tell. Then I had an idea. I compiled all of the clips I already had into a rough cut, following the script closely but not exactly. The whole running part was replaced with shower shots intercut with driving and windshield wiping. I brought this rough cut into Logic and experimented with saying the script. I realized that I could give the narrator a voice, a sense of character, not just through the delivery of the lines, but also in the context and story of the events unfolding on the screen.

For accessibility reasons, I also created subtitles, embedded in the video or provided as an .SRT file for upload to streaming services, since not all audience members will understand the language I am using. I used Handbrake to embed the subtitles.
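The exact Handbrake settings aren’t shown here, but for the curious, here is a rough sketch of the same step done with HandBrakeCLI, Handbrake’s command-line counterpart, called from Python. The file names and the --srt flags are assumptions for illustration; the GUI exposes the equivalent options in its Subtitles tab.

```python
# Hypothetical sketch: attach an .SRT subtitle track with HandBrakeCLI.
# File names are made up, and the --srt-* flags should be checked against
# `HandBrakeCLI --help` for your version before relying on them.
import subprocess

subprocess.run([
    "HandBrakeCLI",
    "-i", "my_shotgun_master.mp4",    # exported master from Premiere (hypothetical name)
    "-o", "my_shotgun_subtitled.mp4",
    "--srt-file", "my_shotgun.srt",   # the subtitle file, also uploadable on its own
    "--srt-lang", "eng",
], check=True)
```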

Audio Production

The narration was originally going to follow the script above, but a problem arose when I started editing the footage: I noticed that the creative direction and story were leaning toward a more ambiguous and abstract feeling. Of course, this is not an abstract, experimental film, but the story’s concept is just about there. Therefore, I deviated heavily from the written dialogue in the second half of the film, after the fight scene; I essentially free-styled the narration. A tactic I used to create narration that “fit” the on-screen action was to make the narrator sound like he is talking to a psychologist or therapist, recounting events from a very subjective point of view with definite flavors of his personality shining through. This is apparent not only in the delivery, but also in the word choice and phrasing. For instance, in the latter half of the video, the narrator says, “if you didn’t get the joke, like a stupid person,” which is indicative of his pessimism toward life, something that was lacking in the original script. Character was the thing that mattered most in the film in general, since creating a short story in around 90 seconds is quite difficult, so I decided to emphasize the one facet of the film I had full access to: me and my voice.

I used Logic for all sound-related activities. Audition is just too bulky, and the workflow is wonky. Logic is much better optimized for macOS than Audition, and I already have Logic experience, so using it was not difficult. The ideas I will be explaining, though, are applicable to any capable Digital Audio Workstation (DAW).

Recording and Instrumentation

The only real, in-person recording I had to do was the narration. I used the aforementioned Avantone mic and MOTU interface for that. To get the recording “right”, I used this mic setup below.

Placing the mic just above the upper lip and having the vocalist tilt their head up a little to speak into the mic softens sibilants greatly. The black sock on the mic (it does not have to be black) acts as a plosive filter. These two precautions make editing the vocal easier. I sat down to get the nonchalant delivery; sitting slightly slouched, I sound careless but engaged.

I had a grand vision for the soundtrack: I wanted to use enamored violins. I liked the unstable, infatuated sound they provided. Below are four short, separate violin compositions that I arranged myself and brought a friend over to record. They are unmixed and unmastered. Unfortunately, I did not use them at all in the final product.

I realized, though, that this was not the creative direction I was heading toward. The 4:3 ratio kind of sealed the deal on the soundtrack direction. The violins were too grand, even too “stringy,” for such intimacy, so I took a more minimalist approach. I used three instruments for the composition, all of them synthesizers: Gentle Sine Bells, MonoSlide, and Vibus.

I chose these instruments because of their simple nature. They aren’t noisy or annoying or tense. They are calming. In fact, little to no post processing was done to these instruments. Here is the score for the main theme that plays in the beginning and end of the film:

Played at 70 BPM.

The fight scene score is a little more ambiguous. The sheet music doesn’t come out correctly because it wasn’t played to any fixed BPM; I played the notes into the computer, timing them to the scene. Essentially, the short phrase is in the key of A major, following a basic 2-5-1 chord progression. The strike of the 1 chord in the progression, the resolution, or in terms of the film, the punchline, is timed with the door-banging sound. This 2-5-1 progression was also used in the main score. The recurring 2-5-1 uses the flat 9 of the dominant chord, which creates a feeling of tasteful uneasiness. At the same time, the resolution to the 1 relaxes the listener, yet the content of the narration is not relaxed; although delivered in a semi-suave manner, it is dark and disturbing. This contrast creates a unique audio atmosphere even without mixing and mastering. There is no exact feeling that can be pinpointed in this combination. It’s a mix of melancholy, longing, loneliness, passive-aggression, and apathy.
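For readers who don’t use chord-number shorthand, here is a small sketch of how that recurring 2-5-1 with a flat 9 spells out in A major. The voicings are illustrative only; the actual score may voice and invert these chords differently.

```python
# Illustrative spelling of the recurring 2-5-1 in A major described above.
# Voicings are examples, not the exact notes used in the score.
progression = {
    "2 (Bm7)":   ["B", "D", "F#", "A"],       # the ii chord
    "5 (E7b9)":  ["E", "G#", "B", "D", "F"],  # V7 with the flat 9 (F natural) for the uneasiness
    "1 (Amaj7)": ["A", "C#", "E", "G#"],      # the resolution, timed with the door bang
}
for name, notes in progression.items():
    print(f"{name}: {' '.join(notes)}")
```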

Project Organization

This is important, especially for audio projects, since mixing is such a crucial process in creating a finished product. I divided the mix into groups, seen below.

The Voiceover Group is just the narration, nothing else. The Second Phrase Group is the second musical track, which starts and ends within the fight scene and consists of just Vibus. The First Phrase Group is the first and third musical track, which appears at the beginning and end of the film and consists of the Gentle Sine Bells and MonoSlide. The Sound Effects Group contains the sound effects.

Here is the fader control panel and mix bus view.

Most importantly, here is the star of the show: the narration track.

Recorded with an Avantone CK-7 into a MOTU M4 Audio Interface, this effects chain is quite simple. Let’s go down the list, since that’s the order the computer processes it in. First, the Linear EQ is used to cut the low frequencies, from 20Hz up to around 100Hz. It’s good practice to cut those: we can’t hear 20Hz well enough to make a smart mixing decision about including it (perhaps you can *feel* it with a subwoofer, but that’s a different story). The EQ is linear phase, meaning the phase relationships between frequencies are preserved when they are manipulated, unlike with a standard minimum-phase EQ. The three compressors are next; they compress the dynamic range of the sound and give the voice more presence. It’s in your face. The Tube EQ (a software plugin modeled after hardware tube EQ circuitry) adds a little “warm” touch or character to the vocal, and the final Linear EQ adds just a little sparkle and correction to the output to ensure things are exactly the way I want them. That’s a lot of processing for one vocal, but it was necessary for it to stand out in the mix, especially since male vocals usually inhabit the low-mid frequency range and the ambient sounds live there too. Of course, not all of these effects are strictly needed; the Avantone CK-7 sounds exceptional on its own, but the processing adds further character and presence to the voice. It pushes the sound an extra mile.
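To illustrate the linear-phase point in isolation, here is a minimal sketch, entirely separate from Logic’s plugins, of a low cut around 100Hz built from a symmetric FIR filter with SciPy. Symmetric FIR filters delay every frequency by the same amount, which is why the phase relationships in the voice stay intact while the rumble is removed. The sample rate and the fake “narration” signal are assumptions for the example.

```python
# Illustrative linear-phase low cut around 100 Hz (not Logic's Linear EQ itself).
# A symmetric FIR filter applies the same delay to every frequency, so phase
# relationships are preserved while the low rumble is attenuated.
import numpy as np
from scipy.signal import firwin, lfilter

sample_rate = 48000
# pass_zero=False turns the filter into a high-pass; an odd tap count is required for this filter type
taps = firwin(1001, 100, pass_zero=False, fs=sample_rate)

# Stand-in "narration": one second of 60 Hz rumble plus a 440 Hz tone
t = np.arange(sample_rate) / sample_rate
narration = 0.5 * np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)

# The rumble is attenuated; the 440 Hz tone keeps its shape, shifted only by a constant delay
cleaned = lfilter(taps, 1.0, narration)
```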

Mixing

Mixing was relatively simple. Because there wasn’t the intricate clashing between instruments that you’d get in a song, balancing the ambient environment and sound effects against the monologue was easy. Bus routing was used to make general EQ adjustments on many tracks at once and to add some reverb. An important moment in the mixing process, though, was the fight scene sound. The audio from the fight scene was not originally going to be used; however, while editing, I noticed that the fight contained some tense dialogue that I wanted to keep for the story, which is why its audio quality is abysmal compared to the narration. I used some simple automation of volume and EQ to make both the dialogue and the fight audible at the same time. Below is an example video of how I did it. The orange track is the original video volume.
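The automation itself was drawn by hand in Logic, but the idea behind it is just a time-varying gain curve on the bed of music and ambience. Here is a small, hypothetical sketch of that idea in NumPy; the timings and levels are invented for illustration.

```python
# Hypothetical sketch of a volume-duck automation curve (the real one was drawn in Logic).
# The bed (music/ambience) is dipped wherever the fight-scene dialogue is present.
import numpy as np

rate = 48000
t = np.arange(rate * 4) / rate                 # four invented seconds of timeline
bed = 0.3 * np.sin(2 * np.pi * 220 * t)        # stand-in for the music/ambience bed
dialogue_present = (t > 1.0) & (t < 3.0)       # where the fight dialogue is assumed to sit

gain = np.where(dialogue_present, 0.4, 1.0)    # automation curve: duck the bed to 40%
gain = np.convolve(gain, np.ones(2400) / 2400, mode="same")  # ~50 ms ramps so the dip isn't abrupt

ducked_bed = bed * gain                        # the bed now leaves room for the dialogue
```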

I created a mix and then a rough cut. This rough cut was the precursor to the final video; I analyzed it and adjusted a few points. Here it is below.

In the final cut, the chime of the open car door was made to represent an EKG monitor. I noticed that if you leave the headlights on and get out with the door open while the car is off, the chime stops beeping and instead plays one long tone. I used that as an EKG flatline, timed with the narration where the narrator talks about killing himself by electrocution. The only aspect planned here was the joke in the script about the toast in the bathtub; everything else was a concept that came together during post-production.

A keen listener who is active in hip-hop or alternative hip-hop circles will notice that the song playing inside the car at the beginning of the film is A BOY IS A GUN* by Tyler, The Creator, a gay breakup song about a lover and his ex. Quite fitting.

I mixed the project to -7dB True Peak to leave headroom for mastering.

Mastering

After exporting the mixdown, the mastering was simpler. It’s just a few plugins on the effects rack: Ozone 9, Tube EQ, and a Limiter. The other two, SPAN and Loudness, are only for metering.

Now onto the use of the effects. For EQ, I boosted the low end a bit; the mix definitely had low end, but the EQ enhanced it. I used the Low Pass Filter to reduce some very high (almost inaudible) content and set things up for the Tube EQ later. The Imager expanded the stereo field just a little, to make the sounds feel bigger. The Maximizer raised the track’s overall amplitude, AKA made it louder. I left myself around -2dB of headroom after the Maximizer so the Limiter could push the rest of the sound up toward 0dB, landing at around -14 LUFS.
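That -14 LUFS target can also be sanity-checked outside of Logic’s Loudness meter. As a small sketch, assuming the soundfile and pyloudnorm Python packages are installed and that the bounced master is a hypothetical master.wav:

```python
# Hypothetical sketch: check that the exported master lands near -14 LUFS integrated.
# Assumes `pip install soundfile pyloudnorm`; the file name is made up.
import soundfile as sf
import pyloudnorm as pyln

audio, rate = sf.read("master.wav")           # load the bounced master
meter = pyln.Meter(rate)                      # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(audio)   # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS (target was around -14)")
```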

The Tube EQ was used to add some flavor, that little extra sparkle or spice to the final sound to make it sound vintage. In general terms, it created a more intimate vibe. Here is the mix and the master compared to each other.

For both of these processes, I used the Neumann NDH 20 headphones and PreSonus Eris 3.5 monitors to analyze the sound. Most of the work and listening was done on the monitors though.

Conclusion

This was a fun project to do, although it was very hectic due to the nature of online school. I am relatively satisfied with the result.

 


Final Project: “My Shotgun” Script

Here is the script for My Shotgun.

Premise: A lover breaks up with his boyfriend and walks us through a series of his thoughts pre and post breakup.

The script is quite self-explanatory. It is close to a standard screenplay. I have no experience writing screenplays, but this is my shot at it. The script includes narration and scene cues.


Final Project Proposal

I.

  • No idea for project title.

II.

  • Me, and another actor who does not go to NYU.

III.

  • Poem, self written.
  • Story is about gay lovers, or maybe depression, or maybe both.

I don’t think about him anymore.
I don’t think he cares.
At all
When was the right time anyway?
I don’t know. Time doesn’t mean anything.
Because this loneliness isn’t confined to measurements
Or space, or time.
So here’s to another eternity, Alone.

  • Going to focus on loneliness
  • Style and aesthetics may include clean, simple shots that are visually appealing, one-take shots that last up to a minute, and a moody atmosphere that takes advantage of natural lighting and some practical lights, due to the lack of studio lights or a studio.

IV.

  • I will use perhaps a stabilizer, audio recorder, and a DSLR or maybe a mirrorless camera if I’m able to obtain one, and lights that I can borrow from my friend.
  • Shot in multiple locations… city, Tantalus, apartment building, small room, inside a moving car, Punahou School campus, Manoa. Shot usually at magic hour. Some night scenes to portray sadness, though that is a little cliché.
  • Character clothing is essential.
  • Music self-composed.
  • Work will be divided upon no one. It’s all on me.

V.

Will be uploaded.


Storyboard Assignment


Photoshop Diptych

Neo Alabastro, “Victory”

The concept behind this project is that of control, or the lack thereof. I wanted to create something that represented the idea of two places: the one in front of the mirror and the one behind it. The image on the left represents the “real” world, the world we see in front of us as we perceive it through our eyes. The image on the right corresponds to the world of the mind, the things we aren’t able to materialize with our eyes. Essentially, the edited image is a descent into madness. In the first image, the figure is posed with both arms in the air; it looks like it is running a victory lap. In the image on the right, however, the arms in the air no longer look like a positive gesture; it now appears as if the figure is having an existential crisis, similar to a mental breakdown. The duplicated versions of the wooden figure are meant to be perceived as a spiral out of control; the repetition represents the thoughts of success ruminating in its head. A keen eye will spot a Ouija board in the first image. The board is meant to represent the thoughts that await the figure on the “other side.” These thoughts are haunting.

The process for taking these images was simple. I had two LED lights on hand that could change color across the entire RGB spectrum and had blinding levels of brightness. I mounted the two lights opposite each other, staggered in such a way as to create depth on the objects in the picture. The objects I chose were a wooden figure, a Ouija board, and a hand wearing metal jewelry. Although seen in the first image, the hand was not used in the second. I experimented with different angles and light colors, as well as different focal lengths and f-stops, as seen in the contact sheet.

The process for editing the images in Photoshop was also straightforward. I used the selection tool to cut each figure out of the images. For the black, silhouetted figures on the bottom right, I edited the layer properties and experimented with the satin effect. For the other cut-out images, I experimented with color overlay and layer blending modes. The result was a mix of vibrant blues and pinks, which I think looks pretty cool. The top portion of the image consists of two nearly black-and-white, sepia photos. These were made with the “difference” layer blend mode. In addition to these colors, I also added some texture to the figures. For example, there is slight blue grain on the leftmost figure. On the top-right figure, there is an incredibly faint blue silhouette behind the figure, meant to represent the figure’s soul leaving its body. There weren’t any compromises or problems along the way in making the edited version. I let my mind flow freely, and this is the product of that stream of consciousness.

If I had more time, I would improve this project by blending even more elements of the pictures I took and playing with color more. I would have designed a color palette with the lights and, even more so, with the objects in the picture. That way, I would have a little more creative liberty in the positioning and manipulation of the images instead of having to worry about whether the colors would fit together.


Sound Visualization Project

Neo Alabastro – “Nowhere2go” by Earl Sweatshirt

“Nowhere2go” by Earl Sweatshirt is a song off of his 2018 album entitled Some Rap Songs. The song can be considered avant-garde, as the entire track is built on a repetitive, almost meditative vocal and guitar sample that loops behind the percussion and drums, which consist of a lean kick drum, a piercing snare, a fluttering tambourine, and simple hi-hats. Initially, the song sounds like a mess; there’s too much going on, and the main vocals are almost incoherent amidst the instrumental. However, by my fifth play-through, my head started to sway from left to right; I started to notice a rhythm to the song. The rhythm is easygoing and organic, like a séance: haunting and repetitive.

With these aspects of the song in mind, I decided that I wanted to create something digestible to the viewer of the sound visualization project: something that would simplify the song’s rhythm while conveying it in the same organic way the song does.

Initially, I had created a 2D design that had several flaws. The first flaw was that there was not enough movement; the design itself was stagnant. A main feature of the piece was a pattern of lowercase i’s in the Courier New font. This pattern ran from the top of the canvas to the bottom, and it created a stagnant feeling in the piece. The dominating presence of the pattern didn’t convey enough of the rhythm expressed in the song. Another problem with the original piece was that there was a stray hint of color, which I fixed by undoing the distortion effect in Illustrator.

The mid-critique influenced my design by changing the way I saw the song. Professor Inmi suggested that I add more rhythm to the piece. Funnily enough, although I had noted the rhythm over my many listens to the track, I had never thought of conveying it through imagery; I was more focused on the lyrics and the meaning of the instrumental than on the movement of the track itself. Therefore, in order to create rhythm, I turned the pattern of i’s into an arc. I duplicated the pattern in place and distorted it so that the i’s unwind like a musical waterfall; in the final version, they curve and sway back and forth from one side of the canvas to the other. To convey even more movement, I placed O’s along the arc of i’s, representing snare and percussive hits. These O’s fall off the waterfall of i’s to create a feeling of freefall amongst the steadiness of the i’s. Similar to the song, the percussive hits of the snare come only once and are loud and impactful. To convey this, I distorted the O’s using the Pucker and Bloat feature in Illustrator, warping their edges just enough to keep them identifiable as O’s while still reading as an impactful percussive hit. I changed the background to black to create more contrast than just black on white; the black background also helps the movement read more clearly.

In terms of Gestalt theory, I believe my design exemplifies the principles of similarity, continuity, and proximity. For instance, the i’s are grouped through similarity and follow a continuous path. They are also grouped close together, so that the figure of a winding, wave-like structure appears through proximity.

If I had more time on this project, I’d want to figure out a way to make the O’s three-dimensional. I feel like there isn’t enough depth to them, and the project would look more complete with everything in a three-dimensional space.