week 4 – patterns

This week I worked on creating patterns, sometimes with unexpected results. After some playing around, I arrived at two evolving patterns:

1) Childhood reimagined

When I was a child I loved drawing a specific pattern using grid paper. It was very soothing and took a lot of effort, patience and time.

I wanted to recreate this pattern using computing, which surprisingly also took a lot of effort. While I was coding, I made several mistakes which resulted in unexpected and interesting patterns. Here are some of the ‘mistake’ results:

Finally I arrived at the right pattern:

When I made mistakes in the code, I quite liked some of the patterns, so I decided to randomize several parameters and see what happened. Here is the result [link to code]:
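The linked sketch is the real one; below is only a minimal sketch of the general idea, a grid pattern whose look depends on a few parameters randomized at startup (the parameter names and ranges are made up for illustration, not taken from my code):

```javascript
// A grid pattern controlled by a few randomized parameters.
// Re-run the sketch to get a new variation.
let step, angle, weight;

function setup() {
  createCanvas(400, 400);
  step = random(10, 40);          // grid spacing
  angle = random(PI / 8, PI / 2); // tilt of each stroke
  weight = random(1, 4);          // stroke thickness
  stroke(0);
  strokeWeight(weight);
  noLoop();
}

function draw() {
  background(255);
  for (let x = 0; x < width; x += step) {
    for (let y = 0; y < height; y += step) {
      push();
      translate(x + step / 2, y + step / 2);
      // alternate the tilt direction in a checkerboard fashion
      rotate((x / step + y / step) % 2 ? angle : -angle);
      line(-step / 2, 0, step / 2, 0);
      pop();
    }
  }
}
```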

 

2) Mars and Jupiter’s solar rave

I was inspired by an animation of Mars's and Jupiter's orbits around the sun that I saw on someone's Twitter.

I found it very hypnotic and beautiful, reminding me of a stellar dance. I wondered what different orbits and speeds would look like. Voila, welcome to the solar rave! [link to code]
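Under the hood, an orbit animation like this boils down to parametric circles: each planet sits at $(r\cos(\omega t), r\sin(\omega t))$ with its own radius and angular speed. A minimal p5.js sketch of the idea (the radii and speeds are made up, not to scale):

```javascript
// Two "planets" on circular orbits with different radii and angular
// speeds. The background is drawn only once, so the orbits leave trails.
function setup() {
  createCanvas(400, 400);
  background(0);
}

function draw() {
  const t = frameCount * 0.02;
  translate(width / 2, height / 2);
  noStroke();
  fill(255, 200, 0);
  circle(0, 0, 16); // the sun
  fill(255, 80, 60); // inner planet: smaller radius, faster
  circle(80 * cos(2.5 * t), 80 * sin(2.5 * t), 8);
  fill(230, 180, 130); // outer planet: larger radius, slower
  circle(150 * cos(t), 150 * sin(t), 12);
}
```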
 
 

week 3 lab – can you guess the tune?

This week we played with analog input and with analog output (Pulse Width Modulation), testing the concepts on a speaker and a Servo motor. I also tried to do something ‘more creative’ with the speaker, so I created a little game called:

Can you guess the song in less than 10 notes?

With the help of Henry’s composition skills and Alexandra’s, Dre’s, and Alfonsette’s song-guessing abilities, I was able to test out the game:

Hurray! Collectively the team guessed all the songs. Although I do wonder how to get users to press the button just once, as opposed to continuing to hold it (one software fix is sketched below the circuit). Oh, and here is the setup of the circuit:

guess the tune game circuit
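On the press-and-hold question: rather than prompting the user, the usual trick is to react only to the moment the button goes down, i.e. detect the edge between the previous and current reading. A hedged sketch of that idea (the pin numbers are assumptions, not my actual wiring):

```cpp
// React once per press by detecting the LOW -> HIGH transition,
// instead of firing continuously while the button is held.
const int buttonPin = 2;  // assumed: button with a pulldown resistor
const int speakerPin = 8; // assumed: speaker pin
int lastState = LOW;

void setup() {
  pinMode(buttonPin, INPUT);
}

void loop() {
  int state = digitalRead(buttonPin);
  if (state == HIGH && lastState == LOW) {
    // the button just went down: play the next note exactly once
    tone(speakerPin, 440, 200);
  }
  lastState = state;
  delay(10); // crude debounce
}
```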

I was able to complete this week’s labs, although many questions remain. And my microcontroller broke at the end :(. Here is a lab highlight: the potentiometer controlling the Servo motor.

 

!remaining questions!

  1. What does a transistor actually do and how does it work? What do the different pins mean?
     
  2. What is the speaker set up below? Which direction does the electricity flow in the speaker?
  3. How does one connect an audio jack?
  4. What does a capacitor do and when do we need to use it?
  5. A potentiometer doesn’t have a pulldown resistor even though it is a variable resistor. Why not?
  6. sizeOf(melody)/sizeOf(melody[0]) didn’t work when I used it to determine the length of the notes array. Why not?
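A likely answer to question 6: C++ is case-sensitive, so the operator has to be lowercase sizeof; it also only works on an actual array in the current scope, not on a pointer. A minimal check:

```cpp
// sizeof (lowercase) returns a size in bytes, so dividing the array's
// size by one element's size gives the number of elements.
int melody[] = {262, 294, 330, 349, 392, 440, 494, 523};
int numNotes = sizeof(melody) / sizeof(melody[0]); // 8

// Caveat: inside a function that receives `int *melody`, sizeof(melody)
// is just the size of the pointer, not of the original array.
```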

week 3

Coded in collaboration with Lisa Jeong

Inspired by the classic Monty Hall problem, the player must choose a door to open. Only instead of a car, they win a CAT.

Sometimes you win, sometimes you lose, and sometimes the consequences are unpredictable. Refresh to play again and see if you get a different outcome.

Link to p5.js code

synthetic media analysis

Last weekend, I went to the Whitney Biennial 2022, where at some point I noticed a group of people lying on their backs, enchanted by the movements on a set of ceiling-mounted LED panels. Upon closer inspection, I realized that they were watching an ever-changing pattern of shapes and colors that resembled nature’s textures: the ocean, jellyfish, blossoming flowers. The screens were interwoven with an LED net and surrounded by large aluminum panels on the walls, dripping with oil dents. The atmosphere in the room was otherworldly, to say the least.

But what was truly fascinating about the installation by WangShui, entitled Titration Print (Isle of Vitr⸫ous), Hyaline Seed (Isle of Vitr⸫ous), and Scr⸫pe II (Isle of Vitr⸫ous), is that this work was ‘co-authored’ by AI. The visuals displayed on the LED panels are generated by Generative Adversarial Networks (GANs) and change based on the conditions of the surrounding environment that the AI ‘senses’. In particular, active sensors pick up light levels emitted from the LED screens and CO2 levels from the viewers, and use that data to evolve the screens’ patterns. As a consequence, during the night the piece enters a kind of resting state. This ability to sense and adapt to the ‘self’ and the surrounding world gives the art piece a conscious, living quality. The aluminum panel paintings on the side walls are also created using AI-generated “new images from [artist’s] previous paintings which [the artist] then sketches, collages, and abrades into the aluminum surfaces.” [source]

Both the animation and the paintings mirror nature’s textures, colors, and curves, giving us a hint of what the training data consisted of. In fact, in an interview with artnet News, WangShui mentions that the training dataset is a depiction of their research subjects and is, in a way, the artist’s journal. It includes ‘thousands of images that span deep sea corporeality, fungal structures, cancerous cells, baroque architecture, and so much more.’

As I lay under the screen, mesmerized by the evolving shapes above me, I thought about how important the artist is in curating and framing synthetic media. Generative algorithms like GANs will sooner or later become commonplace in artists’ toolboxes, and I look forward to being part of the community experimenting with this medium.

 

week 2

This week I started developing circuits with the microcontroller using the Arduino IDE. I also learned how to solder, which was new and exciting. Overall the labs went smoothly, with a few hiccups like figuring out that:

    1. I need to use the ~D ports for analogWrite().
    2. I need to introduce delay() after tone() to make sure the sound plays: tone() returns immediately and plays in the background, so without a pause the loop() moves on and restarts the note before it can be heard (sketched below).
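A minimal sketch of the tone() + delay() pattern (the pin number is an assumption):

```cpp
// tone() is non-blocking: it starts the note and returns right away.
// delay() holds loop() long enough for each note to actually sound.
const int speakerPin = 8; // assumed wiring

void setup() {}

void loop() {
  tone(speakerPin, 440); // A4
  delay(300);            // let the note ring for 300 ms
  tone(speakerPin, 523); // C5
  delay(300);
  noTone(speakerPin);    // a rest between repeats
  delay(300);
}
```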

Some questions following the lab remain:

      • When combining an LED with a resistor in series, does it matter if the resistor goes before the LED or after?
      • Why choose a resistor of the same order of magnitude as the variable resistor?
      • What does ‘9600’ in Serial.begin() mean?
      • What does the following mean: ‘When you take a reading with the ADC using the analogRead() command, the microcontroller stores the result in memory. It takes an int type variable to store this, because a byte is not big enough to store the 10 bits of an ADC reading. A byte can hold only 8 bits, or a range from 0 to 255.’
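On that last one, the arithmetic is the key: 10 bits means $2^{10} = 1024$ possible values, so a reading runs from 0 to 1023, while a byte tops out at $2^8 - 1 = 255$. Hence the int:

```cpp
// A 10-bit ADC reading spans 0-1023 (1024 values), which overflows
// a byte (max 255) but fits easily in an int.
int sensorValue = analogRead(A0); // e.g. 0 at 0 V, 1023 at 5 V

// Rescale to 8 bits when feeding analogWrite(), which expects 0-255.
int brightness = map(sensorValue, 0, 1023, 0, 255);
```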

A few fun circuits that I constructed using a potentiometer and a speaker:

and two Force-Sensitive Resistors (FSRs) and two LEDs:


week 2

The second iteration of my self-portrait plays with the idea of the many human emotions and character traits, including ones that people often try to hide, like anger and vindictiveness. As time passes, I change the coordinates of the key portrait elements (wings, eyebrows, and the flame) and the face color to transform my original self-depiction into an evil-looking creature. The mouse cursor also controls the hair shape, letting the viewer add to the character’s maleficent aura. I also used a cyclic sine function to create the oscillating eyebrows, which add a feeling of uneasiness to the transformation. The p5.js code can be found here [link].
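The oscillation itself is one line of trigonometry: a sine of the frame count added to each eyebrow’s y-position. A stripped-down version of the effect (positions and speeds are illustrative, not from my actual sketch):

```javascript
// Oscillating "eyebrows": a sine of frameCount moves each
// eyebrow up and down; the right one runs out of phase.
function setup() {
  createCanvas(400, 400);
  strokeWeight(4);
}

function draw() {
  background(240);
  const wobble = 6 * sin(frameCount * 0.1); // amplitude of 6 px
  line(140, 160 + wobble, 180, 155 + wobble); // left eyebrow
  line(220, 155 - wobble, 260, 160 - wobble); // right eyebrow
}
```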

 

uncertain journey

Creators: Maryia Markhvida, Dror Margalit, Peter Zhang
Voice: Zeynep Elif Ergin

If we are lucky, we are born into the loving arms of our parents and for many years they guide us through life and help us understand the events around us. But at some point, sooner or later, life takes a turn and we are all eventually thrown into the chaos of this world. In these moments things often stop making sense and we have a hard time navigating the day-to-day. Eventually though, most of us adapt and figure out some way to go on.

Sound, in a way, is a chaotic disturbance of the air,  but somehow we learn how to make sense of it and even manipulate our voice and things around us to reproduce the strange frequencies. We start recognizing the patterns in the randomness and eventually derive deep meaning from it. 

This work is an interactive sound journey that uses the evolution of randomly generated frequencies to reflect the human experience of uncertainty and chaos. It tells Zeynep Elif Ergin’s story through a composition of computer-generated sounds, her voice, and interactive visuals:

 

Inspiration and Process

After our first Hypercinema lecture, I got very intrigued by the composition of sound in terms of signals and superpositions of sine waves. This is something I vaguely knew about from my engineering background (I worked a lot with earthquake wave signals in my PhD) but never got to actually play with, let alone create. I was also curious to hear what different sounds could be produced if I used different probability distributions (uniform, normal, lognormal) to generate the number of waves, the frequencies, and the amplitudes.

Once we formed a group with Dror and Pete, we started talking about what uncertainty and randomness meant to each one of us. We discussed how, when people face moments of high uncertainty, they are first thrown into absolute chaos, and then slowly they tend to embrace the uncertainty and shape the chaos into something that feels more familiar. We eventually arrived at a question: what would one’s journey through uncertain times sound like using randomly generated sounds?

All of us wanted this piece of work to be grounded in and driven by real human experience. In the words of Haley Shaw on creating soundscapes:

‘…even when going for goosebumps, the intended feeling should emerge from the story, not the design.’

The final piece is presented in an interactive web interface, which allows one to listen to Zeynep Elif Ergin’s story through a progression of randomly generated noises. The listener has the option of experiencing the story without the main subject (inspired by the removal of the main character in Janet Cardiff’s work) or overlaying her voice over the computer-generated soundscape. There is also a gradual evolution of the visuals.

Now a little bit about the technical side:

The first question was how one can randomly generate sound starting from scratch, i.e. from a blank Python Jupyter Notebook. After a quick conversation with my father about the physics of sound waves and superposition, I had an idea of what to do.

I started with generating a simple sine wave (formula below) and converting it into a .wav file.

$$ y = A \sin(2\pi f t + \phi) $$

This is the equation of a sine wave with phase angle ($\phi$), frequency ($f$), and amplitude ($A$), all of which were eventually randomized according to different probability distributions. Below is a sample of the first Python code and the first sounds I generated with only one frequency:
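A minimal reconstruction of that first snippet, assuming numpy and the standard-library wave module (the exact notebook code differed):

```python
# Generate one sine wave y = A*sin(2*pi*f*t + phi) and write it to a
# .wav file. A sketch of the idea, not the exact notebook code.
import numpy as np
import wave

sample_rate = 44100          # samples per second
duration = 2.0               # seconds
A, f, phi = 0.5, 440.0, 0.0  # amplitude, frequency (Hz), phase angle

t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
y = A * np.sin(2 * np.pi * f * t + phi)

# scale to 16-bit integers and write a mono .wav file
samples = (y * 32767).astype(np.int16)
with wave.open("sine.wav", "wb") as wav:
    wav.setnchannels(1)      # mono
    wav.setsampwidth(2)      # 16-bit samples
    wav.setframerate(sample_rate)
    wav.writeframes(samples.tobytes())
```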

Then I played around and generated many different sounds. Here are some things I tried:

    • Broke the duration down into many phases of varying speed, where each phase had a different sound;
    • Created “heartbeats” with low frequencies;
    • Superposed a range of 2-100 waves of varying frequency, amplitude, and phase angle to create a multi-dimensional sound (sketched after this list);
    • Tried uniform and normal distributions to randomly generate frequencies and sounds;
    • Generated random sounds out of musical note frequencies spanning 5 octaves;
    • Generated random arpeggios;
    • Limited the notes to C major to produce random arpeggios (this creates a happier tone).
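As a flavor of the superposition experiments, a sketch that stacks a random number of sine waves whose parameters come from different distributions (the distribution parameters are illustrative, not the ones I settled on):

```python
# Superpose a random number of sine waves with frequencies, amplitudes,
# and phases drawn from different probability distributions.
import numpy as np

rng = np.random.default_rng()
sample_rate, duration = 44100, 2.0
t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)

n_waves = rng.integers(2, 101)                           # 2-100 waves
freqs = rng.lognormal(mean=6.0, sigma=0.5, size=n_waves) # median ~400 Hz, long tail
amps = rng.normal(loc=1.0, scale=0.3, size=n_waves)      # clustered amplitudes
phases = rng.uniform(0, 2 * np.pi, size=n_waves)         # phase angles

y = sum(a * np.sin(2 * np.pi * f * t + p)
        for a, f, p in zip(amps, freqs, phases))
y /= np.abs(y).max()  # normalize so the stacked waves don't clip
```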

Here is an example of one of these generated sounds:

In the meantime, Dror and Peter recorded the interview with Zeynep Elif Ergin as well as additional ambient sounds around NYC, and worked in Adobe Audition to compose the final piece using the randomly generated sounds.

The last step was to create the interactive interface with p5.js [link to code], which gives the listener the option of playing only the ‘chaos track’ or overlaying the voice when the mouse is inside the center square. As the track plays, the uncertainty and chaos slowly resolve, both sonically and visually… but they are never quite gone.
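The overlay logic boils down to a hit test on the center square plus a volume ramp. A hedged sketch, assuming the two tracks are loaded with p5.sound (the file names are placeholders):

```javascript
// Loop the 'chaos track' continuously; fade the voice in only while
// the mouse is inside the center square. File names are placeholders.
let chaos, voice;

function preload() {
  chaos = loadSound('chaos.mp3');
  voice = loadSound('voice.mp3');
}

function setup() {
  createCanvas(600, 600);
  chaos.loop();
  voice.loop();
  voice.setVolume(0); // the voice starts silent
}

function draw() {
  background(20);
  rect(200, 200, 200, 200); // the center square
  const inside = mouseX > 200 && mouseX < 400 &&
                 mouseY > 200 && mouseY < 400;
  voice.setVolume(inside ? 1 : 0, 0.2); // 0.2 s ramp avoids clicks
}
```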

My first circuit, breadboard smoke, and the Jackson Switch

The Jackson Switch

I cannot believe how much I learned about circuits in just two days. All my life I was intimidated by the number of different parts that go on a breadboard, but alas, this week I started my journey into taming the beast. Over the last few days I created a short circuit, blew up an LED, and built a glove switch in tribute to the electric pop star, Michael Jackson. Here is a prototype of the Jackson Switch:

To create this switch, I used one of the circuit set-ups from the labs, with a 12V power source, a 5V voltage regulator, a 220 kOhm resistor, and a white LED. My father consulted on the circuit and was my assistant, doing the filming and lighting for the video.

Below is the filming and circuit set-up:

Image shows the electric circuit set-up for the Jackson switch

Even when using latex gloves under the white gloves, my body still conducted electricity (albeit with low current and high resistance)! The LED would light up slightly when I put the gloves on, even though the ‘switch’ fingers weren’t touching. How would I prevent that?

Week 1 Lab Reflection

Using the breadboard turned out to be much harder than I thought. The good news is that it got easier with practice. First of all, the working table became a mess very quickly:

There are a few important things that I learned:

  • using pliers to insert things into the breadboard would’ve saved me a lot of time.
  • nails and physical computing do not go well together.
  • making sure your wires are fully in the holes IS KEY. Most of the time, if something didn’t work, it was because of a connection problem.
  • I originally had a question as to why we put a resistor in front of an LED, but it got answered when my red LED blew up (evidence below):
  • short circuits are real and will start smoking and burning the breadboard 🙂
  • I need to invest in many tools (duct tape, scissors, pliers, etc.).

Questions (and failures):

  • When I put two LEDs in parallel (or in series), only the red one lit up, even though the voltage was the same. Why?
  • I was unsuccessful in measuring current (the multimeter read 0.0 the whole time).
  • What do the numbers on the potentiometer mean?
  • I did not fully understand two concepts: what a voltage divider is, and why we need to pair a constant resistor with a variable one.
     


And finally, my first working circuit:

week 1

Self-portrait

I wanted to incorporate some elements of a phoenix in my self-portrait, as it symbolizes the cyclical nature of demise and rebirth, something I seem to identify with lately. So I made a quick sketch and started coding:

sketch of a self-portrait

Using the p5.js editor, I first decided to create an outline of the portrait. I played around with the shape of the flame, mirroring the side parts and creating the middle one on top. The variables for the x and y positions were extremely helpful for moving the entire image around without changing the curveVertex() calls. I followed a similar process for the wings, although the sharp corners and changes of direction were harder to achieve. For the hair I used a combination of arc(), rect(), and quad(). For the face I used the curveVertex() and quad() functions; defining the curves was easier this time thanks to experience and symmetry. For the facial features I experimented with line(), ellipse(), circle(), and triangle(). This is the intermediate result:

this image shows the outline of the self-portrait without the color

Then I added colors and rearranged the shape (I didn’t like having a crown on my head!). I wanted to keep a simple color palette, accentuating the fire aspect of the phoenix. Here is the final version [code can be found here]:

final version of the self-portrait in p5.js

Reflections: one of the trickiest parts was figuring out how to create the more complex curves, like the flame and wings. While we hadn’t learned this in class yet, I couldn’t work without using variables, because I am too used to defining things parametrically and iterating rapidly (especially for positioning). I also really wanted to play around with curves, so I ended up using the curveVertex() function instead of curve(). I also realized that I cannot seem to get away from symmetry, something that I want to change in my next iterations.
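Mirroring is part of why the symmetry creeps in: with curveVertex() you can define one half of a shape and reflect it for free. A toy example of that kind of mirroring (the coordinates are arbitrary):

```javascript
// Draw one half of a curved shape with curveVertex(), then mirror it
// across the vertical axis with scale(-1, 1).
function setup() {
  createCanvas(400, 400);
  noFill();
}

function drawHalf() {
  beginShape();
  curveVertex(0, 100); // duplicated first point acts as a control point
  curveVertex(0, 100);
  curveVertex(40, 160);
  curveVertex(20, 240);
  curveVertex(60, 300);
  curveVertex(60, 300); // duplicated last point acts as a control point
  endShape();
}

function draw() {
  background(255);
  translate(width / 2, 0);
  drawHalf();   // right half
  scale(-1, 1);
  drawHalf();   // mirrored left half
}
```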

Computation and me

There are way too many things I want to use computation for; it is hard to pick. Previously, I mostly used programming for data analysis, simulations, and calculations, but finally I have the chance and space to go beyond that. I want to dive into the world of creative visualization, multi-sensory immersion, and human-digital interactivity. I also absolutely MUST try using machine learning and computer vision in creating interactive installations. This is something I have always wanted to do but never had the bandwidth nor a community for.

A recent memorable experience was attending an immersive night walk in a forest close to Whistler, Canada. Vallea Lumina took visitors on a 1 km snowy night trail, following the footsteps of two long-lost hikers into an enchanted forest. The multimedia night walk did a beautiful job of bringing a sense of magic into the forest, which made me feel like a child full of awe and wonder again. The light and sound installations throughout the walk created a real feeling of enchantment. I would love to create a similar immersive experience that supplements natural environments or day-to-day spaces with augmented elements (such as light, sound, and projections) to create a new reality.

words in sounds

Describing sounds with words can be tricky. But what about describing words with sounds?
In collaboration with Dipika, Anvay, and Vera

Ticking:

Humming:

Airy Sound:

Sound of Heat:

Sound of Betrayal:

Sound of Roundness:

Sound of Loneliness:

Sound of Red:

Sound of Joy:

Other explorations this week

Deep listening: [link]

‘The ear hears, the brain listens, the body senses vibrations’
‘To hear is the physical means, to listen is to give attention to what is perceived both acoustically and psychologically.’

– Pauline Oliveros 

Sound design: Haley Shaw [link]