MLNI – P5 Basics Week 2 (Sarah Tahir)

For this week’s assignment, I animated a simple ellipse to move in a circle around the screen. I used translate() to position it and gave it a random fill color. This was a simple sketch to familiarize myself with P5, which I had not used before. It is incredibly similar to Processing, which is very exciting, and I think there is much more I can do with it in the future.
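A minimal sketch along these lines (the canvas size, radius, and speed here are assumptions, not my exact values) might look like:

```javascript
let angle = 0;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
  // move the origin to the center of the canvas
  translate(width / 2, height / 2);
  // place the ellipse on a circular path around the new origin
  let x = cos(angle) * 100;
  let y = sin(angle) * 100;
  // give the ellipse a random fill color each frame
  fill(random(255), random(255), random(255));
  noStroke();
  ellipse(x, y, 40, 40);
  angle += 0.05;
}
```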

MLNI – Midterm Documentation (Sarah Tahir)

For my midterm I decided to use PoseNet to explore movement in the context of dance. The final product is a machine learning system that draws animated imagery over a dancer in real time.

Inspiration

I was inspired by motion capture dance performances and the use of imagery to augment and extend emotion. I was also very interested in the various aspects of the body that are used to form a performance. I broke these down into three categories: line, shape and pose. These are the categories I decided to explore with the use of artistic imagery.

Development

At first, I wanted to use PoseNet with Pts.js to create the images. Pts.js is a JavaScript library for creative coding that makes it easy to compose abstract points, lines, and shapes. However, integrating PoseNet with Pts.js proved very difficult: the illustrations would not map to the correct joints or limbs, and many of them stayed in one position instead of moving with the dancer.

After struggling with Pts.js, I decided to use P5 instead. Drawing complex shapes with beginShape()/endShape() was still difficult, so I simplified my approach quite a bit. I drew a skeleton from the PoseNet keypoints and then turned to shape and color.
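A rough sketch of how the skeleton drawing works, assuming the ml5.js wrapper for PoseNet and a prerecorded dance video (the file name dance.mp4 is a placeholder):

```javascript
let video;
let poses = [];

function setup() {
  createCanvas(640, 480);
  // load and loop the prerecorded dance clip
  video = createVideo('dance.mp4', () => video.loop());
  video.hide();
  // attach PoseNet to the video and collect pose estimates as they arrive
  const poseNet = ml5.poseNet(video, () => console.log('model ready'));
  poseNet.on('pose', (results) => { poses = results; });
}

function draw() {
  image(video, 0, 0, width, height);
  // connect each detected pair of keypoints with a line
  for (let i = 0; i < poses.length; i++) {
    const skeleton = poses[i].skeleton;
    for (let j = 0; j < skeleton.length; j++) {
      const [a, b] = skeleton[j];
      stroke(255);
      strokeWeight(2);
      line(a.position.x, a.position.y, b.position.x, b.position.y);
    }
  }
}
```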

For shape, I drew ellipses to imitate the body; for color, I filled the canvas with multiple dancers and an array of colored ellipses, as sketched below.
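The shape and color passes followed the same pattern. A simplified sketch, reusing the poses array from the skeleton example (the palette, confidence threshold, and size range are placeholder assumptions), would be called from draw():

```javascript
// placeholder palette; the real colors were chosen per dancer
const palette = ['#f94144', '#f9c74f', '#43aa8b', '#577590'];

function drawBodyEllipses() {
  noStroke();
  for (let i = 0; i < poses.length; i++) {
    const keypoints = poses[i].pose.keypoints;
    for (let j = 0; j < keypoints.length; j++) {
      const kp = keypoints[j];
      // only draw keypoints PoseNet is reasonably confident about
      if (kp.score > 0.3) {
        fill(palette[j % palette.length]);
        // random sizes stand in for the sound-driven sizes I had planned
        ellipse(kp.position.x, kp.position.y, random(20, 60));
      }
    }
  }
}
```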

Challenges and Future Work

I spent most of my time trying to work with Pts.js, so by the time I switched to P5 I had to simplify my ideas quite a bit. I also did not have time to integrate sound, which is what I wanted to use to control the size of the ellipses; instead, the ellipses are set to random sizes. I also wanted to make the shape portion look more like rippling water, but I did not have time to fine-tune it. If I were to do this again, I would spend more time outlining achievable designs, since this project relies so heavily on imagery.
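The sound idea I did not get to could look something like this, assuming the p5.sound library is loaded and the microphone is the input (all names and ranges here are hypothetical):

```javascript
let mic, amplitude;

function setup() {
  createCanvas(640, 480);
  // listen to the microphone and track its volume
  mic = new p5.AudioIn();
  mic.start();
  amplitude = new p5.Amplitude();
  amplitude.setInput(mic);
}

function draw() {
  background(0);
  // map the current volume (0–1) to an ellipse diameter
  const level = amplitude.getLevel();
  const size = map(level, 0, 1, 10, 200);
  ellipse(width / 2, height / 2, size);
}
```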

Furthermore, I would use webcam input. My original idea was to work with webcam input, but because I wanted to center the project around dance, it was easier to design over video playback. Webcam input would make it much more interactive.
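In P5 that change is mostly a matter of swapping the video source; a minimal sketch of the idea (PoseNet would then be attached to the capture element instead of the file):

```javascript
let video;

function setup() {
  createCanvas(640, 480);
  // live webcam feed in place of createVideo('dance.mp4')
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
}

function draw() {
  image(video, 0, 0, width, height);
}
```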

MLNI – Case Study Week 1 (Sarah Tahir)

One thing that I found very interesting in this week’s readings and video is a quote Golan Levin referenced in his TED talk: “The mouse is probably the narrowest straw you could try to suck all of human expression through” (Mountford). It stuck with me because the possibilities opened up by new interfaces for digital art are what excite me the most. I think machine learning, in a creative sense, is just the practice of using the full potential of the digital world as a medium.

In the Style of Klee by Parag Mital

Parag Mital is a computational artist who uses film, eye-tracking, EEG, and fMRI recordings to build models of audiovisual perception. His artistic practice explores these models through generative collage processes. His video In the Style of Klee is a great example of his work: he took real-time video filmed through a car window and used a machine learning model to transform it into the painting style of Paul Klee. This use of machine learning is really interesting because it takes the core idea behind the expressionist movement and puts it into practice. Expressionism depicts the world from a purely subjective perspective and distorts it in order to evoke moods or ideas. With this project you are able to see that distortion in real time and in real spaces. You can see the moods pass in front of you.