Week 02: ml5.js exploration

I really like the “StyleTransfer_Video” example, and I think it could have very interesting artistic uses. The video aspect is fun and interactive, and I like the stylistic possibilities.

https://ml5js.github.io/ml5-examples/p5js/StyleTransfer/StyleTransfer_Video/

How it works

The program takes a “style” image and uses a neural network trained on that image to restyle the webcam feed. The model applies the color scheme and texture patterns of the “style” image to whatever the webcam captures, producing a surprisingly faithful imitation of the style while keeping the webcam content identifiable.
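The sketch behind the example is quite compact. Below is a lightly adapted version of its core loop; the model path 'models/wave' is a stand-in for whichever pre-trained style model you point it at:

```javascript
// Minimal p5.js + ml5.js style-transfer loop, adapted from the
// StyleTransfer_Video example. 'models/wave' is a placeholder path
// to a pre-trained style model.
let style;
let video;
let resultImg;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.hide(); // we draw the stylized result, not the raw feed

  // Load the style model and bind it to the webcam stream
  style = ml5.styleTransfer('models/wave', video, modelLoaded);

  // Hidden <img> element that holds each stylized frame
  resultImg = createImg('');
  resultImg.hide();
}

function modelLoaded() {
  style.transfer(gotResult); // kick off the first frame
}

function gotResult(err, result) {
  if (err) return console.error(err);
  resultImg.attribute('src', result.src); // swap in the stylized frame
  style.transfer(gotResult); // immediately request the next one
}

function draw() {
  image(resultImg, 0, 0, 320, 240);
}
```

Each stylized frame arrives asynchronously, so the loop just requests the next frame as soon as the previous one lands, which is why the output video runs at whatever rate the model can keep up with.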

Here is the “style” image:

Here are some webcam screencaps of me using this style:

Potential uses

I think it could make a cool public art piece, especially in an area like M50 with a lot of graffiti and outdoor art. The webcam feed could be displayed on a large screen that places passersby into the art of the location, taking its stylistic inspiration from the pieces around it. I also think it could be a cool way to make “real-time animations,” using cartoon or anime styles to stylize webcam footage. If simple editing features were added to the code, such as slow-motion effects, jump cuts, and zooms, the program could become an interactive game that “directs” people and helps them create their own “animated film.”

I’m also curious how the program would behave if the “style” images were screencaps of the webcam itself. Would repeated screencaps of the webcam, fed back through it as the “style,” create trippy, psychedelic video? I would love to find out!

Week 1: Case Study Presentation (AI Arts)

For my case study I analyzed Google’s Deep Dream project, a fascinating intersection of data analysis and art that sprang from Google’s image-recognition research. The software was created by Google engineer Alexander Mordvintsev, building on a convolutional network trained to categorize images for the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in 2014. Google open-sourced the code, which opened up possibilities for developers to tweak it, tuning the network to recognize various patterns, faces, and objects with different levels of sensitivity. The software can also be run in reverse: instead of adjusting the network to fit the image, it adjusts the original image to raise the network’s confidence in the faces or patterns it detects. The network can keep adjusting the image, finding patterns and exaggerating them in each generation of the image, ad infinitum.
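To make that “reverse” step concrete: Deep Dream runs gradient ascent on the input image itself, nudging the pixels so that some chosen layer of the network activates more strongly. The sketch below is a rough illustration of that loop in TensorFlow.js, not Google’s original code (which was Python/Caffe); the MobileNet model URL and the layer name 'conv_pw_8_relu' are just illustrative choices.

```javascript
// A rough gradient-ascent sketch of the Deep Dream idea in
// TensorFlow.js. NOT Google's original implementation; the model
// and layer name below are illustrative stand-ins.
import * as tf from '@tensorflow/tfjs';

async function deepDream(inputImg, steps = 20, stepSize = 0.01) {
  // Any pre-trained image classifier works; here, MobileNet v1.
  const base = await tf.loadLayersModel(
    'https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v1_0.25_224/model.json'
  );

  // Truncate the classifier at an intermediate layer. Lower layers
  // exaggerate edges and textures; higher layers exaggerate whole
  // objects (the famous dog faces).
  const dream = tf.model({
    inputs: base.inputs,
    outputs: base.getLayer('conv_pw_8_relu').output,
  });

  // The "loss" is the mean activation of that layer. Gradient
  // *ascent* on the pixels makes the layer fire harder — the
  // "adjust the image to raise the recognition rate" step.
  const activation = (img) => dream.predict(img).mean();
  const gradFn = tf.grad(activation);

  let img = inputImg; // shape [1, 224, 224, 3], values in [0, 1]
  for (let i = 0; i < steps; i++) {
    img = tf.tidy(() => {
      const g = gradFn(img);
      // Normalize the gradient so the step size stays stable,
      // then nudge the pixels toward higher activation.
      const step = g.div(g.abs().mean().add(1e-8)).mul(stepSize);
      return img.add(step).clipByValue(0, 1);
    });
  }
  return img; // feed this back in for the next "generation"
}
```

Feeding the output back through the same loop is exactly the “each generation” effect described above: the patterns the layer already found get amplified again and again.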

The result is highly psychedelic imagery that can be tuned to emphasize certain patterns, such as dog or cat faces, with a popular version trained on “jeweled birds.” The software can be applied to video as well, as seen in Memo Akten’s personal adaptation of the code:

https://vimeo.com/132462576

Using https://deepdreamgenerator.com/, a version of the software made available online with various filters and settings, I experimented with my own photo (of me as a child) and ran it through several iterations to produce some surrealist Deep Dream images.

Link to my presentation: https://drive.google.com/file/d/1hXeGpJuCXjlElFr1kn5yZVW63Qcd8V5x/view?usp=sharing

Sources:

https://www.fastcompany.com/3048274/heres-what-googles-trippy-deep-dream-ai-does-to-a-video-selfie

https://www.fastcompany.com/3048941/why-googles-deep-dream-ai-hallucinates-in-dog-faces

https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

https://www.psychologytoday.com/us/blog/dreaming-in-the-digital-age/201507/algorithms-dreaming-google-and-the-deep-dream-project