Week 2 AI Arts: ml5.js Experiment (Ronan)

Technology is not enough. 

Consider the technology as a tool which, in itself, could do nothing.

Treat the technology as something that everyone on the team could learn, understand and explore freely.

– Red Burns

I found this quote in Daniel Shiffman’s introduction video on ml5.js. Red Burns was the chair of the ITP program at NYU, and her work has inspired a lot of people. I really like this quote because it reminds us not to get too caught up in shiny technology: as Daniel Shiffman said, “we have to remember that without human beings on the earth, what’s the point?” So whenever we look at algorithms or rules, we should also think about how we can make use of them and how we can teach the technology.

After exploring the ml5.js website a bit, I was really fascinated by all the possibilities it opened up to me. As someone who doesn’t have much knowledge about machine learning, I really like the way ml5.js provides lots of pre-trained models for people to play around with, and that I can also re-train a model myself.

For example, when I was playing around with the image classifier example, I realized that it returns both a label for the picture and a confidence score. When I watched Daniel Shiffman’s introduction to ml5.js, he used a puffin as the source image and the model didn’t get the right label. However, that was in 2018, and I wanted to try it out for myself to see whether the model had been updated. So I started to look at the example code provided on the ml5 website.

At first, the example code on the website didn’t work.

After examining the code, I realized there was a bug on the website. On line 32, it should be “Images” instead of “images”.

With that fixed, I can now see the picture. However, I still can’t see the label and confidence score, and there is another error on the page. I’m not sure how to fix this right now, but maybe I will look into it later once I know more about this framework.

Besides, I’ve also learned that the image dataset used in the imageClassifier() example is ImageNet.
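For reference, here is a minimal sketch of how the pieces of that example roughly fit together in p5.js and ml5.js (using the 2019-era ml5 callback API). The 'MobileNet' model is the pre-trained classifier (trained on ImageNet), while the image path and canvas sizes are placeholders rather than the exact values from the ml5 website:

```javascript
let classifier;
let img;

function preload() {
  // Pre-trained MobileNet model, trained on the ImageNet dataset.
  classifier = ml5.imageClassifier('MobileNet');
  // Placeholder path; swap in whatever picture you want to classify.
  img = loadImage('images/puffin.jpg');
}

function setup() {
  createCanvas(400, 450);
  classifier.classify(img, gotResult);
  image(img, 0, 0, 400, 400);
}

// ml5 hands back (error, results); results is an array of
// { label, confidence } objects sorted by confidence.
function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  fill(0);
  textSize(16);
  text(results[0].label, 10, 420);
  text(nf(results[0].confidence, 0, 2), 10, 440);
}
```

With something like this, the top label and its confidence score should appear under the picture once the classification finishes.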

Week 1: Case Study Presentation (AI Arts)

For my case study I analyzed the Google Deep Dream project, a fascinating intersection of data analysis and art that sprang from Google’s work on image recognition. Developed by Google engineer Alexander Mordvintsev on top of a convolutional network trained for the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2014, the software was originally intended to categorize images by detecting faces and patterns. The software was open-sourced, which opened up possibilities for developers to tweak it, training it to recognize various patterns, faces and images with different levels of sensitivity. The software can also be run in reverse: instead of simply classifying the image, the network adjusts the original image to produce a higher recognition score for the faces or patterns it detects. The network can keep adjusting the image, seizing on the patterns it finds and exaggerating them in each generation of the image, ad infinitum.
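To make the “run in reverse” idea more concrete, here is a minimal sketch of the gradient-ascent trick in TensorFlow.js (the original Deep Dream runs on Inception/GoogLeNet in Python). The model URL, layer name, and step settings are assumptions for illustration, not the actual Deep Dream code:

```javascript
import * as tf from '@tensorflow/tfjs';

// Repeatedly nudge an input image so that one layer of a pre-trained
// convnet responds to it more and more strongly.
async function dream(imgTensor, steps = 20, stepSize = 0.01) {
  // A small pre-trained MobileNet in layers format (assumed URL).
  const net = await tf.loadLayersModel(
    'https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v1_0.25_224/model.json');
  const layer = net.getLayer('conv_pw_13_relu'); // assumed layer name
  const features = tf.model({inputs: net.inputs, outputs: layer.output});

  // The quantity to exaggerate: how strongly that layer reacts to the image.
  const activation = (img) => features.predict(img).mean();
  const gradFn = tf.grad(activation);

  let img = imgTensor; // shape [1, 224, 224, 3]
  for (let i = 0; i < steps; i++) {
    const g = gradFn(img);
    // Move the pixels in the direction that excites the layer even more;
    // feeding the result back in again and again gives the "dream" look.
    img = img.add(g.div(g.abs().mean().add(1e-8)).mul(stepSize));
  }
  return img;
}
```

Running the output through the same loop again, or at several scales, is what produces the endlessly exaggerated patterns described above.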

The result is highly psychedelic imagery that can be tuned so that certain patterns are detected, such as dog or cat faces, with one popular version trained for “jeweled birds.” The software can be applied to video as well, as seen in Memo Akten’s own adaptation of the code:

https://vimeo.com/132462576

Using https://deepdreamgenerator.com/, a version of the software made available online with various filters and settings, I experimented with my own photo (of me as a child) and ran it through several iterations to produce some surrealist Deep Dream images.

Link to my presentation: https://drive.google.com/file/d/1hXeGpJuCXjlElFr1kn5yZVW63Qcd8V5x/view?usp=sharing

Sources:

https://www.fastcompany.com/3048274/heres-what-googles-trippy-deep-dream-ai-does-to-a-video-selfie

https://www.fastcompany.com/3048941/why-googles-deep-dream-ai-hallucinates-in-dog-faces

https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

https://www.psychologytoday.com/us/blog/dreaming-in-the-digital-age/201507/algorithms-dreaming-google-and-the-deep-dream-project

Sophia Crespo: Trauma Doll | aiarts.week01

Sophia Crespo: Trauma Doll

A short intro.

Sophia Crespo is a media and net artist based in Berlin. Many of her works are driven by an interest in bio-inspired technologies and human-machine boundaries.

Trauma Doll, a project started in 2017, is an algorithm-powered doll that “suffers” from PTSD, anxiety, depression, and other mental health issues. “The whole idea is playing with pattern recognition combining every field possible—and that’s what Trauma Doll does, she sees patterns everywhere in the web and connects them,” Crespo explains. “It’s up to the consumer whether they see the patterns or not.”

With a growing collection of generated collages, Trauma Doll brings forward forms of expression that tap into a larger societal discussion of how our mental and emotional landscapes are increasingly influenced by digital technologies.

Week 1 AI Arts: Research on face-swapping app Zao (Ronan)

Click here to see the slides.

What is it?

Zao is a face-swapping app that uses clips from films and TV shows, convincingly changing a character’s face by using selfies from the user’s phone.

How does it work?

Upload a photo, and the app will swap DiCaprio’s face with the user’s in a 30-second mashup of clips from his films.

My thoughts? Privacy and Security Issues

 1. What is the company doing with the photos?

Zao’s original user agreement said that people who uploaded their images had agreed to surrender the intellectual property rights to their faces and to allow their images to be used for marketing purposes.

2. WeChat, China’s ubiquitous messaging service and social media platform, banned links to Zao, citing security risks.

3. Smile to Pay facial-recognition system

Deepfake?

1. According to Wikipedia, Deepfake (a portmanteau of “deep learning” and “fake”) is a technique for human image synthesis based on artificial intelligence.

2. It uses a machine learning technique known as a generative adversarial network (GAN); a toy sketch of the idea appears after this list.

Voice Deepfake:

Thieves stole over $240,000 by using voice-mimicking software to trick a company’s employee.

3. The academic research on Deepfake:

The “Synthesizing Obama” program, published in 2017, modifies video footage of former President Barack Obama to depict him mouthing the words contained in a separate audio track. The project lists as its main research contribution a photorealistic technique for synthesizing mouth shapes from audio.

4. Abuses of Deepfake:

 – used to create fake celebrity pornographic videos.

 – used to create fake news and malicious hoaxes.

 – used to misrepresent well-known politicians on video portals or chatrooms
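As promised above, here is a toy sketch of the generator-versus-discriminator setup behind GANs, written in TensorFlow.js. The vector sizes, layer widths, and training details are made-up toy values; real deepfake models are far larger and work on video frames rather than short vectors:

```javascript
import * as tf from '@tensorflow/tfjs';

// Generator: turns random noise into a fake sample (here just a 64-number vector).
const generator = tf.sequential({layers: [
  tf.layers.dense({inputShape: [16], units: 32, activation: 'relu'}),
  tf.layers.dense({units: 64, activation: 'tanh'}),
]});

// Discriminator: guesses whether a sample is real (1) or generated (0).
const discriminator = tf.sequential({layers: [
  tf.layers.dense({inputShape: [64], units: 32, activation: 'relu'}),
  tf.layers.dense({units: 1, activation: 'sigmoid'}),
]});
discriminator.compile({optimizer: 'adam', loss: 'binaryCrossentropy'});

// Combined model: noise -> generator -> (frozen) discriminator.
// Training it pushes the generator toward samples the discriminator calls real.
discriminator.trainable = false;
const noise = tf.input({shape: [16]});
const verdict = discriminator.apply(generator.apply(noise));
const combined = tf.model({inputs: noise, outputs: verdict});
combined.compile({optimizer: 'adam', loss: 'binaryCrossentropy'});

async function trainStep(realBatch) {
  const n = realBatch.shape[0];
  const fakeBatch = generator.predict(tf.randomNormal([n, 16]));

  // 1) Teach the discriminator to tell real from fake.
  await discriminator.trainOnBatch(realBatch, tf.ones([n, 1]));
  await discriminator.trainOnBatch(fakeBatch, tf.zeros([n, 1]));

  // 2) Teach the generator to fool the (now frozen) discriminator.
  await combined.trainOnBatch(tf.randomNormal([n, 16]), tf.ones([n, 1]));
}
```

The tug-of-war is the whole point: the discriminator keeps getting better at spotting fakes, which forces the generator to produce ever more convincing ones.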

Week 1 Artificial Intelligence Arts Assignment, Cassie Ulvick

Case Study: New Nature by Marpi

Click here for presentation slides

New Nature was an exhibit I visited this past year at ARTECHOUSE, a technology-focused art gallery in Washington, DC. It was created by digital artist Marpi as his first large-scale solo exhibition and was inspired by the biology, ecology and underlying mathematics of the natural world.

About the Artist

Marpi is a Polish digital artist based in San Francisco who focuses on 3D worlds, AR and VR, interactive art and storytelling. He is interested in works where viewers have the opportunity to participate in the creation of the artwork, which he accomplishes by making his pieces interactive, scalable and multiplatform.

About the Work

New Nature was essentially a representation of different creatures or organisms, rendered in a very mathematical, geometric and almost futuristic visual aesthetic.

The main room of the exhibition included large screens displaying a giant creature that visitors could interact with by using an app on their smartphones to feed it. From there, the viewer was able to see how the creature moved and interacted with the food it was fed.

Another section of the exhibit, my favorite part, included smaller screens with smaller individual creatures. Each screen had a Kinect sensor attached to the bottom that would detect how the user moved their hand, visually displaying their hand and its interactions with the creature on the screen.

The exhibit incorporated machine learning so that the more the audience interacted with the creatures, the more complex the creatures’ behaviors became.

Overall, New Nature aimed to explore the intersection between the stiffness of technology and the more fluid character of the natural world. The implementation of machine learning supported Marpi’s goal: it let him give the creatures in his artwork more realistic behavioral patterns, so that they could learn and adapt just as a real organism would.

Sources

  • https://www.dc.artechouse.com/new-nature
  • https://www.marpi.studio/exhibitions/new-nature