Week 2 AI Arts: ml5.js Experiment (Ronan)

Technology is not enough. 

Consider the technology as a tool which, in itself, could do nothing.

Treat the technology as something that everyone on the team could learn, understand and explore freely.

– Red Burns

I found this quote in Daniel Shiffman’s introduction video on ml5.js. Red Burns was the chair of the ITP program at NYU, and her work has inspired many people. I really like this quote because it reminds us not to get too caught up in shiny technology: as Daniel Shiffman put it, “we have to remember that without human beings on the earth, what’s the point?” So whenever we look at algorithms or rules, we should also think about how we can make use of them and how we can teach the technology to others.

After exploring the ml5.js website a bit, I was fascinated by all the possibilities it opened up. As someone without much knowledge of machine learning, I really like that ml5.js provides lots of pre-trained models for people to play around with, and that I can also re-train a model myself.

For example, when I was playing around with the image classifier example, I realized that it returns both a label for the picture and a confidence score. When I watched Daniel Shiffman’s introduction to ml5.js, he used a puffin as the source image and the model didn’t return the right label. However, that was in 2018, and I wanted to try it myself to see whether the model had been updated. So I started looking at the example code provided on the ml5 website.
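According to ml5’s documentation, the classifier hands its callback an array of objects, each with a `label` and a `confidence` between 0 and 1, sorted from most to least confident. A small helper like this one (the sample data is made up for illustration) shows how the top label and score can be read out:

```javascript
// Shape of the results array that ml5's imageClassifier passes to its
// callback: each entry has a label and a confidence between 0 and 1.
// This sample data is invented for illustration.
const sampleResults = [
  { label: 'puffin', confidence: 0.87 },
  { label: 'albatross', confidence: 0.08 },
  { label: 'goose', confidence: 0.03 },
];

// Return a display string for the most confident classification.
function topResult(results) {
  const best = results[0]; // ml5 sorts results by confidence, highest first
  const percent = (best.confidence * 100).toFixed(1);
  return `${best.label} (${percent}%)`;
}

console.log(topResult(sampleResults)); // → "puffin (87.0%)"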

At first, the example code didn’t work:

After examining the code, I realized there was a bug on the website: on line 32, it should be “Images” instead of “images”.

Now I can see the picture, but I still can’t see the label and confidence score, and there is another error on the page. I’m not sure how to fix it right now, but maybe I will look into it later once I know more about this framework.

Besides, I’ve also learned that the image dataset used in the imageClassifier() example is ImageNet.
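For reference, here is a minimal sketch of how the pieces fit together, based on ml5’s documented pre-1.0 imageClassifier API and standard p5.js conventions. This is my own reconstruction, not the website’s actual example: the image path is a placeholder, and the path’s capitalization must match the real folder name (“images” vs “Images”), since URLs are case-sensitive on most web servers.

```javascript
// Minimal p5.js + ml5.js sketch (runs in the browser with both libraries
// loaded via <script> tags). The image path is a placeholder.
let classifier;
let img;

function preload() {
  classifier = ml5.imageClassifier('MobileNet'); // pre-trained on ImageNet
  img = loadImage('images/bird.jpg');            // placeholder path
}

function setup() {
  createCanvas(400, 400);
  image(img, 0, 0, width, height);
  classifier.classify(img, gotResult);
}

// ml5 (pre-1.0) uses error-first callbacks.
function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // results is sorted by confidence; show the top label and score.
  createP(`Label: ${results[0].label}`);
  createP(`Confidence: ${results[0].confidence.toFixed(4)}`);
}
```

Since `ml5.imageClassifier()` loads the model asynchronously, calling it inside p5’s `preload()` keeps `setup()` from running before everything is ready.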

Week 1 AI Arts: Research on face-swapping app Zao (Ronan)

Click here to see the slides.

What is it?

Zao is a face-swapping app that convincingly replaces a character’s face in clips from films and TV shows with a selfie taken from the user’s phone.

How does it work?

Upload a photo, and the app swaps DiCaprio’s face with the user’s in a 30-second mashup of clips from his films.

My thoughts? Privacy and Security Issues

 1. What is the company doing with the photos?

Zao’s original user agreement said that people who uploaded their images agreed to surrender the intellectual property rights to their face and to allow their images to be used for marketing purposes.

2. WeChat, China’s ubiquitous messaging and social media platform, banned links to Zao, citing security risks.

3. Could leaked face data be used to fool facial-recognition payment systems such as Alipay’s “Smile to Pay”?

Deepfake?

1. According to Wikipedia, Deepfake (a portmanteau of “deep learning” and “fake”) is a technique for human image synthesis based on artificial intelligence.

2. It uses a machine learning technique known as a generative adversarial network (GAN).

Voice Deepfake:

Thieves stole over $240,000 by using voice-mimicking software to trick a company’s employee.

3. The academic research on Deepfake:

“Synthesizing Obama” program, published in 2017, modifies video footage of former President Barack Obama to depict him mouthing the words contained in a separate audio track. The project lists as the main research contribution its photorealistic technique for synthesizing mouth shapes from audio.

4. Abuses of Deepfake:

 – used to create fake celebrity pornographic videos.

 – used to create fake news and malicious hoaxes.

 – used to misrepresent well-known politicians on video portals or in chatrooms.