Week 01: Magenta Studios

Magenta Studios

Slides: https://drive.google.com/open?id=1fXMxelSqzy15b-pwVr40K46TJRxdlGAo48KnkDRgK34

https://magenta.tensorflow.org/studio

DRUMIFY 0 – Document 2 Chords progression

Magenta is an open-source machine learning project created to generate art in visual and audio form. Through it, the team has released a variety of programs that use algorithms and deep learning to create art pieces.

Magenta Studios is a music-creation tool that helps artists further their process of making music. It can be used as a plug-in for Ableton (a music production program), or it can run on its own as a standalone application.

I decided to download the standalone version to test it, and the results were amazing. I uploaded a chord progression as a “.midi” file into Drumify, a program that creates drum loops based on the chords and melodies you feed it. The results were very interesting: it managed to create a drum loop that almost felt like a live performance.
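To make the MIDI-in, drums-out workflow concrete, here is a minimal sketch of the shape of what Drumify does. This is not Magenta's actual model (Drumify uses a trained neural network); it is a hand-written stand-in that reads chord onset times and emits drum events, just to illustrate the pipeline. The function name and the naive pattern rules are my own invention.

```python
# Illustrative sketch of a Drumify-style pipeline: chord onsets in,
# drum events out. NOT Magenta's learned model - just a toy mapping.

KICK, SNARE, HIHAT = 36, 38, 42  # General MIDI percussion note numbers

def drumify(chord_onsets, beats_per_bar=4):
    """Map chord onset beats to a naive drum loop: a sorted list of
    (beat, drum_note) events."""
    hits = [(beat, KICK) for beat in chord_onsets]   # kick follows the chords
    last_beat = max(chord_onsets) + beats_per_bar
    for beat in range(last_beat):
        if beat % beats_per_bar == 2:                # backbeat snare
            hits.append((beat, SNARE))
        hits.append((beat, HIHAT))                   # hats on every beat...
        hits.append((beat + 0.5, HIHAT))             # ...and every off-beat
    return sorted(hits)

# A four-bar progression, one chord per bar (onsets at beats 0, 4, 8, 12)
loop = drumify([0, 4, 8, 12])
```

The real model instead learns these rhythmic relationships from recordings of human drummers, which is why its output can feel like a live performance rather than a grid like this one.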

Although the results were amazing, the creation of something like this raises the question of what authentic music is in the modern age. If we can create music that is indistinguishable from live music with the help of AI and machine learning, what will happen to the music industry?

Week 1: Case Study Presentation (AI Arts)

For my case study I analyzed the Google Deep Dream project, a fascinating intersection of data analysis and art that sprang from Google's work on image recognition. Developed by Google engineer Alexander Mordvintsev on top of a network trained for the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in 2014, the software was intended to categorize images by recognizing faces and patterns. Because the software was open source, developers could tweak it, teaching it to recognize various patterns, faces and images with different levels of sensitivity. The software can also be run in reverse: instead of classifying an image, the network adjusts the original image to strengthen the recognition of the faces or patterns it detects. It can keep adjusting the image, finding patterns and exaggerating them in each generation of the image, ad infinitum.
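The "run in reverse" step is gradient ascent on the input image. A drastically simplified sketch, under toy assumptions: the "network" here is a single hand-coded pattern detector (a dot product with a template slid across a 1-D "image"), not a deep net, but the loop is the same idea Deep Dream applies layer by layer.

```python
# Toy Deep Dream: repeatedly nudge the image so a pattern detector
# fires more strongly. The detector is a simple sliding dot product.

def activation(image, template):
    """Total detector response summed over every position."""
    w = len(template)
    return sum(
        sum(t * image[i + j] for j, t in enumerate(template))
        for i in range(len(image) - w + 1)
    )

def dream_step(image, template, lr=0.1):
    """One gradient-ascent step. Because activation is linear in the
    pixels, d(activation)/d(pixel) is the sum of template weights
    overlapping that pixel."""
    grad = [0.0] * len(image)
    w = len(template)
    for i in range(len(image) - w + 1):
        for j, t in enumerate(template):
            grad[i + j] += t
    return [p + lr * g for p, g in zip(image, grad)]

image = [0.0] * 8
template = [1.0, -1.0]          # detector for a "falling edge" pattern
for _ in range(10):
    image = dream_step(image, template)
# each iteration exaggerates the pattern the detector looks for
```

In the real system the gradient flows back through millions of learned weights, so the exaggerated "patterns" are the dog faces, eyes and swirls the network learned from its training data.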

The result is highly psychedelic imagery that can be tuned so that certain patterns are detected, such as dog or cat faces, with a popular version created for “jeweled birds.” The software can be applied to video as well, as seen in Memo Akten’s personalized code:

https://vimeo.com/132462576

Using https://deepdreamgenerator.com/, a version of the software made available online with various filters and settings, I experimented with my own photo (of me as a child) and ran it through various iterations to produce some surrealist Deep Dream images.

Link to my presentation: https://drive.google.com/file/d/1hXeGpJuCXjlElFr1kn5yZVW63Qcd8V5x/view?usp=sharing

Sources:

https://www.fastcompany.com/3048274/heres-what-googles-trippy-deep-dream-ai-does-to-a-video-selfie

https://www.fastcompany.com/3048941/why-googles-deep-dream-ai-hallucinates-in-dog-faces

https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

https://www.psychologytoday.com/us/blog/dreaming-in-the-digital-age/201507/algorithms-dreaming-google-and-the-deep-dream-project

Sophia Crespo: Trauma Doll | aiarts.week01

Sophia Crespo: Trauma Doll

A short intro.

Sophia Crespo is a media and net artist based in Berlin. Many of her works are driven by an interest in bio-inspired technologies and human-machine boundaries.

Trauma Doll, a project started in 2017, is an algorithm-powered doll that “suffers” from PTSD, anxiety, depression, and other mental health issues. “The whole idea is playing with pattern recognition combining every field possible—and that’s what Trauma Doll does, she sees patterns everywhere in the web and connects them,” Crespo explains. “It’s up to the consumer whether they see the patterns or not.”

With a growing collection of generated collages, Trauma Doll brings forward forms of expression that tap into a larger societal discussion of how our mental and emotional landscapes are increasingly influenced by digital technologies.

Week 1 AI Arts: Research on face-swapping app Zao (Ronan)

Click here to see the slides.

What is it?

Zao is a face-swapping app that uses clips from films and TV shows, convincingly replacing a character’s face with a selfie from the user’s phone.

How does it work?

Upload a photo and the app will swap an actor’s face (for example, Leonardo DiCaprio’s) with the user’s in a 30-second mashup of clips from his films.

My thoughts? Privacy and Security Issues

1. What is the company doing with the photos?

Zao’s original user agreement said that people who upload their images had agreed to surrender the intellectual property rights to their face and allow their images to be used for marketing purposes.

2. WeChat, China’s ubiquitous messaging service and social media platform, banned links to Zao, citing security risks.

3. Smile to Pay facial-recognition system

Deepfake?

1. According to Wikipedia, Deepfake (a portmanteau of “deep learning” and “fake”) is a technique for human image synthesis based on artificial intelligence.

2. It uses a machine learning technique known as a generative adversarial network (GAN).

Voice Deepfake:

Thieves stole over $240,000 by using voice-mimicking software to trick a company’s employee.

3. The academic research on Deepfake:

The “Synthesizing Obama” program, published in 2017, modifies video footage of former President Barack Obama to depict him mouthing the words contained in a separate audio track. The project lists its photorealistic technique for synthesizing mouth shapes from audio as its main research contribution.

4. Abuses of Deepfake:

 – used to create fake celebrity pornographic videos.

 – used to create fake news and malicious hoaxes.

 – used to misrepresent well-known politicians on video portals or chatrooms
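The generative adversarial network mentioned in point 2 pits a generator (which produces fakes) against a discriminator (which tries to tell real from fake). A minimal numeric sketch of one well-known result from the original GAN paper: for a fixed generator distribution p_g, the best possible discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)). The bin values below are made up purely for illustration.

```python
# Optimal GAN discriminator on discrete distributions:
#   D*(x) = p_data(x) / (p_data(x) + p_g(x))
# Values near 0.5 mean fakes are indistinguishable from real data.

def optimal_discriminator(p_data, p_g):
    """Per-bin optimal discriminator for two discrete distributions."""
    return [d / (d + g) if d + g > 0 else 0.5
            for d, g in zip(p_data, p_g)]

real = [0.1, 0.4, 0.4, 0.1]   # true data distribution over 4 bins
fake = [0.4, 0.1, 0.1, 0.4]   # an untrained generator's distribution

print(optimal_discriminator(real, fake))   # far from 0.5: easy to tell apart
print(optimal_discriminator(real, real))   # all 0.5: fakes are perfect
```

Training pushes the generator until the discriminator is stuck at 0.5 everywhere, which is exactly why mature deepfakes are so hard to detect.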

Week 1 Artificial Intelligence Arts Assignment, Cassie Ulvick

Case Study: New Nature by Marpi

Click here for presentation slides

New Nature was an exhibit I visited this past year at ARTECHOUSE, a technology-focused art gallery in Washington, DC. It was created by digital artist Marpi as his first large-scale solo exhibition and was inspired by the biology, ecology and underlying mathematics of the natural world.

About the Artist

Marpi is a Polish digital artist based in San Francisco with a focus on 3D worlds, AR and VR, interactive art and storytelling. He is interested in creating works where viewers have the opportunity to participate in the creation of the artwork, accomplishing this through creating interactive, scalable and multiplatform pieces.

About the Work

New Nature was essentially a representation of different creatures and organisms, rendered in a very mathematical, geometric and almost futuristic visual aesthetic.

The main room of the exhibition included large screens displaying a giant creature that visitors could interact with by using an app on their smartphones to feed it. From there, the viewer was able to see how the creature moved and interacted with the food it was fed.

Another section of the exhibit, my favorite part, included smaller screens with smaller individual creatures. Each screen had a Kinect sensor attached to the bottom that would detect how the user moved their hand, visually displaying their hand and its interactions with the creature on the screen.

The exhibit incorporated machine learning so that the more the audience interacted with the creatures, the more complex behaviors the creatures would perform.

Overall, New Nature aimed to explore the intersection between the stiffness of technology and the more fluid character of the natural world. The implementation of machine learning supported Marpi’s goal: through machine learning, he was able to give the creatures in his artwork a more realistic behavioral pattern. His creatures were able to learn and adapt just as a real organism would.

Sources

  • https://www.dc.artechouse.com/new-nature
  • https://www.marpi.studio/exhibitions/new-nature