Week 12: Final Concept Documentation

Link to concept presentation slides: https://drive.google.com/file/d/1jgxeo-knGx7nLrnWBmPZYLOIwz4mdJ8k/view?usp=sharing

Background

For my final project, I’ll be exploring abstract art with Deepdream in two different mediums: video and print. I plan on creating a series of images (printed in HD onto posters so viewers can focus on the detail) and a zooming Deepdream video. I’ll create the original abstract designs with digital tools and then push them one step (or several) further into the abstract with the Deepdream algorithm.

I love creating digital art with tools such as Photoshop, Illustrator, datamoshing, and code such as p5.js; however, I’ve realized these tools have limits on how abstract and deconstructed an image can get. I’ve also been interested in Deepdream styles and their artistic possibilities for a while, and I love the infinite ways Deepdream can transform images and video. In the first week, I presented Deepdream as my case study, using different styles of the Deep Dream Generator tool to transform my photos.

Case study slides: https://drive.google.com/file/d/1hXeGpJuCXjlElFr1kn5yZVW63Qcd8V5x/view

I would love to take this exploration to the next level by combining my interest in abstract, digital art with the tools we’ve learned in this course.

Motivation

I’m very interested in playing with the amount of control digital tools give me: Photoshop and Illustrator give me the most control over the output of the image, coding lets me randomize certain aspects and generate new designs on its own, and datamoshing simply takes a few controls and “destroys” files on its own, generating glitchy images.

Created with p5.js using randomization techniques and Perlin noise:

Created with datamoshing, filters and layout in P5:

Abstract image created in Photoshop:

However, Deepdream takes away almost all of this predictability and control. While you are able to set certain guidelines, such as the “layer” or style, the octaves, the iterations (how many times Deepdream goes over the image), and the strength, it is impossible to predict what the algorithm will “see” and produce in the image, creating completely unexpected results that would be nearly impossible to achieve in digital editing tools.

References

I’m very inspired by this Deepdream exploration by artist Memo Akten, capturing the eeriness and infinity of Deepdream: https://vimeo.com/132462576

His article (https://medium.com/@memoakten/deepdream-is-blowing-my-mind-6a2c8669c698) explains in depth his fascination with Deepdream, something I share. As Akten writes, while the psychedelic aesthetic itself is mesmerizing, “the poetry behind the scenes is blowing my mind.” Akten details the process of Deepdream: the neural network recognizes various aspects of the reference image based on its previous training and confirms them by choosing a group of neurons and modifying “the input image such that it amplifies the activity in that neuron group,” allowing it to “see” more of what it recognizes. Akten’s own interest comes from how we perceive these images: as we recognize more in these Deepdream images, seeing dogs, birds, swirls, etc., we are doing the same thing as the neural network by reading deeper into them. This allows us to work with the neural network to recognize and confirm what it has found, creating a cycle that requires both AI technology and human interference: humans set the guidelines, direct the system to amplify its own activity, and then perceive the modified images. Essentially, Deepdream is a collaboration in which humans interfere with the system to see deeper into images and produce something together with the AI.
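To make the “amplify the activity in that neuron group” step concrete, below is a minimal sketch of one gradient-ascent step written with TensorFlow.js. It is an illustration of the idea rather than the actual code behind Deep Dream Generator; the model, the layer name, and the step size are all placeholders I chose for the example, and it assumes TensorFlow.js is loaded globally via its script tag, the same way ml5.js is loaded in our class sketches.

```
// Rough sketch of one DeepDream-style gradient-ascent step (TensorFlow.js).
// Assumes a pretrained tf.LayersModel and a hypothetical layer name.
function dreamStep(model, layerName, image, stepSize = 0.01) {
  // Sub-model that exposes the activations of the chosen layer (the "neuron group").
  const layer = model.getLayer(layerName);
  const activations = tf.model({ inputs: model.inputs, outputs: layer.output });

  // The quantity to maximize: the mean activation of that layer.
  const loss = (img) => activations.apply(img).mean();
  const gradFn = tf.grad(loss);

  return tf.tidy(() => {
    const grads = gradFn(image);
    // Normalize the gradient, then nudge the image toward higher activation,
    // so the network "sees" more of whatever this layer responds to.
    const normalized = grads.div(grads.abs().mean().add(1e-8));
    return image.add(normalized.mul(stepSize));
  });
}

// Running dreamStep many times (iterations), at several image scales (octaves),
// and with a larger stepSize (strength) corresponds to the knobs mentioned above.
```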

Final Project Concept – Katie

For my final project, I am focusing on the theme of human vs. computer perception. This is something I tried to explore through my midterm concept and initial plan of reconstructing humans from image classification of their parts. When I talked with Aven, I realized there were other, less convoluted ways of investigating this that would let the work of the computer stand out more. He showed me examples from the duo Shinseungback Kimyonghun that also follow these ideas; specifically, I was most inspired by the works FADTCHA (2013) and Cloud Face (2012), which both involve finding human forms in nonhuman objects.

fadtcha

Both works show cases in which a face-detection algorithm can detect human faces but humans cannot. Whether that is because the CAPTCHA images are very abstract or because the clouds are fleeting doesn’t matter; the difference is still exposed.

cloud-face

I wanted to continue with this concept by using a human body-detection algorithm to find human forms in different spaces where we cannot see them. Because I’m most familiar and comfortable with the ml5.js example resources, I started by using BodyPix to do some initial tests, which was interesting for seeing which parts of buildings get read as body segments, but the results weren’t clear. Then I tried using PoseNet to see where points of the body could be detected.

test1

test2

This was a little more helpful, but it still has a lot of flaws. These two images were the shots where the highest number of body points could be detected (other shots had anywhere from one to four points, with no shape resembling a human body), but this still doesn’t seem concrete enough to use as data. I plan on using a different method for body detection, as well as a better-quality camera, to continue working toward the final results.
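For reference, the PoseNet test above roughly follows the standard ml5.js pattern. Below is a minimal p5.js version of that kind of test; the image file name and the confidence threshold are placeholders, and it assumes p5.js and ml5.js are loaded via their usual script tags.

```
// Minimal p5.js + ml5.js sketch: run PoseNet once on a still photo of a building
// and mark any keypoints it claims to find. 'building.jpg' is a placeholder.
let img, poseNet;

function preload() {
  img = loadImage('building.jpg');
}

function setup() {
  createCanvas(img.width, img.height);
  image(img, 0, 0);
  poseNet = ml5.poseNet(modelLoaded);   // load the model (no video input)
  poseNet.on('pose', gotPoses);         // listen for detection results
}

function modelLoaded() {
  poseNet.singlePose(img);              // detect on the static image
}

function gotPoses(poses) {
  if (poses.length === 0) return;
  for (const kp of poses[0].pose.keypoints) {
    if (kp.score > 0.2) {               // only draw reasonably confident points
      fill(255, 0, 0);
      noStroke();
      ellipse(kp.position.x, kp.position.y, 10, 10);
    }
  }
}
```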

Final project concept: EB

Slides for the presentation: https://drive.google.com/open?id=1kigOl5IQ15UO5NGDD3uHTJ4GrR2BLlJ-OUPsYl3Zp-A

Background:

I have always been a fan of the sci-fi genre. As a child, I would daydream about flying cars and neon lights in the city. However, the more I grew up, the more I looked past the aesthetic of the genre and at the implications of a cyber-heavy society. Movies such as Blade Runner and Tron show a dystopian society in which the world is heavily shaped by the lack of human-to-human interaction, partly due to the effect of technology. The more unsupervised technological breakthroughs occur, the higher the chance of them affecting our day-to-day lives for the worse.

The cyberpunk sub-genre immediately reflects the disparity between humans and machines within our society. The dystopian aspect can be seen in a few common aesthetic themes of the sub-genre: dark skies, unnatural neon lights, near-empty streets, and so on.

These aesthetic choices reflect the dystopian society through a naturally dark and gloomy scene coupled with unnatural, man-made neon lights. They showcase how humans have deviated from the natural world and attempt to replicate it with their own creations.

Motivation:

I want to be able to show the eerie beauty of the genre to everyone else. I want people to see what I see when I walk around a megacity like Shanghai. The future depicted in these media is truly breathtaking; however, a glimpse past the veil of technology reveals something terrifying.

I want to use the knowledge of machine learning and AI to show my vision to others around me. The reason is that words sometimes fail me and I can’t clearly explain what I see. But thanks to what we have been learning in class, I can finally show what I mean.

Reference:

I will be using StyleTransfer to train models that generate the city skylines I want.

I also want to use BodyPix to separate the human bodies from the backgrounds through segmentation. By doing so, I will be able to apply two different style transfers, which will help bring my vision to life. However, to showcase this, I might need to take a video of the city in order to actually show what the model can do.
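Below is a minimal sketch of the segmentation step, assuming the ml5.js BodyPix API used in the class examples; the photo file name is a placeholder, and the idea is that each masked image would then be fed to its own style transfer model before recombining them.

```
// Minimal p5.js + ml5.js sketch: separate people from the background with BodyPix.
// 'street.jpg' is a placeholder city photo.
let photo, bodypix;

function preload() {
  photo = loadImage('street.jpg');
}

function setup() {
  createCanvas(photo.width, photo.height);
  image(photo, 0, 0);
  bodypix = ml5.bodyPix(modelReady);     // load the segmentation model
}

function modelReady() {
  bodypix.segment(photo, gotSegmentation);
}

function gotSegmentation(err, result) {
  if (err) { console.error(err); return; }
  // result.personMask keeps only the people; result.backgroundMask keeps only the city.
  // Each of these could be passed to a different ml5.styleTransfer model and recombined.
  image(result.backgroundMask, 0, 0, width, height);
}
```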

Week 12: Final Project Concept by Jonghyun Jee

Presentation slides can be viewed here.

Background

There is hardly any question about the fact that humans are the only species that creates art. Some might bring up examples to refute this: the painting elephants of Thailand, the male bowerbirds that build collage-like displays with sticks and bits of glass to impress females, bees that build structurally perfect honeycombs, and so on. Yes, they clearly show a kind of artistry; and yet, I cannot put them on the same level as artists. They have technique but not the thought, the essential core that makes art, art. What did these animal artists mean by their artworks? Marcel Duchamp displayed a toilet to question the traditional values of craftsmanship; Damien Hirst put a tiger shark in a vitrine filled with formaldehyde to visualize the physical impossibility of death. Many modern artists, including these two, present pieces that seemingly lack artistic technique in the traditional sense, but the philosophy underneath makes their work “artwork.”

In this sense, it is no wonder that the emergence of AI in the field of art has triggered such a myriad of controversies. Some people even envisioned a dystopian future of the art world in which most human artists are replaced with AI artists. This apprehension climaxed when the AI-generated portrait “Edmond de Belamy” was sold for $432,500 at a Christie’s auction last year. A year later, however, the hype seems to have faded. On November 15th, Obvious (the collective behind “Edmond de Belamy”) put another AI-generated painting up for a Sotheby’s auction, and the result turned out disappointing for them. Their new Ukiyo-e artwork was sold for $13,000, barely above the presale high estimate. This price crash is indicative of how skeptical the art world is of electronically created artworks. The staggering price of “Edmond de Belamy” was, in my opinion, mainly because it was the first AI-generated artwork to come under the auctioneer’s hammer. Their second Ukiyo-e piece was not that special anymore, and that was reflected exactly in its price. The artworks of the Obvious team, strictly speaking, are not “created” by artificial intelligence. It was humans who fed the algorithm its data. I would not say the AI is the artist here; the humans who collected the data and wrote the code are closer to the definition of an artist, and the AI was just a tool. No one would say the brush in a painter’s hand is an artist, even though it is what actually draws the painting.

Motivation

I intend to focus on the effectiveness of AI as an art tool, especially in terms of creating a piece of fine art. Using traditional art mediums such as paint and ink is not only time-consuming but mostly irreversible; we cannot simply press Ctrl+Z on a canvas. When I create an artwork, the biggest obstacle has always been my lack of technique; my enthusiasm cooled off whenever I could not visualize my thoughts, ideas, and impressions in the way I had envisioned.

The AI tools I have learned during the class can, in this sense, fill the technical gap in my art experiments. For my final project, I will use AI to color and morph my rough sketches and print out the generated outcomes. Juxtaposing my original sketches with the AI-modified versions, I want to show the process of how AI spices up my raw ideas.

Reference

Among the models we have covered in class, I will mostly use Deep Dream to explore the possibilities of AI as an art tool, with Style Transfer as an additional source of inspiration. To break down the whole process: the first step is to draw a sketch and take a photo of it; next, I will roughly color the drawing in Photoshop so the background does not remain totally blank (if there is nothing in the background, the AI might just fill it up with dull, repetitive patterns); last, I will feed my drawings to the algorithm and repeat the retouching process. I found that the Deep Style tool on this website is particularly powerful.

Below are the articles that gave me some insights:

AI Is Blurring the Definition of Artist

Has the AI-Generated Art Bubble Already Burst? Buyers Greeted Two Newly Offered Works at Sotheby’s With Lackluster Demand

With AI Art, Process Is More Important Than the Product

Week 12: Final Project Concept (Cassie)

Presentation slides

 

Background

Roman Lipski’s Unfinished collection is a never-ending series of paintings in which he uses neural networks to generate new artworks based on his own paintings. He started out by feeding a neural network images of his own paintings, which it used to generate new images. Lipski then used these generated images as inspiration to create new paintings before repeating the process again and again.

Lipski essentially created his own artificial muse and thus an infinite source of inspiration.

 

Motivation

As a person who likes to draw and paint, one of the struggles I know all too well is creative block. I’ll have the urge to create something but struggle with a style or the subject matter. Inspired by Roman Lipski’s work, for my final project I want to create my own artificial muse to give me ideas for new pieces of art. The end goal is to have at least one artwork created based on ideas from the artificial muse. This project is also an exploration of the creative outcomes that are possible when AI and artists work together as equals to produce artwork.

 

Reference

I will be using the style transfer technique we learned in class to train a model on a previous artwork of mine. As suggested by Professor Aven, I will be training multiple models at once to explore as many different visual inspirations as possible. The trained style will be transferred onto another one of my old artworks to create a combination of the two, hopefully producing something visually interesting enough to serve as inspiration for a new painting. A minimal sketch of the transfer step is shown below, followed by a graphic that illustrates the overall process I will follow:
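Here is that transfer step as a small p5.js + ml5.js sketch, following the pattern from the class StyleTransfer example. The model folder and image file name are hypothetical placeholders; in practice they would point to one of my trained style models and one of my old artworks.

```
// Minimal p5.js + ml5.js sketch: apply a trained style model to an old artwork.
// 'models/my_painting' and 'old_artwork.jpg' are placeholder paths.
let inputArt, style;

function preload() {
  style = ml5.styleTransfer('models/my_painting');   // the trained style model
}

function setup() {
  noCanvas();
  // Load the old artwork as a hidden image element, then restyle it once ready.
  inputArt = createImg('old_artwork.jpg', 'original artwork', '', transferStyle);
  inputArt.hide();
}

function transferStyle() {
  style.transfer(inputArt, (err, result) => {
    if (err) { console.error(err); return; }
    createImg(result.src, 'styled output');           // display the combined image
  });
}
```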