Final project concept: EB

slides to presentation: https://drive.google.com/open?id=1kigOl5IQ15UO5NGDD3uHTJ4GrR2BLlJ-OUPsYl3Zp-A

Background:

I have always been a fan of the sci-fi genre. As a child, I would daydream about flying cars and neon lights in the city. The more I grew up, however, the more I looked past the aesthetic of the genre and toward the implications of a cyber-heavy society. Movies such as Blade Runner and Tron depict dystopian societies shaped by a lack of human-to-human interaction, which is partly an effect of technology. The more technological breakthroughs happen unsupervised, the higher the chance they affect our day-to-day lives for the worse.

The cyberpunk sub-genre directly reflects the disparity between humans and machines in our society. Its dystopian side shows through a handful of recurring aesthetic themes: dark skies, unnatural neon lights, near-empty streets, and so on.

These aesthetic choices convey a dystopian society: a naturally dark and gloomy setting coupled with unnatural, man-made neon lights. They showcase how humans have drifted away from the natural world and attempt to replicate it with their own creations.

Motivation:

I want to show the eerie beauty of the genre to everyone else, and to let people see what I see when I walk around a megacity like Shanghai. The future depicted in these works is truly breathtaking; however, a glimpse past the veil of technology reveals something terrifying.

I want to use what I have learned about machine learning and AI to showcase my vision to others around me. Sometimes words fail me and I cannot clearly explain what I see; thanks to what we have been learning in class, I can finally show what I mean.

Reference:

I will be using Style Transfer to train the models that generate the city skylines I want.

I also want to use BodyPix to separate human bodies from the background through segmentation. By doing so, I will be able to apply two different style transfers, which will help bring my vision to life. To showcase this, however, I might need to take a video of the city to actually demonstrate what the model can do.
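To make this pipeline concrete, below is a minimal sketch of how the two style transfers could be composited through a BodyPix person mask. It assumes ml5.js and the TensorFlow.js BodyPix model are loaded via script tags, that the canvas has the same dimensions as the video, and that the checkpoint folders `models/person-style` and `models/city-style` are placeholders for models I would train myself.

```typescript
// Sketch only: stylize one webcam frame twice and composite through a BodyPix mask.
declare const ml5: any;      // provided by the ml5.js <script> tag
declare const bodyPix: any;  // provided by the @tensorflow-models/body-pix <script> tag

let personStyle: any; // style transfer meant for the segmented person
let cityStyle: any;   // style transfer meant for the skyline background

async function stylizeFrame(videoEl: HTMLVideoElement, canvas: HTMLCanvasElement) {
  const ctx = canvas.getContext('2d')!;

  // 1) Person mask: segmentation.data[i] is 1 for person pixels, 0 for background.
  //    (Canvas is assumed to match the video's width/height.)
  const net = await bodyPix.load();
  const segmentation = await net.segmentPerson(videoEl);

  // 2) Stylize the same frame with each model (wrapping ml5's callback API in Promises).
  const stylize = (model: any): Promise<HTMLImageElement> =>
    new Promise((resolve, reject) =>
      model.transfer(videoEl, (err: any, result: { src: string }) => {
        if (err) return reject(err);
        const img = new Image();
        img.onload = () => resolve(img);
        img.src = result.src;
      })
    );
  const [personImg, cityImg] = await Promise.all([stylize(personStyle), stylize(cityStyle)]);

  // 3) Composite: city style everywhere, then copy person-styled pixels through the mask.
  ctx.drawImage(cityImg, 0, 0, canvas.width, canvas.height);
  const out = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const tmp = document.createElement('canvas');
  tmp.width = canvas.width;
  tmp.height = canvas.height;
  const tctx = tmp.getContext('2d')!;
  tctx.drawImage(personImg, 0, 0, canvas.width, canvas.height);
  const personPixels = tctx.getImageData(0, 0, canvas.width, canvas.height);
  for (let i = 0; i < segmentation.data.length; i++) {
    if (segmentation.data[i] === 1) {
      for (let c = 0; c < 4; c++) {
        out.data[i * 4 + c] = personPixels.data[i * 4 + c];
      }
    }
  }
  ctx.putImageData(out, 0, 0);
}

// Usage: load the (placeholder) checkpoints, then stylize the current webcam frame.
// personStyle = ml5.styleTransfer('models/person-style', () => { /* model loaded */ });
// cityStyle   = ml5.styleTransfer('models/city-style',   () => { /* model loaded */ });
// stylizeFrame(document.querySelector('video')!, document.querySelector('canvas')!);
```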

Week 12: Final Project Concept by Jonghyun Jee

Presentation slides can be viewed here.

Background

There is hardly any question that humans are the only species that creates art. Some might bring up examples to refute this: the painting elephants of Thailand, the male bowerbirds that build collage-like displays out of sticks and bits of glass to impress females, the bees that build structurally perfect honeycombs, and so on. Yes, these clearly show a kind of artistry, and yet I cannot put them on the same level as artists. They have technique but not thought, the essential core that makes art, art. What did these animal artists mean by their artworks? Marcel Duchamp displayed a toilet to question the traditional values of craftsmanship; Damien Hirst put a tiger shark in a vitrine filled with formaldehyde to visualize the physical impossibility of death. Many modern artists, including these two, present pieces that seemingly lack artistic technique in the traditional sense, but the philosophy underneath makes their work “artwork.”

In this sense, it is no wonder that the emergence of AI in the field of art has triggered such a myriad of controversies. Some people even envisioned a dystopian future of the art world in which most human artists are replaced with AI artists. This apprehension peaked when the AI-generated portrait “Edmond de Belamy” sold for $432,500 at a Christie’s auction last year. A year later, however, the hype seems to have faded. On November 15th, Obvious, the collective behind “Edmond de Belamy,” put another AI-generated painting up for a Sotheby’s auction, and the result turned out disappointing for them. Their new Ukiyo-e artwork sold for $13,000, barely above the presale high estimate. This price drop is indicative of how skeptical the art world is of electronically created artworks. The staggering price of “Edmond de Belamy” was, in my opinion, mainly because it was the first AI-generated artwork to come under the auctioneer’s hammer. Their second piece, the Ukiyo-e, was no longer special, and that was reflected exactly in its price. Strictly speaking, the artworks of the Obvious team are not “created” by artificial intelligence: it was humans who fed the algorithm lots of data. I would not say the AI is the artist here. The humans who collected the data and wrote the code are closer to the definition of an artist; the AI was just a tool. No one would say the brush in a painter’s hand is the artist, even though it is what actually draws the painting.

Motivation

I intend to focus on the effectiveness of AI as an art tool, especially for creating fine art. Using traditional art media such as paint and ink is not only time-consuming but mostly irreversible; we cannot simply press CTRL+Z on a canvas. When I create an artwork, the biggest obstacle has always been my lack of technique; my enthusiasm cools off when I cannot visualize my thoughts, ideas, and impressions the way I had envisioned them.

The AI tools I have learned during the class can, in this sense, fill in the technical gaps in my art experiments. For my final project, I will use AI to color and morph my rough sketches and print out the generated outcomes. By juxtaposing my original sketches with their AI-modified versions, I want to show the process of how AI spices up my raw ideas.

Reference

Among the models we have covered in class, I will mostly use Deep Dream to explore the possibilities of AI as an art tool, with Style Transfer as an inspiration. To break down the whole process: the first step is to draw a sketch and take a photo of it; next, I will roughly color the drawing in Photoshop so the background does not remain totally blank (if there is nothing in the background, the AI might just fill it with dull, repetitive patterns); last, I will feed my drawings to the algorithms and repeat the retouching process. I found the Deep Style tool of this website particularly powerful.

Below are the articles that gave me some insights:

AI Is Blurring the Definition of Artist

Has the AI-Generated Art Bubble Already Burst? Buyers Greeted Two Newly Offered Works at Sotheby’s With Lackluster Demand

With AI Art, Process Is More Important Than the Product

Week 12: Final Project Concept (Cassie)

Presentation slides

 

Background

Roman Lipski’s Unfinished is a never-ending collection of paintings in which he uses neural networks to generate new artworks based on his own work. He started by feeding a neural network images of his own paintings, which it used to generate new images. Lipski then used these generated images as inspiration to create new paintings before repeating the process again and again.

Lipski essentially created his own artificial muse and thus an infinite source of inspiration.

 

Motivation

As a person who likes to draw and paint, one of the struggles I know too well is creative block. I’ll have the urge to create something, but struggle with a style or the subject matter. Inspired by Roman Lipski’s work, for my final project I want to create my own artificial muse to give me ideas for new pieces of art. The end goal is to have at least one artwork created from the ideas of the artificial muse. This project is also an exploration of the creative outcomes that are possible when AI and artists work together as equals to produce artwork.

 

Reference

I will be using the style transfer technique we learned in class to train a model on a previous artwork that I have done. As suggested by Professor Aven, I will be training multiple models at once to explore as many different visual inspirations as possible. The trained style will be transferred onto another one of my old artworks to create a combination of the two, hopefully producing something visually interesting enough to serve as inspiration for a new painting. This graphic illustrates the process I will follow:
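As a rough sketch of that step, the ml5.js style-transfer API we used in class could be applied to a still image roughly like this. The checkpoint folder and image path are placeholders, and p5.js and ml5.js are assumed to be loaded via script tags.

```typescript
// Sketch only: apply a style-transfer model trained on one of my paintings
// to a photo of an older artwork, and show both side by side.
declare const ml5: any;
declare function createCanvas(w: number, h: number): any;
declare function createImg(src: string, alt?: string): any;

let style: any;
let sourceArtwork: any; // the old artwork the trained style will be transferred onto

function setup() {
  createCanvas(500, 500);

  // Placeholder path to a photo of one of my older artworks.
  sourceArtwork = createImg('images/old-artwork.jpg', 'old artwork');
  sourceArtwork.size(500, 500);

  // Placeholder checkpoint folder produced by the style-transfer training from class.
  style = ml5.styleTransfer('models/my-painting-style', modelLoaded);
}

function modelLoaded() {
  // Transfer the trained style onto the old artwork and display the result.
  style.transfer(sourceArtwork, (err: any, result: { src: string }) => {
    if (err) return console.error(err);
    const stylized = createImg(result.src, 'stylized artwork');
    stylized.size(500, 500);
  });
}
```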

Week 12 Assignment: Final Concept Documentation – Crystal Liu

Background+Motivation

My final project is mainly inspired by re-creations of famous paintings, especially portraits. Some people replace the Mona Lisa’s face with Mr. Bean’s face, and the resulting painting is weird but interesting.

[Image: the Mona Lisa with Mr. Bean’s face]

Also, I found that some people like to imitate the poses of the figures in paintings, such as The Scream:

[Images: people imitating the pose from The Scream]

Therefore, I want to build a project that lets users add their own creativity to famous paintings and personalize them. It reminds me of my previous style-transfer assignment, for which I used a painting by Picasso to train the model so that everyone and everything in the video was rendered in Picasso’s style. Even though the result was not that good, it still showed a way to personalize a painting and to let users create their own versions of it.

My idea is that the user can trigger a famous painting by imitating the pose of the figure in that painting. For example, to trigger The Scream, the user needs to strike a pose like this: 😱. After the painting shows up, the user can choose to transfer the style of the live camera feed into the style of The Scream. To switch to another painting, the user just needs to strike the corresponding pose.

Reference

My reference is a project called Moving Mirror. The basic idea is that when the user makes a certain pose, the screen shows lots of images of people making the same or a similar pose.

What attracts me most is the connection between images and human poses. It demonstrates a new way of interaction between human and machine: users can use certain poses to trigger the things they want, and in my project that thing is the painting.

The second reference is style transfer. It reminds me of the artistic filters in Meituxiuxiu, a popular Chinese photo-beautification application. These filters can change the style of a picture to a sketch, watercolor, or crayon style.

But those filters only work on still pictures. I want to use a style-transfer model to apply this kind of filter to live video, so that users can see their style-changed motions in real time.
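Putting the two references together, a minimal sketch of the interaction could look like the following. It assumes p5.js and ml5.js are loaded via script tags; the “scream” pose rule (both wrists above the shoulders) and the model path `models/the-scream` are placeholders I would refine later.

```typescript
// Sketch only: PoseNet watches the webcam; a rough "scream" pose switches on a
// style-transfer model trained on The Scream and restyles the live video.
declare const ml5: any;
declare function createCanvas(w: number, h: number): any;
declare function createCapture(type: string): any;
declare function createImg(src: string, alt?: string): any;
declare const VIDEO: string;

let video: any;
let poseNet: any;
let screamStyle: any;
let output: any;             // <img> element that shows the stylized frames
let screamTriggered = false;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);

  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', checkPose);

  // Placeholder checkpoint: a style-transfer model trained on The Scream.
  screamStyle = ml5.styleTransfer('models/the-scream', video, () => console.log('Style ready'));
  output = createImg('', 'stylized video');
  output.size(320, 240);
}

function checkPose(results: any[]) {
  if (!results.length) return;
  const pose = results[0].pose;
  // Rough heuristic: both wrists above the shoulders counts as the "scream" pose.
  const raised =
    pose.leftWrist.y < pose.leftShoulder.y && pose.rightWrist.y < pose.rightShoulder.y;
  if (raised && !screamTriggered) {
    screamTriggered = true;
    transferFrame(); // start restyling the live video
  }
}

function transferFrame() {
  screamStyle.transfer((err: any, result: { src: string }) => {
    if (err) return console.error(err);
    output.attribute('src', result.src); // show the stylized frame
    transferFrame();                     // keep going for a real-time effect
  });
}
```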

Week 12 Assignment: Document Final Concept —— Lishan Qin

Background

When I was young, I was fascinated by the magical world created by J.K. Rowling in Harry Potter. She invented so many bizarre objects in that world of magic that I still find remarkable today. “The Daily Prophet,” a newspaper in the Harry Potter world, is the main inspiration for my final project. “The Daily Prophet” is a printed newspaper whose magic makes the images on the page appear to move. It inspires me to create an interactive newspaper with an “AI editor”: not only will the images on the newspaper update every second according to the video captured by the webcam, but the passages will also change according to those images. In my final project, I will use Style Transfer to make the user’s face appear on the newspaper and im2txt to change the words of the passages according to what the user is doing. The result will be an interactive newspaper that constantly reports the user’s actions.

       

Motivation

Even with the development of social media, which allows new information to spread almost every second, it still takes people behind the screen to collect, type, and post the news. If an AI editor could document, write, and edit the news for us, the newspaper’s ability to spread information in real time would be even better. Thus, I want to create an interactive, self-editing newspaper in which an AI writes news about the actions of the people it sees by generating sentences on its own.

Reference

I’ll refer to the im2txt model on GitHub (https://github.com/runwayml/p5js/tree/master/im2txt) to create the video captions. This model generates sentences describing the objects and actions the webcam captures. I will run the model in Runway, which will then send the resulting caption to the HTML page so that I can manipulate the output. Since some of the captions aren’t that accurate, I still need to find ways to improve on that.
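As a rough sketch of the Runway-to-browser step, the code below grabs a webcam frame, posts it to the locally hosted im2txt model, and writes the returned caption into the page. The port, endpoint path, and field names (`image`, `caption`) are assumptions based on the HTTP options Runway displays for hosted models and should be checked against Runway’s Network panel; `#headline` is a hypothetical element in the newspaper layout.

```typescript
// Sketch only: caption the current webcam frame via a Runway-hosted im2txt model.
async function captionFrame(videoEl: HTMLVideoElement): Promise<string> {
  // Encode the current frame as a base64 data URI.
  const canvas = document.createElement('canvas');
  canvas.width = videoEl.videoWidth;
  canvas.height = videoEl.videoHeight;
  canvas.getContext('2d')!.drawImage(videoEl, 0, 0);
  const frame = canvas.toDataURL('image/jpeg');

  // POST the frame to the model Runway is hosting locally (port/path assumed).
  const response = await fetch('http://localhost:8000/query', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image: frame }),
  });
  const result = await response.json();
  return result.caption as string; // e.g. "a person sitting at a table with a laptop"
}

// Usage: update the newspaper headline every few seconds.
// Assumes the page already contains a <video> webcam preview and a #headline element.
const video = document.querySelector('video') as HTMLVideoElement;
const headline = document.querySelector('#headline')!;
setInterval(async () => {
  try {
    headline.textContent = await captionFrame(video);
  } catch (err) {
    console.error('Is Runway running and the model hosted?', err);
  }
}, 3000);
```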