AI Arts Final Project Documentation – Artificial Muse (Cassie)

Final Presentation Slides

Project Description

This series of paintings was created from images generated by an artificial muse. Neural style transfer was used to merge past artworks into newly styled images. This serves as a technique for artists to gain new inspiration from their past works and to use AI as a tool for augmenting creativity.

Background/Inspiration

Roman Lipski’s Unfinished collection is a never-ending series of paintings in which he uses neural networks to generate new artworks based on his own paintings. He started out by feeding a neural network images of his own paintings, which it used to generate new images. Lipski then used these generated images as inspiration to create new paintings before repeating the process again and again. Lipski essentially created his own artificial muse, and thus an infinite source of inspiration.

Motivation

As a person who likes to draw and paint, one of the struggles I know all too well is creative block. I’ll have the urge to create something, but struggle to settle on a style or subject matter. Inspired by Roman Lipski’s work, for my final project I wanted to create my own artificial muse to generate ideas for new pieces of art. This project was also an exploration of the creative outcomes that are possible when an artist and an AI work together as equal partners to produce artwork.

Methodology

I used the style transfer technique we learned in class to train multiple models, each on one of my previous artworks. The trained styles were then transferred onto images of other past artworks, and this process was repeated in multiple layers until a series of three images was produced. These three images were used as inspiration to paint three new paintings, the final works.
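Concretely, the layering step just feeds each stylized output back in as the next content image. Here is a minimal sketch of that loop, with the caveat that it is not the class pipeline: we trained our own models, while this stand-in uses TF-Hub’s pretrained arbitrary-stylization model, and the file names are placeholders for my artworks.

```python
# Minimal sketch of layering styles. Assumes TF-Hub's pretrained
# arbitrary-stylization model as a stand-in for the models trained in class;
# file names are placeholders.
import tensorflow as tf
import tensorflow_hub as hub

hub_model = hub.load(
    'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2')

def load_image(path):
    # Decode to float32 in [0, 1] and add a batch dimension.
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]

# Start from one old artwork and repeatedly re-style it with others.
content = load_image('old_artwork.jpg')
for style_path in ('style_piece_1.jpg', 'style_piece_2.jpg', 'style_piece_3.jpg'):
    style = tf.image.resize(load_image(style_path), (256, 256))
    content = hub_model(tf.constant(content), tf.constant(style))[0]

tf.keras.preprocessing.image.save_img('muse_output.png', content[0].numpy())
```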

Experiments

The process I ended up following was different from my original plan. I originally wanted to follow a process like this:

However, as I was experimenting in the first phase, I realized that this would not be ideal for two reasons. The first was training time: each model takes about 20 hours to train, so training one model, painting from it, training the next model on that painting, and repeating the cycle for a total of three paintings would mean roughly 60 hours of training alone, all serialized around the painting time, far too long for the time we had. The second was that a trained model doesn’t necessarily produce visually interesting output, so it wouldn’t make sense to count on each new painting’s style transferring into a good model.

What I ended up doing instead was training eleven different models, one for each of eleven differently styled old art pieces. Here’s an example of a few of them transferred onto input from a webcam:

Some of them definitely turned out better than others. I didn’t like the bamboo one or the one on the top very much, for example: the bamboo one just turned everything green, while the other didn’t seem to do anything at all. However, I really liked the model on the bottom, since it actually altered the shapes a bit, so I decided to take that style and transfer it onto other previous artworks:

From these, I selected the images I liked the most and ran them through another style:

I then took two of these images that I liked the most and ran them through yet another style:

It was then down to picking between these two series:

I quite liked how the portrait turned out, because it looks like a robot to me, which fits the whole AI theme, but I ended up choosing the mountains: with the time constraints, I thought they would be quicker to paint. The mountain landscape was also a bit more abstract, which I liked.

Social Impact

I think the process I followed for this project is indicative of how AI can be used as a tool during the creation process rather than as the producer of the final generated product. Just as I was inspired by Lipski, I hope that other artists can take inspiration from this and experiment with AI to enhance their creative process.

Further Development

This project could honestly go on until I run out of old artworks, and even that would be nearly impossible given the number of combinations I could still test out. I am also curious how this project would have turned out if I had followed my original plan and continuously trained models on the new paintings I created, rather than the styles coming entirely from old artworks.

I could also develop it further by using different AI techniques, such as a DCGAN rather than style transfer. I am curious to see how a GAN could recreate my art; however, I am not sure how big a dataset it would need, and the process would be a bit different.
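For a sense of what that could involve, here is a hypothetical sketch of the generator half of such a DCGAN in Keras. The layer sizes and 64x64 output are assumptions for illustration, not a tested design, and my relatively small set of artworks would likely need heavy augmentation to train it.

```python
# Hypothetical DCGAN generator for 64x64 scans of my artworks; the layer
# sizes here are illustrative assumptions, not a tested architecture.
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=100):
    return tf.keras.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(8 * 8 * 256, use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((8, 8, 256)),
        # Three stride-2 transposed convolutions upsample 8x8 -> 64x64.
        layers.Conv2DTranspose(128, 5, strides=2, padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        # tanh keeps outputs in [-1, 1], the usual DCGAN convention.
        layers.Conv2DTranspose(3, 5, strides=2, padding='same', activation='tanh'),
    ])

generator = build_generator()
fake = generator(tf.random.normal([1, 100]))  # shape (1, 64, 64, 3)
```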

Week 12: Final Project Concept (Cassie)

Presentation slides


Background

Roman Lipski’s Unfinished collection is a never-ending series of paintings in which he uses neural networks to generate new artworks based on his own paintings. He started out by feeding a neural network images of his own paintings, which it used to generate new images. Lipski then used these generated images as inspiration to create new paintings before repeating the process again and again.

Lipski essentially created his own artificial muse and thus an infinite source of inspiration.


Motivation

As a person who likes to draw and paint, one of the struggles I know all too well is creative block. I’ll have the urge to create something, but struggle to settle on a style or subject matter. Inspired by Roman Lipski’s work, for my final project I want to create my own artificial muse to generate ideas for new pieces of art. The end goal is to have at least one artwork created based on ideas from the artificial muse. This project is also an exploration of the creative outcomes that are possible when an artist and an AI work together as equal partners to produce artwork.


Reference

I will be using the style transfer technique we learned in class to train a model on one of my previous artworks. As suggested by Professor Aven, I will be training multiple models at once to explore as many different visual inspirations as possible. The trained style will be transferred onto another one of my old artworks to create a combination of the two, hopefully producing something visually interesting enough to serve as inspiration for a new painting. This graphic illustrates the process I will follow:

Week 11: BigGAN – Cassie

For this week’s assignment I played around with BigGAN. In class we experimented with how truncation affects single images, but I wondered how it would affect a video animation morphing one object into another.

I wanted to morph an object into another one with a similar shape, so at first I chose guacamole and ice cream at truncation 0.1. This turned out to be…really disgusting looking.

Video: https://drive.google.com/file/d/1mAewM63SA8vT1rez3u7co2fDE3eP7d0C/view?usp=sharing

For some reason the guacamole didn’t really seem to be changing at all at the beginning, and when it did begin to morph into ice cream it just looked like moldy food. The ending ice cream picture also didn’t really look like ice cream.

So…I decided to change the objects to a cucumber and a burrito. This worked a lot better. I created four videos, one with truncation 0.2, one with 0.3, one with 0.4 and one with 0.5. I then put these into a collage format so you could see the differences between all of them:

Though it’s subtle, you can definitely tell that there is a difference between the four videos. In principle, the top left corner is 0.2, the top right corner is 0.3, the bottom left is 0.4, and the bottom right is 0.5; however, I am not super well-versed in video editing, and when I put this together in iMovie it was hard to tell which one was which.
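For reference, here is a rough sketch of how one of these morph videos can be generated. It assumes the TF-Hub BigGAN-256 module and the standard ImageNet label indices for cucumber (943) and burrito (965); the notebook we used in class may have differed.

```python
# Rough sketch of the cucumber -> burrito morph. Assumes the TF-Hub
# BigGAN-256 module and ImageNet indices 943 (cucumber) / 965 (burrito).
import numpy as np
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
from scipy.stats import truncnorm

tf.disable_v2_behavior()
module = hub.Module('https://tfhub.dev/deepmind/biggan-256/2')

z_dim = module.get_input_info_dict()['z'].get_shape().as_list()[1]
z_ph = tf.placeholder(tf.float32, [None, z_dim])
y_ph = tf.placeholder(tf.float32, [None, 1000])
trunc_ph = tf.placeholder(tf.float32, [])
samples = module(dict(z=z_ph, y=y_ph, truncation=trunc_ph))

def truncated_z(truncation, seed):
    # BigGAN samples z from a truncated normal scaled by the truncation value.
    rng = np.random.RandomState(seed)
    return truncation * truncnorm.rvs(-2.0, 2.0, size=(1, z_dim), random_state=rng)

def one_hot(index):
    y = np.zeros((1, 1000), dtype=np.float32)
    y[0, index] = 1.0
    return y

truncation = 0.3  # repeat with 0.2, 0.4, and 0.5 for the collage
z_a, z_b = truncated_z(truncation, seed=1), truncated_z(truncation, seed=2)
y_a, y_b = one_hot(943), one_hot(965)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    frames = []
    for t in np.linspace(0.0, 1.0, 30):
        # Interpolate linearly in both latent and class space.
        z = (1 - t) * z_a + t * z_b
        y = (1 - t) * y_a + t * y_b
        frames.append(sess.run(samples, {z_ph: z, y_ph: y, trunc_ph: truncation})[0])
# `frames` now holds 30 images in [-1, 1], ready to be written out as video.
```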

Week 10: Style Transfer (Cassie)

For this week’s assignment, I decided to train the style transfer model with one of Jackson Pollock’s paintings:

The reason I chose this painting, besides the fact that I like Jackson Pollock, is that when I was considering style transfer for my midterm project, Professor Aven mentioned that images with bright colors and very defined shapes work best. While this piece doesn’t really have very defined shapes, its colors are still quite distinct from one another.

After the model was trained, I loaded it into the style.js style-transfer code from Professor Aven’s GitHub to test the output through the webcam. This was the result:

The shapes generated were interesting, kind of like a honeycomb. The colors somewhat matched the source image, but it also seems like some new slightly different colors were generated. If I saw this image without knowing how it was made, I wouldn’t think that it had anything to do with Jackson Pollock, though.
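The test itself ran in the browser through ml5.js. As a rough Python stand-in for the same loop, one could capture webcam frames with OpenCV and stylize them with TF-Hub’s arbitrary-stylization model in place of the trained checkpoint; the Pollock file name here is a placeholder.

```python
# Rough Python stand-in for the ml5.js webcam demo: OpenCV capture plus
# TF-Hub's arbitrary-stylization model instead of the trained checkpoint.
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

hub_model = hub.load(
    'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2')

style = tf.io.decode_image(
    tf.io.read_file('pollock.jpg'), channels=3, dtype=tf.float32)[tf.newaxis, ...]
style = tf.image.resize(style, (256, 256))  # the model expects ~256px styles

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # BGR uint8 -> RGB float32 in [0, 1], batched.
    content = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    stylized = hub_model(tf.constant(content[np.newaxis]), style)[0]
    out = cv2.cvtColor(
        (np.clip(stylized[0].numpy(), 0.0, 1.0) * 255).astype(np.uint8),
        cv2.COLOR_RGB2BGR)
    cv2.imshow('stylized webcam', out)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```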

Now…what to do with this? I was really inspired by Roman Lipski’s Artificial Muse and the way he combines his own paintings with his algorithm so that the role of artist is split equally between human and machine. This whole style transfer process also reminded me a lot of when I was first learning how to draw and paint: my art teacher would always give us references that we would straight up copy to try and improve our own skills. Combining these two ideas, what would it look like if I tried to paint my own Jackson Pollock painting, and then showed that painting to the Pollock-trained style transfer model? What would the combination of a human-replicated Pollock painting and a machine-replicated Pollock style look like?

I first attempted (key word: attempted) to paint the Pollock painting on a small canvas:

I then held the painting up to the webcam with the trained model, which created this output:

The colors are a bit duller, and the strokes are smoother. However, the whole thing is kind of blurry and there is this faint bumpy grid pattern over the whole image. I kind of like these effects because they would be difficult to achieve with paint on canvas – they very much digitize the style.

Overall, this was an interesting experiment and I think this concept is something I would potentially want to further explore for the final project.

Midterm Documentation (Cassie)

Social Impact

The technical simplicity of this project shows that AI doesn’t have to be scary or complex to be useful: it can be a simple tool for artists looking to explore new digital mediums and create something visually interesting.

There is also a lot of discussion surrounding AI art in terms of who the artist is: is it the neural network, or the person who programmed the work? Many people treat this debate as black and white, believing the artist is either solely the neural network or solely the programmer. Whatever your opinion on that debate, I think this project is an example of AI art in which the programmer and the AI genuinely work together to create the outcome. It doesn’t have to be an all-or-nothing scenario: the point of AI is to help us achieve certain outcomes more easily, so why not treat it as a collaborator rather than as something that takes away human creativity?

Further Development

I can see this project taking two different routes if I were to develop it further. The first route is making it more user-friendly, so that this kind of art is more accessible to other people. In that case, a better interface would be necessary. The hover-to-start setup worked fine for me, but it might not be intuitive or useful for others. A countdown before the drawing process starts would make more sense, as would an option to save the completed piece or record it automatically rather than having to take a screen-capture video manually. It would also be good to make the artwork customizable from the interface itself, such as changing the colors, the size of the ellipses, or even the shape entirely, rather than having to edit the style.js code.

The second route would be to explore the concept further as a personal artistic exploration. This option is definitely more open-ended. I could apply more machine learning techniques; for example, I still really like the idea of AI generative art, so what if a GAN or DCGAN could make its own similar pieces based on these body-movement pieces? This is conceptually interesting to me because it’s like giving a neural network its own set of eyes: some machine is watching you and can predict your movements, converting the artwork into a statement on privacy in today’s digital world rather than just an exploration of body movement over time.

Full documentation

(with updated background + motivation for new concept): https://docs.google.com/document/d/1DGs7plWL98vslkEo1t7phG4EcR2uOVikXQ4AFmjzsZI/edit?usp=sharing