IML | Week09 Style Transfer – Quoey Wu

For this week’s assignment, we were asked to train a style transfer model ourselves and run some inference.

I chose a picture with colorful geometric shapes, so I named it “colorfulgeo”.

Training

For the training part, I simply followed the instructions. The main obstacle I met was in the setup step, because I originally ran everything on the login node. There were also some problems with unzip, which caused results like these.

Inference

After figuring out those problems, I successfully ran the task and finally got the trained model. First, I tried the style transfer on the webcam and took screenshots of myself like these:

 
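For anyone trying to reproduce this, here is a minimal p5.js sketch along the lines of the ml5 style transfer video example. The folder name models/colorfulgeo is only my guess at where the converted checkpoint would live, not the actual path from my setup.

let video;
let style;
let resultImg;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  // models/colorfulgeo is an assumed path to the converted checkpoint
  style = ml5.styleTransfer('models/colorfulgeo', video, modelLoaded);
  resultImg = createImg('', 'stylized webcam frame');
  resultImg.size(320, 240);
  resultImg.hide();
}

function draw() {
  image(resultImg, 0, 0, 320, 240); // draw the latest stylized frame
}

function modelLoaded() {
  transferStyle(); // start the transfer loop once the model is ready
}

function transferStyle() {
  style.transfer(gotResult); // stylize the current video frame
}

function gotResult(err, img) {
  if (err) {
    console.error(err);
    return;
  }
  resultImg.attribute('src', img.src); // show this frame, then request the next one
  transferStyle();
}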

Later, I replaced the video input with some images and got these results:

I think the transfer works better when the input has a clear main subject than when it contains only similar colors or patterns. It may also be because my original image is not very large and carries less information. Still, I now have a basic understanding of how style transfer works, and there remain many aspects to explore to improve the visual result.
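For reference, the still-image version only needs a single transfer call instead of the webcam loop. A minimal sketch, where the input element id and the model path are hypothetical:

const style = ml5.styleTransfer('models/colorfulgeo', modelLoaded);

function modelLoaded() {
  const input = document.getElementById('input'); // the photo to stylize
  style.transfer(input, (err, result) => {
    if (err) {
      console.error(err);
      return;
    }
    const output = document.createElement('img');
    output.src = result.src; // base64 image returned by ml5
    document.body.appendChild(output);
  });
}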

IML | Week 9: Training Style Transfer – Yufeng

Training

As I had already trained several models for the midterm, I only trained one this week. I trained it on my own desktop with a GTX 1080 on Ubuntu for two epochs. Some artifacts are quite visible, probably because of the small number of training epochs.

 

 Style Image

The style is based on a Blade Runner-style poster I found on Pinterest.

For the inferencing below, I also used another model that I trained in the weeks before.

 Image property of the Albright-Knox Art Gallery, Buffalo, NY.

The other style is Convergence (1952) by Jackson Pollock.

Inferencing

I reused the code from my midterm project to perform a multilayer style transfer. I decided to experiment with transferring between the two styles back and forth (a sketch of the loop appears at the end of this section).

Khrystyna’s World, #10103, by Todd Hido
 

The source image being transferred

Result

The fusion of the two styles is quite effective, but it converges to a stable result with small “chunky” style transfer artifacts.
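Here is a rough sketch of how such a back-and-forth loop could be written with two ml5 models. This is not my actual midterm code; the model folders, image id, and round count are all assumptions.

// Two trained models; the folder names are hypothetical.
const styleA = ml5.styleTransfer('models/bladerunner', checkReady);
const styleB = ml5.styleTransfer('models/convergence', checkReady);

const img = document.getElementById('source'); // the source photograph
const maxRounds = 6; // three passes through each style
let loaded = 0;
let rounds = 0;

function checkReady() {
  loaded += 1;
  if (loaded === 2) step(); // start only once both models are ready
}

function step() {
  if (rounds >= maxRounds) return;
  const model = rounds % 2 === 0 ? styleA : styleB; // alternate styles
  model.transfer(img, (err, result) => {
    if (err) {
      console.error(err);
      return;
    }
    rounds += 1;
    img.onload = step;    // continue once the new frame has loaded
    img.src = result.src; // feed the result back in as the next input
  });
}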


iML | Week09: Style Transfer – Ivy Shi

After this week’s class, I continued to explore style transfer by training a model and running inference with ml5. I followed the instructions for training on the Intel DevCloud and encountered several difficulties before successfully starting the training. Since my model is still training, I completed the inference part with pre-trained models in the meantime. It was quite informative to compare the outputs of different models, and once my own model is done, I would like to compare its results as well.

Training: 

In the very beginning, I had difficulties just setting up the environment, because I had downloaded other TensorFlow packages while working on the midterm project. The packages specified in environment.yml seemed to conflict with the existing ones; I suspect this was due to having old and new versions of the same package.

In addition, I kept getting a “prefix for a conda environment already exists” error, even though the environment did not appear in my conda env list. The error persisted no matter how many times I tried conda remove and started over. In the end, I edited the environment.yml file to rename the prefix path.

I was stuck on this for a while until Aven pointed me to the conda clean command. I removed unused packages and caches and also re-cloned the GitHub repository. After solving some other issues, I was finally able to create a new virtual environment for training this style transfer model.

Downloading the dataset was also an unpleasant experience: five or six times the download just stopped in the middle and had to be restarted. Eventually it ran without interruption and took about three hours to complete.

After solving all these problems and issues, I was able to start training. I specifically looked for a picture with clear geometric shapes and contrasting colors, hoping to get a strong style transfer effect. Currently, the job is still running and will take several more hours to complete.

Inference: 

I performed inference with several pre-trained models. The input image is here:

 The outputs are shown here: 

The models used are waves, zaha, matta, and mathura. I thought the results turned out very well, and the styles transferred nicely onto the input image. In these four output images, you can also see how the texture has been transformed. I look forward to applying my own style once the model finishes training.
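A sketch of how the same input can be pushed through several pre-trained models on one page; the models/<name> folders and the input id are assumptions about the local layout.

const names = ['waves', 'zaha', 'matta', 'mathura'];
const input = document.getElementById('input'); // the shared input image

names.forEach((name) => {
  const st = ml5.styleTransfer('models/' + name, () => {
    st.transfer(input, (err, result) => {
      if (err) {
        console.error(err);
        return;
      }
      const out = document.createElement('img');
      out.src = result.src; // stylized output for this model
      out.alt = name;
      document.body.appendChild(out);
    });
  });
});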

Overall, style transfer creates a unique artistic effect. It would be interesting to explore the subject further and potentially develop new styles stemming from this type of machine learning approach.


Chinese Shanshui Painting Style Transfer

Inspired by Aven’s Shanshui-DaDA project, I tried to train style transfer models that give images a traditional Chinese shanshui style. Here are the two style images:

The first one is Pure and Remote View of Streams and Mountains by Xia Gui, one of the most famous painters of the Southern Song period; his paintings can be identified by their ax-cut texture strokes. The second one is by Zhang Daqian, who developed the pomo (splashed-ink) technique in traditional Chinese painting and liked to use colors like blue and green to depict mountains.

 

Here are some of the results from the two models. The webpage was based on the sample page from the ml5 official website:

  

Since Xia Gui’s painting has no color, that style transfer turns the image into white (not exactly white, actually) and black, while the pomo style always tries to give it some blue.

The quality when applying the models to the wave image is not that good. The models may need more training epochs, or it may simply be that there are not many Chinese paintings depicting waves.

This made me wonder whether the original style of a picture plays an important role in the quality of the result, so I tested with some mountain pictures. Mountain pictures seem to achieve a better effect after the style transfer than pictures of other subjects. The third one was the most successful, as it even imitates the ax-cut texture strokes of Xia Gui’s work and gives a real sense of traditional Chinese painting. By the way, those pictures are all from my phone: the first two mountain pictures were taken in the desert in Jordan, and the third was taken on Mount Huang 黄山, Anhui Province.

It is interesting that the quality of the result depends heavily on the original style of the input image. That is to say, if an image’s original style matches the style image well, the result will be relatively better. For instance, the photo taken on Huang Shan is probably the closest to the scenery the painter referenced, which is why it looks so good after the style transfer.
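For the curious, here is a sketch of how such a batch test over several photos could be set up in p5.js. This is not the actual page code; the model folder and file names are hypothetical.

let style;
const photos = ['jordan1.jpg', 'jordan2.jpg', 'huangshan.jpg']; // hypothetical filenames

function setup() {
  noCanvas();
  style = ml5.styleTransfer('models/xiagui', transferAll); // assumed model folder
}

function transferAll() {
  photos.forEach((file) => {
    const img = createImg('images/' + file, file);
    img.hide();
    img.elt.onload = () => { // wait for each photo before stylizing it
      style.transfer(img.elt, (err, result) => {
        if (err) {
          console.error(err);
          return;
        }
        createImg(result.src, 'stylized ' + file); // show the result
      });
    };
  });
}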

References:

https://ml5js.org/docs/StyleTransfer

https://www.aven.cc/Shanshui-DaDA.html

IML: Week 8 Style Transfer – Thomas Tai

 

Introduction

The goal for this week was to train a style transfer model using the given code. I ran the training program following the given instructions. I had to repeat the dataset download a couple of times since it kept failing. I found the qstat command very useful for checking whether my training job was still running, since the training took a couple of tries. Like others, I was unable to finish the training, since the maximum runtime on the Intel AI Cloud is 24 hours. So I modified the code to skip training and only run the checkpoint conversion to the format that ml5.js supports. Alternatively, you could reduce the number of epochs to cut the training time. I was able to successfully get the model, which seems to be just a set of weights compiled by the program. I trained two models, but they are incomplete and likely need more training to produce better results.

Style Image:

shanghai

Input Images: 

Output Images:

I find this form of machine learning really cool, since it combines art and computer science. It would not have been possible just a few years ago. I have noticed that Google Photos sometimes gives me stylized suggestions for photos, and I suspect a variation of this model is behind its implementation of style transfer. I noticed artifacts and strange patterns in the output, so the model might require further training or modification. Either way, I enjoyed the unique style it generated, and I look forward to seeing more machine-generated art in the future.
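As a quick smoke test that the converted weights actually load, something like the following works in ml5.js; the models/shanghai folder and the element ids are assumptions about my local setup.

const shanghai = ml5.styleTransfer('models/shanghai', () => {
  const input = document.getElementById('input'); // hypothetical test image
  shanghai.transfer(input, (err, result) => {
    if (err) {
      console.error(err);
      return;
    }
    document.getElementById('output').src = result.src; // the converted model works
  });
});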

Sources for Images:
Shanghai
New York City
Cute-Dog