Week 12: CycleGAN Cartoon2Picasso

For this week’s CycleGAN project, I ended up using a pre-trained model due to several issues I ran into during training, as well as time constraints. I wanted to take a few of Picasso’s cubist paintings and use them to stylize images of childhood cartoons, as I thought the result would be quite interesting. My expectation was that the characters would still retain some similarity to the original image (whether through color, form, etc.), but show obvious changes in structure caused by the sharp edges of Picasso’s cubist style.
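Since inference with a pre-trained CycleGAN mostly comes down to getting images into and out of the generator’s value range, here is a minimal pre/post-processing sketch. It is generic and not tied to any particular repo; the function names and the random stand-in image are my own:

```python
import numpy as np

def preprocess(img):
    """Map a uint8 HxWx3 image into the [-1, 1] float range CycleGAN generators expect."""
    return img.astype(np.float32) / 127.5 - 1.0

def postprocess(out):
    """Map generator output in [-1, 1] back to a displayable uint8 image."""
    return np.clip(np.rint((out + 1.0) * 127.5), 0, 255).astype(np.uint8)

# A random 4x4 "cartoon frame" stands in for a real input image.
frame = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
x = preprocess(frame)
assert -1.0 <= x.min() and x.max() <= 1.0
# With an identity "generator", the round trip recovers the frame exactly.
assert np.array_equal(postprocess(x), frame)
```

The stylized output is whatever the real generator returns in place of the identity here; everything else is just this scaling in and out of [-1, 1].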

Here are the results:


I think the outputs turned out extremely clean-cut, since the input images, being cartoons, contain very defined edges. The Picasso cubist style is quite noticeable, although I suspect the results would be somewhat worse with photos of real-life scenes.

IML | Week12 CycleGAN Training – Quoey Wu

For this week’s assignment, we were supposed to spend some time on CycleGAN. At first, I accidentally retrained monet2photo, so I explored that model further by running some inference.

Here are some of my results:

From my observation, the model works much better on paintings of scenery than on paintings of people. For the paintings of people, the outputs look as if a blur filter had been applied, without many other changes. And even though the results on scenery paintings are not bad, they are still not realistic enough in my opinion, considering there are still visible brushstrokes in the pictures.
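One plausible explanation for the "blur filter" behavior on portraits: CycleGAN’s cycle-consistency loss only requires that the second generator can undo the first, so a pair of near-identity mappings scores very well on that term, and only the adversarial loss pushes outputs toward the target domain. A toy numpy sketch (illustrative numbers only, not the actual model):

```python
import numpy as np

def l1(a, b):
    # Mean absolute error, the form of CycleGAN's cycle-consistency term.
    return float(np.mean(np.abs(a - b)))

rng = np.random.default_rng(0)
real_A = rng.uniform(-1.0, 1.0, (8, 8, 3))  # toy "painting" in [-1, 1]

# A near-identity generator (a mild fade standing in for a blur) used in
# both directions still reconstructs A almost perfectly...
near_identity = lambda img: 0.95 * img
cycle_loss = l1(near_identity(near_identity(real_A)), real_A)

# ...so the cycle term barely penalizes "do almost nothing".
print(round(cycle_loss, 3))
```

In other words, when the discriminator is not strong enough (or training is short), "input plus a slight filter" is a low-loss local solution, which matches what the portrait results look like.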

And here is an example demonstration of the monet2photo model that I found online. Apparently its results are better than mine. I think this may be due to different training parameters, but I’m not sure which parameters have the most influence.

Furthermore, I used some of Van Gogh’s paintings as input to see how they fare under the monet2photo model. I think the results are worse here than those for Monet’s paintings, but the outputs still share some similarity of style.

Besides that, I trained CycleGAN on the facades dataset, but the training process takes time, so I may add more details about it later. 🙂

Week 12: VanGogh CycleGan – Jarred van de Voort

For our week 12 assignment, we were asked to create an environment and train a CycleGAN. As noted, this could take several days, and good results would require much longer training time. To get around this, we can use a well-trained CycleGAN model from the original TensorFlow-based repo linked below. With the pretrained model saved as our checkpoint, I stylized several images using the vangogh2photo dataset. The results are below:
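Since the pretrained weights load as a TensorFlow 1.x checkpoint, one quick sanity check before running inference is that all three files the saver writes are present for the checkpoint prefix. The prefix below is made up for illustration; the real paths and run flags are in the repo’s README:

```python
import os
import tempfile

def looks_like_tf1_checkpoint(ckpt_dir, prefix):
    """True if the three files a TF1 Saver writes for `prefix` all exist."""
    suffixes = (".meta", ".index", ".data-00000-of-00001")
    return all(os.path.exists(os.path.join(ckpt_dir, prefix + s)) for s in suffixes)

# Demo against a throwaway directory with a hypothetical prefix.
with tempfile.TemporaryDirectory() as d:
    prefix = "cyclegan.model-100002"
    assert not looks_like_tf1_checkpoint(d, prefix)
    for s in (".meta", ".index", ".data-00000-of-00001"):
        open(os.path.join(d, prefix + s), "w").close()
    assert looks_like_tf1_checkpoint(d, prefix)
```

If any of the three files is missing (a common result of partially extracted downloads), the restore fails with a cryptic error, so checking up front saves a debugging round.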

Github repo:

https://github.com/xhujoy/CycleGAN-tensorflow

IML | Week12 – Training CycleGan – Andrew Huang

Introduction

This week’s assignment was to train CycleGAN. Since I had already explored this for my midterm, much of it wasn’t new to me. I decided to train on the vangogh2photo dataset.

Process

As expected, the training task was very frustrating because of the walltime limit. I tried training on the NYUSH HPC servers, but due to odd issues with not having enough space on the compute nodes, I could not get the requirements installed, so I did not get the chance to train the model on the GTX 1080s. Additionally, since my capstone model was also training, I could not get the compute quota I needed — all very troubling. I also realized the intermediate images were not saved during training, so I can only show images from inference.

Results

I did not train for many epochs because of the walltime limit, so the results are not good. There are baseline outputs of this model online, which I will share.

Conclusion

For most models it is clearly much better to train on a GPU. I wanted to get better results on the NYU HPC, but I am still unsure why installing the requirements filled my disk quota; perhaps I will try again in the future if I have time.
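One common culprit for a filled disk quota when installing requirements is that pip’s download cache and build temp files land under $HOME, which on clusters is usually the small, quota-limited filesystem. A sketch of redirecting them to a larger scratch area (the scratch path is a placeholder; the actual mount point depends on the cluster):

```shell
# Placeholder scratch location; substitute the cluster's real scratch filesystem.
SCRATCH="${SCRATCH:-$HOME/scratch}"

# Send pip's download cache and build temp files off the home quota.
export PIP_CACHE_DIR="$SCRATCH/pip-cache"
export TMPDIR="$SCRATCH/tmp"
mkdir -p "$PIP_CACHE_DIR" "$TMPDIR"

# Then put the environment itself on scratch as well:
# python3 -m venv "$SCRATCH/cyclegan-env"
# . "$SCRATCH/cyclegan-env/bin/activate"
# pip install -r requirements.txt
```

With both the cache and the virtualenv on scratch, $HOME only holds dotfiles and scripts, which usually fits comfortably inside the quota.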

iML Week12: CycleGAN Training – Ivy Shi

Introduction: 

For this week’s assignment, I trained the summer2winter yosemite dataset with CycleGAN on Intel DevCloud. Domain A contains images of Yosemite Park in the summer and Domain B in the winter. There are no big differences in style between the two domains as most of the landscapes remain the same except for the addition of snow in the winter. 

Process:

Due to an error when saving the training script, I initially wasted some time retraining the monet2photo dataset. That left less time for training on the summer2winter dataset, which contains around 1,000 images. Right now I am at 58 epochs after 24 hours. Training will continue up to 200 epochs, which should take a little less than three more days.
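The time estimate above checks out with a quick back-of-envelope calculation, assuming the time per epoch stays constant:

```python
# Numbers from this post: 58 epochs finished in 24 hours, target is 200 epochs.
epochs_done = 58
hours_elapsed = 24.0
target_epochs = 200

hours_per_epoch = hours_elapsed / epochs_done
remaining_days = (target_epochs - epochs_done) * hours_per_epoch / 24.0
print(f"~{remaining_days:.1f} more days")  # ~2.4 more days, i.e. a little under three
```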

Results:

Here are the results from inference after 58 epochs:

Conclusion: 

The results are actually worse than I expected. Looking closely, the skies in the generated images are gloomier and the trees appear in a darker shade of green. However, there is no trace of snow to signify winter, which is rather disappointing.

In general, there are no big differences between the left and right images, which are supposed to correspond to summer and winter respectively. I suspect this is due to the small number of training epochs. Another possible reason is that images from Domains A and B are quite similar in style to begin with.

The results right now are not great. I will continue to train and inference with a better model once it is completed.