Week 11: BigGAN – Cassie

For this week’s assignment I played around with BigGAN. In class we experimented with how truncation affects single images, but I wondered how it would affect a video animation that morphs one object into another.

I wanted to morph an object into another one with a similar shape, so at first I chose guacamole and ice cream at a truncation of 0.1. This turned out to be…really disgusting looking.

Video: https://drive.google.com/file/d/1mAewM63SA8vT1rez3u7co2fDE3eP7d0C/view?usp=sharing

For some reason the guacamole didn’t really seem to change at all at the beginning, and when it did begin to morph into ice cream it just looked like moldy food. The final ice cream frame also didn’t look much like ice cream.

So…I decided to change the objects to a cucumber and a burrito. This worked a lot better. I created four videos, one with truncation 0.2, one with 0.3, one with 0.4 and one with 0.5. I then put these into a collage format so you could see the differences between all of them:

Though it’s subtle, you can definitely tell that there is a difference between the four videos. In principle, the top left corner is 0.2, the top right is 0.3, the bottom left is 0.4, and the bottom right is 0.5; however, I am not super well-versed in video editing, and when I put the collage together in iMovie it was hard to keep track of which one was which.
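
For anyone who wants to reproduce these morphs outside the class notebook, here is a minimal sketch of the class-vector interpolation such videos are built from, using the pytorch-pretrained-biggan package. The class names, frame count, and frame handling are my assumptions, not the exact notebook code:

    import torch
    from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                           truncated_noise_sample)

    model = BigGAN.from_pretrained('biggan-deep-256')

    truncation = 0.2  # the knob compared across the four videos
    noise = torch.from_numpy(
        truncated_noise_sample(truncation=truncation, batch_size=1))
    start = torch.from_numpy(one_hot_from_names(['cucumber'], batch_size=1))
    end = torch.from_numpy(one_hot_from_names(['burrito'], batch_size=1))

    frames = []
    for t in torch.linspace(0, 1, steps=60):    # 60 in-between frames
        class_vec = (1 - t) * start + t * end   # blend the two class vectors
        with torch.no_grad():
            img = model(noise, class_vec, truncation)  # (1, 3, 256, 256)
        frames.append(img)  # stitch the frames into a video afterwards

Lower truncation keeps the noise closer to the center of the latent distribution, which is why the 0.2 video should look smoother but less varied than the 0.5 one.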

Week 11 Assignment: Explore BigGAN or deepDream – Ziying Wang (Jamie)

For this week’s project, I tweaked the parameters in the DeepDream model. 

I tried the examples in the images folder first, then discovered that ID photos and human portraits work best with the DeepDream model. I therefore used a picture of Robert Downey Jr. for my DeepDream project.

The original picture:

The first set of parameters I changed was the mixed figures.

From left to right, I gradually raised the mixed figures from 0 to 0.2, which means that in the fourth picture all four mixed figures in the features setting are 0.2. The pattern on the picture slowly fades as each layer’s figure increases.
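
For reference, the “mixed” figures are the per-layer weights in the settings dictionary at the top of the Keras deep_dream.py example. In the stock script they look roughly like this (the exact values may differ in your copy):

    # Weights for the InceptionV3 "mixed" layers whose activations
    # the script maximizes (stock defaults shown):
    settings = {
        'features': {
            'mixed2': 0.2,
            'mixed3': 0.5,
            'mixed4': 2.,
            'mixed5': 1.5,
        },
    }
    # For the fourth picture above, all four weights were set to 0.2.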

Then I kept the features as they were in the fourth picture and changed the step size from 0.01 to 0.05 and then 0.08 (from left to right). The effect strengthens as the step size increases, and so does the pattern’s complexity.

The third parameter I tweaked was the number of octaves, which I changed from 3 to 5. The opacity increases while the complexity decreases.

Finally, I increased the iterations from 20 to 50. The opacity of the effect remains the same while the complexity lessens.
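
To show where these three knobs enter, here is a simplified reconstruction of the example’s main loop; the helper names come from the stock script, and the details may differ in your copy:

    img = preprocess_image(base_image_path)
    for shape in successive_shapes:   # one pass per octave, small to large
        img = resize_img(img, shape)
        img = gradient_ascent(img,
                              iterations=iterations,  # 20 by default, 50 in my last test
                              step=step,              # 0.01 by default; I tried 0.05 and 0.08
                              max_loss=max_loss)
        # the script then re-injects image detail lost during upscaling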

After these quick experiments in the terminal, I uploaded the original picture of RDJ to my Google Cloud and used the deepdream_video_hack script to generate the DeepDream video.

The video’s length depends on the zooming rate and the dreaming rate. Since I didn’t change these rates at first, the video ended quickly. I then set the dreaming rate to 50 and the zooming rate to 100, and raised the fps to 150, which should yield a one-second video; sure enough, I ended up with a one-second, fast-zooming DeepDream video.
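
A back-of-envelope check on that one-second result, under the assumption (mine, not verified against the script) that deepdream_video_hack renders one frame per dreaming or zooming step:

    dreaming_frames = 50   # dreaming rate
    zooming_frames = 100   # zooming rate
    fps = 150
    duration_s = (dreaming_frames + zooming_frames) / fps  # 150 / 150 = 1.0 s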

To observe the transformation better, I set both the zooming rate and the dreaming rate to 100, then reduced the fps to 24 frames/s, and generated a 13-second DeepDream video that demonstrates the change well.

The zoom_factor of the previous video was too high and the zooming rate too big: the video zoomed endlessly and the ending became boring. I therefore decreased both, as well as the dreaming rate, so that the time before the zooming starts is shorter as well. I ended up with this video, which is quite satisfying.

Week 11 Assignment: Exploration of deepDream – Crystal Liu

Problems:

This week we learned about BigGAN and DeepDream, and I had already done some exploration of BigGAN in class. For this week’s assignment I therefore chose to explore DeepDream further. I ran into some errors when running the model. One was that I ran the Python script without installing Keras and TensorFlow, so the script couldn’t run. The other was that I hadn’t changed “a += b” to “a = a + b” in the script. After I changed that, the model ran successfully.
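
Concretely, the failing line in deep_dream.py and its working replacement look like this (the slicing comes from the stock script and may differ in your copy):

    # Fails with "RuntimeError: Variable += value not supported":
    loss += coeff * K.sum(K.square(x[:, :, 2: -2, 2: -2])) / scaling

    # Works: build a new tensor instead of updating the variable in place
    loss = loss + coeff * K.sum(K.square(x[:, :, 2: -2, 2: -2])) / scaling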

Process:

This is my original image:

The default version looks like this:

Then I changed the step from 0.01 to 0.1, and the result looked blurry.

Next, I changed the number of scales from 3 to 6, and the result looked like this: it looked more like the original, except for the ring in the middle.

Then I changed the size ratio from 1.4 to 1.8 and the result looked like this:
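
The number of scales and the size ratio work together: they decide the sequence of image sizes the script dreams at. Here is a reconstruction of that computation from the stock example (treat it as an assumption about your copy):

    successive_shapes = [original_shape]
    for i in range(1, num_octave):   # num_octave = "number of scales", 3 -> 6
        shape = tuple(int(dim / (octave_scale ** i))  # octave_scale = "size ratio"
                      for dim in original_shape)
        successive_shapes.append(shape)
    successive_shapes = successive_shapes[::-1]  # process smallest first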

Then I changed the max_loss and found that this change didn’t have much effect on the result:
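
That max_loss barely matters makes sense if the script follows the stock example, where it is only an early-stopping cap inside the gradient-ascent loop (a reconstruction, so an assumption about your copy): if the loss never climbs past the cap within the allotted iterations, changing it has no effect at all.

    def gradient_ascent(x, iterations, step, max_loss=None):
        for i in range(iterations):
            loss_value, grad_values = eval_loss_and_grads(x)
            if max_loss is not None and loss_value > max_loss:
                break  # only fires if the loss ever exceeds the cap
            x += step * grad_values
        return x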

Week 11 Assignment: Explore BigGAN or deepDream – Lishan Qin

For this week’s assignment, I played with DeepDream a little to generate images. I ran the model on the image below and changed the settings to see how each setting influences the output image. The difficulty I met when running the model was that when I first ran the deep_dream.py file, the error message “RuntimeError: Variable += value not supported.” showed up. I later changed the code in the file to “loss = loss+coeff * K.sum(K.square(x[:, :, 2: -2, 2: -2])) / scaling” and it worked. I still don’t really know why it couldn’t run when the code was “loss += …”, but the program did begin to generate output after I changed that. The results are as follows.
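
A likely explanation, assuming the stock script (which initializes the loss as a Keras backend variable):

    loss = K.variable(0.)  # in the stock script: a TensorFlow Variable

    # "loss += ..." asks TensorFlow to update that Variable in place,
    # which the backend rejects ("Variable += value not supported").
    # "loss = loss + ..." instead builds a new symbolic tensor from the
    # old value plus the new term, which the graph accepts.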

The input image:

The output image when no setting is changed:

The output when I changed all of the “features” settings to 2:

(The changes from the original image seem smaller.)

The output when I changed the step from 0.01 to 0.1 and left everything else unchanged:

Overall this was a super fun experience. The images it generated have a really powerful and interesting feeling. Even though I haven’t yet seen how I can apply this technology to my final project, I still find deep dreaming to be a very powerful technique.

Week 11 Assignment: Explore BigGAN, deepDream

Explore GAN interpolation, create generated videos (more details in the in-class exercises), and document your work on the class blog. Or play with deepDream, create generated videos, and document them.

  • in-class: collaborative work 
  • continue your exploration and document it! 
  • Post it on the IMA blog before Friday midnight, the 22nd, with the tag: aiarts11