Training deep dream
Author: Andrew Huang
Deep dream has always been fascinating to me, since the images it creates look like the visuals one sees on psychedelics such as LSD, but I never really knew how it worked. After reading some articles about it online, I found that the algorithm repeatedly perturbs the image itself so that the network's loss gets slightly smaller. I think this is similar to how GANs train, because both slightly perturb something (in the GAN case, the latent vector). It is like doing back-propagation, but instead of changing the weights with respect to the loss, you freeze the weights and change the image (a sketch of this update step is at the end of this post).

I noticed that spirals appear everywhere in all the deep dream images I generated, and I'm still not sure why. I'm also unsure why the algorithm needs to run on different scales of the image and scale back up to compare against the original, so in that regard I need to study more to learn how the algorithm really works (there is also a sketch of that multi-scale loop below).

That said, the code ran smoothly for me: I just did conda activate with my midterm environment and scp'd over the pictures I wanted to transform. I transformed two city landscapes and one anime-style picture. The distortions I get from this model aren't as good as the ones I see online; perhaps more iterations are needed to get the proper distortion. In particular, when the image contains a class that is present in the training set of the original model (ImageNet), the results are especially good, I think because the network can disturb the image more to fit the loss better. Even though deep dream has a pretty limited use case and is relatively old in terms of machine learning models and techniques, it's still interesting to see the algorithm work and how far we have come.
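To make the "back-propagation on the image" idea concrete, here is a minimal PyTorch sketch, assuming torchvision's pretrained GoogLeNet. The choice of layer (inception4c), learning rate, and step count are my own illustrative assumptions, not the exact settings of the code I ran. Note that most deep dream implementations phrase the update as gradient ascent that maximizes the activations of a chosen layer, which is the same idea as descending a loss on the image with the weights frozen.

```python
# A minimal sketch of the core deep dream update, assuming torchvision's
# pretrained GoogLeNet. The layer (inception4c), learning rate, and step
# count are illustrative choices, not the exact settings I ran with.
import torch
import torchvision.models as models

model = models.googlenet(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze the weights; only the image changes

# Grab the activations of an intermediate layer with a forward hook.
acts = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: acts.update(value=output))

def dream_step(img, lr=0.02):
    img = img.clone().requires_grad_(True)
    model(img)                        # forward pass fills acts["value"]
    loss = acts["value"].norm()       # objective we *ascend* on
    loss.backward()                   # gradients flow to the image, not the weights
    with torch.no_grad():
        g = img.grad
        img += lr * g / (g.abs().mean() + 1e-8)  # normalized gradient ascent
    return img.detach()

img = torch.rand(1, 3, 224, 224)      # stand-in for a loaded, normalized photo
for _ in range(50):
    img = dream_step(img)
```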
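As for the multi-scale part I said I don't fully understand: the usual trick is an "octave" loop that runs the same ascent at several resolutions, from coarse to fine. Here is a hedged sketch reusing dream_step from above; the octave count and scale factor are illustrative defaults, not necessarily what my code used.

```python
# A hedged sketch of the multi-scale ("octave") loop, reusing dream_step
# from the sketch above; the octave count and scale factor are illustrative.
import torch
import torch.nn.functional as F

def deep_dream(img, n_octaves=4, octave_scale=1.4, steps=20):
    # Build a pyramid of progressively smaller copies of the image.
    octaves = [img]
    for _ in range(n_octaves - 1):
        h, w = octaves[-1].shape[-2:]
        octaves.append(F.interpolate(
            octaves[-1], size=(int(h / octave_scale), int(w / octave_scale)),
            mode="bilinear", align_corners=False))

    # Dream from coarse to fine. The "detail" tensor carries what the dream
    # added at each scale up to the next resolution, which is the comparison
    # against the plain resized original that puzzled me above.
    detail = torch.zeros_like(octaves[-1])
    for octave in reversed(octaves):
        detail = F.interpolate(detail, size=octave.shape[-2:],
                               mode="bilinear", align_corners=False)
        dreamed = octave + detail
        for _ in range(steps):
            dreamed = dream_step(dreamed)
        detail = dreamed - octave     # keep only what the dream added
    return dreamed
```

Carrying the dreamed detail separately, rather than just upsampling the dreamed image, is what lets patterns found at coarse resolutions persist while each finer octave still starts from the clean resized original.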