iML Week 10: Explore DeepDream – Ivy Shi

After the class introduction to DeepDream, I was quite intrigued by the effects and results that this computer vision program produces. I decided to explore it further by trying different image inputs. 

I started out with these images:

The parameters are:

settings = {
    'features': {
        'mixed2': 1.,
        'mixed3': 1.5,
        'mixed4': 1.,
        'mixed5': 1.5,
    },
}

step = 0.01 # Gradient ascent step size
num_octave = 3 # Number of scales at which to run gradient ascent
octave_scale = 3.4 # Size ratio between scales
iterations = 50 # Number of ascent steps per scale
max_loss = 10.

I thought the effect on the night-sky star photo was the most interesting, so I continued with that image and tuned the parameters to get different outputs.

Here is the difference between using mixed3 versus mixed5 as the main feature; mixed5 looked more pleasing. I then changed the number of iterations:

Difference between 20 and 50 iterations:

Looking at the overall picture, I think the 20-iteration output looks better, but when I zoom in, 50 iterations produce much more refined detail. 
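For reference, the variations above come down to two small changes in the script. The weights below are illustrative values for making one layer dominant rather than the exact numbers from every run; the rest of the script follows the standard Keras DeepDream example:

# Which layer dominates the dream (illustrative weights, not the exact values from each run)
settings_mixed3 = {'features': {'mixed3': 2., 'mixed5': 0.5}}  # mixed3 as the main feature
settings_mixed5 = {'features': {'mixed3': 0.5, 'mixed5': 2.}}  # mixed5 as the main feature

# How long gradient ascent runs at each scale
iterations = 20   # reads better as a whole picture
# iterations = 50 # much more refined detail when zoomed in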

Midterm Project: Dodecahedron Lamps – Ivy Shi

Title: Lamp Dodecahedron  

Project Description:

For the midterm project, I was inspired by Vincent Buret and his work Mirobolante Light Sculpture. I was especially interested in the shape, the dodecahedron, and how it is an excellent structure for displaying light interaction. After doing more research on this shape, I saw on Pinterest a style of lamp sculpture incorporating light and patterns. Drawing inspiration from these artworks, I decided to create my own dodecahedron lamp using laser cutting and to explore further how I could relate this structure to LED light and patterns. 

Mirobolante Light Sculpture / Pinterest Inspiration

Perspective and Context

I was really keen on incorporating shapes on the sides of the dodecahedron, not only because it creates a beautiful visual effect, but also because it connects with Merleau-Ponty’s view of space in the book “The World of Perception”. He states that our perceived world is structured by a plurality of overlapping perspectives within which different aspects are somehow seen together. As light shines through the geometric shapes, patterns are created in the space. Depending on where the viewer stands, where the light is placed, and where the reflective surfaces are, people see different patterns, and there is no absolute right way to enjoy it. Any angle provides a unique viewing experience, and together these perspectives contribute to a greater perception of the space. 

Development & Technical Implementation

To get started, I decided to use laser cutting as the fabrication method to create the main structure. I drew twelve pentagons in Illustrator and utilized Joinery to create joint inserts. Here are some photos of the first prototype: 

In this case, the prototype did not work out because not all the joints matched up exactly, so I could not assemble the pieces together. It took more spatial imagination to readjust the Illustrator file. I also changed the patterns to ones that create a better visual effect. The second attempt is shown here: 
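As a side note on the geometry (this is just a reference calculation, not part of my Illustrator file): the faces of a regular dodecahedron meet at a dihedral angle of about 116.6 degrees, which is part of why flat, laser-cut joints are hard to get to line up exactly:

import math

# Dihedral angle of a regular dodecahedron = 2 * arctan(golden ratio)
phi = (1 + math.sqrt(5)) / 2
dihedral = math.degrees(2 * math.atan(phi))
print(round(dihedral, 2))  # about 116.57 degrees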

I learned that acrylic is quite expensive, so when prototyping it is best to go with MDF board. I ended up creating a finished prototype by assembling the pieces and turning it into a lamp design, connected to a 20V halogen light. The results are as pictured: 

While the second prototype works well with a stable, bright light source and I was happy with the result, I wanted to do more with LED lights. However, the problem with LEDs is that their light does not shine through the wood board. In order to best combine LED lights with the structure, I experimented with some alternative materials – reflective glass and frosted acrylic. I ended up picking frosted acrylic, as it best presents the effects I want to achieve. 

In the process of prototyping, I also tried several different patterns and finalized two distinct patterns for the top and bottom halves of the structure. 

In terms of the composition, I tried to maximize what digital LEDs can do by using both single and mixed colors and by controlling pixels individually to create a kinetic effect. I worked with the FastLED library and built upon some existing light patterns. The details of the code are here: https://gist.github.com/ivyshi98/73924a12eb95492240b6a8a09d6df956

Presentation: 

For the presentation, I showcased my MDF lamp first. Even though the light effects are limited, with no change of colors, I still think this piece creates captivating visuals, and people reacted to it quite well despite its simplicity. 

Then, for the main project, I displayed the frosted acrylic lamp on an MDF board with two sides covered. My intention was for each side to reflect the patterns of the lamp. This is a video of the final result: 

Both of my works are self-explanatory. In general, people enjoyed the patterns generated by the interaction of the LED lights and the lamp, as well as the overall light composition. Some of the feedback and critiques include: 

1) Create a generative composition – Instead of thinking of the composition as having a fixed duration, I should establish a continuous composition that lets people enjoy the display no matter when they see it; it should not have a clear beginning and end. This is something I can definitely improve by adding additional sequences and programming the code to randomly select a sequence (see the sketch after this list). The delay time between sequences will also be tuned so each change feels more like a transition than a sudden stop. 

2) Overall display needs improvement – The main critique is that the lamp itself is beautiful, but putting it on a small, plain MDF board takes away from the effect. The materials do not match, and the base is too small to show the entirety of the pattern reflection. Professor Eric suggested using a white foam board. The overall display definitely lacked consideration on my part; the overall viewing experience is as important as the main sculpture. Next time I will put more time and effort into testing different display methods and choosing the best one. For this project, I re-staged the piece near the corner of a dark room, and the effect is improved: 
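Going back to the first critique, here is a rough sketch of the generative-composition idea, written in Python for readability rather than in the Arduino/FastLED code linked above; the sequence names are hypothetical placeholders, not the actual patterns in my gist:

import random
import time

# Hypothetical sequence names standing in for the FastLED patterns in the real code
SEQUENCES = ['single_color_fade', 'mixed_color_chase', 'kinetic_sparkle']

def play(name, seconds):
    # Placeholder: on the lamp this would run one LED sequence
    print('playing', name, 'for', round(seconds, 1), 'seconds')
    time.sleep(seconds)

while True:
    # Randomly pick the next sequence so the composition has no fixed beginning or end
    play(random.choice(SEQUENCES), random.uniform(10, 20))
    time.sleep(1)  # a short, tuned pause so changes feel like transitions, not hard stops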

Conclusion: 

I learned a lot from the process of making this midterm project. In the development stage, it is important to create prototypes and make improvements each time; do not expect to get to the final product in one go. In addition, it is good practice to start the prototype small and inexpensive so as not to waste time and materials. I made the mistake of using acrylic the first time, thinking that it would work. After revising my prototype so many times, I now have a good grasp of laser cutting as a fabrication method.  

In terms of presentation, I also learned from the critiques and listed the details in the presentation section. Next time, I will consider making a generative composition as well as carefully designing the overall display. A project can be the most beautiful thing, but without a proper display and the best viewing angle(s) for the audience, it is worthless.  

iML | Week09: Style Transfer – Ivy Shi

After this week’s class, I continued to explore style transfer by training a model and running inference with ml5. I followed the instructions for training on Intel DevCloud and encountered several difficulties before successfully starting the training. Since my model is still training, I completed the inference part with pre-trained models in the meantime. It was quite informative to look at the differences between the outputs of different models, and once my own model is done, I would like to compare those results as well.  

Training: 

At the very beginning, I had difficulties just setting up the environment, because I had downloaded other TensorFlow-related packages while working on the midterm project. The packages specified in environment.yml seemed to conflict with the existing ones; I suspect this is due to having old and new versions of the same package. 

In addition, I kept getting an error that the prefix for the conda environment already exists, even though it did not show up in my conda env list. The error persisted no matter how many times I tried to conda remove the environment and start over. In the end, I edited the environment.yml file to rename the path. 

I was stuck on this for a while until Aven pointed me to the conda clean command. I removed unused packages and caches and also re-cloned the GitHub repository. After solving some other issues, I was finally able to create a new virtual environment for training this style transfer model. 

Downloading the dataset was also an unpleasant experience: five or six times, the download stopped partway through and had to be restarted. Eventually it went through without interruption, taking about three hours to complete. 

After solving all of these problems, I was able to start training. I specifically looked for a picture with clear geometric shapes and contrasting colors to train on, hoping to get a strong style transfer effect. Currently, the job is still running and will take more hours to complete.

Inference: 

I performed inference with several pre-trained models. The input image is here:

The outputs are shown here: 

The models used are: waves, zaha, matta, and mathura. I thought the results turned out very well, and the styles transferred nicely to the input image. In these four output images, you can also see how texture has been transformed. I look forward to applying my own style once the model has finished training. 

Overall, style transfer creates a unique artistic effect. It would be interesting to explore the subject further and potentially form new styles stemming from this type of machine learning approach.  

Week 07: Light Composition Assignments – Ivy Shi

Assignment 1: Analog RGB LED Strip

Title: Music lights

For this project, I attempted to sync color changes on my analog RGB strip with the ups and downs of a piece of music. I carefully chose and edited a one-minute BGM to match my intended goal of a complete cycle of introduction, build-up, climax, and ending. The instrumental music starts out slow with distinct notes, then transitions into increasingly fast tempos, and eventually fades out. In order to sync the music with the colors, I had to manually adjust the wait times in between. This process was time-consuming, but the effect turned out quite well. 

For the presentation, I bent the strip into a big circle and placed it into an acrylic box that reflects the light, making the colors more vibrant. Because the box has flat surfaces and sharp angles, the effect varies when looking at it from different angles. 

Here is the video: 

Source code: https://gist.github.com/ivyshi98/8a0e7f934e04c0791e8dce4398cec6f1

Assignment 2: Digital RGB LED Strip

Title: A Day of Light on Earth 

Team members: Sylvia, Lily

For this assignment, our team worked with FadeCandy to connect multiple digital LED strips. Our theme was to use light to represent a day on Earth. It starts out with green, creating a fresh, foresty atmosphere. The focus then shifts to the sky, represented in blue, and dots running along the strips represent birds flying over. It eventually ends with sunset and dawn. In the process, we used various colors, speeds, and patterns, and lit up different sections of the strips to achieve a captivating visual effect. 
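For anyone curious about the setup: FadeCandy listens for Open Pixel Control messages, and one common way to drive it is the Python opc client bundled with the FadeCandy repository. Here is a minimal sketch along those lines – the pixel count and server address are assumptions, and our actual composition is in the gist linked below:

import time
import opc  # Open Pixel Control client from the FadeCandy repository

NUM_PIXELS = 64  # assumed strip length for this sketch
client = opc.Client('localhost:7890')  # default fcserver address

# Fill the strip with green for the "forest" opening
client.put_pixels([(0, 255, 0)] * NUM_PIXELS)
time.sleep(2)

# Sweep a single white dot (a "bird") across a blue "sky"
for i in range(NUM_PIXELS):
    frame = [(0, 0, 128)] * NUM_PIXELS
    frame[i] = (255, 255, 255)
    client.put_pixels(frame)
    time.sleep(0.05)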

In terms of the presentation, we cut the LED strips shorter and placed them on a paperboard box cut-out. They are fixed by plastic pins, with crystal balls and pyramids placed in between. The crystals also serve the purpose of deflecting and reflecting light in multiple directions, which creates a varying viewing experience from different angles. 

Here is the video: 

Feedback: we received the feedback that placing everything on an undecorated box cut-out is not the best choice in terms of presentation and visuals. We could use an acrylic board or laser-cut a base. This is something we should definitely take into more consideration. 

Source code: https://gist.github.com/ivyshi98/b9d4f76742efab4bd1e839ea1722c004

iML Week 07: Midterm Project – Ivy Shi

Introduction: 

For the midterm project, I was really interested in exploring GANs (generative adversarial networks) and their power to produce realistic-looking generative images. I researched some existing projects and looked into different variations and interesting uses of GANs. My project idea is composed of two parts:

1) Create a web application that allows users to upload an image of themselves and generate the celebrity version of themselves using machine learning.

2) Create an interface that allows users to generate random tattoo images. The stretch goals would be to generate tattoos based on their input sketches and to allow users to choose tattoo styles.

My objectives are divided into two stages. In the first part, I will use an established, well-organized dataset of celebrity faces to grasp how GANs work and how to use them. The next part is utilizing GANs to pursue my interest in tattoo generation; in this case, I will be collecting my own tattoo dataset. Since I foresee this being a long-term and time-consuming project, I expect to make substantial progress for the midterm and eventually complete my goals in the final project. 
