MLNI-Final Project Concept — Crystal Liu

Initial Thought:

I want to develop my final project based on my midterm project. As I said in my midterm documentation, I want to add a storytelling part and smoother interaction to my final project. For the storytelling part, I plan to design a theme around a festival. Since Christmas is around the corner, I chose Santa's journey on Christmas Eve as the main topic.


If the users touch the "Merry Christmas" text, they will see this crystal ball.

As the user gets closer and closer to this image, the image will get bigger and bigger, as if the user were approaching the crystal ball in the real world. Once the distance reaches a set threshold, the user will see another image, which means they have entered the scene successfully:
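The zoom-and-enter mechanic above could be sketched roughly like this: since PoseNet only gives 2D keypoints, the pixel distance between the user's shoulders can stand in for how close they are to the camera. All of the numbers and names below are assumptions for illustration, not a finished implementation.

```javascript
// Linearly map a value from one range to another (like p5.js's map()).
function mapRange(value, inMin, inMax, outMin, outMax) {
  const t = (value - inMin) / (inMax - inMin);
  return outMin + t * (outMax - outMin);
}

// shoulderWidth: pixel distance between the left and right shoulder keypoints.
// Returns { scale, enterScene }: scale is applied to the crystal-ball image,
// and enterScene becomes true once the user is close enough to "step into" it.
function zoomFromShoulderWidth(shoulderWidth) {
  const FAR = 100;             // assumed shoulder width (px) when far away
  const NEAR = 400;            // assumed shoulder width (px) when very close
  const ENTER_THRESHOLD = 350; // assumed trigger point for the scene change
  let scale = mapRange(shoulderWidth, FAR, NEAR, 1.0, 3.0);
  scale = Math.min(Math.max(scale, 1.0), 3.0); // clamp so the image never shrinks
  return { scale, enterScene: shoulderWidth >= ENTER_THRESHOLD };
}
```

In the p5.js draw loop, `scale` would multiply the image's drawn width and height each frame, and `enterScene` would swap in the second image.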

I will make these images larger than the canvas. The users can drag the image by stretching their left or right hand. They can also trigger things inside the image. For example, if the butterfly approaches the elf who is raising his hands in the air in the second image, the user will hear an excited "Merry Christmas." This is the first scene. The users can go to the next scene by letting the butterfly get close to the right edge of the image; if they do so, they will see an arrow guiding them to the next scene. Every scene has its own surprising part, as in my midterm, and I plan to add some hints to guide the users. As Tristan suggested, I can use a fade function to let the users recognize that they have just triggered something.
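The trigger-plus-fade idea could look something like the sketch below: a proximity check for the butterfly hitting a hotspot (like the elf), and a small fade counter whose value would feed into `tint()` or `fill()` alpha in p5.js. The step size, radius, and names are all hypothetical.

```javascript
// True when point a is within `radius` pixels of point b
// (e.g. the butterfly near the elf hotspot).
function isNear(ax, ay, bx, by, radius) {
  return Math.hypot(ax - bx, ay - by) < radius;
}

// A fade-in counter for visual feedback after a trigger.
// trigger() arms it; update() is called once per frame and
// returns the current alpha (0..max).
function createFade(step = 5, max = 255) {
  let alpha = 0;
  let active = false;
  return {
    trigger() { active = true; },
    update() {
      if (active && alpha < max) alpha = Math.min(alpha + step, max);
      return alpha;
    },
  };
}
```

In the scene, `isNear(butterfly.x, butterfly.y, elf.x, elf.y, 50)` would both play the "Merry Christmas" sound and call `fade.trigger()`, so the overlay fades in as confirmation.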

Technology

The core technology is still PoseNet. I was inspired by Shenshen's and Billy's midterm projects. The users can zoom in on the image by getting close to the screen. I also want to make some filters for the users, with the image or GIF positioned based on PoseNet keypoints. In addition, I want to use style transfer to enrich the visual output, but I'm afraid the model will get stuck and won't run smoothly.
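Anchoring a filter to the body could work by reading a named keypoint out of each detected pose. The helper below assumes the ml5.js PoseNet result shape (a `keypoints` array of `{ part, score, position }` objects); the confidence threshold is an assumption.

```javascript
// Return the {x, y} position of a named PoseNet keypoint, or null if it
// was not detected confidently enough to place a filter on.
function keypointPosition(pose, partName, minScore = 0.5) {
  const kp = pose.keypoints.find(k => k.part === partName);
  if (!kp || kp.score < minScore) return null;
  return { x: kp.position.x, y: kp.position.y };
}

// In draw(), something like:
//   const nose = keypointPosition(pose, 'nose');
//   if (nose) image(filterImg, nose.x - filterImg.width / 2, nose.y);
```

Skipping low-confidence keypoints should keep the filter from jumping around when PoseNet loses track of the user, which matters if style transfer is also slowing the frame rate down.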

 
