Background+Motivation
My final project is mainly inspired by re-creations of famous paintings, especially portraits. For example, some people replace the Mona Lisa’s face with Mr. Bean’s, and the resulting painting is really weird but interesting.
I also found that some people like to imitate the poses of the characters in a painting, such as The Scream:
Therefore, I want to build a project that lets users add their own creativity to famous paintings and personalize them by recreating these works. It reminds me of my previous style-transfer assignment, for which I used a painting by Picasso to train a model so that everyone and everything appearing in the video could be rendered in Picasso’s style. Even though the result was not that good, it still showed a way to personalize a painting and let users create their own versions.
My idea is that the user can trigger a famous painting by imitating the pose of the characters in that painting. For example, to trigger The Scream, the user needs to make a pose like this: 😱. After the painting shows up, the user can choose to transfer the style of the live camera feed to the style of The Scream. To switch to another painting, the user just performs the corresponding pose to trigger the expected one.
References
My first reference is the project called Moving Mirror. The basic idea is that when the user makes a certain pose, lots of images of people making the same or a similar pose are displayed.
What attracts me most is the connection between images and human poses. It demonstrates a new way for humans to interact with computers and machines: users can use certain poses to trigger the things they want, and in my project that is a painting.
The second reference is style transfer. It reminds me of some artistic filters in Meituxiuxiu, a popular Chinese photo-beautification application. These filters can change a picture’s style to sketch, watercolor, or crayon.
But those filters only work on still pictures. I want to use a style-transfer model to apply such a filter to live video, so that users can see their style-changed motions in real time.
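The real-time idea above boils down to a per-frame loop: grab a camera frame, push it through the trained style model, and show the result. A minimal sketch of that loop is below; the `stylize` function here is only a stand-in color transform (an assumption, not the actual trained model), and in the real app frames would come from the webcam (e.g. via OpenCV’s `VideoCapture`) instead of the synthetic frame used here.

```python
import numpy as np

def stylize(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the trained style-transfer model (assumption):
    a fixed sepia-like color transform applied to every pixel, just
    so the per-frame loop is runnable without a trained network."""
    sepia = np.array([[0.393, 0.769, 0.189],
                      [0.349, 0.686, 0.168],
                      [0.272, 0.534, 0.131]], dtype=np.float32)
    styled = frame.astype(np.float32) @ sepia.T
    return np.clip(styled, 0, 255).astype(np.uint8)

# Per-frame pipeline: in the real project this frame would be read
# from the live camera and the styled result drawn to the screen.
frame = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
styled = stylize(frame)
print(styled.shape)  # output frame keeps the input frame's size
```

Swapping the stand-in for a real model is then a one-line change inside `stylize`, which keeps the capture/display loop independent of whichever style network is used.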