Topic 1 Research

Simulation

Deepfake

Deepfake is a portmanteau of “deep learning” and “fake”. It is commonly understood as manipulated video or a computer-generated face swap, but it is much more than that. Deepfake specifically refers to synthetic media (video, audio, images) created through machine learning and artificial intelligence techniques, seemingly blurring the boundary between reality and artifice.
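As a rough illustration of the machine-learning side, the classic face-swap approach (which tools such as DeepFaceLab build on) trains a single shared encoder with two decoders, one per identity, so that swapping decoders at inference time re-renders one person's expressions with the other person's face. The sketch below is a conceptual toy model, not any specific tool's architecture; the layer sizes and placeholder batches are assumptions for illustration only.

```python
# Conceptual toy sketch of autoencoder-based face swapping (assumed architecture
# for illustration; real tools use far deeper networks, face alignment, and
# perceptual losses).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Compress a 64x64 RGB face crop into an identity-agnostic latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim))

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstruct a 64x64 RGB face (one decoder is trained per identity)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())  # 32 -> 64

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training step: each decoder learns to reconstruct faces of its own person,
# which pushes the shared encoder to capture pose/expression rather than identity.
faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch for person A
faces_b = torch.rand(8, 3, 64, 64)  # placeholder batch for person B
loss = (F.mse_loss(decoder_a(encoder(faces_a)), faces_a)
        + F.mse_loss(decoder_b(encoder(faces_b)), faces_b))

# The "fake": encode person A's expression, decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```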

In a world where video is treated as hard evidence, realistic Deepfakes could easily be misused in the wrong hands. In fact, the term originated in 2017 as the username of a Reddit user who posted adult film videos with the faces of celebrities swapped onto the bodies of adult actors using this technology.

However, Deepfake could also have endless possibilities and potential in art and culture.

Reference:

https://amt-lab.org/blog/2021/8/positive-implications-of-deepfake-technology-in-the-arts-and-culture 

https://en.wikipedia.org/wiki/Deepfake

Ideas

  • Could it be used to deceive? What are its negative effects?
  • What are the potential risks of this technology in the future?
  • How is it portrayed to the general public? Are there misunderstandings about it? Why, and how do they arise?
  • Deepfake in Art
  • How could Deepfake be used positively?
  • How could artists benefit from it?
    • Convenience: create hyper-realistic synthetic performances without the celebrity or actor ever being on set
    • Cost-effective: less expensive than shooting in person or relying on computer graphics that end up looking unconvincing.
    • Endless content variation and editing possibilities: AI-assisted content creation is extremely flexible. Expressions, facial features, or entire faces can be edited, and localized content can be created, without further shooting.
    • Benefits for Digital Artists: realistic renderings, CGI, VFX, games
    • High-quality renderings of realistic 3D models could also be considered a form of Deepfake and could potentially be used in my project.

Popular Software & Tools for Creating Deepfakes:

 

Experimental Video Using First Order Motion Model

First, I experimented with the First Order Motion Model on RunwayML. I used a picture of Donald Trump as the source image and Xi’s Chinese New Year greeting speech as the driving video.

Then I ran the First Order Motion Model demo code on Google Colab, using my Japanese friend’s face as the source image.

I recorded myself saying a line from The Simpsons and used that as the driving video. This fantastic artwork of Homer Simpson is by Joe Parente.
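For reference, below is a minimal sketch of that Colab workflow, assuming the official first-order-model repository is cloned into the runtime and the pretrained vox-256 checkpoint has been downloaded. The file names are placeholders, and the load_checkpoints / make_animation calls follow the repository's demo.py, so the exact arguments may differ between versions.

```python
# Minimal sketch of the First Order Motion Model demo on Google Colab, assuming
# the AliaksandrSiarohin/first-order-model repo is cloned and on the Python path.
import imageio
import numpy as np
from skimage.transform import resize
from demo import load_checkpoints, make_animation  # provided by the cloned repo

# Source image: the still face to animate. Driving video: the recording whose
# motion (head pose, expression, lip movement) gets transferred onto it.
source_image = resize(imageio.imread('source_face.png'), (256, 256))[..., :3]
reader = imageio.get_reader('driving_clip.mp4')
driving_video = [resize(frame, (256, 256))[..., :3] for frame in reader]

# Load the pretrained generator and keypoint detector (vox-256 checkpoint).
generator, kp_detector = load_checkpoints(
    config_path='config/vox-256.yaml',
    checkpoint_path='vox-cpk.pth.tar')

# Animate the source image with the driving video's motion and save the result.
predictions = make_animation(
    source_image, driving_video, generator, kp_detector, relative=True)
imageio.mimsave('result.mp4', [np.uint8(255 * frame) for frame in predictions], fps=25)
```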

 

Inspirations

VFX & AI Artist VFXChris Ume:

The possible future of Deepfake in gaming:

Artist Nick Denboer:

 

Video Tutorials on How to Make Deepfakes by Myself

Face swap using DeepFaceLab:

First Order Motion Model:

3D Models in Game Engine:
