sent.ai.rt – Final Project Proposal – Abdullah Zameek

sent.ai.rt – an interactive portrait  

concept:

sent.ai.rt is a real-time interactive self-portrait that uses machine learning techniques to create an art installation whose behavior changes with the user's mood, as derived from their facial expressions. The user looks into a camera and is presented with a video feed in the shape of a portrait, which they can then interact with. The video feed responds to the user's emotion in two ways: it changes its "style" depending on the expression, and the web page plays back music corresponding to the mood. The style overlaid onto the video feed comes from famous paintings whose color palettes are associated with the mood, and the music was crowd-sourced from a group of students at a highly diverse university. The primary idea is to give individuals the ability to create art that they might not otherwise have been able to create. On a secondary level, since the styles used come from popular artists such as Vincent van Gogh, the piece pays homage to their craft, creating new works that draw from theirs.

The name "sent.ai.rt" combines "sentiment", representing the emotion that dictates how the portrait responds, with "art". The "ai" in the middle stands for "artificial intelligence", the driving force behind the actual interaction.

inspiration:

The use of machine learning in the arts has never been more prominent. With more technical tools coming out each day, artists have found new and exciting ways to display their craft. One such artist, Gene Kogan, was among the pioneers in the use of machine learning to create interactive art. Inspiration for sent.ai.rt was heavily drawn from his project "Experiments with Style Transfer" (2015), in which Kogan recreated several paintings in the styles of others. For example, he recreated the Mona Lisa in styles ranging from Van Gogh's "Starry Night" to the look of the Google Maps layout. Another popular artist, Memo Akten, created a portrait-based AI piece called "Learning to See: Hello World" (2017), which involves teaching an AI agent how to see. My project draws heavy inspiration from both artists' work to create a cohesive piece that takes into account human emotion and its interaction with computer-based intelligence.

production:

The project is completely web-based: it uses standard web technologies such as HTML (which defines the structure of the website), CSS (which dictates how the website looks) and JavaScript (which allows programming/algorithmic logic to be implemented). In addition to these, the website will also use several JavaScript-based frameworks, namely p5.js (which allows a great deal of design and multimedia work to be done) and ml5.js (which is a machine learning framework). The machine learning work can be divided into two distinct tasks: recognizing human emotion, and applying the style associated with that emotion to the video feed. The former is referred to as "sentiment analysis" and will be done with the help of an additional JavaScript add-on, FaceAPI. The latter is referred to as "neural style transfer" and will be done with ml5.js's style transfer functionality.
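As a rough sketch of how the two tasks could fit together, assuming the FaceAPI add-on is face-api.js, that ml5.js (v0.x) provides the styleTransfer method, and that the model paths and mood-to-style mapping below are placeholders:

```javascript
// A minimal sketch of the two ML tasks, assuming p5.js, ml5.js (v0.x) and
// face-api.js are loaded via script tags. Model paths, the mood list and the
// mood-to-style pairing are hypothetical stand-ins.

let video;
let currentMood = 'neutral';
let stylesReady = false;
let transferring = false;

// Hypothetical pre-trained style models, one per mood.
const styleModels = {
  happy: 'models/starry_night', // bright, warm palette
  sad: 'models/wave',           // muted, cool palette
  angry: 'models/scream',       // harsh, saturated palette
};
const styles = {};

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();

  // Task 2 (neural style transfer): load one ml5.js model per mood.
  let pending = Object.keys(styleModels).length;
  for (const mood in styleModels) {
    styles[mood] = ml5.styleTransfer(styleModels[mood], video, () => {
      if (--pending === 0) stylesReady = true;
    });
  }

  // Task 1 (sentiment analysis): load the face-api.js detector and
  // expression classifier, then start polling the webcam.
  Promise.all([
    faceapi.nets.tinyFaceDetector.loadFromUri('models/faceapi'),
    faceapi.nets.faceExpressionNet.loadFromUri('models/faceapi'),
  ]).then(detectMood);
}

function detectMood() {
  faceapi
    .detectSingleFace(video.elt, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions()
    .then((result) => {
      if (result) {
        // Keep the expression with the highest confidence, e.g. 'happy'.
        const [mood] = Object.entries(result.expressions)
          .sort((a, b) => b[1] - a[1])[0];
        if (mood in styles) currentMood = mood;
      }
      detectMood(); // keep polling
    });
}

function draw() {
  const style = styles[currentMood];
  if (!stylesReady || !style) {
    image(video, 0, 0, width, height); // plain feed until a mood is detected
    return;
  }
  if (transferring) return; // wait for the previous frame to finish
  transferring = true;
  // Redraw the current video frame in the style tied to the detected mood.
  style.transfer((err, result) => {
    transferring = false;
    if (result) loadImage(result.src, (img) => image(img, 0, 0, width, height));
  });
}
```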

Additionally, the assets for this project, such as the images and music, have been procured from the Internet. The choice of music was determined by an online survey in a closed university group, where students were asked to list songs that they associate with a particular set of moods.
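The survey results could then be wired in as a simple mood-to-track lookup. A minimal sketch, assuming p5.sound is available; the file names are placeholders for the survey-chosen tracks:

```javascript
// Crowd-sourced audio side of the piece: one representative track per mood.
let tracks = {};
let playing = null;

function preload() {
  // Placeholder file names; the real tracks come from the survey results.
  tracks.happy = loadSound('audio/happy.mp3');
  tracks.sad   = loadSound('audio/sad.mp3');
  tracks.angry = loadSound('audio/angry.mp3');
}

// Called whenever the detected mood changes.
function playMood(mood) {
  if (playing === tracks[mood]) return; // the right track is already playing
  if (playing) playing.stop();
  playing = tracks[mood];
  if (playing) playing.loop();          // loop until the mood changes again
}
```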

In terms of feasibility, the technology to make this project a reality already exists, and the piece can certainly be extended with further functionality (such as the ability to freeze, save and tweet out a frame from the feed) if necessary.
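As a rough illustration of the freeze-and-save part of that extension, assuming the p5.js canvas holds the stylized feed (tweeting a frame out would additionally require Twitter's API):

```javascript
// Hypothetical keyboard controls for the freeze/save extension.
let frozen = false;

function keyPressed() {
  if (key === 'f') {                   // freeze or resume the portrait
    frozen = !frozen;
    frozen ? noLoop() : loop();        // stop/restart p5's draw loop
  }
  if (key === 's') {
    saveCanvas('sentairt', 'png');     // download the current frame
  }
}
```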
