Part1:
Zhuolin Li
How Do I Really Feel?
2022
“How Do I Really Feel?” is an artwork that serves as a communicative mirror, reflecting your feelings back to you through the computer’s interaction with, and recreation of, your image. When visitors open the website, a Windows 95-style interface directs them to either “AF” (Artificial Friend) or “Mirror.” The dithering evokes the horror of the unknown lurking in the negative space of the image, and it contrasts sharply with the futuristic concept of an artificial friend, or of a mirror that standardizes emotion: a process of watching one’s own feelings be reinterpreted into a digital visual style. The camera work of “How Do I Really Feel?” makes us consciously aware of how widely facial detection is used, while also prompting us to reflect on its threat to deprive us of the ability to think about our own emotions and needs, flattening complex feelings into standard categories. The artwork thus becomes not only a place to play with mature facial recognition technology but also a place to reflect on what we might lose in that process.
Elevator Pitch:
“How Do I Really Feel?” expresses concern about facial recognition technology and what it might entail. The project consists of two parts, “Artificial Friend” and “Mirror,” which let visitors observe themselves from different points of view. As visitors communicate with the “Artificial Friend,” they may begin to doubt their real emotions, which raises the question of whether facial recognition technology could know you better than you know yourself. “Mirror,” by contrast, lets you observe yourself from your own point of view. We use facial technology so frequently; will we all share a “standard face” shaped by facial detection technology in the future?
Image:
Part 2: “Documentation”
1. Process: Design and Composition
I was seeking ways to present my sketch more interactively before I started researching my ideas. I found that p5.js’s camera input could add interactive elements to my sketch, and I decided to focus on the camera work so that my project could hold its ground when exhibited in an installation format.
Also, an indie game, “Who’s Lila?”, in which players drive the story by manipulating the protagonist’s facial expressions, inspired me a great deal. When I see myself in the camera, or simply in the mirror, I always have the strange feeling that I might not be the person in front of me. Or, could the mirror reflect one’s true self and true feelings?
The distinctive aesthetic of “Who’s Lila?” also inspired me to create the 1-bit “dithering”/dither-punk style of my project.
In this scene of “Who’s Lila?”, you can only guess what is happening on the left side of the skull’s face.
This “dithering” creates a Lovecraftian horror, inviting you to imagine what is happening in the negative space of the image: the horror of the unknown. The “dithering” in “How Do I Really Feel?” has a similar effect; you find yourself guessing at your real emotion when you see yourself rendered in that scale of gray.
“Return of the Obra Dinn” is the pioneer of this dither-punk style.
The camera work and “Who’s Lila?” inspired me to produce an algorithm that standardizes people’s facial expressions.
This is the prototype of “AF” and a brief introduction:
On the right side of the screen is your real expression; on the left side is your “standard” expression after computer processing.
For example, the computer recognizes that you are smiling, so it produces a standard eight-tooth smile on the left.
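The mapping described above can be sketched as a simple lookup from the detected expression to a “standard” face image. This is an illustrative sketch, not the project’s actual code; the function name and filenames are hypothetical placeholders:

```javascript
// Map a detected expression label to a corresponding "standard" face image.
// Filenames here are illustrative placeholders, not the project's assets.
const standardFaces = {
  happy: "standard_smile.png", // e.g. the standard eight-tooth smile
  sad: "standard_sad.png",
  neutral: "standard_neutral.png",
};

function standardFaceFor(expression) {
  // Fall back to the neutral face for any unrecognized label.
  return standardFaces[expression] || standardFaces.neutral;
}
```

In the sketch, the returned filename would select which pre-drawn “standard” face to show on the left side of the screen.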
Theme: The theme of this work is a reflection on the possible objectification of human beings caused by machines recognizing human feelings from the face.
After I became acquainted with the faceApi provided by ml5.js and found that my facial expressions could be detected well by its face detection model, I got the idea of the “Artificial Friend,” a concept created by Kazuo Ishiguro in his book “Klara and the Sun.”
“Klara and the Sun” by Kazuo Ishiguro
“Klara and the Sun” is narrated by Klara, an intelligent Artificial Friend. In the book, Klara describes how she perceives the world, and especially how an “Artificial Friend” understands the deeds of human beings.
Based on this idea, I wanted to broaden my project further to include an “other’s” view of the player.
The prototype of “Artificial Friend” and a brief introduction:
On the right is a mirrored image of your face, and the computer recognizes your expression. On the left is a dialog box. The first entry in the dialog box is “I guess you feel … now”; the second is the computer’s response to your current feelings.
For example, if you make a sad face, the dialog box on the left will say, “I guess you feel upset now. But I’ve got a joke for you.” Then it will tell you a joke.
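The two-line dialog logic described above can be sketched as a small helper. This is a minimal sketch under my own naming assumptions (`buildDialog`, `responses`), not the project’s actual code:

```javascript
// Canned responses keyed by detected expression (contents are illustrative).
const responses = {
  happy: "That's great! Keep smiling.",
  sad: "But I've got a joke for you.",
  angry: "Take a deep breath. Want to hear something silly?",
  neutral: "Just checking in on you.",
};

// Build the two dialog entries: the guess, then the reply.
function buildDialog(expression) {
  const reply = responses[expression] || responses.neutral;
  return [`I guess you feel ${expression} now.`, reply];
}
```

In the sketch, the two returned strings would be fed into the dialog box one after the other.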
For further development, I was thinking of having the computer say something on purpose that might make the sad person even sadder.
Theme: The theme of this work is to give you an artificial friend… Yet is that something to be happy about, or something to be afraid of?
If you feel that the computer really is an artificial friend, should you consider its feelings? Or does it merely offer us little human beings a bit of comfort through its powerful information processing, when in reality it doesn’t care about your feelings at all?
For the text content of “AF,” I looked through psychology journals for guidance on how to encourage or comfort people.
After I finished the main interface for AF, I started to wonder what the other page should look like. I chose Windows 95 as another “standard” style, and I downloaded a Windows 95 simulator.
2. Process: Technical
I completed AF at first.
I learned how to use facial recognition with the face detection models from “https://ml5js.org.”
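A core step is turning the detection result into a single expression label. The ml5.js faceApi reports, per detected face, an `expressions` object mapping labels (happy, sad, angry, …) to probabilities; the helper below (my own illustrative name, `dominantExpression`) picks the strongest one. The p5.js/ml5.js wiring is browser-only and shown only as a hedged sketch in the comments:

```javascript
// Pick the expression label with the highest probability from an
// `expressions` object such as { happy: 0.91, sad: 0.03, ... }.
function dominantExpression(expressions) {
  let best = null;
  let bestScore = -Infinity;
  for (const [label, score] of Object.entries(expressions)) {
    if (score > bestScore) {
      best = label;
      bestScore = score;
    }
  }
  return best;
}

// Browser-only wiring, roughly (assumes ml5.js and a p5 video capture):
//   faceapi = ml5.faceApi(video, { withExpressions: true }, modelReady);
//   faceapi.detect((err, results) => {
//     const label = dominantExpression(results[0].expressions);
//   });
```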
The second problem was how to attach the camera capture to the canvas.
I solved it by moving the capture from the “setup()” function to “preload()”.
As for the dithering, instead of calculating a dither pixel by pixel, I remap an already-dithered grayscale onto my canvas.
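The remapping idea can be sketched as follows: instead of running an error-diffusion dither per pixel, each pixel (or tile) just selects the pre-dithered pattern whose density matches its brightness. The function name and level count here are my own illustrative assumptions:

```javascript
// Map a brightness value (0-255) to one of `levels` pre-dithered patterns.
// Each level index corresponds to a pre-rendered dither pattern of matching
// density, so no per-pixel error diffusion is needed at draw time.
function ditherLevel(brightness, levels) {
  const clamped = Math.max(0, Math.min(255, brightness));
  return Math.min(levels - 1, Math.floor((clamped / 256) * levels));
}
```

With, say, 8 pre-dithered patterns, the darkest pixels pick pattern 0 and the brightest pick pattern 7, which is much cheaper than dithering every frame from scratch.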
I wrote this code to realize the typewriter effect, which gives Jane’s dialog a conversational feel.
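A typewriter effect like this boils down to revealing a growing prefix of the text over time. Below is a minimal sketch (my own function name and parameters, not the project’s actual code); in a p5.js sketch you would call it with `frameCount` inside `draw()` and render the returned substring with `text()`:

```javascript
// Reveal `charsPerFrame` characters of `text` per elapsed frame.
// Returns the substring that should be visible at this moment.
function typewriter(text, framesElapsed, charsPerFrame = 1) {
  const n = Math.min(text.length, Math.floor(framesElapsed * charsPerFrame));
  return text.slice(0, n);
}
```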
I also use classes for the “HAHAH” animation and for my cursor.
At first, I also tried to add an email element to “AF,” so that it could send you an email a few days later asking if you’d like to be its friend. However, that might be too difficult for me to do right now.
I also tried to add GPT-3 to create an open-ended conversation system for AF; however, it would take me too long to figure out how to realize it.
As for “Mirror,” finding a suitable standardized facial expression cost me a lot of time. In the end, I used a set of pictures from the Chicago Face Database to create a standardized face model suited to Asian faces.
Also, a fellow student and I worked out a calculation, wrapped in a function that returns the value I need, to simplify the whole algorithm.
Besides, it was really hard to dither the whole canvas at once, so I separated it into two parts and then used the dither and copy functions.
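One way to read the split-and-rejoin approach is: process each half of the pixel buffer separately, then stitch the results back together (the project uses p5’s `copy()` for the stitching step). The sketch below is only an illustration of that structure, using a plain threshold dither on a flat grayscale array rather than the project’s pattern-based dither:

```javascript
// Dither a grayscale pixel array in two halves, then rejoin them.
// A simple threshold stands in for the real dither; the point is the
// split/process/recombine structure, not the dithering math.
function ditherHalves(gray, threshold = 128) {
  const mid = Math.floor(gray.length / 2);
  const toBits = (arr) => arr.map((v) => (v < threshold ? 0 : 255));
  const left = toBits(gray.slice(0, mid));
  const right = toBits(gray.slice(mid));
  return left.concat(right); // analogous to copy()-ing both halves back
}
```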
I also learned how to use a dictionary to store a collection of numbers.
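In JavaScript, a plain object serves as the dictionary. A minimal sketch of the pattern, with made-up keys and values (the project’s actual keys and coordinates are not shown in this document):

```javascript
// A plain object used as a dictionary grouping related numbers,
// e.g. illustrative coordinates for facial features (values are made up).
const featurePoints = {
  leftEye: [120, 140],
  rightEye: [200, 140],
  mouth: [160, 220],
};

// Look up the x coordinate of a named feature.
function getX(name) {
  return featurePoints[name][0];
}
```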
However, the standardization would fit players better if the system could recognize their gender and race, which would require a larger database.
Besides that, I even have a third idea: building a Babel tower of human-computer interaction. It has not been completed yet, as it would require HTML as the main coding medium.
3. Reflection and Further Development
I think I achieved:
- Pixel Iteration (“dithering”)
- How to realize the typewriter effect
- How to use the camera (capture), image-related functions (such as copy), and a machine learning model
- Forming a coherent aesthetic style
For further development, I think I could:
- Add the email element for AF
- Provide more specialized data for “Mirror” based on the player’s gender and ethnicity
- Use the “dither filter” more cleverly so that it consumes less CPU