Week 01 Case Study: AI Facial Profiling, Levels of Paranoia — Katie

Presentation: https://docs.google.com/presentation/d/18gT9DN-SiBhwvFuDLMEAzr__pwge8fzzJnmgTDHqEi8/edit?usp=sharing

The project I chose for this case study is Marta Revuelta’s AI Facial Profiling, Levels of Paranoia, a performance art piece built on supervised machine learning that sorts participants into two categories: likely high ability to handle firearms and likely low ability to handle firearms. Revuelta’s project was influenced by recent applications of machine learning to facial profiling: Faception, an Israeli company that claims its algorithms can predict the potential behaviors of human subjects (white-collar offenders, terrorists, pedophiles), and a paper by Shanghai Jiao Tong University researchers Xiaolin Wu and Xi Zhang claiming that criminality can be inferred from a person’s face alone.

Revuelta’s project uses a convolutional neural network. Through supervised learning, the network is trained on two datasets, which teaches it to assign one of two labels to an image. On the project website, Revuelta notes that “the first set includes 7535 images and was created by automating the extraction of the faces of more than 9000 Youtube videos on which people appear to be handling or shooting firearms, while the second set of 7540 images of faces comes from a random set of people posing for selfies.”
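As an illustration of what this kind of two-class setup involves, a minimal sketch in Keras might look like the following. The folder layout, image size, and network architecture here are my assumptions for illustration, not details from the project.

```python
# A minimal sketch of a two-class face classifier of the kind Revuelta
# describes. Folder layout, image size, and architecture are assumed for
# illustration; they are not details taken from the project.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (128, 128)  # assumed input resolution

# Assumed layout: faces/firearms/ and faces/selfies/, mirroring the two
# datasets (7535 firearm-handling faces, 7540 selfie faces).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "faces", label_mode="binary", image_size=IMG_SIZE, batch_size=32)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # one score between the two labels
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```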

At the exhibition, a participant stands as another person, presumably a performer, holds a camera to their face; the act looks disturbingly similar to someone pointing a gun at another person. There is a blatant power dynamic exposed in this alone. The camera photographs the participant’s face, and the algorithm determines whether the image (and thus the participant) is potentially dangerous. A printed photo is then sent down a conveyor belt and stamped in red or black ink with “HIGH” or “LOW,” and the photos are sorted into their respective piles in glass cases for all passersby to see.
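The sorting decision itself could be as simple as thresholding the network’s single output score. The sketch below is a guess at how that reading might work with the model above; the threshold, input size, and function name are assumptions rather than documented details of the installation.

```python
# Hypothetical sketch of the exhibition step: score one captured face and
# map the network's output to a stamp. Threshold and labels are assumed.
import numpy as np
import tensorflow as tf

def stamp_for(image_path: str, model: tf.keras.Model) -> str:
    img = tf.keras.utils.load_img(image_path, target_size=(128, 128))
    x = tf.keras.utils.img_to_array(img)[np.newaxis]  # model rescales internally
    score = float(model.predict(x)[0][0])  # sigmoid output in [0, 1]
    return "HIGH" if score >= 0.5 else "LOW"
```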

I think this project is significant; as someone who formerly studied media theory, I find surveillance a fascinating topic. Surveillance by humans already contributes to inequality, since humans bring biases and preconceptions that lead to discrimination. Surveillance by artificial intelligence is newer, but it carries the same flaws: bias in AI is unavoidable, because its programmers and developers are human and carry their own biases, and the data the models are trained on may exhibit bias as well. Building a model around such a starkly contrasting binary of low/high (or safe/dangerous, or good/evil) is already a statement in itself, but pairing that with the performance made it far more impactful. The powerlessness the participant experiences in being photographed, defined by a single stamp, and then publicly displayed with that definition as their identity captures the dehumanization at the heart of AI surveillance technology. Given my own background, I may have come to the topic with biases of my own, but Revuelta’s project certainly reinforced my concerns.

Sources:

https://revuelta.ch/ai-facial-profiling

https://www.creativeapplications.net/arduino-2/ai-facial-profiling-levels-of-paranoia-exploring-the-effects-of-algorithmic-classification/
