Categories
Research Post

Research Post 3 – Laura Lachin

“color photography was initially optimized for lighter skin tones at the expense of people with darker skin, a bias corrected mainly due to the efforts and funding of furniture manufacturers and chocolate sellers to render darker tones more easily visible in photographs — the better to sell their products” (Nabil Hassein)

This statement struck me because it is horrible that racism in technology was only addressed in order to better showcase the objects businesses were selling. It was basically just a coincidence that this fixed the problems people with darker skin were having with the technology.

 

Facial recognition can be used on other creatures, such as fish, in order to determine which fish is which. (John Oliver)

I found this very interesting because I had never heard of facial recognition being used on anything other than humans. I think this technology could be very useful in some areas, such as wildlife preservation, where animals living in a particular area are tracked. Usually this is done via catch-and-release, and the animal is either tagged or fitted with a tracking device, but this technology could prevent the need to disturb the animals at all.


Research Post 3 – Image Analysis and Face Recognition

“The Coded Gaze”

One idea that stuck out to me the most in this TED talk is how judges are using “machine-generated risk scores” (Joy Buolamwini) when deciding how long someone’s sentence will be. The fact that a sentence is set based on an average of other people is something I struggle to fully wrap my head around. This raises many important and urgent questions about how algorithmic bias could in turn have a real physical effect on a person.

Privacy

I was shocked to find out that anything existed with the possibility of “correctly [predicting] your Facebook profile and the last five digits of your SSN for a third of the public, in under three seconds” (Kyle McDonald). The many privacy concerns that arise with this are overwhelming. To have such a large database of information without people’s consent can feel invasive.


Research Post 3

It is very interesting to me how not only race affects facial recognition, but also cultural norms and actions. This article discusses the different testing and research that had to be done for facial recognition, and some situations surprised me, such as this excerpt: “If we maintain an expression for too long because our environment requires it, we can start to have emotional trauma. This is a documented problem in the service industry in Japan and Korea where employees are expected to constantly maintain smiles, creating a disorder called smile mask syndrome.” I grew up in Korea, and this sentence made me think about how different facial expressions are between Korea and America. It is interesting how many different expressions we all have, whether gradual or sudden, and how they affect our technology.

Furthermore, this article showed how racist ideals in facial recognition technology have affected communities in different ways. The last paragraph is especially powerful: “…The second is that the liberation of Black folks and all oppressed peoples will never be achieved by inclusion in systems controlled by a capitalist elite which benefits from the perpetuation of racism and related oppressions. It can only be achieved by the destruction of those systems, and the construction of new technologies designed, developed, and deployed by our own communities for our own benefit. The struggle for liberation is not a struggle for diversity and inclusion — it is a struggle for decolonization, reparations, and self-determination.” It is very important to have representation in the design of new technologies because of how they can affect different groups of people in different ways, especially when a technology was designed by one group, and essentially for that group only. We want the world to be as equal as it can be, and sometimes that means tearing down a wall to build a new one, not just adding on to it. I never realized how racist facial recognition was, and still is today, but I now understand how important it is that this injustice be reformed.

 


Research Post 3 – Bias in Machine Learning Algorithms

One main idea that caught my attention is how training sets can cause bias in a machine learning algorithm. An example that was mentioned multiple times was the inability of facial recognition software to detect the faces of Black people due to the lack of diversity in the data set used to train the machine learning model. I find this very interesting because I had never thought of machine learning algorithms as possibly being biased. This showed me how a coder's bias can surface in their algorithms and how it can lead to serious issues such as misidentifying criminals. Thus, it is important to always factor in fairness and write inclusive code.
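A toy sketch of how a skewed training set produces the unequal error rates described above; the 1-D "features", the samples, and the nearest-neighbor "detector" are all invented for illustration and stand in for no real system:

```python
# Toy illustration: a 1-nearest-neighbor "face detector" trained on a skewed
# data set performs worse on the under-represented group. All numbers are
# made-up 1-D "features".

# Group A faces are heavily represented; group B contributes a single sample.
train = (
    [(x, "face") for x in [0.0, 1.0, 2.0, 3.0]]      # group A faces
    + [(10.0, "face")]                                # lone group B face
    + [(x, "not_face") for x in [14.0, 15.0, 16.0]]   # non-face examples
)

def predict(x):
    """Classify by the label of the nearest training sample."""
    return min(train, key=lambda sample: abs(sample[0] - x))[1]

def accuracy(samples, label):
    """Fraction of samples assigned the expected label."""
    return sum(predict(x) == label for x in samples) / len(samples)

test_a = [0.5, 1.5, 2.5, 3.0, 3.5]       # faces from the well-represented group
test_b = [9.5, 10.5, 11.5, 12.5, 13.0]   # faces from the under-represented group

print("group A accuracy:", accuracy(test_a, "face"))   # 1.0
print("group B accuracy:", accuracy(test_b, "face"))   # 0.6
```

Because group B supplies only one training example, test faces from that group fall closer to the "not_face" cluster and get misclassified, even though nothing in the code singles the group out; the bias lives entirely in the data.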


Research Post 3 – Ama Achampong

John Oliver: Face Recognition

Something that got my attention, and was disturbing, was how facial recognition is being used, especially by law enforcement. Yes, it may be used in an effort to protect the nation, but to have people who were part of the Black Lives Matter movement and had outstanding warrants for their arrest being traced and targeted is unsettling. Those people were exercising their right to assemble, which begs the question of whether we truly have freedom in this country.

Moreover, there is the scenario John Oliver describes of the guy using the app FindFace on the girl in the coffee shop. It brings a whole new meaning to stalking, and the fact that they hide that by saying you can use the app to find friends, come on. It should be stalk “friends”.

Joy Buolamwini: How I’m fighting bias in algorithms

I already knew there were biases in the world; no matter where I go, there is a high chance of me experiencing them. I just never thought I could expect it from an algorithm. It is not even a living human being, and yet it could treat me the same way it treated Joy. The fact that she could not get the computer to recognize her face unless she wore a white mask was truly upsetting. It is not only her; it also misreads Asian faces, and basically any face outside the norm of the algorithm.

Joy also mentioned the “incoding” movement, which will stay with me whenever I code:

  • Who codes matters
  • How we code matters
  • Why we code matters

A system is like a blank sheet of paper; it has no subjective judgment of its own. It is the developer that provides it.

Nabil Hassein: Against Black Inclusion in Facial Recognition 

Nabil Hassein’s stance struck me, especially after listening to Joy Buolamwini’s TED Talk. Joy was fighting bias in algorithms; she wanted technology that treats every individual interacting with it equally, especially given recognition errors and possible wrongful accusations. Hassein, however, saw inclusion as at best a temporary advantage, because day-to-day control of this technology would remain predominantly in the hands of our oppressors for the foreseeable future. If anything, this technology would only worsen the biases we already face; in his view, it is better to forbid it.


Research Post 3 – Nan Lin

This image stuck in my head after I watched Joy Buolamwini’s TED Talk “How I’m fighting bias in algorithms”. She talked about how bias in algorithms is systemic: the data sets used to train machines for facial recognition are not diverse enough, so the software could not recognize Joy Buolamwini’s face and told her that no face was detected. I remember the example she gave about the Hong Kong conference, where she was the only one whose face wasn’t recognized by the machine, and how bias in algorithms can travel so far, so quickly.

This photo was from John Oliver’s “Last Week Tonight” show, in which he focused on how facial recognition can harm people’s privacy and violate human rights when used in the wrong way. This image stood out because it is the moment the founder of Clearview took the reporter’s current photo and tracked it back to a photo of the reporter when he was 16 years old. The founder, Hoan Ton-That, emphasized that he will only offer his software to the authorities, but when asked whether a country with a very different opinion on people’s sexual orientation could use it to harm people they consider “not normal” or “a sin”, he didn’t answer the question directly, which makes people, including myself, have trouble believing his statements and “good intentions”.


Research Blog 3

I think the development of face recognition techniques is interesting. The techniques involve three major stages: detection, recognition, and tracking, which allow the camera to recognize the existence of faces, distinguish one face from other faces, and keep track of the movement being made by that specific face. Although this technique is used in security systems for detecting the faces of criminals, the software still contains some bugs; for example, humans with similar facial features can be identified as the same person. According to Last Week Tonight, the police have even used the face of a celebrity to successfully identify a suspect.
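The "recognition" stage described above is often implemented by reducing each face to an embedding vector and declaring a match when two vectors are close enough. A minimal sketch in Python; the vectors and the threshold below are made up for illustration (no real face-recognition library is involved), and the same sketch shows how a loose threshold produces exactly the look-alike bug mentioned above:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_person(emb1, emb2, threshold=0.9):
    """Declare a match when similarity exceeds the threshold.
    Set the threshold too low and merely similar-looking people
    get merged into one identity."""
    return cosine_similarity(emb1, emb2) >= threshold

# Hypothetical embeddings for two photos of one person and one of another.
alice  = [0.9, 0.1, 0.3]
alice2 = [0.88, 0.12, 0.31]
bob    = [0.2, 0.8, 0.4]

print(same_person(alice, alice2))  # similar embeddings: match
print(same_person(alice, bob))     # dissimilar embeddings: no match
```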

One thing that caught my attention was the inability to detect women who have darker skin. In Joy Buolamwini’s TED Talk, she talked about how, as an undergraduate, she discovered the problem of cameras being unable to recognize the faces of dark-skinned women as faces. Surprisingly, this problem had still not been solved by the time she was a graduate student at MIT, and it may not even be considered an urgent problem that needs solving. During the talk, Buolamwini also described how the camera detected the appearance of a face only after she put a white mask over her own face. I think that is ironic and discriminatory in some way, because the system decided that a face-like, light-colored object looks more like an actual human face than a real human face does.


Research Post #2

The Jobless Rate for People Like You

Amanda Cox’s Data Set, “The Jobless Rate for People Like You” really interested me. 

This article was published in 2009, and I imagine it was a visual representation of the effects of the 2008 recession. Unfortunately, the code for this project no longer runs because Adobe Flash Player is not supported. However, I still think there is a lot to take away from the article. What’s really interesting about this project is the title, because it influences the way people interact with the visual information. The data is objective, but the way it is presented makes the reader think about their relationship to the data in a deeper context. This graph could just be titled “The Jobless Rate”, but in being titled “The Jobless Rate for People Like You”, it makes readers acknowledge their biases. Data is used in newspaper articles all the time, but the execution of this data is particularly interesting to me because of how it changes the reader’s relationship to it.


Research Post #1

I really enjoyed listening to Lauren McCarthy’s Eyeo Talk from 2019. Through the presentation, you can tell a lot of their work has a sense of playfulness. I appreciate the ways their work raises questions about surveillance and the internet. The Follower project in particular was captivating to hear about and watch. It definitely reminded me of the parasocial relationships we can adopt when we interact with content creators online. I think there is something distinctly human about the work Lauren McCarthy is producing, and it’s really neat to see that aspect of humanity brought into a tech space. Another interesting aspect of their work is the participants. I have to wonder how they go about finding willing participants for these semi-invasive art pieces. Especially with the LAUREN and Follower projects, the idea of a human acting as your follower or your Alexa sounds like something no one would agree to. However, the Follower project required that subjects apply to be followed, which is another interesting layer. Overall, I felt that the Eyeo Talk was incredibly engaging, and I really enjoyed seeing their work because it was funny and creative.


Research Post #2 – Architecture of Radio

http://www.architectureofradio.com

 

The project I dove into is called “Architecture of Radio”. It is an application created by Richard Vijgen in 2016. The project is meant to visualise the invisible radio-signal networks that we are constantly surrounded by, dubbed the “infosphere”: an interdependent environment of invisible networks made up of “informational entities”. These signals come from cell towers, wifi routers, and various types of satellites. The application is site-based, meaning that its appearance morphs based on your geographic location and where your device is pointed. In the video demonstration we see the lines of the network shift, shrink, and grow as the user moves around, which shows us how these invisible networks actually behave in 360 degrees. The application relies on openly available data covering almost 7 million cell towers, 19 million wifi routers, and hundreds of satellites.

What drew me to this project was its initial appearance. The blue and white map reminded me of something I’d see in a futuristic sci-fi movie, like some piece of technology that’s meant to look completely advanced and impossible; however, Architecture of Radio is very much a living representation of the present, not the future. Vijgen could have represented the infosphere in so many ways: dots, lines, pink or green or yellow circles, but he chose this specific appearance for the project. It’s clean and simple, yet a spectacle once you understand what you’re looking at. It opens the observer’s eyes and lets them into a world they’d never get to see otherwise. I suppose the algorithm behind this infosphere visualization must be quite complicated. It’s constantly taking in millions of data points that change each frame and mapping them out. For this, I’d assume the code takes in the coordinates of the current location and the signals passing through it at that exact time, then draws a line to symbolize the origin, direction, and destination of the signal.
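That guess can be sketched in a few lines: given the device's coordinates and the coordinates of nearby signal sources, the distance and compass bearing to each source are enough to know where a line should be drawn. A minimal sketch in Python; the device position and tower locations below are invented for illustration, and the real app's data sources and 3-D rendering are of course far more involved:

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points via the haversine formula."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2 (0 degrees = due north)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360

# Hypothetical device position and nearby cell towers (made-up coordinates).
device = (52.09, 5.12)
towers = [(52.10, 5.12),   # a tower due north of the device
          (52.09, 5.15)]   # a tower due east of the device

for lat, lon in towers:
    d = distance_km(device[0], device[1], lat, lon)
    b = bearing_deg(device[0], device[1], lat, lon)
    print(f"tower at ({lat}, {lon}): {d:.2f} km away, bearing {b:.0f} deg")
```

With the device's compass heading, each bearing can then be turned into a screen direction, which is roughly what the author's "draws a line to symbolize the origin, direction, and destination of the signal" suggests.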