Coded Bias

To be honest, Coded Bias is a very rich documentary and closely tied to politics, an area I know very little about, so I can’t offer much of an opinion on that side. The following are just a few humble observations.

 

  1. The film mentions that artificial intelligence was first developed in 1956, when perhaps a hundred people were researching the topic. Those hundred people, to a certain extent, determined the direction of AI. Even now, behind the AI that is deployed at large scale, it is still a very small number of people who have the power to regulate the models and take part in the programming. This makes me wonder how democratic and inclusive AI really is.
  2. The film gives many examples of AI bias. For example, Amazon’s face recognition system is racially biased, Apple’s new digital credit card model discriminates by gender, and the risk assessment tool judges use at sentencing is also racially biased. This raises several questions for me. Why are these models so easily biased by race and gender? As the film suspects, is it because AI is a black box, and because people don’t really scrutinize its effects, that the people behind it can deliberately impose bias for their own reasons? And furthermore, what data should be used to train a model? Who should be allowed to take part in the process? How do we determine that a model has been trained well enough to be put into use? I wonder whether the relevant systems and laws are simply missing.
  3. The film also introduces the concept of screen-by-screen targeting. Because you can only see what appears on your own screen, or what happens to you, and you have no access to group-level data, an individual is unlikely to recognize a pattern as bias against a group, and it is difficult to organize resistance. AI bias can therefore easily remain invisible, which makes it more dangerous.
  4. The film also mentions another problem with putting AI into use: AI cannot give logical explanations for its results. Everything is attributed to an algorithm that makes judgments or predictions. When people are fired, treated unfairly, or lose various opportunities, it is very hard to accept that the machine that caused the outcome cannot give a reasonable explanation, especially considering that the prediction or judgment process may be biased in unknown ways. Our right to know the criteria by which we are evaluated is denied by machines, and to the extent that this opacity is an inherent trait of AI, it is very difficult to change.
  5. Finally, the film brings up something I had completely overlooked: how frequently and how openly states, especially China, collect data. We seem to be completely used to having cameras everywhere, to needing identity authentication to register for or use every app, and to paying by face recognition everywhere. When we think of products or companies monitoring our lives and collecting data, we feel our privacy has been violated. But once the one doing the infringing is the state, why do citizens feel nothing? I thought about it for a moment, and it probably stems from trust in and familiarity with the state. Because I believe that the facial recognition, the cameras, and the identity authentication are there to ensure the security and rights of citizens. Because I think: since I have done nothing wrong, what harm can all this surveillance do to me? This is the moment when a phrase from the film hit me – “algorithmic obedience training”. Even if I did nothing wrong, what about the bias of the model? And even if the model is not biased, is this monitoring, at this scale, justified?

Lastly, on the impact of AI miscalculations on my own life, there are two incidents.

The first is that, for a time, our high school required face recognition to enter the campus, but because I had changed my hairstyle and glasses, the photo on the campus website differed greatly from how I actually looked, so face recognition kept failing for me. Every time I entered the campus I had to log in to the campus website and show my campus card, sometimes even calling my teachers to confirm my identity. In the end it was solved when a security guard helped me replace my photo in the system.

The second was that, during a serious period of the epidemic in Beijing, Beijing’s 健康宝 (Health Kit) app would change the color of a citizen’s health code if it detected that they had purchased medication associated with COVID-related symptoms such as fever. But the scope of flagged medication was so broad that I was misflagged when I was only buying ointment to help my ear piercings heal. To clear the discoloration, I not only had to take a PCR test but also had to contact the community office repeatedly, which was a torturous and complicated process.

Both of these things caused a lot of trouble, but fortunately neither had serious consequences. And since I knew why the AI detection had failed, the wrong identification was corrected through active communication with the people in charge.

Perhaps these are necessary conditions for dealing with AI bias: users should know why they were identified incorrectly, and there should be someone in charge who can correct the erroneous result.
