Reflection on “Excavating AI”

Reflect on the relationship between labels and images in a machine learning image classification dataset. Who has the power to label images, and how do those labels and the machine learning models trained on them impact society?

I’d like to say that the relationship between labels and images is hard to pin down when excavating AI, since the line that defines it is quite fuzzy. That fuzziness fits the artist RenĂ© Magritte’s observation that “images in and of themselves have, at best, a very unstable relationship to the things they seem to represent.”

In “Excavating AI,” we can see how excited people were to train machine learning systems and bring them into daily use; technical challenges like object detection and facial recognition have been largely solved, and people are happy with that. As the article describes, these techniques are quickly moving into the architecture of social institutions: they are even making determinations on social questions like which suspects to arrest or whom to interview for a job, and that is incredible. By “incredible” I am not praising AI’s power to label images; rather, I’d like to leave a question mark here before we put these systems into widespread daily use: can we ensure that what a machine does is right on the subjective level of humanity?

Perhaps there is no definite answer yet, and it is good to see people training AI systems in more and more fields, bringing new elements and techniques into society and daily life. But we should make clear what is at stake in the way AI systems use labels to classify everything before we give them the capacity to judge.
