Machine Learning Bias in Google’s Quick, Draw!

Machine learning systems increasingly influence many aspects of everyday life, and power both the hardware and software products that serve people globally. As such, researchers and designers seeking to create products that are useful and accessible for everyone often face the challenge of finding datasets that reflect the variety and backgrounds of users around the world. To train these machine learning systems, open, global – and growing – datasets are needed.

Google’s latest approach to helping wide, international audiences understand how neural networks work is a fun drawing project called Quick, Draw! In short, the system tries to guess what you are drawing within 20 seconds. While the goal of Quick, Draw! was simply to create a fun game that runs on machine learning, it has resulted in 800 million drawings from twenty million people in 100 nations, from Brazil to Japan to the U.S. to South Africa.

In an article posted in 2017, Google shared the inherent bias in the Quick, Draw! dataset they collected. One example that stands out is the shoe example: when more than 115,000 drawings of shoes in the Quick, Draw! dataset were analyzed, it was discovered that a single style of shoe, which resembles a sneaker, was overwhelmingly represented. Because it was so frequently drawn, the neural network learned to recognize only this style as a “shoe.”


This bias may be rooted in the Quick, Draw! user base (could it be mostly men?), or is it even more primal than that? Could this bias be rooted in us – in how we (women and men) think of a shoe? This, of course, varies between cultures and contexts, and relates to how we know what we know – an epistemological question.

Of course, when we build machines that aspire to know something, we want them to know all the possibilities of that specific something, to keep learning, and to stay flexible to changes and a wide variety of possibilities.

This, in my perspective, is an enormous challenge, both philosophically and technically. Machine learning has to adapt as humans do, because change is the only constant.


What’s good bot 👍

We live in an era of media explosion. Our minds are bombarded with information from countless media channels on a daily basis. We can’t really control it – it’s everywhere. With data becoming more accessible every day, it seems there is more and more bad news reported everywhere.

It isn’t that these are the only things that happen. Perhaps we, the readers, have trained journalists to focus on these stories, and we are fed what we actually need in order to progress and survive. Bad news sells.

But what if there was a channel that surfaced only good news? Would we watch it?

I chose to focus on this in my bot project.

Using the News API, I’m querying a list of news headlines from different news channels throughout the world and from different topics such as business, entertainment, health, science, and technology.
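
The querying step can be sketched as follows. This assumes NewsAPI.org’s top-headlines endpoint; the API key and category values are placeholders:

```javascript
// Sketch of the headline query, assuming the NewsAPI.org
// "top-headlines" endpoint; API_KEY is a placeholder.
const API_KEY = 'YOUR_NEWS_API_KEY';

// Build the request URL for a given topic category.
function buildHeadlinesUrl(category, pageSize = 20) {
  const params = new URLSearchParams({
    category,
    pageSize: String(pageSize),
    language: 'en',
    apiKey: API_KEY,
  });
  return `https://newsapi.org/v2/top-headlines?${params}`;
}

// Fetch headlines for one topic and collect the titles in an array.
async function fetchHeadlines(category) {
  const res = await fetch(buildHeadlinesUrl(category));
  const data = await res.json();
  return data.articles.map((article) => article.title);
}
```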

After fetching the results back from the API and pushing them to an array, I check each headline’s score using the sentiment analysis library sentiment.js, filter for positive results only, and pick the highest score from a batch of 20 headlines.
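
The filtering step looks roughly like this. The tiny word-list scorer below is only a stand-in for sentiment.js (whose `analyze()` method similarly returns a numeric score per text), and the word lists are invented for illustration:

```javascript
// Stand-in for sentiment.js: a tiny word-list scorer that returns a
// numeric score per headline (sentiment.js's analyze() returns an
// object with a `score` field that is used the same way).
const POSITIVE = ['win', 'wins', 'good', 'great', 'breakthrough', 'success', 'hope'];
const NEGATIVE = ['crisis', 'bad', 'war', 'loss', 'dies', 'crash', 'fail'];

function scoreHeadline(headline) {
  let score = 0;
  for (const word of headline.toLowerCase().split(/\W+/)) {
    if (POSITIVE.includes(word)) score += 1;
    if (NEGATIVE.includes(word)) score -= 1;
  }
  return score;
}

// Keep only positive headlines, then pick the single best one
// from the batch (the bot uses batches of 20).
function pickBestHeadline(headlines) {
  const positive = headlines
    .map((text) => ({ text, score: scoreHeadline(text) }))
    .filter((h) => h.score > 0);
  if (positive.length === 0) return null;
  return positive.reduce((best, h) => (h.score > best.score ? h : best)).text;
}
```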

Posting to Mastodon:

Once every 5 minutes, I push my results to Mastodon via the Mastodon API, creating a feed of filtered good-news results – the What’s Good Bot 👍.
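
The posting loop can be sketched as follows. The instance URL, token, and `formatStatus` helper are placeholders and assumptions; `POST /api/v1/statuses` is Mastodon’s standard endpoint for creating posts:

```javascript
// Sketch of the posting loop. INSTANCE and ACCESS_TOKEN are
// placeholders; formatStatus is an illustrative helper.
const INSTANCE = 'https://example.social';
const ACCESS_TOKEN = 'YOUR_MASTODON_TOKEN';

// Format the chosen headline as the bot's status text.
function formatStatus(headline) {
  return `👍 ${headline}`;
}

// Post one status via the Mastodon REST API.
async function postStatus(headline) {
  await fetch(`${INSTANCE}/api/v1/statuses`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ status: formatStatus(headline) }),
  });
}

// Every 5 minutes: fetch, filter, and post the best headline.
// (fetchAndPickHeadline is assumed to wrap the querying and
// sentiment-filtering steps described above.)
// setInterval(async () => {
//   const headline = await fetchAndPickHeadline();
//   if (headline) await postStatus(headline);
// }, 5 * 60 * 1000);
```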


Tree Music – generative music derived from NYC tree census data

I stumbled upon a database containing NYC tree information. Over 60,000 trees are documented in this database, which includes tree type, diameter, health condition, GPS location, and more.

I thought it would be interesting to build an experiment that sonifies the tree information, to see what kinds of strange sounds and patterns I could get from different neighborhoods, streets, and parks.

I started building this experiment using the Mapbox API as the core element to visualize the tree instances on a map. With the help of the Mapbox API, I was able to filter, search, and gather the information I needed to pass to the auditory part of the system – Tone.js.

Using Tone.js, I’m sequencing sounds in correlation to each tree’s diameter variable from the database. The notes are sent to three different synthesizers built in Tone.js that produce the sound. I am mapping the diameter of the trees to a C major scale over three octaves, and the density of the sequence is determined by the amount of trees in the area.
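
The diameter-to-pitch mapping might look like this; the exact diameter range and octave choices are assumptions, not the project’s actual values:

```javascript
// Illustrative sketch of the diameter-to-pitch mapping (the diameter
// range and octaves below are assumptions, not the project's values).
const C_MAJOR = ['C', 'D', 'E', 'F', 'G', 'A', 'B'];

// Map a trunk diameter (inches) onto a C major scale over three
// octaves (C3..B5): larger trees get higher notes.
function diameterToNote(diameter, maxDiameter = 40) {
  const steps = C_MAJOR.length * 3; // 21 notes over three octaves
  const clamped = Math.max(0, Math.min(diameter, maxDiameter));
  const index = Math.min(steps - 1, Math.floor((clamped / maxDiameter) * steps));
  const octave = 3 + Math.floor(index / C_MAJOR.length);
  return C_MAJOR[index % C_MAJOR.length] + octave;
}

// With Tone.js, each note would then be sent to a synth, e.g.:
//   const synth = new Tone.Synth().toDestination();
//   synth.triggerAttackRelease(diameterToNote(tree.diameter), '8n');
```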


Upending the Uncanny Valley

Reading Upending the Uncanny Valley made me think of the growth and progression of human consciousness over the years. There have been physiological and cultural reasons (among numerous others) for the formation, development, and progression of human consciousness. In the beginning, some thousands of years ago, it was limited, as humans did not have the technological tool of language, not to mention written text. The formation of these tools over the years, along with other advances in human history, broadened our minds, and we are now able to grasp a wide array of ideas, metaphors, and other concepts that humans thousands of years ago simply couldn’t.

When the art of cinema was in its early days, people actually believed that what they saw on the screen really existed. On Jan. 25, 1896, the Lumière brothers screened their short film “L’Arrivée d’un Train en Gare de La Ciotat” (“Arrival of a Train at La Ciotat Station”) at the Salon Indien du Grand Café in Paris. There were only a handful of people in the world who had seen a movie at that point, and it was probably the first time for everyone in this particular audience. And, so the story goes, they did not take it sitting down. On the contrary, they broke into a panic, screaming and running for their lives as the locomotive on the screen seemingly headed straight toward them.

Some 100 years later, we can mock these people in the movie theatre, for it is rare that we mistake a visual effect in the cinema for reality. Technology has introduced us to new ways of knowing what is real and what is not.

I believe that the uncanny valley is an ongoing continuum that keeps appearing as a side effect of what we know vs. what we expect. In my opinion, as our ways of knowing progress and broaden, upending it through hyper-resemblance is bound to fail.

Particles – Installation by Daito Manabe and Motoi Ishibashi


Particles is an installation by Japanese artists Daito Manabe and Motoi Ishibashi, created in 2011. In it, wirelessly controlled, illuminated LED balls travel along an eight-spiral rail, imitating a rollercoaster ride.

The position of each ball is determined via a total of 17 control points on the rail. Every time a ball passes one of them, the ballʼs positional information is transmitted via a built-in infrared sensor. Sound is generated through digital synthesis each time a ball passes one of these checkpoints – the luminance information of the ballʼs LED pattern is picked up by the sensor and translated into MIDI parameters to produce sounds played through eight-channel speakers. In effect, the movement of the balls, their speed, and the way they flicker form the score of this generative piece.
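
As a purely illustrative sketch (not the artists’ actual code), the luminance-to-MIDI translation described above could look like:

```javascript
// Purely illustrative: translate an LED luminance reading (0..255)
// at a checkpoint into a MIDI note number and velocity. The note
// range C2..C6 is an assumption for the sketch.
function luminanceToMidi(luminance) {
  const clamped = Math.max(0, Math.min(luminance, 255));
  return {
    note: 36 + Math.round((clamped / 255) * 48), // MIDI 36 (C2) .. 84 (C6)
    velocity: Math.round((clamped / 255) * 127),
  };
}
```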

A touchscreen UI allows the user to interact with the system. Using it, they can change the LED light patterns over time, changing in effect the characteristics of the sound – its rhythm and other properties.

System plan


I find the use of movement and light manipulation as a means to produce notation for sound in this piece extremely interesting. Balancing the control the observer has over the system with nature’s randomness makes for a piece that is fascinating to watch, both visually and sonically.