Machine learning bias in Google's Quick, Draw!

Machine learning systems increasingly influence many aspects of everyday life, powering both the hardware and software products that serve people globally. As such, researchers and designers seeking to create products that are useful and accessible for everyone often face the challenge of finding datasets that reflect the variety of backgrounds of users around the world. To train these machine learning systems, open, global – and growing – datasets are needed.

Google's latest approach to helping a wide, international audience understand how neural networks work is a fun drawing project called Quick, Draw!. In short, the system tries to guess what you are drawing within 20 seconds. While the goal of Quick, Draw! was simply to create a fun game that runs on machine learning, it has resulted in 800 million drawings from twenty million people in 100 nations, from Brazil to Japan to the U.S. to South Africa.

In an article posted in 2017, Google shared the inherent bias in the Quick, Draw! database they collected. One example that stands out concerns shoe styles: when Google analyzed 115,000+ drawings of shoes in the Quick, Draw! dataset, it discovered that a single style of shoe, which resembles a sneaker, was overwhelmingly represented. Because it was so frequently drawn, the neural network learned to recognize only this style as a “shoe.”
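The imbalance behind this kind of bias is easy to surface by simply counting labels. Here is a minimal sketch; the style labels and sample data are hypothetical, not taken from the real Quick, Draw! dataset:

```javascript
// Count how often each style label appears in a set of drawings.
// A heavily skewed distribution is exactly the kind of imbalance
// that taught the network to see only sneakers as "shoes".
function styleDistribution(drawings) {
  const counts = {};
  for (const d of drawings) {
    counts[d.style] = (counts[d.style] || 0) + 1;
  }
  return counts;
}

// Hypothetical sample, skewed toward one style:
const sample = [
  { style: "sneaker" }, { style: "sneaker" }, { style: "sneaker" },
  { style: "sneaker" }, { style: "heel" }, { style: "boot" },
];

console.log(styleDistribution(sample)); // { sneaker: 4, heel: 1, boot: 1 }
```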


This bias may be rooted in the Quick, Draw! user base (could it be mostly men?), or is it even more primal than that? Could this bias be rooted in us, in how we (women and men alike) think of a shoe? This, of course, varies between cultures and contexts, and relates to how we know what we know – an epistemological question.

Of course, when we build machines that aspire to know something, we want them to know all the possibilities of that something, to keep learning, and to stay flexible to change and a wide variety of possibilities.

This, in my view, is an enormous challenge, both philosophically and technically. Machine learning has to adapt the way humans do, because change is the only constant.


Tree music – generative music derived from NYC tree census data

I stumbled upon a database containing NYC tree information. Over 60,000 trees are documented in this database, which includes tree type, diameter, health condition, GPS location and more.

I thought it would be interesting to build an experiment that sonifies the tree information, to see what kinds of weird sounds and patterns I could get from different neighborhoods, streets, and parks.

I started building this experiment using the Mapbox API as the core element for visualizing the tree instances on a map. With the help of the Mapbox API, I was able to filter, search, and gather the information I needed to pass to the auditory part of the system – Tone.js.
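The filtering step can be sketched as a plain function over GeoJSON-style features. The field names below (spc_common, tree_dbh, nta_name) follow the NYC street tree census schema, but the sample data is made up; in the actual app this kind of filter runs against features loaded into Mapbox:

```javascript
// Keep only the tree features that fall inside a given neighborhood
// (NTA = Neighborhood Tabulation Area in the census data).
function treesInNeighborhood(features, neighborhood) {
  return features.filter(f => f.properties.nta_name === neighborhood);
}

// Hypothetical GeoJSON-like features for illustration:
const trees = [
  { properties: { spc_common: "pin oak",     tree_dbh: 12, nta_name: "Astoria" } },
  { properties: { spc_common: "honeylocust", tree_dbh: 7,  nta_name: "Chelsea" } },
  { properties: { spc_common: "ginkgo",      tree_dbh: 21, nta_name: "Astoria" } },
];

console.log(treesInNeighborhood(trees, "Astoria").length); // 2
```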

Using Tone.js, I sequence sounds in correlation with each tree's diameter value from the database. These are sent to three different synthesizers built in Tone.js that produce the sound. I map the diameters of the trees to a C major scale over three octaves, with the sequence determined by the number of trees in the area.
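The diameter-to-pitch mapping can be sketched like this. The diameter range (1–40 inches) and the C3–B5 octave span are assumptions for illustration, not necessarily what the experiment uses:

```javascript
// Build a C major scale over three octaves (C3 ... B5 = 21 notes).
const SCALE = ["C", "D", "E", "F", "G", "A", "B"];
const NOTES = [];
for (let octave = 3; octave <= 5; octave++) {
  for (const step of SCALE) NOTES.push(step + octave);
}

// Bucket a tree diameter (dbh, in inches) onto the scale:
// thin trees map to low notes, thick trees to high ones.
function diameterToNote(dbh, minDbh = 1, maxDbh = 40) {
  const clamped = Math.min(Math.max(dbh, minDbh), maxDbh);
  const t = (clamped - minDbh) / (maxDbh - minDbh);
  const index = Math.min(Math.floor(t * NOTES.length), NOTES.length - 1);
  return NOTES[index];
}

console.log(diameterToNote(1));  // "C3"
console.log(diameterToNote(40)); // "B5"

// In the app, a note like this would be handed to a Tone.js synth, e.g.:
// new Tone.Synth().toDestination().triggerAttackRelease(diameterToNote(dbh), "8n");
```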


Particles – Installation by Daito Manabe and Motoi Ishibashi


Particles is an installation by Japanese artists Daito Manabe and Motoi Ishibashi, created in 2011. In it, wirelessly controlled, illuminated LED balls travel along an 8-spiral rail, imitating a rollercoaster ride.

The position of each ball is determined via a total of 17 control points on the rail. Every time a ball passes one of them, the ball's positional information is transmitted via a built-in infrared sensor. Sound is generated through digital synthesis each time a ball passes one of these checkpoints: the luminance information of the ball's LED pattern is picked up by the sensor and translated into MIDI parameters to produce sounds that are played through eight-channel speakers. In effect, the movement of the balls, their speed, and the way they flicker form the score of this generative piece.
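To make the luminance-to-MIDI idea concrete, here is a purely illustrative sketch (not the artists' actual code) of one plausible translation: an 8-bit luminance reading scaled into the 0–127 range MIDI uses for velocity:

```javascript
// Hypothetical mapping of an LED luminance reading (0-255) to a
// MIDI velocity value (0-127). The real installation's mapping from
// LED patterns to MIDI parameters is not public; this only shows
// the general shape of such a translation.
function luminanceToMidiVelocity(luminance) {
  const clamped = Math.min(Math.max(luminance, 0), 255);
  return Math.round((clamped / 255) * 127);
}

console.log(luminanceToMidiVelocity(0));   // 0   (dark ball -> silent)
console.log(luminanceToMidiVelocity(255)); // 127 (full brightness -> max velocity)
```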

A touch-screen UI allows users to interact with the system: by using it, they can change the LED light patterns over time, altering, in effect, the characteristics of the sounds, their rhythm, and other properties.

System plan


I find the use of movement and light manipulation as a means of producing notation for sound in this piece extremely interesting. Balancing the control the observer has over the system against nature's randomness makes for a piece that is fascinating to watch, both visually and sonically.