MLAD: Assignment 1, DoodleNet Test & Reading Prompt Responses

Using the code we wrote in class last week, I did a short test run with DoodleNet, keeping the camera in place but not necessarily staying still, or even in frame, just to see what the classifier would come up with.
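For reference, the base setup looked roughly like the following. This is only a minimal sketch of the usual ml5.js + p5.js pattern, not necessarily the exact class code (the classifyVideo helper name is mine, and the API differs slightly between ml5 versions):

let classifier;
let video;
let label = "waiting...";

function preload() {
  // Load the DoodleNet model that ships with ml5.js
  classifier = ml5.imageClassifier("DoodleNet");
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  classifyVideo();
}

function classifyVideo() {
  // Classify the current video frame
  classifier.classify(video, gotResults);
}

function gotResults(error, results) {
  if (!error) {
    label = results[0].label;
  }
  classifyVideo(); // loop on the next frame
}

function draw() {
  image(video, 0, 0);
  fill(255);
  textSize(32);
  text(label, 10, height - 10);
}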

The first result that came up was “moustache,” which is fairly appropriate since I do have a moustache. However, the next few high-confidence results made less sense: “camouflage”, “hurricane”, and “lion” (the last of which popped up most frequently). The most outlandish result to me was “hurricane”, since the only reason I could think of for it was my gray sweatshirt; but given the sweatshirt’s solid color, and the fact that color alone seemed to be driving the classification, this was the result furthest off base. Although they are similar cases, I can more easily understand “lion” and particularly “camouflage” showing up. The background behind me contains the mixed shades you would find in desert camouflage, and the depth, and therefore the lighting, of each object/area could be misread as flat, which would justify the “camouflage” identification. This seemed particularly evident when I removed myself from the frame, in which case only “camouflage” was ever returned.

To a lesser extent, this applies to “lion” as well, especially in combination with my face probably being registered, although I will admit it is a little more outlandish.

In an attempt to neutralize the background, I then put a napkin on my table and tried to have the classifier register a butter knife placed on top of it.

Certainly the result “beard” in this case beat out “hurricane” in terms of being egregiously incorrect. By rotating the blade around several times, I received many results such as “marker”, “train”, and “roller coaster”, and, appearing just once, the correct answer: “knife”.

In order to get a proper list of objects, I added a simple loop at the end of the gotResults function to record unique labels, and once again spun the knife around on the table.

let lablist = [];

function gotResults(error, results) {
  if (!error) {
    for (let i = 0; i < results.length; i++) {
      // Record each label the first time it appears
      if (!lablist.includes(results[i].label)) {
        lablist.push(results[i].label);
        console.log(lablist.join(", "));
      }
    }
    label = results[0].label;
  }
  classifyVideo(); // keep classifying new frames
}

Results were:

drill, fire_hydrant, book, purse, eraser, lighthouse, The_Mona_Lisa, rifle, table, dresser, dumbbell, passport, paint_can, camouflage, see_saw, mermaid, lion, pliers, megaphone, horse, face, moustache, submarine, hedgehog, panda, beard, matches, microphone, syringe, knife, hurricane, goatee, marker, peas, strawberry, angel, pineapple, binoculars, smiley_face, moon, banana, soccer_ball, wristwatch, lipstick, crayon, toothpaste, dragon, helicopter, bush, telephone, paintbrush, pants, firetruck, roller_coaster, eyeglasses, stove, sink, broccoli, The_Great_Wall_of_China, fireplace, swing_set, flower, bandage, hockey_puck, bottlecap, lantern, octagon, hexagon, power_outlet, rabbit, train, picture_frame, square, suitcase, bathtub, necklace, pickup_truck, crocodile, helmet, blackberry, sleeping_bag, blueberry, light_bulb, onion, toothbrush, asparagus, streetlight, cello, pool, calculator, belt, fence, toaster, sheep, whale, bridge, palm_tree, backpack

which seems to be in keeping with the actual list of labels the dataset contains, assuming I’ve found the correct one.
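As a quick sanity check on that count, one could also log how many unique labels have been seen so far. This is just a sketch of mine, assuming the lablist array from above, and assuming the dataset in question is Google’s Quick, Draw! set of 345 categories that DoodleNet is trained on:

// Press "c" to log how much of the label set has been seen
// (345 is the Quick, Draw! category count, assuming that's the right dataset)
function keyPressed() {
  if (key === "c") {
    console.log(`Seen ${lablist.length} of 345 unique labels`);
  }
}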

Since the dataset was originally developed to recognize hand-drawn doodles, I assume the objects were selected for having fairly distinctive features, so as to best match a doodle’s emphasis on unique identifiers and its lack of other detail, e.g. the wings on a plane, the rotor on a helicopter, the mane on a lion. That being said, the dataset is fairly small, and the number of inaccuracies in my test could also be explained by the classifier’s intended use: when it is given a complex real-world image (even my knife on a napkin on the table), small details may get blown out of proportion, since the model expects almost no detail at all, leading to very wide inaccuracies.


Reading Prompt Responses:

  1. Although it’s hard to pinpoint a single intelligence type I most identify with, at the very least in regards to work I probably lean closest to Spatial Intelligence. When I did art (painting, drawing, etc.) in the past, and when I do physical work with either hardware or fabrication, ideas tend to be born from mental 3D images of what the result would be, which I then try to disassemble in my head in order to find the best process by which to achieve that goal.
  2. Again, it is hard to pinpoint a single intelligence that would be a universal priority for an AI. If its overall purpose is to be applied in social services, for example, linguistic or interpersonal intelligence may be most important, whereas an AI used for scouting a location and mapping its surroundings in an unknown environment would definitely need a strong sense of Spatial Intelligence instead. That being said, since my interests mostly lean towards the latter, it’d be safe to assume that my goals and priorities would lean in the same direction.
  3. None of the applications listed came as a surprise to me.
  4. Although it somewhat falls under virtual assistants and machine translation, AI tends to be present even when we write in our own languages, correcting grammar mistakes and suggesting how best to condense and phrase our thoughts. Also, circling back to the idea of navigation, the GPS systems in our cars and phones certainly rely on AI to find the best routes for us, taking into account live traffic updates such as road work, collisions, and rush-hour congestion. I’m sure that larger platforms also record the actual time it took a user to travel a particular route and compare it with the original estimate to further refine the optimal route in the area.
  5. Computer/phone, Google, WeChat (social media).
  6. Despite their denials, I’m certain Google just listens in on our entire day: I once randomly had an in-person conversation with someone about shaving my head as a joke, and the next day I got a bunch of targeted ads for haircutting equipment.
  7.
    1. Spam filter
    2. Handwriting recognition
    3. Predictive text input
    4. Recommendations
    5. Predicted search & intention (e.g. “…for sale” vs. “…info”)
    6. Recommended public content, friend suggestions
    7. If referring to things like phone calls, virtual assistants
