Week 02 Assignment: ml5.js Experiment – Eszter Vigh

So, I wanted to break the ml5 sentiment analysis demo. I guess the main question is, why? 

I knew from readings in another class that the model’s training data likely carried underlying bias, and I wanted to see whether I could break the system through interesting or deceptive phrasing. So I put in phrases that leaned positive in meaning but were phrased negatively, just to explore the limitations of the example.

Here are some of the phrases I tried:

[Four example screenshots: the sentiment scores returned for negated phrases such as “not poor” and “not bad”]

So what I found was that inserting “not” (e.g. “not poor”, “not bad”) did not negate the negative; that is, it did not make the phrase read as positive to the sentiment analyzer. I think testing extreme cases like this is really valuable: it exposes the shortcomings of the training, and it is important to acknowledge the issue when training your own application. A failure like this could significantly affect the success of a project, since the intended meaning is the complete opposite of what the algorithm reported.
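In case anyone wants to reproduce the test, here is a minimal sketch of how it could be scripted against ml5’s “movieReviews” sentiment model, the one the website demo uses. The phrase list beyond “not poor” and “not bad” is my own illustration, and I’m assuming the callback-style ml5/p5 API from the example:

    // Run a batch of negated-negative phrases through the sentiment model.
    // Scores range from 0 (most negative) to 1 (most positive).
    let sentiment;
    const phrases = ['not poor', 'not bad', 'not terrible', 'not awful'];

    function setup() {
      noCanvas();
      // Load the pre-trained movie-review sentiment model
      sentiment = ml5.sentiment('movieReviews', modelReady);
    }

    function modelReady() {
      for (const phrase of phrases) {
        const prediction = sentiment.predict(phrase);
        console.log(phrase + ': ' + prediction.score);
      }
    }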

Sure, maybe in this case it isn’t a big deal, but imagine the same failure in a project built on a different training data set. Even so, it was really amazing that the scores tracked sentiment as well as they did on a classic data set. I’ve never trained anything before, so it was exciting to see what a trained model could do.

Do I think I pushed this example to the limit? No. I was hoping the reading would land somewhere closer to 0.5 for these negated negatives, so it was really surprising to see such an overwhelmingly negative score. I feel like I have learned a lot about how “black and white” ml5 is when it comes to trained models, which I think will be helpful when creating our own projects later. We want to make them as accurate as possible, right? Or maybe our art will come from the “mistakes” they make.

Week 02 Assignment: ml5.js Experiment

  • Try any given example on the ml5.js website and report your findings.
    • It could be technical, such as “what are async functions, promises, and callbacks in JavaScript, and how are they used in ml5.js”
    • Or it can be conceptual, like “brainstorming on how to use MobileNet for artistic purposes”
  • It can literally be anything you find interesting and would like to share. The goal is to let you play with ml5.js and discover some fun!
  • Post it on the IMA blog before Friday midnight, the 13th, with the tag: aiarts02
  • Supporting materials should be uploaded to NYU Google Drive or your GitHub if preferred. Make it public or at least grant the instructor access (aven@nyu.edu) before Friday 11:59 pm; late submissions will affect the mark.

Week 2 AI Arts: ml5.js Experiment (Ronan)

Technology is not enough. 

Consider the technology as a tool which, in itself, could do nothing.

Treat the technology as something that everyone on the team could learn, understand and explore freely.

– Red Burns

I found this quote in Daniel Shiffman’s introduction video on ml5. Red Burns was the chair of the ITP program at NYU, and her work has inspired a lot of people. I really like this quote because it tells us not to get too caught up in this shiny technology; as Daniel Shiffman said, “we have to remember that without human beings on the earth, what’s the point?” So whenever we are looking at the algorithms or rules, we should also think about how we can make use of them and how we can teach the technology.

After exploring the ml5.js website a bit, I am really fascinated by all the possibilities it opens up. As someone who doesn’t have much knowledge of machine learning, I really like that ml5.js provides lots of pre-trained models for people to play around with, and that I can also re-train a model myself.

For example, when I was playing around with the image classifier example, I realized that it returns both a label for the picture and a confidence score. In Daniel Shiffman’s introduction to ml5.js, he used a puffin as the source image and the model didn’t get the right label. However, that was in 2018, and I wanted to try it for myself to see whether the model had been updated. So I started to look at the example code provided on the ml5 website.
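For reference, the example boils down to something like the sketch below. This is my own hedged reconstruction using the callback-style ml5 API of that era, and the image path is a hypothetical stand-in for the puffin photo:

    let classifier;
    let img;

    function preload() {
      // MobileNet pre-trained on ImageNet, loaded through ml5
      classifier = ml5.imageClassifier('MobileNet');
      img = loadImage('images/bird.png'); // hypothetical path to the puffin photo
    }

    function setup() {
      createCanvas(400, 400);
      image(img, 0, 0, width, height);
      classifier.classify(img, gotResult);
    }

    function gotResult(error, results) {
      if (error) {
        console.error(error);
        return;
      }
      // results[0] holds the model's top guess as { label, confidence }
      console.log(results[0].label, results[0].confidence);
    }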

At first, the example code didn’t work.

After examining the code, I realized there was a bug on the website: on line 32, it should be “Images” instead of “images”.
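I can only guess at what actually sits on line 32 of the hosted example, but assuming that is where the sample image gets loaded, the fix amounts to matching the folder’s capitalization. A hypothetical before/after:

    // Hypothetical reconstruction -- paths on a web server are
    // case-sensitive, so the folder name has to match exactly.
    // img = createImg('images/bird.png', imageReady);  // 404s on the site
    img = createImg('Images/bird.png', imageReady);     // loads correctly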

Now I can see the picture, but I still can’t see the label and confidence score, and there is another error on the page. I’m not sure how to fix this right now, but maybe I will look into it later, once I know more about this framework.

I’ve also learned that the image dataset used in the imageClassifier() example is ImageNet.