Week 02 Assignment: ml5.js Experiment – Eszter Vigh

So, I wanted to break the ml5 sentiment analysis demo. I guess the main question is: why?

I knew from readings in my other class that the trained model likely had underlying bias, and I wanted to see whether I could break the system with tricky or deceptive phrasing. So I fed it phrases that were positive in meaning but negative in wording, just to explore the limitations of the example.

So I put in phrases like these:

[Screenshots of the test phrases and their sentiment scores; the phrases were negated negatives such as "not poor" and "not bad."]
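For anyone who wants to reproduce the test, here is roughly what it looks like in code. This is a minimal sketch, assuming the ml5 0.x sentiment API running inside a p5.js sketch: ml5.sentiment('movieReviews', ...) loads a model trained on IMDB movie reviews, and predict() returns an object with a score between 0 (most negative) and 1 (most positive). Newer ml5 releases may expose a different interface.

```javascript
// Minimal sketch, assuming ml5 0.x and p5.js are loaded on the page.
let sentiment;

function setup() {
  noCanvas();
  // 'movieReviews' loads the sentiment model trained on IMDB reviews.
  sentiment = ml5.sentiment('movieReviews', modelReady);
}

function modelReady() {
  // Plain negatives next to their negated versions.
  const phrases = ['bad', 'not bad', 'poor', 'not poor'];
  for (const phrase of phrases) {
    // predict() returns { score }: 0 = most negative, 1 = most positive.
    const prediction = sentiment.predict(phrase);
    console.log(`"${phrase}" -> ${prediction.score.toFixed(3)}`);
  }
}
```

Printing each plain word next to its negated version makes the failure easy to spot in the console: if "not bad" comes back just as low as "bad", the model isn't handling negation.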

What I found was that inserting "not" (e.g. "not poor", "not bad") did not negate the negative: the analyzer still scored the phrase as negative rather than positive. I think testing extreme cases like this is really valuable because it exposes the shortcomings of the training. It's important to acknowledge this issue when training your own application: when the algorithm reads a phrase as the exact opposite of its actual meaning, that can significantly affect the success of a project.

Sure, maybe in this case it isn't a big deal, but imagine running into this with a different training data set. It was still amazing that a model trained on a classic data set produced scores that actually correlated with sentiment. I've never trained anything before, so it was exciting to see what a trained model could do.

Do I think I pushed this example to the limit? No. I was hoping the readings for these negated negatives would land somewhere closer to 0.5, so it was really surprising to see such overwhelmingly negative scores. I feel like I've learned a lot about how "black and white" ml5's trained models can be, which I think will be helpful when we create our own projects later. We want to make them as accurate as possible, right? Or maybe our art will come from the "mistakes" they make.
