Week 02: Sentiment Example Analysis

https://ml5js.github.io/ml5-examples/p5js/Sentiment/

The sentiment analysis demo is a fairly simple model. All you have to do is type in words or sentences, and it evaluates the sentiment of what you wrote on a scale from 0 (unhappy) to 1 (happy). It sounds simple, and it is, as long as you follow the rules and write either sad or happy words. However, if you challenge the model and push its boundaries, you can observe that it struggles with context.

I first tried the sentence “You don’t stay sad forever,” which resulted in 0.0682068020105362. This means the model deemed the sentence unhappy. However, anyone who reads it will realize that it actually carries a positive sentiment.

Then I thought: the system must be detecting words that it deems either happy or sad. So I changed the word “sad” to “happy.” This resulted in a sentiment score of 0.9993866682052612. Judging by the score alone, this seems like the happiest thing you could say to a person. However, we all know that “You don’t stay happy forever” is a pretty sad quote.
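One plausible reason for the flip (a simplified sketch of my own, not how ml5’s model actually works): if sentiment leans heavily on per-word scores, a single charged word can drag the whole sentence with it. A toy bag-of-words scorer illustrates this; the word scores below are invented:

```javascript
// Toy bag-of-words sentiment scorer. The per-word scores are invented
// for illustration; this is not ml5's actual model.
const wordScores = { sad: 0.05, happy: 0.95 };

function toySentiment(sentence) {
  // Average the scores of all words; unknown words count as neutral (0.5).
  const words = sentence.toLowerCase().replace(/[^a-z' ]/g, "").split(/\s+/);
  const scores = words.map((w) => wordScores[w] ?? 0.5);
  return scores.reduce((a, b) => a + b, 0) / scores.length;
}

// Swapping one charged word moves the whole score:
console.log(toySentiment("You don't stay sad forever"));   // below 0.5
console.log(toySentiment("You don't stay happy forever")); // above 0.5
```

A scorer like this has no notion of how “don’t … forever” changes the meaning of “sad,” which is exactly the failure the experiment exposes.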

I think models like these give us a glimpse of where systems like Bixby and Siri are headed. Things like sarcasm and context are very difficult to program; some humans struggle to understand them, let alone machines. If we can crack how people interact and in what context, we could transfer that knowledge to our machines, creating AI companions for fields like healthcare, customer service, and so on.

Hellooo ml5.js | aiarts.week02

This week I introduced myself to ml5.js by playing with its examples and digging around its documentation. The examples, and the results they are able to generate on the fly, are fascinating, and they led to many rather inspiring findings.

ml5.index

Following are a few examples:

imageClassifier()

The image classifier is a classic example of how trained models can be put to meaningful use. From the cloud, ml5.js accesses a model pre-trained on approximately 14 million images from ImageNet. ImageNet was one of my unexpected findings, as it seems to be a rich and easy-to-use database for model training.
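For reference, ml5 hands classification results back as an array of label/confidence pairs. A small sketch of picking the best guess from such a result (the results array here is made up, not real classifier output):

```javascript
// Hypothetical classifier output in ml5's result shape:
// an array of { label, confidence } objects.
const results = [
  { label: "pop bottle", confidence: 0.18 },
  { label: "water bottle", confidence: 0.74 },
  { label: "vase", confidence: 0.03 },
];

// Sort a copy by descending confidence and take the first entry.
function topPrediction(results) {
  return [...results].sort((a, b) => b.confidence - a.confidence)[0];
}

const best = topPrediction(results);
console.log(`${best.label} (${(best.confidence * 100).toFixed(0)}%)`); // water bottle (74%)
```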

I wonder if ImageNet’s pool of categories is ever updated?


Week 02: ml5.js exploration

I really like the “StyleTransfer_Video” example, and I think it could have very interesting artistic uses. The video aspect is fun and interactive, and I like the stylistic possibilities.

https://ml5js.github.io/ml5-examples/p5js/StyleTransfer/StyleTransfer_Video/

How it works

The program takes a style image and uses a neural network trained on it to re-render the webcam feed. The software applies the color scheme and patterns of the “style” image to whatever the webcam captures, producing a surprisingly similar style while keeping the webcam content identifiable.

Here was the “style” image:

Here are some webcam screencaps of me using this style:

Potential uses

I think it could be a cool public art piece, especially in an area like M50 with a lot of graffiti or outdoor art. The webcam could be displayed on a large screen that places passersby into the art of the location, taking stylistic inspiration from the pieces around it. I also think it could be a cool way to make “real-time animations,” using cartoon or anime styles to stylize webcam footage. If simple editing features were added to the code, such as slo-mo effects, jump cuts, and zooms, the program could become an interactive game that “directs” people and helps them create their own “animated film.”

I’m also curious how the program would work if the “style” images were screencaps of the webcam itself. Would repeated screencaps of the webcam fed through it as the “style” create trippy, psychedelic video? I would love to find out!

Week 02 Assignment: imageClassifier() Testing – Katie

For this week’s assignment, I chose to look closer at the Image Classifier; this uses neural networks to “recognize the content of images” and to classify those images. It also works in real time, and the example I tried specifically uses a webcam input.

You can tell a bit about its development from what you see on the user side. I noticed that it wasn’t too bad at assessing images that were visually straightforward: I held up a water bottle against a mostly plain background, and it caught on quickly. But when I turned the bottle on its side or upside down, the classifier had a harder time identifying it.

I looked more into the training of this model and learned that it was trained on the ImageNet (image-net.org) database. ImageNet has around 14 million images divided into different synsets, which are labelled and monitored by humans.


I started to think more about how that training really translates to its function, and what the computer is actually ‘seeing’—if it’s only recognizing one angle of a given object, does it only learn in groups of pixels? Even if that’s the case, is it possible for it to understand those groups if they were rotated? I’m not sure if these questions have obvious answers, but I’m excited to hopefully understand better over the next few months.
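The rotation question can be made concrete. Rotating an image 90° rearranges every pixel, so a model that learned from upright examples receives a genuinely different grid of values. A toy 2×2 “image” shows how thoroughly the layout changes (a sketch of my own, nothing ml5-specific):

```javascript
// Rotate a square 2D grid of pixel values 90 degrees clockwise:
// row i of the input becomes column (n - 1 - i) of the output.
function rotate90(grid) {
  return grid[0].map((_, col) => grid.map((row) => row[col]).reverse());
}

const image = [
  [1, 2],
  [3, 4],
];

console.log(rotate90(image)); // [[3, 1], [4, 2]] (every position changed)
```

Unless a network was shown rotated examples during training (or was built with some rotation invariance), nothing guarantees it treats these two grids as the same object.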

Week 2 AI Arts Assignment – Jonghyun Jee

Among the many ml5.js examples I tried, the one I enjoyed playing with most was the “Sentiment Analysis Demo.” The model is pretty simple: you type in some sentences, and the algorithm scores the given text to identify which sentiment it expresses; in this model, it gives you a value between 0 (negative) and 1 (positive).

The developers of this model noted that this example was trained on a data set of movie reviews, which makes sense because most film reviews come with both comments and ratings. I think it is a clever way of collecting data for this sort of sentiment analysis model: a large enough collection of data, for free!
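That collection trick can be sketched in a few lines: pair each review’s text with a label derived from its rating. The reviews below are invented, and the star cutoffs are my own assumption, not anything from the actual training set:

```javascript
// Hypothetical raw data: review text paired with a star rating (1-5).
const reviews = [
  { text: "A masterpiece. I cried.", stars: 5 },
  { text: "Two hours I will never get back.", stars: 1 },
  { text: "Watchable, nothing more.", stars: 3 },
];

// Derive training labels from ratings: 1 = positive, 0 = negative.
// Middling ratings are dropped as ambiguous (my own cutoff choice).
const labeled = reviews
  .filter((r) => r.stars !== 3)
  .map((r) => ({ text: r.text, label: r.stars >= 4 ? 1 : 0 }));

console.log(labeled.length); // 2
```

The ratings do the labelling work, so no human has to read and tag each review by hand.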

So I began by inserting some quotes that are either *obviously* positive or negative, just to see if the model could handle the easiest tasks.

It’s not a challenging task for us to tell that what he says is positive. It was much the same for the analysis model, which yielded a result of 0.9661046.

And for Squidward’s quote, the model identified it with confidence: 0.0078813. One thing I noticed is that the word “cruel” is presumably the only word in this sentence with a negative connotation. I removed that single word and, guess what, the model seems quite puzzled now. It gave me a surprising result of 0.5622794, which means the sentence “It’s just a reminder that I’m single and likely to remain that way forever,” according to the algorithm, falls into a gray zone between negative and positive. And yet, even without the word “cruel,” this sentence still seems somewhat negative to us. An idea came to my mind: is this algorithm smart enough to understand sarcasm?
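The “gray zone” reading can be made explicit by bucketing the 0-to-1 score. The 0.4/0.6 cutoffs below are my own arbitrary choice, not anything ml5 defines:

```javascript
// Map a sentiment score in [0, 1] to a coarse label.
// The 0.4 and 0.6 thresholds are arbitrary, for illustration only.
function labelScore(score) {
  if (score < 0.4) return "negative";
  if (score > 0.6) return "positive";
  return "neutral";
}

console.log(labelScore(0.0078813)); // negative
console.log(labelScore(0.5622794)); // neutral
console.log(labelScore(0.9661046)); // positive
```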

“If you see me smiling it’s because I’m thinking of doing something bad. If you see me laughing, it’s because I already have.”

I googled some sarcastic sentences that contain no “negative words.” The result was 0.9988156. I’m not sure the lines above are more positive than what SpongeBob said with glee, but the algorithm thinks so anyway.

“Algorithm, I love how you look everything on the bright side. You’re smarter than you look. I wish I could be as confident as you are, haha.”

I inserted this sort of backhanded compliment, and the model seemed pretty flattered; it showed a result of 0.9985754. Now I feel somewhat bad for this algorithm, as if I’m making a fool of it. The last text I inserted was all of the text I had written so far.

For some reason, the result was 0.9815316. I enjoyed experimenting with this model, because it’s interesting to ponder the results it produces. Observing how the omission of a single word can dramatically change the result, I think a word-by-word approach has certain limits. From a coder’s perspective, I wonder how I might improve this sentiment analysis model to the point where it can comprehend underlying sarcasm. It’ll be a tough challenge, but definitely a worthwhile one in terms of computational linguistics and affective computing. I hope Siri and I can happily share the lovely act of sarcasm someday.