Trying/Experimenting with PoseNet Drawings

Code: https://drive.google.com/open?id=1tr31FxBcRtMLQI2WckLDGOQUmL1OeXl7

I really enjoyed playing with the PoseNet examples given to us for the second week's assignment. I especially liked how identifying the nose let the user draw on the screen, making for a much more interactive experience than most of the examples.

I decided to play with which body part is used for painting, and changed the code so that two body parts could be used.

A problem I ran into was figuring out how to change the code so that it would recognize both the right and left eyes for drawing. I initially thought that changing the code to an "else if" statement would work; it turns out it still only recognized the first body part in the code, "rightEye". After asking around, I realized that the way to make the code recognize the left eye as well is to combine the two conditions with "||", meaning "or".
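The fix can be sketched like this (the keypoint format follows ml5's PoseNet output with `part` and `position` fields; the helper name and coordinates are my own, not the post's actual code):

```javascript
// Hedged sketch of the "||" fix, assuming ml5 PoseNet's keypoint
// format: { part, position: { x, y }, score }. Combining the two part
// names with || lets either eye trigger the drawing.
function isDrawingPart(keypoint) {
  return keypoint.part === "rightEye" || keypoint.part === "leftEye";
}

// Example pose with made-up coordinates:
const pose = {
  keypoints: [
    { part: "nose", position: { x: 100, y: 50 }, score: 0.9 },
    { part: "rightEye", position: { x: 90, y: 40 }, score: 0.8 },
    { part: "leftEye", position: { x: 110, y: 40 }, score: 0.8 },
  ],
};

// Both eye keypoints now qualify for drawing:
const drawingPoints = pose.keypoints.filter(isDrawingPart);
```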

I also randomized the colour of the elements that appeared, capping each channel at a maximum of 255 on the RGB scale.

If I had more time and coding knowledge, I would like to add features such as choosing your own colour and a button to decide which body part to activate for painting. This would make the sketch more interactive for the user.

Week 03 Assignment: Trying Save/Load Model with Image Classification

This week’s assignment turned out to be a bit less successful than I hoped. I wanted to use it as an opportunity to build on an earlier project I did using image classification. For that project, I loosely trained the model with images of the alphabet in American Sign Language (ASL). (Note that I’m not very knowledgeable about ASL and even worse at signing — I had to reference an alphabet guide — but here’s a video of the last project for reference:)

Even though I only trained it with 24 of the 26 letters (J and Z require motion, while image classification training requires still images), it was incredibly time-consuming to retrain it every time I opened the project. When I originally did that project, I didn’t know that a save/load model had recently been developed, so I figured I should implement it this time to make the project more efficient.

I referenced Daniel Shiffman’s video on the Save/Load model, and the first part does seem to work: after training, the model.json and model.weights.bin files download and, when opened, look like the demonstration in the video. It’s only when I try to load them back into the program that it stops working. On localhost, it stalls at “loading model.” My terminal shows this:

[screenshot: terminal error output]

I think there’s probably a relatively simple explanation for this, or something I’m overlooking, and I plan to keep working on it until it runs. If I could get it working, the save/load model would be extremely helpful for developing a larger project, especially if I wanted to move forward with something similar to the image classification model.
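For reference, the overall flow as I understand it from the video looks roughly like this. This is a hedged sketch using ml5's featureExtractor API, not a verified copy of my code, so treat the exact calls and filenames as assumptions:

```javascript
// Hedged sketch of the save/load flow (ml5 featureExtractor API,
// browser-only — runs inside a p5.js sketch, not Node).
let featureExtractor, classifier;

function setup() {
  createCanvas(640, 480);
  const video = createCapture(VIDEO);
  featureExtractor = ml5.featureExtractor("MobileNet", () =>
    console.log("MobileNet loaded")
  );
  classifier = featureExtractor.classification(video);
}

// After adding images and training:
function saveModel() {
  classifier.save(); // downloads model.json + model.weights.bin
}

// In the loading sketch, both files must sit next to the sketch and be
// served over HTTP (e.g. a local server):
function loadModel() {
  classifier.load("model.json", () => console.log("model loaded"));
}
```

One possible cause of a load stalling at “loading model” is that model.json references model.weights.bin by a relative path, so if the weights file isn’t reachable at the path the browser resolves, the load may hang; the network tab (looking for a 404 on model.weights.bin) might be worth checking.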

Code:

Training & saving model code: https://github.com/katkrobock/aiarts/tree/master/train_saveModel

Loading model code: https://github.com/katkrobock/aiarts/tree/master/loadModel

Reference:

Daniel Shiffman / The Coding Train video: https://www.youtube.com/watch?v=eU7gIy3xV30

Week 3 Assignment – Cassie

Code: https://drive.google.com/drive/folders/1iYzpoaUQ30HNqOw04meyBtWoBm53cihh?usp=sharing

I really liked the PoseNet examples we went through in class, so I wanted to experiment with them a bit. I especially liked the week03-5-PosenetExamples-1 example where you could draw with your nose, and initially wanted to adapt it so you could draw with a different body part. I started by searching for the word “nose” within the ml5.min.js code to see what the other options were. However, after experimenting with other body parts, the nose was still the most fun to create shapes with.

So, I decided to create a very simple game built around this nose movement:

Essentially, there are five circles, and the user must “boop” each circle with their nose to change its color.

Of course, I ran into a couple of challenges while doing this assignment. The main one was getting the ellipse to change once the nose reached it. I initially constructed the if statement like this:
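Since the original code is in a screenshot, here is a hypothetical reconstruction of the kind of check that fails this way (the names are mine, not the post's):

```javascript
// Hypothetical reconstruction of an exact-match hit test. PoseNet
// returns floating-point coordinates, so the nose position is almost
// never exactly equal to the circle's centre, and this condition
// essentially never fires.
function boopedExact(noseX, noseY, circleX, circleY) {
  return noseX === circleX && noseY === circleY;
}

// boopedExact(100.37, 99.81, 100, 100) → false, despite a near miss
```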

However, nothing happened when I touched my nose to the ellipse. Eventually, I widened the range of positions the nose could be in for a touch to register:

After broadening the range, it started working properly. I later narrowed it by half, as the screenshot above used too wide a range. Once this essential part of the game was working, I moved on to the user-interaction side: adding more ellipses and having each one change color when touched, to indicate it has been booped.
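The widened-range version can be sketched as a tolerance check (again with hypothetical names; the actual values are in the screenshot):

```javascript
// Hedged sketch of the range-based hit test: the nose counts as
// touching the circle when it falls within `range` pixels of the
// centre on both axes. The 25px default is my own illustration.
function booped(noseX, noseY, circleX, circleY, range = 25) {
  return (
    Math.abs(noseX - circleX) < range &&
    Math.abs(noseY - circleY) < range
  );
}

// booped(102.4, 97.9, 100, 100) → true  (within 25px on both axes)
// booped(160.0, 97.9, 100, 100) → false (too far in x)
```

A distance-based check (`dist(noseX, noseY, circleX, circleY) < range` in p5.js) would give a circular hit area instead of a square one, which might feel slightly more natural.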

If I were to build on this project, I would want to add more UI components, such as a timer or some kind of score-keeping system, as well as prompts to start and finish. It would also be cool if the ellipses were randomly generated as the game goes along.