Week 3 : ml5.js Assignment – Who’s That Human?!

For this assignment, I was inspired to train an image classifier on images of Pokemon, and then run that model on human faces. The output would be the Pokemon that the human face most resembles. I thought it would be a fun, whimsical project to work on, since the outcomes would be quite interesting (and funny) to observe.

The first step was to arrange the dataset. This was much easier than I thought. I stumbled across a zip file on Kaggle with over 800 PNG files of Pokemon. After downloading it, I cleaned out the unnecessary files and left only the first 721 original Pokemon. Since I needed labels too, I found those on a GitHub Gist as well.
Once I matched the labels to the images, I attempted to retrain an existing model with the help of ml5's "feature extractor". (According to the ml5.js documentation, there is no method to train a classifier from scratch yet, so retraining an existing model was the only option.)
This is the stage where I got stuck and wasn't able to make much progress, for the reasons described below.

To get started, I followed the steps in this link, which gives the gist of how to retrain a model with custom images.
The steps seemed fairly straightforward.
1) Get the features from ‘MobileNet’ using ml5’s feature extractor, and create a classifier out of it.
2) Add all the new images to the classifier.
3) Train the classifier with the new images.
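The three steps above can be sketched in p5.js/ml5.js terms like this. This is a rough sketch based on the ml5 docs, not my actual code: the file path helper, the example label, and the callback chain are my own assumptions, and the ml5 calls only run in the browser.

```javascript
// Helper to map a Pokemon number to an image path
// (my own naming assumption, matching "pokemon/1.png" etc.).
function pokemonImagePath(i) {
  return "pokemon/" + i + ".png";
}

let features, classifier;

function setup() {
  // Step 1: get MobileNet's features and build a classifier on top of them.
  features = ml5.featureExtractor("MobileNet", modelReady);
  classifier = features.classification();
}

function modelReady() {
  // Step 2: add a labeled image to the classifier once the model is ready.
  loadImage(pokemonImagePath(1), (img) => {
    classifier.addImage(img, "Bulbasaur", () => {
      // Step 3: train after the images have been added.
      classifier.train((loss) => console.log("loss:", loss));
    });
  });
}
```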

From this point onwards, it was a mix of dealing with JavaScript-related errors and several logic errors in the ml5.js workflow.
My approach was to save all the images into a folder, and then read them off disk into an array of images. Same goes for the labels.
This was my initial piece of code. (Posting it here to highlight some of the early mistakes.)

let text;
let features = ml5.featureExtractor('MobileNet');
let classifier = features.classification();

function preload() {
    text = loadStrings("../pokemonList.txt");
}

function setup() {
    createCanvas(1280, 720);
    console.log("Gonna add images now");
    for (var i = 1; i <= text.length; i++) {
        img = new Image();
        img.src = "pokemon/" + i.toString() + ".png";
        // we need to add each pokemon image in succession
        classifier.addImage(img, text[i - 1], imageAdded);
    }
    console.log("done training");
}

function imageAdded() {
    console.log("Image added, training model..");
    classifier.train();
}

function draw() {
    // console.log(text);
    console.log(text.length);
    noLoop();
}

The initial error was trying to load all the images at once and then attempting to classify them in one go. There are multiple problems with this. First of all, because of the way JavaScript handles asynchronous work, the program does not wait for an image to finish loading before saving it; it moves on to the next task even though the current one has not completed yet. The solution(?) was to move the image loading into the preload() function instead. (However, even though the images then seemed to load successfully, the tensorflow.js model still returned errors about null pixel values.) Secondly, since I was using p5.js, there was a more appropriate way to load an image: a function literally called loadImage().

Then, I was attempting to "bulk train" the model as described on the ml5.js page. This caused all sorts of erratic behavior, including error messages such as "Uncaught (in promise) Error: Add some examples before training!" and "ValueError: Input arrays should have the same number of samples as target arrays", as well as several graphics-related errors such as failures to compile fragment shaders. After a quick Google search, I found that it is best to add the images to the classifier one at a time, through a callback function that eventually invokes train().

Lastly, there was an issue with using the value of text.length as the for-loop boundary. For some reason, the value appeared to be zero in setup(), and only appeared as 721 in draw(), even though the text file was loaded and read in preload(). The workaround was simply hardcoding the value 721 in the loop until I figure out exactly what went wrong and why.
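Putting those fixes together, the corrected structure looks roughly like this. The recursive addNext helper is my own way of expressing "one image per callback"; it is a sketch of the intended flow, not a verified working program, and the ml5 calls only run in the browser.

```javascript
let labels = [];
let images = [];
let classifier;

function preload() {
  labels = loadStrings("pokemonList.txt");
  // p5's loadImage() inside preload() guarantees the images are
  // fully loaded before setup() runs.
  for (let i = 1; i <= 721; i++) {
    images.push(loadImage("pokemon/" + i + ".png"));
  }
}

function setup() {
  noCanvas();
  // Build the classifier, then start adding images once MobileNet is ready.
  const features = ml5.featureExtractor("MobileNet", () => addNext(0));
  classifier = features.classification();
}

// Add one image per callback instead of bulk-adding; train only at the end.
function addNext(i) {
  if (i >= images.length) {
    classifier.train((loss) => console.log("loss:", loss));
    return;
  }
  classifier.addImage(images[i], labels[i], () => addNext(i + 1));
}
```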
A couple of tweaks to the code later, all the images seemed to load. (I verified this by invoking the image() function on every image in the draw loop and checking whether they actually displayed on the screen. They did.)
The next step was to train the model, but unfortunately, I seemed to have reached another roadblock with this. 
The program loads all the images and runs through the train() function, but it triggers an error in the underlying tensorflow.js framework, which in turn propagates the error all the way back up to ml5. (Screenshot: error messages on the image classifier.)
My presumption is that even though the for loop that loads the images runs to completion, the images are not actually loaded and ready to be read into the classifier yet.
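The general JavaScript pitfall here can be shown without ml5 at all: a loop that kicks off asynchronous loads finishes immediately, before any individual load completes. Promise.all is one way to wait for everything. The fakeLoad function below is a stand-in for an image loader, invented for illustration.

```javascript
// fakeLoad stands in for an asynchronous image load: it resolves
// a short time after being called, like a real network fetch would.
function fakeLoad(name) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(name + " loaded"), 10));
}

async function loadAll(names) {
  // This map only *starts* the loads; the loop itself finishes
  // instantly, which is exactly why the classifier saw empty images.
  const pending = names.map((n) => fakeLoad(n));
  // Promise.all resolves only once every single load is done.
  return Promise.all(pending);
}

loadAll(["1.png", "2.png"]).then((results) => console.log(results));
```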

Afterwards, to check that the images were actually loaded, I displayed them on screen before running them through train(). They displayed successfully. Additionally, I removed the imageAdded() callback and tried to train on the images in bulk. This produced the same error.
For now, there seems to be little documentation on how to solve issues like this, but I enjoyed the entire process (although it was, and is, frustrating not to be able to finish it on time). At the same time, I feel this project taught me a lot about the JavaScript workflow and the unexpected behaviour that can come with it, along with some useful p5.js functions. The most useful lesson was to structure your code so that there is enough time for the images to load completely and be cached before any sort of logic is applied to them.
If anyone has any ideas or suggestions as to how I might be able to proceed with this little project, please send them this way!

 The code can be found here.

Comic Idea – Abdullah and Xavier

Inspiration : 

The idea for this comic is heavily influenced by a Japanese game trilogy, Zero Escape: The Nonary Games. The game is presented in a visual-novel format, and the comic's mechanics will draw on this style to facilitate a great deal of interactivity.

Plotline :

The protagonist (you) wakes up in his car in an abandoned parking lot. He is confused as to how he got there, but even before he begins to think about it, he sees a silhouette in front of him, and he passes out…
When he wakes up, he finds himself in a sort of dark dungeon. But he isn't alone: there seem to be others with him. A young teenage girl. A little boy and what looks like his granddad. A young delinquent with tattoos. A nervous-looking teenage boy. And a meek, young woman. You do not recognize any of these people, and they don't seem to know each other either. For what it's worth, you and your newfound group start to explore the dungeon in an attempt to escape. You make your way to a hall where there is a message painted in blood on the wall. "Trust No One", it seems to say. With this, a seed of distrust has immediately been sown in your group.
What will you do now? Will you trust them and work together as a group to escape? Or, will you try to find out which one of your group could be a potential threat?
In this story of building trust, you will work your way through a dungeon where you will be forced to make certain choices, some of which will push your moral conscience to its limit. But in a world where it seems to be every man for himself, what will you do?
Will you ally?
Or, will you betray?

Actual Implementation :

The comic will have more than one ending, depending on the choices that will be made. Creating a branching timeline and implementing choices in the game seem to be fairly straightforward from a technical standpoint. However, our main concern is the actual drawings, so we hope to find a mix of online tools and resources to help us out with that, in addition to the Photoshop skills we picked up in class. 
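As a sketch of why the branching side is straightforward, one plain-JavaScript way to model it is a map of scene objects, each listing its choices and the scene they lead to. The scene names and text below are invented for illustration, not from the actual comic.

```javascript
// Each scene has some text and a list of choices pointing at other scenes.
const scenes = {
  start: {
    text: "You wake up in a dark dungeon.",
    choices: [
      { label: "Trust the group", next: "ally" },
      { label: "Go it alone", next: "betray" },
    ],
  },
  ally: { text: "You escape together.", choices: [] },
  betray: { text: "You escape alone... or do you?", choices: [] },
};

// Follow a list of choice indices from the start scene
// and return the scene the reader ends up in.
function playThrough(choiceIndices) {
  let current = "start";
  for (const i of choiceIndices) {
    current = scenes[current].choices[i].next;
  }
  return scenes[current];
}
```

Multiple endings then fall out for free: each distinct path of choice indices simply terminates at a different scene with no further choices.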

Week 3 : CSS Portfolio – Abdullah Zameek

The link to my portfolio site can be found here

Thoughts and Reflections :

I always had certain lingering suspicions at the back of my mind, and this assignment helped me confirm them.
a) I’m terrible at design.
b) I loathe CSS (No offense to the web designers and artists out there)
I found it difficult to find a nice aesthetic to work with until I stumbled across this site, which generates color schemes for you (yay for artsy decisions that I didn't have to make!). I found a palette that I liked and went with it. I've had some experience with CSS in the past when I took part in hackathons and whatnot, and I'd often leave it up to my teammates to do the CSS bits, since I got frustrated working with CSS and experimenting with different numbers until I got the desired outcome. This time around, however, I had to finally confront it head on, and with the help of numerous references to W3Schools and StackOverflow, I was able to put together the simple design I had in mind. In the weeks to come, I hope to improve my relationship with CSS and web development as a whole, and to create a coherent, aesthetically pleasing website.

At one point, my flexboxes were not arranging themselves properly, and after being frustrated with them for a while, I decided to play around with something I liked and was more familiar with: p5.js. I ended up including a little p5.js sketch in my website, where if you click anywhere in the body of the page, you can draw little circles in a random shade of blue. Pressing the Enter key erases whatever you drew, giving you a clean "canvas".
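For anyone curious, the core of a sketch like that is only a few lines of p5.js. This is a reconstruction of the idea rather than my exact code; the randomBlue helper, which picks a shade by keeping the blue channel high, is my own assumption about how to get "a random shade of blue".

```javascript
// Pure helper: returns [r, g, b] with a dominant blue channel.
function randomBlue() {
  return [
    Math.floor(Math.random() * 100),        // low red
    Math.floor(Math.random() * 100),        // low green
    155 + Math.floor(Math.random() * 100),  // high blue: 155-254
  ];
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  background(255);
}

function mousePressed() {
  // Draw a circle in a random shade of blue at the click position.
  const [r, g, b] = randomBlue();
  noStroke();
  fill(r, g, b);
  ellipse(mouseX, mouseY, 30, 30);
}

function keyPressed() {
  // Enter wipes the canvas back to white.
  if (keyCode === ENTER) background(255);
}
```

Since there is no draw() loop clearing the canvas, each circle simply persists until the background is repainted.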

All in all, this was a pretty fun exercise, and I hope to improve my site over the weeks to come (maybe do a complete overhaul) and learn more about, and appreciate, web design fundamentals and elements.

Week 2 : Response to Understanding Comics – Abdullah Zameek

I never really thought of comics as an art form of their own, per se. To me, they were simply another sort of book or magazine that you would pick up at your local bookstore. However, after reading McCloud's work, it is clear that comics deserve a pedestal of their own, as there are several key components that clearly distinguish them from regular novels.

One of the more interesting aspects of the reading was the idea of human interaction with the medium itself. McCloud stated, "we humans are a self-centered race… we see ourselves in everything." This aspect of human nature seems to work in favour of maximizing engagement between the audience and the medium. The reader would put him or herself in the shoes of the characters that they see and would experience a whole new narrative from their eyes. McCloud also said, "Participation is a powerful force in any medium", and comics seem to capitalize well on user engagement.
This is something I have personally experienced as I was (at one point of time), an avid manga reader. The factor that played a role in the immersive experience was the inclusion of sound effects (in words, of course) alongside the panels. This allows the reader to really visualize and gain a sense of the world that they are in not only in a visual sense, but an auditory sense as well. 
And, achieving that level of engagement with simple print is quite impressive.

Week 2 : Portfolio Homepage – Abdullah Zameek

The link to my portfolio can be found here

The actual process itself was fairly straightforward, since we covered most of the concepts in class. I played around more with the flexbox styles, and they seemed quite intuitive and easy to understand. The only thing that (sort of) annoyed me was the repetition of the same content-box div. But I assume that modern HTML templating engines or similar frameworks have a way of reducing that sort of redundancy.
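Even without a full templating engine, plain JavaScript template literals can stamp out the repeated boxes from data. The contentBox helper, the class name, and the project list below are invented for illustration.

```javascript
// Generate the repeated content-box markup from data
// instead of copy-pasting the same div over and over.
function contentBox(title, body) {
  return `<div class="content-box"><h3>${title}</h3><p>${body}</p></div>`;
}

const projects = [
  { title: "ml5.js Classifier", body: "Who's that human?!" },
  { title: "CSS Portfolio", body: "A simple flexbox layout." },
];

const html = projects.map((p) => contentBox(p.title, p.body)).join("\n");
// In the browser, this would be injected into the page, e.g.:
// document.querySelector("#projects").innerHTML = html;
console.log(html);
```

Adding a new box then means adding one object to the array rather than duplicating markup.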