HW2: Trying the Feature Extractor Image Classification Model

For this assignment, I intended to make a physically controlled Flappy Bird. Instead of using a keyboard to control the bird’s movement, I wanted users to define the command themselves: it could be a facial expression, a different object… So I chose the Feature Extractor Image Classification Model. The ml5.js reference page describes this model as one that ‘will allow you to train a neural network to distinguish between two different sets of custom images’. This is exactly what I needed to control the game with different expressions or objects.
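To give a sense of the setup, here is a minimal sketch of how the feature extractor is typically created in ml5.js (0.x) with p5.js. The variable names and canvas size are my own choices, not taken from the original project:

```javascript
// Minimal ml5.js (0.x) + p5.js setup: load MobileNet as a feature
// extractor and attach a custom image classifier to the webcam feed.
let video;
let featureExtractor;
let classifier;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();

  // Reuse MobileNet's pretrained features instead of training from scratch.
  featureExtractor = ml5.featureExtractor('MobileNet', modelReady);
  // Build a small custom classifier on top of the extracted features.
  classifier = featureExtractor.classification(video, videoReady);
}

function modelReady() {
  console.log('MobileNet loaded');
}

function videoReady() {
  console.log('Webcam ready');
}

function draw() {
  image(video, 0, 0);
}
```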

Here’s a demo of the final results. 

Usage Instructions:
The first interface is the training interface. Users define their normal mode and the mode that should make the bird fly up. In the demo, I chose to cover the webcam as the signal for flying up; because the difference from the normal state is so obvious, the two classes are easy to distinguish. The normal mode is just me doing nothing. The Normal and Up buttons are used to add many images of the two states (normal state and up state). Pressing the Train button starts the training process. After training completes, the user can press the Down Arrow to enter the Flappy Bird game interface. Every time I cover the webcam, the bird flies up. If the bird hits a block, the game is over.
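The training flow above roughly corresponds to code like the following, continuing the earlier setup sketch. The class labels ('normal', 'up'), button wiring, and the bird.jump() call are my assumptions for illustration, not the project’s actual code:

```javascript
// Hypothetical wiring of the training buttons and the game control loop.
// Assumes `classifier` was created as in the setup sketch above;
// setupButtons() would be called from setup() after the classifier exists.
function setupButtons() {
  // Each press captures the current webcam frame as one training example.
  createButton('Normal').mousePressed(() => classifier.addImage('normal'));
  createButton('Up').mousePressed(() => classifier.addImage('up'));

  // Start training once both classes have enough examples.
  createButton('Train').mousePressed(() => {
    classifier.train((loss) => {
      if (loss === null) {
        // ml5 reports a null loss when training has finished.
        classifier.classify(gotResult); // start the classification loop
      } else {
        console.log('loss:', loss);
      }
    });
  });
}

function gotResult(err, results) {
  if (err) {
    console.error(err);
  } else if (results[0].label === 'up') {
    bird.jump(); // hypothetical game API: make the bird fly up
  }
  classifier.classify(gotResult); // classify the next webcam frame
}
```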

Although the model did let me control the game physically, the disadvantages are also obvious. Training takes a long time, and the results are not always satisfying: the classifier is easily confused and reacts poorly if the training sets are not accurate enough. It also introduces serious delay while the game is running.

Here is the code on GitHub. 
