MLNI – Final Project Concept – Dr. Manhattan (Lishan)

Overview

For the final project, I’m planning to build an educational game called “Dr. Manhattan” that tests players’ knowledge of human anatomy by asking them to reassemble their own body parts.

Inspiration

The project was largely inspired by the superhero character “Dr. Manhattan” from Watchmen. Jonathan Osterman, aka Dr. Manhattan, was a researcher at a physics lab until one day an experiment went wrong and a lab machine tore his body into pieces. In the following months, however, a series of strange events happened in the lab: it turned out that Dr. Manhattan’s consciousness had survived the accident and was progressively re-forming his body, first as a disembodied nervous system including the brain and eyes, then as a circulatory system, then as a partially muscled skeleton, until he finally managed to rebuild himself as a whole person. So in my project, the user becomes the consciousness of Dr. Manhattan after the accident and has to rebuild his body from its parts.


How it works

At the beginning of the game, the player sees all of their organs in the correct positions on their body on the screen. The player can then press a key to trigger an explosion that tears apart the body shown on the screen and scatters the body parts. I will train a style transfer model and apply it here to make the image look less disturbing and cooler. The player will then use their hands to retrieve the scattered body parts and organs and assemble them correctly to rebuild their body. I will use PoseNet to track the position of the player’s body and “consciousness” and to calculate the correct position where each organ should be placed.
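To make this concrete, here is a minimal sketch of the PoseNet part in p5.js/ml5.js. The `organs` array, the part names, the `snapDistance` threshold, and the “drag with the right wrist” shortcut are all illustrative assumptions; the explosion and the style transfer step are left out.

```javascript
// Minimal sketch (p5.js + ml5.js): track the player with PoseNet and check whether a
// scattered organ has been moved close enough to its correct spot on the body.
let video, poseNet, pose;
const snapDistance = 40; // how close an organ must be to its target to count as placed

const organs = [
  { name: 'brain', part: 'nose', x: 60, y: 60, placed: false },
  { name: 'heart', part: 'leftShoulder', x: 580, y: 60, placed: false },
];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (!pose) return;

  // while "dragging", move the first unplaced organ to the player's right wrist
  const wrist = pose.keypoints.find((k) => k.part === 'rightWrist');
  const held = organs.find((o) => !o.placed);
  if (wrist && wrist.score > 0.5 && held) {
    held.x = wrist.position.x;
    held.y = wrist.position.y;
  }

  for (const organ of organs) {
    // the target position is the matching PoseNet keypoint on the player's body
    const kp = pose.keypoints.find((k) => k.part === organ.part);
    if (!kp || kp.score < 0.5) continue;
    if (!organ.placed && dist(organ.x, organ.y, kp.position.x, kp.position.y) < snapDistance) {
      organ.placed = true; // organ snapped into the correct position
    }
    fill(organ.placed ? 'green' : 'red');
    ellipse(organ.x, organ.y, 30, 30);
  }
}
```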

Machine Learning techniques used:

  • PoseNet: to track the player’s position and calculate the correct positions the organs need to be placed at
  • Style Transfer

MLNI – Final Project Concept — Crystal Liu

Initial Thought:

I want to develop my final project based on my midterm project. As I said in my midterm documentation, I want to add a storytelling part and smoother interaction to my final project. For the storytelling part, I plan to design a theme around a festival. Since Christmas is around the corner, I chose Santa’s journey on Christmas Eve as the main topic.


If the users touch the “Merry Christmas” text, they will see this crystal ball.

As the user gets closer and closer to this image, the image will grow bigger and bigger, as if the user were approaching the crystal ball in the real world. Once the distance reaches a certain threshold, the user will see another image, which means they have entered the scene successfully:
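One way to get this “approaching” effect is to use the distance between the two shoulders reported by PoseNet as a proxy for how close the user is to the camera, and scale the image accordingly. This is only a sketch under that assumption; `crystal_ball.png`, the threshold value, and the commented-out `enterScene()` call are placeholders.

```javascript
// Sketch: scale the crystal-ball image with the user's distance to the screen, using
// the shoulder width measured by PoseNet as a proxy for closeness.
let video, poseNet, pose, crystalBall;
const ENTER_THRESHOLD = 260; // shoulder width (px) at which the next scene is entered

function preload() {
  crystalBall = loadImage('crystal_ball.png'); // hypothetical asset name
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', (results) => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  background(0);
  if (!pose) return;
  const ls = pose.keypoints.find((k) => k.part === 'leftShoulder');
  const rs = pose.keypoints.find((k) => k.part === 'rightShoulder');
  if (!ls || !rs) return;
  const shoulderWidth = dist(ls.position.x, ls.position.y, rs.position.x, rs.position.y);
  // the closer the user stands, the wider the shoulders appear, so the larger the image
  const s = map(shoulderWidth, 80, ENTER_THRESHOLD, 0.5, 2.0, true);
  imageMode(CENTER);
  image(crystalBall, width / 2, height / 2, crystalBall.width * s, crystalBall.height * s);
  if (shoulderWidth > ENTER_THRESHOLD) {
    // enterScene(); // hypothetical: switch to the second image once close enough
  }
}
```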

I will set a large size for these images, larger than the canvas size, so the users can drag an image by stretching out their left or right hand. They can also trigger things in the image. For example, if the butterfly approaches the elf who is raising his hands in the air in the second image, the user will hear “Merry Christmas” in an excited tone. This is the first scene. The users can go to the next scene by letting the butterfly get close to the right edge of the image; if they do so, they will see an arrow guiding them to the next scene. Every scene has its own surprising part, as in my midterm, and I plan to add some hints to guide the users. As Tristan suggested, I can use a fade function to let the users recognize that they just triggered something.
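A small sketch of the trigger-plus-fade idea, assuming the butterfly’s position is already being driven by a tracked hand; the elf’s position, the 60 px radius, and the greeting sound are illustrative values, and `checkTrigger()` / `drawHint()` would be called from `draw()`.

```javascript
// Proximity trigger with a fade-out hint so the user notices something happened.
let greetingAlpha = 0;          // > 0 while the hint is fading out
const elf = { x: 420, y: 300 }; // hypothetical position of the elf in the image

function checkTrigger(butterflyX, butterflyY) {
  if (dist(butterflyX, butterflyY, elf.x, elf.y) < 60) {
    greetingAlpha = 255;        // start the hint fully visible
    // greetingSound.play();    // e.g. a "Merry Christmas" clip loaded in preload()
  }
}

function drawHint() {
  if (greetingAlpha > 0) {
    fill(255, greetingAlpha);
    textSize(32);
    text('Merry Christmas!', elf.x, elf.y - 40);
    greetingAlpha -= 3;         // fade so the trigger is noticeable but not permanent
  }
}
```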

Technology

The core technology is still PoseNet. I was inspired by Shenshen’s and Billy’s midterm projects: the users can zoom in on the image by getting closer to the screen. I also want to make some filters for the users, with the image or GIF positioned based on PoseNet. I also want to use style transfer to enrich the visual output, but I’m afraid that the model will get stuck and won’t run smoothly.
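One way to keep style transfer from freezing the sketch could be to request a new stylized frame only after the previous one has come back, rather than once per `draw()` call. This is just a sketch; the 'models/wave' path is a placeholder and the ml5 StyleTransfer calls should be checked against the ml5.js version used in class.

```javascript
// Throttled style transfer: never have more than one transfer request in flight.
let video, style, styledImg;
let waiting = false; // true while a transfer request is pending

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  style = ml5.styleTransfer('models/wave', video, () => console.log('style model ready'));
}

function draw() {
  // show the latest stylized frame if there is one, otherwise the raw video
  if (styledImg) image(styledImg, 0, 0, width, height);
  else image(video, 0, 0, width, height);

  if (style && !waiting) {
    waiting = true;
    style.transfer((err, result) => {
      if (err || !result) { waiting = false; return; }
      loadImage(result.src, (img) => { styledImg = img; waiting = false; });
    });
  }
}
```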

 

MLNI – Final Project Concept (Wei Wang | Cherry Cai)

Project Presentation

Teammate: Cherry Cai

  • Inspiration

Nowadays students and workers are under too much pressure brought on by society. A lot of us are acting against our wishes, like a marionette hanging from its threads, arms dangling, floating at the mercy of the breeze. Some who are tired of being cooped up struggle to free themselves from this control, but finally accept their fate and submit to the pressure.

  • Interface

A wooden marionette placed in the middle of the screen

  • Interaction
    1. Two control bars operate the orientations of the marionette’s body segments.
    2. Based on the user’s control, the marionette will give a protest, and the corresponding effect (e.g. voice, movements) will be triggered.
    3. If the instruction is not followed, segments will be thrown away in an attempt to escape the control.
    4. The segment will then be regenerated with a string that can, again, be controlled by the bars.
  • Machine Learning

KNN classification to recognize the orientation of the control bars
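A rough sketch of how this could work, following the MobileNet-features + KNNClassifier pattern from the ml5.js KNN example; the three orientation labels and the training keys are placeholders for whatever control-bar poses we end up collecting.

```javascript
// KNN classification of the control bars' orientation from the webcam image.
let video, featureExtractor, knn;
let currentLabel = '';
let classifying = false;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  featureExtractor = ml5.featureExtractor('MobileNet', () => console.log('MobileNet ready'));
  knn = ml5.KNNClassifier();
}

function keyPressed() {
  // hypothetical training keys: hold the bars in a pose and press 1/2/3 to add examples
  const labels = { 1: 'left-bar-up', 2: 'right-bar-up', 3: 'bars-level' };
  if (labels[key]) knn.addExample(featureExtractor.infer(video), labels[key]);
}

function draw() {
  image(video, 0, 0, width, height);
  fill(255);
  text(currentLabel, 10, 20);
  if (!classifying && knn.getNumLabels() > 0) {
    classifying = true;
    knn.classify(featureExtractor.infer(video), (err, result) => {
      if (!err && result) currentLabel = result.label; // drives how the marionette reacts
      classifying = false;
    });
  }
}
```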

Week 10 MLNI – Final Project Concept Presentation (Cherry Cai)

Control (Project Presentation)

  • Teammate: Wei Wang (ww1110)
  • Inspiration

Nowadays students and workers are under too much pressure brought on by society. A lot of us are acting against our wishes, like a marionette hanging from its threads, arms dangling, floating at the mercy of the breeze. Some who are tired of being cooped up struggle to free themselves from this control, but finally accept their fate and submit to the pressure.

  • Interface

A wooden marionette placed in the middle of the screen

  • Interaction
    1. Two control bars operate the orientations of the marionette’s body segments.
    2. Based on the user’s control, the marionette will give a protest, and the corresponding effect (e.g. voice, movements) will be triggered.
    3. If the instruction is not followed, segments will be thrown away in an attempt to escape the control.
    4. The segment will then be regenerated with a string that can, again, be controlled by the bars.
  • Machine Learning

KNN classification to recognize the orientation of the control bars

MLNI week9 KNN training (Shenshen Lei)

This week we were supposed to create a real-time KNN model. I used the sample code called KNN-image-Classification (link at the end).

I trained the model to recognize whether the user is wearing glasses.

The program takes screenshots during the training process. In the getResult function, the machine compares the current video frame with the screenshots in the database. I also added a filter so that when the machine detects “not wearing glasses”, the screen becomes blurred.

The obstacle I faced while editing the model was that I could not find the video parameters in the original model, so I could not add the filter directly to the HTML-linked video. To show the result continuously, I added an image() call and the filter function in the getResult part.
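Here is a minimal sketch of that approach, assuming the same MobileNet-features + KNNClassifier setup as the linked ml5_KNN_example; the label names, training keys, and blur amount are my own placeholders.

```javascript
// Draw the webcam frame onto the canvas with image(), then blur the whole canvas
// whenever the classification result is "not wearing glasses".
let video, featureExtractor, knn;
let wearingGlasses = true;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  featureExtractor = ml5.featureExtractor('MobileNet');
  knn = ml5.KNNClassifier();
}

function keyPressed() {
  // hypothetical training/start keys
  if (key === 'g') knn.addExample(featureExtractor.infer(video), 'wearing glasses');
  if (key === 'n') knn.addExample(featureExtractor.infer(video), 'not wearing glasses');
  if (key === 's') knn.classify(featureExtractor.infer(video), gotResults); // start classifying
}

function gotResults(err, result) {
  if (!err && result) wearingGlasses = result.label === 'wearing glasses';
  // keep classifying continuously
  knn.classify(featureExtractor.infer(video), gotResults);
}

function draw() {
  image(video, 0, 0, width, height); // the frame is drawn on the canvas...
  if (!wearingGlasses) {
    filter(BLUR, 6);                 // ...so filter() can blur it when no glasses are detected
  }
}
```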

There are still some problems in the project. For example, when the image is blurred, the computation becomes too slow.

GitHub link of the model:

https://github.com/cvalenzuela/ml5_KNN_example