MLNI Final Project – Jessica Chon

Project Name: Namaste

Description

For my final project, I wanted to create an interactive yoga teaching application for people unfamiliar with yoga. Users chose from six different images of yoga poses and guessed which pose matched the background on the screen and the pose name at the bottom. They could pick from cobra, tree, chair, corpse, downward dog, and cat pose.

Link to Demo

The goal of my project was not only to get users to draw the connection between a yoga pose’s name and the animal/object it was named after, but also to make them feel immersed in the natural environment these poses are typically associated with.

Inspiration

I was thinking about games and body movement/positions, and remembered one of my favorite childhood games, WarioWare: Smooth Moves. One of the moments that stuck with me the most was the pose imitation levels. The point of the game was to complete simple tasks within 3 seconds, so it very easily got stressful and fast-paced; the random “Let’s Pose” breaks, where players simply had to follow the pose on the screen, felt peaceful by comparison. That is where my idea came from: I wanted people to do yoga poses, too.

As I researched which poses I could incorporate into my project, I found it interesting that they were all named after objects, animals, and so on. Looking into it more, I found that one of yoga’s main principles is to truly embrace nature and the environment you’re in. That, along with how much I enjoyed the immersive environment of my midterm project, made me want users to feel immersed in the environment and truly understand the context these poses were named after.

MLNI Final Project Concept – Jessica Chon

Link to presentation

Project Idea

For my final project, I was thinking of creating an interactive website where users can learn yoga on their own and gain a greater appreciation for the practice. Essentially, users can choose from several images of yoga poses and try to imitate them. Based on how accurate their poses are, parts of their environment will change to match the pose name. If their pose is completely accurate, they themselves will become the animal/object the pose is named after.

For example, “downward dog” is a famous yoga pose. If a user’s pose is 50% accurate, perhaps only the background image on the screen will become a backyard, but at 100% accuracy, the user will become a downward-facing dog.
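To make the staging concrete, the logic might look something like this sketch (the 50%/100% thresholds come from the example above; the element id and image path are purely hypothetical):

```javascript
// Purely hypothetical sketch: map pose accuracy (0 to 1) to staged changes.
function updateEnvironment(accuracy) {
  const overlay = document.getElementById('dogOverlay'); // assumed element
  if (accuracy >= 1.0) {
    overlay.style.display = 'block'; // fully accurate: user becomes the dog
    document.body.style.backgroundImage = "url('backyard.jpg')";
  } else if (accuracy >= 0.5) {
    overlay.style.display = 'none';  // halfway: only the background changes
    document.body.style.backgroundImage = "url('backyard.jpg')";
  } else {
    overlay.style.display = 'none';
    document.body.style.backgroundImage = 'none';
  }
}
```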

 

Inspiration

I was inspired both by the names of certain yoga poses and by one of my favorite childhood Wii games, WarioWare: Smooth Moves. I always found the names of certain yoga poses really interesting/amusing, such as “downward dog”, “cat”, etc., and always wondered whether these poses were actually similar to the animals/objects they were named after. As for the Wii game, there is a part where players have to follow designated poses in order to earn a point. The game is typically fast-paced, but random “yoga” poses are thrown in to give players a break. Here is a link to gameplay footage for reference (skip to 3:30).

Developing the Project

Yoga comes from India, where Hinduism is widely practiced. After looking into Hinduism, I found that it deeply values and respects nature, which is why I want users to grow their appreciation and understanding of the poses. Because yoga is often very calm and spiritual, I want to keep the environment and user changes serene and nature-like.

Technically speaking, I would like to follow the code that Professor Moon provided our class, which uses KNN classification, MobileNet, and webcam images. This means I’m going to need to gather a database of images of people performing the yoga poses. Because my project involves a lot of body movement, I realized it would be difficult for users to capture an image of themselves without breaking the pose, so I will also try to incorporate voice recognition to trigger the image capture. Aside from the KNN classification, MobileNet, and voice recognition, I think the remainder of the project will just be very CSS-heavy.
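As a starting point, here is a minimal sketch of how that setup could look with the standard ml5.js and p5.js APIs (the pose label and the voice-command trigger are placeholders, not the actual class code):

```javascript
// Minimal sketch: MobileNet as a feature extractor feeding a KNN
// classifier with webcam frames (standard ml5.js + p5.js APIs).
let video;
let features;
let knn;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  features = ml5.featureExtractor('MobileNet');
  knn = ml5.KNNClassifier();
}

function draw() {
  image(video, 0, 0, width, height);
}

// Called (e.g., by a voice command) while the user holds a pose;
// the label is a placeholder such as 'downwardDog'.
function addExample(label) {
  knn.addExample(features.infer(video), label);
}

// Classify the current frame once enough samples exist
function classifyPose() {
  knn.classify(features.infer(video), (err, result) => {
    if (err) return console.error(err);
    console.log(result.label, result.confidencesByLabel);
  });
}
```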

MLNI Week 9 HW Jessica Chon

Project Summary

For my project, I decided to use the KNNClassification_PoseNet code Professor Moon sent us in order to make text respond to the distance between the user’s face and the camera. I was inspired by my parents, who, whenever I show them things on my phone, have to move their faces closer or further away to see what I’m showing them.

The way my project works is that I display two sentences in different font sizes. Below the sentences are buttons that the user clicks to gather samples of how close or far their face needs to be in order to read each font size. After gathering the samples, the user clicks a button that says “start reading”, and part of the first chapter of the first Harry Potter book appears. The font size then changes based on the distance between the user’s face and the camera, as learned from the previous samples.

 

Process

There was definitely a learning curve in understanding how the code works at first. I had to analyze how each function was triggered and where the data was being gathered and reset, but after looking at the code for a while, I was able to understand it. I dealt with a lot of CSS for this assignment because I wanted to organize the page more neatly so that the reading and the steps would feel more intuitive to users. I also included Bootstrap to arrange the first two buttons into two columns. In the script, it was mostly just using document.getElementById("").style.fontSize to change the font size. The rest was fairly simple.
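Here is a rough sketch of how those pieces fit together, following the shape of the KNNClassification_PoseNet example (the element ids, labels, and pixel sizes are illustrative, not the exact project code):

```javascript
// Illustrative sketch only; element ids, labels, and sizes are placeholders.
const video = document.getElementById('video'); // assumed <video> element
const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
const knn = ml5.KNNClassifier();
let poseFeatures = null; // flattened keypoints from the latest frame
let classifying = false;

poseNet.on('pose', gotPoses);

// Turn PoseNet keypoints into a feature vector for the KNN
function gotPoses(poses) {
  if (poses.length === 0) return;
  poseFeatures = poses[0].pose.keypoints
    .map((k) => [k.position.x, k.position.y])
    .flat();
  if (classifying) knn.classify(poseFeatures, gotResults);
}

// Sample buttons: store the current face distance under a font-size label
document.getElementById('bigFontBtn').onclick =
  () => knn.addExample(poseFeatures, 'large');
document.getElementById('smallFontBtn').onclick =
  () => knn.addExample(poseFeatures, 'small');
document.getElementById('startReading').onclick =
  () => { classifying = true; };

// Apply the predicted label to the reading text
function gotResults(err, result) {
  if (err) return console.error(err);
  document.getElementById('reading').style.fontSize =
    result.label === 'large' ? '32px' : '16px';
}
```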

Final Thoughts

My initial goal for this assignment was to make an interface that lets people adjust text to their eyesight. If I were to work more on this project, I would make the font-size changes feel smoother; right now, they change abruptly as the face distance changes. I would also probably add images and resize them in the same way. Aside from that, I would just make the interface look neater and more professional.

MLNI Midterm Jessica Chon

Project

For my midterm, I wanted to make an interactive landscape where users can manipulate the time of day, weather, season, and animals/insects using their body.

Controls

  • Sunrise/sunset – head position on either the left or right side of the screen
  • Bird & beehive – hands and arms; when raised, the right arm becomes smoke that clears away the beehive, and the left arm becomes a tree that the bird flies to
  • Rain and clouds – raising either arm makes the rain and clouds disappear
  • Winter – crossing your arms over your chest changes the screen to a winter scene

 

A link to the video demonstration can be found here.

Inspiration & Process

The inspiration for this project came from a previous homework assignment, which was a much more basic version of this idea. In that initial version, when you raised your arms, your body would change colors to either look like a tree or blend in with the environment.

I used bodyPix for this project to track the body movements of a user and manipulate the environment. In terms of coding, the functions listed above mostly came down to checking the positions of certain body parts, finding the average center of the hand pixels, and turning functions on and off with many if/else statements.
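As an illustration of the “average center” idea, the computation looks roughly like the sketch below, assuming the per-pixel part ids returned by BodyPix’s segmentPersonParts() (in its part map, 10 is the left hand and 11 the right hand); this is a sketch, not my exact code:

```javascript
// Illustrative only: average the pixel coordinates of one BodyPix part.
// `segmentation` comes from net.segmentPersonParts(video) and has
// data (part id per pixel, -1 for background), width, and height.
let rainOn = true;

function averagePartPosition(segmentation, partId) {
  let sumX = 0, sumY = 0, count = 0;
  for (let i = 0; i < segmentation.data.length; i++) {
    if (segmentation.data[i] === partId) {
      sumX += i % segmentation.width;             // pixel x
      sumY += Math.floor(i / segmentation.width); // pixel y
      count++;
    }
  }
  return count > 0 ? { x: sumX / count, y: sumY / count } : null;
}

// One of the on/off checks: a raised right hand (id 11) clears the rain
function updateRain(segmentation) {
  const hand = averagePartPosition(segmentation, 11);
  rainOn = !(hand && hand.y < segmentation.height / 2);
}
```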

MLNI Week 5 Interactive Portraiture HW (Jessica Chon)

Summary

For this assignment, I decided to go back to one of my earlier homeworks and revamp it to make it more interactive. The portrait shows the user blending in as part of nature: the head is supposed to be the sun, and the rest of the body symbolizes bushes. If the user raises their right hand, the hand turns into a cloud, the sun gets brighter, and the rest of the environment brightens as well. If the user raises their left hand, that hand becomes the new sun and the rest of the body changes into a tree.

 

Process

Initially, what I had in mind was for the image to look like it was raining, with the rain stopping when the user raised their left hand. However, I found that I really enjoyed changing the colors, and my friends who tried the homework also had more fun seeing the color changes.

Regarding how I changed the colors, I wrote code saying that if the camera detects a certain body part, and that body part is in a certain area of the screen, then the colors change. My code itself was just a lot of if/else statements and variables.
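In p5.js terms, the check boils down to something like the sketch below (the region boundary and colors are made up for illustration):

```javascript
// Illustrative if/else region check in p5.js: `hand` is the tracked hand
// position; raising it into the top third of the screen swaps the palette.
function pickBodyColor(hand) {
  if (hand && hand.y < height / 3) {
    return color(255, 200, 80); // raised hand: warm sun color
  }
  return color(34, 139, 34);    // default: bush green
}
```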

Reflection

I didn’t have as much difficulty with this assignment compared to previous ones. Rather, I still need more clarification on what some of the specific code means, such as what the segmentation and threshold do. I did receive more clarification after an office hour with Professor Moon, and I will continue to do more research on my own time.