Week 6: Midterm Proposal

American Sign Language (ASL) Interpreter 

Goal 

In this project I hope to create a tool that allows those unfamiliar with American Sign Language to have uninhibited interactions with individuals who have speech or hearing impairments. Using a camera to capture a user's hand gestures and body movements, we can determine the letters, words, and meaning behind them. 

Stages 

  1. Alphabet 
    1. Images 
    2. Live Video Feed 
  2. Words 
  3. Phrases 
  4. Sentences 

First, I will create an image classifier for the ASL alphabet using the ASL Alphabet MNIST dataset (source listed below). Next, I will work to apply a similar model to a live video feed. This is my goal for the midterm project. 
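
As a rough sketch of what this first stage could look like, the snippet below trains a small convolutional network on the alphabet images. It assumes the Sign Language MNIST CSV layout (a label column plus 784 pixel columns per 28x28 grayscale image, covering the 24 static letters, since J and Z involve motion); the file names and layer sizes are placeholders chosen for illustration, not final design decisions.

```python
# Sketch of the alphabet-stage classifier, assuming the Sign Language MNIST
# CSV layout: a `label` column plus 784 pixel columns per 28x28 grayscale image.
import pandas as pd
import tensorflow as tf

def load_split(csv_path):
    """Load one CSV split into (images, labels)."""
    df = pd.read_csv(csv_path)
    labels = df["label"].to_numpy()
    pixels = df.drop(columns=["label"]).to_numpy(dtype="float32") / 255.0
    return pixels.reshape(-1, 28, 28, 1), labels

# Hypothetical file names for the train/test splits of the dataset.
x_train, y_train = load_split("sign_mnist_train.csv")
x_test, y_test = load_split("sign_mnist_test.csv")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(25, activation="softmax"),  # labels run 0-24 (9/J unused)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=5)
```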

As I work towards the final project, I will build on the system already in place for the alphabet and try to create a model that can output the signed word. If I reach that point, I can move on to phrases and sentences. 
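
One way I might frame the word stage is as sequence classification: extract a feature vector from each video frame (for example, hand landmark coordinates) and let a recurrent layer summarize the clip. The sketch below is only a starting point; the clip length, feature size, and vocabulary size are placeholder numbers, not decisions I have made yet.

```python
# Possible framing of the word stage (a sketch, not a settled design):
# treat each signed word as a short clip of frames, extract a feature vector
# per frame, and classify the whole sequence. All sizes are placeholders.
import tensorflow as tf

FRAMES_PER_CLIP = 30      # assumed clip length (roughly one second of video)
FEATURES_PER_FRAME = 126  # e.g. 2 hands x 21 landmarks x (x, y, z)
VOCAB_SIZE = 100          # hypothetical number of target words

word_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FRAMES_PER_CLIP, FEATURES_PER_FRAME)),
    tf.keras.layers.Masking(mask_value=0.0),   # allow clips padded with zeros
    tf.keras.layers.LSTM(64),                  # summarize the motion over time
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),
])
word_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```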

Hurdles 

Despite breaking this down into sub-divisions, there are a couple of hurdles I still must overcome. They all stem from the difficulty of interpreting video feed data and the nuances found in each ASL sign. Listed below are the primary elements that make up the meaning of each sign (or collection of signs): 

  1. hand configuration 
  2. hand orientation: right or left
  3. hand positioning in relation to the body
  4. grammar (word order)

Figuring out the position of the hands in relation to the individual's body is possible through mapping the body. The same mapping also makes it possible to tell which hand is the left and which is the right. 
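
As a concrete (and still hypothetical) example of this mapping, the sketch below uses MediaPipe's hand tracking, which is one library I am considering rather than a final choice. It reads webcam frames, detects hand landmarks, and reports whether each detected hand is labeled left or right; a similar pass with MediaPipe's pose model would give body landmarks, so a hand's location could be measured relative to the shoulders or face.

```python
# Sketch of hand mapping with MediaPipe (an assumed library choice).
# Reads webcam frames, finds hand landmarks, and prints the handedness
# label and wrist position (normalized image coordinates) for each hand.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)  # default webcam
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for landmarks, handedness in zip(results.multi_hand_landmarks,
                                             results.multi_handedness):
                label = handedness.classification[0].label  # "Left" or "Right"
                wrist = landmarks.landmark[mp_hands.HandLandmark.WRIST]
                print(label, "wrist at", round(wrist.x, 2), round(wrist.y, 2))
        cv2.imshow("hands", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```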

An issue I predict will arise is the difficulty of producing grammatically correct English sentences from ASL. ASL follows its own grammar, with its own word order, and typically omits the function words (for example, "to" and "the") that written English needs to be fully comprehensible. 
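
As a toy illustration of that gap, the gloss below is a commonly cited example and the recognizer output is made up; the point is that stitching recognized signs together word-for-word would not read as English, so some translation step would be needed on top of recognition.

```python
# Toy illustration: ASL gloss order and missing function words mean a direct
# word-for-word rendering is not fluent English. The glosses are a
# hypothetical recognizer output, not real model predictions.
recognized_glosses = ["STORE", "I", "GO"]
word_for_word = " ".join(recognized_glosses)   # -> "STORE I GO"
intended_english = "I am going to the store."  # what the final system should produce
print(f"{word_for_word!r} -> {intended_english!r}")
```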

Sources: 

ASL Alphabet MNIST Dataset 

American Sign Language Wiki

ASL 101 Tutorials
