Project Proposal – Kris

Since we settled on what project we wanted to make before doing the research, we propose two different versions of our original idea as well as a backup.

1. Life Line ver. 1
We decided to make a large-scale interactive artwork. Players (an estimated 6 – 10 of them) will each hold a device. On the ceiling is a projector that casts a circle around each player on the floor. Each circle dims over time, meaning that the player is losing life. The only way to exist longer is to “build interactions” with other people, that is, to connect one’s device with another player’s for several seconds, until a line appears on the ground connecting the two. The more lines connect you with others, the longer you exist. The lines also fade gradually, so one needs to keep making new connections.
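To make that life cycle concrete, here is a minimal sketch in plain C++ of how each player's life could drain over time and be replenished by active connections while the lines themselves fade. All the constants (decay rates, update step) are placeholder assumptions, not final design values:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Placeholder tuning constants: real values would come out of playtesting.
const float LIFE_DECAY_PER_SEC = 0.05f;  // base rate at which a circle dims
const float LIFE_PER_LINK_SEC  = 0.04f;  // life restored per active line
const float LINK_DECAY_PER_SEC = 0.10f;  // how fast a line fades out

struct Player {
    float life = 1.0f;                   // 1.0 = fully lit circle, 0.0 = gone
};

struct Connection {
    int a, b;                            // indices of the two linked players
    float strength;                      // 1.0 = fresh line, fades to 0
};

// Advance the whole installation by dt seconds.
void update(std::vector<Player>& players, std::vector<Connection>& links, float dt) {
    for (auto& p : players)
        p.life -= LIFE_DECAY_PER_SEC * dt;      // everyone dims by default

    for (auto& c : links) {
        c.strength -= LINK_DECAY_PER_SEC * dt;  // lines disappear gradually
        if (c.strength > 0.0f) {                // only live lines sustain players
            players[c.a].life += LIFE_PER_LINK_SEC * dt;
            players[c.b].life += LIFE_PER_LINK_SEC * dt;
        }
    }
    // Drop fully faded lines: players must keep making new connections.
    links.erase(std::remove_if(links.begin(), links.end(),
                               [](const Connection& c) { return c.strength <= 0.0f; }),
                links.end());

    for (auto& p : players)                     // clamp life into [0, 1]
        p.life = std::max(0.0f, std::min(1.0f, p.life));
}

int main() {
    std::vector<Player> players(2);
    std::vector<Connection> links = {{0, 1, 1.0f}};
    for (int t = 1; t <= 5; ++t) {
        update(players, links, 1.0f);
        std::printf("t=%ds life[0]=%.2f lines=%zu\n", t, players[0].life, links.size());
    }
}
```

With these placeholder rates, an isolated player fades out in about 20 seconds, a single line dies after 10, and while connected the net drain drops to 0.01 per second, so survival genuinely depends on repeated contact.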
During our research, we read the ISEA essay that discusses the relationship between nodes and networks, which helped us design the function and life cycle of each player so as to present a profound idea (which will be mentioned later). Research on other projects offered us inspiration and concrete implementation approaches for projection mapping (i.e., the function of the ceiling projector).
There are several technical difficulties in the project: wireless communication between many Arduinos and a computer; inter-Arduino communication; programming the network; locating each player and casting projections around them…
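On the wireless front, one plausible route is to give each player a WiFi-capable board such as an ESP32, whose Arduino core ships WiFi and UDP libraries, and let a central computer own all game state. Here is a minimal sketch of the per-device side; the network name, host address, port, and message format are all assumptions for illustration:

```cpp
#include <WiFi.h>
#include <WiFiUdp.h>

// All of these are placeholder assumptions, not a final design.
const char*     SSID      = "lifeline-net";
const char*     PASSWORD  = "lifeline-pass";
const IPAddress HOST(192, 168, 1, 100);  // computer driving the projector
const uint16_t  PORT      = 4210;
const uint8_t   PLAYER_ID = 1;           // unique per device

WiFiUDP udp;

// Stub: in the real device this would debounce whatever "holding two
// devices together for several seconds" physically means (pads, IR, RFID...).
bool contactDetected() {
  return false;
}

void setup() {
  Serial.begin(115200);
  WiFi.begin(SSID, PASSWORD);
  while (WiFi.status() != WL_CONNECTED) {  // block until joined
    delay(250);
    Serial.print(".");
  }
  udp.begin(PORT);
}

void loop() {
  // Send a heartbeat (plus any contact event) to the central computer,
  // which owns the life/line state and what the ceiling projector draws.
  char msg[32];
  snprintf(msg, sizeof(msg), "%d,%d", (int)PLAYER_ID, contactDetected() ? 1 : 0);
  udp.beginPacket(HOST, PORT);
  udp.write(reinterpret_cast<const uint8_t*>(msg), strlen(msg));
  udp.endPacket();
  delay(100);                              // ~10 updates per second
}
```

Keeping the devices as dumb reporters and centralizing all logic on the computer sidesteps the harder inter-Arduino communication problem, at the cost of a single point of failure.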
As an artwork, it is intended for any audience who is interested. Instead of having a practical use, it is intended to convey a message: what does it mean to say that we exist?

2. Life Line ver. 2
Since the most difficult piece of technology would likely be locating the players in physical space, version two of Life Line uses a screen that shows each player as a point, with lines connecting the points representing the connections players have built.
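Dropping the physical tracking mostly reduces ver. 2 to rendering: each player becomes a point at a fixed screen position and each connection a line whose opacity is its remaining strength. A minimal sketch follows; the graphics library is still undecided, so the draw call is reduced to a printout, and the 640x480 layout and ring placement are assumptions:

```cpp
#include <cmath>
#include <cstdio>

const float PI = 3.14159265f;
const int   N_PLAYERS = 8;              // assumed head count
const float CX = 320.0f, CY = 240.0f;   // screen center (640x480 assumed)
const float RADIUS = 200.0f;            // ring the points sit on

// Place player i evenly around a ring. A real version might move the points
// with activity, but fixed spots keep the player-to-point mapping legible.
void playerPos(int i, float& x, float& y) {
    float angle = 2.0f * PI * i / N_PLAYERS;
    x = CX + RADIUS * std::cos(angle);
    y = CY + RADIUS * std::sin(angle);
}

// Stand-in for the real draw call: `strength` in [0,1] comes from the same
// connection model as ver. 1 and maps directly onto the line's alpha.
void drawLine(int a, int b, float strength) {
    float ax, ay, bx, by;
    playerPos(a, ax, ay);
    playerPos(b, bx, by);
    std::printf("line (%.0f,%.0f)-(%.0f,%.0f) alpha=%.2f\n", ax, ay, bx, by, strength);
}

int main() {
    drawLine(0, 3, 1.0f);   // a fresh connection: fully opaque
    drawLine(2, 5, 0.25f);  // an old one, nearly faded from the screen
}
```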

3. Glove Translator
It is a glove that can translate sign language (used by people with speech disabilities) into text cast onto a screen, as well as machine-synthesized speech. It is meant to help people with speech disabilities communicate more efficiently with everyone else.
I researched two existing implementations:
1.
https://economictimes.indiatimes.com/magazines/panache/meet-the-new-google-translator-an-ai-app-that-converts-sign-language-into-text-speech/articleshow/66379450.cms
Instead of using dedicated hardware, it uses your phone’s camera along with AI to recognize sign language. It has the same purpose as our design but takes a completely different approach.

2.
https://www.newscientist.com/article/2140592-glove-turns-sign-language-into-text-for-real-time-translation/
DOI: 10.1371/journal.pone.0179766
It is an ongoing project that uses knuckle sensors to detect knuckle curvature while signing. It can only translate the 26 letters of the English alphabet.
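To get a feel for the sensor side of such a glove, here is a minimal Arduino-style sketch, assuming one flex/knuckle sensor per finger wired as a voltage divider into analog pins A0 through A4. The threshold and the tiny two-letter lookup table are placeholder assumptions for illustration, not a real fingerspelling model:

```cpp
// Read five flex sensors and match the bent/straight pattern against a
// lookup table. Pins, threshold, and the table entries are assumptions:
// a real glove needs per-sensor calibration and all 26 letters.
const int SENSOR_PINS[5] = {A0, A1, A2, A3, A4};
const int BENT_THRESHOLD = 512;   // ADC reading above which a finger counts as bent

struct Pose {
  uint8_t fingers;                // bit i set = finger i bent (thumb = bit 0)
  char letter;
};

// Two illustrative entries; the bit patterns are rough stand-ins.
const Pose POSES[] = {
  {0b11110, 'A'},                 // four fingers curled, thumb straight
  {0b00000, 'B'},                 // all fingers extended
};

uint8_t readFingers() {
  uint8_t bits = 0;
  for (int i = 0; i < 5; i++) {
    if (analogRead(SENSOR_PINS[i]) > BENT_THRESHOLD)
      bits |= (1 << i);           // mark finger i as bent
  }
  return bits;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  uint8_t pose = readFingers();
  for (const Pose& p : POSES) {
    if (p.fingers == pose) {
      Serial.println(p.letter);   // text output; speech synthesis would
      break;                      // happen on the connected computer
    }
  }
  delay(200);
}
```

Matching static finger patterns like this is also why the glove above stops at letter-by-letter fingerspelling: full sign language involves hand motion and orientation, which flex sensors alone cannot capture.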
