Blog Post #3

Last Friday I was told to attend a briefing on eating disorders. The briefing’s intention was to give Congress a sense of how urgent eating disorders are and how they can be addressed through legislation. It was not a very informative briefing, because the panelists told too many stories about eating disorders, while we generally expect panelists to give explicit information on the status quo and a clear solution to the problem. Despite that, one of the panelists told a really powerful story that changed what I think about eating disorders and made me want to do something about them.

The panelist, a nutritionist and the mother of a 12-year-old girl, first shared a few case studies on eating disorders that explained how they can be treated in both psychological and medical ways. She told us that weight stigma and the stress of “healthy eating” in schools are main causes of eating disorders among teenagers. She then went on to share her own story about her daughter, who learned how to eat healthily at school. The panelist explained how she approached the school and provided it with resources to avoid triggering eating disorders. The main goal of her story was to tell the lawmakers that the best way to address the problem is to provide schools with enough resources and education. Her story didn’t fall perfectly into the three-stage model of public narrative, but it nonetheless conveyed stories of self, us, and now. She began with stories of others and then introduced her own story of self; after that she implicitly told the story of us and of now: that eating disorders are a problem most of us share and one that should be addressed now.

As I said before, I find her story convincing because she used her own example to show that the problem of eating disorders is both prevalent and solvable. She also successfully convinced me that there is an urgent need to address the issue.

Magic Brush-Guangbo Niu-Rudi

Magic Brush Logo

Test video: https://drive.google.com/file/d/1VjWs9XvbtSQxb0JJEkcue5Dtv2v6pDmz/view

Conception and Design

From the beginning to the end, we have always wanted our project to be accessible and available to everyone, so we tried our best to eliminate physical contact and interactions that require precision (i.e., keyboard and mouse). This doctrine is what we have always stuck to, and we made a few decisions around it. The major decision was to use the Leap Motion. After we borrowed it from the ER and tested it, we found that it is very delicate and sensitive, easy for almost everyone to operate, and, more importantly, it does not require physical contact. We used several very big pushbuttons to make them easy for the user to push. Also, since the Leap Motion has a detection boundary that we wanted to be visible to the user, we fabricated a frame to indicate the boundary.

During our talk with Young, he pointed out that we should focus on one type of disability instead of expanding our target users to everyone with limb disabilities. Therefore, we decided to target the project at people with hand disabilities. We then made a few alterations to our project; for example, we adjusted the Leap Motion code so that it detects only the wrist rather than the entire hand.

Actually, in the beginning, we wanted to use a joystick to control the brush instead of the Leap Motion, because we thought a joystick would be easy to use even for people with hand disabilities. However, after we studied the Xbox Adaptive Controller and consulted our friends, we found that a joystick is not easy to use even for people without disabilities, so we abandoned it and turned to the Leap Motion.

Fabrication and Production

The fabrication and production process was generally smooth. One of the biggest troubles was the save feature. Our design was that pushing a button would save the current frame to the computer. We used the saveFrame() function in Processing, but every time we pressed the button, it would save more than one frame. We investigated for hours and found that the serial communication caused it: while the button was held down, several signals were sent to Processing, each saving a frame. To solve the problem, we modified the button code so that only one signal is sent, when the button is released. Also, during the user test session, Rudi suggested that we add a thumbnail feature, so we spent another few hours figuring out how to make that happen.
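The fix can be sketched as simple edge detection (the names below are hypothetical, not our exact Arduino code): instead of sending a save signal on every loop pass while the button reads pressed, remember the previous reading and send only on the pressed-to-released transition, so one press produces exactly one save.

```cpp
#include <vector>

// Hypothetical sketch of the debounced save button. The original bug:
// while the button was held, a save signal was sent every loop pass,
// so Processing called saveFrame() several times per press.
struct SaveButton {
    bool wasPressed = false;

    // Returns true exactly once per press, at the moment of release
    // (a pressed -> released falling edge).
    bool shouldSendSave(bool isPressedNow) {
        bool send = wasPressed && !isPressedNow;
        wasPressed = isPressedNow;
        return send;
    }
};

// Count how many save signals a sequence of button readings produces.
// A 4-cycle hold would have sent 4 signals with the old code; the
// edge-detecting version sends exactly one.
int countSaves(const std::vector<bool>& readings) {
    SaveButton btn;
    int saves = 0;
    for (bool r : readings) {
        if (btn.shouldSendSave(r)) saves++;
    }
    return saves;
}
```

On the Processing side nothing else changes: each incoming signal still triggers one saveFrame() call, but now there is only one signal per press.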

Despite all the adaptations we made to improve the project, the really significant thing that happened during the process was the change in our project’s context and target user. During the user test session, both Rudi and Young pointed out that our project didn’t look like a device specially designed for people with disabilities. What they said made us rethink our project carefully, and we found that there were actually better ways to help people with disabilities draw. For example, we could design a drawing game on Xbox and use the Xbox Adaptive Controller as the input, because it is a manufactured product that must have been tested thousands of times. Our project looked more like a fancy device for amusing and entertaining people without disabilities. I mean, it is not that people with disabilities would find our device unfriendly, but that the device might not be the best choice for them.

Therefore, after the user test, we decided to make our project an educational one that teaches people about hand disabilities. We want the project to show that, even with many measures to ensure accessibility, it is still difficult for people with hand disabilities to create artworks. To make that happen, we used tape to bind the user’s fingers and put a red sticker on their wrist so they could simulate having a hand disability. In addition, we added a QR code on the canvas that directs to a Wikipedia page on hand disabilities.

Conclusions

Our initial goal was to make a project that enables people with hand disabilities to draw free from restraints, but in the end we changed our goal to make an educational project about the hardships that people with hand disabilities are facing. Despite that, we think our project generally aligns with our definition of interaction, which is a process that 1) involves two or more actors, 2) of which at least two are cognitive systems, 3) requires input, processing, and output, and 4) gives clear and proper feedback. It will raise people’s awareness of hand disabilities and help them empathize.

The most important lesson I took from this experience is that, to build a project that really targets your users, you should find those users and ask for their advice even before you design it. We concluded that the reason people didn’t think it was a device for people with disabilities was that we did not consult people with disabilities or have them test it. If we had more time, we would probably stick to our initial goal, invite people with disabilities to talk about what they really need, and invite them to test our product.

Recitation 10: Media Controller by Guangbo Niu

For this recitation, I decided to use an infrared distance sensor to control the shape and size of the pixels of the live cam. Here is a video.
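The core of the idea can be sketched as a range mapping (the function names and the 4-to-40-pixel range here are illustrative, not the exact values from my sketch): the Arduino ADC reading from the infrared distance sensor is mapped to the tile size used when pixelating the live-cam image, so moving a hand closer to the sensor changes how blocky the image looks.

```cpp
// Hypothetical sketch of the sensor-to-pixel-size mapping.
// Same formula as Arduino's map(), with the input clamped first so
// noisy or out-of-range ADC readings cannot produce a bad tile size.
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    if (x < inMin) x = inMin;
    if (x > inMax) x = inMax;
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Map a 10-bit ADC reading (0-1023) to a tile size between 4 and 40
// pixels; in Processing this size would set the step of the grid that
// samples the camera image and draws one rectangle per cell.
int tileSizeFromSensor(int reading) {
    return static_cast<int>(mapRange(reading, 0, 1023, 4, 40));
}
```

In the Processing sketch, the only remaining work is to loop over the camera frame in steps of that tile size and draw one rectangle (or other shape) per cell, colored by the pixel sampled at that position.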

After reading “Computer Vision for Artists and Designers,” I feel that what I have done here is far from computer vision, because I think computer vision should at least involve “image understanding,” and my project did not require the computer to understand the image. However, my project does allow the computer to manipulate an image according to the user’s instructions, which is something like an application of computer vision.

Recitation 9: Final Project Process by Guangbo Niu

Step 1

“Space Time Symphony”

This project by Robin integrates audio and visual experience. It displays different abstract visual effects and sound effects according to the user’s movement or other inputs. It looks like one of the projects I researched before, which gives the user an all-round visual and audio experience. We suggested that she use proper sensors to detect the user’s input. I think this project is interesting, although it does not show much significance in human terms. It looks cool, but I don’t know why people would care about it.

“Do you know you have magic?”

I really like this project. It allows the user to direct a robot to move on sand in order to draw shapes. I personally suggested that she could use a Leap Motion sensor to detect the user’s input. Together we suggested that she allow the user to draw both with the robot and in Processing, where the robot drawing erases itself every minute and the Processing drawing stays forever, in order to create some sense of Zen. I think this project lets the user interact with the sand in a very indirect way, using the robot as a medium, which makes the interaction more fun.

“Driving into imagination”

This project is basically a driving game that lets the user steer with a steering wheel and provides a visual experience through Processing. We suggested that building a real car-like device could be hard, and that a tilt sensor could be a lot of help for them. However, I didn’t quite get what she was trying to make of it; I mean, I don’t understand what kind of users it is intended for and what utility it will bring, because there are already a lot of driving arcade machines on the market.

Step 2

One of the most useful pieces of advice I got was Robin explaining to me exactly how the Leap Motion works. I also got the suggestion that we could use the user’s head as input, since head movement is one of the friendliest ways for people with disabilities to give input. They also suggested that we offer different brush types to the user, but this could be difficult for us. Finally, they all thought the project was a little simple and boring, so maybe I should elaborate on it and add some more features and functions.