Tenderheart-Olivia Zhou-Younghyun Chung

CONTEXT AND SIGNIFICANCE

As in our previous group project, I began my midterm project by reflecting on my own definition of “interaction”. The word “interdependence” came to me, as well as the analogy of “conversation”, which left a deep impression on me in Crawford’s “The Art of Interactive Design”. I then thought of the app “Talking Tom”: when you touch its different body parts, it reacts differently, though its main function is repeating what people say in a funny voice. That gave me the idea of a talking teddy bear that responds to different postures, such as a hug or a touch on its head. I hadn’t thought about a target audience, but after I shared my idea with my partner Citlaly, she applauded it and said it could cheer up a sad person with sweet messages. So we happily adopted that as our intention.

CONCEPTION AND DESIGN & FABRICATION AND PRODUCTION

At first, I imagined our project responding to people according to their different postures: users could hug the bear, touch its head, or hold its hand, and the teddy bear would talk to them in a sweet voice. Because we were required to use at least one digital method from the digital fabrication classes, my partner 3D-printed a teddy bear. Then the first problem came: people couldn’t hug it because it was too small and hard, and since it was solid, there was no way to put sensors inside it to sense people’s postures. So we decided at first to attach sensors to its surface. The second problem was that we didn’t know what sensors could sense people’s large but light gestures, or where to find them. We borrowed three FSR pressure sensors at the ER and found them so small and insensitive that they couldn’t register a light touch; instead we had to press the small round metal disc hard. But we had no alternatives, so we adjusted our project to this sensor. The third problem was that we didn’t know how to make the bear talk like a human; it seemed too complex for us. So my partner came up with the idea of Morse code, which turned out to be a surprisingly good one: the bear could express different messages without making sounds, using an LED. Our last problem was coding. At first we searched the Internet and got some inspiration, then my partner turned to a teaching assistant for help. Before the User Test, we could only run one option, but at least it succeeded. Below is our first version.
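The Morse-code idea boils down to mapping each letter of a message to dots and dashes before blinking them out. Here is a minimal sketch of that lookup in plain C++ (not our actual Arduino sketch; the table only covers a few letters, and the function names are my own illustrative choices):

```cpp
#include <string>
#include <map>

// Look up the Morse pattern for one letter. The table is deliberately
// tiny here; a real sketch would cover the whole alphabet.
std::string morseFor(char c) {
    static const std::map<char, std::string> table = {
        {'H', "...."}, {'U', "..-"}, {'G', "--."},
        {'S', "..."},  {'M', "--"},  {'I', ".."},
        {'L', ".-.."}, {'E', "."}
    };
    auto it = table.find(c);
    return it == table.end() ? "" : it->second;
}

// Encode a whole word, with a space as the gap between letters.
std::string encodeMorse(const std::string& word) {
    std::string out;
    for (char c : word) {
        if (!out.empty()) out += " ";
        out += morseFor(c);
    }
    return out;
}
```

With this in place, blinking the LED is just a matter of walking the encoded string symbol by symbol.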


We got a lot of useful feedback and learned a lot at the User Testing session. The issue mentioned most often was that users were confused by the Morse code and didn’t understand what was happening when our LED started blinking. Some thought they should pinch the sensor according to the three Morse-code messages we provided, but those are actually the output; some even ignored the little LED. So we added a short instruction as well as a Morse-code alphabet made with the laser cutter. I also added a 2-second delay before every response starts.

The professors also gave us two extra pieces of advice. The first was to use a real stuffed toy, which is easy to buy on Taobao; this also solved our problem of attaching the sensor to the bear. The sensor often came off the 3D-printed bear, but after my partner sewed it into the stuffed bear, it worked well ever since. The second was to use a vibrating actuator, but I found it more convenient to use the buzzer in my Arduino kit, since I was not familiar with the code for that actuator. What’s more, given my limited ability, it seemed easier to use one sensor to sense different amounts of force than to connect three sensors to different parts of the bear, so we gave up our original plan. After that, I learned the code for the buzzer by analogy with the code for the LED. In the end they both worked; although the LED and the buzzer were not as synchronized as I had hoped, I was very satisfied with the outcome.
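The synchronization problem between the LED and the buzzer can be avoided by driving both from one shared timeline instead of two separate loops. A minimal sketch of that idea in plain C++ (the struct, function name, and timings are my illustrative assumptions, using typical Morse ratios of dot = 1 unit and dash = 3 units, not our exact code):

```cpp
#include <string>
#include <vector>

// One timeline step: how long (in ms) the LED and buzzer are both on,
// then both off. Driving both outputs from the same steps keeps them
// in sync by construction.
struct Step { int onMs; int offMs; };

// Build a shared schedule from a Morse string.
std::vector<Step> schedule(const std::string& morse, int unitMs = 200) {
    std::vector<Step> steps;
    for (char c : morse) {
        if (c == '.')      steps.push_back({unitMs, unitMs});       // dot
        else if (c == '-') steps.push_back({3 * unitMs, unitMs});   // dash
        else               steps.push_back({0, 3 * unitMs});        // letter gap
    }
    return steps;
}
```

On the Arduino side, each step would turn the LED and buzzer on together for `onMs`, then off together for `offMs`.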

Before the final presentation, I also tested many times and adjusted the threshold ranges to make the chances of triggering the three options as equal as possible. My partner took responsibility for preparing the PPT and also shot a cute video. Good cooperation and division of work are important, and I’m glad we managed both. Here’s the last version (without the bear) and the 1-minute video.
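The threshold tuning above amounts to splitting the sensor’s reading range into three bands. A minimal sketch in plain C++ (the cut-off values here are illustrative, not the ones we settled on; on an Arduino the reading would come from `analogRead` with a 0–1023 range):

```cpp
// Map one FSR pressure reading (0-1023, a 10-bit ADC range) to one of
// three responses. We tuned the cut-offs by trial and error until the
// three options triggered about equally often.
int chooseOption(int reading) {
    if (reading < 340) return 0;   // light squeeze
    if (reading < 680) return 1;   // medium squeeze
    return 2;                      // hard squeeze
}
```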

Citlaly Weed 1-minute video

CONCLUSIONS

I think our project partially achieves the goal of cheering up sad people: it sends sweet messages in Morse code, and most people can understand them with the prompt we provided. However, without a real voice and direct expression, it still seems a little strange and sometimes confusing. The level of interaction is also a little low; more research and technology would be needed to build a more humanized project. If I had more time, I might try more suitable sensors and replace the Morse code with a human voice. Still, through this experience I learned that even a seemingly simple project takes a lot of time and effort to finish, and that we can’t always achieve what we expect. If we want to realize our expectations, the only way is to keep experimenting, keep failing, and keep improving.

Reference List

 Pressure Sensor 

Morse Code
