
Ken's Documentation Blog


Week 14: Challenge Blog

December 17, 2022 by Ken Wu

Hearo

Hearo is a fusion of a physical product, a pair of glasses and a bracelet, and a digital application designed to aid people with hearing disabilities in daily communication.

To truly understand what it is like to have a hearing disability, we developed three empathy tools that focus on both the physical impairment and the mental experience. The first, “The Wall,” is an installation of four mirrors surrounding the participant so that they cannot see the outside world, only their own reflection. It creates a feeling of isolation, where one can hear only their own voice, and mimics the way these individuals are often omitted and othered in society. The second, “The Distortion,” is about the distorted sounds that individuals with a hearing disability perceive: they may hear something, but it will be muffled. We also wrote a small program to mimic this feeling, in which users ask questions and the AI answers randomly, expressing the kind of confusion that hearing-impaired individuals feel. Lastly, for the mental side, a very tight elastic band is worn around the head to create tension and stress. This pushes people to feel the kind of pressure that deaf individuals often feel when interacting with others and going out.
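To give a sense of how that confusion program works, below is a minimal sketch of the random-answer responder; the replies and function names here are placeholders rather than the exact code we wrote.

```python
import random

# Canned replies that deliberately ignore the question, mimicking how a
# muffled, distorted conversation forces the listener to guess and respond
# with something only loosely related.
RANDOM_REPLIES = [
    "Yes.",
    "No.",
    "Sorry, could you say that again?",
    "I think so?",
    "Maybe later.",
    "That sounds good.",
]

def distorted_answer(question: str) -> str:
    """Return a reply chosen at random, regardless of what was asked."""
    return random.choice(RANDOM_REPLIES)

if __name__ == "__main__":
    while True:
        question = input("Ask me anything (or 'quit'): ")
        if question.strip().lower() == "quit":
            break
        print(distorted_answer(question))
```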

To verify the empathy tools, I interviewed Adam Liu at the event “Innovation for Inclusion.” Adam is severely deaf and cannot communicate effectively by voice. He cannot hear what people say and has to use applications both to listen and to communicate with others. Even with these applications he faces communication issues, mainly because of a lack of clarity in people's speech and because mobile voice recognition can only capture one voice at a time. Noisy environments make this worse, and he often misses information. Because of this kind of miscommunication, he frequently has to guess what others say, since it is inconvenient to keep asking.

Our solution therefore focuses on both the physical devices and an auxiliary app. The physical devices carry an output interface that presents information to the user, while the auxiliary app controls that output interface so the experience can be personalized.

The physical devices are a pair of glasses and a bracelet, both designed to look like everyday objects so they do not stand out and the user can feel normal. They are easy to use and easy to carry: the user wears them rather than holding them, and everything runs automatically according to the settings chosen in the auxiliary app.

The glasses have built-in cameras that use eye-tracking technology to capture movements and sounds: the user can look at one person, and the glasses will focus on that person's sound, especially the dialogue they speak. As for the built-in sensors, there is one to capture audio input from the person talking and one to transmit the user's own voice to the interface. Lastly, there is a built-in audio system to help Adam talk, acting as his digital voice, since he has a speech impairment as well.

The bracelet translates sign language into spoken language, which is what gets passed to the audio system of the glasses to speak aloud. It has a camera and sensors that accomplish this by tracking finger and hand movements, which are then transmitted to the glasses for output on the interface.
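To make the division of labor between the two devices clearer, here is a rough sketch of how the glasses and bracelet could hand information to each other. The class and function names are illustrative placeholders; the real devices would rely on dedicated sign-recognition and speech models rather than the stubbed results shown here.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One captured camera frame from either device (placeholder type)."""
    pixels: bytes

def recognize_sign(frames: list[Frame]) -> str:
    """Bracelet side: turn tracked finger/hand movement into text.
    A real implementation would run a sign-language recognition model."""
    return "hello, nice to meet you"  # placeholder result

def transcribe_focused_speaker(frames: list[Frame], audio: bytes) -> str:
    """Glasses side: use eye tracking to pick the speaker the user is
    looking at, then transcribe only that person's speech."""
    return "how are you today?"  # placeholder result

def speak(text: str) -> None:
    """Glasses audio system: play the user's digital voice."""
    print(f"[digital voice] {text}")

def show_caption(text: str) -> None:
    """Glasses display: show the other person's words as a bubble/caption."""
    print(f"[caption] {text}")

# One round of conversation: incoming speech becomes a caption,
# and the user's signing becomes outgoing digital speech.
show_caption(transcribe_focused_speaker(frames=[], audio=b""))
speak(recognize_sign(frames=[]))
```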

As for the output interface, the sensors transmit what the speaker says so it can be shown on the glasses. It can be presented in bubble mode, which is similar to the texting interface on a phone, or in caption mode, which is like movie captions. The glasses also have a sound-detection feature that senses the ambient sound around the user and suggests how loudly to speak, letting the user know how to talk in different situations. Adam does not know how loud his voice is, and this will help him know.
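As a rough illustration of the volume suggestion, the sketch below assumes the glasses can read the ambient sound level and the user's own speaking level in decibels; the thresholds are made up for illustration.

```python
def suggest_volume(ambient_db: float, user_db: float) -> str:
    """Compare the user's speaking level with the ambient level and
    suggest an adjustment to show on the glasses."""
    # Aim to speak a little above the ambient noise, but not to shout.
    target_low = ambient_db + 5
    target_high = ambient_db + 15
    if user_db < target_low:
        return "Speak a bit louder"
    if user_db > target_high:
        return "You can speak more softly"
    return "Your volume is good"

# Example: a quiet room versus a noisy street
print(suggest_volume(ambient_db=40, user_db=42))   # Speak a bit louder
print(suggest_volume(ambient_db=70, user_db=90))   # You can speak more softly
```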

Below is a clearer example of this in an interview situation, where visualizing what was said before is important. You can also see how the digital voice appears during the meeting to support good communication.

Lastly, the application connects to our devices so the user can make personal adjustments. You can choose whether to turn Hearo on, and you can choose the dialogue mode, e.g. bubble mode or caption mode, which is then what you see on the glasses. Moreover, since everyone has a different level of eyesight and different preferences for color, opacity, and size, these can also be adjusted in the interface style so the product stays comfortable to use. The last important feature is profiles, which are shortcuts within the application: a tap interface the user can press to apply the correct presets for certain situations. This saves time, much like the customizable buttons that photographers set up on their cameras.
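Here is a small sketch of how such profiles could be stored and applied; the field names and preset values are illustrative assumptions, not the final design.

```python
from dataclasses import dataclass

@dataclass
class HearoSettings:
    enabled: bool = True
    mode: str = "bubble"        # "bubble" or "caption"
    text_size: int = 18         # points
    text_color: str = "#FFFFFF"
    opacity: float = 0.8        # 0.0 (transparent) to 1.0 (opaque)

# Profiles are just named presets the user can apply with one tap.
PROFILES = {
    "interview": HearoSettings(mode="caption", text_size=22, opacity=0.9),
    "grocery":   HearoSettings(mode="bubble", text_size=18, opacity=0.7),
    "off":       HearoSettings(enabled=False),
}

def apply_profile(name: str) -> HearoSettings:
    """Return the preset for the chosen profile, falling back to defaults."""
    return PROFILES.get(name, HearoSettings())

print(apply_profile("interview"))
```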

Filed Under: Application Lab

Week 13: Challenge Blog

December 14, 2022 by Ken Wu

This week we focused on connecting all of our research together and recreating the application in a more holistic manner. For our research, we focused on the individual I initially interviewed, Adam Liu, and identified that communication, both speaking and listening, was highly important. I communicated more with Adam this week, especially as it was Deaf Awareness Day. He mentioned more issues and, in particular, shared an article about how it really feels to be deaf. The article described the same issues of miscommunication, guessing, and lack of awareness of one's surroundings that Adam had raised. This was great affirmation for our empathy tools as well as the solution we came up with.

Because of this, we focused more on the user experience and on features such as the sound indicators on the glasses and personalization through the application. The application was redesigned by Zhenyu after our group discussion on the necessary features, while I built out the prototyping logic and recreated the buttons so they were actually usable, shaping the user experience. In this way, the application became an understandable prototype.

Before:

After:

Filed Under: Application Lab

Week 12: Challenge Blog

December 7, 2022 by Ken Wu

From our three research directions of a glasses-and-glove communication system, a volume interface, and a car-driving interface, we understood that the thread connecting all of them was communication. We wanted the user to focus on the communication system, but also to fold the other ideas into it for the final project. We received feedback that gloves would be responsive but too eye-catching, which would make users feel different from others, so we moved to a system of glasses and a bracelet.

The glasses have built-in cameras and speakers. The cameras use eye-tracking technology to track and focus on the individual who is speaking. The speakers serve as an output for the user's own communication, since they may have an oral impairment. This is where the bracelet comes in: it likewise has a camera that maps and visualizes the arm and fingers, helping the user, presumably a sign-language speaker, communicate faster with others.

We imagined this being used in different scenarios, whether buying groceries or doing an interview. When wearing the glasses, one sees text bubbles generated by tracking what others say.

To personalize the glasses and bracelet to the user, we imagined an application with an interface for making these adjustments. It includes settings to connect to the bracelet and glasses and to edit the text size, eye-tracking speed, spoken voice, and so on. This was a draft version I created in Figma without much regard for the actual user experience, which would need to be more minimal for optimal user satisfaction. The step I plan to take in the upcoming week is actually refining the app's draft screens.

Filed Under: Application Lab

Week 10: Challenge Blog

November 30, 2022 by Ken Wu

For my learnings about the empathy tools, we designed them together, thinking about the complexity of the feelings that hearing-impaired individuals experience.

For example, we feel that these individuals not only cannot hear well, but also experience loneliness and a sense of being different. They become like someone living in their own box, where they can only see their own reflection, cut off from others. I believe this loneliness and confinement is similar to the helplessness they feel. From this experience, I felt that I could only hear my own voice rather than the voices outside, which mimics the hearing-impaired experience. I learned that there is a sort of wall between myself and everyone else.

Likewise, with the second empathy tool, the earbuds and band around my head, I felt a kind of inner pressure. The design does not allow me to experience the world in a way that feels normal; I feel a weight on myself. This case also taught me something about the inner world of the hearing impaired.

Likewise, for the third tool, we created a visualization of how people receive information. The wearer cannot hear what everyone else hears, so they have to guess the question and answer something at random. I feel this is a clear indicator of the deaf experience, since so much of daily life depends on intuition and imagination. The code for this empathy test also lets the user understand how their own voice is perceived: sometimes we ourselves do not speak clearly enough for them to follow, and at other times they can only say yes or no, without elaborating much, due to vocal impairment.


Filed Under: Application Lab

Week 11: Challenge Blog

November 30, 2022 by Ken Wu

For the research interaction, I wanted primary research and personal engagement with people who actually have hearing impairments. It is one thing to think about the issues and design empathy tools to understand how the targeted users feel, but it is another to understand the personal experiences of hearing-impaired individuals themselves. This led me to attend workshops to develop an understanding of how to create inclusive design for those with hearing impairments. I worked on creating a solution based on an actual interview with a hearing-impaired person and personally experienced how to communicate with them. This communication experience led me to understand more about the deaf experience; my understanding changed from simply “they experience communication issues” to “the communication between us is slow and sometimes involves misunderstandings.”

 

I interviewed Adam Liu, someone with a severe hearing impairment, to understand what his daily life is like. I learned that he cannot hear and has trouble even with the tools meant to help him understand. Not only does he sometimes not understand, but it is often inconvenient for him to ask for clarification.

This initial understanding led me to discover more issues in his daily life. He also mentioned that voice recognition tools cannot differentiate between multiple voices, and that he cannot communicate with drivers, which shaped my design direction and design solution. I wanted to focus on the clarity of language, on resolving the single-voice limitation in noisy environments, and on verbal communication.

From my initial focus, I then listed issues to resolve in order to stimulate my design mindset and truly understand the problem at hand. The three issues all relate to communication and understanding, and they essentially lead to the question “How might we restructure the way Adam communicates with his surroundings?”, which really pushed me to think about these interactions.

Thinking about how there could be responsive interactions with the environment for communication, I arrived at the solution of Hand Talk, a combination of digital and physical interfaces to aid the communication of hearing-impaired individuals. It is not only glasses for receiving information; I also think the output from the individual is important. If we only create something that leaves the user passive in the interaction, it does not resolve the issue. So while the glasses help with the visualization and understanding of dialogue, the gloves allow users to communicate faster than they could by typing. The digital voice generated from the gloves through the phone aids communication, which is central to my concept. Because Adam cannot talk to drivers and drivers do not read text, it is important to create a way for him to communicate: there should be a digital transmission and translation layer between the glove and the glasses, where your sign language becomes a digital voice you can use in calls and in everyday life. It is also important to know who is talking, hence the eye-tracking aspect.

Filed Under: Application Lab

Week 9: Design Discussion

November 18, 2022 by Ken Wu

In “The Lows of High Tech,” the author discusses prosthetics for people who are missing limbs. Sometimes people are born without a limb, but more often limbs are lost to events such as war or accidents. When this happens, prosthetic design comes into play to help the user regain a feeling of reality and normalcy. However, this often comes at a high expense, since such technology is tailored to an individual's personal needs; individuals must pay large amounts of money to receive a product that lets them experience life with all of their limbs.

One notable moment in the podcast was the discussion of “average.” Normally, average is treated as a one-size-fits-all or general formula for creating products; however, when we consider disabilities, and especially prosthetics as a solution, there is no one size fits all. There is no such thing as an “average” case of missing a limb. Therefore, whenever “average” is proposed as a solution for such a disability, its general applicability for design should be carefully reflected upon.



Filed Under: Application Lab

Week 9: Challenge Blog

November 16, 2022 by Ken Wu

Nowadays, digitalization has prompted a dynamic shift in how information is received and acted upon. In the digital age, where information is at our very fingertips, it is easy to lose oneself and one's standing in the rapid pace of the world. Every day, we find that our memory is not sufficient to store all the information in our minds. We use calendars, notes, and digital applications to make sure we keep pace, yet in the midst of this we see failures of technology and a disconnection between technology and our physical world. This leads to two questions:

How can the disconnection between the digital and physical world be resolved for calendar applications?

How can I track my progress on a calendar and digital device simultaneously?

This leads to the ideation of MyTracker, a smart calendar that connects the digital and physical worlds through simple analog controls. MyTracker looks like any other calendar where you write your notes, but it also acts as a physical interface for time tracking.

MyTracker includes a simple tap interface for users to visualize their time commitments on a daily and weekly basis. With a one-finger tap, an LED on the calendar lights up with a color that represents how packed the schedule is: green represents a rather open schedule, while red represents a packed one. This informs the user of their free time throughout the day for further planning.

  

When MyTracker is tapped with two fingers, it shows how busy the user is throughout the week, informing them of how they might reschedule their time.

A three-finger tap sends a message to the application on the mobile device that a task has been completed, syncing the two interfaces and further connecting the two modes of tracking time.
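Below is a minimal sketch of how the tap interface could dispatch these three gestures, assuming the hardware can report the number of fingers in a tap; the color thresholds are placeholders rather than final values.

```python
def busyness_color(hours_committed: float) -> str:
    """Map committed hours to an LED color (thresholds are illustrative)."""
    if hours_committed < 4:
        return "green"
    if hours_committed < 8:
        return "yellow"
    return "red"

def handle_tap(finger_count: int, day_hours: float, week_hours: float) -> str:
    """Dispatch the three MyTracker gestures."""
    if finger_count == 1:
        return f"LED shows {busyness_color(day_hours)} for today"
    if finger_count == 2:
        return f"LED shows {busyness_color(week_hours / 7)} for this week"
    if finger_count == 3:
        return "Send 'task completed' message to the phone app"
    return "Ignore unrecognized gesture"

print(handle_tap(1, day_hours=9, week_hours=30))  # LED shows red for today
```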

    

In this flowchart, it can be observed that the person is interconnected with the application and the calendar: their taps inform both the calendar itself and the application. There is also the action of writing tasks, which are then scanned and transferred to the application interface, interlinking the digital and physical modes of tracking time. A three-finger tap informs not only the physical calendar but also the application calendar, which in turn informs the user.

In this Figma design, one can view the day, the date, the busyness scale (shown through the background color of the day), the upcoming task to finish, the completion status of the day's planned tasks, and the remaining tasks. Within the calendar interface, the three-finger tap adds a check mark to the tasks listed in the calendar. There is also a scanner function to scan the written task list and transfer it to digital format.

Below are photos of the interface on the phone.

  

The photo below shows the basic flow of the app and how each function works.


Filed Under: Application Lab

Week 8: Design Blog

November 9, 2022 by Ken Wu

I feel that Rams's principle of minimalist design, “Good design is as little design as possible,” connects with the emerging technology of screenless design. When we consider how we interact with the world, most of it happens through a screen, whether a mobile app or the display of a phone. Although these are good designs meant to improve users' satisfaction with a product, they marginalize some users in the process, such as the elderly. Even in modern-day China, where digitalization has been pushed not only by the culture of communication but also by policy, there are still hundreds of millions of people who do not have this access. Therefore, I am somewhat hesitant about screenless design as a response system, especially when the examples Golden notes are a system that responds to voice or one that remembers what you look like. I feel this kind of smart screenless design might make UI/UX even more inaccessible to people, or at least frighten them, because this sort of interaction is not yet widespread. It is similar to Siri, which is not especially prevalent among smartphone users.

Screenless designs also raise the danger of surveillance. One example is being identified 24/7: whenever you enter a building, it knows who you are and has data on your past orders ready to inform the cashier. This sort of example is dangerous. It is an invasion of privacy, and this kind of surveillance and data collection does not create a good design experience; it is obtrusive, not an experience I would want. Beyond what Golden said about Phantom Vibration Syndrome, I suppose there might be a “Phantom Surveillance Syndrome” if this persists. Even though the design is screenless, when people become accustomed to a world of voice and digital surveillance for the sake of convenience, it will create more concerns about being watched and overheard.

Filed Under: Application Lab

Week 8: Challenge Blog

November 8, 2022 by Ken Wu

Nowadays, digitalization has prompted a dynamic shift in how information is received and acted upon. In the digital age, where information is at our very fingertips, it is easy to lose oneself and one's standing in the rapid pace of the world. Every day, we find that our memory is not sufficient to store all the information in our minds. We use calendars, notes, and digital applications to make sure we keep pace, yet in the midst of this we see failures of technology and a disconnection between technology and our physical world. This leads to two questions:

How can the disconnection between the digital and physical world be resolved for calendar applications?

How can I track my progress on a calendar and digital device simultaneously?

This leads to the ideation of MyTracker, a smart calendar that connects the digital and physical worlds through simple analog controls. MyTracker looks like any other calendar where you write your notes, but it also acts as a physical interface for time tracking.

MyTracker includes a simple tap interface for users to visualize their time commitments on a daily and weekly basis. With a one-finger tap, an LED on the calendar lights up with a color that represents how packed the schedule is: green represents a rather open schedule, while red represents a packed one. This informs the user of their free time throughout the day for further planning.

  

When MyTracker is tapped with two fingers, it shows how busy the user is throughout the week, informing them of how they might reschedule their time.

A three-finger tap sends a message to the application on the mobile device that a task has been completed, syncing the two interfaces and further connecting the two modes of tracking time.
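One way the completion message could be structured when the calendar syncs with the phone app is sketched below; the field names and transport format are assumptions rather than a defined protocol.

```python
import json
from datetime import datetime, timezone

def build_completion_message(task_id: str) -> str:
    """Build the JSON payload a three-finger tap could send to the phone app."""
    payload = {
        "event": "task_completed",
        "task_id": task_id,
        "source": "mytracker_calendar",
        "completed_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

# The app would receive this, check off the matching task, and keep
# the physical calendar and the digital calendar in sync.
print(build_completion_message("2022-11-08-groceries"))
```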

    

In this flowchart, it can be observed that the person is interconnected with the application and the calendar: their taps inform both the calendar itself and the application. There is also the action of writing tasks, which are then scanned and transferred to the application interface, interlinking the digital and physical modes of tracking time. A three-finger tap informs not only the physical calendar but also the application calendar, which in turn informs the user.

In this Figma design, one can view the day, the date, the busyness scale (shown through the background color of the day), the upcoming task to finish, the completion status of the day's planned tasks, and the remaining tasks. Within the calendar interface, the three-finger tap adds a check mark to the tasks listed in the calendar. There is also a scanner function to scan the written task list and transfer it to digital format.

Below are photos of the interface on the phone.

  

The photo below shows the basic flow of the app and how each function works.

Filed Under: Application Lab

Week 7: Challenge Blog

November 2, 2022 by Ken Wu

Possible interactions with a calendar:

1. Writing and Checking Things Off

One interaction you have with your calendar is writing what you need to do at a corresponding time; another is actually crossing it off so you know it has been completed. Most people who keep calendars have to check their tasks on a daily basis. I feel there should be an interaction, like Notion's, where you put things in order of priority on the calendar. Often a task marked in the notes looks just like every other task, so a possible interaction might be a shadow over the less important tasks. This way, the user focuses on only one important task at a time. The same could apply to the schedule itself, with a shadow or overlay over the things further ahead, so you can focus on the moment rather than stressing about the future.

2. Forgetting Something

Another common interaction with a calendar is forgetting something. Most people have a task reminder of some sort, but it is easy to overlook. To help you remember that you have uncompleted tasks, there should be a signal to remind you. This could take the form of a flashing light whenever you close the calendar: whenever a task is not complete, a yellow alert light flashes as a reminder. The flashing frequency could be lowered to once every 3-5 minutes to preserve energy, but the alert itself is important.

3. Planning the future

One last interaction I feel is crucial is planning the future. We all plan differently, whether a couple of weeks ahead or just a few days ahead. Some sort of visual representation of one's commitments is important to let people fully understand and use their time. I imagine this could be represented by a color that responds to the hours committed to events. Most people do not realize how unhealthy packing all of their time can be, so a color scale such as green to yellow to orange to red to black could let people know how full their days are, enabling better future planning.
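As a small sketch of that idea, the code below maps committed hours to the proposed color scale; the hour thresholds are arbitrary placeholders chosen for illustration.

```python
def commitment_color(hours_committed: float) -> str:
    """Map the hours committed in a day to a color on the proposed scale."""
    if hours_committed < 3:
        return "green"    # plenty of open time
    if hours_committed < 6:
        return "yellow"
    if hours_committed < 9:
        return "orange"
    if hours_committed < 12:
        return "red"
    return "black"        # the day is effectively fully booked

for h in (2, 5, 8, 11, 14):
    print(f"{h} hours committed -> {commitment_color(h)}")
```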

Filed Under: Application Lab


