Video Documentation
Conception and Design
My project aimed to create an interactive glove that allows users to draw on a white canvas by triggering sensors embedded in the glove. Each sensor would control different drawing functions, such as stroke width, color, and shapes. The concept was inspired by my childhood experience with Paint software, where I imagined that a glove could make the drawing experience more immersive and intuitive.
Initial Project Concept
In my research, I explored motion sensors like accelerometers and examined interactive design tools such as Processing. The second project proposal helped refine my concept by focusing on user interaction and potential challenges like ease of use and precision. Understanding that my target audience—individuals curious about the artifact—would need an intuitive design guided key decisions, such as making the glove lightweight, ensuring responsive but not overly sensitive sensors, and adding on-screen instructions to assist users.
After attending Professor Weil’s office hours, I gained valuable insights on how to improve my prototype and brainstormed ways to make the project more engaging than a simple painting program. The concept began to shift toward a more game-like experience, where users would replicate a pre-set image using various tools, such as colors, stroke sizes, and shapes.
While configuring the flex sensors, I faced challenges with their sensitivity and fragility. Without proper calibration, they triggered too easily, causing overlapping inputs in Processing, and they were prone to breaking if bent too far. As a result, I switched to pressure sensors for the first round of user testing, using three in total.
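In hindsight, the missing calibration could be handled, for either sensor type, by learning each sensor's resting value before accepting presses. The fragment below is only a rough sketch of that idea on the Processing side, written for this documentation; the two-second sampling window, the margin of 150, and the function names are hypothetical, and it assumes the raw readings already reach Processing.

int restingSum = 0;       // running sum of readings while the hand is relaxed
int sampleCount = 0;      // how many resting samples have been collected
int calibThreshold = 0;   // press threshold derived from the resting baseline

void calibrate(int reading) {
  if (millis() < 2000) {                 // first two seconds: learn the resting value
    restingSum += reading;
    sampleCount++;
  } else if (calibThreshold == 0 && sampleCount > 0) {
    // Add a margin above the average resting value so light,
    // unintentional bends or touches do not count as presses.
    calibThreshold = restingSum / sampleCount + 150;
  }
}

boolean isPressed(int reading) {
  return calibThreshold > 0 && reading > calibThreshold;
}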
During testing, I demonstrated how the sensors triggered different shapes on the screen based on a pre-selected image (Fig. 1). Several issues arose, including one malfunctioning sensor, an overly broad threshold range, and a lack of delay, which caused the sensors to activate in quick succession. This led to insufficient user interaction and a lack of control.
Fig. 1: Sensors triggering different shapes on the screen based on the pre-selected image during the first round of user testing.
The feedback I received was crucial in developing this project; professors and fellow classmates offered several ideas for improvement. Although I initially wanted to use a variety of sensors, I ultimately decided to focus on pressure sensors and refine the Processing program to enhance interactivity. I worked on adding features such as changing the color of shapes, adjusting their size and position, and ensuring smoother transitions. These changes greatly improved both the accuracy and the overall user experience, making the project more functional, engaging, and user-friendly.
Fabrication and Production
The production process for my interactive glove project was a series of iterative steps, marked by both successes and failures that ultimately led to a more refined and functional product. My primary goal was to design a user-friendly tool for children aged 3 to 5 that would allow them to interact with a digital canvas and trigger different drawing functions. Initially, I envisioned this as an artistic tool, but after consultations with faculty and rethinking the project’s goals, I reframed it as a game aimed at promoting fine motor skills development.
One of the first major design decisions I made was the selection of sensors. I initially considered flex sensors but switched to pressure sensors after learning they were more durable, offered greater control over input (especially for children), and were simpler to integrate with the Arduino. Flex sensors, while useful for measuring bending, were not as responsive or precise for the types of interactions I wanted to create, namely tapping or pressing actions. Pressure sensors allowed for a more tactile interaction, which was essential for making the drawing experience feel intuitive and engaging.
The glove design itself went through several iterations:
First Approach: I initially tried wiring the sensors directly to the Arduino, positioning them at the fingertips for easy accessibility. While this setup allowed for quick adjustments, the exposed wiring looked unattractive and was prone to wear and tear. This was a failure in terms of aesthetics and durability.
Second Approach: I then attempted to use a breathable fabric glove with the sensors sewn onto its surface. While the glove was comfortable, the sewing process proved time-consuming and difficult, especially given the small size and complexity of the sensor placements. This approach, though workable, felt too cumbersome for my project's goals.
Third Approach: In my next design, I experimented with creating two rings from stacked acrylic layers, one for the sensor and the other for the flexible part of the glove. However, the use of superglue to attach the sensors caused them to malfunction. This failure taught me the importance of more secure, reliable fastening methods.
Ultimately, I chose a case made from cardboard and fabric. This design offered the best combination of durability, comfort, and aesthetic appeal. It also allowed for precise placement of sensors without overcomplicating the overall structure.
The next phase of the project involved integrating the electronics with the Processing interface. I used an Arduino to read input from six pressure sensors and two buttons, with each input assigned a different function in the game. The coding process was challenging, particularly when trying to control the transitions between levels. Initially, I relied on if-else statements, but after guidance from Professor Andy, I transitioned to an integer variable (gameStage) to manage the flow between the menu and the levels. This significantly simplified the logic and improved the game's interactivity.
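Below is a minimal sketch of that structure. The gameStage variable and the six-sensor-plus-two-button setup come from the project; the comma-separated serial format, the baud rate, the port index, and the stub functions for each screen are assumptions made for this example.

import processing.serial.*;

Serial port;                      // serial link to the Arduino
int[] readings = new int[8];      // six pressure sensors plus two buttons
int gameStage = 0;                // 0 = menu, 1 = Level 1, 2 = Level 2

void setup() {
  size(800, 600);
  // Open the first listed serial port at 9600 baud; adjust for your machine.
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  // One integer decides which screen is drawn, replacing the earlier
  // tangle of if-else statements spread across the sketch.
  if (gameStage == 0) {
    drawMenu();
  } else if (gameStage == 1) {
    drawLevelOne();
  } else if (gameStage == 2) {
    drawLevelTwo();
  }
}

// Parse one comma-separated line of readings sent by the Arduino.
void serialEvent(Serial p) {
  String line = trim(p.readStringUntil('\n'));
  if (line == null) return;
  String[] parts = split(line, ',');
  if (parts.length == readings.length) {
    for (int i = 0; i < parts.length; i++) {
      readings[i] = int(parts[i]);
    }
  }
}

void drawMenu()     { background(255); /* buttons set gameStage to 1 */ }
void drawLevelOne() { background(255); /* shapes react to pressure readings */ }
void drawLevelTwo() { background(255); /* color-matching task */ }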
For Level 1, I developed a basic interaction where shapes moved across the screen once a sensor's pressure reading exceeded a threshold of 500. User testing revealed issues with the linear movement, which I addressed by adding a time delay to prevent immediate reactivation of the shapes. This refinement was critical in making the game more engaging and less frustrating. Professor Andy also helped me at this stage and wrote down a code example, which I applied to every shape being drawn so that the interactions would not overlap; a sketch of the idea appears below.
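The original snippet is not reproduced here; the sketch below only captures the cooldown idea, with assumed names and values apart from the 500 threshold. updateShape() would be called from draw() with that shape's sensor reading, and each shape keeps its own lastTrigger.

float shapeX = 100;          // current horizontal position of the shape
int pressThreshold = 500;    // pressure value above which the shape reacts
int cooldownMs = 800;        // minimum time between activations, in milliseconds
int lastTrigger = 0;         // when this shape last reacted

void updateShape(int reading) {
  // React only if the pressure is high enough AND enough time has passed
  // since the last activation, so one long press cannot fire every frame.
  if (reading > pressThreshold && millis() - lastTrigger > cooldownMs) {
    shapeX += 40;            // move the shape one step across the screen
    lastTrigger = millis();  // restart the cooldown timer
  }
}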
Level 2 introduced more complexity: children had to match colors based on a reference image. This required a more precise implementation of color detection and feedback. I used Sensor 3 to change the background color and Sensor 4 to alter the color of the flower petals on the screen. A "Yay" sound played when the child succeeded in matching the colors, providing positive reinforcement.
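A rough sketch of that Level 2 logic follows. Sensor 3 driving the background and Sensor 4 the petals matches the description above; the color palette, the target indices, the yay.wav file name, and the use of the processing.sound library are assumptions for the example (the project may have used a different sound setup), and sensorPressed() stands in for the real threshold-and-cooldown check.

import processing.sound.*;

SoundFile yay;                        // "Yay" clip played on a successful match
color[] palette = { #FF6B6B, #FFD93D, #6BCB77, #4D96FF };
int bgIndex = 0;                      // current background color in the palette
int petalIndex = 0;                   // current petal color in the palette
int targetBg = 2;                     // colors shown in the reference image
int targetPetal = 3;
boolean matched = false;

void setup() {
  size(800, 600);
  yay = new SoundFile(this, "yay.wav");
}

void draw() {
  handleSensors();
  background(palette[bgIndex]);
  drawFlower(palette[petalIndex]);
  checkMatch();
}

// Sensor 3 cycles the background color, Sensor 4 cycles the petal color.
void handleSensors() {
  if (sensorPressed(3)) bgIndex = (bgIndex + 1) % palette.length;
  if (sensorPressed(4)) petalIndex = (petalIndex + 1) % palette.length;
}

// Play the reward sound once when both colors match the reference image.
void checkMatch() {
  if (!matched && bgIndex == targetBg && petalIndex == targetPetal) {
    matched = true;
    yay.play();
  }
}

void drawFlower(color petals) {
  noStroke();
  fill(petals);
  for (int i = 0; i < 6; i++) {
    float a = TWO_PI * i / 6;
    ellipse(width/2 + 60 * cos(a), height/2 + 60 * sin(a), 70, 70);
  }
  fill(#FFD93D);                      // flower center
  ellipse(width/2, height/2, 60, 60);
}

boolean sensorPressed(int index) {
  return false;                       // placeholder for the real reading check
}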
Conclusion
The goal of my project was to create an interactive glove that enabled users to draw on a digital canvas by triggering sensors for various drawing functions. While the project evolved from an artistic tool to a game for children, it ultimately achieved its goal of providing an engaging, intuitive interface for users to interact with a digital canvas.
Audience interaction during testing confirmed that the game-like experience was more engaging than a simple drawing program. Users interacted with the controller by pressing sensors to trigger actions, such as changing colors and moving shapes. The results aligned with my definition of interaction, as it involved real-time input from the user, followed by visual and audio feedback on the screen.
If I had more time, I would focus on refining and adding more complex interactions to enhance the user experience. Additionally, I would explore more durable sensor placement methods and further streamline the design for ease of use.
The setbacks—such as sensor malfunctions and design challenges—taught me the importance of iteration and the value of user feedback in shaping a functional product. My accomplishments, such as the final game interface and successful sensor integration, demonstrate that with persistence and adaptability, I can overcome technical challenges.
Disassembly
Appendix
Code for Arduino and Processing
Wiring Diagram: