Rube Telephone Update

Here’s what has happened since we last spoke:

Shortly after we first playtested our idea in class, I realized that the project had somehow evolved away from the original idea to the point where I wasn’t having fun with it anymore. We decided to return to the original idea of having a two-way conversation and letting the machine do all the work in between, instead of trying to force human interaction into the process. We had been playing with things like dictation, text-to-speech, and language translation in p5, so it wasn’t difficult to change gears because the work we had done up to this point would still apply.

My original idea was to create a complex chain of communication technologies that would talk to each other and ultimately allow two people to converse over a distance through this chain. I liked the idea of different technologies talking to each other instead of acting as individual interfaces for humans. I also wanted to explore the idea that technology is supposed to make communication easier or more efficient, but it can also make communication convoluted and distorted.

I took inspiration from Rube Goldberg machines and whisper walls, which both provide a very tangible experience and elicit a sense of wonder in their own way. The Rube Telephone makes communication more tangible, presents it in a fun way, and hopefully prompts people to think about how technology affects how we interact and communicate with others.

Here are a few things I’ve learned about in making this project:

  • p5 web speech library
  • Google Translate API
  • OCR and the newocr.com API
  • node and various modules (socket.io, serialport, python-shell, etc.)
  • Python
  • writing a Morse code generator from scratch
  • Arduino serial handshaking
  • rotary phone circuitry

My partner Aaron Montoya has been focusing more on the audio routing side of the project using Max and other audio software, and we have been working together on the circuitry related work.

Here is how it currently works:

  1. Someone speaks into one of the phones.
  2. p5 web speech converts what was said to text and then translates that to Morse code.
  3. The Morse code is tapped out on a Morse key by a solenoid, and when the key is pressed down, a piezo plays a tone (see the sketch after this list).
  4. p5 translates the original English sentence to Spanish and this new sentence is spoken out of the speaker in the center.
  5. p5 translates the Spanish sentence back to English.
  6. The Arduino prints this English sentence on receipt paper.
  7. A webcam takes a photo of this printed text and the newOCR API pulls the text out of the image.
  8. The English text from the translation step is spoken out of the earpiece of the other phone.
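
Whether the text-to-Morse translation happens in p5 (step 2) or on the Arduino, the tapping side of step 3 boils down to something like the minimal sketch below. To keep it self-contained, this version does the text-to-Morse conversion on the Arduino itself; the pin number and timing are placeholder assumptions, not the project’s actual values.

    // Placeholder sketch: solenoid Morse tapper (pin and timing are guesses).
    const int SOLENOID_PIN = 8;  // transistor driving the solenoid
    const int DOT_MS = 100;      // one Morse unit; a dash is 3 units

    // International Morse for A-Z
    const char* MORSE[26] = {
      ".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....", "..",
      ".---", "-.-", ".-..", "--", "-.", "---", ".--.", "--.-", ".-.",
      "...", "-", "..-", "...-", ".--", "-..-", "-.--", "--.."
    };

    void setup() {
      pinMode(SOLENOID_PIN, OUTPUT);
      Serial.begin(9600);
    }

    // Hold the key down for `units` units; the piezo tone comes from the
    // key's own circuit, so the code only has to time the presses.
    void tap(int units) {
      digitalWrite(SOLENOID_PIN, HIGH);
      delay(units * DOT_MS);
      digitalWrite(SOLENOID_PIN, LOW);
      delay(DOT_MS);  // one-unit gap between dots and dashes
    }

    void playChar(char c) {
      if (c == ' ') { delay(6 * DOT_MS); return; }  // word gap (7 units total)
      if (c < 'A' || c > 'Z') return;               // skip punctuation/digits
      for (const char* p = MORSE[c - 'A']; *p; p++) tap(*p == '.' ? 1 : 3);
      delay(2 * DOT_MS);                            // letter gap (3 units total)
    }

    void loop() {
      // p5/node sends each sentence as one newline-terminated serial message
      if (Serial.available()) {
        String line = Serial.readStringUntil('\n');
        line.toUpperCase();
        for (unsigned int i = 0; i < line.length(); i++) playChar(line[i]);
      }
    }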


To do:

  • Reverse the flow (OCR > translation > Morse) when someone speaks into phone 2.
  • Improve Morse and OCR steps and use the results from them.
  • Improve behavior of rotary phones (ringing, hold music) and implement light bulb progress bar so users know what’s happening.
  • Use Google search suggestions to make sure the Morse and OCR steps result in real words.
  • Improve visuals/experience for language translation and OCR steps.
  • Improve audio routing.
  • Build an i/o box to connect each rotary phone to power and audio cabling.
  • etc.

Code to be posted soon!

Playtesting

After I had decided to pursue the Rube Goldberg communication/distortion machine for my final project, I spent a week wallowing in circular thoughts trying to narrow it down. I had nothing prepared for playtesting aside from the drawing above, so twenty minutes before class my partner Aaron Montoya and I improvised a workflow that would vaguely mimic what we’re trying to build.

Round 1

Our first workflow was supposed to go as follows:

  1. Playtester 1 speaks a sentence of his/her creation into my computer’s mic.
  2. The built-in OS X dictation software converts the audio to text.
  3. I play the text into Aaron’s computer’s mic using the built-in OS X text-to-speech software.
  4. Aaron’s computer converts the audio to text and then plays this line back to my computer.
  5. Playtester 2 hears final output via text-to-speech.

This failed miserably. Our playtesters Corbin and Avika stood around in confusion for most of the process, and the computers couldn’t hear each other, so we had nothing to show them. Once we explained the idea, they thought it sounded interesting and fun. Unfortunately, we couldn’t get much feedback regarding the hands-on aspects of the project.

To get around the issues from this round, we decided to get our next playtester more involved by making her the interface between the computers. This way she wouldn’t be standing around after providing the initial input, and it would also avoid the issue of the computer speakers not being loud enough. We were also able to implement some of our other ideas, like language translation and Morse code.

Round 2

  1. Playtester speaks a sentence into my computer’s mic.
  2. The built-in OS X dictation software converts the audio to text.
  3. I translate the text to Spanish using Google Translate.
  4. Playtester reads the Spanish line aloud into my computer’s mic.
  5. Google Translate’s built-in Spanish dictation converts the audio to text and translates it back to English text.
  6. I translate this text to Morse code using this online translator.
  7. Playtester reads the Morse code aloud while Aaron types it out into a text file.
  8. Aaron translates his Morse code back into English using the same translator.
  9. Playtester sees final output.

Our second playtester, Yuli, was much more engaged and enjoyed the interaction. However, we had only initiated her by saying “Say a sentence” and sticking a mic in front of her, so she wasn’t sure what the purpose or outcome of all these tasks was, and she thought we could have given her more incentive to play along till the end.

She also wished she knew the translations for the Spanish and Morse code to see how the message changed along the way. Our idea is to show all these permutations at the end after presenting the final result to maintain the element of surprise.

We decided to change the initial input to typed text instead of spoken word so that we would have a pure version of the user’s sentence. Our playtester’s sentence was wrong right off the bat because the dictation software couldn’t understand her accent.

Round 3

The only thing we changed this round was having the user type her initial sentence instead of speaking it.

From Yan’s feedback, we decided to play the audio for the translated text alongside showing the user the translated text, because on top of someone’s Spanish accent being poor, they may also not know how written Spanish is supposed to be pronounced. To keep it challenging, we may only allow one or two replays of the audio in our end product.

She was also curious whether there would be instructions for the user at each step, and how the overall machine would look. This led us to a few ideas:

  • Simple graphical instructions at each station.
  • A sample exercise at each station that instructs the user on what to do while they perform the same task.
  • On screen instructions built into the interaction.

In terms of the look of the machine, we explained that we want it to be a continuous, linear chain that mimics an actual telephone game.

Other thoughts

The natural evolution of our project’s flow during playtesting actually solved a lot of my concerns going into it.

One of my concerns before playtesting was how to make the project more interactive than just speaking into a microphone. While I think it would have been interactive to show how the spoken words changed as they traveled through the machine, I wanted the user to exert him- or herself a bit more than that. Our playtesting naturally evolved to solve this issue by getting the user involved in propagating the message and performing different tasks.

I was also worried about how engaging it would be for two people at the ends of this machine if the messages going across didn’t make any sense. By having one user go through the process from start to finish, he/she would have a reference to the original input and have a better understanding of each step in the process. It may also be more fun to see the output when you know that it was influenced directly by your performance.

Another concern I had was how to get the different technologies talking to each other. This design also solved that by having the user act as the interface between each technology.


PCOMP Final Ideas

I think I generally like making games and getting people to move their bodies. I also like getting weird and making people break out of norms or turning conventions on their heads.

The idea I’ve been most excited about is creating a combination Rube Goldberg/Whisper Wall type contraption that would allow two people to communicate over a distance through a chain of various communication technologies that would talk to each other (pen and paper, beeper, tapedeck, dictation/TTS software, string and cups, etc.). Observers and participants would be able to see how the audio is processed and propagated at each step, making it somewhat tangible and similar in a way to the whisper wall. I think it would be fun to play with the idea of technology as innovation and progress and convey how convoluted communication can be because of technology by using the Rube Goldberg theme and making the overall communication inefficient and flawed. I’m also interested in incorporating antiquated and uncommon communication technologies and having them interface with each other, which brings a nostalgia factor to the project. This could also serve as a bizarre museum or timeline of such technologies, speaking to the speed of technological innovation and obsolescence.

I also came up with an idea for standing desks that have no legs and are held up by being handcuffed to your arms. This could maybe be turned into a combination of a Mavis Beacon-type game and a game of strength, where you have to keep the desk steady or maintain a specific ergonomic posture. I think it would be funny for the game to involve something really mundane like data entry. I also thought about using a treadmill or stationary bicycle. If I do all three, I could make a sort of twisted office/gym/cubicle environment. I really like the dark humor of this idea and how it forces people to exert themselves physically.

I think it would also be fun to make a game where people compete to touch the ground the least over a certain amount of time. I like that it’s such a simple concept but allows for such a wide range of responses. I would love to see how people would act when playing this.

I would also love to get people to play with each other’s faces but I haven’t really thought about what to do with that.


Guitar Chords

Click here for the code

When Esther and I first talked about what our project should be, she expressed interest in making some kind of panel with light up buttons that would help build up finger strength and dexterity. This was inspired by her experience learning the guitar and having a hard time moving her fingers to the right places, let alone press down hard enough to play a chord.

In the following days I kept coming back to this idea and wondering how we could make the experience more in line with the experience of playing a guitar. Esther’s initial idea was a panel that would sit on a tabletop and that you would interact with like a keyboard. One idea I had was to turn it into something you hold like a guitar, maybe by mounting buttons in a 2×4. I also thought about the tactile experience and what kinds of buttons or switches would mimic the pressure needed to play a note on a guitar, or what would be small enough to mimic the spacing of the strings and frets. Eventually it struck me that the strings and frets on most guitars are metal, so we could just use those as our switches.

The next step was to figure out if it would actually work. We had no idea how to work with a grid of interconnected switches, but Tom Igoe quickly pointed us in the right direction by telling us to look into row-column scanning. We wired each string as a digital output and each fret as a digital input to the Arduino. Then we’d send a high voltage over one string at a time and, while it was high, iterate over the frets to see which ones were receiving voltage. The fret value for each string would be set to the highest fret that received a voltage while that string was providing one. After doing this for each string, we would end up with a value of zero to three for each string, and the Arduino would spit out the chord being played as a string of six digits. For example, a C chord would read as 032010. We didn’t have a guitar at the time, so we tested this by wiring a bunch of wires to the Arduino and touching them together as if they were strings and frets.
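
In code, that scan amounts to something like this minimal sketch; the pin assignments here are placeholders rather than our actual wiring.

    // Row-column scan: 6 strings driven as outputs, 3 frets read as inputs.
    const int NUM_STRINGS = 6;
    const int NUM_FRETS = 3;
    const int stringPins[NUM_STRINGS] = {2, 3, 4, 5, 6, 7};  // placeholder pins
    const int fretPins[NUM_FRETS] = {8, 9, 10};              // with pull-down resistors

    void setup() {
      Serial.begin(9600);
      for (int s = 0; s < NUM_STRINGS; s++) pinMode(stringPins[s], OUTPUT);
      for (int f = 0; f < NUM_FRETS; f++) pinMode(fretPins[f], INPUT);
    }

    void loop() {
      int chord[NUM_STRINGS];
      for (int s = 0; s < NUM_STRINGS; s++) {
        digitalWrite(stringPins[s], HIGH);  // energize one string at a time
        chord[s] = 0;                       // 0 = open string
        for (int f = 0; f < NUM_FRETS; f++) {
          if (digitalRead(fretPins[f]) == HIGH) {
            chord[s] = f + 1;               // later frets overwrite: highest wins
          }
        }
        digitalWrite(stringPins[s], LOW);
      }
      for (int s = 0; s < NUM_STRINGS; s++) Serial.print(chord[s]);  // e.g. 032010
      Serial.println();
      delay(50);
    }

Energizing one string at a time is what makes a shared fret attributable to a specific string; the “highest fret wins” rule matches how a pressed string contacts frets below your finger.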


Somehow I had a feeling it wouldn’t be that easy. Once we got our hands on an actual guitar to test with, our excitement died pretty quickly. The first issue we ran into was that if more than one string was touching the same fret, all our fret values zeroed out. I eventually figured that if the voltage wasn’t reaching our frets, the only other place it could go was up another string. I placed diodes between the Arduino and the strings, and this fixed the issue halfway.

Now when we pressed multiple strings to the same fret, the values wouldn’t zero out; instead, they would all equal the highest fret being pressed on any of the strings connected to that fret. This is because when you play a note, you press a string down between two frets and the string touches both of them. So when you played a C, instead of outputting 032010 we would get 033030, since the second and third strings were connected by a common fret, and the same with the third and fifth strings.

We almost thought this was the death of our project, but after lots of brainstorming with a few second-years, someone suggested that we just put something between the frets to prevent the strings from touching two frets at once. We placed toothpicks between the frets, and that ended up fixing it. It was a great and simple fix, but it was a shame that we had to modify the guitar to get things working. However, by painting the toothpicks black you can hardly notice them, and on the plus side, they encourage the user to place their fingers closer to the upper fret, which is what you’re supposed to do to get the best sound. I would love to know if there are other ways to solve this problem.


Our p5 program gives you a random chord to play and shows you the finger placements as white dots on a fretboard. If you press a correct note, the dot turns green; if you press an incorrect note, a red dot appears where you are pressing. Once you match the chord, the dots flash green, an audio recording of the chord plays, and you are prompted to play a new chord. You can also change the difficulty so that the program tells you which chord to play but doesn’t show the finger guides.

It wasn’t too hard to get the initial functionality working, with the different colored dots and random chord generation, but programming the feedback was much more difficult, specifically timing the flashing of the dots against the arrival of the new chord, and getting the audio to play. One weird thing regarding the audio is that I couldn’t put all my audio variables into an array and play the files by calling sounds[index].play(). Instead, I had to create an object for each audio file using this constructor function and play each sound using the play function built into the object.

There are so many things you could do with this project. For example:

  • Teaching individual notes.
  • Teaching scales.
  • Teaching all the different ways to play the same chord.
  • Teaching entire songs.
  • Adding a time limit and a score based on how many chords you play before the time runs out.
  • Having the program play the audio for the chords you play so that Rebecca can play the guitar with one hand.

Esther actually showed me the ChordBuddy after we were finished with our project; apparently it was funded on the show Shark Tank and is doing very well. It’s a similar idea, but I think it’s far inferior to what we made.

I even thought about what it would be like to make an adapter for our project so that you could easily attach this Arduino setup to any guitar, though I think the toothpick fix makes that more complicated. Still, I could definitely imagine people wanting something like this. I learned all the chords in our program really quickly without even meaning to, just from testing it. I’m glad that we were able to use an actual guitar for this project instead of building something proprietary, and that we created something really useful that I haven’t come across before.

Synthesis

Synthesis was great! I thought it was a really fun challenge, and it was cool to see what people came up with. I felt really proud of what I was able to accomplish in such a short amount of time, especially considering we’re only one month into the semester.

My partner and I made a game involving a homemade foot pedal and a bunch of bubbles on the screen. You control one main bubble by stepping on the foot sensor to move right or releasing it to move left. The foot sensor was also pressure-sensitive, so you could control the speed of the bubble or keep it in one place. We split up the work: my partner made the foot sensor and I did the programming (here is the code). It was such a blur that I don’t remember the specific issues I ran into, but I managed to figure them out along the way!

It’s a good thing I took this video because I made my own foot sensor and tried to set it up again this week and I couldn’t get it to work!

Piano


This week I used what I learned about analog output last week to make a mini piano.

The piano has buttons corresponding to one octave of the C major scale, a knob that controls volume, and a demo button that triggers the demo song, Twinkle Twinkle Little Star.

Before I put this together, I actually made a theremin-like instrument that would change pitch with varying light levels and change volume with the turn of a knob. I decided to make a piano instead because it was difficult to control the pitch with a photoresistor, so the instrument didn’t hold my attention for very long.
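
For reference, the pitch side of that theremin boils down to a sketch like the one below. The pins and frequency range are guesses, and the volume knob is assumed to live in hardware (a potentiometer in the piezo’s path) rather than in code.

    // Theremin-like pitch control from a photoresistor (pins/range are guesses).
    const int LDR_PIN = A0;    // photoresistor in a voltage divider
    const int PIEZO_PIN = 9;

    void setup() {}

    void loop() {
      int light = analogRead(LDR_PIN);            // 0-1023
      int freq = map(light, 0, 1023, 200, 2000);  // brighter = higher pitch
      tone(PIEZO_PIN, freq);
      delay(20);
    }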

I used the following image from a Google search to help map each note to the correct analog output value for the piezo.

Piezo note mappings

One thing I was confused by was the use of floats versus integers in arrays and for loops. When you hit a button, the program sends the float value for each note to the piezo, but when you play the demo song, it sends integer values, because I couldn’t figure out how to use the float values in the for loop.
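
In hindsight, float values can sit in an array and be indexed in a for loop just like integers; assuming the notes were played with tone(), which takes an integer frequency, the floats only need to be rounded on the way in. A small sketch (the pin is a placeholder; the frequencies are the standard equal-tempered values):

    // C major scale, C4 to C5, as float frequencies in Hz
    const float NOTES[8] = {261.63, 293.66, 329.63, 349.23,
                            392.00, 440.00, 493.88, 523.25};
    const int PIEZO_PIN = 9;  // placeholder pin

    void setup() {
      // Looping over a float array works like any other array;
      // just round when handing the value to tone()
      for (int i = 0; i < 8; i++) {
        tone(PIEZO_PIN, (unsigned int)(NOTES[i] + 0.5));
        delay(300);
      }
      noTone(PIEZO_PIN);
    }

    void loop() {}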

Here is my code for the piano:

Handshake Meter

I was playing around with analog input and digital output to light different colored LEDs for different ranges of input, and since I had so much luck getting my hands involved last week, I decided to go a similar route and make a handshake meter that measures the grip strength of a handshake using an FSR (force-sensitive resistor).


I identified a few different high-pressure points in a handshake and decided to place the FSR on the edge of the palm right beside the thumb. Next time I might put a few sensors at different points and determine the quality of the handshake based on the pressure applied simultaneously to all of them. With minimal to no pressure, the red LED remains lit; medium pressure turns on the yellow light; high pressure lights the green; and very high pressure causes the lights to flash on and off, one after the other.
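
The logic behind this is simple threshold mapping on the analog reading. Here’s a minimal sketch with placeholder pins and thresholds (as mentioned below, the real ranges took some adjusting):

    const int FSR_PIN = A0;               // FSR in a voltage divider
    const int RED = 2, YELLOW = 3, GREEN = 4;

    void setup() {
      pinMode(RED, OUTPUT);
      pinMode(YELLOW, OUTPUT);
      pinMode(GREEN, OUTPUT);
    }

    void allOff() {
      digitalWrite(RED, LOW);
      digitalWrite(YELLOW, LOW);
      digitalWrite(GREEN, LOW);
    }

    void loop() {
      int grip = analogRead(FSR_PIN);     // 0-1023
      allOff();
      if (grip < 300) {
        digitalWrite(RED, HIGH);          // weak or no grip
      } else if (grip < 600) {
        digitalWrite(YELLOW, HIGH);       // medium pressure
      } else if (grip < 900) {
        digitalWrite(GREEN, HIGH);        // strong grip
      } else {
        // very high pressure: flash the lights one after the other
        const int pins[3] = {RED, YELLOW, GREEN};
        for (int i = 0; i < 3; i++) {
          digitalWrite(pins[i], HIGH);
          delay(100);
          digitalWrite(pins[i], LOW);
        }
      }
    }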

I only recorded it in action once because there were too many things to carry, but I ended up adjusting the ranges a few times to make sure it wasn’t too easy to get green or too difficult to get all the lights flashing.

Design Meets Disability

Graham Pullin’s reading on design for disability made some strong points about the need for a culture shift in the field and more openness to the design aspects of products that are often seen only for the function they replace. I had never realized how poorly designed many devices for disability are. It also felt strange to consider glasses as addressing a disability, but this further emphasized Pullin’s point that the success of glasses design has a lot to do with how they transcend definition as a medical device and are now considered more of an accessory, one even worn without a prescription.

When I read Charles Eames’ quote that “design depends largely on constraints,” I immediately thought of how the prevalence of mobile devices has changed software design significantly. Once designers had to provide the same functionality within such a tight space, they were forced to simplify. The innovations that came out of this were so effective that now even desktop software is being modeled after its mobile counterparts, OS X being one example.

I had mentioned in our last class that I believed balance was an important concept in design, and this came up a lot in the reading. At the very beginning, Pullin talks about how the Eameses’ success had to do with their work culture, which balanced problem solving and respecting constraints with exploration, playfulness, and challenging those same constraints. There are many other instances throughout where he argues for balance between aesthetics and function, visibility and invisibility, realism and artificiality, and so on. For example, many hearing aids provide inferior sound quality because their invisibility and small size are prioritized over their function.

This example leads to another good point Pullin made: there needs to be more variety in designs and less of a one-size-fits-all mindset. While an invisible hearing aid may be desirable for some people, others may care less about the look and more about hearing well. And some may even want a hearing aid that is clearly visible. Pullin makes it clear that the effort to make medical products invisible in order to minimize social stigma often makes dealing with the stigma even worse for users.

This leaves me wondering how to attract designers to the field of medical products. Since I don’t use these products and neither do most people in my close circles, it isn’t something I ever gave much thought to before reading Pullin’s work. However, I can now see that this sort of thinking is what allows many medical devices to remain stuck with poor designs: the people who use them make up such a small portion of the population. Pullin’s comparison of everyday cutlery to a Swiss Army knife drove the point home; a Swiss Army knife fills a need, but nobody would ever use one to eat at home.

Pullin’s original point about how the Eameses’ work designing leg splints ended up informing their iconic furniture designs speaks to the value of designing within constraints and the great potential for innovation that can transcend any one industry.


Home Depot Self Checkout Observation


I found it interesting that the only examples I could think of for this assignment had to do with money transactions – ATMs, MetroCard dispensers, movie ticket kiosks, etc. I ended up deciding on the self-checkout stations at Home Depot because it’s a high-traffic store and I conveniently needed some supplies from there.

People basically use these stations to quickly pay for their own merchandise without having to wait for a cashier to become available. This also allows the store to process more transactions without needing as much staff – there were about eight of these kiosks and only one employee overseeing them.

I assumed it would be confusing and more time consuming for customers to check themselves out since I’ve used similar kiosks in the past at other stores and ended up running into unclear instructions or having to wait for an employee to intervene anyway.

Observations

I found the graphical interface to be very effective, and it flowed well. It wasn’t the best-looking interface, but it was easy to figure out what to do at each step. There were many functions and pieces of information that I didn’t pay attention to, such as selecting the language or calling the attendant, but I’m sure that if I’d needed any of them I would have found them quickly.

There were basically two main steps to the process: scan your items and pay. I didn’t see anyone stray from this to change the language or the volume, for example. The whole process typically took people less than 5 minutes.


The main hangups were scanning the items and making payment.

I didn’t see too many people struggle with scanning their items, but you first had to find the barcode, which took a few seconds, and then you had to position it over the scanning bed until you heard a beep and saw the item appear on the screen. If it was a large or oddly shaped item, it might take a bit of waving around to get it to scan. If someone had items that were too large to position over the scanning bed, they would have to call the attendant and wait for them to scan the item with a handheld scan gun.


Credit card payments were actually the most common cause of delay, which was surprising because the kiosks used a generic Ingenico card terminal that I’m sure everyone is already familiar with. The problem was that if anyone swiped a credit card with an embedded security chip, the Ingenico would beep at them and ask them to insert the card chip-first into the bottom of the terminal instead of swiping it. I have never actually used the chip on any of my cards in the US, so this was a first for me, and many of the people who ran into this just kept swiping and swiping, not realizing that it wanted the chip. It wasn’t that the instructions weren’t clear: on top of the obnoxious error beeps, the terminal specifically instructed you to insert the card into the bottom of the terminal chip-first and displayed an animation of the card being inserted properly. Many people just ignored all of this and kept swiping, because swiping is such an ingrained behavior that people don’t even think about it.

2015-09-21 13.50.42-1

Scanning items was the more common source of delay, since everyone had a different number of items and most people are already used to Ingenico terminals; but when people did get held up trying to pay with a chip-equipped credit card, it caused the longest delays.

Afterthoughts

I didn’t think to time the transactions and compare them with checking out with a human cashier, but I would guess the in-person cashier would be only slightly faster than the self-checkout kiosks. I also should have taken a video to capture the more dynamic aspects of the process, such as the animations used in the instructions and the auditory feedback, and to better capture the flow of the process.

For the most part, people zipped through the process, and aside from the credit card chip hangup, I didn’t see any major issues. I used a kiosk to check myself out and, while my large toolbox took a few tries to scan, the process was extremely smooth.

Don Norman Readings

I thought both of Donald Norman’s articles made great points about good design, and they supplemented last week’s reading nicely. They provided some helpful ways to measure the quality of a physical interaction, which is something we touched on last week: for example, how understandable the feedback is, if there even is any, or how well a device’s controls map to their intended functions in a way we can easily understand. He also reinforced the idea that more functionality means more complexity, which is a strong point and one I had never considered before.

At points I felt that Norman was being too hard on designers. When he spoke of the atrocious designs he has come across, such as the multimillion-dollar telephone system installed at a university he was working for, I couldn’t help but blame the university instead of the designers for ending up with a terrible phone system. There will always be bad design, and while designers have a responsibility to be conscious of the user’s perspective, I believe people also have a responsibility to be smart consumers. In doing so, consumers play an important role in influencing design by voting with their money.

But then, when Norman mentioned that “the competing phone systems would not have been any better,” I thought of my experiences dealing with Time Warner customer service, a perfect example of a business that provides a universally atrocious user experience and yet still manages to succeed because it has no competition. Norman supports this by pointing out that it takes many iterations to arrive at a good design, and that designers face an uphill battle trying to make a good product while not being given the room to fail in order to get there.

His later article also made a good point about how you can design products for use by someone in a specific positive or negative emotional state and reap usability benefits from design choices that might only work for one affect or the other. It’s also a cool idea that the aesthetics of a product can influence the affect of the user, and therefore the usability of the product. I can relate strongly to the need for a product to be 100% straightforward and understandable when I’m under a tight deadline, for example; I will only take the path of absolute least resistance in such situations.

I had a good laugh at his observation that “Many of the peculiar behaviors of people using computer systems or complex household appliances result from such false coincidences. When an action has no apparent result, you may conclude that the action was ineffective. So you repeat it.” It made me think of a quirky old coworker who would always hit the “s” on his keyboard five times when using the shortcut to save something, “command + s”.