Interaction Lab Midterm Project: Test Your Eyesight by Yourself — Skye (Spring 2018)

Midterm Project: Testing Your EYE-Q!

Partner: Louis

Goals and original ideas

For the midterm project, we wanted to build a device that imitates the process of an eyesight test. The idea came from our experience of having our eyesight tested in a hospital: a doctor has to administer the test, which we found inefficient because patients have to wait in line. Also, being watched during an eyesight test can make some people, especially those with poor eyesight, feel uncomfortable. Now that digital devices can be applied everywhere, we think a device that lets patients do the eyesight test by themselves can make the process more private and efficient.

The original idea was that when the test starts, one image from an eyesight test chart appears on the screen, and the user uses a remote control to choose the corresponding direction.

One of the images looks like this (it means "down"):

The direction is random, and the size of the image changes from big to small. Every time the user makes a correct choice, the image becomes smaller; if the user makes a wrong choice, the size stays the same but the direction changes to a different one. The chart of the ideal process looks like this:

Material:

  • 1 * Arduino Kit and its contents, including:
  • 1 * Breadboard
  • 1 * Arduino Uno
  • 1 * remote control
  • 1 * USB A-to-B Cable
  • Jumper Cables (Hook-up Wires)
  • 1 * infrared receiver
  • 6 * big buttons
  • 6 * 220K resistors

Process and problems:

Since Louis was occupied with other things at the time, I was mostly responsible for this part.

Since we had not learned how to use the remote control, I checked some instructions online and consulted Nick for the details. The circuit I then built looked like this:

It took me a lot of time to enter all the data into Arduino and Processing. To imitate the process of eyesight testing as closely as possible, I searched online for the exact sizes of the eyesight chart images. The information for a 5-meter eyesight test chart looked like this (sorry, it is in Chinese…):

Then I found one problem: these sizes are physical sizes for human eyes, while on a computer screen images are made of pixels, so the displayed size will be different. To make sure the image on the screen looks exactly the same size to the eye, I needed to convert the physical size into pixels. To do that, I found a website that converts mm to pixels. The link is here: https://www.unitconverters.net/typography/millimeter-to-pixel-x.htm

I thought this would give the exact result. However, after some testing by the professor, it turned out that the actual size shown on the screen was still not the same as the intended physical size, and there was no other way to convert it precisely. So I used a mathematical workaround: instead of changing the size of the images, I calculated the proportion between distance and image size, and changed the viewing distance from 5 meters to 3.7 meters so that the visual size would be the same. To reach that distance, the professor helped me borrow a long USB cable from the resource center, which looks like this:

After this test, the professor gave me another suggestion: the remote control was not convenient to use, as it is too small and may confuse users about which button to press. Using a remote control was also not very interesting as interaction. Meanwhile, from my other tests, I found that the data sent by the remote control was not stable: it could send several values at once, causing the image to change several times even though the button was pressed only once. Considering all of this, I decided to replace the remote control with buttons, hoping this would solve these problems. Since I wanted to make the operating device like a box, the buttons in my kit were too small, so I borrowed some big buttons from the resource center. The buttons and the circuit I built looked like this:

After getting all the components and datasets done, I moved on to the logic in Processing. This was the most challenging part for me, as I am not good at coding.

For the code, to reach our ideal outcome, we first made a cover screen with some instructions and named one button "begin". Every time "begin" is pressed, the test starts and the images show up. Then we divided the code into two parts: handling right choices and handling wrong choices. We dealt with the "right" part first. (I was still responsible for this part since my partner had not returned.) I tried three approaches. First, I simply used "if/else" and "boolean" conditions to classify all the variables, which did not work at all. After consulting our fellows (Nick and Leon), I added a switch statement and used several "if/else" conditions to match all the corresponding values. This let the image become smaller each time a right choice was made, but it exposed another problem: a button also sends several values while it is held down, so the data was still not stable. To fix this, I consulted another fellow (Lewis) and learned to reset each value back to its original state after it is handled, so that repeated data sent by a held button has no effect. With that, the right part worked. I also wrote some text on the screen to show the eyesight level corresponding to each image size. The outcome looked like this:

Then I moved on to the "wrong" part. By that time my partner had come back, so we did the following work together. We thought the logic for "wrong" would be similar to the "right" part, just with different variables. Unfortunately, it turned out there was some logical conflict between the two parts. We spent a whole day on it and still could not figure it out. Since we did not have enough time, we decided to simply replace the "wrong" part with a "stop" button: whenever users cannot figure out the direction (which may indicate a wrong decision), they can press "stop" to check their eyesight level. We knew this was not the best solution, but we had no idea how to work out the logic at the time…

After adding the "stop" button, we added a back cover for the project to show the result. We wanted the test to begin again after pressing "begin" again, but we could only make it return to the first image instead of the cover. This was another shortcoming.

Context and decoration:

After barely finishing the functional part of our project, we continued with decoration and context. (I was responsible for this part while Louis was doing the final modifications.) I used a box to hold all the circuits and made it into an operating device, which looks like this:

To let users know what to do, I drew the direction images and the "begin" and "stop" labels beside the corresponding buttons. To make the instructions clearer and give full context to our project, I made two storyboards with specific instructions, which looked like this:

Also, as you can see, to make our project entertaining as well as functional, we added the Minion characters to our context (because they wear glasses…). We also added these characters on the screen.

Outcome:

Here is the outcome tested by one of our friends:

User Testing:

For our project: most people who used it found it useful and easy to operate. They could immediately understand what they were supposed to do, and found the process very similar to their experience of eyesight testing. Some of them really liked the Minion figures because they are cute. Some appreciated the idea of adapting interactive devices for medical use. Many people tried it again and again to see whether the result matched their tests in hospitals. One user liked that nothing about the device reminded her of Arduino, because the decoration let her focus on the functions.

Here is one video of user testing:

Here are some suggestions we received for improving our project:

  • Give feedback on wrong choices, and show the result after several wrong trials, to prevent people from guessing the answers.
  • Make the buttons closer together so that users can operate the device without looking at it (like a game controller).
  • To apply this in real life, take special conditions into consideration, such as color blindness and other eye diseases.
  • Do more research on precision and make sure the distance is correct.
  • Add instructions and logic for "covering one eye".
  • Take different ages into consideration and make the device more usable for kids.

Considering all these suggestions, we conclude that to improve our project we first need to fix the logic for wrong choices and add more feedback. Also, to provide a better user experience, we should make the controller more comfortable to operate and consider different user requirements. Finally, we should keep working to make the device more accurate.

For someone else's project: the project I remember most was a drawing book. It was a book with several tabs, and the data came from light sensors under the tabs. Each time you open a tab, you can do a certain kind of drawing on the screen, and when you close the tab, the image disappears. The idea was very interesting, because it offered another way of drawing. However, there were some problems during my test. One main problem was that the light sensors were very sensitive, which made the data unstable and affected the outcome; I think using other components, such as buttons or pressure sensors, could be a better idea. Secondly, the user instructions were not very clear: when I looked at the device, I was not sure what to do until the developer told me. I think there should be more thought about the appropriate components and the user experience.

Source Code:

//Arduino
// Reads six buttons and sends a one-byte direction code over serial:
// 1 = left, 2 = right, 3 = up, 4 = down, 5 = begin, 6 = stop, 0 = none
const int buttonPinBegin = 6;
const int buttonPinLeft = 7;
const int buttonPinUp = 8;
const int buttonPinDown = 9;
const int buttonPinRight = 10;
const int buttonPinStop = 5;

int buttonStateBegin = 1;
int buttonStateLeft = 1;
int buttonStateUp = 1;
int buttonStateDown = 1;
int buttonStateRight = 1;
int buttonStateStop = 1;

int Direction = 0;

void setup() {
  Serial.begin(9600);
  pinMode(buttonPinBegin, INPUT);
  pinMode(buttonPinLeft, INPUT);
  pinMode(buttonPinUp, INPUT);
  pinMode(buttonPinDown, INPUT);
  pinMode(buttonPinRight, INPUT);
  pinMode(buttonPinStop, INPUT);
}

void loop() {
  buttonStateBegin = digitalRead(buttonPinBegin);
  buttonStateLeft = digitalRead(buttonPinLeft);
  buttonStateUp = digitalRead(buttonPinUp);
  buttonStateDown = digitalRead(buttonPinDown);
  buttonStateRight = digitalRead(buttonPinRight);
  buttonStateStop = digitalRead(buttonPinStop);

  if (buttonStateRight == HIGH) {
    Direction = 2;
  } else if (buttonStateLeft == HIGH) {
    Direction = 1;
  } else if (buttonStateUp == HIGH) {
    Direction = 3;
  } else if (buttonStateDown == HIGH) {
    Direction = 4;
  } else if (buttonStateBegin == HIGH) {
    Direction = 5;
  } else if (buttonStateStop == HIGH) {
    Direction = 6;
  } else {
    Direction = 0;
  }

  Serial.write(Direction);
  Direction = 0;

  delay(100);
}

// Processing
import processing.serial.*;

Serial myPort;
int valueFromArduino;
PImage leftimg, rightimg, upimg, downimg;
PImage cover, end;

PImage[] img;
String[] direction = {"up", "down", "right", "left"};
// the sizes of "E" (in pixels) for eyesight levels 4.0 to 5.3
float[] size = {274.847244094, 218.305511811, 173.404724409, 137.763779528, 109.417322835, 86.929133858,
  69.051968504, 54.840944882, 43.577952756, 34.620472441, 27.477165354, 21.845669291, 17.348031496, 13.757480315};
int i = round(random(0, 3));
float s = 4.0;
int counter = 0;
String D;
int error = 0;
int pval = 0; // previous value from Arduino, so a held button counts once
int transparency = 255;

int yPos, yPos1 = -500;
int xPos, xPos1, xPos2, xPos3 = -100;
int interval = 1750;
boolean state = false;
boolean startover = false;

void setup() {
  size(displayWidth, displayHeight);
  //fullScreen();
  background(0);
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[3], 9600);
  // Images must be in the "data" directory to load correctly
  img = new PImage[4];
  img[0] = leftimg = loadImage("E left.jpeg");
  img[1] = rightimg = loadImage("E right.jpeg");
  img[2] = upimg = loadImage("E up.jpeg");
  img[3] = downimg = loadImage("E down.jpeg");
  D = "0";
  cover = loadImage("cover.jpg");
  cover.resize(displayWidth, displayHeight);
  end = loadImage("end.png");
}

void draw() {
  // receive data from Arduino
  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();
    println(valueFromArduino);
  }

  background(cover);
  textSize(100);
  fill(0, 0, 0, 150);
  noStroke();
  tint(255, transparency);
  text("Press 'BEGIN' to start !", 200, height*6/7);
  imageMode(CENTER);

  // BEGIN pressed: start the test
  if (valueFromArduino == 5 && pval != 5) {
    background(255);
    imageMode(CENTER);
    D = direction[i];
    textSize(50);
  }
  fill(255);

  // draw the current optotype at the current size
  switch (D) {
  case "up":
    background(255);
    image(img[2], width/2, height/2, size[counter], size[counter]);
    break;
  case "down":
    background(255);
    image(img[3], width/2, height/2, size[counter], size[counter]);
    break;
  case "left":
    background(255);
    image(img[0], width/2, height/2, size[counter], size[counter]);
    break;
  case "right":
    background(255);
    image(img[1], width/2, height/2, size[counter], size[counter]);
    break;
  }

  // correctly guessed "up"
  if (D.equals("up") && valueFromArduino == 3 && pval != 3) {
    pval = 9;
    counter += 1;
    s = s + 0.1;
    i = round(random(0, 3));
    D = direction[i];
  // correctly guessed "down"
  } else if (D.equals("down") && valueFromArduino == 4 && pval != 4) {
    pval = 9;
    counter += 1;
    s = s + 0.1;
    i = round(random(0, 3));
    D = direction[i];
  // correctly guessed "left"
  } else if (D.equals("left") && valueFromArduino == 1 && pval != 1) {
    pval = 9;
    counter += 1;
    s = s + 0.1;
    i = round(random(0, 3));
    D = direction[i];
  // correctly guessed "right"
  } else if (D.equals("right") && valueFromArduino == 2 && pval != 2) {
    pval = 9;
    counter += 1;
    s = s + 0.1;
    i = round(random(0, 3));
    D = direction[i];
  }

  // STOP pressed or all 14 answers correct: show the result
  if (valueFromArduino == 6 || counter == 14) {
    state = true;
  }

  if (state == true) {
    background(#FCE58F);
    pushMatrix();
    fill(0, 0, 0, 150);
    textSize(90);
    text(" THIS IS YOUR EYESIGHT LEVEL:", 10, height/3);
    popMatrix();
    textSize(300);
    fill(255, 0, 0, 150);
    text(s, 200, height*7/10);
    fill(0, 0, 0, 150);
    textSize(80);
    text(" PRESS 'BEGIN' TO TRY AGAIN!", 90, height*7/8);
    image(end, width/2, height/10);
    startover = true;
  }

  // press BEGIN to start over
  if (startover == true && valueFromArduino == 5 && pval != 5) {
    background(255);
    imageMode(CENTER);
    counter = 0;
    D = direction[i];
    textSize(50);
    s = 4.0;
    state = false;
  }

  pval = valueFromArduino;
}
