Creative Motion – Yu Yan (Sonny) – Inmi

Conception and Design:

During the brainstorming phase, my partner Lillie and I intended to build an interactive project that allows users to create digital paintings using only their motions. The interaction includes users' movements as the input and the image displayed on a digital device as the output. Our inspiration came from a Leap Motion interactive art exhibit. At first, we thought about using multiple sensors on the Arduino to capture the movements and displaying the image in Processing. However, after trying several sensors and doing some research, we found that no sensor suited our needs, and even if one did, it would take a huge amount of time to build the circuit and understand how to code it. So we turned to our instructor for help and did further research into alternatives. Finally, we decided to use the webcam in Processing as our "sensor" to capture the input (users' movements) and build an LED board with Arduino to display the output (the painting). We chose the webcam because it is easier to capture images from a camera than from a sensor, the color values detected from a camera are more accurate, and the code is not too difficult to learn with the help of the IMA fellows. However, when we were figuring out the Arduino part, we found it hard to build the circuit with single-colored LEDs and connect all of them on the breadboard. With further research, we found that an 8*8 LED matrix could replace the single-colored LEDs and also generate more colors. But the first few LED matrices were not satisfactory because we didn't know how to connect them to the Arduino board and were unable to find solutions online (we found a video that we thought would help us understand how to connect the LED matrix to the Arduino, but it didn't).
We also found a sample code to test the LED matrix, but since we were unable to connect it to the Arduino, this code became useless as well. Moreover, those matrices could only generate three colors, which didn't meet our needs.

Since we wanted to allow users to create paintings with more diversity, we looked for an LED matrix that could display rainbow colors. After consulting other IMA fellows, we found that the Rainbowduino can drive one kind of LED matrix in rainbow colors, and its code is easy to comprehend. So eventually, we decided to use the Rainbowduino and the LED matrix as our output device, and the webcam in Processing as our input detector.

Fabrication and Production:

One of the most significant steps in our production process, in terms of failures, was the coding phase. When we chose materials for the output device, we tried quite a few kinds of LED matrix and looked at their code, and we discovered that the code for the earlier matrices was too complex to comprehend: we needed to set different variables for different rows and columns of LEDs, which got confusing. But after we decided to use the Rainbowduino, the Arduino code became much easier because we could address each single LED by its coordinates. With the help of the IMA fellows, we managed to write code that satisfied our needs. This experience taught us that choosing suitable equipment is crucial to a project: a good choice brings great convenience and saves a lot of time. Another significant step was the feedback we received during the user testing session. On the positive side, many users showed interest in our project and thought it was really cool when it displayed different colors. They found the interaction intriguing and liked that their movements could light up the LEDs in different colors. This feedback met our initial goal of giving users the opportunity to create their own art with their motions. However, there were still some problems to improve. First of all, one user said that the way the LEDs lit up could be a little confusing because it did not clearly show where the user moved. This was because we didn't separate the x-axis and the y-axis for each section of LEDs at first. The following sketches and video help explain the situation.

To solve this issue, we modified our code and separated the x-axis and the y-axis for each section, so that one section can light up without causing other sections of LEDs to light up as well. After we showed the modified project to the user who gave us this comment, he said the experience was better and he could see himself moving in the LED matrix more clearly. Second, the interaction could feel too simple and monotonous, which made it hard to convey our message. Since the interaction was only about moving one's body and displaying different colors at the corresponding position on the LED matrix, it might feel too thin for an interactive project. Marcela and Inmi suggested that adding some sounds could make it more attractive and more meaningful, so we took their advice. In addition to lighting up a section of LEDs when the user moves into the corresponding area, we assigned a sound file to each section and played it along with the lighting of the corresponding LEDs. The following sketches illustrate how we defined each section and its sound file.
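The sectioning logic described above can be sketched roughly as follows. This is a hypothetical Python sketch (our actual code is in Processing, shared at the end of this post); the frame size, the function name, and the grid constant are assumptions for illustration.

```python
# Hypothetical sketch of the section logic described above: map a webcam
# pixel coordinate to one cell of the 8x8 LED grid, so that motion in one
# section lights only the matching LED.

FRAME_W, FRAME_H = 640, 480   # assumed webcam resolution
GRID = 8                      # the Rainbowduino drives an 8x8 matrix

def pixel_to_section(px, py):
    """Return the (column, row) of the LED section containing pixel (px, py)."""
    col = min(px * GRID // FRAME_W, GRID - 1)
    row = min(py * GRID // FRAME_H, GRID - 1)
    return col, row
```

Because the column comes only from the x position and the row only from the y position, movement along one axis can no longer light up unrelated sections on the other axis.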

 

Initially, we used several random sounds such as "kick" and "snare" because we wanted to bring more diversity into the project. But during the presentation, some users commented that the sounds were too random and chaotic when they all played at once. One of them also mentioned that the "snapshot" sound made her uncomfortable. So for the final IMA show, we changed all the sound files to different piano notes. This made the sound more harmonious and comfortable to hear while interacting with the project. Third, some users mentioned that the LED matrix was too small, so they sometimes neglected it and paid more attention to the computer screen instead. At first, we thought about connecting several LED matrices into a bigger screen, but we didn't manage to do that. So instead of magnifying the LED matrix, we made the computer screen less visible and the LED matrix more prominent by putting it into our fabricated box. The result turned out much better than before: users' attention went to the LED matrix instead of the computer screen.

By contrast, the fabrication process was one of the most successful parts of our project. Before we settled on the final polygon shape, we came up with a few other shapes as well. As with my midterm project, we laser-cut each layer and glued the layers together to build the shape. Since we wanted to make something cool and make the most of our material, we chose transparent plastic boards. We also found that a polygon conveys a sense of geometric beauty, so we made our box polygonal. At first, we intended to just put the LED matrix on top of the polygon, but one of the IMA fellows suggested putting it at the bottom so the light would reflect through the plastic and look prettier. Thanks to this advice, it turned out to be a really cool project!

 

Conclusions:

For our final project, our goal has always been to allow people to create their own art with their motions and to encourage them to create art in different forms. Although we changed our single output (painting) into multiple outputs (painting and music), the goal of creating art with motion remains the same. Initially, we defined interaction as a continuous communication between two or more corresponding elements, an iterative process involving actions and feedback. Our project aligned with this definition by creating a constant communication between the project and the users and providing immediate feedback to their motions. However, the experience of interacting with the piece was still not fully satisfactory: we could not enlarge the LED matrix, so it was too small to notice, and we didn't create the best possible experience for users. Fortunately, most users understood that they could change the image and create different sounds with their motions. They thought it was a really interesting and interactive project that they could play with for a long time. Some users even tried to play a full song after they discovered the location of each note. If we had more time, we would definitely build a bigger LED board to make it easier for users to experience creating art with their motions. The setbacks and obstacles we encountered all seem like a fair part of completing a project; the most important thing is to learn from them. What I learned is that we should humbly take people's comments about our project and turn them into useful improvements and motivation. In addition, I noticed that I still didn't pay enough attention to the experience of the project. Since experience is one of the most vital parts of an interactive project, it should always be the first consideration.
I also learned that the reason many people like our project is that it reflects their presence and is controlled by them: users are in charge of everything the project displays. This shows that we created a tight and effective communication between the project and its users. Furthermore, making the most of our materials is also very important; sometimes it can make a big change to the whole project and turn it into a more complete version. Since many people still hold the idea that art can only be created in a limited set of forms, we want to break this idea by providing them with tools to create new forms of art and inspiring them to think outside the box. Art is limitless and full of potential. By showing that motion can also create different forms of art, this project is not only entertainment but also an invitation for people to generate more creative ideas about new forms of art and free their imagination. It also makes people aware of their ability and their "power" to control the creation of art. "Be bold, be creative, and be limitless." This is the message we want to convey to our audience.

The code for Arduino is here. And the code for Processing is here.

Now, let’s have a look at how our users interact with our project!

Recitation 10: Object Oriented Programming Workshop by Yu Yan (Sonny)

For this recitation, we first went through the "map()" function: what it is used for and how to use it. After that, we split up to attend different workshops. I participated in the Object Oriented Programming workshop hosted by Tristan.
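To make the idea concrete, "map()" linearly rescales a value from one range to another. A minimal Python equivalent of what Processing's built-in does (the function name map_range is mine, chosen to avoid shadowing Python's map):

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly rescale value from [in_min, in_max] to [out_min, out_max],
    like Processing's map()."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)
```

For example, a potentiometer reading from 0–1023 can be rescaled to a hue from 0–360, which is exactly how I later use it in the media controller recitation.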

During the workshop, we learned about the two concepts behind an "object", the "class" and the "instance", and how to use them. Using a "class" makes our code more concise and easier to read. We can also use an "ArrayList" to manage multiple objects.

Exercise:

For the exercise, I created a "class" called "Image" that draws two ellipses and one rectangle. I gave the shapes random colors and sent 20 of them off from the center of the canvas. For interaction, I used the "mouseMoved" function so that new images appear at the location of my mouse.

Here is the animation of my work.

Here is my code:

ArrayList<Image> images;

void setup(){
  size(1000,600);
  colorMode(HSB,360,100,100);
  images = new ArrayList<Image>();
  
  for(int i=0; i<20; i++){
    images.add(new Image(width/2, height/2,random(-5,5),random(-5,5),color(random(360),100,100)));    
  }
}

void draw(){
  background(270,100,100);
  
  for(int i=0; i<images.size(); i++){
    Image temp = images.get(i);
    temp.display();
    temp.move();
  }  
}

void mouseMoved(){
  images.add(new Image(mouseX,mouseY,random(-5,5),random(-5,5),color(random(360),100,100)));    
}

class Image{
  float x,y,spdX,spdY;
  color c;
  
  Image(float newX, float newY, float newSpdX, float newSpdY, color newC){
    spdX = newSpdX;
    spdY = newSpdY;
    c = newC;
    
    x = newX;
    y = newY;
  }
  
  void move(){
    x += spdX;
    y += spdY;
    if (x > width || x < 0){
      spdX *= -1;
    }
    if (y > height || y < 0){
      spdY *= -1;
    }
  }
  
  void display(){
    fill(c);
    ellipse(x,y,30,30);
    ellipse(x+40,y,30,30);
    rect(x,y,40,40);
  }
}

Recitation 9: Media Controller by Yu Yan (Sonny)

Introduction:

In this recitation, we were asked to build a connection between Arduino and Processing to control media/images, similar to what we did in the last recitation.

Exercise:

For this exercise, I used two potentiometers to control two parameters in Processing: one is the "tint", the other is the level of "BLUR" in "filter". By mapping each value to a suitable range, the parameters stay within reasonable bounds and change the color and effect of my image. The final effect is this: for the potentiometer that controls the filter, the bigger the value, the blurrier the image; for the potentiometer that controls the "tint", rotating it to different values changes the color of the image.

Setting the parameter for "tint" went quite smoothly. However, when I coded the "filter", I thought about creating an array to store a few different kinds of filters, so that pressing a button would randomly display one of them over the image. After trying several times, it seemed I could not use an array that way, so I gave up on the idea and switched to using a potentiometer to control a parameter of one specific filter.

The code for Processing and Arduino is as follows.

Processing:

PImage photo;

import processing.serial.*;
String myString = null;
Serial myPort;

int NUM_OF_VALUES = 2;
int[] sensorValues = new int [NUM_OF_VALUES];
float x;
float y;

void setup(){
  size(1000,600);
  colorMode(HSB, 360, 100, 100); // tint(x, 100, 100) below expects HSB values
  photo = loadImage("daniel touchin.jpg");
  setupSerial();
}

void draw(){
  updateSerial();
  printArray(sensorValues);
  x = map(sensorValues[0],0,1023,0,360);
  y = map(sensorValues[1],0,1023,0,10);
  tint(x,100,100);
  image(photo,0,0);
  filter(BLUR, y);
}


void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----" 
  // and replace PORT_INDEX above with the index number of the port.

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n'  Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Arduino:

void setup() {
Serial.begin(9600);
}

void loop() {
int sensor1 = analogRead(A0);
//int sensor2 = digitalRead(9);
int sensor3 = analogRead(A2);

// keep this format
Serial.print(sensor1);
Serial.print(","); // put comma between sensor values
// Serial.print(sensor2);
// Serial.print(",");
Serial.print(sensor3);
Serial.println(); // add linefeed after sending the last sensor value

// too fast communication might cause some latency in Processing
// this delay resolves the issue.
delay(100);
}

Reflection:

The reading "Computer Vision for Artists and Designers" really gave me some inspiration for my final project. First of all, it lays out the background of computer vision, which is what we're using in class, and I learned about many different aspects and types of computer vision techniques. The project mentioned in the reading that really intrigued me is Messa di Voce by Golan Levin and Zachary Lieberman, which "uses a set of vision algorithms to track the locations of the performers' heads" and "also analyses the audio signals coming from the performers' microphones". I like how this project interacts with the audience: tracking people's heads and turning the sounds they make into output displayed on the screen is very novel and interesting to me. It is a kind of interaction I could not have thought of with my own limited knowledge and creativity, but it inspires me that there is always an innovative form of interaction waiting to be found. For my final project, the computer vision technique my partner and I want to apply is "Detecting Motion", also mentioned in the reading. It is a very basic technique, but it can be really attractive and intriguing. I will use the example projects in the reading as references to improve my final project and try to build more creative and interesting interaction into it.

Final Project: Essay by Yu Yan (Sonny)

Project Title:

Motion Painting

Project Statement of Purpose:

Our project was inspired by an interactive piece that uses motion detectors to capture people's hand movements and draw corresponding images on the screen. By interacting with the piece, people can draw whatever they want with movements such as pushing their arms and waving their hands, instead of painting with a pen. Watching closely, it looks as if people are using magic to paint on the screen, which is very interesting and creative. Our project makes some improvements on the basis of this inspiration. Similarly, we want to show people that you don't necessarily need a pen or pencil to create a beautiful painting. Many people hold the inherent idea that to draw something you must use a pen, a pencil, or pigments, which drain your physical strength quickly. We want to break this idea by creating an interactive art piece that lets people draw only with their motions. Since art can take any form you can think of, we want to let people create their own art interactively and creatively. So we would like to use motion sensors or distance sensors to detect people's body movements and generate different forms and colors of images on the canvas according to the movements people make. We also intend to inspire people to think outside the box and create new forms of art with their imagination.

Project Plan:

To make our Motion Painting, we will use Arduino to build the circuit for the sensors and use Processing to display people's art. We will also fabricate the housing for the sensors using a 3D printer or laser cutter. For Arduino, our initial thought is to use a motion sensor to detect people's movements. To avoid confusion, we will set a detectable range so that the sensor only detects hand and arm movements instead of whole-body movements. We also thought about using a couple of distance sensors to build this detector, but from previous experience, the sensitivity of distance sensors may not meet our goal, so the choice of sensor remains to be tested. In Processing, we want to map different values from Arduino to different figures on the screen. For instance, if people push their hands forward, Processing will generate different shapes of stars on the canvas; if they wave their hands, it will present circles of different sizes. The color of each figure also changes with the speed of people's movements: faster movements shift the color toward red, slower ones toward blue. Our intended audience is people who are interested in interactive art and willing to create interactive, creative work themselves. From our experience of visiting art exhibitions, we learned that people prefer interactive pieces, which are easy to engage with and understand. So we want to create an interactive art piece that is equally easy to engage with and understand.
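The planned speed-to-color rule can be sketched as a simple blend between two endpoint colors. This is a hypothetical Python sketch, not our final code: the maximum speed, the RGB endpoints, and the function name are all assumptions for illustration.

```python
# Hypothetical sketch of the planned rule: faster motion shifts the stroke
# toward red, slower motion toward blue.

MAX_SPEED = 50.0  # assumed upper bound on detected hand speed

def speed_to_color(speed):
    """Blend from blue (slow) to red (fast); returns an (r, g, b) tuple."""
    t = max(0.0, min(speed / MAX_SPEED, 1.0))  # clamp to [0, 1]
    return (int(255 * t), 0, int(255 * (1 - t)))
```

The clamp keeps very fast motion from overflowing the color range; the actual thresholds would have to be tuned against real sensor readings.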

Our project timeline is basically as follows:

  • Nov. 22: Start coding for Arduino, setting up the circuit and testing the sensor.
  • Nov. 26: Start coding for Processing and combine it with Arduino.
  • Dec. 3: Finish the code for Arduino and Processing and test the circuit.
  • Dec. 4: Fabricate the controller (using 3D printer or Laser-cut).
  • Dec. 6: Finish the project.
  • Dec. 9: Finishing touches.

Context and Significance:

My preparatory research and my midterm project experience both show that interactive art pieces should focus more on users' experience, so this became one of the goals of our project. On the one hand, experience means communication between the art piece and the users, including different kinds of input and output. In "Introduction to Physical Computing", Igoe and O'Sullivan show how the computer sees us as a sad creature: "we might look like a hand with one finger, one eye, and two ears" (19). To change this, we need to add more ways of input when we communicate with the computer. They also mention that "we need to take a better look at ourselves to see our full range of expression" (Igoe and O'Sullivan, 19). What we are capable of when communicating with computers is not limited to clicking a mouse or typing on a keyboard; we should explore more kinds of experience to be more interactive with the art piece. On the other hand, experience also includes how easy the project is to understand. So for this project, we also focus on making people aware of how to interact with it and letting them understand it as soon as they see it. Another goal is to create a continuous communication between the art piece and the users. This aligns with my definition of interaction: "a continuous conversation between two or more corresponding elements". It is important for us to build a constant communication between the project and the users.

Since we are re-creating the art piece that inspired us, what we take from it is the way it communicates with users; however, we also make some improvements. Instead of generating random figures, we want to create different figures based on the different motions people make. The audience of our project can be anyone, but it is especially intended for people who would like to create different forms of art and people who are interested in interactive art. Our project can work as a tool and as inspiration for them to create their own art. We want to place our project in an art exhibition so that it can inspire more people to create novel art in whatever forms they can think of; subsequent projects can then become even more creative tools for making art. There is no limitation when creating an art piece: the only limit is our imagination, and people should give it full play in order to create new forms of art.

Reference:

Physical Computing – Introduction by O'Sullivan and Igoe

Recitation 8: Serial Communication by Yu Yan (Sonny)

Introduction:

In this recitation, we worked on building communication between Arduino and Processing. In the first exercise, Arduino sends data to Processing: Arduino is the input and Processing is the output. In the second exercise, Processing sends data to Arduino: Processing becomes the input and Arduino the output. Through these two practices, I got more familiar with connecting Arduino and Processing via serial communication.
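In the first direction, the protocol is simply plain text: Arduino prints the sensor readings separated by commas and ends each packet with a linefeed, and Processing's updateSerial() splits that line back into integers. A hedged Python sketch of the same parsing (the function name parse_line is mine, for illustration only):

```python
# Python sketch of how the Processing side parses one serial line from
# Arduino: sensor values separated by commas, terminated by a linefeed.

NUM_OF_VALUES = 2  # must match the number of sensors Arduino sends

def parse_line(line):
    """Split 'v1,v2\n' into a list of ints; return None if malformed."""
    parts = line.strip().split(",")
    if len(parts) != NUM_OF_VALUES:
        return None  # incomplete packet, e.g. read mid-transmission
    try:
        return [int(p) for p in parts]
    except ValueError:
        return None
```

Rejecting malformed lines matters because the first read may start in the middle of a packet, which is also why setupSerial() throws away the first reading.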

Exercise 1:

The first exercise was to make an Etch A Sketch. The input is two potentiometers on the Arduino, and the output is the Processing canvas. I used the two potentiometers to control the position of the "pen", then rotated them to make a sketch in Processing. Before making an actual sketch, I followed the instructions and drew an ellipse first. I had practiced this in previous classes, so this step was easy. I created two variables for the x and y values and used "map()" to keep the ellipse inside the canvas, since a potentiometer's maximum value is 1023 while the canvas is only 500 pixels wide and high. Then I modified the Processing code to turn the ellipse into a line, using a sample code from a previous class (drawing a line with the mouse's movement) as a reference. To make a "real" Etch A Sketch, I needed to keep track of the previous x and y values as well, so I added two more variables for them. I assign x and y to them in "draw()" after the "line()" call, so that on the next frame they hold the previous position.

The code for Processing and Arduino is as follows.

Processing:

import processing.serial.*;

String myString = null;
Serial myPort;

int NUM_OF_VALUES = 2;   /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;      /** this array stores values from Arduino **/
float x;
float y;
float px;
float py;

void setup() {
  size(500, 500);
  background(0);
  setupSerial();
}

void draw() {
  //background(0);
  updateSerial();
  printArray(sensorValues);
  x = map(sensorValues[0], 0, 1023, 0, 500);
  y = map(sensorValues[1], 0, 1023, 0, 500);
  stroke(255);
  line(px,py,x,y);
  px = x;
  py = y;
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);

  myPort.clear();
  myString = myPort.readStringUntil( 10 ); 
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n'  Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Arduino:

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor3 = analogRead(A2);

  Serial.print(sensor1);
  Serial.print(",");
  Serial.print(sensor3);
  Serial.println();
  delay(100);
}

Exercise 2:

The second exercise was to make a musical instrument. The input is the mouse's movement in Processing, and the output is the buzzer on the Arduino board. I only made a few changes to the sample code: I used the mouse's x and y positions to control the frequency and duration of the tone. Since the buzzer can only make a sound when the frequency is greater than 31, I used "map()" to keep the frequency between 200 and 2000. I also used "mousePressed()" so that pressing the mouse turns the sound on and off. On this basis, I made some improvements to the canvas: I modified the Processing code so the canvas changes color while I press the mouse and turns black when I release it. This way, you can visually see that you are playing the instrument.
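In this direction, the Arduino code at the end of this post rebuilds each number one character at a time: digits accumulate into a value, a comma moves on to the next array slot, and the character 'n' closes the packet. A Python mirror of that accumulation logic (the function name accumulate is mine):

```python
def accumulate(chars, num_values=3):
    """Mimic the Arduino getSerialData() parser: digits build up a number,
    ',' moves to the next slot, 'n' closes the packet."""
    values = [0] * num_values
    temp = 0
    idx = 0
    for c in chars:
        if c.isdigit():
            temp = temp * 10 + int(c)   # shift digits left, append the new one
        elif c == ",":
            values[idx] = temp          # store the finished number
            temp = 0
            idx += 1
        elif c == "n":
            values[idx] = temp          # store the last number, reset state
            temp = 0
            idx = 0
    return values
```

This is why the Processing side terminates each packet with the literal character "n" rather than a real newline: the Arduino switch statement uses it as the end-of-data marker.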

The code for Processing and Arduino is as follows.

Processing:

import processing.serial.*;

int NUM_OF_VALUES = 3; 
int x;
int y;
color c;
float h = 0;

Serial myPort;
String myString;

int values[] = new int[NUM_OF_VALUES];

void setup() {
  size(500, 500);
  colorMode(HSB,360,100,100);
  background(360);

  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);

  myPort.clear();
  myString = myPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
  myString = null;
}

void draw() {
  background(360);
  c = color(h,100,100);
  if (mousePressed){
    h += 1;
    background(c);
    if (h > 360){
      h = 0;
    }
  } else {
    background(0,0,0);
  }

  x = int(map(mouseX, 0, 500, 200, 2000)); // map x into the tone frequency range
  y = int(map(mouseY, 0, 500, 100, 1000)); // map y into the tone duration range

  values[0] = x;
  values[2] = y;

  
  if (mousePressed == true){
    values[1] = 1;
  } else {
    values[1] = 0;
  }
  
  sendSerialData();
  echoSerialData(200);
}

void sendSerialData() {
  String data = "";
  for (int i=0; i<values.length; i++) {
    data += values[i];
    //if i is less than the index number of the last element in the values array
    if (i < values.length-1) {
      data += ","; // add splitter character "," between each values element
    } 
    //if it is the last element in the values array
    else {
      data += "n"; // add the end of data character "n"
    }
  }
  //write to Arduino
  myPort.write(data);
}

void echoSerialData(int frequency) {
  //write character 'e' at the given frequency
  //to request Arduino to send back the values array
  if (frameCount % frequency == 0) myPort.write('e');

  String incomingBytes = "";
  while (myPort.available() > 0) {
    //add on all the characters received from the Arduino to the incomingBytes string
    incomingBytes += char(myPort.read());
  }
  //print what Arduino sent back to Processing
  print( incomingBytes );
}

Arduino:

#define NUM_OF_VALUES 3    /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
/** DO NOT REMOVE THESE **/
int tempValue = 0;
int valueIndex = 0;
/* This is the array of values storing the data from Processing. */
int values[NUM_OF_VALUES];

void setup() {
  Serial.begin(9600);
  pinMode(9, OUTPUT);
}

void loop() {
  getSerialData();

  if (values[1] == 1) {
    tone(9, values[0],values[2]);
  } else {
    noTone(9);
  }
}

//receive serial data from Processing
void getSerialData() {
  if (Serial.available()) {
    char c = Serial.read();
    //switch - case checks the value of the variable in the switch function
    //in this case, the char c, then runs one of the cases that fit the value of the variable
    //for more information, visit the reference page: https://www.arduino.cc/en/Reference/SwitchCase
    switch (c) {
      //if the char c from Processing is a number between 0 and 9
      case '0'...'9':
        //save the value of char c to tempValue
        //but simultaneously rearrange the existing values saved in tempValue
        //for the digits received through char c to remain coherent
        //if this does not make sense and would like to know more, send an email to me!
        tempValue = tempValue * 10 + c - '0';
        break;
      //if the char c from Processing is a comma
      //indicating that the following values of char c is for the next element in the values array
      case ',':
        values[valueIndex] = tempValue;
        //reset tempValue value
        tempValue = 0;
        //increment valueIndex by 1
        valueIndex++;
        break;
      //if the char c from Processing is character 'n'
      //which signals that it is the end of data
      case 'n':
        //save the tempValue
        //this will be the last element in the values array
        values[valueIndex] = tempValue;
        //reset tempValue and valueIndex values
        //to clear out the values array for the next round of readings from Processing
        tempValue = 0;
        valueIndex = 0;
        break;
      //if the char c from Processing is character 'e'
      //it is signalling for the Arduino to send Processing the elements saved in the values array
      //this case is triggered and processed by the echoSerialData function in the Processing sketch
      case 'e': // to echo
        for (int i = 0; i < NUM_OF_VALUES; i++) {
          Serial.print(values[i]);
          if (i < NUM_OF_VALUES - 1) {
            Serial.print(',');
          }
          else {
            Serial.println();
          }
        }
        break;
    }
  }
}