Recitation 8: Serial Communication by Sheldon Chen

In the first exercise, I had to edit both the Processing and the Arduino files. In the Arduino file, I used analogRead() to receive values from the potentiometers, then mapped each value to the screen size used in Processing. I then used Serial.println() to “print” the values; to let them flow into Processing correctly, the Serial.println() call needs to come at the end of the code. In the Processing file, since the reading function was already complete, I only needed to implement the function that reads the data from the Arduino, which arrives in Processing as an array. At first, however, the circles simply couldn’t be drawn on the screen. Young later found that I hadn’t put the number of values Processing is receiving into the variable NUM_OF_VALUES. Everything worked normally once I changed that number to two.
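The rescaling step can be modeled with the same integer formula Arduino’s map() uses. This is a minimal sketch, not the project’s actual code: the 0–1023 range is the standard analogRead() range, and the 600-pixel window size is an assumption for illustration.

```java
// Hypothetical model of the Arduino-side rescaling before Serial.println().
public class SensorMap {
    // Same integer formula as Arduino's map(value, fromLow, fromHigh, toLow, toHigh).
    static long map(long x, long inMin, long inMax, long outMin, long outMax) {
        return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
    }

    public static void main(String[] args) {
        // A potentiometer read with analogRead() ranges over 0..1023;
        // rescale it to a 600-pixel-wide Processing window.
        long raw = 512;
        long screenX = map(raw, 0, 1023, 0, 600);
        System.out.println(screenX); // integer math truncates: 512 * 600 / 1023 = 300
    }
}
```

Note that because the arithmetic is integer-only, the endpoints map exactly (0 → 0, 1023 → 600) while everything in between is truncated.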

In the second exercise, I needed to alter the Processing code so that when “z”, “x”, “c”, “v”, “b”, “n”, or “m” is pressed on the keyboard, the speaker plays tones ranging from C(4) to C(5). To achieve this, I set up seven conditions and an array of length seven, with every position initialized to 0. When one of the keys above is pressed, its corresponding value is set to 1. The Arduino, in turn, receives the array and checks whether any position holds a 1; if so, it plays the corresponding tone. Although the logic seemed sound, the tones weren’t played correctly: only C, D, and B made sound. Leon later noticed that the logic in the Arduino file was flawed. Because each number in the array was checked by its own independent if-else condition, noTone() was triggered for every key that was not pressed, so each loop executed one tone() call followed by six noTone() calls. After chaining the conditions with else if, it worked better than before, though it still didn’t function completely.
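The if / else-if fix can be sketched as follows. This is a hedged model, not the project’s actual Arduino code; the class and method names are mine, and the frequency table assumes standard equal-temperament white-key pitches rounded to integers.

```java
// A minimal model of the fix: scanning the seven key flags with one
// if / else-if chain means "silence" corresponds to exactly one case --
// no key pressed at all -- instead of firing noTone() for every unpressed key.
public class ToneSelect {
    static final int[] FREQS = {262, 294, 330, 349, 392, 440, 494}; // C4..B4, rounded Hz

    // Returns the frequency to play, or -1 meaning "call noTone()".
    static int select(int[] flags) {
        for (int i = 0; i < flags.length; i++) {
            if (flags[i] == 1) {
                return FREQS[i]; // first pressed key wins; no noTone() on this path
            }
        }
        return -1; // only when *no* key is pressed should noTone() run
    }

    public static void main(String[] args) {
        System.out.println(select(new int[]{0, 0, 1, 0, 0, 0, 0})); // 330 (E4)
        System.out.println(select(new int[]{0, 0, 0, 0, 0, 0, 0})); // -1 (silence)
    }
}
```

The early return is the key point: in the broken version each flag’s own else branch ran noTone(), so a pressed key’s tone was immediately silenced by the next unpressed key’s check.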

In this recitation, I’ve learned that when sending data from Arduino to Processing, it is very important to set the number of values in the Processing sketch according to your actual needs. The noTone() function in Arduino is also something to watch out for, since it can easily end up being executed all the time.

Schematic

Preparatory Research by Sheldon Chen

Definition

In my first group project, I defined interaction as a process consisting of input and output in various forms. The user should also feel something different after interacting with the device, such as happiness, sadness, or amusement. After the midterm project, I think the design should also be easy for users to interact with. No matter how badass the design is, if it is complicated to use, it can barely be called “interactive”.

Interactive Design 1

The first design I’ve looked at is a butterfly installation. By blowing, users can make the butterflies on the installation “fly”: they blow into a hole with sensors inside, which activates the motors inside the butterflies and makes their wings swing. The design matches my understanding of interaction, mostly because it is attractive and soothing to interact with. It is also very easy for users to engage with.

Interactive Design 2

The other, less interactive design uses motors and light sensors to make static sculptures move. When a user passes in front of the device, the sculpture turns to face them. According to the video, however, the design is not very stable, as the motor moves even when nothing is happening. That makes it hard for users to interact with.

Final Definition of Interaction

At this moment, I consider interaction a stably performed process in which users can provide input dynamically and receive the output easily and dynamically, just as “What Exactly Is Interactivity” describes (5). The process should also bring about emotional changes in the user.

Recitation 7: Processing Animation by Sheldon Chen

What I made during the recitation last week was a drawing board popping up colorful circles along the path the mouse moves. During the process, I’ve learned to modularize my code by setting up a function to generate circles. This has greatly simplified my code, making the main part of it clear and concise. I also learned to use the transform functions, so that I could generate multiple circles around the mouse’s position at the same time while keeping my code clean. Of course, pushMatrix() and popMatrix() were also used to make sure the transformations work properly. I also learned a strange yet practical way to achieve a fading effect: covering the window with a black rectangle the size of the window. By setting the opacity of the rectangle, the objects on the screen disappear at different speeds.

What I found most interesting is using the rect() function to fade things out. It is definitely an effective, yet simple way to do it. I also think the mouseX and mouseY variables are pretty interesting, as they are the gate through which users interact with the shapes in the Processing window.

The following are the code for the design and the homework, together with the design’s video.

Code for the design

// create the animation:
// generates circles and dots around the position of the mouse
void setup() {
  size(600, 600);
  background(0);
  noStroke();
  smooth();
}

void draw() {
  if (keyPressed == true) {
    if (key == 'b') {
      // fade the canvas quickly
      fill(0, 20);
      rect(0, 0, width, height);
      println("no");
    } else if (key == 'v') {
      // fade the canvas slowly
      fill(0, 2);
      rect(0, 0, width, height);
      println("changed");
    }
    if (pmouseX != mouseX) {
      // scatter eight circles around the mouse position
      for (int i = 0; i < 8; i++) {
        float trans1 = random(5, 30);
        float trans2 = random(5, 30);
        circles(mouseX, mouseY, trans1, trans2);
      }
    }
  } else {
    fill(0, 10);
    rect(0, 0, width, height);
    float r = random(255);
    float b = random(255);
    float x = random(width);
    float y = random(height);
    float ra = random(10, 30);
    fill(r, 0, b);
    ellipse(x, y, ra, ra); // use ra (the random size), not the color channel r
  }
}

// clear the canvas whenever the mouse is pressed
void mousePressed() {
  clear();
}

// draws one translucent circle offset from the mouse position
void circles(int mouseX, int mouseY, float trans1, float trans2) {
  pushMatrix();
  translate(trans1, trans2);
  fill(random(255), 0, random(255), random(20, 200));
  float r = random(5, 20);
  ellipse(mouseX, mouseY, r, r);
  popMatrix();
}

Code for the homework

int i = 80;     // controls the ring's size and color over time
int stats = 0;  // 0 = i shrinking, 1 = i growing

void setup() {
  size(600, 600);
  smooth();
}

void draw() {
  // the ring is a colored ellipse with a background-colored ellipse on top
  noStroke();
  colorMode(RGB, 80);
  background(80);
  fill(i, 80 - i, 70);
  int x = mouseX;
  int y = mouseY;
  // clamp the ring so it stays fully inside the window
  if (x <= (200 - i) / 2) {
    x = (200 - i) / 2;
  } else if (x >= width - (200 - i) / 2) {
    x = width - (200 - i) / 2;
  }
  if (y <= (200 - i) / 2) {
    y = (200 - i) / 2;
  } else if (y >= height - (200 - i) / 2) { // height, not width (same value here since the window is square)
    y = height - (200 - i) / 2;
  }

  ellipse(x, y, 200 - i, 200 - i);
  fill(80);
  ellipse(x, y, 180 - i, 180 - i);
  // bounce i between 0 and 80 so the ring pulses
  if (stats == 0) {
    i--;
    if (i == 0) {
      stats = 1;
    }
  } else {
    i++;
    if (i == 80) {
      stats = 0;
    }
  }
}

Recitation 6: Processing Basics by Sheldon Chen

The biggest reason I wanted to use the image was its design. Its simple yet harmonic arrangement of quads of different shapes, together with the ingenious choice of colors, makes the image visually poised and peaceful. Apart from the artistic view, I also chose this design out of my perception that such shapes are easier to draw and the colors are easier to pick.

Original Motif

My original design idea was to utilize the pattern of the quadrilaterals shown in the original drawing, perhaps changing the colors of the shapes and adding some decorations of my own. To draw such shapes in Processing, I had to use lots of quad() functions to outline the quadrilaterals one by one. I also used the noStroke() function to get rid of all the strokes on the shapes. Some fill() and noFill() functions were also used to keep changing the colors of the different layers of the picture.

Just as I expected, a great similarity between my design and the original motif is the pattern in the arrangement of the quadrilaterals. However, because of the color choices and the different textures, the two images present very different feelings. For the original motif, the colors chosen were within the grayscale; the visual effect it presents is thus more classical and stronger, and the sense of layering is more obvious. The colors in my image, on the other hand, are mostly blue, and its texture is apparently smoother than the original motif’s. The visual effect it presents is therefore calmer and more modern, without a strong sense of layering.

After all the effort, I think Processing might not be the best tool for achieving such a design. An obvious reason is that every quadrilateral must be drawn with the quad() function by entering the coordinates of all four corners. Because this image involves many parallel shapes, I had to either calculate the slope of the different lines in the drawing or put in a random number and adjust it by checking back and forth. This would probably be much easier to draw in Photoshop, since shapes can be copied, so there is no need to worry about their slopes.
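The point about copying can be made concrete with a small check, written in plain Java mirroring Processing’s syntax (the quad coordinates and offset are arbitrary example values): translating a quad’s four vertices by a constant offset leaves every edge’s slope unchanged, which is exactly why copy-paste sidesteps the slope bookkeeping that quad() requires.

```java
// Verifies that translation preserves edge slopes: a copied (shifted) quad
// stays parallel to the original without recomputing anything.
public class QuadTranslate {
    static double slope(double x1, double y1, double x2, double y2) {
        return (y2 - y1) / (x2 - x1);
    }

    public static void main(String[] args) {
        double[][] quad = {{10, 10}, {110, 40}, {120, 140}, {20, 110}};
        double dx = 35, dy = -12; // arbitrary copy offset
        for (int i = 0; i < 4; i++) {
            double[] a = quad[i], b = quad[(i + 1) % 4];
            double before = slope(a[0], a[1], b[0], b[1]);
            double after = slope(a[0] + dx, a[1] + dy, b[0] + dx, b[1] + dy);
            System.out.println(before == after); // true for every edge
        }
    }
}
```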

Grass-Liyu Chen-Eric Parren

Context and significance

The previous group project made me think more about making the output of an interaction perceivable in more dimensions, such as sight, hearing, touch, even smell. Before starting the project, we looked through tons of interactive designs. A very interesting one is a dog that turns its head to face the user when the user looks at it; when the user tries to pat the dog, it shakes its head to avoid being hit. This design made me reconsider the emotional effect an interactive device should bring. For example, after seeing the interaction with the dog, I felt very amused, for it is funny to see the dog reacting lively to my physical actions. What also left a deep impression on me, yet which I do not view as an interactive design, was the LED cube. Though the device makes very cool light effects with its LEDs, the only thing the user can do is look at them. In other words, there is no input or output in this design at all. That alerted me that a design should be interactable before any other cool features or effects are added.

My understanding of interaction, therefore, has a strict and clear boundary. Just as I wrote in the last blog post for the group project, I was expecting a design where users could provide input in an untraditional way. After the computation, the design would present its output in different ways for users to perceive, and the output should make users feel emotionally different.

What makes my project unique is that the representation of tones is not limited to sound, but also includes the turning on and off of the LEDs and the tactile feeling of touching the optical fibers shaped like grass. In general, this makes the instrument more approachable, and users get a more natural feeling when playing music. What I would like to redesign is the entrance door of the library. It is an automatic door during library hours; once the hours have passed, however, the door won’t open without swiping an NYU ID. But people often forget to bring their ID cards when trying to get in or out of the library, so they have to either go back for their card or wait until someone else swipes theirs. If I were to redesign the door, I would replace the card-swiping machine with a facial-recognition machine: students would only need to “swipe their faces” to get in or out of the library, making their stressful lives a bit easier. The target audience of the project is certainly the whole student body of NYU Shanghai, as the majority of them suffer from the entrance door.

Conception and design

As I mentioned above, I expected users to perceive the tones not only through hearing, but also through touching and watching, thus bringing about a more immersive experience of playing music. It should also be a more approachable musical instrument compared to common instruments.

In the actual design, we used optical fibers to replace the keys or strings of common musical instruments. These optical fibers are arranged in the shape of grass, and users only need to push or shake them to make tones. We chose optical fibers for two main reasons: their shape resembles real grass, and they can transmit the light of the LEDs at their bottom. Copper wire was also an option; we rejected it because it cannot transmit the light coming from the LEDs at its bottom, and because it doesn’t fit the aesthetic scene we set.

Raw Material for Optical Fibers

To indicate the on and off of each tone, we used 30 white LEDs shooting light through the optical fibers to create shiny white dots at their tops. The LEDs turn on slowly after the user pushes the grass and fade away when the user lets go. In terms of other options, during the user testing session many people asked whether there would be other colors of LEDs. We rejected this idea, for it would, again, ruin the aesthetic feeling the optical fibers create. Some people also suggested using an LED matrix or LED strip to replace the breadboards with individual LEDs on them. However, the optical fibers are arranged in a special way that requires the positions of the LEDs to be flexible; an LED matrix cannot meet this criterion because the position of each LED is fixed.

Expected Effect of the Grass

In terms of the sound, we attempted to play the sound of a harp through speakers, with pitches ranging from C to G. This time, no other options arose for the instrument’s sound, since harp tones are so harmonic that they fit perfectly into the scene we set.

Speakers we used

Fabrication and production

The first important step was making two sets of LEDs. Each set has a button controlling the on and off of six LEDs with a fading effect: the LEDs stay bright as long as the button is pressed, and fade away when it is released. It was a bit hard to achieve this in code, as the Arduino keeps running loop() after it is powered up, so I had to introduce variables that remember the state of the lights from one loop iteration to the next.
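The state-keeping idea can be sketched like this. It is a simplified Java model of the loop, not the project’s actual Arduino code; the step size and maximum brightness are assumed values.

```java
// Models "memory across loop()": a brightness value persists between
// iterations and ramps toward full or zero depending on the button state.
public class FadeState {
    static final int STEP = 5, MAX = 255; // assumed fade speed and PWM ceiling
    int brightness = 0; // survives across iterations, like a global in an Arduino sketch

    // One pass of loop(): ramp up while pressed, fade down when released.
    void update(boolean pressed) {
        if (pressed) brightness = Math.min(MAX, brightness + STEP);
        else         brightness = Math.max(0,   brightness - STEP);
    }

    public static void main(String[] args) {
        FadeState led = new FadeState();
        for (int i = 0; i < 10; i++) led.update(true);  // button held for 10 loops
        System.out.println(led.brightness);             // 50
        for (int i = 0; i < 4; i++) led.update(false);  // released: fading out
        System.out.println(led.brightness);             // 30
    }
}
```

On real hardware the brightness value would be written out with analogWrite() each pass; the point here is only that the variable outliving each loop iteration is what makes the fade possible.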

Attempting to use the Arduino to play the sound of the harp was also an important yet struggling step. To make it play such sounds, we had to make the Arduino read external sound files. There are currently two options for this: storing the files in the Arduino’s internal memory, or using an SD card as storage. We first tried the Arduino’s internal storage, but it was too small to store even two sound files, while we were trying to store five. As for the SD card plan, we tried two kinds of SD modules, and neither worked. With the module by DF Robot, the files were read successfully, but the volume was so low that one had to hold the speaker to one’s ear to hear it; it didn’t work even with an amplified circuit. We then tried the MP3 SD card module, which connects to the speaker through a headphone jack instead of pins. Unfortunately, this option also failed, because the module can only read one file from the directory at a time, while we expected it to play multiple tones at once. We ended up going back to the Stereo Enclosed Speaker Set and playing tones with the tone() function.
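A rough calculation suggests why the internal memory ran out so quickly. Assuming an Uno-class board (32 KB of flash, shared with the sketch itself) and short 8 kHz, 8-bit mono clips — neither spec is stated above, so these are illustrative assumptions — even one-second samples overflow the flash:

```java
// Back-of-envelope check of audio storage vs. Arduino flash capacity.
public class StorageCheck {
    static final int FLASH_BYTES = 32 * 1024;   // ATmega328P flash, shared with sketch code
    static final int BYTES_PER_SECOND = 8000;   // 8 kHz * 1 byte per sample (8-bit mono PCM)

    // Would this many one-second clips fit in flash at all (ignoring sketch size)?
    static boolean fits(int clips, int secondsPerClip) {
        return clips * secondsPerClip * BYTES_PER_SECOND <= FLASH_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(fits(5, 1)); // false: 40000 B of audio > 32768 B of flash
    }
}
```

Under these assumptions, five clips need 40,000 bytes against 32,768 available — before the sketch itself takes its share — which matches the experience of not fitting even two real files.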

SD Module from DF Robot

Transplanting the LEDs onto the amplified circuit was also an important step. Because we needed to power thirty LEDs, the Arduino’s internal 5V supply is far from enough; an amplified circuit is needed instead. After the circuit was changed, though, the LEDs wouldn’t light up when the buttons were pressed. Nick helped me out by reconnecting everything in the circuit; it turned out I had made a wrong connection because of the messy wiring.

Amplified Circuit for Speakers and LEDs

After the struggles above, we finally reached some success. Before user testing, we assembled two sets of LEDs using physical buttons to trigger each unit on and off. When the user pushes or shakes the grass, the pillar under the platform holding the optical fibers presses the button beneath it, and a series of things happens subsequently.

Another failed yet interesting step came after the user testing, when Nick suggested using conductive tape as switches, turning a set on when the breadboard touches the shield. Strangely, though, the LEDs would light up even when there was no physical contact between the tapes. After testing and thinking for a morning, it turned out this was caused by electromagnetic induction: one piece of tape was connected to 5V and looped around the breadboard, while the other piece looped around the inside of the shield. When the breadboard moved inside the shield, the changing magnetic flux induced a current that made the digital pin receive a false signal of the switch being turned on, and all the light and sound followed.
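One common software mitigation for spurious signals like this — my suggestion, not something the project actually used — is to accept the switch as pressed only after several consecutive HIGH readings, so a brief induced spike on the pin cannot trigger the lights and sound. The sample count and names here are assumptions.

```java
// A simple consecutive-sample filter: a reading must stay HIGH for
// REQUIRED loop() passes in a row before the switch counts as "on".
public class GlitchFilter {
    static final int REQUIRED = 4; // assumed: consecutive HIGH reads needed
    int run = 0;        // current streak of HIGH readings
    boolean on = false; // debounced switch state

    // Call once per loop() with the raw digital reading.
    void sample(boolean rawHigh) {
        run = rawHigh ? run + 1 : 0;
        on = run >= REQUIRED;
    }

    public static void main(String[] args) {
        GlitchFilter sw = new GlitchFilter();
        sw.sample(true);   // a one-sample induced spike...
        sw.sample(false);
        System.out.println(sw.on); // false: spike rejected
        for (int i = 0; i < 4; i++) sw.sample(true); // a real, sustained press
        System.out.println(sw.on); // true
    }
}
```

Since an induced pulse lasts only as long as the breadboard is moving, requiring a sustained reading filters it out while a genuine press, which holds the contact closed, still registers quickly.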

Apart from the above, we kept encountering 3D printing failures throughout the process. Even after the first and second layers printed successfully, there was still a high chance the print would fail.

Failed Printing

During the user testing session, some users had no idea how to interact with the grass. The common scene was the grass being very insensitive to users’ pushing and shaking; in fact, users had to push the grass all the way down to trigger the button sets. Some asked what the optical fibers were for, and a lot of users asked whether there would be other colors for the LEDs.

As for the insensitive switches, we consulted Nick for an alternative, and he offered the method of using conductive tape as switches: when the user pushes the grass, the whole breadboard moves, and the conductive tape on the breadboard makes contact with the conductive tape on the surrounding area of the shield. After encountering the difficulties mentioned above, we cut the tape down to one small piece stuck to one side of the breadboard and the inner side of the shield. This switch was definitely more sensitive than the physical one, but because the whole breadboard moves with the optical fibers and everything else on it, the set always had trouble repositioning itself. In other words, it became too sensitive.

As for people asking what the grass meant, we thought it was because the light was not traveling through the optical fibers to light up their tips. We first removed the glue covering the bottoms of the fibers. Then, to aim the LED light precisely at the bottom of each optical fiber, we 3D printed sockets to line up the bottoms of the fibers with the tops of the LEDs. That addressed the problem successfully.

Apart from the suggestions received during user testing, there were also numerous design changes made during production. A significant one involves the platform holding the optical fibers and the number of LEDs. To attain the best visual effect, we originally planned to use nine LEDs in each set, arranged in a circle. But it turned out this would use too much optical fiber, and the circuit would be too complicated to assemble on a breadboard. Thus, the number of LEDs was reduced to six, positioned in parallel.

Schematic for Nine LEDs
Holder Prototype

There were also several material choices for the platform holding the optical fibers. We originally planned to 3D print these platforms, but given the prototypes we printed, 3D printing wasn’t efficient and the results didn’t look well designed. We therefore turned to laser cutting, putting two cut pieces together to form each platform.

Grass Holder (Laser Cut)

Conclusion

The goal of my project, as mentioned above, is to create an instrument enabling users to perceive music not only through sound, but also through sight and touch. It aligns with my definition of interaction because the interaction involves a clear process of the user providing input through physical action and the device presenting output in multiple ways. Most importantly, users were amazed by the beautiful visual effect when trying to make sound with the device. Where the project failed to align is that it is not easy to interact with. Because the switches are usually too sensitive, users have to reposition the whole breadboard set after interacting, which could be hugely inconvenient. And, as some users mentioned after the presentation, one can hardly produce real music with the device. In the end, the audience could not remember which direction to push each grass set, since each one is triggered from a different direction, and without knowing some “tips” on repositioning the grass, some sets kept going off after a user’s operation. If I had time to improve this project, I would have each tone controlled by one Arduino with its own speaker and LED set, and make seven of them to attain an octave. What’s more, I would use a joystick to replace the conductive tape: whenever the joystick is away from its resting position, it would send a signal to trigger the subsequent activities.

My biggest takeaway from my failures is that “功夫不怕有心人，铁杵磨成针” (with persistence, even an iron pestle can be ground into a needle; progress is made through lots of trial and error). As for my accomplishments, I have learned to deal with malfunctioning 3D printers, learned that the Arduino is poorly suited to playing audio files, learned the differences between different Arduino boards, and more.