After researching online, I chose a design by Sol LeWitt as my starting point:
It contains only black and white and straight lines, which looked simple enough for a first try at Processing. What's more, the design is very attractive: it uses only two colors and straight lines, yet the networked shape is beautiful. (Simple elements always become complex when networked together, after all.)
My plan, therefore, was to create different patterns using the same design idea. The difference is that the points in my version are placed at random locations, so a new pattern appears every time the program is re-run.
The code is amazingly simple:
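In essence the sketch just scatters random points on a white canvas and joins every pair with a black line. A minimal version of the idea looks like this (the canvas size and point count are arbitrary choices, not anything sacred):

int n = 30;        // number of points, fixed for now
float[] xs, ys;    // point coordinates

void setup() {
  size(600, 600);
  background(255);                 // white canvas
  stroke(0);                       // black lines only
  xs = new float[n];
  ys = new float[n];
  for (int i = 0; i < n; i++) {    // place the points at random locations
    xs[i] = random(width);
    ys[i] = random(height);
  }
  for (int i = 0; i < n; i++) {    // connect every pair of points
    for (int j = i + 1; j < n; j++) {
      line(xs[i], ys[i], xs[j], ys[j]);
    }
  }
}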
Then I compared my work with the original and found mine had fewer points, so I made the number of points random as well:
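Only the declaration of n needs to change; everything else in the sketch above stays the same:

int n = int(random(15, 50));   // a different number of points on every run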
Now we have a random number of points!
I think Processing is a good tool for producing artwork, as it offers detailed control over the graphics, and coding avoids repetitive work. What's more, we don't need hands-on art skills such as drawing to produce artwork. However, a good improvement to the software would be to let users draw and drag the patterns around directly (perhaps digitizing those operations behind the scenes), which would make it easier to use and more expressive.
We designed and implemented an interactive device that aims to extend human perception of color, that is, of electromagnetic waves. When a user scans a point on an object, our device plays the piano note corresponding to that specific color, at high sound quality, with the distance between the user and the object controlling the loudness. A more advanced use of this project is to scan across a color board and play a piece of music.
How we came up with the idea
In our earlier research we were amazed by an interactive musical instrument whose pitch is controlled by the distance from the player's hand to the device. We then considered that an interactive device related to music could be a newly designed musical instrument: a human sends a signal to a machine, and it gives out a sound.
We thought about making a musical instrument at first, but then realized that whatever we made could neither hope to rival the performance of the piano, the violin, etc., which have been refined over hundreds of years, nor add much artistic or practical value as an instrument (for example, we could make a musical device that also lights an LED on each press, but what would be the point?).
Yet in our follow-up research we discovered an MIT neuroscience project showing that we can create a map in the brain from specific sounds to specific patterns: for example, if in a blind person's mind we bind a "beep" sound to a pattern of black-and-white stripes, the person becomes able to "see" the pattern. The team is building such a device to test the discovery.
link: https://youtu.be/2w3bfmL0RXg
This triggered my thinking, as I had just encountered the idea of extended perception while randomly watching cool design videos online. The idea then started to form in my mind:
From electromagnetic waves to mechanical waves: an interactive perception-extending device
After the presentation, Tristian recommended some articles relevant to our subject. After some searching I found a few of them online and plan to read them later (perhaps they will help with the final),
for example:
https://pdfs.semanticscholar.org/a1bd/629ba961a9635ec7ff186ed7b8171b31fbb6.pdf
/*Something else we want to mention
1. Trasty mentioned we could also target the device specifically at blind people, either to help them understand color or to help them in daily life (such as sensing red and green traffic lights). We considered this very useful and practical advice (while our perception idea is perhaps a bit philosophical?), so we took it as another important purpose for our device.
2. Actually, before that we thought of implementing a music game, but Rudi told us a very similar game already exists. It taught me an important lesson: always google your "original" idea before you "invent something new".
*/
Interactive Model
After studying the theory of interactivity and extended perception, and recalling what we had read and learned in class, we finalized an interaction model for such a device:
1. an agent (the user) sends a signal to the interactive device, specifying a coordinate in 3D space;
2. the device converts the electromagnetic waves (color) emitted or reflected from that position into mechanical waves (sound), then sends the sound back to the user;
3. the user processes the sound together with the color itself, using the neural circuits that analyze visual and audio art;
4. the agent decides on another color it wants to sense and signals the machine again.
Device and UX Design
Based on our model, what we needed to build was a handheld device with a set of sensors at the front: when the user scans different objects with it, it plays the piano note corresponding to the object's color. The key components are: a color sensor (to sense color, of course); an ultrasonic sensor to sense distance and convert it to loudness (to give a fuller sense of music, and perhaps to pay tribute to our first inspiration, the musical instrument); an MP3 module; and a speaker/headphone/SD card with the related interfaces. These are all the functionally necessary parts.
We also paid special attention to the user experience design. The idea of UX first came to me in class, and I immediately realized its importance when someone mentioned the doors at our university: indeed, they are either slow or heavy. So we set the following UX guidelines:
1. hide all the wires and technical parts and present a clean interface;
2. give the user a basic idea of how to use the device at first sight.
To achieve 1, we laser-cut a box to contain the Arduino, all the modules, and all the wires, with only part of the sensors showing.
Achieving 2 was trickier. We finally settled on several measures: an earphone jack suggests that the user plug in earphones (or a speaker) and listen; the name suggests the device lets you "hear" colors; and we shaped the device like a gun, or rather like the handheld scanners commonly seen at supermarkets for barcodes or WeChat Pay codes, since the intuition on seeing such a thing is to scan something with it, or at least to point it at something up close. (We also considered the shape of a torch, but gave up the idea: the intuition on seeing a torch is to light up something at a rather large distance, while the color sensor performs best close to an object.)
Implementation Highlights
The UX design took three iterations, with one more planned:
The first time, we just connected all the units, with all the wires and boards exposed;
the second time, we laser-cut a small box to contain the Arduino, the MP3 module, and most of the wires, but there was still no explicit hint of what the device does or how to use it;
so in the third iteration we developed the current model.
The technology side was also very challenging, as we had never encountered most of the units we used, so we had to learn from scratch. I tried three different MP3 and SD-card implementations, tested the color sensor, and tried different speaker implementations, while Steve researched color theory and computer color theory to create an RGB-HSV-frequency mapping that humans can understand more easily, and calibrated the color sensor by hand. The 3D printing cost us an overnight stay at school.
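I won't reproduce Steve's hand-calibrated mapping here, but its general shape was RGB to hue, then hue to a semitone and its frequency. A rough Arduino sketch of that idea (the constants are placeholders, not our calibrated values):

#include <math.h>

// Convert an RGB reading to a hue angle in [0, 360).
float rgbToHue(float r, float g, float b) {
  float mx = max(r, max(g, b));
  float mn = min(r, min(g, b));
  if (mx == mn) return 0;                                // grey: no hue
  float h;
  if (mx == r)      h = 60 * fmod((g - b) / (mx - mn) + 6, 6);
  else if (mx == g) h = 60 * ((b - r) / (mx - mn) + 2);
  else              h = 60 * ((r - g) / (mx - mn) + 4);
  return h;
}

// Map the hue onto one octave of the 12-tone scale starting at A4 (440 Hz).
float hueToFrequency(float hue) {
  int semitone = int(hue / 360.0 * 12);                  // 0..11
  return 440.0 * pow(2, semitone / 12.0);
}

void setup() {
  Serial.begin(9600);
  Serial.println(hueToFrequency(rgbToHue(255, 0, 0)));   // pure red -> 440 Hz
}

void loop() {}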
Modification
After the presentation, we realized there are still many things we can improve on.
1. setting
Rather than designing a universal application of our device, we could focus on a specific scene where it is used: for example, a museum where people can scan paintings. Designing this way lets us ignore the complex interference of different environments and focus on the details, improving performance in one specific setting.
2. shape
Although the reason for our device's shape is explained above, it is a bit weird, as I now realize the subtle difference between a gun and a gun-like scanner: it is strange to point a gun at the Mona Lisa. A very simple modification that would give good results is to make the box horizontal, so it looks more like a scanner.
3. duration
So far every note has the same duration, which limits the music the device can play. Right now I can think of two modifications: add a control interface so that the user can set the duration, which gives the best performance but makes the device a bit harder to use, since the user must master a new function; or let the chip decide: if it keeps sensing the same color, it plays one long note until the color changes, instead of a new note every 500 ms.
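The second option is only a small change to the main loop. A sketch of the idea (readColorAsNote(), playNote(), and stopNote() are placeholders standing in for our color-sensor and MP3-module code):

// Placeholders for the real sensor and MP3-module code:
int readColorAsNote() { return analogRead(A0) / 100; }  // fake "color" reading
void playNote(int n)  { /* tell the MP3 module to start note n */ }
void stopNote()       { /* tell the MP3 module to stop */ }

int lastNote = -1;   // note currently sounding; -1 = none

void setup() {}

void loop() {
  int note = readColorAsNote();
  if (note != lastNote) {     // only retrigger when the color changes...
    stopNote();
    playNote(note);
    lastNote = note;
  }                           // ...otherwise the current note keeps sounding
  delay(50);                  // polling interval, not a fixed note length
}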
Conclusion & Reflection
Our goal was to build an interactive perception-extending device. We intended to let the user actively experience visual art, colors, in the form of audio art. The outcome is quite satisfying: we conducted several tests among our friends and received good feedback. The interaction model we proposed works.
To be honest, I underestimated the difficulty of the project a bit, as we were constantly working on things whose theory we didn't know, and we had to learn everything in two weeks, by ourselves and from scratch: color theory, inter-hardware communication, the theory of speakers and earphones, user experience design... Sometimes I was very worried, because at times we didn't know what we were doing: the SD card would not load at a serial rate of 115200 and we didn't know why. We tried whatever we could think of: a bug in the code? a conflict among pins? Finally, when I changed the serial rate to 9600, it worked! This happened every day, and in the end we solved the problems one by one, sometimes knowing why, sometimes not knowing why at first. Then I thought of what Rudi said in class (or on a slide?): it is not an exploration if you know what you are doing.
It is true.
A crisis broke out during the presentation. We forced ourselves to calm down and solve the problem: the extension board had somehow become detached from the Arduino, and we had to disassemble the boxes to fix it. But for us, it is never too late.
Tristian, who knows the concept and theory of extended perception well, praised our project and pointed out directions for future research in the field. Interactivity is a complex systemic behavior, and so are the human brain and perception. The combination of the two does not only serve entertainment, or help the blind; it might trigger more interesting studies and ideas at the frontier of science: cognition and interaction within a human (the awesome function of the brain is the "interaction" of cells!) and between humans, as well as what art and aesthetics are.
In this recitation we implemented a usable stepper motor and used it to make a drawing machine.
It was the most challenging task of all the recitations so far, as we needed to work with several units whose working principles we were not entirely familiar with, such as the H-bridge; even just following the diagram on the webpage, it is quite easy to make mistakes. For example, I had my H-bridge upside down the first time: my stepper motor would not work, my Arduino ran hot, and my computer could not recognize it. I thought I had burned the board, but luckily I hadn't (I heard someone did damage his board in our recitation). With the help of Rudi and Jingtian I finally found the problem and corrected the circuit. Checking the correctness of such a circuit is very difficult, since all the wires are packed into a tiny breadboard and have to be checked one by one, so we should be really careful when building the circuit in the first place. Also, the most important lesson: the end with the semicircle notch is the top of an H-bridge!
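For reference, the basic test sketch for driving a stepper through an H-bridge is short (this is based on Arduino's standard Stepper library; the pin numbers and steps-per-revolution are assumptions, to be adjusted to the actual motor and wiring):

#include <Stepper.h>

const int stepsPerRevolution = 200;                 // depends on the motor
Stepper motor(stepsPerRevolution, 8, 9, 10, 11);    // the four H-bridge inputs

void setup() {
  motor.setSpeed(60);                               // rpm
}

void loop() {
  motor.step(stepsPerRevolution);                   // one revolution forward
  delay(500);
  motor.step(-stepsPerRevolution);                  // one revolution back
  delay(500);
}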
Finally we succeeded in making a drawing machine.
Later I researched more details of the H-bridge on the Internet so that I could fully understand it and use it in my own projects later:
Some of my materials: https://www.build-electronic-circuits.com/h-bridge/
During this research I realized it was the protection diodes that saved my board.
A diagram I downloaded as a reference for later use of the H-bridge.
I also checked the lecture PPT for a better understanding.
Question 1
Using the stepper motor made me realize its power for precise rotation. We could therefore use it for highly quantified actions in quantified arts, such as a robot that draws or a robot that plays a musical instrument. Yet we should always remember it does not create art: humans create the art and program the actuators. The actuators are pens, not artists. So, to make machines "more creative" and "real artists independent from humans", I would like to make a machine do what a drawing student does in practice: copy a painting. But instead of programming it directly to tell it how to move on the paper, we would let the machine "see" a real photo through a camera and let it calculate how to draw it on the paper.
Question 2
The large spider robot that takes up a whole page is really eye-catching. What we made in the recitation and that robot are clearly not on the same level, but if we removed the pen from our robot painter and connected it to some other movable part, what we made could actually serve as part of the robot's joint or leg. The article mentions that the robots also communicate and interact with each other, learn from that interaction, and collaborate. It reminds me of the swarm robots I learned about in another course. It seems interaction can be achieved not just human-machine and human-human but also machine-machine, where machines, each with a relatively simple function, form a complex system.
An interaction system involves (at least) two agents, each of which contains three functions: Rec, Prc, and Snd.
Rec is the receiving function: it receives the information sent by the other agent.
Prc is the processing function: it processes the information, deciding what new information the agent will generate from what it received. It also judges whether to end the interaction.
Snd is the sending function: it sends the new information to the other agent.
The system also contains an action loop: the two parties keep running the three functions until one party's Prc function decides to stop.
A simple illustration is shown below:
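The loop can also be written out as a toy program. Here is my own little Processing sketch of it, with a counter standing in for real information and Prc's stop rule being simply "end after six exchanges":

int rec(int info) { return info; }      // Rec: take in the other agent's signal
int prc(int info) { return info + 1; }  // Prc: derive new information from it
int snd(int info) { return info; }      // Snd: pass the new information on

void setup() {
  int message = 0;
  for (int turn = 0; turn < 6; turn++) {  // Prc's stop rule: end after 6 exchanges
    int received = rec(message);          // the current agent receives...
    int produced = prc(received);         // ...processes...
    message = snd(produced);              // ...and sends; then the roles swap
    println("exchange " + turn + ": " + message);
  }
}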
This definition was inspired by the readings as well as my research, and was further improved through discussion with my group while making our project.
Both the reading on interaction and the reading on physical computing cover an important idea: in an interactive system there is an action-process-reaction exchange between two parties. In The Art of Interactive Design (Crawford) and the introduction to Physical Computing (O'Sullivan and Igoe), the authors analyze several cases that are not interactive devices, such as the folding tree branch: the branch is not reacting. For me, though, the more important point is that after the tree behaves, the process is over, and the human does not need to react to the tree's reaction; it is a one-way "interaction". Another example: when we turn on a light, we act on the switch, the circuit processes, and the light turns on. This somewhat fits the action-process-reaction model, but it feels odd to me to call it an interaction.
From https://www.creativeapplications.net/ we found two projects.
I actually had a basic idea of the definition before I searched for projects; I then wanted to test whether the definition describes reality while browsing the projects on the website. The first one, Algorithmic Drive, exemplifies the interaction process: a human sends controls via the panel; the machine receives them and displays different driving images; the human sees the images and sends new commands to create artwork or have fun. The loop continues until the human decides to stop playing. This project follows the whole process accurately, supporting the claim that the definition matches real interactive devices.
The second one, Artificial Arcadia, is also an interactive device, yet it seems to fit my definition less well. The tricky part is here: the device receives information about a human and reacts, but the human does not need to care about the information sent back by the device; one moves wherever one wants, and this has little to do with the falling of the device. So the loop in my definition ends there. When a human moves to a new place inside the device, a new loop actually starts rather than the old one continuing.
This idea and the definition helped in our own project. At first, another group member proposed making a magic wand that could perform many functions. Besides other objections ("it will not be feasible even 100 years from now", "it is just a copy of Harry Potter"), I argued that it is not an interaction at all: a human just sends a command to the wand, which executes it; this does not fit the definition (it does not even finish one loop). We then found we all agreed that interaction involves two parties, each with the three functions. However, some thought my requirement on the loop was too strict (I had thought the loop should execute many times). We finally settled on a lower bound: once a human sends information, receives information back from the device, and acts upon it (that is, one complete cycle in the graph), it counts as an interaction. In the end we decided to make a sensor system paired with a smart kitchen that prepares meals according to your mood and health. In the last moment of scene 3 of our project presentation, the boyfriend (acted by me) felt less worried and tricked his girlfriend into thinking he had prepared the anniversary dinner after the machine offered help; that completed one cycle of interaction. In the Q&A session we explained that the machine could observe and learn from the user's habits for better service, which is a long-term interaction with many loops. Clearly, this device also involves the three functions in both agents.
A photo of our prototype.
During the Q&A session, the professor raised an interesting point: our focus was mostly on the sensor system, while the smart kitchen itself is a great device. Indeed it is; the two devices are a pair. Still, I think the sensors are a crucial part, as they add more interesting interactions to the system: the information we send out is no longer in the form of language, but chemicals, health signals, and so on, and the device can make decisions "by itself". The smart kitchen needed more focus in the presentation, but without the sensors the whole project would amount to "I tell the kitchen to make a pizza and it makes a pizza".
What I want to say is that, in this project, it does not matter so much what magical function we imagined, whether a magic kitchen, a mirror, or a time machine; after all, we cannot build them. What matters is how we embed the idea of interaction in them. The theory of interaction has already been proposed, but even 100 years from now we can still use it to transform mundane functions into something interesting and novel. That, together with how we actually do it, is what we all need to learn in the project and in the class.
In this recitation, each team explored a sensor. My teammate chose an old friend of mine, the infrared distance sensor: previously we used it to control the motion of a robot; this time we decided to use the distance signal to control the brightness of LEDs. Though the expression differs, the technology is the same.
However, by having a human control this device, there is a level of interaction in it. A simple application of this prototype: deploy it on a car and use the infrared sensor to detect the distance to a wall when parking; when the car is too close, a warning light on the panel shines.
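A minimal sketch of this prototype (the pins, the threshold, and the assumption that a larger analog reading means a closer object are all placeholders to be calibrated against the real sensor):

const int sensorPin = A0;     // infrared distance sensor, analog output
const int ledPin = 9;         // warning LED on a PWM pin
const int tooClose = 600;     // "too close" reading, found by experiment

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int reading = analogRead(sensorPin);                   // larger = closer (assumed)
  if (reading > tooClose) {
    digitalWrite(ledPin, HIGH);                          // full warning light
  } else {
    analogWrite(ledPin, map(reading, 0, 1023, 0, 255));  // brightness tracks distance
  }
  delay(50);
}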
We could also use speakers instead of LEDs as the notification. Both are simple applications. A perhaps more advanced design is to build a negative-feedback loop with it: the sensor keeps detecting the distance against a threshold X; if the distance falls below X, the system acts to increase it, and if it rises above X, the system acts to decrease it. This type of system can be easily implemented using sensors and actuators, and is widely seen in nature and in the human body.
Besides the hardware, we had plenty of time to study the character of Arduino coding. Observing the style, I found it is essentially the C language, perhaps with a few modifications. I then found that the pinMode() calls can sometimes be dropped: analogRead() and analogWrite() configure their pins by themselves, so we could delete those lines and the code still worked. Another thing we found is that a setup() function is mandatory, even if we don't write anything in it. So theoretically the code could be written in this simple way:
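Something like this, with the sensor on A0 driving the LED on pin 9 (the idea, if not our exact code):

void setup() {} void loop() { analogWrite(9, analogRead(A0) / 4); }   // a whole working sketch on one line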
We shared this code with the group next to us, and they seemed very curious about and surprised by this one-line coding. In practice, of course, we wouldn't write code like this; it's too hard to read.
As for how computers changed human behavior, let me first lay out the logic chain: human behavior is determined by consciousness; consciousness is affected by information input through our "sensors", the eyes, the ears, and so on; and computers provide new types of information humans have never seen before. That is the general description.
In detail, I happen to be taking Commlab at the same time, and many of the ideas in this week's readings are also mentioned there. Whether or not computers can give mathematical expressions to images is not what matters, and modularity is also not what matters (in terms of affecting people); they do not change the "medium". Algorithms change the medium, so that photos can move and make sounds, lights can be changed, and people's faces can be beautified.
Automation, variability, and all the other features are just tools humans invented to make the medium achievable. That is why it is the language of the machine: using the language, we can "code" the medium we want.
As for the result of the change, I'd like to use an example I discussed with my professor in Commlab:
"I would like to talk about the failure of the U.S. in the Vietnam War. Many already know that the U.S. army was not in a particularly inferior position in the war; it ended abruptly, and brought more disaster to Vietnam and the U.S., because of the surprisingly strong and irrational antiwar sentiment at home. This out-of-nowhere nationwide antiwar movement actually came from satellite TV, the new medium itself. The war broke out not long after satellite TV became widespread in the U.S. At that time, neither the government nor the people knew anything of the new medium's power. Then the war broke out, and reporters sent back color photos and videos from the battlefield. It was the first time that the people of the U.S., having lived in peace for years, saw real blood and dead bodies almost in real time. What's more, to catch more eyes, the TV program providers deliberately chose the most emotion-triggering pictures and videos: an American soldier having his arm shot off, and so on. People were shocked and could no longer accept the war. They knew the atom bomb killed more, yet they did not react to it, because no TV sent them images. On the contrary, if the U.S. government had realized the power of the new medium and made use of it, the situation would have been different." (reposted from my Commlab blog, wp.nyu.edu/krischen)