Recitation 5 – Connor S.

I remember this recitation being one of my favorites of the semester. I had never heard of Processing before that day, and I was surprised to find out how enjoyable this sort of application of coding skills could be. I saw (and still see) the process as almost “code-based art,” where physical performance and manual dexterity matter significantly less than they would if we were painting on canvas or sculpting clay with our hands. While coding pictures and art in Processing does not produce a tangible work, the idea that virtually anyone can create a digital artistic piece based on their own vision by essentially learning a second language opens a lot of doors for the physically disabled or for others without the means to make physical art. 

For this recitation, I chose to recreate the simplest and most straightforward (or so I thought) sketch possible to code in Processing: 

    [Image: a simple dinosaur drawing]

While on paper, both literally and figuratively, this dinosaur would be fairly easy to draw, I quickly became aware that aspects of a sketch commonly presumed to be “simple” can prove challenging to recreate in Processing code. When drawing the dinosaur in question by hand, basically anyone over the age of three could fairly intuitively mimic the general shape of the body, the basic details of the eyes and scales, and so on, and produce a completed rendition in under three minutes. Because of the nature of Processing, this simple task became far more involved and daunting once I made a rough estimate of the number of function calls needed to produce something even remotely similar. I spent the bulk of the recitation figuring out how different orderings of the code changed the outcome, and more generally how to produce not the picture itself but the shapes I needed to form a coherent sketch. In this way, Processing is interesting because it forces the coder to consider otherwise negligible details, like the precise curvature of the dinosaur’s back, or the height of the top of the head relative to that of the end of its tail. I appreciated how this recitation gave us an opportunity to view “simple” drawings as complex or simple depending on how one defines those two words, by considering art through a different lens.    
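To give a sense of what even a rough attempt involves, here is a minimal sketch along those lines (not my actual recitation code), where every coordinate is an invented placeholder rather than a measured value; it only blocks out a body, a head, a tail, and the curve of the back with curveVertex():

void setup() {
  size(500, 500);
  background(255);
  noFill();
  stroke(0);
  strokeWeight(2);

  // Body, head, and tail roughed out with basic primitives
  ellipse(250, 300, 220, 140);            // body
  ellipse(360, 200, 90, 70);              // head
  triangle(140, 300, 60, 280, 150, 330);  // tail

  // The curvature of the back is where the real fiddling happens:
  // each control point nudges the spline, so "simple" curves take tuning
  beginShape();
  curveVertex(150, 260);
  curveVertex(150, 260);
  curveVertex(230, 220);
  curveVertex(320, 190);
  curveVertex(360, 170);
  curveVertex(360, 170);
  endShape();

  // A single eye as a small filled ellipse
  fill(0);
  ellipse(375, 190, 8, 8);
}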

Recitation 6 – Connor S.

Lab: 

For this recitation, I decided to begin a fresh project in Processing that focused more on the functions we covered in class. Starting with a function that randomly generates shapes and colors seemed like a good way to develop my understanding of Processing, particularly because I could fiddle with limits, sizes, colors, and so on against a constantly changing shapescape, which made it easier to immediately see the effect of each change to my code. 

I initially programmed an assortment of triangles, circles, and rectangles of almost completely random colors and sizes (the limits of the shapes’ dimensions were all set to 500, and the fill colors to 255) to flood a 500 × 500 window:

void setup() {
  size(500, 500);
}

void draw() {
  // Random positions and dimensions, capped at the window size
  float w = random(500);
  float x = random(500);
  float y = random(500);
  float z = random(500);
  fill(random(255), random(255), random(255));  // random RGB fill
  rect(w, x, y, z);
  ellipse(w, x, y, z);
  triangle(random(500), random(500), random(500), random(500), random(500), random(500));
}

Next, to incorporate some sort of interaction, I added a check on the keyPressed variable: specifically, one that causes any ellipses that appear on the screen while a key is held down to turn black:

void setup() {
  size(500, 500);
}

void draw() {
  float w = random(500);
  float x = random(500);
  float y = random(500);
  float z = random(500);
  fill(random(255), random(255), random(255));
  rect(w, x, y, z);
  ellipse(w, x, y, z);
  triangle(random(500), random(500), random(500), random(500), random(500), random(500));

  // While any key is held down, redraw the same ellipse in black
  if (keyPressed == true) {
    fill(0);
  }
  ellipse(w, x, y, z);
}

Additional Recitation Work: 

Step 1:

For the additional work, we were first assigned to draw a circle in the middle of a 600 × 600 Processing window and make it expand and contract. To do this I drew an ellipse and created a grow/shrink condition that essentially tells the circle to gradually get larger once it shrinks below a certain size, and vice versa: 
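A minimal version of just this step, before any color changes, might look something like the following (the size limits of 1 and 100 are the same placeholder values I used in the full code further down):

float ballsize = 0;
int shrinkorgrow = 1;  // 1 = growing, 0 = shrinking

void setup() {
  size(600, 600);
}

void draw() {
  background(0);
  // Flip direction whenever the circle hits a size limit
  if (ballsize > 100) {
    shrinkorgrow = 0;
  } else if (ballsize < 1) {
    shrinkorgrow = 1;
  }
  if (shrinkorgrow == 1) {
    ballsize += 1;
  } else {
    ballsize -= 1;
  }
  ellipse(300, 300, ballsize, ballsize);
}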

Step 2:

Next, we were instructed to make the color of the shape’s outline change smoothly. For this, I changed the color mode to HSB and gave stroke() its own variable that increments in step with the size fluctuation I had already programmed for the circle, so the outline color shifts a little each frame:  

float x = 300;
float y = 300;
float ballsize = 0;
int shrinkorgrow = 1;  // 1 = growing, 0 = shrinking
int colorz = 50;       // hue value for the outline

void setup() {
  size(600, 600);
  colorMode(HSB, 100);  // hue, saturation, brightness all range 0-100
  strokeWeight(6);
}

void draw() {
  background(0);

  // Step the hue each frame and wrap it so the color keeps cycling smoothly
  colorz = (colorz + 1) % 100;
  stroke(colorz, 100, 100);

  // Flip direction at the size limits
  if (ballsize > 100) {
    shrinkorgrow = 0;
  } else if (ballsize < 1) {
    shrinkorgrow = 1;
  }
  if (shrinkorgrow == 1) {
    ballsize += 1;
  } else {
    ballsize -= 1;
  }
  circle(x, y, ballsize);
}

Step 3: 

Finally, to have the circle respond to the arrow keys, I used keyPressed together with keyCode, instructing the circle to move right when the right key is pressed, left when the left key is pressed, and so on. This ended up being significantly easier for me than the gradual color change, because moving an object on the screen does not depend on as many other factors as changing its color does. 
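Isolated from the rest of the sketch, the arrow-key handling I describe looks roughly like this (the 5-pixel step size is an arbitrary choice for illustration):

float x = 300;
float y = 300;

void setup() {
  size(600, 600);
}

void draw() {
  background(0);
  // Nudge the circle while an arrow key is held down
  if (keyPressed && key == CODED) {
    if (keyCode == RIGHT) {
      x += 5;
    } else if (keyCode == LEFT) {
      x -= 5;
    } else if (keyCode == UP) {
      y -= 5;
    } else if (keyCode == DOWN) {
      y += 5;
    }
  }
  circle(x, y, 100);
}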

Documentation:

I would say the most interesting function I used this recitation was the first keyPressed check, which caused the ellipses on the screen to turn black. I say this not because it was very difficult, but because it was one of the first times I felt I was beginning to grasp the underlying logic behind the language of Processing.      

Recitation 4 – Connor S.

Lab:

Recitation 4 incorporated relatively complex components, particularly compared to those used in recitations 1 through 3. The circuits used for this recitation also ran at significantly higher voltages than in previous recitations, high enough, we were warned, to “damage your computer,” which made me a bit nervous at first, but everything about wiring and powering the circuits went smoothly. The biggest problem I encountered in building the contraption was effectively organizing and managing the wires in the circuit; it took me at least two attempts to wire the circuit correctly without having to start over. Otherwise, I was surprised by how straightforward the Arduino code itself was for a seemingly complex piece of equipment. 

Documentation: 

I would be interested in building machines designed to twist and turn in particular ways to remove garbage from bodies of water. I have seen this sort of project before and have always been fascinated by how the physics of the water and the trash differ enough to allow a machine to separate the two. In an article about actuators, CreativeMechanisms.com defines the main types of actuator: (1) pneumatic, (2) electric, and (3) hydraulic. While a complex implementation of the more intricate variations would likely be more advanced than what we have been discussing, it is interesting to note the variety of actuators and how important choosing the right one could be in a high-stakes scenario. For a water filtration system, I think a rotary actuator powering some sort of gear mechanism would work well to produce a motion that almost “twists” the trash out of the water, similar to a cement mixer.
 
In Art + Science Now by Stephen Wilson (Kinetics chapter), it was particularly interesting for me to see a description and photo of the Time’s Up project Gravitron (2005), which challenges the user to maintain balance in accordance with a changing gravitational terrain displayed on a projection below them. This project is extremely cool because its base concept expertly uses actuators to respond to a person’s physical movement while they engage with the installation. Although I am not entirely familiar with the specific actuator used, one can surmise it would have to be fairly strong to produce a reciprocal physical response to a human swaying from side to side. It looks a bit like a trampoline, so it may rely on tension and/or stretching(?)     
 
[Image: Time’s Up, Gravitron, 2005]

Connor S. – Research and Analysis 

The Chronos interactive art exhibition demonstrated how technology, art, and interaction can work together to produce pieces that are both entertaining and thought-provoking. In my experience, art exhibitions are generally one-dimensional, leaving the viewer little opportunity to engage with the work beyond viewing it and reacting to it as such. Non-interactive, one-dimensional art invites the viewer to engage with it internally, but the experience essentially ends there. The Chronos exhibition allows for a more intimate experience with the art because of the interactive qualities of many of its pieces. Interactive art invites not just one response from the viewer but at least two. 

The first interactive project that stuck in my mind, and that I still think about at least once a month, is an M&M’s-themed music-making game in which the user drags and drops different M&M’s characters onto different labeled spots in a window, each character adding an instrument to a musical ensemble. Unfortunately, the original website appears to have since been taken down, but here is a YouTube video of someone playing the game: https://www.youtube.com/watch?v=7xdvZMwV7DI. After some consideration, I think I keep coming back to this game because of how effortless it makes the act of making music. Much like Guitar Hero or Rock Band, this little M&M’s online game really makes you feel like a musical artist; you have access to different instruments, characters, beats, and melodies, which creates a sense of personalization and accomplishment for having dragged and dropped these animated candies with human traits onto the stage. 

Another interactive project that tickled my fancy was a soccer free-kick simulator, in which the user approaches a physical ball on the ground in front of a projector screen displaying a goal. The user is prompted to kick the ball at the screen; after the ball hits it, a sensor estimates the ball’s trajectory and determines whether the user would have scored based on the kick. I found this concept particularly interesting because of its ability to bring an activity that would otherwise require a lot of space to essentially anywhere. Not only does the project compress the activity, it also does not necessarily detract from the original experience: everything on an actual soccer field that a player directly interacts with is present in this virtual version, which is why I particularly admired the concept.

My initial definition of interaction relied fairly heavily on the idea of what makes an effective prompt, and the extent to which the give-and-take between user and project feels organic or natural. In the case of the soccer free-kick game, while I have yet to actually play it myself, I would consider the interaction fairly good: it both invites the user to interact (by way of a soccer ball sitting on a grassy platform in front of an image of a goal) and directly responds to the user’s engagement by transposing an image of the kicked ball onto the screen, providing an immediate and clear response to the action. My goal for my final project is generally one which transfers the experience one has with something bigger, or that requires more resources, to something smaller, while retaining a high level of meaning relative to its original form. I think the soccer free-kick example achieves this goal much better than the M&M’s music game because, while the M&M’s game makes the means of creating a personalized song more accessible, it does not give the user as direct a sense of the actual experience of making music, whereas the soccer free-kick example does.

In an article on https://www.intechopen.com/, definitions of interaction are coupled with tips for successful interactive design. One of the more interesting pieces of advice from the article was that the system in question should be positioned to serve the physical needs of the person engaging with it. The soccer free-kick project accomplishes this fairly well by including an actual-sized ball and a screen big enough for the user to have a relatively immersive experience taking a free kick. The M&M’s game, however, transposes the experience of making music in a fairly limiting way; to achieve a more immersive experience, a larger, more hands-on setup might have served it better.

Connor S. – Project Proposals

  1. “Beat Orchestra” 

“Beat Orchestra” gives aspiring hip-hop producers the opportunity to make rap beats in a fun and interactive way through the use of multiple sensors that serve as different instruments (e.g., a physical kick drum on the ground that one can stomp on to add a kick to the beat). The project would offer the fun of making music without the monotony of relying solely on a monitor, mouse, and keyboard. It could be used by people of all ages who are able to use their hands, arms, feet, etc. to engage with sensors and produce music for their own and others’ enjoyment. After some research into music production, it turns out that most producers use just a monitor, a mouse, and often an electric keyboard for all the instruments that go into a beat. That method sounds boring, repetitive, and alienating to those who don’t necessarily want to slouch over a computer to make music. With “Beat Orchestra,” anyone from small children to serious producers can start to see music production as an active, physical activity. 

  2. “Soccer Skills Assister” 

Imagine Dance Dance Revolution, but for learning how to do stepovers and other soccer skills. Various sensors are placed around a stationary soccer ball and light up to show where the user should place his or her feet to perform the intended skill. In front of the user is a screen showing a person demonstrating how the move should be performed in real life. The user can change the difficulty by selecting different skills and deciding at what speed to perform them; altering the delay between the light-up sensors changes the speed at which the user performs the skill. I was not able to find anything like this after some research online, so this is probably a trillion-dollar idea that I should patent sooner rather than later. The intended audience is essentially anyone with function in their legs. It’s pretty much Guitar Hero for soccer, except “Soccer Skills Assister” can actually teach you something usable on a soccer field, unlike Guitar Hero, which just teaches you how to push buttons. 

  3. “Security Sensor”

Everyone has some concern for the safety of their personal belongings, and the “Security Sensor” is marketable to anyone with drawers containing valuables they wish to protect from thieves. One simply places the “Security Sensor” inside any drawer that contains valuables; when the drawer is opened, the device’s light sensor detects the change, sends a message to the owner, and snaps a photo of the perpetrator as well. When you want to open the drawer yourself, you simply turn the “Security Sensor” off from your phone. After initially learning about light sensors, I immediately thought of a fridge, and how the light (allegedly) is off when you close it and on when you open it. A light sensor would know for sure whether the light is on or off, so why not apply this to a closed drawer or cabinet?