Recitation 9 by Hangkai Qian

In my recitation, I planned to use two potentiometers to control an image in Processing: the first potentiometer controls the size of the image, and the second controls the degree of blur applied to it.

At first, after I read the PowerPoint in the folder, I used the resize() function to change the size of the image; its syntax is photo.resize(w, h). However, a mistake happened: each time I resized the picture, it blurred a little. I realized that resize() permanently modifies the pixels of the image, so repeated calls degrade the picture. Therefore, I later chose another function that does not change the image itself:

image(photo, 0, 0, a, a);

Second, when I first used the code that transfers data from Arduino to Processing, I got an ArrayIndexOutOfBoundsException. At first, I thought it was because the numbers from the Arduino exceeded the range Processing could handle. However, when I asked Rudi, he said the argument to the blur filter couldn't be that big. So I used

float a = map(sensorValues[1], 0, 1024, 0, 40);

to scale the numbers from the Arduino down to a usable range.

Here is my Arduino code:

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  // keep this format
  Serial.print(sensor1);
  Serial.print(","); // put a comma between sensor values
  Serial.print(sensor2);
  Serial.println(); // add a linefeed after sending the last sensor value

  // too-fast communication might cause some latency in Processing;
  // this delay resolves the issue.
  delay(100);
}

Here is my Processing code:

import processing.serial.*;

String myString = null;
Serial myPort;
PImage photo;

int NUM_OF_VALUES = 2;  /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;     /** this array stores values from Arduino **/

void setup() {
  size(500, 500);
  background(0);
  photo = loadImage("12deeaa706628e537518aa533d6c8658.jpeg");
  setupSerial();
}

void draw() {
  background(0);
  updateSerial();
  printArray(sensorValues);

  image(photo, 0, 0, sensorValues[0], sensorValues[0]);
  float a = map(sensorValues[1], 0, 1024, 0, 40);
  println("senmap!", a);
  filter(BLUR, a);
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[4], 9600);
  // WARNING!
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----"
  // and replace PORT_INDEX above with the index number of the port.

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil(10);  // 10 = '\n', linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10);  // 10 = '\n', linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Here is my circuit:

circuit

Here is my video:

r9

Reading

In this week’s reading, the author discussed the development of computer vision. It showed that computer vision can be used for more than military and law-enforcement purposes. One of the most impressive projects based on the technology is called “Standards and Double Standards,” which recognizes the people in the room and rotates belts so that they face the audience, using the belts to stand in for people.

This makes me think that the game I want to make could use this technique to track the player’s movement, so that it feels as if they are moving the stick on the screen, but I’m not sure I can manage to understand it.

Source: Computer Vision for Artists and Designers

Recitation 8: Serial Communication by Tya Wang (rw2399)

Exercise 1: Make a Processing Etch A Sketch

Here is a schematic of a circuit made up of two potentiometers, one controlling the x-value and the other the y-value of the drawing position on the screen.

With the ellipse changing position, the program operates this way:

When you think about how an Etch A Sketch works, essentially it only detects the values of its two handles and draws a straight line between the positions returned by two consecutive readings. In the video given, the reason the lines are so smooth and realistic is that the toy samples far faster than Processing, and the handles on the toy are larger than the potentiometers, making them easier to control.

I think that while this is a very simple and flat interactive relationship, there is quite a lot of potential in what people can achieve through it. It can also help children or even adult users develop creative thinking by brainstorming what one single line can make up. However, I don’t think this is the best way for a layperson to create art on a daily basis, because it requires a certain artistic threshold to make a vivid picture with such limited material. Other digital drawing boards with far more features would serve everyday art creation better.

Exercise 2: Make a musical instrument with Arduino

Building the circuit and coding were easier than designing how this program should work. Since you can choose either or both of your keyboard and mouse to control the sound, deciding which one controls the length and tone of the notes was the central design question for making the program easy to use. Finally, I decided that while moving the mouse to create a sound might be fun, it is not very practical. Therefore, I chose to build a classic keyboard where each key on the second row of the computer keyboard produces the sound of a different note. I also made each sound linger a bit after the user stops pressing the key, to create an experience like playing a piano.

Sarah Chung Drawing machines

INTRODUCTION

In this recitation we built a drawing machine using actuators and Arduinos. We also used an H-bridge to drive a stepper motor.


MATERIALS

For Steps 1 and 2

1 * 42STH33-0404AC stepper motor
1 * L293D ic chip
1 * power jack
1 * 12 VDC power supply
1 * Arduino kit and its contents

For Step 3

2 * Laser-cut short arms
2 * Laser-cut long arms
1 * Laser-cut motor holder
2 * 3D printed motor coupling
5 * Paper Fasteners
1 * Pen that fits the laser-cut mechanisms
Paper

For step one, the circuit building was more or less easy, as clear instructions were given on the recitation page, and we had no trouble incorporating the H-bridge onto our breadboard. The H-bridge was important because it allowed the stepper motor to run both forwards and backwards. Our only worry was that this was our first time using a 12V power supply instead of 5V, and we were scared of the damage it could do to the Arduino and our computers. My partner and I colour-coded our wires to avoid confusion in case we had to retrace our work, and to ensure we wired everything correctly to avoid damaging the Arduino or the laptop. Finally, when we ran our code, the project worked perfectly.


In step 2 we added a potentiometer to the circuit so we could control the rotation of the machine. We mapped the potentiometer’s minimum and maximum analog values to the stepper motor’s range of steps, and programmed the Arduino with analogRead() so that the motor moved according to the input from the potentiometer. With all this done, we could finally control the rotation of the motor with the knob of the potentiometer. After this we were ready to move on to step 3.

In step 3 we assembled the laser-cut short and long arms with paper fasteners and mounted them onto our motors. We then laid out a piece of paper and inserted the pen into the drawing machine. Though steps 1 and 2 were completed successfully and the drawing machine was assembled as instructed, we found it hard to control it to draw a fixed pattern.

Question 1

What kind of machines would you be interested in building? Add a reflection about the use of actuators, the digital manipulation of art, and the creative process to your blog post.

I would be interested in building machines that can enhance human creativity, much like drawing machines. I would like to build something that allows others to express themselves in ways they couldn’t before (like the machine that allowed a paralyzed graffiti artist to write with his eyes). I believe that in projects like this (the digital manipulation of art), humans rely on machines in a healthy way: they use them to heighten their skills. The machine is not used as a substitute for creativity; there is a great deal of thought and processing on both sides of the interaction. Actuators were an integral part of this project and are integral to any moving machine. When a creator understands how to make proper use of actuators, it allows for limitless avenues of creativity.

Question 2

Choose an art installation mentioned in the reading ART + Science NOW, Stephen Wilson (Kinetics chapter). Post your thoughts about it and make a comparison with the work you did during this recitation. How do you think that the artist selected those specific actuators for his project?

Douglas Irving Repetto’s Giant Painting Machine/San Mateo reminded me of our drawing machine project. In both projects (Repetto’s and ours) a motor was used to allow a drawing tool to mark a canvas. However, our project was controlled by direct human interaction (us turning the knob of the potentiometer), whereas Repetto’s machine was controlled by electronics (code). I believe he chose those specific actuators because they allowed the machine the most fluid and erratic movement.

Week12 Assignment: Final Concept Documentation–Crystal Liu

Background+Motivation

My final project is mainly inspired by re-created famous paintings, especially portraits. Some people replace Mona Lisa’s face with Mr. Bean’s face, and the resulting painting is really weird but interesting.


Also, I found that some people imitate the poses of the characters in paintings, such as The Scream:


Therefore, I want to build a project that lets users add their own creativity to famous paintings and personalize them. It reminds me of my previous style-transfer assignment, for which I used a painting by Picasso to train the model, so that everyone and everything appearing in the video could be changed into Picasso’s style. Even though the result was not that good, it still shows a way to personalize a painting, or to let users create their own version of paintings.

My idea is that the user can trigger a famous painting by imitating the pose of a character in that painting. For example, if users want to trigger The Scream, they need to strike a pose like this: 😱. After the painting shows up, the user can choose to transfer the style of the live camera feed to the style of The Scream. If users want to change to another painting, they just need to do the corresponding pose to trigger it.

Reference

My reference is the project called moving mirror. The basic idea is that when the user makes a certain pose, there will be lots of images with people making the same or similar pose.

What attracts me most is the connection between images and human poses. It displays a new way of interaction between human and computer or machine. Users can use certain poses to trigger things they want, and in my project it is the painting. 

The second one is style transfer. It reminds me of some artistic filters in Meituxiuxiu, a popular Chinese photo-beautification application. These filters can change the style of a picture to a sketch, watercolor, or crayon style.

But those filters only work on still pictures. I want to use a style-transfer model to apply such a filter to live video so that users can see their style-changed motions in real time.

Reading Response 8: Live Cinema – Celine Yu

Reading Response:

To differentiate between the terms VJing, Live Cinema, and Live Audiovisual Performance, we must understand the relations between them and their supposed hierarchical standings. Live Audiovisual Performance, as depicted by Ana Carvalho, works as an “umbrella that extends to all manner of audiovisual performative expressions” (134). This artistic umbrella harnesses under its wing expressions that include VJing, live cinema, expanded cinema, and visual music. The term itself is generic and vast, for it fails to identify a single style, technique, or medium, rendering it complex at the same time. Its “live,” “audiovisual,” and “performative” features are grounded in a nature of improvisation. This sense of improvisation has developed alongside the increase in immediacy and ‘liveness’ of technology (cameras, mixers, software) for image and sound manipulation, which now permits capturing and presenting a performance simultaneously while an action is happening (134). The category is often commended for the opportunity it provides audience members across the globe to understand the numerous innovative expressions that it entails.

Though similar, the practices of VJing and Live Cinema are crucially distinct under the wing of Live Audiovisual Performance. VJing, as we learned in the past, runs parallel to the responsibilities of a disk jockey (DJ): VJs are rooted in the manipulation of live visuals as DJs are rooted in the manipulation of audio. Unlike Live Cinema, however, it is much more interaction-based in its relationship between performer and audience. The act of VJing can be relatively restrictive compared with Live Cinema. VJs may have the artistic freedom of improvisation and a lower demand for narration, but for the most part they lack the upper hand in a performance. They have less control, having to rely on their fellow collaborators (lighting engineers, DJs, set producers) as well as the response of the audience. Since VJing is so heavily interaction-based, the performative work of VJs is more often than not restricted to the monotonous setting of a nightclub, where they are treated like wallpaper and supporting acts.

Live Cinema, on the other hand, is a much more hands-on and demanding genre. In essence, the goals of Live Cinema are described as much more personal and artistic in the eyes of the creator as well as the audience member on the receiving end. This is why “many live cinema creators feel the need to separate themselves from the VJ scene altogether” (93), for the goals of live cinema are relatively difficult to achieve in the club environment. The creator is given a much “larger degree of creative control over the performance” (95); there is much more leeway for artists to create what they want, given that they don’t need to follow trends and situational norms. Furthermore, compared to VJing, Live Cinema places much greater importance on narration and communication, where storytelling becomes a needed skill in articulating meaningful representations to the audience.

Examples

Live Audiovisual Performance

Ryoichi Kurokawa is a household name in the genre of Live Audiovisual Performance. His performances’ use of synthesized, impactful sounds playing in collaboration with distinct visuals does not have a strong narrative sense, but nonetheless works as a personal and artistic piece. His use of human depictions, animal species, and other meaningful representations further conveys a sense of artistic expression to the audience.

Live Cinema

This performance by Ge-Suk Yeo can be categorized under the Live Cinema wing of live audiovisual performances for its use of concrete narrative aspects to form visual art. The theme of aquatic life and the narration of light down under the dark seas are prominent in this performance.

VJing

This example of VJing is a standard performance that does not necessarily harness any narrative components, but makes use of live manipulation of audio and its visual accompaniment to create an atmosphere that allows those present to become harmonious with the performance, bringing people closer to what they are seeing.

Sources:

Carvalho, Ana. “Live Audiovisual Performance.” The Audiovisual Breakthrough, 2015, pp. 131–143.

Menotti, Gabriel. “Live Cinema.” The Audiovisual Breakthrough, edited by Cornelia Lund, 2015, pp. 83–108.