During this week’s recitation, I drew several things according to the instructions.
First, here is the shape I drew: a rotating Taichi whose size also changes. I originally planned to only make it rotate, but I realized that would be very similar to the rotating mills I drew for the first recitation task, so I tried scaling it as well. This is the Taichi I drew:
And here is my code for building this:
float[] angle;
float[] Scale;
int[] tScale;
int m = 0;

void setup() {
  size(600, 600);
  angle = new float[50];
  Scale = new float[50];
  tScale = new int[50];
  for (int i = 0; i < 30; i++) {
    tScale[i] = 1;
    Scale[i] = 1;
  }
}

void draw() {
  background(255);
  // draw a 4x4 grid of Taichis, one index k per grid cell
  for (int a = 1; a < 5; a++) {
    for (int b = 1; b < 5; b++) {
      rotateTaichi(75 + (a-1)*150, 75 + (b-1)*150, m);
      m = m + 1;
    }
  }
  m = 0;
  //println(m);
}

void taichi() {
  fill(0);
  arc(0, 0, 100, 100, 0, PI, PIE);
  fill(255);
  arc(0, 0, 100, 100, PI, 2*PI);
  noStroke();
  fill(0);
  circle(-25, 0, 50);
  fill(255);
  circle(25, 0, 50);
  noStroke();
  fill(0);
  circle(25, 0, 20);
  fill(255);
  circle(-25, 0, 20);
}

void rotateTaichi(int x, int y, int k) {
  Scale[k] = Scale[k] + 0.01*tScale[k];
  if (Scale[k] < 0.3 || Scale[k] > 1) {
    tScale[k] = -tScale[k];  // reverse the scaling direction at the limits
  }
  //println(k,Scale[k],tScale[k]);
  push();
  translate(x, y);
  rotate(radians(angle[k]));
  angle[k] = angle[k] + 1;
  scale(Scale[k]);
  taichi();
  pop();
}
Then, following the next step, I built this. But I realized that I couldn’t show a rotating and scaling Taichi at the place where I clicked the mouse. Even though I tried to use objects (so that every time I clicked the mouse, a new object would appear on the screen), I failed and was only able to show a static one. Here’s what I built:
I found that if I use the mousePressed variable as the only condition, every click stacks many Taichis at the same position, because the block runs on every frame while the button is held down. So I added a boolean preCheck to make sure only one Taichi is drawn per click. Here is my code:
boolean preCheck = false;

void setup() {
  size(600, 600);
  background(255);
}

void draw() {
  if (mousePressed && preCheck == false) {
    taichi(mouseX, mouseY);
    preCheck = true;  // block further Taichis until the mouse is released
  }
  if (mousePressed == false) {
    preCheck = false;
  }
  if (keyPressed && key == CODED) {
    if (keyCode == UP) {
      background(255);  // clear the screen with the up arrow
    }
  }
}

void taichi(int x, int y) {
  fill(0);
  arc(x, y, 100, 100, 0, PI, PIE);
  fill(255);
  arc(x, y, 100, 100, PI, 2*PI);
  push();
  noStroke();
  fill(0);
  circle(x-25, y, 50);
  fill(255);
  circle(x+25, y, 50);
  noStroke();
  fill(0);
  circle(x+25, y, 20);
  fill(255);
  circle(x-25, y, 20);
  pop();
}
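While writing this up, I also noticed that Processing has a mousePressed() event function that runs only once per click. I haven’t rebuilt the whole sketch this way, but with the same setup() and taichi() as above, it could replace the preCheck flag:

// mousePressed() runs exactly once per click, so no preCheck flag is needed
void mousePressed() {
  taichi(mouseX, mouseY);
}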
Here is my failed attempt to use objects. I am just recording it here, and I will try to fix it once I know more about objects.
int m = 0;
boolean preCheck = false;
Bug[] bugs;

void setup() {
  size(600, 600);
  bugs = new Bug[50];
  for (int i = 0; i < 50; i++) {
    bugs[i] = new Bug();
  }
}

void draw() {
  background(255);  // this erases earlier Taichis on every frame
  if (mousePressed == true && preCheck == false) {
    // this only runs on the frame of the click, so nothing keeps animating
    bugs[m].rotateTaichi(mouseX, mouseY, m);
    m = m + 1;
    preCheck = true;
  }
  if (mousePressed == false) {
    preCheck = false;
  }
}

class Bug {
  int x, y;
  float[] angle;
  float[] Scale;
  int[] tScale;

  Bug() {
    angle = new float[50];
    Scale = new float[50];
    tScale = new int[50];
    for (int i = 0; i < 30; i++) {
      tScale[i] = 1;
      Scale[i] = 1;
    }
  }

  void taichi() {
    fill(0);
    arc(0, 0, 100, 100, 0, PI, PIE);
    fill(255);
    arc(0, 0, 100, 100, PI, 2*PI);
    noStroke();
    fill(0);
    circle(-25, 0, 50);
    fill(255);
    circle(25, 0, 50);
    noStroke();
    fill(0);
    circle(25, 0, 20);
    fill(255);
    circle(-25, 0, 20);
  }

  void rotateTaichi(float x, float y, int k) {
    Scale[k] = Scale[k] + 0.01*tScale[k];
    if (Scale[k] < 0.3 || Scale[k] > 1) {
      tScale[k] = -tScale[k];
    }
    //println(k,Scale[k],tScale[k]);
    push();
    translate(x, y);
    rotate(radians(angle[k]));
    angle[k] = angle[k] + 1;
    scale(Scale[k]);
    taichi();
    pop();
  }
}
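Looking back at it, I think one way the object version could work (this is just an untested sketch of the idea, not part of my recitation code) is to store every Taichi in an ArrayList, add a new one inside the mousePressed() event, and then update and draw all of them on every frame of draw():

// a minimal sketch: each Taichi object keeps its own position, angle and scale
ArrayList<Taichi> taichis = new ArrayList<Taichi>();

void setup() {
  size(600, 600);
}

void draw() {
  background(255);
  for (Taichi t : taichis) {
    t.display();  // every Taichi keeps rotating and scaling on its own
  }
}

void mousePressed() {
  taichis.add(new Taichi(mouseX, mouseY));  // one new Taichi per click
}

class Taichi {
  float x, y;
  float angle = 0;
  float s = 1;
  int dir = 1;

  Taichi(float x, float y) {
    this.x = x;
    this.y = y;
  }

  void display() {
    s += 0.01 * dir;
    if (s < 0.3 || s > 1) {
      dir = -dir;  // reverse the scaling direction, same idea as tScale above
    }
    push();
    translate(x, y);
    rotate(radians(angle));
    scale(s);
    angle++;
    // same shape as the taichi() function above, drawn around (0, 0)
    fill(0);
    arc(0, 0, 100, 100, 0, PI, PIE);
    fill(255);
    arc(0, 0, 100, 100, PI, 2*PI);
    noStroke();
    fill(0);
    circle(-25, 0, 50);
    fill(255);
    circle(25, 0, 50);
    fill(0);
    circle(25, 0, 20);
    fill(255);
    circle(-25, 0, 20);
    pop();
  }
}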
Then, in the third step, I created many Taichis that appear randomly on the screen. To make it more diverse, in addition to randomizing the position, I randomized the color, rotation angle, and size as well. Here is what I built (something strange happened to my computer’s screen recording, so I took this video with my phone):
Here is my code:
float r, g, b;

void setup() {
  size(600, 600);
  background(255);
}

void draw() {
  r = random(0, 255);
  g = random(0, 255);
  b = random(0, 255);
  taichi(int(random(width)), int(random(height)), r, g, b, radians(random(360)), random(0.3, 1.5));
  if (keyPressed && key == CODED) {
    if (keyCode == UP) {
      background(255);
    }
  }
}

void taichi(int x, int y, float r, float g, float b, float angle, float Scale) {
  push();
  rotate(angle);
  scale(Scale);
  fill(r, g, b);
  arc(x, y, 100, 100, 0, PI, PIE);
  fill(255);
  arc(x, y, 100, 100, PI, 2*PI);
  push();
  noStroke();
  fill(r, g, b);
  circle(x-25, y, 50);
  fill(255);
  circle(x+25, y, 50);
  noStroke();
  fill(r, g, b);
  circle(x+25, y, 20);
  fill(255);
  circle(x-25, y, 20);
  pop();
  pop();
}
Questions:
- The first part of my work is definitely dynamic-passive. Every Taichi rotates and scales, but this movement is not influenced by the audience in any way.
For the second part of my work, I think it is dynamic-interactive (varying), because this mini project reacts to the audience when they click the mouse, and the position of each Taichi depends on where the mouse is when it is clicked, which shows “varying”. Pressing the up arrow on the keyboard clears the screen, which is also an interaction.
And I think the third part of my work is dynamic-interactive. Taichis appear on the screen with random positions, colors, sizes, and rotation angles, and every time the user presses the up arrow, the screen is cleared. Pressing the key to clear the screen is interactive, but it is also the only kind of interaction, so I think this part is dynamic-interactive but not varying.
- These circular Taichis remind me of chess pieces. Maybe I could build a chessboard on the screen, and the user could place a piece by pressing a button while a distance sensor senses the position; at the moment the button is pressed, the piece would appear on the corresponding spot on the screen. This could help people who want to play chess but don’t have pieces or a board.
- I think I could use a distance sensor and a pressure sensor to get the position and the pressing information. With these, I think I could build a drawing board with Arduino and Processing. I could even change the size of the brush when the pressure changes, to make it feel more like a real brush; a rough sketch of the idea is below.
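As a very rough sketch of the Processing side only (the Arduino code, the serial port, and the exact sensor range are all assumptions on my part): if the Arduino printed one pressure reading from 0 to 1023 per line over serial, Processing could read it and map it to the brush size:

import processing.serial.*;

Serial myPort;
float pressure = 0;

void setup() {
  size(600, 600);
  background(255);
  // assumes the Arduino is the first serial port and sends values at 9600 baud
  myPort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  // read the latest pressure value, one number per line
  while (myPort.available() > 0) {
    String inString = myPort.readStringUntil('\n');
    if (inString != null) {
      pressure = float(trim(inString));
    }
  }
  if (mousePressed) {
    strokeWeight(map(pressure, 0, 1023, 1, 30));  // harder press, thicker brush
    line(pmouseX, pmouseY, mouseX, mouseY);
  }
}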
Research
For loading, I can bring my own images, audio, video, or GIFs into Processing. I can read the color information of each pixel of an image, the amplitude of a sound, and the pixels of each video frame as well. Not only can I put image, sound, or video files into the sketch’s data folder and load them in my project, but I can also use a microphone or camera as a live sound or image input.
Here are the detailed functions I found:
Loading image:
loadImage();
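For example (the file name “photo.jpg” is just a placeholder; the image file would go into the sketch’s data folder):

PImage img;

void setup() {
  size(600, 600);
  img = loadImage("photo.jpg");  // placeholder file in the data folder
}

void draw() {
  image(img, 0, 0, width, height);  // stretch the image to fill the canvas
}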
Loading video:
import processing.video.*;

Movie myMovie;

void setup() {
  size(200, 200);
  myMovie = new Movie(this, "totoro.mov");
  myMovie.loop();
}

void draw() {
  tint(255, 20);
  image(myMovie, mouseX, mouseY);
}

// Called every time a new frame is available to read
void movieEvent(Movie m) {
  m.read();
}
Load sound:
import processing.sound.*;

Sound s;

void setup() {
  size(200, 200);
  SinOsc sin = new SinOsc(this);
  sin.play(200, 0.2);
  sin = new SinOsc(this);
  sin.play(205, 0.2);
  // Create a Sound object for globally controlling the output volume.
  s = new Sound(this);
}
Get pixels:
get(); updatePixels();
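To read the color information of each pixel, loadPixels() and the pixels[] array can be used together with updatePixels(), and get(x, y) also returns the color of a single pixel. A small sketch (again with a placeholder image file):

PImage img;

void setup() {
  size(600, 600);
  img = loadImage("photo.jpg");  // placeholder file in the data folder
  img.loadPixels();
  for (int i = 0; i < img.pixels.length; i++) {
    color c = img.pixels[i];             // color information of one pixel
    if (brightness(c) > 200) {
      img.pixels[i] = color(255, 0, 0);  // for example, mark very bright pixels in red
    }
  }
  img.updatePixels();
}

void draw() {
  image(img, 0, 0);
}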
Controlling the amplitude (volume) of the output (this snippet goes inside the draw() of the sound example above; a sketch for actually reading the amplitude and graphing it follows):
float amplitude = map(mouseY, 0, height, 0.4, 0.0);
s.volume(amplitude);
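To actually get the amplitude of a sound and show it as a simple graph, I think the Amplitude analyzer from the sound library could be used like this (an untested sketch; “sample.mp3” is just a placeholder file in the data folder):

import processing.sound.*;

SoundFile file;
Amplitude amp;

void setup() {
  size(600, 200);
  file = new SoundFile(this, "sample.mp3");  // placeholder sound file
  file.loop();
  amp = new Amplitude(this);
  amp.input(file);  // analyze the playing sound
}

void draw() {
  background(255);
  float level = amp.analyze();            // current amplitude, roughly 0 to 1
  fill(0);
  rect(0, height, 50, -level * height);   // draw the level as a single bar
}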
Camera:
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  String[] cameras = Capture.list();
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
    cam = new Capture(this, cameras[0]);
    cam.start();
  }
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
}
Microphone:
I found something in this post on the Processing Discourse forum, but I’m not sure about it:
https://discourse.processing.org/t/using-sound-input-from-mic/21448
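From what I can tell from that thread (I have not tried it myself, so this is only a guess at how it would look), the sound library’s AudioIn class takes input from the microphone, and it can be analyzed the same way as a sound file:

import processing.sound.*;

AudioIn mic;
Amplitude amp;

void setup() {
  size(600, 200);
  mic = new AudioIn(this, 0);  // 0 = first input channel (the microphone)
  mic.start();
  amp = new Amplitude(this);
  amp.input(mic);
}

void draw() {
  background(255);
  float level = amp.analyze();              // how loud the microphone input is
  circle(width/2, height/2, level * 500);   // louder input, bigger circle
}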