Announcements
- Record session
- Student-led discussions (after break): Youssef and Amy
- Please keep your video on, unless there are issues of privacy. I need the feedback from your faces so that I know if I’m making sense or not.
Today’s lecture: Computer Vision!
General concepts
e.g. in ColorTrack
- Add library
- Set up capture device (camera)
- Read from camera (and display if desired)
- loadPixels()
- Loop through pixels
for (int x = 0; x < video.width; x++) {
  for (int y = 0; y < video.height; y++) {
    int loc = x + y*video.width;
- Extract color
float r1 = red(currentColor);
or
int currR = (currColor >> 16) & 0xFF;
- Calculate distance from whatever we seek
float d = dist(r1, g1, b1, r2, g2, b2);
- If this pixel is new world record, remember its location
if (d < worldRecord) {
  worldRecord = d;
  closestX = x;
  closestY = y;
}
- Once we’re done looking at all the pixels, do whatever we need at the location indicated by the world record
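The steps above can be condensed into a runnable sketch. This is plain Java (Processing's host language) rather than a Processing sketch, with a tiny synthetic frame standing in for webcam pixels; the 0xAARRGGBB packing matches how Processing stores colors, and the distance computation is what Processing's dist() does.

```java
public class Main {
    public static void main(String[] args) {
        int w = 4, h = 3;
        int[] pixels = new int[w * h];
        java.util.Arrays.fill(pixels, 0xFF0000FF);      // mostly blue pixels
        pixels[1 + 2 * w] = 0xFFFF2010;                 // one nearly-red pixel at (2, 1) -> (x=1, y=2)

        int targetR = 255, targetG = 0, targetB = 0;    // we seek pure red
        float worldRecord = Float.MAX_VALUE;
        int closestX = -1, closestY = -1;

        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                int loc = x + y * w;                    // 2D coordinate -> 1D pixel index
                int c = pixels[loc];
                int r = (c >> 16) & 0xFF;               // extract channels by bit shifting
                int g = (c >> 8) & 0xFF;
                int b = c & 0xFF;
                // Euclidean distance in RGB space between this pixel and the target
                float d = (float) Math.sqrt((r - targetR) * (r - targetR)
                        + (g - targetG) * (g - targetG)
                        + (b - targetB) * (b - targetB));
                if (d < worldRecord) {                  // new world record: remember the location
                    worldRecord = d;
                    closestX = x;
                    closestY = y;
                }
            }
        }
        System.out.println(closestX + "," + closestY);  // prints 1,2
    }
}
```

Note that the search visits every pixel and only afterwards acts on the winning location, exactly as in the outline.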
Other concepts
- PImage vs. pixels array
- Canvas pixels array vs. image pixels array
- Running average
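As a sketch of the running-average idea: an adaptive background image can be maintained by blending a small fraction of each new frame into a stored average. This plain-Java example uses single grayscale values in place of whole frames; the blend rate of 0.05 is an arbitrary illustrative choice.

```java
public class Main {
    public static void main(String[] args) {
        float background = 0;        // stored background estimate
        float rate = 0.05f;          // how quickly the background adapts to change
        int[] frames = {100, 100, 100, 100};
        for (int v : frames) {
            // blend a little of the new frame into the stored average;
            // after many identical frames this converges toward 100
            background = (1 - rate) * background + rate * v;
        }
        System.out.println(Math.round(background * 100) / 100.0);
    }
}
```

A higher rate makes the background adapt faster but also absorbs slow-moving people into it sooner.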
Install the video library:
- Sketch -> Import Library -> Add Library
- In the filter box enter “video”
- Select the library called “Video – GStreamer-based video library for Processing”
- To get your camera name first run this program:
import processing.video.*;

void setup() {
  String[] cameras = Capture.list();
  if (cameras == null) {
    println("Could not find list of available cameras, try the default");
    exit();
  } else if (cameras.length == 0) {
    println("There are no cameras available for capture.");
  } else {
    println("Available cameras:");
    printArray(cameras);
  }
}
Test that the camera is working:
import processing.video.*;

Capture video;

void setup() {
  size(640, 480); // Change size to 320 x 240 if too slow at 640 x 480
  video = new Capture(this, width, height, "Integrated Camera: Integrated C");
  video.start();
}

void draw() {
  if (video.available()) {
    video.read();
    image(video, 0, 0, width, height); // Draw the webcam video onto the screen
  }
}
With this information, you can add the camera name to the call to the Capture() constructor in e.g. Golan Levin’s Code Listings 1, 2, 3, and 4 (also remove the final argument, 24, and add the call to video.start(), as in the example above):
video = new Capture(this, width, height, "your camera name here");
video.start();
Install the examples from Daniel Shiffman’s book “Learning Processing”
- Sketch -> Import Library -> Add Library -> Examples tab -> Learning Processing
- Examples will be in File -> Examples -> Contributed Examples -> Learning Processing -> Ch. 16 Video
- As before, add your camera name to the constructor:
void setup() {
  video = new Capture(this, 320, 240, "Integrated Camera: Integrated C");
  video.start();
}
captureEvent() vs. video.available()
- Remember the importance of setting up the environment: “Background subtraction and brightness thresholding, for example, can fail if the people in the scene are too close in color or brightness to their surroundings. For these algorithms to work well, it is greatly beneficial to prepare physical circumstances which naturally emphasize the contrast between people and their environments. This can be achieved with lighting situations that silhouette the people, for example, or through the use of specially-colored costumes. The frame-differencing technique, likewise, fails to detect people if they are stationary.”
- Good examples are:
- Exercise 16-6: Greenscreen
- Exercise 16-7: Track Motion
- Example 16-11: Color Track
- Example 16-12: Background Removal
- Example 16-13: Motion Pixels
- Example 16-14: Motion Sensor
- Let’s use this last example to do something with what we’ve located. Let’s draw a line at the highest point where motion is detected.
- First add a new variable at the top of the draw() function:
// Record the Y coordinate of the highest moving pixel.
// Initialize to the bottom of the frame (the largest value of Y).
int highest = video.height-1;
- In the for() loop, when motion is detected, see if this motion is higher than the highest recorded motion:
if (diff > threshold) { // If motion, display black
  pixels[loc] = color(0);
  // If this is the highest pixel, record its position
  if (y < highest) {
    highest = y;
  }
}
- At the end of the draw() function, but before the call to updatePixels(), draw a line at the highest location, and then reset the highest variable for the next frame:
// Draw a line at the highest pixel
lineAt(highest);
// Reset for next frame
highest = video.height-1;
- Finally the function that draws the line:
void lineAt(int y) {
  // We are writing into the canvas pixels array, so index with the
  // canvas width (here the canvas and the video are the same size)
  for (int x = 0; x < width; x++) {
    pixels[x + y*width] = color(255, 0, 0);
  }
}
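The whole walkthrough can be sketched in plain Java with two tiny synthetic grayscale frames standing in for consecutive webcam frames. “Highest” here means the smallest y, i.e. closest to the top of the frame; the threshold value is an arbitrary illustrative choice.

```java
public class Main {
    public static void main(String[] args) {
        int w = 4, h = 4, threshold = 30;
        int[] prev = new int[w * h];                 // previous frame: all black
        int[] curr = new int[w * h];                 // current frame with two changed pixels
        curr[2 + 1 * w] = 200;                       // motion at (x=2, y=1)
        curr[0 + 3 * w] = 200;                       // motion at (x=0, y=3)

        int highest = h - 1;                         // initialize to the bottom row
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                int loc = x + y * w;
                int diff = Math.abs(curr[loc] - prev[loc]);  // frame difference
                if (diff > threshold && y < highest) {
                    highest = y;                     // new highest moving pixel
                }
            }
        }
        System.out.println(highest);                 // the row where lineAt() would draw
    }
}
```

In the real sketch this value would be passed to lineAt() each frame and then reset to the bottom row before the next frame is examined.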
Homework due Tuesday May 5
- Please send me the following information as soon as possible, whether the video library worked for you or not:
- Windows version or macOS version (click the Apple logo in the top left corner and select “About This Mac”)
- Try to run the ColorTrack example (File -> Examples -> Contributed Examples -> Learning Processing 2nd Edition -> Chp16_video -> example_16_11_ColorTrack)
- If you get any error messages, email me the error message
- Whether or not it works, email me, and if it doesn’t work, explain what happens
- Final projects draft
- New folder in Github called “Final Project”
- Description
- Diagrams or pictures if relevant
- Your final project can be anything using Processing, using what you’ve learned in class or anything you find on the internet. You may use any of your assignments as a starting point and then add to that.