ECE 6123 – Image and Video Processing (Spring 2020)
Course Description:
This course introduces the fundamentals of image and video processing, including color image capture and representation; contrast enhancement; spatial-domain filtering; the two-dimensional (2D) Fourier transform and the frequency-domain interpretation of linear convolution; image sampling and resizing; multi-resolution image representation using pyramid and wavelet transforms; feature point detection and feature correspondence; geometric transformation, image registration, and image stitching; selected advanced image processing techniques (sparsity-based image recovery); video motion characterization and estimation; video stabilization and panoramic view generation; basic image compression techniques and standards (JPEG and JPEG2000); video compression using adaptive spatial and temporal prediction; video coding standards (MPEGx/H.26x); and stereo and multi-view image and video processing (depth from disparity, disparity estimation, view synthesis, compression). Basics of deep learning for image processing will also be introduced. Students will implement selected algorithms in Python. Prior experience with Python and deep learning is not required; you will learn these as the course progresses. A class project, preferably in teams of 2 to 3 people, is required.
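As a taste of the kind of algorithm students implement (histogram equalization is among the first), here is a minimal NumPy sketch, not taken from the course materials, assuming an 8-bit grayscale image stored as a NumPy array:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram-equalize an 8-bit grayscale image given as a uint8 array."""
    hist = np.bincount(img.ravel(), minlength=256)   # per-intensity counts
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                   # normalize CDF to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)       # old intensity -> new intensity
    return lut[img]                                  # apply mapping per pixel

# Tiny synthetic low-contrast image: intensities cluster in 100-103
img = np.array([[100, 100, 101, 101],
                [102, 102, 103, 103],
                [100, 101, 102, 103],
                [100, 101, 102, 103]], dtype=np.uint8)
out = equalize_histogram(img)   # intensities now spread across 64-255
```

The mapping is the normalized cumulative histogram scaled to the output range, which is the standard discrete form of histogram equalization; production code would typically use a library routine instead.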
Prerequisites:
Graduate status. ECE-GY 6113 and ECE-GY 6303 are preferred but not required. A good background in linear algebra is expected. Undergraduate students must have completed EE-UY 3054 (Signals and Systems), EE-UY 2233 (Probability), and linear algebra.
Instructor:
Professor Yao Wang, 370 Jay Street, Rm 957, (646)-997-3469, Email: yaowang@nyu.edu. Homepage
Teaching Assistants:
Jacky Yuan, Email: zy740@nyu.edu
Bolin Liu, Email: bl2684@nyu.edu
Course Schedule:
Thursday 3:20 PM – 5:50 PM, JB 473, Brooklyn.
Office Hour:
Yao Wang: Mon 4-5 PM, Wed 4-5 PM, or by appointment via email.
Jacky Yuan: Thurs 1-3PM (370 Jay Street, Rm 938, or Rm966)
Bolin Liu: Tues 1-3PM and Fri 2-4PM (370 Jay Street, Rm 938)
Text Book/References:
- Richard Szeliski, Computer Vision: Algorithms and Applications. (Available online: "Link") (Covers most of the material, except sparsity-based image processing and image and video coding)
- (Optional) Y. Wang, J. Ostermann, and Y.-Q. Zhang, Video Processing and Communications. Prentice Hall, 2002. "Link" (Reference for image and video coding, motion estimation, and stereo)
- (Optional) R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Prentice Hall, 2008. ISBN 9780131687288. "Link" (Good reference for basic image processing, wavelet transforms, and image coding)
Grading Policy:
Exam: 40%; Final project: 30%; Programming assignments: 20%; Written assignments: 10%. The project grade depends on the project proposal, project presentation, final report, and technical accomplishment.
Homework Policy:
Written homework will be assigned after each lecture and is due at the beginning of the following lecture. Programming assignments are due as posted and must be submitted through NYU Classes. Each assignment counts for 10 points. Late submissions of written and programming assignments will be accepted up to 3 days late, with a 2-point deduction for each day. Students may work in teams, but each student must submit their own solutions.
Project Guideline: Link
Suggested Project List: Link (Updated 3/01/2020)
Sample Data:
Sample Images
Middlebury Stereo Image Database
Links to Resources (lecture notes and sample exams) in Previous Offerings:
- EL 5123 Image Processing
- EL 6123 Video Processing
- EL 6123 Image and Video Processing (S16)
- EL 6123 Image and Video Processing (S18)
- The coursera image processing course by Prof. Katsaggelos: Link
- The image processing course at Stanford: Link
- The computer vision course at U. Washington: Link
- Stanford course by Fei-Fei Li et al.: CS231n: Convolutional Neural Networks for Visual Recognition. Link to class site; Link to lecture videos
Other Useful Links
- Basics of Python and Its Application to Image Processing Through OpenCV: Link
- Example codes and images used in the above guide: Link
- OpenCV: an open-source package including many computer vision algorithms
- Numpy
- Scipy
- Matrix Reference Manual
- Codecademy: Python
- Anaconda
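Since the tools listed above (NumPy, SciPy, OpenCV) all represent images as NumPy arrays, much of the coursework reduces to array manipulation. A small illustrative sketch, using a synthetic gradient in place of an image file loaded with OpenCV:

```python
import numpy as np

# Images load as NumPy arrays: H x W for grayscale, H x W x 3 for color.
# A synthetic 8-bit horizontal gradient stands in for a cv2.imread(...) result.
img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))   # shape (64, 256)

crop = img[:, 64:192]                    # slicing crops a region (a view, no copy)
flipped = img[:, ::-1]                   # horizontal flip via a negative stride
norm = img.astype(np.float64) / 255.0    # scale to [0, 1] before filtering math

print(img.shape, crop.shape, norm.max())
```

Keeping intermediate computations in floating point and converting back to uint8 only for display or saving avoids silent overflow in 8-bit arithmetic.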
Tentative Course Schedule
- Week 1 (1/30): Course introduction. Part 1: Image Formation and Representation: 3D to 2D projection, photometric image formation, trichromatic color representation, video format (SD, HD, UHD, HDR). Lecture note (Updated 1/27/2019). Part 2: Contrast enhancement (concept of histogram, nonlinear mapping, histogram equalization). Lecture note (Updated 1/27/2019)
- Tutorial on python (1/31, 10:30 AM-12PM)
- Computer assignment 1 (Learning Python and histogram equalization) (Due 2/6)
- Week 2 (2/6): Review of 1D Fourier transform and convolution. Concept of spatial frequency. Continuous and Discrete Space 2D Fourier transform. 2D convolution and its interpretation in frequency domain. Implementation of 2D convolution. Separable filters. Frequency response. Linear filtering (2D convolution) for noise removal, image sharpening, and edge detection. Gaussian filters, DOG and LOG filters as image gradient operators. Lecture note (Updated 2/6/2020).
- Computer assignment 2 (2D filtering) (Due 2/20)
- Week 3 (2/13): Image sampling and resizing. Antialiasing and interpolation filters. Spatial and temporal resolutions of human visual systems. Lecture note on ImageSampling (updated 2/14/19). Reference materials (updated 2/15/19): Selesnick_MultirateSystems, Selesnick_SamplingTheorem
- Week 4 (2/20): Image representation using orthonormal transform. DCT and KLT; multi-resolution representation: Pyramid and Wavelet Transforms. Transform-based image coding. Lecture note on transform (updated 2/20/2020), Lecture note on Wavelet (updated 2/20/2020).
- Programming assignment 3 (Pyramids and wavelet transforms) (Due 3/12)
- Week 5 (2/27): Sparse-representation-based image recovery. General formulation of image enhancement as an optimization problem. Sparsity for regularization. L0 vs. L1 vs. L2 prior. Optimization techniques for solving L2-L1 problems (soft thresholding, ISTA, ADMM). Applications in denoising, deblurring, inpainting, compressive sensing, and super-resolution. Lecture note (updated 2/27/2020).
- Week 6 (3/5): Overview of machine learning, neural networks, convolutional networks. Convolutional Network for classification. Training and validation. Lecture note (updated 3/5/2020)
- Week 7 (3/12): Convolutional networks for image processing, including segmentation, denoising, and object detection. Lecture notes: part2, part3 (updated 3/12/2020)
- Tutorial on using PyTorch and Google Cloud Platform for deep learning (3/13, 2-4pm, via Zoom)
- Programming assignment 4 (Training a U-Net for image segmentation) (Due 3/26)
- 3/16–3/22: Spring Recess
- Week 8 (3/26): Project proposal due. (Prepare the proposal following the format described in the project guideline. By this point you should have read a few reference papers and prepared a detailed milestone chart and a partition of project roles among the team members.)
- Week 8 (3/26): Feature detection (Harris corner, scale space, SIFT), feature descriptors (SIFT). Bag of Visual Word representation for image classification. Lecture note (updated 3/26/2020)
- Week 9 (4/2): Geometric mapping (affine, homography), Feature based camera motion estimation (RANSAC). Image warping. Image registration. Panoramic view stitching. Lecture note (updated 4/1/2020)
- Programming assignment 5 (Due 4/16): Stitching a panoramic picture (Feature detection, finding global mapping, warping, combining).
- Week 10 (4/9): Dense motion/displacement estimation: optical flow equation, optical flow estimation (Lucas-Kanade method, KLT tracker); block matching, multi-resolution estimation. Deformable registration (medical applications). Deep learning approach. Lecture note. (updated 4/09/2020)
- Week 11 (4/16): Moving object detection (background/foreground separation): Robust PCA (low rank + sparse decomposition). Global camera motion estimation from optical flows. Video stabilization. Video scene change detection. Lecture note. (updated 4/15/2020)
- Week 12 (4/23): Exam (including all material in Weeks 1-11)
- Week 13 (4/30): Video Coding. Part 1: block-based motion compensated prediction and interpolation, adaptive spatial prediction, block-based hybrid video coding, rate-distortion optimized mode selection, rate control, Group of pictures (GoP) structure, tradeoff between coding efficiency, delay, and complexity. Lecture note. (updated 4/29/2020) Part 2: Overview of video coding standards (AVC/H.264, HEVC/H.265); Layered video coding: general concept and H.264/SVC. Multiview video compression. Lecture note. (updated 4/29/2020)
- Programming assignment 6 (Due 5/21): Video Coding
- Week 14 (5/7): Stereo and multiview video: depth from disparity, disparity estimation, view synthesis. Multiview video compression. Depth camera (Kinect). 360 video camera and view stitching. Lecture note. (updated 5/2/2019)
- Week 15 (5/14): Project Presentation.
- 5/18: Project Report and all other material must be uploaded.
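Several of the weekly topics above lend themselves to short prototypes; for instance, Week 2's separable filtering means a 2D Gaussian blur can be computed as two 1D convolutions. A minimal NumPy sketch, not part of the official course code, with the kernel radius and sigma chosen only for illustration:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Sampled 1D Gaussian, normalized so the taps sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def separable_blur(img, sigma=1.0, radius=2):
    """2D Gaussian smoothing as two 1D passes: along rows, then columns.

    Exploits the separability of the Gaussian: cost drops from O(K^2)
    to O(2K) multiplies per pixel for a (2*radius+1)-tap kernel.
    """
    k = gaussian_kernel_1d(sigma, radius)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)

# Impulse response check: blurring a unit impulse reproduces the 2D kernel
img = np.zeros((9, 9))
img[4, 4] = 1.0
out = separable_blur(img)   # energy-preserving: out sums to ~1
```

Library routines such as scipy.ndimage.gaussian_filter do the same thing with proper border handling; this sketch uses zero-padding implicitly via `mode='same'`.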
Sample Exams:
- S15_midterm_w_solution
- S15 Final Exam solution
- S16_midterm solution
- S16 final exam solution
- S17 exam solution (updated 4/17/2019)
- S18 exam solution (updated 4/19/2019)
- S19 exam solution (updated 4/13/2020)
Policy on Academic Dishonesty:
The School of Engineering encourages academic excellence in an environment that promotes honesty, integrity, and fairness. Please see the policy on academic dishonesty: Link