ECE 6123 – Image and Video Processing (Spring 2023)
Course Description: This course introduces the fundamentals of image and video processing, including color image capture and representation; contrast enhancement; spatial domain filtering; the two-dimensional (2D) Fourier transform and the frequency domain interpretation of linear convolution; image sampling and resizing; multi-resolution image representation using pyramid and wavelet transforms; feature point detection and feature correspondence; geometric transformation, image registration, and image stitching; video motion characterization and estimation; video stabilization and panoramic view generation; image representation using orthogonal transforms; sparsity-based image recovery; basic image compression techniques and standards (JPEG and JPEG2000); video compression using adaptive spatial and temporal prediction; video coding standards (MPEG-x/H.26x); and stereo and multi-view image and video processing (depth from disparity, disparity estimation, view synthesis, compression). Basics of deep learning for image processing and computer vision will also be introduced. Students will learn to implement selected algorithms in Python. Prior experience with Python and deep learning is not required; you will learn as the course progresses. A class project, preferably in teams of 2 to 3 people, is required.
Prerequisites: Graduate status. ECE-GY 6113 and ECE-GY 6303 are preferred but not required. Students should have a good background in linear algebra. Undergraduate students must have completed EE-UY 3054 Signals and Systems, EE-UY 2233 Probability, and linear algebra.
Instructor: Professor Yao Wang, 370 Jay Street, Rm 957, (646) 997-3469, Email: yaowang at nyu.edu. Homepage. Office hours: Mon. 5:00-6:00 PM, Wed. 5:00-6:00 PM (online).
Teaching Assistants: Nikola Janjusevic, Email: npj226 at nyu.edu, Office hour: Thurs. 12:30-1:30PM (370 Jay Street, Office 922-E); Vara Lakshmi Bayanagari, Email: vb2183 at nyu.edu, Office hour: Tues. 11AM-12PM (370 Jay Street, Office 966); Samyak Rawlekar, Email: skr2369 at nyu.edu, Office hour: Fri. 3-4PM (370 Jay Street, Office 966). Office hour Zoom links available on Brightspace.
Course Schedule: In person: Thursday 2:00 PM – 4:30 PM, JAB 475, Brooklyn.
Text Book/References:
- Richard Szeliski, Computer Vision: Algorithms and Applications, 2nd Edition (Sept. 30, 2021 version). (Available online: "Link") (Covers most of the material, except sparsity-based image processing and image and video coding.)
- (Optional) Y. Wang, J. Ostermann, and Y.-Q. Zhang, Video Processing and Communications, Prentice Hall, 2002. "Link" (Reference for Fourier transforms, image and video coding, motion estimation, and stereo.)
- (Optional) R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd Edition, Prentice Hall, 2008. ISBN 9780131687288. "Link" (Good reference for basic image processing, wavelet transforms, and image coding.)
Course Structure: The class will consist of weekly lectures, weekly written homework assignments (not graded; solutions will be provided), biweekly short quizzes (based on the homework assignments), computer assignments, and a team project (2-3 people per team). There will be two optional tutorials outside class time: one introducing Python programming, the other introducing PyTorch and the Google Cloud Platform.
Grading: Quizzes: 40%, Programming assignments: 30%, Project: 30%. The project grade depends on the project proposal (5%), midterm project report (5%), project presentation (10%), final report (5%), and technical accomplishment (5%).
Attendance: Students are expected to attend all lectures and quizzes in-person.
Homework: Written homework will be assigned after each lecture but not graded, and solutions will be provided. Programming assignments will be due as posted. Each assignment counts for 10 points. Late submission of programming assignments will be accepted up to 3 days after the deadline, with a 2-point deduction for each day. Students can work in teams, but you must submit your own solutions. Solutions to computer assignments will be posted 1 week after the due date. We will aim to complete the grading of each quiz and computer assignment within 1-2 weeks.
Quiz: A quiz will be held biweekly. Each quiz is 20 minutes long. The quiz problems will be similar to the written homework problems and/or the review questions in the lecture notes.
Project Guideline: Link
Suggested Project List: Link (Updated 1/16/2023. To be further updated.)
Sample Data: Sample Images; Middlebury Stereo Image Database
Links to Resources (lecture notes) in Previous Offerings:
- ECE-GY 6123 Image and Video Processing (S21)
- ECE-GY 6123 Image and Video Processing (S20)
- ECE-GY 6123 Image and Video Processing (S19)
- EL 5123 Image Processing
- EL 6123 Video Processing
- EL 6123 Image and Video Processing (S16)
- EL 6123 Image and Video Processing (S18)
- The Coursera image processing course by Prof. Katsaggelos: Link
- The image processing course at Stanford: Link
- The computer vision course at U. Washington: Link
- Stanford course by Fei-Fei Li et al.: CS231n: Convolutional Neural Networks for Visual Recognition. Link to class site; Link to lecture videos
Other Useful Links
- Basics of Python and Its Application to Image Processing Through OpenCV: Link
- Example codes and images used in the above guide: Link
- OpenCV: an open-source package including many computer vision algorithms
- Numpy
- Scipy
- Matrix Reference Manual
- Codecademy: Python
- Anaconda
Tentative Course Schedule (lecture notes may be updated shortly before the lecture date)
- Week 1 (1/26): Course introduction. Lecture note (Updated 1/26/2023) Part 1: Image Formation and Representation: 3D to 2D projection, photometric image formation, trichromatic color representation, video format (SD, HD, UHD, HDR). Lecture note (Updated 1/25/2022). Part 2: Contrast enhancement (concept of histogram, nonlinear mapping, histogram equalization). Lecture note (Updated 1/25/2022)
- Tutorial on python (1/27, 9:30 AM-11:00 AM). Materials (Updated 1/26/2021)
- Programming assignment 1: Learning Python and histogram equalization (Assignment 1/26, Due 2/9)
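For students new to Python, the sketch below shows histogram equalization with NumPy and OpenCV. It is illustrative only (not the official assignment template), and the filenames are placeholders.
```python
import cv2
import numpy as np

# Load a grayscale image (placeholder filename).
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Histogram and cumulative distribution function (CDF) of the gray levels.
hist = np.bincount(img.ravel(), minlength=256)
cdf = np.cumsum(hist) / img.size

# Histogram equalization: map each gray level through the scaled CDF.
equalized = np.round(255 * cdf[img]).astype(np.uint8)

cv2.imwrite("equalized.png", equalized)
```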
- Week 2 (2/2): Review of 1D Fourier transform and convolution. Concept of spatial frequency. Continuous and Discrete Space 2D Fourier transform. Lecture note: “FT.pdf” (updated 02/01/2023)
- Week 3 (2/9): Completion of week 2 lecture. 2D convolution and its interpretation in the frequency domain. Implementation of 2D convolution. Separable filters. Frequency response. Lecture note: "convolution.pdf" (updated 01/31/2023). Linear filtering (2D convolution) for noise removal, image sharpening, and edge detection. Gaussian filters, DoG and LoG filters as image gradient operators. Lecture note: "filtering_edge detection.pdf" (updated 02/01/2023)
- Programming assignment 2: 2D filtering (Assignment 2/9, Due 2/23)
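As an illustration of the separable filtering idea from Week 3 (not the assignment solution), the sketch below smooths an image with a 1D Gaussian applied along rows and then columns using SciPy; the test image is a random placeholder array.
```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel_1d(sigma, radius=None):
    """Sampled 1D Gaussian kernel, normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

img = np.random.rand(256, 256)   # placeholder for a grayscale image
g = gaussian_kernel_1d(sigma=2.0)

# Separable 2D Gaussian filtering: convolve rows, then columns.
smoothed = convolve2d(img, g[np.newaxis, :], mode="same", boundary="symm")
smoothed = convolve2d(smoothed, g[:, np.newaxis], mode="same", boundary="symm")
```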
- Week 3 (2/9): Submit a preliminary project proposal, including project team members and a brief description of the chosen project. Feel free to schedule a meeting with the instructor prior to the deadline to discuss your project ideas.
- 2/16: Quiz 1 (covering lectures 1, 2, 3)
- Week 4 (2/16): Image sampling and resizing. Antialiasing and interpolation filters. Spatial and temporal resolutions of the human visual system. Lecture note on ImageSampling (updated 2/19/22). Reference materials (updated 2/15/19): Selesnick_MultirateSystems, Selesnick_SamplingTheorem
- 2/23: Quiz 2 (covering lecture 4)
- Week 5 (2/23): Image representation using orthonormal transform and dictionary. DCT and KLT; DCT-based image coding (JPEG). Lecture note on transform coding (updated 2/22/2023).
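For intuition only (not part of the graded work), here is a small sketch of block-wise DCT followed by uniform quantization, the core operation in JPEG-style transform coding, assuming SciPy is available; the block and quantization step are placeholders.
```python
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_like_block(block, q=16):
    """8x8 DCT, uniform quantization, and reconstruction of one image block."""
    coeffs = dctn(block, norm="ortho")            # 2D DCT-II
    quantized = np.round(coeffs / q)              # uniform quantizer with step q
    return idctn(quantized * q, norm="ortho")     # dequantize and invert

block = np.random.rand(8, 8) * 255                # placeholder 8x8 block
reconstructed = jpeg_like_block(block)
print(np.abs(block - reconstructed).mean())       # average reconstruction error
```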
- Week 5 (2/23): Submit the final project proposal, including a list of project team members, an updated project description, a literature survey, your major tasks along with a completion schedule (including who is responsible for each task), and a bibliography. You should prepare the proposal following the format described in the project guideline.
- Week 6 (3/2): Multi-resolution representation: Pyramid and Wavelet Transforms. Wavelet-based image coding (JPEG2K). Lecture note on Wavelet (updated 3/02/2023).
- Programming assignment 3: Pyramids and wavelet transforms (Assignment 3/3, Due 3/23)
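A minimal sketch of a Laplacian pyramid built with OpenCV's pyrDown/pyrUp is shown below; it is illustrative only, and the assignment may specify different filters or a wavelet library.
```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Gaussian pyramid by repeated pyrDown; Laplacian levels are the
    differences between each Gaussian level and the upsampled next level."""
    img = img.astype(np.float32)          # keep negative difference values
    gaussian = [img]
    for _ in range(levels):
        gaussian.append(cv2.pyrDown(gaussian[-1]))
    laplacian = []
    for i in range(levels):
        size = (gaussian[i].shape[1], gaussian[i].shape[0])   # (width, height)
        up = cv2.pyrUp(gaussian[i + 1], dstsize=size)
        laplacian.append(gaussian[i] - up)
    laplacian.append(gaussian[-1])        # coarsest level stored as-is
    return laplacian
```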
- Week 7 (3/9): Sparse-representation-based image recovery. General formulation of image enhancement as an optimization problem. Sparsity for regularization. L0 vs. L1 vs. L2 priors. Optimization techniques for solving L2-L1 problems (soft thresholding, ISTA, ADMM). Applications in denoising, deblurring, inpainting, compressive sensing, and super-resolution. Lecture note (updated 2/27/2020).
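To make the soft-thresholding idea concrete, here is a short generic ISTA sketch for the L2-L1 problem min_x 0.5*||Ax - y||^2 + lam*||x||_1; it is an illustration (the step size is set to 1 over the Lipschitz constant of the gradient), not code from the lecture notes.
```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the proximal map of t * ||x||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    """ISTA for min_x 0.5*||Ax - y||_2^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # gradient of the quadratic term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Tiny demo: recover a sparse vector from noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
y = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.round(ista(A, y, lam=0.1)[:8], 2))
```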
- 3/9: Quiz 3 (covering lectures 5 and 6)
- Spring break (3/12-3/18)
- Week 8 (3/23): Overview of machine learning, neural networks, and convolutional networks. Convolutional networks for classification. Training and validation. Lecture note on CNN (part 1) (updated 3/24/2023)
- Week 9 (3/30): Convolutional networks for image processing, including segmentation, denoising, and object detection. Lecture note on CNN (part 2) (updated 3/30/2023)
- Tutorial on using PyTorch and Google Cloud Platform for deep learning (3/24, 9:30AM-11:00AM) Materials (updated 3/25/2023)
- Programming assignment 4: Training a U-Net for image segmentation (Assignment 3/30, Due 4/14)
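The graded assignment will come with its own starter code; purely as an illustration of a PyTorch segmentation training loop, the sketch below uses a single convolution layer as a stand-in for the U-Net and random tensors as data.
```python
import torch
import torch.nn as nn

# Stand-in for a U-Net so the loop runs end to end (the real assignment
# would replace this with an encoder-decoder network with skip connections).
model = nn.Conv2d(3, 2, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()                 # per-pixel class labels

# Dummy batch: 4 RGB images and their integer label maps (2 classes).
images = torch.randn(4, 3, 128, 128)
labels = torch.randint(0, 2, (4, 128, 128))

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images)                        # (N, num_classes, H, W)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```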
- 4/6: Quiz 4 (covering lectures 8 and 9)
- Week 10 (4/6): Feature detection (Harris corner, scale space, SIFT), feature descriptors (SIFT). Bag of Visual Word representation for image classification. Lecture note on Features (updated 3/27/2022)
- Week 10 (4/6): Submit the midterm project report, including the updated project description and literature survey, preliminary results, remaining work along with the completion schedule, and an updated bibliography. You should prepare the report following the format described in the project guideline.
- Week 11 (4/13): Geometric mapping (affine, homography), Feature based camera motion estimation (RANSAC). Image warping. Image registration. Panoramic view stitching. Video stabilization. Lecture note (updated 4/7/2022)
- Programming assignment 5: Stitching a panoramic picture (Feature detection, finding global mapping, warping, combining). (Assignment 4/13, Due 4/27)
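For orientation (not the assignment solution), here is a compressed OpenCV sketch of the pipeline named in the assignment: SIFT features, ratio-test matching, RANSAC homography, and a naive warp-and-paste. Filenames and the canvas size are placeholders, and blending is omitted.
```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")                     # placeholder filenames
img2 = cv2.imread("right.jpg")

# Detect SIFT keypoints and descriptors on grayscale versions.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

# Match descriptors and keep good matches via Lowe's ratio test.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Robustly estimate the homography (RANSAC) mapping img2 into img1's frame.
src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the second image and paste the first on top (no blending).
h, w = img1.shape[:2]
pano = cv2.warpPerspective(img2, H, (2 * w, h))
pano[:, :w] = img1
cv2.imwrite("panorama.jpg", pano)
```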
- 4/20: Quiz 5 (covering lectures 10 and 11)
- Week 12 (4/20): Dense motion/displacement estimation: optical flow equation, optical flow estimation (Lucas-Kanade method, KLT tracker); block matching, multi-resolution estimation. Deformable registration (medical applications). Deep learning approach. Lecture note. (updated 04/19/2023)
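As a quick way to experiment with dense flow before implementing the lecture's methods from scratch, here is a sketch using OpenCV's Farneback estimator (an illustration only; the lecture covers Lucas-Kanade and block matching in detail, and the frame filenames are placeholders).
```python
import cv2
import numpy as np

# Two consecutive grayscale frames (placeholder filenames).
prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow (Farneback): flow[y, x] = (dx, dy) for every pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)

magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean displacement (pixels):", magnitude.mean())
```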
- Week 13 (4/27): Video Coding Part 1: block-based motion-compensated prediction and interpolation, adaptive spatial prediction, block-based hybrid video coding, rate-distortion optimized mode selection, rate control, Group of pictures (GoP) structure, the tradeoff between coding efficiency, delay, and complexity. Lecture note (updated 4/26/2023)
- Programming assignment 6: Video Coding (Assignment 4/27, Due 5/11)
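Motion-compensated prediction rests on block matching; below is a compact exhaustive-search sketch (illustrative only; the assignment handout defines the exact block size, search range, and error metric).
```python
import numpy as np

def block_matching(ref, cur, block=16, search=8):
    """For each block of the current frame, find the displacement into the
    reference frame minimizing the sum of absolute differences (SAD)."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = int(np.abs(target - cand).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs   # motion vectors in (dy, dx) order, one per block
```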
- Week 14 (5/4): Stereo and multiview video: depth from disparity, disparity estimation, view synthesis. Multiview video compression. Depth camera (Kinect). 360 video camera and view stitching. Lecture note. (updated 5/03/2022); Video Coding Part 2: Overview of video coding standards (AVC/H.264, HEVC/H.265); Layered video coding: general concept and H.264/SVC. Lecture note (updated 5/03/2023)
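As a hands-on complement to the depth-from-disparity discussion, here is a short OpenCV sketch that computes a disparity map from a rectified stereo pair with block matching (illustrative only; filenames are placeholders and the parameters are uncalibrated guesses).
```python
import cv2

# Rectified left/right grayscale images (placeholder filenames).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: disparity is returned in fixed point (scaled by 16).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0

# Depth is inversely proportional to disparity: Z = f * B / d
# (f = focal length in pixels, B = baseline), for pixels with valid disparity.
cv2.imwrite("disparity.png", cv2.normalize(disparity, None, 0, 255,
                                           cv2.NORM_MINMAX).astype("uint8"))
```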
- 5/4: Quiz 6: Video coding (covering lectures 12 and 13)
- Week 15 (5/11-5/12): Project Presentation.
- 5/16: The project report and all other material must be uploaded. The final project report should include an abstract, a literature survey, your accomplishments, and a bibliography, updated from the midterm report. You should prepare the report following the format described in the project guideline.
Sample Exams:
- S15_midterm_w_solution
- S15 Final Exam solution
- S16_midterm solution
- S16 final exam solution
- S17 exam solution (updated 4/17/2019)
- S18 exam solution (updated 4/19/2019)
- S19 exam solution (updated 4/13/2020)
- S20 exam solution (updated 4/17/2021)
Policy on Academic Integrity: The School of Engineering encourages academic excellence in an environment that promotes honesty, integrity, and fairness. Please see the policy on academic dishonesty: Link to NYU Tandon Policy, Link to NYU Policy.
Inclusion Statement: The NYU Tandon School values an inclusive and equitable environment for all our students. I hope to foster a sense of community in this class and consider it a place where individuals of all backgrounds, beliefs, ethnicities, national origins, gender identities, sexual orientations, religious and political affiliations, and abilities will be treated with respect. It is my intent that all students’ learning needs be addressed both in and out of class, and that the diversity that students bring to this class be viewed as a resource, strength and benefit. If this standard is not being upheld, please feel free to speak with me. Please visit this link for NYU Tandon’s effort in diversity and inclusion.
Moses Center Statement of Disability: If you are a student with a disability and would like to request accommodations, please contact New York University’s Moses Center for Students with Disabilities (CSD). You must be registered with CSD to receive accommodations. Information about the Moses Center can be found at www.nyu.edu/csd. The Moses Center is located at 726 Broadway on the 3rd floor.