EL-GY 8233 Optimal Control Theory

Course description: This course is appropriate for second-year graduate students with basic knowledge of linear systems, stochastic processes, and linear programming. Optimal control theory is widely used in many branches of science and engineering, including economics, financial engineering, systems engineering, and aerospace engineering. The course first reviews the basics of optimization theory and then focuses on continuous-time deterministic optimal control and discrete-time stochastic control problems. Topics include the theoretical and algorithmic foundations of optimal control theory: calculus of variations, the maximum principle, and dynamic programming. In addition, the course introduces linear-quadratic control design, Kalman filtering, differential games, and H-infinity optimal control design problems.

Prerequisites: The course is offered as a second-year graduate-level course. Basic knowledge of linear systems, optimization, stochastic processes, and scientific computing is assumed.

Grading:

Homework: 25%

Midterm Exam: 25%

Final Exam: 25%

Project: 25%

Required Text:

[GS] I. M. Gelfand and R. A. Silverman, Calculus of Variations, Dover, 2000.

[DL] D. Luenberger, Optimization by Vector Space Methods, Wiley, 1997.

[AF] M. Athans and P. L. Falb, Optimal Control: An Introduction to the Theory and Its Applications, Dover, 2007.

[DB] D. P. Bertsekas, Dynamic Programming and Stochastic Control, Academic Press, 2012.

Supplementary Text:

[DBb] D. Bertsekas, Dynamic Programming and Optimal Control, Vol. 1, Athena Scientific, 2007.

[MI] M. D. Intriligator, Mathematical Optimization and Economic Theory, SIAM Classics in Applied Mathematics, 2002.

[DLi] D. Liberzon, Calculus of Variations and Optimal Control Theory: A Concise Introduction, Princeton University Press, 2012.

[SZ] S. Zlobec, Stable Parametric Programming, Springer, 2001.

Additional References:

[ST] S. P. Sethi and G. L. Thompson, Optimal Control Theory: Applications to Management Science and Economics, Springer, 2006.

[FR] W. H. Fleming and R. W. Rishel, Deterministic and Stochastic Optimal Control, Springer, 2012.

[DBa] D. Bertsekas, Nonlinear Programming, Athena Scientific, Second Edition, 1999.

[DBc] D. Bertsekas, Convex Optimization Algorithms, Athena Scientific, 2015.

[AM] B. Anderson and J. B. Moore, Optimal Control: Linear Quadratic Methods, Dover, 1990.

[LY] D. Luenberger and Y. Ye, Linear and Nonlinear Programming, Springer, 2008.

[CZ] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, John Wiley & Sons Inc., 4th edition, 2013.

[JE] J. Engwerda, LQ Dynamic Optimization and Differential Games, John Wiley & Sons, 2005.

[DH] D. G. Hull, Optimal Control Theory for Applications, Springer, 2013.

[BB] T. Başar and P. Bernhard, H-infinity Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach, Springer, 2008.

[ED] E. Dockner, Differential Games in Economics and Management Science, Cambridge University Press, 2000.

[LE] L. C. Evans, An Introduction to Mathematical Optimal Control Theory, Lecture Notes.

[PV] P. Varaiya, Lecture Notes on Optimization, Lecture Notes.

Course Outline:

  1. Nonlinear optimization
  2. Convex optimization
  3. Vector space optimization
  4. Calculus of variations
  5. Dynamic programming
  6. Kalman filtering
  7. Stochastic control
  8. Maximum principle
  9. H-infinity control
  10. Linear-quadratic optimal control
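To give a concrete flavor of how topics 5 and 10 above connect, the sketch below solves a finite-horizon discrete-time linear-quadratic regulator by the backward Riccati recursion that dynamic programming produces. The system matrices (a double integrator) and horizon are made up for illustration; this is not course-provided code.

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, Qf, N):
    """Feedback gains K_0..K_{N-1} for x_{k+1} = A x_k + B u_k with
    cost x_N' Qf x_N + sum_k (x_k' Q x_k + u_k' R u_k), via the
    backward Riccati recursion of dynamic programming."""
    P = Qf
    gains = []
    for _ in range(N):
        # K_k = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # P_k = Q + A' P A - A' P B K  (value-function Hessian)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return list(reversed(gains)), P

# Hypothetical example: double-integrator dynamics
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Qf = 10.0 * np.eye(2)

gains, P0 = lqr_finite_horizon(A, B, Q, R, Qf, N=50)

# Simulate the closed loop u_k = -K_k x_k from an initial state
x = np.array([[5.0], [0.0]])
for K in gains:
    x = (A - B @ K) @ x
print(np.linalg.norm(x))  # state is driven toward the origin
```

The same backward recursion appears again in the Kalman-filtering unit (topic 6) in its dual, forward form, which is one reason these topics are taught together.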