The course aims to provide mathematical tools for the analysis and
design of feedback control systems.
The main topics are:
1) STABILITY OF FEEDBACK CONTROL SYSTEMS AND STABILIZATION
2) DIRECT SYNTHESIS METHODS
3) SAMPLED-DATA CONTROL SYSTEMS
4) CONTROL DESIGN VIA STATE-SPACE METHODS
5) OPTIMAL CONTROL
6) PERFORMANCE LIMITATIONS OF FEEDBACK CONTROL SYSTEMS AND
ROBUST CONTROL
Recommended Textbooks
Basso, Chisci, Falugi. Fondamenti di Automatica. De Agostini-UTET, 2007.
Bolzern, Scattolini, Schiavoni. Fondamenti di Controlli Automatici (seconda
edizione). McGraw-Hill, 2004.
Doyle, Francis, Tannenbaum. Feedback Control Theory. Maxwell Macmillan, 1992.
Goodwin, Graebe, Salgado. Control System Design. Prentice-Hall, 2001.
Isidori. Sistemi di Controllo (seconda edizione), Vol. I. Siderea, Roma, 1993.
Learning Objectives
To provide mathematical tools for the analysis and design of feedback
control systems, with application to practical engineering control
problems.
Prerequisites
Mathematical analysis.
Linear algebra.
Elements of control engineering.
Teaching Methods
Lectures and in-class exercises.
Type of Assessment
Written test and oral exam.
Course program
1. INTRODUCTION
Background on linear system theory. The internal model principle and its
applications.
2. STABILITY OF FEEDBACK CONTROL SYSTEMS AND STABILIZATION
Internal stability: definition, mathematical conditions and connection with
the Nyquist criterion. Characterization of stabilizing controllers: case of
stable process and general case.
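As a supplementary note on the stable-process case named above (a standard result, sketched here rather than quoted from the course material): when the process P is stable, the set of all internally stabilizing controllers admits the parametrization

```latex
C = \frac{Q}{1 - P\,Q}, \qquad Q \text{ stable and proper},
```

under which the complementary sensitivity becomes simply T = PQ. Since every closed-loop transfer function is affine in Q, direct synthesis reduces to choosing a stable Q.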
3. DIRECT SYNTHESIS TECHNIQUES
Choice of the closed-loop transfer function. Controller design meeting
desired control specifications. Hints on multiobjective direct synthesis.
4. SAMPLED-DATA SYSTEMS
Sampling and reconstruction of signals. Discretization of a continuous-time
linear time-invariant process. Analysis of the dynamic behaviour via
z-transform. Design of digital controllers. Controller discretization techniques.
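A minimal sketch of the discretization step named above, restricted for illustration to a scalar process under zero-order hold (the function name and the scalar setting are assumptions, not course material): for dx/dt = a*x + b*u sampled with period T, the exact discrete model is Ad = e^{aT}, Bd = b(e^{aT} - 1)/a.

```python
import math

def zoh_discretize(a: float, b: float, T: float):
    """Exact zero-order-hold discretization of the scalar system
    dx/dt = a*x + b*u sampled with period T.
    Returns (ad, bd) such that x[k+1] = ad*x[k] + bd*u[k]."""
    ad = math.exp(a * T)
    # When a = 0 (pure integrator) the formula degenerates to its limit b*T.
    bd = b * (ad - 1.0) / a if a != 0.0 else b * T
    return ad, bd

# Example: stable process dx/dt = -2x + u sampled every 0.1 s
ad, bd = zoh_discretize(-2.0, 1.0, 0.1)
print(ad, bd)  # ad = exp(-0.2) ~ 0.8187, bd ~ 0.0906
```

For matrix-valued systems the same construction applies with matrix exponentials (e.g. via `scipy.signal.cont2discrete`).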
5. REGULATOR PROBLEM
Background on state-space representations. Observability and
controllability. Static state feedback and eigenvalue/pole placement.
Asymptotic state observers. Regulator design. Stabilization via state-space
methods. Internal model regulator. Design of linear regulators for
nonlinear processes via process linearization.
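Eigenvalue placement by static state feedback, listed above, can be sketched in its simplest setting (a second-order system already in controllable canonical form; the function and example are illustrative assumptions, not course material): there the gain is read off directly as the difference between desired and open-loop characteristic-polynomial coefficients.

```python
def place_poles_companion(a0: float, a1: float, d0: float, d1: float):
    """State feedback u = -K x for the controllable canonical form
        x' = [[0, 1], [-a0, -a1]] x + [[0], [1]] u,
    whose open-loop characteristic polynomial is s^2 + a1*s + a0.
    K = [d0 - a0, d1 - a1] yields the closed-loop polynomial
    s^2 + d1*s + d0."""
    return [d0 - a0, d1 - a1]

# Double integrator (a0 = a1 = 0) with poles placed at s = -1 +/- j,
# i.e. desired polynomial s^2 + 2s + 2:
K = place_poles_companion(0.0, 0.0, 2.0, 2.0)
print(K)  # [2.0, 2.0]
```

For general (non-companion) state-space models one first transforms to this form via the controllability matrix, which is what general-purpose routines such as `control.place` automate.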
6. OPTIMAL CONTROL
Optimal control problem statement: dynamic programming and Hamilton-
Jacobi-Bellman equation. Linear Quadratic (LQ) regulator on a finite
control horizon for discrete-time systems. Infinite-horizon LQ regulators.
LQ regulators for continuous-time systems. Riccati equations, return-difference
identities, and spectral factorization equations for both
discrete-time and continuous-time infinite-horizon LQ regulator design.
Guaranteed stability margins of the continuous-time LQ static regulator.
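The infinite-horizon discrete-time LQ design listed above can be sketched in the scalar case (an illustrative simplification; the function name and numbers are assumptions): the Riccati difference equation is iterated to its fixed point, from which the optimal static gain follows.

```python
def dlqr_scalar(a: float, b: float, q: float, r: float, iters: int = 500):
    """Infinite-horizon LQ regulator for the scalar system
    x[k+1] = a*x[k] + b*u[k] with cost sum_k (q*x[k]^2 + r*u[k]^2).
    Iterates the Riccati difference equation
        p <- q + a^2*p - (a*b*p)^2 / (r + b^2*p)
    to its fixed point, then returns the optimal gain for u = -k*x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k = a * b * p / (r + b * b * p)
    return k, p

# Unstable process a = 1.2: the LQ gain stabilizes it, |a - b*k| < 1.
k, p = dlqr_scalar(1.2, 1.0, 1.0, 1.0)
print(k, abs(1.2 - k) < 1.0)
```

In the matrix case the same fixed point is computed by algebraic-Riccati-equation solvers (e.g. `scipy.linalg.solve_discrete_are`).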
7. PERFORMANCE LIMITATIONS ON FEEDBACK CONTROL SYSTEMS AND
ROBUST CONTROL
Influence of open-loop right half-plane poles and zeros on the control
system bandwidth and step response. Bode's theorem on the sensitivity
function. Robust stability: constraint on the infinity norm of the
complementary sensitivity function.
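The two constraints named above can be stated compactly (standard results, sketched here rather than quoted from the course material). For a loop transfer function L(s) of relative degree at least two with open-loop unstable poles p_i, Bode's sensitivity integral and the usual robust stability test under multiplicative uncertainty read

```latex
% Bode's sensitivity integral, S = 1/(1 + L):
\int_0^{\infty} \ln\lvert S(j\omega)\rvert \, d\omega
  = \pi \sum_i \operatorname{Re}(p_i)

% Robust stability for P_{\mathrm{true}} = P\,(1 + W\Delta),
% \lVert\Delta\rVert_\infty \le 1, with T = L/(1 + L):
\lVert W\,T \rVert_\infty < 1
```

The integral says that sensitivity reduction at some frequencies must be paid for by amplification at others (the "waterbed effect"), with an extra penalty for open-loop instability.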