Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles (Control Engineering)
by Draguna Vrabie / 2012 / English / PDF / 20.6 MB
Adaptive controllers and optimal controllers are two distinct
methods for the design of automatic control systems. Adaptive
controllers learn online, in real time, how to control systems
but do not yield optimal performance, whereas optimal controllers
must be designed offline using full knowledge of the system
dynamics. This book shows how approximate dynamic programming, a
reinforcement learning technique motivated by learning mechanisms
in biological systems, can be used to design a family of adaptive
optimal control algorithms that converge in real time to optimal
control solutions by measuring data along the system trajectories.
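To give a concrete feel for the policy-iteration idea behind such
algorithms, the short sketch below (not taken from the book) runs
the classical model-based Kleinman iteration for a linear-quadratic
problem: the current feedback gain is evaluated by solving a
Lyapunov equation and then improved, and the iterates converge to
the optimal gain. The example system, weights, and iteration count
are arbitrary illustrative choices; the book's integral
reinforcement learning methods carry out the same two steps online
from measured trajectory data, without requiring full knowledge of
the dynamics.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

    # Hypothetical second-order example system (illustration only).
    A = np.array([[0.0, 1.0],
                  [-1.0, -2.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)          # state weighting
    R = np.array([[1.0]])  # control weighting

    # Policy iteration (Kleinman): start from a stabilizing gain;
    # K = 0 is admissible here because A itself is Hurwitz.
    K = np.zeros((1, 2))
    for _ in range(10):
        Ak = A - B @ K
        # Policy evaluation: solve Ak^T P + P Ak = -(Q + K^T R K).
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        # Policy improvement: K <- R^{-1} B^T P.
        K = np.linalg.solve(R, B.T @ P)

    # The iterates converge to the stabilizing solution of the
    # algebraic Riccati equation for this problem.
    P_are = solve_continuous_are(A, B, Q, R)
    print(np.allclose(P, P_are, atol=1e-8))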
The book also describes how to use approximate dynamic
programming methods to solve multi-player differential games
online. Differential games have been shown to be important in
H-infinity robust control for disturbance rejection, and in
coordinating activities among multiple agents in networked teams.
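For context, and as a standard formulation rather than a quotation
from the book, the H-infinity disturbance-rejection problem can be
posed as a two-player zero-sum differential game in which the
control u minimizes, and the disturbance d maximizes, the quadratic
cost

    V(x_0) = \min_{u} \max_{d} \int_0^{\infty}
        \left( x^\top Q x + u^\top R u - \gamma^2 d^\top d \right) dt,

where gamma is the prescribed attenuation level; the saddle-point
policies of this game give a controller whose closed-loop L2 gain
from the disturbance to the performance output is at most gamma.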
The focus of this book is on continuous-time systems, whose
dynamical models can be derived directly from physical principles
based on Hamiltonian or Lagrangian dynamics. Simulation examples
are given throughout the book, and several of the methods described
do not require complete knowledge of the system dynamics.
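A textbook instance of such a derivation (chosen here purely for
illustration, not reproduced from the book) is a pendulum of mass m
and length l, whose Lagrangian and Euler-Lagrange equation

    L(\theta, \dot\theta) = \tfrac{1}{2} m l^2 \dot\theta^2
        + m g l \cos\theta, \qquad
    \frac{d}{dt}\frac{\partial L}{\partial \dot\theta}
        - \frac{\partial L}{\partial \theta} = \tau

yield m l^2 \ddot\theta + m g l \sin\theta = \tau; with x_1 = \theta
and x_2 = \dot\theta this becomes the continuous-time state-space
model \dot x_1 = x_2, \dot x_2 = -(g/l)\sin x_1 + \tau/(m l^2), the
kind of model the book's algorithms operate on.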