Hamiltonian (control theory)

The Hamiltonian is a function used to solve a problem of optimal control for a dynamical system. It can be understood as an instantaneous increment of the Lagrangian expression of the problem that is to be optimized over a certain time horizon. Inspired by, but distinct from, the Hamiltonian of classical mechanics, the Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian:

$$H(\mathbf{x}(t), \mathbf{u}(t), \boldsymbol{\lambda}(t), t) \equiv I(\mathbf{x}(t), \mathbf{u}(t), t) + \boldsymbol{\lambda}^{\mathsf{T}}(t)\, \mathbf{f}(\mathbf{x}(t), \mathbf{u}(t), t).$$

Consider a dynamical system of $n$ first-order differential equations

$$\dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t), \mathbf{u}(t), t),$$

where $\mathbf{x}(t) = \left[x_1(t), x_2(t), \ldots, x_n(t)\right]^{\mathsf{T}}$ denotes a vector of state variables, and $\mathbf{u}(t) = \left[u_1(t), u_2(t), \ldots, u_r(t)\right]^{\mathsf{T}}$ a vector of control variables. Once initial conditions $\mathbf{x}(t_0) = \mathbf{x}_0$ and controls $\mathbf{u}(t)$ are specified, a solution to the differential equations, called a trajectory $\mathbf{x}(t; \mathbf{x}_0, t_0)$, can be found. The problem of optimal control is to choose $\mathbf{u}(t)$ (from a compact and convex set $\mathcal{U} \subset \mathbb{R}^r$) so that $\mathbf{x}(t)$ maximizes or minimizes a certain objective function between an initial time $t = t_0$ and a terminal time $t = t_1$ (where $t_1$ may be infinity). Specifically, the goal is to optimize a performance index $I(\mathbf{x}(t), \mathbf{u}(t), t)$ at each point in time,

$$\max_{\mathbf{u}(t)} J = \int_{t_0}^{t_1} I(\mathbf{x}(t), \mathbf{u}(t), t)\, \mathrm{d}t,$$

subject to the equations of motion above.
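To make these conditions concrete, the sketch below works through a small, hypothetical example that is not drawn from the article: minimize $J = \int_0^1 \tfrac{1}{2}u(t)^2\,\mathrm{d}t$ subject to $\dot{x} = u$, $x(0) = 0$, $x(1) = 1$. (The formula above is written for maximization; for this minimization example the control minimizes $H$ pointwise.) Forming $H = \tfrac{1}{2}u^2 + \lambda u$, the condition $\partial H/\partial u = 0$ gives the candidate control $u^* = -\lambda$, the costate obeys $\dot{\lambda} = -\partial H/\partial x = 0$, and the unknown initial costate $\lambda(0)$ is found by shooting so that the trajectory meets the terminal condition. The boundary values, the root-finding bracket, and the function names `dynamics` and `shoot` are all illustrative assumptions.

```python
# A minimal sketch (assumed example problem, not from the article) of
# solving an optimal control problem via the conditions that
# Pontryagin's principle imposes on the Hamiltonian:
#
#   minimize  J = integral_0^1 (1/2) u(t)^2 dt
#   subject to  x' = u,  x(0) = 0,  x(1) = 1.
#
# Hamiltonian:  H(x, u, lam) = (1/2) u^2 + lam * u
#   dH/du = 0   =>  u* = -lam      (pointwise optimality of H)
#   lam' = -dH/dx = 0              (costate equation)
#   x'   = u* = -lam               (state equation)
#
# The resulting two-point boundary-value problem is solved by
# "shooting" on the unknown initial costate lam(0).

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root_scalar

def dynamics(t, y):
    x, lam = y
    u = -lam                 # u* minimizes H pointwise
    return [u, 0.0]          # x' = u,  lam' = -dH/dx = 0

def shoot(lam0):
    """Terminal-state error x(1) - 1 as a function of the guess lam(0)."""
    sol = solve_ivp(dynamics, (0.0, 1.0), [0.0, lam0], rtol=1e-9)
    return sol.y[0, -1] - 1.0

# Find lam(0) so the trajectory hits the terminal condition x(1) = 1.
lam0 = root_scalar(shoot, bracket=[-10.0, 10.0]).root
print(f"lambda(0)       = {lam0:.6f}")   # analytic answer: -1
print(f"optimal control = {-lam0:.6f}")  # analytic answer: u == 1
```

For this problem the costate is constant, so the analytic optimum is $u \equiv 1$ with $\lambda \equiv -1$, which the shooting step recovers; richer problems only change the `dynamics` function and the boundary conditions, not the structure of the method.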

[ "Pontryagin's minimum principle", "Control theory", "Mathematical optimization" ]