# Control theory

In engineering and mathematics, control theory deals with the behaviour of dynamical systems over time. The desired output of a system is called the reference variable. When one or more output variables of a system need to show a certain behaviour over time, a controller tries to manipulate the inputs of the system to realize this behaviour at the output of the system.

## An example

As an example, consider cruise control. In this case, the system is a car. The goal of cruise control is to keep the car at a constant speed. Here, the output variable of the system is the speed of the car. The primary means to control the speed of the car is the amount of fuel being fed into the engine.

A simple way to implement cruise control is to lock the position of the throttle the moment the driver engages cruise control. This is fine if the car is driving on perfectly flat terrain. On hilly terrain, the car will slow down when going uphill and accelerate when going downhill; something its driver may find highly undesirable.

This type of controller is called an open-loop controller because there is no direct connection between the output of the system and its input. One of the main disadvantages of this type of controller is the lack of sensitivity to the dynamics of the system under control.

The actual way that cruise control is implemented involves feedback control, whereby the speed is monitored and the amount of gas is increased if the car is driving slower than the intended speed and decreased if the car is driving faster. This feedback makes the car less sensitive to disturbances to the system, such as changes in slope of the ground or wind speed. This type of controller is called a closed-loop controller.
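The difference between the two approaches can be sketched numerically. The following is a minimal sketch with entirely hypothetical numbers (a toy speed model with linear drag, and an arbitrary proportional feedback gain), not a real cruise-control implementation:

```python
# Toy model (hypothetical numbers): dv/dt = throttle - drag*v - slope,
# integrated with forward Euler.  Compares a frozen-throttle open-loop
# controller against simple proportional feedback on the speed error.

def simulate(controller, slope, steps=200, dt=0.1, target=25.0):
    """Integrate the toy speed model and return the final speed."""
    drag = 0.1
    v = target           # cruise control engaged at the target speed
    for _ in range(steps):
        u = controller(v, target)
        v += dt * (u - drag * v - slope)
    return v

# Open loop: throttle frozen at the value that held 25 m/s on flat ground.
open_loop = lambda v, target: 0.1 * target

# Closed loop: proportional feedback on the speed error (gain is arbitrary).
closed_loop = lambda v, target: 0.1 * target + 2.0 * (target - v)

print(simulate(open_loop, slope=0.0))    # flat road: holds 25 exactly
print(simulate(open_loop, slope=0.5))    # uphill: speed sags well below 25
print(simulate(closed_loop, slope=0.5))  # uphill: feedback holds near 25
```

On the simulated hill the open-loop car settles well below the set speed, while the feedback controller holds it close to the target, illustrating the reduced sensitivity to disturbances described above.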

## History

The importance of this topic of study was recognized during the development of the airplane: the Wright Brothers made their first successful test flights on December 17, 1903, and by 1904 Flyer III was capable of fully controllable, stable flight for substantial periods. Control of the airplane was necessary for its safe and economically successful use.

By World War II, control theory was an important part of fire control, guidance, and cybernetics. The Space Race to the Moon depended on accurate control of the spacecraft. But control theory is not only useful in technological applications.

## Classical control theory

To avoid the problems of the open-loop controller, control theory introduces feedback. The output of the system $y(t)$ is fed back to the reference value $r(t)$. The controller C then takes the error $e$, the difference between the reference and the output, to change the inputs $u$ to the system under control P. This is shown in the figure. This kind of controller is a closed-loop controller or feedback controller. This is a so-called single-input-single-output (SISO) control system; systems in which one or more variables are vectors rather than scalars (MIMO, i.e. multi-input-multi-output, for example when two or more outputs are to be controlled) are also common. In such cases the variables are represented by vectors instead of simple scalar values.

If we assume the controller C and the plant P are linear and time-invariant (i.e. their transfer functions $C(s)$ and $P(s)$ do not depend on time), we can analyze the system above by using the Laplace transform on the variables. This gives the following relations:

$Y(s) = P(s) U(s)\,\!$
$U(s) = C(s) E(s)\,\!$
$E(s) = R(s) - Y(s)\,\!$

Solving for Y(s) in terms of R(s), we obtain:

$Y(s) = \left( \frac{P(s)C(s)}{1 + P(s)C(s)} \right) R(s)$

The term $\frac{P(s)C(s)}{1 + P(s)C(s)}$ is referred to as the transfer function of the system. If we can ensure $P(s)C(s) \gg 1$, i.e. the product has a very large norm for each value of $s$, then $Y(s)$ is approximately equal to $R(s)$. This means we control the output by simply setting the reference.
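This high-gain behaviour can be checked numerically. The sketch below uses a hypothetical first-order plant $P(s) = 1/(s+1)$ and a proportional controller $C(s) = K$ (both chosen only for illustration), evaluating the closed-loop gain at a sample point on the imaginary axis:

```python
# Sketch (hypothetical plant and controller): the closed-loop gain
# PC/(1 + PC) approaches 1 as the loop gain K grows, so Y(s) tracks R(s).

def closed_loop_gain(s, K):
    P = 1.0 / (s + 1.0)   # example plant P(s) = 1/(s+1)
    C = K                 # proportional controller C(s) = K
    return (P * C) / (1.0 + P * C)

s = 1j * 0.5  # evaluate on the imaginary axis, s = j*0.5
for K in (1.0, 10.0, 1000.0):
    print(K, abs(closed_loop_gain(s, K)))   # magnitude approaches 1
```

As $K$ increases, the closed-loop gain magnitude approaches 1, i.e. the output follows the reference.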

## Stability

Stability (in control theory) means that for any bounded input over any amount of time, the output will also be bounded. This is known as BIBO stability (see also Lyapunov stability). If a system is BIBO stable, then the output cannot "blow up" if the input remains finite. Mathematically, this means that for a linear continuous-time system to be stable, all of the poles of its transfer function must have negative real parts, i.e. lie in the left half of the complex plane (when the Laplace transform is used)

OR

for a linear discrete-time system, all of the poles of its transfer function must have a modulus less than one, i.e. lie inside the unit circle (when the Z-transform is used).

In the two cases, if the pole has, respectively, a real part strictly less than zero or a modulus strictly less than one, we speak of asymptotic stability: the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations are instead present if a pole has a real part exactly equal to zero (or a modulus exactly equal to one). If a simply stable system response neither decays nor grows over time and has no oscillations, it is referred to as marginally stable: in this case it has non-repeated poles at the origin of the complex plane (i.e. their real and imaginary components are both zero in the continuous-time case). Oscillations are present when poles with real part equal to zero also have an imaginary part not equal to zero.
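The continuous-time criterion can be sketched in code. For a hypothetical second-order transfer function $1/(s^2 + bs + c)$ (chosen only because its poles have a closed form), the poles are the roots of the denominator, and asymptotic stability requires both to have strictly negative real parts:

```python
# Sketch: continuous-time asymptotic stability from pole locations,
# for an example transfer function 1/(s^2 + b*s + c).
import cmath

def poles(b, c):
    """Roots of s^2 + b*s + c via the quadratic formula."""
    d = cmath.sqrt(b * b - 4 * c)
    return ((-b + d) / 2, (-b - d) / 2)

def is_asymptotically_stable(b, c):
    # Stable iff every pole lies strictly in the left half-plane.
    return all(p.real < 0 for p in poles(b, c))

print(is_asymptotically_stable(3, 2))   # poles at -1, -2: stable
print(is_asymptotically_stable(0, 1))   # poles at +/-j: not asymptotically stable
print(is_asymptotically_stable(-1, 2))  # poles with positive real part: unstable
```

The middle case, with poles exactly on the imaginary axis, is the oscillatory boundary case described above.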

The difference between the two cases is not a contradiction. The Laplace transform is in Cartesian coordinates and the Z-transform is in circular coordinates, and it can be shown that

• the negative-real part in the Laplace domain maps onto the interior of the unit circle, and
• the positive-real part in the Laplace domain maps onto the exterior of the unit circle.
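One way to see this mapping (under the common sampled-data correspondence $z = e^{sT}$, with an arbitrary sampling period $T$; this correspondence is an assumption added here for illustration) is to exponentiate points from each half-plane:

```python
# Sketch: under z = exp(s*T), |z| = exp(Re(s)*T), so the left half of the
# s-plane lands inside the unit circle and the right half lands outside.
# T = 1.0 is an arbitrary sampling period chosen for this example.
import cmath

T = 1.0
for s in (-0.5 + 2j, 0.5 + 2j):
    z = cmath.exp(s * T)
    print(s, abs(z))   # |z| < 1 for Re(s) < 0, |z| > 1 for Re(s) > 0
```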

If the system in question has an impulse response of

$x[n] = 0.5^n u[n]$

and considering its Z-transform, it yields

$X(z) = \frac{1}{1 - 0.5z^{-1}}$

which has a pole at $z = 0.5$ (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle.


However, if the impulse response was

$x[n] = 1.5^n u[n]$

then the Z-transform is

$X(z) = \frac{1}{1 - 1.5z^{-1}}$

which has a pole at $z = 1.5$ and is not BIBO stable since the pole has a modulus strictly greater than one.
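The two impulse responses can also be compared directly: BIBO stability of these examples corresponds to $\sum_n |x[n]|$ converging, which it does for the pole at $0.5$ but not for the pole at $1.5$:

```python
# Sketch: partial sums of |x[n]| for x[n] = p^n u[n].  For |p| < 1 the sum
# converges to 1/(1-|p|); for |p| > 1 it grows without bound.

def abs_sum(pole, terms=100):
    """Partial sum of |pole|^n for n = 0 .. terms-1."""
    return sum(abs(pole) ** n for n in range(terms))

print(abs_sum(0.5))   # converges toward 1/(1 - 0.5) = 2
print(abs_sum(1.5))   # keeps growing as more terms are added
```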


## Controllability and observability

See controllability and observability.

$Y(s) = P(s) U(s)\,\!$ (1)

$U(s) = C(s) E(s)\,\!$ (2)

$E(s) = R(s) - Y(s)\,\!$ (3)

Substituting (2) into (1):

$Y = P C E\,\!$ (4)

Substituting (3) into (4):

$Y = P C (R - Y)\,\!$

Expanding out $(R - Y)$:

$Y = P C R - P C Y\,\!$

Moving $P C Y$ to the left-hand side:

$Y + P C Y = P C R\,\!$

Consolidating the common term $Y$:

$Y (1 + P C) = P C R\,\!$

Isolating the term $Y$:

$Y = \frac{P C R}{1 + P C}$

$Y = \frac{P C}{1 + P C} R$ (5)
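The algebra above can be spot-checked numerically: at any fixed $s$, pick arbitrary values for $P$, $C$, and $R$ (the ones below are arbitrary complex test values, nothing more), and the $Y$ given by equation (5) should satisfy the feedback relation $Y = PC(R - Y)$:

```python
# Sketch: numeric spot-check of the closed-loop derivation at one point.
P, C, R = 2.0 + 1.0j, 0.5, 1.0 + 0.25j   # arbitrary complex test values
Y = (P * C / (1 + P * C)) * R            # equation (5)
print(abs(Y - P * C * (R - Y)))          # residual of Y = PC(R - Y): ~0
```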