Dynamic Behavior of Feedback Control Processes
Prepared by:
GROUP 8
Abrahan, Hazel
Balmes, Patricia R.
Catapang, Cathy Mae B.
Layosa, Dianne H.
Lopez, Aubrenica Rose Pauline T.
Che-5201
Submitted to:
Engr. Monroe De Guzman
DYNAMIC BEHAVIOR OF FEEDBACK CONTROL PROCESSES
I. Block Diagrams / Algebra
Figure 14.1 shows the block diagram for the generalized closed-loop system and is
nothing more than a pictorial representation of the following equations. Each block contains the
corresponding transfer function relating its output to its input.
Process: y(s) = Gp·m(s) + Gd·d(s)
Measuring device: ym(s) = Gm·y(s)
Controller mechanism
Comparator: e(s) = ySP(s) − ym(s)
Control action: c(s) = Gc·e(s)
Final control element: m(s) = Gf·c(s)
where m is the manipulated variable, d the load (disturbance), ym the measured output, e the error, and c the controller output.
Figure 14.2 shows a block diagram equivalent to that of Figure 14.1 but further
simplified.
Forward Path
The series of blocks between the comparator and the controlled output (Gc, Gf, Gp)
constitutes the forward path.
Feedback path
The block Gm is on the feedback path between the controlled output and the comparator.
Closed-loop response
If G = Gc·Gf·Gp, the closed-loop response is
y(s) = [G/(1 + G·Gm)]·ySP(s) + [Gd/(1 + G·Gm)]·d(s) = GSP·ySP(s) + Gload·d(s)
The closed-loop overall transfer functions GSP and Gload depend not only on the process
dynamics but also on the dynamics of the measuring sensor, controller, and final control element.
For every feedback control system, we can distinguish two types of control problems:
1. Servo problem
The disturbance does not change (d(s) = 0) while the set point undergoes a change. The feedback
controller acts in such a way as to keep y close to the changing ySP. In such a case, the closed-loop
response reduces to y(s) = GSP·ySP(s).
2. Regulator problem
The set point remains the same while the load changes. Then the closed-loop response reduces to
y(s) = Gload·d(s), and the controller acts to eliminate the impact of the load change and keep y at the desired set point.
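As a quick check on this block-diagram algebra, the same relations can be derived symbolically. The sketch below uses Python with sympy (an assumption of convenience; any computer algebra system would do), with symbol names that mirror the text:

```python
import sympy as sp

Gc, Gf, Gp, Gd, Gm = sp.symbols('G_c G_f G_p G_d G_m')
y, ym, eps, c, m, ysp, d = sp.symbols('y y_m epsilon c m y_sp d')

# Block-diagram equations from the text
eqs = [
    sp.Eq(y, Gp * m + Gd * d),    # process
    sp.Eq(ym, Gm * y),            # measuring device
    sp.Eq(eps, ysp - ym),         # comparator
    sp.Eq(c, Gc * eps),           # controller
    sp.Eq(m, Gf * c),             # final control element
]

sol = sp.solve(eqs, [y, ym, eps, c, m], dict=True)[0]
y_closed = sp.simplify(sol[y])

GSP = sp.simplify(y_closed.subs(d, 0) / ysp)      # servo (set-point) transfer function
Gload = sp.simplify(y_closed.subs(ysp, 0) / d)    # regulator (load) transfer function
print(GSP)      # -> Gc*Gf*Gp / (1 + Gc*Gf*Gp*Gm)
print(Gload)    # -> Gd / (1 + Gc*Gf*Gp*Gm)
```

Setting d = 0 recovers the servo case, and setting ySP = 0 recovers the regulator case.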
PROPORTIONAL CONTROL
One type of action used in PID controllers is proportional control. Proportional
control is a form of feedback control. It is the simplest form of continuous control that can be
used in a closed-loop system.
Mathematical Equations
P-control linearly correlates the controller output (actuating signal) to the error (the difference
between the set point and the measured signal). This P-control behavior is mathematically illustrated as
follows:
c(t) = Kc·e(t) + b
e(t) = SP − PV
where:
c(t) = controller output
Kc = controller gain
e(t) = error
b = bias
In this equation, the bias and controller gain are constants specific to each controller. The
bias is simply the controller output when the error is zero. The controller gain is the change in
the output of the controller per change in the input to the controller.
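To make the law concrete, here is a minimal Python sketch of the P-only algorithm; the function name and arguments are illustrative, with e(t) = SP − PV as defined above:

```python
# Minimal sketch of the P-only law c(t) = Kc*e(t) + b.
def p_control(sp_value, pv, Kc, bias):
    """Return the controller output for one sample: c = Kc*(SP - PV) + b."""
    error = sp_value - pv          # e(t) = SP - PV
    return Kc * error + bias       # c(t) = Kc*e(t) + b
```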
As an example, consider a car under cruise control, where the controller output is the throttle
signal, CO. Suppose that the speed SP is 70 and the measured PV is also 70 (units can be mph or
kph depending on where you live in the world). Since PV = SP, then e(t) = 0 and the algorithm
reduces to:
c(t) = Kc∙(0) + b
c(t) = b
If b is zero, then when set point equals measurement, the above equation says that the throttle
signal, CO, is also zero. This makes no sense. Clearly if the car is traveling 70 kph, then some
baseline flow of fuel is going to the engine.
This baseline value of the CO is called the bias or null value. In this example, CObias is the flow
of fuel that, in manual mode, causes the car to travel the design speed of 70 kph when on flat
ground on a calm day.
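Plugging the cruise-control numbers into the sketch above shows the role of the bias; the bias value of 40% and the gain Kc = 2.0 are purely illustrative:

```python
co_bias = 40.0                                   # hypothetical baseline throttle signal, in %
print(p_control(70, 70, Kc=2.0, bias=co_bias))   # 40.0 -> c(t) = b when PV = SP
print(p_control(70, 65, Kc=2.0, bias=co_bias))   # 50.0 -> output rises when the car slows down
```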
Controller Gain, Kc
The P-Only controller has the advantage of having only one adjustable or tuning parameter, Kc,
which defines how actively or aggressively the controller output c(t) will move in response to changes
in the controller error, e(t).
For a given value of e(t) in the P-Only algorithm above:
If Kc is small, then the amount added to the bias is small and the controller response will be
slow or sluggish.
If Kc is large, then the amount added to the bias is large and the controller response will be
fast or aggressive.
Thus, Kc can be adjusted or tuned for each process to make the controller more or less active in
its actions when measurement does not equal set point.
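This effect of Kc can be illustrated with a rough simulation; the first-order process model (gain 1, time constant 10 s) and the Kc values are assumptions chosen only for illustration:

```python
import numpy as np

def simulate_p_only(Kc, sp_value=1.0, bias=0.0, Kp=1.0, tau=10.0, dt=0.1, t_end=60.0):
    """P-only control of an assumed first-order process: dPV/dt = (Kp*CO - PV)/tau."""
    pv = 0.0
    history = []
    for _ in np.arange(0.0, t_end, dt):
        co = Kc * (sp_value - pv) + bias      # P-only law, as in the p_control sketch above
        pv += dt * (Kp * co - pv) / tau       # Euler step of the process model
        history.append(pv)
    return np.array(history)

for Kc in (0.5, 2.0, 10.0):
    pv = simulate_p_only(Kc)
    print(f"Kc = {Kc:4.1f}: PV after 5 s = {pv[49]:.2f}, PV after 60 s = {pv[-1]:.2f}")
```

With this model, a larger Kc drives the PV toward the set point faster, although a steady-state gap remains in every case (see the offset discussion below).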
Offset
P-only control minimizes fluctuation in the process variable, but it does not always
bring the system to the desired set point. It provides a faster response than most other controllers,
initially allowing the P-only controller to respond a few seconds faster. However, as the system
becomes more complex (i.e., a more complex algorithm), the response-time difference can
accumulate, allowing the P-controller to respond even a few minutes faster.
Although the P-only controller does offer the advantage of faster response time, it
produces deviation from the set point. This deviation is known as the offset, and it is usually not
desired in a process. The existence of an offset implies that the system could not be maintained at
the desired set point at steady state. It is analogous to the systematic error in a calibration curve,
where there is always a set, constant error that prevents the line from crossing the origin. The
offset can be minimized by combining P-only control with another form of control, such as I- or
D- control. It is important to note, however, that it is impossible to completely eliminate the
offset, which is implicitly included within each equation.
As Kc increases, the offset decreases but the oscillatory behavior increases.
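For the illustrative first-order process used in the simulation sketch above (steady-state gain Kp), the steady-state offset after a set-point step of size ΔSP is ΔSP/(1 + Kp·Kc), which shrinks as Kc grows but never reaches zero:

```python
Kp, dSP = 1.0, 1.0                    # illustrative process gain; unit set-point step
for Kc in (0.5, 2.0, 10.0, 100.0):
    offset = dSP / (1 + Kp * Kc)      # steady-state offset under P-only control
    print(f"Kc = {Kc:6.1f} -> offset = {offset:.4f}")
```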
DERIVATIVE CONTROL
The "D" part of a PID controller. With derivative action the controller output is
proportional to the rate of change of the process variable or error
Unlike P-only and I-only controls, D-control is a form of feedforward control. D-control
anticipates the process conditions by analyzing the change in error. It functions to minimize the
change of error, thus keeping the system at a consistent setting. The primary benefit of D
controllers is to resist change in the system, the most important of these changes being oscillations. The
control output is calculated based on the rate of change of the error with time. The larger the rate
of change in error, the more pronounced the controller response will be.
Unlike proportional and integral controllers, derivative controllers do not guide the
system to a steady state. Because of this property, D controllers must be coupled with P, I or PI
controllers to properly control the system.
Mathematically, derivative control is the opposite of integral control. Although I-only
controllers exist, D-only controllers do not. D-control measures only the change in error; it does
not know where the set point is, so it is usually used in conjunction with another
method of control, such as P-only or a PI combination control. D-control is usually used for
processes with rapidly changing process outputs. However, like the I control, the D control is
mathematically more complex than the P-control. Since it will take a computer algorithm longer
to calculate a derivative or an integral than to simply linearly relate the input and output
variables, adding a D-control slows down the controller’s response time. A graphical
representation of the D-controller output for a step increase in input at time t0 is shown below in
the figure. As expected, this graph represents the derivative of the step input graph.
Derivative Control
Derivative action can be thought of as making smaller and smaller changes as one gets
close to the right value, and then stopping in the correct region, rather than making further
changes. Derivative control quantifies the need to apply more change by linking the amount of
change applied to the rate of change needed. For example, an accelerator would be applied more
as the speed of the car continues to drop. However, the actual speed drop is independent of this
process. On its own, derivative control is not sufficient to restore the speed to a specific value.
Pairing this derivative action with a proportional term is enough to properly control the
speed.
The derivative control is usually used in conjunction with P and/or I controls because it
generally is not effective by itself. The derivative control alone does not know where the set
point is located and is only used to increase precision within the system. This is the only control type
that operates open loop (it is also described as a feedforward mode). The derivative control operates in
order to determine what will happen to the process in the future by examining the rate of change
of the error within the system. When derivative control is implemented, the following general
equation is used:
c(t) = Kd·de(t)/dt
This equation shows that the derivative control output is proportional to the rate of change of the error within the
system.
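In discrete form, this derivative action can be sketched with a finite difference; the gain Kd and sample time used here are illustrative:

```python
def d_action(error, prev_error, Kd, dt):
    """Return the derivative contribution Kd * de/dt using a finite difference."""
    return Kd * (error - prev_error) / dt

# The faster the error changes between samples, the larger the response:
print(d_action(2.0, 1.9, Kd=1.0, dt=0.1))   # slow drift   -> 1.0
print(d_action(2.0, 1.0, Kd=1.0, dt=0.1))   # rapid change -> 10.0
```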
Limitations
The major problem associated with this control mode is noise. When the frequency
content within the system is high (the change in error of the system is large and rapid), taking the
derivative of this signal may amplify it greatly. Therefore, small amounts of noise within the system can
then cause the output of the system to change by a large amount. In these circumstances, it is often
sensible to use a PI controller or to set the derivative action of a PID controller to zero.
Using the derivative control mode is a bad idea when the process variable (PV) has a lot
of noise on it. ‘Noise’ is small, random, rapid changes in the PV, and consequently rapid changes
in the error. Because the derivative mode extrapolates the current slope of the error, it is highly
affected by noise (Figure). You could try to filter the PV so you can use derivative, as long as
your filter time constant is shorter than 1/5 of your derivative time.
The derivative term in the equation is therefore usually modified by placing a first-order filter on the
term, so that the derivative does not amplify the high-frequency noise and the noise is attenuated instead. Below is
a sample figure showing a possible derivative of the output signal above along with the
filtered signal.
As shown, the amplitude can be magnified when the derivative is taken
of a sinusoidal function. A filter is usually a set of equations added to the
derivative term that affects the function as shown.
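The idea can be sketched numerically: below, the raw finite-difference derivative of a noisy sinusoidal error is compared with the same derivative passed through a first-order filter (the noise level and the 0.2 s filter time constant are assumed values for illustration):

```python
import numpy as np

def filtered_derivative(signal, dt, tau_f):
    """First-order (exponential) filter applied to the finite-difference derivative."""
    alpha = dt / (tau_f + dt)                  # filter coefficient from the filter time constant
    d_filt, out = 0.0, []
    for prev, curr in zip(signal[:-1], signal[1:]):
        d_raw = (curr - prev) / dt             # raw derivative amplifies the noise
        d_filt += alpha * (d_raw - d_filt)     # low-pass filtered derivative
        out.append(d_filt)
    return np.array(out)

t = np.arange(0.0, 10.0, 0.01)
e = np.sin(t) + 0.05 * np.random.randn(t.size)   # noisy sinusoidal error signal
d_raw = np.diff(e) / 0.01
d_filt = filtered_derivative(e, dt=0.01, tau_f=0.2)
print(d_raw.std(), d_filt.std())   # the filtered derivative varies far less than the raw one
```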
Temperature control loops normally have smooth measurements and long time
constants. The process variable of a temperature loop tends to move in the same direction for a
long time, so its slope can be used for predicting future error. So temperature loops are ideal
candidates for using derivative control – if needed.
Flow control loops tend to have noisy PVs (depending on the flow measurement
technology used). They also tend to have short time constants. And they normally act quite fast
already, so speed is not an issue. These factors all make flow control loops poor candidates for
using derivative control.
Pressure control loops come in two flavors: liquid and gas. Liquid pressure behaves very
much like flow loops, so derivative should not be used. Gas pressure loops behave more like
temperature loops (some even behave like level loops / integrating processes), making them good
candidates for using derivative control.
INTEGRAL CONTROL
Integral control describes a controller in which the output rate of change is dependent on
the magnitude of the input. Specifically, a smaller amplitude input causes a slower rate of change
of the output. The integral control method is also known as reset control.
A device that performs the mathematical function of integration is called an integrator.
The mathematical result of integration is called the integral. The integrator provides a linear
output with a rate of change that is directly related to the amplitude of the step change input and
a constant that specifies the function of integration.
Moreover, integral control is the control mode where the controller output is proportional
to the integral of the error with respect to time, i.e.:
controller output ∝ integral of error with time
Therefore:
controller output = KI × integral of error with time
where KI is the constant of proportionality and, when the controller output and the error are both
expressed as percentages, has units of s⁻¹.
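In discrete form, this reset action amounts to accumulating error × time and multiplying by KI; a minimal sketch (rectangular approximation of the integral, with an assumed sample time dt):

```python
def i_action(errors, KI, dt):
    """Return KI times the time-integral of the error (rectangular approximation)."""
    integral = 0.0
    for e in errors:
        integral += e * dt      # accumulate error * time
    return KI * integral
```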
Example
An integral controller has a value of KI of 0.10 s⁻¹. What will be the output after times of (a) 1 s,
(b) 2 s, if there is a sudden change to a constant error of 20%, as illustrated in Figure 1?
Solution:
(a) At t = 1 s, the integral of the error is 20% × 1 s = 20 %·s, so:
controller output = KI × integral of error with time
controller output = (0.10 s⁻¹)(20 %·s)
controller output = 2%
(b) At t = 2 s, the integral of the error is 20% × 2 s = 40 %·s, so:
controller output = KI × integral of error with time
controller output = (0.10 s⁻¹)(40 %·s)
controller output = 4%
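The worked example can be re-checked numerically with the i_action sketch above, sampling the constant 20% error every 0.1 s:

```python
dt = 0.1
print(i_action([20.0] * 10, KI=0.10, dt=dt))   # 1 s of 20% error -> ≈ 2%
print(i_action([20.0] * 20, KI=0.10, dt=dt))   # 2 s of 20% error -> ≈ 4%
```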
With integral control, the final control element’s position changes at a rate determined by
the amplitude of the input error signal. Recall that:
Error = Setpoint - Measured Variable
If a large difference exists between the setpoint and the measured variable, a large error
results. This causes the final control element to change position rapidly. If, however, only a small
difference exists, the small error signal causes the final control element to change position
slowly.
PID Control
Proportional-integral-derivative control is a combination of all three types of control
methods. It works by controlling an output to bring a process value to a desired set point.
A block diagram of a PID controller in a feedback loop
Mathematical Equations
PID-control correlates the controller output to the error, the integral of the error, and the derivative of
the error. This PID-control behavior is mathematically illustrated as follows:
c(t) = Kc·[e(t) + (1/τI)·∫e(t)dt + τD·de(t)/dt] + b
where τI is the integral (reset) time and τD is the derivative time.
As shown in the above equation, PID control is the combination of all three types of control. In
this equation, the gain is multiplied with the integral and derivative terms, along with the
proportional term, because in PID combination control, the gain affects the I and D actions as
well. Because of the use of derivative control, PID control cannot be used in processes where
there is a lot of noise, since the noise would interfere with the predictive, feedforward aspect.
However, PID control is used when the process requires no offset and a fast response time.
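A minimal discrete sketch of this combined PID form is given below; the tuning values, sample time, and bias are illustrative assumptions rather than recommended settings:

```python
class PID:
    """Discrete sketch of c(t) = Kc*[e + (1/tau_i)*integral(e) + tau_d*de/dt] + bias."""

    def __init__(self, Kc, tau_i, tau_d, bias=0.0, dt=0.1):
        self.Kc, self.tau_i, self.tau_d = Kc, tau_i, tau_d
        self.bias, self.dt = bias, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, sp_value, pv):
        error = sp_value - pv
        self.integral += error * self.dt                    # running integral of the error
        derivative = (error - self.prev_error) / self.dt    # rate of change of the error
        self.prev_error = error
        return self.Kc * (error
                          + self.integral / self.tau_i
                          + self.tau_d * derivative) + self.bias

# One illustrative update step:
pid = PID(Kc=2.0, tau_i=5.0, tau_d=0.5)
print(pid.update(sp_value=70.0, pv=65.0))
```

In practice the derivative term would usually be computed on a filtered error, as discussed in the derivative-control section above.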