Peephole Optimization
A simple but effective technique for locally improving the target code is peephole optimization,
which is done by examining a sliding window of target instructions (called the peephole) and
replacing instruction sequences within the peephole with a shorter or faster sequence, whenever
possible.
Peephole optimization can also be applied directly after intermediate code generation to
improve the intermediate representation.
The peephole is a small, sliding window on a program. The code in the peephole need not be
contiguous, although some implementations do require this. It is characteristic of peephole
optimization that each improvement may spawn opportunities for additional improvements. In
general, repeated passes over the target code are necessary to get the maximum benefit. In this
section, we shall give the following examples of program transformations that are characteristic
of peephole optimizations (a small driver that applies such transformations repeatedly is
sketched after the list):
➢ Redundant-instruction elimination
➢ Flow-of-control optimizations
➢ Algebraic simplifications
➢ Use of machine idioms
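To make the sliding-window idea concrete, the sketch below shows one possible driver in Python;
it is not the implementation of any particular compiler. It treats the program as a list of
instruction strings, slides a small window over it, and applies rewrite rules until a complete
pass makes no change. The names peephole, rules, and WINDOW are illustrative assumptions.

WINDOW = 2  # number of consecutive instructions examined at once

def peephole(code, rules):
    # Repeatedly slide the window over the code and try every rule on it;
    # stop only when a complete pass makes no change (a fixed point),
    # since one improvement may expose further opportunities.
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(code):
            window = code[i:i + WINDOW]
            for rule in rules:
                replacement = rule(window)
                if replacement is not None and replacement != window:
                    code[i:i + WINDOW] = replacement
                    changed = True
                    break
            i += 1
    return code

Each transformation discussed below can then be written as a rule that inspects the window and
returns either a replacement list of instructions or None to decline.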
1. Redundant-Instruction Elimination
If a value is copied through an intermediate variable that serves no other purpose, the copy is
redundant: later uses can refer to the original variable directly, and the copy itself can be
deleted. The fragment below shows such a sequence after the redundant copy has been removed.
Optimized code:
y = x + 5;
i = y;
w = y * 3;
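A classic instance of redundant-instruction elimination at the target-code level is a load
immediately followed by a store of the same register back into the same variable; the store can
be dropped. The rule below is a sketch written for the driver shown earlier, and it assumes
target instructions appear as plain strings such as LD R0, a and ST a, R0 (an assumed syntax,
not a fixed one).

def drop_redundant_store(window):
    # If 'LD R0, a' is immediately followed by 'ST a, R0', the store puts
    # back the value just loaded and is redundant (provided the store has
    # no label, which this sketch does not check).
    if len(window) == 2:
        load, store = window
        if load.startswith("LD ") and store.startswith("ST "):
            lt = load.replace(",", " ").split()
            st = store.replace(",", " ").split()
            if len(lt) == 3 and len(st) == 3 and lt[1] == st[2] and lt[2] == st[1]:
                return [load]           # keep the load, drop the store
    return None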
2. Eliminating Unreachable Code
An unlabeled instruction that immediately follows an unconditional jump can never be executed
and may be removed. For example, consider intermediate code in which debugging statements are
executed only when the variable debug is 1:
if debug == 1 goto L1
goto L2
L1: print debugging information
L2:
One obvious peephole optimization is to eliminate jumps over jumps. Thus, no matter what
the value of debug, the code sequence above can be replaced by
if debug != 1 goto L2
print debugging information
L2:
If debug is set to 0 at the beginning of the program, constant propagation would transform
this sequence into
if 0 != 1 goto L2
print debugging information
L2:
Now the argument of the first statement always evaluates to true, so the statement can be
replaced by goto L2. Then all statements that print debugging information are unreachable
and can be eliminated one at a time.
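The removal of unreachable instructions can itself be written as a simple pass over the code.
The Python sketch below, intended for the same string-based representation assumed earlier,
deletes every unlabeled instruction between an unconditional goto and the next label; it is a
sketch, not the full algorithm.

def drop_unreachable(code):
    # Drop unlabeled instructions that follow an unconditional 'goto';
    # control cannot reach them.  A label (a first token ending in ':')
    # may be the target of a jump, so copying resumes there.
    out, skipping = [], False
    for instr in code:
        tokens = instr.split()
        is_labeled = bool(tokens) and tokens[0].endswith(":")
        if is_labeled:
            skipping = False
        if not skipping:
            out.append(instr)
        if tokens and tokens[0] == "goto":
            skipping = True
    return out

Applied to the example above, this pass keeps goto L2 and L2: and deletes the now-unreachable
statements that print debugging information.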
3. Flow-of-Control Optimizations
Straightforward code-generation algorithms frequently produce jumps to jumps, and these
unnecessary jumps can be eliminated in the peephole. For example, we can replace the sequence
goto L1
……..
L1: goto L2
by the sequence
goto L2
………
L1: goto L2
If there are now no jumps to L1, then it may be possible to eliminate the statement L1: goto
L2 provided it is preceded by an unconditional jump.
Similarly, the sequence
if a < b goto L1
……..
L1: goto L2
can be replaced by the sequence
if a < b goto L2
.…….
L1: goto L2
Finally, suppose there is only one jump to L1 and L1 is preceded by an unconditional goto.
Then the sequence
goto L1
...
L1: if a < b goto L2
L3:
may be replaced by the sequence
if a < b goto L2
goto L3
...
L3:
While the number of instructions in the two sequences is the same, we sometimes skip the
unconditional jump in the second sequence, but never in the first. Thus, the second sequence
is superior to the first in execution time.
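These rewritings can be combined into a single pass that retargets any jump whose destination
is itself an unconditional goto. The Python sketch below works on the same string representation
as before and assumes that each jump names its target label as its last token (for example
goto L1 or if a < b goto L1) and that a labeled jump is written L1: goto L2; these formats are
assumptions of the sketch. Run under the repeated-pass driver shown earlier, it also collapses
chains of jumps.

def shorten_jump_chains(code):
    # Whenever the statement at label L1 is itself 'goto L2', make every
    # jump that targets L1 go directly to L2.  If L1 then has no remaining
    # references, a later pass may delete 'L1: goto L2' entirely.
    targets = {}
    for instr in code:
        label, sep, rest = instr.partition(":")
        if sep and rest.split()[:1] == ["goto"]:
            targets[label.strip()] = rest.split()[1]
    out = []
    for instr in code:
        tokens = instr.split()
        if tokens and tokens[0] in ("goto", "if") and tokens[-1] in targets:
            tokens[-1] = targets[tokens[-1]]
            instr = " ".join(tokens)
        out.append(instr)
    return out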
4. Algebraic Simplification and Reduction in Strength
In Section 8.5 we discussed algebraic identities that could be used to simplify DAGs. These
algebraic identities can also be used by a peephole optimizer to eliminate three-address
statements such as
x=x+0
or
x=x*1
in the peephole.
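A peephole rule for these identities only needs to recognize the pattern and delete the
statement. The sketch below fits the driver shown earlier and assumes the statement is written
as plain three-address text of the form dst = src op constant; the textual form is an assumption
of this sketch.

def drop_identity_ops(window):
    # Delete a statement of the form x = x + 0 or x = x * 1; it computes
    # nothing new, so removing it cannot change the program's meaning.
    tokens = window[0].replace("=", " ").split()
    if len(tokens) == 4:
        dst, src, op, c = tokens
        if dst == src and ((op == "+" and c == "0") or
                           (op == "*" and c == "1")):
            return window[1:]           # drop the useless statement
    return None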
Similarly, reduction-in-strength transformations can be applied in the peephole to replace
expensive operations with equivalent cheaper ones on the target machine. Certain machine
instructions are considerably cheaper than others and can often be used in special cases of
more expensive operators.
For example, x² is invariably cheaper to implement as x * x than as a call to an exponentiation
routine. Fixed-point multiplication or division by a power of two is cheaper to implement as
a shift. Floating-point division by a constant can be approximated as multiplication by a
constant, which may be cheaper.
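Reduction in strength fits the same mold: the rule below rewrites a multiplication by a power of
two as a left shift. It again assumes the textual three-address form used above and a << shift
operator in the intermediate language, both of which are assumptions of this sketch.

def reduce_strength(window):
    # Rewrite 'x = y * 2^k' as 'x = y << k', which is usually cheaper
    # than a general multiply on the target machine.
    tokens = window[0].replace("=", " ").split()
    if len(tokens) == 4:
        dst, src, op, c = tokens
        if op == "*" and c.isdigit():
            n = int(c)
            if n > 1 and n & (n - 1) == 0:          # n is a power of two
                shift = n.bit_length() - 1
                return [dst + " = " + src + " << " + str(shift)] + window[1:]
    return None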
5. Use of Machine Idioms
The target machine may have hardware instructions to implement certain specific operations
efficiently. Detecting situations that permit the use of these instructions can reduce execution
time significantly. For example, some machines have auto-increment and auto-decrement
addressing modes. These add or subtract one from an operand before or after using its value.
The use of the modes greatly improves the quality of code when pushing or popping a stack,
as in parameter passing. These modes can also be used in code for statements like x = x + 1.
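As with the other transformations, exploiting a machine idiom can be phrased as a peephole rule.
The sketch below replaces x = x + 1 with a single increment instruction; INC x stands in for
whatever increment or auto-increment form the real target machine provides and is purely an
assumed, illustrative mnemonic.

def use_increment_idiom(window):
    # Replace 'x = x + 1' by a single increment instruction.  'INC' is a
    # hypothetical mnemonic used only for illustration here.
    tokens = window[0].replace("=", " ").split()
    if len(tokens) == 4:
        dst, src, op, c = tokens
        if dst == src and op == "+" and c == "1":
            return ["INC " + dst] + window[1:]
    return None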