Cheat-Sheet For Dynamic Programming: Template For Solution Short-Hand
Written by Marcelo Siero, based on UCSC's class on Algorithm Design taught by Prof. Dimitris
Achlioptas, with ideas passed on by Greg Levin.
In this cheat-sheet we provide a short-hand for the solutions to a wide variety of problems that
are solved with the techniques of dynamic programming.
Essentially the problem is to calculate the maximum profit at each possible sale day relative to
all the possible earlier buy days.
Brute Force Solution: O(n²)
Create a 2D array P[i, j] of profits.
for i = 1 to n:
    for j = i + 1 to n:
        P[i, j] = A[j] - A[i]
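As a sketch in Python, the brute force above can be written as follows (assuming A is a 0-indexed list of daily prices; only the best pair is tracked rather than the full P table):

```python
def max_profit_brute(A):
    """O(n^2) brute force: try every (buy day i, sell day j > i) pair."""
    n = len(A)
    best = 0  # assumes doing nothing (profit 0) is allowed
    for i in range(n):
        for j in range(i + 1, n):
            best = max(best, A[j] - A[i])
    return best
```

For example, `max_profit_brute([7, 1, 5, 3, 6, 4])` returns 5 (buy at 1, sell at 6).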
Accumulate profits in the form of sell(high) − buy(low) on selected days to maximize profit:
1) OPT(k) is the best profit that can be made ending on day k (based on the accumulated profit
of previous days).
5) To find the solution, keep track of the largest value (the optimal sale day), then backtrack to its
previous 0, the optimal purchase day.
6) The answer is the largest OPT(k) over all days up to n.
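The steps above reduce to a single O(n) pass (a Kadane-style sketch in Python; the function name and the running-maximum bookkeeping are mine, not the notes'):

```python
def max_profit_dp(A):
    """O(n) DP over daily prices A.
    OPT(k) = max(0, OPT(k-1) + A[k] - A[k-1]): best profit for a
    transaction ending with a sale on day k. A reset to 0 marks an
    optimal purchase day, which backtracking would recover."""
    opt = 0   # OPT(k) for the current day k
    best = 0  # largest OPT(k) seen so far (the optimal sale day)
    for k in range(1, len(A)):
        opt = max(0, opt + A[k] - A[k - 1])
        best = max(best, opt)
    return best
```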
Problem: a robber puts any of n items in a knapsack with weight capacity W. Items are indexed
by i, with weights wi and values vi . We wish to maximize the value of the heist.
3b) A first attempt, OPT(i) = max{ OPT(i − 1) + vi , OPT(i − 1) }, does not account for weight.
Note that the solutions of previous OPT[i] would change as i advances due to weight limits, so a
single index does not suffice.
1) OPT[i, c] is the optimal value in the knapsack after making the best decision about whether to
steal item i or not, along with all previous such decisions for items less than i and their
corresponding remaining capacities c.
2) Base case: OPT(0, c) = 0 for all capacities c.
3a) Decision (2-way): whether to grab item i (choice 1) or not grab it (choice 2)
3b) OPT(i, c) = OPT(i − 1, c) if wi > c (item i does not fit),
    OPT(i, c) = max{ OPT(i − 1, c − wi ) + vi , OPT(i − 1, c) } otherwise.
We create a capacity variable c and memoize a 2D array OPT[i, c]. The term c − wi reduces the
remaining weight capacity of the knapsack; thought of another way, it increases the weight in the
knapsack and how close we are to W.
We can store values in an n × W array OPT[i, c]. All valid leftover capacities get explored
within this 2D array.
5) PTR keeps track of the sequence of best decisions with respect to each i, used to reconstruct the solution.
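A minimal Python sketch of this template (the `take` table plays the role of PTR; names and 0-indexed lists are my assumptions):

```python
def knapsack_01(weights, values, W):
    """0/1 knapsack: OPT[i][c] = best value using items 1..i at capacity c."""
    n = len(weights)
    OPT = [[0] * (W + 1) for _ in range(n + 1)]
    take = [[False] * (W + 1) for _ in range(n + 1)]  # PTR: was item i taken?
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for c in range(W + 1):
            OPT[i][c] = OPT[i - 1][c]                    # choice 2: skip item i
            if wi <= c and OPT[i - 1][c - wi] + vi > OPT[i][c]:
                OPT[i][c] = OPT[i - 1][c - wi] + vi      # choice 1: grab item i
                take[i][c] = True
    # Backtrack through the stored decisions to recover the item set.
    items, c = [], W
    for i in range(n, 0, -1):
        if take[i][c]:
            items.append(i - 1)
            c -= weights[i - 1]
    return OPT[n][W], items[::-1]
```

For instance, weights [1, 3, 4, 5] and values [1, 4, 5, 7] with W = 7 give value 9 by taking the items of weight 3 and 4.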
Problem: the same as Knapsack 1, but we can take an unlimited number of each kind of item.
3a) Decision (multi-way): how many of item i to grab, with quanta related to its weight.
3b) We create a quantity variable j and store the best values of items for different knapsack
capacities in a 2D array OPT[i, c]:
OPT(i, c) = max over j ≥ 0 with j · wi ≤ c of { OPT(i − 1, c − j · wi ) + j · vi }.
The c − j · wi term reduces the remaining weight capacity of the knapsack; thought of another
way, it increases the weight in the knapsack and how close we are to W.
We must memoize values in an n × W array OPT[i, c]. All valid capacities get explored within
this 2D array.
5) A PTR array can be kept to track the best decisions in terms of the quantity taken of each item at
each capacity, used to reconstruct the solution. A pretty monstrous program.
Optimization notes: It would probably be best to cache the W dimension and the PTR array in a
hash to avoid the enormous use of space for unused indices. Also, scaling the weights by
gcd(w1 , ..., wn ) would reduce the size of the space, and the runtime, instead of trying all integer
values of the weight.
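A Python sketch of the unbounded variant, folding in the gcd scaling suggested above (a 1D capacity array suffices once items can repeat; the `choice` array plays the role of PTR and all names are my assumptions):

```python
from functools import reduce
from math import gcd

def knapsack_unbounded(weights, values, W):
    """Unbounded knapsack. Weights are scaled by their gcd g: any
    achievable load is a multiple of g, so capacity below g is unusable."""
    g = reduce(gcd, weights)
    ws = [w // g for w in weights]
    Wc = W // g
    OPT = [0] * (Wc + 1)
    choice = [-1] * (Wc + 1)  # PTR: last item added at each capacity
    for c in range(1, Wc + 1):
        for i, (w, v) in enumerate(zip(ws, values)):
            if w <= c and OPT[c - w] + v > OPT[c]:
                OPT[c] = OPT[c - w] + v
                choice[c] = i
    # Recover item multiplicities by walking the PTR array backwards.
    counts, c = {}, Wc
    while c > 0 and choice[c] != -1:
        i = choice[c]
        counts[i] = counts.get(i, 0) + 1
        c -= ws[i]
    return OPT[Wc], counts
```

For example, weights [2, 3] and values [3, 5] with W = 7 give value 11 (two of item 0 and one of item 1).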
Problem: Select a subset S ⊆ {1, ..., n} of mutually compatible intervals to maximize the sum of
the values of the selected intervals, Σi∈S vi , where vi are the weights of these intervals.
TRICK: Start with the end, the potentially optimal choice, and work backwards in terms of
compatibility to earlier good choices. Later choices depend on earlier choices.
Working backwards means looking back at the previous compatible interval. We use the function
p(j) for the closest previous compatible interval. This is true for most Dynamic Programming
problems.
1) OPT(j) provides the max sum of weights for an optimal set of the first j compatible
intervals.
3a) We call Oj the optimal scheduling set, and p(j) is the largest index i < j such that interval
i ends before j begins. The choices for OPT(j) are binary in form:
Choice 1: j ∈ Oj
Choice 2: j ∉ Oj
It belongs to Oj iff the 1st option is at least as good as the 2nd. In other words, request j
belongs to an optimal solution on the set {1, ..., j} iff vj + OPT(p(j)) ≥ OPT(j − 1).
Thus the recurrence OPT(j) = max{ vj + OPT(p(j)), OPT(j − 1) } expresses the optimal value in
terms of the optimal values of smaller subproblems, as DP requires.
5) The solution is obtained by storing the choices made as to which intervals j are part of Oj
in the array PTR, or extracting that in a backtrace. This then provides the set Oj that
maximizes the result.
6) Runtime is O(n) once the intervals are sorted by finish time (O(n log n) overall).
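A Python sketch of the whole template (intervals as (start, finish, value) triples is my convention; p(j) is computed by binary search over the sorted finish times):

```python
import bisect

def weighted_interval_scheduling(intervals):
    """intervals: list of (start, finish, value) triples."""
    ivs = sorted(intervals, key=lambda t: t[1])  # sort by finish time
    n = len(ivs)
    finishes = [f for _, f, _ in ivs]
    # p[j]: number of intervals finishing by ivs[j]'s start (1-indexed
    # predecessor; 0 means no compatible predecessor).
    p = [bisect.bisect_right(finishes, ivs[j][0]) for j in range(n)]
    OPT = [0] * (n + 1)
    for j in range(1, n + 1):
        vj = ivs[j - 1][2]
        OPT[j] = max(vj + OPT[p[j - 1]], OPT[j - 1])
    # Backtrace: j is in Oj iff taking it is at least as good as skipping.
    chosen, j = [], n
    while j > 0:
        if ivs[j - 1][2] + OPT[p[j - 1]] >= OPT[j - 1]:
            chosen.append(ivs[j - 1])
            j = p[j - 1]
        else:
            j -= 1
    return OPT[n], chosen[::-1]
```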
Segmented Least Squares, Pretty-Printing:
Basic Observation: If the last segment of the optimal partition is pi , ..., pn , then the value of the
optimal solution is OPT(n) = ei,n + C + OPT(i − 1), where i corresponds to the beginning of the
last optimal segment and ei,n is the (cached) least-squares error of a line fit to points i through n.
C is an additive penalty incurred as more segments get created. Note that for the different
end values j, the multi-way min tests out all possible partitionings.
Template Desc: OPT(j) multi-way choice with internal variable (1 ≤ i ≤ j) and 1D storage.
Note: memoize/store OPT[j]; as j gets bigger, the earlier values of OPT[j] do not change and can be reused.
1) OPT(j) is min total error score up to point j with best chosen partitionings at each point.
2) OPT(0) = 0
3a) Select the best previous segment start point i, for the segment ending at j, that optimizes the overall error.
3b) OPT(j) = min over 1 ≤ i ≤ j of { ei,j + C + OPT(i − 1) }
5) The optimal choices of i get stored in PTR to recover the solution, scanned backwards. These
make up the optimal end points of the segments.
6) Runtime is O(n²), assuming that err(i, j) can be computed or pre-computed in O(n²) as well.
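A Python sketch of segmented least squares (here ei,j is recomputed on demand rather than cached, giving O(n³) rather than the O(n²) above; function and variable names are my assumptions):

```python
def segmented_least_squares(points, C):
    """points: (x, y) pairs sorted by x; C: per-segment penalty."""
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]

    def sse(i, j):
        """Least-squares error of the best-fit line through points i..j."""
        m = j - i + 1
        sx, sy = sum(xs[i:j+1]), sum(ys[i:j+1])
        sxx = sum(x * x for x in xs[i:j+1])
        sxy = sum(x * y for x, y in zip(xs[i:j+1], ys[i:j+1]))
        denom = m * sxx - sx * sx
        a = (m * sxy - sx * sy) / denom if denom else 0.0
        b = (sy - a * sx) / m
        return sum((ys[k] - a * xs[k] - b) ** 2 for k in range(i, j + 1))

    OPT = [0.0] * (n + 1)
    PTR = [0] * (n + 1)  # PTR[j]: start index i of the last segment ending at j
    for j in range(1, n + 1):
        OPT[j], PTR[j] = min((sse(i - 1, j - 1) + C + OPT[i - 1], i)
                             for i in range(1, j + 1))
    # Scan PTR backwards to recover the segment boundaries.
    segments, j = [], n
    while j > 0:
        i = PTR[j]
        segments.append((i - 1, j - 1))  # 0-indexed endpoints
        j = i - 1
    return OPT[n], segments[::-1]
```

On four collinear points with C = 1 this returns a single zero-error segment with total cost 1.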
1) OPT(i, j) is the best score for converting one string to another, up to character i in the source and j in
the target. The source string x is n long, and the target string y is m long. Advancing in j is a deletion,
advancing in i is an insertion, and advancing in both is a match or substitution. A deletion or an
insertion creates a gap, which has a penalty cost of δ. A character substitution has a penalty of
αxi yj to convert xi to yj . Edits are made by deleting, inserting, or substituting. There is a
deletion cost cd , an insertion cost ce , and a substitution cost C(A[i], A[j]). In Molecular Biology
this kind of alignment is referred to as sequence alignment; the Smith-Waterman algorithm
is similar to this.
2) Initialize A[i, 0] := i ∗ δ for each i
Initialize A[0, j] := j ∗ δ for each j
3a) OPT(i, j) requires a 3-way decision: delete, insert, or substitute.
3b) The min alignment costs satisfy the following recurrence for i ≥ 1 and j ≥ 1:
OPT(i, j) = min{ αxi yj + OPT(i − 1, j − 1), δ + OPT(i − 1, j), δ + OPT(i, j − 1) }
Moreover, (i,j) is in an optimal alignment M for this subproblem iff the minimum is achieved
by the first of these values.
6) Runtime is O(nm).
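A Python sketch of the recurrence with the base-case initialization above (a single gap penalty δ is used for both insertion and deletion, and α is passed as a function; both are my simplifying assumptions):

```python
def align(x, y, delta, alpha):
    """Global sequence alignment cost. delta: gap penalty;
    alpha(a, b): substitution cost (typically 0 when a == b)."""
    n, m = len(x), len(y)
    A = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):   # 2) base case: all-gap prefixes
        A[i][0] = i * delta
    for j in range(1, m + 1):
        A[0][j] = j * delta
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            A[i][j] = min(alpha(x[i - 1], y[j - 1]) + A[i - 1][j - 1],
                          delta + A[i - 1][j],
                          delta + A[i][j - 1])
    return A[n][m]
```

With δ = 1 and a 0/1 mismatch cost, this reduces to plain edit distance: `align("kitten", "sitting", 1, lambda a, b: 0 if a == b else 1)` gives 3.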