We began our study of algorithmic techniques with greedy algorithms, which in some sense form the most natural approach to algorithm design. Faced with a new computational problem, we've seen that it's not hard to propose multiple possible greedy algorithms; the challenge is then to determine whether any of these algorithms provides a correct solution to the problem in all cases.
6.1 Weighted Interval Scheduling: A Recursive Procedure
We have seen that a particular greedy algorithm produces an optimal solution to the Interval Scheduling Problem, where the goal is to accept as large a set of nonoverlapping intervals as possible. The Weighted Interval Scheduling Problem is a strictly more general version, in which each interval has a certain value (or weight), and we want to accept a set of maximum value.
Designing a Recursive Algorithm
Since the original Interval Scheduling Problem is simply the special case in which all values are equal to 1, we know already that most greedy algorithms will not solve this problem optimally. But even the algorithm that worked before (repeatedly choosing the interval that ends earliest) is no longer optimal in this more general setting.
Indeed, no natural greedy algorithm is known for this problem, which is what motivates our switch to dynamic programming. As discussed above, we will begin our introduction to dynamic programming with a recursive type of algorithm for this problem, and then in the next section we'll move to a more iterative method that is closer to the style we use in the rest of this chapter.
We use the notation from our discussion of Interval Scheduling. We have n requests labeled 1, ..., n, with each request i specifying a start time s_i and a finish time f_i. Each interval i now also has a value, or weight, v_i. Two intervals are compatible if they do not overlap. The goal of our current problem is to select a subset S ⊆ {1, ..., n} of mutually compatible intervals, so as to maximize the sum of the values of the selected intervals, Σ_{i∈S} v_i.
Let's suppose that the requests are sorted in order of nondecreasing finish time: f_1 ≤ f_2 ≤ ... ≤ f_n. We'll say a request i comes before a request j if i < j; this is the natural left-to-right order in which we'll consider intervals. To help in talking about this order, we define p(j), for an interval j, to be the largest index i < j such that intervals i and j are disjoint. In other words, i is the rightmost interval that ends before j begins. We define p(j) = 0 if no request i < j is disjoint from j.
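Since the p(j) values will drive everything that follows, it is worth noting that they are easy to compute once the intervals are sorted. The following Python sketch is ours, not part of the text; it assumes the convention that intervals i and j are disjoint exactly when f_i ≤ s_j, and uses binary search over the sorted finish times, for O(n log n) time overall.

    import bisect

    def compute_p(intervals):
        # intervals: list of (start, finish, value), sorted by finish time.
        # Returns a 1-based array p with p[j] = largest index i < j such that
        # interval i ends no later than interval j starts, or 0 if none exists.
        finishes = [f for (_, f, _) in intervals]
        p = [0] * (len(intervals) + 1)            # p[0] is unused
        for j, (s, _, _) in enumerate(intervals, start=1):
            # count of intervals among 1..j-1 with finish time <= s
            p[j] = bisect.bisect_right(finishes, s, 0, j - 1)
        return p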
Now, given an instance of the Weighted Interval Scheduling Problem, let's consider an optimal solution O; presumably we don't know what it is, but we can reason about what it must look like. One obvious fact is the following dichotomy: either interval n (the last one) belongs to O, or it doesn't. Suppose we explore both sides of this dichotomy. If n ∈ O, then clearly no interval indexed strictly between p(n) and n can belong to O, because by the definition of p(n), the intervals p(n) + 1, p(n) + 2, ..., n − 1 all overlap interval n. Moreover, if n ∈ O, then O must include an optimal solution to the problem consisting of requests {1, ..., p(n)}; for if it didn't, we could replace O's choice of requests from {1, ..., p(n)} with a better one, with no danger of overlapping request n.
On the other hand, if n ∉ O, then O is simply equal to the optimal solution to the problem consisting of requests {1, ..., n − 1}. This is by completely analogous reasoning: we're assuming that O does not include request n, so if it did not choose the optimal set of requests from {1, ..., n − 1}, we could replace it with a better one.
All this suggests that finding the optimal solution on intervals {1, 2, ..., n} involves looking at the optimal solutions of smaller problems of the form {1, 2, ..., j}. Thus, for any value of j between 1 and n, let O_j denote the optimal solution to the problem consisting of requests {1, ..., j}, and let OPT(j) denote the value of this solution. (We define OPT(0) = 0, based on the convention that this is the optimum over an empty set of intervals.) The optimal solution we're seeking is precisely O_n, with value OPT(n). For the optimal solution O_j on {1, 2, ..., j}, our reasoning above (generalizing from the case j = n) says that either j ∈ O_j, in which case OPT(j) = v_j + OPT(p(j)), or j ∉ O_j, in which case OPT(j) = OPT(j − 1). Since these are precisely the two possible choices (j ∈ O_j or j ∉ O_j), we can further say

(6.1)  OPT(j) = max(v_j + OPT(p(j)), OPT(j − 1)).
And how do we decide whether j belongs to the optimal solution O_j? This too is easy: it belongs to the optimal solution if and only if the first of the options above is at least as good as the second; in other words,

(6.2)  Request j belongs to an optimal solution on the set {1, 2, ..., j} if and only if v_j + OPT(p(j)) ≥ OPT(j − 1).
These facts form the first crucial component on which a dynamic programming solution is based: a recurrence equation that expresses the optimal solution (or its value) in terms of the optimal solutions to smaller subproblems.
Despite the simple reasoning that led to this point, (6.1) is already a significant development. It directly gives us a recursive algorithm to compute OPT(n), assuming that we have already sorted the requests by finishing time and computed the values of p(j) for each j.
Compute-Opt(j)
  If j = 0 then
    Return 0
  Else
    Return max(v_j + Compute-Opt(p(j)), Compute-Opt(j − 1))
  Endif
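As a concrete reference point, here is a direct Python transcription of Compute-Opt; this sketch is ours, and it assumes v is a 1-based list of values (v[0] unused) and p is the array produced by a helper like compute_p above.

    def compute_opt(j, v, p):
        # Plain recursion from (6.1); correct, but exponential in the worst case.
        if j == 0:
            return 0
        return max(v[j] + compute_opt(p[j], v, p),
                   compute_opt(j - 1, v, p))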
The correctness of the algorithm follows directly by induction on j:

(6.3)  Compute-Opt(j) correctly computes OPT(j) for each j = 1, 2, ..., n.
Proof. By definition OPT(0) = 0. Now, take some j > 0, and suppose by way of induction that Compute-Opt(i) correctly computes OPT(i) for all i < j. By the induction hypothesis, we know that Compute-Opt(p(j)) = OPT(p(j)) and Compute-Opt(j − 1) = OPT(j − 1); and hence from (6.1) it follows that

  OPT(j) = max(v_j + Compute-Opt(p(j)), Compute-Opt(j − 1)) = Compute-Opt(j). ∎
Unfortunately, if we really implemented the algorithm Compute-Opt as just written, it would take exponential time to run in the worst case. The recursion unwinds into a tree of calls, and on instances where p(j) = j − 2 for every j, each call on j spawns calls on both j − 1 and j − 2, so the total number of calls grows like the Fibonacci numbers, exponentially in n.
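To make the blowup concrete, the following small sketch (our own, not from the text) counts the calls the plain recursion issues on a hypothetical family of instances with p(j) = j − 2 for all j; the counts satisfy q(j) = 1 + q(j − 1) + q(j − 2), a Fibonacci-style recurrence.

    def calls(j):
        # Number of invocations of Compute-Opt when p(j) = j - 2 for every j.
        if j <= 0:
            return 1                  # a single base call, Compute-Opt(0)
        return 1 + calls(j - 1) + calls(j - 2)

    for n in (5, 10, 20, 30):
        print(n, calls(n))            # e.g., calls(5) = 25, calls(10) = 287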
Memoizing the Recursion
In fact, though, we're not so far from having a polynomial-time algorithm. A fundamental observation, which forms the second crucial component of a dynamic programming solution, is that our recursive algorithm Compute-Opt is really only solving n + 1 different subproblems: Compute-Opt(0), Compute-Opt(1), ..., Compute-Opt(n). The fact that it runs in exponential time as written is due only to the spectacular redundancy in the number of times it issues each of these calls.
How could we eliminate all this redundancy? We could store the value of Compute-Opt(j) in a globally accessible place the first time we compute it, and then simply use this precomputed value in place of all future recursive calls. This technique of saving values that have already been computed is referred to as memoization.
We implement the above strategy in the more “intelligent” procedure M-Compute-Opt. This procedure makes use of an array M[0 .. n]; M[j] starts with the value “empty,” but holds the value of M-Compute-Opt(j) as soon as it is first determined.
M-Compute-Opt(j)
  If j = 0 then
    Return 0
  Else if M[j] is not empty then
    Return M[j]
  Else
    Define M[j] = max(v_j + M-Compute-Opt(p(j)), M-Compute-Opt(j − 1))
    Return M[j]
  Endif
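In Python, the same memoization can be expressed with a dictionary standing in for the array M; again, this transcription is our own sketch.

    def m_compute_opt(j, v, p, M):
        # Memoized recursion: M caches OPT(j) the first time it is computed.
        if j == 0:
            return 0
        if j in M:                    # already computed: reuse the cached value
            return M[j]
        M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),
                   m_compute_opt(j - 1, v, p, M))
        return M[j]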
Analyzing the Memoized Version
Clearly, this looks very similar to our previous implementation of the algorithm; however, memoization has brought the running time way down.
(6.4)  The running time of M-Compute-Opt(n) is O(n) (assuming the input intervals are sorted by their finish times).

To see why, note that a single call takes constant time apart from the recursive calls it generates, and every invocation that does real work fills in a previously empty entry of M; since M has only n + 1 entries, the total number of calls, and hence the total running time, is O(n).
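Putting the pieces together, here is how the sketches above might be used end to end on a small, made-up instance (the intervals and values are our own illustration, not from the text):

    # Hypothetical instance: (start, finish, value), already sorted by finish time.
    intervals = [(0, 3, 2), (1, 5, 4), (4, 6, 4), (5, 8, 7), (7, 9, 2)]
    v = [0] + [val for (_, _, val) in intervals]    # 1-based value list
    p = compute_p(intervals)
    print(m_compute_opt(len(intervals), v, p, {}))  # prints 11 (intervals 2 and 4)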