Memoization is a simple yet smart technique to make recursive computations with overlapping subproblems really fast. Using this technique, as the program runs we make calculations only if they have not been made already. Each time a new calculation happens, we cache the result and reuse it for subsequent calls with the same input. This technique is useful only if the computations are expected to return the same result each time for a given input. Our rod-cutting problem fits that requirement: the profit is the same for a given length and a given set of prices, no matter how many times we ask. Let's memoize the result of the profit calculation.
When seeking the profit for a sublength, we can skip the computation if the profit for that length has already been computed. This will speed up the program, as the redundant calls to find the profit will turn into a quick lookup in a hash map. Sounds good, but it would be nice to have reusable code for that.
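To see the idea in isolation before we build the reusable code, here's a minimal hand-rolled sketch of ours (not part of the rod-cutting example) that memoizes a Fibonacci computation with a HashMap:

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemo {
  private static final Map<Integer, Long> cache = new HashMap<>();

  public static long fib(final int n) {
    if (n <= 1) return n;

    // Reuse a cached result if this input was seen before...
    final Long cached = cache.get(n);
    if (cached != null) return cached;

    // ...otherwise compute it once and store it for later calls.
    final long result = fib(n - 1) + fib(n - 2);
    cache.put(n, result);
    return result;
  }

  public static void main(final String[] args) {
    System.out.println(fib(50)); // 12586269025, computed almost instantly
  }
}
```

Without the cache, the naive recursion recomputes the same subproblems exponentially many times; with it, each input is computed exactly once. The drawback is that the caching logic is tangled into the function itself, which is exactly what the Memoizer below factors out.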
Let’s create a reusable class; we’ll call it the Memoizer. It does not yet exist, but we’ll pretend it does and write the code to use it. Let’s refactor the maxProfit() method to use a static method, callMemoized(), of the Memoizer class.
public int maxProfit(final int rodLength) {
  BiFunction<Function<Integer, Integer>, Integer, Integer> compute =
    (func, length) -> {
      int profit = (length <= prices.size()) ? prices.get(length - 1) : 0;
      for(int i = 1; i < length; i++) {
        int priceWhenCut = func.apply(i) + func.apply(length - i);
        if(profit < priceWhenCut) profit = priceWhenCut;
      }
      return profit;
    };
  return callMemoized(compute, rodLength);
}
Let’s look at the crux of the design before we dig into the code. We create a function and memoize it. The memoized version will look up values before making a call to the actual implementation. Let’s figure out how we achieve this.
In the maxProfit() method we call the (yet-to-be-implemented) Memoizer's callMemoized() method. This method takes two arguments: a lambda expression and the input value.
The lambda expression has two parameters, a reference to the memoized version of the function and the incoming parameter. Within the lambda expression we perform our task, and when it's time to recurse we route the call through the memoized reference. That call will return quickly if the value has been cached or memoized. Otherwise, it will recursively route the call back to this lambda expression to compute the value for that length.
The missing piece of the puzzle is the memoized reference we receive from the callMemoized() method, so let’s look at the Memoizer class’s implementation.
recur/fpij/Memoizer.java
public class Memoizer {
  public static <T, R> R callMemoized(
      final BiFunction<Function<T, R>, T, R> function, final T input) {
    Function<T, R> memoized = new Function<T, R>() {
      private final Map<T, R> store = new HashMap<>();
      public R apply(final T input) {
        return store.computeIfAbsent(input, key -> function.apply(this, key));
      }
    };
    return memoized.apply(input);
  }
}
The Memoizer has just one short function. In callMemoized() we create an implementation of Function in which we check to see if the solution for a given input is already present. We use the newly added computeIfAbsent() method of Map. If a value is present for the given input, we return it; otherwise we forward the call to the intended function and send a reference to the memoized function so the intended function can swing back here for subsequent computations.
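As a quick standalone illustration of the computeIfAbsent() behavior the Memoizer relies on (this example is ours, not part of the Memoizer), note that the mapping function runs only when the key is absent:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class ComputeIfAbsentDemo {
  public static void main(final String[] args) {
    final Map<String, Integer> store = new HashMap<>();
    final AtomicInteger calls = new AtomicInteger();

    // First lookup: "java" is absent, so the mapping function runs
    // and its result is stored in the map.
    final int first = store.computeIfAbsent("java", key -> {
      calls.incrementAndGet();
      return key.length();
    });

    // Second lookup: "java" is present, so the mapping function is skipped.
    final int second = store.computeIfAbsent("java", key -> {
      calls.incrementAndGet();
      return key.length();
    });

    System.out.println(first + " " + second + ", computed " + calls + " time(s)");
    // prints: 4 4, computed 1 time(s)
  }
}
```

This is exactly the lookup-or-compute step that turns the redundant recursive calls into fast cache hits.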
This version of the maxProfit() method nicely encapsulates the details of memoization. The call to this method looks the same as the previous version:
System.out.println(rodCutter.maxProfit(5));
System.out.println(rodCutter.maxProfit(22));
Let's run the memoized version and ensure the profit reported is the same as in the previous version.
10
44
The profit is consistent between the versions, but the execution speeds are a world apart. The memoized version took less than 0.15 seconds, compared to around 45 seconds for the previous version. With this memoized version, we can easily bump up our rod lengths to large values and still take only a fraction of a second to get the results. For example, a length of 500 makes no dent in the execution time; it's blazingly fast.
In this chapter we used lambda expressions and infinite Streams to implement TCO and memoization. The examples show us how the new features in Java 8 can come together to create powerful solutions. You can use similar techniques to create nifty solutions to your own complex problems.
Recap
Recursion is a valuable tool in programming, but a simple implementation of recursion is often not useful for practical problems. Functional interfaces, lambda expressions, and infinite Streams can help us implement tail-call optimization to make recursion feasible in such cases. Furthermore, we can combine recursion with memoization to make the execution of overlapping recursive computations really fast.
In the next chapter we’ll explore a practical example that employs lambda expressions and then we’ll parallelize it with little effort.
CHAPTER 8
Programs must be written for people to read, and only incidentally for machines to execute.
➤ Hal Abelson and Jerry Sussman
Composing with Lambda Expressions
With Java 8 we have two powerful tools: the object-oriented approach and the functional style. They are not mutually exclusive; they can work together for the greater good.
In OOP we often mutate state. If we combine OOP with the functional style, we can instead transform objects by passing lightweight objects through a series of cohesive functions. This can help us create code that's easier to extend—to produce a different result we simply alter the way the functions are composed. We can use the functions, in addition to the objects, as components to program with.
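For instance, the JDK's java.util.function.Function interface provides andThen() and compose() for wiring functions together. Here's a small sketch with illustrative names of our own:

```java
import java.util.function.Function;

public class ComposeDemo {
  public static void main(final String[] args) {
    final Function<Integer, Integer> doubleIt = n -> n * 2;
    final Function<Integer, Integer> addTen = n -> n + 10;

    // andThen() applies doubleIt first, then addTen.
    final Function<Integer, Integer> doubleThenAdd = doubleIt.andThen(addTen);
    System.out.println(doubleThenAdd.apply(5)); // 20

    // compose() reverses the order: addTen runs first, then doubleIt.
    final Function<Integer, Integer> addThenDouble = doubleIt.compose(addTen);
    System.out.println(addThenDouble.apply(5)); // 30
  }
}
```

Neither function changed; only the composition did—which is precisely the kind of flexibility we get from treating functions as components.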
In this chapter we look into function composition. Then we use it to create a practical working example of the popular MapReduce pattern, where we scatter independent calculations and gather the results to create the solution.
As a final step, we parallelize those calculations almost effortlessly, thanks to the ubiquitous JDK library.
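As a tiny preview of that scatter-gather shape (a sketch of ours, not the chapter's example), a stream pipeline maps each element independently and then reduces the partial results; swapping stream() for parallelStream() scatters the map step across cores:

```java
import java.util.Arrays;
import java.util.List;

public class MapReduceSketch {
  public static void main(final String[] args) {
    final List<Integer> values = Arrays.asList(1, 2, 3, 4, 5);

    final int sumOfSquares = values.parallelStream()
      .map(n -> n * n)          // scatter: independent per-element work
      .reduce(0, Integer::sum); // gather: combine partial results

    System.out.println(sumOfSquares); // 55
  }
}
```

Because each map step is independent and the reduction is associative, the parallel version produces the same answer as the sequential one.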