# HG changeset patch
# User Arun Giridhar
# Date 1664630512 14400
# Node ID f957849b2ba52e5f4245f8259a1d2f7abfff20bf
# Parent  de6fc38c78c67ecb6b7fe51fdebee76c8649fd45
doc: Minor addition to memoization section for recursive functions (bug #60860)

diff -r de6fc38c78c6 -r f957849b2ba5 doc/interpreter/vectorize.txi
--- a/doc/interpreter/vectorize.txi	Fri Nov 12 08:53:05 2010 +0100
+++ b/doc/interpreter/vectorize.txi	Sat Oct 01 09:21:52 2022 -0400
@@ -553,7 +553,7 @@
 @DOCSTRING(accumdim)
 
 @node Memoization
-@section Memoization Techniques
+@section Memoization
 
 Memoization is a technique to cache the results of slow function calls and
 return the cached value when the function is called with the same inputs again,
@@ -580,7 +580,7 @@
 
 In the above example, the first line creates a memoized version @code{foo2} of
 the function @code{foo}.  For simple functions with only trivial wrapping, this
-line can also be shortened to
+line can also be shortened to:
 @example
 @group
 foo2 = memoize (@@foo);
@@ -590,18 +590,39 @@
 The second line @code{z = foo2 (x, y);} calls that memoized version @code{foo2}
 instead of the original function, allowing @code{memoize} to intercept the call
 and replace it with a looked-up value from a table if the inputs have occurred
-before, instead of evaluating the original function again.  Note that this will
-not accelerate the @emph{first} call to the function but only subsequent calls.
+before, instead of evaluating the original function again.
+
+Note that this will not accelerate the @emph{first} call to the function but
+only subsequent calls.
 
 Note that due to the overhead incurred by @code{memoize} to create and manage
-the lookup tables for each function the user seeks to memoize, this technique
-is useful only for functions that take a significant time to execute, at least
-a few seconds.  Such functions can be replaced by table lookups taking only a
-millisecond or less, but if the original function itself was taking only
-milliseconds or microseconds, memoizing it will not speed it up.
+the lookup tables for each function, this technique is useful only for
+functions that take at least a couple of seconds to execute.  Such functions
+can be replaced by table lookups taking only a millisecond or so, but if the
+original function itself was taking only milliseconds, memoizing it will not
+speed it up.
+
+Recursive functions can be memoized as well, using a pattern like:
+@example
+@group
+function z = foo (x, y)
+  persistent foo2 = memoize (@@foo);
+  foo2.CacheSize = 1e6;
 
-Octave's memoization also allows the user to clear the cache of lookup values
-when it is no longer needed, using the function @code{clearAllMemoizedCaches}.
+  ## Call the memoized version when recursing
+  z = foo2 (x, y);
+endfunction
+@end group
+@end example
+
+The @code{CacheSize} can optionally be increased in anticipation of a large
+number of function calls, such as from inside a recursive function.  If
+@code{CacheSize} is exceeded, the memoization tables are resized, causing a
+slowdown.  Increasing the @code{CacheSize} thus works like preallocation to
+speed up execution.
+
+The function @code{clearAllMemoizedCaches} clears the memoization tables when
+they are no longer needed.
 
 @DOCSTRING(clearAllMemoizedCaches)
 
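The recursion pattern this changeset documents, routing every recursive call through the memoized wrapper so that each distinct input is evaluated at most once, can be illustrated with a minimal runnable sketch. Here Python's standard-library `functools.lru_cache` stands in for Octave's `memoize`, and the function name `fib` is an illustrative example, not part of the patch:

```python
from functools import lru_cache

# lru_cache plays the role of memoize here: results are cached keyed by
# the call arguments, so repeated calls with the same input are answered
# from the table without recomputation.
@lru_cache(maxsize=None)
def fib(n):
    # Each recursive call goes through the memoized wrapper, so every
    # subproblem from 0..n is computed at most once.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(90))                    # 2880067194370816120
print(fib.cache_info().currsize)  # 91 cached inputs (0 through 90)
```

Without the cache this call would need on the order of 2^90 recursive invocations; memoizing the recursive entry point, as the patch's Octave example does with the persistent `foo2` wrapper, reduces it to one evaluation per distinct input.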