changeset 31250:f957849b2ba5

doc: Minor addition to memoization section for recursive functions (bug #60860)
author Arun Giridhar <arungiridhar@gmail.com>
date Sat, 01 Oct 2022 09:21:52 -0400
parents de6fc38c78c6
children e78f6e2aa807
files doc/interpreter/vectorize.txi
diffstat 1 files changed, 32 insertions(+), 11 deletions(-) [+]
line wrap: on
line diff
--- a/doc/interpreter/vectorize.txi	Fri Nov 12 08:53:05 2010 +0100
+++ b/doc/interpreter/vectorize.txi	Sat Oct 01 09:21:52 2022 -0400
@@ -553,7 +553,7 @@
 @DOCSTRING(accumdim)
 
 @node Memoization
-@section Memoization Techniques
+@section Memoization
 
 Memoization is a technique to cache the results of slow function calls and
 return the cached value when the function is called with the same inputs again,
@@ -580,7 +580,7 @@
 
 In the above example, the first line creates a memoized version @code{foo2} of
 the function @code{foo}.  For simple functions with only trivial wrapping, this
-line can also be shortened to
+line can also be shortened to:
 @example
 @group
 foo2 = memoize (@@foo);
@@ -590,18 +590,39 @@
 The second line @code{z = foo2 (x, y);} calls that memoized version @code{foo2}
 instead of the original function, allowing @code{memoize} to intercept the call
 and replace it with a looked-up value from a table if the inputs have occurred
-before, instead of evaluating the original function again.  Note that this will
-not accelerate the @emph{first} call to the function but only subsequent calls.
+before, instead of evaluating the original function again.
+
+Note that memoization will not accelerate the @emph{first} call to the
+function but only subsequent calls with the same inputs.
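+
+As a rough illustration (assuming @code{foo} is slow enough for the
+difference to be visible), timing two consecutive calls with the same inputs
+shows the effect:
+@example
+@group
+tic; z = foo2 (x, y); toc   # first call: result is computed and cached
+tic; z = foo2 (x, y); toc   # second call: result comes from the cache
+@end group
+@end example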
 
 Note that due to the overhead incurred by @code{memoize} to create and manage
-the lookup tables for each function the user seeks to memoize, this technique
-is useful only for functions that take a significant time to execute, at least
-a few seconds.  Such functions can be replaced by table lookups taking only a
-millisecond or less, but if the original function itself was taking only
-milliseconds or microseconds, memoizing it will not speed it up.
+the lookup tables for each function, this technique is useful only for
+functions that take at least a couple of seconds to execute.  Such functions
+can be replaced by table lookups taking only a millisecond or so, but if the
+original function itself takes only milliseconds, memoizing it will not
+speed it up.
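+
+Conversely, as a minimal sketch (using a hypothetical, trivially cheap
+anonymous function), memoizing something that already runs in microseconds
+only adds lookup overhead:
+@example
+@group
+sq = memoize (@@(x) x.^2);
+tic; y = sq (7); toc   # lookup overhead dominates; no speedup
+tic; y = sq (7); toc
+@end group
+@end example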
+
+Recursive functions can be memoized as well.  The recursive calls should go
+through the memoized version, and a base case must end the recursion, as in
+this pattern:
+@example
+@group
+function z = foo (x, y)
+  persistent foo2 = memoize (@@foo);
+  foo2.CacheSize = 1e6;
 
-Octave's memoization also allows the user to clear the cache of lookup values
-when it is no longer needed, using the function @code{clearAllMemoizedCaches}.
+  if (x <= 1)
+    ## Base case: return without recursing.
+    z = y;
+  else
+    ## Call the memoized version, not foo itself, when recursing.
+    z = foo2 (x - 1, x * y);
+  endif
+endfunction
+@end group
+@end example
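+
+As a concrete sketch (using a hypothetical @code{fib} function for Fibonacci
+numbers, not part of Octave), the same pattern applied to a computation with
+overlapping subproblems looks like this:
+@example
+@group
+function z = fib (n)
+  persistent fib2 = memoize (@@fib);
+  if (n <= 1)
+    z = n;                            # base case
+  else
+    ## Recurse through the memoized version so repeated subproblems
+    ## are returned from the cache instead of being recomputed.
+    z = fib2 (n - 1) + fib2 (n - 2);
+  endif
+endfunction
+@end group
+@end example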
+
+The @code{CacheSize} can optionally be increased in anticipation of a large
+number of function calls, such as those made from inside a recursive
+function.  If the @code{CacheSize} is exceeded, the memoization tables are
+resized, causing a slowdown.  Increasing the @code{CacheSize} thus works like
+preallocation to speed up execution.
+
+The function @code{clearAllMemoizedCaches} clears the memoization tables when
+they are no longer needed.
 
 @DOCSTRING(clearAllMemoizedCaches)