# HG changeset patch
# User Arun Giridhar
# Date 1664534339 14400
# Node ID 3dae836c598c8b91dc9604aa470788cc11fa3853
# Parent 43a6be589387c5de9ab10b856331776d6b77326c
doc: Expand and edit documentation for memoization (bug #60860)

diff -r 43a6be589387 -r 3dae836c598c doc/interpreter/vectorize.txi
--- a/doc/interpreter/vectorize.txi Thu Sep 29 20:31:52 2022 -0400
+++ b/doc/interpreter/vectorize.txi Fri Sep 30 06:38:59 2022 -0400
@@ -555,9 +555,21 @@
 @node Memoization
 @section Memoization Techniques
 
-Memoization is a technique to cache the results of slow function calls and return the cached value when the function is called with the same inputs again, instead of reevaluating it. It is very common to replace function calls with lookup tables if the same inputs are happening over and over again in a known, predictable way. Memoization is, at its core, an extension of this practice where the lookup table is extended even during runtime for new arguments not seen previously. A basic theoretical background can be found on Wikipedia or any undergraduate-level computer science textbook.
+Memoization is a technique to cache the results of slow function calls and
+return the cached value when the function is called with the same inputs again,
+instead of reevaluating it. It is very common to replace function calls with
+lookup tables if the same inputs occur over and over again in a known,
+predictable way. Memoization is, at its core, an extension of this practice
+where the lookup table is extended even during runtime for new arguments not
+seen previously. A basic theoretical background can be found on Wikipedia or
+in any undergraduate-level computer science textbook.
 
-Octave's @code{memoize} function provides drop-in memoization functionality for any user function or Octave function, including compiled functions. To memoize a function @code{z = foo(x, y)}, use this pattern:
+Octave's @code{memoize} function provides drop-in memoization functionality for
+any user function or Octave function, including compiled functions.
+
+@DOCSTRING(memoize)
+
+To memoize a function @code{z = foo(x, y)}, use this general pattern:
 
 @example
 @group
@@ -566,18 +578,32 @@
 @end group
 @end example
 
-In the above example, the first line creates a memoized version @code{foo2} of the function @code{foo}. For simple functions with only trivial wrapping, this line can also be shortened to
+In the above example, the first line creates a memoized version @code{foo2} of
+the function @code{foo}. For simple functions with only trivial wrapping, this
+line can also be shortened to
 
 @example
 @group
 foo2 = memoize (@@foo);
 @end group
 @end example
 
-The second line @code{z = foo2 (x, y);} calls that memoized version @code{foo2} instead of the original function, allowing @code{memoize} to intercept the call and replace it with a looked-up value from a table if the inputs have occurred before, instead of evaluating the original function again. Note that this will not accelerate the @emph{first} call to the function but only subsequent calls.
+The second line @code{z = foo2 (x, y);} calls that memoized version @code{foo2}
+instead of the original function, allowing @code{memoize} to intercept the call
+and, if the inputs have occurred before, return a looked-up value from its
+table rather than evaluating the original function again. Note that this will
+not accelerate the @emph{first} call to the function but only subsequent calls.
 
-Note that due to the overhead incurred by @code{memoize} to create and manage the lookup tables for each function the user seeks to memoize, this technique is useful only for functions that take a significant time to execute, at least a few seconds. Such functions can be replaced by table lookups taking only a millisecond or less, but if the original function itself was taking only milliseconds or microseconds, memoizing it will not speed it up.
+Note that due to the overhead incurred by @code{memoize} to create and manage
+the lookup tables for each function the user seeks to memoize, this technique
+is useful only for functions that take a significant time to execute, at least
+a few seconds. Such function calls can then be replaced by table lookups
+taking only a millisecond or less, but if the original function itself takes
+only milliseconds or microseconds, memoizing it will not speed it up.
 
-Octave's memoization also allows the user to clear the cache of lookup values when it is no longer needed, using the function @code{clearAllMemoizedCaches}.
+Octave's memoization also allows the user to clear the cache of lookup values
+when it is no longer needed, using the function @code{clearAllMemoizedCaches}.
+
+@DOCSTRING(clearAllMemoizedCaches)
 @node Miscellaneous Techniques
 @section Miscellaneous Techniques
diff -r 43a6be589387 -r 3dae836c598c scripts/miscellaneous/clearAllMemoizedCaches.m
--- a/scripts/miscellaneous/clearAllMemoizedCaches.m Thu Sep 29 20:31:52 2022 -0400
+++ b/scripts/miscellaneous/clearAllMemoizedCaches.m Fri Sep 30 06:38:59 2022 -0400
@@ -27,6 +27,10 @@
 ## @deftypefn {} {} clearAllMemoizedCaches ()
 ## Clear all memoized caches.
 ##
+## Memoization maintains internal tables of which functions have been called
+## with which inputs, along with the corresponding results. This function
+## clears those tables to free memory or to start afresh with empty caches.
+##
 ## @seealso{memoize}
 ## @end deftypefn
 
diff -r 43a6be589387 -r 3dae836c598c scripts/miscellaneous/memoize.m
--- a/scripts/miscellaneous/memoize.m Thu Sep 29 20:31:52 2022 -0400
+++ b/scripts/miscellaneous/memoize.m Fri Sep 30 06:38:59 2022 -0400
@@ -26,6 +26,40 @@
 ## -*- texinfo -*-
 ## @deftypefn {} {@var{mem_fcn_handle} =} memoize (@var{fcn_handle})
 ##
+## Create a memoized version @var{mem_fcn_handle} of function
+## @var{fcn_handle}.
+##
+## Each call to the memoized version @var{mem_fcn_handle} checks the inputs
+## against an internally maintained table, and if those inputs have occurred
+## previously, the result of the function call is returned from the table
+## instead of being computed again by the full function. This speeds up the
+## execution of functions that are called with the same inputs multiple times.
+##
+## For example, here we take a slow user-written function named @code{slow_fcn}
+## and memoize it into a new handle @code{cyc}. The first execution of each
+## version takes about the same time, but subsequent executions of the memoized
+## version return the previously computed value, reducing 2.4 seconds of
+## runtime to only 2.4 milliseconds. The final check verifies that both
+## versions returned the same result.
+##
+## @example
+## @group
+## >> tic; @var{p} = slow_fcn (5040); toc
+## Elapsed time is 2.41244 seconds.
+## >> tic; @var{p} = slow_fcn (5040); toc
+## Elapsed time is 2.41542 seconds.
+##
+## >> cyc = memoize (@@slow_fcn);
+## >> tic; @var{r} = cyc (5040); toc
+## Elapsed time is 2.42609 seconds.
+## >> tic; @var{r} = cyc (5040); toc
+## Elapsed time is 0.00236511 seconds.
+##
+## >> all (@var{p} == @var{r})
+## ans = 1
+## @end group
+## @end example
+##
 ## @seealso{clearAllMemoizedCaches}
 ## @end deftypefn
 
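
As a quick end-to-end illustration of the workflow documented in the patch above (this example is not part of the changeset itself), the sketch below memoizes the built-in @code{primes} function and then clears the cache. The choice of @code{primes} as the slow function and the variable names are arbitrary; any sufficiently slow, deterministic function behaves the same way.

    ## Create a memoized wrapper around a slow, deterministic function.
    fast_primes = memoize (@primes);

    tic; p1 = fast_primes (20e6); toc   # first call: primes() runs as usual
    tic; p2 = fast_primes (20e6); toc   # same input: result comes from the cache

    ## The cached result is identical to the freshly computed one.
    assert (isequal (p1, p2));

    ## Discard all memoized results once they are no longer needed.
    clearAllMemoizedCaches ();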