
@c Copyright (C) 2012-2022 The Octave Project Developers
@c
@c This file is part of Octave.
@c
@c Octave is free software: you can redistribute it and/or modify it
@c under the terms of the GNU General Public License as published by
@c the Free Software Foundation, either version 3 of the License, or
@c (at your option) any later version.
@c
@c Octave is distributed in the hope that it will be useful, but
@c WITHOUT ANY WARRANTY; without even the implied warranty of
@c MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
@c GNU General Public License for more details.
@c
@c You should have received a copy of the GNU General Public License
@c along with Octave; see the file COPYING.  If not, see
@c <https://www.gnu.org/licenses/>.

@node Vectorization and Faster Code Execution
@chapter Vectorization and Faster Code Execution
@cindex vectorization
@cindex vectorize

Vectorization is a programming technique that uses vector operations
instead of element-by-element loop-based operations.  Besides frequently
producing more succinct Octave code, vectorization also allows for better
optimization in the subsequent implementation.  The optimizations may occur
either in Octave's own Fortran, C, or C++ internal implementation, or even at a
lower level depending on the compiler and external numerical libraries used to
build Octave.  The ultimate goal is to make use of your hardware's vector
instructions if possible or to perform other optimizations in software.

Vectorization is not a concept unique to Octave, but it is particularly
important because Octave is a matrix-oriented language.  Vectorized
Octave code will often see a dramatic speedup (10X--100X) over
equivalent loop-based code.

This chapter discusses vectorization and other techniques for writing faster
code.

@menu
* Basic Vectorization::        Basic techniques for code optimization
* Broadcasting::               Broadcasting operations
* Function Application::       Applying functions to arrays, cells, and structs
* Accumulation::               Accumulation functions
* Memoization::                Memoization techniques
* Miscellaneous Techniques::   Other techniques for speeding up code
* Examples::
@end menu

@node Basic Vectorization
@section Basic Vectorization

To a very good first approximation, the goal in vectorization is to
write code that avoids loops and uses whole-array operations.  As a
trivial example, consider

@example
@group
for i = 1:n
  for j = 1:m
    c(i,j) = a(i,j) + b(i,j);
  endfor
endfor
@end group
@end example

@noindent
compared to the much simpler

@example
c = a + b;
@end example

@noindent
This isn't merely easier to write; it is also internally much easier to
optimize.  Octave delegates this operation to an underlying
implementation which, among other optimizations, may use special vector
hardware instructions or could conceivably even perform the additions in
parallel.  In general, if the code is vectorized, the underlying
implementation has more freedom about the assumptions it can make
in order to achieve faster execution.

This is especially important for loops with ``cheap'' bodies.  Often it
suffices to vectorize just the innermost loop to get acceptable
performance.  A general rule of thumb is that the ``order'' of the
vectorized body (the amount of work it performs) should be greater than
or equal to the ``order'' of the enclosing loop.

As a less trivial example, instead of

@example
@group
for i = 1:n-1
  a(i) = b(i+1) - b(i);
endfor
@end group
@end example

@noindent
write

@example
a = b(2:n) - b(1:n-1);
@end example

This shows an important general technique: use arrays for indexing
instead of looping over an index variable.  @xref{Index Expressions}.
Also use boolean indexing generously.  If a condition must be tested
for each element, it can often be written as a boolean index instead.
For instance, instead of

@example
@group
for i = 1:n
  if (a(i) > 5)
    a(i) -= 20;
  endif
endfor
@end group
@end example

@noindent
write

@example
a(a>5) -= 20;
@end example

@noindent
which exploits the fact that @code{a > 5} produces a boolean index.

Use elementwise vector operators whenever possible to avoid looping
(operators like @code{.*} and @code{.^}).  @xref{Arithmetic Ops}.

Also exploit broadcasting in these elementwise operators both to avoid
looping and unnecessary intermediate memory allocations.
@xref{Broadcasting}.
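
For example, to normalize each column of a matrix by its column sum,
broadcasting makes an explicit @code{repmat} copy unnecessary (a small
sketch; @code{a} is any matrix with nonzero column sums):

@example
@group
a ./ repmat (sum (a), rows (a), 1)  # explicit temporary copy
a ./ sum (a)                        # broadcasting, no temporary
@end group
@end example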

Use built-in and library functions if possible.  Built-in and compiled
functions are very fast.  Even with an m-file library function, chances
are good that it is already optimized, or will be optimized more in a
future release.

For instance, even better than

@example
a = b(2:n) - b(1:n-1);
@end example

@noindent
is

@example
a = diff (b);
@end example

Most Octave functions are written with vector and array arguments in
mind.  If you find yourself writing a loop with a very simple operation,
chances are that such a function already exists.  The following
functions occur frequently in vectorized code (a brief example follows
the list):

@itemize @bullet
@item
Index manipulation

@itemize
@item
find

@item
sub2ind

@item
ind2sub

@item
sort

@item
unique

@item
lookup

@item
ifelse / merge
@end itemize

@item
Repetition

@itemize
@item
repmat

@item
repelems
@end itemize

@item
Vectorized arithmetic

@itemize
@item
sum

@item
prod

@item
cumsum

@item
cumprod

@item
sumsq

@item
diff

@item
dot

@item
cummax

@item
cummin
@end itemize

@item
Shape of higher dimensional arrays

@itemize
@item
reshape

@item
resize

@item
permute

@item
squeeze

@item
deal
@end itemize

@end itemize
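
For instance, a running maximum written as a loop (a deliberately naive
sketch) collapses into a single call to @code{cummax}:

@example
@group
for i = 1:n
  m(i) = max (b(1:i));  # running maximum, quadratic work
endfor

m = cummax (b);         # same result in a single pass
@end group
@end example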

@node Broadcasting
@section Broadcasting
@cindex broadcast
@cindex broadcasting
@cindex @nospell{BSX}
@cindex recycling
@cindex SIMD

Broadcasting refers to how Octave binary operators and functions behave
when their matrix or array operands or arguments differ in size.  Since
version 3.6.0, Octave automatically broadcasts vectors, matrices,
and arrays when using elementwise binary operators and functions.
Broadly speaking, the smaller array is ``broadcast'' across the larger
one until the two have compatible shapes.  The rule is that
corresponding array dimensions must either

@enumerate
@item
be equal, or

@item
one of them must be 1.
@end enumerate

@noindent
If all dimensions are equal, no broadcasting occurs and ordinary
element-by-element arithmetic takes place.  If the numbers of dimensions
differ, missing trailing dimensions are treated as 1.  When one of the
dimensions is 1, the array with that singleton dimension is conceptually
copied along that dimension until it matches the dimension of the other
array.  For example, consider

@example
@group
x = [1 2 3;
     4 5 6;
     7 8 9];

y = [10 20 30];

x + y
@end group
@end example

@noindent
Without broadcasting, @code{x + y} would be an error because the dimensions
do not agree.  However, with broadcasting it is as if the following
operation were performed:

@example
@group
x = [1 2 3
     4 5 6
     7 8 9];

y = [10 20 30
     10 20 30
     10 20 30];

x + y
@result{}    11   22   33
      14   25   36
      17   28   39
@end group
@end example

@noindent
That is, the smaller array of size @code{[1 3]} gets copied along the
singleton dimension (the number of rows) until it is @code{[3 3]}.  No
actual copying takes place, however.  The internal implementation reuses
elements along the necessary dimension in order to achieve the desired
effect without copying in memory.

Both arrays can be broadcast across each other at once; for example,
computing all pairwise differences of the elements of a vector with
itself:

@example
@group
y - y'
@result{}    0   10   20
    -10    0   10
    -20  -10    0
@end group
@end example

@noindent
Here the vectors of size @code{[1 3]} and @code{[3 1]} both get
broadcast into matrices of size @code{[3 3]} before ordinary matrix
subtraction takes place.

A special case of broadcasting that may be familiar is when all
dimensions of the array being broadcast are 1, i.e., the array is a
scalar.  Operations like @code{x - 42} and @code{max (x, 2)} are thus
basic examples of broadcasting.

For a higher-dimensional example, suppose @code{img} is an RGB image of
size @code{[m n 3]} and we wish to multiply each color by a different
scalar.  The following code accomplishes this with broadcasting,

@example
img .*= permute ([0.8, 0.9, 1.2], [1, 3, 2]);
@end example

@noindent
Note the use of @code{permute} to align the dimensions of the
@code{[0.8, 0.9, 1.2]} vector with @code{img}.

For functions that are not written with broadcasting semantics,
@code{bsxfun} can be useful for coercing them to broadcast.

@DOCSTRING(bsxfun)
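
For example, subtracting the column means from each column of a matrix
can be written with @code{bsxfun}; the result is identical to the
directly broadcast expression (a brief sketch):

@example
@group
centered = bsxfun (@@minus, x, mean (x));  # forced broadcasting
centered = x - mean (x);                   # automatic broadcasting
@end group
@end example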

Broadcasting is applied only if one of the two conditions above holds
for every dimension.  If two corresponding dimensions differ and neither
is 1, broadcasting does not apply:

@example
@group
x = [1 2 3
     4 5 6];
y = [10 20
     30 40];
x + y
@end group
@end example

@noindent
This will produce an error about nonconformant arguments.

Besides common arithmetic operations, several functions of two arguments
also broadcast.  The full list of functions and operators that broadcast
is

@example
      plus      +
      minus     -
      times     .*
      rdivide   ./
      ldivide   .\
      power     .^
      lt        <
      le        <=
      eq        ==
      gt        >
      ge        >=
      ne        !=  ~=
      and       &
      or        |
      atan2
      hypot
      max
      min
      mod
      rem
      xor

      +=  -=  .*=  ./=  .\=  .^=  &=  |=
@end example

Here is a real example of the power of broadcasting.  The Floyd-Warshall
algorithm is used to calculate the shortest path lengths between every pair of
vertices in a graph.  A naive implementation for a graph adjacency matrix of
order @var{n} might look like this:

@example
@group
for k = 1:n
  for i = 1:n
    for j = 1:n
      dist(i,j) = min (dist(i,j), dist(i,k) + dist(k,j));
    endfor
  endfor
endfor
@end group
@end example

Upon vectorizing the innermost loop, it might look like this:

@example
@group
for k = 1:n
  for i = 1:n
    dist(i,:) = min (dist(i,:), dist(i,k) + dist(k,:));
  endfor
endfor
@end group
@end example

Using broadcasting in both directions, it looks like this:

@example
@group
for k = 1:n
  dist = min (dist, dist(:,k) + dist(k,:));
endfor
@end group
@end example

For a given graph with 100 vertices, the runtimes of the three
techniques are 7.3 seconds for the naive code, 87 milliseconds for the
singly vectorized code, and 1.3 milliseconds for the fully broadcast
code.  For a graph with 1000 vertices, vectorization takes 11.7 seconds
while broadcasting takes only 1.15 seconds.  It is therefore generally
worth writing code with broadcasting semantics for performance.

However, beware of resorting to broadcasting if a simpler operation will
suffice.  For matrices @var{a} and @var{b}, consider the following:

@example
@var{c} = sum (permute (@var{a}, [1, 3, 2]) .* permute (@var{b}, [3, 2, 1]), 3);
@end example

@noindent
This operation broadcasts the two matrices with permuted dimensions
across each other during elementwise multiplication in order to obtain a
larger 3-D array, and this array is then summed along the third dimension.
A moment of thought will prove that this operation is simply the much
faster ordinary matrix multiplication, @code{@var{c} = @var{a}*@var{b};}.

A note on terminology: ``broadcasting'' is the term popularized by the
NumPy numerical environment for the Python programming language.  In
other programming languages and environments, broadcasting may also be
known as @emph{binary singleton expansion} (@nospell{BSX}, in
@sc{matlab}, and the origin of the name of the @code{bsxfun} function),
@emph{recycling} (the R programming language),
@emph{single-instruction multiple data} (SIMD), or @emph{replication}.

@subsection Broadcasting and Legacy Code

The new broadcasting semantics almost never affect code that worked in
previous versions of Octave.  Consequently, code inherited from
@sc{matlab} that ran in previous versions of Octave should still work
without change.  The only exception is code such as

@example
@group
try
  c = a.*b;
catch
  c = a.*a;
end_try_catch
@end group
@end example

@noindent
that may have relied on matrices of different size producing an error.
Because such an operation is now valid Octave syntax, it will no longer
produce an error.  Instead, use the following code:

@example
@group
if (isequal (size (a), size (b)))
  c = a .* b;
else
  c = a .* a;
endif
@end group
@end example


@node Function Application
@section Function Application
@cindex map
@cindex apply
@cindex function application

As a general rule, functions should be written with matrix arguments in
mind and should operate on whole matrices in a vectorized manner.
Sometimes, writing functions this way appears difficult or impossible
for various reasons.  For those situations, Octave provides facilities
for applying a function to each element of an array, cell, or struct.

@DOCSTRING(arrayfun)

@DOCSTRING(spfun)

@DOCSTRING(cellfun)

@DOCSTRING(structfun)

Consistent with earlier advice, use Octave built-in functions whenever
possible for the best performance.  This advice applies especially to
the four functions above.  For example, when adding two arrays together
element-by-element one could pass a handle to the built-in addition
function @code{@@plus} or define an anonymous function
@code{@@(x,y) x + y}.  But the anonymous function is 60% slower than the
handle to the built-in.  @xref{Operator Overloading}, for a list of
basic functions which might be used in place of anonymous ones.
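
The difference can be seen in a short sketch of element-by-element
addition (the plain broadcast expression @code{x + y} remains the
fastest option of all):

@example
@group
a = arrayfun (@@plus, x, y);          # handle to built-in function
a = arrayfun (@@(p, q) p + q, x, y);  # anonymous function, slower
@end group
@end example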

@node Accumulation
@section Accumulation

Whenever the elements of an array can be grouped by an index during a
computation, accumulation functions can be useful.

@DOCSTRING(accumarray)

@DOCSTRING(accumdim)
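
For example, @code{accumarray} sums all values that share a subscript.
Here is a small sketch, a hypothetical tally of votes per candidate:

@example
@group
id = [1; 2; 1; 3; 2];          # candidate chosen on each ballot
votes = [10; 20; 30; 40; 50];  # votes cast for that candidate
accumarray (id, votes)
@result{}   40
     70
     40
@end group
@end example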

@node Memoization
@section Memoization Techniques

Memoization is a technique to cache the results of slow function calls
and return the cached value, instead of reevaluating the function, when
it is called with the same inputs again.  It is common to replace
function calls with lookup tables when the same inputs occur over and
over again in a known, predictable way.  Memoization is, at its core, an
extension of this practice in which the lookup table grows at runtime as
new, previously unseen arguments arrive.  A basic theoretical background
can be found on Wikipedia or in any undergraduate-level computer science
textbook.

Octave's @code{memoize} function provides drop-in memoization functionality for
any user function or Octave function, including compiled functions.

@DOCSTRING(memoize)

To memoize a function @code{z = foo (x, y)}, use this general pattern:

@example
@group
foo2 = memoize (@@(x, y) foo (x, y));
z = foo2 (x, y);
@end group
@end example

In the above example, the first line creates a memoized version
@code{foo2} of the function @code{foo}.  For simple cases with only
trivial wrapping, this line can be shortened to

@example
@group
foo2 = memoize (@@foo);
@end group
@end example

The second line, @code{z = foo2 (x, y);}, calls the memoized version
@code{foo2} instead of the original function.  This allows
@code{memoize} to intercept the call and, if the same inputs have
occurred before, return a value looked up from a table instead of
reevaluating the original function.  Note that this accelerates only
subsequent calls, not the @emph{first} call.

Because @code{memoize} incurs overhead to create and manage the lookup
tables for each memoized function, this technique is useful only for
functions that take significant time to execute, at least a few
seconds.  Such functions can be replaced by table lookups taking a
millisecond or less, but if the original function itself took only
milliseconds or microseconds, memoizing it will not speed it up.
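
The following sketch illustrates the effect, using @code{pause} to stand
in for an expensive computation (the function name @code{slowsq} is
hypothetical):

@example
@group
function z = slowsq (n)
  pause (2);  # placeholder for an expensive computation
  z = n^2;
endfunction

slowsq2 = memoize (@@slowsq);
z = slowsq2 (5);  # first call runs slowsq, takes about 2 seconds
z = slowsq2 (5);  # same input again, answered from the cache
@end group
@end example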

Octave's memoization also allows the user to clear the cache of lookup values
when it is no longer needed, using the function @code{clearAllMemoizedCaches}.

@DOCSTRING(clearAllMemoizedCaches)

@node Miscellaneous Techniques
@section Miscellaneous Techniques
@cindex execution speed
@cindex speedups
@cindex optimization

Here are some other ways of improving the execution speed of Octave
programs.

@itemize @bullet

@item Avoid computing costly intermediate results multiple times.
Octave currently does not eliminate common subexpressions.  Also, certain
internal computation results are cached for variables.  For instance, if
a matrix variable is used multiple times as an index, checking the
indices (and internal conversion to integers) is only done once.
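
For instance, hoist an invariant subexpression out of a loop (a small
sketch, where @code{f} is some hypothetical scalar function):

@example
@group
s = sum (x.^2);      # computed once, outside the loop
y = zeros (1, n);
for i = 1:n
  y(i) = f (i) / s;  # instead of f (i) / sum (x.^2)
endfor
@end group
@end example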

@item Be aware of lazy copies (copy-on-write).
@cindex copy-on-write
@cindex COW
@cindex memory management
When a copy of an object is created, the data is not immediately copied, but
rather shared.  The actual copying is postponed until the copied data needs to
be modified.  For example:

@example
@group
a = zeros (1000); # create a 1000x1000 matrix
b = a; # no copying done here
b(1) = 1; # copying done here
@end group
@end example

Lazy copying applies to whole Octave objects such as matrices, cells,
and structs, and also to individual cell or struct elements (but not to
array elements).

Additionally, index expressions also use lazy copying when Octave can
determine that the indexed portion is contiguous in memory.  For example:

@example
@group
a = zeros (1000); # create a 1000x1000 matrix
b = a(:,10:100);  # no copying done here
b = a(10:100,:);  # copying done here
@end group
@end example

This applies to arrays (matrices), cell arrays, and structs indexed
using @samp{()}.  Index expressions generating comma-separated lists can also
benefit from shallow copying in some cases.  In particular, when @var{a} is a
struct array, expressions like @code{@{a.x@}, @{a(:,2).x@}} will use lazy
copying, so that data can be shared between a struct array and a cell array.

Most indexing expressions do not live longer than their parent objects.
In rare cases, however, a lazily copied slice outlasts its parent; it
then becomes orphaned, occupying more memory than needed.  To remedy
this in most real cases, Octave checks for orphaned lazy slices in
certain situations, namely when a value is stored into a ``permanent''
location such as a named variable or a cell or struct element, and
economizes them if possible.  For example:

@example
@group
a = zeros (1000); # create a 1000x1000 matrix
b = a(:,10:100);  # lazy slice
a = []; # the original "a" array is still allocated
c@{1@} = b; # b is reallocated at this point
@end group
@end example

@item Avoid deep recursion.
Function calls to m-file functions carry a relatively significant overhead, so
rewriting a recursion as a loop often helps.  Also, note that the maximum level
of recursion is limited.
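
For instance, a recursive sum over a vector (a deliberately naive
sketch) is better written as a loop, or best of all handed to a
built-in function:

@example
@group
function s = rsum (x)  # deep recursion, one call per element
  if (isempty (x))
    s = 0;
  else
    s = x(1) + rsum (x(2:end));
  endif
endfunction

s = sum (x);           # built-in, no recursion at all
@end group
@end example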

@item Avoid resizing matrices unnecessarily.
When building a single result matrix from a series of calculations, set the
size of the result matrix first, then insert values into it.  Write

@example
@group
result = zeros (big_n, big_m);
for i = over:and_over
  ridx = @dots{}
  cidx = @dots{}
  result(ridx, cidx) = new_value ();
endfor
@end group
@end example

@noindent
instead of

@example
@group
result = [];
for i = ever:and_ever
  result = [ result, new_value() ];
endfor
@end group
@end example

Sometimes the number of items cannot be computed in advance, and
stack-like operations are needed.  When elements are repeatedly
inserted at or removed from the end of an array, Octave detects this as
stack usage and applies a smarter memory management strategy,
pre-allocating the array in bigger chunks.  This strategy is also
applied to cell and struct arrays.

@example
@group
a = [];
while (condition)
  @dots{}
  a(end+1) = value; # "push" operation
  @dots{}
  a(end) = []; # "pop" operation
  @dots{}
endwhile
@end group
@end example

@item Avoid calling @code{eval} or @code{feval} excessively.
Parsing input and looking up the name of a function in the symbol table
are relatively expensive operations.

If you are using @code{eval} merely as an exception handling mechanism, and not
because you need to execute some arbitrary text, use the @code{try}
statement instead.  @xref{The try Statement}.
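
For instance, instead of wrapping a fallback computation in @code{eval}
(the names here are hypothetical):

@example
@group
## slow: eval ("c = risky_op (a, b);", "c = fallback;");
try
  c = risky_op (a, b);
catch
  c = fallback;
end_try_catch
@end group
@end example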

@item Use @code{ignore_function_time_stamp} when appropriate.
If you are calling lots of functions, and none of them will need to change
during your run, set the variable @code{ignore_function_time_stamp} to
@qcode{"all"}.  This will stop Octave from checking the time stamp of a
function file to see if it has been updated while the program is being run.
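
For example:

@example
ignore_function_time_stamp ("all");  # skip time stamp checks
@end example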
@end itemize

@node Examples
@section Examples

The following are examples of vectorization questions asked by actual
Octave users, together with their solutions.

@c FIXME: We need a lot more examples here.

@itemize @bullet
@item
For a vector @code{A}, the following loop

@example
@group
n = length (A) - 1;
B = zeros (n, 2);
for i = 1:n
  ## this will be two columns, the first is the difference and
  ## the second the mean of the two elements used for the diff.
  B(i,:) = [A(i+1)-A(i), (A(i+1) + A(i))/2];
endfor
@end group
@end example

@noindent
can be turned into the following one-liner:

@example
B = [diff(A)(:), 0.5*(A(1:end-1)+A(2:end))(:)]
@end example

Note the use of colon indexing to flatten an intermediate result into a
column vector.  This is a common vectorization trick.

@end itemize