diff doc/interpreter/linalg.txi @ 3294:bfe1573bd2ae

[project @ 1999-10-19 10:06:07 by jwe]
author jwe
date Tue, 19 Oct 1999 10:08:42 +0000
parents
children f16c2ce14886
line wrap: on
line diff
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc/interpreter/linalg.txi	Tue Oct 19 10:08:42 1999 +0000
@@ -0,0 +1,906 @@
+@c Copyright (C) 1996, 1997 John W. Eaton
+@c This is part of the Octave manual.
+@c For copying conditions, see the file gpl.texi.
+
+@node Linear Algebra, Nonlinear Equations, Arithmetic, Top
+@chapter Linear Algebra
+
+This chapter documents the linear algebra functions of Octave.
+Reference material for many of these functions may be found in
+Golub and Van Loan, @cite{Matrix Computations, 2nd Ed.}, Johns Hopkins,
+1989, and in @cite{@sc{Lapack} Users' Guide}, SIAM, 1992.
+
+@menu
+* Basic Matrix Functions::      
+* Matrix Factorizations::       
+* Functions of a Matrix::       
+@end menu
+
+@node Basic Matrix Functions, Matrix Factorizations, Linear Algebra, Linear Algebra
+@section Basic Matrix Functions
+
+@deftypefn {Loadable Function} {@var{aa} =} balance (@var{a}, @var{opt})
+@deftypefnx {Loadable Function} {[@var{dd}, @var{aa}] =} balance (@var{a}, @var{opt})
+@deftypefnx {Loadable Function} {[@var{cc}, @var{dd}, @var{aa}, @var{bb}] =} balance (@var{a}, @var{b}, @var{opt})
+
+@code{[dd, aa] = balance (a)} returns @code{aa = dd \ a * dd}.
+@code{aa} is a matrix whose row and column norms are roughly equal in
+magnitude, and @code{dd} = @code{p * d}, where @code{p} is a permutation
+matrix and @code{d} is a diagonal matrix of powers of two.  This allows
+the equilibration to be computed without roundoff.  Results of
+eigenvalue calculation are typically improved by balancing first.
+
+@code{[cc, dd, aa, bb] = balance (a, b)} returns @code{aa = cc*a*dd} and
+@code{bb = cc*b*dd}, where @code{aa} and @code{bb} have non-zero
+elements of approximately the same magnitude and @code{cc} and @code{dd}
+are permuted diagonal matrices as in @code{dd} for the algebraic
+eigenvalue problem.
+
+The eigenvalue balancing option @code{opt} is selected as follows:
+
+@table @asis
+@item @code{"N"}, @code{"n"}
+No balancing; arguments copied, transformation(s) set to identity.
+
+@item @code{"P"}, @code{"p"}
+Permute argument(s) to isolate eigenvalues where possible.
+
+@item @code{"S"}, @code{"s"}
+Scale to improve accuracy of computed eigenvalues.
+
+@item @code{"B"}, @code{"b"}
+Permute and scale, in that order.  Rows/columns of @var{a} (and
+@var{b}) that are isolated by permutation are not scaled.  This is the
+default behavior.
+@end table
+
+Algebraic eigenvalue balancing uses standard @sc{Lapack} routines.
+
+Generalized eigenvalue problem balancing uses Ward's algorithm
+(SIAM Journal on Scientific and Statistical Computing, 1981).
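+
+As a minimal sketch of the relation described above (the matrix here is
+an arbitrary example), the balanced matrix can be checked against its
+definition:
+
+@example
+@group
+a = [1, 1e6; 1e-6, 1];
+[dd, aa] = balance (a);
+norm (aa - dd \ a * dd)    # should be negligibly small
+@end group
+@end example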
+@end deftypefn
+
+@deftypefn {} {} cond (@var{a})
+Compute the (two-norm) condition number of a matrix. @code{cond (a)} is
+defined as @code{norm (a) * norm (inv (a))}, and is computed via a
+singular value decomposition.
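+
+For instance, the definition can be checked directly for a small
+(arbitrarily chosen) matrix:
+
+@example
+@group
+a = [1, 2; 3, 4];
+cond (a)                   # equals norm (a) * norm (inv (a))
+norm (a) * norm (inv (a))  # up to roundoff
+@end group
+@end example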
+@end deftypefn
+
+@deftypefn {Loadable Function} {} det (@var{a})
+Compute the determinant of @var{a} using @sc{Linpack}.
+@end deftypefn
+
+@deftypefn {Loadable Function} {@var{lambda} =} eig (@var{a})
+@deftypefnx {Loadable Function} {[@var{v}, @var{lambda}] =} eig (@var{a})
+The eigenvalues (and eigenvectors) of a matrix are computed in a
+multi-step process which begins with a Hessenberg decomposition,
+followed by a
+Schur decomposition, from which the eigenvalues are apparent.  The
+eigenvectors, when desired, are computed by further manipulations of the
+Schur decomposition.
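+
+As a brief check of the returned quantities (the matrix is arbitrary),
+each column of @var{v} is an eigenvector and @var{lambda} is diagonal:
+
+@example
+@group
+a = [1, 2; 2, 1];
+[v, lambda] = eig (a);
+norm (a * v - v * lambda)  # should be negligibly small
+@end group
+@end example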
+@end deftypefn
+
+@deftypefn {Loadable Function} {@var{G} =} givens (@var{x}, @var{y})
+@deftypefnx {Loadable Function} {[@var{c}, @var{s}] =} givens (@var{x}, @var{y})
+@iftex
+@tex
+Return a $2\times 2$ orthogonal matrix
+$$
+ G = \left[\matrix{c & s\cr -s'& c\cr}\right]
+$$
+such that
+$$
+ G \left[\matrix{x\cr y}\right] = \left[\matrix{\ast\cr 0}\right]
+$$
+with $x$ and $y$ scalars.
+@end tex
+@end iftex
+@ifinfo
+Return a 2 by 2 orthogonal matrix
+@code{@var{G} = [@var{c} @var{s}; -@var{s}' @var{c}]} such that
+@code{@var{G} [@var{x}; @var{y}] = [*; 0]} with @var{x} and @var{y} scalars.
+@end ifinfo
+
+For example,
+
+@example
+@group
+givens (1, 1)
+     @result{}   0.70711   0.70711
+         -0.70711   0.70711
+@end group
+@end example
+@end deftypefn
+
+@deftypefn {Loadable Function} {} inv (@var{a})
+@deftypefnx {Loadable Function} {} inverse (@var{a})
+Compute the inverse of the square matrix @var{a}.
+@end deftypefn
+
+@deftypefn {Function File} {} norm (@var{a}, @var{p})
+Compute the p-norm of the matrix @var{a}.  If the second argument is
+missing, @code{p = 2} is assumed.
+
+If @var{a} is a matrix:
+
+@table @asis
+@item @var{p} = @code{1}
+1-norm, the largest column sum of @var{a}.
+
+@item @var{p} = @code{2}
+Largest singular value of @var{a}.
+
+@item @var{p} = @code{Inf}
+@cindex infinity norm
+Infinity norm, the largest row sum of @var{a}.
+
+@item @var{p} = @code{"fro"}
+@cindex Frobenius norm
+Frobenius norm of @var{a}, @code{sqrt (sum (diag (@var{a}' * @var{a})))}.
+@end table
+
+If @var{a} is a vector or a scalar:
+
+@table @asis
+@item @var{p} = @code{Inf}
+@code{max (abs (@var{a}))}.
+
+@item @var{p} = @code{-Inf}
+@code{min (abs (@var{a}))}.
+
+@item other
+p-norm of @var{a}, @code{(sum (abs (@var{a}) .^ @var{p})) ^ (1/@var{p})}.
+@end table
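+
+For example, several norms of a small (arbitrarily chosen) matrix:
+
+@example
+@group
+a = [1, 2; 3, 4];
+norm (a, 1)       # largest column sum, 6
+norm (a, Inf)     # largest row sum, 7
+norm (a, "fro")   # sqrt (1 + 4 + 9 + 16)
+@end group
+@end example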
+@end deftypefn
+
+@deftypefn {Function File} {} null (@var{a}, @var{tol})
+Return an orthonormal basis of the null space of @var{a}.
+
+The dimension of the null space is taken as the number of singular
+values of @var{a} not greater than @var{tol}.  If the argument @var{tol}
+is missing, it is computed as
+
+@example
+max (size (@var{a})) * max (svd (@var{a})) * eps
+@end example
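+
+As a small sketch (the matrix is an arbitrary rank-one example), the
+columns returned by @code{null} are annihilated by @var{a}:
+
+@example
+@group
+a = [1, 2, 3; 2, 4, 6];    # rank 1, so the null space has dimension 2
+z = null (a);
+norm (a * z)               # should be negligibly small
+@end group
+@end example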
+@end deftypefn
+
+@deftypefn {Function File} {} orth (@var{a}, @var{tol})
+Return an orthonormal basis of the range space of @var{a}.
+
+The dimension of the range space is taken as the number of singular
+values of @var{a} greater than @var{tol}.  If the argument @var{tol} is
+missing, it is computed as
+
+@example
+max (size (@var{a})) * max (svd (@var{a})) * eps
+@end example
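+
+Similarly, for an arbitrary rank-one example the basis returned by
+@code{orth} has orthonormal columns:
+
+@example
+@group
+a = [1, 2, 3; 2, 4, 6];    # rank 1, so the range space has dimension 1
+b = orth (a);
+b' * b                     # identity of size 1, up to roundoff
+@end group
+@end example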
+@end deftypefn
+
+@deftypefn {Function File} {} pinv (@var{x}, @var{tol})
+Return the pseudoinverse of @var{x}.  Singular values less than
+@var{tol} are ignored. 
+
+If the second argument is omitted, it is assumed that
+
+@example
+tol = max (size (@var{x})) * sigma_max (@var{x}) * eps,
+@end example
+
+@noindent
+where @code{sigma_max (@var{x})} is the maximal singular value of @var{x}.
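+
+A minimal check (the matrix is an arbitrary rectangular example): the
+pseudoinverse satisfies the Moore-Penrose condition
+@code{@var{x} * pinv (@var{x}) * @var{x} = @var{x}}:
+
+@example
+@group
+x = [1, 2; 3, 4; 5, 6];
+y = pinv (x);
+norm (x * y * x - x)       # should be negligibly small
+@end group
+@end example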
+@end deftypefn
+
+@deftypefn {Function File} {} rank (@var{a}, @var{tol})
+Compute the rank of @var{a}, using the singular value decomposition.
+The rank is taken to be the number  of singular values of @var{a} that
+are greater than the specified tolerance @var{tol}.  If the second
+argument is omitted, it is taken to be
+
+@example
+tol = max (size (@var{a})) * sigma (1) * eps;
+@end example
+
+@noindent
+where @code{eps} is machine precision and @code{sigma (1)} is the
+largest singular value of @var{a}.
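+
+For instance:
+
+@example
+@group
+rank (eye (3))             # 3
+rank ([1, 2, 3; 2, 4, 6])  # 1, since the second row is twice the first
+@end group
+@end example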
+@end deftypefn
+
+@deftypefn {Function File} {} trace (@var{a})
+Compute the trace of @var{a}, @code{sum (diag (@var{a}))}.
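+
+For example:
+
+@example
+trace ([1, 2; 3, 4])       # 1 + 4 = 5
+@end example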
+@end deftypefn
+
+@node Matrix Factorizations, Functions of a Matrix, Basic Matrix Functions, Linear Algebra
+@section Matrix Factorizations
+
+@deftypefn {Loadable Function} {} chol (@var{a})
+@cindex Cholesky factorization
+Compute the Cholesky factor, @var{r}, of the symmetric positive definite
+matrix @var{a}, where
+@iftex
+@tex
+$ R^T R = A $.
+@end tex
+@end iftex
+@ifinfo
+
+@example
+r' * r = a.
+@end example
+@end ifinfo
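+
+For example, with a small positive definite matrix chosen for
+illustration:
+
+@example
+@group
+a = [2, 1; 1, 2];
+r = chol (a);
+r' * r                     # reproduces a, up to roundoff
+@end group
+@end example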
+@end deftypefn
+
+@deftypefn {Loadable Function} {@var{h} =} hess (@var{a})
+@deftypefnx {Loadable Function} {[@var{p}, @var{h}] =} hess (@var{a})
+@cindex Hessenberg decomposition
+Compute the Hessenberg decomposition of the matrix @var{a}.
+
+The Hessenberg decomposition is usually used as the first step in an
+eigenvalue computation, but has other applications as well (see Golub,
+Nash, and Van Loan, IEEE Transactions on Automatic Control, 1979).  The
+Hessenberg decomposition is
+@iftex
+@tex
+$$
+A = PHP^T
+$$
+where $P$ is a square unitary matrix ($P^HP = I$), and $H$
+is upper Hessenberg ($H_{i,j} = 0, \forall i \ge j+2$).
+@end tex
+@end iftex
+@ifinfo
+@code{p * h * p' = a} where @code{p} is a square unitary matrix
+(@code{p' * p = I}, using complex-conjugate transposition) and @code{h}
+is upper Hessenberg (@code{i >= j+2 => h (i, j) = 0}).
+@end ifinfo
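+
+A brief check of these relations (the matrix is arbitrary):
+
+@example
+@group
+a = [1, 2, 3; 5, 4, 6; 9, 8, 7];
+[p, h] = hess (a);
+norm (p * h * p' - a)      # should be negligibly small
+@end group
+@end example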
+@end deftypefn
+
+@deftypefn {Loadable Function} {[@var{l}, @var{u}, @var{p}] =} lu (@var{a})
+@cindex LU decomposition
+Compute the LU decomposition of @var{a}, using subroutines from
+@sc{Lapack}.  The result is returned in a permuted form, according to
+the optional return value @var{p}.  For example, given the matrix
+@code{a = [1, 2; 3, 4]},
+
+@example
+[l, u, p] = lu (a)
+@end example
+
+@noindent
+returns
+
+@example
+l =
+
+  1.00000  0.00000
+  0.33333  1.00000
+
+u =
+
+  3.00000  4.00000
+  0.00000  0.66667
+
+p =
+
+  0  1
+  1  0
+@end example
+@end deftypefn
+
+@deftypefn {Loadable Function} {[@var{q}, @var{r}, @var{p}] =} qr (@var{a})
+@cindex QR factorization
+Compute the QR factorization of @var{a}, using standard @sc{Lapack}
+subroutines.  For example, given the matrix @code{a = [1, 2; 3, 4]},
+
+@example
+[q, r] = qr (a)
+@end example
+
+@noindent
+returns
+
+@example
+q =
+
+  -0.31623  -0.94868
+  -0.94868   0.31623
+
+r =
+
+  -3.16228  -4.42719
+   0.00000  -0.63246
+@end example
+
+The @code{qr} factorization has applications in the solution of least
+squares problems
+@iftex
+@tex
+$$
+\min_x \left\Vert A x - b \right\Vert_2
+$$
+@end tex
+@end iftex
+@ifinfo
+
+@example
+min norm (A*x - b)
+@end example
+
+@end ifinfo
+for overdetermined systems of equations (i.e.,
+@iftex
+@tex
+$A$
+@end tex
+@end iftex
+@ifinfo
+@code{a}
+@end ifinfo
+ is a tall, thin matrix).  The QR factorization is
+@iftex
+@tex
+$QR = A$ where $Q$ is an orthogonal matrix and $R$ is upper triangular.
+@end tex
+@end iftex
+@ifinfo
+@code{q * r = a} where @code{q} is an orthogonal matrix and @code{r} is
+upper triangular.
+@end ifinfo
+
+The permuted QR factorization @code{[@var{q}, @var{r}, @var{p}] =
+qr (@var{a})} forms the QR factorization such that the diagonal
+entries of @code{r} are decreasing in magnitude.  For example,
+given the matrix @code{a = [1, 2; 3, 4]},
+
+@example
+[q, r, p] = qr (a)
+@end example
+
+@noindent
+returns
+
+@example
+q = 
+
+  -0.44721  -0.89443
+  -0.89443   0.44721
+
+r =
+
+  -4.47214  -3.13050
+   0.00000   0.44721
+
+p =
+
+   0  1
+   1  0
+@end example
+
+The permuted @code{qr} factorization @code{[q, r, p] = qr (a)} allows
+the construction of an orthogonal basis of @code{span (a)}.
+@end deftypefn
+
+
+@deftypefn {Function File} {@var{lambda} =} qz (@var{a}, @var{b})
+Generalized eigenvalue problem @math{A x = s B x}, QZ decomposition.
+There are three ways to call this function:
+@enumerate
+@item @code{lambda = qz(A,B)}
+
+Computes the generalized eigenvalues @var{lambda} of @math{(A - sB)}.
+
+@item @code{[AA, BB, Q, Z @{, V, W, lambda@}] = qz (A, B)}
+
+Computes the QZ decomposition, generalized eigenvectors, and
+generalized eigenvalues of @math{(A - sB)}:
+
+@example
+@group
+A V = B V diag (lambda)
+W' A = diag (lambda) W' B
+AA = Q' * A * Z,  BB = Q' * B * Z
+@end group
+@end example
+
+@noindent
+with @var{Q} and @var{Z} orthogonal (unitary), i.e.,
+@code{Q' * Q = Z' * Z = I}.
+
+@item @code{[AA, BB, Z @{, lambda@}] = qz (A, B, opt)}
+
+As in form [2], but allows ordering of generalized eigenpairs for,
+e.g., solution of discrete-time algebraic Riccati equations.  Form 3 is
+not available for complex matrices, and does not compute the
+generalized eigenvectors @var{V} and @var{W}, nor the orthogonal matrix
+@var{Q}.
+@table @var
+@item opt
+controls the ordering of eigenvalues of the GEP pencil.  The leading
+block of the revised pencil contains all eigenvalues that satisfy:
+
+@table @code
+@item "N"
+unordered (default)
+
+@item "S"
+small: leading block has all |lambda| <= 1
+
+@item "B"
+big: leading block has all |lambda| >= 1
+
+@item "-"
+negative real part: leading block has all eigenvalues in the open left
+half-plane
+
+@item "+"
+nonnegative real part: leading block has all eigenvalues in the closed
+right half-plane
+@end table
+@end table
+@end enumerate
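+
+As a minimal sketch of the first calling form (the matrices are
+arbitrary), each generalized eigenvalue @var{lambda} satisfies
+@code{det (@var{a} - lambda * @var{b}) == 0}:
+
+@example
+@group
+a = [1, 2; 3, 4];
+b = [5, 6; 7, 8];
+lambda = qz (a, b);
+det (a - lambda(1) * b)    # should be negligibly small
+@end group
+@end example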
+
+Note: @code{qz} performs permutation balancing, but not scaling (see
+@code{balance}).  The order of the output arguments was selected for
+compatibility with @sc{Matlab}.
+
+See also: balance, dare, eig, schur
+@end deftypefn
+
+@deftypefn {Function File} {[@var{aa}, @var{bb}, @var{q}, @var{z}] =} qzhess (@var{a}, @var{b})
+Compute the Hessenberg-triangular decomposition of the matrix pencil
+@code{(@var{a}, @var{b})}, returning
+@code{@var{aa} = @var{q} * @var{a} * @var{z}}, 
+@code{@var{bb} = @var{q} * @var{b} * @var{z}}, with @var{q} and @var{z}
+orthogonal.  For example,
+
+@example
+@group
+[aa, bb, q, z] = qzhess ([1, 2; 3, 4], [5, 6; 7, 8])
+     @result{} aa = [ -3.02244, -4.41741;  0.92998,  0.69749 ]
+     @result{} bb = [ -8.60233, -9.99730;  0.00000, -0.23250 ]
+     @result{}  q = [ -0.58124, -0.81373; -0.81373,  0.58124 ]
+     @result{}  z = [ 1, 0; 0, 1 ]
+@end group
+@end example
+
+The Hessenberg-triangular decomposition is the first step in
+Moler and Stewart's QZ decomposition algorithm.
+
+Algorithm taken from Golub and Van Loan, @cite{Matrix Computations, 2nd
+edition}.
+@end deftypefn
+
+@deftypefn {Loadable Function} {} qzval (@var{a}, @var{b})
+Compute generalized eigenvalues of the matrix pencil 
+@iftex
+@tex
+$a - \lambda b$.
+@end tex
+@end iftex
+@ifinfo
+@code{@var{a} - lambda @var{b}}.
+@end ifinfo
+
+The arguments @var{a} and @var{b} must be real matrices.
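+
+For example (a trivial case chosen so the result is easy to predict):
+
+@example
+@group
+## with b = eye (2), the generalized eigenvalues reduce to eig (a)
+qzval ([1, 0; 0, 2], eye (2))
+@end group
+@end example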
+@end deftypefn
+
+@deftypefn {Loadable Function} {@var{s} =} schur (@var{a})
+@deftypefnx {Loadable Function} {[@var{u}, @var{s}] =} schur (@var{a}, @var{opt})
+@cindex Schur decomposition
+The Schur decomposition is used to compute eigenvalues of a
+square matrix, and has applications in the solution of algebraic
+Riccati equations in control (see @code{are} and @code{dare}).
+@code{schur} always returns
+@iftex
+@tex
+$S = U^T A U$
+@end tex
+@end iftex
+@ifinfo
+@code{s = u' * a * u}
+@end ifinfo
+where
+@iftex
+@tex
+$U$
+@end tex
+@end iftex
+@ifinfo
+@code{u}
+@end ifinfo
+ is a unitary matrix
+@iftex
+@tex
+($U^T U$ is identity)
+@end tex
+@end iftex
+@ifinfo
+(@code{u'* u} is identity)
+@end ifinfo
+and
+@iftex
+@tex
+$S$
+@end tex
+@end iftex
+@ifinfo
+@code{s}
+@end ifinfo
+is upper triangular.  The eigenvalues of
+@iftex
+@tex
+$A$ (and $S$)
+@end tex
+@end iftex
+@ifinfo
+@code{a} (and @code{s})
+@end ifinfo
+are the diagonal elements of
+@iftex
+@tex
+$S$.
+@end tex
+@end iftex
+@ifinfo
+@code{s}.
+@end ifinfo
+If the matrix
+@iftex
+@tex
+$A$
+@end tex
+@end iftex
+@ifinfo
+@code{a}
+@end ifinfo
+is real, then the real Schur decomposition is computed, in which the
+matrix
+@iftex
+@tex
+$U$
+@end tex
+@end iftex
+@ifinfo
+@code{u}
+@end ifinfo
+is orthogonal and
+@iftex
+@tex
+$S$
+@end tex
+@end iftex
+@ifinfo
+@code{s}
+@end ifinfo
+is block upper triangular
+with blocks of size at most
+@iftex
+@tex
+$2\times 2$
+@end tex
+@end iftex
+@ifinfo
+@code{2 x 2}
+@end ifinfo
+along the diagonal.  The diagonal elements of
+@iftex
+@tex
+$S$
+@end tex
+@end iftex
+@ifinfo
+@code{s}
+@end ifinfo
+(or the eigenvalues of the
+@iftex
+@tex
+$2\times 2$
+@end tex
+@end iftex
+@ifinfo
+@code{2 x 2}
+@end ifinfo
+blocks, when
+appropriate) are the eigenvalues of
+@iftex
+@tex
+$A$
+@end tex
+@end iftex
+@ifinfo
+@code{a}
+@end ifinfo
+and
+@iftex
+@tex
+$S$.
+@end tex
+@end iftex
+@ifinfo
+@code{s}.
+@end ifinfo
+
+The eigenvalues are optionally ordered along the diagonal according to
+the value of @code{opt}.  @code{opt = "a"} indicates that all
+eigenvalues with negative real parts should be moved to the leading
+block of
+@iftex
+@tex
+$S$
+@end tex
+@end iftex
+@ifinfo
+@code{s}
+@end ifinfo
+(used in @code{are}), @code{opt = "d"} indicates that all eigenvalues
+with magnitude less than one should be moved to the leading block of
+@iftex
+@tex
+$S$
+@end tex
+@end iftex
+@ifinfo
+@code{s}
+@end ifinfo
+(used in @code{dare}), and @code{opt = "u"}, the default, indicates that
+no ordering of eigenvalues should occur.  The leading
+@iftex
+@tex
+$k$
+@end tex
+@end iftex
+@ifinfo
+@code{k}
+@end ifinfo
+columns of
+@iftex
+@tex
+$U$
+@end tex
+@end iftex
+@ifinfo
+@code{u}
+@end ifinfo
+always span the
+@iftex
+@tex
+$A$-invariant
+@end tex
+@end iftex
+@ifinfo
+@code{a}-invariant
+@end ifinfo
+subspace corresponding to the
+@iftex
+@tex
+$k$
+@end tex
+@end iftex
+@ifinfo
+@code{k}
+@end ifinfo
+leading eigenvalues of
+@iftex
+@tex
+$S$.
+@end tex
+@end iftex
+@ifinfo
+@code{s}.
+@end ifinfo
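+
+A minimal check of the defining relations (the matrix is arbitrary):
+
+@example
+@group
+a = [1, 2; 3, 4];
+[u, s] = schur (a);
+norm (u * s * u' - a)      # should be negligibly small
+u' * u                     # identity, up to roundoff
+@end group
+@end example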
+@end deftypefn
+
+@deftypefn {Loadable Function} {@var{s} =} svd (@var{a})
+@deftypefnx {Loadable Function} {[@var{u}, @var{s}, @var{v}] =} svd (@var{a})
+@cindex singular value decomposition
+Compute the singular value decomposition of @var{a}
+@iftex
+@tex
+$$
+ A = U\Sigma V^H
+$$
+@end tex
+@end iftex
+@ifinfo
+
+@example
+a = u * sigma * v'
+@end example
+@end ifinfo
+
+The function @code{svd} normally returns the vector of singular values.
+If asked for three return values, it computes
+@iftex
+@tex
+$U$, $S$, and $V$.
+@end tex
+@end iftex
+@ifinfo
+U, S, and V.
+@end ifinfo
+For example,
+
+@example
+svd (hilb (3))
+@end example
+
+@noindent
+returns
+
+@example
+ans =
+
+  1.4083189
+  0.1223271
+  0.0026873
+@end example
+
+@noindent
+and
+
+@example
+[u, s, v] = svd (hilb (3))
+@end example
+
+@noindent
+returns
+
+@example
+u =
+
+  -0.82704   0.54745   0.12766
+  -0.45986  -0.52829  -0.71375
+  -0.32330  -0.64901   0.68867
+
+s =
+
+  1.40832  0.00000  0.00000
+  0.00000  0.12233  0.00000
+  0.00000  0.00000  0.00269
+
+v =
+
+  -0.82704   0.54745   0.12766
+  -0.45986  -0.52829  -0.71375
+  -0.32330  -0.64901   0.68867
+@end example
+
+If given a second argument, @code{svd} returns an economy-sized
+decomposition, eliminating the unnecessary rows or columns of @var{u} or
+@var{v}.
+@end deftypefn
+
+@node Functions of a Matrix,  , Matrix Factorizations, Linear Algebra
+@section Functions of a Matrix
+
+@deftypefn {Loadable Function} {} expm (@var{a})
+Return the exponential of a matrix, defined as the infinite Taylor
+series
+@iftex
+@tex
+$$
+ \exp (A) = I + A + {A^2 \over 2!} + {A^3 \over 3!} + \cdots
+$$
+@end tex
+@end iftex
+@ifinfo
+
+@example
+expm(a) = I + a + a^2/2! + a^3/3! + ...
+@end example
+
+@end ifinfo
+The Taylor series is @emph{not} the way to compute the matrix
+exponential; see Moler and Van Loan, @cite{Nineteen Dubious Ways to
+Compute the Exponential of a Matrix}, SIAM Review, 1978.  This routine
+uses Ward's diagonal
+@iftex
+@tex
+Pad\'e
+@end tex
+@end iftex
+@ifinfo
+Pade'
+@end ifinfo
+approximation method with three-step preconditioning (SIAM Journal on
+Numerical Analysis, 1977).  Diagonal
+@iftex
+@tex
+Pad\'e
+@end tex
+@end iftex
+@ifinfo
+Pade'
+@end ifinfo
+ approximations are rational polynomials of matrices
+@iftex
+@tex
+$D_q(a)^{-1}N_q(a)$
+@end tex
+@end iftex
+@ifinfo
+
+@example
+     -1
+Dq(a)  Nq(a)
+@end example
+
+@end ifinfo
+ whose Taylor series matches the first
+@iftex
+@tex
+$2 q + 1 $
+@end tex
+@end iftex
+@ifinfo
+@code{2q+1}
+@end ifinfo
+terms of the Taylor series above; direct evaluation of the Taylor series
+(with the same preconditioning steps) may be desirable in lieu of the
+@iftex
+@tex
+Pad\'e
+@end tex
+@end iftex
+@ifinfo
+Pade'
+@end ifinfo
+approximation when
+@iftex
+@tex
+$D_q(a)$
+@end tex
+@end iftex
+@ifinfo
+@code{Dq(a)}
+@end ifinfo
+is ill-conditioned.
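+
+As a simple sanity check, for a diagonal matrix the matrix exponential
+reduces to elementwise exponentials on the diagonal:
+
+@example
+@group
+norm (expm (diag ([0, 1])) - diag ([1, e]))   # negligibly small
+@end group
+@end example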
+@end deftypefn
+
+@deftypefn {Loadable Function} {} logm (@var{a})
+Compute the matrix logarithm of the square matrix @var{a}.  Note that
+this is currently implemented in terms of an eigenvalue expansion and
+needs to be improved to be more robust.
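+
+For a matrix with a well-defined logarithm (here a symmetric positive
+definite example), @code{logm} inverts @code{expm}:
+
+@example
+@group
+a = [2, 1; 1, 2];
+norm (logm (expm (a)) - a)   # should be negligibly small
+@end group
+@end example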
+@end deftypefn
+
+@deftypefn {Loadable Function} {} sqrtm (@var{a})
+Compute the matrix square root of the square matrix @var{a}.  Note that
+this is currently implemented in terms of an eigenvalue expansion and
+needs to be improved to be more robust.
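+
+For example, with a symmetric positive definite matrix the computed
+square root reproduces the original when squared:
+
+@example
+@group
+a = [2, 1; 1, 2];
+s = sqrtm (a);
+norm (s * s - a)             # should be negligibly small
+@end group
+@end example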
+@end deftypefn
+
+@deftypefn {Function File} {} kron (@var{a}, @var{b})
+Form the Kronecker product of two matrices, defined block by block as
+
+@example
+x = [a(i, j) * b]
+@end example
+
+For example,
+
+@example
+@group
+kron (1:4, ones (3, 1))
+     @result{}  1  2  3  4
+         1  2  3  4
+         1  2  3  4
+@end group
+@end example
+@end deftypefn
+
+@deftypefn {Loadable Function} {@var{x} =} syl (@var{a}, @var{b}, @var{c})
+Solve the Sylvester equation
+@iftex
+@tex
+$$
+ A X + X B + C = 0
+$$
+@end tex
+@end iftex
+@ifinfo
+
+@example
+A X + X B + C = 0
+@end example
+@end ifinfo
+using standard @sc{Lapack} subroutines.  For example,
+
+@example
+@group
+syl ([1, 2; 3, 4], [5, 6; 7, 8], [9, 10; 11, 12])
+     @result{} [ -0.50000, -0.66667; -0.66667, -0.50000 ]
+@end group
+@end example
+@end deftypefn