diff scripts/special-matrix/gallery.m @ 19833:9fc020886ae9

maint: Clean up m-files to follow Octave coding conventions.

Try to trim long lines to < 80 chars.  Use '##' for single line comments.
Use '(...)' around tests for if/elseif/switch/while.  Abut cell indexing
operator '{' next to variable.  Abut array indexing operator '(' next to
variable.  Use space between negation operator '!' and following expression.
Use two newlines between endfunction and start of %!test or %!demo code.
Remove unnecessary parens grouping between short-circuit operators.
Remove stray extra spaces (typos) between variables and assignment operators.
Remove stray extra spaces from ends of lines.
author Rik <rik@octave.org>
date Mon, 23 Feb 2015 14:54:39 -0800
parents 4197fc428c7d
children 941e782d0429
--- a/scripts/special-matrix/gallery.m	Mon Feb 23 16:42:25 2015 +0000
+++ b/scripts/special-matrix/gallery.m	Mon Feb 23 14:54:39 2015 -0800
@@ -505,26 +505,26 @@
 
 function C = cauchy (x, y)
   ##CAUCHY  Cauchy matrix.
-  ##        C = CAUCHY(X, Y), where X, Y are N-vectors, is the N-by-N matrix
-  ##        with C(i,j) = 1/(X(i)+Y(j)).   By default, Y = X.
-  ##        Special case: if X is a scalar CAUCHY(X) is the same as CAUCHY(1:X).
-  ##        Explicit formulas are known for DET(C) (which is nonzero if X and Y
-  ##        both have distinct elements) and the elements of INV(C).
-  ##        C is totally positive if 0 < X(1) < ... < X(N) and
-  ##        0 < Y(1) < ... < Y(N).
+  ##  C = CAUCHY(X, Y), where X, Y are N-vectors, is the N-by-N matrix
+  ##  with C(i,j) = 1/(X(i)+Y(j)).   By default, Y = X.
+  ##  Special case: if X is a scalar CAUCHY(X) is the same as CAUCHY(1:X).
+  ##  Explicit formulas are known for DET(C) (which is nonzero if X and Y
+  ##  both have distinct elements) and the elements of INV(C).
+  ##  C is totally positive if 0 < X(1) < ... < X(N) and
+  ##  0 < Y(1) < ... < Y(N).
   ##
-  ##        References:
-  ##        N.J. Higham, Accuracy and Stability of Numerical Algorithms,
-  ##          Society for Industrial and Applied Mathematics, Philadelphia, PA,
-  ##          USA, 1996; sec. 26.1.
-  ##        D.E. Knuth, The Art of Computer Programming, Volume 1,
-  ##          Fundamental Algorithms, second edition, Addison-Wesley, Reading,
-  ##          Massachusetts, 1973, p. 36.
-  ##        E.E. Tyrtyshnikov, Cauchy-Toeplitz matrices and some applications,
-  ##          Linear Algebra and Appl., 149 (1991), pp. 1-18.
-  ##          O. Taussky and M. Marcus, Eigenvalues of finite matrices, in
-  ##          Survey of Numerical Analysis, J. Todd, ed., McGraw-Hill, New York,
-  ##          pp. 279-313, 1962. (States the totally positive property on p. 295.)
+  ##  References:
+  ##  N.J. Higham, Accuracy and Stability of Numerical Algorithms,
+  ##    Society for Industrial and Applied Mathematics, Philadelphia, PA,
+  ##    USA, 1996; sec. 26.1.
+  ##  D.E. Knuth, The Art of Computer Programming, Volume 1,
+  ##    Fundamental Algorithms, second edition, Addison-Wesley, Reading,
+  ##    Massachusetts, 1973, p. 36.
+  ##  E.E. Tyrtyshnikov, Cauchy-Toeplitz matrices and some applications,
+  ##    Linear Algebra and Appl., 149 (1991), pp. 1-18.
+  ##    O. Taussky and M. Marcus, Eigenvalues of finite matrices, in
+  ##    Survey of Numerical Analysis, J. Todd, ed., McGraw-Hill, New York,
+  ##    pp. 279-313, 1962. (States the totally positive property on p. 295.)
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 or 2 arguments are required for cauchy matrix.");
@@ -561,23 +561,23 @@
 
 function C = chebspec (n, k = 0)
   ## CHEBSPEC  Chebyshev spectral differentiation matrix.
-  ##           C = CHEBSPEC(N, K) is a Chebyshev spectral differentiation
-  ##           matrix of order N.  K = 0 (the default) or 1.
-  ##           For K = 0 (`no boundary conditions'), C is nilpotent, with
-  ##               C^N = 0 and it has the null vector ONES(N,1).
-  ##               C is similar to a Jordan block of size N with eigenvalue zero.
-  ##           For K = 1, C is nonsingular and well-conditioned, and its eigenvalues
-  ##               have negative real parts.
-  ##           For both K, the computed eigenvector matrix X from EIG is
-  ##               ill-conditioned (MESH(REAL(X)) is interesting).
+  ##   C = CHEBSPEC(N, K) is a Chebyshev spectral differentiation
+  ##   matrix of order N.  K = 0 (the default) or 1.
+  ##   For K = 0 (`no boundary conditions'), C is nilpotent, with
+  ##       C^N = 0 and it has the null vector ONES(N,1).
+  ##       C is similar to a Jordan block of size N with eigenvalue zero.
+  ##   For K = 1, C is nonsingular and well-conditioned, and its eigenvalues
+  ##       have negative real parts.
+  ##   For both K, the computed eigenvector matrix X from EIG is
+  ##       ill-conditioned (MESH(REAL(X)) is interesting).
   ##
-  ##           References:
-  ##           C. Canuto, M.Y. Hussaini, A. Quarteroni and T.A. Zang, Spectral
-  ##              Methods in Fluid Dynamics, Springer-Verlag, Berlin, 1988; p. 69.
-  ##           L.N. Trefethen and M.R. Trummer, An instability phenomenon in
-  ##              spectral methods, SIAM J. Numer. Anal., 24 (1987), pp. 1008-1023.
-  ##           D. Funaro, Computing the inverse of the Chebyshev collocation
-  ##              derivative, SIAM J. Sci. Stat. Comput., 9 (1988), pp. 1050-1057.
+  ##   References:
+  ##   C. Canuto, M.Y. Hussaini, A. Quarteroni and T.A. Zang, Spectral
+  ##      Methods in Fluid Dynamics, Springer-Verlag, Berlin, 1988; p. 69.
+  ##   L.N. Trefethen and M.R. Trummer, An instability phenomenon in
+  ##      spectral methods, SIAM J. Numer. Anal., 24 (1987), pp. 1008-1023.
+  ##   D. Funaro, Computing the inverse of the Chebyshev collocation
+  ##      derivative, SIAM J. Sci. Stat. Comput., 9 (1988), pp. 1050-1057.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for chebspec matrix.");
@@ -628,18 +628,18 @@
 
 function C = chebvand (m, p)
   ## CHEBVAND Vandermonde-like matrix for the Chebyshev polynomials.
-  ##          C = CHEBVAND(P), where P is a vector, produces the (primal)
-  ##          Chebyshev Vandermonde matrix based on the points P,
-  ##          i.e., C(i,j) = T_{i-1}(P(j)), where T_{i-1} is the Chebyshev
-  ##          polynomial of degree i-1.
-  ##          CHEBVAND(M,P) is a rectangular version of CHEBVAND(P) with M rows.
-  ##          Special case: If P is a scalar then P equally spaced points on
-  ##                        [0,1] are used.
+  ##   C = CHEBVAND(P), where P is a vector, produces the (primal)
+  ##   Chebyshev Vandermonde matrix based on the points P,
+  ##   i.e., C(i,j) = T_{i-1}(P(j)), where T_{i-1} is the Chebyshev
+  ##   polynomial of degree i-1.
+  ##   CHEBVAND(M,P) is a rectangular version of CHEBVAND(P) with M rows.
+  ##   Special case: If P is a scalar then P equally spaced points on
+  ##                 [0,1] are used.
   ##
-  ##          Reference:
-  ##          N.J. Higham, Stability analysis of algorithms for solving confluent
-  ##            Vandermonde-like systems, SIAM J. Matrix Anal. Appl., 11 (1990),
-  ##            pp. 23-41.
+  ##   Reference:
+  ##   N.J. Higham, Stability analysis of algorithms for solving confluent
+  ##     Vandermonde-like systems, SIAM J. Matrix Anal. Appl., 11 (1990),
+  ##     pp. 23-41.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 or 2 arguments are required for chebvand matrix.");
@@ -680,17 +680,17 @@
 
 function A = chow (n, alpha = 1, delta = 0)
   ## CHOW    Chow matrix - a singular Toeplitz lower Hessenberg matrix.
-  ##         A = CHOW(N, ALPHA, DELTA) is a Toeplitz lower Hessenberg matrix
-  ##         A = H(ALPHA) + DELTA*EYE, where H(i,j) = ALPHA^(i-j+1).
-  ##         H(ALPHA) has p = FLOOR(N/2) zero eigenvalues, the rest being
-  ##         4*ALPHA*COS( k*PI/(N+2) )^2, k=1:N-p.
-  ##         Defaults: ALPHA = 1, DELTA = 0.
+  ##   A = CHOW(N, ALPHA, DELTA) is a Toeplitz lower Hessenberg matrix
+  ##   A = H(ALPHA) + DELTA*EYE, where H(i,j) = ALPHA^(i-j+1).
+  ##   H(ALPHA) has p = FLOOR(N/2) zero eigenvalues, the rest being
+  ##   4*ALPHA*COS( k*PI/(N+2) )^2, k=1:N-p.
+  ##   Defaults: ALPHA = 1, DELTA = 0.
   ##
-  ##         References:
-  ##         T.S. Chow, A class of Hessenberg matrices with known
-  ##            eigenvalues and inverses, SIAM Review, 11 (1969), pp. 391-395.
-  ##         G. Fairweather, On the eigenvalues and eigenvectors of a class of
-  ##            Hessenberg matrices, SIAM Review, 13 (1971), pp. 220-221.
+  ##   References:
+  ##   T.S. Chow, A class of Hessenberg matrices with known
+  ##      eigenvalues and inverses, SIAM Review, 11 (1969), pp. 391-395.
+  ##   G. Fairweather, On the eigenvalues and eigenvectors of a class of
+  ##      Hessenberg matrices, SIAM Review, 13 (1971), pp. 220-221.
 
   if (nargin < 1 || nargin > 3)
     error ("gallery: 1 to 3 arguments are required for chow matrix.");
@@ -707,18 +707,18 @@
 
 function C = circul (v)
   ## CIRCUL  Circulant matrix.
-  ##         C = CIRCUL(V) is the circulant matrix whose first row is V.
-  ##         (A circulant matrix has the property that each row is obtained
-  ##         from the previous one by cyclically permuting the entries one step
-  ##         forward; it is a special Toeplitz matrix in which the diagonals
-  ##         `wrap round'.)
-  ##         Special case: if V is a scalar then C = CIRCUL(1:V).
-  ##         The eigensystem of C (N-by-N) is known explicitly.   If t is an Nth
-  ##         root of unity, then the inner product of V with W = [1 t t^2 ... t^N]
-  ##         is an eigenvalue of C, and W(N:-1:1) is an eigenvector of C.
+  ##   C = CIRCUL(V) is the circulant matrix whose first row is V.
+  ##   (A circulant matrix has the property that each row is obtained
+  ##   from the previous one by cyclically permuting the entries one step
+  ##   forward; it is a special Toeplitz matrix in which the diagonals
+  ##   `wrap round'.)
+  ##   Special case: if V is a scalar then C = CIRCUL(1:V).
+  ##   The eigensystem of C (N-by-N) is known explicitly.   If t is an Nth
+  ##   root of unity, then the inner product of V with W = [1 t t^2 ... t^(N-1)]
+  ##   is an eigenvalue of C, and W(N:-1:1) is an eigenvector of C.
   ##
-  ##         Reference:
-  ##         P.J. Davis, Circulant Matrices, John Wiley, 1977.
+  ##   Reference:
+  ##   P.J. Davis, Circulant Matrices, John Wiley, 1977.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for circul matrix.");
@@ -742,28 +742,28 @@
 
 function A = clement (n, k = 0)
   ## CLEMENT   Clement matrix - tridiagonal with zero diagonal entries.
-  ##           CLEMENT(N, K) is a tridiagonal matrix with zero diagonal entries
-  ##           and known eigenvalues.  It is singular if N is odd.  About 64
-  ##           percent of the entries of the inverse are zero.  The eigenvalues
-  ##           are plus and minus the numbers N-1, N-3, N-5, ..., (1 or 0).
-  ##           For K = 0 (the default) the matrix is unsymmetric, while for
-  ##           K = 1 it is symmetric.
-  ##           CLEMENT(N, 1) is diagonally similar to CLEMENT(N).
+  ##   CLEMENT(N, K) is a tridiagonal matrix with zero diagonal entries
+  ##   and known eigenvalues.  It is singular if N is odd.  About 64
+  ##   percent of the entries of the inverse are zero.  The eigenvalues
+  ##   are plus and minus the numbers N-1, N-3, N-5, ..., (1 or 0).
+  ##   For K = 0 (the default) the matrix is unsymmetric, while for
+  ##   K = 1 it is symmetric.
+  ##   CLEMENT(N, 1) is diagonally similar to CLEMENT(N).
   ##
-  ##           Similar properties hold for TRIDIAG(X,Y,Z) where Y = ZEROS(N,1).
-  ##           The eigenvalues still come in plus/minus pairs but they are not
-  ##           known explicitly.
+  ##   Similar properties hold for TRIDIAG(X,Y,Z) where Y = ZEROS(N,1).
+  ##   The eigenvalues still come in plus/minus pairs but they are not
+  ##   known explicitly.
   ##
-  ##           References:
-  ##           P.A. Clement, A class of triple-diagonal matrices for test
-  ##              purposes, SIAM Review, 1 (1959), pp. 50-52.
-  ##           A. Edelman and E. Kostlan, The road from Kac's matrix to Kac's
-  ##              random polynomials. In John~G. Lewis, editor, Proceedings of
-  ##              the Fifth SIAM Conference on Applied Linear Algebra Society
-  ##              for Industrial and Applied Mathematics, Philadelphia, 1994,
-  ##              pp. 503-507.
-  ##           O. Taussky and J. Todd, Another look at a matrix of Mark Kac,
-  ##              Linear Algebra and Appl., 150 (1991), pp. 341-360.
+  ##   References:
+  ##   P.A. Clement, A class of triple-diagonal matrices for test
+  ##      purposes, SIAM Review, 1 (1959), pp. 50-52.
+  ##   A. Edelman and E. Kostlan, The road from Kac's matrix to Kac's
+  ##      random polynomials. In John~G. Lewis, editor, Proceedings of
+  ##      the Fifth SIAM Conference on Applied Linear Algebra Society
+  ##      for Industrial and Applied Mathematics, Philadelphia, 1994,
+  ##      pp. 503-507.
+  ##   O. Taussky and J. Todd, Another look at a matrix of Mark Kac,
+  ##      Linear Algebra and Appl., 150 (1991), pp. 341-360.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 or 2 arguments are required for clement matrix.");
@@ -789,17 +789,17 @@
 
 function C = compar (A, k = 0)
   ## COMP    Comparison matrices.
-  ##         COMP(A) is DIAG(B) - TRIL(B,-1) - TRIU(B,1), where B = ABS(A).
-  ##         COMP(A, 1) is A with each diagonal element replaced by its
-  ##         absolute value, and each off-diagonal element replaced by minus
-  ##         the absolute value of the largest element in absolute value in
-  ##         its row.  However, if A is triangular COMP(A, 1) is too.
-  ##         COMP(A, 0) is the same as COMP(A).
-  ##         COMP(A) is often denoted by M(A) in the literature.
+  ##   COMP(A) is DIAG(B) - TRIL(B,-1) - TRIU(B,1), where B = ABS(A).
+  ##   COMP(A, 1) is A with each diagonal element replaced by its
+  ##   absolute value, and each off-diagonal element replaced by minus
+  ##   the absolute value of the largest element in absolute value in
+  ##   its row.  However, if A is triangular COMP(A, 1) is too.
+  ##   COMP(A, 0) is the same as COMP(A).
+  ##   COMP(A) is often denoted by M(A) in the literature.
   ##
-  ##         Reference (e.g.):
-  ##         N.J. Higham, A survey of condition number estimation for
-  ##         triangular matrices, SIAM Review, 29 (1987), pp. 575-596.
+  ##   Reference (e.g.):
+  ##   N.J. Higham, A survey of condition number estimation for
+  ##   triangular matrices, SIAM Review, 29 (1987), pp. 575-596.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 or 2 arguments are required for compar matrix.");
@@ -841,26 +841,26 @@
 
 function A = condex (n, k = 4, theta = 100)
   ## CONDEX   `Counterexamples' to matrix condition number estimators.
-  ##          CONDEX(N, K, THETA) is a `counterexample' matrix to a condition
-  ##          estimator.  It has order N and scalar parameter THETA (default 100).
-  ##          If N is not equal to the `natural' size of the matrix then
-  ##          the matrix is padded out with an identity matrix to order N.
-  ##          The matrix, its natural size, and the estimator to which it applies
-  ##          are specified by K (default K = 4) as follows:
-  ##              K = 1:   4-by-4,     LINPACK (RCOND)
-  ##              K = 2:   3-by-3,     LINPACK (RCOND)
-  ##              K = 3:   arbitrary,  LINPACK (RCOND) (independent of THETA)
-  ##              K = 4:   N >= 4,     SONEST (Higham 1988)
-  ##          (Note that in practice the K = 4 matrix is not usually a
-  ##           counterexample because of the rounding errors in forming it.)
+  ##   CONDEX(N, K, THETA) is a `counterexample' matrix to a condition
+  ##   estimator.  It has order N and scalar parameter THETA (default 100).
+  ##   If N is not equal to the `natural' size of the matrix then
+  ##   the matrix is padded out with an identity matrix to order N.
+  ##   The matrix, its natural size, and the estimator to which it applies
+  ##   are specified by K (default K = 4) as follows:
+  ##       K = 1:   4-by-4,     LINPACK (RCOND)
+  ##       K = 2:   3-by-3,     LINPACK (RCOND)
+  ##       K = 3:   arbitrary,  LINPACK (RCOND) (independent of THETA)
+  ##       K = 4:   N >= 4,     SONEST (Higham 1988)
+  ##   (Note that in practice the K = 4 matrix is not usually a
+  ##    counterexample because of the rounding errors in forming it.)
   ##
-  ##          References:
-  ##          A.K. Cline and R.K. Rew, A set of counter-examples to three
-  ##             condition number estimators, SIAM J. Sci. Stat. Comput.,
-  ##             4 (1983), pp. 602-611.
-  ##          N.J. Higham, FORTRAN codes for estimating the one-norm of a real or
-  ##             complex matrix, with applications to condition estimation
-  ##             (Algorithm 674), ACM Trans. Math. Soft., 14 (1988), pp. 381-396.
+  ##   References:
+  ##   A.K. Cline and R.K. Rew, A set of counter-examples to three
+  ##      condition number estimators, SIAM J. Sci. Stat. Comput.,
+  ##      4 (1983), pp. 602-611.
+  ##   N.J. Higham, FORTRAN codes for estimating the one-norm of a real or
+  ##      complex matrix, with applications to condition estimation
+  ##      (Algorithm 674), ACM Trans. Math. Soft., 14 (1988), pp. 381-396.
 
   if (nargin < 1 || nargin > 3)
     error ("gallery: 1 to 3 arguments are required for condex matrix.");
@@ -913,14 +913,14 @@
 
 function A = cycol (n, k)
   ## CYCOL   Matrix whose columns repeat cyclically.
-  ##         A = CYCOL([M N], K) is an M-by-N matrix of the form A = B(1:M,1:N)
-  ##         where B = [C C C...] and C = RANDN(M, K).  Thus A's columns repeat
-  ##         cyclically, and A has rank at most K.   K need not divide N.
-  ##         K defaults to ROUND(N/4).
-  ##         CYCOL(N, K), where N is a scalar, is the same as CYCOL([N N], K).
+  ##   A = CYCOL([M N], K) is an M-by-N matrix of the form A = B(1:M,1:N)
+  ##   where B = [C C C...] and C = RANDN(M, K).  Thus A's columns repeat
+  ##   cyclically, and A has rank at most K.   K need not divide N.
+  ##   K defaults to ROUND(N/4).
+  ##   CYCOL(N, K), where N is a scalar, is the same as CYCOL([N N], K).
   ##
-  ##         This type of matrix can lead to underflow problems for Gaussian
-  ##         elimination: see NA Digest Volume 89, Issue 3 (January 22, 1989).
+  ##   This type of matrix can lead to underflow problems for Gaussian
+  ##   elimination: see NA Digest Volume 89, Issue 3 (January 22, 1989).
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 or 2 arguments are required for cycol matrix.");
@@ -947,19 +947,19 @@
 
 function [c, d, e] = dorr (n, theta = 0.01)
   ## DORR  Dorr matrix - diagonally dominant, ill conditioned, tridiagonal.
-  ##       [C, D, E] = DORR(N, THETA) returns the vectors defining a row diagonally
-  ##       dominant, tridiagonal M-matrix that is ill conditioned for small
-  ##       values of the parameter THETA >= 0.
-  ##       If only one output parameter is supplied then
-  ##       C = FULL(TRIDIAG(C,D,E)), i.e., the matrix iself is returned.
-  ##       The columns of INV(C) vary greatly in norm.  THETA defaults to 0.01.
-  ##       The amount of diagonal dominance is given by (ignoring rounding errors):
-  ##             COMP(C)*ONES(N,1) = THETA*(N+1)^2 * [1 0 0 ... 0 1]'.
+  ##   [C, D, E] = DORR(N, THETA) returns the vectors defining a row diagonally
+  ##   dominant, tridiagonal M-matrix that is ill conditioned for small
+  ##   values of the parameter THETA >= 0.
+  ##   If only one output parameter is supplied then
+  ##   C = FULL(TRIDIAG(C,D,E)), i.e., the matrix itself is returned.
+  ##   The columns of INV(C) vary greatly in norm.  THETA defaults to 0.01.
+  ##   The amount of diagonal dominance is given by (ignoring rounding errors):
+  ##         COMP(C)*ONES(N,1) = THETA*(N+1)^2 * [1 0 0 ... 0 1]'.
   ##
-  ##       Reference:
-  ##       F.W. Dorr, An example of ill-conditioning in the numerical
-  ##       solution of singular perturbation problems, Math. Comp., 25 (1971),
-  ##       pp. 271-283.
+  ##   Reference:
+  ##   F.W. Dorr, An example of ill-conditioning in the numerical
+  ##   solution of singular perturbation problems, Math. Comp., 25 (1971),
+  ##   pp. 271-283.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 or 2 arguments are required for dorr matrix.");
@@ -998,27 +998,27 @@
 
 function A = dramadah (n, k = 1)
   ## DRAMADAH  A (0,1) matrix whose inverse has large integer entries.
-  ##           An anti-Hadamard matrix A is a matrix with elements 0 or 1 for
-  ##           which MU(A) := NORM(INV(A),'FRO') is maximal.
-  ##           A = DRAMADAH(N, K) is an N-by-N (0,1) matrix for which MU(A) is
-  ##           relatively large, although not necessarily maximal.
-  ##           Available types (the default is K = 1):
-  ##           K = 1: A is Toeplitz, with ABS(DET(A)) = 1, and MU(A) > c(1.75)^N,
-  ##                  where c is a constant.
-  ##           K = 2: A is upper triangular and Toeplitz.
-  ##           The inverses of both types have integer entries.
+  ##   An anti-Hadamard matrix A is a matrix with elements 0 or 1 for
+  ##   which MU(A) := NORM(INV(A),'FRO') is maximal.
+  ##   A = DRAMADAH(N, K) is an N-by-N (0,1) matrix for which MU(A) is
+  ##   relatively large, although not necessarily maximal.
+  ##   Available types (the default is K = 1):
+  ##   K = 1: A is Toeplitz, with ABS(DET(A)) = 1, and MU(A) > c(1.75)^N,
+  ##          where c is a constant.
+  ##   K = 2: A is upper triangular and Toeplitz.
+  ##   The inverses of both types have integer entries.
   ##
-  ##           Another interesting (0,1) matrix:
-  ##           K = 3: A has maximal determinant among (0,1) lower Hessenberg
-  ##           matrices: det(A) = the n'th Fibonacci number.  A is Toeplitz.
-  ##           The eigenvalues have an interesting distribution in the complex
-  ##           plane.
+  ##   Another interesting (0,1) matrix:
+  ##   K = 3: A has maximal determinant among (0,1) lower Hessenberg
+  ##   matrices: det(A) = the n'th Fibonacci number.  A is Toeplitz.
+  ##   The eigenvalues have an interesting distribution in the complex
+  ##   plane.
   ##
-  ##           References:
-  ##           R.L. Graham and N.J.A. Sloane, Anti-Hadamard matrices,
-  ##              Linear Algebra and Appl., 62 (1984), pp. 113-137.
-  ##           L. Ching, The maximum determinant of an nxn lower Hessenberg
-  ##              (0,1) matrix, Linear Algebra and Appl., 183 (1993), pp. 147-153.
+  ##   References:
+  ##   R.L. Graham and N.J.A. Sloane, Anti-Hadamard matrices,
+  ##      Linear Algebra and Appl., 62 (1984), pp. 113-137.
+  ##   L. Ching, The maximum determinant of an nxn lower Hessenberg
+  ##      (0,1) matrix, Linear Algebra and Appl., 183 (1993), pp. 147-153.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for dramadah matrix.");
@@ -1065,24 +1065,24 @@
 
 function A = fiedler (c)
   ## FIEDLER  Fiedler matrix - symmetric.
-  ##          A = FIEDLER(C), where C is an n-vector, is the n-by-n symmetric
-  ##          matrix with elements ABS(C(i)-C(j)).
-  ##          Special case: if C is a scalar, then A = FIEDLER(1:C)
-  ##                        (i.e. A(i,j) = ABS(i-j)).
-  ##          Properties:
-  ##            FIEDLER(N) has a dominant positive eigenvalue and all the other
-  ##                       eigenvalues are negative (Szego, 1936).
-  ##            Explicit formulas for INV(A) and DET(A) are given by Todd (1977)
-  ##            and attributed to Fiedler.  These indicate that INV(A) is
-  ##            tridiagonal except for nonzero (1,n) and (n,1) elements.
-  ##            [I think these formulas are valid only if the elements of
-  ##            C are in increasing or decreasing order---NJH.]
+  ##   FIEDLER(C), where C is an n-vector, is the n-by-n symmetric
+  ##   matrix with elements ABS(C(i)-C(j)).
+  ##   Special case: if C is a scalar, then A = FIEDLER(1:C)
+  ##               (i.e. A(i,j) = ABS(i-j)).
+  ##   Properties:
+  ##   FIEDLER(N) has a dominant positive eigenvalue and all the other
+  ##              eigenvalues are negative (Szego, 1936).
+  ##   Explicit formulas for INV(A) and DET(A) are given by Todd (1977)
+  ##   and attributed to Fiedler.  These indicate that INV(A) is
+  ##   tridiagonal except for nonzero (1,n) and (n,1) elements.
+  ##   [I think these formulas are valid only if the elements of
+  ##   C are in increasing or decreasing order---NJH.]
   ##
-  ##            References:
-  ##            G. Szego, Solution to problem 3705, Amer. Math. Monthly,
-  ##               43 (1936), pp. 246-259.
-  ##            J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra,
-  ##               Birkhauser, Basel, and Academic Press, New York, 1977, p. 159.
+  ##   References:
+  ##   G. Szego, Solution to problem 3705, Amer. Math. Monthly,
+  ##      43 (1936), pp. 246-259.
+  ##   J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra,
+  ##      Birkhauser, Basel, and Academic Press, New York, 1977, p. 159.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for fiedler matrix.");
@@ -1107,11 +1107,11 @@
 
 function A = forsythe (n, alpha = sqrt (eps), lambda = 0)
   ## FORSYTHE  Forsythe matrix - a perturbed Jordan block.
-  ##           FORSYTHE(N, ALPHA, LAMBDA) is the N-by-N matrix equal to
-  ##           JORDBLOC(N, LAMBDA) except it has an ALPHA in the (N,1) position.
-  ##           It has the characteristic polynomial
-  ##                   DET(A-t*EYE) = (LAMBDA-t)^N - (-1)^N ALPHA.
-  ##           ALPHA defaults to SQRT(EPS) and LAMBDA to 0.
+  ##   FORSYTHE(N, ALPHA, LAMBDA) is the N-by-N matrix equal to
+  ##   JORDBLOC(N, LAMBDA) except it has an ALPHA in the (N,1) position.
+  ##   It has the characteristic polynomial
+  ##           DET(A-t*EYE) = (LAMBDA-t)^N - (-1)^N ALPHA.
+  ##   ALPHA defaults to SQRT(EPS) and LAMBDA to 0.
 
   if (nargin < 1 || nargin > 3)
     error ("gallery: 1 to 3 arguments are required for forsythe matrix.");
@@ -1129,41 +1129,41 @@
 
 function F = frank (n, k = 0)
   ## FRANK   Frank matrix---ill conditioned eigenvalues.
-  ##         F = FRANK(N, K) is the Frank matrix of order N.  It is upper
-  ##         Hessenberg with determinant 1.  K = 0 is the default; if K = 1 the
-  ##         elements are reflected about the anti-diagonal (1,N)--(N,1).
-  ##         F has all positive eigenvalues and they occur in reciprocal pairs
-  ##         (so that 1 is an eigenvalue if N is odd).
-  ##         The eigenvalues of F may be obtained in terms of the zeros of the
-  ##         Hermite polynomials.
-  ##         The FLOOR(N/2) smallest eigenvalues of F are ill conditioned,
-  ##         the more so for bigger N.
+  ##   F = FRANK(N, K) is the Frank matrix of order N.  It is upper
+  ##   Hessenberg with determinant 1.  K = 0 is the default; if K = 1 the
+  ##   elements are reflected about the anti-diagonal (1,N)--(N,1).
+  ##   F has all positive eigenvalues and they occur in reciprocal pairs
+  ##   (so that 1 is an eigenvalue if N is odd).
+  ##   The eigenvalues of F may be obtained in terms of the zeros of the
+  ##   Hermite polynomials.
+  ##   The FLOOR(N/2) smallest eigenvalues of F are ill conditioned,
+  ##   the more so for bigger N.
   ##
-  ##         DET(FRANK(N)') comes out far from 1 for large N---see Frank (1958)
-  ##         and Wilkinson (1960) for discussions.
+  ##   DET(FRANK(N)') comes out far from 1 for large N---see Frank (1958)
+  ##   and Wilkinson (1960) for discussions.
   ##
-  ##         This version incorporates improvements suggested by W. Kahan.
+  ##   This version incorporates improvements suggested by W. Kahan.
   ##
-  ##         References:
-  ##         W.L. Frank, Computing eigenvalues of complex matrices by determinant
-  ##            evaluation and by methods of Danilewski and Wielandt, J. Soc.
-  ##            Indust. Appl. Math., 6 (1958), pp. 378-392 (see pp. 385, 388).
-  ##         G.H. Golub and J.H. Wilkinson, Ill-conditioned eigensystems and the
-  ##            computation of the Jordan canonical form, SIAM Review, 18 (1976),
-  ##              pp. 578-619 (Section 13).
-  ##         H. Rutishauser, On test matrices, Programmation en Mathematiques
-  ##            Numeriques, Editions Centre Nat. Recherche Sci., Paris, 165,
-  ##            1966, pp. 349-365.  Section 9.
-  ##         J.H. Wilkinson, Error analysis of floating-point computation,
-  ##            Numer. Math., 2 (1960), pp. 319-340 (Section 8).
-  ##         J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University
-  ##            Press, 1965 (pp. 92-93).
-  ##         The next two references give details of the eigensystem, as does
-  ##         Rutishauser (see above).
-  ##         P.J. Eberlein, A note on the matrices denoted by B_n, SIAM J. Appl.
-  ##            Math., 20 (1971), pp. 87-92.
-  ##         J.M. Varah, A generalization of the Frank matrix, SIAM J. Sci. Stat.
-  ##            Comput., 7 (1986), pp. 835-839.
+  ##   References:
+  ##   W.L. Frank, Computing eigenvalues of complex matrices by determinant
+  ##      evaluation and by methods of Danilewski and Wielandt, J. Soc.
+  ##      Indust. Appl. Math., 6 (1958), pp. 378-392 (see pp. 385, 388).
+  ##   G.H. Golub and J.H. Wilkinson, Ill-conditioned eigensystems and the
+  ##      computation of the Jordan canonical form, SIAM Review, 18 (1976),
+  ##        pp. 578-619 (Section 13).
+  ##   H. Rutishauser, On test matrices, Programmation en Mathematiques
+  ##      Numeriques, Editions Centre Nat. Recherche Sci., Paris, 165,
+  ##      1966, pp. 349-365.  Section 9.
+  ##   J.H. Wilkinson, Error analysis of floating-point computation,
+  ##      Numer. Math., 2 (1960), pp. 319-340 (Section 8).
+  ##   J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University
+  ##      Press, 1965 (pp. 92-93).
+  ##   The next two references give details of the eigensystem, as does
+  ##   Rutishauser (see above).
+  ##   P.J. Eberlein, A note on the matrices denoted by B_n, SIAM J. Appl.
+  ##      Math., 20 (1971), pp. 87-92.
+  ##   J.M. Varah, A generalization of the Frank matrix, SIAM J. Sci. Stat.
+  ##      Comput., 7 (1986), pp. 835-839.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for frank matrix.");
@@ -1196,20 +1196,20 @@
 function A = gearmat (n, i = n, j = -n)
   ## NOTE: this function was named gearm in the original Test Matrix Toolbox
   ## GEARMAT   Gear matrix.
-  ##         A = GEARMAT(N,I,J) is the N-by-N matrix with ones on the sub- and
-  ##         super-diagonals, SIGN(I) in the (1,ABS(I)) position, SIGN(J)
-  ##         in the (N,N+1-ABS(J)) position, and zeros everywhere else.
-  ##         Defaults: I = N, j = -N.
-  ##         All eigenvalues are of the form 2*COS(a) and the eigenvectors
-  ##         are of the form [SIN(w+a), SIN(w+2a), ..., SIN(w+Na)].
-  ##         The values of a and w are given in the reference below.
-  ##         A can have double and triple eigenvalues and can be defective.
-  ##         GEARMAT(N) is singular.
+  ##   A = GEARMAT(N,I,J) is the N-by-N matrix with ones on the sub- and
+  ##   super-diagonals, SIGN(I) in the (1,ABS(I)) position, SIGN(J)
+  ##   in the (N,N+1-ABS(J)) position, and zeros everywhere else.
+  ##   Defaults: I = N, j = -N.
+  ##   All eigenvalues are of the form 2*COS(a) and the eigenvectors
+  ##   are of the form [SIN(w+a), SIN(w+2a), ..., SIN(w+Na)].
+  ##   The values of a and w are given in the reference below.
+  ##   A can have double and triple eigenvalues and can be defective.
+  ##   GEARMAT(N) is singular.
   ##
-  ##         (GEAR is a Simulink function, hence GEARMAT for Gear matrix.)
-  ##         Reference:
-  ##         C.W. Gear, A simple set of test matrices for eigenvalue programs,
-  ##         Math. Comp., 23 (1969), pp. 119-125.
+  ##   (GEAR is a Simulink function, hence GEARMAT for Gear matrix.)
+  ##   Reference:
+  ##   C.W. Gear, A simple set of test matrices for eigenvalue programs,
+  ##   Math. Comp., 23 (1969), pp. 119-125.
 
   if (nargin < 1 || nargin > 3)
     error ("gallery: 1 to 3 arguments are required for gearmat matrix.");
@@ -1228,18 +1228,18 @@
 
 function G = grcar (n, k = 3)
   ## GRCAR     Grcar matrix - a Toeplitz matrix with sensitive eigenvalues.
-  ##           GRCAR(N, K) is an N-by-N matrix with -1s on the
-  ##           subdiagonal, 1s on the diagonal, and K superdiagonals of 1s.
-  ##           The default is K = 3.  The eigenvalues of this matrix form an
-  ##           interesting pattern in the complex plane (try PS(GRCAR(32))).
+  ##   GRCAR(N, K) is an N-by-N matrix with -1s on the
+  ##   subdiagonal, 1s on the diagonal, and K superdiagonals of 1s.
+  ##   The default is K = 3.  The eigenvalues of this matrix form an
+  ##   interesting pattern in the complex plane (try PS(GRCAR(32))).
   ##
-  ##           References:
-  ##           J.F. Grcar, Operator coefficient methods for linear equations,
-  ##                Report SAND89-8691, Sandia National Laboratories, Albuquerque,
-  ##                New Mexico, 1989 (Appendix 2).
-  ##           N.M. Nachtigal, L. Reichel and L.N. Trefethen, A hybrid GMRES
-  ##                algorithm for nonsymmetric linear systems, SIAM J. Matrix Anal.
-  ##                Appl., 13 (1992), pp. 796-825.
+  ##   References:
+  ##   J.F. Grcar, Operator coefficient methods for linear equations,
+  ##        Report SAND89-8691, Sandia National Laboratories, Albuquerque,
+  ##        New Mexico, 1989 (Appendix 2).
+  ##   N.M. Nachtigal, L. Reichel and L.N. Trefethen, A hybrid GMRES
+  ##        algorithm for nonsymmetric linear systems, SIAM J. Matrix Anal.
+  ##        Appl., 13 (1992), pp. 796-825.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for grcar matrix.");
@@ -1254,16 +1254,16 @@
 
 function A = hanowa (n, d = -1)
   ## HANOWA  A matrix whose eigenvalues lie on a vertical line in the complex plane.
-  ##         HANOWA(N, d) is the N-by-N block 2x2 matrix (thus N = 2M must be even)
-  ##                       [d*EYE(M)   -DIAG(1:M)
-  ##                        DIAG(1:M)   d*EYE(M)]
-  ##         It has complex eigenvalues lambda(k) = d +/- k*i  (1 <= k <= M).
-  ##         Parameter d defaults to -1.
+  ##   HANOWA(N, d) is the N-by-N block 2x2 matrix (thus N = 2M must be even)
+  ##                 [d*EYE(M)   -DIAG(1:M)
+  ##                  DIAG(1:M)   d*EYE(M)]
+  ##   It has complex eigenvalues lambda(k) = d +/- k*i  (1 <= k <= M).
+  ##   Parameter d defaults to -1.
   ##
-  ##         Reference:
-  ##         E. Hairer, S.P. Norsett and G. Wanner, Solving Ordinary
-  ##         Differential Equations I: Nonstiff Problems, Springer-Verlag,
-  ##         Berlin, 1987. (pp. 86-87)
+  ##   Reference:
+  ##   E. Hairer, S.P. Norsett and G. Wanner, Solving Ordinary
+  ##   Differential Equations I: Nonstiff Problems, Springer-Verlag,
+  ##   Berlin, 1987. (pp. 86-87)
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for hanowa matrix.");
@@ -1282,28 +1282,28 @@
 
 function [v, beta] = house (x)
   ## HOUSE   Householder matrix.
-  ##         If [v, beta] = HOUSE(x) then H = EYE - beta*v*v' is a Householder
-  ##         matrix such that Hx = -sign(x(1))*norm(x)*e_1.
-  ##         NB: If x = 0 then v = 0, beta = 1 is returned.
-  ##             x can be real or complex.
-  ##             sign(x) := exp(i*arg(x)) ( = x./abs(x) when x ~= 0).
+  ##   If [v, beta] = HOUSE(x) then H = EYE - beta*v*v' is a Householder
+  ##   matrix such that Hx = -sign(x(1))*norm(x)*e_1.
+  ##   NB: If x = 0 then v = 0, beta = 1 is returned.
+  ##       x can be real or complex.
+  ##       sign(x) := exp(i*arg(x)) ( = x./abs(x) when x ~= 0).
   ##
-  ##         Theory: (textbook references Golub & Van Loan 1989, 38-43;
-  ##                  Stewart 1973, 231-234, 262; Wilkinson 1965, 48-50).
-  ##         Hx = y: (I - beta*v*v')x = -s*e_1.
-  ##         Must have |s| = norm(x), v = x+s*e_1, and
-  ##         x'y = x'Hx =(x'Hx)' real => arg(s) = arg(x(1)).
-  ##         So take s = sign(x(1))*norm(x) (which avoids cancellation).
-  ##         v'v = (x(1)+s)^2 + x(2)^2 + ... + x(n)^2
-  ##             = 2*norm(x)*(norm(x) + |x(1)|).
+  ##   Theory: (textbook references Golub & Van Loan 1989, 38-43;
+  ##            Stewart 1973, 231-234, 262; Wilkinson 1965, 48-50).
+  ##   Hx = y: (I - beta*v*v')x = -s*e_1.
+  ##   Must have |s| = norm(x), v = x+s*e_1, and
+  ##   x'y = x'Hx =(x'Hx)' real => arg(s) = arg(x(1)).
+  ##   So take s = sign(x(1))*norm(x) (which avoids cancellation).
+  ##   v'v = (x(1)+s)^2 + x(2)^2 + ... + x(n)^2
+  ##       = 2*norm(x)*(norm(x) + |x(1)|).
   ##
-  ##         References:
-  ##         G.H. Golub and C.F. Van Loan, Matrix Computations, second edition,
-  ##            Johns Hopkins University Press, Baltimore, Maryland, 1989.
-  ##         G.W. Stewart, Introduction to Matrix Computations, Academic Press,
-  ##            New York, 1973,
-  ##         J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University
-  ##            Press, 1965.
+  ##   References:
+  ##   G.H. Golub and C.F. Van Loan, Matrix Computations, second edition,
+  ##      Johns Hopkins University Press, Baltimore, Maryland, 1989.
+  ##   G.W. Stewart, Introduction to Matrix Computations, Academic Press,
+  ##      New York, 1973.

+  ##   J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University
+  ##      Press, 1965.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for house matrix.");
@@ -1371,26 +1371,26 @@
 
 function A = invhess (x, y)
   ## INVHESS  Inverse of an upper Hessenberg matrix.
-  ##          INVHESS(X, Y), where X is an N-vector and Y an N-1 vector,
-  ##          is the matrix whose lower triangle agrees with that of
-  ##          ONES(N,1)*X' and whose strict upper triangle agrees with
-  ##          that of [1 Y]*ONES(1,N).
-  ##          The matrix is nonsingular if X(1) ~= 0 and X(i+1) ~= Y(i)
-  ##          for all i, and its inverse is an upper Hessenberg matrix.
-  ##          If Y is omitted it defaults to -X(1:N-1).
-  ##          Special case: if X is a scalar INVHESS(X) is the same as
-  ##          INVHESS(1:X).
+  ##   INVHESS(X, Y), where X is an N-vector and Y an N-1 vector,
+  ##   is the matrix whose lower triangle agrees with that of
+  ##   ONES(N,1)*X' and whose strict upper triangle agrees with
+  ##   that of [1 Y]*ONES(1,N).
+  ##   The matrix is nonsingular if X(1) ~= 0 and X(i+1) ~= Y(i)
+  ##   for all i, and its inverse is an upper Hessenberg matrix.
+  ##   If Y is omitted it defaults to -X(1:N-1).
+  ##   Special case: if X is a scalar INVHESS(X) is the same as
+  ##   INVHESS(1:X).
   ##
-  ##          References:
-  ##          F.N. Valvi and V.S. Geroyannis, Analytic inverses and
-  ##              determinants for a class of matrices, IMA Journal of Numerical
-  ##              Analysis, 7 (1987), pp. 123-128.
-  ##          W.-L. Cao and W.J. Stewart, A note on inverses of Hessenberg-like
-  ##              matrices, Linear Algebra and Appl., 76 (1986), pp. 233-240.
-  ##          Y. Ikebe, On inverses of Hessenberg matrices, Linear Algebra and
-  ##              Appl., 24 (1979), pp. 93-97.
-  ##          P. Rozsa, On the inverse of band matrices, Integral Equations and
-  ##              Operator Theory, 10 (1987), pp. 82-95.
+  ##   References:
+  ##   F.N. Valvi and V.S. Geroyannis, Analytic inverses and
+  ##       determinants for a class of matrices, IMA Journal of Numerical
+  ##       Analysis, 7 (1987), pp. 123-128.
+  ##   W.-L. Cao and W.J. Stewart, A note on inverses of Hessenberg-like
+  ##       matrices, Linear Algebra and Appl., 76 (1986), pp. 233-240.
+  ##   Y. Ikebe, On inverses of Hessenberg matrices, Linear Algebra and
+  ##       Appl., 24 (1979), pp. 93-97.
+  ##   P. Rozsa, On the inverse of band matrices, Integral Equations and
+  ##       Operator Theory, 10 (1987), pp. 82-95.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for invhess matrix.");
@@ -1425,15 +1425,15 @@
 
 function A = invol (n)
   ## INVOL   An involutory matrix.
-  ##         A = INVOL(N) is an N-by-N involutory (A*A = EYE(N)) and
-  ##         ill-conditioned matrix.
-  ##         It is a diagonally scaled version of HILB(N).
-  ##         NB: B = (EYE(N)-A)/2 and B = (EYE(N)+A)/2 are idempotent (B*B = B).
+  ##   A = INVOL(N) is an N-by-N involutory (A*A = EYE(N)) and
+  ##   ill-conditioned matrix.
+  ##   It is a diagonally scaled version of HILB(N).
+  ##   NB: B = (EYE(N)-A)/2 and B = (EYE(N)+A)/2 are idempotent (B*B = B).
   ##
-  ##         Reference:
-  ##         A.S. Householder and J.A. Carpenter, The singular values
-  ##         of involutory and of idempotent matrices, Numer. Math. 5 (1963),
-  ##         pp. 234-237.
+  ##   Reference:
+  ##   A.S. Householder and J.A. Carpenter, The singular values
+  ##   of involutory and of idempotent matrices, Numer. Math. 5 (1963),
+  ##   pp. 234-237.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for invol matrix.");
@@ -1454,19 +1454,19 @@
 
 function [A, detA] = ipjfact (n, k = 0)
   ## IPJFACT   A Hankel matrix with factorial elements.
-  ##           A = IPJFACT(N, K) is the matrix with
-  ##                     A(i,j) = (i+j)!    (K = 0, default)
-  ##                     A(i,j) = 1/(i+j)!  (K = 1)
-  ##           Both are Hankel matrices.
-  ##           The determinant and inverse are known explicitly.
-  ##           If a second output argument is present, d = DET(A) is returned:
-  ##           [A, d] = IPJFACT(N, K);
+  ##   A = IPJFACT(N, K) is the matrix with
+  ##             A(i,j) = (i+j)!    (K = 0, default)
+  ##             A(i,j) = 1/(i+j)!  (K = 1)
+  ##   Both are Hankel matrices.
+  ##   The determinant and inverse are known explicitly.
+  ##   If a second output argument is present, d = DET(A) is returned:
+  ##   [A, d] = IPJFACT(N, K);
   ##
-  ##           Suggested by P. R. Graves-Morris.
+  ##   Suggested by P. R. Graves-Morris.
   ##
-  ##           Reference:
-  ##           M.J.C. Gover, The explicit inverse of factorial Hankel matrices,
-  ##           Dept. of Mathematics, University of Bradford, 1993.
+  ##   Reference:
+  ##   M.J.C. Gover, The explicit inverse of factorial Hankel matrices,
+  ##   Dept. of Mathematics, University of Bradford, 1993.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for ipjfact matrix.");
@@ -1515,8 +1515,8 @@
 
 function J = jordbloc (n, lambda = 1)
   ## JORDBLOC  Jordan block.
-  ##           JORDBLOC(N, LAMBDA) is the N-by-N Jordan block with eigenvalue
-  ##           LAMBDA.  LAMBDA = 1 is the default.
+  ##   JORDBLOC(N, LAMBDA) is the N-by-N Jordan block with eigenvalue
+  ##   LAMBDA.  LAMBDA = 1 is the default.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for jordbloc matrix.");
@@ -1531,30 +1531,30 @@
 
 function U = kahan (n, theta = 1.2, pert = 25)
   ## KAHAN  Kahan matrix - upper trapezoidal.
-  ##        KAHAN(N, THETA) is an upper trapezoidal matrix
-  ##        that has some interesting properties regarding estimation of
-  ##        condition and rank.
-  ##        The matrix is N-by-N unless N is a 2-vector, in which case it
-  ##        is N(1)-by-N(2).
-  ##        The parameter THETA defaults to 1.2.
-  ##        The useful range of THETA is 0 < THETA < PI.
+  ##   KAHAN(N, THETA) is an upper trapezoidal matrix
+  ##   that has some interesting properties regarding estimation of
+  ##   condition and rank.
+  ##   The matrix is N-by-N unless N is a 2-vector, in which case it
+  ##   is N(1)-by-N(2).
+  ##   The parameter THETA defaults to 1.2.
+  ##   The useful range of THETA is 0 < THETA < PI.
   ##
-  ##        To ensure that the QR factorization with column pivoting does not
-  ##        interchange columns in the presence of rounding errors, the diagonal
-  ##        is perturbed by PERT*EPS*diag( [N:-1:1] ).
-  ##        The default is PERT = 25, which ensures no interchanges for KAHAN(N)
-  ##        up to at least N = 90 in IEEE arithmetic.
-  ##        KAHAN(N, THETA, PERT) uses the given value of PERT.
+  ##   To ensure that the QR factorization with column pivoting does not
+  ##   interchange columns in the presence of rounding errors, the diagonal
+  ##   is perturbed by PERT*EPS*diag( [N:-1:1] ).
+  ##   The default is PERT = 25, which ensures no interchanges for KAHAN(N)
+  ##   up to at least N = 90 in IEEE arithmetic.
+  ##   KAHAN(N, THETA, PERT) uses the given value of PERT.
   ##
-  ##        The inverse of KAHAN(N, THETA) is known explicitly: see
-  ##        Higham (1987, p. 588), for example.
-  ##        The diagonal perturbation was suggested by Christian Bischof.
+  ##   The inverse of KAHAN(N, THETA) is known explicitly: see
+  ##   Higham (1987, p. 588), for example.
+  ##   The diagonal perturbation was suggested by Christian Bischof.
   ##
-  ##        References:
-  ##        W. Kahan, Numerical linear algebra, Canadian Math. Bulletin,
-  ##           9 (1966), pp. 757-801.
-  ##        N.J. Higham, A survey of condition number estimation for
-  ##           triangular matrices, SIAM Review, 29 (1987), pp. 575-596.
+  ##   References:
+  ##   W. Kahan, Numerical linear algebra, Canadian Math. Bulletin,
+  ##      9 (1966), pp. 757-801.
+  ##   N.J. Higham, A survey of condition number estimation for
+  ##      triangular matrices, SIAM Review, 29 (1987), pp. 575-596.
 
   if (nargin < 1 || nargin > 3)
     error ("gallery: 1 to 3 arguments are required for kahan matrix.");
@@ -1584,22 +1584,22 @@
 
 function A = kms (n, rho = 0.5)
   ## KMS   Kac-Murdock-Szego Toeplitz matrix.
-  ##       A = KMS(N, RHO) is the N-by-N Kac-Murdock-Szego Toeplitz matrix with
-  ##       A(i,j) = RHO^(ABS((i-j))) (for real RHO).
-  ##       If RHO is complex, then the same formula holds except that elements
-  ##       below the diagonal are conjugated.
-  ##       RHO defaults to 0.5.
-  ##       Properties:
-  ##          A has an LDL' factorization with
-  ##                   L = INV(TRIW(N,-RHO,1)'),
-  ##                   D(i,i) = (1-ABS(RHO)^2)*EYE(N) except D(1,1) = 1.
-  ##          A is positive definite if and only if 0 < ABS(RHO) < 1.
-  ##          INV(A) is tridiagonal.
+  ##   A = KMS(N, RHO) is the N-by-N Kac-Murdock-Szego Toeplitz matrix with
+  ##   A(i,j) = RHO^(ABS((i-j))) (for real RHO).
+  ##   If RHO is complex, then the same formula holds except that elements
+  ##   below the diagonal are conjugated.
+  ##   RHO defaults to 0.5.
+  ##   Properties:
+  ##      A has an LDL' factorization with
+  ##               L = INV(TRIW(N,-RHO,1)'),
+  ##               D(i,i) = (1-ABS(RHO)^2)*EYE(N) except D(1,1) = 1.
+  ##      A is positive definite if and only if 0 < ABS(RHO) < 1.
+  ##      INV(A) is tridiagonal.
   ##
-  ##        Reference:
-  ##        W.F. Trench, Numerical solution of the eigenvalue problem
-  ##        for Hermitian Toeplitz matrices, SIAM J. Matrix Analysis and Appl.,
-  ##        10 (1989), pp. 135-146 (and see the references therein).
+  ##    Reference:
+  ##    W.F. Trench, Numerical solution of the eigenvalue problem
+  ##    for Hermitian Toeplitz matrices, SIAM J. Matrix Analysis and Appl.,
+  ##    10 (1989), pp. 135-146 (and see the references therein).
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for lauchli matrix.");
@@ -1619,15 +1619,15 @@
 
 function B = krylov (A, x, j)
   ## KRYLOV    Krylov matrix.
-  ##           KRYLOV(A, x, j) is the Krylov matrix
-  ##                [x, Ax, A^2x, ..., A^(j-1)x],
-  ##           where A is an n-by-n matrix and x is an n-vector.
-  ##           Defaults: x = ONES(n,1), j = n.
-  ##           KRYLOV(n) is the same as KRYLOV(RANDN(n)).
+  ##   KRYLOV(A, x, j) is the Krylov matrix
+  ##        [x, Ax, A^2x, ..., A^(j-1)x],
+  ##   where A is an n-by-n matrix and x is an n-vector.
+  ##   Defaults: x = ONES(n,1), j = n.
+  ##   KRYLOV(n) is the same as KRYLOV(RANDN(n)).
   ##
-  ##           Reference:
-  ##           G.H. Golub and C.F. Van Loan, Matrix Computations, second edition,
-  ##           Johns Hopkins University Press, Baltimore, Maryland, 1989, p. 369.
+  ##   Reference:
+  ##   G.H. Golub and C.F. Van Loan, Matrix Computations, second edition,
+  ##   Johns Hopkins University Press, Baltimore, Maryland, 1989, p. 369.
 
   if (nargin < 1 || nargin > 3)
     error ("gallery: 1 to 3 arguments are required for krylov matrix.");
@@ -1662,14 +1662,14 @@
 
 function A = lauchli (n, mu = sqrt (eps))
   ## LAUCHLI   Lauchli matrix - rectangular.
-  ##           LAUCHLI(N, MU) is the (N+1)-by-N matrix [ONES(1,N); MU*EYE(N))].
-  ##           It is a well-known example in least squares and other problems
-  ##           that indicates the dangers of forming A'*A.
-  ##           MU defaults to SQRT(EPS).
+  ##   LAUCHLI(N, MU) is the (N+1)-by-N matrix [ONES(1,N); MU*EYE(N))].
+  ##   It is a well-known example in least squares and other problems
+  ##   that indicates the dangers of forming A'*A.
+  ##   MU defaults to SQRT(EPS).
   ##
-  ##           Reference:
-  ##           P. Lauchli, Jordan-Elimination und Ausgleichung nach
-  ##           kleinsten Quadraten, Numer. Math, 3 (1961), pp. 226-240.
+  ##   Reference:
+  ##   P. Lauchli, Jordan-Elimination und Ausgleichung nach
+  ##   kleinsten Quadraten, Numer. Math, 3 (1961), pp. 226-240.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for lauchli matrix.");
@@ -1685,19 +1685,19 @@
 
 function A = lehmer (n)
   ## LEHMER  Lehmer matrix - symmetric positive definite.
-  ##         A = LEHMER(N) is the symmetric positive definite N-by-N matrix with
-  ##                          A(i,j) = i/j for j >= i.
-  ##         A is totally nonnegative.  INV(A) is tridiagonal, and explicit
-  ##         formulas are known for its entries.
-  ##         N <= COND(A) <= 4*N*N.
+  ##   A = LEHMER(N) is the symmetric positive definite N-by-N matrix with
+  ##                    A(i,j) = i/j for j >= i.
+  ##   A is totally nonnegative.  INV(A) is tridiagonal, and explicit
+  ##   formulas are known for its entries.
+  ##   N <= COND(A) <= 4*N*N.
   ##
-  ##         References:
-  ##         M. Newman and J. Todd, The evaluation of matrix inversion
-  ##            programs, J. Soc. Indust. Appl. Math., 6 (1958), pp. 466-476.
-  ##         Solutions to problem E710 (proposed by D.H. Lehmer): The inverse
-  ##            of a matrix, Amer. Math. Monthly, 53 (1946), pp. 534-535.
-  ##         J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra,
-  ##            Birkhauser, Basel, and Academic Press, New York, 1977, p. 154.
+  ##   References:
+  ##   M. Newman and J. Todd, The evaluation of matrix inversion
+  ##      programs, J. Soc. Indust. Appl. Math., 6 (1958), pp. 466-476.
+  ##   Solutions to problem E710 (proposed by D.H. Lehmer): The inverse
+  ##      of a matrix, Amer. Math. Monthly, 53 (1946), pp. 534-535.
+  ##   J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra,
+  ##      Birkhauser, Basel, and Academic Press, New York, 1977, p. 154.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for lehmer matrix.");
@@ -1712,23 +1712,23 @@
 
 function T = lesp (n)
   ## LESP   A tridiagonal matrix with real, sensitive eigenvalues.
-  ##        LESP(N) is an N-by-N matrix whose eigenvalues are real and smoothly
-  ##        distributed in the interval approximately [-2*N-3.5, -4.5].
-  ##        The sensitivities of the eigenvalues increase exponentially as
-  ##        the eigenvalues grow more negative.
-  ##        The matrix is similar to the symmetric tridiagonal matrix with
-  ##        the same diagonal entries and with off-diagonal entries 1,
-  ##        via a similarity transformation with D = diag(1!,2!,...,N!).
+  ##   LESP(N) is an N-by-N matrix whose eigenvalues are real and smoothly
+  ##   distributed in the interval approximately [-2*N-3.5, -4.5].
+  ##   The sensitivities of the eigenvalues increase exponentially as
+  ##   the eigenvalues grow more negative.
+  ##   The matrix is similar to the symmetric tridiagonal matrix with
+  ##   the same diagonal entries and with off-diagonal entries 1,
+  ##   via a similarity transformation with D = diag(1!,2!,...,N!).
   ##
-  ##        References:
-  ##        H.W.J. Lenferink and M.N. Spijker, On the use of stability regions in
-  ##             the numerical analysis of initial value problems,
-  ##             Math. Comp., 57 (1991), pp. 221-237.
-  ##        L.N. Trefethen, Pseudospectra of matrices, in Numerical Analysis 1991,
-  ##             Proceedings of the 14th Dundee Conference,
-  ##             D.F. Griffiths and G.A. Watson, eds, Pitman Research Notes in
-  ##             Mathematics, volume 260, Longman Scientific and Technical, Essex,
-  ##             UK, 1992, pp. 234-266.
+  ##   References:
+  ##   H.W.J. Lenferink and M.N. Spijker, On the use of stability regions in
+  ##        the numerical analysis of initial value problems,
+  ##        Math. Comp., 57 (1991), pp. 221-237.
+  ##   L.N. Trefethen, Pseudospectra of matrices, in Numerical Analysis 1991,
+  ##        Proceedings of the 14th Dundee Conference,
+  ##        D.F. Griffiths and G.A. Watson, eds, Pitman Research Notes in
+  ##        Mathematics, volume 260, Longman Scientific and Technical, Essex,
+  ##        UK, 1992, pp. 234-266.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for lesp matrix.");
@@ -1742,13 +1742,13 @@
 
 function A = lotkin (n)
   ## LOTKIN  Lotkin matrix.
-  ##         A = LOTKIN(N) is the Hilbert matrix with its first row altered to
-  ##         all ones.  A is unsymmetric, ill-conditioned, and has many negative
-  ##         eigenvalues of small magnitude.
-  ##         The inverse has integer entries and is known explicitly.
+  ##   A = LOTKIN(N) is the Hilbert matrix with its first row altered to
+  ##   all ones.  A is unsymmetric, ill-conditioned, and has many negative
+  ##   eigenvalues of small magnitude.
+  ##   The inverse has integer entries and is known explicitly.
   ##
-  ##         Reference:
-  ##         M. Lotkin, A set of test matrices, MTAC, 9 (1955), pp. 153-161.
+  ##   Reference:
+  ##   M. Lotkin, A set of test matrices, MTAC, 9 (1955), pp. 153-161.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for lotkin matrix.");
@@ -1762,21 +1762,21 @@
 
 function A = minij (n)
   ## MINIJ   Symmetric positive definite matrix MIN(i,j).
-  ##         A = MINIJ(N) is the N-by-N symmetric positive definite matrix with
-  ##         A(i,j) = MIN(i,j).
-  ##         Properties, variations:
-  ##         INV(A) is tridiagonal: it is minus the second difference matrix
-  ##                     except its (N,N) element is 1.
-  ##         2*A-ONES(N) (Givens' matrix) has tridiagonal inverse and
-  ##                     eigenvalues .5*sec^2([2r-1)PI/4N], r=1:N.
-  ##         (N+1)*ONES(N)-A also has a tridiagonal inverse.
+  ##   A = MINIJ(N) is the N-by-N symmetric positive definite matrix with
+  ##   A(i,j) = MIN(i,j).
+  ##   Properties, variations:
+  ##   INV(A) is tridiagonal: it is minus the second difference matrix
+  ##               except its (N,N) element is 1.
+  ##   2*A-ONES(N) (Givens' matrix) has tridiagonal inverse and
+  ##               eigenvalues .5*sec^2( (2r-1)*PI/(4N) ), r=1:N.
+  ##   (N+1)*ONES(N)-A also has a tridiagonal inverse.
   ##
-  ##         References:
-  ##         J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra,
-  ##            Birkhauser, Basel, and Academic Press, New York, 1977, p. 158.
-  ##         D.E. Rutherford, Some continuant determinants arising in physics and
-  ##            chemistry---II, Proc. Royal Soc. Edin., 63, A (1952), pp. 232-241.
-  ##            (For the eigenvalues of Givens' matrix.)
+  ##   References:
+  ##   J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra,
+  ##      Birkhauser, Basel, and Academic Press, New York, 1977, p. 158.
+  ##   D.E. Rutherford, Some continuant determinants arising in physics and
+  ##      chemistry---II, Proc. Royal Soc. Edin., 63, A (1952), pp. 232-241.
+  ##      (For the eigenvalues of Givens' matrix.)
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for minij matrix.");
@@ -1789,17 +1789,17 @@
 
 function A = moler (n, alpha = -1)
   ## MOLER   Moler matrix - symmetric positive definite.
-  ##         A = MOLER(N, ALPHA) is the symmetric positive definite N-by-N matrix
-  ##         U'*U where U = TRIW(N, ALPHA).
-  ##         For ALPHA = -1 (the default) A(i,j) = MIN(i,j)-2, A(i,i) = i.
-  ##         A has one small eigenvalue.
+  ##   A = MOLER(N, ALPHA) is the symmetric positive definite N-by-N matrix
+  ##   U'*U where U = TRIW(N, ALPHA).
+  ##   For ALPHA = -1 (the default) A(i,j) = MIN(i,j)-2, A(i,i) = i.
+  ##   A has one small eigenvalue.
   ##
-  ##         Nash (1990) attributes the ALPHA = -1 matrix to Moler.
+  ##   Nash (1990) attributes the ALPHA = -1 matrix to Moler.
   ##
-  ##         Reference:
-  ##         J.C. Nash, Compact Numerical Methods for Computers: Linear
-  ##         Algebra and Function Minimisation, second edition, Adam Hilger,
-  ##         Bristol, 1990 (Appendix 1).
+  ##   Reference:
+  ##   J.C. Nash, Compact Numerical Methods for Computers: Linear
+  ##   Algebra and Function Minimisation, second edition, Adam Hilger,
+  ##   Bristol, 1990 (Appendix 1).
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for moler matrix.");
@@ -1814,16 +1814,16 @@
 
 function [A, T] = neumann (n)
   ## NEUMANN  Singular matrix from the discrete Neumann problem (sparse).
-  ##          NEUMANN(N) is the singular, row diagonally dominant matrix resulting
-  ##          from discretizing the Neumann problem with the usual five point
-  ##          operator on a regular mesh.
-  ##          It has a one-dimensional null space with null vector ONES(N,1).
-  ##          The dimension N should be a perfect square, or else a 2-vector,
-  ##          in which case the dimension of the matrix is N(1)*N(2).
+  ##   NEUMANN(N) is the singular, row diagonally dominant matrix resulting
+  ##   from discretizing the Neumann problem with the usual five point
+  ##   operator on a regular mesh.
+  ##   It has a one-dimensional null space with null vector ONES(N,1).
+  ##   The dimension N should be a perfect square, or else a 2-vector,
+  ##   in which case the dimension of the matrix is N(1)*N(2).
   ##
-  ##          Reference:
-  ##          R.J. Plemmons, Regular splittings and the discrete Neumann
-  ##          problem, Numer. Math., 25 (1976), pp. 153-161.
+  ##   Reference:
+  ##   R.J. Plemmons, Regular splittings and the discrete Neumann
+  ##   problem, Numer. Math., 25 (1976), pp. 153-161.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for neumann matrix.");
@@ -1890,37 +1890,37 @@
 
 function Q = orthog (n, k = 1)
   ## ORTHOG Orthogonal and nearly orthogonal matrices.
-  ##        Q = ORTHOG(N, K) selects the K'th type of matrix of order N.
-  ##        K > 0 for exactly orthogonal matrices, K < 0 for diagonal scalings of
-  ##        orthogonal matrices.
-  ##        Available types: (K = 1 is the default)
-  ##        K = 1:  Q(i,j) = SQRT(2/(n+1)) * SIN( i*j*PI/(n+1) )
-  ##                Symmetric eigenvector matrix for second difference matrix.
-  ##        K = 2:  Q(i,j) = 2/SQRT(2*n+1)) * SIN( 2*i*j*PI/(2*n+1) )
-  ##                Symmetric.
-  ##        K = 3:  Q(r,s) = EXP(2*PI*i*(r-1)*(s-1)/n) / SQRT(n)  (i=SQRT(-1))
-  ##                Unitary, the Fourier matrix.  Q^4 is the identity.
-  ##                This is essentially the same matrix as FFT(EYE(N))/SQRT(N)!
-  ##        K = 4:  Helmert matrix: a permutation of a lower Hessenberg matrix,
-  ##                whose first row is ONES(1:N)/SQRT(N).
-  ##        K = 5:  Q(i,j) = SIN( 2*PI*(i-1)*(j-1)/n ) + COS( 2*PI*(i-1)*(j-1)/n ).
-  ##                Symmetric matrix arising in the Hartley transform.
-  ##        K = -1: Q(i,j) = COS( (i-1)*(j-1)*PI/(n-1) )
-  ##                Chebyshev Vandermonde-like matrix, based on extrema of T(n-1).
-  ##        K = -2: Q(i,j) = COS( (i-1)*(j-1/2)*PI/n) )
-  ##                Chebyshev Vandermonde-like matrix, based on zeros of T(n).
+  ##   Q = ORTHOG(N, K) selects the K'th type of matrix of order N.
+  ##   K > 0 for exactly orthogonal matrices, K < 0 for diagonal scalings of
+  ##   orthogonal matrices.
+  ##   Available types: (K = 1 is the default)
+  ##   K = 1:  Q(i,j) = SQRT(2/(n+1)) * SIN( i*j*PI/(n+1) )
+  ##           Symmetric eigenvector matrix for second difference matrix.
+  ##   K = 2:  Q(i,j) = 2/SQRT(2*n+1) * SIN( 2*i*j*PI/(2*n+1) )
+  ##           Symmetric.
+  ##   K = 3:  Q(r,s) = EXP(2*PI*i*(r-1)*(s-1)/n) / SQRT(n)  (i=SQRT(-1))
+  ##           Unitary, the Fourier matrix.  Q^4 is the identity.
+  ##           This is essentially the same matrix as FFT(EYE(N))/SQRT(N)!
+  ##   K = 4:  Helmert matrix: a permutation of a lower Hessenberg matrix,
+  ##           whose first row is ONES(1:N)/SQRT(N).
+  ##   K = 5:  Q(i,j) = SIN( 2*PI*(i-1)*(j-1)/n ) + COS( 2*PI*(i-1)*(j-1)/n ).
+  ##           Symmetric matrix arising in the Hartley transform.
+  ##   K = -1: Q(i,j) = COS( (i-1)*(j-1)*PI/(n-1) )
+  ##           Chebyshev Vandermonde-like matrix, based on extrema of T(n-1).
+  ##   K = -2: Q(i,j) = COS( (i-1)*(j-1/2)*PI/n )
+  ##           Chebyshev Vandermonde-like matrix, based on zeros of T(n).
   ##
-  ##        References:
-  ##        N.J. Higham and D.J. Higham, Large growth factors in Gaussian
-  ##             elimination with pivoting, SIAM J. Matrix Analysis and  Appl.,
-  ##             10 (1989), pp. 155-164.
-  ##        P. Morton, On the eigenvectors of Schur's matrix, J. Number Theory,
-  ##             12 (1980), pp. 122-127. (Re. ORTHOG(N, 3))
-  ##        H.O. Lancaster, The Helmert Matrices, Amer. Math. Monthly, 72 (1965),
-  ##             pp. 4-12.
-  ##        D. Bini and P. Favati, On a matrix algebra related to the discrete
-  ##             Hartley transform, SIAM J. Matrix Anal. Appl., 14 (1993),
-  ##             pp. 500-507.
+  ##   References:
+  ##   N.J. Higham and D.J. Higham, Large growth factors in Gaussian
+  ##        elimination with pivoting, SIAM J. Matrix Analysis and Appl.,
+  ##        10 (1989), pp. 155-164.
+  ##   P. Morton, On the eigenvectors of Schur's matrix, J. Number Theory,
+  ##        12 (1980), pp. 122-127. (Re. ORTHOG(N, 3))
+  ##   H.O. Lancaster, The Helmert Matrices, Amer. Math. Monthly, 72 (1965),
+  ##        pp. 4-12.
+  ##   D. Bini and P. Favati, On a matrix algebra related to the discrete
+  ##        Hartley transform, SIAM J. Matrix Anal. Appl., 14 (1993),
+  ##        pp. 500-507.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for orthog matrix.");
@@ -1976,20 +1976,20 @@
 
 function A = parter (n)
   ## PARTER    Parter matrix - a Toeplitz matrix with singular values near PI.
-  ##           PARTER(N) is the matrix with (i,j) element 1/(i-j+0.5).
-  ##           It is a Cauchy matrix and a Toeplitz matrix.
+  ##   PARTER(N) is the matrix with (i,j) element 1/(i-j+0.5).
+  ##   It is a Cauchy matrix and a Toeplitz matrix.
   ##
-  ##           At the Second SIAM Conference on Linear Algebra, Raleigh, N.C.,
-  ##           1985, Cleve Moler noted that most of the singular values of
-  ##           PARTER(N) are very close to PI.  An explanation of the phenomenon
-  ##           was given by Parter; see also the paper by Tyrtyshnikov.
+  ##   At the Second SIAM Conference on Linear Algebra, Raleigh, N.C.,
+  ##   1985, Cleve Moler noted that most of the singular values of
+  ##   PARTER(N) are very close to PI.  An explanation of the phenomenon
+  ##   was given by Parter; see also the paper by Tyrtyshnikov.
   ##
-  ##           References:
-  ##           The MathWorks Newsletter, Volume 1, Issue 1, March 1986, page 2.
-  ##           S.V. Parter, On the distribution of the singular values of Toeplitz
-  ##                matrices, Linear Algebra and Appl., 80 (1986), pp. 115-130.
-  ##           E.E. Tyrtyshnikov, Cauchy-Toeplitz matrices and some applications,
-  ##                Linear Algebra and Appl., 149 (1991), pp. 1-18.
+  ##   References:
+  ##   The MathWorks Newsletter, Volume 1, Issue 1, March 1986, page 2.
+  ##   S.V. Parter, On the distribution of the singular values of Toeplitz
+  ##        matrices, Linear Algebra and Appl., 80 (1986), pp. 115-130.
+  ##   E.E. Tyrtyshnikov, Cauchy-Toeplitz matrices and some applications,
+  ##        Linear Algebra and Appl., 149 (1991), pp. 1-18.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for parter matrix.");
@@ -2002,14 +2002,14 @@
 
 function P = pei (n, alpha = 1)
   ## PEI    Pei matrix.
-  ##        PEI(N, ALPHA), where ALPHA is a scalar, is the symmetric matrix
-  ##        ALPHA*EYE(N) + ONES(N).
-  ##        If ALPHA is omitted then ALPHA = 1 is used.
-  ##        The matrix is singular for ALPHA = 0, -N.
+  ##   PEI(N, ALPHA), where ALPHA is a scalar, is the symmetric matrix
+  ##   ALPHA*EYE(N) + ONES(N).
+  ##   If ALPHA is omitted then ALPHA = 1 is used.
+  ##   The matrix is singular for ALPHA = 0, -N.
   ##
-  ##        Reference:
-  ##        M.L. Pei, A test matrix for inversion procedures,
-  ##        Comm. ACM, 5 (1962), p. 508.
+  ##   Reference:
+  ##   M.L. Pei, A test matrix for inversion procedures,
+  ##   Comm. ACM, 5 (1962), p. 508.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for pei matrix.");
@@ -2024,14 +2024,14 @@
 
 function A = poisson (n)
   ## POISSON   Block tridiagonal matrix from Poisson's equation (sparse).
-  ##           POISSON(N) is the block tridiagonal matrix of order N^2
-  ##           resulting from discretizing Poisson's equation with the
-  ##           5-point operator on an N-by-N mesh.
+  ##   POISSON(N) is the block tridiagonal matrix of order N^2
+  ##   resulting from discretizing Poisson's equation with the
+  ##   5-point operator on an N-by-N mesh.
   ##
-  ##           Reference:
-  ##           G.H. Golub and C.F. Van Loan, Matrix Computations, second edition,
-  ##           Johns Hopkins University Press, Baltimore, Maryland, 1989
-  ##           (Section 4.5.4).
+  ##   Reference:
+  ##   G.H. Golub and C.F. Van Loan, Matrix Computations, second edition,
+  ##   Johns Hopkins University Press, Baltimore, Maryland, 1989
+  ##   (Section 4.5.4).
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for poisson matrix.");
@@ -2046,17 +2046,17 @@
 
 function A = prolate (n, w = 0.25)
   ## PROLATE   Prolate matrix - symmetric, ill-conditioned Toeplitz matrix.
-  ##           A = PROLATE(N, W) is the N-by-N prolate matrix with parameter W.
-  ##           It is a symmetric Toeplitz matrix.
-  ##           If 0 < W < 0.5 then
-  ##              - A is positive definite
-  ##              - the eigenvalues of A are distinct, lie in (0, 1), and
-  ##                tend to cluster around 0 and 1.
-  ##           W defaults to 0.25.
+  ##   A = PROLATE(N, W) is the N-by-N prolate matrix with parameter W.
+  ##   It is a symmetric Toeplitz matrix.
+  ##   If 0 < W < 0.5 then
+  ##      - A is positive definite
+  ##      - the eigenvalues of A are distinct, lie in (0, 1), and
+  ##        tend to cluster around 0 and 1.
+  ##   W defaults to 0.25.
   ##
-  ##           Reference:
-  ##           J.M. Varah. The Prolate matrix. Linear Algebra and Appl.,
-  ##           187:269--278, 1993.
+  ##   Reference:
+  ##   J.M. Varah. The Prolate matrix. Linear Algebra and Appl.,
+  ##   187:269--278, 1993.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for prolate matrix.");
@@ -2076,23 +2076,23 @@
 function H = randhess (x)
   ## NOTE: this function was named ohess in the original Test Matrix Toolbox
   ## RANDHESS  Random, orthogonal upper Hessenberg matrix.
-  ##        H = RANDHESS(N) is an N-by-N real, random, orthogonal
-  ##        upper Hessenberg matrix.
-  ##        Alternatively, H = RANDHESS(X), where X is an arbitrary real
-  ##        N-vector (N > 1) constructs H non-randomly using the elements
-  ##        of X as parameters.
-  ##        In both cases H is constructed via a product of N-1 Givens rotations.
+  ##   H = RANDHESS(N) is an N-by-N real, random, orthogonal
+  ##   upper Hessenberg matrix.
+  ##   Alternatively, H = RANDHESS(X), where X is an arbitrary real
+  ##   N-vector (N > 1) constructs H non-randomly using the elements
+  ##   of X as parameters.
+  ##   In both cases H is constructed via a product of N-1 Givens rotations.
   ##
-  ##        Note: See Gragg (1986) for how to represent an N-by-N (complex)
-  ##        unitary Hessenberg matrix with positive subdiagonal elements in terms
-  ##        of 2N-1 real parameters (the Schur parametrization).
-  ##        This M-file handles the real case only and is intended simply as a
-  ##        convenient way to generate random or non-random orthogonal Hessenberg
-  ##        matrices.
+  ##   Note: See Gragg (1986) for how to represent an N-by-N (complex)
+  ##   unitary Hessenberg matrix with positive subdiagonal elements in terms
+  ##   of 2N-1 real parameters (the Schur parametrization).
+  ##   This M-file handles the real case only and is intended simply as a
+  ##   convenient way to generate random or non-random orthogonal Hessenberg
+  ##   matrices.
   ##
-  ##        Reference:
-  ##        W.B. Gragg, The QR algorithm for unitary Hessenberg matrices,
-  ##        J. Comp. Appl. Math., 16 (1986), pp. 1-8.
+  ##   Reference:
+  ##   W.B. Gragg, The QR algorithm for unitary Hessenberg matrices,
+  ##   J. Comp. Appl. Math., 16 (1986), pp. 1-8.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for randhess matrix.");
@@ -2125,12 +2125,12 @@
 
 function A = rando (n, k = 1)
   ## RANDO   Random matrix with elements -1, 0 or 1.
-  ##         A = RANDO(N, K) is a random N-by-N matrix with elements from
-  ##         one of the following discrete distributions (default K = 1):
-  ##           K = 1:  A(i,j) =  0 or 1    with equal probability,
-  ##           K = 2:  A(i,j) = -1 or 1    with equal probability,
-  ##           K = 3:  A(i,j) = -1, 0 or 1 with equal probability.
-  ##         N may be a 2-vector, in which case the matrix is N(1)-by-N(2).
+  ##   A = RANDO(N, K) is a random N-by-N matrix with elements from
+  ##   one of the following discrete distributions (default K = 1):
+  ##     K = 1:  A(i,j) =  0 or 1    with equal probability,
+  ##     K = 2:  A(i,j) = -1 or 1    with equal probability,
+  ##     K = 3:  A(i,j) = -1, 0 or 1 with equal probability.
+  ##   N may be a 2-vector, in which case the matrix is N(1)-by-N(2).
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for rando matrix.");
@@ -2156,37 +2156,37 @@
 
 function A = randsvd (n, kappa = sqrt (1/eps), mode = 3, kl = n-1, ku = kl)
   ## RANDSVD  Random matrix with pre-assigned singular values.
-  ##       RANDSVD(N, KAPPA, MODE, KL, KU) is a (banded) random matrix of order N
-  ##       with COND(A) = KAPPA and singular values from the distribution MODE.
-  ##       N may be a 2-vector, in which case the matrix is N(1)-by-N(2).
-  ##       Available types:
-  ##              MODE = 1:   one large singular value,
-  ##              MODE = 2:   one small singular value,
-  ##              MODE = 3:   geometrically distributed singular values,
-  ##              MODE = 4:   arithmetically distributed singular values,
-  ##              MODE = 5:   random singular values with unif. dist. logarithm.
-  ##       If omitted, MODE defaults to 3, and KAPPA defaults to SQRT(1/EPS).
-  ##       If MODE < 0 then the effect is as for ABS(MODE) except that in the
-  ##       original matrix of singular values the order of the diagonal entries
-  ##       is reversed: small to large instead of large to small.
-  ##       KL and KU are the lower and upper bandwidths respectively; if they
-  ##       are omitted a full matrix is produced.
-  ##       If only KL is present, KU defaults to KL.
-  ##       Special case: if KAPPA < 0 then a random full symmetric positive
-  ##                     definite matrix is produced with COND(A) = -KAPPA and
-  ##                     eigenvalues distributed according to MODE.
-  ##                     KL and KU, if present, are ignored.
+  ##   RANDSVD(N, KAPPA, MODE, KL, KU) is a (banded) random matrix of order N
+  ##   with COND(A) = KAPPA and singular values from the distribution MODE.
+  ##   N may be a 2-vector, in which case the matrix is N(1)-by-N(2).
+  ##   Available types:
+  ##          MODE = 1:   one large singular value,
+  ##          MODE = 2:   one small singular value,
+  ##          MODE = 3:   geometrically distributed singular values,
+  ##          MODE = 4:   arithmetically distributed singular values,
+  ##          MODE = 5:   random singular values with unif. dist. logarithm.
+  ##   If omitted, MODE defaults to 3, and KAPPA defaults to SQRT(1/EPS).
+  ##   If MODE < 0 then the effect is as for ABS(MODE) except that in the
+  ##   original matrix of singular values the order of the diagonal entries
+  ##   is reversed: small to large instead of large to small.
+  ##   KL and KU are the lower and upper bandwidths respectively; if they
+  ##   are omitted a full matrix is produced.
+  ##   If only KL is present, KU defaults to KL.
+  ##   Special case: if KAPPA < 0 then a random full symmetric positive
+  ##                 definite matrix is produced with COND(A) = -KAPPA and
+  ##                 eigenvalues distributed according to MODE.
+  ##                 KL and KU, if present, are ignored.
   ##
-  ##       Reference:
-  ##       N.J. Higham, Accuracy and Stability of Numerical Algorithms,
-  ##          Society for Industrial and Applied Mathematics, Philadelphia, PA,
-  ##          USA, 1996; sec. 26.3.
+  ##   Reference:
+  ##   N.J. Higham, Accuracy and Stability of Numerical Algorithms,
+  ##      Society for Industrial and Applied Mathematics, Philadelphia, PA,
+  ##      USA, 1996; sec. 26.3.
   ##
-  ##       This routine is similar to the more comprehensive Fortran routine xLATMS
-  ##       in the following reference:
-  ##       J.W. Demmel and A. McKenney, A test matrix generation suite,
-  ##       LAPACK Working Note #9, Courant Institute of Mathematical Sciences,
-  ##       New York, 1989.
+  ##   This routine is similar to the more comprehensive Fortran routine xLATMS
+  ##   in the following reference:
+  ##   J.W. Demmel and A. McKenney, A test matrix generation suite,
+  ##   LAPACK Working Note #9, Courant Institute of Mathematical Sciences,
+  ##   New York, 1989.
 
   if (nargin < 1 || nargin > 5)
     error ("gallery: 1 to 5 arguments are required for randsvd matrix.");
@@ -2245,7 +2245,7 @@
 
   ##  Convert to diagonal matrix of singular values.
   if (mode < 0)
-    sigma = sigma (p:-1:1);
+    sigma = sigma(p:-1:1);
   endif
   sigma = diag (sigma);
 
@@ -2280,28 +2280,28 @@
 
 function A = redheff (n)
   ## REDHEFF    A (0,1) matrix of Redheffer associated with the Riemann hypothesis.
-  ##            A = REDHEFF(N) is an N-by-N matrix of 0s and 1s defined by
-  ##                A(i,j) = 1 if j = 1 or if i divides j,
-  ##                A(i,j) = 0 otherwise.
-  ##            It has N - FLOOR(LOG2(N)) - 1 eigenvalues equal to 1,
-  ##            a real eigenvalue (the spectral radius) approximately SQRT(N),
-  ##            a negative eigenvalue approximately -SQRT(N),
-  ##            and the remaining eigenvalues are provably ``small''.
-  ##            Barrett and Jarvis (1992) conjecture that
-  ##              ``the small eigenvalues all lie inside the unit circle
-  ##                ABS(Z) = 1'',
-  ##            and a proof of this conjecture, together with a proof that some
-  ##            eigenvalue tends to zero as N tends to infinity, would yield
-  ##            a new proof of the prime number theorem.
-  ##            The Riemann hypothesis is true if and only if
-  ##            DET(A) = O( N^(1/2+epsilon) ) for every epsilon > 0
-  ##                                              (`!' denotes factorial).
-  ##            See also RIEMANN.
+  ##   A = REDHEFF(N) is an N-by-N matrix of 0s and 1s defined by
+  ##       A(i,j) = 1 if j = 1 or if i divides j,
+  ##       A(i,j) = 0 otherwise.
+  ##   It has N - FLOOR(LOG2(N)) - 1 eigenvalues equal to 1,
+  ##   a real eigenvalue (the spectral radius) approximately SQRT(N),
+  ##   a negative eigenvalue approximately -SQRT(N),
+  ##   and the remaining eigenvalues are provably ``small''.
+  ##   Barrett and Jarvis (1992) conjecture that
+  ##     ``the small eigenvalues all lie inside the unit circle
+  ##       ABS(Z) = 1'',
+  ##   and a proof of this conjecture, together with a proof that some
+  ##   eigenvalue tends to zero as N tends to infinity, would yield
+  ##   a new proof of the prime number theorem.
+  ##   The Riemann hypothesis is true if and only if
+  ##   DET(A) = O( N^(1/2+epsilon) ) for every epsilon > 0
+  ##   (DET(A) equals the Mertens function M(N)).
+  ##   See also RIEMANN.
   ##
-  ##            Reference:
-  ##            W.W. Barrett and T.J. Jarvis,
-  ##            Spectral Properties of a Matrix of Redheffer,
-  ##            Linear Algebra and Appl., 162 (1992), pp. 673-683.
+  ##   Reference:
+  ##   W.W. Barrett and T.J. Jarvis,
+  ##   Spectral Properties of a Matrix of Redheffer,
+  ##   Linear Algebra and Appl., 162 (1992), pp. 673-683.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for redheff matrix.");
@@ -2316,22 +2316,22 @@
 
 function A = riemann (n)
   ## RIEMANN    A matrix associated with the Riemann hypothesis.
-  ##            A = RIEMANN(N) is an N-by-N matrix for which the
-  ##            Riemann hypothesis is true if and only if
-  ##            DET(A) = O( N! N^(-1/2+epsilon) ) for every epsilon > 0
-  ##                                              (`!' denotes factorial).
-  ##            A = B(2:N+1, 2:N+1), where
-  ##            B(i,j) = i-1 if i divides j and -1 otherwise.
-  ##            Properties include, with M = N+1:
-  ##               Each eigenvalue E(i) satisfies ABS(E(i)) <= M - 1/M.
-  ##               i <= E(i) <= i+1 with at most M-SQRT(M) exceptions.
-  ##               All integers in the interval (M/3, M/2] are eigenvalues.
+  ##   A = RIEMANN(N) is an N-by-N matrix for which the
+  ##   Riemann hypothesis is true if and only if
+  ##   DET(A) = O( N! N^(-1/2+epsilon) ) for every epsilon > 0
+  ##                                     (`!' denotes factorial).
+  ##   A = B(2:N+1, 2:N+1), where
+  ##   B(i,j) = i-1 if i divides j and -1 otherwise.
+  ##   Properties include, with M = N+1:
+  ##      Each eigenvalue E(i) satisfies ABS(E(i)) <= M - 1/M.
+  ##      i <= E(i) <= i+1 with at most M-SQRT(M) exceptions.
+  ##      All integers in the interval (M/3, M/2] are eigenvalues.
   ##
-  ##            See also REDHEFF.
+  ##   See also REDHEFF.
   ##
-  ##            Reference:
-  ##            F. Roesler, Riemann's hypothesis as an eigenvalue problem,
-  ##            Linear Algebra and Appl., 81 (1986), pp. 153-198.
+  ##   Reference:
+  ##   F. Roesler, Riemann's hypothesis as an eigenvalue problem,
+  ##   Linear Algebra and Appl., 81 (1986), pp. 153-198.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for riemann matrix.");
@@ -2348,16 +2348,16 @@
 function A = ris (n)
   ## NOTE: this function was named dingdong in the original Test Matrix Toolbox
   ## RIS       Dingdong matrix - a symmetric Hankel matrix.
-  ##           A = RIS(N) is the symmetric N-by-N Hankel matrix with
-  ##                          A(i,j) = 0.5/(N-i-j+1.5).
-  ##           The eigenvalues of A cluster around PI/2 and -PI/2.
+  ##   A = RIS(N) is the symmetric N-by-N Hankel matrix with
+  ##                  A(i,j) = 0.5/(N-i-j+1.5).
+  ##   The eigenvalues of A cluster around PI/2 and -PI/2.
   ##
-  ##           Invented by F.N. Ris.
+  ##   Invented by F.N. Ris.
   ##
-  ##           Reference:
-  ##           J.C. Nash, Compact Numerical Methods for Computers: Linear
-  ##           Algebra and Function Minimisation, second edition, Adam Hilger,
-  ##           Bristol, 1990 (Appendix 1).
+  ##   Reference:
+  ##   J.C. Nash, Compact Numerical Methods for Computers: Linear
+  ##   Algebra and Function Minimisation, second edition, Adam Hilger,
+  ##   Bristol, 1990 (Appendix 1).
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for ris matrix.");
@@ -2371,20 +2371,20 @@
 
 function A = smoke (n, k = 0)
   ## SMOKE     Smoke matrix - complex, with a `smoke ring' pseudospectrum.
-  ##           SMOKE(N) is an N-by-N matrix with 1s on the
-  ##           superdiagonal, 1 in the (N,1) position, and powers of
-  ##           roots of unity along the diagonal.
-  ##           SMOKE(N, 1) is the same except for a zero (N,1) element.
-  ##           The eigenvalues of SMOKE(N, 1) are the N'th roots of unity;
-  ##           those of SMOKE(N) are the N'th roots of unity times 2^(1/N).
+  ##   SMOKE(N) is an N-by-N matrix with 1s on the
+  ##   superdiagonal, 1 in the (N,1) position, and powers of
+  ##   roots of unity along the diagonal.
+  ##   SMOKE(N, 1) is the same except for a zero (N,1) element.
+  ##   The eigenvalues of SMOKE(N, 1) are the N'th roots of unity;
+  ##   those of SMOKE(N) are the N'th roots of unity times 2^(1/N).
   ##
-  ##           Try PS(SMOKE(32)).  For SMOKE(N, 1) the pseudospectrum looks
-  ##           like a sausage folded back on itself.
-  ##           GERSH(SMOKE(N, 1)) is interesting.
+  ##   Try PS(SMOKE(32)).  For SMOKE(N, 1) the pseudospectrum looks
+  ##   like a sausage folded back on itself.
+  ##   GERSH(SMOKE(N, 1)) is interesting.
   ##
-  ##           Reference:
-  ##           L. Reichel and L.N. Trefethen, Eigenvalues and pseudo-eigenvalues of
-  ##           Toeplitz matrices, Linear Algebra and Appl., 162-164:153-185, 1992.
+  ##   Reference:
+  ##   L. Reichel and L.N. Trefethen, Eigenvalues and pseudo-eigenvalues of
+  ##   Toeplitz matrices, Linear Algebra and Appl., 162-164:153-185, 1992.
 
   if (nargin < 1 || nargin > 2)
     error ("gallery: 1 to 2 arguments are required for smoke matrix.");
@@ -2408,18 +2408,18 @@
 function T = toeppd (n, m = n, w = rand (m,1), theta = rand (m,1))
   ## NOTE: this function was named pdtoep in the original Test Matrix Toolbox
   ## TOEPPD   Symmetric positive definite Toeplitz matrix.
-  ##          TOEPPD(N, M, W, THETA) is an N-by-N symmetric positive (semi-)
-  ##          definite (SPD) Toeplitz matrix, comprised of the sum of M rank 2
-  ##          (or, for certain THETA, rank 1) SPD Toeplitz matrices.
-  ##          Specifically,
-  ##                  T = W(1)*T(THETA(1)) + ... + W(M)*T(THETA(M)),
-  ##          where T(THETA(k)) has (i,j) element COS(2*PI*THETA(k)*(i-j)).
-  ##          Defaults: M = N, W = RAND(M,1), THETA = RAND(M,1).
+  ##   TOEPPD(N, M, W, THETA) is an N-by-N symmetric positive (semi-)
+  ##   definite (SPD) Toeplitz matrix, comprised of the sum of M rank 2
+  ##   (or, for certain THETA, rank 1) SPD Toeplitz matrices.
+  ##   Specifically,
+  ##           T = W(1)*T(THETA(1)) + ... + W(M)*T(THETA(M)),
+  ##   where T(THETA(k)) has (i,j) element COS(2*PI*THETA(k)*(i-j)).
+  ##   Defaults: M = N, W = RAND(M,1), THETA = RAND(M,1).
   ##
-  ##          Reference:
-  ##          G. Cybenko and C.F. Van Loan, Computing the minimum eigenvalue of
-  ##          a symmetric positive definite Toeplitz matrix, SIAM J. Sci. Stat.
-  ##          Comput., 7 (1986), pp. 123-131.
+  ##   Reference:
+  ##   G. Cybenko and C.F. Van Loan, Computing the minimum eigenvalue of
+  ##   a symmetric positive definite Toeplitz matrix, SIAM J. Sci. Stat.
+  ##   Comput., 7 (1986), pp. 123-131.
 
   if (nargin < 1 || nargin > 4)
     error ("gallery: 1 to 4 arguments are required for toeppd matrix.");
@@ -2442,25 +2442,25 @@
 function P = toeppen (n, a = 1, b = -10, c = 0, d = 10, e = 1)
   ## NOTE: this function was named pentoep in the original Test Matrix Toolbox
   ## TOEPPEN   Pentadiagonal Toeplitz matrix (sparse).
-  ##           P = TOEPPEN(N, A, B, C, D, E) is the N-by-N pentadiagonal
-  ##           Toeplitz matrix with diagonals composed of the numbers
-  ##           A =: P(3,1), B =: P(2,1), C =: P(1,1), D =: P(1,2), E =: P(1,3).
-  ##           Default: (A,B,C,D,E) = (1,-10,0,10,1) (a matrix of Rutishauser).
-  ##                     This matrix has eigenvalues lying approximately on
-  ##                     the line segment 2*cos(2*t) + 20*i*sin(t).
+  ##   P = TOEPPEN(N, A, B, C, D, E) is the N-by-N pentadiagonal
+  ##   Toeplitz matrix with diagonals composed of the numbers
+  ##   A =: P(3,1), B =: P(2,1), C =: P(1,1), D =: P(1,2), E =: P(1,3).
+  ##   Default: (A,B,C,D,E) = (1,-10,0,10,1) (a matrix of Rutishauser).
+  ##             This matrix has eigenvalues lying approximately on
+  ##             the line segment 2*cos(2*t) + 20*i*sin(t).
   ##
-  ##           Interesting plots are
-  ##           PS(FULL(TOEPPEN(32,0,1,0,0,1/4)))  - `triangle'
-  ##           PS(FULL(TOEPPEN(32,0,1/2,0,0,1)))  - `propeller'
-  ##           PS(FULL(TOEPPEN(32,0,1/2,1,1,1)))  - `fish'
+  ##   Interesting plots are
+  ##   PS(FULL(TOEPPEN(32,0,1,0,0,1/4)))  - `triangle'
+  ##   PS(FULL(TOEPPEN(32,0,1/2,0,0,1)))  - `propeller'
+  ##   PS(FULL(TOEPPEN(32,0,1/2,1,1,1)))  - `fish'
   ##
-  ##           References:
-  ##           R.M. Beam and R.F. Warming, The asymptotic spectra of
-  ##              banded Toeplitz and quasi-Toeplitz matrices, SIAM J. Sci.
-  ##              Comput. 14 (4), 1993, pp. 971-1006.
-  ##           H. Rutishauser, On test matrices, Programmation en Mathematiques
-  ##              Numeriques, Editions Centre Nat. Recherche Sci., Paris, 165,
-  ##              1966, pp. 349-365.
+  ##   References:
+  ##   R.M. Beam and R.F. Warming, The asymptotic spectra of
+  ##      banded Toeplitz and quasi-Toeplitz matrices, SIAM J. Sci.
+  ##      Comput. 14 (4), 1993, pp. 971-1006.
+  ##   H. Rutishauser, On test matrices, Programmation en Mathematiques
+  ##      Numeriques, Editions Centre Nat. Recherche Sci., Paris, 165,
+  ##      1966, pp. 349-365.
 
   if (nargin < 1 || nargin > 6)
     error ("gallery: 1 to 6 arguments are required for toeppen matrix.");
@@ -2476,23 +2476,23 @@
 
 function T = tridiag (n, x = -1, y = 2, z = -1)
   ## TRIDIAG  Tridiagonal matrix (sparse).
-  ##          TRIDIAG(X, Y, Z) is the tridiagonal matrix with subdiagonal X,
-  ##          diagonal Y, and superdiagonal Z.
-  ##          X and Z must be vectors of dimension one less than Y.
-  ##          Alternatively TRIDIAG(N, C, D, E), where C, D, and E are all
-  ##          scalars, yields the Toeplitz tridiagonal matrix of order N
-  ##          with subdiagonal elements C, diagonal elements D, and superdiagonal
-  ##          elements E.   This matrix has eigenvalues (Todd 1977)
-  ##                   D + 2*SQRT(C*E)*COS(k*PI/(N+1)), k=1:N.
-  ##          TRIDIAG(N) is the same as TRIDIAG(N,-1,2,-1), which is
-  ##          a symmetric positive definite M-matrix (the negative of the
-  ##          second difference matrix).
+  ##   TRIDIAG(X, Y, Z) is the tridiagonal matrix with subdiagonal X,
+  ##   diagonal Y, and superdiagonal Z.
+  ##   X and Z must be vectors of dimension one less than Y.
+  ##   Alternatively TRIDIAG(N, C, D, E), where C, D, and E are all
+  ##   scalars, yields the Toeplitz tridiagonal matrix of order N
+  ##   with subdiagonal elements C, diagonal elements D, and superdiagonal
+  ##   elements E.  This matrix has eigenvalues (Todd 1977)
+  ##            D + 2*SQRT(C*E)*COS(k*PI/(N+1)), k=1:N.
+  ##   TRIDIAG(N) is the same as TRIDIAG(N,-1,2,-1), which is
+  ##   a symmetric positive definite M-matrix (the negative of the
+  ##   second difference matrix).
   ##
-  ##          References:
-  ##          J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra,
-  ##            Birkhauser, Basel, and Academic Press, New York, 1977, p. 155.
-  ##          D.E. Rutherford, Some continuant determinants arising in physics and
-  ##            chemistry---II, Proc. Royal Soc. Edin., 63, A (1952), pp. 232-241.
+  ##   References:
+  ##   J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra,
+  ##     Birkhauser, Basel, and Academic Press, New York, 1977, p. 155.
+  ##   D.E. Rutherford, Some continuant determinants arising in physics and
+  ##     chemistry---II, Proc. Royal Soc. Edin., 63, A (1952), pp. 232-241.
 
   if (nargin != 1 && nargin != 3 && nargin != 4)
     error ("gallery: 1, 3, or 4 arguments are required for tridiag matrix.");
@@ -2524,33 +2524,33 @@
 
 function t = triw (n, alpha = -1, k = n(end) - 1)
   ## TRIW   Upper triangular matrix discussed by Wilkinson and others.
-  ##        TRIW(N, ALPHA, K) is the upper triangular matrix with ones on
-  ##        the diagonal and ALPHAs on the first K >= 0 superdiagonals.
-  ##        N may be a 2-vector, in which case the matrix is N(1)-by-N(2) and
-  ##        upper trapezoidal.
-  ##        Defaults: ALPHA = -1,
-  ##                  K = N - 1     (full upper triangle).
-  ##        TRIW(N) is a matrix discussed by Kahan, Golub and Wilkinson.
+  ##   TRIW(N, ALPHA, K) is the upper triangular matrix with ones on
+  ##   the diagonal and ALPHAs on the first K >= 0 superdiagonals.
+  ##   N may be a 2-vector, in which case the matrix is N(1)-by-N(2) and
+  ##   upper trapezoidal.
+  ##   Defaults: ALPHA = -1,
+  ##             K = N - 1     (full upper triangle).
+  ##   TRIW(N) is a matrix discussed by Kahan, Golub and Wilkinson.
   ##
-  ##        Ostrowski (1954) shows that
-  ##          COND(TRIW(N,2)) = COT(PI/(4*N))^2,
-  ##        and for large ABS(ALPHA),
-  ##          COND(TRIW(N,ALPHA)) is approximately ABS(ALPHA)^N*SIN(PI/(4*N-2)).
+  ##   Ostrowski (1954) shows that
+  ##     COND(TRIW(N,2)) = COT(PI/(4*N))^2,
+  ##   and for large ABS(ALPHA),
+  ##     COND(TRIW(N,ALPHA)) is approximately ABS(ALPHA)^N*SIN(PI/(4*N-2)).
   ##
-  ##        Adding -2^(2-N) to the (N,1) element makes TRIW(N) singular,
-  ##        as does adding -2^(1-N) to all elements in the first column.
+  ##   Adding -2^(2-N) to the (N,1) element makes TRIW(N) singular,
+  ##   as does adding -2^(1-N) to all elements in the first column.
   ##
-  ##        References:
-  ##        G.H. Golub and J.H. Wilkinson, Ill-conditioned eigensystems and the
-  ##           computation of the Jordan canonical form, SIAM Review,
-  ##           18(4), 1976, pp. 578-619.
-  ##        W. Kahan, Numerical linear algebra, Canadian Math. Bulletin,
-  ##           9 (1966), pp. 757-801.
-  ##        A.M. Ostrowski, On the spectrum of a one-parametric family of
-  ##           matrices, J. Reine Angew. Math., 193 (3/4), 1954, pp. 143-160.
-  ##        J.H. Wilkinson, Singular-value decomposition---basic aspects,
-  ##           in D.A.H. Jacobs, ed., Numerical Software---Needs and Availability,
-  ##           Academic Press, London, 1978, pp. 109-135.
+  ##   References:
+  ##   G.H. Golub and J.H. Wilkinson, Ill-conditioned eigensystems and the
+  ##      computation of the Jordan canonical form, SIAM Review,
+  ##      18(4), 1976, pp. 578-619.
+  ##   W. Kahan, Numerical linear algebra, Canadian Math. Bulletin,
+  ##      9 (1966), pp. 757-801.
+  ##   A.M. Ostrowski, On the spectrum of a one-parametric family of
+  ##      matrices, J. Reine Angew. Math., 193 (3/4), 1954, pp. 143-160.
+  ##   J.H. Wilkinson, Singular-value decomposition---basic aspects,
+  ##      in D.A.H. Jacobs, ed., Numerical Software---Needs and Availability,
+  ##      Academic Press, London, 1978, pp. 109-135.
 
   if (nargin < 1 || nargin > 3)
     error ("gallery: 1 to 3 arguments are required for triw matrix.");
@@ -2610,79 +2610,79 @@
 endfunction
 
 function A = wathen (nx, ny, k = 0)
-  ## # WATHEN returns the Wathen matrix.
+  ## WATHEN returns the Wathen matrix.
   ##
-  ##   Discussion:
+  ## Discussion:
   ##
-  ##     The Wathen matrix is a finite element matrix which is sparse.
+  ##   The Wathen matrix is a finite element matrix which is sparse.
   ##
-  ##     The entries of the matrix depend in part on a physical quantity
-  ##     related to density.  That density is here assigned random values between
-  ##     0 and 100.
+  ##   The entries of the matrix depend in part on a physical quantity
+  ##   related to density.  That density is here assigned random values between
+  ##   0 and 100.
   ##
-  ##     A = WATHEN ( NX, NY ) is a sparse random N-by-N finite element matrix
-  ##     where N = 3*NX*NY + 2*NX + 2*NY + 1.
+  ##   A = WATHEN ( NX, NY ) is a sparse random N-by-N finite element matrix
+  ##   where N = 3*NX*NY + 2*NX + 2*NY + 1.
   ##
-  ##     A is the consistent mass matrix for a regular NX-by-NY
-  ##     grid of 8-node (serendipity) elements in 2 space dimensions.
+  ##   A is the consistent mass matrix for a regular NX-by-NY
+  ##   grid of 8-node (serendipity) elements in 2 space dimensions.
   ##
-  ##     Here is an illustration for NX = 3, NX = 2:
+  ##   Here is an illustration for NX = 3, NY = 2:
   ##
-  ##      23-24-25-26-27-28-29
-  ##       |     |     |     |
-  ##      19    20    21    22
-  ##       |     |     |     |
-  ##      12-13-14-15-16-17-18
-  ##       |     |     |     |
-  ##       8     9    10    11
-  ##       |     |     |     |
-  ##       1--2--3--4--5--6--7
+  ##    23-24-25-26-27-28-29
+  ##     |     |     |     |
+  ##    19    20    21    22
+  ##     |     |     |     |
+  ##    12-13-14-15-16-17-18
+  ##     |     |     |     |
+  ##     8     9    10    11
+  ##     |     |     |     |
+  ##     1--2--3--4--5--6--7
   ##
-  ##     For this example, the total number of nodes is, as expected,
+  ##   For this example, the total number of nodes is, as expected,
   ##
-  ##       N = 3 * 3 * 2 + 2 * 2 + 2 * 3 + 1 = 29.
+  ##     N = 3 * 3 * 2 + 2 * 2 + 2 * 3 + 1 = 29.
   ##
-  ##     A is symmetric positive definite for any (positive) values of
-  ##     the density, RHO(NX,NY), which is chosen randomly in this routine.
+  ##   A is symmetric positive definite for any (positive) values of
+  ##   the density, RHO(NX,NY), which is chosen randomly in this routine.
   ##
-  ##     In particular, if D = DIAG(DIAG(A)), then
-  ##       0.25 <= EIG(INV(D)*A) <= 4.5
-  ##     for any positive integers NX and NY and any densities RHO(NX,NY).
+  ##   In particular, if D = DIAG(DIAG(A)), then
+  ##     0.25 <= EIG(INV(D)*A) <= 4.5
+  ##   for any positive integers NX and NY and any densities RHO(NX,NY).
   ##
-  ##     A = WATHEN ( NX, NY, 1 ) returns the diagonally scaled matrix.
+  ##   A = WATHEN ( NX, NY, 1 ) returns the diagonally scaled matrix.
   ##
-  ##   Modified:
+  ## Modified:
   ##
-  ##     17 September 2007
+  ##   17 September 2007
   ##
-  ##   Author:
+  ## Author:
   ##
-  ##     Nicholas Higham
+  ##   Nicholas Higham
   ##
-  ##   Reference:
+  ## Reference:
   ##
-  ##     Nicholas Higham,
-  ##     Algorithm 694: A Collection of Test Matrices in MATLAB,
-  ##     ACM Transactions on Mathematical Software,
-  ##     Volume 17, Number 3, September 1991, pages 289-305.
+  ##   Nicholas Higham,
+  ##   Algorithm 694: A Collection of Test Matrices in MATLAB,
+  ##   ACM Transactions on Mathematical Software,
+  ##   Volume 17, Number 3, September 1991, pages 289-305.
   ##
-  ##     Andrew Wathen,
-  ##     Realistic eigenvalue bounds for the Galerkin mass matrix,
-  ##     IMA Journal of Numerical Analysis,
-  ##     Volume 7, 1987, pages 449-457.
+  ##   Andrew Wathen,
+  ##   Realistic eigenvalue bounds for the Galerkin mass matrix,
+  ##   IMA Journal of Numerical Analysis,
+  ##   Volume 7, 1987, pages 449-457.
   ##
-  ##   Parameters:
+  ## Parameters:
   ##
-  ##     Input, integer NX, NY, the number of elements in the X and Y directions
-  ##     of the finite element grid.  NX and NY must each be at least 1.
+  ##   Input, integer NX, NY, the number of elements in the X and Y directions
+  ##   of the finite element grid.  NX and NY must each be at least 1.
   ##
-  ##     Optional input, integer K, is used to request that the diagonally scaled
-  ##     version of the matrix be returned.  This happens if K is specified with
-  ##     the value 1.
+  ##   Optional input, integer K, is used to request that the diagonally scaled
+  ##   version of the matrix be returned.  This happens if K is specified with
+  ##   the value 1.
   ##
-  ##     Output, sparse real A(N,N), the matrix.  The dimension N is determined by
-  ##     NX and NY, as described above.  A is stored in the MATLAB sparse matrix
-  ##     format.
+  ##   Output, sparse real A(N,N), the matrix.  The dimension N is determined by
+  ##   NX and NY, as described above.  A is stored in the MATLAB sparse matrix
+  ##   format.
 
   if (nargin < 2 || nargin > 3)
     error ("gallery: 2 or 3 arguments are required for wathen matrix.");
@@ -2746,19 +2746,19 @@
 
 function [A, b] = wilk (n)
   ## WILK   Various specific matrices devised/discussed by Wilkinson.
-  ##        [A, b] = WILK(N) is the matrix or system of order N.
-  ##        N = 3: upper triangular system Ux=b illustrating inaccurate solution.
-  ##        N = 4: lower triangular system Lx=b, ill-conditioned.
-  ##        N = 5: HILB(6)(1:5,2:6)*1.8144.  Symmetric positive definite.
-  ##        N = 21: W21+, tridiagonal.   Eigenvalue problem.
+  ##   [A, b] = WILK(N) is the matrix or system of order N.
+  ##   N = 3: upper triangular system Ux=b illustrating inaccurate solution.
+  ##   N = 4: lower triangular system Lx=b, ill-conditioned.
+  ##   N = 5: HILB(6)(1:5,2:6)*1.8144.  Symmetric positive definite.
+  ##   N = 21: W21+, tridiagonal.  Eigenvalue problem.
   ##
-  ##        References:
-  ##        J.H. Wilkinson, Error analysis of direct methods of matrix inversion,
-  ##           J. Assoc. Comput. Mach., 8 (1961),  pp. 281-330.
-  ##        J.H. Wilkinson, Rounding Errors in Algebraic Processes, Notes on Applied
-  ##           Science No. 32, Her Majesty's Stationery Office, London, 1963.
-  ##        J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University
-  ##           Press, 1965.
+  ##   References:
+  ##   J.H. Wilkinson, Error analysis of direct methods of matrix inversion,
+  ##      J. Assoc. Comput. Mach., 8 (1961), pp. 281-330.
+  ##   J.H. Wilkinson, Rounding Errors in Algebraic Processes, Notes on Applied
+  ##      Science No. 32, Her Majesty's Stationery Office, London, 1963.
+  ##   J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University
+  ##      Press, 1965.
 
   if (nargin != 1)
     error ("gallery: 1 argument is required for wilk matrix.");
@@ -2807,21 +2807,21 @@
 ## NOTE: bandred is part of the Test Matrix Toolbox and is used by randsvd()
 function A = bandred (A, kl, ku)
   ## BANDRED  Band reduction by two-sided unitary transformations.
-  ##          B = BANDRED(A, KL, KU) is a matrix unitarily equivalent to A
-  ##          with lower bandwidth KL and upper bandwidth KU
-  ##          (i.e. B(i,j) = 0 if i > j+KL or j > i+KU).
-  ##          The reduction is performed using Householder transformations.
-  ##          If KU is omitted it defaults to KL.
+  ##   B = BANDRED(A, KL, KU) is a matrix unitarily equivalent to A
+  ##   with lower bandwidth KL and upper bandwidth KU
+  ##   (i.e. B(i,j) = 0 if i > j+KL or j > i+KU).
+  ##   The reduction is performed using Householder transformations.
+  ##   If KU is omitted it defaults to KL.
   ##
-  ##          Called by RANDSVD.
-  ##          This is a `standard' reduction.  Cf. reduction to bidiagonal form
-  ##          prior to computing the SVD.  This code is a little wasteful in that
-  ##          it computes certain elements which are immediately set to zero!
+  ##   Called by RANDSVD.
+  ##   This is a `standard' reduction.  Cf. reduction to bidiagonal form
+  ##   prior to computing the SVD.  This code is a little wasteful in that
+  ##   it computes certain elements which are immediately set to zero!
   ##
-  ##          Reference:
-  ##          G.H. Golub and C.F. Van Loan, Matrix Computations, second edition,
-  ##          Johns Hopkins University Press, Baltimore, Maryland, 1989.
-  ##          Section 5.4.3.
+  ##   Reference:
+  ##   G.H. Golub and C.F. Van Loan, Matrix Computations, second edition,
+  ##   Johns Hopkins University Press, Baltimore, Maryland, 1989.
+  ##   Section 5.4.3.
 
   ##  Check for special case where order of left/right transformations matters.
   ##  Easiest approach is to work on the transpose, flipping back at the end.