comparison doc/interpreter/sparse.txi @ 5506:b4cfbb0ec8c4

[project @ 2005-10-23 19:09:32 by dbateman]
author dbateman
date Sun, 23 Oct 2005 19:09:33 +0000
parents d2d11284528e
children 7e008607a86e

* Basics:: The Creation and Manipulation of Sparse Matrices
* Graph Theory:: Graphs and their use with Sparse Matrices
* Sparse Linear Algebra:: Linear Algebra on Sparse Matrices
* Iterative Techniques:: Iterative Techniques applied to Sparse Matrices
* Oct-Files:: Using Sparse Matrices in Oct-files
* License:: Licensing of Third Party Software
* Function Reference:: Documentation from the Specific Sparse Functions
@end menu

@node Basics, Graph Theory, Sparse Matrices, Sparse Matrices
@section The Creation and Manipulation of Sparse Matrices

The size of mathematical problems that can be treated at any particular
time is generally limited by the available computing resources. Both
the speed of the computer and its available memory place limitations on
the problem size.

There are many classes of mathematical problems which give rise to
matrices where a large number of the elements are zero. In this case
it makes sense to have a special matrix type to handle this class of
problems where only the non-zero elements of the matrix are
stored. Not only does this reduce the amount of memory needed to store
the matrix, but it also means that operations on this type of matrix
can take advantage of the a-priori knowledge of the positions of the
non-zero elements to accelerate their calculations.

A matrix type that stores only the non-zero elements is generally called
within the matrix is implied by its position in the computer's memory.
However, this is not the case for sparse matrices, and so the positions
of the non-zero elements of the matrix must equally be stored.

An obvious way to do this is by storing the elements of the matrix as
triplets, with two elements being their position in the array
(row and column) and the third being the data itself. This is
conceptually easy to grasp, but requires more storage than is strictly
needed.

The storage technique used within Octave is compressed column
format. In this format the position of each element in a row and the
representing the column indexing, row indexing and data respectively. The
contents of these three vectors for the above matrix will be

@example
@group
@var{cidx} = [0, 1, 2, 2, 4]
@var{ridx} = [0, 0, 1, 2]
@var{data} = [1, 2, 3, 4]
@end group
@end example

Note that this is the representation of these elements with the first row
and column assumed to start at zero. Thus the number of elements in the
@var{i}-th column is given by @code{@var{cidx} (@var{i} + 1) -
@var{cidx} (@var{i})}.

It should be noted that compressed row formats are equally possible.
However, in the context of mixed operations between sparse and dense
matrices, it makes sense that the sparse matrices are stored in the same
order as the dense matrices. Octave stores dense matrices in column
major ordering, and so sparse matrices are equally stored in this manner.

A further constraint on the sparse matrix storage used by Octave is that
the elements within each column are stored in increasing order of their
row index, which makes certain operations faster. However, it imposes
the need to sort the elements on the creation of sparse matrices. Having
unordered elements is potentially an advantage in that it makes operations
such as concatenating two sparse matrices together easier and faster;
however, it adds complexity and speed problems elsewhere.

@item Returned from a function
There are many functions that directly return sparse matrices. These include
@dfn{speye}, @dfn{sprand}, @dfn{spdiag}, etc.
@item Constructed from matrices or vectors
The function @dfn{sparse} allows a sparse matrix to be constructed from
three vectors representing the row, column and data. Alternatively, the
function @dfn{spconvert} uses a three column matrix format to allow easy
importation of data from elsewhere.
@item Created and then filled
The function @dfn{sparse} or @dfn{spalloc} can be used to create an empty
matrix that is then filled by the user
creates an @var{n}-by-@var{n} or @var{r}-by-@var{c} sparse identity
matrix.

Another typical sparse matrix that is often needed is one with a random
distribution of non-zero elements. The functions @dfn{sprand} and
@dfn{sprandn} perform this for uniform and normal random distributions
of elements. They have exactly the same calling convention, where
@code{sprand (@var{r}, @var{c}, @var{d})} creates an @var{r}-by-@var{c}
sparse matrix with a density of filled elements of @var{d}.
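
For instance (with arbitrarily chosen dimensions and density), the
following creates a 100-by-200 matrix in which roughly five percent of
the elements are non-zero.

@example
@group
s = sprand (100, 200, 0.05);
nnz (s)            # roughly 0.05 * 100 * 200 = 1000 entries
@end group
@end example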

Other functions of interest that directly create sparse matrices are
@dfn{spdiag} or its generalization @dfn{spdiags}, which can take the
definition of the diagonals of the matrix and create the sparse matrix
that corresponds to this. For example,

@example
s = diag (sparse (randn (1, n)), -1);
@end example

creates a sparse (@var{n}+1)-by-(@var{n}+1) matrix with a single
diagonal defined.

@example
s = sparse (ri, ci, d, r, c);
@end example

creates an @var{r}-by-@var{c} sparse matrix with a random distribution of
2 elements per row. The elements of the vectors do not need to be sorted in
any particular order as Octave will sort them prior to storing the
data. However, pre-sorting the data will make the creation of the sparse
matrix faster.
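
A fully self-contained variant of the above, with small made-up index
and value vectors in place of the elided setup, is

@example
@group
ri = [1, 1, 2, 3];
ci = [1, 3, 2, 3];
d  = [4, 5, 6, 7];
s  = sparse (ri, ci, d, 3, 4);
full (s)
@result{}
   4   0   5   0
   0   6   0   0
   0   0   7   0
@end group
@end example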

The function @dfn{spconvert} takes a three or four column real matrix.
The first two columns represent the row and column index respectively and
the third and fourth columns, the real and imaginary parts of the sparse
endfor
@end example

It should be noted that, due to the way the Octave assignment functions
are written, the assignment will reallocate the memory used by the
sparse matrix at each iteration of the above loop. Therefore the
@dfn{spalloc} function ignores the @var{nz} argument and does not
preassign the memory for the matrix. It is therefore vitally important
that code using the above structure be vectorized as much as possible
to minimize the number of assignments and reduce the number of memory
allocations.
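
As a schematic contrast (not the exact loop shown above), the two
fragments below build the same diagonal matrix; the first triggers a
reallocation on every assignment, while the second calls @code{sparse}
once with pre-built index vectors.

@example
@group
n = 1000;

## element-by-element filling: reallocates at each iteration
s1 = spalloc (n, n, n);
for k = 1:n
  s1(k,k) = k;
endfor

## vectorized construction: a single allocation
idx = 1:n;
s2 = sparse (idx, idx, idx, n, n);
@end group
@end example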

The above problem can be avoided in oct-files. However, the
construction of a sparse matrix from an oct-file is more complex than
can be discussed in this brief introduction, and you are referred to
WRITE ME

@node ReturnType, MathConsiderations, Functions, Operators and Functions
@subsubsection The Return Types of Operators and Functions

The two basic reasons to use sparse matrices are to reduce the memory
usage and to avoid calculations on zero elements. The two are closely
related in that the computation time on a sparse matrix operator or
function is roughly linear with the number of non-zero elements.

Therefore, there is a certain density of non-zero elements of a matrix
where it no longer makes sense to store it as a sparse matrix, but rather
as a full matrix. For this reason operators and functions that have a
high probability of returning a full matrix will always return one. For
The above behaviour is consistent with full matrices, but is not
consistent with sparse implementations in other products.

A particular problem for sparse matrices comes about because, as the
zeros are not stored, the sign-bit of these zeros is equally not
stored. In certain cases the sign-bit of zero is important. For example

@example
a = 0 ./ [-1, 1; 1, -1];
b = 1 ./ a
@result{} -Inf Inf
sign-bit would need to be stored in the matrix to ensure that their
sign-bit was respected. This is not done at this time, for reasons of
efficiency, and so the user is warned that calculations where the
sign-bit of zero is important must not be done using sparse matrices.

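A minimal sketch of the effect described above (not part of the original
example): converting @code{a} to sparse and back discards the negative
zeros, so the division that previously produced @code{-Inf} entries now
produces @code{Inf} throughout.

@example
@group
a = 0 ./ [-1, 1; 1, -1];
1 ./ full (sparse (a))
@result{}  Inf  Inf
           Inf  Inf
@end group
@end example
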
In general any function or operator used on a sparse matrix will result
in a sparse matrix with the same or a larger number of non-zero elements
than the original matrix. This is particularly true for the important
case of sparse matrix factorizations.

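For instance (using an arbitrarily chosen arrowhead matrix), squaring a
sparse matrix can dramatically increase the number of stored elements:

@example
@group
n = 100;
a = speye (n);
a(1,:) = 1;
a(:,1) = 1;        # arrowhead pattern: full first row and column
nnz (a)
@result{} 298
nnz (a * a)
@result{} 10000
@end group
@end example
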
Also discuss issues of fill-in. Discuss symamd etc, and mention randperm
that is included elsewhere in the docs...

WRITE ME

@node Sparse Linear Algebra, Iterative Techniques, Graph Theory, Sparse Matrices
@section Linear Algebra on Sparse Matrices

Octave includes a polymorphic solver for sparse matrices, where
the exact solver used to factorize the matrix depends on the properties
of the sparse matrix itself. The cost of determining the matrix type
is small relative to the cost of factorizing the matrix itself, but in any
case the matrix type is cached once it is calculated, so that it is not
re-determined each time it is used in a linear equation.

The selection tree for how the linear equation is solved is

@item If the matrix is an upper triangular matrix with column permutations
or lower triangular matrix with row permutations, perform a sparse forward
or backward substitution, and goto 9

@item If the matrix is Hermitian with a real positive diagonal, attempt
sparse Cholesky factorization using CHOLMOD.

@item If the sparse Cholesky factorization failed or the matrix is not
Hermitian with a real positive diagonal, factorize using UMFPACK.

@item If the matrix is not square, or any of the previous solvers flags

The user can force the type of the matrix with the @code{matrix_type}
function. This overcomes the cost of discovering the type of the matrix.
However, it should be noted that incorrectly identifying the type of the
matrix will lead to unpredictable results, and so @code{matrix_type}
should be used with care.

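A small sketch of the query form (the exact string reported by
@code{matrix_type} may differ): the solver then exploits the detected
triangular structure when the system is solved.

@example
@group
a = sparse ([2, 0, 0; 3, 4, 0; 5, 6, 7]);
b = [2; 7; 18];
matrix_type (a)    # reports a lower triangular type for this matrix
x = a \ b          # solved by sparse forward substitution
@result{}
   1
   1
   1
@end group
@end example
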
@node Iterative Techniques, Oct-Files, Sparse Linear Algebra, Sparse Matrices
@section Iterative Techniques applied to sparse matrices

WRITE ME

@node Oct-Files, License, Iterative Techniques, Sparse Matrices
@section Using Sparse Matrices in Oct-files

An oct-file is a means of writing an Octave function in a compilable
language like C++, rather than as a script file. This results in a
significant acceleration of the code. It is not the purpose of this

              ii++;
            @}
        @}
      sm.cidx(j+1) = ii;
    @}
  sm.maybe_compress ();  // If the final number of non-zeros is not known a-priori.
@end example

which is probably the most efficient means of creating the sparse matrix.

Finally, it might sometimes arise that the amount of storage initially

@end example

The conversion to an octave-value is handled by the sparse
@code{octave_value} constructors, and so no special care is needed.

@node License, Function Reference, Oct-Files, Sparse Matrices
@section Licensing of Third Party Software

There are several third party software packages used by the Octave
sparse matrix code.

@table @asis
@item COLAMD
is used for the @dfn{colamd} and @dfn{symamd} functions.

@table @asis
@item Authors
The authors of the code itself are Stefan I. Larimore and Timothy A.
Davis (davis@@cise.ufl.edu), University of Florida. The algorithm was
developed in collaboration with John Gilbert, Xerox PARC, and Esmond
Ng, Oak Ridge National Laboratory.

@item License
Copyright @copyright{} 1998-2003 by the University of Florida.
All Rights Reserved.

THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK.

Permission is hereby granted to use, copy, modify, and/or distribute
this program, provided that the Copyright, this License, and the
Availability of the original version is retained on all copies and made
accessible to the end-user of any code or package that includes COLAMD
or any modified version of COLAMD.

@item Availability
@url{http://www.cise.ufl.edu/research/sparse/colamd/}
@end table

@item UMFPACK
is used in various places with the sparse types, including the
LU decomposition and solvers.

@table @asis
@item License
UMFPACK Version 4.4 (Jan. 28, 2005), Copyright @copyright{} 2005 by
Timothy A. Davis. All Rights Reserved.

Your use or distribution of UMFPACK or any modified version of
UMFPACK implies that you agree to this License.

THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK.

Permission is hereby granted to use or copy this program, provided
that the Copyright, this License, and the Availability of the original
version is retained on all copies. User documentation of any code that
uses UMFPACK or any modified version of UMFPACK code must cite the
Copyright, this License, the Availability note, and 'Used by permission.'
Permission to modify the code and to distribute modified code is granted,
provided the Copyright, this License, and the Availability note are
retained, and a notice that the code was modified is included. This
software was developed with support from the National Science Foundation,
and is provided to you free of charge.

@item Availability
@url{http://www.cise.ufl.edu/research/sparse/umfpack/}
@end table
@end table

@node Function Reference, , License, Sparse Matrices
@section Function Reference

@iftex
@subsection Functions by Category
@subsubsection Generate sparse matrix
@item treeplot
@emph{Not implemented}
@end table
@subsubsection Sparse matrix reordering
@table @asis
@item ccolamd
Constrained column approximate minimum degree permutation.
@item colamd
Column approximate minimum degree permutation.
@item colperm
Returns the column permutations such that the columns of `S (:, P)' are ordered in terms of increasing number of non-zero elements.
@item csymamd
For a symmetric positive definite matrix S, returns the permutation vector p such that `S (P, P)' tends to have a sparser Cholesky factor than S.
@item dmperm
Perform a Dulmage-Mendelsohn permutation on the sparse matrix S.
@item symamd
For a symmetric positive definite matrix S, returns the permutation vector p such that `S (P, P)' tends to have a sparser Cholesky factor than S.
@item symrcm
@emph{Not implemented}
@item matrix_type
Identify the matrix type or mark a matrix as a particular type.
@item normest
@emph{Not implemented}
@item spchol
Compute the Cholesky factor, R, of the symmetric positive definite matrix A.
@item spcholinv
Use the Cholesky factorization to compute the inverse of the
sparse symmetric positive definite matrix A.
@item spchol2inv
Invert a sparse symmetric, positive definite square matrix from its
Cholesky decomposition, U.
@item spdet
Compute the determinant of sparse matrix A using UMFPACK.
@item spinv
Compute the inverse of the square matrix A.
@item spkron
Form the Kronecker product of two sparse matrices.
@item splchol
Compute the Cholesky factor, L, of the symmetric positive definite matrix A.
@item splu
Compute the LU decomposition of the sparse matrix A, using subroutines from UMFPACK.
@item sprank
@emph{Not implemented}
@item svds
@item spmax
For a vector argument, return the maximum value.
@item spatan2
Compute atan (Y / X) for corresponding sparse matrix elements of Y and X.
@item spdiag
Return a diagonal matrix with the sparse vector V on diagonal K.
@item spreshape
Return a sparse matrix with M rows and N columns whose elements are taken from the sparse matrix A.
@end table

@subsection Functions Alphabetically
@end iftex

@menu
* ccolamd::     Constrained column approximate minimum degree permutation.
* colamd::      Column approximate minimum degree permutation.
* colperm::     Returns the column permutations such that the columns of `S
                (:, P)' are ordered in terms of increasing number of non-zero
                elements.
* csymamd::     For a symmetric positive definite matrix S, returns the
                permutation vector p such that `S (P, P)' tends to have a
                sparser Cholesky factor than S.
* dmperm::      Perform a Dulmage-Mendelsohn permutation on the sparse
                matrix S.
* etree::       Returns the elimination tree for the matrix S.
* full::        Returns a full storage matrix from a sparse one.  See also:
                sparse
                matrix SM.
* spalloc::     Returns an empty sparse matrix of size R-by-C.
* sparse::      SPARSE: create a sparse matrix
* spatan2::     Compute atan (Y / X) for corresponding sparse matrix
                elements of Y and X.
* spchol::      Compute the Cholesky factor, R, of the symmetric
                positive definite matrix A.
* spcholinv::   Use the Cholesky factorization to compute the inverse of the
                sparse symmetric positive definite matrix A.
* spchol2inv::  Invert a sparse symmetric, positive definite square matrix
                from its Cholesky decomposition, U.
* spconvert::   This function converts a simple sparse matrix format
                easily produced by other programs into Octave's internal
                sparse format.
* spcumprod::   Cumulative product of elements along dimension DIM.
* spcumsum::    Cumulative sum of elements along dimension DIM.
* spfind::      SPFIND: a sparse version of the find operator 1.
* spfun::       Compute `f(X)' for the non-zero values of X.  This results in
                a sparse matrix with the same structure as X.
* spinv::       Compute the inverse of the square matrix A.
* spkron::      Form the Kronecker product of two sparse matrices.
* splchol::     Compute the Cholesky factor, L, of the symmetric positive
                definite matrix A.
* splu::        Compute the LU decomposition of the sparse matrix A, using
                subroutines from UMFPACK.
* spmax::       For a vector argument, return the maximum value.
* spmin::       For a vector argument, return the minimum value.
* spones::      Replace the non-zero entries of X with ones.
* spparms::     Sets or displays the parameters used by the sparse solvers
                and factorization functions.
* spprod::      Product of elements along dimension DIM.
* sprand::      Generate a random sparse matrix.
* sprandn::     Generate a random sparse matrix.
* spreshape::   Return a sparse matrix with M rows and N columns whose
                elements are taken from the sparse matrix A.
* spstats::     Return the stats for the non-zero elements of the sparse
                matrix S.  COUNT is the number of non-zeros in each column,
                MEAN is the mean of the non-zeros in each column, and VAR
                is the variance of the non-zeros in each column.
* spsum::       Sum of elements along dimension DIM.
                matrix S.
* symrcm::      Returns the Reverse Cuthill-McKee reordering of the sparse
                matrix S.
@end menu

@node colamd, ccolamd, , Function Reference
@subsubsection colamd

@DOCSTRING(colamd)

@node ccolamd, colperm, colamd, Function Reference
@subsubsection ccolamd

@DOCSTRING(ccolamd)

@node colperm, csymamd, ccolamd, Function Reference
@subsubsection colperm

@DOCSTRING(colperm)

@node csymamd, dmperm, colperm, Function Reference
@subsubsection csymamd

@DOCSTRING(csymamd)

@node dmperm, etree, csymamd, Function Reference
@subsubsection dmperm

@DOCSTRING(dmperm)

@node etree, full, dmperm, Function Reference
@node sparse, spatan2, spalloc, Function Reference
@subsubsection sparse

@DOCSTRING(sparse)

@node spatan2, spchol, sparse, Function Reference
@subsubsection spatan2

@DOCSTRING(spatan2)

@node spchol, spcholinv, spatan2, Function Reference
@subsubsection spchol

@DOCSTRING(spchol)

@node spcholinv, spchol2inv, spchol, Function Reference
@subsubsection spcholinv

@DOCSTRING(spcholinv)

@node spchol2inv, spconvert, spcholinv, Function Reference
@subsubsection spchol2inv

@DOCSTRING(spchol2inv)

@node spconvert, spcumprod, spchol2inv, Function Reference
@subsubsection spconvert

@DOCSTRING(spconvert)

@node spcumprod, spcumsum, spconvert, Function Reference
@node spinv, spkron, spfun, Function Reference
@subsubsection spinv

@DOCSTRING(spinv)

@node spkron, splchol, spinv, Function Reference
@subsubsection spkron

@DOCSTRING(spkron)

@node splchol, splu, spkron, Function Reference
@subsubsection splchol

@DOCSTRING(splchol)

@node splu, spmax, splchol, Function Reference
@subsubsection splu

@DOCSTRING(splu)

@node spmax, spmin, splu, Function Reference
@node sprand, sprandn, spprod, Function Reference
@subsubsection sprand

@DOCSTRING(sprand)

@node sprandn, spreshape, sprand, Function Reference
@subsubsection sprandn

@DOCSTRING(sprandn)

@node spreshape, spstats, sprandn, Function Reference
@subsubsection spreshape

@DOCSTRING(spreshape)

@node spstats, spsum, spreshape, Function Reference
@subsubsection spstats

@DOCSTRING(spstats)

@node spsum, spsumsq, spstats, Function Reference