@c Copyright (C) 2004, 2005 David Bateman
@c This is part of the Octave manual.
@c For copying conditions, see the file gpl.texi.

@node Sparse Matrices
@chapter Sparse Matrices

@menu
* Basics:: The Creation and Manipulation of Sparse Matrices
* Graph Theory:: Graphs and their use with Sparse Matrices
* Sparse Linear Algebra:: Linear Algebra on Sparse Matrices
* Iterative Techniques:: Iterative Techniques applied to Sparse Matrices
* Oct-Files:: Using Sparse Matrices in Oct-files
* License:: Licensing of Third Party Software
* Function Reference:: Documentation from the Specific Sparse Functions
@end menu
|

@node Basics, Graph Theory, Sparse Matrices, Sparse Matrices
@section The Creation and Manipulation of Sparse Matrices

The size of mathematical problems that can be treated at any particular
time is generally limited by the available computing resources. Both
the speed of the computer and its available memory place limitations on
the problem size.

There are many classes of mathematical problems which give rise to
matrices, where a large number of the elements are zero. In this case
it makes sense to have a special matrix type to handle this class of
problems, where only the non-zero elements of the matrix are
stored. Not only does this reduce the amount of memory needed to store
the matrix, but it also means that operations on this type of matrix
can take advantage of the a-priori knowledge of the positions of the
non-zero elements to accelerate their calculations.

A matrix type that stores only the non-zero elements is generally called
sparse. It is the purpose of this document to discuss the basics of the
storage and creation of sparse matrices and the fundamental operations
on them.

THIS DOCUMENT STILL HAS LARGE BLANKS. PLEASE FILL THEM IN. LOOK FOR
THE TAG "WRITE ME"
|

@menu
* Storage:: Storage of Sparse Matrices
* Creation:: Creating Sparse Matrices
* Operators and Functions:: Basic Operators and Functions on Sparse Matrices
* Information:: Finding out Information about Sparse Matrices
@end menu

@node Storage, Creation, Basics, Basics
@subsection Storage of Sparse Matrices

It is not strictly speaking necessary for the user to understand how
sparse matrices are stored. However, such an understanding will help
to give a feel for the size of sparse matrices. Understanding
the storage technique is also necessary for those users wishing to
create their own oct-files.

There are many different means of storing sparse matrix data. What all
of the methods have in common is that they attempt to reduce the complexity
and storage given a-priori knowledge of the particular class of problems
that will be solved. A good summary of the available techniques for storing
sparse matrices is given by Saad @footnote{Youcef Saad "SPARSKIT: A basic toolkit
for sparse matrix computation", 1994,
@url{ftp://ftp.cs.umn.edu/dept/sparse/SPARSKIT2/DOC/paper.ps}}.
With full matrices, the position of an element within the matrix
is implied by its position in the computer's memory.
However, this is not the case for sparse matrices, and so the positions
of the non-zero elements of the matrix must also be stored.

An obvious way to do this is by storing the elements of the matrix as
triplets, with two elements being the position in the array
(row and column) and the third being the data itself. This is conceptually
easy to grasp, but requires more storage than is strictly needed.
|

The storage technique used within Octave is compressed column
format. In this format the row index of each element and the
data are stored as before. However, if we assume that all elements
in the same column are stored adjacently in the computer's memory, then
we only need to store information on the number of non-zero elements
in each column, rather than their positions. Thus, assuming that the
matrix has more non-zero elements than there are columns in the
matrix, we win in terms of the amount of memory used.

In fact, the column index contains one more element than the number of
columns, with the first element always being zero. The advantage of
this is a simplification in the code, in that there is no special case
for the first or last column. A short example, demonstrating this in
C, is

@example
for (j = 0; j < nc; j++)
  for (i = cidx[j]; i < cidx[j+1]; i++)
    printf ("non-zero element (%i,%i) is %g\n", ridx[i], j, data[i]);
@end example
|

A clearer understanding might be had by considering how the above
applies to an example matrix. Consider the matrix

@example
@group
1 2 0 0
0 0 0 3
0 0 0 4
@end group
@end example

The non-zero elements of this matrix are

@example
@group
(1, 1) @result{} 1
(1, 2) @result{} 2
(2, 4) @result{} 3
(3, 4) @result{} 4
@end group
@end example

This will be stored as three vectors @var{cidx}, @var{ridx} and @var{data},
representing the column indexing, row indexing and data respectively. The
contents of these three vectors for the above matrix will be

@example
@group
@var{cidx} = [0, 1, 2, 2, 4]
@var{ridx} = [0, 0, 1, 2]
@var{data} = [1, 2, 3, 4]
@end group
@end example

Note that this is the representation of these elements with the first row
and column assumed to start at zero. Thus the number of elements in the
@var{i}-th column is given by @code{@var{cidx} (@var{i} + 1) -
@var{cidx} (@var{i})}.
|

It should be noted that compressed row formats are equally possible. However,
in the context of mixed operations between sparse and dense matrices,
it makes sense that the elements of the sparse matrices are in the same
order as those of the dense matrices. Octave stores dense matrices in column
major ordering, and so sparse matrices are equally stored in this manner.

A further constraint on the sparse matrix storage used by Octave is that
all elements in a column are stored in increasing order of their row
index. This makes certain later operations faster. However, it imposes
the need to sort the elements on the creation of sparse matrices. Having
unordered elements is potentially an advantage in that it makes operations
such as concatenating two sparse matrices together easier and faster; however,
it adds complexity and speed problems elsewhere.
|

@node Creation, Operators and Functions, Storage, Basics
@subsection Creating Sparse Matrices

There are several means to create a sparse matrix.

@table @asis
@item Returned from a function
There are many functions that directly return sparse matrices. These include
@dfn{speye}, @dfn{sprand}, @dfn{spdiag}, etc.
@item Constructed from matrices or vectors
The function @dfn{sparse} allows a sparse matrix to be constructed from
three vectors representing the row, column and data. Alternatively, the
function @dfn{spconvert} uses a three column matrix format to allow easy
importation of data from elsewhere.
@item Created and then filled
The function @dfn{sparse} or @dfn{spalloc} can be used to create an empty
matrix that is then filled by the user.
@item From a user binary program
The user can directly create the sparse matrix within an oct-file.
@end table

There are several basic functions to return specific sparse
matrices. For example the sparse identity matrix is a matrix that is
often needed. It therefore has its own function to create it as
@code{speye (@var{n})} or @code{speye (@var{r}, @var{c})}, which
creates an @var{n}-by-@var{n} or @var{r}-by-@var{c} sparse identity
matrix.
|

Another typical sparse matrix that is often needed is a matrix with a
random distribution of elements. The functions @dfn{sprand} and
@dfn{sprandn} perform this for uniform and normal distributions of the
elements. They have the same calling convention, where
@code{sprand (@var{r}, @var{c}, @var{d})}
creates an @var{r}-by-@var{c} sparse matrix with a density of filled
elements of @var{d}.

Other functions of interest that directly create sparse matrices are
@dfn{spdiag} and its generalization @dfn{spdiags}, which take the
definition of the diagonals of the matrix and create the sparse matrix
that corresponds to this. For example

@c FIXME, when spdiag is overloaded to diag the below won't work.

@example
s = spdiag (randn(1,n), -1);
@end example

creates an (@var{n}+1)-by-(@var{n}+1) sparse matrix with a single
diagonal defined.
|

The recommended way for the user to create a sparse matrix is to create
two vectors containing the row and column indices of the data and a third
vector of the same size containing the data to be stored. For example

@example
function x = foo (r, j)
  idx = randperm (r);
  x = ([zeros(r-2,1); rand(2,1)])(idx);
endfunction

ri = [];
ci = [];
d = [];

for j=1:c
  dtmp = foo (r, j);  # Returns vector of length r with (:,j) values
  idx = find (dtmp != 0.);
  ri = [ri; idx];
  ci = [ci; j*ones(length(idx),1)];
  d = [d; dtmp(idx)];
endfor
s = sparse (ri, ci, d, r, c);
@end example

creates an @var{r}-by-@var{c} sparse matrix with a random distribution of
2 elements per column. The elements of the vectors do not need to be sorted in
any particular order, as Octave will sort them prior to storing the
data. However, pre-sorting the data will make the creation of the sparse
matrix faster.
|

The function @dfn{spconvert} takes a three or four column real matrix.
The first two columns represent the row and column index respectively and
the third and fourth columns, the real and imaginary parts of the sparse
matrix. The matrix can contain zero elements and the elements can be
sorted in any order. Adding zero elements is a convenient way to define
the size of the sparse matrix. For example

@example
s = spconvert ([1 2 3 4; 1 3 4 4; 1 2 3 0]')
@result{} Compressed Column Sparse (rows=4, cols=4, nnz=3)
  (1 , 1) -> 1
  (2 , 3) -> 2
  (3 , 4) -> 3
@end example

An example of creating and filling a matrix might be

@example
k = 5;
nz = r * k;
s = spalloc (r, c, nz)
for j = 1:c
  idx = randperm (r);
  s (:, j) = [zeros(r - k, 1); rand(k, 1)] (idx);
endfor
@end example
|

It should be noted that, due to the way that Octave's
assignment functions are written, the assignment will reallocate
the memory used by the sparse matrix at each iteration. Therefore the
@dfn{spalloc} function ignores the @var{nz} argument and does not
preassign the memory for the matrix. For this reason it is vitally
important that code using the above structure should be as vectorized
as possible to minimize the number of assignments and reduce the
number of memory allocations.

The above problem can be avoided in oct-files. However, the
construction of a sparse matrix from an oct-file is more complex than
can be discussed in this brief introduction, and you are referred to
section @ref{Oct-Files} for a full description of the techniques
involved.
|

@node Operators and Functions, Information, Creation, Basics
@subsection Basic Operators and Functions on Sparse Matrices

@menu
* Functions:: Operators and Functions
* ReturnType:: The Return Types of Operators and Functions
* MathConsiderations:: Mathematical Considerations
@end menu

@node Functions, ReturnType, Operators and Functions, Operators and Functions
@subsubsection Operators and Functions

WRITE ME

@node ReturnType, MathConsiderations, Functions, Operators and Functions
@subsubsection The Return Types of Operators and Functions

The two basic reasons to use sparse matrices are to reduce the memory
usage and to avoid calculations on zero elements. The two are
closely related in that the computation time for a sparse matrix operator
or function is roughly linear in the number of non-zero elements.

Therefore, there is a certain density of non-zero elements of a matrix
where it no longer makes sense to store it as a sparse matrix, but rather
as a full matrix. For this reason operators and functions that have a
high probability of returning a full matrix will always return one. For
example adding a scalar constant to a sparse matrix will almost always
make it a full matrix, and so the example

@example
speye(3) + 0
@result{} 1 0 0
  0 1 0
  0 0 1
@end example

returns a full matrix as can be seen. Additionally all sparse functions
test the amount of memory occupied by the sparse matrix to see if the
amount of storage used is larger than the amount used by the full
equivalent. Therefore @code{speye (2) * 1} will return a full matrix, as
the memory used is smaller for the full version than for the sparse version.

As all of the mixed operators and functions between full and sparse
matrices exist, in general this does not cause any problems. However,
one area where it does cause a problem is where a sparse matrix is
promoted to a full matrix, and subsequent operations would resparsify
the matrix. Such cases are rare, but can be artificially created, for
example @code{(fliplr(speye(3)) + speye(3)) - speye(3)} gives a full
matrix when it should give a sparse one. In general, where such cases
occur, they impose only a small memory penalty.
|

There is however one known case where this behaviour of Octave's
sparse matrices will cause a problem. That is in the handling of the
@dfn{diag} function. Whether @dfn{diag} returns a sparse or full matrix
depends on the type of its input arguments. So

@example
a = diag (sparse([1,2,3]), -1);
@end example

should return a sparse matrix. To ensure this actually happens, the
@dfn{sparse} function, and other functions based on it like @dfn{speye},
always return a sparse matrix, even if the memory used will be larger
than its full representation.
|

@node MathConsiderations, , ReturnType, Operators and Functions
@subsubsection Mathematical Considerations

The attempt has been made to make sparse matrices behave in exactly the
same manner as their full counterparts. However, there are certain
differences, in particular differences from other products' sparse
implementations.

Firstly, the "./" and ".^" operators must be used with care. Consider what
the examples

@example
s = speye (4);
a1 = s .^ 2;
a2 = s .^ s;
a3 = s .^ -2;
a4 = s ./ 2;
a5 = 2 ./ s;
a6 = s ./ s;
@end example

will give. The first example of @var{s} raised to the power of 2 causes
no problems. However @var{s} raised element-wise to itself involves
a large number of terms @code{0 .^ 0}, which is 1. Therefore @code{@var{s} .^
@var{s}} is a full matrix.

Likewise @code{@var{s} .^ -2} involves terms like @code{0 .^ -2} which
is infinity, and so @code{@var{s} .^ -2} is equally a full matrix.

For the "./" operator @code{@var{s} ./ 2} has no problems, but
@code{2 ./ @var{s}} involves a large number of infinity terms as well
and is equally a full matrix. The case of @code{@var{s} ./ @var{s}}
involves terms like @code{0 ./ 0}, which is a @code{NaN}, and so this
is equally a full matrix with the zero elements of @var{s} filled with
@code{NaN} values.

The above behaviour is consistent with full matrices, but is not
consistent with sparse implementations in other products.
|

A particular problem of sparse matrices comes about due to the fact that,
as the zeros are not stored, the sign-bit of these zeros is equally not
stored. In certain cases the sign-bit of zero is important. For example

@example
a = 0 ./ [-1, 1; 1, -1];
b = 1 ./ a
@result{} -Inf Inf
   Inf -Inf
c = 1 ./ sparse (a)
@result{} Inf Inf
   Inf Inf
@end example

To correct this behaviour would mean that zero elements with a negative
sign-bit would need to be stored in the matrix to ensure that their
sign-bit was respected. This is not done at this time, for reasons of
efficiency, and so the user is warned that calculations where the sign-bit
of zero is important must not be done using sparse matrices.

Also discuss issues of fill-in. Discuss symamd etc, and mention randperm
that is included elsewhere in the docs...

WRITE ME
|

@node Information, , Operators and Functions, Basics
@subsection Finding out Information about Sparse Matrices

Talk about the spy, spstats, nnz, spparms, etc. functions

WRITE ME

@node Graph Theory, Sparse Linear Algebra, Basics, Sparse Matrices
@section Graphs and their use with Sparse Matrices

Someone who knows more about this than me should write this...

WRITE ME

@node Sparse Linear Algebra, Iterative Techniques, Graph Theory, Sparse Matrices
@section Linear Algebra on Sparse Matrices

Octave includes a polymorphic solver for sparse matrices, where
the exact solver used to factorize the matrix depends on the properties
of the sparse matrix itself. The cost of determining the matrix type
is small relative to the cost of factorizing the matrix itself, but in any
case the matrix type is cached once it is calculated, so that it is not
re-determined each time it is used in a linear equation.

The selection tree for how the linear equation is solved is

|
@enumerate 1
@item If the matrix is not square, goto 9.

@item If the matrix is diagonal, solve directly and goto 9.

@item If the matrix is a permuted diagonal, solve directly taking into
account the permutations. Goto 9.

@item If the matrix is banded and if the band density is less than that
given by @code{spparms ("bandden")} continue, else goto 5.

@enumerate a
@item If the matrix is tridiagonal and the right-hand side is not sparse
continue, else goto 4b.

@enumerate
@item If the matrix is hermitian, with a positive real diagonal, attempt
Cholesky factorization using @sc{Lapack} xPTSV.

@item If the above failed or the matrix is not hermitian with a positive
real diagonal use Gaussian elimination with pivoting using
@sc{Lapack} xGTSV, and goto 9.
@end enumerate

@item If the matrix is hermitian with a positive real diagonal, attempt
Cholesky factorization using @sc{Lapack} xPBTRF.

@item If the above failed or the matrix is not hermitian with a positive
real diagonal use Gaussian elimination with pivoting using
@sc{Lapack} xGBTRF, and goto 9.
@end enumerate

@item If the matrix is upper or lower triangular perform a sparse forward
or backward substitution, and goto 9.

@item If the matrix is an upper triangular matrix with column permutations
or a lower triangular matrix with row permutations, perform a sparse forward
or backward substitution, and goto 9.

@item If the matrix is hermitian with a real positive diagonal, attempt
sparse Cholesky factorization.

FIXME: Detection of positive definite matrices written and tested, but
Cholesky factorization isn't yet written

@item If the sparse Cholesky factorization failed or the matrix is not
hermitian with a real positive diagonal, factorize using UMFPACK.

@item If the matrix is not square, or any of the previous solvers flags
a singular or near singular matrix, find a minimum norm solution.

FIXME: QR solvers not yet written.

@end enumerate

The band density is defined as the number of non-zero values in the band
divided by the total number of values in the band. The banded matrix
solvers can be entirely disabled by using @dfn{spparms} to set @code{bandden}
to 1 (i.e. @code{spparms ("bandden", 1)}).
|

All of the solvers above, except the banded solvers, calculate an
estimate of the condition number. This can be used to detect numerical
stability problems in the solution and force a minimum norm solution
to be used. However, for narrow banded matrices, the cost of
calculating the condition number is significant, and can in fact exceed
the cost of factoring the matrix. Therefore the condition number is
not calculated for banded matrices, and so unless the factorization
is exactly singular, these numerical instabilities won't be detected.
In cases where this might be a problem the user is recommended to disable
the banded solvers as above, at a significant cost in terms of speed.

The user can force the type of the matrix with the @code{matrix_type}
function. This avoids the cost of discovering the type of the matrix.
However, it should be noted that incorrectly identifying the type of the
matrix will lead to unpredictable results, and so @code{matrix_type} should
be used with care.

|
@node Iterative Techniques, Oct-Files, Sparse Linear Algebra, Sparse Matrices
@section Iterative Techniques applied to Sparse Matrices

WRITE ME

@node Oct-Files, License, Iterative Techniques, Sparse Matrices
@section Using Sparse Matrices in Oct-files

An oct-file is a means of writing an Octave function in a compilable
language like C++, rather than as a script file. This results in a
significant acceleration of the code. It is not the purpose of this
section to discuss how to write an oct-file, or discuss what they
are. There are already two @footnote{Paul Thomas "Dal Segno al Coda
- The octave dynamically linked function cookbook",
@url{http://perso.wanadoo.fr/prthomas/intro.html}, and Christoph Spiel
"Del Coda Al Fine - Pushing Octave's Limits",
@url{http://octave.sourceforge.net/coda/coda.pdf}} very good
references on oct-files themselves. Users who are not familiar with
oct-files are urged to read these references to fully understand this
chapter. The examples discussed here assume that the oct-file is written
entirely in C++.

There are three classes of sparse objects that are of interest to the
user.

@table @asis
@item SparseMatrix
A double precision sparse matrix class
@item SparseComplexMatrix
A complex sparse matrix class
@item SparseBoolMatrix
A boolean sparse matrix class
@end table
|

All of these classes inherit from the @code{Sparse<T>} template class,
and so all have similar capabilities and usage. The @code{Sparse<T>}
class was based on Octave's @code{Array<T>} class, and so users familiar
with Octave's Array classes will be comfortable with the use of
the sparse classes.

The sparse classes will not be entirely described in this section, due
to their similarity with the existing Array classes. However, there are a
few differences due to the different nature of sparse objects, and these
will be described. Firstly, although it is fundamentally possible to
have N-dimensional sparse objects, the Octave sparse classes do
not allow them at this time. So all operations of the sparse classes
must be 2-dimensional. This means that in fact @code{SparseMatrix} is
similar to Octave's @code{Matrix} class rather than its
@code{NDArray} class.

@menu
* OctDifferences:: The Differences between the Array and Sparse Classes
* OctCreation:: Creating Sparse Matrices in Oct-Files
* OctUse:: Using Sparse Matrices in Oct-Files
@end menu

@node OctDifferences, OctCreation, Oct-Files, Oct-Files
@subsection The Differences between the Array and Sparse Classes

The number of elements in a sparse matrix is considered to be the number
of non-zero elements rather than the product of the dimensions. Therefore

@example
SparseMatrix sm;
@dots{}
int nel = sm.nelem ();
@end example

returns the number of non-zero elements. If the user really requires the
number of elements in the matrix, including the zero elements, they
should use @code{numel} rather than @code{nelem}. Note that for very
large matrices, where the product of the two dimensions is larger than
the representation of an unsigned int, @code{numel} can overflow.
An example is @code{speye(1e6)}, which will create a matrix with a million
rows and columns, but only a million non-zero elements. The
number of rows times the number of columns in this case is more than two
hundred times the maximum value that can be represented by an unsigned int.
The use of @code{numel} should therefore be avoided unless it is known
it won't overflow.
|

Extreme care must be taken with the @code{elem} method and the "()" operator,
which perform basically the same function. The reason is that if a
sparse object is non-const, then Octave will assume that a
request for a zero element in a sparse matrix is in fact a request
to create this element so it can be filled. Therefore a piece of
code like

@example
SparseMatrix sm;
@dots{}
for (int j = 0; j < nc; j++)
  for (int i = 0; i < nr; i++)
    std::cerr << " (" << i << "," << j << "): " << sm(i,j)
              << std::endl;
@end example

is a great way of turning the sparse matrix into a dense one, and a
very slow way at that, since it reallocates the sparse object at each
zero element in the matrix.

An easy way of preventing the above from happening is to create a temporary
constant version of the sparse matrix. Note that only the container for
the sparse matrix will be copied, while the actual representation of the
data will be shared between the two versions of the sparse matrix, so this
is not a costly operation. For example, the above would become

@example
SparseMatrix sm;
@dots{}
const SparseMatrix tmp (sm);
for (int j = 0; j < nc; j++)
  for (int i = 0; i < nr; i++)
    std::cerr << " (" << i << "," << j << "): " << tmp(i,j)
              << std::endl;
@end example
|

Finally, as the sparse types aren't just represented as a contiguous
block of memory, the @code{fortran_vec} method of the @code{Array<T>}
class is not available. It is however replaced by three separate methods
@code{ridx}, @code{cidx} and @code{data}, that access the raw compressed
column format that the Octave sparse matrices are stored in.
Additionally, these methods can be used in a manner similar to @code{elem},
to allow the matrix to be accessed or filled. However, in that case it is
up to the user to respect the sparse matrix compressed column format
discussed previously.
|
628 @node OctCreation, OctUse, OctDifferences, Oct-Files |
|
629 @subsection Creating Spare Matrices in Oct-Files |
|
630 |
|
631 The user has several alternatives in how to create a sparse matrix. |
|
632 They can first create the data as three vectors representing the |
|
633 row and column indexes and the data, and from those create the matrix. |
|
634 Or alternatively, they can create a sparse matrix with the appropriate |
|
635 amount of space and then fill in the values. Both techniques have their |
|
636 advantages and disadvantages. |
|
637 |
|
638 An example of how to create a small sparse matrix with the first technique |
|
639 might be seen the example |
|
640 |
|
641 @example |
|
642 int nz = 4, nr = 3, nc = 4; |
|
643 ColumnVector ridx (nz); |
|
644 ColumnVector cidx (nz); |
|
645 ColumnVector data (nz); |
|
646 |
|
647 ridx(0) = 0; ridx(1) = 0; ridx(2) = 1; ridx(3) = 2; |
|
648 cidx(0) = 0; cidx(1) = 1; cidx(2) = 3; cidx(3) = 3; |
|
649 data(0) = 1; data(1) = 2; data(2) = 3; data(3) = 4; |
|
650 |
|
651 SparseMatrix sm (data, ridx, cidx, nr, nc); |
|
652 @end example |
|
653 |
|
654 which creates the matrix given in section @ref{Storage}. Note that |
|
655 the compressed matrix format is not used at the time of the creation |
|
656 of the matrix itself, however it is used internally. |
|
657 |
|
658 As previously mentioned, the values of the sparse matrix are stored |
|
659 in increasing column-major ordering. Although the data passed by the |
|
660 user does not need to respect this requirement, the pre-sorting the |
|
661 data significantly speeds up the creation of the sparse matrix. |
|
662 |
|
663 The disadvantage of this technique of creating a sparse matrix is |
|
664 that there is a brief time where two copies of the data exists. Therefore |
|
665 for extremely memory constrained problems this might not be the right |
|
666 technique to create the sparse matrix. |
|
667 |
|
668 The alternative is to first create the sparse matrix with the desired |
|
669 number of non-zero elements and then later fill those elements in. The |
|
670 easiest way to do this is |
|
671 |
|
672 @example |
|
673 int nz = 4, nr = 3, nc = 4; |
|
674 SparseMatrix sm (nr, nc, nz); |
|
675 sm(0,0) = 1; sm(0,1) = 2; sm(1,3) = 3; sm(2,3) = 4; |
|
676 @end example |
|
677 |
|
678 That creates the same matrix as previously. Again, although it is not |
|
679 strictly necessary, it is significantly faster if the sparse matrix is |
|
680 created in this manner that the elements are added in column-major |
|
681 ordering. The reason for this is that if the elements are inserted |
|
682 at the end of the current list of known elements then no element |
|
683 in the matrix needs to be moved to allow the new element to be |
|
684 inserted. Only the column indexes need to be updated. |
|
685 |
|
There are a few further points to note about this technique of creating
a sparse matrix. Firstly, it is not illegal to create a sparse matrix
with fewer elements than are actually inserted in the matrix. Therefore

@example
int nz = 4, nr = 3, nc = 4;
SparseMatrix sm (nr, nc, 0);
sm(0,0) = 1; sm(0,1) = 2; sm(1,3) = 3; sm(2,3) = 4;
@end example

is perfectly legal. However, it is a very bad idea, because each time a
new element is added to the sparse matrix the space allocated
to it is increased by reallocating the memory. This is an expensive
operation that will significantly slow this means of creating a sparse
matrix. Furthermore, it is not illegal to create a sparse matrix with
too much storage, so setting @var{nz} above to 6 is also legal.
The disadvantage is that the matrix occupies more memory than strictly
needed.

It is not always easy to know the number of non-zero elements prior
to filling a matrix. For this reason the additional storage for the
sparse matrix can be removed after its creation with the
@dfn{maybe_compress} function. Besides deallocating the unused
storage, @dfn{maybe_compress} can also remove zero elements
from the matrix. The removal of zero elements from the matrix is
enabled by setting the argument of the @dfn{maybe_compress} function
to 'true'. However, the cost of removing the zeros is high because it
implies resorting the elements. Therefore, if possible it is better
if the user does not add the zeros in the first place. An example of
the use of @dfn{maybe_compress} is

@example
int nz = 6, nr = 3, nc = 4;
SparseMatrix sm1 (nr, nc, nz);
sm1(0,0) = 1; sm1(0,1) = 2; sm1(1,3) = 3; sm1(2,3) = 4;
sm1.maybe_compress ();  // No zero elements were added

SparseMatrix sm2 (nr, nc, nz);
sm2(0,0) = 1; sm2(0,1) = 2; sm2(0,2) = 0; sm2(1,2) = 0;
sm2(1,3) = 3; sm2(2,3) = 4;
sm2.maybe_compress (true);  // Zero elements were added
@end example

The use of the @dfn{maybe_compress} function should be avoided if
possible, as it will slow the creation of the matrices.

A third means of creating a sparse matrix is to work directly with
the data in compressed column format. An example of this technique might
be

@c Note the @verbatim environment is a relatively new addition to texinfo.
@c Therefore use the @example environment and replace @ with @@,
@c { with @{, etc

@example
octave_value arg;

@dots{}

int nz = 6, nr = 3, nc = 4;  // Assume we know the max number of non-zeros
SparseMatrix sm (nr, nc, nz);
Matrix m = arg.matrix_value ();

int ii = 0;
sm.cidx (0) = 0;
for (int j = 0; j < nc; j++)
  @{
    for (int i = 0; i < nr; i++)
      @{
        double tmp = foo (m(i,j));
        if (tmp != 0.)
          @{
            sm.data(ii) = tmp;
            sm.ridx(ii) = i;
            ii++;
          @}
      @}
    sm.cidx(j+1) = ii;
  @}
sm.maybe_mutate ();  // Needed if the final number of non-zeros
                     // is not known a priori.
@end example

which is probably the most efficient means of creating the sparse matrix.

Finally, it might sometimes arise that the amount of storage initially
created is insufficient to completely store the sparse matrix. Therefore,
the method @code{change_capacity} exists to reallocate the sparse memory.
The above example would then be modified as

@example
octave_value arg;

@dots{}

int nz = 6, nr = 3, nc = 4;  // Assume we know the max number of non-zeros
SparseMatrix sm (nr, nc, nz);
Matrix m = arg.matrix_value ();

int ii = 0;
sm.cidx (0) = 0;
for (int j = 0; j < nc; j++)
  @{
    for (int i = 0; i < nr; i++)
      @{
        double tmp = foo (m(i,j));
        if (tmp != 0.)
          @{
            if (ii == nz)
              @{
                nz += 2;  // Add 2 more elements
                sm.change_capacity (nz);
              @}
            sm.data(ii) = tmp;
            sm.ridx(ii) = i;
            ii++;
          @}
      @}
    sm.cidx(j+1) = ii;
  @}
sm.maybe_mutate ();  // Needed if the final number of non-zeros
                     // is not known a priori.
@end example

Note that both increasing and decreasing the number of non-zero elements in
a sparse matrix is expensive, as it involves memory reallocation. Also, as
parts of the matrix, though not its entirety, exist as the old and new copy
at the same time, additional memory is needed. Therefore, if possible this
should be avoided.

@node OctUse, , OctCreation, Oct-Files
@subsection Using Sparse Matrices in Oct-Files

Most of the same operators and functions on sparse matrices that are
available within Octave are equally available in oct-files.
The basic means of extracting a sparse matrix from an @code{octave_value},
and of returning it as an @code{octave_value}, can be seen in the
following example.

@example
octave_value_list retval;

SparseMatrix sm = args(0).sparse_matrix_value ();
SparseComplexMatrix scm = args(1).sparse_complex_matrix_value ();
SparseBoolMatrix sbm = args(2).sparse_bool_matrix_value ();

@dots{}

retval(2) = sbm;
retval(1) = scm;
retval(0) = sm;
@end example

The conversion to an octave-value is handled by the sparse
@code{octave_value} constructors, and so no special care is needed.

@node License, Function Reference, Oct-Files, Sparse Matrices
@section Licensing of Third Party Software

There are several third party software packages used by Octave's
sparse matrix code.

@table @asis
@item COLAMD
is used for the @dfn{colamd} and @dfn{symamd} functions.

@table @asis
@item Authors
The authors of the code itself are Stefan I. Larimore and Timothy A.
Davis (davis@@cise.ufl.edu), University of Florida. The algorithm was
developed in collaboration with John Gilbert, Xerox PARC, and Esmond
Ng, Oak Ridge National Laboratory.

@item License
Copyright @copyright{} 1998-2003 by the University of Florida.
All Rights Reserved.

THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK.

Permission is hereby granted to use, copy, modify, and/or distribute
this program, provided that the Copyright, this License, and the
Availability of the original version is retained on all copies and made
accessible to the end-user of any code or package that includes COLAMD
or any modified version of COLAMD.

@item Availability
@url{http://www.cise.ufl.edu/research/sparse/colamd/}
@end table

@item UMFPACK
is used in various places with the sparse types, including the
LU decomposition and solvers.

@table @asis
@item License
UMFPACK Version 4.4 (Jan. 28, 2005), Copyright @copyright{} 2005 by
Timothy A. Davis. All Rights Reserved.

Your use or distribution of UMFPACK or any modified version of
UMFPACK implies that you agree to this License.

THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK.

Permission is hereby granted to use or copy this program, provided
that the Copyright, this License, and the Availability of the original
version is retained on all copies. User documentation of any code that
uses UMFPACK or any modified version of UMFPACK code must cite the
Copyright, this License, the Availability note, and 'Used by permission.'
Permission to modify the code and to distribute modified code is granted,
provided the Copyright, this License, and the Availability note are
retained, and a notice that the code was modified is included. This
software was developed with support from the National Science Foundation,
and is provided to you free of charge.

@item Availability
@url{http://www.cise.ufl.edu/research/sparse/umfpack/}
@end table
@end table

@node Function Reference, , License, Sparse Matrices
@section Function Reference

@iftex
@subsection Functions by Category
@subsubsection Generate sparse matrix
@table @asis
@item spdiags
A generalization of the function `spdiag'.
@item speye
Returns a sparse identity matrix.
@item sprand
Generate a uniformly distributed random sparse matrix.
@item sprandn
Generate a normally distributed random sparse matrix.
@item sprandsym
@emph{Not implemented}
@end table
@subsubsection Sparse matrix conversion
@table @asis
@item full
Returns a full storage matrix from a sparse one. See also: sparse.
@item sparse
Create a sparse matrix.
@item spconvert
This function converts a simple sparse matrix format easily produced by other programs into Octave's internal sparse format.
@item spfind
A sparse version of the find operator.
@end table
@subsubsection Manipulate sparse matrices
@table @asis
@item issparse
Return 1 if the value of the expression EXPR is a sparse matrix.
@item nnz
Returns the number of non-zero elements in SM. See also: sparse.
@item nonzeros
Returns a vector of the non-zero values of the sparse matrix S.
@item nzmax
Returns the amount of storage allocated to the sparse matrix SM.
@item spalloc
Returns an empty sparse matrix of size R-by-C.
@item spfun
Compute `f(X)' for the non-zero values of X. This results in a sparse matrix with the same structure as X.
@item spones
Replace the non-zero entries of X with ones.
@item spy
Plot the sparsity pattern of the sparse matrix X.
@end table
@subsubsection Graph Theory
@table @asis
@item etree
Returns the elimination tree for the matrix S.
@item etreeplot
@emph{Not implemented}
@item gplot
@emph{Not implemented}
@item treelayout
@emph{Not implemented}
@item treeplot
@emph{Not implemented}
@end table
@subsubsection Sparse matrix reordering
@table @asis
@item colamd
Column approximate minimum degree permutation.
@item colperm
Returns the column permutations such that the columns of `S (:, P)' are ordered in terms of increasing number of non-zero elements.
@item dmperm
Perform a Dulmage-Mendelsohn permutation on the sparse matrix S.
@item symamd
For a symmetric positive definite matrix S, returns the permutation vector p such that `S (P, P)' tends to have a sparser Cholesky factor than S.
@item symrcm
Returns the Reverse Cuthill-McKee reordering of the sparse matrix S.
@end table
@subsubsection Linear algebra
@table @asis
@item cholinc
@emph{Not implemented}
@item condest
@emph{Not implemented}
@item eigs
@emph{Not implemented}
@item matrix_type
Identify the matrix type or mark a matrix as a particular type.
@item normest
@emph{Not implemented}
@item spdet
Compute the determinant of the sparse matrix A using UMFPACK.
@item spinv
Compute the inverse of the square matrix A.
@item spkron
Form the Kronecker product of two sparse matrices.
@item splu
Compute the LU decomposition of the sparse matrix A, using subroutines from UMFPACK.
@item sprank
@emph{Not implemented}
@item svds
@emph{Not implemented}
@end table
@subsubsection Iterative techniques
@table @asis
@item bicg
@emph{Not implemented}
@item bicgstab
@emph{Not implemented}
@item cgs
@emph{Not implemented}
@item gmres
@emph{Not implemented}
@item luinc
Produce the incomplete LU factorization of the sparse matrix A.
@item lsqr
@emph{Not implemented}
@item minres
@emph{Not implemented}
@item pcg
@emph{Not implemented}
@item pcr
@emph{Not implemented}
@item qmr
@emph{Not implemented}
@item symmlq
@emph{Not implemented}
@end table
@subsubsection Miscellaneous
@table @asis
@item spaugment
@emph{Not implemented}
@item spparms
Sets or displays the parameters used by the sparse solvers and factorization functions.
@item symbfact
Performs a symbolic factorization analysis on the sparse matrix S.
@item spstats
Return the stats for the non-zero elements of the sparse matrix S. COUNT is the number of non-zeros in each column, MEAN is the mean of the non-zeros in each column, and VAR is the variance of the non-zeros in each column.
@item spprod
Product of elements along dimension DIM.
@item spcumprod
Cumulative product of elements along dimension DIM.
@item spcumsum
Cumulative sum of elements along dimension DIM.
@item spsum
Sum of elements along dimension DIM.
@item spsumsq
Sum of squares of elements along dimension DIM.
@item spmin
For a vector argument, return the minimum value.
@item spmax
For a vector argument, return the maximum value.
@item spatan2
Compute atan (Y / X) for corresponding sparse matrix elements of Y and X.
@item spdiag
Return a diagonal matrix with the sparse vector V on diagonal K.
@item spreshape
Return a sparse matrix with M rows and N columns whose elements are taken from the sparse matrix A.
@end table

@subsection Functions Alphabetically
@end iftex

@menu
* colamd::      Column approximate minimum degree permutation.
* colperm::     Returns the column permutations such that the columns of
                  `S (:, P)' are ordered in terms of increasing number of
                  non-zero elements.
* dmperm::      Perform a Dulmage-Mendelsohn permutation on the sparse
                  matrix S.
* etree::       Returns the elimination tree for the matrix S.
* full::        Returns a full storage matrix from a sparse one. See also:
                  sparse.
* issparse::    Return 1 if the value of the expression EXPR is a sparse
                  matrix.
* luinc::       Produce the incomplete LU factorization of the sparse
                  matrix A.
* matrix_type:: Identify the matrix type or mark a matrix as a particular
                  type.
* nnz::         Returns the number of non-zero elements in SM. See also:
                  sparse.
* nonzeros::    Returns a vector of the non-zero values of the sparse
                  matrix S.
* nzmax::       Returns the amount of storage allocated to the sparse
                  matrix SM.
* spalloc::     Returns an empty sparse matrix of size R-by-C.
* sparse::      Create a sparse matrix.
* spatan2::     Compute atan (Y / X) for corresponding sparse matrix
                  elements of Y and X.
* spconvert::   This function converts a simple sparse matrix format
                  easily produced by other programs into Octave's internal
                  sparse format.
* spcumprod::   Cumulative product of elements along dimension DIM.
* spcumsum::    Cumulative sum of elements along dimension DIM.
* spdet::       Compute the determinant of the sparse matrix A using
                  UMFPACK.
* spdiag::      Return a diagonal matrix with the sparse vector V on
                  diagonal K.
* spdiags::     A generalization of the function `spdiag'.
* speye::       Returns a sparse identity matrix.
* spfind::      A sparse version of the find operator.
* spfun::       Compute `f(X)' for the non-zero values of X. This results
                  in a sparse matrix with the same structure as X.
* spinv::       Compute the inverse of the square matrix A.
* spkron::      Form the Kronecker product of two sparse matrices.
* splu::        Compute the LU decomposition of the sparse matrix A, using
                  subroutines from UMFPACK.
* spmax::       For a vector argument, return the maximum value.
* spmin::       For a vector argument, return the minimum value.
* spones::      Replace the non-zero entries of X with ones.
* spparms::     Sets or displays the parameters used by the sparse solvers
                  and factorization functions.
* spprod::      Product of elements along dimension DIM.
* sprand::      Generate a uniformly distributed random sparse matrix.
* sprandn::     Generate a normally distributed random sparse matrix.
* spreshape::   Return a sparse matrix with M rows and N columns whose
                  elements are taken from the sparse matrix A.
* spstats::     Return the stats for the non-zero elements of the sparse
                  matrix S. COUNT is the number of non-zeros in each column,
                  MEAN is the mean of the non-zeros in each column, and VAR
                  is the variance of the non-zeros in each column.
* spsum::       Sum of elements along dimension DIM.
* spsumsq::     Sum of squares of elements along dimension DIM.
* spy::         Plot the sparsity pattern of the sparse matrix X.
* symamd::      For a symmetric positive definite matrix S, returns the
                  permutation vector p such that `S (P, P)' tends to have a
                  sparser Cholesky factor than S.
* symbfact::    Performs a symbolic factorization analysis on the sparse
                  matrix S.
* symrcm::      Returns the Reverse Cuthill-McKee reordering of the sparse
                  matrix S.
@end menu

@node colamd, colperm, , Function Reference
@subsubsection colamd

@DOCSTRING(colamd)

@node colperm, dmperm, colamd, Function Reference
@subsubsection colperm

@DOCSTRING(colperm)

@node dmperm, etree, colperm, Function Reference
@subsubsection dmperm

@DOCSTRING(dmperm)

@node etree, full, dmperm, Function Reference
@subsubsection etree

@DOCSTRING(etree)

@node full, issparse, etree, Function Reference
@subsubsection full

@DOCSTRING(full)

@node issparse, luinc, full, Function Reference
@subsubsection issparse

@DOCSTRING(issparse)

@node luinc, matrix_type, issparse, Function Reference
@subsubsection luinc

@DOCSTRING(luinc)

@node matrix_type, nnz, luinc, Function Reference
@subsubsection matrix_type

@DOCSTRING(matrix_type)

@node nnz, nonzeros, matrix_type, Function Reference
@subsubsection nnz

@DOCSTRING(nnz)

@node nonzeros, nzmax, nnz, Function Reference
@subsubsection nonzeros

@DOCSTRING(nonzeros)

@node nzmax, spalloc, nonzeros, Function Reference
@subsubsection nzmax

@DOCSTRING(nzmax)

@node spalloc, sparse, nzmax, Function Reference
@subsubsection spalloc

@DOCSTRING(spalloc)

@node sparse, spatan2, spalloc, Function Reference
@subsubsection sparse

@DOCSTRING(sparse)

@node spatan2, spconvert, sparse, Function Reference
@subsubsection spatan2

@DOCSTRING(spatan2)

@node spconvert, spcumprod, spatan2, Function Reference
@subsubsection spconvert

@DOCSTRING(spconvert)

@node spcumprod, spcumsum, spconvert, Function Reference
@subsubsection spcumprod

@DOCSTRING(spcumprod)

@node spcumsum, spdet, spcumprod, Function Reference
@subsubsection spcumsum

@DOCSTRING(spcumsum)

@node spdet, spdiag, spcumsum, Function Reference
@subsubsection spdet

@DOCSTRING(spdet)

@node spdiag, spdiags, spdet, Function Reference
@subsubsection spdiag

@DOCSTRING(spdiag)

@node spdiags, speye, spdiag, Function Reference
@subsubsection spdiags

@DOCSTRING(spdiags)

@node speye, spfind, spdiags, Function Reference
@subsubsection speye

@DOCSTRING(speye)

@node spfind, spfun, speye, Function Reference
@subsubsection spfind

@DOCSTRING(spfind)

@node spfun, spinv, spfind, Function Reference
@subsubsection spfun

@DOCSTRING(spfun)

@node spinv, spkron, spfun, Function Reference
@subsubsection spinv

@DOCSTRING(spinv)

@node spkron, splu, spinv, Function Reference
@subsubsection spkron

@DOCSTRING(spkron)

@node splu, spmax, spkron, Function Reference
@subsubsection splu

@DOCSTRING(splu)

@node spmax, spmin, splu, Function Reference
@subsubsection spmax

@DOCSTRING(spmax)

@node spmin, spones, spmax, Function Reference
@subsubsection spmin

@DOCSTRING(spmin)

@node spones, spparms, spmin, Function Reference
@subsubsection spones

@DOCSTRING(spones)

@node spparms, spprod, spones, Function Reference
@subsubsection spparms

@DOCSTRING(spparms)

@node spprod, sprand, spparms, Function Reference
@subsubsection spprod

@DOCSTRING(spprod)

@node sprand, sprandn, spprod, Function Reference
@subsubsection sprand

@DOCSTRING(sprand)

@node sprandn, spreshape, sprand, Function Reference
@subsubsection sprandn

@DOCSTRING(sprandn)

@node spreshape, spstats, sprandn, Function Reference
@subsubsection spreshape

@DOCSTRING(spreshape)

@node spstats, spsum, spreshape, Function Reference
@subsubsection spstats

@DOCSTRING(spstats)

@node spsum, spsumsq, spstats, Function Reference
@subsubsection spsum

@DOCSTRING(spsum)

@node spsumsq, spy, spsum, Function Reference
@subsubsection spsumsq

@DOCSTRING(spsumsq)

@node spy, symamd, spsumsq, Function Reference
@subsubsection spy

@DOCSTRING(spy)

@node symamd, symbfact, spy, Function Reference
@subsubsection symamd

@DOCSTRING(symamd)

@node symbfact, symrcm, symamd, Function Reference
@subsubsection symbfact

@DOCSTRING(symbfact)

@node symrcm, , symbfact, Function Reference
@subsubsection symrcm

@DOCSTRING(symrcm)

@c Local Variables: ***
@c Mode: texinfo ***
@c End: ***