@c Copyright (C) 2004, 2005, 2006, 2007 David Bateman
@c
@c This file is part of Octave.
@c
@c Octave is free software; you can redistribute it and/or modify it
@c under the terms of the GNU General Public License as published by the
@c Free Software Foundation; either version 3 of the License, or (at
@c your option) any later version.
@c
@c Octave is distributed in the hope that it will be useful, but WITHOUT
@c ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
@c FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
@c for more details.
@c
@c You should have received a copy of the GNU General Public License
@c along with Octave; see the file COPYING.  If not, see
@c <http://www.gnu.org/licenses/>.

@ifhtml
@set htmltex
@end ifhtml
@iftex
@set htmltex
@end iftex

@node Sparse Matrices
@chapter Sparse Matrices

@menu
* Basics:: The Creation and Manipulation of Sparse Matrices
* Sparse Linear Algebra:: Linear Algebra on Sparse Matrices
* Iterative Techniques:: Iterative Techniques applied to Sparse Matrices
* Real Life Example:: Real Life Example of the use of Sparse Matrices
@end menu

@node Basics, Sparse Linear Algebra, Sparse Matrices, Sparse Matrices
@section The Creation and Manipulation of Sparse Matrices

|
The size of mathematical problems that can be treated at any particular
time is generally limited by the available computing resources.  Both
the speed of the computer and its available memory place limitations on
the problem size.

There are many classes of mathematical problems which give rise to
matrices where a large number of the elements are zero.  In this case
it makes sense to have a special matrix type to handle this class of
problems, where only the non-zero elements of the matrix are
stored.  Not only does this reduce the amount of memory required to
store the matrix, but it also means that operations on this type of
matrix can take advantage of the a-priori knowledge of the positions
of the non-zero elements to accelerate their calculations.

A matrix type that stores only the non-zero elements is generally called
sparse.  It is the purpose of this document to discuss the basics of the
storage and creation of sparse matrices and the fundamental operations
on them.
|

@menu
* Storage:: Storage of Sparse Matrices
* Creation:: Creating Sparse Matrices
* Information:: Finding out Information about Sparse Matrices
* Operators and Functions:: Basic Operators and Functions on Sparse Matrices
@end menu

@node Storage, Creation, Basics, Basics
@subsection Storage of Sparse Matrices

It is not, strictly speaking, necessary for the user to understand how
sparse matrices are stored.  However, such an understanding will help
in appreciating the size of sparse matrices.  Understanding the
storage technique is also necessary for those users wishing to
create their own oct-files.
|

There are many different means of storing sparse matrix data.  What all
of the methods have in common is that they attempt to reduce the complexity
and storage given a-priori knowledge of the particular class of problems
that will be solved.  A good summary of the available techniques for storing
sparse matrices is given by Saad @footnote{Youcef Saad "SPARSKIT: A basic toolkit
for sparse matrix computation", 1994,
@url{http://www-users.cs.umn.edu/~saad/software/SPARSKIT/paper.ps}}.
With full matrices, the position of an element within the matrix is
implied by its position in the computer's memory.  However, this is not
the case for sparse matrices, and so the positions of the non-zero
elements of the matrix must equally be stored.

An obvious way to do this is by storing the elements of the matrix as
triplets, with two elements being their position in the array
(row and column) and the third being the data itself.  This is conceptually
easy to grasp, but requires more storage than is strictly needed.

The storage technique used within Octave is the compressed column
format.  In this format the row index of each element and the
data are stored as before.  However, if we assume that all elements
in the same column are stored adjacently in the computer's memory, then
we only need to store information on the number of non-zero elements
in each column, rather than their positions.  Thus, assuming that the
matrix has more non-zero elements than there are columns in the
matrix, we win in terms of the amount of memory used.

In fact, the column index contains one more element than the number of
columns, with the first element always being zero.  The advantage of
this is a simplification in the code, in that there is no special case
for the first or last columns.  A short example demonstrating this in
C is

|
@example
for (j = 0; j < nc; j++)
  for (i = cidx[j]; i < cidx[j+1]; i++)
    printf ("non-zero element (%i,%i) is %g\n",
            ridx[i], j, data[i]);
@end example
|

A clearer understanding can be had by considering how the above
applies to a simple matrix.  Consider the matrix

@example
@group
    1   2   0   0
    0   0   0   3
    0   0   0   4
@end group
@end example

The non-zero elements of this matrix are

@example
@group
   (1, 1) @result{} 1
   (1, 2) @result{} 2
   (2, 4) @result{} 3
   (3, 4) @result{} 4
@end group
@end example

This will be stored as three vectors @var{cidx}, @var{ridx} and @var{data},
representing the column indexing, row indexing and data respectively.  The
contents of these three vectors for the above matrix will be

@example
@group
@var{cidx} = [0, 1, 2, 2, 4]
@var{ridx} = [0, 0, 1, 2]
@var{data} = [1, 2, 3, 4]
@end group
@end example

Note that this is the representation of these elements with the first row
and column assumed to start at zero, while in Octave itself the row and
column indexing starts at one.  Thus the number of elements in the
@var{i}-th column is given by @code{@var{cidx} (@var{i} + 1) -
@var{cidx} (@var{i})}.
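
For instance, these per-column counts can be obtained directly by
differencing the @var{cidx} vector above; a minimal sketch in Octave:

@example
@group
cidx = [0, 1, 2, 2, 4];
diff (cidx)
@result{} 1   1   0   2
@end group
@end example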
|

Although Octave uses a compressed column format, it should be noted
that compressed row formats are equally possible.  However, in the
context of operations between mixed sparse and dense matrices,
it makes sense that the elements of the sparse matrices are in the
same order as the dense matrices.  Octave stores dense matrices in
column major ordering, and so sparse matrices are equally stored in
this manner.

A further constraint on the sparse matrix storage used by Octave is that
all elements within each column are stored in increasing order of their
row index, which makes certain operations faster.  However, it imposes
the need to sort the elements on the creation of sparse matrices.  Allowing
disordered elements would potentially be an advantage, in that it would make
operations such as concatenating two sparse matrices together easier and
faster; however, it would add complexity and speed problems elsewhere.
|

@node Creation, Information, Storage, Basics
@subsection Creating Sparse Matrices

There are several means to create a sparse matrix.

@table @asis
@item Returned from a function
There are many functions that directly return sparse matrices.  These include
@dfn{speye}, @dfn{sprand}, @dfn{spdiag}, etc.
@item Constructed from matrices or vectors
The function @dfn{sparse} allows a sparse matrix to be constructed from
three vectors representing the row, column and data.  Alternatively, the
function @dfn{spconvert} uses a three column matrix format to allow easy
importation of data from elsewhere.
@item Created and then filled
The function @dfn{sparse} or @dfn{spalloc} can be used to create an empty
matrix that is then filled by the user.
@item From a user binary program
The user can directly create the sparse matrix within an oct-file.
@end table

There are several basic functions to return specific sparse
matrices.  For example the sparse identity matrix is a matrix that is
often needed.  It therefore has its own function to create it as
@code{speye (@var{n})} or @code{speye (@var{r}, @var{c})}, which
creates an @var{n}-by-@var{n} or @var{r}-by-@var{c} sparse identity
matrix.

Another typical sparse matrix that is often needed is one with a random
distribution of elements.  The functions @dfn{sprand} and @dfn{sprandn}
perform this for uniform and normal random distributions of elements.
They have exactly the same calling convention, where
@code{sprand (@var{r}, @var{c}, @var{d})} creates an @var{r}-by-@var{c}
sparse matrix with a density of filled elements of @var{d}.
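
For example, a minimal sketch of typical usage (the dimensions and
density here are arbitrary):

@example
@group
s = sprand (100, 100, 0.05);
nnz (s)      # roughly 0.05 * 100 * 100 = 500
@end group
@end example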
|

Other functions of interest that directly create sparse matrices are
@dfn{spdiag} or its generalization @dfn{spdiags}, which can take the
definition of the diagonals of the matrix and create the sparse matrix
that corresponds to this.  For example

@example
s = spdiag (sparse (randn (1, n)), -1);
@end example

creates an (@var{n}+1)-by-(@var{n}+1) sparse matrix with a single
diagonal defined.
|

@DOCSTRING(spatan2)

@DOCSTRING(spcumprod)

@DOCSTRING(spcumsum)

@DOCSTRING(spdiag)

@DOCSTRING(spdiags)

@DOCSTRING(speye)

@DOCSTRING(spfun)

@DOCSTRING(spmax)

@DOCSTRING(spmin)

@DOCSTRING(spones)

@DOCSTRING(spprod)

@DOCSTRING(sprand)

@DOCSTRING(sprandn)

@DOCSTRING(sprandsym)

@DOCSTRING(spsum)

@DOCSTRING(spsumsq)

The recommended way for the user to create a sparse matrix is to create
two vectors containing the row and column indices of the data and a third
vector of the same size containing the data to be stored.  For example

@example
ri = ci = d = [];
for j = 1:c
  ri = [ri; randperm(r)(1:n)'];
  ci = [ci; j*ones(n,1)];
  d = [d; rand(n,1)];
endfor
s = sparse (ri, ci, d, r, c);
@end example

creates an @var{r}-by-@var{c} sparse matrix with a random distribution
of @var{n} (<@var{r}) elements per column.  The elements of the vectors
do not need to be sorted in any particular order as Octave will sort
them prior to storing the data.  However, pre-sorting the data will
make the creation of the sparse matrix faster.

The function @dfn{spconvert} takes a three or four column real matrix.
The first two columns represent the row and column indices respectively,
and the third and fourth columns the real and imaginary parts of the
sparse matrix.  The matrix can contain zero elements and the elements can
be sorted in any order.  Adding zero elements is a convenient way to define
the size of the sparse matrix.  For example

@example
s = spconvert ([1 2 3 4; 1 3 4 4; 1 2 3 0]')
@result{} Compressed Column Sparse (rows=4, cols=4, nnz=3)
      (1 , 1) -> 1
      (2 , 3) -> 2
      (3 , 4) -> 3
@end example

An example of creating and filling a matrix might be

@example
k = 5;
nz = r * k;
s = spalloc (r, c, nz);
for j = 1:c
  idx = randperm (r);
  s (:, j) = [zeros(r - k, 1); ...
              rand(k, 1)] (idx);
endfor
@end example
|

It should be noted that, due to the way the Octave assignment functions
are written, the assignment will reallocate the memory used by the
sparse matrix at each iteration of the above loop.  Therefore the
@dfn{spalloc} function ignores the @var{nz} argument and does not
preassign memory for the matrix.  It is therefore vitally important
that code using the above structure be vectorized as much as possible
to minimize the number of assignments and reduce the number of memory
allocations.
|

@DOCSTRING(full)

@DOCSTRING(spalloc)

@DOCSTRING(sparse)

@DOCSTRING(spconvert)

@DOCSTRING(spfind)

The above problem can be avoided in oct-files.  However, the construction
of a sparse matrix from an oct-file is more complex than can be
discussed in this brief introduction, and you are referred to the chapter
@ref{Dynamically Linked Functions} for a full description of the
techniques involved.

@node Information, Operators and Functions, Creation, Basics
@subsection Finding out Information about Sparse Matrices

There are a number of functions that allow information concerning
sparse matrices to be obtained.  The most basic of these is
@dfn{issparse}, which identifies whether a particular Octave object is
in fact a sparse matrix.

Another very basic function is @dfn{nnz}, which returns the number of
non-zero entries in a sparse matrix, while the function
@dfn{nzmax} returns the amount of storage allocated to the sparse
matrix.  Note that Octave tends to crop unused memory at the first
opportunity for sparse objects.  There are some cases of user-created
sparse objects where the value returned by @dfn{nzmax} will not be
the same as @dfn{nnz}, but in general they will give the same
result.  The function @dfn{spstats} returns some basic statistics on
the columns of a sparse matrix including the number of elements, the
mean and the variance of each column.
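
For example, a brief sketch of these functions in use:

@example
@group
s = speye (4);
issparse (s)
@result{} 1
nnz (s)
@result{} 4
@end group
@end example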
|

@DOCSTRING(issparse)

@DOCSTRING(nnz)

@DOCSTRING(nonzeros)

@DOCSTRING(nzmax)

@DOCSTRING(spstats)

When solving linear equations involving sparse matrices Octave
determines the means to solve the equation based on the type of the
matrix, as discussed in @ref{Sparse Linear Algebra}.  Octave probes the
matrix type when the div (/) or ldiv (\) operator is first used with
the matrix and then caches the type.  However, the @dfn{matrix_type}
function can be used to determine the type of the sparse matrix prior
to use of the div or ldiv operators.  For example

@example
a = tril (sprandn(1024, 1024, 0.02), -1) ...
    + speye(1024);
matrix_type (a)
ans = Lower
@end example

shows that Octave correctly determines the matrix type for lower
triangular matrices.  @dfn{matrix_type} can also be used to force
the type of a matrix to be a particular type.  For example

@example
a = matrix_type (tril (sprandn (1024, ...
   1024, 0.02), -1) + speye(1024), 'Lower');
@end example

This allows the cost of determining the matrix type to be
avoided.  However, incorrectly defining the matrix type will result in
incorrect results from solutions of linear equations, and so it is
entirely the responsibility of the user to correctly identify the
matrix type.

|
There are several graphical means of finding out information about
sparse matrices.  The first is the @dfn{spy} command, which displays
the structure of the non-zero elements of the
matrix.  @xref{fig:spmatrix}, for an example of the use of
@dfn{spy}.  More advanced graphical information can be obtained with the
@dfn{treeplot}, @dfn{etreeplot} and @dfn{gplot} commands.

@float Figure,fig:spmatrix
@image{spmatrix,8cm}
@caption{Structure of simple sparse matrix.}
@end float

|
One use of sparse matrices is in graph theory, where the
interconnections between nodes are represented as an adjacency
matrix.  That is, if the i-th node in a graph is connected to the j-th
node, then the ij-th element (and in the case of undirected graphs the
ji-th element) of the sparse adjacency matrix is non-zero.  If each node
is then associated with a set of co-ordinates, then the @dfn{gplot}
command can be used to graphically display the interconnections
between nodes.
|

As a trivial example of the use of @dfn{gplot}, consider the example

@example
A = sparse([2,6,1,3,2,4,3,5,4,6,1,5],
           [1,1,2,2,3,3,4,4,5,5,6,6],1,6,6);
xy = [0,4,8,6,4,2;5,0,5,7,5,7]';
gplot(A,xy)
@end example

which creates an adjacency matrix @code{A} where node 1 is connected
to nodes 2 and 6, node 2 with nodes 1 and 3, etc.  The co-ordinates of
the nodes are given in the n-by-2 matrix @code{xy}.
@ifset htmltex
@xref{fig:gplot}.

@float Figure,fig:gplot
@image{gplot,8cm}
@caption{Simple use of the @dfn{gplot} command.}
@end float
@end ifset
|

The dependencies between the nodes of a Cholesky factorization can be
calculated in linear time, without explicitly needing to calculate the
Cholesky factorization itself, using the @code{etree} command.  This
command returns the elimination tree of the matrix, which can be
displayed graphically by the command @code{treeplot(etree(A))} if
@code{A} is symmetric, or @code{treeplot(etree(A+A'))} otherwise.
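
For example, a minimal sketch (the matrix here is an arbitrary random
symmetric one):

@example
A = sprandsym (50, 0.1);
treeplot (etree (A));
@end example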
|

@DOCSTRING(spy)

@DOCSTRING(etree)

@DOCSTRING(etreeplot)

@DOCSTRING(gplot)

@DOCSTRING(treeplot)

@node Operators and Functions, , Information, Basics
@subsection Basic Operators and Functions on Sparse Matrices

@menu
* Functions:: Sparse Functions
* ReturnType:: The Return Types of Operators and Functions
* MathConsiderations:: Mathematical Considerations
@end menu

@node Functions, ReturnType, Operators and Functions, Operators and Functions
@subsubsection Sparse Functions

An important consideration in the use of the sparse functions of
Octave is that many of the internal functions of Octave, such as
@dfn{diag}, cannot accept sparse matrices as an input.  The sparse
implementation in Octave therefore uses the @dfn{dispatch}
function to overload the normal Octave functions with equivalent
functions that work with sparse matrices.  However, at any time the
sparse matrix specific version of the function can be used by
explicitly calling its function name.
|

The table below lists all of the sparse functions of Octave.  Note that
the names of the specific sparse forms of the functions are typically
the same as the general versions with a @dfn{sp} prefix.  In the table
below, and in the rest of this article, the specific sparse versions of
the functions are used.

@c Table includes in comments the missing sparse functions

@table @asis
@item Generate sparse matrices:
@dfn{spalloc}, @dfn{spdiags}, @dfn{speye}, @dfn{sprand},
@dfn{sprandn}, @dfn{sprandsym}

@item Sparse matrix conversion:
@dfn{full}, @dfn{sparse}, @dfn{spconvert}, @dfn{spfind}

@item Manipulate sparse matrices:
@dfn{issparse}, @dfn{nnz}, @dfn{nonzeros}, @dfn{nzmax},
@dfn{spfun}, @dfn{spones}, @dfn{spy}

@item Graph Theory:
@dfn{etree}, @dfn{etreeplot}, @dfn{gplot},
@dfn{treeplot}
@c @dfn{treelayout}

@item Sparse matrix reordering:
@dfn{ccolamd}, @dfn{colamd}, @dfn{colperm}, @dfn{csymamd},
@dfn{dmperm}, @dfn{symamd}, @dfn{randperm}, @dfn{symrcm}

@item Linear algebra:
@dfn{matrix_type}, @dfn{spchol}, @dfn{spcholinv},
@dfn{spchol2inv}, @dfn{spdet}, @dfn{spinv}, @dfn{spkron},
@dfn{splchol}, @dfn{splu}, @dfn{spqr}, @dfn{normest},
@dfn{sprank}
@c @dfn{condest}, @dfn{spaugment}
@c @dfn{eigs}, @dfn{svds} but these are in octave-forge for now

@item Iterative techniques:
@dfn{luinc}, @dfn{pcg}, @dfn{pcr}
@c @dfn{bicg}, @dfn{bicgstab}, @dfn{cholinc}, @dfn{cgs}, @dfn{gmres},
@c @dfn{lsqr}, @dfn{minres}, @dfn{qmr}, @dfn{symmlq}

@item Miscellaneous:
@dfn{spparms}, @dfn{symbfact}, @dfn{spstats},
@dfn{spprod}, @dfn{spcumsum}, @dfn{spsum},
@dfn{spsumsq}, @dfn{spmin}, @dfn{spmax}, @dfn{spatan2},
@dfn{spdiag}
@end table
|

In addition, all of the standard Octave mapper functions (i.e., basic
math functions that take a single argument) such as @dfn{abs} can
accept sparse matrices.  The reader is referred to the documentation
supplied with these functions within Octave itself for further
details.
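
For example, a mapper function such as @dfn{abs}, which maps zero to
zero, can be expected to keep its result sparse; a small sketch:

@example
@group
s = abs (sparse ([-1, 0; 0, -2]));
issparse (s)
@result{} 1
nnz (s)
@result{} 2
@end group
@end example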

@node ReturnType, MathConsiderations, Functions, Operators and Functions
@subsubsection The Return Types of Operators and Functions

The two basic reasons to use sparse matrices are to reduce the memory
usage and to avoid calculations on zero elements.  The two are
closely related in that the computation time for a sparse matrix operator
or function is roughly linear with the number of non-zero elements.

Therefore, there is a certain density of non-zero elements of a matrix
where it no longer makes sense to store it as a sparse matrix, but rather
as a full matrix.  For this reason operators and functions that have a
high probability of returning a full matrix will always return one.  For
example, adding a scalar constant to a sparse matrix will almost always
make it a full matrix, and so the example

@example
speye(3) + 0
@result{} 1 0 0
  0 1 0
  0 0 1
@end example

returns a full matrix as can be seen.  Additionally, all sparse functions
test the amount of memory occupied by the sparse matrix to see if the
amount of storage used is larger than the amount used by the full
equivalent.  Therefore @code{speye (2) * 1} will return a full matrix, as
the memory used is smaller for the full version than the sparse version.

As all of the mixed operators and functions between full and sparse
matrices exist, in general this does not cause any problems.  However,
one area where it does cause a problem is where a sparse matrix is
promoted to a full matrix, and subsequent operations would resparsify
the matrix.  Such cases are rare, but can be artificially created; for
example @code{(fliplr(speye(3)) + speye(3)) - speye(3)} gives a full
matrix when it should give a sparse one.  In general, where such cases
occur, they impose only a small memory penalty.
|

There is however one known case where this behavior of Octave's
sparse matrices will cause a problem.  That is in the handling of the
@dfn{diag} function.  Whether @dfn{diag} returns a sparse or full matrix
depends on the type of its input arguments.  So

@example
a = diag (sparse([1,2,3]), -1);
@end example

should return a sparse matrix.  To ensure this actually happens, the
@dfn{sparse} function, and other functions based on it like @dfn{speye},
always return a sparse matrix, even if the memory used will be larger
than its full representation.
|

@node MathConsiderations, , ReturnType, Operators and Functions
@subsubsection Mathematical Considerations

The attempt has been made to make sparse matrices behave in exactly the
same manner as their full counterparts.  However, there are certain
differences, in particular with other products' sparse implementations.

Firstly, the "./" and ".^" operators must be used with care.  Consider what
the examples

@example
s = speye (4);
a1 = s .^ 2;
a2 = s .^ s;
a3 = s .^ -2;
a4 = s ./ 2;
a5 = 2 ./ s;
a6 = s ./ s;
@end example

will give.  The first example of @var{s} raised to the power of 2 causes
no problems.  However @var{s} raised element-wise to itself involves a
large number of terms @code{0 .^ 0}, which is 1.  Therefore @code{@var{s} .^
@var{s}} is a full matrix.

Likewise @code{@var{s} .^ -2} involves terms like @code{0 .^ -2}, which
is infinity, and so @code{@var{s} .^ -2} is equally a full matrix.

For the "./" operator, @code{@var{s} ./ 2} has no problems, but
@code{2 ./ @var{s}} involves a large number of infinity terms as well
and is equally a full matrix.  The case of @code{@var{s} ./ @var{s}}
involves terms like @code{0 ./ 0}, which is a @code{NaN}, and so this
is equally a full matrix with the zero elements of @var{s} filled with
@code{NaN} values.
|

The above behavior is consistent with full matrices, but is not
consistent with sparse implementations in other products.

A particular problem of sparse matrices comes about due to the fact that,
as the zeros are not stored, the sign-bit of these zeros is equally not
stored.  In certain cases the sign-bit of zero is important.  For example

@example
a = 0 ./ [-1, 1; 1, -1];
b = 1 ./ a
@result{} -Inf Inf
   Inf -Inf
c = 1 ./ sparse (a)
@result{} Inf Inf
   Inf Inf
@end example

To correct this behavior would mean that zero elements with a negative
sign-bit would need to be stored in the matrix to ensure that their
sign-bit was respected.  This is not done at this time, for reasons of
efficiency, and so the user is warned that calculations where the sign-bit
of zero is important must not be done using sparse matrices.
|

In general any function or operator used on a sparse matrix will
result in a sparse matrix with the same or a larger number of non-zero
elements than the original matrix.  This is particularly true for the
important case of sparse matrix factorizations.  The usual way to
address this is to reorder the matrix, such that its factorization is
sparser than the factorization of the original matrix.  That is, the
factorization of @code{L * U = P * S * Q} has sparser terms @code{L}
and @code{U} than the equivalent factorization @code{L * U = S}.

Several functions are available to reorder depending on the type of the
matrix to be factorized.  If the matrix is symmetric positive-definite,
then @dfn{symamd} or @dfn{csymamd} should be used.  Otherwise
@dfn{colamd} or @dfn{ccolamd} should be used.  For completeness
the reordering functions @dfn{colperm} and @dfn{randperm} are
also available.

@xref{fig:simplematrix}, for an example of the structure of a simple
positive definite matrix.

@float Figure,fig:simplematrix
@image{spmatrix,8cm}
@caption{Structure of simple sparse matrix.}
@end float

The standard Cholesky factorization of this matrix can be
obtained by the same command that would be used for a full
matrix.  This can be visualized with the command
@code{r = chol(A); spy(r);}.
@ifset HAVE_CHOLMOD
@ifset HAVE_COLAMD
@xref{fig:simplechol}.
@end ifset
@end ifset
The original matrix had
@ifinfo
@ifnothtml
43
@end ifnothtml
@end ifinfo
@ifset htmltex
598
@end ifset
non-zero terms, while this Cholesky factorization has
@ifinfo
@ifnothtml
71,
@end ifnothtml
@end ifinfo
@ifset htmltex
10200,
@end ifset
with only half of the symmetric matrix being stored.  This
is a significant level of fill-in, and although not an issue
for such a small test case, can represent a large overhead
in working with other sparse matrices.

The appropriate sparsity preserving permutation of the original
matrix is given by @dfn{symamd}, and the factorization using this
reordering can be visualized using the command @code{q = symamd(A);
r = chol(A(q,q)); spy(r)}.  This gives
@ifinfo
@ifnothtml
29
@end ifnothtml
@end ifinfo
@ifset htmltex
399
@end ifset
non-zero terms, which is a significant improvement.

The Cholesky factorization itself can be used to determine the
appropriate sparsity preserving reordering of the matrix during the
factorization.  In that case it might be obtained with three return
arguments, as @code{[r, p, q] = chol(A); spy(r)}.

@ifset HAVE_CHOLMOD
@ifset HAVE_COLAMD
@float Figure,fig:simplechol
@image{spchol,8cm}
@caption{Structure of the un-permuted Cholesky factorization of the above matrix.}
@end float

@float Figure,fig:simplecholperm
@image{spcholperm,8cm}
@caption{Structure of the permuted Cholesky factorization of the above matrix.}
@end float
@end ifset
@end ifset

In the case of an asymmetric matrix, the appropriate sparsity
preserving permutation is @dfn{colamd}, and the factorization using
this reordering can be visualized using the command @code{q =
colamd(A); [l, u, p] = lu(A(:,q)); spy(l+u)}.

Finally, Octave implicitly reorders the matrix when using the div (/)
and ldiv (\) operators, and so the user does not need to explicitly
reorder the matrix to maximize performance.
|

@DOCSTRING(ccolamd)

@DOCSTRING(colamd)

@DOCSTRING(colperm)

@DOCSTRING(csymamd)

@DOCSTRING(dmperm)

@DOCSTRING(symamd)

@DOCSTRING(symrcm)

@node Sparse Linear Algebra, Iterative Techniques, Basics, Sparse Matrices
@section Linear Algebra on Sparse Matrices

Octave includes a polymorphic solver for sparse matrices, where
the exact solver used to factorize the matrix depends on the properties
of the sparse matrix itself.  Generally, the cost of determining the matrix
type is small relative to the cost of factorizing the matrix itself, but in
any case the matrix type is cached once it is calculated, so that it is not
re-determined each time it is used in a linear equation.

The selection tree for how the linear equation is solved is

|
@enumerate 1
@item If the matrix is diagonal, solve directly and goto 8

@item If the matrix is a permuted diagonal, solve directly taking into
account the permutations.  Goto 8

@item If the matrix is square, banded and if the band density is less
than that given by @code{spparms ("bandden")} continue, else goto 4.

@enumerate a
@item If the matrix is tridiagonal and the right-hand side is not sparse
continue, else goto 3b.

@enumerate
@item If the matrix is hermitian, with a positive real diagonal, attempt
Cholesky factorization using @sc{Lapack} xPTSV.

@item If the above failed or the matrix is not hermitian with a positive
real diagonal use Gaussian elimination with pivoting using
@sc{Lapack} xGTSV, and goto 8.
@end enumerate

@item If the matrix is hermitian with a positive real diagonal, attempt
Cholesky factorization using @sc{Lapack} xPBTRF.

@item If the above failed or the matrix is not hermitian with a positive
real diagonal use Gaussian elimination with pivoting using
@sc{Lapack} xGBTRF, and goto 8.
@end enumerate

@item If the matrix is upper or lower triangular perform a sparse forward
or backward substitution, and goto 8

@item If the matrix is an upper triangular matrix with column permutations
or a lower triangular matrix with row permutations, perform a sparse forward
or backward substitution, and goto 8

@item If the matrix is square and hermitian with a real positive diagonal,
attempt sparse Cholesky factorization using CHOLMOD.

@item If the sparse Cholesky factorization failed or the matrix is not
hermitian with a real positive diagonal, and the matrix is square, factorize
using UMFPACK.

@item If the matrix is not square, or any of the previous solvers flags
a singular or near-singular matrix, find a minimum norm solution using
CXSPARSE@footnote{CHOLMOD, UMFPACK and CXSPARSE are written by Tim Davis
and are available at @url{http://www.cise.ufl.edu/research/sparse/}}.
@end enumerate
|

The band density is defined as the number of non-zero values in the matrix
divided by the total number of values within the band.  The banded matrix
solvers can be entirely disabled by using @dfn{spparms} to set
@code{bandden} to 1 (i.e. @code{spparms ("bandden", 1)}).
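
For example, a small sketch of adjusting this parameter (this assumes
the query form @code{spparms ("bandden")} returns the current value):

@example
spparms ("bandden")      # show the current band density threshold
spparms ("bandden", 0.5) # use the banded solvers only when the band
                         # density is at least 0.5
@end example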
|

The QR solver factorizes the problem with a Dulmage-Mendelsohn
decomposition, to separate the problem into blocks that can be treated
as under-determined, multiple well determined blocks, and a final
over-determined block.  For matrices with blocks of strongly connected
nodes this is a big win, as LU decomposition can be used for many
blocks.  It also significantly improves the chance of finding a
solution to over-determined problems rather than just returning a
vector of @dfn{NaN}'s.

All of the solvers above can calculate an estimate of the condition
number.  This can be used to detect numerical stability problems in the
solution and force a minimum norm solution to be used.  However, for
narrow banded, triangular or diagonal matrices, the cost of
calculating the condition number is significant, and can in fact
exceed the cost of factoring the matrix.  Therefore the condition
number is not calculated in these cases, and Octave relies on simpler
techniques to detect singular matrices, or on the underlying @sc{Lapack}
code in the case of banded matrices.

The user can force the type of the matrix with the @code{matrix_type}
function.  This overcomes the cost of discovering the type of the matrix.
However, it should be noted that incorrectly identifying the type of the
matrix will lead to unpredictable results, and so @code{matrix_type}
should be used with care.

@DOCSTRING(normest)

@DOCSTRING(spchol)

@DOCSTRING(spcholinv)

@DOCSTRING(spchol2inv)

@DOCSTRING(spdet)

@DOCSTRING(spinv)

@DOCSTRING(spkron)

@DOCSTRING(splchol)

@DOCSTRING(splu)

@DOCSTRING(spparms)

@DOCSTRING(spqr)

@DOCSTRING(sprank)

@DOCSTRING(symbfact)

@node Iterative Techniques, Real Life Example, Sparse Linear Algebra, Sparse Matrices
@section Iterative Techniques applied to Sparse Matrices

The left division @code{\} and right division @code{/} operators,
discussed in the previous section, use direct solvers to resolve a
linear equation of the form @code{@var{x} = @var{A} \ @var{b}} or
@code{@var{x} = @var{b} / @var{A}}.  Octave also includes a number of
functions to solve sparse linear equations using iterative techniques.

@DOCSTRING(pcg)

@DOCSTRING(pcr)

The speed with which an iterative solver converges to a solution can be
accelerated with the use of a pre-conditioning matrix @var{M}.  In this
case the linear equation @code{@var{M}^-1 * @var{A} * @var{x} =
@var{M}^-1 * @var{b}} is solved instead.  Typical pre-conditioning
matrices are partial factorizations of the original matrix.
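
As a sketch of how a pre-conditioner might be used with @dfn{pcg} (the
matrix here is arbitrary and assumed positive definite, and the simple
diagonal Jacobi pre-conditioner is chosen purely for illustration):

@example
@group
n = 400;
A = sprandsym (n, 0.01) + 10 * speye (n);
b = ones (n, 1);
M = spdiag (spdiag (A));   # Jacobi pre-conditioner
x = pcg (A, b, 1e-6, 500, M);
@end group
@end example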

@DOCSTRING(luinc)

@node Real Life Example, , Iterative Techniques, Sparse Matrices
@section Real Life Example of the use of Sparse Matrices

A common application for sparse matrices is in the solution of Finite
Element Models.  Finite element models allow numerical solution of
partial differential equations that do not have closed form solutions,
typically because of the complex shape of the domain.
|

In order to motivate this application, we consider the boundary value
Laplace equation.  This system can model scalar potential fields, such
as heat or electrical potential.  Given a medium
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
with boundary
@iftex
@tex
$\partial\Omega$
@end tex
@end iftex
@ifinfo
dOmega
@end ifinfo
, at all points on
@iftex
@tex
$\partial\Omega$
@end tex
@end iftex
@ifinfo
dOmega
@end ifinfo
the boundary conditions are known, and we wish to calculate the potential in
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
.  Boundary conditions may specify the potential (Dirichlet
boundary condition), its normal derivative across the boundary
(Neumann boundary condition), or a weighted sum of the potential and
its derivative (Cauchy boundary condition).
|

In a thermal model, we want to calculate the temperature in
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
and know the boundary temperature (Dirichlet condition)
or heat flux (from which we can calculate the Neumann condition
by dividing by the thermal conductivity at the boundary).  Similarly,
in an electrical model, we want to calculate the voltage in
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
and know the boundary voltage (Dirichlet) or current
(Neumann condition after dividing by the electrical conductivity).
In an electrical model, it is common for much of the boundary
to be electrically isolated; this is a Neumann boundary condition
with the current equal to zero.
|

The simplest finite element models will divide
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
into simplexes (triangles in 2D, pyramids in 3D).
@ifset htmltex
We take as a 3D example a cylindrical liquid-filled tank with a small
non-conductive ball from the EIDORS project@footnote{EIDORS - Electrical
Impedance Tomography and Diffuse optical Tomography Reconstruction Software
@url{http://eidors3d.sourceforge.net}}.  This model is designed to reflect
an application of electrical impedance tomography, where current patterns
are applied to such a tank in order to image the internal conductivity
distribution.  In order to describe the FEM geometry, we have a matrix of
vertices @code{nodes} and simplices @code{elems}.
@end ifset

The following example creates a simple rectangular 2D electrically
conductive medium with 10 V and 20 V imposed on opposite sides
(Dirichlet boundary conditions).  All other edges are electrically
isolated.
|

@example
node_y = [1;1.2;1.5;1.8;2]*ones(1,11);
node_x = ones(5,1)*[1,1.05,1.1,1.2, ...
         1.3,1.5,1.7,1.8,1.9,1.95,2];
nodes = [node_x(:), node_y(:)];

[h,w] = size(node_x);
elems = [];
for idx = 1:w-1
  widx = (idx-1)*h;
  elems = [elems; ...
           widx+[(1:h-1);(2:h);h+(1:h-1)]'; ...
           widx+[(2:h);h+(2:h);h+(1:h-1)]' ];
endfor

E = size(elems,1);  # No. of simplices
N = size(nodes,1);  # No. of vertices
D = size(elems,2);  # dimensions+1
@end example

This creates an N-by-2 matrix @code{nodes} and an E-by-3 matrix
@code{elems}, whose values define the finite element triangles:

@example
nodes(1:7,:)'
  1.00 1.00 1.00 1.00 1.00 1.05 1.05 ...
  1.00 1.20 1.50 1.80 2.00 1.00 1.20 ...

elems(1:7,:)'
  1 2 3 4 2 3 4 ...
  2 3 4 5 7 8 9 ...
  6 7 8 9 6 7 8 ...
@end example
1011 @end example |
|
1012 |
|
1013 Using a first order FEM, we approximate the electrical conductivity |
|
1014 distribution in |
|
1015 @iftex |
|
1016 @tex |
|
1017 $\Omega$ |
|
1018 @end tex |
|
1019 @end iftex |
|
1020 @ifinfo |
|
1021 Omega |
|
1022 @end ifinfo |
|
1023 as constant on each simplex (represented by the vector @code{conductivity}). |
|
1024 Based on the finite element geometry, we first calculate a system (or |
|
1025 stiffness) matrix for each simplex (represented as 3-by-3 elements on the |
|
1026 diagonal of the element-wise system matrix @code{SE}. Based on @code{SE} |
|
1027 and a N-by-DE connectivity matrix @code{C}, representing the connections |
7001
|
1028 between simplices and vertices, the global connectivity matrix @code{S} is |
5648
|
1029 calculated. |
|
1030 |
|
1031 @example |
|
1032 # Element conductivity |
|
1033 conductivity= [1*ones(1,16), ... |
|
1034 2*ones(1,48), 1*ones(1,16)]; |
|
1035 |
|
1036 # Connectivity matrix |
|
1037 C = sparse ((1:D*E), reshape (elems', ... |
|
1038 D*E, 1), 1, D*E, N); |
|
1039 |
|
1040 # Calculate system matrix |
|
1041 Siidx = floor ([0:D*E-1]'/D) * D * ... |
|
1042 ones(1,D) + ones(D*E,1)*(1:D) ; |
|
1043 Sjidx = [1:D*E]'*ones(1,D); |
|
1044 Sdata = zeros(D*E,D); |
|
1045 dfact = factorial(D-1); |
|
1046 for j=1:E |
|
1047 a = inv([ones(D,1), ... |
|
1048 nodes(elems(j,:), :)]); |
|
1049 const = conductivity(j) * 2 / ... |
|
1050 dfact / abs(det(a)); |
|
1051 Sdata(D*(j-1)+(1:D),:) = const * ... |
|
1052 a(2:D,:)' * a(2:D,:); |
|
1053 endfor |
|
1054 # Element-wise system matrix |
|
1055 SE= sparse(Siidx,Sjidx,Sdata); |
|
1056 # Global system matrix |
|
1057 S= C'* SE *C; |
|
1058 @end example |
|

The system matrix acts like the conductivity
@iftex
@tex
$S$
@end tex
@end iftex
@ifinfo
@code{S}
@end ifinfo
in Ohm's law
@iftex
@tex
$SV = I$.
@end tex
@end iftex
@ifinfo
@code{S * V = I}.
@end ifinfo
Based on the Dirichlet and Neumann boundary conditions, we are able to
solve for the voltages at each vertex @code{V}.

@example
# Dirichlet boundary conditions
D_nodes = [1:5, 51:55];
D_value = [10*ones(1,5), 20*ones(1,5)];

V = zeros(N,1);
V(D_nodes) = D_value;
idx = 1:N;  # vertices without Dirichlet
            # boundary conditions
idx(D_nodes) = [];

# Neumann boundary conditions.  Note that
# N_value must be normalized by the
# boundary length and element conductivity
N_nodes = [];
N_value = [];

Q = zeros(N,1);
Q(N_nodes) = N_value;

V(idx) = S(idx,idx) \ ( Q(idx) - ...
         S(idx,D_nodes) * V(D_nodes));
@end example
|

Finally, in order to display the solution, we show each solved voltage
value on the z-axis for each simplex vertex.
@ifset htmltex
@ifset HAVE_CHOLMOD
@ifset HAVE_UMFPACK
@ifset HAVE_COLAMD
@xref{fig:femmodel}.
@end ifset
@end ifset
@end ifset
@end ifset

@example
elemx = elems(:,[1,2,3,1])';
xelems = reshape (nodes(elemx, 1), 4, E);
yelems = reshape (nodes(elemx, 2), 4, E);
velems = reshape (V(elemx), 4, E);
plot3 (xelems, yelems, velems, 'k');
print ('grid.eps');
@end example

@ifset htmltex
@ifset HAVE_CHOLMOD
@ifset HAVE_UMFPACK
@ifset HAVE_COLAMD
@float Figure,fig:femmodel
@image{grid,8cm}
@caption{Example finite element model showing the triangular elements.
The height of each vertex corresponds to the solution value.}
@end float
@end ifset
@end ifset
@end ifset
@end ifset
|

@c Local Variables: ***
@c Mode: texinfo ***
@c End: ***