changeset 9833:ec17785d6dcf octave-forge

fixed documentation typos
author mmarzolla
date Sat, 24 Mar 2012 09:46:42 +0000
parents f8c94ce69046
children 648d10dd8b02
files main/queueing/doc/markovchains.txi main/queueing/doc/queueing.html main/queueing/doc/queueing.pdf
diffstat 3 files changed, 464 insertions(+), 341 deletions(-)
--- a/main/queueing/doc/markovchains.txi	Fri Mar 23 01:59:38 2012 +0000
+++ b/main/queueing/doc/markovchains.txi	Sat Mar 24 09:46:42 2012 +0000
@@ -31,7 +31,7 @@
 @section Discrete-Time Markov Chains
 
 Let @math{X_0, X_1, @dots{}, X_n, @dots{} } be a sequence of random
-variables, each one defined over a discete state space @math{0, 1, 2,
+variables defined over a discrete state space @math{0, 1, 2,
 @dots{}}. The sequence @math{X_0, X_1, @dots{}, X_n, @dots{}}  is a
 @emph{stochastic process} with discrete time @math{0, 1, 2,
 @dots{}}. A @emph{Markov chain} is a stochastic process @math{@{X_n,
@@ -47,21 +47,21 @@
 @math{P(X_{n+1} = x_{n+1} | X_n = x_n, X_{n-1} = x_{n-1}, ..., X_0 = x_0) = P(X_{n+1} = x_{n+1} | X_n = x_n)}
 @end ifnottex
 
-@noindent which means that the probability that the system is in
+@noindent which basically means that the probability that the system is in
 a particular state at time @math{n+1} only depends on the state the
 system was at time @math{n}.
 
 The evolution of a Markov chain with finite state space @math{@{1, 2,
 @dots{}, N@}} can be fully described by a stochastic matrix @math{{\bf
-P}(n) = [ P_{i,j}(n) ]} such that @math{P_{i, j}(n) = P( X_{n+1} = j\ |\
-X_n = i )}.  If the Markov chain is homogeneous (that is, the
+P}(n) = [ P_{i,j}(n) ]} such that @math{P_{i, j}(n) = P( X_{n+1} = j\
+|\ X_n = i )}.  If the Markov chain is homogeneous (that is, the
 transition probability matrix @math{{\bf P}(n)} is time-independent),
-we can simply write @math{{\bf P} = [P_{i, j}]}, where @math{P_{i, j} =
-P( X_{n+1} = j\ |\ X_n = i )} for all @math{n=0, 1, @dots{}}.
+we can write @math{{\bf P} = [P_{i, j}]}, where @math{P_{i, j} = P(
+X_{n+1} = j\ |\ X_n = i )} for all @math{n=0, 1, @dots{}}.
 
 The transition probability matrix @math{\bf P} must satisfy the
 following two properties: (1) @math{P_{i, j} @geq{} 0} for all
-@math{i, j}, and (2) @math{\sum_{j=1}^N P_{i,j} = 1}.
+@math{i, j}, and (2) @math{\sum_{j=1}^N P_{i,j} = 1} for all @math{i}.
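+
+For instance, the following @math{2 \times 2} matrix satisfies both
+properties (the values of @math{a} and @math{b} below are purely
+illustrative), as a direct check of the row sums shows:
+
+@example
+@group
+a = 0.2; b = 0.15;
+P = [ 1-a a; b 1-b ];  # all entries are nonnegative
+sum(P,2)               # each row sums to one
+@end group
+@end example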
 
 @c
 @DOCSTRING(dtmc_check_P)
@@ -82,15 +82,14 @@
 @subsection State occupancy probabilities
 
 We denote with @math{{\bf \pi}(n) = \left(\pi_1(n), \pi_2(n), @dots{},
-\pi_N(n) \right)} the @emph{state occupancy probability vector} at step
-@math{n}. @math{\pi_i(n)} denotes the probability that the system is
-in state @math{i} after @math{n} transitions.
+\pi_N(n) \right)} the @emph{state occupancy probability vector} at
+step @math{n}. @math{\pi_i(n)} denotes the probability that the system
+is in state @math{i} after @math{n} transitions.
 
 Given the transition probability matrix @math{\bf P} and the initial
 state occupancy probability vector @math{{\bf \pi}(0) =
-\left(\pi_1(0), \pi_2(0), @dots{}, \pi_N(0)\right)}, the state
-occupancy probabilities @math{{\bf \pi}(n)} after @math{n} transitions
-can easily be computed as:
+\left(\pi_1(0), \pi_2(0), @dots{}, \pi_N(0)\right)}, @math{{\bf
+\pi}(n)} can be computed as:
 
 @iftex
 @tex
@@ -107,9 +106,9 @@
 
 Under certain conditions, there exists a @emph{stationary state
 occupancy probability} @math{{\bf \pi} = \lim_{n \rightarrow +\infty}
-{\bf \pi}(n)}, which is independent from the initial occupancy
-@math{{\bf \pi}(0)}. The stationary vector @math{\bf \pi} can be
-computed as the solution of the following linear system:
+{\bf \pi}(n)}, which is independent from @math{{\bf \pi}(0)}. The
+stationary vector @math{\bf \pi} is the solution of the following
+linear system:
 
 @iftex
 @tex
@@ -162,10 +161,10 @@
 @end example
 
 A mouse is placed in one of the rooms and can wander around. At each
-step, the mouse moves to one of the neighboring rooms with equal
-probability: if it is in room 1, it van move to room 2 and 4 with
-probability 1/2. If the mouse is in room 8, it can move to either 7, 5
-or 9 with probability 1/3.
+step, the mouse moves from the current room to a neighboring one with
+equal probability: if it is in room 1, it can move to room 2 or room 4,
+each with probability 1/2. If the mouse is in room 8, it can move to
+room 7, 5 or 9, each with probability 1/3.
 
 The transition probability @math{\bf P} from room @math{i} to room
 @math{j} is the following:
@@ -250,8 +249,8 @@
 @noindent where @math{{\bf \pi}(i) = {\bf \pi}(0){\bf P}^i} is the state 
 occupancy probability after @math{i} transitions.
 
-If @math{\bf P} is absorbing, we can rearrange the states to rewrite
-@math{\bf P} as:
+If @math{\bf P} has absorbing states, that is, states that once
+entered are never left, we can rearrange the states to rewrite
+@math{\bf P} as:
 
 @iftex
 @tex
@@ -270,13 +269,14 @@
 @end ifnottex
 
 @noindent where the first @math{t} states are transient
-and the last @math{r} states are absorbing. The matrix @math{{\bf N} =
-({\bf I} - {\bf Q})^{-1}} is called the @emph{fundamental matrix};
-@math{N(i,j)} represents the expected number of times that the process
-is in the @math{j}-th transient state if it is started in the
-@math{i}-th transient state. If we reshape @math{\bf N} to the size of
-@math{\bf P} (filling missing entries with zeros), we have that, for
-absorbing chains @math{{\bf L} = {\bf \pi}(0){\bf N}}.
+and the last @math{r} states are absorbing (@math{t+r = N}). The
+matrix @math{{\bf N} = ({\bf I} - {\bf Q})^{-1}} is called the
+@emph{fundamental matrix}; @math{N(i,j)} represents the expected
+number of times that the process is in the @math{j}-th transient state
+if it is started in the @math{i}-th transient state. If we reshape
+@math{\bf N} to the size of @math{\bf P} (filling missing entries with
+zeros), we have that, for absorbing chains, @math{{\bf L} = {\bf
+\pi}(0){\bf N}}.
 
 @DOCSTRING(dtmc_exps)
 
@@ -440,12 +440,11 @@
 @end ifnottex
 
 A continuous-time Markov chain is defined according to an
-@emph{infinitesimal generator matrix} @math{{\bf Q} = [Q_{i,j}]} such
-that for each @math{i \neq j}, @math{Q_{i, j}} is the transition rate
-from state @math{i} to state @math{j}. The elements @math{Q_{i, i}}
-qre defined as @math{Q_{i, i} = - \sum_{j \neq i} Q_{i, j}}, such that
-matrix @math{\bf Q} satisfies the property that, for all @math{i},
-@math{\sum_{j=1}^N Q_{i, j} = 0}.
+@emph{infinitesimal generator matrix} @math{{\bf Q} = [Q_{i,j}]},
+where for each @math{i \neq j}, @math{Q_{i, j}} is the transition rate
+from state @math{i} to state @math{j}. The matrix @math{\bf Q} must
+satisfy the property that, for all @math{i}, @math{\sum_{j=1}^N Q_{i,
+j} = 0}.
 
 @DOCSTRING(ctmc_check_Q)
 
@@ -463,12 +462,13 @@
 
 Similarly to the discrete case, we denote with @math{{\bf \pi}(t) =
 (\pi_1(t), \pi_2(t), @dots{}, \pi_N(t) )} the @emph{state occupancy
-probability vector} at time @math{t}. @math{\pi_i(t)} denotes the
-probability that the system is in state @math{i} at time @math{t @geq{} 0}.
+probability vector} at time @math{t}. @math{\pi_i(t)} is the
+probability that the system is in state @math{i} at time @math{t
+@geq{} 0}.
 
 Given the infinitesimal generator matrix @math{\bf Q} and the initial
-state occupancy probability vector @math{{\bf \pi}(0) = (\pi_1(0),
-\pi_2(0), @dots{}, \pi_N(0))}, the state occupancy probability vector
+state occupancy probabilities @math{{\bf \pi}(0) = (\pi_1(0),
+\pi_2(0), @dots{}, \pi_N(0))}, the state occupancy probabilities
 @math{{\bf \pi}(t)} at time @math{t} can be computed as:
 
 @iftex
@@ -488,9 +488,8 @@
 of @math{{\bf Q} t}. Under certain conditions, there exists a
 @emph{stationary state occupancy probability} @math{{\bf \pi} =
 \lim_{t \rightarrow +\infty} {\bf \pi}(t)}, which is independent from
-the initial state occupancy @math{{\bf \pi}(0)}. The stationary state
-occupancy probability vector @math{\bf \pi} can be computed as the
-solution of the following linear system:
+@math{{\bf \pi}(0)}.  @math{\bf \pi} is the solution of the following
+linear system:
 
 @iftex
 @tex
--- a/main/queueing/doc/queueing.html	Fri Mar 23 01:59:38 2012 +0000
+++ b/main/queueing/doc/queueing.html	Sat Mar 24 09:46:42 2012 +0000
@@ -60,8 +60,8 @@
 <li><a href="#Birth_002ddeath-process-_0028DTMC_0029">4.1.2 Birth-death process</a>
 <li><a href="#Expected-number-of-visits-_0028DTMC_0029">4.1.3 Expected Number of Visits</a>
 <li><a href="#Time_002daveraged-expected-sojourn-times-_0028DTMC_0029">4.1.4 Time-averaged expected sojourn times</a>
-<li><a href="#First-passage-times-_0028DTMC_0029">4.1.5 First Passage Times</a>
-<li><a href="#Mean-time-to-absorption-_0028DTMC_0029">4.1.6 Mean Time to Absorption</a>
+<li><a href="#Mean-time-to-absorption-_0028DTMC_0029">4.1.5 Mean Time to Absorption</a>
+<li><a href="#First-passage-times-_0028DTMC_0029">4.1.6 First Passage Times</a>
 </li></ul>
 <li><a href="#Continuous_002dTime-Markov-Chains">4.2 Continuous-Time Markov Chains</a>
 <ul>
@@ -573,7 +573,7 @@
 routing of jobs within the network is described with a <em>routing
 probability matrix</em> P. Specifically, a request completing
 service at center i is enqueued at center j with
-probability P_ij.  Let us assume the following routing
+probability P_i, j.  Let us assume the following routing
 probability matrix:
 
 <pre class="example">         [ 0  0.3  0.7 ]
@@ -611,7 +611,7 @@
 <pre class="example">     V_j = sum_i V_i P_ij
 </pre>
    <p>We can compute V_k from the routing probability matrix
-P_ij using the <samp><span class="command">qnvisits</span></samp> function:
+P_i, j using the <samp><span class="command">qnvisits</span></samp> function:
 
 <pre class="example">     <kbd>P = [0 0.3 0.7; 1 0 0; 1 0 0];</kbd>
      <kbd>V = qnvisits(P)</kbd>
@@ -715,7 +715,7 @@
 
 <pre class="example">     V_j = sum_i V_i P_ij
 </pre>
-   <p>where P_0j is the probability of an external arrival to
+   <p>where P_0, j is the probability of an external arrival to
 center j. This can be computed as:
 
    <p>Assuming the same service times as in the previous example, the
@@ -794,29 +794,29 @@
 <h3 class="section">4.1 Discrete-Time Markov Chains</h3>
 
 <p>Let X_0, X_1, <small class="dots">...</small>, X_n, <small class="dots">...</small>  be a sequence of random
-variables, each one defined over a discete state space 0, 1, 2,
+variables defined over a discrete state space 0, 1, 2,
 <small class="dots">...</small>. The sequence X_0, X_1, <small class="dots">...</small>, X_n, <small class="dots">...</small>  is a
 <em>stochastic process</em> with discrete time 0, 1, 2,
 <small class="dots">...</small>. A <em>Markov chain</em> is a stochastic process {X_n,
-n=0, 1, 2, <small class="dots">...</small>} which satisfies the following Marrkov property:
+n=0, 1, 2, <small class="dots">...</small>} which satisfies the following Markov property:
 
    <p>P(X_n+1 = x_n+1 | X_n = x_n, X_n-1 = x_n-1, ..., X_0 = x_0) = P(X_n+1 = x_n+1 | X_n = x_n)
 
-<p class="noindent">which means that the probability that the system is in
+<p class="noindent">which basically means that the probability that the system is in
 a particular state at time n+1 only depends on the state the
 system was at time n.
 
    <p>The evolution of a Markov chain with finite state space {1, 2,
 <small class="dots">...</small>, N} can be fully described by a stochastic matrix \bf
-P(n) = P_i,j(n) such that P_i, j(n) = P( X_n+1 = j\ |\
-X_n = i ).  If the Markov chain is homogeneous (that is, the
+P(n) = [ P_i,j(n) ] such that P_i, j(n) = P( X_n+1 = j\
+|\ X_n = i ).  If the Markov chain is homogeneous (that is, the
 transition probability matrix \bf P(n) is time-independent),
-we can simply write \bf P = P_i, j, where P_i, j =
-P( X_n+1 = j\ |\ X_n = i ) for all n=0, 1, 2, <small class="dots">...</small>.
+we can write \bf P = [P_i, j], where P_i, j = P(
+X_n+1 = j\ |\ X_n = i ) for all n=0, 1, <small class="dots">...</small>.
 
    <p>The transition probability matrix \bf P must satisfy the
 following two properties: (1) P_i, j &ge; 0 for all
-i, j, and (2) \sum_j=1^N P_i,j = 1.
+i, j, and (2) \sum_j=1^N P_i,j = 1 for all i.
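+
+   <p>For instance, the following 2 \times 2 matrix satisfies both
+properties (the values of a and b below are purely illustrative),
+as a direct check of the row sums shows:
+
+<pre class="example"><pre class="verbatim">      a = 0.2; b = 0.15;
+      P = [ 1-a a; b 1-b ];  # all entries are nonnegative
+      sum(P,2)               # each row sums to one</pre>
+</pre>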
 
    <p><a name="doc_002ddtmc_005fcheck_005fP"></a>
 
@@ -836,8 +836,8 @@
 <li><a accesskey="2" href="#Birth_002ddeath-process-_0028DTMC_0029">Birth-death process (DTMC)</a>
 <li><a accesskey="3" href="#Expected-number-of-visits-_0028DTMC_0029">Expected number of visits (DTMC)</a>
 <li><a accesskey="4" href="#Time_002daveraged-expected-sojourn-times-_0028DTMC_0029">Time-averaged expected sojourn times (DTMC)</a>
-<li><a accesskey="5" href="#First-passage-times-_0028DTMC_0029">First passage times (DTMC)</a>
-<li><a accesskey="6" href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a>
+<li><a accesskey="5" href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a>
+<li><a accesskey="6" href="#First-passage-times-_0028DTMC_0029">First passage times (DTMC)</a>
 </ul>
 
 <div class="node">
@@ -851,25 +851,31 @@
 
 <h4 class="subsection">4.1.1 State occupancy probabilities</h4>
 
-<p>We denote with \bf \pi(n) = (\pi_1(n), \pi_2(n), <small class="dots">...</small>,
-\pi_N(n) ) the <em>state occupancy probability vector</em> at step
-n. \pi_i(n) denotes the probability that the system is
-in state i at step n.
+<p>We denote with \bf \pi(n) = \left(\pi_1(n), \pi_2(n), <small class="dots">...</small>,
+\pi_N(n) \right) the <em>state occupancy probability vector</em> at
+step n. \pi_i(n) denotes the probability that the system
+is in state i after n transitions.
 
    <p>Given the transition probability matrix \bf P and the initial
-state occupancy probability vector \bf \pi(0) = (\pi_1(0),
-\pi_2(0), <small class="dots">...</small>, \pi_N(0)) at step 0, the state occupancy
-probability vector \bf \pi(n) at step n can be
-computed as:
+state occupancy probability vector \bf \pi(0) =
+\left(\pi_1(0), \pi_2(0), <small class="dots">...</small>, \pi_N(0)\right), \bf
+\pi(n) can be computed as:
 
 <pre class="example">     \pi(n) = \pi(0) P^n
 </pre>
    <p>Under certain conditions, there exists a <em>stationary state
 occupancy probability</em> \bf \pi = \lim_n \rightarrow +\infty
-\bf \pi(n), which is independent from the initial state occupancy
-\bf \pi(0). The stationary state occupancy probability vector
-\bf \pi satisfies \bf \pi = \bf \pi \bf P
-and \sum_i=1^N \pi_i = 1
+\bf \pi(n), which is independent from \bf \pi(0). The
+stationary vector \bf \pi is the solution of the following
+linear system:
+
+<pre class="example">     /
+     | \pi P   = \pi
+     | \pi 1^T = 1
+     \
+</pre>
+   <p class="noindent">where \bf 1 is the row vector of ones, and ( \cdot )^T
+the transpose operator.
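+
+   <p>As a small sketch, for a two-state chain both the transient and the
+stationary occupancy probabilities can be computed with the
+<samp><span class="command">dtmc</span></samp> function documented below (the values of
+a and b are purely illustrative):
+
+<pre class="example"><pre class="verbatim">      a = 0.2; b = 0.15;
+      P = [ 1-a a; b 1-b ];    # two-state transition probability matrix
+      p5 = dtmc(P, 5, [1 0]);  # occupancy probabilities after 5 transitions
+      ss = dtmc(P);            # stationary occupancy probabilities</pre>
+</pre>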
 
    <p><a name="doc_002ddtmc"></a>
 
@@ -891,10 +897,11 @@
           <dl>
 <dt><var>P</var><dd><var>P</var><code>(i,j)</code> is the transition probability from state i
 to state j. <var>P</var> must be an irreducible stochastic matrix,
-which means that the sum of each row must be 1 (\sum_j=1^N P_i j = 1), and the rank of
+which means that the sum of each row must be 1 (\sum_j=1^N P_i, j = 1), and the rank of
 <var>P</var> must be equal to its dimension.
 
-          <br><dt><var>n</var><dd>Step at which to compute the transient probability
+          <br><dt><var>n</var><dd>Number of transitions after which to compute the state occupancy probabilities
+(n=0, 1, <small class="enddots">...</small>)
 
           <br><dt><var>p0</var><dd><var>p0</var><code>(i)</code> is the probability that at step 0 the system
 is in state i.
@@ -908,7 +915,7 @@
 <var>p</var><code>(i)</code> is the steady-state probability that the system is
 in state i. <var>p</var> satisfies the equations p = p\bf P and \sum_i=1^N p_i = 1. If this function is invoked
 with three arguments, <var>p</var><code>(i)</code> is the marginal probability
-that the system is in state i at step <var>n</var>,
+that the system is in state i after <var>n</var> transitions,
 given the initial probabilities <var>p0</var><code>(i)</code> that the initial state is
 i.
 
@@ -918,20 +925,61 @@
 
 <p class="noindent"><strong>EXAMPLE</strong>
 
-<pre class="example"><pre class="verbatim">      a = 0.2;
-      b = 0.15;
-      P = [ 1-a a; b 1-b];
-      T = 0:14;
-      pp = zeros(2,length(T));
-      for i=1:length(T)
-        pp(:,i) = dtmc(P,T(i),[1 0]);
-      endfor
-      ss = dtmc(P); # compute steady state probabilities
-      plot( T, pp(1,:), "b+;p_0(t);", "linewidth", 2, \
-            T, ss(1)*ones(size(T)), "b;Steady State;", \
-            T, pp(2,:), "r+;p_1(t);", "linewidth", 2, \
-            T, ss(2)*ones(size(T)), "r;Steady State;" );
-      xlabel("Time Step");</pre>
+   <p>This example is from [GrSn97]. Let us consider a maze with nine rooms,
+as shown in the following figure:
+
+<pre class="example">     +-----+-----+-----+
+     |     |     |     |
+     |  1     2     3  |
+     |     |     |     |
+     +-   -+-   -+-   -+
+     |     |     |     |
+     |  4     5     6  |
+     |     |     |     |
+     +-   -+-   -+-   -+
+     |     |     |     |
+     |  7     8     9  |
+     |     |     |     |
+     +-----+-----+-----+
+</pre>
+   <p>A mouse is placed in one of the rooms and can wander around. At each
+step, the mouse moves from the current room to a neighboring one with
+equal probability: if it is in room 1, it can move to room 2 or room 4,
+each with probability 1/2. If the mouse is in room 8, it can move to
+room 7, 5 or 9, each with probability 1/3.
+
+   <p>The transition probability \bf P from room i to room
+j is the following:
+
+<pre class="example">             / 0     1/2   0     1/2   0     0     0     0     0   \
+             | 1/3   0     1/3   0     1/3   0     0     0     0   |
+             | 0     1/2   0     0     0     1/2   0     0     0   |
+             | 1/3   0     0     0     1/3   0     1/3   0     0   |
+         P = | 0     1/4   0     1/4   0     1/4   0     1/4   0   |
+             | 0     0     1/3   0     1/3   0     0     0     1/3 |
+             | 0     0     0     1/2   0     0     0     1/2   0   |
+             | 0     0     0     0     1/3   0     1/3   0     1/3 |
+             \ 0     0     0     0     0     1/2   0     1/2   0   /
+</pre>
+   <p>The stationary state occupancy probability vector can be computed
+using the following code:
+
+<pre class="example">     <!-- @group -->
+<pre class="verbatim">      P = zeros(9,9);
+      P(1,[2 4]    ) = 1/2;
+      P(2,[1 5 3]  ) = 1/3;
+      P(3,[2 6]    ) = 1/2;
+      P(4,[1 5 7]  ) = 1/3;
+      P(5,[2 4 6 8]) = 1/4;
+      P(6,[3 5 9]  ) = 1/3;
+      P(7,[4 8]    ) = 1/2;
+      P(8,[7 5 9]  ) = 1/3;
+      P(9,[6 8]    ) = 1/2;
+      p = dtmc(P);
+      disp(p)</pre><!-- @end group -->
+         &rArr; 0.083333   0.125000   0.083333   0.125000
+            0.166667   0.125000   0.083333   0.125000
+            0.083333
 </pre>
    <div class="node">
 <a name="Birth-death-process-(DTMC)"></a>
@@ -948,30 +996,29 @@
 <p><a name="doc_002ddtmc_005fbd"></a>
 
 <div class="defun">
-&mdash; Function File: <var>P</var> = <b>dtmc_bd</b> (<var>birth, death</var>)<var><a name="index-dtmc_005fbd-11"></a></var><br>
+&mdash; Function File: <var>P</var> = <b>dtmc_bd</b> (<var>b, d</var>)<var><a name="index-dtmc_005fbd-11"></a></var><br>
 <blockquote>
         <p><a name="index-Markov-chain_002c-discrete-time-12"></a><a name="index-Birth_002ddeath-process-13"></a>
-Returns the N \times N transition probability matrix P
-for a birth-death process with given rates.
-
-        <p><strong>INPUTS</strong>
-
-          <dl>
-<dt><var>birth</var><dd>Vector with N-1 elements, where <var>birth</var><code>(i)</code> is the
-transition probability from state i to state i+1.
-
-          <br><dt><var>death</var><dd>Vector with N-1 elements, where <var>death</var><code>(i)</code> is the
-transition probability from state i+1 to state i.
-
-        </dl>
-
-        <p><strong>OUTPUTS</strong>
-
-          <dl>
-<dt><var>P</var><dd>Transition probability matrix for the birth-death process.
-
-        </dl>
-
+Returns the transition probability matrix P for a discrete
+birth-death process over state space 1, 2, <small class="dots">...</small>, N. 
+<var>b</var><code>(i)</code> is the transition probability from state
+i to i+1, and <var>d</var><code>(i)</code> is the transition
+probability from state i+1 to state i, i=1, 2,
+<small class="dots">...</small>, N-1.
+
+        <p>Matrix \bf P is therefore defined as:
+
+     <pre class="example">          /                                                             \
+          | 1-b(1)     b(1)                                             |
+          |  d(1)  (1-d(1)-b(2))     b(2)                               |
+          |            d(2)      (1-d(2)-b(3))     b(3)                 |
+          |                                                             |
+          |                 ...           ...          ...              |
+          |                                                             |
+          |                         d(N-2)   (1-d(N-2)-b(N-1))  b(N-1)  |
+          |                                        d(N-1)      1-d(N-1) |
+          \                                                             /
+</pre>
         </blockquote></div>
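+
+   <p>As a usage sketch, a three-state birth-death chain can be built and
+solved as follows (the values of <var>b</var> and <var>d</var> below are purely
+illustrative):
+
+<pre class="example"><pre class="verbatim">      b = [ 0.4 0.3 ];    # forward (birth) transition probabilities
+      d = [ 0.2 0.1 ];    # backward (death) transition probabilities
+      P = dtmc_bd(b, d);  # 3x3 transition probability matrix
+      p = dtmc(P);        # stationary occupancy probabilities</pre>
+</pre>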
 
 <div class="node">
@@ -988,9 +1035,9 @@
 
 <p>Given a N state discrete-time Markov chain with transition
 matrix \bf P and an integer n &ge; 0, we let
-L-I(n) be the the expected number of visits to state i
+L_i(n) be the expected number of visits to state i
 during the first n transitions. The vector \bf L(n) =
-(L_1(n), L_2(n), <small class="dots">...</small>, L_N(n)) is defined as:
+( L_1(n), L_2(n), <small class="dots">...</small>, L_N(n) ) is defined as
 
 <pre class="example">              n            n
              ___          ___
@@ -999,25 +1046,24 @@
             /___         /___
              i=0          i=0
 </pre>
-   <p class="noindent">where \bf \pi(i) = \bf \pi(0)\bf P^i is the state occupancy probability
-after i transitions.
-
-   <p>If \bf P is absorbing, we can rearrange the states to rewrite
-\bf P as:
+   <p class="noindent">where \bf \pi(i) = \bf \pi(0)\bf P^i is the state
+occupancy probability after i transitions.
+
+   <p>If \bf P has absorbing states, that is, states that once
+entered are never left, we can rearrange the states to rewrite \bf P as:
 
 <pre class="example">         / Q | R \
      P = |---+---|
          \ 0 | I /
 </pre>
-   <p class="noindent">where we suppose that the first t states are transient
-and the last r states are absorbing.
-
-   <p>The matrix \bf N = (\bf I - \bf Q)^-1 is called the
+   <p class="noindent">where the first t states are transient
+and the last r states are absorbing (t+r = N). The
+matrix \bf N = (\bf I - \bf Q)^-1 is called the
 <em>fundamental matrix</em>; N(i,j) represents the expected
 number of times that the process is in the j-th transient state
 if it is started in the i-th transient state. If we reshape
 \bf N to the size of \bf P (filling missing entries with
-zeros), we have that, for abrosbing chains \bf L = \bf
+zeros), we have that, for absorbing chains, \bf L = \bf
 \pi(0)\bf N.
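+
+   <p>For example, in the following chain state 3 is absorbing, and the
+expected number of visits to the transient states before absorption
+can be obtained with <samp><span class="command">dtmc_exps</span></samp>, called here with
+the transition matrix and the initial occupancy probabilities (the
+numerical values below are chosen only for illustration):
+
+<pre class="example"><pre class="verbatim">      P = [ 0.5 0.4 0.1; ...
+            0.3 0.5 0.2; ...
+            0   0   1   ];   # state 3 is absorbing
+      p0 = [ 1 0 0 ];        # the chain starts in state 1
+      L = dtmc_exps(P, p0);  # expected visits before absorption</pre>
+</pre>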
 
    <p><a name="doc_002ddtmc_005fexps"></a>
@@ -1036,11 +1082,11 @@
 <dt><var>P</var><dd>N \times N transition probability matrix.
 
           <br><dt><var>n</var><dd>Number of steps during which the expected number of visits are
-computed (<var>n</var> &ge; 0). If <var>n</var><code>=0</code>, simply
-returns <var>p0</var>. If <var>n</var><code> &gt; 0</code>, returns the expected number
-of visits after exactly <var>n</var> transitions.
-
-          <br><dt><var>p0</var><dd>Initial state occupancy probability
+computed (<var>n</var> &ge; 0). If <var>n</var><code>=0</code>, returns
+<var>p0</var>. If <var>n</var><code> &gt; 0</code>, returns the expected number of
+visits after exactly <var>n</var> transitions.
+
+          <br><dt><var>p0</var><dd>Initial state occupancy probability.
 
         </dl>
 
@@ -1050,7 +1096,8 @@
 <dt><var>L</var><dd>When called with two arguments, <var>L</var><code>(i)</code> is the expected
 number of visits to transient state i before absorption. When
 called with three arguments, <var>L</var><code>(i)</code> is the expected number
-of visits to state i during the first <var>n</var> transitions.
+of visits to state i during the first <var>n</var> transitions,
+given initial occupancy probability <var>p0</var>.
 
         </dl>
 
@@ -1065,7 +1112,7 @@
 <a name="Time-averaged-expected-sojourn-times-(DTMC)"></a>
 <a name="Time_002daveraged-expected-sojourn-times-_0028DTMC_0029"></a>
 <p><hr>
-Next:&nbsp;<a rel="next" accesskey="n" href="#First-passage-times-_0028DTMC_0029">First passage times (DTMC)</a>,
+Next:&nbsp;<a rel="next" accesskey="n" href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a>,
 Previous:&nbsp;<a rel="previous" accesskey="p" href="#Expected-number-of-visits-_0028DTMC_0029">Expected number of visits (DTMC)</a>,
 Up:&nbsp;<a rel="up" accesskey="u" href="#Discrete_002dTime-Markov-Chains">Discrete-Time Markov Chains</a>
 
@@ -1091,7 +1138,7 @@
 <dt><var>Q</var><dd>Infinitesimal generator matrix. <var>Q</var><code>(i,j)</code> is the transition
 rate from state i to state j,
 1 &le; i \neq j &le; N. The
-matrix <var>Q</var> must also satisfy the condition \sum_j=1^N Q_ij = 0
+matrix <var>Q</var> must also satisfy the condition \sum_j=1^N Q_i, j = 0
 
           <br><dt><var>t</var><dd>Time. If omitted, the results are computed until absorption.
 
@@ -1115,84 +1162,43 @@
         </blockquote></div>
 
 <div class="node">
-<a name="First-passage-times-(DTMC)"></a>
-<a name="First-passage-times-_0028DTMC_0029"></a>
+<a name="Mean-time-to-absorption-(DTMC)"></a>
+<a name="Mean-time-to-absorption-_0028DTMC_0029"></a>
 <p><hr>
-Next:&nbsp;<a rel="next" accesskey="n" href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a>,
+Next:&nbsp;<a rel="next" accesskey="n" href="#First-passage-times-_0028DTMC_0029">First passage times (DTMC)</a>,
 Previous:&nbsp;<a rel="previous" accesskey="p" href="#Time_002daveraged-expected-sojourn-times-_0028DTMC_0029">Time-averaged expected sojourn times (DTMC)</a>,
 Up:&nbsp;<a rel="up" accesskey="u" href="#Discrete_002dTime-Markov-Chains">Discrete-Time Markov Chains</a>
 
 </div>
 
-<h4 class="subsection">4.1.5 First Passage Times</h4>
-
-<p>The First Passage Time M_i j is defined as the average
-number of transitions needed to visit state j for the first
-time, starting from state i. Matrix \bf M satisfies the
-property that
-
-<pre class="example">                ___
-               \
-     M_ij = 1 + &gt;   P_ij * M_kj
-               /___
-               k!=j
+<h4 class="subsection">4.1.5 Mean Time to Absorption</h4>
+
+<p>The <em>mean time to absorption</em> is defined as the average number of
+transitions required to reach an absorbing state, starting from a
+transient state (or given an initial state occupancy probability
+vector \bf \pi(0)).
+
+   <p>Let \bf t_i be the expected number of steps before being
+absorbed in any absorbing state, starting from state i.  Vector
+\bf t can be easily computed from the fundamental matrix
+\bf N (see <a href="#Expected-number-of-visits-_0028DTMC_0029">Expected number of visits (DTMC)</a>) as
+
+<pre class="example">     t = N 1^T
 </pre>
-   <p><a name="doc_002ddtmc_005ffpt"></a>
+   <p>We can define a matrix \bf B = [ B_i, j ] such that
+B_i, j is the probability of being absorbed in state
+j, starting from transient state i. Again, using
+the fundamental matrix \bf N and \bf R, we have
+
+<pre class="example">     B = N R
+</pre>
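+   <p>Both quantities can be obtained with the
+<samp><span class="command">dtmc_mtta</span></samp> function documented below; for the
+illustrative absorbing chain used above:
+
+<pre class="example"><pre class="verbatim">      P = [ 0.5 0.4 0.1; ...
+            0.3 0.5 0.2; ...
+            0   0   1   ];    # state 3 is absorbing
+      [t N B] = dtmc_mtta(P); # mean time to absorption, fundamental
+                              # matrix and absorption probabilities</pre>
+</pre>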
+   <p><a name="doc_002ddtmc_005fmtta"></a>
 
 <div class="defun">
-&mdash; Function File: <var>M</var> = <b>dtmc_fpt</b> (<var>P</var>)<var><a name="index-dtmc_005ffpt-20"></a></var><br>
+&mdash; Function File: [<var>t</var> <var>N</var> <var>B</var>] = <b>dtmc_mtta</b> (<var>P</var>)<var><a name="index-dtmc_005fmtta-20"></a></var><br>
+&mdash; Function File: [<var>t</var> <var>N</var> <var>B</var>] = <b>dtmc_mtta</b> (<var>P, p0</var>)<var><a name="index-dtmc_005fmtta-21"></a></var><br>
 <blockquote>
-        <p><a name="index-First-passage-times-21"></a><a name="index-Mean-recurrence-times-22"></a>
-Compute mean first passage times and mean recurrence times
-for an irreducible discrete-time Markov chain.
-
-        <p><strong>INPUTS</strong>
-
-          <dl>
-<dt><var>P</var><dd><var>P</var><code>(i,j)</code> is the transition probability from state i
-to state j. <var>P</var> must be an irreducible stochastic matrix,
-which means that the sum of each row must be 1 (\sum_j=1^N
-P_i j = 1), and the rank of <var>P</var> must be equal to its
-dimension.
-
-        </dl>
-
-        <p><strong>OUTPUTS</strong>
-
-          <dl>
-<dt><var>M</var><dd>For all i \neq j, <var>M</var><code>(i,j)</code> is the average number of
-transitions before state <var>j</var> is reached for the first time,
-starting from state <var>i</var>. <var>M</var><code>(i,i)</code> is the <em>mean
-recurrence time</em> of state i, and represents the average time
-needed to return to state <var>i</var>.
-
-        </dl>
-
-        <pre class="sp">
-     
-     </pre>
-     <strong>See also:</strong> ctmc_fpt.
-
-        </blockquote></div>
-
-<div class="node">
-<a name="Mean-time-to-absorption-(DTMC)"></a>
-<a name="Mean-time-to-absorption-_0028DTMC_0029"></a>
-<p><hr>
-Previous:&nbsp;<a rel="previous" accesskey="p" href="#First-passage-times-_0028DTMC_0029">First passage times (DTMC)</a>,
-Up:&nbsp;<a rel="up" accesskey="u" href="#Discrete_002dTime-Markov-Chains">Discrete-Time Markov Chains</a>
-
-</div>
-
-<h4 class="subsection">4.1.6 Mean Time to Absorption</h4>
-
-<p><a name="doc_002ddtmc_005fmtta"></a>
-
-<div class="defun">
-&mdash; Function File: [<var>t</var> <var>N</var> <var>B</var>] = <b>dtmc_mtta</b> (<var>P</var>)<var><a name="index-dtmc_005fmtta-23"></a></var><br>
-&mdash; Function File: [<var>t</var> <var>N</var> <var>B</var>] = <b>dtmc_mtta</b> (<var>P, p0</var>)<var><a name="index-dtmc_005fmtta-24"></a></var><br>
-<blockquote>
-        <p><a name="index-Mean-time-to-absorption-25"></a><a name="index-Absorption-probabilities-26"></a><a name="index-Fundamental-matrix-27"></a>
+        <p><a name="index-Mean-time-to-absorption-22"></a><a name="index-Absorption-probabilities-23"></a><a name="index-Fundamental-matrix-24"></a>
 Compute the expected number of steps before absorption for a
 DTMC with N \times N transition probability matrix <var>P</var>;
 compute also the fundamental matrix <var>N</var> for <var>P</var>.
@@ -1244,6 +1250,94 @@
         </blockquote></div>
 
 <div class="node">
+<a name="First-passage-times-(DTMC)"></a>
+<a name="First-passage-times-_0028DTMC_0029"></a>
+<p><hr>
+Previous:&nbsp;<a rel="previous" accesskey="p" href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a>,
+Up:&nbsp;<a rel="up" accesskey="u" href="#Discrete_002dTime-Markov-Chains">Discrete-Time Markov Chains</a>
+
+</div>
+
+<h4 class="subsection">4.1.6 First Passage Times</h4>
+
+<p>The First Passage Time M_i, j is defined as the average
+number of transitions needed to visit state j for the first
+time, starting from state i. Matrix \bf M satisfies the
+property that
+
+<pre class="example">                ___
+               \
+     M_ij = 1 + &gt;   P_ik * M_kj
+               /___
+               k!=j
+</pre>
+   <p>To compute \bf M = [ M_i, j] a different formulation is
+used.  Let \bf W be the N \times N matrix having each
+row equal to the steady-state probability vector \bf \pi for
+\bf P; let \bf I be the N \times N identity
+matrix. Define matrix \bf Z as follows:
+
+<pre class="example">                    -1
+     Z = (I - P + W)
+</pre>
+   <p class="noindent">Then, we have that
+
+<pre class="example">            Z_jj - Z_ij
+     M_ij = -----------
+               \pi_j
+</pre>
+   <p>According to the definition above, M_i,i = 0. We arbitrarily
+redefine M_i,i to be the <em>mean recurrence time</em>
+r_i for state i, that is the average number of
+transitions needed to return to state i starting from
+it. r_i is defined as:
+
+<pre class="example">             1
+     r_i = -----
+           \pi_i
+</pre>
+   <p class="noindent">where \pi_i is the stationary probability of visiting state
+i.
+
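+   <p>As a small sketch, the matrix of mean first passage and mean
+recurrence times for a two-state chain (with purely illustrative
+values of a and b) can be computed as follows:
+
+<pre class="example"><pre class="verbatim">      a = 0.2; b = 0.15;
+      P = [ 1-a a; b 1-b ];  # irreducible two-state chain
+      M = dtmc_fpt(P);       # M(1,2) and M(2,1) are mean first passage
+                             # times; M(i,i) are mean recurrence times</pre>
+</pre>
+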
+   <p><a name="doc_002ddtmc_005ffpt"></a>
+
+<div class="defun">
+&mdash; Function File: <var>M</var> = <b>dtmc_fpt</b> (<var>P</var>)<var><a name="index-dtmc_005ffpt-25"></a></var><br>
+<blockquote>
+        <p><a name="index-First-passage-times-26"></a><a name="index-Mean-recurrence-times-27"></a>
+Compute mean first passage times and mean recurrence times
+for an irreducible discrete-time Markov chain.
+
+        <p><strong>INPUTS</strong>
+
+          <dl>
+<dt><var>P</var><dd><var>P</var><code>(i,j)</code> is the transition probability from state i
+to state j. <var>P</var> must be an irreducible stochastic matrix,
+which means that the sum of each row must be 1 (\sum_j=1^N
+P_i j = 1), and the rank of <var>P</var> must be equal to its
+dimension.
+
+        </dl>
+
+        <p><strong>OUTPUTS</strong>
+
+          <dl>
+<dt><var>M</var><dd>For all i \neq j, <var>M</var><code>(i,j)</code> is the average number of
+transitions before state <var>j</var> is reached for the first time,
+starting from state <var>i</var>. <var>M</var><code>(i,i)</code> is the <em>mean
+recurrence time</em> of state i, and represents the average time
+needed to return to state <var>i</var>.
+
+        </dl>
+
+        <pre class="sp">
+     
+     </pre>
+     <strong>See also:</strong> ctmc_fpt.
+
+        </blockquote></div>
+
+<div class="node">
 <a name="Continuous-Time-Markov-Chains"></a>
 <a name="Continuous_002dTime-Markov-Chains"></a>
 <p><hr>
@@ -1256,17 +1350,17 @@
 
 <p>A stochastic process {X(t), t &ge; 0} is a continuous-time
 Markov chain if, for all integers n, and for any sequence
-t_0, t_1 , \ldots , t_n, t_n+1 such that t_0 &lt; t_1 &lt;
+t_0, t_1, \ldots, t_n, t_n+1 such that t_0 &lt; t_1 &lt;
 \ldots &lt; t_n &lt; t_n+1, we have
 
    <p>P(X_n+1 = x_n+1 | X_n = x_n, X_n-1 = x_n-1, ..., X_0 = x_0) = P(X_n+1 = x_n+1 | X_n = x_n)
 
    <p>A continuous-time Markov chain is defined according to an
-<em>infinitesimal generator matrix</em> \bf Q = [Q_i,j] such
-that for each i \neq j, Q_i, j is the transition rate
-from state i to state j. The elements Q_i, i
-must be defined in such a way that the infinitesimal generator matrix
-\bf Q satisfies the property \sum_j=1^N Q_i,j = 0.
+<em>infinitesimal generator matrix</em> \bf Q = [Q_i,j],
+where for each i \neq j, Q_i, j is the transition rate
+from state i to state j. The matrix \bf Q must
+satisfy the property that, for all i, \sum_j=1^N Q_i,
+j = 0.
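+
+   <p>For instance, the following generator matrix of a two-state chain
+satisfies this property (the rates lambda and mu below are purely
+illustrative), as a direct check of the row sums shows:
+
+<pre class="example"><pre class="verbatim">      lambda = 1.0; mu = 2.0;   # illustrative transition rates
+      Q = [ -lambda  lambda; ...
+             mu     -mu     ];  # two-state generator matrix
+      sum(Q,2)                  # each row sums to zero</pre>
+</pre>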
 
    <p><a name="doc_002dctmc_005fcheck_005fQ"></a>
 
@@ -1303,12 +1397,13 @@
 
 <p>Similarly to the discrete case, we denote with \bf \pi(t) =
 (\pi_1(t), \pi_2(t), <small class="dots">...</small>, \pi_N(t) ) the <em>state occupancy
-probability vector</em> at time t. \pi_i(t) denotes the
-probability that the system is in state i at time t &ge; 0.
+probability vector</em> at time t. \pi_i(t) is the
+probability that the system is in state i at time t
+&ge; 0.
 
    <p>Given the infinitesimal generator matrix \bf Q and the initial
-state occupancy probability vector \bf \pi(0) = (\pi_1(0),
-\pi_2(0), <small class="dots">...</small>, \pi_N(0)), the state occupancy probability vector
+state occupancy probabilities \bf \pi(0) = (\pi_1(0),
+\pi_2(0), <small class="dots">...</small>, \pi_N(0)), the state occupancy probabilities
 \bf \pi(t) at time t can be computed as:
 
 <pre class="example">     \pi(t) = \pi(0) exp(Qt)
@@ -1317,10 +1412,14 @@
 of \bf Q t. Under certain conditions, there exists a
 <em>stationary state occupancy probability</em> \bf \pi =
 \lim_t \rightarrow +\infty \bf \pi(t), which is independent from
-the initial state occupancy \bf \pi(0). The stationary state
-occupancy probability vector \bf \pi satisfies
-\bf \pi \bf Q = \bf 0 and \sum_i=1^N \pi_i = 1.
-
+\bf \pi(0).  \bf \pi is the solution of the following
+linear system:
+
+<pre class="example">     /
+     | \pi Q   = 0
+     | \pi 1^T = 1
+     \
+</pre>
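+   <p>As a small sketch, for the two-state generator matrix used above both
+the transient and the stationary occupancy probabilities can be
+computed with the <samp><span class="command">ctmc</span></samp> function documented below
+(here it is assumed that, as with <samp><span class="command">dtmc</span></samp>, a call with
+the generator matrix alone returns the stationary vector):
+
+<pre class="example"><pre class="verbatim">      lambda = 1.0; mu = 2.0;
+      Q = [ -lambda  lambda; ...
+             mu     -mu     ];
+      pt = ctmc(Q, 0.5, [1 0]);  # occupancy probabilities at time t=0.5
+      ss = ctmc(Q);              # stationary occupancy probabilities</pre>
+</pre>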
    <p><a name="doc_002dctmc"></a>
 
 <div class="defun">
@@ -1342,12 +1441,12 @@
 <dt><var>Q</var><dd>Infinitesimal generator matrix. <var>Q</var> is a N \times N square
 matrix where <var>Q</var><code>(i,j)</code> is the transition rate from state
 i to state j, for 1 &le; i \neq j &le; N. 
-Transition rates must be nonnegative, and \sum_j=1^N Q_i j = 0
+Transition rates must be nonnegative, and \sum_j=1^N Q_i, j = 0
 
           <br><dt><var>t</var><dd>Time at which to compute the transient probability
 
           <br><dt><var>p0</var><dd><var>p0</var><code>(i)</code> is the probability that the system
-is in state i at time 0 .
+is in state i at time 0.
 
         </dl>
 
@@ -1390,30 +1489,28 @@
 <p><a name="doc_002dctmc_005fbd"></a>
 
 <div class="defun">
-&mdash; Function File: <var>Q</var> = <b>ctmc_bd</b> (<var>birth, death</var>)<var><a name="index-ctmc_005fbd-36"></a></var><br>
+&mdash; Function File: <var>Q</var> = <b>ctmc_bd</b> (<var>b, d</var>)<var><a name="index-ctmc_005fbd-36"></a></var><br>
 <blockquote>
         <p><a name="index-Markov-chain_002c-continuous-time-37"></a><a name="index-Birth_002ddeath-process-38"></a>
-Returns the N \times N infinitesimal generator matrix Q
-for a birth-death process with given rates.
-
-        <p><strong>INPUTS</strong>
-
-          <dl>
-<dt><var>birth</var><dd>Vector with N-1 elements, where <var>birth</var><code>(i)</code> is the
-transition rate from state i to state i+1.
-
-          <br><dt><var>death</var><dd>Vector with N-1 elements, where <var>death</var><code>(i)</code> is the
-transition rate from state i+1 to state i.
-
-        </dl>
-
-        <p><strong>OUTPUTS</strong>
-
-          <dl>
-<dt><var>Q</var><dd>Infinitesimal generator matrix for the birth-death process.
-
-        </dl>
-
+Returns the infinitesimal generator matrix Q for a continuous
+birth-death process over state space 1, 2, <small class="dots">...</small>, N. 
+<var>b</var><code>(i)</code> is the transition rate from state i to
+i+1, and <var>d</var><code>(i)</code> is the transition rate from state
+i+1 to state i, i=1, 2, <small class="dots">...</small>, N-1.
+
+        <p>Matrix \bf Q is therefore defined as:
+
+     <pre class="example">          /                                                          \
+          | -b(1)     b(1)                                           |
+          |  d(1) -(d(1)+b(2))     b(2)                              |
+          |           d(2)     -(d(2)+b(3))        b(3)              |
+          |                                                          |
+          |                ...           ...          ...            |
+          |                                                          |
+          |                       d(N-2)    -(d(N-2)+b(N-1))  b(N-1) |
+          |                                       d(N-1)     -d(N-1) |
+          \                                                          /
+</pre>
         </blockquote></div>
 
 <div class="node">
@@ -1430,11 +1527,11 @@
 
 <p>Given a N state continuous-time Markov Chain with infinitesimal
 generator matrix \bf Q, we define the vector \bf L(t) =
-(L_1(t), L_2(t), \ldots L_N(t)) such that L_i(t) is the
+(L_1(t), L_2(t), \ldots, L_N(t)) such that L_i(t) is the
 expected sojourn time in state i during the interval
 [0,t), assuming that the initial occupancy probability at time
-0 was \bf \pi(0). \bf L(t) is the solution of
-the following differential equation:
+0 was \bf \pi(0). \bf L(t) can be expressed as the
+solution of the following differential equation:
 
 <pre class="example">      dL
       --(t) = L(t) Q + pi(0),    L(0) = 0
@@ -1444,11 +1541,12 @@
 form as:
 
 <pre class="example">            / t
-     L(t) = |      pi(u) du
-            / u=0
+     L(t) = |   pi(u) du
+            / 0
 </pre>
    <p class="noindent">where \bf \pi(t) = \bf \pi(0) \exp(\bf Qt) is
-the state occupancy probability at time t.
+the state occupancy probability at time t; \exp(\bf Qt)
+is the matrix exponential of \bf Qt.
 
    <p><a name="doc_002dctmc_005fexps"></a>
 
@@ -1471,7 +1569,7 @@
 &le; i \neq j &le; N. The matrix <var>Q</var> must also satisfy the
 condition \sum_j=1^N Q_ij = 0.
 
-          <br><dt><var>t</var><dd>Time
+          <br><dt><var>t</var><dd>If given, compute the expected sojourn times in [0,t]
 
           <br><dt><var>p</var><dd>Initial occupancy probability vector; <var>p</var><code>(i)</code> is the
 probability the system is in state i at time 0, i = 1,
@@ -1485,9 +1583,9 @@
 <dt><var>L</var><dd>If this function is called with three arguments, <var>L</var><code>(i)</code> is
 the expected time spent in state i during the interval
 [0,t]. If this function is called with two arguments
-<var>L</var><code>(i)</code> is either the expected time spent in state i until
-absorption (if i is a transient state), or zero
-(if <var>i</var> is an absorbing state).
+<var>L</var><code>(i)</code> is either the expected time spent in state i
+until absorption (if i is a transient state), or zero (if
+<var>i</var> is an absorbing state).
 
         </dl>
 
@@ -1503,10 +1601,9 @@
 
 <pre class="example"><pre class="verbatim">      lambda = 0.5;
       N = 4;
-      birth = lambda*linspace(1,N-1,N-1);
-      death = zeros(1,N-1);
-      Q = diag(birth,1)+diag(death,-1);
-      Q -= diag(sum(Q,2));
+      b = lambda*[1:N-1];
+      d = zeros(size(b));
+      Q = ctmc_bd(b,d);
       t = linspace(0,10,100);
       p0 = zeros(1,N); p0(1)=1;
       L = zeros(length(t),N);
@@ -2480,11 +2577,12 @@
 service, and the instant at which service finishes and the request moves
 to another queue (or exits the system).
 
-     <br><dt>P_ij<dd>Routing probability matrix. \bf P = P_ij is a K \times
-K matrix such that P_ij is the probability that a request
-completing service at server i will move directly to server
-j, The probability that a request leaves the system after service
-at service center i is 1-\sum_j=1^K P_ij.
+     <br><dt>P_i, j<dd>Routing probability matrix. \bf P = P_i, j is a K
+\times K matrix such that P_i, j is the probability that a
+request completing service at server i will move directly to
+server j. The probability that a request leaves the system
+after service at service center i is 1-\sum_j=1^K P_i,
+j.
 
      <br><dt>V_i<dd>Average number of visits. V_i is the average number of visits to
 the service center i. This quantity will be described shortly.
@@ -2529,56 +2627,71 @@
 external arrival rate of requests to the system. The average number of
 visits satisfy the following equation:
 
-<pre class="example">     V == P0 + V*P;
+<pre class="example">                       K
+                      ___
+                     \
+     V_j = P_(0, j) + &gt;   V_i P_(i, j)
+                     /___
+                      i=1
 </pre>
-   <p class="noindent">where P_0 j is the probability that an external
+   <p class="noindent">where P_0, j is the probability that an external
 arrival goes to service center j. If \lambda_j is the
 external arrival rate to service center j, and \lambda =
 \sum_j \lambda_j is the overall external arrival rate, then
-P_0 j = \lambda_j / \lambda.
+P_0, j = \lambda_j / \lambda.
 
    <p>For closed models, the visit ratios satisfy the following equation:
 
-<pre class="example">     V(1) == 1 &amp;&amp; V == V*P;
+<pre class="example">     
+     V_1 = 1
+     
+             K
+            ___
+           \
+     V_j =  &gt;   V_i P_(i, j)
+           /___
+            i=1
 </pre>
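+   <p>For a single-class closed network these equations can be solved with
+the <samp><span class="command">qnvisits</span></samp> function shown earlier; for example,
+for a purely cyclic routing among three centers:
+
+<pre class="example">     <kbd>P = [0 1 0; 0 0 1; 1 0 0];</kbd>
+     <kbd>V = qnvisits(P)</kbd>
+</pre>
+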
    <h4 class="subsection">6.1.2 Multiple class models</h4>
 
 <p>In multiple class QN models, we assume that there exist C
 different classes of requests. Each request from class c spends
-on average time S_ck in service at service center k. For
-open models, we denote with \bf \lambda = \lambda_ck the
-arrival rates, where \lambda_ck is the external arrival rate of
-class c customers at service center k. For closed models,
-we denote with \bf N = (N_1, N_2, \ldots N_C) the population
-vector, where N_c is the number of class c requests in the
-system.
+on average time S_c, k in service at service center
+k. For open models, we denote with \bf \lambda =
+\lambda_c, k the arrival rates, where \lambda_c, k is the
+external arrival rate of class c customers at service center
+k. For closed models, we denote with \bf N = (N_1, N_2,
+\ldots, N_C) the population vector, where N_c is the number of
+class c requests in the system.
 
    <p>The transition probability matrix for these kind of networks will be a
-C \times K \times C \times K matrix \bf P =
-P_risj such that P_risj is the probability that a
-class r request which completes service at center i will
-join server j as a class s request.
-
-   <p>Model input and outputs can be adjusted by adding additional
-indexes for the customer classes.
+C \times K \times C \times K matrix \bf P = P_r, i, s, j
+such that P_r, i, s, j is the probability that a class
+r request which completes service at center i will join
+server j as a class s request.
+
+   <p>Model inputs and outputs can be adjusted by adding additional indexes
+for the customer classes.
 
 <p class="noindent"><strong>Model Inputs</strong>
 
      <dl>
-<dt>\lambda_ci<dd>External arrival rate of class-c requests to service center i
-
-     <br><dt>\lambda<dd>Overall external arrival rate to the whole system: \lambda = \sum_c \sum_i \lambda_ci
-
-     <br><dt>S_ci<dd>Average service time. S_ci is the average service time on service
-center i for class c requests.
-
-     <br><dt>P_risj<dd>Routing probability matrix. \bf P = P_risj is a C
-\times K \times C \times K matrix such that P_risj is the
-probability that a class r request which completes service at
-server i will move to server j as a class s
+<dt>\lambda_c, i<dd>External arrival rate of class-c requests to service center i
+
+     <br><dt>\lambda<dd>Overall external arrival rate to the whole system: \lambda = \sum_c \sum_i \lambda_c, i
+
+     <br><dt>S_c, i<dd>Average service time. S_c, i is the average service time on
+service center i for class c requests.
+
+     <br><dt>P_r, i, s, j<dd>Routing probability matrix. \bf P = P_r, i, s, j is a C
+\times K \times C \times K matrix such that P_r, i, s, j is
+the probability that a class r request which completes service
+at server i will move to server j as a class s
 request.
 
-     <br><dt>V_ci<dd>Average number of visits. V_ci is the average number of visits
+     <br><dt>V_c, i<dd>Average number of visits. V_c, i is the average number of visits
 of class c requests to the service center i.
 
    </dl>
@@ -2586,20 +2699,20 @@
 <p class="noindent"><strong>Model Outputs</strong>
 
      <dl>
-<dt>U_ci<dd>Utilization of service center i by class c requests. The
+<dt>U_c, i<dd>Utilization of service center i by class c requests. The
 utilization is defined as the fraction of time in which the resource is
 busy (i.e., the server is processing requests).
 
-     <br><dt>R_ci<dd>Average response time experienced by class c requests on service
+     <br><dt>R_c, i<dd>Average response time experienced by class c requests on service
 center i. The average response time is defined as the average
 time between the arrival of a customer in the queue, and the completion
 of service.
 
-     <br><dt>Q_ci<dd>Average number of class c requests on service center
+     <br><dt>Q_c, i<dd>Average number of class c requests on service center
 i. This includes both the requests in the queue, and the request
 being served.
 
-     <br><dt>X_ci<dd>Throughput of service center i for class c requests.  The
+     <br><dt>X_c, i<dd>Throughput of service center i for class c requests.  The
 throughput is defined as the rate of completion of class c
 requests.
 
@@ -2622,7 +2735,7 @@
 
    </dl>
 
-   <p>We can define the visit ratios V_sj for class s
+   <p>We can define the visit ratios V_s, j for class s
 customers at service center j as follows:
 
    <p>V_sj = sum_r sum_i V_ri P_risj, for all s,j
@@ -2631,12 +2744,12 @@
 
    <p>V_sj = P_0sj + sum_r sum_i V_ri P_risj, for all s,j
 
-<p class="noindent">where P_0sj is the probability that an external
+<p class="noindent">where P_0, s, j is the probability that an external
 arrival goes to service center j as a class-s request. 
-If \lambda_sj is the external arrival rate of class s
-requests to service center j, and \lambda = \sum_s \sum_j
-\lambda_sj is the overall external arrival rate to the whole system,
-then P_0sj = \lambda_sj / \lambda.
+If \lambda_s, j is the external arrival rate of class
+s requests to service center j, and \lambda =
+\sum_s \sum_j \lambda_s, j is the overall external arrival rate to
+the whole system, then P_0, s, j = \lambda_s, j / \lambda.
 
 <div class="node">
 <a name="Generic-Algorithms"></a>
@@ -2826,11 +2939,11 @@
 requests in 0.2. Note that service times are class-independent;
 
      <li>Node 2 is a -/G/1&ndash;PS node, with service times
-S_12 = 0.4 for class 1, and S_22 = 0.6 for class 2
+S_1, 2 = 0.4 for class 1, and S_2, 2 = 0.6 for class 2
 requests;
 
      <li>Node 3 is a -/G/\infty node (delay center), with service
-times S_13=1 and S_23=2 for class 1 and 2
+times S_1, 3=1 and S_2, 3=2 for class 1 and 2
 respectively.
 
    </ul>
@@ -2910,8 +3023,8 @@
    </ul>
 
    <p>We define the <em>joint probability vector</em> \pi(k_1, k_2,
-\ldots k_N) as the steady-state probability that there are k_i
-requests at service center i, for all i=1,2, \ldots N. 
+\ldots, k_N) as the steady-state probability that there are k_i
+requests at service center i, for all i=1, 2, \ldots, N. 
 Jackson networks have the property that the joint probability is the
 product of the marginal probabilities \pi_i:
 
@@ -3020,12 +3133,12 @@
 <pre class="example">     k = [k1, k2, ... kn]; <span class="roman">population vector</span>
      p = 1/G(N+1) \prod F(i,k);
 </pre>
-   <p>Here \pi(k_1, k_2, \ldots k_K) is the joint probability of
-having k_i requests at node i, for all i=1,2,
-\ldots K.
+   <p>Here \pi(k_1, k_2, \ldots, k_K) is the joint probability of
+having k_i requests at node i, for all i=1, 2,
+\ldots, K.
 
    <p>The <em>convolution algorithms</em> computes the normalization constants
-G = (G(0), G(1), \ldots G(N)) for single-class, closed networks
+\bf G = \left(G(0), G(1), \ldots, G(N)\right) for single-class, closed networks
 with N requests.  The normalization constants are returned as
 vector <var>G</var><code>=[</code><var>G</var><code>(1), </code><var>G</var><code>(2), ... </code><var>G</var><code>(N+1)]</code> where
 <var>G</var><code>(i+1)</code> is the value of G(i) (remember that Octave
@@ -3104,7 +3217,7 @@
    <p>The normalization constant G can be used to compute the
 steady-state probabilities for a closed single class product-form
 Queueing Network with K nodes. Let <var>k</var><code>=[k_1,
-k_2, ... k_K]</code> be a valid population vector. Then, the
+k_2, ..., k_K]</code> be a valid population vector. Then, the
 steady-state probability <var>p</var><code>(i)</code> to have <var>k</var><code>(i)</code>
 requests at service center i can be computed as:
 
@@ -3876,7 +3989,7 @@
 <p class="noindent"><strong>NOTE</strong>
 
    <p>Given a network with K service centers, C job classes and
-population vector \bf N=(N_1, N_2, \ldots N_C), the MVA
+population vector \bf N=(N_1, N_2, \ldots, N_C), the MVA
 algorithm requires space O(C \prod_i (N_i + 1)). The time
 complexity is O(CK\prod_i (N_i + 1)). This implementation is
 slightly more space-efficient (see details in the code). While the space
@@ -4234,14 +4347,14 @@
 
           <li>Average service times are load-independent.
 
-          <li>P_ij is the probability that requests completing
+          <li>P_i, j is the probability that requests completing
 execution at center i are transferred to
 center j, i \neq j. For open networks, a request may leave the system
-from any node i with probability 1-\sum_j P_ij.
+from any node i with probability 1-\sum_j P_i, j.
 
           <li>Blocking type is Repetitive-Service (RS). Service
 center j is <em>saturated</em> if the number of requests is equal
-to its capacity <code>C_j</code>. Under the RS blocking discipline,
+to its capacity C_j. Under the RS blocking discipline,
 a request completing service at center i which is being
 transferred to a saturated server j is put back at the end of
 the queue of i and will receive service again. Center i
@@ -4701,18 +4814,18 @@
 
 <pre class="example">     V == P0 + V*P;
 </pre>
-   <p class="noindent">where P_0 j is the probability that an external
+   <p class="noindent">where P_0, j is the probability that an external
 arrival goes to service center j. If \lambda_j is the
 external arrival rate to service center j, and \lambda =
 \sum_j \lambda_j is the overall external arrival rate, then
-P_0 j = \lambda_j / \lambda.
+P_0, j = \lambda_j / \lambda.
 
    <p>For closed networks, the visit ratios satisfy the following equation:
 
 <pre class="example">     V(1) == 1 &amp;&amp; V == V*P;
 </pre>
    <p>The definitions above can be extended to multiple class networks as
-follows. We define the visit ratios V_sj for class s
+follows. We define the visit ratios V_s, j for class s
 customers at service center j as follows:
 
    <p>V_sj = sum_r sum_i V_ri P_risj, for all s,j
@@ -4722,12 +4835,12 @@
 
    <p>V_sj = P_0sj + sum_r sum_i V_ri P_risj, for all s,j
 
-<p class="noindent">where P_0sj is the probability that an external
+<p class="noindent">where P_0, s, j is the probability that an external
 arrival goes to service center j as a class-s request. 
-If \lambda_sj is the external arrival rate of class s
+If \lambda_s, j is the external arrival rate of class s
 requests to service center j, and \lambda = \sum_s \sum_j
-\lambda_sj is the overall external arrival rate to the whole system,
-then P_0sj = \lambda_sj / \lambda.
+\lambda_s, j is the overall external arrival rate to the whole system,
+then P_0, s, j = \lambda_s, j / \lambda.
 
    <p><a name="doc_002dqnvisits"></a>
 
@@ -4853,7 +4966,7 @@
 <a href="http://www.cs.purdue.edu/research/technical_reports/1980/TR 80-355.pdf">http://www.cs.purdue.edu/research/technical_reports/1980/TR 80-355.pdf</a>
 
    <p>Note that the slightly different problem of generating all tuples
-k_1, k_2, \ldots k_N such that \sum_i k_i = k and
+k_1, k_2, \ldots, k_N such that \sum_i k_i = k and
 k_i are nonnegative integers, for some fixed integer k
 &ge; 0 has been described in S. Santini, <cite>Computing the
 Indices for a Complex Summation</cite>, unpublished report, available at
@@ -4934,13 +5047,15 @@
      <dl>
 <dt>[Aky88]<dd>Ian F. Akyildiz, <cite>Mean Value Analysis for Blocking Queueing
 Networks</cite>, IEEE Transactions on Software Engineering, vol. 14, n. 2,
-april 1988, pp. 418&ndash;428.  <a href="http://dx.doi.org/10.1109/32.4663">http://dx.doi.org/10.1109/32.4663</a>
+april 1988, pp. 418&ndash;428.  DOI <a href="http://dx.doi.org/10.1109/32.4663">10.1109/32.4663</a>
 
      <br><dt>[Bar79]<dd>Y. Bard, <cite>Some Extensions to Multiclass Queueing Network Analysis</cite>,
 proc. 4th Int. Symp. on Modelling and Performance Evaluation of
 Computer Systems, feb. 1979, pp. 51&ndash;62.
 
-     <br><dt>[RGMT98]<dd>G. Bolch, S. Greiner, H. de Meer and
+     <br><dt>[BCMP75]<dd>Forest Baskett, K. Mani Chandy, Richard R. Muntz, and Fernando G. Palacios. 1975. <cite>Open, Closed, and Mixed Networks of Queues with Different Classes of Customers</cite>. J. ACM 22, 2 (April 1975), 248&ndash;260, DOI <a href="http://doi.acm.org/10.1145/321879.321887">10.1145/321879.321887</a>
+
+     <br><dt>[BGMT98]<dd>G. Bolch, S. Greiner, H. de Meer and
 K. Trivedi, <cite>Queueing Networks and Markov Chains: Modeling and
 Performance Evaluation with Computer Science Applications</cite>, Wiley,
 1998.
@@ -4948,17 +5063,21 @@
      <br><dt>[Buz73]<dd>Jeffrey P. Buzen, <cite>Computational Algorithms for Closed Queueing
 Networks with Exponential Servers</cite>, Communications of the ACM, volume
 16, number 9, september 1973,
-pp. 527&ndash;531. <a href="http://doi.acm.org/10.1145/362342.362345">http://doi.acm.org/10.1145/362342.362345</a>
+pp. 527&ndash;531. DOI <a href="http://doi.acm.org/10.1145/362342.362345">10.1145/362342.362345</a>
 
      <br><dt>[CMS08]<dd>G. Casale, R. R. Muntz, G. Serazzi,
 <cite>Geometric Bounds: a Non-Iterative Analysis Technique for Closed
 Queueing Networks</cite>, IEEE Transactions on Computers, 57(6):780-794,
-June 2008. <a href="http://doi.ieeecomputersociety.org/10.1109/TC.2008.37">http://doi.ieeecomputersociety.org/10.1109/TC.2008.37</a>
-
-     <br><dt>[GrSn97]<dd>Charles M. Grinstead, J. Laurie Snell, (July 1997). Introduction to
-Probability. American Mathematical Society. ISBN 978-0821807491
-
-     <br><dt>[Jai91]<dd>R. Jain , <cite>The Art of Computer Systems Performance Analysis</cite>,
+June 2008. DOI <a href="http://doi.ieeecomputersociety.org/10.1109/TC.2008.37">10.1109/TC.2008.37</a>
+
+     <br><dt>[GrSn97]<dd>Charles M. Grinstead and J. Laurie Snell, <cite>Introduction
+to Probability</cite>, American Mathematical Society, July 1997. ISBN 978-0821807491;
+this excellent textbook is <a href="http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/amsbook.mac.pdf">available in PDF format</a>
+and can be used under the terms of the <a href="http://www.gnu.org/copyleft/fdl.html">GNU Free Documentation License (FDL)</a>
+
+     <br><dt>[Jac04]<dd>James R. Jackson, <cite>Jobshop-Like Queueing Systems</cite>, Management Science, Vol. 50, No. 12, Ten Most Influential Titles of Management Science's First Fifty Years (Dec. 2004), pp. 1796&ndash;1802, <a href="http://www.jstor.org/stable/30046149">available online</a>
+
+     <br><dt>[Jai91]<dd>R. Jain, <cite>The Art of Computer Systems Performance Analysis</cite>,
 Wiley, 1991, p. 577.
 
      <br><dt>[HsLa87]<dd>C. H. Hsieh and S. Lam,
@@ -4968,35 +5087,40 @@
      <br><dt>[LZGS84]<dd>Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. 
 Sevcik, <cite>Quantitative System Performance: Computer System
 Analysis Using Queueing Network Models</cite>, Prentice Hall,
-1984. <a href="http://www.cs.washington.edu/homes/lazowska/qsp/">http://www.cs.washington.edu/homes/lazowska/qsp/</a>.
+1984. <a href="http://www.cs.washington.edu/homes/lazowska/qsp/">available online</a>.
 
      <br><dt>[ReKo76]<dd>M. Reiser, H. Kobayashi, <cite>On The Convolution Algorithm for
 Separable Queueing Networks</cite>, In Proceedings of the 1976 ACM
 SIGMETRICS Conference on Computer Performance Modeling Measurement and
 Evaluation (Cambridge, Massachusetts, United States, March 29&ndash;31,
 1976). SIGMETRICS '76. ACM, New York, NY,
-pp. 109&ndash;117. <a href="http://doi.acm.org/10.1145/800200.806187">http://doi.acm.org/10.1145/800200.806187</a>
+pp. 109&ndash;117. DOI <a href="http://doi.acm.org/10.1145/800200.806187">10.1145/800200.806187</a>
 
      <br><dt>[ReLa80]<dd>M. Reiser and S. S. Lavenberg, <cite>Mean-Value Analysis of Closed
 Multichain Queuing Networks</cite>, Journal of the ACM, vol. 27, n. 2, April
-1980, pp. 313&ndash;322. <a href="http://doi.acm.org/10.1145/322186.322195">http://doi.acm.org/10.1145/322186.322195</a>
+1980, pp. 313&ndash;322. DOI <a href="http://doi.acm.org/10.1145/322186.322195">10.1145/322186.322195</a>
+
+     <br><dt>[Sch79]<dd>P. Schweitzer, <cite>Approximate Analysis of Multiclass Closed Networks of
+Queues</cite>, Proc. Int. Conf. on Stochastic Control and Optimization, jun
+1979, pp. 25&ndash;29.
 
      <br><dt>[Sch81]<dd>Herb Schwetman, <cite>Some Computational
-Aspects of Queueing Network Models</cite>, Technical Report CSD-TR-354,
+Aspects of Queueing Network Models</cite>, <a href="http://www.cs.purdue.edu/research/technical_reports/1980/TR%2080-354.pdf">Technical Report CSD-TR-354</a>,
 Department of Computer Sciences, Purdue University, feb, 1981
-(revised). 
-<a href="http://www.cs.purdue.edu/research/technical_reports/1980/TR%2080-354.pdf">http://www.cs.purdue.edu/research/technical_reports/1980/TR%2080-354.pdf</a>
+(revised).
 
      <br><dt>[Sch82]<dd>Herb Schwetman, <cite>Implementing the Mean Value Algorithm for the
-Solution of Queueing Network Models</cite>, Technical Report CSD-TR-355,
-Department of Computer Sciences, Purdue University, feb 15, 1982,
-available at
-<a href="http://www.cs.purdue.edu/research/technical_reports/1980/TR%2080-355.pdf">http://www.cs.purdue.edu/research/technical_reports/1980/TR%2080-355.pdf</a>
+Solution of Queueing Network Models</cite>, <a href="http://www.cs.purdue.edu/research/technical_reports/1980/TR%2080-355.pdf">Technical Report CSD-TR-355</a>,
+Department of Computer Sciences, Purdue University, feb 15, 1982.
+
+     <br><dt>[Tij03]<dd>H. C. Tijms, <cite>A First Course in Stochastic Models</cite>,
+John Wiley and Sons, 2003, ISBN 0471498807, ISBN 9780471498803,
+DOI <a href="http://dx.doi.org/10.1002/047001363X">10.1002/047001363X</a>
 
      <br><dt>[ZaWo81]<dd>Zahorjan, J. and Wong, E. <cite>The solution of separable queueing
 network models using mean value analysis</cite>. SIGMETRICS
 Perform. Eval. Rev. 10, 3 (Sep. 1981), 80-85. DOI
-<a href="http://doi.acm.org/10.1145/1010629.805477">http://doi.acm.org/10.1145/1010629.805477</a>
+<a href="http://doi.acm.org/10.1145/1010629.805477">10.1145/1010629.805477</a>
 
 </dl>
 
@@ -5816,7 +5940,7 @@
 <h2 class="unnumbered">Concept Index</h2>
 
 <ul class="index-cp" compact>
-<li><a href="#index-Absorption-probabilities-26">Absorption probabilities</a>: <a href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a></li>
+<li><a href="#index-Absorption-probabilities-23">Absorption probabilities</a>: <a href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a></li>
 <li><a href="#index-Approximate-MVA-186">Approximate MVA</a>: <a href="#Algorithms-for-Product_002dForm-QNs">Algorithms for Product-Form QNs</a></li>
 <li><a href="#index-Asymmetric-_0040math_007bM_002fM_002fm_007d-system-86">Asymmetric M/M/m system</a>: <a href="#The-Asymmetric-M_002fM_002fm-System">The Asymmetric M/M/m System</a></li>
 <li><a href="#index-BCMP-network-141">BCMP network</a>: <a href="#Algorithms-for-Product_002dForm-QNs">Algorithms for Product-Form QNs</a></li>
@@ -5845,8 +5969,8 @@
 <li><a href="#index-Expected-sojourn-time-42">Expected sojourn time</a>: <a href="#Expected-sojourn-times-_0028CTMC_0029">Expected sojourn times (CTMC)</a></li>
 <li><a href="#index-Expected-sojourn-times-16">Expected sojourn times</a>: <a href="#Expected-number-of-visits-_0028DTMC_0029">Expected number of visits (DTMC)</a></li>
 <li><a href="#index-First-passage-times-56">First passage times</a>: <a href="#First-passage-times-_0028CTMC_0029">First passage times (CTMC)</a></li>
-<li><a href="#index-First-passage-times-21">First passage times</a>: <a href="#First-passage-times-_0028DTMC_0029">First passage times (DTMC)</a></li>
-<li><a href="#index-Fundamental-matrix-27">Fundamental matrix</a>: <a href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a></li>
+<li><a href="#index-First-passage-times-26">First passage times</a>: <a href="#First-passage-times-_0028DTMC_0029">First passage times (DTMC)</a></li>
+<li><a href="#index-Fundamental-matrix-24">Fundamental matrix</a>: <a href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a></li>
 <li><a href="#index-Jackson-network-111">Jackson network</a>: <a href="#Algorithms-for-Product_002dForm-QNs">Algorithms for Product-Form QNs</a></li>
 <li><a href="#index-load_002ddependent-service-center-130">load-dependent service center</a>: <a href="#Algorithms-for-Product_002dForm-QNs">Algorithms for Product-Form QNs</a></li>
 <li><a href="#index-g_t_0040math_007bM_002fG_002f1_007d-system-92">M/G/1 system</a>: <a href="#The-M_002fG_002f1-System">The M/G/1 System</a></li>
@@ -5868,9 +5992,9 @@
 <li><a href="#index-Markov-chain_002c-state-occupancy-probabilities-34">Markov chain, state occupancy probabilities</a>: <a href="#State-occupancy-probabilities-_0028CTMC_0029">State occupancy probabilities (CTMC)</a></li>
 <li><a href="#index-Markov-chain_002c-stationary-probabilities-7">Markov chain, stationary probabilities</a>: <a href="#State-occupancy-probabilities-_0028DTMC_0029">State occupancy probabilities (DTMC)</a></li>
 <li><a href="#index-Markov-chain_002c-transient-probabilities-9">Markov chain, transient probabilities</a>: <a href="#State-occupancy-probabilities-_0028DTMC_0029">State occupancy probabilities (DTMC)</a></li>
-<li><a href="#index-Mean-recurrence-times-22">Mean recurrence times</a>: <a href="#First-passage-times-_0028DTMC_0029">First passage times (DTMC)</a></li>
+<li><a href="#index-Mean-recurrence-times-27">Mean recurrence times</a>: <a href="#First-passage-times-_0028DTMC_0029">First passage times (DTMC)</a></li>
 <li><a href="#index-Mean-time-to-absorption-49">Mean time to absorption</a>: <a href="#Mean-time-to-absorption-_0028CTMC_0029">Mean time to absorption (CTMC)</a></li>
-<li><a href="#index-Mean-time-to-absorption-25">Mean time to absorption</a>: <a href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a></li>
+<li><a href="#index-Mean-time-to-absorption-22">Mean time to absorption</a>: <a href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a></li>
 <li><a href="#index-Mean-Value-Analysys-_0028MVA_0029-156">Mean Value Analysys (MVA)</a>: <a href="#Algorithms-for-Product_002dForm-QNs">Algorithms for Product-Form QNs</a></li>
 <li><a href="#index-Mean-Value-Analysys-_0028MVA_0029_002c-approximate-185">Mean Value Analysys (MVA), approximate</a>: <a href="#Algorithms-for-Product_002dForm-QNs">Algorithms for Product-Form QNs</a></li>
 <li><a href="#index-mixed-network-228">mixed network</a>: <a href="#Algorithms-for-Product_002dForm-QNs">Algorithms for Product-Form QNs</a></li>
@@ -5916,8 +6040,8 @@
 <li><a href="#index-dtmc_005fcheck_005fP-1"><code>dtmc_check_P</code></a>: <a href="#Discrete_002dTime-Markov-Chains">Discrete-Time Markov Chains</a></li>
 <li><a href="#index-dtmc_005fexps-17"><code>dtmc_exps</code></a>: <a href="#Time_002daveraged-expected-sojourn-times-_0028DTMC_0029">Time-averaged expected sojourn times (DTMC)</a></li>
 <li><a href="#index-dtmc_005fexps-14"><code>dtmc_exps</code></a>: <a href="#Expected-number-of-visits-_0028DTMC_0029">Expected number of visits (DTMC)</a></li>
-<li><a href="#index-dtmc_005ffpt-20"><code>dtmc_fpt</code></a>: <a href="#First-passage-times-_0028DTMC_0029">First passage times (DTMC)</a></li>
-<li><a href="#index-dtmc_005fmtta-23"><code>dtmc_mtta</code></a>: <a href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a></li>
+<li><a href="#index-dtmc_005ffpt-25"><code>dtmc_fpt</code></a>: <a href="#First-passage-times-_0028DTMC_0029">First passage times (DTMC)</a></li>
+<li><a href="#index-dtmc_005fmtta-20"><code>dtmc_mtta</code></a>: <a href="#Mean-time-to-absorption-_0028DTMC_0029">Mean time to absorption (DTMC)</a></li>
 <li><a href="#index-population_005fmix-291"><code>population_mix</code></a>: <a href="#Utility-functions">Utility functions</a></li>
 <li><a href="#index-qnammm-85"><code>qnammm</code></a>: <a href="#The-Asymmetric-M_002fM_002fm-System">The Asymmetric M/M/m System</a></li>
 <li><a href="#index-qnclosed-285"><code>qnclosed</code></a>: <a href="#Utility-functions">Utility functions</a></li>
Binary file main/queueing/doc/queueing.pdf has changed