
@c -*- texinfo -*-

@c Copyright (C) 2008, 2009, 2010, 2011, 2012 Moreno Marzolla
@c
@c This file is part of the queueing toolbox, a Queueing Networks
@c analysis package for GNU Octave.
@c
@c The queueing toolbox is free software; you can redistribute it
@c and/or modify it under the terms of the GNU General Public License
@c as published by the Free Software Foundation; either version 3 of
@c the License, or (at your option) any later version.
@c
@c The queueing toolbox is distributed in the hope that it will be
@c useful, but WITHOUT ANY WARRANTY; without even the implied warranty
@c of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
@c GNU General Public License for more details.
@c
@c You should have received a copy of the GNU General Public License
@c along with the queueing toolbox; see the file COPYING.  If not, see
@c <http://www.gnu.org/licenses/>.

@node Markov Chains
@chapter Markov Chains

@menu
* Discrete-Time Markov Chains::
* Continuous-Time Markov Chains::
@end menu

@node Discrete-Time Markov Chains
@section Discrete-Time Markov Chains

Let @math{X_0, X_1, @dots{}, X_n, @dots{} } be a sequence of random
variables defined over a discrete state space @math{0, 1, 2,
@dots{}}. The sequence @math{X_0, X_1, @dots{}, X_n, @dots{}}  is a
@emph{stochastic process} with discrete time @math{0, 1, 2,
@dots{}}. A @emph{Markov chain} is a stochastic process @math{@{X_n,
n=0, 1, 2, @dots{}@}} which satisfies the following Markov property:

@iftex
@tex
$$\eqalign{P\left(X_{n+1} = x_{n+1}\ |\ X_n = x_n, X_{n-1} = x_{n-1}, \ldots, X_0 = x_0 \right) \cr
& = P\left(X_{n+1} = x_{n+1}\ |\ X_n = x_n\right)}$$
@end tex
@end iftex
@ifnottex
@math{P(X_{n+1} = x_{n+1} | X_n = x_n, X_{n-1} = x_{n-1}, ..., X_0 = x_0) = P(X_{n+1} = x_{n+1} | X_n = x_n)}
@end ifnottex

@noindent which means that the probability that the system is in a
particular state at time @math{n+1} depends only on the state the
system was in at time @math{n}.

The evolution of a Markov chain with finite state space @math{@{1, 2,
@dots{}, N@}} can be fully described by a stochastic matrix @math{{\bf
P}(n) = [ P_{i,j}(n) ]} such that @math{P_{i, j}(n) = P( X_{n+1} = j\
|\ X_n = i )}.  If the Markov chain is homogeneous (that is, the
transition probability matrix @math{{\bf P}(n)} is time-independent),
we can write @math{{\bf P} = [P_{i, j}]}, where @math{P_{i, j} = P(
X_{n+1} = j\ |\ X_n = i )} for all @math{n=0, 1, @dots{}}.

The transition probability matrix @math{\bf P} must satisfy the
following two properties: (1) @math{P_{i, j} @geq{} 0} for all
@math{i, j}, and (2) @math{\sum_{j=1}^N P_{i,j} = 1} for all @math{i}.
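
For illustration, the two properties above can be checked directly
with a few lines of plain Octave (a minimal sketch; the
@code{dtmc_check_P} function documented below performs this kind of
validation within the toolbox):

@example
@group
P = [ 0    1/2  1/2 ;
      1/3  1/3  1/3 ;
      0    0    1   ];               # a 3x3 transition probability matrix
all (P(:) >= 0)                      # property (1): nonnegative entries
all (abs (sum (P, 2) - 1) < 10*eps)  # property (2): every row sums to one
@end group
@end example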

@c
@DOCSTRING(dtmc_check_P)

@menu
* State occupancy probabilities (DTMC)::
* Birth-death process (DTMC)::
* Expected number of visits (DTMC)::
* Time-averaged expected sojourn times (DTMC)::
* Mean time to absorption (DTMC)::
* First passage times (DTMC)::
@end menu

@c
@c
@c
@node State occupancy probabilities (DTMC)
@subsection State occupancy probabilities

We denote with @math{{\bf \pi}(n) = \left(\pi_1(n), \pi_2(n), @dots{},
\pi_N(n) \right)} the @emph{state occupancy probability vector} at
step @math{n}. @math{\pi_i(n)} denotes the probability that the system
is in state @math{i} after @math{n} transitions.

Given the transition probability matrix @math{\bf P} and the initial
state occupancy probability vector @math{{\bf \pi}(0) =
\left(\pi_1(0), \pi_2(0), @dots{}, \pi_N(0)\right)}, @math{{\bf
\pi}(n)} can be computed as:

@iftex
@tex
$${\bf \pi}(n) = {\bf \pi}(0) {\bf P}^n$$
@end tex
@end iftex
@ifnottex
@example
@group
\pi(n) = \pi(0) P^n
@end group
@end example
@end ifnottex
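
For example, given a transition matrix @code{P} and an initial
occupancy vector @code{p0} (a minimal sketch, assuming both are
already defined as Octave variables), the occupancy probabilities
after @math{n} steps are:

@example
@group
n   = 5;            # number of transitions (arbitrary)
pin = p0 * P^n;     # state occupancy probabilities after n steps
@end group
@end example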

Under certain conditions, there exists a @emph{stationary state
occupancy probability} @math{{\bf \pi} = \lim_{n \rightarrow +\infty}
{\bf \pi}(n)}, which is independent of @math{{\bf \pi}(0)}. The
stationary vector @math{\bf \pi} is the solution of the following
linear system:

@iftex
@tex
$$
\left\{ \eqalign{
{\bf \pi P} & = {\bf \pi} \cr
{\bf \pi 1}^T & = 1
} \right.
$$
@end tex
@end iftex
@ifnottex
@example
@group
/
| \pi P   = \pi
| \pi 1^T = 1
\
@end group
@end example
@end ifnottex

@noindent where @math{\bf 1} is the row vector of ones, and @math{( \cdot )^T}
the transpose operator.
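
The system above can be solved in plain Octave by replacing one of
the balance equations with the normalization condition (a minimal
sketch, assuming the chain is irreducible so that the solution is
unique; the @code{dtmc} function documented below can be used
instead):

@example
@group
N       = rows (P);
A       = P' - eye (N);    # transpose of (P - I)
A(N, :) = ones (1, N);     # replace one equation with sum(p) = 1
b       = zeros (N, 1);
b(N)    = 1;
p       = (A \ b)';        # stationary occupancy probability vector
@end group
@end example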

@c
@DOCSTRING(dtmc)

@noindent @strong{EXAMPLE}

This example is from [GrSn97]. Let us consider a maze with nine rooms,
as shown in the following figure

@example
@group
+-----+-----+-----+
|     |     |     |
|  1     2     3  |
|     |     |     |
+-   -+-   -+-   -+
|     |     |     |
|  4     5     6  |
|     |     |     |
+-   -+-   -+-   -+
|     |     |     |
|  7     8     9  |
|     |     |     |
+-----+-----+-----+
@end group
@end example

A mouse is placed in one of the rooms and wanders around. At each
step, the mouse moves from the current room to a neighboring one with
equal probability: for example, if it is in room 1, it moves to room 2
or room 4, each with probability 1/2; if it is in room 8, it moves to
room 5, 7 or 9, each with probability 1/3.

The transition probability matrix @math{\bf P}, where @math{P_{i,j}}
is the probability of moving from room @math{i} to room @math{j}, is
the following:

@iftex
@tex
$$ {\bf P} = 
\pmatrix{ 0   & 1/2 & 0   & 1/2 & 0   & 0   & 0   & 0   & 0   \cr
          1/3 & 0   & 1/3 & 0   & 1/3 & 0   & 0   & 0   & 0   \cr
          0   & 1/2 & 0   & 0   & 0   & 1/2 & 0   & 0   & 0   \cr
          1/3 & 0   & 0   & 0   & 1/3 & 0   & 1/3 & 0   & 0   \cr
          0   & 1/4 & 0   & 1/4 & 0   & 1/4 & 0   & 1/4 & 0   \cr
          0   & 0   & 1/3 & 0   & 1/3 & 0   & 0   & 0   & 1/3 \cr
          0   & 0   & 0   & 1/2 & 0   & 0   & 0   & 1/2 & 0   \cr
          0   & 0   & 0   & 0   & 1/3 & 0   & 1/3 & 0   & 1/3 \cr
          0   & 0   & 0   & 0   & 0   & 1/2 & 0   & 1/2 & 0   }
$$
@end tex
@end iftex
@ifnottex
@example
@group
        / 0     1/2   0     1/2   0     0     0     0     0   \
        | 1/3   0     1/3   0     1/3   0     0     0     0   |
        | 0     1/2   0     0     0     1/2   0     0     0   |
        | 1/3   0     0     0     1/3   0     1/3   0     0   |
    P = | 0     1/4   0     1/4   0     1/4   0     1/4   0   |
        | 0     0     1/3   0     1/3   0     0     0     1/3 |
        | 0     0     0     1/2   0     0     0     1/2   0   |
        | 0     0     0     0     1/3   0     1/3   0     1/3 |
        \ 0     0     0     0     0     1/2   0     1/2   0   /
@end group
@end example
@end ifnottex

The stationary state occupancy probability vector can be computed
using the following code:

@example
@c @group
@verbatiminclude @value{top_srcdir}/examples/demo_1_dtmc.m
@c @end group
    @result{} 0.083333   0.125000   0.083333   0.125000   
       0.166667   0.125000   0.083333   0.125000   
       0.083333
@end example
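
If the demo script is not available, the same result can be obtained
by building @math{\bf P} by hand (a minimal sketch; the call to
@code{dtmc} assumes the toolbox is installed and that, as in the demo
above, @code{dtmc(P)} returns the stationary vector):

@example
@group
## adjacency of the 3x3 maze: room i communicates with room j
A = zeros (9);
A(1,[2 4])     = 1;  A(2,[1 3 5])   = 1;  A(3,[2 6])   = 1;
A(4,[1 5 7])   = 1;  A(5,[2 4 6 8]) = 1;  A(6,[3 5 9]) = 1;
A(7,[4 8])     = 1;  A(8,[5 7 9])   = 1;  A(9,[6 8])   = 1;
P = diag (1 ./ sum (A, 2)) * A;  # each neighbor chosen with equal probability
p = dtmc (P)                     # stationary occupancy probabilities
@end group
@end example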

@c
@node Birth-death process (DTMC)
@subsection Birth-death process

@DOCSTRING(dtmc_bd)

@c
@node Expected number of visits (DTMC)
@subsection Expected Number of Visits

Given an @math{N}-state discrete-time Markov chain with transition
matrix @math{\bf P} and an integer @math{n @geq{} 0}, we let
@math{L_i(n)} be the expected number of visits to state @math{i}
during the first @math{n} transitions. The vector @math{{\bf L}(n) =
( L_1(n), L_2(n), @dots{}, L_N(n) )} is defined as

@iftex
@tex
$$ {\bf L}(n) = \sum_{i=0}^n {\bf \pi}(i) = \sum_{i=0}^n {\bf \pi}(0) {\bf P}^i $$
@end tex
@end iftex
@ifnottex
@example
@group
         n            n
        ___          ___
       \            \           i
L(n) =  >   pi(i) =  >   pi(0) P
       /___         /___
        i=0          i=0
@end group
@end example
@end ifnottex

@noindent where @math{{\bf \pi}(i) = {\bf \pi}(0){\bf P}^i} is the state 
occupancy probability after @math{i} transitions.
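
The sum can be evaluated directly in plain Octave (a minimal sketch,
assuming @code{P} and the initial occupancy vector @code{p0} are
already defined; the @code{dtmc_exps} function documented below
provides this computation):

@example
@group
n  = 10;                  # number of transitions to consider (arbitrary)
L  = zeros (size (p0));
pn = p0;                  # pi(0)
for i = 0 : n
  L  += pn;               # accumulate pi(i)
  pn *= P;                # pi(i+1) = pi(i) P
endfor
@end group
@end example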

If the chain is absorbing, i.e., with probability 1 the process
eventually reaches a state that it never leaves, then we can compute
the expected number of visits until absorption @math{\bf L}. To do
so, we first renumber the states so that matrix @math{\bf P} can be
written as:

@iftex
@tex
$$ {\bf P} = \pmatrix{ {\bf Q} & {\bf R} \cr
                       {\bf 0} & {\bf I} }$$
@end tex
@end iftex
@ifnottex
@example
@group
    / Q | R \
P = |---+---|
    \ 0 | I /
@end group
@end example
@end ifnottex

@noindent where the first @math{t} states are transient
and the last @math{r} states are absorbing (@math{t+r = N}). The
matrix @math{{\bf N} = ({\bf I} - {\bf Q})^{-1}} is called the
@emph{fundamental matrix}; @math{N_{i,j}} is the expected number of
times that the process is in the @math{j}-th transient state if it
started in the @math{i}-th transient state. If we pad @math{\bf N}
to the size of @math{\bf P} (filling the missing entries with zeros),
then for absorbing chains @math{{\bf L} = {\bf \pi}(0){\bf N}}.
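
A minimal plain-Octave sketch of this computation, assuming the
states have already been renumbered so that the @math{t} transient
states come first and that @code{p0} is the initial occupancy vector:

@example
@group
t  = 3;                   # number of transient states (assumption)
Q  = P(1:t, 1:t);         # transitions among transient states
Nf = inv (eye (t) - Q);   # fundamental matrix
L  = p0(1:t) * Nf;        # expected visits until absorption
@end group
@end example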

@DOCSTRING(dtmc_exps)

@c
@node Time-averaged expected sojourn times (DTMC)
@subsection Time-averaged expected sojourn times

@DOCSTRING(dtmc_taexps)

@c
@node Mean time to absorption (DTMC)
@subsection Mean Time to Absorption

The @emph{mean time to absorption} is defined as the average number
of transitions required to reach an absorbing state, starting from a
transient state (or from an initial state occupancy probability
vector @math{{\bf \pi}(0)}).

Let @math{t_i} be the expected number of transitions before being
absorbed in any absorbing state, starting from transient state
@math{i}. The vector @math{\bf t} can be computed from the fundamental
matrix @math{\bf N} (@pxref{Expected number of visits (DTMC)}) as

@iftex
@tex
$$ {\bf t} = {\bf N} {\bf 1}^T $$
@end tex
@end iftex
@ifnottex
@example
t = N 1^T
@end example
@end ifnottex

@noindent where @math{\bf 1} is the row vector of ones; that is,
@math{t_i} is the sum of the elements of the @math{i}-th row of
@math{\bf N}.

Let @math{{\bf B} = [ B_{i, j} ]} be a matrix where @math{B_{i, j}} is
the probability of being absorbed in state @math{j}, starting from
transient state @math{i}. Again, using matrices @math{\bf N} and
@math{\bf R} (@pxref{Expected number of visits (DTMC)}) we can write

@iftex
@tex
$$ {\bf B} = {\bf N R} $$
@end tex
@end iftex
@ifnottex
@example
B = N R
@end example
@end ifnottex
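
Continuing the plain-Octave sketch shown earlier (@pxref{Expected number of visits (DTMC)}),
with @code{Nf} the fundamental matrix and @code{t} the number of
transient states, both quantities can be computed as:

@example
@group
R    = P(1:t, t+1:end);   # transient-to-absorbing transition probabilities
tabs = Nf * ones (t, 1);  # mean time to absorption from each transient state
B    = Nf * R;            # absorption probabilities
@end group
@end example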

@DOCSTRING(dtmc_mtta)

@c
@node First passage times (DTMC)
@subsection First Passage Times

The First Passage Time @math{M_{i, j}} is the average number of
transitions needed to visit state @math{j} for the first time,
starting from state @math{i}. Matrix @math{\bf M} satisfies the
property that

@iftex
@tex
$$ M_{i, j} = 1 + \sum_{k \neq j} P_{i, k} M_{k, j}$$
@end tex
@end iftex
@ifnottex
@example
@group
           ___
          \
M_ij = 1 + >   P_ik * M_kj
          /___
          k!=j
@end group
@end example
@end ifnottex

To compute @math{{\bf M} = [ M_{i, j}]} a different formulation is
used.  Let @math{\bf W} be the @math{N \times N} matrix having each
row equal to the steady-state probability vector @math{\bf \pi} for
@math{\bf P}; let @math{\bf I} be the @math{N \times N} identity
matrix. Define @math{\bf Z} as follows:

@iftex
@tex
$$ {\bf Z} = \left( {\bf I} - {\bf P} + {\bf W} \right)^{-1} $$
@end tex
@end iftex
@ifnottex
@example
@group
               -1
Z = (I - P + W)
@end group
@end example
@end ifnottex

@noindent Then, we have that

@iftex
@tex
$$ M_{i, j} = {Z_{j, j} - Z_{i, j} \over \pi_j} $$
@end tex
@end iftex
@ifnottex
@example
@group
       Z_jj - Z_ij
M_ij = -----------
          \pi_j
@end group
@end example
@end ifnottex

According to the definition above, @math{M_{i,i} = 0}. By convention,
we define @math{M_{i,i}} to be the @emph{mean recurrence time}
@math{r_i} of state @math{i}, that is, the average number of
transitions needed to return to state @math{i} when starting from
it. @math{r_i} is given by:

@iftex
@tex
$$ r_i = {1 \over \pi_i} $$
@end tex
@end iftex
@ifnottex
@example
@group
        1
r_i = -----
      \pi_i
@end group
@end example
@end ifnottex
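
A minimal plain-Octave sketch of the computation described above,
assuming @code{P} is irreducible and @code{p} is its stationary
occupancy probability vector
(@pxref{State occupancy probabilities (DTMC)});
the @code{dtmc_fpt} function documented below provides this
computation:

@example
@group
N = rows (P);
W = repmat (p, N, 1);           # every row equals the stationary vector
Z = inv (eye (N) - P + W);
M = (repmat (diag (Z)', N, 1) - Z) ./ repmat (p, N, 1);  # (Z_jj - Z_ij)/p_j
M(1:N+1:end) = 1 ./ p;          # diagonal: mean recurrence times r_i = 1/p_i
@end group
@end example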

@DOCSTRING(dtmc_fpt)

@c
@c
@c
@node Continuous-Time Markov Chains
@section Continuous-Time Markov Chains

A stochastic process @math{@{X(t), t @geq{} 0@}} is a continuous-time
Markov chain if, for all integers @math{n}, and for any sequence
@math{t_0, t_1 , \ldots, t_n, t_{n+1}} such that @math{t_0 < t_1 <
\ldots < t_n < t_{n+1}}, we have

@iftex
@tex
$$\eqalign{P(X(t_{n+1}) = x_{n+1}\ |\ X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, \ldots, X(t_0) = x_0) \cr
&= P(X(t_{n+1}) = x_{n+1}\ |\ X(t_n) = x_n)}$$
@end tex
@end iftex
@ifnottex
@math{P(X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, ..., X(t_0) = x_0) = P(X(t_{n+1}) = x_{n+1} | X(t_n) = x_n)}
@end ifnottex

A continuous-time Markov chain is defined according to an
@emph{infinitesimal generator matrix} @math{{\bf Q} = [Q_{i,j}]},
where for each @math{i \neq j}, @math{Q_{i, j}} is the transition rate
from state @math{i} to state @math{j}. The matrix @math{\bf Q} must
satisfy the property that, for all @math{i}, @math{\sum_{j=1}^N Q_{i,
j} = 0}.
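
For illustration, here is a valid generator matrix for a two-state
chain, together with a check of the zero row-sum property (a minimal
sketch; the @code{ctmc_check_Q} function documented below performs
this kind of validation within the toolbox):

@example
@group
Q = [ -1   1 ;
       1  -1 ];                  # two-state CTMC, both rates equal to 1
all (abs (sum (Q, 2)) < 10*eps)  # every row must sum to zero
@end group
@end example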

@DOCSTRING(ctmc_check_Q)

@menu
* State occupancy probabilities (CTMC)::
* Birth-death process (CTMC)::
* Expected sojourn times (CTMC)::
* Time-averaged expected sojourn times (CTMC)::
* Mean time to absorption (CTMC)::
* First passage times (CTMC)::
@end menu

@node State occupancy probabilities (CTMC)
@subsection State occupancy probabilities

Similarly to the discrete case, we denote with @math{{\bf \pi}(t) =
(\pi_1(t), \pi_2(t), @dots{}, \pi_N(t) )} the @emph{state occupancy
probability vector} at time @math{t}. @math{\pi_i(t)} is the
probability that the system is in state @math{i} at time @math{t
@geq{} 0}.

Given the infinitesimal generator matrix @math{\bf Q} and the initial
state occupancy probabilities @math{{\bf \pi}(0) = (\pi_1(0),
\pi_2(0), @dots{}, \pi_N(0))}, the state occupancy probabilities
@math{{\bf \pi}(t)} at time @math{t} can be computed as:

@iftex
@tex
$${\bf \pi}(t) = {\bf \pi}(0) \exp( {\bf Q} t )$$
@end tex
@end iftex
@ifnottex
@example
@group
\pi(t) = \pi(0) exp(Qt)
@end group
@end example
@end ifnottex

@noindent where @math{\exp( {\bf Q} t )} is the matrix exponential
of @math{{\bf Q} t}. Under certain conditions, there exists a
@emph{stationary state occupancy probability} @math{{\bf \pi} =
\lim_{t \rightarrow +\infty} {\bf \pi}(t)}, which is independent of
@math{{\bf \pi}(0)}.  @math{\bf \pi} is the solution of the following
linear system:

@iftex
@tex
$$
\left\{ \eqalign{
{\bf \pi Q} & = {\bf 0} \cr
{\bf \pi 1}^T & = 1
} \right.
$$
@end tex
@end iftex
@ifnottex
@example
@group
/
| \pi Q   = 0
| \pi 1^T = 1
\
@end group
@end example
@end ifnottex
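
Both computations can be carried out in plain Octave (a minimal
sketch, reusing the two-state generator @code{Q} defined above; the
@code{ctmc} function documented below can be used instead):

@example
@group
q0 = [ 1 0 ];              # initial occupancy probabilities
t  = 1.0;
qt = q0 * expm (Q*t)       # transient solution pi(t) = pi(0) exp(Qt)
N       = rows (Q);
A       = Q';
A(N, :) = ones (1, N);     # replace one balance equation with sum(q) = 1
b       = zeros (N, 1);
b(N)    = 1;
q       = (A \ b)'         # stationary occupancy probabilities
@end group
@end example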

@DOCSTRING(ctmc)

@noindent @strong{EXAMPLE}

Consider a two-state CTMC where both transition rates are equal
to 1. Its stationary occupancy probabilities can be computed as follows:

@example
@group
@verbatiminclude @value{top_srcdir}/examples/demo_1_ctmc.m
    @result{} q = 0.50000   0.50000
@end group
@end example

@c
@c
@c
@node Birth-death process (CTMC)
@subsection Birth-Death Process

@DOCSTRING(ctmc_bd)

@c
@c
@c
@node Expected sojourn times (CTMC)
@subsection Expected Sojourn Times

Given an @math{N}-state continuous-time Markov chain with infinitesimal
generator matrix @math{\bf Q}, we define the vector @math{{\bf L}(t) =
(L_1(t), L_2(t), \ldots, L_N(t))} such that @math{L_i(t)} is the
expected sojourn time in state @math{i} during the interval
@math{[0,t)}, given the initial state occupancy probability vector
@math{{\bf \pi}(0)}. @math{{\bf L}(t)} can be expressed as the
solution of the following differential equation:

@iftex
@tex
$$ { d{\bf L}(t) \over dt} = {\bf L}(t){\bf Q} + {\bf \pi}(0), \qquad {\bf L}(0) = {\bf 0} $$
@end tex
@end iftex
@ifnottex
@example
@group
 dL
 --(t) = L(t) Q + pi(0),    L(0) = 0
 dt
@end group
@end example
@end ifnottex

Alternatively, @math{{\bf L}(t)} can also be expressed in integral
form as:

@iftex
@tex
$$ {\bf L}(t) = \int_0^t {\bf \pi}(u) du$$
@end tex
@end iftex
@ifnottex
@example
@group
       / t
L(t) = |   pi(u) du
       / 0
@end group
@end example
@end ifnottex

@noindent where @math{{\bf \pi}(t) = {\bf \pi}(0) \exp({\bf Q}t)} is
the state occupancy probability at time @math{t}; @math{\exp({\bf Q}t)}
is the matrix exponential of @math{{\bf Q}t}.
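
The differential equation above can be integrated numerically with
Octave's @code{lsode} solver (a minimal sketch, reusing the two-state
generator @code{Q} and initial vector @code{q0} defined earlier; the
@code{ctmc_exps} function documented below provides this
computation):

@example
@group
f = @@(L, t) Q' * L + q0';   # right-hand side of dL/dt = L Q + pi(0)
L = lsode (f, zeros (rows (Q), 1), [0 1]);
L(end, :)                    # expected sojourn times during [0, 1)
@end group
@end example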

@DOCSTRING(ctmc_exps)

@noindent @strong{EXAMPLE}

Let us consider a pure-birth CTMC with four states, where the
transition rate from state @math{i} to state @math{i+1} is
@math{\lambda_i = i \lambda} (@math{i=1, 2, 3}), with @math{\lambda =
0.5}. The following code computes the expected sojourn time in each
state @math{i}, given the initial occupancy probability vector
@math{{\bf \pi}(0)=(1,0,0,0)}.

@example
@group
@verbatiminclude @value{top_srcdir}/examples/demo_1_ctmc_exps.m
@end group
@end example

@c
@c
@c
@node Time-averaged expected sojourn times (CTMC)
@subsection Time-Averaged Expected Sojourn Times

@DOCSTRING(ctmc_taexps)

@noindent @strong{EXAMPLE}

@example
@group
@verbatiminclude @value{top_srcdir}/examples/demo_1_ctmc_taexps.m
@end group
@end example

@c
@c
@c
@node Mean time to absorption (CTMC)
@subsection Mean Time to Absorption

If we consider a Markov chain with absorbing states, it is possible
to define the @emph{expected time to absorption} as the expected time
until the system enters an absorbing state. More specifically, let us
suppose that @math{A} is the set of transient (i.e., non-absorbing)
states of a CTMC with @math{N} states and infinitesimal generator
matrix @math{\bf Q}. The vector of expected sojourn times until
absorption @math{{\bf L}_A(\infty)} is the solution of the following
equation:

@iftex
@tex
$$ {\bf L}_A(\infty){\bf Q}_A = -{\bf \pi}_A(0) $$
@end tex
@end iftex
@ifnottex
@example
@group
L_A( inf ) Q_A = -pi_A(0)
@end group
@end example
@end ifnottex

@noindent where @math{{\bf Q}_A} is the restriction of matrix @math{\bf Q}
to the states in @math{A}, and @math{{\bf \pi}_A(0)} is the initial
state occupancy probability vector restricted to the states in
@math{A}. The mean time to absorption is then the sum of the elements
of @math{{\bf L}_A(\infty)}.
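
A minimal plain-Octave sketch, assuming @code{Q} is the full
generator matrix, @code{A} is the index vector of the transient
states, and @code{q0} is the initial occupancy vector (the
@code{ctmc_mtta} function documented below provides this
computation):

@example
@group
QA   = Q(A, A);      # generator restricted to the transient states
pA   = q0(A);        # initial probabilities of the transient states
LA   = -pA / QA;     # solve L_A Q_A = -pi_A(0)
mtta = sum (LA)      # mean time to absorption
@end group
@end example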

@DOCSTRING(ctmc_mtta)

@noindent @strong{EXAMPLE}

Let us consider a simple model of a redundant disk array. We assume
that the array is made of 5 independent disks, such that the array can
tolerate up to 2 disk failures without losing data. If three or more
disks break, the array is dead and unrecoverable. We want to estimate
the Mean-Time-To-Failure (MTTF) of the disk array.

We model this system as a Markov chain with four states and state
space @math{\{ 2, 3, 4, 5 \}}. State @math{i} denotes that exactly
@math{i} disks are working; state @math{2} is absorbing. Let @math{\mu}
be the failure rate of a single disk. The system starts in state
@math{5} (all disks operational) and evolves as a pure death process,
where the death rate from state @math{i} to state @math{i-1} is
@math{i \mu}, for @math{i = 3, 4, 5}.

The MTTF of the disk array is the MTTA of the Markov chain, and can
be computed with the following code:

@example
@group
@verbatiminclude @value{top_srcdir}/examples/demo_1_ctmc_mtta.m
    @result{} t = 78.333
@end group
@end example
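
If the demo script is not available, the same result can be
reproduced directly; in the following minimal sketch the value
@math{\mu = 0.01} is an assumption, chosen to be consistent with the
result shown above:

@example
@group
mu = 0.01;                # single-disk failure rate (assumed value)
## states ordered as 2, 3, 4, 5 working disks; the first one is absorbing
Q = zeros (4);
Q(4,3) = 5*mu;            # one of 5 working disks fails
Q(3,2) = 4*mu;            # one of 4 working disks fails
Q(2,1) = 3*mu;            # one of 3 working disks fails
Q -= diag (sum (Q, 2));   # set the diagonal so that rows sum to zero
q0 = [0 0 0 1];           # the system starts with all 5 disks working
A  = 2:4;                 # transient states (3, 4 and 5 working disks)
LA = -q0(A) / Q(A, A);    # expected sojourn times until absorption
MTTF = sum (LA)           # => 78.333
@end group
@end example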

@noindent @strong{REFERENCES}

G. Bolch, S. Greiner, H. de Meer and
K. Trivedi, @cite{Queueing Networks and Markov Chains: Modeling and
Performance Evaluation with Computer Science Applications}, Wiley,
1998.

@auindex Bolch, G.
@auindex Greiner, S.
@auindex de Meer, H.
@auindex Trivedi, K.

@c
@c
@c
@node First passage times (CTMC)
@subsection First Passage Times

@DOCSTRING(ctmc_fpt)