view main/queueing/doc/markovchains.txi @ 9439:f9ab20ba5834 octave-forge
fixed typo in the documentation
author:   mmarzolla
date:     Wed, 15 Feb 2012 11:54:15 +0000
parents:  0bd7d96cb6e3
children: 8e744625e429
@c -*- texinfo -*-
@c Copyright (C) 2008, 2009, 2010, 2011, 2012 Moreno Marzolla
@c
@c This file is part of the queueing toolbox, a Queueing Networks
@c analysis package for GNU Octave.
@c
@c The queueing toolbox is free software; you can redistribute it
@c and/or modify it under the terms of the GNU General Public License
@c as published by the Free Software Foundation; either version 3 of
@c the License, or (at your option) any later version.
@c
@c The queueing toolbox is distributed in the hope that it will be
@c useful, but WITHOUT ANY WARRANTY; without even the implied warranty
@c of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
@c GNU General Public License for more details.
@c
@c You should have received a copy of the GNU General Public License
@c along with the queueing toolbox; see the file COPYING.  If not, see
@c <http://www.gnu.org/licenses/>.

@node Markov Chains
@chapter Markov Chains

@menu
* Discrete-Time Markov Chains::
* Continuous-Time Markov Chains::
@end menu

@node Discrete-Time Markov Chains
@section Discrete-Time Markov Chains

Let @math{X_0, X_1, @dots{}, X_n, @dots{}} be a sequence of random
variables, each one defined over a discrete state space
@math{0, 1, 2, @dots{}}. The sequence
@math{X_0, X_1, @dots{}, X_n, @dots{}} is a @emph{stochastic process}
with discrete time @math{0, 1, 2, @dots{}}. A @emph{Markov chain} is a
stochastic process @math{@{X_n, n=0, 1, 2, @dots{}@}} which satisfies
the following @emph{Markov property}:

@iftex
@tex
$$P(X_{n+1} = x_{n+1}\ |\ X_n = x_n, X_{n-1} = x_{n-1}, \ldots, X_0 = x_0) = P(X_{n+1} = x_{n+1}\ |\ X_n = x_n)$$
@end tex
@end iftex
@ifnottex
@math{P(X_{n+1} = x_{n+1} | X_n = x_n, X_{n-1} = x_{n-1}, ..., X_0 = x_0) = P(X_{n+1} = x_{n+1} | X_n = x_n)}
@end ifnottex

@noindent which means that the probability that the system is in a
particular state at time @math{n+1} depends only on the state the
system was in at time @math{n}.
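The Markov property means a chain can be simulated by looking only at the current state. The following pure-Python sketch (an illustration only; the toolbox itself is written in Octave, and the two-state matrix below is a made-up example) estimates a one-step transition probability from a simulated trajectory and checks that it matches the corresponding entry of the transition matrix, regardless of the earlier history:

```python
import random

# Hypothetical two-state chain: P[i][j] = P(X_{n+1} = j | X_n = i).
P = [[0.5, 0.5],
     [0.2, 0.8]]

def step(state, rng):
    # By the Markov property, the next state is drawn from the row of P
    # indexed by the current state only; earlier history is irrelevant.
    return 0 if rng.random() < P[state][0] else 1

rng = random.Random(1234)
trajectory = [0]
for _ in range(100_000):
    trajectory.append(step(trajectory[-1], rng))

# Empirical estimate of P(X_{n+1} = 1 | X_n = 0); it should be close
# to P[0][1] = 0.5 no matter what happened before time n.
num = sum(1 for a, b in zip(trajectory, trajectory[1:]) if a == 0 and b == 1)
den = sum(1 for a in trajectory[:-1] if a == 0)
estimate = num / den
```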
The evolution of a Markov chain with finite state space
@math{@{1, 2, @dots{}, N@}} can be fully described by a stochastic
matrix @math{{\bf P}(n) = P_{i,j}(n)} such that
@math{P_{i, j}(n) = P( X_{n+1} = j\ |\ X_n = i )}. If the Markov chain
is homogeneous (that is, the transition probability matrix
@math{{\bf P}(n)} is time-independent), we can simply write
@math{{\bf P} = P_{i, j}}, where
@math{P_{i, j} = P( X_{n+1} = j\ |\ X_n = i )} for all
@math{n=0, 1, 2, @dots{}}.

The transition probability matrix @math{\bf P} must satisfy the
following two properties: (1) @math{P_{i, j} @geq{} 0} for all
@math{i, j}, and (2) @math{\sum_{j=1}^N P_{i,j} = 1} for all @math{i}.

We denote with @math{{\bf \pi}(n) = (\pi_1(n), \pi_2(n), @dots{}, \pi_N(n) )}
the @emph{state occupancy probability vector} at step @math{n};
@math{\pi_i(n)} denotes the probability that the system is in state
@math{i} at step @math{n}. Given the transition probability matrix
@math{\bf P} and the initial state occupancy probability vector
@math{{\bf \pi}(0) = (\pi_1(0), \pi_2(0), @dots{}, \pi_N(0))} at
step 0, the state occupancy probability vector @math{{\bf \pi}(n)} at
step @math{n} can be computed as:

@iftex
@tex
$${\bf \pi}(n) = {\bf \pi}(0) {\bf P}^n$$
@end tex
@end iftex
@ifnottex
@example
@group
\pi(n) = \pi(0) P^n
@end group
@end example
@end ifnottex

Under certain conditions, there exists a @emph{stationary state
occupancy probability}
@math{{\bf \pi} = \lim_{n \rightarrow +\infty} {\bf \pi}(n)}, which is
independent of the initial state occupancy @math{{\bf \pi}(0)}. The
stationary state occupancy probability vector @math{\bf \pi} satisfies
@math{{\bf \pi} = {\bf \pi} {\bf P}}.

@DOCSTRING(dtmc)

@noindent @strong{EXAMPLE}

@example
@group
@verbatiminclude @value{top_srcdir}/examples/demo_1_dtmc.m
@end group
@end example

The First Passage Time @math{M_{i j}} is defined as the average number
of transitions needed to visit state @math{j} for the first time,
starting from state @math{i}.
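The evolution equation @math{\pi(n) = \pi(0) P^n} and the fixed-point property @math{\pi = \pi P} above can be illustrated with a short pure-Python sketch (the 2x2 matrix is a made-up example, not one of the toolbox demos):

```python
# Hypothetical 2x2 transition matrix; each row sums to 1.
P = [[0.9, 0.1],
     [0.4, 0.6]]

def evolve(pi, P):
    # One step of the chain: pi(n+1) = pi(n) P (row vector times matrix).
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

pi = [1.0, 0.0]            # pi(0): the chain starts in state 1
for _ in range(100):       # pi(100) = pi(0) P^100
    pi = evolve(pi, P)

# Solving pi = pi P together with pi_1 + pi_2 = 1 by hand gives the
# stationary vector (0.8, 0.2); pi(100) is indistinguishable from it.
```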
Matrix @math{\bf M} satisfies the property that:

@iftex
@tex
$$ M_{i j} = 1 + \sum_{k \neq j} P_{i k} M_{k j}$$
@end tex
@end iftex
@ifnottex
@example
@group
            ___
            \
M_ij = 1 +  >   P_ik * M_kj
            /___
            k!=j
@end group
@end example
@end ifnottex

@DOCSTRING(dtmc_fpt)

@c
@c
@c
@node Continuous-Time Markov Chains
@section Continuous-Time Markov Chains

@menu
* CTMC Stationary Probability::
* Birth-Death process::
* Expected Sojourn Time::
* Time-Averaged Expected Sojourn Time::
* Expected Time to Absorption::
* CTMC First Passage Times::
@end menu

@node CTMC Stationary Probability
@subsection Stationary Probability

@DOCSTRING(ctmc)

@noindent @strong{EXAMPLE}

Consider a two-state CTMC where the transition rates between the
states are both equal to 1. This chain can be solved as follows:

@example
@group
@verbatiminclude @value{top_srcdir}/examples/demo_1_ctmc.m
    @result{} q = 0.50000   0.50000
@end group
@end example

@c
@c
@c
@node Birth-Death process
@subsection Birth-Death process

@DOCSTRING(ctmc_bd)

@c
@c
@c
@node Expected Sojourn Time
@subsection Expected Sojourn Time

Given an @math{N}-state continuous-time Markov chain with
infinitesimal generator matrix @math{\bf Q}, we define the vector
@math{{\bf L}(t) = (L_1(t), L_2(t), \ldots, L_N(t))} such that
@math{L_i(t)} is the expected sojourn time in state @math{i} during
the interval @math{[0,t)}, assuming that the initial occupancy
probability at time 0 was @math{{\bf \pi}(0)}. Then @math{{\bf L}(t)}
is the solution of the following differential equation:

@iftex
@tex
$$ { d{\bf L}(t) \over dt} = {\bf L}(t){\bf Q} + {\bf \pi}(0), \qquad {\bf L}(0) = {\bf 0} $$
@end tex
@end iftex
@ifnottex
@example
@group
dL
--(t) = L(t) Q + pi(0),    L(0) = 0
dt
@end group
@end example
@end ifnottex

The function @code{ctmc_exps} can be used to compute @math{{\bf L}(t)};
it uses Octave's @code{lsode} function to solve the linear
differential equation above.
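The differential equation above can also be integrated directly. The following pure-Python sketch applies a plain forward-Euler step (only an illustration; @code{ctmc_exps} itself relies on @code{lsode}) to the two-state chain with unit transition rates from the example above, and compares the result at @math{t=1} with the closed-form solution:

```python
# Two-state CTMC with both transition rates equal to 1.
Q = [[-1.0,  1.0],
     [ 1.0, -1.0]]
pi0 = [1.0, 0.0]         # pi(0): start in state 1

# Forward-Euler integration of dL/dt = L(t) Q + pi(0), L(0) = 0.
dt, t_end = 1e-4, 1.0
L = [0.0, 0.0]
for _ in range(int(round(t_end / dt))):
    # dL_j/dt = sum_i L_i Q_ij + pi0_j
    dL = [sum(L[i] * Q[i][j] for i in range(2)) + pi0[j] for j in range(2)]
    L = [L[j] + dt * dL[j] for j in range(2)]

# Closed form at t = 1 for this chain:
# L = (0.5 + 0.25(1 - e^-2), 0.5 - 0.25(1 - e^-2)) ~ (0.7162, 0.2838);
# the components always sum to t, since the chain is in *some* state.
```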
@DOCSTRING(ctmc_exps)

@noindent @strong{EXAMPLE}

Let us consider a pure-birth, 4-state CTMC such that the transition
rate from state @math{i} to state @math{i+1} is
@math{\lambda_i = i \lambda} (@math{i=1, 2, 3}), with
@math{\lambda = 0.5}. The following code computes the expected sojourn
time in state @math{i}, given the initial occupancy probability
@math{p_0=(1,0,0,0)}:

@example
@group
@verbatiminclude @value{top_srcdir}/examples/demo_1_ctmc_exps.m
@end group
@end example

@c
@c
@c
@node Time-Averaged Expected Sojourn Time
@subsection Time-Averaged Expected Sojourn Time

@DOCSTRING(ctmc_taexps)

@noindent @strong{EXAMPLE}

@example
@group
@verbatiminclude @value{top_srcdir}/examples/demo_1_ctmc_taexps.m
@end group
@end example

@c
@c
@c
@node Expected Time to Absorption
@subsection Expected Time to Absorption

If we consider a Markov chain with absorbing states, it is possible to
define the @emph{expected time to absorption} as the expected time
until the system enters an absorbing state. More specifically, let
@math{A} be the set of transient (i.e., non-absorbing) states of a
CTMC with @math{N} states and infinitesimal generator matrix
@math{\bf Q}. The expected time to absorption
@math{{\bf L}_A(\infty)} is defined as the solution of the following
equation:

@iftex
@tex
$$ {\bf L}_A(\infty){\bf Q}_A = -{\bf \pi}_A(0) $$
@end tex
@end iftex
@ifnottex
@example
@group
L_A( inf ) Q_A = -pi_A(0)
@end group
@end example
@end ifnottex

@noindent where @math{{\bf Q}_A} is the restriction of matrix
@math{\bf Q} to the states in @math{A}, and @math{{\bf \pi}_A(0)} is
the initial state occupancy probability at time 0, restricted to the
states in @math{A}.

@DOCSTRING(ctmc_mtta)

@noindent @strong{EXAMPLE}

Let us consider a simple model of a redundant disk array. We assume
that the array consists of 5 independent disks and can tolerate up to
2 disk failures without losing data. If three or more disks fail, the
array is dead and unrecoverable.
We want to estimate the Mean Time To Failure (MTTF) of the disk array.
We model the system as a 4-state Markov chain with state space
@math{\{ 2, 3, 4, 5 \}}, where state @math{i} denotes the fact that
exactly @math{i} disks are active; state @math{2} is absorbing. Let
@math{\mu} be the failure rate of a single disk. The system starts in
state @math{5} (all disks are operational). We use a pure death
process, where the death rate from state @math{i} to state @math{i-1}
is @math{i \mu}, for @math{i = 3, 4, 5}. The MTTF of the disk array is
the mean time to absorption (MTTA) of the Markov chain, and can be
computed with the following expression:

@example
@group
@verbatiminclude @value{top_srcdir}/examples/demo_1_ctmc_mtta.m
    @result{} t = 78.333
@end group
@end example

@noindent @strong{REFERENCES}

G. Bolch, S. Greiner, H. de Meer and K. Trivedi, @cite{Queueing
Networks and Markov Chains: Modeling and Performance Evaluation with
Computer Science Applications}, Wiley, 1998.

@auindex Bolch, G.
@auindex Greiner, S.
@auindex de Meer, H.
@auindex Trivedi, K.

@c
@c
@c
@node CTMC First Passage Times
@subsection First Passage Times

@DOCSTRING(ctmc_fpt)
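Returning to the disk-array example above, the equation @math{{\bf L}_A(\infty){\bf Q}_A = -{\bf \pi}_A(0)} can be solved by hand for that model. The following pure-Python sketch builds @math{{\bf Q}_A} for the transient states @math{\{5, 4, 3\}} and solves the system; the value of @math{\mu} is not shown in the demo file, so @math{\mu = 0.01} is an assumption, chosen because it reproduces the printed result @code{t = 78.333}:

```python
# Single-disk failure rate; mu = 0.01 is an assumed value (not taken
# from the demo file) that reproduces the printed t = 78.333.
mu = 0.01

# Q_A: restriction of Q to the transient states, ordered (5, 4, 3).
# The death rate out of state i is i*mu; the rate 3*mu out of state 3
# leads to the absorbing state 2 and so does not appear in Q_A.
QA = [[-5 * mu,  5 * mu,  0.0   ],
      [ 0.0,    -4 * mu,  4 * mu],
      [ 0.0,     0.0,    -3 * mu]]
pi0 = [1.0, 0.0, 0.0]    # all five disks initially operational

# For a pure death process Q_A is upper triangular, so the system
# L Q_A = -pi0 can be solved column by column by forward substitution.
L = [0.0, 0.0, 0.0]
for j in range(3):
    s = sum(L[i] * QA[i][j] for i in range(j))
    L[j] = (-pi0[j] - s) / QA[j][j]

# Expected sojourn times are 1/(5 mu), 1/(4 mu), 1/(3 mu); their sum
# is the MTTA, (1/5 + 1/4 + 1/3)/mu = 78.333 for mu = 0.01.
mtta = sum(L)
```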