
15 May 2020 - Abhishek Nandekar

Markov Chains/Processes:

Markov Processes are a class of stochastic processes which assume that the current state $X_t$ captures all the information relevant for predicting the future. Hence, the future depends on the past only through the present state and, to some extent, can be predicted from it.

Consider models in which the past $\rightarrow$ future influence is summarised using states (which evolve according to probability distributions). Such models are Markov models:

Discrete time $\implies$ Chains
Continuous time $\implies$ Processes

Checkout Counter Example:

Let's model a queue with certain assumptions:
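For concreteness, here is a minimal simulation sketch assuming a common version of this model (the dynamics and parameter values below are assumptions, not taken from the notes): time proceeds in slots; in each slot a customer arrives with probability $p$, and, if the queue is non-empty, the customer in service departs with probability $q$. The state of the chain is the queue length.

```python
import random

def simulate_queue(p=0.3, q=0.5, max_len=10, steps=20, seed=0):
    """Simulate a checkout-counter queue as a Markov chain.

    State = number of customers in the queue. Assumed dynamics per
    time slot: one arrival with probability p (blocked if the queue
    is full), one departure with probability q if the queue is
    non-empty. The next state depends only on the current state.
    """
    rng = random.Random(seed)
    state = 0
    trajectory = [state]
    for _ in range(steps):
        arrival = rng.random() < p
        departure = state > 0 and rng.random() < q
        state = min(max_len, state + int(arrival)) - int(departure)
        trajectory.append(state)
    return trajectory

print(simulate_queue())
```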

Observations

Discrete Time Finite State Markov Chains

The one-step transition probability from state $i$ to state $j$ is

\[p_{ij} = P(X_{t_1}=j~\big|X_{t_0}=i) = P(X_{t_{n+1}}=j~\big|X_{t_n}=i)~~\forall n\]

which implies that the transition probabilities are time-homogeneous (time-invariant): they do not depend on the step $n$.
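As an illustration, time homogeneity means the chain is fully specified by a single transition matrix $P$, with entry $p_{ij}$ in row $i$ and column $j$, reused at every step; the 3-state matrix below is a made-up example.

```python
import numpy as np

# A made-up 3-state transition matrix, with P[i, j] = p_ij.
# Time homogeneity: this same matrix governs every step n.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Each row is a probability distribution over next states.
assert np.allclose(P.sum(axis=1), 1.0)

# Because the same P applies at every step, the n-step transition
# probabilities are simply the matrix power P^n.
pi0 = np.array([1.0, 0.0, 0.0])           # start in state 0
pi5 = pi0 @ np.linalg.matrix_power(P, 5)  # distribution after 5 steps
print(pi5)
```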

Markov (defining) property/assumption:

Given the current state, the past doesn't matter.

Markov Chain of Order 1:

\[\begin{aligned} P(&X_{t_{n+1}} = x_{n+1}~\big| X_{t_{n}} = x_{n}, X_{t_{n-1}} = x_{n-1}, \dots, X_{t_{1}} = x_{1} )\\ &= P(X_{t_{n+1}} = x_{n+1}~\big|X_{t_{n}} = x_{n})\end{aligned}\]

where $x_{n+1}, x_{n}, \dots, x_{1}$ are states of the process.
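A quick empirical check of this property (the two-state chain below is a made-up example): when we estimate the next-state probabilities from a simulated trajectory, conditioning additionally on the previous state should not change them.

```python
from collections import Counter
import random

random.seed(1)
P = {0: [0.9, 0.1], 1: [0.4, 0.6]}  # made-up 2-state transition probabilities

# Simulate a long trajectory of the chain.
x, path = 0, [0]
for _ in range(200_000):
    x = 0 if random.random() < P[x][0] else 1
    path.append(x)

# Count consecutive triples (x_{n-1}, x_n, x_{n+1}).
counts = Counter(zip(path, path[1:], path[2:]))

# P(X_{n+1}=1 | X_n=0) should be ~0.1 regardless of X_{n-1}.
for prev in (0, 1):
    num = counts[(prev, 0, 1)]
    den = counts[(prev, 0, 0)] + counts[(prev, 0, 1)]
    print(f"P(next=1 | current=0, previous={prev}) ~ {num / den:.3f}")
```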

Markov Chain of Order $k$:

\[\begin{aligned} P(&X_{t_{n+1}} = x_{n+1}~\big| X_{t_{n}} = x_{n}, X_{t_{n-1}} = x_{n-1}, \dots, X_{t_{1}} = x_{1} )\\ &= P(X_{t_{n+1}} = x_{n+1}~\big|X_{t_{n}} = x_{n}, X_{t_{n-1}} = x_{n-1}, \dots,X_{t_{(n+1)-k}} = x_{(n+1)-k})\end{aligned}\]
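An aside worth noting (a standard fact, not stated in the notes above): an order-$k$ chain can always be reduced to an ordinary order-1 chain by taking the $k$-tuple of the most recent states as the new state. A sketch for a made-up order-2 chain on two states:

```python
import random

random.seed(0)

# Made-up order-2 chain on {0, 1}: the distribution of the next
# state depends on the last *two* states (x_{n-1}, x_n).
P2 = {
    (0, 0): [0.9, 0.1],
    (0, 1): [0.5, 0.5],
    (1, 0): [0.3, 0.7],
    (1, 1): [0.2, 0.8],
}

# Reduction to order 1: let Y_n = (X_{n-1}, X_n). Then Y_{n+1}
# depends only on Y_n, so (Y_n) is an ordinary Markov chain on pairs.
y = (0, 0)
path = [0, 0]
for _ in range(10):
    nxt = 0 if random.random() < P2[y][0] else 1
    path.append(nxt)
    y = (y[1], nxt)
print(path)
```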