# Markov Chains
A Markov chain is a stochastic process over a set of states where the probability of transitioning to any state depends only on the current state — not on the history of prior states. This memoryless property is called the Markov property.
## The Markov Property

$$ P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i) = p_{ij} $$
The values $p_{ij}$ form a transition matrix $P$, where row $i$ gives the distribution over next states from state $i$; consequently, each row sums to 1.
## Example Transition Matrix
| From \ To | State A | State B | State C |
|---|---|---|---|
| State A | 0.1 | 0.6 | 0.3 |
| State B | 0.4 | 0.4 | 0.2 |
| State C | 0.2 | 0.3 | 0.5 |
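As a quick sanity check and demonstration, the table above can be encoded as a NumPy array and sampled step by step. The `simulate` helper below is illustrative (not from the original text); it draws each next state from the row of the current state.

```python
import numpy as np

# Transition matrix from the table above; rows = current state (A, B, C).
P = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.2, 0.3, 0.5],
])

# Each row is a probability distribution over next states.
assert np.allclose(P.sum(axis=1), 1.0)

def simulate(P, start, steps, seed=None):
    """Sample a trajectory of state indices from the chain."""
    rng = np.random.default_rng(seed)
    state = start
    path = [state]
    for _ in range(steps):
        # The next state depends only on the current state (Markov property).
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

path = simulate(P, start=0, steps=10, seed=42)
```

Because each draw conditions only on `state`, the trajectory has no memory of earlier states by construction.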
## Stationary Distribution
A distribution $\pi$ is stationary if $\pi P = \pi$. For an ergodic chain, repeated application of $P$ converges to $\pi$ regardless of the starting state:
$$ \pi = \lim_{n \to \infty} \mu_0 P^n $$
```python
import numpy as np

def stationary_distribution(P, steps=1000):
    """Approximate the stationary distribution by repeated multiplication."""
    n = P.shape[0]
    dist = np.ones(n) / n          # uniform starting distribution mu_0
    for _ in range(steps):
        dist = dist @ P            # one step: mu_{k+1} = mu_k P
    return dist
```
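Power iteration is approximate; since $\pi P = \pi$ says $\pi$ is a left eigenvector of $P$ with eigenvalue 1, the stationary distribution can also be computed directly with `np.linalg.eig`. This is a sketch (the 3-state matrix reused here is the example from the table above, not part of the derivation):

```python
import numpy as np

P = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.2, 0.3, 0.5],
])

# Left eigenvectors of P are right eigenvectors of P.T.
eigvals, eigvecs = np.linalg.eig(P.T)

# Pick the eigenvector whose eigenvalue is (numerically) 1.
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()                 # normalize to a probability distribution

# Verify stationarity: pi P = pi.
assert np.allclose(pi @ P, pi)
```

For an ergodic chain this agrees with the limit of `stationary_distribution` above, but without choosing a step count.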