Problems modeled as discrete homogeneous (stationary, or time-independent) Markov chains can often be solved iteratively using conditional probabilities or conditional expectations. However, it is not uncommon for a process to begin at a specific state (or in one of several states with given probabilities) when the state of interest lies some arbitrary number of steps, *n*, forward in the process.

Depending on the size of *n*, it may be analytically challenging to write out explicitly the chain of conditional probabilities or conditional expectations needed by the law of total expectation or the law of total probability to derive the value at step *n*.

The Chapman-Kolmogorov equations remedy this by showing that the probability of transitioning from a given state to an arbitrary state *n* steps forward is an element of the one-step transition matrix raised to the power *n*.
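As a minimal sketch of that idea (using a hypothetical two-state transition matrix, not one taken from this article), the *n*-step transition probabilities can be read off the one-step matrix raised to the *n*-th power:

```python
import numpy as np

# Hypothetical one-step transition matrix for a two-state chain.
# Entry [i, j] holds P(X_{t+1} = j | X_t = i); each row sums to 1.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Chapman-Kolmogorov: the n-step transition probabilities are the
# entries of the one-step matrix raised to the n-th power.
n = 5
P_n = np.linalg.matrix_power(P, n)

# P_n[i, j] is the probability of moving from state i to state j in n steps.
print(P_n)
```

Note that each row of `P_n` still sums to 1, since each row is itself a probability distribution over the states reachable in *n* steps.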

This article is broken up into the following sections…

- **Motivation for the Chapman-Kolmogorov equations**
- **Proof of the Chapman-Kolmogorov equations**
- **Application of the Chapman-Kolmogorov equations**

## Motivation for the Chapman-Kolmogorov equations

Suppose we had two coins…

Every day we flip a coin: if it lands on heads we flip **C1** tomorrow, and if it lands on tails we flip **C2** tomorrow. Assume we have an equal probability of selecting **C1** or **C2** on the first day.

What is the probability of flipping **C1** on day 2?

The coin tossed tomorrow depends only on today's toss, so we can model this problem as a discrete homogeneous Markov chain and use the law of total probability.
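That law-of-total-probability computation can be sketched in a few lines. The 0.7 heads probability for **C1** comes from the problem setup quoted later in this section; the heads probability for **C2** is an assumed placeholder, not a value from this article:

```python
# Law of total probability for the coin chosen on day 2:
# P(X2 = 0) = P(X2 = 0 | X1 = 0) P(X1 = 0) + P(X2 = 0 | X1 = 1) P(X1 = 1)

p_c1_day1 = 0.5   # equal chance of starting with C1 or C2
p_c2_day1 = 0.5

p_heads_c1 = 0.7  # P(heads | C1), from the problem setup
p_heads_c2 = 0.4  # P(heads | C2) -- assumed placeholder value

# Heads means we flip C1 (state 0) tomorrow, so
# P(X2 = 0 | X1 = i) is the heads probability of coin i.
p_c1_day2 = p_heads_c1 * p_c1_day1 + p_heads_c2 * p_c2_day1
print(p_c1_day2)  # 0.55 with these numbers
```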

The state space is clearly either **C1** (state 0) or **C2** (state 1)…

In the context of this example, we are trying to find the probability that day 2 results in state 0.

All of these quantities are easily derived from the problem setup.

- P(X2 = 0 | X1 = 0) = .7 = *Probability of Flipping Heads on Day 1 given C1*