Properties
- Memoryless: the next state depends only on the current state, not on the sequence of states that came before it.
- The weights of the outgoing arrows from any state sum to 1 (equivalently, each row of the transition matrix sums to 1).
- Stationary distribution: the long-run probability of being in each state; after simulating a long random walk, the frequency of occurrence of each state converges to it.
- Adjacency matrix or transition matrix: entry (i, j) holds the probability of moving from state i to state j in one step.
- Getting the stationary distribution: solve π = πP with the entries of π summing to 1.
- Similar to: finding a left eigenvector of P with eigenvalue 1.
- Transient state: once the chain leaves it, there is no guarantee of ever coming back.
- Recurrent state: the chain is guaranteed to eventually return to it.
- Reducible chain (contains transient states) vs. irreducible chain (only recurrent states, each reachable from every other).
- Communication classes: subdivisions of the chain into groups of states that can all reach one another.
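The properties above can be sketched with a small example. The 3-state transition matrix below is illustrative (not from the notes); it checks that each row sums to 1 and finds the stationary distribution by power iteration, i.e. applying the chain until the distribution stops changing:

```python
import numpy as np

# Hypothetical 3-state chain; the matrix is an assumed example.
# Row i holds the outgoing transition probabilities from state i.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.4, 0.4],
])

# Property check: outgoing probabilities from each state sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# Power iteration: repeatedly apply the chain to a starting
# distribution until it converges to the stationary one (pi = pi P).
pi = np.ones(3) / 3          # start from the uniform distribution
for _ in range(1000):
    pi = pi @ P

print(pi)                    # satisfies pi @ P ≈ pi and sums to 1
```

Solving the left-eigenvector equation directly (e.g. with `np.linalg.eig` on `P.T`) gives the same answer; power iteration just mirrors the "simulate a long random walk" intuition.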
N-step Transition Matrix
Given a Markov chain, what is the probability of reaching state j from state i after exactly n steps?
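This is answered by raising the transition matrix to the n-th power: entry [i, j] of Pⁿ is the probability of being in state j exactly n steps after starting in state i. A minimal sketch, reusing the same illustrative 3-state matrix:

```python
import numpy as np

# Assumed example chain (same illustrative matrix as above).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.4, 0.4],
])

# The n-step transition matrix is P raised to the n-th power.
n = 4
Pn = np.linalg.matrix_power(P, n)

# Probability of being in state 2 after exactly 4 steps,
# given that the chain started in state 0.
print(Pn[0, 2])

# Pn is itself a valid transition matrix: rows still sum to 1.
assert np.allclose(Pn.sum(axis=1), 1.0)
```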
Chapman-Kolmogorov Theorem
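The theorem says that an (m+n)-step transition factors through every possible intermediate state k, which is why the n-step matrix is simply the n-th power of P:

```latex
p^{(m+n)}_{ij} = \sum_{k} p^{(m)}_{ik} \, p^{(n)}_{kj}
\quad\Longleftrightarrow\quad
P^{(m+n)} = P^{(m)} P^{(n)}
```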
- Markov chain key components: a set of states, the transition probabilities between them, and an initial state distribution.