Markov Chain Model
Model: MARKOV
A standard approach to modeling random variables over time is the Markov chain approach. Refer to an operations research or probability text for complete details. The basic idea is to think of the system as being in one of a discrete number of states at each point in time. The behavior of the system is described by a transition probability matrix, which gives, for each pair of states, the probability that the system moves from the one state to the other in a single step. Some example situations are:
System                   | States                                                | Cause of Transition
Consumer brand switching | Brand of product most recently purchased by consumer  | Consumer changes mind, advertising
Inventory system         | Amount of inventory on hand                           | Orders for new material, demands
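To make the transition matrix concrete, the following sketch simulates one consumer's brand choices under an assumed two-brand switching matrix. The brand names and all probability values are hypothetical, chosen only for illustration:

```python
import random

random.seed(0)  # fixed seed so the sample path is reproducible
BRANDS = ["A", "B"]

# P[i][j] = probability of buying brand j next, given brand i was bought last.
# These numbers are made up for the example.
P = [[0.8, 0.2],
     [0.3, 0.7]]

state = 0  # start with brand A
history = [BRANDS[state]]
for _ in range(10):
    # Draw the next state according to row `state` of the transition matrix.
    state = random.choices(range(2), weights=P[state])[0]
    history.append(BRANDS[state])

print("".join(history))  # one sample path of 11 purchases
```

Each step depends only on the current state, not on the earlier history, which is the defining (Markov) property of these models.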
An interesting problem is to determine the long-term, steady-state probabilities of the system. If we assume that the system can reach equilibrium, then it must be true that the probability of leaving a particular state equals the probability of arriving in that state. You will recall that this is the Rate In = Rate Out Principle (RIRO) we used above in building the multi-server queuing model, QUEUEM. If we let:
πi = the steady state probability of being in state i, and
pij = the transition probability of moving from state i to j,
then, by our RIRO assumption, for each state i:
RateIni = RateOuti
∑j≠i πj pji = πi ( 1 - pii)
Adding πi pii to both sides and rewriting, we get:
πi = ∑j πj pji
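One way to see this equation in action is power iteration, a technique not used in the text itself: repeatedly applying the transition matrix to a starting distribution until it stops changing. The sketch below uses an assumed 2×2 brand-switching matrix (the values are illustrative) and then checks that the resulting πi satisfy πi = ∑j πj pji:

```python
# Hypothetical 2-state transition matrix; P[i][j] = P(move to j | in i).
P = [[0.8, 0.2],
     [0.3, 0.7]]

# Power iteration: pi <- pi * P until convergence. Any starting
# distribution works because the chain has a single steady state.
pi = [0.5, 0.5]
for _ in range(1000):
    pi = [sum(pi[j] * P[j][i] for j in range(2)) for i in range(2)]

# Check the balance equations: pi_i = sum_j pi_j * p_ji for every i.
for i in range(2):
    assert abs(pi[i] - sum(pi[j] * P[j][i] for j in range(2))) < 1e-9

print(pi)  # approximately [0.6, 0.4] for this matrix
```

For this matrix the steady state works out to π = (0.6, 0.4): in the long run the consumer buys brand A 60% of the time, regardless of which brand was bought first.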
This gives us n equations in the n unknown steady-state probabilities. Unfortunately, it turns out that this system is not of full rank, so it does not have a unique solution. To guarantee a valid set of probabilities, we must make use of one final condition: the probabilities must sum to 1.
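The procedure just described can be sketched directly: drop one of the rank-deficient balance equations, substitute the condition that the probabilities sum to 1, and solve the resulting full-rank linear system. The 3-state matrix below is a made-up inventory example, and the Gaussian elimination is plain textbook code rather than anything from this model:

```python
# Hypothetical 3-state transition matrix (values are illustrative).
P = [[0.5, 0.4, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]
n = len(P)

# Rows 0..n-2: balance equations sum_j pi_j * p_ji - pi_i = 0.
# Last row: normalization sum_i pi_i = 1, replacing the redundant equation.
A = [[P[j][i] - (1.0 if i == j else 0.0) for j in range(n)]
     for i in range(n - 1)]
A.append([1.0] * n)
b = [0.0] * (n - 1) + [1.0]

# Plain Gaussian elimination with partial pivoting (no libraries needed).
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]

# Back substitution.
pi = [0.0] * n
for r in range(n - 1, -1, -1):
    s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
    pi[r] = (b[r] - s) / A[r][r]

print(pi)  # steady-state probabilities summing to 1
```

Note that the dropped balance equation is satisfied automatically by the solution; this is exactly why the original n equations were not of full rank.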