Carleton University
Technical Report TR-24
May 1983
Learning Automata Possessing Ergodicity of the Mean: The Two Action Case
M.A.L. Thathachar & B.J. Oommen
Abstract
Learning automata which update their action probabilities on the basis of the responses they receive from an environment are considered in this paper. The automata update the probabilities whether the environment responds with a reward or a penalty. An automaton is said to possess Ergodicity of the Mean (EM) if the mean action probability is the total state probability of an ergodic Markov chain. The only previously known EM algorithm is the Linear Reward-Penalty (LR-P) scheme. For the two-action case, necessary and sufficient conditions have been derived for nonlinear updating schemes to be EM. A method of controlling the rate of convergence of such schemes has been presented. In particular, a generalized linear algorithm has been proposed which is superior to the LR-P scheme; the expression for the variance of its limiting action probabilities has been derived, and the design of the optimal linear automaton in this family has also been considered. Methods of decreasing the variance for the general nonlinear scheme have been discussed. Finally, it has been shown that the set of absolutely expedient schemes and the set of schemes possessing ergodicity of the mean are mutually disjoint.
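To make the update rule discussed in the abstract concrete, the following is a minimal sketch of one step of the classical two-action Linear Reward-Penalty (LR-P) scheme in its standard textbook form (the report itself generalizes this to a wider linear and nonlinear family). The parameter names `a` and `b` denote the reward and penalty step sizes; taking `a == b` gives the classical LR-P automaton, while unequal step sizes are an assumed illustration of the generalized linear family the abstract mentions.

```python
def lrp_step(p1, action, reward, a=0.05, b=0.05):
    """One update of a two-action Linear Reward-Penalty (LR-P) automaton.

    p1     : current probability of choosing action 1 (action 2 has 1 - p1)
    action : the action just performed, 1 or 2
    reward : True if the environment responded with a reward, False for a penalty
    a, b   : reward and penalty step sizes in (0, 1); a == b is classical LR-P

    Returns the updated probability of action 1. Because probabilities sum
    to one, updating the chosen action's probability determines the other.
    """
    chosen_p = p1 if action == 1 else 1.0 - p1
    if reward:
        # Reward: move the chosen action's probability toward 1.
        chosen_p += a * (1.0 - chosen_p)
    else:
        # Penalty: shrink the chosen action's probability toward 0.
        chosen_p -= b * chosen_p
    return chosen_p if action == 1 else 1.0 - chosen_p
```

Since the scheme updates on both reward and penalty, the resulting Markov chain of action probabilities is ergodic and the automaton keeps adapting, which is the property the EM analysis in the report characterizes for a broader class of (possibly nonlinear) update functions.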
