Carleton University
Technical Report TR-112
May 1987
ε-Optimal Discretized Linear Reward-Penalty Learning Automata
Abstract
In this paper we consider Variable Structure Stochastic Automata (VSSA) which interact with an environment and which dynamically learn the optimal action which the environment offers. Like all VSSA, these automata are completely defined by a set of action probability updating rules [4,9,22]. However, to minimize the requirements on the random number generator used to implement the VSSA, and to increase the speed of convergence of the automaton, we consider the case in which the probability updating functions can assume only a finite number of values. Since these values discretize the probability space [0,1], the resulting automata are called Discretized Learning Automata. The discretized automata are linear because the subintervals of [0,1] are of equal length. We shall prove the following results: (i) Two-Action Discretized Linear Reward-Penalty Automata are ergodic and ε-optimal in all environments whose minimum penalty probability is less than 0.5. (ii) There exist Discretized Two-Action Linear Reward-Penalty Automata which are ergodic and ε-optimal in all random environments. (iii) Discretized Two-Action Linear Reward-Penalty Automata with artificially created absorbing barriers are ε-optimal in all random environments.
Apart from the above theoretical results, simulation results are presented which demonstrate the properties of the automata discussed. The rates of convergence of these automata and some open problems are also presented.
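To make the discretized updating rule concrete, the following is a minimal Python sketch of one run of a two-action discretized linear reward-penalty automaton. It is an illustration under stated assumptions, not the report's formal construction: the function name dlrp_run, the parameter names (c1, c2, resolution, absorbing) and the boundary handling are our own choices. The action probability p1 is confined to the equal-length grid {0, 1/N, ..., 1}, and every update moves it by exactly one grid step of 1/N; with absorbing=True the end states are frozen, mimicking the artificially created absorbing barriers of result (iii), while otherwise the clamped boundaries can still be left after a penalty, leaving the chain ergodic.

    import random

    def dlrp_run(c1, c2, resolution=10, steps=10000, absorbing=False, seed=None):
        # Simulate a two-action discretized linear reward-penalty automaton.
        # p1 is the probability of selecting action 1, restricted to the grid
        # {0, 1/N, ..., 1} with N = resolution; c1 and c2 are the environment's
        # penalty probabilities for actions 1 and 2.
        rng = random.Random(seed)
        delta = 1.0 / resolution
        p1 = 0.5                       # assumes an even resolution so 0.5 is on the grid
        for _ in range(steps):
            if absorbing and p1 in (0.0, 1.0):
                break                  # artificially created absorbing barrier
            action = 1 if rng.random() < p1 else 2
            penalized = rng.random() < (c1 if action == 1 else c2)
            # A reward moves probability mass toward the chosen action and a
            # penalty moves it away, in both cases by one discretization step.
            favour_action_1 = (action == 1) != penalized
            p1 = min(1.0, max(0.0, p1 + (delta if favour_action_1 else -delta)))
        return p1

For example, in an environment with penalty probabilities c1 = 0.2 and c2 = 0.6, action 1 is optimal and repeated runs of dlrp_run(0.2, 0.6) should return values of p1 near 1; increasing the resolution parameter trades convergence speed for accuracy, which is the trade-off the simulation results in this report examine.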