Carleton University
Technical Report TR-157
April 1989
Discretized Pursuit Linear Reward-Inaction Automata
Abstract
We consider the problem of a stochastic learning automaton interacting with an unknown random environment. The fundamental problem is that of learning, through interaction, the best action (that is, the action which is rewarded optimally) allowed by the environment. By using running estimates of reward probabilities to learn the optimal action, an extremely efficient Pursuit Algorithm was earlier reported [24, 26, 27, 28], which is presently among the fastest algorithms known. This paper investigates the improvements gained by rendering the Pursuit Algorithm discrete; this is done by restricting the probability of selecting an action to a finite, and hence discrete, subset of [0, 1]. This improved scheme is proven to be optimal in probability (implying ε-optimality) in all stationary environments. Furthermore, our experimental results seem to indicate that the algorithm presented in this paper is the fastest absorbing learning automaton reported in the literature to date. Comparisons with the continuous form of the Pursuit Algorithm are also presented.
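To make the discretization concrete, the following Python sketch illustrates a discretized pursuit reward-inaction update of the kind the abstract describes. The class name, parameter names (r, N, delta), and the resolution parameter N are illustrative assumptions, not the report's notation. The idea: action probabilities move only in integer multiples of a smallest step delta = 1/(rN); the automaton keeps running estimates of each action's reward probability; on a reward it shifts probability mass toward the currently best-estimated action, and on a penalty (reward-inaction) it leaves the probability vector unchanged.

```python
import random

# Illustrative sketch of a discretized pursuit reward-inaction automaton.
# Names and the resolution scheme are assumptions for this example,
# not the report's exact formulation.

class DiscretizedPursuitRI:
    def __init__(self, r, N):
        self.r = r                      # number of actions
        self.delta = 1.0 / (r * N)      # smallest probability step (resolution N)
        self.p = [1.0 / r] * r          # action probabilities, multiples of delta
        self.rewards = [0] * r          # reward counts per action
        self.counts = [0] * r           # selection counts per action

    def select(self):
        # Sample an action according to the current probability vector.
        u, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if u <= acc:
                return i
        return self.r - 1

    def update(self, action, rewarded):
        # Always refresh the running reward-probability estimates.
        self.counts[action] += 1
        if rewarded:
            self.rewards[action] += 1
        # Reward-inaction: the probability vector moves only on a reward.
        if not rewarded:
            return
        est = [self.rewards[i] / self.counts[i] if self.counts[i] else 0.0
               for i in range(self.r)]
        best = max(range(self.r), key=lambda i: est[i])
        # Step every other action's probability down by delta (never below 0)
        # and give the freed mass to the currently best-estimated action.
        for i in range(self.r):
            if i != best:
                self.p[i] = max(self.p[i] - self.delta, 0.0)
        self.p[best] = 1.0 - sum(self.p[i] for i in range(self.r) if i != best)
```

A typical interaction loop would call select(), obtain a reward or penalty from the environment for the chosen action, and pass the outcome to update(). In practice each action is usually sampled a few times first so that the estimates are defined before the pursuit steps begin; that initialization is omitted here for brevity.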