Markov Processes and Learning Models
M. Frank Norman
University of Pennsylvania
Availability as of 3/1/2013: After being out of print for many years, a paperback reprint is currently available at amazon.com.
Contents
[with comments, added 3/3/09]
Chapter 0: Introduction
Part I: Distance Diminishing Models [mostly ergodic theory]
Chapter 1: Markov Processes and Random Systems with Complete Connections
[The sequence of responses generated by a discrete time learning model may have a very complex structure, but, in MPLM, it is controlled by a Markovian state variable which is the focus of the analyses presented in the book.]
Chapter 2: Distance Diminishing Models and Doeblin-Fortet Processes
Chapter 3: The Theorem of Ionescu Tulcea and Marinescu, and Compact Markov Processes
Chapter 4: Distance Diminishing Models with Noncompact State Spaces
Chapter 5: Functions of Markov Processes [e.g., learning states]
Chapter 6: Functions of Events [e.g., observable behavior]
Part II: Slow Learning
[Here individual trials produce small changes in learning state, permitting diffusion approximations.]
Chapter 7: Introduction to Slow Learning
Chapter 8: Transient Behavior in the Case of Large Drift
Chapter 9: Transient Behavior in the Case of Small Drift
Chapter 10: Steady-State Behavior
Chapter 11: Absorption Probabilities
[It is worth noting that all of the theory developed in Parts I and II applies to continuous-state Markov processes. Moreover, the theory in Part I applies to infinite-dimensional (mostly metric) state spaces, and the theory in Chapter 8 applies to finite-dimensional state spaces of any dimension.]
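The slow-learning regime can be illustrated with a minimal simulation (an illustrative sketch, not code from MPLM; the two-operator linear model used here is a standard textbook example, and the function names are my own). With learning rate theta, each trial changes the state p by a step of size O(theta), so roughly t/theta trials are needed to move an appreciable distance, and for small theta the trajectory is well approximated by a diffusion. In this particular model E[Δp] = 0, so p is a martingale and E[p] is conserved, the elementary fact underlying absorption-probability calculations of the kind treated in Chapter 11.

```python
import random

def trial(p, theta):
    """One trial of a two-operator linear learning model (illustrative
    example, not taken from MPLM).  The state p is the probability of
    response A1; reinforcement follows the response, so p moves toward
    1 or toward 0 by a step of size O(theta)."""
    if random.random() < p:              # response A1 occurs and is reinforced
        return p + theta * (1.0 - p)     # linear operator pushing p toward 1
    return p - theta * p                 # response A2 reinforced: push p toward 0

def simulate(p0, theta, t, seed=0):
    """Run t/theta trials: small theta and many trials is the slow-learning
    regime in which diffusion approximations apply.  Note that
    E[p_next | p] = p*(p + theta*(1-p)) + (1-p)*(p - theta*p) = p,
    so the state sequence is a martingale and E[p] stays at p0."""
    random.seed(seed)
    p = p0
    for _ in range(int(t / theta)):
        p = trial(p, theta)
    return p
```

Averaging the final state over many independent runs recovers the initial value p0, the conserved mean of the martingale.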
Part III: Special Models
Chapter 12: The Five-Operator Linear Model
Chapter 13: The Fixed Sample Size Model
Chapter 14: Additive Models
Chapter 15: Multiresponse Linear Models
Chapter 16: The Zeaman-House-Lovejoy Models
Chapter 17: Other Learning Models
Chapter 18: Diffusion Approximation in a Genetic Model and a Physical Model
Relevant Papers Published Subsequent to MPLM [with comments]
1974, Markovian learning processes, SIAM Review, 16, 143-162.
[Like Chapter 0, this is an introductory survey. An interesting open problem related to Chapter 14 of MPLM is described on p. 150.]
1974, Effects of overtraining, problem shifts, and probabilistic reinforcement in discrimination learning: predictions of an attentional model, in D. H. Krantz, R. C. Atkinson, R. D. Luce, and P. Suppes (Eds.), Contemporary Developments in Mathematical Psychology, Vol. 1, Freeman, San Francisco, 185-208.
[This paper and the next extend the analysis of the ZHL attentional learning model begun in Chapter 16. This was the most psychologically interesting model with which I worked.]
1976, Optional shift and discrimination learning with redundant relevant dimensions: predictions of an attentional model, Journal of Mathematical Psychology, 14, 130-143.
1981, A "psychological" proof that certain Markov semigroups preserve differentiability, in S. Grossberg (Ed.), Mathematical Psychology and Psychophysiology, American Mathematical Society, Providence, R. I., 197-211.
[This unusual paper leverages regularities of certain learning models to deduce comparable regularities of their diffusion approximations. The latter regularities can, in turn, be used to prove convergence of other processes to these same limiting diffusions.]
-----------------------------------------------
The remaining papers were oriented more toward diffusion approximation of models for evolution than of models for learning. I chose this orientation because there has traditionally been much more interest in diffusion approximation in population genetics than in psychology. (In population genetic models, the analog of an individual's "state of learning" is a population's proportion of a certain allele.) However, most of the techniques and theorems developed in these papers are quite general, and there is nothing to prevent their application to learning models, which are close cousins of evolutionary models.
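To make the analogy concrete, here is a minimal neutral Wright-Fisher simulation (a standard textbook sketch, not code from any of the papers below; the function names are my own). The allele proportion x plays the role of the learning state: each generation changes it by a zero-mean step of typical size O(1/sqrt(N)), and for large N the process is well approximated by a diffusion on [0, 1] that is eventually absorbed at 0 (loss of the allele) or 1 (fixation).

```python
import random

def wright_fisher_generation(x, n):
    """One generation of the neutral Wright-Fisher model: each of the n
    genes in the next generation carries the allele independently with
    probability x, so the new frequency is Binomial(n, x) / n."""
    count = sum(1 for _ in range(n) if random.random() < x)
    return count / n

def run(x0, n, max_generations, seed=0):
    """Iterate until the allele is lost (x = 0) or fixed (x = 1),
    or until max_generations elapse."""
    random.seed(seed)
    x = x0
    for _ in range(max_generations):
        x = wright_fisher_generation(x, n)
        if x in (0.0, 1.0):      # absorption: allele lost or fixed
            break
    return x
```

Since the neutral model is a martingale, the probability of fixation equals the initial allele proportion x0, the same conserved-mean argument that gives absorption probabilities in learning models.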
1975, Diffusion approximation of non-Markovian processes, Annals of Probability, 3, 358-364.
[This extends Chapter 9.]
1975, Approximation of stochastic processes by Gaussian diffusions, and applications to Wright-Fisher genetic models, SIAM Journal on Applied Mathematics, 29, 225-242.
[This is a major extension of Chapter 8 as well as of Theorem 3.1 of the paper "Markovian learning processes" cited above.]
1975, Limit theorems for stationary distributions, Advances in Applied Probability, 7, 561-575.
[Theorem 2 of this paper extends Chapter 10 in the same way that the previous paper extends Chapter 8.]
1975, An ergodic theorem for evolution in a random environment, Journal of Applied Probability, 12, 661-672.
[This is an analysis of a genetic model that has the "feel" of a continuous-state learning model. It was proposed to me by John Gillespie.]
1977, Ergodicity of diffusion and temporal uniformity of diffusion approximation, Journal of Applied Probability, 14, 399-404.
[This extends Chapters 9 and 11 and brings together my interests in ergodic theory and diffusion approximation.]
1977, (with S. N. Ethier) Error estimate for the diffusion approximation of the Wright-Fisher model, Proceedings of the National Academy of Sciences, 74, 5096-5098.
[This is a very rare example of an explicit and usefully precise error estimate for a diffusion approximation to a discrete-time Markov process.]