Approximating Markov Chains: What and why

 

Author: Steve Pincus

Journal: AIP Conference Proceedings (AIP, available online 1996)

Volume/Issue: Volume 375, Issue 1

Pages: 14-32

ISSN: 0094-243X

Year: 1996

DOI: 10.1063/1.51026

Publisher: AIP

Data source: AIP

Abstract:

Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical and analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques: the approximation of dynamical systems by suitable finite-state Markov chains. Steady-state distributions for these Markov chains, a straightforward calculation, will converge to the true dynamical-system steady-state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady-state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady-state measures are weakly continuous with control-parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly, with system evolution. ©1996 American Institute of Physics.

 
