E-book, English, Volume 35, 417 pages
Kushner / Yin: Stochastic Approximation and Recursive Algorithms and Applications
1997
ISBN: 978-1-4899-2696-8
Publisher: Springer US
Format: PDF
Copy protection: 1 - PDF watermark
Series: Stochastic Modelling and Applied Probability
The most comprehensive and thorough treatment of modern stochastic approximation algorithms to date, based on the powerful ODE method. It covers general constrained and unconstrained problems; convergence with probability one as well as the very successful weak convergence methods, under weak conditions on the dynamics and noise processes; asymptotic properties and rates of convergence; iterate averaging methods; ergodic cost problems; state-dependent noise; high-dimensional problems; decentralized and asynchronous algorithms; and the use of large deviations methods. Examples from many fields illustrate and motivate the techniques.
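To make the algorithm class concrete, the following is a minimal Python sketch of a Robbins-Monro-type recursion with iterate averaging; the quadratic objective, noise model, and step-size exponent are illustrative assumptions, not taken from the book. The mean ODE associated with this recursion is theta_dot = -(theta - 2), whose stable point is the root that the iterates track.

import numpy as np

def noisy_observation(theta, rng):
    # Hypothetical noisy measurement: gradient of f(theta) = 0.5*(theta - 2)**2
    # corrupted by additive Gaussian noise (assumed for illustration).
    return (theta - 2.0) + rng.normal(scale=1.0)

def robbins_monro(n_steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = 0.0
    theta_bar = 0.0                      # running average of the iterates
    for n in range(1, n_steps + 1):
        a_n = 1.0 / n**0.7               # step sizes: sum a_n = inf, sum a_n**2 < inf
        theta -= a_n * noisy_observation(theta, rng)
        theta_bar += (theta - theta_bar) / n
    return theta, theta_bar

if __name__ == "__main__":
    last, avg = robbins_monro()
    # Both should be close to the root theta* = 2; the averaged iterate is less noisy.
    print(f"last iterate {last:.3f}, averaged iterate {avg:.3f}")

The averaging step is the standard remedy, discussed in the book under iterate averaging, for the variance left by slowly decreasing step sizes.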
Target audience
Research
Authors/Editors
Further information & material
Introduction
1 Review of Continuous Time Models
1.1 Martingales and Martingale Inequalities
1.2 Stochastic Integration
1.3 Stochastic Differential Equations: Diffusions
1.4 Reflected Diffusions
1.5 Processes with Jumps
2 Controlled Markov Chains
2.1 Recursive Equations for the Cost
2.2 Optimal Stopping Problems
2.3 Discounted Cost
2.4 Control to a Target Set and Contraction Mappings
2.5 Finite Time Control Problems
3 Dynamic Programming Equations
3.1 Functionals of Uncontrolled Processes
3.2 The Optimal Stopping Problem
3.3 Control Until a Target Set Is Reached
3.4 A Discounted Problem with a Target Set and Reflection
3.5 Average Cost Per Unit Time
4 Markov Chain Approximation Method: Introduction
4.1 Markov Chain Approximation
4.2 Continuous Time Interpolation
4.3 A Markov Chain Interpolation
4.4 A Random Walk Approximation
4.5 A Deterministic Discounted Problem
4.6 Deterministic Relaxed Controls
5 Construction of the Approximating Markov Chains
5.1 One Dimensional Examples
5.2 Numerical Simplifications
5.3 The General Finite Difference Method
5.4 A Direct Construction
5.5 Variable Grids
5.6 Jump Diffusion Processes
5.7 Reflecting Boundaries
5.8 Dynamic Programming Equations
5.9 Controlled and State Dependent Variance
6 Computational Methods for Controlled Markov Chains
6.1 The Problem Formulation
6.2 Classical Iterative Methods
6.3 Error Bounds
6.4 Accelerated Jacobi and Gauss-Seidel Methods
6.5 Domain Decomposition
6.6 Coarse Grid-Fine Grid Solutions
6.7 A Multigrid Method
6.8 Linear Programming
7 The Ergodic Cost Problem: Formulation and Algorithms
7.1 Formulation of the Control Problem
7.2 A Jacobi Type Iteration
7.3 Approximation in Policy Space
7.4 Numerical Methods
7.5 The Control Problem
7.6 The Interpolated Process
7.7 Computations
7.8 Boundary Costs and Controls
8 Heavy Traffic and Singular Control
8.1 Motivating Examples




