Krylov | Controlled Diffusion Processes | Book | 978-1-4612-6053-0 | sack.de

Book, English, Volume 14, 308 pages, format (W × H): 155 mm × 235 mm, weight: 493 g

Series: Stochastic Modelling and Applied Probability

Krylov

Controlled Diffusion Processes


Softcover reprint of the original 1st edition, 1980
ISBN: 978-1-4612-6053-0
Publisher: Springer



Stochastic control theory is a relatively young branch of mathematics. The beginning of its intensive development falls in the late 1950s and early 1960s. During that period an extensive literature appeared on optimal stochastic control using the quadratic performance criterion (see references in Wonham [76]). At the same time, Girsanov [25] and Howard [26] made the first steps in constructing a general theory, based on Bellman's technique of dynamic programming, developed by him somewhat earlier [4]. Two types of engineering problems engendered two different parts of stochastic control theory. Problems of the first type are associated with multistep decision making in discrete time and are treated in the theory of discrete stochastic dynamic programming; for more on this theory we note, in addition to the work of Howard and Bellman mentioned above, the books by Derman [8], Mine and Osaki [55], and Dynkin and Yushkevich [12]. Another class of engineering problems which encouraged the development of the theory of stochastic control involves continuous-time control of a dynamic system in the presence of random noise. The case where the system is described by a differential equation and the noise is modeled as a continuous-time random process is the core of the optimal control theory of diffusion processes. This book deals with this latter theory.
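In schematic terms, the objects of this theory are a state process governed by a stochastic differential equation whose coefficients depend on the control, a payoff functional to be maximized over admissible strategies, and the Bellman (dynamic programming) equation satisfied by the resulting value function. The following is a minimal sketch in standard notation, given for orientation only and not quoted from the book:

% Controlled stochastic differential equation: x_t is the state, w_t a Wiener
% process, and alpha_t the control, taking values in a set A.
\[
  dx_t = b(\alpha_t, x_t)\,dt + \sigma(\alpha_t, x_t)\,dw_t, \qquad x_0 = x .
\]
% Value function: the supremum of the expected payoff over admissible strategies,
% with running reward f, terminal reward g, and a terminal (e.g. exit) time tau.
\[
  v(x) = \sup_{\alpha} \mathsf{E} \left[ \int_0^{\tau} f(\alpha_t, x_t)\,dt + g(x_{\tau}) \right].
\]
% Bellman's equation for v, with a(alpha, x) = (1/2) sigma sigma^*(alpha, x)
% (summation over repeated indices i, j is understood).
\[
  \sup_{\alpha \in A} \left[ a^{ij}(\alpha, x)\, v_{x^i x^j}(x) + b^{i}(\alpha, x)\, v_{x^i}(x) + f(\alpha, x) \right] = 0 .
\]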


Target audience


Research

Further information & material


1 Introduction to the Theory of Controlled Diffusion Processes
1. The Statement of Problems—Bellman's Principle—Bellman's Equation
2. Examples of the Bellman Equations—The Normed Bellman Equation
3. Application of Optimal Control Theory—Techniques for Obtaining Some Estimates
4. One-Dimensional Controlled Processes
5. Optimal Stopping of a One-Dimensional Controlled Process
Notes
2 Auxiliary Propositions
1. Notation and Definitions
2. Estimates of the Distribution of a Stochastic Integral in a Bounded Region
3. Estimates of the Distribution of a Stochastic Integral in the Whole Space
4. Limit Behavior of Some Functions
5. Solutions of Stochastic Integral Equations and Estimates of the Moments
6. Existence of a Solution of a Stochastic Equation with Measurable Coefficients
7. Some Properties of a Random Process Depending on a Parameter
8. The Dependence of Solutions of a Stochastic Equation on a Parameter
9. The Markov Property of Solutions of Stochastic Equations
10. Itô's Formula with Generalized Derivatives
Notes
3 General Properties of a Payoff Function
1. Basic Results
2. Some Preliminary Considerations
3. The Proof of Theorems 1.5–1.7
4. The Proof of Theorems 1.8–1.11 for the Optimal Stopping Problem
Notes
4 The Bellman Equation
1. Estimation of First Derivatives of Payoff Functions
2. Estimation from Below of Second Derivatives of a Payoff Function
3. Estimation from Above of Second Derivatives of a Payoff Function
4. Estimation of a Derivative of a Payoff Function with Respect to t
5. Passage to the Limit in the Bellman Equation
6. The Approximation of Degenerate Controlled Processes by Nondegenerate Ones
7. The Bellman Equation
Notes
5 The Construction of ε-Optimal Strategies
1. ε-Optimal Markov Strategies and the Bellman Equation
2. ε-Optimal Markov Strategies. The Bellman Equation in the Presence of Degeneracy
3. The Payoff Function and Solution of the Bellman Equation: The Uniqueness of the Solution of the Bellman Equation
Notes
6 Controlled Processes with Unbounded Coefficients: The Normed Bellman Equation
1. Generalization of the Results Obtained in Section 3.1
2. General Methods for Estimating Derivatives of Payoff Functions
3. The Normed Bellman Equation
4. The Optimal Stopping of a Controlled Process on an Infinite Interval of Time
5. Control on an Infinite Interval of Time
Notes
Appendices
1. Some Properties of Stochastic Integrals
2. Some Properties of Submartingales


