
E-book, English, 312 pages

Gittins / Glazebrook / Weber Multi-armed Bandit Allocation Indices


2nd edition, 2011
ISBN: 978-1-119-99021-5
Publisher: John Wiley & Sons
Format: EPUB
Copy protection: Adobe DRM (see system requirements)




In 1989 the first edition of this book set out Gittins' pioneering index solution to the multi-armed bandit problem and his subsequent investigation of a wide range of sequential resource allocation and stochastic scheduling problems. Since then there has been a remarkable flowering of new insights, generalizations and applications, to which Glazebrook and Weber have made major contributions.
This second edition brings the story up to date. There are new chapters on the achievable region approach to stochastic optimization problems, the construction of performance bounds for suboptimal policies, Whittle's restless bandits, and the use of Lagrangian relaxation in the construction and evaluation of index policies. Some of the many varied proofs of the index theorem are discussed along with the insights that they provide. Many contemporary applications are surveyed, and over 150 new references are included.
Over the past 40 years the Gittins index has helped theoreticians and practitioners to address a huge variety of problems within chemometrics, economics, engineering, numerical analysis, operational research, probability, statistics and website design. This new edition will be an important resource for others wishing to use this approach.
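As a flavour of the kind of calculation the book treats (see Sections 2.10 and 8.4), the following Python sketch approximates the Gittins index of a single Bernoulli arm with a Beta(a, b) prior, using the standard calibration definition: the per-step retirement reward at which continuing the arm and retiring are equally attractive. This is an illustration only, not code from the book; the function name, the horizon truncation and the tolerance are choices made here for the sketch.

# Illustrative sketch (not from the book): approximate the Gittins index of a
# Bernoulli bandit arm with a Beta(a, b) prior by dynamic programming plus a
# binary search for the indifference point.
from functools import lru_cache

def gittins_index_bernoulli(a: int, b: int, gamma: float = 0.9,
                            depth: int = 60, tol: float = 1e-6) -> float:
    """Approximate Gittins index of a Bernoulli arm with Beta(a, b) prior (a, b >= 1)."""

    def value(a0: int, b0: int, lam: float) -> float:
        # Value of "continue the arm or retire on lam per step forever",
        # computed by finite-horizon dynamic programming, truncated at `depth`.
        retire = lam / (1.0 - gamma)

        @lru_cache(maxsize=None)
        def v(s: int, f: int, d: int) -> float:
            if d == 0:
                return retire               # beyond the horizon, assume retirement
            p = s / (s + f)                 # posterior mean success probability
            cont = p * (1.0 + gamma * v(s + 1, f, d - 1)) \
                 + (1.0 - p) * gamma * v(s, f + 1, d - 1)
            return max(retire, cont)

        return v(a0, b0, depth)

    # Binary search for the smallest lam at which retiring immediately is optimal;
    # for Bernoulli rewards the index lies in [0, 1].
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if value(a, b, lam) > lam / (1.0 - gamma) + tol:
            lo = lam                        # continuing still beats retiring
        else:
            hi = lam
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # Example: one observed success and one failure (uniform prior), discount 0.9.
    print(round(gittins_index_bernoulli(1, 1, gamma=0.9), 4))

The horizon truncation slightly understates the continuation value, so the resulting index is a (close) lower approximation; the book's tables give exact values for standard cases.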


Further information & material


Foreword.
Foreword to the first edition.
Preface.
Preface to the first edition.
1 Introduction or Exploration.
Exercises.
2 Main Ideas: Gittins Index.
2.1 Introduction.
2.2 Decision processes.
2.3 Simple families of alternative bandit processes.
2.4 Dynamic programming.
2.5 Gittins index theorem.
2.6 Gittins index.
2.7 Proof of the index theorem by interchanging bandit portions.
2.8 Continuous-time bandit processes.
2.9 Proof of the index theorem by induction and interchange argument.
2.10 Calculation of Gittins indices.
2.11 Monotonicity conditions.
2.12 History of the index theorem.
2.13 Some decision process theory.
Exercises.
3 Necessary Assumptions for Indices.
3.1 Introduction.
3.2 Jobs.
3.3 Continuous-time jobs.
3.4 Necessary assumptions.
3.5 Beyond the necessary assumptions.
Exercises.
4 Superprocesses, Precedence Constraints and Arrivals.
4.1 Introduction.
4.2 Bandit superprocesses.
4.3 The index theorem for superprocesses.
4.4 Stoppable bandit processes.
4.5 Proof of the index theorem by freezing and promotion rules.
4.6 The index theorem for jobs with precedence constraints.
4.7 Precedence constraints forming an out-forest.
4.8 Bandit processes with arrivals.
4.9 Tax problems.
4.10 Near optimality of nearly index policies.
Exercises.
5 The Achievable Region Methodology.
5.1 Introduction.
5.2 A simple example.
5.3 Proof of the index theorem by greedy algorithm.
5.4 Generalized conservation laws and indexable systems.
5.5 Performance bounds for policies for branching bandits.
5.6 Job selection and scheduling problems.
5.7 Multi-armed bandits on parallel machines.
Exercises.
6 Restless Bandits and Lagrangian Relaxation.
6.1 Introduction.
6.2 Restless bandits.
6.3 Whittle indices for restless bandits.
6.4 Asymptotic optimality.
6.5 Monotone policies and simple proofs of indexability.
6.6 Applications to multi-class queuing systems.
6.7 Performance bounds for the Whittle index policy.
6.8 Indices for more general resource configurations.
Exercises.
7 Multi-Population Random Sampling (Theory).
7.1 Introduction.
7.2 Jobs and targets.
7.3 Use of monotonicity properties.
7.4 General methods of calculation: use of invariance properties.
7.5 Random sampling times.
7.6 Brownian reward processes.
7.7 Asymptotically normal reward processes.
7.8 Diffusion bandits.
Exercises.
8 Multi-Population Random Sampling (Calculations).
8.1 Introduction.
8.2 Normal reward processes (known variance).
8.3 Normal reward processes (mean and variance both unknown).
8.4 Bernoulli reward processes.
8.5 Exponential reward processes.
8.6 Exponential target process.
8.7 Bernoulli/exponential target process.
Exercises.
9 Further Exploitation.
9.1 Introduction.
9.2 Website morphing.
9.3 Economics.
9.4 Value of information.
9.5 More on job-scheduling problems.
9.6 Military applications.
References.
Tables.
Index.


