Book, English, 169 pages, format (W × H): 155 mm x 235 mm, weight: 289 g
Series: Big Data Management
ISBN: 978-981-16-3422-2
Publisher: Springer
This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems, and implementing machine learning algorithms in a distributed environment has therefore become a key technology; recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that speed up large-scale gradient optimization through both algorithmic optimizations and careful system implementations, the book introduces three essential techniques for designing a gradient optimization algorithm that trains a distributed machine learning model: the parallel strategy, gradient compression, and the synchronization protocol.
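To make these three techniques concrete, here is a minimal sketch (illustrative only, not taken from the book; all names are hypothetical) that simulates data-parallel SGD with top-k gradient sparsification under a bulk synchronous protocol, using plain NumPy in a single process:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression task, with the training data sharded across
# four simulated workers (data parallelism).
n_workers, n_features, n_samples = 4, 10, 400
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.01 * rng.normal(size=n_samples)
shards = np.array_split(np.arange(n_samples), n_workers)

def local_gradient(w, idx):
    # Each worker computes the squared-error gradient on its own shard only.
    Xi, yi = X[idx], y[idx]
    return (2.0 / len(idx)) * Xi.T @ (Xi @ w - yi)

def top_k(g, k):
    # Lossy gradient compression: keep only the k largest-magnitude entries.
    sparse = np.zeros_like(g)
    keep = np.argpartition(np.abs(g), -k)[-k:]
    sparse[keep] = g[keep]
    return sparse

w = np.zeros(n_features)
lr, k = 0.1, 3
for step in range(300):
    # Bulk synchronous protocol: collect every worker's (compressed) gradient,
    # average, then apply one global update before the next round starts.
    grads = [top_k(local_gradient(w, idx), k) for idx in shards]
    w -= lr * np.mean(grads, axis=0)

print("distance to true weights:", np.linalg.norm(w - true_w))

In a real deployment the averaging step would run over a network (for example via an all-reduce or a parameter server), which is exactly where the choices of compression scheme and synchronization protocol studied in the book start to dominate performance.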
Written in a tutorial style, it covers a range of topics, from fundamental concepts to a number of carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.
Target audience
Research
Further Information & Material
Chapter 1: Introduction
1.1. Background
1.2. Distributed machine learning
1.3. Gradient optimization
1.4. Challenges
Chapter 2: Preliminaries
2.1. Overview
2.2. Parallel strategy
2.3. Gradient compression
2.4. Synchronization protocol
Chapter 3: Parallel strategy
3.1. Background and problem
3.2. Data parallelism
3.3. Model parallelism
3.4. Hybrid parallelism
3.5. Benchmark
3.6. Summary
Chapter 4: Gradient compression
4.1. Background and problem
4.2. Lossless gradient compression
4.3. Lossy gradient compression
4.4. Sparse gradient compression
4.5. Benchmark
4.6. Summary
Chapter 5: Synchronization protocol
5.1. Background and problem
5.2. Bulk synchronous protocol
5.3. Asynchronous protocol
5.4. Stale synchronous protocol
5.5. Benchmark
5.6. Summary
Chapter 6: Conclusion
6.1. Summary of the book
6.2. Future work