Freeman | Pro .NET 4 Parallel Programming in C# | E-Book | www2.sack.de

E-book, English, 328 pages

Freeman Pro .NET 4 Parallel Programming in C#


1st edition
ISBN: 978-1-4302-2968-1
Publisher: Apress
Format: PDF
Copy protection: PDF watermark




Parallel programming has been revolutionised in .NET 4, which provides, for the first time, a standardised and simplified method for creating robust, scalable and reliable multi-threaded applications. The parallel programming features of .NET 4 allow the programmer to create applications that harness the power of multi-core and multi-processor machines. Simpler to use and more powerful than 'classic' .NET threads, parallel programming allows the developer to remain focused on the work an application needs to perform.

In Pro .NET 4 Parallel Programming in C#, Adam Freeman presents expert advice that guides you through the process of creating concurrent C# applications from the ground up. You'll be introduced to .NET's parallel programming features, both old and new, discover the key functionality introduced in .NET 4, and learn how to take advantage of the power of multi-core and multi-processor machines with ease.

Pro .NET 4 Parallel Programming in C# is a reliable companion that will remain with you as you explore the parallel programming universe, elegantly and comprehensively explaining all aspects of parallel programming, guiding you around potential pitfalls and providing clear-cut solutions to the common problems you will encounter.

Adam Freeman is an experienced IT professional who has held senior positions in a range of companies, most recently serving as chief technology officer and chief operating officer of a global bank. Now retired, he spends his time writing and long-distance running.


Authors/Editors


Further Information & Material


Title Page (1)
Copyright Page (2)
Contents at a Glance (4)
Table of Contents (5)
About the Author (13)
About the Technical Reviewer (14)
Acknowledgments (15)
CHAPTER 1 Introducing Parallel Programming (16)
    Introducing .NET Parallel Programming (16)
    What’s in This Book (and What Is Not) (17)
    Understanding the Benefits (and Pitfalls) of Parallel Programming (18)
        Considering Overhead (18)
        Coordinating Data (18)
        Scaling Applications (18)
    Deciding When to Go Parallel (18)
    Deciding When to Stay Sequential (19)
    Getting Prepared for This Book (19)
    Understanding the Structure of This Book (19)
    Getting the Example Code (20)
    Summary (21)
CHAPTER 2 Task Programming (22)
    Hello Task (22)
    Creating and Starting Tasks (23)
        Creating Simple Tasks (24)
        Setting Task State (26)
        Getting a Result (28)
        Specifying Task Creation Options (30)
        Identifying Tasks (30)
    Cancelling Tasks (30)
        Monitoring Cancellation by Polling (32)
        Monitoring Cancellation with a Delegate (34)
        Monitoring Cancellation with a Wait Handle (35)
        Cancelling Several Tasks (37)
        Creating a Composite Cancellation Token (38)
        Determining If a Task Was Cancelled (39)
    Waiting for Time to Pass (40)
        Using a Cancellation Token Wait Handle (41)
        Using Classic Sleep (42)
        Using Spin Waiting (44)
    Waiting for Tasks (45)
        Waiting for a Single Task (46)
        Waiting for Several Tasks (48)
        Waiting for One of Many Tasks (49)
    Handling Exceptions in Tasks (50)
        Handling Basic Exceptions (51)
        Using an Iterative Handler (52)
        Reading the Task Properties (54)
        Using a Custom Escalation Policy (56)
    Getting the Status of a Task (58)
    Executing Tasks Lazily (58)
    Understanding Common Problems and Their Causes (60)
        Task Dependency Deadlock (60)
            Solution (60)
            Example (60)
        Local Variable Evaluation (61)
            Solution (61)
            Example (61)
        Excessive Spinning (62)
            Solution (62)
            Example (62)
    Summary (63)
CHAPTER 3 Sharing Data (64)
    The Trouble with Data (65)
        Going to the Races (65)
        Creating Some Order (66)
    Executing Sequentially (67)
    Executing Immutably (67)
    Executing in Isolation (68)
    Synchronizing Execution (74)
        Defining Critical Regions (74)
        Defining Synchronization Primitives (74)
        Using Synchronization Wisely (75)
            Don’t Synchronize Too Much (76)
            Don’t Synchronize Too Little (76)
            Pick the Lightest Tool (76)
            Don’t Write Your Own Synchronization Primitives (76)
    Using Basic Synchronization Primitives (76)
        Locking and Monitoring (77)
        Using Interlocked Operations (82)
        Using Spin Locking (85)
        Using Wait Handles and the Mutex Class (87)
            Acquiring Multiple Locks (89)
        Configuring Interprocess Synchronization (91)
        Using Declarative Synchronization (93)
        Using Reader-Writer Locks (94)
            Using the ReaderWriterLockSlim Class (94)
            Using Recursion and Upgradable Read Locks (98)
    Working with Concurrent Collections (102)
        Using .NET 4 Concurrent Collection Classes (103)
            ConcurrentQueue (104)
            ConcurrentStack (106)
            ConcurrentBag (108)
            ConcurrentDictionary (109)
        Using First-Generation Collections (112)
        Using Generic Collections (114)
    Common Problems and Their Causes (115)
        Unexpected Mutability (115)
            Solution (115)
            Example (115)
        Multiple Locks (116)
            Solution (117)
            Example (117)
        Lock Acquisition Order (118)
            Solution (119)
            Example (119)
        Orphaned Locks (120)
            Solution (120)
            Example (120)
    Summary (122)
CHAPTER 4 Coordinating Tasks (123)
    Doing More with Tasks (124)
    Using Task Continuations (124)
        Creating Simple Continuations (125)
        Creating One-to-Many Continuations (127)
        Creating Selective Continuations (129)
        Creating Many-to-One and Any-to-One Continuations (131)
        Canceling Continuations (134)
        Waiting for Continuations (136)
        Handling Exceptions (136)
    Creating Child Tasks (140)
    Using Synchronization to Coordinate Tasks (143)
        Barrier (145)
        CountDownEvent (150)
        ManualResetEventSlim (153)
        AutoResetEvent (155)
        SemaphoreSlim (157)
    Using the Parallel Producer/Consumer Pattern (160)
        Creating the Pattern (161)
            Creating a BlockingCollection Instance (162)
            Selecting the Collection Type (163)
            Creating the Producers (163)
            Creating the Consumer (164)
        Combining Multiple Collections (166)
    Using a Custom Task Scheduler (170)
        Creating a Custom Scheduler (170)
        Using a Custom Scheduler (174)
    Common Problems and Their Causes (176)
        Inconsistent/Unchecked Cancellation (176)
            Solution (176)
            Example (176)
        Assuming Status on Any-to-One Continuations (178)
            Solution (178)
            Example (178)
        Trying to Take Concurrently (179)
            Solution (179)
            Example (179)
        Reusing Objects in Producers (180)
            Solution (180)
            Example (180)
        Using BlockingCollection as IEnumerable (182)
            Solution (182)
            Example (182)
        Deadlocked Task Scheduler (183)
            Solution (183)
            Example (183)
    Summary (186)
CHAPTER 5 Parallel Loops (187)
    Parallel vs. Sequential Loops (187)
    The Parallel Class (189)
        Invoking Actions (189)
        Using Parallel Loops (190)
            Creating a Basic Parallel For Loop (191)
            Creating a Basic Parallel ForEach Loop (193)
        Setting Parallel Loop Options (195)
        Breaking and Stopping Parallel Loops (197)
        Handling Parallel Loop Exceptions (201)
        Getting Loop Results (202)
        Canceling Parallel Loops (203)
        Using Thread Local Storage in Parallel Loops (204)
        Performing Parallel Loops with Dependencies (207)
        Selecting a Partitioning Strategy (209)
            Using the Chunking Partitioning Strategy (210)
            Using the Ordered Default Partitioning Strategy (213)
        Creating a Custom Partitioning Strategy (214)
            Writing a Contextual Partitioner (215)
            Writing an Orderable Contextual Partitioner (223)
    Common Problems and Their Causes (228)
        Synchronization in Loop Bodies (228)
            Solution (228)
            Example (229)
        Loop Body Data Races (229)
            Solution (229)
            Example (229)
        Using Standard Collections (230)
            Solution (230)
            Example (230)
        Using Changing Data (231)
            Solution (231)
            Example (231)
    Summary (232)
CHAPTER 6 Parallel LINQ (233)
    LINQ, But Parallel (233)
    Using PLINQ Queries (236)
        Using PLINQ Query Features (239)
        Ordering Query Results (240)
            Using Ordered Subqueries (244)
        Performing a No-Result Query (245)
    Managing Deferred Query Execution (246)
    Controlling Concurrency (248)
        Forcing Parallelism (249)
        Limiting Parallelism (250)
        Forcing Sequential Execution (251)
    Handling PLINQ Exceptions (252)
    Cancelling PLINQ Queries (253)
    Setting Merge Options (254)
    Using Custom Partitioning (256)
    Using Custom Aggregation (259)
    Generating Parallel Ranges (260)
    Common Problems and Their Causes (261)
        Forgetting the PLINQ Basics (261)
            Solution (261)
        Creating Race Conditions (262)
            Solution (262)
            Example (262)
        Confusing Ordering (262)
            Solution (263)
            Example (263)
        Sequential Filtering (263)
            Solution (264)
            Example (264)
    Summary (264)
CHAPTER 7 Testing and Debugging (265)
    Making Things Better When Everything Goes Wrong (265)
    Measuring Parallel Performance (266)
        Using Good Coding Strategies (266)
            Using Synchronization Sparingly (266)
            Using Synchronization Readily (266)
            Partitioning Work Evenly (266)
            Avoiding Parallelizing Small Work Loads (266)
            Measure Different Degrees of Concurrency (267)
        Making Simple Performance Comparisons (267)
        Performing Parallel Analysis with Visual Studio (270)
    Finding Parallel Bugs (274)
        Debugging Program State (275)
        Handling Exceptions (279)
        Detecting Deadlocks (281)
    Summary (283)
CHAPTER 8 Common Parallel Algorithms (284)
    Sorting, Searching, and Caching (284)
        Using Parallel Quicksort (284)
            The Code (285)
            Using the Code (286)
        Traversing a Parallel Tree (287)
            The Code (287)
            Using the Code (288)
        Searching a Parallel Tree (289)
            The Code (289)
            Using the Code (290)
        Using a Parallel Cache (291)
            The Code (292)
            Using the Code (292)
    Using Parallel Map and Reductions (293)
        Using a Parallel Map (293)
            The Code (293)
            Using the Code (294)
        Using a Parallel Reduction (295)
            The Code (295)
            Using the Code (295)
        Using Parallel MapReduce (296)
            The Code (296)
            Using the Code (297)
    Speculative Processing (298)
        Selection (298)
            The Code (298)
            Using the Code (300)
        Speculative Caching (301)
            The Code (301)
            Using the Code (302)
    Using Producers and Consumers (303)
        Decoupling the Console Class (303)
            The Code (303)
            Using the Code (304)
        Creating a Pipeline (305)
            The Code (305)
            Using the Code (306)
Index (308)


