Uhr / Rheinboldt
Algorithm-Structured Computer Arrays and Networks
Architectures and Processes for Images, Percepts, Models, Information

E-book, English, 438 pages, Web PDF
1st edition, 2014
ISBN: 978-1-4832-6705-0
Publisher: Elsevier Science & Techn.
Format: PDF
Copy protection: 1 - PDF Watermark
Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines parallel-array, pipeline, and other network multicomputers. The book describes and explores arrays and networks that have been built, are being designed, or have been proposed. It also discusses the problems of developing higher-level languages for such systems and of designing algorithm, program, data-flow, and computer structures. The text then describes several sequences of successively more general attempts to combine the power of arrays with the flexibility of networks into structures that reflect and embody the flow of information through their processors. This publication is useful as a textbook or auxiliary textbook for students taking courses on computer architecture, parallel computers, arrays and networks, and image processing and pattern recognition.


Further information & material


1;Front Cover;1
2;Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information;4
3;Copyright Page;5
4;Table of Contents;6
5;Preface;14
6;Acknowledgments;22
7;PART I: AN INTRODUCTION TO COMPUTERS;26
7.1;Introduction: Toward Algorithm-Structured Architectures;28
7.1.1;Arrays and Networks of Large Numbers of Closely Coupled Computers;28
7.1.2;Toward Architectures That Mirror Algorithms' Information Flow;30
7.1.3;The Traditional Single "Central Processing Unit" Serial Computer;31
7.1.4;Problems for Which the One-CPU Serial Computer Is Inadequate;31
7.1.5;Toward Developing Powerfully Structured Arrays and Networks;35
7.1.6;Very Large-Scale Integration (VLSI) and Very Large Networks;36
7.1.7;Steps toward Efficient, Powerful Algorithm-Structured Architectures;37
7.2;Chapter 1. Conventional Computers and Loosely Distributed Networks;41
7.2.1;General-Purpose Computers and Turing Machines;42
7.2.2;The General-Purpose Single-Processor Serial Computer Described;43
7.2.3;How Computers Actually Work;46
7.2.4;Graphs, Automata, Petri Nets, Information Flow, Network Flow;49
7.2.5;Parallel Hardware Additions for Super (Traditional) Computers;53
7.2.6;Networks of Loosely Coupled Distributed Computers;55
7.2.7;Summary Discussion;59
8;PART II: ARRAYS AND NETWORKS BUILT OR DESIGNED;62
8.1;Chapter 2. First Attempts at Designing and Organizing Multicomputers;64
8.1.1;Mapping Process-Information Graph into Processor-Memory Network Flow;66
8.1.2;A Survey of Early (Pre-LSI) Multicomputer Arrays and Networks;68
8.1.3;Associative, Content-Addressable Memories (CAMs) and Logic-in-Memory;73
8.1.4;Super Multicomputers;74
8.1.5;Organizing Computers: Clocks, Controllers, Operating Systems;76
8.1.6;The Overall Coordination and Operation of a Multicomputer Network;81
8.1.7;Summary Discussion;84
8.2;Chapter 3. Large and Powerful Arrays and Pipelines;86
8.2.1;The General Architecture of Parallel Cellular Array Computers;87
8.2.2;The Very Large LSI Parallel-Array Computer;89
8.2.3;An Examination of Today's Very Large Arrays;91
8.2.4;Pipelines of Processors;96
8.2.5;Systems That Combine Array, Pipeline, and Specialized Hardware;100
8.2.6;Summary Discussion: Arrays, Pipelines, Parallelism, VLSI;103
8.3;Chapter 4. More or Less Tightly Coupled Networks;106
8.3.1;Simple Structures: Buses, Rings, and Cross-Point Switches;106
8.3.2;Lattices, N-Cubes, Trees, and Stars;108
8.3.3;Augmenting Trees for Density, Connectivity, and Structure;111
8.3.4;Miscellaneous Interconnection Patterns;114
8.3.5;Reconfiguring Network Topologies and Component Parts;115
8.3.6;Network-Structured Programs and Algorithms;118
8.3.7;Summary Discussion and Preliminary Comparisons;120
8.4;Chapter 5. Parallel Parallel Computers;122
8.4.1;The Great Variety of Possible Array, Pipeline, and Network Structures;122
8.4.2;The Value of a Parallel Set of Parallel Resources;124
8.4.3;Suggested Requirements for Parallel Parallel Systems;126
8.4.4;Summary and Conclusions;128
8.5;Chapter 6. Converging Pyramids of Arrays;130
8.5.1;A Pipeline of Converging Stacked Arrays;131
8.5.2;An Example of a Potentially Powerful yet Economical Pyramid;133
8.5.3;A Combined Array/Network Architecture;134
8.5.4;Possible Mixtures of N-Bit Processors;136
8.5.5;Summary Discussion;136
8.6;Chapter 7. Comparisons among Different Interconnection Strategies;138
8.6.1;Comparisons between Arrays and Networks;138
8.6.2;Comparisons among Networks Using Formal Criteria;139
8.6.3;Tests, Simulations, Estimates, Models;143
8.6.4;Some Structural Similarities among Different Networks;148
8.6.5;X-Trees, Hex-Trees, N-Trees, Arrays, and Lattices;149
8.6.6;Moving from an N-Cube to an N-Lattice;150
8.6.7;The Need to Build and Evaluate Networks, and to Handle Messages;151
9;PART III: DEVELOPING PARALLEL ALGORITHMS, LANGUAGES, DATA FLOW;154
9.1;Chapter 8. Formulating, Programming, and Executing Efficient Parallel Algorithms;156
9.1.1;Programming Languages and Operating Systems for Parallel Programs;157
9.1.2;Converting Our Programs and Ways of Thinking from Serial to Parallel;161
9.1.3;Functional Applicative, Production, and Data-Flow Languages;167
9.1.4;Language Development Systems to Specify Data Flow through Structures;171
9.1.5;On Fitting Problem, Algorithm, Program, and Network to One Another;172
9.2;Chapter 9. Development of Algorithm-Program-Architecture-Data Flow;174
9.2.1;Developing Program, Mapping onto Network, and Program Flow;174
9.2.2;Parallel versus Serial versus Appropriately Structured Parallel-Serial;176
9.2.3;An Example Formulation of a Program for a Parallel Pyramid Network;178
9.2.4;Summarizing Observation;183
9.3;Chapter 10. Some Short Examples Mapping Algorithms onto Networks;184
9.3.1;Breadth-First and Heuristic Searches;184
9.3.2;Searches for Question-Answering and Database Access;186
9.3.3;Modeling Systems of Muscles and of Neurons;188
9.3.4;Modeling the Whole System, in Interaction with Other Systems;189
9.3.5;Parallel Algorithms for Processes That May Be Basically Serial;190
9.4;Chapter 11. A Language for Parallel Processing, Embedded in Pascal;193
9.4.1;A Brief Examination of Parallel Programming Languages;195
9.4.2;A Description, with Examples, of PascalPL1;198
9.4.3;Discussion;205
9.5;Chapter 12. A Programming Language for a Very Large Array;207
9.5.1;An Introductory Description of PascalPL0;208
9.5.2;Coding an Array like CLIP in a Language like PascalPL0;211
9.5.3;Summary of Experience with PascalPL0;212
10;PART IV: TOWARD GENERAL AND POWERFUL FUTURE NETWORK ARCHITECTURES;214
10.1;Chapter 13. A Quick Introduction to Future Possibilities;216
10.1.1;Very Very Large Networks;217
10.1.2;Very Large Networks;218
10.1.3;A Small, Powerful Network of Very Powerful Processors;219
10.1.4;Very Very Very Large Arrays and 3D Lattices of Stacked Arrays;220
10.1.5;Very Powerful Pipelines and Arrays and Trees of Pipelines;222
10.1.6;A Summarizing Comment;222
10.2;Chapter 14. Construction Principles for Efficient Parallel Structures;224
10.2.1;The Great Variety of Possible Sources of Parallelism;225
10.2.2;Additional Speedups and Efficiencies from Hardware and Software;228
10.2.3;From Tightly Coupled to Loosely Coupled and Possibilities in Between;230
10.2.4;Promising Construction Methods: General Principles;231
10.2.5;Load-Balancing Processor, Input, Output, and Memory Resources;234
10.3;Chapter 15. Designing VLSI Chip-Embodied Information-Flow Multicomputers;237
10.3.1;Embedding Networks on VLSI Chips;238
10.3.2;Examples of Related Analogous Design Problems;244
10.3.3;Arrays of Processors Surrounding Local Shared Memories;246
10.3.4;Designing in Terms of Chip Area;249
10.3.5;Summary Discussion;251
10.4;Chapter 16. Compounding Clusters and Augmenting Trees;253
10.4.1;Arrays, Tree/Pyramids, N-Cubes;253
10.4.2;Pyramids and Lattices;255
10.4.3;Toroidal Lattices and Spheres;257
10.4.4;A Summarizing Tentative Choice and Design of Attractive Structures;258
10.5;Chapter 17. Pyramid, Lattice, Discus, Torus, Sphere;261
10.5.1;The Basic Pyramid Structure;262
10.5.2;Building Blocks and Modules;267
10.5.3;Handling Input-Output, Control, and Fault Tolerance with an IOC Pyramid;270
10.5.4;Examples of Attractive Pyramids;272
10.5.5;Lattices;274
10.5.6;Discus, Torus, Egg, Cone, Sphere;277
10.6;Chapter 18. Three-Dimensional Array/Network Whorls of Processors;279
10.6.1;Spheres within Spheres of Onion-Layered Faceted Arrays;280
10.6.2;Dense Three-Dimensional Whorls with Programs Pipeline-Swirling through Them;281
10.6.3;Limited Reconfiguring for Multiprogramming and Fault Tolerance;282
10.6.4;Array/Networks That Are Sections of a Whorl;283
10.6.5;Summary Discussion;283
10.7;Chapter 19. Toward Very Large, Efficient, Usable Network Structures;285
10.7.1;The Great Variety of Potentially Powerful, Efficient Networks;285
10.7.2;The Promise, and Problems, in Future VLSI Technologies;289
10.7.3;Allocating Gates to Networks of Processors and Memory on Silicon;291
10.7.4;Pipe-Network Flow through Combined Array-Network Clusters;293
10.7.5;A Final Word: The Immediate and More Distant Futures;295
11;PART V: APPENDIXES: BACKGROUND MATTERS AND RELATED ISSUES;300
11.1;Appendix A: Applied Graphs, to Describe and Construct Computer Networks;302
11.2;Appendix B: Bits of Logical Design Embodied in Binary Switches;317
11.3;Appendix C: Component Cost and Chip Packing Estimates for VLSI;326
11.4;Appendix D: Design of Basic Components and Modules for Very Large Networks;341
11.5;Appendix E: Examples of Languages and Code for Arrays and Networks;357
11.6;Appendix F: Organizations in Crystal, Cell, Animal, and Human Groups;373
11.7;Appendix G: Messages (Information Passed between Processors);383
11.8;Appendix H: What Is 'Real' 'Time'? A Metaphysical Aside;390
12;Suggestions for Further Reading;395
13;References;398
14;Glossary;416
15;Author Index;424
16;Subject Index;430
17;Computer Science and Applied Mathematics: A SERIES OF MONOGRAPHS AND TEXTBOOKS;439


