
E-book, English, 244 pages

Gokhale / Graham Reconfigurable Computing

Accelerating Computation with Field-Programmable Gate Arrays
1st edition, 2006
ISBN: 978-0-387-26106-5
Publisher: Springer US
Format: PDF
Copy protection: PDF watermark

- A one-of-a-kind survey of the field of Reconfigurable Computing
- Gives a comprehensive introduction to a discipline that offers a 10X-100X acceleration of algorithms over microprocessors
- Discusses the impact of reconfigurable hardware on a wide range of applications: signal and image processing, network security, bioinformatics, and supercomputing
- Includes the history of the field as well as recent advances
- Includes an extensive bibliography of primary sources


Target audience


Professional/practitioner

Further information & material


An Introduction to Reconfigurable Computing: What is RC? RC Architectures. How did RC originate? Inside the FPGA. Mapping Algorithms to Hardware. RC Applications. Example: Dot Product. Further Reading.
Reconfigurable Logic Devices: Field-Programmable Gate Arrays. Coarse-Grained Reconfigurable Arrays. Summary.
Reconfigurable Computing Systems: Parallel Processing on Reconfigurable Computers. A Survey of Reconfigurable Computing Systems. Summary.
Languages and Compilation: Design Cycle. Languages. High Level Compilation. Low Level Design Flow. Debugging Reconfigurable Computing Applications. Summary.
Signal Processing Applications: What is Digital Signal Processing? Why Use Reconfigurable Computing for DSP? DSP Application Building Blocks. Example DSP Applications.
Image Processing: RC for Image and Video Processing. Local Neighborhood Functions. Convolution. Morphology. Feature Extraction. Automatic Target Recognition. Image Matching. Evolutionary Image Processing. Summary.
Network Security: Cryptographic Applications. Network Protocol Security. Summary.
Bioinformatics Applications: Introduction. Applications. Dynamic Programming Algorithms. Seed-Based Heuristics. Profiles, HMMs and Language Models. Bioinformatics FPGA Accelerators. Summary.
Supercomputing Applications: Introduction. Monte Carlo Simulation of Radiative Heat Transfer. Urban Road Traffic Simulation.


3 Reconfigurable Computing Systems (p. 37-38)

In this chapter, we will discuss general-purpose computing systems that incorporate FPGAs into the system architecture. While modern FPGAs include processors, memory blocks, and built-in I/O interfaces on-chip, reconfigurable systems, even those with a single FPGA or tiled processor array, contain off-chip memory and I/O resources as well. Since reconfigurable computing is concerned with parallel operations at any level of granularity, we will motivate the roles that FPGAs can play by first discussing parallel processing models and how they might use reconfigurable logic. We will then survey the field of reconfigurable processing systems.

3.1 Parallel Processing on Reconfigurable Computers

Reconfigurable computing systems derive high performance by exploiting parallelism at multiple levels of granularity, from instruction-level through task-level parallelism. In this section we introduce the levels of parallelism and discuss the use of reconfigurable hardware at the various granularities of parallelization.

3.1.1 Instruction Level Parallelism

The lowest level of granularity we consider is instruction-level parallelism. In conventional microprocessors, instruction-level parallelism is exploited in the micro-architecture of a superscalar processor. By having multiple instructions in progress in different stages of completion, the superscalar processor is able to complete more than one instruction in a clock cycle.
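
To make the dependence constraint concrete, here is a small C fragment of our own (not an example from the book): the two independent statements can be issued to separate function units in the same cycle, while the final sum must wait for both results.

    /* Illustrative only: independent operations expose instruction-level
     * parallelism; a superscalar core may issue both in one cycle. */
    int ilp_example(int a, int b, int x, int y)
    {
        int s = a + b;   /* independent of the multiply below        */
        int p = x * y;   /* can issue in parallel with the addition  */
        return s + p;    /* depends on both, so it completes last    */
    }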

Very Long Instruction Word (VLIW) processors offer another method for fine-grained parallel operation. A VLIW processor contains multiple function units operating in parallel. In Figure 3.1, the instruction word contains fields for two integer operations, two floating point operations, two memory operations, and a branch. To compile for a superscalar processor, the compiler simply generates a sequential instruction stream, and the processor parallelizes the instruction stream at run time. In contrast, the VLIW processor executes the instruction word generated by the compiler, requiring the compiler to schedule concurrent operations at compile time.
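
The instruction word itself can be pictured as a record with one slot per function unit. The C sketch below is our own illustration of the layout described for Figure 3.1, with assumed field names and 32-bit slot widths rather than any real VLIW encoding; the compiler fills every slot at compile time, inserting no-ops where it finds nothing to schedule.

    /* Hypothetical sketch of a VLIW instruction word: two integer slots,
     * two floating-point slots, two memory slots, and one branch slot.
     * Field widths are assumptions for illustration only. */
    #include <stdint.h>

    typedef struct {
        uint32_t int_op[2];   /* two integer operations        */
        uint32_t fp_op[2];    /* two floating-point operations */
        uint32_t mem_op[2];   /* two memory operations         */
        uint32_t branch_op;   /* one branch                    */
    } vliw_word;              /* all seven slots issue together */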

Co-processor parallelism is achieved within a single instruction stream: a customized parallel instruction is performed by the co-processor. Examples of co-processors include MMX/SSE units or vector units. Instructions for the co-processor are integrated into the instruction set of the processor. The co-processor shares register files and other internal state with other arithmetic units, such as the floating-point units, as shown in Figure 3.2.
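
As a concrete sketch of this model (ours, not the book's example), the C fragment below uses the SSE intrinsics that x86 compilers expose for the vector unit: the single _mm_add_ps instruction adds four floats at once, issued from the same instruction stream as the surrounding scalar code.

    /* Co-processor style parallelism via SSE intrinsics (requires an
     * x86 compiler with SSE support). One vector add processes four
     * floats in a single instruction. */
    #include <xmmintrin.h>

    void add4(const float *a, const float *b, float *out)
    {
        __m128 va = _mm_loadu_ps(a);     /* load four floats from a    */
        __m128 vb = _mm_loadu_ps(b);     /* load four floats from b    */
        __m128 vs = _mm_add_ps(va, vb);  /* one SIMD add does all four */
        _mm_storeu_ps(out, vs);          /* store the four results     */
    }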


