Book, English, Volume 314, 238 pages, format (W × H): 155 mm × 235 mm, weight: 394 g
Series: The Springer International Series in Engineering and Computer Science
Vector Decomposition Analysis, Modelling and Analog Implementation
ISBN: 978-1-4613-5990-6
Publisher: Springer US
Starting with the derivation of a specification and ending with its hardware implementation, analog hard-wired, feed-forward neural networks with on-chip back-propagation learning are designed in their entirety. On-chip learning is necessary in circumstances where fixed weight configurations cannot be used. It is also useful for eliminating most of the mismatches and parameter tolerances that occur in hard-wired neural network chips.
Fully analog neural networks have several advantages over other implementations: low chip area, low power consumption, and high-speed operation.
The book is an excellent source of reference and may be used as a text for advanced courses.
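The on-chip back-propagation learning described above can be sketched in software. The following minimal example (the function names, learning rate, and the OR task are illustrative assumptions, not taken from the book) trains a single-layer feed-forward network with the back-propagation delta rule:

```python
import math
import random

# Hypothetical software sketch, not the book's analog circuits: a
# single-layer feed-forward network trained with back-propagation
# (the delta rule), illustrating the learning loop that the book
# realizes in on-chip hardware.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, n_in, eta=0.5, epochs=2000, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(n_in)]
    b = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, target in samples:
            # Forward pass through the single layer
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Squared-error gradient; f'(net) = y * (1 - y) for the sigmoid
            delta = (y - target) * y * (1.0 - y)
            # Weight adaptation (done by analog circuitry in the book)
            w = [wi - eta * delta * xi for wi, xi in zip(w, x)]
            b -= eta * delta
    return w, b

# Logical OR is linearly separable, so a single layer suffices.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data, n_in=2)
preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
```

In the analog implementations the book analyzes, the same multiply-accumulate and weight-update steps are performed by continuous-time circuitry, which is why later chapters study precision requirements and discretization of the weight adaptations.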
Target audience
Research
Authors/Editors
Subject areas
- Engineering > Energy Technology | Electrical Engineering > Electrical Engineering
- Natural Sciences > Physics > General Physics > Theoretical, Mathematical and Computational Physics
- Engineering > Electronics | Communications Engineering > Electronics > Components, Circuits
- Interdisciplinary Sciences > Science Research and Information > Cybernetics, Systems Theory, Complex Systems
Further information & material
1 Introduction
2 The Vector Decomposition Method
3 Dynamics of Single-Layer Nets
4 Unipolar Input Signals in Single-Layer Feed-Forward Neural Networks
5 Cross-talk in Single-Layer Feed-Forward Neural Networks
6 Precision Requirements for Analog Weight Adaptation Circuitry for Single-Layer Nets
7 Discretization of Weight Adaptations in Single-Layer Nets
8 Learning Behavior and Temporary Minima of Two-Layer Neural Networks
9 Biases and Unipolar Input Signals for Two-Layer Neural Networks
10 Cost Functions for Two-Layer Neural Networks
11 Some Issues for f'(x)
12 Feed-Forward Hardware
13 Analog Weight Adaptation Hardware
14 Conclusions
Nomenclature