Book, English, 466 pages, format (W × H): 160 mm × 241 mm, weight: 981 g
ISBN: 978-3-031-20638-2
Publisher: Springer
This book is a comprehensive curation, exposition, and illustrative discussion of recent research tools for the interpretability of deep learning models, with a focus on neural network architectures. In addition, it includes several case studies from application-oriented articles in computer vision, optics, and related machine learning topics.
The book can be used both as a monograph on interpretability in deep learning covering the most recent topics and as a textbook for graduate students. Scientists with research, development, and application responsibilities will benefit from its systematic exposition.
Target Audience
Research
Authors/Editors
Subject Areas
- Economics | Business Administration | Operations Research
- Economics | Business Administration | Management | Knowledge Management
- Mathematics | Computer Science | Image Signal Processing
- Mathematics | Computer Science | Artificial Intelligence | Computer Vision
Further Information & Material
1 INTRODUCTION
1.1 Deep Learning Glossary
1.2 Evolution of Deep Learning
1.2.1 Neural Learning
1.2.2 Fuzzy Learning
1.2.3 Convergence of Fuzzy Logic and Neural Learning
1.2.4 Synergy of Neuroscience and Deep Learning
1.3 Awakening of Interpretability
1.3.1 Relevance
1.3.2 Necessity
1.3.3 The Taxonomy of Interpretability
1.4 The Question of Interpretability
1.4.1 Interpretability - Metaverse
1.4.2 Interpretability - The Right Tool
1.4.3 Interpretability - The Wrong Tool
2 NEURAL NETWORKS FOR DEEP LEARNING
2.1 Neural Network Architectures
2.1.1 Perceptron
2.1.2 Artificial Neural Network
2.1.3 Recurrent Neural Network
2.1.4 Convolutional Neural Network
2.1.5 Autoencoder Neural Network
2.1.6 Generative Adversarial Network
2.1.7 Graph Neural Network
2.2 Learning Mechanisms
2.2.1 Activation Function
2.2.2 Forward Propagation
2.2.3 Backpropagation
2.2.4 Gradient Descent
2.2.5 Learning Rate
2.2.6 Optimization
2.2.7 Initialization
2.2.8 Regularization
2.3 Challenges and Limitations of Traditional Techniques
2.3.1 Resource-Demanding Checks
2.3.2 Uncertainty Measure
2.3.3 Network Learning Sanity Check
2.3.4 Gradient Checks
2.3.5 Decision Transparency
3 KNOWLEDGE ENCODING AND INTERPRETATION
3.1 What is Knowledge?
3.1.1 Image Representation
3.1.2 Word Representation
3.1.3 Graph Representation
3.2 Knowledge Encoding and Architectural Understanding
3.2.1 Role of Neurons
3.2.2 Role of Layers
3.2.3 Role of Explanation
3.2.4 Semantic Understanding
3.2.5 Network Understanding
3.3 Design and Analysis of Interpretability
3.3.1 Divide and Conquer
3.3.2 Greedy
3.3.3 Back-tracking
3.3.4 Dynamic
3.3.5 Branch and Bound
3.3.6 Brute-force
3.4 Knowledge Propagation in Deep Network Optimizers
3.4.1 Knowledge versus Performance
3.4.2 Deep versus Shallow Encoding
4 INTERPRETATION IN SPECIFIC DEEP ARCHITECTURES
4.1 Interpretation in Convolution Networks
4.1.1 Case Study: Image Representation by Unmasking Clever Hans
4.1.2 Variants of CNNs
4.1.3 Interpretation of CNNs
4.1.4 Review: CNN Visualization Techniques
4.1.5 Review: CNN Adversarial Techniques
4.1.6 Inverse Image Representation
4.1.7 Case study: Superpixels Algorithm
4.1.8 Activation Grid and Activation Map
4.1.9 Convolution Trace
4.2 Interpretation in Autoencoder Networks
4.2.1 Visualization of Latent Space
4.2.2 Sparsity and Interpretation
4.2.3 Case Study: Microscopy Structure to Structure Learning
4.3 Interpretation in Adversarial Networks
4.3.1 Interpretation in Generative Network
4.3.2 Interpretation in Latent Spaces
4.3.3 Evaluation Metrics
4.3.4 Case study: Digital Staining of Microscopy Images
4.4 Interpretation in Graph Networks
4.4.1 Neural Structured Learning
4.4.2 Graph Embedding and Interpretability
4.4.3 Evaluation Metrics for Interpretation
4.4.4 Disentangled Representation Learning on Graphs
4.4.5 Future Direction
4.5 Self-Interpretable Models
4.6 Pitfalls of Interpretability Methods
5 FUZZY DEEP LEARNING
5.1 Fuzzy Theory
5.1.1 Fuzzy Sets and Fuzzy Membership
5.1.2 Fuzzification and Defuzzification
5.1.3 Fuzzy Rules and Inference Systems
5.2 Neuro-Fuzzy Inference Systems
5.2.1 Architecture of a Neuro-Fuzzy Inference System
5.2.2 Other Design Elements of Neuro-Fuzzy Inference Systems
5.2.3 Learning Mechanisms for Neuro-Fuzzy Inference Systems
5.2.4 Online Learning with Dynamic Streaming Data
5.3 Case Studies
5.3.1 POPFNN Family of NFS - Evolution Towards Sophisticated Brain-Like Learning
5.3.2 Combining Conventional Deep Learning and Fuzzy Learning
A Mathematical Models and Theories
A.1 Choquet Integral
A.1.1 Restricting the Scope of FM/ChI
A.1.2 ChI Understanding from NN
A.2 Deformation Invariance Property
A.3 Distance Metrics
A.4 Grad Weighted Class Activation Mapping
A.5 Guided Saliency
A.6 Jensen-Shannon Divergence
A.7 Kullback-Leibler Divergence
A.8 Projected Gradient Descent
A.9 Pythagorean Fuzzy Number
A.10 Targeted Adversarial Attack
A.11 Translation Invariance Property
A.12 Universal Approximation Theorem
B List of Digital Resources and Examples
References