
E-book, English, Volume 39, 448 pages

Series: Computational Imaging and Vision

Hyvärinen / Hurri / Hoyer Natural Image Statistics

A Probabilistic Approach to Early Computational Vision.
1st edition, 2009
ISBN: 978-1-84882-491-1
Publisher: Springer
Format: PDF
Copy protection: PDF watermark

Aims and Scope

This book is both an introductory textbook and a research monograph on modeling the statistical structure of natural images. In very simple terms, 'natural images' are photographs of the typical environment where we live. In this book, their statistical structure is described using a number of statistical models whose parameters are estimated from image samples. Our main motivation for exploring natural image statistics is computational modeling of biological visual systems. A theoretical framework which is gaining more and more support considers the properties of the visual system to be reflections of the statistical structure of natural images, due to evolutionary adaptation processes. Another motivation for natural image statistics research comes from computer science and engineering, where it helps in the development of better image processing and computer vision methods. While research on natural image statistics has been growing rapidly since the mid-1990s, no attempt has been made to cover the field in a single book, providing a unified view of the different models and approaches. This book attempts to do just that. Furthermore, our aim is to provide an accessible introduction to the field for students in related disciplines.


Further Information & Material


1;Preface;6
1.1;Aims and Scope;6
1.2;Targeted Audience and Prerequisites;6
1.3;Structure of the Book and Its Use as a Textbook;7
1.4;Referencing and Exercises;8
1.5;Code for Reproducing Experiments;8
1.6;Acknowledgements;8
2;Contents;9
3;Abbreviations;19
4;Introduction;20
4.1;What This Book Is All About;20
4.2;What Is Vision?;21
4.3;The Magic of Your Visual System;22
4.4;Importance of Prior Information;26
4.4.1;Ecological Adaptation Provides Prior Information;26
4.4.2;Generative Models and Latent Quantities;27
4.4.3;Projection onto the Retina Loses Information;28
4.4.4;Bayesian Inference and Priors;28
4.5;Natural Images;29
4.5.1;The Image Space;29
4.5.2;Definition of Natural Images;30
4.6;Redundancy and Information;32
4.6.1;Information Theory and Image Coding;32
4.6.2;Redundancy Reduction and Neural Coding;33
4.7;Statistical Modeling of the Visual System;34
4.7.1;Connecting Information Theory and Bayesian Inference;34
4.7.2;Normative vs. Descriptive Modeling of Visual System;34
4.7.3;Toward Predictive Theoretical Neuroscience;35
4.8;Features and Statistical Models of Natural Images;36
4.8.1;Image Representations and Features;36
4.8.2;Statistics of Features;37
4.8.3;From Features to Statistical Models;38
4.9;The Statistical-Ecological Approach Recapitulated;39
4.10;References;40
5;Background;41
5.1;Linear Filters and Frequency Analysis;42
5.1.1;Linear Filtering;42
5.1.1.1;Definition;42
5.1.1.2;Impulse Response and Convolution;45
5.1.2;Frequency-Based Representation;46
5.1.2.1;Motivation;46
5.1.2.2;Representation in One and Two Dimensions;46
5.1.2.2.1;Note on Terminology;50
5.1.2.3;Frequency-Based Representation and Linear Filtering;51
5.1.2.4;Computation and Mathematical Details;54
5.1.3;Representation Using Linear Basis;55
5.1.3.1;Basic Idea;55
5.1.3.2;Frequency-Based Representation as a Basis;57
5.1.4;Space-Frequency Analysis;58
5.1.4.1;Introduction;58
5.1.4.2;Space-Frequency Analysis and Gabor Filters;60
5.1.4.3;Spatial Localization vs. Spectral Accuracy;63
5.1.5;References;65
5.1.6;Exercises;65
5.1.6.1;Mathematical Exercises;65
5.1.6.2;Computer Assignments;65
5.2;Outline of the Visual System;67
5.2.1;Neurons and Firing Rates;67
5.2.1.1;Neurons;67
5.2.1.2;Axons;67
5.2.1.3;Action Potentials;67
5.2.1.4;Signal Reception and Processing;67
5.2.1.5;Firing Rate;69
5.2.1.6;Computation by the Neuron;69
5.2.2;From the Eye to the Cortex;69
5.2.3;Linear Models of Visual Neurons;70
5.2.3.1;Responses to Visual Stimulation;70
5.2.3.2;Simple Cells and Linear Models;72
5.2.3.3;Gabor Models and Selectivities of Simple Cells;73
5.2.3.4;Frequency Channels;74
5.2.4;Non-linear Models of Visual Neurons;75
5.2.4.1;Non-linearities in Simple-Cell Responses;75
5.2.4.2;Complex Cells and Energy Models;77
5.2.5;Interactions between Visual Neurons;78
5.2.6;Topographic Organization;80
5.2.7;Processing after the Primary Visual Cortex;80
5.2.8;References;81
5.2.9;Exercises;81
5.2.9.1;Mathematical Exercises;81
5.2.9.2;Computer Assignments;82
5.3;Multivariate Probability and Statistics;83
5.3.1;Natural Image Patches as Random Vectors;83
5.3.2;Multivariate Probability Distributions;84
5.3.2.1;Notation and Motivation;84
5.3.2.2;Probability Density Function;85
5.3.3;Marginal and Joint Probabilities;86
5.3.4;Conditional Probabilities;89
5.3.4.1;Generalization to Many Dimensions;90
5.3.4.2;Discrete-Valued Variables;91
5.3.5;Independence;91
5.3.6;Expectation and Covariance;93
5.3.6.1;Expectation;93
5.3.6.2;Variance and Covariance in One Dimension;94
5.3.6.3;Covariance Matrix;94
5.3.6.4;Independence and Covariances;95
5.3.7;Bayesian Inference;97
5.3.7.1;Motivating Example;97
5.3.7.2;Bayes' Rule;99
5.3.7.3;Non-informative Priors;99
5.3.7.4;Bayesian Inference as an Incremental Learning Process;100
5.3.8;Parameter Estimation and Likelihood;102
5.3.8.1;Models, Estimation, and Samples;102
5.3.8.2;Maximum Likelihood and Maximum a Posteriori;103
5.3.8.3;Prior and Large Samples;105
5.3.9;References;105
5.3.10;Exercises;105
5.3.10.1;Mathematical Exercises;105
5.3.10.2;Computer Assignments;106
6;Statistics of Linear Features;107
6.1;Principal Components and Whitening;108
6.1.1;DC Component or Mean Grey-Scale Value;108
6.1.2;Principal Component Analysis;109
6.1.2.1;A Basic Dependency of Pixels in Natural Images;109
6.1.2.2;Learning One Feature by Maximization of Variance;111
6.1.2.2.1;Principal Component as Variance-Maximizing Feature;111
6.1.2.2.2;Learning One Feature from Natural Images;113
6.1.2.3;Learning Many Features by PCA;113
6.1.2.3.1;Defining Many Principal Components;113
6.1.2.3.1.1;Definition;113
6.1.2.3.1.2;Critique of the Definition;114
6.1.2.3.2;All Principal Components of Natural Images;115
6.1.2.4;Computational Implementation of PCA;116
6.1.2.5;The Implications of Translation-Invariance;117
6.1.3;PCA as a Preprocessing Tool;118
6.1.3.1;Dimension Reduction by PCA;118
6.1.3.2;Whitening by PCA;119
6.1.3.2.1;Whitening as Normalized Decorrelation;119
6.1.3.2.2;Whitening Transformations and Orthogonality;120
6.1.3.3;Anti-aliasing by PCA;121
6.1.3.3.1;Oblique Gratings Can Have Higher Frequencies;121
6.1.3.3.2;Highest Frequencies Can Have Only Two Different Phases;122
6.1.3.3.3;Dimension Selection to Avoid Aliasing;123
6.1.4;Canonical Preprocessing Used in This Book;124
6.1.4.1;Notation;124
6.1.5;Gaussianity as the Basis for PCA;124
6.1.5.1;The Probability Model Related to PCA;124
6.1.5.2;PCA as a Generative Model;125
6.1.5.3;Image Synthesis Results;126
6.1.6;Power Spectrum of Natural Images;126
6.1.6.1;The 1/f Fourier Amplitude or 1/f^2 Power Spectrum;126
6.1.6.2;Connection between Power Spectrum and Covariances;128
6.1.6.3;Relative Importance of Amplitude and Phase;129
6.1.7;Anisotropy in Natural Images;130
6.1.8;Mathematics of Principal Component Analysis *;131
6.1.8.1;Eigenvalue Decomposition of the Covariance Matrix;132
6.1.8.2;Eigenvectors and Translation-Invariance;134
6.1.9;Decorrelation Models of Retina and LGN *;135
6.1.9.1;Whitening and Redundancy Reduction;135
6.1.9.2;Patch-Based Decorrelation;136
6.1.9.2.1;Matrix Square Root;138
6.1.9.2.2;Symmetric Whitening Matrix;139
6.1.9.2.3;Application to Natural Images;139
6.1.9.3;Filter-Based Decorrelation;139
6.1.10;Concluding Remarks and References;143
6.1.11;Exercises;144
6.1.11.1;Mathematical Exercises;144
6.1.11.2;Computer Assignments;145
6.2;Sparse Coding and Simple Cells;146
6.2.1;Definition of Sparseness;146
6.2.2;Learning One Feature by Maximization of Sparseness;147
6.2.2.1;Measuring Sparseness: General Framework;148
6.2.2.2;Measuring Sparseness Using Kurtosis;148
6.2.2.3;Measuring Sparseness Using Convex Functions of Square;149
6.2.2.3.1;Convexity and Sparseness;149
6.2.2.3.2;An Example Distribution;150
6.2.2.3.3;Suitable Convex Functions;151
6.2.2.3.4;Summary;153
6.2.2.4;The Case of Canonically Preprocessed Data;153
6.2.2.5;One Feature Learned from Natural Images;153
6.2.3;Learning Many Features by Maximization of Sparseness;154
6.2.3.1;Deflationary Decorrelation;155
6.2.3.2;Symmetric Decorrelation;156
6.2.3.3;Sparseness of Feature vs. Sparseness of Representation;156
6.2.4;Sparse Coding Features for Natural Images;158
6.2.4.1;Full Set of Features;158
6.2.4.2;Analysis of Tuning Properties;159
6.2.5;How Is Sparseness Useful?;162
6.2.5.1;Bayesian Modeling;162
6.2.5.2;Neural Modeling;163
6.2.5.3;Metabolic Economy;163
6.2.6;Concluding Remarks and References;163
6.2.7;Exercises;164
6.2.7.1;Mathematical Exercises;164
6.2.7.2;Computer Assignments;165
6.3;Independent Component Analysis;166
6.3.1;Limitations of the Sparse Coding Approach;166
6.3.2;Definition of ICA;167
6.3.2.1;Independence;167
6.3.2.2;Generative Model;167
6.3.2.3;Model for Preprocessed Data;169
6.3.3;Insufficiency of Second-Order Information;169
6.3.3.1;Why Whitening Does Not Find Independent Components;169
6.3.3.2;Why Components Have to Be Non-Gaussian;171
6.3.3.2.1;Whitened Gaussian pdf Is Spherically Symmetric;171
6.3.3.2.2;Uncorrelated Gaussian Variables Are Independent;172
6.3.4;The Probability Density Defined by ICA;173
6.3.4.1;Short Digression to Probability Theory;173
6.3.5;Maximum Likelihood Estimation in ICA;174
6.3.6;Results on Natural Images;175
6.3.6.1;Estimation of Features;175
6.3.6.2;Image Synthesis Using ICA;175
6.3.7;Connection to Maximization of Sparseness;176
6.3.7.1;Likelihood as a Measure of Sparseness;176
6.3.7.2;Optimal Sparseness Measures;178
6.3.8;Why Are Independent Components Sparse?;181
6.3.8.1;Different Forms of Non-Gaussianity;182
6.3.8.2;Non-Gaussianity in Natural Images;182
6.3.8.3;Why Is Sparseness Dominant?;183
6.3.9;General ICA as Maximization of Non-Gaussianity;183
6.3.9.1;Central Limit Theorem;184
6.3.9.2;"Non-Gaussian Is Independent";184
6.3.9.3;Sparse Coding as a Special Case of ICA;185
6.3.10;Receptive Fields vs. Feature Vectors;186
6.3.11;Problem of Inversion of Preprocessing;187
6.3.12;Frequency Channels and ICA;188
6.3.13;Concluding Remarks and References;188
6.3.14;Exercises;189
6.3.14.1;Mathematical Exercises;189
6.3.14.2;Computer Assignments;189
6.4;Information-Theoretic Interpretations;191
6.4.1;Basic Motivation for Information Theory;191
6.4.1.1;Compression;191
6.4.1.2;Transmission;192
6.4.2;Entropy as a Measure of Uncertainty;193
6.4.2.1;Definition of Entropy;193
6.4.2.2;Entropy as Minimum Coding Length;194
6.4.2.3;Redundancy;195
6.4.2.4;Differential Entropy;196
6.4.2.5;Maximum Entropy;197
6.4.3;Mutual Information;198
6.4.4;Minimum Entropy Coding of Natural Images;199
6.4.4.1;Image Compression and Sparse Coding;199
6.4.4.2;Mutual Information and Sparse Coding;201
6.4.4.3;Minimum Entropy Coding in the Cortex;201
6.4.5;Information Transmission in the Nervous System;202
6.4.5.1;Definition of Information Flow and Infomax;202
6.4.5.2;Basic Infomax with Linear Neurons;202
6.4.5.3;Infomax with Non-linear Neurons;203
6.4.5.3.1;Definition of Model;203
6.4.5.4;Infomax with Non-constant Noise Variance;204
6.4.5.4.1;Problems with Non-linear Neuron Model;204
6.4.5.4.2;Using Neurons with Non-constant Variance;205
6.4.6;Caveats in Application of Information Theory;207
6.4.7;Concluding Remarks and References;209
6.4.8;Exercises;209
6.4.8.1;Mathematical Exercises;209
6.4.8.2;Computer Assignments;210
7;Nonlinear Features and Dependency of Linear Features;211
7.1;Energy Correlation of Linear Features and Normalization;212
7.1.1;Why Estimated Independent Components Are Not Independent;212
7.1.1.1;Estimates vs. Theoretical Components;212
7.1.1.2;Counting the Number of Free Parameters;213
7.1.2;Correlations of Squares of Components in Natural Images;214
7.1.3;Modeling Using a Variance Variable;214
7.1.4;Normalization of Variance and Contrast Gain Control;216
7.1.5;Physical and Neurophysiological Interpretations;218
7.1.5.1;Canceling the Effect of Changing Lighting Conditions;218
7.1.5.2;Uniform Surfaces;219
7.1.5.3;Saturation of Cell Responses;219
7.1.6;Effect of Normalization on ICA;220
7.1.7;Concluding Remarks and References;223
7.1.8;Exercises;224
7.1.8.1;Mathematical Exercises;224
7.1.8.2;Computer Assignments;224
7.2;Energy Detectors and Complex Cells;225
7.2.1;Subspace Model of Invariant Features;225
7.2.1.1;Why Linear Features Are Insufficient;225
7.2.1.2;Subspaces or Groups of Linear Features;225
7.2.1.3;Energy Model of Feature Detection;226
7.2.1.3.1;Canonically Preprocessed Data;228
7.2.2;Maximizing Sparseness in the Energy Model;228
7.2.2.1;Definition of Sparseness of Output;228
7.2.2.2;One Feature Learned from Natural Images;229
7.2.3;Model of Independent Subspace Analysis;231
7.2.4;Dependency as Energy Correlation;232
7.2.4.1;Why Energy Correlations Are Related to Sparseness;232
7.2.4.2;Spherical Symmetry and Changing Variance;233
7.2.4.3;Correlation of Squares and Convexity of Non-linearity;234
7.2.5;Connection to Contrast Gain Control;235
7.2.6;ISA as a Non-linear Version of ICA;236
7.2.7;Results on Natural Images;237
7.2.7.1;Emergence of Invariance to Phase;237
7.2.7.1.1;Data and Preprocessing;237
7.2.7.1.2;Features Obtained;237
7.2.7.1.3;Analysis of Tuning and Invariance;238
7.2.7.1.4;Image Synthesis Results;242
7.2.7.2;The Importance of Being Invariant;242
7.2.7.3;Grouping of Dependencies;244
7.2.7.4;Superiority of the Model over ICA;244
7.2.8;Analysis of Convexity and Energy Correlations*;246
7.2.8.1;Variance Variable Model Gives Convex h;246
7.2.8.2;Convex h Typically Implies Positive Energy Correlations;247
7.2.9;Concluding Remarks and References;248
7.2.10;Exercises;248
7.2.10.1;Mathematical Exercises;248
7.2.10.2;Computer Assignments;249
7.3;Energy Correlations and Topographic Organization;250
7.3.1;Topography in the Cortex;250
7.3.2;Modeling Topography by Statistical Dependence;251
7.3.2.1;Topographic Grid;251
7.3.2.2;Defining Topography by Statistical Dependencies;251
7.3.3;Definition of Topographic ICA;253
7.3.4;Connection to Independent Subspaces and Invariant Features;254
7.3.5;Utility of Topography;255
7.3.6;Estimation of Topographic ICA;256
7.3.7;Topographic ICA of Natural Images;257
7.3.7.1;Emergence of V1-like Topography;257
7.3.7.1.1;Data and Preprocessing;257
7.3.7.1.2;Results and Analysis;258
7.3.7.1.3;Image Synthesis Results and Sketch of Generative Model;262
7.3.7.2;Comparison with Other Models;264
7.3.8;Learning Both Layers in a Two-Layer Model *;264
7.3.8.1;Generative vs. Energy-Based Approach;264
7.3.8.2;Definition of the Generative Model;265
7.3.8.3;Basic Properties of the Generative Model;266
7.3.8.3.1;The Components s_i Are Uncorrelated;266
7.3.8.3.2;The Components s_i Are Sparse;267
7.3.8.3.3;Topographic Organization Can Be Modeled;267
7.3.8.3.4;Independent Subspaces Are a Special Case;267
7.3.8.4;Estimation of the Generative Model;267
7.3.8.4.1;Integrating Out;267
7.3.8.4.2;Approximating the Likelihood;268
7.3.8.4.3;Difficulty of Estimating the Model;270
7.3.8.5;Energy-Based Two-Layer Models;270
7.3.9;Concluding Remarks and References;271
7.4;Dependencies of Energy Detectors: Beyond V1;273
7.4.1;Predictive Modeling of Extrastriate Cortex;273
7.4.2;Simulation of V1 by a Fixed Two-Layer Model;273
7.4.3;Learning the Third Layer by Another ICA Model;275
7.4.4;Methods for Analyzing Higher-Order Components;276
7.4.5;Results on Natural Images;278
7.4.5.1;Emergence of Collinear Contour Units;278
7.4.5.2;Emergence of Pooling over Frequencies;279
7.4.6;Discussion of Results;283
7.4.6.1;Why Coding of Contours?;283
7.4.6.2;Frequency Channels and Edges;284
7.4.6.3;Toward Predictive Modeling;284
7.4.6.4;References and Related Work;285
7.4.7;Conclusion;286
7.5;Overcomplete and Non-negative Models;287
7.5.1;Overcomplete Bases;287
7.5.1.1;Motivation;287
7.5.1.2;Definition of Generative Model;288
7.5.1.3;Nonlinear Computation of the Basis Coefficients;289
7.5.1.4;Estimation of the Basis;291
7.5.1.5;Approach Using Energy-Based Models;292
7.5.1.6;Results on Natural Images;295
7.5.1.7;Markov Random Field Models *;295
7.5.2;Non-negative Models;298
7.5.2.1;Motivation;298
7.5.2.2;Definition;298
7.5.2.3;Adding Sparseness Constraints;300
7.5.3;Conclusion;303
7.6;Lateral Interactions and Feedback;304
7.6.1;Feedback as Bayesian Inference;304
7.6.1.1;Example: Contour Integrator Units;305
7.6.1.2;Thresholding (Shrinkage) of a Sparse Code;307
7.6.1.2.1;Decoupling of Estimates;307
7.6.1.2.2;Sparseness Leads to Shrinkage;309
7.6.1.3;Categorization and Top-Down Feedback;311
7.6.2;Overcomplete Basis and End-stopping;311
7.6.3;Predictive Coding;313
7.6.4;Conclusion;314
8;Time, Color, and Stereo;316
8.1;Color and Stereo Images;317
8.1.1;Color Image Experiments;317
8.1.1.1;Choice of Data;317
8.1.1.2;Preprocessing and PCA;318
8.1.1.3;ICA Results and Discussion;321
8.1.2;Stereo Image Experiments;323
8.1.2.1;Choice of Data;323
8.1.2.2;Preprocessing and PCA;324
8.1.2.3;ICA Results and Discussion;325
8.1.3;Further References;330
8.1.3.1;Color and Stereo Images;330
8.1.3.2;Other Modalities, Including Audition;331
8.1.4;Conclusion;331
8.2;Temporal Sequences of Natural Images;332
8.2.1;Natural Image Sequences and Spatiotemporal Filtering;332
8.2.2;Temporal and Spatiotemporal Receptive Fields;333
8.2.3;Second-Order Statistics;335
8.2.3.1;Average Spatiotemporal Power Spectrum;335
8.2.3.2;The Temporally Decorrelating Filter;339
8.2.4;Sparse Coding and ICA of Natural Image Sequences;340
8.2.5;Temporal Coherence in Spatial Features;343
8.2.5.1;Temporal Coherence and Invariant Representation;343
8.2.5.2;Quantifying Temporal Coherence;344
8.2.5.3;Interpretation as Generative Model *;345
8.2.5.4;Experiments on Natural Image Sequences;346
8.2.5.4.1;Data and Preprocessing;346
8.2.5.4.2;Results and Analysis;347
8.2.5.5;Why Gabor-Like Features Maximize Temporal Coherence;348
8.2.5.6;Control Experiments;351
8.2.6;Spatiotemporal Energy Correlations in Linear Features;352
8.2.6.1;Definition of the Model;352
8.2.6.2;Estimation of the Model;354
8.2.6.3;Experiments on Natural Images;355
8.2.6.4;Intuitive Explanation of Results;357
8.2.7;Unifying Model of Spatiotemporal Dependencies;359
8.2.8;Features with Minimal Average Temporal Change;361
8.2.8.1;Slow Feature Analysis;361
8.2.8.1.1;Motivation and History;361
8.2.8.1.2;SFA in a Linear Neuron Model;363
8.2.8.2;Quadratic Slow Feature Analysis;364
8.2.8.3;Sparse Slow Feature Analysis;366
8.2.9;Conclusion;368
9;Conclusion;369
9.1;Conclusion and Future Prospects;370
9.1.1;Short Overview;370
9.1.2;Open, or Frequently Asked, Questions;372
9.1.2.1;What Is the Real Learning Principle in the Brain?;372
9.1.2.2;Nature vs. Nurture;373
9.1.2.3;How to Model Whole Images;374
9.1.2.4;Are There Clear-Cut Cell Types?;374
9.1.2.5;How Far Can We Go?;376
9.1.3;Other Mathematical Models of Images;376
9.1.3.1;Scaling Laws;377
9.1.3.2;Wavelet Theory;377
9.1.3.3;Physically Inspired Models;378
9.1.4;Future Work;379
10;Appendix: Supplementary Mathematical Tools;380
10.1;Optimization Theory and Algorithms;381
10.1.1;Levels of Modeling;381
10.1.2;Gradient Method;382
10.1.2.1;Definition and Meaning of Gradient;382
10.1.2.2;Gradient and Optimization;384
10.1.2.3;Optimization of Function of Matrix;385
10.1.2.4;Constrained Optimization;385
10.1.2.4.1;Projecting Back to Constraint Set;386
10.1.2.4.2;Projection of the Gradient;387
10.1.3;Global and Local Maxima;387
10.1.4;Hebb's Rule and Gradient Methods;388
10.1.4.1;Hebb's Rule;388
10.1.4.2;Hebb's Rule and Optimization;389
10.1.4.3;Stochastic Gradient Methods;390
10.1.4.4;Role of the Hebbian Non-linearity;391
10.1.4.5;Receptive Fields vs. Synaptic Strengths;392
10.1.4.6;The Problem of Feedback;392
10.1.5;Optimization in Topographic ICA *;393
10.1.6;Beyond Basic Gradient Methods *;394
10.1.6.1;Newton's Method;395
10.1.6.2;Conjugate Gradient Methods;397
10.1.7;FastICA, a Fixed-Point Algorithm for ICA;398
10.1.7.1;The FastICA Algorithm;398
10.1.7.2;Choice of the FastICA Non-linearity;399
10.1.7.3;Mathematics of FastICA *;399
10.1.7.3.1;Derivation of the Fixed-Point Iteration;399
10.1.7.3.2;Connection to Gradient Methods;400
10.2;Crash Course on Linear Algebra;402
10.2.1;Vectors;402
10.2.2;Linear Transformations;403
10.2.3;Matrices;404
10.2.4;Determinant;405
10.2.5;Inverse;405
10.2.6;Basis Representations;406
10.2.7;Orthogonality;407
10.2.8;Pseudo-Inverse *;408
10.3;The Discrete Fourier Transform;409
10.3.1;Linear Shift-Invariant Systems;409
10.3.2;One-Dimensional Discrete Fourier Transform;410
10.3.2.1;Euler's Formula;410
10.3.2.2;Representation in Complex Exponentials;410
10.3.2.3;The Discrete Fourier Transform and Its Inverse;413
10.3.2.3.1;Negative Frequencies and Periodicity in the DFT;415
10.3.2.3.2;Periodicity of the IDFT and the Convolution Theorem;416
10.3.2.3.3;Real- and Complex-Valued DFT Coefficients;417
10.3.2.3.4;The Sinusoidal Representation from the DFT;418
10.3.2.3.5;The Basis Is Orthogonal, Perhaps up to Scaling;418
10.3.2.3.6;DFT Can Be Computed by the Fast Fourier Transform;419
10.3.3;Two- and Three-Dimensional Discrete Fourier Transforms;419
10.4;Estimation of Non-normalized Statistical Models;421
10.4.1;Non-normalized Statistical Models;421
10.4.2;Estimation by Score Matching;422
10.4.3;Example 1: Multivariate Gaussian Density;424
10.4.4;Example 2: Estimation of Basic ICA Model;426
10.4.5;Example 3: Estimation of an Overcomplete ICA Model;427
10.4.6;Conclusion;427
11;References;429
12;Index;443


