E-book, English, 155 pages
Ernst / Schweikard: Fundamentals of Machine Learning
Support Vector Machines Made Easy
1st edition, 2020
ISBN: 978-3-8385-5251-4
Publisher: UTB
Format: PDF
Copy protection: PDF watermark
Prof. Dr. Floris Ernst teaches AI (Artificial Intelligence) and Robotics at the University of Lübeck.
Contents
Preface
1 Symbolic Classification and Nearest Neighbour Classification
1.1 Symbolic Classification
1.2 Nearest Neighbour Classification
2 Separating Planes and Linear Programming
2.1 Finding a Separating Hyperplane
2.2 Testing for Feasibility of Linear Constraints
2.3 Linear Programming
MATLAB example
2.4 Conclusion
3 Separating Margins and Quadratic Programming
3.1 Quadratic Programming
3.2 Maximum Margin Separator Planes
3.3 Slack Variables
4 Dualization and Support Vectors
4.1 Duals of Linear Programs
4.2 Duals of Quadratic Programs
4.3 Support Vectors
5 Lagrange Multipliers and Duality
5.1 Multidimensional Functions
5.2 Support Vector Expansion
5.3 Support Vector Expansion with Slack Variables
6 Kernel Functions
6.1 Feature Spaces
6.2 Feature Spaces and Quadratic Programming
6.3 Kernel Matrix and Mercer’s Theorem
6.4 Proof of Mercer’s Theorem
Step 1 – Definitions and Prerequisites
Step 2 – Designing the right Hilbert Space
Step 3 – The reproducing property
7 The SMO Algorithm
7.1 Overview and Principles
7.2 Optimisation Step
7.3 Simplified SMO
8 Regression
8.1 Slack Variables
8.2 Duality, Kernels and Regression
8.3 Deriving the Dual form of the QP for Regression
9 Perceptrons, Neural Networks and Genetic Algorithms
9.1 Perceptrons
Perceptron Algorithm
Perceptron Lemma and Convergence
Perceptrons and Linear Feasibility Testing
9.2 Neural Networks
Forward Propagation
Training and Error Backpropagation
9.3 Genetic Algorithms
9.4 Conclusion
10 Bayesian Regression
10.1 Bayesian Learning
10.2 Probabilistic Linear Regression
10.3 Gaussian Process Models
10.4 GP Model with Measurement Noise
Optimization of hyperparameters
Covariance functions
10.5 Multi-Task Gaussian Process (MTGP) Models
11 Bayesian Networks
Propagation of probabilities in causal networks
Appendix – Linear Programming
A.1 Solving LP0 problems
A.2 Schematic representation of the iteration steps
A.3 Transition from LP0 to LP
A.4 Computing time and complexity issues
References
Index