This course focuses on the principles of learning from data and the quantification of uncertainty, complementing and enriching the “Introduction to Statistical Learning” course.
In particular, the course is divided into two main parts corresponding to the supervised and unsupervised learning paradigms. The presentation of the material follows a common thread based on the probabilistic data-modeling approach, so that many classical algorithms, such as least squares and k-means, can be seen as special cases of inference problems for more general probabilistic models. The probabilistic view also allows the course to derive inference algorithms for nonparametric models with close connections to neural networks and support vector machines. As in the “Introduction to Statistical Learning” course, the focus is not on the algorithmic background of the methods, but rather on their mathematical and statistical foundations. This advanced course is complemented by lab sessions that guide students through the design and validation of the methods developed during the lectures.
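As a concrete instance of the least-squares case mentioned above: under a Gaussian noise model y ~ N(Xw, σ²I), maximizing the likelihood over w is exactly minimizing the squared error, so the classical least-squares fit coincides with the maximum-likelihood estimate. A minimal numerical sketch (the toy data, coefficients, and noise level are invented for illustration, not part of the course material):

```python
import numpy as np

# Hypothetical toy data: y = 1 + 2x + Gaussian noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
X = np.column_stack([np.ones_like(x), x])          # design matrix with intercept
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=50)

# Least-squares solution of min_w ||y - Xw||^2
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# Under y ~ N(Xw, sigma^2 I), the log-likelihood is
#   -||y - Xw||^2 / (2 sigma^2) + const,
# so maximizing it over w is the same optimization as least squares.
# The maximum-likelihood stationarity condition X^T (y - Xw) = 0 therefore
# holds at the least-squares solution:
grad = X.T @ (y - X @ w_ls)
print(np.allclose(grad, 0, atol=1e-8))  # True: the two solutions coincide
```

The same reasoning, with different likelihoods, underlies the other "classical algorithm as probabilistic inference" examples developed in the course.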
Teaching and Learning Methods: Lectures and Lab sessions (preferably one student per group).
Course Policies: Attendance at the Lab sessions is mandatory.
- Book: BISHOP C. Pattern Recognition and Machine Learning. Springer-Verlag, 2006, 768p.
- Book: ROGERS S., GIROLAMI M. A First Course in Machine Learning. Chapman & Hall/CRC Press, 2011, 305p.
Basic knowledge of linear algebra and calculus.
The course will cover a selection of the following topics:
· Background
o Recap on linear algebra and calculus
o Overview of probability theory
· Supervised learning
o Linear regression
o Linear classification
o Bayesian classification
o Kernel methods for nonlinear regression and classification
· Unsupervised learning
o K-means and Kernel K-means
o Gaussian mixture models
o Principal component analysis (PCA) and Kernel PCA
· Advanced topics
o Dirichlet Processes
o Gaussian Processes
o Variational inference
o Markov chain Monte Carlo
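The pairing of K-means and Gaussian mixture models in the list above reflects the second special case mentioned in the course description: K-means can be recovered as the small-variance limit of EM for a spherical Gaussian mixture, where the E-step's soft responsibilities collapse into hard nearest-mean assignments and the M-step becomes the cluster-mean update. A toy sketch (the 1-D data, initial means, and two-cluster setup are all invented for illustration):

```python
import numpy as np

# Hypothetical 1-D data with two well-separated clusters
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 0.1, 30), rng.normal(5.0, 0.1, 30)])

means = np.array([0.5, 4.0])  # illustrative initial guesses
for _ in range(10):
    # E-step in the sigma -> 0 limit: hard assignment to the nearest mean
    assign = np.abs(data[:, None] - means[None, :]).argmin(axis=1)
    # M-step: each mean becomes the average of its assigned points
    means = np.array([data[assign == k].mean() for k in range(2)])

print(np.round(means, 2))  # the two cluster centers, near 0 and 5
```

Replacing the hard argmin with Gaussian posterior responsibilities turns this loop back into full EM for the mixture model, which is exactly the progression the unsupervised part of the course follows.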
To identify the key elements composing a given probabilistic model
To recognize the suitability of different probabilistic models given a machine-learning problem
To use the appropriate techniques to derive probabilistic machine-learning algorithms
To develop proof-of-concept software to set up analyses of data using probabilistic machine-learning algorithms
Nb hours: 42.00, including at least 4 Lab sessions (12 hours)
- Assessed exercise (25% of the final grade)
- Final written exam (75% of the final grade)