Advanced Topics (ILV)


Course lecturers:

em.o.Univ.-Prof. Dr. Jürgen Pilz
FH-Prof. DI Dr. techn. Stefan Schrunner

Course number: M2.08760.11.151
Course code: AT
Curriculum: 2021
Semester of degree program: Semester 3
Mode of delivery: Presence- and Telecourse
Units per week: 3.5
ECTS credits: 5.0
Language of instruction: English

The students are familiar with the most important methods of generating pseudorandom numbers and their application in statistical Monte Carlo simulations.
They know about the basic principles of modeling stochastic processes using Markov chains and the computation of limiting distributions. They are able to determine posterior predictive distributions for real-world data sets with the help of modern Markov Chain Monte Carlo methods and their implementations in R and/or Python environments.
They are able to apply approximate methods of Bayesian inference to non-standard and big data problems.
Students are familiar with basic methods of modeling and application of generative adversarial networks and have a command of the most important regularization methods for deep learning applications.
They know how to apply Gaussian Process methodology for complex data in regression and classification problems, how to model the correlation functions of the input variables and how to solve complex optimization problems with the help of surrogate models.
They are able to apply the methods mentioned above using the statistical programming system R and/or the corresponding Python tools.
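The Markov Chain Monte Carlo methods referred to above can be sketched with a minimal random-walk Metropolis-Hastings sampler in Python. This is an illustrative sketch only, not course material: the function names, the symmetric Gaussian proposal, and the standard-normal target are all assumptions made for the example.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, proposal_sd=1.0, seed=None):
    """Random-walk Metropolis-Hastings sampler for a 1-D target density.

    log_target: log of the (possibly unnormalized) target density.
    """
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x = x0
    log_p = log_target(x)
    for i in range(n_samples):
        # Symmetric Gaussian proposal, so the Hastings correction cancels
        # and the acceptance ratio is just the ratio of target densities.
        x_new = x + rng.normal(scale=proposal_sd)
        log_p_new = log_target(x_new)
        if np.log(rng.uniform()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        samples[i] = x  # on rejection, the current state is repeated
    return samples

# Illustrative target: standard normal, via its unnormalized log density.
samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0,
                              n_samples=20000, seed=42)
burned = samples[5000:]  # discard burn-in before summarizing
# burned.mean() and burned.std() should be close to 0 and 1.
```

In practice the course relies on mature implementations of such samplers (and of Hamiltonian MC) in R and/or Python rather than hand-written loops; the sketch only shows the accept/reject mechanism.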

Prerequisites: all modules of "Data Science Basics" and "Artificial Intelligence (I)"

The module covers the following topics/contents:

  • Monte Carlo methods: generation of pseudo-random numbers, direct and non-iterative MC methods, rejection and importance sampling
  • Markov chains: transition matrix, recurrent and transient states, ergodic distributions
  • Markov Chain Monte Carlo methods: Gibbs sampling, Metropolis-Hastings algorithm, Hamiltonian MC methods
  • Methods of approximate (Bayesian) inference: variational Bayesian inference, Laplace and INLA approximations, approximate Bayesian computation
  • Generative stochastic networks: Boltzmann machines, generative adversarial networks, adversarial games
  • Regularization methods for deep learning: bagging and other ensemble methods, dropout, data augmentation
  • Gaussian processes (GPs) for regression and classification: treed GPs, stationary and non-stationary GPs, covariance functions, models with nugget effect, surrogate models, additive GPs, nearest-neighbour GPs
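The GP topics in the last bullet can be illustrated with a minimal sketch of GP regression using a squared-exponential covariance function and a nugget (noise) term. The helper names, the kernel choice, and the sine-curve test data are illustrative assumptions, not part of the course materials.

```python
import numpy as np

def sq_exp_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance function for 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2, lengthscale=1.0):
    """Posterior mean and variance of a zero-mean GP with a nugget term."""
    K = sq_exp_kernel(X_train, X_train, lengthscale) \
        + noise * np.eye(len(X_train))          # nugget on the diagonal
    K_s = sq_exp_kernel(X_train, X_test, lengthscale)
    K_ss = sq_exp_kernel(X_test, X_test, lengthscale)
    L = np.linalg.cholesky(K)                   # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v
    return mean, np.diag(cov)

# Illustrative data: noisy observations of sin(x), predictions on a grid.
X = np.linspace(0, 2 * np.pi, 8)
y = np.sin(X) + 0.05 * np.random.default_rng(0).normal(size=8)
grid = np.linspace(0, 2 * np.pi, 50)
mu, var = gp_posterior(X, y, grid)
```

The nugget term keeps the covariance matrix well-conditioned and models observation noise; the more advanced variants listed above (treed, additive, nearest-neighbour GPs) build on this same mean/covariance computation.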

Lecture script as provided in the course (required)
V. Gomez-Rubio: Bayesian Inference with INLA. Chapman and Hall/CRC 2020
D. Barber: Bayesian Reasoning and Machine Learning. Cambridge University Press 2012
K.P. Murphy: Machine Learning. A Probabilistic Perspective. 2nd ed., MIT Press 2021
R.B. Gramacy: Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences. Chapman and Hall/CRC 2020

Integrated course - teaching & discussion, guest lectures, demonstrations, practical examples, homework

Continuous assessment (immanent examination character): presentations, assignment reports, written/oral exam