==== Description of the Seminar ====
Score-based generative models have demonstrated state-of-the-art performance in numerous applications in recent years. The central idea behind these models is to gradually inject noise into the training data and to learn the reverse process in order to generate new samples. The training and sampling procedures can be conducted independently: the learning phase relies on noise-conditional score networks, while sampling can be carried out with various methods, including Langevin Monte Carlo approaches, stochastic differential equations, ordinary differential equations, and combinations thereof.
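To fix ideas, here is the unadjusted Langevin Monte Carlo update in standard notation (for orientation only; not taken from the seminar materials): starting from some $x_0$, one iterates

$$ x_{k+1} = x_k + \frac{\epsilon}{2}\,\nabla_x \log p(x_k) + \sqrt{\epsilon}\, z_k, \qquad z_k \sim \mathcal{N}(0, I). $$

For a small step size $\epsilon$ and many iterations, $x_k$ is approximately distributed according to $p$; in score-based models the unknown score $\nabla_x \log p$ is replaced by a learned network.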
In this seminar, we will begin by reviewing generative models and examining the most common architectures used in current research. Special attention will be given to comparing the different objective functions needed for training, as well as the different sampling procedures. We will explore invertible neural networks, with a particular focus on normalizing flows and continuous normalizing flows. Subsequently, we will address score matching and Langevin dynamics for score-based generative models. This includes an explanation of how Langevin dynamics can generate samples from approximate scores, a derivation of noise-conditional score networks, and a detailed account of the training process.
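As a sketch of the training objective (a standard denoising score matching formulation; the exact weighting used in the seminar may differ), a noise-conditional score network $s_\theta(\tilde{x}, \sigma)$ is fitted on Gaussian-perturbed data $\tilde{x} = x + \sigma z$ by minimizing

$$ \mathcal{L}(\theta) = \mathbb{E}_{\sigma}\, \mathbb{E}_{x \sim p_{\mathrm{data}}}\, \mathbb{E}_{z \sim \mathcal{N}(0, I)} \left[ \lambda(\sigma) \left\| s_\theta(x + \sigma z, \sigma) + \frac{z}{\sigma} \right\|^2 \right], $$

where $\lambda(\sigma) > 0$ is a weighting function; the minimizer matches the score $\nabla_{\tilde{x}} \log p_\sigma(\tilde{x})$ of the perturbed data distribution at each noise level.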
Additionally, we will explore a broader generalization with an infinite number of time steps for the noise levels, studying the process through stochastic differential equations. This formulation, known as score SDEs, leverages SDEs for noise perturbation and sample generation. The seminar will conclude with a comparison to other diffusion models and a discussion of further improvements in sample generation.
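For orientation, the standard score SDE formulation (following Song et al., 2021; the seminar's notation may differ) perturbs data with a forward SDE and generates samples with the corresponding reverse-time SDE:

$$ \mathrm{d}x = f(x, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w, \qquad \mathrm{d}x = \left[ f(x, t) - g(t)^2\, \nabla_x \log p_t(x) \right] \mathrm{d}t + g(t)\,\mathrm{d}\bar{w}, $$

where $w$ and $\bar{w}$ are forward- and reverse-time Wiener processes, and the unknown score $\nabla_x \log p_t(x)$ is again approximated by a time-conditional score network.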
The seminar is scheduled for the second half of the winter term. Participants interested in reviewing concepts of stochastic differential equations have the option to attend a preceding seminar titled [[teaching:st23:seminar| Stochastic Differential Equations and Generative Modelling (Proseminar/Seminar)]], which takes place in the first half of the winter term.
==== Organization ====
  * **Prerequisites:** Basic knowledge in probability theory and statistics
  * **Registration:** Via Müsli. [[https://muesli.mathi.uni-heidelberg.de/lecture/view/1757|Link]]
  * **First (organizational) meeting:** Tuesday, 17 October at 14:00 c.t.
  * **Time and Location:** Tuesdays at 14:00 c.t. in SR 6
Further information on the seminar will be announced at the first organizational meeting. For specific questions, you can contact [[:people | Daniel Gonzalez, Jonas Cassel]].