Description

Title: Why do we need deep generative modeling?
Abstract: Deep learning achieves state-of-the-art results in discriminative tasks like image classification. However, learning generative models that are capable of capturing rich distributions from vast amounts of data, such as image collections, remains one of the major challenges of machine learning. In recent years, different approaches to achieving this goal have been proposed, either by formulating alternative training objectives to the log-likelihood function, e.g., the adversarial loss, or by utilizing variational inference. The latter approach could be made especially efficient through the application of the reparameterization trick, resulting in a highly scalable framework now known as the variational auto-encoder (VAE). Various extensions to deep generative models have been proposed that aim to enrich the variational posterior, e.g., through normalizing flows. Recently, it has also been noticed that the prior plays a crucial role in mediating between the generative decoder and the variational encoder. Choosing an overly simplistic prior, such as the standard normal distribution, could lead to over-regularization and, as a consequence, very poor hidden representations. Besides Generative Adversarial Networks and VAEs, a third option is to utilize fully observable probabilistic models that directly represent relations among observable random variables, e.g., flow-based models and autoregressive models. In this talk, I will present a general overview of VAEs and flow-based models. First, I will indicate why generative modeling is important for building reliable decision-making models. Next, I will explain Variational Auto-Encoders as a general framework. Then, I will discuss recent successes of flow-based models that apply the idea of normalizing flows directly to the observable variables. The talk will be concluded with challenges and possible future directions.
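
The reparameterization trick mentioned in the abstract can be illustrated with a minimal sketch. This is not the speaker's code; the function names (encode, reparameterize) and shapes are illustrative assumptions, and a stand-in linear map replaces a real neural encoder.

```python
# Minimal sketch of the reparameterization trick used in VAEs:
# instead of sampling z ~ N(mu, sigma^2) directly, sample eps ~ N(0, I)
# and compute z = mu + sigma * eps, so gradients can flow through mu and log_var.

import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Stand-in for a neural encoder q(z|x): returns mean and log-variance.
    mu = 0.1 * x
    log_var = np.zeros_like(x)
    return mu, log_var

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); the randomness is moved into eps,
    # which makes the sampled latent differentiable w.r.t. mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = rng.standard_normal(4)          # a toy "observation"
mu, log_var = encode(x)
z = reparameterize(mu, log_var)     # latent sample passed to the decoder p(x|z)
print(z)
```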
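
The flow-based models discussed at the end of the abstract apply an invertible transform directly to the observable variables and evaluate the exact likelihood via the change-of-variables formula. The sketch below is an assumed toy example (a single element-wise affine flow with made-up parameters), not a model from the talk.

```python
# Toy flow-based model on observables: z = (x - b) * exp(-s) is invertible,
# so the exact log-likelihood follows from the change-of-variables formula
#   log p(x) = log p_z(z) + log |det dz/dx| = log p_z(z) - sum(s).

import numpy as np

s = np.array([0.5, -0.3])   # log-scale parameters (illustrative values)
b = np.array([1.0, 2.0])    # shift parameters (illustrative values)

def log_prob(x):
    z = (x - b) * np.exp(-s)                              # map to the base space
    log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi))    # standard normal base density
    log_det = -np.sum(s)                                  # log |det dz/dx| for the affine map
    return log_base + log_det

x = np.array([1.2, 1.8])
print(log_prob(x))          # exact log-likelihood of the observation
```

In practice the affine parameters are produced by neural networks (as in coupling-layer flows), but the likelihood computation keeps exactly this structure: base density plus log-determinant of the Jacobian.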

Other presentations by Jakub Tomczak

Date             Title
13 January 2020  Why do we need deep generative modeling?