CS Colloquium Series | Pinar Yanardag
Interpretable Deep Generative Models
Deep generative models have made significant progress in a variety of real-world applications. However, these models are often treated as black boxes, with little ability to understand what knowledge they encode in their latent representations. Before we can widely deploy deep generative models in high-stakes tasks such as self-driving cars or medical diagnosis, we need to understand the decision-making process of these models. In this presentation, I will discuss our recent work on interpretable generative models at CATLAB. Additionally, I will highlight how these models facilitate creative content generation and human-AI collaborative design as part of the How to Generate (Almost) Anything project. Lastly, I will touch on our efforts to address fairness and bias in generative models and share directions for future work.