Overview
This tutorial aims to help researchers understand and handle uncertainty in their models, making them more reliable through Bayesian methods. We will start by discussing different Bayesian approaches, then focus on Bayesian Neural Networks (BNNs) and how to approximate them efficiently for computer vision tasks. Throughout, real-world examples and practical methods will show how to put these ideas into practice.
Schedule
- 8:45 - 9:15: Opening - Andrei
- 9:15 - 10:05: Uncertainty quantification: from maximum a posteriori to BNNs - Pavel
- 10:05 - 10:30: Computationally-efficient BNNs for computer vision - Gianni
- 10:35 - 11:00: ☕ Coffee ☕
- 11:00 - 11:50: Convert your DNN into a BNN - Alexander
- 11:50 - 12:20: Quality of estimated uncertainty and practical examples - Adrien & Gianni
- 12:20 - 12:40: Closing remarks + Q&A - Andrei, Alex, Pavel & Gianni
Outline
Introduction: Why & where is UQ helpful?
An initial exploration of the critical role of uncertainty quantification (UQ) in computer vision (CV): participants will gain an understanding of why it is essential to consider uncertainty in CV, especially for decision-making in complex environments. We will introduce real-world scenarios where uncertainty can profoundly impact model performance and safety, setting the stage for deeper exploration throughout the tutorial.
From maximum a posteriori to BNNs.
In this part, we will journey through the evolution of UQ techniques, starting from classic approaches such as maximum a posteriori (MAP) estimation and moving to the more elaborate Bayesian Neural Networks. Participants will grasp the conceptual foundations of UQ, laying the groundwork for the subsequent discussion of Bayesian methods.
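As a concrete anchor for this part, here is a minimal PyTorch sketch (our own illustration, not code from the tutorial) of the classical starting point: MAP estimation with a Gaussian prior, which reduces to ordinary training with weight decay.

```python
import torch.nn.functional as F

def map_loss(model, x, y, prior_precision=1.0):
    """Negative log-posterior: data NLL plus a Gaussian prior on the weights.

    With prior p(w) = N(0, I / prior_precision), the negative log-prior is
    0.5 * prior_precision * ||w||^2 (up to a constant), so MAP estimation
    is exactly L2-regularized maximum likelihood.
    """
    nll = F.cross_entropy(model(x), y)
    l2 = sum(p.pow(2).sum() for p in model.parameters())
    return nll + 0.5 * prior_precision * l2
```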
Strategies for BNN posterior inference.
This is the core part, which will dive into the process of estimating the posterior distribution of BNNs. The participants will gain insights into the computational complexities involved in modeling uncertainty through a comprehensive overview of techniques such as Variational Inference (VI), Hamiltonian Monte Carlo (HMC), and Langevin Dynamics. Moreover, we will explore the characteristics and visual representation of posterior distributions, providing a better understanding of Bayesian inference.
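To make one of these strategies tangible, below is a minimal sketch of stochastic gradient Langevin dynamics, a close relative of the Langevin Dynamics covered in this session; the step size and snapshot schedule are illustrative assumptions of ours, not tutorial code.

```python
import torch

@torch.no_grad()
def langevin_step(model, lr=1e-5):
    """One Langevin update: a gradient step on the negative log-posterior
    plus Gaussian noise with variance 2*lr, so that the iterates
    approximately sample the posterior instead of collapsing to the MAP."""
    for p in model.parameters():
        if p.grad is not None:
            p.add_(-lr * p.grad + (2 * lr) ** 0.5 * torch.randn_like(p))

# Usage sketch: backpropagate the negative log-posterior (e.g. map_loss
# above, with the NLL gradient scaled to the full dataset size when using
# minibatches), take Langevin steps, and keep periodic snapshots of the
# weights as approximate posterior samples:
#
#   for step, (x, y) in enumerate(loader):
#       model.zero_grad()
#       map_loss(model, x, y).backward()
#       langevin_step(model)
#       if step % 100 == 0:
#           posterior_samples.append(copy.deepcopy(model.state_dict()))
```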
Computationally-efficient BNNs for CV.
Here, we will present recent techniques to improve the computational efficiency of BNNs for computer vision tasks. We will cover different ways of obtaining BNNs from intermediate checkpoints, from weight trajectories during a training run, from different types of variational subnetworks, etc., along with their main strengths and limitations.
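A minimal sketch (our illustration, assuming softmax classification) of one simple member of this family: averaging predictions over weight snapshots saved along a single training run.

```python
import torch

def checkpoint_ensemble_predict(model, checkpoints, x):
    """Average softmax predictions over weight snapshots collected during
    training: a cheap, single-run approximation of BNN marginalization."""
    probs = []
    for state_dict in checkpoints:
        model.load_state_dict(state_dict)
        model.eval()
        with torch.no_grad():
            probs.append(torch.softmax(model(x), dim=-1))
    return torch.stack(probs).mean(dim=0)

# Snapshots are gathered during training, e.g. every k epochs:
#   checkpoints.append(copy.deepcopy(model.state_dict()))
```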
Convert your DNN into a BNN: post-hoc BNN inference.
This segment covers post-hoc inference techniques, with a focus on the Laplace approximation. Participants will learn how the Laplace approximation serves as a computationally efficient method for approximating the posterior distribution of an already-trained network, effectively converting a DNN into a BNN after training.
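For a flavor of the idea, here is a heavily simplified diagonal Laplace sketch (our own illustration; practical implementations, e.g. the laplace-torch package, use more careful Hessian approximations such as Kronecker-factored ones).

```python
import torch
import torch.nn.functional as F

def fit_diag_laplace(model, loader, prior_precision=1.0):
    """Fit a diagonal Gaussian posterior N(w_MAP, H^-1) around the trained
    weights, crudely approximating the Hessian diagonal by accumulated
    squared gradients (an empirical-Fisher-style estimate) plus the prior."""
    hess_diag = [torch.full_like(p, prior_precision) for p in model.parameters()]
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x), y, reduction='sum').backward()
        for h, p in zip(hess_diag, model.parameters()):
            h += p.grad.pow(2)
    return [1.0 / h for h in hess_diag]  # posterior variances

def sample_posterior_weights(model, variances):
    """Perturb the MAP weights in place with one posterior sample
    (keep a copy of the original state_dict to restore afterwards)."""
    with torch.no_grad():
        for p, v in zip(model.parameters(), variances):
            p.add_(torch.randn_like(p) * v.sqrt())
```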
Quality of estimated uncertainty and practical examples.
In the final session, participants will learn how to evaluate the quality of UQ in practical settings. We will develop multiple approaches to assess the reliability and calibration of uncertainty estimates, equipping participants with the tools to gauge the robustness of their models. Additionally, we will dive into real-world examples and applications, showcasing how UQ can enhance the reliability and performance of computer vision systems in diverse scenarios. Through interactive discussions and case studies, participants will gain practical insights into deploying uncertainty-aware models in real-world applications.
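As an example of the kind of metric covered in this session, here is a minimal sketch of the expected calibration error (ECE), which compares confidence to accuracy across confidence bins (the equal-width binning scheme assumed here is one common choice).

```python
import torch

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: the average gap between accuracy and confidence over confidence
    bins, weighted by the fraction of samples falling in each bin."""
    confidences, predictions = probs.max(dim=-1)
    accuracies = predictions.eq(labels).float()
    edges = torch.linspace(0, 1, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = (accuracies[in_bin].mean() - confidences[in_bin].mean()).abs()
            ece += in_bin.float().mean() * gap
    return float(ece)
```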
Uncertainty Quantification Framework.
This tutorial will also briefly introduce the TorchUncertainty library, an open-source framework for training uncertainty-aware models in PyTorch.
Relation to prior tutorials and short courses
This tutorial is affiliated with the UNCV Workshop, which had its inaugural edition at ECCV 2022, a subsequent one at ICCV, and is back at ECCV this year. In contrast to the workshop, the tutorial places its primary emphasis on the theoretical facets.
UQ has received growing attention recently, as evidenced by its inclusion in the tutorial 'Many Faces of Reliability of Deep Learning for Real-World Deployment'. While that tutorial explored various applications associated with uncertainty, it did not place a specific emphasis on probabilistic models and Bayesian Neural Networks. Our tutorial aims to provide a more in-depth exploration of uncertainty theory, accompanied by practical applications, including the presentation of the TorchUncertainty library.
Selected References
- Immer, A., Palumbo, E., Marx, A., & Vogt, J. E. Effective Bayesian Heteroscedastic Regression with Deep Neural Networks. In NeurIPS, 2023.
- Franchi, G., Bursuc, A., Aldea, E., Dubuisson, S., & Bloch, I. Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification. IEEE TPAMI, 2023.
- Franchi, G., Yu, X., Bursuc, A., Aldea, E., Dubuisson, S., & Filliat, D. Latent Discriminant Deterministic Uncertainty. In ECCV, 2022.
- Laurent, O., Lafage, A., Tartaglione, E., Daniel, G., Martinez, J. M., Bursuc, A., & Franchi, G. Packed-Ensembles for Efficient Uncertainty Estimation. In ICLR, 2023.
- Izmailov, P., Vikram, S., Hoffman, M. D., & Wilson, A. G. What are Bayesian neural network posteriors really like? In ICML, 2021.
- Izmailov, P., Maddox, W. J., Kirichenko, P., Garipov, T., Vetrov, D., & Wilson, A. G. Subspace inference for Bayesian deep learning. In UAI, 2020.
- Franchi, G., Bursuc, A., Aldea, E., Dubuisson, S., & Bloch, I. TRADI: Tracking deep neural network weight distributions. In ECCV, 2020.
- Wilson, A. G., & Izmailov, P. Bayesian deep learning and a probabilistic perspective of generalization. In NeurIPS, 2020.
- Hendrycks, D., & Dietterich, T. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. In ICLR, 2019.
- Izmailov, P., Podoprikhin, D., Garipov, T., Vetrov, D., & Wilson, A. G. Averaging weights leads to wider optima and better generalization. In UAI, 2018.