Abstract: As artificial intelligence and machine learning (AI/ML) methods are increasingly adopted in high-stakes applications, concerns about their reliability, robustness, and trustworthiness are growing. This talk explores the critical role of uncertainty quantification (UQ) in addressing these concerns by distinguishing and analyzing two sources of uncertainty in deep learning systems: aleatoric and epistemic. Aleatoric uncertainty is irreducible, arising from sources such as inherent noise and randomness in the data. Epistemic uncertainty is reducible, arising from limited knowledge of the model parameters. We will present UQ techniques such as Bayesian neural networks, Monte Carlo dropout, and ensemble methods, and demonstrate how they can be integrated into deep learning pipelines. The result is robust models that attach a measure of confidence to their predictions, improving interpretability, reliability, and trustworthiness.
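
To make the Monte Carlo dropout idea mentioned above concrete, here is a minimal, hypothetical sketch using NumPy only: a toy one-layer network with fixed random weights, where the dropout mask is kept active at prediction time and many stochastic forward passes are averaged. The network, its weights, and all function names are illustrative assumptions, not part of the talk; the spread across passes serves as a rough proxy for epistemic uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regressor with fixed random weights (illustrative only).
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1))

def forward(x, drop_p=0.5):
    """One stochastic forward pass; dropout stays active at inference."""
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) > drop_p      # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_p)            # inverted-dropout rescaling
    return h @ W2

def mc_dropout_predict(x, n_samples=200):
    """Monte Carlo dropout: average many stochastic forward passes.

    The sample mean approximates the predictive mean; the standard
    deviation across passes is a proxy for epistemic uncertainty.
    """
    preds = np.stack([forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = np.array([[0.3]])
mean, std = mc_dropout_predict(x)
print(mean.shape, std.shape)  # (1, 1) (1, 1)
```

In a real deep learning pipeline the same pattern applies: keep dropout layers in training mode at inference and aggregate repeated forward passes, rather than using hand-built weights as in this sketch.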