Name of the participant: Marcel Reith-Braun
Description of the IT-research project: A major drawback of today’s artificial neural networks, such as those used in image recognition, autonomous driving, or speech processing, is that they output predictions without an associated confidence level. Trusting these predictions without verification can have serious consequences, for example in autonomous driving if the network fails to detect another road user. In practice, this raises serious concerns about the use of neural networks and limits their adoption across industry.
It is possible to overcome this disadvantage by estimating additional uncertainties for the predictions of neural networks. Based on these uncertainties, one can decide to what degree the network’s output can be trusted. Bayesian neural networks, among other methods, are a promising approach to uncertainty estimation. However, due to the great complexity of today’s neural networks, their uncertainty can only be approximated, which introduces approximation errors. In practice, the uncertainties obtained by these methods are therefore often not trustworthy, and the original problem remains unsolved.
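As a rough illustration only (a minimal sketch with assumed names, not the project’s actual implementation), the predictive uncertainty of an approximate Bayesian neural network is commonly estimated by Monte Carlo sampling, e.g. repeated stochastic forward passes (MC dropout) or an ensemble, and the spread of the samples is taken as the uncertainty:

```python
import numpy as np

def predictive_mean_and_uncertainty(stochastic_forward_pass, x, n_samples=50):
    """Approximate predictive mean and standard deviation for input x.

    `stochastic_forward_pass` is assumed to return a different prediction on
    each call (e.g. because dropout stays active at test time). The spread of
    the sampled predictions serves as an approximate uncertainty estimate.
    """
    samples = np.array([stochastic_forward_pass(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

# Toy usage with a dummy stochastic model standing in for a Bayesian NN
rng = np.random.default_rng(0)
dummy_model = lambda x: 2.0 * x + rng.normal(scale=0.1)
mean, std = predictive_mean_and_uncertainty(dummy_model, x=1.5)
print(f"prediction: {mean:.2f} +/- {std:.2f}")
```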
In the course of this project, we aim to develop a methodology for evaluating the estimated uncertainties and restricting them to regions in which they are valid. Afterwards, we will apply the method to an industry-related problem in the field of predictive maintenance or control engineering. The main idea is to develop a (statistical) test that checks whether the data generated by the neural network could have originated from the underlying probabilistic model. Subsequently, the problem of untrustworthy uncertainties shall be addressed by restricting the estimated uncertainties to the regions found to be valid.
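One plausible form such a consistency check could take (shown here purely as an assumption, for a regression network that predicts Gaussian means and variances, not as the project’s chosen test) is a goodness-of-fit test on probability integral transform (PIT) values, which are uniformly distributed exactly when the predicted distributions match the observed data:

```python
import numpy as np
from scipy import stats

def uncertainty_consistency_test(y_true, pred_mean, pred_std, alpha=0.05):
    """Test whether observations are consistent with the predicted Gaussians.

    If the predictive distributions N(pred_mean, pred_std^2) are correct, the
    PIT values are uniform on [0, 1]. A Kolmogorov-Smirnov test against the
    uniform distribution then serves as a check on the estimated uncertainties.
    """
    pit = stats.norm.cdf(y_true, loc=pred_mean, scale=pred_std)
    ks_stat, p_value = stats.kstest(pit, "uniform")
    return p_value >= alpha, ks_stat, p_value  # True -> uncertainties look valid

# Toy usage: well-calibrated vs. overconfident predictive uncertainties
rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=0.5, size=1000)
print(uncertainty_consistency_test(y, pred_mean=1.0, pred_std=0.5))  # passes
print(uncertainty_consistency_test(y, pred_mean=1.0, pred_std=0.1))  # fails
```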
Software Campus partners: KIT, IAV GmbH
Implementation period: 01.02.2021 – 14.05.2022