Machine Learning Safety
Deep Learning systems perform remarkably well on a broad range of tasks across domains such as computer vision and natural language processing. Recently, however, theoretical studies as well as widely publicized accidents involving self-driving vehicles have demonstrated that Deep Learning-based models sometimes fail, seemingly without any reason.
To enable the deployment of artificial neural networks as components of safety-critical systems, such as autonomous vehicles or controllers for industrial manufacturing processes, their reliability has to be increased and, where possible, proven.
Particular domains of interest include Anomaly detection and Out-of-Distribution detection, which, in this context, refer to methods that assess whether a model is able to provide correct predictions for a given input.
We investigate different methods to estimate the confidence in the predictions of an artificial neural network in order to enable systems to fail gracefully.
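A simple and widely used baseline for such confidence estimation is the maximum softmax probability of a classifier: inputs on which the model assigns only a low probability to its most likely class can be flagged instead of being acted upon. The following PyTorch sketch illustrates this idea only; it is not a description of our specific methods, and the model, batch, and threshold are placeholders.

```python
import torch
import torch.nn.functional as F


def max_softmax_confidence(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return the maximum softmax probability per input as a confidence score.

    Low scores indicate inputs the model is unsure about, e.g. anomalous or
    out-of-distribution samples, for which a prediction should not be trusted.
    """
    model.eval()
    with torch.no_grad():
        logits = model(x)                    # shape: (batch_size, num_classes)
        probs = F.softmax(logits, dim=1)     # class probabilities
        confidence, _ = probs.max(dim=1)     # highest probability per sample
    return confidence


# Usage sketch (classifier, batch, and the threshold are hypothetical;
# in practice the threshold would be calibrated on held-out data):
# scores = max_softmax_confidence(classifier, batch)
# accept = scores > 0.9   # reject low-confidence predictions, fail gracefully
```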
For further questions, please contact Konstantin Kirchheim.