Our main research areas are:

  • Regression and classification
  • Multivariate analysis
  • Dimension reduction
  • Biostatistics
  • High-dimensional statistics
  • Statistical analysis of probabilistic programs
  • Applications in life sciences and engineering
  • Bayesian learning
  • Experimental design

Important result in Trustworthy AI by interdisciplinary team from Informatics and Mathematics

TU Wien researchers develop a new approach to quantify uncertainty in neural networks

An interdisciplinary research team from TU Wien (Andrey Kofnov, Daniel Kapla, Ezio Bartocci and Efstathia Bura) has achieved a major milestone in the field of artificial intelligence, presenting a novel method to precisely quantify uncertainty in neural networks—paving the way for safer deployment of AI in safety-critical applications.

Our paper, “Exact Upper and Lower Bounds for the Output Distribution of Neural Networks with Random Inputs” (https://arxiv.org/abs/2502.11672), was recently accepted at the prestigious International Conference on Machine Learning (ICML), one of the top three conferences in machine learning and artificial intelligence.

This research is the result of a close interdisciplinary collaboration between two faculties at TU Wien: the Cyber-Physical System Research Unit of the Faculty of Informatics (Bartocci) and the ASTAT Research Unit of the Faculty of Mathematics and Geoinformation (Kofnov, Kapla, Bura). This work highlights the core mission of TU Wien’s doctoral school SecInt (Secure and Intelligent Human-Centric Digital Technologies) to foster cutting-edge research at the intersection of machine learning, formal methods, and cybersecurity.

Why It Matters

Artificial intelligence is transforming every aspect of society—from self-driving cars to wearable health monitors. But a major challenge remains: How much can we trust AI predictions when the input data are uncertain or noisy?

In safety-critical applications like autonomous driving or medical devices, even small errors or unanticipated behavior can lead to devastating outcomes. Traditional neural networks often offer little insight into their confidence, especially when faced with inputs they haven’t seen before.

The TU Wien team developed a new approach to this problem: a mathematically rigorous method to compute exact upper and lower bounds on the predictions of neural networks when the input data are uncertain. This provides guaranteed error margins—a critical advancement over existing methods that rely heavily on approximation or expensive simulations.
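
To give a concrete picture of what “bounds on network outputs under uncertain inputs” means, here is a minimal, hypothetical Python sketch. It is not the method from the paper: it applies classic interval propagation to a toy two-layer ReLU network with made-up weights, which yields guaranteed but generally loose bounds on the output range, whereas the paper derives exact upper and lower bounds on the full output distribution. The network weights, the noise range, and the sample count below are purely illustrative assumptions.

```python
# Hypothetical illustration (not the paper's method): propagate a bounded,
# uncertain input through a tiny ReLU network with interval arithmetic to get
# guaranteed lower/upper output bounds, then check that Monte Carlo samples of
# the noisy input indeed land inside those bounds.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network with fixed, illustrative weights.
W1 = np.array([[0.8, -0.5], [0.3, 1.1]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, -0.7]])
b2 = np.array([0.05])

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return (W2 @ h + b2)[0]            # scalar output

def interval_affine(lo, hi, W, b):
    """Exact interval image of an affine map applied to the box [lo, hi]."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def interval_forward(lo, hi):
    """Guaranteed (possibly loose) output bounds via interval propagation."""
    lo1, hi1 = interval_affine(lo, hi, W1, b1)
    lo1, hi1 = np.maximum(lo1, 0.0), np.maximum(hi1, 0.0)  # ReLU is monotone
    lo2, hi2 = interval_affine(lo1, hi1, W2, b2)
    return lo2[0], hi2[0]

# Uncertain input: nominal point plus bounded noise in [-0.1, 0.1] per coordinate.
x_nom = np.array([0.5, -0.3])
lo, hi = x_nom - 0.1, x_nom + 0.1

out_lo, out_hi = interval_forward(lo, hi)
samples = [forward(rng.uniform(lo, hi)) for _ in range(10_000)]

print(f"guaranteed output bounds: [{out_lo:.4f}, {out_hi:.4f}]")
print(f"empirical output range:   [{min(samples):.4f}, {max(samples):.4f}]")
```

In this sketch the sampled outputs always fall inside the guaranteed interval, but the interval can be much wider than the range the network actually attains, and it says nothing about how probable different outputs are. That gap is exactly what motivates methods, like the one in the paper, that bound the output distribution itself rather than only its range.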