Arizona astronomer creates ray-tracing method to make AI less overconfident

A University of Arizona astronomer has adapted ray tracing to help AI models better assess when their predictions might be wrong, improving trustworthiness.

uncertainty quantification, ray tracing, Bayesian sampling, trustworthy AI, Peter Behroozi, neural networks, AI safety

A University of Arizona astronomer, Peter Behroozi, has developed a novel technique to make AI systems more trustworthy by enabling them to quantify when they might be wrong.

Behroozi’s method adapts ray tracing, a technique traditionally used in computer graphics, to explore the high-dimensional parameter spaces in which AI models operate, allowing a system to gauge its own uncertainty more effectively.

He uses a Bayesian-sampling approach: rather than relying on a single model, the system effectively consults a ‘whole range of experts’ by training many models in parallel and observing the diversity of their outputs.
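The article does not publish Behroozi’s actual code, but the ensemble idea it describes, training many models and reading disagreement between them as uncertainty, can be illustrated with a minimal sketch. The bootstrap resampling, toy sine data, and polynomial models below are all illustrative stand-ins, not the ray-tracing method itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine curve sampled only on [0, 3].
x = rng.uniform(0, 3, 40)
y = np.sin(x) + rng.normal(0, 0.1, 40)

def fit_member(seed):
    # Each "expert" trains on a bootstrap resample of the data,
    # a simple stand-in for sampling over model parameters.
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(x), len(x))
    return np.polyfit(x[idx], y[idx], deg=5)

ensemble = [fit_member(s) for s in range(20)]

def predict_with_uncertainty(x0):
    # Mean of the ensemble is the prediction; spread is the uncertainty.
    preds = np.array([np.polyval(c, x0) for c in ensemble])
    return preds.mean(), preds.std()

mean_in, std_in = predict_with_uncertainty(1.5)    # inside training range
mean_out, std_out = predict_with_uncertainty(6.0)  # far outside it

# The experts disagree far more where they have seen no data,
# flagging the extrapolated answer as untrustworthy.
```

Inside the training range the twenty models agree closely, so the standard deviation is small; at a point well outside it, their answers diverge, exactly the signal a ‘wrong-but-confident’ single model would fail to provide.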

This advance addresses a critical problem in modern AI: ‘wrong-but-confident’ outputs, situations where a model gives a single, confident answer that may be incorrect. According to Behroozi, his technique is orders of magnitude faster than traditional uncertainty-quantification methods, making it practical even for very large neural networks.

The implications are broad, extending from healthcare to finance to autonomous systems: AI that knows its own limits could reduce risk and increase reliability. Behroozi hopes his code, now publicly available, will be adopted by other researchers working under high-stakes conditions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot