
How Far Will AI Reliability Evolve? The Impact of a 0% Error Guarantee: A New Theory to Contain AI Uncertainty with Formulas

July 2, 2025, 01:16

Introduction: The Question of "Is AI Really Safe?"

Behind the generative AI boom, in fields where lives are at stake, such as healthcare and autonomous driving, even "a 0.01% probability of an accident" can lead to the worst-case scenario. The key here is quantifying **uncertainty**. However, the current mainstream methods, such as dropout-based estimation and Bayesian inference, are merely approximations. Methods that **"mathematically guarantee no errors occur"** have been almost nonexistent.
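
To see what "merely an approximation" means in practice, here is a minimal Monte Carlo dropout sketch, one of the mainstream estimation methods the article mentions. The weights and dropout rate are illustrative assumptions, not from any cited system; the point is that the reported spread is a statistical estimate with no guaranteed coverage.

```python
import numpy as np

# Minimal MC-dropout sketch (illustrative weights, not from the paper):
# keep dropout active at inference time and read predictive uncertainty
# off the spread of repeated stochastic forward passes.
rng = np.random.default_rng(42)
W1 = rng.normal(size=(16, 1)) * 0.5
W2 = rng.normal(size=(1, 16)) * 0.5
P_DROP = 0.2

def forward(x):
    h = np.maximum(W1 @ x, 0.0)            # ReLU hidden layer
    mask = rng.random(h.shape) > P_DROP    # dropout stays on at test time
    h = h * mask / (1.0 - P_DROP)          # inverted-dropout scaling
    return float((W2 @ h).item())

x = np.array([[0.5]])
samples = [forward(x) for _ in range(1000)]
mean, std = np.mean(samples), np.std(samples)
print(f"prediction ~ {mean:.3f} +/- {std:.3f}")
# The band is only a statistical estimate: a different seed or sample
# count shifts it, and nothing proves the true output range lies inside.
```

A different random seed or sample budget yields a different band, which is exactly the gap the TU Wien result targets.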


Breakthrough at TU Wien

In June 2025, Andrey Kofnov and colleagues at **Vienna University of Technology (TU Wien)** announced a method that partitions the input space into high-dimensional polytopes and computes strict upper and lower bounds on the output distribution for each region. The paper is already available on arXiv and has been accepted at ICML 2025 (phys.org).


What's New?

  1. Geometric Partitioning

    • For ReLU networks, the input space is partitioned into convex polytopes along the boundaries where activation patterns change.

    • Within each polytope the network reduces to a single linear (affine) map, so the probability distribution of the output can be analyzed exactly.

  2. Calculating Upper and Lower Bounds "Precisely"

    • Instead of an approximation, a mathematical proof guarantees that the output will absolutely not go beyond the computed range.

  3. Handling the Entire Probability Distribution

    • Generalized to ReLU/tanh/softmax, etc.

  4. Limited to Small-Scale NNs

    • Still computationally intensive for LLMs; the authors acknowledge this as a challenge (phys.org).
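
The geometric idea behind points 1 and 2 can be reproduced at toy scale. For a 1-D input, the "polytopes" are simply the intervals between ReLU kinks; on each interval the activation pattern is fixed, so the network collapses to a single affine map. A minimal sketch with arbitrary illustrative weights (not the paper's implementation):

```python
import numpy as np

# Tiny 1-D ReLU network with arbitrary illustrative weights:
# y = v @ relu(w * x + b) + c
w = np.array([1.0, -2.0])
b = np.array([-1.0, 1.0])
v = np.array([1.5, 0.5])
c = 0.2

def net(x):
    return float(v @ np.maximum(w * x + b, 0.0) + c)

# ReLU kinks sit where w_i * x + b_i = 0; these breakpoints split the
# input line into intervals -- the 1-D analogue of convex polytopes --
# on which the activation pattern, and hence the affine map, is fixed.
breakpoints = sorted(-b[i] / w[i] for i in range(len(w)) if w[i] != 0)

lo, hi = -3.0, 3.0
edges = [lo] + [t for t in breakpoints if lo < t < hi] + [hi]

for left, right in zip(edges, edges[1:]):
    mid = 0.5 * (left + right)
    active = (w * mid + b) > 0           # fixed activation pattern here
    slope = float(v @ (w * active))      # affine coefficients on this piece
    intercept = float(v @ (b * active)) + c
    # the affine form matches the network everywhere on this interval
    for x in np.linspace(left, right, 5):
        assert abs(net(x) - (slope * x + intercept)) < 1e-9
    print(f"[{left:+.2f}, {right:+.2f}]  y = {slope:+.2f}*x {intercept:+.2f}")
```

With higher-dimensional inputs the same construction yields convex polytopes instead of intervals, which is where the combinatorial cost comes from.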


Key Technology: Visualizing High-Dimensional Space

  • Input Space: An "n-dimensional universe" with coordinates like pixel values, noise, and lighting.

  • Partitioning: Generating polytopes at boundaries where activation patterns change.

  • Analysis: Within each polytope, the NN acts as a single affine map, so the output distribution is the input distribution pushed through that map.

  • Aggregation: Integrating all polytopes and providing upper and lower bounds on the cumulative distribution function (CDF) of the output.
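
The analysis and aggregation steps above can be sketched end-to-end in one dimension, where each affine piece contributes an exactly computable slice of probability mass to the output CDF. The piecewise map and the uniform input distribution below are illustrative assumptions, not the paper's setup, which handles high-dimensional polytopes and more general distributions:

```python
import numpy as np

# Hypothetical piecewise-affine network on [-3, 3]: one affine map per
# region, the 1-D analogue of "one affine map per polytope".
# Each entry: (left, right, slope, intercept)
PIECES = [(-3.0, 0.5, -1.0, 0.7),
          (0.5, 1.0, 0.0, 0.2),
          (1.0, 3.0, 1.5, -1.3)]
LO, HI = -3.0, 3.0

def exact_cdf(t):
    """Exact P(Y <= t) for Y = f(X), X ~ Uniform(LO, HI)."""
    mass = 0.0
    for l, r, a, d in PIECES:
        if a > 0:      # y <= t  <=>  x <= (t - d) / a
            mass += max(0.0, min(r, (t - d) / a) - l)
        elif a < 0:    # y <= t  <=>  x >= (t - d) / a
            mass += max(0.0, r - max(l, (t - d) / a))
        else:          # constant piece contributes all or nothing
            mass += (r - l) if d <= t else 0.0
    return mass / (HI - LO)

def f(x):
    for l, r, a, d in PIECES:
        if l <= x <= r:
            return a * x + d

# Monte Carlo only agrees up to sampling error; the aggregated
# per-piece computation is exact.
rng = np.random.default_rng(1)
xs = rng.uniform(LO, HI, 100_000)
for t in (0.2, 1.0, 2.0):
    mc = np.mean([f(x) <= t for x in xs])
    print(f"P(Y <= {t}) exact = {exact_cdf(t):.5f}, MC ~ {mc:.5f}")
```

Because every piece is handled analytically, the aggregated CDF carries a proof rather than a confidence interval, which is the core of the method's guarantee.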


Comparison with Existing Methods

| Feature | Approximate UQ (MC Dropout, etc.) | New Method (TU Wien) |
| --- | --- | --- |
| Guarantee | Statistical estimate (with error) | Mathematically rigorous |
| Scale | Works on large-scale NNs | Limited to small-scale NNs |
| Computational cost | GPU-inference level | Combinatorial explosion (optimization needed) |
| Applications | LLMs, CV in general | "Small but life-critical" areas: medical devices, industrial control, etc. |


Application Scenarios

  1. Embedded AI for Medical Use: Image-recognition models in devices such as catheter robots, where a misdiagnosis is critical.

  2. Sensor Fusion in Autonomous Driving: Fully proof-verified NNs that fuse ultrasound and radar data.

  3. Financial Risk Calculation: Integrating small-scale NNs into real-time audits to automatically guarantee thresholds.


Reactions on SNS

  • Hacker News User @quant_curious

    "The theory is more beautiful than existing UQ that measures quality via rejection-based validation." (news.ycombinator.com)

  • Hacker News Comment @esafak

    "Bayesian NNs can also express uncertainty, but calibration is difficult. Mathematical guarantees are a game-changer." (news.ycombinator.com)

  • Reddit r/MachineLearning Thread

    "UQ in Deep Learning has always been said to be 'one step away.' With proof, FDA approval is also in sight." (reddit.com)

  • TU Wien's Official Account on X (formerly Twitter)

    Reported "Opening a new chapter in #AI safety"; the post announcing the ICML acceptance (2025-07-01) gained over 500 likes (tuwien.at).


Expert Opinions

  • Prof. Ezio Bartocci (Co-author)

    "While large models like ChatGPT are distant goals, we want to make safety proofs a culture starting with small models." (phys.org)

  • External Researchers' Impressions
    Bayesian statisticians commented, "Rigor is great, but reconciling it with the reality of deployment is a challenge" (Hacker News, news.ycombinator.com).


Challenges and Future Prospects

  1. Reducing Computational Costs: Suppressing exponential growth in the number of polytopes through sampling/approximation.

  2. Extension to Mid-Scale NNs: Evaluating NNs with tens of thousands of parameters within minutes on a GPU.

  3. Regulatory Compliance: Incorporating mathematical guarantees into the review of "high-risk systems" under the EU AI Act.
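
The scale of challenge 1 is easy to reproduce: a single ReLU layer with n units admits up to 2^n activation patterns, and the number of polytopes a verifier must enumerate climbs steeply with input dimension. A toy count under assumed random weights and sampled inputs (purely illustrative):

```python
import numpy as np

# Count distinct activation patterns a single 12-unit ReLU layer
# realizes on random inputs as the input dimension grows. Each
# distinct pattern corresponds to one polytope the verifier must
# analyze; the hard ceiling is 2**12 = 4096.
rng = np.random.default_rng(0)
N_UNITS = 12
for dim in (1, 2, 4, 8):
    W = rng.normal(size=(N_UNITS, dim))
    b = rng.normal(size=N_UNITS)
    xs = rng.normal(size=(300_000, dim)) * 5.0   # wide input sampling
    Z = (xs @ W.T + b) > 0                       # activation patterns
    patterns = {row.tobytes() for row in Z.astype(np.uint8)}
    print(f"input dim {dim:2d}: {len(patterns):5d} polytopes sampled "
          f"(max {2**N_UNITS})")
```

In low input dimension the hyperplane arrangement caps the count well below 2^n, but with high-dimensional inputs and multiple layers the region count multiplies, which is why the sampling/approximation strategies in challenge 1 matter.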


Conclusion

  • Significance: Paving the way to address AI safety not probabilistically but with mathematical clarity.

  • Impact: If it becomes widespread in "zero tolerance" areas like medical devices and aviation control, AI adoption will accelerate.

  • Future Vision: By around 2030, the workflow might standardize to "first scale down NNs for rigorous verification, then distill them into large-scale models."

References

AI Uncertainty Quantifiable Through Mathematical Approach
Source: https://phys.org/news/2025-06-mathematical-approach-uncertainty-ai-quantifiable.html
