New technique enables AI to admit what it doesn’t know

Technology|11/5/2026

  • New method curbs overconfidence in AI systems
  • Brain-inspired approach helps models better gauge their own limits

South Korean researchers have developed a method that enables artificial intelligence models to admit when they do not know something, much as humans do.

Researchers from the Korea Advanced Institute of Science and Technology (KAIST) say the breakthrough could improve the reliability of AI systems used in sensitive fields such as autonomous driving and medical diagnosis.

Previous studies have identified "overconfidence" in AI models as a key risk: systems generate answers, or even fabricate information, rather than acknowledging uncertainty.

The researchers found a way to help AI systems recognize unfamiliar or previously unseen situations, reducing errors and improving the quality of their responses.

They explain that one major cause of overconfidence lies in how neural networks are initially trained: small errors made early on can accumulate and grow during later stages.

The team also discovered that feeding random data during early training can make models appear confident without real understanding, contributing to hallucinated outputs.

To address this, researchers drew inspiration from the human brain, which develops baseline neural activity even before birth.

They introduced a “warm-up” phase in which AI models are exposed to random noise before actual training begins.

This helps the system establish a lower, more realistic confidence level and reduces the tendency toward overconfidence.
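The idea of a noise warm-up can be illustrated with a toy experiment. The sketch below is not the KAIST team's actual method or code; it is a minimal, hypothetical demonstration in which a small softmax classifier is briefly trained on random noise with uniform targets, which drives its confidence on unfamiliar inputs down toward chance level before real training would begin. All dimensions, learning rates, and step counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear softmax classifier: p = softmax(W x + b).
# D input features, K classes -- illustrative values only.
D, K = 20, 5
W = rng.normal(0.0, 1.0, (K, D))
b = np.zeros(K)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence(X):
    """Mean maximum softmax probability over a batch of inputs."""
    return softmax(X @ W.T + b).max(axis=1).mean()

noise = rng.normal(0.0, 1.0, (256, D))
before = confidence(noise)  # a freshly initialized net is often very confident

# "Warm-up" phase: train on random noise with a uniform target
# distribution, pulling confidence toward chance level (1/K).
uniform = np.full(K, 1.0 / K)
lr = 0.5
for _ in range(200):
    X = rng.normal(0.0, 1.0, (64, D))
    P = softmax(X @ W.T + b)
    grad = P - uniform                 # dCE/dlogits for uniform targets
    W -= lr * grad.T @ X / len(X)
    b -= lr * grad.mean(axis=0)

after = confidence(noise)
print(f"mean max-softmax confidence: before={before:.2f}, after={after:.2f}")
```

After the warm-up, the model's confidence on pure noise sits near the chance level of 1/K rather than near certainty, which is the calibration effect the article describes.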

As a result, models become better at recognizing when they do not know an answer instead of guessing incorrectly with high certainty.

Lead author Se-Bum Paik said the approach brings AI closer to a human-like awareness of uncertainty, making models not only more accurate but also more honest about their limits.