
Ontario Tech Researcher Highlights Challenges of AI Admitting Uncertainty

Dr. Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence at Ontario Tech University, says current artificial intelligence (AI) systems cannot indicate when they do not know an answer. He explains that these generative AI systems often present information with overconfidence, even when they are unsure of the result.

This issue is significant for fields such as finance, law, and health care, where errors by AI systems can have major effects. Dr. Lewis points out that AI models produce convincing answers, but they reach conclusions differently than humans and can make mistakes in unusual ways.

According to Dr. Lewis, research shows that AI systems often fail to accurately gauge their own confidence and typically act as if they are always certain. He says that instead of only reducing uncertainty in AI, users and experts need to understand and manage this uncertainty, particularly in high-impact areas like medicine. Dr. Lewis describes it as an ethical obligation for AI to be transparent about its level of certainty, calling it a safeguard that can help determine when human oversight is needed. He adds that designing AI systems to recognize their own limitations may be more effective than focusing solely on perfect accuracy.
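The safeguard Dr. Lewis describes can be sketched as a simple routing rule: act on an answer only when the system's self-reported confidence clears a threshold, and escalate to a human otherwise. This is an illustrative assumption, not his implementation; the `Answer` type, the 0.8 threshold, and the example answers are all invented for demonstration.

```python
# Hypothetical sketch: route an AI answer to human review when the
# model's self-reported confidence is low. Threshold is an assumption.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route(answer: Answer, threshold: float = 0.8) -> str:
    """Return 'use' if confidence clears the threshold, else escalate."""
    if answer.confidence >= threshold:
        return "use"
    return "escalate_to_human"

print(route(Answer("The contract clause is enforceable.", 0.95)))  # use
print(route(Answer("The dosage should be 40 mg.", 0.55)))  # escalate_to_human
```

The point of the sketch is the design choice Dr. Lewis raises: the system does not need to be right all the time, only honest enough about its uncertainty that low-confidence cases reliably trigger oversight.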

Dr. Lewis states it is unrealistic to expect AI to be correct all the time. He notes that collaboration between AI experts and users is necessary to understand and address uncertainty in AI models, especially in specialized domains where risk factors vary.

On future developments, Dr. Lewis reports that Ontario Tech University is researching new AI system architectures called 'Reflective AI.' These systems are designed to better evaluate their own outputs and report confidence levels with greater accuracy. He describes these systems as using a reflective layer to check the AI's output against specific criteria before acting on the information.
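The reflective-layer idea described above might look roughly like the following sketch: a checking stage runs an AI output through explicit criteria before the result is acted on. This is a minimal illustration of the general pattern, not Ontario Tech's architecture; the criteria (`non_empty`, `cites_source`) and the report format are invented examples.

```python
# Illustrative sketch of a reflective layer: check a model's output
# against explicit criteria before acting on it. Criteria are invented.
from typing import Callable

Criterion = Callable[[str], bool]

def reflective_check(output: str, criteria: list[Criterion]) -> dict:
    """Run each criterion over the output and summarize pass/fail."""
    results = {c.__name__: c(output) for c in criteria}
    return {"output": output, "passed": all(results.values()), "checks": results}

def non_empty(text: str) -> bool:
    return bool(text.strip())

def cites_source(text: str) -> bool:
    return "source:" in text.lower()

report = reflective_check("Revenue rose 4%. Source: Q3 filing.", [non_empty, cites_source])
print(report["passed"])  # True
```

In a fuller system the criteria could themselves be learned checks or calibration tests; the structural point is that output and self-evaluation are separate stages, so a failing check can block action or trigger human review.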

Dr. Lewis also addresses accessibility and equity in AI, stating that many AI systems operate as 'black boxes' with unclear decision-making processes and potential for bias. He describes ongoing research at Ontario Tech in partnership with the Canadian National Institute for the Blind (CNIB) to improve accessibility in AI tools for people with disabilities. Dr. Lewis notes that current technology is not yet adequate for these uses.

He observes that public opinion about AI varies widely, from those who see it as revolutionary to others who view it as risky. Dr. Lewis remarks that it is necessary to acknowledge both the strengths and challenges of AI without expecting it to be flawless or entirely dangerous.
