Ontario Tech Researcher Says Generative AI Does Not Communicate Uncertainty

Ontario Tech University’s Dr. Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence, stated that current generative artificial intelligence (AI) systems are not able to reliably indicate when they lack knowledge or certainty. In an interview published on June 17 on Ontario Tech University’s website, Dr. Lewis explained that while AI is used in areas like finance, law, and health care, these systems often do not acknowledge their own limitations.

Dr. Lewis told Ontario Tech University that AI systems, including large language models like those behind ChatGPT, may provide confident responses that can be incorrect. He noted that research shows these systems often cannot accurately judge their own ability and can be overconfident, which means they frequently fail to communicate uncertainty when they do not know the answer.

“With current AI systems, we really can’t [trust them to acknowledge when they don’t know something],” Dr. Lewis said. “As humans, we inherently know we are not perfect, and we expect that level of humility and acknowledgement from one another. So, we might also expect AI to have this same kind of characteristic, to be considered trustworthy.”

Dr. Lewis explained that having AI systems communicate uncertainty is not only a technical feature but also an ethical safeguard. He stated that if an AI model can acknowledge its limitations and indicate when it is unsure—for example, by stating, “I’m not sure”—this may be more valuable than confidently producing an answer that could be wrong. Dr. Lewis said that AI systems should be designed to recommend human oversight when needed.

The interview also covered Dr. Lewis’s research on “Reflective AI,” an approach that places a reflective layer around an AI system. According to Dr. Lewis, this layer evaluates the output of large language models against various criteria, including whether the system’s reported confidence is justified and if its actions are consistent with social expectations.
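The interview does not describe how such a reflective layer is implemented, but the idea of a wrapper that checks a model's output against criteria, including whether its reported confidence is justified, can be illustrated with a small sketch. Everything here is hypothetical: the `ModelOutput` structure, the confidence field, and the specific checks are illustrative assumptions, not Dr. Lewis's actual system.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class ModelOutput:
    text: str
    reported_confidence: float  # 0.0-1.0, as claimed by the model (hypothetical field)

# A "check" inspects an output and returns a concern string, or None if satisfied.
Check = Callable[[ModelOutput], Optional[str]]

def confidence_is_plausible(output: ModelOutput) -> Optional[str]:
    # Illustrative heuristic: flag near-certain claims for extra scrutiny.
    if output.reported_confidence > 0.95:
        return "near-certain confidence reported; verify justification"
    return None

def hedges_when_uncertain(output: ModelOutput) -> Optional[str]:
    # If confidence is low, the answer text should communicate that uncertainty.
    hedged = any(p in output.text.lower() for p in ("i'm not sure", "uncertain", "may be"))
    if output.reported_confidence < 0.5 and not hedged:
        return "low confidence not communicated in the answer text"
    return None

def reflect(output: ModelOutput, checks: List[Check]) -> List[str]:
    """Run each check; any returned concerns signal a need for human oversight."""
    return [c for check in checks if (c := check(output)) is not None]

# A low-confidence answer phrased with certainty gets flagged for review.
concerns = reflect(
    ModelOutput(text="The answer is definitely 42.", reported_confidence=0.3),
    [confidence_is_plausible, hedges_when_uncertain],
)
print(concerns)
```

In this sketch, a non-empty list of concerns is the signal to route the output to a human rather than present it as authoritative, mirroring the article's point that "I'm not sure" can be more valuable than a confident answer.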

Discussing accessibility, Dr. Lewis said many AI systems operate as “black boxes,” providing decisions without revealing how outcomes are determined. He detailed work with the Canadian National Institute for the Blind (CNIB) and other organizations to develop explainable AI tools that are accessible to people with disabilities. Dr. Lewis stated that current technology does not yet meet accessibility requirements.

Dr. Lewis also commented that public opinion on AI remains divided, with some people regarding it as revolutionary and others expressing strong concerns. He noted that both views frequently rely on assumptions about the power and inevitability of AI. Dr. Lewis recommended adopting a “rational skeptic” approach, which means using AI where it brings clear benefits while acknowledging its challenges and limitations.

Dr. Peter Lewis is an Associate Professor in the Faculty of Business and Information Technology at Ontario Tech University.