Artificial Intelligence and Physics
How Do We Preserve Trust in Science in the Age of AI?
Recent advances in artificial intelligence (AI) are rapidly finding application in scientific
research, not least in physics and related fields. While these advances hold great promise,
they also raise serious questions. Two #LINO24 sessions discussed possible answers.
The final Panel Discussion on Mainau Island was dedicated to trust and AI. It was therefore highly appropriate, if arguably self-referential, that moderator Sibylle Anderl, Lindau Alumna 2010 and science journalist at DIE ZEIT, Germany, started by asking the AI chatbot ChatGPT to briefly introduce the participants.
When presenting Nobel Laureate David J. Gross from
the University of California, Santa Barbara, United States
of America, ChatGPT accurately summarized his field
of work and major achievements but simply made up a
statement about Gross’ opinion on AI. AI hallucination
(the process by which an AI generates incorrect or nonsensical information that appears plausible or coherent)
is a major problem not only with ChatGPT but with other
generative models as well.
“I think AI models are inherently hard to trust because it’s hard for people to understand what’s going on,”
commented Jaryd Ricardo Christie, a medical physicist
at the University of Western Ontario, Canada, who was
also on the panel. “These models are kind of a black box.
I think what we should do as AI scientists is always be
sceptical of these models.” Many generative models are
essentially plausibility generators, but they have no real
understanding of what they are generating. “AI models,
especially large language models (LLMs), are trained to
come up with things that sound good, but they don’t
know if it’s true or not,” Gross said. He continued that it
would be great if AI could be trained to follow the scientific method, but until that happens, it is up to us humans
to be sceptical of AI and to ensure that what it turns out, or
“outputs” in computing jargon, is factual and reliable.
Of course, members of the audience are very familiar
with the scientific method and use it in day-to-day life as
well as in their professional capacities. But for the general
public, matters are different. It’s one thing to ensure trust
in AI in research; it’s another to ensure it for society.
“I think LLMs are very useful, but we also have to
worry about people using this assistance as a crutch
and never learning anything,” said fellow panelist
and Nobel Laureate Brian P. Schmidt of the Australian
National University. He argued that a sense of accountability is crucial, and that we as a society are still defining
who is accountable for what.
Nobel Laureate Donna Strickland, University of Waterloo, Canada, echoed these ideas: “I think we have a
growing distrust of science, especially in North America.