#LINO24's final Panel Discussion: Brian P. Schmidt, Jaryd Ricardo Christie, Donna Strickland, David J. Gross, and moderator Sibylle Anderl
“Scientists from all fields can be part of the solution by
engaging with the public, and not just the public that
already loves science.” For the general public, the more
pressing short-term challenge will be dealing with the
deluge of new content that will hit, and in some ways is already hitting, society.
“We’re going to see a proliferation of generated false
material in the next few years, to the point where you
won’t be able to tell if something is real or false unless
there’s a digital accountability […] of what person or organization is behind it,” said Schmidt. Any path to maintaining trust in AI and science should have two components: one where scientists continue to do a good job
(whether using AI or not), and one where scientists engage with the public.
A Next Gen Session on “Artificial Intelligence in Physics” gave young researchers the opportunity to address
the first of these components. There was no shortage of AI applications in physics, ranging from quantum
machine learning to medical physics and air flow prediction. Young Scientist
Anna Dawid-Łękowska works in a field of AI connected
to trust: interpretable AI. Unlike “black box” models, interpretable AI provides clear insights into how certain
results are achieved. The researcher presented work that
aims to create a robust, automated system that can identify and optimize laser cooling schemes for various atoms
and molecules. Different atoms and molecules have
unique electronic structures, making it challenging to
develop a one-size-fits-all approach to laser cooling. Each
species may require a tailored cooling scheme to achieve
optimal results, and this is where machine learning
comes in (see p. 69).
Looking ahead, the integration of AI into scientific research holds great promise, yet it also demands careful
consideration and proactive measures to maintain trust.
The first step is to apply scientific standards to AI and to
ensure that it is used reliably.
The second step is arguably much more pressing –
maintaining public trust. This is a monumental challenge, the panel agreed, but also one that brings opportunities. The panel ended on a note of cautious optimism,
with Schmidt reiterating the importance
of accountability. “No matter what happens, we as
humans are accountable for it, and we should accept
that. If you start trusting a machine for accountability,
that is the end.”