To create the political and economic framework conditions for Artificial Intelligence (AI), policymakers rely on advice from people in science and academia. Ricardo Chavarriaga, Head of CLAIRE Office Zurich and researcher at the ZHAW Centre for Artificial Intelligence (CAI), is driving the responsible use of AI with the pan-European CLAIRE network. He explains what politicians in Switzerland need to know about AI to make informed decisions.
Ricardo, why is it important that politicians understand AI?
AI plays a major role in society and offers enormous potential to address pressing challenges in health care, climate change, security and the global economy. Strong policies to encourage and oversee the research, development and deployment of AI systems are essential for any country.
Becoming a leader in AI will also entail leadership at the economic and geopolitical levels. Conversely, lagging behind in this area endangers sovereignty, as it makes a region strongly dependent on others for services that are crucial to the state.
What’s the current level of knowledge among politicians? In other words, how much do they already know about AI?
The knowledge politicians have about AI reflects that of the general population. Unfortunately, this information usually falls into two opposite extremes of what the technology is or can be: an overly positive stance that presents AI as the imminent solution to every problem, or a framing of the same technology as an imminent threat that will lead society into a dystopian future.
Where do they get the knowledge? Who explains it to them?
There is a growing number of initiatives aimed at providing information to politicians. Among them, we can highlight the role of transnational organisations like the Organisation for Economic Co-operation and Development (OECD), the Global Partnership on Artificial Intelligence (GPAI), or UNESCO. Of course, CLAIRE is another such organisation with a vast network and a lot of expertise.
How can researchers contribute to this process?
It is extremely important that researchers and developers get more involved in the process. CLAIRE, like other technical organisations, is convinced of the importance of being part of this dialogue and is in continuous interaction with European decision-makers. One example is our role in the recent establishment of the “AI, Data and Robotics Association” (Adra), an EU-supported partnership investing 1.3 billion euros to drive innovation, acceptance and uptake of these technologies, notably through public-private partnerships.
Does CLAIRE also advise politicians in Switzerland?
Yes, in October 2020 I had the opportunity to attend a session of the Security Commission of the National Council in Bern on the topic of security policy and the role of new technologies. Together with other experts, we presented the technical basis of different technologies and the challenges in their development and validation. The session allowed the members of the commission to learn more about these topics, raise their questions and concerns, and identify potential courses of action. As a follow-up to these discussions, in June 2021 the commission accepted a postulate calling for a report on autonomous weapons and the use of AI in military applications.
As Head of CLAIRE Office Zurich, Ricardo Chavarriaga wants to promote connections between the European AI research community and people in politics and industry. Besides his work at the ZHAW and CLAIRE, Ricardo Chavarriaga is a fellow at the Geneva Centre for Security Policy.
Which misperceptions do politicians have about AI?
Politicians, like the general public, tend to have a confused impression of the power of AI technologies. AI systems are often mistaken for systems that come close to exhibiting human-like intelligence in a wide variety of scenarios, while in reality they tend to perform very well only in a specific task and context.
Another major misperception among the public is the degree to which these systems have been validated. It is well known that many systems that showed impressive performance at early stages of research and innovation do not live up to the expectations they generated once they are tested in conditions closer to real-life use. Currently, when we hear about a new breakthrough in AI, it is hard to know how mature the technology is and how thorough the testing process has been. As a result, we tend to underestimate the time and resources needed for these systems to become products with real impact for their intended users.
What’s the biggest challenge in explaining AI to politicians?
One of the biggest challenges in explaining AI is the cacophony of mixed messages conveyed by the media and interested parties. Beyond the complexity of the technical aspects, it is important to note that there are many ways in which AI can be developed and deployed. The nuances of specific cases, self-serving opinions pushing a particular agenda and the inherent uncertainty about the impact of new technologies make it difficult to maintain an informed debate on the topic. Remarkably, these challenges arise not only in the dialogue with politicians but also with other stakeholders and even within the AI community itself.
What is needed to overcome these challenges?
It is crucial that decision-makers at governmental and organisational levels have access to independent, unbiased experts who can advise them. These experts should come from different backgrounds to allow for an inclusive analysis of the benefits and drawbacks of using AI in a given scenario. This also implies that AI experts should learn more about policymaking and diplomacy in order to communicate effectively and contribute to the process.
What is the role of these experts?
Probably the biggest misperception is the belief that AI experts have complete knowledge of how AI systems work and evolve once they interact with humans or other AI systems. Emerging technologies are by definition a changing domain characterised by uncertainty. A good starting point for explaining AI to politicians and the general public is therefore to acknowledge this uncertainty and to remind them that the role of experts is not to provide definitive answers, but to contribute the tools of science and technology to the societal effort of building the future we wish for.
Explaining and teaching AI at the ZHAW
For people outside the field, it is often difficult to understand how artificial intelligence works. At the ZHAW, there are approaches to explain what is “artificial” and “intelligent” about the technology. Find out more in the latest issue of “Impact”.