08 February 2023 | Juliette de la Rie

3 questions to Jord Goudsmit on QDNL white paper (3/3)

Lessons from AI – Risk Management

Following the first and second white papers on Stakeholder Engagement and Communications from the ‘Lessons from AI’ series, the third and final paper on Risk Management is now available for download.

White paper 1 ‘Stakeholder Engagement’

White paper 2 ‘Communications’

White paper 3 ‘Risk Management’

To mark this final paper, we asked contributor Jord Goudsmit three questions. Jord is a responsible tech consultant who helps organisations navigate the rapidly changing digital landscape. He specialises in digital ethics and responsible innovation.

This paper was written by Jord Goudsmit, Luca Possati, Lauren Challis and Ulrich Mans.

Why do you think it is useful to draw inspiration from the last 10+ years of AI risk management practice when looking at quantum technology?

Even though every emerging technology develops in its own unique and distinctive way, the problems they face are often similar. For instance, there has been a tendency to mystify emerging technologies in the public debate, leading to all kinds of inflated narratives about their capabilities (e.g., that AI robots will dominate humankind).

We have seen this with AI, and given the even more complex nature of quantum, we can anticipate the same happening with quantum technology. This also means we can take preventive measures to avoid that situation, for instance by emphasising the need for clear communication to the wider public, providing adequate information on how quantum technology works and explaining its relevance. Instead of reinventing the wheel every time a new technology is developed, we ought to look at earlier lessons learned, even when they seem rather obvious.

Which of the takeaways was a surprise to you?

One takeaway that stood out to me is the shift in thinking about human-machine interaction. Over the years, the debate about humans and technology has moved from a techno-centric perspective to a more human-centric one. More recently, however, the idea of a collective perspective has emerged, which argues for balanced trust in both technology and humans. In this way, the collective perspective maximises benefit by drawing on the unique strengths of both technology (e.g., identifying complex data patterns) and humans (e.g., the capacity for critical reflection). Even though this perspective is not necessarily 'new' or groundbreaking, it is interesting to see that the collective approach we advocate in our work is part of a bigger shift in human-machine interaction.

The AI Act is now a reality. As we start conversations about future quantum applications, what would you hope the quantum community learns from the work done at EU level on risk management and accountability?

Concerning the regulation of future quantum applications, one of the most important lessons from the AI Act is the use of policy prototyping. This is a method of experimental governance in which regulation is first tested under real-world circumstances and adapted based on the newly acquired knowledge before it is formally enacted. This 'feedback loop' makes it possible to create better-fitting policies and regulations for complex issues such as emerging technologies.

Applying this methodology, we have seen that the AI Act does not always accurately depict the AI landscape, for example in its proposed taxonomy of actors. Testing policy in practice is especially interesting for the quantum community because the technology is still in its early development phase, which allows more time to find and test suitable policies.

Download the full white paper here:
