AI regulation: Where is the money, Lebowski?

Alexander Turkhanov, Head of Product at Zestic AI

Next week Bletchley Park will host the AI Safety Summit 2023, a global event bringing together international government representatives (led by the UK Prime Minister Rishi Sunak), leading AI companies, civil society groups and research experts. They will focus on the risks and opportunities of frontier AI: highly capable general-purpose models such as Large Language Models (LLMs), as well as some narrow AI with potentially dangerous capabilities.

The summit’s briefing documents describe two main categories of frontier AI risk: misuse risks, where bad actors use AI for harmful purposes, and loss-of-control risks, where AI systems behave in ways that are not aligned with our values and intentions.

The discussions will aim to define AI safety measures, or possible ways to prevent and mitigate harms from frontier AI. This could include, for example, creating ethical standards, ensuring transparency and accountability, fostering public awareness and education, collaborating on AI safety research, and developing new standards for governance.

WHAT’S MISSING HERE?

…The financial dimension of all this regulation. We believe it won’t take off properly unless we develop financial proposals and incentives to go with it.

Historically, we have had three ways to regulate technology: personal responsibility, corporate governance and insurance. Currently we are discussing the governance and personal accountability of ‘AI godfathers’ and talking about corporate ethics and sustainable AI, but what about insurance? Isn’t it the best way to go?

Personal responsibility is the most obvious way of controlling a new technology. Roman engineers allegedly slept under the bridges they built. Thomas Andrews, the chief naval architect of the shipyard that built RMS Titanic, went down with the ship, and so did Captain Cowper Phipps Coles with HMS Captain. Since then, we have come up with better ways of making developers responsible, and we now consider not only regulatory challenges, but also regulatory opportunities.

After the Titanic disaster came new regulation: first the SOLAS (Safety of Life at Sea) convention, followed decades later by the COLREGs (International Regulations for Preventing Collisions at Sea), as well as changes to insurance contracts. The same process supported the evolution of coal mines, trains, planes and cars. Financial incentives closely accompanied regulatory change. After all, business sits at the centre of value creation in society, and financial outcomes matter to it, so we cannot skip the economics.

AI risk must not be assessed only qualitatively; it must have a monetary value, and it cannot be expressed in fines alone. ESG has been on a similar journey, where many companies have moved on from simple scoring and ranking of their ESG risks to translating these into monetary impacts.

Quantifying risk, pricing it and transacting in it, including via a vibrant insurance market, accelerates the development of a new market and ultimately makes it safer.
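As a toy illustration of what ‘pricing’ AI risk could mean in practice, here is a minimal sketch of an expected-loss premium calculation. The figures, function name and loading factor are hypothetical assumptions for illustration, not actuarial guidance.

```python
# Toy sketch: pricing an AI-related risk as an insurance premium.
# All numbers are hypothetical illustrations, not actuarial guidance.

def annual_premium(p_incident: float, expected_loss: float, load: float = 0.3) -> float:
    """Expected annual loss, plus a loading for uncertainty, capital and expenses."""
    return p_incident * expected_loss * (1.0 + load)

# Hypothetical example: a deployed model with an estimated 2% yearly chance
# of causing a harmful incident costing around £500,000 on average.
premium = annual_premium(p_incident=0.02, expected_loss=500_000, load=0.3)
print(f"Indicative annual premium: £{premium:,.0f}")  # £13,000
```

Even a crude model like this forces the conversation from “is this AI risky?” to “how likely, how costly, and who carries the bill?”, which is exactly the shift a functioning insurance market would demand.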

This is how it could work in AI.

Government-led expert groups could define terms and propose interoperability requirements for businesses to exchange AI risk information; perhaps this data could reside with a dedicated agency. AI risk assessment models could be sourced from the research community via calls for papers and research grants. This way we could start drawing practical benefit from the databases of AI ethics issues and AI implementation consequences that we have built in recent years.
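To make ‘interoperability’ concrete, here is a hypothetical sketch of what a shared AI risk record might look like. The field names and schema are assumptions invented for illustration, not a proposed standard.

```python
# Hypothetical sketch of an interoperable AI risk record that businesses
# could exchange with insurers or a dedicated agency. The schema is an
# illustrative assumption, not a proposed standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIRiskRecord:
    system_id: str            # identifier of the deployed AI system
    risk_category: str        # e.g. "misuse" or "loss of control"
    description: str          # plain-language summary of the hazard
    probability: float        # estimated yearly likelihood of an incident
    expected_loss_gbp: float  # estimated monetary impact per incident
    mitigations: list[str]    # controls currently in place

record = AIRiskRecord(
    system_id="support-chatbot-v3",
    risk_category="misuse",
    description="Prompt injection leading to disclosure of customer data",
    probability=0.05,
    expected_loss_gbp=120_000,
    mitigations=["input filtering", "output redaction", "audit logging"],
)
print(json.dumps(asdict(record), indent=2))  # shareable JSON representation
```

Once records like this are exchanged in a common format, regulators, insurers and researchers can aggregate them, benchmark risk models against real incidents, and price cover accordingly.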

We should steer the public discussion towards more practical questions: AI is here to stay, and it will cause accidents, kill people and destroy property. The same happened when cars, planes and mines appeared, but that hasn’t stopped us from using them.

The same evolution will happen with AI, and the establishment of a financial market transacting in quantified AI risk will enable the sector to mature and become safer.

The world needs practical solutions.

At Zestic AI, we design and implement ‘ethical AI’ solutions: privacy-first, secure, tailored to your needs and SAFE. All without breaking the bank, as everything we do has a solid business case behind it. Please get in touch with us to find out more.
