Think Tank Calls for UK System to Track AI Misuse and Malfunctions

The UK needs a system for recording the misuse and malfunctions of artificial intelligence, or ministers risk remaining unaware of alarming incidents involving the technology, according to a report.

The Centre for Long-Term Resilience (CLTR), a think tank focused on government responses to unforeseen crises and extreme risks, recommended that the next government create a system for logging AI-related incidents in public services. Additionally, it suggested building a central hub for collating such episodes across the UK.

CLTR emphasized the importance of an incident reporting regime, similar to that operated by the Air Accidents Investigation Branch (AAIB), for managing the use of AI successfully. The report cited 10,000 AI “safety incidents” recorded by news outlets since 2014 and listed in a database compiled by the Organisation for Economic Co-operation and Development (OECD). The OECD defines harmful AI incidents as those causing physical, economic, reputational, or psychological harm.

Examples from the OECD’s AI safety incident monitor include a deepfake of Labour leader Keir Starmer purportedly being abusive to party staff, Google’s Gemini model generating historically inaccurate depictions of German WWII soldiers, incidents involving self-driving cars, and a chatbot encouraging a man to plan the assassination of the late queen.

“Incident reporting has played a transformative role in mitigating and managing risks in safety-critical industries such as aviation and medicine. But it’s largely missing from the regulatory landscape being developed for AI. This is leaving the UK government blind to the incidents that are emerging from AI’s use, inhibiting its ability to respond,” said Tommy Shaffer Shane, a policy manager at CLTR and the report’s author.

CLTR advised the UK government to follow the example of safety-critical industries like aviation and medicine by introducing a robust incident reporting regime. The think tank noted that many AI incidents might not be covered by UK watchdogs, as there is no regulator specifically focused on cutting-edge AI systems such as chatbots and image generators. Labour has pledged to introduce binding regulations for the most advanced AI companies.

Such a setup would give the government quick insight into how AI is malfunctioning and help it anticipate similar incidents. Incident reporting would also help coordinate responses to serious incidents, where speed is crucial, and surface early warning signs of large-scale harms to come.

Despite testing by the UK’s AI Safety Institute, some models may reveal harms only once fully released. Incident reporting would allow the government to monitor how well the regulatory setup addresses these risks.

CLTR warned that the Department for Science, Innovation and Technology (DSIT) risks lacking an up-to-date picture of AI misuse, such as disinformation campaigns, attempts to develop bioweapons, bias in AI systems, or misuse of AI in public services. An example is the Netherlands, where tax authorities caused financial distress to thousands of families by deploying an AI program in a misguided attempt to tackle benefits fraud.

“DSIT should prioritize ensuring that the UK government learns about such novel harms not through the news, but through proven processes of incident reporting,” said the report.

Funded largely by wealthy Estonian computer programmer Jaan Tallinn, CLTR recommended three immediate steps: creating a government system to report AI incidents in public services, asking UK regulators to identify gaps in AI incident reporting, and considering a pilot AI incident database to collect AI-related episodes from existing bodies like the AAIB, the Information Commissioner’s Office, and the medicines regulator MHRA.

The reporting system for AI use in public services could build on the existing algorithmic transparency reporting standard, which encourages departments and police authorities to disclose AI use.

In May, 10 countries including the UK, along with the EU, signed a statement on AI safety cooperation that included monitoring “AI harms and safety incidents.”

The report added that an incident reporting system would also support DSIT’s Central AI Risk Function (CAIRF), which assesses and reports on AI-associated risks.
