This week the UK Government’s AI Safety Summit will seek to answer some of the big questions around the use of artificial intelligence.
The two-day event, whose attendees will include billionaire entrepreneur Elon Musk, US Vice President Kamala Harris, China’s vice minister of science and technology Wu Zhaohui, and executives from the world's best-known AI companies, will consider how to manage the risks AI poses and help to shape the role it will play in society. Speaking ahead of the summit, two AI experts from Saïd Business School at the University of Oxford talked about the issues they hope to see covered by industry leaders and government officials.
Professor of Operations Management Matthias Holweg weighed in on the debate around the existential risks many believe AI could pose. He said: ‘The debate on AI regulation very often descends into pointing to existential risks, but those fears are misplaced, in my view. We are several key development stages away from AI becoming that powerful or going out of control. And we do not have any credible path (yet) as to how AI could ever become sentient.’
Matthias still deems regulation crucial, and insists there must be conformity across AI systems before they are launched to the public. He explains: ‘The clear and present danger, and why AI regulation is very important, is that these systems decide on access to essential services, like finance or education. If we don’t ensure AI systems conform prior to being launched, we risk excluding and/or exploiting certain parts of the population and, in the worst case, propagating existing biases into the future without anyone noticing.’
Warning against a parochial approach to regulation, Matthias predicted the rules will ultimately be decided by America, the EU and big tech firms. He added: ‘While the UK’s efforts to develop its own AI regulation are laudable, they miss the point that firms will seek to comply with one global standard. What the UK may or may not decide is, quite frankly, irrelevant to most AI operators.’
In research carried out with academic colleagues at the Oxford Internet Institute, Matthias developed a comprehensive AI failure database, which identifies how and why AI systems fail. The failures it records are chiefly privacy intrusions, followed by discriminatory predictions (bias); more recently, the problem of ‘hallucination’ has been added to the list, thanks to the proliferation of generative AI.
Senior Fellow in Management Practice Alex Connock outlined how the ownership and regulation of large language models (LLMs) will be fundamental, arguing that the UK must hold rights to the intellectual property of LLMs if it is to avoid becoming a ‘factory’ of content for other nations.
He said: ‘The UK entertainment industry has become largely a fabrication facility for Hollywood studios, which produce intellectual property (IP) in the UK but own it overseas. That is a microcosm of what could happen with large language models, the driving resource of generative AI. Without ownership, the UK will be merely a factory for the production of content and algorithms that are ultimately owned in California, Abu Dhabi or Beijing. The UK government needs to make sure that the IP of at least some of these LLMs is UK-owned, as well as based in the major UK AI companies.’
Alex also posed questions to the UK government over the regulation of facial recognition systems and deepfake technology, stating that Prime Minister Rishi Sunak ‘must ensure a system is created whereby all makers of generative image systems have to also provide a watermark key by which the outputs of those systems can be back-traced to their core maker.’
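To make the idea concrete, here is a minimal sketch of one way such a scheme could work, assuming a simplified model in which each maker registers a secret key with a regulator and attaches a keyed tag (an HMAC) to every generated image. The registry, function names and key values below are illustrative assumptions, not any proposed standard, and real provenance watermarks embed robust signals in the pixels themselves rather than relying on a detachable tag:

```python
import hmac
import hashlib

# Hypothetical registry mapping makers to their registered secret keys.
# In practice this would be held by a regulator, not hard-coded.
KEY_REGISTRY = {
    "maker_a": b"secret-key-registered-by-maker-a",
    "maker_b": b"secret-key-registered-by-maker-b",
}

def tag_output(image_bytes: bytes, maker: str) -> str:
    """Compute the provenance tag a maker attaches to a generated image."""
    return hmac.new(KEY_REGISTRY[maker], image_bytes, hashlib.sha256).hexdigest()

def trace_maker(image_bytes: bytes, tag: str) -> str | None:
    """Check the tag against every registered key to back-trace the maker."""
    for maker, key in KEY_REGISTRY.items():
        expected = hmac.new(key, image_bytes, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, tag):
            return maker
    return None  # tag matches no registered maker

# A maker tags its output at generation time...
image = b"...generated image bytes..."
tag = tag_output(image, "maker_a")

# ...and a regulator holding the registry can later trace it back.
assert trace_maker(image, tag) == "maker_a"
```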
Echoing Matthias’ concerns, Alex also called for greater algorithmic transparency. He said: ‘Can the UK put in place measures to ensure that algorithms used in, for instance, insurance and health screening are not “black boxes” but instead transparent, and therefore capable of being regulated?’