The human race may only have two years left to tame advanced artificial intelligence systems before they become too powerful to control, Rishi Sunak’s chief adviser on the technology has warned.
Matt Clifford said he was kept awake at night by the admission by the world’s top AI experts that they “don’t understand exactly” how these systems “exhibit the behaviours that they do” – a situation he agreed was terrifying.
Mr Clifford, who is the Prime Minister’s AI Task Force adviser, said he agreed with the statement by 350 experts last week that AI could pose an existential risk to humanity.
He said the world was facing a moment similar to the early stages of the Covid pandemic, when some people were dismissing fears over the potential effect it could have.
Mr Clifford told TalkTV’s First Edition with Tom Newton Dunn: “I think one way to think about this is – imagine the January 2020 moment in Covid.
“You know, it’s sort of very tempting to say, ‘Oh, you know, the number of cases isn’t going up that much’. And that’s because we’re not used to thinking about these exponentials. I think what the signers of the letter are saying is that we’re on an exponential: these systems are getting more and more capable at an ever-increasing rate.
“And if we don’t start to think now about how to regulate, how to think about safety, then in two years’ time, we’ll be finding that we have systems that are very powerful indeed.”
Asked if this could be the moment when computers could surpass humans in intelligence, Mr Clifford said: “The truth is, no one knows. There are a very broad range of predictions among AI experts. I think two years will be at the very most sort of bullish end of the spectrum.”
Mr Sunak added his voice to warnings about the lack of proper regulation of AI last month when he said at the G7 summit in Japan that there needed to be “guardrails” in place to keep track of the rapidly evolving technology.
He has met AI experts in Downing Street and is expected to raise the issue with President Joe Biden in Washington this week. Mr Sunak is understood to be considering pushing for an AI version of the International Atomic Energy Agency, which inspects nuclear programmes to guard against weapons development.
Sam Altman, the chief executive of OpenAI, which operates ChatGPT, has also called for tougher regulation of the sector, and has said the technology should be handled in the same way as nuclear material.
Mr Clifford, who also chairs the Government’s Advanced Research and Invention Agency (Aria), agreed that a new global regulator needed to be put in place.
He added: “It’s certainly true that if we try and create artificial intelligence that is more intelligent than humans and we don’t know how to control it, then that’s going to create a potential for all sorts of risks now and in the future. So I think there’s lots of different scenarios to worry about, but I certainly think it’s right that it should be very high on the policymakers’ agendas.”
Asked what the “tipping point” might be, Mr Clifford said: “I think there’s lots of different types of risks with AI and often in the industry we talk about near-term and long-term risks and the near-term risks are actually pretty scary.
“You can use AI today to create new recipes for bioweapons or to launch large-scale cyber attacks, you know, these are bad things. The kind of existential risk that I think the letter writers were talking about is exactly as you said, they’re talking about what happens once we effectively create a new species, you know an intelligence that is greater than humans.”
He said this scenario was “certainly not inevitable”, but added: “However, the reason that people are starting to get worried, and the reason that even the people making these systems, the people that signed the letter, are worried, is that the rate of progress that we’ve seen over the last two or three years has been pretty striking.”
He added: “If we go back to things like the bioweapons or the cyber, you can have really very dangerous threats to humans that could kill many humans, not all humans, simply from where we’d expect models to be in two years’ time.
“I think that’s really the thing to focus on now: how do we make sure that we know how to control these models, because right now we don’t, and how do we have some sort of path to regulate them on a global scale, because it’s not enough, I think, to regulate them nationally.”
Asked what one thing about AI keeps him awake at night, Mr Clifford said: “The fact that the people who are building the most capable systems freely admit that they don’t understand exactly how they exhibit the behaviours that they do.”
Mr Clifford insisted however that there were “obvious benefits of AI if it goes right” and that the world could end up with “very powerful systems that are safe and robust” that could improve people’s lives, cure disease, make the economy more productive and help the planet reach net zero.