The open letter calling for an immediate six-month pause in the AI development arms race, signed by more than 1,600 tech luminaries, researchers and responsible technology advocates under the umbrella of the Future of Life Institute, is stunning on its face.
Self-reflection and caution have never been defining qualities of technology sector leaders. Outside of nuclear technology, it's hard to identify another time when so many have publicly rallied to slow the pace of technology development, much less call for government regulation and intervention.
"Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources," the letter states. "Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.
"Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than (OpenAI's) GPT-4. This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."
Signatories to the letter are growing daily and include some of the biggest tech celebrities: Tesla and Twitter owner Elon Musk, Apple co-founder Steve Wozniak, and several engineers at Microsoft, Google, Amazon, Meta and Alphabet-owned DeepMind.
Mr Musk is one of the biggest donors to the Future of Life Institute, which is led by the prominent MIT professor and AI researcher Max Tegmark. Many Australian researchers from several universities are also co-signatories, along with Adrian Turner, former CEO of CSIRO data science subsidiary Data61 and now the lead of the Minderoo Foundation Wildfire and Disaster Resilience Program.
Conspicuously absent from the co-signatory list is Toby Walsh, Australia's preeminent AI expert and chief scientist of the new AI Institute at the University of NSW (UNSW.ai).
Mr Walsh, who has authored several books on AI, has played a leading role at the United Nations in a global campaign to ban lethal autonomous weapons, or "killer robots". He is not just aligned with the myriad stated concerns about AI development and application, but has been out in front of them.
But he is unambiguously not a supporter of the great AI development pause.
"It won't work. It is the wrong action," Mr Walsh says. "We need to focus on careful deployment of AI, not stop research into it. (The Open Letter signatories) have the wrong argument: it's not that AI is too smart but too stupid that is the problem."
So what's going on here?
There is little argument from Mr Walsh, or any other AI expert, that the technology represents a massive shift for humanity, and that its many and complex intended and unintended risks and consequences need careful study.
The open letter comes at a time when AI systems and large language models like OpenAI's ChatGPT have made impressive leaps. The popular chatbot, which launched publicly last November and reached an estimated 100 million users within two months, scores highly on academic tests and delights and shocks with its capacity to write software code and answer complex questions with human-like sophistication.
But it also makes plenty of mistakes, some trivial, even humorous, and some dangerous. It often presents incorrect information on any number of subjects and reproduces ingrained social biases. Its confidently stated falsehoods are known as "AI hallucinations".
The popularity of ChatGPT (built on an enhanced version of GPT-3) pushed competitors into rushing the launch of their own AI. Microsoft, which has invested US$10 billion in OpenAI, is using the technology in its Bing search engine, with very mixed results. Google, which developed some of the foundational research behind ChatGPT and has created its own large language model, LaMDA, rushed the debut of its ChatGPT competitors, Bard and PaLM, also with mixed results.
Like Elon Musk, Future of Life Institute head Max Tegmark is genuinely alarmed by the pace of AI development and its existential threat to humanity. "It is unfortunate to frame this as an arms race," Mr Tegmark said. "It is more of a suicide race. It doesn't matter who is going to get there first. It just means that humanity as a whole could lose control of its own destiny."
Mr Walsh acknowledges that the financial profit motives driving the massive acceleration in AI development, and its premature launch into the wild at scale, are cause for concern. But he says the correct path to responsible improvement in AI (how it is designed and applied for education, science and business, with a set of shared safety rules that can be audited and overseen) is for all players to be active, open and transparent at every step of its development and use. In his view, AI systems are best trained not within the confines of a lab, but through continual use by large numbers of people in real life.
Mr Walsh also questions the "six-month" pause period. "Why six months? What's that going to do?" he says.
Calls for the pause naturally clash with the appetites of startup founders and their venture capital funders, who see a green-field opportunity in the "generative AI" boom. And unsurprisingly, OpenAI chief executive Sam Altman is not a signatory to the letter, although there are reports that his name initially appeared on it before being removed.
"In some sense, this is preaching to the choir," Mr Altman said in response to the letter's publication. "We have, I think, been talking about these issues the loudest, with the most intensity, for the longest."
Other critics of the Pause AI movement question its logic and its efficacy, and warn that other countries, specifically China, will not be pausing AI development. Still others are rightly suspicious of wrong-headed and anticompetitive intervention by governments, which have historically demonstrated little understanding of emerging technologies.
Even Mr Musk tweeted that he doubts developers and startups will embrace the pause. "(Developers) will not heed this warning, but at least it was said."
Whichever way it goes, and whatever the myriad motives and agendas of AI developers, entrepreneurs, policy makers and alarmists, it is clear that a new conversation about this extraordinary technology has started in earnest. The shock is that this conversation is coming from inside the house.
Do you know more? Contact James Riley via Email.