Buckle up: The AI genie is out of the bottle


We have largely been taken by surprise by the tremendous advances in the capability of the latest Large Language Models (LLMs) and Generative AI. Across domains from music and art to customer service and research, from health to education, AI is challenging our views of activities once the exclusive domain of us humans, with our unique gifts of creativity, empathy and judgement.

So how can we use AI, rather than become victim to it? How can we think our way into frameworks for regulation and appropriate ways of using AI without it “just happening” to us?

Ask any of the current generation of AI tools to whip up a biography of your favourite artist and you will get a succinct summary. Ask it to write a song in the style of that same artist, and you will get something impressive.

What has changed since the days of chess-playing robots is the way AI works and the size of the datasets used to train it. Generative AI is trained to ‘focus’, and it is trained on datasets of unimaginable size: literally trillions of examples.

Chess-playing AI adheres to very strict rules. It is impressive because it can plan many moves ahead, exploring a vast ‘solution space’ of future moves and weighting its options against a range of possible responses.
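
To make ‘exploring a solution space’ concrete, here is a minimal, hypothetical sketch of the kind of look-ahead search a rule-bound game player performs. It is not any real chess engine: `legal_moves`, `apply_move` and `evaluate` are placeholders standing in for the rules and position scoring of whatever game is being played.

```python
# Toy minimax look-ahead: illustrates searching a rule-bound 'solution space'.
# The game-specific parts (rules and scoring) are passed in as placeholder functions.

def minimax(position, depth, maximising, legal_moves, apply_move, evaluate):
    """Best achievable score for the side to move, searching `depth` plies ahead."""
    moves = legal_moves(position)
    if depth == 0 or not moves:        # search horizon reached, or no legal moves left
        return evaluate(position)      # static score of this position

    results = (
        minimax(apply_move(position, m), depth - 1, not maximising,
                legal_moves, apply_move, evaluate)
        for m in moves
    )
    return max(results) if maximising else min(results)
```

Even a modest search explodes combinatorially: with 30 legal moves per position, looking six plies ahead already means roughly 30^6 (about 729 million) positions to weigh, which is why real engines lean on aggressive pruning and powerful hardware.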

The AI systems that beat humans at the games of ‘Go’ and ‘Jeopardy’ still operate in constrained environments, albeit with substantially greater complexity. As long as the game has rules, a finite number of options exists for every move and counter-move.

Generative AI models use neural networks to identify patterns and structures in training data, and then generate new content.

Generative AI tools can also leverage different learning approaches, including unsupervised or semi-supervised learning, which allows the size of the training datasets to expand massively.
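
One way to see why unsupervised (or self-supervised) learning lets training datasets scale so dramatically is that no human labelling is needed: raw text supplies its own targets, with each next word acting as the ‘label’ for the words preceding it. A minimal illustrative sketch, not any particular model’s pipeline:

```python
# Turn raw, unlabelled text into (context, next-word) training pairs.
# The 'label' is simply the next word, so any text becomes training data for free.

def next_token_pairs(text: str, context_size: int = 4):
    tokens = text.split()                       # crude whitespace 'tokenisation'
    pairs = []
    for i in range(1, len(tokens)):
        context = tokens[max(0, i - context_size):i]
        pairs.append((context, tokens[i]))      # predict tokens[i] from its context
    return pairs

print(next_token_pairs("the cat sat on the mat"))
# [(['the'], 'cat'), (['the', 'cat'], 'sat'), (['the', 'cat', 'sat'], 'on'), ...]
```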

Any unsupervised training activity will, invariably, lead to some unexpected results. A supposedly factual response to your AI query may refer to ‘real world’ sources that simply do not exist. The algorithm has ‘filled in the gaps’.

Similarly, a request to generate an image from a verbal description may lead to something rather more ‘Salvador Dali’-like than you expected. This scaled-up version of the age-old adage of ‘garbage-in-garbage-out’ has a modern twist: ‘garbage-in-sometimes-hallucination-out’.

Nonetheless, the responses from the latest-generation AI tools are impressive, even if they need to be fact-checked.

Regulating use of AI – How should we think about this?

AI is different to other technologies: admittedly, some of the concerns raised about AI could just as readily have been applied to other technologies when they were first introduced.

If you replace ‘AI’ with ‘quantum’, ‘laser’, ‘computer’ or even ‘calculator’, some of the same concerns arise about appropriate use, safeguards, fairness and contestability. What is different is that AI allows systems, processes and decisions to happen much faster and on a much grander scale.

AI is an accelerant and an amplifier. In many cases, it also ‘adapts’, meaning what we design at the beginning is not how it operates over time. 

Before developing new rules, existing regulation and policy should be tested to see whether they stand up to the potential harms and concerns associated with acceleration, amplification or adaptation. If your AI also ‘generates’ or synthesises, then more stress-tests are needed, as ‘generation’ goes well beyond what you can expect from your desktop calculator.

AI is no longer explainable: Apart from the most trivial cases, the complexity of the neural networks (the number of layers and the number of weights), coupled with the incomprehensibly large training datasets, means there is little chance of describing how an output was derived, even if it were possible to unpick the contribution of every training element. Any explanation would be meaningless.
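
To give a rough sense of the scale involved (illustrative numbers only, not any particular model), even a modest transformer-style network carries hundreds of millions of weights, and the largest models run to hundreds of billions; tracing a single output back through them is not a human-readable exercise.

```python
# Back-of-the-envelope weight count for a modest transformer-style network.
# Illustrative numbers only; real architectures and sizes vary widely.

layers = 24                              # number of transformer blocks
d_model = 1024                           # hidden dimension
params_per_layer = 12 * d_model ** 2     # rough rule of thumb for attention + feed-forward weights
embedding = 50_000 * d_model             # vocabulary embedding table

total = layers * params_per_layer + embedding
print(f"~{total / 1e6:.0f} million weights")   # prints "~353 million weights"
```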

For any decision which matters, there must always be an empowered, capable, responsible human in the loop ultimately making that decision. That human-in-the-loop cannot just be a rubber-stamp extension of the AI-driven process.

Any regulation must not refer to the technology: The orders-of-magnitude difference between the pace at which technology moves and the pace at which regulation adapts means that the closer the regulation gets to the technology, the sooner it is out of date.

Regulation must stay principles-based and outcomes-focussed. It must remain focussed on preventing harms, requiring appropriate human-based judgement (even if AI-assisted), and providing for contestability and remediation.

Blanket bans will not work: Comprehensive bans on student use of generative AI have been announced by various Departments of Education around the world (including in Australia). The intention of these bans is to prevent students from unfairly using AI to generate responses to assignments or exams and then claiming the output as their own work.

Such bans are extremely unlikely to be effective, simply because those not covered by the ban gain a potential advantage (real or perceived) through access to powerful tools or networks.

The popularity of AI platforms also means that workarounds are likely to be actively explored, including using the platforms in environments outside the restrictions.

The bans arguably address symptoms rather than root causes. In the case of education, rethinking how learning is assessed is core to the challenge of appropriate use of generative AI.

We need to think long term: AI technology has been with us for a long time. It has suddenly been renewed, and we are looking at it with little understanding of the long-term consequences.

By analogy, electricity was the wonder of the 19th century. From an initial scientific curiosity, electricity has become embedded everywhere and has profoundly changed the world.

AI is likely to have as profound an impact as electricity. As AI becomes embedded in devices, tools and systems, it becomes invisible to us.

Our expectations of these devices, tools and systems are that they are ‘smarter’: better aligned to the tasks at hand; better able to interpret what we mean rather than what we ask for; and able to improve over time. We do not expect to be manipulated or harmed by the tools we use.

Regulation must provide the long-term oversight that allows us to remain vigilant to the consequences of AI for individuals, for society and for our environment.

An example – Use of AI in education and research 

Existing commercial solutions have shown that AI can be extremely useful for customised learning, acting as a personalised tutor that adapts to, and addresses, the individual needs of students.

Generative AI can go further to help students and researchers identify the most appropriate source material for assignments and essays, or to find unexpected connections in large datasets.

A problem arises if the use of AI extends to generating the report or assignment, and this is presented as the original work of the student or researcher. 

This challenges centuries-old ways of assessment, whether the assignment is an essay on the character Hamlet, or a scientific paper submitted as original research. 

If the goal is to assess learning and comprehension, different means are required, such as asking the student to argue the case for the major elements of any submitted assignment. AI may also help as part of the process by identifying how much of a submitted assignment was auto-generated, including asking the AI tools whether they generated it.

However, if the goal is scientific discovery, any verifiable new insight or relationship is still a discovery, irrespective of whether it came from dogged individual research or from a set of prompts put to an algorithm by a researcher. One of the challenges this creates is how to acknowledge or reward scientific discovery.

With access to vast amounts of data and very powerful algorithms, the skills required to make a breakthrough discovery may be very different to discovery based on years of fieldwork and careful analysis.

Rewarding research based on outcomes may need different incentive mechanisms for researchers progressing through their careers. 

NSW’s AI Assurance Framework – Version 1.0 

NSW developed an AI strategy and an AI Ethics Policy in 2020. It then developed, tested and mandated the use of an AI Assurance Framework.

The Framework is a self-assessment tool supported by an expert AI Review Committee (AIRC), which is tasked with reviewing AI projects with an estimated total cost of $5 million or more, as well as those for which certain risk thresholds have been identified during the Framework’s self-assessment process.

The Framework assists project teams using AI to analyse and document a project’s specific AI risks. It also helps teams to implement risk mitigation strategies and establish clear governance and accountability measures.
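
As a purely hypothetical illustration of how a self-assessment tool of this kind might flag a project for committee review, the sketch below uses invented risk categories and thresholds; it is not the actual NSW AI Assurance Framework, only the shape of the idea.

```python
# Hypothetical self-assessment sketch: invented categories and thresholds,
# not the actual NSW AI Assurance Framework.

REVIEW_COST_THRESHOLD = 5_000_000   # dollars, per the cost trigger described above
HIGH_RISK = 3                       # invented scale: 0 (negligible) to 4 (severe)

def needs_committee_review(estimated_cost: float, risk_scores: dict[str, int]) -> bool:
    """Flag a project if it is costly enough, or any self-assessed risk is high."""
    too_costly = estimated_cost >= REVIEW_COST_THRESHOLD
    too_risky = any(score >= HIGH_RISK for score in risk_scores.values())
    return too_costly or too_risky

print(needs_committee_review(
    estimated_cost=1_200_000,
    risk_scores={"fairness": 2, "privacy": 4, "accountability": 1},
))  # True: the self-assessed privacy risk crosses the invented threshold
```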

The role of standards 

As we address the technology, we must remain focussed on regulatory principles and on the technology’s characteristics (accelerant, amplifier and adaptive) rather than speaking directly to technological elements.

As we do so, the world of standards is somewhere we can look for assistance. As ISO, IEC, JTC1 and even IEEE make progress in standards addressing elements of AI use, system models, algorithmic bias and data quality, our regulatory frameworks can be reinforced by reference to the need to apply appropriate standards. 

Returning to the electricity analogy, generation, distribution and household supply are all highly regulated. The regulations are supported by international and Australian standards and allow us to safely deliver electricity to a myriad of daily uses.

AI is moving fast, so active contribution by Australian experts to developing standards will be critical to ensuring appropriate use of AI as capabilities develop.

So what next?

The first widely accessible LLM-based tools hit the scene in late 2022, emerging into our lives with a bang and with the accelerator planted to the floor.

We need to think seriously about how we will, and will not, use AI, knowingly or unknowingly, in every part of our lives. Good regulation and standards will help us.

And buckle up for when quantum supercharges AI.

Dr Ian Oppermann is the NSW Government’s Chief Data Scientist working within the Department of Customer Service. He is also an Industry Professor at the University of Technology Sydney (UTS). He heads the NSW government’s AI Review Committee and Smart Places Advisory Council. Dr Oppermann is a Fellow of the Institute of Engineers Australia, a Fellow of the IEEE, a Fellow of the Australian Academy of Technological Sciences and Engineering, a Fellow and Immediate Past President of the Australian Computer Society, a Fellow of the NSW Royal Society, and a graduate member of the Australian Institute of Company Directors.
