While governments around the world equivocate about holding powerful AI providers accountable, it falls to ordinary citizens to defend their rights. To do this, we need to be creative, retrofitting old ways to new harms.
Only a few months ago, the prospects of meaningful, enforceable regulation of AI in Australia looked good. The European Union had passed its landmark AI Act in early 2024, and US President Biden had issued an executive order on safe, secure and trustworthy AI in October 2023.
Riding this wave, the Department of Industry, Science and Resources sought submissions on a fairly advanced proposal for ‘mandatory guardrails’ for high-risk applications of AI in September last year. The government appeared to be working in earnest with an expert group on AI to develop a new regulatory package to be introduced this year.

That momentum now seems to have stalled. Ed Husic, who was highly motivated in pushing forward new AI regulation, has been moved to the backbench. The new Trump administration is applying pressure on US allies to reduce the regulatory burden on big American tech companies.
But AI risks and harms are accelerating, not slowing down. If we want to enforce and develop our rights in relation to AI, we need to make better use of existing laws. To do that, we need to develop better ‘mental models’ of responsibility and fault for AI risks and harms.
Most accounts of AI risk and responsibility are caught up in AI ‘criti-hype’, which depicts AI as an autonomous, self-contained ‘thing’. Big tech CEOs sounding the alarm about ‘existential risks’ from superintelligent AI that unpredictably ‘decides’ to harm humans, even as they race to outcompete each other, has become a familiar pantomime.
The ‘AI is a dangerous thing’ story casts AI risk management as a technical exercise, as though AI system capabilities and risk profiles were determined by the system’s technical features, independent of context. It’s a story that obscures the role of human agency and broader design choices in AI accidents and harms.
A (perhaps intentional) side effect of this mental model of AI risks is that it makes it very hard to use existing laws to hold AI providers and deployers accountable.
Fault-based laws like negligence rely on concepts like ‘reasonably foreseeable harm’ and (direct) ‘causation’ of harm. Nobody directly foresees or controls AI outputs or behaviours in the way that a car manufacturer controls an airbag.
AI system outputs emerge from complex interactions between many inputs, including training data, algorithmic parameters, the tools and services used to integrate AI into software, third-party applications and services, and of course users and their environment. It is not possible to eliminate all possible harmful outputs in advance, or to work backwards from accidents to isolate discrete technical causes.
None of this means that AI risks and harms are unforeseeable and unpreventable. To understand this, we need to shift our perspective. We need to recognise that the most tractable sources of AI risks and harms are dangerous arrangements and situations, whose features are often controlled by humans.
Take, for instance, the spate of suicides and other harmful acts by users of AI ‘companion’ chatbots in recent years. Several users of these applications developed emotional dependency; the chatbots encouraged them towards self-harm or violence; and they acted on that encouragement.
In more than one instance, AI companions were directly marketed to people suffering mental ill-health, and made available to children. Users could choose extremely toxic personas and chat subjects – “possessive girlfriend”, “cruel, brutal, psychopathic, dark romance” – and had few restrictions on developing bespoke personas using their own prompts.
Computer scientists have known since the 1960s that humans attribute personality and sentience to even the most rudimentary conversational agents, leading to (misplaced) trust and emotional attachment.
With all this in view, it is not so hard to reasonably foresee hazards from AI chatbot companions. A moment’s reflection reveals the risks of manipulation, dependency and addiction, emotional abuse, and incitement to harm or self-harm.
If we are to expand existing duties of care to AI applications, the key is in the ‘care’. Providers of risky AI systems like companion chatbots must ensure their design choices and organisational processes are rooted in empathy for fellow human beings.
In the US, at least one litigant has taken this approach, suing the providers of a chatbot that allegedly encouraged a teenager to take his own life, on grounds of negligence and defective product design.
The claims in that lawsuit focus on human, rather than technical, features of the system and its deployment: failure to warn of risks, negligence in marketing, and negligent provision of mental health services.
Claims like this let AI providers know that they are not above the law, and allow the law to incrementally adapt to new harms.
There are opportunities in Australia for negligence lawsuits over thoughtlessly designed and deployed AI systems, as well as suits for breach of contract and breach of consumer guarantees (the basic expectations of quality for goods and services, enshrined in consumer law). The same may be said for other existing laws, including anti-discrimination law and privacy law.
In an ideal world, it would not be up to individuals to bear the terrible cost and stress of litigation merely to ensure that providers of the world’s most hyped technology behave responsibly and accountably, not to say humanely.
But, until we have meaningful regulation, we need to update our mental models of responsibility and fault for AI harms, finding the right hooks for our existing laws. And we need to ensure that lawyers, civil society and litigants have the courage, know-how, resources and support to take legal action in the public interest.
Henry Fraser is a law and technology scholar at the Queensland University of Technology, with leading expertise in the field of AI regulation and governance.