The Hunger Games created by NDIS algorithms


Marie Johnson
Contributor

This fifth article in the Defending the NDIS series looks at the harm caused by algorithms, and at the perverted power politics around them that is driving political debate.

The question is, can we continue to ignore what’s happening? The first four articles set the stage for this impossible question.

The first article in this Defending the NDIS series dealt with the complexity of the system. The second article exposed the ‘word salad’ of assistive technology. The third article pulled apart the actuarial model that has helped destroy the scheme. And the fourth article was an insider’s look at what happened when the government took a wrecking ball to the National Disability Insurance Scheme’s operating model.

Crushing systemic complexity, actuarial fiction and omissions, and a wrecked operating model have landed us at the altar of algorithms. By any measure, this is a case study of failed digital government.

And if you think this just affects the folks on the NDIS, you’re in for a shock. Do you receive the childcare subsidy? Keep reading.

Marie Johnson: The fifth in a series of articles on the troubled NDIS

Last year, following a bitter political showdown with the state and territory governments and the entire disability and health community, the federal government all too happily proclaimed that the Independent Assessment model was ‘dead’.

While the distraction of the model name game played out, the algorithms were unleashed.

Under intense questioning at the Senate inquiry into Independent Assessments, the National Disability Insurance Agency gave up some of the architecture of the roboplanning algorithms built on 400 personas.

Given the population at risk, the inquiry worryingly revealed a lack of independent oversight and ethics in the algorithm design.

The determinants of the algorithm personas had not been subject to any independent external design and ethics scrutiny. Almost universally, health professionals expressed serious concern regarding what many considered to be a significant risk of harm.
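To make the structural concern concrete, here is a minimal, purely hypothetical sketch (in Python) of how a persona-based ‘reference budget’ algorithm of the kind described at the inquiry might operate. The persona attributes, categories and dollar figures below are invented for illustration only and do not come from the NDIA; the point is that the draft budget flows from whichever pre-built persona a participant is mapped to, not from the individual evidence in their file.

```python
# Hypothetical illustration only: persona attributes, names and dollar
# figures are invented. The sketch shows the structural concern: the
# budget is driven by the persona a participant is matched to, not by
# the evidence in their file.

from dataclasses import dataclass


@dataclass
class Persona:
    persona_id: int
    disability_group: str
    age_band: str
    support_band: str
    reference_budget: float  # annual plan budget attached to this persona


# Illustrative subset of a persona table (the inquiry heard ~400 were built).
PERSONAS = [
    Persona(1, "intellectual", "18-24", "high-support", 95_000.0),
    Persona(2, "intellectual", "18-24", "moderate-support", 42_000.0),
    Persona(3, "neurological", "0-14", "high-support", 120_000.0),
]


def match_persona(disability_group: str, age_band: str, support_band: str) -> Persona:
    """Map a participant to the first persona whose attributes match."""
    for persona in PERSONAS:
        if (persona.disability_group == disability_group
                and persona.age_band == age_band
                and persona.support_band == support_band):
            return persona
    raise LookupError("no matching persona")


def draft_plan_budget(disability_group: str, age_band: str, support_band: str) -> float:
    # Note what is absent here: no clinician reports, no carer evidence,
    # no review of current supports. The output is simply the reference
    # budget of whichever persona the inputs happen to match.
    return match_persona(disability_group, age_band, support_band).reference_budget


if __name__ == "__main__":
    print(draft_plan_budget("intellectual", "18-24", "moderate-support"))  # 42000.0
```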

We are now seeing the realisation of the horrific human cost of these powerful algorithms in operation in automated planning processes and plan reviews.

There is no pretence here: automated roboplans are now the planner’s toolkit, enabling automated plan processing at a scale not previously possible.

What is happening are widespread, dangerous and massive cuts to plans without warning. Without explanation. Without evidence being read or considered. ‘No contact’ processing for the surging number of plan review requests that this automated budget harvesting is driving.

The human is utterly out of the loop. This is a co-design wasteland.

Many participants report being caught in endless cycles of reviews: people’s needs judged by the algorithm as not ‘value-for-money’, even as they approach the end of their lives.

And here is the human face of this outrageous suffering machine.

Consider the child with a rare, incurable and terminal genetic disorder requiring 24×7 support, whose NDIS funding was cut by 40 per cent without warning. The child is bedridden and unable to move due to his degenerative condition. The child’s mother described the cuts as being at ‘dangerous levels’.

The young man with severe intellectual disabilities, needing help to eat, get dressed and use the bathroom, had his NDIS funding slashed by tens of thousands of dollars as the agency determined the funding was not ‘value-for-money’. His mother quit her job to provide 24×7 care.

And the young man with autism, cerebral palsy and epilepsy, who uses a wheelchair and needs constant supervision, had his funding unexpectedly cut in half. “His funding was cut to the point that none of his daily needs were going to be met and he was not going to be safe,” his mother said.

This has also been my family’s experience.

These are not one-off instances – ridiculed as ‘minuscule’ by Minister Linda Reynolds – but systemic algorithmic patterns preventing and shutting off access to services.

And the flawed algorithms not only affect individuals’ access to necessary supports, but insidiously undermine access to justice more broadly.

It is no coincidence that roboplanning processes have been underway at the same time that appeals to the Administrative Appeals Tribunal (AAT) about NDIS plans have skyrocketed by a staggering 300 per cent over the past nine months alone.

Naomi Anderson, principal solicitor at Villamanta Disability Rights Legal Service in Geelong, said calls to the practice about “significant and unexpected cuts” had ramped up over the last six to eight months.

“The AAT can’t cope with the influx, the NDIA’s lawyers can’t cope with the influx, and the advocates can’t, so everything is getting slower and slower,” she said.

And during that same nine-month period, the NDIA spent a whopping $31 million on AAT litigation fighting participants.

It would appear that there is no upper limit on the cost of litigation to fight participants, or the unprecedented rate and scale of appeals generated by robo processes.

And it is an utterly perverse hunger game where the government puts an extra $100 million into disability advocacy services to ‘better assist’ people going to the AAT – to fight the government.

Regardless of funding, advocacy capacity will continue to collapse under the strain of the volume generated by the algorithms, fighting an NDIS that is itself represented by lawyers in ever more cases at the AAT, where none of the parties can really explain how the algorithm is constructed or what its effect is.

The algorithm is a black box, with no checks and balances, unchallenged, and unchallengeable.

That a judicial body could be so overwhelmed by the scale and network effects of an algorithm signals a completely different risk, one that central government is either choosing to ignore or is inexcusably ignorant of.

Evidence of this lies in the extraordinarily puerile statements in the PM&C discussion paper on ‘Automated Decision Making and AI Regulation’.

‘In principle, automation has the potential to reduce bias and discrimination, as a machine can only reflect the inputs provided by human users.’

Further, ‘AI and ADM is also being deployed by government to dramatically improve service standards…’

At the centre of government, there is clearly no understanding of the risk of complexity in automated decision making, or of the harm being suffered today, not at some far-off time in the digital future.

This discussion paper epitomises just how out of date and ill prepared Australia is, not just for the challenges of regulation, but for the challenges to democratic institutions and specifically to access to justice. Whilst the paper talks about regulation, it is silent on, or ignorant of, the constraints and oversight required of government itself.

The power of the state fundamentally changes things, and the voluntary AI Ethics Framework is a farcical marketing veil in the face of this.

And while PM&C naively entertains a happy upside of automated decision making, what is now happening with RoboNDIS was not only predicted, but known to have occurred in other jurisdictions. How can this risk of harm be ignored?

Reading like a mirror of the NDIS planning processes are the horrific examples of algorithm-driven funding models used in the United States.

Algorithm-driven funding assessments “hit low-income seniors and people with disabilities in Pennsylvania, Iowa, New York, Maryland, New Jersey, Arkansas, and other states, after algorithms became the arbiters of how their home health care was allocated – replacing judgments that used to be primarily made by nurses and social workers.”

A lack of funding, as determined by the algorithm, resulted in “…people lying in their own waste. You had people getting bed sores because there’s nobody there to turn them. You had people being shut in, you had people skipping meals. It was just incalculable human suffering.”

And the case of the Dutch childcare benefits scandal sounds eerily like the unlawful Australian RoboDebt catastrophe, with warnings for the growing Australian controversy over automated debt recovery of childcare subsidies.

In 2019, it was revealed that the Dutch tax authority had ruined thousands of lives by using a self-learning algorithm to create a risk profile for childcare benefits fraud. Authorities issued exorbitant debt notices to families over the mere suspicion of fraud based on the algorithm’s risk indicators. Some victims took their own lives. More than a thousand children were taken into foster care.

The Dutch parliamentary inquiry that followed found that the fundamental principles of the rule of law had been violated, and there was a total lack of checks and balances.

Once the enormous scale of the scandal came to light, the Dutch government resigned.

Of course, debt recovery is not the issue: it is a legitimate function.

The issue for roboplanning and debt recovery is the use of algorithms at scale by the state.

The widespread application of algorithms changes the relationship between the citizen and the state: opaque algorithms enabling policies based on a reverse onus of proof, and non-appealable processes that target and impact the most disadvantaged in society.

In my opinion, perhaps the most concerning aspect of algorithmic decision making is that any humans who might be in the loop will defer to the algorithm, because they are simply overwhelmed by volumes and KPIs.

Local Area Coordinators. Planners. Decision makers. Appeals officers. Health professionals. Legal representatives for the agency. Legal representatives for participants. Members of the AAT.

Across all these parties making decisions regarding the lives of people with disabilities, there is not a common – or any – understanding as to the construction and effect of the algorithms.

I know I’m not the only person to see a problem with this system-wide distortion.

But these algorithms did not just invent themselves. On evidence at the Senate inquiry into Independent Assessments, it appeared that the actuary function had primary carriage of the design of the algorithms.

But the accountability question seems as opaque as the algorithms themselves.

Algorithms that are effectively a health intervention were created without co-design with health professionals.

There was almost universal criticism of the over-reach of the actuarial function in the design of the algorithms, and, on evidence at Senate hearings, the NDIA shockingly stumbled over whether or not there was even an ethics framework.

In my own submission to the Senate inquiry, I warned of the pernicious effect of the over-reach of the actuarial function on service design. These are fundamentally different skill sets and accountabilities that cannot be mixed up or combined: there exists an inalienable conflict of interest.

The risks inherent in the service design of human services are extraordinary.

But it appears that recent changes at the NDIA have further embedded the role of the actuary in digital and participant experience design. The over-reach of the actuarial function is now complete. This will inevitably be the starting point for any future class actions on RoboNDIS.

The Opposition has committed to a Royal Commission into RoboDebt. The terms of reference of this Royal Commission will be historically unique, as they will record a fundamental shift in the nature of the exercise of power by the state against its own citizens. A shift only made possible by algorithms.

Society is in a very different space. People with disability are indeed the guinea pigs for changes that will affect every aspect of Australian society. And the additive impact – RoboNDIS, Robo Childcare, RoboDebt and so on – is almost beyond comprehension.

So significant are the parallels in patterns and practices between RoboDebt and the NDIS that rebuilding the NDIS based on trust must involve immediately stopping the use of algorithms.

The sixth and final article in the series pulls apart the governance that has made the NDIS such a divisive political lightning rod.

Marie Johnson is the CEO of the Centre for Digital Business. She is a global award-winning digital authority and advocate for the humanitarian application of AI. Her experience spans the public and private sectors in Australia and internationally, including leading Microsoft’s Worldwide Public Services and eGovernment industry, based in Seattle.

Marie was Head of the Technology Authority for the National Disability Insurance Scheme responsible for the technology business case, co-design, and the creation of Nadia. For many years, Marie was the Department of Human Services Chief Technology Architect, with responsibilities including the architecture and technology business cases bringing together the massive systems of Centrelink, Medicare Australia, and the Child Support Agency.

You can follow her on Twitter at @mariehjohnson or visit marie-johnson.com.

Do you know more? Contact James Riley via Email.

2 Comments
  1. Rob Mawston 2 years ago

    This is absolutely chilling reading. The fact that the ‘Artificial Intelligence Ethics Framework (2019)’ principles are voluntary is mind boggling. And the sheer negligence of ‘Algorithms that are effectively a health intervention, were created without co-design with health professionals.’
    This set of articles should be required reading, if not in the terms of reference, for the RC on RoboDebt – i.e. we need solid recommendations that apply across the board for use of AI and ADM by the state.

  2. Dr John Rogers 2 years ago

    This really interesting article is made almost unreadable by a flashing advert on the right-hand side of the screen!
