The algorithm-industrial complex: where is the widespread backlash?


Marie Johnson
Contributor

OPINION: There’s a passage in Ernest Hemingway’s novel “The Sun Also Rises” in which a character named Mike is asked how he went bankrupt. “Two ways,” he answers. “Gradually and then suddenly.”

And suddenly, it would seem, the algorithm-industrial complex has erupted onto the civil society and policy landscape. Preconditions that have festered and been largely ignored for more than 20 years have given rise to dynamics seen elsewhere.

Marie Johnson on the algorithm-industrial complex.

The military-industrial complex is a term that refers to the relationships and links between defence contractors, the military establishment, and politicians. The term gained popularity after a warning of its detrimental effects in the farewell address of President Dwight D Eisenhower in 1961. Significantly, this network of influence included lobbying and oversight of the industry.

In 1969, the term “medical-industrial complex” – analogous to the military-industrial complex – was first used to describe the network of corporations supplying healthcare services and products. Similarly, the medical-industrial complex became powerfully influential in the oversight of the industry, and throughout the 1970s profit-seeking companies became significant stakeholders in the US health system and policy.

The result, in both the military-industrial complex and the medical-industrial complex, has been massive and uncontrollable budgets, rigid systems, and failed projects.

These same patterns and forces are forging the algorithm-industrial complex, presenting enormous peril to civil society and human rights.

The algorithm-industrial complex is characterised by a power and skills distortion: public sectors gutted of skills, and the outsized influence of, and outrageous expenditure on, outsourcing, tech, and consultants.

Of course, algorithms have been part of our lives for many years – for our family, thankfully, literally a life saver, through algorithms developed by Apple and the Apple health ecosystem that we chose to use.

But what defines the algorithm-industrial complex is the emergence of policy which can only be executed via algorithms.

Think about that. Only be executed via algorithms.

What does this algorithm-industrial complex look like and who is involved?

Perhaps our first glimpse of the catastrophic impact of algorithms on civil society was robodebt, deemed unlawful by the Federal Court in a blistering assessment describing it as a “massive failure in public administration” of Australia’s social security scheme.

Notwithstanding this, the algorithm doctrine deepens, ruthlessly pursued with roboNDIS. Algorithms built on untested personas were to be unleashed onto 500,000 disabled Australians in a process of independent assessments, to determine “fair” funding – a model used elsewhere that has damaged people. The ferocious backlash triggered a crisis summit with the states, with the federal government saying that independent assessments would not proceed “in their current form” but that work would be done on a “person-centred” model.

So deep-rooted is this doctrine that the government pursued King Henry VIII powers, which again appear in the NDIS exposure bill released last week. In readiness, the roboNDIS algorithms have already been baked.

Also during the past few weeks, two extraordinarily powerful pieces of legislation have entered the algorithm-industrial complex maze.

The first is the Social Security Legislation Amendment (Streamlined Participation Requirements and Other Measures) Bill 2021, which looks like robodebt mark II on steroids. This bill, which the government is trying to pass, “will allow better use of technology, enabling job seekers more choice about how they enter into an employment pathway plan and meet the requirements of that plan”.

The bill describes “technological processes” – effectively an algorithm – whereby data from a questionnaire will determine the employment pathway that participants (i.e., jobseekers) are streamed into. The New Daily reported that “…advocates fear this will pressure job-seekers into signing their digital job plans in a rush, although the government has contested this, saying they will be allowed to vary their job plans, albeit with approval from officials.”

What we have are jobseekers reduced to data fields in a closed-loop government algorithm, who are then filtered and discarded by companies using their own search and recruitment algorithms.
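To make the mechanics concrete, here is a minimal, entirely hypothetical sketch in Python of how such a streaming “technological process” could work. The questionnaire fields, weights, and cut-off below are invented for illustration only; the real system is a black box, and nothing here is drawn from it.

# Hypothetical sketch only: the real questionnaire, weights and
# cut-off are not public. It shows how answers become a score, and
# a score becomes a pathway, with no human in the loop.
STREAMS = ["self-managed digital pathway", "provider-managed pathway"]

def stream_jobseeker(answers: dict) -> str:
    # Invented weights: each questionnaire field nudges a single score.
    weights = {"recent_work_history": 3, "has_drivers_licence": 1,
               "years_unemployed": -2, "regional_location": -1}
    score = sum(weights[field] * value for field, value in answers.items())
    # An arbitrary cut-off decides the pathway; the person never sees
    # the score or the rule that produced it.
    return STREAMS[0] if score >= 2 else STREAMS[1]

print(stream_jobseeker({"recent_work_history": 1, "has_drivers_licence": 1,
                        "years_unemployed": 2, "regional_location": 1}))

The point is not this particular rule but its invisibility: shift the cut-off by a single point and thousands of people silently change pathways, with no decision letter and no decision-maker to appeal to.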

In the article “Did the government learn nothing from the robodebt scandal?”, Asher Wolf – one of the people who uncovered the robodebt debacle – brilliantly describes what this black-box algorithmic “technological process” might look like.

And it looks a lot like roboNDIS. The common terminology of the algorithm-industrial complex has entered the policy vernacular: participants (jobseekers; disabled people; veterans; old people; sick people); plans; pathways; person-centred.

The second piece of legislation, the Surveillance Legislation Amendment (Identify and Disrupt) Bill 2020 (Identify and Disrupt Bill), passed both houses of federal parliament on 25 August 2021. As InnovationAus reported, it sees “broad hacking powers handed to authorities to, among other things, covertly take control of online accounts and ‘disrupt’ data.”

Human rights advocates believe that the new Australian surveillance bill signals the end of respect for human rights in Australia.

Angus Murray, chair of Electronic Frontiers Australia’s policy team, believes the hacking powers pose a serious risk to our civil liberties.

“This is now a regime in Australia where we have conferred power on law enforcement agencies to hack Australians’, and potentially overseas persons’, computers and to take over accounts and modify and delete data on those accounts.”

So, we see the emergence of interweaving and interdependent threads of security and social security legislation that can only be implemented via algorithms.

While the Human Rights Law Centre criticised the surveillance legislation as having “insufficient safeguards”, the government continues to pursue King Henry VIII powers in the NDIS legislative changes. And the Australian Council of Social Service (ACOSS) saw the need for a digital code of ethics to be built into the robodebt mark II legislation to ensure that jobseekers using these new online systems aren’t disadvantaged.

Clearly robo-planning, robo-welfare and robo-benefits are here to stay. But does Australian civil society really understand what is being created? This is the atom bomb question. Having led the Access Card program as its Chief Technology Architect, I am intrigued as to why there is no widespread backlash from civil society.

I see two reasons for this. The first is that the general public don’t really understand what algorithms are. The second is the insatiable desire across Australian governments and business to use algorithms, notwithstanding the human cost.

On the question of who is involved and who’s creating these algorithms, let’s first look at public sector capability and readiness. After all, this goes to the heart of accountability and trust.

Over the many years that all this has been happening, the Digital Transformation Agency has effectively vacated the field. Neither the Digital Profession nor the Digital Transformation Refresh Strategy has any mention of algorithms; no mention of inclusion; no mention of ethics.

Want to talk about ethics? We are referred to the federal Department of Industry, Science, Energy and Resources for a look at Australia’s Artificial Intelligence Ethics Framework.

Here we see a conveniently voluntary and unenforceable framework of eight AI ethics principles, totally at odds with what’s actually going on in government itself. “Aspirational” is an extraordinary description given the risks.

We see references to human-centred values and fairness. “Fairness” is the dog-whistle furiously used by the federal government in its pursuit of change to the NDIS.

At the state level, the NSW Government has developed a mandatory “AI Ethics Policy”, with one of the five mandatory principles also being fairness.

But it is difficult to see how collaboration between jurisdictions can safely happen when aspirational ethics frameworks butt up against mandatory frameworks. The additive impact of algorithms, especially on disadvantaged populations, is incomprehensible and has not been contemplated in any of these frameworks.
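A back-of-the-envelope illustration, with pass rates invented purely for the arithmetic, shows why this compounding matters: a person who faces several independent algorithmic gates must clear all of them, so the per-gate pass rates multiply.

# Illustrative arithmetic only: these pass rates are invented.
# Someone facing several independent algorithmic gates must clear
# every one of them, so the pass rates multiply.
gates = {"eligibility triage": 0.90, "identity matching": 0.95,
         "recruitment screening": 0.70}
combined = 1.0
for gate, pass_rate in gates.items():
    combined *= pass_rate
print(f"Chance of clearing every gate: {combined:.0%}")  # about 60%

And for a population already more likely to be wrongly flagged at each gate, the compounded exclusion is worse again – yet none of these frameworks audits the chain end to end.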

This skills vacuum and contentious ethics and accountability environment brings us to the pointy end of the question: who develops the algorithms, and who has influence?

Tech and consulting firms are selling, and re-selling, algorithm-driven software, models, and methods: the very mechanisms for the delivery of policy. This is very big business, and likely a very big part of the $6.4 billion spend by Australian governments next year.

Of course, the Australian Information Industry Association wants to support industry and government to work together “to develop an AI ethics framework, to ensure Australians can confidently and comfortably engage with AI in their day to day lives…and that once established, the framework operates as a self-regulating industry Code of Practice”.

But where were the warnings from an industry that promotes tech-for-good that the technology being used and planned by government was inconsistent with its own stated ethics? More than 370 submissions to the Senate inquiry into independent assessments spoke of the risks of harm posed by algorithms to human rights and the safety of people.

It would seem that self-regulation as determined by the industry will be insufficient, and perhaps not appropriate, when it is the immense resources and power of the state wielding extraordinary policy power that can only be implemented via algorithms – algorithms that are themselves shielded behind veils of intellectual property and commercial-in-confidence.

Standing alone has been the Australian Human Rights Commission, the only government instrumentality with the moral courage to call out the human rights risks posed by the government’s own use of algorithms.

But the algorithm-industrial complex has already achieved the outsourcing of decision-making to algorithms, and that changes the relationship between the citizen and the state.

We are no longer citizens active in government decision-making at any level, from policy to payments; we are merely inert data to be manipulated, then discarded.

Marie Johnson was the Chief Technology Architect of the Health and Human Services Access Card program; formerly Microsoft World Wide Executive Director Public Services and eGovernment; and former Head of the NDIS Technology Authority. Marie is an inaugural member of the ANU Cyber Institute Advisory Board.

