No ‘fairness’ in the harm caused by algorithms


Marie Johnson
Contributor

Australia is racing into the next decades of the digital era wilfully blind and ill-prepared for the impact of algorithms on its citizens.

I say wilfully blind because, notwithstanding years of very serious concerns and legal challenges over algorithms around the world, the current Digital Transformation Strategy 2025 and the recent APS Workforce Strategy 2025 contain no reference to algorithms.

Why is this a problem?

Because government agencies are acquiring and applying powerful algorithmic technologies at speed, while the political mantra of “fairness” has become the very raison d’être for their use.

Marie Johnson: Sleepwalking into an algorithmic miasma

The widespread application of algorithms changes the relationship between the citizen and the state: opaque algorithms enable policies that reverse the onus of proof, and non-appealable processes that target and impact the most disadvantaged in society.

The unlawful RoboDebt debacle, and the visceral outcry from disabled people and their families over proposed NDIS RoboPlans generated from outsourced Independent Assessments, demonstrate that government is ethically ill-equipped for the era of algorithms.

In 2020 in the United Kingdom, there was outrage and political fights over the use of “unfair algorithms” to make all sorts of government decisions.

Controversially, the use of opaque algorithms to calculate the grades of secondary school students disproportionately impacted disadvantaged students, who were denied access to universities. This “provoked so much public anger at its perceived unfairness…that the government was forced into an embarrassing U-turn.”

But the socio-economic discrimination problems are far bigger, for Australia and the UK alike. The UN special rapporteur on extreme poverty, Philip Alston, warned that the UK is “stumbling zombie-like into a digital welfare dystopia”. Alston argued that too often technology is being used to reduce people’s benefits, set up intrusive surveillance and generate profits for private companies.

What can be done?

Twenty-five years ago, as “government online” was gearing up, governments around the world undertook “forms and transactions” audits as a pillar of their online strategies. At the time, the Victorian Government was a globally recognised leader in government online. I undertook a number of these forms and transactions audits myself, and there is a whole other story to be told about what these audits revealed about the bowels of government.

The forms and transactions audits were undertaken to establish a baseline and priorities for online delivery. For the first time, these audits established transparency around the impact on citizens and businesses of their interactions across government.

Now, twenty-five years later, given the magnitude of the impact of algorithms on citizens and on democracy more broadly, a similar audit of algorithms is urgently needed.

The Stanford University report “Government by Algorithm” cautioned that “little attention has been devoted to how agencies acquire such tools in the first place or oversee their use.” The report advocated that the US federal administration undertake “a rigorous canvass of AI use at the 142 most significant federal departments, agencies, and sub-agencies”. That is, an audit of algorithms.

It beggars belief that two of the most significant capability strategies in government (the Digital Transformation Strategy and the APS Workforce Strategy 2025) are devoid of any reference to algorithms. An algorithm audit would fundamentally change both.

Without this, it is impossible to understand the additive impact of algorithms on citizens and businesses, the depth of policy and ethics skills required of the public sector, or the forms of public scrutiny even possible by civil society.

In fact, in 2017 ABC News reported that it wrote to 11 Australian government departments responsible for administering legislation with computerised decision-making, asking what decisions the computers were making.

“The short answer is: we don’t know.”

We don’t know – but we may very well be shocked.

Applications for grants. Applications for trade marks. Job applications.

The ABC also reported that “the most recent new powers for automated decision-making apply to the departments of Health and Veterans Affairs.

“The Department of Veterans Affairs (DVA) is undertaking veteran centric reform to significantly improve services for veterans and their families by re-engineering DVA business processes.”

With this reform, there is concern that veterans may end up facing flawed processes similar to those implemented by Centrelink.

These concerns might well be justified, given the outrage across the disability sector over the safety and human rights impacts of the proposed NDIS RoboPlans.

Stuart Robert, who retains significant influence over service delivery and digital transformation, has described this approach “not just to the NDIS but also Veterans Affairs and Aged Care” as the kind of transformation the government wants.

All of this would involve algorithms: access determined by facial biometric algorithms; plans generated by algorithms; funding determined by algorithms; debt determined by algorithms; and payments potentially controlled by blockchain algorithms.

At present, there is no transparency, knowledge or governance around the use and sharing of algorithms by government agencies in Australia.

Compounding this, and of grave concern, is the new intergovernmental agreement on data sharing, which expands the sharing of data between public and private sector organisations.

The additive impact of bias on citizens from the use and sharing of algorithms across agencies, across jurisdictions and across sectors will be unfathomable. Pre-emptive legal challenges will take place, as happened in the UK.

So there should be a moratorium on the use and planning of algorithms until an audit of algorithms is done.

Fairness can only come from transparency, a duty of care and the active governance of ethics.
