PC casts doubt on Australia’s ‘activist’ AI support


Brandon How

The Productivity Commission believes Australia is not well positioned to produce its own advanced artificial intelligence models and doubts the productivity benefits of “activist government ‘sponsorship’” across the AI value chain.

Instead, the government should focus on encouraging safe AI adoption across the economy by providing regulatory clarity and certainty while acting as an exemplar user of the technology.

The government think tank made the argument in one of three research papers on ‘making the most of the AI opportunity’ released on Thursday night.

The first paper is focused on how Australia could best benefit from AI technology and, therefore, where policy efforts should be focused. The other papers focus on ‘the challenges of regulating AI’ and on how ‘AI raises the stakes for data policy’.

While the Productivity Commission (PC) noted that whether Australia is ultimately a ‘maker’ or a ‘buyer’ in the global AI value chain is still evolving, it cautioned that “activist government ‘sponsorship’ of parts of the AI value chain is unlikely to yield ongoing productivity benefits”.

“In developing greater expertise and capability through supporting an AI innovation industry, skills and know-how could become available to other firms adopting AI further down the value chain. However, such spillovers are likely to arise from large international tech firms already operating in Australia,” the paper reads.

“These global businesses have strong incentives to establish their platforms in Australia. Large US tech firms in particular already provide capital and technology, and global expertise.”

The research paper said only a handful of companies across the world have the “immense amounts of data, extensive computer power, and significant expertise” required to develop deep-learning AI models.

“Companies that have already embedded AI capabilities are in a position to retain their market power through restricting access to their datasets and workers (the latter through non-compete clauses in employment contracts),” the report reads.

As the global market for hardware, cloud services, and the development of advanced machine learning algorithms becomes increasingly concentrated, the think tank said Australia is unlikely to have a comparative advantage and that “it is unlikely that Australian policy or regulation can influence this market outcome”.

Comparative advantage is a principle that states an economy should specialise in goods that it can produce at a relatively low opportunity cost, based on its resources, compared to the rest of the world.

This principle previously underpinned the Productivity Commission’s doubts that a local battery cell manufacturing industry could be stood up, although the principle itself has been criticised as outdated.

However, while “Australia’s comparative advantages are less likely to be in activities that require extreme quantities of data and investment”, they could lie in ‘downstream’ products that build on general-purpose AI models.

These could be “AI models that can be trained on smaller (high-quality) datasets; or the adaptation of general-purpose models to more specific use cases, based on local data; or in uptake and implementation of the technology by digitised firms and through software as a service”.

According to a CSIRO report released in December, Australia has a strong AI research sector but lags global competitors in commercialisation. There are also 544 companies “making and selling AI products and services” headquartered in Australia, with 204 having opened in the past five years alone.

As such, the government think tank believes policy should focus less on local AI industry development and more on widespread adoption.

It argues the government should use its “buying power to drive higher standards from AI suppliers or establish expectations of suppliers that other purchasers can then leverage”.

“To the extent that governments can demonstrate safe and effective uses of AI in service delivery, including through pilots and trials, government use could encourage business adoption of AI solutions for similar use cases,” the report reads.

“Conversely, rushed adoption or misuse of technology can undermine public confidence in automated decision making and AI.”

The work from the Productivity Commission follows the federal government’s release of an interim response on the safe and responsible adoption of AI last month.

The response proposed mandatory guard-rails for high-risk AI use cases, although there is no timeline for regulatory or legislative reform.

Do you know more? Contact James Riley via Email.
