Ethical AI must focus on security


James Riley
Editorial Director

The government should place data security at the centre of its efforts to develop an artificial intelligence ethics framework, according to the communications industry body.

The federal government is currently consulting on the development of an AI ethics framework after the industry department and Data61 released draft guidelines.

The guidelines, designed to be a “call to arms” for Australians, outline a set of principles and practical measures that organisations and individuals in Australia can use to design, develop and use AI in an ethical way.

John Stanton: Security must be at the centre of an AI ethics framework

They list eight core principles for the ethical use of AI: generating net-benefits, doing no harm, regulatory and legal compliance, privacy protection, fairness, transparency and explainability, contestability, and accountability.

But in its submission to government, the Communications Alliance “strongly recommended” that a principle centred on the security of AI also be included.

“AI will pose a significant challenge from a cybersecurity perspective as large volumes of centralised data create a ‘honeypot’ that is likely to be targeted by criminal actors,” Communications Alliance CEO John Stanton said in the submission.

“In addition, the power of AI systems is likely to present an attractive target for those who seek to exert control through the use of AI and who wish to manipulate AI systems,” he said.

“It can be argued that securing the powerful AI that we create must be part of an ethical consideration rather than a mere commercial implication or prerequisite to applying other principles, such as the privacy protection principle.

“Creating such a security principle would also align with other international principles, such as the OECD principles.”

The Communications Alliance says the government’s proposed framework and principles “sound reasonable and attractive” when looked at with a wide lens, but “begin to falter under scrutiny as to how they can be applied in practice”.

Principle six of the government’s draft guidelines focuses on transparency and explainability, outlining that “people must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions”.

But the Communications Alliance said this is “problematic” because of the wide variety of areas in which AI is used, some of them fairly innocuous.

“At what point does an algorithm impact someone such that it would trigger a requirement for disclosure? Such a decision is fairly obvious in cases where an algorithm has a substantial impact on an individual,” Mr Stanton said.

“However, in many everyday cases – such as the use of AI to make automatic adjustments to a camera-phone’s exposure settings – it may be extraneous to the user whether an algorithm has been used or not,” he said.

“It would appear that the application of the principle requires a fair degree of flexibility to account for the vast variety of AI applications and situations in which users would be subject to it.”

Principle eight, which states that “people and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm”, is “not realistic”, according to the Communications Alliance.

“As presently worded, it suggests that any person or organisation involved in the creation of an open source model or API that ends up being used in an AI system should be identifiable and accountable for the impacts, even if they were unintended,” Mr Stanton said.

“It will be impossible, however, for the developer to predict or even find out all the ways in which AI models they have created will be used, particularly if the model had been made available on an open source basis.

“While they can take steps to be responsible, the actual use cases are not something within their control or even visibility.

“It appears that accountability for the impacts that were reasonably foreseeable at the time of the creation / release / application of the AI could constitute a more practical approach.”

The Communications Alliance also argued that the government should take a light-touch approach when looking to regulate the use of AI.

“We believe that it would be wise to carefully analyse existing frameworks and regulations and how those might accommodate evolving new technologies rather than defaulting to the creation of new regulatory frameworks which may be adding unnecessary complexity and cost. Also, any regulatory intervention only ought to be contemplated when there is a proven failure of markets to produce the desired outcome,” he said.

The government’s working definition of AI – a “collection of interrelated technologies used to solve problems autonomously and perform tasks to achieve defined objectives without explicit guidance from a human being” – may also be “too broad”, the submission said.

“Based on this definition, it will be difficult to discern when a certain activity or technology constitutes AI – a difficulty that would likely arise with most, if not all, definitions of AI.

“The scope of the definition is important because it would appear to encompass many AI applications where the proposed ethical framework is not relevant,” Mr Stanton said.

Submissions to the government’s AI ethics consultation closed at the end of May.

Do you know more? Contact James Riley via email.
