National science agency CSIRO and fund manager Alphinity have teamed up to develop a framework for companies to assess artificial intelligence investment decisions from an ESG standpoint.
The framework, which will be developed over the coming year, will give companies a clearer understanding of what constitutes “responsible” AI at a time of rapid change, and will help them assess, manage and report on AI risks on a case-by-case basis.
“AI will present significant opportunities to improve company performance, but we foresee potential risks in areas of governance, social licence, and operations, and investors will increasingly need to identify these,” Alphinity’s ESG and Sustainability head Jessica Cairns said.
Responsible AI is the ethical development of AI systems to the benefit of humans, society, and the environment, according to the CSIRO, which has a well-established program of work to ensure AI is inclusive, safe, secure and reliable.
The CSIRO, through its digital arm Data61, was responsible for developing Australia’s AI ethics framework back in 2019 to guide the development, adoption and use of AI systems, and has more recently focused on operationalising those principles.
CSIRO research director Liming Zhu, who leads the Responsible AI initiative at the science agency, said the project will “give us insights into the AI risks and opportunities companies are grappling with and provide guidance around best practices”.
“Australia can lead the world in the responsible development and use of AI, but to practically achieve that we must bring diverse skillsets together to develop measurements and tools to support implementation,” Dr Zhu said.
In order to develop the framework, Alphinity and CSIRO are calling on companies to share information on their “experience and thinking on the impact and responsible application of AI”, Ms Cairns said.
“We hope the case studies and other data will also assist companies at the start of their AI journey to implement best-practice considerations,” she said.
“From our perspective, it will create a foundation for the longer-term development of frameworks for analysis and robust modelling of responsible AI within our broader set of ESG performance and risk analysis.”
Last week, the Albanese government released a discussion paper that proposes options for tightening the frameworks for governing AI. One of the options canvassed is a ban on the technology in “high-risk” settings.
According to the paper, high-risk is defined as having “very high impacts that are systemic, irreversible or perpetual”. Examples include the use of AI-enabled robots for surgery and self-driving cars.