Designing trust into AI platforms

James Riley
Editorial Director

A big part of building societal trust in new technologies like artificial intelligence is having quality designers create them, according to Ellen Broad, head of technical delivery in consumer data standards at the CSIRO.

Ms Broad told delegates at the Civic Nation forum that it is often difficult to “get under the skin of a piece of software”, and that new systems therefore need to be designed to be transparent from the get-go.

She drew on the example of a recent court case in the US that looked into how artificial intelligence was being used to determine welfare allocations.

Ms Broad said the case exposed the “innards of a piece of software making decisions”. But what emerged was that nobody understood what the AI system was doing, or even what it was designed to achieve.

CSIRO data unit Data61 chief executive Adrian Turner said more research still needs to be done in areas of data provenance and the information supply chain to create trust in systems that rely heavily on data, including artificial intelligence.

“As much as the hype would have us believe that systems such as AI are very well progressed, they’re very brittle and narrow. They’re not generalisable. There’s still a lot of work to do,” he said.

He is optimistic that data collection will eventually move to a more federated, distributed model, with greater participation from both the public and private sectors.

This contrasts with the current model, which is dominated by vertically integrated platforms such as Amazon and Facebook that are accelerating because of their ability to collect, use and reuse data.

“What we are going to revert to is things like data cooperatives, where there are open, standards-based methods for exchanging information,” Mr Turner said.

“Maybe not even a centralised entity sitting at the centre in some cases, and I think that’s a better outcome for society. It’s a much more inclusive economic model.”

Ms Broad drew the analogy of how data cooperatives can act almost like libraries that are “trusted institutions who operate in a complex ecosystem of rules, regulations, principles and practices to ensure that there are trusted, curated, high-quality repositories of knowledge that can be used and reused”.

“We really do have challenges around monopolisation of data assets, which makes it very hard for new entrants into a new market to innovate,” she said.

AccelerateHQ co-founder and former Digital Transformation Office chief executive Paul Shetler warned that the idea of personal data ownership as a privacy remedy is very idealistic.

“We’re exhibitionists in a panopticon [and yet] we’re talking about how we want to have our privacy. When they put out models for how you can own your data in places like the UK, nobody wanted it,” he said.

Meanwhile, Toby Walsh, Scientia Professor of Artificial Intelligence at UNSW, argued that in a decade’s time the conversation around data privacy will no longer be necessary.

“All of us are going out and giving money to private companies to put loudspeakers in our homes that listen and send all that information to that private company, over which we have no rights,” Prof Walsh said.

“But in 10 years’ time we’ll look back and think wasn’t that stupid, because we’ll be able to do all that computation on the device. The only reason we have to send our data out to an Amazon today is because we can’t do that computation on the device.”

“There’s no reason that information needs to leave your home, but as soon as it does, your privacy is gone. But in ten years’ time, when the devices can do the computation, you can get that privacy back.”

Do you know more? Contact James Riley via Email.
