A privacy watchdog probe into radiology giant I-MED has cleared the company of wrongdoing over handing millions of patient medical scans to local AI startup Harrison.ai.
Last year, Crikey revealed patient concerns that data from Australia’s largest diagnostic imaging provider had been used to train Harrison.ai’s first AI medical imaging solution, Annalise.ai, without consent.
Annalise.ai — which has been trained on more than 800,000 images — can read chest X-rays and help detect conditions in under 10 seconds, effectively giving clinicians a second set of eyes.

The report prompted concerns from then Attorney-General Mark Dreyfus, as well as the Greens, leading the Office of the Australian Information Commissioner (OAIC) to begin preliminary inquiries in September.
But privacy commissioner Carly Kind, in releasing her findings on Thursday, found that the images had been “sufficiently” de-identified and ruled out further regulatory action against I-MED.
According to the report, I-MED had argued that notification and consent were not required because the data had been de-identified, meaning it no longer met the definition of personal information under the Privacy Act.
Based on the information obtained through the inquiry, including samples of image scans and other patient data, Ms Kind said that I-MED used a de-identification process endorsed by NIST.
Over an almost two-year period between April 2020 and January 2022, the company shared “less than 30 million patient studies… and a similar volume of associated diagnostic reports with Annalise.ai”.
A patient study refers to a “complete imaging session for a single patient and may include multiple image types, that together represent a single diagnostic episode”, according to the report.
Only in a “very small number of instances” was personal information shared in error; these cases were proactively identified, reported to I-MED, and subsequently deleted or de-identified by Harrison.ai.
Ms Kind said that she was satisfied the “patient data shared with Annalise.ai had been de-identified sufficiently that it was no longer personal information for the purposes of the Privacy Act.”
“Although the steps taken by I-MED could not entirely remove the risk of re-identification, the Commissioner was satisfied that it reduced that risk to a sufficiently low level and was supported by sound data governance practices,” the report said.
Ms Kind also described the case as an example of “how good governance and planning for privacy at the start of a new initiative can support an organisation to adopt new and innovative data-driven technologies in a way that protects the rights of individuals”.
I-MED welcomed the commissioner’s findings and said that it was committed to “ensuring our practices align with evolving legal, governance and community expectations”.
Since the inquiries, the company has begun asking patients for permission to use their scans to train AI. Other providers, such as PRP Diagnostic Imaging, have also adopted this approach.
Do you know more? Contact James Riley via email.