Australian scientists and international research partners have discovered “major vulnerabilities” in technology used to detect deepfakes, finding that none of the market-leading detectors can reliably identify deepfakes in real-world use.
The detectors have failed to keep up with rapidly advancing deepfake technologies, which are now more convincing, cheaper and easier to deploy than ever before.
The warning comes ahead of an Australian federal election in which deepfakes and voice clones will remain legal, after the government prioritised donation caps and spending limits but left truth in political advertising off its current agenda.
An international team of researchers that includes CSIRO scientists on Thursday called for urgent improvements in deepfake detection technologies after discovering the vulnerabilities.
The joint study by the CSIRO and South Korea’s Sungkyunkwan University assessed 16 leading detectors and found none could reliably identify real-world deepfakes.
“Deepfakes are increasingly deceptive and capable of spreading misinformation, so there is an urgent need for more adaptable and resilient solutions to detect them,” CSIRO cybersecurity expert and co-author Dr Sharif Abuadbba said.
“As deepfakes grow more convincing, detection must focus on meaning and context rather than appearance alone.”
A Parliamentary inquiry into artificial intelligence last October stopped short of calling for deepfakes to be outlawed through South-Korean-style fast-tracked laws that advocates have sought.
A month later the Albanese government introduced bills to reform electoral laws, including one that would make deepfakes illegal during elections and referendums.
But it did not bring the deepfake bill on for debate, instead leaving it to “languish” to focus on the other bill to reform donation rules and set campaign spending caps.
Concerns about deepfakes during elections were again raised earlier this year when platform giant Meta announced it was ending fact-checking.
Deepfakes and voice clones are expected to increase again during the federal election and the Australian Electoral Commission has warned it is limited in what it can investigate.
Even detecting the technology can be a challenge, according to the CSIRO research, although work is underway on better approaches.
“We’re developing detection models that integrate audio, text, images, and metadata for more reliable results,” the CSIRO’s Dr Kristen Moore, a co-author of the new study, said.
“Proactive strategies, such as fingerprinting techniques that track deepfake origins, enhance detection and mitigation efforts.
“To keep pace with evolving deepfakes, detection models should also look to incorporate diverse datasets, synthetic data, and contextual analysis, moving beyond just images or audio.”
Do you know more? Contact James Riley via email.