The trust crisis in web standards

James Riley
Editorial Director

New research from CSIRO’s Data61 unit has found that even the most popular and trusted websites rely on chains of third-party scripts and services, hidden from end users, that leave these sites prone to malicious activity.

While advanced users understand that any website may load dozens of third-party services, whether for advertising or analytics, this world-first research explored just how deep these chains of third-party dependencies can run, and how they impact the ‘trustability’ of the sites we use every day.

“Almost all websites today are heavily embedded with tracking components. For every website you visit, you could be unknowingly loading content from potentially malicious parties and leaving a trail of your internet activity,” said Data61 Information Security and Privacy researcher and Scientific Director of the Optus Macquarie University Cyber Security Hub, Prof Dali Kaafar.

Untrusted networks: The architecture of the web is a worse problem than it appears

The research tracked the top 200,000 websites for more than two years, monitoring the use of third-party services on these sites and building a picture of how they feed information back and forth across networks of other domains that end users can only implicitly trust with their computers and their data.

Even the websites themselves were in an explicit trust relationship only with the first layer of third parties they engaged, while those partners drew on further third parties that information would ‘hop’ to along the chain, becoming more and more removed from the end user.

The researchers found that the deeper the chains go, the more likely something malicious could be inserted into a website.

Most websites were found to have short dependency chains – fewer than three levels deep – but some ran as deep as 30 layers.
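The kind of transitive loading the researchers measured can be pictured as a graph problem: a site explicitly embeds a handful of third parties, and each of those may pull in further parties the site never directly chose to trust. A minimal sketch of measuring chain depth, in Python (all domain names here are illustrative, not drawn from the study):

```python
# Hypothetical sketch: given a map of which domains load content from
# which third parties, compute the longest "dependency chain" reachable
# from a first-party site. Domain names are made up for illustration.

def max_chain_depth(loads, start, seen=None):
    """Length of the longest chain of third-party loads from `start`."""
    if seen is None:
        seen = set()
    seen = seen | {start}
    depths = [
        1 + max_chain_depth(loads, dep, seen)
        for dep in loads.get(start, [])
        if dep not in seen  # avoid cycles between mutually loading parties
    ]
    return max(depths, default=0)

# first-party -> third parties it explicitly embeds; those third parties
# in turn load further parties, each hop more removed from the end user
loads = {
    "news.example": ["cdn.example", "ads.example"],
    "ads.example": ["tracker.example"],
    "tracker.example": ["exchange.example"],
}

print(max_chain_depth(loads, "news.example"))  # → 3: ads → tracker → exchange
```

Only the first hop is something the site operator explicitly agreed to; the deeper hops are exactly the implicit trust relationships the research highlights.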

“Clearly this has some privacy and security implications. We tried to analyse how good or bad these resources are, and where they come from, and we found surprising results there,” Prof Kaafar said.

“1.2 per cent of these websites would be loading some traffic from third-parties that are malicious or suspicious in nature. That means serving potential malware, or opening the possibility of a ransomware attack, and so on.”

The research is set to be presented at The Web Conference in San Francisco on May 15. Prof Kaafar believes the results point to key steps that should be taken across three communities: users, third-party service providers, and web standards authorities.

“This is a problem for the design of the web and how fragile the web ecosystem is,” said Prof Kaafar.

“We have to pay extra attention to what is being loaded and executed. Some companies have software that can be installed as extensions that add some scrutiny to the content,” he said.

He suggests users install browser extensions that monitor websites’ use of third-party tracking and JavaScript, and can control or block such activity when it is deemed suspicious.
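Internally, such extensions typically work by matching the domain of each outgoing request against a blocklist of known tracking or malicious hosts. A hedged sketch of that matching logic (the blocklist and domains are invented for illustration; real extensions draw on curated, regularly updated lists):

```python
# Hypothetical sketch of the core check a tracker-blocking extension
# performs: decide whether a request's host is on a blocklist.
# Domains below are illustrative only.
from urllib.parse import urlparse

BLOCKLIST = {"tracker.example", "malware.example"}

def should_block(request_url):
    """Return True if the request's host, or any subdomain of a
    blocklisted domain, should be blocked."""
    host = urlparse(request_url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(should_block("https://tracker.example/pixel.gif"))  # → True
print(should_block("https://cdn.example/lib.js"))         # → False
```

The subdomain check matters because trackers commonly serve content from many hosts under one parent domain.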

“Third-parties should include an extra level of care when importing content from extra layers down the track,” Prof Kaafar said.

“It’s not happening now and it should be. This has become extremely complex and the notion of trustability is not something that exists in the web ecosystem now. There is a huge role here for standardisation bodies like W3C,” he said.

“One of the problems is that we have a huge lack of mechanisms for trust and accountability. When you download malware from a website there is no one to blame except the end user. But it’s more complex than that.

“Sometimes you really trust a website and there’s no reason why you shouldn’t. We lack a mechanism to say it was caused by a domain and here is the proof.

“What we need is a solution that could enable trustability, and audit where a threat came from and who is accountable for it. Standards bodies could enforce such protocols,” said Prof Kaafar.
