Guardian or gatekeeper? AI’s role in cyber ethics 


Jason Stevens
Contributor

As companies deploy generative AI as a frontline defence for critical infrastructure, is governance being overlooked in the rush to outpace cyber threats?

Security teams walk a fine line between agility and ethical integrity as they embrace automation and operational efficiency to react faster with AI-aided defence and offence. The tension is sharpest in the financial sector, which is often considered a leader in cybersecurity innovation.

This tension is explored in the latest episode of Securing critical infrastructure: The regulatory vs the practical, a four-part vodcast series produced by InnovationAus.com in partnership with SentinelOne.

“What we’re finding,” SentinelOne’s APJ field chief technology officer Wayne Phillips said, “is that large enterprises use AI automation to free up resources so they can take their threat-hunting team and do higher-order tasks”.

In cybersecurity, AI models are often used for anomaly detection: they learn what normal activity looks like, then flag behaviour that deviates from that baseline so defenders can detect and respond to threats early.
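To make that concrete, here is a minimal sketch of the approach, assuming login telemetry and scikit-learn’s IsolationForest as the detector; the features and numbers are invented for illustration and are not drawn from either vendor’s products.

```python
# Minimal anomaly-detection sketch: learn a baseline of "normal" logins,
# then flag events that deviate from it. Features and values are invented
# for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated baseline telemetry: business-hours logins, modest data
# transfers, few failed attempts.
baseline = np.column_stack([
    rng.normal(13, 2, 500),   # hour of day
    rng.normal(20, 5, 500),   # megabytes transferred
    rng.poisson(0.2, 500),    # failed attempts before success
])

# Fit on the baseline; contamination is the expected share of anomalies.
model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

# Score two new events: a routine morning login, and a 3am login that
# moved 400MB after nine failed attempts.
events = np.array([[9.0, 22.0, 0.0], [3.0, 400.0, 9.0]])
for event, label in zip(events, model.predict(events)):
    verdict = "ANOMALY" if label == -1 else "normal"
    print(f"{verdict}: hour={event[0]}, mb={event[1]}, failures={event[2]}")
```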

Generative AI then searches and analyses that data, with advanced language models “crunching” the raw outputs into plain-language explanations of their significance for cybersecurity.
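What that “crunching” might look like in practice: the sketch below hands raw detector output to a chat-style language model and asks for a plain-language assessment. The alert format, prompt and model choice are assumptions for illustration; the call uses the openai Python library’s standard chat-completions interface.

```python
# Sketch: use a language model to turn raw detector output into a
# plain-language assessment. Alert format, prompt, and model choice are
# illustrative assumptions; requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

raw_alert = (
    "event=login status=success user=svc_backup src=203.0.113.50 "
    "hour=03:14 mb_out=400 failed_attempts_prior=9 verdict=ANOMALY"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model would do
    messages=[
        {"role": "system",
         "content": "You are a SOC analyst assistant. In two sentences, "
                    "explain why this alert matters and rate severity 1-5."},
        {"role": "user", "content": raw_alert},
    ],
)
print(response.choices[0].message.content)
```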

However, David Fairman, the chief information officer and chief security officer of Netskope, says this approach raises questions: “Can you trust the output? It comes back to that training data and how that model’s been used.”

He also cautions: “If you’re not enforcing that quality nor driving that race to AI safety, AI quality and AI responsibility — then we’re just doing more things badly and faster.”

SentinelOne excels in AI security, while Netskope has a pedigree in protecting cloud data. With Mr Fairman’s big-picture thinking and Mr Phillips’ hands-on tactics, they’re well-equipped to tackle today’s top cybersecurity issues.

Both agree that regulatory frameworks are failing to keep pace with rapid AI innovation, leaving businesses to find practical defences against adversaries equipped with the same technologies.

“David is absolutely correct,” Mr Phillips acknowledges, “but today, companies, banks and utilities are simply worried about keeping the lights on and the bad guys out.”

Netskope’s David Fairman, SentinelOne’s Wayne Phillips and InnovationAus.com’s Corrie McLeod

Ultimately, he said, they’re more concerned with having tools to protect themselves and cannot wait for the government or governance to catch up. 

But, Mr Fairman warns, “There’s such a risk with us getting generative AI wrong; the ramifications are massive.”

He sees scams and fraud becoming more prevalent in the financial sector and, along with Mr Phillips, underscores the need for public-private partnerships to coordinate better cybersecurity strategies.

The challenges associated with AI use, such as bias, traceability and fairness, are compounded by disagreement over the industry’s own language and terminology, starting with what counts as an endpoint.

While Mr Fairman questions the traditional focus on securing endpoints in financial services, suggesting a shift towards protecting data and end-user interactions, Mr Phillips expands this view. 

He sees endpoints as encompassing a far wider range of devices, including critical network systems like the SWIFT network, and advocates a broader, more inclusive definition in today’s always-on, connected environments.

Both companies have used earlier forms of AI in varying degrees since around 2013, but applying natural language models is a newer step: it simplifies the interpretation of cybersecurity data, making the process faster and more efficient.

Mr Phillips said that protecting critical infrastructure increasingly “requires 24/7 security operations centre (SOC) monitoring across various endpoints, including mobile banking apps, ATMs, and network servers.” 

Mr Fairman responds that this raises the stakes to “ensure there’s a framework to understand these actions and their implications in the broader context of digital risk management.” 

Beyond simplifying practical AI cybersecurity, this kind of “black box” AI also helps with training and hiring, addressing the skills shortages prevalent in the industry.

“If you get junior grads coming in, you must get them up to speed quickly,” Mr Phillips said. “And if the tools are there and they respond in a language you understand as in your natural language, then it’s easier for them to upskill.” 

Mr Fairman echoes the point about an easier learning curve for newcomers but cautions that they must not only operate AI tools but also understand their outputs and implications. “You’ll find that people writing the right queries can get sharper, clearer, better-quality output and answers.”
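That point about query quality is easy to illustrate. The sketch below sends the same log lines to a model twice, once with a vague prompt and once with a specific one; the logs, prompts and model choice are all invented for this example.

```python
# Sketch: the same question asked two ways. The vague prompt leaves the
# model guessing; the specific one pins down scope, rules, and output.
# Logs and prompts are invented; requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

logs = (
    "2024-05-01 03:14 login ok user=svc_backup src=203.0.113.50 fails_prior=9\n"
    "2024-05-01 09:02 login ok user=jsmith src=198.51.100.7 fails_prior=0\n"
)

vague = f"Anything weird here?\n{logs}"

specific = (
    "You are reviewing VPN logs. Flag any login outside 07:00-19:00 or "
    "with three or more prior failed attempts, and state which rule each "
    "flagged line triggered:\n" + logs
)

for prompt in (vague, specific):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt.splitlines()[0]}\n{reply.choices[0].message.content}\n")
```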

He also warns there’s always the risk of large language models being manipulated to produce misleading results. He notes that adding human intervention to these processes can slow response times. 

Therefore, the challenge is to balance human skills with the speed of AI to quickly detect and respond to cyber threats, matching the pace at which adversaries are exploiting these technologies. 

“I think you’ll see this mature over time; I get to see quite a lot of attacks that are just constant, and companies are just trying to stop the bleeding at the moment,” said Mr Phillips.

Securing critical infrastructure: The regulatory vs the practical vodcast series and accompanying articles are produced by InnovationAus.com in partnership with SentinelOne. 

