In this episode of Verizon's Securing AI podcast, Robert Le Busque, regional vice president for Asia Pacific at Verizon, and Dr Huon Curtis from the Tech Policy Design Centre discuss the growing importance of Large Language Model (LLM) firewalls for Australian businesses. They highlight the risks generative AI poses to sensitive enterprise data and how LLM firewalls can safeguard it by controlling data flows and ensuring compliance. These safeguards are particularly crucial in sectors such as telecommunications and anti-money laundering. They also explore how Centres of Excellence provide teams with the tools to innovate securely with AI technologies.
Securing AI
If we spend an extra million dollars on cybersecurity, are we a million dollars safer? This is a question Louise McGrath, Head of Industry Development and Policy at the Australian Industry Group (Ai Group), often hears from her members as they battle to keep costs down while securing their future through innovation. She discussed these issues with Chris Novak, Head of Cybersecurity Consulting at Verizon, in the latest episode of Verizon's Securing AI podcast series. Moderated by InnovationAus.com publisher Corrie McLeod, the conversation explores the challenge of quantifying cybersecurity risks in dollars and cents, and the transformative impact of generative AI on managing these investments.
As the AI arms race heats up across APAC, the technology is cementing itself as both a disinformation tool and a shield against cyber threats. In this episode of Verizon's Securing AI podcast series, Corrie McLeod talks to Mike Bareja, former deputy director of Cyber, Technology and Security at the Australian Strategic Policy Institute (ASPI), and John Hines, head of cybersecurity APAC at Verizon Business, about the democratisation of AI. This hard-hitting conversation explores the dangerous potential of AI-driven disinformation to subtly undermine societal trust as it floods digital channels with fabricated content and fuels new reports of identity hijacking.