The rise of the ‘Responsible AI Office’ 


Jason Stevens
Contributor

“I advocate that every company form a Responsible AI Office,” Balakrishna D.R. (Bali) said, underscoring its crucial role in steering AI-first strategies safely through the quagmire of data privacy, IP integrity, and bias mitigation. 

He practices what he preaches as the global head of AI and automation, application development and maintenance at Infosys, a company with around 328,000 employees in over 56 countries. “We’ve put in place automated guardrails that carefully monitor every prompt request and response, ensuring data privacy, security, and fair processing,” he said. 

This initiative reflects a broader commitment to responsible AI practices, setting a benchmark in the industry. 

Speaking with InnovationAus.com editorial director James Riley in a recent episode of Commercial Disco, he outlined IT modernisation efforts at Infosys — and for its enterprise clients using generative AI.  

“We’re implementing AI assistants in almost every role across the organisation,” he shared. This rollout aims to help software developers, testers, and sales personnel do their day-to-day jobs faster and better than before. 

“We aim to amplify human potential — not replace people — in every role with these AI assistants.”

The company processes about a million resumes yearly. AI assistants help recruiters efficiently sift through this volume, selecting the best candidates and dramatically improving their productivity and decision-making accuracy. 

While he paints an upbeat picture of AI-driven modernisation, getting there requires side-stepping risks that, he cautions, can trip up overeager companies.  

“From our perspective, it’s not a one-time activity,” he cautioned, “and it’s not just a technology migration from one technology to another”.

Many of Infosys's clients are hampered by outdated legacy platforms whose original developers and operators have long since retired, leaving no one able to service these systems.  

This gap and the added infrastructural risk make modernisation complex but achievable with generative AI. 

The approach unlocks innovative methods to revamp legacy systems. “With GenAI, we extract existing business rules from old applications, enhancing efficiency while reducing costs and risks,” explains Balakrishna. 

For example, GenAI can interpret and document legacy code, such as COBOL, extracting crucial business rules. While AI can convert legacy systems like COBOL to Java, Infosys strategically chooses not to mirror outdated architectures within modern AI frameworks. 

“Instead,” he clarifies, “we focus on reverse engineering, followed by utilising AI for forward engineering tasks, including code completion.”
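The reverse-engineering step he describes can be sketched roughly as follows. This is an illustrative sketch only, not Infosys's actual tooling: `call_llm` is a hypothetical stand-in for any chat-completion API, and the COBOL fragment is invented for the example.

```python
# Sketch: prompting an LLM to extract business rules from legacy COBOL
# (reverse engineering) rather than translating the code line-for-line.
# NOTE: call_llm is a placeholder, not a real API.

COBOL_SNIPPET = """\
       IF CUST-BALANCE > 10000
           MOVE 'GOLD' TO CUST-TIER
       ELSE
           MOVE 'STANDARD' TO CUST-TIER.
"""

def build_rule_extraction_prompt(source: str) -> str:
    """Wrap legacy source in a reverse-engineering instruction."""
    return (
        "You are a reverse-engineering assistant.\n"
        "List the business rules implemented by this COBOL fragment "
        "as plain-English statements. Do not translate the code.\n\n"
        f"```cobol\n{source}```"
    )

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this calls a model endpoint.
    return "Rule 1: Customers with a balance above 10,000 are tier GOLD."

if __name__ == "__main__":
    print(call_llm(build_rule_extraction_prompt(COBOL_SNIPPET)))
```

The extracted rules, not the COBOL itself, then become the input to the forward-engineering step he mentions, so the new Java system is designed around the business logic rather than the old architecture.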

Infosys is currently involved in more than 80 projects focused on generative AI, drawing on lessons learned internally from building its own enterprise AI-first platform.  

While the challenges of AI regulation, data privacy and ethics diverge across its global client base, he points to the benefits of small proof-of-concept (PoC) rollouts before unleashing a company-wide enterprise AI platform. 

He recommends beginning with modest PoC initiatives, drawing from a recent engagement with a wealth management client.  

With 18,000 managers swamped by the need to analyse 100,000 documents for client advisories, “it was physically impossible for them to actually go through all of them to provide a meaningful response.” 

Generative AI simplified the task, giving managers AI assistants that can digest and handle the massive document load. The next step involves scaling up to an enterprise application architecture under a shared AI vision across the organisation. 

“Without this vision, there is minimal interaction between various business divisions, leading to discord among groups – one of the most common causes of failure in AI-related projects,” he said. 

Critically, responsible AI remains baked into the enterprise architecture, with vigilance required to safeguard sensitive data. “For example,” he notes, “we check to ensure that the prompt doesn’t contain personally identifiable information and that there’s no intellectual property leakage in the request.”

This meticulous screening is applied to both requests and responses, reflecting a multi-dimensional approach to data security and integrity in the age of AI.  
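A minimal sketch of that kind of two-sided screening might look like the following. This is a simplified illustration under my own assumptions, not Infosys's guardrail: real systems would use far richer PII and IP detectors than these two regular expressions.

```python
import re

# Illustrative guardrail sketch: screen both the outgoing prompt and the
# incoming response for obvious PII patterns (here, just email and phone).

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def screen(text: str) -> list[str]:
    """Return the names of the PII patterns found in `text`."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def guarded_call(prompt: str, model) -> str:
    """Block prompts containing PII; withhold flagged responses."""
    if hits := screen(prompt):
        raise ValueError(f"Prompt blocked, PII detected: {hits}")
    response = model(prompt)
    if screen(response):
        return "[response withheld: PII detected]"
    return response
```

The same `screen` function is applied on both sides of the model call, mirroring the request-and-response checking described above.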

