How trusted enterprise AI can drive the next economic boom 


A significant body of research predicts that artificial intelligence will boost the Australian economy.

The Kingston AI Group predicts the economy will grow by more than $200 billion annually. According to IDC, the market for AI applications delivered through the cloud is expected to grow from US$12.5 billion in 2022 to US$42.6 billion in 2027.

Generative AI applications are expected to make up a growing share of this AI-centric applications market: they accounted for only 4.8 per cent of the market in 2022, but this share is expected to exceed 28 per cent by 2027.

Enterprise AI will be the driving factor behind this growth. Enterprise AI applications are typically purpose-built for specific work contexts, in contrast to the open-ended nature of consumer AI.

Enterprise AI systems are deployed in more controlled environments and are grounded in curated data, generally obtained with the consent of enterprise customers. This limits the risk of hallucinations and increases accuracy.

Regulating high-risk AI 

The Australian government has signalled it will create three types of new regulation: voluntary guidelines, mandatory guardrails for high-risk AI, and possibly a requirement that AI-generated images be labelled.

For high-risk AI, it will be important to consider a range of factors, particularly in relation to enterprise AI, which is more mature in its use than consumer AI:

  • Activity that has a high risk of physical impact such as management or operation of critical infrastructure in energy, transportation, and water; 
  • Economic impact, including automated determinations of eligibility for credit, employment, educational institutions, or public assistance services; 
  • Government decision-making such as law enforcement/criminal justice and migration/asylum; 
  • Impact on democracy and the rule of law, for example, the spread of disinformation at scale; and 
  • Violations of internationally recognised human rights. 

The 3Ds of the AI ecosystem: Developers, deployers and distributors 

It is crucial that regulation is applied responsibly and appropriately across the value chain, and that it recognises the 3Ds: developers, deployers, and distributors:

  • Developers should be defined as entities that design, code, or produce AI systems. This definition accounts for companies making both predictive and generative AI systems.  
  • Deployers should be defined as entities that are using or modifying an AI system under their authority. This definition is important because while developers make AI systems, some of these systems are customisable and become specific to the deployer once the deployer inputs its data. 
  • Distributors should be defined as entities, other than the developer or deployer, that integrate an AI system into a downstream application or system without substantial or intentional modification. Distributors provide customers with a platform or interface that allows general-purpose systems to be tailored to fulfil more narrow business applications of AI.

In general, this ecosystem also means that enterprise AI companies handle data in line with their contractual obligations and their ethical guidelines. Further, these contracts are regularly reviewed to remain aligned with the high standards of business customers and responsive to the risk environment.

In contrast, consumer AI companies publish terms of service that consumers can read to understand what data will be collected and how it will be used, but consumers have no ability to negotiate those terms or tailor them to their specific preferences.

The role of transparency

Salesforce believes that humans and technology work best together. To facilitate human oversight of AI technology, transparency is critical. This means humans should remain in control, equipped with documentation to understand the genesis, limitations, and proper use of the AI system.

For example: 

  • Developers should provide their deployers and distributors with information such as model cards and documentation outlining the proper use of the system, to help deployers and end-users utilise it correctly (a minimal sketch of a model card follows this list);
  • Deployers should provide end-users with information about the proper use of the AI system, perform assessments of the AI model, and ensure there are clear terms of use for end-users; and
  • Distributors should provide information on their data governance program to both the developers and deployers that interact with their platform. Details should include policies on data retention, data minimisation efforts, and audit procedures.
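
As a minimal sketch of the kind of documentation a developer might hand over, the following represents a hypothetical model card as a plain Python dictionary. The field names and values here are illustrative assumptions, not a Salesforce or regulatory schema.

    # Illustrative only: a minimal model card as a plain Python dict.
    # Field names are hypothetical, not drawn from any published schema.
    model_card = {
        "model_name": "example-ticket-classifier",  # hypothetical model
        "version": "1.0.0",
        "developer": "Example Developer Pty Ltd",
        "intended_use": "Ranking customer support tickets by urgency",
        "out_of_scope_uses": [
            "Credit or employment eligibility decisions",  # high-risk uses
        ],
        "training_data_summary": "Curated support data, consensually obtained",
        "known_limitations": [
            "English-language tickets only",
            "Accuracy degrades on tickets under 10 words",
        ],
    }

    # A deployer could surface this documentation to end-users,
    # alongside its terms of use.
    for key, value in model_card.items():
        print(f"{key}: {value}")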

Data governance: standards on data storage and use

Everyone in the AI value chain should endeavour to store personal data only for as long as it is required, and only for its originally intended purpose. Developers, deployers, and distributors should all have an external policy outlining clear rationales for the retention of data as well as clear timeframes for its deletion.
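
To make this concrete, here is a minimal sketch, in Python, of how a retention policy with per-purpose timeframes could be enforced. The purposes and periods are hypothetical assumptions; real timeframes would be set by an organisation's legal and privacy teams.

    from datetime import datetime, timedelta, timezone

    # Hypothetical policy: maximum storage period per original purpose.
    RETENTION_PERIODS = {
        "support_case": timedelta(days=365),
        "marketing_consent": timedelta(days=730),
    }

    def is_expired(created_at, purpose, now=None):
        """Return True if a record has outlived its retention period."""
        now = now or datetime.now(timezone.utc)
        return now - created_at > RETENTION_PERIODS[purpose]

    # Records past their timeframe would be queued for deletion.
    created = datetime(2022, 1, 1, tzinfo=timezone.utc)
    print(is_expired(created, "support_case"))  # True: older than 365 days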

All members of the AI value chain should be clear with users about what is being done with the data with which they are entrusted. For example, Salesforce uses changelog capabilities that track what was created by AI, when, by which system, and how that AI-generated item (action, content, etc.) flowed through the system.
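
The article does not publish the schema behind these changelog capabilities, so the following is an illustrative sketch only: a hypothetical record type capturing what was created, when, by which system, and how it flowed downstream.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Assumed structure; not Salesforce's actual changelog implementation.
    @dataclass
    class ChangelogEntry:
        item_id: str            # the AI-generated item (action, content, etc.)
        generating_system: str  # which AI system produced it
        created_at: datetime    # when it was created
        flow: list = field(default_factory=list)  # systems it passed through

        def record_hop(self, system):
            """Append a downstream system the item flowed through."""
            self.flow.append((system, datetime.now(timezone.utc)))

    entry = ChangelogEntry(
        item_id="draft-reply-001",
        generating_system="example-llm-service",
        created_at=datetime.now(timezone.utc),
    )
    entry.record_hop("review-queue")
    entry.record_hop("crm-record")
    print(entry.flow)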

The importance of a multi-stakeholder approach  

Salesforce believes in the tremendous opportunities AI can bring to individuals and businesses alike – with proper governance. We support a multi-stakeholder approach to AI policymaking, prioritising the design of flexible, nuanced, and adaptive policies that respond to the rapid pace of AI innovation.

Enterprise AI companies, like Salesforce, have unique perspectives on how to tackle some of the most pressing concerns policymakers are grappling with. It’s only through governments, industry, and civil society working together that we can avoid the pitfalls and realise the gains of the AI economic boom.

To learn more, read Salesforce’s latest policy paper, A Trusted Framework for Enterprise AI.

Sassoon Grigorian is the Vice President of Government Affairs & Public Policy, APAC and Japan at Salesforce.  

This article was produced by Salesforce in partnership with InnovationAus.com.  
