You’re probably planning on using AI in your product. You might be sitting on a stash of proprietary data your business has collected over the years it’s been in operation – a database full of transactions or a file system full of documents. Or you might be looking at how your product can collect data and leverage AI during its day-to-day usage by your customers.
The Australian government, like the EU, the US, the UK, Canada and a host of other nations, has started the move towards legislating AI. After a process involving discussion papers, think tanks, and community consultation, the government has proposed that mandatory guardrails be put in place for AI used in high-risk settings. This is on top of the 10 voluntary guardrails already published in the Voluntary AI Safety Standard.
What those guardrails involve depends on whether your usage of AI counts as high risk or low risk.
High Risk vs Low Risk AI
It is a somewhat one-sided definition: High Risk AI is AI whose use can result in any of these impacts:
- Human Rights: discrimination or rights violations under Australian and international human rights law, particularly regarding protected attributes like race, sex, age, and disability.
- Health and Safety: potential impacts on physical and mental health, such as errors in diagnosis and treatment in medical applications, injuries caused by AI-controlled cars and machinery, or the mental health effects of social media.
- Legal Effects: this is the “robo-services” concern involving automated decision making with the potential to cause “adverse legal outcomes”, such as restricting access to essential services or defamation, especially when opting out is difficult.
- Societal Impacts: AI use must account for risks to societal well-being, including trust in institutions, social cohesion, and democratic integrity. You know, the kind of things governments are worried about.
- Severity and Extent of Impacts: this is the overall “guideline” for deciding whether a use of AI is High Risk. If the possible harms are minor or unlikely then the application might not be labelled High Risk.
If your use of AI doesn’t fall into any of those categories then it might be Low Risk. Might be. Businesses will be doing a lot of talking to lawyers as they come to terms with these coming changes.
There is another category, above High Risk, which governments are particularly concerned about: General Purpose AI (GPAI) models. These are defined as highly capable AI systems that can be adapted for various purposes, making it difficult to foresee all potential risks. Which models, if any, currently receive this classification is undecided, but GPAI will require the mandatory guardrails.
Understanding the Government’s Perspective
Governments worldwide are deeply worried about how fast AI is simultaneously advancing and being adopted, and how the combined impact of those two trends is going to play out.
These concerns grow out of big business’s excitement at the prospect of wholesale replacement of expensive employees with AI, colliding with AI’s inherent fallibility: the “hallucinations” and “obvious” mistakes that anyone who uses AI regularly will have experienced.
So governments are intent on:
- Ensuring public safety and preventing harm
- Protecting individual privacy and data rights
- Promoting fairness and preventing discrimination
- Maintaining national security and economic competitiveness
But at the same time everyone recognises that AI is going to cause a major shift in how the world works and how it is organised. No-one wants to be left behind, so governments want to foster AI usage, but they also want to balance it against public interests.
As they say, may you live in interesting times.
Impact of AI Legislation on Products and Data
As you consider the potential impact of AI regulations on your business, it’s important to understand which products and data types are likely to be affected and which might remain relatively unscathed.
Products and data likely to be impacted:
- AI-powered healthcare diagnostics and treatment recommendations
- Automated financial decision-making systems (e.g., loan approvals, credit scoring)
- AI-driven recruitment and hiring tools
- Facial recognition and biometric identification systems
- Autonomous vehicles and their decision-making algorithms
- AI systems used in law enforcement and criminal justice
- Large-scale data collection and processing for personalised advertising
Products and data less likely to be impacted:
- General-purpose chatbots and virtual assistants
- AI-powered content recommendation systems (e.g., for entertainment)
- Language translation tools
- AI-assisted design and creativity tools
- Basic data analytics and business intelligence applications
- AI-powered productivity tools (e.g., email sorting, scheduling assistants)
It’s worth noting that the degree of impact may vary depending on the specific use case and the sensitivity of the data involved. You’ll need to carefully assess where your products and data fall on this spectrum.
How AI Legislation Might Play Out For Different Businesses
To better understand how different businesses might incorporate regulatory requirements, let’s examine three hypothetical companies in various sectors:
PropTech: RentSmart Solutions
RentSmart Solutions provides comprehensive rental management services, including tenant selection, rental payments, and communications.
Regulatory considerations:
- Fair housing laws and anti-discrimination regulations
- Data privacy and security for tenant information
- Financial regulations for handling rental payments
Compliance strategies:
- Implement an AI-driven tenant screening system with built-in fairness checks to prevent bias in selection processes (see the sketch after this list).
- Develop a transparent algorithm for rental pricing that can be audited for fairness.
- Use encryption and secure data storage for all tenant information, with clear data retention and deletion policies.
- Implement a human-in-the-loop system for final approval on tenant selections and dispute resolutions.
- Regularly audit decision logs to ensure compliance with fair housing laws.
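To make the fairness-check and audit ideas concrete, here is a minimal sketch of the kind of check RentSmart might run over a batch of screening decisions. It applies the widely used four-fifths (80%) rule: the approval rate for each group should be at least 80% of the rate for the most-approved group. The group labels, record shape, and threshold are illustrative assumptions on my part, not anything prescribed by the proposed guardrails.

```python
# A minimal fairness audit for tenant-screening decisions, assuming each
# decision is a (group, approved) pair. The four-fifths (80%) rule flags
# any group whose approval rate is below 0.8x the best group's rate.
# Group labels and the 0.8 threshold are illustrative, not mandated.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Return {group: passed}, where passed means rate >= threshold * best rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Audit a batch of screening decisions before they are actioned.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
print(four_fifths_check(decisions))  # {'group_a': True, 'group_b': False}
```

Run on a schedule over the decision logs, a check like this becomes the regular audit mentioned in the last point.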
Food Delivery: FeastFleet
FeastFleet is a food delivery app that uses location information for deliveries and historical order data for driver allocation, restaurant recommendations, and discount offers.
Regulatory considerations:
- Data privacy and location tracking regulations
- Fair labour practices for gig economy workers
Compliance strategies:
- Implement granular location permissions, allowing users to control when and how their location is tracked.
- Develop an explainable AI system for driver allocations to ensure fair distribution of work.
- Use anonymised data for restaurant recommendations and promotional offers to protect user privacy (one approach is sketched after this list).
- Implement a robust data deletion process for users who delete their accounts.
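As one example of what “anonymised data” can look like in practice, here is a minimal sketch of how FeastFleet might pseudonymise order records before they reach the recommendation pipeline. The record layout and field names are assumptions for illustration; keying an HMAC over the user ID is one common approach, not the only one.

```python
# Pseudonymise order history before it feeds the recommender: a keyed
# hash (HMAC) replaces the raw user ID so the analytics side never sees
# it, and rotating the secret key breaks any accumulated linkage.
# The record layout below is an illustrative assumption.

import hmac
import hashlib

SECRET_KEY = b"rotate-me-regularly"  # in practice, from a secrets manager

def pseudonymise(user_id: str) -> str:
    """Stable, non-reversible token for a user ID under the current key."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def strip_identifiers(order: dict) -> dict:
    """Keep only what the recommender needs; drop direct identifiers."""
    return {
        "user": pseudonymise(order["user_id"]),
        "restaurant": order["restaurant_id"],
        "items": order["items"],
        # Deliberately dropped: name, address, phone, precise location.
    }

order = {"user_id": "u-1042", "restaurant_id": "r-77",
         "items": ["pad thai"], "name": "A. Customer", "address": "..."}
print(strip_identifiers(order))
```

Strictly speaking this is pseudonymisation rather than full anonymisation; for promotional analytics you may also want to aggregate records so that no individual user is identifiable at all.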
FinTech: MicroShares
MicroShares is a fintech app that allows users to buy and sell partial shares in stocks on demand.
Regulatory considerations:
- Financial regulations and securities laws
- Anti-money laundering (AML) and Know Your Customer (KYC) requirements
- Data privacy and security for financial information
Compliance strategies:
- Implement a robust AI-driven KYC and AML system with human oversight for flagged cases.
- Develop an explainable AI model for investment recommendations that complies with financial advice regulations.
- Use secure, encrypted channels for all financial transactions and data storage.
- Implement a real-time transaction monitoring system to detect and prevent fraudulent activities (see the sketch after this list).
- Develop clear, AI-generated reports for users that explain their investment performance and associated risks in plain language.
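To illustrate the transaction-monitoring point, here is a minimal rule-based sketch in which flagged trades are queued for human review rather than silently blocked, in line with the human-oversight approach above. The thresholds and rule names are illustrative assumptions; a production system would layer model-based risk scoring on top of simple rules like these.

```python
# Rule-based transaction monitoring with a human review queue.
# Thresholds are illustrative assumptions, not regulatory figures.

from collections import deque
from dataclasses import dataclass, field
import time

@dataclass
class Monitor:
    amount_limit: float = 10_000.0        # single-trade size trigger
    velocity_limit: int = 5               # max trades per rolling window
    window_seconds: int = 60
    recent: dict = field(default_factory=dict)        # user -> timestamps
    review_queue: list = field(default_factory=list)  # flagged trades

    def check(self, user: str, amount: float, now: float | None = None) -> bool:
        """Return True if the trade may proceed; flagged trades go to review."""
        now = time.time() if now is None else now
        q = self.recent.setdefault(user, deque())
        while q and now - q[0] > self.window_seconds:
            q.popleft()                   # drop trades outside the window
        q.append(now)
        flags = []
        if amount > self.amount_limit:
            flags.append("large_amount")
        if len(q) > self.velocity_limit:
            flags.append("high_velocity")
        if flags:
            self.review_queue.append((user, amount, flags))  # human looks here
            return False
        return True

m = Monitor()
print(m.check("u-1", 250.0))     # True: routine trade proceeds
print(m.check("u-1", 50_000.0))  # False: queued for human review
```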
These fictional micro case studies demonstrate how businesses in different sectors can proactively address regulatory concerns while building products around AI.
Strategies for Compliance and Adaptation
As you’re working out your product strategy you’re going to want to take a whole list of things into consideration. This is that list:
- Prioritise transparency and explainability: Ensure your systems can clearly articulate how decisions are made, especially if your product and its usage of AI falls in the High Risk category (a minimal decision-log sketch follows this list).
- Implement robust data governance: Establish clear protocols for data collection, storage, and usage. If your product has an international audience you may already be intending to comply with privacy regulations like GDPR or CCPA.
- Conduct impact assessments: Evaluate the potential risks and societal impacts of your AI systems.
- Foster a culture of responsible AI: Invest in training your team on ethical AI practices and create internal guidelines for AI development and deployment.
- Consider alternative approaches: If certain applications become heavily regulated, explore less regulated areas or focus on developing AI tools that augment human decision-making rather than replace it entirely.
- Stay agile and adaptable: Be prepared to pivot your product strategy or business model if necessary, keeping a close eye on regulatory developments.
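On the transparency and explainability point, a surprising amount of the practical work is just disciplined record-keeping. Here is a minimal sketch of a decision log: every AI-assisted decision is appended as a JSON record with its inputs, model version, output, and plain-language reason codes, so it can be explained to a user or audited later. The schema and reason codes are illustrative assumptions, not a prescribed format.

```python
# Append-only decision log: one JSON record per AI-assisted decision,
# capturing what the model saw, what it decided, and why.
# The schema below is an illustrative assumption.

import json
import uuid
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: str,
                 reason_codes: list[str], log_file: str = "decisions.jsonl"):
    """Append one auditable decision record as a line of JSON."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,                # what the model saw
        "output": output,                # what it decided
        "reason_codes": reason_codes,    # why, in plain terms
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

log_decision(
    model_version="screening-v1.3",
    inputs={"income_to_rent_ratio": 3.1, "references": 2},
    output="approved",
    reason_codes=["income_ratio_above_minimum", "references_verified"],
)
```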
For a deeper dive and a guided process on assessing your AI use, the Department of Industry, Science and Resources has published the AI Impact Navigator along with a set of tools and templates for using it.
Wrapping it all up
Chances are your use of AI falls in the Low Risk category. And as long as you practise good data governance and comply with any privacy regulations that apply to the data you collect, the burden of any new AI legislation will probably be minimal.
If your product falls into the High Risk category – and that is where many profitable niches live, particularly in the health and finance domains – then you need to be prepared to deal with the requirements.
On the bright side, by the time the AI legislation rolls into law there will probably be AI-based tools to help you with achieving compliance anyway.