Griffin on Tech: Aussie floats mandatory guardrails for AI

The Australian Government this week published proposed mandatory guardrails for high-risk AI, as well as a voluntary safety standard for organisations using AI that will act as a stopgap until mandatory rules take effect.

The proposed mandatory rules may form the basis of an AI Act akin to Europe’s, part of Australia’s efforts to regulate the use of AI, the current state of which one academic this week described as a “mess”.

“The central problem is that people don’t know how AI systems work, when they’re using them, and whether the output helps or hurts them,” wrote Nicholas Davis, industry professor of emerging technology and co-director of the Human Technology Institute at the University of Technology Sydney.

“Take, for example, a company that recently asked my advice on a generative AI service projected to cost hundreds of thousands of dollars each year. It was worried about falling behind competitors and having difficulty choosing between vendors,” he continued in The Conversation.

“Yet, in the first 15 minutes of discussion, the company revealed it had no reliable information around the potential benefit for the business, and no knowledge of existing generative AI use by its teams.”

Similar conversations are happening on this side of the Tasman, where a report from the AI Forum, AI in Action: Exploring the Impact of Artificial Intelligence on New Zealand's Productivity, which I was involved in compiling, this week revealed that AI uptake across New Zealand organisations is high at 67%, with over half of that relating to generative AI in particular.

Our businesses have been experimenting extensively with AI since the debut of ChatGPT, and to good effect, according to the survey respondents. Here are some of the other report highlights:

Financial benefits: 50% of participants reported a positive financial impact on output, and 62% reported operational cost savings. There is a strong correlation between the amount spent on AI and the financial benefits gained.

Increased efficiency: 96% of respondents indicated that AI has made workers more efficient.

Minimal job displacement: Only 8% reported that AI had replaced employees, and those who did reported minimal displacement (5-10%). 

Setup and ongoing costs: 52% of participants reported initial setup costs, and 62% reported ongoing costs. These costs were generally at the lower end, though some participants reported high initial setup costs.

That all sounds quite promising, but other studies focused on responsible AI, including a major upcoming Australasian study whose results I’ve seen, suggest we are equally messy when it comes to AI governance and practising responsible AI.

So the Australian moves serve a real need and have the potential to inform our own efforts. What do the proposed mandatory rules, on which the Australian Government is currently holding a public consultation, look like?

They outline a risk-based approach to regulating AI, requiring testing, transparency and accountability. The ten proposed mandatory requirements are:

  1. Establish, implement and publish an accountability process, including governance, internal capability and a strategy for regulatory compliance

  2. Establish and implement a risk management process to identify and mitigate risks

  3. Protect AI systems, and implement data governance measures to manage data quality and provenance

  4. Test AI models and systems to evaluate model performance and monitor the system once deployed

  5. Enable human control or intervention in an AI system to achieve meaningful human oversight

  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content

  7. Establish processes for people impacted by AI systems to challenge use or outcomes

  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks

  9. Keep and maintain records to allow third parties to assess compliance with guardrails

  10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails

The Aussies have laid out three options for legislation to underpin the AI guardrails mandate: tweaking existing laws, setting up new framework legislation, or implementing an economy-wide AI Act, akin to the EU approach.

Nothing would come into effect before 2025, hence the voluntary safety standard that’s also been released, which the Australian Government is urging organisations to adhere to.

Use of AI is characterised by “an array of reckless rollouts, low levels of citizen trust and the prospect of thousands of Robodebt-esque crises across both industry and government,” says Professor Davis.

So getting the regulation right is hugely important. As he points out, AI exacerbates the “information asymmetry” we all face as consumers and citizens every day. We have uneven knowledge about products and services, whether it’s buying a plane ticket or getting a quote to build a new house.

AI and large language models are black boxes, offering little transparency into how decisions are made - that’s the secret sauce businesses want to keep to themselves. But the risk of harm is great.

“It can lead to poor-quality goods dominating the market, and even the market failing entirely,” says Davis.

As our own Government formulates its roadmap for AI, there’s a strong case for aligning our efforts with Australia’s and pursuing a similar timeline for tweaking existing legislation or putting a new regulatory regime in place.
