Govt’s plan on a page for public sector AI

After announcing last year that it would pursue a “light-touch, proportionate and risk-based” approach to regulation, the Government has released its framework for public sector use of AI.

The Public Service AI Framework is a plan on a page with a dearth of supporting information, at least for now. It consists of a single PowerPoint slide on which the strategic plan template has been liberally applied.

Here’s what it looks like:

The Department of Internal Affairs, which has responsibility for all things digital across government and oversaw the development of the framework, says it has been developed to “support a structured approach to the development, deployment and use of AI across the New Zealand Public Service”.

It will also “support leaders, decision-makers, practitioners and influencers of AI within Public Service agencies to use AI lawfully and in line with Public Service values,” according to DIA.

The Government Chief Digital Officer surveyed government departments last year about their use of AI and found that 37 out of the 50 agencies that responded had at least one AI use case.

A total of 108 use cases were identified across those 37 agencies, with AI largely used internally to boost productivity, automate routine tasks, and process work more efficiently.

As widely signalled last year, the framework draws heavily on the OECD’s Five AI Principles, adopted by that organisation in 2019 to promote the use of AI that is “innovative and trustworthy and that respects human rights and democratic values”.

The five principles are:

- Inclusive growth, sustainable development and well-being

- Human rights and democratic values, including fairness and privacy

- Transparency and explainability

- Robustness, security and safety

- Accountability

DIA suggests the Public Service AI Framework should be used to support government agencies to “ensure that they’re using AI lawfully, safely and responsibly for its benefits, while maintaining public trust and aligning with overall Public Service values”. 

However, one privacy expert who has reviewed the framework says that without supporting documentation, it contains a fair amount of ambiguity.

“Good it's arrived but sorely lacking in much-needed detail,” Simply Privacy’s Frith Tweedie pointed out on LinkedIn.

For instance, the framework is big on responsible AI buzzwords.

“The framework demonstrates a consistent focus on adopting AI responsibly. But what constitutes adopting and embracing AI ‘responsibly’ in this context? This definitely needs clarification,” Tweedie wrote.

“There are repeated references to ‘safe’ and safety, terms which do not have a settled meaning in the AI world. For some, this means ‘avoiding killer robots and existential harms’ while others use this to include avoiding issues like discrimination. What does it mean here?”

Tweedie also highlighted the omission of anything on AI literacy and on ensuring public servants know how to use AI constructively and responsibly.

“This is an important oversight at this stage,” she wrote.

AI literacy and skills will require attention, with DIA’s own survey of government departments identifying a lack of skills as the biggest barrier to AI adoption.

Source: DIA

The framework is the first major piece of policy work released since the interim advice for the public service on the use of generative AI was published in 2023.
