Griffin on Tech: GovGPT’s baby steps and Harari’s AI warning
The unveiling this week of GovGPT, a government chatbot that will help answer questions for businesses seeking to interact with government agencies, attracted more media headlines than I expected.
After all, it's a simple application that Callaghan Innovation was able to spin up within a couple of months using off-the-shelf AI tech, trained on “a small sample of our government websites”.
You can pretty much use ChatGPT, Perplexity or Gemini to get reasonable answers to your questions about New Zealand government services - they are likely to draw on the same websites that have been scraped to inform GovGPT.
But, as a visiting AI expert from the US told me this week, trying to make that happen in the federal government over there would take years. As a small nation, we are well-positioned to experiment with proof-of-concept AI tools and services in the public sector.
Digitising government… some more
The other reason GovGPT attracted a lot of attention this week is that many Kiwis have a frustrating time dealing with government agencies. It’s part of the reason why Judith Collins, who launched GovGPT at the AI Summit this week, created the digitising government ministerial portfolio. She wants to see a customer service revolution in government and sees AI as being central to that.
All power to her. But GovGPT represents only a baby step towards that goal. The multi-lingual chatbot will save you the effort of doing Google searches and trawling agency websites, but it is really just a way to fast-track your information gathering.
The next step is using AI to draw on your personal data to make suggestions, process interactions or even make decisions on your behalf. In the public sector, that’s complex and high-risk, but where the friction can really be removed.
The private sector is already doing it. Next week I’ll attend Dreamforce in San Francisco, the annual conference of customer relationship management software maker Salesforce, where the focus will be all about AI agents. Salesforce founder Marc Benioff sees AI agents as being the future of his company, and a game-changer for the businesses that use the Salesforce platform.
AI agents are software programs that can not only answer questions but also perform tasks and automate processes for users. One day, I’ll hopefully be able to use an AI agent to calculate my GST and provisional tax, prepare my tax returns and file them with IRD. That would represent a huge productivity boost for me. The startup Hnry currently offers a stop-gap version of that for sole traders like me, doing your taxes and filings on your behalf - but it charges 1% of your annual income, capped at $1,500.
Those types of use cases, extended across other agencies like ACC, MSD, and MBIE, have huge potential to make government more efficient. But it will require a great deal of trust on the part of citizens to allow AI access to the personal data that will make those agents genuinely useful. We have a long way to go to get to that point. GovGPT serves a useful purpose in helping establish that trust and in building confidence within government to work with AI while putting the appropriate guardrails in place.
Required reading on AI
Yuval Noah Harari, the Israeli historian and best-selling author of Sapiens and Homo Deus, is back with Nexus, a slightly geeky and sweeping history of information networks. It’s really a book about AI, a subject Harari has become a noted commentator on since voicing his unease about its rise in Homo Deus and in subsequent essays and interviews.
The basic premise of Nexus is that AI marks a turning point in the history of information. There have been many such turning points over the course of human history, from the impact of Gutenberg’s printing press to the advent of the broadcast era, and then the rise of computers and the internet.
But Harari argues that AI is different: it has the potential to assume control of decision-making on our behalf with little transparency into its workings, and there is the looming prospect of it becoming superintelligent, a concern that has been well documented elsewhere.
Technologists tend to look through rose-coloured glasses at the various waves of technological change humanity has witnessed over the millennia. They downplay the negatives to talk up technology as a force for good. But Harari, with examples ranging from the Roman Empire to Nazi Germany and the destructive imperialism that stretched between the two, points out that powerful information networks were not only liberating and enlightening but also controlling and tyrannical.
Harari asks whether we can expect to "muddle through" the next information revolution with AI dominant in our lives. My sense is that he isn’t optimistic, but in his brief epilogue, he offers an alternative to an AI apocalypse.
“We must abandon both the naive and populist views of information, put aside our fantasies of infallibility and commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms,” Harari writes.
That involves the methodical and complex business of developing fit-for-purpose regulations and guardrails for AI: reducing its capacity to produce bias and misinformation, ensuring it is explainable and auditable, and providing sufficient human oversight so that we control it rather than the other way around.
Given the breakneck pace of change in AI, how do we do that in a way that accounts for exponential progress? This will turn out to be the major challenge of the next era of the information age.