Does Digital Tech Need to Be Regulated?

“We regulate bridges because if they fall, people die. Shouldn’t we regulate algorithms that can crash planes?” ITP Member.

At this month’s member meetup, we found ourselves circling a weighty question: Should digital tech professionals be regulated like doctors, lawyers, civil engineers, or accountants?

It’s not a new conversation, but it is becoming more urgent. Many professions whose work carries significant consequences for life, safety, or financial security are governed by laws that define professional standards and personal accountability. Civil engineers must be registered. Accountants face penalties for breaches. Teachers must uphold codes of conduct.

In contrast, most of the digital technology world operates on trust, standards, and good practice — not legislation. But as more of what we build becomes embedded in physical systems, from aircraft to combine harvesters, should that change?

When the Code Controls the Outcome

One of our members put it succinctly:

“It used to be that people controlled machines. Now machines control people. And we’re the ones writing the instructions.”

Think about it. A flaw in an autonomous driving system can lead to injury or death. Software that controls commercial aircraft, like the Boeing MCAS system implicated in two fatal 737 MAX crashes, has consequences far beyond a blue screen of death. Systems in trains, factory robots, and medical devices all rely on code that assumes the world will behave in certain ways.

The risk isn’t hypothetical — it’s real, and growing. As more automation enters our daily lives, so does the chance that a bug, a poorly considered assumption, or an overlooked edge case causes real harm.
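To make that concrete, here’s a deliberately simplified, hypothetical sketch (it is not real avionics code, and the threshold and trim values are invented for illustration) of how a single unexamined assumption can become a safety problem:

```python
# Hypothetical illustration only: a control routine that trusts a single
# sensor reading, assuming the sensor can never fail or misreport.
def pitch_correction(angle_of_attack_deg: float) -> float:
    """Return a nose-down trim command when the measured angle of attack
    looks dangerously high. Silently assumes the reading is always valid."""
    STALL_THRESHOLD_DEG = 15.0  # invented threshold for illustration
    if angle_of_attack_deg > STALL_THRESHOLD_DEG:
        return -2.5  # trim nose down (invented value)
    return 0.0

# A faulty sensor reporting a physically implausible value still drives
# the output, because the code never asks whether the input could be garbage.
print(pitch_correction(74.5))  # -> -2.5, acting on a nonsense reading
```

Cross-checking a second sensor, or rejecting implausible inputs, is exactly the kind of engineering discipline that professional accountability is meant to make routine.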

Which raises the question: Should we require software engineers working on safety-critical systems to be licensed, trained, and subject to professional accountability?

What About AI?

As AI systems become more embedded in everything from banking decisions to autonomous vehicles, the question of regulation becomes even more complex.

Large language models (LLMs) and other generative AI tools don’t always produce predictable outputs. Their outputs are shaped by vast amounts of training data, probabilistic models, and reinforcement learning, which means they can go off track in subtle ways. If an AI system causes harm, who is accountable? The engineer who built the system? The company that owns the IP? The model provider? The person who configures or operates it?

And let’s be clear: AI doesn’t build itself. Behind every model are people making decisions — about the data, the model’s behaviour, and the business outcomes it’s intended to support.

It might be time to treat some types of AI development as safety-critical work — especially when it intersects with human wellbeing, mobility, or financial decision-making.

Should We Keep the Status Quo?

Of course, regulation isn’t a silver bullet. It brings costs and complexity, and it often lags behind innovation. For much of digital tech, especially areas like front-end development, UX design, or internal tooling, the risk of harm is low. Mandating registration or certification across the board risks creating unnecessary barriers to entry in a sector already grappling with skills and capability shortages in some areas.

And professional regulation doesn’t automatically ensure ethical or competent behaviour — we’ve seen failures in medicine, law, and construction despite tight regulation.

In many ways, the current model — built on standards, codes of conduct, peer recognition, and ongoing learning — provides flexibility and responsiveness that regulation could stifle. We should be cautious about breaking something that, for many parts of our industry, still works.

So Where to From Here?

This isn’t a binary decision. It’s likely we’ll need targeted, risk-based regulation — focused on high-impact systems and critical infrastructure, rather than trying to wrap all of digital tech in red tape.

We also need to sharpen up our collective understanding of what “professionalism” looks like in digital tech — and when it matters most. That’s a conversation we’re committed to continuing with our members, government, and industry partners.

Let’s keep asking the hard questions. Not to limit innovation, but to ensure the systems we build — and the people who rely on them — are kept safe.

Vic MacLennan

CEO of IT Professionals, Te Pou Haungarau Ngaio, Vic believes everyone in Aotearoa New Zealand deserves an opportunity to reach their potential. As a technologist by trade, she is dedicated to changing the face of the digital tech industry: making it more inclusive, a place where everyone belongs. Vic is also on a quest to close the digital divide. Find out more about her mahi on LinkedIn.
