Professionalism, Standards, and Ethics in Digital Tech – My Guest Lecture

Earlier this week, I had the opportunity to deliver a guest lecture at the University of Waikato as part of an "Introduction to Digital Professional Skills" series. I was asked to talk about Codes of Conduct, Ethics and Professionalism - generally seen as boring, yet so very important. Here is a summary of that lecture.

Professionalism in Digital Tech

I began by talking about professionalism in digital technology, something that can often be overlooked in a fast-paced industry like ours. Unlike doctors or lawyers, digital technology professionals in New Zealand don’t have strict regulatory bodies or governing legislation - this is really important, so I will repeat it: we don’t have governing legislation. This means that we set our own bar for what it means to be a professional. It’s not just about technical skills - it’s about integrity, accountability, and the way we collaborate with others. In a field where so much is built on trust, professionalism ensures that the work we do is reliable and respected.

The Role of Codes of Conduct

From there, I moved into the topic of codes of conduct—generally, then specifically, the ITP Code of Ethics. This is the ethical guideline we follow as digital professionals in New Zealand. It outlines principles like integrity, competence, and the public interest. I stressed how important it is to have a framework like this, especially when we’re navigating complex situations that don’t always have a clear right or wrong. A code of conduct provides structure, helping us stay accountable, ethical, and aligned with the needs of our clients and the wider community.

Why Standards Matter

Before tackling ethics, I felt the need to explain why standards are so important for us to adopt here in Aotearoa, so I explored standards and their significance in digital projects. Standards - like those from ISO/IEC - provide a blueprint for quality, security, and interoperability, ensuring that the solutions we develop are reliable and scalable. I emphasised how adhering to standards sets projects up for success, reduces errors, and ensures that the tech we build integrates seamlessly with other systems.

Standards are especially important in a globalised world, where our solutions need to work across borders, meet the legislative requirements of other jurisdictions, and satisfy the expectations of regulators around the world. In a rapidly changing industry like digital technology, standards help maintain consistency and reliability, no matter how fast we’re moving.

The Big Picture: Ethics

One of the most important topics I covered was ethics, not just in digital technology but in everyday life. Ethics are about doing what’s right, not just what’s convenient or profitable. I spent a while describing how important it is to find your own set of moral and ethical values, and to be clear on whether what you are asked to do in a work context aligns with them. Ethics also form the foundation of how we operate as professionals, so I used an example from the students’ own lives - group projects - where fairness and accountability come into play, even in everyday decisions. What if one of your group didn’t contribute to the completion of a project, yet expected to share the same grade as those who did the mahi? What if you let them do this without speaking up, and the result enabled them to gain a scholarship?

Ethics guide us in making difficult choices, ensuring that we act with integrity even when it’s uncomfortable. But being ethical also leads to the need for brave conversations, times when we need to go against the grain and speak up. Nathan covered this in his lovely blog Fearless Advice earlier in the year.

The Complexities of Ethics in AI

Finally, I focused on ethics in AI - it’s just so important that we needed to discuss it, and to be fair, most of the questions I fielded were on this topic. AI has the potential to transform industries, but it also brings up unique ethical challenges - bias, accountability, transparency, and control. We explored how AI systems, if not designed ethically, can reinforce societal biases or operate in ways that aren’t transparent to users. The decisions made by AI can have a profound impact on people’s lives, from healthcare to financial services, which is why it’s crucial to establish ethical guidelines early in the development process.

AI ethics isn’t just about avoiding bias; it’s also about ensuring that we maintain human oversight, remain accountable, and provide transparency so that users can trust the systems they’re interacting with. We discussed the pros and cons of black-box AIs (for want of a better term) versus those that offer transparency about their models and explainability of their results - whether you use an AI as your back end, as your engine, or simply call out to it, transparency matters.

Lastly, we got into a discussion on how the exploding number of AI products and engines are being trained. AI models are only as good as the data they are trained on, meaning that if the data is biased, incomplete, or inaccurate, the results produced by the AI will be flawed. High-quality, diverse, and representative data is essential to ensuring that AI systems make reliable, fair, and effective decisions. Without this, we risk perpetuating biases or producing ineffective models that can mislead or harm users. Ethical AI development starts with how we source, select, and refine the data used to train these engines. It’s crucial that we prioritise data quality and transparency throughout the process, as the impact of poor data can extend far beyond the initial training phase, affecting real-world decisions and outcomes. Ethical oversight during data collection and training isn’t just a best practice - it’s a responsibility.

Why should we care?

In today’s fast-moving digital landscape, ethics, professionalism, and standards are the foundation that keeps innovation both responsible and sustainable. As technology shapes more aspects of our lives—especially with the rise of AI—it's easy to get caught up in the thrill of new possibilities. But with that comes a growing responsibility. Whether you're just starting your career or you're a seasoned professional, understanding and embedding these principles in your work isn't just about following rules—it's about ensuring that the technology we create benefits society as a whole, protects people’s rights, and fosters trust. In the end, what we build reflects who we are. We should all care because these principles are what allow technology to truly improve lives without causing harm.

Vic MacLennan

CEO of IT Professionals, Te Pou Haungarau Ngaio, Vic believes everyone in Aotearoa New Zealand deserves an opportunity to reach their potential, so as a technologist by trade she is dedicated to changing the face of the digital tech industry - to make it more inclusive, a place where everyone belongs. Vic is also on a quest to close the digital divide. Find out more about her mahi on LinkedIn.
