Substantial AI regulation – EU first off the blocks
On 13 March 2024 the European Parliament passed the Artificial Intelligence Act (AI Act). This is a world-first attempt at substantially regulating the use of AI and provides a good indication of how regulation may develop around the world.
The use of AI is rapidly expanding. As in many areas of technology or science, the creation of new tools and techniques raises moral and philosophical questions about how they should be used – just because we can doesn’t mean we should. The EU has taken a leading role in the regulation of the use of personal information through the General Data Protection Regulation (GDPR), and, as the first mover, will now set a benchmark for other countries grappling with the regulation of AI, including New Zealand.
The approach taken by the European Parliament under the AI Act has been to classify AI systems by risk, with higher-risk systems subject to more onerous obligations. This ranges from banning some uses of AI altogether to imposing "guardrails" aimed at ensuring safe use at the lower-risk end of the spectrum.
The banned uses of AI include:
- Biometric categorisation using sensitive characteristics (eg political, religious, philosophical beliefs, sexual orientation, race)
- Mass scraping of facial images from CCTV footage or the internet
- Social scoring and predictive policing based solely on profiling people (as per Philip K Dick’s, or Tom Cruise’s, Minority Report)
- Emotion recognition in the workplace and at educational institutions (other than for medical or safety reasons)
- Using AI to manipulate human behaviour to circumvent free will.
There are some exemptions from the above for law enforcement agencies. Law enforcement agencies may, with prior judicial approval, use AI biometric identification in real-time to:
- Search for people who have been abducted or may be victims of human trafficking
- Prevent a specific and immediate terrorist threat
- Locate and identify people suspected of serious crimes (such as terrorism, murder, kidnapping, armed robbery).
The use of AI in these situations must be limited to a specific time period and geographic area. We expect use by law enforcement will continue to attract considerable interest and scrutiny.
The second tier of AI systems, classified as “high-risk”, comprises systems that could harm health, safety, fundamental rights, the environment, or democracy. This includes systems that relate to:
- Critical infrastructure
- Education, vocational training and employment
- Essential private and public services (eg healthcare, banking)
- Justice and democratic processes (eg influencing elections).
High-risk AI systems must undergo regular risk assessments, maintain usage logs, be transparent and accurate, and must operate under human oversight. People will also have a right to complain about such AI systems and will be entitled to explanations of decisions made using them.
The lowest-risk category is “general-purpose” AI: systems that produce plausible text, image, video and audio content from simple prompts (eg ChatGPT). These systems must meet transparency requirements, including publishing detailed summaries of the content used to train the AI model, and must comply with EU copyright law.
More powerful general-purpose AI models that could pose systemic risks must also undergo model evaluations, assess and mitigate those risks, and report on incidents. The AI Act also requires deepfakes to be clearly labelled as manipulated content.
The AI Act is expected to become law in the coming weeks after it passes its final legislative steps. It isn’t clear at this stage how the AI Act might be applied to existing AI platforms that were created and trained before the AI Act comes into force.
Many countries – including New Zealand – will be keenly watching how the application of the AI Act unfolds and how its principles might be used or adapted elsewhere.
Damien Steel-Baker is Special Counsel at Buddle Findlay.