Beware the Black Box: What to Consider Before Using AI Tools


This blog was posted by Andrew Churches on the DTTA (Digital Technologies Teachers Association) message board. I asked Andrew whether we could repost it here, as his advice is clearly relevant to anyone delving into AI products. It’s great to see teachers are also discussing AI tools and keeping everyone safe.
Over to Andrew’s insights.

Before you start

With the growth of artificial intelligence (AI) and the relative ease with which these tools can be deployed, we are faced with a series of challenges about what to use and what to share.

Increasingly, we are seeing AI products deployed to assist and support teaching and learning. On the surface, they offer fast, simple solutions that quickly resolve a need or gap we might have, but they may also place us in a precarious position.

I would encourage us to pause before signing up to a product or service and consider how it matches the restrictions, limitations and legislation that apply to our schools, our communities and Aotearoa New Zealand.

At first glance...

A brief examination of a product’s public face may well give us indicators of the viability and safety of the system. (This applies to pretty much any site or service that collects data.)

1. Who is hosting the website? Is the organisation hosting it itself, or is it on a third-party platform like Weebly, Google Sites or Wix?

2. How are they collecting data, and is it secure? Again, is this a third-party service like SurveyMonkey or Google Forms, or has the company or organisation invested in developing it themselves? (A small code sketch of these first two checks follows this list.)

3. Is there a clear indication of which AI system powers the site or product? Identifying the AI engine helps us understand the training set, and therefore how current the underlying data is.

4. Does the product make its key policies available for scrutiny? At a minimum you should be able to see the privacy and intellectual property (IP) policies.

5. How sustainable is the system, and how does it monetise the product? Products that are “free” must gain revenue somehow. Is it by offering “freemium” or restricted products, or by selling the collated data and IP?

Overall, does the initial contact leave you feeling that your data, and that of your students, is secure and safe?
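To make the first two checks concrete, here is a minimal sketch using only the Python standard library. The domain example-ai-tool.com is a hypothetical placeholder, not a real product, and reverse DNS and certificate details are only rough indicators of who operates a service, not proof.

```python
import socket
import ssl

host = "example-ai-tool.com"  # hypothetical product domain, not a real site

# Check 1: who is hosting? Resolve the domain, then reverse-resolve the IP.
# Hostnames like *.wixdns.net or *.googleusercontent.com suggest a
# third-party platform rather than the organisation's own infrastructure.
ip = socket.gethostbyname(host)
try:
    reverse_name, _, _ = socket.gethostbyaddr(ip)
except socket.herror:
    reverse_name = "(no reverse DNS record)"
print(f"{host} resolves to {ip} ({reverse_name})")

# Check 2: is data collection secure? The site should at least serve HTTPS
# with a valid certificate; inspect whom it was issued to and when it expires.
context = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        subject = dict(pair for rdn in cert["subject"] for pair in rdn)
        print("Certificate issued to:", subject.get("commonName", "unknown"))
        print("Valid until:", cert["notAfter"])
```

None of this replaces reading the product’s policies; it simply surfaces the kind of signals the questions above are asking about.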

Delving deeper...

In examining a product we need to consider:

1. Privacy – does the product’s privacy policy match the principles of the Privacy Act 2020, which we as New Zealand educators are legally obligated to abide by? https://www.privacy.org.nz/privacy-act-2020/privacy-principles/

2. Intellectual property and copyright – who owns the shared or uploaded data and material? Uploading exemplars, standards and the like potentially puts the user at risk of a copyright breach. For instance, as an organisation, DTTA has an agreement amongst members to share and reuse certain content, but the Ministry of Education is unlikely to give commercial enterprises permission to use Crown materials.

Further, our own IP is at risk whenever we upload it anywhere, whether this is:

  • personally developed materials,

  • materials that have been shared with the DTTA community (or others),

  • materials the school has asked us to develop, or

  • commercial material that your school has purchased.

We need to ask ourselves: are we prepared to give this away?

Most AI systems have statements about the ownership of uploaded materials, and most claim ownership of any and all materials uploaded to the product. The upload shifts ownership to the AI’s owners, while all responsibility for copyright and IP infringements remains with the uploader.

See https://www.mbie.govt.nz/business-and-employment/business/intellectual-property/copyright/copyright-protection-in-new-zealand/

3. Sustainability and monetisation. How does the product sustain itself? If there is no clear revenue stream, such as a subscription, how does it fund the developers’ wages and the costs of hosting? Is there a government or NGO sponsor? If there is no clear indication, it’s fair to assume the revenue comes from the data they harvest.

How big or small is the company? Will it have sufficient personnel to support the proposed use?

4. Reliability and accuracy. Is the product up to date? Does it have an open or closed training set, and how up to date is this? Is there a bias evident in the training set? If you feed the AI only Western-centric, English-language data, the responses it gives to culturally sensitive questions will, understandably, be limited to the perspective of that training set.

Is the AI developed for the purpose it is being used for? Large language models may not be adequate for some of the purposes to which they are being put.

5. Support. Does the company/organisation have user and support agreements for school or district use?

6. Data security and sovereignty. Where is our data going to be stored? Is it going offshore, and what are the rules around data sovereignty in that jurisdiction? What is happening to our data and that of our students? (A rough sketch of one way to check where a service is hosted follows.)
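As a rough illustration of that last question (one possible approach, not a definitive check), the sketch below resolves a service’s IP address and asks IANA’s public WHOIS server which regional internet registry the address block belongs to. The domain is again a hypothetical placeholder, and registration region is only a coarse hint about where data may physically sit.

```python
import socket

def whois(server: str, query: str) -> str:
    """Send a plain-text WHOIS query (RFC 3912: one line over TCP port 43)."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

# "example-ai-tool.com" is a hypothetical placeholder for the product domain.
ip = socket.gethostbyname("example-ai-tool.com")
# IANA's WHOIS response includes a "refer:" line naming the regional registry
# (APNIC, ARIN, RIPE NCC, ...) for the address block - a coarse hosting hint.
print(whois("whois.iana.org", ip))
```

In practice, the product’s own documentation and contract should be the authoritative source on storage location; this kind of lookup is just a sanity check.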

Footnote - the Privacy Commission has kindly supplied two additional links relevant to this blog: their media release on AI guidance, and a link to the guidance itself.

This post was written by Andrew Churches of DTTA and shared with his permission.

Vic MacLennan

CEO of IT Professionals, Te Pou Haungarau Ngaio, Vic believes everyone in Aotearoa New Zealand deserves an opportunity to reach their potential, so as a technologist by trade she is dedicated to changing the face of the digital tech industry to become more inclusive, where everyone has a place to belong. Vic is also on a quest to close the digital divide. Find out more about her mahi on LinkedIn.
