AI wrap Friday 28th July

Kia ora koutou, and welcome to the 28th July AI wrap for the week. It's shorter than usual as I have been sick all week with a terrible cold.

Government Guidance on the use of Generative AI

Here in Aotearoa NZ the biggest AI news this week must be the guidance published by the Government (the GCDO at DIA) on the use of generative AI.

Peter wrote a short blog on this earlier in the week; he was far more complimentary than I am about to be.

Congratulations must go to the GCDO for publishing this guidance; it is for the most part sound advice and sorely needed. Thank you for listening to the calls for this.

It stops short of saying two things I expected to see. 

The first is that generative AI SHOULD NOT be used to write policy, regulation or legislation - something other governments around the world have done.

The second thing absent is a statement to the effect of "Exercise caution when using proprietary AI", explaining why black boxes with no algorithmic transparency should be approached carefully.
Instead, the advice dedicates a ridiculous amount of time to perpetuating the GCDO's anti-Open-Source position. This inclusion, without the balancing caution on proprietary tools, made me more than mildly angry.
I assume the author was concerned about ChatGPT, which ironically is no longer open source.
To the GCDO - I would like to remind you that the New Zealand government signed up to the Digital Nations Charter, where (according to your own website) you "committed to working towards the following principles of digital development", Open Source being #3 on that list. However, in practice you have instead committed to a strong bias towards multinational vendors, signing MOUs and license agreements committing government to the use of their products.

All of that said, this stuff is hard! Every nation is grappling with it, so it is great the DIA has released something. Very disappointed in the anti-open-source stance. Rant over.

FraudGPT - a new AI tool

Don't be fooled into thinking this is a tool to help fight fraud - it's quite the opposite, it seems: "This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc. The tool is currently being sold on various Dark Web marketplaces and the Telegram platform", according to these articles from Hacker News and Netenrich.

It makes sense that there will be tools out there to help hackers shortcut and automate their craft, and I am sure this is just one of many.

OpenAI shuts down its AI detection tool

Something extremely topical in education organisations across the world is how to detect whether a student has used a generative AI product to write their answers. OpenAI launched its AI Classifier earlier in the year with the promise that it could detect whether something was written by a generative AI tool. Alas, it seems the solution was low on accuracy. "OpenAI called the classifier "not fully reliable," adding that the evaluations on a "challenge set" of English texts correctly identified 26% of AI-written text as "likely AI-written," while incorrectly labeling the human-written text as AI-written 9% of the time." from Decrypt.co.

AI and self-regulation

I've talked a lot about government regulation in our new generative AI world. Well, this week Google, Microsoft, OpenAI and a few others announced the Frontier Model Forum, focused on the safe and responsible creation of new models. The forum will promote research, develop standards for evaluating models, discuss issues like trust and safety risks with politicians, and help develop the positive uses of AI.
“The group said it would focus on the “safe and responsible” development of frontier AI models, referring to AI technology even more advanced than the examples available currently.”

In short

The Grammys will allow songs written with AI help

Google has been testing a new AI tool, codenamed "Genesis", that can write news articles

Superhuman - an article aggregator - has a short piece on why AI isn't going to take your job, based on the evidence of bank tellers not being automated until long after ATMs were introduced.

$900k for an AI product manager role at Netflix! What are they thinking?

Tightly coupled with the Netflix article above is the concern from Hollywood writers that their roles could be replaced by AI.

A call by UK unions to protect workers from AI

The House of Lords could be replaced with bots with "deeper knowledge, higher productivity and lower running costs" - now that opens an interesting can of worms. One for next time.
