Griffin on Tech: Let’s hear the other side of the OpenAI story
After a tumultuous week reminiscent of a Shakespeare play, Sam Altman is back at the company he co-founded and most of the board that fired him is gone.
For the tech bros who have been cheering him on with gladiator memes on X, it's a huge triumph of commercial pragmatism over self-righteous idealism. The effective accelerationists, who see the rapid advancement of artificial intelligence and its widespread commercialisation as hugely positive for humanity, won the fight.
But we have yet to hear publicly from the board members who fired Altman a week ago, other than the curt statement they released suggesting that he hadn't been completely upfront in his communications with them.
It’s led to all sorts of speculation - that Altman’s team achieved an artificial general intelligence (AGI) breakthrough and didn’t alert the board, or that he was inappropriately developing side-hustle AI businesses.
The way the OpenAI board handled Altman’s dismissal will go into the textbook of what not to do - blindside your biggest financial backer (Microsoft), alienate your employees, and leave a question mark over your motives. But that textbook is written for conventional boards, not for the unusual structure OpenAI operates under. The episode demands a much greater level of transparency, and it exposes flaws in OpenAI’s governance that will now have to be addressed if the non-profit, public-interest side of OpenAI is to have any credibility moving forward.
Outside oversight
I particularly want to hear from board member Helen Toner, an Australian AI ethics expert at Georgetown University, who is said to have led the push to get rid of Altman.
“I think it’s really important to make sure that there is outside oversight, not just by the boards of the companies but also by regulators and by the broader public,” Toner told the Financial Times earlier this year.
“Even if their hearts are in the right place, we shouldn’t rely on that as our primary way of ensuring they do the right thing,” the ‘effective altruist’ added.
I’d also like to hear from board member Ilya Sutskever, OpenAI’s chief scientist, who initially voted to fire Altman before expressing regret and supporting his return. If OpenAI’s board had grave concerns that Altman was jeopardising the remit of OpenAI, the non-profit entity, they were within their rights to dismiss him. Heck, they had a moral, if not legal, obligation to do so. But it doesn’t appear to be that simple, and an internal investigation needs to examine what happened, with the results fully disclosed to the public.
It may result in major corporate governance changes at OpenAI and a new approach to balancing AI safety and commercialisation. It may even be most appropriate to drop the non-profit aspects altogether and simply run OpenAI as the for-profit business many of its staff and management want it to be.
But Toner, whatever governance mistakes she may or may not have made, is right - we shouldn’t rely on the AI companies to police themselves. Given the pace of change in the field of AI and the potential risks to humanity, there should be regulatory oversight of the field, of the kind the Europeans are pursuing with the AI Act and that is already in place for drugs, food safety, chemicals, telecommunications and numerous other areas in most countries.
There’s plenty yet to play out in the OpenAI board saga. Now we need to hear from the people who dropped the bomb that started it all.