Over the weekend, we witnessed one of the most dramatic moments in the history of technology start-up management. It reminded many of what happened to Steve Jobs when he was fired from the company he founded, Apple. This time around, it was Sam Altman, the man who co-founded OpenAI, the company that brought us ChatGPT.
He was fired by the four-member board, allegedly for not being “candid with the truth”. Although there were attempts to bring him back, an intervention by shareholders instead led to the appointment of Emmett Shear, the former CEO of Twitch, as interim chief executive.
Why do developments at OpenAI matter?
To get a sense of the significance of this development, one has to look at what I understand to be the root of the conflict. According to reports, at the centre of the OpenAI dispute are the company’s chief scientist, Ilya Sutskever, and its former CEO, Altman. In a debate about whether to curb the powers of AI or let it advance unchecked, Sutskever favoured restraint while Altman wanted to move ahead swiftly.
In technical terms, that means Altman was pushing ahead towards Artificial General Intelligence (AGI), a form of AI whose capabilities would surpass those of human beings. Sutskever, on the other hand, was of the view that society is not ready for AGI.
As of Monday, part of the settlement included a move by Altman to Microsoft to lead a new internal AI entity. At this stage, it’s not clear how much power Altman will have under Microsoft CEO Satya Nadella’s wings.
We can, however, conclude that the world is at war over what is to be done about AI. Should it be allowed to go rogue, or should it be controlled? This is one of the most important issues after climate change, and it will probably feature heavily in the battle among global powers such as China and the US.
While the Altman firing might seem trivial, it should serve as a wake-up call for the global community. The battle over AI should not be left to individuals and corporations alone. If its potential harms are what we have come to understand, then there’s a need for a consensus-building process around how AI will be deployed in society.
Tech needs a multilateral institution that will ensure the deployment of technology in line with human values. If developments at OpenAI are anything to go by in terms of how prepared we are to deal with what is coming from AI, we are in trouble. It is as if we have allowed children to play with a bomb while we watch as bystanders. If AI is to have a significant impact on society, then society has to decide how it should be governed and how it should function.
South Africa has been vocal about current wars and quiet about AI. Such a reaction is understandable, considering there’s little local understanding (at least amongst lawmakers) of its impact. The time is now to build that understanding and begin developing a voice on what should be done.
The issue might seem far from South Africa (or any country beyond the US) at this stage because of where the technology is being developed, but its impact will be felt globally, and so the response should be global too.
Commercial interests are shaping everything in the AI space, and everyone is at the mercy of investors. That is not an ideal situation for human society. One can only hope that there will be a change of heart among the companies leading the AI race. The judgment of tech leaders matters now more than ever.
Wesley Diphoko is the editor-in-chief of FastCompany (SA)