Editorial comment
The board of ChatGPT-maker OpenAI fired its high-profile CEO, Sam Altman, on a Friday in mid-November. Altman accepted a job at Microsoft over the weekend, before being reinstated to his original role at OpenAI the following week. Altman’s sudden firing prompted 702 of OpenAI’s 750 employees to sign a letter demanding he be brought back, threatening mass walkouts. A revamped board now presides over the AI giant, with the addition of Bret Taylor, former CEO of Salesforce, and Larry Summers, former US Treasury Secretary.
OpenAI was formed with the mission to guide the safe and ethical development of artificial intelligence. Its board members preside over a ‘capped profit’ company, of which Microsoft owns 49%. The company charter lists preventing harm as a priority as it seeks to master safe AI. Unlike most of its fellow Silicon Valley startups, “OpenAI is overseen by a nonprofit parent board designed to ensure AI safety is given priority alongside growth.”1 The original board said the justification for the firing was Altman’s lack of candour and its need to defend OpenAI’s mission to develop AI that benefits humanity. However, reports have surfaced that a major AI breakthrough was imminent, one that may have sat uneasily with the ethos of the non-profit board.
Reuters reports that “ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity … The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman’s firing, among which were concerns over commercialising advances before understanding the consequences.”2
The current buzz is around a piece of software allegedly named Q* (pronounced ‘Q star’). This purported AI breakthrough takes a step towards what is known as artificial general intelligence (AGI): broadly, autonomous systems that surpass humans at most economically valuable tasks. It is said that Q* can solve mathematical problems. Mathematics is an important frontier in AI, because in maths there is just one correct answer. While a chatbot like ChatGPT can offer insight into a subject using words generated by its extensive large language model (LLM), answering a mathematical question correctly is much more difficult. In other words, Q* could be a really big deal for the future of AI.
The saga has highlighted disagreements within the tech sector about how fast we should be moving on AI, and where commercialisation sits in the scheme of things. Some shareholders are believed to be exploring legal recourse after the turmoil threatened the future of OpenAI, presumably with dollars at the forefront of their minds. Many have speculated that in bringing Altman back and tweaking the board, the experiment in so-called ‘altruistic governance’ is over.
In the pipeline sector, we use AI in many ways. As Darryl Willis, Corporate Vice President of Energy at Microsoft, puts it: “Technologies like AI and machine learning can analyse the past, optimise the present and predict the future.”3 AI solutions assist pipeliners with advanced analytics, predictive maintenance and incident response; with optimising assets and processes; in monitoring, surveillance and reducing downtime; with inventory, procurement, logistics and supply chain management; and in meeting compliance. AI is helping the sector meet future challenges too, since it is used in the development of digital twin technology, AI-premised cyber security, demand forecasting, cloud computing, and the next stage of digitalisation for plants and pipeline networks. This month we’re covering cyber security for pipelines, and we start 2024 strong with a special feature on the digitalisation journey for pipeline operators in the January issue. The team and I look forward to guiding you through this fascinating and ever-changing world of AI evolution.