OpenAI’s next major model, GPT-5, appears to be behind schedule, with early results not yet justifying the significant costs involved, according to a new report from The Wall Street Journal.
This report aligns with an earlier one from The Information, which suggested that OpenAI might need to rethink its approach, as GPT-5 may not represent as significant an improvement over previous versions as expected. The WSJ article offers additional insight into the roughly 18-month development of GPT-5, code-named Orion.
According to the report, OpenAI has completed at least two large-scale training runs, each intended to improve the model by training it on vast amounts of data. The initial run went slower than anticipated, suggesting that future runs will be both time-intensive and expensive. And while GPT-5 reportedly outperforms earlier models, it has not yet improved enough to justify the ongoing cost of maintaining the model.
The WSJ also mentions that OpenAI has expanded its data sources beyond publicly available information and licensing agreements. The company has hired individuals to generate new data through activities such as coding and solving mathematical problems. Additionally, synthetic data from another of OpenAI’s models, o1, is being used in the training process.
OpenAI has not responded to requests for comment. The company had previously stated that it would not be releasing the Orion model this year.