OpenAI’s next flagship model might not represent as big a leap forward as its predecessors, according to a new report in The Information.

Employees who tested the new model, code-named Orion, reportedly found that although its performance exceeds that of OpenAI's existing models, the improvement was smaller than the jump they'd seen from GPT-3 to GPT-4.

In other words, the rate of improvement seems to be slowing down. In fact, Orion might not be reliably better than previous models at all in some areas, such as coding.

In response, OpenAI has created a foundations team to figure out how the company can continue to improve its models in the face of a dwindling supply of new training data. These new strategies reportedly include training Orion on synthetic data produced by AI models, as well as doing more to improve models during the post-training process.

OpenAI did not immediately respond to a request for comment. In response to previous reports about plans for its flagship model, the company said, “We don’t have plans to release a model code-named Orion this year.”
