Last week OpenAI released a new model called o1 (previously referred to under the code name “Strawberry” and, before that, Q*) that blows GPT-4o out of the water.
Unlike previous models, which are well suited to language tasks like writing and editing, OpenAI o1 is focused on multistep “reasoning,” the type of process required to tackle advanced mathematics, coding, and other STEM-based questions. The model is also trained to answer PhD-level questions in subjects ranging from astrophysics to organic chemistry.
The bulk of LLM progress until now has been language-driven, but in addition to getting lots of facts wrong, such LLMs have failed to demonstrate the types of skills required to solve important problems in fields like drug discovery, materials science, coding, or physics. OpenAI’s o1 is one of the first signs that LLMs might soon become genuinely helpful companions to human researchers in these fields. Read the full story.
—James O’Donnell
This story is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.
This designer creates magic from everyday materials
Back in 2012, designer and computer scientist Skylar Tibbits started working on 3D-printed materials that could change their shape or properties after being printed—a concept that Tibbits dubbed “4D printing,” where the fourth dimension is time.
Today, 4D printing is its own field—the subject of a professional society and thousands of papers, with researchers around the world looking into potential applications from self-adjusting biomedical devices to soft robotics.
But not long after 4D printing took off, Tibbits was already looking toward a new challenge: What other capabilities can we build into materials? And can we do that without printing? Read the full story.
—Anna Gibbs
This piece is from the latest print issue of MIT Technology Review, which celebrates 125 years of the magazine! If you don’t already subscribe, sign up now to get 25% off future copies once they land.