
New AI Models Capable of Reasoning Being Developed by Meta and OpenAI

This year has seen a wave of major new large language models, including upgrades to existing ones.

OpenAI and Meta say they will soon release new AI models capable of reasoning and planning, two essential steps toward achieving superhuman cognition in machines. This week, executives from both companies indicated they were preparing to release the newest iterations of their large language models, which underpin generative AI applications such as ChatGPT. Meta announced that it will start releasing Llama 3 in the coming weeks, and Microsoft-backed OpenAI said its next model, anticipated to be named GPT-5, is arriving “soon.”

“We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan . . . to have memory,” said Joelle Pineau, vice-president of AI research at Meta.

OpenAI’s chief operating officer Brad Lightcap told the Financial Times that the next generation of GPT would show progress on solving “hard problems” such as reasoning.

“We’re going to start to see AI that can take on more complex tasks in a more sophisticated way,” he said in an interview. “I think we’re just starting to scratch the surface on the ability that these models have to reason.”

Today’s AI systems are “really good at one-off small tasks”, Lightcap added, but are still “pretty narrow” in their capabilities.

Meta and OpenAI’s upgrades are part of a wave of new large language models being released this year by companies including Google, Anthropic and Cohere. As tech companies race to create ever more sophisticated generative AI — software that can create humanlike words, images, code and video of quality indistinguishable from human output — the pace of progress is accelerating. Reasoning and planning are important steps towards what AI researchers call “artificial general intelligence” — human-level cognition — because they allow chatbots and virtual assistants to complete sequences of related tasks and predict the consequences of their actions.

Speaking at an event in London on Tuesday, Meta’s chief AI scientist Yann LeCun said current AI systems “produce one word after the other really without thinking and planning”. Because they struggle to deal with complex questions or retain information for a long period, they still “make stupid mistakes”, he said. Adding reasoning would mean that an AI model “searches over possible answers”, “plans the sequence of actions” and builds a “mental model of what the effect of [its] actions are going to be”, he said. This is a “big missing piece that we are working on to get machines to get to the next level of intelligence”, he added.

LeCun said Meta was working on AI “agents” that could, for instance, plan and book each step of a journey, from someone’s office in Paris to another in New York, including getting to the airport. Meta plans to embed its new AI model into WhatsApp and its Ray-Ban smart glasses. It is preparing to release Llama 3 in a range of model sizes, for different applications and devices, over the coming months.

Lightcap said OpenAI would have “more to say soon” on the next version of GPT. “I think over time . . . we’ll see the models go toward longer, kind of more complex tasks,” he said. “And that implicitly requires the improvement in their ability to reason.” At its event in London, Chris Cox, Meta’s chief product officer, said the cameras in Meta’s Ray-Ban glasses could be used to look at, for instance, a broken coffee machine, and an AI assistant — powered by Llama 3 — would explain to the wearer how to fix it.
