Meta unveils new AI model designed to improve the Metaverse user experience
Meta announced on Thursday the release of an artificial intelligence model called Meta Motivo, designed to control the movements of a human-like digital agent in the Metaverse, offering potential enhancements to user experiences.
The company has invested billions of dollars in AI, augmented reality, and other Metaverse technologies, with a 2024 capital expenditure forecast of $37 billion to $40 billion – a record high.
Meta has made many of its AI models available for free to developers, believing that an open approach will lead to the development of better tools for its services and benefit its business.
“We believe that this research could usher in fully embodied agents in the Metaverse, leading to more realistic NPCs, democratizing character animation, and creating new immersive experiences,” the company stated.
Meta Motivo tackles body control issues common in digital avatars, enabling them to move in a more human-like and realistic manner.
Additionally, Meta introduced a new approach to language modeling called the Large Concept Model (LCM), aimed at separating reasoning from language representation.
“Unlike a typical LLM, the LCM focuses on predicting high-level concepts or ideas represented by full sentences in a multimodal and multilingual embedding space,” Meta explained.
Other AI tools released by Meta include Video Seal, which embeds a hidden watermark into videos for traceability, one that is invisible to the naked eye.