TECH – Meta this week introduced V‑JEPA 2, an advanced AI “world model” engineered to enhance machines’ ability to perceive, reason, and act within three-dimensional physical environments. Announced on June 11, 2025, the open-source model aims to support autonomous systems such as delivery robots and self-driving vehicles, marking an evolution from conventional language-centric AI toward embodied intelligence.
Unlike typical AI models that rely heavily on labeled datasets, V‑JEPA 2 learns from raw, unlabeled video footage. It develops an internal simulation of the physical world—learning how objects move and interact over time in three dimensions. Meta describes it as enabling “state‑of‑the‑art understanding and prediction, as well as zero‑shot planning and robot control in new environments.” This enables the model to anticipate future scenarios and plan actions without prior task-specific training.
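The core idea the article describes can be illustrated with a toy sketch. This is not Meta's implementation: the encoder and predictor below are stand-in linear maps, and all names and dimensions are assumptions. It only shows the general joint-embedding predictive pattern, where the model encodes observed frames and is trained to predict the embedding of a future frame rather than its raw pixels, so no labels are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

D_PIXELS, D_EMBED = 64, 8  # toy frame size and embedding size (assumptions)

# Toy stand-ins for a learned video encoder and a learned predictor.
W_enc = rng.normal(size=(D_PIXELS, D_EMBED)) / np.sqrt(D_PIXELS)
W_pred = rng.normal(size=(D_EMBED, D_EMBED)) / np.sqrt(D_EMBED)

def encode(frame):
    """Map a raw frame to a compact embedding (stand-in for the encoder)."""
    return frame @ W_enc

def predict_next(context_embedding):
    """Predict the embedding of the next, unseen frame from context."""
    return context_embedding @ W_pred

# A toy "video": the current frame and the true next frame, unlabeled.
frame_t = rng.normal(size=D_PIXELS)
frame_t1 = rng.normal(size=D_PIXELS)

z_t = encode(frame_t)
z_t1_true = encode(frame_t1)   # target comes from the data itself
z_t1_pred = predict_next(z_t)

# Training would minimize this embedding-space prediction error.
loss = float(np.mean((z_t1_pred - z_t1_true) ** 2))
```

Predicting in embedding space rather than pixel space is what lets this family of models ignore unpredictable visual detail and focus on how scenes evolve, which is the "internal simulation" behavior the article describes.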
At its core, V‑JEPA 2 has around 1.2 billion parameters and builds on the JEPA architecture Meta introduced in 2022. Its multimodal design lets it integrate information from video streams, predicting object trajectories and developing a coherent spatial map of its surroundings. This allows machines to understand context, for instance recognizing when a ball will fall or how a robot arm should move, akin to how humans intuitively interpret physics.
Meta’s chief AI scientist, Yann LeCun, described world models as “an abstract digital twin of reality that allows AI to predict what will happen next and plan accordingly,” marking a vital step toward machines capable of reasoning in real-world conditions. The model’s ability to simulate environments without explicit labeling makes it particularly promising for robotics, augmented reality applications, and advanced AI assistants.
By open-sourcing V‑JEPA 2, Meta invites global researchers and developers to build on its innovation. This move also aligns with the company’s broader goal of achieving advanced machine intelligence (AMI) by teaching machines to learn about the world “as humans do,” as Meta articulated in its blog.
This development places Meta at the forefront of a rising trend in AI: world models. Other leaders in this field include Google DeepMind’s “Genie” and startups like World Labs, which raised $230 million to pursue similar goals. Meanwhile, Meta continues to ramp up investment in AI infrastructure, including its investment in Scale AI and work on self-driving capabilities.
Meta’s announcement coincides with growing investor confidence: its stock rose roughly 4.2% on the day of the launch, signaling market optimism about AI-driven progress and underscoring the deepening intersection between AI research and commercialization in robotics and autonomous systems.
Source: CNBC