

Mirage2Matter: A Physically Grounded Gaussian World Model from Video

Authors
Zhengqing Gao · Ziwen Li · Xin Wang · Tongliang Liu

ABSTRACT

To bridge the simulation-to-real gap, we introduce Mirage2Matter, a physically grounded Gaussian world model that generates high-fidelity embodied training data from multi-view videos. We reconstruct environments into photorealistic scene representations using 3D Gaussian Splatting (3DGS), then leverage generative models to recover physically realistic properties such as collision geometry. By integrating the result into a simulation environment via a precision calibration target, we ensure accurate scale alignment. Extensive experiments show that vision-language-action (VLA) models trained on our generated data exhibit strong zero-shot generalization across various manipulation tasks, overcoming the simulation mismatches that usually undermine zero-shot deployment.
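The scale-alignment step the abstract mentions can be illustrated with a minimal sketch: a calibration target of known physical size is located in the reconstruction, and the ratio of its true edge length to its reconstructed edge length gives a metric scale factor for the whole scene. The function and values below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def estimate_scale(recon_corners: np.ndarray, true_edge_m: float) -> float:
    """Metric scale factor = known edge length / mean reconstructed edge length.

    recon_corners: (N, 3) corners of a square calibration target, in
    arbitrary reconstruction units, ordered around the perimeter.
    """
    # Edge vectors between consecutive corners, wrapping back to the first
    edges = np.diff(recon_corners, axis=0, append=recon_corners[:1])
    edge_lengths = np.linalg.norm(edges, axis=1)
    return true_edge_m / edge_lengths.mean()

# Hypothetical reconstructed corners of a square target (arbitrary units)
corners = np.array([[0.0, 0.0, 0.0],
                    [2.0, 0.0, 0.0],
                    [2.0, 2.0, 0.0],
                    [0.0, 2.0, 0.0]])

# Assume the physical target edge is 10 cm
scale = estimate_scale(corners, true_edge_m=0.10)
print(scale)  # 0.05: multiply reconstruction coordinates by this to get meters
```

In practice the target's corners would come from detection in the reconstructed scene rather than hand-written arrays, and averaging over several edges reduces sensitivity to per-corner reconstruction noise.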


