EMMA: Generalizing Real-World Robot Manipulation via Generative Visual Transfer

GigaAI1, Peking University2, Tsinghua University3, CASIA4
*Equal Contribution

DreamTransfer demonstrates strong controllability in embodied manipulation video generation. It excels at text-controlled appearance editing while preserving 3D structure and geometric plausibility, and supports both real-to-real and sim-to-real transfer. The complete prompts used for generation are provided in the supplementary materials.

Abstract

Vision-language-action (VLA) models increasingly rely on diverse training data to achieve robust generalization. However, collecting large-scale real-world robot manipulation data across varied object appearances and environmental conditions remains prohibitively time-consuming and expensive. To overcome this bottleneck, we propose Embodied Manipulation Media Adaptation (EMMA), a VLA policy enhancement framework that integrates a generative data engine with an effective training pipeline. We introduce DreamTransfer, a diffusion Transformer-based framework for generating multi-view consistent, geometrically grounded embodied manipulation videos. DreamTransfer enables text-controlled visual editing of robot videos, transforming foreground, background, and lighting conditions without compromising 3D structure or geometrical plausibility. Furthermore, we explore hybrid training with real and generated data, and introduce AdaMix, a hard-sample-aware training strategy that dynamically reweights training batches to focus optimization on perceptually or kinematically challenging samples. Extensive experiments show that videos generated by DreamTransfer significantly outperform prior video generation methods in multi-view consistency, geometric fidelity, and text-conditioning accuracy. Crucially, VLAs trained with generated data enable robots to generalize to unseen object categories and novel visual domains using only demonstrations from a single appearance. In real-world robotic manipulation tasks with zero-shot visual domains, our approach achieves over a 200% relative performance gain compared to training on real data alone, and further improves by 13% with AdaMix, demonstrating its effectiveness in boosting policy generalization.
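
The abstract mentions hybrid training on real and DreamTransfer-generated data. As a small illustration of what such mixing can look like (not the paper's exact recipe), the sketch below draws each training batch from both data pools at a configurable ratio; the 50/50 default and the helper names are assumptions.

# Minimal sketch of mixing real and generated demonstrations in one batch.
# The mixing ratio and helper names are illustrative assumptions, not the
# exact recipe used by EMMA.
import numpy as np

def sample_hybrid_batch(real_indices, generated_indices, batch_size,
                        generated_fraction=0.5, rng=None):
    """Return dataset indices for one batch drawn from both data pools."""
    rng = rng or np.random.default_rng()
    n_gen = int(round(batch_size * generated_fraction))
    n_real = batch_size - n_gen
    batch = np.concatenate([
        rng.choice(real_indices, size=n_real, replace=True),
        rng.choice(generated_indices, size=n_gen, replace=True),
    ])
    rng.shuffle(batch)
    return batch

# Example: a 16-sample batch, half real demos, half generated visual variants.
real_ids = np.arange(0, 100)    # indices of real-robot demonstrations
gen_ids = np.arange(100, 500)   # indices of DreamTransfer-edited videos
print(sample_hybrid_batch(real_ids, gen_ids, batch_size=16))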

Method

Overview of the EMMA framework. First, DreamTransfer generates multi-view consistent videos by performing text-controlled visual editing of the foreground, background, and lighting, conditioned on depth maps and the corresponding text prompts. The generated videos are then screened by a video quality filter: low-quality videos are initially assigned zero sampling weight to stabilize early-stage training. The AdaMix module further adaptively reweights training samples based on trajectory performance metrics, up-weighting challenging samples to improve policy robustness and generalization. A minimal sketch of this reweighting scheme is given below.
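
The sketch below illustrates the kind of hard-sample-aware reweighting described above: clips rejected by the quality filter start at zero weight, and per-sample weights are then adapted from a per-trajectory error signal so that harder samples are drawn more often. The temperature, smoothing factor, and function names are assumptions rather than the paper's exact formulation.

# Hard-sample-aware batch reweighting in the spirit of AdaMix. The exact
# metric, temperature, and update rule are illustrative assumptions.
import numpy as np

def init_sampling_weights(quality_ok: np.ndarray) -> np.ndarray:
    """Start uniform over samples that pass the video quality filter;
    low-quality generated clips get zero weight to stabilize early training."""
    w = quality_ok.astype(float)
    return w / w.sum()

def adamix_update(weights: np.ndarray,
                  trajectory_error: np.ndarray,
                  temperature: float = 1.0,
                  momentum: float = 0.9) -> np.ndarray:
    """Reweight samples from a per-trajectory error metric (e.g. action MSE):
    higher error -> higher sampling probability, smoothed with a running average."""
    target = np.exp(trajectory_error / temperature)
    target = target / target.sum()
    new_w = momentum * weights + (1.0 - momentum) * target
    new_w[weights == 0.0] = 0.0   # keep filtered clips excluded early on
    return new_w / new_w.sum()

# Example: 6 samples, the last one rejected by the quality filter.
quality_ok = np.array([1, 1, 1, 1, 1, 0])
errors = np.array([0.1, 0.4, 0.2, 0.9, 0.3, 0.0])   # per-sample policy error
w = init_sampling_weights(quality_ok)
w = adamix_update(w, errors)
print(np.round(w, 3))   # challenging samples (e.g. index 3) are up-weighted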

Results

1. Embodied manipulation video generation.


2. Real-world deployment of AdaMix-trained policies on "Fold Cloth".


3. Real-world deployment of AdaMix-trained policies on "Clean Desk".


4. Real-world deployment of AdaMix-trained policies on "Throw Bottle".


BibTeX

If you use our work in your research, please cite:

@article{dong2025emma,
  title={EMMA: Generalizing Real-World Robot Manipulation via Generative Visual Transfer},
  author={Zhehao Dong and Xiaofeng Wang and Zheng Zhu and Yirui Wang and Yang Wang and Yukun Zhou and Boyuan Wang and Chaojun Ni and Runqi Ouyang and Wenkang Qin and Xinze Chen and Yun Ye and Guan Huang},
  journal={arXiv preprint arXiv:2509.22407},
  year={2025}
}