Moto: Latent Motion Token as the Bridging Language for Robot Manipulation

The University of Hong Kong · ARC Lab, Tencent PCG · University of California, Berkeley




Overview of Moto, which utilizes Latent Motion Tokens as a bridging "language" for autoregressive pre-training on video data. Moto-GPT, pre-trained through next-motion-token prediction, learns a wealth of motion-related prior knowledge from videos that can be seamlessly transferred to downstream robot manipulation tasks with significant performance gains.

Abstract

Recent developments in Large Language Models (LLMs) pre-trained on extensive corpora have shown significant success in various natural language processing (NLP) tasks with minimal fine-tuning. This success offers new promise for robotics, which has long been constrained by the high cost of action-labeled data. We ask: given the abundant video data containing interaction-related knowledge available as a rich "corpus", can a similar generative pre-training approach be effectively applied to enhance robot learning? The key challenge is to identify an effective representation for autoregressive pre-training that benefits robot manipulation tasks. Inspired by the way humans learn new skills through observing dynamic environments, we propose that effective robotic learning should emphasize motion-related knowledge, which is closely tied to low-level actions and is hardware-agnostic, facilitating the transfer of learned motions to actual robot actions. To this end, we introduce Moto, which converts video content into latent Motion Token sequences by a Latent Motion Tokenizer, learning a bridging "language" of motion from videos in an unsupervised manner. We pre-train Moto-GPT through motion token autoregression, enabling it to capture diverse visual motion knowledge. After pre-training, Moto-GPT demonstrates the promising ability to produce semantically interpretable motion tokens, predict plausible motion trajectories, and assess trajectory rationality through output likelihood. To transfer learned motion priors to real robot actions, we implement a co-fine-tuning strategy that seamlessly bridges latent motion token prediction and real robot control. Extensive experiments show that the fine-tuned Moto-GPT exhibits superior robustness and efficiency on robot manipulation benchmarks, underscoring its effectiveness in transferring knowledge from video data to downstream visual manipulation tasks.

Method

• Three Training Stages of Moto

Overview of Moto's three training stages: (1) The Latent Motion Tokenizer encodes key visual motions between video frames into compact latent tokens in an unsupervised manner using pure video data. (2) Moto-GPT is pre-trained with autoregressive motion token prediction to learn motion priors from video-instruction pairs. (3) Moto-GPT is co-fine-tuned on action-labeled trajectories to predict robot actions based on the output of learnable action query tokens while maintaining the next-motion-token prediction objective.
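To make stage (2) concrete, below is a minimal PyTorch sketch of the next-motion-token pre-training objective. Everything here is an illustrative assumption rather than the released implementation: the stand-in TinyMotoGPT module, the codebook size, the conditioning-prefix interface, and all hyper-parameters are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_MOTION_CODES = 128   # assumed size of the latent motion codebook
D_MODEL = 256

class TinyMotoGPT(nn.Module):
    """Stand-in decoder-only transformer over latent motion tokens."""
    def __init__(self, num_codes=NUM_MOTION_CODES, d_model=D_MODEL, n_layer=4):
        super().__init__()
        self.tok_emb = nn.Embedding(num_codes, d_model)
        self.pos_emb = nn.Embedding(1024, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layer)
        self.lm_head = nn.Linear(d_model, num_codes)

    def forward(self, cond, motion_tokens):
        # cond: (B, T_cond, D) conditioning prefix (e.g. frozen language/image
        # features); motion_tokens: (B, T) discrete motion codes.
        T = motion_tokens.size(1)
        pos = torch.arange(T, device=motion_tokens.device)
        x = self.tok_emb(motion_tokens) + self.pos_emb(pos)
        x = torch.cat([cond, x], dim=1)
        L = x.size(1)
        causal = torch.triu(torch.full((L, L), float('-inf'), device=x.device), diagonal=1)
        h = self.blocks(x, mask=causal)
        # Each position from the last prefix token onward predicts the next motion token.
        return self.lm_head(h[:, cond.size(1) - 1:-1])        # (B, T, num_codes)

def next_motion_token_loss(model, cond, motion_tokens):
    """Teacher-forced cross-entropy over the motion token sequence."""
    logits = model(cond, motion_tokens)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           motion_tokens.reshape(-1))

# Toy usage with random inputs.
model = TinyMotoGPT()
cond = torch.randn(2, 4, D_MODEL)                              # fake instruction/frame prefix
tokens = torch.randint(0, NUM_MOTION_CODES, (2, 24))           # fake motion codes
loss = next_motion_token_loss(model, cond, tokens)
loss.backward()

The point of the sketch is simply that the pre-training target is a discrete sequence produced by the tokenizer, so standard language-model training machinery applies unchanged.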

• Latent Motion Tokenizer


Latent Motion Tokenizer.

The Latent Motion Tokenizer produces discrete motion tokens from two consecutive video frames. It is trained by requiring the decoder to reconstruct the second frame from the first frame and the discrete tokens, which forces the tokens to capture the motion between the frames.
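As a rough illustration of this design, the following is a minimal VQ-style sketch, assuming a convolutional encoder/decoder and a nearest-neighbour codebook lookup. The actual Moto tokenizer architecture and training losses differ (the standard VQ commitment/codebook terms are omitted here for brevity), and all module names and hyper-parameters are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentMotionTokenizer(nn.Module):
    def __init__(self, num_codes=128, code_dim=32):
        super().__init__()
        # Encoder sees both frames and summarizes what changed between them.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=4), nn.ReLU(),
            nn.Conv2d(64, code_dim, 4, stride=4),
        )
        self.codebook = nn.Embedding(num_codes, code_dim)
        # Decoder reconstructs the second frame from the first frame + codes.
        self.decoder = nn.Sequential(
            nn.Conv2d(3 + code_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def quantize(self, z):
        # Nearest-neighbour codebook lookup with a straight-through estimator.
        flat = z.permute(0, 2, 3, 1).reshape(-1, z.size(1))               # (N, D)
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)       # (N,)
        q = self.codebook(idx).view(z.size(0), z.size(2), z.size(3), -1).permute(0, 3, 1, 2)
        q = z + (q - z).detach()
        return q, idx.view(z.size(0), -1)

    def forward(self, frame_t, frame_t1):
        z = self.encoder(torch.cat([frame_t, frame_t1], dim=1))
        q, idx = self.quantize(z)
        # Broadcast the motion codes back onto the first frame's spatial grid.
        q_up = F.interpolate(q, size=frame_t.shape[-2:], mode='nearest')
        recon = self.decoder(torch.cat([frame_t, q_up], dim=1))
        return recon, idx

# Training signal: reconstruct frame_t1 from frame_t plus the discrete codes,
# so the codes must encode the motion between the two frames.
tok = LatentMotionTokenizer()
frame_t, frame_t1 = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
recon, codes = tok(frame_t, frame_t1)
loss = F.mse_loss(recon, frame_t1)
loss.backward()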


Reconstruction Quality.

Qualitative examples of reconstruction results: discrete motion tokens, obtained from the Latent Motion Tokenizer given the initial and next frames, are fed into the decoder together with the initial frame to reconstruct the target frame.


Experiments


To comprehensively evaluate the effectiveness of Moto, we study three key experimental questions:



• Latent Motion Token as an Interpretable Motion Language (Q1)

Interpretability of latent motion tokens.

Visualization of latent motion token interpretability. Each row displays reconstructed frames from the same initial frame using different latent motion tokens, while each column shows frames reconstructed from the same latent motion tokens with varying initial frames. The latent motion tokens exhibit consistent (see columns) and discriminative (see rows) semantics, despite being trained in an unsupervised manner.



Video imitation generation.

Video imitation generation via latent motion tokens: a sequence of latent motion tokens is extracted from a demonstration video by the Latent Motion Tokenizer and decoded into a new video. The generated video starts from a different initial frame while preserving the original robot movement semantics.
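A minimal sketch of this procedure is shown below, assuming a trained tokenizer is exposed through two hypothetical callables, encode(frame_t, frame_t1) -> codes and decode(frame, codes) -> next_frame; the released interface may differ.

import torch

def imitate(demo_frames, new_initial_frame, encode, decode):
    """Re-render a demonstration's motion starting from a different scene.

    demo_frames: list of (C, H, W) tensors from the demonstration video.
    new_initial_frame: (C, H, W) tensor of the new starting observation.
    """
    # 1) Extract the latent motion token sequence from the demonstration.
    motions = [encode(demo_frames[i], demo_frames[i + 1])
               for i in range(len(demo_frames) - 1)]
    # 2) Roll the same motions out from the new initial frame.
    frames = [new_initial_frame]
    for codes in motions:
        frames.append(decode(frames[-1], codes))
    return frames

# Toy usage with stand-in encode/decode functions.
encode = lambda f0, f1: torch.zeros(8, dtype=torch.long)   # fake motion codes
decode = lambda f0, codes: f0                              # identity "decoder"
demo = [torch.randn(3, 64, 64) for _ in range(5)]
generated = imitate(demo, torch.randn(3, 64, 64), encode, decode)
assert len(generated) == len(demo)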

• Pre-trained Moto-GPT as a Useful Prior Learner (Q2)


Moto-GPT for future anticipation.

Visualization of video trajectories generated from a sequence of latent motion tokens, which are predicted by the pre-trained Moto-GPT given different language instructions.
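Conceptually, this is plain autoregressive sampling over the discrete motion vocabulary. The sketch below assumes a hypothetical model callable that maps a conditioning prefix plus previously generated codes to next-token logits; the sampling strategy and all names are illustrative, not the released API.

import torch

@torch.no_grad()
def predict_motion_tokens(model, cond, num_tokens, temperature=1.0):
    """cond: (1, T_cond, D) instruction/initial-frame embedding prefix."""
    generated = torch.empty(1, 0, dtype=torch.long)
    for _ in range(num_tokens):
        logits = model(cond, generated)                 # (1, vocab) next-token logits
        probs = torch.softmax(logits / temperature, dim=-1)
        nxt = torch.multinomial(probs, num_samples=1)   # sample one motion code
        generated = torch.cat([generated, nxt], dim=1)
    return generated                                    # (1, num_tokens) motion codes

# Toy usage with a random stand-in "model".
vocab = 128
fake_model = lambda cond, toks: torch.randn(1, vocab)
tokens = predict_motion_tokens(fake_model, torch.randn(1, 4, 256), num_tokens=24)
# The predicted codes would then be decoded frame by frame by the Latent Motion
# Tokenizer's decoder to visualize the anticipated video trajectory.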


Moto-GPT as a reward model.

Pre-trained Moto-GPT distinguishes successful, failed, and random robot trajectories using log-likelihoods, which indicates that it can effectively measure the rationality of trajectories to provide potential reward signals.
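A simple way to turn this observation into a score is to average the per-token log-probabilities that the prior assigns to a trajectory's motion tokens. The sketch below assumes the same hypothetical model interface as above and is not the paper's exact reward formulation.

import torch
import torch.nn.functional as F

@torch.no_grad()
def trajectory_score(model, cond, motion_tokens):
    """Mean log-probability of the observed motion tokens under the prior.

    cond: (1, T_cond, D) instruction/frame prefix.
    motion_tokens: (1, T) discrete codes extracted from the trajectory video.
    """
    logits = model(cond, motion_tokens)                                    # (1, T, vocab)
    logp = F.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, motion_tokens.unsqueeze(-1)).squeeze(-1)  # (1, T)
    return token_logp.mean().item()

# Higher scores indicate more "rational" motion; successful trajectories should
# score above failed or random ones, as reported above.
fake_model = lambda cond, toks: torch.randn(1, toks.size(1), 128)
score = trajectory_score(fake_model, torch.randn(1, 4, 256), torch.randint(0, 128, (1, 24)))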

• Fine-tuned Moto-GPT as an Effective Robot Policy (Q3)


Performance on SIMPLER

Moto-GPT achieves performance competitive with much larger vision-language-action models such as RT-2-X (PaLI-X 55B) and OpenVLA (Prismatic 7B), despite using only a 98M-parameter GPT-style backbone.


Performance on SIMPLER.

Performance on CALVIN (ABC→D)

Moto-GPT shows strong zero-shot generalization ability in the unseen CALVIN environment, despite relying solely on RGB images from a static camera.

Performance on CALVIN.

Data Efficiency

Data Efficiency.

Task success rate of models fine-tuned with different proportions of action-labeled data on CALVIN (ABC→D). The performance gap between Moto-GPT and its variant trained from scratch without latent motion tokens (Moto w/o Motion Token) widens as the amount of fine-tuning data decreases.


Ablations on Policy Fine-tuning Methods


Ablations on Policy Fine-tuning Methods.

Ablations of Moto-GPT on CALVIN (ABC→D). Moto-IML and Moto-DM share the same pre-training approach as Moto-GPT but differ in their fine-tuning methods: Moto-IML omits the loss term for latent motion token prediction, while Moto-DM discards motion tokens in the input sequence entirely.
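The sketch below summarizes, under assumed interfaces, how the full co-fine-tuning objective and the two ablated variants might differ: a hypothetical policy returns motion-token logits and an action prediction from learnable action queries, the action term here is a plain L1 regression stand-in for the paper's action loss, and the flags mirror Moto-IML (drop the motion-token loss) and Moto-DM (drop motion tokens from the input).

import torch
import torch.nn.functional as F

def cofinetune_loss(policy, cond, motion_tokens, actions,
                    use_motion_loss=True, use_motion_tokens=True, w_motion=1.0):
    tokens_in = motion_tokens if use_motion_tokens else None   # Moto-DM drops them
    motion_logits, action_pred = policy(cond, tokens_in)
    loss = F.l1_loss(action_pred, actions)                     # stand-in action term
    if use_motion_loss and use_motion_tokens:                  # Moto-IML omits this term
        loss = loss + w_motion * F.cross_entropy(
            motion_logits.reshape(-1, motion_logits.size(-1)),
            motion_tokens.reshape(-1))
    return loss

# Toy usage: Moto-GPT keeps both terms; Moto-IML sets use_motion_loss=False;
# Moto-DM sets use_motion_tokens=False.
fake_policy = lambda cond, toks: (torch.randn(1, 24, 128, requires_grad=True),
                                  torch.randn(1, 7, requires_grad=True))
loss = cofinetune_loss(fake_policy, torch.randn(1, 4, 256),
                       torch.randint(0, 128, (1, 24)), torch.randn(1, 7))
loss.backward()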

BibTeX

@article{chen2024moto,
  title={Moto: Latent Motion Token as the Bridging Language for Robot Manipulation},
  author={Chen, Yi and Ge, Yuying and Li, Yizhuo and Ge, Yixiao and Ding, Mingyu and Shan, Ying and Liu, Xihui},
  journal={arXiv preprint arXiv:2412.04445},
  year={2024}
}