Overview of Moto's three training stages: (1) The Latent Motion Tokenizer encodes key visual motions between video frames into compact latent tokens in an unsupervised manner using pure video data. (2) Moto-GPT is pre-trained with autoregressive motion token prediction to learn motion priors from video-instruction pairs. (3) Moto-GPT is co-fine-tuned on action-labeled trajectories to predict robot actions based on the output of learnable action query tokens while maintaining the next-motion-token prediction objective.
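For a more concrete picture of stage (3), the sketch below shows one way such a co-fine-tuning loss could be written in PyTorch: robot actions are regressed from the outputs of the learnable action query tokens while the next-motion-token cross-entropy from pre-training is retained. All names, dimensions, and the loss weight are illustrative assumptions, not the released Moto implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def co_finetune_loss(motion_logits, motion_targets,
                     action_query_outputs, action_head, action_targets,
                     motion_weight=0.1):
    """Combine action supervision with the retained motion-prediction objective (toy sketch)."""
    # (a) Pre-training signal kept during fine-tuning: next latent motion token prediction.
    motion_loss = F.cross_entropy(motion_logits.flatten(0, 1), motion_targets.flatten())
    # (b) New supervision: robot actions regressed from the action query token outputs.
    action_loss = F.l1_loss(action_head(action_query_outputs), action_targets)
    return action_loss + motion_weight * motion_loss

# Toy usage with random tensors (batch=2, 8 motion tokens, vocab 128, 4 action queries, 7-DoF).
action_head = nn.Linear(256, 7)
loss = co_finetune_loss(
    motion_logits=torch.randn(2, 8, 128),
    motion_targets=torch.randint(0, 128, (2, 8)),
    action_query_outputs=torch.randn(2, 4, 256),
    action_head=action_head,
    action_targets=torch.randn(2, 4, 7),
)
```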
The Latent Motion Tokenizer produces discrete motion tokens from two consecutive video frames. The decoder is trained to reconstruct the second frame from the first frame and the discrete tokens, which forces the tokens to capture the motion between the frames.
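A minimal, self-contained PyTorch sketch of this idea is given below: a toy VQ-style tokenizer whose decoder must reconstruct frame t+1 from frame t plus a handful of discrete codes. The architecture, codebook size, number of tokens, and loss weights are toy assumptions that greatly simplify the actual Moto tokenizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentMotionTokenizer(nn.Module):
    """Toy VQ-style motion tokenizer: frames (t, t+1) -> discrete tokens -> reconstruct t+1."""

    def __init__(self, codebook_size=128, num_tokens=4, dim=256):
        super().__init__()
        # The encoder sees both frames and summarizes their difference as `num_tokens` vectors.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=4), nn.ReLU(),
            nn.Conv2d(64, dim, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d((num_tokens, 1)),
        )
        self.codebook = nn.Embedding(codebook_size, dim)
        # The decoder only gets frame t plus the quantized motion code, so the tokens
        # must carry everything needed to produce frame t+1.
        self.decoder = nn.Sequential(
            nn.Conv2d(3 + dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def quantize(self, z):
        # Nearest codebook entry per continuous motion embedding (VQ-VAE style).
        dist = (z.unsqueeze(2) - self.codebook.weight[None, None]).pow(2).sum(-1)
        ids = dist.argmin(-1)                                    # discrete motion token ids
        z_q = self.codebook(ids)
        vq_loss = F.mse_loss(z_q, z.detach()) + 0.25 * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()                             # straight-through estimator
        return z_q, ids, vq_loss

    def forward(self, frame_t, frame_t1):
        z = self.encoder(torch.cat([frame_t, frame_t1], dim=1))  # (B, dim, num_tokens, 1)
        z = z.squeeze(-1).transpose(1, 2)                        # (B, num_tokens, dim)
        z_q, ids, vq_loss = self.quantize(z)
        # Broadcast the pooled motion code over the image and decode together with frame t.
        cond = z_q.mean(1)[:, :, None, None].expand(-1, -1, *frame_t.shape[-2:])
        recon = self.decoder(torch.cat([frame_t, cond], dim=1))
        return F.mse_loss(recon, frame_t1) + vq_loss, ids

# Toy usage: 64x64 RGB frame pairs.
frames_t, frames_t1 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
loss, motion_ids = LatentMotionTokenizer()(frames_t, frames_t1)
```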
Qualitative reconstruction results: discrete motion tokens, produced by the Latent Motion Tokenizer from the initial frame and the next frame, are fed into the decoder together with the initial frame to reconstruct the target frame.
To comprehensively evaluate the effectiveness of Moto, we study three key experimental questions: (1) Can the Latent Motion Tokenizer encode diverse video movements into expressive and interpretable latent motion tokens? (2) Does Moto-GPT, pre-trained on motion token prediction, learn useful motion priors? (3) Do the learned motion priors transfer to robot manipulation after fine-tuning on action-labeled data?
Visualization of latent motion token interpretability. Each row displays reconstructed frames from the same initial frame using different latent motion tokens, while each column shows frames reconstructed from the same latent motion tokens with varying initial frames. The latent motion tokens exhibit consistent (see columns) and discriminative (see rows) semantics, despite being trained in an unsupervised manner.
Video imitation generation via latent motion tokens: a sequence of latent motion tokens is extracted from a demonstration video by the Latent Motion Tokenizer and decoded into a new video that starts from a different initial frame while preserving the original robot movement semantics.
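A possible sketch of this imitation procedure, assuming a trained tokenizer that exposes `encode` and `decode` methods (hypothetical names for illustration, not the released API):

```python
import torch

# Assumed interface: `tokenizer.encode(frame_t, frame_t1) -> motion token ids` and
# `tokenizer.decode(frame_t, ids) -> predicted next frame`.
@torch.no_grad()
def imitate_video(tokenizer, demo_frames, new_initial_frame):
    """Re-render a demonstration's motion starting from a different initial frame."""
    # 1) Extract the latent motion token sequence from consecutive demo frame pairs.
    motion_ids = [
        tokenizer.encode(f_t.unsqueeze(0), f_t1.unsqueeze(0))
        for f_t, f_t1 in zip(demo_frames[:-1], demo_frames[1:])
    ]
    # 2) Roll the decoder forward from the new initial frame, reusing the demo's motions,
    #    so the generated video keeps the original robot movement semantics.
    frames = [new_initial_frame.unsqueeze(0)]
    for ids in motion_ids:
        frames.append(tokenizer.decode(frames[-1], ids))
    return torch.cat(frames, dim=0)
```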
Visualization of video trajectories generated from a sequence of latent motion tokens, which are predicted by the pre-trained Moto-GPT given different language instructions.
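The rollout behind such visualizations could look roughly like the sketch below, which samples motion tokens autoregressively for an instruction and decodes each into the next frame. `moto_gpt.next_token_logits` and `tokenizer.decode` are assumed interfaces, and the sketch simplifies to one motion token per frame transition, whereas the actual model predicts several tokens per step.

```python
import torch

@torch.no_grad()
def rollout_video(moto_gpt, tokenizer, instruction_ids, initial_frame, steps=8):
    """Language-conditioned rollout: sample motion tokens, then render them frame by frame."""
    motion_ids = torch.empty(0, dtype=torch.long)
    frames = [initial_frame.unsqueeze(0)]
    for _ in range(steps):
        # Logits over the motion-token vocabulary given the instruction and the prefix so far.
        logits = moto_gpt.next_token_logits(instruction_ids, motion_ids)
        next_id = torch.distributions.Categorical(logits=logits).sample()
        motion_ids = torch.cat([motion_ids, next_id.view(1)])
        frames.append(tokenizer.decode(frames[-1], next_id.view(1, 1)))
    return torch.cat(frames, dim=0)
```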
The pre-trained Moto-GPT distinguishes successful, failed, and random robot trajectories by their log-likelihoods, indicating that it can effectively measure the rationality of trajectories and provide potential reward signals.
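One way to compute such a score, assuming a hypothetical callable that returns per-position next-token logits (not the released Moto API):

```python
import torch
import torch.nn.functional as F

# Assumed interface: `moto_gpt(instruction_ids, motion_ids)` returns logits of shape
# (T, vocab_size), where logits[t] scores motion_ids[t] given the instruction and the
# motion tokens before position t.
@torch.no_grad()
def trajectory_log_likelihood(moto_gpt, instruction_ids, motion_ids):
    logits = moto_gpt(instruction_ids, motion_ids)
    log_probs = F.log_softmax(logits, dim=-1)
    token_ll = log_probs.gather(-1, motion_ids.unsqueeze(-1)).squeeze(-1)
    # Average per-token log-likelihood: higher values indicate a more plausible trajectory
    # and could serve as a reward proxy.
    return token_ll.mean().item()
```

Ranking trajectories by this score is what separates the successful, failed, and random groups shown above.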
Moto-GPT achieves performance competitive with much larger vision-language-action models such as RT-2-X (PaLI-X, 55B) and OpenVLA (Prismatic, 7B), despite its GPT-style backbone having only 98M parameters.
Moto-GPT shows strong zero-shot generalization ability in the unseen CALVIN environment, despite relying solely on RGB images from a static camera.
Task success rates of models fine-tuned with different proportions of action-labeled data on CALVIN (ABC→D). The performance gap between Moto-GPT and its variant trained from scratch without latent motion tokens (Moto w/o Motion Token) widens as the amount of fine-tuning data decreases.
Ablations of Moto-GPT on CALVIN (ABC→D). Moto-IML and Moto-DM share the same pre-training as Moto-GPT but differ in fine-tuning: Moto-IML omits the loss term for latent motion token prediction, while Moto-DM removes motion tokens from the input sequence entirely.
@article{chen2024moto,
  title={Moto: Latent Motion Token as the Bridging Language for Robot Manipulation},
  author={Chen, Yi and Ge, Yuying and Li, Yizhuo and Ge, Yixiao and Ding, Mingyu and Shan, Ying and Liu, Xihui},
  journal={arXiv preprint arXiv:2412.04445},
  year={2024}
}