Zhancun Mu

Undergraduate student

OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents


Journal article


Zihao Wang, Shaofei Cai, Zhancun Mu, Haowei Lin, Ceyao Zhang, Qing Li, Xuejie Liu, Anji Liu, Xiaojian Ma, Yitao Liang
2024

PDF Website
Cite

APA
Wang, Z., Cai, S., Mu, Z., Lin, H., Zhang, C., Li, Q., … Liang, Y. (2024). OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents.


Chicago/Turabian
Wang, Zihao, Shaofei Cai, Zhancun Mu, Haowei Lin, Ceyao Zhang, Qing Li, Xuejie Liu, Anji Liu, Xiaojian Ma, and Yitao Liang. “OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents” (2024).


MLA
Wang, Zihao, et al. OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents. 2024.


BibTeX

@article{zihao2024a,
  title = {OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents},
  year = {2024},
  author = {Wang, Zihao and Cai, Shaofei and Mu, Zhancun and Lin, Haowei and Zhang, Ceyao and Li, Qing and Liu, Xuejie and Liu, Anji and Ma, Xiaojian and Liang, Yitao}
}

Abstract

We present OmniJARVIS, a novel Vision-Language-Action (VLA) model for open-world instruction-following agents in Minecraft. Compared to prior works that either emit textual goals to separate controllers or produce control commands directly, OmniJARVIS takes a different path, achieving both strong reasoning and efficient decision-making through unified tokenization of multimodal interaction data. First, we introduce a self-supervised approach to learn a behavior encoder that produces discretized tokens for behavior trajectories (τ = {o₀, a₀, …}) and an imitation learning (IL) policy decoder conditioned on these tokens. These behavior tokens are added to the vocabulary of pretrained Multimodal Language Models (MLMs). With this encoder, we then pack long-term multimodal interactions, involving task instructions, memories, thoughts, observations, textual responses, behavior trajectories, and more, into unified token sequences and model them with autoregressive transformers. Thanks to the semantically meaningful behavior tokens, the resulting VLA model, OmniJARVIS, can reason (by producing chains of thought), plan, answer questions, and act (by producing behavior tokens for the IL policy decoder). OmniJARVIS demonstrates excellent performance on a comprehensive collection of atomic, programmatic, and open-ended tasks in open-world Minecraft. Our analysis further reveals the crucial design principles of interaction data formation and unified tokenization, as well as the model's scaling potential.
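To make the unified-tokenization idea concrete, the following is a minimal, hypothetical Python sketch (not the authors' code): discrete codes from a behavior encoder are mapped into an extra vocabulary appended after the MLM's text vocabulary, then interleaved with instruction, observation, and thought tokens into one sequence an autoregressive transformer could model. All vocabulary sizes, token ids, and names here are illustrative assumptions.

from dataclasses import dataclass
from typing import List

TEXT_VOCAB_SIZE = 32_000        # size of the pretrained MLM vocabulary (assumed)
NUM_BEHAVIOR_TOKENS = 8_192     # size of the learned behavior codebook (assumed)


@dataclass
class Interaction:
    instruction: List[int]       # tokenized task instruction
    observation: List[int]       # tokenized visual observation (e.g. patch tokens)
    thought: List[int]           # tokenized chain-of-thought / textual response
    trajectory_codes: List[int]  # discrete codes emitted by the behavior encoder


def behavior_token_id(code: int) -> int:
    """Map a behavior codebook index into the extended vocabulary,
    placed after all text token ids."""
    assert 0 <= code < NUM_BEHAVIOR_TOKENS
    return TEXT_VOCAB_SIZE + code


def pack_interaction(x: Interaction) -> List[int]:
    """Pack one interaction round into a single autoregressive token sequence:
    instruction, observation, thought, then behavior tokens for the policy decoder."""
    seq: List[int] = []
    seq += x.instruction
    seq += x.observation
    seq += x.thought
    seq += [behavior_token_id(c) for c in x.trajectory_codes]
    return seq


if __name__ == "__main__":
    round_ = Interaction(
        instruction=[12, 901, 77],        # placeholder ids for "chop a tree"
        observation=[3001, 3002, 3003],   # placeholder ids for visual tokens
        thought=[44, 5, 210],             # placeholder ids for a short thought
        trajectory_codes=[17, 512, 4095], # placeholder behavior codes
    )
    print(pack_interaction(round_))

In this sketch, the behavior tokens live in the same flat id space as text, which is what lets a single transformer reason, answer, and act by next-token prediction alone.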

