Zhancun Mu

Undergraduate student

About


I am currently a senior undergraduate student in the Tong Class at Peking University.

My research focuses on developing autonomous agents capable of operating in open-ended, dynamic environments such as Minecraft. To this end, I concentrate on two key elements. First, I design robust planning systems that enable agents to make complex decisions over long time horizons; this involves developing algorithms for reasoning about uncertain outcomes, decomposing high-level goals into executable subgoals, and continuously revising plans as new information becomes available. Second, I build intuitive controllers that allow agents to interact seamlessly with their environments, which requires integrating perception, decision-making, and low-level control in a unified framework that can handle the rich, multimodal inputs and outputs characteristic of virtual worlds.

Beyond these core areas, I am keenly interested in multi-agent systems, exploring how autonomous agents can coordinate their behaviors to accomplish shared objectives. I am also drawn to cognitive reasoning and to building computational models that capture aspects of human-like intelligence, and I am excited by the potential of AI to accelerate scientific discovery across domains. My ultimate vision is to develop intelligent agents that can autonomously explore, understand, and shape the open-ended environments they inhabit in pursuit of complex, self-motivated goals.

News

  • Sep 2024: 🎉🎉 Our latest Vision-Language-Action model OmniJARVIS has been accepted to NeurIPS 2024.
  • Jun 2024: 🎉🎉 Our paper "A Contextual Combinatorial Bandits Approach to Negotiation" has been accepted at ICML 2024.
  • Jan 2024: 🎉🎉 PTGM has been accepted to ICLR 2024 for an oral presentation (top 1.2%).

Publications

Embodied Agent




ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting


Shaofei Cai, Zihao Wang, Kewei Lian, Zhancun Mu, Xiaojian Ma, Anji Liu, Yitao Liang




Pre-Training Goal-based Models for Sample-Efficient Reinforcement Learning


Haoqi Yuan, Zhancun Mu, Feiyang Xie, Zongqing Lu

The Twelfth International Conference on Learning Representations, 2024




OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents


Zihao Wang, Shaofei Cai, Zhancun Mu, Haowei Lin, Ceyao Zhang, Qing Li, Xuejie Liu, Anji Liu, Xiaojian Ma, Yitao Liang

The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024


Multi-Agent System




A Contextual Combinatorial Bandits Approach to Negotiation


Yexin Li, Zhancun Mu, Siyuan Qi

The Forty-first International Conference on Machine Learning, 2024


AI4Science


GlobalTomo: A global dataset for physics-ML seismic wavefield modeling and FWI


Shiqian Li, Zhi Li, Zhancun Mu, Shiji Xin, Zhixiang Dai, Kuangdai Leng, Ruihua Zhang, Xiaodong Song, Yixin Zhu

2024



