Me
Zhuoran Li
Robotics Researcher
zhuoran.li [at] u [dot] nus [dot] edu

About

Hello, my name is Zhuoran Li and you can call me Ran for short :) I am always driven by interesting and challenging problems (rather than specific methods) in developing generalist robots that learn and plan to help people. I am currently a research assistant working with Prof. Jiajun Wu, Prof. David Hsu, and Dr. Weiyu Liu on automatically learning effective abstractions—state abstractions (predicates) and action abstractions (skills)—from everyday human videos for robot planning, using neural-symbolic methods and foundation models.

Previously, I received my Bachelor of Science with the highest distinction from the National University of Singapore (NUS). As an undergraduate visiting research intern at the Stanford Vision and Learning Lab (SVL), I had a wonderful research experience working on designing ideal scene representations for robotic manipulation by leveraging foundation models, where I was grateful to be advised by Prof. Jiajun Wu and Prof. Yunzhu Li.

At the intersection of automated planning and machine learning, my long-term research goal is to build generalist, human-centered robots that perceive, model, and interact with the real world, executing long-horizon tasks from simple language commands like "make the coffee". I pursue robust generalization through a structured representation of world knowledge that leverages the rich knowledge embedded in human language and everyday activity and grounds it through interaction, connecting abstract concepts to sensorimotor skills. Concretely, I develop neural-symbolic methods and foundation-model pipelines that automatically learn effective abstractions—state abstractions (predicates) and action abstractions (skills)—from everyday human videos for robot planning. These structures make learning more data-efficient and more compositionally generalizable, and they accelerate inference and planning. Before formalizing this long-term research direction, I worked on designing scene representations for robotic manipulation, which gave me a strong foundation in robotic perception and manipulation.

Publications

(* indicates equal contribution)
D3Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement
Yixuan Wang*, Mingtong Zhang*, Zhuoran Li*, Katherine Driggs-Campbell, Jiajun Wu, Li Fei-Fei, Yunzhu Li
The Conference on Robot Learning (CoRL), 2024
Oral Presentation
[Project Page] [Paper] [Code]
RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation
Hanxiao Jiang, Binghao Huang, Ruihai Wu, Zhuoran Li, Shubham Garg, Hooshang Nayyeri, Shenlong Wang, Yunzhu Li
The Conference on Robot Learning (CoRL), 2024
Best Paper Nomination at ICRA 2024 Workshop on Vision-Language Models for Manipulation [Link]
Oral Presentation at CVPR 2024 Workshop on Vision and Language for Autonomous Driving and Robotics [Link]
[Project Page] [Paper] [Code]

Selected Honors & Awards

  • Dean's List, NUS
  • Certificate of Outstanding Performance & Top Students in CS4243 Computer Vision and Pattern Recognition (graduate-level course), NUS
  • Certificate of Best Project Award in CS4246/CS5446 AI Planning and Decision Making (graduate-level course), NUS
  • The Science & Technology Undergraduate Scholarship (an undergraduate full scholarship offered to support outstanding students), NUS & Singapore Ministry of Education

Experiences

May 2025 - Present
Research Assistant @ Stanford & NUS
Advisor: Prof. Jiajun Wu and Prof. David Hsu
Working on automatically learning effective abstractions—state abstractions (predicates) and action abstractions (skills)—from everyday human videos for robot planning, using neural-symbolic methods and foundation models.
Sep. 2023 - May 2024
Undergraduate Research Assistant @ NUS
Group of Learning and Optimization Working in AI (GLOW.AI)
Advisor: Prof. Bryan Low
Worked on my undergraduate honors thesis on leveraging large multimodal models (LMMs) for decision making and planning under uncertainty, with applications in autonomous driving and robotics.
Oct. 2023 - Feb. 2024
Undergraduate Research Assistant @ UIUC
Robotic Perception, Interaction, and Learning Lab (RoboPIL)
Advisor: Prof. Yunzhu Li
Worked on building action-conditioned scene graphs via interactive exploration for robotic manipulation, incorporating large multimodal models (LMMs).
Apr. 2023 - Sep. 2023
Undergraduate Visiting Research Intern @ Stanford
Stanford Vision and Learning Lab (SVL)
Advisor: Prof. Jiajun Wu and Prof. Yunzhu Li
Worked on designing an ideal scene representation for embodied AI—3D, dynamic, and semantic—by leveraging foundation models.
Jan. 2021 - Aug. 2021
Undergraduate Research Assistant @ NUS
Adaptive Computing Lab (AdaComp)
Advisor: Prof. David Hsu
Worked on designing the control and system architecture for autonomous long-horizon visual navigation.

Academic Service

Teaching Assistant
  • CS3244: Machine Learning, Fall 2021, NUS
  • CS2040: Data Structures and Algorithms, Fall 2020, NUS