Zhuoran Li
Robotics & Embodied AI Researcher
zhuoran.li [at] u [dot] nus [dot] edu

About

Hi, I am a robotics and embodied AI researcher. I received my Bachelor of Science with the highest distinction from the National University of Singapore (NUS), advised by Prof. Bryan Low. As an undergraduate visiting research intern at the Stanford Vision and Learning Lab (SVL), I had a wonderful research experience working on designing ideal scene representations for embodied AI by leveraging foundation models, and I was grateful to be advised by Prof. Jiajun Wu and Prof. Yunzhu Li. After that, I was a research assistant at the Robotic Perception, Interaction, and Learning Lab (RoboPIL), working with Prof. Yunzhu Li on action-conditioned scene graph building via interactive exploration for robotic manipulation, incorporating large multimodal models (LMMs).

My research lies at the intersection of robotics, computer vision, and machine learning. Specifically, I focus on embodied AI and aim to develop novel algorithms that enable robots to better perceive, understand, and learn from multimodal information in their interactions with the physical world, and to autonomously acquire the perception and manipulation skills needed for complex tasks, ultimately achieving human-level performance in scenarios ranging from industrial applications to daily life assistance.

Publications

(* indicates equal contribution)
D3Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement
Yixuan Wang*, Mingtong Zhang*, Zhuoran Li*, Katherine Driggs-Campbell, Jiajun Wu, Li Fei-Fei, Yunzhu Li
The Conference on Robot Learning (CoRL), 2024
Oral Presentation
[Project Page] [Paper] [Code]
RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation
Hanxiao Jiang, Binghao Huang, Ruihai Wu, Zhuoran Li, Shubham Garg, Hooshang Nayyeri, Shenlong Wang, Yunzhu Li
The Conference on Robot Learning (CoRL), 2024
Best Paper Nomination at ICRA 2024 Workshop on Vision-Language Models for Manipulation [Link]
Oral Presentation at CVPR 2024 Workshop on Vision and Language for Autonomous Driving and Robotics [Link]
[Project Page] [Paper] [Code]

Selected Honors & Awards

  • Dean's List, NUS
  • Certificate of Outstanding Performance & Top Students in CS4243 Computer Vision and Pattern Recognition (graduate-level course), NUS
  • Certificate of Best Project Award in CS4246/CS5446 AI Planning and Decision Making (graduate-level course), NUS
  • The Science & Technology Undergraduate Scholarship (an undergraduate full scholarship offered to support outstanding students), NUS & Singapore Ministry of Education

Experiences

Sep. 2023 - May 2024
Undergraduate Research Assistant @ NUS
Group of Learning and Optimization Working in AI (GLOW.AI)
Advisor: Prof. Bryan Low
Worked on my undergraduate honors thesis about leveraging large multimodal models (LMMs) for decision making and planning under uncertainty, with applications in autonomous driving and robotics.
Oct. 2023 - Feb. 2024
Undergraduate Research Assistant @ UIUC
Robotic Perception, Interaction, and Learning Lab (RoboPIL)
Advisor: Prof. Yunzhu Li
Worked on action-conditioned scene graph building via interactive exploration for robotic manipulation, incorporating large multimodal models (LMMs).
Apr. 2023 - Sep. 2023
Undergraduate Visiting Research Intern @ Stanford
Stanford Vision and Learning Lab (SVL)
Advisor: Prof. Jiajun Wu and Prof. Yunzhu Li
Worked on designing an ideal scene representation for embodied AI that is 3D, dynamic, and semantic, leveraging foundation models.
Jan. 2021 - Aug. 2021
Undergraduate Research Assistant @ NUS
Adaptive Computing Lab (AdaComp)
Advisor: Prof. David Hsu
Worked on designing the control and system architecture for autonomous long-horizon visual navigation.

Academic Service

Teaching Assistant
  • CS3244: Machine Learning, Fall 2021, NUS
  • CS2040: Data Structures and Algorithms, Fall 2020, NUS