Human-like Navigation in a World Built for Humans

  • University of Illinois Urbana-Champaign
  • * Equal Contribution

CoRL 2025



Abstract

When navigating in a man-made environment they haven’t visited before—like an office building—humans employ behaviors such as reading signs and asking others for directions. These behaviors help humans reach their destinations efficiently by reducing the need to search through large areas. Existing robot navigation systems lack the ability to execute such behaviors and are thus highly inefficient at navigating within large environments. We present ReasonNav, a modular navigation system which integrates these human-like navigation skills by leveraging the reasoning capabilities of a vision-language model (VLM). We design compact input and output abstractions based on navigation landmarks, allowing the VLM to focus on language understanding and reasoning. We evaluate ReasonNav on real and simulated navigation tasks and show that the agent successfully employs higher-order reasoning to navigate efficiently in large, complex buildings.


Supplementary Video



Citation

If you find our work useful in your research, please consider citing:

@inproceedings{chandaka2025reasonnav,
  author    = {Chandaka, Bhargav and Wang, Gloria and Chen, Haozhe and Che, Henry and Zhai, Albert and Wang, Shenlong},
  title     = {Human-like Navigation in a World Built for Humans},
  booktitle = {Conference on Robot Learning},
  year      = {2025}
}


Acknowledgements

The website template was borrowed from ClimateNerf and Sim-on-Wheels.