Learning to Drive Anywhere

Ruizhao Zhu         Peng Huang         Eshed Ohn-Bar         Venkatesh Saligrama
Boston University
CoRL 2023

How can we learn a unified global-scale driving model from heterogeneous and distributed data?


Human drivers can seamlessly adapt their driving decisions across geographical locations with diverse conditions and rules of the road, e.g., left- vs. right-hand traffic. In contrast, existing models for autonomous driving have thus far been deployed only within restricted operational domains, i.e., without accounting for varying driving behaviors across locations or for model scalability. In this work, we propose AnyD, a single geographically-aware conditional imitation learning (CIL) model that can efficiently learn from heterogeneous and globally distributed data with dynamic environmental, traffic, and social characteristics. Our key insight is to introduce a high-capacity, geo-location-based channel attention mechanism that effectively adapts to local nuances while also flexibly modeling similarities among regions in a data-driven manner. By optimizing a contrastive imitation objective, our proposed approach can efficiently scale across inherently imbalanced data distributions and location-dependent events. We demonstrate the benefits of our AnyD agent across multiple datasets, cities, and scalable deployment paradigms, i.e., centralized, semi-supervised, and distributed agent training. Specifically, AnyD outperforms CIL baselines by over 14% in open-loop evaluation and 30% in closed-loop testing on CARLA.



Our model maps image, region, speed, and conditional command observations to future decisions, parameterized as waypoints in the map view. To efficiently learn a high-capacity model, we leverage a multi-head cross-attention module that fuses and adapts internal representations in a geo-aware manner. Our imitation objective, defined over human-demonstrated waypoints (outlined in green), the other command branches (outlined in red), and the weights predicted by the multi-head module, regularizes model optimization under diverse data distributions.
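The geo-aware adaptation idea can be sketched as follows: a learned per-region embedding acts as a query that attends over feature channels, producing per-channel weights that re-scale the image-derived representation for the current location. All names, shapes, and the single-head NumPy formulation below are illustrative assumptions for exposition only, not the paper's implementation (which uses a multi-head cross-attention module over learned representations).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: C feature channels, D embedding dim, R regions.
C, D, R = 8, 16, 4
region_embed = rng.normal(size=(R, D))  # learned per-region embeddings (queries)
W_k = rng.normal(size=(C, D))           # projects each channel to a key vector
features = rng.normal(size=(C,))        # image-derived channel features

def geo_channel_attention(features, region_id):
    # Query: embedding of the vehicle's current geographic region.
    q = region_embed[region_id]
    # Scores: one per channel; softmax yields channel-attention weights.
    scores = (W_k @ q) / np.sqrt(D)
    weights = softmax(scores)
    # Re-scale channels so the downstream policy sees geo-adapted features
    # (the factor C keeps the expected feature magnitude roughly unchanged).
    return features * weights * C, weights

adapted, w = geo_channel_attention(features, region_id=2)
print(adapted.shape, w.sum())  # weights sum to 1 over channels
```

Different regions thus reuse the same backbone while the attention weights specialize the representation per location; similar regions can learn similar embeddings, which recovers the data-driven sharing described above.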

Qualitative Results

AnyD exhibits robustness in diverse scenarios, including turning right (a wider turn) in Singapore and yielding to a ‘Pittsburgh left’.


BibTeX

@inproceedings{zhu2023learning,
  title={Learning to Drive Anywhere},
  author={Zhu, Ruizhao and Huang, Peng and Ohn-Bar, Eshed and Saligrama, Venkatesh},
  booktitle={7th Annual Conference on Robot Learning},
  year={2023}
}


Acknowledgments

This research was supported by a Red Hat Research Grant, Army Research Office Grant W911NF2110246, National Science Foundation grants (CCF-2007350, CCF-1955981, and IIS-2152077), and AFRL Contract no. FA8650-22-C-1039.