Milan Ganai is a PhD student in the Department of Computer Science advised by Professors Marco Pavone and Clark Barrett. His research interests lie at the intersection of safe AI and robotics, with a focus on developing generalizable physical reasoning capabilities that let autonomous systems reliably adapt to novel environments. Prior to Stanford, he received his BS in Computer Science, summa cum laude with highest distinction, and MS in Computer Science at UC San Diego, where he was a Jacobs School Scholar and Regents Scholar. There he performed research at the intersection of control and reinforcement learning under Professors Sicun Gao and Sylvia Herbert, and he has interned at Amazon Web Services.
Abstract: Embodied Chain-of-Thought (CoT) reasoning has significantly enhanced Vision-Language-Action (VLA) models, yet current methods rely on rigid templates to specify reasoning primitives (e.g., objects in the scene, high-level plans, structural affordances). These templates can force policies to process irrelevant information that distracts from critical action-prediction signals. This creates a bottleneck: without successful policies, we cannot verify reasoning quality; without quality reasoning, we cannot build robust policies. We introduce R&B-EnCoRe, which enables models to bootstrap embodied reasoning from internet-scale knowledge through self-supervised refinement. By treating reasoning as a latent variable within importance-weighted variational inference, models can generate and distill a refined reasoning training dataset of embodiment-specific strategies without external rewards, verifiers, or human annotation. We validate R&B-EnCoRe across manipulation (Franka Panda in simulation, WidowX in hardware), legged navigation (bipedal, wheeled, bicycle, quadruped), and autonomous driving embodiments using various VLA architectures with 1B, 4B, 7B, and 30B parameters. Our approach achieves a 28% gain in manipulation success, a 101% improvement in navigation scores, and a 21% reduction in collision rate over models that indiscriminately reason about all available primitives. R&B-EnCoRe enables models to distill reasoning that is predictive of successful control, bypassing manual annotation engineering while grounding internet-scale knowledge in physical execution.
@article{GanaiLuoEtAl2026,
author = {Ganai, M. and Luo, K. and Frey, J. and Barrett, C. and Pavone, M.},
title = {Self-Supervised Bootstrapping of Action-Predictive Embodied Reasoning},
year = {2026},
journal = {ArXiv 2602.08167},
url = {https://arxiv.org/abs/2602.08167},
keywords = {sub},
owner = {mganai},
timestamp = {2026-02-09}
}
Abstract: The draft IMO MASS Code requires autonomous and remotely supervised maritime vessels to detect departures from their operational design domain, enter a predefined fallback that notifies the operator, permit immediate human override, and avoid changing the voyage plan without approval. Meeting these obligations in the alert-to-takeover gap calls for a short-horizon, human-overridable safe-keeping policy. Classical maritime autonomy stacks struggle when the correct action depends on meaning (e.g., a diver-down flag means people in the water, a fire close by means hazard). We argue (i) that vision–language models (VLMs) provide semantic awareness for such out-of-distribution situations, and (ii) that a fast–slow anomaly pipeline with a short-horizon, human-overridable fallback makes this practical in the handover window. We introduce Semantic Lookout, a camera-only, candidate-constrained vision–language model bridge that selects one cautious action (or station-keeping) from water-valid, world-anchored trajectories under continuous human authority. On 40 harbor scenes we measure per-call scene understanding and latency, alignment with human consensus (model majority-of-three voting), short-horizon risk relief on fire hazard scenes, and an on-water alert→bridge→operator handover. Sub-10 s models retain most of the awareness of slower state-of-the-art models. The bridge policy outperforms geometry-only baselines and increases standoff distance on fire scenes. A field run verifies end-to-end operation. These results support VLMs as a semantic fallback “bridge policy” compatible with the draft IMO MASS Code, within practical latency budgets, and motivate future work on domain-adapted, hybrid autonomy that pairs foundation-model semantics with multi-sensor bird’s-eye-view perception and short-horizon replanning.
@article{ChristensenTufteEtAl2026,
author = {Christensen, K. A. and Tufte, A. G. and Gusev, A. and Sinha, R. and Ganai, M. and Alsos, O. A. and Pavone, M. and Steinert, M.},
title = {Foundation Models on the Bridge: Semantic Hazard Detection and Safety Maneuvers for Maritime Autonomy with Vision-Language Models},
journal = {Ocean Engineering},
year = {2026},
url = {https://arxiv.org/abs/2512.24470},
owner = {mganai},
timestamp = {2026-02-12}
}
Abstract: While foundation models offer promise toward improving robot safety in out-of-distribution (OOD) scenarios, how to effectively harness their generalist knowledge for real-time, dynamically feasible response remains a crucial problem. We present FORTRESS, a joint reasoning and planning framework that generates semantically safe fallback strategies to prevent safety-critical, OOD failures. At a low frequency under nominal operation, FORTRESS uses multi-modal foundation models to anticipate possible failure modes and identify safe fallback sets. When a runtime monitor triggers a fallback response, FORTRESS rapidly synthesizes plans to fallback goals while inferring and avoiding semantically unsafe regions in real time. By bridging open-world, multi-modal reasoning with dynamics-aware planning, we eliminate the need for hard-coded fallbacks and human safety interventions. FORTRESS outperforms on-the-fly prompting of slow reasoning models in safety classification accuracy on synthetic benchmarks and real-world ANYmal robot data, and further improves system safety and planning success in simulation and on quadrotor hardware for urban navigation.
@inproceedings{GanaiSinhaEtAl2025,
author = {Ganai, M. and Sinha, R. and Agia, C. and Morton, D. and Di Lillo, L. and Pavone, M.},
title = {Real-Time Out-of-Distribution Failure Prevention via Multi-Modal Reasoning},
booktitle = {{Conf. on Robot Learning}},
year = {2025},
month = jul,
address = {Seoul, Korea},
keywords = {press},
owner = {mganai},
url = {https://arxiv.org/abs/2505.10547},
timestamp = {2025-06-08},
note = {oral}
}