Attention

The talks will be in-person.

The Stanford Robotics and Autonomous Systems Seminar series hosts both invited and internal speakers. The seminar aims to bring the campus-wide robotics community together and provide a platform for surveying and discussing the progress and challenges in the various disciplines of robotics. This quarter, the seminar is also offered to students as a 1-unit course. Note that registration for the class is NOT required in order to attend the talks.

The course syllabus is available here. Go here for more course details.

The seminar is open to Stanford faculty, students, and sponsors.

Attendance Form

For students taking the class, please fill out the attendance form (https://tinyurl.com/robosem-win-26) when attending the seminar to receive credit. You need to submit at least 7 attendance forms to receive credit for the quarter, or make up for missed talks by submitting late paragraphs on them via Canvas.

Seminar Youtube Recordings

All publicly available past seminar recordings can be viewed on our YouTube Playlist. Registered students can access all talk recordings on Canvas.

Get Email Notifications

Sign up for the mailing list: Click here!

Schedule Winter 2026

Date Guest Affiliation Title Location Time
Fri, Jan 09 Ahmed Qureshi Purdue Robot Motion Learning with Physics-Based PDE Priors Nvidia Auditorium 3:00PM
Abstract

This talk explores how partial differential equation (PDE)–based physics priors can provide a foundation for scalable and generalizable algorithms in robot motion learning. Rather than searching over discrete graphs or samples, it formulates and learns the solution to the motion-planning problem as a continuous value function governed by Hamilton–Jacobi (HJ) PDEs. These methods enable self-supervised value-function learning without reliance on expert trajectories or trial-and-error interaction. The learned value functions yield fast inference of motion plans and demonstrate strong scalability across complex, high-dimensional, and constraint-rich navigation and manipulation tasks. The talk also introduces an HJ PDE–derived mapping representation that unifies perception and planning: unlike occupancy grids or signed distance fields, it encodes motion-feasible geometry in a form naturally structured for continuous decision-making. Together, these developments outline a unified, numerically grounded framework for robot motion planning and control through the lens of physics-informed learning.
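For readers unfamiliar with physics-informed value learning, here is a minimal illustrative sketch, not the speaker's implementation: it trains a small neural travel-time field T(x) to satisfy an Eikonal-type Hamilton-Jacobi PDE, |grad T(x)| * S(x) = 1, with a toy speed field S(x) that vanishes near an obstacle and a boundary condition T(goal) = 0. All names (ValueNet, speed_field, the obstacle and goal locations) are hypothetical placeholders chosen for the example.

import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Small MLP predicting a travel-time value T(x) for a 2D state x."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def speed_field(x):
    """Toy speed model: speed drops to ~0 inside a disk obstacle at the origin."""
    dist = torch.linalg.norm(x, dim=-1, keepdim=True)
    return torch.clamp(dist - 0.3, min=1e-3)

def eikonal_residual(model, x):
    """PDE residual |grad T(x)| * S(x) - 1 evaluated on a batch of sampled states."""
    x = x.requires_grad_(True)
    T = model(x)
    grad_T = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    return grad_T.norm(dim=-1, keepdim=True) * speed_field(x) - 1.0

model = ValueNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
goal = torch.tensor([[0.8, 0.8]])            # boundary condition T(goal) = 0

for step in range(2000):
    x = torch.rand(256, 2) * 2.0 - 1.0       # self-supervised: random states in [-1, 1]^2, no expert data
    loss = eikonal_residual(model, x).pow(2).mean() + model(goal).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

At inference, a motion plan could be recovered by following the gradient of the learned field toward the goal; the HJ PDE formulations discussed in the talk are considerably richer than this toy.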

Fri, Jan 16 Sebastian Scherer CMU Resilient Autonomy for Extreme and Uncertain Environments Nvidia Auditorium 3:00PM
Abstract

Robots show great promise if they can get out of the lab into the field and go beyond a single-operator-per-robot paradigm. However, the unstructured nature of the real world requires nuanced decision-making by the robot. In this talk, I will outline some of our approaches, progress, and results on multi-modal sensing that provides nuanced perception inputs, as well as navigation in difficult terrain, and future directions of our research.

Fri, Jan 23 Jing Liang Stanford Autonomous Navigation in Complex Outdoor Environments: Towards Companion Robots for Longevity Nvidia Auditorium 3:00PM
Abstract

Deploying mobile robots in unstructured outdoor environments remains a fundamental challenge, requiring the ability to robustly perceive complex terrains, pedestrian flows, and general traffic rules. To effectively serve humans, especially older adults, these robots must go beyond simple navigation to also understand human behavior and enhance personal mobility. In this talk, I will review our previous approaches for long-range outdoor navigation, with a focus on scene understanding and planning. Then, I will present a high-level overview of what we are currently working on, where I aim to apply these navigation technologies to develop companion robots that support older adults.

Fri, Jan 23 Yao Feng Stanford From Digital Humans to Safe Humanoids: Grounded Reasoning and Compliant Interaction Nvidia Auditorium 3:00PM
Abstract

Humanoid robots are entering human-centric environments, where they must not only move well but also understand people and interact safely through physical contact. In this talk, I will present two complementary directions toward human-centered embodied intelligence. First, I will introduce GentleHumanoid, a whole-body control policy that combines motion tracking with compliant, tunable force regulation, enabling contact-rich behaviors such as gentle hugging, assistive support, and safe object interaction on the Unitree G1. Second, I will show how large language models can be grounded in 3D human motion for behavior understanding and planning, highlighting ChatPose and ChatHuman as steps toward systems that interpret actions, anticipate intent, and connect high-level reasoning to executable motion. I will close with future directions on scaling human–humanoid interaction data, developing vision-language-action models for long-horizon interaction, and incorporating muscle-driven modeling for more realistic and adaptive humanoids. 

Fri, Jan 30 Madhur Behl UVirginia Bringing AI Up To Speed Nvidia Auditorium 3:00PM
Abstract

Despite decades of advancement, autonomous driving systems have not met the high expectations set by many. What’s missing is physical intelligence: the ability of AI systems to reason, react, and adapt in real time, while operating safely and effectively within the laws of physics. In this talk, I will first examine which hurdles have turned out to be more formidable than expected, and share our research on how to refine testing methodologies to advance the safety of autonomous vehicles. I will then show how high-speed autonomous racing provides a unique proving ground to test the boundaries of AI’s physical capabilities. Leveraging more than a decade of experience in high-speed autonomous racing, particularly with the full-scale Cavalier Autonomous Racing Indy car and the F1tenth platform, I will demonstrate how racing at high speeds and in close proximity to other vehicles exposes unsolved challenges in perception, planning, and control. I will recount our journey from the lab to lap times, and the rigorous engineering required to build a full-scale autonomous racecar from scratch. Despite progress, autonomous racing has yet to match the skill of expert human drivers or master the complexity of dense, multi-car competition, indicating that we still have several more laps to go on our path toward artificial general “driving” intelligence.

Fri, Feb 06 Koushil Sreenath UC Berkeley Safety, Representations, and Generative Learning for Dynamical Systems Nvidia Auditorium 3:00PM
Abstract

This talk explores the interplay between model-based guarantees and learning-based flexibility in the control of dynamical systems. I begin with safety-critical control using control barrier functions (CBFs), highlighting that while CBFs enforce state constraints, they may induce unstable internal dynamics that render the system 'unsafe'! To address this, I introduce conditions under which CBF-based safety filters also ensure boundedness of the full system state. I then transition to learning representations of hybrid dynamical systems. I present a framework that learns continuous neural representations by exploiting the geometric structure induced by guards and resets, enabling accurate flow prediction of hybrid systems without explicit mode switching. Finally, I discuss generative learning approaches for control. Through applications to legged robotics, I illustrate how a generative sensorimotor model can generalize beyond the training distribution of pure locomotion and manipulation to achieve whole-body control. Together, these results highlight how structure, geometry, and learning can bridge safety guarantees and expressive control for complex dynamical systems.
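As background for the safety-filter discussion, the sketch below shows a standard CBF safety filter for a control-affine system x_dot = f(x) + g(x) u, not the speaker's specific formulation: the nominal command is minimally modified so that dh/dt >= -alpha * h(x) holds, keeping the safe set {x : h(x) >= 0} forward invariant. With a single scalar constraint the underlying QP reduces to a closed-form halfspace projection, so no solver is needed; the single-integrator example and all function names are illustrative assumptions.

import numpy as np

def cbf_filter(u_nom, Lf_h, Lg_h, h, alpha=1.0):
    """Project u_nom onto the halfspace {u : Lf_h + Lg_h @ u + alpha * h >= 0}."""
    a = np.atleast_1d(Lg_h).astype(float)
    slack = Lf_h + a @ u_nom + alpha * h
    if slack >= 0.0:                          # nominal command already satisfies the CBF condition
        return u_nom
    return u_nom - a * slack / (a @ a)        # minimal-norm correction onto the constraint boundary

# Toy example: 2D single integrator x_dot = u staying outside a disk of
# radius 0.5 around the origin, encoded by h(x) = ||x||^2 - 0.25.
x = np.array([0.8, 0.0])
u_nom = np.array([-1.0, 0.0])                 # nominal command heads toward the obstacle
h = x @ x - 0.25
Lf_h = 0.0                                    # drift term f(x) = 0 for a single integrator
Lg_h = 2.0 * x                                # dh/dx @ g(x), with g(x) = identity
u_safe = cbf_filter(u_nom, Lf_h, Lg_h, h, alpha=2.0)
print(u_safe)                                 # filtered command slows the approach to the obstacle

Note that this filter only constrains the barrier state; the talk's point about unstable internal dynamics is exactly what such a minimal filter does not address.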

Fri, Feb 20 Xianyi Cheng Duke TBD Nvidia Auditorium 3:00PM
Abstract

TBD

Fri, Feb 27 Max Simchowitz CMU TBD Nvidia Auditorium 3:00PM
Abstract

TBD

Fri, Mar 06 Jenny Barry RAI TBD Nvidia Auditorium 3:00PM
Abstract

TBD

Sponsors

The Stanford Robotics and Autonomous Systems Seminar enjoys the support of the following sponsors.