Attention

The talks will be in-person.

The Stanford Robotics and Autonomous Systems Seminar series hosts both invited and internal speakers. The seminar aims to bring the campus-wide robotics community together and to provide a platform for surveying and discussing progress and challenges across the various disciplines of robotics. This quarter, the seminar is also offered to students as a 1-unit course. Note that registration for the class is NOT required in order to attend the talks.

The course syllabus is available here. Go here for more course details.

The seminar is open to Stanford faculty, students, and sponsors.

Attendance Form

Students taking the class should fill out the attendance form (https://tinyurl.com/robosem-spr-26) when attending the seminar to receive credit. You need to submit the attendance form for at least 7 seminars to receive credit for the quarter, or make up for missed talks by submitting late paragraphs about them via Canvas.

Seminar Youtube Recordings

All publicly available past seminar recordings can be viewed on our YouTube playlist. Registered students can access all talk recordings on Canvas.

Get Email Notifications

Sign up for the mailing list: Click here!

Schedule Spring 2026

Date Guest Affiliation Title Location Time
Fri, Apr 03 Baxi Chong Penn State Mechanical intelligence in locomotion: from information theory to multi-legged robots Gates B03 3:00PM
Abstract

Locomotion in complex environments (e.g., rubble, leaf litter, granular media) is essential to mobile engineered systems such as robots. Effective locomotion requires complex control strategies to interact with terrain heterogeneity. Computational intelligence (CI), which typically includes rapid terrain sensing and active feedback controls, is a widely recognized component in locomotion strategy. Alternatively, mechanical intelligence (MI) - passive response to environmental perturbation governed by physics laws or mechanical constraints - is an important yet less studied component. In this talk, I will discuss 'why' and 'how' MI can contribute to effective locomotion using the examples of multi-legged robots (redundantly segmented bodies with simple legs). For the 'why,' I will quantify a specific MI that emerges from leg redundancy. By modeling locomotion as a stochastic process (analogous to signal transmission over noisy channels), I will show that MI, without any CI, is sufficient to generate reliable and effective locomotion. To explore the 'how,' I will take a quantitative analogy to signal transmission algorithms (e.g., error correcting/detecting codes) and propose a co-design coding scheme for multi-legged locomotion. Specifically, my talk will cover that (i) additional legs, with higher control dimensions, can enable a broader spectrum of capabilities, including load carrying/pulling, sidewinding, rolling, and obstacle-climbing; (ii) the inclusion of CI (feedback controls) can enhance multi-legged locomotion speed while preserving the feature of robustness; and (iii) CI might reduce the number of redundant legs required to navigate a particular terrain. Finally, I will discuss the coordination and competition between MI and CI in a broader framework termed Embedded Intelligence (EI) and illustrate the applications of MI-dominated systems in fields like search-and-rescue, agriculture, and the development of soft, micro, and modular robots.

Fri, Apr 10 Danfei Xu Georgia Tech Robot Learning from Human Experience: Science and Scaling Gates B03 3:00PM
Abstract

Modern AI advances by transferring knowledge from humans to machines at scale. Vision and language models learn from vast Internet data, but robot learning still relies heavily on slow, labor-intensive teleoperation. Recently this assumption has begun to shift: growing industrial efforts are collecting large amounts of human experience data to scale robot performance. As large-scale data collection becomes increasingly feasible, the central challenge shifts to understanding how robots can learn from human behavior. In this talk, I argue that human-to-robot transfer can be understood as two coupled problems: extracting priors about physical intelligence from human experience, and grounding those priors into a robot’s embodiment. I will revisit several of our recent works through this lens, showing how egocentric human data enables scalable learning of manipulation priors, while representation learning and cross-embodiment transfer address the grounding challenge. I will also discuss recent results showing emergent human-to-robot transfer from large-scale human pretraining, as well as evidence that learning across diverse robot embodiments can further improve transfer. Finally, I will introduce EgoVerse, an ecosystem for robot learning from embodied human data, and discuss how collaborative platforms can enable both rigorous science and organic data growth. I will conclude with future directions toward more human-centered robots that better understand human intent and collaborate naturally with people.

Fri, Apr 17 Rachel Holladay UPenn TBD Gates B03 3:00PM
Abstract

TBD

Fri, Apr 24 Michael Yip UCSD TBD Gates B03 3:00PM
Abstract

TBD

Fri, May 01 Negar Mehr UC Berkeley TBD Gates B03 3:00PM
Abstract

TBD

Fri, May 08 Jiayuan Mao Amazon FAR, UPenn TBD Gates B03 3:00PM
Abstract

TBD

Fri, May 15 Howie Choset CMU TBD Gates B03 3:00PM
Abstract

TBD

Fri, May 22 Rob Platt Northeastern TBD Gates B03 3:00PM
Abstract

TBD

Fri, May 29 Nick Colonese Meta TBD Gates B03 3:00PM
Abstract

TBD

Sponsors

The Stanford Robotics and Autonomous Systems Seminar is supported by the following sponsors.