The talks will be held in person.
The Stanford Robotics and Autonomous Systems Seminar series hosts both invited and internal speakers. The seminar aims to bring the campus-wide robotics community together and to provide a platform for overviews of, and discussion about, the progress and challenges across the various disciplines of robotics. This quarter, the seminar is also offered to students as a 1-unit course. Note that registration for the class is NOT required in order to attend the talks.
The course syllabus is available here. Go here for more course details.
The seminar is open to Stanford faculty, students, and sponsors.
Attendance Form
For students taking the class, please fill out the attendance form (https://tinyurl.com/robotsem-fall-25) when attending the seminar to receive credit. You need to submit 7 attendance forms to receive credit for the quarter, or make up for missed sessions by submitting late paragraphs on the talks you missed via Canvas.
Seminar Youtube Recordings
All publicly available past seminar recordings can be viewed on our YouTube Playlist. Registered students can access all talk recordings on Canvas.
Get Email Notifications
Sign up for the mailing list: Click here!
Schedule Fall 2025
| Date | Guest | Affiliation | Title | Location | Time | 
|---|---|---|---|---|---|
| Fri, Sep 26 | Feifei Qian | USC | Make Every Step an Experiment: Towards Terrain-aware, High-mobility Robots for Planetary Explorations | Gates B01 | 3:00PM | 
| Abstract In this talk, I will present our recent progress in two main directions. First, I will show that by strategically eliciting the force responses from loose regolith, robots could generate desired ground reaction forces and achieve substantially improved locomotion performance on deformable substrates. Second, I will show that by leveraging the high force transparency of direct-drive actuators, robots can use their legs as proprioceptive sensors to opportunistically determine the terramechanical properties of regolith from every step. | |||||
| Fri, Oct 03 | Yang Gao | THU | Manipulation Data Pyramid: From Human Video Pretraining to Physical RL | Gates B01 | 3:00PM | 
| Abstract Scaling laws are now often seen as a key ingredient on the path toward general intelligence. But in robotics, progress is slowed by one major obstacle: the lack of abundant, high-quality data. In this talk, I introduce a data pyramid strategy designed to tackle this challenge by making the most of diverse data sources. The idea is simple but powerful: combine internet-scale datasets, human teleoperation data, and robot-collected experiences so that each strengthens and fills in the gaps of the others. | |||||
| Fri, Oct 10 | Simone Schürle‑Finke | ETHz | Design, Synthesis, Control, and Tracking of Soft Magnetic Microrobots for Targeted Therapeutic Delivery | Gates B01 | 3:00PM | 
| Abstract Effective delivery of therapeutics remains a central challenge in medicine, particularly when interventions must navigate complex and dynamic biological environments. Magnetic microrobots offer a promising solution, providing untethered locomotion and the ability to actively steer toward target sites. Among actuation strategies, rotational magnetic fields provide scalable torque-based propulsion, enabling continuous motion and agile navigation even under physiologically relevant flow conditions. In this presentation, I will highlight two complementary microrobotic platforms. Biohybrid microrobots based on bacteria combine autonomous chemotactic sensing with external torque-based control, allowing them to navigate tissues while maintaining responsiveness to applied magnetic fields. Synthetic bioinspired microrobots, constructed from biodegradable hydrogels with anisotropic magnetic nanoparticle patterns, exploit torque-driven propulsion to achieve efficient transport and directional control in vascular models and other constrained environments. To further improve targeting, I will introduce a strategy for spatially restricting rotating magnetic fields, focusing torque delivery to specific regions to enhance precision and reduce off-target effects. Complementing this, we integrate inductive feedback for real-time tracking, capturing magnetic phase lag and swarm synchronization to enable closed-loop control of microrobotic motion and collective behavior. Together, these advances—from torque-driven actuation and programmable magnetic design to spatially focused control and real-time feedback—demonstrate a versatile, scalable approach for microscale robotic systems in targeted therapeutics, paving the way toward clinical translation. | |||||
| Fri, Oct 17 | Nick Gravish | UCSD | Adaptive robots through reconfiguration, compliance, and contact | Gates B01 | 3:00PM | 
| Abstract Recent advances in robot materials and algorithms have enabled new levels of adaptive and versatile behavior. In this talk I will describe my lab’s efforts to create robots capable of emergent adaptive behaviors. I will first describe how soft materials can enable reconfigurable robot appendages and bodies, culminating in new modes of robot manipulation and locomotion. Next, I will describe how autonomous oscillators drive adaptive flapping wing robots and have shed new light on the evolution of insect flight. Lastly, I will describe how mechanical contact can be leveraged for multi-robot control such as friction modulation or multi-robot synchronization. The overarching focus of this work is to identify opportunities for adaptive behavior in robots from engineered emergent phenomena. | |||||
| Fri, Oct 24 | Hojung Choi | Stanford | General Compliant Robot Interaction Through Scalable F/T Sensing | Gates B01 | 3:00PM | 
| Abstract Robots excel at avoiding contact and performing structured tasks, but they often fail in unstructured, contact-rich environments. To interact safely and effectively, they must sense and regulate contact through compliance and tactile sensing. This talk presents two systems; CoinFT, a coin-sized, robust, and affordable 6-axis force/torque sensor, and UMI-FT, a handheld multimodal data collection platform that combines vision and finger-level force sensing. Together, they enable scalable tactile perception and compliant robot learning, allowing robots to not only detect contact but also use it, bringing us closer to general, contact-aware robot interaction with the real world. | |||||
| Fri, Oct 24 | Jonas Frey | Stanford & Berkeley | Embodied Foundation Models: Bridging RL Locomotion and LLMs for Legged Navigation | Gates B01 | 3:00PM | 
| Abstract Quadrupedal robots trained with reinforcement learning (RL) can navigate rough and challenging environments without falling, yet they remain far from fully autonomous and fail to achieve general-purpose navigation. Consequently, current systems heavily rely on teleoperation, which limits their practical utility. Meanwhile, large language models (LLMs) have acquired broad world knowledge and reasoning capabilities. In this talk, we review what makes sim-to-real work for locomotion and why it remains limited for manipulation, examine the systemic challenges of integrating RL with reasoning models, and showcase recent work bridging the simulation diversity gap. We also evaluate current LLM capabilities for enabling navigation and propose paths forward for embodied LLMs conditioned on low-level RL policies. | |||||
| Fri, Oct 31 | Ashutosh Saxena | Torque AGI | The Graph Physical AI Approach: Bridging Physics and Data for Scalable Robotics | Gates B01 | 3:00PM | 
| Abstract Autonomous robots still struggle in the field — when tasks drift from the script, materials deform, weather turns unpredictable, or unmodeled interactions arise. Across mobile robots, manipulators, and humanoids, such edge cases reveal the persistent gap between lab-trained AI and real-world reliability. While vision–language–action (VLA) models show promise in unifying perception, reasoning, and control, their reliance on massive datasets and retraining makes them fragile in dynamic, data-scarce settings. Robots cannot scale by data — they must scale by understanding. I will present Graph Physical AI (G-PAI), a foundation model that embeds physics–neural operators directly into its multimodal design, enabling data-efficient learning and robust adaptation across robot types. Its compositional architecture links perception, planning, and control agents through a shared physics-informed core, ensuring interpretability and fast generalization. G-PAI is already powering robots in demanding conditions — from warehouses and construction sites to agricultural operations. It has formal safety benchmarks on long-tail edge cases with an OEM, with broader testing underway across additional domains. Together, these results mark a practical step toward deployable, general-purpose Physical AI. | |||||
| Fri, Nov 07 | Yuke Zhu | UT Austin | TBD | Gates B01 | 3:00PM | 
| Abstract TBD | |||||
| Fri, Nov 21 | Jitendra Malik | UCB | TBD | Gates B01 | 3:00PM | 
| Abstract TBD | |||||
| Fri, Dec 05 | Nadia Figueroa | UPenn | TBD | Gates B01 | 3:00PM | 
| Abstract TBD | |||||
Sponsors
The Stanford Robotics and Autonomous Systems Seminar enjoys the support of the following sponsors.