The talks will be in-person.
The Stanford Robotics and Autonomous Systems Seminar series hosts both invited and internal speakers. The seminar aims to bring the campus-wide robotics community together and to foster discussion about progress and challenges across the various disciplines of robotics. This quarter, the seminar is also offered to students as a 1-unit course. Note that registration for the class is NOT required in order to attend the talks.
The course syllabus is available here. Go here for more course details.
The seminar is open to Stanford faculty, students, and sponsors.
Seminar Recordings
All publicly available past seminar recordings can be viewed on our YouTube Playlist.
Get Email Notifications
Sign up for the mailing list: Click here!
Schedule Fall 2024
Date | Guest | Affiliation | Title | Location | Time |
---|---|---|---|---|---|
Fri, Sep 27 | A. Miguel San Martin | NASA Jet Propulsion Laboratory | The Challenges of Landing on Mars | Packard 101 | 3:00PM |
Abstract
This presentation will describe the challenges of landing on Mars and how those challenges were addressed in NASA missions through the years in response to ever-increasing science requirements, previous landing experiences, and changing programmatic constraints. From the Viking legged landers, through the Mars Pathfinder and the Mars Exploration Rovers airbag landers, and ultimately the Curiosity and Perseverance SkyCrane landers, Mars exploration has provided fertile ground for the development of innovative landing technologies in our quest to extract the scientific secrets hidden on the Red Planet. |
|||||
Fri, Oct 04 | Zhenan Bao | Stanford University | Learning from Skin: from Materials, Sensing Functions to Neuromorphic Engineering | Gates B01 | 3:00PM |
Abstract
Skin is the body’s largest organ. It is responsible for the transduction of a vast amount of information. This conformable, stretchable, self-healable and biodegradable material simultaneously collects signals from external stimuli, such as pressure, pain, and temperature, and translates them into spike-train signals. The development of electronic materials inspired by the complexity of this organ is a tremendous, unrealized materials challenge. Furthermore, skin-like integrated circuits are necessary for neuromorphic signal processing to generate spike-train signals. However, the advent of organic-based electronic materials may offer a potential solution to this longstanding problem. Over the past decade, we have developed materials design concepts to add skin-like functions to organic electronic materials without compromising their electronic properties. An important discovery was nano-confined polymer semiconductors and conductors. This finding addressed the long-standing challenge of conformational disorder-limited charge transport in polymer electronic materials. It enabled us to introduce various skin-like functions while simultaneously increasing the charge transport ability of polymer electronic materials. This fundamental understanding further allowed us to develop direct photo-patterning methods and fabrication processes for high-density, large-scale, soft stretchable integrated circuits. In addition, we developed various soft sensors for continuous measurements, including pressure, strain, shear, temperature, electrophysiological and neurotransmitter sensors. These sensors and integrated circuits are the foundations for soft bioelectronics and are enabling a broad range of new tools for medical devices, robotics and wearable electronics. |
|||||
Fri, Oct 11 | Laura Leal-Taixé | NVIDIA | Open-world Segmentation and Tracking in 3D | Gates B01 | 3:00PM |
Abstract
In this talk, I will discuss how to train models for detection and segmentation of 3D data, e.g., Lidar, without having to actually annotate data in the 3D domain. I will discuss different uses of pseudo-labels and present a demo for our Segment Anything in Lidar. |
|||||
Fri, Oct 18 | Aaron Johnson | Carnegie Mellon University | The Trouble with Contact: Helping Robots Touch the World | Gates B01 | 3:00PM |
Abstract
Contact with the outside world is challenging for robots due to its inherently discontinuous nature -- when a foot or hand is touching a surface, the forces are completely different than when it is just above the surface. However, most of our computational and analytic tools for planning, learning, and control assume continuous (if not smooth or even linear) systems. Simple models of contact make assumptions (like plasticity and Coulomb friction) that are known to be not only physically wrong but also inconsistent. In this talk I will present techniques for overcoming these challenges in order to adapt smooth methods to systems that have changing contact conditions. In particular I will focus on three topics: First, I will present the “Salted Kalman Filter” for state estimation over hybrid systems. Second, I will present an analysis approach that unifies and extends different strategies for stabilizing and controlling systems through contact. Finally, I will talk about when these hybrid models of contact break down, especially when driving on sand. |
|||||
Fri, Oct 25 | Iro Armeni | Stanford University | Living Scenes: Creating and updating 3D representations of evolving indoor scenes | Gates B01 | 3:00PM |
Abstract
Buildings are like living organisms: they evolve over time through interaction with natural phenomena and humans. How can we realistically maintain their digital twins throughout their lifespan? In other words, how can we maintain a living building model as the space undergoes changes? In this talk, I will present some of my recent works that focus on creating and updating building replicas of geometry and semantics using visual data that depict the building undergoing changes over time as a result of human interaction. I will discuss handling both drastic changes to the building during construction and smaller changes to asset location and geometry during operation, while ensuring privacy and realistic implementations. The goal of this research line is to develop quantitative and data-driven methods for better construction and operation monitoring, to ultimately create sustainable buildings that are more suitable for users. |
|||||
Fri, Nov 08 | Student Speaker - Albert Wu | Stanford University | Leveraging Physics-Based Models To Learn Generalizable Robotic Manipulation | Gates B01 | 3:00PM |
Abstract
Robotic manipulation involves rich contact interactions, which pose significant challenges in producing effective manipulation policies. While model-based planners have achieved great success in continuous systems, their performance typically breaks down when dealing with the complex, discontinuous contact dynamics in manipulation tasks. On the other hand, behavior cloning and reinforcement learning have demonstrated success in fine-grained manipulation, but often struggle with generalization beyond the specific scenarios encountered during training. In this talk, I will explore how incorporating physical models can enable efficient learning of robust manipulation policies. I will present a series of projects that demonstrate how integrating physics priors and contact constraints into learning pipelines can facilitate the execution of sophisticated dexterous and nonprehensile tasks across a wide range of simulation and hardware environments. |
|||||
Fri, Nov 08 | Student Speaker - Savannah Cofer | Stanford University | Designing Origami Voxels for Robotics | Gates B01 | 3:30PM |
Abstract
The topic of this talk is designing novel origami voxels and their applications in robotics. Just as folding a flat sheet of origami paper creates a multitude of emergent properties, we extend these principles to the design and folding of three-dimensional cubes. We focus on designing origami voxels with specific mechanical properties relevant to robotics applications, including multistability, self-locking, reconfigurability, modularity, and underactuation. The highly coupled nature of folding origami cubes contributes to addressing fundamental challenges of actuation complexity and energetic stability in modular robotics, with the goal of combining mathematical elegance and creative design with practical functionality. The talk will explore the crease patterns of eight different origami voxels with a hands-on approach, and discuss both theoretical and experimental methods of characterizing these designs for shape-changing robots. |
|||||
Fri, Nov 15 | Somil Bansal | Stanford University | Continual Safety Assurances for Learning-Enabled Robotic Systems | Gates B01 | 3:00PM |
Abstract
The ability of machine learning techniques to leverage data and process rich perceptual inputs (e.g., vision) makes them highly appealing for use in robotic systems. However, the inclusion of data and machine learning in the control loop poses an important challenge: how can we guarantee the safety of such systems? To address these safety challenges, we present a controller synthesis technique based on the computation of reachable sets, using optimal control and game theory. We present new methods that leverage advances in physics-informed neural networks to compute reachable sets and safety controllers efficiently. These techniques are highly scalable to high-dimensional systems, enabling the learning of safe controllers for a wide array of robotic systems. Furthermore, these methods allow us to dynamically update safety assurances online, as new environment information is obtained during deployment. In the second part of the talk, we will present a toolbox of methods that use data-driven reachable sets to stress-test learning- and vision-based controllers. By identifying safety-critical failures, these tools guide performance improvement while maintaining safety. Together, these advances establish a continual safety assurance framework for learning-enabled robotic systems, where safety considerations are integrated across various stages of the learning process, from initial design to deployment and ongoing system enhancement. Throughout the talk, we will illustrate these methods on various safety-critical autonomous systems, including autonomous aircraft, autonomous driving, quadrupeds, and quadcopters. |
|||||
Fri, Nov 22 | Xiaolong Wang | UC San Diego | Modeling Humans for Humanoid Robots | Gates B01 | 3:00PM |
Abstract
Having a humanoid robot operate like a human has been a long-standing goal in robotics. The humanoid robot provides a general-purpose platform for conducting the diverse tasks we do in our daily lives. In this talk, I will present a 2-level learning framework designed to equip humanoid robots with robust mobility and manipulation skills, enabling them to generalize across diverse tasks, objects, and environments. The first level focuses on training Vision-Language-Action (VLA) models with human video data for both navigation and manipulation. These models predict “mid-level” actions, i.e., precise movements or trajectories for the human body and hands, conditioned on language instructions. The second level involves developing low-level robot manipulation skills through teleoperation, and low-level humanoid whole-body control skills via motion imitation and Sim2Real. By combining human VLA with low-level robot skills, this framework offers a scalable pathway toward realizing general-purpose humanoid robots. |
|||||
Fri, Dec 06 | Girish Chowdhary | UIUC | TBD | Gates B01 | 3:00PM |
Abstract
TBD |
Sponsors
The Stanford Robotics and Autonomous Systems Seminar is grateful for the support of the following sponsors.