Attention

The talks will be in-person.

The Stanford Robotics and Autonomous Systems Seminar series hosts both invited and internal speakers. The seminar aims to bring the campus-wide robotics community together and to provide a platform for surveying progress and fostering discussion about the challenges across the various disciplines of robotics. This quarter, the seminar is also offered to students as a 1-unit course. Note that registration for the class is NOT required in order to attend the talks.

The course syllabus is available here. Go here for more course details.

The seminar is open to Stanford faculty, students, and sponsors.

Get Email Notifications

Sign up for the mailing list: Click here!

Schedule Spring 2024

Date | Guest | Affiliation | Title | Location | Time
Fri, Apr 05 | Andreas Krause | ETH Zurich | Towards Safe and Efficient Learning in the Physical World | Skilling Auditorium | 12:30PM
Abstract

How can we enable agents to efficiently and safely learn online, from interaction with the real world? I will first present safe Bayesian optimization, where we quantify uncertainty in the unknown objective and constraints, and, under some regularity conditions, can guarantee both safety and convergence to a natural notion of reachable optimum. I will then consider Bayesian model-based deep reinforcement learning, where we use the epistemic uncertainty in the world model to guide exploration while ensuring safety. Lastly, I will discuss how we can meta-learn flexible probabilistic models from related tasks and simulations, and demonstrate our approaches on real-world applications, such as robotics tasks and tuning the SwissFEL Free Electron Laser.
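
For readers new to the area, the sketch below illustrates the core safe Bayesian optimization loop the abstract alludes to: fit a Gaussian-process surrogate, and only evaluate points whose lower confidence bound certifies the safety constraint. It is a toy 1-D example with a hypothetical kernel, safety threshold, and beta value, not the speaker's actual method.

```python
# Toy safe Bayesian optimization in the spirit of SafeOpt (illustrative only).
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel between two sets of 1-D points."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X_obs, y_obs, X_query, noise=1e-4):
    """GP posterior mean and standard deviation at the query points."""
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf_kernel(X_query, X_obs)
    K_inv = np.linalg.inv(K)
    mean = Ks @ K_inv @ y_obs
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, K_inv, Ks)
    return mean, np.sqrt(np.maximum(var, 1e-12))

def safe_bo_step(X_obs, y_obs, X_cand, h_safe=0.0, beta=2.0):
    """Next query: best upper bound among points whose lower bound is safe."""
    mean, std = gp_posterior(X_obs, y_obs, X_cand)
    lower, upper = mean - beta * std, mean + beta * std
    safe = lower >= h_safe              # only certifiably safe candidates
    if not np.any(safe):
        return None
    return X_cand[np.argmax(np.where(safe, upper, -np.inf))]

# Usage: the function doubles as objective and constraint (f >= 0 is "safe").
f = lambda x: np.sin(3 * x) + 0.5
X_obs = np.array([0.4])                 # one known-safe seed point
y_obs = f(X_obs)
X_cand = np.linspace(0.0, 1.0, 200)
for _ in range(10):
    x_next = safe_bo_step(X_obs, y_obs, X_cand)
    if x_next is None:
        break
    X_obs = np.append(X_obs, x_next)
    y_obs = np.append(y_obs, f(x_next))
print("best safe value found:", y_obs.max())
```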

Fri, Apr 12 | Karen Leung | University of Washington | Towards trusted human-centric robot autonomy | Skilling Auditorium | 12:30PM
Abstract

Autonomous robots are becoming increasingly prevalent in our daily lives, from navigating our roads and skies to assisting in households and warehouses, conducting daring search and rescue missions, and even exploring the frontiers of space. Yet building robots that can safely and fluently interact with humans in a trusted manner remains an elusive task. Ensuring robots keep a sufficiently safe distance from humans is at odds with fluent interaction, yet humans are remarkably good at seamlessly avoiding collisions in crowded settings. In this talk, we will study how humans engage in safe and fluent multi-agent interactions and how this can be applied to robot decision-making and control. In the first half of the talk, I will introduce the notion of safety concepts and demonstrate how we can tractably synthesize data-driven safety concepts using control-theoretic tools as inductive biases. These data-driven safety concepts are designed to capture more accurately how humans think about safety in real-world scenarios. In the second half, I will present recent work investigating how fluent motion can lead to safer interactions. Specifically, I will show that legible and proactive robot behavior can lead to prosocial interactions. This talk aims to revisit how safety is defined and rethink how safety and fluency can be made more compatible with one another in human-robot interactions.

Fri, Apr 19 | Ken Goldberg | UC Berkeley | Data is All You Need: Large Robot Action Models and Good Old Fashioned Engineering | Skilling Auditorium | 12:30PM
Abstract

2024 is off to an exciting start with enormous enthusiasm for humanoids and other robots based on recent advances in "end-to-end" large robot action models. Initial results are promising, and several collaborative efforts are underway to collect the needed demonstration data. But is Data All You Need? I'll present my view of the status quo in terms of manipulation task definition, data collection, and experimental evaluation. I'll then suggest that to reach expected performance levels, it may be helpful for the community to reconsider good old-fashioned engineering in terms of modularity, metrics, and failure analysis. I'll present MANIP, a potential framework for doing this that shows promise for tasks such as cable untangling, surgical suturing, and bagging. I welcome feedback: this will be the first time I present this talk and I expect it to be a bit controversial ;)

Fri, Apr 26 | Elliot Hawkes | UCSB | Engineering physical principles of embryonic morphogenesis in robotic collectives | Skilling Auditorium | 12:30PM
Abstract

Embryonic tissue is an active material able to self-shape, self-heal, and control its strength in space and time. Realizing these features in synthetic materials would change static objects—with properties set at design time—into dynamic programmable matter. However, unlike tissue, which achieves these capabilities by rearranging tight-packed cells throughout the tissue, current material-like robotic collectives can generally only move units at the perimeter of the collective. In this talk, I will describe how, by encoding key tissue-inspired processes into robotic units, we build material-like robotic collectives capable of topological rearrangement throughout the collective, enabling spatiotemporal control of shape and strength.

Fri, May 03 | Student Speaker 1 -- Somrita Banerjee | Stanford | Learning-enabled Adaptation to Evolving Conditions for Robotics | Skilling Auditorium | 12:30PM
Abstract

With advancements in machine learning and artificial intelligence, a new generation of “learning-enabled” robots is emerging, better suited to operating autonomously in unstructured, uncertain, and unforgiving environments. To achieve these goals, robots must be able to adapt to evolving conditions that differ from those seen during training or expected during deployment. In this talk, I will first discuss adapting to novel instantiations, i.e., different task instances with shared structure, through parameter adaptation. Such adaptation is done passively, by augmenting physics-based models with learned models; our key contribution is that the interpretability of physical parameters is retained, allowing us to monitor adaptation. Second, I will present a framework for active adaptation in which the model monitors its own performance and curates a diverse subset of uncertain inputs to be used for periodic fine-tuning, improving performance over the full data lifecycle.
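
To make the "curated, diverse subset of uncertain inputs" idea concrete, here is a minimal sketch under assumed ingredients: an uncertainty score per input (e.g., ensemble variance) and a feature embedding per input, with a greedy farthest-point rule for diversity. It illustrates the general recipe, not the speaker's pipeline.

```python
# Illustrative data curation: keep the most uncertain inputs, then pick a
# mutually diverse subset of them for periodic fine-tuning.
import numpy as np

def curate(features, uncertainty, budget, pool_frac=0.3):
    """Return indices of `budget` samples chosen for fine-tuning."""
    # Candidate pool: the most uncertain fraction of the stream.
    n_pool = max(budget, int(len(features) * pool_frac))
    pool = np.argsort(uncertainty)[-n_pool:]
    # Greedy k-center selection: repeatedly add the candidate farthest
    # from everything chosen so far, so the subset stays diverse.
    chosen = [pool[np.argmax(uncertainty[pool])]]   # seed: most uncertain
    dist = np.linalg.norm(features[pool] - features[chosen[0]], axis=1)
    while len(chosen) < budget:
        nxt = pool[np.argmax(dist)]
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features[pool] - features[nxt], axis=1))
    return np.array(chosen)

# Usage with synthetic embeddings and uncertainty scores.
rng = np.random.default_rng(1)
feats = rng.normal(size=(500, 16))   # e.g., penultimate-layer embeddings
unc = rng.uniform(size=500)          # e.g., per-input ensemble variance
idx = curate(feats, unc, budget=20)
print("selected", idx.size, "samples for periodic fine-tuning")
```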

Fri, May 03 | Student Speaker 2 -- Elliot Weiss | Stanford | Wearing a VR Headset While Driving to Improve Vehicle Safety | Skilling Auditorium | 12:30PM
Abstract

Driver assistance systems hold the promise of improving safety on the road. We are particularly interested in developing new assistance systems that smoothly share control with the driver and testing them in a wide range of driving conditions. Given the central role of the driver in a shared control system, it is critical to elicit natural driving behavior during tests. This talk discusses the development of a flexible driving simulation platform that can be used for safe and immersive shared control testing. Our platform, known as "Vehicle-in-the-Loop", enables experiments on a real vehicle within a simulated traffic scenario viewed by the driver in a virtual reality headset. By implementing this platform around a four-wheel steer-by-wire vehicle, the driver can interact with shared control systems in a variety of test conditions – including low friction and highway speed driving – all on one vehicle platform and at one proving ground.

Fri, May 10 | Cynthia Sung | UPenn | When Design = Planning | Skilling Auditorium | 12:30PM
Abstract

Robot design is an inherently difficult process that requires balancing multiple different aspects: kinematics and geometry, materials and compliance, actuation, fabrication, control complexity, power, and more. Computational design systems aim to simplify this process by helping designers check whether their designs are feasible and their interdependencies are satisfied. But what can we say about whether a design that accomplishes a task even exists? Or what the simplest design that does the job is? In this talk, I will discuss recent work from my group in which we have discovered that, in some cases, design problems can be mapped to problems in robot planning, and that results derived in the planning space allow us to make formal statements about design feasibility. These ideas apply to systems as varied as traditional robot arms, dynamic quadrupeds, compliant manipulators, and modular truss structures. I will share examples from systems developed in my group and look ahead to the implications of these results for future robot co-design.

Fri, May 17 | Hiro Ono | NASA JPL | From the surface of Mars to the ocean of Enceladus: EELS Robot to Spearhead a New One-Shot Exploration Paradigm with Risk-Aware Adaptive Autonomy | Skilling Auditorium | 12:30PM
Abstract

NASA’s Perseverance rover, on its mission to find signs of ancient Martian life that might have existed billions of years ago, has been enormously successful, partly owing to its highly advanced autonomous driving capabilities. However, current Mars exploration requires ample environmental knowledge accumulated over decades and across multiple missions, resulting in slow progress towards exploring unvisited worlds beyond Mars. The EELS (Exobiology Extant Life Surveyor) robot, a snake-like robot designed for exploring extreme environments, aims to shift this exploration paradigm through versatile robotic hardware, mechanical flexibility, and intelligent, risk-aware autonomy. For the first time, this adaptive robot gives us the opportunity to explore environments currently out of reach. The ultimate mission of EELS would be exploring the geysers of Saturn’s moon Enceladus, searching its subsurface ocean for extant alien life. We built hardware and software prototypes of EELS and successfully tested them in a wide range of environments, including natural vertical holes on the Athabasca Glacier in Canada. This talk will cover a broad range of topics related to autonomous robotic exploration of unknown planetary environments, including EELS, Mars rover autonomy, and risk-aware planning algorithms.
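
As background on the generic "risk-aware planning" idea (not JPL's actual planner): such a planner scores candidate plans by cost and by estimated probability of failure, then picks the cheapest plan whose risk stays under a mission-level bound. A minimal sketch with made-up numbers:

```python
# Chance-constrained plan selection: cheapest plan within the risk bound.
import numpy as np

def pick_plan(costs, p_fail, risk_bound=0.05):
    """Index of the cheapest plan with acceptable failure probability."""
    ok = p_fail <= risk_bound
    if not np.any(ok):
        return None                       # no plan meets the risk bound
    return int(np.argmin(np.where(ok, costs, np.inf)))

costs = np.array([10.0, 7.0, 4.0])        # e.g., traverse time or energy
p_fail = np.array([0.01, 0.04, 0.20])     # estimated failure probabilities
print("chosen plan:", pick_plan(costs, p_fail))   # -> 1: cheap *and* safe
```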

Fri, May 24 | Richard Linares | MIT | Improving Computational Efficiency for Powered Descent Guidance via Transformer-based Tight Constraint Prediction | Skilling Auditorium | 12:30PM
Abstract

Future spacecraft and surface robotic missions require increasingly capable autonomy stacks for exploring challenging and unstructured domains, and trajectory optimization will be a cornerstone of such autonomy stacks. However, the optimization solvers required remain too slow for use on resource-constrained flight-grade computers. In this work, we present Transformer-based Powered Descent Guidance (T-PDG), a scalable algorithm for reducing the computational complexity of the direct optimization formulation of the spacecraft powered descent guidance problem. T-PDG uses data from prior runs of trajectory optimization algorithms to train a transformer neural network, which accurately predicts the relationship between problem parameters and the globally optimal solution for the powered descent guidance problem. The solution is encoded as the set of tight constraints corresponding to the constrained minimum-cost trajectory and the optimal final landing time. By leveraging the attention mechanism of transformer neural networks, large sequences of time series data can be accurately predicted when given only the spacecraft state and landing site parameters. When applied to the real problem of Mars powered descent guidance, T-PDG reduces the time for computing the 3-degree-of-freedom fuel-optimal trajectory compared to lossless convexification, improving solution times by up to an order of magnitude. A safe and optimal solution is guaranteed by including a feasibility check in T-PDG before returning the final trajectory.
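
To make the tight-constraint reduction concrete: if a learned model predicts which inequality constraints are active at the optimum, the full problem collapses to a small equality system, and a feasibility check against the full constraint set guards against a wrong prediction. The sketch below uses a toy linear program in place of the descent guidance problem, and computes the true active set where the transformer's prediction would go.

```python
# Toy illustration of solving via a predicted tight (active) constraint set.
import numpy as np
from scipy.optimize import linprog

# Full problem: minimize c^T x  subject to  A x <= b,  x >= 0.
c = np.array([-2.0, -1.0])
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.5])

full = linprog(c, A_ub=A, b_ub=b)           # reference solution: x = [1, 0.5]
active = np.isclose(A @ full.x, b)          # tight rows at the optimum;
                                            # stand-in for a learned prediction

# Reduced problem: treat predicted tight rows as equalities (a linear solve).
x_fast = np.linalg.solve(A[active], b[active])

# Feasibility check against the *full* constraint set before accepting.
if np.all(A @ x_fast <= b + 1e-9) and np.all(x_fast >= -1e-9):
    print("fast solution accepted, optimality gap:", c @ x_fast - c @ full.x)
else:
    print("prediction infeasible, falling back to the full solver")
```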

Fri, May 31 | Aaron Ames | Caltech | Safe Autonomy: Why Learning Needs Control | Skilling Auditorium | 12:30PM
Abstract

As robotic systems pervade our everyday lives, especially those that leverage complex learning and autonomy algorithms, the question becomes: how can we trust that robots will operate safely around us? An answer to this question was given, in the abstract, by famed science fiction writer Isaac Asimov: the Three Laws of Robotics. These laws provide a safety layer between the robot and the world that ensures trustworthy behavior. In this presentation, I will propose a mathematical formalization of the three laws of robotics, encapsulated by control barrier functions (CBFs). These generalizations of (control) Lyapunov functions ensure forward invariance of “safe” sets. Moreover, CBFs lead to the notion of a safety filter that minimally modifies an existing controller to ensure the safety of the system—even if this controller is unknown, the result of a learning-based process, or operating as part of a broader layered autonomy stack. The utility of CBFs will be demonstrated through their extensive implementation in practice on a wide variety of highly dynamic robotic systems: from ground robots, to drones, to legged robots, to robotic assistive devices.
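
For readers unfamiliar with the formalism, a standard statement from the CBF literature (for control-affine dynamics ẋ = f(x) + g(x)u, with α an extended class-K function); this is background notation, not material specific to the talk:

```latex
% Safe set encoded as the superlevel set of a barrier function h,
% and the CBF condition that makes it forward invariant:
\[
  \mathcal{C} = \{\, x : h(x) \ge 0 \,\}, \qquad
  \sup_{u \in U} \big[ L_f h(x) + L_g h(x)\, u \big] \ge -\alpha\big(h(x)\big).
\]
% Safety filter: minimally modify a desired input u_des subject to the
% CBF constraint, a quadratic program solvable at control rates:
\[
  u^{*}(x) = \operatorname*{arg\,min}_{u \in U}\;
  \tfrac{1}{2}\,\lVert u - u_{\mathrm{des}}(x) \rVert^{2}
  \quad \text{s.t.} \quad
  L_f h(x) + L_g h(x)\, u \ge -\alpha\big(h(x)\big).
\]
```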

Sponsors

The Stanford Robotics and Autonomous Systems Seminar enjoys the support of the following sponsors.