Attention

The talks will be in-person.

The Stanford Robotics and Autonomous Systems Seminar series hosts both invited and internal speakers. The seminar aims to bring the campus-wide robotics community together and provide a platform for overviewing and discussing progress and challenges across the various disciplines of robotics. This quarter, the seminar is also offered to students as a 1-unit course. Note that registration for the class is NOT required in order to attend the talks.

The course syllabus is available here. Go here for more course details.

The seminar is open to Stanford faculty, students, and sponsors.

Get Email Notifications

Sign up for the mailing list: Click here!

Schedule Spring 2023

Date Guest Affiliation Title Location Time
Fri, Apr 07 Sheila Russo Boston University Soft Material Robotics and Next-Generation Surgical Robots Skilling Auditorium 12:30PM

Minimally invasive surgical (MIS) procedures pose significant challenges for robots, which need to safely navigate through and manipulate delicate anatomy while performing complex tasks to treat tumors in remote areas. Soft robots hold considerable potential in MIS given their compliant nature, inherent safety, and high dexterity. Yet, a significant breakthrough of soft robots in surgery is impeded by current limitations in the design, manufacturing, and integration of soft materials that combine actuation, sensing, and control. Scientific understanding of medical and surgical robotics is entering an exciting new era where early approaches relying on rigid materials, standard manufacturing, and conventional kinematics are giving way to Soft Material Robotics. Our research at the Material Robotics Lab at Boston University is focused on the design, mechanics, and manufacturing of novel multi-scale and multi-material biomedical robotic systems. This talk will illustrate our work towards achieving safe navigation, distal actuation, integrated sensing, and effective force transmission in MIS by highlighting different classes of soft surgical robots, i.e., soft continuum robots, soft-foldable robots, and soft reactive skins with applications in lung cancer, colorectal cancer, and brain cancer surgery.

Fri, Apr 14 Zhenish Zhakypov Stanford Multimaterial Design for Multifunctional Miniature Robots Skilling Auditorium 12:30PM

Small-scale animals like trap-jaw ants exhibit remarkable behaviors, not just through communication, but also via their adaptable jaw-jump and leg-jump mechanisms that enable them to thrive in diverse environments. These creatures have successfully tackled the challenges of miniaturization, multifunctionality, and multiplicity, which are critical factors in the development of small-scale robotic systems. By creating these abilities in mesoscale robots, we can unlock a vast array of applications. For instance, we could build artificial multi-locomotion swarms to explore and monitor diverse physical environments with high task efficiency or design compact and distributed haptic actuators to simulate compelling human touch interactions in virtual environments with high fidelity and minimal encumbrance. However, conventional design methods for creating miniature yet multifunctional robots are limited due to constraints in downsizing classical electric motors, transmission gears, and mechanisms. Additionally, increasing the number of components requires meticulous manual assembly processes. In this talk, I will delve into how multimaterial layer composition and folding (origami robotics) and 3D printing can enable miniature, multifunctional, and mass-manufacturable robots. I will provide insights into a systematic design methodology that breaks down mesoscale robot design in terms of mechanisms, geometry, materials, and fabrication, highlighting their relation and challenges. I will demonstrate unique robotic platforms built on this paradigm, including Tribots, 10-gram palm-sized multi-locomotion origami robots that jump, roll, and crawl to traverse uneven terrains and manipulate objects collectively, as well as shape-morphing grippers and structures. These robots use functional materials like shape memory alloy and fluids to achieve tunable power, compact actuators, and mechanisms. 
Additionally, I will present my latest research on monolithically 3D-printed, soft finger- and wrist-worn haptic displays called FingerPrint and Hoxels. FingerPrint produces 4-DoF motion on the finger pad and phalanges with tunable forces and torques for skin shear, pressure, and vibrotactile interaction, and can be mass-printed.

Fri, Apr 21 Sanja Fidler U. Toronto/NVIDIA A.I. for 3D Content Creation Skilling Auditorium 12:30PM

3D content is key in several domains such as architecture, film, gaming, and robotics, and lies at the heart of metaverse applications. However, creating 3D content can be very time-consuming: artists need to sculpt high-quality 3D assets, compose them into large worlds, and bring these worlds to life by writing behaviour models that drive the agents around in the world. In this talk, I'll present some of our ongoing efforts on creating virtual worlds with A.I., with a focus on street-level simulation for autonomous driving.

Fri, Apr 28 Cathy Wu MIT Intelligent Coordination for Sustainable Roadways – If Autonomous Vehicles are the Answer, then What is the Question? Skilling Auditorium 12:30PM

For all their hype, autonomous vehicles have yet to make our roadways more sustainable: safer, cheaper, cleaner. This talk suggests that the key to unlocking sustainable roadways is to shift the focus from autonomy-driven design to use-driven design. Based on recent work, the talk focuses on three critical priorities (safety, cost, and environment), each leveraging the 'autonomy' capability of coordinating vehicles. But fully autonomous agents are not the only entities that can coordinate. A paragon of safety is air traffic control, in which expert operators remotely coordinate aircraft. The work brings these ideas to dense roadway traffic and analyzes the scalability of operators. Another, much cheaper way to coordinate is to give drivers a smartphone app; the work characterizes how well such lower-tech systems can still achieve autonomous capabilities. For cleaner roadways, dozens of articles have considered coordinating vehicles to reduce emissions; this work models whether doing so would move the needle on climate-change mitigation goals. To study these multi-agent coordination problems, the work leverages queueing theory, Lyapunov stability analysis, transfer learning, and multi-task reinforcement learning. The talk will also discuss issues of robustness that arise when applying learning-based techniques, and a new line of work designed to address them. Overall, the results indicate promise for intelligent coordination to enable sustainable roadways.

Fri, May 05 Brian Ichter Google Brain Connecting Robotics and Foundation Models Skilling Auditorium 12:30PM

Foundation models can encode a wealth of semantic knowledge about the world, but can be limited by their lack of interactive, real-world experience. This poses a challenge for leveraging them in robotics, which requires interactive decision making and reasoning for a given embodiment. This talk will discuss several research directions towards addressing these challenges, from grounding them in their environment (SayCan, InnerMonologue, Grounded Decoding, NL-Maps), to directly outputting grounded code (Code as Policies), and finally training them with embodied robotics data (PaLM-E, RT-1).

Fri, May 12 Nick Morozovsky Amazon Lab 126 Meet Astro: Amazon Consumer Robotics’ first robot for homes and small-to-medium businesses Skilling Auditorium 12:30PM

Astro is Amazon’s first robot designed for homes and small-to-medium businesses. In this talk, we will show Astro’s use cases, including home monitoring, human-robot interaction, and Virtual Security Guard, and share how we designed Astro for customers. Making the first prototype work is a very different problem from designing and testing robots for mass production. Design for manufacturability, repairability, and sustainability are important tenets for making robots that are easy to assemble, test, and fix. Reliability is another critical concern for mass-produced robots; we’ll describe some of the extensive testing we performed to make sure that every Astro robot works for customers for years. The mobility and perception capabilities of Astro are what make it useful and capable in unstructured and variable environments like homes. Our software team overcame challenges to deliver these capabilities with consumer-grade sensors and compute. We’ll conclude with some of Amazon’s programs to engage with the academic community. Note: no YouTube video.

Fri, May 19 Pratik Chaudhari UPenn A Picture of the Prediction Space of Deep Networks Skilling Auditorium 12:30PM

Deep networks have many more parameters than the number of training data and can therefore overfit---and yet, they predict remarkably accurately in practice. Training such networks is a high-dimensional, large-scale and non-convex optimization problem and should be prohibitively difficult---and yet, it is quite tractable. This talk aims to illuminate these puzzling contradictions. We will argue that deep networks generalize well because of a characteristic structure in the space of learnable tasks. The input correlation matrix for typical tasks has a “sloppy” eigenspectrum where, in addition to a few large eigenvalues, there is a large number of small eigenvalues that are distributed uniformly over a very large range. As a consequence, the Hessian and the Fisher Information Matrix of a trained network also have a sloppy eigenspectrum. Using these ideas, we will demonstrate an analytical non-vacuous PAC-Bayes generalization bound for general deep networks. We will next develop information-geometric techniques to analyze the trajectories of the predictions of deep networks during training. By examining the underlying high-dimensional probabilistic models, we will reveal that the training process explores an effectively low-dimensional manifold. Networks with a wide range of architectures and sizes, trained using different optimization methods, regularization techniques, data augmentation techniques, and weight initializations lie on the same manifold in the prediction space. We will also show that predictions of networks being trained on different tasks (e.g., different subsets of ImageNet) using different representation learning methods (e.g., supervised, meta-, semi-supervised and contrastive learning) also lie on a low-dimensional manifold.
References:
- Does the data induce capacity control in deep learning? Rubing Yang, Jialin Mao, and Pratik Chaudhari. [ICML '22] https://arxiv.org/abs/2110.14163
- Deep Reference Priors: What is the best way to pretrain a model? Yansong Gao, Rahul Ramesh, and Pratik Chaudhari. [ICML '22] https://arxiv.org/abs/2202.00187
- A picture of the space of typical learnable tasks. Rahul Ramesh, Jialin Mao, Itay Griniasty, Rubing Yang, Han Kheng Teoh, Mark Transtrum, James P. Sethna, and Pratik Chaudhari. [ICML '23] https://arxiv.org/abs/2210.17011
- The Training Process of Many Deep Networks Explores the Same Low-Dimensional Manifold. Jialin Mao, Itay Griniasty, Han Kheng Teoh, Rahul Ramesh, Rubing Yang, Mark K. Transtrum, James P. Sethna, and Pratik Chaudhari. 2023. https://arxiv.org/abs/2305.01604
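The “sloppy” eigenspectrum mentioned in the abstract can be illustrated numerically. The following is a minimal synthetic sketch: the dimensions, sample count, and target spectrum below are invented for illustration and are not taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data whose input correlation matrix has a "sloppy" spectrum:
# a few large eigenvalues, then many small ones spread roughly uniformly
# in log scale over several decades.
d = 50
target = np.logspace(0, -8, d)                 # eigenvalues from 1 down to 1e-8
X = rng.standard_normal((10_000, d)) * np.sqrt(target)

corr = X.T @ X / len(X)                        # empirical input correlation matrix
eig = np.sort(np.linalg.eigvalsh(corr))[::-1]  # eigenvalues, largest first

span_decades = np.log10(eig[0] / eig[-1])
print(f"eigenvalues span roughly {span_decades:.0f} decades")
```

With enough samples the empirical eigenvalues track the target spectrum, so the ratio of largest to smallest eigenvalue spans many orders of magnitude, which is the qualitative picture the talk describes.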

Fri, May 26 Jeannette Bohg Stanford Large Language Models for Solving Long-Horizon Manipulation Problems Skilling Auditorium 12:30PM

My long-term research goal is to enable real robots to manipulate any kind of object such that they can perform many different tasks in a wide variety of application scenarios, such as in our homes, in hospitals, warehouses, or factories. Many of these tasks will require long-horizon reasoning and sequencing of skills to achieve a goal state. In this talk, I will present our work on enabling long-horizon reasoning on real robots for a variety of long-horizon tasks that can be solved by sequencing a large variety of composable skill primitives. I will specifically focus on the different ways Large Language Models (LLMs) can help with solving these long-horizon tasks. The first part of my talk will be on TidyBot, a robot for personalised household clean-up. One of the key challenges in robotic household clean-up is deciding where each item goes. People's preferences can vary greatly depending on personal taste or cultural background: one person might want shirts in the drawer, another might want them on the shelf. How can we infer these user preferences from only a handful of examples in a generalizable way? Our key insight: summarization with LLMs is an effective way to achieve generalization in robotics. Given the generalised rules, I will then show how TidyBot solves the long-horizon task of cleaning up a home. In the second part of my talk, I will focus on more complex long-horizon manipulation tasks that exhibit geometric dependencies between different skills in a sequence. In these tasks, the way a robot performs a certain skill determines whether a follow-up skill in the sequence can be executed at all. I will present an approach called Text2Motion that utilises LLMs for task planning without the need for defining complex symbolic domains, and I will show how we can verify whether the plan that the LLM came up with is actually feasible. The basis for this verification is a library of learned skills and an approach for sequencing these skills to resolve geometric dependencies prevalent in long-horizon tasks.
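The summarization idea in the abstract can be sketched as follows. This is a hypothetical illustration only: the `llm` callable, the prompt wording, and the example placements are assumptions, not TidyBot's actual implementation.

```python
def build_summarization_prompt(examples):
    """Format (object, receptacle) examples for an LLM to compress into general rules."""
    lines = [f"- {obj} -> {place}" for obj, place in examples]
    return (
        "Observed placements:\n"
        + "\n".join(lines)
        + "\nSummarize these as general rules, one per line."
    )

def place_unseen(llm, examples, new_object):
    """Summarize the examples into rules, then apply the rules to an unseen object."""
    rules = llm(build_summarization_prompt(examples))
    return llm(f"Rules:\n{rules}\nWhere does '{new_object}' go? Answer with a receptacle.")

# A handful of demonstrations of one user's preferences (invented for illustration).
examples = [
    ("t-shirt", "drawer"),
    ("jeans", "drawer"),
    ("soda can", "recycling bin"),
]
prompt = build_summarization_prompt(examples)
print(prompt)
```

The point of the two-step structure is that the LLM first produces a compact, human-readable rule set (e.g., "clothes go in the drawer"), which then generalizes to objects that never appeared in the examples.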

Fri, Jun 02 Student Speakers Stanford and UC Berkeley TBD Skilling Auditorium 12:30PM



The Stanford Robotics and Autonomous Systems Seminar enjoys the support of the following sponsors.