Attention

The talks will be in-person.

The Stanford Robotics and Autonomous Systems Seminar series hosts both invited and internal speakers. The seminar aims to bring the campus-wide robotics community together and to provide a forum for surveying and discussing progress and challenges across the various disciplines of robotics. This quarter, the seminar is also offered to students as a 1-unit course. Note that registration for the class is NOT required in order to attend the talks.

The course syllabus is available here. Go here for more course details.

The seminar is open to Stanford faculty, students, and sponsors.

Get Email Notifications

Sign up for the mailing list: Click here!

Schedule Spring 2022

Date | Guest | Affiliation | Title | Location | Time
Fri, Apr 01 | Anima Anandkumar | Caltech and NVIDIA | Representation Learning for Autonomous Robots | Gates B01 | 12:15PM
Abstract

Autonomous robots need to be efficient and agile, and able to handle a wide range of tasks and environmental conditions. This requires the ability to learn good representations of domains and tasks using a variety of sources such as demonstrations and simulations. Representation learning for robotic tasks needs to be generalizable and robust. I will describe some key ingredients to enable this: (1) robust self-supervised learning, (2) uncertainty awareness, and (3) compositionality. We utilize NVIDIA Isaac for GPU-accelerated robot learning at scale on a variety of tasks and domains.

Fri, Apr 08 | Samir Menon and Robert Sun | Dexterity AI | Robot Manipulation in the Logistics Industry | Gates B01 | 12:15PM
Abstract

The past several years have created a perfect storm for the logistics industry: worker shortages, surging ecommerce activity, and many other factors have significantly increased the demand for robot manipulators automating more and more components of logistics and supply chains. This new wave of automation presents a new set of challenges compared to traditional automation tasks, e.g. in manufacturing. Manipulation workloads in the logistics industry involve extreme variability in the objects being handled: their shape, size, dynamics, condition, etc., as well as the sets of objects that must be managed and organized together. Additionally, these manipulators must be plugged into existing workflows and infrastructure that were designed for, and still often interface with, humans. Meeting this need, Dexterity is a robotics startup that has engineered and deployed robotic systems that can intelligently manipulate tens of thousands of items in production, reason about and operate in dynamic environments, collaborate with each other using the sense of touch, and safely operate in the presence of humans. Dexterity's robots ship hundreds of thousands of units in packaged food and parcel warehouses each day and are in production 24/7. In this talk, we will describe the unique challenges we have encountered in bringing robot manipulation to logistics, including the technical advances we have employed to date, spanning engineering disciplines from machine learning, simulation, modeling, algorithms, and control to robotic hardware and software. We will describe the automation workflows we are executing that we have found provide the most value to our customers, including palletizing, depalletizing, kitting for fulfillment, and singulation for induction. Finally, we will highlight a number of open problems we have encountered that can motivate future research in the robotics community.

Fri, Apr 15 | Joydeep Biswas | UT Austin | Deploying Autonomous Service Mobile Robots, And Keeping Them Autonomous | Gates B01 | 12:15PM
Abstract

Why is it so hard to deploy autonomous service mobile robots in unstructured human environments, and to keep them autonomous? In this talk, I will explain three key challenges, and our recent research in overcoming them: 1) ensuring robustness to environmental changes; 2) anticipating and overcoming failures; and 3) efficiently adapting to user needs. To remain robust to environmental changes, we build probabilistic perception models to explicitly reason about object permanence and distributions of semantically meaningful movable objects. By anticipating and accounting for changes in the environment, we are able to robustly deploy robots in challenging frequently changing environments. To anticipate and overcome failures, we introduce introspective perception to learn to predict and overcome perception errors. Introspective perception allows a robot to autonomously learn to identify causes of perception failure, how to avoid them, and how to learn context-aware noise models to overcome such failures. To adapt and correct behaviors of robots based on user preferences, or to handle unforeseen circumstances, we leverage representation learning and program synthesis. We introduce visual representation learning for preference-aware planning to identify and reason about novel terrain types from unlabelled human demonstrations. We further introduce physics-informed program synthesis to synthesize and repair programmatic action selection policies (ASPs) in a human-interpretable domain-specific language with several orders of magnitude fewer demonstrations than necessary for neural network ASPs of comparable performance. The combination of these research advances allows us to deploy a varied fleet of wheeled and legged autonomous mobile robots on the campus scale at UT Austin, performing tasks that require robust mobility both indoors and outdoors.

Fri, Apr 22 | Rika Antonova | Stanford | Distributional Representations and Scalable Simulations for Real-to-Sim-to-Real with Deformables | Gates B01 | 12:15PM
Abstract

Success stories of sim-to-real transfer can make it seem effortless and robust. However, the success hinges on bringing simulation close enough to reality. This real-to-sim problem of inferring simulation parameters is particularly challenging for deformable objects. Here, many conventional techniques fall short, since they often require precise state estimation and accurate dynamics. In this talk, I will describe our formulation of real-to-sim as probabilistic inference over simulation parameters. Our key idea is in how we define the state space of a deformable object. We view noisy keypoints extracted from an image of an object as samples from the distribution that captures object geometry. We then embed this distribution into a reproducing kernel Hilbert space (RKHS). Object motion can then be represented by a trajectory of distribution embeddings in this novel state space. This allows for a principled way to incorporate noisy state observations into modern Bayesian tools for simulation parameter inference. Using a small set of real-world trajectories, we can estimate posterior distributions over simulation parameters, such as elasticity, friction, and scale, even for highly deformable objects. I will conclude the talk by outlining our next steps for improving real-to-sim and sim-to-real. One branch of our work explores the potential of differentiable simulators to increase the speed and precision of real-to-sim. Another branch aims to create flexible simulation environments for large-scale learning, with thousands of objects and flexible customization, ultimately aiming to enable sim-to-real for multi-arm and mobile manipulation with deformables.
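As a minimal illustration of the distributional state space described above, the following sketch (illustrative code, not the speaker's implementation; the keypoint arrays and RBF bandwidth are made-up placeholders) embeds two sets of noisy keypoints via their RKHS kernel means and compares them with the standard biased MMD estimate.

    import numpy as np

    # Illustrative sketch: treat noisy keypoints as samples from a distribution over
    # object geometry, embed each set via its RKHS kernel mean, and compare the two
    # embeddings with the (biased) MMD estimate. Keypoints and bandwidth are placeholders.

    def rbf_kernel(A, B, bandwidth=0.1):
        """Gaussian RBF kernel matrix between two sets of 2D keypoints."""
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

    def mmd_squared(X, Y, bandwidth=0.1):
        """Squared distance between the RKHS mean embeddings of two keypoint sets."""
        return (rbf_kernel(X, X, bandwidth).mean()
                - 2.0 * rbf_kernel(X, Y, bandwidth).mean()
                + rbf_kernel(Y, Y, bandwidth).mean())

    # Two "frames" of a deformable object: simulated keypoints vs. noisy observations.
    rng = np.random.default_rng(0)
    sim_keypoints = rng.uniform(0.0, 1.0, size=(50, 2))
    obs_keypoints = sim_keypoints + rng.normal(scale=0.02, size=(50, 2))

    # A trajectory in the distributional state space is a sequence of such embeddings;
    # a candidate simulation parameter is plausible when this discrepancy stays small.
    print("embedding discrepancy:", mmd_squared(sim_keypoints, obs_keypoints))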

Fri, May 06 | Daniel S. Brown | UC Berkeley | Leveraging Human Input to Enable Robust AI Systems | Gates B01 | 12:15PM
Abstract

In this talk I will discuss recent progress towards using human input to enable safe and robust AI systems. Much work on robust machine learning and control seeks to be resilient to, or completely remove the need for, human input. By contrast, my research seeks to directly and efficiently incorporate human input into the study of robust AI systems. One problem that arises when robots and other AI systems learn from human input is that there is often a large amount of uncertainty over the human’s true intent and the corresponding desired robot behavior. To address this problem, I will discuss prior and ongoing research along three main topics: (1) how to enable AI systems to efficiently and accurately maintain uncertainty over human intent, (2) how to generate risk-averse behaviors that are robust to this uncertainty, and (3) how robots and other AI systems can efficiently query for additional human input to actively reduce uncertainty and improve their performance. My talk will conclude with a discussion of my long-term vision for safe and robust AI systems, including learning from multi-modal human input, interpretable and verifiable robustness, and developing techniques for human-in-the-loop robust machine learning that generalize beyond reward function uncertainty.
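As a rough illustration of points (1) and (2) above, the toy sketch below (not the speaker's code; the trajectory features, preference likelihood, and risk level are invented for the example) maintains a sampled posterior over linear reward weights updated from a single pairwise preference and then selects the trajectory with the best CVaR under that posterior.

    import numpy as np

    # Toy illustration of (1) a sampled posterior over linear reward weights updated
    # from one pairwise preference, and (2) risk-averse (CVaR) trajectory selection
    # under that posterior. Features, likelihood, and risk level are invented.
    rng = np.random.default_rng(1)

    # Candidate trajectories summarized by feature counts (e.g., progress, clearance).
    traj_features = np.array([[1.0, 0.2],    # fast but close to obstacles
                              [0.6, 0.8],    # slower, keeps more clearance
                              [0.3, 1.0]])   # very cautious

    # (1) Prior samples over reward weights w, reweighted by a Bradley-Terry
    # likelihood of the observed preference "trajectory 1 preferred over trajectory 0".
    w_samples = rng.normal(size=(5000, 2))
    logits = w_samples @ (traj_features[1] - traj_features[0])
    likelihood = 1.0 / (1.0 + np.exp(-logits))
    posterior = w_samples[rng.choice(5000, size=2000, p=likelihood / likelihood.sum())]

    # (2) Score each trajectory by the mean of its worst alpha-fraction of returns
    # under the posterior (CVaR) instead of the posterior-mean return.
    alpha = 0.1
    returns = traj_features @ posterior.T                       # (num_traj, num_samples)
    cvar = np.sort(returns, axis=1)[:, : int(alpha * 2000)].mean(axis=1)
    print("CVaR per trajectory:", cvar, "-> choose trajectory", int(np.argmax(cvar)))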

Fri, May 13 | Cynthia Sung | UPenn | Computational Design of Compliant, Dynamical Robots | Gates B01 | 12:15PM
Abstract

Recent years have seen a large interest in soft robotic systems, which provide new opportunities for machines that are flexible, adaptable, safe, and robust. These systems have been highly successful in a broad range of applications, including manipulation, locomotion, human-robot interaction, and more, but they present challenging design and control problems. In this talk, I will share efforts from my group to expand the capabilities of compliant and origami robots to dynamical tasks. I will show how the compliance of a mechanism can be designed to produce a particular mechanical response, how we can leverage these designs for better performance and simpler control, and how we approach these problems computationally to design new compliant robots with new capabilities such as hopping, swimming, and flight.

Fri, May 20 | Heather Culbertson | USC | Using Data for Increased Realism with Haptic Modeling and Devices | Gates B01 | 12:15PM
Abstract

The haptic (touch) sensations felt when interacting with the physical world create a rich and varied impression of objects and their environment. Humans can discover a significant amount of information about their environment through touch, allowing them to assess object properties and qualities, dexterously handle objects, and communicate social cues and emotions. Humans are spending significantly more time in the digital world, however, and are increasingly interacting with people and objects through a digital medium. Unfortunately, digital interactions remain unsatisfying and limited, representing the human as having only two sensory inputs: visual and auditory. This talk will focus on methods for building haptic and multimodal models that can be used to create realistic virtual interactions in mobile applications and in VR. I will discuss data-driven modeling methods that involve recording force, vibration, and sound data from direct interactions with physical objects. I will compare this to newer methods that use machine learning to generate and tune haptic models from human preferences.
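One common flavor of such data-driven vibration modeling can be sketched as follows; this is an illustrative stand-in rather than the speaker's pipeline, and the "recorded" signal and model order below are synthetic assumptions: fit an autoregressive model to a recorded texture vibration, then resynthesize new vibration by driving the fitted filter with noise.

    import numpy as np

    # Illustrative stand-in for data-driven vibration modeling: fit an autoregressive
    # (AR) model to a recorded texture vibration signal and resynthesize vibration by
    # driving the fitted filter with white noise. The "recording" here is synthetic.
    rng = np.random.default_rng(2)
    fs = 1000                                            # sample rate (Hz)
    t = np.arange(2 * fs) / fs
    recorded = 0.2 * np.sin(2 * np.pi * 60 * t) + rng.normal(scale=0.5, size=t.size)

    def fit_ar(signal, order=10):
        """Least-squares AR fit: s[n] ~ sum_k a[k] * s[n-1-k], plus residual noise."""
        X = np.column_stack([signal[order - k - 1 : -k - 1] for k in range(order)])
        y = signal[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs, np.std(y - X @ coeffs)

    def synthesize(coeffs, noise_std, n_samples=2000):
        """Generate new vibration by driving the fitted AR filter with white noise."""
        order = len(coeffs)
        out = np.zeros(n_samples + order)
        for n in range(order, n_samples + order):
            out[n] = coeffs @ out[n - order : n][::-1] + noise_std * rng.normal()
        return out[order:]

    coeffs, sigma = fit_ar(recorded)
    vibration = synthesize(coeffs, sigma)
    print("synthesized vibration RMS:", float(np.sqrt(np.mean(vibration ** 2))))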

Fri, May 27 | Claire Tomlin | UC Berkeley | Modeling and interacting with other agents | Gates B01 | 12:15PM
Abstract

One of the biggest challenges in the design of autonomous systems is to effectively predict what other agents will do. Reachable sets computed using dynamic game formulations can be used to characterize safe states and maneuvers, yet these have typically been based on the assumption that other agents take their most unsafe actions. In this talk, we explore how this worst-case assumption may be relaxed. We present both game-theoretic motion planning results that use feedback Nash equilibrium strategies, and behavioral models with parameters learned in real time, to represent interaction between agents. We demonstrate our results in both simulations and robotic experiments involving multiple-vehicle scenarios.
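To make the contrast between worst-case and learned behavioral assumptions concrete, the toy sketch below computes a discrete-time unavoidable-collision set for a 1D pursuit game, once under an adversarial opponent model and once under a milder modeled pursuer; the dynamics, grid, and "learned" model are invented for illustration and are not from the talk.

    import numpy as np

    # Toy comparison of a worst-case vs. a relaxed (behaviorally modeled) opponent
    # in a 1D pursuit game over the relative position x = p_opponent - p_ego.
    DT, R_COLL, HORIZON = 0.1, 0.5, 30
    xs = np.linspace(-5.0, 5.0, 201)              # grid over relative position
    ego_actions = (-0.5, 0.0, 0.5)                # ego velocity choices (slower)
    opp_actions = (-1.0, 0.0, 1.0)                # opponent velocity choices (faster)

    def step(x, u, d):
        """Relative-position dynamics: ego velocity u, opponent velocity d."""
        return x + (d - u) * DT

    def unsafe_set(opp_model):
        """States from which a collision (|x| < R_COLL) cannot be avoided within
        HORIZON steps when the opponent may pick any action in opp_model(x)."""
        unsafe = np.abs(xs) < R_COLL
        for _ in range(HORIZON):
            new_unsafe = unsafe.copy()
            for i, x in enumerate(xs):
                if unsafe[i]:
                    continue
                doomed = True                     # unsafe if EVERY ego action can be
                for u in ego_actions:             # countered by SOME modeled opponent action
                    nxt = [np.argmin(np.abs(xs - step(x, u, d))) for d in opp_model(x)]
                    if not any(unsafe[j] for j in nxt):
                        doomed = False
                        break
                new_unsafe[i] = doomed
            unsafe = new_unsafe
        return unsafe

    worst_case = unsafe_set(lambda x: opp_actions)          # adversarial opponent
    relaxed = unsafe_set(lambda x: [-0.5 * np.sign(x)])     # mild "learned" pursuer
    print("unsafe states, worst-case model:", int(worst_case.sum()))
    print("unsafe states, relaxed model:   ", int(relaxed.sum()))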

Sponsors

The Stanford Robotics and Autonomous Systems Seminar enjoys the support of the following sponsors.