Archive

Schedule Winter 2024

Date | Guest | Affiliation | Title | Location | Time
Fri, Jan 12 | Hannah Stuart | UC Berkeley | Robots that aren't afraid of contact: An embodied approach | Skilling Auditorium | 12:30PM
Abstract

The world is rich with complex and varied mechanics, and this uncertainty often leads designers to build robots that avoid contact. However, the same richness also opens opportunities for new robotic mechanisms that creatively harness the local environment. In this talk, I'll focus on two recent case studies that use ambient fluids to build resilient, compliant grippers with tactile sensing. These inventions apply to logistics pick-and-place automation as well as more general applications. Time allowing, I will also present another case study on granular media (i.e., sand) interaction for planetary robotics. Inspired by nature, the goal of this research is to access new and resilient robotic behaviors without over-reliance on digital computing alone, instead harnessing morphological computation alongside active control.

Fri, Jan 19 | Student Speaker 1 -- Won Kyung Do | Stanford | Improving Robotic Dexterity with Optical Tactile Sensor DenseTact | Skilling Auditorium | 12:30PM
Abstract

Dexterous manipulation, particularly of small everyday objects, remains a complex challenge in the field of robotics. In this talk, I will present two studies addressing these challenges with DenseTact, a soft optical tactile sensor. The first study introduces an innovative approach to inter-finger manipulation using a tactile sensor-equipped gripper. This development not only enhances grasping accuracy in cluttered environments but also facilitates improved manipulation and reorientation of small objects, enabling more precise classification. The second study addresses the challenges of grasping objects of varying sizes on flat surfaces. I will introduce the DenseTact-Mini, an optical tactile sensor featuring a soft, rounded, smooth gel surface, compact design, and a synthetic fingernail. This sensor enables the grasping of multi-scale objects using three distinct strategies for different sizes and masses of objects. This presentation will underscore how these advancements open new avenues in robotics, particularly in enhancing manipulation capabilities in complex scenarios where vision is limited due to occlusions.

Fri, Jan 19 | Student Speaker 2 -- Annie Chen | Stanford | Single-Life Robot Deployment: Adapting On-the-Fly to Novel Scenarios | Skilling Auditorium | 12:30PM
Abstract

A major obstacle to the broad application of robots is their inability to adapt to unexpected circumstances, which limits their use largely to tightly controlled environments. Even equipped with prior experience and pre-training, robots will inevitably encounter out-of-distribution (OOD) situations at deployment time that may require a large amount of on-the-fly adaptation. In this talk, I will first motivate and introduce the problem setting of single-life deployment, which provides a natural setting to study the challenge of autonomously adapting to unfamiliar situations. I will then present our recent work on this problem, Robust Autonomous Modulation (ROAM). By effectively identifying relevant behaviors on-the-fly, ROAM adapts over 2x more efficiently than existing methods when facing a variety of OOD situations during deployment. Crucially, this entire adaptation process happens within a single episode at test time, without any human supervision.
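To make the selection idea concrete, here is a minimal sketch of on-the-fly behavior modulation in the spirit of ROAM, under the assumption that the robot carries a library of pretrained behaviors, each exposing a policy and a value function; the interfaces (`value_fn`, `policy`, the `env` API) are illustrative, not the paper's actual code.

```python
import numpy as np

def select_behavior(state, behaviors):
    """Pick the pretrained behavior whose value function rates the
    current state highest -- a proxy for 'most relevant right now'."""
    scores = [b.value_fn(state) for b in behaviors]
    return behaviors[int(np.argmax(scores))]

def single_life_rollout(env, behaviors, max_steps=1000):
    """One deployment episode: re-select a behavior at every step,
    with no human supervision and no gradient updates."""
    state = env.reset()
    for _ in range(max_steps):
        behavior = select_behavior(state, behaviors)
        state, done = env.step(behavior.policy(state))
        if done:
            break
```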

Fri, Jan 26 | Raphael Zufferey | EPFL | Flying robots: exploring hybrid locomotion and physical interaction | Skilling Auditorium | 12:30PM
Abstract

Autonomous flying robots have become widespread in recent years, yet their capability to interact with the environment remains limited. Moving in multiple fluids is one of the great challenges of mobile robotics, and it carries great potential for biological and environmental studies. In particular, hybrid locomotion provides the means to cross large distances and obstacles, or even to move from one body of water to another, thanks to flight. At the same time, these robots can operate underwater, collecting samples, video, and aquatic metrics. However, the challenges of operating in both air and water are complex. In this talk, we will introduce these challenges and cover several research solutions that aim to address them in different modalities, depending on locomotion and objectives. Bio-inspiration plays a crucial role in these solutions, and the topic of flapping flight in the context of physical interaction will also be presented.

Fri, Feb 02 | Sunil Agrawal | Columbia University | Rehabilitation Robotics: Improving Functions of People with Impairments | Skilling Auditorium | 12:30PM
Abstract

Neural disorders, old age, and traumatic brain injury limit activities of daily living. Robotics can be used in novel ways to characterize human neuromuscular responses and retrain human functions. Columbia University's Robotics and Rehabilitation (ROAR) Laboratory designs innovative mechanisms/robots with these goals and performs scientific studies to improve human functions such as standing, walking, stair climbing, trunk control, head turning, and others. Human experiments have targeted individuals with stroke, cerebral palsy, Parkinson's disease, ALS, and elderly subjects. The talk will provide an overview of these robotic technologies and the scientific studies performed with them, demonstrating the strong potential of rehabilitation robotics to improve human functions and quality of life.

Fri, Feb 09 | Paul Glick | JPL Robotics | Embodied Intelligence for Extreme Environments | Skilling Auditorium | 12:30PM
Abstract

Extreme environments penalize the sensing, actuation, computation, and communication that robotic systems rely upon. Compounding this challenge is the fact that these remote locations are often some of the least mapped areas on and beyond our planet. Structured compliance offers a pathway for robots to adapt to their environment at the mechanical level while preserving the strength to support payload mass & forceful interactions. This theme is explored across projects that include gripping in space, exploration of coral reefs, data acquisition under ice, and a cold-operable robotic arm.

Fri, Feb 16 | Shuran Song | Stanford | Robot Skill Acquisition: Policy Representation and Data Generation | Skilling Auditorium | 12:30PM
Abstract

What do we need to take robot learning to the 'next level'? Is it better algorithms, improved policy representations, or advancements in affordable robot hardware? All of these factors are undoubtedly important, but what I really wish for is something that underpins them all: the right data. In particular, we need data that is scalable, reusable, and robot-complete. While 'scale' often takes center stage in machine learning today, I would argue that in robotics, having data that is also both reusable and complete can be just as important. Focusing on sheer quantity while neglecting these properties makes it difficult for robot learning to benefit from the same scaling trend that other machine learning fields have enjoyed. In this talk, we will explore potential solutions to these data challenges, shed light on some of the often-overlooked hidden costs associated with each approach, and, more importantly, discuss how to potentially bypass these obstacles.

Fri, Feb 23 | Dorsa Sadigh | Stanford | Robot Learning in the Era of Large Pretrained Models | Skilling Auditorium | 12:30PM
Abstract

In this talk, I will discuss how interactive robot learning can benefit from the rise of large pretrained models such as foundation models. I will introduce two perspectives. First I will discuss the role of pretraining when learning visual representations, and how language can guide learning grounded visual representations useful for downstream robotics tasks. I will then discuss the choice of datasets during pretraining. Specifically, how we could guide large scale data collection, and what constitutes high quality data for imitation learning. I will discuss some recent work around guiding data collection based on enabling compositional generalization of learned policies. Finally, I will end the talk by discussing a few creative ways of tapping into the rich context of large language models and vision-language models for robotics.

Fri, Mar 01 | Renee Zhao | Stanford | Multifunctional Origami Robots | Skilling Auditorium | 12:30PM
Abstract

Millimeter/centimeter-scale origami robots have recently been explored for biomedical applications due to their inherent shape-morphing capability. However, they mainly rely on passive and/or irreversible deformation, which significantly hinders on-demand clinical functions. Here, we report magnetically actuated origami robots that can crawl and swim for effective locomotion and targeted drug delivery in severely confined spaces and aqueous environments. We design our robots based on origami, whose thin-shell structure 1) provides an internal cavity for drug storage, 2) permits torsion-induced contraction as a crawling mechanism and a pumping mechanism for controllable liquid medicine dispensing, 3) serves as a propeller that spins for propulsion when swimming, and 4) offers anisotropic stiffness to overcome the large resistance of the severely confined spaces in biomedical environments. These magnetic origami robots can potentially serve as minimally invasive devices for biomedical diagnoses and treatments.

Fri, Mar 08 | Dragomir Anguelov | Waymo | ML Recipes for Building a Scalable Autonomous Driving Agent | Skilling Auditorium | 12:30PM
Abstract

Machine learning has proven to be a key ingredient in building a performant and scalable Autonomous Vehicle stack, spanning key capabilities such as perception, behavior prediction, planning and simulation. In this talk, I will describe recent Waymo research on performant ML models and architectures that help us handle the variety and complexity of the real world driving environment, and I will outline key remaining research challenges in our domain.

Schedule Fall 2023

Date | Guest | Affiliation | Title | Location | Time
Fri, Sep 29 | Mac Schwager | Stanford | Perception-Rich Robot Autonomy with Neural Environment Models | Skilling Auditorium | 12:30PM
Abstract

New developments in computer vision and deep learning have led to the rise of neural environment representations: 3D maps that are stored as deep networks that spatially register occupancy, color, texture, and other physical properties. These environment models can generate photo-realistic synthetic images from unseen view points, and can store 3D information in exquisite detail. In this talk, I investigate the questions: How can robots use neural environment representations for perception, motion planning, manipulation, and simulation? I will present recent work from my lab in navigating a robot through a neural radiance field map of an environment while preserving safety guarantees. I will talk about real-time NeRF training, where we produce a neural map online in a SLAM-like fashion. I will also discuss open-vocabulary semantic navigation in a neural map, where we find or avoid objects specified at runtime. I will present the concept of dynamics-augmented neural objects, which are assets captured from RGB images whose motion (including contact) can be simulated in a differentiable physics engine. I will show how such models can be used in real-to-sim transfer and robot manipulation planning scenarios. I will conclude with future opportunities and challenges in integrating neural environment representations into the robot autonomy stack.
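As one concrete example of using a neural map for planning, the sketch below treats a trained density field as an obstacle map and scores candidate paths by the density they pass through; `density` is a stand-in for a NeRF's density head, and the cost is illustrative rather than the lab's actual safety-guaranteed formulation.

```python
import numpy as np

def collision_cost(waypoints, density, step=0.05):
    """Integrate learned density along a piecewise-linear path; a high
    value means the path cuts through 'solid' regions of the map."""
    cost = 0.0
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        seg = b - a
        n = max(2, int(np.linalg.norm(seg) / step))
        for t in np.linspace(0.0, 1.0, n):
            cost += density(a + t * seg) * np.linalg.norm(seg) / n
    return cost
```

A planner can then rank or locally refine candidate trajectories against this cost, which is the basic mechanism that makes navigation inside a radiance-field map possible.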

Fri, Oct 06 | Boris Ivanovic | Nvidia | Architecting Next-Generation AV Autonomy Stacks | Skilling Auditorium | 12:30PM
Abstract

Learning-based components are ubiquitous within modern robotic autonomy stacks. However, many of these components are not being utilized to their fullest potential, with training and evaluation schemes that are agnostic to their eventual downstream tasks. In this talk, I will present next-generation autonomy stack architectures that treat learning and differentiability as first-class citizens, enabling training and evaluation with respect to downstream tasks without sacrificing interpretability, as well as methods for evaluating and generalizing them. Towards this end, I will present some of our recent research efforts, broadly spanning the topics of information representation and uncertainty propagation, simulation, and domain generalization.

Fri, Oct 13 | Aaron Parness | Amazon Robotics | Stowing and Picking Items in E-Commerce | Skilling Auditorium | 12:30PM
Abstract

Stowing and picking items are two of the most expensive tasks in e-commerce fulfillment. They are difficult to automate because of 1) the many physical contacts between the robot and items already on shelves, 2) the variety of items that are handled, and 3) the financial motivation for storage density. This talk presents the development of robotic manipulation capabilities for high clutter and high contact. Our perception algorithms infer available space using images of shelves and manifest information. We then plan motions with an assumption of contact, and control those motions with force and torque in the loop. Custom end-of-arm tools (grippers) simplify the tasks.

Fri, Oct 20 | Sylvia Herbert | University of California, San Diego | Blending Data-Driven CBF Approximations with HJ Reachability | Skilling Auditorium | 12:30PM
Abstract

In this talk I will discuss recent joint work with Professor Sicun (Sean) Gao on using data-driven CBF approximations for safe control of autonomous systems. First I will discuss how we blend CBF approximations and HJ reachability for systems with modeled dynamics. The data-driven CBF approximation provides an efficient initial estimate of the true CBF, which is then refined using HJ reachability analysis. This work was presented at IROS 2022, with some new additions. Next I will discuss our recent work on using data-driven CBFs for hard-to-model dynamics (e.g., interaction behavior among pedestrians). Our approach exploits an important observation: the spatial interaction patterns of multiple dynamic obstacles can be decomposed and predicted through temporal sequences of states for each obstacle. Through this decomposition, we can generalize control policies trained with only a small number of obstacles to environments where the obstacle density can be 100x higher. We have no guarantees on safety (at least so far), but we empirically show significant improvements in dynamic collision avoidance (compared to other learning methods) without being overly conservative (compared to control-theoretic methods). This work won the RoboCup Best Paper Award at IROS 2023.

Fri, Oct 27 | BARS 2023 | UC Berkeley and Stanford | Bay Area Robotics Symposium | David & Joan Traitel Building of Hoover Institution | 8:30AM
Abstract

The Bay Area Robotics Symposium aims to bring together roboticists from the Bay Area. The program will consist of a mix of faculty, student and industry presentations.

Fri, Nov 03 | Student Speaker 1 -- Kenneth Hoffmann | Stanford | Design Principles for Bioinspired Visually Guided Aerial Grasping Robots | Skilling Auditorium | 12:30PM
Abstract

Humans have long looked to the skies for inspiration to build the newest generation of flying vehicles. The peregrine falcon's ability to pursue and capture prey in flight is particularly intriguing because it can help robot engineers design supermaneuverable aerial robots. Simultaneously, an opportunity exists for developing counter-UAS (unmanned aerial system) robotic systems aimed at safeguarding sensitive airspaces from rogue drones. During this talk, I will propose design principles for developing bioinspired, visually guided aerial grasping robots. I show how to take inspiration from how falcons pursue aerial prey to design control laws that enable a robot to pursue and capture flying aerial targets. I tie these two main concepts together into an aerial robot with an autonomous system that enables dynamic pursuit and grasping. Following this, I use simulation and experiments to better understand which flight conditions lead to successful aerial grasping. Then, I analyze the robotic systems that enable pursuit and grasping through a systems-level failure analysis. Finally, I will address how improvements in hardware, sensing, and planning can pave the way for the future of aerial grasping robots, highlighting the key areas of development required to enhance the performance of this emerging category of robot.

Fri, Nov 03 | Student Speaker 2 -- Amar Hajj-Ahmad | Stanford | Getting a (Gecko) Grip: Surface conformation for dry adhesion assisted robotic grasping | Skilling Auditorium | 12:30PM
Abstract

Nature continues to stimulate engineering solutions for real-world problems; understanding how the gecko, for example, relies on van der Waals forces to climb various surfaces inevitably led to the fabrication of materials that employ the same working principles. This talk addresses the question of how to develop gecko adhesive controllability with intermittently active surface conformation, for use in real-world robotic applications. A study is performed on direct indenting as a manufacturing technique for creating varying micro-geometries that aid gecko adhesive control and conformation. Building off this capability, an augmented suction and adhesion tool for side-picking bulky and irregular objects is developed with air-promoted contact. There will be reference to manufacturing requirements of the dry adhesive material, and implementation considerations for tackling and improving robotic task execution. This work informs the future design and use of gecko-inspired adhesives with active surface conformation as a tool to effectively solve real-world challenges.

Fri, Nov 10 | Ding Zhao | Carnegie Mellon University | Towards Trustworthy Autonomy - Generalizability, Safety, Embodiment | Skilling Auditorium | 12:30PM
Abstract

As AI becomes more integrated into physical autonomy, it presents a dual spectrum of opportunities and risks. In this talk, I will introduce our efforts in creating trustworthy intelligent autonomy for vital civilian applications such as self-driving cars and assistant robots. In these realms, training data often exhibit significant imbalance, multi-modal complexity, and inadequacy. I will initiate the discussion by analyzing 'long-tailed' problems with rare events and their connection to safety evaluation and safe reinforcement learning. I will then discuss how modeling multi-modal uncertainties as 'tasks' may enhance generalizability by learning across domains. To facilitate meta-learning and continuous learning with high-dimensional inputs in vision and language, we have developed prompt-transformer structures for efficient adaptation and mitigation of catastrophic forgetting. In cases involving unknown-unknown tasks with severely limited data, we explore the potential of leveraging external knowledge from legislative sources, causal reasoning, and large language models. Lastly, we will expand intelligence development into the realm of system-level design space with meta physical robot morphologies, which may achieve generalizability and safety more effectively than relying solely on software solutions.

Fri, Dec 01 | Aimy Wissa | Princeton | How Nature Moves: Exploring Locomotion in Various Mediums and Across Sizes | Skilling Auditorium | 12:30PM
Abstract

Organisms have evolved various locomotion (self-propulsion) and shape adaptation (morphing) strategies to survive and thrive in diverse and uncertain environments. Unlike engineered systems, which rely heavily on active control, natural systems also rely on reflexive and passive control. Nature often exploits distributed flexibility to simplify global actuation requirements. These approaches to locomotion and morphing rely on multifunctional and passively adaptive structures. This talk will introduce several examples of bioinspired multifunctional structures, such as feather-inspired flow control devices. Flow control devices found on birds' wings will be introduced as a pathway toward revolutionizing the current design and flight control of small unmanned air vehicles. Wind tunnel and flight-testing results show the aerodynamic benefits of these devices in delaying stall and improving flight performance. In addition to bioinspired engineering, I will highlight how engineering analysis and experiments can help answer critical questions about biological systems, such as the flying fish aerial-aquatic transition and click beetles' legless jumping. These research topics represent examples of how nature can inform robotic engineering design and highlight that engineering analysis can provide insights into the locomotion and adaptation strategies employed by nature.

Fri, Dec 08 | Luca Carlone | MIT | Foundations of Spatial Perception for Robotics | Skilling Auditorium | 12:30PM
Abstract

A large gap still separates robot and human perception: humans are able to quickly form a holistic representation of the scene that encompasses both geometric and semantic aspects, are robust to a broad range of perceptual conditions, and are able to learn without low-level supervision. This talk discusses recent efforts to bridge these gaps. First, we show that scalable metric-semantic scene understanding requires hierarchical representations; these hierarchical representations, or 3D scene graphs, are key to efficient storage and inference, and enable real-time perception algorithms. Second, we discuss progress in the design of certifiable algorithms for robust estimation, which provide first-of-a-kind performance guarantees for estimation problems arising in robot perception. Finally, we observe that certification and self-supervision are twin challenges, and the design of certifiable perception algorithms enables a natural self-supervised learning scheme; we apply this insight to 3D object pose estimation and present self-supervised algorithms that perform on par with state-of-the-art, fully supervised methods, while not requiring manual 3D annotations.
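To fix ideas, here is a minimal sketch of the hierarchical representation the first part of the talk refers to: a 3D scene graph stores nodes in layers (for example buildings, rooms, places, objects) with parent-child links across layers. The layer names and fields are illustrative, not a specific library's API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    layer: str                                     # "building", "room", "place", "object"
    attrs: dict = field(default_factory=dict)      # pose, semantic label, bounding box, ...
    children: list = field(default_factory=list)   # links to the layer below

def find(node, layer, pred):
    """Walk the hierarchy and collect nodes in `layer` matching `pred`,
    e.g. every object labeled 'mug' underneath a given room node."""
    hits = [node] if node.layer == layer and pred(node) else []
    for child in node.children:
        hits += find(child, layer, pred)
    return hits
```

The point of the hierarchy is that queries and updates touch only the relevant layer, which is what keeps storage and inference efficient enough for real-time perception.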

Schedule Spring 2023

Date | Guest | Affiliation | Title | Location | Time
Fri, Apr 07 | Sheila Russo | Boston University | Soft Material Robotics and Next-Generation Surgical Robots | Skilling Auditorium | 12:30PM
Abstract

Minimally invasive surgical (MIS) procedures pose significant challenges for robots, which need to safely navigate through and manipulate delicate anatomy while performing complex tasks to treat tumors in remote areas. Soft robots hold considerable potential in MIS given their compliant nature, inherent safety, and high dexterity. Yet, a significant breakthrough of soft robots in surgery is impeded by current limitations in the design, manufacturing, and integration of soft materials that combine actuation, sensing, and control. Scientific understanding of medical and surgical robotics is entering an exciting new era where early approaches relying on rigid materials, standard manufacturing, and conventional kinematics are giving way to Soft Material Robotics. Our research at the Material Robotics Lab at Boston University is focused on the design, mechanics, and manufacturing of novel multi-scale and multi-material biomedical robotic systems. This talk will illustrate our work towards achieving safe navigation, distal actuation, integrated sensing, and effective force transmission in MIS by highlighting different classes of soft surgical robots, i.e., soft continuum robots, soft-foldable robots, and soft reactive skins with applications in lung cancer, colorectal cancer, and brain cancer surgery.

Fri, Apr 14 | Zhenish Zhakypov | Stanford | Multimaterial Design for Multifunctional Miniature Robots | Skilling Auditorium | 12:30PM
Abstract

Small-scale animals like trap-jaw ants exhibit remarkable behaviors, not just through communication, but also via their adaptable jaw-jump and leg-jump mechanisms that enable them to thrive in diverse environments. These creatures have successfully tackled the challenges of miniaturization, multifunctionality, and multiplicity, which are critical factors in the development of small-scale robotic systems. By creating these abilities in mesoscale robots, we can unlock a vast array of applications. For instance, we could build artificial multi-locomotion swarms to explore and monitor diverse physical environments with high task efficiency or design compact and distributed haptic actuators to simulate compelling human touch interactions in virtual environments with high fidelity and minimal encumbrance. However, conventional design methods for creating miniature yet multifunctional robots are limited due to constraints in downsizing classical electric motors, transmission gears, and mechanisms. Additionally, increasing the number of components requires meticulous manual assembly processes. In this talk, I will delve into how multimaterial layer composition and folding (origami robotics) and 3D printing can enable miniature, multifunctional, and mass-manufacturable robots. I will provide insights into a systematic design methodology that breaks down mesoscale robot design in terms of mechanisms, geometry, materials, and fabrication, highlighting their relation and challenges. I will demonstrate unique robotic platforms built on this paradigm, including Tribots, 10-gram palm-sized multi-locomotion origami robots that jump, roll, and crawl to traverse uneven terrains and manipulate objects collectively, as well as shape-morphing grippers and structures. These robots use functional materials like shape memory alloy and fluids to achieve tunable power, compact actuators, and mechanisms. Additionally, I will present my latest research on monolithically 3D-printed, soft finger and wrist-worn haptic displays called FingerPrint and Hoxels. FingerPrint produces 4-DoF motion on the finger pad and phalanges with tunable forces and torques for skin shear, pressure, and vibrotactile interaction and can be mass-printed requiring

Fri, Apr 21 | Sanja Fidler | U. Toronto/NVIDIA | A.I. for 3D Content Creation | Skilling Auditorium | 12:30PM
Abstract

3D content is key in several domains such as architecture, film, gaming, and robotics, and lies at the heart of metaverse applications. However, creating 3D content can be very time-consuming -- artists need to sculpt high-quality 3D assets, compose them into large worlds, and bring these worlds to life by writing behaviour models that drive the agents around in the world. In this talk, I'll present some of our ongoing efforts on creating virtual worlds with A.I., with a focus on street-level simulation for autonomous driving.

Fri, Apr 28 | Cathy Wu | MIT | Intelligent Coordination for Sustainable Roadways – If Autonomous Vehicles are the Answer, then What is the Question? | Skilling Auditorium | 12:30PM
Abstract

For all their hype, autonomous vehicles have yet to make our roadways more sustainable: safer, cheaper, cleaner. This talk suggests that the key to unlocking sustainable roadways is to shift the focus from autonomy-driven design to use-driven design. Based on recent work, the talk focuses on three critical priorities (safety, cost, and environment), each leveraging the 'autonomy' capability of coordinating vehicles. But fully autonomous agents are not the only entities that can coordinate. A paragon of safety is air traffic control, in which expert operators remotely coordinate aircraft. The work brings these ideas to the dense traffic on roadways and analyzes the scalability of operators. Another, much cheaper way to coordinate is to give a smartphone app to drivers. The work characterizes how well such lower-tech systems can still achieve autonomous capabilities. For cleaner roadways, dozens of articles have considered coordinating vehicles to reduce emissions. This work models whether doing so would move the needle on climate change mitigation goals. To study these multi-agent coordination problems, the work leverages queueing theory, Lyapunov stability analysis, transfer learning, and multi-task reinforcement learning. The talk will also examine issues of robustness that arise when applying learning-based techniques and present a new line of work designed to address them. Overall, the results indicate promise for intelligent coordination to enable sustainable roadways.

Fri, May 05 | Brian Ichter | Google Brain | Connecting Robotics and Foundation Models | Skilling Auditorium | 12:30PM
Abstract

Foundation models can encode a wealth of semantic knowledge about the world, but can be limited by their lack of interactive, real-world experience. This poses a challenge for leveraging them in robotics, which requires interactive decision making and reasoning for a given embodiment. This talk will discuss several research directions towards addressing these challenges, from grounding them in their environment (SayCan, InnerMonologue, Grounded Decoding, NL-Maps), to directly outputting grounded code (Code as Policies), and finally training them with embodied robotics data (PaLM-E, RT-1).

Fri, May 12 | Nick Morozovsky | Amazon Lab 126 | Meet Astro: Amazon Consumer Robotics’ first robot for homes and small-to-medium businesses | Skilling Auditorium | 12:30PM
Abstract

Astro is Amazon’s first robot designed for homes and small-to-medium businesses. In this talk, we will show Astro’s use cases, including home monitoring, Human Robot Interaction, and Virtual Security Guard, and share how we designed Astro for customers. Making the first prototype work is a very different problem from designing and testing robots for mass production. Design for manufacturability, repairability, and sustainability are important tenets for making robots that are easy to assemble, test, and fix. Reliability is another critical concern for mass-produced robots; we’ll describe some of the extensive testing we performed to make sure that every Astro robot works for customers for years. The Mobility and Perception capabilities of Astro are what make it useful and capable in unstructured and variable environments like homes. Our software team overcame challenges to deliver these capabilities with consumer-grade sensors and compute. We’ll conclude with some of Amazon’s programs to engage with the academic community. Note: no YouTube video.

Fri, May 19 | Pratik Chaudhari | UPenn | A Picture of the Prediction Space of Deep Networks | Skilling Auditorium | 12:30PM
Abstract

Deep networks have many more parameters than training samples and can therefore overfit---and yet, they predict remarkably accurately in practice. Training such networks is a high-dimensional, large-scale and non-convex optimization problem and should be prohibitively difficult---and yet, it is quite tractable. This talk aims to illuminate these puzzling contradictions. We will argue that deep networks generalize well because of a characteristic structure in the space of learnable tasks. The input correlation matrix for typical tasks has a “sloppy” eigenspectrum where, in addition to a few large eigenvalues, there is a large number of small eigenvalues that are distributed uniformly over a very large range. As a consequence, the Hessian and the Fisher Information Matrix of a trained network also have a sloppy eigenspectrum. Using these ideas, we will demonstrate an analytical non-vacuous PAC-Bayes generalization bound for general deep networks. We will next develop information-geometric techniques to analyze the trajectories of the predictions of deep networks during training. By examining the underlying high-dimensional probabilistic models, we will reveal that the training process explores an effectively low-dimensional manifold. Networks with a wide range of architectures and sizes, trained using different optimization methods, regularization techniques, data augmentation techniques, and weight initializations lie on the same manifold in the prediction space. We will also show that predictions of networks being trained on different tasks (e.g., different subsets of ImageNet) using different representation learning methods (e.g., supervised, meta-, semi-supervised and contrastive learning) also lie on a low-dimensional manifold.

References:
Does the data induce capacity control in deep learning? Rubing Yang, Jialin Mao, and Pratik Chaudhari. ICML '22. https://arxiv.org/abs/2110.14163
Deep Reference Priors: What is the best way to pretrain a model? Yansong Gao, Rahul Ramesh, and Pratik Chaudhari. ICML '22. https://arxiv.org/abs/2202.00187
A picture of the space of typical learnable tasks. Rahul Ramesh, Jialin Mao, Itay Griniasty, Rubing Yang, Han Kheng Teoh, Mark Transtrum, James P. Sethna, and Pratik Chaudhari. ICML '23. https://arxiv.org/abs/2210.17011
The Training Process of Many Deep Networks Explores the Same Low-Dimensional Manifold. Jialin Mao, Itay Griniasty, Han Kheng Teoh, Rahul Ramesh, Rubing Yang, Mark K. Transtrum, James P. Sethna, and Pratik Chaudhari. 2023. https://arxiv.org/abs/2305.01604
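For intuition, here is a small numpy sketch of the "sloppy eigenspectrum" diagnostic the abstract describes: compute the eigenvalues of the input correlation matrix and check that, beyond a few large ones, the rest spread roughly uniformly over many decades. The toy data is synthetic and purely illustrative.

```python
import numpy as np

def input_spectrum(X):
    """X: (n_samples, n_features). Eigenvalues of the input correlation
    matrix, sorted in descending order; a 'sloppy' task shows a few large
    eigenvalues plus a tail roughly uniform in log-eigenvalue."""
    C = X.T @ X / X.shape[0]
    return np.linalg.eigvalsh(C)[::-1]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50)) * np.logspace(0, -3, 50)  # toy sloppy inputs
eig = input_spectrum(X)
print(np.log10(eig[[0, 9, 19, 29, 39, 49]]))  # spans ~6 decades fairly evenly
```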

Fri, May 26 | Jeannette Bohg | Stanford | Large Language Models for Solving Long-Horizon Manipulation Problems | Skilling Auditorium | 12:30PM
Abstract

My long-term research goal is to enable real robots to manipulate any kind of object such that they can perform many different tasks in a wide variety of application scenarios such as in our homes, in hospitals, warehouses, or factories. Many of these tasks will require long-horizon reasoning and sequencing of skills to achieve a goal state. In this talk, I will present our work on enabling long-horizon reasoning on real robots for a variety of different long-horizon tasks that can be solved by sequencing a large variety of composable skill primitives. I will specifically focus on the different ways Large Language Models (LLMs) can help with solving these long-horizon tasks. The first part of my talk will be on TidyBot, a robot for personalised household clean-up. One of the key challenges in robotic household cleanup is deciding where each item goes. People's preferences can vary greatly depending on personal taste or cultural background. One person might want shirts in the drawer, another might want them on the shelf. How can we infer these user preferences from only a handful of examples in a generalizable way? Our key insight: Summarization with LLMs is an effective way to achieve generalization in robotics. Given the generalised rules, I will show how TidyBot solves the long-horizon task of cleaning up a home. In the second part of my talk, I will focus on more complex long-horizon manipulation tasks that exhibit geometric dependencies between different skills in a sequence. In these tasks, the way a robot performs a certain skill will determine whether a follow-up skill in the sequence can be executed at all. I will present an approach called text2motion that utilises LLMs for task planning without the need for defining complex symbolic domains. And I will show how we can verify whether the plan that the LLM came up with is actually feasible. The basis for this verification is a library of learned skills and an approach for sequencing these skills to resolve geometric dependencies prevalent in long-horizon tasks.
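To make the summarization insight concrete, here is a hedged sketch of the recipe, with a placeholder `llm()` standing in for any text-completion API; the prompts and example preferences are illustrative, not TidyBot's exact ones.

```python
def llm(prompt: str) -> str:
    """Placeholder for any text-completion API (to be wired up by the user)."""
    raise NotImplementedError

EXAMPLES = [("yellow shirt", "drawer"), ("purple shirt", "drawer"),
            ("white socks", "closet shelf"), ("black socks", "closet shelf")]

def summarize_preferences(examples):
    """Compress a handful of object -> location examples into general
    rules; it is the rules, not the raw examples, that generalize."""
    shown = "\n".join(f"{obj} -> {place}" for obj, place in examples)
    return llm("Summarize these placements as general rules:\n" + shown)

def place(obj, rules):
    """Apply the summarized rules to a previously unseen object."""
    return llm(f"Rules:\n{rules}\nWhere does '{obj}' go? Answer with one location.")
```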

Fri, Jun 02 | Andreea Bobu | Stanford | Aligning Robot and Human Representations | Skilling Auditorium | 12:30PM
Abstract

To perform tasks that humans want in the world, robots rely on a representation of salient task features; for example, to hand me a cup of coffee, the robot considers features like efficiency and cup orientation in its behavior. Prior methods try to learn both a representation and a downstream task jointly from data sets of human behavior, but this unfortunately picks up on spurious correlations and results in behaviors that do not generalize. In my view, what’s holding us back from successful human-robot interaction is that human and robot representations are often misaligned: for example, our lab’s assistive robot moved a cup inches away from my face -- which is technically collision-free behavior -- because it lacked an understanding of personal space. Instead of treating people as static data sources, my key insight is that robots must engage with humans in an interactive process for finding a shared representation for more efficient, transparent, and seamless downstream learning. In this talk, I focus on a divide and conquer approach: explicitly focus human input on teaching robots good representations before using them for learning downstream tasks. This means that instead of relying on inputs designed to teach the representation implicitly, we have the opportunity to design human input that is explicitly targeted at teaching the representation and can do so efficiently. I introduce a new type of representation-specific input that lets the human teach new features, I enable robots to reason about the uncertainty in their current representation and automatically detect misalignment, and I propose a novel human behavior model to learn robust behaviors on top of human-aligned representations. By explicitly tackling representation alignment, I believe we can ultimately achieve seamless interaction with humans where each agent truly grasps why the other behaves the way they do.

Fri, Jun 02 | Spencer M. Richards | Stanford | Control-Oriented Learning for Dynamical Systems | Skilling Auditorium | 12:30PM
Abstract

Robots are inherently nonlinear dynamical systems, for which synthesizing a stabilizing feedback controller with a known system model is already a difficult task. When learning a nonlinear model and controller from data, naive regression can produce a closed-loop model that is poorly conditioned for stable operation over long time horizons. In this talk, I will present our work on control-oriented learning, wherein the model learning problem is augmented to be cognizant of the desire for a stable closed-loop system. I will discuss how principles from control theory inform such augmentation to produce performant closed-loop models in a data efficient manner. This will involve ideas from contraction theory, constrained optimization, structured learning, adaptive control, and meta-learning.
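The failure mode mentioned above is easy to reproduce in a toy setting. The sketch below, under the strong simplifying assumption of linear dynamics, shows that naive least squares can return a system matrix with spectral radius above one (so long rollouts diverge), and applies a crude stability projection as a stand-in for the principled constraints discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.99, 0.10],
                   [0.00, 0.99]])                    # stable, but barely
X = rng.normal(size=(2, 200))                        # states
Y = A_true @ X + 0.05 * rng.normal(size=X.shape)     # noisy next states

A_ls = Y @ np.linalg.pinv(X)                         # naive regression
rho = max(abs(np.linalg.eigvals(A_ls)))
print("estimated spectral radius:", rho)             # can exceed 1.0

if rho >= 1.0:
    A_ls *= 0.999 / rho   # crude projection back to a stable spectral radius
```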

Schedule Winter 2023

Date | Guest | Affiliation | Title | Location | Time
Fri, Jan 13 | Mrdjan Jankovic | Ford (Retired) | Why would we want a multi-agent system unstable? | Skilling Auditorium | 12:30PM
Abstract

In everyday driving, many traffic maneuvers, such as merges, lane changes, and passing through an intersection, require negotiation between independent actors/agents. The same is true for mobile robots autonomously operating in a space open to other agents (e.g., humans or other robots). Negotiation is an inherently difficult concept to code into a software algorithm. It has been observed in computer simulations that some “decentralized” algorithms produce gridlocks while others never do. It turns out that gridlocking algorithms create locally stable equilibria in the joint inter-agent space, while, for those that don’t gridlock, the equilibria are unstable – hence the title of the talk. We use Control Barrier Function (CBF) based methods to provide collision avoidance guarantees. The main advantage of CBFs is that they result in relatively easy-to-solve convex programs even for nonlinear system dynamics and inherently non-convex obstacle avoidance problems. Six different CBF-based control policies were compared for collision avoidance and liveness (fluidity of motion, absence of gridlocks) on a 5-agent, holonomic-robot system. The outcome was then correlated with stability analysis on a simpler, yet representative problem. The results are illustrated by extensive simulations and a vehicle experiment with stationary obstacles.
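For readers unfamiliar with CBFs, here is a minimal sketch of the convex safety filter the abstract alludes to, for a single-integrator agent avoiding one circular obstacle; the one-constraint QP has a closed-form halfspace projection, used here so the example stays self-contained. Dynamics and gains are illustrative.

```python
import numpy as np

def cbf_filter(x, u_des, x_obs, r, alpha=1.0):
    """Minimally modify u_des so that dh/dt + alpha*h >= 0 holds for
    h(x) = ||x - x_obs||^2 - r^2 with single-integrator dynamics
    xdot = u. The QP  min ||u - u_des||^2  s.t.  a @ u >= b  reduces
    to projecting u_des onto a halfspace."""
    d = x - x_obs
    h = d @ d - r**2
    a, b = 2.0 * d, -alpha * h
    if a @ u_des >= b:                      # nominal command already safe
        return u_des
    return u_des + (b - a @ u_des) * a / (a @ a)

# Agent at (1, 0) commanded straight at an obstacle of radius 0.5 at the
# origin: the filter slows the approach just enough to stay safe.
u = cbf_filter(np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
               np.array([0.0, 0.0]), r=0.5)
```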

Fri, Jan 20 | Tony Chen | Stanford | Designing Robotic Grippers for Interaction with Real-World Environments | Skilling Auditorium | 12:30PM
Abstract

Equipping robots with the functionality to traverse and interact with real-world environments beyond the laboratory is the crucial next step in advancing robotics, particularly for field robotic surveying and exploration. To achieve this, robots need the capability of interacting with the environment – that is, the capability to manipulate. In this talk, I will discuss the design, prototyping, and experimentation process for two grippers for two robotic systems. First, I will introduce a gripper for aerial grasping with drones in mid-flight, where starting from simple dynamic models leads to the design principles behind a passively activated gripper, and to the implications for flight control. Second, I will introduce ReachBot, a novel rock-climbing robot designed for planetary exploration, such as Martian lava tubes, focusing on the mechanical design challenges and the gripper designs.

Fri, Jan 20 | Rachel Luo | Stanford | Incorporating Sample Efficient Monitoring into Learned Autonomy | Skilling Auditorium | 12:30PM
Abstract

When deploying machine learning models in high-stakes robotics applications, the ability to detect unsafe situations is crucial. Warning systems are thus designed to provide alerts when an unsafe situation is imminent (in the absence of corrective action), with the objective of issuing alerts as quickly as possible when there is a problem (i.e. they should be sample-efficient). They should also come with statistical guarantees ensuring that whenever there is an unsafe situation, the warning system will detect it (i.e. a low false negative rate) or that not too many false alarms will be issued (a low false positive rate). In this talk, I will present warning systems for two types of situations. First, I will introduce a real-time warning system framework that can detect unsafe situations when there is no distribution shift. We provide a guarantee on the false negative rate (i.e. of the situations that are unsafe, fewer than epsilon will occur without an alert) using very few samples (only 1/epsilon), and we empirically observe low false detection (positive) rates. Second, I will present a warning system for identifying distribution shifts. Our method is capable of detecting these distribution shifts up to 11x faster than prior work on realistic robotics settings, while providing a high probability guarantee against false alarms. We empirically observe low false negative rates (whenever there is a distribution shift in our experiments, our method indeed emits an alert).
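To illustrate how a guarantee can follow from only 1/epsilon samples, here is a conformal-style sketch consistent with the abstract (though not necessarily the paper's exact construction): calibrate the alert threshold as the minimum score over n known-unsafe situations, so that by exchangeability a fresh unsafe situation scores below the threshold with probability at most 1/(n+1) <= epsilon.

```python
import numpy as np

def calibrate_threshold(unsafe_scores):
    """unsafe_scores: alert scores s(x) on n ~ 1/epsilon known-unsafe
    cases. Alerting whenever s(x) >= min(scores) gives a false-negative
    rate of at most 1/(n+1) under exchangeability."""
    return float(np.min(unsafe_scores))

eps = 0.05
rng = np.random.default_rng(0)
calib = rng.normal(loc=2.0, size=int(np.ceil(1.0 / eps)))  # 20 unsafe scores
tau = calibrate_threshold(calib)
alert = lambda score: score >= tau                          # warning rule
```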

Fri, Jan 27 | Jan Becker | Apex.ai | From open-source to safety-certified robotic software | Skilling Auditorium | 12:30PM
Abstract

For decades, automotive and robotics developers have reinvented the software wheel many times. With its launch in 2010, ROS enabled rapid software prototyping and the reuse of software in prototyping and development. In 2018, the launch of ROS 2 further improved ROS and its architecture, and ROS 2 now provides an efficient prototyping and rapid development platform. But the lack of real-time performance and safety certification still prevents the widescale adoption of ROS-based software in safety-critical products. In this talk, we will discuss the state of the art in robotics and automotive software and how we have taken ROS from an open-source project to a safety-certified robotic and automotive software development kit.
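For context, this is the scale of boilerplate ROS 2 removes: a complete publisher node in rclpy takes a few dozen lines, using the standard ROS 2 Python API (the topic name and rate here are arbitrary).

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Talker(Node):
    """Minimal ROS 2 node: one publisher, one timer callback."""
    def __init__(self):
        super().__init__('talker')
        self.pub = self.create_publisher(String, 'chatter', 10)
        self.timer = self.create_timer(0.5, self.tick)  # fires at 2 Hz

    def tick(self):
        msg = String()
        msg.data = 'hello from ROS 2'
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(Talker())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

The safety-certification gap the talk addresses lies below this API surface, in the real-time and memory behavior of the executor and middleware.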

Fri, Feb 03 | Oussama Khatib | Stanford | The Age of Human-Robot Collaboration: OceanOneK Deep-Sea Exploration | Skilling Auditorium | 12:30PM
Abstract

Robotics is undergoing a major transformation in scope and dimension with accelerating impact on the economy, production, and culture of our global society. The generations of robots now being developed will increasingly touch people and their lives. Combining the experience and cognitive abilities of the human with the strength, dependability, reach, and endurance of robots will fuel a wide range of new robotic applications. This paradigm is illustrated here with challenging underwater tasks accomplished by a robotic diver, OceanOneK. The robot’s advanced autonomous capabilities for physical interaction in deep-sea are connected to a human expert through an intuitive haptic/stereo-vision interface. The robot was recently deployed in several archeological expeditions in the Mediterranean with the ability to reach 1000 meters. Distancing humans physically from dangerous and unreachable spaces while connecting their skills, intuition, and experience to the task promises to fundamentally alter remote work. These developments show how human-robot collaboration-induced synergy can expand our abilities to reach new resources, build and maintain infrastructure, and perform disaster prevention and recovery operations - be it deep in oceans and mines, at mountain tops, or in space.

Fri, Feb 10 | Sean Follmer | Stanford | Towards Shape Changing Displays and Shape Changing Robots | Skilling Auditorium | 12:30PM
Abstract

Morphological change can afford both information transfer (through both vision and touch) as well as functional adaptation to the environment or the task at hand. In my research, I explore the design, development, and modeling of shape changing systems in both haptic user interfaces and robotics. Towards a goal of more human-centered computing, I believe that interaction must be grounded in the physical world and leverage our innate abilities for spatial cognition and dexterous manipulation with our hands. By creating interfaces that allow for richer physical interaction, such as bimanual, whole hand haptic exploration, these systems can help people with different abilities (e.g., children, people with visual impairments, or even expert designers) better understand and interact with information. The first part of my talk will discuss a central challenge in the widespread adoption of such haptic user interfaces – how can we create physical interactive displays that update dynamically, and what are the interaction techniques and enabling technologies necessary to support such systems? In a parallel domain, Robotics, these same technologies and approaches can support new multifunctionality and adaptation. In the second part of my talk, I will detail our recent progress in large shape changing truss robots. I will present methods for high-extension and compliant actuation in truss robots and explore how the compliance can be utilized for unique behaviors. This shape change can be applied to locomotion, physical interaction with the environment, and the engulfing, grasping, and manipulation of objects.

Fri, Feb 17 | Anca Dragan | UC Berkeley | Robotics algorithms that take people into account | Skilling Auditorium | 12:30PM
Abstract

I discovered AI by reading “Artificial Intelligence: A Modern Approach” (AIMA). What drew me in was the concept that you could specify a goal or objective for a robot, and it would be able to figure out on its own how to sequence actions in order to achieve it. In other words, we don’t have to hand-engineer the robot’s behavior — it emerges from optimal decision making. Throughout my career in robotics and AI, it has always felt satisfying when the robot would autonomously generate a strategy that I felt was the right way to solve the task, and it was even better when the optimal solution would take me a bit by surprise. In “Intro to AI” I share with students an example of this, where a mobile robot figures out it can avoid getting stuck in a pit by moving along the edge. In my group’s research, we tackle the problem of enabling robots to coordinate with and assist people: for example, autonomous cars driving among pedestrians and human-driven vehicles, or robot arms helping people with motor impairments (together with UCSF Neurology). And time and time again, what has sparked the most joy for me is when robots figure out their own strategies that lead to good interaction — when, as in the work your very own faculty Dorsa Sadigh did in her PhD, we don’t have to hand-engineer that an autonomous car should inch forward at a 4-way stop to assert its turn. Instead, the behavior emerges from optimal decision making. So for this seminar, I'd like to step back a bit. Rather than going through one particular piece of research, I will take the opportunity to share what I've found the underlying optimal decision making problem formulation is for HRI -- and reflect on how we've set up optimal decision making problems that require the robot to account for the people it is interacting with, along with the surprising strategies that have emerged from that along the way. This has come back full circle for me, as I got to include some of this perspective in the very book that drew me into the field, by editing the robotics chapter for the 4th edition of AIMA.

Fri, Feb 24 | Ankur Handa | NVIDIA | DeXtreme: Transferring Agile In-Hand Manipulation from Simulations to Reality | Skilling Auditorium | 12:30PM
Abstract

Recent work has demonstrated the ability of deep reinforcement learning (RL) algorithms to learn complex robotic behaviours in simulation, including in the domain of multi-fingered manipulation. However, such models can be challenging to transfer to the real world due to the gap between simulation and reality. In this work, we present our techniques to train a) a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand and b) a robust pose estimator suitable for providing reliable real-time information on the state of the object being manipulated. Our policies are trained to adapt to a wide range of conditions in simulation. Consequently, our vision-based policies significantly outperform the best vision policies in the literature on the same reorientation task and are competitive with policies that are given privileged state information via motion capture systems. Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups, and in our case, with the Allegro Hand and Isaac Gym GPU-based simulation. Furthermore, it opens up possibilities for researchers to achieve such results with commonly-available, affordable robot hands and cameras. Videos of the resulting policy and supplementary information, including experiments and demos, can be found at https://dextreme.org/
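A schematic sketch of the domain-randomization recipe underlying this kind of sim-to-real transfer appears below: resample physics and observation parameters every episode so one policy must cope with a whole family of simulated worlds. The `sim` interface and parameter ranges are hypothetical; Isaac Gym's actual API differs.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize(sim):
    """Resample simulator parameters at the start of each episode."""
    sim.set_friction(rng.uniform(0.5, 1.5))
    sim.set_object_mass(rng.uniform(0.8, 1.2) * sim.nominal_mass)
    sim.set_motor_strength(rng.uniform(0.7, 1.3))
    sim.set_observation_noise(rng.uniform(0.0, 0.02))

def rollout_and_update(policy, sim):
    """Placeholder for the RL inner loop (e.g., PPO rollouts + update)."""

def train(sim, policy, episodes):
    for _ in range(episodes):
        randomize(sim)                   # a new 'world' every episode
        rollout_and_update(policy, sim)  # policy must work across all of them
```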

Fri, Mar 03 | Chris Heckman | CU Boulder | Failure is Not an Option: Our Techniques at the DARPA Subterranean Challenge, Lessons Learned, and Next Steps | Skilling Auditorium | 12:30PM
Abstract

When we in the robotics research community think of what we'd like autonomous agents to tackle in the future, we often target 'dull, dirty, and dangerous' tasks. However, despite a sustained boom in robotics research over the last decade, the number of places we've seen robots deployed for these tasks has been uninspiring. Successful commercialization of autonomous robots has required significant human scaffolding through teleoperation, and incredible amounts of capital, and despite this is still limited by brittle systems and hand-engineered components. The reality seems to be that these tasks are not nearly as dull as they might seem on the surface, and instead require ingenuity for success some small but critical fraction of the time. In this talk, I focus on my recent investigation into where the limits of autonomy are for the highly sought-after application to subterranean emergency response operations. This application was motivated by the DARPA Subterranean Challenge, which concluded last year with the CU Boulder team 'MARBLE' taking third place and winning a $500,000 prize. I will give an overview of the genesis of our solution over three years of effort, especially with respect to mobility, autonomy, perception, and communications. I'll also discuss the implications for present-day robotic autonomy and where we go from here.

Fri, Mar 10 | Joel Burdick | Caltech | Robots in Dynamic Tasks: Learning, Risk, and Safety | Skilling Auditorium | 12:30PM
Abstract

Autonomous robots are increasingly applied to tasks that involve complex maneuvers and dynamic environments that are difficult to model a priori. Various types of learning methods have been proposed to fill this modeling gap. To motivate the need for learning complex fluid-structure interactions, we first review the SQUID (a ballistically launched and self-stabilizing drone) and PARSEC (an aerial manipulator that can deliver self-anchoring sensor network modules) systems. Next we show how to learn basic fluid-structure interactions using Koopman spectral techniques, and incorporate the learned model into a real-time nonlinear model predictive control framework. The performance of this approach is demonstrated on small drones that operate very close to the ground, where the ground effect normally destabilizes flight. Operational risk abounds in complex robotic tasks. This risk arises both from the uncertain environment and from incompletely learned models. After reviewing the basics of coherent risk measures, we will show how simple risk-aware terrain analysis improved the performance of our legged and wheeled robots in the DARPA Subterranean Challenge. Then we will introduce an online method to learn the dynamics of an a priori unknown dynamical obstacle, and robustly avoid the obstacle using novel risk-based, distributionally robust chance constraints derived from the evolving learned model. We then introduce the concept of risk surfaces to enable fast online learning of a priori unknown dynamical disturbances, and show how this approach can adapt a drone to wind disturbances with only 45 seconds of online data gathering.
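As a pointer for the Koopman portion of the talk, here is a compact EDMD-style sketch: lift states through a dictionary of observables, fit a linear operator in the lifted space by least squares, and roll that linear model forward for prediction (the kind of model an MPC can optimize against). The dictionary here is arbitrary and illustrative.

```python
import numpy as np

def lift(x):
    """Dictionary of observables: the state plus a few nonlinear features."""
    x = np.atleast_2d(x)
    return np.hstack([x, np.sin(x), x**2])

def fit_koopman(X, Y):
    """EDMD: least-squares K with lift(Y) ~ lift(X) @ K, where rows of
    X and Y are consecutive state pairs drawn from trajectories."""
    K, *_ = np.linalg.lstsq(lift(X), lift(Y), rcond=None)
    return K

def predict(x0, K, steps):
    """Roll the lifted *linear* model forward from a single state."""
    z, traj = lift(x0), []
    for _ in range(steps):
        z = z @ K
        traj.append(z)
    return np.vstack(traj)
```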

Fri, Mar 17 | Vandi Verma | NASA (JPL) | Autonomous NASA robots breaking records on Mars | Skilling Auditorium | 12:30PM
Abstract

The goal of NASA’s robotics missions is to maximize science return. As instructions can only be sent once every one or more Martian solar days, robots need to be autonomous to be effective. In this seminar I’ll discuss autonomous navigation, flight, sampling, and targeting data from the Mars 2020 mission, which consists of the Perseverance rover and Ingenuity helicopter. The goal of the mission is to core and store Martian samples for eventual return to Earth. Perseverance’s manipulation and sampling systems have collected samples from unique locations at twice the rate of any prior mission and broken several planetary rover driving records. 88% of all driving has been autonomous - an order of magnitude more than any prior NASA Mars rover. Perseverance can autonomously select scientifically interesting rocks to shoot with its laser for analysis and is in the process of deploying autonomous planning and scheduling of activities. I’ll discuss some open problems that, if addressed, could further enhance space robotics at NASA JPL.

Schedule Fall 2022

Date | Guest | Affiliation | Title | Location | Time
Fri, Sep 30 | Roberto Calandra | Meta AI | Perceiving, Understanding, and Interacting through Touch | NVIDIA Auditorium | 12:30PM
Abstract

Touch is a crucial sensor modality in both humans and robots. Recent advances in tactile sensing hardware have resulted -- for the first time -- in the availability of mass-produced, high-resolution, inexpensive, and reliable tactile sensors. In this talk, I will argue for the importance of creating a new computational field of touch processing dedicated to the processing and understanding of touch, analogous to what computer vision is for vision. This new field will present significant challenges both in terms of research and engineering. To start addressing some of these challenges, I will introduce our open-source ecosystem dedicated to touch sensing research. Finally, I will present some applications of touch in robotics and discuss other future applications.

Fri, Oct 07 | Tania Morimoto | UCSD | Flexible Surgical Robots: Design, Sensing, and Control | NVIDIA Auditorium | 12:30PM
Abstract

Flexible and soft medical robots offer capabilities beyond those of conventional rigid-link robots due to their ability to traverse confined spaces and conform to highly curved paths. They also offer potential for improved safety due to their inherent compliance. In this talk, I will present several new robot designs for various surgical applications. In particular, I will discuss our work on soft, growing robots that achieve locomotion by material extending from their tip. I will discuss limitations in miniaturizing such robots, along with methods for actively steering, sensing, and controlling them. Finally, I will also discuss new approaches for sensing, haptic feedback, and human-in-the-loop control that are aimed at improving the performance of flexible surgical robots.

Fri, Oct 14 | Jiajun Wu | Stanford | Multi-Sensory Neural Objects: Modeling, Inference, and Applications in Robotics | NVIDIA Auditorium | 12:30PM
Abstract

In the past two years, neural representations for objects and scenes have demonstrated impressive performance on graphics and vision tasks, particularly on novel view synthesis, and have gradually gained attention from the robotics community due to their potential robotic applications. In this talk, I'll present our recent efforts in building neural representations that are object-centric and multi-sensory---two properties that are essential for flexible, efficient, and generalizable robot manipulation. I'll focus on four aspects: technical innovations in building such representations, advances in scaling them up in the form of a multi-sensory neural object dataset, methods for inferring category-agnostic neural object representations and their parameters (SysID) from unlabeled visual data, and systems that adopt these representations for robotic manipulation.

Fri, Oct 21 Animesh Garg Georgia Tech/NVIDIA Towards Generalizable Autonomy: Duality of Discovery & Bias NVIDIA Auditorium 12:30PM
Abstract

Generalization in embodied intelligence, such as robotics, requires interactive learning across families of tasks, which is essential for discovering efficient representation and inference mechanisms. Current systems need a lot of hand-holding to learn even a single cognitive concept or a dexterous skill, say “open a door”, let alone to generalize to new windows and cupboards! This is far from our vision of everyday robots, which would require a broader concept of generalization and continual update of representations. This study of the science of embodied AI opens three key questions: (a) representational biases and causal inference for interactive decision making, (b) perceptual representations learned by and for interaction, and (c) systems and abstractions for scalable learning. This talk will focus on decision making, uncovering the many facets of inductive biases in off-policy reinforcement learning in robotics. I will introduce C-Learning, which trades off speed and reliability, as an alternative to vanilla Q-Learning. Then I will talk about the discovery of latent causal structure to improve sample efficiency. Moving on from skills, we will describe task graphs for hierarchically structured manipulation tasks. I will present how to scale structured learning in robot manipulation with RoboTurk, and finally prescribe a practical algorithm for deployment with safety constraints. Taking a step back, I will end with notions of structure in embodied AI for both perception and decision making.

Fri, Oct 28 Tae Myung Huh UC Santa Cruz Adaptable Robotic Manipulation Using Tactile Sensors NVIDIA Auditorium 12:30PM
Abstract

Despite some successful demonstrations, bringing robots into our everyday lives still remains a challenge. One of the major hurdles is the sensing of contact conditions. Contact conditions profoundly affect how the robot’s actions translate into interaction forces between it and the objects, surfaces, or even other agents that it comes into contact with. Part of the motivation to monitor these interactions and how they change is that they are not entirely predictable; contact conditions and forces can change continuously or discontinuously over the course of a task. To react adequately to these kinds of changes, the robot needs tactile sensors. These tactile sensors provide unique contact information, such as contact force, location, and slips, which enables adaptive robotic control in changing contact conditions. In this talk, I will mainly present two tactile sensing studies on dexterous manipulation and suction cup grasping. The first concerns tactile sensing with friction-based contacts for sliding manipulation. I will present a multimodal tactile sensor that measures local normal/shear stress as well as directional (linear and rotational) slips. This information is useful for in-hand object manipulations. The second concerns tactile sensing for suction cup grasping. The smart suction cup monitors local suction seal formations and enables haptic exploration and grip monitoring during grasping and forceful manipulation.

Fri, Nov 11 Veronica Santos UCLA Get in touch: Tactile perception for human-robot systems NVIDIA Auditorium 12:30PM
Abstract

Compared to vision, the complementary sense of touch has yet to be broadly integrated into robotic systems that physically interact with the world. An artificial sense of touch is especially useful when vision is limited or unavailable. In this presentation, I will highlight our work on task-driven efforts to endow robots with tactile perception capabilities for human-robot interaction, remote work in harsh environments, and the manipulation of deformable objects. Real-time tactile perception and decision-making capabilities could be used to advance semi-autonomous robot systems and reduce the cognitive burden on human teleoperators. With advances in haptic display technologies, interfaces with the human body, and networking capabilities, however, touch can be used for more than completing novel tasks. Touch can enhance social connections from afar, enable the inclusion of marginalized groups in community activities, and create new opportunities for remote work involving social and physical interactions.

Fri, Nov 18 Matthew Gombolay Georgia Tech Democratizing Robot Learning NVIDIA Auditorium 12:30PM
Abstract

New advances in robotics and autonomy offer a promise of revitalizing final assembly manufacturing, assisting in personalized at-home healthcare, and even scaling the power of earth-bound scientists for robotic space exploration. Yet, in real-world applications, autonomy is often run in the O-F-F mode because researchers fail to understand the human in human-in-the-loop systems. In this talk, I will share exciting research we are conducting at the nexus of human factors engineering and cognitive robotics to inform the design of human-robot interaction. In my talk, I will focus on our recent work on 1) enabling machines to learn skills from and model heterogeneous, suboptimal human decision-makers, 2) “white-boxing” that knowledge through explainable Artificial Intelligence (XAI) techniques, and 3) scaling to coordinated control of stochastic human-robot teams. The goal of this research is to inform the design of autonomous teammates so that users want to turn – and benefit from turning – to the O-N mode.

Fri, Dec 02 Jeremy Brown JHU Understanding the Utility of Haptic Feedback in Telerobotic Devices NVIDIA Auditorium 12:30PM
Abstract

The human body is capable of dexterous manipulation in many different environments. Some environments, however, are challenging to access because of distance, scale, and limitations of the body itself. In many of these situations, access can be effectively restored via a telerobot. Dexterous manipulation through a telerobot is possible only if the telerobot can accurately relay any sensory feedback resulting from its interactions in the environment to the operator. In this talk, I will discuss recent work from our lab focused on the application of haptic feedback in various telerobotic applications. I will begin by describing findings from recent investigations comparing different haptic feedback and autonomous control approaches for upper-extremity prosthetic limbs, as well as the cognitive load of haptic feedback in these prosthetic devices. I will then discuss recent discoveries on the potential benefits of haptic feedback in robot-assisted minimally invasive surgery (RAMIS) training. Finally, I will discuss current efforts in our lab to measure haptic perception through novel telerobotic interfaces.

Fri, Dec 09 Aaron Edsinger Hello Robot Humanizing Robot Design NVIDIA Auditorium 12:30PM
Abstract

We are at the beginning of a transformation where robots and humans cohabitate and collaborate in everyday life. From caring for older adults to supporting workers in service industries, collaborative robots hold incredible potential to improve the quality of life for millions of people. These robots need to be safe, intuitive and simple to use. They need to be affordable enough to allow widespread access and adoption. Ultimately, acceptance of these robots in society will require that the human experience is at the center of their design. In this presentation I will highlight some of my work to humanize robot design over the last two decades. This work includes compliant and safe actuation for humanoids, low-cost collaborative robot arms, and assistive mobile manipulators. Our recent work at Hello Robot has been to commercialize a mobile manipulator named Stretch that can assist older adults and people with disabilities. I’ll detail the human-centered research and development process behind Stretch and present recent work to allow an individual with quadriplegia to control Stretch for everyday tasks. Finally I’ll highlight some of the results by the growing community of researchers working with Stretch.

Schedule Spring 2022

Date Guest Affiliation Title Location Time
Fri, Apr 01 Anima Anandkumar Caltech and NVIDIA Representation Learning for Autonomous Robots Gates B01 12:15PM
Abstract

Autonomous robots need to be efficient and agile, and be able to handle a wide range of tasks and environmental conditions. This requires the ability to learn good representations of domains and tasks using a variety of sources such as demonstrations and simulations. Representation learning for robotic tasks needs to be generalizable and robust. I will describe some key ingredients to enable this: (1) robust self-supervised learning (2) uncertainty awareness (3) compositionality. We utilize NVIDIA Isaac for GPU-accelerated robot learning at scale on a variety of tasks and domains.

Fri, Apr 08 Samir Menon and Robert Sun Dexterity AI Robot Manipulation in the Logistics Industry Gates B01 12:15PM
Abstract

The past several years have created a perfect storm for the logistics industry: worker shortages, surging ecommerce activity, and many other factors have significantly increased the demand for robot manipulators automating more and more components of logistics and supply chains. This new wave of automation presents a new set of challenges compared to traditional automation tasks, e.g. in manufacturing. Manipulation workloads in the logistics industry involve extreme variability in the objects being handled: their shape, size, dynamics, condition, etc. as well as the sets of objects that must be managed and organized together. Additionally, these manipulators must be plugged into existing workflows and infrastructures that were designed for and still often interface with humans. Meeting this need, Dexterity is a robotics startup that has engineered and deployed robotic systems that can intelligently manipulate tens of thousands of items in production, reason about and operate in dynamic environments, collaborate with each other using the sense of touch, and safely operate in the presence of humans. Dexterity's robots ship hundreds of thousands of units in packaged food and parcel warehouses each day and are in production 24/7. In this talk, we will describe the unique challenges we have encountered in bringing robot manipulation to logistics, including the technical advancements which we have employed to date, spanning engineering disciplines from machine learning, simulation, modeling, algorithms, and control, to robotic hardware & software. We will describe the variety of automation workflows we are executing that we have found provide the most value to our customers, including palletizing, depalletizing, kitting for fulfillment, and singulation for induction. And we will highlight a number of open problems we have encountered which can motivate future research in the robotics community.

Fri, Apr 15 Joydeep Biswas UT Austin Deploying Autonomous Service Mobile Robots, And Keeping Them Autonomous Gates B01 12:15PM
Abstract

Why is it so hard to deploy autonomous service mobile robots in unstructured human environments, and to keep them autonomous? In this talk, I will explain three key challenges, and our recent research in overcoming them: 1) ensuring robustness to environmental changes; 2) anticipating and overcoming failures; and 3) efficiently adapting to user needs. To remain robust to environmental changes, we build probabilistic perception models to explicitly reason about object permanence and distributions of semantically meaningful movable objects. By anticipating and accounting for changes in the environment, we are able to robustly deploy robots in challenging frequently changing environments. To anticipate and overcome failures, we introduce introspective perception to learn to predict and overcome perception errors. Introspective perception allows a robot to autonomously learn to identify causes of perception failure, how to avoid them, and how to learn context-aware noise models to overcome such failures. To adapt and correct behaviors of robots based on user preferences, or to handle unforeseen circumstances, we leverage representation learning and program synthesis. We introduce visual representation learning for preference-aware planning to identify and reason about novel terrain types from unlabelled human demonstrations. We further introduce physics-informed program synthesis to synthesize and repair programmatic action selection policies (ASPs) in a human-interpretable domain-specific language with several orders of magnitude fewer demonstrations than necessary for neural network ASPs of comparable performance. The combination of these research advances allows us to deploy a varied fleet of wheeled and legged autonomous mobile robots on the campus scale at UT Austin, performing tasks that require robust mobility both indoors and outdoors.

Fri, Apr 22 Rika Antonova Stanford Distributional Representations and Scalable Simulations for Real-to-Sim-to-Real with Deformables Gates B01 12:15PM
Abstract

Success stories of sim-to-real transfer can make it seem effortless and robust. However, the success hinges on bringing simulation close enough to reality. This real-to-sim problem of inferring simulation parameters is particularly challenging for deformable objects. Here, many conventional techniques fall short, since they often require precise state estimation and accurate dynamics. In this talk, I will describe our formulation of real-to-sim as probabilistic inference over simulation parameters. Our key idea is in how we define the state space of a deformable object. We view noisy keypoints extracted from an image of an object as samples from the distribution that captures object geometry. We then embed this distribution into a reproducing kernel Hilbert space (RKHS). Object motion can then be represented by a trajectory of distribution embeddings in this novel state space. This allows for a principled way to incorporate noisy state observations into modern Bayesian tools for simulation parameter inference. Using a small set of real-world trajectories, we can estimate posterior distributions over simulation parameters, such as elasticity, friction, and scale, even for highly deformable objects. I will conclude the talk by outlining our next steps for improving real-to-sim and sim-to-real. One branch of our work explores the potential of differentiable simulators to increase the speed and precision of real-to-sim. Another branch aims to create flexible simulation environments for large-scale learning, with thousands of objects and flexible customization, ultimately aiming to enable sim-to-real for multi-arm and mobile manipulation with deformables.
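
To make the distribution-embedding idea concrete, here is a minimal numerical sketch (toy data and an assumed RBF kernel, not the speaker's implementation): each set of noisy 2D keypoints is embedded as a kernel mean in an RKHS, and the squared distance between the embeddings (the maximum mean discrepancy) scores how well keypoints rendered under candidate simulation parameters match real observations.

import numpy as np

def rbf_gram(X, Y, bandwidth=0.1):
    # Pairwise RBF kernel values k(x, y) = exp(-||x - y||^2 / (2 bandwidth^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(X, Y, bandwidth=0.1):
    # Squared RKHS distance between the kernel mean embeddings of two
    # keypoint sets: E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')].
    return (rbf_gram(X, X, bandwidth).mean()
            - 2.0 * rbf_gram(X, Y, bandwidth).mean()
            + rbf_gram(Y, Y, bandwidth).mean())

# Toy data: noisy keypoints extracted from a real image vs. keypoints
# rendered under one candidate setting of the simulation parameters.
rng = np.random.default_rng(0)
real_kp = rng.normal([0.50, 0.5], 0.05, size=(40, 2))
sim_kp = rng.normal([0.55, 0.5], 0.05, size=(40, 2))
print(f"MMD^2 between embeddings: {mmd2(real_kp, sim_kp):.4f}")

A trajectory of such embeddings then represents the object's motion, and discrepancies of this kind can drive posterior inference over parameters such as elasticity, friction, and scale.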

Fri, May 06 Daniel S. Brown UC Berkeley Leveraging Human Input to Enable Robust AI Systems Gates B01 12:15PM
Abstract

In this talk I will discuss recent progress towards using human input to enable safe and robust AI systems. Much work on robust machine learning and control seeks to be resilient to, or completely remove the need for, human input. By contrast, my research seeks to directly and efficiently incorporate human input into the study of robust AI systems. One problem that arises when robots and other AI systems learn from human input is that there is often a large amount of uncertainty over the human’s true intent and the corresponding desired robot behavior. To address this problem, I will discuss prior and ongoing research along three main topics: (1) how to enable AI systems to efficiently and accurately maintain uncertainty over human intent, (2) how to generate risk-averse behaviors that are robust to this uncertainty, and (3) how robots and other AI systems can efficiently query for additional human input to actively reduce uncertainty and improve their performance. My talk will conclude with a discussion of my long-term vision for safe and robust AI systems, including learning from multi-modal human input, interpretable and verifiable robustness, and developing techniques for human-in-the-loop robust machine learning that generalize beyond reward function uncertainty.

Fri, May 13 Cynthia Sung UPenn Computational Design of Compliant, Dynamical Robots Gates B01 12:15PM
Abstract

Recent years have seen a large interest in soft robotic systems, which provide new opportunities for machines that are flexible, adaptable, safe, and robust. These systems have been highly successful in a broad range of applications, including manipulation, locomotion, human-robot interaction, and more, but they present challenging design and control problems. In this talk, I will share efforts from my group to expand the capabilities of compliant and origami robots to dynamical tasks. I will show how the compliance of a mechanism can be designed to produce a particular mechanical response, how we can leverage these designs for better performance and simpler control, and how we approach these problems computationally to design new compliant robots with new capabilities such as hopping, swimming, and flight.

Fri, May 20 Heather Culbertson USC Using Data for Increased Realism with Haptic Modeling and Devices Gates B01 12:15PM
Abstract

The haptic (touch) sensations felt when interacting with the physical world create a rich and varied impression of objects and their environment. Humans can discover a significant amount of information through touch with their environment, allowing them to assess object properties and qualities, dexterously handle objects, and communicate social cues and emotions. Humans are spending significantly more time in the digital world, however, and are increasingly interacting with people and objects through a digital medium. Unfortunately, digital interactions remain unsatisfying and limited, representing the human as having only two sensory inputs: visual and auditory. This talk will focus on methods for building haptic and multimodal models that can be used to create realistic virtual interactions in mobile applications and in VR. I will discuss data-driven modeling methods that involve recording force, vibration, and sound data from direct interactions with physical objects. I will compare this to new methods using machine learning to generate and tune haptic models using human preferences.

Fri, May 27 Claire Tomlin UC Berkeley Modeling and interacting with other agents Gates B01 12:15PM
Abstract

One of the biggest challenges in the design of autonomous systems is to effectively predict what other agents will do. Reachable sets computed using dynamic game formulations can be used to characterize safe states and maneuvers, yet these have typically been based on the assumption that other agents take their most unsafe actions. In this talk, we explore how this worst case assumption may be relaxed. We present both game-theoretic motion planning results which use feedback Nash equilibrium strategies, and behavioral models with parameters learned in real time, to represent interaction between agents. We demonstrate our results on both simulations and robotic experiments of multiple vehicle scenarios.

Fri, Jun 17 Stefanie Tellex Brown University Towards Complex Language in Partially Observed Environments Y2E2 111 12:00PM
Abstract

Robots can act as a force multiplier for people, whether a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. Existing approaches use action-based representations that do not capture the goal-based meaning of a language expression and do not generalize to partially observed environments. The aim of my research program is to create autonomous robots that can understand complex goal-based commands and execute those commands in partially observed, dynamic environments. I will describe demonstrations of object-search in a POMDP setting with information about object locations provided by language, and mapping between English and Linear Temporal Logic, enabling a robot to understand complex natural language commands in city-scale environments. These advances represent steps towards robots that interpret complex natural language commands in partially observed environments using a decision theoretic framework.

Schedule Winter 2022

Date Guest Affiliation Title Location Time
Fri, Jan 21 Sanjiban Choudhury Cornell University (currently at Aurora) Interactive Imitation Learning: Planning Alongside Humans Skilling Auditorium 1:30PM
Abstract

Advances in machine learning have fueled progress towards deploying real-world robots from assembly lines to self-driving. However, if robots are to truly work alongside humans in the wild, they need to solve fundamental challenges that go beyond collecting large-scale datasets. Robots must continually improve and learn online to adapt to individual human preferences. How do we design robots that both understand and learn from natural human interactions? In this talk, I will dive into two core challenges. First, I will discuss learning from natural human interactions where we look at the recurring problem of feedback-driven covariate shift. We will tackle this problem from a unified framework of distribution matching. Second, I will discuss learning to predict human intent where we look at the chicken-or-egg problem of planning with learned forecasts. I will present a graph neural network approach that tractably reasons over latent intents of multiple actors in the scene. Finally, we will demonstrate how these methods come together to result in a self-driving product deployed at scale.
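
As one classical illustration of correcting feedback-driven covariate shift (a DAgger-style loop on an invented toy task; the talk develops a more general distribution-matching view), the sketch below has the expert label exactly the states the learner itself visits, so the training distribution matches the one the learned policy induces:

import numpy as np

rng = np.random.default_rng(0)

# Toy 1D lane-keeping task: state s = lateral offset; the expert steers
# proportionally back toward the lane center.
def expert(s):
    return -0.5 * s

def step(s, a):
    return s + a + rng.normal(0.0, 0.05)   # noisy dynamics drift the learner off-course

def fit_gain(states, actions):
    # Least-squares fit of a linear policy a = k * s (the learner's policy class).
    S, A = np.array(states), np.array(actions)
    return float(S @ A / (S @ S))

k = 0.0                                     # initial learner does nothing
states, actions = [], []
for _ in range(10):                         # interactive learning rounds
    s = rng.normal(0.0, 1.0)
    for _ in range(50):
        states.append(s)
        actions.append(expert(s))           # expert relabels the LEARNER's states
        s = step(s, k * s)                  # but the learner's action drives the rollout
    k = fit_gain(states, actions)           # aggregate all data, retrain
print(f"learned gain k = {k:.3f} (expert gain is -0.5)")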

Fri, Jan 28 Aleksandra Faust Google Brain Toward Scalable Autonomy Skilling Auditorium 1:30PM
Abstract

Reinforcement learning is a promising technique for training autonomous systems that perform complex tasks in the real world. However, training reinforcement learning agents is a tedious, human-in-the-loop process, requiring heavy engineering and often resulting in suboptimal results. In this talk we explore two main directions toward scalable reinforcement learning. First, we discuss several methods for zero-shot sim2real transfer for mobile and aerial navigation, including visual navigation and fully autonomous navigation on a severely resource-constrained nano UAV. Second, we view the interaction between the human engineer and the agent under training as a decision-making process that the human performs, and consequently automate the training by learning a decision-making policy. With that insight, we focus on zero-shot generalization and discuss learning RL loss functions and a compositional task curriculum that generalize to unseen tasks of evolving complexity. We show that across different applications, learning-to-learn methods improve reinforcement learning agents' generalization and performance, and raise questions about nurture vs. nature in training autonomous systems.

Fri, Feb 04 Vasu Raman and Gavin Ananda Zipline Deconfliction, Or How to Keep Fleets of Fast-Flying Robots from Crashing into Each Other Skilling Auditorium 1:30PM
Abstract

Zipline is a California-based company that manufactures and operates national-scale medical delivery systems using fleets of small drones. Each day, Zipline's drones rack up over 70,000 km in real-world BVLOS flight: day and night, rain or shine, from remote lands to dense urban sprawl. Our vehicles can reach facilities over 80 kilometers away in under an hour. To date, we've made over 250,000 commercial deliveries of medical products to facilities in Rwanda, Ghana, and the United States. With over five years of continuous commercial operations and more than 30 million kilometers flown, we have built a unique perspective on scaling real-world drone operations. Our fleet of aircraft, which we call Zips, operate fully autonomously, with minimal human supervision. On our way to achieving safe autonomous operations at scale, we have faced many technical and operational challenges we have had to innovate around. This talk describes one such innovation – our tactical deconfliction system. A constant bottleneck for our operations is: how do we safely put more Zips in the air at the same time? The design of our airspace has improved through years of iteration to support many Zips aloft at once, all autonomously avoiding one another. This talk shares details on the design of our deconfliction system, and performance data from all of our global operations.

Fri, Feb 11 Luca Carlone MIT Opening the Doors of (Robot) Perception: Towards Certifiable Spatial Perception Algorithms and Systems Skilling Auditorium 1:30PM
Abstract

Spatial perception —the robot’s ability to sense and understand the surrounding environment— is a key enabler for autonomous systems operating in complex environments, including self-driving cars and unmanned aerial vehicles. Recent advances in perception algorithms and systems have enabled robots to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation, manipulation, and human-robot interaction. Despite these advances, researchers and practitioners are well-aware of the brittleness of existing perception systems, and a large gap still separates robot and human perception. This talk presents our latest results on the design of the next generation of robot perception systems and algorithms. The first part of the talk discusses spatial perception systems and motivates the need for high-level 3D scene understanding for robotics. I introduce early work on metric-semantic mapping (Kimera) and novel hierarchical representations for 3D scene understanding (3D Dynamic Scene Graphs). Then, I present recent results on the development of Hydra, the first real-time spatial perception system that builds 3D scene graphs of the environment in real-time and without human supervision. The second part of the talk focuses on perception algorithms and draws connections between robustness of robot perception and global optimization. I present an overview of our certifiable perception algorithms, a novel class of algorithms that is robust to extreme amounts of noise and outliers and affords performance guarantees. I discuss the theoretical implications of our certifiable algorithms and showcase applications to vehicle pose and shape estimation in self-driving scenarios.

Fri, Feb 18 Yuke Zhu UT Austin Objects, Skills, and the Quest for Compositional Robot Autonomy Skilling Auditorium 1:30PM
Abstract

Recent years have witnessed great strides in deep learning for robotics. Yet, state-of-the-art robot learning algorithms still fall short of generalization and robustness for widespread deployment. In this talk, I argue that the key to building the next generation of deployable autonomous robots is integrating scientific advances in AI with engineering disciplines of building scalable systems. Specifically, I will discuss the role of abstraction and composition in building robot autonomy and introduce our recent work on developing a compositional autonomy stack through state-action abstractions. I will talk about GIGA and Ditto for learning actionable object representations from embodied interactions. I will then present BUDS and MAPLE for scaffolding long-horizon tasks with sensorimotor skills. Finally, I will conclude with discussions on future research directions towards building scalable robot autonomy.

Fri, Feb 25 Monroe Kennedy III Stanford Considerations for Human-Robot Collaboration Skilling Auditorium 1:30PM
Abstract

The field of robotics has evolved over the past few decades. We've seen robots progress from the automation of repetitive tasks in manufacturing to the autonomy of mobilizing in unstructured environments to the cooperation of swarm robots that are centralized or decentralized. These abilities have required advances in robotic hardware, modeling, and artificial intelligence. The next frontier is robots collaborating in complex tasks with human teammates, in environments traditionally configured for humans. While solutions to this challenge must utilize all of the advances of robotics, the human element adds a unique aspect that must be addressed. Collaborating with a human teammate means that the robot must have a contextual understanding of the task as well as all participants' roles. We will discuss what constitutes an effective teammate and how we can capture this behavior in a robotic collaborator.

Fri, Mar 04 Stefanos Nikolaidis USC Towards Robust Human-Robot Interaction: A Quality Diversity Approach Skilling Auditorium 1:30PM
Abstract

The growth of scale and complexity of interactions between humans and robots highlights the need for new computational methods to automatically evaluate novel algorithms and applications. Exploring the diverse scenarios of interaction between humans and robots in simulation can improve understanding of complex human-robot interaction systems and avoid potentially costly failures in real-world settings. In this talk, I propose formulating the problem of automatic scenario generation in human-robot interaction as a quality diversity problem, where the goal is not to find a single global optimum, but a diverse range of failure scenarios that explore both environments and human actions. I show how standard quality diversity algorithms can discover surprising and unexpected failure cases in the shared autonomy domain. I then discuss the development of a new class of quality diversity algorithms that significantly improve the search of the scenario space and the integration of these algorithms with generative models, which enables the generation of complex and realistic scenarios. Finally, I discuss applications in procedural content generation and human preference learning.
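
For readers unfamiliar with quality diversity, here is a minimal MAP-Elites-style sketch (MAP-Elites is one standard QD algorithm; the toy objective, descriptors, and all names are assumptions, and the talk concerns substantially more advanced variants): rather than keeping a single best solution, the search retains the best scenario found in each cell of a discretized behavior space.

import numpy as np

rng = np.random.default_rng(1)

def evaluate(scenario):
    # Toy stand-ins: 'quality' scores how severe a failure the scenario
    # induces in simulation; the 2D 'behavior' descriptor characterizes the
    # scenario (e.g., human speed and obstacle density). Both are assumptions.
    quality = -np.sum((scenario - 0.7) ** 2)
    behavior = scenario[:2]
    return quality, behavior

def map_elites(iterations=5000, bins=10, dim=4):
    archive = {}                         # behavior cell -> (quality, scenario)
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # Mutate an elite drawn at random from the archive.
            _, parent = archive[list(archive)[rng.integers(len(archive))]]
            scenario = np.clip(parent + rng.normal(0.0, 0.1, dim), 0.0, 1.0)
        else:
            scenario = rng.random(dim)   # occasional random restart
        quality, behavior = evaluate(scenario)
        cell = tuple(np.clip((behavior * bins).astype(int), 0, bins - 1))
        if cell not in archive or quality > archive[cell][0]:
            archive[cell] = (quality, scenario)   # keep the best per cell
    return archive

archive = map_elites()
print(f"filled {len(archive)} of 100 behavior cells with elite scenarios")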

Schedule Winter 2020

Date Guest Affiliation Title Location Time
Fri, Jan 10 Sandeep Chinchali Stanford University Distributed Perception and Learning Between Robots and the Cloud NVIDIA Auditorium 11:00AM
Abstract

Today’s robotic fleets are increasingly facing two coupled challenges. First, they are measuring growing volumes of high-bitrate video and LIDAR sensory streams, which, second, requires them to use increasingly compute-intensive models, such as deep neural networks (DNNs), for downstream perception or control. To cope with such challenges, compute and storage-limited robots, such as low-power drones, can offload data to central servers (or “the cloud”), for more accurate real-time perception as well as offline model learning. However, cloud processing of robotic sensory streams introduces acute systems bottlenecks ranging from network delay for real-time inference, to cloud storage, human annotation, and cloud-computing cost for offline model learning. In this talk, I will present learning-based approaches for robots to improve model performance with cloud offloading, but with minimal systems cost. For real-time inference, I will present a deep reinforcement learning based offloader that decides when a robot should exploit low-latency, on-board computation, or, when highly uncertain, query a more accurate cloud model. Then, for continual learning, I will present an intelligent, on-robot sampler that mines real-time sensory streams for valuable training examples to send to the cloud for model re-training. Using insights from months of field data and experiments on state-of-the-art embedded deep learning hardware, I will show how simple learning algorithms allow robots to significantly transcend their on-board sensing and control performance, but with limited communication cost.
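
The offloading decision can be illustrated with a simple confidence-threshold baseline (a hedged sketch with invented models and costs; the approach described in the talk learns this policy with deep reinforcement learning rather than fixing a threshold):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-query cost of the cloud model (folds network latency,
# bandwidth, and cloud-compute fees into one scalar).
CLOUD_COST = 1.0
CONFIDENCE_THRESHOLD = 0.7

def onboard_model(frame):
    # Stand-in for a small on-robot DNN: returns (label, confidence).
    return "pedestrian", float(rng.uniform(0.4, 1.0))

def cloud_model(frame):
    # Stand-in for the larger, more accurate cloud model.
    return "pedestrian"

def classify(frame):
    # Offload only when the on-board model is unsure.
    label, conf = onboard_model(frame)
    if conf >= CONFIDENCE_THRESHOLD:
        return label, 0.0                   # on-board only: no systems cost
    return cloud_model(frame), CLOUD_COST   # uncertain: pay to query the cloud

frames = [rng.random((64, 64)) for _ in range(100)]
results = [classify(f) for f in frames]
queries = sum(1 for _, c in results if c > 0)
print(f"cloud queries: {queries}/100, total cost: {sum(c for _, c in results):.0f}")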

Fri, Jan 17 Christoffer Heckman CU Boulder Robotic Autonomy and Perception in Challenging Environments NVIDIA Auditorium 11:00AM
Abstract

Perception precedes action, in both the biological world as well as the technologies maturing today that will bring us autonomous cars, aerial vehicles, robotic arms and mobile platforms. The problem of probabilistic state estimation via sensor measurements takes on a variety of forms, resulting in information about our own motion as well as the structure of the world around us. In this talk, I will discuss some approaches that my research group has been developing that focus on estimating these quantities online and in real-time in extreme environments where dust, fog and other visually obscuring phenomena are widely present and when sensor calibration is altered or degraded over time. These approaches include new techniques in computer vision, visual-inertial SLAM, geometric reconstruction, nonlinear optimization, and even some sensor development. The methods I discuss have an application-specific focus to ground vehicles in the subterranean environment, but are also currently deployed in the agriculture, search and rescue, and industrial human-robot collaboration contexts.

Fri, Jan 24 Takumi Kamioka Honda ASIMO Motion planning of bipedal robots based on Divergent Component of Motion NVIDIA Auditorium 11:00AM
Abstract

Honda has been developing bipedal humanoid robots for more than 30 years. As part of this work, we have demonstrated several locomotion abilities of humanoid robots, such as robust walking, running, jumping, and quadrupedal walking. A key concept behind these abilities is the divergent component of motion (DCM). The DCM is a component of the robot's center-of-mass dynamics and must be controlled properly because of its divergent nature. We derived it via an eigenvalue decomposition, but equivalent quantities have been proposed independently by other researchers. In this talk, I will give the definition and properties of the DCM and show how it is applied to robot locomotion.
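
For readers unfamiliar with the DCM, here is a minimal sketch under the standard linear inverted pendulum (LIP) assumptions (an illustration of the published concept, not Honda's implementation): writing x for the center-of-mass position, xdot for its velocity, and omega = sqrt(g / z_c) for the LIP natural frequency, the DCM is xi = x + xdot / omega.

g, z_c = 9.81, 0.9               # gravity and an assumed constant CoM height
omega = (g / z_c) ** 0.5         # LIP natural frequency

def dcm(x, xdot):
    # Divergent component of motion: xi = x + xdot / omega.
    return x + xdot / omega

def simulate_dcm(xi0, p, T=1.0, dt=0.005):
    # Under LIP dynamics (xddot = omega^2 (x - p), with p the zero-moment
    # point), the DCM obeys the first-order unstable dynamics
    # xi_dot = omega (xi - p): the CoM converges toward the DCM, while the
    # DCM runs away from the ZMP unless p is actively steered.
    xi = xi0
    for _ in range(int(T / dt)):
        xi += omega * (xi - p) * dt
    return xi

# A DCM offset 5 cm from a fixed ZMP diverges roughly like exp(omega * T);
# a DCM placed exactly on the ZMP stays put. Walking controllers therefore
# steer the ZMP (or footsteps) to regulate the DCM rather than the CoM.
print(simulate_dcm(0.05, 0.0))   # ~1.3 m after 1 s: divergence
print(simulate_dcm(0.00, 0.0))   # 0.0: equilibrium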

Fri, Jan 31 Mark Yim UPenn Challenges to Developing Low Cost Robotic Systems NVIDIA Auditorium 11:00AM
Abstract

The promise of robot systems as initially imagined in science fiction is that of generic machines capable of doing a variety of tasks, often mimicking humans. It turns out doing that can be very expensive, which is keeping robotic systems from having impact in today's society. One of the challenges is overcoming the perception that the pursuit of low cost is "just engineering". This talk will present some general principles for designing low-cost systems while also presenting specific examples of novel devices ranging from mechatronic components (sensors and actuators) and robotic components (grippers) to full systems (flying systems). In each case we will present some practical examples of methods that can be applied today.

Fri, Feb 07 Leila Takayama UC Santa Cruz Designing More Effective Remote Presence Systems for Human Connection and Exploration NVIDIA Auditorium 11:00AM
Abstract

As people are speculating about what the future of robots in the workplace will look like, this could be a good time to realize that we already live in that future. We actually know a lot about what it’s like to telecommute to work every day via telepresence robot. Coming from a human-robot interaction perspective, I’ll present the research lessons learned from several years of fielding telepresence robot prototypes in companies and running controlled experiments in the lab to figure out how to better support remote collaboration between people. Building upon that work, I will share some recent research on professional robot operators, including service robot operators, drone pilots, and deep sea robot operators. Finally, I will share our current research on identifying needs and opportunities for designing robotic systems that better support the humans in the loop.

Fri, Feb 14 Aaron Ames Caltech Safety-Critical Control of Dynamic Robots NVIDIA Auditorium 11:00AM
Abstract

Science fiction has long promised a world of robotic possibilities: from humanoid robots in the home, to wearable robotic devices that restore and augment human capabilities, to swarms of autonomous robotic systems forming the backbone of the cities of the future, to robots enabling exploration of the cosmos. With the goal of ultimately achieving these capabilities on robotic systems, this talk will present a unified nonlinear control framework for realizing dynamic behaviors in an efficient, provably stable (via control Lyapunov functions) and safety-critical fashion (as guaranteed by control barrier functions). The application of these ideas will be demonstrated experimentally on a wide variety of robotic systems, including multi-robot systems with guaranteed safe behavior, bipedal and humanoid robots capable of achieving dynamic walking and running behaviors that display the hallmarks of natural human locomotion, and robotic assistive devices (including prostheses and exoskeletons) aimed at restoring mobility. The ideas presented will be framed in the broader context of seeking autonomy on robotic systems with the goal of getting robots into the real-world.
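
As a toy instance of the safety-critical machinery mentioned above (a sketch under assumed dynamics, not the speaker's implementation), consider a control barrier function filter for a one-dimensional single integrator, where the usual quadratic program collapses to a closed form:

# CBF safety filter for the single integrator xdot = u with safe set
# h(x) = x >= 0. The barrier condition hdot + alpha * h >= 0 reduces to
# u >= -alpha * x, so the minimally invasive quadratic program
# min (u - u_des)^2 s.t. u >= -alpha * x has the closed-form solution below.
ALPHA = 2.0  # class-K gain: larger permits faster approach to the boundary

def safety_filter(x, u_desired):
    # Return the control closest to u_desired that satisfies the CBF condition.
    return max(u_desired, -ALPHA * x)

# A nominal controller pushes hard toward (and past) the boundary; the
# filter clamps the command so the state decays toward x = 0 without
# ever crossing it.
x, dt = 1.0, 0.01
for _ in range(300):
    u = safety_filter(x, u_desired=-5.0)
    x += u * dt
print(f"final state x = {x:.4f} (stays nonnegative)")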

Fri, Feb 21 Sarah Dean UC Berkeley Safe and Robust Perception-Based Control NVIDIA Auditorium 11:00AM
Abstract

Machine learning provides a promising path to distill information from high dimensional sensors like cameras -- a fact that often serves as motivation for merging learning with control. This talk aims to provide rigorous guarantees for systems with such learned perception components in closed-loop. Our approach consists of characterizing uncertainty in perception and then designing a robust controller to account for these errors. We use a framework which handles uncertainties in an explicit way, allowing us to provide performance guarantees and illustrate how trade-offs arise from limitations of the training data. Throughout, I will motivate this work with the example of autonomous vehicles, including both simulated experiments and an implementation on a 1/10 scale autonomous car. Joint work with Aurelia Guy, Nikolai Matni, Ben Recht, Rohan Sinha, and Vickie Ye.

Fri, Feb 28 Dieter Fox UW/NVIDIA Toward robust manipulation in complex environments NVIDIA Auditorium 11:00AM
Abstract

Over the last few years, advances in deep learning and GPU-based computing have enabled significant progress in several areas of robotics, including visual recognition, real-time tracking, object manipulation, and learning-based control. This progress has turned applications such as autonomous driving and delivery tasks in warehouses, hospitals, or hotels into realistic application scenarios. However, robust manipulation in complex settings is still an open research problem. Various research efforts show promising results on individual pieces of the manipulation puzzle, including manipulator control, touch sensing, object pose detection, task and motion planning, and object pickup. In this talk, I will present our recent work in integrating such components into a complete manipulation system. Specifically, I will describe a mobile robot manipulator that moves through a kitchen, can open and close cabinet doors and drawers, detect and pickup objects, and move these objects to desired locations. Our baseline system is designed to be applicable in a wide variety of environments, only relying on 3D articulated models of the kitchen and the relevant objects. I will discuss the design choices behind our approach, the lessons we learned so far, and various research directions toward enabling more robust and general manipulation systems.

Fri, Mar 06 Laura Matloff Stanford Designing bioinspired aerial robots with feathered morphing wings NVIDIA Auditorium 11:00AM
Abstract

Birds are a source of design inspiration for aerial robots, as they can still outmaneuver current man-made fliers of similar size and weight. I study their ability to seamlessly morph their wings through large shape changes during gliding flight, and use biological measurements to drive mechanical design. I measure the wing feather and bone kinematics, investigate adjacent feather interactions, and examine feather microstructures to inform the design of PigeonBot, a biohybrid feathered robot. The feathered morphing wing design principles can also be adapted to other bird species, and even artificial feathers. This work was done in collaboration with Eric Chang, Amanda Stowers, Teresa Feo, Lindsie Jeffries, Sage Manier, and David Lentink.

Schedule Fall 2019

Date Guest Affiliation Title Location Time
Fri, Sep 27 Jaime Fisac Princeton University Mind the Gap: Bridging model-based and data-driven reasoning for safe human-centered robotics Skilling Auditorium 11:00AM
Abstract

Spurred by recent advances in perception and decision-making, robotic technologies are undergoing a historic expansion from factory floors to the public space. From autonomous driving and drone delivery to robotic devices in the home and workplace, robots are bound to play an increasingly central role in our everyday lives. However, the safe deployment of these systems in complex, human-populated spaces introduces new fundamental challenges. Whether safety-critical failures (e.g. collisions) can be avoided will depend not only on the decisions of the autonomous system, but also on the actions of human beings around it. Given the complexity of human behavior, how can robots reason through these interactions reliably enough to ensure safe operation in our homes and cities? In this talk I will present a vision for safe human-centered robotics that brings together control-theoretic safety analysis and Bayesian machine learning, enabling robots to actively monitor the “reality gap” between their models and the world while leveraging existing structure to ensure safety in spite of this gap. In particular, I will focus on how robots can reason game-theoretically about the mutual influence between their decisions and those of humans over time, strategically steering interaction towards safe outcomes despite the inevitably limited accuracy of human behavioral models. I will show some experimental results on quadrotor navigation around human pedestrians and simulation studies on autonomous driving. I will end with a broader look at the pressing need for assurances in human-centered intelligent systems beyond robotics, and how control-theoretic safety analysis can be incorporated into modern artificial intelligence, enabling strong synergies between learning and safety.

Fri, Oct 04 Monroe Kennedy Stanford University Modeling and Control for Robotic Assistants Skilling Auditorium 11:00AM
Abstract

As advances are made in robotic hardware, the complexity of the tasks these robots are capable of performing also increases. One goal of modern robotics is to introduce robotic platforms that require very little augmentation of their environments to be effective and robust. Therefore the challenge for the roboticist is to develop algorithms and control strategies that leverage the knowledge of the task while retaining the ability to be adaptive, adjusting to perturbations in the environment and task assumptions. These strategies will be discussed in the context of a wet-lab robotic assistant. Motivated by collaborations with a local pharmaceutical company, we will explore two relevant tasks. First, we will discuss a robot-assisted rapid experiment preparation system for research and development scientists. Second, we will discuss ongoing work for intelligent human-robot cooperative transport with limited communication. These tasks are the beginning of a suite of abilities for an assisting robotic platform that can be transferred to similar applications useful to a diverse set of end-users.

Fri, Oct 11 Adrien Gaidon Toyota Research Institute Self-Supervised Pseudo-Lidar Networks Skilling Auditorium 11:00AM
Abstract

Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception, especially in safety-critical contexts like Automated Driving. Nonetheless, recent progress in combining deep learning and geometry suggests that cameras may become a competitive source of reliable 3D information. In this talk, we will present our latest developments in self-supervised monocular depth and pose estimation for urban environments. Particularly, we show that with the proper network architecture, large-scale training, and computational power it is possible to outperform fully supervised methods while still operating in the much more challenging self-supervised setting, where the only source of input information is video sequences. Furthermore, we discuss how other sources of information (i.e. camera velocity, sparse LiDAR data, and semantic predictions) can be leveraged at training time to further improve pseudo-lidar accuracy and overcome some of the inherent limitations of self-supervised learning.

Fri, Oct 18 Kostas Alexis University of Nevada Reno Field-hardened Robotic Autonomy Skilling Auditorium 11:00AM
Abstract

This talk will present our contributions in the domain of field-hardened resilient robotic autonomy, specifically on multi-modal sensing-degraded GPS-denied localization and mapping, informative path planning, and robust control to facilitate reliable access, exploration, mapping, and search of challenging environments such as subterranean settings. The presented work will, among other things, emphasize fundamental developments taking place in the framework of the DARPA Subterranean Challenge and the research of the CERBERUS (https://www.subt-cerberus.org/) team, alongside work on nuclear site characterization and infrastructure inspection. Relevant field results from both active and abandoned underground mines as well as tunnels in the U.S. and in Switzerland will be presented. In addition, a selected set of prior works on long-term autonomy, including the world record for unmanned aircraft endurance, will be briefly overviewed. The talk will conclude with directions for future research to enable advanced autonomy and resilience, alongside the necessary connection to education and the potential for major broader impacts to the benefit of our economy and society.

Fri, Oct 25 Francesco Borrelli UC Berkeley Learning and Predictions in Autonomous Systems Skilling Auditorium 11:00AM
Abstract

Forecasts play an important role in autonomous and automated systems. Applications include transportation, energy, manufacturing and healthcare systems. Predictions of systems dynamics, human behavior and environment conditions can improve safety and performance of the resulting system. However, constraint satisfaction, performance guarantees and real-time computation are challenged by the growing complexity of the engineered system, the human/machine interaction and the uncertainty of the environment where the system operates. Our research over the past years has focused on predictive control design for autonomous systems performing iterative tasks. In this talk I will first provide an overview of the theory and tools that we have developed for the systematic design of learning predictive controllers. Then, I will focus on recent results on the use of data to efficiently formulate stochastic MPC problems which autonomously improve performance in iterative tasks. Throughout the talk I will focus on autonomous cars and solar power plants to motivate our research and show the benefits of the proposed techniques.
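
For background, the receding-horizon pattern underlying MPC can be sketched on a double integrator (toy dynamics and weights assumed here; the learning-based and stochastic formulations discussed in the talk go well beyond this): solve a finite-horizon problem, apply only the first input, then re-solve at the next step.

import numpy as np

# Minimal receding-horizon control of a double integrator.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # position/velocity dynamics, dt = 0.1 s
B = np.array([[0.005],
              [0.1]])
Q, R, N = np.eye(2), np.array([[0.1]]), 20

def first_step_gain(A, B, Q, R, N):
    # Backward Riccati recursion over the horizon; the MPC law applies only
    # the first planned input, which for this unconstrained problem is the
    # linear feedback u = -K x.
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# With no constraints and time-invariant dynamics, each re-solve yields the
# same first-step gain, so it is computed once and reused at every step.
K = first_step_gain(A, B, Q, R, N)
x = np.array([1.0, 0.0])             # start 1 m from the goal, at rest
for _ in range(100):                 # plan, apply the first input, repeat
    u = -K @ x
    x = A @ x + B @ u
print(f"state after 10 s: position {x[0]:.4f} m, velocity {x[1]:.4f} m/s")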

Fri, Nov 01 Tianshi Gao and Sam Abrahams Cruise Automation Scaled Learning for Autonomous Vehicles Skilling Auditorium 11:00AM
Abstract

The adoption of machine learning to solve problems in autonomous systems has become increasingly prevalent. Cruise is a developer of self-driving cars, currently operating a research and development fleet of over 100 all-electric autonomous vehicles in San Francisco. In this talk, we focus on the challenges involved with developing machine learning solutions in the autonomous driving domain. In addition to sharing lessons learned over the past few years of autonomous vehicle development, this discussion will include a review of some of the more challenging perception and prediction problems faced when operating driverless vehicles on the chaotic streets of San Francisco. Then, we share and highlight what it takes to make machine learning work in the wilderness at scale to meet these challenges.

Fri, Nov 08 Ricardo Sanfelice UC Santa Cruz Model Predictive Control of Hybrid Dynamical Systems Skilling Auditorium 11:00AM
Abstract

Hybrid systems model the behavior of dynamical systems in which the states can evolve continuously and, at isolated time instants, exhibit instantaneous jumps. Such systems arise when control algorithms that involve digital devices are applied to continuous-time systems, or when the intrinsic dynamics of the system itself has such hybrid behavior, for example, in mechanical systems with impacts, switching electrical circuits, spiking neurons, etc. Hybrid control may be used for improved performance and robustness properties compared to conventional control, and hybrid dynamics may be unavoidable due to the interplay between digital and analog components in a cyber-physical system. In this talk, we will introduce analysis and design tools for model predictive control (MPC) schemes for hybrid systems. We will present recently developed results on asymptotically stabilizing MPC for hybrid systems based on control Lyapunov functions. After a short overview of the state of the art on hybrid MPC, and a brief introduction to a powerful hybrid systems framework, we will present key concepts and analysis tools. After that, we will lay out the theoretical foundations of a general MPC framework for hybrid systems, with guaranteed stability and feasibility. In particular, we will characterize invariance properties of the feasible set and the terminal constraint sets, continuity of the value function, and use these results to establish asymptotic stability of the hybrid closed-loop system. To conclude, we will illustrate the framework in several applications and summarize some of the open problems, in particular, those related to computational issues.
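
To fix ideas on the flow/jump structure, the classic bouncing ball (a textbook example chosen for illustration, not drawn from the talk) can be simulated as a hybrid system: the state flows under continuous dynamics while in the flow set C and jumps instantaneously when in the jump set D.

# Classic bouncing-ball hybrid system:
#   flow set C: h >= 0            flow map:  hdot = v, vdot = -g
#   jump set D: h <= 0 and v < 0  jump map:  v+ = -e * v (restitution)
g, e, dt = 9.81, 0.8, 1e-3

def simulate(h, v, T=3.0):
    t, jumps = 0.0, 0
    while t < T:
        if h <= 0.0 and v < 0.0:   # state in the jump set D
            h, v = 0.0, -e * v     # instantaneous jump: impact with energy loss
            jumps += 1             # (ordinary time t does not advance)
        else:                      # state in the flow set C
            h += v * dt            # continuous flow, explicit Euler step
            v -= g * dt
            t += dt
    return h, v, jumps

h, v, jumps = simulate(h=1.0, v=0.0)
print(f"after 3 s of flow: h = {h:.3f} m, v = {v:.3f} m/s, {jumps} jumps")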

Fri, Nov 15 BARS 2019 UC Berkeley and Stanford Bay Area Robotics Symposium International House 8:30AM
Abstract

The 2019 Bay Area Robotics Symposium aims to bring together roboticists from the Bay Area. The program will consist of a mix of faculty, student and industry presentations.

Fri, Nov 22 Hannah Stuart UC Berkeley Hands in the Real World: Grasping Outside the Lab Skilling Auditorium 11:00AM
Abstract

Robots face a rapidly expanding range of potential applications beyond the lab, from remote exploration and search-and-rescue to household assistance. The focus of physical interaction is typically delegated to end-effectors, or hands, as these machines perform manual tasks. Despite decades of dedicated research, effective deployment of robot hands in the real world is still limited to a few examples, other than the use of rigid parallel-jaw grippers. In this presentation, I will review articulated hands that found application in the field, focusing primarily on ocean exploration and drawing examples from recent developments in the Embodied Dexterity Group. I will also introduce preliminary findings regarding an assistive mitten designed to improve the grasping strength of people with weakened hands. Similarities between the design of robot hands and wearable technologies for the human hand will be discussed.

Fri, Dec 06 Chelsea Finn Stanford University The Next Generation of Robot Learning Skilling Auditorium 11:00AM
Abstract

For robots to be successful in unconstrained environments, they must be able to perform tasks in a wide variety of situations — they must be able to generalize. We’ve seen impressive results from machine learning systems that generalize to broad real-world datasets for a range of problems. Hence, machine learning provides a powerful tool for robots to do the same. However, in sharp contrast, machine learning methods for robotics often generalize narrowly within a single laboratory environment. Why the mismatch? In this talk, I’ll discuss the challenges that face robots, in contrast to standard machine learning problem settings, and how we can rethink both our robot learning algorithms and our data sources in a way that enables robots to generalize broadly across tasks, across environments, and even across robot platforms.

Schedule Spring 2019

Date Guest Affiliation Title Location Time
Fri, Apr 05 Rick Zhang Zoox Practical Challenges of Urban Autonomous Driving McCullough 115 11:00AM
Abstract

Autonomous driving holds great promise for society in terms of improving road safety, increasing accessibility, and increasing productivity. Despite rapid technological advances in autonomous driving over the past decade, significant challenges still remain. In this talk, I will examine several practical challenges of autonomous driving in dense urban environments, with an emphasis on challenges involving human-robot interactions. I will talk about how Zoox thinks about these challenges and tackles them on multiple levels throughout the AI stack (Perception, Prediction, Planning, and Simulation). Finally, I will share my perspectives and outlook on the future of autonomous mobility.

Fri, Apr 12 Ross Knepper Cornell University Formalizing Teamwork in Human-Robot Interaction McCullough 115 11:00AM
Abstract

Robots out in the world today work for people but not with people. Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, robots need the ability to acquire a new mix of functional and social skills. Working with people requires a shared understanding of the task, capabilities, intentions, and background knowledge. For robots to act jointly as part of a team with people, they must engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans. Often, much of this information is conveyed through implicit communication. In this talk, I formalize components of teamwork involving collaboration, communication, and representation. I illustrate how these concepts interact in the application of social navigation, which I argue is a first-class example of teamwork. In this setting, participants must avoid collision by legibly conveying intended passing sides via nonverbal cues like path shape. A topological representation using the braid groups enables the robot to reason about a small enumerable set of passing outcomes. I show how implicit communication of topological group plans achieves rapid convergence to a group consensus, and how a robot in the group can deliberately influence the ultimate outcome to maximize joint performance, yielding pedestrian comfort with the robot.

Fri, Apr 19 David Lentink Stanford University Avian Inspired Design McCullough 115 11:00AM
Abstract

Many organisms fly in order to survive and reproduce. My lab focuses on understanding bird flight to improve flying robots—because birds fly further, longer, and more reliably in complex visual and wind environments. I use this multidisciplinary lens that integrates biomechanics, aerodynamics, and robotics to advance our understanding of the evolution of flight more generally across birds, bats, insects, and autorotating seeds. The development of flying organisms as an individual and their evolution as a species are shaped by the physical interaction between organism and surrounding air. The organism’s architecture is tuned for propelling itself and controlling its motion. Flying animals and plants maximize performance by generating and manipulating vortices. These vortices are created close to the body as it is driven by the action of muscles or gravity, then are ‘shed’ to form a wake (a trackway left behind in the fluid). I study how the organism’s architecture is tuned to utilize these and other aeromechanical principles to compare the function of bird wings to that of bat, insect, and maple seed wings. The experimental approaches range from making robotic models to training birds to fly in a custom-designed wind tunnel as well as in visual flight arenas—and inventing methods to 3D scan birds and measure the aerodynamic force they generate—nonintrusively—with a novel aerodynamic force platform. The studies reveal that animals and plants have converged upon the same solution for generating high lift: A strong vortex that runs parallel to the leading edge of the wing, which it sucks upward. Why this vortex remains stably attached to flapping animal and spinning plant wings is elucidated and linked to kinematics and wing morphology. While wing morphology is quite rigid in insects and maple seeds, it is extremely fluid in birds. I will show how such ‘wing morphing’ significantly expands the performance envelope of birds during flight, and will dissect the mechanisms that enable birds to morph better than any aircraft can. Finally, I will show how these findings have inspired my students to design new flapping and morphing aerial robots.

Fri, Apr 26 Matei Ciocarlie Columbia University How to Make, Sense, and Make Sense of Contact in Robotic Manipulation McCullough 115 11:00AM
Abstract

Reach into your pocket, grab one object (phone) from among others (keys, wallet), and take it out. Congratulations, you have achieved an impressive feat of motor control, one that we cannot replicate in artificial mechanisms. What was the key to success: the mechanical structure of the hand, the rich tactile and proprioceptive data it can collect, analysis and planning in the brain, or perhaps all of these? In this talk, I will present our work advancing each of these areas: analytical models of grasp stability (with realistic contact and non-convex energy dissipation constraints), design and use of sensors (tactile and proprioceptive) for contact information, and hand posture subspaces (for mechanism design optimization and teleoperation). These are stepping stones towards motor skills which rely on transient contact with complex environments (such as dexterous manipulation), motivated by applications as diverse as logistics, manufacturing, disaster response and space robots.

Fri, May 03 Nora Ayanian USC Crossing the Reality Gap: Coordinating Multirobot Systems in The Physical World McCullough 115 11:00AM
Abstract

Using a group of robots in place of a single robot to accomplish a complex task has many benefits such as redundancy, robustness, faster completion times, and the ability to be everywhere at once. The applications of such systems are wide and varied: Imagine teams of robots containing forest fires, filling urban skies with package deliveries, or searching for survivors after a natural disaster. These applications have been motivating multirobot research for years, but why aren’t they happening yet? These missions demand different roles for robots, necessitating a strategy for coordinated autonomy while respecting any constraints the particular environment or other team members may impose. As a result, current solutions for multirobot systems are often task- and environment-specific, requiring hand-tuning and an expert in the loop. They also require solutions that can manage complexity as the number of robots increases. Such inflexibility in deployment, reduced situational awareness, computational complexity, and need for multiple operators significantly limits widespread use of multirobot systems. In this talk I will present algorithmic strategies that address the main challenges that preclude the widespread adoption of multirobot systems. In particular, I will focus on strategies we have developed that automatically synthesize policies that are broadly applicable to navigating groups of robots in complex environments, from nearly real-time solutions for coordinating hundreds of robots to real-time collision avoidance. I will conclude with experimental results that validate our strategies using our CrazySwarm testbed -- a 49-UAV platform for testing multi-robot algorithms at a large scale.

Fri, May 10 Anirudha Majumdar Princeton University Safety Guarantees with Perception and Learning in the Loop McCullough 115 11:00AM
Abstract

Imagine an unmanned aerial vehicle (UAV) that successfully navigates a thousand different obstacle environments or a robotic manipulator that successfully grasps a million objects in our dataset. How likely are these systems to succeed in a novel (i.e., previously unseen) environment or on a novel object? How can we learn control policies that provably generalize well to environments or objects that our robot has not previously encountered? In this talk, I will present approaches for learning control policies for robotic systems that provably generalize well with high probability to novel environments. The key technical idea behind our approach is to leverage tools from generalization theory (e.g., PAC-Bayes theory) in machine learning and the theory of information bottlenecks from information theory. We apply our techniques on examples including UAV navigation and grasping in order to demonstrate the ability to provide strong generalization guarantees on controllers for robotic systems with continuous state and action spaces, complicated (e.g., nonlinear) dynamics, and rich sensory inputs (e.g., depth measurements).
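
For context, a representative PAC-Bayes generalization bound of the kind such guarantees build on (standard McAllester/Maurer form; not necessarily the exact bound used in the talk): with probability at least 1 − δ over N sampled training environments, every posterior distribution Q over policies (with prior P fixed before training) satisfies

```latex
% Expected cost C on novel environments vs. empirical cost \hat{C}_N:
C(Q) \;\le\; \hat{C}_N(Q)
  \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\frac{2\sqrt{N}}{\delta}}{2N}}
```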

Fri, May 17 Davide Scaramuzza University of Zurich, ETH Autonomous, Agile, Vision-controlled Drones: from Frame-based to Event-based Vision McCullough 115 11:00AM
Abstract

Autonomous quadrotors will soon play a major role in search-and-rescue and remote-inspection missions, where a fast response is crucial. Quadrotors have the potential to navigate quickly through unstructured environments, enter and exit buildings through narrow gaps, and fly through collapsed buildings. However, their speed and maneuverability are still far from those of birds. Indeed, agile navigation through unknown, indoor environments poses a number of challenges for robotics research in terms of perception, state estimation, planning, and control. In this talk, I will show that tightly-coupled perception and control is crucial in order to plan trajectories that improve the quality of perception. Also, I will talk about our recent results on event-based vision to enable low-latency sensorimotor control and navigation in both low light and dynamic environments, where traditional vision sensors fail.

Fri, May 24 Ben Recht UC Berkeley The Merits of Models in Continuous Reinforcement Learning McCullough 115 11:00AM
Abstract

Classical control theory and machine learning have similar goals: acquire data about the environment, perform a prediction, and use that prediction to impact the world. However, the approaches they use are frequently at odds. Controls is the theory of designing complex actions from well-specified models, while machine learning makes intricate, model-free predictions from data alone. For contemporary autonomous systems, some sort of hybrid may be essential in order to fuse and process the vast amounts of recorded sensor data into timely, agile, and safe decisions. In this talk, I will examine the relative merits of model-based and model-free methods in data-driven control problems. I will discuss quantitative estimates on the number of measurements required to achieve high-quality control performance and statistical techniques that can distinguish the relative power of different methods. In particular, I will show how model-free methods are considerably less sample efficient than their model-based counterparts. I will also describe how notions of robustness, safety, constraint satisfaction, and exploration can be transparently incorporated in model-based methods. I will conclude with a discussion of possible positive roles for model-free methods in contemporary autonomous systems that may mitigate their high sample complexity and lack of reliability and versatility.
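
A minimal sketch of the model-based pipeline the talk contrasts with model-free methods, assuming a toy linear system and our own variable names: identify (A, B) by least squares from observed transitions, then apply certainty-equivalent LQR on the estimated model.

```python
# Hedged illustration (ours, not the speaker's code): least-squares system
# identification followed by certainty-equivalent LQR.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
n, m, T = 3, 1, 200
A_true = np.array([[1.0, 0.1, 0.0], [0.0, 1.0, 0.1], [0.0, 0.0, 1.0]])
B_true = np.array([[0.0], [0.0], [0.1]])

# Collect one rollout with exploratory (random) inputs.
X = np.zeros((T + 1, n))
U = rng.normal(size=(T, m))
for t in range(T):
    X[t + 1] = A_true @ X[t] + B_true @ U[t] + 0.01 * rng.normal(size=n)

# System ID: solve Z @ theta = X[1:], so [A B] = theta.T.
Z = np.hstack([X[:-1], U])                        # (T, n+m)
theta = np.linalg.lstsq(Z, X[1:], rcond=None)[0]  # (n+m, n)
A_hat, B_hat = theta.T[:, :n], theta.T[:, n:]

# Certainty-equivalent LQR on the identified model.
Q, R = np.eye(n), np.eye(m)
P = solve_discrete_are(A_hat, B_hat, Q, R)
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
print("LQR gain estimated from ~200 samples:\n", K)
```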

Fri, May 31 Roberto Calandra Facebook AI No Title McCullough 115 11:00AM
Abstract

Schedule Winter 2019

Date Guest Affiliation Title Location Time
Fri, Jan 11 Dangxiao Wang Beihang University Paradigm shift of haptic human-machine interaction: Historical perspective and our practice McCullough 115 11:00AM
Abstract

Haptics is a fundamental channel when we interact with the physical world. However, it is underutilized when humans interact with machines such as computers and robots. In this talk, I will start from the biological motivation of studying haptic human-machine interaction (HMI), and then I will introduce the paradigm shift of haptic HMI in the past 30 years, which includes desktop haptics in the personal computer era, surface haptics in the mobile computer era, and wearable haptics in the virtual reality era. Specifically, I will try to balance the research performed in our group with that of the broader haptics community. Finally, I will share my perspective on future research challenges in the haptic HMI field.

Fri, Jan 18 Sylvia Herbert UC Berkeley Reachability in Robotics McCullough 115 11:00AM
Abstract

Motion planning is an extremely well-studied problem in the robotics community, yet existing work largely falls into one of two categories: computationally efficient but with few if any safety guarantees, or able to give stronger guarantees but at high computational cost. In this talk I will give an overview of some of the techniques used in the Berkeley Hybrid Systems lab to balance safety with computational complexity in analyzing control systems. I will show these methods applied to a quadrotor in a motion capture room planning in real time to navigate around a priori unknown obstacles, as well as to navigation around a human pedestrian.
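
For readers new to reachability, one standard Hamilton–Jacobi formulation of the backward reachable tube (notation ours; the talk's exact machinery may differ) is the variational inequality below.

```latex
% For a target set {x : l(x) <= 0}, the value function V solves
\min\!\left\{ \frac{\partial V}{\partial t}
   + \min_{u \in \mathcal{U}} \max_{d \in \mathcal{D}}
     \nabla_x V(x,t) \cdot f(x,u,d),\;
   l(x) - V(x,t) \right\} = 0, \qquad V(x,T) = l(x)
% The backward reachable tube is {x : V(x,t) <= 0}. Whether u minimizes and
% the disturbance d maximizes (or vice versa) depends on whether the target
% set is to be reached or avoided.
```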

Fri, Jan 25 Sean Anderson Boston University Sub-sampling approaches to mapping and imaging McCullough 115 11:00AM
Abstract

Sub-sampling approaches can greatly reduce the amount of data that need to be gathered and stored when exploring an unknown signal or environment. When combined with optimization algorithms, accurate reconstructions from the sub-sampled data can be generated, even when acquiring far less than Nyquist-Shannon theory requires. In this talk we explore the use of such schemes in two disparate application domains. The first is in robotic mapping where sub-sampling followed by reconstruction can greatly reduce the number of measurements needed to produce accurate maps. The second is in nanometer-scale imaging using an atomic force microscope where sub-sampling can significantly increase the imaging rate for a given image resolution.
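
As a hedged illustration of the sub-sample-then-reconstruct idea (a toy example of ours, not the speaker's pipeline), the sketch below recovers a sparse signal from far fewer random measurements than Nyquist-Shannon would suggest, using ISTA to solve the LASSO problem min_x 0.5·||y − Ax||² + λ·||x||₁.

```python
# Toy compressed-sensing reconstruction via ISTA (iterative soft-thresholding).
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                  # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sub-sampling operator
y = A @ x_true                             # far fewer samples than n

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # step from the spectral norm
x = np.zeros(n)
for _ in range(500):
    g = x - step * A.T @ (A @ x - y)                          # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```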

Fri, Feb 01 Jeffrey Lipton University of Washington/MIT Fabrication via Mobile Robotics and Digital Manufacturing McCullough 115 11:00AM
Abstract

Each new generation of robotic fabrication tools has transformed manufacturing, enabling greater complexity and customization of the world around us. With the recent developments in additive manufacturing and mobile robots, several pressing questions have emerged. How can we use computational methods to expand the set of achievable material properties? How can we use mobile robots to do manufacturing? Finally, how can we use the answers from these questions to make robots more capable? In this talk, I will provide answers to these questions. I will demonstrate how we can use generative processes to make deformable cellular materials and how mobile manufacturing robots can perform carpentry tasks. Deformable cellular materials enable open, close, stochastic and ordered foams. These are useful in actuation, protection and deployable structures for robots. Mobile robotic fabrication brings robots out of the factory and onto the job site, enables scalable manufacturing tools, and expands the set of programmable manufacturing processes. Together these two methods will enable the next generation of custom manufacturing.

Fri, Feb 08 Alexandre Bayen UC Berkeley Lagrangian control at large and local scales in mixed autonomy traffic flow: optimization and deep-RL approaches McCullough 115 11:00AM
Abstract

This talk investigates Lagrangian (mobile) control of traffic flow at large scale (city-wide, with fluid flow models) and local scale (vehicular level). For large scale inference and control, fluid flow models over networks are considered. Algorithms relying on convex optimization are presented for fusion of static and mobile (Lagrangian) traffic information data. Repeated game theory is used to characterize the stability such flows under selfish information patterns (each flow attempting to optimize their latency). Convergence to Nash equilibria of the solutions is presented, leading to control strategies to optimize the network efficiency. At local scale, the question of how self-driving vehicles will change traffic flow patterns is investigated. We describe approaches based on deep reinforcement learning presented in the context of enabling mixed-autonomy mobility. The talk explores the gradual and complex integration of automated vehicles into the existing traffic system. We present the potential impact of a small fraction of automated vehicles on low-level traffic flow dynamics, using novel techniques in model-free deep reinforcement learning, in which the automated vehicles act as mobile (Lagrangian) controllers to traffic flow. Illustrative examples will be presented in the context of a new open-source computational platform called FLOW, which integrates state of the art microsimulation tools with deep-RL libraries on AWS EC2. Interesting behavior of mixed autonomy traffic will be revealed in the context of emergent behavior of traffic: https://flow-project.github.io/ inference, control, and game-theoretic algorithms developed to improve traffic flow in transportation networks. The talk will investigate various factors that intervene in decisions made by travelers in large scale urban environments. We will discuss disruptions in demand due to the rapid expansion of the use of “selfish routing” apps, and how they affect urban planning. These disruptions cause congestion and make traditional approaches of traffic management less effective. Game theoretic approaches to demand modeling will be presented. These models encompass heterogeneous users (some using routing information, some not) that share the same network and compete for the same commodity (capacity). Results will be presented for static loading, based on Nash-Stackelberg games, and in the context of repeated games, to account for the fact that routing algorithms learn the dynamics of the system over time when users change their behavior. The talk will present some potential remedies envisioned by planners, which range from incentivization to regulation.

Fri, Feb 15 Mark Mueller UC Berkeley High-Performance Aerial Robotics McCullough 115 11:00AM
Abstract

We present some of our recent results on high-performance aerial robots. First, we present two novel mechanical vehicle configurations: the first is aimed at creating an aerial robot capable of withstanding external disturbances, and the second exploits unactuated internal degrees of freedom for passive shape-shifting, resulting in a simple, agile vehicle capable of squeezing through very narrow spatial gaps. Next, we will discuss results on vibration-based fault detection, exploiting only an onboard IMU to detect and isolate motor faults through vibrations, even if the frequency of the motors is above the Nyquist sampling frequency. Finally, two results pertaining to energy efficiency are presented: one a mechanical modification, and the second an algorithmic method for online adaptation of a vehicle's cruise speed.
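
The claim that motor faults remain detectable above the Nyquist rate rests on the textbook aliasing identity (our gloss, not the speaker's derivation):

```latex
% A vibration tone at frequency f, sampled at rate f_s, appears at
f_{\mathrm{alias}} \;=\; \bigl|\, f - k f_s \,\bigr|,
\qquad k = \operatorname{round}(f / f_s)
% so a fault signature at a known motor frequency above f_s/2 still maps to
% a predictable, folded location in the sampled IMU spectrum.
```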

Fri, Feb 22 Ludovic Righetti NYU Fast computation of robust multi-contact behaviors McCullough 115 11:00AM
Abstract

Interaction with objects and environments is at the core of any manipulation or locomotion behavior, yet, robots still mostly try to avoid physical interaction with their environment at all costs. This is in stark contrast with humans or animals, that not only constantly interact with their environment from the day they are born but also exploit this interaction to improve their skills. One reason that prevents robots from seamlessly interacting with the world is that reasoning about contacts is a computationally daunting problem. In this presentation, I will present our efforts to break down this complexity and find algorithms that are computationally efficient yet generic enough to be applied to any robot. I will also discuss how these approaches can be rendered robust to unknown and changing environments and how we can leverage machine learning to significantly improve computation efficiency.

Fri, Mar 01 Melonee Wise Fetch Robotics Taking robots to the cloud and other insights on the path to market McCullough 115 11:00AM
Abstract

The robotics industry has come a long way from the industrial robots that have long been in manufacturing environments. Now, robots that can safely work alongside people are used in all sorts of work environments. A new generation of robotics technology is emerging that brings to factory floors and warehouses the kind of speed, agility and incremental cost advantages that cloud computing has brought to IT. Collaborative, autonomous and cloud-based robotics systems don’t require changes to the facility, nor do they require installation or integration of IT hardware and software. Fetch Robotics CEO Melonee Wise will discuss the evolution of robotics to the cloud, and how the company has successfully brought its robotics technology to market.

Fri, Mar 08 Jeff Hancock Stanford University Conversation with a Robot McCullough 115 11:00AM
Abstract

Jeff Hancock is founding director of the Stanford Social Media Lab and is a Professor in the Department of Communication at Stanford University. Professor Hancock and his group work on understanding psychological and interpersonal processes in social media. The team specializes in using computational linguistics and experiments to understand how the words we use can reveal psychological and social dynamics, such as deception and trust, emotional dynamics, intimacy and relationships, and social support. Recently, Professor Hancock has been working on understanding the mental models people have about algorithms in social media, as well as working on the ethical issues associated with computational social science.

Fri, Mar 15 Marin Kobilarov Johns Hopkins University No Title McCullough 115 11:00AM
Abstract

This talk will focus on computing robust control policies for autonomous agents performing a given task that can be modeled using a performance function and constraints. We will first consider a strategy for computing guarantees on future policy execution under uncertainty, based on probably-approximately-correct (PAC) high-confidence performance bounds. The bounds will then be used to optimize a given policy based on a high-fidelity learned stochastic model of the agent and its environment. Finally, we will consider initial efforts towards transferring such robust policies to physical agents such as aerial and ground vehicles navigating around obstacles.

Schedule Fall 2018

Date Guest Affiliation Title Location Time
Fri, Sep 28 Wojciech Zaremba OpenAI Learning Dexterity McCullough 115 11:00AM
Abstract

We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity. Our system, called Dactyl, is trained entirely in simulation and transfers its knowledge to reality, adapting to real-world physics using techniques we’ve been working on for the past year. Dactyl learns from scratch using the same general-purpose reinforcement learning algorithm and code as OpenAI Five. Our results show that it’s possible to train agents in simulation and have them solve real-world tasks, without physically-accurate modeling of the world.

Fri, Oct 05 Naira Hovakimyan UIUC L1 Adaptive Control and Its Transition to Practice McCullough 115 11:00AM
Abstract

The history of adaptive control systems dates back to the early 1950s, when the aeronautical community was struggling to advance aircraft speeds to higher Mach numbers. In November of 1967, the X-15 launched on what was planned to be a routine research flight to evaluate a boost guidance system, but it went into a spin and eventually broke up at 65,000 feet, killing the pilot Michael Adams. It was later found that the onboard adaptive control system was to blame for this incident. Exactly thirty years later, fueled by advances in the theory of nonlinear control, the Air Force successfully flight tested the unmanned, unstable, tailless X-36 aircraft with an onboard adaptive flight control system. This was a landmark achievement that dispelled some of the misgivings that had arisen from the X-15 crash in 1967. Since then, numerous flight tests of Joint Direct Attack Munitions (JDAM) weapons retrofitted with an adaptive element have met with great success and have proven the benefits of adaptation in the presence of component failures and aerodynamic uncertainties. However, the major challenge of stability/robustness assessment of adaptive systems is still resolved by testing the closed-loop system for all possible variations of uncertainties in Monte Carlo simulations, the cost of which increases with the growing complexity of the systems. This talk will give an overview of the limitations inherent to conventional adaptive controllers and will introduce the audience to L1 adaptive control theory, whose architectures have guaranteed robustness in the presence of fast adaptation. Various applications, including flight tests of a subscale commercial jet, will be discussed during the presentation to demonstrate the tools and the concepts. With its key feature of decoupling adaptation from robustness, L1 adaptive control theory has facilitated new developments in the areas of event-driven adaptation and networked control systems. It has been evaluated on a Learjet in 2015 and 2017, with five people on board and more than 20 hours of flight time each time, and on an F-16 in 2016 with two pilots on board.

Fri, Oct 12 Michael Yip UCSD Learning Model-free Representations for Fast, Adaptive Robot Control and Motion Planning McCullough 115 11:00AM
Abstract

Robot manipulation has traditionally been a problem of solving model-based control and motion planning in structured environments. This has made robots very well suited for a finite set of repeating tasks and trajectories, such as on a manufacturing assembly line. However, when considering more complex and partially observable environments, and as more complex, compliant, and safe robots are proposed, the outcomes of robot actions become more and more uncertain, and model-based methods tend to fail or produce unexpected results. Erratic behavior makes robots dangerous in human environments, and thus new approaches must be taken. In this talk, I will discuss our research in learning model-free representations that enable robots to learn and adapt their control to new environments, plan, and execute trajectories. These representations are trained using a variety of local and global model-free learning strategies, and when implemented are significantly faster, more consistent, and more power- and memory-efficient than conventional control and trajectory planners.

Fri, Oct 19 Yasser Shoukry UMD Attack-Resilient and Verifiable Autonomous Systems: A Satisfiability Modulo Convex Programming Approach McCullough 115 11:00AM
Abstract

Autonomous systems in general, and self-driving cars in particular, hold the promise to be one of the most disruptive technologies emerging in recent years. However, the security and resilience of these systems, if not proactively addressed, will pose a significant threat, potentially impairing our relationship with these technologies and leading society to reject adopting them permanently. In this talk, I will focus on three problems in the context of designing resilient and verifiable autonomous systems: (i) the design of resilient state estimators in the presence of false data injection attacks, (ii) the design of resilient multi-robot motion planning in the presence of Denial-of-Service (DoS) attacks, and (iii) the formal verification of neural network-based controllers. I will argue that, although of a heterogeneous nature, all these problems have something in common: they can be formulated as the feasibility problem for a type of formula called monotone Satisfiability Modulo Convex programming (or SMC for short). I will then present a new SMC decision procedure that uses a lazy combination of Boolean satisfiability solving and convex programming to provide a satisfying assignment or determine that the formula is unsatisfiable. I will finish by showing, through multiple experimental results, the real-time and resilience performance of the proposed algorithms.
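
A minimal sketch of the lazy SMC loop described above, assuming a toy convex feasibility oracle of our own; a real solver would replace the enumeration with a SAT solver that learns a blocking clause on each infeasible convex check.

```python
# Hedged sketch of a lazy SAT + convex-feasibility loop (not the speaker's
# implementation). The convex check stands in for an actual convex program.
from itertools import product

def smc_solve(n_bools, convex_feasible):
    """Accept the first Boolean assignment whose induced convex program is
    feasible. A real SMC solver uses a SAT solver over the formula's Boolean
    structure and adds an infeasibility (blocking) clause on each conflict."""
    for bits in product([False, True], repeat=n_bools):
        if convex_feasible(bits):   # convex program induced by this assignment
            return bits             # satisfying assignment of the SMC formula
    return None                     # formula is unsatisfiable

# Toy oracle: the assignment is 'feasible' iff at least two modes are active.
print(smc_solve(3, lambda bits: sum(bits) >= 2))
```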

Fri, Oct 26 Jerry Kaplan Stanford Law School The Devil Made Me Do it: Computational Ethics for Robots McCullough 115 11:00AM
Abstract

Before we set robots and other autonomous systems loose in the world, we need to ensure that they will adhere to basic moral principles and human social conventions. This is easier said than done. Science fiction writer Isaac Asimov famously proposed three laws of Robotics: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Less well known was the purpose of Asimov’s proposal: to point out that these simple rules are woefully inadequate as design criteria for building ethical robots. So if his laws aren’t sufficient, what is? Join me as we window shop through two millennia of moral theories to find a suitable foundation for the emerging discipline of “Computational Ethics” — and explore the darkly hilarious ways these theories often fail in practice!

Fri, Nov 02 Wenzhen Yuan CMU/Stanford Making Sense of the Physical World with High-resolution Tactile Sensing McCullough 115 11:00AM
Abstract

With the rapid progress in robotics, people expect robots to be able to accomplish a wide variety of tasks in the real world, such as working in factories, performing household chores, and caring for the elderly. However, it is still very difficult for robots to act in the physical world. A major challenge lies in the lack of adequate tactile sensing. Progress requires advances in the sensing hardware, but also requires advances in the software that can exploit the tactile signals generated when the robot touches an object. The sensor we use is a vision-based tactile sensor called GelSight, which measures the geometry and traction field of the contact surface. For interpreting the high-resolution tactile signal, we utilize both traditional statistical models and deep neural networks. I will describe research on two kinds of tasks: exploration and manipulation. For exploration, I use active touch to estimate the physical properties of the objects. The work has included learning the basic properties (e.g., hardness) of artificial objects, as well as estimating the general properties of natural objects via autonomous tactile exploration. For manipulation, I study the robot’s ability to detect slip or incipient slip with tactile sensing during grasping. My research helps robots to better understand and flexibly interact with the physical world.

Thu, Nov 08 Sumeet Singh Stanford Control-Theoretic Regularization for Nonlinear Dynamical Systems Learning 300-300 11:00AM
Abstract

When it works, model-based Reinforcement Learning (RL) typically offers major improvements in sample efficiency in comparison to state-of-the-art RL methods such as Policy Gradients that do not explicitly estimate the underlying dynamical system. Yet, all too often, when standard supervised learning is applied to model complex dynamics, the resulting controllers do not perform on par with model-free RL methods in the limit of increasing sample size, due to compounding errors across long time horizons. In this talk, I will present novel algorithmic tools leveraging Lyapunov-based analysis and semi-infinite convex programming to derive a control-theoretic regularizer for dynamics fitting, rooted in the notion of trajectory stabilizability. The resulting semi-supervised algorithm yields dynamics models that jointly balance regression performance and stabilizability, ultimately resulting in generated trajectories for the robot that are notably easier to track. Evaluation on a simulated quadrotor model illustrates the vastly improved trajectory generation and tracking performance over traditional regression techniques, especially in the regime of small demonstration datasets. I will conclude with a brief discussion of some open questions within this field of control-theoretic learning.

Thu, Nov 08 Kirby Witte CMU Assistance of Walking and Running Using Wearable Robots 300-300 11:00AM
Abstract

We are familiar with wearable robots through comic books and movies. Exoskeletons give heroes such as Iron Man enhanced strength, speed, and ability. While we are far from reaching superhuman abilities in reality, exoskeletons are hitting the consumer market as tools for rehabilitation and assisting assembly line workers. Exoskeleton research has progressed significantly in the last several years, but it is still difficult to determine how exoskeleton assistance should be adapted to fit the needs of individuals. I present an approach to this problem that utilizes a highly adaptable experimental setup called an exoskeleton emulator system to rapidly explore exoskeleton design and control strategies. I will introduce human-in-the-loop optimization, which is used to select the optimal settings for each user. I will also present the latest results for exoskeleton-assisted walking and running using these tools and my thoughts on the future of exoskeleton technologies.

Fri, Nov 16 Aviv Tamar Technion Learning Representations for Planning McCullough 115 11:00AM
Abstract

How can we build autonomous robots that operate in unstructured and dynamic environments such as homes or hospitals? This problem has been investigated under several disciplines, including planning (motion planning, task planning, etc.), and reinforcement learning. While both of these fields have witnessed tremendous progress, each has fundamental drawbacks when it comes to autonomous robots. In general, planning approaches require substantial manual engineering in specifying a model for the domain, while RL is data-hungry and does not generalize beyond the tasks seen during training. In this talk, we present several studies that aim to mitigate these shortcomings by combining ideas from both planning and learning. We start by introducing value iteration networks, a type of differentiable planner that can be used within model-free RL to obtain better generalization. Next, we consider a practical robotic assembly problem, and show that motion planning, based on readily available CAD data, can be combined with RL to quickly learn policies for assembling tight fitting objects. Then, we show how deep learning can be used to improve classical planning, by learning powerful image-based heuristic functions for A* search. We conclude with our recent work on learning to imagine goal-directed visual plans. Motivated by humans’ remarkable capability to predict and plan complex manipulations of objects, we develop a data-driven method that learns to ‘imagine’ a plausible sequence of observations that transition a dynamical system from its current configuration to a desired goal state. Key to our method is Causal InfoGAN, a deep generative model that can learn features that are compatible with strong planning algorithms. We demonstrate our approach on learning to imagine and execute robotic rope manipulation.
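
To make the value iteration network idea concrete, here is plain (non-differentiable) value iteration on a toy grid world of our own construction; a VIN embeds this Bellman recursion as a stack of convolution-like, differentiable layers so the planner can be trained end-to-end.

```python
# Toy value iteration on an 8x8 grid with a goal in the far corner.
import numpy as np

H, W, gamma, iters = 8, 8, 0.95, 50
reward = np.full((H, W), -0.01)
reward[-1, -1] = 1.0                        # goal cell
V = np.zeros((H, W))

def neighbor_values(V):
    """Values reachable by the 4 moves; edge padding makes walls clamp."""
    P = np.pad(V, 1, mode="edge")
    return np.stack([P[:-2, 1:-1],          # up
                     P[2:, 1:-1],           # down
                     P[1:-1, :-2],          # left
                     P[1:-1, 2:]])          # right

for _ in range(iters):
    V = reward + gamma * neighbor_values(V).max(axis=0)   # Bellman backup

print(np.round(V, 2))
```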

Fri, Nov 30 Ken Goldberg UC Berkeley A Grand Challenge for E-Commerce: Optimizing Rate, Reliability, and Range for Robot Bin Picking and Related Projects McCullough 115 11:00AM
Abstract

Consumer adoption of e-commerce is skyrocketing at Amazon, Walmart, JD.com, and Alibaba. As new super-sized warehouses are opening every month, it is proving increasingly difficult to hire enough workers to meet the pressing need to shorten fulfillment times. Thus a Holy Grail for e-commerce is robots that are capable of Universal Picking: reliably and efficiently grasping a massive (and changing) set of products of diverse shapes and sizes. I'll describe a 'new wave' in research that combines classical mechanics, stochastic modeling, and deep learning. The First Wave of grasping research, still dominant, uses analytic methods based on screw theory and assumes exact knowledge of pose, shape, and contact mechanics. The Second Wave is empirical: purely data driven approaches which learn grasp strategies from many examples using techniques such as imitation and reinforcement learning with hyperparametric function approximation (Deep Learning). I'll present the Dexterity Network (Dex-Net), a New Wave method being developed by UC Berkeley startup Ambidextrous Laboratories that combines analytic and empirical approaches to rapidly synthesize massive training datasets that incorporate statistical analytic models of the inherent errors arising from physics, sensing, and control. Dex-Net can be applied to almost any combination of robots, bins, shelves, 3D sensors, and gripping devices and is achieving record-breaking performance in picks per hour on novel objects.

Fri, Dec 07 Terry Fong NASA Human-Robot Teaming: From Space Robotics to Self-Driving Cars McCullough 115 11:00AM
Abstract

The role of robots in human-robot teams is increasingly becoming that of a peer-like teammate, or partner, who is able to assist with and complete joint tasks. This relationship raises key issues that need to be addressed in order for such teams to be effective. In particular, human-robot teaming demands that concepts of communication, coordination, and collaboration be accommodated by human-robot interaction. Moreover, building effective human-robot teams is challenging because robotic capabilities are continually advancing, yet still have difficulties when faced with anomalies, edge cases, and corner cases. In this talk, I will describe how NASA Ames has been developing and testing human-robot teams. In our research, we have focused on studying how such teams can increase the performance, reduce the cost, and increase the success of space missions. A key tenet of our work is that humans and robots should support one another in order to compensate for limitations of human manual control and robot autonomy. This principle has broad applicability beyond space exploration. Thus, I will conclude by discussing how we have worked with Nissan to apply our methods to self-driving cars -- enabling humans to support self-driving cars operating in unpredictable and difficult situations.

Schedule Spring 2018

Date Guest Affiliation Title Location Time
Fri, Apr 06 Phillippe Poignet LIRMM Univ Montpellier CNRS Recent advances in surgical robotics: some examples through the LIRMM research activities illustrated in minimally invasive surgery and interventional radiology Jordan Hall 040 11:00AM
Abstract

Surgeons' interest in robotics has grown considerably over the last two decades. The presence of the DaVinci robot in the operating room has paved the way for the use of robotized instruments in the OR. Discussing some recent advances in surgical robotics, we will highlight new trends by presenting examples of LIRMM research activities, illustrated in the domains of minimally invasive surgery and interventional radiology.

Fri, Apr 13 Benjamin Hockman Stanford University Hopping Rovers for Exploration of Asteroids and Comets: Design, Control, and Autonomy Jordan Hall 040 11:00AM
Abstract

The surface exploration of small Solar System bodies, such as asteroids and comets, has become a central objective for NASA and space agencies worldwide. However, the highly irregular terrain and extremely weak gravity on small bodies present major challenges for traditional wheeled rovers, such as those sent to the moon and Mars. Through a joint collaboration between Stanford and JPL, we have been developing a minimalistic internally-actuated hopping rover called “Hedgehog” for targeted mobility in these extreme environments. By applying controlled torques to three internal flywheels, Hedgehog can perform various controlled maneuvers including long-range hops and short, precise “tumbles.” In this talk, I will present my PhD work on developing the necessary tools to make such a hopping system controllable and autonomous, ranging from low-level dynamics modeling and control analysis to higher-level motion planning for highly stochastic hopping/bouncing dynamics.

Fri, Apr 13 Zachary Sunberg Stanford University Safety and Efficiency in Autonomous Vehicles through Planning with Uncertainty Jordan Hall 040 11:00AM
Abstract

In order to be useful, autonomous vehicles must accomplish goals quickly while maintaining safety and minimizing disruptions to other human activities. One key to acting efficiently in a wide range of scenarios without compromising safety is modeling and planning with uncertainty, especially uncertainty in other agents' internal states such as intentions and dispositions. The partially observable Markov decision process (POMDP) provides a systematic framework for dealing with this type of uncertainty, but these problems are notoriously difficult to solve. Our new online algorithms make it possible to find approximate POMDP solutions, and these solutions prove useful in autonomous driving.
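
The core recursion underlying POMDP planning is the Bayesian belief update (standard formulation, not specific to the speaker's algorithms):

```latex
% After taking action a and receiving observation o,
b'(s') \;\propto\; O(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s)
% The belief b encodes uncertainty over hidden states such as other agents'
% intentions; online solvers plan over beliefs rather than raw states.
```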

Fri, Apr 20 Zac Manchester Stanford University Planning for Contact Jordan Hall 040 11:00AM
Abstract

Contact interactions are pervasive in many key real-world robotic tasks like manipulation and walking. However, the dynamics associated with impacts and friction remain challenging to model, and motion planning and control algorithms that can effectively reason about contact remain elusive. In this talk I will share some recent work that leverages ideas from discrete mechanics to both accurately simulate rigid body dynamics with contact, as well as enable fully contact-implicit trajectory optimization. I will share several examples in which walking gaits are generated for complex robots with no a priori specification of contact mode sequences.

Fri, Apr 27 Brian Casey Stanford Law School Law as Action: Profit-Driven Bias in Autonomous Vehicles Jordan Hall 040 11:00AM
Abstract

Profit maximizing firms designing autonomous vehicles will face economic incentives to weigh the benefits of delivering speedy transportation against expected liabilities in the event of an accident. This Article demonstrates that path planning algorithms optimizing these tradeoffs using well-established auto injury compensation formulas may have the unintended consequence of producing facially discriminatory driving behaviors in autonomous vehicles. It considers a simulated one-way street setting with a probabilistic pedestrian crossing and uses U.S. Census Bureau data to calculate income-based liability predictions for collision events. Obtaining quantitative results through Monte Carlo sampling, this Article shows how profit maximizing speeds can be expected to vary inversely with neighborhood income levels—putting simulated pedestrians that encounter such systems in predominantly minority regions at heightened risk of injury or death relative to their non-minority counterparts. It then discusses how these findings are consistent with a host of other recently documented instances of real world algorithmic bias that highlight the need for fairness, transparency, and accountability in AI systems. Finally, it surveys the challenges facing lawyers, engineers, industry leaders, and policymakers tasked with governing these systems, and argues that a multidisciplinary, multistakeholder approach is necessary to shape policy that is sensitive to the complex social realms into which AI systems are deploying.

Fri, May 04 Russ Tedrake MIT The Combinatorics of Multi-contact Feedback Control Jordan Hall 040 11:00AM
Abstract

It’s sad but true: most state-of-the-art systems today for robotic manipulation operate almost completely open-loop. Shockingly, we still have essentially no principled approaches to designing feedback controllers for systems of this complexity that make and break contact with the environment. Central to the challenge is the combinatorial structure of the contact problem. In this talk, I’ll review some recent work on planning and control methods which address this combinatorial structure without sacrificing the rich underlying nonlinear dynamics. I’ll present some details of our explorations with mixed-integer convex- and SDP-relaxations applied to hard problems in legged locomotion over rough terrain, manipulation, and UAVs flying through highly cluttered environments. I’ll also show a few teasers from the dynamics and manipulation team at the Toyota Research Institute.

Fri, May 11 Sreeja Nag NASA Ames Research Center/BAERI Autonomous Scheduling of Agile Spacecraft Constellations for Rapid Response Imaging Jordan Hall 040 11:00AM
Abstract

Distributed spacecraft architectures, such as formation flight and constellations, are being recognized as important Earth Observation solutions to increase measurement samples over multiple spatio-temporal-angular vantage points. Small spacecraft have the capability to host imager payloads and can slew to capture images within short notice, given the precise attitude control systems emerging in the commercial market. When combined with appropriate software, this can significantly increase response rate, revisit time and coverage. We have demonstrated a ground-based, algorithmic framework that combines orbital mechanics, attitude control and scheduling optimization to plan the time-varying, full-body orientation of agile, small spacecraft in a constellation, such that they maximize observations for given imaging requirements and spacecraft specifications. Running the algorithm onboard will enable the constellation to make time-sensitive, measurement decisions autonomously. Upcoming technologies such as inter-satellite links, onboard processing of images for intelligent decision making and onboard orbit prediction will be leveraged for reaching consensus and coordinated execution among multiple spacecraft.

Fri, May 18 Mo Chen Stanford University Safety in Autonomy via Reachability Jordan Hall 040 11:00AM
Abstract

Autonomous systems are becoming pervasive in everyday life, and many of these systems are complex and safety-critical. Reachability analysis is a flexible tool for guaranteeing safety for nonlinear systems under the influence of unknown disturbances, and involves computing the reachable set, which quantifies the set of initial states from which a system may reach a set of unsafe states. However, computational scalability, a difficult challenge in formal verification, makes reachability analysis intractable for complex, high-dimensional systems. In this seminar, I will show how high-dimensional reachability analysis can be made more tractable through a clever differential game approach in the context of real-time robust planning, through carefully decomposing a complex system into subsystems, and through utilizing optimization techniques that provide conservative safety guarantees. By tackling the curse of dimensionality from multiple fronts, tractable verification of practical systems is becoming a reality, paving the way towards more pervasive and safer automation.

Fri, May 18 Ming Luo Stanford University Design, Theoretical Modeling, Motion Planning, and Control of a Pressure-operated Modular Soft Robotic Snake Jordan Hall 040 11:00AM
Abstract

Snake robotics is an important research topic with applications in a wide range of fields including inspection in confined spaces, search-and-rescue, and disaster response, where other locomotion modalities may not be ideal. Snake robots are well-suited to these applications because of their versatility and adaptability to unstructured and high-risk environments. However, compared to their biological counterparts, rigid snake robots have kinematic limitations that reduce their effectiveness in negotiating tight passageways. Pressure-operated soft robotic snakes offer a solution that can address this functionality gap. To achieve functional autonomy, this talk combines soft mobile robot modeling, control, and motion planning. We propose a pressure-operated soft robotic snake with a high degree of modularity, with embedded flexible local curvature sensing based on our recent results in this area. On this platform, we introduce the use of iterative learning control using feedback from the on-board curvature sensors to enable the snake to control its locomotion direction. We also present a motion planning and trajectory following algorithm using an adaptive bounding box, which allows for efficient motion planning that takes into account the kinematic and dynamic information of the soft robotic snake. We test this algorithm experimentally, and demonstrate its performance for obstacle avoidance scenarios.
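
The iterative learning control scheme mentioned above, in its generic first-order form (not necessarily the exact law used on the snake), updates the control signal from one gait cycle to the next:

```latex
u_{k+1}(t) \;=\; u_k(t) + L\, e_k(t)
% k indexes gait cycles, e_k(t) is the tracking error (here, curvature or
% heading error measured by the embedded sensors) over cycle k, and L is a
% learning gain chosen for monotonic convergence.
```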

Fri, May 25 Stefano Soatto UCLA The Information Knot Tying Sensing and Control and the Emergence Theory of Deep Representation Learning Jordan Hall 040 11:00AM
Abstract

Internal representations of the physical environment, inferred from sensory data, are believed to be crucial for interaction with it, but until recently lacked sound theoretical foundations. Indeed, some of the practices for high-dimensional sensor streams like imagery seemed to contravene basic principles of Information Theory: Are there non-trivial functions of past data that ‘summarize’ the ‘information’ it contains that is relevant to decision and control tasks? What ‘state’ of the world should an autonomous system maintain? How would such a state be inferred? What properties should it have? Is there some kind of ‘separation principle’, whereby a statistic (the state) of all past data is sufficient for control and decision tasks? I will start from defining an optimal representation as a (stochastic) function of past data that is sufficient (as good as the data) for a given task, has minimal (information) complexity, and is invariant to nuisance factors affecting the data but irrelevant for a task. Such minimal sufficient invariants, if computable, would be an ideal representation of the given data for the given task. I will then show that these criteria can be formalized into a variational optimization problem via the Information Bottleneck Lagrangian, and minimized with respect to a universal approximant class of function realized by deep neural networks. I will then specialize this program for control tasks, and show that it is possible to define and compute a ‘state’ that separates past data from future tasks, and has all the desirable properties that generalize the state of dynamical models customary in linear control systems, except for being highly non-linear and having high dimension (in the millions).
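
The Information Bottleneck Lagrangian referenced here has the standard form (our transcription):

```latex
% Choose a stochastic representation z of past data x that is predictive
% of the task variable y:
\min_{p(z \mid x)} \; I(x; z) \;-\; \beta\, I(z; y)
% Minimal complexity I(x;z) is traded against task sufficiency I(z;y),
% with the trade-off controlled by beta.
```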

Schedule Winter 2018

Date Guest Affiliation Title Location Time
Fri, Jan 12 Mark Cutkosky and Allison Okamura Stanford University Biological and Robotic haptics Jordan Hall 040 11:00AM
Abstract

We will introduce the audience to the mechanisms underlying tactile sensing in nature and the corresponding implications for robotic tactile sensing, tactile perception and haptics. The first part of the talk will focus primarily on human mechanoreception, to provide an understanding of what we sense, how we sense it, and how we use the information in exploration and manipulation. The second part will look at robotic tactile sensing and haptic display, mainly from the standpoint of what information is desired and how to obtain it, rather than surveying the many kinds of tactile sensors developed over the years. In comparison to other sensing modalities, tactile sensing is inherently multi-modal, distributed, and the result of physical interactions with objects and surfaces. These factors are largely responsible for the slow evolution of robotic tactile sensing in comparison to vision.

Fri, Jan 19 Byron Boots Georgia Tech Learning Perception and Control for Agile Off-Road Autonomous Driving Jordan Hall 040 11:00AM
Abstract

The main goal of this talk is to illustrate how machine learning can start to address some of the fundamental perceptual and control challenges involved in building intelligent robots. I’ll start by introducing a new high speed autonomous “rally car” platform built at Georgia Tech, and discuss an off-road racing task that requires impressive sensing, speed, and agility to complete. I will discuss two approaches to this problem, one based on model predictive control and one based on learning deep policies that directly map images to actions. Along the way I’ll introduce new tools from reinforcement learning, imitation learning, and online learning and show how theoretical insights help us to overcome some of the practical challenges involved in learning on a real-world platform. I will conclude by discussing ongoing work in my lab related to machine learning for robotics.

Fri, Jan 26 Animesh Garg Stanford University Towards Generalizable Imitation in Robotics Jordan Hall 040 11:00AM
Abstract

Robotics and AI are experiencing radical growth, fueled by innovations in data-driven learning paradigms coupled with novel device design, in applications such as healthcare, manufacturing, and service robotics. Data-driven methods such as reinforcement learning circumvent hand-tuned feature engineering, albeit lacking guarantees and often incurring a massive computational expense: training these models frequently takes weeks, in addition to months of task-specific data collection on physical systems. Further, such ab initio methods often do not scale to complex sequential tasks. In contrast, biological agents can often learn faster, not only through self-supervision but also imitation. My research aims to bridge this gap and enable generalizable imitation for robot autonomy. We need to build systems that can capture semantic task structures that promote sample efficiency and can generalize to new task instances across visual, dynamical, or semantic variations. This involves designing algorithms that unify reinforcement learning, control-theoretic planning, semantic scene & video understanding, and design. In this talk, I will cover three aspects of generalizable imitation: task structure learning, policy generalization, and robust/safe transfer. First, I will show how we can move away from hand-designed finite state machines through unsupervised structure learning for complex multi-step sequential tasks. I will then present a method for generalization across task semantics from a single example with unseen task structure, topology, or length. Then I will discuss techniques for robust policy learning to handle generalization across unseen dynamics. And lastly, I will revisit task structure learning to build task representations that generalize across visual semantics, presenting a reference resolution algorithm for task-level understanding from videos. The algorithms and techniques introduced are applicable across domains in robotics; in this talk, I will exemplify these ideas through my work on medical and personal robotics.

Fri, Feb 02 Steven H. Collins Stanford University Designing exoskeletons and prostheses that enhance human performance Jordan Hall 040 11:00AM
Abstract

Exoskeletons and active prostheses could improve mobility for hundreds of millions of people. However, two serious challenges must first be overcome: we need ways of identifying what a device should do to benefit an individual user, and we need cheap, efficient hardware that can do it. In this talk, we will describe a new approach to the design of wearable robots, based on versatile emulator systems and algorithms that automatically customize assistance, which we call human-in-the-loop optimization. We will also discuss the design of exoskeletons that use no energy themselves, yet reduce the energy cost of human walking, and efficient, electroadhesive actuators that could make wearable robots substantially cheaper and more efficient.

Fri, Feb 09 Edward Schmerling Stanford University On Quantifying Uncertainty for Robot Planning and Decision Making Jordan Hall 040 11:00AM
Abstract

Robot planning and control is often tailored towards the 'average' case -- we plan with a certain behavior in mind and hope that in execution a robot can achieve, or at least stay close to, its plan. While this assumption may be justified for assembly robots on factory floors, in less structured settings robots must contend with uncertainty in their dynamics, sensing, and environment that can send their best-laid plans awry. In this talk I will discuss two methods for quantifying uncertainty in the case that multimodality, i.e., the possibility of multiple highly distinct futures, plays a critical role in decision making. The first portion of this talk will outline a computationally efficient method for estimating the likelihood of multiple rare, but critical events (e.g., collisions with a robot's environment) under a known uncertainty model. The second portion will focus on learning multimodal generative models for human-robot interaction in an autonomous driving context where the uncertainty in human action depends reciprocally on a robot's candidate action plan.
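
As a hedged toy illustration of rare-event estimation (our example; the talk's method handles multiple events jointly under a known uncertainty model), importance sampling tilts the proposal distribution toward the failure region and reweights by the likelihood ratio:

```python
# Naive Monte Carlo vs. importance sampling for a rare tail event.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
thresh, N = 4.0, 10_000                    # 'collision' if x > 4 sigma

# Naive Monte Carlo: almost every sample misses the rare event.
naive = np.mean(rng.normal(size=N) > thresh)

# Importance sampling from a proposal centered on the failure region.
xs = rng.normal(loc=thresh, size=N)
w = stats.norm.pdf(xs) / stats.norm.pdf(xs, loc=thresh)   # likelihood ratio
is_est = np.mean(w * (xs > thresh))

print(f"naive: {naive:.2e}  importance-sampled: {is_est:.2e} "
      f"(true ~ {stats.norm.sf(thresh):.2e})")
```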

Fri, Feb 09 Sarah Marie Thornton Stanford University Value sensitive design for autonomous vehicle motion planning Jordan Hall 040 11:00AM
Abstract

Human drivers navigate the roadways by balancing the values of safety, legality, and mobility. The public will likely judge an autonomous vehicle by the same values. The iterative methodology of value sensitive design formalizes the connection of human values to engineering specifications. We apply a modified value sensitive design methodology to the development of an autonomous vehicle speed control algorithm to safely navigate an occluded pedestrian crosswalk. The first iteration presented here models the problem as a partially observable Markov decision process and uses dynamic programming to compute an optimal policy to control the longitudinal acceleration of the vehicle based on the belief of a pedestrian crossing.

Fri, Feb 16 Vincent Vanhoucke Google Brain Self-Supervision for Robotic Learning Jordan Hall 040 11:00AM
Abstract

One of the main challenges in applying machine learning techniques to robotics problems is acquiring labeled data. This is particularly important for anything involving perception, where deep learning techniques perform very well in high-data, supervised regimes, but degrade quickly in performance when data-starved. In this talk I'll argue that thanks to the intrinsic multi-modal and dynamical nature of many robotics problems, much of that gap can be filled using self-supervision, using either alternative modalities or temporal prediction as the supervisory signal. I'll examine how self-consistency, both at the geometric and semantic level, can provide a powerful signal to leverage in teaching robots how to interpret and act in the world.

Fri, Feb 23 Christian Duriez Institut national de recherche en informatique et en automatique Numerical methods for modeling, simulation and control for deformable robots. Jordan Hall 040 11:00AM
Abstract

The design of robots can now be done with complex deformable structures, close to organic material that can be found in nature. Soft robotics opens very interesting perspectives in terms of human interaction, new applications, cost reduction, robustness, security… Soft robotics could bring new advances in robotics in the coming years. However, these robots being highly deformable, traditional modeling and control methods used in robotics do not fully apply. During this talk, the scientific challenge of modeling and controlling soft robots will be presented. I will also present some of our contributions, which make use of methods from numerical mechanics (like Finite Element Methods) and adapt them to fulfill the constraints of robotics: real-time computation, direct and inverse kinematic models, closed loop control…

Fri, Mar 02 Jana Kosecka George Mason University Semantic Understanding for Robot Perception Jordan Hall 040 11:00AM
Abstract

Advancements in robotic navigation and fetch and delivery tasks rest to a large extent on robust, efficient and scalable semantic understanding of the surrounding environment. Deep learning fueled rapid progress in computer vision in object category recognition, localization and semantic segmentation, exploiting large amounts of labelled data and using mostly static images. I will talk about challenges and opportunities in tackling these problems in indoors and outdoors environments relevant to robotics applications. These include methods for semantic segmentation and 3D structure recovery using deep convolutional neural networks (CNNs), localization and mapping of large scale environments, training object instance detectors using synthetically generated training data and 3D object pose recovery. The applicability of the techniques for autonomous driving, service robotics, augmented reality and navigation will be discussed.

Fri, Mar 09 Dimitria Panagou University of Michigan Persistent Coverage Control for Constrained Multi-UAV Systems Jordan Hall 040 11:00AM
Abstract

Control of multi-agent systems and networks has been a popular topic of research with applications in numerous real-world problems involving autonomous unmanned vehicles (ground, marine, aerial, space) and robotic assets. Despite the significant progress over the past few years, we are not yet in a position to deploy arbitrarily large-scale systems with prescribed safety and resilience (against malfunction or malicious attacks) guarantees for a variety of applications, such as surveillance and situational awareness in civilian and military environments. Planning, estimation and control for such complex systems is challenging due to non-trivial agent (vehicle, robot) dynamics, restrictions in onboard power, sensing, computation and communication resources, the number of agents in the network, and uncertainty about the environment. In this talk, we will present some of our recent results and ongoing work on the safe, persistent dynamic coverage control for multi-UAS networks.

Fri, Mar 16 Daniela Rus MIT Recent Advances Enabling Autonomous Transportation Jordan Hall 040 11:00AM
Abstract

Tomorrow's cars will be our partners. They will drastically improve the safety and quality of the driving experience, filling in when our human senses fail us: helping us navigate icy roads and blind intersections, paying attention when we're tired, and even making our time in the car fun. However, we are not there yet. Our broad objective is to develop the science and engineering of autonomy and its applications in transportation, logistics, manufacturing, and exploration. In this talk I will discuss recent advances in autonomous vehicles and mobility as a service, powered by new algorithms for perception, planning, learning, and control. These algorithms (i) understand the behavior of other agents, (ii) devise controllers for safe interactions, (iii) generate provably safe trajectories that move the vehicle through cluttered environments in a natural manner, and (iv) allocate customers to vehicles to optimize a multi-vehicle transportation system.
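
Point (iv) can be made concrete with a small example. In the toy setting below (my own, not the talk's system), waiting customers are assigned to vehicles so that total pickup distance is minimized, using the Hungarian algorithm.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(2)
    vehicles = rng.uniform(size=(5, 2))   # vehicle positions in a unit "city"
    customers = rng.uniform(size=(5, 2))  # customer pickup locations

    # Cost matrix: distance from every vehicle to every customer.
    cost = np.linalg.norm(vehicles[:, None, :] - customers[None, :, :], axis=2)

    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    for v, c in zip(rows, cols):
        print(f"vehicle {v} -> customer {c} (distance {cost[v, c]:.3f})")
    print("total pickup distance:", cost[rows, cols].sum())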

Fri, Mar 16 Koushil Sreenath UC Berkeley Safety-Critical Control for Dynamic Legged and Aerial Robotics Jordan Hall 040 11:00AM
Abstract

Biological systems such as birds and humans are able to move with great agility, efficiency, and robustness in a wide range of environments. Endowing machines with similar capabilities requires designing controllers that address the challenges of high degrees of freedom, high degrees of underactuation, nonlinear and hybrid dynamics, and input, state, and safety-critical constraints in the presence of model and sensing uncertainty. In this talk, I will present the design of planning and control algorithms for (i) dynamic legged locomotion over discrete terrain, which requires enforcing safety-critical constraints in the form of precise foot placements; and (ii) dynamic aerial manipulation through cooperative transportation of a cable-suspended payload using multiple aerial robots with safety-critical constraints on manifolds. I will show that we can address the challenges of stability of hybrid systems through control Lyapunov functions (CLFs), input and state constraints through CLF-based quadratic programs, and safety-critical constraints through control barrier functions (CBFs). I will also show that robust and geometric formulations of control Lyapunov and barrier functions can respectively address the adverse effects of model uncertainty on stability and the enforcement of constraints on manifolds.
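
To give a flavor of a CLF-based quadratic program, here is a minimal sketch for a scalar system xdot = f(x) + g(x)*u with V(x) = x^2/2 (my own toy, not the controllers from the talk). The QP "minimize ||u||^2 subject to Vdot(x, u) <= -lambda*V(x)" has a closed-form solution when there is a single affine constraint, so no QP solver is needed here.

    import numpy as np

    lam = 2.0
    f = lambda x: x    # unstable drift
    g = lambda x: 1.0

    def clf_qp(x):
        # Constraint a + b*u <= 0 with a = x*f(x) + lam*V(x), b = x*g(x).
        a = x * f(x) + lam * 0.5 * x**2
        b = x * g(x)
        if a <= 0.0 or b == 0.0:
            return 0.0           # constraint inactive (or x = 0): zero effort
        return -a / b            # min-norm control on the constraint boundary

    # Forward-Euler simulation: the state decays despite the unstable drift.
    x, dt = 1.0, 0.01
    for _ in range(500):
        x += dt * (f(x) + g(x) * clf_qp(x))
    print("final state:", x)     # close to 0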

Schedule Fall 2017

Date Guest Affiliation Title Location Time
Thu, Oct 05 Dorsa Sadigh Stanford University No Title STLC-111 11:00AM
Abstract

Today’s society is rapidly advancing towards cyber-physical systems (CPS) that interact and collaborate with humans, e.g., semi-autonomous vehicles interacting with drivers and pedestrians, medical robots used in collaboration with doctors, or service robots interacting with their users in smart homes. The safety-critical nature of these systems requires us to provide provably correct guarantees about their performance in interaction with humans. The goal of my research is to enable such human-cyber-physical systems (h-CPS) to be safe and interactive. I aim to develop a formalism for the design of algorithms and mathematical models that facilitate correct-by-construction control for safe and interactive autonomy. In this talk, I will first discuss interactive autonomy, where we use algorithmic human-robot interaction to be mindful of the effects of autonomous systems on humans, and further leverage these effects for better safety, efficiency, coordination, and estimation. I will then talk about safe autonomy, where we provide correctness guarantees while taking into account the uncertainty arising from the environment. Further, I will discuss a diagnosis and repair algorithm for the systematic transfer of control to the human in unrealizable settings. While the algorithms and techniques introduced can be applied to many h-CPS applications, in this talk I will focus on the implications of my work for semi-autonomous driving.

Thu, Oct 19 Jeannette Bohg Stanford University Combining learned and analytical models for predicting the effect of contact interaction STLC-111 11:00AM
Abstract

One of the most basic skills a robot should possess is predicting the effect of physical interactions with objects in the environment. Traditionally, these dynamics have been described by physics-based analytical models, which can be very hard to formulate for complex problems. More recently, we have seen learning-based approaches that predict the effect of complex physical interactions from raw sensory input. However, it is an open question how far these models generalise beyond their training data. In this talk, I propose a way to combine analytical and learned models to leverage the best of both worlds. The method takes raw sensory data as input and produces the predicted effect as output. In our experiments, we compared the performance of the proposed model to a purely learned and a purely analytical model. Our results show that the combined method outperforms the purely learned version in terms of accuracy and generalisation to interactions and objects not seen during training. Beyond these empirical results, I will also present an in-depth analysis of why the purely learned model has difficulties capturing the dynamics of this task and how the analytical model helps.
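
One common way to realize such a combination is to learn only a residual on top of the analytical prediction; the sketch below illustrates that pattern under assumptions I made up (the architecture in the talk may differ). The analytical model captures the gross motion, and a learned linear correction absorbs the dynamics it misses.

    import numpy as np

    rng = np.random.default_rng(3)

    def analytical_push(x, u):
        # Crude physics: the object moves exactly with the pusher.
        return x + u

    def true_push(x, u):
        # "Real" dynamics include an effect the analytical model misses.
        return x + u - 0.3 * np.tanh(x) + 0.05 * rng.normal()

    # Collect interaction data.
    X = rng.uniform(-2, 2, size=200)
    U = rng.uniform(-0.5, 0.5, size=200)
    Y = np.array([true_push(x, u) for x, u in zip(X, U)])

    # Learn only the analytical model's error from simple features.
    residual = Y - analytical_push(X, U)
    A = np.stack([X, U, np.ones_like(X)], axis=1)
    w, *_ = np.linalg.lstsq(A, residual, rcond=None)

    def combined_push(x, u):
        return analytical_push(x, u) + np.array([x, u, 1.0]) @ w

    pred = np.array([combined_push(x, u) for x, u in zip(X, U)])
    print("analytical-only error:", np.abs(analytical_push(X, U) - Y).mean())
    print("combined model error: ", np.abs(pred - Y).mean())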

Fri, Nov 03 Maja Mataric University of Southern California Combining learned and analytical models for predicting the effect of contact interaction Gates 104 11:00AM
Abstract

Thu, Nov 16 Jean-Jacques Slotine MIT Evolvability and adaptation in robotic systems STLC-111 11:00AM
Abstract

This talk discusses recent tools from nonlinear dynamical systems and their robotics applications to collective behavior, adaptation, identification, SLAM, and biomimetic flight.

Thu, Nov 30 Franziska Meier Max Planck Institute for Intelligent Systems Continuously Learning Robots STLC-111 11:00AM
Abstract

Most robot learning approaches focus on discrete, single-task learning events: a policy is trained for a specific environment and/or task, and then tested on similar problems. Yet, to be truly autonomous, robots need to react to unexpected events and then update their models and policies to incorporate the data points they have just encountered. In short, true autonomy requires continual learning. However, continuously updating models without forgetting previously learned mappings remains an open research problem. In this talk I will present learning algorithms, based on localized inference schemes, that alleviate the problem of forgetting when learning incrementally. Finally, I will introduce our recent advances on learning-to-learn in the context of continual learning. We show that, with the help of our meta-learner, we achieve faster model adaptation when encountering new situations during online learning.
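
The localization idea can be illustrated with a deliberately simple sketch (my own, far simpler than the algorithms in the talk): keep one tiny model per region of input space and update only the model nearest to each new sample, so new data cannot overwrite what was learned elsewhere.

    import numpy as np

    centers = np.linspace(-3, 3, 13)  # fixed local-model centers
    values = np.zeros_like(centers)   # one running-mean model per region
    counts = np.zeros_like(centers)

    def update(x, y):
        i = np.abs(centers - x).argmin()          # only the nearest model learns
        counts[i] += 1
        values[i] += (y - values[i]) / counts[i]  # incremental mean update

    def predict(x):
        return values[np.abs(centers - x).argmin()]

    rng = np.random.default_rng(4)
    # Phase 1: data only from the left half; Phase 2: only from the right.
    for x in rng.uniform(-3, 0, 500):
        update(x, np.sin(x))
    for x in rng.uniform(0, 3, 500):
        update(x, np.sin(x))

    # The left half is still well fit: phase 2 never touched those models.
    xs = centers[:6]
    print(np.round([predict(x) - np.sin(x) for x in xs], 2))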

Mon, Dec 04 Philipp Hennig Max Planck Institute for Intelligent Systems Probabilistic Numerics — Uncertainty in Computation Packard 202 11:00AM
Abstract

The computational complexity of inference from data is dominated by the solution of non-analytic numerical problems (large-scale linear algebra, optimization, integration, the solution of differential equations). But a converse of sorts is also true — numerical algorithms for these tasks are themselves inference engines: they estimate intractable, latent quantities by collecting the observable results of tractable computations. Because they also decide adaptively which computations to perform, these methods can be interpreted as autonomous inference agents. This observation lies at the heart of the emerging field of Probabilistic Numerical Computation, which applies the concepts of probabilistic (Bayesian) inference to the design of algorithms, assigning a notion of probabilistic uncertainty to the result of even deterministic computations. I will outline how this viewpoint connects to classic numerical analysis, and show that thinking about computation as inference affords novel, practical answers to the challenges of large-scale, big-data inference.
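
As a small taste of this viewpoint, here is a minimal Bayesian-quadrature sketch (my own toy, not from the talk): a Gaussian-process prior is placed on the integrand, and the method returns both an estimate of the integral and an uncertainty over it, even though every computation involved is deterministic. The RBF-kernel integrals against the uniform measure on [0, 1] used below are standard closed forms.

    import numpy as np
    from scipy.special import erf

    f = lambda x: np.sin(3 * x)   # integrand on [0, 1]
    X = np.linspace(0, 1, 8)      # a few deterministic function evaluations
    y = f(X)

    ell, s2 = 0.3, 1.0            # RBF kernel length scale and variance
    k = lambda a, b: s2 * np.exp(-(a[:, None] - b[None, :])**2 / (2 * ell**2))

    # z_i = integral of k(x, X_i) over [0, 1], in closed form via erf.
    c = s2 * ell * np.sqrt(np.pi / 2)
    z = c * (erf((1 - X) / (np.sqrt(2) * ell)) + erf(X / (np.sqrt(2) * ell)))

    K = k(X, X) + 1e-10 * np.eye(len(X))
    mean = z @ np.linalg.solve(K, y)   # posterior mean of the integral

    # Posterior variance uses the double integral of the kernel over [0,1]^2,
    # which also has a closed form for the RBF kernel.
    u = 1 / (np.sqrt(2) * ell)
    kk = 2 * s2 * ell**2 * (np.sqrt(np.pi) * u * erf(u) + np.exp(-u**2) - 1)
    var = kk - z @ np.linalg.solve(K, z)

    print(f"integral estimate: {mean:.4f} +/- {np.sqrt(max(var, 0.0)):.4f}")
    print("true value:       ", (1 - np.cos(3)) / 3)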