Archive

Schedule Fall 2019

Date Guest Affiliation Title Location Time
Fri, Sep 27 Jaime Fisac Princeton University Mind the Gap: Bridging model-based and data-driven reasoning for safe human-centered robotics Skilling Auditorium 11:00AM
Abstract

Spurred by recent advances in perception and decision-making, robotic technologies are undergoing a historic expansion from factory floors to the public space. From autonomous driving and drone delivery to robotic devices in the home and workplace, robots are bound to play an increasingly central role in our everyday lives. However, the safe deployment of these systems in complex, human-populated spaces introduces new fundamental challenges. Whether safety-critical failures (e.g. collisions) can be avoided will depend not only on the decisions of the autonomous system, but also on the actions of human beings around it. Given the complexity of human behavior, how can robots reason through these interactions reliably enough to ensure safe operation in our homes and cities? In this talk I will present a vision for safe human-centered robotics that brings together control-theoretic safety analysis and Bayesian machine learning, enabling robots to actively monitor the “reality gap” between their models and the world while leveraging existing structure to ensure safety in spite of this gap. In particular, I will focus on how robots can reason game-theoretically about the mutual influence between their decisions and those of humans over time, strategically steering interaction towards safe outcomes despite the inevitably limited accuracy of human behavioral models. I will show some experimental results on quadrotor navigation around human pedestrians and simulation studies on autonomous driving. I will end with a broader look at the pressing need for assurances in human-centered intelligent systems beyond robotics, and how control-theoretic safety analysis can be incorporated into modern artificial intelligence, enabling strong synergies between learning and safety.

Fri, Oct 04 Monroe Kennedy Stanford University Modeling and Control for Robotic Assistants Skilling Auditorium 11:00AM
Abstract

As advances are made in robotic hardware, the complexity of the tasks robots are capable of performing also increases. One goal of modern robotics is to introduce robotic platforms that require very little augmentation of their environments to be effective and robust. Therefore the challenge for the roboticist is to develop algorithms and control strategies that leverage knowledge of the task while retaining the ability to be adaptive, adjusting to perturbations in the environment and task assumptions. These strategies will be discussed in the context of a wet-lab robotic assistant. Motivated by collaborations with a local pharmaceutical company, we will explore two relevant tasks. First, we will discuss a robot-assisted rapid experiment preparation system for research and development scientists. Second, we will discuss ongoing work on intelligent human-robot cooperative transport with limited communication. These tasks are the beginning of a suite of abilities for an assistive robotic platform that can be transferred to similar applications useful to a diverse set of end-users.

Fri, Oct 11 Adrien Gaidon Toyota Research Institute Self-Supervised Pseudo-Lidar Networks Skilling Auditorium 11:00AM
Abstract

Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception, especially in safety-critical contexts like automated driving. Nonetheless, recent progress in combining deep learning and geometry suggests that cameras may become a competitive source of reliable 3D information. In this talk, we will present our latest developments in self-supervised monocular depth and pose estimation for urban environments. In particular, we show that with the proper network architecture, large-scale training, and computational power it is possible to outperform fully supervised methods while operating in the much more challenging self-supervised setting, where the only source of input information is video sequences. Furthermore, we discuss how other sources of information (i.e., camera velocity, sparse LiDAR data, and semantic predictions) can be leveraged at training time to further improve pseudo-lidar accuracy and overcome some of the inherent limitations of self-supervised learning.
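
The "pseudo-lidar" idea amounts to back-projecting each pixel of a predicted depth map through the pinhole camera model, yielding a point cloud that downstream 3D detectors can consume as if it came from a lidar. A minimal numpy sketch of that conversion (the function name and intrinsics are illustrative, not taken from the talk):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, meters) into a pseudo-lidar
    point cloud of shape (H*W, 3) using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx  # lateral offset grows with depth
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

Given a network's predicted depth and known camera intrinsics, the resulting cloud can then be handed to any lidar-style 3D detection pipeline.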

Fri, Oct 18 Kostas Alexis University of Nevada Reno Field-hardened Robotic Autonomy Skilling Auditorium 11:00AM
Abstract

This talk will present our contributions in the domain of field-hardened, resilient robotic autonomy, specifically multi-modal, sensing-degraded, GPS-denied localization and mapping, informative path planning, and robust control to facilitate reliable access, exploration, mapping, and search of challenging environments such as subterranean settings. The presented work will, among other things, emphasize fundamental developments taking place in the framework of the DARPA Subterranean Challenge and the research of the CERBERUS (https://www.subt-cerberus.org/) team, alongside work on nuclear site characterization and infrastructure inspection. Relevant field results from both active and abandoned underground mines as well as tunnels in the U.S. and in Switzerland will be presented. In addition, a selected set of prior works on long-term autonomy, including the world record in unmanned aircraft endurance, will be briefly overviewed. The talk will conclude with directions for future research to enable advanced autonomy and resilience, alongside the necessary connection to education and the potential for major broader impacts to the benefit of our economy and society.

Fri, Oct 25 Francesco Borrelli UC Berkeley Learning and Predictions in Autonomous Systems Skilling Auditorium 11:00AM
Abstract

Forecasts play an important role in autonomous and automated systems. Applications include transportation, energy, manufacturing, and healthcare systems. Predictions of system dynamics, human behavior, and environmental conditions can improve the safety and performance of the resulting system. However, constraint satisfaction, performance guarantees, and real-time computation are challenged by the growing complexity of the engineered system, the human/machine interaction, and the uncertainty of the environment in which the system operates. Our research over the past years has focused on predictive control design for autonomous systems performing iterative tasks. In this talk I will first provide an overview of the theory and tools that we have developed for the systematic design of learning predictive controllers. Then, I will focus on recent results on the use of data to efficiently formulate stochastic MPC problems which autonomously improve performance in iterative tasks. Throughout the talk I will focus on autonomous cars and solar power plants to motivate our research and show the benefits of the proposed techniques.

Fri, Nov 01 Tianshi Gao and Sam Abrahams Cruise Automation Scaled Learning for Autonomous Vehicles Skilling Auditorium 11:00AM
Abstract

The adoption of machine learning to solve problems in autonomous systems has become increasingly prevalent. Cruise is a developer of self-driving cars, currently operating a research and development fleet of over 100 all-electric autonomous vehicles in San Francisco. In this talk, we focus on the challenges involved with developing machine learning solutions in the autonomous driving domain. In addition to sharing lessons learned over the past few years of autonomous vehicle development, this discussion will include a review of some of the more challenging perception and prediction problems faced when operating driverless vehicles on the chaotic streets of San Francisco. Then, we share and highlight what it takes to make machine learning work in the wilderness at scale to meet these challenges.

Fri, Nov 08 Ricardo Sanfelice UC Santa Cruz Model Predictive Control of Hybrid Dynamical Systems Skilling Auditorium 11:00AM
Abstract

Hybrid systems model the behavior of dynamical systems in which the states can evolve continuously and, at isolated time instants, exhibit instantaneous jumps. Such systems arise when control algorithms that involve digital devices are applied to continuous-time systems, or when the intrinsic dynamics of the system itself are hybrid, for example, in mechanical systems with impacts, switching electrical circuits, spiking neurons, etc. Hybrid control may be used for improved performance and robustness properties compared to conventional control, and hybrid dynamics may be unavoidable due to the interplay between digital and analog components in a cyber-physical system. In this talk, we will introduce analysis and design tools for model predictive control (MPC) schemes for hybrid systems. We will present recently developed results on asymptotically stabilizing MPC for hybrid systems based on control Lyapunov functions. After a short overview of the state of the art on hybrid MPC, and a brief introduction to a powerful hybrid systems framework, we will present key concepts and analysis tools. After that, we will lay out the theoretical foundations of a general MPC framework for hybrid systems with guaranteed stability and feasibility. In particular, we will characterize invariance properties of the feasible set and the terminal constraint sets, establish continuity of the value function, and use these results to establish asymptotic stability of the hybrid closed-loop system. To conclude, we will illustrate the framework in several applications and summarize some of the open problems, in particular those related to computational issues.
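
For readers unfamiliar with MPC, the purely continuous linear-quadratic special case already shows the receding-horizon structure: re-solve a finite-horizon problem at every step and apply only the first input. A numpy sketch under those (much simpler than hybrid) assumptions, purely as illustrative background:

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for the finite-horizon LQR problem
    that underlies linear MPC; returns gains [K_0, ..., K_{N-1}]."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]

def mpc_step(A, B, Q, R, Qf, N, x):
    """One receding-horizon step: solve over the horizon, then apply
    only the first feedback gain (u = -K_0 x)."""
    return -finite_horizon_lqr(A, B, Q, R, Qf, N)[0] @ x
```

The hybrid setting of the talk replaces this closed-form recursion with optimization over both continuous flows and discrete jumps, which is where the terminal-set and value-function analysis becomes essential.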

Fri, Nov 15 BARS 2019 UC Berkeley and Stanford Bay Area Robotics Symposium International House 8:30AM
Abstract

The 2019 Bay Area Robotics Symposium aims to bring together roboticists from the Bay Area. The program will consist of a mix of faculty, student and industry presentations.

Fri, Nov 22 Hannah Stuart UC Berkeley Hands in the Real World: Grasping Outside the Lab Skilling Auditorium 11:00AM
Abstract

Robots face a rapidly expanding range of potential applications beyond the lab, from remote exploration and search-and-rescue to household assistance. The focus of physical interaction is typically delegated to end-effectors, or hands, as these machines perform manual tasks. Despite decades of dedicated research, effective deployment of robot hands in the real world is still limited to a few examples, other than the use of rigid parallel-jaw grippers. In this presentation, I will review articulated hands that found application in the field, focusing primarily on ocean exploration and drawing examples from recent developments in the Embodied Dexterity Group. I will also introduce preliminary findings regarding an assistive mitten designed to improve the grasping strength of people with weakened hands. Similarities between the design of robot hands and wearable technologies for the human hand will be discussed.

Fri, Dec 06 Chelsea Finn Stanford University The Next Generation of Robot Learning Skilling Auditorium 11:00AM
Abstract

For robots to be successful in unconstrained environments, they must be able to perform tasks in a wide variety of situations — they must be able to generalize. We’ve seen impressive results from machine learning systems that generalize to broad real-world datasets for a range of problems. Hence, machine learning provides a powerful tool for robots to do the same. However, in sharp contrast, machine learning methods for robotics often generalize narrowly within a single laboratory environment. Why the mismatch? In this talk, I’ll discuss the challenges that face robots, in contrast to standard machine learning problem settings, and how we can rethink both our robot learning algorithms and our data sources in a way that enables robots to generalize broadly across tasks, across environments, and even across robot platforms.

Schedule Spring 2019

Date Guest Affiliation Title Location Time
Fri, Apr 05 Rick Zhang Zoox Practical Challenges of Urban Autonomous Driving McCullough 115 11:00AM
Abstract

Autonomous driving holds great promise for society in terms of improving road safety, increasing accessibility, and increasing productivity. Despite rapid technological advances in autonomous driving over the past decade, significant challenges still remain. In this talk, I will examine several practical challenges of autonomous driving in dense urban environments, with an emphasis on challenges involving human-robot interactions. I will talk about how Zoox thinks about these challenges and tackles them on multiple levels throughout the AI stack (Perception, Prediction, Planning, and Simulation). Finally, I will share my perspectives and outlook on the future of autonomous mobility.

Fri, Apr 12 Ross Knepper Cornell University Formalizing Teamwork in Human-Robot Interaction McCullough 115 11:00AM
Abstract

Robots out in the world today work for people but not with people. Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, robots need the ability to acquire a new mix of functional and social skills. Working with people requires a shared understanding of the task, capabilities, intentions, and background knowledge. For robots to act jointly as part of a team with people, they must engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans. Often, much of this information is conveyed through implicit communication. In this talk, I formalize components of teamwork involving collaboration, communication, and representation. I illustrate how these concepts interact in the application of social navigation, which I argue is a first-class example of teamwork. In this setting, participants must avoid collision by legibly conveying intended passing sides via nonverbal cues like path shape. A topological representation using the braid groups enables the robot to reason about a small enumerable set of passing outcomes. I show how implicit communication of topological group plans achieves rapid convergence to a group consensus, and how a robot in the group can deliberately influence the ultimate outcome to maximize joint performance, yielding pedestrian comfort with the robot.

Fri, Apr 19 David Lentink Stanford University Avian Inspired Design McCullough 115 11:00AM
Abstract

Many organisms fly in order to survive and reproduce. My lab focuses on understanding bird flight to improve flying robots, because birds fly further, longer, and more reliably in complex visual and wind environments. I use this multidisciplinary lens, integrating biomechanics, aerodynamics, and robotics, to advance our understanding of the evolution of flight more generally across birds, bats, insects, and autorotating seeds. The development of flying organisms as individuals and their evolution as species are shaped by the physical interaction between organism and surrounding air. The organism’s architecture is tuned for propelling itself and controlling its motion. Flying animals and plants maximize performance by generating and manipulating vortices. These vortices are created close to the body as it is driven by the action of muscles or gravity, then are ‘shed’ to form a wake (a trackway left behind in the fluid). I study how the organism’s architecture is tuned to utilize these and other aeromechanical principles, and compare the function of bird wings to that of bat, insect, and maple seed wings. The experimental approaches range from making robotic models to training birds to fly in a custom-designed wind tunnel as well as in visual flight arenas, and inventing methods to 3D scan birds and measure the aerodynamic force they generate, nonintrusively, with a novel aerodynamic force platform. The studies reveal that animals and plants have converged upon the same solution for generating high lift: a strong vortex that runs parallel to the leading edge of the wing and sucks it upward. Why this vortex remains stably attached to flapping animal and spinning plant wings is elucidated and linked to kinematics and wing morphology. While wing morphology is quite rigid in insects and maple seeds, it is extremely fluid in birds.
I will show how such ‘wing morphing’ significantly expands the performance envelope of birds during flight, and will dissect the mechanisms that enable birds to morph better than any aircraft can. Finally, I will show how these findings have inspired my students to design new flapping and morphing aerial robots.

Fri, Apr 26 Matei Ciocarlie Columbia University How to Make, Sense, and Make Sense of Contact in Robotic Manipulation McCullough 115 11:00AM
Abstract

Reach into your pocket, grab one object (phone) between others (keys, wallet), and take it out. Congratulations, you have achieved an impressive feat of motor control, one that we can not replicate in artificial mechanisms. What was the key to success: the mechanical structure of the hand, the rich tactile and proprioceptive data it can collect, analysis and planning in the brain, or perhaps all of these? In this talk, I will present our work advancing each of these areas: analytical models of grasp stability (with realistic contact and non-convex energy dissipation constraints), design and use of sensors (tactile and proprioceptive) for contact information, and hand posture subspaces (for mechanism design optimization and teleoperation). These are stepping stones towards motor skills which rely on transient contact with complex environments (such as dexterous manipulation), motivated by applications as diverse as logistics, manufacturing, disaster response and space robots.

Fri, May 03 Nora Ayanian USC Crossing the Reality Gap: Coordinating Multirobot Systems in The Physical World McCullough 115 11:00AM
Abstract

Using a group of robots in place of a single robot to accomplish a complex task has many benefits, such as redundancy, robustness, faster completion times, and the ability to be everywhere at once. The applications of such systems are wide and varied: imagine teams of robots containing forest fires, filling urban skies with package deliveries, or searching for survivors after a natural disaster. These applications have been motivating multirobot research for years, but why aren’t they happening yet? These missions demand different roles for robots, necessitating a strategy for coordinated autonomy while respecting any constraints the particular environment or other team members may impose. As a result, current solutions for multirobot systems are often task- and environment-specific, requiring hand-tuning and an expert in the loop. They also require solutions that can manage complexity as the number of robots increases. Such inflexibility in deployment, reduced situational awareness, computational complexity, and need for multiple operators significantly limit widespread use of multirobot systems. In this talk I will present algorithmic strategies that address the main challenges that preclude the widespread adoption of multirobot systems. In particular, I will focus on strategies we have developed that automatically synthesize policies that are broadly applicable to navigating groups of robots in complex environments, from nearly real-time solutions for coordinating hundreds of robots to real-time collision avoidance. I will conclude with experimental results that validate our strategies using our CrazySwarm testbed, a 49-UAV platform for testing multi-robot algorithms at large scale.

Fri, May 10 Anirudha Majumdar Princeton University Safety Guarantees with Perception and Learning in the Loop McCullough 115 11:00AM
Abstract

Imagine an unmanned aerial vehicle (UAV) that successfully navigates a thousand different obstacle environments or a robotic manipulator that successfully grasps a million objects in our dataset. How likely are these systems to succeed on a novel (i.e., previously unseen) environment or object? How can we learn control policies that provably generalize well to environments or objects that our robot has not previously encountered? In this talk, I will present approaches for learning control policies for robotic systems that provably generalize well with high probability to novel environments. The key technical idea behind our approach is to leverage tools from generalization theory (e.g., PAC-Bayes theory) in machine learning and the theory of information bottlenecks from information theory. We apply our techniques on examples including UAV navigation and grasping in order to demonstrate the ability to provide strong generalization guarantees on controllers for robotic systems with continuous state and action spaces, complicated (e.g., nonlinear) dynamics, and rich sensory inputs (e.g., depth measurements).
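
To give a flavor of the kind of guarantee discussed here (the talk's actual results rest on PAC-Bayes and information-bottleneck tools, not on this simple bound): a Hoeffding-style bound converts an empirical success rate over sampled environments into a high-confidence lower bound on the true success probability on novel environments.

```python
import math

def success_lower_bound(successes, trials, delta):
    """With probability at least 1 - delta over the sampled environments,
    the true success rate exceeds the returned value (Hoeffding bound)."""
    p_hat = successes / trials
    return p_hat - math.sqrt(math.log(1.0 / delta) / (2.0 * trials))
```

For example, 950 successes over 1000 i.i.d. test environments certify (at 99% confidence) a true success rate above roughly 0.90; the certified rate tightens as the number of trials grows.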

Fri, May 17 Davide Scaramuzza University of Zurich, ETH Autonomous, Agile, Vision-controlled Drones: from Frame-based to Event-based Vision McCullough 115 11:00AM
Abstract

Autonomous quadrotors will soon play a major role in search-and-rescue and remote-inspection missions, where a fast response is crucial. Quadrotors have the potential to navigate quickly through unstructured environments, enter and exit buildings through narrow gaps, and fly through collapsed buildings. However, their speed and maneuverability are still far from those of birds. Indeed, agile navigation through unknown, indoor environments poses a number of challenges for robotics research in terms of perception, state estimation, planning, and control. In this talk, I will show that tightly-coupled perception and control is crucial in order to plan trajectories that improve the quality of perception. Also, I will talk about our recent results on event-based vision to enable low latency sensory motor control and navigation in both low light and dynamic environments, where traditional vision sensors fail.

Fri, May 24 Ben Recht UC Berkeley The Merits of Models in Continuous Reinforcement Learning McCullough 115 11:00AM
Abstract

Classical control theory and machine learning have similar goals: acquire data about the environment, perform a prediction, and use that prediction to impact the world. However, the approaches they use are frequently at odds. Controls is the theory of designing complex actions from well-specified models, while machine learning makes intricate, model-free predictions from data alone. For contemporary autonomous systems, some sort of hybrid may be essential in order to fuse and process the vast amounts of sensor data recorded into timely, agile, and safe decisions. In this talk, I will examine the relative merits of model-based and model-free methods in data-driven control problems. I will discuss quantitative estimates on the number of measurements required to achieve a high quality control performance and statistical techniques that can distinguish the relative power of different methods. In particular, I will show how model-free methods are considerably less sample efficient than their model-based counterparts. I will also describe how notions of robustness, safety, constraint satisfaction, and exploration can be transparently incorporated in model-based methods. I will conclude with a discussion of possible positive roles for model-free methods in contemporary autonomous systems that may mitigate their high sample complexity and lack of reliability and versatility.
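
The "model-based counterparts" mentioned above typically begin by fitting a dynamics model to data. A minimal numpy sketch of that first step, least-squares identification of linear dynamics (illustrative only, not code from the talk):

```python
import numpy as np

def fit_linear_dynamics(X, U, Xn):
    """Least-squares identification of x_{t+1} = A x_t + B u_t from
    column-stacked data: X (n x T), U (m x T), Xn (n x T)."""
    Z = np.vstack([X, U])            # regressor [x_t; u_t]
    theta = Xn @ np.linalg.pinv(Z)   # minimizes ||Xn - theta Z||_F
    n = X.shape[0]
    return theta[:, :n], theta[:, n:]
```

With the estimated (A, B) in hand, a controller can be designed by standard LQR synthesis, which is the model-based pipeline whose sample efficiency the talk compares against model-free alternatives.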

Fri, May 31 Roberto Calandra Facebook AI No Title McCullough 115 11:00AM
Abstract

Schedule Winter 2019

Date Guest Affiliation Title Location Time
Fri, Jan 11 Dangxiao Wang Beihang University Paradigm shift of haptic human-machine interaction: Historical perspective and our practice McCullough 115 11:00AM
Abstract

Haptics is a fundamental channel when we interact with the physical world. However, it is underutilized when humans interact with machines such as computers and robots. In this talk, I will start from the biological motivation for studying haptic human-machine interaction (HMI), and then I will introduce the paradigm shifts of haptic HMI over the past 30 years, which include desktop haptics in the personal computer era, surface haptics in the mobile computer era, and wearable haptics in the virtual reality era. Specifically, I will try to keep a balance between the research performed in our group and in the broader haptics community. Finally, I will share my perspective on future research challenges in the haptic HMI field.

Fri, Jan 18 Sylvia Herbert UC Berkeley Reachability in Robotics McCullough 115 11:00AM
Abstract

Motion planning is an extremely well-studied problem in the robotics community, yet existing work largely falls into one of two categories: computationally efficient but with few if any safety guarantees, or able to give stronger guarantees but at high computational cost. In this talk I will give an overview of some of the techniques used in the Berkeley Hybrid Systems lab to balance safety with computational complexity in analyzing control systems. I will show these methods applied to a quadrotor in a motion capture room planning in real time to navigate around a priori unknown obstacles, as well as navigation around a human pedestrian.
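
The backward reachable sets underlying this kind of safety analysis can be illustrated with a toy grid computation: starting from a target set, repeatedly mark every state from which some admissible input reaches the current set. Real Hamilton-Jacobi reachability tools are far more sophisticated; the dynamics and names below are hypothetical.

```python
import numpy as np

def backward_reachable(grid, target_mask, dt, u_max, steps):
    """Grid-based backward reachable set for the toy 1-D dynamics
    x_{t+1} = x_t + u_t * dt with |u| <= u_max: a state is marked
    reachable if some input drives it into the current set."""
    reach = target_mask.copy()
    for _ in range(steps):
        new = reach.copy()
        for i, x in enumerate(grid):
            for u in (-u_max, 0.0, u_max):
                j = int(np.argmin(np.abs(grid - (x + u * dt))))
                if reach[j]:
                    new[i] = True
        reach = new
    return reach
```

Each iteration grows the set by at most u_max * dt, so the result is the set of states that can reach the target within the given number of steps; safety analysis uses the complement of an analogous set computed for an unsafe region.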

Fri, Jan 25 Sean Anderson Boston University Sub-sampling approaches to mapping and imaging McCullough 115 11:00AM
Abstract

Sub-sampling approaches can greatly reduce the amount of data that need to be gathered and stored when exploring an unknown signal or environment. When combined with optimization algorithms, accurate reconstructions from the sub-sampled data can be generated, even when acquiring far less than Nyquist-Shannon theory requires. In this talk we explore the use of such schemes in two disparate application domains. The first is in robotic mapping where sub-sampling followed by reconstruction can greatly reduce the number of measurements needed to produce accurate maps. The second is in nanometer-scale imaging using an atomic force microscope where sub-sampling can significantly increase the imaging rate for a given image resolution.
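
The reconstruction step described above is commonly posed as sparse recovery: find the sparsest signal consistent with the sub-Nyquist measurements. Orthogonal matching pursuit is one standard greedy algorithm for this; the sketch below is illustrative background, not the methods from the talk.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x
    from underdetermined measurements y = A x (columns of A normalized)."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

With far fewer measurements than unknowns, such solvers recover sparse signals exactly under suitable conditions on the measurement matrix, which is what lets both the mapping and imaging applications acquire well below the Nyquist-Shannon rate.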

Fri, Feb 01 Jeffrey Lipton University of Washington/MIT Fabrication via Mobile Robotics and Digital Manufacturing McCullough 115 11:00AM
Abstract

Each new generation of robotic fabrication tools has transformed manufacturing, enabling greater complexity and customization of the world around us. With the recent developments in additive manufacturing and mobile robots, several pressing questions have emerged. How can we use computational methods to expand the set of achievable material properties? How can we use mobile robots to do manufacturing? Finally, how can we use the answers to these questions to make robots more capable? In this talk, I will provide answers to these questions. I will demonstrate how we can use generative processes to make deformable cellular materials and how mobile manufacturing robots can perform carpentry tasks. Deformable cellular materials enable open, closed, stochastic, and ordered foams. These are useful in actuation, protection, and deployable structures for robots. Mobile robotic fabrication brings robots out of the factory and onto the job site, enables scalable manufacturing tools, and expands the set of programmable manufacturing processes. Together these two methods will enable the next generation of custom manufacturing.

Fri, Feb 08 Alexandre Bayen UC Berkeley Lagrangian control at large and local scales in mixed autonomy traffic flow: optimization and deep-RL approaches McCullough 115 11:00AM
Abstract

This talk investigates Lagrangian (mobile) control of traffic flow at large scale (city-wide, with fluid flow models) and at local scale (vehicular level). For large-scale inference and control, fluid flow models over networks are considered. Algorithms relying on convex optimization are presented for fusion of static and mobile (Lagrangian) traffic information data. Repeated game theory is used to characterize the stability of such flows under selfish information patterns (each flow attempting to optimize its latency). Convergence of the solutions to Nash equilibria is presented, leading to control strategies to optimize network efficiency. At local scale, the question of how self-driving vehicles will change traffic flow patterns is investigated. We describe approaches based on deep reinforcement learning, presented in the context of enabling mixed-autonomy mobility. The talk explores the gradual and complex integration of automated vehicles into the existing traffic system. We present the potential impact of a small fraction of automated vehicles on low-level traffic flow dynamics, using novel techniques in model-free deep reinforcement learning, in which the automated vehicles act as mobile (Lagrangian) controllers of traffic flow. Illustrative examples of emergent mixed-autonomy traffic behavior will be presented in the context of a new open-source computational platform called FLOW (https://flow-project.github.io/), which integrates state-of-the-art microsimulation tools with deep-RL libraries on AWS EC2. The talk will also cover inference, control, and game-theoretic algorithms developed to improve traffic flow in transportation networks, and will investigate various factors that intervene in decisions made by travelers in large-scale urban environments. We will discuss disruptions in demand due to the rapid expansion of the use of “selfish routing” apps, and how they affect urban planning.
These disruptions cause congestion and make traditional approaches to traffic management less effective. Game-theoretic approaches to demand modeling will be presented. These models encompass heterogeneous users (some using routing information, some not) that share the same network and compete for the same commodity (capacity). Results will be presented for static loading, based on Nash-Stackelberg games, and in the context of repeated games, to account for the fact that routing algorithms learn the dynamics of the system over time as users change their behavior. The talk will present some potential remedies envisioned by planners, which range from incentivization to regulation.

Fri, Feb 15 Mark Mueller UC Berkeley High-Performance Aerial Robotics McCullough 115 11:00AM
Abstract

We present some of our recent results on high-performance aerial robots. First, we present two novel mechanical vehicle configurations: the first is aimed at creating an aerial robot capable of withstanding external disturbance, and the second exploits unactuated internal degrees of freedom for passive shape-shifting, resulting in a simple, agile vehicle capable of squeezing through very narrow spatial gaps. Next, we will discuss results on vibration-based fault detection, exploiting only an onboard IMU to detect and isolate motor faults through vibrations, even if the frequency of the motors is above the Nyquist sampling frequency. Finally, two results pertaining to energy efficiency are presented, one a mechanical modification, and the second an algorithmic adaptation for online adaptation of a vehicle's cruise speed.

Fri, Feb 22 Ludovic Righetti NYU Fast computation of robust multi-contact behaviors McCullough 115 11:00AM
Abstract

Interaction with objects and environments is at the core of any manipulation or locomotion behavior, yet, robots still mostly try to avoid physical interaction with their environment at all costs. This is in stark contrast with humans or animals, that not only constantly interact with their environment from the day they are born but also exploit this interaction to improve their skills. One reason that prevents robots from seamlessly interacting with the world is that reasoning about contacts is a computationally daunting problem. In this presentation, I will present our efforts to break down this complexity and find algorithms that are computationally efficient yet generic enough to be applied to any robot. I will also discuss how these approaches can be rendered robust to unknown and changing environments and how we can leverage machine learning to significantly improve computation efficiency.

Fri, Mar 01 Melonee Wise Fetch Robotics Taking robots to the cloud and other insights on the path to market McCullough 115 11:00AM
Abstract

The robotics industry has come a long way from the industrial robots that have long been in manufacturing environments. Now, robots that can safely work alongside people are used in all sorts of work environments. A new generation of robotics technology is emerging that brings to factory floors and warehouses the kind of speed, agility and incremental cost advantages that cloud computing has brought to IT. Collaborative, autonomous and cloud-based robotics systems don’t require changes to the facility, nor do they require installation or integration of IT hardware and software. Fetch Robotics CEO Melonee Wise will discuss the evolution of robotics to the cloud, and how the company has successfully brought its robotics technology to market.

Fri, Mar 08 Jeff Hancock Stanford University Conversation with a Robot McCullough 115 11:00AM
Abstract

Jeff Hancock is founding director of the Stanford Social Media Lab and a Professor in the Department of Communication at Stanford University. Professor Hancock and his group work on understanding psychological and interpersonal processes in social media. The team specializes in using computational linguistics and experiments to understand how the words we use can reveal psychological and social dynamics, such as deception and trust, emotional dynamics, intimacy and relationships, and social support. Recently, Professor Hancock has been working on understanding the mental models people have about algorithms in social media, as well as on the ethical issues associated with computational social science.

Fri, Mar 15 Marin Kobilarov Johns Hopkins University No Title McCullough 115 11:00AM
Abstract

This talk will focus on computing robust control policies for autonomous agents performing a given task that can be modeled using a performance function and constraints. We will first consider a strategy for computing guarantees on future policy execution under uncertainty, based on probably-approximately-correct (PAC) high-confidence performance bounds. The bounds will then be used to optimize a given policy based on a high-fidelity learned stochastic model of the agent and its environment. Finally, we will consider initial efforts towards transferring such robust policies to physical agents such as aerial and ground vehicles navigating around obstacles.

Schedule Fall 2018

Date Guest Affiliation Title Location Time
Fri, Sep 28 Wojciech Zaremba OpenAI Learning Dexterity McCullough 115 11:00AM
Abstract

We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity. Our system, called Dactyl, is trained entirely in simulation and transfers its knowledge to reality, adapting to real-world physics using techniques we’ve been working on for the past year. Dactyl learns from scratch using the same general-purpose reinforcement learning algorithm and code as OpenAI Five. Our results show that it’s possible to train agents in simulation and have them solve real-world tasks, without physically-accurate modeling of the world.

Fri, Oct 05 Naira Hovakimyan UIUC L1 Adaptive Control and Its Transition to Practice McCullough 115 11:00AM
Abstract

The history of adaptive control systems dates back to the early 1950s, when the aeronautical community was struggling to advance aircraft speeds to higher Mach numbers. In November of 1967, the X-15 launched on what was planned to be a routine research flight to evaluate a boost guidance system, but it went into a spin and eventually broke up at 65,000 feet, killing the pilot, Michael Adams. It was later found that the onboard adaptive control system was to blame for this incident. Exactly thirty years later, fueled by advances in the theory of nonlinear control, the Air Force successfully flight tested the unmanned, unstable, tailless X-36 aircraft with an onboard adaptive flight control system. This was a landmark achievement that dispelled some of the misgivings that had arisen from the X-15 crash in 1967. Since then, numerous flight tests of Joint Direct Attack Munitions (JDAM) weapons retrofitted with an adaptive element have met with great success and have proven the benefits of adaptation in the presence of component failures and aerodynamic uncertainties. However, the major challenge of stability/robustness assessment of adaptive systems is still addressed by testing the closed-loop system against all possible variations of uncertainties in Monte Carlo simulations, the cost of which grows with the complexity of the systems. This talk will give an overview of the limitations inherent to conventional adaptive controllers and will introduce the audience to L1 adaptive control theory, whose architectures have guaranteed robustness in the presence of fast adaptation. Various applications, including flight tests of a subscale commercial jet, will be discussed to demonstrate the tools and the concepts. With its key feature of decoupling adaptation from robustness, L1 adaptive control theory has facilitated new developments in the areas of event-driven adaptation and networked control systems. It has been evaluated on a Learjet in 2015 and 2017, with five people on board and more than 20 hours of flight time each time, and on an F-16 in 2016 with two pilots on board.

Fri, Oct 12 Michael Yip UCSD Learning Model-free Representations for Fast, Adaptive Robot Control and Motion Planning McCullough 115 11:00AM
Abstract

Robot manipulation has traditionally been a problem of model-based control and motion planning in structured environments. This has made robots very well suited to a finite set of repeating tasks and trajectories, such as on a manufacturing assembly line. However, in more complex and partially observable environments, and as more complex, compliant, and safe robots are proposed, the outcomes of robot actions become increasingly uncertain, and model-based methods tend to fail or produce unexpected results. Erratic behavior makes robots dangerous in human environments, so new approaches must be taken. In this talk, I will discuss our research on learning model-free representations that enable robots to learn and adapt their control to new environments, and to plan and execute trajectories. These representations are trained using a variety of local and global model-free learning strategies, and when implemented they are significantly faster, more consistent, and more power- and memory-efficient than conventional control and trajectory planners.

Fri, Oct 19 Yasser Shoukry UMD Attack-Resilient and Verifiable Autonomous Systems: A Satisfiability Modulo Convex Programming Approach McCullough 115 11:00AM
Abstract

Autonomous systems in general, and self-driving cars in particular, hold the promise of being one of the most disruptive technologies to emerge in recent years. However, the security and resilience of these systems, if not proactively addressed, will pose a significant threat, potentially impairing our relation with these technologies and leading society to reject adopting them permanently. In this talk, I will focus on three problems in the context of designing resilient and verifiable autonomous systems: (i) the design of resilient state estimators in the presence of false data injection attacks, (ii) the design of resilient multi-robot motion planning in the presence of Denial-of-Service (DoS) attacks, and (iii) the formal verification of neural network-based controllers. I will argue that, although heterogeneous in nature, all these problems have something in common: they can be formulated as the feasibility problem for a type of formula called monotone Satisfiability Modulo Convex programming (or SMC for short). I will then present a new SMC decision procedure that uses a lazy combination of Boolean satisfiability solving and convex programming to provide a satisfying assignment or determine that the formula is unsatisfiable. I will finish by showing, through multiple experimental results, the real-time and resilience performance of the proposed algorithms.

Fri, Oct 26 Jerry Kaplan Stanford Law School The Devil Made Me Do it: Computational Ethics for Robots McCullough 115 11:00AM
Abstract

Before we set robots and other autonomous systems loose in the world, we need to ensure that they will adhere to basic moral principles and human social conventions. This is easier said than done. Science fiction writer Isaac Asimov famously proposed three laws of Robotics: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Less well known was the purpose of Asimov’s proposal: to point out that these simple rules are woefully inadequate as design criteria for building ethical robots. So if his laws aren’t sufficient, what is? Join me as we window shop through two millennia of moral theories to find a suitable foundation for the emerging discipline of “Computational Ethics” — and explore the darkly hilarious ways these theories often fail in practice!

Fri, Nov 02 Wenzhen Yuan CMU/Stanford Making Sense of the Physical World with High-resolution Tactile Sensing McCullough 115 11:00AM
Abstract

With the rapid progress in robotics, people expect robots to be able to accomplish a wide variety of tasks in the real world, such as working in factories, performing household chores, and caring for the elderly. However, it is still very difficult for robots to act in the physical world. A major challenge lies in the lack of adequate tactile sensing. Progress requires advances in the sensing hardware, but also in the software that can exploit the tactile signals generated when the robot touches an object. The sensor we use is a vision-based tactile sensor called GelSight, which measures the geometry and traction field of the contact surface. To interpret the high-resolution tactile signal, we utilize both traditional statistical models and deep neural networks. I will describe research on two kinds of tasks: exploration and manipulation. For exploration, I use active touch to estimate the physical properties of objects. This work has included learning the basic properties (e.g., hardness) of artificial objects, as well as estimating the general properties of natural objects via autonomous tactile exploration. For manipulation, I study the robot’s ability to detect slip or incipient slip with tactile sensing during grasping. My research helps robots better understand and flexibly interact with the physical world.

Thu, Nov 08 Sumeet Singh Stanford Control-Theoretic Regularization for Nonlinear Dynamical Systems Learning 300-300 11:00AM
Abstract

When it works, model-based Reinforcement Learning (RL) typically offers major improvements in sample efficiency compared to state-of-the-art RL methods, such as policy gradients, that do not explicitly estimate the underlying dynamical system. Yet, all too often, when standard supervised learning is applied to model complex dynamics, the resulting controllers do not perform on par with model-free RL methods in the limit of increasing sample size, due to compounding errors across long time horizons. In this talk, I will present novel algorithmic tools leveraging Lyapunov-based analysis and semi-infinite convex programming to derive a control-theoretic regularizer for dynamics fitting, rooted in the notion of trajectory stabilizability. The resulting semi-supervised algorithm yields dynamics models that jointly balance regression performance and stabilizability, ultimately resulting in generated trajectories for the robot that are notably easier to track. Evaluation on a simulated quadrotor model illustrates the vastly improved trajectory generation and tracking performance over traditional regression techniques, especially in the regime of small demonstration datasets. I will conclude with a brief discussion of some open questions within this field of control-theoretic learning.

Thu, Nov 08 Kirby Witte CMU Assistance of Walking and Running Using Wearable Robots 300-300 11:00AM
Abstract

We are familiar with wearable robots through comic books and movies: exoskeletons give heroes such as Iron Man enhanced strength, speed, and ability. While we are far from reaching superhuman abilities in reality, exoskeletons are hitting the consumer market as tools for rehabilitation and for assisting assembly-line workers. Exoskeleton research has progressed significantly in the last several years, but it is still difficult to determine how exoskeleton assistance should be adapted to fit the needs of individuals. I present an approach to this problem that utilizes a highly adaptable experimental setup, called an exoskeleton emulator system, to rapidly explore exoskeleton design and control strategies. I will introduce human-in-the-loop optimization, which is used to select the optimal settings for each user. I will also present the latest results for exoskeleton-assisted walking and running using these tools, and my thoughts on the future of exoskeleton technologies.

Fri, Nov 16 Aviv Tamar Technion Learning Representations for Planning McCullough 115 11:00AM
Abstract

How can we build autonomous robots that operate in unstructured and dynamic environments such as homes or hospitals? This problem has been investigated under several disciplines, including planning (motion planning, task planning, etc.) and reinforcement learning. While both of these fields have witnessed tremendous progress, each has fundamental drawbacks when it comes to autonomous robots. In general, planning approaches require substantial manual engineering to specify a model for the domain, while RL is data hungry and does not generalize beyond the tasks seen during training. In this talk, we present several studies that aim to mitigate these shortcomings by combining ideas from both planning and learning. We start by introducing value iteration networks, a type of differentiable planner that can be used within model-free RL to obtain better generalization. Next, we consider a practical robotic assembly problem, and show that motion planning, based on readily available CAD data, can be combined with RL to quickly learn policies for assembling tight-fitting objects. Then, we show how deep learning can be used to improve classical planning by learning powerful image-based heuristic functions for A* search. We conclude with our recent work on learning to imagine goal-directed visual plans. Motivated by humans’ remarkable capability to predict and plan complex manipulations of objects, we develop a data-driven method that learns to ‘imagine’ a plausible sequence of observations that transition a dynamical system from its current configuration to a desired goal state. Key to our method is Causal InfoGAN, a deep generative model that can learn features compatible with strong planning algorithms. We demonstrate our approach on learning to imagine and execute robotic rope manipulation.
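Value iteration, the planning computation that value iteration networks embed as a differentiable module, can be sketched in tabular form on a toy gridworld. The grid, rewards, and discount below are illustrative assumptions, not details from the talk.

```python
import numpy as np

def value_iteration(grid, goal, gamma=0.95, iters=100):
    """Tabular value iteration on a 4-connected gridworld.
    grid: 2-D array with 1 = obstacle, 0 = free; goal: (row, col).
    Moving into the goal earns +10, any other move costs -1."""
    H, W = grid.shape
    V = np.zeros((H, W))
    for _ in range(iters):
        V_new = np.full((H, W), -np.inf)
        for r in range(H):
            for c in range(W):
                if grid[r, c] == 1 or (r, c) == goal:
                    V_new[r, c] = 0.0  # obstacles and the terminal goal
                    continue
                # Bellman backup: best one-step move among free neighbors.
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < H and 0 <= nc < W and grid[nr, nc] == 0:
                        reward = 10.0 if (nr, nc) == goal else -1.0
                        V_new[r, c] = max(V_new[r, c], reward + gamma * V[nr, nc])
        V = V_new
    return V

grid = np.zeros((4, 4))
grid[1, 1] = 1  # one obstacle
V = value_iteration(grid, goal=(3, 3))
```

A greedy policy over `V` (always step toward the highest-value neighbor) then routes around the obstacle to the goal; a VIN learns this backup as a convolutional layer instead of hand-coding it.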

Fri, Nov 30 Ken Goldberg UC Berkeley A Grand Challenge for E-Commerce: Optimizing Rate, Reliability, and Range for Robot Bin Picking and Related Projects McCullough 115 11:00AM
Abstract

Consumer adoption of e-commerce is skyrocketing at Amazon, Walmart, JD.com, and Alibaba. As new super-sized warehouses open every month, it is proving increasingly difficult to hire enough workers to meet the pressing need to shorten fulfillment times. Thus a Holy Grail for e-commerce is robots capable of Universal Picking: reliably and efficiently grasping a massive (and changing) set of products of diverse shapes and sizes. I'll describe a 'new wave' of research that combines classical mechanics, stochastic methods, and deep learning. The First Wave of grasping research, still dominant, uses analytic methods based on screw theory and assumes exact knowledge of pose, shape, and contact mechanics. The Second Wave is empirical: purely data-driven approaches that learn grasp strategies from many examples using techniques such as imitation and reinforcement learning with hyperparametric function approximation (Deep Learning). I'll present the Dexterity Network (Dex-Net), a New Wave method being developed by UC Berkeley startup Ambidextrous Laboratories that combines analytic and empirical approaches to rapidly synthesize massive training datasets that incorporate statistical analytic models of the inherent errors arising from physics, sensing, and control. Dex-Net can be applied to almost any combination of robots, bins, shelves, 3D sensors, and gripping devices, and is achieving record-breaking performance in picks per hour on novel objects.

Fri, Dec 07 Terry Fong NASA Human-Robot Teaming: From Space Robotics to Self-Driving Cars McCullough 115 11:00AM
Abstract

The role of robots in human-robot teams is increasingly becoming that of a peer-like teammate, or partner, who is able to assist with and complete joint tasks. This relationship raises key issues that need to be addressed in order for such teams to be effective. In particular, human-robot teaming demands that concepts of communication, coordination, and collaboration be accommodated by human-robot interaction. Moreover, building effective human-robot teams is challenging because robotic capabilities are continually advancing, yet still have difficulties when faced with anomalies, edge cases, and corner cases. In this talk, I will describe how NASA Ames has been developing and testing human-robot teams. In our research, we have focused on studying how such teams can increase the performance, reduce the cost, and increase the success of space missions. A key tenet of our work is that humans and robots should support one another in order to compensate for limitations of human manual control and robot autonomy. This principle has broad applicability beyond space exploration. Thus, I will conclude by discussing how we have worked with Nissan to apply our methods to self-driving cars -- enabling humans to support self-driving cars operating in unpredictable and difficult situations.

Schedule Spring 2018

Date Guest Affiliation Title Location Time
Fri, Apr 06 Phillippe Poignet LIRMM Univ Montpellier CNRS Recent advances in surgical robotics: some examples through the LIRMM research activities illustrated in minimally invasive surgery and interventional radiology Jordan Hall 040 11:00AM
Abstract

Surgeons' interest in robotics has grown considerably over the last two decades. The presence of the DaVinci robot in the operating room has opened the way for the use of robotized instruments in the OR. Discussing recent advances in surgical robotics, we will highlight new trends through examples of the LIRMM research activities, illustrated in the domains of minimally invasive surgery and interventional radiology.

Fri, Apr 13 Benjamin Hockman Stanford University Hopping Rovers for Exploration of Asteroids and Comets: Design, Control, and Autonomy Jordan Hall 040 11:00AM
Abstract

The surface exploration of small Solar System bodies, such as asteroids and comets, has become a central objective for NASA and space agencies worldwide. However, the highly irregular terrain and extremely weak gravity on small bodies present major challenges for traditional wheeled rovers, such as those sent to the moon and Mars. Through a joint collaboration between Stanford and JPL, we have been developing a minimalistic internally-actuated hopping rover called “Hedgehog” for targeted mobility in these extreme environments. By applying controlled torques to three internal flywheels, Hedgehog can perform various controlled maneuvers including long-range hops and short, precise “tumbles.” In this talk, I will present my PhD work on developing the necessary tools to make such a hopping system controllable and autonomous, ranging from low-level dynamics modeling and control analysis to higher-level motion planning for highly stochastic hopping/bouncing dynamics.

Fri, Apr 13 Zachary Sunberg Stanford University Safety and Efficiency in Autonomous Vehicles through Planning with Uncertainty Jordan Hall 040 11:00AM
Abstract

In order to be useful, autonomous vehicles must accomplish goals quickly while maintaining safety and minimizing disruptions to other human activities. One key to acting efficiently in a wide range of scenarios without compromising safety is modeling and planning with uncertainty, especially uncertainty in other agents' internal states such as intentions and dispositions. The partially observable Markov decision process (POMDP) provides a systematic framework for dealing with this type of uncertainty, but these problems are notoriously difficult to solve. Our new online algorithms make it possible to find approximate POMDP solutions, and we show that these solutions are useful in autonomous driving.
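The core ingredient of this kind of planning, maintaining a belief over another agent's hidden internal state, can be illustrated with a minimal Bayes-filter sketch (one component of a full POMDP solver, not the speaker's algorithm). The two-state driver model, the Gaussian observation likelihood, and all parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical internal states for a neighboring driver.
STATES = ("timid", "aggressive")

def likelihood(obs_accel, state):
    """P(observed acceleration | internal state), an unnormalized Gaussian.
    Toy assumption: aggressive drivers accelerate harder on average."""
    mean = {"timid": 0.5, "aggressive": 2.0}[state]
    sigma = 0.5
    return np.exp(-0.5 * ((obs_accel - mean) / sigma) ** 2)

def belief_update(belief, obs_accel):
    """One Bayes-filter step over the discrete internal state."""
    posterior = np.array([belief[i] * likelihood(obs_accel, s)
                          for i, s in enumerate(STATES)])
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])    # uniform prior over the driver's type
for accel in (1.9, 2.1, 1.8):    # consistently hard observed accelerations
    belief = belief_update(belief, accel)
```

After a few hard accelerations the belief concentrates on "aggressive"; an online POMDP planner would then choose actions against that updated belief rather than a fixed driver model.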

Fri, Apr 20 Zac Manchester Stanford University Planning for Contact Jordan Hall 040 11:00AM
Abstract

Contact interactions are pervasive in many key real-world robotic tasks like manipulation and walking. However, the dynamics associated with impacts and friction remain challenging to model, and motion planning and control algorithms that can effectively reason about contact remain elusive. In this talk I will share some recent work that leverages ideas from discrete mechanics both to accurately simulate rigid body dynamics with contact and to enable fully contact-implicit trajectory optimization. I will share several examples in which walking gaits are generated for complex robots with no a priori specification of contact mode sequences.

Fri, Apr 27 Brian Casey Stanford Law School Law as Action: Profit-Driven Bias in Autonomous Vehicles Jordan Hall 040 11:00AM
Abstract

Profit maximizing firms designing autonomous vehicles will face economic incentives to weigh the benefits of delivering speedy transportation against expected liabilities in the event of an accident. This Article demonstrates that path planning algorithms optimizing these tradeoffs using well-established auto injury compensation formulas may have the unintended consequence of producing facially discriminatory driving behaviors in autonomous vehicles. It considers a simulated one-way street setting with a probabilistic pedestrian crossing and uses U.S. Census Bureau data to calculate income-based liability predictions for collision events. Obtaining quantitative results through Monte Carlo sampling, this Article shows how profit maximizing speeds can be expected to vary inversely with neighborhood income levels—putting simulated pedestrians that encounter such systems in predominantly minority regions at heightened risk of injury or death relative to their non-minority counterparts. It then discusses how these findings are consistent with a host of other recently documented instances of real world algorithmic bias that highlight the need for fairness, transparency, and accountability in AI systems. Finally, it surveys the challenges facing lawyers, engineers, industry leaders, and policymakers tasked with governing these systems, and argues that a multidisciplinary, multistakeholder approach is necessary to shape policy that is sensitive to the complex social realms into which AI systems are deploying.
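The Article's core tradeoff can be sketched with a toy Monte Carlo model (all numbers, function names, and functional forms here are invented for illustration and are not from the Article): a firm picks the speed that maximizes expected revenue minus expected crash liability, and a lower expected liability pushes the optimum toward higher speeds.

```python
import random

def expected_profit(speed, liability, n_samples=20000, seed=0):
    """Toy Monte Carlo estimate of expected profit per trip.
    Revenue grows linearly with speed; crash probability grows
    quadratically; a crash incurs the given liability. Using a fixed
    seed gives common random numbers across speeds."""
    rng = random.Random(seed)
    revenue = 1.0 * speed
    p_crash = min(1.0, 0.001 * speed ** 2)
    total = 0.0
    for _ in range(n_samples):
        crashed = rng.random() < p_crash
        total += revenue - (liability if crashed else 0.0)
    return total / n_samples

def profit_maximizing_speed(liability, speeds=range(5, 31)):
    return max(speeds, key=lambda s: expected_profit(s, liability))

# A lower expected liability (e.g. lower projected damages in a given
# neighborhood) yields a higher profit-maximizing speed, which is the
# inverse relationship the Article documents.
low_liability_speed = profit_maximizing_speed(liability=50.0)
high_liability_speed = profit_maximizing_speed(liability=500.0)
```

The bias arises without any explicit reference to protected attributes: the optimizer only sees dollar-valued liabilities, yet those correlate with neighborhood demographics.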

Fri, May 04 Russ Tedrake MIT The Combinatorics of Multi-contact Feedback Control Jordan Hall 040 11:00AM
Abstract

It’s sad but true: most state-of-the-art systems today for robotic manipulation operate almost completely open-loop. Shockingly, we still have essentially no principled approaches to designing feedback controllers for systems of this complexity that make and break contact with the environment. Central to the challenge is the combinatorial structure of the contact problem. In this talk, I’ll review some recent work on planning and control methods which address this combinatorial structure without sacrificing the rich underlying nonlinear dynamics. I’ll present some details of our explorations with mixed-integer convex- and SDP-relaxations applied to hard problems in legged locomotion over rough terrain, manipulation, and UAVs flying through highly cluttered environments. I’ll also show a few teasers from the dynamics and manipulation team at the Toyota Research Institute.

Fri, May 11 Sreeja Nag NASA Ames Research Center/BAERI Autonomous Scheduling of Agile Spacecraft Constellations for Rapid Response Imaging Jordan Hall 040 11:00AM
Abstract

Distributed Spacecraft, such as formation flight and constellations, are being recognized as important Earth Observation solutions to increase measurement samples over multiple spatio-temporal-angular vantage points. Small spacecraft have the capability to host imager payloads and can slew to capture images within short notice, given the precise attitude control systems emerging in the commercial market. When combined with appropriate software, this can significantly increase response rate, revisit time and coverage. We have demonstrated a ground-based, algorithmic framework that combines orbital mechanics, attitude control and scheduling optimization to plan the time-varying, full-body orientation of agile, small spacecraft in a constellation, such that they maximize observations for given imaging requirements and spacecraft specifications. Running the algorithm onboard will enable the constellation to make time-sensitive, measurement decisions autonomously. Upcoming technologies such as inter-satellite links, onboard processing of images for intelligent decision making and onboard orbit prediction will be leveraged for reaching consensus and coordinated execution among multiple spacecraft.

Fri, May 18 Mo Chen Stanford University Safety in Autonomy via Reachability Jordan Hall 040 11:00AM
Abstract

Autonomous systems are becoming pervasive in everyday life, and many of these systems are complex and safety-critical. Reachability analysis is a flexible tool for guaranteeing safety for nonlinear systems under the influence of unknown disturbances, and involves computing the reachable set, which quantifies the set of initial states from which a system may reach a set of unsafe states. However, computational scalability, a difficult challenge in formal verification, makes reachability analysis intractable for complex, high-dimensional systems. In this seminar, I will show how high-dimensional reachability analysis can be made more tractable through a clever differential game approach in the context of real-time robust planning, through carefully decomposing a complex system into subsystems, and through utilizing optimization techniques that provide conservative safety guarantees. By tackling the curse of dimensionality from multiple fronts, tractable verification of practical systems is becoming a reality, paving the way towards more pervasive and safer automation.
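The notion of a backward reachable set can be illustrated with a crude grid-based sketch for a 1-D toy system. Real reachability tools solve a Hamilton-Jacobi differential game on the continuous state space; the fixed-point iteration, dynamics, grid, and horizon below are illustrative assumptions only.

```python
import numpy as np

def backward_reachable_set(states, unsafe_mask, step, controls, horizon):
    """Grid approximation of the states from which the system MAY reach
    the unsafe set within `horizon` steps (the 'may reach' notion in the
    abstract). step(x, u) -> next state; next states snap to the grid."""
    reach = unsafe_mask.copy()
    for _ in range(horizon):
        new = reach.copy()
        for i, x in enumerate(states):
            if reach[i]:
                continue
            for u in controls:
                j = np.argmin(np.abs(states - step(x, u)))  # nearest grid cell
                if reach[j]:
                    new[i] = True
                    break
        reach = new
    return reach

# Toy 1-D system: x' = u with |u| <= 1, time step 0.1, unsafe set |x| < 0.2.
states = np.linspace(-2.0, 2.0, 81)
unsafe = np.abs(states) < 0.2
step = lambda x, u: x + 0.1 * u
brs = backward_reachable_set(states, unsafe, step, controls=(-1.0, 1.0),
                             horizon=5)
```

The set grows by at most one step-length per iteration, so after 5 steps of length 0.1 only states within about 0.5 of the unsafe boundary are flagged; grid methods like this scale exponentially in state dimension, which is exactly the curse of dimensionality the talk addresses.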

Fri, May 18 Ming Luo Stanford University Design, Theoretical Modeling, Motion Planning, and Control of a Pressure-operated Modular Soft Robotic Snake Jordan Hall 040 11:00AM
Abstract

Snake robotics is an important research topic with applications in a wide range of fields, including inspection in confined spaces, search-and-rescue, and disaster response, where other locomotion modalities may not be ideal. Snake robots are well suited to these applications because of their versatility and adaptability to unstructured and high-risk environments. However, compared to their biological counterparts, rigid snake robots have kinematic limitations that reduce their effectiveness in negotiating tight passageways. Pressure-operated soft robotic snakes offer a solution that can address this functionality gap. To achieve functional autonomy, this talk combines soft mobile robot modeling, control, and motion planning. We propose a pressure-operated soft robotic snake with a high degree of modularity and embedded flexible local curvature sensing, based on our recent results in this area. On this platform, we introduce the use of iterative learning control, using feedback from the onboard curvature sensors, to enable the snake to control its locomotion direction. We also present a motion planning and trajectory following algorithm using an adaptive bounding box, which allows for efficient motion planning that takes into account the kinematic and dynamic information of the soft robotic snake. We test this algorithm experimentally and demonstrate its performance in obstacle avoidance scenarios.

Fri, May 25 Stefano Soatto UCLA The Information Knot Tying Sensing and Control and the Emergence Theory of Deep Representation Learning Jordan Hall 040 11:00AM
Abstract

Internal representations of the physical environment, inferred from sensory data, are believed to be crucial for interaction with it, but until recently they lacked sound theoretical foundations. Indeed, some of the practices for high-dimensional sensor streams like imagery seemed to contravene basic principles of Information Theory: Are there non-trivial functions of past data that ‘summarize’ the ‘information’ it contains that is relevant to decision and control tasks? What ‘state’ of the world should an autonomous system maintain? How would such a state be inferred? What properties should it have? Is there some kind of ‘separation principle’, whereby a statistic (the state) of all past data is sufficient for control and decision tasks? I will start by defining an optimal representation as a (stochastic) function of past data that is sufficient (as good as the data) for a given task, has minimal (information) complexity, and is invariant to nuisance factors affecting the data but irrelevant for the task. Such minimal sufficient invariants, if computable, would be an ideal representation of the given data for the given task. I will then show that these criteria can be formalized into a variational optimization problem via the Information Bottleneck Lagrangian, and minimized with respect to a universal approximant class of functions realized by deep neural networks. I will then specialize this program to control tasks, and show that it is possible to define and compute a ‘state’ that separates past data from future tasks and has all the desirable properties that generalize the state of dynamical models customary in linear control systems, except for being highly nonlinear and high-dimensional (in the millions).
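For reference, the Information Bottleneck Lagrangian mentioned in the abstract is commonly written as follows (this is the standard form due to Tishby et al.; the talk's task-specific variant may differ):

```latex
% Information Bottleneck Lagrangian: Z is the representation, X the
% (past) data, Y the task variable. The multiplier \beta trades off
% complexity I(Z;X) against task-relevant information I(Z;Y).
\min_{p(z \mid x)} \; I(Z; X) - \beta \, I(Z; Y)
```

Minimizing the first term compresses the data into the representation, while the second term preserves what matters for the task, matching the "minimal sufficient invariant" criteria described above.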

Schedule Winter 2018

Date Guest Affiliation Title Location Time
Fri, Jan 12 Mark Cutkosky and Allison Okamura Stanford University Biological and Robotic Haptics Jordan Hall 040 11:00AM
Abstract

We will introduce the audience to the mechanisms underlying tactile sensing in nature and the corresponding implications for robotic tactile sensing, tactile perception and haptics. The first part of the talk will focus primarily on human mechanoreception, to provide an understanding of what we sense, how we sense it, and how we use the information in exploration and manipulation. The second part will look at robotic tactile sensing and haptic display, mainly from the standpoint of what information is desired and how to obtain it, rather than surveying the many kinds of tactile sensors developed over the years. In comparison to other sensing modalities, tactile sensing is inherently multi-modal, distributed, and the result of physical interactions with objects and surfaces. These factors are largely responsible for the slow evolution of robotic tactile sensing in comparison to vision.

Fri, Jan 19 Byron Boots Georgia Tech Learning Perception and Control for Agile Off-Road Autonomous Driving Jordan Hall 040 11:00AM
Abstract

The main goal of this talk is to illustrate how machine learning can start to address some of the fundamental perceptual and control challenges involved in building intelligent robots. I’ll start by introducing a new high speed autonomous “rally car” platform built at Georgia Tech, and discuss an off-road racing task that requires impressive sensing, speed, and agility to complete. I will discuss two approaches to this problem, one based on model predictive control and one based on learning deep policies that directly map images to actions. Along the way I’ll introduce new tools from reinforcement learning, imitation learning, and online learning and show how theoretical insights help us to overcome some of the practical challenges involved in learning on a real-world platform. I will conclude by discussing ongoing work in my lab related to machine learning for robotics.

Fri, Jan 26 Animesh Garg Stanford University Towards Generalizable Imitation in Robotics Jordan Hall 040 11:00AM
Abstract

Robotics and AI are experiencing radical growth, fueled by innovations in data-driven learning paradigms coupled with novel device design, in applications such as healthcare, manufacturing and service robotics. Data-driven methods such as reinforcement learning circumvent hand-tuned feature engineering, but they lack guarantees and often incur a massive computational expense: training these models frequently takes weeks, in addition to months of task-specific data collection on physical systems. Further, such ab initio methods often do not scale to complex sequential tasks. In contrast, biological agents can often learn faster, not only through self-supervision but also through imitation. My research aims to bridge this gap and enable generalizable imitation for robot autonomy. We need to build systems that can capture semantic task structures that promote sample efficiency and can generalize to new task instances across visual, dynamical or semantic variations. This involves designing algorithms that unify reinforcement learning, control-theoretic planning, semantic scene & video understanding, and design. In this talk I will discuss three aspects of generalizable imitation: task structure learning, policy generalization, and robust/safe transfer. First, I will show how we can move away from hand-designed finite state machines through unsupervised structure learning for complex multi-step sequential tasks. I will then present a method for generalization across task semantics from a single example with unseen task structure, topology or length. Next, I will discuss techniques for robust policy learning to handle generalization across unseen dynamics. And lastly, I will revisit task structure learning to build task representations that generalize across visual semantics, presenting a reference resolution algorithm for task-level understanding from videos.
The algorithms and techniques introduced are applicable across domains in robotics; in this talk, I will exemplify these ideas through my work on medical and personal robotics.

Fri, Feb 02 Steven H. Collins Stanford University Designing exoskeletons and prostheses that enhance human performance Jordan Hall 040 11:00AM
Abstract

Exoskeletons and active prostheses could improve mobility for hundreds of millions of people. However, two serious challenges must first be overcome: we need ways of identifying what a device should do to benefit an individual user, and we need cheap, efficient hardware that can do it. In this talk, we will describe a new approach to the design of wearable robots, based on versatile emulator systems and algorithms that automatically customize assistance, which we call human-in-the-loop optimization. We will also discuss the design of exoskeletons that use no energy themselves, yet reduce the energy cost of human walking, and efficient, electroadhesive actuators that could make wearable robots substantially cheaper and more efficient.
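Human-in-the-loop optimization, as described above, treats the human's measured response as a black-box objective over assistance parameters. The sketch below is a minimal illustration under stated assumptions: `metabolic_cost` is a hypothetical stand-in for a real respirometry measurement, and plain random search stands in for the more sophisticated optimizers used in practice.

```python
import random

def metabolic_cost(torque_gain):
    """Toy stand-in for a measured metabolic cost (W); in the real
    system this value comes from respirometry on the human subject."""
    return (torque_gain - 0.6) ** 2 + 1.0  # hypothetical optimum at 0.6

def human_in_the_loop_optimize(cost, lo=0.0, hi=1.0, iters=50, seed=0):
    """Minimal random-search sketch: propose assistance parameters,
    'measure' the resulting cost, keep the best setting seen so far."""
    rng = random.Random(seed)
    best_x, best_c = None, float("inf")
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        c = cost(x)
        if c < best_c:
            best_x, best_c = x, c
    return best_x, best_c

best_gain, best_cost = human_in_the_loop_optimize(metabolic_cost)
```

Each "evaluation" here would be minutes of walking on hardware, which is why sample-efficient optimizers matter in the real setting.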

Fri, Feb 09 Edward Schmerling Stanford University On Quantifying Uncertainty for Robot Planning and Decision Making Jordan Hall 040 11:00AM
Abstract

Robot planning and control is often tailored towards the 'average' case -- we plan with a certain behavior in mind and hope that in execution a robot can achieve, or at least stay close to, its plan. While this assumption may be justified for assembly robots on factory floors, in less structured settings robots must contend with uncertainty in their dynamics, sensing, and environment that can send their best-laid plans awry. In this talk I will discuss two methods for quantifying uncertainty in the case that multimodality, i.e., the possibility of multiple highly distinct futures, plays a critical role in decision making. The first portion of this talk will outline a computationally efficient method for estimating the likelihood of multiple rare, but critical events (e.g., collisions with a robot's environment) under a known uncertainty model. The second portion will focus on learning multimodal generative models for human-robot interaction in an autonomous driving context where the uncertainty in human action depends reciprocally on a robot's candidate action plan.
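Estimating the probability of a rare but critical event under a known uncertainty model is typically attacked with variance-reduction techniques. The following is a textbook importance-sampling sketch, not the speaker's algorithm: the probability that a standard Gaussian exceeds a far threshold is estimated by sampling from a proposal shifted into the failure region and reweighting by the likelihood ratio.

```python
import math, random

def rare_event_prob_is(threshold=4.0, n=20000, seed=0):
    """Importance sampling for P(X > threshold) with X ~ N(0, 1):
    draw from the proposal N(threshold, 1), which hits the failure
    region often, and correct each hit by the weight p(x) / q(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)  # proposal centered on the failure set
        if x > threshold:
            # log of N(0,1) density minus log of N(threshold,1) density
            log_w = -0.5 * x * x + 0.5 * (x - threshold) ** 2
            total += math.exp(log_w)
    return total / n

p_hat = rare_event_prob_is()  # true value is about 3.2e-5
```

A naive Monte Carlo estimate of the same probability would need on the order of 1/p, here tens of thousands of samples, just to see one failure.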

Fri, Feb 09 Sarah Marie Thornton Stanford University Value sensitive design for autonomous vehicle motion planning Jordan Hall 040 11:00AM
Abstract

Human drivers navigate the roadways by balancing the values of safety, legality, and mobility. The public will likely judge an autonomous vehicle by the same values. The iterative methodology of value sensitive design formalizes the connection of human values to engineering specifications. We apply a modified value sensitive design methodology to the development of an autonomous vehicle speed control algorithm that safely navigates an occluded pedestrian crosswalk. The first iteration presented here models the problem as a partially observable Markov decision process and uses dynamic programming to compute an optimal policy that controls the longitudinal acceleration of the vehicle based on the belief that a pedestrian is crossing.
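At the core of the POMDP formulation described above is a belief that a pedestrian is crossing, updated by Bayes' rule from noisy detections. The sketch below is a toy illustration only: the sensor rates and the threshold rule are my illustrative assumptions, not the paper's optimal policy computed by dynamic programming.

```python
def update_belief(belief, detection, p_detect=0.9, p_false=0.1):
    """Bayes update of P(pedestrian crossing) given one noisy detector
    reading (detection rates are illustrative, not from the paper)."""
    if detection:
        num = p_detect * belief
        den = num + p_false * (1.0 - belief)
    else:
        num = (1.0 - p_detect) * belief
        den = num + (1.0 - p_false) * (1.0 - belief)
    return num / den

def accel_command(belief, threshold=0.3):
    """Toy stand-in for the optimal policy: brake (m/s^2) when the
    belief of a crossing pedestrian is high, otherwise proceed."""
    return -2.0 if belief > threshold else 0.5

b = 0.5  # uninformed prior at the occluded crosswalk
for z in [True, True, False]:  # two detections, then one miss
    b = update_belief(b, z)
```

The actual policy optimizes expected cost over all future belief trajectories, which is what distinguishes the POMDP solution from a simple threshold on the current belief.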

Fri, Feb 16 Vincent Vanhoucke Google Brain Self-Supervision for Robotic Learning Jordan Hall 040 11:00AM
Abstract

One of the main challenges in applying machine learning techniques to robotics is acquiring labeled data. This is particularly important for anything involving perception, where deep learning techniques perform very well in high-data, supervised regimes, but degrade quickly in performance when data-starved. In this talk I'll argue that thanks to the intrinsic multi-modal and dynamical nature of many robotics problems, much of that gap can be filled using self-supervision, with either alternative modalities or temporal prediction as the supervisory signal. I'll examine how self-consistency, at both the geometric and semantic level, can provide a powerful signal to leverage in teaching robots how to interpret and act in the world.

Fri, Feb 23 Christian Duriez Institut national de recherche en informatique et en automatique Numerical methods for modeling, simulation and control for deformable robots. Jordan Hall 040 11:00AM
Abstract

The design of robots can now be done with complex deformable structures, close to the organic materials found in nature. Soft robotics opens very interesting perspectives in terms of human interaction, new applications, cost reduction, robustness, security… and could bring new advances to robotics in the coming years. However, because these robots are highly deformable, traditional modeling and control methods used in robotics do not fully apply. During this talk, the scientific challenge of modeling and controlling soft robots will be presented. I will also present some of our contributions, which take methods from numerical mechanics (like the Finite Element Method) and adapt them to fulfill the constraints of robotics: real-time computation, direct and inverse kinematic models, closed-loop control…

Fri, Mar 02 Jana Kosecka George Mason University Semantic Understanding for Robot Perception Jordan Hall 040 11:00AM
Abstract

Advancements in robotic navigation and fetch-and-delivery tasks rest to a large extent on robust, efficient and scalable semantic understanding of the surrounding environment. Deep learning has fueled rapid progress in computer vision on object category recognition, localization and semantic segmentation, exploiting large amounts of labelled data and using mostly static images. I will talk about challenges and opportunities in tackling these problems in the indoor and outdoor environments relevant to robotics applications. These include methods for semantic segmentation and 3D structure recovery using deep convolutional neural networks (CNNs), localization and mapping of large-scale environments, training object instance detectors using synthetically generated training data, and 3D object pose recovery. The applicability of the techniques to autonomous driving, service robotics, augmented reality and navigation will be discussed.

Fri, Mar 09 Dimitra Panagou University of Michigan Persistent Coverage Control for Constrained Multi-UAV Systems Jordan Hall 040 11:00AM
Abstract

Control of multi-agent systems and networks has been a popular topic of research with applications in numerous real-world problems involving autonomous unmanned vehicles (ground, marine, aerial, space) and robotic assets. Despite the significant progress over the past few years, we are not yet able to deploy arbitrarily large-scale systems with prescribed safety and resilience (against malfunction or malicious attacks) guarantees for a variety of applications, such as surveillance and situational awareness in civilian and military environments. Planning, estimation and control for such complex systems is challenging due to non-trivial agent (vehicle, robot) dynamics, restrictions in onboard power, sensing, computation and communication resources, the number of agents in the network, and uncertainty about the environment. In this talk, we will present some of our recent results and ongoing work on safe, persistent dynamic coverage control for multi-UAS networks.

Fri, Mar 16 Daniela Rus MIT Recent Advances Enabling Autonomous Transportation Jordan Hall 040 11:00AM
Abstract

Tomorrow's cars will be our partners. They will drastically improve the safety and quality of the driving experience. They will fill in when our human senses fail us, helping us navigate icy roads and blind intersections, paying attention when we're tired, and even making our time in the car fun. However, we are not there yet. Our broad objective is to develop the science and engineering of autonomy and its broad range of applications in transportation, logistics, manufacturing, and exploration. In this talk I will discuss recent advances in autonomous vehicles and mobility as a service, powered by new algorithms for perception, planning, learning, and control. These algorithms (i) understand the behavior of other agents, (ii) devise controllers for safe interactions, (iii) generate provably safe trajectories that move the vehicle through cluttered environments in a natural manner, and (iv) allocate customers to vehicles to optimize a multi-vehicle transportation system.

Fri, Mar 16 Koushil Sreenath UC Berkeley Safety-Critical Control for Dynamic Legged and Aerial Robotics Jordan Hall 040 11:00AM
Abstract

Biological systems such as birds and humans are able to move with great agility, efficiency, and robustness in a wide range of environments. Endowing machines with similar capabilities requires designing controllers that address the challenges of high-degree-of-freedom, high-degree-of-underactuation, nonlinear & hybrid dynamics, as well as input, state, and safety-critical constraints in the presence of model and sensing uncertainty. In this talk, I will present the design of planning and control algorithms for (i) dynamic legged locomotion over discrete terrain that requires enforcing safety-critical constraints in the form of precise foot placements; and (ii) dynamic aerial manipulation through cooperative transportation of a cable-suspended payload using multiple aerial robots with safety-critical constraints on manifolds. I will show that we can address the challenges of stability of hybrid systems through control Lyapunov functions (CLFs), input and state constraints through CLF-based quadratic programs, and safety-critical constraints through control barrier functions. I will show that robust and geometric formulations of control Lyapunov and barrier functions can respectively address adverse effects of model uncertainty on stability and constraint enforcement on manifolds.
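A hedged, one-dimensional illustration of the control barrier function machinery mentioned above (my toy system, not the legged or aerial platforms in the talk): for a single integrator with safe set h(x) = x ≥ 0, the CBF quadratic program admits a closed-form solution.

```python
def cbf_safe_input(x, u_nom, alpha=1.0):
    """Minimal CBF safety filter for the 1-D single integrator
    xdot = u with barrier h(x) = x (safe set: x >= 0).
    The CBF condition  dh/dt + alpha * h(x) >= 0  reduces to
    u >= -alpha * x, and the QP  min (u - u_nom)^2  subject to that
    constraint is solved in closed form by clipping the nominal input."""
    return max(u_nom, -alpha * x)

# Forward-invariance check: even when the nominal controller pushes
# hard toward the unsafe region, the filtered state never leaves x >= 0.
x, dt = 0.5, 0.01
for _ in range(1000):
    x += dt * cbf_safe_input(x, u_nom=-2.0)
```

In higher dimensions the same construction becomes a quadratic program solved at each control step, with control Lyapunov function constraints for stability added alongside the barrier constraint.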

Schedule Fall 2017

Date Guest Affiliation Title Location Time
Thu, Oct 05 Dorsa Sadigh Stanford University No Title STLC-111 11:00AM
Abstract

Today’s society is rapidly advancing towards cyber-physical systems (CPS) that interact and collaborate with humans, e.g., semi-autonomous vehicles interacting with drivers and pedestrians, medical robots used in collaboration with doctors, or service robots interacting with their users in smart homes. The safety-critical nature of these systems requires us to provide provably correct guarantees about their performance in interaction with humans. The goal of my research is to enable such human-cyber-physical systems (h-CPS) to be safe and interactive. I aim to develop a formalism for design of algorithms and mathematical models that facilitate correct-by-construction control for safe and interactive autonomy. In this talk, I will first discuss interactive autonomy, where we use algorithmic human-robot interaction to be mindful of the effects of autonomous systems on humans, and further leverage these effects for better safety, efficiency, coordination, and estimation. I will then talk about safe autonomy, where we provide correctness guarantees, while taking into account the uncertainty arising from the environment. Further, I will discuss a diagnosis and repair algorithm for systematic transfer of control to the human in unrealizable settings. While the algorithms and techniques introduced can be applied to many h-CPS applications, in this talk, I will focus on the implications of my work for semi-autonomous driving.

Thu, Oct 19 Jeannette Bohg Stanford University Combining learned and analytical models for predicting the effect of contact interaction STLC-111 11:00AM
Abstract

One of the most basic skills a robot should possess is predicting the effect of physical interactions with objects in the environment. Traditionally, these dynamics have been described by physics-based analytical models which may be very hard to formulate for complex problems. More recently, we have seen learning-based approaches that can predict the effect of complex physical interactions from raw sensory input. However, it is an open question how far these models generalise beyond their training data. In this talk, I propose a way to combine analytical and learned models to leverage the best of both worlds. The method takes raw sensory data as input and outputs the predicted effect. In our experiments, we compared the performance of the proposed model to a purely learned and a purely analytical model. Our results show that the combined method outperforms the purely learned version in terms of accuracy and generalisation to interactions and objects not seen during training. Beyond these empirical results, I will also present an in-depth analysis of why the purely learned model has difficulties in capturing the dynamics of this task and how the analytical model helps.
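One common way to combine the two model classes (a residual scheme in the spirit of, but not necessarily identical to, the method in the talk) is to learn only the discrepancy between observed outcomes and the analytical prediction. A minimal sketch with a hypothetical one-parameter residual fit:

```python
def analytical_model(x):
    """Simplified physics prediction (illustrative stand-in),
    e.g. the outcome of a frictionless contact model."""
    return 2.0 * x

def true_system(x):
    """'Real' dynamics: physics plus an unmodeled constant effect."""
    return 2.0 * x + 0.5

# Fit only the residual between data and the analytical model.
# Here a one-parameter bias in closed form; a neural network would
# play this role for complex, high-dimensional interactions.
xs = [0.0, 1.0, 2.0, 3.0]
residuals = [true_system(x) - analytical_model(x) for x in xs]
bias = sum(residuals) / len(residuals)

def hybrid_model(x):
    """Analytical prediction corrected by the learned residual."""
    return analytical_model(x) + bias
```

Because the learned component only corrects the physics rather than replacing it, the hybrid model inherits the analytical model's extrapolation behavior outside the training range.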

Fri, Nov 03 Maja Mataric University of Southern California Combining learned and analytical models for predicting the effect of contact interaction Gates 104 11:00AM
Abstract

Thu, Nov 16 Jean-Jacques Slotine MIT Evolvability and adaptation in robotic systems STLC-111 11:00AM
Abstract

The talk discusses recent nonlinear dynamic system tools and their robotics applications to collective behavior, adaptation, identification, SLAM, and biomimetic flight.

Thu, Nov 30 Franziska Meier Max Planck Institute for Intelligent Systems Continuously Learning Robots STLC-111 11:00AM
Abstract

Most robot learning approaches are focused on discrete, e.g. single-task, learning events. A policy is trained for specific environments and/or tasks, and then tested on similar problems. Yet, in order to be truly autonomous, robots need to be able to react to unexpected events and then update their models/policies to include the just-encountered data points. In short, true autonomy requires continual learning. However, continuously updating models without forgetting previously learned mappings remains an open research problem. In this talk I will present learning algorithms, based on localized inference schemes, that alleviate the problem of forgetting when learning incrementally. Finally, I will introduce our recent advances on learning-to-learn in the context of continual learning. We show that, with the help of our meta-learner, we achieve faster model adaptation when encountering new situations during online learning.
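The intuition behind localized inference schemes can be shown in miniature: if each input region has its own independent predictor, updates in one region cannot overwrite what another region has learned. This toy sketch (running means per region, my illustrative assumption, far simpler than the actual local models) shows why forgetting is alleviated:

```python
class LocalModels:
    """One independent running-mean predictor per input region, so
    updating one region leaves every other region untouched."""

    def __init__(self, n_regions, width):
        self.width = width
        self.sums = [0.0] * n_regions
        self.counts = [0] * n_regions

    def _region(self, x):
        return int(x // self.width)

    def update(self, x, y):
        r = self._region(x)
        self.sums[r] += y
        self.counts[r] += 1

    def predict(self, x):
        r = self._region(x)
        return self.sums[r] / self.counts[r] if self.counts[r] else 0.0

m = LocalModels(n_regions=2, width=1.0)
m.update(0.5, 1.0)            # learn task A (region 0)
pred_A_before = m.predict(0.5)
for _ in range(100):          # then train extensively on task B
    m.update(1.5, -1.0)       # (region 1); region 0 is never written
```

A single global model trained the same way would drift toward the task-B targets, which is exactly the catastrophic-forgetting failure mode the abstract describes.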

Mon, Dec 04 Philipp Hennig Max Planck Institute for Intelligent Systems Probabilistic Numerics — Uncertainty in Computation Packard 202 11:00AM
Abstract

The computational complexity of inference from data is dominated by the solution of non-analytic numerical problems (large-scale linear algebra, optimization, integration, the solution of differential equations). But a converse of sorts is also true — numerical algorithms for these tasks are inference engines! They estimate intractable, latent quantities by collecting the observable results of tractable computations. Because they also decide adaptively which computations to perform, these methods can be interpreted as autonomous inference agents. This observation lies at the heart of the emerging topic of Probabilistic Numerical Computation, which applies the concepts of probabilistic (Bayesian) inference to the design of algorithms, assigning a notion of probabilistic uncertainty to the result even of deterministic computations. I will outline how this viewpoint is connected to that of classic numerical analysis, and show that thinking about computation as inference affords novel, practical answers to the challenges of large-scale, big-data inference.
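The "numerical algorithms are inference agents" view can be made concrete with a toy integrator that maintains both an estimate and an explicit uncertainty, and adaptively decides when to stop computing. This is a minimal sketch using a Monte Carlo standard-error heuristic; probabilistic numerics proper uses richer Bayesian models (e.g. Gaussian-process quadrature) than this.

```python
import math, random

def probabilistic_integrate(f, a, b, tol=0.01, max_n=200000, seed=0):
    """Estimate the integral of f over [a, b] while tracking a
    standard-error uncertainty, stopping adaptively once the
    uncertainty falls below tol: computation as inference."""
    rng = random.Random(seed)
    n, s, s2 = 0, 0.0, 0.0
    stderr = float("inf")
    while n < max_n:
        y = f(rng.uniform(a, b)) * (b - a)   # one tractable computation
        n += 1
        s += y
        s2 += y * y
        if n >= 100:                         # need samples to estimate spread
            mean = s / n
            var = max(s2 / n - mean * mean, 0.0)
            stderr = math.sqrt(var / n)
            if stderr < tol:                 # adaptive stopping decision
                break
    return s / n, stderr

est, err = probabilistic_integrate(lambda x: x * x, 0.0, 1.0)
```

The returned pair (estimate, uncertainty) is the point: a deterministic quantity, here the integral of x² over [0, 1], is reported with a calibrated notion of how unsure the finite computation still is.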