COVID-19

Due to the current guidelines set in place by the university and Santa Clara County, we unfortunately have to suspend the Robotics Seminar for the time being. Please check back for updates.

The Stanford Robotics and Autonomous Systems Seminar series hosts both invited and internal speakers. The seminar aims to bring the campus-wide robotics community together and to provide a platform for surveying and discussing progress and challenges across the various disciplines of robotics. This quarter, the seminar is also offered to students as a 1-unit course. Note that registration for the class is NOT required in order to attend the talks.

The seminar is open to Stanford faculty, students, and sponsors.

Get Email Notifications

Sign up for the mailing list: Click here!

Schedule Winter 2020

Date | Guest | Affiliation | Title | Location | Time
Fri, Jan 10 | Sandeep Chinchali | Stanford University | Distributed Perception and Learning Between Robots and the Cloud | NVIDIA Auditorium | 11:00AM
Abstract

Today’s robotic fleets face two coupled challenges: they collect growing volumes of high-bitrate video and LIDAR sensory streams, and processing those streams requires increasingly compute-intensive models, such as deep neural networks (DNNs), for downstream perception or control. To cope with these challenges, compute- and storage-limited robots, such as low-power drones, can offload data to central servers (or “the cloud”) for more accurate real-time perception as well as offline model learning. However, cloud processing of robotic sensory streams introduces acute systems bottlenecks, ranging from network delay for real-time inference to cloud storage, human annotation, and cloud-computing cost for offline model learning. In this talk, I will present learning-based approaches that let robots improve model performance through cloud offloading at minimal systems cost. For real-time inference, I will present a deep reinforcement learning-based offloader that decides when a robot should exploit low-latency on-board computation or, when highly uncertain, query a more accurate cloud model. Then, for continual learning, I will present an intelligent on-robot sampler that mines real-time sensory streams for valuable training examples to send to the cloud for model retraining. Using insights from months of field data and experiments on state-of-the-art embedded deep learning hardware, I will show how simple learning algorithms allow robots to significantly transcend their on-board sensing and control performance with limited communication cost.
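
The real-time offloading decision described above can be illustrated with a minimal sketch in Python (the function names and fixed entropy threshold below are hypothetical stand-ins; the actual offloader in the talk is learned with deep reinforcement learning and also accounts for network delay and cost):

```python
import numpy as np

def predictive_entropy(logits):
    """Entropy of the on-board model's softmax distribution, used as a simple uncertainty proxy."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def should_offload(logits, entropy_threshold=1.0, network_available=True):
    """Query the more accurate cloud model only when the on-board model is highly uncertain.

    A fixed threshold stands in for the learned offloading policy described in the
    abstract, which additionally weighs latency and bandwidth cost.
    """
    return network_available and predictive_entropy(logits) > entropy_threshold

# Example: a confident on-board prediction stays local; a near-uniform one triggers offloading.
print(should_offload(np.array([9.0, 0.5, 0.2])))   # False
print(should_offload(np.array([1.0, 1.1, 0.9])))   # True
```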

Fri, Jan 17 | Christoffer Heckman | CU Boulder | Robotic Autonomy and Perception in Challenging Environments | NVIDIA Auditorium | 11:00AM
Abstract

Perception precedes action, in both the biological world and in the technologies maturing today that will bring us autonomous cars, aerial vehicles, robotic arms, and mobile platforms. The problem of probabilistic state estimation from sensor measurements takes on a variety of forms, yielding information about our own motion as well as the structure of the world around us. In this talk, I will discuss approaches my research group has been developing to estimate these quantities online and in real time in extreme environments where dust, fog, and other visually obscuring phenomena are widespread and where sensor calibration is altered or degrades over time. These approaches include new techniques in computer vision, visual-inertial SLAM, geometric reconstruction, nonlinear optimization, and even some sensor development. The methods I discuss focus on ground vehicles in subterranean environments, but they are also currently deployed in agriculture, search and rescue, and industrial human-robot collaboration contexts.

Fri, Jan 24 | Takumi Kamioka | Honda ASIMO | Motion planning of bipedal robots based on Divergent Component of Motion | NVIDIA Auditorium | 11:00AM
Abstract

Honda has been developing bipedal humanoid robots for more than 30 years. As part of this work, we have demonstrated several locomotion abilities of humanoid robots, such as robust walking, running, jumping, and quadrupedal walking. A key concept behind these abilities is the divergent component of motion (DCM). The DCM is a component of the robot's center-of-mass dynamics that must be controlled carefully because of its divergent nature. We derived it via eigenvalue decomposition, although equivalent quantities have been proposed independently by other researchers. In this talk, I will give the definition and properties of the DCM and show how it can be applied to robot locomotion.
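
For reference, a common way to write the DCM is via the linear inverted pendulum model; the sketch below is a standard textbook formulation and may differ in detail from the derivation presented in the talk:

```latex
% Linear inverted pendulum sketch (assumed model): x is the CoM position,
% z_0 the constant CoM height, p the ZMP/CoP, and \omega = \sqrt{g / z_0}.
\xi = x + \frac{\dot{x}}{\omega}
% The CoM converges toward the DCM, while the DCM diverges away from the ZMP:
\dot{x} = -\omega\,(x - \xi), \qquad \dot{\xi} = \omega\,(\xi - p)
% Stable locomotion therefore amounts to choosing footsteps/CoP p so that \xi remains bounded.
```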

Fri, Jan 31 | Mark Yim | UPenn | Challenges to Developing Low Cost Robotic Systems | NVIDIA Auditorium | 11:00AM
Abstract

The promise of robot systems as initially imagined in science fiction is that of generic machines capable of doing a variety of tasks, often mimicking humans. It turns out that achieving this can be very expensive, which is keeping robotic systems from having impact in today's society. One of the challenges is overcoming the perception that the pursuit of low cost is "just engineering". This talk will present some general principles for designing low-cost systems, along with specific examples of novel devices ranging from mechatronic components (sensors and actuators) and robotic components (grippers) to full systems (flying systems). In each case we will present practical examples of methods that can be applied today.

Fri, Feb 07 | Leila Takayama | UC Santa Cruz | Designing More Effective Remote Presence Systems for Human Connection and Exploration | NVIDIA Auditorium | 11:00AM
Abstract

As people speculate about what the future of robots in the workplace will look like, this could be a good time to realize that we already live in that future. We actually know a lot about what it’s like to telecommute to work every day via a telepresence robot. Coming from a human-robot interaction perspective, I’ll present research lessons learned from several years of fielding telepresence robot prototypes in companies and running controlled experiments in the lab to figure out how to better support remote collaboration between people. Building on that work, I will share some recent research on professional robot operators, including service robot operators, drone pilots, and deep-sea robot operators. Finally, I will share our current research on identifying needs and opportunities for designing robotic systems that better support humans in the loop.

Fri, Feb 14 | Aaron Ames | Caltech | Safety-Critical Control of Dynamic Robots | NVIDIA Auditorium | 11:00AM
Abstract

Science fiction has long promised a world of robotic possibilities: from humanoid robots in the home, to wearable robotic devices that restore and augment human capabilities, to swarms of autonomous robotic systems forming the backbone of the cities of the future, to robots enabling exploration of the cosmos. With the goal of ultimately achieving these capabilities on robotic systems, this talk will present a unified nonlinear control framework for realizing dynamic behaviors in an efficient, provably stable (via control Lyapunov functions) and safety-critical fashion (as guaranteed by control barrier functions). The application of these ideas will be demonstrated experimentally on a wide variety of robotic systems, including multi-robot systems with guaranteed safe behavior, bipedal and humanoid robots capable of achieving dynamic walking and running behaviors that display the hallmarks of natural human locomotion, and robotic assistive devices (including prostheses and exoskeletons) aimed at restoring mobility. The ideas presented will be framed in the broader context of seeking autonomy on robotic systems with the goal of getting robots into the real world.
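
For context, control Lyapunov and control barrier functions are often combined in a pointwise quadratic program; the formulation below is a generic sketch (with assumed symbols) rather than the exact controller used in the talk:

```latex
% Control-affine dynamics \dot{x} = f(x) + g(x)\,u, with CLF V(x) for stability
% and CBF h(x) defining the safe set h(x) \ge 0; L_f, L_g denote Lie derivatives.
u^*(x) = \arg\min_{u,\ \delta}\; \|u - u_{\mathrm{des}}(x)\|^2 + \rho\,\delta^2
\quad \text{s.t.} \quad
L_f V(x) + L_g V(x)\,u \le -\gamma\,V(x) + \delta \;\;\text{(stability, relaxed)},
\qquad
L_f h(x) + L_g h(x)\,u \ge -\alpha\,h(x) \;\;\text{(safety, hard constraint)}
```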

Fri, Feb 21 | Sarah Dean | UC Berkeley | Safe and Robust Perception-Based Control | NVIDIA Auditorium | 11:00AM
Abstract

Machine learning provides a promising path to distilling information from high-dimensional sensors like cameras -- a fact that often serves as motivation for merging learning with control. This talk aims to provide rigorous guarantees for systems with such learned perception components in closed loop. Our approach consists of characterizing uncertainty in perception and then designing a robust controller to account for these errors. We use a framework that handles uncertainties explicitly, allowing us to provide performance guarantees and to illustrate how trade-offs arise from limitations of the training data. Throughout, I will motivate this work with the example of autonomous vehicles, including both simulated experiments and an implementation on a 1/10-scale autonomous car. Joint work with Aurelia Guy, Nikolai Matni, Ben Recht, Rohan Sinha, and Vickie Ye.
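
A minimal sketch of the type of guarantee involved, with notation assumed here rather than taken from the talk:

```latex
% A learned perception map p takes a camera image z_t to an estimate of the output C x_t.
% If the perception error is bounded for states near the training data,
\| p(z_t) - C x_t \| \le \varepsilon_p \quad \text{whenever } x_t \text{ is close to the training distribution},
% then a controller synthesized to be robust to bounded measurement error yields closed-loop
% performance bounds that degrade gracefully with \varepsilon_p and with distance from the training data.
```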

Fri, Feb 28 | Dieter Fox | UW/NVIDIA | Toward robust manipulation in complex environments | NVIDIA Auditorium | 11:00AM
Abstract

Over the last few years, advances in deep learning and GPU-based computing have enabled significant progress in several areas of robotics, including visual recognition, real-time tracking, object manipulation, and learning-based control. This progress has turned autonomous driving and delivery tasks in warehouses, hospitals, or hotels into realistic application scenarios. However, robust manipulation in complex settings is still an open research problem. Various research efforts show promising results on individual pieces of the manipulation puzzle, including manipulator control, touch sensing, object pose detection, task and motion planning, and object pickup. In this talk, I will present our recent work on integrating such components into a complete manipulation system. Specifically, I will describe a mobile robot manipulator that moves through a kitchen, opens and closes cabinet doors and drawers, detects and picks up objects, and moves them to desired locations. Our baseline system is designed to be applicable in a wide variety of environments, relying only on 3D articulated models of the kitchen and the relevant objects. I will discuss the design choices behind our approach, the lessons we have learned so far, and various research directions toward enabling more robust and general manipulation systems.

Fri, Mar 06 | Laura Matloff | Stanford | Designing bioinspired aerial robots with feathered morphing wings | NVIDIA Auditorium | 11:00AM
Abstract

Birds are a source of design inspiration for aerial robots, as they can still outmaneuver current man-made fliers of similar size and weight. I study their ability to seamlessly morph their wings through large shape changes during gliding flight, and use biological measurements to drive mechanical design. I measure the wing feather and bone kinematics, investigate adjacent feather interactions, and examine feather microstructures to inform the design of PigeonBot, a biohybrid feathered robot. The feathered morphing wing design principles can also be adapted to other bird species, and even artificial feathers. This work was done in collaboration with Eric Chang, Amanda Stowers, Teresa Feo, Lindsie Jeffries, Sage Manier, and David Lentink.

Sponsors

The Stanford Robotics and Autonomous Systems Seminar enjoys the support of the following sponsors.