The Stanford Robotics and Autonomous Systems Seminar series hosts both invited and internal speakers. The seminar aims to bring the campus-wide robotics community together and to provide a platform for reviewing and discussing progress and challenges across the many disciplines of robotics. This quarter, the seminar is also offered to students as a 1-unit course. Note that registration for the class is NOT required in order to attend the talks.

The seminar is open to Stanford faculty, students, and sponsors.

Get Email Notifications

Sign up for the mailing list: Click here!

Schedule Winter 2020

Date Guest Affiliation Title Location Time
Fri, Jan 10 Sandeep Chinchali Stanford University Distributed Perception and Learning Between Robots and the Cloud NVIDIA Auditorium 11:00AM
Abstract

Today’s robotic fleets are increasingly facing two coupled challenges. First, they are measuring growing volumes of high-bitrate video and LIDAR sensory streams, which, second, require them to use increasingly compute-intensive models, such as deep neural networks (DNNs), for downstream perception or control. To cope with these challenges, compute- and storage-limited robots, such as low-power drones, can offload data to central servers (or “the cloud”) for more accurate real-time perception as well as offline model learning. However, cloud processing of robotic sensory streams introduces acute systems bottlenecks, ranging from network delay for real-time inference to cloud storage, human annotation, and cloud-computing cost for offline model learning. In this talk, I will present learning-based approaches that let robots improve model performance with cloud offloading at minimal systems cost. For real-time inference, I will present an offloader, based on deep reinforcement learning, that decides when a robot should exploit low-latency on-board computation and when, if highly uncertain, it should query a more accurate cloud model. Then, for continual learning, I will present an intelligent on-robot sampler that mines real-time sensory streams for valuable training examples to send to the cloud for model re-training. Using insights from months of field data and experiments on state-of-the-art embedded deep learning hardware, I will show how simple learning algorithms allow robots to significantly exceed their stand-alone sensing and control performance, with limited communication cost.
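The offloading decision described in the abstract can be illustrated with a much simpler baseline than the talk's deep-RL policy: gate offloading on the on-board model's confidence. The sketch below is a hypothetical illustration of that idea, not the speaker's method; the threshold value and function names are assumptions.

```python
import math

def softmax(logits):
    """Convert raw on-board model scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def should_offload(onboard_logits, confidence_threshold=0.8):
    """Offload to the cloud model when the on-board model's top-class
    probability falls below the threshold (i.e., it is highly uncertain).
    The 0.8 threshold is an illustrative choice, not a value from the talk."""
    probs = softmax(onboard_logits)
    return max(probs) < confidence_threshold

# Confident on-board prediction: keep computation local.
print(should_offload([4.0, 0.1, 0.2]))   # -> False
# Ambiguous prediction: query the more accurate cloud model.
print(should_offload([1.0, 0.9, 0.8]))   # -> True
```

The talk's contribution is precisely that a learned policy can beat fixed thresholds like this one by also accounting for network delay and query cost.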

Fri, Jan 17 Christoffer Heckman CU Boulder Robotic Autonomy and Perception in Challenging Environments NVIDIA Auditorium 11:00AM
Abstract

Perception precedes action, in the biological world as well as in the technologies maturing today that will bring us autonomous cars, aerial vehicles, robotic arms, and mobile platforms. The problem of probabilistic state estimation from sensor measurements takes on a variety of forms, yielding information about our own motion as well as the structure of the world around us. In this talk, I will discuss approaches my research group has been developing that estimate these quantities online and in real time in extreme environments, where dust, fog, and other visually obscuring phenomena are widespread and where sensor calibration is altered or degrades over time. These approaches include new techniques in computer vision, visual-inertial SLAM, geometric reconstruction, nonlinear optimization, and even some sensor development. The methods I discuss have an application-specific focus on ground vehicles in the subterranean environment, but are also currently deployed in agriculture, search and rescue, and industrial human-robot collaboration contexts.

Fri, Jan 24 Takumi Kamioka Honda ASIMO Motion planning of bipedal robots based on Divergent Component of Motion NVIDIA Auditorium 11:00AM
Abstract

Honda has been developing bipedal humanoid robots for more than 30 years. As part of this work, we have demonstrated several locomotion abilities of humanoid robots, such as robust walking, running, jumping, and quadrupedal walking. A key concept behind these abilities is the divergent component of motion (DCM). The DCM is a component of the robot's center-of-mass dynamics and must be controlled properly because of its divergent property. We derived it via eigenvalue decomposition, but equivalent quantities have been proposed independently by other researchers. In this talk, I will give the definition and properties of the DCM and show how it is applied to robot locomotion.
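For context, a standard formulation of the DCM (this is the form common in the capture-point literature, under the linear inverted pendulum assumption; it is not necessarily Honda's exact derivation): with center-of-mass position $x$, zero-moment point $p$, and natural frequency $\omega = \sqrt{g/z_0}$ for constant CoM height $z_0$, the CoM dynamics are

```latex
% Linear inverted pendulum model of the CoM
\ddot{x} = \omega^2 (x - p)
% Eigen-decomposition splits this second-order system into a stable mode
% and an unstable (divergent) first-order mode; the latter is the DCM:
\xi = x + \frac{\dot{x}}{\omega}, \qquad \dot{\xi} = \omega\,(\xi - p)
```

Since $\dot{\xi} = \omega(\xi - p)$ has a positive eigenvalue, the DCM $\xi$ diverges unless the ZMP $p$ is steered to track it, while the remaining CoM component converges on its own; this is why controlling the DCM alone suffices to stabilize locomotion.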

Fri, Jan 31 Mark Yim UPenn TBA NVIDIA Auditorium 11:00AM
Abstract

TBA

Fri, Feb 07 Leila Takayama UC Santa Cruz TBA NVIDIA Auditorium 11:00AM
Abstract

TBA

Fri, Feb 14 Aaron Ames Caltech TBA NVIDIA Auditorium 11:00AM
Abstract

TBA

Fri, Feb 21 Sarah Dean UC Berkeley TBA NVIDIA Auditorium 11:00AM
Abstract

TBA

Fri, Feb 28 Dieter Fox UW/NVIDIA TBA NVIDIA Auditorium 11:00AM
Abstract

TBA

Fri, Mar 06 Alberto Rodriguez MIT TBA NVIDIA Auditorium 11:00AM
Abstract

TBA

Fri, Mar 13 Stefanie Tellex Brown TBA NVIDIA Auditorium 11:00AM
Abstract

TBA

Sponsors

The Stanford Robotics and Autonomous Systems Seminar enjoys the support of the following sponsors.