Attention

The talks will be in-person.

The Stanford Robotics and Autonomous Systems Seminar series hosts both invited and internal speakers. The seminar aims to bring the campus-wide robotics community together and provide a platform for overviewing and discussing the progress and challenges across the various disciplines of robotics. This quarter, the seminar is also offered to students as a 1-unit course. Note that registration for the class is NOT required in order to attend the talks.

The course syllabus is available here. Go here for more course details.

The seminar is open to Stanford faculty, students, and sponsors.

Get Email Notifications

Sign up for the mailing list: Click here!

Schedule Winter 2024

Date Guest Affiliation Title Location Time
Fri, Jan 12 Hannah Stuart UC Berkeley Robots that aren't afraid of contact: An embodied approach Skilling Auditorium 12:30PM
Abstract

The world is rich with complex and varied mechanics, which often leads to robots that tend to avoid contact due to uncertainty. However, this richness also opens opportunities for new robotic mechanisms that creatively harness the local environment. In this talk, I'll focus on two recent case studies that use ambient fluids for resilient and compliant grippers with tactile sensing. These inventions apply to logistics pick-and-place automation, as well as more generalized applications. Time allowing, I will also provide another case study in granular media (i.e. sand) interaction for planetary robotics. Inspired by nature, the goal of this research is to access new and resilient robotic behaviors without an over-reliance on digital computing alone, but rather by harnessing morphological computation alongside active control.

Fri, Jan 19 Student Speaker 1 -- Won Kyung Do Stanford Improving Robotic Dexterity with Optical Tactile Sensor DenseTact Skilling Auditorium 12:30PM
Abstract

Dexterous manipulation, particularly of small everyday objects, remains a complex challenge in the field of robotics. In this talk, I will present two studies addressing these challenges with DenseTact, a soft optical tactile sensor. The first study introduces an innovative approach to inter-finger manipulation using a tactile sensor-equipped gripper. This development not only enhances grasping accuracy in cluttered environments but also facilitates improved manipulation and reorientation of small objects, enabling more precise classification. The second study addresses the challenges of grasping objects of varying sizes on flat surfaces. I will introduce the DenseTact-Mini, an optical tactile sensor featuring a soft, rounded, smooth gel surface, compact design, and a synthetic fingernail. This sensor enables the grasping of multi-scale objects using three distinct strategies for different sizes and masses of objects. This presentation will underscore how these advancements open new avenues in robotics, particularly in enhancing manipulation capabilities in complex scenarios where vision is limited due to occlusions.

Fri, Jan 19 Student Speaker 2 -- Annie Chen Stanford Single-Life Robot Deployment: Adapting On-the-Fly to Novel Scenarios Skilling Auditorium 12:30PM
Abstract

A major obstacle to the broad application of robots is their inability to adapt to unexpected circumstances, which limits their uses largely to tightly controlled environments. Even equipped with prior experience and pre-training, robots will inevitably encounter out-of-distribution (OOD) situations at deployment time that may require a large amount of on-the-fly adaptation. In this talk, I will first motivate and introduce the problem setting of single-life deployment, which provides a natural setting to study the challenge of autonomously adapting to unfamiliar situations. I will then present our recent work on this problem, Robust Autonomous Modulation (ROAM). By effectively identifying relevant behaviors on-the-fly, ROAM adapts more than twice as efficiently as existing methods when facing a variety of OOD situations during deployment. Crucially, this adaptation process all happens within a single episode at test time, without any human supervision.

Fri, Jan 26 Raphael Zufferey EPFL Flying robots: exploring hybrid locomotion and physical interaction Skilling Auditorium 12:30PM
Abstract

Autonomous flying robots have become widespread in recent years, yet their capability to interact with the environment remains limited. Moving in multiple fluids is one of the great challenges of mobile robotics, and carries great potential for application in biological and environmental studies. In particular, hybrid locomotion provides the means to cross large distances and obstacles, or even change from one body of water to another thanks to flight. At the same time, such robots are capable of operating underwater, collecting samples, video, and aquatic metrics. However, the challenges of operating in both air and water are complex. In this talk, we will introduce these challenges and cover several research solutions that aim to address them in different modalities, depending on locomotion and objectives. Bio-inspiration plays a crucial role in these solutions, and the topic of flapping flight in the context of physical interaction will also be presented.

Fri, Feb 02 Sunil Agrawal Columbia University Rehabilitation Robotics: Improving Functions of People with Impairments Skilling Auditorium 12:30PM
Abstract

Neural disorders, old age, and traumatic brain injury limit activities of daily living. Robotics can be used in novel ways to characterize human neuromuscular responses and retrain human functions. The Columbia University Robotics and Rehabilitation (ROAR) Laboratory designs innovative mechanisms/robots with these goals and performs scientific studies to improve human functions such as standing, walking, stair climbing, trunk control, head turning, and others. Human experiments have targeted individuals with stroke, cerebral palsy, Parkinson's disease, and ALS, as well as elderly subjects. The talk will provide an overview of these robotic technologies and the scientific studies performed with them, demonstrating the strong potential of rehabilitation robotics to improve human function and quality of life.

Fri, Feb 09 Paul Glick JPL Robotics Embodied Intelligence for Extreme Environments Skilling Auditorium 12:30PM
Abstract

Extreme environments penalize the sensing, actuation, computation, and communication that robotic systems rely upon. Compounding this challenge is the fact that these remote locations are often some of the least mapped areas on and beyond our planet. Structured compliance offers a pathway for robots to adapt to their environment at the mechanical level while preserving the strength to support payload mass and forceful interactions. This theme is explored across projects that include gripping in space, exploration of coral reefs, data acquisition under ice, and a cold-operable robotic arm.

Fri, Feb 16 Shuran Song Stanford Robot Skill Acquisition: Policy Representation and Data Generation Skilling Auditorium 12:30PM
Abstract

What do we need to take robot learning to the 'next level'? Is it better algorithms, improved policy representations, or advancements in affordable robot hardware? While all of these factors are undoubtedly important, what I really wish for is something that underpins all of them: the right data. In particular, we need data that is scalable, reusable, and robot-complete. While 'scale' often takes center stage in machine learning today, I would argue that in robotics, having data that is also both reusable and complete can be just as important. Focusing on sheer quantity and neglecting these properties makes it difficult for robot learning to benefit from the same scaling trend that other machine learning fields have enjoyed. In this talk, we will explore potential solutions to these data challenges, shed light on some of the often-overlooked hidden costs associated with each approach, and, more importantly, discuss how to potentially bypass these obstacles.

Fri, Feb 23 Dorsa Sadigh Stanford Robot Learning in the Era of Large Pretrained Models Skilling Auditorium 12:30PM
Abstract

In this talk, I will discuss how interactive robot learning can benefit from the rise of large pretrained models such as foundation models. I will introduce two perspectives. First I will discuss the role of pretraining when learning visual representations, and how language can guide learning grounded visual representations useful for downstream robotics tasks. I will then discuss the choice of datasets during pretraining. Specifically, how we could guide large scale data collection, and what constitutes high quality data for imitation learning. I will discuss some recent work around guiding data collection based on enabling compositional generalization of learned policies. Finally, I will end the talk by discussing a few creative ways of tapping into the rich context of large language models and vision-language models for robotics.

Fri, Mar 01 Renee Zhao Stanford Multifunctional Origami Robots Skilling Auditorium 12:30PM
Abstract

Millimeter/centimeter-scale origami robots have recently been explored for biomedical applications due to their inherent shape-morphing capability. However, they mainly rely on passive and/or irreversible deformation, which significantly hinders on-demand clinical function. Here, we report magnetically actuated origami robots that can crawl and swim for effective locomotion and targeted drug delivery in severely confined spaces and aqueous environments. We design our robots based on origami, whose thin-shell structure 1) provides an internal cavity for drug storage, 2) permits torsion-induced contraction as a crawling mechanism and a pumping mechanism for controllable liquid medicine dispensing, 3) serves as propellers that spin for propulsion to swim, and 4) offers anisotropic stiffness to overcome the large resistance from the severely confined spaces in biomedical environments. These magnetic origami robots can potentially serve as minimally invasive devices for biomedical diagnoses and treatments.

Fri, Mar 08 Dragomir Anguelov Waymo ML Recipes for Building a Scalable Autonomous Driving Agent Skilling Auditorium 12:30PM
Abstract

Machine learning has proven to be a key ingredient in building a performant and scalable Autonomous Vehicle stack, spanning key capabilities such as perception, behavior prediction, planning and simulation. In this talk, I will describe recent Waymo research on performant ML models and architectures that help us handle the variety and complexity of the real world driving environment, and I will outline key remaining research challenges in our domain.

Sponsors

The Stanford Robotics and Autonomous Systems Seminar enjoys the support of the following sponsors.