Shreyas is an assistant professor at Georgia Tech in the George W. Woodruff School of Mechanical Engineering and a former postdoctoral scholar at Stanford’s Department of Aeronautics and Astronautics. He completed his B.S. degree in mechanical engineering from Georgia Tech in 2014, and his M.S. and Ph.D. degrees in mechanical engineering from the University of Michigan – Ann Arbor in 2020.
Shreyas’ research interests include geometric and numerical representations that enable fast nonlinear optimization. His Ph.D. dissertation focused on using reachability analysis to generate such representations for robot motion planning. Currently, he is working on applying reachability analysis beyond motion planning to robot state estimation, navigation, and perception.
In his free time, Shreyas enjoys playing guitar and motorcycling.
Abstract: The past few years have seen immense progress on two fronts that are critical to safe, widespread mobile robot deployment: predicting uncertain motion of multiple agents, and planning robot motion under uncertainty. However, the numerical methods required on each front have resulted in a mismatch of representation for prediction and planning. In prediction, numerical tractability is usually achieved by coarsely discretizing time, and by representing multimodal multi-agent interactions as distributions with infinite support. On the other hand, safe planning typically requires very fine time discretization, paired with distributions with compact support, to reduce conservativeness and ensure numerical tractability. As a result, coupling existing predictors with planning and control often yields unsafe motion plans. This paper proposes ZAPP (Zonotope Agreement of Prediction and Planning) to resolve the representation mismatch. ZAPP unites a prediction-friendly coarse time discretization and a planning-friendly zonotope uncertainty representation; the method also enables differentiating through a zonotope collision check, allowing one to integrate prediction and planning within a gradient-based optimization framework. Numerical examples show how ZAPP can produce safer trajectories compared to baselines in interactive scenes.
@inproceedings{PaparussoKousikEtAl2024,
  title     = {{ZAPP!} Zonotope Agreement of Prediction and Planning for Continuous-Time Collision Avoidance with Discrete-Time Dynamics},
  author    = {Paparusso, L. and Kousik, S. and Schmerling, E. and Braghin, F. and Pavone, M.},
  booktitle = {{Proc. IEEE Conf. on Robotics and Automation}},
  year      = {2024},
  keywords  = {sub},
  owner     = {rdyro},
  timestamp = {2023-09-28},
  url       = {/wp-content/papercite-data/pdf/Paparusso.ea.ICRA24.pdf}
}
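To make the abstract's "differentiating through a zonotope collision check" concrete, here is a minimal sketch of a differentiable separation margin between two 2D zonotopes. It is not the ZAPP implementation; the PyTorch usage, the function name `zonotope_collision_margin`, and the facet-margin formulation are illustrative assumptions. It relies on the fact that two zonotopes intersect exactly when the difference of their centers lies in the zonotope formed by stacking their generators.

```python
# A minimal sketch (not the authors' code) of a differentiable zonotope
# collision check in 2D. A zonotope is Z = {c + G @ b : ||b||_inf <= 1}.
import torch

def zonotope_collision_margin(c1, G1, c2, G2):
    """Scalar margin: > 0 means the two 2D zonotopes are separated,
    <= 0 means they intersect. Built from autograd-friendly operations so a
    gradient-based planner can back-propagate through the constraint."""
    G = torch.cat([G1, G2], dim=1)            # combined generators, shape (2, m)
    p = c1 - c2                               # point to test against Z(0, G)
    # In 2D, each facet normal is perpendicular to one generator.
    n = torch.stack([-G[1, :], G[0, :]], dim=0)          # shape (2, m)
    n = n / (n.norm(dim=0, keepdim=True) + 1e-9)
    # Facet offsets: d_i = sum_j |n_i . g_j| (support function of Z(0, G)).
    d = (n.T @ G).abs().sum(dim=1)                       # shape (m,)
    # p lies inside Z(0, G) iff |n_i . p| <= d_i for every facet.
    violation = (n.T @ p).abs() - d                      # shape (m,)
    return violation.max()

# Usage: penalize collision between a planned robot zonotope and a predicted
# agent zonotope inside a gradient-based trajectory optimizer.
c_robot = torch.tensor([0.0, 0.0], requires_grad=True)
G_robot = torch.tensor([[0.5, 0.0], [0.0, 0.3]])
c_agent = torch.tensor([0.6, 0.1])
G_agent = torch.tensor([[0.4, 0.1], [0.0, 0.2]])
margin = zonotope_collision_margin(c_robot, G_robot, c_agent, G_agent)
loss = torch.relu(0.1 - margin)   # hinge loss encouraging a positive separation margin
loss.backward()                   # gradient with respect to the planned center
```

The margin is composed of smooth operations plus abs and max, so it is subdifferentiable and can serve as a constraint or penalty in the kind of gradient-based optimization framework the abstract describes.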
Abstract: Robots require a semantic understanding of their surroundings to operate in an efficient and explainable way in human environments. In the literature, there has been an extensive focus on object labeling and exhaustive scene graph generation; less effort has gone into purely identifying and mapping large semantic regions. The present work proposes a method for semantic region mapping via embodied navigation in indoor environments, generating a high-level representation of the knowledge of the agent. To enable region identification, the method uses a vision-to-language model to provide scene information for mapping. By projecting egocentric scene understanding into the global frame, the proposed method generates a semantic map as a distribution over possible region labels at each location. This mapping procedure is paired with a trained navigation policy to enable autonomous map generation. The proposed method significantly outperforms a variety of baselines, including an object-based system and a pretrained scene classifier, in experiments in a photorealistic simulator.
@inproceedings{BigazziEtAl2024,
  title     = {Mapping High-level Semantic Regions in Indoor Environments without Object Recognition},
  author    = {Bigazzi, R. and Baraldi, L. and Kousik, S. and Cucchiara, R. and Pavone, M.},
  booktitle = {{Proc. IEEE Conf. on Robotics and Automation}},
  year      = {2024},
  keywords  = {pub},
  owner     = {rdyro},
  timestamp = {2023-09-28},
  url       = {https://arxiv.org/abs/2403.07076}
}
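To illustrate what a map that stores "a distribution over possible region labels at each location" might look like, here is a small sketch of a per-cell label belief grid updated from egocentric scene predictions already projected into the global frame. The grid layout, the example label set, the class name `RegionBeliefMap`, and the log-space naive-Bayes fusion are assumptions for illustration, not the paper's method.

```python
# A minimal sketch of a semantic region map as a per-cell distribution over
# region labels, fused from per-frame scene-level predictions.
import numpy as np

REGIONS = ["kitchen", "bedroom", "bathroom", "living room", "corridor"]  # example label set

class RegionBeliefMap:
    def __init__(self, height, width, n_labels):
        # Log-space accumulator of label evidence; uniform prior everywhere.
        self.log_belief = np.zeros((height, width, n_labels))

    def update(self, cells, label_probs):
        """Fuse one frame's label distribution into the map.
        cells: (N, 2) integer grid coordinates observed in this frame,
               i.e. egocentric observations projected to the global frame.
        label_probs: (n_labels,) distribution from a scene-understanding model."""
        log_p = np.log(np.clip(label_probs, 1e-6, 1.0))
        rows, cols = cells[:, 0], cells[:, 1]
        self.log_belief[rows, cols] += log_p   # independent-evidence (naive Bayes) fusion

    def label_map(self):
        """Most likely region label per cell."""
        return self.log_belief.argmax(axis=-1)

    def distribution(self):
        """Normalized per-cell distribution over region labels."""
        b = self.log_belief - self.log_belief.max(axis=-1, keepdims=True)
        p = np.exp(b)
        return p / p.sum(axis=-1, keepdims=True)

# Usage: one simulated observation covering a few cells.
m = RegionBeliefMap(height=64, width=64, n_labels=len(REGIONS))
seen = np.array([[10, 10], [10, 11], [11, 10]])
probs = np.array([0.7, 0.1, 0.05, 0.1, 0.05])   # e.g., "kitchen" is most likely
m.update(seen, probs)
print(REGIONS[m.label_map()[10, 10]])
```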
Abstract: Reinforcement learning (RL) is capable of sophisticated motion planning and control for robots in uncertain environments. However, state-of-the-art deep RL approaches typically lack safety guarantees, especially when the robot and environment models are unknown. To justify widespread deployment, robots must respect safety constraints without sacrificing performance. Thus, we propose a Black-box Reachability-based Safety Layer (BRSL) with three main components: (1) data-driven reachability analysis for a black-box robot model, (2) a “dreaming” trajectory planner that hallucinates future actions and observations using an ensemble of neural networks trained online, and (3) a differentiable polytope collision check between the reachable set and obstacles that enables correcting unsafe actions. In simulation, BRSL outperforms other state-of-the-art safe RL methods on a Turtlebot 3, a quadrotor, and a trajectory-tracking point mass with an unsafe set adjacent to the area of highest reward.
@article{SelimAlanwarEtAl2022,
  title     = {Safe Reinforcement Learning Using Black-Box Reachability Analysis},
  author    = {Selim, M. and Alanwar, A. and Kousik, S. and Gao, G. and Pavone, M. and Johansson, K.},
  journal   = {{IEEE Robotics and Automation Letters}},
  volume    = {7},
  number    = {4},
  pages     = {10665-10672},
  year      = {2022},
  owner     = {rdyro},
  timestamp = {2022-07-22},
  url       = {https://arxiv.org/pdf/2204.07417}
}
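The sketch below illustrates, under strong simplifying assumptions, the flavor of data-driven reachability that a safety layer like BRSL builds on: from logged transitions of a black-box system, fit a model, bound its residual, and propagate a zonotope of possible states one step forward. The single least-squares linear model, the empirical residual bound, and the function names are illustrative assumptions, not the BRSL algorithm itself.

```python
# A minimal sketch of data-driven one-step reachability for a black-box
# system: fit x_{k+1} ~ A x_k + B u_k from data, then propagate a zonotope
# (center c, generators G) and inflate it by the observed residual.
import numpy as np

def fit_model_and_residual(X, U, X_next):
    """Least-squares fit of X_next ~ A @ x + B @ u, plus a per-dimension
    bound on the worst-case residual seen in the data (empirical, not a
    formal guarantee)."""
    Phi = np.hstack([X, U])                        # (N, nx + nu)
    W, *_ = np.linalg.lstsq(Phi, X_next, rcond=None)
    A, B = W[:X.shape[1]].T, W[X.shape[1]:].T      # (nx, nx), (nx, nu)
    resid = np.abs(X_next - Phi @ W).max(axis=0)   # (nx,)
    return A, B, resid

def propagate_zonotope(c, G, A, B, u, resid):
    """One-step reachable set: image of the current zonotope under the fitted
    model, inflated by the residual bound as axis-aligned generators."""
    c_next = A @ c + B @ u
    G_next = np.hstack([A @ G, np.diag(resid)])
    return c_next, G_next

# Usage with synthetic data from a system unknown to the planner.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
U = rng.uniform(-1, 1, size=(200, 1))
A_true, B_true = np.array([[1.0, 0.1], [0.0, 0.9]]), np.array([[0.0], [0.1]])
X_next = X @ A_true.T + U @ B_true.T + 0.01 * rng.standard_normal((200, 2))

A, B, resid = fit_model_and_residual(X, U, X_next)
c0, G0 = np.zeros(2), 0.05 * np.eye(2)             # initial state zonotope
c1, G1 = propagate_zonotope(c0, G0, A, B, np.array([0.5]), resid)
print("reachable-set center:", c1, "\ngenerators:\n", G1)
```

In a safety layer, such a reachable set would then be checked against obstacles (for example with a polytope or zonotope collision check like the one sketched above), and actions that make the check fail would be corrected.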