Shreyas Kousik

Shreyas is a postdoctoral scholar in Stanford's Department of Aeronautics and Astronautics. He received his B.S. in mechanical engineering from Georgia Tech in 2014, and his M.S. and Ph.D. in mechanical engineering from the University of Michigan, Ann Arbor, in 2020.

Shreyas’ research interests include geometric and numerical representations that enable fast nonlinear optimization. His Ph.D. dissertation focused on using reachability analysis to generate such representations for robot motion planning. Currently, he focuses on applying reachability analysis beyond motion planning, to robot state estimation, navigation, and perception.

In his free time, Shreyas enjoys playing guitar and motorcycling.

Awards:

  • J. Robert Beyster Computational Innovation Graduate Fellowship (2019–2020)
  • NSF GRFP Honorable Mention
  • ASME DSCC Best Student Paper (2019)

ASL Publications

  1. M. Selim, A. Alanwar, S. Kousik, G. Gao, M. Pavone, and K. Johansson, “Safe Reinforcement Learning Using Black-Box Reachability Analysis,” IEEE Robotics and Automation Letters, 2022. (Submitted)

    Abstract: Reinforcement learning (RL) is capable of sophisticated motion planning and control for robots in uncertain environments. However, state-of-the-art deep RL approaches typically lack safety guarantees, especially when the robot and environment models are unknown. To justify widespread deployment, robots must respect safety constraints without sacrificing performance. Thus, we propose a Black-box Reachability-based Safety Layer (BRSL) with three main components: (1) data-driven reachability analysis for a black-box robot model, (2) a “dreaming” trajectory planner that hallucinates future actions and observations using an ensemble of neural networks trained online, and (3) a differentiable polytope collision check between the reachable set and obstacles that enables correcting unsafe actions. In simulation, BRSL outperforms other state-of-the-art safe RL methods on a Turtlebot 3, a quadrotor, and a trajectory-tracking point mass with an unsafe set adjacent to the area of highest reward.
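    The collision-check component above tests a data-driven reachable set against obstacles. As an illustrative sketch only (not the paper's actual differentiable polytope check, whose details are in the article), the safety margin between a zonotopic reachable set Z = {c + G·ξ : ‖ξ‖∞ ≤ 1} and a single half-space obstacle {x : aᵀx ≤ b} has a closed form that is differentiable almost everywhere in the zonotope parameters; all variable names here are hypothetical:

    ```python
    import numpy as np

    def zonotope_halfspace_margin(c, G, a, b):
        """Signed margin between a zonotope Z = {c + G @ xi, |xi| <= 1}
        and the half-space obstacle {x : a @ x <= b}.

        Margin > 0 means every point of Z satisfies a @ x > b,
        i.e., the reachable set lies strictly outside the obstacle.
        """
        # Minimum of a @ x over Z is a @ c - sum_i |a @ g_i|,
        # where g_i are the generator columns of G.
        min_over_Z = a @ c - np.sum(np.abs(a @ G))
        return min_over_Z - b

    # Example: a box of half-width 0.5 centered at (2, 0),
    # against the obstacle {x : x_1 <= 1}.
    c = np.array([2.0, 0.0])
    G = np.array([[0.5, 0.0],
                  [0.0, 0.5]])
    a = np.array([1.0, 0.0])
    print(zonotope_halfspace_margin(c, G, a, 1.0))  # 0.5 (safe)
    print(zonotope_halfspace_margin(c, G, a, 2.0))  # -0.5 (unsafe)
    ```

    Because the margin is a smooth function of c and G away from sign changes, a planner can backpropagate through it to push an unsafe action toward safety, which is the spirit of component (3) in the abstract.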

    @article{SelimAlanwarEtAl2022,
      author = {Selim, M. and Alanwar, A. and Kousik, S. and Gao, G. and Pavone, M. and Johansson, K.},
      title = {Safe Reinforcement Learning Using Black-Box Reachability Analysis},
      journal = {{IEEE Robotics and Automation Letters}},
      year = {2022},
      note = {Submitted},
      keywords = {sub},
      owner = {skousik},
      timestamp = {2022-03-03}
    }