Sandeep Chinchali

Contacts:

Email: csandeep at stanford dot edu


Sandeep is a postdoctoral scholar in the ASL Lab and will be an assistant professor in the ECE department at UT Austin starting in Fall 2021. He completed his PhD in computer science at Stanford, where he was advised by Marco Pavone and Sachin Katti. Previously, he was the first principal data scientist at Uhana, a Stanford startup working on data-driven optimization of cellular networks, since acquired by VMware. Prior to Stanford, he graduated from Caltech, where he worked on robotics at NASA’s Jet Propulsion Lab (JPL). He is a recipient of the Stanford Graduate Fellowship and National Science Foundation (NSF) fellowships.


Currently at University of Texas at Austin

ASL Publications

  1. M. Nakanoya, S. S. Narasimhan, S. Bhat, A. Anemogiannis, A. Datta, S. Katti, S. Chinchali, and M. Pavone, “Co-Design of Communication and Machine Inference for Cloud Robotics,” Autonomous Robots, vol. 47, pp. 579–594, 2023.

    Abstract: Today, even the most compute-and-power constrained robots can measure complex, high data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today’s representations for sensory data are mostly designed for human, not robotic, perception and thus often waste precious compute or wireless network resources to transmit unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model’s ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods. Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental timeseries classification.

    @article{NakanoyaEtAl2021,
      author = {Nakanoya, Manabu and Narasimhan, Sai Shankar and Bhat, Sharachchandra and Anemogiannis, Alexandros and Datta, Akul and Katti, Sachin and Chinchali, Sandeep and Pavone, Marco},
      title = {Co-Design of Communication and Machine Inference for Cloud Robotics},
      journal = {{Autonomous Robots}},
      volume = {47},
      number = {},
      pages = {579--594},
      year = {2023},
      owner = {rdyro},
      timestamp = {2024-02-29},
      keywords = {pub}
    }
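
    A minimal sketch of the co-design idea described in the abstract above: a small encoder/decoder bottleneck is trained against a frozen, pre-trained task model's loss instead of a pixel-reconstruction loss. All module names, dimensions, and data below are placeholder assumptions, not the paper's implementation.

    # Hypothetical sketch of task-driven compression; illustrative only.
    import torch
    import torch.nn as nn

    enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 32))         # tiny learned bottleneck ("code" sent over the network)
    dec = nn.Sequential(nn.Linear(32, 3 * 32 * 32), nn.Unflatten(1, (3, 32, 32)))
    task_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in for a pre-trained perception model
    for p in task_model.parameters():
        p.requires_grad = False                                           # the task model stays frozen

    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        x = torch.randn(16, 3, 32, 32)                 # placeholder sensory batch
        y = torch.randint(0, 10, (16,))                # placeholder task labels
        z = enc(x)                                     # compressed representation transmitted to the server
        x_hat = dec(z)                                 # server-side reconstruction
        loss = loss_fn(task_model(x_hat), y)           # optimize the task objective, not reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()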
    
  2. J. Cheng, M. Pavone, S. Katti, S. Chinchali, and A. Tang, “Data Sharing and Compression for Cooperative Networked Control,” in Conf. on Neural Information Processing Systems, 2021.

    Abstract: Sharing forecasts of network timeseries data, such as cellular or electricity load patterns, can improve independent control applications ranging from traffic scheduling to power generation. Typically, forecasts are designed without knowledge of a downstream controller’s task objective, and thus simply optimize for mean prediction error. However, such task-agnostic representations are often too large to stream over a communication network and do not emphasize salient temporal features for cooperative control. This paper presents a solution to learn succinct, highly-compressed forecasts that are co-designed with a modular controller’s task objective. Our simulations with real cellular, Internet-of-Things (IoT), and electricity load data show we can improve a model predictive controller’s performance by at least 25% while transmitting 80% less data than the competing method. Further, we present theoretical compression results for a networked variant of the classical linear quadratic regulator (LQR) control problem.

    @inproceedings{ChengPavoneEtAl2021,
      author = {Cheng, J. and Pavone, M. and Katti, S. and Chinchali, S. and Tang, A.},
      title = {Data Sharing and Compression for Cooperative Networked Control},
      booktitle = {{Conf. on Neural Information Processing Systems}},
      year = {2021},
      month = dec,
      url = {https://arxiv.org/abs/2109.14675},
      owner = {borisi},
      timestamp = {2021-10-06}
    }
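
    A minimal sketch of the key point in the abstract above: a forecast should be judged by the control cost it induces downstream, not by mean prediction error alone. The scalar dynamics, gains, and forecasts are placeholder assumptions, not the paper's networked LQR formulation.

    # Hypothetical sketch: score a forecast by downstream control cost; illustrative only.
    import numpy as np

    def control_cost(forecast, disturbance, a=0.9, b=1.0, q=1.0, r=0.1, k=0.5, x0=0.0):
        """Roll out x+ = a*x + b*u + w; the controller uses the forecast as a preview
        to cancel the predicted disturbance (feedback plus feedforward)."""
        x, cost = x0, 0.0
        for w_hat, w in zip(forecast, disturbance):
            u = -k * x - w_hat / b
            cost += q * x ** 2 + r * u ** 2
            x = a * x + b * u + w
        return cost

    rng = np.random.default_rng(0)
    w = rng.normal(size=50)                              # true load / disturbance trace
    naive_forecast = np.zeros(50)                        # forecast that ignores the disturbance entirely
    shared_forecast = w + 0.1 * rng.normal(size=50)      # shared forecast with small error
    print(control_cost(naive_forecast, w), control_cost(shared_forecast, w))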
    
  3. M. Nakanoya, S. Chinchali, A. Anemogiannis, A. Datta, S. Katti, and M. Pavone, “Task-relevant Representation Learning for Networked Robotic Perception,” in Robotics: Science and Systems, Online, 2021.

    Abstract: Today, even the most compute-and-power constrained robots can measure complex, high data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today’s representations for sensory data are mostly designed for human, not robotic, perception and thus often waste precious compute or wireless network resources to transmit unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model’s ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods. Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental timeseries classification.

    @inproceedings{NakanoyaChinchaliEtAl2021,
      author = {Nakanoya, M. and Chinchali, S. and Anemogiannis, A. and Datta, A. and Katti, S. and Pavone, M.},
      title = {Task-relevant Representation Learning for Networked Robotic Perception},
      booktitle = {{Robotics: Science and Systems}},
      year = {2021},
      address = {Online},
      month = jul,
      url = {https://arxiv.org/abs/2011.03216},
      owner = {csandeep},
      timestamp = {2021-05-19}
    }
    
  4. S. Chinchali, E. Pergament, M. Nakanoya, E. Cidon, E. Zhang, D. Bharadia, M. Pavone, and S. Katti, “Sampling Training Data for Distributed Learning between Robots and the Cloud,” in Int. Symp. on Experimental Robotics, Valletta, Malta, 2020.

    Abstract: Today’s robotic fleets are increasingly measuring high-volume video and LIDAR sensory streams, which can be mined for valuable training data, such as rare scenes of road construction sites, to steadily improve robotic perception models. However, re-training perception models on growing volumes of rich sensory data in central compute servers (or the "cloud") places an enormous time and cost burden on network transfer, cloud storage, human annotation, and cloud computing resources. Hence, we introduce HarvestNet, an intelligent sampling algorithm that resides on-board a robot and reduces system bottlenecks by only storing rare, useful events to steadily improve perception models re-trained in the cloud. HarvestNet significantly improves the accuracy of machine-learning models on our novel dataset of road construction sites, field testing of self-driving cars, and streaming face recognition, while reducing cloud storage, dataset annotation time, and cloud compute time by between 65.7-81.3%. Further, it is between 1.05-2.58x more accurate than baseline algorithms and scalably runs on embedded deep learning hardware.

    @inproceedings{ChinchaliPergamentEtAl2020,
      author = {Chinchali, S. and Pergament, E. and Nakanoya, M. and Cidon, E. and Zhang, E. and Bharadia, D. and Pavone, M. and Katti, S.},
      title = {Sampling Training Data for Distributed Learning between Robots and the Cloud},
      booktitle = {{Int. Symp. on Experimental Robotics}},
      year = {2020},
      address = {Valletta, Malta},
      month = mar,
      owner = {csandeep},
      timestamp = {2020-11-09}
    }
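
    A minimal sketch of the on-robot sampling idea described in the abstract above: cache a frame for cloud upload only when the on-board model is unsure or the scene looks unusual. The scores, thresholds, and cache size are placeholder assumptions, not HarvestNet's actual criteria.

    # Hypothetical sketch of uncertainty/novelty-gated sampling; illustrative only.
    from collections import deque

    def should_keep(confidence, novelty, conf_thresh=0.6, novelty_thresh=0.8):
        """Keep a frame if the on-board model is unsure or the scene looks rare."""
        return confidence < conf_thresh or novelty > novelty_thresh

    cache = deque(maxlen=1000)                 # bounded on-board storage budget

    def process_frame(frame, confidence, novelty):
        if should_keep(confidence, novelty):
            cache.append(frame)                # uploaded to the cloud for re-training when bandwidth allows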
    
  5. S. Chinchali, A. Sharma, J. Harrison, A. Elhafsi, D. Kang, E. Pergament, E. Cidon, S. Katti, and M. Pavone, “Network Offloading Policies for Cloud Robotics: a Learning-based Approach,” in Robotics: Science and Systems, Freiburg im Breisgau, Germany, 2019.

    Abstract: Today’s robotic systems are increasingly turning to computationally expensive models such as deep neural networks (DNNs) for tasks like localization, perception, planning, and object detection. However, resource-constrained robots, like low-power drones, often have insufficient on-board compute resources or power reserves to scalably run the most accurate, state-of-the-art neural network compute models. Cloud robotics allows mobile robots the benefit of offloading compute to centralized servers if they are uncertain locally or want to run more accurate, compute-intensive models. However, cloud robotics comes with a key, often understated cost: communicating with the cloud over congested wireless networks may result in latency or loss of data. In fact, sending high data-rate video or LIDAR from multiple robots over congested networks can lead to prohibitive delay for real-time applications, which we measure experimentally. In this paper, we formulate a novel Robot Offloading Problem - how and when should robots offload sensing tasks, especially if they are uncertain, to improve accuracy while minimizing the cost of cloud communication? We formulate offloading as a sequential decision making problem for robots, and propose a solution using deep reinforcement learning. In both simulations and hardware experiments using state-of-the-art vision DNNs, our offloading strategy improves vision task performance by 1.3-2.6x over benchmark offloading strategies, allowing robots the potential to significantly transcend their on-board sensing accuracy but with limited cost of cloud communication.

    @inproceedings{ChinchaliSharmaEtAl2019,
      author = {Chinchali, S. and Sharma, A. and Harrison, J. and Elhafsi, A. and Kang, D. and Pergament, E. and Cidon, E. and Katti, S. and Pavone, M.},
      title = {Network Offloading Policies for Cloud Robotics: a Learning-based Approach},
      booktitle = {{Robotics: Science and Systems}},
      year = {2019},
      address = {Freiburg im Breisgau, Germany},
      month = jun,
      url = {https://arxiv.org/pdf/1902.05703.pdf},
      owner = {apoorva},
      timestamp = {2019-02-07}
    }
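
    A minimal sketch of the sequential decision problem described in the abstract above: at each step the robot either runs a local model or offloads to the cloud, trading accuracy against communication cost. The state features, costs, and accuracy numbers are placeholder assumptions, not the paper's MDP or learned policy.

    # Hypothetical sketch of the offloading decision as a toy MDP step; illustrative only.
    import random

    LOCAL, OFFLOAD = 0, 1

    def step(state, action):
        """state = (local_confidence, network_congestion); reward trades accuracy vs. comms cost."""
        confidence, congestion = state
        if action == OFFLOAD:
            accuracy = 0.95                         # cloud model is more accurate...
            cost = 0.2 + 0.5 * congestion           # ...but pays a latency / bandwidth price
        else:
            accuracy = confidence                   # local model is only as good as its confidence
            cost = 0.0
        reward = accuracy - cost
        next_state = (random.random(), random.random())
        return next_state, reward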
    
  6. S. P. Chinchali, S. C. Livingston, M. Chen, and M. Pavone, “Multi-objective optimal control for proactive decision-making with temporal logic models,” Int. Journal of Robotics Research, vol. 38, no. 12-13, pp. 1490–1512, 2019.

    Abstract: The operation of today’s robots entails interactions with humans, e.g., in autonomous driving amidst human-driven vehicles. To effectively do so, robots must proactively decode the intent of humans and concurrently leverage this knowledge for safe, cooperative task satisfaction—a problem we refer to as proactive decision making. However, simultaneous intent decoding and robotic control requires reasoning over several possible human behavioral models, resulting in high-dimensional state trajectories. In this paper, we address the proactive decision making problem using a novel combination of formal methods, control, and data mining techniques. First, we distill high-dimensional state trajectories of human-robot interaction into concise, symbolic behavioral summaries that can be learned from data. Second, we leverage formal methods to model high-level agent goals, safe interaction, and information-seeking behavior with temporal logic formulae. Finally, we design a novel decision-making scheme that maintains a belief distribution over models of human behavior, and proactively plans informative actions. After showing several desirable theoretical properties, we apply our framework to a dataset of humans driving in crowded merging scenarios. For it, temporal logic models are generated and used to synthesize control strategies using tree-based value iteration and deep reinforcement learning (RL). Additionally, we illustrate how data-driven models of human responses to informative robot probes, such as from generative models like Conditional Variational Autoencoders (CVAEs), can be clustered with formal specifications. Results from simulated self-driving car scenarios demonstrate that data-driven strategies enable safe interaction, correct model identification, and significant dimensionality reduction.

    @article{ChinchaliLivingstonEtAl2018,
      author = {Chinchali, S. P. and Livingston, S. C. and Chen, M. and Pavone, M.},
      title = {Multi-objective optimal control for proactive decision-making with temporal logic models},
      journal = {{Int. Journal of Robotics Research}},
      volume = {38},
      number = {12-13},
      pages = {1490--1512},
      year = {2019},
      url = {/wp-content/papercite-data/pdf/Chinchali.Livingston.Chen.Pavone.IJRR18.pdf},
      owner = {SCL},
      timestamp = {2020-11-09}
    }
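
    A minimal sketch of the belief-maintenance step described in the abstract above: a Bayesian update of a distribution over candidate human-behavior models from an observation's likelihood under each model. The models and likelihood values are placeholders, not the paper's learned temporal-logic models.

    # Hypothetical sketch of belief updating over behavior-model hypotheses; illustrative only.
    def update_belief(belief, likelihoods):
        """belief[m] ~ P(model m); likelihoods[m] = P(observation | model m)."""
        posterior = {m: belief[m] * likelihoods[m] for m in belief}
        total = sum(posterior.values()) or 1.0
        return {m: p / total for m, p in posterior.items()}

    belief = {"cooperative": 0.5, "aggressive": 0.5}
    belief = update_belief(belief, {"cooperative": 0.7, "aggressive": 0.2})
    print(belief)   # mass shifts toward the model that better explains the observation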
    
  7. S. Chinchali, P. Hu, T. Chu, M. Sharma, M. Bansal, R. Misra, M. Pavone, and S. Katti, “Cellular Network Traffic Scheduling with Deep Reinforcement Learning,” in Proc. AAAI Conf. on Artificial Intelligence, New Orleans, Louisiana, 2018.

    Abstract: Modern mobile networks are facing unprecedented growth in demand due to a new class of traffic from Internet of Things (IoT) devices such as smart wearables and autonomous cars. Future networks must schedule delay-tolerant software updates, data backup, and other transfers from IoT devices while maintaining strict service guarantees for conventional real-time applications such as voice-calling and video. This problem is extremely challenging because conventional traffic is highly dynamic across space and time, so its performance is significantly impacted if all IoT traffic is scheduled immediately when it originates. In this paper, we present a reinforcement learning (RL) based scheduler that can dynamically adapt to traffic variation, and to various reward functions set by network operators, to optimally schedule IoT traffic. Using 4 weeks of real network data from downtown Melbourne, Australia spanning diverse traffic patterns, we demonstrate that our RL scheduler can enable mobile networks to carry 14.7% more data with minimal impact on existing traffic, and outperforms heuristic schedulers by more than 2x. Our work is a valuable step towards designing autonomous, "self-driving" networks that learn to manage themselves from past data.

    @inproceedings{ChinchaliHuEtAl2018,
      author = {Chinchali, S. and Hu, P. and Chu, T. and Sharma, M. and Bansal, M. and Misra, R. and Pavone, M. and Katti, S.},
      title = {Cellular Network Traffic Scheduling with Deep Reinforcement Learning},
      booktitle = {{Proc. AAAI Conf. on Artificial Intelligence}},
      year = {2018},
      address = {New Orleans, Louisiana},
      month = feb,
      url = {/wp-content/papercite-data/pdf/Chinchali.ea.AAAI18.pdf},
      owner = {frossi2},
      timestamp = {2018-04-10}
    }
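
    A minimal sketch of the kind of operator-configurable objective described in the abstract above: reward IoT data carried, penalize any degradation of conventional real-time traffic. The coefficients and the throughput model are placeholder assumptions, not the paper's reward function.

    # Hypothetical sketch of an operator-tunable scheduling reward; illustrative only.
    def scheduler_reward(iot_bytes_sent, conventional_throughput, baseline_throughput,
                         alpha=1.0, beta=5.0):
        """Reward delay-tolerant IoT data carried; penalize drops in conventional throughput."""
        degradation = max(0.0, baseline_throughput - conventional_throughput)
        return alpha * iot_bytes_sent - beta * degradation

    def schedule_iot_now(predicted_cell_load, capacity, headroom=0.2):
        """Greedy baseline: send IoT traffic only when predicted spare capacity is large."""
        return predicted_cell_load < (1.0 - headroom) * capacity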
    
  8. S. P. Chinchali, S. C. Livingston, and M. Pavone, “Multi-objective optimal control for proactive decision-making with temporal logic models,” in Int. Symp. on Robotics Research, Puerto Varas, Chile, 2017.

    Abstract: The operation of today’s robots increasingly entails interactions with humans, in settings ranging from autonomous driving amidst human-driven vehicles to collaborative manufacturing. To effectively do so, robots must proactively decode the intent or plan of humans and concurrently leverage such knowledge for safe, cooperative task satisfaction—a problem we refer to as proactive decision making. However, the problem of proactive intent decoding coupled with robotic control is computationally intractable as a robot must reason over several possible human behavioral models and resulting high-dimensional state trajectories. In this paper, we address the proactive decision making problem using a novel combination of algorithmic and data mining techniques. First, we distill high-dimensional state trajectories of human-robot interaction into concise, symbolic behavioral summaries that can be learned from data. Second, we leverage formal methods to model high-level agent goals, safe interaction, and information-seeking behavior with temporal logic formulae. Finally, we design a novel decision-making scheme that simply maintains a belief distribution over high-level, symbolic models of human behavior, and proactively plans informative control actions. Leveraging a rich dataset of real human driving data in crowded merging scenarios, we generate temporal logic models and use them to synthesize control strategies using tree-based value iteration and reinforcement learning (RL). Results from two simulated self-driving car scenarios, one cooperative and the other adversarial, demonstrate that our data-driven control strategies enable safe interaction, correct model identification, and significant dimensionality reduction.

    @inproceedings{ChinchaliLivingstonEtAl2017,
      author = {Chinchali, S. P. and Livingston, S. C. and Pavone, M.},
      title = {Multi-objective optimal control for proactive decision-making with temporal logic models},
      booktitle = {{Int. Symp. on Robotics Research}},
      year = {2017},
      address = {Puerto Varas, Chile},
      month = dec,
      url = {/wp-content/papercite-data/pdf/Chinchali.Livingston.Pavone.ISRR17.pdf},
      owner = {pavone},
      timestamp = {2018-01-16}
    }
    
  9. S. P. Chinchali, S. C. Livingston, M. Pavone, and J. W. Burdick, “Simultaneous Model Identification and Task Satisfaction in the Presence of Temporal Logic Constraints,” in Proc. IEEE Conf. on Robotics and Automation, Stockholm, Sweden, 2016.

    Abstract: Recent proliferation of cyber-physical systems, ranging from autonomous cars to nuclear hazard inspection robots, has exposed several challenging research problems on automated fault detection and recovery. This paper considers how recently developed formal synthesis and model verification techniques may be used to automatically generate information-seeking trajectories for anomaly detection. In particular, we consider the problem of how a robot could select its actions so as to maximally disambiguate between different model hypotheses that govern the environment it operates in or its interaction with other agents whose prime motivation is a priori unknown. The identification problem is posed as selection of the most likely model from a set of candidates, where each candidate is an adversarial Markov decision process (MDP) together with a linear temporal logic (LTL) formula that constrains robot-environment interaction. An adversarial MDP is an MDP in which transitions depend on both a (controlled) robot action and an (uncontrolled) adversary action. States are labeled, thus allowing interpretation of satisfaction of LTL formulae, which have a special form admitting satisfaction decisions in bounded time. An example where a robotic car must discern whether neighboring vehicles are following its trajectory for a surveillance operation is used to illustrate the problem and demonstrate our approach.

    @inproceedings{ChinchaliLivingstonEtAl2016,
      author = {Chinchali, S. P. and Livingston, S. C. and Pavone, M. and Burdick, J. W.},
      title = {Simultaneous Model Identification and Task Satisfaction in the Presence of Temporal Logic Constraints},
      booktitle = {{Proc. IEEE Conf. on Robotics and Automation}},
      year = {2016},
      address = {Stockholm, Sweden},
      doi = {10.1109/ICRA.2016.7487553},
      month = may,
      url = {/wp-content/papercite-data/pdf/Chinchali.Livingston.ea.ICRA16.pdf},
      owner = {bylard},
      timestamp = {2017-01-28}
    }
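
    A minimal sketch of the disambiguation idea described in the abstract above: choose the probing action whose predicted outcomes differ most across candidate models of the other agent. The total-variation criterion and the toy models are illustrative assumptions, not the paper's LTL-constrained synthesis.

    # Hypothetical sketch of information-seeking action selection; illustrative only.
    def total_variation(p, q):
        states = set(p) | set(q)
        return 0.5 * sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in states)

    def most_informative_action(actions, model_a, model_b):
        """model_x[action] = predicted next-state distribution under that hypothesis."""
        return max(actions, key=lambda a: total_variation(model_a[a], model_b[a]))

    # Toy example: is the neighboring car following us (surveillance) or driving normally?
    follower = {"slow_down": {"near": 0.9, "far": 0.1}, "keep_speed": {"near": 0.5, "far": 0.5}}
    neutral  = {"slow_down": {"near": 0.2, "far": 0.8}, "keep_speed": {"near": 0.5, "far": 0.5}}
    print(most_informative_action(["slow_down", "keep_speed"], follower, neutral))  # -> "slow_down"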