Rachel Luo

Contact:

Email: rsluo at stanford dot edu

Rachel Luo is a Ph.D. candidate in the Electrical Engineering department. She received a B.S. in Electrical Engineering and Computer Science from MIT in 2014, and an M.S. in Electrical Engineering from Stanford in 2017. Rachel’s research focuses on uncertainty quantification for problems at the intersection of computer vision and robotics.

In her free time, Rachel enjoys photography, rock climbing, hiking, and commuting by electric longboard.

Awards:

  • Stanford Graduate Fellowship
  • National Science Foundation (NSF) Fellowship

ASL Publications

  1. R. Luo, R. Sinha, Y. Sun, A. Hindy, S. Zhao, S. Savarese, E. Schmerling, and M. Pavone, “Online Distribution Shift Detection via Recency Prediction,” in Proc. IEEE Conf. on Robotics and Automation, 2024. (In Press)

    Abstract: When deploying modern machine learning-enabled robotic systems in high-stakes applications, detecting distributional shift is critical. However, most existing methods for detecting distribution shift are not well-suited to robotics settings, where data often arrives in a streaming fashion and may be very high-dimensional. In this work, we present an online method for detecting distributional shift with guarantees on the false positive rate — i.e., when there is no distribution shift, our system is very unlikely (with probability < ε) to falsely issue an alert; any alerts that are issued should therefore be heeded. Our method is specifically designed for efficient detection even with high dimensional data, and it empirically achieves up to 6x faster detection on realistic robotics settings compared to prior work while maintaining a low false negative rate in practice (whenever there is a distribution shift in our experiments, our method indeed emits an alert).

    @inproceedings{LuoSinhaEtAl2023,
      author = {Luo, R. and Sinha, R. and Sun, Y. and Hindy, A. and Zhao, S. and Savarese, S. and Schmerling, E. and Pavone, M.},
      booktitle = {{Proc. IEEE Conf. on Robotics and Automation}},
      title = {Online Distribution Shift Detection via Recency Prediction},
      year = {2024},
      keywords = {press},
      note = {In press},
      url = {https://arxiv.org/abs/2211.09916},
      owner = {rdyro},
      timestamp = {2022-09-21}
    }
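
    To make the guarantee concrete, here is a minimal sketch in the spirit of the classifier two-sample test underlying detection via recency prediction: if a model can tell a "recent" batch of observations from an older one better than chance, the distribution has likely shifted, and a one-sided binomial test keeps the false positive rate below epsilon. The logistic model, the 50/50 split, and the binomial threshold are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np
    from scipy.stats import binomtest
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def shift_alert(old: np.ndarray, recent: np.ndarray, eps: float = 0.05) -> bool:
        """Return True iff 'no shift' is rejected at false positive rate <= eps."""
        X = np.vstack([old, recent])
        y = np.r_[np.zeros(len(old)), np.ones(len(recent))]  # label = recency
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        hits = int((clf.predict(X_te) == y_te).sum())
        # With no shift, recency labels are independent of the features, so the
        # held-out hit count is approximately Binomial(n, 1/2); a one-sided
        # test on it controls the false positive rate at eps.
        return binomtest(hits, len(y_te), p=0.5, alternative="greater").pvalue < eps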
    
  2. A. Hindy, R. Luo, S. Banerjee, J. Kuck, E. Schmerling, and M. Pavone, “Diagnostic Runtime Monitoring with Martingales,” in Robotics: Science and Systems, 2024. (Submitted)

    Abstract: Machine learning systems deployed in safety-critical robotics settings must be robust to distribution shifts. However, system designers must understand the cause of a distribution shift in order to implement the appropriate intervention or mitigation strategy and prevent system failure. In this paper, we present a novel framework for diagnosing distribution shifts in a streaming fashion by deploying multiple stochastic martingales simultaneously. We show that knowledge of the underlying cause of a distribution shift can lead to proper interventions over the lifecycle of a deployed system. Our experimental framework can easily be adapted to different types of distribution shifts, models, and datasets. We find that our method outperforms existing work on diagnosing distribution shifts in terms of speed, accuracy, and flexibility, and validate the efficiency of our model in both simulated and live hardware settings.

    @inproceedings{HindyLuoEtAl2024,
      author = {Hindy, A. and Luo, R. and Banerjee, S. and Kuck, J. and Schmerling, E. and Pavone, M.},
      title = {Diagnostic Runtime Monitoring with Martingales},
      note = {Submitted},
      booktitle = {{Robotics: Science and Systems}},
      year = {2024},
      keywords = {sub},
      owner = {somrita},
      timestamp = {2024-02-09}
    }
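
    As a companion illustration of the machinery, the sketch below runs a single conformal test martingale over a stream of anomaly scores: under exchangeability (no shift) the smoothed conformal p-values are i.i.d. uniform, the wealth process is a nonnegative martingale, and Ville's inequality bounds the probability of ever raising a false alert by eps. The fixed betting function and the scalar score interface are assumptions; the paper's diagnostic framework runs several such monitors simultaneously, one per candidate cause of shift.

    import numpy as np

    def conformal_pvalues(init_scores, stream_scores, rng):
        """Smoothed conformal p-values with an online reference set; these are
        i.i.d. Uniform(0, 1) under exchangeability (i.e., when there is no shift)."""
        ref = list(init_scores)
        for s in stream_scores:
            gt = sum(r > s for r in ref)
            eq = sum(r == s for r in ref) + 1
            yield (gt + rng.uniform() * eq) / (len(ref) + 1)
            ref.append(s)

    def martingale_alert(init_scores, stream_scores, eps=0.01, bet=0.5, seed=0):
        """Return the stream index of the first alert, or None. Ville's
        inequality bounds the false positive rate by eps over the whole stream."""
        wealth = 1.0
        rng = np.random.default_rng(seed)
        for i, p in enumerate(conformal_pvalues(init_scores, stream_scores, rng)):
            wealth *= (1 - bet) + 2 * bet * (1 - p)  # bet that p-values run small
            if wealth >= 1.0 / eps:
                return i
        return None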
    
  3. R. Luo, S. Zhao, J. Kuck, B. Ivanovic, S. Savarese, E. Schmerling, and M. Pavone, “Sample-Efficient Safety Assurances using Conformal Prediction,” Int. Journal of Robotics Research, 2023.

    Abstract: When deploying machine learning models in high-stakes robotics applications, the ability to detect unsafe situations is crucial. Early warning systems can provide alerts when an unsafe situation is imminent (in the absence of corrective action). To reliably improve safety, these warning systems should have a provable false negative rate; i.e., of the situations that are unsafe, fewer than epsilon will occur without an alert. In this work, we present a framework that combines a statistical inference technique known as conformal prediction with a simulator of robot/environment dynamics, in order to tune warning systems to provably achieve an epsilon false negative rate using as few as 1/epsilon data points. We apply our framework to a driver warning system and a robotic grasping application, and empirically demonstrate guaranteed false negative rate and low false detection (positive) rate using very little data.

    @article{LuoZhaoEtAl2023,
      author = {Luo, R. and Zhao, S. and Kuck, J. and Ivanovic, B. and Savarese, S. and Schmerling, E. and Pavone, M.},
      title = {Sample-Efficient Safety Assurances using Conformal Prediction},
      journal = {{Int. Journal of Robotics Research}},
      year = {2023},
      owner = {rsluo},
      timestamp = {2023-02-10},
      url = {https://arxiv.org/abs/2109.14082}
    }
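
    The sample-efficiency claim is easy to see in code. Below is a minimal sketch of the calibration step, assuming the warning system reduces each episode to a scalar danger score and alerts whenever the score reaches a threshold tau: given n i.i.d. scores from (simulated) unsafe episodes, setting tau to their minimum means a fresh unsafe episode slips under the threshold with probability at most 1/(n+1), so n >= 1/epsilon - 1 episodes suffice. The function and data below are illustrative.

    import numpy as np

    def calibrate_warning_threshold(unsafe_scores, eps):
        """Pick the alert threshold so the false negative rate is <= eps."""
        scores = np.asarray(unsafe_scores, dtype=float)
        n = len(scores)
        assert n + 1 >= 1.0 / eps, "need at least 1/eps - 1 unsafe episodes"
        # A new unsafe episode is the strict minimum of the n + 1 exchangeable
        # scores with probability <= 1/(n + 1) <= eps, so it rarely goes unalerted.
        return scores.min()

    # e.g., eps = 0.05 needs only 19 unsafe episodes from the simulator
    rng = np.random.default_rng(0)
    tau = calibrate_warning_threshold(rng.normal(5.0, 1.0, size=19), eps=0.05)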
    
  4. R. Luo, S. Zhao, J. Kuck, B. Ivanovic, S. Savarese, E. Schmerling, and M. Pavone, “Sample-Efficient Safety Assurances using Conformal Prediction,” in Workshop on Algorithmic Foundations of Robotics, 2022.

    Abstract: When deploying machine learning models in high-stakes robotics applications, the ability to detect unsafe situations is crucial. Early warning systems can provide alerts when an unsafe situation is imminent (in the absence of corrective action). To reliably improve safety, these warning systems should have a provable false negative rate; i.e., of the situations that are unsafe, fewer than epsilon will occur without an alert. In this work, we present a framework that combines a statistical inference technique known as conformal prediction with a simulator of robot/environment dynamics, in order to tune warning systems to provably achieve an epsilon false negative rate using as few as 1/epsilon data points. We apply our framework to a driver warning system and a robotic grasping application, and empirically demonstrate guaranteed false negative rate and low false detection (positive) rate using very little data.

    @inproceedings{LuoZhaoEtAl2022,
      author = {Luo, R. and Zhao, S. and Kuck, J. and Ivanovic, B. and Savarese, S. and Schmerling, E. and Pavone, M.},
      title = {Sample-Efficient Safety Assurances using Conformal Prediction},
      booktitle = {{Workshop on Algorithmic Foundations of Robotics}},
      year = {2022},
      month = may,
      owner = {rsluo},
      timestamp = {2021-09-20},
      url = {https://arxiv.org/abs/2109.14082}
    }
    
  5. R. Sinha, S. Sharma, S. Banerjee, T. Lew, R. Luo, S. M. Richards, Y. Sun, E. Schmerling, and M. Pavone, “A System-Level View on Out-of-Distribution Data in Robotics,” 2022.

    Abstract: When testing conditions differ from those represented in training data, so-called out-of-distribution (OOD) inputs can mar the reliability of black-box learned components in the modern robot autonomy stack. Therefore, coping with OOD data is an important challenge on the path towards trustworthy learning-enabled open-world autonomy. In this paper, we aim to demystify the topic of OOD data and its associated challenges in the context of data-driven robotic systems, drawing connections to emerging paradigms in the ML community that study the effect of OOD data on learned models in isolation. We argue that as roboticists, we should reason about the overall system-level competence of a robot as it performs tasks in OOD conditions. We highlight key research questions around this system-level view of OOD problems to guide future research toward safe and reliable learning-enabled autonomy.

    @inproceedings{SinhaSharmaEtAl2022,
      author = {Sinha, R. and Sharma, S. and Banerjee, S. and Lew, T. and Luo, R. and Richards, S. M. and Sun, Y. and Schmerling, E. and Pavone, M.},
      title = {A System-Level View on Out-of-Distribution Data in Robotics},
      year = {2022},
      url = {https://arxiv.org/abs/2212.14020},
      owner = {rhnsinha},
      timestamp = {2022-12-30}
    }
    
  6. R. Luo, A. Bhatnagar, H. Wang, C. Xiong, S. Savarese, Y. Bai, S. Zhao, S. Ermon, E. Schmerling, and M. Pavone, “Local Calibration: Metrics and Recalibration,” in Proc. Conf. on Uncertainty in Artificial Intelligence, 2022.

    Abstract: Probabilistic classifiers output confidence scores along with their predictions, and these confidence scores should be calibrated, i.e., they should reflect the reliability of the prediction. Confidence scores that minimize standard metrics such as the expected calibration error (ECE) accurately measure the reliability on average across the entire population. However, it is in general impossible to measure the reliability of an individual prediction. In this work, we propose the local calibration error (LCE) to span the gap between average and individual reliability. For each individual prediction, the LCE measures the average reliability of a set of similar predictions, where similarity is quantified by a kernel function on a pretrained feature space and by a binning scheme over predicted model confidences. We show theoretically that the LCE can be estimated sample-efficiently from data, and empirically find that it reveals miscalibration modes that are more fine-grained than the ECE can detect. Our key result is a novel local recalibration method, LoRe, that improves confidence scores for individual predictions and decreases the LCE. Experimentally, we show that our recalibration method produces more accurate confidence scores, which improves downstream fairness and decision making on classification tasks with both image and tabular data.

    @inproceedings{LuoEtAl2022,
      author = {Luo, R. and Bhatnagar, A. and Wang, H. and Xiong, C. and Savarese, S. and Bai, Y. and Zhao, S. and Ermon, S. and Schmerling, E. and Pavone, M.},
      title = {Local Calibration: Metrics and Recalibration},
      booktitle = {{Proc. Conf. on Uncertainty in Artificial Intelligence}},
      year = {2022},
      keywords = {pub},
      owner = {rdyro},
      timestamp = {2022-01-26},
      url = {https://arxiv.org/abs/2102.10809}
    }
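
    To ground the definition, here is a rough sketch of a kernel-plus-binning local calibration error in the spirit of the abstract: reliability is averaged over predictions similar to a query point, with similarity given by a kernel on a feature space and a binning over predicted confidences. The RBF kernel, its bandwidth, and the equal-width confidence bins are assumptions rather than the paper's exact choices.

    import numpy as np

    def local_calibration_error(x, feats, confs, correct, bandwidth=1.0, n_bins=10):
        """LCE at query features x: kernel-weighted gap between average
        confidence and average accuracy, accumulated over confidence bins."""
        d2 = ((feats - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # RBF similarity to x
        w = w / (w.sum() + 1e-12)                     # normalize kernel mass
        bins = np.minimum((confs * n_bins).astype(int), n_bins - 1)
        lce = 0.0
        for b in range(n_bins):
            m = bins == b
            if w[m].sum() > 0:
                gap = abs(np.average(correct[m], weights=w[m])
                          - np.average(confs[m], weights=w[m]))
                lce += w[m].sum() * gap               # weight each bin by its mass
        return lce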