Spencer M. Richards

Contacts:

Email: spenrich at stanford dot edu

Spencer is a Ph.D. student in the Aeronautics and Astronautics Department at Stanford University. Currently, he focuses on learning-based control for robotic systems. He works to create agents that learn safely and efficiently in the real world by leveraging tools from both control theory and machine learning. In general, he is interested in deriving theoretical safety guarantees for dynamical systems and in how those guarantees translate into practice.

Previously, he completed his M.Sc. in Robotics, Systems, and Control at ETH Zürich, and his B.A.Sc. in Engineering Science (with a Major in Aerospace Engineering) at the University of Toronto. At ETH Zürich, he conducted his Master’s thesis on safe reinforcement learning with Felix Berkenkamp and Prof. Andreas Krause. He also worked on theory for mobility-on-demand systems with Claudio Ruch and Prof. Emilio Frazzoli. At the University of Toronto Institute for Aerospace Studies (UTIAS), he worked on state estimation for drones during his Bachelor’s thesis with Prof. Angela Schoellig. As an intern at Verity Studios, he developed autonomous flying machines for live entertainment with Prof. Raffaello D’Andrea.


Currently at Apple

ASL Publications

  1. S. M. Richards, J.-J. Slotine, N. Azizan, and M. Pavone, “Learning Control-Oriented Dynamical Structure from Data,” in Int. Conf. on Machine Learning, Honolulu, Hawaii, 2023.

    Abstract: Even for known nonlinear dynamical systems, feedback controller synthesis is a difficult problem that often requires leveraging the particular structure of the dynamics to induce a stable closed-loop system. For general nonlinear models, including those fit to data, there may not be enough known structure to reliably synthesize a stabilizing feedback controller. In this paper, we discuss a state-dependent nonlinear tracking controller formulation based on a state-dependent Riccati equation for general nonlinear control-affine systems. This formulation depends on a nonlinear factorization of the system of vector fields defining the control-affine dynamics, which always exists under mild smoothness assumptions. We propose a method for learning this factorization from a finite set of data. On a variety of simulated nonlinear dynamical systems, we empirically demonstrate the efficacy of learned versions of this controller in stable trajectory tracking. Alongside our learning method, we evaluate recent ideas in jointly learning a controller and stabilizability certificate for known dynamical systems; we show experimentally that such methods can be frail in comparison.

    @inproceedings{RichardsSlotineEtAl2023,
      author = {Richards, S. M. and Slotine, J.-J. and Azizan, N. and Pavone, M.},
      title = {Learning Control-Oriented Dynamical Structure from Data},
      booktitle = {{Int. Conf. on Machine Learning}},
      year = {2023},
      month = jul,
      address = {Honolulu, Hawaii},
      url = {https://arxiv.org/abs/2302.02529},
      owner = {spenrich},
      timestamp = {2023-07-17}
    }
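The state-dependent factorization described in this abstract can be illustrated on a toy system. The sketch below is a hedged approximation, not the paper's method or code: it factors a damped pendulum's drift as f(x) = A(x)x (the helper names `A_of_x` and `sdre_gain` are invented for this example) and solves a Riccati equation pointwise, in the spirit of the state-dependent Riccati equation formulation the abstract mentions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Damped pendulum: theta_ddot = -sin(theta) - 0.1 * theta_dot + u,
# written as control-affine dynamics x_dot = f(x) + B u, x = [theta, theta_dot].
B = np.array([[0.0], [1.0]])

def A_of_x(x):
    """State-dependent factorization f(x) = A(x) x.

    Uses sin(theta)/theta (smooth at theta = 0), so that
    A(x) x = [theta_dot, -sin(theta) - 0.1 * theta_dot] exactly.
    """
    theta = x[0]
    sinc = np.sinc(theta / np.pi)  # numpy's sinc is sin(pi t) / (pi t)
    return np.array([[0.0, 1.0],
                     [-sinc, -0.1]])

def sdre_gain(x, Q=np.eye(2), R=np.eye(1)):
    """Solve the Riccati equation at the current state; return K(x)."""
    P = solve_continuous_are(A_of_x(x), B, Q, R)
    return np.linalg.solve(R, B.T @ P)  # K = R^{-1} B^T P

# Regulate to the origin with u = -K(x) x (forward Euler simulation).
x, dt = np.array([2.0, 0.0]), 0.01
for _ in range(2000):
    u = -sdre_gain(x) @ x
    f = np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])
    x = x + dt * (f + (B @ u).ravel())

print(np.linalg.norm(x))  # small: the pendulum is driven to the origin
```

The paper learns such a factorization from data for general control-affine systems; here it is written by hand for a system whose structure is known.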
    
  2. S. M. Richards, N. Azizan, J.-J. Slotine, and M. Pavone, “Control-Oriented Meta-Learning,” Int. Journal of Robotics Research, vol. 42, no. 10, pp. 777–797, 2023.

    Abstract: Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments. Adaptive control laws can endow even nonlinear systems with good trajectory tracking performance, provided that any uncertain dynamics terms are linearly parameterizable with known nonlinear features. However, it is often difficult to specify such features a priori, such as for aerodynamic disturbances on rotorcraft or interaction forces between a manipulator arm and various objects. In this paper, we turn to data-driven modeling with neural networks to learn, offline from past data, an adaptive controller with an internal parametric model of these nonlinear features. Our key insight is that we can better prepare the controller for deployment with control-oriented meta-learning of features in closed-loop simulation, rather than regression-oriented meta-learning of features to fit input-output data. Specifically, we meta-learn the adaptive controller with closed-loop tracking simulation as the base-learner and the average tracking error as the meta-objective. With both fully-actuated and underactuated nonlinear planar rotorcraft subject to wind, we demonstrate that our adaptive controller outperforms other controllers trained with regression-oriented meta-learning when deployed in closed-loop for trajectory tracking control.

    @article{RichardsAzizanEtAl2023,
      author = {Richards, S. M. and Azizan, N. and Slotine, J.-J. and Pavone, M.},
      title = {Control-Oriented Meta-Learning},
      year = {2023},
      journal = {{Int. Journal of Robotics Research}},
      volume = {42},
      number = {10},
      pages = {777--797},
      owner = {spenrich},
      timestamp = {2024-02-29},
      url = {https://arxiv.org/abs/2204.06716}
    }
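The abstract's key recipe, closed-loop tracking simulation as the base-learner and average tracking error as the meta-objective, can be sketched on a scalar system. This is a hedged toy illustration: the system, the single meta-parameter (an adaptation gain), and the grid search are all invented for the example, whereas the paper meta-learns neural-network features with gradients.

```python
import numpy as np

def simulate(gain, wind, T=500, dt=0.02):
    """Base-learner: closed-loop tracking simulation of the scalar system
    x_dot = u + wind * x**2, with adaptive cancellation theta_hat * x**2.
    Returns the average tracking error against the reference r(t) = sin(t)."""
    x, theta_hat, err = 0.0, 0.0, 0.0
    for k in range(T):
        t = k * dt
        r, r_dot = np.sin(t), np.cos(t)
        e = x - r
        u = r_dot - 5.0 * e - theta_hat * x**2   # track and cancel estimate
        theta_hat += dt * gain * e * x**2        # adaptation law
        x += dt * (u + wind * x**2)
        err += abs(e)
    return err / T

def meta_objective(gain, winds=(-1.0, -0.5, 0.5, 1.0)):
    """Meta-objective: average tracking error over a batch of disturbances."""
    return np.mean([simulate(gain, w) for w in winds])

# "Meta-learn" the adaptation gain by grid search (a crude stand-in for
# the gradient-based meta-learning used in the paper).
gains = np.linspace(0.5, 20.0, 40)
best = gains[np.argmin([meta_objective(g) for g in gains])]
print(best, meta_objective(best))
```

The point of the sketch is the structure, not the optimizer: the quantity being minimized offline is the closed-loop tracking error itself, rather than a regression loss on input-output data.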
    
  3. R. Sinha, S. Sharma, S. Banerjee, T. Lew, R. Luo, S. M. Richards, Y. Sun, E. Schmerling, and M. Pavone, “A System-Level View on Out-of-Distribution Data in Robotics,” 2022.

    Abstract: When testing conditions differ from those represented in training data, so-called out-of-distribution (OOD) inputs can mar the reliability of black-box learned components in the modern robot autonomy stack. Therefore, coping with OOD data is an important challenge on the path towards trustworthy learning-enabled open-world autonomy. In this paper, we aim to demystify the topic of OOD data and its associated challenges in the context of data-driven robotic systems, drawing connections to emerging paradigms in the ML community that study the effect of OOD data on learned models in isolation. We argue that as roboticists, we should reason about the overall system-level competence of a robot as it performs tasks in OOD conditions. We highlight key research questions around this system-level view of OOD problems to guide future research toward safe and reliable learning-enabled autonomy.

    @misc{SinhaSharmaEtAl2022,
      author = {Sinha, R. and Sharma, S. and Banerjee, S. and Lew, T. and Luo, R. and Richards, S. M. and Sun, Y. and Schmerling, E. and Pavone, M.},
      title = {A System-Level View on Out-of-Distribution Data in Robotics},
      year = {2022},
      url = {https://arxiv.org/abs/2212.14020},
      owner = {rhnsinha},
      timestamp = {2022-12-30}
    }
    
  4. R. Sinha, J. Harrison, S. M. Richards, and M. Pavone, “Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty,” in American Control Conference, 2022.

    Abstract: We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component. Such systems commonly model the nonlinear effects of an unknown environment on a nominal system. We optimize over a class of nonlinear feedback policies inspired by certainty equivalent “estimate-and-cancel” control laws pioneered in classical adaptive control to achieve significant performance improvements in the presence of uncertainties of large magnitude, a setting in which existing learning-based predictive control algorithms often struggle to guarantee safety. In contrast to previous work in robust adaptive MPC, our approach allows us to take advantage of structure (i.e., the numerical predictions) in the a priori unknown dynamics learned online through function approximation. Our approach also extends typical nonlinear adaptive control methods to systems with state and input constraints even when we cannot directly cancel the additive uncertain function from the dynamics. Moreover, we apply contemporary statistical estimation techniques to certify the system’s safety through persistent constraint satisfaction with high probability. Finally, we show in simulation that our method can accommodate more significant unknown dynamics terms than existing methods.

    @inproceedings{SinhaHarrisonEtAl2022,
      author = {Sinha, R. and Harrison, J. and Richards, S. M. and Pavone, M.},
      title = {Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty},
      year = {2022},
      keywords = {pub},
      booktitle = {{American Control Conference}},
      url = {https://arxiv.org/abs/2104.08261},
      owner = {rhnsinha},
      timestamp = {2022-01-31}
    }
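The certainty-equivalent "estimate-and-cancel" idea for a matched uncertainty can be sketched without the robust constraint-tightening machinery of the paper. A hedged toy example (all names are invented, the regression is plain least squares, and there is no constraint handling or safety certification here, unlike the paper's method):

```python
import numpy as np

# Nominally linear system x+ = A x + B u + d(x), with an unknown additive
# term d(x) = c * tanh(x[0]) entering through the input channel (matched).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
c_true = 2.0

def d(x):
    return B.ravel() * c_true * np.tanh(x[0])

# Stabilizing nominal feedback (e.g., from an offline LQR design).
K = np.array([[-10.0, -5.0]])

def features(x):
    return np.tanh(x[0])

# Online least squares on the observed residual, then certainty-equivalent
# "estimate-and-cancel": subtract the estimated matched term from the control.
x = np.array([1.0, 0.0])
Phi, Y, c_hat = [], [], 0.0
for _ in range(200):
    u = K @ x - c_hat * features(x)            # cancel the estimated term
    x_next = A @ x + B @ u + d(x)
    resid = (x_next - A @ x - B @ u)[1] / B[1, 0]  # residual in input channel
    Phi.append(features(x)); Y.append(resid)
    c_hat = np.dot(Phi, Y) / (np.dot(Phi, Phi) + 1e-8)
    x = x_next

print(c_hat, np.linalg.norm(x))  # c_hat near 2.0; state regulated
```

Because the unknown term is exactly linear in the chosen feature and there is no noise, the estimate converges immediately here; the paper's contribution is making this scheme safe under state and input constraints, including the unmatched case.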
    
  5. R. Sinha, J. Harrison, S. M. Richards, and M. Pavone, “Adaptive Robust Model Predictive Control via Uncertainty Cancellation,” IEEE Transactions on Automatic Control, 2022. (In press)

    Abstract: We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component. Such systems are commonly used to model the nonlinear effects of an unknown environment on a nominal linear system. Inspired by certainty equivalent “estimate-and-cancel” control laws pioneered in classical adaptive control, we optimize over a class of nonlinear feedback policies to significantly improve performance in the presence of uncertainties of large magnitude, a setting in which existing learning-based predictive control algorithms often struggle to guarantee safety. In contrast to previous work in robust adaptive model predictive control, our approach allows us to take advantage of structure (i.e., the numerical predictions) in the a priori unknown dynamics learned online through function approximation. Our approach also extends typical nonlinear adaptive control methods to systems with state and input constraints even when we cannot directly cancel the additive uncertain function from the dynamics. Moreover, we apply contemporary statistical estimation techniques to certify the system’s safety in the form of persistent constraint satisfaction with high probability. Finally, we show in simulation that our method can accommodate more significant unknown dynamics terms than existing methods.

    @article{SinhaHarrisonEtAl2022b,
      author = {Sinha, R. and Harrison, J. and Richards, S. M. and Pavone, M.},
      title = {Adaptive Robust Model Predictive Control via Uncertainty Cancellation},
      journal = {{IEEE Transactions on Automatic Control}},
      year = {2022},
      keywords = {press},
      note = {In press},
      url = {https://arxiv.org/abs/2212.01371},
      owner = {rhnsinha},
      timestamp = {2023-01-30}
    }
    
  6. J. Schilliger, T. Lew, S. M. Richards, S. Hanggi, M. Pavone, and C. Onder, “Control Barrier Functions for Cyber-Physical Systems and Applications to NMPC,” IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 8623–8630, Aug. 2021.

    Abstract: Tractable safety-ensuring algorithms for cyber-physical systems are important in critical applications. Approaches based on Control Barrier Functions assume continuous enforcement, which is not possible in an online fashion. This paper presents two tractable algorithms to ensure forward invariance of discrete-time controlled cyber-physical systems. Both approaches are based on Control Barrier Functions to provide strict mathematical safety guarantees. The first algorithm exploits Lipschitz continuity and formulates the safety condition as a robust program which is subsequently relaxed to a set of affine conditions. The second algorithm is inspired by tube-NMPC and uses an affine Control Barrier Function formulation in conjunction with an auxiliary controller to guarantee safety of the system. We combine an approximate NMPC controller with the second algorithm to guarantee strict safety despite approximated constraints and show its effectiveness experimentally on a mini-Segway.

    @article{SchilligerEtAl2021,
      author = {Schilliger, J. and Lew, T. and Richards, S. M. and Hanggi, S. and Pavone, M. and Onder, C.},
      title = {Control Barrier Functions for Cyber-Physical Systems and Applications to NMPC},
      journal = {{IEEE Robotics and Automation Letters}},
      volume = {6},
      number = {4},
      pages = {8623--8630},
      year = {2021},
      month = aug,
      url = {https://arxiv.org/abs/2104.14250},
      owner = {lew},
      timestamp = {2021-08-23}
    }
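The discrete-time Control Barrier Function condition underlying both algorithms can be sketched in a scalar setting. This is a hedged toy example: for a single integrator the condition h(x_next) >= (1 - gamma) * h(x) reduces to a closed-form bound on the input, whereas the paper handles general cyber-physical systems via a robust program and a tube-NMPC-style formulation.

```python
dt, gamma, x_max = 0.1, 0.5, 1.0

def h(x):
    """Barrier function: the safe set is {x : h(x) >= 0}, i.e., x <= x_max."""
    return x_max - x

def safety_filter(x, u_des):
    """Minimally modify u_des so that the discrete-time CBF condition
    h(x_next) >= (1 - gamma) * h(x) holds for x_next = x + dt * u.
    For this scalar integrator the condition is just u <= gamma * h(x) / dt."""
    return min(u_des, gamma * h(x) / dt)

# A naive proportional controller drives toward x = 2 (unsafe);
# the filter keeps the state inside the safe set.
x = 0.0
for _ in range(100):
    u_des = 4.0 * (2.0 - x)      # wants to cross the barrier
    u = safety_filter(x, u_des)
    x = x + dt * u

print(x, h(x))  # x approaches x_max from below; h(x) stays >= 0
```

When the bound is active, h contracts geometrically by the factor (1 - gamma) each step, which is exactly the forward-invariance argument: h can approach zero but never become negative.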
    
  7. S. M. Richards, N. Azizan, J.-J. Slotine, and M. Pavone, “Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems,” in Robotics: Science and Systems, Virtual, 2021.

    Abstract: Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments. Adaptive control laws can endow even nonlinear systems with good trajectory tracking performance, provided that any uncertain dynamics terms are linearly parameterizable with known nonlinear features. However, it is often difficult to specify such features a priori, such as for aerodynamic disturbances on rotorcraft or interaction forces between a manipulator arm and various objects. In this paper, we turn to data-driven modeling with neural networks to learn, offline from past data, an adaptive controller with an internal parametric model of these nonlinear features. Our key insight is that we can better prepare the controller for deployment with control-oriented meta-learning of features in closed-loop simulation, rather than regression-oriented meta-learning of features to fit input-output data. Specifically, we meta-learn the adaptive controller with closed-loop tracking simulation as the base-learner and the average tracking error as the meta-objective. With a nonlinear planar rotorcraft subject to wind, we demonstrate that our adaptive controller outperforms other controllers trained with regression-oriented meta-learning when deployed in closed-loop for trajectory tracking control.

    @inproceedings{RichardsAzizanEtAl2021,
      author = {Richards, S. M. and Azizan, N. and Slotine, J.-J. and Pavone, M.},
      title = {Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems},
      booktitle = {{Robotics: Science and Systems}},
      year = {2021},
      month = jul,
      address = {Virtual},
      url = {https://arxiv.org/abs/2103.04490},
      owner = {spenrich},
      timestamp = {2023-01-30}
    }
    
  8. S. Singh, S. M. Richards, V. Sindhwani, J.-J. E. Slotine, and M. Pavone, “Learning Stabilizable Nonlinear Dynamics with Contraction-Based Regularization,” Int. Journal of Robotics Research, vol. 40, no. 10–11, pp. 1123–1150, 2021.

    Abstract: We propose a novel framework for learning stabilizable nonlinear dynamical systems for continuous control tasks in robotics. The key contribution is a control-theoretic regularizer for dynamics fitting rooted in the notion of stabilizability, a constraint which guarantees the existence of robust tracking controllers for arbitrary open-loop trajectories generated with the learned system. Leveraging tools from contraction theory and statistical learning in reproducing kernel Hilbert spaces, we formulate stabilizable dynamics learning as a functional optimization with a convex objective and bi-convex functional constraints. Under a mild structural assumption and relaxation of the functional constraints to sampling-based constraints, we derive the optimal solution with a modified representer theorem. Finally, we utilize random matrix feature approximations to reduce the dimensionality of the search parameters and formulate an iterative convex optimization algorithm that jointly fits the dynamics functions and searches for a certificate of stabilizability. We validate the proposed algorithm in simulation for a planar quadrotor, and on a quadrotor hardware testbed emulating planar dynamics. We verify, both in simulation and on hardware, significantly improved trajectory generation and tracking performance with the control-theoretic regularized model over models learned using traditional regression techniques, especially when learning from small supervised datasets. The results support the conjecture that the use of stabilizability constraints as a form of regularization can help prune the hypothesis space in a manner that is tailored to the downstream task of trajectory generation and feedback control. This produces models that are not only dramatically better conditioned, but also data efficient.

    @article{SinghRichardsEtAl2020,
      author = {Singh, S. and Richards, S. M. and Sindhwani, V. and Slotine, J.-J. E. and Pavone, M.},
      title = {Learning Stabilizable Nonlinear Dynamics with Contraction-Based Regularization},
      journal = {{Int. Journal of Robotics Research}},
      volume = {40},
      number = {10--11},
      pages = {1123--1150},
      year = {2021},
      url = {/wp-content/papercite-data/pdf/Singh.Richards.ea.IJRR20.pdf},
      owner = {ssingh19},
      timestamp = {2020-03-25}
    }
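The contraction condition behind the stabilizability certificate can be illustrated in its simplest (linear, constant-metric) form. This is a hedged sketch, not the paper's RKHS-based algorithm: for x_dot = A x, a metric M > 0 satisfying A^T M + M A + 2*lam*M <= 0 certifies that all trajectories converge toward each other at rate lam.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable linear system x_dot = A x (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

def is_contracting(A, M, lam):
    """Check M > 0 and the contraction LMI A^T M + M A + 2 lam M <= 0
    by inspecting eigenvalues (valid for this constant-metric case)."""
    pos = np.all(np.linalg.eigvalsh(M) > 0)
    lmi = A.T @ M + M @ A + 2 * lam * M
    return pos and np.all(np.linalg.eigvalsh(lmi) <= 1e-9)

# Obtain a valid metric by solving the Lyapunov equation A^T M + M A = -I.
M = solve_continuous_lyapunov(A.T, -np.eye(2))
print(is_contracting(A, M, 0.3))       # True for a modest rate
print(is_contracting(A, np.eye(2), 0.0))  # False: identity is not a metric here
```

Note that the identity metric fails the check even though A is Hurwitz, which hints at why the paper must search for the metric (the certificate) jointly with the dynamics rather than fix it in advance; for nonlinear systems the metric also becomes state-dependent.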