Spencer M. Richards

Spencer is a Ph.D. student in the Aeronautics and Astronautics Department. His research focuses on learning-based control for robotic systems: he works to create agents that learn safely and efficiently in the real world by leveraging tools from both control theory and machine learning. More broadly, he is interested in deriving theoretical safety guarantees for dynamical systems and in how those guarantees translate into practice.

Previously, he completed his M.Sc. in Robotics, Systems, and Control at ETH Zürich, and his B.A.Sc. in Engineering Science (with a Major in Aerospace Engineering) at the University of Toronto. At ETH Zürich, he conducted his Master’s thesis on safe reinforcement learning with Felix Berkenkamp and Prof. Andreas Krause. He also worked on theory for mobility-on-demand systems with Claudio Ruch and Prof. Emilio Frazzoli. At the University of Toronto Institute for Aerospace Studies (UTIAS), he worked on state estimation for drones during his Bachelor’s thesis with Prof. Angela Schoellig. As an intern at Verity Studios, he developed autonomous flying machines for live entertainment with Prof. Raffaello D’Andrea.


ASL Publications

  1. S. M. Richards, N. Azizan, J.-J. E. Slotine, and M. Pavone, “Control-Oriented Meta-Learning,” Int. Journal of Robotics Research, 2023. (Submitted)

    Abstract: Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments. Adaptive control laws can endow even nonlinear systems with good trajectory tracking performance, provided that any uncertain dynamics terms are linearly parameterizable with known nonlinear features. However, it is often difficult to specify such features a priori, such as for aerodynamic disturbances on rotorcraft or interaction forces between a manipulator arm and various objects. In this paper, we turn to data-driven modeling with neural networks to learn, offline from past data, an adaptive controller with an internal parametric model of these nonlinear features. Our key insight is that we can better prepare the controller for deployment with control-oriented meta-learning of features in closed-loop simulation, rather than regression-oriented meta-learning of features to fit input-output data. Specifically, we meta-learn the adaptive controller with closed-loop tracking simulation as the base-learner and the average tracking error as the meta-objective. With both fully-actuated and underactuated nonlinear planar rotorcraft subject to wind, we demonstrate that our adaptive controller outperforms other controllers trained with regression-oriented meta-learning when deployed in closed-loop for trajectory tracking control.

    @article{RichardsAzizanEtAl2023,
      author = {Richards, S. M. and Azizan, N. and Slotine, J.-J. E. and Pavone, M.},
      title = {Control-Oriented Meta-Learning},
      journal = {{Int. Journal of Robotics Research}},
      year = {2023},
      note = {Submitted},
      keywords = {sub},
      url = {https://arxiv.org/pdf/2103.04490.pdf},
      owner = {spenrich},
      timestamp = {2022-03-01}
    }
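The meta-learning recipe in the abstract — closed-loop tracking simulation as the base-learner and average tracking error as the meta-objective — can be sketched for a hypothetical scalar plant. Everything below (the plant, gains, reference, and the finite-difference meta-gradient with backtracking) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def tracking_cost(theta, dt=0.01, steps=500):
    """Base-learner: simulate adaptive closed-loop tracking and return
    the average tracking error (the meta-objective)."""
    a_true, k, gamma = 2.0, 4.0, 10.0   # true disturbance gain; control/adaptation gains
    x, a_hat, total = 0.0, 0.0, 0.0
    for i in range(steps):
        t = i * dt
        xd, xd_dot = np.sin(t), np.cos(t)        # reference trajectory
        e = x - xd
        phi = np.tanh(theta[0] * x + theta[1])   # learned nonlinear feature
        u = xd_dot - k * e - a_hat * phi         # certainty-equivalent adaptive law
        a_hat += dt * gamma * phi * e            # online parameter adaptation
        x += dt * (u + a_true * np.tanh(x))      # true dynamics with unknown term
        total += abs(e)
    return total / steps

def meta_learn(theta0, lr=0.1, iters=20, h=1e-4):
    """Meta-learner: descend the meta-objective w.r.t. the feature
    parameters theta (finite-difference gradients, backtracking steps)."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        f0 = tracking_cost(theta)
        grad = np.zeros_like(theta)
        for j in range(theta.size):
            d = np.zeros_like(theta)
            d[j] = h
            grad[j] = (tracking_cost(theta + d) - tracking_cost(theta - d)) / (2 * h)
        step = lr
        while step > 1e-8:                       # accept only improving steps
            cand = theta - step * grad
            if tracking_cost(cand) < f0:
                theta = cand
                break
            step *= 0.5
    return theta
```

The contrast with regression-oriented meta-learning is in the objective: here `theta` is judged only by the closed-loop tracking error it induces, not by how well the features fit input-output data.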
    
  2. R. Sinha, J. Harrison, S. M. Richards, and M. Pavone, “Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty,” in American Control Conference, 2022. (In Press)

    Abstract: We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component. Such systems commonly model the nonlinear effects of an unknown environment on a nominal system. We optimize over a class of nonlinear feedback policies inspired by certainty equivalent “estimate-and-cancel” control laws pioneered in classical adaptive control to achieve significant performance improvements in the presence of uncertainties of large magnitude, a setting in which existing learning-based predictive control algorithms often struggle to guarantee safety. In contrast to previous work in robust adaptive MPC, our approach allows us to take advantage of structure (i.e., the numerical predictions) in the a priori unknown dynamics learned online through function approximation. Our approach also extends typical nonlinear adaptive control methods to systems with state and input constraints even when we cannot directly cancel the additive uncertain function from the dynamics. Moreover, we apply contemporary statistical estimation techniques to certify the system’s safety through persistent constraint satisfaction with high probability. Finally, we show in simulation that our method can accommodate more significant unknown dynamics terms than existing methods.

    @inproceedings{SinhaHarrisonEtAl2022,
      author = {Sinha, R. and Harrison, J. and Richards, S. M. and Pavone, M.},
      title = {Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty},
      year = {2022},
      keywords = {press},
      booktitle = {{American Control Conference}},
      url = {https://arxiv.org/pdf/2104.08261.pdf},
      owner = {rhnsinha},
      timestamp = {2022-01-31}
    }
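The matched/unmatched distinction in the title can be made concrete with a toy decomposition. In a hypothetical system x⁺ = Ax + Bu + d(x), the component of the uncertainty lying in range(B) is "matched" and can be removed by an estimate-and-cancel term in the input, while the remainder is "unmatched" and must be absorbed by robust constraint tightening. The matrices and estimate below are made up for illustration:

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])

P = B @ np.linalg.pinv(B)         # orthogonal projector onto range(B)
d_hat = np.array([0.3, -0.5])     # learned estimate of the additive uncertainty
d_matched = P @ d_hat             # cancelable through the input channel
d_unmatched = d_hat - d_matched   # left over; handled by robust tightening

u_cancel = -np.linalg.pinv(B) @ d_hat   # cancellation term added to the nominal input
```

By construction `B @ u_cancel` exactly negates `d_matched`, so only `d_unmatched` remains to be treated robustly in the MPC.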
    
  3. J. Schilliger, T. Lew, S. M. Richards, S. Hanggi, M. Pavone, and C. Onder, “Control Barrier Functions for Cyber-Physical Systems and Applications to NMPC,” IEEE Robotics and Automation Letters, Aug. 2021.

    Abstract: Tractable safety-ensuring algorithms for cyber-physical systems are important in critical applications. Approaches based on Control Barrier Functions assume continuous enforcement, which is not possible in an online fashion. This paper presents two tractable algorithms to ensure forward invariance of discrete-time controlled cyber-physical systems. Both approaches are based on Control Barrier Functions to provide strict mathematical safety guarantees. The first algorithm exploits Lipschitz continuity and formulates the safety condition as a robust program which is subsequently relaxed to a set of affine conditions. The second algorithm is inspired by tube-NMPC and uses an affine Control Barrier Function formulation in conjunction with an auxiliary controller to guarantee safety of the system. We combine an approximate NMPC controller with the second algorithm to guarantee strict safety despite approximated constraints and show its effectiveness experimentally on a mini-Segway.

    @article{SchilligerEtAl2021,
      author = {Schilliger, J. and Lew, T. and Richards, S. M. and Hanggi, S. and Pavone, M. and Onder, C.},
      title = {Control Barrier Functions for Cyber-Physical Systems and Applications to NMPC},
      journal = {{IEEE Robotics and Automation Letters}},
      year = {2021},
      month = aug,
      url = {https://arxiv.org/abs/2104.14250},
      owner = {lew},
      timestamp = {2021-08-23}
    }
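The discrete-time forward-invariance condition underlying both algorithms can be sketched as a safety filter: require h(x⁺) ≥ (1 − γ)h(x) at every step and pick the admissible input closest to the desired one. The paper solves this via a relaxed robust program with affine conditions; the grid search below is only a dependency-free stand-in, and the single-integrator plant and barrier are hypothetical:

```python
import numpy as np

def step(x, u, dt=0.1):
    """Hypothetical single-integrator plant, x+ = x + dt * u."""
    return x + dt * u

def h(x):
    """Barrier for the safe set {x : h(x) >= 0}, i.e. |x| <= 1."""
    return 1.0 - x ** 2

def cbf_filter(x, u_des, gamma=0.2):
    """Discrete-time CBF safety filter: among a grid of candidate inputs,
    return the one closest to u_des that satisfies
    h(x+) >= (1 - gamma) * h(x), enforcing forward invariance."""
    candidates = np.linspace(-2.0, 2.0, 401)
    feasible = [u for u in candidates if h(step(x, u)) >= (1 - gamma) * h(x)]
    return min(feasible, key=lambda u: abs(u - u_des))
```

Far from the boundary the filter passes the desired input through unchanged; near the boundary it clips the input just enough to keep the barrier condition satisfied.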
    
  4. S. M. Richards, N. Azizan, J.-J. E. Slotine, and M. Pavone, “Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems,” in Robotics: Science and Systems, Virtual, 2021.

    Abstract: Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments. Adaptive control laws can endow even nonlinear systems with good trajectory tracking performance, provided that any uncertain dynamics terms are linearly parameterizable with known nonlinear features. However, it is often difficult to specify such features a priori, such as for aerodynamic disturbances on rotorcraft or interaction forces between a manipulator arm and various objects. In this paper, we turn to data-driven modeling with neural networks to learn, offline from past data, an adaptive controller with an internal parametric model of these nonlinear features. Our key insight is that we can better prepare the controller for deployment with control-oriented meta-learning of features in closed-loop simulation, rather than regression-oriented meta-learning of features to fit input-output data. Specifically, we meta-learn the adaptive controller with closed-loop tracking simulation as the base-learner and the average tracking error as the meta-objective. With a nonlinear planar rotorcraft subject to wind, we demonstrate that our adaptive controller outperforms other controllers trained with regression-oriented meta-learning when deployed in closed-loop for trajectory tracking control.

    @inproceedings{RichardsAzizanEtAl2021,
      author = {Richards, S. M. and Azizan, N. and Slotine, J.-J. E. and Pavone, M.},
      title = {Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems},
      booktitle = {{Robotics: Science and Systems}},
      year = {2021},
      address = {Virtual},
      month = jul,
      url = {https://arxiv.org/pdf/2103.04490.pdf},
      owner = {spenrich},
      timestamp = {2021-05-11}
    }
    
  5. S. Singh, S. M. Richards, V. Sindhwani, J.-J. E. Slotine, and M. Pavone, “Learning Stabilizable Nonlinear Dynamics with Contraction-Based Regularization,” Int. Journal of Robotics Research, 2020.

    Abstract: We propose a novel framework for learning stabilizable nonlinear dynamical systems for continuous control tasks in robotics. The key contribution is a control-theoretic regularizer for dynamics fitting rooted in the notion of stabilizability, a constraint which guarantees the existence of robust tracking controllers for arbitrary open-loop trajectories generated with the learned system. Leveraging tools from contraction theory and statistical learning in reproducing kernel Hilbert spaces, we formulate stabilizable dynamics learning as a functional optimization with a convex objective and bi-convex functional constraints. Under a mild structural assumption and relaxation of the functional constraints to sampling-based constraints, we derive the optimal solution with a modified representer theorem. Finally, we utilize random matrix feature approximations to reduce the dimensionality of the search parameters and formulate an iterative convex optimization algorithm that jointly fits the dynamics functions and searches for a certificate of stabilizability. We validate the proposed algorithm in simulation for a planar quadrotor, and on a quadrotor hardware testbed emulating planar dynamics. We verify, both in simulation and on hardware, significantly improved trajectory generation and tracking performance with the control-theoretic regularized model over models learned using traditional regression techniques, especially when learning from small supervised datasets. The results support the conjecture that the use of stabilizability constraints as a form of regularization can help prune the hypothesis space in a manner that is tailored to the downstream task of trajectory generation and feedback control. This produces models that are not only dramatically better conditioned, but also data efficient.

    @article{SinghRichardsEtAl2020,
      author = {Singh, S. and Richards, S. M. and Sindhwani, V. and Slotine, J.-J. E. and Pavone, M.},
      title = {Learning Stabilizable Nonlinear Dynamics with Contraction-Based Regularization},
      journal = {{Int. Journal of Robotics Research}},
      year = {2020},
      url = {/wp-content/papercite-data/pdf/Singh.Richards.ea.IJRR20.pdf},
      owner = {ssingh19},
      timestamp = {2020-03-25}
    }
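The stabilizability certificate at the heart of this paper is a contraction condition on a metric M. The paper searches for state-dependent metrics and dynamics jointly via convex optimization in an RKHS; as a minimal illustration under strong simplifying assumptions (a linear system with a constant metric), the certificate reduces to checking that A⊤M + MA + 2λM is negative semidefinite:

```python
import numpy as np

def contraction_margin(A, M, lam):
    """Largest eigenvalue of A^T M + M A + 2*lam*M; a value <= 0
    certifies that metric M verifies contraction at rate lam
    for the linear system x_dot = A x."""
    S = A.T @ M + M @ A + 2.0 * lam * M
    return np.max(np.linalg.eigvalsh(S))

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])   # Jacobian of a hypothetical stable system
M = np.eye(2)                 # candidate (constant) contraction metric
```

For this A, the identity metric certifies contraction at rate 0.5 but not at rate 1.0, which mirrors how the certificate constrains the admissible hypothesis space during dynamics fitting.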