Spencer M. Richards

Spencer is a Ph.D. student in the Aeronautics and Astronautics Department. His current focus is learning-based control for robotic systems: he works to create agents that learn safely and efficiently in the real world by leveraging tools from both control theory and machine learning. More broadly, he is interested in deriving theoretical safety guarantees for dynamical systems and in how those guarantees translate into practice.

Previously, he completed his M.Sc. in Robotics, Systems, and Control at ETH Zürich, and his B.A.Sc. in Engineering Science (with a major in Aerospace Engineering) at the University of Toronto. At ETH Zürich, he conducted his Master’s thesis on safe reinforcement learning with Felix Berkenkamp and Prof. Andreas Krause, and also worked on theory for mobility-on-demand systems with Claudio Ruch and Prof. Emilio Frazzoli. At the University of Toronto Institute for Aerospace Studies (UTIAS), he worked on state estimation for drones for his Bachelor’s thesis with Prof. Angela Schoellig. As an intern at Verity Studios, he developed autonomous flying machines for live entertainment with Prof. Raffaello D’Andrea.


ASL Publications

  1. J. Schilliger, T. Lew, S. M. Richards, S. Hanggi, M. Pavone, and C. Onder, “Control Barrier Functions for Cyber-Physical Systems and Applications to NMPC,” IEEE Robotics and Automation Letters, Aug. 2021. (In Press)

    Abstract: Tractable safety-ensuring algorithms for cyber-physical systems are important in critical applications. Approaches based on Control Barrier Functions assume continuous enforcement, which is not possible in an online fashion. This paper presents two tractable algorithms to ensure forward invariance of discrete-time controlled cyber-physical systems. Both approaches are based on Control Barrier Functions to provide strict mathematical safety guarantees. The first algorithm exploits Lipschitz continuity and formulates the safety condition as a robust program which is subsequently relaxed to a set of affine conditions. The second algorithm is inspired by tube-NMPC and uses an affine Control Barrier Function formulation in conjunction with an auxiliary controller to guarantee safety of the system. We combine an approximate NMPC controller with the second algorithm to guarantee strict safety despite approximated constraints and show its effectiveness experimentally on a mini-Segway.

    @article{SchilligerEtAl2021,
      author = {Schilliger, J. and Lew, T. and Richards, S. M. and Hanggi, S. and Pavone, M. and Onder, C.},
      title = {Control Barrier Functions for Cyber-Physical Systems and Applications to NMPC},
      journal = {{IEEE Robotics and Automation Letters}},
      year = {2021},
      note = {In Press},
      month = aug,
      url = {https://arxiv.org/abs/2104.14250},
      keywords = {press},
      owner = {lew},
      timestamp = {2021-08-23}
    }
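The discrete-time safety condition at the heart of this paper can be sketched in a few lines. The toy below enforces h(x_{k+1}) >= (1 - gamma) * h(x_k) for a one-dimensional integrator with safe set {x : 1 - x^2 >= 0}; a grid search over controls stands in for the paper's relaxed robust program, and the dynamics, gains, and constants are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

# Illustrative 1-D single integrator: x_{k+1} = x_k + dt * u_k, with
# safe set {x : h(x) >= 0} for h(x) = 1 - x**2 (stay inside [-1, 1]).
def h(x):
    return 1.0 - x**2

def safety_filter(x, u_nom, dt=0.1, gamma=0.5, n_grid=2001):
    """Return the gridded control closest to u_nom that satisfies the
    discrete-time CBF condition h(x_next) >= (1 - gamma) * h(x).

    A coarse grid search stands in for the QP/robust program in the paper.
    """
    candidates = np.linspace(-5.0, 5.0, n_grid)
    feasible = h(x + dt * candidates) >= (1.0 - gamma) * h(x)
    feas = candidates[feasible]  # nonempty: controls that move inward qualify
    return feas[np.argmin(np.abs(feas - u_nom))]

# An unsafe nominal command pushes toward the boundary; the filtered
# closed loop keeps h(x_k) >= 0 along the whole trajectory.
x, dt, traj = 0.9, 0.1, [0.9]
for _ in range(50):
    x = x + dt * safety_filter(x, u_nom=5.0, dt=dt)
    traj.append(x)
assert all(h(xk) >= 0.0 for xk in traj)
```

Each filtered step guarantees h shrinks by at most the factor (1 - gamma), so forward invariance of the safe set follows by induction from h(x_0) > 0.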
    
  2. S. M. Richards, N. Azizan, J.-J. E. Slotine, and M. Pavone, “Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems,” in Robotics: Science and Systems, Virtual, 2021. (In Press)

    Abstract: Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments. Adaptive control laws can endow even nonlinear systems with good trajectory tracking performance, provided that any uncertain dynamics terms are linearly parameterizable with known nonlinear features. However, it is often difficult to specify such features a priori, such as for aerodynamic disturbances on rotorcraft or interaction forces between a manipulator arm and various objects. In this paper, we turn to data-driven modeling with neural networks to learn, offline from past data, an adaptive controller with an internal parametric model of these nonlinear features. Our key insight is that we can better prepare the controller for deployment with control-oriented meta-learning of features in closed-loop simulation, rather than regression-oriented meta-learning of features to fit input-output data. Specifically, we meta-learn the adaptive controller with closed-loop tracking simulation as the base-learner and the average tracking error as the meta-objective. With a nonlinear planar rotorcraft subject to wind, we demonstrate that our adaptive controller outperforms other controllers trained with regression-oriented meta-learning when deployed in closed-loop for trajectory tracking control.

    @inproceedings{RichardsAzizanEtAl2021,
      author = {Richards, S. M. and Azizan, N. and Slotine, J.-J. E. and Pavone, M.},
      title = {Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems},
      booktitle = {{Robotics: Science and Systems}},
      year = {2021},
      note = {In Press},
      keywords = {press},
      address = {Virtual},
      month = jul,
      url = {https://arxiv.org/pdf/2103.04490.pdf},
      owner = {spenrich},
      timestamp = {2021-05-11}
    }
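The classical adaptive-control template this paper builds on, a linearly parameterized uncertainty a^T y(x) handled by a certainty-equivalent "estimate-and-cancel" law, can be sketched on a scalar system. Everything below (the features, gains, and reference trajectory) is an illustrative assumption; the paper's contribution is meta-learning the features y(x) offline in closed-loop simulation, which this sketch simply takes as given.

```python
import numpy as np

# Illustrative scalar system: x_dot = u + a^T y(x), with unknown
# parameters a and known features y(x) (the paper meta-learns such
# features offline; here they are assumed).
a_true = np.array([1.0, -0.5])

def y(x):
    return np.array([np.sin(x), x])

def simulate(T=2000, dt=0.005, k=4.0, gamma=5.0):
    """Adaptive tracking of x_des(t) = sin(t) via
    u = x_dot_des - k*e - a_hat^T y(x),  a_hat_dot = gamma * e * y(x)."""
    x, a_hat, errs = 1.0, np.zeros(2), []
    for i in range(T):
        t = i * dt
        e = x - np.sin(t)                    # tracking error
        yx = y(x)
        u = np.cos(t) - k * e - a_hat @ yx   # estimate-and-cancel law
        x = x + dt * (u + a_true @ yx)       # true dynamics (Euler step)
        a_hat = a_hat + dt * gamma * e * yx  # adaptation law
        errs.append(abs(e))
    return errs

errs = simulate()
assert errs[0] > 0.9               # large initial tracking error
assert np.mean(errs[-200:]) < 0.2  # error shrinks as a_hat adapts
```

With the Lyapunov function V = e^2/2 + |a - a_hat|^2/(2*gamma), this law gives V_dot = -k e^2, so the tracking error converges even though a_hat need not reach a_true.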
    
  3. R. Sinha, J. Harrison, S. M. Richards, and M. Pavone, “Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty,” 2021. (Submitted)

    Abstract: We propose a learning-based robust predictive control algorithm that can handle large uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear dynamics component. Such systems commonly model the nonlinear effects of an unknown environment on a nominal system. Motivated by an inability of existing learning-based predictive control algorithms to achieve safety guarantees in the presence of uncertainties of large magnitude in this setting, we achieve significant performance improvements by optimizing over a novel class of nonlinear feedback policies inspired by certainty equivalent “estimate-and-cancel” control laws pioneered in classical adaptive control. In contrast with previous work in robust adaptive MPC, this allows us to take advantage of the structure in the a priori unknown dynamics that are learned online through function approximation. Our approach also extends typical nonlinear adaptive control methods to systems with state and input constraints even when an additive uncertain function cannot directly be canceled from the dynamics. Moreover, our approach allows us to apply contemporary statistical estimation techniques to certify the safety of the system through persistent constraint satisfaction with high probability. We show that our method allows us to consider larger unknown terms in the dynamics than existing methods through simulated examples.

    @unpublished{SinhaHarrisonEtAl2021,
      author = {Sinha, R. and Harrison, J. and Richards, S. M. and Pavone, M.},
      title = {Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty},
      year = {2021},
      note = {Submitted},
      keywords = {sub},
      url = {https://arxiv.org/pdf/2104.08261.pdf},
      owner = {rhnsinha},
      timestamp = {2021-06-04}
    }
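The benefit of optimizing over "estimate-and-cancel" feedback policies, as opposed to purely linear ones, shows up even in a one-dimensional toy. The system, features, and least-squares fit below are illustrative assumptions standing in for the paper's learned dynamics; the point is only that canceling the learned matched term removes the offset a linear policy cannot.

```python
import numpy as np

# Illustrative scalar system x+ = 0.9*x + u + g(x), where the additive
# nonlinearity g is unknown to the controller and is fit from data by
# least squares on assumed features (a stand-in for the paper's online
# function approximation).
def g(x):
    return 0.5 + 0.4 * np.sin(x)

def run(cancel, steps=30):
    rng = np.random.default_rng(0)
    # Fit g_hat from sampled states; the features match g exactly here,
    # so the fit is essentially perfect in this toy setting.
    xs = rng.uniform(-2.0, 2.0, 100)
    Phi = np.stack([np.ones_like(xs), np.sin(xs)], axis=1)
    theta, *_ = np.linalg.lstsq(Phi, g(xs), rcond=None)

    x, traj = 2.0, []
    for _ in range(steps):
        v = -0.9 * x  # cancels the nominal linear term
        u = v - (theta[0] + theta[1] * np.sin(x)) if cancel else v
        x = 0.9 * x + u + g(x)
        traj.append(abs(x))
    return traj

with_cancel = run(cancel=True)
without = run(cancel=False)
assert with_cancel[-1] < 1e-6  # residual is only the (tiny) fit error
assert without[-1] > 0.1       # linear policy stalls at a nonzero offset
```

Without cancellation the closed loop settles at the fixed point x = 0.5 + 0.4*sin(x), roughly 0.78, whereas the estimate-and-cancel policy regulates x to (numerically) zero.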
    
  4. S. Singh, S. M. Richards, V. Sindhwani, J.-J. E. Slotine, and M. Pavone, “Learning Stabilizable Nonlinear Dynamics with Contraction-Based Regularization,” Int. Journal of Robotics Research, 2020. (In Press)

    Abstract: We propose a novel framework for learning stabilizable nonlinear dynamical systems for continuous control tasks in robotics. The key contribution is a control-theoretic regularizer for dynamics fitting rooted in the notion of stabilizability, a constraint which guarantees the existence of robust tracking controllers for arbitrary open-loop trajectories generated with the learned system. Leveraging tools from contraction theory and statistical learning in reproducing kernel Hilbert spaces, we formulate stabilizable dynamics learning as a functional optimization with a convex objective and bi-convex functional constraints. Under a mild structural assumption and relaxation of the functional constraints to sampling-based constraints, we derive the optimal solution with a modified representer theorem. Finally, we utilize random matrix feature approximations to reduce the dimensionality of the search parameters and formulate an iterative convex optimization algorithm that jointly fits the dynamics functions and searches for a certificate of stabilizability. We validate the proposed algorithm in simulation for a planar quadrotor, and on a quadrotor hardware testbed emulating planar dynamics. We verify, both in simulation and on hardware, significantly improved trajectory generation and tracking performance with the control-theoretic regularized model over models learned using traditional regression techniques, especially when learning from small supervised datasets. The results support the conjecture that the use of stabilizability constraints as a form of regularization can help prune the hypothesis space in a manner that is tailored to the downstream task of trajectory generation and feedback control. This produces models that are not only dramatically better conditioned, but also data efficient.

    @article{SinghRichardsEtAl2020,
      author = {Singh, S. and Richards, S. M. and Sindhwani, V. and Slotine, J.-J. E. and Pavone, M.},
      title = {Learning Stabilizable Nonlinear Dynamics with Contraction-Based Regularization},
      journal = {{Int. Journal of Robotics Research}},
      year = {2020},
      note = {In Press},
      url = {/wp-content/papercite-data/pdf/Singh.Richards.ea.IJRR20.pdf},
      keywords = {press},
      owner = {ssingh19},
      timestamp = {2020-03-25}
    }
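The stabilizability certificate described in the abstract can be illustrated in its simplest form: for linear dynamics x_dot = A x, a constant metric M > 0 certifies contraction at rate lam when A^T M + M A + 2*lam*M is negative semidefinite. The matrices and rate below are illustrative assumptions; the paper searches for state-dependent metrics jointly with the learned nonlinear dynamics.

```python
import numpy as np

def is_contracting(A, M, lam, tol=1e-9):
    """Check the fixed-metric contraction condition for x_dot = A x."""
    # The metric must be symmetric positive definite.
    if not np.all(np.linalg.eigvalsh(M) > tol):
        return False
    # Contraction at rate lam: A^T M + M A + 2*lam*M <= 0.
    S = A.T @ M + M @ A + 2.0 * lam * M
    return bool(np.all(np.linalg.eigvalsh(S) <= tol))

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # stable: eigenvalues -1 and -2
M = np.array([[1.25, 0.25],
              [0.25, 0.25]])  # solves the Lyapunov equation A^T M + M A = -I

assert is_contracting(A, M, lam=0.3)      # certified rate
assert not is_contracting(A, M, lam=0.6)  # too aggressive for this metric
```

Since A^T M + M A = -I here, the condition reduces to 2*lam*M <= I, i.e. lam <= 1/(2*lambda_max(M)) ≈ 0.38, which is why 0.3 passes and 0.6 fails.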