Navid Azizan

Dr. Navid Azizan is a Postdoctoral Scholar in the Autonomous Systems Lab (ASL) at Stanford and an incoming Assistant Professor at MIT, with a dual appointment in the Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS).

Navid received his PhD in Computing and Mathematical Sciences (CMS) from the California Institute of Technology (Caltech) in 2020. Additionally, he was a research scientist intern at Google DeepMind in 2019. His research interests broadly lie in machine learning, control theory, mathematical optimization, and network science. He has made fundamental contributions to various aspects of intelligent systems, including the design and analysis of optimization algorithms for nonconvex and networked problems with applications to the smart grid, distributed computation, epidemics, and autonomy.

Navid’s work has been recognized by several awards, including the 2020 Information Theory and Applications (ITA) Graduation-Day Gold Award. He was named an Amazon Fellow in Artificial Intelligence in 2017 and a PIMCO Fellow in Data Science in 2018. His research on smart grids received the ACM GreenMetrics Best Student Paper Award in 2016. He was also the first-place winner and a gold medalist at the 2008 National Physics Olympiad in Iran. He co-organizes the popular “Control meets Learning” virtual seminar series.


ASL Publications

  1. S. M. Richards, N. Azizan, J.-J. E. Slotine, and M. Pavone, “Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems,” in Robotics: Science and Systems, Virtual, 2021. (In Press)

    Abstract: Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments. Adaptive control laws can endow even nonlinear systems with good trajectory tracking performance, provided that any uncertain dynamics terms are linearly parameterizable with known nonlinear features. However, it is often difficult to specify such features a priori, such as for aerodynamic disturbances on rotorcraft or interaction forces between a manipulator arm and various objects. In this paper, we turn to data-driven modeling with neural networks to learn, offline from past data, an adaptive controller with an internal parametric model of these nonlinear features. Our key insight is that we can better prepare the controller for deployment with control-oriented meta-learning of features in closed-loop simulation, rather than regression-oriented meta-learning of features to fit input-output data. Specifically, we meta-learn the adaptive controller with closed-loop tracking simulation as the base-learner and the average tracking error as the meta-objective. With a nonlinear planar rotorcraft subject to wind, we demonstrate that our adaptive controller outperforms other controllers trained with regression-oriented meta-learning when deployed in closed-loop for trajectory tracking control.
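    As a concrete illustration of the setting the paper starts from (uncertain dynamics that are linear in known nonlinear features), here is a minimal sketch of a certainty-equivalence adaptive regulator for a scalar system. All names, gains, and the forward-Euler simulation are hypothetical illustration, not the authors' code; the paper's contribution is meta-learning the features `phi` themselves, which this sketch takes as given.

    ```python
    import numpy as np

    def simulate_adaptive(phi, theta_true, x_des, T=2000, dt=0.005, k=5.0, gamma=10.0):
        """Regulate x_dot = theta^T phi(x) + u to the constant setpoint x_des
        with a certainty-equivalence adaptive controller: apply feedback, cancel
        the estimated dynamics, and adapt theta_hat online from the tracking error."""
        x = 0.0
        theta_hat = np.zeros_like(theta_true)
        errors = []
        for _ in range(T):
            s = x - x_des                        # tracking error
            u = -k * s - theta_hat @ phi(x)      # feedback + estimated cancellation
            x += dt * (theta_true @ phi(x) + u)  # forward-Euler plant step
            theta_hat = theta_hat + dt * gamma * s * phi(x)  # adaptation law
            errors.append(abs(s))
        return np.array(errors)
    ```

    The adaptation law is the standard Lyapunov-motivated choice: with V = s²/2 + |θ̂ − θ|²/(2γ), setting θ̂̇ = γ s φ(x) yields V̇ = −k s², so the tracking error decays even though θ is never identified exactly.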

    @inproceedings{RichardsAzizanEtAl2021,
      author = {Richards, S. M. and Azizan, N. and Slotine, J.-J. E. and Pavone, M.},
      title = {Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems},
      booktitle = {{Robotics: Science and Systems}},
      year = {2021},
      note = {In press},
      keywords = {press},
      address = {Virtual},
      month = jul,
      url = {https://arxiv.org/pdf/2103.04490.pdf},
    }
    
  2. A. Sharma, N. Azizan, and M. Pavone, “Sketching Curvature for Efficient Out-of-Distribution Detection for Deep Neural Networks,” in Proc. Conf. on Uncertainty in Artificial Intelligence, 2021. (In Press)

    Abstract: In order to safely deploy Deep Neural Networks (DNNs) within the perception pipelines of real-time decision-making systems, there is a need for safeguards that can detect out-of-training-distribution (OoD) inputs both efficiently and accurately. Building on recent work leveraging the local curvature of DNNs to reason about epistemic uncertainty, we propose Sketching Curvature for OoD Detection (SCOD), an architecture-agnostic framework for equipping any trained DNN with a task-relevant epistemic uncertainty estimate. Offline, given a trained model and its training data, SCOD employs tools from matrix sketching to tractably compute a low-rank approximation of the Fisher information matrix, which characterizes which directions in the weight space are most influential on the predictions over the training data. Online, we estimate uncertainty by measuring how much perturbations orthogonal to these directions can alter predictions at a new test input. We apply SCOD to pre-trained networks of varying architectures on several tasks, ranging from regression to classification. We demonstrate that SCOD achieves comparable or better OoD detection performance with lower computational burden relative to existing baselines.
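    The offline/online split described above can be sketched schematically: a randomized range sketch recovers a low-rank eigenbasis of the empirical Fisher without ever forming the d × d matrix, and the online score is the energy of a test-input gradient outside that basis. This is an illustrative simplification, assuming per-sample gradients are available as rows of a matrix; function names and the residual-norm score are hypothetical, not the released SCOD implementation.

    ```python
    import numpy as np

    def fit_low_rank_basis(grads, k, oversample=10, seed=0):
        """Offline: approximate the top-k eigenvectors of the empirical Fisher
        G^T G from per-sample gradients G (n x d), using a randomized range
        sketch so the d x d Fisher is never formed explicitly."""
        rng = np.random.default_rng(seed)
        n, _ = grads.shape
        omega = rng.standard_normal((n, k + oversample))
        Q, _ = np.linalg.qr(grads.T @ omega)   # d x (k+oversample) sketched basis
        _, _, vt = np.linalg.svd(grads @ Q, full_matrices=False)
        return Q @ vt[:k].T                    # d x k approximate eigenbasis

    def uncertainty_score(U, g_test):
        """Online: energy of the test-input gradient outside the top-curvature
        subspace; large values flag inputs unlike the training data."""
        residual = g_test - U @ (U.T @ g_test)
        return float(np.linalg.norm(residual))
    ```

    The sketch costs O(ndk) offline and O(dk) per online query, which is the source of the efficiency claim: no d × d matrix is stored or decomposed.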

    @inproceedings{SharmaAzizanEtAl2021,
      author = {Sharma, A. and Azizan, N. and Pavone, M.},
      title = {Sketching Curvature for Efficient Out-of-Distribution Detection for Deep Neural Networks},
      booktitle = {{Proc. Conf. on Uncertainty in Artificial Intelligence}},
      year = {2021},
      note = {In press},
      month = jul,
      url = {https://arxiv.org/abs/2102.12567},
      keywords = {press},
    }