Robin Brown

Contact:

Email: rabrown1 at stanford dot edu

Robin is a second-year PhD student at the Institute for Computational and Mathematical Engineering. Prior to coming to Stanford, she attended the California Institute of Technology, where she received her B.S. in Mathematics and in Business, Economics, and Management, with a minor in Control and Dynamical Systems. Her current research focuses on scalable and efficient algorithms for large-scale multi-agent systems.


ASL Publications

  1. L. F. Valenzuela, R. Brown, and M. Pavone, “Decentralized Implicit Differentiation,” IEEE Transactions on Control of Network Systems, 2024. (Submitted)

    Abstract: The ability to differentiate through optimization problems has unlocked numerous applications, from optimization-based layers in machine learning models to complex design problems formulated as bilevel programs. It has been shown that exploiting problem structure can yield significant computational gains for optimization and, in some cases, enable distributed computation. One should expect that this structure can be similarly exploited for gradient computation. In this work, we discuss a decentralized framework for computing gradients of constraint-coupled optimization problems. First, we show that this framework results in significant computational gains, especially for large systems, and provide sufficient conditions for its validity. Second, we leverage the exponential decay of sensitivities in graph-structured problems to build a fully distributed algorithm with convergence guarantees. Finally, we use the methodology to rigorously estimate marginal emissions rates in power systems models. Specifically, we demonstrate how the distributed scheme allows for accurate and efficient estimation of these important emissions metrics on large dynamic power system models.

    @article{ValenzuelaBrownEtAl2024,
      author = {Valenzuela, L. F. and Brown, R. and Pavone, M.},
      title = {Decentralized Implicit Differentiation},
      journal = {{IEEE Transactions on Control of Network Systems}},
      note = {Submitted},
      year = {2024},
      url = {https://arxiv.org/abs/2403.01260},
      owner = {rabrown1},
      timestamp = {2024-03-05},
      keywords = {sub}
    }
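
    A minimal sketch of the underlying idea, assuming a centralized NumPy setting rather than the paper's decentralized framework (the function and problem below are illustrative, not the authors' code): for an equality-constrained QP, the KKT system defines the solution implicitly as a function of the problem data, and differentiating that system yields all solution sensitivities with one extra linear solve.

    # Hypothetical sketch: implicit differentiation of an equality-constrained QP,
    #   min_x 0.5 x'Qx + c'x   s.t.   Ax = b.
    # The KKT system [Q A'; A 0][x; lam] = [-c; b] defines x(b) implicitly;
    # differentiating it in b gives the Jacobian dx/db without re-solving.
    import numpy as np

    def solve_and_differentiate(Q, c, A, b):
        n, m = Q.shape[0], A.shape[0]
        K = np.block([[Q, A.T], [A, np.zeros((m, m))]])  # KKT matrix
        z = np.linalg.solve(K, np.concatenate([-c, b]))  # primal-dual solution
        x, lam = z[:n], z[n:]
        # Differentiating the KKT system in b: K @ d[x; lam]/db = [0; I].
        sens = np.linalg.solve(K, np.vstack([np.zeros((n, m)), np.eye(m)]))
        return x, lam, sens[:n, :]                       # last output is dx/db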
    
  2. D. E. B. Neira, R. Brown, P. Sathe, F. Wudarski, M. Pavone, E. G. Rieffel, and D. Venturelli, “Benchmarking the Operation of Quantum Heuristics and Ising Machines: Scoring Parameter Setting Strategies on Optimization Applications,” 2024. (Submitted)

    Abstract: We discuss guidelines for evaluating the performance of parameterized stochastic solvers for optimization problems, with particular attention to systems that employ novel hardware, such as digital quantum processors running variational algorithms, analog processors performing quantum annealing, or coherent Ising Machines. We illustrate through an example a benchmarking procedure grounded in the statistical analysis of the expectation of a given performance metric measured in a test environment. In particular, we discuss the necessity and cost of setting the parameters that affect the algorithm’s performance; the optimal values of these parameters can vary significantly between instances of the same target problem. We present an open-source software package that facilitates the design, evaluation, and visualization of practical parameter-tuning strategies for complex use of the heterogeneous components of the solver. We examine in detail an example using parallel tempering and a simulator of a photonic Coherent Ising Machine, and display the scoring of an illustrative baseline family of parameter-setting strategies that features an exploration-exploitation trade-off.

    @unpublished{NeiraBrownEtAl2024,
      author = {Neira, D. E. B. and Brown, R. and Sathe, P. and Wudarski, F. and Pavone, M. and Rieffel, E. G. and Venturelli, D.},
      title = {Benchmarking the Operation of Quantum Heuristics and Ising Machines: Scoring Parameter Setting Strategies on Optimization Applications},
      year = {2024},
      keywords = {sub},
      note = {Submitted},
      url = {https://arxiv.org/abs/2402.10255},
      owner = {rabrown1},
      timestamp = {2024-03-01}
    }
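
    As a rough illustration of scoring a parameter-setting strategy (the stand-in solver, parameter grid, and epsilon-greedy rule below are invented for this sketch and are not the paper's open-source package), one can estimate the expectation of a performance metric over test instances under a fixed tuning budget:

    # Hypothetical sketch: score an exploration-exploitation parameter-setting
    # strategy for a parameterized stochastic solver by its mean best objective
    # over a set of test instances.
    import numpy as np

    rng = np.random.default_rng(0)

    def solver(instance, temperature):
        # Stand-in stochastic solver (lower is better); quality depends on how
        # close `temperature` is to an instance-specific sweet spot.
        return (temperature - instance) ** 2 + rng.normal(scale=0.1)

    def epsilon_greedy_tuning(instance, grid, budget=50, eps=0.2):
        means, counts, best = np.zeros(len(grid)), np.zeros(len(grid)), np.inf
        for _ in range(budget):
            if counts.min() == 0 or rng.random() < eps:
                i = int(rng.integers(len(grid)))      # explore a random setting
            else:
                i = int(np.argmin(means))             # exploit the best estimate
            val = solver(instance, grid[i])
            counts[i] += 1
            means[i] += (val - means[i]) / counts[i]  # running mean per setting
            best = min(best, val)
        return best

    grid = np.linspace(0.0, 1.0, 11)
    scores = [epsilon_greedy_tuning(inst, grid) for inst in rng.random(20)]
    print("strategy score (mean over test instances):", np.mean(scores))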
    
  3. R. A. Brown, D. Venturelli, M. Pavone, and D. E. Bernal Neira, “Accelerating Continuous Variable Coherent Ising Machines via Momentum,” in Int. Conf. on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research, 2024. (In Press)

    Abstract: The Coherent Ising Machine (CIM) is a non-conventional architecture that takes inspiration from physical annealing processes to solve Ising problems heuristically. Its dynamics are naturally continuous and described by a set of ordinary differential equations that have proven useful for optimizing continuous-variable, non-convex quadratic optimization problems. The dynamics of such Continuous Variable CIMs (CV-CIM) encourage optimization via optical pulses whose amplitudes are determined by the negative gradient of the objective; however, standard gradient descent is known to be trapped by local minima and hampered by poor problem conditioning. In this work, we propose to modify the CV-CIM dynamics using more sophisticated pulse injections based on tried-and-true optimization techniques such as momentum and Adam. Through numerical experiments, we show that the momentum and Adam updates can significantly speed up the CV-CIM’s convergence and improve sample diversity over the original CV-CIM dynamics. We also find that the Adam-CV-CIM’s performance is more stable as a function of feedback strength, especially on poorly conditioned instances, resulting in an algorithm that is more robust, reliable, and easily tunable. More broadly, we identify the CIM dynamical framework as a fertile opportunity for exploring the intersection of classical optimization and modern analog computing.

    @inproceedings{BrownEtAlCPAIOR2024,
      author = {Brown, R. A. and Venturelli, D. and Pavone, M. and Bernal Neira, D. E.},
      booktitle = {{Int. Conf. on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research}},
      title = {Accelerating Continuous Variable Coherent Ising Machines via Momentum},
      year = {2024},
      note = {In press},
      keywords = {press},
      owner = {rabrown1},
      timestamp = {2024-01-22}
    }
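
    A toy sketch of the momentum idea, assuming a plain explicit-Euler discretization of gradient dynamics on a quadratic objective (the authors' CV-CIM simulator, pulse model, and Adam variant are not reproduced here):

    # Hypothetical sketch: heavy-ball momentum layered on gradient dynamics,
    # standing in for momentum-modified CV-CIM pulse updates.
    import numpy as np

    def momentum_dynamics(Q, c, x0, lr=0.05, beta=0.9, steps=500):
        x, v = x0.astype(float), np.zeros_like(x0, dtype=float)
        for _ in range(steps):
            grad = Q @ x + c          # gradient of 0.5 x'Qx + c'x
            v = beta * v - lr * grad  # momentum accumulates past gradients
            x = x + v                 # plain dynamics would use x -= lr * grad
        return x

    Q = np.array([[3.0, 0.5], [0.5, 1.0]])
    c = np.array([-1.0, 2.0])
    print(momentum_dynamics(Q, c, np.zeros(2)))  # should approach -inv(Q) @ c
    print(-np.linalg.solve(Q, c))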
    
  4. R. A. Brown, D. E. Bernal Neira, D. Venturelli, and M. Pavone, “A Copositive Framework for Analysis of Hybrid Ising-Classical Algorithms,” SIAM Journal on Optimization, 2024. (In Press)

    @article{BrownBernalEtAl2024,
      author = {Brown, R. A. and Bernal Neira, D. E. and Venturelli, D. and Pavone, M.},
      title = {A Copositive Framework for Analysis of Hybrid Ising-Classical Algorithms},
      journal = {{SIAM Journal on Optimization}},
      note = {In press},
      year = {2024},
      keywords = {press},
      url = {https://arxiv.org/abs/2207.13630},
      timestamp = {2022-10-04}
    }
    
  5. R. Brown, E. Schmerling, N. Azizan, and M. Pavone, “A Unified View of SDP-based Neural Network Verification through Completely Positive Programming,” in Int. Conf. on Artificial Intelligence and Statistics, 2022.

    Abstract: Verifying that input-output relationships of a neural network conform to prescribed operational specifications is a key enabler towards deploying these networks in safety-critical applications. Semidefinite programming (SDP)-based approaches to Rectified Linear Unit (ReLU) network verification transcribe this problem into an optimization problem, where the accuracy of any such formulation reflects the level of fidelity in how the neural network computation is represented, as well as the relaxations of intractable constraints. While the literature contains much progress on improving the tightness of SDP formulations while maintaining tractability, comparatively little work has been devoted to the other extreme, i.e., how to most accurately capture the original verification problem before SDP relaxation. In this work, we develop an exact, convex formulation of verification as a completely positive program (CPP), and provide analysis showing that our formulation is minimal—the removal of any constraint fundamentally misrepresents the neural network computation. We leverage our formulation to provide a unifying view of existing approaches, and give insight into the source of large relaxation gaps observed in some cases.

    @inproceedings{BrownSchmerlingEtAl2022,
      author = {Brown, R. and Schmerling, E. and Azizan, N. and Pavone, M.},
      title = {A Unified View of SDP-based Neural Network Verification through Completely Positive Programming},
      booktitle = {{Int. Conf. on Artificial Intelligence and Statistics}},
      year = {2022},
      owner = {rabrown1},
      timestamp = {2022-02-17}
    }
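
    A small worked example of the SDP-relaxation setting the abstract contrasts against, under stated assumptions: the exact quadratic encoding of a scalar ReLU, y >= w*x, y >= 0, y*(y - w*x) = 0, relaxed through a lifted moment matrix. This is the standard SDP relaxation written in CVXPY, not the paper's completely positive formulation, and all numbers are illustrative.

    # Hypothetical sketch: SDP relaxation for verifying y = ReLU(w * x), x in [lo, hi].
    import cvxpy as cp

    w, lo, hi = 1.0, -1.0, 1.0                # toy one-neuron "network"
    M = cp.Variable((3, 3), symmetric=True)   # lifted moments of (1, x, y)
    x, y = M[0, 1], M[0, 2]
    x2, y2, xy = M[1, 1], M[2, 2], M[1, 2]
    constraints = [
        M >> 0, M[0, 0] == 1,
        y >= w * x, y >= 0,                   # linear part of the ReLU encoding
        y2 == w * xy,                         # lifted y * (y - w * x) = 0
        x >= lo, x <= hi,
        x2 <= (lo + hi) * x - lo * hi,        # lifted (x - lo) * (hi - x) >= 0
    ]
    prob = cp.Problem(cp.Maximize(y), constraints)
    prob.solve()
    print("upper bound on the ReLU output:", y.value)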
    
  6. R. Brown, D. Bernal, A. Sahasrabudhe, A. Lott, D. Venturelli, and M. Pavone, “Copositive optimization via Ising solvers,” 2022.

    @unpublished{BrownBernalEtAl2022,
      author = {Brown, R. and Bernal, D. and Sahasrabudhe, A. and Lott, A. and Venturelli, D. and Pavone, M.},
      title = {Copositive optimization via Ising solvers},
      note = {Int. Conf. on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research. Extended Abstract.},
      year = {2022},
      owner = {rabrown1},
      url = {/wp-content/papercite-data/pdf/Brown.Bernal.ea.pdf},
      timestamp = {2022-04-07}
    }
    
  7. R. A. Brown, F. Rossi, K. Solovey, M. Tsao, M. T. Wolf, and M. Pavone, “On Local Computation for Network-Structured Convex Optimization in Multi-Agent Systems,” IEEE Transactions on Control of Network Systems, vol. 8, no. 2, pp. 542–554, 2021.

    Abstract: A number of prototypical optimization problems in multi-agent systems (e.g., task allocation and network load-sharing) exhibit a highly local structure: that is, each agent’s decision variables are only directly coupled to a few other agents’ variables through the objective function or the constraints. In this paper, we develop a rigorous notion of "locality" that quantifies the degree to which agents can compute their portion of the global solution of such a distributed optimization problem based solely on information in their local neighborhood. We build upon the results of Rebeschini and Tatikonda (2019) to develop a more general theory of locality that fully captures the importance of problem data to individual solution components, as opposed to a theory that only captures response to perturbations. This analysis provides a theoretical basis for a rather simple algorithm in which agents individually solve a truncated sub-problem of the global problem, where the size of the sub-problem depends on the locality of the problem and the desired accuracy. Numerical results show that the proposed theoretical bounds are remarkably tight for well-conditioned problems.

    @article{BrownRossiEtAl2020,
      author = {Brown, R. A. and Rossi, F. and Solovey, K. and Tsao, M. and Wolf, M. T. and Pavone, M.},
      title = {On Local Computation for Network-Structured Convex Optimization in Multi-Agent Systems},
      journal = {{IEEE Transactions on Control of Network Systems}},
      volume = {8},
      number = {2},
      pages = {542--554},
      year = {2021},
      url = {/wp-content/papercite-data/pdf/Brown.Rossi.ea.TCNS20.pdf},
      owner = {rabrown1},
      timestamp = {2021-08-31}
    }
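
    A minimal sketch of the truncated-subproblem idea, assuming a symmetric, diagonally dominant linear system stands in for the optimality conditions of a graph-structured problem (the function names and chain-graph example are illustrative, not the paper's algorithm):

    # Hypothetical sketch: agent i solves only the subproblem restricted to its
    # k-hop neighborhood and keeps its own solution component.
    import numpy as np

    def k_hop_neighborhood(adj, i, k):
        reach = np.zeros(adj.shape[0], dtype=bool)
        reach[i] = True
        for _ in range(k):
            reach |= (adj @ reach) > 0        # expand the frontier by one hop
        return np.flatnonzero(reach)

    def local_solve(A, b, adj, i, k):
        idx = k_hop_neighborhood(adj, i, k)
        x_sub = np.linalg.solve(A[np.ix_(idx, idx)], b[idx])
        return x_sub[int(np.where(idx == i)[0][0])]

    n = 30                                    # 30 agents on a chain graph
    A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    adj = (A != 0) & ~np.eye(n, dtype=bool)   # coupling graph of the system
    print(local_solve(A, np.ones(n), adj, i=10, k=3))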
    
  8. R. A. Brown, F. Rossi, K. Solovey, M. T. Wolf, and M. Pavone, “Exploiting Locality and Structure for Distributed Optimization in Multi-Agent Systems,” in European Control Conference, St. Petersburg, Russia, 2020.

    Abstract: A number of prototypical optimization problems in multi-agent systems (e.g., task allocation and network load-sharing) exhibit a highly local structure: that is, each agent’s decision variables are only directly coupled to a few other agents’ variables through the objective function or the constraints. Nevertheless, existing algorithms for distributed optimization generally do not exploit the locality structure of the problem, requiring all agents to compute or exchange the full set of decision variables. In this paper, we develop a rigorous notion of "locality" that relates the structural properties of a linearly-constrained convex optimization problem (in particular, the sparsity structure of the constraint matrix and the objective function) to the amount of information that agents should exchange to compute an arbitrarily high-quality approximation to the problem from a cold start. We leverage the notion of locality to develop a locality-aware distributed optimization algorithm, and we show that, for problems where individual agents only need to know a small portion of the optimal solution, the algorithm requires very limited inter-agent communication. Numerical results show that the convergence rate of our algorithm is directly explained by the proposed locality parameter, and that the proposed theoretical bounds are remarkably tight.

    @inproceedings{BrownRossiEtAl20,
      author = {Brown, R. A. and Rossi, F. and Solovey, K. and Wolf, M. T. and Pavone, M.},
      title = {Exploiting Locality and Structure for Distributed Optimization in Multi-Agent Systems},
      booktitle = {{European Control Conference}},
      year = {2020},
      address = {St. Petersburg, Russia},
      month = may,
      url = {/wp-content/papercite-data/pdf/Brown.Rossi.ea.ECC20.pdf},
      owner = {rabrown1},
      timestamp = {2020-02-25}
    }
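
    A quick numerical check of the locality intuition shared with the TCNS article above, assuming a tridiagonal, diagonally dominant system is a fair stand-in for a well-conditioned chain-structured problem: the sensitivity of one agent's solution component to another agent's data decays roughly exponentially with their graph distance, which is what makes truncated local computation accurate.

    # Hypothetical sketch: sensitivities decay with graph distance on a chain.
    import numpy as np

    n = 50
    A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    S = np.linalg.inv(A)   # S[i, j] = sensitivity of x_i to a change in b_j
    j = n // 2
    for d in (0, 2, 4, 8):
        print(f"graph distance {d}: |dx_i/db_j| = {abs(S[j + d, j]):.2e}")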