Carmen Amo Alonso is a Schmidt Science Fellow affiliated with ASL at Stanford University. Her research lies at the intersection of control theory, machine learning, and optimization, with a focus on generative AI. Carmen’s work aims to uncover and design control mechanisms in foundation models, and leverages control-theoretic principles to develop safer, more controllable AI technologies. Prior to joining Stanford, she was a postdoctoral fellow at the Artificial Intelligence Center at ETH Zurich. Carmen earned a Ph.D. in Control and Dynamical Systems from Caltech in 2023, where she was advised by Prof. John Doyle, an M.Sc. in Space Engineering from Caltech in 2017, and a B.Sc. in Aerospace Engineering from the Technical University of Madrid in 2016. She also worked as an intern at Tesla in 2022. Besides research, Carmen is committed to education for all. As a member of Clubes de Ciencia, she travels to Mexico during the summer to teach underserved students. She also serves as the Communications and Engagement Chair of the Stanford Science Policy Group.
Abstract: Vision-Language-Action Models (VLAs) have shown remarkable progress towards embodied intelligence. While their architecture partially resembles that of Large Language Models (LLMs), VLAs exhibit higher complexity due to their multi-modal inputs and outputs and their often hybrid combination of transformer and diffusion heads. This is part of the reason why insights from mechanistic interpretability in LLMs, which explain how internal model representations relate to output behavior, do not trivially transfer to their VLA counterparts. In this work, we propose to close this gap by introducing and analyzing two main concepts: feature-observability and feature-controllability. In particular, we first study features that are linearly encoded in representation space, and show how they can be observed by means of a linear classifier. Then, we use a minimal linear intervention grounded in optimal control to accurately place internal representations and steer the VLA’s output towards a desired region. Our results show that targeted, lightweight interventions can reliably steer a robot’s behavior while preserving closed-loop capabilities. Through simulation experiments on different VLA architectures (π0.5 and OpenVLA), we demonstrate that VLAs possess interpretable internal structure amenable to online adaptation without fine-tuning, enabling real-time alignment with user preferences and task requirements.
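The two ingredients described in the abstract, a linear probe for feature-observability and a minimal linear intervention for feature-controllability, can be illustrated with a short sketch. The snippet below is not the authors' code: it assumes hidden states are already extracted as PyTorch tensors, and the names `fit_linear_probe`, `minimal_steering`, and `target_logit` are illustrative placeholders rather than anything from the paper. The intervention is the minimum-norm shift that places a representation at a desired probe reading, in the spirit of the optimal-control framing described above.

```python
# Illustrative sketch (not the authors' implementation): probe a linearly
# encoded feature in hidden activations, then apply a minimal linear
# intervention that steers the representation along the probe direction.
import torch
import torch.nn.functional as F

def fit_linear_probe(H, y, steps=500, lr=1e-2):
    """Fit a linear classifier (logistic regression) predicting a binary
    feature label y (n,) from hidden states H (n, d)."""
    d = H.shape[1]
    w = torch.zeros(d, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([w, b], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = H @ w + b
        loss = F.binary_cross_entropy_with_logits(logits, y.float())
        loss.backward()
        opt.step()
    return w.detach(), b.detach()

def minimal_steering(h, w, b, target_logit):
    """Smallest L2 intervention delta such that the probe reads
    `target_logit` on the steered state h + delta (closed form:
    minimum-norm solution along the probe direction)."""
    gap = target_logit - (h @ w + b)
    delta = gap * w / (w @ w)
    return h + delta

# Example usage with synthetic data (purely illustrative):
# H = torch.randn(256, 64); y = (H[:, 0] > 0).long()
# w, b = fit_linear_probe(H, y)
# h_steered = minimal_steering(H[0], w, b, target_logit=torch.tensor(4.0))
```

In a closed-loop setting one might apply such a steering step to the chosen layer's activation at each decoding step; the closed-form delta keeps the intervention lightweight, consistent with the paper's goal of online adaptation without fine-tuning.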
@article{BuurmeijerAlonsoEtAl2026,
author = {Buurmeijer, H. and Amo Alonso, C. and Aiden, S. and Pavone, M.},
title = {Observing and Controlling Features in Vision-Language-Action Models},
year = {2026},
journal = {ArXiv 2603.05487},
url = {https://arxiv.org/abs/2603.05487},
keywords = {sub},
owner = {hbuurmei},
timestamp = {2026-02-09}
}