Chris Agia

Contacts:

Email: cagia at cs dot stanford dot edu

Chris is a graduate student in the Department of Computer Science, advised jointly by Professors Jeannette Bohg and Marco Pavone. His research focuses on how complex robotic behaviors can be learned from data and embodied interaction (reinforcement learning), and on how those behaviors can be used to plan feasibly and efficiently for any task (task & motion planning).

Prior to joining Stanford, Chris graduated with honors from the University of Toronto’s Engineering Science program. During that time, he conducted research in robot vision, mapping, planning, and control with UofT’s Robot Vision and Learning Lab and Autonomous Systems and Biomechatronics Lab, and with McGill’s Mobile Robotics Lab. Chris has also held internships with Microsoft Mixed Reality, Google Cloud, and Noah’s Ark Research Labs.

Beyond research, Chris enjoys playing soccer and tennis, going on trail runs, reading, and playing songs from music’s golden age on the guitar, bass, and drums.

Awards:

  • Stanford School of Engineering Fellowship (2021)

ASL Publications

  1. J. Thumm, C. Agia, M. Pavone, and M. Althoff, “Text2Interaction: Establishing Safe and Preferable Human-Robot Interaction,” in Conf. on Robot Learning, Nov. 2024. (In Press)

    Abstract: Adjusting robot behavior to human preferences can require intensive human feedback, preventing quick adaptation to new users and changing circumstances. Moreover, current approaches typically treat user preferences as a reward, which requires a manual balance between task success and user satisfaction. To integrate new user preferences in a zero-shot manner, our proposed Text2Interaction framework invokes large language models to generate a task plan, motion preferences as Python code, and parameters of a safety controller. By maximizing the combined probability of task completion and user satisfaction instead of a weighted sum of rewards, we can reliably find plans that fulfill both requirements. We find that 83% of users working with Text2Interaction agree that it integrates their preferences into the plan of the robot, and 94% prefer Text2Interaction over the baseline. Our ablation study shows that Text2Interaction aligns better with unseen preferences than other baselines while maintaining a high success rate. Real-world demonstrations and code are made available at sites.google.com/view/text2interaction.

    @inproceedings{ThummAgiaEtAl2024,
      author = {Thumm, J. and Agia, C. and Pavone, M. and Althoff, M.},
      title = {Text2Interaction: Establishing Safe and Preferable Human-Robot Interaction},
      booktitle = {{Conf. on Robot Learning}},
      year = {2024},
      address = {Munich, Germany},
      keywords = {press},
      month = nov,
      url = {https://arxiv.org/abs/2408.06105},
      owner = {agia},
      timestamp = {2024-09-19}
    }
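
    To make the scoring idea in the abstract above concrete, here is a minimal sketch (in Python) that ranks candidate plans by the joint probability of task completion and user satisfaction rather than by a weighted sum of rewards. The plan names and probability estimates are fabricated for illustration and are not taken from the paper.

    # Hedged sketch: joint-probability plan scoring vs. a weighted-sum baseline.
    # All plans and probability estimates below are hypothetical.
    candidate_plans = {
        "balanced handover": {"p_success": 0.70, "p_satisfaction": 0.70},
        "fast but jarring":  {"p_success": 0.99, "p_satisfaction": 0.45},
    }

    def joint_score(plan):
        # Joint probability: a plan must do well on BOTH criteria to rank highly.
        return plan["p_success"] * plan["p_satisfaction"]

    def weighted_score(plan, w=0.5):
        # Weighted-sum baseline: needs a hand-tuned w and can favor plans that
        # sacrifice one criterion for the other.
        return w * plan["p_success"] + (1.0 - w) * plan["p_satisfaction"]

    print(max(candidate_plans, key=lambda k: joint_score(candidate_plans[k])))
    # -> "balanced handover" (0.49 vs. 0.4455)
    print(max(candidate_plans, key=lambda k: weighted_score(candidate_plans[k])))
    # -> "fast but jarring" (0.72 vs. 0.70): the weighted sum flips the ranking.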
    
  2. C. Agia, R. Sinha, J. Yang, Z. Cao, R. Antonova, M. Pavone, and J. Bohg, “Unpacking Failure Modes of Generative Policies: Runtime Monitoring of Consistency and Progress,” in Conf. on Robot Learning, Nov. 2024. (In Press)

    Abstract: Robot behavior policies trained via imitation learning are prone to failure under conditions that deviate from their training data. Thus, algorithms that monitor learned policies at test time and provide early warnings of failure are necessary to facilitate scalable deployment. We propose Sentinel, a runtime monitoring framework that splits the detection of failures into two complementary categories: 1) Erratic failures, which we detect using statistical measures of temporal action consistency, and 2) task progression failures, where we use Vision Language Models (VLMs) to detect when the policy confidently and consistently takes actions that do not solve the task. Our approach has two key strengths. First, because learned policies exhibit diverse failure modes, combining complementary detectors leads to significantly higher accuracy at failure detection. Second, using a statistical temporal action consistency measure ensures that we quickly detect when multimodal, generative policies exhibit erratic behavior at negligible computational cost. In contrast, we only use VLMs to detect failure modes that are less time-sensitive. We demonstrate our approach in the context of diffusion policies trained on robotic mobile manipulation domains in both simulation and the real world. By unifying temporal consistency detection and VLM runtime monitoring, Sentinel detects 18% more failures than using either of the two detectors alone and significantly outperforms baselines, thus highlighting the importance of assigning specialized detectors to complementary categories of failure. Qualitative results are made available at sites.google.com/stanford.edu/sentinel.

    @inproceedings{AgiaSinhaEtAl2024,
      author = {Agia, C. and Sinha, R. and Yang, J. and Cao, Z. and Antonova, R. and Pavone, M. and Bohg, J.},
      title = {Unpacking Failure Modes of Generative Policies: Runtime Monitoring of Consistency and Progress},
      booktitle = {{Conf. on Robot Learning}},
      year = {2024},
      address = {Munich, Germany},
      keywords = {press},
      month = nov,
      url = {https://arxiv.org/abs/2410.04640},
      owner = {agia},
      timestamp = {2024-10-20}
    }
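
    The statistical temporal-consistency check described above can be illustrated with a small sketch: sample action batches from a generative policy at consecutive timesteps, and flag erratic behavior when a distribution distance (here, a biased MMD estimate with an RBF kernel) exceeds a threshold. The kernel bandwidth, threshold, and synthetic samples are all illustrative assumptions, not the authors' implementation.

    import numpy as np

    def mmd2(X, Y, sigma=1.0):
        """Squared maximum mean discrepancy between sample sets X and Y (biased)."""
        def k(A, B):  # RBF kernel matrix between rows of A and B
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma**2))
        return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

    def is_erratic(actions_prev, actions_curr, threshold=0.5):
        # Overlapping action predictions from consecutive timesteps should agree;
        # a large distance suggests the policy is fluctuating erratically.
        return mmd2(actions_prev, actions_curr) > threshold

    rng = np.random.default_rng(0)
    consistent = (rng.normal(0.00, 0.05, (32, 2)), rng.normal(0.02, 0.05, (32, 2)))
    erratic = (rng.normal(-1.0, 0.05, (32, 2)), rng.normal(1.0, 0.05, (32, 2)))
    print(is_erratic(*consistent))  # False: predictions agree across timesteps
    print(is_erratic(*erratic))     # True: the policy 'changed its mind' abruptly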
    
  3. R. Sinha, A. Elhafsi, C. Agia, M. Foutter, E. Schmerling, and M. Pavone, “Real-Time Anomaly Detection and Planning with Large Language Models,” in Robotics: Science and Systems, Delft, Netherlands, 2024.

    Abstract: Foundation models, e.g., large language models, trained on internet-scale data possess zero-shot generalization capabilities that make them a promising technology for anomaly detection for robotic systems. Fully realizing this promise, however, poses two challenges: (i) mitigating the considerable computational expense of these models such that they may be applied online, and (ii) incorporating their judgement regarding potential anomalies into a safe control framework. In this work we present a two-stage reasoning framework: a fast binary anomaly classifier based on analyzing observations in an LLM embedding space, which may trigger a slower fallback selection stage that utilizes the reasoning capabilities of generative LLMs. These stages correspond to branch points in a model predictive control strategy that maintains the joint feasibility of continuing along various fallback plans as soon as an anomaly is detected (while the selector decides), thus ensuring safety. We demonstrate that, even when instantiated with relatively small language models, our fast anomaly classifier outperforms autoregressive reasoning with state-of-the-art GPT models. This enables our runtime monitor to improve the trustworthiness of dynamic robotic systems under resource and time constraints.

    @inproceedings{SinhaElhafsiEtAl2024,
      author = {Sinha, R. and Elhafsi, A. and Agia, C. and Foutter, M. and Schmerling, E. and Pavone, M.},
      title = {Real-Time Anomaly Detection and Planning with Large Language Models},
      booktitle = {{Robotics: Science and Systems}},
      address = {Delft, Netherlands},
      month = jul,
      year = {2024},
      owner = {amine},
      url = {https://arxiv.org/abs/2407.08735},
      timestamp = {2024-09-19}
    }
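
    A schematic sketch of the two-stage monitor described above: a lightweight classifier over LLM embeddings runs at every control step, and only an anomaly flag triggers the slow, generative-LLM fallback selector. The names embed and slow_llm_select_fallback, and the classifier weights, are hypothetical placeholders rather than the paper's API.

    import numpy as np

    class FastAnomalyClassifier:
        """Logistic head over a frozen LLM embedding, trained offline on nominal data."""
        def __init__(self, weights, bias):
            self.weights, self.bias = weights, bias

        def is_anomalous(self, embedding, threshold=0.5):
            score = 1.0 / (1.0 + np.exp(-(embedding @ self.weights + self.bias)))
            return score > threshold

    def monitor_step(observation, embed, classifier, slow_llm_select_fallback, fallbacks):
        z = embed(observation)  # fast path: one embedding call per control step
        if classifier.is_anomalous(z):
            # Slow path, invoked only on a flag; while the LLM deliberates, the
            # model predictive controller keeps all fallback plans jointly feasible.
            return slow_llm_select_fallback(observation, fallbacks)
        return None  # nominal: keep executing the current plan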
    
  4. M. Foutter, P. Bhoj, R. Sinha, A. Elhafsi, S. Banerjee, C. Agia, J. Kruger, T. Guffanti, D. Gammelli, S. D’Amico, and M. Pavone, “Adapting a Foundation Model for Space-based Tasks,” in Robotics: Science and Systems - Workshop on Semantics for Robotics: From Environment Understanding and Reasoning to Safe Interaction, 2024.

    Abstract: Foundation models, e.g., large language models, possess attributes of intelligence which offer promise to endow a robot with the contextual understanding necessary to navigate complex, unstructured tasks in the wild. In the future of space robotics, we see three core challenges which motivate the use of a foundation model adapted to space-based applications: 1) Scalability of ground-in-the-loop operations; 2) Generalizing prior knowledge to novel environments; and 3) Multi-modality in tasks and sensor data. Therefore, as a first-step towards building a foundation model for space-based applications, we automatically label the AI4Mars dataset to curate a language annotated dataset of visual-question-answer tuples. We fine-tune a pretrained LLaVA checkpoint on this dataset to endow a vision-language model with the ability to perform spatial reasoning and navigation on Mars’ surface. In this work, we demonstrate that 1) existing vision-language models are deficient visual reasoners in space-based applications, and 2) fine-tuning a vision-language model on extraterrestrial data significantly improves the quality of responses even with a limited training dataset of only a few thousand samples.

    @inproceedings{FoutterBohjEtAl2024,
      author = {Foutter, M. and Bhoj, P. and Sinha, R. and Elhafsi, A. and Banerjee, S. and Agia, C. and Kruger, J. and Guffanti, T. and Gammelli, D. and D'Amico, S. and Pavone, M.},
      title = {Adapting a Foundation Model for Space-based Tasks},
      booktitle = {{Robotics: Science and Systems - Workshop on Semantics for Robotics: From Environment Understanding and Reasoning to Safe Interaction}},
      year = {2024},
      asl_abstract = {Foundation models, e.g., large language models, possess attributes of intelligence which offer promise to endow a robot with the contextual understanding necessary to navigate complex, unstructured tasks in the wild. In the future of space robotics, we see three core challenges which motivate the use of a foundation model adapted to space-based applications: 1) Scalability of ground-in-the-loop operations; 2) Generalizing prior knowledge to novel environments; and 3) Multi-modality in tasks and sensor data. Therefore, as a first-step towards building a foundation model for space-based applications, we automatically label the AI4Mars dataset to curate a language annotated dataset of visual-question-answer tuples. We fine-tune a pretrained LLaVA checkpoint on this dataset to endow a vision-language model with the ability to perform spatial reasoning and navigation on Mars' surface. In this work, we demonstrate that 1) existing vision-language models are deficient visual reasoners in space-based applications, and 2) fine-tuning a vision-language model on extraterrestrial data significantly improves the quality of responses even with a limited training dataset of only a few thousand samples.},
      asl_address = {Delft, Netherlands},
      asl_url = {https://arxiv.org/abs/2408.05924},
      url = {https://arxiv.org/abs/2408.05924},
      owner = {foutter},
      timestamp = {2024-08-12}
    }
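
    As a toy illustration of the dataset-curation step described in the abstract above (auto-labeling imagery into language-annotated visual question answering tuples), the snippet below turns a terrain segmentation label into a question-answer pair. The class names follow AI4Mars, but the question template and traversability rule are invented for illustration.

    # Terrain classes as in AI4Mars; the QA template below is hypothetical.
    AI4MARS_CLASSES = {0: "soil", 1: "bedrock", 2: "sand", 3: "big rock"}

    def make_vqa_tuple(image_path, dominant_class_id):
        terrain = AI4MARS_CLASSES[dominant_class_id]
        traversable = terrain in ("soil", "bedrock")  # toy traversability rule
        question = "What terrain dominates this image, and is it safe to traverse?"
        answer = (f"The terrain is mostly {terrain}; it is "
                  f"{'likely' if traversable else 'not'} safe to drive over.")
        return {"image": image_path, "question": question, "answer": answer}

    print(make_vqa_tuple("ai4mars/0001.png", 2))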
    
  5. C. Agia, G. C. Vila, S. Bandyopadhyay, D. S. Bayard, K. Cheung, C. H. Lee, E. Wood, I. Aenishanslin, S. Ardito, L. Fesq, M. Pavone, and I. A. D. Nesnas, “Modeling Considerations for Developing Deep Space Autonomous Spacecraft and Simulators,” in IEEE Aerospace Conference, 2024.

    Abstract: To extend the limited scope of autonomy used in prior missions for operation in distant and complex environments, there is a need to further develop and mature autonomy that jointly reasons over multiple subsystems, which we term system-level autonomy. System-level autonomy establishes situational awareness that resolves conflicting information across subsystems, which may necessitate the refinement and interconnection of the underlying spacecraft and environment onboard models. However, with a limited understanding of the assumptions and tradeoffs of modeling to arbitrary extents, designing onboard models to support system-level capabilities presents a significant challenge. In this paper, we provide a detailed analysis of the increasing levels of model fidelity for several key spacecraft subsystems, with the goal of informing future spacecraft functional- and system-level autonomy algorithms and the physics-based simulators on which they are validated. We do not argue for the adoption of a particular fidelity class of models but, instead, highlight the potential tradeoffs and opportunities associated with the use of models for onboard autonomy and in physics-based simulators at various fidelity levels. We ground our analysis in the context of deep space exploration of small bodies, an emerging frontier for autonomous spacecraft operation in space, where the choice of models employed onboard the spacecraft may determine mission success. We conduct our experiments in the Multi-Spacecraft Concept and Autonomy Tool (MuSCAT), a software suite for developing spacecraft autonomy algorithms.

    @inproceedings{AgiaVilaEtAl2024,
      author = {Agia, C. and Vila, {G. C.} and Bandyopadhyay, S. and Bayard, {D. S.} and Cheung, K. and Lee, {C. H.} and Wood, E. and Aenishanslin, I. and Ardito, S. and Fesq, L. and Pavone, M. and Nesnas, {I. A. D.}},
      title = {Modeling Considerations for Developing Deep Space Autonomous Spacecraft and Simulators},
      booktitle = {{IEEE Aerospace Conference}},
      year = {2024},
      asl_abstract = {To extend the limited scope of autonomy used in prior missions for operation in distant and complex environments, there is a need to further develop and mature autonomy that jointly reasons over multiple subsystems, which we term system-level autonomy. System-level autonomy establishes situational awareness that resolves conflicting information across subsystems, which may necessitate the refinement and interconnection of the underlying spacecraft and environment onboard models. However, with a limited understanding of the assumptions and tradeoffs of modeling to arbitrary extents, designing onboard models to support system-level capabilities presents a significant challenge. In this paper, we provide a detailed analysis of the increasing levels of model fidelity for several key spacecraft subsystems, with the goal of informing future spacecraft functional- and system-level autonomy algorithms and the physics-based simulators on which they are validated. We do not argue for the adoption of a particular fidelity class of models but, instead, highlight the potential tradeoffs and opportunities associated with the use of models for onboard autonomy and in physics-based simulators at various fidelity levels. We ground our analysis in the context of deep space exploration of small bodies, an emerging frontier for autonomous spacecraft operation in space, where the choice of models employed onboard the spacecraft may determine mission success. We conduct our experiments in the Multi-Spacecraft Concept and Autonomy Tool (MuSCAT), a software suite for developing spacecraft autonomy algorithms.},
      asl_address = {Big Sky, Montana},
      asl_month = mar,
      asl_url = {https://arxiv.org/abs/2401.11371},
      owner = {agia},
      timestamp = {2024-03-01}
    }
    
  6. K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg, “Text2Motion: From Natural Language Instructions to Feasible Plans,” Autonomous Robots, vol. 47, no. 8, pp. 1345–1365, Nov. 2023.

    Abstract: We propose Text2Motion, a language-based planning framework enabling robots to solve sequential manipulation tasks that require long-horizon reasoning. Given a natural language instruction, our framework constructs both a task- and motion-level plan that is verified to reach inferred symbolic goals. Text2Motion uses feasibility heuristics encoded in Q-functions of a library of skills to guide task planning with Large Language Models. Whereas previous language-based planners only consider the feasibility of individual skills, Text2Motion actively resolves geometric dependencies spanning skill sequences by performing geometric feasibility planning during its search. We evaluate our method on a suite of problems that require long-horizon reasoning, interpretation of abstract goals, and handling of partial affordance perception. Our experiments show that Text2Motion can solve these challenging problems with a success rate of 82%, while prior state-of-the-art language-based planning methods only achieve 13%. Text2Motion thus provides promising generalization characteristics to semantically diverse sequential manipulation tasks with geometric dependencies between skills.

    @article{LinAgiaEtAl2023,
      author = {Lin, K. and Agia, C. and Migimatsu, T. and Pavone, M. and Bohg, J.},
      title = {Text2Motion: From Natural Language Instructions to Feasible Plans},
      journal = {{Autonomous Robots}},
      volume = {47},
      number = {8},
      pages = {1345--1365},
      year = {2023},
      month = nov,
      doi = {10.1007/s10514-023-10131-7},
      url = {https://doi.org/10.1007/s10514-023-10131-7},
      owner = {agia},
      timestamp = {2024-02-29}
    }
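
    A minimal sketch of the feasibility heuristic described in the abstract above: candidate skill sequences (e.g., proposed by a language model) are scored by the product of state-conditioned Q-values, so a single geometrically infeasible skill sinks the whole sequence. The skills, Q-values, and state-update rule are toy stand-ins, not the paper's implementation.

    def q_value(skill, state):
        # Toy state-conditioned feasibility: picking the box is infeasible
        # until it has been pulled within reach using the hook.
        if skill == "pick(box)" and not state["box_in_reach"]:
            return 0.05
        return 0.9

    def apply_skill(skill, state):
        state = dict(state)
        if skill == "pull(box, hook)":
            state["box_in_reach"] = True
        return state

    def sequence_feasibility(sequence, state):
        score = 1.0
        for skill in sequence:
            score *= q_value(skill, state)  # per-skill feasibility estimate
            state = apply_skill(skill, state)
        return score

    start = {"box_in_reach": False}
    plans = [
        ["pick(box)", "place(box, shelf)"],
        ["pick(hook)", "pull(box, hook)", "pick(box)", "place(box, shelf)"],
    ]
    print(max(plans, key=lambda p: sequence_feasibility(p, start)))
    # -> the hook plan (0.656 vs. 0.045): it resolves the geometric dependency first.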
    
  7. A. Elhafsi, R. Sinha, C. Agia, E. Schmerling, I. A. D. Nesnas, and M. Pavone, “Semantic Anomaly Detection with Large Language Models,” Autonomous Robots, vol. 47, no. 8, pp. 1035–1055, Oct. 2023.

    Abstract: As robots acquire increasingly sophisticated skills and see increasingly complex and varied environments, the threat of an edge case or anomalous failure is ever present. For example, Tesla cars have seen interesting failure modes ranging from autopilot disengagements due to inactive traffic lights carried by trucks to phantom braking caused by images of stop signs on roadside billboards. These system-level failures are not due to failures of any individual component of the autonomy stack but rather system-level deficiencies in semantic reasoning. Such edge cases, which we call semantic anomalies, are simple for a human to disentangle yet require insightful reasoning. To this end, we study the application of large language models (LLMs), endowed with broad contextual understanding and reasoning capabilities, to recognize such edge cases and introduce a monitoring framework for semantic anomaly detection in vision-based policies. Our experiments apply this framework to a finite state machine policy for autonomous driving and a learned policy for object manipulation. These experiments demonstrate that the LLM-based monitor can effectively identify semantic anomalies in a manner that shows agreement with human reasoning. Finally, we provide an extended discussion on the strengths and weaknesses of this approach and motivate a research outlook on how we can further use foundation models for semantic anomaly detection. Our project webpage can be found at https://sites.google.com/view/llm-anomaly-detection.

    @article{ElhafsiSinhaEtAl2023,
      author = {Elhafsi, A. and Sinha, R. and Agia, C. and Schmerling, E. and Nesnas, {I. A. D.} and Pavone, M.},
      title = {Semantic Anomaly Detection with Large Language Models},
      journal = {{Autonomous Robots}},
      volume = {47},
      number = {8},
      pages = {1035--1055},
      year = {2023},
      month = oct,
      doi = {10.1007/s10514-023-10132-6},
      url = {https://arxiv.org/abs/2305.11307},
      owner = {amine},
      timestamp = {2024-09-19}
    }
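
    A hypothetical prompt-construction sketch in the spirit of the monitoring framework described above: the scene (e.g., the output of an object detector) is summarized in text, and an LLM is asked to judge whether any observation constitutes a semantic anomaly. The prompt wording and the query_llm placeholder are illustrative assumptions, not the paper's exact interface.

    def build_monitor_prompt(task, detections):
        scene = ", ".join(detections)
        return (
            f"A robot is performing the task: {task}.\n"
            f"Its camera currently detects: {scene}.\n"
            "Could any of these observations be a semantic anomaly that might "
            "mislead the robot's policy (e.g., a stop sign printed on a "
            "billboard)? Answer ANOMALY or NOMINAL, then explain briefly."
        )

    prompt = build_monitor_prompt(
        task="autonomous highway driving",
        detections=["truck carrying inactive traffic lights", "sedan", "lane markings"],
    )
    # response = query_llm(prompt)  # placeholder for any chat-completion client
    print(prompt)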