<aside> 🌮

A plug-and-play framework that helps terminal agents compress noisy observations, reduce token overhead, and reason better over long-horizon tasks.

</aside>


👥 Authors

Jincheng Ren¹, Siwei Wu¹, Yizhi Li¹, Kang Zhu², Shu Xu³, Boyu Feng², Ruibin Yuan⁴, Wei Zhang⁵, Riza Batista-Navarro¹, Jian Yang⁵, Chenghua Lin¹

<aside> 🏛️

Affiliations

¹ University of Manchester · ² MAP · ³ HKUST(GZ) · ⁴ HKUST · ⁵ Beihang University

</aside>


🔗 Links

<aside> 📄

Paper

arXiv: 2604.19572

</aside>

<aside> 🤗

Daily Paper

Hugging Face

</aside>

<aside> 💻

Code

GitHub Repository

</aside>


⚡ TL;DR

<aside> 💡


🎯 Why Terminal Observation Compression Matters

As AI agents become increasingly capable, long-horizon, multi-turn interaction in complex terminal environments is becoming a central challenge for real-world software engineering agents.

A practical but often overlooked bottleneck is that terminal agents continuously feed raw terminal outputs back into their future context, and these outputs typically contain large amounts of low-value, redundant information.

This redundancy not only increases token cost; it can also bury task-critical signals such as errors, failures, and useful execution feedback, making long-horizon reasoning more difficult.
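The kind of observation compression described above can be illustrated with a simple heuristic: deduplicate repeated output, keep lines that look task-critical, and truncate the rest. This is a minimal sketch for intuition only; the function name, the keyword filter, and the truncation policy are assumptions for illustration, not the framework's actual method.

```python
import re

def compress_observation(raw: str, max_lines: int = 30) -> str:
    """Sketch of a terminal-output compressor: collapse duplicate lines,
    preserve error-like lines, and truncate the middle of long output."""
    lines = raw.splitlines()

    # Collapse runs of identical lines (e.g. progress-bar spam).
    deduped = []
    for line in lines:
        if deduped and line == deduped[-1]:
            continue
        deduped.append(line)

    if len(deduped) <= max_lines:
        return "\n".join(deduped)

    # Illustrative filter for task-critical signals the agent should not lose.
    critical = re.compile(r"error|fail|warn|traceback|exception", re.IGNORECASE)

    head = deduped[: max_lines // 2]
    tail = deduped[-(max_lines // 2):]
    # From the truncated middle, keep only lines matching the critical filter.
    middle = [l for l in deduped[len(head):-len(tail)] if critical.search(l)]
    omitted = len(deduped) - len(head) - len(tail) - len(middle)
    return "\n".join(
        head + middle + [f"... [{omitted} lines omitted] ..."] + tail
    )
```

For example, 100 repeated progress lines followed by a single error collapse to a few lines, while the error line itself survives compression and stays visible in the agent's context.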