<aside> 🌮
A plug-and-play framework that helps terminal agents compress noisy observations, reduce token overhead, and reason better over long-horizon tasks.
</aside>
Jincheng Ren¹, Siwei Wu¹, Yizhi Li¹, Kang Zhu², Shu Xu³, Boyu Feng², Ruibin Yuan⁴, Wei Zhang⁵, Riza Batista-Navarro¹, Jian Yang⁵, Chenghua Lin¹
<aside> 🏛️
Affiliations
¹ University of Manchester · ² MAP · ³ HKUST(GZ) · ⁴ HKUST · ⁵ Beihang University
</aside>
<aside> 📄
Paper
</aside>
<aside> 🤗
Daily Paper
</aside>
<aside> 💻
Code
</aside>
<aside> 💡
As AI agents become increasingly capable, long-horizon, multi-turn interaction in complex terminal environments is becoming a central challenge for real-world software engineering agents.
A practical but often overlooked bottleneck is that terminal agents continuously feed raw terminal outputs back into their future context, and these outputs typically carry large amounts of low-value information.
This redundancy not only increases token cost, but can also bury task-critical signals such as errors, failures, and useful execution feedback, making long-horizon reasoning more difficult.
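To make the problem concrete, here is a minimal sketch of observation compression (a hypothetical illustration only, not the framework's actual algorithm): collapse consecutive duplicate lines, and when the output is still too long, keep the head, the tail, and any error-like lines so task-critical signals survive truncation.

```python
import re

def compress_observation(raw: str, max_lines: int = 20) -> str:
    """Illustrative sketch: shrink a noisy terminal observation by collapsing
    consecutive duplicate lines and preserving error-like signals.
    (Hypothetical helper, not the paper's actual method.)"""
    lines = raw.splitlines()

    # Collapse consecutive duplicates (e.g., progress-bar or retry spam).
    collapsed: list[list] = []
    for line in lines:
        if collapsed and collapsed[-1][0] == line:
            collapsed[-1][1] += 1
        else:
            collapsed.append([line, 1])
    deduped = [
        f"{text}  [x{count}]" if count > 1 else text
        for text, count in collapsed
    ]

    if len(deduped) <= max_lines:
        return "\n".join(deduped)

    # Over budget: keep head, tail, and any error-like lines in between,
    # so failures are not buried by the elision.
    important = [
        l for l in deduped
        if re.search(r"error|fail|traceback|warning", l, re.IGNORECASE)
    ]
    head, tail = deduped[:5], deduped[-5:]
    middle = [l for l in important if l not in head and l not in tail]
    elided = len(deduped) - len(head) - len(tail)
    return "\n".join(head + [f"... ({elided} lines elided) ..."] + middle + tail)
```

Even a simple policy like this keeps error messages and execution feedback visible while discarding the repetitive bulk, which is the intuition behind compressing observations before they re-enter the agent's context.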