Agentic Context Engineering in RAG Pipelines
- Sponsored by: msg life ag
- Project lead: Dr. Ricardo Acevedo Cabra
- Scientific lead: Marina Baumgartner
- TUM co-mentor: TBA
- Term: Summer semester 2026
- Application deadline: Sunday 25.01.2026
Apply to this project here

Motivation
Retrieval-Augmented Generation (RAG) systems already enable companies to effectively leverage their internal knowledge. At msg life, we have built a successful agentic RAG solution that delivers high-quality information, providing convenient access to technical documentation, regulatory frameworks, and internal knowledge sources in the insurance industry.
However, real-world knowledge is dynamic: regulations change, products evolve, and information grows in complexity. Static retrieval methods often fall short when faced with these evolving contexts, which demand self-adjusting AI architectures.
Agentic Context Engineering (ACE) aims to push RAG beyond static retrieval. By integrating an intelligent, adaptive context layer, agents can actively select, evaluate, and refine the information they use. Unlike black-box weight updates, ACE makes contextual reasoning more interpretable and controllable, revealing why an answer emerged. By integrating reflection and reasoning over agentic context, we move from static retrieval to a self-improving system.
This project explores how to make AI self-reflective and adaptive, moving from retrieving knowledge to engineering it.
Research Questions
- “How does Agentic Context Engineering (ACE) enhance the accuracy, reliability, and interpretability of generative answers in RAG systems compared to static retrieval contexts?”
- “What mechanisms enable agents to evaluate, prioritize, and iteratively update contextual information, and how can these changes be made transparent to users and developers?”
Goals
- Implement an ACE layer within an existing RAG pipeline.
- Design and test generator, reflector, and curator components for transparent, adaptive context management (see the sketch after this list).
- Evaluate the effect of ACE on real-world domain question-answering tasks, involving complex agentic tool calls and reasoning steps.
- Compare system behavior before and after context adaptation.
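To make the intended division of labor between the three components concrete, here is a minimal sketch of one possible generator–reflector–curator loop. All names (`Generator`, `Reflector`, `Curator`, the `llm` callable, and the playbook of context notes) are illustrative assumptions for this page, not the existing msg life pipeline or a fixed design for the project.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# An LLM is modeled as a plain callable: prompt in, text out.
LLM = Callable[[str], str]


@dataclass
class ContextPlaybook:
    """Curated, human-readable context notes that evolve across queries."""
    notes: List[str] = field(default_factory=list)

    def render(self) -> str:
        return "\n".join(f"- {note}" for note in self.notes)


@dataclass
class Generator:
    llm: LLM

    def answer(self, question: str, retrieved: List[str], playbook: ContextPlaybook) -> str:
        # Answer using both the retrieved passages and the curated context notes.
        prompt = (
            "Context notes:\n" + playbook.render() + "\n\n"
            "Retrieved passages:\n" + "\n".join(retrieved) + "\n\n"
            f"Question: {question}\nAnswer:"
        )
        return self.llm(prompt)


@dataclass
class Reflector:
    llm: LLM

    def critique(self, question: str, answer: str, playbook: ContextPlaybook) -> str:
        # Extract one reusable lesson from the latest question/answer pair.
        prompt = (
            "You review a RAG answer and extract reusable lessons.\n"
            f"Question: {question}\nAnswer: {answer}\n"
            "Current notes:\n" + playbook.render() + "\n"
            "State one short lesson that would improve future answers:"
        )
        return self.llm(prompt)


@dataclass
class Curator:
    max_notes: int = 50  # keep the playbook bounded

    def merge(self, playbook: ContextPlaybook, lesson: str) -> ContextPlaybook:
        # Incremental update: append the new lesson and drop the oldest notes if
        # over budget, rather than rewriting the whole context in one shot.
        notes = playbook.notes + [lesson.strip()]
        return ContextPlaybook(notes=notes[-self.max_notes:])


def ace_step(question: str, retrieved: List[str],
             generator: Generator, reflector: Reflector, curator: Curator,
             playbook: ContextPlaybook) -> Tuple[str, ContextPlaybook]:
    """One adaptive step: answer, reflect, then curate the shared context."""
    answer = generator.answer(question, retrieved, playbook)
    lesson = reflector.critique(question, answer, playbook)
    return answer, curator.merge(playbook, lesson)
```

Because every context change arrives as a plain-text note in the playbook, each adaptation remains inspectable, which is exactly the transparency this goal asks for.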
Objectives
- Data Familiarization: Explore and review the dataset, our RAG pipeline, and the surrounding infrastructure.
- Methodology: Research ACE algorithms and evaluation strategies to enable adaptive context refinement.
- Building Prototype: Implement and integrate a self-improving context mechanism into the RAG pipeline.
- Evaluation: Assess the performance of an adaptive context strategy for RAG pipelines against a static baseline on reasoning and retrieval tasks, and investigate the trade-off between answer quality and cost (see the sketch below).
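As a starting point for the evaluation objective, a comparison harness could look like the sketch below. The metric (`exact_match`), the dataset format, and the use of token counts as a cost proxy are assumptions to be replaced by the project's actual evaluation setup.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class EvalExample:
    question: str
    reference_answer: str


# A pipeline maps a question to (answer, tokens_used); tokens serve as a rough cost proxy.
Pipeline = Callable[[str], Tuple[str, int]]


def exact_match(prediction: str, reference: str) -> float:
    """Toy metric; a real evaluation would use LLM-as-a-judge or task-specific scoring."""
    return float(prediction.strip().lower() == reference.strip().lower())


def evaluate(pipeline: Pipeline, examples: List[EvalExample]) -> Dict[str, float]:
    # Assumes a non-empty example list.
    scores, costs = [], []
    for ex in examples:
        answer, tokens = pipeline(ex.question)
        scores.append(exact_match(answer, ex.reference_answer))
        costs.append(tokens)
    return {
        "accuracy": sum(scores) / len(scores),
        "avg_tokens": sum(costs) / len(costs),
    }


def compare(static_pipeline: Pipeline, ace_pipeline: Pipeline,
            examples: List[EvalExample]) -> None:
    """Report quality and cost side by side for the static baseline and the ACE variant."""
    for name, pipeline in [("static", static_pipeline), ("ace", ace_pipeline)]:
        result = evaluate(pipeline, examples)
        print(f"{name}: accuracy={result['accuracy']:.2f}, avg_tokens={result['avg_tokens']:.0f}")
```

Running both pipelines over the same examples makes the quality–cost trade-off explicit: the adaptive variant must justify any extra tokens spent on reflection and curation with a measurable gain in accuracy.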
Requirements
- Curiosity and initiative in experimenting with new algorithms and architectures.
- Experience with Python and willingness to explore frameworks for LLM integration.
- Basic knowledge of Natural Language Processing and retrieval-augmented generation would be helpful.
Expected Outcome
By the end of the project, students will:
- Deliver a working prototype of a RAG system enhanced with an ACE layer.
- Provide an empirical evaluation comparing static vs. adaptive context architectures.
- Present insights on how agents can self-improve their reasoning context in complex, evolving domains.
Why Join This Project?
This project combines cutting-edge research with real-world impact. You’ll gain hands-on experience with modern LLM architectures, adaptive AI design, and evaluation of intelligent agents in dynamic, complex domains. It’s a chance to explore how future AI systems can reflect on their own reasoning.
Apply to this project here