Topic: RAG


Short answer

This page shows the most relevant public items for RAG, ranked by trend activity and review signal. Use weekly for fast changes, monthly for more stable patterns, and all-time for evergreen picks.



  1. AIRS-Bench: a Suite of Tasks for Frontier AI Research Science Agents

    Paper · Feb 6, 2026 · arxiv.org · Alisia Lupidi, Bhavul Gauri, Thomas Simon Foster, Bassel Al Omari, Despoina Magka, Alberto Pepe, Alexis Audran-Reiss, Muna Aghamelu, Nicolas Baldwin, Lucia Cipolina-Kun, Jean-Christophe Gagnon-Audet, Chee Hau Leow, Sandra Lefdal, Hossam Mossalam, Abhinav Moudgil, Saba Nazir, Emanuel Tewolde, Isabel Urrego, Jordi Armengol Estape, Amar Budhiraja, Gaurav Chaurasia, Abhishek Charnalia, Derek Dunfield, Karen Hambardzumyan, Daniel Izcovich, Martin Josifoski, Ishita Mediratta, Kelvin Niu, Parth Pathak, Michael Shvartsman, Edan Toledo, Anton Protopopov, Roberta Raileanu, Alexander Miller, Tatiana Shavrina, Jakob Foerster, Yoram Bachrach

    LLM agents hold significant promise for advancing scientific research. To accelerate this progress, we introduce AIRS-Bench (the AI Research Science Benchmark), a suite of 20 tasks sourced from sta...

  2. Agentic Uncertainty Reveals Agentic Overconfidence

    Paper · Feb 6, 2026 · arxiv.org · Jean Kaddour, Srijan Patel, Gbètondji Dovonon, Leo Richter, Pasquale Minervini, Matt J. Kusner

    Can AI agents predict whether they will succeed at a task? We study agentic uncertainty by eliciting success probability estimates before, during, and after task execution. All results exhibit agen...

  3. From Features to Actions: Explainability in Traditional and Agentic AI Systems

    Paper · Feb 6, 2026 · arxiv.org · Sindhuja Chaduvula, Jessee Ho, Kina Kim, Aravind Narayanan, Mahshid Alinoori, Muskan Garg, Dhanesh Ramachandram, Shaina Raza

    Over the last decade, explainable AI has primarily focused on interpreting individual model predictions, producing post-hoc explanations that relate inputs to outputs under a fixed decision structu...

  4. SimpleMem: Efficient Lifelong Memory for LLM Agents

    Paper · Jan 29, 2026 · arxiv.org · Jiaqi Liu, Yaofeng Su, Peng Xia, Siwei Han, Zeyu Zheng, Cihang Xie, Mingyu Ding, Huaxiu Yao

    To support long-term interaction in complex environments, LLM agents require memory systems that manage historical experiences. Existing approaches either retain full interaction histories via pass...

  5. MAGMA: A Multi-Graph based Agentic Memory Architecture for AI Agents

    Paper · Jan 6, 2026 · arxiv.org · Dongming Jiang, Yi Li, Guanpeng Li, Bingzhe Li

    Memory-Augmented Generation (MAG) extends Large Language Models with external memory to support long-context reasoning, but existing approaches largely rely on semantic similarity over monolithic m...

  6. Membox: Weaving Topic Continuity into Long-Range Memory for LLM Agents

    Paper · Jan 20, 2026 · arxiv.org · Dehao Tao, Guoliang Ma, Yongfeng Huang, Minghu Jiang

    Human-agent dialogues often exhibit topic continuity (a stable thematic frame that evolves through temporally adjacent exchanges), yet most large language model (LLM) agent memory systems fail to pres...

  7. PRISMA: Reinforcement Learning Guided Two-Stage Policy Optimization in Multi-Agent Architecture for Open-Domain Multi-Hop Question Answering

    Paper · Jan 9, 2026 · arxiv.org · Yu Liu, Wenxiao Zhang, Cong Cao, Wenxuan Lu, Fangfang Yuan, Diandian Guo, Kun Peng, Qiang Sun, Kaiyan Zhang, Yanbing Liu, Jin B. Hong, Bowen Zhou, Zhiyuan Ma

    Answering real-world open-domain multi-hop questions over massive corpora is a critical challenge in Retrieval-Augmented Generation (RAG) systems. Recent research employs reinforcement learning (RL...

  8. Relink: Constructing Query-Driven Evidence Graph On-the-Fly for GraphRAG

    Paper · Jan 12, 2026 · arxiv.org · Manzong Huang, Chenyang Bu, Yi He, Xingrui Zhuo, Xindong Wu

    Graph-based Retrieval-Augmented Generation (GraphRAG) mitigates hallucinations in Large Language Models (LLMs) by grounding them in structured knowledge. However, current GraphRAG methods are const...

  9. Beyond Dialogue Time: Temporal Semantic Memory for Personalized LLM Agents

    Paper · Jan 12, 2026 · arxiv.org · Miao Su, Yucan Guo, Zhongni Hou, Long Bai, Zixuan Li, Yufei Zhang, Guojun Yin, Wei Lin, Xiaolong Jin, Jiafeng Guo, Xueqi Cheng

    Memory enables Large Language Model (LLM) agents to perceive, store, and use information from past dialogues, which is essential for personalization. However, existing methods fail to properly mode...

  10. SwiftMem: Fast Agentic Memory via Query-aware Indexing

    Paper · Jan 13, 2026 · arxiv.org · Anxin Tian, Yiming Li, Xing Li, Hui-Ling Zhen, Lei Chen, Xianzhi Yu, Zhenhua Dong, Mingxuan Yuan

    Agentic memory systems have become critical for enabling LLM agents to maintain long-term context and retrieve relevant information efficiently. However, existing memory frameworks suffer from a fu...



FAQ

What does this RAG page rank?

It ranks public content for RAG using recent discussion, review, and engagement signals so you can triage faster. This guidance is specific to the RAG topic page on Attendemia and is written to stand on its own.

How should I use weekly vs monthly vs all-time?

Use weekly for fast-moving updates, monthly for stable trend confirmation, and all-time for evergreen references.

How can I discover organizations active in RAG?

Use the linked entities section to jump to labs, companies, and experts connected to this topic and explore their timelines.

Can I follow this topic for updates?

Yes. Use the follow button on this page to subscribe and track new high-signal activity.