Topic: RAG


Short answer

This page shows the most relevant public items for RAG, ranked by trend activity and review signal. Use weekly for fast changes, monthly for more stable patterns, and all-time for evergreen picks.

Weekly · Monthly · All time


  1. To Retrieve or To Think? An Agentic Approach for Context Evolution

    Paper · Jan 14, 2026 · arxiv.org · Rubing Chen, Jian Wang, Wenjie Li, Xiao-Yong Wei, Qing Li

    Current context augmentation methods, such as retrieval-augmented generation, are essential for solving knowledge-intensive reasoning tasks. However, they typically adhere to a rigid, brute-force s...

  2. OpenDecoder: Open Large Language Model Decoding to Incorporate Document Quality in RAG

    Paper · Jan 24, 2026 · arxiv.org · Fengran Mo, Zhan Su, Yuchen Hui, Jinghan Zhang, Jia Ao Sun, Zheyuan Liu, Chao Zhang, Tetsuya Sakai, Jian-Yun Nie

    The development of large language models (LLMs) has achieved superior performance in a range of downstream tasks, including LLM-based retrieval-augmented generation (RAG). The quality of generated ...

  3. AtomMem: Learnable Dynamic Agentic Memory with Atomic Memory Operation

    Paper · Jan 27, 2026 · arxiv.org · Yupeng Huo, Yaxi Lu, Zhong Zhang, Haotian Chen, Yankai Lin

    Equipping agents with memory is essential for solving real-world long-horizon problems. However, most existing agent memory mechanisms rely on static and hand-crafted workflows. This limits the per...

  4. The AI Hippocampus: How Far are We From Human Memory?

    Paper · Jan 14, 2026 · arxiv.org · Zixia Jia, Jiaqi Li, Yipeng Kang, Yuxuan Wang, Tong Wu, Quansen Wang, Xiaobo Wang, Shuyi Zhang, Junzhe Shen, Qing Li, Siyuan Qi, Yitao Liang, Di He, Zilong Zheng, Song-Chun Zhu

    Memory plays a foundational role in augmenting the reasoning, adaptability, and contextual fidelity of modern Large Language Models and Multi-Modal LLMs. As these models transition from static pred...

  5. Rethinking Memory Mechanisms of Foundation Agents in the Second Half: A Survey

    Paper · Feb 10, 2026 · arxiv.org · Wei-Chieh Huang, Weizhi Zhang, Yueqing Liang, Yuanchen Bei, Yankai Chen, Tao Feng, Xinyu Pan, Zhen Tan, Yu Wang, Tianxin Wei, Shanglin Wu, Ruiyao Xu, Liangwei Yang, Rui Yang, Wooseong Yang, Chin-Yuan Yeh, Hanrong Zhang, Haozhen Zhang, Siqi Zhu, Henry Peng Zou, Wanjia Zhao, Song Wang, Wujiang Xu, Zixuan Ke, Zheng Hui, Dawei Li, Yaozu Wu, Langzhou He, Chen Wang, Xiongxiao Xu, Baixiang Huang, Juntao Tan, Shelby Heinecke, Huan Wang, Caiming Xiong, Ahmed A. Metwally, Jun Yan, Chen-Yu Lee, Hanqing Zeng, Yinglong Xia, Xiaokai Wei, Ali Payani, Haitong Ma, Wenya Wang, Chenguang Wang, Yu Zhang, Xin Wang, Yongfeng Zhang, Jiaxuan You, Hanghang Tong, Xiao Luo, Xue Liu, Yizhou Sun, Wei Wang, Julian McAuley, James Zou, Jiawei Han, Philip S. Yu, Kai Shu

    The research of artificial intelligence is undergoing a paradigm shift from prioritizing model innovations over benchmark scores towards emphasizing problem definition and rigorous real-world evalu...

  6. Continuum Memory Architectures for Long-Horizon LLM Agents

    Paper · Jan 14, 2026 · arxiv.org · Joe Logan

    Retrieval-augmented generation (RAG) has become the default strategy for providing large language model (LLM) agents with contextual knowledge. Yet RAG treats memory as a stateless lookup table: in...

  7. Topo-RAG: Topology-aware retrieval for hybrid text-table documents

    Paper · Jan 15, 2026 · arxiv.org · Alex Dantart, Marco Kóvacs-Navarro

    In enterprise datasets, documents are rarely pure. They are not just text, nor just numbers; they are a complex amalgam of narrative and structure. Current Retrieval-Augmented Generation (RAG) syst...

  8. Grounding Agent Memory in Contextual Intent

    Paper · Jan 15, 2026 · arxiv.org · Ruozhen Yang, Yucheng Jiang, Yueqi Jiang, Priyanka Kargupta, Yunyi Zhang, Jiawei Han

    Deploying large language models in long-horizon, goal-oriented interactions remains challenging because similar entities and facts recur under different latent goals and constraints, causing memory...

  9. Utilizing Metadata for Better Retrieval-Augmented Generation

    Paper · Jan 17, 2026 · arxiv.org · Raquib Bin Yousuf, Shengzhe Xu, Mandar Sharma, Andrew Neeser, Chris Latimer, Naren Ramakrishnan

    Retrieval-Augmented Generation systems depend on retrieving semantically relevant document chunks to support accurate, grounded outputs from large language models. In structured and repetitive corp...

  10. Augmenting Question Answering with A Hybrid RAG Approach

    Paper · Jan 25, 2026 · arxiv.org · Tianyi Yang, Nashrah Haque, Vaishnave Jonnalagadda, Yuya Jeremy Ong, Zhehui Chen, Yanzhao Wu, Lei Yu, Divyesh Jadav, Wenqi Wei

    Retrieval-Augmented Generation (RAG) has emerged as a powerful technique for enhancing the quality of responses in Question-Answering (QA) tasks. However, existing approaches often struggle with re...

  11. Incorporating Q&A Nuggets into Retrieval-Augmented Generation

    Paper · Jan 19, 2026 · arxiv.org · Laura Dietz, Bryan Li, Gabrielle Liu, Jia-Huei Ju, Eugene Yang, Dawn Lawrie, William Walden, James Mayfield

    RAGE systems integrate ideas from automatic evaluation (E) into Retrieval-augmented Generation (RAG). As one such example, we present Crucible, a Nugget-Augmented Generation System that preserves e...

  12. FadeMem: Biologically-Inspired Forgetting for Efficient Agent Memory

    Paper · Feb 6, 2026 · arxiv.org · Lei Wei, Xiao Peng, Xu Dong, Niantao Xie, Bin Wang

    Large language models deployed as autonomous agents face critical memory limitations, lacking selective forgetting mechanisms that lead to either catastrophic forgetting at context boundaries or in...

  13. Dep-Search: Learning Dependency-Aware Reasoning Traces with Persistent Memory

    Paper · Jan 26, 2026 · arxiv.org · Yanming Liu, Xinyue Peng, Zixuan Yan, Yanxin Shen, Wenjie Xu, Yuefeng Huang, Xinyi Wang, Jiannan Cao, Jianwei Yin, Xuhong Zhang

    Large Language Models (LLMs) have demonstrated remarkable capabilities in complex reasoning tasks, particularly when augmented with search mechanisms that enable systematic exploration of external ...
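Many of the papers above extend the same basic RAG loop: score each document in a corpus against the query, keep the top-k matches, and prepend them to the prompt before generation. The sketch below illustrates that loop only; the bag-of-words scoring, corpus, and prompt template are illustrative stand-ins (real systems use dense embeddings and an actual LLM call), not any listed paper's method.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real RAG uses dense vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query, keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context to the question for the generator."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Illustrative three-document corpus.
corpus = [
    "RAG augments a language model with retrieved documents.",
    "Agent memory stores facts across long-horizon interactions.",
    "Table retrieval handles hybrid text-table documents.",
]
prompt = build_prompt("How does RAG augment a model?", corpus)
```

The papers listed here vary each stage of this loop: the retriever (topology-aware, metadata-aware), the memory backing it (dynamic, forgetting, persistent), or the decision of when to retrieve at all.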

Page 5

Top Entities In This Topic

Related Topics

FAQ

What does this RAG page rank?

It ranks public content for RAG using recent discussion, review, and engagement signals so you can triage faster. This guidance is specific to the RAG topic page on Attendemia and is written to make sense without reading other sections of the page.

How should I use weekly vs monthly vs all-time?

Use weekly for fast-moving updates, monthly for stable trend confirmation, and all-time for evergreen references.

How can I discover organizations active in RAG?

Use the linked entities section to jump to labs, companies, and experts connected to this topic and explore their timelines.

Can I follow this topic for updates?

Yes. Use the follow button on this page to subscribe and track new high-signal activity.