Topic: Agentic AI


Short answer

This page shows the most relevant public items for Agentic AI, ranked by trend activity and review signals. Use the weekly view for fast-moving changes, the monthly view for more stable patterns, and the all-time view for evergreen picks.

Weekly · Monthly · All time


  1. MAGMA: A Multi-Graph based Agentic Memory Architecture for AI Agents

    Paper · Jan 6, 2026 · arxiv.org · Dongming Jiang, Yi Li, Guanpeng Li, Bingzhe Li

    Memory-Augmented Generation (MAG) extends Large Language Models with external memory to support long-context reasoning, but existing approaches largely rely on semantic similarity over monolithic m... (a toy sketch of this flat-retrieval baseline appears after this list)

  2. Membox: Weaving Topic Continuity into Long-Range Memory for LLM Agents

    Paper · Jan 20, 2026 · arxiv.org · Dehao Tao, Guoliang Ma, Yongfeng Huang, Minghu Jiang

    Human-agent dialogues often exhibit topic continuity-a stable thematic frame that evolves through temporally adjacent exchanges-yet most large language model (LLM) agent memory systems fail to pres...

  3. PRISMA: Reinforcement Learning Guided Two-Stage Policy Optimization in Multi-Agent Architecture for Open-Domain Multi-Hop Question Answering

    Paper · Jan 9, 2026 · arxiv.org · Yu Liu, Wenxiao Zhang, Cong Cao, Wenxuan Lu, Fangfang Yuan, Diandian Guo, Kun Peng, Qiang Sun, Kaiyan Zhang, Yanbing Liu, Jin B. Hong, Bowen Zhou, Zhiyuan Ma

    Answering real-world open-domain multi-hop questions over massive corpora is a critical challenge in Retrieval-Augmented Generation (RAG) systems. Recent research employs reinforcement learning (RL...

  4. Relink: Constructing Query-Driven Evidence Graph On-the-Fly for GraphRAG

    Paper · Jan 12, 2026 · arxiv.org · Manzong Huang, Chenyang Bu, Yi He, Xingrui Zhuo, Xindong Wu

    Graph-based Retrieval-Augmented Generation (GraphRAG) mitigates hallucinations in Large Language Models (LLMs) by grounding them in structured knowledge. However, current GraphRAG methods are const... (a toy evidence-subgraph sketch appears after this list)

  5. Beyond Dialogue Time: Temporal Semantic Memory for Personalized LLM Agents

    Paper · Jan 12, 2026 · arxiv.org · Miao Su, Yucan Guo, Zhongni Hou, Long Bai, Zixuan Li, Yufei Zhang, Guojun Yin, Wei Lin, Xiaolong Jin, Jiafeng Guo, Xueqi Cheng

    Memory enables Large Language Model (LLM) agents to perceive, store, and use information from past dialogues, which is essential for personalization. However, existing methods fail to properly mode... (a toy time-aware memory sketch appears after this list)

  6. SwiftMem: Fast Agentic Memory via Query-aware Indexing

    Paper · Jan 13, 2026 · arxiv.org · Anxin Tian, Yiming Li, Xing Li, Hui-Ling Zhen, Lei Chen, Xianzhi Yu, Zhenhua Dong, Mingxuan Yuan

    Agentic memory systems have become critical for enabling LLM agents to maintain long-term context and retrieve relevant information efficiently. However, existing memory frameworks suffer from a fu... (a toy query-aware index sketch appears after this list)

  7. To Retrieve or To Think? An Agentic Approach for Context Evolution

    Paper · Jan 14, 2026 · arxiv.org · Rubing Chen, Jian Wang, Wenjie Li, Xiao-Yong Wei, Qing Li

    Current context augmentation methods, such as retrieval-augmented generation, are essential for solving knowledge-intensive reasoning tasks. However, they typically adhere to a rigid, brute-force s... (a toy retrieve-or-think gate sketch appears after this list)

  8. OpenDecoder: Open Large Language Model Decoding to Incorporate Document Quality in RAG

    Paper · Jan 24, 2026 · arxiv.org · Fengran Mo, Zhan Su, Yuchen Hui, Jinghan Zhang, Jia Ao Sun, Zheyuan Liu, Chao Zhang, Tetsuya Sakai, Jian-Yun Nie

    The development of large language models (LLMs) has achieved superior performance in a range of downstream tasks, including LLM-based retrieval-augmented generation (RAG). The quality of generated ...

  9. AtomMem: Learnable Dynamic Agentic Memory with Atomic Memory Operation

    Paper · Jan 27, 2026 · arxiv.org · Yupeng Huo, Yaxi Lu, Zhong Zhang, Haotian Chen, Yankai Lin

    Equipping agents with memory is essential for solving real-world long-horizon problems. However, most existing agent memory mechanisms rely on static and hand-crafted workflows. This limits the per... (a toy atomic-operations sketch appears after this list)
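
Several of the techniques named above are easiest to grasp from a toy example. The sketches below are minimal Python illustrations of the general ideas only, written under stated assumptions; none of them reproduces a paper's actual method. First, item 1 critiques retrieval by semantic similarity over a monolithic memory store. A minimal sketch of that flat baseline, with a toy bag-of-words embedding standing in for a real embedding model:

```python
# Minimal sketch of the flat "semantic similarity over monolithic memory"
# baseline that item 1 argues against: every memory is one vector in one
# store, and retrieval is a nearest-neighbour scan. The bag-of-words
# "embedding" is a toy stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MonolithicMemory:
    def __init__(self) -> None:
        self.items: list[tuple[str, Counter]] = []

    def write(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda item: cosine(q, item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

mem = MonolithicMemory()
mem.write("user prefers short answers")
mem.write("project deadline moved to Friday")
print(mem.retrieve("project deadline", k=1))  # -> the deadline memory
```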
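
Item 4 names the core GraphRAG move: ground the model in structured knowledge by extracting a query-relevant subgraph. The seed-and-expand heuristic and the tiny `KG` below are assumptions for illustration, not Relink's construction algorithm:

```python
# Minimal sketch of the general GraphRAG idea: pull a query-relevant
# subgraph out of a larger knowledge graph and hand the triples to the
# LLM as evidence.
from collections import deque

# Toy knowledge graph: entity -> list of (relation, neighbour) edges.
KG = {
    "aspirin": [("treats", "headache"), ("class_of", "NSAID")],
    "NSAID": [("may_cause", "stomach upset")],
    "headache": [("symptom_of", "migraine")],
}

def evidence_subgraph(query: str, max_hops: int = 2) -> list[tuple[str, str, str]]:
    """Seed on entities mentioned in the query, expand a few hops outward."""
    seeds = [e for e in KG if e.lower() in query.lower()]
    frontier = deque((s, 0) for s in seeds)
    seen = set(seeds)
    edges = []
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for rel, nbr in KG.get(node, []):
            edges.append((node, rel, nbr))
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return edges

# The returned triples would be serialized into the prompt as grounding.
print(evidence_subgraph("What does aspirin treat?"))
```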
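
Item 5 motivates memory that models when a fact became true rather than when it was mentioned. A minimal sketch of that idea, assuming a flat (attribute, value, since) record rather than the paper's temporal semantic memory:

```python
# Minimal sketch of time-aware memory: each fact carries the time it
# became true, so the agent can answer "as of" a date instead of just
# echoing whatever was said last.
from dataclasses import dataclass
from datetime import date

@dataclass
class Fact:
    attribute: str
    value: str
    since: date  # when the fact became true, not when it was mentioned

class TemporalMemory:
    def __init__(self) -> None:
        self.facts: list[Fact] = []

    def write(self, attribute: str, value: str, since: date) -> None:
        self.facts.append(Fact(attribute, value, since))

    def lookup(self, attribute: str, as_of: date) -> str | None:
        """Return the value that held at `as_of`: the latest fact not after it."""
        valid = [f for f in self.facts if f.attribute == attribute and f.since <= as_of]
        return max(valid, key=lambda f: f.since).value if valid else None

mem = TemporalMemory()
mem.write("city", "Berlin", date(2023, 1, 1))
mem.write("city", "Tokyo", date(2025, 6, 1))
print(mem.lookup("city", date(2024, 3, 1)))  # -> Berlin, not Tokyo
```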
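
Item 6 points at the cost of scoring every memory against every query. A minimal sketch of query-aware indexing via an inverted index; the token-level design is an assumption, not SwiftMem's:

```python
# Minimal sketch of query-aware indexing: keep an inverted index from
# terms to memory ids so retrieval touches only candidate entries
# instead of scanning the whole store.
from collections import defaultdict

class IndexedMemory:
    def __init__(self) -> None:
        self.entries: list[str] = []
        self.index: dict[str, set[int]] = defaultdict(set)

    def write(self, text: str) -> None:
        mid = len(self.entries)
        self.entries.append(text)
        for tok in set(text.lower().split()):
            self.index[tok].add(mid)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Only entries sharing at least one query term are scored,
        # ranked here by simple term overlap.
        hits: dict[int, int] = defaultdict(int)
        for tok in set(query.lower().split()):
            for mid in self.index.get(tok, ()):
                hits[mid] += 1
        top = sorted(hits, key=hits.get, reverse=True)[:k]
        return [self.entries[m] for m in top]

mem = IndexedMemory()
mem.write("meeting notes from the kickoff call")
mem.write("grocery list for the weekend")
print(mem.retrieve("kickoff notes"))  # only the first entry matches
```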
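
Item 7 asks when an agent should retrieve versus answer from what it already knows. A minimal sketch of such a gate; the term-overlap confidence scorer, the threshold, and the retrieval stub are all assumptions, not the paper's learned policy:

```python
# Minimal sketch of a retrieve-or-think gate: check a cheap confidence
# signal first and skip retrieval when the agent already knows enough.
def self_confidence(question: str, known_terms: set[str]) -> float:
    """Stub confidence: fraction of question terms the agent already knows."""
    terms = set(question.lower().split())
    return len(terms & known_terms) / len(terms) if terms else 0.0

def lookup(question: str) -> str:
    """Stub retriever over a one-document corpus."""
    docs = {"deadline": "the deadline is Friday"}
    return next((v for k, v in docs.items() if k in question.lower()), "no hit")

def answer(question: str, known_terms: set[str], threshold: float = 0.5) -> str:
    if self_confidence(question, known_terms) >= threshold:
        return f"[think] answer '{question}' from parametric knowledge"
    return f"[retrieve] {lookup(question)}"

print(answer("what is the deadline", {"what"}))  # low confidence -> retrieve
print(answer("what is the deadline", {"what", "is", "the", "deadline"}))  # -> think
```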
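
Item 9 replaces hand-crafted memory workflows with a small vocabulary of learnable operations. A minimal sketch of an atomic add/update/delete interface; the three-op set is an assumption, not AtomMem's actual operations:

```python
# Minimal sketch of "atomic memory operations": memory changes only
# through a small fixed vocabulary of primitives that a policy could
# learn to emit, instead of a hand-crafted pipeline.
class AtomicMemory:
    def __init__(self) -> None:
        self.store: dict[str, str] = {}

    def apply(self, op: str, key: str, value: str = "") -> None:
        """Apply one primitive; invalid op/state combinations fail loudly."""
        if op == "add" and key not in self.store:
            self.store[key] = value
        elif op == "update" and key in self.store:
            self.store[key] = value
        elif op == "delete" and key in self.store:
            del self.store[key]
        else:
            raise ValueError(f"invalid operation: {op}({key!r})")

mem = AtomicMemory()
mem.apply("add", "user_name", "Ada")
mem.apply("update", "user_name", "Ada L.")
mem.apply("delete", "user_name")
print(mem.store)  # {} -- every change went through a single primitive
```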



FAQ

What does this Agentic AI page rank?

It ranks public content for Agentic AI using recent discussion, review, and engagement signals so you can triage faster.

How should I use weekly vs monthly vs all-time?

Use weekly for fast-moving updates, monthly for stable trend confirmation, and all-time for evergreen references.

How can I discover organizations active in Agentic AI?

Use the linked entities section to jump to labs, companies, and experts connected to this topic and explore their timelines.

Can I follow this topic for updates?

Yes. Use the follow button on this page to subscribe and track new high-signal activity.