Topic: AI Engineering


Short answer

This page shows the most relevant public items for AI Engineering, ranked by trend activity and review signal. Use weekly for fast changes, monthly for more stable patterns, and all-time for evergreen picks.



  1. PRISMA: Reinforcement Learning Guided Two-Stage Policy Optimization in Multi-Agent Architecture for Open-Domain Multi-Hop Question Answering

    Paper · Jan 9, 2026 · arxiv.org · Yu Liu, Wenxiao Zhang, Cong Cao, Wenxuan Lu, Fangfang Yuan, Diandian Guo, Kun Peng, Qiang Sun, Kaiyan Zhang, Yanbing Liu, Jin B. Hong, Bowen Zhou, Zhiyuan Ma

    Answering real-world open-domain multi-hop questions over massive corpora is a critical challenge in Retrieval-Augmented Generation (RAG) systems. Recent research employs reinforcement learning (RL...

  2. Relink: Constructing Query-Driven Evidence Graph On-the-Fly for GraphRAG

    Paper · Jan 12, 2026 · arxiv.org · Manzong Huang, Chenyang Bu, Yi He, Xingrui Zhuo, Xindong Wu

    Graph-based Retrieval-Augmented Generation (GraphRAG) mitigates hallucinations in Large Language Models (LLMs) by grounding them in structured knowledge. However, current GraphRAG methods are const...

  3. Beyond Dialogue Time: Temporal Semantic Memory for Personalized LLM Agents

    Paper · Jan 12, 2026 · arxiv.org · Miao Su, Yucan Guo, Zhongni Hou, Long Bai, Zixuan Li, Yufei Zhang, Guojun Yin, Wei Lin, Xiaolong Jin, Jiafeng Guo, Xueqi Cheng

    Memory enables Large Language Model (LLM) agents to perceive, store, and use information from past dialogues, which is essential for personalization. However, existing methods fail to properly mode...

  4. SwiftMem: Fast Agentic Memory via Query-aware Indexing

    Paper · Jan 13, 2026 · arxiv.org · Anxin Tian, Yiming Li, Xing Li, Hui-Ling Zhen, Lei Chen, Xianzhi Yu, Zhenhua Dong, Mingxuan Yuan

    Agentic memory systems have become critical for enabling LLM agents to maintain long-term context and retrieve relevant information efficiently. However, existing memory frameworks suffer from a fu...

  5. To Retrieve or To Think? An Agentic Approach for Context Evolution

    Paper · Jan 14, 2026 · arxiv.org · Rubing Chen, Jian Wang, Wenjie Li, Xiao-Yong Wei, Qing Li

    Current context augmentation methods, such as retrieval-augmented generation, are essential for solving knowledge-intensive reasoning tasks. However, they typically adhere to a rigid, brute-force s...

  6. OpenDecoder: Open Large Language Model Decoding to Incorporate Document Quality in RAG

    Paper · Jan 24, 2026 · arxiv.org · Fengran Mo, Zhan Su, Yuchen Hui, Jinghan Zhang, Jia Ao Sun, Zheyuan Liu, Chao Zhang, Tetsuya Sakai, Jian-Yun Nie

    The development of large language models (LLMs) has achieved superior performance in a range of downstream tasks, including LLM-based retrieval-augmented generation (RAG). The quality of generated ...

  7. AtomMem: Learnable Dynamic Agentic Memory with Atomic Memory Operation

    Paper · Jan 27, 2026 · arxiv.org · Yupeng Huo, Yaxi Lu, Zhong Zhang, Haotian Chen, Yankai Lin

    Equipping agents with memory is essential for solving real-world long-horizon problems. However, most existing agent memory mechanisms rely on static and hand-crafted workflows. This limits the per...

  8. The AI Hippocampus: How Far are We From Human Memory?

    Paper · Jan 14, 2026 · arxiv.org · Zixia Jia, Jiaqi Li, Yipeng Kang, Yuxuan Wang, Tong Wu, Quansen Wang, Xiaobo Wang, Shuyi Zhang, Junzhe Shen, Qing Li, Siyuan Qi, Yitao Liang, Di He, Zilong Zheng, Song-Chun Zhu

    Memory plays a foundational role in augmenting the reasoning, adaptability, and contextual fidelity of modern Large Language Models and Multi-Modal LLMs. As these models transition from static pred...

  9. Rethinking Memory Mechanisms of Foundation Agents in the Second Half: A Survey

    Paper · Feb 10, 2026 · arxiv.org · Wei-Chieh Huang, Weizhi Zhang, Yueqing Liang, Yuanchen Bei, Yankai Chen, Tao Feng, Xinyu Pan, Zhen Tan, Yu Wang, Tianxin Wei, Shanglin Wu, Ruiyao Xu, Liangwei Yang, Rui Yang, Wooseong Yang, Chin-Yuan Yeh, Hanrong Zhang, Haozhen Zhang, Siqi Zhu, Henry Peng Zou, Wanjia Zhao, Song Wang, Wujiang Xu, Zixuan Ke, Zheng Hui, Dawei Li, Yaozu Wu, Langzhou He, Chen Wang, Xiongxiao Xu, Baixiang Huang, Juntao Tan, Shelby Heinecke, Huan Wang, Caiming Xiong, Ahmed A. Metwally, Jun Yan, Chen-Yu Lee, Hanqing Zeng, Yinglong Xia, Xiaokai Wei, Ali Payani, Haitong Ma, Wenya Wang, Chenguang Wang, Yu Zhang, Xin Wang, Yongfeng Zhang, Jiaxuan You, Hanghang Tong, Xiao Luo, Xue Liu, Yizhou Sun, Wei Wang, Julian McAuley, James Zou, Jiawei Han, Philip S. Yu, Kai Shu

    Research in artificial intelligence is undergoing a paradigm shift from prioritizing model innovations over benchmark scores towards emphasizing problem definition and rigorous real-world evalu...

  10. Continuum Memory Architectures for Long-Horizon LLM Agents

    Paper · Jan 14, 2026 · arxiv.org · Joe Logan

    Retrieval-augmented generation (RAG) has become the default strategy for providing large language model (LLM) agents with contextual knowledge. Yet RAG treats memory as a stateless lookup table: in...
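Several items above center on retrieval-augmented generation: retrieve passages relevant to a query, then condition an LLM on them. As a rough illustration of that retrieve-then-generate loop (not any specific paper's method), here is a minimal toy sketch using a bag-of-words similarity in place of a real dense retriever; the `corpus` strings and function names are illustrative placeholders.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use dense neural encoders.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank corpus documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query: str, corpus: list[str]) -> str:
    # A real RAG system would pass the retrieved context to an LLM;
    # here we just splice it into a prompt string to show the data flow.
    context = " | ".join(retrieve(query, corpus))
    return f"Q: {query}\nContext: {context}"

corpus = [
    "GraphRAG grounds language models in structured knowledge graphs.",
    "Agentic memory lets agents store and recall past dialogue turns.",
    "Multi-hop question answering chains evidence across documents.",
]
print(answer("How does GraphRAG ground language models?", corpus))
```

The papers above vary exactly the pieces this sketch fixes: what to index (raw text vs. evidence graphs vs. agent memory), when to retrieve versus reason, and how to score retrieved context.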



FAQ

What does this AI Engineering page rank?

It ranks public content for AI Engineering using recent discussion, review, and engagement signals so you can triage faster. This guidance is specific to the AI Engineering topic page on Attendemia.

How should I use weekly vs monthly vs all-time?

Use weekly for fast-moving updates, monthly for stable trend confirmation, and all-time for evergreen references.

How can I discover organizations active in AI Engineering?

Use the linked entities section to jump to labs, companies, and experts connected to this topic and explore their timelines.

Can I follow this topic for updates?

Yes. Use the follow button on this page to subscribe and track new high-signal activity.