Diffusion Glancing Transformer for Parallel Sequence to Sequence Learning
Lihua Qian, Mingxuan Wang, Yang Liu, Hao Zhou · arxiv.org
Query conditions: topic=llm, publish_at in 202311, and type=paper
Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Hanshu Yan, Jia-Wei Liu, Chenxu Zhang, Jiashi Feng, Mike Zheng Shou · arxiv.org
Yan Zeng, Guoqiang Wei, Jiani Zheng, Jiaxin Zou, Yang Wei, Yuchen Zhang, Hang Li · arxiv.org
This page ranks llm papers by topic match, a content-type filter, checkout momentum, and freshness. The ranking is recalculated as new items and engagement signals arrive, so the top results reflect current activity rather than a static or purely chronological list.
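The page does not publish its actual scoring formula, so the following is only a minimal Python sketch of how such a blend could work. The weights, the signal names (topic_match, momentum), and the linear freshness decay are all assumptions, not the site's real implementation.

```python
from datetime import datetime, timezone

# Hypothetical weights; the page does not disclose its actual formula.
WEIGHTS = {"topic_match": 0.5, "momentum": 0.3, "freshness": 0.2}

def rank_score(item: dict, now: datetime) -> float:
    """Blend topic match, engagement momentum, and freshness into one score."""
    if item["type"] != "paper":                      # content-type filter (type=paper)
        return 0.0
    age_days = (now - item["published_at"]).days
    freshness = max(0.0, 1.0 - age_days / 365.0)     # assumed: linear decay over a year
    return (WEIGHTS["topic_match"] * item["topic_match"]
            + WEIGHTS["momentum"] * item["momentum"]
            + WEIGHTS["freshness"] * freshness)

example = {
    "type": "paper",
    "topic_match": 0.9,   # assumed: similarity of the item to the query topic (llm)
    "momentum": 0.4,      # assumed: normalized recent engagement (e.g. checkouts)
    "published_at": datetime(2023, 11, 8, tzinfo=timezone.utc),
}
score = rank_score(example, datetime.now(timezone.utc))
```

A real system would likely normalize momentum over a sliding window and retune weights per topic; the linear decay here is only the simplest stand-in for "freshness".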
The time suffix in this URL defines the publish-date window used for ranking: a YYYY path covers that whole year, and a YYYYMM path covers one calendar month. This makes comparisons cleaner when you want a focused snapshot rather than an all-time aggregate.
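As an illustration of that convention, here is a minimal Python sketch that resolves a time suffix into an inclusive date window. The function name publish_window and its validation are assumptions, not the site's actual routing code.

```python
from calendar import monthrange
from datetime import date

def publish_window(suffix: str) -> tuple[date, date]:
    """Resolve a URL time suffix into an inclusive publish-date window.

    'YYYY' covers the whole year; 'YYYYMM' covers one calendar month.
    """
    if len(suffix) == 4:                   # year path, e.g. "2023"
        year = int(suffix)
        return date(year, 1, 1), date(year, 12, 31)
    if len(suffix) == 6:                   # month path, e.g. "202311"
        year, month = int(suffix[:4]), int(suffix[4:])
        last_day = monthrange(year, month)[1]   # number of days in that month
        return date(year, month, 1), date(year, month, last_day)
    raise ValueError(f"unrecognized time suffix: {suffix!r}")

# Example: the window behind this page's query (publish_at in 202311).
start, end = publish_window("202311")   # -> (2023-11-01, 2023-11-30)
```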
No, placement cannot be bought. This ranking is editorial and signal-driven, not sponsored: Attendemia evaluates public metadata, source context, and usage momentum to rank candidates. Payment does not buy position, so readers can treat the list as a curation surface rather than advertising inventory.