A Survey of Vibe Coding with Large Language Models
Yuyao Ge, Lingrui Mei · arXiv
Query conditions: topic=llm, publish_at in 202510, and type=paper
Chenyu Zheng, Xinyu Zhang, Rongzhen Wang, Wei Huang, Zhi Tian, Weilin Huang, Jun Zhu, Chongxuan Li · arXiv
Yihong Dong, Ge Li, Yongding Tao, Xue Jiang, Kechi Zhang, Jia Li, Jinliang Deng, Jing Su, Jun Zhang, Jingjing Xu · arXiv
Yuxuan Bian, Xin Chen, Zenan Li, Tiancheng Zhi, Shen Sang, Linjie Luo, Qiang Xu · arXiv
Weinan Jia, Yuning Lu, Mengqi Huang, Hualiang Wang, Binyuan Huang, Nan Chen, Mu Liu, Jidong Jiang, Zhendong Mao · arXiv
Haoran Wei, Yaofeng Sun, Yukun Li · arXiv
Chao Jin, Ziheng Jiang, Zhihao Bai, Zheng Zhong, Juncai Liu, Xiang Li, Ningxin Zheng, Xi Wang, Cong Xie, Qi Huang, Wen Heng, Yiyuan Ma, Wenlei Bao, Size Zheng, Yanghua Peng, Haibin Lin, Xuanzhe Liu, Xin Jin, Xin Liu · arXiv
Ran Xin, Chenguang Xi, Jie Yang, Feng Chen, Hang Wu, Xia Xiao, Yifan Sun, Shen Zheng, Kai Shen · arXiv
Shanchuan Lin, Xin Xia, Yuxi Ren, Ceyuan Yang, Xuefeng Xiao, Lu Jiang · arXiv
This page ranks LLM papers by topic match, content-type filter, checkout momentum, and freshness. The ranking is recalculated as new items and engagement signals arrive, so the top results stay useful for current workflows rather than remaining static or purely chronological.
The time suffix in this URL defines the publish-date window used for ranking. Year paths cover all items published in that year, and YYYYMM paths cover a single calendar month. This keeps comparisons clean when you want a focused snapshot rather than an all-time aggregate.
No. This ranking is editorial and signal-driven, not sponsored placement. Attendemia evaluates public metadata, source context, and usage momentum to rank candidates. Payment does not buy position, so readers can treat the list as a curation surface rather than advertising inventory.