Best LLM Papers

The highest-signal papers on large language models (LLMs), ranked by community reviews and momentum.
Canonical intent: topic=llms|type=paper|year=evergreen

Top Picks

Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Eleni Elia, Danilo J. Rezende, Oriol Vinyals, Karen Simonyan
Dec 8, 2021 · 430 checkouts · arxiv.org
Improving language models by retrieving from trillions of tokens
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, Laurent Sifre
Dec 8, 2021 · 246 checkouts · arxiv.org
Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
May 28, 2020 · 211 checkouts · arxiv.org

FAQ

How is this “best LLM papers” collection ranked?

This page ranks LLM papers using topic relevance, checkout momentum, source diversity, and freshness signals. Rankings are recalculated as new items and engagement signals arrive, so readers see resources that are both high quality and currently useful for implementation, research, and practical decision making. Canonical intent key: topic=llms|type=paper|year=evergreen.
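
For illustration, here is a minimal sketch of how those four signals could combine into one sortable score. The `Paper` fields, the weights, the log damping, and the 365-day freshness half-life are all assumptions made for this sketch, not Attendemia's actual formula.

```python
import math
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Paper:
    title: str
    relevance: float     # topic-match score in [0, 1]
    checkouts_30d: int   # recent checkout momentum
    source_count: int    # number of distinct sources covering the item
    published: datetime

def rank_score(paper: Paper, half_life_days: float = 365.0) -> float:
    """Blend relevance, momentum, diversity, and freshness (weights assumed)."""
    age_days = (datetime.now(timezone.utc) - paper.published).days
    freshness = 0.5 ** (age_days / half_life_days)  # halves every half-life
    momentum = math.log1p(paper.checkouts_30d)      # dampens runaway hits
    diversity = math.log1p(paper.source_count)
    return 0.5 * paper.relevance + 0.3 * momentum + 0.1 * diversity + 0.1 * freshness

papers = [
    Paper("Gopher", 0.9, 430, 3, datetime(2021, 12, 8, tzinfo=timezone.utc)),
    Paper("RETRO", 0.8, 246, 2, datetime(2021, 12, 8, tzinfo=timezone.utc)),
]
# "Recalculated as new engagement arrives" amounts to re-sorting on fresh data.
papers.sort(key=rank_score, reverse=True)
```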

How do you prevent duplicate collection pages?

Attendemia maps each slug variant, including best-of and year forms, to one canonical intent key. If two URLs describe the same topic, type, and timeframe, the non-canonical versions permanently redirect to the canonical URL. This consolidates crawl signals, avoids duplicate-content dilution, and helps search engines index the strongest single page.
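
A minimal sketch of what that slug-to-intent mapping could look like. The slug patterns, the `llm`/`llms` alias, and the `best-llm-papers` canonical slug are illustrative assumptions, not Attendemia's real routing rules.

```python
import re

# One canonical slug per intent key (single illustrative entry).
CANONICAL_SLUG = {"topic=llms|type=paper|year=evergreen": "best-llm-papers"}

def intent_key(slug: str) -> str:
    """Map a slug variant to its canonical intent key (illustrative rules)."""
    m = re.fullmatch(r"(?:best-)?([a-z0-9]+)-papers(?:-(\d{4}))?", slug)
    if m is None:
        raise ValueError(f"unrecognized slug: {slug}")
    topic = {"llm": "llms"}.get(m.group(1), m.group(1))  # normalize aliases
    year = m.group(2) or "evergreen"
    return f"topic={topic}|type=paper|year={year}"

def resolve(slug: str) -> tuple[int, str]:
    """Return (status, slug): 301 to the canonical URL, or 200 as-is."""
    canonical = CANONICAL_SLUG.get(intent_key(slug), slug)
    return (200, slug) if slug == canonical else (301, canonical)

# Every variant of the same topic/type/timeframe collapses to one page.
assert resolve("llms-papers") == (301, "best-llm-papers")
assert resolve("best-llms-papers") == (301, "best-llm-papers")
assert resolve("best-llm-papers") == (200, "best-llm-papers")
```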

When does a year page stay separate from evergreen?

A year-specific page stays separate only when its item set is materially different from evergreen and has enough ranking depth. When overlap is high, the year URL redirects to the evergreen canonical page. This avoids thin duplication while preserving genuinely distinct annual collections for search users.
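
A hedged sketch of such a decision rule, assuming a simple overlap ratio with an 80% redirect threshold and a 10-item depth floor; both numbers are invented for illustration.

```python
def keep_year_page(year_items: set[str], evergreen_items: set[str],
                   max_overlap: float = 0.8, min_depth: int = 10) -> bool:
    """Keep a year URL only if it is deep enough and materially different."""
    if len(year_items) < min_depth:
        return False  # too thin: redirect to the evergreen canonical
    overlap = len(year_items & evergreen_items) / len(year_items)
    return overlap <= max_overlap  # high overlap: redirect to evergreen

# A 12-item year page sharing 9 items with evergreen (75% overlap) survives:
assert keep_year_page(set("abcdefghijkl"), set("abcdefghixyz"))
# The same page sharing 11 items (~92% overlap) redirects instead:
assert not keep_year_page(set("abcdefghijkl"), set("abcdefghijk"))
```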

Are these paid recommendations?

No. These recommendations are not paid placements. Attendemia ranks items using public metadata, source quality and coverage, and user engagement signals, then orders them by practical usefulness. Sponsorship does not buy rank position, so this page should be read as editorial curation rather than advertising inventory.