Best Deep Learning Resources

The highest-signal resources on deep learning, ranked by community reviews and momentum.
Canonical intent: topic=transformers|type=all|year=evergreen


Top Picks

7
Sora: Video generation models as world simulators
Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, Aditya Ramesh
Feb 15, 2024 · 410 checkouts · openai.com
Source ↗
11
Zero-Shot Text-to-Image Generation
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
Feb 24, 2021 · 327 checkouts · arxiv.org
Source ↗
15
Consistency Models
Yang Song, Prafulla Dhariwal, Mark Chen, Ilya Sutskever
Mar 2, 2023 · 211 checkouts · arxiv.org
Source ↗
19
RT-1: Robotics Transformer for Real-World Control at Scale
Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Tomas Jackson, Google DeepMind
Dec 13, 2022 · 179 checkouts · arxiv.org
Source ↗
21
Attention Is All You Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Jun 12, 2017 · 171 checkouts · arxiv.org
Source ↗
22
Improving Image Generation with Better Captions
James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, Wesam Manassra, Prafulla Dhariwal, Casey Chu, Yunxing Jiao, Aditya Ramesh
Oct 19, 2023 · 154 checkouts · cdn.openai.com
Source ↗

Browse by Format

FAQ

How is this “Best Deep Learning Resources” collection ranked?

This page ranks deep learning resources using four signals: topic relevance, checkout momentum, source diversity, and freshness. Rankings are recalculated as new items and engagement arrive, so readers see resources that are both high quality and currently useful for implementation, research, and practical decision making. Canonical intent key: topic=transformers|type=all|year=evergreen.
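A blend of the four signals above could be sketched as follows. This is an illustrative sketch, not the actual scoring function: the weights, the saturation constant, and the freshness half-life are all assumptions, as are the field and function names.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    relevance: float   # topic-match score in [0, 1] (assumed field)
    checkouts: int     # recent checkout count ("checkout momentum")
    sources: int       # distinct domains covering the item (diversity)
    age_days: int      # days since publication (freshness)

def rank_score(r: Resource,
               w_rel: float = 0.4, w_mom: float = 0.3,
               w_div: float = 0.2, w_fresh: float = 0.1,
               half_life_days: float = 365.0) -> float:
    """Blend the four signals into one sortable score (weights assumed)."""
    momentum = r.checkouts / (r.checkouts + 100)      # saturating, so huge counts don't dominate
    diversity = min(r.sources, 5) / 5                 # cap diversity credit at 5 sources
    freshness = 0.5 ** (r.age_days / half_life_days)  # exponential decay with age
    return (w_rel * r.relevance + w_mom * momentum
            + w_div * diversity + w_fresh * freshness)

# A recent, popular item should outrank an older, less-engaged one.
items = [Resource(0.9, 410, 3, 200), Resource(0.7, 50, 1, 2000)]
ranked = sorted(items, key=rank_score, reverse=True)
```

Saturating momentum and a freshness half-life are common choices for this kind of blended ranking; they let evergreen classics stay visible while still rewarding items that are being checked out right now.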

How do you prevent duplicate collection pages?

Attendemia maps each slug variant, including best-of and year forms, to one canonical intent key. If two URLs describe the same topic, type, and timeframe, non-canonical versions permanently redirect. This consolidates crawl signals, avoids duplicate content dilution, and helps search engines index the strongest single page.

When does a year page stay separate from evergreen?

A year-specific page stays separate only when its item set is materially different from evergreen and has enough ranking depth. When overlap is high, the year URL redirects to the evergreen canonical page. This avoids thin duplication while preserving genuinely distinct annual collections for search users.
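The keep-or-redirect decision could be sketched as an overlap check plus a depth floor. The thresholds and the use of Jaccard overlap are assumptions for illustration; the text only says "materially different" and "enough ranking depth".

```python
def keep_year_page(year_items: set, evergreen_items: set,
                   max_overlap: float = 0.6, min_depth: int = 10) -> bool:
    """Keep a year page only if it is materially distinct and deep enough.

    Thresholds (60% Jaccard overlap, 10-item depth) are assumed values.
    """
    if len(year_items) < min_depth:
        return False                   # too thin to stand alone
    union = year_items | evergreen_items
    overlap = len(year_items & evergreen_items) / len(union) if union else 1.0
    return overlap <= max_overlap      # high overlap -> redirect to evergreen
```

A year page with a dozen items mostly absent from the evergreen list would survive; a five-item page, or one that mirrors the evergreen set, would redirect.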

Are these paid recommendations?

No. These recommendations are not paid placements. Attendemia ranks items from public metadata, source quality coverage, and user engagement signals, then orders them by practical usefulness. Sponsorship does not buy rank position, so this page should be interpreted as editorial curation rather than advertising inventory.