Topic: Few-Shot Learning

This page lists the most relevant public items for Few-Shot Learning, ranked by trend activity and review signal. Use the weekly view to track fast-moving changes, the monthly view for more stable patterns, and the all-time view for evergreen picks.



  1. Language Models are Few-Shot Learners

    Paper · May 28, 2020 · arXiv · Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei

    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic i...

  2. Matching Networks for One Shot Learning

    Paper · Jun 13, 2016 · arXiv · Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra

    Deep learning algorithms typically require vast amounts of data to achieve high performance, contrasting sharply with human ability to learn new concepts from a single example. We introduce Matchin...

  3. Flamingo: a Visual Language Model for Few-Shot Learning

    Paper · Apr 28, 2022 · arXiv · Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Karen Simonyan

    Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family ...
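The GPT-3 paper listed above (entry 1) popularized few-shot *in-context* learning: instead of fine-tuning, the model is shown a handful of input–output demonstrations in its prompt and completes the unanswered query with no weight updates. A minimal sketch of how such a prompt is assembled (the function name, demonstration data, and format are illustrative assumptions, not from the paper):

```python
def build_few_shot_prompt(examples, query, task_description=""):
    """Assemble an in-context (few-shot) prompt: an optional task
    description, k demonstration pairs, then the unanswered query."""
    lines = []
    if task_description:
        lines.append(task_description)
    for text, label in examples:
        lines.append(f"Input: {text}\nOutput: {label}")
    # The query is left without an answer; the model is expected
    # to continue the pattern established by the demonstrations.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

demos = [
    ("The movie was fantastic!", "positive"),
    ("Terrible plot and acting.", "negative"),
]
prompt = build_few_shot_prompt(
    demos,
    "I loved every minute of it.",
    task_description="Classify the sentiment of each input.",
)
print(prompt)
```

Varying the number of demonstrations gives the zero-shot (k = 0), one-shot (k = 1), and few-shot (k > 1) settings that the paper evaluates.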
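Matching Networks (entry 2) classify a query by attending over a small labeled support set: a softmax over similarities between the query embedding and each support embedding weights the support labels, and the query takes the label with the highest total weight. A toy sketch using cosine similarity and hand-picked 2-D embeddings (the names and data are illustrative; the paper learns the embeddings end-to-end):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def matching_network_predict(support, query):
    """support: list of (embedding, label) pairs; query: embedding.
    Attention = softmax over cosine similarities; the predicted label
    distribution is the attention-weighted sum of support labels."""
    sims = [cosine(emb, query) for emb, _ in support]
    # Numerically stable softmax over the similarities.
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    attn = [e / z for e in exps]
    # Accumulate attention mass per label.
    probs = {}
    for a, (_, label) in zip(attn, support):
        probs[label] = probs.get(label, 0.0) + a
    return max(probs, key=probs.get)

support = [([1.0, 0.1], "cat"), ([0.9, 0.2], "cat"), ([0.1, 1.0], "dog")]
print(matching_network_predict(support, [0.95, 0.15]))  # → cat
```

Because prediction is a weighted lookup into the support set, adding a class at test time only requires adding labeled examples, with no retraining of the classifier head.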

Related Topics

cs.CV (2) · lab:deep-mind-ai (2) · cs.CL (2) · GPT-3 (1) · Metric Learning (1)