Topic: Awesome List: deep-learning-foundation


Short answer

This page shows the most relevant public items for Awesome List: deep-learning-foundation, ranked by trend activity and review signal. Use weekly for fast changes, monthly for more stable patterns, and all-time for evergreen picks.



  1. Deep Residual Learning for Image Recognition

    Paper · Dec 10, 2015 · arxiv.org · Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

    Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly...
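The core idea of the paper is the identity shortcut: instead of learning a mapping H(x) directly, each block learns a residual F(x) = H(x) - x and outputs F(x) + x. A minimal numpy sketch of one such block (layer shapes and names are illustrative, not from the paper's code):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """One plain residual block: y = relu(F(x) + x), where the
    residual function F is two linear layers with a ReLU between them.
    Shapes and the absence of batch norm are simplifications."""
    f = relu(x @ w1) @ w2   # residual function F(x)
    return relu(f + x)      # identity shortcut, then activation

# With zero residual weights the block reduces to the identity map
# (for non-negative inputs), which is why very deep stacks of these
# blocks remain trainable: each block only needs to learn a correction.
x = np.array([1.0, 2.0, 3.0])
w_zero = np.zeros((3, 3))
assert np.allclose(residual_block(x, w_zero, w_zero), x)
```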

  2. Identity Mappings in Deep Residual Networks

    Paper · Apr 12, 2016 · arxiv.org · Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

    Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations be...

  3. Building high-level features using large scale unsupervised learning

    Paper · Jul 12, 2012 · arxiv.org · Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeff Dean, Andrew Y. Ng

    We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answe...

  4. Auto-Encoding Variational Bayes

    Paper · Dec 10, 2022 · arxiv.org · Diederik P Kingma, Max Welling

    How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We...
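The paper's key device is the reparameterization trick: writing a sample from the approximate posterior as a deterministic function of the parameters plus independent noise, so gradients flow through the sampling step. A minimal numpy sketch under the usual diagonal-Gaussian assumption (variable names are illustrative):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with
    eps ~ N(0, I), keeping the draw differentiable in mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, I)) for a diagonal
    Gaussian posterior; this term regularizes the VAE latent space."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

rng = np.random.default_rng(0)
z = reparameterize(np.zeros(4), np.zeros(4), rng)
# A standard-normal posterior incurs zero KL penalty.
assert kl_to_standard_normal(np.zeros(4), np.zeros(4)) == 0.0
```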

  5. DRAW: A Recurrent Neural Network For Image Generation

    Paper · May 20, 2015 · arxiv.org · Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra

    This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveatio...

  6. Pixel Recurrent Neural Networks

    Paper · Feb 29, 2016 · arxiv.org · Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu

    Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep n...
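Models in this family are autoregressive: they factor the image distribution as p(x) = ∏ᵢ p(xᵢ | x₁..xᵢ₋₁) and generate one pixel at a time, each conditioned on all pixels so far. A toy numpy sketch of that sampling loop, with a hand-written stand-in for the learned conditional network (everything here is illustrative):

```python
import numpy as np

def sequential_sample(cond_dist, n_pixels, rng):
    """Draw pixels one at a time from p(x) = prod_i p(x_i | x_<i).
    `cond_dist` stands in for the trained network: it maps the pixels
    generated so far to a categorical distribution over the next value."""
    pixels = []
    for _ in range(n_pixels):
        probs = cond_dist(pixels)
        pixels.append(rng.choice(len(probs), p=probs))
    return np.array(pixels)

def toy_cond(prev):
    """Hypothetical binary-pixel conditional: the next pixel repeats
    the previous one with probability 0.9."""
    if not prev:
        return np.array([0.5, 0.5])
    p = np.full(2, 0.1)
    p[prev[-1]] = 0.9
    return p

img = sequential_sample(toy_cond, 16, np.random.default_rng(0))
```

The same loop structure applies whether the conditional is an RNN sweeping over rows (PixelRNN) or a masked convolution; only `cond_dist` changes.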

  7. Improving neural networks by preventing co-adaptation of feature detectors

    Paper · Jul 3, 2012 · arxiv.org · Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, Ruslan R. Salakhutdinov

    When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting ha...
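The technique this abstract describes became known as dropout. A minimal numpy sketch of the now-common "inverted" variant, which scales survivors at training time instead of scaling weights at test time as the paper does (the two are equivalent in expectation):

```python
import numpy as np

def dropout(activations, p_drop, rng, train=True):
    """Zero each unit independently with probability p_drop during
    training and scale survivors by 1/(1 - p_drop), so the expected
    activation matches the unscaled test-time forward pass."""
    if not train or p_drop == 0.0:
        return activations
    keep = rng.random(activations.shape) >= p_drop
    return activations * keep / (1.0 - p_drop)

rng = np.random.default_rng(0)
x = np.ones(10_000)
y = dropout(x, 0.5, rng)
assert (y == 0).any()                 # roughly half the units are omitted
assert abs(y.mean() - 1.0) < 0.05     # expected activation is preserved
```

Because each forward pass samples a different mask, every unit must be useful on its own rather than relying on specific co-adapted partners, which is the paper's stated mechanism for reducing overfitting.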

  8. Adam: A Method for Stochastic Optimization

    Paper · Jan 30, 2017 · arxiv.org · Diederik P. Kingma, Jimmy Ba

    We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to i...
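Adam maintains exponential moving averages of the gradient and its elementwise square, corrects both for initialization bias, and steps by their ratio. A numpy sketch of one update following Algorithm 1 of the paper (the toy objective at the end is illustrative):

```python
import numpy as np

def adam_step(theta, grad, m, v, t,
              lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, step."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (mean)
    v = beta2 * v + (1 - beta2) * grad**2       # second moment (uncentered)
    m_hat = m / (1 - beta1**t)                  # bias-corrected estimates
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta^2, whose gradient is 2*theta.
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.1)
```

The division by √v̂ makes the effective step size roughly invariant to the gradient's scale, which is why Adam needs little per-problem learning-rate tuning.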

  9. DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

    Paper · Oct 6, 2013 · arxiv.org · Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, Trevor Darrell

    We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed...

  10. Visualizing and Understanding Convolutional Networks

    Paper · Nov 28, 2013 · arxiv.org · Matthew D Zeiler, Rob Fergus

    Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, o...

  11. Distilling the Knowledge in a Neural Network

    Paper · Mar 9, 2015 · arxiv.org · Geoffrey Hinton, Oriol Vinyals, Jeff Dean

    A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making...
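Distillation transfers the ensemble's knowledge by training the student on the teacher's temperature-softened output distribution rather than hard labels. A numpy sketch of producing those soft targets (the temperature value and logits below are illustrative choices):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: probabilities of exp(z_i / T)."""
    z = logits / T
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_targets(teacher_logits, T=2.0):
    """Softened teacher probabilities used as student training targets.
    T > 1 raises the probability of non-argmax classes, exposing the
    teacher's learned similarities between classes."""
    return softmax(teacher_logits, T)

logits = np.array([5.0, 1.0, 0.0])
hard = softmax(logits)                       # near one-hot at T = 1
soft = distillation_targets(logits, T=4.0)   # spreads mass to other classes
assert soft[1] > hard[1]
```

The student is then trained to match `soft` (usually via cross-entropy at the same temperature), optionally blended with the true-label loss.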


Top Entities In This Topic

Related Topics

FAQ

What does this Awesome List: deep-learning-foundation page rank?

It ranks public content for Awesome List: deep-learning-foundation using recent discussion, review, and engagement signals so you can triage faster. This guidance is specific to the Awesome List: deep-learning-foundation topic page on Attendemia.

How should I use weekly vs monthly vs all-time?

Use weekly for fast-moving updates, monthly for stable trend confirmation, and all-time for evergreen references.

How can I discover organizations active in Awesome List: deep-learning-foundation?

Use the linked entities section to jump to labs, companies, and experts connected to this topic and explore their timelines.

Can I follow this topic for updates?

Yes. Use the follow button on this page to subscribe and track new high-signal activity.