Topic: Awesome List: deep-learning-foundation

Short answer

This page shows the most relevant public items for Awesome List: deep-learning-foundation, ranked by trend activity and review signal. Use weekly for fast changes, monthly for more stable patterns, and all-time for evergreen picks.

  1. Neural Turing Machines

    Paper · Dec 10, 2014 · arxiv.org · Alex Graves, Greg Wayne, Ivo Danihelka

    We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Mach...

  2. Efficient Estimation of Word Representations in Vector Space

    Paper · Sep 7, 2013 · arxiv.org · Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean

    We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity ta...

  3. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation

    Paper · Oct 8, 2016 · arxiv.org · Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean

    Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems...

  4. Bag of Tricks for Efficient Text Classification

    Paper · Aug 9, 2016 · arxiv.org · Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov

    This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of a...

  5. Intriguing properties of neural networks

    Paper · Feb 19, 2014 · arxiv.org · Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus

    Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succ...

  6. Recurrent Neural Network Regularization

    Paper · Feb 19, 2015 · arxiv.org · Wojciech Zaremba, Ilya Sutskever, Oriol Vinyals

    We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, ...

  7. Addressing the Rare Word Problem in Neural Machine Translation

    Paper · May 30, 2015 · arxiv.org · Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, Wojciech Zaremba

    Neural Machine Translation (NMT) is a new approach to machine translation that has shown promising results that are comparable to traditional approaches. A significant weakness in conventional NMT ...

  8. Recurrent Models of Visual Attention

    Paper · Jun 24, 2014 · arxiv.org · Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu

    Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent n...

  9. A Neural Conversational Model

    Paper · Jul 22, 2015 · arxiv.org · Oriol Vinyals, Quoc Le

    Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., boo...

  10. Visualizing and Understanding Recurrent Networks

    Paper · Nov 17, 2015 · arxiv.org · Andrej Karpathy, Justin Johnson, Li Fei-Fei

    Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine lear...

  11. Understanding Neural Networks Through Deep Visualization

    Paper · Jun 22, 2015 · arxiv.org · Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, Hod Lipson

    Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural image...

  12. Learning Deconvolution Network for Semantic Segmentation

    Paper · May 17, 2015 · arxiv.org · Hyeonwoo Noh, Seunghoon Hong, Bohyung Han

    We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution netw...

  13. Character-Aware Neural Language Models

    Paper · Dec 1, 2015 · arxiv.org · Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush

    We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a hig...

  14. Ask Me Anything: Dynamic Memory Networks for Natural Language Processing

    Paper · Mar 5, 2016 · arxiv.org · Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, Richard Socher

    Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a neural network architecture which p...

  15. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

    Paper · Dec 31, 2015 · arxiv.org · Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov

    One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progres...

  16. Deep Networks with Stochastic Depth

    Paper · Jul 28, 2016 · arxiv.org · Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Weinberger

    Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highl...
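Several of the papers above (e.g. "Efficient Estimation of Word Representations in Vector Space" and "Bag of Tricks for Efficient Text Classification") evaluate learned embeddings with word-similarity queries: nearest neighbors under cosine similarity. A minimal sketch, using hypothetical toy vectors rather than embeddings trained on a real corpus:

```python
import math

# Hypothetical toy embeddings for illustration only; real models such as
# word2vec or fastText learn hundreds of dimensions from large corpora.
vectors = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.95],
}

def cosine(u, v):
    # Cosine similarity: dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(word):
    # Rank every other word by cosine similarity to `word` and return the best.
    return max(
        (w for w in vectors if w != word),
        key=lambda w: cosine(vectors[word], vectors[w]),
    )

print(most_similar("king"))  # "queen" is closer to "king" than "apple" is
```

In the toy vectors, "king" and "queen" point in nearly the same direction while "apple" is almost orthogonal to both, so the nearest-neighbor query recovers the intended pairing; the word-similarity tasks in those papers score learned vectors on exactly this kind of ranking at scale.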


FAQ

What does this Awesome List: deep-learning-foundation page rank?

It ranks public content for Awesome List: deep-learning-foundation using recent discussion, review, and engagement signals so you can triage faster. This guidance is specific to the Awesome List: deep-learning-foundation topic page on Attendemia.

How should I use weekly vs monthly vs all-time?

Use weekly for fast-moving updates, monthly for stable trend confirmation, and all-time for evergreen references.

How can I discover organizations active in Awesome List: deep-learning-foundation?

Use the linked entities section to jump to labs, companies, and experts connected to this topic and explore their timelines.

Can I follow this topic for updates?

Yes. Use the follow button on this page to subscribe and track new high-signal activity.