Topic: Awesome List: deep-learning-foundation

Short answer

This page lists the most relevant public items for the Awesome List: deep-learning-foundation topic, ranked by trend activity and review signals. Use the weekly view for fast-moving changes, the monthly view for more stable patterns, and the all-time view for evergreen picks.

Views: Weekly · Monthly · All time


  1. Intriguing properties of neural networks

    Paper · Feb 19, 2014 · arxiv.org · Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus

    Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succ...
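The paper's central finding is that a tiny, targeted perturbation can flip a network's prediction. The paper searches for such perturbations with box-constrained L-BFGS; for a plain linear classifier the same idea has a closed form, which this toy sketch uses (the weights `w`, `b` and helper names are illustrative, not from the paper):

```python
import math

# Toy linear classifier: sign(w . x + b). For a linear model the minimal
# misclassifying perturbation points along the weight vector w.
w = [2.0, -1.0]
b = 0.0

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def minimal_adversarial(x, eps=1e-3):
    """Smallest perturbation (plus a tiny margin eps) crossing the boundary."""
    margin = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm_sq = sum(wi * wi for wi in w)
    scale = -(margin / norm_sq) * (1 + eps)
    return [xi + scale * wi for xi, wi in zip(x, w)]

x = [1.0, 1.0]                      # classified +1 (score = 1.0)
x_adv = minimal_adversarial(x)      # visually close to x, label flips
print(predict(x), predict(x_adv))
```

Deep networks need an iterative search instead of this closed form, but the phenomenon the sketch shows, a small step crossing the decision boundary, is the same.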

  2. Recurrent Neural Network Regularization

    Paper · Feb 19, 2015 · arxiv.org · Wojciech Zaremba, Ilya Sutskever, Oriol Vinyals

    We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, ...
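The paper's rule is to apply dropout only to non-recurrent connections, leaving the hidden-to-hidden path intact so the state is never corrupted across timesteps. A minimal stdlib sketch, simplified from an LSTM to a plain RNN cell (the weights and sizes are made up for illustration):

```python
import math
import random

def dropout(vec, p, rng):
    """Inverted dropout: drop each unit with probability p, scale survivors."""
    keep = 1.0 - p
    return [v / keep if rng.random() < keep else 0.0 for v in vec]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def rnn_step(x, h, Wx, Wh, p_drop, rng):
    """One recurrent step. Dropout hits only the input (non-recurrent)
    connection; the previous hidden state h passes through untouched."""
    x = dropout(x, p_drop, rng)
    pre = [a + b for a, b in zip(matvec(Wx, x), matvec(Wh, h))]
    return [math.tanh(v) for v in pre]

rng = random.Random(0)
Wx = [[0.5, -0.3], [0.1, 0.2]]
Wh = [[0.4, 0.0], [0.0, 0.4]]
h = [0.0, 0.0]
for x in ([1.0, 0.5], [0.2, -1.0], [0.7, 0.3]):
    h = rnn_step(x, h, Wx, Wh, p_drop=0.5, rng=rng)
print(h)
```

In the paper the same masking rule is applied to the input and inter-layer connections of stacked LSTMs, which is what lets dropout regularize RNNs without destroying long-term memory.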

  3. Addressing the Rare Word Problem in Neural Machine Translation

    Paper · May 30, 2015 · arxiv.org · Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, Wojciech Zaremba

    Neural Machine Translation (NMT) is a new approach to machine translation that has shown promising results that are comparable to traditional approaches. A significant weakness in conventional NMT ...
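The paper's fix for rare words is a post-processing step: emit a placeholder for out-of-vocabulary words, then replace each placeholder using its alignment to a source word. A minimal sketch of that step, assuming a `<unk>` token and a simple output-index-to-source-index alignment map (both are my notation, not the paper's exact format):

```python
def replace_unks(output_tokens, source_tokens, alignments, dictionary):
    """Post-process a translation: each <unk> is replaced by the dictionary
    translation of the source word it is aligned to, falling back to
    copying the source word itself (useful for names)."""
    result = []
    for i, tok in enumerate(output_tokens):
        if tok == "<unk>":
            src = source_tokens[alignments[i]]
            result.append(dictionary.get(src, src))
        else:
            result.append(tok)
    return result

out = replace_unks(
    ["le", "<unk>", "est", "<unk>"],
    ["the", "ecotone", "is", "Smith"],
    {1: 1, 3: 3},                     # output position -> source position
    {"ecotone": "écotone"},
)
print(out)
```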

  4. Recurrent Models of Visual Attention

    Paper · Jun 24, 2014 · arxiv.org · Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu

    Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent n...
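The model's cost saving comes from processing only a small "glimpse" around an attended location rather than the whole image. A stdlib sketch of the cropping step, assuming row/column coordinates and zero padding outside the image (the function name and padding choice are mine):

```python
def glimpse(image, center, size):
    """Extract a size x size patch centered at `center` (row, col),
    zero-padded where the window falls outside the image."""
    h, w = len(image), len(image[0])
    r0 = center[0] - size // 2
    c0 = center[1] - size // 2
    patch = []
    for r in range(r0, r0 + size):
        row = []
        for c in range(c0, c0 + size):
            row.append(image[r][c] if 0 <= r < h and 0 <= c < w else 0)
        patch.append(row)
    return patch

img = [[1, 2],
       [3, 4]]
print(glimpse(img, (0, 0), 3))
```

In the paper a recurrent network consumes a sequence of such glimpses and is trained with reinforcement learning to choose where to look next, so compute is fixed per step regardless of image size.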

  5. A Neural Conversational Model

    Paper · Jul 22, 2015 · arxiv.org · Oriol Vinyals, Quoc Le

    Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., boo...

  6. Visualizing and Understanding Recurrent Networks

    Paper · Nov 17, 2015 · arxiv.org · Andrej Karpathy, Justin Johnson, Li Fei-Fei

    Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine lear...

  7. Understanding Neural Networks Through Deep Visualization

    Paper · Jun 22, 2015 · arxiv.org · Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, Hod Lipson

    Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural image...

  8. Learning Deconvolution Network for Semantic Segmentation

    Paper · May 17, 2015 · arxiv.org · Hyeonwoo Noh, Seunghoon Hong, Bohyung Han

    We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution netw...
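A key building block of the deconvolution network is max-unpooling: pooling records which position held each maximum (the "switches"), and unpooling places values back at those positions to recover spatial detail. A 1-D stdlib sketch of the pair (the paper operates on 2-D feature maps, but the mechanism is the same):

```python
def max_pool(x, k):
    """1-D max pooling with stride k; also records the argmax 'switch'
    positions that unpooling will reuse."""
    out, switches = [], []
    for i in range(0, len(x) - k + 1, k):
        window = x[i:i + k]
        j = max(range(k), key=lambda t: window[t])
        out.append(window[j])
        switches.append(i + j)
    return out, switches

def max_unpool(pooled, switches, size):
    """Place each pooled value back at its recorded position; rest stay 0."""
    x = [0.0] * size
    for v, s in zip(pooled, switches):
        x[s] = v
    return x

pooled, sw = max_pool([1, 3, 2, 0, 5, 4], 2)
print(max_unpool(pooled, sw, 6))
```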

  9. Character-Aware Neural Language Models

    Paper · Dec 1, 2015 · arxiv.org · Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush

    We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a hig...
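The character-level CNN works by sliding each filter over a word's character embeddings and keeping only the maximum response (max-over-time pooling), so every word yields a fixed-size feature vector regardless of its length. A single-filter stdlib sketch (embedding values and filter weights are made-up toy numbers):

```python
def conv_max_over_time(embeddings, filt):
    """Slide a width-len(filt) filter over a sequence of character
    embedding vectors and return the max response (max-over-time)."""
    width = len(filt)
    responses = []
    for i in range(len(embeddings) - width + 1):
        r = sum(sum(f * e for f, e in zip(fv, ev))
                for fv, ev in zip(filt, embeddings[i:i + width]))
        responses.append(r)
    return max(responses)

# 4 characters, embedding dim 2; one filter of width 2.
chars = [[1, 0], [0, 1], [1, 1], [0, 0]]
filt = [[1, 0], [0, 1]]
print(conv_max_over_time(chars, filt))
```

The paper uses many such filters of several widths, feeds the pooled features through a highway network, and then into an LSTM that predicts the next word.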

  10. Ask Me Anything: Dynamic Memory Networks for Natural Language Processing

    Paper · Mar 5, 2016 · arxiv.org · Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, Richard Socher

    Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a neural network architecture which p...

  11. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

    Paper · Dec 31, 2015 · arxiv.org · Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov

    One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progres...

  12. Deep Networks with Stochastic Depth

    Paper · Jul 28, 2016 · arxiv.org · Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Weinberger

    Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highl...
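The stochastic-depth idea is simple to state: during training each residual block survives with probability p_l (otherwise only the identity shortcut runs), and at test time every block runs with its output scaled by p_l. A stdlib sketch with scalar activations and arbitrary block functions (the paper's blocks are convolutional, and p_l decays linearly with depth):

```python
import random

def resnet_forward(x, blocks, survival_probs, rng, training=True):
    """Residual forward pass with stochastic depth."""
    for f, p in zip(blocks, survival_probs):
        if training:
            if rng.random() < p:
                x = x + f(x)          # block survives this pass
            # else: identity shortcut only, block skipped entirely
        else:
            x = x + p * f(x)          # expected-value scaling at test time
    return x

blocks = [lambda v: v, lambda v: v]   # toy residual branches
probs = [0.5, 0.5]
test_out = resnet_forward(1.0, blocks, probs, random.Random(0), training=False)
train_out = resnet_forward(1.0, blocks, probs, random.Random(0), training=True)
print(test_out, train_out)
```

Skipping blocks shortens the expected depth during training (reducing compute and acting like an ensemble of shallower networks) while keeping the full depth available at test time.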

  13. What makes for effective detection proposals?

    Paper · Aug 1, 2015 · arxiv.org · Jan Hosang, Rodrigo Benenson, Piotr Dollár, Bernt Schiele

    Current top performing object detectors employ detection proposals to guide the search for objects, thereby avoiding exhaustive sliding window search across images. Despite the popularity and wides...
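Proposal methods are typically compared by how many ground-truth boxes they cover at a given intersection-over-union (IoU) threshold. A stdlib sketch of that recall metric, assuming `(x1, y1, x2, y2)` box coordinates (the function names are mine):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def recall_at_iou(proposals, ground_truth, thresh=0.5):
    """Fraction of ground-truth boxes matched by at least one proposal
    with IoU >= thresh."""
    hits = sum(1 for g in ground_truth
               if any(iou(p, g) >= thresh for p in proposals))
    return hits / len(ground_truth)

print(recall_at_iou([(0, 0, 10, 10)], [(0, 0, 10, 10), (20, 20, 30, 30)]))
```

One of the paper's findings is that recall at the loose 0.5 threshold correlates poorly with final detection quality; recall across stricter thresholds is more informative.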

  14. Reading Text in the Wild with Convolutional Neural Networks

    Paper · Dec 4, 2014 · arxiv.org · Max Jaderberg, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman

    In this work we present an end-to-end system for text spotting -- localising and recognising text in natural scene images -- and text based image retrieval. This system is based on a region proposa...

  15. Perceptual Losses for Real-Time Style Transfer and Super-Resolution

    Paper · Mar 27, 2016 · arxiv.org · Justin Johnson, Alexandre Alahi, Li Fei-Fei

    We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks usin...
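Instead of per-pixel losses, the paper trains the feed-forward network with a feature reconstruction loss: the squared distance between activations of a fixed pretrained network (a VGG-16 layer in the paper) on the output and target images. A stdlib sketch with a toy stand-in feature extractor:

```python
def feature_reconstruction_loss(phi, y_hat, y):
    """Mean squared distance between feature activations phi(y_hat) and
    phi(y). In the paper phi is a fixed layer of a pretrained VGG-16;
    here any feature function stands in."""
    f_hat, f = phi(y_hat), phi(y)
    return sum((a - b) ** 2 for a, b in zip(f_hat, f)) / len(f)

# Toy stand-in feature extractor (NOT the paper's VGG features).
phi = lambda img: [sum(img), max(img)]
print(feature_reconstruction_loss(phi, [0.0, 0.0], [1.0, 1.0]))
```

Because the loss compares features rather than pixels, the trained network tolerates small spatial shifts and produces perceptually sharper results, at the cost of inheriting whatever the feature network considers salient.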

  16. Learning to Compose Neural Networks for Question Answering

    Paper · Jun 7, 2016 · arxiv.org · Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein

    We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collectio...

Related Topics

Machine Learning (115) · Deep Learning (115)