Topic: Awesome List: deep-learning-foundation


Short answer

This page shows the most relevant public items for Awesome List: deep-learning-foundation, ranked by trend activity and review signal. Use weekly for fast changes, monthly for more stable patterns, and all-time for evergreen picks.



  1. End-to-End Attention-based Large Vocabulary Speech Recognition

    Paper · Mar 14, 2016 · arxiv.org · Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, Yoshua Bengio

    Many of the current state-of-the-art Large Vocabulary Continuous Speech Recognition Systems (LVCSR) are hybrids of neural networks and Hidden Markov Models (HMMs). Most of these systems contain sep...

  2. Generating Sequences With Recurrent Neural Networks

    Paper · Jun 5, 2014 · arxiv.org · Alex Graves

    This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approac...

  3. Distributed Representations of Sentences and Documents

    Paper · May 22, 2014 · arxiv.org · Quoc V. Le, Tomas Mikolov

    Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite ...

  4. Convolutional Neural Networks for Sentence Classification

    Paper · Sep 3, 2014 · arxiv.org · Yoon Kim

    We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with litt...

  5. A Convolutional Neural Network for Modelling Sentences

    Paper · Apr 8, 2014 · arxiv.org · Nal Kalchbrenner, Edward Grefenstette, Phil Blunsom

    The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for...

  6. Neural Machine Translation by Jointly Learning to Align and Translate

    Paper · May 19, 2016 · arxiv.org · Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio

    Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single n...

  7. Memory Networks

    Paper · Nov 29, 2015 · arxiv.org · Jason Weston, Sumit Chopra, Antoine Bordes

    We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. ...

  8. Effective Approaches to Attention-based Neural Machine Translation

    Paper · Sep 20, 2015 · arxiv.org · Minh-Thang Luong, Hieu Pham, Christopher D. Manning

    An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little ...

  9. Exploring the Limits of Language Modeling

    Paper · Feb 11, 2016 · arxiv.org · Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu

    In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key chall...

  10. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

    Paper · Apr 19, 2016 · arxiv.org · Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio

    Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train ...

  11. A Neural Algorithm of Artistic Style

    Paper · Sep 2, 2015 · arxiv.org · Leon A. Gatys, Alexander S. Ecker, Matthias Bethge

    In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the al...

  12. Image Super-Resolution Using Deep Convolutional Networks

    Paper · Jul 31, 2015 · arxiv.org · Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang

    We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a de...

  13. Network In Network

    Paper · Mar 4, 2014 · arxiv.org · Min Lin, Qiang Chen, Shuicheng Yan

    We propose a novel deep network structure called "Network In Network" (NIN) to enhance model discriminability for local patches within the receptive field. The conventional convolutional la...

  14. Maxout Networks

    Paper · Sep 20, 2013 · arxiv.org · Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, Yoshua Bengio

    We consider the problem of designing models to leverage a recently introduced approximate model averaging technique called dropout. We define a simple new model called maxout (so named because its ...

  15. Return of the Devil in the Details: Delving Deep into Convolutional Nets

    Paper · Nov 5, 2014 · arxiv.org · Ken Chatfield, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman

    The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest...

  16. Very Deep Convolutional Networks for Large-Scale Image Recognition

    Paper · Apr 10, 2015 · arxiv.org · Karen Simonyan, Andrew Zisserman

    In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of...
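Several of the entries above (1, 6, 8, and 10) build on the attention mechanism introduced in "Neural Machine Translation by Jointly Learning to Align and Translate." As a rough illustration of the shared idea, here is a minimal NumPy sketch of Bahdanau-style additive attention; the dimensions, weight names, and random initialization are illustrative assumptions, not taken from any paper's released code.

```python
import numpy as np

def additive_attention(s, H, W_s, W_h, v):
    """Compute attention weights and a context vector for decoder state s
    over encoder states H (Bahdanau-style additive scoring; shapes are
    illustrative assumptions, not from the paper's implementation).

    s   : (d_dec,)        current decoder hidden state
    H   : (T, d_enc)      encoder hidden states, one per source position
    W_s : (d_att, d_dec)  projection of the decoder state
    W_h : (d_att, d_enc)  projection of the encoder states
    v   : (d_att,)        scoring vector
    """
    # Alignment scores e_j = v^T tanh(W_s s + W_h h_j) for each position j.
    scores = np.tanh(s @ W_s.T + H @ W_h.T) @ v          # shape (T,)
    # Softmax turns scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # The context vector is the attention-weighted average of encoder states.
    context = weights @ H                                # shape (d_enc,)
    return weights, context

# Toy example with random states and weights (sizes are arbitrary).
rng = np.random.default_rng(0)
d_dec, d_enc, d_att, T = 4, 6, 5, 7
s = rng.normal(size=d_dec)
H = rng.normal(size=(T, d_enc))
W_s = rng.normal(size=(d_att, d_dec))
W_h = rng.normal(size=(d_att, d_enc))
v = rng.normal(size=d_att)

weights, context = additive_attention(s, H, W_s, W_h, v)
print(weights.shape, context.shape)  # (7,) (6,)
```

The same scoring-then-softmax pattern appears in entry 8 (Luong et al.), which swaps the additive score for simpler dot-product variants, and in entry 10, which applies it over spatial image features instead of source words.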




FAQ

What does this Awesome List: deep-learning-foundation page rank?

It ranks public content for Awesome List: deep-learning-foundation using recent discussion, review, and engagement signals so you can triage faster. This guidance is specific to the Awesome List: deep-learning-foundation topic page on Attendemia.

How should I use weekly vs monthly vs all-time?

Use weekly for fast-moving updates, monthly for stable trend confirmation, and all-time for evergreen references.

How can I discover organizations active in Awesome List: deep-learning-foundation?

Use the linked entities section to jump to labs, companies, and experts connected to this topic and explore their timelines.

Can I follow this topic for updates?

Yes. Use the follow button on this page to subscribe and track new high-signal activity.