Topic: Awesome List: deep-learning-foundation

Short answer

This page shows the most relevant public items for Awesome List: deep-learning-foundation, ranked by trend activity and review signal. Use weekly for fast changes, monthly for more stable patterns, and all-time for evergreen picks.

  1. Tacotron: Towards End-to-End Speech Synthesis

    Paper · Apr 6, 2017 · arxiv.org · Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le, Yannis Agiomyrgiannakis, Rob Clark, Rif A. Saurous

    A text-to-speech synthesis system typically consists of multiple stages, such as a text analysis frontend, an acoustic model and an audio synthesis module. Building these components often requires ...

  2. A Knowledge-Grounded Neural Conversation Model

    Paper · Nov 15, 2018 · arxiv.org · Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, Michel Galley

    Neural network models are capable of generating extremely natural sounding conversational interactions. Nevertheless, these models have yet to demonstrate that they can incorporate content in the f...

  3. Convolutional Sequence to Sequence Learning

    Paper · Jul 25, 2017 · arxiv.org · Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin

    The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on con...

  4. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

    Paper · Apr 17, 2017 · arxiv.org · Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam

    We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions ...
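The depthwise separable factorization named in the abstract splits a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise mix of channels. A minimal NumPy sketch, not the paper's implementation (no padding, stride, or batching; names and shapes are illustrative):

```python
import numpy as np

def depthwise_separable_conv(x, depth_k, point_k):
    """Depthwise separable convolution (illustrative; no padding/stride).

    x:        input feature map, shape (H, W, C_in)
    depth_k:  one k x k spatial filter per input channel, shape (k, k, C_in)
    point_k:  1x1 filters mixing channels, shape (C_in, C_out)
    """
    H, W, C_in = x.shape
    k = depth_k.shape[0]
    Ho, Wo = H - k + 1, W - k + 1

    # Depthwise step: each channel is filtered independently.
    depth_out = np.zeros((Ho, Wo, C_in))
    for i in range(Ho):
        for j in range(Wo):
            patch = x[i:i + k, j:j + k, :]              # (k, k, C_in)
            depth_out[i, j] = np.sum(patch * depth_k, axis=(0, 1))

    # Pointwise step: a 1x1 convolution mixes channels.
    return depth_out @ point_k                          # (Ho, Wo, C_out)
```

The appeal is cost: per output position, a standard convolution needs roughly k²·C_in·C_out multiplications, while this factorization needs about k²·C_in + C_in·C_out.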

  5. EIE: Efficient Inference Engine on Compressed Deep Neural Network

    Paper · May 3, 2016 · arxiv.org · Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, William J. Dally

    State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with lim...
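The engine exploits the sparsity of pruned weight matrices by computing directly on the nonzeros. The core computation resembles a sparse matrix–vector product; a simplified sketch in CSR (compressed sparse row) form, which ignores EIE's weight sharing, quantization, and activation sparsity:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = W @ x where W is stored in CSR form, touching only nonzeros.

    data:    nonzero weight values
    indices: column index of each nonzero
    indptr:  indptr[i]:indptr[i+1] delimits row i's nonzeros
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for p in range(indptr[i], indptr[i + 1]):
            y[i] += data[p] * x[indices[p]]
    return y
```

With 90%+ of weights pruned away, both the memory footprint and the multiply count scale with the number of nonzeros rather than the dense matrix size.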

  6. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size

    Paper · Nov 4, 2016 · arxiv.org · Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer

    Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that ac...

  7. SSD: Single Shot MultiBox Detector

    Paper · Dec 29, 2016 · arxiv.org · Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg

    We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over diff...

  8. Generative Visual Manipulation on the Natural Image Manifold

    Paper · Dec 16, 2018 · arxiv.org · Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, Alexei A. Efros

    Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable...

  9. Colorful Image Colorization

    Paper · Oct 5, 2016 · arxiv.org · Richard Zhang, Phillip Isola, Alexei A. Efros

    Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches ...

  10. WaveNet: A Generative Model for Raw Audio

    Paper · Sep 19, 2016 · arxiv.org · Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, Koray Kavukcuoglu

    This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample ...
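The autoregressive structure rests on dilated causal convolutions: the output at time t depends only on samples at or before t, with dilation widening the receptive field. A minimal NumPy sketch of one such filter (the loop-based form and names are illustrative, not the paper's implementation):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """1-D causal convolution: output at time t sees only x[<= t].

    x: input signal, shape (T,)
    w: filter taps, shape (k,); tap 0 multiplies the oldest sample
    """
    k = len(w)
    # Left-pad so no future samples leak into the output.
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        for i in range(k):
            y[t] += w[i] * xp[t + pad - (k - 1 - i) * dilation]
    return y
```

Stacking layers with dilations 1, 2, 4, 8, … lets the receptive field grow exponentially with depth, which is what makes modeling raw audio at thousands of samples per second tractable.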

  11. Learning to learn by gradient descent by gradient descent

    Paper · Jun 14, 2016 · arxiv.org · Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Nando de Freitas

    The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show ...

  12. Layer Normalization

    Paper · Jul 21, 2016 · arxiv.org · Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton

    Training state-of-the-art, deep neural networks is computationally expensive. One way to reduce the training time is to normalize the activities of the neurons. A recently introduced technique call...
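The technique normalizes each example's activations across the hidden units of a layer (unlike batch normalization, which averages across the batch). A minimal sketch, with `gain` and `bias` as the learned rescaling parameters:

```python
import numpy as np

def layer_norm(a, gain, bias, eps=1e-5):
    """Normalize each example's activations to zero mean, unit variance.

    a: activations, shape (batch, hidden). Statistics are computed per
    example over the hidden dimension, so the operation is independent
    of batch size and applies unchanged at training and test time.
    """
    mu = a.mean(axis=-1, keepdims=True)
    sigma = a.std(axis=-1, keepdims=True)
    return gain * (a - mu) / (sigma + eps) + bias
```

Because the statistics are per-example, the same computation works for recurrent networks, where batch statistics vary across time steps.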

  13. Playing Atari with Deep Reinforcement Learning

    Paper · Dec 19, 2013 · arxiv.org · Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller

    We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural networ...
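At the heart of the approach is the one-step Q-learning target that the convolutional network is regressed toward. A simplified sketch of the target (the paper's experience replay and frame preprocessing are omitted):

```python
import numpy as np

def q_target(reward, q_next, done, gamma=0.99):
    """One-step Q-learning target: r + gamma * max_a Q(s', a).

    q_next: Q-values predicted for the next state, shape (n_actions,)
    done:   True if the episode ended, so there is no bootstrap term
    """
    return reward if done else reward + gamma * np.max(q_next)
```

The network's predicted Q-value for the taken action is then trained toward this scalar with a squared-error loss.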

  14. Continuous control with deep reinforcement learning

    Paper · Jul 5, 2019 · arxiv.org · Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra

    We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can op...
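One stabilizing ingredient the algorithm (DDPG) adds to the Deep Q-Learning recipe is "soft" target networks, updated by Polyak averaging toward the online actor and critic rather than copied periodically. A minimal sketch (the tau value is illustrative):

```python
def soft_update(target_params, online_params, tau=0.005):
    """Polyak averaging: theta_target <- tau * theta + (1 - tau) * theta_target.

    Both arguments are flat lists of parameters; returns the new
    target parameters, which trail the online network slowly.
    """
    return [tau * w + (1.0 - tau) * wt
            for w, wt in zip(online_params, target_params)]
```

The slowly moving targets keep the bootstrapped critic targets from chasing themselves, at the cost of slower value propagation.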

  15. Mastering the game of Go with deep neural networks and tree search - Nature

    Paper · Jan 27, 2016 · www.nature.com · David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis

    A computer Go program based on deep neural networks defeats a human professional player to achieve one of the grand challenges of artificial intelligence.

  16. Deep Reinforcement Learning with Double Q-learning

    Paper · Dec 8, 2015 · arxiv.org · Hado van Hasselt, Arthur Guez, David Silver

    The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they har...
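The overestimation arises because the max operator uses the same (noisy) values both to select and to evaluate the best next action. Double Q-learning decouples the two; a minimal sketch of the target, with network parameterization omitted:

```python
import numpy as np

def double_q_target(reward, q_online_next, q_target_next, done, gamma=0.99):
    """Double Q-learning target: the online network selects the action,
    the target network evaluates it, reducing the upward bias of
    max_a Q(s', a)."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))         # selection: online net
    return reward + gamma * q_target_next[a_star]  # evaluation: target net
```

Standard DQN would instead bootstrap from `max(q_target_next)`, letting a single overestimated action both win the argmax and supply the value.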

  17. Speech Recognition with Deep Recurrent Neural Networks

    Paper · Mar 22, 2013 · arxiv.org · Alex Graves, Abdel-rahman Mohamed, Geoffrey Hinton

    Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labe...

  18. Deep Speech 2: End-to-End Speech Recognition in English and Mandarin

    Paper · Dec 8, 2015 · arxiv.org · Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu

    We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech--two vastly different languages. Because it replaces entire pipelines of hand-en...

FAQ

What does this Awesome List: deep-learning-foundation page rank?

It ranks public content for Awesome List: deep-learning-foundation on Attendemia using recent discussion, review, and engagement signals, so you can triage faster.

How should I use weekly vs monthly vs all-time?

Use weekly for fast-moving updates, monthly for stable trend confirmation, and all-time for evergreen references.

How can I discover organizations active in Awesome List: deep-learning-foundation?

Use the linked entities section to jump to labs, companies, and experts connected to this topic and explore their timelines.

Can I follow this topic for updates?

Yes. Use the follow button on this page to subscribe and track new high-signal activity.