Topic: Awesome List: nlp-classic

Short answer

This page shows the most relevant public items for Awesome List: nlp-classic, ranked by trend activity and review signal. Use weekly for fast changes, monthly for more stable patterns, and all-time for evergreen picks.


  1. Distributed Representations of Words and Phrases and their Compositionality

    Paper · Oct 16, 2013 · arxiv.org · Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean

    The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic ...
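The training objective the paper proposes, skip-gram with negative sampling, is easy to sketch: maximize the score of observed (center, context) pairs while pushing down a few sampled negatives. A minimal numpy version of one SGD step (toy vocabulary and hyperparameters are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy vocabulary and the two embedding tables ("input" and "output" vectors).
vocab = ["cat", "sat", "on", "the", "mat"]
dim = 8
W_in = rng.normal(scale=0.1, size=(len(vocab), dim))   # center-word vectors
W_out = rng.normal(scale=0.1, size=(len(vocab), dim))  # context-word vectors

def sgns_step(center, context, negatives, lr=0.1):
    """One SGD step of skip-gram with negative sampling:
    maximize log sigma(v_c . u_o) + sum log sigma(-v_c . u_neg)."""
    v = W_in[center]
    loss = 0.0
    grad_v = np.zeros_like(v)
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[word]
        p = sigmoid(v @ u)
        loss += -np.log(p if label else 1.0 - p)
        g = p - label                      # d loss / d (v . u)
        grad_v += g * u
        W_out[word] -= lr * g * v
    W_in[center] -= lr * grad_v
    return loss

# Repeatedly training on one (center, context) pair drives the loss down.
losses = [sgns_step(0, 1, [3, 4]) for _ in range(200)]
```

In the real model, negatives are drawn from a smoothed unigram distribution rather than fixed as here.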

  2. Efficient Estimation of Word Representations in Vector Space

    Paper · Sep 7, 2013 · arxiv.org · Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean

    We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity ta...

  3. Retrofitting Word Vectors to Semantic Lexicons

    Paper · Mar 22, 2015 · arxiv.org · Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, Noah A. Smith

    Vector space word representations are learned from distributional information of words in large corpora. Although such statistics are semantically informative, they disregard the valuable informati...
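The retrofitting procedure itself is a short iterative update: each retrofitted vector is pulled toward the average of its lexicon neighbours while staying anchored to its original embedding. A sketch with scalar weights alpha and beta (the paper lets these vary per word pair; the toy vectors below are illustrative):

```python
import numpy as np

def retrofit(vectors, lexicon, alpha=1.0, beta=1.0, iters=10):
    """Retrofit word vectors toward lexicon neighbours, in the spirit of
    the paper's update:
    q_i <- (alpha * q_hat_i + beta * sum_{j in N(i)} q_j) / (alpha + beta * |N(i)|)
    where q_hat_i is the original (pre-retrofitting) vector."""
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iters):
        for w, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in new]
            if not nbrs:
                continue
            num = alpha * vectors[w] + beta * sum(new[n] for n in nbrs)
            new[w] = num / (alpha + beta * len(nbrs))
    return new

# Hypothetical 2-d vectors: "happy" and "glad" start orthogonal but are
# linked in the lexicon; "sad" has no lexicon entry and stays put.
vecs = {"happy": np.array([1.0, 0.0]),
        "glad": np.array([0.0, 1.0]),
        "sad": np.array([-1.0, 0.0])}
lex = {"happy": ["glad"], "glad": ["happy"]}
out = retrofit(vecs, lex)
```

After retrofitting, the linked pair is closer in cosine terms while unlinked words are untouched.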

  4. Semi-supervised Sequence Learning

    Paper · Nov 4, 2015 · arxiv.org · Andrew M. Dai, Quoc V. Le

    We present two approaches that use unlabeled data to improve sequence learning with recurrent networks. The first approach is to predict what comes next in a sequence, which is a conventional langu...

  5. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation

    Paper · Oct 8, 2016 · arxiv.org · Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean

    Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems...

  6. Counter-fitting Word Vectors to Linguistic Constraints

    Paper · Mar 2, 2016 · arxiv.org · Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, Steve Young

    In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors' capability for judging sem...
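The two core forces in counter-fitting (synonym attract, antonym repel) can be sketched as simple margin-based gradient steps. This is a simplified illustration of the objective's shape, not the authors' full procedure, which also includes a vector-space-preservation term; all vectors and margins below are illustrative assumptions:

```python
import numpy as np

def counter_fit_step(vecs, synonyms, antonyms, lr=0.05, gamma=0.0, delta=1.0):
    """One simplified counter-fitting step: pull synonym pairs together
    whenever their distance exceeds gamma, and push antonym pairs apart
    whenever their distance falls below delta."""
    for a, b in synonyms:
        d = vecs[a] - vecs[b]
        dist = np.linalg.norm(d)
        if dist > gamma:                       # synonym attract
            vecs[a] -= lr * d / (dist + 1e-9)
            vecs[b] += lr * d / (dist + 1e-9)
    for a, b in antonyms:
        d = vecs[a] - vecs[b]
        dist = np.linalg.norm(d)
        if dist < delta:                       # antonym repel
            vecs[a] += lr * d / (dist + 1e-9)
            vecs[b] -= lr * d / (dist + 1e-9)
    return vecs

# Hypothetical starting point: the antonyms "east"/"west" sit close together
# (a classic failure of distributional vectors), the synonyms sit apart.
vecs = {"east": np.array([1.0, 0.0]), "west": np.array([0.9, 0.1]),
        "glad": np.array([0.0, 1.0]), "happy": np.array([0.3, 0.8])}
for _ in range(20):
    counter_fit_step(vecs, [("happy", "glad")], [("east", "west")])
```

After a few steps the antonym pair is farther apart and the synonym pair closer, which is exactly the geometry the paper shows improves similarity judgements.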

  7. Enriching Word Vectors with Subword Information

    Paper · Jun 19, 2017 · arxiv.org · Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov

    Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of wo...
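The subword idea is concrete: each word is represented by the bag of its character n-grams (plus the word itself), so morphologically related words share parameters and out-of-vocabulary words still get vectors. A sketch of the n-gram extraction, using the paper's own "where" example:

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Subword units in the fastText style: pad the word with boundary
    markers '<' and '>', collect all character n-grams of length
    n_min..n_max, and add the whole padded word as a special unit."""
    padded = f"<{word}>"
    grams = {padded[i:i + n]
             for n in range(n_min, n_max + 1)
             for i in range(len(padded) - n + 1)}
    grams.add(padded)
    return grams

# For "where" with n = 3 this yields the paper's example set:
# <wh, whe, her, ere, re>, plus the special sequence <where>.
trigram_units = char_ngrams("where", 3, 3)
```

A word's vector is then the sum of the vectors of its n-gram units; in the released fastText implementation the n-gram range is controlled by `minn`/`maxn`.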

  8. Bag of Tricks for Efficient Text Classification

    Paper · Aug 9, 2016 · arxiv.org · Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov

    This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of a...
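The fastText classifier's architecture is deliberately minimal: average the word embeddings of a document, then apply a linear softmax classifier, training both jointly. A toy numpy sketch of that baseline (the real system adds n-gram features, feature hashing, and a hierarchical softmax; the hyperparameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

class FastTextClassifier:
    """Bag-of-tricks baseline: averaged word embeddings -> linear softmax."""
    def __init__(self, vocab_size, dim, n_classes, lr=0.5):
        self.E = rng.normal(scale=0.1, size=(vocab_size, dim))  # embeddings
        self.W = np.zeros((dim, n_classes))                     # linear layer
        self.lr = lr

    def _probs(self, ids):
        h = self.E[ids].mean(axis=0)           # document = mean of word vecs
        z = h @ self.W
        e = np.exp(z - z.max())
        return h, e / e.sum()

    def step(self, ids, label):
        h, p = self._probs(ids)
        g = p.copy()
        g[label] -= 1.0                        # softmax cross-entropy gradient
        self.W -= self.lr * np.outer(h, g)
        self.E[ids] -= self.lr * (self.W @ g) / len(ids)
        return -np.log(p[label] + 1e-12)

    def predict(self, ids):
        return int(np.argmax(self._probs(ids)[1]))

# Two linearly separable toy documents learn quickly.
clf = FastTextClassifier(vocab_size=4, dim=8, n_classes=2)
for _ in range(200):
    clf.step([0, 1], 0)
    clf.step([2, 3], 1)
```

The point of the paper is that this order-insensitive model is often competitive with far deeper classifiers while training orders of magnitude faster.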

  9. Towards Universal Paraphrastic Sentence Embeddings

    Paper · Mar 4, 2016 · arxiv.org · John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu

    We consider the problem of learning general-purpose, paraphrastic sentence embeddings based on supervision from the Paraphrase Database (Ganitkevitch et al., 2013). We compare six compositional arc...
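Among the compositional architectures the paper compares, plain word averaging turns out to be a remarkably strong paraphrastic baseline. A sketch of that baseline with hypothetical 2-d vectors (the paper trains the word vectors on PPDB pairs; here they are hand-set for illustration):

```python
import numpy as np

def avg_embed(tokens, table):
    """Word-averaging sentence embedding: the simple compositional
    baseline richer architectures are measured against."""
    return np.mean([table[t] for t in tokens], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical vectors where "film" and "movie" are near-synonyms.
table = {"film": np.array([1.0, 0.1]), "movie": np.array([0.9, 0.2]),
         "great": np.array([0.2, 1.0]), "banana": np.array([-0.5, 0.4])}
s1 = avg_embed(["great", "film"], table)
s2 = avg_embed(["great", "movie"], table)
s3 = avg_embed(["banana"], table)
```

Under these vectors the paraphrase pair scores higher than the unrelated sentence, which is the behaviour the paper evaluates at scale.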

  10. Advances in Pre-Training Distributed Word Representations

    Paper · Dec 26, 2017 · arxiv.org · Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, Armand Joulin

    Many Natural Language Processing applications nowadays rely on pre-trained word representations estimated from large text corpora such as news collections, Wikipedia and Web Crawl. In this paper, w...

  11. Fast and Accurate Entity Recognition with Iterated Dilated Convolutions

    Paper · Jul 22, 2017 · arxiv.org · Emma Strubell, Patrick Verga, David Belanger, Andrew McCallum

    Today when many practitioners run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs. Recent advances in GPU hardware have led to the...
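The speed argument rests on dilated convolutions: with kernel size k and dilation d, one layer sees a span of (k-1)·d + 1 tokens, so stacking layers with dilations 1, 2, 4, ... grows the receptive field exponentially in depth while every token is processed in parallel. A minimal numpy sketch of one 'same'-padded dilated 1-D convolution (not the paper's full iterated architecture):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D dilated convolution over a token sequence.
    The kernel taps are spaced `dilation` positions apart, so the
    receptive field widens without adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation
    padded = np.pad(x, (span // 2, span - span // 2))
    return np.array([sum(kernel[j] * padded[i + j * dilation] for j in range(k))
                     for i in range(len(x))])

# A centre-tap kernel is the identity regardless of dilation.
x = np.arange(5.0)
identity = dilated_conv1d(x, [0.0, 1.0, 0.0], 1)
# A sum kernel with dilation 2 pools tokens two positions apart.
pooled = dilated_conv1d(np.ones(5), [1.0, 1.0, 1.0], 2)
```

Four such layers with dilations 1, 2, 4, 8 already cover a 31-token context, which is why the ID-CNN tagger can trade an LSTM's sequential scan for parallel convolutions.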

  12. A Compressed Sensing View of Unsupervised Text Embeddings...

    Paper · Feb 15, 2018 · openreview.net · Sanjeev Arora, Mikhail Khodak, Nikunj Saunshi, Kiran Vodrahalli

    We use the theory of compressed sensing to prove that LSTMs can do at least as well on linear text classification as Bag-of-n-Grams.

  13. Learned in Translation: Contextualized Word Vectors

    Paper · Jun 20, 2018 · arxiv.org · Bryan McCann, James Bradbury, Caiming Xiong, Richard Socher

    Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initia...

  14. Deep contextualized word representations

    Paper · Mar 22, 2018 · arxiv.org · Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer

    We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguist...
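A key mechanism in ELMo is how a downstream task consumes the biLM: rather than taking only the top layer, it learns a softmax-normalized weighted sum over all layer activations, scaled by a task-specific scalar. A sketch of that mixing step (layer activations here are toy arrays, not real biLM outputs):

```python
import numpy as np

def scalar_mix(layers, weights, gamma=1.0):
    """ELMo-style task representation: softmax-normalize the per-layer
    weights, take the weighted sum of layer activations, and scale by a
    task scalar gamma."""
    e = np.exp(weights - np.max(weights))
    s = e / e.sum()                      # softmax over layers
    return gamma * sum(w * h for w, h in zip(s, layers))

# Three toy "layers" of shape (sequence_length, dim).
layers = [np.ones((4, 3)), 2 * np.ones((4, 3)), 3 * np.ones((4, 3))]
mixed = scalar_mix(layers, np.zeros(3))  # zero logits -> uniform weights
```

With uniform weights this reduces to a plain average of the layers; training lets each task tilt the mix toward syntax-heavy lower layers or semantics-heavy upper ones.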

  15. Universal Sentence Encoder

    Paper · Apr 12, 2018 · arxiv.org · Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil

    We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. The models are efficient and result in accurate performance on diverse...

Related Topics

Machine Learning (25) · #machine-learning (25) · Awesome List: deep-learning-foundation (3) · Deep Learning (3)