Proceedings of the 1st Workshop on Representation Learning for NLP
Phil Blunsom, Kyunghyun Cho, Shay Cohen, Edward Grefenstette, Karl Moritz Hermann, Laura Rimell, Jason Weston, Scott Wen-tau Yih (Editors)
- Anthology ID: W16-16
- Month: August
- Year: 2016
- Address: Berlin, Germany
- Venue: RepL4NLP
- SIG: SIGREP
- Publisher: Association for Computational Linguistics
- URL: https://aclanthology.org/W16-16
- DOI: 10.18653/v1/W16-16
- PDF: https://preview.aclanthology.org/emnlp22-frontmatter/W16-16.pdf
Proceedings of the 1st Workshop on Representation Learning for NLP
Phil Blunsom | Kyunghyun Cho | Shay Cohen | Edward Grefenstette | Karl Moritz Hermann | Laura Rimell | Jason Weston | Scott Wen-tau Yih
Explaining Predictions of Non-Linear Classifiers in NLP
Leila Arras | Franziska Horn | Grégoire Montavon | Klaus-Robert Müller | Wojciech Samek
Joint Learning of Sentence Embeddings for Relevance and Entailment
Petr Baudiš | Silvestr Stanko | Jan Šedivý
A Joint Model for Word Embedding and Word Morphology
Kris Cao | Marek Rei
On the Compositionality and Semantic Interpretation of English Noun Compounds
Corina Dima
Functional Distributional Semantics
Guy Emerson | Ann Copestake
Assisting Discussion Forum Users using Deep Recurrent Neural Networks
Jacob Hagstedt P Suorra | Olof Mogren
Adjusting Word Embeddings with Semantic Intensity Orders
Joo-Kyung Kim | Marie-Catherine de Marneffe | Eric Fosler-Lussier
Towards Abstraction from Extraction: Multiple Timescale Gated Recurrent Unit for Summarization
Minsoo Kim | Dennis Singh Moirangthem | Minho Lee
An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation
Jey Han Lau | Timothy Baldwin
Quantifying the Vanishing Gradient and Long Distance Dependency Problem in Recursive Neural Networks and Recursive LSTMs
Phong Le | Willem Zuidema
LSTM-Based Mixture-of-Experts for Knowledge-Aware Dialogues
Phong Le | Marc Dymetman | Jean-Michel Renders
Mapping Unseen Words to Task-Trained Embedding Spaces
Pranava Swaroop Madhyastha | Mohit Bansal | Kevin Gimpel | Karen Livescu
Multilingual Modal Sense Classification using a Convolutional Neural Network
Ana Marasović | Anette Frank
Towards Cross-lingual Distributed Representations without Parallel Text Trained with Adversarial Autoencoders
Antonio Valerio Miceli Barone
Decomposing Bilexical Dependencies into Semantic and Syntactic Vectors
Jeff Mitchell
Learning Semantic Relatedness in Community Question Answering Using Neural Models
Henry Nassif | Mitra Mohtarami | James Glass
Learning Text Similarity with Siamese Recurrent Networks
Paul Neculoiu | Maarten Versteegh | Mihai Rotaru
A Two-stage Approach for Extending Event Detection to New Types via Neural Networks
Thien Huu Nguyen | Lisheng Fu | Kyunghyun Cho | Ralph Grishman
Parameterized context windows in Random Indexing
Tobias Norlund | David Nilsson | Magnus Sahlgren
Making Sense of Word Embeddings
Maria Pelevina | Nikolay Arefiev | Chris Biemann | Alexander Panchenko
Pair Distance Distribution: A Model of Semantic Representation
Yonatan Ramni | Oded Maimon | Evgeni Khmelnitsky
Measuring Semantic Similarity of Words Using Concept Networks
Gábor Recski | Eszter Iklódi | Katalin Pajkossy | András Kornai
Using Embedding Masks for Word Categorization
Stefan Ruseti | Traian Rebedea | Stefan Trausan-Matu
Sparsifying Word Representations for Deep Unordered Sentence Modeling
Prasanna Sattigeri | Jayaraman J. Thiagarajan
Why “Blow Out”? A Structural Analysis of the Movie Dialog Dataset
Richard Searle | Megan Bingham-Walker
Learning Word Importance with the Neural Bag-of-Words Model
Imran Sheikh | Irina Illina | Dominique Fohr | Georges Linarès
A Vector Model for Type-Theoretical Semantics
Konstantin Sokolov
Towards Generalizable Sentence Embeddings
Eleni Triantafillou | Jamie Ryan Kiros | Raquel Urtasun | Richard Zemel
Domain Adaptation for Neural Networks by Parameter Augmentation
Yusuke Watanabe | Kazuma Hashimoto | Yoshimasa Tsuruoka
Neural Associative Memory for Dual-Sequence Modeling
Dirk Weissenborn