2nd BlackboxNLP@ACL 2019: Florence, Italy
- Tal Linzen, Grzegorz Chrupała, Yonatan Belinkov, Dieuwke Hupkes:
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@ACL 2019, Florence, Italy, August 1, 2019. Association for Computational Linguistics 2019, ISBN 978-1-950737-30-7
- Kris Korrel, Dieuwke Hupkes, Verna Dankers, Elia Bruni:
Transcoding Compositionally: Using Attention to Find More Generalizable Solutions. 1-11
- Jeremy Barnes, Lilja Øvrelid, Erik Velldal:
Sentiment Analysis Is Not Solved! Assessing and Probing Sentiment Classification. 12-23
- Dominik Schlechtweg, Cennet Oguz, Sabine Schulte im Walde:
Second-order Co-occurrence Sensitivity of Skip-Gram with Negative Sampling. 24-30
- Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos:
Can Neural Networks Understand Monotonicity Reasoning? 31-40
- Zhiguo Wang, Yue Zhang, Mo Yu, Wei Zhang, Lin Pan, Linfeng Song, Kun Xu, Yousef El-Kurdi:
Multi-Granular Text Encoding for Self-Explaining Categorization. 41-45
- Alexander Kuhnle, Ann A. Copestake:
The Meaning of "Most" for Visual Question Answering Models. 46-55
- Julia Strout, Ye Zhang, Raymond J. Mooney:
Do Human Rationales Improve Machine Explanations? 56-62
- Jesse Vig, Yonatan Belinkov:
Analyzing the Structure of Attention in a Transformer Language Model. 63-76
- Rama Rohit Reddy Gangula, Suma Reddy Duggenpudi, Radhika Mamidi:
Detecting Political Bias in News Articles Using Headline Attention. 77-84
- Aarne Talman, Stergios Chatzikyriakidis:
Testing the Generalization Power of Neural Network Models across NLI Benchmarks. 85-94
- Yuval Pinter, Marc Marone, Jacob Eisenstein:
Character Eyes: Seeing Language through Character-Level Taggers. 95-102
- Jialin Wu, Raymond J. Mooney:
Faithful Multimodal Explanation for Visual Question Answering. 103-112
- Leila Arras, Ahmed Osman, Klaus-Robert Müller, Wojciech Samek:
Evaluating Recurrent Neural Network Explanations. 113-126
- Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, Elia Bruni:
On the Realization of Compositionality in Neural Networks. 127-137
- Xiang Yu, Ngoc Thang Vu, Jonas Kuhn:
Learning the Dyck Language with Attention-based Seq2Seq Models. 138-146
- Josua Stadelmaier, Sebastian Padó:
Modeling Paths for Explainable Knowledge Base Completion. 147-157
- Paola Merlo:
Probing Word and Sentence Embeddings for Long-distance Dependencies Effects in French and English. 158-172
- Tomáš Musil, Jonáš Vidra, David Mareček:
Derivational Morphological Relations in Word Embeddings. 173-180
- Ethan Wilcox, Roger Levy, Richard Futrell:
Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations. 181-190
- Samira Abnar, Lisa Beinborn, Rochelle Choenni, Willem H. Zuidema:
Blackbox Meets Blackbox: Representational Similarity & Stability Analysis of Neural Language Models and Brains. 191-203
- Shammur Absar Chowdhury, Roberto Zamparelli:
An LSTM Adaptation Study of (Un)grammaticality. 204-212
- Antonios Anastasopoulos:
An Analysis of Source-Side Grammatical Errors in NMT. 213-223
- William Merrill, Lenny Khazan, Noah Amsel, Yiding Hao, Simon Mendelsohn, Robert Frank:
Finding Hierarchical Structure in Neural Stacks Using Unsupervised Parsing. 224-232
- Yi-Ting Tsai, Min-Chu Yang, Han-Yu Chen:
Adversarial Attack on Sentiment Classification. 233-240
- Yongjie Lin, Yi Chern Tan, Robert Frank:
Open Sesame: Getting inside BERT's Linguistic Knowledge. 241-253
- Filip Graliński, Anna Wróblewska, Tomasz Stanisławek, Kamil Grabowski, Tomasz Górecki:
GEval: Tool for Debugging NLP Datasets and Models. 254-262
- David Mareček, Rudolf Rosa:
From Balustrades to Pierre Vinken: Looking for Syntax in Transformer Self-Attentions. 263-275
- Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning:
What Does BERT Look at? An Analysis of BERT's Attention. 276-286
