Neural-based Context Representation Learning for Dialog Act Classification

Daniel Ortega, Ngoc Thang Vu


Abstract
We explore context representation learning methods in neural-based models for dialog act classification. We propose and extensively compare different methods that combine recurrent neural network architectures and attention mechanisms (AMs) at different context levels. Our experimental results on two benchmark datasets show consistent improvements over models without contextual information and reveal that the most suitable AM in the architecture depends on the nature of the dataset.
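To make the described setup concrete, below is a minimal, hypothetical sketch of this kind of architecture: a recurrent encoder with word-level attention per utterance, followed by a second attention over the current utterance and its preceding context utterances, feeding a dialog act classifier. This is not the authors' exact model; the use of GRUs, the layer sizes, and all names (e.g. AttentionPool, ContextDialogActClassifier) are illustrative assumptions.

# Minimal sketch (not the published model): utterance-level and context-level
# attention for dialog act classification. All hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionPool(nn.Module):
    """Additive attention that pools a sequence of vectors into one vector."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.query = nn.Linear(dim, 1, bias=False)

    def forward(self, seq):                              # seq: (batch, steps, dim)
        scores = self.query(torch.tanh(self.proj(seq)))  # (batch, steps, 1)
        weights = F.softmax(scores, dim=1)
        return (weights * seq).sum(dim=1)                 # (batch, dim)


class ContextDialogActClassifier(nn.Module):
    def __init__(self, vocab_size, num_acts, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.utt_rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.word_attention = AttentionPool(2 * hidden_dim)     # within an utterance
        self.context_attention = AttentionPool(2 * hidden_dim)  # over context utterances
        self.classifier = nn.Linear(2 * hidden_dim, num_acts)

    def encode_utterance(self, tokens):                  # tokens: (batch, words)
        emb = self.embedding(tokens)
        states, _ = self.utt_rnn(emb)                    # (batch, words, 2*hidden)
        return self.word_attention(states)               # (batch, 2*hidden)

    def forward(self, dialog):                           # dialog: (batch, utterances, words)
        batch, n_utts, n_words = dialog.shape
        flat = dialog.view(batch * n_utts, n_words)
        utt_vecs = self.encode_utterance(flat).view(batch, n_utts, -1)
        # Attend over the current utterance and its preceding context utterances.
        context_vec = self.context_attention(utt_vecs)
        return self.classifier(context_vec)              # logits over dialog act labels


if __name__ == "__main__":
    model = ContextDialogActClassifier(vocab_size=5000, num_acts=42)
    dialog = torch.randint(1, 5000, (8, 3, 20))  # 8 dialogs, 3 utterances, 20 tokens each
    print(model(dialog).shape)                   # torch.Size([8, 42])

In this sketch the context level is handled by the second attention layer, which learns how much each preceding utterance contributes to classifying the current one; the paper compares several such combinations of recurrent encoders and attention placements.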
Anthology ID: W17-5530
Volume: Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue
Month: August
Year: 2017
Address: Saarbrücken, Germany
Editors: Kristiina Jokinen, Manfred Stede, David DeVault, Annie Louis
Venue: SIGDIAL
SIG: SIGDIAL
Publisher: Association for Computational Linguistics
Pages: 247–252
URL: https://aclanthology.org/W17-5530/
DOI: 10.18653/v1/W17-5530
Cite (ACL): Daniel Ortega and Ngoc Thang Vu. 2017. Neural-based Context Representation Learning for Dialog Act Classification. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 247–252, Saarbrücken, Germany. Association for Computational Linguistics.
Cite (Informal): Neural-based Context Representation Learning for Dialog Act Classification (Ortega & Vu, SIGDIAL 2017)
PDF: https://aclanthology.org/W17-5530.pdf
