Event Representation Learning with Multi-Grained Contrastive Learning and Triple-Mixture of Experts

Tianqi Hu, Lishuang Li, Xueyang Qin, Yubo Feng


Abstract
Event representation learning plays a crucial role in numerous natural language processing (NLP) tasks, as it facilitates the extraction of semantic features associated with events. Current contrastive-learning-based methods of event representation learning process positive examples with a single-grained random masked language model (MLM), but fall short in learning information inside events from multiple aspects. In this paper, we introduce multi-grained contrastive learning and triple-mixture of experts (MCTM) for event representation learning. Our proposed method extends the random MLM by incorporating a specialized MLM designed to capture different grammatical structures within events, which allows the model to learn token-level knowledge from multiple perspectives. Furthermore, we have observed that masking tokens at different granularities affects the model differently; therefore, we incorporate a mixture of experts (MoE) to learn the importance weights associated with different granularities. Our experiments demonstrate that MCTM outperforms other baselines on tasks such as hard similarity and transitive sentence similarity, highlighting the superiority of our method.
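
The abstract names two mechanisms: MLM masking applied at several granularities, and an MoE gate that learns how strongly each granularity should contribute to the fused event representation. Below is a minimal, hypothetical PyTorch sketch of the gating idea only; the class name MoEGate, the softmax linear gate, and the three granularity views are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the authors' code): fuse event views produced
# under different masking granularities with a learned softmax gate.

class MoEGate(nn.Module):
    """Assigns an importance weight to each granularity-specific view."""
    def __init__(self, hidden_dim: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, num_experts)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_experts, hidden_dim), one vector per granularity
        weights = torch.softmax(self.gate(views.mean(dim=1)), dim=-1)
        # Weighted sum over the granularity axis -> fused event representation
        return (weights.unsqueeze(-1) * views).sum(dim=1)

# Example: three views (e.g., random-, structure-, and event-level masking)
batch, experts, dim = 4, 3, 768
views = torch.randn(batch, experts, dim)
fused = MoEGate(dim, experts)(views)
print(fused.shape)  # torch.Size([4, 768])
```

In a contrastive setup of the kind the abstract describes, the fused vector would then serve as the anchor or positive representation in the contrastive loss; that wiring is omitted here.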
Anthology ID:
2024.lrec-main.588
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
6643–6654
URL:
https://aclanthology.org/2024.lrec-main.588/
Cite (ACL):
Tianqi Hu, Lishuang Li, Xueyang Qin, and Yubo Feng. 2024. Event Representation Learning with Multi-Grained Contrastive Learning and Triple-Mixture of Experts. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 6643–6654, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Event Representation Learning with Multi-Grained Contrastive Learning and Triple-Mixture of Experts (Hu et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.588.pdf
