
Publication

Domain adaptation in semantic role labeling using a neural language model and linguistic resources

Journal Contribution - Journal Article

© 2014 IEEE. We propose a method for adapting Semantic Role Labeling (SRL) systems from a source domain to a target domain by combining a neural language model and linguistic resources to generate additional training examples. We primarily aim to improve the labeling of the Location, Time, Manner and Direction roles. In our methodology, the main words of selected predicates and arguments in the source-domain training data are replaced with words from the target domain. The replacement words are generated by a language model and then filtered by several linguistic filters, including Part-Of-Speech (POS), WordNet and predicate constraints. In experiments on the out-of-domain CoNLL 2009 data, using a Recurrent Neural Network Language Model (RNNLM) and a well-known semantic parser from Lund University, we show improved recall and F1 on the four targeted roles without penalizing precision. These results improve on those of the same SRL system trained without the language model and linguistic resources, and surpass those of the same system trained on examples enriched with word embeddings. We also demonstrate the importance of using a language model and the vocabulary of the target domain when generating new training examples.
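
The abstract describes a generate-and-filter augmentation pipeline: a language model proposes target-domain replacements for the head word of a selected argument, and POS, WordNet and predicate constraints discard implausible candidates before the substituted sentence is added as a new training example. The Python sketch below illustrates that flow only; the helper callables (propose_replacements, same_pos, shares_wordnet_class, predicate_allows) are hypothetical stand-ins for the paper's RNNLM and linguistic resources, not the authors' implementation.

```python
# Hedged sketch of the generate-and-filter augmentation described in the abstract.
# All helper callables are hypothetical placeholders, not the paper's code.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Example:
    tokens: List[str]    # sentence tokens from the source-domain training data
    predicate_idx: int   # position of the predicate
    argument_idx: int    # position of the argument head word to replace
    role: str            # e.g. "Location", "Time", "Manner", "Direction"

def augment(example: Example,
            propose_replacements: Callable[[List[str], int], List[str]],
            same_pos: Callable[[str, str], bool],
            shares_wordnet_class: Callable[[str, str], bool],
            predicate_allows: Callable[[str, str, str], bool]) -> List[Example]:
    """Create extra training examples by swapping the argument head word
    with language-model candidates that pass all linguistic filters."""
    original = example.tokens[example.argument_idx]
    predicate = example.tokens[example.predicate_idx]
    new_examples = []
    for candidate in propose_replacements(example.tokens, example.argument_idx):
        # POS filter: candidate must keep the same part of speech.
        if not same_pos(original, candidate):
            continue
        # WordNet filter: candidate must share a semantic class with the original.
        if not shares_wordnet_class(original, candidate):
            continue
        # Predicate filter: the predicate must accept the candidate in this role.
        if not predicate_allows(predicate, candidate, example.role):
            continue
        tokens = list(example.tokens)
        tokens[example.argument_idx] = candidate
        new_examples.append(Example(tokens, example.predicate_idx,
                                    example.argument_idx, example.role))
    return new_examples

# Toy usage with trivial stand-in filters (illustrative only).
ex = Example(["He", "arrived", "in", "London"],
             predicate_idx=1, argument_idx=3, role="Location")
augmented = augment(ex,
                    propose_replacements=lambda toks, i: ["Paris", "quickly"],
                    same_pos=lambda a, b: b[0].isupper(),   # crude proper-noun check
                    shares_wordnet_class=lambda a, b: True,
                    predicate_allows=lambda p, c, r: True)
print([e.tokens for e in augmented])  # [['He', 'arrived', 'in', 'Paris']]
```
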
Journal: ACM Transactions on Speech and Language Processing
ISSN: 1550-4883
Issue: 11
Volume: 23
Pages: 1812 - 1823
Publication year: 2015
BOF-keylabel: yes
IOF-keylabel: yes
BOF-publication weight: 1
CSS-citation score: 1
Authors: International
Authors from: Higher Education
Accessibility: Open