Extractive Machine Reading Comprehension Model with Explicitly Fused Lexical and Syntactic Features
YAN Wei-Hong; LI Shao-Bo; SHAN Li-Li; SUN Cheng-Jie; LIU Bing-Quan; State Key Laboratory of Communication Content Cognition, People's Daily Online; Faculty of Computing, Harbin Institute of Technology
Language models obtained by pre-training on unstructured text alone can provide excellent contextual representations for each word, but they cannot explicitly provide lexical and syntactic features, which are often the basis for understanding overall semantics. In this study, we investigate the impact of lexical and syntactic features on the reading comprehension ability of pre-trained models by introducing these features explicitly. First, we use part-of-speech tagging and named entity recognition to provide lexical features, and dependency parsing to provide syntactic features. These features are integrated with the contextual representations output by the pre-trained model. Then, we design an adaptive feature fusion method based on the attention mechanism to fuse the different types of features. Experiments on the extractive machine reading comprehension dataset CMRC2018 show that, at very low computational cost, the explicitly introduced lexical and syntactic features help the model achieve improvements of 0.37% and 1.56% in F1 and EM scores, respectively.
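The abstract does not spell out the exact fusion formula, so the following is only a minimal NumPy sketch of one common form of attention-based adaptive feature fusion: each token's contextual representation scores the available feature embeddings (here, hypothetical POS, NER, and dependency features), a softmax turns the scores into per-token weights over feature types, and the weighted feature sum is combined with the contextual vector. The function name `adaptive_fuse`, the projection matrix `W`, and the residual combination are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_fuse(context, features, W):
    """Attention-based fusion sketch (illustrative, not the paper's exact method).

    context:  (seq_len, d)    contextual representations from the pre-trained model
    features: (k, seq_len, d) k feature embeddings per token (e.g. POS, NER, dependency)
    W:        (d, d)          bilinear score projection (hypothetical parameter)
    """
    # score each feature type against the token's contextual representation
    scores = np.einsum('sd,de,kse->sk', context, W, features)   # (seq_len, k)
    weights = softmax(scores, axis=-1)                          # attention over feature types
    fused = np.einsum('sk,ksd->sd', weights, features)          # per-token weighted feature sum
    return context + fused                                      # residual combination

rng = np.random.default_rng(0)
seq_len, d, k = 5, 8, 3          # tokens, hidden size, feature types
context = rng.standard_normal((seq_len, d))
features = rng.standard_normal((k, seq_len, d))
W = 0.1 * rng.standard_normal((d, d))
out = adaptive_fuse(context, features, W)
print(out.shape)
```

Because the attention weights are recomputed per token, each position can lean on whichever feature type (lexical or syntactic) is most informative there, which is the intuition behind calling the fusion "adaptive".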