Multi-modal Disaster Analysis Based on Embracing Fusion

MEI Xin, MIAO Zi-jing (School of Computer Science, South China Normal University)

Fusing multi-modal information from texts and images can improve the accuracy of disaster event analysis compared with single-modality approaches. However, most existing works simply concatenate text features and image features, which introduces feature redundancy during extraction and fusion, ignores the relationship between modalities, and leaves the correlation between image and text features unmodeled. To address this, this article analyzes current popular multi-modal fusion algorithms and proposes a multi-modal disaster event analysis algorithm based on Embrace Fusion. First, the text feature vectors and image feature vectors are compared against each other, so that the correlation between text and image features is taken into account. Then, based on multinomial sampling, feature redundancy is eliminated and the text and image features are fused. Experimental results show that the classification accuracies of Embrace Fusion on the two tasks of the CrisisMMD 2.0 dataset reach 88.2% and 85.1%, respectively, significantly outperforming other multi-modal fusion models and demonstrating the effectiveness of the approach. A second experiment further verifies that the Embrace Fusion model is applicable to different text and image deep learning backbones.
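To illustrate the sampling-based fusion the abstract describes, the following is a minimal PyTorch sketch of one plausible reading of it; this is our own illustrative reconstruction under stated assumptions, not the paper's published code. The feature dimensions (768 for text, 2048 for images, as from typical BERT and ResNet encoders) and the class name EmbraceFusion are hypothetical. Two docking layers project the pre-extracted text and image features into a shared space, and a multinomial draw selects, for each fused dimension, which modality supplies that component, so the features are interleaved rather than concatenated and no dimension is duplicated across modalities.

import torch
import torch.nn as nn

class EmbraceFusion(nn.Module):
    # Illustrative sketch: dims and names are assumptions, not from the paper.
    def __init__(self, text_dim=768, image_dim=2048, fused_dim=512, num_classes=2):
        super().__init__()
        # Docking layers map both modalities into a shared feature space,
        # making their feature vectors directly comparable.
        self.dock_text = nn.Linear(text_dim, fused_dim)
        self.dock_image = nn.Linear(image_dim, fused_dim)
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, text_feat, image_feat):
        d_text = torch.relu(self.dock_text(text_feat))     # (B, fused_dim)
        d_image = torch.relu(self.dock_image(image_feat))  # (B, fused_dim)
        stacked = torch.stack([d_text, d_image], dim=1)    # (B, 2, fused_dim)

        # Multinomial sampling: for each of the fused_dim positions, draw
        # which modality contributes that component (equal probability here).
        B, M, D = stacked.shape
        probs = torch.full((B, M), 1.0 / M, device=stacked.device)
        choice = torch.multinomial(probs, D, replacement=True)  # (B, D) in [0, M)
        mask = nn.functional.one_hot(choice, M).permute(0, 2, 1).float()  # (B, M, D)

        # Keep exactly one modality's value per dimension, then classify.
        fused = (stacked * mask).sum(dim=1)  # (B, fused_dim)
        return self.classifier(fused)

Usage on random placeholder features: EmbraceFusion()(torch.randn(4, 768), torch.randn(4, 2048)) returns a (4, 2) logits tensor. Because each fused dimension comes from exactly one modality, the sampling acts like a modality-level dropout, which is one way the redundancy mentioned in the abstract can be avoided.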