Please use this identifier to cite or link to this item:
Files in This Item:
File: Tahmasebzadeh2020.pdf | Size: 483.33 kB | Format: Adobe PDF
Title: A Feature Analysis for Multimodal News Retrieval
Authors: Tahmasebzadeh, Golsa; Hakimov, Sherzod; Müller-Budack, Eric; Ewerth, Ralph
Issue Date: 2020
Published in: Proceedings of the 1st International Workshop on Cross-lingual Event-centric Open Analytics co-located with the 17th Extended Semantic Web Conference (ESWC 2020)
Publisher: Aachen : RWTH
Abstract: Content-based information retrieval relies on the information contained in documents rather than on metadata such as keywords. Most information retrieval methods are based on either text or images alone. In this paper, we investigate the usefulness of multimodal features for cross-lingual news search in several domains: politics, health, environment, sport, and finance. To this end, we consider five feature types for image and text and compare the performance of the retrieval system using different combinations of them. Experimental results show that retrieval performance can be improved when both visual and textual information are considered. In addition, we observe that among the textual features entity overlap outperforms word embeddings, while geolocation embeddings achieve the best performance among the visual features.
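The abstract describes combining visual and textual features for retrieval. A minimal sketch of one common way to do this is late fusion, where per-modality similarities are computed separately and then combined; the function names, feature vectors, and equal weighting below are illustrative assumptions, not the paper's actual method or feature set.

```python
# Hypothetical late-fusion sketch for multimodal retrieval.
# The weighting scheme and vectors are illustrative assumptions,
# not taken from the paper.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def multimodal_score(q_text, d_text, q_img, d_img, w_text=0.5):
    """Weighted sum of textual and visual similarities (late fusion)."""
    return w_text * cosine(q_text, d_text) + (1 - w_text) * cosine(q_img, d_img)

# Toy example: identical query/document vectors in both modalities
# yield the maximum combined score of 1.0.
q_t = np.array([1.0, 0.0]); d_t = np.array([1.0, 0.0])
q_i = np.array([0.0, 1.0]); d_i = np.array([0.0, 1.0])
print(round(multimodal_score(q_t, d_t, q_i, d_i), 3))
```

In practice, the per-modality weight would be tuned on held-out data, and the feature vectors would come from the respective text and image encoders.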
Keywords: Multimodal News Retrieval; Multimodal Features; Computer Vision; Natural Language Processing
DDC: 004
License: CC BY 4.0 International
Link to License:
Appears in Collections: Informationswissenschaften

This item is licensed under a Creative Commons license.