Title

Multimodal indexing based on semantic cohesion for image retrieval

Authors

Hugo Jair Escalante Balderas

Manuel Montes y Gómez

Luis Enrique Sucar Succar

Access level

Open Access

Summary or description

This paper introduces two novel strategies for representing multimodal images, with application to multimedia image retrieval. We consider images annotated with both text and labels: the text describes the image content at a high semantic level (e.g., referring to places, dates, or events), while the labels provide a mid-level description of the image (i.e., in terms of the objects that can be seen in it). Accordingly, the main assumption of this work is that by combining information from text and labels we can develop highly effective retrieval methods. We study standard information fusion techniques for combining the two sources of information; however, although the performance of such techniques is highly competitive, they cannot effectively capture the content of images. We therefore propose two novel representations for multimodal images that attempt to exploit the semantic cohesion among terms from different modalities. These representations are based on distributional term representations widely used in computational linguistics. Under the considered representations, the content of an image is modeled by a distribution of co-occurrences over terms, or of occurrences over other images, so that the representation can be regarded as an expansion of the multimodal terms in the image. We report experimental results on the SAIAPR TC12 benchmark, using two sets of topics from ImageCLEF competitions and both manually and automatically generated labels. The results show that the proposed representations significantly outperform both standard multimodal techniques and unimodal methods. Results with manually assigned labels provide an upper bound on the attainable retrieval performance, while results with automatically generated labels are encouraging. The novel representations capture the content of multimodal images more effectively. We emphasize that, although we have applied our representations to multimedia image retrieval, the same formulation can be adopted for modeling other multimodal documents (e.g., videos).
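
As a rough illustration of the idea behind these representations, the following Python sketch is a hypothetical reconstruction, not the authors' implementation: it pools an image's text terms and labels into one vocabulary, estimates per-term co-occurrence distributions from a document collection, and expands an image's multimodal terms into a single aggregated distribution. All function names and the toy two-image collection are assumptions made for illustration.

import numpy as np

def build_cooccurrence(docs, vocab):
    # docs: one term list per image, mixing text words and object labels.
    idx = {t: i for i, t in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for terms in docs:
        present = {idx[t] for t in terms if t in idx}
        for i in present:
            for j in present:
                if i != j:
                    C[i, j] += 1.0
    # Row-normalize so each term maps to a distribution over co-occurring terms.
    sums = C.sum(axis=1, keepdims=True)
    return np.divide(C, sums, out=np.zeros_like(C), where=sums > 0)

def represent_image(terms, vocab, C):
    # Expand the image's multimodal terms into one aggregated distribution,
    # which serves as the image's indexing representation.
    idx = {t: i for i, t in enumerate(vocab)}
    rows = [C[idx[t]] for t in terms if t in idx]
    if not rows:
        return np.zeros(len(vocab))
    v = np.mean(rows, axis=0)
    return v / v.sum() if v.sum() > 0 else v

# Toy usage with a made-up two-image collection (terms and labels pooled).
docs = [["beach", "sunset", "sky", "sea"], ["mountain", "sky", "snow"]]
vocab = sorted({t for d in docs for t in d})
C = build_cooccurrence(docs, vocab)
image_repr = represent_image(["sky", "sea"], vocab, C)

Images represented this way can then be compared with a standard distributional similarity measure (e.g., cosine similarity) for retrieval.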

Publisher

Springer Science + Business Media

Publish date

2012

Publication type

Article

Publication version

Accepted Version

Format

application/pdf

Language

English

Audience

Students

Researchers

General public

Citation suggestion

Escalante-Balderas, H. J., Montes-y-Gómez, M., & Sucar-Succar, L. E. (2012). Multimodal indexing based on semantic cohesion for image retrieval. Information Retrieval, 15(1), 1–32.

Source repository

Repositorio Institucional del INAOE

Downloads

62
