
Paper Publications

Masking-Based Cross-Modal Remote Sensing Image–Text Retrieval via Dynamic Contrastive Learning

Release time:2024-07-21

Impact Factor:8.2

Affiliation of Author(s):China University of Mining and Technology

Journal:IEEE Transactions on Geoscience and Remote Sensing

Key Words:Cross-modal remote-sensing image–text retrieval (CMRSITR), masked image modeling (MIM), masked language modeling (MLM), momentum contrast

Abstract: Cross-modal remote sensing image–text retrieval (CMRSITR) aims to extract comprehensive information from diverse modalities. The primary challenge in this field is learning effective mappings from the visual and textual modalities into a shared latent space. Existing approaches generally rely on pretrained unimodal models to extract features from each modality independently. However, these techniques often fall short of the cross-modal alignment required for effective matching: they concentrate on feature extraction and alignment at the instance level, leaving finer-grained, token-level correspondence underexplored. To address these limitations, we introduce the masked interaction inferring and aligning (MIIA) framework, built on dynamic contrastive learning (DCL). The framework discerns intricate relationships between local visual–textual tokens, thereby strengthening the congruence of global image–text pairings without relying on additional prior supervision. First, we devise a masked interaction inferring (MII) module, which fosters token-level interplay through a novel masked visual-language (VL) modeling approach. Next, we implement a cross-modal DCL mechanism that captures and aligns semantic correlations between images and texts more effectively. Finally, to ensure comprehensive matching of visual and textual embeddings, we introduce a technique called bidirectional distribution matching (BDM), which minimizes the Kullback–Leibler (KL) divergence between the image–text similarity distributions computed over the negative queues used in momentum contrastive learning. Comprehensive experiments on well-established public datasets consistently validate the state-of-the-art performance of MIIA on the CMRSITR task.
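The bidirectional distribution matching (BDM) idea described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`similarity_distribution`, `bdm_loss`), the temperature value, and the use of plain Python lists in place of tensor operations are all assumptions made for clarity. It assumes the image-to-text and text-to-image distributions are formed over a positive pair plus equal-length momentum queues of negatives, and symmetrizes the KL divergence between them.

```python
import math

def dot(u, v):
    """Inner product of two embedding vectors."""
    return sum(a * b for a, b in zip(u, v))

def softmax(scores):
    """Numerically stable softmax over a list of similarity scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def similarity_distribution(anchor, positive, neg_queue, temperature=0.07):
    """Distribution over [positive, negatives-from-momentum-queue] for one anchor."""
    scores = [dot(anchor, positive) / temperature]
    scores += [dot(anchor, neg) / temperature for neg in neg_queue]
    return softmax(scores)

def kl(p, q, eps=1e-12):
    """KL divergence KL(p || q) between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def bdm_loss(img_emb, txt_emb, img_queue, txt_queue):
    """Symmetrized KL between the image-to-text and text-to-image
    similarity distributions (queues must have equal length so the
    distributions share a support)."""
    p_i2t = similarity_distribution(img_emb, txt_emb, txt_queue)
    p_t2i = similarity_distribution(txt_emb, img_emb, img_queue)
    return 0.5 * (kl(p_i2t, p_t2i) + kl(p_t2i, p_i2t))
```

In a real training loop the embeddings and queues would be tensors maintained by a momentum encoder (as in MoCo-style contrastive learning), and this loss term would be added to the contrastive objective; the sketch only conveys the shape of the computation.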

Indexed by:Journal paper

Document Code:5626215

Discipline:Engineering

First-Level Discipline:Computer Science and Technology

Document Type:J

Volume:62

Issue:2024

Translation or Not:no

Date of Publication:2024-06-21

Included Journals:SCI

calvin

Date of Birth:1977-09-15
Gender:Male
Education Level:With Certificate of Graduation for Doctorate Study
Alma Mater:Peking University
Degree:Doctor
Status:On duty
School/Department:School of Computer Science and Technology
Business Address:Computer Building A315-1, B610
Contact Information:15996967676
E-Mail: