Learning transferable cross-modality representations for few-shot hyperspectral and LiDAR collaborative classification
Dai, Mofan; Xing, Shuai; Xu, Qing; Wang, Hanyun; Li, Pengcheng; Sun, Yifan; Pan, Jiechen; Li, Yuqiong (李玉琼)
Corresponding Authors: Xing, Shuai ([email protected]); Li, Yuqiong ([email protected])
Source Publication: INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION
Publication Date: 2024-02-01
Volume: 126; Pages: 11
ISSN: 1569-8432
Abstract: Hyperspectral image (HSI) classification, which exploits both spatial and spectral information, is a crucial topic in earth observation and land-cover analysis. However, ground objects with similar spectral attributes remain a challenge for finer classification. Recently, deep learning-based multimodality fusion has offered promising solutions by fusing the geometric information in LiDAR data with spectral attributes. However, labor-intensive and time-consuming multimodal data annotation limits the performance of supervised deep learning. Addressing the semantic disparity between LiDAR data and HSIs, and learning transferable representations for cross-scene classification, also remain challenging. In this paper, we propose a multimodal fusion relational network with meta-learning (MFRN-ML) to address these challenges. Specifically, MFRN-ML incorporates multimodal learning and few-shot learning (FSL) into a three-stage task-based learning framework to learn transferable cross-modality representations for few-shot HSI and LiDAR collaborative classification. First, a multimodal fusion relational network, composed of a cross-modality feature fusion module and a relation learning module, is proposed to address the challenge of limited annotations in multimodal learning in a data-adaptive way. Then, the three-stage task-based learning framework trains the network to learn transferable representations from few labeled samples for cross-scene classification. We perform experiments on four multimodal datasets collected by different sensors. Compared with existing supervised, semi-supervised, and meta-learning methods, MFRN-ML attains state-of-the-art performance on few-shot tasks. In particular, our method shows promising generalization to unseen categories across different domains.
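The episodic few-shot setup the abstract describes can be illustrated with a toy sketch. This is not the paper's implementation: the learned cross-modality fusion module is replaced by simple feature concatenation, and the learned relation module by a nearest-prototype score (negative Euclidean distance); the names `fuse`, `relation_scores`, and the synthetic HSI/LiDAR features are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(hsi_feat, lidar_feat):
    """Naive cross-modality fusion: concatenate per-pixel HSI spectra with
    LiDAR-derived elevation features (stand-in for a learned fusion module)."""
    return np.concatenate([hsi_feat, lidar_feat], axis=-1)

def relation_scores(support, support_labels, query, n_way):
    """Score each query against each class prototype with negative Euclidean
    distance (stand-in for a learned relation module)."""
    scores = np.empty((len(query), n_way))
    for c in range(n_way):
        proto = support[support_labels == c].mean(axis=0)
        scores[:, c] = -np.linalg.norm(query - proto, axis=1)
    return scores

# Toy 3-way, 5-shot episode: three classes with well-separated feature means.
n_way, k_shot, n_query = 3, 5, 6
means = np.array([0.0, 3.0, 6.0])

def sample(c, n):
    hsi = rng.normal(means[c], 0.5, size=(n, 8))    # 8 synthetic spectral bands
    lidar = rng.normal(means[c], 0.5, size=(n, 2))  # 2 synthetic elevation features
    return fuse(hsi, lidar)

support = np.vstack([sample(c, k_shot) for c in range(n_way)])
support_labels = np.repeat(np.arange(n_way), k_shot)
query = np.vstack([sample(c, n_query) for c in range(n_way)])
query_labels = np.repeat(np.arange(n_way), n_query)

pred = relation_scores(support, support_labels, query, n_way).argmax(axis=1)
accuracy = (pred == query_labels).mean()
print(f"episode accuracy: {accuracy:.2f}")
```

During meta-training, many such episodes would be sampled and the fusion and relation modules optimized across them, so that at test time a new scene's classes can be recognized from only a few labeled support pixels.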
Keywords: Multimodal remote sensing data; Meta-learning; Few-shot learning; Cross-modality feature learning
DOI: 10.1016/j.jag.2023.103640
Indexed By: SCI
Language: English
WOS ID: WOS:001152464700001
WOS Keywords: LAND-COVER CLASSIFICATION; IMAGE CLASSIFICATION; NETWORK
WOS Research Area: Remote Sensing
WOS Subject: Remote Sensing
Funding Project: National Natural Science Foundation of China [42271457]; National Natural Science Foundation of China [41876105]; Henan Province [202300410535]; Joint Fund of Collaborative Innovation Center of Geo-Information Technology for Smart Central Plains, Henan Province; Key Laboratory of Spatiotemporal Perception and Intelligent Processing, Ministry of Natural Resources [212108]
Funding Organization: National Natural Science Foundation of China; Henan Province; Joint Fund of Collaborative Innovation Center of Geo-Information Technology for Smart Central Plains, Henan Province; Key Laboratory of Spatiotemporal Perception and Intelligent Processing, Ministry of Natural Resources
Classification: Class I
Ranking: 1
Contributors: Xing, Shuai; Li, Yuqiong
Citation statistics
Cited Times (WOS): 1
Document Type: Journal article
Identifier: http://dspace.imech.ac.cn/handle/311007/94229
Collection: Key Laboratory of Mechanics in Fluid-Solid Coupling Systems
Recommended Citation
GB/T 7714
Dai, Mofan, Xing, Shuai, Xu, Qing, et al. Learning transferable cross-modality representations for few-shot hyperspectral and LiDAR collaborative classification[J]. INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2024, 126: 11.
APA Dai, Mofan, Xing, Shuai, Xu, Qing, Wang, Hanyun, Li, Pengcheng, ... & Li, Yuqiong. (2024). Learning transferable cross-modality representations for few-shot hyperspectral and LiDAR collaborative classification. INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 126, 11.
MLA Dai, Mofan, et al. "Learning transferable cross-modality representations for few-shot hyperspectral and LiDAR collaborative classification". INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION 126 (2024): 11.
Files in This Item:
There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.