ATTENTION-GUIDED FUSION OF LIDAR AND HYPERSPECTRAL IMAGING FOR IMPROVED SEMANTIC SEGMENTATION OF URBAN ENVIRONMENTS

Gitanjali Pilankar, Dharmpal Doye

Abstract

This paper introduces a novel Multimodal Attention Fusion Network (MAFN) for integrating LiDAR and Hyperspectral Imaging (HSI) data for object classification. The proposed MAFN leverages attention mechanisms to efficiently combine spatial information from LiDAR with spectral information from HSI, yielding a powerful multimodal fusion model for accurate classification tasks. Extensive experiments are conducted on benchmark datasets widely used in the remote sensing community: the University of Houston dataset, the Trento dataset, and the University of Southern Mississippi Gulf Park (MUUFL) dataset. These datasets cover diverse scenarios and object classes, providing a comprehensive platform for assessing the robustness and generalization capability of the proposed model. MAFN's performance is rigorously compared with state-of-the-art transformers, classical Convolutional Neural Networks (CNNs), and conventional classifiers. Across these evaluations, MAFN consistently outperforms existing models on all benchmark datasets, demonstrating its efficacy in handling multimodal data and its capacity for robust object classification.
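The abstract does not detail MAFN's internal architecture, so the PyTorch sketch below is only a minimal illustration of the general idea of attention-guided HSI-LiDAR fusion, not the authors' method: the class name AttentionFusionBlock, the embedding width (64), the head count (4), and the choice of cross-attention with HSI queries over LiDAR keys and values are all assumptions made for the sake of the example.

import torch
import torch.nn as nn

class AttentionFusionBlock(nn.Module):
    # Hypothetical fusion block: HSI spectral tokens attend to LiDAR spatial
    # tokens via cross-attention; all layer sizes are illustrative assumptions.
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, hsi_tokens, lidar_tokens):
        # hsi_tokens:   (batch, n_pixels, dim) spectral embeddings (queries)
        # lidar_tokens: (batch, n_pixels, dim) elevation embeddings (keys/values)
        fused, _ = self.cross_attn(hsi_tokens, lidar_tokens, lidar_tokens)
        return self.norm(hsi_tokens + fused)  # residual connection, then normalize

# Usage with made-up shapes: a batch of 8 patches, 25 pixels each, 64-d features.
hsi = torch.randn(8, 25, 64)
lidar = torch.randn(8, 25, 64)
print(AttentionFusionBlock()(hsi, lidar).shape)  # torch.Size([8, 25, 64])

In a full model, the fused tokens would feed a per-pixel classifier head, and such a block could be stacked or mirrored (LiDAR attending to HSI) to capture the synergistic spatial-spectral features the paper describes.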


Beyond introducing a sophisticated multimodal fusion model, this research contributes valuable benchmarks for the research community. It provides insights into the interplay between LiDAR and HSI data, emphasizing the importance of attention mechanisms in capturing complementary spatial and spectral features for improved object classification. These findings advance research in remote sensing and object classification, offering a powerful tool for handling multimodal data and opening avenues for future work in this domain.
