Title: V3O2: hybrid deep learning model for hyperspectral image classification using vanilla-3D and octave-2D convolution
Meenakshi Sundaram V.
Citation: Journal of Real-Time Image Processing, Vol. , p. -
Abstract: Remote sensing image analysis is an emerging area of research used for applications such as climate analysis, crop monitoring, and change detection. Hyperspectral imaging (HSI) is one of the dominant remote sensing modalities, capturing information beyond the visible spectrum. The evolution of deep learning has made a significant impact on HSI analysis, mainly on classification. Spatial–spectral feature-based models improve the classification accuracy of hyperspectral images (HSIs); however, these models are computationally expensive, and redundancy exists in the spatial dimension of their features. This work proposes a hybrid convolutional neural network (CNN) for HSI classification. The proposed model uses principal component analysis (PCA) as a preprocessing technique for optimal band extraction from HSIs. The hybrid CNN then extracts spectral and spatial features using a three-dimensional CNN (3D CNN); these features are fed into a two-dimensional CNN (2D CNN) for further feature extraction and classification. The redundancy in the spatial features of the hybrid CNN is reduced by replacing standard vanilla convolution with octave convolution (OctConv). OctConv factorizes the spatial features into lower and higher spatial frequencies and applies different convolutions to each frequency component. The hybrid model is compared against various state-of-the-art CNN-based techniques, and the results show improved accuracy at a lower computational cost. © 2020, Springer-Verlag GmbH Germany, part of Springer Nature.
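As an illustration of the PCA preprocessing step described in the abstract, the sketch below reduces a synthetic hyperspectral cube to its top-k principal-component bands. This is a minimal assumption-laden example (the cube dimensions, band count, and k are invented for demonstration; the paper's actual pipeline and parameters are not reproduced here):

```python
import numpy as np

def pca_band_reduction(cube, k):
    """Project an (H, W, bands) HSI cube onto its top-k principal components.

    This mirrors the general idea of PCA-based band extraction; it is a
    sketch, not the authors' implementation.
    """
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float64)
    flat -= flat.mean(axis=0)                      # center each band
    # Eigen-decompose the band-by-band covariance matrix
    cov = flat.T @ flat / (flat.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # k leading components
    return (flat @ top).reshape(h, w, k)

rng = np.random.default_rng(0)
cube = rng.normal(size=(16, 16, 100))              # toy 100-band image
reduced = pca_band_reduction(cube, 10)
print(reduced.shape)                               # (16, 16, 10)
```

The reduced cube would then be the input to the 3D-CNN stage, whose spectral depth now matches the retained component count rather than the full band count.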
Appears in Collections: 2. Conference Papers