Fast self-training based on spatial-spectral information for hyperspectral image classification
2024, Vol. 28, No. 1, Pages 219-230
Print publication date: 2024-01-07
DOI: 10.11834/jrs.20232286
Jin Y, Dong Y N and Du B. 2024. Fast self-training based on spatial-spectral information for hyperspectral image classification. National Remote Sensing Bulletin, 28(1): 219-230
Self-training methods are widely used in hyperspectral image classification to address the difficulty of acquiring labeled samples. Traditional self-training methods ignore the spatial information that hyperspectral images provide, which degrades the final classification accuracy, and they must classify the unlabeled data once in every iteration, which incurs a large time cost. To address these problems, this paper proposes a fast self-training method based on spatial-spectral information for hyperspectral image classification. Unlike traditional self-training methods, the proposed method expands the labeled set during the iterative process by screening unlabeled data with spatial-spectral information instead of classifying unlabeled samples with a classifier. For each initial labeled sample, spatial nearest neighbors are first selected with a spatial neighborhood patch; the spatial nearest neighbors are then screened a second time with an adaptive threshold to obtain spatial-spectral nearest neighbors, which are assigned labels; finally, the classifier is trained on the expanded labeled samples to complete the classification task. The results show that, with 2 and 10 training samples per class, the overall classification accuracy reaches 93.17% and 95.43% on the Washington DC Mall Subimage hyperspectral dataset and 59.75% and 86.13% on the Indian Pines dataset, respectively. The proposed fast self-training method combining spatial-spectral information clearly outperforms the comparison methods.
Hyperspectral image classification has been a popular issue in the field of hyperspectral image interpretation. The prominent problem at present is that manually acquiring labeled samples for hyperspectral images is considerably time-consuming and expensive in practical applications. This problem leads to a small number of training samples and makes it difficult to obtain good classification results. Self-training methods are widely used in hyperspectral image classification to alleviate this difficulty. Traditional self-training methods mostly use spectral information to classify unlabeled samples and then use the expanded labeled data set to iteratively train the classifier. In this model, the spatial information provided by the hyperspectral images is ignored, resulting in poor classification accuracy. At the same time, the classification of unlabeled data must be completed once during each iteration, resulting in a significant time cost. Therefore, a fast self-training method based on spatial-spectral information is proposed in this paper for hyperspectral image classification to address the above problems.
FST-SS (Fast Self-Training based on Spatial-Spectral information) supplements the spatial information in hyperspectral images by exploiting the consistency of the spatial distribution of land covers. Instead of using the classifier to classify unlabeled samples, this approach uses spatial-spectral information to filter unlabeled data and extend the labeled samples during the iterative process. Spatial nearest neighbors are first selected using a spatial neighborhood patch around each initial labeled sample. The spatial nearest neighbors are then filtered using an adaptive threshold to obtain the spatial-spectral nearest neighbors, which are assigned labels. Finally, the classifier is trained on the expanded labeled samples to complete the classification task.
To demonstrate the effectiveness of FST-SS, this paper compares it with the supervised classification algorithm 1NN and the semi-supervised classification algorithms Star-SVM, Tri-Training, ST-DP, and LeMA on two real hyperspectral datasets. Experimental results show that the overall classification accuracy reaches 93.17% and 95.43% when 2 and 10 training samples per class, respectively, are selected in the Washington DC Mall subimage dataset. The overall classification accuracy on the Indian Pines dataset reaches 59.75% and 86.13%, a significant improvement over the comparison algorithms.
The FST-SS algorithm combines the idea of self-training with the spatial-spectral information provided by hyperspectral images to label unlabeled samples. Compared with conventional self-training methods, FST-SS uses spatial-spectral information to filter the unlabeled samples directly instead of running a classifier over them in every iteration, which markedly improves the computational efficiency of the algorithm.
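To make the label-expansion step concrete, the following is a minimal sketch of the idea described above, not the authors' reference implementation: a square neighborhood patch supplies the spatial neighbors, and a simple mean-distance rule stands in for the paper's adaptive threshold. The window size, the threshold rule, all function names, and the toy data are illustrative assumptions.

```python
# Minimal sketch of the FST-SS labeling idea (illustrative only; the paper's
# exact patch size and adaptive-threshold formula may differ).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def expand_labels_spatial_spectral(cube, labeled_coords, labeled_y, window=5):
    """Expand the labeled set with spatial-spectral nearest neighbors.

    cube           : (H, W, B) hyperspectral image
    labeled_coords : list of (row, col) positions of the initial labeled pixels
    labeled_y      : list of class labels, one per labeled pixel
    window         : odd size of the spatial neighborhood patch (assumed)
    """
    H, W, _ = cube.shape
    half = window // 2
    new_coords, new_y = list(labeled_coords), list(labeled_y)

    for (r, c), y in zip(labeled_coords, labeled_y):
        center = cube[r, c]
        # 1) Spatial neighbors: all pixels inside the patch around the labeled pixel.
        rows = range(max(0, r - half), min(H, r + half + 1))
        cols = range(max(0, c - half), min(W, c + half + 1))
        neigh = [(i, j) for i in rows for j in cols if (i, j) != (r, c)]

        # 2) Spectral screening with an adaptive threshold; here the mean spectral
        #    distance within the patch is used as a simple stand-in.
        dists = np.array([np.linalg.norm(cube[i, j] - center) for i, j in neigh])
        threshold = dists.mean()

        for (i, j), d in zip(neigh, dists):
            if d < threshold and (i, j) not in new_coords:
                new_coords.append((i, j))   # pseudo-label the spatial-spectral neighbor
                new_y.append(y)

    X = np.array([cube[i, j] for i, j in new_coords])
    return X, np.array(new_y)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.random((64, 64, 30))                 # toy 30-band image
    coords, labels = [(10, 10), (40, 40)], [0, 1]   # one labeled pixel per class
    X_train, y_train = expand_labels_spatial_spectral(cube, coords, labels)
    clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)  # 1NN classifier
    print(X_train.shape, clf.predict(cube.reshape(-1, 30)[:5]))
```

Because the expansion relies only on patch lookups and distance comparisons, no classifier has to be run over the full unlabeled set in each iteration, which is the source of the claimed speed advantage.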
hyperspectral remote sensing; hyperspectral image classification; semi-supervised classification; small-sample problem; spatial-spectral information; self-training
Aydav P S S and Minz S. 2018. Classification of hyperspectral images using self-training and a pseudo validation set. Remote Sensing Letters, 9(11): 1109-1117 [DOI: 10.1080/2150704x.2018.1511932]
Bioucas-Dias J M, Plaza A, Camps-Valls G, Scheunders P, Nasrabadi N and Chanussot J. 2013. Hyperspectral remote sensing data analysis and future challenges. IEEE Geoscience and Remote Sensing Magazine, 1(2): 6-36 [DOI: 10.1109/mgrs.2013.2244672]
Blaschke T. 2010. Object based image analysis for remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing, 65(1): 2-16 [DOI: 10.1016/j.isprsjprs.2009.06.004]
Bruzzone L, Chi M and Marconcini M. 2006. A novel transductive SVM for semisupervised classification of remote-sensing images. IEEE Transactions on Geoscience and Remote Sensing, 44(11): 3363-3373 [DOI: 10.1109/tgrs.2006.877950]
Camps-Valls G, Bandos Marsheva T V and Zhou D Y. 2007. Semi-supervised graph-based hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 45(10): 3044-3054 [DOI: 10.1109/tgrs.2007.895416]
Camps-Valls G, Gomez-Chova L, Munoz-Mari J, Vila-Frances J and Calpe-Maravilla J. 2006. Composite kernels for hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters, 3(1): 93-97 [DOI: 10.1109/lgrs.2005.857031]
Chen P H, Jiao L C, Liu F, Zhao J Q, Zhao Z Q and Liu S. 2017. Semi-supervised double sparse graphs based discriminant analysis for dimensionality reduction. Pattern Recognition, 61: 361-378 [DOI: 10.1016/j.patcog.2016.08.010]
Cheung E and Li Y Y. 2017. Self-training with adaptive regularization for S3VM//2017 International Joint Conference on Neural Networks (IJCNN). Anchorage: IEEE: 3633-3640 [DOI: 10.1109/IJCNN.2017.7966313]
Dong Y N, Jin Y and Cheng S B. 2022. Clustered multiple manifold metric learning for hyperspectral image dimensionality reduction and classification. IEEE Transactions on Geoscience and Remote Sensing, 60: 5516813 [DOI: 10.1109/tgrs.2021.3123651]
Gan H T, Li Z H, Wu W, Luo Z Z and Huang R. 2018. Safety-aware graph-based semi-supervised learning. Expert Systems with Applications, 107: 243-254 [DOI: 10.1016/j.eswa.2018.04.031]
Ge H M, Pan H Z, Wang L G, Li C, Liu Y Z, Zhu W L and Teng Y P. 2021. A semi-supervised learning method for hyperspectral imagery based on self-training and local-based affinity propagation. International Journal of Remote Sensing, 42(17): 6391-6416 [DOI: 10.1080/01431161.2021.1934595]
Ghamisi P, Plaza J, Chen Y S, Li J and Plaza A J. 2017. Advanced spectral classifiers for hyperspectral images: a review. IEEE Geoscience and Remote Sensing Magazine, 5(1): 8-32 [DOI: 10.1109/mgrs.2016.2616418]
Gu S K and Jin Y C. 2017. Multi-train: a semi-supervised heterogeneous ensemble classifier. Neurocomputing, 249: 202-211 [DOI: 10.1016/j.neucom.2017.03.063]
Gu X W. 2020. A self-training hierarchical prototype-based approach for semi-supervised classification. Information Sciences, 535: 204-224 [DOI: 10.1016/j.ins.2020.05.018]
Hong D F, Yokoya N, Ge N, Chanussot J and Zhu X X. 2019. Learnable manifold alignment (LeMA): a semi-supervised cross-modality learning framework for land cover and land use classification. ISPRS Journal of Photogrammetry and Remote Sensing, 147: 193-205 [DOI: 10.1016/j.isprsjprs.2018.10.006]
Jin Y, Dong Y N, Zhang Y X and Hu X Y. 2022. SSMD: dimensionality reduction and classification of hyperspectral images based on spatial-spectral manifold distance metric learning. IEEE Transactions on Geoscience and Remote Sensing, 60: 5538916 [DOI: 10.1109/tgrs.2022.3205178]
Li F, Clausi D A, Xu L L and Wong A. 2018. ST-IRGS: a region-based self-training algorithm applied to hyperspectral image classification and segmentation. IEEE Transactions on Geoscience and Remote Sensing, 56(1): 3-16 [DOI: 10.1109/tgrs.2017.2713123]
Li Z Y, Togo R, Ogawa T and Haseyama M. 2020. Chronic gastritis classification using gastric X-ray images with a semi-supervised learning method based on tri-training. Medical and Biological Engineering and Computing, 58(6): 1239-1250 [DOI: 10.1007/s11517-020-02159-z]
Lu X C, Zhang J P, Li T and Zhang Y. 2016. A novel synergetic classification approach for hyperspectral and panchromatic images based on self-learning. IEEE Transactions on Geoscience and Remote Sensing, 54(8): 4917-4928 [DOI: 10.1109/tgrs.2016.2553047]
Plaza A, Benediktsson J A, Boardman J W, Brazile J, Bruzzone L, Camps-Valls G, Chanussot J, Fauvel M, Gamba P, Gualtieri A, Marconcini M, Tilton J C and Trianni G. 2009. Recent advances in techniques for hyperspectral image processing. Remote Sensing of Environment, 113: S110-S122 [DOI: 10.1016/j.rse.2007.07.028]
Tan K, Zhu J S, Du Q, Wu L X and Du P J. 2016. A novel tri-training technique for semi-supervised classification of hyperspectral images based on diversity measurement. Remote Sensing, 8(9): 749 [DOI: 10.3390/rs8090749]
van Engelen J E and Hoos H H. 2020. A survey on semi-supervised learning. Machine Learning, 109(2): 373-440 [DOI: 10.1007/s10994-019-05855-6]
Wang F and Zhang C S. 2008. Label propagation through linear neighborhoods. IEEE Transactions on Knowledge and Data Engineering, 20(1): 55-67 [DOI: 10.1109/tkde.2007.190672]
Wang L G, Hao S Y, Wang Q M and Wang Y. 2014. Semi-supervised classification for hyperspectral imagery based on spatial-spectral label propagation. ISPRS Journal of Photogrammetry and Remote Sensing, 97: 123-137 [DOI: 10.1016/j.isprsjprs.2014.08.016]
Wu D, Shang M S, Luo X, Xu J, Yan H Y, Deng W H and Wang G Y. 2018. Self-training semi-supervised classification based on density peaks of data. Neurocomputing, 275: 180-191 [DOI: 10.1016/j.neucom.2017.05.072]
Zerguine A, Shafi A and Bettayeb M. 2001. Multilayer perceptron-based DFE with lattice structure. IEEE Transactions on Neural Networks, 12(3): 532-545 [DOI: 10.1109/72.925556]
Zhang L, Zhou W D and Jiao L C. 2004. Wavelet support vector machine. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 34(1): 34-39 [DOI: 10.1109/tsmcb.2003.811113]
Zhou Z H and Li M. 2005. Tri-training: exploiting unlabeled data using three classifiers. IEEE Transactions on Knowledge and Data Engineering, 17(11): 1529-1541 [DOI: 10.1109/tkde.2005.186]
Zoidi O, Tefas A, Nikolaidis N and Pitas I. 2018. Positive and negative label propagations. IEEE Transactions on Circuits and Systems for Video Technology, 28(2): 342-355 [DOI: 10.1109/tcsvt.2016.2598671]