面向对象与卷积神经网络模型的GF-6 WFV影像作物分类
Object-oriented crop classification for GF-6 WFV remote sensing images based on Convolutional Neural Network
2021, Vol. 25, No. 2, Pages 549-558
Print publication date: 2021-02-07
DOI: 10.11834/jrs.20219347
李前景，刘珺，米晓飞，杨健，余涛. 2021. 面向对象与卷积神经网络模型的GF-6 WFV影像作物分类. 遥感学报, 25(2): 549-558
Li Q J, Liu J, Mi X F, Yang J and Yu T. 2021. Object-oriented crop classification for GF-6 WFV remote sensing images based on Convolutional Neural Network. National Remote Sensing Bulletin, 25(2): 549-558
GF-6 WFV影像是中国首颗带有红边波段的中高分辨率8波段多光谱卫星的遥感影像，其影像及红边波段对作物分类影响的研究亟待展开。本文结合面向对象与深度学习，提出一种适用于GF-6 WFV红边波段的卷积神经网络(RE-CNN)遥感影像作物分类方法。首先采用多尺度分割并借助ESP工具选择最佳分割参数完成影像分割；再通过面向对象的CART决策树在消除椒盐现象的同时提取植被区域，并将其转化为卷积神经网络的输入数据；最后利用基于Python和NumPy库构建的卷积神经网络模型(RE-CNN)进行影像作物分类及精度验证。有无红边波段两组分类实验的结果表明：在红边波段组，卷积神经网络(RE-CNN)作物分类取得了较好的效果，总体精度高达94.38%，相比无红边波段组分类精度提高了2.83%，验证了GF-6 WFV红边波段对作物分类的有效性，可为GF-6 WFV红边波段影像用于作物分类研究提供技术参考和借鉴。
GF-6 WFV imagery is the first medium- and high-resolution, 8-band multispectral satellite remote sensing imagery in China. Four spectral bands, including two red-edge bands, are added to the conventional red, green, blue, and near-infrared bands. As a vegetation-sensitive spectral region, the red edge is one of the key features used for crop classification and identification in remote sensing images. Research on the impact of GF-6 WFV imagery and its red-edge bands on crop classification is therefore urgently needed.
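To make the role of the red-edge bands concrete, the minimal sketch below computes a common red-edge index, the Normalized Difference Red-Edge index (NDRE), from the near-infrared band and one red-edge band. The index choice and the reflectance values are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def ndre(nir, red_edge):
    """Normalized Difference Red-Edge index: (NIR - RE) / (NIR + RE)."""
    nir = np.asarray(nir, dtype=float)
    red_edge = np.asarray(red_edge, dtype=float)
    return (nir - red_edge) / (nir + red_edge + 1e-10)

# Toy reflectance values for the NIR band and the first red-edge band
nir = np.array([[0.45, 0.50], [0.40, 0.48]])
re1 = np.array([[0.30, 0.28], [0.33, 0.25]])
print(ndre(nir, re1))
```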
This research uses GF-6 WFV images as the data source. The main work is as follows: (1) a convolutional neural network (RE-CNN) crop classification model suited to the GF-6 WFV red-edge bands is proposed; (2) given the lack of relevant research, crop classification with GF-6 WFV imagery and its red-edge bands is investigated and the effectiveness of the red-edge bands is evaluated; (3) a strategy combining object-oriented analysis and deep learning is adopted for crop classification. The core idea is as follows: multi-scale segmentation is used to avoid the influence of the salt-and-pepper phenomenon on image classification, and the segmentation is completed by selecting the optimal scale parameter with the ESP tool and its ROC-LV curve.
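Below is a minimal sketch of the ESP-style scale selection mentioned above, assuming the mean local variance (LV) of objects has already been measured at a series of scale parameters; the LV values are hypothetical, and only the ROC-LV rule itself comes from the ESP tool.

```python
import numpy as np

def roc_lv(local_variance):
    """Rate of change of local variance (ROC-LV), as used by the ESP tool:
    ROC_L = (LV_L - LV_{L-1}) / LV_{L-1} * 100.
    Peaks of the curve mark candidate segmentation scale parameters."""
    lv = np.asarray(local_variance, dtype=float)
    roc = np.full(lv.shape, np.nan)
    roc[1:] = (lv[1:] - lv[:-1]) / lv[:-1] * 100.0
    return roc

def candidate_scales(scales, roc):
    """Simple peak picking: scales whose ROC-LV exceeds both neighbours."""
    return [scales[i] for i in range(1, len(roc) - 1)
            if roc[i] > roc[i - 1] and roc[i] > roc[i + 1]]

# Hypothetical mean object local variance measured at increasing scale parameters
scales = list(range(10, 110, 10))
lv = [4.1, 4.8, 5.9, 6.1, 6.2, 6.9, 7.0, 7.05, 7.4, 7.45]
roc = roc_lv(lv)
print(candidate_scales(scales, roc))   # [30, 60, 90] for these toy values
```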
Object-oriented classification with a CART decision tree then extracts the vegetation area while suppressing salt-and-pepper noise, and the vegetation objects are converted into input data for the convolutional neural network.
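A minimal sketch of this object-based CART step follows, assuming per-object spectral features have already been extracted from the segmentation. Scikit-learn's DecisionTreeClassifier (a CART implementation) stands in for whatever CART implementation the authors used, and the features and labels below are synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # scikit-learn's CART implementation

# Synthetic per-object features: one row per image object from the segmentation.
# Assumed layout: mean reflectance of the 8 GF-6 WFV bands plus an NDVI-like index.
rng = np.random.default_rng(42)
n_objects = 200
X = rng.random((n_objects, 9))
y = (X[:, 8] > 0.5).astype(int)        # toy labels: 1 = vegetation, 0 = non-vegetation

cart = DecisionTreeClassifier(criterion="gini", max_depth=4, random_state=0)
cart.fit(X, y)

# Objects predicted as vegetation are kept; their pixels are later cut into
# patches that become the input samples of the convolutional neural network.
veg_mask = cart.predict(X) == 1
print(f"{veg_mask.sum()} of {n_objects} objects kept as vegetation")
```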
The Inception network structure is introduced to extract multi-scale image features, and on this basis a convolutional neural network model (RE-CNN) for GF-6 WFV imagery is constructed for crop classification.
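According to the Chinese abstract, RE-CNN was built with Python and NumPy and borrows Inception's idea of parallel multi-scale convolutions. The sketch below runs a forward pass through one such Inception-style block in plain NumPy; the patch size, kernel sizes (1×1, 3×3, 5×5), channel counts, and random weights are assumptions for illustration, not the authors' actual architecture.

```python
import numpy as np

def conv2d(x, w, b, pad):
    """Naive stride-1, 'same'-padded 2D convolution.
    x: (H, W, C_in), w: (k, k, C_in, C_out), b: (C_out,)."""
    k = w.shape[0]
    H, W, _ = x.shape
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def relu(x):
    return np.maximum(x, 0.0)

def inception_block(x, p):
    """Parallel 1x1, 3x3 and 5x5 branches concatenated along the channel axis,
    so features at several receptive-field sizes are extracted at once."""
    b1 = relu(conv2d(x, p["w1"], p["b1"], pad=0))
    b3 = relu(conv2d(x, p["w3"], p["b3"], pad=1))
    b5 = relu(conv2d(x, p["w5"], p["b5"], pad=2))
    return np.concatenate([b1, b3, b5], axis=-1)

# Toy forward pass on one 16x16 patch cut from the 8-band GF-6 WFV image
rng = np.random.default_rng(0)
patch = rng.random((16, 16, 8))                     # 8 spectral bands
p = {
    "w1": rng.standard_normal((1, 1, 8, 8)) * 0.1, "b1": np.zeros(8),
    "w3": rng.standard_normal((3, 3, 8, 8)) * 0.1, "b3": np.zeros(8),
    "w5": rng.standard_normal((5, 5, 8, 8)) * 0.1, "b5": np.zeros(8),
}
print(inception_block(patch, p).shape)              # (16, 16, 24)
```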
A control experiment with and without the red-edge bands is set up, and the RE-CNN model is used for crop classification and accuracy verification; in this way, the effect of the newly added red-edge bands on crop classification is examined, and their effectiveness and sensitivity for crop classification with GF-6 WFV imagery are evaluated.
The experimental results show that: (1) object-oriented CART decision tree classification effectively eliminates salt-and-pepper noise during vegetation area extraction, and combining it with deep learning achieves better results in remote sensing crop classification; (2) the RE-CNN model proposed in this paper can be used for GF-6 WFV remote sensing crop classification, reaching an overall accuracy of 94.38% and a Kappa coefficient of 0.92 in the red-edge band group; (3) the red-edge bands newly added to GF-6 WFV imagery effectively improve crop classification accuracy: compared with the group without red-edge bands, the classification accuracy increases by 2.83%, which verifies the effectiveness and sensitivity of the new bands. Moreover, this work provides a reference for further research on GF-6 WFV imagery and its red-edge bands for crop classification.
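For reference, the overall accuracy and Kappa coefficient reported above are computed from a confusion matrix as in the sketch below; the matrix values are hypothetical and unrelated to the paper's results.

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy from a confusion matrix (rows: reference, cols: predicted)."""
    return np.trace(cm) / cm.sum()

def kappa_coefficient(cm):
    """Cohen's Kappa: agreement beyond what chance alone would produce."""
    n = cm.sum()
    p_o = np.trace(cm) / n
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 3-class confusion matrix, for illustration only
cm = np.array([[95,  3,  2],
               [ 4, 90,  6],
               [ 1,  5, 94]], dtype=float)
print(f"OA    = {overall_accuracy(cm):.4f}")
print(f"Kappa = {kappa_coefficient(cm):.4f}")
```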
遥感；面向对象分类；高分六号；红边波段；卷积神经网络
remote sensing; object-oriented classification; GF-6; Red-Edge band; Convolutional Neural Network