Neural network generation of adaptive filter and new applications in remote sensing image processing
2023, Vol. 27, No. 7: 1523-1533
Print publication date: 2023-07-07
DOI: 10.11834/jrs.20232174
Tang P, Liu X, Jin X and Zhang Z. 2023. Neural network generation of adaptive filter and new applications in remote sensing image processing. National Remote Sensing Bulletin, 27(7): 1523-1533
Abstract: Image adaptive filtering is a nonlinear image transformation with a wide range of applications. Traditional image adaptive filters, such as the bilateral filter and shape-adaptive filters, are all designed by experts. As an effective tool for feature extraction and nonlinear representation, CNNs can be used to learn and construct image adaptive filters. This paper first introduces the generation network of image adaptive filters, and then presents two new image-processing applications of image adaptive filtering: transformation between images of different dates for image interpolation, and transformation between different bands for image fusion. These two classes of applications offer a glimpse of the capability of image adaptive filtering for constructing nonlinear image transformations.
Image adaptive filtering is a nonlinear image transformation with a wide range of applications. Traditional image adaptive filters, such as the bilateral filter and shape-adaptive filters, are designed by experts. They determine the shape, size, and weights of the filter from the local structure and content of the image, and are commonly used to suppress noise while preserving the structural characteristics of the image. Convolutional Neural Networks (CNNs) are an effective tool for feature extraction and nonlinear representation, and can be used to learn and construct image adaptive filters. This paper explores the application of nonlinear image adaptive filters generated by CNNs to image interpolation and image fusion.
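As a concrete instance of an expert-designed adaptive filter mentioned above, the bilateral filter weights each neighbour by both spatial closeness and intensity similarity, so its effective kernel adapts to the local image content. The following is a minimal NumPy sketch for a grayscale image in [0, 1]; the function name and the parameters `sigma_s` (spatial scale) and `sigma_r` (range scale) are illustrative, not from the paper:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Minimal bilateral filter for a 2D grayscale image in [0, 1].

    Each output pixel is a weighted mean of its neighbourhood, where
    the weights combine a fixed spatial Gaussian (sigma_s) with a
    content-dependent range Gaussian (sigma_r), so edges are preserved
    while flat regions are smoothed.
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    # Spatial (domain) kernel: precomputed once, shared by every pixel.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    out = np.empty_like(img, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: adapts the weights to the local content.
            rng = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

Because the range kernel is recomputed at every pixel, the effective filter differs from pixel to pixel; this per-pixel adaptivity is exactly what the CNN-based generation network learns to predict instead of hand-design.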
This paper introduces the generation network of an image adaptive filter, including its model structure and objective function. The network typically employs an encoder-decoder architecture composed of three parts: feature extraction, feature recovery, and filter (convolution kernel) estimation. The paper then presents two application scenarios of image adaptive filters: image interpolation and image fusion. The adaptive filter realizes the transformation between images of different dates in image interpolation and between different bands in image fusion. In both scenarios, the adaptive filters are learned by the filter generation network for the specific application and then applied. In image interpolation, the adaptive filter acts as a nonlinear transformation between two temporal images: the interpolated image is taken as the mean of the adaptively filtered earlier image and the adaptively filtered later image. In image fusion, the adaptive filter acts as a nonlinear fitting method that regresses the multispectral bands to the panchromatic band; spatial detail is then extracted from the difference between the panchromatic band and the simulated panchromatic band, and finally injected into all the multispectral bands.
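The two uses described above both reduce to applying a per-pixel (dynamic) filter and then combining the results. The sketch below illustrates this under stated assumptions: the per-pixel kernels are passed in directly (in the paper they would be predicted by the encoder-decoder network), and the helper names and the per-band injection gains `gains` are illustrative, not the paper's notation:

```python
import numpy as np

def apply_adaptive_filter(img, kernels):
    """Apply a per-pixel (dynamic) filter bank to a 2D image.

    `kernels` has shape (H, W, k, k): one k x k kernel per output
    pixel, standing in for the output of a filter-generation network.
    """
    h, w = img.shape
    k = kernels.shape[-1]
    r = k // 2
    pad = np.pad(img, r, mode="edge")
    out = np.empty_like(img, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + k, j:j + k]
            out[i, j] = (patch * kernels[i, j]).sum()
    return out

def interpolate(img_t0, img_t1, kernels_t0, kernels_t1):
    """Interpolation: the in-between image is the mean of the two
    adaptively filtered temporal images."""
    return 0.5 * (apply_adaptive_filter(img_t0, kernels_t0)
                  + apply_adaptive_filter(img_t1, kernels_t1))

def detail_inject(ms_bands, pan, simulated_pan, gains):
    """Fusion: extract spatial detail as the difference between the
    panchromatic band and the simulated (regressed) panchromatic
    band, then inject it into every multispectral band."""
    detail = pan - simulated_pan
    return [band + g * detail for band, g in zip(ms_bands, gains)]
```

In the paper, `simulated_pan` would itself be the adaptively filtered multispectral input, so both the regression and the interpolation share the same dynamic-filtering primitive.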
We conducted experiments in both application scenarios: nonlinear transformation between images of different dates for image interpolation, and an image adaptive filter as a nonlinear fitting method regressing the multispectral bands to the panchromatic band for image fusion. For image interpolation, the results show that the interpolated images are consistent with the reference image in both spatial and spectral characteristics, and the RMSE between the interpolated image and the reference image is small. For image fusion, the low-resolution panchromatic band obtained by adaptive-filter fitting of the multispectral bands is more accurate than that of traditional component-substitution methods, and the fusion results show neither obvious spectral distortion nor obvious spatial distortion.
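The RMSE quoted for the interpolation experiments is the standard root-mean-square error between the interpolated image and the reference; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def rmse(predicted, reference):
    """Root-mean-square error between an interpolated (or fused)
    image and its reference: sqrt(mean((predicted - reference)^2))."""
    predicted = np.asarray(predicted, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    return np.sqrt(np.mean((predicted - reference) ** 2))
```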
The application of nonlinear image adaptive filters generated by CNNs to image interpolation and image fusion offers a glimpse of the potential of image adaptive filters for constructing nonlinear image transformations. The filter generation network can generate adaptive filters tailored to a particular application scenario, yielding more accurate and visually pleasing results.
Keywords: remote sensing; image adaptive filtering; filter generation network; dynamic filter network; image interpolation; image fusion