Review of deep learning in optical and SAR image fusion
2022, Vol. 26, No. 9: 1744-1756
Print publication date: 2022-09-07
DOI: 10.11834/jrs.20211136
Cheng F F, Fu Z T, Huang L, Niu B S, Chen P D, Wang L G and Ji X R. 2022. Review of deep learning in optical and SAR image fusion. National Remote Sensing Bulletin, 26(9): 1744-1756
Remote sensing image fusion, one of the most challenging tasks in image processing, has long been a research hotspot. Synthetic Aperture Radar (SAR) offers all-day, all-weather imaging and can penetrate clouds and fog, but speckle noise makes SAR images difficult to interpret. Optical images, by contrast, capture the spectral and spatial information of ground objects and are easy to interpret, yet they are susceptible to cloud and fog interference, which causes information loss. Fusing optical and SAR data therefore combines the complementary information of the two sensor types and facilitates subsequent image analysis and interpretation. This paper first systematically reviews optical and SAR image fusion, covering both traditional methods and recent deep-learning-based work, with emphasis on the progress of frameworks such as Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN) in optical and SAR image fusion; it then summarizes and briefly describes the datasets developed for deep-learning-based optical and SAR image fusion; finally, it discusses future trends from four perspectives: datasets, time-series image fusion, fusion evaluation systems, and lightweight algorithms.
Remote sensing image fusion, one of the most challenging tasks in image processing, has long been an academic research hotspot. SAR provides all-day, all-weather imaging and can penetrate clouds and fog; however, speckle noise makes SAR images difficult to interpret. In contrast, optical images reflect the spectral information of ground objects and are easy to interpret, but they are easily corrupted by clouds and fog, resulting in information loss. Fusing optical and SAR image data exploits the complementary information of the two sensor types and facilitates subsequent image analysis and interpretation. This paper reviews the research status and future development trends of optical and SAR remote sensing image fusion. The introduction motivates the paper by explaining the importance of image fusion. The second part outlines the classification of optical and SAR image fusion methods, from traditional approaches to deep learning. The third part presents optical and SAR image datasets and describes each in detail. The fourth part summarizes the difficulties and open problems of optical and SAR image fusion and highlights future trends in the field.
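As a generic illustration of the pixel-level fusion idea discussed above (not a specific method from the reviewed literature), the sketch below performs a simple IHS-style substitution: the intensity component of a co-registered optical image is partially replaced by the SAR backscatter, so that SAR structure is injected while optical color is preserved. The weight `alpha` and the assumption that both inputs are normalized to [0, 1] are illustrative choices.

```python
import numpy as np

def ihs_fusion(optical_rgb, sar, alpha=0.5):
    """Illustrative pixel-level IHS-style fusion of an optical RGB image
    with a co-registered SAR intensity image.

    optical_rgb : float array (H, W, 3) in [0, 1]
    sar         : float array (H, W) in [0, 1], despeckled and co-registered
    alpha       : weight of the SAR band in the new intensity component
    """
    # Intensity component of the simple IHS model: mean of R, G, B.
    intensity = optical_rgb.mean(axis=2)
    # Blend the optical intensity with the SAR backscatter.
    fused_intensity = (1 - alpha) * intensity + alpha * sar
    # Add the intensity change back to each band (additive IHS inverse).
    delta = fused_intensity - intensity
    fused = optical_rgb + delta[..., np.newaxis]
    return np.clip(fused, 0.0, 1.0)
```

With `alpha = 0` the optical image is returned unchanged; with `alpha = 1` the luminance is driven entirely by the SAR band, which is why intermediate weights are used in practice.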
The future development of fusion has four main aspects: datasets, time-series image fusion, fusion evaluation systems, and lightweight algorithms. First, well-targeted and sufficient datasets are essential for training good fusion models, and dataset production is key to improving the applicability of future models. Second, as the volume of satellite time-series data grows, time-series optical images are increasingly used as supplementary information to SAR data, improving the utilization of image fusion. Third, evaluating image fusion performance is an essential topic: no unified metric is available for objectively and comprehensively evaluating fusion algorithms, and an evaluation metric system for optical and SAR image fusion must be developed. Finally, making deep learning algorithms lightweight is an important future research direction. This paper provides a reference for researchers in optical and SAR image fusion.
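To make the evaluation point concrete, the sketch below computes two metrics commonly used when judging fused images: Shannon entropy (a no-reference measure of information content) and the Pearson correlation coefficient between a fused image and one of its sources (a measure of structure preservation). These are generic examples of fusion metrics, not an evaluation protocol proposed by the reviewed paper.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of an image with values in [0, 1]; higher values
    are usually read as richer information content in a fused result."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is well defined
    return float(-(p * np.log2(p)).sum())

def correlation_coefficient(fused, source):
    """Pearson correlation between a fused image and a source image,
    indicating how much of the source's structure is preserved."""
    a = fused.ravel() - fused.mean()
    b = source.ravel() - source.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A single metric is rarely decisive: a fused image can score high entropy simply by amplifying speckle noise, which is one reason the abstract calls for a unified evaluation system combining several complementary indicators.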
deep learning; remote sensing image; optical image; SAR image; image fusion; datasets