Deep learning sample enhancement method for 3D point cloud seismic damaged buildings
2023, Vol. 27, No. 8, pp. 1876-1887
Print publication date: 2023-08-07
DOI: 10.11834/jrs.20211009
Cui Y N, Dou A X and Yang S N. 2023. Deep learning sample enhancement method for 3D point cloud seismic damaged buildings. National Remote Sensing Bulletin, 27(8): 1876-1887
To address the automatic identification of building damage types from post-earthquake LiDAR point clouds in complex scenes, to meet the timeliness and accuracy requirements of emergency rescue, to move beyond traditional manual extraction of seismic damage features, and to fully exploit the building damage information contained in disaster-area point cloud data toward automatic and intelligent building recognition, this paper applies 3D point cloud deep learning to building seismic damage identification and constructs a point cloud dataset covering three damage types: collapsed, partially collapsed, and uncollapsed. Based on the PointNet++ network, the influence of per-class sample size and class balance on recognition accuracy is explored, and a sample enhancement method for damaged buildings is proposed that enriches the point cloud morphology of each class. Using airborne LiDAR data acquired after the magnitude-7.0 Haiti earthquake of 2010, experiments comparing classification accuracy before and after sample enhancement, together with analyses of sample size and balance, were conducted with the PointNet++ network. After enhancement, the classification accuracies for collapsed and partially collapsed buildings improved by nearly 27% and 17%, respectively, and the overall average classification accuracy and Kappa coefficient of the model both improved by nearly 15%. The results show that a 3D deep learning model for building seismic damage achieves good classification performance only when each class has a sufficient and balanced number of samples.
To address the problem of automatically identifying building damage types in complex post-earthquake LiDAR point cloud scenes, satisfy the timeliness and accuracy requirements of emergency rescue operations, move beyond traditional manual extraction of seismic damage features, fully exploit the seismic damage information of buildings in the disaster area contained in point cloud data, and further realize automatic and intelligent recognition of buildings, this paper proposes a building seismic damage recognition model based on the PointNet++ network. This study also establishes collapsed, partially collapsed, and uncollapsed point cloud training datasets that can provide an important scientific basis for earthquake emergency rescue and disaster assessment.
This paper applies the 3D point cloud deep learning method to identify seismic-damaged buildings. Given the uneven sample sizes, and based on the characteristics of the PointNet++ network and the shapes of the original point cloud samples, we propose a sample enhancement method that uses inverse-distance interpolation, symmetry, and top projection to increase the number of collapsed and partially collapsed samples.
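The symmetry step of such an enhancement can be sketched as a reflection of each building's point cloud across a vertical plane, which yields a geometrically plausible new sample of the same damage class. This is an illustrative reconstruction under our own assumptions (the function name `mirror_points` and the choice of mirroring through the centroid are not from the paper):

```python
import numpy as np

def mirror_points(points, axis=0):
    """Reflect a building point cloud across the vertical plane
    through its centroid.

    points : (N, 3) array of x, y, z coordinates.
    axis   : 0 mirrors across the y-z plane, 1 across the x-z plane.
    Height (z) is left unchanged, so vertical damage structure such
    as tilted roofs or debris piles is preserved.
    """
    mirrored = points.copy()
    centroid = points[:, axis].mean()
    # Reflection about the plane x = centroid: x -> 2*centroid - x
    mirrored[:, axis] = 2.0 * centroid - mirrored[:, axis]
    return mirrored
```

Applied to each collapsed or partially collapsed sample, this doubles the count for those classes while keeping the label valid, since a mirrored damage pattern is still the same damage type.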
Sample enhancement not only increases the number of collapsed and partially collapsed samples, making them more comprehensive and diverse, but also resolves the class imbalance. As a result, the classification accuracies for collapse and partial collapse improve by about 30% and 20%, respectively, and the overall average classification accuracy and Kappa coefficient of the model improve by more than 10%. The gaps in classification accuracy between the collapsed and uncollapsed classes and between the partially collapsed and uncollapsed classes shrink from 40% and 30% to about 15%.
(1) The characteristics of the point cloud samples and of the network model should be fully considered when building a seismic damage training dataset. In this paper, we assume that PointNet++ learns the same geometric shape consistently across changes in scale and spatial rotation. We therefore design a sample enhancement method for the collapsed and partially collapsed categories that not only increases the number of samples but also enriches their damage morphology, effectively improving the classification accuracy of partial collapse.
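Under the assumption stated above, that the network should assign the same label to the same damage geometry at different sizes and horizontal orientations, each training sample can be replicated with several rotation and scale variants. A sketch (the function name and parameterization are our own illustration, not the paper's implementation):

```python
import numpy as np

def rotate_scale(points, angle_rad, scale):
    """Rotate a point cloud about the vertical (z) axis by angle_rad
    and scale it uniformly, producing a new sample with the same
    damage label under the scale/rotation-invariance assumption.
    """
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    # Rotation matrix about the z axis; z coordinates are unaffected
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return scale * points @ rot.T
```

Pairing a handful of angles and scales per original sample multiplies the minority classes without fabricating new damage geometry.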
(2) The sample size and its balance strongly influence the recognition performance of the seismic damage recognition model built on the PointNet++ network. Good classification and recognition results are achieved only when the sample size is sufficient and the number of samples in each category is relatively uniform. However, sample size is not the decisive factor: even with uniform sample sizes, the accuracy for the uncollapsed category remains higher than that of the other two. The classification result also depends on other factors, such as sample selection, network design, and internal feature learning methods, which warrant further exploration.
remote sensing; classification and recognition; PointNet++; sample enhancement; LiDAR point cloud; seismic-damaged buildings