Debris Flow Tracking and Detection Method Research Via Video Image Analysis

  • Jie ZHOU,
  • Nengpan JU,
  • Yan ZHANG,
  • Huabing TIAN,
  • Chaoyang HE
  • 1. State Key Laboratory of Geological Hazard Prevention and Geological Environmental Protection, Chengdu University of Technology, Chengdu 610059, China
  • 2. POWERCHINA Chengdu Engineering Corporation Limited, Chengdu 610072, China
ZHOU Jie, research areas include geological hazard monitoring and early warning. E-mail: 454435023@qq.com
JU Nengpan, research areas include geological hazard monitoring and early warning. E-mail: jnp@cdut.edu.cn

Received date: 2025-02-10

Revised date: 2025-03-17

Online published: 2025-05-15

Supported by

the Opening Fund of State Key Laboratory of Geohazard Prevention and Geoenvironment Protection (Chengdu University of Technology)(SKLGP2024K030)

Abstract

Debris flow disasters, known for their frequent occurrence and high destructiveness, are difficult to monitor effectively because conventional monitoring methods offer limited real-time performance and high false-alarm rates, underscoring the urgent need for efficient and precise intelligent detection techniques to enhance early warning capabilities. To address these challenges, this study proposes an enhanced YOLOv8m-GCSlide model based on the YOLOv8 framework. The Global Context Network (GCNet) is integrated into the backbone network to improve spatial dependency modeling of dynamic fluid boundaries in complex terrains, while a Sliding Loss function (SlideLoss) is designed to dynamically adjust classification thresholds and mitigate sample imbalance. Knowledge distillation is then applied to compress the model into a lightweight variant (YOLOv8n-GCSlide) with reduced computational complexity. A multi-source video dataset was constructed from publicly available resources, with frames extracted at 0.25-second intervals to balance feature retention and training efficiency. Data augmentation techniques, including random cropping, rotation, scaling, Gaussian blur, and color jittering, were used to enhance generalization, supplemented with negative samples (e.g., dry riverbeds and landslides) to reduce false positives. Experimental results show that the optimized model achieves 94.6% (+2.0%) detection accuracy, 88.0% recall, 95.9% mean Average Precision (mAP), and an inference speed of 244.1 FPS, outperforming mainstream lightweight models such as Swin Transformer and MobileNet variants. After compression, the model parameters were reduced by 88.1%, with the distilled version retaining 94.6% (+1.2%) accuracy and 88.0% (+0.7%) recall while maintaining an inference speed of 244.1 FPS.
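The 0.25-second frame sampling described above reduces to a simple index computation. The sketch below is an illustration only, not the authors' code; the function name and pure-Python form are assumptions (in practice the frames themselves would be decoded with a video library such as OpenCV):

```python
def sample_indices(n_frames: int, fps: float, interval_s: float = 0.25) -> list[int]:
    """Return indices of frames sampled every `interval_s` seconds
    from a clip with `n_frames` frames recorded at `fps` frames per second."""
    # Number of frames between consecutive samples; at least 1 so
    # low-frame-rate clips still yield every frame.
    step = max(1, round(fps * interval_s))
    return list(range(0, n_frames, step))
```

For a typical 30 fps surveillance clip this keeps every 8th frame, which is how a 0.25-second spacing trades feature retention against training-set size.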
Field validation conducted in Sedongpu Gully, a high-risk debris flow region, confirmed the model's practical applicability: under complex environmental interference, the model achieved 82.3% recall, a 4.2% false positive rate, and a processing speed of 240.6 FPS. The integration of global attention mechanisms and task-specific loss functions effectively captures dynamic motion features and suppresses environmental noise, while model compression balances accuracy and computational efficiency, enabling edge deployment for real-time disaster warnings. This approach provides a robust technical foundation for intelligent geological hazard monitoring systems, emphasizing high precision, low latency, and adaptability to resource-constrained scenarios.
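The abstract does not give the SlideLoss formula. As a hedged illustration, one published formulation (the sliding sample-weighting function of the YOLO-FaceV2 face detector, where SlideLoss originates) weights each prediction by its IoU relative to an adaptive threshold μ, typically the mean IoU over all samples, so that hard examples near the decision boundary are emphasized:

```python
import math

def slide_weight(iou: float, mu: float) -> float:
    """Sliding weight for one sample, given its IoU with the ground truth
    and the adaptive threshold mu (e.g., the mean IoU of all samples)."""
    if iou <= mu - 0.1:
        return 1.0                  # clear negatives keep unit weight
    if iou < mu:
        return math.exp(1.0 - mu)   # boosted weight for hard, near-threshold samples
    return math.exp(1.0 - iou)      # weight decays for easy, high-IoU samples
```

Multiplying the per-sample classification loss by this weight is what lets the loss "dynamically adjust classification thresholds" as the mean IoU evolves during training.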

Cite this article

Jie ZHOU, Nengpan JU, Yan ZHANG, Huabing TIAN, Chaoyang HE. Debris Flow Tracking and Detection Method Research Via Video Image Analysis[J]. Advances in Earth Science, 2025, 40(4): 388-400. DOI: 10.11867/j.issn.1001-8166.2025.030
