Abstract:
Detecting the operating status of underground belt conveyors in coal mines is the key to their safe operation. However, most existing detection methods for belt-conveyor operating status can handle only a single task, making simultaneous detection of multiple tasks difficult. To address the difficulty current technologies have in achieving comprehensive detection, a multi-task detection method for belt-conveyor operating status based on DR-YOLOM is proposed: a single network simultaneously recognizes large coal blocks, detects belt edges, and detects the coal-flow status. Compared with running a separate model for each task, integrating three task-specific necks and heads onto a shared backbone saves substantial computing resources and inference time. First, images collected in low-illumination, dusty transport tunnels carry weak semantic information, which degrades the model's ability to extract target semantics. The Bottleneck structure in the C2f modules of the backbone's P6 and P8 layers is therefore replaced with a dilation-wise residual (DWR) module, reducing the parameter count while improving the extraction of multi-scale contextual semantic information. Second, because the model must both recognize and segment different types of targets, a reparameterized generalized feature pyramid network (RepGFPN) with skip-layer connections is adopted to optimize the feature-fusion stage, markedly improving detection accuracy across the different tasks while keeping the parameter count and inference speed under control. Finally, to handle the three different label shapes, the Inner-CIoU loss function is introduced to compensate for the weak generalization of the CIoU loss across detection tasks.
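To make the loss modification concrete, the sketch below illustrates the Inner-CIoU idea: the IoU term of the CIoU loss is computed on auxiliary boxes scaled about their centers by a ratio, while the center-distance and aspect-ratio penalties of CIoU are kept. The corner-format boxes, the `ratio` value, and the function names are illustrative assumptions, not the paper's implementation:

```python
import math

def _iou(b1, b2):
    # Plain IoU for boxes in (x1, y1, x2, y2) corner format.
    xi1, yi1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    xi2, yi2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter + 1e-9)

def _scale(box, ratio):
    # Auxiliary "inner" box: scale width/height about the center by `ratio`.
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    hw, hh = (box[2] - box[0]) * ratio / 2, (box[3] - box[1]) * ratio / 2
    return (cx - hw, cy - hh, cx + hw, cy + hh)

def ciou_loss(pred, gt, inner_ratio=None):
    # CIoU loss; if inner_ratio is given, the IoU term is replaced by the
    # Inner-IoU computed on the ratio-scaled auxiliary boxes.
    if inner_ratio is not None:
        u = _iou(_scale(pred, inner_ratio), _scale(gt, inner_ratio))
    else:
        u = _iou(pred, gt)
    # Squared center distance over squared enclosing-box diagonal.
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gcx, gcy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (pcx - gcx) ** 2 + (pcy - gcy) ** 2
    cw = max(pred[2], gt[2]) - min(pred[0], gt[0])
    ch = max(pred[3], gt[3]) - min(pred[1], gt[1])
    c2 = cw ** 2 + ch ** 2 + 1e-9
    # Aspect-ratio consistency term of CIoU.
    pw, ph = pred[2] - pred[0], pred[3] - pred[1]
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    v = 4 / math.pi ** 2 * (math.atan(gw / gh) - math.atan(pw / ph)) ** 2
    alpha = v / (1 - u + v + 1e-9)
    return 1 - u + rho2 / c2 + alpha * v
```

A ratio below 1 shrinks both boxes, which sharpens the gradient for high-IoU pairs; a ratio above 1 enlarges them, which helps low-IoU pairs. For perfectly overlapping boxes the loss is zero with or without the inner term.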
To verify the applicability and robustness of the DR-YOLOM algorithm, the U-Net and DeepLabV3+ network models were selected to compare and analyze the segmentation performance of the DR-YOLOM multi-task detection model, Faster R-CNN and YOLOv8 were used to compare object-detection performance, and the loss and accuracy curves before and after the model improvements were compared. The results show that, compared with mainstream single-task detection algorithms, the DR-YOLOM multi-task detection algorithm has better overall detection capability, maintaining high target-recognition accuracy, high segmentation accuracy, and a suitable inference speed with a small parameter count. Specifically, the mAP50 for large coal-block recognition is 90%, the mIoU values for belt-edge segmentation and coal-flow segmentation are 78.7% and 96.6%, respectively, and the model has 4.43 M parameters. The inference speed reaches 40 frames per second, and the three accuracy metrics are 1.3%, 0.7%, and 2.1% higher, respectively, than the base model's mAP50 and mIoU. Finally, to verify the practicality of the DR-YOLOM algorithm, an inspection robot was used to collect video data in the laboratory, and the DR-YOLOM multi-task detection algorithm was applied to the collected video. The experimental results show that the DR-YOLOM multi-task detection algorithm meets the requirements of multi-task detection of belt-conveyor operating status.