TECI-YOLO: An Efficient, Lightweight Model for Detecting Small Floating Objects on Water Surfaces
DOI: https://doi.org/10.63313/JCSFT.9046
Keywords: Water Surface Floating Object Detection, Small Object Detection, YOLOv11, Lightweight Network Architecture, Feature Enhancement
Abstract
Timely detection of water-surface floating objects is critical for marine ecological protection, yet illumination variation, wave interference, and dense small-target distributions pose persistent challenges of low accuracy, high false-alarm rates, and excessive computational cost. This study proposes TECI-YOLO, a lightweight detection framework built upon YOLOv11s with four targeted improvements. The Tiny module adds a P2 layer while removing P5, preserving high-resolution spatial detail for small-target representation. The CEM–CFE module combines Channel-Enhanced MBConv and Channel-Fused Enhancer to strengthen feature discriminability and semantic robustness. The E_Head integrates coordinate attention, grouped convolution, and task-decoupled branches to reduce redundancy and improve localization. Inner-MPDIoU replaces standard MPDIoU with adaptive scaling and auxiliary bounding boxes for refined small-target geometric modeling. On FloW-IMG, TECI-YOLO improves Precision, Recall, [email protected], and [email protected]:0.95 by 2.4%, 3.8%, 3.3%, and 0.6% over YOLOv11s; on IWHR_AI_Label_Floater_V1, gains reach 1.3%, 1.4%, and 0.7%, respectively. Parameters are reduced by ~26% with 3.8% fewer FLOPs, demonstrating a compelling accuracy–efficiency tradeoff for real-time water-surface monitoring.
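The Inner-MPDIoU term described above can be sketched as follows. This is a minimal illustration based on the published MPDIoU formulation (corner-distance penalties normalised by the image diagonal) combined with Inner-IoU-style auxiliary boxes scaled about each box centre by a ratio; the exact scaling scheme and hyperparameters used in TECI-YOLO are assumptions, not the authors' released code.

```python
def inner_mpd_iou(pred, target, img_w, img_h, ratio=0.8):
    """Sketch of an Inner-MPDIoU score for two boxes in (x1, y1, x2, y2) form.

    ratio: scale factor for the auxiliary "inner" boxes (assumed < 1 shrinks
    the boxes about their centres, emphasising small-target overlap).
    """
    def scaled_box(box):
        # Build the auxiliary box: same centre, width/height scaled by `ratio`.
        x1, y1, x2, y2 = box
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        w, h = (x2 - x1) * ratio, (y2 - y1) * ratio
        return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

    pb, tb = scaled_box(pred), scaled_box(target)

    # IoU of the scaled auxiliary boxes (the "inner" term).
    ix1, iy1 = max(pb[0], tb[0]), max(pb[1], tb[1])
    ix2, iy2 = min(pb[2], tb[2]), min(pb[3], tb[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pb[2] - pb[0]) * (pb[3] - pb[1])
    area_t = (tb[2] - tb[0]) * (tb[3] - tb[1])
    iou = inter / (area_p + area_t - inter + 1e-9)

    # MPDIoU penalty: squared distances between matching top-left and
    # bottom-right corners, normalised by the image diagonal (Ma & Xu, 2023).
    d1 = (pred[0] - target[0]) ** 2 + (pred[1] - target[1]) ** 2
    d2 = (pred[2] - target[2]) ** 2 + (pred[3] - target[3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm
```

The corresponding regression loss would be `1 - inner_mpd_iou(...)`: identical boxes score close to 1, while misaligned corners are penalised even when the IoU term saturates, which is the property that motivates MPDIoU for densely packed small targets.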
License
Copyright (c) 2026 by author(s) and Erytis Publishing Limited.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.