Comparison of Convolutional Neural Network Architectures for Underwater Image Classification


Krystian Kozakiewicz

Abstract

Convolutional neural networks (CNNs) play an essential role in classifying images collected in real-world environments. This article presents a performance comparison of selected CNNs on image classification tasks involving marine flora and fauna, using recordings from an unmanned underwater vehicle (UUV). Five commonly used architectures were evaluated to identify those best suited to processing images of a poor-visibility marine environment. The study employed a uniform training protocol: each architecture was trained on the same dataset with identical optimisation parameters, so that their learning capabilities on these images could be compared directly. Under this protocol, the analysed networks achieved similar results in the early stages of training, while the differences emerged in final training accuracy. The best results were obtained by the most advanced architectures, such as ResNet50, whose larger parameter counts improve the classification of complex and distorted images. The results provide insight into how different architectures perform in underwater image classification and serve as a reference for further research on deep learning applications in marine environment monitoring.
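The uniform training protocol described above can be sketched as a small comparison harness. This is a minimal illustration, not the author's code: the hyperparameter values and all architecture names other than ResNet50 (which the abstract names) are assumptions, and `train_and_evaluate` is a placeholder for a real training run (e.g. building each model via `torchvision.models` or `keras.applications`).

```python
# Sketch of a uniform CNN-comparison protocol: every architecture is
# trained on the same dataset with identical optimisation parameters,
# so differences in final accuracy can be attributed to the
# architectures themselves rather than to per-model tuning.

# Assumed hyperparameter values; the point is that one fixed set is
# shared by all models, never tuned per architecture.
UNIFORM_HPARAMS = {
    "optimizer": "adam",
    "learning_rate": 1e-3,
    "batch_size": 32,
    "epochs": 20,
}

# ResNet50 is named in the abstract; the remaining names are
# hypothetical examples of commonly used architectures.
ARCHITECTURES = ["MobileNetV2", "ResNet50", "EfficientNetB0",
                 "InceptionV3", "DenseNet121"]


def train_and_evaluate(arch_name, hparams):
    """Placeholder training routine.

    In practice this would construct the named CNN, train it on the
    UUV image dataset with the given hyperparameters, and return its
    accuracy curve; here it only returns the run configuration.
    """
    return {"architecture": arch_name,
            "hparams": hparams,
            "final_accuracy": None}


def compare_architectures():
    # The same UNIFORM_HPARAMS object is handed to every model.
    return [train_and_evaluate(name, UNIFORM_HPARAMS)
            for name in ARCHITECTURES]


if __name__ == "__main__":
    for result in compare_architectures():
        print(result["architecture"], result["hparams"]["optimizer"])
```

Holding the optimisation settings fixed is what lets early-training curves be compared across models: any remaining gap in final accuracy reflects architectural capacity rather than tuning effort.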



How to Cite

[1]
K. Kozakiewicz, “Comparison of Convolutional Neural Network Architectures for Underwater Image Classification”, IJITEE, vol. 14, no. 12, pp. 32–35, Nov. 2025, doi: 10.35940/ijitee.A1175.14121125.

