Comparative Study of Automated Machine Learning Services on Cloud Platforms


Duckki Lee

Abstract

Automated machine learning (AutoML) has emerged as a practical approach to facilitate the adoption of machine learning by automating model development tasks, including preprocessing, model selection, and hyperparameter optimisation, thereby reducing reliance on specialised expertise. Recently, major cloud providers have integrated AutoML into their platforms to offer end-to-end machine-learning pipelines as managed services. However, the practical implications of cloud-based AutoML systems, particularly their system and operational aspects, remain insufficiently explored. This paper presents an empirical, system-oriented analysis of AutoML services provided by Microsoft Azure, Amazon Web Services, and Google Cloud Platform. Using representative regression and binary classification tasks, the predictive performance, evaluation metrics, and feature-importance results produced by each platform are compared. The study also examines how platform-level design choices influence usability, reproducibility, and lifecycle management. The results demonstrate that cloud AutoML platforms deliver high-performing models that operate without manual intervention, whereas differences among providers primarily reflect architectural and operational abstractions rather than algorithmic limitations. These findings suggest that cloud AutoML should be understood as an integrated system that combines automated modelling and MLOps capabilities and offers a viable pathway toward production-ready machine learning under real-world constraints.



How to Cite

[1] Duckki Lee, "Comparative Study of Automated Machine Learning Services on Cloud Platforms," IJEAT, vol. 15, no. 3, pp. 23–28, Feb. 2026, DOI: 10.35940/ijeat.D4748.15030226.

References

E. Dritsas and M. Trigka, “Exploring the Intersection of Machine Learning and Big Data: A Survey,” MAKE, vol. 7, no. 1, p. 13, Feb. 2025, DOI: https://doi.org/10.3390/make7010013

K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770-778, 2016, DOI: https://doi.org/10.1109/CVPR.2016.90

R. Elshawi, M. Maher, and S. Sakr, "Automated machine learning: State-of-the-art and open challenges," arXiv preprint arXiv:1906.02287, 2019. [Online]. Available: https://arxiv.org/abs/1906.02287

M. Zöller and M. F. Huber, "Benchmark and Survey of Automated Machine Learning Frameworks," Journal of Artificial Intelligence Research, vol. 70, pp. 409-472, 2021, DOI: https://doi.org/10.1613/jair.1.11854

X. He, K. Zhao, and X. Chu, "AutoML: A survey of the state-of-the-art," Knowledge-Based Systems, vol. 212, Art. no. 106622, 2021, DOI: https://doi.org/10.1016/j.knosys.2020.106622

L. Li et al., "Massively parallel hyperparameter tuning," arXiv preprint arXiv:1810.05934, 2018. [Online]. Available: https://arxiv.org/abs/1810.05934

Microsoft Azure, "Tune hyperparameters for your model," 2024. [Online]. Available: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters

G. A. A. Santos et al., "Technical Debt in Machine Learning: A Systematic Mapping Study," Journal of Systems and Software, vol. 184, art. no. 111130, 2022, DOI: https://doi.org/10.1016/j.jss.2021.111130

E. Breck et al., "The ML test score: A rubric for ML production readiness and technical debt reduction," in Proc. IEEE Int. Conf. Big Data, pp. 1123-1132, 2017, DOI: https://doi.org/10.1109/BigData.2017.8258038

O. E. Gundersen et al., "Do machine learning platforms provide out-of-the-box reproducibility?" Future Generation Computer Systems, vol. 126, pp. 34-47, 2022, DOI: https://doi.org/10.1016/j.future.2021.07.017

F. Hutter et al., Automated Machine Learning: Methods, Systems, Challenges. Cham, Switzerland: Springer, 2019, DOI: https://doi.org/10.1007/978-3-030-05318-5

F. Bayram and B. S. Ahmed, "Towards Trustworthy Machine Learning in Production: An Overview of the Robustness in MLOps Approach," ACM Computing Surveys, vol. 57, no. 5, 2025, DOI: https://doi.org/10.1145/3708497

M. Zarour, H. Alzabut, and K. T. Al-Sarayreh, "MLOps best practices, challenges and maturity models: A systematic literature review," Information and Software Technology, vol. 183, art. no. 107733, 2025, DOI: https://doi.org/10.1016/j.infsof.2024.107733

M. Steidl, "The pipeline for the continuous development of AI models: Practice and research challenges," Journal of Systems and Software, vol. 199, art. no. 111615, 2023, DOI: https://doi.org/10.1016/j.jss.2023.111615

S. Schlegel and K.-U. Sattler, "Management of machine learning lifecycle artefacts: A survey," ACM SIGMOD Record, vol. 51, no. 4, pp. 28-39, 2022, DOI: https://doi.org/10.1145/3579051.3579056

M. Xhepa and N. Sree, "Machine Learning Model Computation in AWS and Azure," Frankfurt University of Applied Sciences, Frankfurt, Germany, 2022. [Online]. Available: https://sites.google.com/view/ai-as-a-service/comparison

W. Choi, T. Choi, and S. Heo, "A comparative study of automated machine learning platforms for exercise anthropometry-based typology analysis," Bioengineering, vol. 10, no. 8, Art. no. 891, 2023, DOI: https://doi.org/10.3390/bioengineering10080891

US Health Insurance Dataset, 2022. [Online]. Available: https://www.kaggle.com/datasets/teertha/ushealthinsurancedataset

Bank Marketing Dataset, 2022. [Online]. Available: https://archive.ics.uci.edu/ml/datasets/Bank+Marketing

AWS Autopilot Model Insight, 2023. [Online]. Available: https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-model-insights.html
