A Case Study on the Diminishing Popularity of Encoder-Only Architectures in Machine Learning Models


Praveen Kumar Sridhar
Nitin Srinivasan
Adithyan Arun Kumar
Gowthamaraj Rajendran
Kishore Kumar Perumalsamy

Abstract

This paper examines the shift from encoder-only to decoder-only and encoder-decoder models in machine learning, highlighting the decline in popularity of encoder-only architectures. It explores the reasons behind this trend: advances in decoder models that offer stronger generative capabilities, flexibility across domains, and improvements in unsupervised learning techniques. The study also discusses how prompting techniques simplify model architectures and broaden model versatility. By analyzing the evolution, applications, and shifting preferences within the research community and industry, the paper aims to provide insight into the changing landscape of machine learning model architectures.
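To make the architectural contrast concrete, the following minimal Python sketch shows how the two model families are typically used. It assumes the Hugging Face transformers library is installed; the model names bert-base-uncased and gpt2 are illustrative stand-ins for encoder-only and decoder-only models, not choices made by the paper.

# Hedged sketch: contrasting encoder-only and decoder-only usage.
# Assumes Hugging Face transformers; model names are illustrative.
from transformers import pipeline

# Encoder-only (BERT-style): bidirectional encoding, suited to
# discriminative tasks such as masked-token prediction and classification.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("Encoder-only models excel at [MASK] tasks.")[0]["token_str"])

# Decoder-only (GPT-style): autoregressive generation, suited to
# open-ended generation and prompting without task-specific heads.
generator = pipeline("text-generation", model="gpt2")
print(generator("Decoder-only models gained popularity because",
                max_new_tokens=20)[0]["generated_text"])

Note that the decoder-only pipeline adapts to new tasks through prompting alone, one of the drivers of the shift the paper analyzes.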


How to Cite

Praveen Kumar Sridhar, Nitin Srinivasan, Adithyan Arun Kumar, Gowthamaraj Rajendran, and Kishore Kumar Perumalsamy, "A Case Study on the Diminishing Popularity of Encoder-Only Architectures in Machine Learning Models", IJITEE, vol. 13, no. 4, pp. 22–27, Apr. 2024, doi: 10.35940/ijitee.D9827.13040324.
