The Journal of Information Technology and Applications (JITA) is a scientific journal with an international reach. Its primary goal is to share new ideas, knowledge, and experiences that contribute to the development of a knowledge-based information society.

Our vision is to become a leading journal that publishes groundbreaking research advancing scientific progress. We invite you to collaborate by submitting original research on emerging issues in your field that aligns with our editorial policies.

The journal is published twice a year, in June and December. The deadline for the June issue is April 15th; for the December issue, it is October 15th. After a blind review and evaluation process, authors are notified of the publishing decision.
Dear Author, please carefully read all texts on the JITA website, especially the "Instructions for Authors". To submit your manuscript, please download the manuscript template and copyright form. Please also attach a short biography of the author(s), max. 200 characters, as a separate MS Word document. Clicking the "Upload paper" button will open a form for sending your submission.
The early evolution of large language models (LLMs), including the shift from statistical approaches to the Transformer architecture, illustrates their historical impact on natural language processing; meanwhile, the latest research in neural networks has enabled the faster and more powerful rise of language models grounded in solid theoretical foundations. These advances, driven by progress in computing systems (e.g., ultra-powerful processing and memory capabilities), enable the development of numerous new models built on emerging technologies such as artificial intelligence (AI). Thus, we provide an evolutionary overview of LLMs covering the shift from statistical to deep learning approaches, highlighting their key stages of development, with a particular focus on concepts such as self-attention, the Transformer architecture, BERT, GPT, DeepSeek, and Claude. Finally, our conclusions offer a reference point for future research on the emergence of new AI-supported models that are irreversibly transforming the way an increasing number of human activities are performed.
jita@apeiron-edu.eu
+387 51 247 925
+387 51 247 975
+387 51 247 912
Pan European University APEIRON Banja Luka
Journal JITA
Pere Krece 13, P.O. Box 51
78102 Banja Luka, Republic of Srpska
Bosnia and Herzegovina
© 2024 Pan European University APEIRON. All Rights Reserved.