FUNDAMENTALS OF NATURAL LANGUAGE PROCESSING (NLP) AND TEXT ANALYSIS

Authors

  • Ikromov Xusan Xolmaxamatovich, Andijon davlat texnika instituti
  • Botirova Shoxsanam Uraimjon qizi, Andijon davlat texnika instituti

Keywords:

NLP, natural language processing, morphological analysis, syntactic analysis, semantic analysis, BERT, FastText, lemmatization, deep learning, machine learning, transformer models, linguistic resources

Abstract

This article covers the core principles of natural language processing (NLP) technologies and the opportunities for applying them to the Uzbek language. The stages of morphological, syntactic, semantic, and sentiment analysis are examined in detail, and modern machine-learning and deep-learning models (BERT, FastText, LSTM, and others) are applied at each stage. In the course of the study, a corpus of more than one million Uzbek texts was compiled and a prototype software tool was developed. The results demonstrate high model performance (accuracy of 89–92%). Given the scarcity of existing linguistic resources, the article also argues that future research should further develop NLP tools tailored to the Uzbek language.
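One reason FastText suits a morphologically rich, agglutinative language like Uzbek is that it represents each word as a bag of character n-grams rather than a single atomic token, so inflected forms share most of their subword features. The sketch below illustrates only this n-gram extraction step in plain Python (the boundary markers and the 3–6 n-gram range follow FastText's defaults); it is an illustration of the idea, not the article's actual pipeline.

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Collect FastText-style character n-grams of a word.

    FastText wraps the word in boundary markers '<' and '>',
    then takes every n-gram with n_min <= n <= n_max, plus the
    whole marked word itself as an extra feature.
    """
    marked = f"<{word}>"
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(marked) - n + 1):
            grams.add(marked[i:i + n])
    grams.add(marked)  # the full word is also kept as a feature
    return grams

# Uzbek example: "kitoblar" ("books") shares many n-grams with
# "kitob" ("book"), so their subword-based embeddings stay close.
shared = char_ngrams("kitoblar") & char_ngrams("kitob")
print(sorted(shared))
```

Because the plural "kitoblar" and the stem "kitob" overlap heavily in their n-gram sets, a subword model can produce a reasonable vector even for word forms never seen during training.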

References

Decree of the President of the Republic of Uzbekistan No. PQ-5184 of October 26, 2021, "On measures for the introduction of artificial intelligence technologies".

Analysis of Modern Methods of Teaching and Training Students of the Specialty "Information Systems and Technologies". (2023). International Journal of Advance Scientific Research, 3(11), 174–178. https://doi.org/10.37547/ijasr-03-11-29

Ikromov, K. K. (2023). Historical Context of Development of Information Systems and Database Management. International Journal of Pedagogics, 3(11), 119–123. https://doi.org/10.37547/ijp/Volume03Issue11-23

Jurafsky, D., & Martin, J. H. (2020). Speech and Language Processing (3rd ed.). Draft. Stanford University. https://web.stanford.edu/~jurafsky/slp3/

Young, T., Hazarika, D., Poria, S., & Cambria, E. (2018). Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine, 13(3), 55–75. https://doi.org/10.1109/MCI.2018.2840738

Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of NAACL-HLT, 4171–4186. https://doi.org/10.48550/arXiv.1810.04805

Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781. https://doi.org/10.48550/arXiv.1301.3781

Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. Proceedings of NAACL-HLT, 2227–2237. https://doi.org/10.48550/arXiv.1802.05365

Cambria, E., Schuller, B., Xia, Y., & Havasi, C. (2013). New avenues in opinion mining and sentiment analysis. IEEE Intelligent Systems, 28(2), 15–21. https://doi.org/10.1109/MIS.2013.30

Goldberg, Y. (2017). Neural Network Methods for Natural Language Processing. Synthesis Lectures on Human Language Technologies, 10(1), 1–309. https://doi.org/10.2200/S00762ED1V01Y201703HLT037

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. https://doi.org/10.48550/arXiv.1907.11692

Bahdanau, D., Cho, K., & Bengio, Y. (2015). Neural Machine Translation by Jointly Learning to Align and Translate. International Conference on Learning Representations (ICLR). https://doi.org/10.48550/arXiv.1409.0473

Liu, B. (2012). Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language Technologies, 5(1), 1–167. https://doi.org/10.2200/S00416ED1V01Y201204HLT016

Published

2025-06-16

How to Cite

Ikromov, K., & Botirova, S. (2025). NATURAL LANGUAGE PROCESSING (NLP) ASOSLARI VA MATNNI TAHLIL QILISH. DIGITAL TRANSFORMATION AND ARTIFICIAL INTELLIGENCE, 3(3), 140–146. Retrieved from https://dtai.tsue.uz/index.php/dtai/article/view/v3i320