NATURAL LANGUAGE PROCESSING: RAPID ANALYSIS IN PRACTICE AND ITS NOVEL APPROACHES
Keywords:
natural language processing, tokenization, stemming, lemmatization, sentiment analysis, n-grams, TF-IDF, machine learning, chatbots

Abstract
This article examines the practical workflows and modern approaches for rapid and effective text analysis using natural language processing technologies. Natural language processing is today one of the key directions of artificial intelligence and data analysis, and it is actively applied across many domains, including automatic text classification, sentiment analysis, dialogue systems, and translation services. The article analyzes in detail the operating principles and algorithmic processes of these methods, along with the strengths and weaknesses of each.
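Several of the techniques named in the keywords combine naturally in a single pipeline. As a minimal sketch (not code from the paper itself; the toy corpus and the choice of scikit-learn are assumptions for illustration), the following Python fragment tokenizes a few short texts and builds n-gram TF-IDF features, the kind of representation typically fed to a machine-learning classifier for tasks such as sentiment analysis:

# Minimal sketch: tokenization + n-gram TF-IDF features with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical toy corpus, for illustration only.
corpus = [
    "natural language processing enables rapid text analysis",
    "sentiment analysis classifies text as positive or negative",
    "chatbots rely on tokenization and machine learning",
]

# Unigrams and bigrams (n-grams), weighted by TF-IDF;
# the vectorizer performs its own tokenization and lowercasing.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), lowercase=True)
tfidf_matrix = vectorizer.fit_transform(corpus)

print(tfidf_matrix.shape)                       # (documents, n-gram features)
print(vectorizer.get_feature_names_out()[:10])  # sample of the learned n-grams

The resulting sparse matrix can be passed directly to any scikit-learn estimator, which is one reason the TF-IDF plus n-grams combination remains a common baseline against which neural approaches are compared.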
License
Copyright (c) 2024 N.O. Raximov, D.E. Khojamberdiyev, Sh.A. Karaxanova
This work is licensed under a Creative Commons Attribution 4.0 International License.