SSRE-ID: FULLY SELF-SUPERVISED LEARNING-BASED RE-IDENTIFICATION IN MULTI-OBJECT TRACKING

Authors

  • Pirimqulova Zilola Avaz qizi, Tashkent University of Information Technologies named after Muhammad al-Khwarizmi
  • Hojiyev Sunatullo Nasridin o‘g‘li, Tashkent University of Information Technologies named after Muhammad al-Khwarizmi
  • Xo‘jamqulov Abdulaziz Xazrat o‘g‘li, Tashkent University of Information Technologies named after Muhammad al-Khwarizmi

Keywords:

Multi-object tracking (MOT), self-supervised learning, re-ID

Abstract

Multi-object tracking (MOT) remains one of the most challenging problems in computer vision, chiefly because of object-to-object occlusion, abrupt motion trajectories, detector-induced noise, and sharply varying visual conditions. Recent years have brought notable advances in the tracking-by-detection paradigm, yet these methods still struggle to maintain stable object identities, especially in complex scenes. In this work we propose a fully self-supervised re-ID model integrated with the association mechanism of the OC-SORT algorithm. Unlike previous approaches, our system does not rely on a large labeled ReID dataset: the proposed method learns strong identity embeddings directly from the unlabeled training sequences of MOT17. This is achieved through video-sequence-level self-supervised contrastive learning, so the learned visual features remain stable under occlusion, illumination changes, and pose variation. We further develop a hybrid association strategy that combines geometric IoU scores with appearance similarity, overcoming the limitations of trackers that rely on motion cues alone. Extensive experiments on all 21 MOT17 sequences show that our system achieves a significant improvement in identity consistency, outperforming several existing supervised baselines. Code: https://github.com/zilolapirimqulova/SSL_MOT
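
The two technical components summarized in the abstract (sequence-level contrastive training of the re-ID embeddings, and the hybrid IoU/appearance association) can be illustrated with minimal sketches. Below is a Python sketch of an NT-Xent (SimCLR-style) contrastive loss over pairs of crops drawn from the same unlabeled tracklet; the sampling scheme, the temperature value, and the function name are illustrative assumptions rather than the paper's exact settings.

    import torch
    import torch.nn.functional as F

    def video_level_contrastive_loss(z_a, z_b, temperature=0.07):
        # z_a, z_b: (N, D) embeddings of two crops sampled from the same N
        # unlabeled tracklets (different frames / augmentations); row i of z_a
        # and row i of z_b form a positive pair, all other rows are negatives.
        z_a = F.normalize(z_a, dim=1)
        z_b = F.normalize(z_b, dim=1)
        logits = z_a @ z_b.t() / temperature            # (N, N) cosine similarities
        targets = torch.arange(z_a.size(0), device=z_a.device)
        # Symmetric cross-entropy: each crop must pick out its tracklet partner.
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))

The hybrid association step can likewise be sketched as a linear-assignment problem over a cost that blends IoU with re-ID cosine similarity. The weight alpha, the gating threshold, and the helper name hybrid_association are hypothetical, and OC-SORT's observation-centric motion terms are omitted for brevity.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def hybrid_association(iou_matrix, app_sim, alpha=0.5, max_cost=0.8):
        # iou_matrix: (T, D) IoU between track and detection boxes, in [0, 1].
        # app_sim:    (T, D) cosine similarity between track and detection
        #             re-ID embeddings, assumed rescaled to [0, 1].
        cost = 1.0 - (alpha * np.asarray(iou_matrix)
                      + (1.0 - alpha) * np.asarray(app_sim))
        rows, cols = linear_sum_assignment(cost)        # Hungarian matching
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]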

References

1. A. Bewley et al., “Simple Online and Realtime Tracking,” ICIP, 2016.

2. J. Cao et al., “Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking,” CVPR, 2023.

3. Y. Zhang et al., “ByteTrack: Multi-Object Tracking by Associating Every Detection Box,” ECCV, 2022.

4. N. Wojke, A. Bewley, D. Paulus, “Simple Online and Realtime Tracking with a Deep Association Metric,” ICIP, 2017.

5. Y. Wang et al., “Towards Real-Time Multi-Object Tracking,” ECCV, 2020.

6. Y. Zhang et al., “FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking,” IJCV, 2021.

7. T. Meinhardt et al., “TrackFormer: Multi-Object Tracking with Transformers,” CVPR, 2022.

8. S. Sun et al., “TransTrack: Multiple-Object Tracking with Transformer,” CVPR, 2021.

9. F. Zeng et al., “MOTR: End-to-End Multiple-Object Tracking with Transformers,” ECCV, 2022.

10. L. Zheng et al., “Scalable Person Re-ID: Market-1501,” ICCV, 2015.

11. E. Ristani et al., “DukeMTMC: A Large-Scale Benchmark for Multi-Target Multi-Camera Tracking,” TPAMI, 2016.

12. T. Chen et al., “SimCLR: A Simple Framework for Contrastive Learning,” ICML, 2020.

13. K. He et al., “Momentum Contrast for Unsupervised Visual Representation Learning (MoCo),” CVPR, 2020.

14. X. Chen et al., “MoCo v2/v3: Improved Baselines for Contrastive Representation Learning,” arXiv, 2021.

15. J.-B. Grill et al., “BYOL: Bootstrap Your Own Latent,” NeurIPS, 2020.

16. M. Caron et al., “SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments,” NeurIPS, 2020.

17. M. Caron et al., “DINO: Self-Supervised Vision Transformers,” ICCV, 2021.

18. X. Wang et al., “Self-supervised Tracking via Video-level Contrastive Learning,” CVPR, 2021.

19. F. Wang, H. Liu, “Understanding Contrastive Representation Learning,” CVPR, 2021.

20. A. Milan et al., “MOT16/MOT17 Benchmark Overview,” TPAMI, 2016.

Published

2025-12-09

How to Cite

SSRE-ID: Fully Self-Supervised Learning-Based Re-Identification in Multi-Object Tracking. (2025). DIGITAL TRANSFORMATION AND ARTIFICIAL INTELLIGENCE, 3(6), 1-11. https://dtai.tsue.uz/index.php/dtai/article/view/v3i61
