A similarity-based oversampling method for multi-label imbalanced text data


Karaman I. H., Köksal G., Eriskin L., Salihoglu S.

International Journal of Data Science and Analytics, vol. 21, no. 1, 2026 (ESCI, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 21 Issue: 1
  • Publication Date: 2026
  • DOI: 10.1007/s41060-025-00966-x
  • Journal Name: International Journal of Data Science and Analytics
  • Journal Indexes: Emerging Sources Citation Index (ESCI), Scopus
  • Keywords: Imbalanced classification, Multi-label classification, Oversampling, Text classification, Text similarity
  • TED University Affiliated: No

Abstract

Although data availability in real-world applications keeps increasing, obtaining labeled data for machine learning (ML) projects remains challenging because data annotation is costly and labor-intensive. Many ML projects, particularly those focused on multi-label classification, also grapple with data imbalance, where certain classes lack sufficient samples to train effective classifiers. This study introduces and examines a novel oversampling method for multi-label text classification, designed to address the performance degradation associated with data imbalance. The proposed method identifies potential new samples in unlabeled data by leveraging similarity measures between instances. Iteratively searching the unlabeled dataset, the method locates instances similar to those in underrepresented classes and evaluates their contribution to classifier performance. Instances that yield a performance improvement are then added to the labeled dataset. Experimental results indicate that the proposed approach effectively enhances classifier performance after oversampling.
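The iterative loop described in the abstract (find similar unlabeled instances, tentatively add them, keep only those that improve performance) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the similarity measure (token-set Jaccard here), the acceptance threshold, and the `evaluate` callback standing in for classifier training and scoring are all assumed for the example.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two texts (assumed measure;
    the paper may use a different text-similarity metric)."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def oversample(labeled, unlabeled, minority_label, evaluate, threshold=0.3):
    """Move similar unlabeled texts into the labeled set one at a time,
    keeping each candidate only if classifier performance improves.

    labeled        : list of (text, set-of-labels) pairs
    unlabeled      : list of texts
    minority_label : the underrepresented label to oversample
    evaluate       : callable scoring a labeled set (higher is better);
                     stands in for retraining and evaluating the classifier
    """
    minority = [t for t, labels in labeled if minority_label in labels]
    baseline = evaluate(labeled)
    for text in list(unlabeled):
        # Candidate must resemble at least one minority-class instance.
        if max((jaccard(text, m) for m in minority), default=0.0) < threshold:
            continue
        trial = labeled + [(text, {minority_label})]
        score = evaluate(trial)
        if score > baseline:  # keep only performance-improving samples
            labeled = trial
            baseline = score
            unlabeled.remove(text)
    return labeled
```

A toy run with a stand-in `evaluate` (here simply counting minority samples) shows the mechanics: a similar candidate is absorbed into the labeled set, a dissimilar one is skipped.

```python
labeled = [("cheap flight deals", {"travel"}),
           ("stock market news", {"finance"})]
unlabeled = ["cheap flight tickets", "cooking recipes"]
result = oversample(labeled, unlabeled, "travel",
                    evaluate=lambda L: sum("travel" in ls for _, ls in L))
```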