A comparative study of task formulations for detecting propaganda using large language models

Date

2025

Publisher

Igor Sikorsky Kyiv Polytechnic Institute

Abstract

This paper extends existing studies on propaganda detection with large language models by examining several approaches to task formulation and applying them to different LLMs, namely GPT-4o mini and Gemma / Gemma 2, with the aim of finding the most effective approach. Using a combination of two text corpora in English and Russian covering 18 propaganda techniques, we fine-tune the models on character-based, phrase-based, and classification-only variations of this dataset with corresponding instructions to determine which instruction yields the best performance. We conducted experiments and evaluated performance across classification, span identification, and joint tasks, demonstrating the clear superiority of the phrase-based approach over the character-based one. At the same time, our findings indicate that fine-tuning significantly improved model performance on the span identification and joint tasks, while offering limited benefit for the classification task alone.

Keywords

propaganda detection, large language models, propaganda techniques, fine-tuning, natural language processing

Bibliographic citation

Oliinyk, V. A comparative study of task formulations for detecting propaganda using large language models / V. Oliinyk, N. Zakharchyn // Адаптивні системи автоматичного управління : interdepartmental scientific and technical collection. – 2025. – No. 2 (47). – P. 14-24. – Bibliogr.: 12 titles.