Method of Counteracting Manipulative Queries to Large Language Models

Вантажиться...
Ескіз

Дата

2025

Науковий керівник

Назва журналу

Номер ISSN

Назва тому

Видавець

Igor Sikorsky Kyiv Polytechnic Institute

Анотація

The integration of Large Language Models (LLMs) into critical infrastructure (SIEM, SOAR) has introduced new attack vectors, specifically prompt injection and jailbreaking. Traditional defense mechanisms, such as input sanitization and Reinforcement Learning from Human Feedback (RLHF), often fail against semantic obfuscation and indirect injections due to their inability to distinguish between control instructions and data context. This paper proposes a novel method for detecting manipulative prompts based on a Multi-Head DistilBERT architecture. Unlike standard binary classifiers, the proposed model decomposes the detection task into four semantic vectors: malicious intent, instruction override, persona adoption, and high-risk action. To address the scarcity of labeled adversarial datasets, we implemented a hybrid data generation strategy using Knowledge Distillation, employing a superior model (Teacher) to label synthetic attacks for the compact Student model. Experimental results on both synthetic and real-world datasets demonstrate that the proposed system achieves a Recall of 0.99, significantly outperforming traditional TF-IDF and keyword-based baselines. The solution operates effectively as a middleware layer, ensuring real-time protection with low computational latency suitable for deployment on edge devices

Опис

Ключові слова

large language models, prompt injection, jailbreaking, nlp security, distilbert, adversarial machine learning

Бібліографічний опис

Kovalchuk, Y. Method of Counteracting Manipulative Queries to Large Language Models / Yehor Kovalchuk, Mykhailo Kolomytsev // Theoretical and Applied Cybersecurity: scientific journal. – 2025. – Vol. 7, No. 3. – P. 114-118. – Bibliogr.: 16 ref.

ORCID