Адаптивні системи автоматичного управління (Adaptive Systems of Automatic Control): interdepartmental scientific and technical collection. – 2025. – № 2 (47)
Browsing this issue by author "Oliinyk, V."
Now showing 1 - 2 of 2
Document (Open Access): A comparative study of task formulations for detecting propaganda using large language models (КПІ ім. Ігоря Сікорського, 2025). Oliinyk, V.; Zakharchyn, N.
This paper extends existing studies on propaganda detection using large language models by examining several approaches to task formulation and applying them to different LLMs, namely GPT-4o mini and Gemma / Gemma 2, aiming to find the most effective approach. Using a combination of two text corpora in English and Russian with 18 propaganda techniques, we fine-tune models on character-based, phrase-based, and classification-only variations of this dataset with corresponding instructions to determine which instruction yields the best performance. We conducted experiments and evaluated performance across classification, span identification, and joint tasks, demonstrating the clear superiority of the phrase-based approach over the character-based one. At the same time, our findings indicate that fine-tuning significantly improved model performance on the span identification and joint tasks, while offering limited benefit for the classification task alone.

Document (Open Access): An efficient real-time gaze tracking method for browser-based applications (КПІ ім. Ігоря Сікорського, 2025). Oliinyk, V.; Korol, S.
This paper presents a gaze tracking method based on a hybrid gaze direction prediction model, designed for real-time operation in web applications under limited computational resources and without specialized hardware. The proposed approach combines geometric normalization of facial landmarks with a lightweight CNN-Transformer network to estimate gaze direction and project it onto 2D screen coordinates. Designed for scalable and privacy-preserving use in web applications, it addresses the limitations of appearance-only and geometry-only methods.
The system uses MediaPipe FaceMesh for 3D landmark detection, followed by normalization, hybrid gaze estimation, and a 9-point calibration procedure using regression-based mapping. A comprehensive experimental setup was developed to evaluate its effectiveness. Results demonstrate that our approach achieves high angular accuracy and lower jitter during active head movement by the user, with real-time inference running entirely in-browser using ONNX Runtime Web. The proposed method is suitable for use in adaptive web interfaces, assistive technologies, educational tools, and behavioral research applications. It offers an accessible pathway for integrating gaze-based interaction into widespread browser platforms without the need for dedicated hardware.
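The "phrase-based" task formulation in the first abstract can be illustrated with a minimal sketch: instead of asking the model to emit character offsets, gold annotations are rendered as verbatim phrases paired with their technique labels. The field names (`text`, `spans`) and the output format here are assumptions for illustration, not the authors' actual schema.

```python
# Hypothetical sketch of converting character-offset propaganda
# annotations into a phrase-based target string for instruction tuning.
# Schema and output format are assumptions, not the paper's exact setup.

def to_phrase_targets(example: dict) -> str:
    """Render gold spans as '<technique>: "<phrase>"' lines, so the
    model learns to output verbatim phrases rather than char offsets."""
    lines = []
    for start, end, technique in example["spans"]:
        phrase = example["text"][start:end]  # slice the annotated span
        lines.append(f'{technique}: "{phrase}"')
    return "\n".join(lines) if lines else "none"

example = {
    "text": "They will stop at nothing to destroy our way of life.",
    "spans": [(10, 25, "Exaggeration")],  # char offsets of the span
}
print(to_phrase_targets(example))  # Exaggeration: "stop at nothing"
```

A phrase-based target like this is easier for an LLM to produce reliably than raw offsets, since it only requires copying surface text, which is consistent with the superiority the abstract reports.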
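The 9-point calibration step of the gaze tracker can be sketched as fitting a regression from predicted gaze angles (yaw, pitch) to known on-screen dot positions. The abstract only says "regression-based mapping"; the quadratic feature set and least-squares fit below are assumptions, a minimal stand-in for whatever mapping the authors actually use.

```python
import numpy as np

# Hypothetical sketch of 9-point calibration: map gaze angles
# (yaw, pitch) to 2D screen coordinates via least-squares regression.
# The quadratic feature basis is an assumption for illustration.

def features(yaw, pitch):
    # Quadratic polynomial features of the gaze angles.
    return np.stack([np.ones_like(yaw), yaw, pitch,
                     yaw * pitch, yaw**2, pitch**2], axis=1)

def fit_calibration(angles, screen_xy):
    """angles: (9, 2) yaw/pitch measured at each calibration dot;
    screen_xy: (9, 2) known dot positions. Returns a weight matrix."""
    X = features(angles[:, 0], angles[:, 1])
    W, *_ = np.linalg.lstsq(X, screen_xy, rcond=None)
    return W

def map_gaze(W, yaw, pitch):
    # Project new gaze angles onto screen coordinates.
    return features(np.atleast_1d(yaw), np.atleast_1d(pitch)) @ W

# Synthetic 3x3 grid of calibration dots with a fake linear ground truth.
yaws, pitches = np.meshgrid([-0.3, 0.0, 0.3], [-0.2, 0.0, 0.2])
angles = np.stack([yaws.ravel(), pitches.ravel()], axis=1)
screen = angles * [800, 600] + [960, 540]
W = fit_calibration(angles, screen)
print(map_gaze(W, 0.3, 0.2))  # recovers the corner dot position
```

With nine dots and six features the system is overdetermined, so the fit also absorbs some prediction noise; a linear mapping is recovered exactly here because it lies inside the quadratic basis.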