Information, Computing and Intelligent systems, No. 7
Recent submissions
Multi-strategy AJAX and event-driven state management for responsive web applications (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Rudnikova, Nataliia; Nedashkivsky, Oleksii.

This research addresses the engineering of high-performance, responsive web applications for complex data and real-time user interaction. The study focuses on client-server integration in a monolithic Django architecture, specifically the orchestration of asynchronous client technologies (AJAX, JavaScript) with server-side logic (Python/Django). The goal is to design, implement, and validate a Unified AJAX Integration Framework. This framework enables seamless real-time data exchange, dynamic updates, and complex state management for diverse components: interactive tables, multi-dimensional charts, multi-step forms, and the Checkout Session Container. The materials include the Django framework, jQuery for AJAX, and JavaScript libraries (Chart.js, DataTables). The methods applied involve systematic software architecture design, asynchronous programming analysis, RESTful API development, and empirical performance benchmarking of data-loading and state-management strategies. The scientific contribution is twofold. First, the Multi-Strategy AJAX Integration Model is formalized as a decision framework that dynamically selects between server-side rendering (django-tables2), client-side rendering (vanilla jQuery/DataTables), and a hybrid AJAX-DataTable approach based on data complexity, volume, and interaction patterns. Second, the Event-Driven State Management System is proposed as a robust design for distributed, session-based UI components, using a centralized AJAX action dispatcher and a universal state-synchronization function. This design ensures data consistency across independent page components and eliminates race conditions in concurrent operations. As a result, the framework achieved a significant reduction in server load and perceived latency.
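The strategy-selection step at the heart of such a multi-strategy model can be illustrated with a minimal sketch. The thresholds, the `interactive` flag, and the function name below are assumptions for demonstration only, not the authors' implementation:

```python
# Illustrative sketch of multi-strategy rendering selection; thresholds
# and names are invented for demonstration, not taken from the paper.

SERVER_SIDE = "server-side (django-tables2)"
CLIENT_SIDE = "client-side (jQuery/DataTables)"
HYBRID = "hybrid AJAX-DataTable"

def select_strategy(row_count: int, interactive: bool) -> str:
    """Choose a rendering strategy from data volume and interactivity."""
    if row_count < 500 and not interactive:
        return SERVER_SIDE   # small, static table: render fully on the server
    if row_count < 5000:
        return CLIENT_SIDE   # moderate volume: ship all rows, filter in JS
    return HYBRID            # large volume: AJAX-paginated DataTable

print(select_strategy(100, False))
print(select_strategy(100_000, True))
```

The point of centralizing the decision in one function is that every table component asks the same dispatcher, so the policy can be tuned in one place.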
The benchmarked components consistently showed sub-200 ms response times for datasets of over 10,000 records. The cart system handled over 1,000 consecutive operations without any state desynchronization.

A multifactor model for detecting propaganda in textual data (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Gavrilenko, Olena; Feshchenko, Kyryl.

Detecting elements of propaganda in large volumes of textual data is currently one of the key tools in combating the information warfare taking place worldwide. This paper presents a multifactor model for determining the level of propaganda in a publication. The analyzed publications included text-based news articles and social media posts, which were processed using both quantitative and semantic text-analysis methods. The model was constructed using the method of linear convolution, which enables the integration of multiple heterogeneous indicators into a unified value reflecting the degree of propaganda. The proposed model considers thirteen indicators, each of which, when exhibiting a high value, signals the potential presence of propaganda within a text. The indicators encompass lexical, syntactic, and semantic characteristics such as emotional tone, subjective evaluation, the presence of manipulative triggers, and calls to action. The value of each indicator was calculated using methods of statistical analysis, intelligent data analysis, and machine learning. An algorithm for determining the influence level of each factor was proposed, as well as a scale for assessing the overall level of propaganda. For every analyzed publication, a utility-function value was computed to quantify its propaganda intensity. The threshold value of this utility function, beyond which a publication is considered propagandistic, was defined as the sample mean across the dataset.
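The linear-convolution aggregation with a sample-mean threshold can be sketched in a few lines. The weights and three-indicator samples below are invented for illustration; they are not the thirteen indicators or the calibrated weights of the model:

```python
def propaganda_score(indicators, weights):
    """Linear convolution: weighted sum of normalized indicator values."""
    assert len(indicators) == len(weights)
    return sum(w * x for w, x in zip(weights, indicators))

def classify(samples, weights):
    """Flag a text as propagandistic if its score exceeds the sample mean."""
    scores = [propaganda_score(s, weights) for s in samples]
    threshold = sum(scores) / len(scores)   # threshold = sample mean
    return [score > threshold for score in scores]

# Three hypothetical texts, each described by three indicators in [0, 1].
weights = [0.5, 0.3, 0.2]
samples = [[0.9, 0.8, 0.7], [0.1, 0.2, 0.1], [0.5, 0.4, 0.3]]
print(classify(samples, weights))  # only the first text exceeds the mean
```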
This approach allows for an objective classification of textual materials without the need for expert labeling. The advantage of the developed method lies in the fact that each indicator is derived exclusively from empirical statistical data and validated computational procedures, ensuring the elimination of human subjectivity. The study demonstrates that the modified multifactor model can serve as a universal analytical tool for detecting propaganda in various types of textual data, thereby enhancing the transparency and reliability of media content analysis.

Optimized syntax concept for variable scoping, loop structures, and flow control in programming language (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Zhyrytovskyi, Oleksandr; Zubk, Roman.

This article examines syntactic redundancy in modern programming languages and its impact on code perception, readability, and logical consistency. The object of the study is the analysis of redundant syntactic constructs, particularly those related to variable declarations, scope management, loop structures, and flow-control mechanisms. The primary aim is to develop and substantiate an optimized syntax concept that combines the declarative rigor of classical languages with the simplicity of dynamic systems, reducing code redundancy and improving cognitive ergonomics for developers. The research methodology involved a comparative analysis of key syntactic elements across different language paradigms. The materials for the study included a formal comparison of semantics and an evaluation of equivalent program fragments written in classical languages and in the proposed conceptual language. The results show that the proposed syntactic model significantly reduces auxiliary symbols, improves code clarity, and lowers cognitive load. The scientific novelty is a holistic syntax model defined by three key innovations.
First, a simplified variable-management system creates local variables automatically, eliminating keywords such as var or global and using explicit markers for outer-scope access. Second, a universal loop operator unifies the functionality of traditional for, while, and do-while loops, allowing condition evaluation at the beginning, middle, or end of the block. Third, the traditional goto operator is replaced with a structured try-throw construct, providing a safe, semantically coherent mechanism for exiting nested blocks and for error handling. This unified approach forms a basis for further research into minimalist syntax focused on naturalness and readability.

Method for combining CNN-based features with geometric facial descriptors in emotion recognition (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Author: Zinchenko, Liudmyla.

This study presents a method for combining CNN-based visual features with geometric facial descriptors to improve the accuracy of emotion recognition in static images. The method integrates deep convolutional embeddings, extracted from a pre-trained ResNetV2_101 model within the ML.NET framework, with handcrafted geometric features computed from facial landmarks. Open-source datasets containing labeled emotional categories were used for the experiments. At the first stage, deep image embeddings were obtained through transfer learning. At the second stage, 68 facial landmarks were detected to calculate distances and proportional relationships such as interocular distance, mouth width, eyebrow height, and other geometry-based indicators. These visual and geometric representations were concatenated into a unified feature space and classified using a multiclass linear model.
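The geometric-descriptor stage can be sketched as plain distance arithmetic over (x, y) landmark coordinates. The named landmark keys and the single ratio below are hypothetical simplifications; the real pipeline works with the 68-point landmark scheme and a larger descriptor set:

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) landmarks."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def geometric_features(landmarks):
    """Toy descriptor set; the keys of `landmarks` are hypothetical names,
    not indices of the real 68-point landmark scheme."""
    interocular = dist(landmarks["left_eye"], landmarks["right_eye"])
    mouth_width = dist(landmarks["mouth_left"], landmarks["mouth_right"])
    # Normalizing by interocular distance makes the feature scale-invariant.
    return {"mouth_to_eye_ratio": mouth_width / interocular}

face = {
    "left_eye": (30.0, 40.0), "right_eye": (70.0, 40.0),
    "mouth_left": (40.0, 80.0), "mouth_right": (60.0, 80.0),
}
print(geometric_features(face))  # mouth width 20 / interocular 40 = 0.5
```

Such ratios are what gets concatenated with the CNN embedding into the unified feature vector.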
The hybrid method achieved approximately 4% higher accuracy than the baseline CNN model relying solely on pixel-level features (from about 63% to 67%), confirming that combining heterogeneous features enhances generalization and robustness. The results also highlight that geometric descriptors act as stabilizing factors, compensating for the noise, occlusions, and lighting variations that degrade CNN-only models. The developed pipeline demonstrates the feasibility of integrating interpretable geometric cues with deep embeddings directly in C# using ML.NET. The research novelty lies in proposing an interpretable hybrid model for emotion recognition that improves reliability while maintaining compatibility with .NET-based applications. The approach offers an accessible solution for developers working within enterprise .NET ecosystems, enabling direct deployment without cross-language integration. Future research will focus on extending the model toward multimodal emotion analysis that incorporates speech, gesture, and physiological signals to enhance contextual understanding of affective states. Additionally, the hybrid model can serve as a diagnostic tool for studying emotion dynamics in psychological or behavioral research.

Methodology of adaptive data processing in IoT monitoring systems with multilevel sensor data filtering and self-tuning (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Haidai, Anatolii; Klymenko, Iryna.

The study focuses on the processes of collecting and preprocessing heterogeneous sensor data. The aim of the research is to develop a method of adaptive filtering and automatic trigger adjustment that ensures stable operation of IoT monitoring systems in the presence of noise, impulse outliers, and seasonal fluctuations.
A methodology for adaptive data processing is proposed, combining multi-level data filtering with automatic self-adjustment of control thresholds in monitoring systems. This approach not only improves the accuracy of real-time sensor measurements but also dynamically adapts the monitoring system parameters to changing operating conditions, thereby minimizing the number of false incidents. Within the study, a model of multi-level filtering was formalized, based on a median filter, a moving-average filter, and an exponential smoothing method. The use of a multi-level filter provides comprehensive data cleansing, stabilization of time series, and extraction of key trends. A mechanism for automatic adjustment of control thresholds in the Zabbix monitoring system was developed, where threshold values are determined based on statistical parameters and trends identified at the multi-level filtering stage. This mechanism integrates into the subsequent data-processing pipeline, ensuring that the system automatically accounts for daily, seasonal, and other fluctuations of the dynamic data-collection environment. Experimental studies involving various types of sensors confirmed improved measurement accuracy and a significant reduction in false alerts in the monitoring system. In particular, humidity-measurement accuracy improved by an average of 6.52%, while impulse temperature spikes were reduced by 53.06%. 
Compared to traditional approaches, the proposed methodology provides higher noise resilience and adaptability to changing environmental conditions, making it an effective solution for industrial, environmental, and other real-time IoT systems.

Deep Q-learning policy optimization method for enhancing generalization in autonomous vehicle control (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Drahan, Mykhailo; Pysarenko, Andrii.

The development of autonomous vehicle control policies based on deep reinforcement learning is a principal technical problem for cyber-physical systems, fundamentally constrained by the high dimensionality of state spaces, inherent algorithmic instability, and a pervasive risk of policy over-specialization that severely limits generalization to real-world scenarios. The object of this investigation is the iterative process of forming a robust control policy within a simulated environment, while the subject focuses on the influence of specialized reward structures and initial training conditions on policy convergence and generalization capability. The study's aim is to develop and empirically evaluate a deep Q-learning policy optimization method that uses dynamic initial conditions to mitigate over-specialization and achieve stable, globally optimal adaptive control. The developed method formalizes two optimization criteria. First, the adaptive reward function serves as the safety and convergence criterion; it is defined hierarchically, with major penalties for collisions, intermediate incentives for passing checkpoints, and a continuous minor penalty for elapsed time to drive efficiency. Second, the mechanism of dynamic initial conditions acts as the policy generalization criterion, designed to inject the necessary stochasticity into the state distribution.
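The two criteria above can be sketched together: a hierarchical reward and a randomized start position. All magnitudes (the -100 collision penalty, +10 checkpoint incentive, 0.1 time cost, and the 960-unit track length from the results below) are illustrative assumptions, not values published by the authors:

```python
import random

def reward(collided: bool, passed_checkpoint: bool, dt: float) -> float:
    """Hierarchical reward: major collision penalty, intermediate
    checkpoint incentive, continuous minor time penalty.
    Magnitudes are illustrative assumptions."""
    r = -0.1 * dt                  # continuous minor penalty for elapsed time
    if passed_checkpoint:
        r += 10.0                  # intermediate incentive per checkpoint
    if collided:
        r -= 100.0                 # major, episode-dominating penalty
    return r

def initial_state(rng: random.Random, dynamic: bool = True) -> float:
    """Dynamic initial conditions: randomize the start position along the
    track so the policy cannot memorize a single fixed trajectory."""
    return rng.uniform(0.0, 960.0) if dynamic else 0.0

print(reward(False, True, 1.0))    # checkpoint reached, small time cost
print(reward(True, False, 1.0))    # collision dominates everything else
```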
The agent is modeled as a vehicle equipped with an eight-sensor system providing 360-degree coverage, making decisions from a discrete action space of seven options. Its ten-dimensional state vector integrates normalized sensor distance readings with normalized dynamic characteristics, including speed and angular error. Empirical testing confirmed the policy's vulnerability under baseline fixed-start conditions, where the agent demonstrated over-specialization and stagnated at a traveled distance of approximately 960 conventional units after 40,000 episodes. The subsequent application of the dynamic-initial-conditions criterion successfully addressed this failure. By forcing the agent to rely on its generalized state mapping instead of trajectory memory, this approach overcame the learning plateau, enabling the agent to achieve full, collision-free track traversal between 53,000 and 54,000 episodes. Final optimization, driven by the time penalty, reduced the total track completion time by nearly half. This verification confirms the method's value in producing robust, stable, and efficient control policies suitable for integration into autonomous transport cyber-physical systems.

UAeroNet: domain-specific dataset for automation of unmanned aerial vehicles (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Kochura, Yuriy; Trochun, Yevhenii; Taran, Vladyslav; Gordienko, Yuri; Rokovyi, Oleksandr; Stirenko, Sergii.

This paper addresses the challenges and key principles of designing domain-specific datasets that can be used especially for the automation of unmanned aerial vehicles. Such datasets play a key role in building intelligent systems that enable autonomous operation and support data-driven decisions. The study presents the approaches we used for data collection, analysis, and annotation, highlighting their importance and practical impact on real-world applications.
The preparation of a domain-specific dataset for automating unmanned aerial vehicle operations (such as navigation and environmental monitoring) is a challenging task due to frequently low image resolution, complex weather conditions, a wide range of object scales, background noise, and heterogeneous terrain landscapes. Existing open datasets typically cover only a limited variety of unmanned aerial vehicle use cases, which restricts the ability of deep learning models to perform adequately under non-standard or unpredictable conditions. The object of the study is video data acquired by unmanned aerial vehicles for creating domain-specific datasets that enable machine learning models to perform autonomous object recognition, navigation, obstacle avoidance, and interaction with an environment with minimal operator involvement. The subject focuses on the collection, preparation, and annotation of video data acquired by unmanned aerial vehicles. The purpose of the study is to develop and systematize a workflow for creating specialized datasets to train robust models capable of autonomously recognizing objects in real-time video captured by unmanned aerial vehicles. To achieve this goal, a workflow was designed for collecting and annotating video data; raw video data were acquired from unmanned aerial vehicle sensors and manually annotated using the Computer Vision Annotation Tool. As a result of this work, we developed a domain-specific dataset (UAeroNet) using an open-source annotation tool for the object-tracking task in real scenarios. UAeroNet consists of 456 annotated tracks and a total of 131,525 labeled instances that belong to 13 distinct classes.

DDoS attack detection with data imperfections using machine learning algorithms (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Dremov, Artem; Volokyta, Artem.

The issue of DDoS attacks remains a prevalent one even in recent years.
The modern network environment is highly dynamic and is characterized by a large amount of traffic flow. Existing research covers several models, techniques, and approaches to detecting DDoS traffic, which aim to optimize detection in controlled datasets. However, unintentional noise or data corruption may lower the efficacy of such methods. As such, determining the most effective ways to detect DDoS traffic under data imperfections is necessary for reliable network performance. The object of this research is therefore the usage of machine learning algorithms for the detection of incoming DDoS attacks. The purpose of this research is to determine the performance of ways to detect incoming DDoS attacks with machine learning algorithms, based on detection accuracy, while simulating imperfect data conditions. The study also examines the impact of class rebalancing on modified data. To achieve the aim of this research, a variety of machine learning algorithms were implemented and tested on the CIC-DDoS2019 dataset. The data are modified by removing values and introducing noise, then tested; the classes are resampled and the dataset is tested again. The goal is to achieve over 90% accuracy in the classification task of determining the type of DDoS attack and to determine how much the changes affect the performance of the algorithms. The results of the testing indicated that several solutions reach the target mark and that changes to the dataset under realistic conditions do not significantly affect the final result. However, all models tested show a decrease in accuracy compared to unmodified data, with more complex models showing higher resilience (a smaller decrease in accuracy). In addition, resampling of the data shows a comparable decrease in the accuracy of the models, with more complex models being affected less. The results of this study may be used in the development of an algorithm for repairing corrupted data or in the development of models more resistant to such data changes.
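The imperfection-injection step (removing values and introducing noise) might be sketched as follows. The 10% missing-value rate and Gaussian noise scale are illustrative assumptions, not the exact corruption levels used in the experiments:

```python
import random

def corrupt(rows, missing_rate=0.1, noise_sigma=0.05, seed=42):
    """Simulate data imperfections: randomly drop values (None) and add
    Gaussian noise to the rest. Rates are illustrative assumptions."""
    rng = random.Random(seed)
    out = []
    for row in rows:
        new_row = []
        for x in row:
            if rng.random() < missing_rate:
                new_row.append(None)            # simulated missing value
            else:
                new_row.append(x + rng.gauss(0.0, noise_sigma))
        out.append(new_row)
    return out

clean = [[float(j) for j in range(5)] for _ in range(3)]
noisy = corrupt(clean)
print(sum(v is None for row in noisy for v in row), "values dropped")
```

The same corrupted table can then be fed to each classifier to measure the accuracy drop relative to the clean data.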
Additionally, the results of this study may be used when considering models for practical implementations of a DDoS traffic classification system.

Evaluation of the effectiveness of two approaches to building damage detection with satellite imagery (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Oliinyk, Yurii; Rumiantsev, Oleksii.

This study addresses approaches to satellite image analysis for assessing infrastructure damage. The main aim is to conduct a comprehensive comparative analysis of the effectiveness of two key machine learning approaches: specialized semantic segmentation based on the U-Net architecture, and generalized visual analysis using large vision-language models. The object of the research is the process of quantitatively benchmarking these two distinct approaches to determine their practical applicability for multi-class damage classification. The research material is the publicly available xView2 dataset. The methods involved two parallel experiments. For the semantic segmentation approach, a U-Net model with an EfficientNet-B4 encoder was implemented and trained on 6-channel input data ("before" and "after" images) using a combined Dice and Focal loss function. For the vision-language model approach, the open-source LLaVA-1.5-7B model was evaluated in a zero-shot mode using advanced prompt engineering for an aggregative counting task. To enable a direct comparison, the standard Jaccard index was calculated based on the aggregated object counts for each damage class. The results of the experiments revealed a significant performance disparity. The specialized U-Net model demonstrated high effectiveness, achieving an intersection-over-union score of 0.6141 on the test set. In contrast, the LLaVA model proved unsuitable for accurate quantitative analysis, yielding an extremely low Jaccard index of approximately 0.063, primarily due to its systemic failure to correctly identify and count objects (Recall ≈ 0.07).
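One plausible reading of a count-based Jaccard index is to treat per-class object counts as multisets: the sum of per-class minima over the sum of per-class maxima. This reading, and the class names and counts below, are assumptions for illustration, not the paper's exact formula or data:

```python
def count_jaccard(pred_counts, true_counts):
    """Jaccard index over per-class object counts:
    sum of per-class minima / sum of per-class maxima."""
    classes = set(pred_counts) | set(true_counts)
    inter = sum(min(pred_counts.get(c, 0), true_counts.get(c, 0)) for c in classes)
    union = sum(max(pred_counts.get(c, 0), true_counts.get(c, 0)) for c in classes)
    return inter / union if union else 1.0

# Hypothetical damage-class counts for one scene.
true = {"no-damage": 80, "minor": 30, "major": 15, "destroyed": 5}
pred = {"no-damage": 60, "minor": 10, "major": 0, "destroyed": 2}
print(count_jaccard(pred, true))  # 72 / 130
```

An under-counting model (low recall) pushes every per-class minimum down, which is how a counting failure collapses this index toward zero.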
The scientific novelty lies in being the first study to quantitatively document this order-of-magnitude capability gap, confirming that for tasks requiring high-precision mapping, specialized segmentation models remain the indispensable tool.

Comparative analysis of LCNet050 and MobileNetV3 architectures in hybrid quantum-classical neural networks for image classification (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Khmelnytskyi, Arsenii; Gordienko, Yuri.

This study explores the impact of the classical backbone architecture on the performance of hybrid quantum-classical neural networks in image classification tasks. Hybrid models combine the representational power of classical deep learning with the potential advantages of quantum computation. Specifically, this research employs a quanvolutional neural network architecture in which a quantum convolutional layer, based on a four-qubit Ry circuit, preprocesses input images before classical processing. Despite the growing interest in hybrid models, few studies have systematically investigated how variations in classical architecture design affect the overall performance of hybrid quantum-classical neural networks. To address this gap, we compare two lightweight convolutional backbones, MobileNetV3Small050 and LCNet050, integrated with an identical quantum preprocessing layer. Both models are evaluated on the CIFAR-10 dataset using 5-fold stratified cross-validation. Performance is assessed using multiple metrics, including accuracy, macro- and micro-averaged area under the curve, and class-wise confusion matrices. The results indicate that the LCNet-based hybrid model consistently outperforms its MobileNet counterpart, achieving higher overall accuracy and area-under-the-curve scores, along with improved class balance and robustness in distinguishing less-represented classes.
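For a circuit of single-qubit Ry rotations with no entangling gates, the per-qubit expectation has a closed form: Ry(θ)|0⟩ = cos(θ/2)|0⟩ + sin(θ/2)|1⟩, so ⟨Z⟩ = cos θ. A four-qubit quanvolution of a 2×2 patch can therefore be emulated classically under that assumption. The angle encoding θ = πx and the absence of entanglement are assumptions here and may differ from the circuit actually used in the paper:

```python
import math

def quanv_patch(patch):
    """Emulate a 4-qubit Ry 'quanvolution' of a 2x2 image patch.
    Each pixel x in [0, 1] is encoded as Ry(pi * x) applied to |0>;
    with no entangling gates, <Z> of that qubit is exactly cos(pi * x).
    Encoding and circuit structure are illustrative assumptions."""
    assert len(patch) == 4          # a flattened 2x2 patch
    return [math.cos(math.pi * x) for x in patch]

print(quanv_patch([0.0, 0.5, 1.0, 0.25]))
```

The four resulting channels per patch are what the classical backbone (LCNet050 or MobileNetV3Small050) then consumes in place of raw pixels.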
These findings underscore the critical role of classical backbone selection in hybrid quantum-classical architectures. While the quantum layer remains fixed, the synergy between quantum preprocessing and classical feature extraction significantly affects model performance. This study contributes to a growing body of work on quantum-enhanced learning systems by demonstrating the importance of classical design choices. Future research may extend these insights to alternative datasets, deeper or transformer-based backbones, and more expressive quantum circuits.

Optimization neural network for time series processing (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Pysarchuk, Oleksii; Baran, Danylo.

The article proposes the architecture of an optimization neural network and a model of test-sample synthesis for the extrapolation of time series parameters. In particular, an input layer incorporating an optimization scheme of nonlinear trade-offs has been implemented. Extrapolation of the behavior of the time series was carried out using a test sample, which is formed as a data model with trend selection by the method of least squares. The scientific novelty of the results obtained in the article is reflected in the essence of these decisions. The aim of the research is to develop an optimization network architecture and a data model for extrapolation that improve the accuracy and extend the horizon of predicting the behavior of the time series outside the observation interval. Subject of research: the architecture of an artificial neural network and methods of time series extrapolation.
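The least-squares trend selection used to form the test-sample data model can be sketched as fitting a line and evaluating it beyond the observation interval. A linear trend is assumed here purely for illustration; the article's data model is not reproduced:

```python
def fit_line(ys):
    """Ordinary least squares for y = a + b*t over t = 0..n-1."""
    n = len(ys)
    t_mean = sum(range(n)) / n
    y_mean = sum(ys) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(ys)) / \
        sum((t - t_mean) ** 2 for t in range(n))
    a = y_mean - b * t_mean
    return a, b

def extrapolate(ys, steps):
    """Evaluate the fitted trend beyond the observation interval."""
    a, b = fit_line(ys)
    n = len(ys)
    return [a + b * (n + s) for s in range(steps)]

series = [1.0, 3.0, 5.0, 7.0]      # exact linear trend y = 1 + 2t
print(extrapolate(series, 2))       # the trend continues past t = 3
```

Predicting from the fitted model rather than raw samples is what damps the influence of stochastic noise on the forecast.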
Object of research: the processes of architectural synthesis of an artificial neural network and extrapolation of time series behavior outside the observation interval. The optimization layer imposes minimum requirements on the approximation of the training and test samples. This is especially appropriate for time series with stochastic noise and reduces the impact of random errors on time series prediction results. The use of model data for extrapolation makes it possible to determine the behavior of the time series outside the observation interval; at the same time, the forecasting horizon with acceptable accuracy characteristics increases. These solutions are reflected in the name of the optimization neural network proposed by the authors. The effectiveness of the proposed solutions was studied by simulation modeling on a modified artificial neural network. The results of the calculations demonstrated an increase in the adequacy of the data models and an increase in the accuracy of extrapolation.

Approach to hybrid load management in Fat-Tree web clusters (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Radchenko, Kostiantyn; Chernenkyi, Artem.

The paper presents an approach to hybrid load management in a web cluster that is capable of providing adaptive request balancing based on load prediction and resilience to random web server failures. The proposed architecture is built upon the Fat-Tree topology, which ensures high scalability, structural redundancy, and efficient routing within the cluster network. The developed system performs load forecasting using moving-average methods and Erlang-based queueing models, enabling the estimation of overload probabilities and proactive redistribution of computational resources. Four representative simulation scenarios were analyzed: baseline load, peak load, dynamic traffic variations, and random server failures.
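The Erlang-based overload estimate can be sketched with the classical Erlang B blocking formula, computed by its numerically stable recurrence B(E, 0) = 1, B(E, m) = E·B(E, m-1) / (m + E·B(E, m-1)). Using it as the overload probability of a pool of m servers at an offered load of E erlangs is our illustrative reading of the approach, not the paper's exact model:

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Erlang B blocking probability via the stable recurrence
    B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# 2 erlangs of forecast load offered to 2 servers -> 40% blocking,
# a signal to redistribute requests before the overload materializes.
print(erlang_b(2.0, 2))
```

Combined with a moving-average forecast of the offered load, this gives the proactive trigger: redistribute when the predicted blocking probability crosses a threshold.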
The obtained results demonstrate enhanced system reliability, reduced average response time, and more balanced utilization of cluster resources. In the context of rapidly growing web services and user traffic volumes, the issue of maintaining high reliability and efficiency of clustered infrastructures becomes increasingly significant. Even with robust topologies such as Fat-Tree, irregular traffic patterns and sudden surges in client requests can cause local overloads and performance degradation. Random node failures further complicate cluster management, necessitating the use of adaptive and predictive control mechanisms. The proposed model integrates Fat-Tree network simulation with statistical forecasting algorithms, forming the basis for proactive load management. This integration allows for minimizing service-degradation risks, dynamically responding to workload changes, and maintaining stable operation of web infrastructures under partial node failures. The architecture shows strong potential for real-time implementation in large-scale distributed web systems. It can be further enhanced by incorporating machine learning or wavelet-based forecasting methods to improve the accuracy of load estimation and system adaptability.

Intelligent traffic management method in software-defined networks based on behavioral classification and adaptive priority service (National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", 2025). Open access. Authors: Oboznyi, Dmytro; Kulakov, Yurii.

The growing complexity of modern enterprise network environments demands sophisticated traffic management solutions that can provide quality-of-service (QoS) guarantees for encrypted and heterogeneous flows. Existing traffic management approaches face significant challenges when dealing with encrypted protocols and diverse application requirements, resulting in performance degradation for critical services and inefficient resource utilization.
This paper addresses the problem of intelligent traffic management in software-defined networks through behavioral classification and adaptive priority service mechanisms. The study examines the development and implementation of an integrated traffic management method that combines behavioral deep packet inspection, class-based queuing, and weighted random early detection algorithms. The research investigates how behavioral flow characteristics remain observable in encrypted traffic environments and how these patterns can be leveraged for effective QoS provisioning. The proposed method utilizes packet timing patterns, connection behaviors, and flow statistics to classify traffic without relying on payload inspection or predefined port assignments. Experimental validation through discrete-event simulation demonstrates significant performance improvements compared to traditional first-in-first-out mechanisms. The behavioral classification component achieves over 95% classification accuracy. The experimental results demonstrate up to 97.5% improvement in latency performance and 0% packet loss for high-priority traffic. Integrating behavioral traffic recognition with adaptive queue management within a programmable network framework provides an effective and innovative approach to maintaining stable service quality in encrypted, multi-service environments. The proposed method is compatible with existing software-defined network controllers and can be deployed without modification of application protocols or infrastructure components.
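The weighted random early detection component mentioned above follows a standard drop-probability curve: no drops below a minimum queue threshold, a linear ramp up to a maximum drop probability at the maximum threshold, and full drop beyond it. The per-class threshold values below are illustrative assumptions, not the paper's configuration:

```python
def wred_drop_probability(avg_queue: float, min_th: float, max_th: float,
                          max_p: float = 0.1) -> float:
    """WRED drop probability as a function of average queue depth."""
    if avg_queue < min_th:
        return 0.0               # below min threshold: never drop
    if avg_queue >= max_th:
        return 1.0               # at or above max threshold: drop everything
    # Linear ramp between the thresholds, scaled by max_p.
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# Per-class weighting: a high-priority class gets deeper thresholds, so at
# the same average queue depth its packets are dropped less aggressively.
print(wred_drop_probability(30.0, min_th=20.0, max_th=40.0))   # mid-ramp
print(wred_drop_probability(30.0, min_th=35.0, max_th=60.0))   # priority class
```

Giving each behaviorally classified traffic class its own (min_th, max_th, max_p) triple is what makes the early-detection policy "weighted".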