Адаптивні системи автоматичного управління (Adaptive Systems of Automatic Control): interdepartmental scientific and technical collection. – 2024. – No. 2 (45)
Document (open access). Convolutional neural network for dog breed recognition system (КПІ ім. Ігоря Сікорського, 2024). Khotin, K.; Shymkovych, V.; Kravets, P.; Novatsky, A.; Shymkovych, L.
In this article, a dataset with data augmentation for neural network training and a convolutional neural network model for a dog breed recognition system have been developed. A neural network architecture using transfer learning, based on MobileNetV3-Large, was developed to improve classification results. The structure of the dataset was analyzed, and appropriate methods were chosen to normalize the data. A large dataset containing 70 distinct categories of dog breeds was collected and balanced using data augmentation: the disparity between the minimum and maximum number of instances per class was reduced by eliminating redundant images and generating additional ones. The developed model was tested, and its final accuracy is 96%. The resulting model was deployed in a dog breed recognition system built on a mobile platform; the application lets the user identify a dog's breed in real time or from images in the device's gallery. Further improvement of classification quality can be achieved by expanding the initial dataset, applying other optimization methods, or adjusting the learning rate.

Document (open access). Intelligent control system with reinforcement learning for solving video game tasks (КПІ ім. Ігоря Сікорського, 2024). Osypenko, M.; Shymkovych, V.; Kravets, P.; Novatsky, A.; Shymkovych, L.
This paper describes the development of a state representation and corresponding deep learning models for effectively solving reinforcement learning video game tasks.
It has been demonstrated in the Battle City video game environment that careful design of the state representation can produce much better results without changes to the reinforcement learning algorithm, significantly speed up learning, and enable the agent to generalize to previously unseen levels. The agent was trained for 200 million epochs; further training did not improve the results. The final model reaches a 75% win rate on the first level of Battle City. In most of the lost games, the agent fails because it chooses the wrong path when pursuing an enemy that is closer to the base, and is therefore too slow; the cause is the limited map information available in the state. To further improve performance and possibly reach a 100% win rate, it is recommended to find a way to include full information about walls and other map objects in the state. The developed method can be used to improve performance in real applications.
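The class-balancing step described in the dog breed abstract (removing redundant images from over-represented breeds and generating augmented copies for under-represented ones) can be sketched as a simple per-class plan. This is an illustrative assumption, not the authors' code; the function name, the `target` parameter, and the example breed counts are hypothetical:

```python
def balance_plan(class_counts, target):
    """Plan per-breed drops and augmentations that move every class
    toward `target` images, reducing the min/max instance disparity.
    Illustrative sketch: the paper's actual balancing procedure is
    not given in the abstract."""
    plan = {}
    for breed, n in class_counts.items():
        if n > target:
            plan[breed] = ("drop", n - target)     # remove redundant images
        elif n < target:
            plan[breed] = ("augment", target - n)  # create augmented copies
        else:
            plan[breed] = ("keep", 0)
    return plan

# Hypothetical counts for three of the 70 breed classes:
print(balance_plan({"pug": 120, "husky": 80, "beagle": 100}, 100))
```

In practice, the "augment" entries would be realized with standard image transforms (flips, crops, color jitter), which is the usual way such disparities are closed.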
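The central claim of the reinforcement learning abstract, that a well-designed state representation matters more than the algorithm itself, can be illustrated with a minimal feature-plane encoding. The 13x13 grid size and the four plane types below are assumptions for illustration; the paper's actual feature set is not given in the abstract:

```python
import numpy as np

GRID = 13  # assumed Battle City tile grid size (illustrative, not from the paper)

def encode_state(player, enemies, base, walls):
    """Encode object positions as stacked binary feature planes, one per
    object type: the kind of explicit state representation the paper
    argues drives results more than the choice of RL algorithm.
    Sketch only; the actual features used are not in the abstract."""
    planes = np.zeros((4, GRID, GRID), dtype=np.float32)
    planes[0][player] = 1.0       # agent position
    for pos in enemies:
        planes[1][pos] = 1.0      # enemy tanks
    planes[2][base] = 1.0         # base to defend
    for pos in walls:
        planes[3][pos] = 1.0      # wall tiles (the abstract notes this
                                  # map information was limited)
    return planes

# Hypothetical positions: agent near its base, two enemies entering at the top.
state = encode_state(player=(12, 4), enemies=[(0, 3), (0, 9)],
                     base=(12, 6), walls=[(11, 6), (11, 7)])
```

A tensor of this shape feeds directly into a small convolutional network, and adding richer planes (e.g. full wall layout) is exactly the extension the abstract recommends for closing the remaining 25% of losses.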