Authors: Novotarskyi, Mykhailo; Kuzmich, Valentin
Date accessioned: 2023-04-18
Date available: 2023-04-18
Date issued: 2020
Citation: Novotarskyi, M. USAK method for the reinforcement learning / Novotarskyi Mykhailo, Valentin Kuzmich // Information, Computing and Intelligent Systems. – 2020. – No. 1. – Pp. 4–14. – Bibliogr.: 19 ref.
URI: https://ela.kpi.ua/handle/123456789/54656
Abstract: In the field of reinforcement learning, tabular methods have become widespread. Many important scientific results significantly improve their performance in specific applications. However, the use of tabular methods is limited by the large amount of memory required to store value functions in tabular form when the state space is high-dimensional. A natural solution to this memory problem is to use parameterized function approximation. However, conventional approaches to function approximation, in most cases, no longer deliver the desired memory reduction when solving real-world problems. This fact motivated the application of new approaches, one of which is the use of Sparse Distributed Memory (SDM) based on Kanerva coding. A further development of this direction is the Similarity-Aware Kanerva (SAK) method. In this paper, a modification of the SAK method is proposed, the Uniform Similarity-Aware Kanerva (USAK) method, which is based on a uniform distribution of prototypes in the state space. This approach reduces the amount of RAM required to store prototypes. In addition, reducing the receptive distance of each prototype increases the learning speed by reducing the number of calculations in the linear approximator.
Language: en
Keywords: reinforcement learning; Kanerva coding; function approximation; prototype; value function
Title: USAK method for the reinforcement learning
Type: Article
Pages: 4–14
DOI: https://doi.org/10.20535/2708-4930.1.2020.216042
UDC: 004.056.57:032.26
ORCID: 0000-0002-5653-8518; 0000-0002-6077-3609
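
The sketch below is not the authors' implementation of USAK; it is a minimal Python illustration, under assumed parameters (class name UniformKanervaApproximator, grid size, receptive distance, learning rate are all hypothetical), of the general idea the abstract describes: prototypes placed uniformly over the state space, a limited receptive distance that keeps only a few prototypes active per state, and a linear approximator over the resulting binary features.

```python
# Minimal illustrative sketch (not the authors' USAK implementation) of
# Kanerva-style coding with uniformly placed prototypes and a linear
# value-function approximator. All names and parameters are assumptions.
import numpy as np

class UniformKanervaApproximator:
    def __init__(self, low, high, prototypes_per_dim, receptive_distance, lr=0.1):
        # Place prototypes on a uniform grid over the box [low, high].
        axes = [np.linspace(l, h, prototypes_per_dim) for l, h in zip(low, high)]
        grid = np.meshgrid(*axes, indexing="ij")
        self.prototypes = np.stack([g.ravel() for g in grid], axis=1)
        self.receptive_distance = receptive_distance
        self.weights = np.zeros(len(self.prototypes))
        self.lr = lr

    def features(self, state):
        # A prototype is active when the state lies within its receptive
        # distance; a smaller distance means fewer active prototypes and
        # fewer calculations in the linear approximator.
        dist = np.linalg.norm(self.prototypes - np.asarray(state), axis=1)
        return (dist <= self.receptive_distance).astype(float)

    def value(self, state):
        # Linear approximation: dot product of active features and weights.
        return float(self.features(state) @ self.weights)

    def update(self, state, target):
        # One gradient step of the linear approximator toward a target
        # (e.g., a TD target), spread over the currently active prototypes.
        phi = self.features(state)
        active = phi.sum()
        if active > 0:
            self.weights += self.lr * (target - phi @ self.weights) * phi / active

# Example usage: a 2-D state space covered by a 10x10 prototype grid.
approx = UniformKanervaApproximator(low=[0.0, 0.0], high=[1.0, 1.0],
                                    prototypes_per_dim=10, receptive_distance=0.12)
approx.update([0.3, 0.7], target=1.0)
print(approx.value([0.3, 0.7]))
```

The trade-off the abstract points to shows up directly here: a uniform grid needs no stored similarity structure beyond the prototype coordinates, and shrinking receptive_distance shrinks the active feature set, so each update touches fewer weights.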