000000802 001__ 802
000000802 005__ 20141118153515.0
000000802 04107 $$acze
000000802 046__ $$k2002-05-13
000000802 100__ $$aBřezina, Tomáš
000000802 24500 $$aSTOCHASTIC POLICY IN Q-LEARNING USED FOR CONTROL OF AMB
000000802 24630 $$n8.$$pEngineering Mechanics 2002
000000802 260__ $$bInstitute of Mechanics and Solids, FME, TU Brno
000000802 506__ $$arestricted
000000802 520__ $$2cze$$aAbstrakt: Considerable attention has recently been focused on Reinforcement Learning (RL) methods. The article focuses on improving the model-free RL method known as the Q-learning algorithm, applied to an active magnetic bearing (AMB) model. A stochastic strategy and an adaptive integration step increased the speed of learning approximately a hundred times. The only drawback is that the proposed improvement cannot be used online; however, it may be used for pretraining on a simulation model and further fine-tuned online.
000000802 520__ $$2eng$$aAbstract: Considerable attention has recently been focused on Reinforcement Learning (RL) methods. The article focuses on improving the model-free RL method known as the Q-learning algorithm, applied to an active magnetic bearing (AMB) model. A stochastic strategy and an adaptive integration step increased the speed of learning approximately a hundred times. The only drawback is that the proposed improvement cannot be used online; however, it may be used for pretraining on a simulation model and further fine-tuned online.
000000802 540__ $$aThe text is protected under Copyright Act No. 121/2000 Coll.
000000802 7112_ $$aEngineering Mechanics 2002$$cSvratka (CZ)$$d2002-05-13 / 2002-05-16$$gEM2002
000000802 720__ $$aBřezina, Tomáš$$iVěchet, Stanislav$$iKrejsa, Jiří
000000802 8560_ $$ffischerc@itam.cas.cz
000000802 8564_ $$s132023$$uhttps://invenio.itam.cas.cz/record/802/files/Brezina_1.pdf$$yOriginal version of the author's contribution as presented on CD.
000000802 962__ $$r451
000000802 980__ $$aPAPER