Dynamic Update of the Reinforcement Function During Learning

 

Authors: Juan Miguel Santos, Claude Touzet

 

Journal: Connection Science (Taylor & Francis; available online 1999)
Volume/Issue: Volume 11, Issue 3-4

Pages: 267-289

 

ISSN: 0954-0091

 

Year: 1999

 

DOI: 10.1080/095400999116250

 

Publisher: Taylor & Francis Group

 

Keywords: Reinforcement Function; Reinforcement Learning; Robot Learning; Autonomous Robot; Behaviour-based Approach

 

Data source: Taylor & Francis

 

Abstract:

During the last decade, numerous contributions have been made to the use of reinforcement learning in the field of robot learning. They have focused mainly on the issues of generalization, memorization and exploration, which are mandatory for dealing with real robots. However, in our opinion the most difficult task today is obtaining the definition of the reinforcement function (RF). A first attempt in this direction was made by introducing a method, the update parameters algorithm (UPA), for tuning an RF in such a way that it is optimal during the exploration phase. The only requirement is that the RF conform to a particular expression. In this article, we propose Dynamic-UPA, an algorithm able to tune the RF parameters during the whole learning phase (exploration and exploitation). It allows one to address the so-called exploration-versus-exploitation dilemma through careful computation of the RF parameter values, by controlling the ratio between positive and negative reinforcement during learning. Experiments with the mobile robot Khepera on the synthesis of obstacle-avoidance and wall-following behaviours validate our proposals.
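The ratio-control idea mentioned in the abstract can be illustrated with a minimal sketch: a two-threshold reinforcement function whose thresholds are nudged online so that the observed proportion of positive to negative reinforcements stays near a target value. The class name, thresholds, window size and adaptation rule below are illustrative assumptions, not the published UPA or Dynamic-UPA algorithm.

```python
import random
from collections import deque

class RatioControlledRF:
    """Hypothetical sketch of a reinforcement function whose parameters are
    adapted online to keep the positive/negative reinforcement ratio near a
    target. Not the authors' Dynamic-UPA; all values are assumptions."""

    def __init__(self, theta_pos=0.7, theta_neg=0.3,
                 target_ratio=1.0, window=200, step=0.01):
        self.theta_pos = theta_pos        # measurements above this yield +1
        self.theta_neg = theta_neg        # measurements below this yield -1
        self.target_ratio = target_ratio  # desired (#positive / #negative)
        self.history = deque(maxlen=window)
        self.step = step

    def reinforcement(self, measurement):
        # Compute the reinforcement signal for one step, then adapt.
        if measurement >= self.theta_pos:
            r = 1.0
        elif measurement <= self.theta_neg:
            r = -1.0
        else:
            r = 0.0
        self.history.append(r)
        self._adapt()
        return r

    def _adapt(self):
        # Nudge the positive threshold so the recent positive/negative
        # ratio moves toward the target ratio.
        pos = sum(1 for r in self.history if r > 0)
        neg = sum(1 for r in self.history if r < 0)
        if neg == 0:
            return
        ratio = pos / neg
        if ratio > self.target_ratio:
            # Too many rewards: make the positive criterion stricter.
            self.theta_pos = min(1.0, self.theta_pos + self.step)
        elif ratio < self.target_ratio:
            # Too many punishments: make the positive criterion easier.
            self.theta_pos = max(self.theta_neg, self.theta_pos - self.step)


if __name__ == "__main__":
    rf = RatioControlledRF()
    for _ in range(1000):
        # Stand-in for a sensor-derived performance measure (e.g. obstacle
        # clearance); drawn at random here purely for illustration.
        rf.reinforcement(random.random())
    print("adapted positive threshold:", round(rf.theta_pos, 3))
```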

 



