《Journal of Beijing Institute of Technology》 2005-04

Research on Fast Reinforcement Learning

TONG-liang, LU Ji-lian, GONG Jian-wei (School of Mechanical and Vehicular Engineering, Beijing Institute of Technology, Beijing 100081, China)
Based on eligibility-trace theory, this paper proposes a delayed fast reinforcement learning algorithm, DFSARSA(λ). By redefining the eligibility trace and tracking the TD(λ) error, Q-value updates can be postponed until they are actually needed, instead of being performed at every step as in traditional SARSA(λ). The per-step update complexity is reduced from O(|S||A|) for SARSA(λ) to O(|A|), greatly improving the speed of reinforcement learning. Simulation results demonstrate the method's validity.
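The paper's exact DFSARSA(λ) pseudocode is not reproduced on this page, but the core idea the abstract describes — deferring trace decay and Q-value updates until a state-action pair is needed, rather than sweeping every stored trace each step — can be illustrated with a minimal Python sketch. All names, the toy trajectory, and the accumulating-trace variant below are assumptions for illustration, not the authors' implementation:

```python
from collections import defaultdict

ALPHA, GAMMA, LAM = 0.1, 0.9, 0.8
G = GAMMA * LAM  # combined trace-decay factor gamma * lambda

# A fixed toy trajectory of (s, a, r, s_next, a_next) transitions,
# purely illustrative and not taken from the paper.
TRAJ = [
    (0, 0, 1.0, 1, 1),
    (1, 1, 0.0, 2, 0),
    (2, 0, 2.0, 0, 1),
    (0, 1, 0.5, 1, 0),
    (1, 0, 1.0, 2, 1),
    (2, 1, 0.0, 0, 0),  # revisits (0, 0), exercising the deferred catch-up
]

def eager_sarsa_lambda(traj):
    """Classic SARSA(lambda): every step sweeps all stored traces, O(|S||A|)."""
    Q, e = defaultdict(float), defaultdict(float)
    for s, a, r, s2, a2 in traj:
        delta = r + GAMMA * Q[(s2, a2)] - Q[(s, a)]
        e[(s, a)] += 1.0
        for k in list(e):                  # the expensive full sweep
            Q[k] += ALPHA * delta * e[k]
            e[k] *= G
    return Q

def delayed_sarsa_lambda(traj):
    """Delayed variant: each step touches only the pairs it actually reads
    or writes.  Decay and TD-error contributions are accumulated in one
    running sum and applied to a pair's Q-value only on demand."""
    Q = defaultdict(float)
    trace = {}     # (s, a) -> (trace value when last touched, step index)
    csum = [0.0]   # csum[t] = sum over k < t of delta_k * G**k

    def catch_up(key, t):
        """Apply all Q-updates deferred for `key` through step t - 1."""
        if key in trace:
            e0, t0 = trace[key]
            Q[key] += ALPHA * e0 * (csum[t] - csum[t0]) / G ** t0
            trace[key] = (e0 * G ** (t - t0), t)

    for t, (s, a, r, s2, a2) in enumerate(traj):
        catch_up((s, a), t)                # bring the two Q-values we read
        catch_up((s2, a2), t)              # and write up to date, O(1) each
        delta = r + GAMMA * Q[(s2, a2)] - Q[(s, a)]
        e0, _ = trace.get((s, a), (0.0, t))
        trace[(s, a)] = (e0 + 1.0, t)      # accumulating-trace increment
        csum.append(csum[-1] + delta * G ** t)

    for key in list(trace):                # final flush at episode end
        catch_up(key, len(traj))
    return Q
```

Both functions produce identical Q-tables on this trajectory; the delayed version simply moves the per-step full-table sweep into an amortized per-pair catch-up. Note that the G**t terms shrink geometrically, so a long-running implementation would need periodic rescaling of the running sum to stay numerically stable.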
【Fund】: Pre-research project of national ministries and commissions (40404070302)
【CateGory Index】: TP181
Chinese Journal Full-text Database: 1 hit
1 MENG Wei, HAN Xue-dong (1. Information School, Beijing Forestry University, Beijing 100083, China; 2. 706 Institute of China Aerospace Science and Industry Corporation, Beijing 100854, China);Parallel reinforcement learning algorithm and its application[J];Computer Engineering and Applications;2009-34
Chinese Journal Full-text Database: 10 hits
1 Fang Yuan, Shao Shihuang (Department of Automation & Electronic Information Engineering, China Textile University, Shanghai,200051);Agent Theory Review[J];;1998-04
2 XIONG Zhigang & XU Meilin (Department of Educational Information and Technology, East China Normal University, Shanghai 20062, China);Discussion of Learning Technology Based on Metadata and Semantic Web[J];Open Education Research;2004-05
3 Zhang Rubo (Dept. of Computer Science, Harbin Engineering University, Harbin 150001);Research on the Method to Improve Reinforcement Learning Speed[J];Computer Engineering and Applications;2001-22
4 MAO Jun-jie, LIU Guo-dong (School of Communications and Control Engineering, Jiangnan University, Wuxi, Jiangsu 214122, China);Modified reinforcement learning based on experience knowledge and its application in MAS[J];Computer Engineering and Applications;2008-24
5 ;Collaborative Design and Multi-Agent[J];Computer Science;1997-03
6 XU Yanqing, SHEN Ruimin, ZHANG Tongzhen, SHEN Liping (Computer Science and Technology Department, Shanghai Jiaotong University, Shanghai 200030);Application of Reinforcement Learning-based Multi-agents in Distance Learning[J];Computer Engineering;2001-08
7 Xu Li ①,Han Xiaogang ② and Wang Huaimin ① ( ①Department of Computer Science, National University of Defense Technology) ( ②Department of Computer Science,Hunan University);Application of Intelligent Agent Technology in Internet[J];COMPUTER ENGINEERING & SCIENCE;1999-01
8 CEN Ling, LIU Jie (1. Institute of Modern Physics, Academia Sinica, Lanzhou Gansu 730000, China; 2. University of Science and Technology of China, Hefei Anhui 230022, China);MULTI-AGENT COOPERATION MODEL AND ITS APPLICATION[J];Computer Applications;2001-02
9 WANG Qian,LIU Ping,XIAO De-bao (Dept. of Computer Science, Huazhong Normal University, Wuhan Hubei 430079, China);Multi-Agent Management Frame Based on Roles and Allocate Strategies for Grid Resource[J];Application Research of Computers;2005-06
10 ZHONG Yu, GU Guo-chang, ZHANG Ru-bo (1. Computer Science and Technology College, Harbin Engineering University, Heilongjiang Harbin 150001, China; 2. Robotics Laboratory, Shenyang Institute of Automation, Chinese Academy of Sciences, Liaoning Shenyang 110015, China);Survey of distributed reinforcement learning algorithms in multi-agent systems[J];Control Theory & Applications;2003-03