Published: 2019
Key points: This paper builds a benchmark for batch RL with discrete action spaces, comparing DQN against several batch RL algorithms (DQN, REM, QR-DQN, KL-Control, BCQ). It also adapts BCQ from continuous to discrete action spaces and achieves SOTA results with it. The authors' conclusion is that for batch RL to work well, extrapolation error must be addressed; otherwise it produces unstable value estimates and poor performance. Extrapolation error means that when estimating an action value, the state-action pair may simply not appear in the fixed dataset, yet the TD update can still inflate the estimate for that unseen action; since its true value may actually be poor, the error compounds and hurts performance ("induced from evaluating state-action pairs which are not contained in the provided batch of data. This erroneous extrapolation is propagated through temporal difference update of most off-policy algorithms, causing extreme overestimation and poor performance"). One could arrive at this conclusion without running the experiments, but it is sound. The sketch below illustrates the action-filtering idea behind the discrete BCQ variant.
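A minimal sketch (not the authors' code) of how discrete BCQ tries to avoid extrapolation error: alongside the Q-network, a behaviour-cloning model G(a|s) is fit to the fixed batch, and at decision time only actions whose probability is within a relative threshold of the most likely batch action are allowed. The function name `bcq_select_action` and the toy numbers are my own, for illustration; the threshold `tau` plays the role of BCQ's action-filtering hyperparameter.

```python
import numpy as np

def bcq_select_action(q_values, behavior_probs, tau=0.3):
    """Pick an action in the spirit of discrete BCQ (illustrative sketch).

    q_values       : (num_actions,) Q(s, a) estimates.
    behavior_probs : (num_actions,) G(a|s), probabilities under a
                     behaviour-cloning model trained on the fixed batch.
    tau            : relative threshold; tau=0 recovers plain Q-learning,
                     tau close to 1 approaches pure behaviour cloning.
    """
    # Mask actions the batch policy (almost) never takes in this state,
    # so the policy cannot exploit over-extrapolated Q-values.
    allowed = behavior_probs / behavior_probs.max() >= tau
    masked_q = np.where(allowed, q_values, -np.inf)
    return int(np.argmax(masked_q))

# Toy example: action 3 has the highest (spuriously extrapolated) Q-value
# but is essentially absent from the batch, so it gets filtered out.
q = np.array([1.0, 2.0, 1.5, 5.0])
g = np.array([0.4, 0.35, 0.24, 0.01])
print(bcq_select_action(q, g, tau=0.3))  # -> 1, not the unseen action 3
```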
Summary: This paper is mainly a benchmark, and its conclusions make sense.
Questions: The paper mentions several batch RL algorithms I have not read yet. If I end up working on batch RL, this paper is a good starting point for tracking them down: QR-DQN, REM, BCQ, BEAR-QL, KL-Control, SPIBB-DQN.