Curiosity-driven reinforcement learning for diverse visual paragraph generation

Y Luo, Z Huang, Z Zhang, Z Wang, J Li… - Proceedings of the 27th ACM International Conference on Multimedia, 2019 - dl.acm.org
Visual paragraph generation aims to automatically describe a given image from different perspectives and organize the sentences in a coherent way. In this paper, we address three critical challenges for this task in a reinforcement learning setting: mode collapse, delayed feedback, and the time-consuming warm-up of policy networks. To this end, we propose a novel Curiosity-driven Reinforcement Learning (CRL) framework to jointly enhance the diversity and accuracy of the generated paragraphs. First, by modeling paragraph captioning as a long-term decision-making process and using the prediction uncertainty of state transitions as an intrinsic reward, the model is incentivized to produce precise but rarely seen descriptions of the context rather than being biased towards frequent fragments and generic patterns. Second, since the extrinsic reward from evaluation is not available until the complete paragraph has been generated, we estimate its expected value at each time step with temporal-difference learning, taking into account the correlations between successive actions. The estimated extrinsic rewards are then complemented by dense intrinsic rewards produced by the curiosity module, encouraging the policy to fully explore the action space and find a global optimum. Third, discounted imitation learning is integrated to learn from human demonstrations without a separate, time-consuming warm-up phase. Extensive experiments on the Stanford image-paragraph dataset demonstrate the effectiveness and efficiency of the proposed method, improving performance by 38.4% over the state of the art.
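As a rough illustration of the two reward mechanisms the abstract describes, the sketch below shows one common way a curiosity-style intrinsic reward (forward-model prediction error on state transitions) can be combined with TD(0) targets when the extrinsic score arrives only at the end of the paragraph. All names, dimensions, and the ForwardDynamics module are hypothetical and are not taken from the paper; the authors' actual curiosity module and discounted imitation learning details are not reproduced here.

# Hypothetical sketch, not the authors' code.
import torch
import torch.nn as nn

class ForwardDynamics(nn.Module):
    """Predicts the next hidden state from the current state and the
    embedding of the chosen action (word); its prediction error serves
    as a dense intrinsic reward."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 512),
            nn.ReLU(),
            nn.Linear(512, state_dim),
        )

    def forward(self, state, action_emb):
        return self.net(torch.cat([state, action_emb], dim=-1))

def intrinsic_rewards(dynamics, states, action_embs):
    # states: (T+1, state_dim); action_embs: (T, action_dim)
    pred_next = dynamics(states[:-1], action_embs)
    # Squared prediction error is high on rarely seen transitions and
    # low on frequent, generic patterns, which favors diverse output.
    return 0.5 * (pred_next - states[1:]).pow(2).mean(dim=-1)

def td_targets(r_intrinsic, r_extrinsic_final, values, gamma=0.99, beta=0.1):
    """TD(0) targets. The extrinsic reward (e.g. a caption metric) is only
    observed once the whole paragraph is finished, so it enters at the
    last step; intrinsic rewards fill in the intermediate steps."""
    T = r_intrinsic.shape[0]
    rewards = beta * r_intrinsic
    rewards[-1] = rewards[-1] + r_extrinsic_final
    targets = torch.empty(T)
    for t in range(T):
        bootstrap = values[t + 1] if t + 1 < T else 0.0
        targets[t] = rewards[t] + gamma * bootstrap
    return targets

Under these assumptions, the targets would train a value network while the combined per-step rewards drive a standard policy-gradient update of the captioning policy.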