[Submitted on 23 Aug 2018 (v1), last revised 24 Jun 2019 (this version, v3)]
Authors: Huang Hu, Xianchao Wu, Bingfeng Luo, Chongyang Tao, Can Xu, Wei Wu, Zhan Chen
Abstract: The 20 Questions (Q20) game is a well-known game that encourages deductive reasoning and creativity. In the game, the answerer first thinks of an object such as a famous person or a kind of animal. The questioner then tries to guess the object by asking 20 questions. In a Q20 game system, the user is the answerer while the system itself acts as the questioner, which requires a good question-selection strategy to figure out the correct object and win the game. However, the optimal policy of question selection is hard to derive due to the complexity and volatility of the game environment. In this paper, we propose a novel policy-based Reinforcement Learning (RL) method that enables the questioner agent to learn the optimal policy of question selection through continuous interactions with users. To facilitate training, we also propose a reward network to estimate more informative rewards. Compared to previous methods, our RL method is robust to noisy answers and does not rely on a Knowledge Base of objects. Experimental results show that our RL method clearly outperforms an entropy-based engineering system and achieves competitive performance in a noise-free simulation environment.
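To make the setup concrete, here is a minimal, self-contained sketch of policy-based RL for question selection in a toy Q20 setting. This is not the authors' model: the environment (a random binary answer matrix), the softmax policy over questions, and the per-question running baseline standing in for the paper's reward network are all invented for illustration.

```python
import random
import math

random.seed(0)

N_OBJECTS, N_QUESTIONS, MAX_TURNS = 8, 12, 5
# Toy ground truth: answers[o][q] is object o's yes/no answer to question q.
answers = [[random.randint(0, 1) for _ in range(N_QUESTIONS)]
           for _ in range(N_OBJECTS)]

theta = [0.0] * N_QUESTIONS     # policy logits over questions
baseline = [0.0] * N_QUESTIONS  # learned reward estimate (variance-reducing baseline)
LR, BASE_LR = 0.1, 0.1

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def play_episode(target):
    """Ask MAX_TURNS questions, filter candidates by the answers,
    then guess; return (asked question indices, reward in {0, 1})."""
    candidates = set(range(N_OBJECTS))
    asked = []
    for _ in range(MAX_TURNS):
        probs = softmax(theta)
        q = random.choices(range(N_QUESTIONS), weights=probs)[0]
        asked.append(q)
        a = answers[target][q]
        candidates = {o for o in candidates if answers[o][q] == a}
    guess = random.choice(sorted(candidates))  # target is always consistent
    return asked, 1.0 if guess == target else 0.0

def reinforce_step():
    """One REINFORCE update: sample an episode, then push up the log-prob
    of asked questions in proportion to (reward - baseline)."""
    target = random.randrange(N_OBJECTS)
    asked, reward = play_episode(target)
    for q in asked:
        probs = softmax(theta)
        advantage = reward - baseline[q]
        for j in range(N_QUESTIONS):
            # d/d theta_j of log softmax(theta)_q
            grad = (1.0 if j == q else 0.0) - probs[j]
            theta[j] += LR * advantage * grad
        baseline[q] += BASE_LR * (reward - baseline[q])
    return reward

avg = sum(reinforce_step() for _ in range(2000)) / 2000
print(f"average training reward: {avg:.2f}")
```

The baseline here only reduces gradient variance; the paper's reward network goes further by producing a shaped, more informative training signal than the sparse win/lose outcome.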
Comments: Accepted by EMNLP 2018
Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:1808.07645 [cs.HC] (or arXiv:1808.07645v3 [cs.HC] for this version)
https://doi.org/10.48550/arXiv.1808.07645
Submission history
From: Huang Hu
[v1] Thu, 23 Aug 2018 06:34:32 UTC (516 KB)
[v2] Sun, 26 Aug 2018 09:47:54 UTC (252 KB)
[v3] Mon, 24 Jun 2019 06:28:09 UTC (261 KB)