Cowling, Peter I.; Powley, Edward J.; Whitehouse, Daniel (2012)
Languages: English
Types: Article
Subjects: 1712, 2207, 1702, 2208

Classified by OpenAIRE into — arXiv: Computer Science::Computer Science and Game Theory
Monte Carlo tree search (MCTS) is an AI technique that has been successfully applied to many deterministic games of perfect information. This paper investigates the application of MCTS methods to games with hidden information and uncertainty. In particular, three new information set MCTS (ISMCTS) algorithms are presented which handle different sources of hidden information and uncertainty in games. Instead of searching minimax trees of game states, the ISMCTS algorithms search trees of information sets, more directly analyzing the true structure of the game. These algorithms are tested in three domains with different characteristics, and it is demonstrated that our new algorithms outperform existing approaches to handling hidden information and uncertainty in games.

Index Terms — Artificial intelligence (AI), game tree search, hidden information, Monte Carlo methods, Monte Carlo tree search (MCTS), uncertainty.
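The core idea the abstract describes — sampling a concrete state ("determinization") consistent with the searcher's information set on each iteration, while accumulating statistics in a single tree keyed on information sets rather than states — can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a single-level, single-observer toy in which the opponent secretly holds one of three cards and the searcher must guess it. The game, the card distribution, and all names here are invented for illustration.

```python
import math
import random

# Toy hidden-information game: the opponent secretly holds one card.
# All three concrete states share the same observation for the guesser,
# so they form a single information set at the root. Each ISMCTS-style
# iteration samples a determinization and updates one shared set of
# action statistics, rather than building a separate tree per state.

CARDS = [1, 2, 3]
CARD_PROBS = {1: 0.2, 2: 0.6, 3: 0.2}  # assumed hidden-state distribution

def sample_determinization(rng):
    """Sample a concrete state consistent with the root information set."""
    return rng.choices(CARDS, weights=[CARD_PROBS[c] for c in CARDS])[0]

def ismcts_guess(iterations=2000, exploration=0.7, seed=0):
    rng = random.Random(seed)
    visits = {a: 0 for a in CARDS}
    wins = {a: 0 for a in CARDS}
    for n in range(1, iterations + 1):
        hidden = sample_determinization(rng)  # determinize this iteration

        def ucb1(a):
            # UCB1 selection over actions; unvisited actions come first.
            if visits[a] == 0:
                return float("inf")
            return wins[a] / visits[a] + exploration * math.sqrt(
                math.log(n) / visits[a]
            )

        action = max(CARDS, key=ucb1)
        reward = 1 if action == hidden else 0  # game is one ply deep
        visits[action] += 1
        wins[action] += reward
    # Recommend the most-visited action, as in standard MCTS.
    return max(CARDS, key=lambda a: visits[a])

# Most determinizations hold card 2, so the search concentrates
# its visits on guessing 2.
print(ismcts_guess())
```

Averaging rewards over many determinizations is what lets a single statistics table stand in for the whole information set; the full ISMCTS algorithms in the paper extend this to deep trees, multiple players, and partially observable moves.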
Peter I. Cowling (M'05) received the M.A. and D.Phil. degrees from Corpus Christi College, University of Oxford, Oxford, U.K., in 1989 and 1997, respectively. He is a Professor of Computer Science and Associate Dean (Research and Knowledge Transfer) at the University of Bradford, Bradford, U.K., where he leads the Artificial Intelligence Research Centre. In September 2012, he will take up an Anniversary Chair at the University of York, York, U.K., joint between the Department of Computer Science and