Naveed, Munir Hussain
Languages: English
Types: Doctoral thesis
Subjects: T1
This thesis focuses on the design of a new path planning algorithm for dynamic, partially observable, real-time environments such as Real-Time Strategy (RTS) games. The emphasis is on fast action selection, which motivates the use of Monte-Carlo planning techniques. Three main contributions are presented. The first is a Monte-Carlo planning technique, called MCRT, that performs selective action sampling and limits how many times a particular state-action pair is explored, balancing the trade-off between exploration of new actions and exploitation of the current best action. The second contribution is a pair of variations of MCRT: the first randomly selects an action as a sample at each state seen during the look-ahead search, while the second, called MCRT-CAS, performs the selective action sampling using corridors. The third contribution is the design of four real-time path planners, the MOCART planners, that exploit MCRT and its variations to solve path planning problems in real time. Three of these planners are empirically evaluated on four standard pathfinding benchmarks (over 1000 instances), and their performance is compared against two recent rival algorithms based on real-time heuristic search: Real-Time D*-Lite (RTD) and Local Search Space-Learning Real-Time A* (LSS-LRTA). The results show that one variation, called MOCART-CAS, performs action selection significantly faster than the rival planners. The fourth planner, MG-MOCART, which can solve path planning problems with multiple goals, is evaluated in a typical Real-Time Strategy game against four rivals: Upper Confidence bounds applied to Trees (UCT), LSS-LRTA, Real-Time Dynamic Programming (RTDP), and a Rapidly-exploring Random Tree (RRT) planner. Performance is measured using score and planning cost, and the results show that the MG-MOCART planner performs better than its rival techniques on both measures.
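A minimal sketch may help make the exploration/exploitation cap described above concrete. The Python fragment below is an illustrative reconstruction, not the thesis's actual MCRT implementation: the rollout interface simulate(state, action, horizon), the cap MAX_VISITS, and the cost-to-goal bookkeeping are all assumptions introduced for this example.

    import random
    from collections import defaultdict

    MAX_VISITS = 10  # illustrative cap on samples per (state, action) pair

    def mcrt_select_action(state, actions, simulate, n_rollouts=100, horizon=20):
        """Pick an action by Monte-Carlo sampling with a per-(state, action) visit cap.

        `simulate(state, action, horizon)` is an assumed black-box rollout
        returning an estimated cost-to-goal. The cap forces samples to spread
        across actions (exploration); once every action is capped, further
        samples refine the current best estimate (exploitation).
        """
        visits = defaultdict(int)        # (state, action) -> number of samples
        total_cost = defaultdict(float)  # (state, action) -> accumulated rollout cost

        for _ in range(n_rollouts):
            under_cap = [a for a in actions if visits[(state, a)] < MAX_VISITS]
            if under_cap:
                action = random.choice(under_cap)   # exploration: try under-sampled actions
            else:
                action = min(actions,               # exploitation: re-sample current best
                             key=lambda a: total_cost[(state, a)] / visits[(state, a)])
            cost = simulate(state, action, horizon)
            visits[(state, action)] += 1
            total_cost[(state, action)] += cost

        # Commit to the action with the lowest mean estimated cost-to-goal.
        return min((a for a in actions if visits[(state, a)] > 0),
                   key=lambda a: total_cost[(state, a)] / visits[(state, a)])

    # Toy usage: a 1-D walk toward goal cell 5; rollout cost = distance after one noisy step.
    def simulate(state, action, horizon):
        return abs((state + action + random.choice([-1, 0, 0])) - 5)

    best = mcrt_select_action(state=0, actions=[-1, +1], simulate=simulate)

The cap is what distinguishes this scheme from plain greedy sampling: no single action can monopolize the rollout budget until every alternative has been sampled MAX_VISITS times, which matches the trade-off the abstract describes.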
References:

    • [1] N. M. Amato and Y. Wu. A Randomized Roadmap Method for Path and Manipulation Planning. In IEEE International Conference on Robotics and Automation, pages 113–120, 1996.
    • [2] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning, 47(2-3):235–256, May 2002.
    • [3] R. Balla and A. Fern. UCT for Tactical Assault Planning in Real-Time Strategy Games. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, pages 40–45, 2009.
    • [4] J. Barraquand, L. Kavraki, J.-C. Latombe, T.-Y. Li, R. Motwani, and P. Raghavan. A Random Sampling Scheme for Path Planning. International Journal of Robotics Research, 16:759–774, 1996.
    • [5] A. Barto, S. Bradtke, and S. Singh. Learning to Act Using Real-Time Dynamic Programming. Artificial Intelligence, 72:81–138, 1995.
    • [6] R. Bellman. The Theory of Dynamic Programming. Bulletin of the American Mathematical Society, 60(6):503–516, 1954.
    • [7] D. Bertsekas. Distributed Dynamic Programming. IEEE Transactions on Automatic Control, 27(3):610–616, 1982.
    • [8] R. Bjarnason, A. Fern, and P. Tadepalli. Lower Bounding Klondike Solitaire with Monte-Carlo Planning. In Proceedings of the 19th International Conference on Automated Planning and Scheduling, 2009.
    • [9] Y. Bjornsson, V. Bulitko, and N. Sturtevant. TBA*: Time-Bounded A*. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, pages 431–436, San Francisco, CA, USA, 2009. Morgan Kaufmann Publishers Inc.
    • [10] Y. Bjornsson, M. Enzenberger, R. Holte, and J. Schaeffer. Fringe Search: Beating A* at Pathfinding on Game Maps. In Proceedings of the IEEE Symposium on Computational Intelligence and Games, 2005.
    • [14] B. Bonet and H. Geffner. mGPT: A Probabilistic Planner Based on Heuristic Search. Journal of Artificial Intelligence Research, 24:933–944, 2005.
    • [15] A. Botea, M. Muller, and J. Schaeffer. Near Optimal Hierarchical Path-Finding. Journal of Game Development, 1:7–28, 2004.
    • [16] V. Bulitko and G. Lee. Learning in Real-Time Search: A Unifying Framework. Journal of Artificial Intelligence Research, 25:119–157, 2006.
    • [17] V. Bulitko, N. Sturtevant, J. Lu, and T. Yau. Graph Abstraction in Real-Time Heuristic Search. Journal of Artificial Intelligence Research, 30:51–100, 2007.
    • [18] M. Buro. ORTS: A Hack-Free RTS Game Environment. In Proceedings of the International Computers and Games Conference, pages 280–291, 2002.
    • [19] M. Buro. Call for AI Research in RTS Games. In Proceedings of the AAAI Workshop on AI in Games, pages 139–141. AAAI Press, 2004.
    • [20] H. Chan, A. Fern, S. Ray, N. Wilson, and C. Ventura. Online Planning for Resource Production in Real-Time Strategy Games. In Proceedings of the International Conference on Automated Planning and Scheduling, pages 65–72, 2007.
    • [23] E. Copson. On a Generalisation of Monotonic Sequences. In Proceedings of the Edinburgh Mathematical Society, volume 17, pages 159–164, 1970.
    • [37] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by Simulated Annealing. Science, 220:671–680, 1983.
    • [38] D. E. Knuth and R. W. Moore. An Analysis of Alpha-Beta Pruning. Artificial Intelligence, 6(4):293–326, 1975.
    • [39] L. Kocsis and C. Szepesvari. Bandit Based Monte-Carlo Planning. In Proceedings of the 17th European Conference on Machine Learning, pages 282–293, 2006.
    • [40] S. Koenig and M. Likhachev. D* Lite. In AAAI/IAAI. AAAI Press, 2002.
    • [69] R. S. Sutton. Learning to Predict by the Methods of Temporal Differences. Machine Learning, 3(1):9–44, 1988.