General Game Playing
Artificial Intelligence researchers have for decades worked on building game-playing systems capable of matching wits with the strongest humans in the world. The success of such systems has largely been the result of years of knowledge-engineering effort on the part of the program developers, who manually add application-dependent knowledge to their programs.
The aim of General Game Playing (GGP) is to take this approach to the next level, that is, to create intelligent agents that can autonomously learn how to play many different games at an expert level without any human intervention, given only a description of the game rules. This requires that the agents learn diverse game-playing strategies without any game-specific knowledge being provided by their developers. A successful realization of this task poses interesting research challenges for artificial intelligence sub-disciplines such as knowledge representation, agent-based reasoning, heuristic search, and machine learning.
The most effective GGP agents typically rely on one of two approaches. The first uses traditional game-tree search augmented with an (automatically learned) heuristic evaluation function for encapsulating the domain-specific knowledge (Clune 2007; Schiffel & Thielscher 2007; Kuhlmann, Dresner, & Stone 2006). However, instead of using a set of carefully handcrafted domain-specific features in their evaluation, as high-performance game-playing programs do, GGP programs typically rely on a small set of generic features (e.g. piece values and mobility) that apply in a wide range of games. The relative importance of the features is then automatically tuned in real time for the game at hand. The second approach uses Monte-Carlo (MC) based search simulations, thus bypassing the need for a heuristic evaluation function. However, automatic discovery of search-control knowledge is of critical importance for such agents (Finnsson & Björnsson 2008).
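To illustrate the second approach, the sketch below shows flat Monte-Carlo move selection: each legal move is evaluated by the average outcome of random playouts from the resulting position. The game interface (`legal_moves`, `apply_move`, `rollout_result`) is a hypothetical stand-in for a position derived from a game description; competition-level agents use considerably more sophisticated simulation control than this.

```python
import random  # random playouts would use this in a real rollout

def monte_carlo_move(state, legal_moves, apply_move, rollout_result, n_sims=100):
    """Pick the legal move with the best average simulated outcome.

    `legal_moves`, `apply_move`, and `rollout_result` are hypothetical
    callbacks: rollout_result(s) plays the game out from s (e.g. by
    random moves) and returns the score from the mover's perspective.
    """
    best_move, best_avg = None, float("-inf")
    for move in legal_moves(state):
        successor = apply_move(state, move)
        # Average the outcome of n_sims independent playouts.
        avg = sum(rollout_result(successor) for _ in range(n_sims)) / n_sims
        if avg > best_avg:
            best_move, best_avg = move, avg
    return best_move
```

With enough simulations the noisy rollout averages concentrate around the true expected value of each move, which is why no hand-built evaluation function is needed.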
CadiaPlayer, the GGP agent developed under this project, uses the second approach, although we are actively investigating both types of approaches. It has already proven its effectiveness by winning the GGP competition (hosted by Stanford University and AAAI) in both 2007 and 2008.
Adaptive Real-Time Heuristic Search
The topic of this research is the study and development of new adaptive heuristic search methods. Such methods become increasingly efficient at solving tasks over time as more and more problem instances are encountered and solved, that is, they are capable of "learning from experience". The main focus of the research will be on developing new methods for learning search control in real-time heuristic search (both single-agent and adversarial search).
Search plays a central role in solving a wide range of problems in many fields of computer science and, in particular, artificial intelligence (AI). For example, recent successes in AI disciplines like planning and scheduling, game playing, and constraint programming are in large part due to the development of effective search techniques for exploring huge search spaces (Bonet and Geffner, 2001; Campbell et al., 2002; Selman, 2000). Because of the huge problem spaces, these methods must rely on well-informed heuristics for guiding the search process, and much time and effort is therefore typically spent on developing such heuristics. Heuristic search is a fundamental component in many software application systems and is, for example, widely used in both problem-solving and planning domains as well as in game-playing programs. Methods that can automatically learn effective search guidance are therefore of great practical value, and can improve both the efficiency and decision quality of such applications.
Research into learning of search control has in the past primarily been conducted as a part of automated planning research, the focus being on search-space reductions by using high-level domain-independent problem abstractions. The reduced problem space allows the use of knowledge-rich representation and reasoning techniques for guiding the search process, mainly in the form of search-control rules. However, in the past few years research interest in rule-based search guidance techniques has faded. This is in part because the focus of the planning community has shifted somewhat away from such knowledge-rich approaches and more towards heuristic state-space search planners (Zimmerman and Kambhampati, 2003). Rule-based search-control methods are not feasible for guiding heuristic state-space search. First, experience has shown the difficulty of producing rules that generalize well from one state to the next (mainly because states are represented much less abstractly than in knowledge-rich planners). Second, heuristic search usually explores orders of magnitude more states, so efficiency is of paramount importance; the overhead of manipulating complex search-control rules may easily outweigh the possible benefits.
On the other hand, there is renewed interest in learning of search control in heuristic state-space based search (Zimmerman and Kambhampati, 2003). Such methods place less emphasis on problem-space reduction, focusing instead on well-informed heuristics and clever search enhancements for selectively traversing the large state space. Automatic learning of search control has the potential of improving heuristic search solvers and making them applicable to more domains, hopefully allowing them to solve larger problems than they are capable of today.
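A classic example of the kind of adaptive real-time method discussed above is Korf-style LRTA*: the agent interleaves a shallow lookahead with acting, and updates its heuristic table along the way, so repeated trials in the same space become progressively better informed. The sketch below is a minimal illustration of that update rule, not a reconstruction of the methods proposed in this project; the state-space interface (`neighbors`, `cost`) is assumed.

```python
def lrta_star_step(state, neighbors, cost, h, goal):
    """One act-and-learn step of LRTA*.

    One-step lookahead: move to the neighbor minimizing edge cost plus
    heuristic, and raise h(state) to that lookahead value so the
    estimate is more accurate on later visits ("learning from experience").
    """
    if state == goal:
        return state
    best = min(neighbors(state), key=lambda s: cost(state, s) + h.get(s, 0))
    h[state] = max(h.get(state, 0), cost(state, best) + h.get(best, 0))
    return best

def lrta_star_run(start, neighbors, cost, h, goal, max_steps=1000):
    """Repeat LRTA* steps until the goal is reached; h persists across runs."""
    state, path = start, [start]
    for _ in range(max_steps):
        if state == goal:
            break
        state = lrta_star_step(state, neighbors, cost, h, goal)
        path.append(state)
    return path
```

Because the updated table `h` is kept between problem instances, later searches in the same space follow increasingly direct paths, which is exactly the "increasingly efficient over time" behavior the project targets.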
Efficient Navigation in Dynamic Multi-Agent Environments
Intelligent terrain navigation is one of the more fundamental AI problems that the game industry faces. In typical computer-game worlds, tens or hundreds of human- and computer-controlled agents are simultaneously navigating and interacting with each other in a dynamically changing environment. Although path-finding is a well-studied problem in computer science, existing methods generally assume a single moving agent, a static known world, and/or non-real-time response requirements. Unfortunately, none of these assumptions hold in complex game worlds.
The main objective of this research project is to develop more efficient algorithms for real-time terrain navigation in large dynamic environments with multiple moving agents. The immediate focus will be on applications to commercial computer games, although such techniques may also prove useful in other domains that require real-time decision making using limited computing and memory resources.
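As a baseline for the single-agent, static-world case that the above methods aim to generalize, here is a minimal A* path-finder on a 4-connected grid with the Manhattan-distance heuristic. The grid interface (`passable`, `width`, `height`) is a hypothetical stand-in for a game map; handling moving agents and a changing world requires replanning or incremental variants beyond this sketch.

```python
import heapq

def astar(start, goal, passable, width, height):
    """A* shortest path on a 4-connected grid.

    `passable(x, y)` marks walkable cells (an assumed map interface).
    Manhattan distance is admissible for unit-cost 4-connected moves.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]   # (f = g + h, g, cell)
    came_from, g = {start: None}, {start: 0}
    while frontier:
        _, cost, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:       # walk parent links back to start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < width and 0 <= ny < height and passable(nx, ny):
                ng = cost + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None
```

The practical difficulty this project addresses is visible even in this baseline: a full A* search per agent per frame is too expensive when hundreds of agents move through a world whose obstacles themselves change between frames.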