ICGA Journal

Vol. 27, No. 1 - March 2004

Table of Contents

Table of Contents 1
Chinese Chess (H.J. van den Herik) 1
Computer Chinese Chess (S-J. Yen, Jr-C. Chen, T-N. Yang, and S-C. Hsu) 3
Checking Indefinitely in Chinese-Chess Endgames (H-R. Fang, T-S. Hsu, and S-C. Hsu) 19
Notes: 38
Alpha-Beta Search Enhancements with a Real-Value Game-State Evaluation Function (J. Mandziuk and D. Osman) 38
Fischer Numbers (C.H. Hesse) 44
Reviews: 47
Know Your Enemy (D. Hartmann) 47
The Tenth Commandment (D. Hartmann) 48
Information for Contributors 51
News, Information, Tournaments, and Reports: 52
Humans vs. Computers: Vrijthof Maasvogels - JACK (D. Kleinman and H. Kuijf) 52
The 13th International Paderborn Computer-Chess Championship (E. van Reem) 55
ICGA Treasurer's Report for 2003 (The Treasurer of ICGA) 58
ICGA's Activities in Ramat-Gan, Israel (The Board of ICGA) 59
Rules for the 12th World Computer-Chess Championship (The Board of ICGA) 60
The 2001 and 2002 ChessBase Best-publication Award 62
Calendar of Computer-Chess Events in 2004 62
The Swedish Rating List (T. Karlsson) 63
How the ICGA Journal Reaches You 64

 



CHINESE CHESS

Xiang Qi is usually translated as Chinese chess. It is a descendant of the Indian game Chaturanga. Obviously, Xiang Qi is a sibling of our (Western) chess and as such it belongs to the family of games for which the pages of the ICGA Journal are open. In fact, since it is a chess game, articles on Xiang Qi were welcome in the preceding ICCA Journal, too. Over the last fifteen years, many Xiang Qi tournaments have been held as part of the eight Computer Olympiads. So, our readers should be familiar with Chinese chess, since the tournament reports have been published extensively.

In this issue of the Journal we have two articles on Chinese chess. They deal with (1) the current state of computer Chinese chess (by Shi-Jim Yen, Jr-Chang Chen, Tai-Ning Yang, and Shun-Chin Hsu) and with (2) checking indefinitely in Chinese-chess endgames (by Haw-Ren Fang, Tsan-Sheng Hsu, and Shun-Chin Hsu). It turns out that the rules for perpetual check are rather difficult. But there is more: even the names of the pieces are difficult. Your Editor assumes that in Chinese the pieces have one and the same set of names. However, in the translation into English different sets of authors use different sets of names. In the first article the pieces are called King, Advisor, Elephant, Rook, Horse, Cannon, and Pawn (with abbreviations K, A, E, R, H, C, and P). In the second article they are translated as King, Guard, Minister, Rook, Knight, Cannon, and Pawn (K, G, M, R, N, C, and P).

The Internet age allowed me to figure out what has happened to the Chinese-chess terminology. At http://horse1.gte.net/res1bup4/chess_intro.htm I found An Introduction to Chinese Chess by (Terence) Peter Donnelly, in which he provides the proper translations of the Chinese names: General (King), Mandarin (Assistant), Elephant, Chariot (Rook), Horse, Cannon, and Soldier (Pawn). In brackets are the names on which the standard abbreviations are based (the first series of abbreviations given above). Moreover, our (Western) readers may be challenged by the rendering of the names of the Chinese authors. In this issue preference is given to the authors' own choices.

Donnelly also elaborates on the meaning of the name. "Qi means a strategy game, and xiang is the character that appears on the so-called elephants of the black side." The intricacies of the game led him to the observation that "Chinese chess is more a tactical game than a strategic one." It even leads him to the personal conclusion that Chinese chess is all "middle game". Your Editor is not convinced of the truth of those statements, but they can be seen as a provocation, meaning that it is high time for the Western world to focus on Chinese chess. According to the authors of the articles, Chinese chess is the immediate successor of chess when looking at the next contest between a human World Champion and a chess-like computer program. The authors expect that the human World Champion will be defeated before 2010.

Besides the two articles on Chinese chess, this issue contains contributions on bridge, chess, give-away checkers, and Fischer numbers. The report by Eric van Reem on HYDRA shows how internationally oriented our computer-chess community is. HYDRA is affiliated with the United Arab Emirates. It won the 13th International Paderborn Computer-Chess Championship (IPCCC) and outclassed all other participants. The team members are Chrilly Donninger, Ulf Lorenz, and Erdogan Guenes. Competition is the magic word for increasing playing strength. Therefore, we are pleased that Donninger has not withdrawn from the scene as announced after Graz 2003, but has re-entered in a new setting.

Chess, Chinese Chess, Shogi, and Go are all prepared to reveal their secrets to persistent researchers. It will go step by step and progress will be slow, but soon honours will be given to the next computer World Champion (for chess) and the next Computer Olympiad winners (for the other games). If we may believe our own contributions, this process will continue at least until 2010, and for Go even longer. I am curious to see how much progress will be shown in Ramat-Gan.

Jaap van den Herik


COMPUTER CHINESE CHESS

Shi-Jim Yen, Jr-Chang Chen, Tai-Ning Yang, and Shun-Chin Hsu

This article describes the current state of computer Chinese chess (Xiang Qi). For two reasons, Chinese-chess programming is important in the field of Artificial Intelligence. First, Chinese chess is one of the most popular and oldest board games worldwide; currently the strength of a Chinese-chess program can be compared to that of human players. Second, the complexity of Chinese chess is between that of chess and Go. We assume that after DEEP BLUE's victory over Kasparov in 1997, Chinese chess will be the next popular chess-like board game at which a program will defeat a human top player.

In the article we introduce some techniques for developing Chinese-chess programs. In the Computer Olympiads of 2001 and 2002, the programs ELP and SHIGA were the top Chinese-chess programs. Although these two programs have roughly the same strength, they were developed with completely different techniques, as described in the article. The improvements of the best Chinese-chess programs over the last twenty years suggest that a human top player will be defeated before 2010.


CHECKING INDEFINITELY IN CHINESE-CHESS ENDGAMES

Haw-ren Fang, Tsan-sheng Hsu, and Shun-Chin Hsu

Retrograde analysis has been widely used to solve many problems. In Western chess, it has been successfully applied to construct 6-piece endgame databases (Thompson, 1996). The classical algorithm first determines all terminal positions, e.g., checkmate or stalemate in Western chess, and then iteratively propagates the values back to their predecessors until no propagation is possible. In the final phase the remaining unknown positions are declared draws. However, in Chinese chess, there are special rules other than checkmate and stalemate that end a game. In real games, some positions are adjudicated as wins or losses because of these special rules, but in the final phase of a typical retrograde-analysis algorithm these positions are mistakenly marked as draws. The most influential special rule is checking indefinitely. In this article, we thoroughly investigate and abstract this special rule. Finally, we give a practical solution for constructing endgame databases that adequately deal with it.
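
To make the back-propagation scheme concrete, here is a minimal sketch of classical retrograde analysis in Python. It is not the authors' implementation; the position collection and the predecessors, successors, and terminal_value callbacks are hypothetical placeholders supplied by the caller, and all values are from the point of view of the side to move.

    from collections import deque

    def retrograde_analysis(positions, predecessors, successors, terminal_value):
        # Classical scheme: label terminal positions, propagate values back to
        # their predecessors until no propagation is possible, then declare the
        # remaining positions draws (the step that, as the article explains,
        # goes wrong under Chinese chess' special rules).
        value = {}                        # position -> 'WIN', 'LOSS', or 'DRAW'
        queue = deque()

        # Phase 1: terminal positions (e.g., checkmate or stalemate).
        for pos in positions:
            v = terminal_value(pos)       # 'LOSS' for the side to move, or None
            if v is not None:
                value[pos] = v
                queue.append(pos)

        # Phase 2: backward propagation until a fixed point is reached.
        while queue:
            pos = queue.popleft()
            for pred in predecessors(pos):
                if pred in value:
                    continue
                if value[pos] == 'LOSS':
                    # pred has a move into a position lost for the opponent.
                    value[pred] = 'WIN'
                    queue.append(pred)
                elif all(value.get(s) == 'WIN' for s in successors(pred)):
                    # Every move from pred reaches a position won by the opponent.
                    value[pred] = 'LOSS'
                    queue.append(pred)

        # Phase 3: everything still unlabelled is marked as a draw.
        for pos in positions:
            value.setdefault(pos, 'DRAW')
        return value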


ALPHA-BETA SEARCH ENHANCEMENTS WITH A REAL-VALUE GAME-STATE EVALUATION FUNCTION

Jacek Mandziuk and Daniel Osman

The note presents results of applying several Alpha-Beta search enhancements in the game of give-away checkers with a real-value state evaluation function. In particular, the MTD-bi (bisection) algorithm is tested and compared (1) with MTD(f) and (2) with Alpha-Beta search enhanced with transposition tables and the history heuristic. The results show that in the real-value domain the MTD(f) algorithm becomes impractical and loses its superiority over MTD-bi. Several remarks concerning possible implementations of different replacement schemes in transposition tables are presented and discussed.
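
As an illustration of the bisection idea (a sketch only, not Mandziuk and Osman's code): MTD-bi narrows a real-valued interval around the minimax value with minimal-window searches at its midpoint, which converges geometrically, whereas MTD(f) adjusts its bound by the result of the previous search and, with real-valued evaluations, may creep towards the value in arbitrarily many tiny steps. The alphabeta callback, the initial bounds, and the tolerance below are assumptions.

    def mtd_bi(alphabeta, root, depth, lo=-1000.0, hi=1000.0, eps=1e-3):
        # Bisection-based MTD: run fail-soft searches with a window of width
        # eps at the midpoint of the current (lo, hi) interval until the
        # interval itself is narrower than eps.  `alphabeta(node, depth,
        # alpha, beta)` is any fail-soft alpha-beta search returning a
        # real-valued score.
        while hi - lo > eps:
            beta = (lo + hi) / 2.0
            g = alphabeta(root, depth, beta - eps, beta)
            if g < beta:
                hi = g        # failed low: the true value is at most g
            else:
                lo = g        # failed high: the true value is at least g
        return (lo + hi) / 2.0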


FISCHER NUMBERS

C.H. Hesse

This note introduces Fischer numbers and the Fischer graph. It aims to initiate the study of their properties by stating two conjectures and by anticipating questions regarding both. The conjectures refer to the small-world property of the Fischer graph and to the finiteness of the Fischer numbers of most chess players.
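
Purely as an illustration, and on the assumption that a Fischer number is defined analogously to an Erdős number (the distance to Bobby Fischer in the graph whose edges connect players who have played each other; the note itself gives the precise definition), such numbers can be computed with a breadth-first search. The game list below is a toy example.

    from collections import deque

    def fischer_numbers(games, source="Fischer"):
        # Breadth-first search over the 'has played against' graph.  `games`
        # is an iterable of (player_a, player_b) pairs; the result maps every
        # player connected to the source to his or her (finite) Fischer number.
        adjacency = {}
        for a, b in games:
            adjacency.setdefault(a, set()).add(b)
            adjacency.setdefault(b, set()).add(a)

        numbers = {source: 0}
        queue = deque([source])
        while queue:
            player = queue.popleft()
            for opponent in adjacency.get(player, ()):
                if opponent not in numbers:
                    numbers[opponent] = numbers[player] + 1
                    queue.append(opponent)
        return numbers

    # Toy example: Spassky played Fischer, and Kasparov played Spassky.
    print(fischer_numbers([("Fischer", "Spassky"), ("Spassky", "Kasparov")]))
    # {'Fischer': 0, 'Spassky': 1, 'Kasparov': 2}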


KNOW YOUR ENEMY

Nosce Hostem, Searching with Opponent Models

by H.H.L.M. Donkers

Ph.D. thesis Universiteit Maastricht, 2003
SIKS Dissertation Series No. 2003-13
ISBN 90-5278-390-X
http://www.cs.unimaas.nl/~donkers/pdf/noscehostem.pdf

Reviewed by Dap Hartmann

Most game-playing programs are symmetrical in their evaluation function: what is good for you is bad for your opponent, and vice versa. For example, in Chess it is generally good to have your Rook on the 7th rank because it invades the opponent's fortress and limits his mobility. Likewise, your opponent would also like to have a Rook on your second rank. The assumption is that both sides prefer the same things. But why would that be the case? When you analyze the games of any particular player, you notice a certain style of play - defensive or attacking, strategic or tactical - which is usually already apparent in the choice of opening. Studying your opponent and learning about his preferences can give you an important advantage. At the highest level of competition this is standard practice. Every world-class player knows his opponent's preferences and prepares accordingly. Kasparov even plays through hundreds of games in preparation for a simultaneous exhibition against a chess club.

There is no good reason why game-playing programs use symmetrical evaluation functions, other than that it is hard to know your opponent and difficult to codify such knowledge. I am aware of only two techniques (tricks) which influence the behaviour of a computer (chess) program when it is suspected that the opponent is much stronger or much weaker. The first trick is the 'contempt factor', which offsets the value assigned to a draw. When you play a much stronger opponent, you are happy to draw even when you are a Pawn up. And vice versa. The second trick is less well known. It is also based on a contempt for the opponent, and was implemented in HITECH. The idea is that the program will not make an inferior move merely to avoid problems which only occur very deep in the game tree. An inferior move immediately weakens your position, and any opponent can grab that advantage. When HITECH considered itself superior in strength, it would engage in this bluff poker, and play a move that was better (only) in the short term. It is the computer equivalent of a swindle. Both tricks use only a general estimate of the overall strength of the opponent.
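
As a rough sketch of the first trick (not HITECH's code nor any particular engine's), the contempt factor simply shifts the score the search assigns to a drawn position; the sign convention and the magnitude below are assumptions.

    def draw_score(own_rating, opponent_rating, contempt=0.25):
        # Value of a draw, in pawn units, from the program's point of view.
        # Against a stronger opponent a draw is welcome, so it is scored
        # positively; against a weaker opponent the program should press on,
        # so a draw is scored negatively.
        if opponent_rating > own_rating:
            return +contempt
        if opponent_rating < own_rating:
            return -contempt
        return 0.0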

Jeroen Donkers addresses the following question: how can (board) game-playing programs that use the minimax (or an equivalent) search algorithm benefit from knowledge about the opponent? He describes Opponent-Model (OM) search, an alternative to minimax search, which was developed independently by two research groups about ten years ago. The main idea is that, at nodes where the opponent is to move, OM search selects the move that the opponent would prefer. That is not necessarily the move that the program would choose when the colours are reversed. Simply stated, there are two evaluation functions: one for the program and one for the opponent. That sounds reasonable, but the real problem is how to get to know that opponent. Another important issue is that of implementation. Donkers discusses two ways in which OM search can be implemented, both with strong emphasis on efficiency and effectiveness. The first is a one-pass approach, the second uses α-β probes. Various tree-pruning algorithms are analysed for both implementations. A best-case analysis shows that one-pass β-pruning OM search is the most efficient implementation. But, of course, the real world is not paved with best-case situations. To measure the more practically useful average-case behaviour of the two β-pruning OM-search algorithms, Donkers performed extensive experiments on four different domains: random game trees, Lines of Action (LoA), Chess (the KQKR endgame), and Bao. The results are a mixed bag, and do not convince me that the OM-search implementations used in these experiments lead to better play. One of the (rather obvious) conclusions is that the quality of the evaluation function is very important to the effectiveness of OM search. This conclusion was also theoretically argued in the preceding chapter.
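
The following is a minimal, unpruned sketch of the OM-search value computation described above, not Donkers' implementation (which is precisely about doing this efficiently); the children callback and the two evaluation functions are placeholders, and both evaluations are assumed to score positions from the program's point of view.

    def om_search(node, depth, children, eval_own, eval_opp, own_to_move=True):
        # Returns (v_own, v_opp): v_opp is the plain minimax value under the
        # opponent's evaluation; v_own is the program's value of the line that
        # arises when, at opponent nodes, the opponent plays the move his own
        # evaluation prefers.
        kids = children(node)
        if depth == 0 or not kids:
            return eval_own(node), eval_opp(node)
        results = [om_search(kid, depth - 1, children, eval_own, eval_opp,
                             not own_to_move) for kid in kids]
        if own_to_move:
            # Program to move: both values are maximised, as in minimax.
            return max(r[0] for r in results), max(r[1] for r in results)
        # Opponent to move: he minimises according to his own evaluation, and
        # the program inherits its own value of that predicted move.
        return min(results, key=lambda r: r[1])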

Time for a new approach. Enter Probabilistic Opponent-Model (PrOM) search. The main idea is to take into account the uncertainty about the opponent's strategy. The opponent model contains several evaluation functions (representing different opponents), including the one used by the computer itself. At every move, these different opponents ('pure strategies') are mixed (stirred, not shaken) according to a probability distribution. A further refinement is to make PrOM learn such a distribution by studying the opponent. This wishy-washy behaviour makes it impossible for the opponent to deduce the strategy of the program, simply because no such (pure) strategy exists. Next, Donkers chooses the same two implementations as he did for 'traditional' OM search: a one-pass version and a version that uses α-β probing. Again, experiments are carried out to determine the effect of PrOM search on the level of play in random trees, LoA, and Bao. No Chess this time. A simulated game is used to test the (off-line) learning of opponent probability distributions. In general, PrOM search performs better than OM search. That in itself is scarcely a reason for celebration, as OM search was not doing so great in the first place. More importantly, PrOM search also performs better than straight minimax search. However, the computational costs of PrOM search are very high, and increasingly so at greater search depths.
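
A fragment illustrating just the expected-value step at an opponent node in PrOM search (pruning, probing, and the rest of the tree are omitted); the data structures and the two opponent types below are invented for the example.

    def prom_opponent_value(children_values, mixture):
        # `children_values` maps each move to (v_own, {model: v_model});
        # `mixture` maps each opponent model to its probability.  Every
        # modelled opponent picks the move minimising his own evaluation, and
        # the node value is the probability-weighted average of the program's
        # values of those predicted moves.
        expected = 0.0
        for model, prob in mixture.items():
            predicted = min(children_values,
                            key=lambda move: children_values[move][1][model])
            expected += prob * children_values[predicted][0]
        return expected

    # Invented example: two moves, two opponent types mixed 70/30.
    values = {"m1": (0.4, {"aggressive": -0.2, "solid": 0.1}),
              "m2": (0.1, {"aggressive": 0.3, "solid": -0.4})}
    print(round(prom_opponent_value(values, {"aggressive": 0.7, "solid": 0.3}), 3))
    # 0.7 * 0.4 + 0.3 * 0.1 = 0.31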

With this thesis, Jeroen Donkers presents a fine piece of ambitious research which, unfortunately, did not result in an efficient and effective algorithm to exploit knowledge about an opponent. In his own words: "The main outcome is that there exists a great risk when using opponent models in search." Despite this disappointment, I am convinced that his work will be regarded as a cornerstone - if not a foundation - for future research in Opponent-Model search.


THE TENTH COMMANDMENT

Advances in Computer Games 10: Many Games, Many Challenges

H. Jaap van den Herik, Hiroyuki Iida, and Ernst A. Heinz (eds.)

Kluwer Academic Publishers, November 2003, 398 pp.
ISBN 1-4020-7709-2
$ 170.00 / € 153.00 / £ 105.00

Reviewed by Dap Hartmann

What an amazing accomplishment. The proceedings of the 10th Advances in Computer Games conference (ACG-10) were ready before the conference even began. Compare that to, say, the 5th conference (then only reporting on Advances in Computer Chess), when the proceedings appeared two years after the conference.

ACG-10 was held in Graz (Austria), together with the 11th World Computer-Chess Championship. The proceedings contain 24 papers on a wide variety of games and techniques. The book is organised according to the type of game. There are six papers on Chess, six papers on Go, two papers on Checkers, and two papers on Lines of Action. The remaining eight papers each deal with a different game. A complete list of contributions alone would take up all the space available for this review. Instead, I will highlight just a few papers.

The ever-productive Jonathan Schaeffer and collaborators describe the construction of the 10-piece database for Checkers. Thanks to the continuing validity of Moore's Law, databases get bigger and bigger. Not so long ago, a 1 GB database was huge, and storing it was next to impossible. While CHINOOK's original database measured 5.6 GB, the present project has spawned a database of 148 GB in size, containing (in compressed form) over 10^13 board positions. If you counted one every second, you would still be counting after 40,000 years. That many positions cannot be addressed with a 32-bit key, and so the authors had to convert everything (including parts of the operating system) to 64-bit addressing. The ultimate goal is to solve the game of Checkers. In the words of the authors: "The reason for computing the 10-piece databases was to solve the game of checkers. The databases eliminate the bottom of the search tree. A separate project is building the top of the proof tree, searching forward from the root towards the databases. When the two search frontiers meet, checkers will be solved. At this point in time, it is too early to tell how soon this will happen."
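
For readers who want to see the arithmetic behind the move to 64-bit addressing (illustration only, not the project's actual indexing scheme):

    # A 32-bit key distinguishes about 4.3 * 10^9 indices, far fewer than the
    # more than 10^13 positions in the database; a 64-bit key is ample.
    positions = 10**13
    print(2**32, 2**32 >= positions)   # 4294967296  False
    print(2**64, 2**64 >= positions)   # 18446744073709551616  True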

Billings and Björnsson describe the two currently best Lines of Action (LoA) programs: YL and MONA. Both programs were developed at the University of Alberta, one of the hubs of the computer game-playing universe. I always enjoy reading such accounts. It is admirable to show the world what makes your program tick, especially when it is the best one around. The authors focus on the eternal question: what is the optimal mix of knowledge and search? Extensive knowledge limits the search depth, while limited knowledge may be insufficient even at very deep searches: "Many of the strategic properties perceived by MONA's evaluation function cannot possibly be uncovered within a practical search horizon." MONA has a more thorough evaluation function than YL, and that makes MONA 22 times slower than YL. YL's philosophy is: "Start fast and stay fast", while MONA's credo is "If you're slow anyway, take advantage of it." Together, the two programs are ideally suited to explore the trade-off between search and knowledge. The authors may have invited us into the kitchen, but they do not give away their recipes. They describe the various components of the evaluation function(s) (material, mobility, centralisation, etc.), but they do not reveal any real details. They discuss evaluation weights but give no numbers. Descriptions go like this: "Each line is evaluated using aforementioned features that are then combined into a single line value using a linear function". That is like giving the following recipe for cranberry muffins: mix together flour, sugar, baking soda, salt, egg(s), (butter)milk, oil, and cranberries; bake for some time in an oven at some temperature; enjoy. I hope that this paper presents only the early results of a more thorough investigation into the tradeoff between search and knowledge. The authors clearly possess the tools (YL and MONA) and the talent to obtain more concrete results than they present here. Fortunately, the phrase "the results obtained so far", found in the Conclusions, is an indication that there is more to come in the future.
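
The quoted sentence describes nothing more exotic than a weighted sum of feature values. A generic sketch follows; since the actual features and weights of YL and MONA are exactly what the paper withholds, every name and number below is invented.

    def line_value(features, weights):
        # Combine a line's feature values into a single number with a linear
        # function, as in the quoted description.
        return sum(weights[name] * value for name, value in features.items())

    # Invented weights for the kinds of terms the paper mentions.
    weights = {"material": 1.0, "mobility": 0.1, "centralisation": 0.25}
    print(line_value({"material": 2.0, "mobility": 5.0, "centralisation": -1.0},
                     weights))   # 2.0 + 0.5 - 0.25 = 2.25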

Winands, Van den Herik, and Uiterwijk also present an evaluation function for LoA. It is the heart of MIA (Maastricht In Action), a LoA program that has consistently finished right behind YL and MONA in all major tournaments. As in the Canadian paper, the components that make up the evaluation function are described, but no specifics are provided. It is another batch of mystery muffins. MIA looks at quads, walls, and connections. It computes the center-of-mass, mobility, uniformity, and various other notions. Four versions of MIA (with increasing complexity of the evaluation function) were pitted against each other in matches of 1000 games, and at four different (fixed) search depths. All in all, 24,000 games were played. Again, no details are provided: "the weights and details of the features may differ between different evaluators". Just like different muffins may taste different to different people. In addition, the evaluation functions contained a (sufficiently large) random factor to avoid playing identical games. Whereas the simple-minded YL is 22 times faster than the brainy MONA, MIA-IV (which has the most sophisticated evaluation function) is only 15 percent slower than MIA-I (with the simplest evaluation function). The results are unambiguous. At all search depths, MIA-IV consistently won at least 75 percent of its games against the older (less sophisticated) evaluators. It is interesting to see how the two rivalling groups refer to each other. Billings and Björnsson feel that MIA-III has caught up in strength with YL and MONA: "it [appears] that MIA-III [has] largely closed the gap that previously existed." And Winands et al. proclaim that "Until now the authors of YL and MONA have not published the details of their programs' evaluators. If their knowledge becomes available, combining their ideas with MIA-IV would probably further the playing strength significantly." That may very well be, but who are they going to play then? It is in the best interest of future competition that neither group has published their muffin recipe.
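
The "sufficiently large random factor" mentioned above is a common device in fixed-depth self-play experiments; a minimal sketch (the noise range and evaluator interface are assumptions, not MIA's actual values):

    import random

    def noisy_eval(evaluate, position, noise=0.05):
        # Perturb a deterministic evaluator slightly so that repeated games
        # between the same two fixed-depth programs do not all follow the
        # identical line of play.
        return evaluate(position) + random.uniform(-noise, noise)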

And then there is every researcher's nightmare. You devote a lot of time and energy to a project, only to find that someone else has beaten you to it. Sometimes you could (and should) have known, while at other times it is an honest surprise. It happened to the astrophysicist Fowler. In the early 1960s he gave a talk at Caltech in which he explained that quasars are just super-massive stars. The legendary Richard Feynman got up and said that such a star would be gravitationally unstable, and therefore impossible. Years before, Feynman had already done the tremendously complicated general-relativistic calculation, which filled a hundred pages. He had never bothered to publish it. The astrophysicist Chandrasekhar independently obtained the same result which, twenty years later, won him the Nobel Prize.

A similar thing happened to Helmstetter and Cazenave. In their paper 'Searching with analysis of dependencies in a solitaire card game', they confess: "After our experiments, it was a surprise that a game called Superpuzz, studied by Berliner and more recently by Shintani, is a variant of Gaps [the solitaire game that is the subject of this paper]. However, at the time this paper was written, we did not know precisely what work has been done on Superpuzz." Instead of writing their disclaimer, the authors could have figured out what Berliner and Shintani had done previously. And then they could have updated their paper to better reflect prior work on the subject. But they did not. Even so, it is a case of poor preparation. Because unlike Feynman, who solved a problem but never talked about it, Berliner talked about Superpuzz quite a lot. That was 10 to 15 years ago. At that time I even wrote a program that could solve some of the simpler problems. Many more people must have known about Superpuzz, so why was it a surprise to Helmstetter and Cazenave?

The 24 papers in this book fill 384 pages. At $170 that is 44 cents per page, which is not cheap. Most papers have two authors. Four people have contributed to three different papers each: Björnsson, Buro, Van den Herik, and Uiterwijk. Six papers are single-authored, and six papers carry three names. There are two papers which list six authors. Three papers share first place as the longest paper (20 pages), which suggests that contributions were limited to that maximum. The shortest paper is six pages long.

I close this review with the (by now) almost standard praise for a book in this series of Advances in Computer Games: very well edited, and beautifully published. Well done! Onwards with Volume 11.