Table of Contents 129
New Games, Different Stages (H.J. van den Herik) 129
Technology Transfer from One High-Performance Search Engine to Another (J. Schaeffer) [abstract] 131
Fast, Memory-efficient Retrograde Algorithms (R. Wu and D.F. Beal) [abstract] 147
Depth by the Rule (G.McC. Haworth) 160
Discarding Like Pieces (G.McC. Haworth) 161
Ulf Lorenz: Controlled Conspiracy Number Search (D. Hartmann) 162
Information for Contributors 164
News, Information, Tournaments, and Reports: 165
The First Man-Machine Match in the History of International Draughts (N. Guibert) 165
The CMG 6th Computer Olympiad (H.J. van den Herik and J.W. Hellemons) 172
Deep Junior Wins the 18th World Microcomputer Chess Championship (F. Schneider) 173
YL Wins Lines of Action Tournament (M. Winands and Y. Björnsson) 180
ELP Wins the Chinese Chess Tournament (Jr-Ch. Chen and S-C. Hsu) 182
8QP Wins Amazons Tournament (H. Iida) 183
GF1 Wins Gipf Tournament (D. Wentink and J.W.H.M. Uiterwijk) 185
Shotest Wins Shogi Tournament (J. Rollason) 187
Report on the CMG Computer Games Workshop (J.W.H.M. Uiterwijk) 189
The BGN World Qualifier Match Deep Fritz vs. Deep Junior (J.W.H.M. Uiterwijk and K. Müller) 191
The Match Hübner vs. Deep Fritz (J. van Reek and J.W.H.M. Uiterwijk) 193
The 4th Advanced Chess Match (J.W.H.M. Uiterwijk) 195
CFP Machine Learning for "Games" of Perfect and Imperfect Information (Las Vegas, USA) 196
CFP Computers and Games 2002 (Edmonton, Canada) 197
Calendar of Computer-Games Events in 2001-2003 198
The Swedish Rating List (T. Karlsson) 199
How the ICGA Journal Reaches You 200
In the CMG Sixth Computer Olympiad, held in Maastricht in August 2001, a new game, named Gipf, entered the scene. The game was invented by Kris Burm in 1996. It has been played by a group of human players and, since August 2001, by two programs as well. The programs are called GF1 and Gipfted. A brief description of the game, the board, and the rules is given in this issue. Moreover, two competition games are published, although it is hard to assess their quality. The publication is meant as an incentive.
In 1983, the inclusion of two Scrabble papers in the special issue of SIGART Newsletter No. 80 had such an effect, since these publications inspired Brian Sheppard to build his World-Champion-calibre program Maven. Reading the two papers, he believed that he himself could do a better job, at least a better engineering job. Over the next eighteen years (1983-2001), he built a Scrabble program of which he claims that it plays close to perfection and that it definitely outperforms any human being. In a forthcoming paper in Artificial Intelligence - his first paper on this topic - he describes his struggle over the years. He debunks human myths and unlocks mysteries of the game. As a case in point we mention the following myth: "Experts use the general principle of avoiding placing vowels adjacent to bonus squares. The idea is that a heavy tile can be "double-crossed" on the bonus square, leading to a good score for the opponent. The theory goes that it is appropriate to sacrifice up to 6 points to avoid placing a vowel next to the bonus squares on the initial turn." The facts are: "The human estimate is grossly in error. Computer analysis showed that the largest such penalty is only 0.7 points! Most vowel-bonus square combinations had penalties of 0.01 points, meaning that you should avoid it only if there was no other consideration."
At the previous Computer Olympiad (London, 2000), we saw four newcomer games, viz. Amazons, Hex, Lines of Action and Shogi. Three of them were part of this year's Olympiad again (there was no Hex tournament) and it is tempting to investigate whether any progress in playing strength has been made. How difficult such a question is can best be explained by referring to the game of chess in the years 1980 up to 1994. In retrospect, the 15-year time span mentioned saw a steady progress in playing strength, but inspection of the newspaper reports, for instance on the well-known AEGON tournaments, shows that suspicious journalists refused to see any progress at all. Admittedly, for the layman it is difficult to discern progress, since it is usually small and is mostly expressed in details. Typically, progress becomes manifest only after a number of years.
For Lines of Action, the reporters of the tournament in this issue are convinced of the progress of their programs and implicitly claim that the current playing strength is within or above the class of the human world top. Sheppard called the level following World-Champion level the Abandon-Hope class; it is the last stage before solving the game. For Amazons we have only guesses at the playing strength, since there have been no contests in the domain between human beings and computer programs. In their optimism, the Amazons programmers believe that their programs play quite well, but our readers will await signs of evidence and ignore mere claims. Nevertheless, the program 8QP (by Johan de Koning) performed outstandingly by winning the tournament for the second time with a perfect score. For Shogi it is obvious that computer performances are still in their infancy. There is a long path with many obstacles to go for all Shogi researchers.
Above we implied that there is a clear difference in research approaches. Some researchers do experiments, build a program based on a new idea, and publish papers in a journal or address an audience at a conference. They are the missionaries of games research. Other researchers are true believers in an idea and in working out their brainchild. They foster the infant for years and years to allow it eventually to flourish. In the computer-chess world we have seen many such foster parents. Some of them even attained the rank of World Champion. Brian Sheppard is such an example in the world of Scrabble.
The Editorial Board of the ICGA Journal considers it its task to bridge the gap between researchers and engineers as distinguished above. In order to obtain insight into the world of games we encourage publications on new games, i.e., games that have not gained our attention before. In this issue we are happy to publish contributions in three domains, i.e., Chinese chess, checkers and international draughts. The first two contributions are closely related to chess and are worth reading in the context of how to apply computer-chess techniques in other domains, meanwhile improving these techniques. The contribution on international draughts briefly describes the route from a novice programmer to a proud participant in a world-class match against an international draughts Grandmaster.
The contribution by Nicolas Guibert deals with the first Man-Machine match in the history of international draughts: IGM N'Diaga Samb (Senegal) versus Buggy (France). Guibert is our guide in a fascinating match. On the one hand, IGM Samb cooperates with the Buggy programming team (imagine Kasparov cooperating with the Deep Blue team), and on the other hand Samb acts as the opponent to be defeated. So far, so good. But what if a programming error occurs at the moment the program has a clear advantage? Seasoned computer-chess programmers know what this means: the bug is the bug, it always comes unexpectedly and you have to take it as it is. The game is over: lost. However ... most programmers have difficulty in accepting what seems unavoidable. Maybe a draw is possible as a trade-off between a programmer's error and the program's advantage. The seasoning is clearly lopsided, and most reporters will suppress the details. As an Editor, I read the international draughts report and I liked it. Having seen the paragraph indicated above, I decided to leave it as is and not to suggest any changes. Chess and international draughts are in different stages of development.
For each game, phenomena such as the one described provide landmarks on the path to the Abandon-Hope level.
In the early days of computer-chess tournaments, e.g., in the first European Computer-Chess Tournament in Amsterdam 1976, the Tournament Director, David Levy, once allowed play to continue in a game where a promoted Pawn could not be transformed into any other piece. The Pawn turned out to be immobile and the promotion square could not be occupied by the opponent. In that game there was even a second Pawn promotion under the same circumstances. However, play still continued, since the player did not wish to move the promoted pieces and the opponent did not want to capture them. A quarter of a century later, chess has obviously passed this stage. I am sure that games such as Gipf, Shogi, and Chinese chess, and even Go, will follow, each game in its own style and at its own pace. In fact, we nowadays distinguish only four classes of playing strength, viz. (1) weaker than World Champion, (2) World-Champion level, (3) the Abandon-Hope class, and (4) the class of solved games. So far, the Computer Olympiad has abandoned only four games since they were solved: Connect-Four, Qubic, Go-Moku and Nine Men's Morris. For all other games, researchers and engineers are encouraged to try and reach the last class.
Jaap van den Herik
Technology Transfer from One High-Performance Search Engine to Another
Jonathan Schaeffer
University of Alberta, Edmonton, Alberta, Canada
This paper describes the high-performance alpha-beta-based search engine used in Chinook, the World Man-Machine Checkers Champion. Previous experience in designing a chess program was important in the initial design of the program. As it evolved, however, numerous application-specific modifications had to be made for the algorithms to excel at searching checkers trees. This paper describes the experience of transferring the technology used to develop a chess program to the creation of a high-performance checkers program.
Fast, Memory-efficient Retrograde Algorithms
Ren Wu and Don Beal
Palo Alto, USA / London, UK
In this paper, we discuss and compare algorithms used to generate endgame databases, focussing on resource consumption, particularly space requirements. We present a fast, memory-efficient retrograde algorithm that was used to generate Chinese-chess endgame databases. It requires only 1 bit of RAM per position, while still producing the full-depth information in the final file on disc. In comparison, earlier published algorithms either need a byte of RAM per position, produce only win/draw information, or slow down tremendously by swapping parts of the data to and from disc. We compare and contrast the algorithms published to date, based on their need for RAM, disc space and CPU time.
As an indication of where the combination of algorithm improvements and hardware advances has now taken the state of the art, we give an example and some estimates. A pair of 10-piece Chinese-chess endgame databases of 1 GB each was constructed by this algorithm in about 13 hours on a 1-GHz PIII machine with 256 MB of memory. The authors estimate that their algorithm makes it possible to produce a 5-man (western) chess endgame database in a matter of hours on a similar desktop PC. The algorithm also makes it feasible to produce 6-man chess endgame databases on such hardware, assuming 20 GB of available disc space.
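The space-saving idea can be illustrated on a toy game. The sketch below is my own construction, not the authors' implementation: retrograde analysis with one 'solved' bit and one 'frontier' bit per position in RAM, streaming (position, depth) records to a list that stands in for the on-disc file. Win/loss is then recoverable from depth parity, since losses surface on even passes and wins on odd ones.

```python
def retrograde(n, successors, is_terminal_loss):
    solved = bytearray((n + 7) // 8)          # 1 bit per position
    frontier = bytearray(len(solved))         # positions solved in the last pass

    def get(bits, p): return bits[p >> 3] >> (p & 7) & 1
    def put(bits, p): bits[p >> 3] |= 1 << (p & 7)

    disc = []                                 # stands in for the disc file
    for p in range(n):                        # depth 0: terminal losses
        if is_terminal_loss(p):
            put(solved, p); put(frontier, p); disc.append((p, 0))
    depth = 0
    while True:
        depth += 1
        new = []
        for p in range(n):
            if get(solved, p):
                continue
            succ = successors(p)
            if depth % 2:                     # odd pass: a win needs one move
                if any(get(frontier, s) for s in succ):   # to a just-lost position
                    new.append(p)
            elif succ and all(get(solved, s) for s in succ):
                new.append(p)                 # even pass: every move reaches a win
        if not new:
            return disc                       # remaining positions are draws
        frontier = bytearray(len(solved))
        for p in new:
            put(solved, p); put(frontier, p); disc.append((p, depth))

# Toy domain: take 1-3 stones from a pile; the player to move with no stones
# left (position 0) has lost. Multiples of 4 are lost, everything else is won.
succ = lambda p: [p - k for k in (1, 2, 3) if p - k >= 0]
depths = dict(retrograde(10, succ, lambda p: p == 0))
```

The real algorithm works at a vastly larger scale, of course, but the division of labour is the same: the RAM-resident bits only record *whether* a position is settled, while the full depth information accumulates sequentially on disc.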
This note corrects a previous treatment of algorithms for the metric DTR, Depth by the Rule.
Haworth (2000) addressed optimal winning strategies for chess endgames, given the constraint of the 50-move rule. It proposed the revival of the Depth to Zeroing-move metric DTZ, to be used alongside the familiar DTC (Depth to Conversion) and DTM(ate) metrics. It also introduced the new DTR metric, Depth by the Rule.
A depth under the DTX metric is denoted by dx here: dr is the least k under which a position can be won with best play under a k-move rule. dm ≥ dr ≥ dz, while dr = dz if dz ≥ maxDTZ for all subsequent endgames. Mednis (1996) highlights the Shamkovich-Wirthensohn (Biel, 1980) KQPKQ position wKe7Qf6Ph4/bKc7Qg3+w. This could have led to a 50-move draw claim with the sides minimaxing dz, but appears to avoid the claim if dc is minimaxed. DTR endgame tables (EGTs) determine whether such 50-move draw claims are avoidable.
In an ill-advised attempt to create an easily-understood DTR algorithm AL1, dr was initialised to dz across the endgame rather than selectively, and the following incorrect recurrence relations were adopted:
The relations assume wrongly that any move sequence minimaxing dr starts with a sequence minimaxing dz. KRRKB position wKa1Ra2b1/bKd1Bc1+w is a counterexample; dz = 1 ply (1. Rxc1+) while dr = 3 plies (1. Rab2 Ke1 2. Rxc1#) and min(dr of won successors) = 2 plies.
In fact, the standard retrograde-analysis algorithm which has already produced EGTs for the DTC, DTZ and DTM metrics (Thompson, 1986) can be used to produce EGTs to the DTR metric. The most familiar DTC version sets dc in the first possible phase, and the same holds for the DTZ version and dz. For the DTM metric, dm may be set in the first possible phase and potentially reduced later, or more efficiently (Wu and Beal, 2001), all positions with DTM = dm can be given their depth correctly in phase dm.
The DTR computation requires the parameter dzr (plies), "Depth to Zeroing-move while minimaxing DTR", rather than dz. Position P[dr, dzr] potentially backs up to P'[max(dr, dzr+1), dzr+1]. The proof that the retrograde algorithm is correct for DTM and DTR, as well as for DTC and DTZ, is arguably non-trivial and ultimately uses the fact that it is a correctly-initialised, discrete, finite process in which any error in the results must be preceded by an earlier error, an impossibility.
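In symbols (my reading of the note's bracket notation, with depths in plies), the back-up step is:

\[
P[d_r,\, d_{zr}] \;\longrightarrow\; P'\bigl[\max(d_r,\; d_{zr}+1),\; d_{zr}+1\bigr]
\]

that is, $d_{zr}$ grows by one ply at each retraction, and the DTR of the predecessor can never be smaller than its own distance to the zeroing move: $d_r(P') = \max\bigl(d_r(P),\, d_{zr}(P)+1\bigr)$.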
In conclusion, algorithm AL1 (Haworth, 2000) should be discounted as incorrect and AL2-3 are perhaps needlessly conservative and inefficient as they are constrained to identify won positions in increasing dr order.
Haworth, G.McC. (2000). Strategies for Constrained Optimisation. ICGA Journal, Vol. 23, No. 1, pp. 9-20.
Mednis, E. (1996). Advanced Endgame Strategies, esp. 94-96. Chess Enterprises. ISBN 0-9454-7059-2.
Thompson, K. (1986). Retrograde Analysis of Certain Endgames. ICCA Journal, Vol. 9, No. 3, pp. 131-139.
Wu, R. and Beal, D. (2001). Solving Chinese chess endgames by database construction. Information Sciences, Vol. 135, pp. 207-228.
The guideline of 'discarding like men' to estimate the merit of a chess position is well known. This note corrects a previous citation of it and compares it with some statistics.
After discussing 5-man positions, Beasley and Whitworth (1996) cite a commonly-held guideline: "More complicated positions can usually be evaluated by ignoring like pieces." They did so after stating the caveat "It is assumed that the position is 'typical': in other words, that both sides have organized their forces to reasonable advantage and that neither King is trapped against the side of the board".
Previously (Nalimov, Wirth and Haworth, 1999), I accidentally misquoted this excellent source, omitting both the 'usually' and the typical-positions caveat. With fulsome apologies to Beasley and Whitworth for the displaced misquote, this note examines to what extent the guideline holds good. The web (Tamplin, 2001) provides full statistics for the 2/4- and 3/5-man comparisons, some statistics, q.v. Table 1, for the 4/6-man comparison from the best estimates available (Wirth, 2000; Hyatt, 2000; Thompson, 2000), and the density of won positions in KQKQ and KQQKQQ after shallow wins are discounted.
Similarities between the 4/6-man percentages are not obvious, so indeed, untypical positions should be discounted. These are arguably positions where White or Black can or must force conversion to a successor endgame in a few plies. Current endgame tables show forced wins but not forced draws: some relatively short computations remain to be done to enable these "drawn positions with depth" to be discounted as well.
Table 1: Some 4-man/6-man comparisons of White's winning chances.
Beasley, J. and Whitworth, T. (1996). Endgame Magic. Batsford, London, UK. ISBN 0-713-47971-X.
Hyatt, R. (2000). ftp://ftp.cis.uab.edu/pub/hyatt/TB/. Server providing Nalimov's tables and statistics.
Nalimov, E.V., Wirth, C., and Haworth, G.McC. (1999). KQQKQQ and the Kasparov-World Game. ICCA Journal, Vol. 22, No. 4, pp. 195-212.
Tamplin, J. (2001). http://chess.jaet.org/endings/. Chess endgame site.
Thompson, K. (2000). http://cm.bell-labs.com/cm/cs/who/ken/chesseg.html. 6-man EGTs, maximal positions, maximal mutual zugzwangs and endgame statistics.
Wirth, C.W. (2000). Personal communication.
Controlled Conspiracy Number Search
by Ulf Lorenz
Ph.D. thesis, University of Paderborn (in German), 140 pp.
Reviewed by Dap Hartmann
To reduce the exponential growth of the game tree, virtually all strong chess programs use the α–β algorithm (or similar algorithms, such as NegaScout) as an enhancement to the MiniMax search. For well-ordered trees, the reduction in the search effort generally exceeds 90%, while the outcome of the search remains unaltered. However, the major shortcoming of this type of tree searching is that it provides no information about the reliability of the scores that are propagated up towards the root. As these scores ultimately decide which of the legal moves is the best move to play, it seems somewhat careless not to have a measure of their reliability. Moreover, generally only the precise score (evaluation) of the best move is known; the scores of the other moves are merely 'lower'.
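For readers new to the baseline the thesis improves upon, here is a minimal alpha-beta sketch over explicit max/min trees (my own toy formulation, not from the thesis). It returns the same value as plain MiniMax while skipping leaves that cannot affect the root; the `visited` list makes the pruning visible:

```python
# Leaves are ints; inner nodes are ('max', children) or ('min', children).
def alphabeta(node, alpha, beta, visited):
    if isinstance(node, int):                 # leaf: static evaluation
        visited.append(node)
        return node
    op, children = node
    if op == 'max':
        v = float('-inf')
        for c in children:
            v = max(v, alphabeta(c, alpha, beta, visited))
            alpha = max(alpha, v)
            if alpha >= beta:                 # beta cutoff
                break
        return v
    v = float('inf')
    for c in children:
        v = min(v, alphabeta(c, alpha, beta, visited))
        beta = min(beta, v)
        if alpha >= beta:                     # alpha cutoff
            break
    return v

# The classic textbook example: the minimax value is 3, and two of the
# nine leaves are never evaluated.
tree = ('max', [('min', [3, 12, 8]), ('min', [2, 4, 6]), ('min', [14, 5, 2])])
```

Note that the pruning says nothing about how *reliable* the value 3 is, which is exactly the gap conspiracy numbers address.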
In 1988, David McAllester (1988) published a paper on an ingenious new algorithm which establishes a reliable best move: Conspiracy Number Search (CNS). A Conspiracy Number (CN) is a measure of the number of leaf nodes that must change their value for the score at the root of the search tree to change. A move for which the score ultimately hinges on just a single leaf node is critically dependent on the correctness of that one particular call to the evaluation function. So, obviously, a higher conspiracy number indicates a more stable, more reliable move.
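A minimal sketch of the definition (my own toy code, following McAllester's recurrences): the conspiracy number for raising a node's minimax value to at least v is 0 or 1 at a leaf, the minimum over the children at a max node (one raised child suffices), and the sum over the children at a min node (every child still below v must rise):

```python
def cn_raise(node, v):
    """Leaves that must change value for node's minimax value to reach >= v."""
    if isinstance(node, int):                 # leaf: its static evaluation
        return 0 if node >= v else 1
    op, children = node
    counts = [cn_raise(c, v) for c in children]
    return min(counts) if op == 'max' else sum(counts)

# Example: the root's minimax value is 4; raising it to 5 takes only a single
# conspirator, so the root value is not yet very reliable.
tree = ('max', [('min', [3, 5]), ('min', [4, 8])])
```

The dual quantity for lowering the value swaps the roles of min and sum. A CNS program maintains a vector of such numbers per node, one entry per candidate value, which is why the graininess of the evaluation function matters so much in practice.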
Implementations of CNS in chess programs (for example, by Schaeffer (1989) and by Van der Meulen (1990)) exhibit solid playing strength and stability in tactical positions, but there is also a price to pay. It is difficult to control the expansion of the search tree, and at every node a vector of CN values must be maintained. The size of these vectors greatly depends on the resolution (graininess) of the evaluation function. And, as with every best-first search, the entire search tree must be stored in memory.
In his thesis, Ulf Lorenz attempts to overcome some of these problems, in particular the rapid expansion of the search tree. He developed the so-called Controlled Conspiracy Number Search (CCNS), which uses target-oriented searching to confine the expansion. The algorithm is carefully described in the 40 pages of Chapter 2. In the next chapter, the CCNS algorithm is parallelized. The achieved efficiency for 160 processors is about 30%, and the question remains whether this figure can be improved upon. Chapter 4 analyzes the propagation in a game tree of errors due to an imperfect evaluation function. The (appropriate) conclusion is that the use of Conspiracy Numbers is highly desirable. An important concept throughout this thesis is that of 'leafnode disjunct strategies'. Two strategies are said to be 'leafnode disjunct' when they share no common leaf nodes. They may, however, share internal nodes.
How well does a chess program using CCNS perform? The algorithm was implemented in the 'world-class chess program' ConNerS (single-processor version; the multi-processor parallel version is prefixed by 'P.'). Lorenz presents a table comparing ConNerS (using CCNS) with various other chess programs on the BT2630 test. But, as all these programs ran on different computer platforms and at various clock rates, the comparison is not very meaningful. The only tentative conclusion that may be drawn from this experiment is that the single-processor version of ConNerS (using CCNS) performs as well (pseudo-Elo = 2402) as some of the best commercial chess programs (Fritz, Hiarcs). The pseudo-Elo rating for P.ConNerS peaks at 2589 points when using 79 processors. Doubling that number of CPUs did not further improve the score. One should also keep in mind that BT2630 is basically a tactical (albeit very difficult) test. A similarly rather meaningless statistic was obtained from a test match of 100 games against Cheiron'97 (which achieved the lowest score in the BT2630 test). ConNerS (with CCNS) lost this match by 6 points. Running on hardware consisting of 160 Pentium II 450-MHz processors, P.ConNerS won the 10th Grandmaster Tournament (Category 11) in Lippstadt (2000) with a score of +6=3-2. An impressive result (Lorenz, 2000).
Clear descriptions and abundant examples make this thesis a pleasure to read. Unfortunately, the title is the only English sentence in the entire work. Not even an English summary is provided. This is a pity. I believe that the German educational system has revised its ordinance that a thesis must be presented in the German language. This silly rule has considerably disadvantaged German scientists in the past, as they had to rewrite their work in English to make it accessible to a world-wide audience. If I am not mistaken, and this rule was indeed abolished, I cannot understand why anyone would persist in presenting their scientific research in German. I therefore strongly encourage Ulf Lorenz to transform this fine Ph.D. thesis into articles for the ICGA Journal. That is not a trivial undertaking, but other people in the field deserve to learn the intricacies of Controlled Conspiracy Number Search.
Lorenz, U. (2000). P.CONNERS Wins the 10th Grandmaster Tournament in Lippstadt (Germany). ICGA Journal, Vol. 23, No.3, pp. 189-191.
McAllester, D.A. (1988). Conspiracy Numbers for Min-Max Search. Artificial Intelligence, Vol. 35, pp. 287-310. ISSN 0004-3702.
Meulen, M. van der (1990). Conspiracy-Number Search. ICCA Journal, Vol. 13, No. 1, pp. 3-14.
Schaeffer, J. (1989). Conspiracy Numbers. Artificial Intelligence, Vol. 43, No. 1, pp. 67-84. ISSN 0004-3702.