Table of Contents

| The Road Map to Tera (H.J. van den Herik) | 65 |
| Exhaustive and Heuristic Retrograde Analysis of the KPPKP Endgame (C. Wirth and J. Nievergelt) | 67 |
| Knowledgeable Encoding and Querying of Endgame Databases (E.A. Heinz) | 81 |
| Toward Opening Book Learning (M. Buro) | 98 |
| Information for Contributors | |
| News, Information, Tournaments, and Reports: | |
| Computer Go: A Research Agenda (M. Müller) | 104 |
| Programme of the ACC9 Conference (H.J. van den Herik and B. Monien) | 113 |
| The Latest Information on the 9th World Computer-Chess Championship (B. Monien, R. Feldmann, and H.J. van den Herik) | 115 |
| FRITZ 5.32 Wins the 1998 Herschberg Best-Annotation Award (T.A. Marsland and Y. Björnsson) | 116 |
| The 1999 Herschberg Best-Annotation Award (The Board of ICCA) | |
| Calendar of Computer-Games Events in 1999 and 2000 | 118 |
| The Swedish Rating List (T. Karlsson) | 119 |
| How the Journal Reaches You | |
Once DEEP BLUE had beaten World Champion Kasparov, many felt that chess no longer provided a real research challenge and that the research scope should be broadened towards other games and applications. IBM, the financial father of DEEP BLUE, did so by selecting special applications for its RS/6000 supercomputer. Speed was no longer expressed in micro-seconds but in nano-seconds; storage no longer in MBytes but in GBytes. Commercially they have a product that sells itself. Meanwhile the computer-chess world has become too small for them. It is with some regret that we remark that neither DEEP BLUE nor DEEP BLUE JR. will be participating in the 9th World Computer-Chess Championship to be held in Paderborn, Germany.
After the retirement of CRAY BLITZ, DEEP BLUE is the second supercomputer program to leave the computer-chess scene. Has the development of supercomputers stopped? Are there no new increases of speed and storage to be expected? The answer is no. All reputable supercomputer vendors are busy making ambitious plans for the new century. Their line of reasoning is that the 1970s stood for kilo, the 1980s for mega, the 1990s for giga, and that after 2000 comes tera. Some of them are even announcing that in the year 2002 their machines will operate at 30 TFlops, i.e., 30 × 10^12 floating-point operations per second. Without any doubt the ply depth of computer-chess programs that may use such a machine will increase considerably. Hence new findings will emerge and it will be clear to all that computer-chess research has not come to an end. But what will the impact of the tera development be? First, the storage capacity will grow analogously: memory will be expressed in TBytes. Second, machines with less memory, or with cache memory insufficient for the task at hand, will use advanced compression techniques, especially when space is an obstacle and time is not.
A closer investigation of future developments brings three completely distinct contrasts to light: time versus space, heuristics versus proven facts, and vector processing versus parallelism. The time-versus-space issue is of a recurrent nature: depending on the progress of technology, the balance in the trade-off swings from one side to the other. Currently storage is the major obstacle. In the pages of this Journal we read that Wirth and Nievergelt need 42 CD-ROMs to save their findings, and that Heinz uses a special compression technique to make the 3-piece and 4-piece databases fit into the main memory of his machine.
This last issue brings up the fundamental question whether using heuristics, i.e., potentially sacrificing correctness, is a true sign of scientific progress. The following example may illustrate this question. Some twelve years ago, the computer-chess research group of the Delft University of Technology built the first 6-piece endgame database using heuristics, viz. KRP(a2)KbBP(a3). For the world of chess, especially for those Grandmasters interested in Timman-Velimirovic, it was a breakthrough. It was a breakthrough for some researchers in the computer-chess world too, but not for all. One of the founding fathers of chess endgame databases, Ken Thompson, once gave as his view: "I am not sure that my temperament allows me to do something where I am really not sure of the answer. I think that I have to do it from first principles, the 6 pieces with the two Pawns." This issue of the Journal deals with the above-mentioned fundamental question. The Editorial Board and the referees believe that heuristic approaches also constitute true science, and as a sign of progress the contributions are included.
The next challenge is to prove that the heuristics are true. For this purpose we need faster machines and more storage. Here we arrive at the third contrast. Will vector processing bring the required increase of speed? Or are we dependent on parallelism and, if so, what are the bounds? On the road map of the supercomputer vendors we vaguely saw that tera was followed by peta (10^15) and exa (10^18). No dates were mentioned. As to the unit of time per instruction, the road map was even vaguer: after nano-seconds and pico-seconds we found femto-seconds (10^-15) and atto-seconds (10^-18). The 21st century has not yet started, but the large-scale road maps have been printed and the only question remaining seems taken directly from chess-tournament competition: who will be the first to arrive at the next milestone? This and other topics will be discussed at the ACC9 Conference in Paderborn. I look forward to meeting all of you there.
H. Jaap van den Herik
C. Wirth and J. Nievergelt
Although the techniques for constructing chess endgame databases are well understood, it is a permanent challenge to extend the size of the endgames that can be computed with the resources currently available. Ken Thompson’s pioneering attack on 5-piece endgames was limited to those with at most one Pawn. Using our parallel retrograde analysis program RETROENGINE on a variety of hardware platforms, we have constructed all 111 databases necessary to compute the KPPKP endgame, i.e., King and two Pawns vs. King and Pawn.
Furthermore, we present a new approach called heuristic retrograde analysis that trades accuracy for space and time, i.e., it reduces the space and time needed to solve an endgame while accepting a small error rate. Experiments for the KPPKP endgame yielded reductions in the required space and time by factors of more than 50. The penalty incurred is that a (computer) player using the heuristic database plays suboptimally in less than 4 percent of the positions submitted.
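The retrograde analysis underlying such database constructions can be illustrated on a toy game. The sketch below is a minimal single-machine illustration in Python, not the authors' parallel RETROENGINE: it classifies every position of a small game graph as won, lost, or drawn for the side to move by propagating values backwards from the terminal losses.

```python
from collections import deque

def retrograde_analysis(positions, moves, terminal_loss):
    """Classify each position as WIN/LOSS/DRAW for the side to move.

    positions:     iterable of hashable game states
    moves[p]:      list of states reachable from p in one move
    terminal_loss: states in which the side to move has lost (e.g., mate)
    """
    positions = list(positions)
    preds = {p: [] for p in positions}
    for p in positions:
        for q in moves[p]:
            preds[q].append(p)

    value = {}                                          # p -> (verdict, depth)
    unresolved = {p: len(moves[p]) for p in positions}  # successors not yet known to win
    queue = deque()
    for p in terminal_loss:
        value[p] = ("LOSS", 0)
        queue.append(p)

    while queue:                           # breadth-first, so depths are nondecreasing
        q = queue.popleft()
        verdict, d = value[q]
        for p in preds[q]:
            if p in value:
                continue
            if verdict == "LOSS":          # one move to a lost position makes p won
                value[p] = ("WIN", d + 1)
                queue.append(p)
            else:
                unresolved[p] -= 1
                if unresolved[p] == 0:     # every move hands the opponent a win
                    value[p] = ("LOSS", d + 1)
                    queue.append(p)

    return {p: value.get(p, ("DRAW", None)) for p in positions}

# A subtraction game: remove 1 or 2 counters; whoever cannot move loses.
moves = {n: [m for m in (n - 1, n - 2) if m >= 0] for n in range(5)}
table = retrograde_analysis(range(5), moves, {0})
```

On this toy game the table shows, for instance, that the 3-counter position is lost in 2 for the side to move. The heuristic variant described in the paper trades such an exhaustive table for cheaper approximate rules, at the cost of a small error rate.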
The KPPKP database contains a wealth of interesting positions. We present 15 endgame studies of unconventional design and surprising difficulty.
Karlsruhe, F.R. Germany
Modern chess programs quickly become I/O-bound if they probe their external endgame databases not only at the root node but also at interior nodes of the search tree. This tendency increases at faster search speeds if the I/O speed does not scale accordingly. Hence, the foreseeable trends in CPU and I/O technology will not alleviate the probing problem but rather aggravate it. Instead of resorting to "quick and dirty" fixes such as stopping the accesses at a specific depth, our chess program DARKTHOUGHT probes its 3-piece and 4-piece endgame databases everywhere in the search tree at full speed without any I/O delays. It does so by loading the entire databases into the main memory of its host system.
To this end, we introduce a new domain-dependent encoding technique that reduces the space consumption of all 3-piece and 4-piece endgame databases to roughly 15 MBytes overall. A priori studies of Edwards' publicly available distance-to-mate tablebases provided the necessary feedback for our so-called knowledgeable encoding. We rely on the algorithmic recognition of rare exceptional endgame positions in order to achieve a compact representation of the stored data. The knowledgeable approach enables chess programs to pre-load all 3-piece and 4-piece endgame databases even on cheap personal computers with low memory capacities starting at 32 MBytes of RAM.
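The space savings available from such exception-based schemes are easy to demonstrate. The fragment below is an illustrative sketch only, far simpler than the knowledgeable encoding itself: it stores a database as a rule that predicts the common case plus an explicit table of the rare exceptions.

```python
def encode(values, predict):
    """Keep only the positions whose true value differs from the rule's
    prediction; positions the rule gets right cost no space at all."""
    return {i: v for i, v in enumerate(values) if v != predict(i)}

def decode(exceptions, predict, n):
    """Reconstruct the full table from the rule plus the exception list."""
    return [exceptions.get(i, predict(i)) for i in range(n)]

# Toy database: almost every position is a draw, a handful are wins.
values = ["DRAW"] * 1000
for i in (3, 271, 828):
    values[i] = "WIN"

rule = lambda i: "DRAW"        # predicts the overwhelmingly common case
packed = encode(values, rule)  # 3 stored entries instead of 1000
restored = decode(packed, rule, 1000)
```

The better the predicting rule captures the endgame knowledge, the fewer exceptions remain to be stored, which is the intuition behind calling the encoding "knowledgeable".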
In this contribution an opening-book framework for game-playing programs is presented. The research is motivated by the aspiration to play a sequence of games successfully, i.e., to avoid losing a game twice in the same way. We show how reasonable move alternatives can be found to deviate from previous lines of play. Variants of the algorithm are used by several of today's best Othello programs. They allow the programs to extend their opening books automatically.
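One way to picture such a framework is sketched below, under our own simplifying assumptions and not as Buro's actual algorithm: keep the moves of all previous games in a tree, back up the game results minimax-style, and at play time choose the child with the best backed-up score, so a losing line is abandoned as soon as a better alternative is known.

```python
class BookNode:
    """One position in the opening book."""
    def __init__(self):
        self.children = {}   # move -> BookNode
        self.score = 0.0     # backed-up result, from the first player's view

def add_game(root, moves, result):
    """Insert a finished game; result is +1 (first player won) or -1 (lost)."""
    path = [root]
    for m in moves:
        path.append(path[-1].children.setdefault(m, BookNode()))
    path[-1].score = result
    # Back up the result: the first player maximizes at even depths,
    # the opponent minimizes at odd depths.
    for d in range(len(path) - 2, -1, -1):
        pick = max if d % 2 == 0 else min
        path[d].score = pick(c.score for c in path[d].children.values())

def book_move(root):
    """First player's book choice: the child with the best backed-up score."""
    return max(root.children.items(), key=lambda kv: kv[1].score)[0]

book = BookNode()
add_game(book, ["e4", "c5", "Nf3"], -1)   # this line was lost once...
add_game(book, ["d4", "d5", "c4"], +1)    # ...this alternative was won
```

After these two games the book deviates to d4 rather than repeating the lost e4 line; evaluating unplayed sibling moves at the leaves, as the paper discusses, additionally lets the book propose reasonable deviations that were never played before.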
The field of computer Go has seen impressive progress over the last decade. However, its future prospects are unclear. This paper suggests that the obstacles to further progress posed by the current structure of the community are at least as serious as the purely technical challenges.
To overcome the obstacles, three possible scenarios are developed. They are based on approaches previously used in computer chess, and aim at building the next generation of Go programs.
This article is an adapted and shortened revision of a paper presented at the CG'98 conference and published under the same title as Müller (1999) in Computers and Games, Springer-Verlag, Heidelberg. We thank the publisher and editors for the opportunity to publish this amended version.
1 ETL Complex Games Lab, Tsukuba, Japan. Email: firstname.lastname@example.org.