ICGA Journal

Vol. 24, No. 1 - March 2001

Table of Contents

Table of Contents 1
New Games (H.J. van den Herik) 1
The Quad Heuristic in Lines of Action (M.H.M. Winands, J.W.H.M. Uiterwijk, and H.J. van den Herik) abstract 3
Synthesis of Chess and Chess-like Endgames: A Proof of Correctness (K.P. Coplan) abstract 16
Solving Renju (J. Wágner and J. Virág) abstract 30
Multigame - an Environment for Distributed Game Tree Research (D. Hartmann) 35
Information for Contributors 37
News, Information, Tournaments, and Reports: 38
The Match Van der Wiel vs. REBEL CENTURY 3.0 (J. van Reek and J.W.H.M. Uiterwijk) 38
The 10th International Paderborn Computer-Chess Championship (U. Lorenz) 42
The First Open Mediocrity Programming Competition (H. Iida and M. Hlynka) 46
The Second International Conference on Computers and Games (J. van Rijswijck and M. Müller) 49
The Sixth Computer Olympiad (Maastricht, 2001) 54
The Computer-Games Workshop (Maastricht, 2001) 55
Calendar of Computer-Games Events in 2001-2002 56
The Swedish Rating List (T. Karlsson) 57
Claude E. Shannon (1916-2001) (K. Thompson; H.J. van den Herik) 58
Herbert A. Simon (1916-2001) (H. Berliner; F. Gobet) 60
How the ICGA Journal Reaches You 64



Do we really need new games? This question quickly emerges whenever a discussion is initiated on the fundamentals of the International Computer Games Association Journal, the ICGA Journal. In 1999 the ICCA Journal left the preserve of chess to broaden its scope and to increase interest in game-playing programs in general. Many game-playing specialists with an affinity for computers had in mind games close to chess, such as checkers and Go, and their more distant relatives Chinese Chess and Shogi. They turned out to be wrong, since there are many more games around, as can be read in the lists of participating programs in the five Computer Olympiads held so far. There we have seen a total of 23 games. Four of them have disappeared, having been solved (Connect-Four, Qubic, Go-Moku, and Nine Men's Morris), and this issue reports on a claim that a fifth Olympic game has also been solved, viz. Renju.

However, a game can be solved without the exact strategy being known; the game is then said to have been solved ultra-weakly. For instance, the game of Hex is a first-player win, as proven by John Nash with a strategy-stealing argument, although the winning strategy itself is so far unknown. To complicate matters, a swap rule has been introduced, implying that after the first move the second player may choose which side to play. As a consequence of this rule the first player will always choose a move which does not look too good. For Hex this is difficult, since in that game every first move is either win-preserving or losing (no draws are possible). So, a need for strategic concepts to understand the power of a move remains on our research list.

From the games mentioned above it seems that two issues are important: (1) solving a game and (2) developing a winning strategy. In the past we have seen scientific progress in both areas. When looking at the restricted domain of endgames, especially chess endgames, we notice that our scientific vocabulary has been enriched by various metrics, such as distance to conversion, distance to mate, distance to zeroing move, and depth by the rule, as well as by an attempt to introduce a measure of beauty.

Yet, the original aim of computer-chess (and computer-checkers) researchers was to obtain insight into the brain processes of grandmasters playing chess or checkers. Of course, a second goal was to develop a program of World-Champion calibre. Accordingly, the emphasis of these researchers was on developing techniques that should eventually result in strong playing programs.

The growth of computer power – or, otherwise stated, the effect of Moore's law – means that in the future more games will be solved, or will be endowed with such powerful techniques that they will play at a level not understandable to human beings. Is this the end of computer-games research? In principle it is, if no new games are invented, developed, or found in ancient history. Thus any respected journal on games should look for new games, since 'to stand still is to move back'.

Therefore we are pleased to introduce the readers to the intricacies of a new game called Mediocrity. In February 2001 the First Open Mediocrity Programming Competition was held in Hamamatsu, Japan. Mediocrity is played by n players; for this competition n was set at 3. To stimulate the programming of games, the Information Processing Society of Japan (IPSJ) always organizes a side event during its annual Programming Symposium, called Games and Puzzles Competitions on Computers (GPCC). In 2000 it was decided that Mediocrity would be one of the new games at the GPCC 2001. The rules, the tournament, and information on the participating programs are reported in this issue. Moreover, another new game, called Ackanomic, is referred to.

Playing games is fun and that must be the point of departure, but some games are brain teasing and then a computer can help. This is true for old games, such as Domineering or Dots and Boxes, as well as for the new ones mentioned above. Fortunately, the world of games is rather diverse. So, besides chess-like games, we have connection games, mancala games, racing games, games of chance, strategy games, and negotiating games.

All in all it is astonishing how many games there are, and it is fascinating that due to the current opportunities of computing and communication they are less of a mystery. Dakon has been solved by a human brain, but the proof was given by a computer. Many configurations of Kalah, including the official one, have been solved (see ICGA Journal, Vol. 23, No. 3); more solutions of Tchuka Ruma are expected to be next in line. Tchuka Ruma is a traditional game from Malaysia and the Philippines. It is played on a circular board with a ring of holes in it. One of the holes is called the 'Ruma'; it is larger than the other ones. At the start, all holes except the Ruma are filled with an equal number of stones. The goal of the game is to collect as many stones in the Ruma as possible. It is a one-player game and the rules for sowing are Awari-like. Tchuka Ruma can be played with a different number of holes and stones. The game can best be compared to Solitaire and Rubik's Cube. So far the game has been solved for configurations with ≤ 10 pits and up to 100,000 stones per pit. This is but one example of the many games in which mathematics and logic perform side by side. Other challenging games are Gipf, Tamsk, Zèrtz, Twixt, and Onyx.
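Small configurations of Tchuka Ruma are easy to search exhaustively, which illustrates why solutions of further configurations can be expected. The sketch below is a minimal depth-first solver in Python; note that the exact sowing rules it encodes are an assumption (the commonly published variant: relay sowing, a free choice of hole after the last stone lands in the Ruma, and a loss when it lands in an empty hole), since the description above only calls them Awari-like.

```python
def solve(holes, stones):
    """Depth-first search for a sequence of sowings that puts every stone
    in the Ruma. Returns the list of hole indices to play, or None."""
    ruma = holes                          # the Ruma sits after the last hole
    start = (stones,) * holes + (0,)
    total = holes * stones
    seen = set()                          # boards already proven hopeless

    def sow(board, i):
        """Sow from hole i; return the resulting board, or None on a loss."""
        board = list(board)
        hand, pos = board[i], i
        board[i] = 0
        while True:
            while hand:                   # drop stones one by one, cyclically
                pos = (pos + 1) % (holes + 1)
                board[pos] += 1
                hand -= 1
            if pos == ruma:               # ended in the Ruma: free choice next
                return tuple(board)
            if board[pos] == 1:           # ended in a previously empty hole
                return None
            hand, board[pos] = board[pos], 0   # relay: pick up and continue

    def dfs(board):
        if board[ruma] == total:
            return []                     # all stones collected: solved
        if board in seen:
            return None
        seen.add(board)
        for i in range(holes):
            if board[i]:
                nxt = sow(board, i)
                if nxt is not None:
                    rest = dfs(nxt)
                    if rest is not None:
                        return [i] + rest
        return None

    return dfs(start)
```

With memoization of failed boards, as above, the search handles boards well beyond the trivial ones, though the configurations with up to 100,000 stones per pit mentioned above call for more refined techniques.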

The exposition above almost indicates that the new century is starting with new ideas and new games. This might be the case, but with much sadness we would also like to pay our sincere respect to two giants of our games world: Claude Shannon and Herb Simon. Both passed away in the first quarter of 2001; both had an enormous impact on science in general. For them the world of chess was only an application domain for their theories. Shannon was a front-rank researcher in Boolean circuitry, the entropy of information, cryptography, and Artificial Intelligence. For Herb Simon the research domains were complementary, i.e., economics, rational behaviour, cognitive science, the physical-symbol system, and Artificial Intelligence. We are grateful for the contributions they made to our domain, as can be read in the Obituaries.

Finally, we return to the question: do we need new games? The answer is clearly affirmative. Only by analysing, programming, and attempting to solve new games are we able to develop new techniques and consequently to understand better the intricacies of the games, the secrets of computer technology, and the enormous range of applications for the techniques developed in our game park.

Jaap van den Herik



Mark H.M. Winands, Jos W.H.M. Uiterwijk and H. Jaap van den Herik1

Maastricht, The Netherlands


Lines of Action (LOA) is a two-person zero-sum chess-like connection game with perfect information. The goal of each side is to connect its own pieces. The state-space complexity and game-tree complexity are comparable with those of Othello. A LOA evaluation function usually consists of a combination of the following components: threats, solid formations, (partial) blocking, centralisation, material advantage, and mobility.

In this article we introduce the quad heuristic and disregard mobility. The quad heuristic is used in three qualities, viz. (1) to assess whether a position is non-terminal, (2) to contribute to the evaluation function, and (3) to support the quiescence search. The heuristic is based on quad counts and Euler numbers, and derived from the field of optical character recognition.

We show that an evaluation function using the quad heuristic outperforms evaluation functions based on the centre-of-mass approach and on (partial) blocking, i.e., evaluation functions without the quad heuristic. Moreover, it turns out that a quiescence search, which only looks at capturing moves that destroy or create connections, is quite effective when using the quad heuristic. Finally, we provide three conclusions and suggest three ways of improving the use of the quad heuristic in LOA.
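As a rough illustration of the kind of computation involved (a generic sketch, not the authors' implementation): a quad is a 2x2 window slid over the board, and counting the windows holding exactly one piece (Q1), exactly three pieces (Q3), and two diagonally placed pieces (Qd) yields, under 8-connectivity, the Euler number (the number of connected groups minus the number of holes) as (Q1 - Q3 - 2Qd) / 4.

```python
def euler_number(board):
    """Euler number of a 0/1 grid (objects minus holes, 8-connectivity),
    computed from quad counts as in optical character recognition."""
    rows, cols = len(board), len(board[0])

    def cell(r, c):
        # treat everything outside the board as empty (padding)
        return board[r][c] if 0 <= r < rows and 0 <= c < cols else 0

    q1 = q3 = qd = 0
    for r in range(-1, rows):             # slide a 2x2 window over the padded board
        for c in range(-1, cols):
            quad = (cell(r, c), cell(r, c + 1), cell(r + 1, c), cell(r + 1, c + 1))
            n = sum(quad)
            if n == 1:
                q1 += 1
            elif n == 3:
                q3 += 1
            elif n == 2 and quad[0] == quad[3]:   # the two pieces lie diagonally
                qd += 1
    return (q1 - q3 - 2 * qd) // 4
```

A side whose pieces yield an Euler number above 1 cannot all be connected, and since a move changes only the few quads containing the altered squares, the counts can be updated incrementally, which is what makes such a heuristic cheap inside a search.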

1 Universiteit Maastricht, Department of Computer Science, P.O. Box 616, 6200 MD Maastricht, The Netherlands. Email: {m.winands, uiterwijk, herik}@cs.unimaas.nl.



K.P. Coplan1

University of Bradford
Bradford, UK


An algorithm for synthesising correct and optimal game-playing programs in the domain of chess-like endgames from specifications is investigated and explained in detail. The central component of the algorithm is a routine that repeatedly transforms expanded win-in-n-ply predicates into equivalent but smaller expressions. The routine is designed to monitor its own execution in order to accelerate its progress through the problem space, drastically reducing execution time. The basis of this self-monitoring is to enable the reuse of computations, avoiding redundancy in both the synthesis process itself and its resulting output. The paper presents a formal proof of correctness for this routine. The problem of debugging implementations of the algorithm, due to the complexity of the predicates generated, is discussed. The assistance provided by using the formal proof to identify coding errors is explained.

1 School of Computing and Mathematics, Phoenix Building, University of Bradford, Richmond Road, Bradford BD7 1DP, England. Email: K.P.Coplan@bradford.ac.uk.




János Wágner1           István Virág2

Budapest, Hungary


Renju is a very old two-player board game with subtle tactics and strategies. It comes from Japan and is a professional variant of Go-Moku. In many countries, such as China, Estonia, Japan, Latvia, Russia, and Sweden, Renju is played on a high level of performance. Compared to Renju, the game of Go-Moku, mainly played in China and Japan, has rather simple rules. Go-Moku was solved by Allis in 1992. He proved that it is a won game for the player to move first. In this note we claim that Renju is a first-player win too. We describe our research approach and how we arrived at this game-theoretical value.

1 1193 Budapest Deák F. u. 1 VIII/32, Hungary. E-mail: wagjan@mail.datanet.hu.
2 Department of Inorganic and Analytical Chemistry, Eötvös Loránd University, P.O. Box 32, Budapest 112, H-1518, Hungary. E-mail: virag@para.chem.elte.hu.




Ph.D. thesis
by John W. Romein
Vrije Universiteit Amsterdam, 176 pp.
Postscript version available from: http://www.cs.vu.nl/~john/thesis

Reviewed by Dap Hartmann1

In the spirit of the new, broader scope of the ICGA, John Romein's Ph.D. thesis describes an environment in which a wide variety of one- and two-person games can be researched on parallel computing platforms. By creating a very high-level language, he supplies computer game researchers with a computing environment in which search experiments on distributed game trees can be carried out without the burden of having to deal with the intricacies of parallelism. The rules of most games can be readily described in this Multigame language. Such a program is then translated by a front-end compiler into a move generator for that game. Together with a user-supplied evaluation function in C code, the move generator is linked to the Multigame runtime system code, and compiled into a game-playing program. With this clever approach, parallel programming issues such as communication, synchronization, work and data distribution, and deadlock prevention are hidden from the programmer. This will undoubtedly come as quite a relief to many researchers. However, there is a performance price to pay, as the C code from the front-end compiler can be an order of magnitude slower than a hand-crafted move generator. But the user may choose to provide such C code himself, and still benefit from the pre-fab parallelism that the platform provides.

Multigame applies to one- and two-person board games with perfect information. 'Board games' is a flexible concept which may also include one-person games that can be mapped onto a board representation. An example is Rubik's Cube, where the six sides of the cube are projected on a board consisting of 54 squares, each representing one of the visible faces of the puzzle's sub-cubes. Programs written in the Multigame language are surprisingly compact. A move generator for the game of Domineering requires just 15 lines of Multigame code, while the move generator for solving Rubik's Cube can be represented in 462 lines. A complete move generator for the game of chess needs only 208 lines of code. For example, the following fragment generates knight moves:

knight_move =
find own knight,
pick up,
either turn 45 or turn -45,
not points at own piece,
put down.

For solving one-person games (puzzles), the Multigame platform uses the Iterative Deepening A* (IDA*) algorithm developed by Korf. For two-person games, there is a choice between the Alpha-Beta, NegaScout, and MTD(f) algorithms.
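For reference, IDA* can be sketched in a few lines (a textbook rendition in Python, not Multigame's engine code): each iteration is a depth-first search bounded on f = g + h, and the next bound becomes the smallest f-value that exceeded the current one.

```python
def ida_star(root, h, successors, is_goal):
    """Iterative Deepening A* (Korf): memory-light optimal search,
    given an admissible heuristic h and successors yielding
    (step_cost, child) pairs."""
    def search(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None                # pruned: report the exceeding f
        if is_goal(node):
            return f, path
        minimum = float('inf')
        for step_cost, child in successors(node):
            if child in path:             # avoid trivial cycles
                continue
            t, found = search(child, g + step_cost, bound, path + [child])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h(root)
    while True:
        t, found = search(root, 0, bound, [root])
        if found is not None:
            return found                  # a cheapest path to a goal
        if t == float('inf'):
            return None                   # search space exhausted
        bound = t                         # grow the bound to the next f-value
```

With h identically zero the scheme degrades to plain iterative deepening; the heuristic is what keeps each bounded pass small.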

After thorough descriptions and discussions of the Multigame language (Chapter 3) and the parallel search engines (Chapter 4), the meat is delivered in Chapter 5. Here, Romein presents the most innovative part of his research: Transposition-table Driven (work) Scheduling (TDS), which he successfully implemented for the IDA* algorithm. "The key idea is to partition the transposition table over the processors and to migrate a state to the processor that owns the corresponding table entry, rather than using work stealing." This idea seems counter-intuitive at first, because TDS requires a great deal of data communication. But the big advantage lies in the fact that the transposition table is now always local to the processor working on the transferred state, whereas traditional 'work stealing' approaches must access remote transposition tables. Instead of stealing work from the work queues of other processors, in TDS a processor waits for other processors to send it work, namely jobs on states that have signatures referring to that processor's transposition table. This way, no two processors will ever work on the same sub-tree concurrently. Another unusual aspect of TDS is that the processor which is assigned the work does not report back its results to the processor that supplied the work. Only when such a 'slave without a master' finds a solution at the current search iteration does it raise a global flag signifying its success. For this reason, TDS must synchronize the processors after each IDA* iteration. In several experiments, Romein convincingly shows that TDS clearly outperforms conventional work-stealing algorithms, typically by a factor of 2 to 3. Tested on systems with 2^N (N = 0…7) processors, TDS achieved a nearly linear speed-up with the number of processors. Running on 128 processors, TDS realized speed-ups between 109 and 122.
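The partitioning at the heart of TDS can be illustrated with a tiny sequential sketch (hypothetical names in Python; Romein's system is a real message-passing implementation): a state's signature determines the processor owning its transposition-table entry, and work migrates to that owner rather than being stolen.

```python
import hashlib

def owner(state, nprocs):
    """Home processor of a state: the one holding its slice of the
    partitioned transposition table, derived from the state's signature."""
    digest = hashlib.sha256(repr(state).encode()).digest()
    signature = int.from_bytes(digest[:8], 'big')
    return signature % nprocs

def tds_round(states, nprocs):
    """One communication round: every expanded state is pushed to its
    owner's local queue instead of processors stealing work."""
    queues = [[] for _ in range(nprocs)]
    for state in states:
        queues[owner(state, nprocs)].append(state)
    return queues
```

Each processor then expands only states whose transposition-table entries are local, so no remote table lookups are needed and no two processors ever expand the same sub-tree, at the price of the message traffic the review mentions.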

The big question left open is whether TDS can also be applied to two-person games, which do require back-propagation of search results. It is a bit disappointing that Romein has little more to say about this important issue than "More research is necessary to evaluate the applicability of TDS to this class of search algorithms." Even though the thesis as a whole is a nice piece of work, it is obvious that TDS is its saving grace, inspired perhaps by that kindred spirit who has motivated so many young computer-science students. Doing original research is hard work, and original ideas do not always present themselves when they are needed most. In candid passages in the acknowledgements, Romein admits that he was still full of ideas about improving the algorithms when the harsh reality of writing up the thesis struck. It was a task that he dreaded more than tracking down the cause of a core dump. But the latter is a deterministic problem for which a solution exists, while writing is a creative and non-deterministic activity where a single page of output may require more intellectual effort than a week's worth of debugging. Despite the wise words of Thomas Edison, that the process of inventing is one percent inspiration and 99 percent perspiration, one simply cannot succeed without that essential one percent. Maybe the Indonesian dinner Romein refers to should have come earlier in his thesis project. Nevertheless, the excellent results achieved with TDS in IDA* are of great significance, and researching its applicability to two-person game-tree search algorithms should be next on the agenda.

In closing, the rules to obtain this nice piece of research can simply be stated as:

wise_move =
find website,
pick up thesis,
print thesis.

1 Binnendoor 14, 2512 XX Den Haag, The Netherlands. Email: dap_hartmann@yahoo.com.



In the first quarter of 2001 two great scientists passed away. Both made seminal contributions to computer chess. We therefore considered it appropriate to honour them with obituaries, for which we would like to thank Ken Thompson, Hans Berliner and Fernand Gobet. – Ed.


Ken Thompson1

San José, USA

Claude Shannon, born April 30, 1916, died February 24, 2001, after a long struggle with Alzheimer's disease. No other person, except possibly Alan Turing, made such deep and fundamental contributions to the computer era. His contributions underlie every aspect of our digital society. He made major advances to the analysis and synthesis of digital circuits and he founded the fields of information theory and coding theory. He was a consummate gadgeteer. He built a maze-solving mouse in 1950, and he built the Ultimate Machine.

Of course, Shannon's greatest contribution was founding the field of computer chess. He wrote a paper with the first workable plan for a computer to play chess. In typical fashion, this paper contained not one, but two algorithms. Over the years, chess programmers have split into two cantankerous camps following each algorithm, brute force and selective search. Today it seems that the two methods have melded into a brute force search to varying depths depending on features found during search.

This is a microcosm of Shannon's life. He went where his interests took him - typically long before anyone else and before there was any practical use. Ultimately his ideas survived. Sometimes his contributions fell short and did not start a new field of science. By his own admission, he spent a lot of time on frivolous things. Above all he had fun.

On a personal level, Shannon was someone that you instinctively liked. He was always ready to talk to anyone about any subject. He was the special guest of the ICCA at the 1980 computer championship in Linz. He made a point of roaming the hall introducing himself to the programmers and learning what he could about the programs. He was just a nice guy. He will be missed.

On Neil Sloane's web page, you will find an excellent biography of Claude Shannon along with collected works. http://www.research.att.com/~njas/doc/ces5.html


Jaap van den Herik

Some researchers of the younger generation are privileged: they have met Claude Elwood Shannon, the founding father of computer chess. In some sense he completed a fine triumvirate over three centuries: Baron Wolfgang von Kempelen (who launched the first ideas on chess-playing automatons in 1769), Charles Babbage (who expressed the idea that the analytical engine could play games, in 1833) and Claude Shannon (who lectured on Programming a Computer for Playing Chess in New York, on March 9, 1949). The lecture was published in Philosophical Magazine (1950) and is still a source of inspiration. It is self-contained and pointed out the computer-chess research directions for the second half of the last century.

After his retirement as a Professor at MIT, Shannon remained interested in scientific progress and was eager to follow the advancements in computer games at the tournament sites. Together with his wife Mary Shannon he attended as Guest of Honour the third World Computer-Chess Championship in Linz 1980, the sixth World Computer-Chess Championship in Alberta 1989, and the First Computer Olympiad in London 1989.

For many researchers in Linz 1980 Shannon was founder, head and also front-runner of the computer-chess community, despite Ken Thompson's ingenious BELLE machine. Every day he attended the matches, gave interviews, and exchanged ideas with all persons who recognized him and addressed him. Indeed, he was easily accessible, always friendly and prepared to discuss a variety of topics. There, I learned about his relationship with Turing, e.g., their disjoint research on cryptography and secrecy systems. Both worked for Roosevelt and Churchill, but as researchers they were not allowed to cooperate. In the meetings they had, they did not speak on cryptography (not even after the war) and only a little on computer chess. Their main topic was what you could do with a brain and how a brain could be made mechanically.

In 1989, I was honoured that Dr. Shannon was session chair of a conference session in which I had a lecture. In his introduction he stated: "I think man is a machine of a very complex sort." Since Shannon has pointed out many times how to break down complex problems into tractable parts, I then knew for sure that my solved problem (i.e., the conference contribution) was only a small tractable part within his tractable problem parts.

The ICCA Journal is happy that we have given Claude Elwood Shannon due credit during his life by a special Shannon issue, viz. ICCA Journal, Vol. 12, No. 4 (1989), in which we published an interview with his ideas on the impact of computer-chess research, a written tribute titled Thank You, Dr. Shannon, and many photographs, e.g., three with Shannon testing his common equation of juggling. However, even in a Games Journal it is not appropriate to emphasize Shannon's contributions to that field too much, since this man was far greater than our community. A long list of his contributions and publications is available on the web (see Ken Thompson's obituary). Here I would like to single out three landmarks, which clearly distinguish Shannon as one of the giant scientists of the twentieth century.

First, in 1936, only twenty years old, he wrote a master's thesis, A Symbolic Analysis of Relay and Switching Circuits, in which he showed how Boole's logical symbols could be treated as a series of on-or-off switches and how binary arithmetic could be carried out by electrical circuits. It was published in 1940 and won the Alfred Noble Prize of the combined engineering societies of the USA. According to H.H. Goldstine (1973): "one of the most important master's theses ever written … a landmark in that it changed circuit design from an art into a science."

Second, in 1948 Shannon published A Mathematical Theory of Communication, in which he distinguished the message from the medium. He introduced the notion of the entropy of information, and established fundamental limits on the efficiency of communication over noisy channels. Over time, his work became ever more important with the advent of wireless telephones, high-speed data networks, and the Internet.

Third, in the 1950s he successfully investigated many games and made other "frivolous" inventions. We mention only three of the many developments he contributed to: the construction of a mechanical maze-solving mouse, the invention of a two-seater version of his unicycle, and the common equation of juggling.

For all these, Shannon received many awards and laurels from institutions and universities all over the world, among them the National Medal of Science in 1966 and the Kyoto Prize in 1985. On behalf of our community we thank Dr. Shannon for all his contributions and Mrs. Shannon for her first-rate support of this giant scientist.


Hans Berliner2

Pittsburgh, USA

  No man is an Island, entire of it self;
every man is a piece of the Continent, a part of the main;

And therefore never send to know for whom the bell tolls,
It tolls for thee.
    John Donne
Meditation XVII

Herbert Simon thought of himself as a Renaissance man. He was involved in many enterprises, and had a knack for appearing at or near the top in most of them. In 1978, he received a Nobel Prize in economics for nearly forgotten work he had done decades before, and he slept in unheated barracks in China and learned to read Chinese in order to acquaint himself with the Mainland way of doing things while offering what he could.

His detractors would say he dabbled a great deal, could always be counted on to be there at the finish line, could provide a reprint of any of the thousands of papers he had ever written at a moment's notice, and was arrogant in a very refined kind of way.

For me, Herb Simon was the guy who in 1956 at his inaugural address as President of the Operations Research Society of America predicted that within 10 years a computer would be World Chess Champion unless it were barred from the competition. I had seen the articles on the Rand Corp. chess program authored by Newell, Simon, and Shaw that this prediction was based upon, and I laughed, because I knew too much about chess, and did not see anything resembling the kind of play that was required to even play at the Master level. Here again was a situation typical of Herb's career. Many looked in wonder, and many looked with disdain. To make such a prediction was not a very scientific thing to do. The program had never played a tournament game. Even when playing against people who were touted as being champions of something-or-another it showed rather serious weaknesses. But Herb always liked being there first.

In later years when I discussed this prediction with him (he vehemently rooted for computers ever after, even if, as DEEP BLUE, they did not play chess in the style of Cognitive Psychology), he always said that it was important for science to make such predictions, as they galvanized the field toward certain important goals. And he would point out Da Vinci's drawings of manned flight, and other things. To him it was important to push the ball along, and just how scientific it was could be left for future appraisals. Yet Herb was one of the first members of the National Academy of Sciences to come from the ranks of the soft sciences rather than from Engineering, Physics and Chemistry. As such he opened the door for things that this nation (US) sorely needed to examine.

I first met Herb at a meeting in 1967, where he was an invited speaker. I chatted with him after his presentation, and was surprised to find that he knew who I was. I indicated that after the formidable work of Greenblatt, I had been encouraged to try to write a chess program on my own time at IBM, where I was then employed. He was very interested, and when I mentioned that I did not intend to stay at IBM too much longer, he was quick to suggest that I should visit Carnegie-Mellon University and see what they had to offer there. In due course I did visit, and in 1969 entered there as a first-year graduate student in Computer Science. He was on my committee when I did my thesis which demonstrated how a computer could do very complicated analytical things in the Human Style. But typical of Herb, he was not involved in the nuts and bolts, and the main credit for my guidance must be given to Allen Newell.

While a graduate student at CMU, I had ample opportunity to interact with Herb and Bill Chase, who was a Post-Doc and later a professor in the Psychology Department there. It was there that I got involved in the human perception of chess, which built on the work of the Russians in the 1930s and of De Groot in the 1940s. I was very impressed by the machinery that clearly existed in the human head and made it possible to remember positions almost perfectly even after having seen them for only 2 seconds. What was even more interesting was that errors in reconstructing the position had a definite pattern to them. For instance, a Bishop might be misplaced in the reconstruction, but it would still be on the critical diagonal, and thus retain its function on that diagonal, whatever that function was. This insight has been incredibly important in my career, and is seen clearly in the program CAPS, in my book The System, and in other writings presently under way.

Herb was excellent in pointing these things out and in developing the interpretation that allowed further progress into insights of what our brain machine was really doing. For me, it was very exciting to watch and wonder. Of course, what was missing at that time was a method of emulating human learning, without which we would be faced with infinite twiddling of data by a human being who could just not grasp the complexity of all he was twiddling. Thus, until neural-net learning came along, there was really no hope of building a program that could play in the human style. Yet, without people such as Herb, it is likely that no one would have tried. Now, with the success of the brute-force method, it is doubtful anyone will ever try it. The fact that neural nets can make the difference is documented by the success of Tesauro's backgammon program, which learned strategies that almost no one knew, and became a top player by playing millions of games against itself. To do that in chess would have been much harder, but humans were doing it.

Herb Simon's role in all this is not at all clear. He was always there to discuss things with genuine interest, and had an amazing repertoire of facts that could be pertinent to the situation. One would come away from such a session with months of research leads to chase down. Not too many people could have done that. I will miss you, Herb.

"I come to bury Caesar, not to praise him.
The evil that men do lives after them,
The good is oft interred with their bones;
So let it be with Caesar."



Fernand Gobet3

Nottingham, UK


  Once upon a time when the world was young,
    Oh best beloved.
  There came to the banks of the Monongogo river,
    All muddy and brown,
Oh best beloved,
  A djinn who was one thing on the inside
    But many things on the outside.

Allen Newell, 1987

With the disappearance of Herbert A. Simon, we have lost one of the most original thinkers of the 20th century. Highly influential in a number of scientific fields – some of which he actually helped create, such as artificial intelligence or information-processing psychology – Simon was a true polymath. His research started in management science and political science, later encompassed operations research, statistics and economics, and finally included computer science, artificial intelligence, psychology, education, philosophy of science, biology, and the sciences of design. His often controversial ideas earned him wide scientific recognition and essentially all the top awards of the fields in which he researched, including the Turing Award from the Association for Computing Machinery, with Allen Newell (in 1975), the Nobel prize in economics (in 1978), and the Gold Medal Award for Psychological Science from the American Psychological Foundation (in 1988).

While his research spanned a dazzling number of fields, his interests were by and large centred upon a single question, that of bounded rationality: How do human beings, in spite of their limited processing capacities, manage to make reasonable decisions? Or, to use one of Simon's favourite metaphors, how can one find a satisfactory path in a branching maze, in spite of limited access to information? This central question, which originated in his Ph.D. thesis on organisations (published in 1947 as Administrative Behavior), is clearly apparent in most of his research, and is often combined with questions about the role of the environment in the adaptability of complex systems (these themes were developed in detail in The Sciences of the Artificial, 1969). Bounded rationality is also at the source of concepts such as heuristic search and satisficing (selecting solutions that are good enough, although perhaps not optimal), and was a strong motivation for the introduction of production systems in psychology; this strand of research, carried out in collaboration with the late Allen Newell, is described in detail in their classic book Human Problem Solving (1972).

Simon's fascination with human decision making found an important source of inspiration in the study of puzzles and games (in particular chess). How can human beings, in spite of their bounded rationality, become experts in these combinatorial tasks? Simon studied games from two main angles: artificial intelligence and psychology. His research into computer chess started in the mid-fifties in collaboration with Allen Newell and Cliff Shaw, at the time when they were also engaged in the development of the Logic Theorist and list-processing languages, which would later lead to the General Problem Solver. While their program did not reach a high standard of play, it allowed Simon to formalise several key ideas related to bounded rationality, such as the presence of goals, the dynamic adjustment of expectation, heuristic search, and satisficing. The concept of selective search was also explored in MATER, a program subsequently developed with Simon's son Peter and with George Baylor. MATER was able to solve relatively complex checkmating combinations by highly selective search, using a number of heuristics such as preferring checking moves, limiting the number of the opponent's replies, and so on. Finally, moving into game theory, Simon developed a mathematical model to explain the concept of error in two-person games, based upon the idea that slight differences in the quality of the moves chosen can cumulatively produce substantial differences in outcome (losing or winning the game).

Simon's interest in computer chess was clearly motivated by his desire to understand the mechanisms of human thought, and not by ambitions to develop strong programs per se. Even so, some of his ideas were later used in developing top-level programs. The best example is perhaps Hans Berliner's HITECH, which illustrates the role of pattern recognition in heuristic search, one of Simon's key ideas. When DEEPER BLUE beat Garry Kasparov in 1997, Simon was both pleased and disappointed. He was pleased that his famous 1957 prediction – that a computer program would beat the World Champion within ten years – was vindicated; as he was quick to point out, 40 years rather than 10 years represents a rather small error factor. He was also disappointed, however, that an approach based essentially upon brute force, rather than heuristic search, did the job.

In the psychological study of games, Simon took up questions raised by the seminal research of Adriaan de Groot on perception, memory, and decision making in chess. In collaboration with William Chase, he carried out a number of experiments which were destined to have a tremendous impact on cognitive psychology in general and on the study of expertise in particular. With colleagues at Carnegie Mellon, he worked on several computer models of chess players' perception and memory, such as PERCEIVER, MAPP, and CHREST. In both his experimental and modelling work, Simon was interested in how humans' limited cognitive capacity can cope with the exponential complexity of chess, proposing that pattern recognition both made heuristic search possible and explained such phenomena as intuition. Chess was only one of Simon's many lines of research into human cognition, which included research on creativity, scientific discovery, imagery, problem solving, learning, and memory.

A striking feature of Simon's personality was his immense curiosity. For example, he was always keen to learn new languages, artificial and natural, to find the solution to (sometimes trifling) problems, to keep track of developments in almost any science. He was of course very well aware of his insatiable curiosity, and jokingly entitled one of the many talks he gave on the Carnegie Mellon campus "The cat curiosity couldn't kill." It is likely that he found as much pleasure in the search process as in the solution.

To his collaborators and students, meetings with Simon were a unique experience: the breadth and depth of his knowledge, as well as his ability to combine information from various fields, was amazing. His informal style and his generosity made you sometimes forget that you were talking to one of the greatest scholars of the 20th century. In spite of his fame, he would treat even the least experienced undergraduate with respect. He would patiently listen to the most naive argument, and had a unique way of answering awkward questions without offending his interlocutor. He was a charismatic public speaker as well as a skilled polemicist – a skill he had already perfected in his high-school years.

A self-declared workaholic, he would also expect strong commitment from his collaborators, not least in time spent on task – a direct application of his finding from expertise research that it takes at least ten years to become an expert. While generous with his collaborators and students, he could also be extremely impatient with bad ideas. In particular, he abhorred theories stated in vague and informal terms, and was tireless in trying to convince social scientists to use formal approaches, preferably computer modelling. In his last years, he sometimes expressed disappointment that some of his favourite ideas did not have the lasting impact he had hoped for: classical economics was still making strong assumptions of optimality, in spite of his work on bounded rationality, and information-processing psychology was being eclipsed first by connectionism, and then by the neurosciences.

In the final chapter of his autobiography (Models of My Life, 1991, p. 367), Simon reflected: "In describing my life, I have situated it in a labyrinth of paths that branch, in a castle of innumerable rooms. The life is in the moving through that garden or castle, experiencing surprises along the path you follow, wondering (but not too solemnly) where the other paths would have led: a heuristic search for the solution of an ill-structured problem. If there are goals, they do not so much guide the search as emerge from it. It needs no summing up beyond the living of it."

Born on June 15, 1916, in Milwaukee, Herbert A. Simon received his doctorate from the University of Chicago in 1943. In 1937, he married Dorothea Pye, with whom he would have an extremely happy relationship for almost sixty-five years. He held research and faculty positions at the University of California (Berkeley) and the Illinois Institute of Technology, before going to Carnegie Tech (later called Carnegie Mellon University) in Pittsburgh, where he remained active in research and teaching until the time of his death, on February 9, 2001.

1 5410 Ligurian Drive, San José, CA 95138, USA. Email: ken@entrisphere.com.
2 Carnegie-Mellon University, School of Computer Science, Pittsburgh, PA 15213, USA. Email: Hans_Berliner@ux4.sp.cs.cmu.edu.
3 Allan Standen Reader in Intelligent Systems, ESRC Centre for Research in Development, Instruction and Training, School of Psychology, University of Nottingham, Nottingham NG7 2RD, England. Email: frg@psyc.nott.ac.uk.