
Comments by Hans Aberg

Aberg variation of Capablanca's Chess. Different setup and castling rules. (10x8, Cells: 80)
💡📝Hans Aberg wrote on Sat, May 3, 2008 04:43 PM UTC:
H.G.Muller:
| But the point is that this does not alter the piece values.

Right, though that might just be a preferred way to structure the theory, because it suits human thinking. Essentially: define contexts, and attach values to them. First define piece values in neutral settings. Then observe that the bishop pair gets an added value. Then try to figure out values for good and poor bishops. And so on. By contrast, computers tend to be very poor at handling such contexts, so other methods might be more suitable for programs.

💡📝Hans Aberg wrote on Sat, May 3, 2008 11:50 AM UTC:
H.G.Muller:
| As piece values are only useful as strategic guidelines for quiet
| positions, they cannot be sensitive to who has the move.

At the beginning of the game, White is thought to have a slight advantage, and the first task of Black will be attempting to neutralize it. It might be possible to set a piece value on that positional advantage, just as when reasoning in terms of getting positional compensation for a sacrifice; somewhat less than a pawn, perhaps. If one knows the White/Black winning statistics, one might be able to set a value on it that way. It may not be usable for a computer program, as the program does not change sides, but only computes the relative values of moves.

💡📝Hans Aberg wrote on Sat, May 3, 2008 09:46 AM UTC:
H.G.Muller:
| Note that a Nash equilibrium in a symmetric zero-sum game must be the
| globally optimum strategy.

Chess isn't entirely symmetric, since there is in general a small advantage in making the first move. But for players (or games) adhering to a piece-value theory throughout as the main deciding factor, perhaps such a balance may occur. The only world champion who was able to do that, winning by playing with very small positional and material advantages, was perhaps Karpov. Kasparov learned to break through that heavily positional playing, in part by training against the Swedish GM Andersson, who specialized in a similar hyper-defensive style. A more normal way of winning is to make material sacrifices at some point in exchange for a strong initiative, particularly combined with a mating attack, and then to win either by delivering mate or through some material gain that simplifies into a winning end-game. Perhaps when determining piece values, such games should be excepted. And since computers are not very good at such strategies, perhaps such game exclusion occurs naturally when letting computers play against themselves.

💡📝Hans Aberg wrote on Fri, May 2, 2008 09:42 PM UTC:
H.G.Muller:
| Indeed, I plan to submit a paper to the ICGA Journal discussing the
| piece values and the empirical statistical method used to obtain them.

You might have a look at things like:
  http://en.wikipedia.org/wiki/Perfect_information
  http://en.wikipedia.org/wiki/Complete_information
  http://en.wikipedia.org/wiki/Nash_equilibrium
  http://en.wikipedia.org/wiki/Prisoner's_dilemma
Your claims are similar to the idea that chess players under some circumstances reach a Nash equilibrium. This might happen, say, if the players focus only on simple playing strategies in which piece values have an important role, and they are unable to switch to a different one. Note that the prisoner's dilemma leads to such an equilibrium when repeated, because players can punish past defections. In chess, this might happen if chess players are unable to develop a more powerful playing theory, say due to the complexity. - Just an input, to give an idea of what reasoning one might expect to support claims of predictions.

💡📝Hans Aberg wrote on Fri, May 2, 2008 05:21 PM UTC:
H.G.Muller:
| Fairy-Max is already able to play most Chess variants, and WinBoard
| protocol already supports those variants.

I just found Jose-Chess, which supports both the XBoard and UCI protocols; as it is open source, it may have a future worked out for this (right now it is somewhat buggy).

| Many engines are now able to play Capablanca-type variants under
| WinBoard protocol, some of them quite strong.

Perhaps the Dragon Knight D = K+N and what you call the Amazon M = Q+N should be included. I am thinking about a 12x9 variant R D N B A Q K M B N C R, which has the property that all pawns are protected, and which tries to keep a material balance on both king sides. On a 12x10 board, one might use a rule that pawns can move 2 or 3 steps, if that does not make them cross the middle line.

| I have no interest in convincing anyone to use my empirically derived
| piece values.

The normal thing would be that the values are just published, with indications of how they were derived. Different authors may have different values, if they use different methods to derive them.
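The claim that all pawns in the R D N B A Q K M B N C R array are protected can be checked mechanically. A minimal sketch, assuming D = K+N as stated above, the usual Capablanca compounds A = B+N and C = R+N, and M = Q+N; only the defensive reach of a back-rank piece onto the pawn rank directly in front of it matters here:

```python
# For a piece on the back rank, list the file offsets at which it defends
# a square one rank ahead (one-step diagonal/vertical moves and knight
# jumps are the only moves that reach the adjacent rank).
STEP = {
    'R': [0],        # rook: the pawn straight ahead
    'N': [-2, 2],    # knight: jumps of (1, +/-2) land on the pawn rank
    'B': [-1, 1],    # bishop: one diagonal step
    'Q': [-1, 0, 1],
    'K': [-1, 0, 1],
}
STEP['D'] = sorted(set(STEP['K'] + STEP['N']))  # Dragon Knight = K+N
STEP['A'] = sorted(set(STEP['B'] + STEP['N']))  # Archbishop   = B+N
STEP['M'] = sorted(set(STEP['Q'] + STEP['N']))  # Amazon       = Q+N
STEP['C'] = sorted(set(STEP['R'] + STEP['N']))  # Chancellor   = R+N

def unprotected_pawns(setup):
    """Files whose pawn is defended by no back-rank piece in `setup`."""
    files = len(setup)
    protected = set()
    for col, piece in enumerate(setup):
        for d in STEP[piece]:
            if 0 <= col + d < files:
                protected.add(col + d)
    return [c for c in range(files) if c not in protected]

print(unprotected_pawns("RDNBAQKMBNCR"))  # [] -> every pawn is protected
```

Running the check on the proposed array reports no unprotected files, consistent with the claim.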

💡📝Hans Aberg wrote on Fri, May 2, 2008 01:08 PM UTC:
H.G.Muller:
| If piece values cannot be used to predict outcomes of games, they
| would be useless in the first place.

If one is materially behind, one knows one had better win the middle game, or win material back before coming into the end-game, unless the latter is a special case.

| Why would you want to be an exchange or a piece ahead, if it might
| as frequently mean you are losing as that you are winning?

This is indeed what happens with programs that focus too much on material, or with weak players starting a piece ahead.

| Precisely knowing the limitations of your opponent allows
| you to play a theoretically losing strategy (e.g. doing bad trades) in
| order to set a trap.

Sure; this seems to be essentially the effect of a brute-force search on a very fast computer.

| In general, this is a losing strategy, as in practice
| one cannot be sufficiently sure about where the opponent's horizon will
| be.

In human tournament play, a top player either plays against opponents with a lower horizon, or against well-known opponents whose playing style has been thoroughly analyzed. In the first case there is not much need to adapt one's play, as one can see deeper; but in the latter case one certainly does choose strategies adapted to the opponent. Now, with computer programs, at least in the past, GMs were pitted against programs they did not know well, which moreover ran in special versions and on very fast computers when tried against humans. So humans did not get much of a chance to develop better strategies. But this may not matter if no strategy allows them to handle the theoretically faulty combinations used by the computer, which relies on a somewhat deeper search.

| Fact is that I OBSERVE that the piece values I have given below
| do statistically predict the outcome of games with good precision.

You only observe past events, not the future, and a statistical prediction is only valid for a true stochastic variable, or in situations that continue to behave as such. But don't worry about this:

If you find methods to duplicate the analysis by Larry Kaufman, and use them to compute values for various pieces on boards like 8x8, 10x8, and 12x8, then it seems simple enough to modify engines to play different chess variants (if protocols like UCI are extended to cope with them).

I think though the real test will be when humans play against those programs.

💡📝Hans Aberg wrote on Wed, Apr 30, 2008 08:24 PM UTC:
H.G.Muller:
| Chess as we play it is a game of chance...

The main point is that such a statistical analysis is only valid with respect to a certain group of games, as long as the players stick to a similar strategy. The situation is like that of pseudo-random numbers: in one case it was discovered that if successive generated numbers were plotted in triples, they fell onto a series of sloped planes. Such a thing can be exploited. So there results a cycle of making better pseudo-random generators and better methods to detect their flaws, without ever arriving at true random numbers. A similar situation arises in cryptography.
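The triples-on-planes flaw described above is the classic defect of IBM's RANDU generator. As a minimal sketch, assuming the standard RANDU recurrence x_{k+1} = 65539*x_k mod 2^31 (which may or may not be the exact generator meant above): every three consecutive outputs satisfy a fixed linear relation, which is why the plotted triples fall on a small family of parallel planes:

```python
def randu(seed, n):
    # RANDU: x_{k+1} = 65539 * x_k mod 2^31 (seed should be odd)
    xs, x = [], seed
    for _ in range(n):
        x = (65539 * x) % 2**31
        xs.append(x)
    return xs

xs = randu(seed=1, n=1000)

# The defect: x_{k+2} - 6*x_{k+1} + 9*x_k == 0 (mod 2^31) for every k,
# so all consecutive triples lie on at most 15 planes in the cube.
for a, b, c in zip(xs, xs[1:], xs[2:]):
    assert (c - 6 * b + 9 * a) % 2**31 == 0
```

The relation follows from 65539 = 2^16 + 3, which gives 65539^2 = 6*65539 - 9 (mod 2^31).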

In chess, the best strategy is to try to beat whatever underlying statistical theory the opponent is playing by. When playing against computer programs this is not so difficult, because one tries to figure out which material exchanges the opposing program favors and shuns, and then tries to play into situations where that evaluation is not valid. Now, this requires that the human player gets the chance to fiddle around with the program interactively for some time in order to discover such flaws (learning this through tournament practice is a slow process), plus a way to beat the computer's superior combinatorial skills if the latter is allowed to do a deeper brute-force search.

| Anyway, you cannot know what Kaufman thinks or doesn't.

His stuff looks like all the other chess theory I have seen, only he uses a statistical analysis as input, attempting to fine-tune it. By contrast, you are the only person I have seen who thinks of it as a method to predict the average outcome of games. You might benefit from asking him, or others, about their theories; this is how it looks to me.

You might still compute values and percentages and present them as your analysis of past games in a certain category, but there is a gap in the reasoning when claiming this will hold as a prediction for games in general.

💡📝Hans Aberg wrote on Wed, Apr 30, 2008 03:02 PM UTC:
Rich Hutnik:
| 2. When pitting one side against another, if the sides are unbalanced,
| this system should allow a balancing in points for handicapping reasons
| of the forces.

Games where both sides have equal material are also unbalanced, since in general there is an advantage in playing the first move.

💡📝Hans Aberg wrote on Wed, Apr 30, 2008 02:07 PM UTC:
H.G.Muller:
| Define 'suggestions'. What I get are a set of piece values from which
| you can accurately predict how good your winning chances are, all other
| things being equal or unknown.

You do not get a theory that predicts winning chances, as chess isn't random. If the assumption is that opponents will have a random style similar in nature to that in the analyzed data, then it might be used for predictions.

It is clear that Larry Kaufman does not think of his theory in terms of 'x pawns ahead leads to a winning chance p'. You can analyze your data and make such statements, but it is an incorrect conclusion that this will be a valid chess theory predicting future games; it only refers to the data of the past games you have analyzed.
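For concreteness, a statement of the form 'x pawns ahead leads to a winning chance p' could be sketched as below. This is purely illustrative: the Elo expected-score formula is standard, but the conversion rate of one pawn to roughly 100 rating points is an assumed rule of thumb, not something taken from either Kaufman or Muller:

```python
def elo_expected_score(rating_diff):
    # Standard Elo formula: expected score (win = 1, draw = 0.5)
    # for the side rated `rating_diff` points higher.
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

def win_expectancy(pawns_ahead, elo_per_pawn=100):
    # elo_per_pawn is an assumed conversion rate, for illustration only.
    return elo_expected_score(pawns_ahead * elo_per_pawn)

print(round(win_expectancy(0), 2))  # 0.5
print(round(win_expectancy(1), 2))  # 0.64
```

Under these assumptions a one-pawn material edge maps to an expected score of about 0.64, which is the kind of percentage claim being debated here.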

💡📝Hans Aberg wrote on Tue, Apr 29, 2008 09:49 PM UTC:
H.G.Muller:
| If you read Larry Kaufman's paper, you see that he continues
| quantifying the principal positional term ...

This is nothing new: it was done in classical theory. He just uses statistical input in an attempt to refine the classical theory.

| The piece values Kaufman gets are very good. And, in so far as I tested
| them, they correspond exactly to what I get when I run the piece
| combinations from shuffled openings.

You can sync your method against his values, to get piece-value suggestions. But that is just about all you get out of it.

💡📝Hans Aberg wrote on Tue, Apr 29, 2008 12:58 PM UTC:
H.G.Muller:
| Larry Kaufman has applied the method on (pre-existing) Human
| Grand-Master games, to determine piece values for 8x8 Chess.

If I look at:
http://home.comcast.net/~danheisman/Articles/evaluation_of_material_imbalance.htm
he says things like:
  [...] an unpaired bishop and knight are of equal value [...], so
  positional considerations [...] will decide which piece is better.
and also see the section 'Applications'.

In other words, he is using a statistical approach merely as a point of departure for developing a theory which combines point values with other reasoning, such as positional judgement.

The values he gives though are interesting:
  P=1, N=3¼, B=3¼, BB=+½, R=5, Q=9¾
where BB is the bishop pair.
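As a sketch of how such values are typically used, the quoted numbers can be dropped into a plain material count. The code below is my own illustration (the names and structure are not from Kaufman's article), with the half-pawn bonus added when a side retains both bishops:

```python
# Kaufman's values as quoted above, in pawn units.
VALUES = {'P': 1.0, 'N': 3.25, 'B': 3.25, 'R': 5.0, 'Q': 9.75}
BISHOP_PAIR = 0.5  # the BB bonus

def material(counts):
    """Material score for one side; counts maps piece letter -> number."""
    total = sum(VALUES[p] * n for p, n in counts.items())
    if counts.get('B', 0) >= 2:       # bishop-pair bonus
        total += BISHOP_PAIR
    return total

full_army = {'P': 8, 'N': 2, 'B': 2, 'R': 2, 'Q': 1}
print(material(full_army))  # 41.25
```

Note that on this scheme an unpaired bishop and a knight count exactly the same (3.25), matching the quoted remark that positional considerations then decide which piece is better.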

💡📝Hans Aberg wrote on Mon, Apr 28, 2008 08:35 PM UTC:
H.G.Muller:
| I am not sure what 'brute force' you are referring to.

See
  http://en.wikipedia.org/wiki/Computer_chess

| What do you mean by 'classical theory'?

What was used before the days of computers. A better term might be 'deductive theory', as opposed to a 'statistical theory': one that aims at finding the best play by reasoning, though limited in scope due to the complexity of the problem.

| What does it matter anyway how the piece-value system for normal Chess
| was historically constructed?

It is designed to merge with the other empirical reasoning developed for use by humans.

You might have a look at a program like Eliza:
  http://en.wikipedia.org/wiki/ELIZA
It does not have any true understanding, but it takes a while for humans to discover that. Computer chess programs are similar in nature, but they can outdo a human by searching through many more positions. If a human is compensated for that somehow (perhaps: allowed to use computers, to take back moves at will, or by playing a new variant), then I think it will not be so difficult for humans to beat the programs. In such a setting, a statistical approach will fail sorely, since the human will simply play towards special cases not covered by the statistical analysis. The deductive theory will be successively strengthened until, in principle, the ideal best theory emerges. This latter approach seems to have been cut short, though, by the emergence of very fast computers that can exploit the weakness of human thinking: the inability to make large numbers of very fast but simple computations.

💡📝Hans Aberg wrote on Mon, Apr 28, 2008 05:27 PM UTC:
H.G.Muller:
| It seems to me that in the end this would produce exactly the same
| results, at the expense of hundred times as much work. You would still
| have to play the games to see which piece combinations dominantly win.

Perhaps the statistical method is only successful because a brute-force search makes it possible to seek out positions not covered by the classical theory.

💡📝Hans Aberg wrote on Sun, Apr 27, 2008 10:33 PM UTC:
H.G.Muller:
| Why would you want to set the values the same? Because both a Pawn and
| a Rook advantage in the end-game is 100% won?

I describe how a piece-value theory might be developed without statistics. Since being one P or one R ahead generically wins, set them to the same value. Now this does not work in P against R, so set R's value higher than P's. Then continue this process in order to refine it, comparing different endings that may appear in play, setting aside special cases, always with respect to tournament practice, using post-mortem game analysis.

💡📝Hans Aberg wrote on Sun, Apr 27, 2008 08:25 PM UTC:
H.G.Muller:
| You seem to attach a variable meaning to the phrase 'a pawn ahead', so
| that I no longer know if you are referring to KPK, or just any position.

I tried to explain the idea of a general, or 'generic', situation by restricting attention to this example. In the general case, one could not classify the games as exactly.

| The rule of thumb amongst Chess players is that in Pawn endings that
| you cannot recognize as obvious theoretical wins/draws/losses (like all
| KPK positions, positions with passers outside the King's square etc.) a
| Pawn advantage makes it 90% likely that you will win.

I haven't seen anything like that in end-game books, or from players of the time when I was active, before the days of computer chess. I think that nowadays people learn much by playing against strong computer programs, rather than by learning classical theory. So I do not think such percentages relate to any classical chess theory, and possibly not to any formal mathematical statistics. Such a statistical approach, even if formalized, may still work well in a computer that by brute force can do a deeper look-ahead than a human, but may fail otherwise.

💡📝Hans Aberg wrote on Sun, Apr 27, 2008 05:09 PM UTC:
H.G.Muller:
|| Change the values radically, and see what happens...
| What do you mean? Nothing happens, of course.

You say that a pawn ahead is always a win, with only some exceptions. So is a rook ahead. So set the values of these the same: that does not work for predicting generic rook-against-pawn end-games.

| If they would occur 90% of the time, I would call them common, not special.

90% with respect to what: all games, those of GMs, those of newbies, or those of a certain computer program?

| So in your belief system, if a certain position, when played by expert
| players to the best of their ability, is won in 90% of the cases by white,
| it might still be that black has 'the better chances' in this position?

The traditional piece-value system does not refer to a statement like: 'this material advantage leads to a win in 90% of the cases'.

💡📝Hans Aberg wrote on Sun, Apr 27, 2008 10:23 AM UTC:
Me:
| So in general, one P ahead does not win, but in special circumstances 
| it can be,  ...
H.G.Muller:
| This is already not true. In general, one Pawn ahead does win in a Pawn ending. KPK
| is an exception (or at least some positions in it are).

My statement referred to this particular ending. For other endings, I indicated that it depends on playing strength, noting that a GM would make sure to win whenever possible, but a weaker player may prefer more material. It is a classification for developing playing strategies, not the theoretically best one.

| But it is still completely unclear to me how this has any bearing on piece values.

Change the values radically, and see what happens...

| KPK is a solved end-game (i.e. tablebases exist), so the concept of piece value is
| completely useless there. In solved end-games it only matters if the position is
| won, ...

This is true of all chess positions, not only end-games.

| ...and having KPK in a won position is better than having KQK in a drawn position.

So here the point system would be useless, if one faces the possibility of having to choose between those two cases. But the point system will say that KQ wins over KP unless there are some special circumstances, not that it will win with a certain percentage when players of the same strength make some random changes in their play.

| I don't see how you could draw a conclusion from that that a Pawn has a higher value
| than a Queen.

I have no idea what this refers to.

| Piece values is a heuristic to be used in unsolved positions, to determine who has
| likely the better winning chances.

Only that 'winning chances' does not refer to a percentage of won games between players of equal strength making random variations, which is what you are testing. It refers to something else, which can be hard to capture, given its development history.

💡📝Hans Aberg wrote on Fri, Apr 25, 2008 10:37 PM UTC:
H.G.Muller:
| It is too vague for me. Could you give a specific example?

Take K+P against K. It can only be won if the king is well positioned relative to the pawn, and not if the pawn is on one of the side files while the opposing king is well placed. So in general one P ahead does not win, but in special circumstances it can be. So a player who isn't very good at end-games will probably try to get more material, but a better one will know the distinctions, and an even better player will be able to try to play towards the most favorable end-game. A weak player may lose big in the middle game because they don't know how to keep the pieces together, but a GM might do the same deliberately, knowing that the alternatives lead to a lost end-game, so that trying something wilder is the better practical try. But a GM may not have much use for such long-range strategic skills against a computer that can search all positions deeper, becoming reduced, relative to the computer, to not being able to keep the pieces together.

💡📝Hans Aberg wrote on Fri, Apr 25, 2008 09:20 PM UTC:
Me:
| By experience, certain generic types of endgames will be empirically
| classified by this system. Those aspiring to become GMs study
| hard to refine it, so that exceptions are covered. Once learned,
| it can be used to instantly evaluate a position.
Someone:
| What do you mean by this? The result of end-games is not determined by
| the material present. There are KRKBPP end games that are won for the
| Rook, and that are won for the Bishop. Similar for KRKBP and KRKBPPP.
| So how would you derive a piece-value system of this?

It hinges on the word 'generic', meaning some general empirical cases. The cases you mention would then be special cases, studied separately as a refinement. These are excepted when defining the piece-value system.

💡📝Hans Aberg wrote on Fri, Apr 25, 2008 05:47 PM UTC:
Me:
| As I said, the outcome is decided by the best playing from both sides. 
| So if one starts to play poorly in the face of a material advantage, 
| that is inviting a loss.
Someone:
| We still don't seem to connect. What gave you the impression I
| advocated to play poorly? Problem is that even with your best play, it
| might be a loss. And as it is an end leaf of your search tree, which
| is limited by the time control, you have no time to analyze it until
| checkmate, or in fact analyze it at all. You have to judge in under a
| second if you are prepared to take your chances in this position,

The original context was how I think the classical piece-value system is constructed:

By experience, certain generic types of endgames are empirically classified by this system. Those aspiring to become GMs study hard to refine the system, so that exceptions are covered. Once learned, it can be used to instantly evaluate a position.

This is then not a statistical system.

💡📝Hans Aberg wrote on Fri, Apr 25, 2008 05:38 PM UTC:
H.G.Muller:
| But it is still not clear to me that a Human would not suffer as bad
| from [increasing the average number of moves].

This is clearly the difficulty.

| Furthermore, computers are not really totally ignorant on strategical
| matters either. But they cannot be found by search, and must be programmed
| in the evaluation.

The strength of programs at orthodox chess derives much from implementing human heuristics. So if the game is changeable, or rich in this respect, it will be more difficult for computers.

| So it would also depend on the difficulty to recognize the
| strategical patterns for a computer as opposed to a Human.

Computers have difficulty in understanding that certain positions are generally toast. They make up for it by being very good at defending themselves, so a good program can hold up positions that to humans may look indefensible. So either a variant should have better convergence between empirical reasoning and practice, or one should let the humans have access to a computer that does the combinatorial checking.

| And until the game strongly simplifies, the main strategic goal is
| usually to gain material, using piece values as an objective.

That seems optimistic :-). The GM advice for becoming good is to learn end-games.

| Unless the opponent really ignores his King safety. Then the 
| strategic goal will become to start a mating attack.

There are all sorts of tactics possible.

One thing one can try against a computer program is giving it material in exchange for initiative. Then, the human may need some assistance on the combinatorial side.

If the game is highly combinatorial, the computer is favored. So one can try to close it.

So a chess variant should admit stalling. If it is always possible to break into highly combinatorial positions, then the computer will be favored.

💡📝Hans Aberg wrote on Fri, Apr 25, 2008 01:00 PM UTC:
H.G.Muller:
| Why do you think the bigger board and the stronger piece make the game
| more strategical?

I said: if one increases the average number of moves in each position, then a full search may fail, as there will be too many positions. Then a different strategy is needed for success. If the number of moves is doubled and the positions are independent, then a 7-ply search over all of them requires 2^7 = 128 times as many positions to be searched. If there are on average 10 times more moves, then 10^7 times as many positions need to be searched.
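The arithmetic above is just the ratio of node counts between two uniform game trees of the same depth; a minimal sketch:

```python
def node_ratio(branching_multiplier, depth):
    # If every position offers `branching_multiplier` times as many moves,
    # a full fixed-depth search visits this many times more leaf positions.
    return branching_multiplier ** depth

print(node_ratio(2, 7))   # 128
print(node_ratio(10, 7))  # 10000000
```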

Strategic positions are another matter: indeed, in orthodox chess, trying to settle for positions where the advantage depends on long-term development is a good choice against computers, which tend to be good in what humans find 'chaotic' positions.

The design of a variant must be such that it admits what humans find strategic, and such that it is possible to play towards those positions from the initial position. I am not sure exactly what factors should be there. Just putting in more material may indeed favor the computer. In orthodox chess, one can stall by building a pawn chain, and then use the minor pieces for sacrifices to create a breakthrough. The chess variant must contain some such factors as well.

💡📝Hans Aberg wrote on Fri, Apr 25, 2008 08:57 AM UTC:
H.G.Muller:
| It never happened to you that early in the game you had to step out of
| check, and because of the choice you made the opponent now promotes with
| check, being able to stop your passer on the 7th?

Early in the game, most things happen by opening theory. And if one is getting an advantage like a passer, one should be careful not to let down the defense of the king, including calculating checks.

With those computer programs, a tactic that may work is to let down the defenses of the king enough that the opponent thinks it is worth going after it, and then exploit that in a counterattack.

| I think that if you are not willing to consider arguments like 'here I
| have a Knight against two Pawns (in addition to the Queen, Rook,
| Bishop and 3 Pawns for each), so it is likely, although not certain, that
| I will win from there', the number of positions that remains acceptable
| to you is so small that the opponent (not suffering from such scruples) will
| quickly drive you into positions where you indeed have 100% certainty....
| That you have lost!

As I said, the outcome is decided by the best play from both sides. So if one starts to play poorly in the face of a material advantage, that is inviting a loss. So a material advantage of one pawn must come about in circumstances where one can keep the initiative; otherwise, it might be better to return that material in the hope of getting the initiative.

| What is your rating, if I may ask?

I have not been active since the 1970s, just playing computers sometimes. About expert, I think.

💡📝Hans Aberg wrote on Fri, Apr 25, 2008 08:27 AM UTC:
H.G.Muller:
| Computers have no insight what to prune, and most attempts to make
| them do so have weakened their play. But now hardware is so fast that
| they can afford to search everything, and this bypasses the problem.

So it seems one should design chess variants where the average number of moves per position is so large that one has to prune.

| Making the branching ratio of a game larger merely means the search
| depth gets lower. If this helps the Human or the computer entirely
| depends on if the fraction of PLAUSIBLE moves, that even a Human
| cannot avoid considering, increases less than proportional. Otherwise
| the search depth of the Human might suffer even more than that of the
| computer. So it is not as simple as you make it appear below.

I already said that: the variant must be designed so that it is still very strategic to humans. - It is exactly as complicated as I already indicated :-).

Therefore, I tend to think that perhaps a 12x8 board might be better, with a Q+N piece, and perhaps an extra R+N piece added.

💡📝Hans Aberg wrote on Thu, Apr 24, 2008 08:21 PM UTC:
H.G.Muller:
| Chess is a chaotic system, and an innocuous difference between two
| apparently completely similar positions [...] can make the difference
| between win and loss.

This is only true if positions are viewed out of context.

Humans overcome this by assigning a plan to the game. The human method of analysis does not apply to all positions, only to some. For effective human play, one needs to steer into the positions to which the theory applies, and avoid the others. If one does not succeed in that, a loss is likely.

The subset of positions where such a theory applies may not be chaotic, then.
