Pick any player from the top 20....

Only Chess


chemist

Linkenheim

Joined
22 Apr 05
Moves
656208
12 Feb 14

Originally posted by Zygalski
There are another couple of points to consider.

Firstly, assuming these players aren't using engines, do you think that the longer such a player looks at a position the more engine-like their moves necessarily become?
The pre-1990's correspondence World Championship finalists had match rates which were very similar to the best OTB Super GM's. These day ...[text shortened]... an easily beat an unassisted GM, especially if the GM was unaware that an engine was being used.
The question about the "engine-ness" of moves has often been discussed. I think the summary is that a machine does its calculation differently from a human, and thus differences are here to stay. This is also corroborated by your statement that a player preparing with a computer will still not find all the "best moves" of a program. A forced tactical win of a pawn in seven moves is normally out of a human's reach...

e4

Joined
06 May 08
Moves
42492
12 Feb 14
1 edit

If you faced a computer move, didn't blunder and played a sensible reply, then
there is a good chance your move will be anticipated by a computer, thus
giving the appearance that you too have replied with a computer.

A good test would be for a reasonable player to play against a computer and see
what his match-up would be, using the computer he played against to do the match-up test.

I'm guessing his score would be quite high, giving the false impression he replied using a box.

Zygalski is correct, the top engines are now at the 3000 level, but let us
not assume that all the cheats on here are using the latest software.
The new engines are quite expensive (just checked on ChessBase!).
Cheats are not real chess players. They would not fork out good money for
the latest stuff.

Whilst I was there on chessbase I came across this.

Carlsen's secrets: How does he do it? (1)

"There is no World Championship which I found so disappointing as the one
that has just taken place. Carlsen won because he is the better athlete
and not the better chess player. He plays and plays and forces the
opponent, who is 20 years older, into the fourth and fifth hour.

The position is basically drawn, but he plays on and on and sits Anand out,
waiting for errors by the previous World Champion. This is from a chess
point of view unconvincing. Because he does not play but simply makes no
mistakes and waits for his opponent to make them.

Carlsen's games are very similar to those of a computer: bloodless and
soulless." Etc.


Anybody know who said this because it was followed with:

"This person, a friend and colleague, is in the meantime quite embarrassed
by his radio remarks, given on the spur and completely unprepared.

After discussions he has tempered his view. In any case he will probably prefer that we do not reveal his identity."

Read as far as there and then remembered I was writing a thread.

FL

Joined
21 Feb 06
Moves
6830
12 Feb 14

Originally posted by greenpawn34
Anybody know who said this ...
I think it was Robert Weizsäcker, "Honorary President of the German Chess Federation".

e4

Joined
06 May 08
Moves
42492
12 Feb 14

Cheers Fat L. He is my new hero. 🙂

chemist

Linkenheim

Joined
22 Apr 05
Moves
656208
12 Feb 14

Originally posted by greenpawn34
Cheers Fat L. He is my new hero. 🙂
...and I thought some GM said: the winner is the one who makes the last mistake but one...

S

Joined
27 Apr 07
Moves
119346
13 Feb 14

Forget the top 20. Do a comparison of my games, and we can all have a real laugh.

Joined
08 Apr 09
Moves
19531
13 Feb 14

I think one of the problems with the methodology is that human experts make too many "clearly" good moves. Many moves (recapturing, getting out of check) are more or less forced or can be justified by "simple" tactics. Computers and expert humans will find these moves because they are more or less "obvious" (to them, of course, not me, hence all the quotation marks).

The real strength of computers is in their huge tactical horizon. I therefore think that analysis of possible cheaters should focus on moves that are only justified by analysis with a long horizon. These are the moves that are extremely hard, or impossible to find for humans. So, one should filter out moves that are justified within a short tactical horizon, and then calculate the matchup rate for the remaining moves.
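The filter described above can be sketched in a few lines of Python. Everything here is hypothetical illustration: assume we already have, for each position of a game, the move actually played, the engine's best move at a short horizon, and its best move at full depth. Positions where the shallow and deep choices agree are treated as "obvious" and dropped before the match rate is taken.

```python
def filtered_match_rate(played, shallow_best, deep_best):
    """Match rate over positions where a short tactical horizon is NOT
    enough to find the engine's choice, i.e. where shallow and deep
    analysis disagree. Returns None if no such positions remain."""
    hard = [(p, d) for p, s, d in zip(played, shallow_best, deep_best)
            if s != d]
    if not hard:
        return None
    return sum(p == d for p, d in hard) / len(hard)

# Hypothetical four-position game fragment (moves in SAN):
played       = ["e4", "Nf3", "Qh5", "Bc4"]
shallow_best = ["e4", "Nf3", "d4",  "Bc4"]   # e.g. depth 5
deep_best    = ["e4", "Nc3", "Qh5", "Bc4"]   # e.g. depth 20
print(filtered_match_rate(played, shallow_best, deep_best))  # → 0.5
```

Positions 1 and 4 are discarded as obvious (shallow already finds the deep move), so only two positions count, and one of them matches.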

wotagr8game

tbc

Joined
18 Feb 04
Moves
61941
13 Feb 14

Originally posted by tvochess
I think one of the problems with the methodology is that human experts make too many "clearly" good moves. Many moves (recapturing, getting out of check) are more or less forced or can be justified by "simple" tactics. Computers and expert humans will find these moves, because they are more or less "obvious" (to them of course, not me, hence all the quotati ...[text shortened]... ed within a short tactical horizon, and then calculate the matchup rate for the remaining moves.
While I agree in principle, how would you go about testing in this way? It seems rather impossible to implement to me.

Cornovii

North of the Tamar

Joined
02 Feb 07
Moves
53689
13 Feb 14
1 edit

Originally posted by Ponderable
Well if we compare we should compare with Correspondence masters. Someone did that years ago.
Thread 146589

Here's one I did a couple of years ago.

D
Losing the Thread

Quarantined World

Joined
27 Oct 04
Moves
87415
14 Feb 14
1 edit

Originally posted by Marinkatomb
While I agree in principle, how would you go about testing in this way? It seems rather impossible to implement to me.
In positions with straightforward recaptures most other moves will change the score compared with best by a large amount. So you could measure the number of moves available that don't score less than some threshold value below whatever the engine's best move scores. This gives us a way of handling cases where one would expect a high risk of a human matching a computer due to there only being a few candidate moves.
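That measure is easy to make concrete. Here is a sketch, with an assumed threshold of 50 centipawns (the number is arbitrary): given the multi-PV scores for one position, count how many moves land within the threshold of the best score. A forced recapture yields one candidate; a quiet position yields several, and only the latter kind carries much evidence about who chose the move.

```python
def candidate_count(scores_cp, threshold_cp=50):
    """Number of moves scoring within threshold_cp centipawns of the
    best move in this position (scores from a multi-PV search)."""
    best = max(scores_cp)
    return sum(s >= best - threshold_cp for s in scores_cp)

# Forced recapture: every move but one loses heavily.
print(candidate_count([310, -180, -240, -500]))  # → 1
# Quiet position: several near-equal choices.
print(candidate_count([22, 15, 8, -10, -130]))   # → 4
```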

Child of the Novelty

San Antonio, Texas

Joined
08 Mar 04
Moves
618650
14 Feb 14

Almost all of the top 20 are chess engines and/or pure databases. I have played many of them and, to put it bluntly, they are not players of the strength which their rating indicates. Their opening choices show very little real chessplaying ability. Even a mediocre OTB player knows that the French Defense Exchange Variation and games starting with 1.d4, d5 2.Nc3 (with non-Veresov ideas) are usually played by players of limited chess understanding. Of course, these "players" do not have a clue since they are chess engines and/or pure databases. It is pathetic that someone will use these cheating measures just to win at chess.
BTW: David Tebb is NOT a chess engine user.

Joined
08 Apr 09
Moves
19531
14 Feb 14


Originally posted by Marinkatomb
While I agree in principle, how would you go about testing in this way? It seems rather impossible to implement to me.


I hoped this was clear from my previous post, but I'll elaborate. However, I have very little experience with engines myself, so there may be practical difficulties.

Let's assume the analysis is based on an engine with a depth of 20 moves. Currently, a large part of the game score is already discarded from the analysis, because the analysis is started only after the game is out of opening books. I propose to also discard the moves that are justified by a low-horizon engine analysis, say only about 5 moves (*) deep. What remains is not tactically justified at a low horizon, so it could consist of mistakes and blunders (typically human), positionally strong moves (typically human), and the computer moves. The remaining moves are then analysed with the long-horizon engine as usual.

(*) 5 is just a number. Testing should point out what number leaves enough moves for the statistics and discards enough of the "obvious" moves. Maybe better would be a limit on the number of analysed positions. The human mind is more limited in memory than in position depth (e.g. think of long forced sequences).


Originally posted by DeepThought
In positions with straightforward recaptures most other moves will change the score compared with best by a large amount. So you could measure the number of moves available that don't score less than some threshold value below whatever the engine's best move scores. This gives us a way of handling cases where one would expect a high risk of a human matching a computer due to there only being a few candidate moves.


This would indeed provide a way of filtering out "obvious" moves.
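Combining the two ideas might look like the sketch below. The 50 cp threshold and the minimum of three candidates are arbitrary assumptions, and the positions are invented: drop every position whose multi-PV scores leave fewer than a handful of reasonable moves, then compute the match rate on the survivors.

```python
def discriminating_match_rate(positions, threshold_cp=50, min_candidates=3):
    """positions: list of (played_move, engine_best, multi_pv_scores).
    Positions with few near-best alternatives (forced moves, recaptures)
    are discarded; the match rate is taken over the rest."""
    kept = []
    for played, best_move, scores in positions:
        near_best = sum(s >= max(scores) - threshold_cp for s in scores)
        if near_best >= min_candidates:
            kept.append(played == best_move)
    return sum(kept) / len(kept) if kept else None

# Hypothetical data: one forced recapture and two quiet positions.
positions = [
    ("Rxd8", "Rxd8", [300, -200, -250]),   # forced: dropped
    ("h3",   "Nd5",  [20, 15, 10, -40]),   # quiet: kept, no match
    ("Qc2",  "Qc2",  [35, 30, 28, 5]),     # quiet: kept, match
]
print(discriminating_match_rate(positions))  # → 0.5
```

The forced recapture contributes nothing either way, which is exactly the point: matching the engine there says nothing about assistance.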

Z

Joined
24 May 08
Moves
717
14 Feb 14

Obvious moves are also included in all the benchmark human games.
Why go to the bother of attempting to filter them out unless you think for some reason there might be far more in one large batch of games than another?
What subjective criteria do you use to say "well, that's a fairly obvious move" when looking at a chess engine's multi-PV eval?
It all sounds rather subjective and incredibly long-winded, especially when all the human benchmarks are so consistent at the highest level of play.

Z

Joined
24 May 08
Moves
717
14 Feb 14

Originally posted by caissad4
.
BTW: David Tebb is NOT a chess engine user.
Not trying to be funny, but I'm assuming you don't sit alongside him whilst he makes his moves.
Whatever, he is certainly playing strong chess against many engine users.

Child of the Novelty

San Antonio, Texas

Joined
08 Mar 04
Moves
618650
14 Feb 14

Originally posted by Zygalski
Not trying to be funny, but I'm assuming you don't sit alongside him whilst he makes his moves.
Whatever, he is certainly playing strong chess against many engine users.
David Tebb is a known OTB player and I have played him a number of times. I know you may not comprehend that a player's strength can be seen in their games, but it can. Have you ever played OTB, or are you just another "internet chessplayer"? 🙄🙄🙄