
Releases: sbrunaugh/AiChessEngine

v5-13-6-1

07 Apr 00:21
Pre-release

Evaluation engine version: 5.
Model version: 13.
Decision engine version: 6.
Browser extension version: 1.

Rebuilt the entire evaluation engine from the ground up. It now bases the score of a move/position entirely on the ELO of the player making the move.
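A minimal sketch of the idea, assuming the training label is just the mover's ELO squashed into [0, 1] (the normalization bounds here are illustrative, not the engine's actual constants):

```python
def elo_based_score(mover_elo: int, min_elo: int = 500, max_elo: int = 3000) -> float:
    """Score a move purely by the strength of the player who made it.

    Assumption: a move played by a stronger player is, on average, a better
    move, so the label is the mover's ELO normalized into [0, 1].
    """
    clamped = max(min_elo, min(max_elo, mover_elo))
    return (clamped - min_elo) / (max_elo - min_elo)
```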

The engine didn't perform better overall, but that could be for a number of reasons:

  1. Not enough low ELO data.
  2. Not a consistent spread across ELOs.
  3. Even low ELOs are pretty good, so the engine doesn't understand blunders/bad moves.

v4-13-6-1

24 Feb 20:52
Pre-release

Evaluation engine version: 4.
Model version: 13.
Decision engine version: 6.
Browser extension version: 1.

Evaluation engine now changes its evaluation depending on whether it's in the early game or not. The early game gives a 0.15 score boost to each player for each move, which causes the engine to prefer known moves. Later in the game, the logarithmic evaluation strategy is used again.
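A sketch of the split, assuming a fixed move cutoff for "early game" and a simple log taper afterwards (the 0.15 boost is from the notes; the cutoff and the exact logarithmic formula are assumptions):

```python
import math

EARLY_GAME_MOVES = 10      # assumed cutoff; the release doesn't state the exact number
EARLY_GAME_BOOST = 0.15    # per-move boost from the release notes

def position_score(base_eval: float, move_number: int) -> float:
    """Boost early-game positions so the engine prefers known book moves;
    later in the game, fall back to a log-shaped evaluation (sketched here
    as a simple log taper, since the exact formula isn't given)."""
    if move_number <= EARLY_GAME_MOVES:
        return base_eval + EARLY_GAME_BOOST * move_number
    return base_eval * math.log(move_number + 1)
```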

Ran a new model on the new training data. Very quick version.

I still beat the engine (it made silly blunders around move 12), but it played a real opening which is pretty encouraging.

New idea: balance out each position's evaluation depending on the piece value difference. This would hopefully stop the engine from throwing away pieces for free.
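One way the balancing could look, using standard piece values and counting material straight off the board field of a FEN (the blend weight and the FEN-based counting are assumptions, not the repo's actual code):

```python
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}

def material_difference(fen_board: str) -> int:
    """White material minus black material, from the board field of a FEN.
    Uppercase letters are white pieces, lowercase are black."""
    diff = 0
    for ch in fen_board:
        if ch.lower() in PIECE_VALUES:
            value = PIECE_VALUES[ch.lower()]
            diff += value if ch.isupper() else -value
    return diff

def balanced_eval(model_eval: float, fen_board: str, weight: float = 0.1) -> float:
    """Blend the model's eval with raw material count so hanging a piece
    always costs something (the weight is illustrative)."""
    return model_eval + weight * material_difference(fen_board)
```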

v3-11-6-1

17 Feb 20:41
Pre-release

Evaluation engine version: 3.
Model version: 11.
Decision engine version: 6.
Browser extension version: 1.

Went through many iterations of the model. This one is quite big, trained on the same data set. The only difference is that I skewed the early-game evaluation in favor of white or black depending on whose turn it was.

The decision engine went through a big refactor. It now calls the model's REST API instead of creating a separate process, which speeds things up quite a bit.
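The call could look roughly like this; the endpoint URL and the JSON schema are assumptions, since the notes don't describe the service's interface:

```python
import json
import urllib.request

MODEL_URL = "http://localhost:5000/evaluate"  # assumed endpoint, not from the notes

def build_payload(fen: str) -> bytes:
    """Encode the position for the model service (the schema is an assumption)."""
    return json.dumps({"fen": fen}).encode("utf-8")

def evaluate_position(fen: str) -> float:
    """One HTTP round trip per evaluation, instead of spawning a whole new
    process (and reloading the model) for every position."""
    request = urllib.request.Request(
        MODEL_URL,
        data=build_payload(fen),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return float(json.loads(response.read())["score"])
```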

Very excited about the new browser extension. It makes interacting with the engine much faster. Basically, it watches the game on chess.com and logs the next command I need to send to the engine to the browser console. So I make a move, it prints the command, and I paste it into my terminal.

Overall, I'm having actual games against this engine. It still blunders, but it recaptures correctly and avoids checks, etc. I'm seeing some actual tactical gameplay for the first time.

v3-8-5

10 Feb 15:33
Pre-release

Evaluation engine version: 3.
Model version: 8.
Decision engine version: 5.

Models have a much lower loss. The engine is making pretty bad blunders though. I'm thinking I should reintroduce the future calculations or perhaps just keep expanding the neural network...

v3-7-5

10 Feb 07:11
Pre-release

Evaluation engine version: 3.
Model version: 7.
Decision engine version: 5.

Basically, just made the model structure bigger:
Input -> 64 -> 64 -> 32 -> 16 -> 8 -> 1.
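The funnel can be sketched as a plain forward pass; only the layer widths come from the notes, while the input size (64, one feature per square), ReLU activations, and random initialization are assumptions:

```python
import numpy as np

# Layer widths from the release: Input -> 64 -> 64 -> 32 -> 16 -> 8 -> 1.
# The input size of 64 and ReLU are assumptions; the notes only give widths.
LAYER_SIZES = [64, 64, 64, 32, 16, 8, 1]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((a, b)) * 0.1
           for a, b in zip(LAYER_SIZES, LAYER_SIZES[1:])]
biases = [np.zeros(b) for b in LAYER_SIZES[1:]]

def forward(x: np.ndarray) -> np.ndarray:
    """Run one position encoding through the funnel; ReLU on every layer
    except the final scalar output."""
    for i, (w, b) in enumerate(zip(weights, biases)):
        x = x @ w + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)
    return x
```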

Loss went down overall. I'm seeing numbers between 0.001 and 0.002 now.

I think I'll keep making the network bigger and see what happens. The most recent game was really encouraging: no huge blunders for 15 or so moves, and it developed pretty well, honestly.

v3-6-5

10 Feb 06:09
Pre-release

Evaluation engine version: 3.
Model version: 6.
Decision engine version: 5.

Evaluation engine now separates training data into two files. One for positions where it's white's move and the other for positions where it's black's move.
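The split can be sketched off the side-to-move field of a FEN; the row format here (FEN followed by a score on one line) is an assumption, not the repo's actual schema:

```python
def split_by_side(lines):
    """Partition training rows into white-to-move and black-to-move sets.

    Assumes each row starts with a FEN, whose second whitespace-separated
    field is 'w' or 'b' (the side to move).
    """
    white_rows, black_rows = [], []
    for line in lines:
        side = line.split()[1]
        (white_rows if side == "w" else black_rows).append(line)
    return white_rows, black_rows
```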

The model was rearchitected with two new layers and is now split into two (see above): the black model is trained only on black-to-move positions and the white model only on white-to-move positions.

Decision engine is back to only evaluating one layer of moves.

v2-4-4

09 Feb 21:33
Pre-release

Evaluation engine version: 2.
Model version: 4.
Decision engine version: 4.

The new model is trained on random samples of position data (not sequentially from the beginning of a game through to the end, like prior versions).
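A sketch of the sampling step, assuming games are just lists of encoded positions (the data layout and sample size are illustrative):

```python
import random

def sample_positions(games, sample_size, seed=42):
    """Draw a random sample of positions across all games, instead of
    feeding every position of one game in order, which lets the model
    memorize move sequences rather than learn positions."""
    all_positions = [pos for game in games for pos in game]
    random.seed(seed)
    return random.sample(all_positions, min(sample_size, len(all_positions)))
```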

The decision engine was refactored. There are still a lot of bugs to fix here; the check logic isn't good.

For the next iteration, I think I will create two models: one that always assumes it's black's turn, another for white. Then I'll invoke one model or the other depending on whose turn it is. I think this will also remove the need to look at future evaluations.

v2-3-3

09 Feb 15:26
Pre-release

Evaluation engine version: 2.
Model version: 3.
Decision engine version: 3.

Major improvement to the decision engine. It no longer evaluates just the set of next moves, but the positions two moves ahead (way more evaluations calculated). It then selects the next move that leads to the worst position for the opponent after the following reply (two moves ahead).

Some sort of efficiency logic needs to be introduced in the forward pass. It really doesn't need to keep expanding second-layer possibilities after it sees a spike in the opponent's evaluation. That would really speed up decision making.
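The two-ply selection plus the early cutoff could be sketched like this; `evaluate`, `pos.apply(move)`, `pos.legal_moves()`, and the pruning threshold are all assumed helpers, not the repo's actual interfaces:

```python
def choose_move(position, legal_moves, evaluate, prune_threshold=2.0):
    """Two-ply lookahead: for each of our moves, find the opponent's best
    reply, then pick the move whose best reply is worst for them.

    `evaluate(pos)` scores a position from the opponent's point of view.
    Replies stop being expanded once one exceeds `prune_threshold`, since
    that branch is already refuted (the efficiency idea above).
    """
    best_move, best_score = None, float("inf")
    for move in legal_moves:
        after_ours = position.apply(move)
        reply_score = float("-inf")
        for reply in after_ours.legal_moves():
            reply_score = max(reply_score, evaluate(after_ours.apply(reply)))
            if reply_score > prune_threshold:
                break  # opponent already has a strong reply; stop expanding
        if reply_score < best_score:
            best_move, best_score = move, reply_score
    return best_move
```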

I can still beat this engine, but it's definitely better than before.

Next step: I think a new model needs to be trained on random positions, not sequentially seeing all the positions in a single game over and over.

v2-3-2

09 Feb 04:02
Pre-release

Evaluation engine version: 2.
Model version: 3.
Decision engine version: 2.

Have a new version of the AI model. This one is better in that it doesn't score every move the same. I introduced cross-validation in the model training process. The outstanding issues are:

  1. The model doesn't think that less material is equivalent to losing. After sacking a rook and a bishop, it saw its position as better than before.
  2. The decision engine incorrectly thinks pawns can jump over pieces from their home rank.

v2-2-2

08 Feb 21:47
Pre-release

Evaluation engine version: 2.
Model version: 2.
Decision engine version: 2.

Lots of changes in this release:

  • Evaluation engine now uses a logarithmic function to determine a position's eval. The closer the position is to the end of the game, the more it scores in favor of the winner.
  • Notation converter now removes duplicates while generating training data.
  • Neural network has two new layers, filling out the funnel shape.
  • Trained on only ~1 million games this time and ~76,000,000 unique positions.
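The logarithmic scoring could look roughly like this; the log shape and winner-favoring are from the notes, while the exact constants and the 0-to-1 scaling are assumptions:

```python
import math

def logarithmic_eval(move_number: int, total_moves: int, white_won: bool) -> float:
    """Score a position more strongly toward the eventual winner the closer
    it is to the end of the game. Returns 0.0 at the start of the game and
    +/-1.0 at the final position, growing log-shaped in between.
    """
    progress = move_number / total_moves             # 0.0 at start, 1.0 at end
    magnitude = math.log1p(progress * (math.e - 1))  # log-shaped rise from 0 to 1
    return magnitude if white_won else -magnitude
```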

Future looking:
I have a really bad bug right now where the decision engine/model evaluates basically everything as zero, with no variation among the possible moves. I think this is due to a mistake in how I trained it.