All AIs run on the client side; the first load takes about 10-20 seconds.
Santorini
- https://cestpasphoto.github.io/santorini.html
- https://cestpasphoto.github.io/santorini_with_gods.html
Splendor
- https://cestpasphoto.github.io/splendor.html
- https://cestpasphoto.github.io/splendor_3pl.html
- https://cestpasphoto.github.io/splendor_4pl.html
Smallworld
Wordle
This AI is based on AlphaZero training. I reused an existing training engine and significantly modified it; I fully implemented everything else: the game logic, the ML network design, the training tuning, the port to the browser, and the JS/HTML interface.
Compared to the best AIs I found:
- >95% win rate against Ai Ai (20 wins in 20 games)
- Even a degraded version of mine achieves a >90% win rate (10 wins in 10 games)
- >98% win rate against the BoardSpace AI, using BestBot
Details: these games used no god powers, the other AI always started first, and both played from random initial positions. Mine ran at 800 rollouts per move (50 for the degraded version); Ai Ai was set with a time limit of 15+15 sec/move (about 900k iterations on my computer).
Training was done entirely on CPU; check my repo and this README.
The UI also proposes "undervolted" settings: instead of exploring 800 future positions, it explores 200, 50 or 10 of them to reduce strength.
Expect about 5 to 10 seconds per turn at the AI's native level. See other details in [common technical details](#common-technical-details). You can find a higher-performance application on this repo, which requires installing Python and many other modules.
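To make the "rollouts per move" numbers above concrete, here is a minimal, self-contained sketch of how such a budget controls strength: a plain UCT MCTS with random playouts on Tic-Tac-Toe, where the loop count is the only strength/speed knob. The real engine replaces the random playouts with the neural network's policy and value estimates, and none of the names below come from this repo.

```python
# Toy illustration only: plain UCT MCTS with random playouts on Tic-Tac-Toe.
# The "num_rollouts" budget is the knob that the "undervolted" settings shrink.
import math, random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def legal_moves(board):
    return [i for i, v in enumerate(board) if v == 0]

class Node:
    def __init__(self, board, player, parent=None, move=None):
        self.board, self.player = board, player        # player = side to move here
        self.parent, self.move = parent, move
        self.children, self.visits, self.value = [], 0, 0.0
        self.untried = legal_moves(board) if winner(board) == 0 else []

def random_playout(board, player):
    while winner(board) == 0 and legal_moves(board):
        m = random.choice(legal_moves(board))
        board = board[:m] + [player] + board[m + 1:]
        player = -player
    return winner(board)

def mcts_best_move(board, player, num_rollouts):
    root = Node(list(board), player)
    for _ in range(num_rollouts):                      # <- the strength/speed knob
        node = root
        # 1. selection: descend with UCB1 while the node is fully expanded
        while not node.untried and node.children:
            node = max(node.children, key=lambda c: c.value / c.visits
                       + 1.4 * math.sqrt(math.log(node.visits) / c.visits))
        # 2. expansion: add one untried move as a child
        if node.untried:
            m = node.untried.pop()
            child = Node(node.board[:m] + [node.player] + node.board[m + 1:],
                         -node.player, node, m)
            node.children.append(child)
            node = child
        # 3. simulation: random playout (the real engine queries the network here)
        result = random_playout(list(node.board), node.player)
        # 4. backpropagation: score from the viewpoint of the player who just moved
        while node:
            node.visits += 1
            node.value += 1.0 if result == -node.player else 0.5 if result == 0 else 0.0
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

# "Undervolted" levels simply shrink the budget: 800 vs 200, 50 or 10 rollouts.
print(mcts_best_move([0] * 9, player=1, num_rollouts=800))
```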
Uses the same AI approach as Santorini, described above. Supports 2 players only. Each token movement is considered one move.
No objective tests were done. Note that randomness has a large impact on this game, so benchmarks would need to be done carefully. Training was done entirely on CPU; check my repo.
The UI also proposes "undervolted" settings: instead of exploring 1600 future positions, it explores 400, 100, 25 or 10 of them to reduce strength.
Uses the same AI approach as Santorini, described above. Supports 2, 3 and 4 players.
Compared to the only AI I found: >90% win rate against Lapidary AI (10 wins in 10 games), with a median 14-point difference (the median game is 16-2).
Details: I needed to reproduce Lapidary's "behavior" (gold is still won even when one already has 10 gems, and a new card from the deck appears on the right instead of replacing the old card's slot). My AI doesn't allow simultaneously giving back and taking gems, so I needed a small hack. Lapidary is aware of the cards in the deck, whereas mine isn't (I always replaced a random card with the one chosen by Lapidary). My AI ran in "native" mode, meaning 400 rollouts per move, and it doesn't know which cards will be drawn.
Training was done entirely on CPU; check my repo.
The UI also proposes "undervolted" settings: instead of exploring 400 future positions, it explores 200, 50 or 10 of them to reduce strength. If the AI had been trained at these lower numbers it could have been stronger there, but that wasn't the purpose.
Based on the entropy method from the 3Blue1Brown video, I developed this AI to work with not only the English dictionary but also the French one.
You can find a higher-performance application on this repo, which requires installing Python.
Click here to see technical details
The longest computation time is for the first word, when we know nothing about the solution, so I pre-computed the best first words for all conditions (French/English, all word lengths, first letter known/unknown). To reduce computation time even further, I can restrict the search to the X most popular words: it slightly decreases the AI's strength in exchange for a much shorter thinking time. I managed to retrieve an occurrence percentage for each word, which allows filtering out very rare words (advised). Words can even be weighted by their occurrence: this is advised for an "easy" game but not for a "hard" game like the one in the NY Times.
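As a rough illustration of the entropy scoring and the occurrence weighting mentioned above, here is a minimal, self-contained Python sketch. The word list and the uniform weights are placeholders, not the dictionaries or frequency data actually used.

```python
# Sketch of entropy-based guess scoring for Wordle (assumed word list/weights).
from collections import Counter
from math import log2

def feedback(guess, solution):
    """Wordle pattern: 2 = right letter & spot, 1 = right letter wrong spot, 0 = absent."""
    pattern = [0] * len(guess)
    remaining = Counter()
    for i, (g, s) in enumerate(zip(guess, solution)):
        if g == s:
            pattern[i] = 2
        else:
            remaining[s] += 1
    for i, g in enumerate(guess):
        if pattern[i] == 0 and remaining[g] > 0:
            pattern[i] = 1
            remaining[g] -= 1
    return tuple(pattern)

def expected_entropy(guess, candidates, weights=None):
    """Expected information (bits) gained by playing `guess`, with optional
    per-word occurrence weights on the candidate solutions."""
    weights = weights or {w: 1.0 for w in candidates}
    total = sum(weights[w] for w in candidates)
    buckets = Counter()
    for w in candidates:
        buckets[feedback(guess, w)] += weights[w]
    return -sum((p / total) * log2(p / total) for p in buckets.values())

# Toy example: pick the remaining candidate with the highest expected entropy.
candidates = ["crane", "slate", "trace", "crate", "stare"]
best = max(candidates, key=lambda g: expected_entropy(g, candidates))
print(best, round(expected_entropy(best, candidates), 2))
```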
See other details in [common technical details](#common-technical-details).
My goal from the start was to run the AI on the client side. All the games use the following technologies:
- Python code running in the browser with pyodide. It is still in beta and a bit slow to load, but it is very stable and compatible with several external modules such as numpy! It is based on WebAssembly, so performance is quite good.
- AlphaZero needs an ML inference library: ONNX has a JS version and no framework incompatibility (both TF and PyTorch can export to the ONNX format). I found the WebGL backend quite buggy on some browsers, so I went for the WASM version, which uses the client CPU (see the sketch after this list).
- I am no expert in JS/HTML/CSS, so I chose fomantic-ui: the result is quite decent on PC and mobile, with a low barrier to entry. But gosh, JavaScript is such an ugly language :-)
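Since the WASM backend runs on the CPU, the inference step is essentially the same call you would make with the regular onnxruntime package. The sketch below shows that step with the desktop Python API; in the browser the equivalent call goes through onnxruntime-web from JavaScript. The model file name, the input shape and the output layout are assumptions, not values taken from this repo.

```python
# Illustration only: ONNX inference as the MCTS would request it at each leaf.
# File name, input shape and outputs are assumptions (not this repo's actual model).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("santorini.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

board = np.zeros((1, 2, 5, 5), dtype=np.float32)    # hypothetical encoded board
outputs = session.run(None, {input_name: board})    # e.g. [policy_logits, value]
print([o.shape for o in outputs])
```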
I was quite surprised by the final performance of AlphaZero ported to the browser: since the game logic and MCTS are in Python/pyodide and the ML inference is in JavaScript, the browser has to switch between the two frameworks and pay a larger overhead for type conversions. I expected this overhead to be worse, but in the end the port runs at roughly the same speed as regular Python and is only 5-10x slower than numba.