Correct!
Good! Not the game move, but Leela Zero favors this move.
Wrong.
Likely, could have been the pro move in very similar situations.
Possible, might have been the pro move in other similar situations.
Unlikely, but might have been the pro move in a different situation.
This site arose out of experiments with neural nets from my GitHub repo to see how well they could model what players of different ranks understand. The problems were created automatically: for each rank, the neural nets were used to extract positions from high-level games where the next move looked instinctive for a pro but educational or non-obvious for players of that rank. (Due to randomness, a few easier problems may still appear in harder sets and vice versa.)
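For the curious, the selection rule amounts to comparing what a pro-level net and a rank-level net each think of the move that was actually played. The sketch below is only an illustration under assumed names and thresholds (select_problems_for_rank, pro_confidence, and so on are hypothetical); the real extraction code in the repo differs in its details.

```python
from typing import Dict, List, Tuple

# Illustrative sketch only: the function name, thresholds, and the way the
# policies are passed in are assumptions, not the actual extraction code.

Position = str  # some serialized board state
Move = str      # e.g. a coordinate like "Q16"


def select_problems_for_rank(
    positions: List[Tuple[Position, Move]],          # (position, move actually played)
    pro_policy: Dict[Position, Dict[Move, float]],   # pro-level net's move probabilities
    rank_policy: Dict[Position, Dict[Move, float]],  # rank-level net's move probabilities
    pro_confidence: float = 0.7,                     # "instinctive for a pro"
    rank_confidence: float = 0.35,                   # "non-obvious at this rank"
) -> List[Tuple[Position, Move]]:
    """Keep positions where the pro-level net strongly expects the move that
    was actually played, but the rank-level net does not."""
    problems = []
    for pos, played in positions:
        p_pro = pro_policy[pos].get(played, 0.0)
        p_rank = rank_policy[pos].get(played, 0.0)
        if p_pro >= pro_confidence and p_rank <= rank_confidence:
            problems.append((pos, played))
    return problems
```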
The problems have NOT been curated by any human and probably contain more inaccuracies on average than published Go books or problem collections. The neural nets were also trained primarily on historical human pro games rather than through a self-play process, so they will not reflect the latest joseki innovations and may not always be as sharp or accurate as the neural nets from the most recent wave of strong Go programs.
However, these problem sets also contain far more open-space tactics than most other problem collections, a topic that is highly under-covered in the Go literature. On the whole, they should cover a wide sampling of the standard good shapes that form the building blocks of strong play. So I hope you find the problems interesting and possibly worth using as a learning tool!
This website's UI makes heavy use of WGo.js, an awesome library which was very helpful for getting this site running.