The player achieved an Engine Likelihood of 81/100 across 59 scored moves, with a confidence of 57/100 (Moderate Confidence). With an ACPL below the documented human floor (engine-level precision), the precision, error timing, and pressure-response profile match no confirmed human behavioral fingerprint in the calibration database. Each independent signal is documented below.
An independent ML classifier corroborated this finding with an 83.7% engine probability. Two independent systems pointing in the same direction is a strong convergence signal.
Seven independent signals were evaluated. Red = engine signal, green = human signal, gray = insufficient data.
| Metric | Measured Value | Human Benchmark | Signal Direction |
|---|---|---|---|
| ACPL | < 2 cp | Engine: <5 cp — Ambiguous: 5–10 cp — Human (blitz): 10–28 cp | ▲ Sub-human floor |
| Error Texture CV | N/A | > 0.765 (human bursty) | — Insufficient data |
| KS Distribution Distance | N/A | < 0.197 (human dist.) | — Insufficient data |
| Tier-4 Blunder Rate | 0.0% | 15–50% (human range) | ▲ Below floor |
| Crisis Correlation ρ | N/A | > 0.0 (human tilt) | — Insufficient data |
| Markov P₃₀ | N/A | < 0.40 (human recovery) | — Insufficient data |
| Regan Model Z | 2.00 | Normal: −0.5 to +2.0 | ✓ Normal range |
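The headline numbers in the table above can be reproduced from a per-move centipawn-loss series. A minimal stdlib sketch, assuming the benchmarks quoted in the table; the loss series below is illustrative, not the game's actual trace:

```python
from statistics import mean, pstdev

def acpl(losses):
    """Average centipawn loss across all scored moves."""
    return mean(losses)

def error_texture_cv(losses):
    """Coefficient of variation of the loss series. Human play tends
    to be bursty (CV above ~0.765 per the benchmark above); the ratio
    is undefined (NaN) when the mean loss is zero."""
    m = mean(losses)
    return pstdev(losses) / m if m > 0 else float("nan")

def blunder_rate(losses, threshold=50):
    """Fraction of moves losing at least `threshold` cp (Tier-4)."""
    return sum(1 for x in losses if x >= threshold) / len(losses)

# Illustrative near-engine trace: 59 moves, mostly 0 cp, two small slips.
losses = [0] * 57 + [6, 7]
print(round(acpl(losses), 2))          # well under the 5 cp engine band
print(round(blunder_rate(losses), 2))  # 0.0
```

A series like this lands in the table's "sub-human floor" region on both ACPL and blunder rate, which is exactly the pattern the report flags.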
Every scored move plotted by centipawn loss. ■ Green = engine-like precision. ■ Red = blunder. Reference lines: engine threshold (14 cp), SuperGM ceiling (28 cp). Notable: 0 blunders in 59 moves — zero blunders is a statistical outlier for human play.
| Quality | Count | % | Human Baseline |
|---|---|---|---|
| Best move | 47 | 79.7%* | 33–67% |
| Strong | 12 | 20.3%* | 10–25% |
| Minor inaccuracy | 0 | 0.0%* | 5–28% |
| Mistake | 0 | 0.0%* | varies |
| Blunder | 0 | 0.0%* | 15–50% |
* Estimated from centipawn loss (0/≤10/≤25/≤50/>50 cp).
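Using the cut-offs stated in the footnote (0 / ≤10 / ≤25 / ≤50 / >50 cp), the tiering can be sketched as a simple classifier; the sample losses are illustrative:

```python
from collections import Counter

def classify_move(loss_cp):
    """Map a single move's centipawn loss to the quality tiers
    used in the table above (cut-offs from the footnote)."""
    if loss_cp == 0:
        return "Best move"
    if loss_cp <= 10:
        return "Strong"
    if loss_cp <= 25:
        return "Minor inaccuracy"
    if loss_cp <= 50:
        return "Mistake"
    return "Blunder"

# Illustrative mix of losses, one per scored move.
losses = [0, 0, 6, 7, 12, 55]
print(Counter(classify_move(x) for x in losses))
```

Tallying `classify_move` over the full 59-move series yields the count and percentage columns of the table directly.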
Human error peaks in the middlegame; engine accuracy stays consistent across all phases.
| Metric | Value |
|---|---|
| ML Engine Probability | 83.7% |
| ML Verdict | ENGINE_ELEVATED |
| ML Confidence | MEDIUM |
The ML model agrees with the behavioral analysis — two independent systems pointing the same direction strengthens the finding.
A move-by-move behavioral reconstruction of STOCKFISH [A1_depth20]'s game
The game opens with 2. Nb3, 3. h3, 4. g4, and 5. Qd3. Through the first 9 scored moves, the player maintains engine-level precision. The theory phase ends here, and the real chess begins.
The middlegame spans 20 moves and is clinical: 1.3 cp average loss with no error exceeding 25 cp. 24. Rxg8 costs 6 cp.
From move 31 the game enters a rook-and-minor-piece endgame spanning 30 scored moves. Average loss holds at 0.0 cp, virtually unchanged from the middlegame — flat precision across phases.
The largest deviation is minor — 7 cp at 26. c4. In context, this barely registers against the game's overall accuracy profile.
The defining pattern is consistency — engine-level precision across all phases, a 34-move precision chain from move 27 to 60.
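A "precision chain" like the 34-move run described above can be extracted mechanically as the longest consecutive stretch of moves under a loss threshold. A minimal sketch, assuming the 14 cp engine threshold from the scatter-plot reference lines and a 0-indexed loss series:

```python
def longest_precision_chain(losses, threshold=14):
    """Length and start index of the longest consecutive run of
    moves whose centipawn loss stays below `threshold`."""
    best_len = best_start = cur_len = cur_start = 0
    for i, loss in enumerate(losses):
        if loss < threshold:
            if cur_len == 0:
                cur_start = i
            cur_len += 1
            if cur_len > best_len:
                best_len, best_start = cur_len, cur_start
        else:
            cur_len = 0
    return best_len, best_start

# Illustrative series: one 20 cp slip and one 30 cp slip break the runs.
losses = [0, 20, 0, 0, 6, 0, 30, 0]
print(longest_precision_chain(losses))  # (4, 2)
```

For human play the longest chain typically breaks every handful of moves; a 34-move unbroken chain is the kind of outlier the report is measuring.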
At 57/100 confidence, the evidence leans toward engine-assisted play. The signals that could be measured converge: sub-human ACPL, a zero blunder rate, flat cross-phase accuracy, and the ML classifier all point in the same direction.
Based on the evidence presented above, the behavioral fingerprint of The player's game is statistically incompatible with authenticated human play at any documented level. Multiple independent detectors — measuring different aspects of how decisions were made under pressure — converged on the same finding. Engine Likelihood: 81/100. Confidence: 57/100 — Moderate Confidence.
The statistical fingerprint in this report is consistent with external engine assistance. Recommended action: Submit this report to the platform's anti-cheat team as supporting evidence and request a review of the player's full game history. If this is a tournament context, flag the game to the arbiter. A single game rarely results in sanctions — this report is most effective when combined with a pattern across multiple games.
A full diagnostic assessment of your play in this game, measured against SuperGM blitz benchmarks. Every bar and every sentence is derived from specific signals in your moves.
Accuracy (ACPL) (0.7 cp): Your average centipawn loss of 0.7 places you in the upper echelon of accuracy for this game. At the SuperGM level, this indicates strong move selection throughout and suggests deep familiarity with the position types that arose.
Blunder Rate (0%): Your blunder rate of 0% is exceptional; no move in this game lost 50 or more centipawns. Since a single blunder can undo 5-10 good moves, sustaining a zero rate across 59 moves is a major strength. Regular tactical puzzle training is the best way to keep it there.
Error Clustering (EDR) (0.17): Your EDR of 0.165 shows highly dispersed errors. Your mistakes are evenly spread rather than clustered, which suggests consistent low-level inaccuracies across all phases rather than isolated concentration drops.
Tilt Autocorrelation (0.07): Tilt autocorrelation of 0.071 is in the normal range. Your errors show mild serial correlation, which is typical for human play.
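The tilt metric above reads as a lag-1 autocorrelation of the per-move loss series: positive values mean an error makes the next move more error-prone. The report's exact formula is proprietary, so this is one standard way to compute such a figure, not the report's method:

```python
from statistics import mean

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a series: normalized covariance
    between each value and its successor. Returns 0.0 for a
    constant series, where the statistic is undefined."""
    m = mean(xs)
    var = sum((x - m) ** 2 for x in xs)
    if var == 0:
        return 0.0
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    return cov / var

# Clustered errors -> positive autocorrelation (human "tilt");
# isolated, evenly spread errors -> near zero or negative.
clustered = [0, 0, 30, 40, 35, 0, 0, 0]
print(round(lag1_autocorr(clustered), 2))  # 0.45
```

A value like 0.07, as measured here, sits near zero: errors are essentially independent of each other, consistent with either disciplined human play or engine output.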
No major weaknesses were identified in this game. Your play was remarkably clean across all measured dimensions. To continue improving, focus on expanding the types of positions you are comfortable in, and challenge yourself with stronger opponents.
Exceptional Accuracy: Your overall ACPL of 0.7 cp is outstanding. This level of precision means your move selection process is well-calibrated and you rarely drift far from the optimal path. In practical terms, this means opponents must play very accurately to beat you, because you give them few opportunities to capitalize on your errors.
Tactical Discipline: A blunder rate of 0% means your tactical awareness is sharp. You avoid the kind of game-changing errors that decide most amateur games. This tactical foundation gives you the luxury of focusing on strategic improvement rather than firefighting.
Opening Preparation: Your opening ACPL of 1.3 cp demonstrates strong theoretical knowledge or excellent positional intuition in the opening phase. You consistently reach playable middlegame positions, which is the primary objective of the opening.
A narrative walkthrough of the critical moments in your game. Not a table of numbers, but the story of what happened and why it mattered.
The opening covered moves 1 through 10, spanning 9 scored decisions. STOCKFISH [A1_depth20] navigated the opening with excellent precision, averaging only 1.3 centipawns of loss per move. The position at the end of the opening stood at +0.4, indicating a well-played theoretical phase.
The middlegame spanned 20 scored moves, with 60% of them being engine-perfect and a total loss of 26 centipawns (average: 1.3 cp per move). Remarkably, there were no significant errors in the middlegame. STOCKFISH [A1_depth20] navigated the complexity with consistent accuracy, making decisions that kept the position under control throughout this phase.
The game reached an endgame phase covering 30 scored moves. STOCKFISH [A1_depth20] entered the endgame with an evaluation of +0.0 and finished the game with the evaluation at +0.0. The endgame averaged 0.0 centipawns of loss per move, with a total of 1 centipawn lost in this phase.
STOCKFISH [A1_depth20] handled the endgame with excellent technique. No significant errors were detected, and the average loss of just 0.0 cp per move indicates precise play in a phase where small inaccuracies can be decisive.
Across 59 scored moves, STOCKFISH [A1_depth20] accumulated 39 centipawns of total loss. 47 moves (80%) were engine-perfect. 0 moves were classified as blunders (50+ cp). The game result was * against Stockfish [depth-10].
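The per-phase figures quoted in the walkthrough (opening, middlegame, endgame ACPL and totals) come from slicing the loss series at phase boundaries. A sketch, assuming half-open move-index ranges; the series and boundaries below are illustrative, not this game's data:

```python
from statistics import mean

def phase_summary(losses, boundaries):
    """Split a per-move loss series at the given half-open index
    ranges and report move count, total loss, and ACPL per phase.
    `boundaries` maps phase name -> (start, end) indices."""
    out = {}
    for phase, (start, end) in boundaries.items():
        chunk = losses[start:end]
        out[phase] = {
            "moves": len(chunk),
            "total_cp": sum(chunk),
            "acpl": round(mean(chunk), 2) if chunk else 0.0,
        }
    return out

losses = [2, 0, 1, 8, 0, 0, 3, 0, 0, 12, 0, 0]
print(phase_summary(losses, {
    "opening": (0, 4),
    "middlegame": (4, 9),
    "endgame": (9, 12),
}))
```

Flat ACPL across all three phases, as reported above, is the "consistency" signal: human accuracy normally dips in the middlegame, where complexity peaks.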
A structured, week-by-week improvement program tailored to the specific weaknesses identified in your game. Every recommendation is derived from your data.
Your game showed no single dominant weakness, which means you are a well-rounded player. This week focuses on building your overall chess fitness through balanced training.
Day 1-3: Tactical Maintenance (15 min/day)
Solve 5 complex positions from recent SuperGM games per day on Lichess to keep your tactical vision sharp. Alternate between rated puzzles (for challenge) and Puzzle Streak (for speed).
Day 4-5: Game Analysis (30 min/day)
Review your recent games. For each, identify the critical moment and your thought process. Were you accurate? Were you confident? Understanding your decision-making pattern is more valuable than memorizing engine lines.
Day 6-7: Play Strong Opponents (45 min/day)
Seek games against players 100-200 rating points above you. Losing to stronger players and analyzing the games afterward is one of the fastest improvement methods available.
Week 2 maintains the habits built in Week 1 while raising the load. Continue your daily tactical puzzles from Week 1 (reduce to 10 minutes per day as maintenance) and add the integration exercise below:
Integration exercise: Play 3-4 rated games this week. After each game, immediately check: (1) Did you apply the blunder-check habit from Week 1? (2) Did you work on this week's focus area? Score yourself honestly. Improvement is not about being perfect; it is about being more intentional than you were last week.
Your opening, at 1.3 cp ACPL, was your highest-loss phase: still excellent in absolute terms, but the natural place to refine. This week is a focused opening improvement sprint.
Day 1-2: Repertoire Audit (30 min/day)
Review your last 10 games in the Lichess Opening Explorer. Identify: which openings do you play most? Where do you first leave theory? Are you getting comfortable positions or struggling from move 5? If you are consistently struggling, consider switching to a more principled system.
Day 3-5: Deep Prep (25 min/day)
For your main opening, learn one new variation per day. Not 20 moves deep, just one branching point where you did not know what to play. Understand the IDEA behind each move, not just the move itself. Why is the knight going to f3 and not d2? What does the pawn break c4 accomplish?
Day 6-7: Opening Blitz Session (30 min/day)
Play 5 blitz games specifically to practice your opening. Resign after move 15 if you want. The goal is repetition: get comfortable reaching the same types of positions. After each game, spend 1 minute checking your opening moves.
The final week brings everything together. You have spent three weeks building specific skills; now it is time to integrate them into your natural playing process and measure your progress.
Day 1-2: Game Review Ritual (30 min/day)
Play 2 rated games per day. After each game, before checking with the engine, write down: (1) your 3 best moves and why, (2) your 3 worst moves and why, (3) the critical moment of the game and what you were thinking. THEN check with the engine. Compare your self-assessment to the engine's evaluation. The closer they match, the more self-aware you are becoming.
Day 3-5: Full Integration Games (45 min/day)
Play 3 rated games with conscious application of all skills: blunder check (Week 1), secondary weakness awareness (Week 2), phase-specific improvement (Week 3). Do not try to apply everything simultaneously; instead, pick one focus per game and rotate. After each game, analyze and note whether the focus area improved.
Day 6-7: Benchmark Assessment (60 min)
Play 3-5 rated games as your "assessment set." After all games, run them through a ForgeChess analysis or Lichess computer analysis. Compare your ACPL, blunder rate, and phase performance to the numbers from this report. Specifically, look at: (1) Did your ACPL stay below 1 cp? (2) Did your blunder rate stay at 0%? (3) Did your worst phase improve? At this level, even holding these numbers steady confirms the training is working.
Rating targets: Based on your current estimated SuperGM level (~2700 Elo), consistent training at 20-30 minutes per day should yield approximately +50 points in 30 days (target: ~2750) and +150 in 90 days (target: ~2850). These are realistic expectations based on typical improvement curves for dedicated study at your level.
These resources are selected specifically for the SuperGM level. Using resources too far above or below your level wastes time and builds frustration.
Books: Dvoretsky's Analytical Manual, Endgame Virtuoso by Smyslov, My Great Predecessors by Kasparov
Video content: ChessBase MegaDatabase analysis, Original analysis of your own games with deep engine time (30+ sec/move)
Lichess tools: Lichess cloud analysis at depth 40+, Lichess studies with annotation, Syzygy tablebase for endgame verification
Daily minimum: 5 complex positions from recent SuperGM games + 5 minutes of game review. This 20-minute daily investment compounds dramatically over weeks and months. Consistency beats intensity: 20 minutes every day is far more effective than 3 hours once a week.
This training plan is based on the specific signals from this single game. For the most accurate coaching, analyze 3-5 games to identify persistent patterns versus one-game anomalies. The weaknesses that appear across multiple games are the ones that matter most.
ChessForensics uses 35 independent behavioral dimensions calibrated on more than 10,000 validated games from confirmed SuperGM players (Magnus Carlsen, Hikaru Nakamura, Alireza Firouzja, Nihal Sarin, and peers). Each dimension targets a distinct behavioral fingerprint that separates human cognition from engine assistance. Signal weights, thresholds, and formula parameters are proprietary and available to accredited arbitration panels upon written request.
Engine Likelihood and Human Authenticity scores are expressed on a 0–100 scale. Validated on 10,000+ games from 80+ verified SuperGM accounts (Magnus Carlsen, Alireza Firouzja, Nihal Sarin, and peers) and 6,000+ confirmed engine games. Cross-validated false positive rate: under 1% on SuperGM play. Zero false positives on named SuperGM accounts. Reports automatically adjust analysis context based on time control (blitz, rapid, classical).
This report was created for you personally. You are welcome to use it in an appeal, dispute, or tournament proceeding. Please do not redistribute the file publicly. ChessForensics is not affiliated with Lichess, Chess.com, FIDE, or any governing chess body. This analysis is independent statistical evidence, not a platform ruling. This report constitutes statistical and behavioral probability analysis, not definitive proof of misconduct. ChessForensics accepts no liability for outcomes of disputes in which this report is used.
Calibration Scope: Validated on 3+0 blitz and 2+1 bullet time controls using confirmed SuperGM behavioral baselines. Other time controls produce results with reduced confidence and are marked accordingly.