This is the full forensic record of API-5555's game across 45 scored moves. SuperGM Score: 87/100. Engine Likelihood: 13/100. Confidence: 65/100 — Moderate Confidence.
The system confirms an authentic human profile. The ACPL of 29.4 cp (consistent with strong master-level play) is impressive by any measure — but the signature of that accuracy is what matters here. It carries every mark of unaided chess: natural tilt under pressure, error clustering at moments of high complexity, and the imperfect but unmistakable rhythm of a human mind working at the edge of its ability. The behavioral dissection below documents it signal by signal.
An independent ML classifier confirmed this finding with a 4.1% engine probability. Two independent systems pointing in the same direction is a strong convergence signal.
Six independent signals evaluated. Red = engine signal, green = human signal, gray = insufficient data.
| Metric | Measured Value | Human Benchmark | Signal Direction |
|---|---|---|---|
| ACPL | 29.4 cp | Engine: <5 cp — Ambiguous: 5–10 cp — Human (blitz): 10–28 cp | ✓ Above GM range |
| Error Texture CV | 1.153 | > 0.765 (human bursty) | ✓ Bursty — human |
| KS Distribution Distance | 0.290 | < 0.197 (human dist.) | ▲ Engine dist. |
| Tier-4 Blunder Rate | 55.6% | 15–50% (human range) | ✓ Slightly above human range |
| Crisis Correlation ρ | N/A | > 0.0 (human tilt) | — Insufficient data |
| Markov P₃₀ | N/A | < 0.40 (human recovery) | — Insufficient data |
| Regan Model Z | −2.39 | Normal: −0.5 to +2.0 | ▲ Below normal range |
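For readers who want a concrete handle on the KS row above, the two-sample Kolmogorov-Smirnov statistic can be sketched in a few lines of Python. This is an illustration with made-up per-move loss samples, not the report's proprietary implementation:

```python
import bisect

def ks_distance(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical gap
    between the empirical CDFs of the two samples (0 = identical
    distributions, 1 = completely disjoint)."""
    sa, sb = sorted(sample_a), sorted(sample_b)
    n, m = len(sa), len(sb)
    return max(
        abs(bisect.bisect_right(sa, x) / n - bisect.bisect_right(sb, x) / m)
        for x in set(sa) | set(sb)
    )

# Hypothetical per-move centipawn losses: a bursty, human-like game
# versus a flat, engine-like one.
human_like = [0, 5, 80, 0, 12, 150, 0, 3, 60, 0]
engine_like = [0, 2, 4, 1, 3, 2, 0, 1, 3, 2]
print(ks_distance(human_like, engine_like))  # 0.5
```

A value near 0 means the two loss distributions are hard to tell apart; the table above flags this game's 0.290 as a red signal because it exceeds the 0.197 human-distribution threshold.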
Every scored move plotted by centipawn loss. ■ Green = engine-like precision. ■ Red = blunder. Reference lines: engine threshold (14 cp), SuperGM ceiling (28 cp). Notable: 7 blunders in 45 moves — consistent with human play.
| Quality | Count | % | Human Baseline |
|---|---|---|---|
| Best move | 12 | 26.7%* | 33–67% |
| Strong | 9 | 20.0%* | 10–25% |
| Minor inaccuracy | 7 | 15.6%* | 5–28% |
| Mistake | 10 | 22.2%* | varies |
| Blunder | 7 | 15.6%* | 15–50% |
* Estimated from centipawn loss (0/≤10/≤25/≤50/>50 cp).
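The footnote's cutoffs are simple enough to state directly. A minimal sketch of the binning, using the thresholds quoted above (a hypothetical helper, not the report's internal code):

```python
def classify_move(cp_loss):
    """Map a move's centipawn loss to the quality tiers used in the
    table above (0 / <=10 / <=25 / <=50 / >50 cp)."""
    if cp_loss == 0:
        return "Best move"
    if cp_loss <= 10:
        return "Strong"
    if cp_loss <= 25:
        return "Minor inaccuracy"
    if cp_loss <= 50:
        return "Mistake"
    return "Blunder"

# e.g. the 250 cp loss at move 43 lands in the Blunder tier:
print(classify_move(250))  # Blunder
```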
Human error peaks in the middlegame; engine accuracy stays consistent across all phases.
| Metric | Value |
|---|---|
| ML Engine Probability | 4.1% |
| ML Verdict | HUMAN_CLEAR |
| ML Confidence | HIGH |
The ML model agrees with the behavioral analysis — two independent systems pointing the same direction strengthens the finding.
A move-by-move behavioral reconstruction of API-5555's game
The game opens with standard theory: 1...d5, 2...Nc6, 3...h6, 4...Qxd5. Through the first 10 scored moves, API-5555 averages 35.9 cp of loss. The first notable deviation arrives at 2...Nc6, costing 32 cp — an early sign of independent judgment over engine orthodoxy. One move later, 3...h6 compounds the damage at a cost of 149 cp. The theory phase ends here, and the real chess begins.
The middlegame spans 20 moves and plays out chaotically — 20.1 cp average loss as 5 errors above 25 cp accumulate. 11...O-O-O costs 67 cp. 15...Nxc3 costs 89 cp, swinging the evaluation by nearly a full pawn.
From move 31 the game enters a rook endgame spanning 15 scored moves. Accuracy degrades from the middlegame's 20.1 cp to 37.5 cp — fatigue and clock pressure take their toll.
43...Kb5 costs 250 cp — catastrophic. The position transforms entirely. The aftermath confirms the human pattern: the next moves average 42.8 cp, elevated compared to baseline. The error leaves a wake before play returns to normal.
The defining pattern is variance — errors cluster around move 43. Error texture CV of 1.153 sits above the human baseline — errors arrive in bursts, not at a steady engineered rate.
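The CV itself is an elementary statistic: the standard deviation of per-move losses divided by their mean. A short sketch with hypothetical loss series (the report's actual per-move data is not reproduced here):

```python
from statistics import pstdev, mean

def error_texture_cv(cp_losses):
    """Coefficient of variation of per-move centipawn losses.
    High CV = losses arrive in bursts (human-like);
    low CV = losses are spread evenly (engine-like)."""
    return pstdev(cp_losses) / mean(cp_losses)

bursty = [2, 1, 90, 3, 2, 150, 1, 4]      # hypothetical human-like series
steady = [10, 12, 11, 9, 10, 11, 10, 12]  # hypothetical engine-like series
print(error_texture_cv(bursty) > 1.0)   # True
print(error_texture_cv(steady) < 0.2)   # True
```

This game's CV of 1.153 falls on the bursty side of the 0.765 human threshold quoted in the table.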
At 65/100 confidence, the evidence leans toward genuine human play. The behavioral signals converge on the known human signature — imperfect, pressure-driven, and emotionally textured. No credible engine signal was found.
Based on the evidence presented above, API-5555's game carries the authentic behavioral fingerprint of elite human chess. The errors came at the right moments, in the right phases, with the right signatures. The system found no credible engine signal. Human Authenticity Score: 87/100.
The analysis confirms a legitimate human performance. No action required. If this player was under suspicion, this report constitutes documented statistical evidence of authentic play. Retain it as a positive baseline for future reference.
A full diagnostic assessment of your play in this game, measured against SuperGM blitz benchmarks. Every bar and every sentence is derived from specific signals in your moves.
Accuracy (ACPL) (29.4 cp): Your 29.4 centipawn average loss sits just above the SuperGM blitz range (10–28 cp) used as this report's benchmark. At the Class A level in blitz, this suggests either unfamiliarity with the positions or consistent small inaccuracies compounding across the game. The good news: even cleaning up your worst 3 moves would dramatically improve this number.
Engine Match Rate (31%): At 31%, your engine-match rate indicates room for improvement in move selection. You are choosing suboptimal moves more often than average, which contributes to centipawn loss. Focus on candidate-move discipline: before committing, identify at least two options and briefly compare them.
Blunder Rate (15.6%): 7 of your 45 scored moves (15.6%) were classified as blunders (50+ centipawn loss). That sits at the low end of the 15–50% human range, but the blunders that did occur, above all the 250 cp loss at move 43, were game-changing oversights rather than minor slips, and they account for the bulk of your total centipawn loss.
Error Texture (CV) (1.15): An error texture of CV=1.15 falls in the middle range. Your errors are neither perfectly uniform nor wildly unpredictable. This balanced pattern is actually healthy and mirrors what we see in professional human play.
PV Stability (0.58): Your PV stability of 58% is on the lower side, meaning your move choices shift frequently from what the engine considers optimal. This could indicate indecision or difficulty committing to a plan. Working on positional understanding will help you identify the right plan and execute it with conviction.
Error Clustering (EDR) (0.37): Your error distribution ratio (EDR=0.368) shows that your mistakes are somewhat clustered together in the game. This clustering pattern is typical of human play and suggests concentration lapses in specific game phases rather than uniform weakness.
The data is unambiguous: 7 of your 45 scored moves (15.6%) were classified as blunders, meaning each one cost you 50 or more centipawns. To put that in perspective, a single blunder can undo the accumulated advantage of 5-10 carefully played moves. In this game alone, your blunders accounted for the majority of your total centipawn loss.
At the Class A level, a blunder rate above 15% is common but also the single most impactful thing you can fix. Unlike positional understanding, which takes months to develop, blunder reduction responds quickly to targeted training. Most blunders at this level come from one of three sources: failing to check what your opponent can do after your move, missing a basic tactical pattern (fork, pin, skewer), or moving too quickly in critical positions.
You know that feeling during a game when you play a move and instantly see your mistake? That moment of stomach-dropping realization? The cure is the "blunder check" habit: before pressing the mouse button, ask one question: "After I play this, what is the best thing my opponent can do?" Make this a physical routine and you will cut your blunder rate in half.
Exercises: (1) Solve 15 tactical puzzles daily on Lichess Puzzle Streak at your rating level. (2) After each rated game, find every blunder and write one sentence about what you were thinking. (3) Practice the CCT scan: Checks, Captures, Threats, in that order, before every critical move.
Your endgame performance of 37.5 cp average loss is 1.9x worse than your best phase (Middlegame at 20.1 cp). You played the opening and middlegame well enough to reach a favorable or drawable position, but your technique in converting or defending the endgame let you down.
Endgame weakness is the most efficient area to study because the positions are simpler, the patterns are more concrete, and the knowledge transfers directly to results. Most players below 2000 Elo have significant gaps in basic endgame theory. Learning just five theoretical positions (King+Pawn vs King, Lucena, Philidor, Vancura, basic Queen vs Pawn) can add 50-100 rating points.
Exercises: (1) Complete the free "Basic Endgames" course on Chessable. (2) Study de la Villa's "100 Endgames You Must Know" chapters 1-3. (3) After every game that reaches an endgame, analyze the position with a tablebase to check if you played the theoretical best moves.
You matched the engine's top choice only 31% of the time. This low engine-match rate means your move selection process is systematically choosing suboptimal moves, even in positions where the best move is relatively straightforward.
At the Class A level, improving engine match rate comes from two areas: pattern recognition (seeing common tactical motifs faster) and candidate-move discipline (considering multiple options before committing). The fastest improvement path is solving rating-appropriate tactical puzzles to expand your internal pattern library.
Exercises: (1) Solve 20 tactical puzzles daily at your rating level on Lichess. (2) For each puzzle, identify the tactical theme (fork, pin, discovered attack) and name it out loud. (3) Review this game move-by-move with engine analysis and for each move where you did not play the best move, understand what made the engine's choice better.
Healthy Error Texture: Your CV of 1.15 reflects a natural, human error pattern. Your mistakes are varied in size and timing, which is the signature of genuine, engaged play. This balanced texture indicates you are fully present in the game rather than playing on autopilot.
Strong Middlegame Phase: Your middlegame accuracy of 20.1 cp was the cleanest phase of your game, demonstrating that your understanding of middlegame play is a genuine asset you can build on.
A narrative walkthrough of the critical moments in your game. Not a table of numbers, but the story of what happened and why it mattered.
The opening covered moves 1 through 10, spanning 10 scored decisions. The opening was troubled, averaging 35.9 cp loss per move. Problems began as early as move 2 (Nc6), a knight move that cost 32 centipawns and nudged the position from +0.3 to +0.6, creating an early deficit that shaped the rest of the game. One move later, 3...h6 compounded the damage, costing 149 centipawns. By the end of the opening, the evaluation stood at +1.6.
The middlegame spanned 20 scored moves, with 30% of them being engine-perfect and a total loss of 402 centipawns (average: 20.1 cp per move). Here are the critical moments that defined this phase of the game:
Move 15: Nxc3 — BLUNDER (−89 cp)
At this point, API-5555 was ahead. The knight move Nxc3 was a significant mistake, losing 89 centipawns. The position shifted noticeably — about 0.9 pawns worth. The engine preferred c6-d4 instead, which would have saved 89 centipawns (0.9 pawns). Knights are strongest on central outposts where they can't be chased by pawns. Before retreating a knight, look for active squares first.
Move 11: O-O-O — BLUNDER (−67 cp)
At this point, API-5555 was behind. The king move O-O-O was a significant mistake, losing 67 centipawns. The position shifted noticeably — about 0.7 pawns worth. The engine preferred e7-e5 instead, which would have saved 67 centipawns (0.7 pawns). King safety is paramount in the middlegame. Before moving the king, always check: is the new square actually safer?
Move 16: Nxd4 — BLUNDER (−58 cp)
At this point, API-5555 was ahead. The knight move Nxd4 was a significant mistake, losing 58 centipawns. The position shifted noticeably — about 0.6 pawns worth. The engine preferred d8-d4 instead, which would have saved 58 centipawns (0.6 pawns). Knights are strongest on central outposts where they can't be chased by pawns. Before retreating a knight, look for active squares first.
These 3 critical moments accounted for 214 of the 402 total middlegame centipawn loss (53%). This concentration of errors in a few key moments, rather than spread across every move, suggests that API-5555's general middlegame understanding is sound but breaks down at specific decision points.
The game reached an endgame phase covering 15 scored moves. API-5555 won the game from this position. The endgame averaged 37.5 centipawns of loss per move, with a total of 563 centipawns lost in this phase.
Move 43: Kb5 — BLUNDER (−250 cp)
This king move cost 250 centipawns (2.5 pawns). The engine preferred a2-a4 instead. In the endgame, king activity is critical — but walking into danger loses immediately.
Move 45: Ra2+ — BLUNDER (−86 cp)
This rook move cost 86 centipawns (0.9 pawns). The engine preferred a3-a2 instead. Rook endgames require precise calculation — one wrong check can let the opponent escape.
Across 45 scored moves, API-5555 accumulated 1324 centipawns of total loss. 12 moves (27%) were engine-perfect. 7 moves were classified as blunders (50+ cp). The game result was 0-1 against shopen.
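Those phase figures tie together arithmetically. A quick sanity check, using only numbers quoted in this report:

```python
# Phase totals quoted in the report (centipawns lost per phase).
opening    = 359   # 10 moves at 35.9 cp average
middlegame = 402   # 20 moves at 20.1 cp average
endgame    = 563   # 15 moves at 37.5 cp average

total_loss   = opening + middlegame + endgame
scored_moves = 10 + 20 + 15

print(total_loss)                           # 1324, matching the report
print(round(total_loss / scored_moves, 1))  # 29.4 cp ACPL
```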
A structured, week-by-week improvement program tailored to the specific weaknesses identified in your game. Every recommendation is derived from your data.
Your blunder rate of 15.6% is your highest-priority fix. This week is entirely dedicated to building the neural pathways that catch tactical mistakes before they happen. The approach is simple but requires daily consistency.
Day 1-2: Diagnostic (30 min/day)
Review every blunder from this game. For each one, set up the position on a Lichess analysis board and identify: (a) what you were threatening, (b) what your opponent could do, (c) what you missed. Write a one-sentence description for each. This builds awareness of your specific blind spots. Your five largest blunders in this game: move 43 (Kb5, −250 cp), move 3 (h6, −149 cp), move 15 (Nxc3, −89 cp), move 45 (Ra2+, −86 cp), move 11 (O-O-O, −67 cp).
Day 3-5: Pattern Building (20 min/day)
Solve 10 puzzles rated 1400-1800 on Lichess Puzzle Streak. Do NOT skip ahead to harder puzzles. The goal is pattern recognition speed, not maximum difficulty. After each puzzle, identify the tactical theme (fork, pin, skewer, discovered attack, removal of the guard). Name it out loud.
Day 6-7: Application (45 min/day)
Play 2-3 rated games and consciously apply the "blunder check" before every move: "After I play this, what is the best thing my opponent can do?" Track how many times you catch yourself about to blunder. If you catch even one, the week was successful.
Week 2 shifts focus to your secondary weakness while maintaining the habits built in Week 1. Continue your daily tactical puzzles from Week 1 (reduce to 10 minutes per day as maintenance) and add the following:
Your endgame ACPL of 37.5 cp reveals that your technique breaks down when the board opens up. This week covers the essential endgame knowledge that every Class A-level player needs.
Day 1-2: King + Pawn Endgames (25 min/day)
Study the opposition concept and the rule of the square. These two ideas decide the majority of King + Pawn endgames. Use the free "Basic Endgames" course on Chessable or watch Daniel Naroditsky's endgame lessons on YouTube.
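The rule of the square is concrete enough to write down as code. A minimal sketch for a white passed pawn racing up the board, assuming no double step from the 2nd rank and no interfering pieces (the helper name is illustrative):

```python
def king_catches_pawn(pawn_sq, king_sq, defender_to_move):
    """Rule of the square for a white passed pawn.
    The black king catches the pawn iff it can reach the promotion
    square no later than the pawn does, i.e. it stands inside the
    pawn's 'square' (or can step into it when it is to move).
    Simplification: ignores the double step from rank 2 and assumes
    no other pieces interfere."""
    pawn_file, pawn_rank = ord(pawn_sq[0]) - ord("a"), int(pawn_sq[1])
    king_file, king_rank = ord(king_sq[0]) - ord("a"), int(king_sq[1])
    pawn_moves = 8 - pawn_rank  # tempi the pawn needs to promote
    # King distance (Chebyshev) to the promotion square.
    king_moves = max(abs(king_file - pawn_file), 8 - king_rank)
    return king_moves <= pawn_moves + (1 if defender_to_move else 0)

# White pawn on a5: its square spans a5-d5-d8-a8.
print(king_catches_pawn("a5", "d5", defender_to_move=False))  # True  (inside)
print(king_catches_pawn("a5", "e5", defender_to_move=True))   # True  (steps in)
print(king_catches_pawn("a5", "e5", defender_to_move=False))  # False (too far)
```

If the defending king is inside the pawn's square, or can step into it on its move, it catches the pawn; otherwise the pawn promotes.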
Day 3-4: Rook Endgames (25 min/day)
Learn the Lucena position (how to win with Rook + Pawn vs Rook) and the Philidor position (how to draw). These two positions appear in over 30% of all rook endgames. Practice setting them up and playing them against the computer.
Day 5-7: Endgame Puzzles (20 min/day)
Solve endgame-specific puzzles on Lichess or ChessTempo. Focus on positions with 3-5 pieces where precision is required. After each puzzle, verify with a tablebase. Also revisit the endgame blunders from this game: move 43 (Kb5, −250 cp) and move 45 (Ra2+, −86 cp).
Integration exercise: Play 3-4 rated games this week. After each game, immediately check: (1) Did you apply the blunder-check habit from Week 1? (2) Did you work on this week's focus area? Score yourself honestly. Improvement is not about being perfect; it is about being more intentional than you were last week.
Your endgame ACPL of 37.5 cp reveals significant room for improvement in the final phase. This week provides a structured endgame curriculum.
Day 1-2: Essential Positions (30 min/day)
Study these five positions until you can play them perfectly from memory: (1) King + Pawn vs King with opposition, (2) Lucena position (rook + pawn win), (3) Philidor position (rook + pawn draw), (4) Vancura position (rook vs rook + a-pawn draw), (5) Queen vs passed pawn on 7th rank. Resources: de la Villa "100 Endgames You Must Know" chapters 1-2, or free Chessable courses.
Day 3-5: Endgame Drills (20 min/day)
Use Lichess "Play from Position" to practice endgame positions. Set up one of the positions you studied and play it against the computer (Level 5-6). Repeat until you can win or draw (depending on the theoretical result) consistently. Key principle: activate your king early. In the endgame, the king is a fighting piece worth roughly 4 points.
Day 6-7: Practical Endgames (30 min/day)
Play 3-4 rapid games and consciously aim to reach endgames. Even if you have a winning middlegame attack, try to convert through simplification instead. This forces endgame practice in real game conditions and builds the habit of confident endgame play.
The final week brings everything together. You have spent three weeks building specific skills; now it is time to integrate them into your natural playing process and measure your progress.
Day 1-2: Game Review Ritual (30 min/day)
Play 2 rated games per day. After each game, before checking with the engine, write down: (1) your 3 best moves and why, (2) your 3 worst moves and why, (3) the critical moment of the game and what you were thinking. THEN check with the engine. Compare your self-assessment to the engine's evaluation. The closer they match, the more self-aware you are becoming.
Day 3-5: Full Integration Games (45 min/day)
Play 3 rated games with conscious application of all skills: blunder check (Week 1), secondary weakness awareness (Week 2), phase-specific improvement (Week 3). Do not try to apply everything simultaneously; instead, pick one focus per game and rotate. After each game, analyze and note whether the focus area improved.
Day 6-7: Benchmark Assessment (60 min)
Play 3-5 rated games as your "assessment set." After all games, run them through a ChessForensics analysis or Lichess computer analysis. Compare your ACPL, blunder rate, and phase performance to the numbers from this report. Specifically, look at: (1) Is your ACPL below 29? (2) Is your blunder rate below 15.6%? (3) Did your worst phase improve? Even modest improvement confirms the training is working.
Rating targets: Based on your current estimated Class A level (~1627 Elo), consistent training at 20-30 minutes per day should yield approximately +50 points in 30 days (target: ~1677) and +150 in 90 days (target: ~1777). These are realistic expectations based on typical improvement curves for dedicated study at your level.
These resources are selected specifically for the Class A level. Using resources too far above or below your level wastes time and builds frustration.
Books: Winning Chess Strategies by Seirawan (positional play), Silman's Complete Endgame Course chapters 1-4, My System by Nimzowitsch (strategic foundations)
Video content: Hanging Pawns opening series, ChessBase India instructional content, Saint Louis Chess Club lectures
Lichess tools: Lichess Puzzles (rated mode), Lichess Studies (create a study of your games), Lichess Opening Explorer (post-game review)
Daily minimum: 10 puzzles rated 1400-1800 + 5 minutes of game review. This 20-minute daily investment compounds dramatically over weeks and months. Consistency beats intensity: 20 minutes every day is far more effective than 3 hours once a week.
This training plan is based on the specific signals from this single game. For the most accurate coaching, analyze 3-5 games to identify persistent patterns versus one-game anomalies. The weaknesses that appear across multiple games are the ones that matter most.
ChessForensics uses 35 independent behavioral dimensions calibrated on 10,000+ validated games from confirmed SuperGM players (Magnus Carlsen, Hikaru Nakamura, Alireza Firouzja, Nihal Sarin, and peers). Each dimension targets a distinct behavioral fingerprint that separates human cognition from engine assistance. Signal weights, thresholds, and formula parameters are proprietary and available to accredited arbitration panels upon written request.
Engine Likelihood and Human Authenticity scores are expressed on a 0–100 scale. Validated on 10,000+ games from 80+ verified SuperGM accounts (Magnus Carlsen, Alireza Firouzja, Nihal Sarin, and peers) and 6,000+ confirmed engine games. Cross-validated false positive rate: under 1% on SuperGM play. Zero false positives on named SuperGM accounts. Reports automatically adjust analysis context based on time control (blitz, rapid, classical).
This report was created for you personally. You are welcome to use it in an appeal, dispute, or tournament proceeding. Please do not redistribute the file publicly. ChessForensics is not affiliated with Lichess, Chess.com, FIDE, or any governing chess body. This analysis is independent statistical evidence, not a platform ruling. This report constitutes statistical and behavioral probability analysis, not definitive proof of misconduct. ChessForensics accepts no liability for outcomes of disputes in which this report is used.
Calibration Scope: Validated on 3+0 blitz and 2+1 bullet time controls using confirmed SuperGM behavioral baselines. Other time controls produce results with reduced confidence and are marked accordingly.