FC-2CC81316
ChessForensics
SuperGM-Calibrated · Arbiter-Grade Evidence
Full Forensic Report
PROBABLE HUMAN — Consistent With Natural Play
ChessForensics Behavioral Forensics · 57 moves analyzed · FC-2CC81316
Engine Likelihood: 38/100 · Human Authenticity Score: 71/100
ACPL: 8.3 cp · Moves: 57 · Rank-1: 73.7% · Date: 2022.09.04 · Confidence: 60/100 — Moderate Confidence
Game Details — Hans Moke Niemann vs Magnus Carlsen — CLASSICAL · 0-1 · 2022.09.04
Analyzed Player
Hans Moke Niemann (Black)
Opponent
Magnus Carlsen
Time Control
CLASSICAL
Result
0-1 · 2022.09.04
Event
Sinquefield Cup
Generated 2026-03-24 · Validated on 10,000+ games · chessforensics.com
CALIBRATION NOTE — CLASSICAL
All baselines in this report are calibrated on SuperGM blitz play (3+0 to 5+3). Classical games with longer thinking time naturally produce lower ACPL and higher engine-match rates. Flagged metrics should be interpreted with this context.

Executive Summary

This is the full forensic record of Hans Moke Niemann's game across 57 scored moves. ACPL: 8.3 cp (solid GM-level accuracy for classical play). Engine Likelihood: 38/100. Confidence: 60/100 — Moderate Confidence.

The behavioral fingerprint is consistent with authentic human play — no strong engine indicator was returned across the full signal suite. The sections below document each tested signal and its finding.

Note: An isolated ML classifier returned an elevated 100.0% engine probability based on raw precision metrics. However, the full behavioral analysis — including tilt patterns, error clustering, crisis response, and recovery dynamics — overrides this signal, and the disagreement is reflected in the reduced overall confidence score. Raw accuracy alone cannot distinguish elite human preparation from engine assistance; the behavioral dimensions can, and they point to human play.

AUTHENTICATION CONFIRMED — KEY FINDINGS

The Evidence

Signal Scorecard

Six independent signals evaluated. Red = engine signal, green = human signal, gray = insufficient data.

Signal | Value | Threshold / Baseline
Engine Likelihood | 38/100 | Flagged: 80+
ACPL | 8.3 cp | Human: 10–28 cp
Markov P₃₀ | N/A | No qualifying blunders
Error Texture CV | 1.950 | > 0.765 (human bursty)
Crisis Rho | N/A | > 0.0 (human tilt)
KS Distribution | N/A | < 0.197 (human dist.)
Blunder Rate | 8.8% | Human: 15–50%

Statistical Evidence

Metric | Measured Value | Human Benchmark | Signal Direction
ACPL | 8.3 cp | Engine: <5 cp · Ambiguous: 5–10 cp · Human (blitz): 10–28 cp | ▲ Below human floor
Error Texture CV | 1.950 | > 0.765 (human bursty) | ✓ Bursty — human
KS Distribution Distance | N/A | < 0.197 (human dist.) | — N/A
Tier-4 Blunder Rate | 8.8% | 15–50% (human range) | ▲ Below floor
Crisis Correlation ρ | N/A | > 0.0 (human tilt) | — Neutral
Markov P₃₀ | N/A | < 0.40 (human recovery) | — N/A
Regan Model Z | −1.53 | Normal: −0.5 to +2.0 | ▲ Below expected range
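The two headline metrics in the table — ACPL and the error texture CV — can be computed directly from a list of per-move centipawn losses. The sketch below is an illustrative reconstruction: the function names and the choice of population standard deviation are assumptions, not ChessForensics' proprietary formulas.

```python
import statistics

def acpl(losses):
    """Average centipawn loss (ACPL) over all scored moves."""
    return sum(losses) / len(losses)

def error_texture_cv(losses):
    """Coefficient of variation of per-move loss: std / mean.

    A high CV means errors arrive in bursts (human-like texture);
    a low CV means losses are spread uniformly (engine-like texture).
    """
    mean = acpl(losses)
    if mean == 0:
        return 0.0
    return statistics.pstdev(losses) / mean
```

For example, a mostly clean stretch with one large error — losses of [0, 0, 10, 50] — yields an ACPL of 15.0 and a CV of about 1.37, which would read as "bursty" against the report's > 0.765 human threshold.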

Move-by-Move Loss Profile

Every scored move plotted by centipawn loss. Green = engine-like precision. Red = blunder. Reference lines: engine threshold (14 cp), SuperGM ceiling (28 cp). Notable: 4 blunders in 57 moves — consistent with human play.

[Chart omitted: per-move centipawn loss for all 57 scored moves, with reference lines at 14 cp (engine threshold) and 28 cp (SuperGM ceiling). Color key: green = best/strong, red = blunder. Largest losses: move 29 Nc4 (72 cp), move 31 fxg4 (111 cp), move 34 Rc1+ (89 cp), move 43 Kf5 (64 cp).]

Move Quality Distribution


Quality | Count | % | Human Baseline
Best move | 35 | 61.4%* | 33–67%
Strong | 14 | 24.6%* | 10–25%
Minor inaccuracy | 3 | 5.3%* | 5–28%
Mistake | 1 | 1.8%* | varies
Blunder | 4 | 7.0%* | 15–50%

* Estimated from centipawn loss (0/≤10/≤25/≤50/>50 cp).
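The footnoted cutoffs translate directly into a classifier. This is a minimal sketch assuming the stated thresholds (0 / ≤10 / ≤25 / ≤50 / >50 cp) are applied verbatim; the function names are illustrative.

```python
from collections import Counter

def classify(loss_cp):
    """Map one move's centipawn loss to a quality bucket
    using the report's stated cutoffs (0 / <=10 / <=25 / <=50 / >50)."""
    if loss_cp == 0:
        return "Best"
    if loss_cp <= 10:
        return "Strong"
    if loss_cp <= 25:
        return "Minor"
    if loss_cp <= 50:
        return "Mistake"
    return "Blunder"

def distribution(losses):
    """Count and percentage for each quality bucket."""
    counts = Counter(classify(l) for l in losses)
    n = len(losses)
    return {q: (counts.get(q, 0), round(100 * counts.get(q, 0) / n, 1))
            for q in ("Best", "Strong", "Minor", "Mistake", "Blunder")}
```

Applied to this game's 57 per-move losses, these cutoffs reproduce the table above: 35 Best, 14 Strong, 3 Minor, 1 Mistake, 4 Blunders.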

Phase-by-Phase Accuracy

Human error peaks in the middlegame; engine accuracy stays consistent across all phases.

Opening: < 2 cp · Middlegame: 7.3 cp · Endgame: 11.6 cp
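Using the phase boundaries stated in the narrative (opening = moves 1–10, middlegame = 11–30, endgame = 31 onward), per-phase accuracy is just a grouped average. A sketch — the function name and parameter defaults are illustrative:

```python
def phase_acpl(losses, opening_end=10, middlegame_end=30):
    """Average centipawn loss per game phase.

    Boundaries follow this report's narrative: moves 1-10 opening,
    11-30 middlegame, 31+ endgame. `losses` is indexed by move order.
    """
    phases = {
        "opening": losses[:opening_end],
        "middlegame": losses[opening_end:middlegame_end],
        "endgame": losses[middlegame_end:],
    }
    return {name: round(sum(v) / len(v), 1)
            for name, v in phases.items() if v}
```

Run against this game's 57 losses, the grouped averages come out consistent with the figures above: roughly 1.4 cp opening, 7.3 cp middlegame, 11.6 cp endgame.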

Secondary ML Model

Metric | Value
ML Engine Probability | 100.0%
ML Verdict | ENGINE_STRONG
ML Confidence | HIGH

The ML model diverges from the behavioral analysis. When the two systems disagree, the confidence score is reduced accordingly. The ML model is trained on 10,000+ games including verified SuperGM play.


Forensic Narrative

A move-by-move behavioral reconstruction of Hans Moke Niemann's game

The game opens with standard theory: 1...Nf6, 2...e6, 3...Bb4, 4...O-O. Through the first 10 scored moves, Hans Moke Niemann maintains engine-level precision. The theory phase ends here, and the real chess begins.

The middlegame spans 20 moves and unfolds as a grinding battle, averaging 7.3 cp of loss per move. 21...Bxc4 costs 28 cp. 29...Nc4 costs 72 cp.

From move 31 the game enters a rook-and-minor-piece endgame spanning 27 scored moves. Average loss rises to 11.6 cp, up from 7.3 cp in the middlegame — precision visibly degrades as the game lengthens.

31...fxg4 costs 111 cp — catastrophic. The position transforms entirely. The aftermath confirms the human pattern: the next moves average 29.7 cp, elevated compared to baseline. The error leaves a wake before play returns to normal.
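The "wake" measurement described here can be sketched as a windowed average after each large error. The 50 cp blunder cutoff matches the report's own definition; the 3-move window is an assumption for illustration, since the report does not state its window size.

```python
def post_error_wake(losses, threshold=50, window=3):
    """Average loss over the `window` moves following each blunder
    (loss > threshold cp). An elevated wake relative to baseline is
    the human 'tilt' signature this report describes."""
    wakes = []
    for i, loss in enumerate(losses):
        if loss > threshold:
            after = losses[i + 1:i + 1 + window]
            if after:
                wakes.append(sum(after) / len(after))
    return wakes
```

For instance, losses of [0, 100, 30, 30, 0] produce one wake of 20.0 cp over the three moves after the 100 cp blunder — elevated relative to the clean moves around it.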

The defining pattern is variance — average loss climbs from 1.4 cp in the opening to 11.6 cp in the endgame, and errors cluster around move 31. Error texture CV of 1.950 sits above the human baseline — errors arrive in bursts, not at a steady engineered rate.

At 60/100 confidence, the evidence is consistent with genuine human play. The behavioral signals converge on the known human signature — imperfect, pressure-driven, and emotionally textured. No credible engine signal was found.

Five Costliest Decisions

Board Positions at Critical Moments

. . . . . . . .
. p . . R p k .
p . . n . . . p
. . r . . p . .
P . . . p . P .
. . . . . . . .
B . . . P P . P
. . . . K . . .
Move 31: fxg4 — 111 cp lost
. . . . . . . .
. p . R . p k .
p . . . . . . p
. . r . . . . .
P . . . n . p .
. . . . P . . .
B . . . P . . P
. . . . . K . .
Move 34: Rc1+ — 89 cp lost
. . . . R . . .
. p . . . p k .
p . . . . . . p
n . r . . p . .
. . . . p . P .
P . . . . . . .
B . . . P P . P
. . . . K . . .
Move 29: Nc4 — 72 cp lost
Move 31: fxg4
−111.0 cp — Blunder
A 111 cp blunder — the position collapsed after this decision. The engine's evaluation swung hard.
Move 34: Rc1+
−89.0 cp — Blunder
Costing 89 cp, this late-game error reshaped the entire evaluation.
Move 29: Nc4
−72.0 cp — Blunder
A 72 cp swing. Time pressure may explain the lapse.
Move 43: Kf5
−64.0 cp — Blunder
At 64 cp, this was a major turning point — a time-pressure collapse with lasting consequences.
Move 21: Bxc4
−28.0 cp — Inaccuracy
At 28 cp, this middlegame choice traded precision for practicality.

Final Verdict

PROBABLE HUMAN — Consistent With Natural Play
Engine Likelihood: 38/100 · Confidence: 60/100 — Moderate Confidence · FC-2CC81316

Based on the evidence presented above, the analysis of Hans Moke Niemann's game did not produce a strong signal in either direction. Additional games are recommended before drawing any conclusion.

Recommendation

The analysis was inconclusive. Recommended action: Submit additional games for analysis. A minimum of 3 games from the same player — each analyzed independently — provides the statistical foundation for a reliable conclusion.

YOUR NEXT STEPS

DEEP COACHING REPORT

Personalized for Hans Moke Niemann · 57 moves analyzed · SuperGM level (classical)

CHAPTER 1: YOUR WEAKNESS MAP

A full diagnostic assessment of your play in this game, measured against SuperGM blitz benchmarks. Every bar and every sentence is derived from specific signals in your moves.

SIGNAL RADAR ASSESSMENT

Signal Radar — SuperGM Blitz Percentile Comparison (higher = better)
Accuracy (ACPL): 8.3 cp · p98
Engine Match Rate: 74% · p98
Blunder Rate: 9% · p98
Error Texture (CV): 1.95 · p98
PV Stability: 0.86 · p84
Error Clustering (EDR): 0.10 · p2
Percentiles compare this single game against a database of SuperGM blitz games. The shaded middle band represents the p25-p75 range.

Accuracy (ACPL) (8.3 cp): Your average centipawn loss of 8.3 places you in the upper echelon of accuracy for this game. At the SuperGM level, this indicates strong move selection throughout and suggests deep familiarity with the position types that arose.

Engine Match Rate (74%): You matched the engine's top choice 74% of the time, which is exceptionally high. This means your intuitive move selection aligns with computer analysis on nearly three-quarters of decisions. For a SuperGM player, this signals strong pattern recognition.
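Engine match rate is the simplest signal in this report: the fraction of moves that equal the engine's first choice. A minimal sketch — the move lists in the example are illustrative, not from this game:

```python
def engine_match_rate(played, engine_best):
    """Fraction of moves where the played move equals the engine's
    top choice for the same position."""
    if len(played) != len(engine_best):
        raise ValueError("move lists must be the same length")
    return sum(p == b for p, b in zip(played, engine_best)) / len(played)
```

A rate near 74%, as measured here, means roughly three of every four decisions coincided with the engine's first line.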

Blunder Rate (9%): Your blunder rate of 9% represents your highest-impact improvement opportunity. Each blunder costs 50+ centipawns, which means a single blunder can undo 5-10 good moves. Tactical puzzle training is the fastest path to reducing this number.

Error Texture (CV) (1.95): Your error consistency (CV=1.95) shows high variance in your mistake sizes. You alternate between very clean stretches and sudden large errors. This bursty pattern typically indicates lapses of concentration rather than systematic weakness in any one area.

PV Stability (0.86): Your PV stability of 86% means you consistently stuck with the same plan across consecutive moves. High stability indicates confident, committed play where you execute a strategy rather than improvising move-to-move.

Error Clustering (EDR) (0.10): Your EDR of 0.105 shows highly dispersed errors. Your mistakes are evenly spread rather than clustered, which suggests consistent low-level inaccuracies across all phases rather than isolated concentration drops.

YOUR TOP 3 WEAKNESSES — RANKED

WEAKNESS #1: CONSISTENCY / VARIANCE

Your error texture coefficient of CV=1.95 indicates high variance in your play. You oscillate between stretches of near-perfect moves and sudden, significant errors. This feast-or-famine pattern suggests that your best play is very good, but you cannot sustain it consistently across an entire game.

High variance typically comes from relying on intuition without a structured decision process. When your intuition is right, you play brilliantly. When it fails, there is no safety net to catch the error. The goal is not to suppress your intuitive play but to add a verification step that catches the worst mistakes without slowing down your good decisions.

Exercises: (1) Before every move, spend 3 seconds checking for immediate threats. (2) Maintain a consistent tempo: do not rush through easy positions and then spend all your time on hard ones. (3) Track your error distribution across games to identify which game phases produce the most variance.

WEAKNESS #2: BLUNDER RATE

Your blunder rate of 9% is moderate. You are not hemorrhaging centipawns through tactical oversights, but there is a meaningful gap between where you are and where you could be. Most of these blunders likely occurred in positions where calculation depth mattered most.

At the SuperGM level, reducing blunders from 9% to under 8% is achievable within 4-6 weeks of targeted practice. The key insight: you do not need to calculate deeper, you need to calculate more carefully. The "candidate moves" technique, where you identify 2-3 options before committing to one, eliminates the most common source of moderate blunders.

Exercises: (1) Lichess Puzzle Storm: 3-minute sessions, twice daily. (2) For each blunder in this game, set up the position and find the best move without engine help, then check. (3) In your next 5 games, consciously pause for 3 seconds before every move in the middlegame.

YOUR HIDDEN STRENGTHS

Exceptional Accuracy: Your overall ACPL of 8.3 cp is outstanding. This level of precision means your move selection process is well-calibrated and you rarely drift far from the optimal path. In practical terms, this means opponents must play very accurately to beat you, because you give them few opportunities to capitalize on your errors.

Strategic Commitment: Your PV stability of 86% indicates you follow through on your plans with conviction. Rather than changing direction every few moves, you identify a strategy and execute it. This is a crucial skill for converting advantages into wins.

Opening Preparation: Your opening ACPL of 1.4 cp demonstrates strong theoretical knowledge or excellent positional intuition in the opening phase. You consistently reach playable middlegame positions, which is the primary objective of the opening.

CHAPTER 2: YOUR GAME — MOVE BY MOVE

A narrative walkthrough of the critical moments in your game. Not a table of numbers, but the story of what happened and why it mattered.

THE OPENING

The opening covered moves 1 through 10, spanning 10 scored decisions. Hans Moke Niemann navigated the opening with excellent precision, averaging only 1.4 centipawns of loss per move. The position at the end of the opening stood at +0.0, indicating a well-played theoretical phase.

THE CRITICAL MIDDLEGAME

The middlegame spanned 20 scored moves, with 60% of them being engine-perfect and a total loss of 146 centipawns (average: 7.3 cp per move). Here are the critical moments that defined this phase of the game:

Move 29: Nc4 — BLUNDER (−72 cp)

At this point, Hans Moke Niemann was ahead. The knight move Nc4 was a significant mistake, losing 72 centipawns. The position shifted noticeably — about 0.7 pawns worth. The engine preferred f5-g4 instead, which would have saved 72 centipawns (0.7 pawns). Knights are strongest on central outposts where they can't be chased by pawns. Before retreating a knight, look for active squares first.

Move 21: Bxc4 — MISTAKE (−28 cp)

At this point, Hans Moke Niemann was in a roughly equal position. The bishop move Bxc4 was an inaccuracy costing 28 centipawns. The engine preferred b7-b6 instead, which would have saved 28 centipawns (0.3 pawns). Bishop placement defines long-term strategic potential. A bishop locked behind its own pawns is far less effective than one commanding open diagonals.

These 2 critical moments accounted for 100 of the 146 centipawns lost in the middlegame (68%). This concentration of errors in a few key moments, rather than spread across every move, suggests that Hans Moke Niemann's general middlegame understanding is sound but breaks down at specific decision points.

THE ENDGAME

The game reached an endgame phase covering 27 scored moves. Hans Moke Niemann won the game from this position. The endgame averaged 11.6 centipawns of loss per move, with a total of 312 centipawns lost in this phase.

Move 31: fxg4 — BLUNDER (−111 cp)

This pawn move cost 111 centipawns (1.1 pawns). The engine preferred c5-c2 instead. Pawn endgames are the most concrete — every tempo counts.

Move 34: Rc1+ — BLUNDER (−89 cp)

This rook move cost 89 centipawns (0.9 pawns). The engine preferred c5-f5 instead. Rook endgames require precise calculation — one wrong check can let the opponent escape.

GAME SUMMARY

Across 57 scored moves, Hans Moke Niemann accumulated 472 centipawns of total loss. 35 moves (61%) were engine-perfect. 4 moves were classified as blunders (50+ cp). The game result was 0-1 against Magnus Carlsen.

CHAPTER 3: YOUR 30-DAY TRAINING PLAN

A structured, week-by-week improvement program tailored to the specific weaknesses identified in your game. Every recommendation is derived from your data.

WEEK 1: CONSISTENCY

Your error variance (CV=1.95) shows feast-or-famine play. This week targets building a consistent decision-making process to reduce your worst moments.

Day 1-3: Tempo Training (in casual games)
Play 3 casual games per day at a deliberate, even tempo. Do not rush through easy moves and do not agonize over hard ones. The goal is a steady rhythm: roughly the same time per move (5-8 seconds in blitz, 30-60 seconds in rapid).

Day 4-5: Safety-First Puzzles (15 min/day)
Solve puzzles but before playing the "winning" move, check: is this move also safe? Does it leave anything undefended? This trains the habit of verifying before executing.

Day 6-7: Game Review (30 min total)
After your weekend games, identify every move where your loss exceeded 30 cp. For each one, determine: were you rushing? Were you distracted? Were you frustrated? Pattern recognition of WHEN you lose consistency is the key to fixing it.

WEEK 2: POSITIONAL SHARPENING

Week 2 shifts focus to your secondary weakness while maintaining the habits built in Week 1. Continue your daily tactical puzzles from Week 1 (reduce to 10 minutes per day as maintenance) and add the following:

Your game showed no single dominant weakness, which means you are a well-rounded player. This week focuses on building your overall chess fitness through balanced training.

Day 1-3: Tactical Maintenance (15 min/day)
Solve 5 complex positions from recent SuperGM games per day on Lichess to keep your tactical vision sharp. Alternate between rated puzzles (for challenge) and Puzzle Streak (for speed).

Day 4-5: Game Analysis (30 min/day)
Review your recent games. For each, identify the critical moment and your thought process. Were you accurate? Were you confident? Understanding your decision-making pattern is more valuable than memorizing engine lines.

Day 6-7: Play Strong Opponents (45 min/day)
Seek games against players 100-200 rating points above you. Losing to stronger players and analyzing the games afterward is one of the fastest improvement methods available.

Integration exercise: Play 3-4 rated games this week. After each game, immediately check: (1) Did you apply the blunder-check habit from Week 1? (2) Did you work on this week's focus area? Score yourself honestly. Improvement is not about being perfect; it is about being more intentional than you were last week.

WEEK 3: ENDGAME DEEP DIVE

Your endgame ACPL of 11.6 cp reveals significant room for improvement in the final phase. This week provides a structured endgame curriculum.

Day 1-2: Essential Positions (30 min/day)
Study these five positions until you can play them perfectly from memory: (1) King + Pawn vs King with opposition, (2) Lucena position (rook + pawn win), (3) Philidor position (rook + pawn draw), (4) Vancura position (rook vs rook + a-pawn draw), (5) Queen vs passed pawn on 7th rank. Resources: de la Villa "100 Endgames You Must Know" chapters 1-2, or free Chessable courses.

Day 3-5: Endgame Drills (20 min/day)
Use Lichess "Play from Position" to practice endgame positions. Set up a position from the positions you studied and play it against the computer (Level 5-6). Repeat until you can win or draw (depending on the theoretical result) consistently. Key principle: activate your king early. In the endgame, the king is a fighting piece worth roughly 4 points.

Day 6-7: Practical Endgames (30 min/day)
Play 3-4 rapid games and consciously aim to reach endgames. Even if you have a winning middlegame attack, try to convert through simplification instead. This forces endgame practice in real game conditions and builds the habit of confident endgame play.

WEEK 4: INTEGRATION AND ASSESSMENT

The final week brings everything together. You have spent three weeks building specific skills; now it is time to integrate them into your natural playing process and measure your progress.

Day 1-2: Game Review Ritual (30 min/day)
Play 2 rated games per day. After each game, before checking with the engine, write down: (1) your 3 best moves and why, (2) your 3 worst moves and why, (3) the critical moment of the game and what you were thinking. THEN check with the engine. Compare your self-assessment to the engine's evaluation. The closer they match, the more self-aware you are becoming.

Day 3-5: Full Integration Games (45 min/day)
Play 3 rated games with conscious application of all skills: blunder check (Week 1), secondary weakness awareness (Week 2), phase-specific improvement (Week 3). Do not try to apply everything simultaneously; instead, pick one focus per game and rotate. After each game, analyze and note whether the focus area improved.

Day 6-7: Benchmark Assessment (60 min)
Play 3-5 rated games as your "assessment set." After all games, run them through a ForgeChess analysis or Lichess computer analysis. Compare your ACPL, blunder rate, and phase performance to the numbers from this report. Specifically, look at: (1) Is your ACPL below 8? (2) Is your blunder rate below 9%? (3) Did your worst phase improve? Even modest improvement confirms the training is working.

Rating targets: Based on your current estimated SuperGM level (~2688 Elo), consistent training at 20-30 minutes per day should yield approximately +50 points in 30 days (target: ~2738) and +150 in 90 days (target: ~2838). These are realistic expectations based on typical improvement curves for dedicated study at your level.

RESOURCES TAILORED TO YOUR LEVEL

These resources are selected specifically for the SuperGM level. Using resources too far above or below your level wastes time and builds frustration.

Books: Dvoretsky's Analytical Manual, Endgame Virtuoso by Smyslov, My Great Predecessors by Kasparov

Video content: ChessBase MegaDatabase analysis, Original analysis of your own games with deep engine time (30+ sec/move)

Lichess tools: Lichess cloud analysis at depth 40+, Lichess studies with annotation, Syzygy tablebase for endgame verification

Daily minimum: 5 complex positions from recent SuperGM games + 5 minutes of game review. This small daily investment compounds dramatically over weeks and months. Consistency beats intensity: 20 minutes every day is far more effective than 3 hours once a week.

This training plan is based on the specific signals from this single game. For the most accurate coaching, analyze 3-5 games to identify persistent patterns versus one-game anomalies. The weaknesses that appear across multiple games are the ones that matter most.

Thank you for trusting ChessForensics.
Every report we deliver is backed by rigorous calibration on thousands of confirmed games. We take our role in the integrity of competitive chess seriously — and we’re honored to play a part in yours.
chessforensics.com  ·  Report ID: FC-2CC81316  ·  2026-03-24

Analytical Methodology

ChessForensics uses 35 independent behavioral dimensions calibrated on more than 10,000 validated games from confirmed SuperGM players (Magnus Carlsen, Hikaru Nakamura, Alireza Firouzja, Nihal Sarin, and peers). Each dimension targets a distinct behavioral fingerprint that separates human cognition from engine assistance. Signal weights, thresholds, and formula parameters are proprietary and available to accredited arbitration panels upon written request.

Engine Likelihood and Human Authenticity scores are expressed on a 0–100 scale. Validated on 10,000+ games from 80+ verified SuperGM accounts (Magnus Carlsen, Alireza Firouzja, Nihal Sarin, and peers) and 6,000+ confirmed engine games. Cross-validated false positive rate: under 1% on SuperGM play. Zero false positives on named SuperGM accounts. Reports automatically adjust analysis context based on time control (blitz, rapid, classical).

Legal Disclaimer

This report was created for you personally. You are welcome to use it in an appeal, dispute, or tournament proceeding. Please do not redistribute the file publicly. ChessForensics is not affiliated with Lichess, Chess.com, FIDE, or any governing chess body. This analysis is independent statistical evidence, not a platform ruling. This report constitutes statistical and behavioral probability analysis, not definitive proof of misconduct. ChessForensics accepts no liability for outcomes of disputes in which this report is used.

Calibration Scope: Validated on 3+0 blitz and 2+1 bullet time controls using confirmed SuperGM behavioral baselines. Other time controls produce results with reduced confidence and are marked accordingly.