How It Works

From Raw Footage
to Plugged Leaks

SpotMyTell.AI turns your live stream into a timestamped, confidence-scored tell report. Here's the exact pipeline — and why it catches what no self-analysis or coaching session can.

Why This Matters

The Science of Poker Tells Is Real. The Gap in Tools Is Massive.

40 years of research and professional observation have documented exactly how physical tells work. Zero tools have automated their detection — until now.

📚

50+ Documented Tell Categories

Mike Caro's foundational research catalogued over 50 distinct tell categories from decades of live observation. Former FBI agent Joe Navarro, writing with Marvin Karlins and Phil Hellmuth, extended this with rigorous body-language science. This knowledge exists — but applying it to your own game has always required another person watching you.

Source: Caro's Book of Poker Tells; Read 'Em and Reap — Navarro, Karlins & Hellmuth
🧠

Daniel Negreanu's Documented Tell List

Negreanu publicly documented the 4 tells he watches most: chip glances after seeing strong cards, reaching for chips before others act, aggressive chip splashing when weak, and verbal confusion under bluffing pressure. These aren't secrets — they're widely known. The question is whether you're exhibiting them.

Source: Negreanu, High Performance Podcast interview; PokerNews tell analysis, Nov 2025
🛠️

GTO Closed the Strategy Gap. Nothing Closed the Tell Gap.

The poker training market is worth $4B+. GTO Wizard, PioSolver, PokerSnowie — these tools have closed the strategy gap for serious players. Yet a perfectly balanced range can still leak like a sieve physically. Solvers and HUDs address your betting lines. Nothing addresses your body. Until SpotMyTell.AI.

Source: Grand View Research, Online Poker Market 2024; HUDStore solver pricing survey 2026
The Process

Three Steps. End-to-End Automated.

No manual tagging. No editing. No coaching session required. Submit your footage and the pipeline does what a team of analysts would take days to do.

1 📤

Upload Your Stream

Share a Twitch VOD link, YouTube recording URL, or upload a private video file. Tell us your approximate seat position (e.g. "seat 3, left of dealer button") — that's all we need to lock onto you. No face photo required, no biometric database, no player profiling.

Any footage where you're visible at the table qualifies. Casino streams, home game recordings, footage captured from another player's stream — it all works. Minimum 45 minutes recommended for meaningful pattern detection; multi-session Deep Dive analysis is the gold standard.

Supported formats: Twitch VOD links, YouTube links, MP4 / MOV / AVI, Google Drive / Dropbox links, direct file upload up to 10GB
Minimum length: 45 minutes for single analysis. 4+ hours total across multiple sessions recommended for Deep Dive cross-session pattern detection.
Video quality: 720p or higher recommended. Lower resolution is processed but reduces micro-expression detection accuracy. 1080p is ideal.
Privacy at upload: Footage is encrypted in transit and at rest. Processed by our pipeline only. Deleted from servers within 30 days. Never shared, never trained on.
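For the technically minded, the requirements above boil down to a simple pre-upload screen. This is an illustrative sketch only — the function name, argument names, and return shape are hypothetical, not a real SpotMyTell.AI client:

```python
# Hypothetical pre-upload check against the documented limits.
# Thresholds come from the spec above; everything else is illustrative.
MAX_SIZE_GB = 10        # direct file upload cap
MIN_MINUTES = 45        # minimum length for single analysis
RECOMMENDED_HEIGHT = 720  # 720p or higher recommended

def check_footage(size_gb: float, minutes: int, height: int) -> list[str]:
    """Return a list of issues; an empty list means the footage qualifies."""
    issues = []
    if size_gb > MAX_SIZE_GB:
        issues.append(f"file exceeds the {MAX_SIZE_GB}GB upload limit")
    if minutes < MIN_MINUTES:
        issues.append(f"under the {MIN_MINUTES}-minute minimum for analysis")
    if height < RECOMMENDED_HEIGHT:
        issues.append("below 720p: still processed, but detection accuracy drops")
    return issues
```

A 2GB, 60-minute, 1080p recording passes cleanly; an oversized, short, low-resolution clip would come back with all three issues flagged.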
2 🧠

AI Tracks 23 Behavioral Categories Per Hand

This is where SpotMyTell.AI does the work that no human analyst could replicate at scale. Our computer vision pipeline identifies you in the frame, segments the footage into individual hands from deal through showdown, and runs parallel behavioral tracking across 23 tell categories simultaneously.

For every hand decision — preflop, flop, turn, river — the AI generates a behavioral fingerprint. These fingerprints are clustered and correlated against hand strength outcomes (where showdown data is available) and street-specific action patterns. Behaviors that reliably co-occur with strong hands, bluffs, or specific stack depths are flagged as candidate tells. A confidence threshold filters out noise.

Player identification: Computer vision + pose estimation locates you by seat. Reacquires after camera cuts or crowd obstructions. Tracks across full session without drift.
Hand segmentation: AI detects deal events, action sequences, and showdown moments to create clean per-hand analysis windows. No manual tagging required.
23 behavioral dimensions: Tracked simultaneously per decision point: posture lean, shoulder position, chip handling, bet-motion style, eye direction, gaze duration, breathing rate, facial micro-expressions, hand tremor, arm position, head tilt, vocal cues (if audio present), reaction timing, chip stack touch patterns, card peeking behavior, and more.
Pattern clustering: Statistical correlation of behavioral signals with hand strength categories (strong made hands, medium hands, draws, bluffs) across hundreds of decision points. Filters out single-occurrence noise to surface reliable recurring patterns.
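The clustering step described above can be sketched in a few lines. This is a toy model under stated assumptions — the data, behavior names, and thresholds are illustrative, not the production pipeline — but it shows the core idea: count how often a behavior co-occurs with each hand-strength class, then flag it only if it recurs often enough and correlates consistently enough:

```python
from collections import defaultdict

# Hypothetical decision points: (behavior, was it observed?, hand-strength class).
# In the real pipeline these would come from per-hand behavioral fingerprints.
decisions = [
    ("forward_lean", True,  "strong"), ("forward_lean", True,  "strong"),
    ("forward_lean", False, "bluff"),  ("forward_lean", True,  "strong"),
    ("forward_lean", False, "medium"), ("forward_lean", True,  "bluff"),
]

def candidate_tells(decisions, min_occurrences=3, min_confidence=0.6):
    """Flag behaviors that reliably co-occur with one hand-strength class."""
    by_class = defaultdict(lambda: defaultdict(int))  # behavior -> class -> count
    totals = defaultdict(int)                         # behavior -> total sightings
    for behavior, observed, strength in decisions:
        if observed:
            by_class[behavior][strength] += 1
            totals[behavior] += 1
    tells = []
    for behavior, counts in by_class.items():
        strength, count = max(counts.items(), key=lambda kv: kv[1])
        confidence = count / totals[behavior]
        # Filter single-occurrence noise and weak correlations.
        if count >= min_occurrences and confidence >= min_confidence:
            tells.append((behavior, strength, round(confidence, 2)))
    return tells
```

On the toy data, the forward lean shows up with strong hands in 3 of its 4 sightings, clearing both thresholds, so it surfaces as a candidate tell at 75% confidence.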
3 📋

Your Confidential Tells Report — Timestamped and Actionable

Your report lands in your email and account dashboard. It's organized by tell reliability — the highest-confidence, most frequently occurring, most exploitable tells are listed first. Everything is cross-referenced back to specific hands in your footage, with timestamps you can jump to directly.

Every identified tell includes: a confidence score, the frequency of occurrence, which hand-strength situations it correlates with, timestamps of specific instances, and an actionable behavioral adjustment — a concrete change you can practice and apply before your next session.

Tell ranking: Sorted by reliability score (confidence %) × frequency. Prioritize the leaks that are most exploitable, most often — not just the most dramatic.
Timestamped instances: Jump to the exact hands in your footage where each tell was detected. See it for yourself. Denial is harder when you can watch the clip.
Confidence scores: Each tell includes a % confidence score based on how consistently the behavior correlated with hand strength. Tells below our threshold are not reported.
Actionable fixes: Specific behavioral adjustments for each tell — not generic advice. "Keep both hands on the table during all bet-timing windows" vs. "be more aware of your hands."
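The ranking rule above (confidence × frequency) is straightforward to sketch. The sample entries and their frequency counts here are hypothetical, loosely echoing the proof-of-concept excerpt later on this page:

```python
# Hypothetical report entries; frequencies are invented for illustration.
tells = [
    {"name": "chip stack touch before value bets",    "confidence": 0.67, "frequency": 9},
    {"name": "forward posture lean on strong hands",  "confidence": 0.80, "frequency": 14},
    {"name": "delayed eye contact after bluff bets",  "confidence": 0.61, "frequency": 6},
]

def rank_tells(tells):
    """Sort by reliability score: confidence x frequency, highest first."""
    return sorted(tells, key=lambda t: t["confidence"] * t["frequency"], reverse=True)
```

Note the product ordering: a moderately confident tell that fires every orbit can outrank a dramatic one that appeared twice — which is exactly the "most exploitable, most often" priority described above.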
What We Track

23 Tell Categories Tracked Per Session

Not a generic body language scanner. Every category is calibrated for the specific behavioral dynamics of live poker — where adrenaline, stack pressure, and decision fatigue produce consistent, exploitable patterns.

🪑

Posture Lean

Forward/back position correlation with hand strength

💪

Shoulder Position

Tension vs. relaxation signals across streets

🎰

Chip Handling

Riffling, touching, splashing patterns by situation

🤌

Bet Motion Style

Aggressive vs. deliberate bet execution tells

👁️

Eye Direction

Post-check gaze: board, chips, opponent, away

⏱️

Gaze Duration

How long you hold eye contact before/after action

😤

Breathing Rate

Visible breathing changes under pressure hands

😐

Facial Micro-Expressions

Sub-second involuntary emotional signals

🤲

Hand Tremor

Bet motion steadiness — adrenaline vs. nerves

💪

Arm Position

Crossed, open, or guarding during opponent action

🗣️

Head Tilt

Angle changes correlated with strength/bluff

🔊

Verbal Response Timing

Instant vs. delayed verbal responses to pressure

⏳

Bet Timing

Snap, delayed, and tanked timing by hand strength

🃏

Card Peek Behavior

How often/long you check your hole cards

💵

Stack Touch Pattern

Chip-touching before bets: value vs. bluff correlation

🏃

Chip Reach Timing

Pre-action chip reaches that telegraph decisions

🎭

Acting Patterns

Sigh, shrug, and false weakness tells

🔄

Reaction Timing

How fast you react to cards, bets, and boards

🧍

Body Orientation

Turning toward/away from pot during action

😮

Lip Compression

Involuntary lip/jaw tension on strong hands

🌡️

Neck/Throat Signals

Swallowing, pulse visibility under pressure

🔀

Cross-Session Patterns

Multi-session correlation to confirm true tells

👥

Opponent Profiling

Pro/Elite: same analysis applied to your opponents

🔬
Proof of Concept — Verified and Real
Josh Thatcher (PLO Professor) · Venetian & TCH Live Sessions · Feb 2026

Before SpotMyTell.AI was a product, it was a controlled test. We analyzed Josh Thatcher (PLO Professor) — a professional PLO coach with a public Twitch stream — across 4 live cash game sessions at the Venetian and Texas Card House, without any prior knowledge of Josh's game.

The AI processed the footage and identified posture lean as Josh's #1 tell with 80% confidence across 3 separate sessions and 33 analyzed hands. The report also flagged 4 secondary tells. When we showed Josh the timestamped report, he confirmed the posture pattern immediately — he'd had no idea it was there.

This is exactly what SpotMyTell.AI does. It finds the tells you don't know you have — with timestamps and confidence scores you can verify yourself.

4
Sessions processed
33
Hands analyzed
80%
#1 tell confidence
5
Tells identified
Sample Report Output — Josh's Top Tells (Proof of Concept Excerpt)
#1 · Forward posture lean on strong made hands (turn + river) · 80% confidence · 3 sessions
#2 · Chip stack touch before large value bets · 67% confidence · 2 sessions
#3 · Delayed eye contact after bluff bets (looking away) · 61% confidence · 2 sessions
#4 · Shoulder release (tension drop) before folding marginal hands · 58% confidence · 2 sessions
Ready to Find Your Tells?

Join the Waitlist.
30 Days Free at Launch.

We're in pre-launch. Be first in line when SpotMyTell.AI goes live — and close the one gap in your training stack that solvers can't touch. No credit card required.

FAQ

Questions About the Process

Common technical questions about how SpotMyTell.AI works.

Do I need to tag my hands or edit the footage?
No. The AI handles all segmentation automatically. You submit the raw stream link or file and tell us your seat position — that's it. No timestamps, no hand histories, no editing required. The pipeline locates you, identifies hands, and runs the full analysis without any manual input from you.
What if the camera angle is bad or I'm partially obscured?
Partial obstruction is handled by the tracking system — it reacquires you when the view clears. For frames where you're fully obscured, those decision windows are excluded from the analysis rather than generating unreliable data. Poor angles reduce the number of trackable events, which lowers tell confidence scores. We recommend 720p+ footage with a clear table-level view for best results.
How is this different from just watching my own footage?
When you watch your own footage, you know your hand — which biases observation and prevents genuine tell detection. SpotMyTell.AI analyzes without that bias, tracks 23 categories simultaneously across every hand (not just hands you flag as interesting), and applies statistical pattern clustering across hundreds of data points to distinguish true tells from one-off behaviors. Humans cannot replicate this at scale. Even professional coaches watching your footage miss what a 23-dimension automated tracker catches.