In 2026, the line between AI-generated conversation and human conversation is blurring. Advanced large language models can sustain weeks of emotionally intelligent dialogue, maintain consistent personas, and respond in ways that feel genuinely personal. Scam operations use these tools at scale in romance scams, investment fraud, and customer service impersonation.
This guide gives you practical, specific techniques to test whether the person you're talking to is real — without relying on vague advice like "trust your gut."
The Definitive Test: Live Video Verification
No text-based test is as reliable as a live video call with a real-time element. Most AI scam systems cannot generate convincing live video — though deepfake technology is advancing rapidly. Here's how to do it properly:
📹 Live Video + Real-Time Instruction
Request a live video call (not a pre-recorded video or photo). When the call starts, ask them to perform a specific action you choose in that moment — hold up a piece of paper with a word you tell them, spin an object on the table, or wave with a specific hand. This real-time instruction defeats both pre-recorded deepfake video and AI systems that can't generate on-demand video. If they refuse, make excuses, or the response has any delay inconsistent with live action, treat it as a strong red flag.
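The real-time instruction only works if it is unpredictable, so it helps to decide the challenge at the moment the call starts rather than in advance. A minimal sketch of that idea in Python (the word and action lists here are hypothetical examples; any pool you choose on the spot works):

```python
import secrets

# Hypothetical pools; any words/actions the other party can't anticipate work.
WORDS = ["lantern", "copper", "violet", "harbor", "meadow", "quartz"]
ACTIONS = [
    "hold up a piece of paper with the word written on it",
    "tap your left shoulder twice, then wave",
    "hold up three fingers, then touch your nose",
]

def make_challenge() -> str:
    """Pick a fresh word + action when the call begins, so neither
    pre-recorded video nor pre-generated deepfake footage can match it."""
    word = secrets.choice(WORDS)
    action = secrets.choice(ACTIONS)
    return f'Say the word "{word}" out loud, then {action}.'

print(make_challenge())
```

The point is not the code itself but the property it enforces: the challenge must not exist anywhere before the live call starts.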
Text-Based Verification Techniques
When video isn't possible, or before you've asked for one, these text-based tests can reveal AI patterns:
🤔 Ask Questions Only They Could Know
Reference something specific from early in your conversation that would require genuine memory — not something they could reconstruct from context. "What was the thing you said you were most worried about when we talked two weeks ago?" or "What was the name of the friend you mentioned in passing last Tuesday?" AI systems struggle with highly specific episodic recall under pressure. If they deflect, generalize, or answer incorrectly, treat it as a signal.
🌡️ Ask About Current Sensory Experience
Ask what their physical environment looks like, sounds like, or smells like right now. "What can you hear around you right now?" "What's the weather like where you are today?" A real person gives an instant, specific, sensory answer. AI either makes something up (which can be further probed) or gives a vague, slightly generic response that doesn't quite match the immediacy of the question.
💬 Ask a Genuinely Controversial Opinion Question
Choose a topic where there's a genuine, common human opinion divide — nothing political or inflammatory, but something with real texture. "Do you think [specific well-known food] is overrated or underrated?" or "What's your honest take on [specific niche topic relevant to their claimed background]?" AI systems are trained to hedge and present balanced perspectives. Real people have actual preferences and often express them somewhat defensively or enthusiastically.
⏰ Watch for Inhuman Consistency
Humans are emotionally and temporally inconsistent. They have bad days, slow responses at certain times, distracted moments, brief grumpy patches. AI romance systems are consistently emotionally warm, always available, always saying the right thing. Sustained, perfect emotional attunement over weeks, without a single irritable or distracted moment, is anomalous for a human and suggests AI generation.
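If your chat app lets you export message timestamps, you can put rough numbers on this. A sketch in Python, stdlib only — the thresholds (20 distinct hours, timing variance under 10% of the mean) are illustrative assumptions, not calibrated values:

```python
from datetime import datetime, timedelta
from statistics import mean, stdev

def consistency_flags(reply_times: list[datetime]) -> list[str]:
    """Flag reply patterns that are unusual for a single human:
    round-the-clock availability and near-constant response cadence."""
    flags = []
    # Humans sleep: replies landing in almost every hour of the day are odd.
    hours = {t.hour for t in reply_times}
    if len(hours) >= 20:
        flags.append("replies in nearly every hour of the day")
    # Near-identical gaps between consecutive replies suggest automation.
    gaps = [(b - a).total_seconds()
            for a, b in zip(reply_times, reply_times[1:])]
    if len(gaps) >= 5 and stdev(gaps) < 0.1 * mean(gaps):
        flags.append("reply timing is suspiciously uniform")
    return flags

# Example: a correspondent replying every 61 minutes, around the clock.
times = [datetime(2026, 1, 1) + timedelta(minutes=61 * i) for i in range(30)]
print(consistency_flags(times))
```

Treat the output as one more signal to weigh, not a verdict: shift workers, insomniacs, and people in other time zones produce odd-looking timestamps too.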
🔍 Reverse Image Search All Photos
Use Google Images, TinEye, and Yandex Image Search on every photo they've sent. AI-generated faces or stolen photos often appear on other profiles, in stock photo databases, or trace to identified scam accounts. A clean, professional-looking photo that appears nowhere online is suspicious — real people leave photo trails. A photo that appears on a scam database site is definitive.
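Reverse image search engines match perceptual fingerprints rather than exact file bytes, which is why a scammer's re-cropped or re-compressed copy of a stolen photo still turns up. A toy version of one such fingerprint, the 8x8 average hash, in pure Python (a real implementation would load and downscale pixels with an image library such as Pillow; the thresholding logic is the same):

```python
def average_hash(pixels: list[list[int]]) -> int:
    """64-bit average hash of an 8x8 grayscale grid:
    each bit is 1 if that pixel is brighter than the grid's mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance means 'same image'."""
    return bin(a ^ b).count("1")

# Toy 8x8 "images": identical except one slightly brightened pixel.
img1 = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
img2 = [row[:] for row in img1]
img2[0][0] += 3  # small edit barely moves the fingerprint

d = hamming(average_hash(img1), average_hash(img2))
print(d)  # prints 0: the tweak doesn't change the fingerprint
```

This is why "I edited the photo slightly" is not a reason a genuine picture would vanish from search results — minor edits survive perceptual hashing.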
🎙️ Unexpected Conversation Pivot
Abruptly change the subject mid-conversation to something unrelated to the current topic — something personal and specific. "Wait, before we continue — what did you name your first pet?" The conversational dislocation catches AI systems that are operating on scripted response trees or that need to recalibrate. Human conversational pivots feel natural. AI pivots sometimes feel slightly off or produce an overlong, over-explained response.
What AI Systems Can and Can't Do Well
AI Does Well
- Remembering details from earlier in the same conversation
- Producing grammatically perfect, emotionally appropriate responses
- Maintaining consistent persona details (name, backstory, personality)
- Expressing empathy and emotional attunement at scale
- Responding quickly at any hour without fatigue signals
AI Struggles With
- Genuine episodic memory across multiple conversation sessions (without retrieval systems)
- Real-time video generation with live instruction following
- Expressing genuine strong opinions on ambiguous topics
- Describing sensory experiences with immediate specificity
- Behaving inconsistently in natural human ways (tiredness, distraction, irritability)
- Answering questions about shared history that wasn't scripted
AI Detection Tools
Several tools attempt to detect AI-generated text:
- GPTZero (gptzero.me) — designed to detect AI-generated writing; useful for analyzing message samples
- Originality.AI — text-based AI detection with some reliability
- Hive Moderation AI Detector — image and text detection
These tools are imperfect and should be used as supporting evidence, not primary verification. They can produce false positives and false negatives. The live video test remains the gold standard.
Warning Signs That Should Trigger Verification
- Unusually fast emotional progression
- Profile that looks professionally photographed but traces nowhere online
- Refusal to live video call despite extended contact
- Consistent availability at unusual hours without explanation
- Conversation that feels slightly too smooth or perfectly supportive
- Any financial ask, regardless of context
See the full red flag guide at AI Romance Scam Red Flags. If something has already gone wrong, find recovery resources at AIScamRecovery.com. Current scam campaigns at AIScamNews.com.
🛡️ Reduce Your Digital Exposure
Less publicly available data means less fuel for AI impersonation. NordVPN and Aura help protect your digital footprint.
Related Resources
- What to do if you were already scammed by AI: if prevention failed, here's how to recover.
- Remove yourself from data broker sites: reducing your data footprint makes you a harder target.
- Current AI scam alerts: know what scams are circulating right now.
Frequently Asked Questions
How can I tell if I'm talking to an AI chatbot?
Request a live video call with a real-time test. Ask highly personal questions only a real person from your shared context could answer. Watch for inhuman consistency — AI never gets tired or irritable. Ask about immediate sensory experience.
Can AI pass a video call verification?
Most AI systems cannot do convincing live video. Ask the person to hold up a word you give them at that moment or perform a live action. Pre-recorded deepfake video is possible, but responding to live instructions in real time is much harder to fake.
Are AI chatbots getting better at impersonating humans?
Yes, significantly. Modern LLMs maintain consistent personas and sustain emotionally attuned dialogue. However, they still struggle with genuine episodic memory, live video, ambiguous opinions, and sensory descriptions under scrutiny.
What questions can I ask to test if someone is an AI?
Ask about a specific sensory experience in their current environment. Ask for their opinion on something genuinely ambiguous. Reference a specific detail from an earlier conversation. Ask something only someone with your shared history would know.