A finance professional in Hong Kong joined a video call with his CFO and colleagues based in London. The CFO instructed him to transfer $25 million to a specific bank account.
Their faces and voices seemed real. But they weren’t. The entire meeting was deepfaked - every other participant was AI-generated. Only after transferring the money did he realize he had been fooled. By then, the money was gone.
This incident happened in 2024. And it’s precisely why every team - from finance to operations to HR - needs to rethink how it verifies trust in a GenAI-powered world.
Quick Takeaways
GenAI can replicate trusted voices and faces convincingly enough to pass in many professional contexts.
Organizations must add Responsible AI literacy to their cybersecurity training.
If it looks or sounds right, don’t assume it’s real. Always verify through a second trusted channel.
3 Steps to Outsmart a Deepfake
Here’s how to train your eye, ear, and instinct to spot red flags when everything looks real.
1. Break the Script
Deepfakes perform best when the interaction is predictable. When the conversation veers off the expected path, the illusion can unravel.
Tip: Ask something hyper-specific, personal, or recent - something only the real person would know (e.g., “What did we agree on in our last Slack thread?” or “What hotel were we at during last year’s conference?”).
Why this works: Deepfakes are often scripted or trained on static data. They may hesitate, give generic responses, or deflect the question entirely.
2. Watch Eyes and Facial Micro-expressions
AI still struggles with replicating natural human eye behavior and emotion. Blinks may be too frequent or too rare, and expressions often feel delayed or too smooth.
Tip: Look for inconsistencies like robotic blinking, emotionless reactions to emotional topics, unnatural lighting - such as shadows that don’t shift when the person moves - or a face that looks flat and unaffected by changing light.
Why this matters: These subtle cues are where “uncanny valley” detection lives - when something looks almost human but is just off enough to trigger discomfort. Humans are surprisingly good at spotting these cues when we pause to observe.
3. Always Verify Through a Secondary Channel
No high-stakes request - financial, personal, or account-related - should be executed based on appearance or sound alone.
Tip: Call or message the person via a pre-established contact (e.g., saved phone number or corporate line). Ask them to confirm outside the platform where the request was made.
Why this matters: Verifying through a trusted second channel reinforces a culture of due diligence - and makes thoughtful confirmation a professional norm, not a nuisance.
The greatest threat to GenAI security isn’t the technology. It’s our instinct to trust what we see and hear.
Why It Matters
The danger of deepfakes goes far beyond business transactions. It touches every part of life:
A job candidate deepfakes their way into an interview with AI-generated credentials
A family member receives a panicked call from their “kidnapped child” - a voice-cloned ransom scam in action
A politician is deepfaked into a false statement, eroding public trust
Today, the scariest question is no longer “Can I trust this stranger?” It’s “Can I even trust what I see or hear?”
From internal comms to board meetings, from hiring to parenting - everyone needs to pause and verify.
Because the real danger isn’t the technology, it’s assuming it won’t affect you.
Watch the Free On-Demand Webinar & Download the Human Skills Playbook - Click Here!
Outsmart Deepfakes With This Checklist
Deepfakes are no longer just a threat in headlines. They’re now showing up in everyday professional moments: a voicemail from your lawyer or banker, a video message from a vendor or partner, or a voice call from a government agency.
When the stakes are high, use this quick checklist to protect yourself and your organization:
Is the request unusually urgent, emotional or secretive?
Does the voice or tone sound slightly robotic or oddly flawless?
Are the eyes, lip sync, or expressions just a little off?
Is this a high-stakes request (money, account access, sensitive info)?
Have I confirmed this person’s identity through a second trusted channel?
If you check even one of these - PAUSE.
Don’t trust your gut alone. Trust your verification process.
"It looked and sounded real” is not protection. It’s the warning sign.
Share The Love
Found this issue valuable? Share it with a friend who wants to learn how to use AI ethically and responsibly. Forward this email or send them this link to subscribe: https://skills4good.ai/newsletter/
Till next time, stay curious and committed to AI 4 Good!
Josephine and the Skills4Good AI Team
P.S. Want to stay ahead in Responsible AI?
Here’s how we can help you:
1. Leadership Cohort
Join our Leadership Cohort Program for exclusive access to the Responsible AI Certification, with expert-led learning and community support.
2. On-Demand Course
Short on time? Get up to speed fast! This on-demand course teaches you practical Responsible AI fundamentals - all in just 3 hours. Plus, gain 3 months of AI 4 Good community access!
Copyright 2025 Skills4Good AI. All Rights Reserved.
You are receiving this email because you previously requested to be added to our mailing list, subscribed to our newsletter on our website, enrolled in one of our courses, attended one of our events, or accessed one of our resources. If a friend forwarded you this message, sign up here to get it in your inbox.