
Always on, always at risk

Matt Berzinski at Ping Identity explains why live streamers are the new prime targets for deepfakes

Live streaming has transformed personal connection into a full-time broadcast. Creating hours of face-to-camera video and clean audio is brilliant for building a community, a brand and, for some advertisers, a piece of prime real estate.

 

Yet, this always-online presence hands bad actors the perfect toolkit to counterfeit a creator’s likeness. The very same content enables someone to mimic a face, clone a voice and make a fabrication look real. This isn’t a niche worry. Malicious deepfakes are already targeting creators with explicit fabrications and unnerving sponsors who fear association with doctored content. This erodes the fundamental trust streamers and brands spend years building.

 

Deepfake tools are now easier to use than ever, the cost of creation has plummeted and fabricated clips spread faster than any correction. Tackling this threat will require stronger safeguards, clearer provenance and the ability to verify what’s real before the damage is irreparable.

 

 

Why streamers are in the blast zone

It all starts with data. Modern deepfake tools can build a convincing model from mere seconds of audio or video. A creator’s entire back catalogue offers hundreds of hours of well-lit, frontal footage captured at consistent angles with high-quality microphones. This clean signal dramatically lowers the barrier to realism; the result may not be perfect, but it is more than persuasive enough for social media platforms that reward quick reactions and constant engagement over careful verification.

 

Because audiences feel they “know” the person on screen, the fabricated content is automatically granted baseline trust. When a creator with influence appears to offend, confess or ask for money, the fallout can be immediate and catastrophic.

 

In the influencer economy, a creator’s trust with their audience translates directly into watch time, ongoing engagement and lucrative brand deals. But that trust can also be weaponised. A single convincing fake can fracture a community or instantly jeopardise a partnership, long before facts can catch up.

 

Rapid distribution finishes the job. While a show may live on Twitch or YouTube, the surrounding ecosystem spans TikTok, Shorts, Discord and Reddit. Clips spread across these surfaces in minutes. Platforms such as Twitch have already had to explicitly address synthetic sexual imagery. By the time anyone checks the source, the lie has already travelled.

 

The target surface extends far beyond the main host. Co-hosts, moderators, agents and even family members often leave enough public trace to clone. This creates paths for bogus rights claims, fraudulent payout-change requests or “urgent” voice notes that can pressure people into financial or operational mistakes.

 

 

Detection and the law are lagging behind

Detection is necessary, but it can’t carry the load alone. The best systems look for subtle ‘tells’ – mismatched lighting, lip-sync glitches, odd micro-expressions or unnatural audio transitions. Yet toolmakers are advancing their models so quickly that these tells are disappearing. New lip-sync models can drive almost any face from any voice; voice clones now capture breath, tone and cadence; and post-production smooths away the artefacts older detectors relied on. Even when detection works, it often fires only after a deepfake has already started to inflict damage. No one can unsee a fake that has already done the rounds, and by that point the malicious incentives have already paid out.

 

Current legal frameworks are struggling to keep pace. Some jurisdictions – including the UK, several US states and parts of the EU – have criminalised non-consensual sexual deepfakes and harmful impersonation. Others, however, still rely on statutes never written with synthetic media in mind.

 

Cross-border enforcement is slow, and offenders can spin up new accounts or offshore sites with little trouble. In practice, victims face a brutal cycle: the malicious content reappears faster than takedowns can propagate, while legal action remains costly and slow.

 

Meanwhile, fraud is simply adapting. Once a creator’s voice model exists in the wild, it can be used to impersonate them in calls or voice notes. We’ve already seen “CEO-voice” scams in the corporate world; the same playbook works here. A cloned voice can nudge a sponsor to pay a bogus invoice, charm a support agent into resetting an account or coax a moderator into sharing private details. None of this is futuristic; it is old-fashioned social engineering, supercharged by AI.

 

 

What should change now

The aim is not to ban synthetic media, but to make malicious deepfakes harder to create, harder to spread and easier to disprove.

 

Creator tools and platforms must adopt digital provenance. This starts with cryptographically signing content at capture and preserving tamper-evident metadata – the digital record that shows when and how a file has been edited and uploaded. With a clear chain of custody, creators can prove “this is mine,” and platforms can show “this content was altered.”

 

The open standards for this technology exist; what’s missing is adoption and simple viewer cues – labels or tap-through details that immediately show source, edits and publisher. Provenance doesn’t stop a deepfake from being made, but it makes authentic content easier to trust.
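To make the idea concrete, here is a minimal sketch of signing at capture, assuming an Ed25519 key and Python’s cryptography library stand in for whatever a real capture device or a C2PA-style toolchain would actually use; the manifest fields and function names are hypothetical, not any standard’s schema.

```python
# Illustrative only: a minimal provenance manifest, signed at "capture".
# Real deployments follow open standards such as C2PA; names here are hypothetical.
import json, hashlib
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def make_manifest(media_bytes: bytes, publisher: str) -> dict:
    """Record what was captured, when, and by whom (tamper-evident metadata)."""
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "publisher": publisher,
        "edits": [],  # each later edit would append an entry, preserving the chain
    }

def sign_manifest(key: Ed25519PrivateKey, manifest: dict) -> bytes:
    payload = json.dumps(manifest, sort_keys=True).encode()
    return key.sign(payload)

def verify(public_key, manifest: dict, signature: bytes, media_bytes: bytes) -> bool:
    """A platform or viewer can check both the signature and the file hash."""
    if hashlib.sha256(media_bytes).hexdigest() != manifest["sha256"]:
        return False  # media no longer matches what was signed
    try:
        public_key.verify(signature, json.dumps(manifest, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False

# Example: sign a clip at capture, then spot a doctored copy.
key = Ed25519PrivateKey.generate()
clip = b"...raw video bytes..."
manifest = make_manifest(clip, publisher="creator-channel")
sig = sign_manifest(key, manifest)
print(verify(key.public_key(), manifest, sig, clip))          # True: provenance intact
print(verify(key.public_key(), manifest, sig, clip + b"x"))   # False: content altered
```

The point of the sketch is the asymmetry it creates: an attacker can still fabricate a clip, but they cannot fabricate a valid signature over it, so the genuine article is the one that carries proof.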

 

Detection systems must be directly linked to response protocols. Media forensics should sit alongside behavioural and network signals. If a suspicious clip originates from a new account or is spreading unusually fast, platforms must be ready to slow the clip down and mandate a human check before it gains traction.
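As an illustration only – the weights, thresholds and signal names below are invented, not any platform’s actual policy – a triage step like this could combine a forensic score with account age and spread velocity to decide when a clip should be slowed pending human review.

```python
# Hypothetical sketch: combining a media-forensics score with behavioural and
# network signals to decide whether a clip is throttled and queued for review.
from dataclasses import dataclass

@dataclass
class ClipSignals:
    forensic_score: float     # 0..1 likelihood of manipulation from media forensics
    account_age_days: int     # how established the uploading account is
    shares_per_minute: float  # current spread velocity across surfaces

def triage(sig: ClipSignals) -> str:
    risk = sig.forensic_score
    if sig.account_age_days < 7:
        risk += 0.2          # brand-new accounts are a common origin for fakes
    if sig.shares_per_minute > 50:
        risk += 0.2          # unusually fast spread before anyone has verified it
    if risk >= 0.7:
        return "throttle_and_review"   # slow distribution, mandate a human check
    if risk >= 0.4:
        return "label_and_monitor"
    return "allow"

print(triage(ClipSignals(forensic_score=0.55, account_age_days=2, shares_per_minute=120)))
# -> "throttle_and_review"
```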

 

Platforms can immediately halt monetisation, alert the targeted creator and, if the content is proven harmful or non-consensual, block future uploads of the same material using its digital fingerprint. Under clear rules, those fingerprints should also be shared with trusted partners to stop the same fake reappearing elsewhere.
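A rough sketch of how shared fingerprints could block re-uploads is below; it uses exact SHA-256 matching for brevity, whereas real systems would more likely use perceptual or robust hashes so that re-encoded or lightly edited copies still match.

```python
# Illustrative sketch of blocking re-uploads of confirmed harmful material.
import hashlib

confirmed_harmful: set[str] = set()   # fingerprints shared with trusted partners

def fingerprint(media_bytes: bytes) -> str:
    return hashlib.sha256(media_bytes).hexdigest()

def register_incident(media_bytes: bytes) -> None:
    """Called once a clip is proven harmful or non-consensual."""
    confirmed_harmful.add(fingerprint(media_bytes))

def screen_upload(media_bytes: bytes) -> bool:
    """Return True if the upload may proceed, False if it matches a known fake."""
    return fingerprint(media_bytes) not in confirmed_harmful

fake_clip = b"...doctored clip bytes..."
register_incident(fake_clip)
print(screen_upload(fake_clip))        # False: blocked on re-upload
print(screen_upload(b"other clip"))    # True
```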

 

 

Secure accounts, fast support and smart policy

High-risk actions – such as payout changes, stream access resets and ownership transfers – must demand phishing-resistant authentication. These critical actions cannot rely on voice calls or email alone. Platforms must enforce the use of device-bound passkeys or hardware security keys, coupled with strict “no exceptions” rules for support staff. This closes off the routes that sophisticated voice clones would otherwise exploit.
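The enforcement logic itself can be simple; what matters is that there is no side door. A hypothetical policy gate might look like the sketch below, with the action and method names invented for illustration rather than taken from any real platform’s API.

```python
# Hypothetical policy gate for high-risk account actions. Only phishing-resistant
# methods (device-bound passkeys, hardware security keys via WebAuthn/FIDO2)
# clear the check, and support staff have no override a cloned voice could
# talk them into.
HIGH_RISK_ACTIONS = {"payout_change", "stream_key_reset", "ownership_transfer"}
PHISHING_RESISTANT = {"passkey", "hardware_security_key"}

def authorize(action: str, auth_method: str, support_override: bool = False) -> bool:
    if action not in HIGH_RISK_ACTIONS:
        return True                      # routine actions follow routine policy
    if support_override:
        return False                     # "no exceptions": support cannot waive the check
    return auth_method in PHISHING_RESISTANT

print(authorize("payout_change", "passkey"))                             # True
print(authorize("payout_change", "voice_call"))                          # False
print(authorize("payout_change", "email_link", support_override=True))   # False
```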

 

Victims need a clear, functional crisis channel: a single route to trained staff available 24/7, with a set timetable for decisions. Platforms should utilise standard forms and official statements, assign case handlers who deeply understand creator workflows and apply new content hashes platform-wide immediately upon confirming an incident. Crucially, creators should be encouraged to retain all originals and “official clips” to maintain a trusted record when fakes appear.

 

Policy must reinforce, not replace, this operational work. Laws need to clearly prohibit non-consensual deepfakes and harmful impersonation, backed by meaningful penalties and faster cross-border enforcement. Alongside legal reform, we need investment in open standards for verifying authentic media and better public awareness to help people spot and question manipulated content online.

 

 

The way forward

Deepfakes are not going away. For live streamers, the core challenge is making the risk manageable without losing the spontaneity and intimacy that make the medium compelling. We can meet this challenge, but only by accepting that no single fix is enough. We must combine forces: provenance makes authentic content easier to trust; joined-up detection and response mechanisms limit the reach of malicious fakes; strong account safeguards shut down voice-led scams; and focused laws enforce those rules.

 

By uniting these elements, we can protect not only the creators, but the fragile, essential trust that keeps the live chat worth watching. 

 


 

Matt Berzinski is Senior Director, Product Management at Ping Identity

 

Main image courtesy of iStockPhoto.com and LeoPatrizi
