Top 5 Emerging Security Threats for Video Conferencing in 2025
SAFAS

November 13, 2024

Amidst controversial return-to-office mandates by large companies like Amazon, remote and hybrid work is actually still growing - remote work is up 6% in the US since 2023, according to Owl Labs’s 2024 “State of Hybrid Work” report (source).

And with remote work, video conferencing has solidified its place in handling every workplace interaction: from daily standups to client meetings, hiring interviews, and high-stakes financial transactions, virtual meetings on camera are here to stay. Yet as companies come to rely on platforms like Google Meet, Teams, Zoom, and WebEx, fraudsters are focusing increasing effort on exploiting these new entry points in a new wave of cybersecurity threats.

The stakes are high:

Video conferencing, for all its convenience, has proven to be particularly vulnerable to sophisticated forms of digital attacks. With sophisticated deepfake technology already available as end-user services at low cost, attackers find new ways to manipulate what we see and hear on the screen. Deloitte recently published a study on deepfake fraud in financial services, highlighting how these threats are no longer just speculation of what’s to come - they’re real, they’re growing, and they’re affecting industries ranging from banking to education. Below, we break down the five most pressing threats to watch closely in 2025—and what security leaders can do to prepare.

1. Injection of Pre-Recorded Video

One of the more traditional threat types involves attackers injecting pre-recorded video to make it appear as if a person is present and engaging in real time. Pre-recorded footage can also be looped indefinitely or otherwise tampered with to evade detection.
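A looped clip repeats its frames verbatim, which hints at one naive detection approach. The sketch below is a toy model of that idea: it hashes each frame and reports the first exact repeat. Real camera feeds contain sensor noise, so a production system would need perceptual hashing rather than exact hashes; the frame bytes here are purely illustrative.

```python
import hashlib

# Toy sketch: detect a looping pre-recorded feed by hashing frames
# and looking for an exact repeat of an earlier frame.
# Real feeds have sensor noise, so production systems would use
# perceptual hashing instead of exact SHA-256 digests.
def first_repeated_frame(frames):
    """Return the index of the first frame whose hash was seen before, else None."""
    seen = set()
    for i, frame in enumerate(frames):
        digest = hashlib.sha256(frame).hexdigest()
        if digest in seen:
            return i
        seen.add(digest)
    return None

# A looped clip repeats its frames verbatim:
print(first_repeated_frame([b"f1", b"f2", b"f3", b"f1"]))  # -> 3
```

This also illustrates why detection alone is fragile: an attacker who re-encodes or subtly perturbs each loop iteration defeats exact matching, which is why verifying the feed's origin is the stronger control.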

Why It’s a Threat:

This tactic could let unauthorized users join via a leaked meeting link and pose as legitimate attendees, bypassing security protocols while the fraudster listens in on sensitive discussions. Imagine an internal company meeting in which a recording of an employee's video is replayed to gain confidential insights into strategic or financial decisions.

Prevention Strategy:

Real-time hardware verification technology offers a robust first line of defense; this is precisely what Safas does: it guarantees that camera feeds in video calls originate from real camera hardware and are not videos streamed through a virtual camera.
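As a much weaker but easy-to-deploy complement, IT teams sometimes flag devices whose names match known virtual-camera products. The sketch below assumes the device names have already been enumerated (enumeration itself is OS-specific) and the product list is a small, non-exhaustive example; determined attackers can rename devices, which is exactly why hardware-level verification is the stronger guarantee.

```python
# Illustrative sketch: flag camera device names matching known
# virtual-camera products. The list is a non-exhaustive example,
# and name matching is easily evaded by renaming the device.
KNOWN_VIRTUAL_CAMERAS = (
    "obs virtual camera",
    "manycam",
    "snap camera",
    "xsplit vcam",
    "droidcam",
)

def flag_virtual_cameras(device_names):
    """Return the subset of device names that look like virtual cameras."""
    flagged = []
    for name in device_names:
        lowered = name.lower()
        if any(vcam in lowered for vcam in KNOWN_VIRTUAL_CAMERAS):
            flagged.append(name)
    return flagged

print(flag_virtual_cameras(["Integrated Webcam", "OBS Virtual Camera"]))
# -> ['OBS Virtual Camera']
```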

2. Injection of Deepfakes (Face Swaps, Face Morphs, Synthetic Faces)

Deepfakes are images created with generative AI that let bad actors alter, replace, or morph their face with someone else's. They are no longer a preview of what's to come, but already here: free tools like Deep-Live-Cam are well known in online communities such as Reddit, and despite being free they work very effectively, needing only a single photo of the victim to assume their likeness. As a result, the injection of deepfakes has emerged as a top security threat.

Why It’s a Threat:

In a high-stakes environment, say an M&A negotiation or a legal deposition, a fraudster duping employees into transferring payments or leaking information can have disastrous consequences: regulatory compliance violations and financial and reputational losses, as seen in several high-profile cases (see our first blog post).

Prevention Strategy:

IT administrators can employ deepfake detection apps available on marketplaces like the Zoom App Marketplace. These apps scan video for deepfakes, which is unfortunately a race against the rapid evolution of generative AI itself, and they can be easily spoofed, for example by a fraudster lowering the camera resolution and blaming their internet connection. A more secure approach is, once again, to guarantee that the camera stream comes from a real camera. This is particularly crucial for meetings joined via web browsers, which are completely unprotected against video injection.

3. Adding Filters to Hide The Real Image

While the concept of filters often brings Snapchat or Instagram to mind, they're becoming a security concern for video conferencing. Attackers use overlays or effects to obscure their real identity, disguising features or making it challenging to determine who they are. These filters are not necessarily created by GenAI, but GenAI can make the effect subtler when facing an unknown person (for example, gender-change filters).

Why It’s a Threat:

Filters make it harder to verify someone's true appearance, enabling bad actors to deflect the victim's doubts. This is particularly problematic in industries where visual ID checks are part of the security process.

Prevention Strategy:

In this case, IT managers can enforce policies that only allow filters which do not hide the face of the person. Filters applied by apps such as Teams or Zoom are overlays that can be configured, flagged, and even deactivated at the IT admin level. Filters that are already part of the camera feed, however, may have been maliciously added via a virtual camera and escape these controls.

4. AI Voice Generation (Audio Deepfake)

Deepfake technology doesn't stop at video; audio deepfakes are on the rise too, allowing fraud without the risk of showing a face. With the help of voice AI, attackers can mimic a person's voice in real time, opening the door to impersonation and manipulation through verbal exchanges.

Why It’s a Threat:

A skilled attacker could engage in real-time dialogue as someone else, potentially extracting confidential information or issuing harmful directives. Recent cases, such as one in which a PhD student spoofed a popular voice authentication system (source), highlight that this threat has materialized already.

Prevention Strategy:

Verifying voices against source samples can help counter this risk, though it carries the same caveat as AI-driven deepfake detection: AI is improving rapidly, so any detection method must keep pace with an opponent headed for perfection. While Safas can help ensure that the real audio feed from the microphone is transmitted as part of a video call, another immediate remedy is to make turning on video a mandatory requirement for high-stakes conversations.
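To make "verifying voices against source samples" concrete, speaker verification systems typically reduce each voice sample to a fixed-length embedding and compare embeddings by cosine similarity against a tuned threshold. The sketch below shows only that comparison step; the embeddings and the threshold value are illustrative assumptions, and producing real embeddings requires a speaker-embedding model not shown here.

```python
import math

# Illustrative sketch of the comparison step in speaker verification:
# accept the live voice only if its embedding is close enough (by
# cosine similarity) to the enrolled sample. Embedding extraction
# itself (a speaker-embedding model) is assumed and not shown.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def voices_match(enrolled, live, threshold=0.75):
    """threshold is a hypothetical value; real systems tune it on labeled data."""
    return cosine_similarity(enrolled, live) >= threshold

print(voices_match([0.9, 0.1, 0.3], [0.88, 0.12, 0.31]))  # -> True
```

The caveat from above applies directly: as voice-cloning models improve, cloned voices produce embeddings ever closer to the genuine speaker's, eroding the margin this threshold relies on.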

5. Hijacking Meeting Links via Compromised Accounts

Finally, there's the fundamental threat of meeting link hijacking, where attackers gain access to a participant's account and join meetings as legitimate participants. Once in, they can use any of the tactics above to manipulate, deceive, or extract information.

Why It’s a Threat:

Hijacking access is a hacking approach as old as hacking itself. In video conferencing, this threat is taken to another level: meeting links can be reverse-engineered (e.g. custom links that use employee names in a predictable format), and larger meetings in particular can be attended by unauthorized parties. In sensitive environments, like corporate boardrooms or government briefings, the potential for damage is immense, as demonstrated by Russian spies who attended WebEx meetings in which the German military discussed its Ukraine strategy (source).

Prevention Strategy:

This is a fundamental security loophole created by the rapid shift to remote work: organizations often failed to protect video conferencing access with authentication that verifies all participants end to end. Security leaders should consider securing the camera feed with a solution like Safas, requiring MFA login for all participants in high-risk meetings, or even procuring a biometric authenticator for joining confidential meetings.
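One simple guardrail against the link reverse-engineering described above is to never derive meeting IDs from predictable data like employee names. A minimal sketch using Python's `secrets` module follows; the URL format is a hypothetical example, not any vendor's real scheme.

```python
import secrets

# Illustrative sketch: generate an unguessable meeting link instead of
# a predictable one derived from employee names. The base URL below is
# a hypothetical example, not a real vendor's scheme.
def make_meeting_link(base_url="https://meet.example.com/"):
    token = secrets.token_urlsafe(16)  # 16 random bytes, ~128 bits of entropy
    return base_url + token

print(make_meeting_link())
```

Random links raise the bar against guessing, but they remain bearer tokens: anyone who obtains the link can join, which is why the MFA and participant-verification measures above are still needed for high-risk meetings.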

Conclusion: It’s hard to win the race against AI - consider guardrails instead

Video conferencing has become essential for remote work, bringing new security threats that many CISOs and compliance officers may overlook. With deepfakes now free and easy to make, guardrails are crucial to prevent these threats from becoming as common as spam emails.

While tools like AI-powered biometric verification help detect fraudulent video, they are in a constant race with the same AI technology that is used against them.

Safas goes beyond image analysis and instead puts guardrails in place at the root of it all: by certifying in real time that all video in a call originates from real camera hardware, it blocks non-genuine feeds from virtual cameras, including any recordings and deepfakes, right at the source, across desktop and web applications.

Safas is currently in closed beta. Join our waitlist for a trial.