A new wave of cyberattacks is targeting crypto executives, and this time the attackers are using AI-powered deepfake technology. Changpeng Zhao (CZ), the co-founder and former CEO of Binance, has joined others in warning about this growing threat. These deepfake impersonation attacks are becoming more sophisticated, making it harder to trust what we see online. That is especially dangerous for people in the crypto industry, who are frequent targets for hackers.
In the past few weeks, reports have surfaced about a hacking scheme that uses Zoom video calls to trick people. This scheme has been targeting crypto industry figures by impersonating their friends and colleagues using deepfake technology. CZ, who has been closely following the developments, urged the crypto community to stay vigilant and be aware of these dangerous tactics.
How the Deepfake Zoom Scam Works
One of the recent victims, Japanese crypto figure Mai Fujimoto, shared her experience with the deepfake attack. According to her report, she was contacted by someone who appeared to be one of her acquaintances. It looked like a normal Zoom video call, so she didn’t suspect anything. During the call, the impersonator raised concerns about her audio quality and asked her to click on a link for a software update.
AI already used in new types deepfake hacking. Even a video call verification will soon be out of the window. 😨😱
Don’t install software from a non-official link, especially NOT from your “friends” (they are most likely hacked). https://t.co/kfRSDPiJWb
— CZ 🔶 BNB (@cz_binance) June 20, 2025
When she clicked the link, her system was compromised. The hackers gained access to her data and took control of her X, Telegram, and MetaMask accounts. Fujimoto said, “When I opened the Zoom link, her [my acquaintance’s] face appeared, so I didn’t suspect anything.” She later added, “If I had known about this kind of attack, I might not have clicked the link. I want everyone to be aware of this and take caution to prevent similar incidents.”
Similar Attacks on Other Crypto Execs
Fujimoto’s experience mirrors an attack on another crypto executive, Mehdi Farooq, who previously worked at Animoca. In his case, the hackers used deepfakes of two of his friends to impersonate them during a Zoom call. Just like Fujimoto, Farooq was asked to update an app because of supposed audio quality issues. After he clicked the link and installed the fake update, the hackers drained six of his crypto wallets, taking most of his savings.
Other founders and executives from companies like Manta Network, Mon Protocol, Stably, and Devdock AI have also reported similar attacks. This has raised alarms within the crypto community, as it seems like these attacks are part of a larger, coordinated effort.
Lazarus Group and the Rise of Deepfakes in Cybercrime
Security analysts have traced these attacks back to the Lazarus Group, a North Korea-backed hacker syndicate. Lazarus is notorious for targeting the crypto industry and has been responsible for some of the largest and most high-profile breaches and thefts in recent years. The use of AI-generated deepfakes makes these attacks harder to detect, and they are quickly becoming a major security concern.
A recent report from Bitget revealed that deepfake impersonations of government officials, billionaires, and celebrities were responsible for 40% of “high-value frauds” in 2024. This highlights how serious the problem is becoming, especially as deepfake technology continues to improve. The report urges the crypto industry to adopt stricter security measures and stay alert to prevent further attacks.
The Need for Stronger Crypto Security Measures
As the use of AI in cybercrime grows, it’s becoming clear that crypto businesses and users need to strengthen their security practices. The rise of deepfake technology is one of the biggest threats to the crypto industry right now, and it’s important to stay ahead of these hackers by being aware of how they operate.
For crypto executives and users, this means being extra cautious with messages and links, especially those received during video calls. If something seems suspicious, verify the request through a separate, trusted channel, and confirm that any download really comes from the vendor’s official site before clicking a link or installing an update. Additionally, using multi-factor authentication (MFA) and other security tools can help protect accounts from being compromised.
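As a concrete illustration of that “verify before you install” habit, here is a minimal sketch in Python (a hypothetical example, not taken from any of the reports above) that compares a downloaded update file against the SHA-256 checksum published on the vendor’s official website before anything is installed:

import hashlib
import sys

def sha256_of_file(path, chunk_size=1 << 20):
    # Compute the SHA-256 hex digest of a file, reading it in chunks
    # so large downloads don't need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python verify_update.py <downloaded_file> <expected_sha256>
    # (script name and arguments are illustrative only)
    # The expected hash should be copied from the vendor's official website,
    # never from the chat, email, or call that delivered the download link.
    downloaded_file, expected_hash = sys.argv[1], sys.argv[2]
    actual_hash = sha256_of_file(downloaded_file)
    if actual_hash == expected_hash.lower():
        print("Checksum matches the published value.")
    else:
        print("Checksum mismatch - do NOT install this file.")
        sys.exit(1)

The key point is that the expected checksum comes from a source you looked up yourself, not from the person or chat that sent the download link.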
The industry must also focus on educating people about the risks of deepfakes and other digital impersonations. As AI technology becomes more advanced, staying vigilant and informed is the best way to avoid falling victim to these attacks.