
DeepFake and the $25 Million Heist

February 9, 2024
Musa Nadir Sani

Introduction

A highly sophisticated deepfake phishing scam cost an as-yet-unnamed Hong Kong multinational company more than $25 million after an employee was tricked by digital imitations of his colleagues and the CFO on a conference call. The heist, one of the largest known to involve deepfake technology, has raised concerns over the use and regulation of AI technology globally.

What we know so far

  • Scammers stole over $25 million from a multinational business by using cutting-edge real-time video deepfake technology to convince an employee in the firm’s accounts-payable department that a payment request previously sent to him via email was legitimate.
  • The worker had initially received an email, seemingly from the firm’s CFO, instructing him to issue a payment of HK$200 million (approximately US$25.6 million at the time of the theft). Suspicious that the email might be fraudulent, the worker requested a video conference call.

Note: The firm’s CFO is based in the United Kingdom, so the employee followed the standard procedure for that situation; requesting a video conference call is also industry-standard practice in cases like this.

  • Despite the employee’s best efforts, the cybercriminals were a few steps ahead and had orchestrated a video conference call with him. In attendance were what the employee was convinced were his colleagues, complete with audio and video.
  • The AI-generated deepfake video call was enough to convince the employee, who then proceeded to process the payment request he had initially received via email.
  • The fraud was discovered only after the employee later mentioned the payment to operations personnel at the company’s headquarters. By then, the $25 million was already gone.
  • The Hong Kong police investigation is still ongoing, and no arrests have been made.

The Dangers of DeepFake

While AI has brought revolutionary changes to the way the internet works and how humans interact with software, deepfake technology poses serious security concerns for every internet user. Some of these dangers include:

  • Misinformation and Fake News: Deepfakes can be used to create convincing videos or audio recordings of people saying or doing things they never actually did. This can lead to the spread of misinformation, manipulation of public opinion, and damage to reputations.
  • Political Manipulation: Deepfakes can be used to create fake videos of political figures making controversial statements or engaging in illegal activities, which could potentially influence elections or policy decisions.
  • Fraud and Scams: Deepfakes could be used to impersonate individuals in video calls or to create fake evidence for fraudulent activities such as blackmail or extortion.
  • Privacy Violations: Deepfakes can be created using photos or videos taken without the subject’s consent, leading to violations of privacy and potential harassment or exploitation.
  • Undermining Trust: The proliferation of deepfakes could lead to a general distrust of audio and video evidence, making it more difficult to discern truth from falsehood in the digital realm.
  • Security Threats: Deepfake technology could be used by malicious actors to create convincing fake identities for the purpose of bypassing security measures such as facial recognition systems.
  • Legal and Ethical Concerns: Deepfakes raise complex legal and ethical questions regarding issues such as consent, intellectual property rights, and freedom of expression.

 

How can you guard against deepfake technology?

Individuals can take several steps to guard against the potential threats posed by deepfake technology:

  • Be Skeptical: Develop a healthy skepticism when encountering media online. Question the authenticity of videos or audio recordings, especially those that seem sensational or out of character for the person depicted.
  • Verify Sources: Whenever possible, verify the source of the media you’re consuming. Look for credible sources and cross-reference information from multiple sources to confirm its authenticity.
  • Check for Manipulation Signs: Look for signs of manipulation in videos or images, such as unnatural facial movements, inconsistencies in lighting or shadows, or artifacts around the edges of objects or people (a rough programmatic sketch of this idea follows this list).
  • Stay Informed: Stay informed about the latest developments in deepfake technology and the methods used to detect and combat it. Knowledge is key to staying vigilant against potential threats.
  • Protect Personal Information: Be cautious about sharing personal information, such as photos or videos, especially on social media platforms or other public forums where they could be used without your consent.
  • Enable Two-Factor Authentication: Enable two-factor authentication on your accounts to add an extra layer of security and reduce the risk of unauthorized access.
  • Use Strong Passwords: Use strong, unique passwords for all your online accounts, and consider using a password manager to keep track of them securely.
  • Report Suspected Deepfakes: If you encounter a suspected deepfake or believe you are being targeted by one, report it to the platform hosting the content and inform relevant authorities if necessary.
  • Educate Others: Share information about deepfake technology and the risks it poses with friends, family, and colleagues to help raise awareness and prevent its spread.
  • Support Legislation and Regulation: Advocate for legislation and regulation aimed at addressing the potential threats posed by deepfake technology, such as laws governing the creation and distribution of manipulated media.
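
As a companion to the “Check for Manipulation Signs” point above, the following is a minimal, illustrative Python sketch of one crude heuristic: measuring how much the detected face region jumps between frames of a recorded call. Real-time face swaps can produce unstable or flickering face regions, and unusually high jitter is one possible red flag. The sketch relies on OpenCV’s bundled Haar cascade face detector; the file name suspect_call.mp4 and the jitter threshold are assumptions for illustration only, and a score like this is in no way a reliable deepfake detector on its own.

import cv2

def face_jitter_score(video_path: str, sample_every: int = 5) -> float:
    """Average frame-to-frame shift (in pixels) of the largest detected face centre."""
    # OpenCV ships this Haar cascade; it is a simple, classical face detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(video_path)
    prev_centre, shifts, frame_idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % sample_every:      # only sample every Nth frame to keep it fast
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:               # face lost entirely: reset the tracker
            prev_centre = None
            continue
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
        centre = (x + w / 2, y + h / 2)
        if prev_centre is not None:
            shifts.append(abs(centre[0] - prev_centre[0]) + abs(centre[1] - prev_centre[1]))
        prev_centre = centre
    cap.release()
    return sum(shifts) / len(shifts) if shifts else 0.0

if __name__ == "__main__":
    score = face_jitter_score("suspect_call.mp4")   # hypothetical recording of the call
    print(f"Average face-centre jitter: {score:.1f} px")
    if score > 40:   # assumed threshold; tune for your resolution and frame rate
        print("High jitter detected - review the recording manually before acting on it.")

A high score here is only a prompt for human review: genuine camera movement can also trigger it, and polished deepfakes may not trigger it at all, which is why out-of-band verification of payment requests remains the stronger safeguard.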

The CcHub Helpdesk (help@cchub.africa) is available to any organization looking for assistance with incident response, so please reach out.
