The rise of deepfake financial fraud has sparked concerns amongst finance leaders. Although fraud schemes using deepfakes are not new, the sophistication and frequency of these scams have increased significantly.
Deloitte poll results from 2024 revealed that about 1 in 4 executives (25.9%) had experienced one or more deepfake incidents targeting accounting and financial data in their organisations.
Deepfake technology is a tool cybercriminals use in identity fraud, the spread of misinformation, and financial scams.
Initially developed for entertainment purposes, the technology uses artificial intelligence (AI) and machine learning (ML) to create realistic content depicting events that never actually took place.
Deepfakes are essentially AI-generated media that manipulate or fabricate content to appear authentic. These hyper-realistic videos and voice recordings make it challenging to distinguish between real and fake communications.
For example, cybercriminals can use deepfakes to impersonate executives, tricking employees into transferring funds or sharing sensitive data.
Recognising when AI is being used in fraudulent schemes is crucial to safeguarding financial integrity, and awareness and vigilance against fraud are vital.
The threat of deepfake financial fraud: A real-world example
In February 2024, Hong Kong police reported a multimillion-dollar fraud case involving deepfake technology; the victim was UK engineering firm Arup. Cybercriminals used deepfakes to mimic the company’s CFO and other staff members, resulting in a finance worker at the firm transferring £20m to the scammers.
Arup’s global chief information officer, Rob Greig, told the Guardian, ‘Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes. What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months.’
The incident shows the growing complexity of digital fraud and the critical vulnerabilities in current digital identity verification processes. Large-scale fraud like the Arup case demonstrates how deepfake technology poses a serious business risk.
Recognising deepfakes and financial scams
The increase in deepfake financial fraud incidents requires organisations to recognise and address the risks, but, according to IBM, many have yet to implement robust defences against deepfake attacks.
Strong countermeasures are vital to avoid deepfake-driven fraud.
There are several telltale signs to watch for when identifying a deepfake. These signs can appear in manipulated videos or audio recordings, and a keen eye can spot them and mitigate the threat of a scam. Scrutinise audio and video content for the following inconsistencies:
- Unnatural facial expressions. Deepfake technology often struggles to produce realistic eye movements, so subjects may blink too rarely or too often. Issues with lip-syncing or distorted facial features can also indicate AI-generated content: if the mouth doesn’t align well with the words being said, or the expressions seem out of sync with the dialogue, the content may have been manipulated. (A minimal blink-rate check is sketched after this list.)
- Audio discrepancies. When AI is used to create an audio or video message, voices can sound overly mechanical. If the content lacks natural inflexions or the right emotional tone, the audio may have been generated or altered using AI. Authentic speech usually has subtle variations in pitch and rhythm, while fraudulent audio can sound synthetic and flat. (A simple pitch-variation check also follows this list.)
- Background inconsistencies. Deepfakes may have blurry, inconsistent, or poorly blended backgrounds that don’t match the subject’s realism. Irregular lighting, shadows, or movement that seems out of place can hint at pixel manipulation and editing errors.
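For technically inclined readers, the blink-rate cue above can be approximated in code. The sketch below is a minimal illustration, assuming OpenCV and MediaPipe are installed; the file name, the eye-landmark indices, and the 0.2 eye-aspect-ratio threshold are illustrative assumptions, not a vetted detector.

```python
import cv2
import mediapipe as mp
import numpy as np

# Commonly used MediaPipe Face Mesh indices for the left eye (an assumption
# for this sketch); the six points follow the eye-aspect-ratio convention.
LEFT_EYE = [33, 160, 158, 133, 153, 144]

def eye_aspect_ratio(p):
    # EAR = (|p2-p6| + |p3-p5|) / (2|p1-p4|); it drops sharply during a blink.
    return (np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])) / (
        2.0 * np.linalg.norm(p[0] - p[3])
    )

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical file name
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
blinks, eye_closed, frames = 0, False, 0

with mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue
        lm = result.multi_face_landmarks[0].landmark
        h, w = frame.shape[:2]
        pts = [np.array([lm[i].x * w, lm[i].y * h]) for i in LEFT_EYE]
        if eye_aspect_ratio(pts) < 0.2:   # illustrative blink threshold
            eye_closed = True
        elif eye_closed:                  # eye reopened: count one blink
            blinks += 1
            eye_closed = False

cap.release()
minutes = frames / fps / 60.0
print(f"~{blinks / max(minutes, 1e-9):.1f} blinks per minute")
# Humans typically blink roughly 15-20 times per minute; a rate far outside
# that range is a hint, not proof, of manipulation.
```

The flat, over-regular pitch mentioned under audio discrepancies can be probed in a similar spirit. This sketch, assuming the librosa library and a hypothetical recording, measures how much the estimated pitch varies; an unusually flat contour is one weak signal that a voice may be synthetic:

```python
import librosa
import numpy as np

# Load the suspect recording (hypothetical file name).
y, sr = librosa.load("suspect_message.wav", sr=16000)

# Estimate the fundamental frequency (pitch) contour with pYIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
voiced = f0[~np.isnan(f0)]

# Natural speech shows noticeable pitch variation; a very low coefficient
# of variation can hint at generated or heavily processed audio.
cv = np.std(voiced) / np.mean(voiced)
print(f"Pitch coefficient of variation: {cv:.3f}")
```

Neither heuristic is conclusive on its own; treat them as screening aids alongside the verification practices below.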
A good general practice is to verify the origin of a message or video and cross-check the information against a reliable source to avoid digital fraud.
Approach the content sceptically. If you find discrepancies or the source seems untrustworthy, immediately report it to the appropriate department, such as IT or HR.
Safeguarding against deepfake fraud
Management accountants have an opportunity to safeguard companies against deepfake deception.
Using multifactor authentication (MFA) beyond biometrics, investing in AI-powered detection tools, verifying transactions through multiple sources, and conducting regular fraud detection training are key steps to strengthening security and reducing risks.
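To make the ‘verify through multiple sources’ step concrete, here is a minimal sketch of an out-of-band verification policy, written as illustrative Python. The class, threshold, and channel names are assumptions for the example, not a standard control framework:

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount: float
    channel: str              # e.g. "email", "video_call", "erp_workflow"
    callback_confirmed: bool  # confirmed via a number from the company directory
    mfa_passed: bool          # requester cleared multifactor authentication

HIGH_RISK_CHANNELS = {"email", "video_call", "voice_call", "chat"}
THRESHOLD = 10_000  # illustrative limit; set per your delegation-of-authority policy

def release_allowed(req: PaymentRequest) -> bool:
    # No MFA, no payment: authentication comes before any amount check.
    if not req.mfa_passed:
        return False
    # Large or high-risk requests also need an independent callback.
    if req.amount >= THRESHOLD or req.channel in HIGH_RISK_CHANNELS:
        return req.callback_confirmed
    return True

# A convincing video call alone never releases funds under this policy:
print(release_allowed(PaymentRequest(250_000, "video_call", False, True)))  # False
```

The design point, as the Arup case illustrates, is that no single channel, however authentic it looks or sounds, should be sufficient authority to move funds.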
As an accounting and finance professional, it’s essential to stay vigilant and have safeguards in place that enhance your ability to distinguish authentic content from deceptive deepfakes.
To strengthen your expertise, earning the prestigious Chartered Global Management Accountant® (CGMA®) designation and becoming a CIMA® member is a great way to showcase your commitment to the highest standards in management accounting. Through the CGMA® Finance Leadership Program, you can study at your own pace with flexible, on-demand learning modules.
With expertise in risk management, forensic accounting, and ethical financial practices, CGMA designation holders are vital in detecting and preventing AI-driven financial fraud.
While deepfakes pose a growing challenge, proactive measures, technological vigilance, and professional expertise can mitigate the risk and potential impact of deepfake-enabled fraud.