Every CFO’s worst nightmare just came true

Getty Images

Good morning. AI offers many benefits to CFOs, but also serious dangers, including the technology's ability to create the hyper-realistic impersonations known as deepfakes. That became clear after one finance professional's encounter with a digitally manipulated impersonation cost their company millions. The tale of how it happened is enough to give any CFO nightmares.

In Hong Kong, a finance worker at a multinational firm remitted a total of HK$200 million, about $25.6 million, to fraudsters who used deepfake technology, CNN reported. Hong Kong authorities, who confirmed the crime on Feb. 2, have not disclosed the name or details of the company or the worker. According to reports, the scammers used footage from past online conferences to train AI to digitally recreate a video call in which the CFO ordered money transfers. The worker was actually the only real person on the call. “This time, in a multi-person video conference, it turns out that everyone you see is fake,” said Baron Chan Shun-ching, an acting senior superintendent, according to the South China Morning Post.

The employee, who made 15 transfers into five local bank accounts, initially had some suspicions because the conversation indicated that a secret transaction needed to be carried out, according to reports. But the people on the call looked and sounded just like his colleagues.

Combatting deepfakes

In the age of AI, a scenario like this is frightening. So the question is: How is this preventable? Well, risk management plays a big role. As companies navigate the cybersecurity threat landscape complicated by AI-driven technologies, “the tried-and-true saying, ‘trust but verify,’ is important to remember,” according to Lisa Cook, governance, risk, and compliance professional practices principal at ISACA, a professional association focused on IT governance.


Companies will need to employ a variety of preventive and detective controls to mitigate the risk of financial or reputational damage from deepfakes, Cook said. For starters, companies should make sure employees are educated about deepfakes, including the technology’s ability to impersonate people using voice and video, she said. Another suggestion? “Providing employees with alternative identity verification methods to include inserting a ‘human-in-the-middle' for additional confirmation of significant transactions,” Cook said. Companies should also deploy AI-driven tools specifically designed to analyze and detect deepfakes that would not be readily identified by a human, she said.
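The “human-in-the-middle” control Cook describes can be sketched in code. This is a minimal, hypothetical illustration (all names, thresholds, and the confirmation flow are assumptions, not any vendor's actual system): a transfer above a threshold is held until a second person, reached over a separate out-of-band channel, confirms it.

```python
from dataclasses import dataclass, field

# Illustrative threshold: transfers at or above this amount require a second,
# out-of-band confirmation (e.g., a phone call to a known number, not the
# same video call that requested the payment).
APPROVAL_THRESHOLD = 10_000

@dataclass
class TransferRequest:
    amount: float
    beneficiary: str
    requested_by: str
    confirmations: set = field(default_factory=set)

    def confirm(self, approver: str) -> None:
        # A confirmation only counts if it comes from someone other than
        # the person who requested the transfer.
        if approver != self.requested_by:
            self.confirmations.add(approver)

    def is_releasable(self) -> bool:
        # Small transfers pass; large ones need at least one independent approver.
        if self.amount < APPROVAL_THRESHOLD:
            return True
        return len(self.confirmations) >= 1

req = TransferRequest(amount=250_000, beneficiary="Vendor A", requested_by="clerk")
print(req.is_releasable())   # held: no independent confirmation yet -> False
req.confirm("clerk")         # self-confirmation is ignored
print(req.is_releasable())   # still False
req.confirm("treasury_manager")
print(req.is_releasable())   # True
```

The design point is that the deepfake victim alone cannot release a large payment; a second human, contacted through a channel the fraudster does not control, must sign off.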

According to Cook, another best practice is to identify who in your firm is responsible for detecting and addressing abnormal behavior, while outlining the chain of command and communication for handling such abnormalities. ISACA’s recently released white paper, “The Promise and Peril of the AI Revolution: Managing Risk,” delves further into the topic.

Baptiste Collot, cofounder and CEO of Trustpair, a payment fraud prevention platform provider, also said risk management is key. Collot’s firm specializes in global bank account validation, he said. So even if an employee is duped by a deepfake video call and attempts to send a payment, when the information is entered into the system, the technology can potentially determine whether the bank account belongs to the vendor, he explained.
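The bank-account validation idea can be reduced to a simple check. The sketch below is purely illustrative (the vendor registry, account formats, and function names are assumptions, not Trustpair's actual API): before a payment is released, the destination account is compared against the accounts on file for that vendor.

```python
# Hypothetical registry of vendor bank accounts, keyed by vendor name.
# In practice this would be a validated master-data store, not a dict.
VENDOR_ACCOUNTS = {
    "Acme Supplies": {"HK12-3456-7890"},
    "Globex Ltd": {"HK98-7654-3210", "HK11-2233-4455"},
}

def account_matches_vendor(vendor: str, account: str) -> bool:
    """Return True only if the account is registered to the named vendor."""
    return account in VENDOR_ACCOUNTS.get(vendor, set())

# A deepfake caller may persuade an employee to pay, but a payment routed
# to an unregistered account still fails this check before funds move.
print(account_matches_vendor("Acme Supplies", "HK12-3456-7890"))  # True
print(account_matches_vendor("Acme Supplies", "HK55-5555-5555"))  # False
```

The control works because it validates the payment data itself, independently of whoever (or whatever) appeared on the call requesting it.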

Trustpair’s new research survey of more than 260 senior finance and treasury leaders found that 83% of respondents saw an increase in cyber fraud attempts on their organization in the past year. To dupe organizations, fraudsters primarily used text messages (50%), fake websites (48%), social media (37%), hacking (31%), business email compromise scams (31%), and deepfakes (11%). In fact, CEO and CFO impersonations were the third most common type of fraud, according to the report.

“It's almost impossible to trust the person you're talking to, unless you have this person just physically in front of you,” Collot said.

The ability to detect deepfakes will need to become second nature for companies.

Sheryl Estrada
sheryl.estrada@fortune.com

This story was originally featured on Fortune.com