Saturday, June 5, 2021

"Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios"

We last visited the Carnegie Endowment on May 28 with "The WEF - Carnegie Endowment Cyber Threats Report," which concerned the results of their 2020 scenario-scheming, released in November 2020.

It concluded that the major banks, their regulators, and the intelligence agencies should all merge.

Here's one of the Carnegie Endowment's earlier papers, dated July 8, 2020:

Bad actors could use deepfakes—synthetic video or audio—to commit a range of financial crimes. Here are ten feasible scenarios and what the financial sector should do to protect itself.

Summary

Rapid advances in artificial intelligence (AI) are enabling novel forms of deception. AI algorithms can produce realistic “deepfake” videos, as well as authentic-looking fake photos and writing. Collectively called synthetic media, these tools have triggered widespread concern about their potential to spread political disinformation. Yet the same technology can also facilitate financial harm. Recent months have seen the first publicly documented cases of deepfakes used for fraud and extortion.

Today the financial threat from synthetic media is low, so the key policy question is how much this threat will grow over time. Leading industry experts diverge widely in their assessments. Some believe firms and regulators should act now to head off serious risks. Others believe the threat will likely remain minor and the financial system should focus on more pressing technology challenges. A lack of data has stymied the discussion.

In the absence of hard data, a close analysis of potential scenarios can help to better gauge the problem. In this paper, ten scenarios illustrate how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Based on today’s synthetic media technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.

The analysis yields multiple lessons for policymakers in the financial sector and beyond:

  • Deepfakes and synthetic media do not pose a serious threat to the stability of the global financial system or national markets in mature, healthy economies. But they could cause varying degrees of harm to individually targeted people, businesses, and government regulators; emerging markets; and developed countries experiencing financial crises.
  • Technically savvy bad actors who favor tailored schemes are more likely to incorporate synthetic media, but many others will continue relying on older, simpler techniques. Synthetic media are highly realistic, scalable, and customizable. Yet they are also less proven and sometimes more complicated to produce than “cheapfakes”—traditional forms of deceptive media that do not use AI. A bad actor’s choice between deepfakes and cheapfakes will depend on the actor’s strategy and capabilities.
  • Financial threats from synthetic media appear more diverse than political threats but may in some ways be easier to combat. Some financial harm scenarios resemble classic political disinformation scenarios that seek to sway mass opinion. Other financial scenarios involve the direct targeting of private entities through point-to-point communication. On the other hand, more legal tools exist to fight financial crime, and societies are more likely to unite behind common standards of truth in the financial sphere than in the political arena.
  • These ten scenarios fall into two categories, each presenting different kinds of challenges and opportunities for policymakers. Six scenarios involve “broadcast” synthetic media, designed for mass consumption and disseminated widely via public channels. Four scenarios involve “narrowcast” synthetic media, tailored for small, specific audiences and delivered directly via private channels. The financial sector should help lead a much-needed public conversation about narrowcast threats.
  • Organizations facing public relations crises are especially vulnerable to synthetic media. Broadcast synthetic media will tend to be most powerful when they amplify pre-existing negative narratives or events. As part of planning for and managing crises of all kinds, organizations should consider the possibility of synthetic media attacks emerging to amplify the crises. Steps taken in advance could help mitigate the damage.
  • Three malicious techniques appear in multiple scenarios and should be prioritized in any response. Deepfake voice phishing (vishing) uses cloned voices to impersonate trusted individuals over the phone, exploiting victims’ professional or personal relationships. Fabricated private remarks are deepfake clips that falsely depict public figures making damaging comments behind the scenes, challenging victims to refute them. Synthetic social botnets are fake social media accounts made from AI-generated photographs and text, improving upon the stealth and effectiveness of today’s social bots....

....MUCH MORE

Just to make this stuff all the more interesting, here's another scenario exercise that was war-gamed the year prior:

....Previously from the World Economic Forum et al., Friday, October 18, 2019: 

About the Event 201 exercise

Event 201 was a 3.5-hour pandemic tabletop exercise that simulated a series of dramatic, scenario-based facilitated discussions, confronting difficult, true-to-life dilemmas associated with response to a hypothetical, but scientifically plausible, pandemic. 15 global business, government, and public health leaders were players in the simulation exercise that highlighted unresolved real-world policy and economic issues that could be solved with sufficient political will, financial investment, and attention now and in the future.....