INTEGRATIVE INSIGHTS ON EMERGING OPPORTUNITIES
Integrative research means our extensive company research informs every thesis and perspective. The result is deep industry knowledge, expertise, and trend insights that yield valuable results for our partners and clients.
- Deepfake creation technology has evolved significantly from the rudimentary face swaps that first allowed everyday users to create low-quality deepfakes in the mid-2010s. Since then, deepfake creators, including bad actors, have developed a variety of creation methods, and the technology continues to evolve rapidly.
- Governments, individuals and corporations are eager to find ways to stop malicious deepfakes, given their sometimes enormous monetary and societal costs. Deepfake detection companies address this need. They essentially reverse engineer the deepfake creation process to identify manipulated content.
- The criteria for choosing among deepfake detection solutions vary based on use case. We discuss use cases in news media, law enforcement and other governmental functions, banking, and general commerce. Each differs in the level and type of deepfake detection it needs.
- We highlight a sample of large technology companies that offer deepfake detection solutions, as well as some deepfake specialists, including three for which we provide detailed profiles.
TABLE OF CONTENTS
Includes discussion of three private companies
Growing rapidly, harmful deepfakes exact high monetary and societal costs
Deepfake creation models continue to grow in complexity, creating more convincing fakes
Combatting malicious deepfakes with detection software
Use cases influence buying behavior
Some players in the deepfake detection market
The truth is out there
Cybersecurity index opens wide lead over Nasdaq
Cybersecurity M&A: Notable transactions include Talon Cyber Security and Tessian
Cybersecurity private placements: Notable transactions include SimSpace and Phosphorus
Growing rapidly, harmful deepfakes exact high monetary and societal costs
Deepfakes are synthetic media generated by artificial intelligence (AI), created either entirely anew or by modifying real content, to produce compelling imitations of reality. They take many forms, including photos, videos, and audio recordings, and they make it difficult to distinguish fact from fiction. According to SumSub, an identity verification and fraud prevention company, the incidence of deepfakes was 10 times greater in 2023 than in 2022, clear evidence that deepfake creation technology is being used more than ever.
Although most sentiment around deepfakes is negative, the technology can be beneficial. One example is in marketing, where actors can license their identities so that marketers can swiftly and cost-effectively generate advertisements with deepfake technology instead of requiring actors to perform. Another is using deepfake creation technology to personalize ad content based on individual customer preferences and demographics. Beyond marketing, deepfake technology is increasingly used in entertainment content such as television shows, movies, and podcasts, where it can manipulate actors’ appearances and facial expressions to best fit production needs.
Of course, deepfake technology is often also used to cause harm, a vivid example being the unauthorized use of people’s likenesses in pornography. In fact, most current deepfake regulation in the United States deals with banning its use for nonconsensual pornography. For the purposes of this report, however, the most relevant harmful uses of deepfake technology are influencing geopolitical events and public policy and perpetrating fraud. For example, hackers recently created and published a deepfake video of Ukrainian President Volodymyr Zelenskyy urging Ukrainians to lay down their arms in the conflict with Russia. (This deepfake was quickly identified and removed.) In early 2019, a deepfake video of Ali Bongo, president of Gabon, played a role in sparking an attempted military coup there. Many more examples are being found and reported regularly.

In the context of fraud, the Federal Trade Commission reported that imposter scams resulted in $2.6 billion in losses in 2022, affecting over 36,000 victims. A well-known example is bad actors who impersonate grandchildren and urgently ask a grandparent for money. In the corporate world, the CEO of a UK-based energy company received what he thought was a call from his parent company’s CEO requesting that he wire money to a Hungarian supplier. The CEO recognized the voice and transferred the funds, not realizing the voice was generated by AI; the money was lost.