In December, the verified Facebook page of Adam Klotz, a meteorologist for Fox News, began displaying unusual video advertisements. Some of these ads featured an AI-generated voice mimicry of former President Donald Trump, offering viewers “$6,400 with your name on it, no payback required” for clicking on a link and completing a form. Other ads used AI to reproduce President Joe Biden’s voice, suggesting the availability of funds with no repayment conditions.
There was no free money; the audio was AI-generated. People who clicked on the ads were redirected to a form that collected their personal data, which was then sold to telemarketers, who might pitch them legitimate offers or scams.
Klotz’s page ran more than 300 such ads until late August, when ProPublica contacted him. A spokesperson for Klotz said his page had been hacked and that he had been unaware of the ads until ProPublica reached out.
The page had been compromised by a vast advertising network that has operated on Facebook for several years, producing roughly 100,000 misleading election and social issue advertisements. The activity has persisted despite Meta’s stated efforts to eliminate harmful content, according to a joint investigation by ProPublica, Columbia Journalism School’s Tow Center for Digital Journalism, and the Tech Transparency Project.
The network, identified as Patriot Democracy in many of its advertising accounts, is one of eight deceptive Meta advertising operations uncovered by the investigation. Together, these networks controlled more than 340 Facebook pages, along with associated Instagram and Messenger accounts, most of them created by the networks themselves. Some pages impersonated government entities; others, like Klotz’s, were hacked verified profiles of public figures. The networks published over 160,000 election and social issue ads in multiple languages, primarily English and Spanish. Meta showed these ads to users nearly 900 million times across its platforms.
Although these ads represent just a small portion of Meta’s more than $115 billion in annual ad revenue, the networks collectively rank as the 11th-largest all-time advertiser on Meta for U.S. election or social issue ads since the company began sharing such data in 2018. The finding underscores the ongoing challenge one of the world’s largest platforms faces in protecting users from fraud and keeping its long-standing promise to prevent deceptive political ads.
These ad networks are largely operated by lead-generation companies that specialize in harvesting and selling personal information. Documented harms include users unknowingly signing up for monthly credit card charges and having their health insurance switched under fraudulent pretenses — changes that could leave victims without coverage or facing unexpected tax bills.
Meta’s policies prohibit the tactics these networks employed, such as using AI to mimic political figures’ voices without authorization and making misleading claims about government programs to harvest personal data. Some ads illegally displayed state and county seals, along with images of governors, in an effort to mislead users. One deceptive ad, for example, featured Illinois Gov. JB Pritzker’s image and the state seal and falsely promised insurance coverage for funeral expenses.
More than 13,000 ads used incendiary political narratives or falsehoods to market unofficial Trump merchandise. Although Meta removed some of these ads after initially approving them, many similar ads escaped detection. And in several instances, even after violating ads were removed, the associated Facebook pages and accounts remained active, allowing the networks to create new pages and ads.
Meta requires political and social issue ads to carry a “paid for by” disclaimer identifying the funding entity, but its verification measures fall short of those of competitors like Google, ProPublica and Tow found. Many disclaimers named nonexistent entities.
In response, a Meta spokesperson pointed to the company’s significant investment in trust and safety, including the human reviewers and automated systems that scrutinize election and social issue ads. The investigation, however, found Meta’s enforcement to be inconsistent and often delayed. Before ProPublica and Tow shared their findings with the company, Meta’s enforcement had affected fewer than half of the pages associated with the eight identified networks.
Even after Meta removed the flagged pages, several networks persisted, running more than 5,000 ads in October alone. One of them, Patriot Democracy, launched roughly two new pages a day on average earlier that month.
Jeff Allen, chief research officer of the Integrity Institute, criticized Meta’s enforcement as sporadic and ineffective at addressing the root causes. The design of Facebook pages, which can be linked to numerous ad accounts and user profiles, complicates enforcement, Allen said.
Meta noted the adversarial nature of the space, pointing to ongoing updates to its enforcement mechanisms aimed at tackling evolving scammer tactics. Legal measures against several operators have also been pursued by Meta.
Since at least 2016, Meta has grappled with misleading election ads, an issue that came to the fore when Russian operatives purchased ad slots in an attempt to influence the U.S. electorate. In response to widespread criticism and governmental scrutiny, Meta implemented specialized guidelines and transparency tools like the Ad Library. Nonetheless, layoffs and organizational changes have affected the company’s integrity teams, further complicating enforcement during election cycles.
The investigation also revealed significant differences between Meta’s and Google’s verification processes for political and social issue ads, leading to varying levels of transparency concerning ad sponsorship.
The Patriot Democracy network, the largest of the identified groups, used official-sounding names and listed fake organizations in its disclaimers to lend its activities a semblance of legitimacy, misleading audiences across the United States.
The Montana Division of Insurance and Ventura County officials discovered ads that used official seals without authorization and issued cease-and-desist orders against the deceptive ads. The operators behind them, including Abel Medina, have faced legal and regulatory challenges in multiple states.
Among the deceptive strategies documented, Patriot Democracy and allied networks relied most heavily on ads promising fraudulent government subsidies or free services. Despite its obligations and prior commitments, Meta’s response revealed systemic failures in recognizing and curbing such activity.
An analysis of Meta’s systems showed that the automated tools designed to identify and block duplicate ads — a critical part of the company’s safety strategy — were inadequate, hampering effective enforcement.
For misled consumers, calls to customer service lines often yielded unsatisfactory resolutions, further complicating redress for victims of the fraudulent ad schemes.
Meta and the pages involved, including those of public figures like Adam Klotz, remain under scrutiny as the investigation into deceptive advertising practices continues to unfold.
Despite proactive measures by state and local authorities, the complex and fast-changing nature of digital advertising demands ongoing vigilance and collaboration to curb deceptive practices effectively.