Meta reports takedowns of U.S. midterms, Ukraine war influence operations

Another self-report from Meta today on its internal efforts to identify and shut down political disinformation networks (aka “coordinated inauthentic behavior,” or CIB, as it prefers to call it) operating on its social platforms.

Specifically, it says it shut down two separate political disinformation networks: a “small” one originating in China, which ran a variety of distinct but short-lived pushes between fall 2021 and mid-September 2022, with political targets that included both sides of the U.S. midterm elections as well as Czech government support for Ukraine; and a second, larger network, which Meta identified as Russian in origin, which it said began in May this year and had been targeting multiple European countries (including Germany, France, Italy, the United Kingdom and Ukraine itself) with anti-Ukraine content.

Notably, the Chinese network is the first Meta has reported that targeted the U.S. midterm elections, although, per its description, the disinformation campaign appears to have had a variety of different goals. Nor does it sound to have been very successful at attracting Meta users, going by the company’s assessment.

“In the U.S., it targeted people on both sides of the political spectrum; in the Czech Republic, the campaign was largely anti-government, criticizing the state’s support of Ukraine in its war with Russia and its impact on the Czech economy, using the criticism to caution against antagonizing China,” Meta wrote in a newsroom post penned by its head of global threat intelligence, Ben Nimmo, formerly of social media analytics firm Graphika, and its director of threat disruption, David Agranovich.

“Each cluster of accounts, around half a dozen each, posted content at low volumes during working hours in China rather than when their target audience would typically be awake. Few people engaged with it, and some of those who did called it out as fake. Our automated systems took down a number of accounts and Facebook pages for various Community Standards violations, including impersonation and inauthenticity.”

Examples of inauthentic content from the China-originated network targeting U.S. users, as shown in Meta’s report (screenshot: Natasha Lomas/TechCrunch)

The Russian network, meanwhile, is described by Meta as “the largest and most sophisticated Russian-origin operation we have disrupted since the beginning of the Ukraine war,” one it says displayed an “unusual combination of sophistication and brute force,” suggesting sustained investment in the disinformation effort.

“The operation, which began in May this year, centered around a sprawling network of more than 60 websites carefully impersonating the legitimate sites of European news organizations, including Der Spiegel, The Guardian and Bild,” Meta wrote, explaining that the modus operandi was to publish original articles that criticized Ukraine and Ukrainian refugees, supported Russia and argued that Western sanctions on the country would backfire.

“They would then promote these articles, along with original memes and YouTube videos, across many internet services, including Facebook, Instagram, Telegram, Twitter, petition sites Change.org and Avaaz, and even LiveJournal,” it continued. “Throughout our investigation, as we blocked the operation’s domains, it attempted to set up new websites, suggesting persistence and continued investment in this activity across the internet. The sites were mainly in English, French, German, Italian, Spanish, Russian and Ukrainian. On several occasions, the operation’s content was amplified by the Facebook pages of Russian embassies in Europe and Asia.”

“Together, the two approaches worked as an attempted smash-and-grab against the information environment, rather than a serious effort to occupy it long term,” Meta added.

In its assessment, the combination of spoofed websites plus multiple languages demanded “technological and language investment,” but it said the social media amplification was less sophisticated, relying “mainly” on “crude” ads and fake accounts.

“From the very beginning, the operation created mini-brands across the internet, using them to post, retweet and like anti-Ukraine content in different languages. They would create same-name accounts on different platforms to lend support to one another and make them look more legitimate, amplifying each other like a carousel of fake engagement, with multiple layers of fake entities boosting each other and creating their own echo chambers,” Meta also wrote in a fuller report on the Russian disops.

The tech giant said “many” of the accounts involved in the network were “frequently” detected by its automated systems and removed for being inauthentic, though it didn’t provide a more specific breakdown of its AI success rates, saying only that “most” of the accounts, pages and ads were detected and removed this way before its investigation began.

Meta’s newsroom post credited investigative journalism in Germany, along with independent researchers at the Digital Forensic Research Lab who dug into the disinformation campaign, with helping it uncover the CIB activity. It also noted that its report includes a list of domains, petitions and Telegram channels it has assessed to be connected to the operation, “to support further research into this and similar cross-Internet activity.”

These disclosures are just the latest in a long line of coordinated inauthentic behavior takedowns by Meta, which regularly publishes reports identifying and removing disinformation operations after the fact, such as recently revealing it had taken action against a pro-U.S. influence operation, or reporting a wave of takedowns of suspicious Iran-linked activity back in 2019.

However, Meta makes these disclosures without providing a comprehensive overview of the total volume of fake accounts and activity on its platforms, which makes such reports (and their relative importance) difficult to quantify.

While disclosure is clearly important for further research into specific disinformation networks and tactics (which may well persist even after a particular network is taken down), selective transparency of this kind has the potential to present a distorted picture, one that may differ from the rate at which fake activity is actually being weeded out. So perhaps the strongest function of this self-reporting is to promote Meta’s self-serving claim that it is keeping a close and comprehensive eye on platform security.

There are, of course, other points of view on the comprehensiveness of its anti-manipulation efforts, such as those expressed by Facebook/Meta whistleblower Sophie Zhang, whose leaked memo two years ago accused the company of concentrating resources on politically expedient regions, and those likely to generate the most PR, while paying little heed to massive manipulation efforts going on elsewhere, even as she tried to pique internal interest in taking action.

Meta’s self-styled “detailed report” on the removal of the Chinese and Russian networks does contain some contextual metrics: It says the Chinese network’s presence on Facebook and Instagram stretched to 81 Facebook accounts, eight pages, one group and two Instagram accounts, while noting that “about 20 accounts” followed one or more of the pages; “about 250” accounts joined one or more of the groups; and “less than 10 accounts” followed one or more of the Instagram accounts.

The Russian network, meanwhile, stretched to 1,633 Facebook accounts, 703 pages, one group and 29 Instagram accounts. Meta says “about 4,000” accounts followed one or more of the pages, “less than 10” joined the group, and “about 1,500” accounts followed one or more of the Instagram accounts.

The report also revealed that the Russian disop spent “around $105,000” on ads on Facebook and Instagram, which Meta noted were paid for “mainly” in U.S. dollars and euros. (Notoriously, some Russian ads targeting the 2016 U.S. election were paid for in rubles.)

That level of ad spending looks to be on a par with the size of Russia’s political ad buys targeting the 2016 U.S. election, as disclosed by Facebook back in 2017.