Influence operations going smaller, getting harder to detect
Disinformation actors are getting more sophisticated, FB says
The newsletter is late this week because I managed to overcome my local government booking app's hesitation to give me a vaccination slot. Happy to report that aside from several days of fatigue and near-zero productivity, I am otherwise OK after receiving the Sinovac jab.
Influence operations, coordinated efforts to manipulate or corrupt public debate for a strategic goal, are shifting to smaller and more targeted campaigns to skirt scrutiny, according to a Facebook threat report.
This makes them more difficult to detect, but also more expensive to run and less likely to succeed.
According to "The State of Influence Operations 2017-2020" that Facebook released late last month, platforms have learned to root out fake accounts better, making disinformation actors shift "retail" campaigns "that use fewer assets and focus on narrowly targeted audiences."
An Iranian network discovered in 2019, for example, used fake accounts claiming to be real people, including journalists, to seed and amplify content, reaching out directly "to policymakers, reporters, academics, dissidents, and others."
"Their fictitious personas also submitted letters to the editor and wrote guest columns in US newspapers, masqueraded as journalists soliciting interviews with politicians and pitched stories to reporters," Facebook said, adding the accounts managed to get some of their content picked up by some publications but did not gain traction on the platform.
Facebook said it removed a similar operation by Russian military intelligence in 2020.
"Each fake account in these 'retail' campaigns takes more time and effort to create because the actors invest heavily in developing more credible online personas so they can't be as easily spotted," it also said.
That includes creating versions of the same fake persona across platforms so it can better withstand scrutiny from researchers, journalists, and the public.
Facebook said that despite their sophistication, these "retail" influence operations, in contrast to campaigns that simply broadcast disinformation to as many people as possible, face a big challenge: "Without a lucky break, they go nowhere."
Operating in 'gray' areas
These "retail" influence operations are also seen to complicate discourse on campaigns in general.
"We anticipate seeing more local actors worldwide attempt to use IO tactics to influence public debate in their own countries, further blurring the lines between authentic public debate and deception," Facebook said.
It said disinformation actors might co-opt "unwitting (but sympathetic) domestic groups to amplify their narratives."
In a case study on operations related to the 2020 US elections, for example, Facebook found an influence operation linked to Russia that set up websites made to look like news outlets.
"To appear more legitimate, IRA operators created sophisticated fake personas with profiles on multiple platforms claiming to be the editors of these sites. They recruited freelance journalists, including people in Europe and America, to write on social and political issues targeting both the right and the left," Facebook said, and tried to co-opt people to place ads on Facebook on their behalf.
Facebook said that disinformation actors have often "adapted their behavior and sought cover in the gray spaces between authentic and inauthentic engagement and political activity" and that this means it has to constantly review its policies.
Consider a few examples: Political campaigns have long paid canvassers to knock on doors, but when campaigns pay supporters or influencers to use fake or misleading online accounts to amplify their message on social media, does that cross a line into deception and manipulation?
Consider as well activists, governments or lobbyists who seek support for their causes by creating seemingly independent media entities to inject their message into the public discourse; or marketing firms amplifying particular narratives through Pages and Groups without disclosing who runs them.
Such tactics exemplify the gray areas where the boundary between advocacy and deception can be hard to define, and pose important questions for how public debate should function online.
Although the platform already has measures like labels on state-controlled media pages and greater page transparency, Facebook said more engagement is needed among civil society, industry, and governments to "collectively determine how to tackle this challenge without encroaching on free speech and other democratic values."
RELATED (BUT SKETCHY TBH): Pro-Duterte accounts taken down to weaken his support base
Influence operations as a business
The Philippines is among the countries where Facebook has investigated and removed influence operations run by commercial actors; the 2019 elections saw disinformation from both the administration and opposition camps.
"Some of these operations were domestic, promoting interests aligned with political entities from within their country of origin. Some offered IO services to paying clients both at home and abroad, making these techniques accessible to those with less resources or infrastructure to run their own IO campaigns," Facebook said of these "IO-for-hire" entities in media, marketing, and public relations.
Facebook said it is working to make these kinds of operations, which take time and resources to put up, more difficult and less profitable.
"There is reputational cost in being publicly labeled and taken down for foreign or domestic IO. There is also a business risk in losing your company’s infrastructure, particularly if its entire business model is built on providing ready-built accounts and Pages to reach target audiences for deceptive purposes," it said.
Aside from the operational and reputational costs, running influence operations can also expose these firms to action from law enforcement agencies in some jurisdictions.
"When their operations are discovered, these companies lose their on-platform assets and in the most severe cases they are banned from ever coming back to our platform. Even if they start over and try harder to hide, this does not make for a sustainable business model in the long run," it said.
ELSEWHERE ON THE INTERNET:
A businessman has withdrawn from a cyberlibel complaint against Rappler’s Maria Ressa, leading to the dismissal of the charge.
Supporters say the government "should end its campaign of harassment and persecution against Maria Ressa and Rappler immediately and cease prosecuting the journalist and her news outlet for their public interest journalism."

Doublethink Lab in Taiwan has launched a website "to provide the public with tools to 'identify Tiananmen propaganda' and equip social media users with 'strategies to counter propaganda,' with the hope of promoting a rational discussion online."
Probably not news to you, but a survey suggests people who are confident that they can spot ‘fake news’ are more likely to fall for it anyway.
"When researchers looked at data measuring respondents' online behavior, those with inflated perceptions of their abilities more frequently visited websites linked to the spread of false or misleading news," the Guardian reports on a survey of more than 8,000 Americans.

Former US President Donald Trump should remain banned on Facebook, a New York Times opinion piece argues:
"If the oversight board were to restore Mr. Trump's account, it would stand as an affirmation of Facebook's self-serving policies permitting the most divisive and engaging content to remain and a clarion call to leaders like Rodrigo Duterte and Jair Bolsonaro, who have similarly peddled in misinformation, to keep on posting."