Civics: Does Twitter Disinformation Even Work?

Wall Street Journal:

One theme from the internal Twitter files being released by Elon Musk is the government’s gnawing fear that armies of foreign trolls and bots might be changing real people’s opinions. After Twitter said it was investigating false claims circulating in 2020 of a communications blackout in Washington, D.C., the FBI reached out to ask if the tweets might be “driven by foreign-controlled bots.”

The answer in that case was no, Twitter replied. The #dcblackout campaign was “a small-scale domestic troll effort,” without “a significant bot or foreign angle.” Vladimir Putin does try to sow unrest this way, but how much can this nonsense even accomplish? Less than Mr. Putin imagines and the FBI fears, or at least that’s the way we read a paper out Monday in the journal Nature Communications.

The study focuses on the 2016 election and uses “longitudinal survey data” that’s linked to the respondents’ Twitter feeds. Its six authors, three of whom hail from New York University, find that “exposure to Russian disinformation accounts was heavily concentrated: only 1% of users accounted for 70% of exposures.” Also, Mr. Putin’s bots were “eclipsed by content from domestic news media and politicians.” During the final month of the campaign, an average user was potentially exposed to four posts per day by Russian bots, compared with 106 by national news sites and 35 by politicians.

The paper: “Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior”