One year after that rollout, legitimate publishers accounted for only two of the top 10 publishers on Facebook in Myanmar. By 2018, they accounted for zero. All the engagement had instead gone to fake news and clickbait websites. In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.
It was during this rapid degradation of Myanmar’s digital environment that, in August 2017, a militant group of Rohingya—a predominantly Muslim ethnic minority—attacked and killed a dozen members of the security forces. As police and military cracked down on the Rohingya and pushed out anti-Muslim propaganda, fake news articles capitalizing on the sentiment went viral. They claimed that Muslims were armed, that they were gathering in mobs 1,000 strong, that they were around the corner coming to kill you.
It’s still not clear today whether the fake news came primarily from political actors or from financially motivated ones. Either way, the sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the deaths of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more.
In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.”