How Facebook Is Fanning The Flames In Ethiopia
Nearly half the world is on Facebook or one of its apps. The social media platform has revolutionised the way that humans communicate – but only now are we beginning to understand the consequences of this revolution. Fake news, conspiracy theories, hate speech, incitements to violence: these all thrive on Facebook, thanks to an algorithm that has been trained to prioritise shares and likes over our safety.
The Continent is the first African publication to obtain access to thousands of documents leaked by Frances Haugen, a former Facebook data scientist. These documents prove that Facebook knows that its platform can cause immense harm to the people that use it. They prove that Facebook has not done nearly enough to protect the people who use it – especially if those users happen to live outside of the English-speaking western world.
Ethiopia is a case study in how Facebook can inflame tensions and fuel real-world violence. But unless Facebook – and the other social media giants – change the way they operate, it may also be a sign of things to come everywhere else.
On August 30, nine months into Ethiopia’s brutal civil war, a Facebook user who goes by the name Northern Patriot Tewodros Kebede Ayo posted a clear incitement to violence on his page.
He accused the Qimant, an ethnic minority in Ethiopia’s Amhara region, of supporting the opposition forces. He called them “snitches”, and singled out the Qimant residents of Aykel, a small town in Amhara.
Writing in Amharic, he said: “The punishment has been imposed … the clean-up continues.”
Two days later, between September 1 and 2, more than a dozen Qimant in Aykel were dragged from their homes and butchered on the street, allegedly by members of the feared Fano militia – an Amhara nationalist paramilitary group that has been implicated in multiple atrocities. This was reported at the time by Al Jazeera, and two sources have independently confirmed this account to The Continent.
On September 1, users on another Facebook account – a page called “The Fano Patriotic People’s Radical System Change” – joined in the online lynch mob: “No mercy for the Qimant,” one post said, even as the killings in Aykel were happening. Another user on the page had previously, in May 2020, laid out a 14-page road map on how to organise the Fano militia, with both violent and non-violent options.
There is no evidence of a direct causal link between these Facebook posts and the massacre in Aykel. What we do know, however, is that Facebook staff already knew about both of these accounts, and were worried about their potential to incite violence.
Months earlier, in a leaked internal document seen by The Continent, a team within Facebook had found that these accounts were key nodes in a major online disinformation network aligned to the Fano militia, codenamed Disarming Lucy.
According to Facebook’s own data, this network was co-ordinating “calls for violence and other armed conflict in Ethiopia”; and “promoting armed conflict, co-ordinated doxxing, recruiting and fund-raising for the militia”.
The Facebook team that had discovered Disarming Lucy recommended that all the accounts associated with it be taken down. This was in March 2021. But as of today, The Continent can reveal that every single one of those accounts is still active – and many are still spreading hate speech and inciting violence.
The Continent reached out to Facebook for comment on this and other issues raised in this article, but received no response prior to publication.
The algorithm is the problem

Between Facebook, Messenger, WhatsApp and Instagram, more than 3.6-billion people regularly use one of Facebook’s apps (the company has recently rebranded and is now known as Meta). Ironically, for a company that is built on the sharing of personal information, Facebook’s inner workings have always been relatively opaque. Until now.
In May 2021, a data scientist named Frances Haugen resigned from her job with Facebook’s civic integrity unit. That’s the unit, based in Facebook’s San Francisco headquarters, that was supposed to monitor – and, crucially, mitigate – all the ways in which Facebook causes harm, including the spread of hate speech and disinformation on the platform. It was disbanded in the wake of the 2020 American election.
Haugen had grown increasingly disillusioned with Facebook, coming to believe that it was putting profit ahead of providing users with meaningful protection.
Before she left the company for good, Haugen took more than 10,000 documents with her. She copied them by taking photos of her computer screen – in some of the documents, her silhouette is even visible in the reflection.
She shared these documents first with the United States Congress, and then with the Wall Street Journal.
Now she’s shared them with a small consortium of journalists from around the world, including The Continent, which is the only African newspaper represented. The documents are drawn largely from the civic integrity unit and Facebook’s internal workplace forum.
They paint a damning picture of a company that understands exactly how dangerous its platform can be – but which has repeatedly failed to take action to make it safer, especially outside of the US.

Haugen describes herself as “an algorithm person”. She is an expert in the intricacies of Facebook’s back-end rather than the political complexities of individual countries, which gives her an insider’s perspective on how hate speech spreads so wildly on the platform.
The key point, confirmed in a number of internal experiments, is that content that is inflammatory and extreme is more likely to go viral, generating what Facebook calls Meaningful Social Interaction (MSI) – a metric that measures reach and impact on Facebook (and which is central to how Facebook makes money).
Most of Facebook’s features are designed to maximise MSI, which means that the algorithm has a tendency to promote extreme content.
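The dynamic can be illustrated with a deliberately simplified Python sketch. The weights, field names and sample posts below are illustrative assumptions – the leaked documents describe the incentive, not the exact formula – but the structural point holds: any score that rewards comments and reshares will rank outrage above the mundane.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int
    comments: int
    reshares: int

def msi_style_score(post: Post) -> float:
    # Hypothetical engagement weights: interactions that spread content
    # (comments, reshares) count for far more than passive reactions.
    return 1.0 * post.reactions + 5.0 * post.comments + 10.0 * post.reshares

feed = [
    Post("Community clean-up this weekend", reactions=120, comments=4, reshares=2),
    Post("They are traitors. Punish them.", reactions=80, comments=60, reshares=45),
]

# Sorting by the score puts the inflammatory post on top: outrage drives
# exactly the interactions the metric rewards.
for post in sorted(feed, key=msi_style_score, reverse=True):
    print(f"{msi_style_score(post):7.1f}  {post.text}")

On this toy feed the divisive post scores 830 against 160 for the benign one – the problem in miniature.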
Or, in Facebook-speak, according to one leaked internal memo: “Analyses consistently document that harmful content and low quality Producers disproportionately garner distribution from unconnected reach compared to benign content and high quality Producers.” And, from another leaked document: “It’s no secret Facebook’s growth-first approach to product development leads us to ship risky features.”
In other words: it appears that the prevalence of hate speech and disinformation on Facebook is not a bug. It’s a feature.

An unprofitable trade-off
There are, broadly, two ways in which Facebook can tackle this problem. The first is by tweaking the way Facebook works, to make it harder for people to share problematic content. These changes can be subtle, but have an enormous impact nonetheless.
One change favoured by Haugen is to make it more difficult for Facebook users to reshare content from people who are not in their friends list. Currently, it’s as easy as pressing the share button, which requires little effort or thought. Internal experiments have shown that disabling the share button in these contexts leads to an instant, dramatic reduction in the spread of fake news and hate speech. Users are still free to reshare the content, but they have to copy and paste it to do so – and even this minimum level of effort makes most people think twice.
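A minimal sketch of how that friction might work, assuming a toy data model (none of these function names come from Facebook’s code): the one-click button is simply withheld when the post’s author is not on the viewer’s friends list.

def can_one_click_reshare(viewer_friends: set[str], post_author: str) -> bool:
    # The share button stays enabled only for posts written by the
    # viewer's own friends.
    return post_author in viewer_friends

def reshare(viewer_friends: set[str], post_author: str, post_text: str) -> str:
    if can_one_click_reshare(viewer_friends, post_author):
        return f"Reshared instantly: {post_text}"
    # Beyond the friends list the button is disabled. The content can still
    # spread, but only if the user deliberately copies and pastes it: the
    # small pause that, per the leaked experiments, makes most people think twice.
    return "Share button disabled: copy and paste to reshare."

print(reshare({"alice"}, "alice", "Neighbourhood watch meeting tonight"))
print(reshare({"alice"}, "unknown_page", "They are traitors. Punish them."))

Nothing is censored in this scheme; the cost of amplification is simply raised from one click to a few seconds of effort.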
Technical solutions like this work across different countries and languages. It’s a quick, comprehensive fix. The only problem, as far as Facebook is concerned, is that it also has a strongly negative impact on MSI: people share things less, they like things less, they engage less. In a briefing with the civic integrity team, the notes of which are among the leaked documents, Facebook CEO Mark Zuckerberg makes his position clear, directing the team not to go ahead with any changes “if there was a material trade-off with MSI impact”.
This left the civic integrity team with an impossible task, according to another document: “…integrity teams spend months searching for win/wins – pro-safety features that are also pro-growth. But here’s the thing: these solutions-without-downside almost never exist.”

Or, as Haugen put it in a briefing with journalists on Thursday: “Facebook knows how to make these harms better. But they also know that no one can catch them. So they keep falling on the side of profits.”
Underfunded and understaffed
The second option available to Facebook is to tackle hate speech and disinformation on a case-by-case basis, using a combination of human moderators and machine learning to analyse individual posts. This approach requires enormous financial and human resources, which Facebook appears to be reluctant to commit outside of the United States.
For Ethiopia, for example, two sources told The Continent that there are fewer than 100 people working on content moderation across the four Ethiopian languages that are supported by Facebook (Amharic, Oromo, Somali and Tigrinya). With some 6.4-million Ethiopians on Facebook, this works out to less than one moderator per 64,000 people.
To compound the problem, the vast majority of Facebook’s spending is directed towards the US. According to financial records, in 2020 just 13% of the company’s budget to combat misinformation went to countries outside of the US – even though these countries account for 90% of Facebook’s user base.
Nor is machine learning an effective solution. Not at the moment, anyway. Timnit Gebru, a computer scientist who studies algorithmic bias, told The Continent that major errors can happen when computers are responsible for translating and assessing content for languages they do not prioritise.
“You have to have people who are following, investigating and understanding the context very clearly, like journalists. However, the social media platforms working on this don’t seem to have that.”
“Facebook needs to do better as far as content moderation on the continent goes,” said Eric Mugendi, Africa programme manager at Meedan, a tech non-profit that aims to improve the quality of online discourse. “Additionally, the platform needs to allocate more resources to local languages that are spoken widely in the continent and used on the platform, but are not properly monitored for potential harm. They need to acknowledge the real world harm that their inaction has led to, and much more needs to be done.”
Violating community standards
In Ethiopia, a country engulfed in a civil war that has been characterised by multiple accounts of massacres and atrocities, that real-world harm is all too visible.
Earlier this month, Facebook deleted a post by Prime Minister Abiy Ahmed that called on citizens to “bury the terrorist TPLF”. The post violated its community standards against inciting violence, the company said. (If anyone understands the power of Facebook, it is Abiy: he was swept into power on the back of the grassroots, youth-led Qeerroo movement, which was itself enabled and then supercharged through Facebook).
But even more explicit posts by other prominent figures remained online. Berhan Taye, a digital rights researcher and activist, personally reported one post by media personality Mesay Mekonnen, which called for all Tigrayans to be placed in concentration camps. Multiple media reports confirm that Tigrayans in Addis Ababa are currently being rounded up and held in detention facilities around the capital.
Taye was told by Facebook that the post was reviewed, but “doesn’t go against one of our specific community standards”. Only after she escalated her complaint, using her own connections within Facebook, was the post removed.
The problems Ethiopia is experiencing are mirrored elsewhere in the developing world, Taye told The Continent. “[Facebook claims] less than 10% of Ethiopians use Facebook, implying investing in a place like that doesn’t make sense. But when you look at markets like India, Philippines or Brazil where Facebook has over 500-million users, Facebook has equally failed. Either they don’t care, or don’t know the impact the platforms have and are figuring out things after they have transpired.”
‘We are still blind’
“People across Africa should be concerned about Facebook’s poor handling of the Ethiopia crisis because it is indicative of the quality of the response we are likely to see in other African countries,” said Rosemary Ajayi of the Digital Africa Research Lab.
Facebook – or rather Meta, now – insists it is taking its responsibilities in Ethiopia seriously. In a statement on Tuesday, the company said that “for more than two years, we’ve been implementing a comprehensive strategy to keep people safe on our platform given the severe, long-standing risks of conflict”.
But the leaked documents tell a different story. As of December 2020, Ethiopia was categorised as having the weakest level of protection among the countries the civic integrity team had identified as at risk. In an attached rubric, this level is described as: “We are still blind to the extent of the problem”.
And basic safety features that are available to US and western audiences are not available to many Ethiopian users, the same document shows; despite the clear risks, Ethiopian users are without protection against misinformation, civic harassment, civic spam and fake accounts.
This leaves them vulnerable to the kind of inflammatory rhetoric that accompanied the massacre in Aykel, and has been a consistent, hate-filled soundtrack to Ethiopia’s civil war.
This article was first published on The Continent.