In April 2021, English football announced a boycott of social media. Players, coaches and pundits from across the sport shunned Twitter, Facebook and Instagram for four days in protest against racism on these platforms. Corporate sponsors including Adidas and Barclays also took part in the boycott.
A report from Kick It Out, English football’s equality and inclusion organisation, illustrated why such action was necessary. It found a significant increase in racist and homophobic abuse of those involved in the sport since the beginning of the 2019-’20 season. Many social media users cited “unsatisfactory responses” from the big platforms after they had made initial complaints about hate speech.1
The boycott was just the latest episode in a wider backlash against social media companies. In July 2020, more than 1,000 prominent advertisers launched a month-long boycott of Facebook as part of the #StopHateForProfit campaign, pressing the firm to do more to stamp out racist content in the wake of George Floyd’s murder and the Black Lives Matter protests.2
Social media companies continue to enjoy the confidence of the market
Despite these controversies, social media companies continue to enjoy the confidence of the market. Share prices have risen in line with the wider tech sector amid growing demand for online tools, even as the bricks-and-mortar economy suffers under COVID-19 restrictions. But as advertisers pull out, users log off and regulators circle, some investors are warning the persistence of hate speech on social media could yet pose a serious threat to the future of the tech giants.
From dial-a-hate to the Twitter feed
Throughout history, advances in communications technology have enabled new forms of hate speech. In the 1960s, for example, extremist groups in the US set up automated voice messages connected to phone lines, broadcasting their views to a wide audience.3
The “dial-a-hate” phenomenon drew the attention of Congress with policymakers putting pressure on AT&T to tackle the issue
The so-called “dial-a-hate” phenomenon drew the attention of Congress. Prevented from banning the recordings by the First Amendment’s protection of free speech, policymakers instead put pressure on telecoms company AT&T to tackle the issue. The company argued it was powerless to regulate the activity of private individuals on its phone lines.4
Today, social media giants such as Facebook and Twitter make similar arguments when criticised for the content that appears on their platforms. But hate speech is a far bigger problem in the Internet era, when millions of people around the world can meet and instantaneously exchange information – or intimidate, bully or harass.
As defined by the United Nations, hate speech encompasses “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are”. This might include their religion, ethnicity, nationality, race, gender, sexuality or any other identity factor.5
In 2015, countries around the world committed to tackling the problem as part of the UN’s 17 Sustainable Development Goals, many of which affirm the right to freedom of expression and protection from harassment. For example, SDG 16 aims to “promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels”.6 But moving from commitment to practice has proved more difficult.
“It’s clear that although countries have committed to protect people from harassment, the online reality is unfortunately quite different. Neither companies nor governments have found a way to tackle online hate speech, but we expect to see both sides take more action as pressure increases from customers and voters,” says Marte Borhaug, global head of sustainable outcomes at Aviva Investors.
Hate speech has real-world effects beyond the Internet, making it a fundamental human rights issue. In Germany, a correlation was found between anti-refugee posts on Facebook by the far-right Alternative für Deutschland party and physical attacks on refugees.7
From hate speech to bullying, extremism to misinformation, there is a lot of content here that damages communities
“The 24/7 nature of social media, the amplification of content through sharing, clearly exacerbates the impact of these kinds of messages on wider society,” says Louise Piffaut, ESG analyst at Aviva Investors. “From hate speech to bullying, extremism to misinformation, there is a lot of content here that damages communities.”
The perpetrators of racist mass shootings in the US and elsewhere have publicised their acts to supporters on the major social media sites and even used the platforms to broadcast videos of their crimes. The shooter who murdered 51 people at two mosques in Christchurch, New Zealand in March 2019 streamed a video of the attacks using Facebook Live, and clips of the footage spread quickly across Facebook and YouTube.8
While this sort of activity tends to be taken down relatively swiftly, it was only in the immediate aftermath of the Christchurch attacks that Facebook banned white nationalist content as a matter of policy.9 YouTube and Twitter allowed Ku Klux Klan leader David Duke to post on their networks for years before finally banning him in 2020.10
Social media firms have global reach and hate speech is a global problem. In Myanmar, military personnel used Facebook to spread propaganda demonising Rohingya Muslims ahead of a campaign of ethnic cleansing, according to a UN investigation. In India, lynch mobs have used Facebook-owned messaging service WhatsApp to coordinate attacks.11
Echo chambers
Some experts lay the blame on social media business models. Social networks encourage like-minded individuals to gather, so as to target them more efficiently with advertisements. But as algorithms push users towards content that aligns with their pre-existing views, echo chambers can form. Without the corrective offered by opposing opinions or moderating voices, rhetoric can quickly spiral towards extremes.
Keeping people on the platforms allows them to be targeted with advertising
“These companies want to keep people on the platforms because that’s what allows them to be targeted with advertising,” says Dr Jennifer Cobbe, coordinator of the Trust and Technology Initiative at Cambridge University, an interdisciplinary research project that explores the dynamics of trust and distrust in relation to Internet technologies, society and power.
“Part of the problem with that is people collectively tend to be drawn to things that are shocking and controversial and which raise an emotional response,” she adds. “Hate speech is actually great for these companies’ business models, because that kind of controversial, shocking, emotional content will draw people in. As long as people are on their platforms, that’s all they want.”
Algorithms designed to promote engagement can exacerbate the problem. Take YouTube’s auto-play function, which has become notorious for displaying a series of ever-more incendiary videos to users who linger on the site. As New York Times columnist Zeynep Tufekci remarked: “You are never ‘hard core’ enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.”12
Each social media platform has its own, more or less stringent, rules over what is permissible. Facebook’s guidelines are “relatively detailed”, according to Piffaut; they describe the kinds of content that would violate its policies on both Facebook and Instagram. YouTube and Twitter also have clear guidelines over what should be removed – any incitement to violence is off-limits – and what should be allowed to remain with content warnings attached (this includes certain forms of hate speech).
Social media firms rely on three methods of policing content: artificial intelligence, human moderators and user reporting
Enforcement of these rules is patchy, however. To varying degrees, social media firms rely on three methods of policing content: artificial intelligence, human moderators and user reporting.
The algorithms used to detect and delete content that violates the rules are opaque: YouTube appears to have tweaked its algorithm to curb the “radicalising” effect of its auto-play function in recent years, but the logic behind its recommendations remains obscure. Facebook’s own data suggests automated methods are more effective at rooting out violent or graphic content than cases of harassment and bullying, which tend to be flagged by users in the first instance.13
Human moderators, meanwhile, can quickly become overwhelmed by the thankless task of sifting through reams of disturbing content, which takes its toll on their mental health. Making things worse, such teams are often understaffed. In Myanmar, Facebook employed only two Burmese-speaking moderators at the time the anti-Rohingya propaganda was flooding its local platform.14
Tougher regulation
As global businesses whose operations span countries with very different laws on freedom of expression, social media companies must tread a fine line when deciding which content to ban or to flag as harmful.
Twitter’s own technology may play a role in spreading the most inflammatory messages
In the US, senior Republican politicians have long accused Facebook and Twitter of being quicker to crack down on conservative voices than liberal ones. However, an independent civil rights audit, commissioned by Facebook in 2020, found the company contravened its own policies on hate speech by refusing to take down inflammatory posts from President Trump.15
Trump was eventually banned from both Twitter and Facebook after he contested Joe Biden’s presidential election victory and incited a riot by his supporters at the US Capitol building in Washington DC in January 2021. Twitter said Trump’s tweets around this time “were highly likely to encourage and inspire people to replicate the criminal acts that took place at the US Capitol”.16
But Twitter’s own technology may also play a role in spreading the most inflammatory messages. A recent Economist study found that its algorithm often amplifies the most stridently negative conservative voices. Figure 1 shows the kind of messages served to a clone of Donald Trump’s account: the president was more likely to see emotive tweets criticising his political rivals than would have been the case if his feed simply displayed tweets in chronological order.
Figure 1: Twitter’s algorithm favours the most emotive and negative tweets17

Note: Sentiments of tweets served to a clone of Donald Trump’s account by newsfeed type. Average for tweets containing 40 most frequent words, Sep-Dec 2019. Source: ‘Twitter’s algorithm does not seem to silence conservatives’, The Economist, August 1, 2020
In some quarters, the social media firms drew criticism for their delay in banishing Trump; in others, they were criticised for censoring free speech. The French and German governments were among those to protest Trump’s ban: German Chancellor Angela Merkel argued decisions to limit free speech should be made by politicians, not private companies.18
Facebook CEO Mark Zuckerberg has asked governments to devise a consistent set of rules for Internet companies, including guidelines on how to deal with harmful content. The company has also set up an independent oversight panel – dubbed Facebook’s “Supreme Court” – to review content management decisions and potentially overturn them.19 In May 2021 the panel upheld Trump’s suspension, but said the indefinite nature of the ban was unusual and called on Facebook to be more transparent in its decision-making process.20
From the tech companies’ perspective, a worst-case scenario would be an amendment to Section 230
In setting up an independent commission to rule on content moderation, Zuckerberg may be hoping to ward off calls for punitive legislation. From the tech companies’ perspective, a worst-case scenario would be an amendment to Section 230 of the US Communications Decency Act, which currently shields technology companies from liability for harmful or defamatory content published by third parties on their platforms.
In June 2020, Trump issued an executive order aimed at limiting the protections offered by Section 230, ostensibly in response to Twitter’s decision to add fact-checks to his recent tweets on voting by mail (in the text of the order, the president denounced Twitter’s decision as “selective censorship”).21 His successor, Joe Biden, raised the possibility of repealing Section 230 altogether during his election campaign.
For its part, the European Commission is drawing up legislation that will force tech giants to remove illegal content or face the threat of sanctions under a comprehensive “Digital Services Act”, due to be unveiled at the end of 2021. Germany, meanwhile, has introduced the Network Enforcement Act (NetzDG), which forces large social media companies to review complaints and remove any content that is clearly illegal within 24 hours.
“A tougher regulatory environment is long overdue,” says Cobbe. “We are now acknowledging the reality these platforms play such an outsized role in society that they need to have some kind of responsibility, and need to be brought under some degree of control.
“The German Network Enforcement Act is a good example; it’s not so much targeted at the content itself, as the content is still regarded as the speech of individuals, but it does focus on what the platforms should be doing. We need to look at algorithms of platforms and how they disseminate conspiracy theories, hate speech and violent extremism through their recommendation systems,” she adds.
Facebook had to pay €2 million for under-reporting illegal activity on its platforms in Germany
Cobbe says regulations need to be enforceable and come with severe punishments to be effective. Facebook has already fallen foul of the NetzDG law: in July 2019 the company had to pay €2 million for under-reporting illegal activity on its platforms in Germany.22
This is a small sum in the context of Facebook’s global revenues ($85 billion in 2020), but other countries are seeking tougher punishments. In 2019, the Australian parliament passed the Sharing of Abhorrent Violent Material Act, which introduces criminal penalties for social media companies, jail sentences of up to three years for tech executives, and financial penalties worth up to ten per cent of a company’s global turnover. Based on their 2020 revenues, this would amount to $370 million for Twitter, $8.5 billion for Facebook and $1.8 billion for YouTube’s parent Alphabet.23
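Because the maximum penalty scales linearly with turnover, the dollar figures above can be sanity-checked with a one-line calculation. A minimal sketch, assuming only the ten per cent rate set out in the Act and the $85 billion Facebook revenue figure quoted in the text (the function name and variable are illustrative, not from any statute):

```python
# Sharing of Abhorrent Violent Material Act (Australia, 2019):
# financial penalties of up to ten per cent of global turnover.
PENALTY_RATE = 0.10

def max_penalty(annual_turnover_usd: float) -> float:
    """Return the maximum financial penalty for a given annual global turnover."""
    return annual_turnover_usd * PENALTY_RATE

# Facebook's 2020 global revenue, as quoted in the text: $85 billion.
facebook_2020_revenue = 85e9
print(f"Facebook: ${max_penalty(facebook_2020_revenue) / 1e9:.1f}bn")  # prints "Facebook: $8.5bn"
```

The same division works in reverse: the quoted $370 million Twitter figure implies a revenue base of roughly $3.7 billion.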
Commercial impact
The size of these figures shows that, quite apart from the moral impetus to act, hate speech poses a threat to these firms’ revenues. Even if they manage to avoid costly regulatory sanctions, it is likely they will have to invest much more heavily in content-management initiatives in the future, from improved automated systems to new armies of human moderators.
Any increase in R&D and labour costs may have a material impact on Facebook’s profit margins
As operating expenses among tech companies tend to be high even before these added outlays – over 40 per cent of total revenue in 2020, in Facebook’s case – any increase in R&D and labour costs may have a material impact on the company’s profit margins, says Piffaut.
Advertising boycotts could also have a growing impact over time, given ad sales make up the vast majority of social media companies’ revenues (see Figure 2), though the impact so far has been small. Facebook’s ad revenues actually increased during the boycott in July 2020, partly because its customers are mostly local, “mom-and-pop” businesses that did not participate in the walk-out. Facebook has eight million active advertisers, and the top 100 brands, including the largest companies involved in #StopHateforProfit, generated only six per cent of its total revenue in 2019.24
Figure 2: Leading source of revenue for tech companies (per cent)
Source: Facebook, Twitter, Alphabet, Apple, Amazon, Microsoft, March 2020
That is not to say the boycott cannot force changes in Facebook’s handling of hate speech, however. YouTube’s response to its own ad boycott in 2017 may be a salient precedent.
“YouTube survived the exodus of advertisers, but only because it took fairly drastic action in response. Throughout 2017 and 2018 there was a wholesale shift in the content preferred by the platform, with regard to the allocation of monetisation rights, towards less controversial, more family-friendly content,” says Charles Devereux, ESG analyst at Aviva Investors.
Investor engagement
This example shows tech companies will respond when sufficient external pressure is applied, indicating investor engagement could bear fruit. And the fate of the tech companies is certainly an issue of increasing significance to investors, for financial as well as ethical reasons. The five biggest technology companies (Alphabet, Amazon, Apple, Facebook and Microsoft) accounted for about 22 per cent of the total market capitalisation of the S&P 500 as of May 1, 2021.25
With both moral and financial risks at stake, more investors are beginning to question the social media firms over content. Piffaut and Devereux set out a framework for engaging with these companies in key areas. They recommend investors ensure social media firms properly assess how their operations affect human rights, develop more robust content policies in light of these principles, and demonstrate how those policies are enforced.
Investors should engage with social media firms to properly enforce their own rules
Investors should also engage with social media firms to improve internal accountability and provide more transparency as to their actions on hate speech. Investment in more sophisticated detection algorithms would lessen the burden on human moderators, the analysts say.
“Progress has been made, but not enough yet,” says Piffaut. “The issue is that the measures taken so far have been very reactive, rather than preventative. We are looking for higher investments in technology, and for companies to take ownership of this issue instead of outsourcing the solution.”
When devising an effective strategy for engaging with social media companies, investors need to be mindful of two further points. The first is that coordinated action is likely to be more effective than acting alone.
“Collaborative initiatives are important in this area,” says Borhaug. “When you are initially raising concerns with companies that may not even want to meet with you – and tech companies are infamous for not talking to investors – it can be helpful for investors to collaborate globally.”
Secondly, investors need to pay attention to the social and political context and maintain consistency in engaging with companies across borders. As well as pressure over hate speech, social media companies are facing calls to defend freedom of speech, and not only from hard-line US conservatives.
In July 2020, 150 global authors, academics and intellectuals from across the political spectrum signed an open letter published in Harper’s magazine, defending freedom of speech against what they called a “culture of illiberalism” on both right and left.26
While social media companies in the West face criticism for being lax in shutting down abusive content on their platforms, technology firms elsewhere may be too quick to restrict debate at the behest of authoritarian governments. In those countries, the onus is on investors to pressure companies to defend individual freedoms and maintain their access to information where possible.
Technology firms in developing markets may be too quick to restrict debate at the behest of authoritarian governments
“Across many emerging markets, social networking is dominated by Western companies: Instagram, Facebook, Twitter. Those where it isn’t tend to be countries where freedom of speech is not a policy priority, such as Russia and China,” says Alistair Way, head of equities at Aviva Investors.
“For social media companies in those markets, such as Tencent or TikTok owner ByteDance in China, or Mail.ru Group in Russia, we talk to them about how they manage the line between not falling foul of the government and giving users access to information and maintaining their rights. These issues are front and centre in our conversations with these firms,” he adds.
Reputational risk
Striking the right balance when engaging globally with social media companies over content management may be tricky, but investors must be willing to do this if they are to invest according to their own moral framework and defend the value of those investments. After all, if hate speech is allowed to flourish, calls will grow to rein in the social media firms with stronger regulation, and perhaps even to break them up.
Biden criticised Facebook's content-management policies and ordered it to “move fast and fix it”
Cobbe argues the power of the big tech platforms has grown to such an extent that they are effectively unmanageable. She believes they should be scaled down so they can be properly overseen and regulated.
“If governments want to address hate speech or the other problems with these platforms, they need to address the structural problems of scale and power, and that’s where I would begin with trying to address this,” she says.
During the election campaign in 2020, Biden signalled he would take a tough line on Facebook. In June, he wrote an open letter to the company in which he criticised its content-management policies and ordered it to “move fast and fix it”, referring to the problems of hate speech and misinformation; he and his supporters disseminated this slogan on the network.27
This follows a remark Biden made in 2019, when he told the Associated Press that breaking up the company “is something we should take a really hard look at”.28 However, in a blow to the antitrust crackdown, a recent ruling in Washington by Judge James Boasberg dismissed two cases against Facebook. He stated that the Federal Trade Commission’s case was “legally insufficient” and “failed to plead enough facts to plausibly establish [a monopoly]”.
Despite all this, Biden’s own strategy illustrated the importance of the platform as an essential communications tool in modern politics. Biden spent over $85.2 million on Facebook ads during the presidential campaign, a little less than Trump’s $89.1 million (Biden outspent his rival on Google ads, spending $60 million, around $4 million more than Trump).29 After his victory, Biden hired former Facebook executives to run his transition team, although other senior members of his administration are vocal critics of the company.30
“This is very much a political issue, and there is no doubt social media companies will face increasing regulatory risk, as it is on the agenda for politicians – see the US congressional antitrust hearings in 2020. The key question is around the speed of change, and that is difficult to answer,” says Piffaut. (Read our in-depth feature on the potential antitrust threat to Big Tech).
Domino effect
Whether or not regulators decide to break up the larger tech companies, users could become disaffected by the increasingly poisonous atmosphere on social media platforms. This could create a negative feedback loop, whereby declining user engagement removes the incentive for advertisers to spend on the platforms over the long run.
New disruptors could emerge to nab user attention and the associated advertising cash
New disruptors could emerge to nab user attention and the associated advertising cash, much as Facebook dislodged one-time social media leader Myspace, a platform widely considered to have an unassailable monopoly as recently as 12 years ago.31 One reason Facebook was able to displace its rival is that it offered a more family-friendly alternative to Myspace, which struggled to filter out spam, phishing scams and unwholesome ads, in much the same way Facebook is failing to get a grip on hate speech today.32
There are already signs users don’t value Facebook particularly highly in monetary terms. This makes the pact between users and the platform – free access in exchange for personal data that will be used for commercial purposes – somewhat precarious, especially when compared with more diversified tech companies like Google, whose portfolio includes search engines and other tools such as maps.
Consider a recent study led by Erik Brynjolfsson, director of the Initiative on the Digital Economy at the Massachusetts Institute of Technology. It asked Facebook users how much they would have to be paid to forgo search engines for a year. Respondents offered an average figure of $17,500; they were willing to give up access to Facebook for less than $600.33
Businesses need to stay on the right side of the ‘value-for-money’ equation
“Even if a company looks like it has an unregulated monopoly, there is always a tacit societal contract that constrains how it can act and how much money it can make. Businesses need to stay on the right side of the ‘value-for-money’ equation,” says Giles Parkinson, global equities portfolio manager at Aviva Investors. “Compared with Google, Facebook elicits more of a shrug from users – they like it but don’t love it. This means that when regulatory or societal scrutiny comes to bear, Alphabet should find itself in a better position than Facebook.”
To grasp the risk, one only needs to consult Mark Zuckerberg himself, or at least his fictional alter ego in the 2010 biographical film, The Social Network. In a key scene, Zuckerberg frets about how quickly the fortunes of his company could turn: “Users are fickle,” he says. “Even a few people leaving would reverberate through the entire user base. The users are interconnected. That is the whole point. College kids are online because their friends are online. And if one domino goes, the other dominos go. Don't you get that?”
As hate speech threatens to trigger a domino effect among Facebook’s users, advertisers and investors, the real Zuckerberg would do well to heed the warning.