On July 24, British grime artist Wiley posted an antisemitic tirade on Twitter. In a series of messages, he alleged Jews were behind a global conspiracy and likened the Jewish community to the Ku Klux Klan. The rant continued on his Instagram and personal Facebook pages.1
In response, the major social media sites temporarily suspended Wiley’s accounts. High-profile celebrities and charities boycotted Twitter for 48 hours to push for stronger action. Among them was the UK’s Chief Rabbi, Ephraim Mirvis, who wrote an open letter to Facebook and Twitter in which he condemned “the hatred that currently thrives on your platforms”. As the outrage grew, both companies permanently banned Wiley.2
The persistence of hate speech on social media could yet pose a serious threat to the future of the tech giants
This incident was part of a wider backlash against social media companies over their failure to tackle hate speech. At the beginning of July, more than 1,000 prominent advertisers launched a month-long boycott of Facebook as part of the #StopHateForProfit campaign, pressing the firm to do more to stamp out racist content in the wake of George Floyd’s murder and the Black Lives Matter protests.3
Despite these controversies, social media companies continue to enjoy the confidence of the market. Share prices have risen in line with the wider tech sector this year amid growing demand for online tools, even as the bricks-and-mortar economy suffers under COVID-19 restrictions. But as advertisers pull out, users log off and regulators circle, some investors are warning the persistence of hate speech on social media could yet pose a serious threat to the future of the tech giants.
“Hate speech is tarnishing Facebook’s brand,” says David Cumming, chief investment officer for equities at Aviva Investors. “The company needs to take its social obligations seriously. While the share price hasn’t reacted yet, this could eventually have an impact. If Facebook loses the trust of its users, that could damage the business.”
From dial-a-hate to the Twitter feed
Throughout history, advances in communications technology have enabled new forms of hate speech. In the 1960s, for example, extremist groups in the US set up automated voice messages connected to phone lines, broadcasting their views to a wide audience.4
The so-called “dial-a-hate” phenomenon drew the attention of Congress. Prevented from banning the recordings by First Amendment laws protecting freedom of speech, policymakers put pressure on telecoms company AT&T to tackle the issue. The company argued it was powerless to regulate the activity of private individuals on its phone lines.5
Today, social media giants such as Facebook and Twitter make similar arguments when criticised for the content that appears on their platforms. But hate speech is a far bigger problem in the Internet era, when millions of people around the world can meet and instantaneously exchange information – or intimidate, bully or harass others.
As defined by the United Nations, hate speech encompasses “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are”. This might include their religion, ethnicity, nationality, race, gender, sexuality or any other identity factor.6
In 2015, countries around the world committed to tackling the problem as part of the UN’s 17 Sustainable Development Goals, many of which affirm the right to freedom of expression and protection from harassment. For example, SDG 16 aims to “promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels”.7 But moving from commitment to practice has proved more difficult.
“It’s clear that although countries have committed to protect people from harassment, the online reality is unfortunately quite different. Neither companies nor governments have found a way to tackle online hate speech, but we expect to see both sides take more action as pressure increases from customers and voters,” says Marte Borhaug, global head of sustainable outcomes at Aviva Investors.
Hate speech has real-world effects beyond the Internet, making it a fundamental human rights issue. In Germany, researchers found a correlation between anti-refugee Facebook posts by the far-right Alternative für Deutschland party and physical attacks on refugees.8
The amplification of content through sharing clearly exacerbates the impact of these kinds of messages on wider society
“The 24/7 nature of social media, the amplification of content through sharing, clearly exacerbates the impact of these kinds of messages on wider society,” says Louise Piffaut, ESG analyst at Aviva Investors. “From hate speech to bullying, extremism to misinformation, there is a lot of content here that damages communities.”
The perpetrators of racist mass shootings in the US and elsewhere have publicised their acts to supporters on the major social media sites and even used the platforms to broadcast videos of their crimes. The shooter who murdered 51 people at two mosques in Christchurch, New Zealand in March 2019 streamed a video of the attacks using Facebook Live, and clips of the footage spread quickly across Facebook and YouTube.9
While this sort of activity tends to be taken down relatively swiftly, it was only in the immediate aftermath of the Christchurch attacks that Facebook banned white nationalist content as a matter of policy.10 YouTube and Twitter allowed Ku Klux Klan leader David Duke to post on their networks for years before finally banning him this summer.11
Social media firms have global reach and hate speech is a global problem. In Myanmar, military personnel used Facebook to spread propaganda demonising Rohingya Muslims ahead of a campaign of ethnic cleansing, according to a UN investigation. In India, lynch mobs have used Facebook-owned messaging service WhatsApp to coordinate attacks.12
Some experts place the blame on social media business models. Social networks encourage like-minded individuals to gather so they can be targeted more efficiently with advertisements. But as algorithms push users towards content that aligns with their pre-existing views, echo chambers can form. Without the corrective offered by opposing opinions or moderating voices, rhetoric can quickly spiral towards extremes.
These companies want to keep people on the platforms because that’s what allows them to be targeted with advertising
“These companies want to keep people on the platforms because that’s what allows them to be targeted with advertising,” says Dr Jennifer Cobbe, coordinator of the Trust and Technology Initiative at Cambridge University, an interdisciplinary research project that explores the dynamics of trust and distrust in relation to Internet technologies, society and power.
“Part of the problem with that is people collectively tend to be drawn to things that are shocking and controversial and which raise an emotional response,” she adds. “Hate speech is actually great for these companies’ business models, because that kind of controversial, shocking, emotional content will draw people in. As long as people are on their platforms, that’s all they want.”
Algorithms designed to promote engagement can exacerbate the problem. Take YouTube’s auto-play function, which has become notorious for displaying a series of ever-more incendiary videos to users who linger on the site. As New York Times columnist Zeynep Tufekci remarked: “You are never ‘hard core’ enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.”13
Each social media platform has its own, more or less stringent, rules over what is permissible. Facebook’s guidelines are “relatively detailed”, according to Piffaut; they describe the kinds of content that would violate its policies on both Facebook and Instagram. YouTube and Twitter also have clear guidelines over what should be removed – any incitement to violence is off-limits – and what should be allowed to remain with content warnings attached (this includes certain forms of hate speech).
Social media firms rely on three methods of policing content: artificial intelligence, human moderators and user reporting
Enforcement of these rules is patchy, however. To varying degrees, social media firms rely on three methods of policing content: artificial intelligence, human moderators and user reporting.
The algorithms used to detect and delete content that violates the rules are opaque: YouTube appears to have tweaked its algorithm to curb the “radicalising” effect of its auto-play function in recent years, but the logic behind its recommendations remains obscure. Facebook’s own data suggests automated methods are more effective at rooting out violent or graphic content than cases of harassment and bullying, which tend to be flagged by users in the first instance.14
Human moderators, meanwhile, can quickly become overwhelmed by the thankless task of sifting through reams of disturbing content, which takes its toll on their mental health. Making things worse, such teams are often understaffed. In Myanmar, Facebook employed only two Burmese-speaking moderators at the time the anti-Rohingya propaganda was flooding its local platform.15
As global businesses whose operations span countries with very different laws on freedom of expression, social media companies must tread a fine line when deciding which content to ban or to flag as harmful.
In the US, President Donald Trump and other senior Republican politicians have accused Facebook and Twitter of being quicker to crack down on conservative voices than liberal ones. However, an independent civil rights audit, commissioned by Facebook, found the company contravened its own policies on hate speech by refusing to take down inflammatory posts from Trump himself.16
As for Twitter, a recent Economist study found that its algorithm often amplifies the most stridently negative conservative voices. Figure 1 shows the kind of messages served to a clone of Donald Trump’s account: the president is more likely to see emotive tweets criticising his political rivals than would be the case if his feed simply displayed tweets in chronological order.
Figure 1: Twitter’s algorithm favours the most emotive and negative tweets
Facebook CEO Mark Zuckerberg has argued that deciding which content is acceptable is beyond the remit of social media companies themselves. He has asked governments to devise a consistent set of rules for Internet companies, including guidelines on how to deal with harmful content. The company has also set up an independent oversight panel – dubbed Facebook’s “Supreme Court” – to review content management decisions and potentially overturn them.17
From the tech companies’ perspective, a worst-case scenario would be an amendment to Section 230
Zuckerberg may be hoping to ward off calls for more punitive legislation. From the tech companies’ perspective, a worst-case scenario would be an amendment to Section 230 of the US Communications Decency Act, under which technology companies are currently shielded from liability for harmful or defamatory content published by third parties on their platforms.
In May 2020, Trump issued an executive order aimed at limiting the protections offered by Section 230, ostensibly in response to Twitter’s decision to add fact-checks to his recent tweets on voting by mail (in the text of the order, the president denounced Twitter’s decision as “selective censorship”).18 Both Trump and his Democratic opponent in November’s election, Joe Biden, have raised the possibility of repealing Section 230 altogether.
For its part, the European Commission is drawing up legislation that would force tech giants to remove illegal content or face the threat of sanctions under a comprehensive “Digital Services Act”, due to be unveiled at the end of 2020. Germany has already introduced the Network Enforcement Act (NetzDG), which forces large social media companies to review complaints and remove any content that is clearly illegal within 24 hours.
“A tougher regulatory environment is long overdue,” says Cobbe. “We are now acknowledging the reality that these platforms play such an outsized role in society that they need to have some kind of responsibility, and need to be brought under some degree of control.
“The German Network Enforcement Act is a good example; it’s not so much targeted at the content itself, as the content is still regarded as the speech of individuals, but it does focus on what the platforms should be doing. We need to look at algorithms of platforms and how they disseminate conspiracy theories, hate speech and violent extremism through their recommendation systems,” she adds.
Regulations need to be enforceable and come with severe punishments to be effective
Cobbe says regulations need to be enforceable and come with severe punishments to be effective. Facebook has already fallen foul of the NetzDG law: in July 2019 the company had to pay €2 million for under-reporting illegal activity on its platforms in Germany.19
This is a small sum in the context of Facebook’s global revenues ($70 billion in 2019), but other countries are seeking tougher punishments. Last year, the Australian parliament passed the Sharing of Abhorrent Violent Material Act, introducing criminal penalties for social media companies, possible jail sentences of up to three years for tech executives, and financial penalties worth up to 10 per cent of a company’s global turnover. Based on their 2019 revenues, this would amount to around $350 million for Twitter, $7 billion for Facebook and $16 billion for YouTube’s parent Alphabet.20
The size of these figures shows that, quite apart from the moral impetus to act, hate speech poses a threat to these firms’ revenues. Even if they manage to avoid costly regulatory sanctions, it is likely they will have to invest much more heavily in content-management initiatives in the future, from improved automated systems to new armies of human moderators.
As operating expenses among tech companies tend to be high even before these added outlays – 40 per cent of total revenue in 2019, in Facebook’s case – any increase in R&D and labour costs may have a material impact on profit margins, says Piffaut.
Ad sales make up the vast majority of social media companies’ revenues
Advertising boycotts could also have a growing impact over time, given ad sales make up the vast majority of social media companies’ revenues (see Figure 2), though the impact so far has been small. Facebook’s ad revenues actually increased during July’s boycott, partly because its customers are mostly local, “mom-and-pop” businesses that did not participate in the walk-out. Facebook has eight million active advertisers, and the top 100 brands, including the largest companies involved in #StopHateforProfit, generated only six per cent of its total revenue in 2019.21
Figure 2. Leading source of revenue for tech companies
That is not to say the boycott will fail to force changes in Facebook’s handling of hate speech, however. YouTube’s response to the ad boycott of 2017 may be a salient precedent.
“YouTube survived the exodus of advertisers three years ago, but only because it took fairly drastic action in response. Throughout 2017 and 2018 there was a wholesale shift in the content preferred by the platform, with regard to the allocation of monetisation rights, towards less controversial, more family-friendly content,” says Charles Devereux, ESG analyst at Aviva Investors.
This example shows tech companies will respond when sufficient external pressure is applied, indicating investor engagement could bear fruit. And the fate of the tech companies is certainly an issue of increasing significance to investors, for financial as well as ethical reasons. The five biggest technology companies (Alphabet, Amazon, Apple, Facebook and Microsoft) accounted for 23.8 per cent of the total market capitalisation of the S&P 500 as of August 28.22
A tech selloff prompted by further controversy or new regulation could drag the wider market down
Surging revenues among these companies have propped up the US stock market in 2020: while the S&P 500 and the tech-focused Nasdaq index have risen over the year to date (by nine per cent and 33 per cent respectively), small-cap indices have fallen.23 It seems reasonable to surmise that a tech selloff, prompted by further controversy or new regulation, could drag the wider market down.
With both moral and financial risks at stake, more and more investors are beginning to question the social media firms over content. Piffaut and Devereux set out a framework for engaging with these companies in key areas. They recommend investors ensure social media firms properly assess how their operations affect human rights, develop more robust content policies in light of these principles and demonstrate how those policies are enforced.
Investors should also engage with social media firms to properly enforce their own rules, improve internal accountability and provide more transparency as to their actions on hate speech. Investment in more sophisticated detection algorithms would lessen the burden on human moderators, the analysts say.
“Progress has been made, but not enough yet,” says Piffaut. “The issue is that the measures taken so far have been very reactive, rather than preventative. We are looking for higher investments in technology, and for companies to take ownership of this issue instead of outsourcing the solution.”
When devising an effective strategy for engaging with social media companies, investors need to be mindful of two further points. The first is that coordinated action is likely to be more effective than acting alone.
“Collaborative initiatives are important in this area,” says Borhaug. “When you are initially raising concerns with companies that may not even want to meet with you – and tech companies are infamous for not talking to investors – it can be helpful for investors to collaborate globally.”
Secondly, investors need to pay attention to the social and political context and maintain consistency in engaging with companies across borders. As well as pressure over hate speech, social media companies are facing calls to defend freedom of speech, and not only from hard-line US conservatives.
In early July, 150 global authors, academics and intellectuals from across the political spectrum signed an open letter published in Harper’s magazine, defending freedom of speech against what they called a “culture of illiberalism” on both right and left.24
“We are talking about sensitivity of content, and very close to that is freedom of speech and restriction of debate,” says Cumming. “We believe Facebook is abdicating its responsibility as to its social penetration and influence in allowing hate speech. But we need to stay conscious of the need to defend freedom of speech as well, especially at a time when this basic human right is being denied in many countries.”
While social media companies in the West face criticism for being lax in shutting down abusive content on their platforms, technology firms elsewhere may be too quick to restrict debate at the behest of authoritarian governments. In those countries, the onus is on investors to pressure companies to defend individual freedoms and maintain their access to information where possible.
“Across many emerging markets, social networking is dominated by Western companies: Instagram, Facebook, Twitter. Those where it isn’t tend to be countries where freedom of speech is not a policy priority, such as Russia and China,” says Alistair Way, head of emerging market equities at Aviva Investors.
“For social media companies in those markets, such as Tencent or TikTok owner ByteDance in China, or Mail.ru Group in Russia, we talk to them about how they manage the line between not falling foul of the government and giving users access to information and maintaining their rights. These issues are front and centre in our conversations with these firms,” he adds.
Striking the right balance when engaging globally with social media companies over content management may be tricky, but investors must be willing to do this if they are to invest according to their own moral framework and defend the value of those investments.
If hate speech is allowed to flourish, calls will grow to rein in the social media firms with stronger regulation
After all, if hate speech is allowed to flourish, calls will grow to rein in the social media firms with stronger regulation, and perhaps even to break them up. Both Trump and Biden have raised this possibility in the past.25
Cobbe argues the power of the big tech platforms has grown to an extent they are effectively unmanageable. She believes they should be scaled down so they can be properly overseen and regulated.
“If governments want to address hate speech or the other problems with these platforms, they need to address the structural problems of scale and power, and that’s where I would begin with trying to address this,” she says.
The early signs are that a Biden administration would take a tougher line on Facebook than Trump has. In June, the Democratic presidential nominee wrote an open letter to the company in which he criticised its content-management policies and urged it to “move fast and fix it”, referring to the problems of hate speech and misinformation; he and his supporters disseminated this slogan on the network.26
This follows a remark Biden made in 2019, when he told the Associated Press that breaking up the company “is something we should take a really hard look at”.27 Despite this, Biden’s own campaign illustrates the importance of the platform as an essential communications tool in modern politics. Biden spent over $6 million on Facebook ads in the second week of August, following the announcement of Kamala Harris as his running mate, more than the Trump campaign’s outlay of $4 million.28
Social media companies will face increasing regulatory risk
“This is very much a political issue, and there is no doubt social media companies will face increasing regulatory risk, as it is on the agenda for politicians – see the US congressional antitrust hearings this year. The key question is around the speed of change, and that is difficult to answer,” says Piffaut.
The US Congress recently concluded hearings with the CEOs of Amazon, Apple, Facebook and Google as part of an investigation into claims that big tech firms hold too much market power. A subcommittee will soon publish a report on its findings, but legal experts say that even if antitrust action is forthcoming, any legislation is likely to be subject to lengthy political and bureaucratic delays.29
Whether or not regulators decide to break up the larger tech companies, users could become disaffected by the increasingly poisonous atmosphere on social media platforms. This could create a negative feedback loop, whereby a decline in user engagement erodes advertisers’ incentive to keep paying for ads over the long run.
New disruptors could emerge to nab user attention and the associated advertising cash
New disruptors could emerge to nab user attention and the associated advertising cash, much as Facebook dislodged one-time social media leader Myspace, a platform widely considered to have an unassailable monopoly as recently as 12 years ago.30 One reason Facebook was able to displace its rival is that it offered a more family-friendly alternative to Myspace, which struggled to filter out spam, phishing scams and unwholesome ads, in much the same way Facebook is failing to get a grip on hate speech today.31
There are already signs users don’t value Facebook particularly highly in monetary terms. This makes the pact between users and the platform – free access in exchange for personal data that will be used for commercial purposes – somewhat precarious, especially when compared with more diversified tech companies like Google, whose portfolio includes search engines and other tools such as maps.
Consider a recent study led by Erik Brynjolfsson, director of the Initiative on the Digital Economy at the Massachusetts Institute of Technology, which asked users how much they would have to be paid to forgo various online services for a year. To give up search engines, respondents demanded an average of $17,500; to give up Facebook, less than $600.32
“Even if a company looks like it has an unregulated monopoly, there is always a tacit societal contract that constrains how it can act and how much money it can make. Businesses need to stay on the right side of the ‘value-for-money’ equation,” says Giles Parkinson, global equities portfolio manager at Aviva Investors. “Compared with Google, Facebook elicits more of a shrug from users – they like it but don’t love it. This means that when regulatory or societal scrutiny comes to bear, Alphabet should find itself in a better position than Facebook.”
To grasp the risk, one only needs to consult Mark Zuckerberg himself, or at least his fictional alter ego in the 2010 biographical film, The Social Network. In a key scene, Zuckerberg frets about how quickly the fortunes of his company could turn: “Users are fickle,” he says. “Even a few people leaving would reverberate through the entire user base. The users are interconnected. That is the whole point. College kids are online because their friends are online. And if one domino goes, the other dominos go. Don't you get that?”
As hate speech threatens to trigger a domino effect among Facebook’s users, advertisers and investors, the real Zuckerberg would do well to heed the warning.