
The Anti-Social Network

Social media companies face reckoning over hate speech

Facebook, Twitter and other platforms are drawing criticism for their failure to tackle hate content. But will the hit to their reputation do any lasting commercial damage?

In April 2021, English football announced a boycott of social media. Players, coaches and pundits from across the sport shunned Twitter, Facebook and Instagram for four days in protest against racism on these platforms. Corporate sponsors including Adidas and Barclays also took part in the boycott.

A report from Kick It Out, English football’s equality and inclusion organisation, illustrated why such action was necessary. It found a significant increase in racist and homophobic abuse of those involved in the sport since the beginning of the 2019-’20 season. Many social media users cited “unsatisfactory responses” from the big platforms after they had made initial complaints about hate speech.1

The boycott was just the latest episode in a wider backlash against social media companies. In July 2020, more than 1,000 prominent advertisers launched a month-long boycott of Facebook as part of the #StopHateForProfit campaign, pressing the firm to do more to stamp out racist content in the wake of George Floyd’s murder and the Black Lives Matter protests.2

Social media companies continue to enjoy the confidence of the market

Despite these controversies, social media companies continue to enjoy the confidence of the market. Share prices have risen in line with the wider tech sector amid growing demand for online tools, even as the bricks-and-mortar economy suffers under COVID-19 restrictions. But as advertisers pull out, users log off and regulators circle, some investors are warning the persistence of hate speech on social media could yet pose a serious threat to the future of the tech giants.

From dial-a-hate to the Twitter feed

Throughout history, advances in communications technology have enabled new forms of hate speech. In the 1960s, for example, extremist groups in the US set up automated voice messages connected to phone lines, broadcasting their views to a wide audience.3

The “dial-a-hate” phenomenon drew the attention of Congress with policymakers putting pressure on AT&T to tackle the issue

The so-called “dial-a-hate” phenomenon drew the attention of Congress. Prevented from banning the recordings by First Amendment laws protecting freedom of speech, policymakers put pressure on telecoms company AT&T to tackle the issue. The company argued it was powerless to regulate the activity of private individuals on its phone lines.4

Today, social media giants such as Facebook and Twitter make similar arguments when criticised for the content that appears on their platforms. But hate speech is a far bigger problem in the Internet era, when millions of people around the world can meet and instantaneously exchange information – or intimidate, bully or harass.

As defined by the United Nations, hate speech encompasses “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are”. This might include their religion, ethnicity, nationality, race, gender, sexuality or any other identity factor.5

In 2015, countries around the world committed to tackling the problem as part of the UN’s 17 Sustainable Development Goals, many of which affirm the right to freedom of expression and protection from harassment. For example, SDG 16 aims to “promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels”.6 But moving from commitment to practice has proved more difficult.

“It’s clear that although countries have committed to protect people from harassment, the online reality is unfortunately quite different. Neither companies nor governments have found a way to tackle online hate speech, but we expect to see both sides take more action as pressure increases from customers and voters,” says Marte Borhaug, global head of sustainable outcomes at Aviva Investors.

Hate speech has real-world effects beyond the Internet, making it a fundamental human rights issue. In Germany, a correlation was found between anti-refugee posts on Facebook by the far-right Alternative für Deutschland party and physical attacks on refugees.7

From hate speech to bullying, extremism to misinformation, there is a lot of content here that damages communities

“The 24/7 nature of social media, the amplification of content through sharing, clearly exacerbates the impact of these kinds of messages on wider society,” says Louise Piffaut, ESG analyst at Aviva Investors. “From hate speech to bullying, extremism to misinformation, there is a lot of content here that damages communities.”

The perpetrators of racist mass shootings in the US and elsewhere have publicised their acts to supporters on the major social media sites and even used the platforms to broadcast videos of their crimes. The shooter who murdered 51 people at two mosques in Christchurch, New Zealand in March 2019 streamed a video of the attacks using Facebook Live, and clips of the footage spread quickly across Facebook and YouTube.8

While this sort of activity tends to get taken down relatively swiftly, Facebook did not ban white nationalist content as a matter of policy until the immediate aftermath of the Christchurch attacks.9 YouTube and Twitter allowed Ku Klux Klan leader David Duke to post on their networks for years before finally banning him in 2020.10

Social media firms have global reach and hate speech is a global problem. In Myanmar, military personnel used Facebook to spread propaganda demonising Rohingya Muslims ahead of a campaign of ethnic cleansing, according to a UN investigation. In India, lynch mobs have used Facebook-owned messaging service WhatsApp to coordinate attacks.11

Echo chambers

Some experts lay the blame on social media business models. Social networks encourage like-minded individuals to gather, so they can be targeted more efficiently with advertisements. But as algorithms push users towards content that aligns with their pre-existing views, echo chambers can form. Without the corrective offered by opposing opinions or moderating voices, rhetoric can quickly spiral towards extremes.

Keeping people on the platforms allows them to be targeted with advertising

“These companies want to keep people on the platforms because that’s what allows them to be targeted with advertising,” says Dr Jennifer Cobbe, coordinator of the Trust and Technology Initiative at Cambridge University, an interdisciplinary research project that explores the dynamics of trust and distrust in relation to Internet technologies, society and power.

“Part of the problem with that is people collectively tend to be drawn to things that are shocking and controversial and which raise an emotional response,” she adds. “Hate speech is actually great for these companies’ business models, because that kind of controversial, shocking, emotional content will draw people in. As long as people are on their platforms, that’s all they want.”

Algorithms designed to promote engagement can exacerbate the problem. Take YouTube’s auto-play function, which has become notorious for displaying a series of ever-more incendiary videos to users who linger on the site. As New York Times columnist Zeynep Tufekci remarked: “You are never ‘hard core’ enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.”12

Each social media platform has its own, more or less stringent, rules over what is permissible. Facebook’s guidelines are “relatively detailed”, according to Piffaut; they describe the kinds of content that would violate its policies on both Facebook and Instagram. YouTube and Twitter also have clear guidelines over what should be removed – any incitement to violence is off-limits – and what should be allowed to remain with content warnings attached (this includes certain forms of hate speech).

Social media firms rely on three methods of policing content: artificial intelligence, human moderators and user reporting

Enforcement of these rules is patchy, however. To varying degrees, social media firms rely on three methods of policing content: artificial intelligence, human moderators and user reporting.

The algorithms used to detect and delete content that violates the rules are opaque: YouTube appears to have tweaked its algorithm to curb the “radicalising” effect of its auto-play function in recent years, but the logic behind its recommendations remains obscure. Facebook’s own data suggests automated methods are more effective at rooting out violent or graphic content than cases of harassment and bullying, which tend to be flagged by users in the first instance.13

Human moderators, meanwhile, can quickly become overwhelmed by the thankless task of sifting through reams of disturbing content, which takes its toll on their mental health. Making things worse, such teams are often understaffed. In Myanmar, Facebook employed only two Burmese-speaking moderators at the time the anti-Rohingya propaganda was flooding its local platform.14

Tougher regulation

As global businesses whose operations span countries with very different laws on freedom of expression, social media companies must tread a fine line when deciding which content to ban or to flag as harmful.

Twitter’s own technology may play a role in spreading the most inflammatory messages

In the US, senior Republican politicians have long accused Facebook and Twitter of being quicker to crack down on conservative voices than liberal ones. However, an independent civil rights audit, commissioned by Facebook in 2020, found the company contravened its own policies on hate speech by refusing to take down inflammatory posts from President Trump.15

Trump was eventually banned from both Twitter and Facebook after he contested Joe Biden’s presidential election victory and incited a riot by his supporters at the US Capitol building in Washington DC in January 2021. Twitter said Trump’s tweets around this time “were highly likely to encourage and inspire people to replicate the criminal acts that took place at the US Capitol”.16

But Twitter’s own technology may also play a role in spreading the most inflammatory messages. A recent Economist study found that its algorithm often amplifies the most stridently negative conservative voices. Figure 1 shows the kind of messages served to a clone of Donald Trump’s account: the president was more likely to see emotive tweets criticising his political rivals than would have been the case if his feed simply displayed tweets in chronological order.

Figure 1: Twitter’s algorithm favours the most emotive and negative tweets17
Note: Sentiments of tweets served to a clone of Donald Trump’s account by newsfeed type. Average for tweets containing 40 most frequent words, Sep-Dec 2019. Source: ‘Twitter’s algorithm does not seem to silence conservatives’, The Economist, August 1, 2020

In some quarters, the social media firms drew criticism for their delay in banishing Trump; in others, they were criticised for censoring free speech. The French and German governments were among those to protest Trump’s ban: German Chancellor Angela Merkel argued decisions to limit free speech should be made by politicians, not private companies.18

Facebook CEO Mark Zuckerberg has asked governments to devise a consistent set of rules for Internet companies, including guidelines on how to deal with harmful content. The company has also set up an independent oversight panel – dubbed Facebook’s “Supreme Court” – to review content management decisions and potentially overturn them.19 In May 2021 the panel upheld Trump’s suspension, but said the indefinite nature of the ban was unusual and called on Facebook to be more transparent in its decision-making process.20

From the tech companies’ perspective, a worst-case scenario would be an amendment to Section 230

In setting up an independent commission to rule on content moderation, Zuckerberg may be hoping to ward off calls for punitive legislation. From the tech companies’ perspective, a worst-case scenario would be an amendment to Section 230 of the US Communications Decency Act, under which technology companies are currently shielded from liability for harmful or defamatory content published by third parties on their platforms.

In May 2020, Trump had issued an executive order aimed at limiting the protections offered by Section 230, ostensibly in response to Twitter’s decision to add fact-checks to his recent tweets on voting by mail (in the text of the order, the president denounced Twitter’s decision as “selective censorship”).21 His successor, Joe Biden, raised the possibility of repealing Section 230 altogether during his election campaign.

For its part, the European Commission is drawing up legislation that will force tech giants to remove illegal content or face the threat of sanctions under a comprehensive “Digital Services Act” due to be unveiled at the end of 2021. Germany has introduced the German Network Enforcement Act (NetzDG), which forces large social media companies to review complaints and remove any content that is clearly illegal within 24 hours.

“A tougher regulatory environment is long overdue,” says Cobbe. “We are now acknowledging the reality these platforms play such an outsized role in society that they need to have some kind of responsibility, and need to be brought under some degree of control.

“The German Network Enforcement Act is a good example; it’s not so much targeted at the content itself, as the content is still regarded as the speech of individuals, but it does focus on what the platforms should be doing. We need to look at algorithms of platforms and how they disseminate conspiracy theories, hate speech and violent extremism through their recommendation systems,” she adds.

Facebook had to pay €2 million for under-reporting illegal activity on its platforms in Germany

Cobbe says regulations need to be enforceable and come with severe punishments to be effective. Facebook has already fallen foul of the NetzDG law: in July 2019 the company had to pay €2 million for under-reporting illegal activity on its platforms in Germany.22

This is a small sum in the context of Facebook’s global revenues ($85 billion in 2020), but other countries are seeking tougher punishments. In 2019, the Australian parliament passed the Sharing of Abhorrent Violent Material Act, introducing criminal penalties for social media companies, possible jail sentences of up to three years for tech executives, and financial penalties of up to ten per cent of a company’s global turnover. Based on their 2020 earnings, this would amount to $370 million for Twitter, $8.5 billion for Facebook and $1.8 billion for YouTube’s parent Alphabet.23

Commercial impact

The size of these figures shows that, quite apart from the moral impetus to act, hate speech poses a threat to these firms’ revenues. Even if they manage to avoid costly regulatory sanctions, it is likely they will have to invest much more heavily in content-management initiatives in the future, from improved automated systems to new armies of human moderators.

Any increase in R&D and labour costs may have a material impact on Facebook’s profit margins

As operating expenses among tech companies tend to be high even before these added outlays – over 40 per cent of total revenue in 2020, in Facebook’s case – any increase in R&D and labour costs may have a material impact on the company’s profit margins, says Piffaut.

Advertising boycotts could also have a growing impact over time, given ad sales make up the vast majority of social media companies’ revenues (see Figure 2), though the effect so far has been small. Facebook’s ad revenues actually increased during the boycott in July 2020, partly because its customers are mostly local, “mom-and-pop” businesses that did not participate in the walk-out. Facebook has eight million active advertisers, and the top 100 brands, including the largest companies involved in #StopHateForProfit, generated only six per cent of its total revenue in 2019.24

Figure 2: Leading source of revenue for tech companies (per cent)
Source: Facebook, Twitter, Alphabet, Apple, Amazon, Microsoft, March 2020

That’s not to say boycotts cannot force changes in Facebook’s handling of hate speech, however. YouTube’s response to its own ad boycott in 2017 may be a salient precedent.

“YouTube survived the exodus of advertisers, but only because it took fairly drastic action in response. Throughout 2017 and 2018 there was a wholesale shift in the content preferred by the platform, with regard to the allocation of monetisation rights, towards less controversial, more family-friendly content,” says Charles Devereux, ESG analyst at Aviva Investors.

Investor engagement

This example shows tech companies will respond when sufficient external pressure is applied, indicating investor engagement could bear fruit. And the fate of the tech companies is certainly an issue of increasing significance to investors, for financial as well as ethical reasons. The five biggest technology companies (Alphabet, Amazon, Apple, Facebook and Microsoft) accounted for about 22 per cent of the total market capitalisation of the S&P 500 as of May 1, 2021.25

With both moral and financial risks at stake, more investors are beginning to question the social media firms over content. Piffaut and Devereux set out a framework for engaging with these companies in key areas. They recommend that investors ensure social media firms are properly assessing how their operations affect human rights, developing more robust content policies in light of these principles, and demonstrating how those policies are enforced.

Investors should engage with social media firms to properly enforce their own rules

Investors should also engage with social media firms to improve internal accountability and provide more transparency as to their actions on hate speech. Investment in more sophisticated detection algorithms would lessen the burden on human moderators, the analysts say.

“Progress has been made, but not enough yet,” says Piffaut. “The issue is that the measures taken so far have been very reactive, rather than preventative. We are looking for higher investments in technology, and for companies to take ownership of this issue instead of outsourcing the solution.”

When devising an effective strategy for engaging with social media companies, investors need to be mindful of two further points. The first is that coordinated action is likely to be more effective than acting alone.

“Collaborative initiatives are important in this area,” says Borhaug. “When you are initially raising concerns with companies that may not even want to meet with you – and tech companies are infamous for not talking to investors – it can be helpful for investors to collaborate globally.”

Secondly, investors need to pay attention to the social and political context and maintain consistency in engaging with companies across borders. As well as pressure over hate speech, social media companies are facing calls to defend freedom of speech, and not only from hard-line US conservatives.

In July 2020, 150 global authors, academics and intellectuals from across the political spectrum signed an open letter published in Harper’s magazine, defending freedom of speech against what they called a “culture of illiberalism” on both right and left.26

While social media companies in the West face criticism for being lax in shutting down abusive content on their platforms, technology firms elsewhere may be too quick to restrict debate at the behest of authoritarian governments. In those countries, the onus is on investors to pressure companies to defend individual freedoms and maintain their access to information where possible.

Technology firms in developing markets may be too quick to restrict debate at the behest of authoritarian governments

“Across many emerging markets, social networking is dominated by Western companies: Instagram, Facebook, Twitter. Those where it isn’t tend to be countries where freedom of speech is not a policy priority, such as Russia and China,” says Alistair Way, head of equities at Aviva Investors.

“For social media companies in those markets, such as Tencent or TikTok owner ByteDance in China, or Mail.ru Group in Russia, we talk to them about how they manage the line between not falling foul of the government and giving users access to information and maintaining their rights. These issues are front and centre in our conversations with these firms,” he adds.

Reputational risk

Striking the right balance when engaging globally with social media companies over content management may be tricky, but investors must be willing to do this if they are to invest according to their own moral framework and defend the value of those investments. After all, if hate speech is allowed to flourish, calls will grow to rein in the social media firms with stronger regulation, and perhaps even to break them up.

Biden criticised Facebook's content-management policies and urged it to “move fast and fix it”

Cobbe argues the power of the big tech platforms has grown to such an extent that they are effectively unmanageable. She believes they should be scaled down so they can be properly overseen and regulated.

“If governments want to address hate speech or the other problems with these platforms, they need to address the structural problems of scale and power, and that’s where I would begin with trying to address this,” she says.

During the election campaign in 2020, Biden signalled he would take a tough line on Facebook. In June, he wrote an open letter to the company in which he criticised its content-management policies and urged it to “move fast and fix it”, referring to the problems of hate speech and misinformation; he and his supporters disseminated this slogan on the network.27

This follows a remark Biden made in 2019, when he told the Associated Press that breaking up the company “is something we should take a really hard look at”.28 However, in a blow to the antitrust crackdown, a recent ruling in Washington by Judge James Boasberg dismissed two cases against Facebook. He stated that the Federal Trade Commission's case was “legally insufficient” and “failed to plead enough facts to plausibly establish [a monopoly]”.

Despite all this, Biden’s own strategy illustrated the importance of the platform as an essential communications tool in modern politics. Biden spent $85.2 million on Facebook ads during the presidential campaign, slightly less than Trump’s $89.1 million (Biden outspent his rival on Google ads, spending $60 million, around $4 million more than Trump).29 After his victory, Biden hired former Facebook executives to run his transition team, although other senior members of his administration are vocal critics of the company.30

“This is very much a political issue, and there is no doubt social media companies will face increasing regulatory risk, as it is on the agenda for politicians – see the US congressional antitrust hearings in 2020. The key question is around the speed of change, and that is difficult to answer,” says Piffaut. (Read our in-depth feature on the potential antitrust threat to Big Tech).

Domino effect

Whether or not regulators decide to break up the larger tech companies, users could become disaffected by the increasingly poisonous atmosphere on social media platforms. This could create a negative feedback loop, whereby a decline in user engagement removes the incentive for companies to pay for ads over the long run.

New disruptors could emerge to nab user attention and the associated advertising cash

New disruptors could emerge to nab user attention and the associated advertising cash, much as Facebook dislodged one-time social media leader Myspace, a platform widely considered to have an unassailable monopoly as recently as 12 years ago.31 One reason Facebook was able to displace its rival is that it offered a more family-friendly alternative to Myspace, which struggled to filter out spam, phishing scams and unwholesome ads, in much the same way Facebook is failing to get a grip on hate speech today.32

There are already signs users don’t value Facebook particularly highly in monetary terms. This makes the pact between users and the platform – free access in exchange for personal data that will be used for commercial purposes – somewhat precarious, especially when compared with more diversified tech companies like Google, whose portfolio includes search engines and other tools such as maps.

Consider a recent study led by Erik Brynjolfsson, director of the Initiative on the Digital Economy at the Massachusetts Institute of Technology. It asked users how much they would have to be paid to give up various digital services for a year: respondents demanded an average of $17,500 to forgo search engines, but less than $600 to forgo Facebook.33

Businesses need to stay on the right side of the ‘value-for-money’ equation

“Even if a company looks like it has an unregulated monopoly, there is always a tacit societal contract that constrains how it can act and how much money it can make. Businesses need to stay on the right side of the ‘value-for-money’ equation,” says Giles Parkinson, global equities portfolio manager at Aviva Investors. “Compared with Google, Facebook elicits more of a shrug from users – they like it but don’t love it. This means that when regulatory or societal scrutiny comes to bear, Alphabet should find itself in a better position than Facebook.”

To grasp the risk, one only needs to consult Mark Zuckerberg himself, or at least his fictional alter ego in the 2010 biographical film, The Social Network. In a key scene, Zuckerberg frets about how quickly the fortunes of his company could turn: “Users are fickle,” he says. “Even a few people leaving would reverberate through the entire user base. The users are interconnected. That is the whole point. College kids are online because their friends are online. And if one domino goes, the other dominos go. Don't you get that?”

As hate speech threatens to trigger a domino effect among Facebook’s users, advertisers and investors, the real Zuckerberg would do well to heed the warning.

References

  1. ‘Discrimination in football on the rise,’ Kick It Out, September 3, 2020
  2. Tiffany Hsu and Eleanor Lutz, ‘More than 1,000 companies boycotted Facebook. Did it work?’, The New York Times, August 1, 2020
  3. Steven Melendez, ‘Before social media, hate speech spread by phone’, Fast Company, February 4, 2018
  4. Steven Melendez, ‘Before social media, hate speech spread by phone’, Fast Company, February 4, 2018
  5. ‘United Nations strategy and plan of action on hate speech’, United Nations Office on Genocide Prevention and the Responsibility to Protect, May 2019
  6. ‘Goal 16: Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels’, United Nations Department of Economic and Social Affairs Sustainable Development, 2020
  7. Zachary Laub, ‘Hate speech on social media: Global comparisons’, Council on Foreign Relations, June 7, 2019
  8. Billy Perrigo, ‘“A game of Whack-a-Mole”: Why Facebook and others are struggling to delete footage of the New Zealand shooting’, Time, March 16, 2019
  9. Liam Stack, ‘Facebook announces new policy to ban white nationalist content’, The New York Times, March 27, 2019
  10. Lois Beckett, ‘Twitter bans white supremacist David Duke after 11 years’, The Guardian, July 31, 2020
  11. Zachary Laub, ‘Hate speech on social media: Global comparisons’, Council on Foreign Relations, June 7, 2019
  12. Zeynep Tufekci, ‘Youtube: The great radicalizer’, The New York Times, October 3, 2018
  13. Facebook’s algorithm detected only 16 per cent of the cases of bullying and harassment cited in its latest report; the rest were identified through user reporting
  14. Zachary Laub, ‘Hate speech on social media: Global comparisons’, Council on Foreign Relations, June 7, 2019
  15. Elizabeth Dwoskin and Cat Zakrzewski, ‘Facebook’s own civil rights auditors say its policy decisions are a “tremendous setback”’, The Washington Post, July 8, 2020
  16. ‘Permanent suspension of @realDonaldTrump’, Twitter, January 8, 2021
  17. ‘Twitter’s algorithm does not seem to silence conservatives’, The Economist, August 1, 2020
  18. Birgit Jennen and Ania Nussbaum, ‘Germany and France oppose Trump’s Twitter exile’, January 11, 2021
  19. Jane Wakefield, ‘Facebook’s “Supreme Court” members announced’, BBC News, May 6, 2020
  20. ‘Facebook's Trump ban upheld by Oversight Board for now’, BBC News, May 6, 2021
  21. ‘Executive Order on Preventing Online Censorship’, The White House, May 28, 2020
  22. Thomas Escritt, ‘Germany fines Facebook for underreporting complaints’, Reuters, July 2, 2019
  23. Based on the companies’ own reporting for 2020
  24. Brian Fung, ‘The hard truth about the Facebook ad boycott: Nothing matters but Zuckerberg’, June 26, 2020
  25. S&P data
  26. ‘A letter on justice and open debate’, Harper’s Magazine, July 7, 2020
  27. Taylor Hatmaker, ‘Biden slams Facebook for letting Trump run wild and demands policy changes in open letter’, TechCrunch, June 11, 2020
  28. Hunter Woodall, ‘2020 hopeful Biden says he’s open to breaking up Facebook’, AP News, May 14, 2019
  29. Simon Dumenco and Kevin Brown, ‘Here’s what Biden and Trump have spent on Facebook and Google ads’, AdAge, October 30, 2020
  30. Nancy Scola and Alex Thompson, ‘Former Facebook leaders are now transition insiders’, November 16, 2020
  31. Victor Keegan, ‘Will Myspace ever lose its monopoly?’, The Guardian, February 8, 2007
  32. Felix Gillette, ‘The rise and inglorious fall of Myspace’, Bloomberg, June 22, 2011
  33. Zach Church, ‘How much are search engines worth to you?’, MIT, March 26, 2019

View The Tech Edition online

It has taken a global pandemic to make us see the value of living in the present, but it is just one example of seismic changes taking place that we must live with and adapt to, as AIQ: The Tech Edition explores.


Important information

THIS IS A MARKETING COMMUNICATION

Except where stated as otherwise, the source of all information is Aviva Investors Global Services Limited (AIGSL). Unless stated otherwise any views and opinions are those of Aviva Investors. They should not be viewed as indicating any guarantee of return from an investment managed by Aviva Investors nor as advice of any nature. Information contained herein has been obtained from sources believed to be reliable, but has not been independently verified by Aviva Investors and is not guaranteed to be accurate. Past performance is not a guide to the future. The value of an investment and any income from it may go down as well as up and the investor may not get back the original amount invested. Nothing in this material, including any references to specific securities, assets classes and financial markets is intended to or should be construed as advice or recommendations of any nature. Some data shown are hypothetical or projected and may not come to pass as stated due to changes in market conditions and are not guarantees of future outcomes. This material is not a recommendation to sell or purchase any investment.

The information contained herein is for general guidance only. It is the responsibility of any person or persons in possession of this information to inform themselves of, and to observe, all applicable laws and regulations of any relevant jurisdiction. The information contained herein does not constitute an offer or solicitation to any person in any jurisdiction in which such offer or solicitation is not authorised or to any person to whom it would be unlawful to make such offer or solicitation.

In Europe, this document is issued by Aviva Investors Luxembourg S.A. Registered Office: 2 rue du Fort Bourbon, 1st Floor, 1249 Luxembourg. Supervised by Commission de Surveillance du Secteur Financier. An Aviva company. In the UK, this document is issued by Aviva Investors Global Services Limited. Registered in England No. 1151805. Registered Office: 80 Fenchurch Street, London, EC3M 4AE. Authorised and regulated by the Financial Conduct Authority. Firm Reference No. 119178. In Switzerland, this document is issued by Aviva Investors Schweiz GmbH.

In Singapore, this material is being circulated by way of an arrangement with Aviva Investors Asia Pte. Limited (AIAPL) for distribution to institutional investors only. Please note that AIAPL does not provide any independent research or analysis in the substance or preparation of this material. Recipients of this material are to contact AIAPL in respect of any matters arising from, or in connection with, this material. AIAPL, a company incorporated under the laws of Singapore with registration number 200813519W, holds a valid Capital Markets Services Licence to carry out fund management activities issued under the Securities and Futures Act (Singapore Statute Cap. 289) and is an Exempt Financial Adviser for the purposes of the Financial Advisers Act (Singapore Statute Cap. 110). Registered Office: 138 Market Street, #05-01 CapitaGreen, Singapore 048946.

In Australia, this material is being circulated by way of an arrangement with Aviva Investors Pacific Pty Ltd (AIPPL) for distribution to wholesale investors only. Please note that AIPPL does not provide any independent research or analysis in the substance or preparation of this material. Recipients of this material are to contact AIPPL in respect of any matters arising from, or in connection with, this material. AIPPL, a company incorporated under the laws of Australia with Australian Business No. 87 153 200 278 and Australian Company No. 153 200 278, holds an Australian Financial Services License (AFSL 411458) issued by the Australian Securities and Investments Commission. Business address: Level 27, 101 Collins Street, Melbourne, VIC 3000, Australia.

The name “Aviva Investors” as used in this material refers to the global organisation of affiliated asset management businesses operating under the Aviva Investors name. Each Aviva Investors affiliate is a subsidiary of Aviva plc, a publicly traded multi-national financial services company headquartered in the United Kingdom.

Aviva Investors Canada, Inc. (“AIC”) is located in Toronto and is based within the North American region of the global organization of affiliated asset management businesses operating under the Aviva Investors name. AIC is registered with the Ontario Securities Commission as a commodity trading manager, exempt market dealer, portfolio manager and investment fund manager. AIC is also registered as an exempt market dealer and portfolio manager in each province of Canada and may also be registered as an investment fund manager in certain other applicable provinces.

Aviva Investors Americas LLC (“AIA”) is a federally registered investment adviser with the U.S. Securities and Exchange Commission. AIA is also a commodity trading advisor (“CTA”) registered with the Commodity Futures Trading Commission (“CFTC”) and is a member of the National Futures Association (“NFA”). AIA’s Form ADV Part 2A, which provides background information about the firm and its business practices, is available upon written request to: Compliance Department, 225 West Wacker Drive, Suite 2250, Chicago, IL 60606.