The release of ChatGPT has heralded an artificial intelligence boom. We assess the investment and ESG considerations.

Read this article to understand:

  • How generative AI technology is being used across sectors
  • The risks and opportunities for equity investors
  • The potential social disruption and other ESG-related risks

An ethereal black-and-white portrait that scooped a major photography prize.1 A faked image of Pope Francis wearing a Balenciaga bomber jacket.2 Software that can convert brainwaves into text.3 Step-by-step instructions on how to make risotto, kumquat cocktails – and napalm.4 The headline to this article.

These are all examples of work created by generative artificial intelligence (AI) tools, the best known of which is ChatGPT, a platform developed by Silicon Valley-based start-up OpenAI and released in November 2022. A so-called “large language model” (LLM) that draws on vast troves of data, ChatGPT has ushered AI into the public consciousness with bewildering speed.

AI technology is not new: it is present in everyday technologies, from jumbo jets to email. But ChatGPT has demonstrated the seemingly endless possibilities of a new form of generative AI, which can conjure text and images in response to user prompts. Companies across the world are now scrambling to integrate it into their products and services. Microsoft, which has a stake in OpenAI, led the way in adding an AI-powered chatbot into its Bing search engine. Google quickly followed with its equivalent, Bard.

This new era in AI is being greeted with excitement and trepidation in equal measure. While some sectors will benefit from productivity gains, other business models will come under pressure.

LLMs are already being used to create fake news and may amplify the power and reach of controversial technologies

Disruption is likely to be felt across wider society, too. LLMs are already being used to create fake news and may amplify the power and reach of controversial technologies such as facial-recognition algorithms. The invention of a near-costless method of crunching data and generating content may lead to redundancies in a range of service industries.

As regulators mull how to respond, some experts have called for a six-month moratorium on AI development, citing fears an all-powerful artificial general intelligence – capable of both world-changing innovations and catastrophic harms – could now be within reach.5

In this article, the AIQ editorial team brings together experts from across Aviva Investors to discuss the implications of generative AI. Joining the discussion: Francois de Bruin (FdB), global equity fund manager; Louise Piffaut (LP), head of ESG equity integration; Sora Utzinger (SU), head of ESG corporate research; Alistair Way (AW), head of equities; and Julie Zhuang (JZ), global equities fund manager.

Paraphrasing Amara’s Law, which refers to the impact of new technology, Bill Gates famously observed “we overestimate the change that will occur in the next two years and underestimate the change that will occur over the next ten”.6,7 Is this likely to prove true of generative AI?

JZ: Experts I speak to believe the AI opportunity is of a much greater order of magnitude than previous developments such as the personal computer or cloud computing. But it’s worth pointing out Big Tech firms have been working on LLMs for some time: like previous tech advancements, AI has been made possible by a series of innovations. It took the creation of the Internet and arrival of cutting-edge semiconductors to provide the wealth of data and computing power needed to train LLMs (see Figure 1).

A successful new technology first requires public awareness – and that’s what the release of ChatGPT last year enabled. The platform already has over 100 million users. We have passed the tipping point of awareness and entered the adoption phase, but companies are only scratching the surface of that opportunity. Microsoft is integrating LLMs into its Azure cloud computing platform, which it says will add one percentage point to its top-line growth.

Generative AI is largely being powered by the biggest tech firms that already have massive scale and reach

FdB: I think Bill Gates has it the wrong way around: people are potentially underestimating the impact that will happen in the next two years. We expect to see exponential growth in the near term, because generative AI is largely being powered by the biggest tech firms that already have massive scale and reach. And because this technology is based on information rather than physical assets, it can be scaled up further almost instantly.

At the same time, people are potentially overestimating where we will be in ten years. In the same way science fiction got it wrong when it imagined a future with flying cars, we're probably still a long way away from a sentient artificial general intelligence running everything.

LP: Whether or not we ever see a viable artificial general intelligence, the social impacts are already happening. We have concerns over the use of AI-powered facial recognition technology by governments and big companies for surveillance purposes, for example. And AI-related human rights abuses could be made worse by the distribution and expansion of ever-more-powerful AI tools.

Figure 1: Computation used to train notable AI systems, 2001-2023

Note: Floating-point operations (FLOP) are a measure of computing performance. One petaFLOP is equivalent to 10¹⁵ floating-point operations.
Source: Our World in Data, March 15, 20238

Big Tech firms are leading these developments. What are the implications for the balance of power within that industry?

JZ: The widespread take-up of AI is positive for Big Tech as a whole, rather than good for one firm and bad for another.

Microsoft is moving more quickly than Google but over time, all Big Tech companies should benefit

Google’s AI research capabilities are on a par with, or even better than, those of Microsoft’s partner OpenAI. But what Microsoft has done effectively is to start creating a bundled ecosystem of consumer products that integrate ChatGPT-style interfaces; it is moving more quickly than Google, for example by inviting the developer community to build an application layer on top of its base-layer technological infrastructure. As these products start to be released to customers, we expect Microsoft to gain market share. But, over time, all Big Tech companies should benefit.

FdB: A lot of the focus initially was on how generative AI could disrupt search. But using search engines to find information is already efficient. This is why Microsoft is looking at other ways to use AI, such as by integrating “co-pilots” into its software to improve the user experience across email, video calls and presentations. This kind of AI could lead to exciting efficiency gains.

Chinese firms have powerful AI capabilities. How could the progress of the technology in China differ from the West?

AW: China has struggled to catch up with Western firms when it comes to high-level semiconductor development – and with the recent round of legislative measures in the US, which contain tough sanctions against Chinese firms, it probably never will. Instead, it is devoting more and more energy into areas of technology where it can already credibly be described as the world leader. That means software and AI.

Companies in China are still likely to have to share consumer data with the government if ordered to do so

There is a huge amount of data in China, and there were historically far fewer restrictions on the ability of tech companies to use, exploit and own data without protections for consumers than in the West. Although Beijing has introduced laws to improve and modernise the data-protection regime since 2017, companies are still likely to have to share consumer data with the government if ordered to do so. This would give a Chinese government-sponsored AI programme a broader dataset with which to train machine-learning models than is available in the West.

Chinese AI development could go in good or bad directions. For instance, Baidu can claim to be a world leader in safe autonomous-driving technology – that looks a positive trend. Equally, Beijing is making progress in facial recognition and other tech-enhanced ways to monitor, control and exploit parts of society. That is clearly a negative outcome and will be accelerated by new developments in AI.

Which other sectors could benefit from generative AI tools? And where might the investment opportunities lie?

JZ: AI is already employed extensively in medical technology: around 40 per cent of all radiology devices approved by the US Food and Drug Administration are AI-enabled; that number was zero only seven years ago. That sector is likely to benefit from the availability of more-powerful AI tools.

AI is driving industrial automation and that is a big investment theme we play across different portfolios

Similarly, AI is driving industrial automation and that is a big investment theme we play across different portfolios. Many industrial bellwethers – Schneider, Siemens – stand to benefit as automation becomes more sophisticated.

IT services companies such as Accenture and Capgemini are the kinds of “handholding” firms that can help companies integrate AI into their businesses – they could also benefit. PwC recently announced a $1 billion deal with Microsoft, which it says will equip it with capabilities to help clients “reimagine their businesses through the power of generative AI”.9

FdB: Outsourced service providers are seeing a boost from AI. These firms can use AI for discovery (to better understand the customer’s problem), find the relevant information to develop a solution and log that information for reference if the issue arises again. One firm claims it has already been able to cut the time spent dealing with customer queries by 40 per cent thanks to AI.10

We could also see advantages in financial services. Our ability to gather market information across broad datasets, bring it into uniform decision-making platforms and compare it against benchmarks will improve thanks to AI. It could help improve risk management and enhance the customer experience by providing access to a wealth of real-time data.

AW: The gaming industry could be revolutionised. In developing a triple-A title, the big studios employ thousands of people to do mundane things: for example, designers spent years making the trees in Red Dead Redemption 2 look slightly more realistic. Studios are both concerned and excited by the potential of AI to speed up such tasks.

To do AI properly is astonishingly intensive in terms of computing power

You also need to think about the nuts and bolts of the hardware that goes into AI. To do AI properly is astonishingly intensive in terms of computing power, and competing at the cutting edge requires expensive high-end tech. That should help our investments in chipmakers such as TSMC, which is the most advanced foundry, and companies like ASML and ASMI, which supply parts to it. Similarly, the rise of AI should fuel demand for chips made by Nvidia and AMD, which computers need to undertake the kind of rapid parallel processing AI requires.

Which sectors could see disruption in the near term?

AW: Online travel agencies could see disruption, given that industry involves taking large amounts of data and interpreting relatively complex user preferences to come up with a compelling package. It is hard to see a situation where AI can't do a better job than humans in bringing that information together.

Online education could also be vulnerable. On May 2, 2023, shares across the sector fell when online-learning platform Chegg admitted its customers had been switching to ChatGPT, which can provide a similar service for free.11

If cutting-edge AI models require additional computing power, what could be the environmental impacts?

AW: The environmental impacts are a definite concern. Semiconductor foundries consume huge amounts of energy and water, which is a problem in a drought-ridden country like Taiwan. Parts of the US where the government is encouraging investment in chipmaking, like Arizona, are experiencing similar pressures.

SU: I'm concerned about the implications for carbon emissions (see Figure 2). Now the technology is out there, it is not something we will be able to row back from. The prospect of efficiency gains has been cited as a way of offsetting some of the extra energy consumption required, but I’m sceptical.

Efficiency gains just make it cheaper for people to consume more: more resources, more energy

What we have seen through history is that efficiency gains just make it cheaper for people to consume more: more resources, more energy. Data use per household has been rising steadily over the past decade. The Internet consumed approximately 800 terawatt-hours (TWh) of electricity in 2022; with AI taking a huge leap forward in 2023, it is not implausible that the Internet’s energy demand could double by 2030. Given where we are in terms of emissions increases, the environmental impact of AI could become a major flashpoint in climate negotiations.
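To put that scenario in perspective, the implied growth rate can be worked out with simple compound-growth arithmetic. The sketch below is illustrative only: the 800 TWh figure is the 2022 estimate quoted above, and the doubling by 2030 is a hypothetical scenario, not a forecast.

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate needed to move from start to end over the period."""
    return (end / start) ** (1 / years) - 1

base_2022 = 800.0      # TWh, approximate Internet electricity use in 2022 (quoted above)
doubled_2030 = 1600.0  # TWh, the hypothetical doubling scenario
rate = implied_cagr(base_2022, doubled_2030, years=8)
print(f"Implied growth rate: {rate:.1%} per year")  # roughly 9 per cent a year
```

A doubling over eight years therefore corresponds to sustained annual growth of around 9 per cent, well above the historical trend for most mature infrastructure.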

Figure 2: CO2 equivalent emissions from training of selected machine-learning models and real-life examples (tonnes)

Note: Car = lifetime average including fuel; Human life (US) = 1 year average; Human life (Global) = 1 year average; Air travel = 1 passenger, NY–SF.
Source: Stanford University, 202312

Given the hype and investment going into AI start-ups, is there a risk of a dot.com-style boom-and-bust cycle?

JZ: We expect a lot of growth and development in the application layer in the short term, as companies integrate AI and offer it to their own users, whether that’s for content marketing or protein research in healthcare. This is a watershed moment and companies need to seize it, so a certain amount of hype is understandable. But three-to-five years out there may be a cooling of interest as awareness of the social and environmental risks improves and regulation catches up.

AW: There is lots of investment and not all of it will pay off. At times of technological revolution, it is rational for investors to try to back a start-up that could be worth billions; equally, many of these firms will fail. But that doesn’t mean the overall pie won’t grow dramatically.

We are more confident generative AI is going to create lasting change

Currently, the identity of the winners looks uncertain, but the fact this is being driven by large, established companies gives the technology more legitimacy. Compared with other recent innovations, such as cryptocurrencies or the metaverse, we are more confident generative AI is going to create lasting change.

FdB: To be successful in AI you need scale, so that suits incumbents such as Microsoft, Google and Amazon. So far, the effect on share prices does not compare with the boom periods in these companies’ own histories. It could be that the wider capital markets impact is not hugely meaningful in the short term, given the rewards are most likely to accrue to large established players. But it’s still early days.

We supported a recent Investor Alliance for Human Rights letter to the European Commission in response to the EU AI Act (which is currently being drafted), citing the need for better regulation of AI development. What are the key human rights risks and where should regulators focus their efforts?

LP: Human-rights risks can potentially arise at different stages of the value chain in the development and use of AI products. For example, there may be a bias against a particular social group that is rooted in the coding of the AI, or in the dataset the model was trained on. Additionally, the product may be used in such a way that infringes on human rights. For example, AI-enabled facial-recognition technology is sometimes used in public places where people have no “opt out” – that could lead to potential abuses.

The precautionary principle would prevent the use of AI unless there is confidence the harms can be mitigated

Lawmakers must exercise caution. The precautionary principle would prevent the use of AI unless there is confidence the harms can be mitigated. That’s not to say we can’t have responsible innovation, but we need to make sure human-rights risks are better managed. That requires better understanding of unintended consequences by companies, regulators and industry bodies.

Regulation also needs to be outcome-seeking, with a focus on human-rights protections, because technology develops and evolves relatively quickly, resulting in loopholes companies can exploit.

AI technology could also bring other forms of social disruption, such as job losses or AI-generated fake news. What are the other ESG considerations?

LP: LLMs could be used for social manipulation. Even before this form of AI was developed, tech platforms were implicated in issues such as challenges to democracy and increasing polarisation of society. AI could exacerbate these negative impacts. (Figure 3 shows the growing number of AI-related controversies involving privacy, surveillance and fake news.)

AI models will often replicate racism and other biases present in the datasets they are trained on

Social biases embedded in new technology are a huge problem. AI models will often replicate racism and other biases present in the datasets they are trained on. If this is not addressed, it could worsen discrimination in recruitment, education and access to finance – there are already big problems with bias in the way credit ratings are calculated, for instance.

Job security is another important aspect. We often speak of a “just transition” to a low-carbon future. But we also need a just transition to a world driven by new technologies. We expect companies that develop AI, and those that use it, to communicate their plans to stakeholders, especially where they involve a shift in workforce needs.

Figure 3: AI incidents and controversies (2012-2021)

Source: AIAAIC, Stanford University, 202313,14

What about potential social benefits?

SU: One potential positive use is in healthcare. There is huge scope for AI to speed up the drug-discovery process, and there will be other applications in this area as well.

Previously, companies using AI applications tended to focus on the design of the AI algorithm and how it could be tailored to their purposes. Now that the accessibility of ChatGPT and similar platforms has resulted in a “levelling out” of AI, attention has shifted to which companies hold large datasets that AI could help them make better use of. Take a firm like 23andMe, which has been collecting genetic information for many years: that company is likely to be at the forefront of an effort to marry large datasets with AI capabilities.

Greater AI utilisation in healthcare could create opportunities for better disease prevention and detection

Of course, this unleashes ethical questions. But the optimist in me would argue greater AI utilisation in healthcare could create opportunities for better disease prevention and detection, and a shift to a much more personalised healthcare model. That, in turn, could move us to a more viable healthcare system, one premised on prevention rather than acute treatment, which is what we have now and costs huge amounts of taxpayer money. If you can work upstream with primary care physicians and healthcare practitioners on prevention, and use AI to your advantage, the upside could be significant, not just in terms of lives saved but also in improvements to people’s quality of life.

References

  1. Paul Glynn, “Sony World Photography Award 2023: Winner refuses award after revealing AI creation,” BBC News, April 18, 2023
  2. Kalley Huang, “Why Pope Francis is the star of AI-generated photos,” The New York Times, April 8, 2023
  3. Hannah Devlin, “AI makes non-invasive mind-reading possible by turning thoughts into text,” The Guardian, May 1, 2023
  4. Dan Milmo and Alex Hern, “From pope’s jacket to napalm recipes: how worrying is AI’s rapid growth?,” The Guardian, April 23, 2023
  5. Ian Hogarth, “We must slow down the race to godlike AI,” Financial Times, April 13, 2023
  6. Trevor Green, “Amara’s Law: Applying an old adage to new technology,” Aviva Investors, October 1, 2019
  7. Bill Gates, afterword to “The Road Ahead”, Penguin, 1996
  8. “Computation used to train notable AI systems, by affiliation of researchers”, Our World in Data, March 15, 2022
  9. “PwC US makes $1 billion investment to expand and scale AI capabilities,” PwC, April 26, 2023
  10. “Quarterly information at March 31, 2023,” Teleperformance, April 25, 2023
  11. Bethan Staton, “Education companies’ shares fall sharply after warning over ChatGPT,” Financial Times, May 2, 2023
  12. Nestor Maslej, et al., “The AI Index 2023 annual report”, AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023
  13. “AIAAIC repository”, AIAAIC Repository, 2023
  14. Nestor Maslej, et al., “The AI Index 2023 annual report”, AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023

Important information

THIS IS A MARKETING COMMUNICATION

Except where stated as otherwise, the source of all information is Aviva Investors Global Services Limited (AIGSL). Unless stated otherwise any views and opinions are those of Aviva Investors. They should not be viewed as indicating any guarantee of return from an investment managed by Aviva Investors nor as advice of any nature. Information contained herein has been obtained from sources believed to be reliable, but has not been independently verified by Aviva Investors and is not guaranteed to be accurate. Past performance is not a guide to the future. The value of an investment and any income from it may go down as well as up and the investor may not get back the original amount invested. Nothing in this material, including any references to specific securities, assets classes and financial markets is intended to or should be construed as advice or recommendations of any nature. Some data shown are hypothetical or projected and may not come to pass as stated due to changes in market conditions and are not guarantees of future outcomes. This material is not a recommendation to sell or purchase any investment.

The information contained herein is for general guidance only. It is the responsibility of any person or persons in possession of this information to inform themselves of, and to observe, all applicable laws and regulations of any relevant jurisdiction. The information contained herein does not constitute an offer or solicitation to any person in any jurisdiction in which such offer or solicitation is not authorised or to any person to whom it would be unlawful to make such offer or solicitation.

In Europe, this document is issued by Aviva Investors Luxembourg S.A. Registered Office: 2 rue du Fort Bourbon, 1st Floor, 1249 Luxembourg. Supervised by Commission de Surveillance du Secteur Financier. An Aviva company. In the UK, this document is issued by Aviva Investors Global Services Limited. Registered in England No. 1151805. Registered Office: 80 Fenchurch Street, London, EC3M 4AE. Authorised and regulated by the Financial Conduct Authority. Firm Reference No. 119178. In Switzerland, this document is issued by Aviva Investors Schweiz GmbH.

In Singapore, this material is being circulated by way of an arrangement with Aviva Investors Asia Pte. Limited (AIAPL) for distribution to institutional investors only. Please note that AIAPL does not provide any independent research or analysis in the substance or preparation of this material. Recipients of this material are to contact AIAPL in respect of any matters arising from, or in connection with, this material. AIAPL, a company incorporated under the laws of Singapore with registration number 200813519W, holds a valid Capital Markets Services Licence to carry out fund management activities issued under the Securities and Futures Act (Singapore Statute Cap. 289) and is an Exempt Financial Adviser for the purposes of the Financial Advisers Act (Singapore Statute Cap. 110). Registered Office: 138 Market Street, #05-01 CapitaGreen, Singapore 048946.

In Australia, this material is being circulated by way of an arrangement with Aviva Investors Pacific Pty Ltd (AIPPL) for distribution to wholesale investors only. Please note that AIPPL does not provide any independent research or analysis in the substance or preparation of this material. Recipients of this material are to contact AIPPL in respect of any matters arising from, or in connection with, this material. AIPPL, a company incorporated under the laws of Australia with Australian Business No. 87 153 200 278 and Australian Company No. 153 200 278, holds an Australian Financial Services License (AFSL 411458) issued by the Australian Securities and Investments Commission. Business address: Level 27, 101 Collins Street, Melbourne, VIC 3000, Australia.

The name “Aviva Investors” as used in this material refers to the global organization of affiliated asset management businesses operating under the Aviva Investors name. Each Aviva Investors affiliate is a subsidiary of Aviva plc, a publicly-traded multi-national financial services company headquartered in the United Kingdom.

Aviva Investors Canada, Inc. (“AIC”) is located in Toronto and is based within the North American region of the global organization of affiliated asset management businesses operating under the Aviva Investors name. AIC is registered with the Ontario Securities Commission as a commodity trading manager, exempt market dealer, portfolio manager and investment fund manager. AIC is also registered as an exempt market dealer and portfolio manager in each province of Canada and may also be registered as an investment fund manager in certain other applicable provinces.

Aviva Investors Americas LLC is a federally registered investment advisor with the U.S. Securities and Exchange Commission. Aviva Investors Americas is also a commodity trading advisor (“CTA”) registered with the Commodity Futures Trading Commission (“CFTC”) and is a member of the National Futures Association (“NFA”). Aviva Investors Americas’ Form ADV Part 2A, which provides background information about the firm and its business practices, is available upon written request to: Compliance Department, 225 West Wacker Drive, Suite 2250, Chicago, IL 60606.