AI could be as risky as crypto, says Fidelity’s Stotzel
The lack of regulation around AI companies could backfire on investors, just as it did in crypto.
The lack of regulation around artificial intelligence (AI) could make it as risky as cryptocurrency, according to Fidelity European manager Marcel Stotzel.
Investors are rightly excited by the potential applications of AI, but its rapid advancements could backfire if regulators do not put guidelines in place soon, the fund manager has warned.
Today’s enthusiasm for the nascent industry reminds Stotzel of the early days of cryptocurrency, when overly keen investors were let down as the lack of regulation ultimately derailed performance.
“The regulators could have done more [about crypto] earlier, but at the same time, it’s a new tech and they don’t want to overregulate, which is particularly the case in an area like AI”, he said.
“You don’t want to lose competitiveness but if we use crypto as a benchmark, the regulation could come too late and it might already have damaged consumers, companies and society. That’s something we worry about.”
Stotzel also drew parallels with the clean energy transition – many governments now enforce carbon targets on companies, but markets had been urging them to act far earlier.
Businesses with AI software would benefit from guardrails sooner rather than later, but Stotzel said there is a risk that regulators could again leave it too late.
“It reminds me of the early stages of climate change,” he added. “It would have been great if 20 years ago our industry had insisted on climate goals for each company rather than waiting 20 years for governments to regulate it.
“There’s room to do that now but the difference is I feel like AI companies actually want to be regulated. They themselves are crying out for a common set of standards and I’m not saying we should do the job for the regulator, but we can definitely do some of the legwork.
“If they wait even a year or two to address it, this battle might be won or lost by the time they eventually start looking at it.”
Some might argue that regulation could stifle innovation in this fast-growing field, but Stotzel said that even the leading AI companies see the benefits of stricter guidelines.
A study by the Centre for the Governance of AI in May presented a list of 50 safeguarding practices to AI experts, including some from Google DeepMind and Anthropic, which 98% of respondents agreed with.
These AI companies are already anticipating the environmental, social and governance (ESG) risks they will face if regulation is not implemented, according to Stotzel.
He said: “Just look at all the certifications that a plane has to go through before it can fly or the trials a drug has to go through before you can sell it. I’m not saying we should stifle innovation and shut everything down, we just need some guardrails.
“These leading AI companies – Facebook, Google, Apple, Nvidia – are not stupid and they know a blowback is coming if there aren’t guardrails in place. That doesn’t seem outrageous to me, but very few people in our industry are talking about it, which I think is a shame.”
Whilst many large AI companies have been on the front foot in encouraging regulation, David Coombs, head of multi-asset investments at Rathbones, was more sceptical of their intentions.
He said the likes of Microsoft and Alphabet, which have a significant head start in AI, want harsher regulation because it makes it harder for new competitors to enter the field.
“Regulation is brilliant for the guys who have got the AI technology already like Google and Amazon because it stops any new entrants,” Coombs explained.
“It’s classic bullying in the playground – ‘we’ve got it, we don’t want anyone else getting it, so let’s get the regulators involved and scare everybody’. It’s in their interest to pull the drawbridge up after them. They say they’re really worried about humanity, but I think they’re digging their moat and building a wall.”
Coombs previously explained why he had dropped exposure to AI companies despite their strong performance this year, fearing they could come crashing down.
He was not the only concerned manager – Microsoft was one of Bankers Investment Trust’s best performing stocks over the past year, but manager Alex Crooke said “some of that enthusiasm will wane as reality kicks in”.
He explained that a narrow set of companies were “turbocharged” by retail investors captivated by the AI story, but that could reprice when the economy encounters difficulties.
Like Coombs, Schroders’ head of sustainable investment research Angus Bauer acknowledged concerns around over-regulation, although it wasn’t his main focus when screening AI for risks.
“I am very aware of the arguments that exist around regulatory capture,” he said. “If we over-regulate, then you create a cartel-like or monopolistic structure that only allows a certain number of large organisations to dominate the market, so one needs to be very careful on regulation.”
Instead, Bauer found more sizable risks on the ESG front in the power consumption of AI tools. Google alone consumes a “massive” amount of energy to keep its AI projects running, he said.
Machine learning accounts for 10-15% of the company’s energy consumption, equivalent to around 2.3 terawatt hours of demand.
“That’s effectively all of the homes in Atlanta for a year, so it’s huge,” Bauer explained. “From an operational perspective, training these large language models is a really big incremental energy consumption drag.”
One of the other pressing risks that investors want regulators to tackle is around human capital. Many are concerned that the efficiency of AI could cost people their jobs, but Bauer said that disruptive technologies usually create more jobs than they replace.
“There is a very striking innate capacity in humans to create new demand, services and opportunities, so it’s quite a big statement to say AI is going to be different this time and permanently remove all of these jobs,” he added.
“Simplistically, the greater the number of service industry business models that one creates, the greater the number of additional roles that are created to support those different business models. That is how it has played out historically.”
Regulation could therefore focus on guiding companies on how to retrain their workforces for the new roles created by AI.
“The introduction of AI can potentially help alleviate some of the labour shortages that certain countries are facing, provided effective training and reskilling mechanisms are in place, which needs an awful lot of policy support,” Bauer said.
“We can’t automatically assume that people can just work out how to do these jobs. We do need reskilling.”
On whether regulators are taking too long to draw up AI safeguards, Bauer said that “the regulatory agenda and environment is constantly changing,” especially in Europe.
Governments around the world are rapidly building their own AI regulations, with the EU’s Artificial Intelligence Act at the most advanced stage.
The legislation, first proposed in April 2021, seeks to categorise and address the risks associated with AI and is still being developed by the European Parliament.
“In Europe, you have a very ambitious EU Artificial Intelligence Act, which is a pretty stringent legal framework that is trying to focus on data quality, transparency, human oversight and accountability,” Bauer said.
“When you step back and think about just how ambitious the European regulation is, it could potentially be similar in scale to GDPR from a scope perspective, so that’s really quite bold.”