Stock exchanges need robust defences against repeated tech errors
Did anyone else get a sense of déjà vu at the sudden outage on the London Stock Exchange (LSE) last week? From Euronext in 2021 and Nasdaq’s Nordic markets in 2022 to the New York Stock Exchange (NYSE) cancelling trades earlier this year, after “technical glitches” in its opening auction triggered wild price swings, outages have become a recurring feature of recent years. In every case, investors have been able to respond only on the basis of what the exchanges were willing to communicate.
Last week’s incident was no different. While it is still under investigation, it is clear, as surely as night follows day, that glitches are now a recurring nightmare, creating challenges for regulators and members and, ultimately, eroding trust in the market. Whenever one occurs, global regulators issue the inevitable knee-jerk pledge to monitor future incidents more closely, insofar as they affect trade reporting obligations and wider market integrity.
This is often followed by a period of “reviewing existing processes” to understand what went wrong, before measures are enforced to reduce the risk of a recurrence. But there are already rules in place, which the exchange community is willing to abide by, so it is hard to see what these reviews will achieve. However many take place, they are not preventing technical glitches from occurring.
After all, these glitches should, in theory, be taking place far less frequently as a result of industry-wide technological advances, not to mention the copious amounts of capital invested every year to minimise the impact of outages.
The answer may be that humans cannot keep up with the rate at which technology is advancing. Human error is a common factor in technology failures: mistakes made by staff during routine maintenance, updates or configuration changes can inadvertently cause outages. Stock exchanges also operate highly complex technology systems that handle massive volumes of trades. While many exchanges have redundancy and failover systems in place, these mechanisms can themselves fail or not function as expected, leading to outages. As we saw in the case of the NYSE at the start of this year, it is not as easy as someone flipping a switch to get things up and running again.
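To make the point concrete, here is a deliberately schematic sketch, in Python, of the kind of heartbeat-based failover logic such redundant systems rely on. Every name, timeout and step here is hypothetical, and real matching engines are vastly more involved; the point is simply that the watchdog itself embodies assumptions (a timeout value, a healthy standby) that can be wrong, which is precisely how "redundant" designs end up failing anyway.

```python
import time

# Illustrative only: a hypothetical watchdog that promotes a standby
# matching engine when the primary stops sending heartbeats. The timeout
# and the promotion step are themselves potential points of failure.

HEARTBEAT_TIMEOUT = 3.0  # hypothetical: seconds of silence before failing over


class Engine:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def beat(self):
        self.last_heartbeat = time.monotonic()

    def is_alive(self):
        return time.monotonic() - self.last_heartbeat < HEARTBEAT_TIMEOUT


def check_and_failover(primary, standby, active):
    """Return the engine that should be active after this check."""
    if active is primary and not primary.is_alive():
        if standby.is_alive():
            print(f"failing over from {primary.name} to {standby.name}")
            return standby
        # Both silent: the "redundant" design has no answer here.
        print("primary and standby both unresponsive; manual intervention needed")
    return active


primary, standby = Engine("primary"), Engine("standby")
active = primary
time.sleep(3.1)   # simulate the primary going silent
standby.beat()    # the standby is still healthy
active = check_and_failover(primary, standby, active)
```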
Our best hope may be that computers will eventually become smart enough to maintain themselves. With the ageing infrastructure that exchanges currently rely on, this is hard to envisage. In some cases, venues are underpinned by decades-old technology that is far more susceptible to failure, particularly as more and more trading firms seek to execute ever-larger numbers of equity transactions in milliseconds in order to turn a profit.
Upgrading and maintaining legacy infrastructure is a costly process. But on top of the already high annual commitment to technology spend, this has to be the answer to being better prepared. Technology will break from time to time, but there is no excuse for not upgrading existing infrastructure. Backups, automated alerts and other safeguards are prevalent across other sectors to keep system administrators apprised.
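For the avoidance of doubt about what "automated alerts" means in practice, here is a minimal, purely illustrative sketch: a live metric is compared against a threshold and an administrator is notified before a degradation becomes an outage. The metric, the limit and the notify() target are hypothetical placeholders, not any exchange's actual tooling.

```python
import statistics

ORDER_LATENCY_LIMIT_MS = 5.0  # hypothetical service-level threshold


def notify(message):
    # Placeholder: in practice this would page an on-call engineer
    # through whatever alerting system the venue runs.
    print(f"ALERT: {message}")


def check_latency(samples_ms):
    """Alert if the median recent order-processing latency breaches the limit."""
    median = statistics.median(samples_ms)
    if median > ORDER_LATENCY_LIMIT_MS:
        notify(f"median order latency {median:.1f}ms exceeds "
               f"{ORDER_LATENCY_LIMIT_MS}ms limit")


# A degraded run of latency samples: the alert fires well before a halt.
check_latency([1.2, 1.4, 1.3, 9.8, 11.2, 10.5, 12.0])
```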
It beggars belief that this is not the case for an industry in which, thanks to frequent spikes in trading volumes often triggered by unexpected events, the risk of an exchange outage is far higher. And when these events do happen, the consequences extend beyond capital markets: the broader economy is often affected, because many of the shares that cannot be traded are sitting in someone’s pension pot.
All of this underscores the need for continuous improvement in underlying exchange infrastructure and risk management. After all, the major exchanges have an equity market capitalisation of upwards of $20tn, so it is high time they flexed their financial muscle to ensure their technology is fit for the 2020s rather than the 1990s.