Testing times: Trading systems are out of sync with new regulations

 
Nick Hammond

No one likes to talk about the IT infrastructure challenges at a bank – it would be a long conversation and largely focused on the amount of work to do.

But with new regulations that have come into force this year, firms cannot afford to be unclear about what applications they have, where those applications sit within the infrastructure, and who to talk to when something goes wrong. There's a clear need to know your environment, now more than ever.

For instance, one key part of the EU's second Markets in Financial Instruments Directive (MiFID II) requires that testing and production environments for algorithmic trading be separated. Software should be tested in a controlled environment, so that any bugs in the system do not affect production. Once the system has been tested and approved, it can then be deployed into production.

Separating the two means that there is a gap, preventing any changes in the development process from causing problems in production, while ensuring that incremental updates are delivered in a controlled manner.
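In practice, that separation is often enforced by a gate in the release process: a build can only be promoted if it was tested somewhere other than production and then signed off. The sketch below is purely illustrative – the `Build` record, environment names, and approval flag are all hypothetical, not anything mandated by MiFID II itself:

```python
# Hypothetical sketch of a deployment gate enforcing test/production
# separation: a build may only reach production after it has been
# exercised in an isolated test environment and explicitly approved.
from dataclasses import dataclass, field


@dataclass
class Build:
    version: str
    tested_in: set = field(default_factory=set)  # environments where tests ran
    approved: bool = False                       # sign-off after testing


def promote_to_production(build: Build) -> bool:
    """Allow promotion only if the build was tested in an environment
    strictly separate from production, and then signed off."""
    tested_separately = (
        "test" in build.tested_in and "production" not in build.tested_in
    )
    return tested_separately and build.approved
```

A build tested only in the test environment and then approved passes the gate; anything that touched production during testing, or that lacks sign-off, is blocked – which is exactly the "gap" between development and production the regulation is after.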

There’s also the requirement to be able to stress test the scalability of systems. Given the monetary volumes involved, there’s a clear need to close any backdoors and stop any accidental code updates finding their way into production.

But despite signing up to these new rules, many firms have been unable to prove that their systems are tested and operating properly, or that testing and operational environments are completely separate – and this could risk financial disaster, as well as regulatory wrath.

In recent years, financial services providers have suffered a host of expensive problems, usually from glitches that have impacted the wider system.

One of the biggest was in 2012, at Knight Capital – formerly regarded as one of the best market-making firms in the US, with the tech to match.

Engineers deployed new software, but old code left on one server activated an outdated algorithm that went haywire, buying high and selling low. The company lost $460m in 45 minutes – losses that destroyed it as an independent firm before the system could be switched off.

Why this algorithm had been left dormant within the system at all is not known.

The trouble is that many financial companies rely on hefty, complex, and outdated IT systems with federated accountability. It's also very rare to see a clear real-time picture of system interdependencies. This makes it very easy to miss or botch application and infrastructure updates.

To make matters worse, it’s typical for applications and infrastructure to be managed – and changed – by different teams.

Changes take place independently, and when something does go wrong (which happens surprisingly often), there is a fire drill where all teams have to jump on a call and figure out who changed what.

As a result, the vast majority of established companies now rely on layers of management systems to build a picture of what is happening in real-time on shared infrastructure.

Trying to take one complex application or algorithm out of the mix without impacting something else is like attempting to pull an egg out of an omelette.

The requirements under MiFID II to separate development and production are only the tip of the iceberg when it comes to testing system interdependencies within financial services.

To prepare for the changes that new regulations require, financial firms need to strip back the walls and have a good look at the pipes – to check the bathroom isn’t leaking into the kitchen.

A real-time, living picture of the entire system needs to be mapped out, tracing every communication and interdependency; otherwise it is impossible to test how a new application or algorithm will behave within the existing system.
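The value of such a map is that it answers the impact question directly: change one component, and what else is touched? A toy sketch (all component names hypothetical, not drawn from any real trading stack) of that idea:

```python
# Toy dependency map: record which systems consume each component's
# output, then walk the graph to find everything a change could
# ripple into, directly or transitively.
from collections import deque


def impacted_by(dependents: dict, component: str) -> set:
    """Return every component that directly or transitively depends
    on `component` - i.e. the blast radius of changing it."""
    seen, queue = set(), deque([component])
    while queue:
        current = queue.popleft()
        for downstream in dependents.get(current, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen


# dependents[x] = systems that consume x's output (illustrative names)
dependents = {
    "market-data-feed": ["pricing-engine", "risk-monitor"],
    "pricing-engine": ["algo-trader"],
    "algo-trader": ["order-gateway"],
}
```

Here, changing the market data feed ripples through pricing, risk, the trading algorithm, and the order gateway – four systems a team might not realise they were touching without the map.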

In many cases, it’s actually easier to re-platform the entire application on a new infrastructure platform, or even externally with a cloud provider.

Of course, this is not only a technology problem, nor just an issue for network managers and IT teams to worry about: MiFID II's stipulation on algorithms is broad, and in some cases even an Excel spreadsheet can fall within its scope.

Teams working on applications and algorithms may not want to talk about IT infrastructure, but the current levels of independence between different teams cannot continue if programmes are to be properly tested and safely put into production.

If strict processes aren't in place to protect against dangerous changes, systems will remain vulnerable to regulatory breaches and security failures – no matter how secure their owners believe they are.

Financial firms need to have the correct processes in place, and the right people to oversee them.

But to adapt to the current regulatory climate in an agile way, compliance officers need to talk about infrastructure, and get a clear sense of what is going on in their systems, before putting the most appropriate solution in place.
