Machine learning: the big risks and how to manage them – Financial Times

Algorithmic trading has been prevalent in equities trading for more than two decades. It is now well entrenched in fixed income, too.

It has created new opportunities by speeding up execution of orders, cutting costs and increasing volumes. But it has also introduced new hazards for market participants and created the occasional “flash crash”. We have also seen trading algorithms being programmed to manipulate markets.

Now we face an even bigger challenge — machine learning. This technology, centred on computer models that can learn from experience, poses serious challenges and requires a global response.

The Bank of England and the UK Financial Conduct Authority recently published a survey of banks and capital markets firms which found that about half of respondents use machine learning in a modest way today. Most expect to make much greater use of it over the next few years.

Today, machine learning is deployed mainly in back-office functions such as anti-money laundering, fraud detection and credit risk management. It is not currently used much in front-line trading functions, but we expect material change in this area over the next few years. We therefore have the responsibility to consider potential risks and to mitigate them, as far as possible.

Unlike traditional rules-based algorithms, machine-learning algorithms are not static engines, programmed to run only along the paths created by their human programmers.

Instead, they use massive data sets and enormous computational power to recognise patterns, train themselves, and make decisions about when and how to trade without human intervention.

This is a transformative moment for those trading in financial markets. It will bring great opportunities, but it will also create new hazards that we simply have not had to think about before. Here are four to consider.

First, what we call “model drift”. Machine-learning trading engines learn for themselves how to create prices by repeated and constantly evolving experimentation. In this optimisation process it becomes hard, or even impossible, to trace how decisions are made. It is therefore very difficult to prevent undesirable outcomes in advance, or to correct them afterwards.

These concerns over transparency explain the present regulatory focus on model risk management and software validation, as well as questions about how company boards can satisfy themselves that they have an appropriate level of understanding about what is going on inside the “black box”.

Second, bias. Machine learning creates the potential for unexpected or unfair changes in pricing or liquidity for certain types of market users, or even for individual customers — as a result of factors that are impossible to uncover because they lie, effectively undiscoverable, in the heart of the optimisation engine. Unless the machine chooses to tell you its secrets, you will never know why it did something.

More worrying, perhaps, is that a machine optimising on its own will probably find that unethical, manipulative trading practices are more profitable. How do we ensure that the machine understands not just the law and regulatory rules, but also concepts of right and wrong?

Third, market concentration. The way in which machine learning models improve by accessing increasing quantities of data is likely to create network effects, where a small number of data providers effectively control access. That may, in turn, throw up high barriers to entry.

Such barriers could entrench the power of today’s large banks and financial services firms or, alternatively, allow technology-based competitors to create new oligopolies at the expense of today’s financial sector. 

Either way, the consequences of concentrated market structures need careful thought. Participants in the market could be disadvantaged through unfair rationing of liquidity and skewed pricing.

Fourth, the skills gap. There is a massive shortfall of expert programmers, data scientists and risk managers needed to safely develop, test and implement machine learning in financial markets.

This is just as much the case in the private sector as it is among central banks and market regulators. It creates a significant knowledge gap in the boardrooms of financial services firms and within policymaking institutions about the challenges and hazards posed by machine learning.

Given the international nature of financial markets, these are all challenges that need to be properly considered and addressed at a global level. 

The complexity of the issues raised also makes collaboration between public authorities and the private sector essential. A fragmented approach could lead to a trading environment where no one truly knows what the black box is going to do. That is a risk for everyone, not just wholesale markets in one location or another. 

The writer is chairman of the Fixed Income, Currencies and Commodities Markets Standards Board
