Probably the best place to start an analysis of artificial intelligence and machine learning is to define exactly what these terms mean and why they differ from what we would traditionally call ‘expert systems’. There’s a good definition provided by the Commodity Futures Trading Commission in their paper “A Primer on Artificial Intelligence in Financial Markets”, which can be found here.
Expert Systems
Rules-based, hard-coded algorithms. Developers provide the machine with a roadmap of anticipated input and directed output. Examples: First generation chess-playing machines and “domain expert” programs.
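The defining trait of an expert system is that every rule is written by a human, so the mapping from input to output is fully transparent. A minimal sketch in Python, using hypothetical credit rules invented purely for illustration:

```python
# A minimal "expert system": every rule below is hand-written by a
# developer -- the 'roadmap of anticipated input and directed output'
# described above. The thresholds are illustrative, not from any
# real credit model.

def loan_decision(income: float, debt: float, years_employed: int) -> str:
    """Hard-coded, fully transparent decision rules."""
    if income <= 0:
        return "decline"
    if debt / income > 0.5:      # rule 1: debt-to-income cap
        return "decline"
    if years_employed < 2:       # rule 2: employment-history floor
        return "refer"
    return "approve"             # everything else passes

print(loan_decision(50_000, 10_000, 5))   # approve
print(loan_decision(50_000, 30_000, 5))   # decline
```

Every decision the system will ever make can be read directly off the source code – which is exactly the property that machine learning gives up.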
Machine Learning
Machine learning may take a variety of approaches that include learning algorithms, pattern recognition, graphical and statistical modeling, and decision trees. Examples: Natural language processing, facial recognition, and robotic process automation are each applications of machine learning.
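By contrast, in machine learning the decision rule is derived from data rather than written by hand. A toy sketch of the simplest possible learner, a one-threshold "decision stump", on made-up labelled data:

```python
# Machine learning in miniature: instead of hard-coding a rule, we
# search for the rule that best fits labelled examples. A "decision
# stump" learns a single threshold -- toy data, for illustration only.

def fit_stump(xs, ys):
    """Find the threshold on x that minimises classification errors
    for the rule: predict class 1 when x >= threshold."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(xs)):
        err = sum((x >= t) != bool(y) for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# labelled examples: small values -> class 0, large values -> class 1
xs = [1.0, 2.0, 3.0, 8.0, 9.0, 10.0]
ys = [0, 0, 0, 1, 1, 1]
threshold = fit_stump(xs, ys)
print(threshold)  # 8.0 -- the rule was discovered, not hard-coded
```

Change the data and the rule changes with it; no developer edits the code. Real systems stack millions of such learned parameters, which is where the transparency discussed below starts to evaporate.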
Deep Learning
A type of machine learning in which the machine incorporates context sensitivity and machine-driven pattern discovery. Can make use of reinforcement learning to extract progressively higher-level features from data and master complex topics in a short time frame. Examples: Gaming machines like AlphaZero, language translation, medical diagnosis, and object detection by self-driving vehicles.
Even the most complex traditional forecasting model is nothing more than an ‘expert system’ of rules defined by human hand and mind. Those rules might be evolved over time with the benefit of back-testing and ‘learning’, but they are by definition a closed system of transparent rules. Where machine and deep learning change the game is in the way, and the speed at which, those rules are changed: dynamically, on the fly, with a staggering number of variable inputs, some of which may be incomprehensible even if we were to view them. An article in WIRED entitled Our Machines Now Have Knowledge We’ll Never Understand offers a good example of the scale of computing going on.
Google’s AlphaGo program came to defeat the third-highest ranked Go player in the world. ….the game has 10^350 possible moves; there are 10^123 possible moves in chess, and 10^80 atoms in the universe. Google’s hardware wasn’t even as ridiculously overpowered as it might have been: It had only 48 processors, plus eight graphics processors that happen to be well-suited for the required calculations.
Let’s pause – there are 10^80 atoms in the universe. There are 10^350 possible moves in Go. And yet a computer that would fit in the corner of most offices without raising an eyebrow was able to learn how to become the best Go player in the world – and we don’t even know how. Apply that to the complex modelling and decisioning going into the forecasting, trading and optimisation in financial services and you can understand the issue with both transparency and control.
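The gap between those numbers is worth making concrete. Python integers are arbitrary-precision, so we can compute with them directly:

```python
# The scale gap from the quote above, made explicit: Go's move space
# exceeds the atom count of the observable universe by 270 orders of
# magnitude, and chess's move space by 227.
go_moves = 10**350
chess_moves = 10**123
atoms = 10**80

# orders of magnitude = number of digits minus one (exact for powers of 10)
print(len(str(go_moves // atoms)) - 1)        # 270
print(len(str(go_moves // chess_moves)) - 1)  # 227
```

There is no way to enumerate a space like that; any system that masters it has, by necessity, compressed it into rules no human wrote down.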
Taking a step back, we can see what this does to the issue of Bounded Rationality – the boundaries are being pushed ever further back at a rate we cannot fathom, and the concept of rationality is removed entirely – what is rational about a system we don’t understand? The only measure of such a system’s performance is whether it is meeting our objectives, even if we don’t understand it. (In this case the objectives are overwhelmingly financial; in the realm of medical science, the objective is disease eradication – more on that later.)
This isn’t the article to discuss artificial intelligence in depth – plenty of time for that – but for now we can start to understand the huge implications of Bounded Rationality, Information Asymmetry and, dare I suggest, behaviour (when you take the human out of the concept of human behaviour) for the ability of financial markets to apply regulation and enforce decision-making transparency. In brief, our football field of dominoes and stadium full of speculators is no longer a useful analogy for the complexity of the markets. In my imagination, every blade of grass is a new potential interaction, every raindrop a new variable to consider.
All we have done so far is prove, to a degree, how difficult it is to manage risk 'after the fact' – to manage risk once it has been taken or distributed. How do we control the risk before it has been committed to? Prediction is futile! Regulation is the way forward...
PREDICTION IS FUTILE!
If predictive controls (e.g. forecasting, Credit Rating Agency ratings and complex financial reporting to the regulator) are not sufficient to identify risk, then we have to deal with the other end of the spectrum – the entry of new instruments and new technology into the markets. But here we come across different issues altogether, and they have nothing to do with complexity at all; in fact they are to do with simplicity.
The options for any regulator that wants to control a market from the supply side are limited: prescribe strict rules around the type of product that can be launched, pre-approve 'shrink-wrapped' products that cannot be deviated from, close the market to new additions, or pre-approve the adoption of technology and modelling approaches. It is an 'all or nothing' approach to regulation, as any watering down of the rules leaves a gap into which new products and technologies grow, and control is lost once again. Whilst the regulator might like to get to a point where it pre-approves products and agrees limits on them, it is something the market will resist at all costs, citing loss of entrepreneurial ability, limits on innovation when dealing with new challenges, or reduction in competitive advantage.
Even if the will existed, classification of products is hard. When is a derivative not a derivative? The truth is they probably always are, since even simple transactions that do not look like derivatives can have embedded charges and fees that act like a contingent liability. How then do you regulate a market where products are so difficult to define? How do you approve models whose workings even the AI engineers who built them do not fully understand?
Even with tighter regulation, further complexity emerges when companies attempt to sidestep legislation in order to obtain competitive advantage or simply generate profit. Legislation at the product level or modelling level can never be anything other than a trailing act of control based on assessing past failure or success.
Some effort has been made to ensure entities keep some element of risk – Skin In The Game. The problem with skin in the game is that it only removes risks to the market where information asymmetry is prevalent (that is to say, one party has by definition more information about the risks and rewards than the other). Where the originator is equally in the dark about the risks or complexity, having skin in the game only serves to introduce a false sense of security in the instrument being offered.
What better example of how slow and poor regulation is as a tool for control is there than the cryptocurrency market?! Cryptocurrency was barely a twinkle in the eye of financial institutions in 2008 – at best a marginalised department set up to explore the ideas of blockchain technology. The last few years have seen the explosion of new cryptocurrencies, distributed financial instruments and security-based use cases, all available to the layperson to get involved in, with the objective of democratising investment opportunities for the masses and removing central government control via fiat currency. The regulators themselves don’t know how to regulate these instruments, even if they had the remit, resources and appropriate laws to let them do so. Take the SEC v Ripple debacle, which rests not on official documents of regulatory standpoint and guidance but on what the SEC Director of Corporation Finance said in an after-dinner speech in 2018, and on the definition of what constitutes a security laid down by the “Howey test” – which was written in 1946!
Conclusion
What sobering conclusions have we reached so far?
Proactive regulation to prevent crisis is difficult on both the supply side and the control side of the financial system.
Models are a natural simplification of a complex landscape; they require qualitative and quantitative judgements that are all too easily corrupted by rapidly changing circumstances or the vagaries of human behaviour.
AI has introduced the ability to remove simplification of reality, but in a way that means we are even further away from understanding the ‘why’ of reality.
Human behaviour adds variables that are impossible to predict. Profit is achieved by having more data and more insight than your competitors, so the aim is to increase information asymmetry, not remove it from the market.
We are left then with the conclusion that the mainstay of dealing with future crises is not preventing or predicting failure but simply maintaining enough reserves to deal with cataclysmic losses. The Swiss had it right all along by adding their own significantly higher levels of capital adequacy to the generally accepted limits. I’m not suggesting for one moment that we give up and remove regulation or stop attempting to predict systemic risk – rather that we acknowledge that over-leveraging with debt is a bad thing, and that carrying a significantly greater degree of sovereign and organisational reserves is an essential part of responding to a systemic shock.
It’s a good job we have learned our lessons and prepared for that future scenario.
Ooooops…….