Dr. Joseph P. Farrell Ph.D.
June 27, 2017
Over the years I’ve become increasingly wary of the various markets that are now run almost exclusively by computers and have occasionally commented about it in blogs. I’ve even entertained the possibility, in my high octane speculation mode, that various “flash crash” events seem to have features that suggest that the algorithm “took over” and drove a market event with no connection to human market realities; in this respect, I continue to be unconvinced, for example, by the various explanations of the May 2010 flash crash; call it a suspicion, or a hunch, nothing more. Yes, in short, I’ve entertained the idea that artificial intelligence (AI) is not “coming” but already “here”, and may be infesting the “dark pools” and high frequency trading (HFT) algorithms.
Well, now I’m not the only one, according to these stories shared by Ms. K.M.:
From the first article, I want to draw your attention to the following statements:
Listen Luddites, for the stock market, too, it’s a thing about the machines.
Throw away your fundamental analysis, your price charts, interest rates and economic growth forecasts, as the market has lost its moorings.
It is no longer a pyramid of fundamental and technical analysis nor is it a response to changing investor sentiment.
The ongoing multiyear changes in the market structure and dominant investor strategies in which quants, algos and other passive strategies (e.g., ETFs) have replaced active managers raise the same risks that Finchley faced 57 years ago.
And the overwhelming impact of central bankers’ largesse is the cherry on the market’s non-fundamentally influenced sundae.
As I have written:
“The combination of central bankers’ unprecedented largesse (and liquidity) when combined with mindless quant strategies and the enormous popularity of ETFs will, as night follows day, become a toxic cocktail for the equity markets. While we live in an imperfect world, we face (with valuations at a 95% decile on a number of metrics) a stock market that views the world almost perfectly.”
Back to JPMorgan’s Marko Kolanovic, who is quoted at the top of this piece and again here:
“… some striking facts: to understand this market transformation, note that Passive and Quantitative investors now account for ~60% of equity assets (vs. less than 30% a decade ago). We estimate that only ~10% of trading volumes originates from fundamental discretionary traders. This means that while fundamental narratives explaining the price action abound, the majority of equity investors today don’t buy or sell stocks based on stock-specific fundamentals.” (Bold emphasis added)
Let that last statement sink in for a moment, for if you, like I, have been wondering just why the heck markets don’t make sense any more, it’s because they are utterly unconnected to humanity and human decision-making. That “less than ten percent” of trading volume that “originates from fundamental discretionary traders” means that only about one trade in ten reflects actual human consideration of the performance, risk, and returns of a particular stock, or of equities in a certain specific sector of industry – say, film-making or farm implement manufacture. The other ninety percent does not.
I don’t know about you, but I find this development more than disturbing.
But before we move on to the second article, pause and consider something else: it is a frequent criticism that centralized, “one size fits all” political solutions of the political left are unworkable, precisely because no human being can calculate for all possible circumstances of all human beings: one cannot, as it were, write a bureaucratic policy or algorithm into “guideline notebooks” for every possible situation.
And that raises the thorny philosophical question that no one seems to want to address:
How then, can we expect human creators of computer algorithms to do for markets, what cannot be done for other segments of human interaction by bureaucrats?
With that philosophical point in mind, turn to the second article, and consider these very cogent points made by our friends at Zero Hedge:
Most people think of artificial intelligence and algos as simply executing logical rules programmed into them by humans — the same rules that the programming humans would follow if they were presented with the same data and data analysis. On this view, the algos and AIs are doing only what humans have always done and would do, except that humans would do it at a much slower speed, or perhaps not at all because of the very weak and distant relationship of some data items to other data items.
The general belief is that algos and AIs are just “faster humans able to do a lot more calculations in a meaningful time frame”. That may NOT be a correct characterization of some of the more powerful AIs that may be working in the markets. Of course, we don’t know which AIs are at work, because there are no regulations requiring that machine decision-making accounts disclose and register as such … a very, very big gap in regulation.
True, AI and the related “machine learning” developments at the leading edge of such technology do NOT simply duplicate human rules and logic. Instead, while they may initially perform simple repetitive correlations on data as humans currently formulate that data, the more advanced machines go on to program themselves at successive layers, where the data being analyzed and correlated is no longer what we think of as data. Rather, it is often data artifacts created by the first layers in a form that no human would ever consider or has ever seen. To put it in a more street-level way, the first level creates ghosts and apparitions and shadows that the second layer treats as real data, on which it assesses correlation and predictability in the service of some decision asked of it. AND … a third and a fourth layer, and on and on, are doing the same thing with the output of each layer below.
The result of this procedure is striking and terrifying when the leading experts in AI and machine learning are interviewed. They admit that they have no way of determining what rules AI and machine-learning powered machines are following in making their decisions AND we cannot even know what inputs are being used in making those decisions.
Think about that. The creators have no knowledge of what their creations are thinking or what kind of inputs the machines are thinking about and how decisions about that are being made. The machines are inscrutable and, most terrifyingly important, UNPREDICTABLE.
We are not telling these AIs how to make decisions. The machines are figuring out how to decide to “make a profit” on their own and subject to no enforceable constraint.
The resulting risk of “flash crashes” — to lump all sudden and unexpected behaviors into a catchphrase — is unknowable but probably much greater than anyone even dreams. The machines have no fear of flash crashes or any other kind of crash. Such crashes might even serve their purpose of “making a profit.”
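The layered processing the quote describes, where each layer consumes “artifacts” produced by the layer below it, can be sketched with a toy stack of random nonlinear projections. This is a deliberately simplified illustration of the mechanism, not any firm’s actual system; all numbers, widths, and names are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "market data": daily returns for 50 stocks over 250 days.
# Entirely synthetic; an illustration, not a real trading model.
returns = rng.normal(0.0, 0.01, size=(250, 50))

def layer(x, width):
    """One self-programmed layer: a random nonlinear projection.

    Real systems learn these weights from data; the point here is only
    that the output is a derived artifact no human designed or inspects.
    """
    w = rng.normal(size=(x.shape[1], width))
    return np.tanh(x @ w)

# Stack layers: each consumes the artifacts of the one below, so by the
# fourth layer the "data" bears no human-readable relation to any stock.
x = returns
for _ in range(4):
    x = layer(x, width=20)

# A trading "decision" driven by fourth-order artifacts, not by any
# balance sheet, income statement, or price-earnings ratio.
signal = x.mean(axis=1)                        # one number per day
decision = np.where(signal > 0, "buy", "sell")
```

Even in this toy, asking *why* a given day came out “buy” has no answer in terms a discretionary investor would recognize: the inputs to the final layer are artifacts of artifacts.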
Note what is really being said:
(1) the AI is generating its own internal data artifacts, in forms no human has designed or ever seen;
(2) is processing and making trading decisions based on those artifacts;
(3) none of these processes are transparent, and thus, we do not even know why the markets are behaving as they are behaving, we only know they are not reflective of human market realities; and finally,
(4) all this can lead to the risk of flash crashes.
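Point (4) can be made concrete with a toy feedback loop: momentum algorithms that all sell whenever the last tick fell, so that one random dip triggers a self-reinforcing cascade. Every parameter below is invented for illustration; this is a sketch of the mechanism, not a model of any real market:

```python
import random

random.seed(42)

# Toy market: one price, small random "human" order flow, plus a crowd
# of identical momentum algos. All parameters are invented.
price = 100.0
history = [price]
N_ALGOS = 50
THRESHOLD = -0.005   # each algo sells if the last tick fell more than 0.5%
IMPACT = 0.02        # price impact per selling algo, in percent

for tick in range(100):
    last_move = 0.0
    if len(history) > 1:
        last_move = (history[-1] - history[-2]) / history[-2]
    # Random "human" order flow: small and roughly balanced.
    move = random.gauss(0.0, 0.2)
    # Algorithmic order flow: every algo that sees a drop sells,
    # deepening the drop that the next tick's algos will react to.
    # Once triggered, the cascade is self-sustaining -- the machines
    # have no fear of the crash they are creating.
    if last_move < THRESHOLD:
        move -= N_ALGOS * IMPACT
    price = max(price * (1 + move / 100.0), 0.01)
    history.append(price)

drawdown = (min(history) - history[0]) / history[0]
print(f"worst drawdown: {drawdown:.1%}")
```

The instability here comes not from any one algorithm but from the homogeneity of the crowd: identical rules reacting to each other’s output, with no participant obliged to care whether the cascade stops.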
Lest one think that this sounds too incredible to be true, consider the final closing paragraph of this article, which is the biggest jaw-dropper of them all:
Everyone should read this important note from JPMorgan’s head quant (hat tip to Zero Hedge) in order to understand how risk parity, volatility trending, stat arb and other quant strategies that are agnostic to balance sheets, income statements and private market value are artificially impacting the capital markets and, temporarily at least, are checking volatility. (Bold and italics emphasis added)
Let that sink in for a moment: because algorithms trade at such extraordinary speed, and execute trades in blocks of equities, little or no correlation is being made with actual specific equity performance, such as a human “discretionary investor” would make by looking at “old fashioned analogue sorts of things” like balance sheets, income, profit/loss statements, company indebtedness, price-earnings ratios, exposure, assets &c… in other words, the algorithms have little to no connection to markets and their realities, much less to the human decision-making processes that are normally involved in the investment process.
The bottom line? Well, over the long term, obviously a huge rethink of computer-based trading is in order. Frankly, I’m old fashioned enough to want to see a Wall Street trading floor of shouting traders, piles of paper, and bundles of stock certificates being mailed out every day. But beyond this, there’s a short term necessity, perhaps one can call it a strategy, and that’s “keep it local”; and by “keeping it local” I mean, even for local investments, finding out about their exposure to national and international markets: how much of that local bank’s stock is traded on the big markets, and who are the major shareholders? And so on… because, for right now, these machines are at the root of market unreality.
This should, and I hope will, prompt a discussion, and it will have to be a deep one, for the problem of the quants and their algorithms is highlighting the limitations of technology for a human world. The disconnection of markets from real human market activity is a case in point of how technologies have been adapted to a normal human activity – investing and trading – in an inhuman way. And the problem is, if the markets are that far removed from human realities, what will happen if, suddenly, someone pulls the plug? How many would remember how to conduct trades on the floor, the “old fashioned way”?
See you on the flip side…
Read More At: GizaDeathStar.com
About Dr. Joseph P. Farrell
Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.