5.12 Be cautious about trusting AI without having deep understanding.
I worry about the dangers of AI in cases where users accept—or, worse, act upon—the cause-effect relationships presumed in algorithms produced by machine learning without understanding them deeply.
Before I explain why, I want to clarify my terms. “Artificial intelligence” and “machine learning” are words that are thrown around casually and often used as synonyms, even though they are quite different. I categorize what is going on in the world of computer-aided decision making under three broad types: expert systems, mimicking, and data mining (these categories are mine and not the ones in common use in the technology world).
Expert systems are what we use at Bridgewater, where designers specify criteria based on their logical understandings of a set of cause-effect relationships, and then see how different scenarios would emerge under different circumstances.
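To make that distinction concrete, here is a minimal sketch of what I mean by an expert system. The rules, thresholds, and scenario fields below are invented for illustration and are not Bridgewater's actual criteria; the point is only that every output traces back to a cause-effect rule a person wrote down and can defend.

```python
# Minimal expert-system sketch: the decision logic is specified explicitly
# by a person, so every output can be traced to a stated cause-effect rule.
# All rules, thresholds, and scenario fields here are hypothetical.

def decide(scenario):
    """Aggregate explicit, human-specified rules into a position."""
    signals = []
    # Rule 1: falling short rates (easier money) tend to support growth assets.
    if scenario["rate_change"] < 0:
        signals.append(+1)
    # Rule 2: inflation above a 3% target raises tightening risk.
    if scenario["inflation"] > 0.03:
        signals.append(-1)
    return sum(signals)

# Stress-test the same logic under different hypothetical circumstances.
for scenario in [
    {"rate_change": -0.5, "inflation": 0.02},  # easing, low inflation
    {"rate_change": +0.5, "inflation": 0.04},  # tightening, high inflation
]:
    print(scenario, "->", decide(scenario))
```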
But computers can also observe patterns and apply them in their decision making without having any understanding of the logic behind them. I call such an approach “mimicking.” This can be effective when the same things happen reliably over and over again and are not subject to change, such as in a game bounded by hard-and-fast rules. But in the real world things do change, so a system can easily fall out of sync with reality.
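A toy version of mimicking might look like the sketch below (the data are invented): the system memorizes which outcome most often followed each state, with no model of why, and simply replays that pattern.

```python
from collections import Counter

# "Mimicking" sketch: memorize which outcome most often followed each
# state in the (invented) historical data, with no model of why.
history = [("down", "up"), ("down", "up"), ("down", "down"), ("up", "up")]

followers = {}
for state, next_move in history:
    followers.setdefault(state, Counter())[next_move] += 1

def predict(state):
    # Replay the most frequent historical pattern for this state.
    return followers[state].most_common(1)[0][0]

print(predict("down"))  # "up" -- works only while the world keeps repeating
# If the underlying regime changes, the mimic keeps replaying the stale
# pattern, because it never understood what caused the moves.
```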
The main thrust of machine learning in recent years has gone in the direction of data mining, in which powerful computers ingest massive amounts of data and look for patterns. While this approach is popular, it’s risky in cases when the future might be different from the past. Investment systems built on machine learning that is not accompanied by deep understanding are dangerous because when some decision rule is widely believed, it becomes widely used, which affects the price. In other words, the value of a widely known insight disappears over time. Without deep understanding, you won’t know if what happened in the past is genuinely of value and, even if it was, you will not be able to know whether or not its value has disappeared—or worse. It’s common for some decision rules to become so popular that they push the price far enough that it becomes smarter to do the opposite.
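The decay of a widely known insight can be simulated directly. In the hypothetical sketch below, a simple “buy after a down day” rule is profitable while the market genuinely mean-reverts and loses money once crowding flips the pattern; all numbers are invented.

```python
import random

random.seed(0)

def market(p_up_after_down, days=1000):
    """Invented daily returns: after a down day, the next day is up with
    probability p_up_after_down; otherwise up/down is a coin flip."""
    returns, prev = [], 1
    for _ in range(days):
        p_up = p_up_after_down if prev < 0 else 0.5
        prev = 1 if random.random() < p_up else -1
        returns.append(prev * 0.01)
    return returns

def rule_pnl(returns):
    """The widely known rule: buy the day after every down day."""
    return sum(today for yesterday, today in zip(returns, returns[1:])
               if yesterday < 0)

print("edge exists: ", round(rule_pnl(market(0.6)), 3))  # rule makes money
print("edge crowded:", round(rule_pnl(market(0.4)), 3))  # same rule loses
```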
Remember that computers have no common sense. For example, a computer could easily misconstrue the fact that people wake up in the morning and then eat breakfast to indicate that waking up makes people hungry. I’d rather have fewer bets (ideally uncorrelated ones) in which I am highly confident than more bets I’m less confident in, and would consider it intolerable if I couldn’t argue the logic behind any of my decisions. A lot of people vest their blind faith in machine learning because they find it much easier than developing deep understanding. For me, that deep understanding is essential, especially for what I do.
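The wake-up/breakfast confusion is easy to reproduce. In the invented data below, two events that share a common cause (it being morning) are almost perfectly correlated, yet the data alone cannot say which causes which, or whether either does.

```python
# Invented data: two effects of a common cause (morning) line up almost
# perfectly, but the correlation says nothing about causal direction.
woke_up   = [1] * 990 + [0] * 10   # nearly everyone wakes up...
breakfast = [1] * 985 + [0] * 15   # ...and nearly everyone eats

both = sum(w and b for w, b in zip(woke_up, breakfast))
print(f"P(breakfast | woke up) = {both / sum(woke_up):.2f}")  # ~0.99
# A pattern-matcher could just as easily read this as "eating breakfast
# causes waking up" -- the numbers cannot distinguish the direction.
```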
I don’t mean to imply that these mimicking or data-mining systems, as I call them, are useless. In fact, I believe that they can be extremely useful in making decisions in which the future range and configuration of events are the same as they’ve been in the past. Given enough computing power, all possible variables can be taken into consideration.
When you get down to it, our brains are essentially computers that are programmed in certain ways, take in data, and spit out instructions. We can program the logic in both the computer that is our mind and the computer that is our tool so that they can work together and even double-check each other. Doing that is fabulous.
For example, suppose we were trying to derive the universal laws that explain how species change over time. Theoretically, with enough processing power and time, this should be possible. We would need to make sense of the formulas the computer produces, of course, to make sure that they are not data-mined gibberish, by which I mean based on correlations that are not causal in any way. We would do this by constantly simplifying these rules until their elegance is unmistakable.
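That simplification step can be mechanized to a degree. In the toy sketch below (the mined expression is invented), sympy reduces an unwieldy machine-produced formula to something whose elegance is hard to miss.

```python
import sympy as sp

x = sp.symbols("x")
# Pretend a data-mining run produced this unwieldy expression...
mined = (x**2 - 1) / (x - 1) + sp.sin(x)**2 + sp.cos(x)**2
# ...simplifying reveals it was a clean linear law all along.
print(sp.simplify(mined))  # x + 2
```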
Of course, given our brain’s limited capacity and processing speed, it could take us forever to achieve a rich understanding of all the variables that go into evolution. Is all the simplifying and understanding that we employ in our expert systems truly required? Maybe not. There is certainly a risk that changes not reflected in the tested data might still occur. But one might argue that if our data-mining-based formulas seem able to account for the evolution of all species through all time, then the risks of relying on them for just the next ten, twenty, or fifty years are relatively low compared to the benefits of having a formula that appears to work but is not fully understandable (and that, at the very least, might prove useful in helping scientists cure genetic diseases).
In fact, we may be too hung up on understanding; conscious thinking is only one part of understanding. Maybe it’s enough that we derive a formula for change and use it to anticipate what is yet to come. I myself find the excitement, lower risk, and educational value of achieving a deep understanding of cause-effect relationships much more appealing than a reliance on algorithms I don’t understand, so I am drawn to that path. But is it my lower-level preferences and habits that are pulling me in this direction or is it my logic and reason? I’m not sure. I look forward to probing the best minds in artificial intelligence on this (and having them probe me).
Most likely, our competitive natures will compel us to place bigger and bigger bets on relationships computers find that are beyond our understanding. Some of those bets will pay off, while others will backfire. I suspect that AI will lead to incredibly fast and remarkable advances, but I also fear that it could lead to our demise.
We are headed for an exciting and perilous new world. That’s our reality. And as always, I believe that we are much better off preparing to deal with it than wishing it weren’t true.
* Source: Principles by Ray Dalio