Please share widely:
George Phillies writes:
On one hand, the stock market and associated businesses are moving into a tall bubble. A few days ago, the S&P 500 set a new record, even though on that day 398 of the 500 S&P stocks fell. We see more and more reports of AI companies that have no path to an income and are worth billions and billions of dollars. Companies are boosting their sales by investing in each other in an incestuous cycle. Hedge funds are becoming strongly correlated with stock prices, so that they no longer hedge. Car repossessions are at a high. I am seeing orthodox investment houses running articles about how you should stay invested in the market…all of the above were seen in the dot-com bubble.
Meanwhile, we have a K-shaped economy. People at the top are doing very well. The bottom 80% are doing poorly. It appears that SNAP will disappear, which is a moderate demand shock, and it is leading a modest segment of the underclass to say that they will simply loot food stores. Trump’s ICE people are considerably farther out of hand than most urban police forces were a while ago. At some point, there will be serious demonstrations, not starring upper-middle-class Caucasian women. One delay in this event has been that events such as the kidnapping of American citizens have tended to happen in places with rigid gun control laws.
Student loans bloated university administrations and encouraged people to enter low-available-job fields. At some point, indebted people will complain more vigorously.
That’s the less serious problem. The more serious problem is that it is plausible that AI as now constituted will fail to deliver its promises of General Intelligence. At that point, most AI investments go up in smoke. Why? Large language models look at statistics and try to choose outputs that agree with what has been published. Said differently, an LLM gives an answer that tries to make you feel good. That can make it an effective search engine. However, there is no real understanding there.
Unlike most of you, I experienced the understanding issue a long time ago.
Student: Why did I get a zero on the question?
Me as professor: Your answer is incorrect at point after point.
Student: But I showed that I understand the concepts. I just can’t solve problems. Don’t I get any points?
And at this point I did make a mistake: I neglected to ask the student what they meant by ‘concepts’, since so far as I could see there was no understanding there.
Words without understanding:
We may consider a student who took the displacement at constant acceleration formula
x = x0 + v0 t + 0.5 a t^2,
was handed a time-dependent acceleration a(t) = q t,
and simply plugged q t in for a in the formula. Complete nonsense, but perhaps it showed a ‘concept’.
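To make the error concrete, here is a minimal sketch (the function names and numeric values are my own illustration, not from the original). Integrating a(t) = q t twice gives v(t) = v0 + q t^2 / 2 and x(t) = x0 + v0 t + q t^3 / 6, so the student’s substitution overstates the acceleration term by a factor of three:

```python
def x_student(x0, v0, q, t):
    """Wrong: plugs a(t) = q*t into the constant-acceleration formula,
    yielding a 0.5*q*t^3 term."""
    return x0 + v0 * t + 0.5 * (q * t) * t**2

def x_correct(x0, v0, q, t):
    """Right: integrate a(t) = q*t twice, yielding a q*t^3/6 term."""
    return x0 + v0 * t + q * t**3 / 6

def x_numeric(x0, v0, q, t, steps=100_000):
    """Brute-force step-by-step integration of a(t) = q*t, as a sanity
    check on the closed form."""
    dt = t / steps
    x, v, tau = x0, v0, 0.0
    for _ in range(steps):
        v += q * tau * dt
        x += v * dt
        tau += dt
    return x

x0, v0, q, t = 0.0, 2.0, 3.0, 4.0
print(x_student(x0, v0, q, t))   # 104.0 -- the q term is three times too big
print(x_correct(x0, v0, q, t))   # 40.0
print(abs(x_numeric(x0, v0, q, t) - x_correct(x0, v0, q, t)) < 0.01)  # True
```

Only the correct formula agrees with the numerical integration; the substituted one does not, which is the sense in which the student had words but not understanding.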
The LLM is like the student. It has no understanding and no way to get one with parameter fitting. It may think it has concepts right, but its outputs will hit a limit.
And if that is the limit on large language models, well, LLMs and AI centers are tying up 40% of the country’s investment capital for a better grade of snake oil.
I am publicly conceding a point I lost on a Computer Science exam 35 years ago.
The professor asked what logic family the next supercomputer would be built from.
I answered ECL; the professor claimed gallium arsenide. We spent the next couple of quarters swapping papers back and forth about Cray’s efforts.
I would find one that said GaAs wasn’t working; he would find another saying Cray was back on track.
Turns out we were both wrong. Versions of FETs have won out.
To this I would add the observation that Barclays recently downgraded Oracle debt, on the grounds that Oracle is borrowing a sum of money far larger than its cash on hand or current income. The amount of money being invested in AI computation centers is a significant part of the country’s available investment capital. If Artificial Intelligence does not pan out, large numbers of investors are going to lose incredible amounts of money. Already the administration is denying that it would have the Federal Reserve or Treasury buy up large amounts of AI-corporation bonds to save their investors if there were another financial crisis, ‘denial’ being a strong indicator that the idea is being considered.
When large language models are integrated with robots and more specialized types of artificial intelligence, they can easily overcome the limits you described. Although they may not possess consciousness as we know it, they can integrate and react to sensory input and learn from experience. Examples are self-driving cars and warehouse robots, with humanoid household robots on the horizon.
Yes, that is the thing of it. The companies driving the AI spending can all afford it. But, Google already has almost all of the search market. It isn’t going to get more users, or sell more advertising, just because it improved the way people can search. And if it tries to charge for it, I suspect the vast majority of people would switch back to regular Google searches. It doesn’t seem likely to me that more people are going to subscribe to Amazon just because Amazon upgraded Alexa.
They need new product lines to justify the spending. If those products are slow to materialize, or if they are priced out of reach of a sufficient number of consumers, then stock valuations are going to come down in a hurry. And if AI increases productivity enough to require many fewer workers, that could cause other problems. One company’s margins could improve in the short term after cutting suddenly unnecessary staff, but if every company does the same, employment will fall to the point where consumer spending contracts, hurting corporate income and again compressing multiples.
Still, who wants to bet against companies like Microsoft and Google? The quantum stocks are an easier short, if you can stand the meme stock level volatility. No product, no hope of having a saleable product in the next decade, and some of their stock prices have gone up 40x this year. At some point they’re likely going to zero. They just might quadruple first.
LLMs are limited by their developers to the boxes they’re stuck in and the biases their programmers bake into them. Plus, they are really unable to actually understand nuances of language, including humor, sarcasm, plays on words, and new word constructions.
Plus, they are terrible at oral narratives, being unable to convert simple things like dollar amounts, times, and sentence endings into coherent sounds.
They have a long way to go, and they barely understand that blinker fluid exists and is made by Bausch & Lomb.
R.E.M. said it best… “It’s the End of the World as We Know It!”
Ayn Rand said that the distinguishing feature of humans is our ability to create concepts. When AI is capable of actually creating concepts, and not just playing a guessing game with ideas, then we need to worry.
*creating concepts unprompted, proactive, not reactive*
But yes indeed.
Good article, George. Yes, there is overinvestment in AI with much of it based on debt.