28 Aug 2025

"LATE-STAGE CAPITALISM" IS JUST OVERFITTING

I get the strong impression that people using the phrase "late stage capitalism" mostly just mean "capitalism right now, which I don't like". It's just my impression though, and who really cares what I think (even among the readers of my blog post), so let's see what Wikipedia thinks people mean by it:

"In contemporary academic or journalistic usage, "late stage capitalism" often refers to a new mix of (1) the strong growth of the digital, electronics and military industries as well as their influence in society, (2) the economic concentration of corporations and banks, which control gigantic assets and market shares internationally (3) the transition from Fordist mass production in huge assembly-line factories to Post-Fordist automated production and networks of smaller, more flexible manufacturing units supplying specialized markets, (4) increasing economic inequality of income, wealth and consumption, and (5) consumerism on credit and the increasing indebtedness of the population."

Well, that's more or less just "recent capitalism", and doesn't particularly impress me. Points (2), (4), and (5) seem bad, (3) seems maybe good, and (1) is debatable. But I recently decided that maybe there is something to the idea that we are in a late stage of capitalism, and that it's worth attention (and concern). First, though, we have to take a diversion into machine learning.

Overfitting

Given the recent hype about "AI", the reader might be forgiven for thinking that I want to talk about machine learning (which is what you call neural networks when you're being honest, rather than trying to hype something) because I think "AI" is going to transform capitalism. I don't, and it won't. Rather, it's just that a market economy IS a neural network, in a very real sense, and therefore we can learn something about how it works (or fails to) from looking at how machine learning works (or, crucially for this thesis, fails to).

In order for the machine in "machine learning" to learn, it has to be trained. This is a process where you present it with inputs, have it generate outputs, and then smack it on the wrist for getting the output wrong. If it was very wrong, you smack it on the wrist hard, and if it was closer to the correct output, you smack it on the wrist more softly. There are mathematical ways to put all of this, and the devil is most definitely in the details, but this gives you the basic idea.
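
To make that concrete, here is a toy version of the training loop in Python, written as gradient descent on a model with a single adjustable number. Everything in it (the data, the learning rate, the number of passes) is invented for illustration; real systems have millions of weights, but the wrist-smacking works the same way.

    # A toy "smack it on the wrist" loop: gradient descent on a
    # one-parameter model, y = w * x. Data and numbers are made up.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (input, correct output) pairs
    w = 0.0              # the model's single adjustable weight
    learning_rate = 0.05

    for epoch in range(200):
        for x, y_true in data:
            y_pred = w * x              # have it generate an output
            error = y_pred - y_true     # how wrong was it?
            # The smack: correct w in proportion to how wrong it was.
            # A big error gives a big correction, a small error a small one.
            w -= learning_rate * error * x

    print(w)   # ends up near 2.0, the slope that fits the data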

But you can only train it on so much; the Real World always has infinitely more, fractally complicated detail than any finite-sized set of training examples can capture. This doesn't mean machine learning can never be useful, but it does mean that you have to deal with the risk of "overfitting". A small example might help.

Let's say I'm using a neural network (or any other machine learning algorithm) to predict sales at a store, based on the date. I give it a few days' worth of data, and it tries to predict each day's sales based on the weather that day, the day of the week, the month of the year, the economic growth reported in the previous month's statistics, and so on. Based on this, it can get better and better at predicting how good sales will be on a given day in the future. It might learn rules like:

  1. People shop more on Saturdays than Tuesdays
  2. People buy more in December than January
  3. People don't like to shop when it's below freezing, or above 100 degrees Fahrenheit
  4. People shop more when the economy is good

And so on. Great! All reasonable inferences, and probably likely to remain true in the future (i.e. outside of the training set). But there is still some error, some difference between the algorithm's output and the truth. If you keep training the neural network, it will try to reduce this error as much as it can, and it will start to add bogus rules, perhaps like:

  1. People buy more whenever the 17th of the month falls on a Tuesday
  2. Sales drop on days when the previous month's growth figure ended in a 7

Why does it do this? Two reasons. First, its training set is finite in size, and cannot contain abundant data for every case, so it will inevitably start extrapolating from odd situations that are represented only once or a few times in its training set. Second, there is always some random variation in any real-world quantity of interest, so even a perfect representation of the thing you're modeling would never get to zero error (no matter how much data you have). But if the machine learning training algorithm is still cranking, it will keep trying to drive down the error, so after it has learned all of the real, valid correlations, it will start making unreal, invalid correlations that happened (by chance) to work in the training set but will not repeat in future data.
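
You can watch this happen in a few lines of Python (using numpy, and with numbers I made up to mimic the store-sales example). The more flexible the model, the lower its error on the days it trained on, and, past a point, the worse its error on days it has never seen:

    # Toy overfitting demo: noisy daily sales, and polynomial models of
    # increasing flexibility. All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    def daily_sales(day):
        # the "true" pattern, plus random luck the model should not chase
        return 100 + 2 * day + rng.normal(0, 5)

    train_days = np.arange(1, 9)       # the days we have data for
    test_days = train_days + 0.5       # days the model has never seen
    train_sales = np.array([daily_sales(d) for d in train_days])
    test_sales = np.array([daily_sales(d) for d in test_days])

    for degree in (1, 3, 7):
        coeffs = np.polyfit(train_days, train_sales, degree)
        train_err = np.mean((np.polyval(coeffs, train_days) - train_sales) ** 2)
        test_err = np.mean((np.polyval(coeffs, test_days) - test_sales) ** 2)
        print(f"degree {degree}: train error {train_err:10.1f}, test error {test_err:10.1f}")

    # The flexible degree-7 model drives its training error to (nearly)
    # zero, but its error on the unseen days is typically far worse than
    # the simple model's: it has started learning the luck.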

This is a well-known phenomenon in machine learning: Wikipedia, once again, is the best source of a short summary of the topic. But we can also think of it as being quite similar to what, in humans, we call superstition. Somebody had a bad thing happen, by chance, on the 13th of the month. Somebody else had a bad thing happen, by chance, on a Friday. After a while, if they are the type of people who do not believe that anything happens by chance, they will conclude that bad things happen because it is Friday the 13th. They were unwilling to accept, "sometimes you just get unlucky", and insisted on finding a reason, and therefore, Friday the 13th is the reason (because there's no better candidate, since it was in fact just bad luck).

"Overfitting" is what we call it when a machine learning algorithm does not believe in chance, and it gets superstitious.

Money As A Machine For Learning

Not all economies use market mechanisms for setting values. In a market economy, however, the way many people together form a market, with supply and demand curves setting the price of a thing, is very similar to how a neural network arrives at its answer to a problem. The explanation of why takes a while, and we can skip it for our purposes here. But just as information in a neural network passes between many nodes, some of which receive more than others, and those nodes are connected broadly to many others, so money in a market economy passes between many people (or corporations): some get more than others, and they are connected to each other by the passing of money. This is how (and why) a market economy works; money in a market economy is like information in a neural network.

But, what is the "market economy neural network" trying to learn? It is trying to learn the value of things, and what is best to do in order to increase wealth. If somebody discovers a better way to build a mousetrap (and plagues of mice are a problem), then money will flow to them, and that will result in them making more of the improved mousetrap. The market economy/neural network develops rules for how to create value, by creating organizations that are better and better at creating wealth. In the beginning, this is because they are building better mousetraps.

The longer you have a capitalist system, the more developed its rules. For example, its finance system will get extraordinarily baroque, if you let it. It will invest money into things mostly because the last time something similar was invested in, there was a great deal of profit; whether or not it makes sense now will not be as important a consideration. Thus you have umpteen sequels to a movie that made a lot of money, whether or not those sequels are likely to be good (or make money). One company comes out with a profitable product, and then umpteen other companies come out with a similar product, even though the very fact that the first product already exists means it is quite likely they are not truly creating any wealth (that is, the overall economy/society is not any better off for having a dozen of that same kind of product instead of one).

For a long time, economists would try to come up with explanations for why this all did make sense after all, and day-trading or eight electric scooter startups or other free-market nonsense was actually rational. I have noticed that since around 2008 or so, they have mostly stopped pretending that things like the stock market or the banking sector or venture capitalists are rational, and mostly just content themselves with pointing out (correctly) that no other system does a better (or even equally good) job of allocating resources.

But there was a system that DID do a better job of allocating resources, and creating wealth, than our current market economy: the market economy of a few generations back. "Late-stage capitalism" is a market-economy neural network that has overfitted. An overfitted neural network performs worse than one which has not (yet) overfitted, and a "late-stage capitalist" market economy performs worse than a market economy that is not yet "late-stage", for exactly the same reason.

What Can We Do About It?

It is interesting to look at what Wikipedia says about the topic of how to prevent or at least lessen overfitting:

"The basis of some techniques is to either (1) explicitly penalize overly complex models or (2) test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter."

How would these apply to a market economy? The first would "explicitly penalize" complexity in pricing or financial matters. We certainly do not do this in our own economy. In fact, complexity (in finance, software, and many other fields) is often rewarded, either because it makes government regulation more difficult (since the regulator cannot understand what is really happening), or because the complexity makes the person who brought it about look more impressive (about half of software complexity in the real world, I estimate, exists because the programmer wanted to add to their resume or otherwise look more impressive). We not only fail to "explicitly penalize" complexity, we implicitly reward it.
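
In machine learning, by contrast, the penalty really is explicit. Here is a minimal sketch of idea (1), in the ridge-regression flavour, with invented data: the model is charged for every weight it uses, so it only keeps a weight that earns more in accuracy than it costs.

    # Idea (1), sketched: make the model pay for complexity. The penalty
    # term charges for large weights, so spurious ones get squeezed
    # toward zero. Data and penalty strength are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(30, 5))                   # 30 observations, 5 candidate features
    true_w = np.array([2.0, 0.0, 0.0, 0.0, 0.0])   # only the first feature matters
    y = X @ true_w + rng.normal(0, 0.5, size=30)

    penalty = 1.0   # how hard complexity is penalized (a choice, not a law of nature)
    # Ridge regression in closed form: minimize squared error + penalty * ||w||^2
    w = np.linalg.solve(X.T @ X + penalty * np.eye(5), X.T @ y)
    print(np.round(w, 2))   # the four spurious weights come out close to zero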

The second idea relies on the fact that you can halt a neural network's training, so if you can detect when it has started to "overfit", that tells you when to stop. Market economies, however, have continually changing prices, so it is difficult to apply this technique to them. It does appear, though, that anything which allows for more rapid changes in price (for example 24-hour, microsecond stock exchanges) is pushing us in the opposite direction from the one we should want to go, because it allows the market to "overfit" more before any real change in value or environment happens.
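
For comparison, here is what idea (2) looks like in its native habitat, as a rough Python sketch with invented data: hold some data back, keep watching the model's error on it, and stop training as soon as that error starts getting worse.

    # Idea (2), sketched: early stopping. Train on one chunk of data,
    # watch the error on a chunk the model never trains on, and halt
    # (keeping the best weights) once that error starts rising.
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(40, 10))
    y = X[:, 0] * 3.0 + rng.normal(0, 1.0, size=40)   # only feature 0 is real

    X_train, y_train = X[:20], y[:20]    # what the model learns from
    X_val, y_val = X[20:], y[20:]        # the stand-in for "unseen data"

    w = np.zeros(10)
    best_val, best_w = float("inf"), w.copy()
    for step in range(5000):
        grad = X_train.T @ (X_train @ w - y_train) / len(y_train)
        w -= 0.05 * grad
        val_err = np.mean((X_val @ w - y_val) ** 2)
        if val_err < best_val:
            best_val, best_w = val_err, w.copy()
        elif step > 100 and val_err > 1.05 * best_val:
            print(f"stopping at step {step}: error on held-out data is rising")
            break

    w = best_w   # keep the weights from before the overfitting started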

If we cannot (or at least are not going to) stop it, what can we do in response to it?

How To Deal With An Overfitted Model

The first thing we must recognize is that the market economy is bigger than we are, and we aren't going to be able to change it (that is, no one reading this blog is the sort of person who can change the rules of our late-stage capitalist economy). We are "dealing" with the fact that our market economy is overfitted in the sense that we are finding a strategy for living with that which we do not have the power to fix.

Second, the definition of the problem is that our economy cannot be relied on to value things even approximately correctly. Therefore, buying and selling things must be done without any expectation that they are even approximately correctly priced, now or in the future. Just because something is valuable does not mean you will be able to sell it for a good price. Just because something is expensive does not mean it is actually likely to be valuable. Sometimes, as for example with NFTs, this will be fairly obvious, but not always. When you buy something, do not assume that its price is a fair marker of its value. If you intend to sell it later, do not assume that its future price will be correlated in any way with its value.

You can think of it like driving a car with a broken speedometer. The market was, at one time, a working speedometer that accurately measured something. Now, it cannot be relied on to do so. You will have to judge your speed by other means, which is a pain, but it is no good pretending that the speedometer works when it does not. Analogously, you will have to judge the value of things by means other than the market, which is a pain. But it is no good pretending that a late-stage capitalist market is a properly working one.