Why are inflation and deficits bad?

A disproportionate amount of political debate centers around vague abstractions: government spending, deficits, and inflation. The latter two are supposed to be particularly horrible, leading to hyperinflation (and therefore Hitler) or else mountains of debt that are impossible to pay off (and therefore Hitler). Meanwhile, government spending is always at risk of “crowding out” the presumably much more desirable private sector spending.

A moment’s reflection will reveal that these three technocratic abstractions are actually code words for “stuff that makes rich assholes powerful.” Inflation decreases the spending power of hoarded money, and when it is kept at a moderate pace (which does not, as in Weimar Germany, vastly outstrip actual growth in production), it tips the balance of power away from rentiers and toward people who make their money from wages. It’s a way of indirectly decreasing the power of concentrated wealth, and hence moderate inflation is profoundly pro-democratic in its effects.

The same goes for government spending, which designates economic activity that is controlled by democratically accountable representatives rather than by the whims of individual rich assholes. In practice in the U.S., the rich assholes wind up directing some of the flow of this spending, but the bulk of it — such as Social Security, Medicare and Medicaid, and other government benefits — reduces people’s reliance on being exploited by rich assholes. Hence we’ve got to rein in that out of control government spending! Which means: spending that is out of rich assholes’ control and leaves people out of rich assholes’ control.

Government spending at least has the benefit of being tax-financed and hence parasitic on the wealth of rich assholes. Worst of all, however, is deficit spending, where the government creates money over and above its tax revenue in order to spend it in ways not controlled by rich assholes. Our current system requires newly-created money to be matched by a Treasury bond, which I like to think originated as a crafty way of tricking rich assholes into buying into a powerful federal government that would be beyond their effective control. It also has the positive side effect of providing a 100% guaranteed savings vehicle for the general public.

The Treasury bonds that pile up as a result of deficit spending look like “debt,” but it doesn’t work like your credit card, because the government actually creates the currency in which the debt is paid — hence we can always go ahead and “pay off the national debt” by liquidating all our Treasury bonds, and foreign governments who hold our debt can only “punish” us by converting their interest-bearing asset into non-interest-bearing cash. In the last analysis, the federal government’s currency sovereignty can only be controlled by our own elected representatives (and by the need to keep the inflation rate from too greatly outpacing economic growth).

Why the explicit or implicit invocations of Hitler around these abstractions, then? Presumably because it reinforces the message that populism always leads to totalitarianism and disaster. In reality, though, we have plenty of examples of healthy societies that have struck a different balance between the power of rich assholes and the power of democratic deliberation about people’s needs and priorities, and it turns out that none of them are in any danger of producing a Hitler. The only real danger they’re courting is that their rich assholes might wind up being less rich in the long run, and that’s a price I for one am willing to pay.

The inertia of the suburbs

The Girlfriend and I have been watching The Wonder Years lately, and it’s striking how generic the setting is — if not for references to news events in the late 1960s, it could be any time period from 1965 to the late 1990s (and I only posit that cut-off point because of the advent of the internet). The suburban model that was built out starting in the immediate postwar era has proven to be remarkably resilient, and even now it has a kind of self-evidence as the “mainstream” American approach to family and community life.

In the immediate postwar years, it seems as though there was a level of “buy-in” across the population, as the prospect of one’s own house, a car, etc., seemed like wonderful luxuries. By now, however, the suburban model has shown itself to be costly, environmentally destructive, and in many cases isolating and community-destroying. Further, the concentration of good schools in the suburbs perpetuates an ongoing vicious cycle of “white flight” that reinforces the systemic racism of our society. And as the financial crisis revealed, the aspiration to suburban middle class status increasingly carries the risk of financial ruin.

More and more people are realizing all of this and don’t want to buy into the suburban model — yet except for the very wealthy, there seems to be no real choice for middle class people if you want to have children. And the reason for this surprising persistence of a model that no one really wants anymore is the power of state planning. Even if the population could be initially convinced to want suburban-style development, the decisive factor was a concentrated effort on all levels of government to create all the necessary conditions for that lifestyle, through physical and legal infrastructure and often through explicit subsidies (such as the mortgage interest tax deduction, which seems to be invulnerable). All of the stuff they created in that heroic era of American urban planning is still in place. The roads and schools have been built, and the legal structures for expanding suburban development if needed are already in place and ready to go. All the incentives for middle-class families still point outward into the suburbs.

While reading about the ongoing disaster of education “reform,” I once thought: “What if cities stopped trying to attract tourists and started trying to gain permanent residents by creating awesome schools?” As I thought about what that would entail, however, it became clear that no one city has the resources to fully reverse the trend — to really work, it would have to entail a complete reshaping of the school funding structures, a build-out of public transportation infrastructure to support the expanded population, etc., etc. In other words, it would take forceful state planning on the model of what created the suburbs in the first place.

Unfortunately, it appears that the U.S. only had one relatively brief window for such forceful state planning, extending from FDR to Nixon (only 40 years out of the 200+ of the Republic’s existence) — and it wasted it on the suburbs. Barring a new FDR, we’re probably stuck with it. The bright side, I guess, is that The Wonder Years will remain legible and relatable for generations to come.

Why do the Gremlins love Snow White?


It’s a strange moment. The Gremlins, having eaten after midnight and turned from teddy bears into evil reptilian creatures, find themselves in a movie theater. Suddenly, Snow White starts playing — and they are transfixed. They all sing in unison along with the seven dwarfs: “Heigh-ho!” Indeed, their love of Snow White proves to be their undoing, as their absorption in the movie is what ultimately allows them to be defeated when the protagonists start a fire, burning down the theater.

The use of Snow White cannot be random. Gremlins was not produced by Disney, and so the producers had to pay extra to use the film. But what does it mean?

“Skin in the game”: The market for high-stakes services

It is a commonplace in public policy that exposing citizens to more price signals for important services is an important part of bringing down costs. In the health care arena, for example, it’s often said that the reason medical costs are so high is that customers by and large are not paying their own bills and don’t even know the costs of the various procedures — if they were the ones who had to pay the difference in price between a brand-name and a generic drug, they would be more likely to make the sensible, cost-effective decision.

This theory is completely, 180-degrees wrong. Even if we assume that consumers are rational utility-maximizers, “saving money” is not the relevant utility to maximize in the medical situation — preserving one’s health and, ultimately, one’s life is the priority that overrides all others. Even total financial ruin is preferable to death. Further, in the absence of specialized medical knowledge, it is understandable and even rational to use price as a proxy for effectiveness. Even if there is a point of diminishing returns, that extra little bit of effectiveness may be the decisive factor.

We can see a similar dynamic in other high-stakes scenarios. Higher education is an economic good, but it is an economic good of a very particular type: it permanently affects your long-term economic prospects, and you only get one shot at it. Here again, it is reasonable to use price as a proxy for quality in the absence of other information — and it is likely that the elaborate attempts to generate hard data about learning effectiveness will lead to the unsurprising conclusion that schools with better resources deliver better outcomes! Again, even if there is a point of diminishing returns on the price vs. quality graph, people will want to maximize the quality to the extent possible, in their one chance to go to college. In the choice between thwarted life prospects and unmanageable debt, unmanageable debt surely seems preferable. The analogy with legal services is easy to draw as well — when one’s freedom is at stake, no price is too high.

Once we acknowledge that there is a qualitative difference between high-stakes, life-or-death outcomes and financial outcomes, we should expect people in a private market to drive up costs for high-stakes services at every opportunity — which is in fact what we witness in practice. The only way to control prices is to control prices, i.e., to limit what can be charged for the relevant services either through government regulation, through negotiation on behalf of large groups of customers, or through the creation of a “public option” for the service in question (like public universities in the postwar era).

The theodicy of ethical consumerism

I wrote a few weeks ago about the ideological function of free will: we don’t blame people because they have free will, we say they have free will so we can blame them. In the theological realm, the goal of granting us free will isn’t to enhance our dignity or the meaningfulness of our life, but to make sure God has someone to blame for all the bad things that happen — and I believe we can apply the principle of a homology between the theological and the political realm here as well.

A perfect example of this dynamic is ethical consumerism. It often strikes me as bizarre that we’re even given a choice between the gross processed food and the healthy organic food, or between the hideously wasteful product and the ecologically conscious product — much less that the “price signals” are invariably tilted toward the bad option. Wouldn’t it be better to remove the bad option in the first place? Why is something so important left to arbitrary individual choice?

Here I think the fact that we know consumers will generally make the wrong choice is not a bug, but a feature of ethical consumerism. The political class and business elites have already collectively decided that ethical farming and environmental sustainability are not important goals — and so they have left them up to individual consumer choices so that they can disavow responsibility and blame all of us for not choosing correctly.

Whenever we’re offered a free choice, we’re being set up.

A thought experiment

Imagine there was a new drug that could indefinitely increase a person’s physical strength — the more of it they take, the stronger they become. In the aggregate, the increased use of this drug would increase the total physical strength of the human race.

Now imagine two regimes for distributing this drug. In the first, access to the drug is limited to a relatively small portion of the population, who are able to get as much as they want. Human physical strength overall would be growing under this regime, but the vast majority of the population would be effectively weaker with all these Incredible Hulks walking around — in fact, even if their own strength remained constant throughout the process, most people would be in greater physical danger by virtue of the very existence of the Hulks.

In the second, access to the drug is widespread across the population. Everyone is able to do one-armed push-ups and freestanding handstands, but no one is able to gain a significant edge over anyone else. Here I think it would be more meaningful to talk about a general increase in human strength, even if the aggregate effects of the drug were less overall.

If the first distribution regime were the only possible one, I think we’d all agree that it would be better not to have the drug at all than to allow, say, 1% of the population to become Incredible Hulks and walk around among us — even if the Incredible Hulks were able to “create jobs” by forcing the weaklings to slave for them.

“My power is made perfect in weakness”: On institutional breakdown

One point from Hardt and Negri’s Empire has always stood out to me: namely, that institutions typically become more powerful as they break down. The most familiar example is the university, which has in many ways squandered its cultural credibility and has even actively victimized some of its key constituencies (student loans, adjunctification, pervasive rape on campus). Yet the demands we make on the university are ever-increasing. It’s as though the very breakdown of the university highlights the fact that we need “something like” the credentialing role it performs to make modern society manageable — and so we settle for “something like” the university (i.e., the actually-existing university).

One can see the same dynamic at work with contemporary capitalism. Clearly the economy is not working, yet the very injustice and discontent it breeds highlights the benefits of having an apparently impersonal mechanism for distributing economic rewards, lest we degenerate into a post-apocalyptic hellscape of survivalist anarchy. During the government shutdown, I started a series of tweets jokingly predicting absolute social breakdown if the U.S. defaulted, and many of my readers seemed to be deeply disturbed by them — it felt a little too realistic that the social bond in a highly individualistic nation with a lot of guns lying around might turn out to be more fragile than we’d ever imagined. The same holds for the U.S. Constitution. It is widely acknowledged to be highly irrational in its design, and yet the idea of “rebooting” seems unthinkable to most Americans.

If institutions make their demands more strongly felt precisely when they’re failing to deliver on their promises, it seems that the reverse would also hold: we are more able to reform our institutions when their hold feels less urgent. I imagine that much of the strong regulation of capitalism during the Cold War era came from the existence of a living, breathing alternative to the free market — even if the Soviet model did not seem desirable compared to the US model, everyone could tell that the USSR was not a post-apocalyptic hellscape. During the financial crisis, by contrast, it was commonplace to hear people say that if a key financial apparatus broke down, we simply “wouldn’t have an economy anymore.”

Similarly, as I was saying yesterday, in a world where every area of life is increasingly saturated with cutthroat competition, there doesn’t seem to be any alternative to the traditional family as a space of meaningful relationships — and hence people persist in propping up the model and even want to expand it to previously excluded populations, even though it winds up being a costly and painful situation for increasing numbers of people.

Since I can’t figure out how to wrap this post up: “hence the need for full communism is all the more urgent.”

The solution to unemployment isn’t better-trained workers: Or, Systemic problems have systemic solutions

Following Chomsky’s advice, I follow the business press to see what the ruling class thinks is going on in the world, and more specifically, I subscribe to Bloomberg Businessweek, which occasionally allows reality to creep in (global warming is real, deficits aren’t always bad) as opposed to the more nakedly ideological Economist. Recently, for instance, they ran a piece on the minimum wage which included the fact that raising the minimum wage does not actually decrease employment outside of the artificial environment of Econ 101. Yet it also included this little gem:

“Raising the minimum wage is a short-term fix,” contends Wal-Mart’s [vice president for communications, David] Tovar. The long-term solution, he says, involves “expanding education, training, and workforce development.”

This kind of nonsense drives me absolutely crazy. It makes no sense to assume that changes in the composition of the workforce will lead to significant increases in aggregate employment levels.

It’s the *political* economy, stupid!

We live in an era where there is a deep desire to view humans as machines. Humans are not machines — they are free beings who can do surprising things for a variety of reasons or no particular reason at all — but our whole society is set up to hide that fact. Public policy is now the art of “nudging” “incentives,” setting up conditions where human-machines will respond appropriately. Important social choices are outsourced to something called “the market,” which is presented as a kind of naturally occurring decision-generating machine despite being a product of human choices that runs on human choices.

It makes sense that people would turn to such impersonal, supposedly apolitical models of our shared life. Politics has always been traumatic, particularly in the 20th century. We’ve all heard it before: “You think people can take collective control of their destiny in a deliberate and purposeful way? So did Hitler and Stalin!” But politics in the sense of purposeful human decisions about the distribution of power and resources is irreducible. Even if there were a supercomputer perfectly calibrated to distribute the best possible outcomes to everyone, the decision to entrust it with this responsibility would be a human decision — as would the ongoing decision to continue to submit to it. We like to pretend that something called “the market” effectively is that supercomputer, but it isn’t. All it does is cover over the human decisions that are being made.

The irreducibility of actual human decisions holds even at the level of the global market.

The job skills employers crave!!!

Periodically, we learn that employers need a particular set of skills and universities should re-tool accordingly. These in-demand fields command higher wages, and so students are encouraged to flock to them. Indeed, politicians often claim that producing more graduates with said skills will help with unemployment and increase wages overall.

Let’s look at the economics underlying these claims. Under what circumstances does more widespread availability of a product lead to higher prices for that product? I’m pretty sure that the laws of supply and demand would indicate just the opposite result — significantly increasing the supply of a product leads to commodification, creating a buyer’s market where sellers have to compete on price.

Furthermore, since when have employers clamored to pay more people higher wages? If there’s a single characteristic trait of contemporary capitalism, surely it is the constant demand for ever-cheaper labor.

Hence, I conclude that when a particular field or skillset is trumpeted as the Next Big Thing demanded by employers, the goal is to get students to flood that field or skillset in order to commodify it. And this isn’t just a hypothetical — isn’t it exactly what has happened with computer science majors within the last ten years or so?

For that reason, I would advise students to actually avoid such “hot” majors, unless they have a good reason to believe that they will be significantly ahead of the curve. By the time the in-demand field is being propagandized in the mainstream media, it’s almost certainly too late for that.

Given the pressures leading inevitably to the commodification of particular job skills, I believe that a liberal arts-style education that increases students’ adaptability and ability to pick up new skills is a much better — indeed, safer — investment than any directly job-oriented program of study. But what do I know? I’m just an idiot who cares about my students’ actual well-being, not a job creator who wants to exploit them on as cheap and flexible terms as possible.
