Why All of Us Have to Keep Spending Like Crazy--Or Else!

Herewith an update on the credit crunch, which many well-placed financial analysts see as a potential crisis.  I doubt it will degenerate to that extent, but nobody pays me to look after their assets.  The people who do get paid for it are treading very carefully these days—hedging their bets, as it were, even the most bearish among them.

I propose an update in two senses.  First, I’ll do some history of crisis management and central banking, and then I’ll offer some, er, speculation on the near-term causes and effects of the Fed’s agenda, as interpreted by Martin Wolf and certain lesser lights at the FT and the New York Times.

My rewrite of their analysis eventually reaches a bottom line when I claim that the Fed has become the lender of last resort to the consumers who have kept the American economy afloat for the last ten, hell, the last sixty years—it no longer stands apart from the retail level of transactions, I will suggest, and in this sense it has become an emblematic institution of the consumer culture we inhabit. 

That is a characterization, not a complaint.  It may even be a compliment.  For you can’t have socialism without monetary policies that favor consumption over saving and investment. 

At the end of the 19th century there were three ways of explaining the onset of economic crisis. 

(1) If you were an adherent of Say’s Law (our contemporary George Gilder still is), you saw crisis as a deviation from normality—that is, from equilibrium determined by two facts, or rather theorems.  On the one hand, the revenue generated by goods production (wages, salaries, profits) was equal to the value of the goods produced, so there couldn’t be “overproduction” in the absence of financial chicanery.  On the other hand, interest rates regulated the distribution of income between saving and consumption.  For example, as interest rates on government securities fell—as saving became less lucrative—individuals would consume more, and vice versa.

(2) You could also grasp economic crisis as “disproportionality” (too much capital sunk in, say, railroads), usually created by monetary disturbance, for example by the Sherman Silver Purchase Act of 1890, which required the federal government to purchase and monetize silver even though it had become something less than a precious metal with a stable price relative to gold.  Here, too, financial chicanery played the leading role.

(3) Finally, you could see crisis as a function of surplus capital—not in one sector or another, as in disproportionalities induced by monetary disturbance, but across the board.  In this view of things, crisis was an inevitable moment in a “business cycle,” but, when managed properly, crisis would become opportunity; for it would puncture bubbles and strip the market system of speculative or imprudent or criminal constituents (they could never be purged altogether, of course, because markets require such crass opportunists).

The Federal Reserve System was invented by people who embraced this last explanation of economic crisis, and who designed the tools of crisis management still in use (emergency lending, interest rate adjustments, mobilization of reserves, all to inject liquidity in times of stress).  They thought of the banking system as the “headquarters” of the economy, because the purpose of bankers was to move assets from sector to sector according to their changing prospects.  Because bankers straddled all the intersections between economic sectors, telling their customers where change could lead, they always wanted the big picture.

Even so, it was not bankers who invented the Fed.  They played a crucial role, to be sure, but the social movement that created a central banking system here was a cross-class coalition that included merchants, industrialists, journalists, academics, and, yes, trade unionists.  Their goal was not to subsidize the profits of bankers; it was to socialize market forces, to bend them to human purposes other than profit.  They understood that the anarchy of the financial markets—the absence of a central bank that could coordinate crisis management—was a danger to development, and to democracy.

The “moral hazard” of crisis management (or of guaranteeing bank deposits) is the possibility of rewarding the speculative, imprudent, and criminal constituents of the market system as you go about saving it.  If you prop the whole thing up by emergency lending and liquidity—if you resist massive deflation, default, disgrace, and other solutions supplied by free market forces—you may allow the inflation of values to continue, and thus further enable the irrational atrocities associated with speculation, imprudence, and criminality.  

Thorstein Veblen, who was not a fan of banking or bankers, got it right in 1904:  “This cumulative extension of credit through the enhancement of prices goes on, if otherwise undisturbed, so long as no adverse price phenomenon obtrudes itself with sufficient force to convict the cumulative enhancement of capitalized values of imbecility.” (The Theory of Business Enterprise, p. 55.) 

But what if the banking system, under the leadership of the Fed itself, won’t, or rather can’t, allow this adverse price phenomenon to obtrude itself with sufficient force?  What if the problem of “moral hazard” has no solution because we—that would be us consumers—are all at risk?  What if the central banking system now functions as our lender of last resort because without our increasing debts, our inspired profligacy, the whole thing goes south?

To answer, I turn to Martin Wolf & Co. at the Financial Times and his counterpart at the NY Times.  Let me start with David Leonhardt, who writes the “Economic Scene” column at the NY Times.  In his piece of 8/22, he suggests that the Fed “could have tightened financial rules” during the housing boom—or bubble—and thus could have prevented the creative loans that flooded the mortgage market (for example, a third of the mortgages issued in 2005 were interest-only; subprime mortgages were 20% of those issued in 2006).

But there’s a catch.  The folks at the Fed had watched the meltdown of Japan’s economy in the 1990s, after the central bank tried to prick a similar bubble by raising interest rates.  The long deflation on that economic scene—it ain’t over yet—was not something our guys wanted to reproduce by introducing new constraints on consumer spending.  So they waited.  Leonhardt thinks they waited too long, but even he hedges his bet on preventive financial medicine: “A crackdown [on creative and subprime lending] wouldn’t have been without complications, given that the expansion of consumer credit in recent decades has been mostly a good thing [do tell!].  ‘It is a fine line to walk between protecting consumers and doing something that has the unintended consequence of preventing responsible lending,’ Sandra Braunstein, who oversees consumer issues at the Fed, told me yesterday.  ‘And we’re always walking that fine line.’ ”

The very cranky but always lively Clive Crook over at the FT (8/23) fully agrees that a consumer-oriented Fed was reluctant to go down the Japanese road of deflation.  “Regulated banks do little if any such lending [in the mortgage market].  Bank affiliates or independent mortgage companies have built the business—and they are, respectively, lightly regulated or virtually unregulated [on which compare Robert Brent Toplin at HNN, August 20, 2007].  They lent eagerly to borrowers of limited means, often on patently reckless terms . . . .  Everything was premised on perpetually rising house prices.  The Fed was worried, but mainly on consumer protection not systemic-risk grounds.”

Crook thinks more regulation of “non-bank lenders” is in order.  He also thinks that the Fed should avoid the problem of “moral hazard” by supplying liquidity only at a “penalty rate” of interest; here he follows the advice of those who want the market to cleanse itself.  But even he hedges his bet by suggesting the Fed should not cut its federal funds rate—it had already cut the discount rate and eased the terms at its discount window a week earlier to supply emergency funds to banks with clients strapped for cash—“unless there is evidence of harm spreading to the real economy.”

Most sentient observers other than Grover Norquist would agree with this imperative (he’s the village idiot on K Street who wants to abolish the income tax).  But how and where do we locate this “real economy”?  If growth is driven by consumer expenditures—not by private investment funded through corporate profits—then the Fed’s solicitous stance toward consumer protection during (and after) the housing bubble makes perfect economic sense.  The policy-relevant question is, can it sustain this stance if consumption has become so conspicuously dependent on increasing, and increasingly risky, debt?  To put it another way, is the Japanese road to deflation the only alternative to the “moral hazard” of continued reflation on behalf of consumers?

As usual, Martin Wolf has the answers.  His FT column of 8/22 begins with an epigraph from Ben Bernanke on the “global saving glut,” and goes on to explain why consumption financed out of deficits (just like during the Great Depression) is the key to both the Fed’s relative passivity during the housing boom and its frenetic activity in the aftermath.  His short answer is that households in the USA—that is, consumers—have offset the lack of private investment by going into debt, and that the Fed validated this offset because it caused growth.

Here is the answer I need: “The recent household deficit more than offset the persistent financial surplus [the “saving glut”] in the business sector.  For a period of six years—the longest since the second world war—US business invested less than its retained earnings.”  That household deficit (household saving falling short of the sector’s residential investment) has been the engine of economic growth since 2000, and everybody at the Fed knew it; by comparison, the move from surplus to deficit in the US government budget, from 2001 to 2003, was a blip on the traders’ screens.
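A note on the bookkeeping, in my notation rather than Wolf’s: the household sector’s financial balance is its saving net of its own capital spending, which for households is mostly residential investment,

\[
B_{H} \;=\; S_{H} - I_{H},
\]

and a “household deficit” just means B_H is negative: taken together, households lay out more than their incomes and borrow the difference, which is why the mortgage market sits at the center of the story.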

And here is the long answer.  “Nothing that has happened has been a product of Fed folly alone.  Its monetary policy may have been loose too long.  The regulators may also have been asleep.  But neither point is the heart of the matter.  Assume that the US remains a huge net importer of capital.  Assume, too, that US business sees no reason to invest more than its retained profits.  Assume, finally, that the government pursues a modestly prudent fiscal policy.  Then US households must spend more than their incomes [to offset the size of the current account deficit, and to absorb—that is, to buy—the increase of Gross Domestic Product].  If they fail to do so, the economy will plunge into recession unless something else changes elsewhere.”
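Wolf’s chain of assumptions is, at bottom, a sectoral-balances identity.  In my notation (he does not write the equation out), the financial balances of households, business, and government have to add up to the current account:

\[
\underbrace{(S_{H} - I_{H})}_{\text{households}}
\;+\; \underbrace{(S_{B} - I_{B})}_{\text{business}}
\;+\; \underbrace{(T - G)}_{\text{government}}
\;=\; X - M .
\]

Hold X − M deeply negative (the US keeps importing capital), keep S_B − I_B positive (business invests less than its retained earnings), keep T − G only modestly negative (the “modestly prudent” fiscal policy), and the household balance must be negative: either households spend more than their incomes, or the identity closes through a fall in output, which is the recession Wolf warns about.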

So the Fed has no choice except to risk the “moral hazard” of propping up a mortgage market saturated by speculation, imprudence, and criminality.  In the historical moment when net private investment became almost irrelevant as a source of economic growth—the moment named the “disaccumulation of capital” by the historian Martin J. Sklar—it could only validate consumer preferences, even when the spending that expresses them is financed out of household deficits.

The question that remains is what to do about those deficits.  For my part, the answer is pretty simple.  If you know that consumer expenditures are the engine of economic growth, then you try to synchronize supply and demand by raising wages instead of forcing people to go into debt to finance such growth.  Or you detach income from work, so that the lack of jobs that pay a living wage won’t constrain consumer spending.  Whatever.  It all spells socialism, Grover.