At one level, profiling is unexceptionable.
If witnesses report a theft by a young black male, it would be absurd for the police to look for suspects among other groups in the population.
Profiling becomes interesting only when the differential probability of guilt is much smaller.
Even then, it is unproblematic, as Becker notes, when the disfavored group is not a sensitive minority.
No one objects when smokers are charged a higher price for life insurance than nonsmokers, even though many smokers outlive many nonsmokers.
Even when the condition that puts one in the disfavored class is involuntary, such as having a family history of cancer or heart disease, discrimination on this basis (what economists call statistical discrimination) is generally considered permissible because it is not exploitive or based on hostility or contempt and it does promote a more efficient allocation of resources.
Profiling based on race, sex, or national origin, however, is intensely controversial.
It is helpful in discussing it to make two distinctions: between ordinary crimes and Islamist terrorism (e.g., al Qaeda), and in the terrorist case between profiling U.S.
citizens and profiling foreigners.
I will be discussing these issues purely as issues of policy rather than of law.
In the case of ordinary crimes, where for example profiling might take the form of disproportionately frequent searches of vehicles driven by Hispanics because Hispanics are disproportionately represented in illegal drug trafficking, I would expect profiling to have little effect on the crime rate.
The reason is the positive elasticity of supply of persons who commit victimless crimes, which is to say crimes resulting from the outlawing of products or services for which there is a demand.
If one class of suppliers is driven out of business, this makes room for others.
Given the fixed budget for law enforcement assumed by Becker, the increased apprehension of Hispanic drug couriers would be offset by a reduced risk to non-Hispanics of being apprehended for transporting drugs, and so the non-Hispanics would flock to replace the Hispanics as couriers.
The ethnic composition of the illegal work force would be altered by profiling, but the crime rate would be affected only to the extent that Hispanics are more efficient drug couriers because of language and other ties to major drug supply countries; the net effect on the crime rate would probably be small.
In the case of terrorism, a similar replacement effect can be anticipated, although it would probably be smaller.
Assume a fixed budget for screening airline passengers and a reallocation of funds within the budget limit to enable more young male airline passengers who appear to be Muslim (or of Middle Eastern origin, but for simplicity I'll assume that Muslim-appearing is the screening criterion) to be subjected to intensive screening, as distinguished from the limited screening to which all passengers are subjected.
Then fewer passengers who do not fit the profile will be screened (this is implied by the fixed budget), which will induce terrorist groups to make greater use of female Muslims (as happened in suicide attacks in Israel), older Muslims, and young Muslims who do not appear to be Muslim, for members of these groups will now be less likely to be apprehended than before the adoption of profiling.
The elasticity of supply of terrorists is probably not as great as that of drug couriers, but it is positive and will reduce the effect of profiling.
A parallel analysis recommends against concentrating too many of our antiterrorist resources on the protection of New York and Washington, since terrorists can substitute other targets.
The benefits of airline passenger profiling are thus likely to be modest, and the costs may be great in the case of Muslims who are U.S.
citizens.
Being singled out on the basis of race, religion, or ethnic origin is intensely resented by the people who are discriminated against and could undermine their loyalty to the United States if they have strong ethnic and religious ties with the nation's enemies.
A paramount goal of U.S.
antiterrorist policy should be to prevent the disaffection of U.S.
citizens of the Muslim faith and Middle Eastern ethnicity.
That goal would be undermined by profiling.
I do not think compensating them financially for the additional inconvenience would rectify the problem; indeed, it would underscore their differentness from their fellow citizens.
(This is also an argument against reparations for blacks and American Indians.)
The argument for the efficiency of profiling is further undermined by relaxing Becker's assumption of a fixed security budget.
By increasing the budget for airline security, it would become possible to screen everybody carefully.
I suspect that the optimal policy is to subject more U.S.
citizens of apparent Middle Eastern origin or Muslim religious identity to intensive screening than other citizens, but to subject enough of the other citizens to the same intensive screening so that the (lightly) profiled group does not feel markedly discriminated against--and so that substitution of terrorists who do not fit the profile is held in check.
My view with regard to profiling noncitizens is different.
Noncitizens are not expected to be loyal to the United States and so the concern with alienating them by profiling is less acute.
No foreigner expects to be treated identically to a citizen.
Larry Summers, the president of Harvard, stirred up a hornet's nest when, at a recent conference on the underrepresentation of women and members of minority groups in science and engineering, he suggested the following two possible reasons why women are underrepresented.
First, women's math and science aptitude test scores exhibit less variance than men's, and this difference may have a biological basis.
Second, women on average are unwilling to sacrifice as much time to career as men are.
(A third reason, he suggested, might be discrimination against women.)
(For an interesting discussion of the issues, see Saletan.) I want to consider whether there is any merit to his suggestions--but also whether he should have raised the issue at all, given his position as the president of the nation's best-known university, and whether, having done so and been criticized, he should have apologized, as he did; he said that he had been wrong to have spoken in a way that has resulted in an unintended signal of discouragement to talented girls and women, although he did not repudiate the content of his remarks.
Were Summers an expert on the reasons for gender-related occupational patterns, and as a result had special insight into the issue of women's lack of proportional representation in science careers, there might have been a real cost in his failing to speak to the issue.
However, since he is not an expert in this area, there would have been no great loss to human knowledge had he kept silent and let the experts engage with the issue.
Although it is a highly sensitive issue, it is not--unlike the issue of racial differences--so hot a topic that no reputable academic dares investigate it.
So the benefit of Summers's speaking out was small.
The cost would have been small, too--were he not the highly visible president of the nation's most famous university.
For as a practical matter, chief executive officers do not enjoy freedom of speech.
A CEO is the fiduciary of his organization, and his duty is to speak publicly only in ways that are helpful to the organization.
Not that he should lie; but he must avoid discussing matters as to which his honestly stated views would harm the organization.
(Judges also lack complete freedom of speech; as I mentioned in our introductory blog posting, I am not permitted to comment publicly on any pending or impending court case.) Summers must think that his remarks did harm the university, as otherwise he would not have apologized--for he apologized not for what he said, but for saying it.
A university president might make provocative remarks because he wanted to change his university in some way, for example by encouraging greater intellectual diversity, or because he wanted to signal strength, independence, intransigence, or other qualities that he thought would increase his authority, or even because he wanted to intimidate certain faculty by seeming to be a wild man. But that explanation is not available to Summers, because of the apology.
And the apology was probably another error, whether or not he should have raised the issue of women's relative scientific aptitudes or tastes in the first place.
The apology signaled weakness, and it cannot help a leader to appear weak.
Summers has enemies in the Harvard faculty who will be encouraged by his apology to press him for concessions on issues important to them--such as diversity hiring.
The apology was also condescending.
It assumed that women's career commitments are so fragile that Summers's remarks at the conference would actually reduce the number of women who choose a science career.
Science is a tough career, both highly competitive and not very well paid.
It is not for the fainthearted of either sex.
If (as I doubt) women are as easily discouraged as Summers's critics believe, their future in science is not bright.
The apology was particularly unfortunate because it dignified the criticisms of Summers's remarks at the conference, and those criticisms were obtuse--which brings me at last to the substantive issue.
The critics misunderstood Summers to have been claiming that female scientists are inferior to male scientists.
Not at all.
He made no comparison between male and female scientists.
He was venturing possible explanations other than discrimination (the politically correct explanation) for why there are fewer female scientists than male.
The ratio of female to male scientists is unrelated to the average quality of female and male scientists, and indeed is consistent with the average female scientist being abler than her male counterpart.
In fact if, as Summers's critics allege, and Summers admitted was a possibility, discrimination against women is a major cause of the imbalance in the number of male and female scientists, the implication is that the average female scientist is probably abler than the average male scientist.
Employment discrimination usually manifests itself in a refusal to hire a person in the disfavored class unless he or she is so superior that the refusal would impose serious costs on the institution and perhaps invite a lawsuit.
When anti-Semitism was rife in universities, it was assumed that a Jew had to be abler than a gentile to obtain a university appointment; it would follow that the average Jewish professor was abler than the average gentile professor in that era.
Summers's suggestion that women on average (an essential qualification, obviously) are not willing to invest as much time in a career as men do should not have been controversial.
Women who want to have children, as most do, must expect to devote more time to child care than men do.
That is a brute fact and has nothing to do with scientific careers as such.
Summers's controversial conjecture was that since science-related aptitude tests exhibit less variance in female than in male scores, there are likely to be fewer women in both tails of the distribution--fewer scientific dopes but also fewer scientific geniuses.
Imagine two bell curves, each with the same mean but different variance, superimposed on each other.
The bell curve with the smaller variance (female) will be narrower and thus have shorter tails.
So as one moves toward the end of each tail, the population with the greater variance (male) will increasingly be overrepresented.
This will affect the relative number of the two populations in the tails; it may or may not affect the average quality of the members of each population who are in the tails.
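The tail arithmetic behind this point can be illustrated numerically. The sketch below uses purely hypothetical normal distributions with identical means and standard deviations of 1.0 and 1.1 (numbers chosen only for illustration, not drawn from any actual test data):

```python
from math import erfc, sqrt

def upper_tail(threshold, sigma, mu=0.0):
    """P(X > threshold) for a normal distribution N(mu, sigma^2)."""
    return 0.5 * erfc((threshold - mu) / (sigma * sqrt(2)))

# Hypothetical populations: identical means, slightly different variances.
sigma_narrow, sigma_wide = 1.0, 1.1

for t in (1, 2, 3, 4):
    ratio = upper_tail(t, sigma_wide) / upper_tail(t, sigma_narrow)
    print(f"{t} SDs above the mean: wider-variance group overrepresented {ratio:.1f}x")
```

The ratio rises as the threshold moves outward--modest near the mean, severalfold at four standard deviations--which is why roughly equal performance in the broad middle of the distribution is consistent with a pronounced imbalance at the far tail.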
Summers rightly offered the variance story as a speculation rather than as an established truth, though another fact consistent with it, besides the test scores, is that at the undergraduate level women's science performance is equal to men's--for at that level, one is not as far out in the tail as at the graduate level.
You don't need as much science talent to obtain a B.S.
as to obtain a Ph.D.
Could the difference in variance have a biological basis? That is a legitimate subject of inquiry, which is all that Summers suggested.
I cited Saletan's article, which unlike most media coverage of the controversy engaged with the issues rather than merely playing it as a fight between angry feminists and an embattled public figure.
But Saletan made one silly argument.
It is that the likelihood of a biological explanation for the gender imbalance in science is enhanced by the fact that a man has more genes in common with a male chimpanzee than with a female human being.
It is a surprising fact, but it may well be entirely explicable by the different biological roles of male and female in reproduction; it need have no connection to scientific aptitude.
Summers said that discrimination may also contribute to the imbalance between male and female scientists.
It is certainly in the national interest to eliminate such discrimination, as he strongly believes.
Nevertheless the fact that there may be nondiscriminatory reasons for disparities in occupational choice deserves investigation.
Discrimination has declined, yet occupational disparities between various groups persist, suggesting that we should be looking for causes that are unrelated to discrimination as well as those that are related.
A glance at the composition of different occupations shows that in many of them, particular racial, ethnic, and religious groups, along with one or the other sex and even groups defined by sexual orientation (i.e., heterosexual versus homosexual), are disproportionately present or absent.
For example, a much higher percentage of biologists than of physicists are women, and at least one branch of biology, primatology, appears to be dominated by female scientists.
It seems unlikely that all sex-related differences in occupational choice are due to discrimination; and therefore someone who explores alternative explanations should not be excoriated.
Unless perhaps he is a university president!
There were as usual many interesting comments, not all of which I can reply to.
(Among critics of my position, I particularly commend the thoughtful comment posted by Anonymous on January 24.) I was however startled by the large number of comments that compare profiling to affirmative action and ask that commenters who oppose profiling as demeaning, alienating, and so forth take an equal stand against affirmative action.
Although I have serious reservations about many forms of affirmative action, and although there is indeed a conceptual parallel between it and profiling, since in both cases a single criterion, such as ethnicity, is used as the basis for imposing benefits and burdens respectively, the symmetry is incomplete.
The reason is simply that most beneficiaries of affirmative action are happy to have the benefits! Most people take for granted whatever advantages they have, however adventitious and undeserved.
What is more adventitious than having wealthy parents? And yet how many rich kids are bitter because they have been singled out for benefits unrelated to their merit?
My argument against racial reparations, and likewise compensation for victims of profiling, is not that the beneficiaries will lose self-esteem or otherwise be immiserated by being benefited, but that using ethnic or racial or other such criteria for benefits treats the benefited group as being importantly different from the rest of the community.
I would think it healthy for Americans to become less conscious of their differences, whether the differences are based on race, sex, national origin, ethnicity, politics, or sexual orientation, and think of themselves, rather, as being "just Americans." That would certainly help in presenting a united front against the threat, which is real and probably growing, of international terrorism.
It is particularly important that Arab-Americans and other Americans of Middle Eastern origins or Muslim religion feel fully American.
At the risk of seeming an alarmist (a "McCarthyite," some might call me), I believe that there are almost certainly al Qaeda sleeper cells in the United States, and it is extremely important that they not receive any assistance, financial or otherwise, or protection, active or passive, from the Muslim community.
Cementing the community's loyalty to the United States is a vital national project, and this has to affect the amount of profiling that is in the national interest.
I do not agree with the comment that, in defense of remedial affirmative action, describes profiling as a product of "entrenched bias." When profiling is based on a relevant characteristic, such as a known greater propensity to engage in some antisocial behavior, it need have no connection whatsoever with bias, entrenched or otherwise.
A comment I strongly disagree with is that profiling airline passengers is unsound because no terrorist has ever been intercepted as the result of profiling.
First, we don't know whether this is accurate; people are occasionally stopped from boarding a plane because of a secondary search prompted by their profile, and some of these people may be terrorists though they cannot be proved to be.
Second and more important, knowing that there is profiling may discourage some terrorists from attempting to board an aircraft, since if they are arrested their career as terrorists may be terminated before they can do any harm.
As usual, there were many excellent comments.
Let me respond to some:
Several comments point out correctly that the determination of medical malpractice (that is, medical negligence) by the courts is very often inaccurate; there are many false positives and false negatives.
To that problem, capping damages judgments is no solution.
An attractive solution is testimony by a neutral expert witness.
The fact that judges may have difficulty determining who is neutral is no objection; the judge can ask the parties' medical experts to jointly nominate a third; he would be the neutral and the judge and jury would appropriately rely heavily on his testimony.
The procedure I am suggesting is similar to a widely used procedure for picking a neutral arbitrator: each party designates one arbitrator, the two arbitrators choose the third, who is neutral, and he then provides the deciding vote.
An alternative, mentioned in one comment and already in force in a number of states, is to require the malpractice plaintiff before suing to submit his claim to a panel of physicians, whose findings, if unanimous, are admissible in court should the claim result in a lawsuit.
One comment pointed out that medical errors are often systemic, i.e., they result from erroneous procedures or practices by hospitals, drug companies, and other institutions rather than from mistakes by individual physicians.
However, those entities are suable.
It was also noted that heavy insurance premiums might drive some physicians from practice and deter some people from becoming physicians in the first place.
That is true, but if the result is less medical negligence, the benefits might exceed the costs.
In addition, the overall effect on medical expenses is likely to be slight, because physicians' fees are only a moderate component of overall medical expenses.
Furthermore, if physicians are driven out by high premiums, the resulting reduction in the supply of physicians should enable those who remain to raise their fees.
A slightly esoteric point: one comment suggested that pain and suffering, disfigurement, and other nonpecuniary losses imposed by medical errors are not real costs because people rarely try to buy insurance against such losses.
However, the reason they do not buy insurance is not that the losses aren't real, but that insurance is designed primarily for replacing income or defraying an expense.
I also disagree with the suggestion that, because physician and patient have a preexisting contractual relationship, the level of medical care should be left to negotiation between them.
The principle is fine (though it would require a change in existing law), but the transaction costs would be prohibitive because of the patient's ignorance of particular procedures, risks, and so forth.
In addition, a physician who told his patient that he would operate on him only if he waived his right to sue for medical malpractice would be signaling the likelihood of an unfavorable outcome.
Hence physicians would be reluctant to suggest such waivers.
I was pleased to learn from two of the comments that some insurance companies do experience-rate medical malpractice insurance.
Why others do not is a mystery, but it occurs to me that one possibility is that the inaccuracy of judicial determinations of malpractice is so great that being sued and losing a malpractice case does not provide useful information about the likelihood of being sued in the future.
On this view, malpractice liability is random.
One hopes not; but if so, reforms, such as those suggested above, aimed at increasing the accuracy of malpractice determinations are urgently needed.
There is a movement afoot, assisted by the strengthening of Republican control over Congress, to impose federal limits on tort litigation, particularly medical malpractice; premiums for malpractice insurance have soared in the last two years and physicians are protesting vigorously.
The costs of malpractice premiums are only about 1 percent of total U.S.
health-care costs.
Moreover, insofar as physicians are forced to swallow the cost of the premiums rather than being able to pass them on to their patients or their patients' insurers in the form of higher prices, the premiums do not actually increase total health-care costs.
There is an indirect effect, however, insofar as malpractice liability causes doctors to practice defensive medicine.
But there may be offsetting benefits, to the extent that defensive medicine actually improves outcomes for patients; and surely it does for at least some.
What is more, because malpractice insurance is not experience-rated--physicians are not charged premiums based on their personal liability experience--malpractice liability may have only a slight effect on physicians' methods or carefulness, except insofar as physicians are pressured by their insurers to change their methods in order to reduce the amount of malpractice litigation.
The relation between malpractice premiums and malpractice judgments is also uncertain.
No doubt capping judgments, which is the principal reform that is advocated, has some tendency to reduce premiums, but perhaps not much, because there is evidence that premiums are strongly influenced by the performance of the insurance companies' investment portfolios.
A better reform would be to permit, encourage, or even require insurance companies to base malpractice premiums on the experience of the insured physician, much as automobile liability insurance is based on the driver's experience of accidents.
That would make malpractice liability a better engine for deterring malpractice--which in turn would reduce malpractice premiums by reducing the amount of malpractice.
Capping judgments, in contrast, would reduce the incentive of insurance companies and their regulators to move to a system of experience-rated malpractice insurance.
It is always important to distinguish between financial and real costs.
Insofar as malpractice liability merely transfers wealth from physicians to (some) patients, aggregate costs are unaffected.
The real cost of malpractice liability is limited to the cost of the actual resources consumed by such liability, principally the time of lawyers and expert witnesses (roughly half the total amount awarded in judgments goes to pay lawyers and expert witnesses), unless defensive medicine is assumed to cost more than its benefits in improving treatment outcomes.
The real benefit of malpractice liability is its effect if any in deterring medical negligence; reducing that benefit would impose a real cost.
Hence it is simplistic to assume that the total annual malpractice premiums paid is a good index of the net social cost of malpractice liability, or that measures to reduce those premiums by capping malpractice liability would result in a net improvement in welfare.
To repeat, part of the premiums represent simply a wealth transfer from physicians to the patients who receive malpractice judgments or settlements paid by insurers.
The part (roughly half) that pays for lawyers and expert witnesses should be understood as the cost of maintaining a system for increasing medical safety; the efficacy of the system could be improved, I have argued, by experience rating, but not by capping judgments.
In any event, there is no compelling case for federal limitations on malpractice liability.
The issue belongs at the state level, and as reported in a New York Times article last Friday, a number of states have adopted or are seriously considering adopting the kind of caps being advocated in Congress.
Federal legislation would simply stifle state experimentation with different methods of regulating physicians and prevent us from learning which is best.
There is a stronger case for federal regulation of class actions, as in the case of suits against asbestos manufacturers.
When the members of a plaintiff class are scattered across the country, the class lawyer has a wide range of places in which to sue, and there are certain counties in the United States in which judges and juries are disproportionately generous to tort plaintiffs.
Most of the costs of a large judgment or settlement in such a case are exported to other states, while the benefits are concentrated in the locale where the suit was litigated, because of the business generated for local lawyers, as well as the judgments or settlements received by the members of the class in the locale.
This is a formula for abuse, concretely for a tendency for such judgments and settlements to exceed an unbiased estimate of the true costs imposed on the class by the defendants' misconduct.
Malpractice litigation does not give rise to such an abuse to any very great extent, because patient and physician are usually in the same state, and a single plaintiff has only a limited choice of courts in which to sue.
This is another reason not to make medical malpractice the principal object of federal tort reform.
We should be cautious about tort reform.
It would be unfortunate if interest-group politics, and anecdotes concerning outlandish lawsuits (such as the suit against McDonald's by the customer who spilled hot coffee in her lap), were allowed to obscure the difficult policy issues.
I agree with what Becker has written on this important subject.
I want to approach the subject from a slightly different angle, however, which is to consider why higher education in the United States is dominated by public and nonprofit-private institutions (abroad, almost all education is government-operated) and what this implies about the reasons for the growth of the profit-making institutions.
A nonprofit enterprise is one that (1) enjoys an exemption from taxation and (2) operates under a nondistribution constraint--that is, any surplus of revenues over expenses cannot be distributed as profits to the firm's "owners." The points are related.
To enjoy a charitable exemption from taxes, an institution must not only have a purpose deemed worthy (such as promoting education, health, religion, the arts, and so forth), but must also devote all its resources, including income on endowment, to its charitable purpose.
The nondistribution constraint is indeed constraining, because it means that the institution cannot raise money in the equity markets.
It can compete with profit-making competitors only if it can attract investment from donors.
Generally, this requires that it have many affluent alumni, as they are the principal donors to colleges and universities (partly out of gratitude, partly for the less altruistic reason that they derive prestige from having attended a distinguished institution and they want to help it maintain its distinction).
There is a chicken and egg problem.
To attract children of well-to-do families, and other children who have good earning prospects, the school has to offer an attractive program, good living and athletic facilities, and a distinguished faculty, but all those things cost money, which is hard for a nonprofit institution to raise unless it already has wealthy alumni.
This may be why the very successful nonprofit colleges and universities tend to be quite old.
They have had a long time to "grow" alumni who make generous contributions.
Brandeis University, founded in the 1940s, is one of the few prominent private universities that is not very old--and it has had great trouble building up an endowment (though in part this is because of the elimination of Jewish quotas at other prominent universities--those quotas were one of the major factors in the decision to create Brandeis).
The result is a tendency for nonprofit colleges and universities to be quite expensive.
Access to them by kids who are not well off and do not have good earnings prospects is further restricted by the practice of "legacy admissions," an important part of the fund-raising strategy of the classy nonprofit institutions.
Public colleges and universities take up much of the slack by subsidizing tuition; there are also federal and state loan programs for college tuition.
But tuition expense at public institutions has been rising, at the same time that these institutions have begun angling for more affluent students by becoming semi-private--sometimes more than semi: for example, the University of Michigan, though state-owned, now derives only about 10 percent of its revenue from the state.
The rise of the profit-making college and university, described in Becker's post, can therefore be interpreted as a response to the increasing scarcity of places in nonprofit and public colleges and universities for students who for whatever reason do not have good prospects as high earners, which would make them attractive to and able to afford the tuition charged by the nonprofit and public institutions.
Because such students cannot be relied on for future alumni donations, the capital required for their education must be raised from nonaltruists, i.e., profit-making investors; hence the increasing adoption of the for-profit form.
Nonprofit institutions catering to the low end of the market have also emerged in recent years, but they may be at a competitive disadvantage vis-à-vis profit-making firms, as they may find it difficult to raise capital without an alumni base.
Is fraud and other malfeasance more likely in the new profit-making institutions? I think so, for two reasons.
First, the consumers served by these institutions are less sophisticated than the consumers (the students and their families) of the educational services provided by the established institutions.
Second, established institutions have more "reputation capital" at stake than a new enterprise; hence fraud or other misconduct is more costly to them and so they make greater efforts to prevent it.
This has nothing to do with any differences in "greed" across different organizational forms, but merely with differences in the cost of engaging in misconduct, which is greater for the nonprofit and public institutions because of their clientele and reputation.
But reputation capital is as important to established profit-making institutions, such as the University of Phoenix, as to nonprofit ones.
However, the rapid growth in the number of profit-making colleges and universities means that a disproportionate number of these institutions are new and therefore not yet established, which suggests that fraud may indeed be on the increase, as the New York authorities believe.
Even so, that is no reason to shut down the profit-making educational sector, which may have discovered a demand for college education that the nonprofits had overlooked.
Given the private as well as social return to higher education, the contribution of for-profit colleges and universities should not be disparaged.
President Bush has suggested that spreading democracy is the surest antidote to Islamist terrorism.
He can draw on a literature that finds that democracies very rarely go to war with each other, although a conspicuous exception is the U.S.
Civil War, since both the Union and the Confederacy were democracies.
Hamas, which has just won a majority in the parliament of the Palestinian proto-state, is a political party that has an armed terrorist wing and is pledged to the destruction of Israel.
Can that surprising outcome of what appears to have been a genuinely free election be squared with the belief that democracy is the best antidote to war and terrorism?
The first thing to note is that one democratic election is not the equivalent of democracy.
When Hitler in 1933 was asked by President Hindenburg to form a government, the processes of democracy appeared to be working.
The Nazi Party was the largest party in the Reichstag; it was natural to invite its leader to form a government.
Within months, Germany was a dictatorship.
So the fact that Hamas has won power fairly and squarely does not necessarily portend the continuation of Palestinian democracy.
But suppose Palestine remains democratic.
What can we look forward to? I don't think the question is answerable if democracy is analyzed realistically.
The great economist Joseph Schumpeter sketched, in his 1942 book Capitalism, Socialism, and Democracy, what has come to be called the theory of "elite" or "procedural" or "competitive" democracy.
In this concept, which I have elaborated in my book Law, Pragmatism, and Democracy (2003), and which seems to me descriptive of most modern democracies, including that of the United States, there is a governing class, consisting of people who compete for political office, and a citizen mass.
The governing class corresponds to the selling side of an economic market, and the citizen mass to the consuming side.
Instead of competing for sales, however, the members of the governing class compete for votes.
The voters are largely ignorant of policy, just as consumers are ignorant of the inner workings of the products they buy.
But the power of the electorate to turn elected officials out of office at the next election gives the officials an incentive to adopt policies that do not outrage public opinion and to administer the policies with some minimum of honesty and competence.
It was Fatah's dramatic failure along these dimensions that opened the way to Hamas's surprisingly strong electoral showing.
Hamas cleverly coupled armed resistance to Israel with the provision of social welfare services managed more efficiently and honestly than the services provided by the notoriously corrupt official Palestinian government, controlled by Fatah.
In troubled times, such as afflicted Germany in the early 1930s and Palestine today, democratic elections provide opportunities for radical parties that provide an alternative to discredited policies of incumbent officials.
The worse the incumbent party, the better even an extremist challenger looks.
The German example suggests that  moderation of a radical party when it takes power is not inevitable.
The party may continue its radical policies and even use its initial popularity to destroy democracy.
Hitler and Mussolini took power in a more or less orderly democratic fashion and Lenin by a coup, but in all three cases the consequence of the seizure of power by a radical party was the opposite of moderation.
Hitler and Mussolini remained popular until their policies failed dramatically; there is no theoretical or empirical basis for supposing that popular majorities in all societies are bound to favor more enlightened policies than a dictator or oligarchy would.
How then to explain the empirical regularity that democracies rarely war with each other, and the concomitant hope that if Palestine were democratic it would stop trying to destroy Israel? The answer lies in considering what is required for democracy to take root rather than to make a rapid transition to dictatorship.
Democracy is unstable unless anchored by legally protected liberties, including freedom of speech, freedom from arbitrary arrest, and property rights.
The liberties in turn tend to be unstable without a measure of democracy.
When there are no liberties, a one-sided election can result in a quick extinction of democracy, because there is nothing to prevent the winner from calling an end to the electoral game in order  to perpetuate his control.
When there is no democracy, rulers are not effectively checked, and corruption and other abuses flourish.
The combination of democracy and liberty, as in the U.S.
Constitution, provides an auspicious framework for prosperity, resulting eventually in dominance of the society by a large middle class.
Middle-class people don't have much taste for offensive wars or violence in general.
They are not specialized to such activities, which benefit primarily monarchs and aristocrats (who internalize martial values), impoverished adventurers, and (closely related to the adventurers) political and religious fanatics.
(This is in general, not in every case; the Germany that Hitler took over was a middle-class republic, democratic though imperfectly so.) As Samuel Johnson said, people are rarely so innocently engaged as when trying to make money, since in a well-ordered society they can do that only through trade, which wars disrupt.
So democracy itself is not a panacea for the world's political ills and dangers.
But if the Palestinians are able to develop a genuinely republican government and move rapidly toward embourgeoisement, there is some hope for the eventual emergence of a peaceful Palestinian state.
There is another point, special to the Palestinian situation, that provides a further ray of hope.
With Hamas in power, its members are paradoxically much more vulnerable to Israeli military power than they were when Fatah was in power.
The Hamas leaders then were scattered and hidden and efforts to fight them risked killing innocent civilians and discrediting the Palestinian government, with which Israel was trying to make peace.
Given Fatah's inability to suppress Hamas, Israel could not crush Hamas by  bombing the government buildings occupied by Fatah.
Once Hamas is the government, however, further violence toward Israel by Hamas members can be met appropriately by massive military force directed against the organs and leaders of the government.
This threat may cause Hamas to avoid attacks on Israel.
Hamas's victory may be the best thing that has happened to Israel in years.
A number of interesting comments, as usual.
I respond to a number of them here.
On whether unions promote efficiency, a commenter was correct to point out that unions can benefit members, but they do so by restricting competition among workers.
While this may raise the wages of unionized workers, it harms nonunionized workers (as well as consumers).
If because of unionization an employer's wage bill rises, its demand for labor will decline, which means that fewer workers will be employed.
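The employment effect can be illustrated with a constant-elasticity labor-demand curve; the wage figures and the elasticity of -0.5 below are assumptions for illustration, not estimates.

```python
def employment(wage, base_wage=20.0, base_jobs=1000, elasticity=-0.5):
    """Jobs demanded at `wage` on a constant-elasticity demand curve,
    calibrated so that base_jobs are demanded at base_wage.
    The elasticity of -0.5 is purely illustrative."""
    return base_jobs * (wage / base_wage) ** elasticity

# A union wage 25% above the assumed competitive wage of $20 reduces
# employment from 1000 jobs to about 894.
print(round(employment(20.0)), round(employment(25.0)))  # 1000 894
```

The same curve shows the commenter's point and mine together: unionized insiders gain a higher wage while the jobs lost fall on workers outside the union.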
By the way, in response to another comment, the decline in unionization in the private sector seems to me better evidence that union-protected employment is less efficient than employment at will than a study would be.
It is the real market test.
One commenter suggests that tenure increases the incentive of workers to invest in specialized skills.
This may be true, but observation suggests that employers are able to encourage such investment without granting tenure.
All sorts of nontenured private-sector workers, including doctors, lawyers, and engineers, invest in specialized skills.
Becker explains the mechanism: specialization in firm-specific skills  may make a worker more dependent on his employer, but it also increases the worker's value to the employer.
One comment perpetuates the very natural error of thinking that Einstein was employed by Princeton University.
He was employed by the Institute of Advanced Study, which is located in Princeton, New Jersey, but is not part of the university.
Princeton U.
has garnered a great deal of prestige from the co-location of the Institute!
Another and more germane misunderstanding is that tenure is guaranteed employment.
That is not correct.
If a college shuts down, it does not have to continue paying the tenured faculty.
And I think, without being certain, that if a university closes a department, it doesn't have to retain the faculty of that department on its payroll.
In effect what tenure guarantees is that you won't be replaced--even by a better candidate!
Incidentally, I do not suggest that a university or other employer should be forbidden to offer a tenure contract if the employee is willing to accept a compensating reduction in wage.
The problem is asymmetric information.
If the employee asks for such a contract, the employer may wonder whether the employee has private information that he is not sharing--for example, that he doesn't intend to work hard any more.
I agree that tenure protects academics against being fired because of their unpopular ideas, but there are other forms of retaliation that are almost as effective.
If there is a market for the unpopular idea, the fired professor can find another job.
If there is no market, he's likely to be ostracized by his peers.
I would like some examples of where tenure made the difference between production and suppression (presumably temporary) of a genuinely important idea.
One comment misunderstands me as advocating abolition of tenure for civil servants.
Not so.
All I said was that I didn't think the Supreme Court in the name of the First Amendment should have abolished the spoils system.
I emphasized that when performance measures are unavailable, which they often are for public services, the creation of a "high commitment" environment, including tenure, as a substitute for high salaries to compensate for risk of being fired for nonobjective reasons, may be optimal.
A spoils system may well be less efficient than a tenure system, yet the tenure system may be less efficient than employment at will in settings in which performance measures are feasible.
I do think tenure for judges makes sense, because without it the judiciary would be excessively politicized.
I do not have tenure in my part-time teaching job at the University of Chicago, and I think that's fine.
Most Americans employed in the private sector do not have any job protection.
They are what are known as "employees at will." They can quit or be fired at any time for any reason other than a reason forbidden by law, such as race.
Unionized workers (now a very small percentage of the private-sector work force) have some job protection; they can be laid off if their employer experiences a fall in demand and therefore doesn't need as many workers, but they can be fired only "for cause," normally some form of deficient job performance.
In the public sector, most employees below the top political level have extensive job protection (including teachers), except in the military and other national-security employment, such as the CIA.
Generally, civil employees of the government can be discharged only for cause, which often is very difficult to prove.
The Supreme Court has largely abolished, in the name of free speech, the "spoils" system whereby state and local government jobs were given to the political supporters of the party in power.
Federal judges can be removed (barring physical or mental disability) only by the cumbersome process of impeachment by the House of Representatives and conviction by the Senate.
An important category of job-protected workers that bridges the public and private divide is tenured professors, who cannot be fired without cause.
Finally, in Europe most workers have far more extensive job protections than American workers do.
The question I wish to address is whether this pattern makes any economic sense.
One way to pose the question is to ask why--since employment at will is the cheapest form of employment contract--aren't all employees employees at will? In the otherwise dissimilar cases of unionized workers and public employees protected by the Supreme Court's interpretation of the First Amendment against political firing, tenure (employment protection) is imposed from the outside.
Employers would like greater flexibility, but outsiders--unions or judges--impose tenure for their own reasons.
Unions worry that without tenure protection, employers will pick off the union's supporters; the Supreme Court worries that without tenure protection public employees will be afraid to express political views opposed to those of their superiors, and so freedom of expression will be curtailed.
But surely the curtailment would be slight, since few public employees will engage in public disagreement with their superiors even if they can't be disciplined for doing so.
Moreover, there is a tradeoff between professional competence and personal loyalty.
A slightly less able employee who is loyal to his superiors because of political compatibility or even nepotism will work more harmoniously with them, and the reduction in friction may offset a (modest) competence deficit.
Tenure is an efficient system in what organizational economists call a "high commitment" workplace.
Contrast two types of enterprise.
In one, the contribution of the individual employee to the enterprise's output is readily measured.
Ordinarily this will be a business firm.
Revenues, costs, and ultimately profits provide objective measures of performance.
The individual employee's contribution to those measures may be more difficult to measure, especially when employees work in teams.
But reasonable estimates are usually possible--employees and their superiors negotiate reasonable goals for the coming year relating to sales, markups, and cost reductions, and progress toward those goals is measured throughout the year.
Employees can therefore be paid a salary or wage that approximates their marginal product.
With their productivity continuously measurable, there is no need for job protection.
Or so it seems; for even in a firm, there may be some benefits to providing a degree of job protection.
Suppose employees are in a position where by sharing their know-how they could increase the productivity of other employees.
They may be reluctant to do this if they fear losing their jobs because they have helped the other employees become more productive than they.
Some firms deal with this problem by making an employee's  annual bonus depend not only on his own contribution but also on the overall performance of the firm that year.
This is a more flexible method than giving workers tenure.
The sharing problem is sometimes offered as an argument for how unionization might actually increase productivity.
But it is a weak argument.
If tenure is an efficient employment contract, employers will institute it without union prodding.
The steep decline of unionization in the private sector is a convincing "Darwinian" refutation of the argument one used to hear that unions actually promote efficiency.
Although performance measures are generally most feasible for business firms, some governmental or other noncommercial activities lend themselves to such measures.
Criminal-investigation agencies such as the FBI provide good examples.
An FBI agent can be evaluated by the number of arrests he makes weighted by convictions (arrests that do not lead to convictions are not productive), with the convictions in turn weighted by the length of the sentence and the value of any property recovered as a result of the prosecution.
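A value measure of this kind can be sketched as a simple weighted score. The weights and data below are hypothetical illustrations, not actual FBI practice.

```python
def agent_score(arrests, sentence_weight=1.0, recovery_weight=1.0):
    """Value-weighted score for a hypothetical investigator: an arrest
    counts only if it led to a conviction, weighted by sentence length
    (in years) and property recovered (per $1,000). All weights are
    illustrative assumptions."""
    score = 0.0
    for a in arrests:
        if not a["convicted"]:
            continue  # arrests that do not lead to convictions are not productive
        score += sentence_weight * a["sentence_years"]
        score += recovery_weight * (a["property_recovered"] / 1000)
    return score

arrests = [
    {"convicted": True, "sentence_years": 5, "property_recovered": 20000},
    {"convicted": False, "sentence_years": 0, "property_recovered": 0},
    {"convicted": True, "sentence_years": 2, "property_recovered": 0},
]
print(agent_score(arrests))  # 5 + 20 + 2 = 27.0
```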
Note that the measure here, as in a firm, is not a simple quantitative measure of contribution to output, but rather is a value measure.
In activities (some of which may be team production within business firms) in which performance measures are infeasible, usually because either the value of output or the employee's contribution to that output cannot be quantified, other methods of employee motivation than performance-based compensation must be sought.
The "high commitment" workplace is a recognition that, fortunately, employees have other motivations for working productively besides the hope of salary increments, such as identification with the goals of the employer, as when judges and (other) civil servants internalize a "public service" ethic that induces them to work productively for a modest wage with limited hope of advancement.
Tenure in such a setting both encourages sharing and discourages "influence activities," a term organizational economists use to refer to the kind of jockeying for position that occurs in the workplace when the absence of objective performance measures opens the door to worker competition based on personality, connections, and intrigue.
Even in a high-commitment environment, additional motivation may be provided by a tournament-style promotion system.
Even if an employee's output cannot be measured with any precision, it may be possible to identify the best employee because the gap between his contribution and that of the next best may be large enough to be perceived without being quantifiable.
Promoting the best employee to the next rank is therefore a method of incentivizing employees to do their best.
Both judicial and academic tenure are defended as needed to encourage independent thought and prevent political retaliation for unpopular views.
This rationale is more persuasive in these contexts than in that of ordinary public employees, but it is not very satisfactory.
In most nations, including nations that we consider our peers, the judiciary is insulated from political pressures but the judicial career is much like that of other employees.
Judges start at the bottom rung of the judiciary when they are appointed and work their way up by impressing their superiors.
The U.S.
federal judicial system (also the British judiciary, and that of the other former British possessions) is unusual in being a system of lateral appointments (from practice or the academy, generally) with very limited promotion.
The difference may be due to the fact that the Anglo-American and especially the U.S.
legal system gives much more discretionary authority to judges than foreign legal systems do, so that identifying the "best" for promotion is difficult and even arbitrary.
I do not think tenure makes a great deal of sense any longer in the academic setting, and I expect to see it gradually abandoned.
(It has already been abandoned in England, for example.) If a university wishes to offer its faculty protection against political retaliation for unpopular views, it can do that by writing into the employment contract that politics is an impermissible ground for termination.
Tenure is no longer needed because of an absence of performance measures.
These measures exist in abundance.
Quality of teaching is readily measurable by student evaluations, provided care is taken to prevent teachers from courting popularity by easy grading and light assignments, and student evaluations are supplemented by faculty observation of the classroom.
Quality of research is readily measurable by grants, prizes, and above all by citations to the professor's scholarly publications, weighted by the quality of the journal in which the citations appear.
In some fields, such as mathematics, there is generally a significant falling off in academic output at a young age, and there is fear that without tenure these faculty would be turned out to pasture long before retirement age.
But this is no different from the situation in professional sports, modeling, and other youthful occupations, where it is handled by an alteration in the wage profile.
If a career in mathematics entails a sharp fall-off in market wages after, say, age 40, the academic market will compensate by offering disproportionately high wages to young mathematicians; otherwise, talented mathematicians will choose professions, such as economics, in which math skills are valued but productivity does not decline steeply with age.
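The compensating wage-profile argument can be made concrete with a toy present-value calculation. All figures (wages, discount rate, career lengths) are hypothetical: if the market wage falls sharply after year 15 of a 40-year career, the early-career wage must be well above the steady-career wage for lifetime earnings to be equivalent.

```python
def pv(wages, r=0.05):
    """Present value of a stream of annual wages at discount rate r."""
    return sum(w / (1 + r) ** t for t, w in enumerate(wages))

years, early = 40, 15          # 40-year career; productivity falls after year 15
target = pv([100] * years)     # steady-productivity career paying 100 per year

# Late-career wage drops to 40; solve for the early wage that equalizes
# the present value of lifetime earnings across the two careers.
late_pv = pv([0] * early + [40] * (years - early))
early_annuity = pv([1] * early)
w_early = (target - late_pv) / early_annuity
print(round(w_early, 1))       # well above the steady wage of 100
```

With these assumed numbers the front-loaded career must pay roughly 40 percent more per year in its first fifteen years, which is the "disproportionately high wages to young mathematicians" of the text.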
One reason for the superior productivity of U.S. workers compared to European workers is that tenure encourages laziness by reducing the cost of laziness to the worker.
But that is not the principal problem.
Tenure removes the stick but not necessarily the carrot.
More productive professors can be paid more and, even if their university has a lock-step compensation system, can obtain prestige and outside income by outstanding performance.
The greater cost of tenure is simply in forcing retention of inferior employees.
The 80-year-old mathematician may be working hard, but he may be incapable of achieving the output of the 25-year-old mathematician who would take his place were it not for tenure.
Note how governmental prohibition of compulsory retirement at a fixed age aggravates the inefficiency of tenure--and is no doubt contributing to its eventual abandonment.
Perhaps the strongest argument for academic tenure is that without it academics would be reluctant to undertake promising projects with a high risk of failure.
But the situation is no different in "knowledge" firms such as software and pharmaceutical-drug producers, which encourage their scientists to undertake high-risk projects--and do not think it necessary to offer tenure.
If most good new ideas are produced by young academics, then an institution that raises the average age of faculty, namely tenure, seems likely to reduce academic productivity.
An interesting empirical project, therefore, would be to study the effect of England's abolition of tenure on the average age and productivity of English university faculties.
One comment in particular merits a response: that Botswana, one of the best-governed countries in sub-Saharan Africa, and one of the most prosperous (its GDP per capita is about $10,000, and thus close to the Republic of South Africa's $12,200), has one of the highest AIDS rates in Africa (24 percent), and therefore I am wrong to suggest that bad government and poverty are the roots of Africa's disproportionate incidence of HIV-AIDS.
I had pointed out, however, that the factors that influence a country's AIDS rate are multiple.
One of them is migrant labor, which facilitates prostitution and casual sex.
Botswana sends many of its workers to South Africa, and also sits astride major north-south traffic arteries.
In addition, its relative prosperity has led to rapid urbanization, and cities provide greater opportunities for casual sex than rural areas.
Another factor that has undoubtedly influenced the AIDS rate in Botswana is that the AIDS epidemic started in southern Africa, which is probably why the highest African AIDS rates are in the countries of southern Africa, which include Botswana.
The epidemic had already spread widely before preventive measures, including education in the danger of the disease, were taken.
But that of course cannot explain why Botswana's AIDS rate is higher even than South Africa's.
A further point about Botswana is that although its average income is high, because of its diamond mines and tourist attractions, the income is very unevenly allocated.
Most of the population is very poor (30 percent are below the poverty level) and 20 percent are illiterate.
However, a wealthy country with many poor can pay for public health, and in fact Botswana has an advanced program for combating AIDS by education, free condoms, etc., and has had it for a number of years, so it is a puzzle why the overall incidence of HIV-AIDS remains so high.
I want to emphasize two other factors that did not receive adequate attention in my original post.
One is that short life expectancies reduce the cost of risky behavior: a 25 percent risk of dying from AIDS is very high, but the expected cost of death that is generated by that risk is lower the higher the probability of dying young from some other disease, a probability much higher in sub-Saharan Africa than in the United States.
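A toy expected-value calculation (numbers hypothetical) makes the point: the same fatal risk costs fewer expected life-years where background mortality already shortens expected remaining life.

```python
def expected_years_lost(p_fatal, years_remaining):
    """Expected life-years lost to a risk with probability p_fatal,
    given expected remaining years of life absent that risk."""
    return p_fatal * years_remaining

p = 0.25  # the same 25% risk of dying from AIDS in both settings
print(expected_years_lost(p, 50))  # low background mortality: 12.5 years
print(expected_years_lost(p, 25))  # high background mortality: 6.25 years
```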
Another factor is the prevalence of other sexually transmitted diseases, such as syphilis, which increase susceptibility to infection by the AIDS virus.
Both of these points are discussed in an early article by Tomas J. Philipson and me on AIDS in Africa--"The Microeconomics of the AIDS Epidemic in Africa," 21 Population and Development Review 835 (1995)--and in a more recent article by Emily Oster, "Sexually Transmitted Infections, Sexual Behavior and the HIV/AIDS Epidemic," 120 Quarterly Journal of Economics 467 (2005).
We note in our article the curious positive correlation in Africa of AIDS with education, and suggest that educated Africans are likely to be urban and therefore have more opportunities for casual sex.
On the economics of AIDS generally, see also Philipson's and my book Private Choices and Public Health: The AIDS Epidemic in an Economic Perspective (1993).
From an economic standpoint, President Bush's proposal to treat as taxable income of employees the amount of employer health insurance that the employee receives in excess of $15,000 a year for a family or $7,500 a year for an individual is a step in the right direction.
But, from that same standpoint, his proposal to subsidize the purchase of individual (nonemployer-provided) health insurance, in order to reduce the number of people who have no health insurance (now almost 47 million), is a step in the wrong direction.
I cannot think of a good reason for subsidizing health insurance, or, indeed, for the demand for noncatastrophic health insurance.
The economic explanation for insurance is that because of diminishing marginal utility of income, people will pay to avoid a big financial loss (e.g., will pay $2 to avoid a 1/100,000 prospect of a $100,000 loss, even though the actuarial cost of such a prospect is only $1), but most medical expenses are modest.
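The arithmetic in the parenthetical follows from expected value, and a concave utility function shows why a risk-averse consumer pays more than the actuarial cost. This sketch assumes log utility and a $200,000 wealth figure, neither of which is in the original.

```python
import math

wealth = 200_000       # assumed initial wealth (illustrative)
loss = 100_000
p = 1 / 100_000

# Actuarial (expected) cost of the risk: $1.
actuarial = p * loss

# With log utility, the premium m that leaves the consumer indifferent
# between insuring and bearing the risk solves:
#   log(wealth - m) = (1 - p) * log(wealth) + p * log(wealth - loss)
eu = (1 - p) * math.log(wealth) + p * math.log(wealth - loss)
premium = wealth - math.exp(eu)
print(round(actuarial, 2), round(premium, 2))  # premium exceeds actuarial cost
```

Under these assumptions the indifference premium is about $1.39, above the $1 actuarial cost; stronger risk aversion than log utility would push it toward the $2 figure in the text.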
So if there were no tax subsidy for health insurance, probably much less would be purchased, which would be fine.
People might even be healthier, because diet and other life-style choices are substitutes for medical care and thus for health insurance.
The fact that millions of people have no health insurance does not strike me as a social problem.
It is true that they are free riders, but so to a considerable degree are the insured, since their premiums don't vary much or at all with how much health care they obtain.
As Becker points out, the quality and conditions of charity medical treatment (such as long queues in emergency rooms) discourage overuse of "free" medical care--it isn't really free, because the nonpecuniary costs are substantial; among those costs are the fear and discomfort associated with medical treatment.
Becker also points out that the uninsured are not the most frequent visitors to emergency rooms.
Many of them can afford to pay at least the modest expenses that are all that are required to obtain most medical treatments in the market.
They do not need to resort to charity and indeed, unless they are indigent, they are ineligible for it.
The choice not to carry health insurance is of course influenced by the fact that individual as distinct from group health insurance is very expensive.
There is a good reason for this--adverse selection.
Sickly people are the most likely to insure, driving up premiums and causing the healthy to drop out of the insurance pool.
This effect is reduced when insurance is tied to employment, both because the sickly are less likely to be employed and, more important, because the healthy cannot opt out without quitting their jobs.
The combination of high premiums and low demand observed in the individual insurance market is thus efficient.
I see no need for public intervention, as proposed by the President.
The best, though politically unattainable, reform would be to abolish Medicare, brutal as the suggestion sounds.
Then people would purchase catastrophic or other medical insurance for their old age, or depend like the young on charity.
If it were thought "unfair" to make elderly people of limited means pay for their entire costs of health care, there could be a subsidy, but it should be means-tested, unlike Medicare.
Why taxpayers should pay the medical expenses of affluent oldsters, of whom there are a great number, is an abiding mystery, at least from an ethical as distinct from a political standpoint.
There is widespread concern, though to a considerable extent politically generated, with the total amount of money spent on health care in the United States.
To the extent that the money is spent by individuals or firms without any public subsidy, there is no economic problem.
If people want to spend more of their money on medical care and less on food or housing because they greatly value good health and longevity, that is their free, legitimate, and authentic choice.
It is a sign of affluence that the nation can afford to devote so high a percentage of national income to medical care.
The Detroit auto manufacturers complain that the high costs of their employer health insurance make it difficult to compete with foreign firms.
That is not a social problem, and indeed makes little sense.
Foreign firms such as Toyota manufacture cars in the United States, yet are able to control their labor costs.
Competition will force the Detroit firms to do likewise.
Their business error in making long-term commitments to their unionized workers is being punished by the market, as, from an economic standpoint, it should be.
A legitimate concern about health costs is with the expense to the taxpayer of health-care entitlement programs, mainly Medicare.
Yet even that concern is exaggerated.
The demand for medical care is not as open-ended as the demand for other goods and services, with the exception of such purely optional medical treatments as cosmetic surgery for people who are not severely disfigured and drugs to enhance athletic performance, and such nonillness-related medical expenses are not subsidized.
If they were, demand would soar.
But most people do not court illness in order to be able to consume subsidized medical care, or demand more medical care than is necessary to treat their illnesses.
This means that the demand for medical care is driven primarily by the prevalence of illness and the progress of medical technology rather than by the payment scheme.
Even abolishing Medicare, therefore, would probably not greatly affect the amount of money that is spent in the United States on medical care.
Whether that money is spent by the sick or by the taxpayer is more than a detail, in part because withdrawal of subsidy might induce people to adopt a healthier style of living, but it is not the principal factor driving total health costs.
The term is indeed an oxymoron.
Libertarianism, as expounded in John Stuart Mill's On Liberty, is the doctrine that government should confine its interventions in the private sector to what Mill called "other-regarding" acts, which is to say acts that cause harm to nonconsenting strangers, as distinct from "self-regarding" acts, which are acts that harm only oneself or people with whom one has consensual relations authorizing acts that may result in harm.
So, for example, if you are hurt in a boxing match, that is a "self-regarding" event with which the government has no proper business, provided the boxer who hurt you was in compliance with rules--to which you had consented--governing the match, and provided you were of sound mind and so could give meaningful consent.
Paternalism is the opposite.
It is the idea that someone else knows better than you do what is good for you, and therefore he should be free to interfere with your self-regarding acts.
Paternalism makes perfectly good sense when the "pater" is indeed a father or other parent and the individual whose self-regarding acts are in issue is a child.
In its more common sense, "paternalism" refers to governmental interference with the self-regarding acts of mentally competent adults, and so understood it is indeed the opposite of libertarianism.
The yoking of the two in the oxymoron "libertarian paternalism" is an effort to soften the negative connotation of paternalism with the positive connotation of libertarianism.
I would further limit the term "paternalism" to situations in which the government wishes to override the informed preferences of competent adults.
The dangers of smoking are well known; indeed, they tend to be exaggerated--including by smokers.
(The increased risk of lung cancer from smoking is smaller than most people believe.) Interventions designed to prevent smoking, unless motivated by concern with the effect of smoking on nonsmokers (ambient smoke, which is not much of a health hazard but is an annoyance to nonsmokers), are paternalistic in the sense in which I am using the term.
Thus I was not defending paternalism when I defended the ban on trans fats in New York City restaurants.
If people are aware of the dangers of trans fats but wish to consume them anyway, the only nonpaternalistic ground for intervention, which I would be inclined to think insufficient by itself, is that they may be shifting some of the costs of their medical treatment for heart disease to taxpayers who forgo consumption of trans fats.
If, however, people don't know the dangers of trans fats and it would not be feasible for them to learn those dangers (prohibitive transaction costs), and if as I believe the dangers clearly exceed any benefits from trans fats compared to substitute ingredients, then the ban can be defended on nonpaternalistic grounds, as I attempted to do.
Another way to put this is that it is not paternalistic to delegate a certain amount of decision making to the government.
There are some goods that government can produce at lower cost than the private sector, and among these is the banning of trans fats from food served in restaurants.
It might seem that the good could be produced just by competition-impelled advertising by restaurants that do not use trans fats.
But such a suggestion ignores the difference between disseminating and absorbing information.
If you have a peanut allergy, and the label on a package of cake mix says that the mix contains peanut oil, you know not to buy it; the cost of absorbing the information on the label is trivial.
But if you are told that a restaurant does not use trans fats in its meals, determining the significance of that information to you would require you to undertake a substantial research project.
You would have to learn about trans fats, somehow estimate the total amount of trans fats that you consume every year, estimate the amount of trans fats in the restaurant meals you consume relative to your total consumption of trans fats, and assess the significance of that consumption in relation to other risk factors that you have or don't have for heart disease.
Few people have the time for such research, or the background knowledge that would enable them to conduct it competently.
Given that trans fats have close substitutes in both taste and cost, it is not unrealistic to suppose that the vast majority of people would if consulted delegate to government the decision whether to ban trans fats.
One of the great weaknesses of "libertarian paternalism" is its failure to weigh adequately the effect on government officials of the very cognitive and psychological quirks that libertarian paternalists emphasize.
The quirks are not a function of low IQ or a poor education; they are universal, although there is a tendency for the people least afflicted by them to enter those fields, such as gambling, speculation, arbitrage, and insurance, in which the quirks have the greatest negative effect on rational decision making.
As Edward Glaeser has pointed out, the cost of these quirks to officials--who are not selected for immunity to them--is lower than the cost to consumers, because the officials are making decisions for other people rather than for themselves.
Professor Sunstein has posted a response to Becker's and my postings on the University of Chicago Law School blog.
It can be accessed at http://uchicagolaw.typepad.com/faculty/.
One comment that was made on my post, a comment similar to arguments made by Sunstein and Thaler, is that paternalism can be libertarian if it does not extinguish consumer choice.
An example is "cooling off" requirements in laws such as the Truth in Lending Act.
The Act does not forbid a person to borrow at what might strike an observer as an exorbitant interest rate, but merely gives him an opportunity to rescind without penalty what may have been an impulsive decision to borrow at that rate.
It is true that such a law does not interfere with freedom of choice as much as a law imposing an interest ceiling would do.
But it does interfere with it to an extent, by increasing lenders' costs.
Moreover, if it is true that borrowers would be happier in the long run to have their impulses checked in this fashion, then some lenders would offer the rescission right without prodding by government--in fact it is quite common for sellers to permit consumers to return goods they have bought without penalty.
So perhaps the matter of cooling off can be left to the market after all.
Here it should be added that impulse control can often be left to the impulsive.
Chocoholics may decide not to keep chocolate in their house because they know they cannot control their "addiction" otherwise.
There is something to be said for encouraging self-control rather than shifting some or much of the responsibility for impulse control to the government.
I distinguish the cooling-off case and self-control issues generally from the trans fat case (to which several of the commenters returned) on the ground that the market may not solve a problem when information costs are prohibitive.
They are not prohibitive in the cooling-off case.
There is also an empirical question whether cooling-off requirements have any effect, or whether borrowers ignore them as government-mandated paperwork.
I don't consider proposals for energy conservation, even when required by government rather than undertaken on purely private initiatives, to be paternalistic.
If as I believe the social costs of global warming and excess empowerment of oil-exporting countries are considerable, then government should intervene, since few individuals will reduce their consumption of energy in order to reduce the social costs of energy consumption; the contribution that the individual's change in consumption would make to energy conservation would be virtually zero.
I agree with the comments that disavow doctrinaire libertarianism.
I am not an "anarcho-capitalist," which is the extreme of libertarianism, or even a strict Millian (nor was Mill!).
I'm happy to listen to arguments for government interventions designed to protect people from themselves, even if they are adults and not mentally incompetent.
What troubles me is that the interventions may be thought up by officials suffering from the same cognitive or emotional limitations as the consumers or other private individuals with whose choices they want to interfere; that the interventions may be politically motivated rather than based on efficiency norms; and that once one begins questioning consumer competence it is difficult to know where to draw the line.
I do think the cognitive and psychological limitations are real, but I happen to think that they are especially serious in a domain of policy that I have written about extensively in recent years--that of national security intelligence, where the limitations--operating on intelligence officers and policymakers, which is to say government actors--explain many intelligence failures.
With the decline in AIDS among the white population in the United States, the advent of effective treatment (the antiretroviral drugs), and the slowing in the growth of the international epidemic, Americans' interest in the disease has waned.
Only about a third of one percent of the U.S. population is infected by the AIDS virus (HIV), and half of those infected are black (thus the per capita prevalence of the disease among blacks is roughly four times as great as its prevalence among whites).
Among whites, the principal means of transmission are homosexual sex; among blacks, heterosexual sex and needle-sharing drug use.
The international epidemic is undiminished, indeed growing, though at a diminished rate.
Some 40 million people worldwide are infected by HIV, up from 8 million in 1990.
But the international distribution of the epidemic is remarkably skewed.
In North America and in Western and Central Europe, the prevalence is only .3 percent, and in most of the world it is no higher than 1 percent.
In the Caribbean countries, however, it is 1.2 percent (which is the approximate prevalence among U.S. blacks), and in sub-Saharan Africa it is at least 6 percent and perhaps as great as 10 percent.
Because antiretroviral drugs are available to only about 20 percent of the infected population in sub-Saharan Africa, the death rate is much higher than elsewhere, and indeed about two-thirds of the world's AIDS-related deaths occur there.
The ratio of annual deaths to total infected persons is about 10 percent in sub-Saharan Africa, versus 1 percent in the United States.
Even within sub-Saharan Africa, there are vast differences in the prevalence of the disease among the different countries.
Most of the West African countries, including Nigeria (Africa's most populous country), have prevalence in the 5 to 7 percent range.
But there are a number of countries in southern Africa, notably the Republic of South Africa, where the prevalence is in excess of 20 percent (it is 24 percent in Botswana, for example).
The overall prevalence of the disease in sub-Saharan Africa seems, however, to have peaked, so that the continuing increase in worldwide prevalence is being driven by increases in other countries, mainly in Asia.
The disease is a principal focus of foreign aid by wealthy nations, multinational groups such as the United Nations, and private foundations such as the Bill and Melinda Gates Foundation.
The total amount of money spent fighting AIDS in other than the wealthy countries has been estimated at $8.3 billion a year, of which $2.6 billion is spending by the affected countries themselves and the rest represents donations--so a total of about $5.7 billion in foreign aid.
The money goes for such things as buying condoms, educating people about the disease, training health workers, and buying antiretroviral drugs.
There is, of course, a great deal of waste.
The United States devotes a significant fraction of its assistance to preaching sexual abstinence and requires that all the condoms it supplies be purchased from U.S. manufacturers, which charge much higher prices than Asian manufacturers.
I am dubious that the foreign donations are money well spent, compared to alternatives.
This is not because HIV-AIDS isn't a ghastly disease, and economically very harmful because of its debilitating effect on the working-age population, to which most of the victims belong; it is because the causes of its prevalence in those countries in which it is prevalent are social and economic conditions, or political decisions, that must be changed before there can be any real hope of significantly reducing the prevalence of the disease, and that are unlikely to be changed by foreign money.
The causes include profound ignorance about the disease (due in part to superstition and in any event an aspect of much broader deficiencies in education and literacy), miserable living conditions and short life expectancy which reduce aversion to risky behavior, migrant male labor that increases the demand for paid sex, cultural traditions of male promiscuity, female circumcision (a risk factor for HIV), and the extremely low status of women that drives many of them into prostitution and reduces their ability to bargain effectively with men over safe sex, to which men are more averse than women.
Underlying all these things is the extreme poverty of most sub-Saharan countries, which in turn stems, in major part anyway, from the dreadful legal and political infrastructure of most of these nations.
And, by the way, these awful conditions are not the legacy of colonialism, as is often charged.
These countries were better administered when they were colonies, at least those that were French or British colonies; and many other former colonial nations, such as India, Singapore, Malaysia, Tunisia, and Trinidad, are prosperous relative to sub-Saharan countries, while Liberia, a sub-Saharan African nation that has never been a colony, remains profoundly disordered and impoverished.
Because of the inadequate legal and political infrastructure in sub-Saharan countries, giving money to these countries for any purpose is likely to be a poor investment.
This is dramatically shown by the case of South Africa, which has one of the highest rates of HIV-AIDS of any country in the world.
Because of its mineral resources and its substantial white minority, South Africa is by African standards a wealthy country.
Its GDP is almost $200 billion.
Its leaders have been in a shocking state of denial concerning AIDS.
Any money given to South Africa to fight AIDS is likely simply to replace the money that South Africans spend on AIDS.
This of course is a general problem of charity, such as food stamps in the U.S.--if charity, even when earmarked for a specific expenditure, is less than the recipient would spend on the item anyway, his consumption of the item will be unaffected.
So if a person spends $2,000 of his own money every year on food, and then is given $500 worth of food stamps, he will not eat more (unless having a larger total income increases his demand for food), but rather will spend $500 less out of his own pocket.
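The inframarginal-grant arithmetic above can be sketched in a few lines of Python (the function name and figures are illustrative, not from any actual program):

```python
# Hypothetical sketch of an inframarginal earmarked grant: when the
# grant is smaller than what the recipient would spend on the item
# anyway, total consumption of the item is unchanged and the grant
# simply frees up the recipient's own cash for other uses.
def consumption_after_grant(own_spending, grant):
    """Return (total spent on the item, own cash freed for other uses),
    ignoring any income effect on demand."""
    assert grant <= own_spending  # the inframarginal case discussed in the text
    total_on_item = own_spending          # unchanged by the grant
    own_cash_freed = grant                # recipient spends this much less himself
    return total_on_item, own_cash_freed

# The post's numbers: $2,000 of food spending, $500 in food stamps.
print(consumption_after_grant(2000, 500))  # → (2000, 500)
```

The same function applies, with different labels, to earmarked foreign aid received by a government that already funds the earmarked activity.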
The same may be true in the case of foreign assistance for fighting AIDS in Africa.
An interesting contrast to South Africa is presented by Uganda.
Unlike South Africa, Uganda is very poor; its annual GDP per capita is only about $1,500, compared to more than $12,000 for South Africa.
Yet its HIV-AIDS prevalence dropped steeply in the 1990s, from 15 percent to 5 percent.
Although its prevalence has been increasing somewhat since and there is dispute over the accuracy of the government's statistics, it is generally believed that the prevalence of the disease in Uganda has indeed declined substantially--and has done so as a result of an inexpensive (only tens of millions of dollars) government campaign to educate people in the danger of AIDS.
It is the kind of campaign that virtually any country could afford, without need for foreign assistance.
In contrast, the antiretroviral drugs are expensive (even though sold at very low prices for use in poor countries) when the cost of the health-care infrastructure required for their effective administration is taken into account.
Yet the drugs, unlike a vaccine (which has proved thus far impossible to develop, because of the extreme mutability of the virus), do not eliminate the disease; a person on the drugs can still transmit the virus.
The South African and Ugandan cases suggest that political will rather than huge foreign charity holds the key to reducing the prevalence of AIDS in poor countries.
HIV-AIDS is a disease readily preventable by financially inexpensive behavioral changes, such as the use of condoms, once people are alerted to the character and gravity of the disease.
A government that communicates effectively with its people and makes condoms cheaply available to them will go far toward reining in the epidemic.
The United States is frequently criticized for the meagerness, relative to the nation's aggregate wealth, of its contribution to what is called "overseas development assistance," which is to say government financial aid (other than for military purposes) to poor countries.
Although U.S. ODA spending has increased substantially since 9/11, as a percentage of gross national income it is, at .22 percent, at the very bottom of the 22 wealthiest countries in the world.
(Norway is at the top, with .93 percent.) Aggregate private giving by U.S. foundations, businesses, nongovernmental organizations, colleges (for scholarships), and religious organizations almost equals the government's expenditure; yet even ignoring private contributions by other countries, which though lower in percentage terms than American private giving are not negligible, total U.S. public/private ODA as a percentage of gross national income would fall short of that of many of the other wealthy nations.
(For a useful compendium of statistics and commentary, see "Sustainable Development: The US and Foreign Aid Assistance," www.globalissues.org/TradeRelated/Debt/USAid.asp, visited Jan. 20, 2007.)
These figures are meaningless from an ethical standpoint.
To begin with, there is a big difference between the amount given and the amount received, administrative costs to one side.
Most U.S. foreign aid requires the recipient to spend the money for U.S. goods and services, which are often much more expensive than those available elsewhere.
Suppose the U.S. gives a foreign country $1 million for the purchase of goods in the United States that could be purchased elsewhere for $750,000.
Then the net transfer is not $1 million but only $750,000.
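A minimal sketch of that tied-aid discount, using the post's figures (the function name is hypothetical):

```python
# Hypothetical sketch: a grant that must be spent on donor-country goods
# is worth only what those goods would cost on the open market.
def net_transfer(nominal_grant, price_premium):
    """Value actually received when tied purchases cost
    (1 + price_premium) times the world-market price."""
    return nominal_grant / (1 + price_premium)

# $1 million tied grant; U.S. goods priced one-third above the
# $750,000 world-market alternative (1,000,000 / 750,000 - 1 = 1/3).
print(round(net_transfer(1_000_000, 1/3)))  # → 750000
```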
Nor should administrative costs, often inflated, be ignored, or the waste that is endemic in government programs.
The largest recipient of U.S. foreign aid today is Iraq, and it seems that much of that aid has been squandered.
On the other side of the ethical balance, however, the statistics ignore the benefits that the United States confers on foreign countries by virtue of its enormous defense expenditures (including financial assistance to foreign militaries, but that is only a small percentage of the total defense budget).
The United States spends more than 4 percent of its gross domestic product on defense, compared to a world average of 2 percent--and only 1.9 percent for Norway.
We really are the world's policeman, holding a security umbrella over a large number of nations, which would have to spend much more on defense were it not for that umbrella.
Of course we do not do this from the goodness of our heart, but to protect our national security--but then very little that government does is motivated by altruism toward foreigners.
My own, unfashionable view is that charitable giving, both governmental and private, is more likely to increase than to alleviate the poverty, ill health, and other miseries of the recipient populations.
That is a familiar proposition with regard to antipoverty policy on the national rather than international scene.
We generally and I think rightly applaud the substitution of workfare for welfare, because welfare promotes dependency by taxing work heavily. If welfare is cut off when the recipient's income reaches, say, $20,000 a year, then a recipient of welfare payments in cash or kind worth $2,000 who raises his earnings from $20,000 to $23,000 will net only $1,000 ($23,000 - [$20,000 + $2,000] = $1,000); his implicit marginal tax rate is .67--a potent discouragement to working.
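The welfare-cliff arithmetic can be sketched as follows (names and numbers are illustrative of the example in the text):

```python
# Hypothetical sketch of the implicit marginal tax rate created when a
# benefit is cut off entirely once earnings cross a threshold.
def implicit_tax_rate(earnings_before, earnings_after, benefit_lost):
    """Fraction of an earnings increase effectively clawed back
    by the loss of the benefit."""
    raise_in_earnings = earnings_after - earnings_before
    net_gain = raise_in_earnings - benefit_lost
    return (raise_in_earnings - net_gain) / raise_in_earnings

# The post's numbers: earnings rise from $20,000 to $23,000, but
# $2,000 in benefits vanish, so the worker nets only $1,000.
rate = implicit_tax_rate(20_000, 23_000, 2_000)
print(round(rate, 2))  # → 0.67
```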
Something like this occurs, I believe, on the international scale.
Receipt of money enables a government to avoid grappling with the political, social, and economic conditions (cultures, institutions) that are impeding economic development.
It has been argued that countries that have enormous natural resources (mainly oil) relative to population seem not to benefit from that gift, as wealth without effort does not create good attitudes toward work, enterprise, and savings, at the same time that it enables the government to defer consideration of its social and other problems.
Foreign aid has similar effects.
Moreover, the more "generous" the foreign aid, the worse these effects.
When foreign aid becomes a significant part of a nation's income, the result is likely to be inflation, waste, corruption, rent-seeking, and indefinite postponement of needed economic and political reforms.
Insignificant foreign aid does not have these bad effects, but, by the same token, has few good effects.
Of course the donors, both public and private, can and often do attach conditions designed to assure that the money they give is used for constructive purposes.
But, first, they do not know what these countries need (the major theme of William Easterly's 2002 book The Elusive Quest for Growth), and, second, unless foreign assistance is a large fraction of the total income of the recipient country, the effect of the assistance, however many strings are tied to it, will tend to be that of an unrestricted grant.
If a country spends $100 million on health care, and receives foreign assistance for health care of $20 million, it may decide to reallocate $20 million of the health care expenditures that it makes out of its own resources to some other purpose, in which event the restriction on the grant will have no effect.
This is a general problem of charitable giving and public welfare, but it is particularly difficult to solve when the donor is dealing with a foreign country.
This point is pertinent to foreign aid for such projects as eliminating (realistically, greatly reducing) such Third World plagues as HIV/AIDS and malaria.
The former can be effectively combatted with a combination of public health education and free condoms, and the latter with DDT spraying in people's bedrooms.
These are projects within the financial capacity of most Third World countries.
The substitution effect will disappear if the foreign money is given for a purpose on which the recipient nation spends nothing (or less than the grant).
But if the nation does not value the project, often this will be because the project has little value to the nation.
The chalice is poisoned in still another way.
The "generous" gifts from wealthy countries--pluming themselves on their greater (apparent) generosity than the United States--enable those countries to hide, perhaps even from themselves, the extent to which their tariff policies immiserate poor countries.
Most of them are agricultural producers with costs much lower than in wealthy countries, which use tariffs to shield their farmers from Third World competition even though their farmers are much wealthier than those in the Third World, and would be even without tariff protection.
The non-farmer taxpayer in a wealthy country in effect pays his country's farmer twice: in higher food costs or in taxes that finance farm subsidies, and in the taxes that support the government's foreign aid program.
No doubt some foreign aid, including nonmilitary aid, advances the foreign policy objectives of the donor nation (though quite possibly at the expense of the populations of the recipient nations), rather than just lining the pockets of domestic producers and enabling publics to feel better about (or simply ignore) their nations' tariff policies.
The focus of my discussion has been on the question whether the recipient nations benefit at all.
My guess is that they do not.
It is just a guess, but it has support in empirical research.
I mentioned Easterly's book, and there is much more.
And sometimes gross data can be highly suggestive.
Africa has received some $600 billion in foreign aid since 1960, yet most African nations are poorer today than they were then.
I am mindful that recent economic research has tended to find a positive relation between foreign aid on the one hand and economic growth and improved health in recipient nations on the other; for a recent summary, see Steven Radelet, "A Primer on Foreign Aid" (Center for Global Development, Working Paper No. 92, July 2006).
Considering that aggregate overseas development assistance amounts to $92 billion a year (2004), some positive effect can be expected.
Yet I remain skeptical.
The studies necessarily ignore the tradeoff between foreign aid and tariff reductions; if the former reduces pressure for the latter, the net effect of the aid on the recipient nations could be negative.
I agree with Becker's analysis as far as it goes, but I question whether the amount of terrorism is highly sensitive to economic development, to which the "demographic transition"--the well-documented tendency of birth rates (also death rates) to decline sharply when a nation reaches a threshold level of economic development--contributes.
When birth and death rates decline, the average age of the population rises, which is a stabilizing force, the number of young men declines, and the economic opportunities of the young are greater because there are fewer young.
So the number of potential foot soldiers for terrorism is diminished, as it is by anything that raises the opportunity costs of prospective terrorist recruits.
But how important are those opportunity costs to the amount of terrorism?
It is helpful to think of terrorism, like other goods and services, in demand and supply terms.
There is a demand for terrorism, and a supply of terrorism, and the intersection of demand and supply gives the amount of terrorism.
Terrorism is a political phenomenon, and the demand is driven mainly by political grievances, real or imagined.
Often the grievances are related to foreign occupation: the French in Algeria; the British in Palestine; now the Israelis in the West Bank; the United States in Iraq (and earlier in the Philippines)--though in the case of Islamic terrorism, the major factor seems to be the Western "presence" in the Middle East, rather than foreign occupation; even Israel's occupation of the West Bank seems a subsidiary factor.
And the Baader-Meinhof gang in West Germany, the Red Brigades in Italy, and Aum Shinrikyo in Japan are examples of terrorist groups unrelated to foreign occupation.
But it is the existence of grievance that is key, and often--probably typically--the grievance is political rather than economic.
If demand for terrorism is grievance-driven, then one can expect the supply of terrorists to come mainly from the intelligentsia, for the members of the intelligentsia are more likely than ordinary people to be moved by ideas, resentments, and political ambitions rather than by material concerns.
They have the leisure and the education to think big thoughts, like overthrowing a government, which rarely brings material improvements.
Nor is it the case that the intelligentsia supplies merely the leaders, who then send their simple-minded followers to destruction.
The leaders are at risk themselves; more important, the perpetrators of the actual terrorist attacks tend to be middle class (though the second intifada, mentioned by Becker, may be an exception).
From a labor-market standpoint, there are two important tradeoffs in recruiting a supply of terrorists: quality-quantity, and capital-labor, and they are related.
Because terrorists tend to be few in number if only because of the need for concealment, and to be operating in a hostile environment, the recruitment of a large number of poorly trained and motivated cannon fodder is unlikely to be optimal; they are likely to give the game away.
Moreover, the most effective terrorism requires some technical sophistication (such as piloting an airplane), and this is a further reason for terrorist leaders to recruit high-quality personnel.
The relation of economic development in general or the demographic transition in particular to terrorism is likely to be extremely indirect, and is probably small.
If one looks at a list of 195 countries ranked by birth rate, see http://en.wikipedia.org/wiki/List_of_countries_by_birth_rate, one discovers that of the 25 nations with the highest birth rates, all but one (Afghanistan) are in Africa, and Africa has not proved to be a major source of terrorists relative to its vast population.
Pakistan has the world's 57th highest birth rate--27.2 per thousand.
This is high--replacement is 21; the U.S. birth rate, 14; Germany's, 8.2--and Pakistan is often used as an illustration of a nation that has not yet made the demographic transition.
Saudi Arabia, that cradle of Islamic terrorism, has a lower birth rate--24.2--though it is still high.
On the other hand, Saudi Arabia is a relatively wealthy country by international standards; its per capita income is similar to that of Poland and Chile.
Algeria, with a birth rate (20.8) considerably below Saudi Arabia's, has a severe terrorism problem.
Jordan has a substantially higher birth rate than Algeria (in fact it is only slightly lower than Pakistan's), but is not a hotbed of terrorism.
All this said, there is some negative correlation between birth rates and terrorism in Muslim countries, but it is weak, and probably swamped by other factors.
The major factor in Islamic terrorism may have nothing directly to do with economic development or the factors that influence it; it may simply be the influence of extremist Islamic religious beliefs in particular Muslim nations and communities.
According to Census statistics, average black family income was only 51 percent of average white family income in 1947, but rose to 56 percent in 1964, the year that the federal law forbidding employment discrimination (Title VII of the Civil Rights Act of 1964) was enacted.
The gap continued to narrow for some years, as Becker notes; by 1990, average black family income had risen to 63 percent of average white family income.
But it has not risen (it may have fallen slightly) since then.
Notice that the annual rate at which the gap shrank was much faster before the modern civil rights era ushered in by Title VII--5 percentage points in 17 years, versus 7 percentage points in the next 36 years.
It is nowhere written that all ethnic groups shall have the same average income, but the white population is itself an amalgam of ethnic groups; and it is not as if blacks are newcomers to America, who would be expected to lag the average income of the settled population.
Quite apart from the black "stars" whom Becker mentions, there is a large and thriving black middle- and upper-middle class.
The income gap, and the related gaps in longevity, law-abidingness, education, and family stability, are due to the disproportionate incidence of social disorder among blacks, creating a large black "underclass" that drags black average-income statistics down.
There was very little civil rights law before Title VII; nevertheless the black-white income differential narrowed more rapidly in that benighted era than it has since.
It is possible that antidiscrimination laws do not benefit their intended beneficiaries, because they give the beneficiaries a sense of entitlement and victimhood, foster tokenism, increase employers' costs, cast a shadow over the real achievements of outstanding members of the "benefited" group, create an unhealthy preoccupation with racial and ethnic identity, and cause white backlash.
It is also possible that the sexual revolution of the 1960s promoted the break-up of the black family--of the white too, but the whites were in a better position to adapt.
To the extent that the "Great Society" programs of the 1960s and the social disorder of the same period are correlated phenomena, together constituting a lurch to the Left, the net effect on black progress may have been negative.
Probably the focus of reform should not be on the black-white income gap as such but on the social pathologies that are responsible (at least in part) for it.
The best approach might simply be to remove obstacles to labor mobility and to competition more generally; Becker mentions school vouchers and charter schools.
In addition, reducing or eliminating the minimum wage would expand employment opportunities for blacks.
Measures can also be taken to reduce the out-of-wedlock birth rate of blacks; in this regard the Administration's effort to stress abstinence, rather than contraception, as a means of limiting teenage pregnancy is misguided.
But there seems to be little political pressure for such reforms.
The costs of the social disorders that afflict poor blacks are incurred mainly by poor blacks themselves, and poor blacks do not vote very much.
Moreover, blacks support the Democratic Party so overwhelmingly that Democratic politicians have little incentive to expend their necessarily limited political capital on policies that might benefit blacks at the expense of groups that are in play between the two parties, such as public school teachers.
A step in the right direction might be to allow (as many states already do) felons who have completed their sentence to vote.
Virtually all the presidential candidates have proposed plans for reforming health care in the United States.
All the plans would require federal legislation, although many include measures that the executive branch of the federal government could implement without new legislation.
To evaluate proposed solutions, one must know what the problem is.
Different candidates perceive the problem differently, but there is general agreement that health care in the United States costs too much--it accounts for more than 16 percent of GNP, compared to less than 11 percent in France, which the World Health Organization ranks first in the world for the quality of its health system; the WHO ranks the United States 37th.
Now that is one of those multi-factor rankings that can be criticized for arbitrariness.
However, if one confines one's attention to just one of the criteria, "disability-adjusted life expectancy," the United States still does not do very well.
It ranks 24th (France is 3rd; Japan is 1st).
There also is general agreement that too many people in the United States lack health insurance, whether public or private, and that this is either an economic problem or an ethical problem, or both.
More than 45 million persons under the age of 65 lack insurance (few older persons do, because of Medicare, though Medicare coverage is incomplete and elderly people who can afford to buy medi-gap insurance usually do so), about 90 percent of whom are citizens or lawful residents.
The uninsured are disproportionately poor and lower-middle-class (and therefore disproportionately black and Hispanic), though many poor children are covered by Medicaid or by SCHIP (State Children's Health Insurance Program).
Contrary to popular impression, Medicaid is intended primarily for poor families with children; it does not cover the poor as such.
Also, Medicaid reimbursement to health-care providers is chintzy, unlike Medicare reimbursement, and the quality of service is as a result poor.
Most (70 percent) of the uninsured are in families with at least one full-time worker.
Most are young: the age breakdown is children, 20 percent; ages 19-44, 56 percent; ages 45-64, 23 percent.
The health of the uninsured is on average significantly worse than that of insured persons of the same age.
As one would expect, the uninsured consume less health care than the insured--only about $1,000, on average, a year, though this is partly because elderly persons, who consume the most health care on average, are covered by Medicare, and more broadly because of the relative youth of the uninsured.
The care they do not pay for--the uncompensated care--is provided to them as charity, for example by hospital emergency rooms, which swallow much of the cost, though some is reimbursed by various government programs.
In part because they consume less health care, in particular less emergency health care, the uninsured have, as I have mentioned, poorer health and greater mortality than the insured, though I do not know how large a part the lesser consumption plays; low income, and the style of living that goes with low income, may explain more of the difference in health and longevity between the insured and the uninsured than the uninsured's lesser demand for health care does.
A further complication is that since premiums for employees' health insurance plans are deductible from corporate income tax and heavy medical expenses are deductible from individual income tax, the health care of group-insured persons (and most health insurance is employee group health insurance), and of persons with high incomes (and therefore high deductibles from income tax), is subsidized.
The goals of reducing the costs of health care (at least without reducing quality or producing political outrage) and increasing health-insurance coverage are in conflict, but the candidates' plans strive somehow to achieve both goals.
Some of the proposals for reducing aggregate costs are either fluff, like reining in jury awards in medical malpractice cases (those awards are a tiny fraction of total health costs, and already are being reined in by judges and by tort-reform measures adopted by state legislatures), or measures that the market is in process of implementing, such as the digitization of medical records.
Other economizing proposals have hidden negative implications for quality--such as placing price controls on prescription drugs, reducing the protection that the patent laws provide against competition by generic (nonpatented) substitutes, and permitting the reimportation of drugs from countries that have price controls on drugs.
Reducing property rights in medical innovations is likely to reduce the rate of those innovations and hence, in the long run, health and longevity, and those costs have to be traded off against benefits in lower prices for existing drugs.
Some measures defended as economizing because they would simplify the administration of health insurance would generate offsetting costs, such as forbidding "discrimination" against persons with preexisting health conditions.
Which brings me to the essential point in evaluating the candidates' health care reform proposals: significantly expanding health insurance coverage is bound to be very costly, whether the role of government in bringing about the expansion of coverage is large, as in the case of the Democratic candidates' proposals, or small, as in the case of the Republicans' proposals, which generally are limited to increasing the tax subsidies for the purchase of private health insurance.
Although some of the uninsured are healthy risk takers, most would have difficulty affording health insurance, and, as a practical matter, would require a subsidy of some sort.
The subsidy itself would just be a transfer, financed presumably by a tax increase; the social cost (that is, the consumption of scarce resources by the program) would be the cost of administering the subsidy program and the misallocative effects that a tax increase would create.
The larger social cost would be the additional health care resulting from the expansion of coverage.
Insured people use more medical care because the possession of insurance lowers the marginal cost of that care to them.
And because the uninsured are on average less rather than more healthy than the insured, forcing them to buy insurance would not lower insurance rates to others.
The average annual cost of employee group health insurance for a family of four is $12,000.
Supposing there are 10 million families without health insurance, and that two-thirds could not afford such insurance, it might well cost more than $80 billion a year to buy it for them.
This would be more than 3 percent of the federal budget.
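The back-of-the-envelope arithmetic behind that figure, using only the numbers given above ($12,000 average family premium, 10 million uninsured families, two-thirds unable to afford coverage):

```python
# Rough cost of subsidizing health insurance for the uninsured,
# using the figures in the text: $12,000 average annual premium
# for a family of four, 10 million uninsured families, two-thirds
# of whom could not afford the premium on their own.
premium = 12_000
families = 10_000_000
share_needing_subsidy = 2 / 3

cost = premium * families * share_needing_subsidy
print(f"${cost / 1e9:.0f} billion per year")  # $80 billion per year
```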
That is not an unthinkable amount, but the political opposition would be great, because the majority of the population--the people who have public or private health insurance already--would not benefit from it.
Might there be a compensating offset, because with greater medical care the people who are now uninsured would be healthier and live longer, and thus cost less in subsidized medical care in the long run? Not necessarily: the longer a person lives, the greater his lifetime medical expenses, because average annual medical expenses grow with age.
Living a healthier and longer life is of course a benefit to a person; my point is only that it need not reduce his average annual health costs.
The way to economize on expenditures on health care, though it is utterly infeasible politically, would be to eliminate the tax subsidies for health insurance and health care and institute a means test for Medicare, and at the same time to limit medical services.
Then both the demand for and the supply of those services would be reduced, and the percentage of GNP that goes for health care would drop.
But the principal result might be to reallocate consumption spending to goods and services that most people value less at the margin than they do health care.
Moreover, there is an economic argument for some level of tax subsidies for health insurance premiums or health care.
Medical care increases human capital, and is thus an investment, and investment expenditures need not be (probably should not be) taxed as long as the revenues generated by them are.
Medical treatment that extends life or enables a person to work increases the person's income, which is taxable.
Maybe a little patchwork here and there is the most that is both economically desirable and politically feasible by way of reform of American health care.
There are striking differences in tax burdens across nations, as explained in a recent report by the Organisation for Economic Co-Operation and Development.
Measuring the tax burden in 2006 as the percentage of gross domestic product that is collected in taxes, the report arrays 20 countries from top to bottom.
At the top is Sweden, with a tax burden of 50.1 percent; at the bottom is South Korea, with a tax burden of 26.8 percent.
The United States is near the bottom, with 28.2 percent, and between it and South Korea are Greece and Japan, each with 27.4 percent.
Next below Sweden are Denmark, with 49 percent; France, with 44.5 percent; and Norway, with 43.6 percent.
The middle range is illustrated by Britain with 37.4 percent, Spain with 36.7 percent, and Germany with 35.7 percent.
In all 20 countries except the Netherlands, the tax burden has increased since 1975, though in some countries, such as the United States, the increase has been slight--only 2.6 percentage points.
In others, however--Denmark, Greece, Italy, Portugal, South Korea, Spain, and Turkey--it has exceeded 10 percentage points.
Spain's increase has been the greatest, at 18.3 points, followed by Italy's at 17.3 and Turkey's at 16.5.
The OECD report explains that the increase in tax burden is due to increased revenues from "direct" taxes--income (including payroll) and corporate taxes--rather than from "indirect" taxes such as VAT, sales taxes, and other excise taxes.
Even though most countries, including the United States, have cut income and corporate tax rates, the cuts have been more than offset by increases in income and corporate profits; of course the cuts may have helped generate those increases.
The OECD favors indirect taxes because they tax only consumption, whereas direct taxes tax income that is saved, and thus discourage investment.
The increase in the tax base for direct taxes explains the mechanism by which the tax burden has grown but not why it has grown--why in other words the demand for government spending has grown.
The OECD speculates that the cause is increased demand for social services such as pensions and health care.
The curious thing about the OECD data is that prosperity, economic growth, and other measures of economic well-being do not seem closely correlated with the tax burden.
The variance across countries in tax burden is very great, yet one finds troubled economies, such as those of Japan and Greece, near the bottom of the tax-burden distribution--of course Japan is a very wealthy country, as Greece is not, but Japan's economic performance has been disappointing in recent decades.
And one finds some high-performing economies, such as those of Sweden, Norway, and Finland, at the top of the distribution, or (as in the case of the Netherlands, Spain, and the United Kingdom) in the middle.
However, there is some negative correlation between economic performance and the tax burden; for Ireland, Switzerland, and the United States are low on the distribution, while typically low-performing Western European countries cluster in the upper half.
One would think that the tax burden, especially but not only when it is created mainly by direct taxes, would have a strong negative effect on economic well-being.
(Perhaps it does, when other factors affecting economic well-being are adjusted for, which I have not attempted to do.) If government is less efficient than private enterprise, the more economic activity that is performed by government rather than by the private sector the less productive the economy as a whole should be; and the higher the tax burden, the greater the amount of economic activity performed by government.
To the extent, moreover, that variance in tax burden across countries reflects variance in marginal rates of taxing income and corporate profits, we would expect the high tax-burden countries to be less productive, because the higher the tax on income, the greater the incentive to substitute leisure (which is untaxed) for work and to expend resources (and create economic distortions) in an effort to reduce the tax bite.
But there is an important difference between the actual production of economic goods and services by government, on the one hand, and transfer payments on the other.
The effect of taxes on the behavior of the taxed entity is the same, but the effect on the efficiency of production is different.
In the United Kingdom (which nevertheless has a high-performing economy), the government produces medical services; the National Health Service is the employer of the vast majority of doctors and other health professionals and owns most of the hospitals and other health care facilities in the U.K.; only about 8 percent of the U.K.'s population is served by private health providers.
In contrast, the U.S. Medicare and Medicaid programs transfer vast amounts of public money to health care providers, but the providers are mostly private.
The transfers come with strings attached, of course, and some of those strings induce inefficient behavior by the recipients.
Nevertheless, a U.S. National Health Service on the English model would undoubtedly be highly inefficient compared to our admittedly highly imperfect private provision of health care.
Transfer socialism is not as inefficient as means-of-production socialism.
To the extent that the growth in government spending is a growth in transfers rather than in government ownership of producers, the impact on economic growth and prosperity may be small, especially since the growth in transfers has coincided with the deregulation movement, which has resulted in privatization of significant areas of traditional public ownership, less regulation of the economy, and, as I mentioned, lower direct-tax rates.
There thus appears to be a kind of balance, in which the efficiency-reducing effects of greater government spending are contained by reductions in direct-tax rates, by increased privatization and deregulation, and by channeling increased tax revenues mainly into transfer programs rather than into government production of goods and services.
Last week we blogged on whether a deficit-spending program--a stimulus package currently estimated to cost $825 billion--is appropriate to deal with the current economic crisis.
I will not repeat the points I made.
If there is a risk of deflation, and increasing the supply of money is not considered an adequate response, then there is an argument for the stimulus.
It is essentially Keynes's argument: if private demand falls short of the supply that the economy is capable of producing, public demand--public expenditures on projects that will put people to work--can fill the gap.
Our focus this week narrows to the infrastructure portion of the program, which is the most conventionally Keynesian.
The stimulus package proposed by Democratic Congressmen--the "American Recovery and Reinvestment Act of 2009"--allots $90 billion to infrastructure: $30 billion for highway construction, $31 billion for energy savings in federal and other public facilities, $19 billion for flood control and environmental improvements, and $10 billion for transit and rail.
Other parts of the Act, however, allocate additional funds to other construction projects, and from an anti-depression standpoint it is really construction that is the relevant category.
There is a lot of unemployment in construction--110,000 construction workers were laid off in December--as part of the continuing fallout from the housing bubble, which increased the demand for new houses; the demand has now collapsed.
One objection to public-works spending as an anti-depression measure is that by the time work on the public projects actually begins, the depression will be over and all that remains will be the bill for the projects, in the form of an increased national debt, since public-works spending that is financed by taxes rather than borrowing has no effect on increasing demand for goods and services.
What is given with one hand is taken away with the other.
But construction projects, especially those interrupted or postponed because of the economic collapse, can be started up (or resumed) pretty quickly.
Moreover, this depression (as I think it is, and not merely a recession) is likely to last at least two more years, and that should be time enough for much of the $90 billion (plus additional money allocated to construction) to be spent.
Another advantage to infrastructure, and construction generally, as an emergency measure is that it may not add significantly to the deficit in the long run.
The reason is that the costs can be recouped out of user fees, such as tolls for highways and taxes on airline and railroad tickets.
This presupposes that the projects create some real value, unlike the "bridge to nowhere" that was proposed for Alaska, or else the user fees will really just be taxes.
To the extent that the money allocated to infrastructure in the American Recovery and Reinvestment Act (which doubtless however will be changed before it is actually enacted and signed by the President) is for interrupted or deferred projects, or merely accelerates projects planned for a later date (accelerated to increase demand now), there will not be incremental waste--that is, waste beyond what is already built into approved projects.
With new projects, the risk that costs will exceed benefits is greater, but we must be careful not to view costs and benefits in too narrow terms.
Even a worthless project, if it puts people to work, can reduce the economic impact of a depression, as Keynes argued long ago.
Another advantage of construction-oriented public-works spending is that the risk that it will crowd out private investment is small.
If labor and other resources used in construction were being fully employed, then the only effect of the government's launching construction projects would be to increase construction prices, because it would be bidding against private employers for a limited stock of labor.
But given the high unemployment rate of construction workers, there are plenty of them to hire without an employer's having to lure them away from other employers by the promise of higher pay.
One thing that must give one pause, however, is the question of substitutability across construction projects.
Building a house and building a highway are not interchangeable construction activities.
Unemployment in the construction industry may be concentrated in residential construction, and residential construction workers may not have the right skills for highway or bridge construction, let alone for flood control.
(What use is a plumber or a carpenter in building a highway?) The more specialized the American workforce has become, the less effect Keynesian public-works projects are likely to have on employment.
And if they do not reduce unemployment but instead compete with private employers for workers, the only effect may be to increase the national debt and engender inflation (though inflation is one way of combating deflation).
This is a general concern rather than anything peculiar to construction.
In fact it is a greater concern with some of the other projects proposed in the American Recovery and Reinvestment Act.
Projects designed to promote efficient use of energy (in order to limit global warming and dependence on foreign oil--worthy objectives, to be sure) will create inflationary pressure by bidding for scarce resources (such as scientific and engineering skills and complex, novel technologies) against the private sector.
That is a compelling reason for concentrating the stimulus in the industries in which unemployment is greatest.
I agree with Becker that it would be a mistake to raise gasoline taxes.
We're in the midst of a depression and threatened with deflation, which would be an especially ominous development.
Deflation occurs when the price level falls, as can happen--as may be happening now--when demand falls so far that sellers, to avoid complete ruination, slash the prices of their goods by extreme percentages, such as 50 or 75 percent.
With prices depressed, a given amount of dollars buys more goods--money thus is more valuable.
Credit tends to dry up, since even if the nominal interest rate is zero, the real interest rate may be very high.
Imagine, to take an extreme case, that a dollar will buy you a loaf of bread today but two loaves of bread in a year.
Then to borrow a dollar today for repayment in a year at a nominal interest rate of zero amounts to borrowing at a real interest rate of 100 percent, because to have the loaf of bread today you will have to give up two loaves in a year.
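The bread example is an instance of the standard relation between nominal and real interest rates (the Fisher relation); a minimal sketch, plugging in the numbers from the example:

```python
# Real interest rate implied by the bread example: a nominal rate
# of zero while prices fall by half over the year (inflation = -50%).
# Fisher relation: 1 + real = (1 + nominal) / (1 + inflation).
nominal = 0.0
inflation = -0.5  # a dollar buys twice as much bread in a year

real = (1 + nominal) / (1 + inflation) - 1
print(f"real rate = {real:.0%}")  # real rate = 100%
```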
When there is a danger of deflation, raising taxes increases the danger by reducing the demand for goods and services, in the present instance for gasoline and therefore also for cars, in particular cars made by the Detroit automakers because the gas mileage of their cars is inferior to that of the foreign cars.
As demand falls, discounts will increase, so prices will continue to fall.
Output will be falling too, but prices can fall faster than output, especially if sellers have swollen inventories because they did not anticipate a depression.
It is true that many "foreign" cars are actually manufactured in the United States.
A mere substitution of those cars for Detroit-made cars would not reduce demand.
Nor for that matter would a substitution of cars manufactured abroad, though by reducing employment in the United States such a substitution would deepen our depression.
But the foreign cars (wherever actually made) would be sold at a discount too, in order to compete with the Detroit-made cars.
If gasoline taxes were raised to a very high level, there might actually be an increase in overall demand for cars if there are new cars that are enormously more fuel-efficient than existing ones.
But the effect on the economy would still be negative, because people would have much less money to spend on other products.
Becker points to the possibility of a double whammy: raising gasoline taxes would not only reduce the demand for cars but by doing so it would increase the cost of the auto bailout.
I am inclined to disagree, if the bailout is understood, as I hope it will be, as intended simply to postpone the bankruptcy of the three Detroit automakers until the overall economic picture clarifies, rather than to reform and revitalize them.
I don't think there would be any social benefit from saving the companies once the economy can absorb their disappearance or radical shrinkage without serious macroeconomic consequences.
At that point, it should be sink or swim for them.
To preserve them beyond that point by means of continuing federal grants would be merely to subsidize the United Auto Workers and the blue-collar workers whom the union represents, plus automobile dealers, the companies' managerial and white-collar employees, and the companies' stockholders and bondholders.
I hope that after the depression ends, however, serious consideration will be given to four types of tax (broadly defined), none a gasoline tax as such, that would reduce the demand for motor vehicles.
One would be a tax on carbon emissions.
The second a tax on traffic congestion.
The third a tax in the form of highway tolls, to pay for the infrastructure projects that are part of the Obama Administration's "stimulus" (i.e., Keynesian--deficit spending) program.
The fourth would be a tax on petroleum, designed to reduce our dependence on foreign oil and (relatedly) the income of the oil-exporting nations.
In 2007, the House of Representatives passed the Employee Free Choice Act, a law to promote unionism.
The bill failed in the Senate because of Republican opposition.
Obama in his presidential campaign urged passage of the bill, and with greater Democratic control of the Senate as a result of the recent election there is a good chance that it will be passed, though not a certainty, in view of the fierce opposition of the business community and of Republican senators, who could filibuster the bill; the Democrats, however, might persuade enough Republican senators to defect to muster the votes needed to end a filibuster.
The Act would do three things.
The first is to strengthen the very weak machinery for enforcing the prohibition in the National Labor Relations Act (the Wagner Act) of unfair labor practices, such as employers' discriminating against employees who support unionization.
This part of the Act is uncontroversial.
The second thing the Act would do is dispense with the requirement of a secret election to determine whether the employer must recognize a union as the representative of his workers (more precisely, of a "bargaining unit" consisting of workers having similar jobs; a large employer might have a number of such units).
Recognition means that the employer must try in good faith to negotiate a binding collective bargaining agreement with the union that will specify terms and conditions of employment.
The Act would require the employer to recognize the union if the union obtained signed union-authorization cards from a majority of the workers in the bargaining unit.
This is the most controversial provision of the bill.
The remaining provision, which is also controversial, would require that if within a specified period (including a period for mediation) union and employer could not agree on the terms of a collective bargaining agreement, their dispute would be submitted to binding arbitration.
The arbitrators would thus determine those terms.
The card-signing provision would undoubtedly make it easier for unions to organize companies in which there was considerable union support but not quite enough to assure victory in a secret-ballot election.
Supporters of unionization are likely to feel more strongly than opponents, and so will be more likely to exert pressure on waverers to sign cards than opponents will be to exert pressure on them not to sign.
Compulsory arbitration would also promote unionization, and perhaps more so than the card-signing provision of the Act.
It would eliminate the costs of striking against a stubborn employer, and would appeal to workers because an arbitrator could be expected to be more generous in setting terms and conditions of employment than the employer, though there will be cases in which a union could extract more from an employer by striking or threatening to strike than an arbitrator would be likely to give it.
I doubt that the Act would have a great effect on unionization.
Unions have been in steady decline in the private sector for decades and now account for only about 7 percent of nonfarm workers in that sector (farm workers are not covered by the National Labor Relations Act).
Elaborate government regulation of workplace safety and health has reduced the value of unions to workers, as has greater job mobility and the increasingly technical and individualized character of many jobs, which makes it difficult for workers to agree on the terms and conditions of employment that they should be seeking.
International competition has reduced the power of unions to extract supracompetitive wages, benefits, or work rules, as has the deregulation movement, which has made the formerly regulated industries, such as transportation, more competitive.
Unions have little power in a competitive industry, because a supracompetitive wage, by increasing the employer's cost, will shift his output to competitors.
We are seeing this happen in the automobile industry, where union intransigence has been a factor in the decline of the Detroit automakers, now on federal life support.
These economic forces will not be changed by passage of the Employee Free Choice Act, and so the Act's effect on unionization and therefore on the economy will probably be marginal.
But whatever the magnitude of the economic effect, that effect will be negative.
This is not because all unionization is bad.
One should distinguish between nonadversarial unionism and adversarial unionism.
In nonadversarial unionism the union recognizes that it is in partnership with the employer and focuses on activities that are supportive of rather than antagonistic to the efficient operation of the company.
These activities include protecting workers from abusive supervisors and coworkers, forwarding the concerns of workers to management, assisting workers to obtain skills necessary for their advancement, providing social amenities, interpreting management to the workers, and, in short, mediating between the workforce and the management.
One might think that these are functions that the employer itself could perform, and often this is true.
But an independent union (company unions are forbidden) may have a degree of credibility with the workers that the employer lacks and may reduce agency costs by monitoring the behavior of supervisors over whom the employer has limited control.
The Act will not promote nonadversarial unionism, because an employer will not resist being unionized by a union that will make his company operate more efficiently.
It will promote, though one hopes to only a limited degree, adversarial unionism, illustrated by the relation between the United Auto Workers and the Detroit automakers.
The union is determined to squeeze the companies for all it can get for the shrinking number of workers employed by the companies--the union being responsible in significant part for the shrinkage.
Adversarial unionism is also conspicuous in education.
More of that we do not need.
We especially do not need an uptick in adversarial unionism during what increasingly appears to be a depression.
The fact that Democrats in Congress should be pressing for a revival of the union movement at this time indicates a lack of understanding of the economics of depressions.
A depression involves a severe reduction in output, resulting in a reduction in inputs, including labor inputs: hence increased unemployment.
Adversarial unions increase unemployment, by obtaining wage increases that reduce employers' output by increasing labor costs.
The New Deal's similarly incoherent program for fighting depression combined sensible measures, like going off the gold standard, expanding the money supply, and increasing employment by public-works programs, with output-restricting programs like the National Industrial Recovery Act, which encouraged the formation of producer cartels; the Agricultural Adjustment Act, which curtailed agricultural output in order to raise farmers' incomes; and the National Labor Relations Act (the Wagner Act), which encouraged the formation of workers' cartels: adversarial unions such as the United Auto Workers.
Some economists believe that such measures prolonged the depression.
They certainly did not shorten it.
I suspect that we have entered a depression.
There is no widely agreed definition of the word, but I would define it as a steep reduction in output that causes or threatens to cause deflation and creates widespread public anxiety and a sense of crisis.
Suppose some shock to the economy, such as the bursting of the housing and credit bubbles, causes people to reduce their demand for goods and services.
Before the shock, demand and supply were both X; now demand is X - Y.
How do producers respond? If all prices, including the price of labor (i.e., wages), are completely flexible, producers (and suppliers of inputs to them, including suppliers of labor--i.e., workers) will reduce their prices, and this will induce consumers to increase their buying.
Consumers will have less income because those who are employed will have lower wages, but since prices are lower they will buy enough to prevent a substantial reduction in output.
Unfortunately, not all prices are flexible; wages especially are not.
This is not primarily because of union or other employment contracts.
Few private-sector employers in the United States are unionized and as a result few workers (other than federal judges!) have a guaranteed wage.
The reasons that employers generally prefer laying off workers to reducing wages when demand drops are, first, that by picking the least productive workers to lay off an employer can increase the productivity of its work force; second, that workers may respond to a reduction in their wages by working less hard, and, conversely, may work harder if they think that by doing so they are reducing the likelihood of their being laid off; and third, that when all workers in a plant or office have their wages cut, all are unhappy, whereas with layoffs the unhappy workers are off the premises and so do not incite unhappiness among the ones who remain.
When, to bring output down from X to X - Y in my example, producers and other sellers begin laying off workers, demand is likely to sink even further because the workers who have been laid off suffer a loss of income and the ones who are not laid off fear that they may be next and so try to save more of their income rather than spending it.
As demand falls, sellers will lay off more workers, putting still more pressure on demand, but in addition they will reduce prices still further in an effort to avoid losing all their customers.
As prices spiral downward, consumers may start hoarding their money in the expectation that prices will keep falling.
In addition, they will be reluctant to borrow (and borrowing increases economic activity by giving people more money to spend) because with prices falling they will be paying back their loans in more expensive dollars, that is, dollars that have greater purchasing power.
When the same number of dollars buys more goods, we have deflation--money is worth more--as distinct from inflation, where money is worth less because more money is chasing the same number of goods and services.
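The reluctance to borrow under deflation can be made concrete with a small calculation. A minimal sketch (the 2 percent nominal rate and 3 percent deflation rate are assumptions, purely for illustration):

```python
# Real burden of repaying a loan under deflation (illustrative numbers).
# With prices falling, the dollars used to repay the loan buy more than
# the dollars originally borrowed, so the real interest rate exceeds
# the nominal rate.

def real_rate(nominal_rate: float, inflation_rate: float) -> float:
    """Exact Fisher relation: (1 + real) = (1 + nominal) / (1 + inflation)."""
    return (1 + nominal_rate) / (1 + inflation_rate) - 1

nominal = 0.02      # 2% nominal interest on a one-year loan (assumed)
deflation = -0.03   # prices fall 3%, i.e. inflation of -3% (assumed)

r = real_rate(nominal, deflation)
print(f"Real interest rate: {r:.2%}")
```

At a 2 percent nominal rate and 3 percent deflation, the real rate is roughly 5.2 percent, which is why falling prices discourage borrowing even when nominal rates are low.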
One way to try to prevent a deflationary spiral is for the Federal Reserve Board to increase the supply of money, so that dollars don't buy more goods than they used to.
The Fed does this by buying federal securities from banks; the cash the banks receive from the sale is available to them to lend, and what they lend ends up in people's bank accounts and so increases the number of dollars available to be spent.
Fearing deflation, the Fed has done this--without success.
The banks, because they are close to being insolvent, are fearful of making risky loans, and loans in a recession or depression are risky.
So they have put more and more of their money into federal securities, thus bidding down the interest rate virtually to zero.
Zero-interest short-term federal securities are the equivalent of cash.
If banks want to hold cash or its equivalent rather than lend it, the Fed's buying cash-equivalent securities for cash does nothing to increase the money supply.
So the Board is now buying other debt, and from other financial firms as well as banks--debt that has a positive interest rate, so that if the Board buys the debt for cash, the seller is likely to lend out the cash so that it does not lose the interest income that it was receiving on the debt it sold to the Fed.
But as yet this program has not had much success either.
This is the background to the stimulus program proposed by soon-to-be President Obama.
To return to my example, if monetary policy is not going to equate demand to supply--is not going to close the gap between a demand of X - Y and a supply of X--then maybe government spending can do the trick.
The government can buy Y worth of goods and services, thus replacing private with public demand, or it can reduce taxes by Y, so that people have more money to spend, or it can do some of both, as in fact Obama proposes to do.
At this writing, roughly 40 percent of his proposed multi-hundred-billion deficit-spending package (that is, spending financed by borrowing rather than by taxing) is earmarked for tax reductions.
The rest is split between public-works programs, such as road construction, and transfer payments in such forms as additional unemployment benefits, mortgage relief, and health insurance for people who don't have any.
There are three basic questions to ask about the program.
The first is whether it is necessary, the second whether it has the right structure, and the third whether it is the right size.
I will discuss just the first two questions.
Ben Bernanke, the chairman of the Federal Reserve Board and the leading economic student of the Great Depression of the 1930s, is a conservative economist.
Conservatives don't like huge deficit-spending programs, or at least the public-works and transfer-payment components of them, which increase government involvement in and control over the economy.
Bernanke supports the program, after having failed to avert a depression by means of monetary policy alone.
Almost the entire economics profession converted--virtually overnight--from being Milton Friedman monetarists (Friedman believed that only bad monetary policy could turn a recession into a depression) to being John Maynard Keynes deficit spenders.
I'll assume they're right, and move on to the question of structure.
I do not think the tax cuts are a good idea.
Most of the increase in after-tax income is likely to be saved, rather than spent on buying goods and services.
One of the reasons why the recession has turned into a depression is that Americans have meager savings, most of them in overpriced houses and overpriced stocks, and so they are sensibly reallocating income from consumption to saving.
And there is much evidence that even in normal times, people spend less out of temporary income spurts than they do when they receive what they think will be a permanent increase in income.
There is no such thing as a permanent tax cut, because the Congress that enacts a tax cut cannot bind subsequent Congresses (there is a new one every two years) not to rescind it.
I also think the transfer payments are a bad idea.
The goal of a Keynesian deficit-spending program is to restore demand to X, not to increase it.
If instead of demand rising as a consequence of the program from X - Y to X, it rises from X - Y to X + Z, there will be inflation because demand will exceed supply.
Programs to transfer wealth are very difficult to abolish, because interest groups form about them.
The problem is somewhat less serious with public-works programs, especially road-building and other infrastructure projects, and especially those infrastructure projects that were planned or begun by states or municipalities and interrupted or deferred because of the fall in tax revenues resulting from the depression.
The federal government can finance these projects until the depression is over, and then the states can continue them with their own tax money.
There is a legitimate concern that many of the projects undertaken by the federal government will yield costs in excess of benefits.
But the concern is exaggerated, because it ignores the benefits that such projects confer on fighting the depression as distinct from simply improving the nation's transportation system or reducing carbon emissions or buying military equipment to replace what has been lost in the Iraqi and Afghan wars.
To the extent that the projects by increasing demand reduce unemployment, and reduce fear of unemployment by those who are not laid off (yet), they not only increase people's spendable income (unemployment benefits are lower than the wages they replace) but by reducing job insecurity reduce the fraction of wages that people save rather than spend.
The saving rate has soared in recent months and is one of the major factors in reducing consumption and pushing us to the edge of a deflation.
In addition, public-works spending has a multiplier effect.
The government's expenditure on buying goods and services (a road, a bridge, or whatever) increases output directly, but it also does so indirectly because the company that builds the project with government funds pays its employees and suppliers, and they in turn spend part of the money they receive, further stimulating output.
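The multiplier logic can be sketched numerically. Assuming, purely for illustration, that each recipient spends a fixed fraction of every dollar received (a marginal propensity to consume of 0.6), the successive rounds of spending form a geometric series:

```python
# Keynesian spending multiplier as a geometric series (illustrative).
# Each dollar of government spending becomes income for contractors and
# suppliers, who spend a fraction of it (the marginal propensity to
# consume, MPC), generating further rounds of spending.

MPC = 0.6            # assumed marginal propensity to consume
G = 1_000_000.0      # initial government expenditure, $1 million (assumed)

total = 0.0
round_spending = G
for _ in range(100):          # rounds shrink geometrically; 100 is plenty
    total += round_spending
    round_spending *= MPC

multiplier = total / G
print(f"Total spending generated: ${total:,.0f}")
print(f"Multiplier: {multiplier:.2f}")   # converges to 1/(1 - MPC) = 2.5
```

With these assumed numbers, $1 million of public-works spending generates about $2.5 million of total spending; the actual multiplier in a depression is of course an empirical question.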
Properly structured, a Keynesian program can help to check a downward economic spiral.
With monetary policy apparently inadequate to avert a downward spiral big enough to trigger deflation, there may be no good alternative to such a program.
In the wake of the attempted Christmas bombing of an American airliner en route to Detroit, there has been a flurry of new security measures.
These measures are costly, primarily in delaying the passage of passengers through airport security, but there are also the expenses of additional screening equipment, such as body scanners, and of additional security personnel, such as armed guards on flights and additional screeners in the intelligence agencies.
The economic question is the optimal expenditure on preventing terrorist attacks on airlines.
The question is bafflingly difficult because of the uncertainty associated with such attacks.
Cost-benefit analysis of precautions is (in general—I will suggest an exception below) a reliable tool of economic decision making only when one can calculate not only the cost of the precautions and the loss (cost) that will occur if the event sought to be prevented is allowed to happen, but also the probability of the event if the precautions are not taken.
For that second calculation is also necessary in order to estimate the expected loss if the precautions are not taken.
If the loss from the event, should it occur, is $x, and the probability that the event will occur unless precautions are taken to prevent it is .01, then, as a first approximation, one should spend up to $.01x to prevent the event from occurring, but not more.
But what is the incremental probability of a successful terrorist attack on an airline if the precautions instituted after the Christmas bombing attempt are withdrawn? Moreover, although that is the most difficult question, there is also uncertainty about the loss should such an attempt succeed.
There are pretty good value of life estimates, which would suggest for example that an airline bombing that killed 200 people would inflict a loss of $1.4 billion (200 x $7 million).
But that leaves out the terrible fear that these people would experience unless the bombing caused instant death (which it would not), plus the fear of other airline passengers and crew after the bombing, plus added time and safety costs of passengers diverted by fear to other means of transportation—other means that may be more dangerous, depending on how common successful terrorist attacks on aircraft become; for the death rate per mile is, at present anyway, markedly higher for automobile transportation than for air transportation.
There is an instinctual fear of flying (easy to explain in terms of the ancestral environment in which the human brain developed, for in that environment heights were exceptionally dangerous), and as a result the prospect of being killed in an airline crash fills many people with particular dread; that prospect is a cost, like any other.
For many people, it exceeds the expected accident cost of driving relative to flying.
But I want to focus on the uncertainty of the occurrence of a terrorist attack.
Some statisticians, and some famous economists of yore such as Frank Knight and John Maynard Keynes, distinguish between calculable risk, as in my .01 example, and uncertainty, in the sense of risk that cannot be calculated with any confidence.
No one can say with any real confidence what the probability of a successful attack in the next year on a U.S. airline (specifically on a flight originating outside the United States, which seems the likeliest type of terrorist attack) is, except that it is between 0 and 1, which is unhelpful.
(It is naïve to base an estimate of probability on frequency, that is, on past occurrences, if there is no reason to believe that the future will be like the past.) In fact not only most travelers but also the airlines and the government think that the probability of a successful airline attack is much closer to 0 than to 1; if they thought it was close to 1, there would be far more radical precautionary measures taken and a sharp decline in demand for air travel.
But it makes a big difference to the optimal investment in precaution whether the probability of such an attack is .0001 or .1 (as it could easily be if the attack was the first in a planned series), and whether the cost of the attack if it is successful would be $1.4 billion or $5 billion--or much more, as it could be, if one thinks that the Iraq and Afghanistan wars, which together have already cost at least $1 trillion, were a consequence of the 9/11 terrorist attacks.
There is thus uncertainty in size of loss as well as in probability of loss.
As a result of these dual uncertainties, the realistic expected-loss range could, on my assumptions, easily extend from $140,000 to $500 million.
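The width of that range follows directly from multiplying the endpoints of the two uncertainties. A minimal sketch using the figures just given (.0001 versus .1 probability, $1.4 billion versus $5 billion loss):

```python
# Expected loss = probability of attack x loss if the attack succeeds.
# With both factors uncertain, the plausible range of expected loss
# spans more than three orders of magnitude.

prob_low, prob_high = 0.0001, 0.1      # uncertain attack probability
loss_low, loss_high = 1.4e9, 5.0e9     # uncertain loss in dollars

expected_low = prob_low * loss_low     # low probability, low loss
expected_high = prob_high * loss_high  # high probability, high loss

print(f"Expected-loss range: ${expected_low:,.0f} to ${expected_high:,.0f}")
```

The cost-justified precaution budget thus depends enormously on where in the range the true expected loss lies.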
How to pick a point in that range? The question may seem unanswerable, but it is not.
The reason is discontinuity in the range of available precautions.
Greater vigilance and more screening equipment and screeners are costly but there are sharply declining marginal returns.
One might decide to place two or three guards on every international flight, but it wouldn’t make any sense to place 10 guards on every flight; the incremental benefit would be negligible.
Similarly, maybe every security line in every international airport should be equipped with a body scanner, but it wouldn’t make sense to equip every security line with two body scanners.
The inability of our intelligence agencies to pool information effectively will be costly to correct, but these are costs that have to be incurred anyway—to protect the nation from a range of terrorist and other threats, not just threats to airline safety.
There is probably a bias among security personnel in favor of doing more than can be justified by the kind of analysis that I have just offered: that is, a bias in favor of adopting some precautions that have little or even no efficacy in preventing attacks, such as subjecting children who have already been patted down in the security line to a second pat-down at the airline gate.
The reason for the bias is bureaucratic, or in other words careerist: from the career perspective of a security officer, the worst thing that can happen is an exact repetition of a successful attack.
For then no excuses, even if reasonable, for having failed to prevent the new attack will be accepted.
So security agencies will tend to overinvest in preventing the repetition of previous attacks.
It has been argued persuasively that the nation has overinvested in airline security since 9/11 relative to security against other attacks, for example on trains or subways, because we have thus far been spared such attacks, though other nations, notably Britain and Spain, have not been.
I agree with Becker that the Internet has been on the whole a valuable innovation.
It is a less costly form of communication than either telephone or mail, and (a related point) a better substitute for personal communication than either telephone or mail; it economizes on time by reducing the transportation involved in meetings.
I commute to work less frequently nowadays because I can work efficiently at home; and academics in different universities or even different countries can collaborate in writing books and articles at far lower cost than if they had to rely on the older modes of interaction.
The Internet also, as Becker emphasizes, reduces the cost of access to information (though not always accurate information); it thus reduces information costs as well as communication costs.
As with most new technologies, however, there are downsides as well as upsides.
The effect on substitute services as such should not be cause for concern except to the producers, for substitution effects are the inevitable consequence of innovation.
The hotel and travel industries are hurt if business travel falls because of the substitution of online for in-person conferencing, though the hurt is offset to the extent that by lowering the costs of communication the Internet increases the amount of business activity and the geographical scope of firms.
But other harms caused by the Internet warrant social concern.
The Internet lowers the cost of communication and there are bad as well as good communications: bad in the sense that they promote activities that reduce overall welfare.
Examples are the use of the Internet by terrorists, by advocates of hate crimes, by purveyors of child pornography, by plagiarists, by defrauders, by violators of copyright law, and by identity thieves.
The increase in these pathologies as a result of the Internet is a social cost and may be considerable.
The contribution that the Internet has made to the recruitment and coordination of terrorists has created a considerable threat to our national security.
Of course, it is possible to monitor Internet communications, and our security and law enforcement agencies do that.
But the volume is overwhelming; coded communications provide a challenge to monitors; and privacy advocates insist on limitations on monitoring.
The Internet is also highly vulnerable to penetration and disruption by enemies of the United States.
China and other authoritarian countries censor the Internet to prevent their populations from obtaining access to information critical of government.
I agree with Becker that we should not make efforts to prevent China from censoring the Internet.
I don’t think we should interfere in the internal affairs of countries that are not enemies of the United States.
I also don’t think that Chinese censorship will be effective in the long run; it is too easy to circumvent Internet censorship.
The negative impact of the Internet on the newspaper industry is a possible source of concern.
Newspapers are a bundled product: the publisher provides a large variety of news, opinion, and advertising in an effort to obtain a large enough readership to offset the heavy fixed costs of producing information.
The Internet enables unbundling at low cost, which makes it difficult to cover those heavy fixed costs.
As classified advertising migrates from newspapers to inexpensive Internet services, for example, the revenues from such advertising no longer support costly newspaper newsrooms.
But the effect on the extent to which the public is well informed may be offset by the rise of the blogs, which provide immense quantities of information and opinion on public issues at zero cost (other than time cost) to readers.
At the same time, however, because the blogs are an unfiltered medium they are also a source of a great deal of misinformation.
Yet Wikipedia illustrates how prompt correction, which the Internet also facilitates, can reduce inaccuracies in online dissemination of information.
The time costs imposed by the Internet are a source of some concern.
People receive many more communications in the form of email than they did in letters and phone calls, because email is cheaper.
This can be a burden, and it is only partially offset by the “junk mail” filter programs that email services provide.
The sender of a communication will usually not consider the cost to the recipient.
Information overload can be a real cost.
Finally, we have become aware recently that the use of the Internet by drivers is a significant source of automobile accidents.
The net effect of the Internet on social welfare has probably been positive, but it is difficult to say how great it has been.
Communication and information flows were rapid before the Internet, and the effect of increased rapidity on economic output and personal satisfaction may not be great when the full costs of the Internet are taken into account.
The President is asking Congress to enact a one-year $33 billion job-subsidy plan.
An employer would receive a $5,000 tax credit in 2010 for increasing his labor force by one person and an additional subsidy for giving an employee a wage increase greater than the inflation rate.
The total subsidy would be limited to $500,000 per employer, in the hope that the principal recipients would be small businesses.
I do not know why the ceiling should be expected to have that effect.
Even big businesses like $500,000 windfalls.
If a big business happens to be increasing its hiring or its wages, why wouldn’t it claim the subsidy?
That point to one side, and disregarding also the abundant possibilities of gaming the program, stressed by Becker, the proposal is unlikely to be effective because it violates the economic principles that ought to guide stimulus programs.
The theory of stimulus is Keynes’s, is (in my opinion) sound, and is as follows.
If, because a high rate of unemployment creates pessimism about the economic situation, people increase their savings at the same time that business is reducing its investing—and that is our situation today—the government can by financing projects through borrowing put the inert savings to work (inert because businesses aren’t borrowing people’s savings).
The projects require workers, so unemployment falls, and with it pessimism and the cash hoarding, by consumers and businesses alike, that pessimism induces.
With Keynes’s theory understood, it becomes possible to list the principles of effective stimulus:
The stimulus must be large in order to make a substantial dent in unemployment.
The $787 billion stimulus enacted last February may have been too small; a $33 billion jobs subsidy is a drop in the bucket.
The stimulus must be implemented before recovery from a depression or recession is well under way—otherwise the government’s borrowing to finance the stimulus will slow the recovery by pushing up interest rates.
(That is, at some point in the recovery, business will resume investing and so will be competing with the government for capital.) If enactment of the job subsidy is delayed in Congress, or if procedures for preventing the gaming of the program are cumbersome, the subsidy expenditures may come too late to do any good.
The stimulus must be targeted on industries, and areas of the country, in which unemployment is high.
Like the $787 billion stimulus, the job-subsidy plan flunks this test as well.
Most important, a stimulus is designed to stimulate demand, not supply.
The economic problem for which a stimulus program is a solution is insufficient demand relative to the economy’s labor and other resources.
Because of overindebtedness and continued weaknesses in the financial system, consumers and businesses are reluctant to spend.
Businesses are reluctant to hire (that is one aspect of their reluctance to spend), so unemployment is high and wages are stagnant, which further depresses demand.
The idea behind the stimulus is for government demand to take the place of the missing private demand.
Government “buys” new roads, in lieu of consumers’ buying SUVs, and contractors meet the government’s demand by hiring unemployed construction workers.
The job-subsidy plan is not demand-focused, and so is unlikely to contribute to the economic recovery.
Suppose a firm in a depressed economy sells 100 earth-moving machines a year, and employs 200 workers.
If the government tells the firm it can save $5,000 on its taxes by increasing its work force to 201, the firm’s total costs will increase (by the wages and benefits of the additional worker less $5,000), but its revenues will not increase because adding a worker does not increase the demand for its product.
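The earth-moving example can be put in numbers. A minimal sketch (the $60,000 annual compensation per worker is an assumption, purely for illustration; the $5,000 credit is from the proposal):

```python
# Why a hiring tax credit fails when demand is flat (illustrative).
# Hiring an extra worker raises costs by (compensation - credit) but,
# with no additional demand for the firm's machines, adds nothing to
# revenue.

worker_cost = 60_000.0   # assumed annual wages and benefits of one worker
tax_credit = 5_000.0     # the proposed hiring credit

net_cost_increase = worker_cost - tax_credit
revenue_increase = 0.0   # demand for the firm's 100 machines is unchanged

profit_change = revenue_increase - net_cost_increase
print(f"Change in annual profit from hiring: ${profit_change:,.0f}")
```

On these assumed numbers the hire reduces annual profit by $55,000, so the credit alone gives the firm no reason to add a worker.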
There is an enormous amount of idle productive capacity in the U.S. economy at present.
There is thus a case, as liberal economists such as Paul Krugman keep urging, for further stimulus spending.
The problem is that such spending is irresponsible unless coupled with a credible commitment to repay, after the economy recovers, the money borrowed to finance the spending.
Not only is there no such commitment; at present the only realistic prospect is of staggering deficits stretching indefinitely into the future.
As a result there is at present no stomach for additional stimulus spending.
The government is reduced to impotent gestures, of which the job-subsidy plan is one.
On December 14, Becker and I blogged about the shortcomings of GDP (Gross Domestic Product) as a welfare measure.
A related question is the relation between GDP or other measures of economic prosperity and happiness, or what utilitarians and welfare economists refer to as “utility.” The great utilitarian philosopher Jeremy Bentham defined utility as the excess of pleasure over pain, or equivalently (for he did not confine pleasure and pain to purely physical sensations) happiness.
Most people, including most economists, do not regard per capita income or other measures of economic welfare (such as GDP, which is the market value of all goods and services sold in the United States, whether for consumption or investment, in the course of one year) as an end in itself, but as a contributor to human happiness broadly conceived.
They expect the contribution to be positive, however.
Most people devote much of their time to trying to increase their income, which suggests that income and welfare are positively correlated.
I will question this expectation and this suggestion.
The United States has the highest per capita income of any large country, and so one might expect it to have the highest per capita happiness.
But surveys indicate that this is not true; in one well-known study (http://en.wikipedia.org/wiki/Satisfaction_with_Life_Index#International_Rankings_2006) (visited Jan. 9, 2010), for example, the United States ranked 23rd out of 178 countries.
Denmark was first, Burundi last; nevertheless there was a general though rough correlation between happiness and per capita income: the happiest countries are rich, the unhappiest poor.
And generally as countries become richer, their inhabitants become happier.
These correlations are confirmed in an important and careful study by the economists Betsey Stevenson and Justin Wolfers, available at http://bpp.wharton.upenn.edu/betseys/papers/happiness.pdf (visited Jan. 10, 2010).
Cross-national comparisons are of limited significance because other things affect happiness besides income, such as health, population density, religious beliefs, quality of public services, internal and external security, family structure, climate, and income equality (given declining marginal utility of income, the more equal incomes are—holding other things constant, an essential qualification, obviously—the higher average utility can be expected to be).
Thus the fact that the United States ranks only 23rd in happiness is not terribly meaningful.
However, an important finding in another article by Stevenson and Wolfers, “The Decline of Female Happiness,” available at http://bpp.wharton.upenn.edu/betseys/papers/Paradox%20of%20declining%20female%20happiness.pdf (visited Jan. 9, 2010), is that in the United States men’s happiness is essentially unchanged since 1970, and women’s happiness has declined significantly, so that average U.S. happiness has declined.
In 1970, the average woman was happier than the average man; today the reverse is true.
In most other developed countries, average male and female happiness has grown, but male happiness has grown relative to female happiness.
The authors adjust for compositional effects in the United States—such as changes in the racial and ethnic composition of the society, labor force participation, education, marriage and divorce, and age—and, surprisingly, find few differences.
(One difference is that blacks, especially but not only black women, are happier today than in 1970.) They speculate (plausibly, in my opinion) that because women are on average more risk-averse than men, they find the range of career and relationship choices open to women nowadays a source of unhappiness.
The United States has become more competitive in the last 40 years, both absolutely and relative to most other developed countries, and this may explain the different overall happiness trends, though it does not explain why male happiness in the United States has not declined.
Probably the most notable finding in the Stevenson-Wolfers study, though not emphasized by them, is that increases in per capita income, at least in the United States, do not seem to increase happiness.
U.S. per capita income, adjusted for inflation, has more than doubled since 1970, yet happiness has declined.
The reason that happiness has not increased even though per capita income has increased may be that in comparing happiness at year t and at year t + 40, one is asking the inhabitants of two very different societies (whether it is the same person asked at both times or different people).
The people at year t didn’t know what conditions would be 40 years hence, and so couldn’t feel unhappy because they couldn’t experience those conditions.
If happiness is relative to existing opportunities, a change in those opportunities needn’t affect it.
Happiness moreover is a psychological, which is to say a biological, state, and biological states are not as variable as income is.
There are people in the world today who earn $1 a day, and people who earn $1 million a day, but it would be inconceivable that the latter was one million times happier than the former, just as no person in any society can run a million times faster than the slowest runner.
Human biology may simply be such that the elasticity of happiness to income is very low.
But one should distinguish between happiness and preferences, and hence between maximizing happiness and maximizing preference satisfaction.
People have a strong preference for more income over less and thus for a rising standard of living.
Adam Smith argued in The Wealth of Nations that people fooled themselves in thinking they would be happier with more money.
Maybe so; but as long as people do have this strong preference, economics can explain a great deal of human behavior.
Two of the largest bookstore chains—Barnes & Noble and Borders—are in danger of being forced into bankruptcy; their plight raises the broader question of whether bookstores will survive in any significant number, and, if not, what the consequences will be.
There are two clear threats, both Internet-related, to the bookstore.
The newest is the e-book, in which the contents of a book are transmitted over the Internet to an electronic reader owned by the book’s buyer.
No bookstore is involved.
Slightly older is the sale, as opposed to the delivery, of a book online; Amazon is the principal seller in this market.
No bookstore is involved unless Amazon doesn’t have the book in inventory; in that event the customer is referred by Amazon to a bookstore that has the book and will sell it online and deliver it to the buyer; the purchase is made through Amazon.
Most of the books that Amazon and the other online booksellers don’t carry in stock are out of print, and bookstores that stock such books tend to be small (though there are some exceptions), because the market for such books is tiny.
A   possible   third threat is diminished appetite for books.
I haven’t been able to find good statistics on annual sales of books in the United States (and anyway “books” is an extremely heterogeneous product category), but it would seem that the amount of entertainment and instruction available online is so great that online substitution for reading books must have reduced the demand for them.
At the same time, however, the demand for books should be stimulated by the fall in cost when books are bought online, cutting out the middleman—the bookstore—a point to which I’ll return shortly.
It seems inevitable that the number of books sold through bookstores will plummet.
Books bought through bookstores are more costly not only in price (to cover the costs of the bookstore), but also in customers’ time—the time required to travel to and from the bookstore, find the book one wants to buy, and complete the purchase (which takes more time than an online purchase).
The only offsetting advantages of the bookstore are the opportunity it provides for browsing and the fact that the customer can see and handle the book before buying it.
But these advantages are offset to a considerable extent (doubtless more than offset, for many customers) by the use by online sellers of artificial-intelligence programs to recommend books to their customers, by the much vaster inventory of an online seller like Amazon, by ease of search, by the reader reviews that the seller presents, and by the seller’s ability to allow customers to look inside the online book before ordering it, much as if he were leafing through a printed book in a bookstore.
It is true that Amazon’s book-recommendation program is primitive, and is no substitute for browsing in a well-stocked bookstore, but it will improve; one can foresee the day when customers will furnish (and Amazon store) comprehensive information about their age, sex, education, occupation, and reading tastes, which Amazon will use to create an initial list of recommended purchases, which it will refine as it receives orders from the customer plus supplementary information from the customer as the customer’s tastes and interests change.
At present fewer than 30 percent of all books are bought online (either in hard copy or as an e-book), but I have seen an estimate that this figure will grow to 75 percent within a few years.
Very few bookstores will have enough customers to survive if bookstore sales fall from 70 percent to 25 percent of all book sales, except those bookstores specializing in out-of-print books—whose customers will largely be online.
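The arithmetic behind that prediction is worth making explicit. A minimal sketch, using the two market-share figures quoted above and assuming (purely for illustration) that total book sales stay constant:

```python
# Hypothetical illustration: if total book sales stay constant while the
# bookstore share of sales falls from 70% to 25%, bookstore revenue falls
# to 25/70 of its former level -- a drop of roughly 64%.
total_sales = 100.0                      # arbitrary units; assumed constant
share_before, share_after = 0.70, 0.25

revenue_before = total_sales * share_before
revenue_after = total_sales * share_after
decline = 1 - revenue_after / revenue_before
print(f"bookstore revenue falls by {decline:.0%}")  # ~64%
```

A revenue decline of that size leaves little room to cover a store's fixed costs of rent, inventory, and staff, which is why the survivors would be mainly specialty sellers.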
In time, moreover, with more and more publishing electronic, there will be fewer and fewer “out of print” books.
The substitution of online for bookstore distribution of books will provide a substantial social saving and, as I said, increase the demand for books by reducing their retail price.
As for the effect on publishers and authors of books, there is concern that it will be adverse, but that seems unlikely.
A seller tries to minimize his cost of distribution, just as he tries to minimize his other costs; the publisher is the ultimate seller, and the bookstore is part of the chain of distribution.
But there is an important, and potentially relevant, exception, and that is where a distributor provides point-of-sale services that increase the demand for the product.
This is the rationale for resale price maintenance: manufacturers of some goods place a floor under the retail price of the goods, thus deliberately increasing the retailers’ margin, but hoping by doing so to induce them to engage in nonprice competition that will increase the demand for the goods.
Bookstore staffs, by decisions they make concerning choice and display of books to carry, and by making purchasing suggestions to customers, can, in principle, increase the demand for books.
But these services cannot guarantee the survival of many bookstores: unless customers value the services more highly than seems realistic to expect, there will be too few customers to defray a bookstore’s fixed costs at acceptable prices.
The question then becomes whether the loss of point-of-sale services that bookstores provide will hurt publishers (and therefore authors, whose prosperity is linked to that of publishers) more than it will help them by reducing their distribution costs.
That too is doubtful.
As technology continues its forward march, online booksellers will find it increasingly feasible to duplicate and indeed improve on the point-of-sale services that bookstores offer.
Bookstores will decline, and perhaps vanish when the current older generation, consisting of people habituated to printed books (as to printed newspapers), dies off.
Yet this may well represent genuine economic progress, just as department stores and supermarkets represented progress even though they caused the demise of countless small retailers.
In discussing inequality of income and wealth (I’ll use “inequality” to cover both) one could focus on inequality between countries—rich countries versus poor countries—or on inequality within countries.
The two inequalities could change at different rates, even in different directions.
I will focus, as does Becker, on inequality within countries.
As he points out, it has grown.
This seems to be the result of four forces: (1) a trend away from redistribution—that is, from policies designed to equalize after-tax incomes by combining heavy income taxation with subsidies for poor people; (2) greater competitiveness, in part because of deregulation, more rapid innovation, freer international trade, and a decline in discrimination; (3) the growth in the size of markets (this is related to freer international trade), which increases the returns to excellence and innovation; and (4) an increasing return to IQ because of the changing character of production.
The first three points are related to the collapse of communism and the failure of collectivist policies in noncommunist countries (notably the failure of regulation in the United States in the 1970s and the ossification of labor unions despite government support), and the last to the decline of manual labor relative to brain work as economic activity becomes increasingly automated as a result of technological progress.
Most commentators on inequality would reword (4) as an increasing return to higher education, but it is very difficult to gauge the value added of a higher education, especially the economic value.
Think of such college dropouts as Bill Gates, Steve Jobs, and Mark Zuckerberg.
Higher education is indispensable for persons who want to be members of professions, and doubtless helpful for most people who want to go into business, but it will not make up for deficiencies in IQ.
Compare two people, A and B.
A does not go to college, and his lifetime earnings are $1 million.
B goes to college, and his lifetime earnings are $2 million.
In gauging the value added by a college education, one has to control for, among other things, difference in both cognitive and noncognitive abilities (IQ illustrating the first, ability to apply oneself to a task the second).
The point is not that B didn’t derive a benefit from college.
Maybe without college his lifetime earnings would have been only $1 million too.
But maybe if A had gone to college, his lifetime earnings would still be only $1 million because he wasn't smart enough, or motivated enough, to derive any benefit from college.
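The selection problem in this example can be made concrete. The sketch below uses the hypothetical A/B earnings figures from the text; the two counterfactual entries (what A would earn with college, what B would earn without) are the assumptions the text asks us to consider:

```python
# Hypothetical figures from the A/B example. The naive estimate of college's
# value added compares B (college) with A (no college); the correct estimate
# compares each person with his own counterfactual.
earnings = {
    ("A", "no_college"): 1_000_000,
    ("A", "college"):    1_000_000,  # assumed: A would not benefit from college
    ("B", "no_college"): 1_000_000,  # assumed: B's earnings without college
    ("B", "college"):    2_000_000,
}

naive = earnings[("B", "college")] - earnings[("A", "no_college")]
value_added_B = earnings[("B", "college")] - earnings[("B", "no_college")]
value_added_A = earnings[("A", "college")] - earnings[("A", "no_college")]
print(naive, value_added_B, value_added_A)  # 1000000 1000000 0
```

In this example the naive cross-person comparison happens to equal B’s true gain, but it wrongly implies that sending A to college would also add $1 million; A’s own counterfactual gain is zero, which is exactly why one must control for cognitive and noncognitive abilities.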
The University of Chicago economist James Heckman may well be right that the only hope for many As is early childhood intervention.
Thus the question is whether today in America everyone who can benefit from a college education gets a college education.
If so, increasing college enrollments would not reduce inequality, though school reform at the high school and elementary school levels might increase the number of persons who can benefit from a college education, although Heckman’s research provides a basis for skepticism.
Inequality is self-limiting to a degree; if it generates tremendous envy and resentment, the government will be under irresistible pressure to adopt redistributive policies.
(Equality is likewise self-limiting: too much, and it destroys incentives.) Oddly, increased inequality in the United States has not generated much envy or resentment.
The principal opponents of inequality are upper-middle-class liberals rather than poor or lower-middle-class people, and they failed to obtain a rescission of the Bush tax cuts for high-income taxpayers.
Obama’s health care reform is the only major redistributive measure adopted in the United States in recent times, and its major redistributive component is an expansion in Medicaid, which the states (which pay half the cost of Medicaid) are busily trying to undermine by reducing the medical treatments for which Medicaid will reimburse providers of health care.
If I am correct that inequality is not influencing public policy in the United States—is just not a political issue—then I don’t think there is an inequality “problem.” There are just the facts that in the United States a small fraction of the population has an enormous share of the nation’s income and wealth and that, at the other end of the income distribution, there are many very poor people.
Are the rich a “problem”? I don’t think so.
All their money is either spent on consumption or invested, and either way it is economically productive; it’s not as if the rich hoarded their wealth in the form of gold bars.
The rich influence elections by their campaign contributions, but they would have the same influence with much less money, because it is not the absolute level of a rich person’s campaign contributions that sways elections but the level relative to contributions by other rich persons supporting competing candidates.
Warren Buffett warns that without stiff inheritance taxes (which the United States does not have), we will find ourselves in the grip of an “entrenched plutocracy” (according to the   Economist   article that Becker cites).
I don’t understand that concern.
The heirs of the rich spend their money on consumption or investment, just like their parents; dissipate it rapidly, if they’re dumb; but in any event do not by virtue of having inherited a lot of money block the upward striving of others.
Poverty is a problem, but if the rich are not a problem, then the problem of poverty is not a problem of inequality.
It looks like a problem of inequality only because the wealth of the wealthy seems an obvious source of money to alleviate poverty.
But it is not that taxing the rich would alleviate poverty, but that taxing the rich and using the tax revenues to raise the incomes of the poor would alleviate poverty.
Inequality should be a non-issue in the United States—and to a considerable degree it is.
A number of states are in quite desperate financial straits.
They have huge debts and, like members of the eurozone, they cannot lighten their debt load by inflating their currency or improve their trade balance through devaluation.
It is natural that they should be raising fees, such as college tuition.
Whether this is the best way to reduce debt is a separate question, but probably an academic one; the pattern of revenue enhancement and cost reduction that a state embraces depends on the political balance in the state, rather than on what is efficient or otherwise in the public interest.
As Becker points out, there are external benefits to higher education.
In the narrowest terms, college-educated persons (especially college graduates) have significantly higher incomes than the non-college-educated population, and those higher incomes generate higher tax revenues, which finance government programs that largely benefit other people.
The amount of the external benefits cannot actually be measured, however, because the decision to attend college is not random; generally, it is the intellectually abler who attend college, and their higher incomes are the combined result of their personal characteristics and the increased skills that college imparts to them.
Nevertheless there is little doubt that college does generate external benefits, which creates a case for subsidy.
Whether it is a good case is a separate question.
Because college contributes to higher earnings, which are not taxed until earned (that is, the asset, consisting of future expected earnings, that is obtained by attending college is not taxed), attending college is attractive to anyone who has sufficient intelligence and discipline to benefit from it, and he or she can borrow to finance tuition and living expenses.
In any event, there is no case at all, from an overall social standpoint, for subsidizing students who would pay full college tuition without the inducement of a subsidy; the subsidy does not induce students to obtain a college education who otherwise could not afford one; it is a windfall to their families.
Private colleges recognize this.
They charge very high tuition (though not high enough to cover all their costs—but they have other sources of funds, such as alumni donations), but grant scholarships or loans to students whose families can’t afford the tuition.
Charging low tuition to everyone, as public colleges do for residents of the state in which the college or university is located, does not make economic sense; it merely, as I said, provides windfalls to families willing and able to pay the full tuition.
As Becker points out, this results in regressive redistribution of income, because families that can pay full tuition are wealthier than the average taxpayer, who pays for the costs of public colleges that tuition doesn’t cover.
This situation presents a case for a virtuous tax increase (raising a fee for a public service is the equivalent of a tax): the increase helps to close the state’s fiscal gap; the burden of the increased tax is borne entirely by the well-to-do; and some of the higher revenue can be used to subsidize students unable to afford the higher tuition.
There is still an argument against the tuition increase, which resembles the economic argument for the moratorium on increasing the federal income tax rate even on taxpayers who have very high incomes—even incomes of a million dollars or more a year (a proposal for eliminating the Bush tax cuts for those taxpayers was made by liberals but rejected by Congress).
The argument is that any tax increase will reduce private spending (consumption and investment) unless the tax revenues are used to increase such spending; and given the still very shaky state of the U.S. economy, any measure that reduces such spending is suspect.
Well-off families will have to allocate more of their income to their children’s education, and so will reduce their other spending.
The revenue from the higher tuition will flow into the state’s coffers, but the question is whether the effect of that flow in stimulating investment and consumption will be greater than the decline in personal spending by persons paying the higher tuition.
Conceivably, increasing tuition could retard economic recovery, though it is nevertheless a sensible long-term measure of fiscal reform because there is no reason to subsidize tuition of persons able and willing to pay it without subsidy.
I said earlier that the combination of tax (or fee) increases and spending cuts that states are adopting in an effort to alleviate their debt burden depends on the balance of politically effective interest groups.
Well-to-do families will fight efforts to increase college tuition across the board.
So of course will the state universities and other public colleges.
Low tuition, made up for by public subsidies, helps state and city colleges and universities attract students who would otherwise go to private colleges and universities.
Increasing tuition may increase state college revenues, assuming that demand is inelastic (meaning that the percentage fall in demand because of the higher price is less than the proportionate increase in price, so that revenue rises) within the range of the increase, but it will reduce the quality of the student body.
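The parenthetical definition of inelastic demand can be illustrated with a small constant-elasticity sketch; the tuition, enrollment, and elasticity figures below are purely hypothetical:

```python
# Hypothetical linear-approximation sketch: enrollment falls by roughly
# elasticity * (% price change). With inelastic demand (|e| < 1) a tuition
# increase raises revenue even though enrollment drops; with elastic
# demand (|e| > 1) it lowers revenue.
def revenue_after_increase(price, quantity, pct_increase, elasticity):
    """Approximate revenue after a price increase, given a point
    price elasticity of demand (a negative number)."""
    new_price = price * (1 + pct_increase)
    new_quantity = quantity * (1 + elasticity * pct_increase)
    return new_price * new_quantity

base = 10_000 * 20_000                                    # 20,000 students at $10,000
inelastic = revenue_after_increase(10_000, 20_000, 0.10, -0.4)
elastic = revenue_after_increase(10_000, 20_000, 0.10, -1.5)
print(inelastic > base, elastic < base)  # True True
```

The linear approximation is adequate only for small price changes; the point is just the sign of the revenue effect within the range of the increase.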
The proposed tuition increases raise the broader question whether states should be in the business of providing higher education at all, rather than leaving it to the private market.
In fact state universities have been weaning themselves from state support.
At prestigious public universities such as the University of Michigan, state subsidies now account for only about 10 percent of the university’s budget.
The fact that a state legislature can raise state university tuition at will, or, stated otherwise, reduce the subsidy at will, creates the kind of financial uncertainty that has brought English universities low.
Universities supported by diverse financial sources are more stable, which is the main reason for the trend toward public universities’ seeking private support.
The recession-driven tuition increases will accelerate this trend.
That would be a good thing.
On October 29, 2006, shortly after the award of the Nobel Peace Prize to Muhammad Yunus and his Grameen Bank of Bangladesh, Becker and I blogged about microfinance, which Yunus and his bank had pioneered.
The term “microfinance” (or “microlending”) refers to the making of tiny loans to small farmers, shopkeepers, artisans, and other minute commercial enterprises in underdeveloped countries such as India and Bangladesh at high interest rates—sometimes as high as 20 percent a day.
In my blog posting, I called microfinance a worthy experiment, superior to philanthropy because the high interest rates that the microfinanciers charge should induce self-selection by the borrowers: a borrower has to have confidence in the project for which he is seeking microcredit in order to be willing to assume the burden of servicing his debt.
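The self-selection logic can be reduced to a one-line screening condition; the rates below are illustrative, not figures from the text:

```python
# A minimal self-selection sketch (illustrative figures): a borrower takes
# out a microloan only if the project's expected return over the loan term
# exceeds the debt service, so a higher rate screens out borrowers who lack
# confidence in their projects.
def will_borrow(expected_return_rate, loan_rate):
    """Borrower self-selects in only if the project is expected to
    out-earn the cost of the loan."""
    return expected_return_rate > loan_rate

rate = 0.30  # hypothetical 30% interest over the loan term
print(will_borrow(0.50, rate))  # True  -- confident borrower still borrows
print(will_borrow(0.10, rate))  # False -- marginal project is screened out
```

This is the sense in which a high rate substitutes for philanthropic vetting: only borrowers who expect their projects to clear the hurdle rate should, in principle, apply.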
But I sounded a skeptical note.
I said that the “success [of microfinance] has yet to be demonstrated despite glowing appraisals by Kofi Annan and others.
It may simply be the latest development fad…The evidence for the efficacy of microfinance in stimulating production and alleviating poverty is so far anecdotal rather than systematic.
The idea of borrowing one's way out of poverty is passing strange.
And I am unaware of any historical examples of nations that climbed out of poverty on the backs of small entrepreneurs financed by credit.” I noted that the Grameen Bank had a surprisingly low default rate and pointed out that it “does not have written loan agreements and does not sue defaulters or invoke other legal remedies against them.
The natural inference to draw is that the bank is extremely selective in its choice of persons to whom it is willing to lend, and such selectivity, if imitated by other microfinanciers, must greatly limit the scope and impact of microfinance.” I concluded by “suggest[ing], albeit tentatively, that there may be a good deal less to microfinance than its boosters claim.”
Microfinance has continued to expand since 2006.
The number of microloans made in India rose from 10 million in March 2007 to 26.7 million in March 2010.
By the end of 2009 total microloans stood at $70 billion, half of them in India and Bangladesh.
But a series of suicides by microborrowers in the fall of 2010 in the Indian state of Andhra Pradesh, the site of more than 25 percent of Indian microlending, led to charges that the microlenders, which are commercial rather than eleemosynary enterprises (though, as we’ll see, the Grameen Bank seems to straddle the two types of enterprise), were charging exorbitant interest rates and coercing people into taking out loans they could not afford to repay.
Politicians urged borrowers not to repay their microloans; and repayment rates, previously as high as 98 percent, plummeted to 10 to 20 percent.
The government of Andhra Pradesh has imposed strict limitations on microlending, and the Reserve Bank of India (India’s central bank) has proposed that similar controls be established throughout India—controls including a ceiling on interest rates and loan amounts and limiting eligibility to borrowers having an income above a specified level.
In December 2010 the prime minister of Bangladesh declared that microlenders were “sucking blood from the poor in the name of poverty alleviation” and ordered an investigation of Grameen Bank.
Similar reactions to microlending have taken place in Nicaragua, another country that had embraced microlending with enthusiasm.
And then just a week ago, in a short article in the   New York Times   entitled “Sacrificing Microcredit for Megaprofits,” www.nytimes.com/2011/01/15/opinion/15yunus.html?_r=2&scp=2&sq=microfinance&st=cse, Yunus himself wrote: “when I began working…on what would eventually be called ‘microcredit,’ one of my goals was to eliminate the presence of loan sharks who grow rich by preying on the poor.
In 1983, I founded Grameen Bank to provide small loans that people, especially poor women, could use to bring themselves out of poverty.
At that time, I never imagined that one day microcredit would give rise to its own breed of loan sharks.
But it has.” He writes that the problem “began around 2005, when many lenders started looking for ways to make a profit on the loans by shifting from their status as nonprofit organizations to commercial enterprises.
In 2007, Compartamos, a Mexican bank, became Latin America’s first microcredit bank to go public.
And this past August [2010], SKS Microfinance, the largest bank of its kind in India, raised $358 million in an initial public offering.”
It seems rather an odd point for Yunus to make, because the Grameen Bank is itself a stock corporation, not a nonprofit.
See www.grameen-info.org/index.php?option=com_content&task=view&id=179&Itemid=145.
Yunus says in the   Times   article that all the bank’s profits are returned to the borrowers in the form of dividends, but if so I don’t understand how the bank attracts equity capital.
He contends that his is the best model for microlending, explaining that “commercialization has been a terrible wrong turn for microfinance, and it indicates a worrying ‘mission drift’ in the motivation of those lending to the poor.
Poverty should be eradicated, not seen as a money-making opportunity… Instead of creating wholesale funds dedicated to lending money to microfinance institutions, as Bangladesh has done, these commercial organizations raise larger sums in volatile international financial markets, and then transmit financial risks to the poor…Some advocates of commercialization say it’s the only way to attract the money that’s needed to expand the availability of microcredit and to ‘liberate’ the system from dependence on foundations and other charitable donors.
But it is possible to harness investment in microcredit—  and even make a profit  —without working through either charities or global financial markets.” The phrase I’ve italicized is strange, because Yunus claims that his bank does not make a profit, but instead distributes any surplus of income over expenses to the borrowers, and urges that as the model for the microfinance industry.
The recent uproar in microlending puts one in mind of our own controversies over payday lending, title lending (borrowing at high interest rates with one’s car as security), credit-card lending, and subprime mortgage lending—all examples of loans at very high interest rates, made largely to unsophisticated consumers.
There is an inconclusive literature on the net social benefits of these forms of credit (sometimes lumped together in the term “fringe banking”).
The basic difference between our fringe banking and microfinance is that fringe banking is primarily consumer rather than commercial lending, but the small farmers and shopkeepers who take out microloans probably are even less sophisticated on average than the American customers for fringe banking.
Moreover, although what critics of fringe banking call “predatory lending” is not highly regulated in the United States and as a result fraud and other exploitive conduct may well abound, regulatory protections are undoubtedly far weaker in countries like India and Bangladesh.
Yunus’s mysterious nonprofit-profit model of microfinance cannot attract substantial capital, but commercialized microfinance seems increasingly unlikely to have substantial social benefits—and this with or without regulatory controls designed to protect unsophisticated borrowers.
Without the controls, there will undoubtedly be a good deal of fraud, and improvident borrowing without fraud.
With the controls, the amount of lending will be curtailed.
But with or without controls, the amount of lending will be limited by the very high default rates that can be anticipated unless there is very careful screening of would-be borrowers.
Interest rates will remain very high and will strangle many of the businesses that rely on microfinance.
Microfinance may turn out to be a niche service, with little overall impact.
International comparisons are tricky, as Becker points out, but the PISA (Programme for International Student Assessment), which tests 15-year-olds for proficiency in reading, math, and science with well-designed standardized tests conducted in thousands of schools all over the world, is a careful and responsible program whose results deserve to be taken seriously.
The latest results (which are for 2009) reveal among other things that although the United States spends more money per student on secondary school education than any other country except Switzerland and Austria, Americans’ performance on the PISA tests is mediocre.
In the latest tests Americans ranked 17th in reading, 24th in science, and 30th in math.
Fifteen-year-olds in East Asian nations, along with Australia, New Zealand, Finland, Switzerland, the Netherlands, Belgium, and Canada, outperform American students in all three subjects.
Since 2000, when the PISA tests were first given, the United States has fallen in rank in reading and science, and is unchanged in math.
The rankings tend to be interpreted as measures of the quality of a nation’s pre-collegiate school system (primary and secondary education, since primary education influences performance in secondary schools).
But this may be a mistake.
Schooling is only one, though doubtless an important, input into performance on the PISA tests.
Another is IQ.
There have been some efforts to compare IQ across countries, notably by Richard Lynn and Tatu Vanhanen; see their 2006 book   IQ and Global Inequality  .
Their results cannot be regarded as definitive, given significant limitations in the data, but they are suggestive.
The authors find that the East Asian countries, which generally rank highest on the PISA tests (including reading—not just math and science), have the highest average IQs; the average IQ of Americans is lower because of our large black and Hispanic populations, which have lower average IQs than whites and Asians.
IQ is understood to reflect both genetic endowment and environmental factors, particularly factors operative very early in a child’s life, including prenatal care, maternal health, the educational level of the parents, family stability, and poverty (all these are correlated, and could of course reflect low IQs of parents as well as causing low IQs in their children).
The case for very early intervention in children’s development, powerfully urged by the distinguished University of Chicago economist James Heckman, can be understood as an effort to lift IQs in the black and Hispanic communities and by doing so improve the educational performance of black and Hispanic children, including performance on the PISA tests.
It is true that Heckman emphasizes noncognitive skills that facilitate learning, but these skills could also increase performance on IQ tests, indicating a positive effect on IQ.
The 2009 PISA test scores reveal that in American schools in which only a small percentage (no more than 10 percent) of the students receive free lunches or reduced-cost lunches, which are benefits provided to students from poor families, the PISA reading test scores are the highest in the world.
But in the many American schools in which 75 percent or more of the students are from poor families, the scores are the second lowest among the 34 countries of the OECD; and the OECD includes such countries as Mexico, Turkey, Portugal, and Slovakia.
If IQ is playing a significant role in America’s mediocre showing on the PISA tests, improvements in secondary school education are unlikely to have dramatic effects.
The white and Asian kids in American schools are already doing fine, for the most part; the black and Hispanic kids may not do much better until their early childhood environment is improved to the point at which black and Hispanic IQs are raised significantly.
Analysis of the PISA results has revealed some other interesting facts.
One is that higher teacher salaries dominate small class size as a factor in high PISA scores.
This is a reassuring finding because it suggests that secondary school education can be improved at no net increase in cost. If class size is raised in proportion to increases in teacher salaries, the school’s salary bill is unchanged; and in the long term cost should actually fall, because fewer classrooms reduce the size, and therefore the cost, of the school building even though each classroom is larger.
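The cost-neutrality claim can be checked with back-of-the-envelope arithmetic; the student count, class sizes, and salaries below are hypothetical:

```python
# Hypothetical school with a fixed number of students. Salary cost =
# number of teachers * salary = (students / class_size) * salary, so
# raising salary and class size by the same proportion leaves it unchanged.
def salary_cost(students, class_size, salary):
    return (students / class_size) * salary

before = salary_cost(1200, 24, 60_000)   # 50 teachers at $60,000
after = salary_cost(1200, 30, 75_000)    # 25% larger classes, 25% raise: 40 teachers
print(before == after)  # True -- the salary bill is unchanged
```

Any long-run saving on buildings (fewer, though larger, classrooms) would come on top of this unchanged salary bill.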
Another reassuring finding, in light of all the agitation over charter schools and voucher systems, is that private schools on average do not outperform public schools after adjusting for the quality of students upon entrance and that competition for students does not seem to improve average performance either.
Of course these are generalizations across many countries and America’s individualistic culture may not fit them.
This observation is especially pertinent to another finding in the PISA report, which is that poor kids do better in a school that has mostly middle-class kids.
Our education system, both public and private, tends as a practical matter to be segregated according to family income and social class.
This is a reflection of economic inequality, which is great in the United States and growing.
Becker points out that despite the imperfections of its educational system, America remains preeminent in innovation.
This is important but it appears to be due in part to the nation’s attractiveness to immigrants.
Many of our innovators are foreign born; increasingly they are Asian.
The United States, as a result of generous immigration policies in the nineteenth and early twentieth centuries, also has the largest Jewish population in the world—larger than Israel’s—and a higher percentage of American Jews than of Israeli Jews are Ashkenazi, that is, of European Jewish descent.
This is significant because Ashkenazi Jews have a significantly higher average IQ than other Americans, including (though the margin is small) East Asians, and, as important, a very strong cultural orientation toward high achievement in business, science, and intellectual fields generally.
From the standpoint of innovation, a wide distribution of IQs is more important than the average IQ, because most innovations will come from persons with an above-average IQ, and in scientific and other technical fields from persons with a way-above-average IQ.
Moreover, because of the bell shape (normal distribution) of IQ across persons, a higher average IQ translates into a much longer upper tail—the part of the distribution that contains the highest IQs.
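The tail arithmetic can be checked directly from the normal distribution. Here is a minimal sketch; the means, the common standard deviation of 15, and the cutoff of 130 are illustrative assumptions, not measured values:

```python
from math import erfc, sqrt

def upper_tail(cutoff, mean, sd):
    """Fraction of a normal(mean, sd) population above `cutoff`."""
    z = (cutoff - mean) / sd
    return 0.5 * erfc(z / sqrt(2))

# Two populations with the same spread; the second's mean is half a
# standard deviation higher. The share above the cutoff nearly triples.
low = upper_tail(130, 100, 15)     # about 2.3%
high = upper_tail(130, 107.5, 15)  # about 6.7%
print(f"{low:.4f} {high:.4f} ratio {high/low:.2f}")
```

A modest shift in the mean thus produces a disproportionately large change in the upper tail, which is the part of the distribution that matters for innovation.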
If, as I am speculating (and I emphasize that it is speculation), IQ is a major factor in school performance, we should hesitate to place too much weight on variance in educational investments, methods, etc. in explaining differences in that performance, relative to genetic and cultural factors and also (and importantly) to economic inequality.
I approach the issue of immigration reform (theoretical reform; neither Becker nor I is considering the political obstacles to radical changes in immigration law) somewhat differently.
I begin by asking: why restrict immigration at all? The only answer I consider fully compatible with a market-oriented approach to social issues is that the immigrant might reduce the net social welfare of the United States: if, for example, he were unemployable or on the verge of retirement, were a criminal, were likely to require highly expensive medical treatment, or would impose greater costs in congestion or pollution than he would confer benefits, with benefits measured (crudely) by his income before taxes and by any consumer surplus that he might create.
I assume that the welfare of foreigners as such does not enter into the U.S. social welfare function; but immigrants who create net benefits in the sense just indicated contribute to the strength and prosperity of the nation.
The problem of the undesirable immigrant (the immigrant who wants to free ride on the services and amenities that the United States provides its citizens) could be solved by means of a two-stage process.
In the first stage, the prospective immigrant would be screened for age, health, IQ, criminal record, English language capability, etc.; the screening need not be elaborate.
If the would-be immigrant passed, in the sense that he seemed likely to add more to U.S. welfare than he would take out, he would be admitted without charge.
If he flunked the screening test, an estimate would be made of the net cost (discounted to present value) that he would be likely to impose on the U.S. if he lived here, and he would be charged that amount for permission to immigrate.
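The "net cost discounted to present value" is a standard annuity calculation. A toy sketch of the mechanics; the $5,000 annual cost, the 20-year horizon, and the 5 percent discount rate are hypothetical numbers chosen only for illustration:

```python
def entry_fee(annual_net_costs, rate):
    """Present value of a stream of expected annual net costs,
    each discounted back from the end of its year."""
    return sum(cost / (1 + rate) ** year
               for year, cost in enumerate(annual_net_costs, start=1))

# Hypothetical: $5,000 of expected net cost per year for 20 years,
# discounted at 5 percent, yields a fee of roughly $62,000.
print(round(entry_fee([5000] * 20, 0.05)))
```

The actual fee schedule would of course depend on the cost estimates and discount rate the government adopted.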
An alternative, less revolutionary, approach to screening out free-rider immigrants would be, first, to deny immigrants access to Medicaid and other welfare programs until they had lived in the United States for a significant period of time, and, second, to auction off a certain number of immigrant visas to the highest bidders.
Immigrants willing to take their chances without access to welfare programs (not that all access could be denied; no one could be refused emergency medical treatment on a charity basis), and immigrants willing to bid high prices in an immigration auction, would be likely to be productive citizens, in the first case, and to cover any costs they would impose on the nation's health or other welfare systems, in the second case.
Either the more or the less revolutionary alternative would impose significant transition costs, but that would be true of any radical change in immigration policy.
The obvious cost (though not really a cost, but rather a redistribution of income) would be that by increasing the supply of labor, an immigration policy that made it easy for employable workers to enter the U.S. labor force would reduce wages in the labor markets that the immigrants entered.
A closely related but subtler consequence is that the downward effect of large-scale immigration on wages (a short-run effect, in all likelihood) would complicate the process of determining the correct fee to prevent free riding: an immigrant who might be able to pay his way at the existing wage level might be unable to do so if the wage level fell as a result of massive immigration.
Similarly, congestion and pollution externalities might increase at an increasing rate with massive immigration, requiring a further adjustment in the fee charged the "undesirables."
Either approach seems to me preferable to a flat fee for all would-be immigrants.
A flat fee would not do away with the need to screen, since some would-be immigrants might impose net costs on the U.S. that were greater than the fee; that is why Becker's approach includes screening.
The flat fee would exclude two types of immigrant that should, in a market-oriented approach, be admitted.
One type would be undesirables willing and able to compensate the United States for the expected costs that they would impose, and so they would not be free riders after all; a very wealthy person on the verge of retirement would be an example of such an undesirable. The second type would be highly promising would-be immigrants (for example, persons with a high IQ) who for some reason (perhaps because they reside in extremely poor countries) simply could not pay the down payment on the fee.
The fee would, it is true, increase government revenues, which may seem a plus.
But it would do so at the usual cost of distorting the allocation of resources, in this case by excluding immigrants in the second class.
I note two complications.
First, it may be desirable to adhere to the current policy of granting asylum to foreigners who are escaping persecution, even if they do not seem likely to be able to pay or to earn enough to cover the costs they'll impose on this country.
My reason is not sentiment, but the fact that people who are persecuted tend to be either nonconformists or members of particularly successful minorities, and in either case they, or at least their children, are likely to be productive citizens even if their U.S. employment prospects are dim.
Second, the United States in formulating immigration policy may have to worry about brain drain, and, what may be more important, leadership drain, from poor or unstable countries.
For example, it would be highly unfortunate if all the Iraqis who have the ability and motivation to build a democratic, free-market society fled to the United States.
Thus it may sometimes be in our national interest to exclude persons who would otherwise be highly desirable immigrants, in order to shore up forces or tendencies in their own countries that promote U.S. interests.
However, I do not know how to mesh this concern with either my or Becker's proposals.
This was, as usual, a stimulating set of comments.
I cannot respond to all of them, but I will try to respond to some and in so doing clarify my original proposal.
It is apparent that a number of the commenters misunderstand my proposal.
I accept responsibility for not having explained it adequately; in addition, I modify it slightly in this response, in light of the comments.
A number of comments suggest that Becker's and my proposals are anti-immigrant or anti-poor.
That is incorrect.
As Becker explains in his response, his proposal would facilitate immigration by unskilled workers--as would mine, had it not been for my reference to IQ, which I retract below! Both of us contemplate that our proposals would lead to greater immigration.
That in turn would tend to redistribute wealth from rich to poor, because the vast majority of immigrants (other than some asylum seekers) raise their standard of living by coming to the United States.
This, by the way, was surely true of the people who were able to immigrate to America in the eighteenth century solely by virtue of the institution of indentured servitude.
Indentured servitude (which must not be confused with slavery) is a method of commitment that, like a mortgage, enables a person to obtain an economic advantage that he could not obtain otherwise.
Unfortunately this is a point that is very difficult for people not steeped in economic thinking to grasp.
But try!
One comment misdescribes my proposal as one "to sell immigration rights." The only sale component concerns immigration slots auctioned to rich people who, because of age or health, would be unlikely to be productive citizens; the auction price would compensate the rest of us for supporting them in their sickness and old age.
I realize that I created the impression that I wanted immigration officials to assess whether each individual prospective immigrant would be likely to make a net contribution to the American economy, or to American society more broadly, as a condition of permitting him to immigrate.
That was error; it would be an excessively costly, perhaps indeed a completely infeasible, undertaking.
What I should have said is that the government should adopt a few simple criteria, perhaps limited to age, health, and criminal history, which could in most cases be readily determined, to screen would-be immigrants, and couple that with a residency requirement for welfare benefits (see below).
I should not have mentioned IQ, since as Becker points out we need additional unskilled as well as skilled workers and since it is difficult to design IQ tests that will yield comparable results for persons of different linguistic, cultural, and socio-economic background.
Congestion and pollution externalities are potentially strong objections to high levels of immigration, but they should affect policy at the level of deciding whether to place some overall limit on the annual immigration rate; they cannot be used to screen individual applicants.
Employability is important, but age and health are proxies for it; and disentitling new immigrants to social services for a limited period of time (probably no more than a year or two) is a way of discouraging immigration by workers who may be young and healthy and have a clean criminal record yet who for one reason or another are not attractive to U.S. employers.
The purpose of this temporary residency requirement for entitlement to social benefits is not, as one comment asserts, to curb immigration because of the welfare state; it is to discourage free-rider immigrants.
To repeat, the overall effect of our proposals would be to increase the amount of immigration.
Moreover, even though the present patchwork of immigration laws is inefficient, I believe that the net effects of immigration, today as in the past, legal and illegal, on American society are positive, consistent with the study by Smith and Edmonston cited in one of the comments.
Speaking of the immigration law patchwork: an excellent comment, diffidently offered by an undergraduate, asks which features of the present system of immigration rights my proposal is intended to replace. All of them? No; I said that I thought we should continue to grant asylum to victims of persecution.
But I did not comment on the other grounds on which people are allowed to immigrate under existing law, such as national quotas, family reunification, lottery, and special skills.
Although family reunification has obvious appeal, I cannot think of a good reason to specify immigration quotas by nation.
A lottery would make sense only if the number of people who passed the screening test that I have proposed exceeded some overall ceiling on immigration derived from concern with congestion or pollution externalities; in that event, a lottery would be a cheap way of equilibrating applications to openings.
A special-skills exception would be superfluous (and is costly to administer) if the screening approach were adopted, because almost everyone who had special skills would pass the test.
One comment contains an intriguing hint that might be elaborated as follows: illegal immigration, being costly, tends to filter out would-be immigrants who are faint of heart or who lack a really strong desire to live in the United States, while letting in would-be immigrants who are daring, ingenious, and optimistic about their chances for success in the U.S., though they may also have a below-average commitment to legality.
On balance, illegal immigrants may constitute a desirable class of immigrants.
If this is correct, it supports the Bush Administration's amnesty proposal.
There were, as usual, many very interesting comments.
Let me try to respond to a few.
One interesting suggestion is that an increase in demand for drugs, brought about by the Medicare prescription-drug benefit, will not, as I suggested, reduce average price by enabling the drug companies' heavy fixed investment in R&D to be spread over a greater output; the companies' patent monopolies will enable them to charge higher prices.
This is possible but not certain.
If average cost is not rising in output, an increase in demand will not lead to a higher price.
If, however, the demand curve facing a monopolist becomes less elastic (meaning that a small increase in price will have a less depressing effect on output), then the monopolist will raise his price (unless, perhaps, his average costs are falling).
That is a possible effect of the prescription-drug benefit: the benefit will slow the output reaction to a higher price.
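The elasticity mechanism can be made concrete with the standard monopoly markup (Lerner) rule: a profit-maximizing monopolist sets (P - MC)/P = 1/e, so P = MC * e/(e - 1), where e is the price elasticity of demand. The marginal cost and elasticity figures below are purely illustrative:

```python
def monopoly_price(marginal_cost, elasticity):
    """Profit-maximizing price from the Lerner rule: P = MC * e / (e - 1)."""
    return marginal_cost * elasticity / (elasticity - 1)

# If a subsidy makes demand less elastic (e falling from 3 to 2),
# the profit-maximizing price rises even though costs are unchanged.
print(monopoly_price(10, 3))  # 15.0
print(monopoly_price(10, 2))  # 20.0
```

This is why a benefit that dulls buyers' sensitivity to price can raise patented-drug prices without any change on the cost side.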
However, this effect will be offset to some and perhaps a great extent by the fact that many patented drugs have good substitutes (for example, the different antidepressants and painkillers), since different molecules having the same therapeutic effect are separately patentable.
However, I grant that there may indeed be a price effect from the new benefit; one possible (and unlikely!) response would be to shorten the patent term for pharmaceutical drugs.
Some comments suggest that compulsory health insurance and socialized medicine are, contrary to what I argued in my posting, the same thing.
That is not so.
Compare automobile liability insurance.
It is compulsory in most (maybe, by now, in all) states, but since the insurance is written by private companies, it is hardly an example of socialized insurance.
Similarly, one could have compulsory education laws but no public schools, and one would then not speak of socialized education.
The analogy of medical to auto insurance was criticized in some of the comments on the ground that there is no upper limit to how much medical treatment one may need.
But, similarly, there is no upper limit to the amount of damage you can do driving carelessly, yet it is possible to buy essentially unlimited liability insurance.
The real difference is that medical insurance is more expensive (and would be much more expensive were it not for Medicare) than automobile liability insurance, because many more people require expensive medical treatment, especially toward the end of their lives, than cause serious auto accidents.
Many comments suggest that medical insurance would be so expensive if it weren't subsidized by the taxpayer through Medicare that it would be unaffordable.
But this is obviously wrong.
Here the analogy is to life insurance.
One can buy a very large life insurance policy cheaply at a young age because the insurance company invests the premiums and earns a return on the investment for many years before it has to pay out.
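The compounding point can be illustrated with assumed numbers (a 5 percent annual return; the premium amount and the holding periods are arbitrary):

```python
def future_value(premium, rate, years):
    """What a single invested premium is worth after `years` at `rate`."""
    return premium * (1 + rate) ** years

# A $1,000 premium paid at 25 and invested until 65 grows roughly
# sevenfold; the same premium paid at 60 grows only about 28 percent.
print(round(future_value(1000, 0.05, 40)))  # ~7040
print(round(future_value(1000, 0.05, 5)))   # ~1276
```

The same logic applies to health insurance bought young against medical costs that arrive mostly in old age.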
Similarly, medical expenses tend to rise with age.
Of course, young people have less disposable income than older people; but one implication is that many older people, having high incomes, can afford to buy health insurance even though their premiums will be higher than if they had bought a lifetime policy when they were young.
It is true that some, and probably many, people will not be able to afford health insurance, and I agree that, as a practical matter, they cannot be denied treatment just because they can't pay for it.
So there will always be medical subsidies, just as there will always be subsidized pensions (social security).
But why not make those who can buy health insurance, including health insurance for their old age, do so? One attractive method of subsidy, which I borrow from the automobile insurance example, is to require each insurance company to insure, at premiums only moderately above the market level, the individuals who because of poor health cannot afford to buy health insurance at market rates.
The suggestion that you cannot deny medical treatment to someone who refuses to buy health insurance is also true, but you can punish him for not buying it, just as we punish people for not paying taxes.
I like the suggestion that low deductibles in health insurance policies are actually cost savers because they encourage people to visit the doctor at the first sign of trouble, when the problem can probably be corrected at less cost than if they delay.
However, this reasoning does not justify the low deductibles in the prescription-drug plan.
They will just encourage pill popping.
In 2003 Congress created a prescription-drug benefit program for persons enrolled in Medicare.
It was estimated at the time that the program would cost the government $40 billion a year; a recent re-estimate, reported in the New York Times, adds $30 billion a year and has elicited proposals to curtail the benefit.
Given Medicare, I do not think that there is a principled objection to including a prescription-drug benefit in it.
Suppose Medicare were limited to hospital treatment.
Then critics would say, that's absurd; it will only impel people to get hospital treatment that would cost society (though not the patient) less in a non-hospital setting.
It is similarly questionable to exclude prescription drugs from Medicare coverage.
Drugs are substitutes for other forms of medical treatment in many situations; therefore excluding them from coverage will induce people to seek other forms of treatment that may cost society more to provide.
This means, by the way, that in calculating the net social cost of the prescription-drug benefit, the cost of other treatments for which drugs, with their cost to the patient reduced by the Medicare subsidy, will substitute should be subtracted.
Concern has been expressed that increased demand for drugs may increase their price.
That is unlikely.
The principal cost of drugs is R&D.
The manufacturing cost is slight; and therefore an increase in output brought about by increased demand should, if anything, reduce average cost and hence, given competition, price.
The real issue is not the prescription-drug benefit but the overall cost of Medicare; currently (that is, without the prescription-drug benefit) that cost is running at almost $300 billion a year, which is about 3 percent of GDP.
As a matter of economic principle (and I think social justice as well), Medicare should be abolished.
Then the principal government medical-payment program would be Medicaid, a means-based system of social insurance that is part of the safety net for the indigent.
Were Medicare abolished, the nonpoor would finance health care in their old age by buying health insurance when they were young.
Insurance companies would sell policies with generous deductible and copayment provisions in order to discourage frivolous expenditures on health care and induce careful shopping among health-care providers.
The nonpoor could be required to purchase health insurance in order to prevent them from free riding on family or charitable institutions in the event they needed a medical treatment that they could not afford to pay for.
People who had chronic illnesses or other conditions that would deter medical insurers from writing insurance for them at affordable rates might be placed in assigned risk pools, as in the case of high-risk drivers, and allowed to buy insurance at rates only moderately higher than those charged healthy people; this would amount to a modest subsidy of the unhealthy by the healthy.
Economists are puzzled by the very low deductibles in Medicare (including the prescription-drug benefit; the annual deductible is only $250).
Almost everyone can pay the first few hundred dollars of a medical bill; it is the huge bills that people need insurance against in order to preserve their standard of living in the face of such a bill.
But government will not tolerate high deductibles when it is paying for medical care, because the higher the deductible the fewer the claims, and the fewer the claims the less sense people have that they are benefiting from the system.
They pay in taxes and premiums but rarely get a return and so rarely are reminded of the government's generosity to them.
People are quite happy to pay fire-insurance premiums their whole life without ever filing a claim, but politicians believe that the public will not support a government insurance program (and be grateful to the politicians for it) unless the program produces frequent payouts.
If Medicare were abolished, the insurance that replaced it would be cheaper because it probably would feature higher deductibles; it is true that low deductibles are common in many forms of private insurance, such as automobile collision insurance, but I think it would be different in the case of health insurance simply because private health insurance for the elderly, with no Medicare crutch, would be very costly.
The premiums would be much lower with high deductibles.
I do not think, however, that total expenditures on medical care would decline markedly if Medicare were abolished.
The reason is the enormous value that the vast majority of people place on longevity, good health, and freedom from pain and other physical discomfort.
(And, given this value, why shouldn't people who can afford to pay for it be required to do so rather than be subsidized by the taxpayer?) Pursuing a theme in my posting on social security, young people may be unwilling to pay for health insurance that will cover their expenses generously when they are old.
But when they reach old age they will demand treatment whether they have insurance or not, and no one who has a serious medical condition is refused treatment in this country although he or she may have to settle for less-than-cutting-edge treatment in a public hospital.
To prevent this free riding, a scheme of compulsory health insurance would have to require generous coverage in old age; and so aggregate health costs might not be much lower than under the present system, although with higher deductibles and copayments there would be some reduction.
The explanation usually offered for the fact that a substantial fraction of the population has no health insurance is that these are unfortunate people who cannot afford health insurance.
A better explanation is free riding.
A person who has no assets lacks a compelling reason to buy medical insurance; he will be able to obtain medical treatment free of charge, as a charity patient.
A person who does have assets but is young and healthy may prefer to gamble on not incurring large medical bills, rather than to subsidize the older and less healthy by being placed in the same insurance pool with them.
However, these temptations to free ride provide an argument for compulsory health insurance rather than, as often argued, for socialized medicine.
The cost of Medicare (or private substitutes) will continue to rise in relative as well as absolute terms.
The reason is that advances in medicine increase longevity and with it the number of years in which a person is likely to require expensive medical treatment.
It would thus be desirable from a cost standpoint if medical research could be reoriented from extending the lifespan of the elderly to making the elderly healthier.
It would incidentally reduce the cost of social security, because workers who become totally disabled before they reach retirement age become immediately entitled to social security.
This will become an increasing problem as the normal age of social security entitlement rises from 65 to 67 pursuant to legislation passed by Congress in 1983.
But of course benefits must be considered as well as costs.
If people value additional years of elderly life at more than the cost of the extension, the cost may be worthwhile, though it doesn't follow that it should be subsidized.
Young people find it strange that such a large fraction of overall medical expenses is incurred in the last few months of life, that is, by people who are dying.
(Last-year-of-life medical care accounts for 26 percent of Medicare expenditures and 22 percent of all medical expenditures; the figures come from a study indexed in PubMed.
) Having nothing to look forward to, why are they willing to spend so much on a meager extension of life? There are several reasons.
One is that a good deal of end-of-life medical care is devoted to reducing suffering rather than to extending life.
Another reason is uncertainty as to whether one is really dying.
Another is that the (private) cost of care, however extensive, is negligible for persons who are covered by both Medicare and private medigap insurance that pays for the copayments that Medicare requires.
Still another reason for the heavy loading of medical expenses at the end of life is that for people who do not have a strong bequest motive, the opportunity cost of money spent in their last period of life is negligible because they will not be able to spend any money saved during that period.
Some of these excellent comments put me in mind of the following crude but suggestive way of stating the difference between liberals and conservatives: liberals think that the average person is good but dumb, conservatives that he or she is "bad" (in the sense of self-interested) but smart.
Liberals trust the intellectual elite (because they are good) to guide the masses (because they cannot guide themselves); conservatives distrust the elite (because the elite are bad and therefore dangerous) and think the masses can guide themselves.
So in the social security debate, liberals oppose private accounts because they do not think the average person competent to manage money for retirement but think government can be trusted to manage it; conservatives support private accounts because they give the opposite of the liberals' answers to the goodness and competence questions.
The basic contrast that I have suggested (something of a caricature, I admit) between the liberal and conservative world views has a further implication for the social security debate.
Believing that people are good and therefore never, or at least very rarely, deserve to be poor, liberals favor redistribution of wealth from rich to poor, which a self-financed retirement system would be incapable of bringing about because everyone would be paying for his own retirement rather than for the retirement of others.
Conservatives recognize that people can be unlucky, and also (because in the conservative view people are "bad") that the elderly may free ride on their children, and on these grounds support public welfare for the indigent elderly.
Several comments take issue with my suggestion that social security is prone to politicization because the elderly vote disproportionately to their percentage of the population.
The commenters argue that the young could vote more if they wanted to and if they don't, it suggests that they are content with the status quo.
I disagree on two grounds.
The first is that the cost of voting is greater for the young because they are more likely to be employed and therefore to have a high opportunity cost of time--not only time spent voting but also time spent studying the candidates and the issues.
Second, children are disfranchised.
This creates serious distortions in public policy.
For example, it would make better sense to subsidize health care for children than for old people, because in the first case one would be adding to the stock of human capital.
I have argued that each parent should be given an extra one-half vote for each of his or her children, in order to redress the arbitrary imbalance of political power in favor of the elderly.
I would like to underscore a point made by Becker.
Compelling and providing are separable.
There are good reasons to require people to save for their retirement just as there are good reasons to require children to be educated.
But in neither case does it follow that the government should provide the required service.
That decision should depend on the relative competence of the public and private sectors in producing particular products and services.
Last week a congressional committee questioned representatives of Google, Yahoo, Microsoft, and Cisco concerning Chinese censorship and surveillance of Internet services (and in the case of Cisco, equipment) provided by these companies.
Google, for example, has acknowledged that it does not offer email, chat rooms, or blogs in China, but only Web search, image search, local search, and Google news and that it censors these programs so that Chinese customers cannot search for "democracy," "Falun Gong," and other topics that China wants to shield its people from.
Yahoo apparently provided information about one of its Chinese customers that led to his arrest and a 10-year prison sentence for political activity that would be legal in the United States.
Cisco is said to have sold equipment to the Chinese police that assists them in monitoring dissidents.
Members of Congress are incensed and are threatening legislation that might forbid U.S. companies to knuckle under to political restrictions imposed by China as a condition of permitting our Internet companies to do business there.
In general, U.S. companies, including Internet companies, are required to comply with the laws of every country in which they operate.
Thus, for example, they have agreed to block access, in France and Germany, to Nazi Web sites, pursuant to those countries' laws against Nazi advocacy.
They have agreed in a number of countries including the United States to block access to sites that infringe copyright.
Like other companies that possess information that is not considered the "property" of the people who furnished it (and this is generally the case with respect to information voluntarily provided to an online vendor), the Internet companies often respond to informal government requests for information, and they are also subject to having such information subpoenaed.
Of course there is a difference between foreign laws that we regard as defensible, including some laws, such as those forbidding Nazi advocacy, that would be unconstitutional in the United States (which has, by international standards, an extravagant conception of freedom of speech), and laws that we regard as contrary to fundamental human rights. The latter is an accurate description of Chinese laws designed to suppress political freedom and, in the case of the persecution of the Falun Gong and of some Christian sects, religious freedom as well; there are also forced-labor camps in China, torture, and other human rights violations.
If China were a small, poor country, its violations of human rights might induce international sanctions, such as were imposed on Rhodesia and South Africa before the fall of their racist regimes.
But because China is an enormous country, rapidly developing, soon to be--perhaps already--the second largest economy in the world, and very much open to investment by foreign, including U.S., companies, sanctions are out of the question as a practical matter.
A separate question is the effects of sanctions.
The theory of cartels is useful in illuminating that issue.
When competing firms get together and agree to raise price (and thus limit output, since increased price will cause some customers to switch to other products) in order to increase their profits above the competitive level, they face two problems.
First, each member of the cartel will have an incentive to cheat because by charging a price slightly below the cartel price it will have proportionately greater sales and its net revenue will rise.
Second, firms outside the cartel will have an incentive to increase their output by selling slightly below the cartel price, for the same reason that impels cheating.
The harder it is to cheat, and the smaller the fringe of competing firms outside the cartel, the more effective the cartel will be.
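The incentive to cheat can be illustrated with a toy numeric model. All the parameters here are invented for illustration (a linear demand curve, five identical firms, zero costs); the point is only that a firm shading its price just below the cartel price captures a disproportionate share of sales, so its revenue rises sharply even though its per-unit margin barely falls.

```python
# Toy cartel model; all parameters are invented for illustration.
# Market demand: Q = 100 - P. Five identical firms, zero marginal cost.
# The cartel charges the monopoly price P = 50, so Q = 50 units,
# split evenly: 10 units per firm.
cartel_price = 50.0
firm_quantity = (100 - cartel_price) / 5          # 10 units each
loyal_revenue = cartel_price * firm_quantity      # 500 per loyal firm

# A cheater shades its price to 49. Buyers flock to the cheaper seller,
# so the cheater serves the whole market demand at that price: 51 units.
cheat_price = 49.0
cheat_quantity = 100 - cheat_price
cheat_revenue = cheat_price * cheat_quantity      # 49 * 51 = 2499

print(loyal_revenue, cheat_revenue)  # cheating roughly quintuples revenue
```

A 2 percent price cut multiplies the cheater's revenue about fivefold, which is why cartels (and, by analogy, sanctions regimes) are unstable without enforcement.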
A sanctions regime is similar.
Each country that has agreed not to buy from or sell to the sanctions target will have an incentive to cheat, and countries that have not agreed to the imposition of sanctions will have an incentive to increase their trade in the embargoed goods.
So if our Internet companies were under political or public relations pressure, or compelled by U.S. law, not to agree to the conditions imposed by China, they would have an incentive to try to circumvent the ban; and Internet companies in countries that did not impose such a ban would have an incentive to enter the Chinese Internet market.
But it is not clear to me how effective such incentives would be.
The U.S. Internet companies would be reluctant to violate, or perhaps even to circumvent, U.S. law, since they are taking a big public relations hit from the revelations of their complicity with Chinese repression.
And in the short run at any rate it does not appear that foreign Internet companies can provide close substitutes for the services that our companies provide.
Of course in the long run an exclusion of our Internet companies from the vast Chinese market would stimulate the growth of foreign companies offering close substitutes for our companies' products.
This assumes that, faced with a choice between abolishing censorship of the Internet and losing Google search and other Internet services that U.S. companies uniquely provide, China would choose to lose the services.
That seems by far the likelier outcome, given the perceived threat that political and religious freedoms pose to the regime.
If so, then the only effect of the sanctions regime would be to slow Chinese economic growth slightly by reducing the Chinese people‚Äôs access to Internet services that promote economic efficiency.
One reason to think the effect will be slight is that China does have its own Internet providers, such as Baidu, which provides a Google-like search service, although not as good a one as Google.
The deeper question is whether it is in the U.S. national interest either to promote Chinese democracy, religious freedom, etc., or to impede Chinese economic growth by inducing China to curtail its people's access to the Internet beyond the current censorship.
The answer probably is "no."  Lifting the repression lid from Chinese society might, for all I know at any rate, have destabilizing effects that might result in a worse government (from our standpoint) than the present one.
Slowing Chinese economic growth might also be destabilizing, and would harm the world economy as a whole, and probably the U.S. economy.
Then too, although there are inherent tensions between the United States and China, owing in part to the American military and political presence on the periphery of China, China is not an enemy and we don't want to make one by imposing sanctions on it.
Although the behavior of our companies may be offensive and their claim to be altruistically motivated is ludicrous, it is unlikely that efforts to prevent the companies from complying with ugly Chinese laws will help either the Chinese people or the American people.
A possible intermediate solution, however, would be to forbid U.S. companies to assist the Chinese government in surveillance of their customers (or for the companies to agree to this restraint under pressure of American public opinion).
There is a difference between censorship and surveillance.
Most governments engage in some censorship, including our own (child pornography, national security secrets, copyright violations, defamation, false advertising, criminal solicitations, etc.).
But for our companies actively to assist a foreign, repressive regime to persecute its political and religious dissidents is a step beyond.
It is unlikely that the Chinese government would bar our Internet companies merely because they did not provide active assistance to Chinese police.
On February 27 of last year, almost exactly one year ago, I posted a longish note about the organizational issues raised by the controversy between Harvard President Lawrence Summers and his faculty critics, a controversy that has now culminated in his resignation.
Here is what I said (with a few deletions and other minor changes), based on my almost 40 years as either a full-time or part-time university faculty member and my current interest in organization theory (I am also an alumnus of the Harvard Law School):
The 'case' against Summers made by his faculty critics is a four-legged stool: he had the temerity to challenge the absenteeism of a prominent faculty member, Cornel West, who as a result resigned in a huff; he is peremptory, perhaps even rude, in his dealings with faculty; he refuses to consult faculty on administrative matters, such as the expansion of the campus into Allston, across the Charles River from the traditional campus; and, most notoriously, he challenged the conventional left-liberal view that any underrepresentation of a group in a prestigious activity (e.g., women on the science faculties of Harvard) must be due to discrimination rather than to preferences or capabilities.
For these actions, Summers--the most exciting and dynamic president that Harvard has had since James Conant--has been (or at least has felt) compelled to undergo a humiliating course of communist-style "reeducation," involving repeated and increasingly abject confessions, self-criticism, and promises to reform.
He has been paraded in a metaphoric dunce cap.
To appreciate the sheer strangeness of the situation, imagine the reaction of the CEO of a business firm, and of his board of directors, if, after the CEO had criticized one of the firm's executives for absenteeism, ascribed the underrepresentation of women in the firm's executive ranks to preferences rather than discrimination, dealt in peremptory fashion with the firm's employees, and refused to share decision-making powers with them, he was threatened with a vote of no confidence by the employees.
He and his board would tell them to go jump in the lake.
But of course there would be no danger that the employees would stage a vote of no confidence, because every employee would take for granted that a CEO can be brusque, can chew out underperforming employees, can delegate as much or as little authority to his subordinates as he deems good for the firm, and can deny accusations of discrimination.
If, however, for employees we substitute shareholders, the situation changes drastically.
The shareholders are the owners, the principals; the CEO is their agent.
He is deferential to them.
Evidently the members of the Harvard faculty consider themselves the owners of the institution.
They should not be the owners.
The economic literature on worker cooperatives identifies decisive objections to that form of organization that are fully applicable to university governance.
The workers have a shorter horizon than the institution.
Their interest is in getting as much from the institution as they can before they retire; what happens afterwards has no direct effect on them unless their pensions are dependent on the institution's continued prosperity.
That consideration aside (it has no application to most professors' pensions), their incentive is to play a short-run game, to the disadvantage of the institution--and for the further reason that while the faculty as a group might be able to destroy the institution and if so hurt themselves, an individual professor who slacks off or otherwise acts against the best interests of the institution is unlikely to have much effect on the institution.
All this is true of Harvard.
The faculty are interested primarily in their own careers, and what is good for their careers and what is good for Harvard are only tenuously connected.
The individual faculty member who denounces Summers knows that his denunciation is unlikely to bring about Summers' departure, and even if it were decisive, and even if Summers is the best president that Harvard could find, an inferior replacement would be unlikely to do so much harm to Harvard as to have a discernible impact on the career of the denunciator.
What is more, that replacement might be more inclined to kowtow to the faculty, enhancing their careers at the expense of the long-run health of the institution.
Apart from the misalignment of faculty and university interests, faculty at research universities, like intellectuals generally, tend not to be responsible participants in collective action, such as university governance.
The academy does not select for people who have interpersonal skills, because most academic research is either solitary or conducted in groups of two or three, though there are exceptions, primarily in the hard sciences.
In addition, faculty are highly specialized, many in fields wholly unrelated to the financial and other practical questions that loom large in a university as large and affluent as Harvard.
Universities are increasingly complex enterprises.
Harvard has a multibillion-dollar annual budget.
It is ludicrous for English professors to think they have a useful contribution to make to decisions involving budgetary allocations, building programs, government relations, patent policy, investment decisions, and other key dimensions of modern university governance.
They are in no position to balance Summers' strengths in these areas with what they consider his weaknesses in relations with faculties, or his ideological views that they find offensive.
Because universities are organized as nonprofit entities, there are no shareholders, and hence no owners in the conventional sense.
As a practical matter, the university's trustees (the members of the Harvard Corporation) are the owners; they control the endowment and the other assets of the university and they appoint the president, who in turn appoints the administrative staff of the university.
The trustees' interests are better aligned with the university's interests than the faculty's are.
The trustees do not have a personal financial stake in the university's success, but the position of a trustee of a major university is prestigious and even visible, and trustees who botch their job will experience embarrassment and loss of reputation.
Of course, as part timers and (mostly) outsiders to academia, the trustees cannot actually manage the university.
Nor do they try.
Their principal function, besides general supervision and assistance in fund raising, is to hire a president, and to fire him if he performs badly.
(So they are much like the board of directors of a business firm.) That is a limited function which a board of trustees should be able to discharge competently.
The president is the CEO and he has both a reputational and a financial stake in the success of the institution.
The president and his administrative staff, not the trustees--and not the faculty--should manage the university.
The role of the faculty should be teaching, research, and appointments (subject to override by the president or provost) within their field of academic specialization.
So I would like to see faculty think of themselves as employees and leave governance to the university's president--the more so because preoccupation with governance is a distraction from teaching and scholarship, and so reduces faculty output.
In doing so it compounds the bad effects of academic tenure, an institution that reduces the productivity of many academics.
Against all this it can be argued, first, that competition among universities will assure good performance regardless of the governance structure and, second, that a comparison of American with foreign universities shows that our universities must be doing something, or rather a lot of things, right, because our universities are the world's best.
Competition is indeed a powerful force for efficiency, but interuniversity competition is blunted by a variety of factors, including the lack of a profit incentive and the difficulty of evaluating a university's output.
I agree that our universities are the best in the world, but comparisons of this sort are invitations to complacency.
(If the Harvard trustees were complacent, they wouldn't have appointed Summers president!) When the United States had monopolistic regulation of the telephone industry, as it did until the breakup of AT&T, we had the best telephone system in the world.
When we lost the war in Vietnam, we had the best armed forces in the world.
When the Civil Aeronautics Board administered an airline cartel, we had the best airlines in the world.
We have the best universities, but I believe that they would be even better if they were governed differently.
My belief is supported by the fact that American universities are evolving in the direction of greater conformity to the principles on which private businesses are run.
The time has come to retire the faculty slogan "we are the university."
__________________________________
The passage of a year has reinforced rather than undermined what I said about university governance.
It is clearer now than it was then that Summers' policies--ranging from greater emphasis on science, on modernizing and rationalizing the undergraduate curriculum and improving undergraduate teaching (a serious Harvard weakness since time immemorial), and on intelligent utilization of Harvard's extensive real estate, to tuition remission for students from families of modest means and blocking weak tenure candidates in weak disciplines--are entirely sound.
It is also clearer now than it was a year ago that Summers' blunt manner (I would prefer to call it forthright) was not the decisive factor in the faculty revolt that has led to his fall from power.
(Whether he was forced out, or he merely concluded that he could no longer be effective as president without the unwavering support of the Harvard Corporation, is unimportant.) What was crucial was that he challenged the worker-cooperative model of university governance (a model adhered to more closely by foreign universities--which is one reason they are on average inferior to our own), that an influential fraction of the faculty rebelled, and that a timid and inept set of trustees was unwilling to back Summers against the rebels.
I knew a year ago that Summers was embattled; I never thought it a battle he could lose.
I am greatly disappointed in the Harvard Corporation and would be gratified to see its members resign in embarrassment.
One sign of the Corporation's ineptitude is its decision that there shall be an 18-month period in which, in effect, Harvard will have no president and the faculty will consolidate its power.
But as serious is the signal that the Corporation is sending to potential candidates.
The signal is that only individuals willing to be weak presidents need apply for the job--individuals willing to concede a veto power to the Faculty of Arts and Sciences and devote their presidency to fund-raising, glad-handing, and back-office management.
Eugene Robinson, in a good-natured column in the   Washington Post   defending Summers' resignation but expressing hope that Summers, whom Robinson appears to admire, would become an active member of the Harvard faculty, argues that such a change in roles would mean that he was no longer an ineffective herder of cats but once again the big cat he was meant to be. Cats cannot be herded, but faculty members merely do not want to be herded.
They have soft jobs with life tenure.
The loftier the institution, the greater the salary and prestige and the softer the job.
So little is demanded that retirement has few attractions.
The result is a faculty many of whose members are both smug and superannuated.
Summers' resignation should, but will not, precipitate serious thinking at Harvard about transformative change.
The following suggestions, quixotic in the short run, are offered as aids to thinking imaginatively about the governance of the nation's most prominent university:
1. The members of the Harvard Corporation should resign; their successors should rescind Summers' resignation.
2. The reconstituted Corporation should redefine the lines of command of the university, making clear that faculty are not the owners or "citizens" of Harvard, but rather are honored employees.
3. A purely consultative University Senate should be created so that the university administration can obtain reliable, representative expressions of faculty opinion.
4. The president of the university should be authorized to appoint the department chairmen.
5. The anachronistic institution of tenure should be reexamined and perhaps jettisoned. The market for university professors is highly competitive; a good person whose contract is not renewed can get a comparable job elsewhere. (See my post on tenured employment of January 15 of this year.)
6. A generous buy-out program should be instituted in order to encourage early retirement and thus provide greater career opportunities for young academics.
If the suggested measures precipitated some, even many, resignations of faculty, the quitters could easily be replaced with individuals of equal or higher quality.
Traffic congestion is a classic negative externality.
As Becker explains, a driver does not consider the effect of his driving on the other users of the road, but only on himself.
The standard method of reducing congestion--building more roads--is not only very costly but to a degree self-defeating, since by reducing congestion (and thus the time cost of driving) it attracts more traffic.
Despite much road building, congestion measured by average commuting delays has increased substantially in recent years.
Becker makes the important point that the average per-hour private cost of commuting by car has fallen with the substantial improvements in automobile comfort.
But it probably has not fallen enough to fully offset the increased delay.
The usual recommendation by economists for dealing with negative externalities is to tax the activity that produces them.
The London solution described by Becker--a fee for driving into central London during weekdays--is a step in that direction, with impressive results, such as a 20 percent reduction in London vehicle traffic.
But it is doubtful that this success can be duplicated in the United States.
Before the imposition of the London commuting fee, 85 percent of the commuters were already using buses and other forms of public transportation rather than commuting by car.
This indicated both that most commuters thought such transportation a good alternative to driving (though of course the 15 percent might not) and, more important, that the public transportation system could easily absorb additional commuters.
A 20 percent decline in commuting by car translates into only a 3 percent shift to public transportation if commuters by car are only 15 percent of the total number of commuters before the fee is imposed.
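That arithmetic can be checked directly; the figures here are the ones given in the text (15 percent of London commuters driving before the fee, a 20 percent decline among them):

```python
# Check: how large a shift to public transportation does a 20% drop
# in car commuting imply when only 15% of commuters drive to begin with?
# Figures are those cited in the text for pre-fee London.
car_share = 0.15               # share of all commuters driving before the fee
drop_in_car_commuting = 0.20   # 20% decline among car commuters

shift_to_public = car_share * drop_in_car_commuting
print(round(shift_to_public, 2))  # 0.03: only 3% of all commuters switch
```

So a fee that looks dramatic from the road is a small perturbation from the standpoint of the transit system, which is why London's buses and Underground could absorb it easily.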
True, there are other methods of economizing on driving besides switching to another mode of transportation, such as car pooling, but car pooling has the features that people who dislike public transportation dislike: less privacy and flexibility than driving by oneself.
I believe that among major U.S. cities, only New York has comparable figures--some 80 percent of commuters to downtown Manhattan get there by means of public transportation, mainly subway, rather than by car.
All cars entering Manhattan pay heavy bridge or tunnel tolls, however, so there would doubtless be stiff resistance to the imposition of a commuting fee.
Mayor Bloomberg considered such an imposition but has backed off.
Resistance to a commuting fee would be much greater in cities that do not have good public transportation alternatives.
The reason is that in such cities, heavy commuting fees would reduce the number of commuters, hurting downtown businesses.
On average, only about 2 percent of American commuters use public transportation.
Notice also that, by reducing congestion and hence the time cost of commuting by car, a stiff commuting fee may in the end have only a modest effect on congestion.
Indeed, the fee will induce some commuters who have a high cost of time to substitute driving for public transportation.
The political obstacles to commuting fees have persuaded the traffic economist Richard Arnott that more attention should be paid to substitute methods of reducing traffic congestion.
A good deal of congestion is due to commuters hunting for parking places and to trucks blocking streets while unloading, as well as to bad driving (for example leading to more accidents), increased vehicle size (e.g., SUVs), poor road surfaces, road repairs, poor road design, weather, and bottlenecks.
The problem is that any measure that reduces congestion without imposing any additional cost on the commuter will, as I mentioned, tend to increase the amount of traffic as commuters and other drivers switch from public transportation to cars or make less effort to avoid rush-hour traffic.
A frequent suggestion for combating traffic congestion is staggered work hours.
A favorite suggestion of economists that would have a similar effect would be to make the commuting fee vary by time of day, so that it would be higher during rush hours.
But these suggestions involve a hidden cost: by reducing the overlap of working hours, they reduce one of the principal economies of urban business districts--the dense network of face-to-face interactions that such districts enable.
I conclude that until traffic congestion gets significantly worse, little will be done, and perhaps little should be done, to try to reduce it.
But I am not pessimistic.
In the long run what will reduce traffic congestion will be the continued digital revolution, which will not only increase the amount of telecommuting but also lead to a substantial substitution of virtual for face-to-face interactions in business, shopping, and even socializing.
The business district of the future and the mall of the future may be located in cyberspace.
The digital revolution has altered my own commuting.
With high-speed internet access I work at home much more than I did when I started as a judge 24 years ago, and rarely have occasion to drive during rush hour.
I cannot do justice to the 102 comments that my post evoked, but I will try to respond to the recurrent themes in them.
One comment suggests the use of the "minimax" (meaning, in this context, minimizing the maximum loss) decision rule to guide response to the risk of abrupt global warming, since no probability can be assigned to that risk.
This raises the interesting and important question of how if at all to adapt cost-benefit analysis to risks that cannot be quantified.
The problem with minimax is that it provides no definite guidance.
I prefer, as argued in my book   Catastrophe: Risk and Response   (2004), to retain as much of the structure of cost-benefit analysis as possible in situations where the probability of a catastrophe cannot be quantified.
In the case of abrupt global warming, this means trying to quantify the loss should such warming occur and the cost of averting the loss, and then see whether the probability implied by assuming that incurring the cost would be cost-justified is reasonable.
So suppose a loss of $1 trillion could be averted at a cost of $100 billion.
That cost would be worth incurring (ignoring a number of refinements and qualifications that would be necessary in a rigorous analysis) if the probability of the catastrophe were at least 10 percent.
The question would then be whether 10 percent was in the probability ballpark--assuming we could estimate the ballpark, though not the location of the ball.
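The inverse cost-benefit test in this example reduces to a single division. A minimal sketch, using only the figures given above and ignoring the refinements the text sets aside:

```python
# Inverse cost-benefit test for an unquantifiable catastrophe risk,
# using the figures from the example in the text. Refinements such as
# discounting and risk aversion are ignored, as they are in the text.
loss = 1_000_000_000_000   # $1 trillion loss if the catastrophe occurs
cost = 100_000_000_000     # $100 billion cost of averting it

# Spending `cost` is cost-justified when p * loss >= cost,
# i.e. when the catastrophe's probability is at least cost / loss.
break_even_probability = cost / loss
print(break_even_probability)  # 0.1, i.e. 10 percent
```

The computed threshold is then compared with one's ballpark sense of the true probability, rather than with a point estimate that cannot be had.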
I agree with the suggestion that prizes are a good way of motivating research; recently Al Gore teamed with the British billionaire Richard Branson to offer a $25 million prize for the development of a workable method of removing carbon dioxide from the atmosphere.
Most of the comments, as I expected, are in a state of denial about global warming.
(Indeed, as I would have expected, one of them denies that cigarette smoking has been shown to have adverse health effects.) To many conservatives, global warming is a red flag.
The global warming skeptics point out that there are natural climate fluctuations, that anticapitalists are enthusiastic beaters of the drum for action against global warming, and that global warming would have good effects on agriculture in northern climes.
These points are correct, but do not support the skeptical position.
The existence of natural climate fluctuations increases the risk from human-caused global warming, because increased atmospheric concentrations of carbon dioxide increase the amplitude of the fluctuations.
The fact that the motives of some of the people who are worried about global warming are political is irrelevant to the scientific issues, not only because scientists use apolitical methods of testing their hypotheses, but also because there are politics on both sides of the global warming debate: if leftwingers exaggerate the danger of global warming, rightwingers belittle it excessively.
As for improving agricultural yields in northern climes, the transitional costs of relocating agriculture from (at present) tropical to arctic climes would be immense.
Nor would improvements in agricultural yields respond to the effects of inundation of low-lying land areas, the migration of tropical diseases to temperate climates, the effects of increasingly violent weather, and the possible deflection of the Gulf Stream, causing Europe's climate to become Siberian.
It is also untrue that a 7 degree Fahrenheit increase in average global temperatures by the end of the century is a "worst case" prediction.
That would imply a degree of certainty that we clearly do not have.
And it is untrue that warming and cooling in millennia prior to the Industrial Revolution were unrelated to human activity.
Substantial deforestation through burning, releasing large quantities of carbon dioxide into the atmosphere, began with the invention of agriculture some 8,000 years ago, and periods of reforestation (e.g., after the Black Death reduced the European population by a third) are correlated with global cooling.
So at least the paleoclimatologist William Ruddiman argues, and I do not sense that the skeptics have read his work.
Some of the skeptics believe that Becker and I are part of a leftwing conspiracy to foist a false belief in global warming on the world.
Anyone familiar with our work would know that we are conservatives.
What is true and important is that there is considerable uncertainty about predictions of climate change.
The climatologists' consensus may prove incorrect.
What is striking, however, is the thinning of the ranks of the dissenters over time.
Many of the skeptical commenters appear to have a visceral rather than a reasoned hostility to the idea that global warming is a problem that might require costly solutions.
They are not impartial readers of the scientific evidence.
One commenter describes global warming as "another cult religion just like Marxism or Lysenkoism." But neither Marx nor Lysenko ever commanded a scientific consensus for their views.
But I do agree with this otherwise rather intemperate commenter that Paul Ehrlich's   The Population Bomb   was total nonsense--and I have so said in my book   Public Intellectuals  .
One comment questions how heavy gasoline taxes could reduce our reliance on imported oil, since the cost of production of Middle Eastern oil is lower than that of oil produced in the United States.
Depending on how stiff the taxes were, however, our total consumption of oil would fall, including consumption of foreign oil, though the mix would indeed shift (as I said in my post) toward imported oil.
Another effect would therefore be to conserve our own oil--it would remain in the ground, available for future pumping, and a check on the behavior of foreign producers, such as threats by Iran to embargo oil.
Our dependence on foreign oil would diminish in still another sense: the incomes of the foreign oil producers would fall, reducing those countries' geopolitical influence, including influence over us.
We would be less dependent on their political whims.
The latest report of the Intergovernmental Panel on Climate Change, issued on Friday, confirms the scientific consensus that the emission of carbon dioxide and other greenhouse gases, as a result of the combustion of fossil fuels such as oil and gas, and other human activities (such as deforestation by burning), is having significant and on the whole negative effects by causing global temperatures and sea levels to rise.
See http://ipcc-wg1.ucar.edu/wg1/docs/WG1AR4_SPM_PlenaryApproved.pdf.
When I discussed global warming in my book   Catastrophe: Risk and Response   (2004), I considered the evidence that global warming was a serious problem for which man-made emissions were the principal cause altogether convincing--and since then more evidence has accumulated and the voices of the dissenters have grown weaker.
The global-warming skeptics are beginning to sound like the people who for so many years, in the face of compelling evidence, denied that cigarette smoking had serious adverse effects on health.
What has changed since I wrote my book is that not only is the evidence that our activities (primarily the production of energy) are causing serious harm even more convincing, but also that the scientists are increasingly pessimistic.
It is now thought likely that by the end of the century global temperatures will have risen by an average of 7 degrees Fahrenheit and that the sea level will have risen by almost 2 feet.
Besides inundation of low-lying land areas, desertification of tropical farms, and migration of tropical diseases north, global warming is expected to produce ever more violent weather patterns--typhoons, cyclones, floods, and so forth.
There is much uncertainty in climate science, and climate scientists concede that their predictions may be off--but they may be off in either direction.
Far worse consequences are possible than those thought highly likely by the authors of the report, including a temperature increase of 12 rather than 7 degrees Fahrenheit, higher sea levels that could force the migration inland of tens of millions of people (or more), the deflection of the path of the Gulf Stream, causing Europe's climate to become Siberian, and abrupt, catastrophic sea-level rises due to the sliding of the Antarctic ice shelf into the ocean.
Not only has the consensus among scientists concerning the harmful anthropogenic (human-caused) character of global warming grown, but the scientific consensus is increasingly pessimistic: recent evidence indicates that the global-warming problem is more serious than scientists thought just a few years ago.
My own view, argued in the book, is that the risk of   abrupt   global warming--a catastrophe that could strike us at any time, with unknown though presumably low probability--is sufficiently costly in expected-cost terms (that is, multiplying the cost of the catastrophe by its probability) to warrant taking costly measures today to reduce emissions of carbon dioxide and other greenhouse gases.
Both the scientists and the policymakers, however, are mainly focused on the long-term costs of global warming--costs that will unfold over the remainder of this century.
That focus makes the choice of the discount rate important, and potentially decisive.
A discount rate is an interest rate used to equate a future cost or value to a present cost or value.
As a simple illustration (and ignoring complications such as risk aversion), if the interest rate is 5 percent, the present value of $1.05 to be received in a year is $1, because if you are given $1 today you can invest it and have $1.05 in a year.
That is financial discounting.
But discounting is important even when financial considerations are not the only ones involved in a choice.
If you have a very strong preference for spending money now rather than a year from now, you might prefer $1 today to $1.50 a year from now.
These approaches don't work well when the question is how much we should spend today to avert costs that global warming will impose in the year 2107.
Suppose we estimated that those costs would be $1 trillion.
Then at a discount rate of 5 percent, the present-value equivalent of the costs is only $7.6 billion, for that is the amount that, invested at 5 percent, would grow to $1 trillion in 100 years.
At 10 percent, the present value shrinks to $73 million.
So it is possible to argue that, rather than spending a substantial amount of money today to try to prevent losses from global warming in the future, we should be setting aside a modest amount of money every year--$73 million this year to deal with global warming in 2107, the same amount next year to deal with global warming in 2108, and so on.
Of course we would also want to spend money to prevent the lesser losses from global warming that we anticipate in earlier years.
For example, suppose we estimate that the loss in the year 2057 will be $100 billion.
Then at the same 10 percent interest rate, we would want to spend $852 million this year.
Thus two effects are being balanced in computing the present equivalent of future losses from global warming--the larger loss in the more distant future, and the greater shrinkage of the larger loss, because of its remoteness from today, by the operation of discounting.
The latter effect will often dominate, as in the examples, but of course this depends critically on the choice of discount rate.
At an interest rate of 3 percent, a $1 trillion loss in 2107 has a present value not of $73 million or $7.6 billion, but of $52 billion.
However, when either of the latter two figures is added to figures representing the present value of losses in intermediate years, the sum will be formidable.
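The present-value figures in these examples are straightforward to verify. A minimal sketch, using the dollar amounts, rates, and horizons given in the text:

```python
def present_value(future_cost, rate, years):
    """Discount a cost incurred `years` from now back to today."""
    return future_cost / (1 + rate) ** years

# A $1 trillion loss 100 years out (the year 2107 in the example):
print(round(present_value(1e12, 0.05, 100) / 1e9, 1))  # 7.6  (billion, at 5%)
print(round(present_value(1e12, 0.10, 100) / 1e6))     # 73   (million, at 10%)
print(round(present_value(1e12, 0.03, 100) / 1e9))     # 52   (billion, at 3%)

# A $100 billion loss 50 years out (the year 2057), at 10 percent:
print(round(present_value(1e11, 0.10, 50) / 1e6))      # 852  (million)
```

The spread between $73 million and $52 billion for the same future loss shows how decisive the choice of discount rate is over a century-long horizon.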
A very high discount rate, implying that optimal current expenditures to avert the future consequences of global warming are slight, could be defended on the ground that the march of science is likely to deliver us from the consequences of global warming long before the end of the century.
Clean fuels for automobiles as well as for electrical plants (where already there is a clean substitute for oil or coal--nuclear power, though it is more expensive) will be developed, or carbon dioxide emissions from electrical plants will be piped underground, or artificial bacteria will be developed that "eat" atmospheric carbon dioxide.
These are not certainties but they are likely, and so they provide a good argument for using a high discount rate, such as 10 percent--and perhaps for considering no losses after 2107, on the theory that the problem of global warming is almost certain to be completely solved by then.
Nevertheless there are at least three arguments for incurring hefty current expenditures on trying to reduce carbon dioxide emissions in the near term.
The first is that global warming is already imposing costs, and these will probably increase steadily in the years ahead.
Discounting does not much affect those costs.
They may well be great enough to warrant remedial action now.
The second argument for incurring heavy expenditures today to reduce global warming is that there is a small risk of abrupt, catastrophic global warming at any time, and a small risk of a huge catastrophe can compute as a very large expected cost.
Any time could of course be well into the future, and so there is still a role for discounting, but it is minimized when the focus is on imminent dangers.
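The expected-cost logic here is just multiplication: a catastrophe costing C with annual probability p carries an expected annual cost of p × C, which can be very large even when p is small. A sketch with hypothetical figures (neither number is from the text):

```python
catastrophe_cost = 20e12    # hypothetical: a $20 trillion loss from abrupt warming
annual_probability = 0.001  # hypothetical: a 1-in-1,000 chance in any given year

# Expected cost of the catastrophe risk borne in a single year.
expected_annual_cost = annual_probability * catastrophe_cost
print(expected_annual_cost)  # 2e10, i.e. $20 billion per year
```

Because the risk is borne this year rather than a century from now, discounting barely dents the figure, which is the point of the second argument.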
The third argument is that reducing our consumption of energy by a heavy energy tax would confer national security benefits by reducing our dependence on imported oil.
Our costly involvement in the Middle East is due in significant part to our economic interest in maintaining the flow of oil from there.
It is true that because our own oil is costly to extract, a heavy energy tax would not cause much if any substitution of domestic for foreign oil.
But that is fine; our oil would remain in the ground, available for consumption if we decide to take measures abroad, such as withdrawing from Iraq, that might reduce our oil imports.
Heavy U.S. energy taxes would induce greater expenditures by industry on developing clean fuels and techniques for carbon sequestration; might persuade other big emitters like China and India to follow suit; and, by reducing emissions of carbon dioxide, would slow the increase in the atmospheric concentration of the gas.
Drastic reductions might actually reduce that concentration, because carbon dioxide does eventually leach out of the atmosphere, though at a slower rate than it is built up by emissions.
I have little to add to Becker's convincing discussion.
One small point worth noting, however, is a new technology for sex selection, described in an interesting article by Denise Grady in the February 6 New York Times.
It is called "sperm sorting" and enables male or female sperm to be concentrated in semen, greatly shifting the odds in favor of producing a child of one sex rather than the other.
The cost is only $4,000 to $6,000, which is much less than in vitro fertilization, since the "enriched" sperm can simply be inseminated in the woman rather than requiring in vitro fertilization.
Sex selection by sperm sorting may actually be cheaper than ultrasound plus abortion, the conventional method; if so, and it comes to dominate, the ethics of sex selection will be separable from the ethics of abortion motivated by sex selection.
The key points that Becker makes, both of which I agree with, are, first, that sex selection by U.S. couples is unlikely to result in an unbalanced sex ratio; and, second, that in countries such as China and India in which there is a strong preference for male offspring, girls will be treated better if sex selection is permitted, since there will be fewer girls born to couples who did not want them.
Of course, as there will be fewer girls, period, the net effect on total female utility is unclear: fewer girls reduces total utility, but happier girls increase it.
Since the net effect is uncertain, feminist opponents of sex selection should consider whether, if unwanted girls are born, there are feasible techniques for improving their treatment so that if sex selection is forbidden (assuming that that is feasible--Becker suggests that it is not), there can be reasonable confidence that net female utility will increase rather than decrease.
I also agree with Becker that there is a tendency to self-correction, since as the percentage of girls and women declines, men's demand for them rises, and observing this couples will tend to shift their reproductive selection in favor of girls.
Since there is no reason why this tendency must overcome a preference for boys, an unbalanced sex ratio could persist indefinitely.
But this is unlikely in rapidly developing countries such as China and India.
A strong preference for male children tends to be found in societies in which there is a great deal of subsistence agriculture, a weak social insurance system, and a reliance on private violence (as in a revenge culture) to protect personal and property rights; all these factors increase the demand for male children.
As these conditions (the first two of which are important in China and India, and all three of which are important in Iraq, for example) change, the preference diminishes, as we observe in the wealthy societies of Europe and North America, where there is no longer a net preference for having male rather than female children.
Apparently sex selection is actually more common in urban areas than in rural areas of India.
But presumably the reason is that access to ultrasound for detecting the sex of a fetus, and to abortion, is greater in cities, and this effect could dominate the greater preference for sex selection in rural areas.
Urban Indians might prefer boys because of a lag in the adaptation of traditional values to urban conditions.
The transition to a 50-50 sex ratio, even if inevitable, is likely to take a long time.
Suppose at time 1 there is a large excess of male births, followed at time 2 by a dawning recognition that girls are more valuable than had been realized at time 1.
Probably time 1 and time 2 will be separated by 20 or 30 years (or more, if there is a "values lag," as I suggested earlier), and so there will be at least one entire adult generation in which the sex ratio is skewed in favor of males.
Should countries that face this imbalance worry about it to the extent of taking measures against it? We have a natural experiment, which can help us to answer the question, in societies that permit polygamy.
The effect of polygamy (technically polygyny--multiple wives--but polyandry is virtually unknown) is to raise the effective ratio of men to women, since a number of women are removed from the pool available to the nonpolygamous men.
In a society in which there are 100 men and 100 women, but 10 of the women are married to one of the men, the male-female sex ratio, so far as the rest of the society is concerned, is 99 to 90.
The result is to raise the average age of marriage for men and reduce it for women, reduce the percentage of married men and increase the percentage of married women, reduce promiscuity by increasing women's bargaining power, and possibly increase male emigration and female immigration.
None of these effects seem likely to harm society seriously as a whole.
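The effective-ratio arithmetic in the 100-by-100 example can be sketched as:

```python
men, women = 100, 100
wives_of_one_polygynist = 10

# Remove the one polygynous husband and his ten wives from the marriage pool.
available_men = men - 1
available_women = women - wives_of_one_polygynist
print(available_men, available_women)  # 99 90 -- the effective sex ratio
```

Polygyny thus skews the pool facing everyone else toward surplus men, mimicking the effect of a male-biased birth ratio.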
In contrast, research that I discuss in my book Sex and Reason (1992) finds that the low effective male-female sex ratio of the black population in the United States (due largely to abnormally high rates of imprisonment and homicide of young black males) promotes promiscuity because there is more competition among women for men, and reduces the marriage rate and family formation.
In sum, sex selection, at least in favor of males, appears not to have negative external effects.
It presumably confers net private benefits (like other preference satisfaction), or otherwise it would not be practiced.
(There are no external effects in societies, such as that of the United States, in which sex selection is unbiased.) The case for forbidding it is therefore unconvincing (at least when sex selection is not implemented by abortion, to which there are independent objections) unless it can be shown to create a net decrease in female welfare.
Congress is on the verge of passing a bill that will forbid employers to discriminate against employees (including applicants for employment) on the basis of the results of genetic tests, and forbid health insurers to deny insurance or charge higher premiums on the basis of such results.
(Actually, the bill tightens up an existing law that was designed to do the same things but turned out to have loopholes.) The stated rationale of the bill is that it will encourage people to obtain such tests and use the results to seek treatment or make other decisions, such as deciding whether to have children.
That rationale is dubious for several reasons.
First, people who suspect they have a gene that causes or predisposes them to a serious disease have a strong incentive to be tested (especially if there are treatment options), an incentive that will often override the possible adverse effect of a positive test result on employment or insurance.
Second, in the absence of the law, employers and insurers could make such testing a condition of employment or insurance.
Third, persons who are confident that they do not have a genetic defect have an incentive to test voluntarily and disclose their negative results to employers or insurers--and some of these persons will be mistaken and discover that they indeed have such a defect.
So while some people are doubtless deterred from testing by concern with the effect on their employability or insurability, on balance it is unlikely that there will be more testing by virtue of the new law.
In a strict efficiency analysis, moreover, even if more people who are likely to have genetic defects will test for them as a result of the law, this would not necessarily be an argument in favor of the law.
There is no increase in efficiency when a person conceals information (or avoids obtaining information that he fears he would have to try to conceal if he did obtain it) in order to obtain a benefit that he would not obtain if he disclosed it.
This would be obvious if a person who knew he was deathly ill bought a huge life insurance policy, concealing his illness from the insurer.
The situation is no different if the person knows he may be deathly ill and decides not to verify his suspicion lest the confirmation of it prevent him from obtaining the insurance policy.
In either case he is shifting his own expected costs (whether reduced longevity or medical expenses) to unconsenting others.
Analysis is complicated, however, by the possibility that a failure to test brought about by fear of the consequences for insurance or employment would impose costs on other people.
That would happen if a prompt diagnosis would enable treatment of a genetic defect at a lower cost, assuming that treatment expenses are paid for in part at least by third parties.
Then those third parties would be better off if the person tested.
Suppose for example that had the person tested positive, she would not have had a child; instead she had the child, and it is badly deformed, requiring enormous medical expenses paid largely by third parties.
That would be a genuine externality, whereas if the cost of a medical treatment is merely shifted from the individual to his employer or insurer (which means, of course, to the other insureds of this insurer), the externality would be merely pecuniary.
That is, it would be merely a transfer of wealth rather than an avoidable investment of scarce resources, as in the example just given where a medical expense is incurred that would not have been incurred had it not been for the failure to test.
But transfers often have such allocative effects, and are likely to here, rather than merely altering the distribution of wealth.
The cost of health insurance will rise if the new law goes into effect, and that rise will increase the number of persons who do not have health insurance, and their lack of insurance coverage may cause them to forgo tests and treatments that may, just as when a genetic test is forgone, avoid costlier treatments and other adverse consequences later on.
Employers' labor costs will rise too, resulting in lower net wages; and health is positively correlated with income, so again the transfer will have secondary effects in the form of more ill health.
So even if the new law led to more genetic testing--which probably it would not do, for the reasons stated at the outset--its net social effect, from the standpoint of economic efficiency, would probably be negative.
The law might seem defensible on noneconomic grounds as a form of social insurance, since persons who test positive for genetic defects may be unable to obtain private health insurance.
The broader point is that the more that science reduces uncertainty about individuals' health, the less risk pooling there will be and the greater, therefore, the demand for social insurance.
In the limit, if everyone's health prospects were known with certainty, there would be no market for health insurance at all and this would exacerbate the effects of differential health on equality of wealth; no longer would the healthy be paying to insure the unhealthy.
If social insurance is desired, the question becomes whether to finance it through taxes or, as under the proposed law, to compel private industry to provide it.
The major difference is the identity of the "taxpayers": it is federal taxpayers in the first case and the members of the private insurers' insurance pools in the second.
The allocative effects of the social insurance "tax" will differ because higher income taxes do not have the same behavioral effects as higher health-insurance premiums.
The higher premiums cause people to leave the insurance pool; given current political concerns with the number of people who do not have health insurance, placing the "tax" on those who do have such insurance is questionable.
Eighty percent of Americans tell pollsters that they do not think that health insurers should be allowed to deny coverage or charge higher premiums to people with genetic defects.
This is an example of Americans' economic illiteracy.
The shorter the supply of a natural resource, the more important it is to have an institutional structure for allocating it efficiently among demanders, both present and future.
In this respect usable fresh water is not fundamentally different from other scarce resources, such as oil and gas.
The qualification in "usable" is important.
Global warming does not diminish the world's supply of fresh water, but it reduces the supply of usable fresh water.
Spring snowmelt is an important source of fresh water in many parts of the world, including California.
That source will diminish as rising global temperatures cause more precipitation to take the form of rain rather than snow--and rain is much harder to collect and distribute than the spring runoff from melting snow.
Higher global temperatures also increase the demand for water, as does an increasing, and increasingly prosperous, global population.
Of course, in principle, an increase in the demand for a good relative to its supply is not a problem.
Price quickly rises, reducing demand and thus reestablishing equilibrium; so no more shortage.
In the slightly longer run, moreover, the higher price leads to increased supply; in the case of water, one can anticipate greater use of desalination, that is, converting sea water into fresh water.
Between water conservation by consumers trying to reduce their water bill, and increased supply of fresh water by the water industry, there should be no shortage, in the sense of an imbalance between demand and supply resulting in queuing, black markets, degraded quality, technological stagnation, politicking (Becker mentions discrimination in water pricing in favor of households and farmers), and corruption.
The problem is that the market in fresh water is inefficient.
Becker focuses on the inefficient pricing of publicly owned water supplies--for example, charging a flat rate regardless of the quantity consumed, or failing to take account of reutilization (that is, the consumption of return flow).
But a deeper problem is the institutional structure.
One aspect is public ownership of water systems.
There is no reason why a city should own the water company any more than it should own the cable television company.
It is true that these are both networked services and therefore have aspects of natural monopoly; it would be wasteful to have multiple grids of water pipes in the same city.
But through the contractual process a city can exploit "competition for the market"--that is, it can award a contract for the sale of water to whatever provider offers the best deal for the city's residents.
A still deeper institutional problem is the inefficient system (or systems) of property rights in water.
In the western United States, where water is scarce, users obtain a property right by "appropriation," that is, by actually using water from a lake or stream.
The amount they take is recorded and that is their property right.
Any return flow can be appropriated by a downstream user.
Now suppose an upstream user wants to sell his appropriation.
He cannot do so without getting the consent of any downstream user who may be adversely affected by the sale because he had appropriated a portion of the upstream user's return flow.
There may be many of those users, thus greatly increasing the transaction costs of reallocating water to a higher-valued use.
In addition, because ownership of water rights is based on use, there is no incentive to hold water off the market, for future use; if one doesn't use the water one has appropriated, one loses one's property right.
The basic problem is that the same resource is jointly rather than singly owned, so that before it can be sold there must be a transaction among the owners, and the more owners, the higher that initial transaction cost.
The problem is greatly exacerbated when an interbasin transfer is being contemplated, that is, a transfer of water from one watershed to another.
For then all the users of return flow in the originating watershed will be deprived of their water.
Such problems are not unique to water, and are not insoluble.
A parallel problem in oil is solved by unitization.
Very often a number of separate oil companies will be drilling into the same underground oil field, and each has an incentive to take as much as it can as fast as it can (for example by drilling more wells), for what it leaves in the ground will be taken by other companies.
The oil-producing U.S. states authorize "compulsory unitization," whereby if two-thirds of the owners of the land above a common oil field vote to conduct their operations under common management, the rest are bound.
(Requiring unanimity would create serious hold-out problems.) A similar regime might be feasible for the users of a lake or stream.
This would eliminate the inefficiency of a possession- or use-based system of property along with the inefficiencies associated with joint ownership.
In short, the solution to water shortages is likely to be privatization and intelligently designed property rights, using the institutional framework of natural resources such as oil, gas, coal, and other mineral resources as a model.
This solution seems, moreover, as apt to African nations facing acute water shortages as it is to the milder problems of U.S. water supply.
I became acquainted with Bill Gates when some years ago I mediated (unsuccessfully) the Justice Department's antitrust suit against Microsoft.
I was reassured to discover that the world's wealthiest person is extremely intelligent and surprisingly unpretentious.
But I am disappointed by the recent speech on "creative capitalism" that he gave at the World Economic Forum in Davos last month.
Almost half the world's population is extremely poor, subsisting on less than $2 a day; a billion are thought to subsist on less than $1 a day.
Most of the very poor live in sub-Saharan Africa and in southern Asia.
Gates argues that the key to alleviating their poverty is "creative capitalism," whereby private firms in the United States and other wealthy countries seek both profits and "recognition" (praise) in serving the needs of the poor, for example by developing technologies designed specifically for their benefit.
C. K. Prahalad, a business school professor admired by Gates, notes that Microsoft is "experimenting in India with a program called FlexGo, where you prepay for a fully loaded PC. When the payment runs out, the PC shuts down, and you prepay again to restart it. It's a pay-as-you-go model for people with volatile wages who need, in effect, to finance the purchase."
If there are good business opportunities in poor countries, however, it does not require Gates's urging for businesses to seek to exploit them.
So the only meat in his concept of creative capitalism is his proposal that businesses accept subnormal monetary returns in exchange for getting a good reputation as do-gooders.
But if a reputation for good works has cash value, then, once again, there is no need for Gates to urge businesses to serve the poor; self-interest will be an adequate motivator.
If it is true as he says in his speech that "recognition enhances a company's reputation and appeals to customers; above all, it attracts good people to the organization," then creative capitalism pays because it enables a firm to charge higher prices to its customers and pay lower quality-adjusted wages to its employees.
Whether this is true of a given firm's customers and employees is something that the firm is better able to gauge than an outsider, even so distinguished a one as Bill Gates.
If on the other hand reputation does not have cash value, or enough cash value to offset the reduction in financial returns that would result from conducting one's business in such a manner as to obtain a reputation for altruism, then the motivation for creative capitalism would have to be businessmen's feeling good about helping the disadvantaged.
But which businessmen--corporate managers or investors? Do shareholders--the corporation's owners--feel good when corporate management picks objects of charity, unless the charitable giving feeds the bottom line (as when a firm makes charitable donations to activities and institutions in the places in which it has its plants or offices)? Unless shareholders are eager to see their corporations give massive amounts to charities that are chosen not by the shareholders but by management and that do not contribute to corporate profits, it is hard to see how urging businesses to be disinterestedly charitable can have a significant effect.
A business that fails to maximize profits places itself at a competitive disadvantage relative to businesses that do maximize profits.
Only if charity contributes to profits is it a plausible investment for an investor-owned firm.
There is a hint in Gates's speech that profit maximization is the real goal, and the quest for "recognition" a veneer.
When he talks up "business models that can make computing more accessible and more affordable," it sounds as if he may be trying to develop new markets for Microsoft.
That is also the implication in Prahalad's statement that I quoted.
Gates talks about "markets that are already there," that is, in poor countries, "but are untapped." In other words, there are business opportunities in poor countries, and business opportunities require imagination rather than altruism to exploit.
A curious omission in Gates's speech is a theory of why so many people are desperately poor.
When he says that "diseases like malaria that kill over a million people a year get far less attention than drugs to help with baldness," he does not pause to inquire why that is so.
It is so, first of all, because people in wealthy countries do not suffer from malaria, and, second, because cheap but highly effective methods of combating malaria, such as mosquito netting and indoor spraying of DDT (which would have few negative environmental effects, unlike outdoor spraying), are somehow not provided, but for reasons political and cultural rather than financial.
We know that a nation doesn't have to be rich in natural resources to be prosperous.
The essential ingredient of economic growth is human capital, and it depends primarily on the existence of a political system that prevents violence, enforces property rights, provides a minimum level of public goods, and minimizes governmental interference in the economy.
Without such institutions, economic growth will be stunted; altruistic capitalists will not cure their absence.
Gates has discovered the Adam Smith of The Theory of Moral Sentiments, where Smith argued that people are not purely self-interested, but instead are actuated, to a degree anyway, by altruism.
But modern studies of altruism find it concentrated within the family and trace it back to the "selfish gene"--helping someone who shares one's genes may increase the spread of those genes in subsequent generations, and if so there will be natural selection for a degree of altruism.
And so as the relationship between people attenuates because of distance, race, and other factors, the degree of altruism declines.
That is one reason that Gates's argument that "recognition enhances a company's reputation and appeals to customers; above all, it attracts good people to the organization" falls short.
Few customers will pay more, and few skilled workers will accept lower wages, to benefit poor people in distant lands.
Finally, I take issue with Gates's assumption that alleviating world poverty is an unalloyed social good.
He calls himself an optimist, but some might describe him as a Pangloss, when he says that "the world is getting better" and will be better still if there are no more poor people.
If Gates said that prosperity, longevity, and other good things have increased in most of the world, he would be right.
But there is no basis for predicting that these trends will continue, given such threats to peace and prosperity as international terrorism, political instability, nuclear proliferation, and global warming.
And if creative capitalism does succeed in lifting billions of people out of poverty, the problem of global warming will become even graver than it is because the world demand for fossil fuels will soar.
The conservative position on gun control has been that people who commit crimes (whether they use guns or some other weapon, or no weapon for that matter) should be punished heavily, depending on the gravity of the crime and the probability of detection of crimes of that character, but that the possession of a gun should not be punished.
This position is not responsive, however, to the problem of lunatics who use guns to commit mass murder as a prelude to committing suicide.
When neither deterrence nor incapacitation is effective against some type of crime, preventive measures must be taken, and they include raising the price of some essential input.
Because guns are more lethal than knives or fists, measures to raise the price of guns will not cause large-scale substitution into these other methods of murdering people; but it is important that measures to raise the price of guns also be taken against other efficient methods of mass murder, including explosives and biological weapons.
Becker's post explains convincingly how to raise the price of guns.
I want to address the question whether Americans' love of guns is primarily an economic phenomenon or primarily a cultural one, for if the latter, perhaps some other method of reducing the demand for guns would be more effective.
Most countries that we consider our peers, or at least approximate peers, have far lower rates of gun ownership.
The proximate cause is restrictive gun laws, but these are democratic countries and they would not have such strict gun laws if their population had the same love of guns that ours does.
Building on Becker's interesting discussion of social interactions, one might speculate that the reason for widespread gun ownership by Americans is an arms-race phenomenon.
Given a high rate of gun crimes, law-abiding people feel threatened and arm themselves.
The more people who are armed, and thus the larger the demand for and supply of guns, the easier it is for criminals to procure guns through theft; also, the criminal demand for guns rises because criminals want to protect themselves against armed victims.
And the more armed criminals there are, the more gun ownership by law-abiding people--the potential victims of gun crimes--there is.
The result is a spiraling increase in gun ownership.
But I do not find this explanation convincing, since the spiral could be broken by the type of measures that Becker describes that would raise the price of guns.
Moreover, it is apparent that a vast number of Americans like guns, rather than thinking of them merely as instruments of self-protection.
Their attitude towards guns is different from their attitudes toward locks and alarm systems.
History seems relevant here.
The United States was born in a revolution in which the arms used by the revolutionaries were to a large extent privately purchased and owned.
Hunting was widespread (in contrast, in England and other European countries, hunting was a monopoly of aristocrats), and guns were also required for personal defense against Indians in frontier settlements, which were numerous.
Hostility to standing armies led to the adoption of the Second Amendment to the Constitution, which created a right to bear arms tied to a policy of relying on the state militias as a defense not only against foreign invaders but also against domestic tyranny.
Private ownership and use of guns continued to play a large role in American life with the settlement of the West amidst Indian threats and widespread lawlessness memorialized in the immensely popular "Westerns" of twentieth-century cinema.
The Colt .45 revolver (the "Peacemaker") became the symbol of the pacification of the West.
Private violence, much of it gun-inflicted, characterized the South in the Reconstruction era and later much of the country during Prohibition.
Violent criminals such as Billy the Kid and John Dillinger became celebrated and in some quarters even admired.
I am not aware of another developed country that has had a similar romance of the gun.
The paranoid Right in the United States fears that efforts to disarm the population are a prelude to a military coup d'état, though its adherents do not explain how the possession of pistols, rifles, and shotguns would enable civilians to foil such a coup.
Suppose, then, that the demand for guns is more cultural than instrumental, in the same way that the demand for particular foods is often a function of upbringing rather than cost, nutrition, or healthfulness (though these factors of course influence demand).
How might such a demand be altered? Higher prices could do it, but the problem is that as long as the cultural demand is strong, the political system is unlikely to adopt measures that would make guns significantly more costly.
The National Rifle Association, an enormously skillful lobbying organization, has persuaded the public that measures to keep guns out of the hands of criminals are bound to limit, as a practical matter, gun ownership by the law abiding as well.
Government sometimes engages in campaigns of public education to change people's habits; smoking is a notable example.
Political opposition can be circumvented, to a degree anyway, when the campaign is mounted by a federal agency that enjoys a degree of autonomy--the Surgeon General of the United States, for example, although a presidential appointee, has often displayed a degree of political independence.
In addition, the politics of gun ownership and gun control are not uniform across the country.
In most cities, and throughout much of the northeast, guns are more feared than loved.
High-visibility local politicians, such as Mayor Bloomberg in New York and Mayor Daley in Chicago, can command large audiences for messages proclaiming the desirability of stronger gun controls.
A few more college mass-murder suicide episodes, and antigun messages may begin to strike strongly responsive chords.
Senator Grassley of Iowa has expressed concern recently about the conjunction of huge increases in the size of the endowments of many universities (and colleges--for the sake of brevity, I shall use "universities" to denote colleges as well) with continued increases in tuition.
Both types of increase have far exceeded the rate of inflation in recent years.
The senator urges that investment income generated by the endowments be used to reduce tuition, and he threatens to introduce legislation that would require (as a condition of the universities' retaining their favored tax status as charitable institutions) that universities spend a minimum of 5 percent of their endowment every year.
That is a condition already imposed on foundations, but universities are exempt.
Several major universities have announced recently that they will reduce tuition substantially to children of nonwealthy, though modestly affluent, families.
I do not know whether these announcements are a response to Senator Grassley or to the concerns of the public that may have inspired his proposal.
The 5 percent rule makes less sense for universities than for foundations.
Foundations normally derive all or most of their income from investing their original endowment, and so they are not in competition with each other.
Nor have they the spur of profit maximization.
They are governed by self-perpetuating boards of trustees; so there is no democratic check either.
The idea behind the 5 percent rule is to prevent the hoarding of endowment income (hoarding that might fund high salaries and generous perks for staff) and to prod the trustees to seek additional grants and thus compete with other foundations.
But the prod is slight so long as the average return on investing the endowment is at least 5 percent a year.
With inflation currently running at 4 percent, a 5 percent return is easily achieved.
Higher education, in contrast to the foundation sector, is a highly competitive industry, even though most universities are nonprofit.
They compete vigorously for students, faculty, and grants, which include alumni donations, foundation and other third-party donations, and government grants; state universities also receive money appropriated by the state legislature, but this is a diminishing source of the revenue of major state universities such as Berkeley, UCLA, Michigan, and Virginia.
The different sources of income are complementary: good students attract good faculty and vice versa, and a university's academic standing attracts donations and grants.
The universities' principal income consists of tuition plus donations and grants plus endowment income, but some universities have income from television contracts for their athletic teams, and others have income from patents developed by members of their faculty.
The major universities not only have large endowments but also receive large annual gifts from alumni and others.
Generally, wealthier universities are better, or at least more prestigious, than poorer ones, and so they can and do charge very high tuition even though they could "afford" to charge lower, or even zero, tuition; but that would not make economic sense for them.
Given the competitive structure of higher education, it is hard to see why government should step in and try to limit tuition.
The universities have a competitive incentive to provide financial aid to highly promising applicants who cannot afford full tuition; why those who can afford to pay for it should not be asked to pay for it escapes me.
Forcing abolition of tuition would be a subsidy for rich kids.
If universities were somehow prevented from charging tuition, moreover, applicants and their families would not have to think carefully about educational options.
A free university education would be attractive to many people for whom it would be a poor investment if they had to pay stiff tuition, though it would not be completely free in an economic sense because they would have to forgo income from working.
And a 5-percent-fits-all solution would make no sense for universities that had very small endowments and good reasons for wanting them to be larger.
The difficult question involves the federal income tax exemption for donations to universities.
It is a legitimate question why the federal taxpayer should be subsidizing Harvard, with its $35 billion endowment.
The only justification would be if the type of research and teaching that goes on at Harvard or the other major universities generates external benefits that, were it not for the subsidy, would be smaller by more than the subsidy.
This seems unlikely.
The cost of the scientific research and graduate scientific training conducted in these universities is already heavily subsidized by federal and corporate grants and contracts; and increasingly the scientific research done by universities is applied rather than basic and so is eligible for patent protection.
The contribution of nonscientific fields to welfare is not negligible, but one does have a sense that in many of them the marginal product is slight or even negative--is there really social value in having 400 English-language philosophy journals (the approximate number today) rather than 50?
Because universities, though competitive, are not profit maximizers, because of age-old uncertainty concerning the effectiveness of various methods of teaching and the value of various forms of scholarship, and because of a tradition of faculty autonomy reinforced by the tenure system, they have much the character of workers' cooperatives, which are not notably efficient enterprises.
Conservatives, who inveigh against big government, should not ignore tax subsidies.
There is no economic difference between giving universities federal money and allowing donors to universities to deduct their donations from their federal income taxes.
If the exemption were repealed, tax rates could be reduced without any increase in government spending, and in fact government spending would be reduced, since the cost of administering the exemption and the higher cost of collecting taxes when tax rates are higher would be eliminated.
Hence all tax subsidies deserve close scrutiny.
I am not convinced that the tax subsidy for donations to universities would survive that scrutiny.
A few peripheral points merit brief comment.
First, the endowments may not be as large, realistically, as they seem.
In recent years, a number of universities have been reporting annual increases in endowment in the 18 to 20 percent range.
Unless the endowment managers are geniuses, there are only two ways in which they can obtain such returns.
One is by generous year-end appraisals of endowment assets that are not frequently traded, so that their market value must be estimated.
The other is by leverage, which generates above-average returns in a rising market--and above-average losses in a falling one.
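The leverage asymmetry just described can be shown with a worked example (the dollar amounts and rates are hypothetical): an endowment that borrows a dollar for every dollar of its own capital roughly doubles both its gains and its losses.

```python
# Hypothetical illustration of how leverage amplifies endowment returns.
# An endowment invests $100 of its own capital plus $100 borrowed at 5%.

def levered_return(market_return, equity=100.0, debt=100.0, rate=0.05):
    """Return on the endowment's own capital after repaying the debt."""
    assets = equity + debt
    gross = assets * (1 + market_return)   # portfolio value after one year
    net = gross - debt * (1 + rate)        # repay principal plus interest
    return (net - equity) / equity         # return on own capital

print(levered_return(0.12))    # a +12% market year yields about +19%
print(levered_return(-0.12))   # a -12% market year yields about -29%
```

A 12 percent market gain becomes a 19 percent return on the endowment's own capital, while a 12 percent market loss becomes a 29 percent loss: above-average returns in a rising market, above-average losses in a falling one.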
Second, the universities' argument, contra Grassley, that many contributions to university endowments come with restrictions that might make it hard for a university to spend 5 percent of its endowment every year is unconvincing.
Suppose 50 percent of the endowment could not be spent (or the earnings on it); then to meet the 5 percent requirement the university would have to spend 10 percent of the unrestricted portion of its endowment.
That would not be a hardship.
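The arithmetic behind that claim is straightforward; here is a quick check with hypothetical figures (a $1 billion endowment, half of it restricted):

```python
# The restricted-endowment arithmetic from the text, hypothetical figures.
endowment = 1_000_000_000        # $1 billion total endowment
restricted_share = 0.50          # half cannot be spent
required = 0.05 * endowment      # the 5 percent rule applies to the total

unrestricted = endowment * (1 - restricted_share)
rate_on_unrestricted = required / unrestricted
print(rate_on_unrestricted)      # 0.1: spend 10% of the unrestricted half
```

Spending 5 percent of the whole endowment from the unrestricted half alone requires spending 10 percent of that half, which, as the text says, is no hardship.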
Third, Senator Grassley's concern that increased endowment income is being squandered on high salaries for university presidents is also unconvincing.
The reason those salaries are soaring is that universities are becoming increasingly large and complex, requiring management skills comparable to those required by substantial corporations.
There are four basic alternatives for dealing with illegal immigration: do nothing; do nothing about the illegal immigrants who are already in the United States but take measures to stop future illegal immigration; amnesty the existing illegals; deport them.
The first three alternatives are plausible; the last is not.
The United States does not have enough police and other paramilitary personnel, or sufficient detention facilities, to round up and deport 12 million persons (our prisons and jails are bursting with 2 million inmates), and even if it did, the shock to the economy would be profound, as the vast majority of the illegal immigrants are employed.
The mass deportation would create a serious labor shortage, resulting in skyrocketing wages and prices.
The first alternative, which is to do nothing, has a number of attractions, though doing nothing in response to a perceived problem is not in the American grain; fatalism is alien to American culture.
Most illegal immigrants are hard-working, many will return to their country of origin after accumulating some savings (but be replaced by others), most do pay taxes but do not receive social security and other benefits, they are less prone to commit crimes than the average American (the reason is that if convicted of a crime they would be deported after serving their prison term), and they consume less health care than the average citizen or lawful resident.
Their children attend public schools, which increases the costs to taxpayers, but the parents compensate by working hard for wages that may be depressed because of an illegal worker's precarious status, paying taxes, and receiving few other public benefits besides a free public education for their kids.
The fierce hostility that many conservatives feel toward illegal immigrants appears to be a compound of hostility to unlawful behavior (they are illegal immigrants, after all) and of fear that immigrants from Mexico and Central America will alter American culture, which is still primarily northern European.
The fear is similar to what many Americans felt about Irish and southern and eastern European immigration in the nineteenth and early twentieth century.
The fear proved to be unfounded.
Concerns with congestion externalities and national security support the second alternative, that of trying to stem further illegal immigration; in particular, there is a strong national security interest in reducing the porousness of our borders, which terrorists might take advantage of.
But this alternative is unstable, in the following sense.
It is infeasible to build and man, at reasonable cost, a wall or fence that would actually close our border with Mexico; and anyway we cannot literally close it because a great deal of lawful traffic in persons and goods moves back and forth across the U.S.-Mexican border.
The only way to block illegal immigration is to require all persons in the United States to carry biometric identification and to impose meaningful penalties on all employers (including household employers) of illegal immigrants, since no longer could an employer plead that he had been fooled by a false I.D.
But these measures would be equally effective against existing illegal immigrants, as well as newcomers, so that alternative two would in practice approximate alternative four (expulsion of all illegal immigrants), unless the measures were enforceable only against new immigrants--but how would an employer know whether a new applicant for a job was a recently arrived illegal immigrant or one already living here?
So in practice any measure for closing off future illegal immigration would have to be coupled with an amnesty for the current illegal immigrants.
The word "amnesty" is anathema in political debate over immigration, but the concept is inescapable.
Illegal immigrants are--illegal.
They are not supposed to be in the United States.
If we let them stay, on whatever terms, we are forgiving the illegality of their presence.
The grantor of the amnesty may demand a quid pro quo, but that does not make it any less an amnesty.
A tax amnesty is conditioned on the taxpayer's paying the taxes that he owes.
Similarly, an immigration amnesty, which would convert the illegal immigrant's status to that of a lawful resident eligible for eventual citizenship without having to leave the country, could be conditioned on the immigrant's paying a fine and learning English.
(Illegal immigrants who had committed crimes should not be eligible.) Of course, the fine must not be set so high, or other conditions of regularizing one's status made so severe (such as requiring the illegal immigrant to return to his country of origin and "stand in line" for a U.S. visa), that most illegal immigrants would decide to remain illegal.
It is true that some amnesties come without conditions, such as President Carter's 1977 unconditional amnesty for Vietnam War draft dodgers.
Opposition to an amnesty for illegal immigrants may be colored by a failure to distinguish between conditional and unconditional amnesties.
The distinction is important.
The conditional amnesty that I am proposing is functionally the equivalent of Becker's proposal to sell to illegal immigrants the right to become lawful residents.
The objections to an immigration amnesty, even in its conditional form, are threefold.
First, it rewards illegal behavior.
But that is something done all the time without controversy.
A criminal who agrees to rat on an accomplice may be given a break in sentencing; that is the equivalent of rewarding an illegal immigrant for coming forward and paying a fine to regularize his status.
Second, it is argued that an amnesty would create an expectation of a future amnesty and thus encourage further illegal immigration.
But the argument just shows that the amnesty would have to be coupled with efforts, which as I have explained are feasible, to prevent further illegal immigration.
Third, it is argued that an amnesty would be unfair to those foreigners patiently waiting in line for permission to immigrate legally to the United States.
But why the United States should care about these people is obscure.
They are not Americans; we do not owe them anything.
If an amnesty solves our problems, the fact that it is in some global sense "unfair" to another set of foreigners deserves, in my opinion, no consideration.
The government has decided to impose a $500,000 ceiling on the senior executives of banks and other financial institutions that accept bailout money.
This is a bad idea, though politically inevitable because of public indignation at financiers, thus illustrating a point I make in my forthcoming book about the depression--for I insist that it is a depression, and not a mere recession, that the country is in--that a depression is a political rather than just an economic event.
(The book is entitled A Failure of Capitalism: The Crisis of '08 and the Descent into Depression, and will be published early in April by the Harvard University Press.)
It is a bad idea for three reasons.
First, it directs attention away from the really culpable parties in the depression, who are not the financiers.
They were engaged in risky lending, that is true; but the fact that a risk materializes does not prove that it was imprudent.
A small risk of bankruptcy--a risk that almost every business firm assumes--can be catastrophic when it is a risk faced by most firms in an industry and the industry is financial intermediation.
But the responsibility for preventing catastrophic risks to the economy caused by a collapse of the banking industry lies with the Federal Reserve, other regulatory bodies, and the Treasury Department.
A banker is not going to forgo a risk that should it materialize would wreck the economy, because his forbearance would have no consequence, as long as his competitors continued running the risk; it is a classic case of external costs, requiring government intervention.
Because the Federal Reserve under Alan Greenspan pushed interest rates too low and kept them low for too long, and because regulation of financial intermediaries had over the years dwindled and become especially lax during the Bush Administration, the bankers were allowed, and competition forced them, to take risks that could have and have had disastrous results.
If the government thinks that shaming the bankers and capping their pay will prevent future banking disasters, it will be distracted from making the regulatory changes that are necessary to restore effective public supervision of a vital industry.
Second, the pay cap contributes nothing to getting us out of the depression.
That can be done only by an active monetary policy, by recapitalizing the banking industry, and by a stimulus program (because the first two policies are not working well)--that is, by trying to stimulate demand for goods and services by putting unemployed or underemployed labor and other resources to work, as by a public-works program, the idea being that if private demand falls below supply, the equilibrium can be restored by substituting public demand for the missing private demand.
The pay ceiling does nothing along any of these lines.
One reason it does not is that the problem of overcompensation in the banking industry is more serious at the trading level than at the senior management level, since it's the traders who make the transactions.
I give an example in my book of how it can pay a trader to make an extremely risky trade.
The pay cap doesn't reach down that far in the corporate hierarchy.
Third, and worst, the pay ceiling will retard the recovery of the banking industry.
Not, I think, because it will drive the ablest executives into other fields; for the demand for their services in other fields is apt to be weak, though some may retire early rather than work for what they are apt to regard as a derisory salary.
But some will be hired by banking firms to which the pay cap does not apply because they do not want bailouts.
And those who remain in their present jobs and are subject to the cap will be distracted from their work.
They will have to make changes in their personal finances to adjust to their lower salary, and, human nature being what it is, they will spend time seeking ways to evade the ceiling--efforts that no doubt will be met by bureaucratic regulations designed to foil them.
Their time and attention will be deflected from the challenges facing their companies.
The pay ceiling will be more than a personal distraction, however.
It may cause senior management at some banks to refuse a bailout, to the detriment of recovery from the depression.
Worse, it will increase the volatility of the political and regulatory environment of the banking industry (a term I use broadly to include financial intermediaries in general, since the traditional barriers between banks and other such intermediaries have largely been taken down).
Critics of the bailouts complain that banks aren't lending the money that the government has given them, but instead are putting it in the pockets of their executives, in the form of high salaries, bonuses and perks.
All that money that is going to the executives, however, is just a drop in the bucket.
The banks are not lending the capital the government has given them not because they've squandered it on their executives but because the demand for loans is weak in a depression, because loans in a depression are at a high risk of default, and because the banks are still undercapitalized.
Having railed against the banks for taking too many risks, the government now wants them to take more risks!
A compelling criticism of the bailout programs is that their erratic administration has left the banking industry uncertain as to what is coming next.
Are the banks going to be taken over by the government? Or subjected to new forms of regulation? What strings will be attached if they need additional capital? Will they be forced to lend money even though they are undercapitalized? If so, and they get into trouble, will the government bail them out again? Will they be made scapegoats for lax regulation? All else aside, a firm operating in so uncertain an environment is apt to hunker down and hoard its cash, for it must be prepared for anything.
The pay ceiling adds to the uncertainty of their environment by suggesting that they are to be subjected to populist regulation as well as to regulation singlemindedly concerned with getting us out of the depression as quickly as possible.
All this said, I don't deny that there is such a thing as executive overcompensation, owing to the weak incentives of boards of directors to police compensation.
But that's a long-term problem, rather than anything to do with fighting a depression.
I agree with Becker that China is not responsible for our current depression.
It is true that China's trade imbalance with the United States--it exports much more to us than we to them, and so has built up huge dollar balances that it has invested in the United States, mainly as Becker points out in the form of purchasing federal government bonds--was a factor in keeping interest rates low in the 2000-2005 period.
And it is true that those low interest rates are a major culprit in the housing bubble and the ensuing financial collapse.
The usual effect of an increase in the supply of money is to lower interest rates, because interest is the price of money.
But low interest rates were a policy of our Federal Reserve under the chairmanship of Alan Greenspan, who remained chairman until the end of the 2000-2005 period; the Federal Reserve can create all the money it wants; and so even if there had been no Chinese investment in the United States, interest rates would probably have remained low.
I say "probably" rather than "certainly" because the cheap Chinese exports to the United States were one of the factors that enabled the Federal Reserve's policy of low interest rates to avoid creating serious inflation.
Had there been inflation, the Fed would have raised interest rates, killing the housing bubble before it became serious.
Because houses are bought mainly with debt, cheap credit encourages home buying; and since the stock of housing expands only slowly, an increase in the demand for homes can and did result in a steep increase in home prices, which turned into a speculative bubble.
Furthermore, China is of course not the only country with a positive trade balance with the United States.
I do think that China's trade policy was bad for the United States.
I do not think that it is healthy for the United States to run huge budget deficits, whether they are financed by China or by anyone else.
One reason that we are in a depression and not merely the "severe recession" that is the preferred euphemism is that, because of those deficits, we cannot spend our way out of the depression without increasing the national debt to a point at which either horrendous inflation or huge tax increases will be required to pay it down.
But I would not "blame" China for giving us what we very much wanted, which was cheap goods and the financing of our debt.
Nor do I think that China's trade policy is foolish from China's standpoint.
In a fascinating chapter of The General Theory of Employment, Interest and Money, Keynes offered the following qualified defense of mercantilism (the policy of accumulating foreign exchange by running a persistent export surplus).
Suppose a country has weak domestic demand for goods and services, perhaps because its people are poor or they lack confidence in their economic prospects and therefore hoard money rather than spending it.
Because of this weak demand, labor and other productive resources will tend to be underemployed--unless producers have foreign markets.
By stimulating exports, the nation can increase the utilization of its productive resources, which by reducing unemployment can increase consumer confidence and thus increase spending and in turn investment.
Average incomes in China are very low, domestic demand therefore weak relative to potential output, and so it may make sense for China to encourage production for export, especially if it has a comparative advantage in producing goods that foreign countries such as the United States demand.
Moreover, because average incomes in China are so low, there may not be much demand for the types of product that the United States produces, and this would be an independent reason for our trade imbalance with China.
But it would not explain China's large dollar balances, unless no other country produces the types of product that Chinese consumers could afford to buy.
I have little to add to Becker's post, with which I agree.
One does understand the support in Congress for adding "Buy American" provisions to the stimulus package now moving through Congress (the "American Recovery and Reinvestment Act of 2009," as it is called).
Apart from the usual interest-group pressures, the goal of the Act, or at least the stated goal (for besides the goal of stimulating the economy, there is the goal of advancing President Obama's long-term policy agenda at an opportune time, by grafting the agenda onto the stimulus), is to increase employment, and that means employment in the United States, obviously.
If suppliers, say, of broadband equipment receive a government order and satisfy it by buying the equipment abroad, the increase in employment (if any) will take place in the country in which the equipment is bought.
An appropriate solution to this dilemma is to focus the stimulus package on goods and services that are made in the United States.
That is one more reason why, as some Republican senators are now urging, more of the stimulus money should go to construction, whether of roads or bridges or schools.
True, some inputs into these products, such as steel for bridges, may come from abroad, but most are local, in particular of course labor--and there is a lot of unemployment in the construction industry.
Indeed, government-financed construction, especially of transportation facilities, strikes me as the optimal Keynesian anti-depression program: the inputs are local, unemployment in the industry is great, our transportation infrastructure needs investment, improving it will confer external benefits (such as faster commuting and less wear and tear on vehicles), the costs can be eventually recovered, after the depression ends, in tolls and other user fees, and construction projects (especially repairs) can be commenced pretty quickly, especially if emphasis is placed on funding state and local road and other infrastructure projects that have been interrupted or deferred by the states' depression-caused revenue shortfalls.
On the topic of sending stimulus money abroad, I think the big foundations, such as the Gates Foundation (the biggest), should be strongly urged to redirect their extensive foreign charity to the United States at this time of depression.
I am not suggesting that their projects should "Buy American," in the sense of buying U.S. products to give to foreign recipients of their charity.
The point is rather that charity should begin at home when home is suffering.
The bailouts and stimulus and other expenditures, over and above our already huge budget deficits, aimed at getting us out of our economic doldrums as fast as possible, are going to increase the national debt significantly and by doing so impose heavy costs for years to come.
The foundations in the aggregate spend many billions of dollars a year, and the substantial portion that goes to fight malaria in the Third World or promote agriculture or family planning there could be redirected--not all of course and not all at once, because the programs induce reliance on the part of the recipients--to the United States to help get us out of our economic predicament without assuming a staggering further burden of debt.
I grant that poor countries may be harder hit by what is a global depression than the United States, but I consider Americans' obligations to be primarily to Americans rather than to the inhabitants, however worthy, of foreign countries.
I am also inclined to think that charitable giving abroad is so closely entwined with the nation's foreign policy objectives that it should be regulated by the State Department rather than left entirely to private choice.
I have little to add to Becker's post, with which I agree.
One does understand the support in Congress for adding "Buy American" provisions to the stimulus package now moving through Congress (the "American Recovery and Reinvestment Act of 2009," as it is called).
Apart from the usual interest-group pressures, the goal of the Act, or at least the stated goal (for besides the goal of stimulating the economy, there is the goal of advancing President Obama's long-term policy agenda at an opportune time, by grafting the agenda onto the stimulus), is to increase employment, and that means employment in the United States, obviously.
If suppliers say of broadband equipment receive a government order and satisfy it by buying the equipment abroad, the increase in employment (if any) will take place in the country in which the equipment is bought.
An appropriate solution to this dilemma is to focus the stimulus package on goods and services that are made in the United States.
That is one more reason why, as some Republican senators are now urging, more of the stimulus money should go to construction, whether of roads or bridges or schools.
True, some inputs into these products, such as steel for bridges, may come from abroad, but most are local, in particular of course labor--and there is a lot of unemployment in the construction industry.
Indeed, government-financed construction, especially of transportation facilities, strikes me as the optimal Keynesian anti-depression program: the inputs are local, unemployment in the industry is great, our transportation infrastructure needs investment, improving it will confer external benefits (such as faster commuting and less wear and tear on vehicles), and the costs can eventually be recovered, after the depression ends, in tolls and other user fees.
Moreover, construction projects (especially repairs) can be commenced quickly, especially if emphasis is placed on funding state and local road and other infrastructure projects that have been interrupted or deferred by the states' depression-caused revenue shortfalls.
On the topic of sending stimulus money abroad, I think the big foundations, such as the Gates foundation (the biggest), should be strongly urged to redirect their extensive foreign charity to the United States at this time of depression.
I am not suggesting that these foundations' projects should "Buy American," in the sense of buying U.S. products to give to the foreign recipients of their charity.
The point is rather that charity should begin at home when home is suffering.
The bailouts and stimulus and other expenditures, over and above our already huge budget deficits, aimed at getting us out of our economic doldrums as fast as possible, are going to increase the national debt significantly and by doing so impose heavy costs for years to come.
The foundations in the aggregate spend many billions of dollars a year, and the substantial portion that goes to fight malaria in the Third World or promote agriculture or family planning there could be redirected--not all of course and not all at once, because the programs induce reliance on the part of the recipients--to the United States to help get us out of our economic predicament without assuming a staggering further burden of debt.
I grant that poor countries may be harder hit by what is a global depression than the United States, but I consider Americans' obligations to be primarily to Americans rather than to the inhabitants, however worthy, of foreign countries.
I am also inclined to think that charitable giving abroad is so closely entwined with the nation's foreign policy objectives that it should be regulated by the State Department rather than left entirely to private choice.
President Obama in his State of the Union Address announced a program to double U.S. exports in the next five years and by doing so create 2 million new jobs.
The program mainly involves a $2 billion increase in loan guaranties to exporters by the Export-Import Bank and greater efforts to negotiate trade agreements with foreign countries and to enforce U.S. laws against “unfair” international trade practices, such as “dumping” foreign goods in United States markets at below-cost prices.
Of all the “job programs” undertaken or contemplated by our government, the President’s plan to double exports in five years seems to me the most fatuous.
Total U.S. exports of goods and services in 2009 were $1.553 trillion, and total imports $1.934 trillion (the net trade balance was therefore minus $380.7 billion).
See U.S. Bureau of the Census, Foreign Trade Statistics, www.census.gov/indicator/www/ustrade.html.
Exports were therefore about 11 percent of Gross Domestic Product.
If GDP increases by 2.5 percent a year for the next five years, and exports grow proportionately, then by 2015 they will have grown by 13 percent.
The President wants them to grow by 100 percent.
(If they would grow by 13 percent without any governmental effort, then the President’s 100 percent program, if it succeeded, would actually have increased exports by only about 87 percent—though of course the government would take credit for all 100 percent!).
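These figures can be checked with a few lines of arithmetic; the GDP figure below is an assumption (roughly the 2009 nominal figure), since the text gives only the export share:

```python
# Check of the trade and growth arithmetic in the text.
exports = 1.553e12            # 2009 U.S. exports of goods and services
imports = 1.934e12            # 2009 U.S. imports
gdp = 14.4e12                 # assumed 2009 nominal GDP (not given in the text)

trade_balance = exports - imports            # about minus $381 billion
export_share = exports / gdp                 # about 11 percent

# If GDP (and exports, proportionately) grow 2.5% a year for five years:
baseline_growth = 1.025 ** 5 - 1             # about 13 percent

print(round(trade_balance / 1e9))      # -381
print(round(export_share * 100))       # 11
print(round(baseline_growth * 100))    # 13
```

Compounding slightly exceeds the naive 5 x 2.5 = 12.5 percent, which is why the five-year baseline comes out at about 13 percent.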
How his program could accomplish this is incomprehensible to me.
The increase in loan guaranties by the Export-Import Bank would reduce exporters’ interest costs by reducing their risk of losing money by extending credit to a foreign purchaser, but that would be a minor boon to exporters.
Likewise, anti-dumping enforcement and other efforts to prevent “unfair” pricing by foreign companies exporting to the United States are likely simply to provoke retaliation--limiting our exports--as happened in our recent tiff with China over imports of tires.
That leaves the negotiation of trade agreements with foreign countries.
The problem with them, from a job-creation or deficit-reduction standpoint, is that they increase bilateral trade--imports as well as exports--and so have no systematic tendency to increase net exports.
Moreover, they are difficult to negotiate because of opposition by producers and workers in both countries (if it is a bilateral agreement) to allowing increased imports, which can reduce domestic production and employment.
At the same time, increased imports benefit consumers and some producers (imports are often inputs into domestically manufactured goods), but generally these effects are more diffuse than the losses of sales and employment caused by imports, and so do not have as much political weight.
A Democratic Administration is apt to be particularly sensitive to union opposition to free-trade agreements.
Increasing exports is a standard and perfectly sensible response to an economic downturn.
Exports are by definition domestically produced goods or services, so an increase in exports increases production and hence employment.
But the usual way of stimulating exports is by devaluation, which increases the amount of a nation’s goods and services that foreigners can obtain with their foreign currency.
Moreover, devaluation increases the domestic price of imports, which in turn stimulates domestic production to replace some of these now more expensive imported goods.
Our government has been trying to create a modest inflation, primarily in order to reduce debt burdens, reduce real wages (in order to reduce layoffs), and reduce hoarding (since inflation has the effect of a tax on cash balances) and thus stimulate consumption.
Inflation increases the price of exports, but that effect, especially if the inflation is modest, is likely to be offset by a fall in the value of the dollar relative to foreign currencies.
A high rate of inflation, however, which is a looming possibility because of the Federal Reserve’s "easy money" monetary policy, would probably have a significant negative effect on exports.
We have large trade deficits with China and other Asian countries that pursue protectionist policies, but there doesn’t seem to be anything we can do about that.
We have no leverage with those countries.
They accumulate large dollar reserves as a result of running large trade surpluses with us by virtue of their policies—and we borrow the dollars, becoming these countries’ debtors.
The indirect effect of our foreign debt on our exports is an illustration of the gravity of our fiscal situation, to which the government seems at last to be paying some heed—though not in its job programs.
An improvement in our trade balance would be a good thing because it would reduce the federal deficit as well as making us less dependent on the goodwill of foreign countries, but increased exports offset by increased imports would not affect the balance.
Finally and most questionably, the proposal to double our exports assumes that foreign export-oriented countries like Germany and China would stand idly by while we “stole” their export markets.
Obviously they would respond—with their own export-stimulus programs, which would be likely to negate ours.
When is it proper for government to try to protect people, in their capacity as consumers of goods and services, from themselves? And not just children or people with serious mental problems, but normal adults.
Can’t normal adults protect themselves? And if they can’t, won’t competition among sellers protect them?
These questions are acutely raised by the proposal, now before Congress, to create a Consumer Financial Protection Agency that would protect consumers of financial products such as mortgages and credit cards and payday loans not only from misrepresentations by sellers of these products, but also from their own ignorance or poor judgment.
The proposal draws on behavioral economics, which teaches that cognitive and psychological limitations frequently lead consumers to make mistakes, even when there is no fraud by sellers.
The specific proposal seems to me misconceived.
Its premise is that the housing bubble and ensuing financial collapse were due in significant part to reckless borrowing, whether to finance home purchases or to borrow against home equity in order to obtain cash to buy other goods and services.
The argument is that people didn’t realize the risk involved in buying a house with very little (sometimes zero) equity, especially if they financed it with an adjustable-rate mortgage, which might become unaffordable if interest rates rose.
No doubt some people didn’t realize they were taking a risk, but I don’t think that that is the explanation for the housing bubble.
Almost no one, including sophisticated economists and financiers, realized that the steep increase in housing prices that ended in 2006 was a bubble phenomenon.
If it was not, then homebuying wasn’t really risky, because one could anticipate that the market value of one’s house would grow, and this would create sufficient equity to be able to refinance one’s mortgage on attractive terms.
There was a speculative element but it did not seem extreme because so few experts believed there was a housing bubble.
Among these experts notoriously was Ben Bernanke.
I want to contrast with the proposal to curtail risky borrowing by consumers two types of consumer protection that seem to me justifiable, and this regardless of the insights of behavioral economics.
One is requiring cigarette labeling and advertising to carry warnings of the health hazards of smoking.
This regulation is not very important today because everyone knows about these hazards, but it was important in the 1960s when the existence and gravity of the hazards were first confirmed.
Obviously individual consumers were not in a position to study the health effects of smoking—which cigarette manufacturers were busy denying—but one might think that advertisers of competing products would have had an incentive to frighten consumers away from smoking.
But this would not be a realistic expectation.
What would consumers think if a manufacturer of chewing gum advertised that chewing gum, unlike smoking cigarettes, does not cause lung cancer? Nor would cigarette manufacturers whose cigarettes contained less tar and nicotine than the average be strongly motivated to advertise the fact, because they would be telling the world that cigarettes are hazardous, at a time when this was not generally realized.
Automobile manufacturers were slow to offer seatbelts, perhaps fearing they would be advertising the dangers of driving—and charging a higher price (to cover the cost of the seatbelts) at the same time.
My second example is inspections of restaurants and food processors by government inspectors, to prevent food poisoning.
One can imagine leaving food safety to the market, reinforced by tort remedies against the sale of unsafe products.
But solvency limitations would make market and tort remedies ineffectual against many sellers, especially small and new ones—so the inspection regime actually facilitates new entry, which is a dominant feature of the restaurant industry.
Food poisoning can cause death, indeed multiple deaths, and when the consequences of a market failure are very grave, there is an argument for preventive regulation.
Now I want to discuss an important intermediate case, where the argument for consumer protection seems to me stronger than the case for consumer financial protection (other than against fraud), but not so strong as in the cases I just gave.
That is the case of obesity.
According to the Weight Control Information Network, which is part of the National Institutes of Health in the Department of Health and Human Services—and I believe reputable—two-thirds of American adults are overweight and one-third—an astonishing percentage—are obese, defined as having a Body Mass Index (the ratio of a person’s weight in kilograms to the square of his height in meters) of more than 30.
So, for example, a woman 5 feet 6 inches tall would be deemed obese if she weighed 186 pounds, and a man 6 feet 1 inch tall would be deemed obese if he weighed 228 pounds.
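As a check on this arithmetic, here is a minimal sketch of the BMI formula and of the weight at which a given height crosses the BMI-30 threshold (the unit conversions are standard; the code is illustrative only):

```python
# BMI = weight (kg) / height (m)^2; obesity is defined as BMI of more than 30.
KG_PER_LB = 0.453592
M_PER_IN = 0.0254

def bmi(weight_lb: float, height_in: float) -> float:
    """Body Mass Index from weight in pounds and height in inches."""
    kg = weight_lb * KG_PER_LB
    m = height_in * M_PER_IN
    return kg / m ** 2

def obesity_threshold_lb(height_in: float) -> float:
    """Weight in pounds at which BMI reaches 30 for a given height."""
    m = height_in * M_PER_IN
    return 30 * m ** 2 / KG_PER_LB

print(round(obesity_threshold_lb(66), 1))   # 5'6": about 186 lb
print(round(obesity_threshold_lb(73), 1))   # 6'1": about 227.4 lb
```

The thresholds come out at roughly 186 pounds for 5'6" and roughly 227.5 pounds for 6'1", consistent with the examples in the text.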
Obesity is measured differently for non-adults, but 17 percent of young children and 17.5 percent of adolescents are estimated to be obese.
These are startling figures, and considerably higher than in virtually any other country in the world.
My esteemed colleague Becker has argued, however, that American obesity is not excessive in an economic sense.
Obese people may simply have traded off the pleasures (and economy) of eating cheap, tasty, and nutritious food against the costs in disagreeable appearance, impaired mobility, the greater danger of and longer recovery time from surgery, and the much greater incidence of Type II diabetes and joint problems; there is also a greater risk of heart disease and possibly of dementia.
Becker believes that the long-run expected costs of obesity may be small if continued advances in medical technology eliminate or greatly reduce the health problems that obesity creates, and that the realization of this possibility is one of the factors that people consider in deciding whether to allow themselves to become obese.
I am skeptical.
The problem of obesity is concentrated in the poorer segment of the population, among people with limited education who may be unable to assess the health risks of obesity and as a result are unwilling to incur the slight added expense (or the cost in diminished eating enjoyment) of a diet less rich in sugar and butter.
They may also be imperfect agents of their children; and a person who becomes obese as a child will find it more difficult to avoid obesity than people who were thin children.
Governmental paternalism when directed to children is less problematic than paternalism toward adults.
There is also an externality, which is a nonpaternalistic justification for government intervention.
The government, meaning ultimately the taxpayer, now pays for half of total U.S. medical expenses.
The average medical expenses incurred by obese people are substantially greater than those of the nonobese, even after allowance for the shorter life spans of obese people.
(It has been responsibly estimated that obesity and overweight add $150 billion a year to the nation’s medical bill.
This is somewhat too high because it ignores the effect of obesity in reducing longevity.
On the other hand it excludes the $40 billion a year that people spend on diet programs.) Much of the additional cost to Medicare, Medicaid, and other public programs is borne by taxpayers who are not obese.
Private health insurers are forbidden to “discriminate” against the obese by charging higher premiums to them, which is an unsound policy that should be changed (it won't be).
But probably many obese people are not insured, and their medical expenses are paid by charity or government, for example when they seek medical care in hospital emergency rooms and cannot pay the price they are billed for that care.
Failure to pay the full medical costs imposed by obesity distorts the decision of a person to become or remain obese; it is a subsidy for obesity.
I find it difficult to imagine a more grotesque subsidy.
But whether it would be desirable for the government to try to reduce obesity depends on the cost and efficacy of the measures it might take.
Some of the common proposals are likely to have only modest effects, such as requiring restaurant menus to disclose calories.
People who are motivated to avoid obesity know or can easily discover the approximate caloric content of the various foods, and people have most of their meals at home rather than in restaurants.
Somewhat more promising measures are: instruction in nutrition and the dangers of obesity in elementary and high schools; healthful school-lunch programs; expanded compulsory physical education in schools; restrictions on foods that can be purchased with food stamps; a tax on advertising fast food; a tax on video games; a ban on food advertisements aimed at children; a relaxation of regulations of health insurance that discourage charging higher premiums to the obese (and that thus subsidize obesity); a tax on soft drinks that contain sugar; and a calorie tax.
All would be relatively inexpensive measures that would have a good chance of paying for themselves.
The last, the calorie tax, which would probably be the most effective measure, would be a Pigouvian tax—a tax designed to internalize an externality, and, as such, defensible on standard economic grounds if I am correct that obesity creates an externality.
Still, such a tax can be criticized on two grounds.
One is that it would be strongly regressive.
But its regressive effect could be offset by a more generous food-stamp program.
The second objection, emphasized by Becker, is that a tax on calories penalizes people who are not obese, and they are the majority.
It is the same objection that can be made to alcohol taxes as a means of curtailing drunk driving: most of the people taxed are not drunk drivers.
A more efficient anti-obesity tax, in principle, but utterly infeasible politically, would be a head tax measured by weight.
I am not much impressed by “fairness” objections to taxes.
Taxation is inherently arbitrary, because it doesn’t match the taxes paid by a person to the services he receives from government or the costs he imposes on society.
A calorie tax would raise considerable revenue, because like most Pigouvian taxes it would result in only limited substitution away from the taxed good, and the government at present is in desperate need of additional revenue.
A more efficient tax would be a tax on producers of food, based on the difference between the cost of the ingredients before processing and the price for the finished product.
The tax would therefore fall more heavily on highly processed foods, which tend to be higher in calories, than on lightly processed ones.
More study is necessary, however, before the costs and benefits of a well-designed program of obesity reduction can be responsibly assessed.
I agree with Becker’s analysis, but will suggest a few additional points.
One is that the potentially lethal threat that online publishing poses to newspapers comes from the fact that a newspaper is a bundled good; no one is interested in all the different parts (news, sports, opinion, fashion, theater, travel, book reviews, financial data, classified ads, etc.).
In addition, much of the contents of any newspaper is ephemeral; people want their news now; it can become obsolete in a matter of hours.
Online publication offers both unbundling and immediacy.
That is why handbooks and other collective works are likely to migrate to the Web as well, as Becker suggests; also “quickie” books dealing with current crises.
The printed book has some important technological advantages over the online book.
One is that it is easier to skim, and, a related point, easier to go back and forth in.
It is also slightly easier to underline words in a printed book and write notes in the margins.
And many people find it tiring to read consecutively a long work on an electronic reader, though that may just be a matter of lack of experience with the new medium.
And on the other hand it is much easier to search an online book for a word or group of words, and, of course an online book adds nothing to weight: the weight of the electronic reader is the same no matter how many or how long the books stored in it are.
In the case of many books—since each form of publication has both advantages and disadvantages—the reader will want to have the book in both print and digital form.
The most important question raised by the new medium is what impact it will have on the total number of books (whether ebooks or printed books) published.
The impact should be unequivocally positive, on balance and in the long run.
A book published entirely online (that is, a book that is not also printed) is cheaper to produce, as well as providing greater value for some readers for the reasons I’ve discussed.
The publisher is protected by copyright against unauthorized copying of the work, whatever the medium in which it’s published.
We should think of the electronic book as a novel method of distribution, as the department store once was; and any improvement in distribution should help the industry that provides the goods to be distributed.
Another analogy is to the invention of the paperback book; by reducing the cost of books, it increased the demand for them.
It enabled publishers (and through them, authors) to increase their revenues by practicing price discrimination—charging a higher price for the hardback edition of a book and a lower price for a paperback edition published later, much like the different ticket prices for first-run and subsequent-run motion pictures.
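The logic of this pricing strategy can be sketched with hypothetical numbers (the segment sizes and willingness-to-pay figures below are invented for illustration, not drawn from the text):

```python
# Two reader segments for a new book (all numbers are assumptions):
# eager readers will pay a high price at release; patient readers will not.
eager_size, eager_wtp = 10_000, 25.0      # buy the hardback immediately
patient_size, patient_wtp = 40_000, 8.0   # wait for a cheap paperback

# With a single uniform price, the publisher must choose one price for all:
revenue_high = eager_size * eager_wtp                      # only eager buy: $250,000
revenue_low = (eager_size + patient_size) * patient_wtp    # everyone buys: $400,000
best_uniform = max(revenue_high, revenue_low)

# Price discrimination: hardback at $25 now, paperback at $8 later.
discriminating = eager_size * eager_wtp + patient_size * patient_wtp  # $570,000

print(best_uniform, discriminating)
```

Under these assumed numbers the staggered-edition strategy raises revenue from $400,000 to $570,000, which is the sense in which the paperback (and, by analogy, the e-book) can benefit publishers and authors rather than hurt them.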
The reason that Rupert Murdoch and others have said that the Kindle (the first popular electronic reader, although electronic readers have been around for a number of years) will kill the book industry is that Amazon (which makes the Kindle) charges a very low price for many of the books that it sells for downloading to the Kindle.
It does this partly for promotional reasons and partly because it derives its revenue from a combination of the price of the Kindle and the price of the books; the Kindle’s customer will not pay as much for a book to download because he also has to pay for the Kindle.
But from a publisher’s standpoint, the price is the price he receives from the distributor; he wants to minimize the distributor’s spread, that is, the difference between what he charges the distributor and the price to the ultimate consumer, for the lower the latter price, the greater the demand for the book.
The publisher decides whether to license Amazon (or any other producer of an electronic reader) to publish his book, and can refuse if the price Amazon offers him will decrease his profits.
In short, it just is very hard to see how improvements in distribution hurt the maker of the distributed goods.
Probably what underlies the fear of the effect of the e-book on the book-publishing industry is a broader concern with the competitive impact of the Internet and the Web on the demand for books.
Books compete with other forms of intellectual property, the demand for which has grown with digitization: such forms as the numerous free online repositories of information (such as Wikipedia—which provides a substitute for countless reference books, including dictionaries and encyclopedias), but even video games, which compete for use of leisure time with books.
There are two important lessons that can be drawn from Becker’s discussion of gun control.
The first is that a problem that is not dealt with in its early stages may become insoluble.
It is not only the sheer infeasibility of removing 200 million guns from the American population, but also the emergence of a gun culture, that has ended hopes of disarming the population.
The more people who own guns, the more other people will want to own them as well for self-defense; and the further ownership spreads, the more normal it seems.
The ownership of guns has always been common in rural areas (the lower population density of the United States compared to Western Europe is an important reason why private ownership of guns is so much greater here), where there are hunting opportunities and police are spread thin.
But now it is common in the rough areas of cities as well.
Drug dealers cannot rely on police to enforce their deals and therefore have to arm themselves, and their law-abiding neighbors decide they had better be armed as well.
(The news media create an exaggerated fear of violent crime, and this also contributes to the demand for guns by law-abiding people.) If population density continues to grow and the drug trade is ever legalized, gun ownership might begin to fall.
Gun purchases soared in the economic crisis from which we are now (it seems) recovering.
Partly this may have been due to increased cash hoarding (the sale of safes also soared) and to an increase in property crimes, but it may have been due mainly to a generalized fear that increased the demand for symbols of security.
The second lesson is the unwisdom of the Supreme Court’s recent decisions that have created—on the basis of a tendentious interpretation of the drafting history of the Second Amendment and an intellectually untenable (as it seems to me) belief in “originalist” interpretations of the Constitution—a constitutional right to possess guns for personal self-defense.
The result is to impose a significant degree of nationwide uniformity on a problem that is not uniform throughout the nation.
The case for private gun ownership is much stronger in largely rural states, such as Arizona—states in which there is a deeply entrenched and historically understandable gun culture and a rationally greater lawful demand for private gun ownership than in the suburban areas of the densely populated midwestern, northeastern, and mid-Atlantic states—than it is in big cities with high crime rates—cities that have long had very strict gun laws many of which may now be ruled unconstitutional.
Though gun ownership cannot be forbidden any longer, it can (even under the new constitutional regime) be regulated, as Becker emphasizes.
Gun-registration laws aimed at denying gun ownership to lunatics and persons with a history of criminal activity, coupled with heavy punishment of dealers or customers who violate or evade the laws, should survive constitutional challenge.
Federal “felon in possession” laws already provide for heavy punishment of persons forbidden to own a gun because they have been convicted of a felony, if they are caught with a gun in their possession.
Loopholes in gun-registration laws, such as permitting the sale of guns at gun shows without requiring the screening of purchasers, can be closed.
And punishment can be enhanced, even more than at present, for persons who use a gun in committing a crime.
A reduction in the criminal use of guns would in turn reduce the demand by law-abiding persons, and as that demand fell so might the demand for guns by criminals, given stiff punishment costs.
A virtuous cycle might be initiated that would lead eventually to a significant overall decline in gun ownership.
A book published recently and entitled Overdiagnosed: Making People Sick in the Pursuit of Health, by three reputable physicians (H. Gilbert Welch, Lisa M. Schwartz, and Steve Woloshin), argues forcefully that the nation is spending too much money on preventive care.
This is doubtless regarded as heresy in some circles: the orthodox view is that prevention is the key to economizing on the expenses of health care: “an ounce of prevention is worth a pound of cure.” The recent health care reform act seeks to promote preventive care.
Preventive care does reduce health costs in some cases, but not in all, and maybe not in most.
The costs of prevention have to be weighed along with the benefits.
And private and social costs have to be distinguished.
Subsidy programs such as Medicare reduce the private costs of medical treatment to patients, but the social costs are not reduced; their incidence is merely shifted.
Generally, preventive care has two phases: screening and treatment.
The former might seem inexpensive, both in monetary cost and in risk to health, but it is not, for two reasons.
First, the number of people who do not have the condition screened for invariably greatly exceeds the number who have it, so the cumulative costs of screening are high.
Second, screening creates anxiety, both over the outcome and over what to do if the test for the disease in question is positive.
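The first reason, the base-rate arithmetic, can be sketched with hypothetical numbers (the prevalence, test-accuracy, and cost figures below are assumptions for illustration, not figures from the text):

```python
# Base-rate arithmetic of mass screening (all parameters are assumed).
population = 100_000
prevalence = 0.01        # assumed: 1% actually have the condition
sensitivity = 0.90       # assumed true-positive rate of the test
specificity = 0.90       # assumed true-negative rate of the test
cost_per_test = 50       # assumed dollars per screening test

sick = population * prevalence            # 1,000 people
healthy = population - sick               # 99,000 people
true_pos = sick * sensitivity             # 900 correctly flagged
false_pos = healthy * (1 - specificity)   # 9,900 healthy people flagged anyway

# Share of positive results that reflect real disease:
ppv = true_pos / (true_pos + false_pos)   # roughly 0.083

# Screening cost per true case found:
cost_per_case = population * cost_per_test / true_pos   # roughly $5,556

print(round(ppv, 3), round(cost_per_case))
```

Even with a fairly accurate test, most positives are false when the condition is rare, and the cost of finding each true case is many times the cost of a single test, which is the sense in which screening the many to find the few is expensive.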
An example is the blood test for prostate cancer.
It turns out that a huge percentage of men have prostate cancer, but that most of the cancers are indolent--slow-growing and unlikely ever to cause symptoms.
The treatments have serious side effects, so for many (especially for elderly) men diagnosed with prostate cancer it is uncertain what the best course of action is.
Another example of dubious preventive care is the treatment of mildly elevated blood pressure: blood pressure medicine has to be taken daily and of course must be paid for by someone, and has side effects though less serious ones than prostate treatments, while the benefits in reducing the risk of heart attacks or strokes are modest (unlike the case of highly elevated blood pressure).
There are many other examples in which the net benefits of screening for medical conditions followed by treatment if the results of the test for the disease are positive are slight or negative.
The tendency has been to move the goalposts: to screen for lesser and lesser abnormalities, even though the lesser the abnormality the lesser the expected disease cost to the patient and so the less likely the screening and follow-up treatment are to provide net benefits.
Moreover, mild abnormalities are far more common than severe ones, so that moving the goalposts greatly increases the number of persons who have to be screened.
When the threshold for excessive cholesterol was lowered from 240 to 200, the number of Americans with excessive cholesterol increased by almost 43 million, all of whom are now advised to take cholesterol-lowering drugs, even though the benefits for persons who are not at high risk of heart disease for other reasons are highly uncertain--yet many of these persons are taking the drugs along with persons who can anticipate a significant benefit.
The increased prevalence of screening and preventive treatment has increased the health awareness of Americans and by doing so has increased the innate anxiety that people feel about sickness and mortality.
Ordinarily we don’t question people’s consumption choices; and it might seem to follow that if people want to take, say, blood pressure medicine to prevent mild hypertension they should be assumed to be maximizing their utility and we should let them alone.
But there are reasons to think that screening and treatment of persons who flunk screening tests are excessive from the standpoint of overall social welfare—that aggregate utility would be increased by reallocating many of the resources now used for screening and preventive treatments to other activities.
We can identify these reasons by considering the full range of factors, beyond cost-benefit analysis, that support particular forms of screening and preventive treatment.
These factors are the incentives of medical researchers (many subsidized by government), health care providers (importantly including pharmaceutical manufacturers), medical malpractice lawyers, American cultural attitudes, our democratic political system, and patients who do not pay the full costs of their medical care.
Advances in medical research enable more abnormalities to be discovered sooner—the PSA test for possible prostate cancer is an example—and to be treated.
Physicians and other health care providers have an incentive to increase the demand for their services by creating new screening procedures and preventive treatments, although to the extent that preventive care does improve health (as much of it does), acute-care health providers face reduced demand for their services.
But apart from dentistry, it is hard to think of areas of health in which preventive care has reduced the overall demand for treatment.
Although preventive care sometimes involves surgery, as in the case of prostate cancer and other cancers that may be benign, usually it involves treatment with drugs, and thus is strongly promoted by the pharmaceutical industry, often by advertising directly to the consumer.
Fear of medical malpractice drives physicians to order tests for low-probability conditions, lest they be sued for failure to diagnose a treatable condition.
Distinctive features of American culture include a strong commitment to business models of economic activity, a high correlation between income and prestige, competitive drive, and a rejection of fatalism.
The medical profession, like the legal profession, has embraced a business as distinct from a professional model of service.
In a business model, success is measured by profit.
Physicians embrace opportunities for increasing their incomes by increasing the demand for their services.
Americans value longevity not only for the utility that additional years of life confer regardless of how long others live, but as a field of competition: prestige attaches to beating one’s contemporaries in the race to live as long as possible.
And this turns out to be for many people a very cheap competition because other people are paying for their medical treatments.
The subsidization of the old by the young in the Medicare program increases the demand for screening and preventive care by a politically prepotent voting bloc that has been able to shift most of its medical costs to others.
Legal restrictions on exclusions in health insurance policies, and the tax subsidy of employer-provided health benefits, create further gulfs between the costs of medical care to particular persons and what they pay for it.
So not only is there compelling evidence of what Welch and his coauthors call overdiagnosis; there are good reasons to believe the evidence because the incentive structure for screening and preventive care makes overdiagnosis a theoretical prediction as well as an empirical reality.
Dictatorships, as we are seeing in the Middle East today, and as we saw in Iran in 1979 and in the communist nations in 1989—not to mention France in 1789—have a way of imploding unexpectedly, the unexpectedness lying in the fact that no external event seems to have precipitated the collapse.
These events belong to chaos theory: if you rock a canoe, it will maintain an equilibrium until at some unpredictable point you rock it so hard that it capsizes.
So I will not be speaking of regimes that collapse because of a catastrophic military defeat, nor of regimes ended by civil war, nor of secessions, such as the American Revolution, but just of sudden collapses, unforeseen because there was no visible triggering event that might have been foreseen.
Over a long period of time, democratic and quasi-democratic nations change profoundly, but the change is gradual.
Dictatorial regimes change in fits and starts, so that most of the time they seem more stable than nonauthoritarian regimes.
They experience punctuated rather than incremental change.
There are several reasons.
The obvious one is lack of information.
A government that uses intimidation, surveillance, and control of media to quell dissent deprives itself of good information about the population’s concerns.
People keep their concerns to themselves out of fear.
Grievances are driven underground, to fester.
Not having a good handle on what people want, the government risks being blindsided by a sudden explosion of repressed anger.
Repression also fosters conspiracy; fearful of expressing themselves publicly, people learn to form secret cabals; they become experts at dissimulation.
Second, the leadership of an authoritarian regime has difficulty obtaining information even from its own officials and, more broadly, difficulty managing disagreement and absorbing and responding to criticism.
Without fixed terms of office and rules of succession, the position of leaders is insecure: they maintain their position by charisma or fear, by projecting an image of infallibility and omniscience, and these sources of power are undermined by criticism, which is often implicit in “bad news” conveyed to leaders by their subordinates.
Even without being critical, the subordinate who warns his leader about popular disaffection is implicitly claiming to have knowledge that the leader did not have.
Third, and again attributable to the absence of set rules for peaceful transition of leaders, authoritarian regimes tend to be conservative in the sense of reluctant to change even in response to known problems.
If you do not have a good handle on public opinion, it is very difficult to predict the consequences of change—change may convey weakness, create expectations that cannot be fulfilled, empower the advocates of change, and undermine belief in the infallibility and omniscience of the leadership.
Fourth, and again related to the absence of regular rules of appointment and succession, the leadership of authoritarian regimes tends to be old and sclerotic.
Retirement is dangerous.
The leader will have made enemies and when he relinquishes power he is defenseless against them.
By clinging to power he grows out of touch, and is ill equipped to respond decisively and effectively to a challenge.
Although there are exceptions (particularly in East Asia), authoritarian regimes tend to be bad for economic growth, and this is still another source of potential weakness.
No person can rule alone, or by fear alone; he has to reward his key officials, and so corruption tends to be common in such regimes.
Also, the military tends to be larger and more expensive than actually required for national defense, because the regime depends on force and therefore must cultivate the loyalty of the military.
It is not that a large, well-paid army is necessary to maintain internal order, but that if the army is not coddled it may overthrow the regime, or fail to come to its defense in crisis.
When an authoritarian regime suddenly collapses, this is seen by the world as an occasion for rejoicing—democracy has triumphed.
That is the response of most of the media to the current crises in Tunisia and Egypt.
And in fact a sudden collapse often is followed by a democratic interlude, as happened during the French, Russian, and Iranian revolutions.
But I emphasize “interlude”; there is nothing automatic about a democratic succession to a collapsed dictatorship.
Indeed it is likely that if, unlike the formerly communist nations of Central and Eastern Europe, a country has never had a democratic government for more than a brief period, the flowering of democracy in the wake of the collapse of the authoritarian regime will also be brief.
Admittedly there are numerous exceptions.
Russia is one, though it is less democratic today than it was immediately after the collapse of the communist regime.
Japan is a partial exception, because it did have a parliamentary government before World War II, though it was never really democratic.
India is a real exception, and there are others in Africa and Latin America.
As these examples show, nations are capable of transitioning from authoritarian to democratic societies.
There are many democratic nations today, but, apart from the ancient Greek city states, there were virtually none before the nineteenth century, and few before the twentieth.
Most of these, however, emerged from authoritarian government by a process of evolution, rather than suddenly; there were democratic roots in the American colonies and Great Britain, for example, long before democracy became the regime of either polity.
In a country without democratic or liberal traditions, the party to win the first election and become the governing party will think it the most natural thing in the world to endeavor to retain power, by whatever means available, once power is achieved.
And the party to win the first election might be whatever conspiratorial faction was best organized; it might have no commitment to democracy.
So the first election might be the last.
The old regime’s techniques and institutions of repression would be at hand to facilitate the takeover by the winning party.
That is why the media’s celebration of the emergence of democracy in the Middle East today is premature, and why the Obama Administration is beginning to back away from its public celebration of what is happening on the streets of Cairo and other Egyptian cities.
I realize from the comments that I should have said more about the specific issue of term limits for Supreme Court Justices.
The case for term limits for the lower federal judges (circuit and district judges) is weak.
As I said in my original posting, the institution of "senior status" largely takes care of the senility problem.
(Also, contrary to one of the commenters, incapacity is a recognized basis for removing a federal official by the impeachment process; the first federal judge impeached and removed from office was a drunkard and a lunatic, but had not engaged in wrongful conduct, such as taking a bribe.) There is shirking, chiefly in the form of excessive delegation of judicial functions to law clerks and other staff and excessive indulgence in leisure activities, such as travel; and there is the more serious form of misbehavior that consists of willful (that is, lawless) decision-making.
However, these problems are not serious enough to warrant a fundamental change that would reduce legal certainty by increasing judicial turnover and would make a federal judicial career less attractive and so reduce the field of selection, though the latter effect could be offset by salary increases--but that, of course, would be a cost also.
In addition, as I said, candidates for federal judgeships are carefully screened, and at an age when most people either have, or have not, established habits of work that will persist even if sticks and carrots are removed.
The issue of term limits for Supreme Court Justices is more challenging, first because the Court is, to a great extent, a political court, that is, a court the decisions of which are guided by the policy preferences of the judges, and second because Justices have less incentive to retire than the lower-court judges.
Let me start with the political issue.
The contrast with the lower federal courts, especially the courts of appeals, should not be overdrawn.
Plenty of cases that never get to the Supreme Court are political in the sense just indicated, but they are fewer, and they are the less politically fraught cases--otherwise the Court would take them up.
The best index to the political component of a court is the amount and character of controversy regarding appointments: controversy is greatest for Supreme Court Justices, less for court of appeals judges, and least for district (trial-level) judges; and the focus of controversy is most likely to be ideology rather than politics at the Supreme Court level, less likely at the court of appeals level, and least likely at the district court level.
But I don't think the political argument for imposing term limits at the Supreme Court level is persuasive, despite the anomaly in a democratic system of having a corps of powerful political officials who serve for life, as in a monarchy.
A lesser objection is that by increasing turnover of Supreme Court Justices, term limits (depending of course on their length) would operate as a tremendous political distraction, since it would be known with certainty when a vacancy would occur.
So the political struggle over a successor would start sooner, and there would be more such struggles because there would be more vacancies.
More important, when we contrast democracy with dictatorship we aren't just comparing term lengths; we are also comparing incentives.
Officials who are elected for short, fixed terms and can be reelected have a strong incentive to conform their behavior to the preferences of the electorate, interest groups, public opinion, and other more or less democratic sources of influence on policy.
An official appointed (not elected) for a long fixed term, and ineligible for reappointment, is a tyrant, in the sense of being (largely) insulated from the normal political constraints on official behavior.
We may want that insulation; we may want a court to be an independent power base; but the premise of the movement for judicial term limits is that courts are too independent, too powerful.
The imposition of term limits would not reduce that power.
It would merely increase the number of power holders.
And as they would be holding power successively rather than simultaneously, there would be no competitive check on their exercise of power.
So I don't see how term limits would actually limit judicial power.
But there is still the retirement question.
People can get stale from serving in the same job for a great many years and most elderly people, before diagnosable senility sets in, experience a diminution in mental acuity and, especially, adaptability to novelty.
The combination of very long service in the same job with very great age is likely to produce a decline in performance.
And while long experience in a job can make one more efficient at it, beyond a point additional experience adds nothing.
This is certainly a problem for the Supreme Court, but perhaps not a terribly serious one.
There are four reasons.
First, if one's performance declines from a very high level, it may remain quite adequate; Holmes, Brandeis, Learned Hand, and other illustrious oldsters were not as sharp in their eighties as they had been, but they were sharp enough.
Second, the Supreme Court's workload is very light, with a long summer recess.
Third, the Justices have terrific staffs.
And fourth, the most important skills in law are verbal and rhetorical, and they tend to decline with age less rapidly than logical, theoretical, and mathematical skills.
On all these counts, Supreme Court judging is the quintessential geriatric profession.
Congress is on the verge of passing and the President of signing a major overhaul of the Bankruptcy Code.
(See Summary and Changes.)
The new bankruptcy law, popularly termed the Bankruptcy Reform Act, has engendered passionate debate inside and outside Congress.
The criticisms of the Act bespeak a failure to analyze it in economic terms.
The Act is complex, but the thrust is to make it more difficult for individuals to declare bankruptcy under Chapter 7 of the Bankruptcy Code.
Under Chapter 7, the bankrupt's assets, minus exempt assets such as a home and work tools, are sold to repay creditors.
When the bankrupt is an individual rather than a corporation, his assets often are too limited to enable the creditors to be paid in full what they are owed; often the creditors receive just a few cents on the dollar.
However much or little they recover, at the conclusion of the bankruptcy proceeding the bankrupt (save in exceptional cases, as where the debt consists of a fine, or of damages owed because of a fraud committed by the debtor) receives a discharge, meaning that the creditors cannot go against him for the unpaid balance of his debts.
His debts are wiped out even if he has a high enough income to be able to repay them in full over a period of years.
An alternative procedure that individuals (and their creditors) can avail themselves of is Chapter 13 bankruptcy: instead of surrendering his nonexempt assets, the debtor agrees to make periodic payments to his creditors for as long as five years after the bankruptcy.
Although Chapter 13 is attractive to some individuals, especially those who have substantial assets, Chapter 7 is more attractive to individuals who have few nonexempt assets but some income, and there are many such individuals.
These individuals can run up huge credit card debts for purchases that do not create durable assets (such as food, travel, and entertainment), declare bankruptcy, wipe out their debts, and then start over.
The Bankruptcy Reform Act will force many debtors who have annual incomes in excess of the median income in their state of residence to go the Chapter 13 route and thus make periodic payments out of their income for a period of years.
The Act also increases the length of time that a bankrupt must wait, after receiving his discharge from his existing debts, before he can declare bankruptcy again and wipe out a new round of debts.
The Act contains still other provisions also intended to make it more difficult for individuals to wipe out their debts by declaring bankruptcy under Chapter 7.
Critics have derided the Act as mean-spirited and hard on the poor, but they overlook the most important effect that the bill is likely to have, and that is to reduce interest rates.
One component of an interest rate is compensation for the risk of default.
The higher that risk, the higher the interest rate.
This assumes of course that a creditor cannot, in the event of default, collect the debt owed him quickly, fully, and with little expense.
If bankruptcy were very cheap and the typical individual bankrupt had assets sufficient to cover his debts, or had no right to discharge his debts and could repay them, with interest, out of future income, default would not impose a substantial cost on creditors and so the risk of default would not have a substantial effect on interest rates.
But bankruptcy is costly and most individual bankrupts do not have assets sufficient to cover their debts, yet under existing law they have a right to a discharge of their debts no matter how far short of repaying their creditors their assets fall.
So default is costly and this is bound to be reflected in interest rates.
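The default-risk component of the interest rate can be sketched with a zero-profit lending condition. This is a stylized model, not anything in the Act: the safe rate and default probabilities below are invented for illustration, and recovery in default is assumed to be zero.

```python
# Break-even lending rate for a competitive creditor when a fraction p of
# loans defaults with nothing recovered: (1 - p) * (1 + r) = 1 + r_safe.
# The safe rate and the default probabilities are illustrative assumptions.
def break_even_rate(p_default: float, r_safe: float = 0.03) -> float:
    return (1 + r_safe) / (1 - p_default) - 1

print(break_even_rate(0.05))  # modest default risk: about 8.4 percent
print(break_even_rate(0.15))  # high default risk: about 21 percent
```

Anything that lowers the expected loss from default--such as channeling debtors into Chapter 13 repayment plans--lowers p (or raises recovery) and so, in a competitive credit market, lowers the break-even rate.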
Note the irony of the critics' complaint that credit-card interest rates are exorbitant; the so-called exorbitance is, to an extent anyway, an artifact of a bankruptcy law that by making bankruptcy inviting to credit-card debtors increases the risk of default and therefore the interest rate.
Notice moreover the vicious cycle created by the present system.
The greater the risk of default, the higher interest rates are; but the higher interest rates are, the greater is the risk of default, since interest rates represent a fixed cost to the debtor: if he loses his job and his income plummets, he still owes whatever he borrowed when he was flush.
Of course, an alternative possibility is that the high rates will discourage borrowing; this is a paternalistic goal of some opponents of the Act.
But the high rate of personal bankruptcies that the critics stress is evidence that the vicious cycle dominates the effect of high interest rates in discouraging borrowing.
I conclude that the new Act, by increasing the rights of creditors in bankruptcy (for remember that Chapter 13 enables a creditor to obtain repayment out of the debtor's post-bankruptcy income, not just out of what may be his very limited nonexempt assets at the time of bankruptcy, as under Chapter 7), should reduce interest rates and thus make borrowers better off.
The most reckless borrowers--those most prone to file repeated Chapter 7 bankruptcies--will be made worse off.
But there will be fewer of these, precisely because they will be worse off than under the existing system.
If bankruptcy is more costly, there will be less of it.
Critics say that more than half of all individual bankrupts are not reckless borrowers but rather are unfortunate people who have been hit by unexpected medical expenses.
But this ignores the fact that whether one is forced into bankruptcy by a medical expense (or by an interruption of employment as a result of a medical problem) depends on one's other borrowing.
If one is already borrowed to the hilt, an unexpected medical expense may indeed force one over the edge.
But knowing that medical expenses are a risk in our society, prudent people avoid loading themselves to the hilt with nonmedical debt.
At a more fundamental level, one might ask why voluntary bankruptcy is ever permitted.
It is easy to understand involuntary bankruptcy--that is, bankruptcy forced upon a debtor by his creditors.
Such bankruptcy overcomes the free-rider problem that would exist if multiple creditors were allowed to race each other to be first to seize the assets of a defaulting debtor, when the creditors as a whole might be better off with a more orderly liquidation.
But why should a debtor ever be permitted to write off his debts? One answer is that, assuming people are risk averse, voluntary bankruptcy operates as a kind of social insurance.
One cannot buy private insurance against going broke (for then people would indeed borrow recklessly), but even a prudent borrower could find himself broke as a result of an unforeseeable streak of bad luck.
However, the Bankruptcy Reform Act does not eliminate voluntary bankruptcy.
The social-insurance role is fulfilled by Chapter 13 as well as by Chapter 7, since after five years of partial payments the Chapter 13 bankrupt is entitled to a full discharge of the unpaid balance of his debts.
Behind the Bankruptcy Reform Act, as behind the President's proposal for social security reform, is an ideology of giving nonwealthy people greater responsibility for their own economic welfare, which entails subjecting them to additional financial risk.
Under the present system, the prudent and the imprudent consumer pay the same high interest rates, assuming creditors can't readily determine which consumers are prudent and which are imprudent.
By lowering interest rates on credit-card and other consumer debt while at the same time discouraging default, the Bankruptcy Reform Act will encourage consumers to exercise greater care in borrowing--yet at the same time, because interest rates will be lower, the Act will enable prudent consumers (who do not face a high risk of bankruptcy) to borrow more and by doing so will increase their consumption options.
The Act will not redistribute wealth from the poor to the rich, but from the imprudent borrower to the prudent borrower.
I am in broad agreement with Becker.
But I am somewhat hesitant to describe the war against drugs as having been lost. By that token, so has the war against bank robbery, or any other crime, been lost, because there is a positive rate of these crimes as well.
As Becker explains, law enforcement activity raises the cost and hence price of illegal drugs and as a result of the price increase reduces their consumption.
If the object of the war on drugs is to reduce rather than completely eliminate the consumption of illegal drugs, then the war has been partially won.
Which is not to say that the partial victory has been worth the considerable costs.
If the resources used to wage the war were reallocated to other social projects, such as reducing violent crime, there would probably be a net social gain.
For one thing, it is particularly costly to enforce the law against a victimless crime, more precisely a crime that consists of a transaction between a willing seller and a willing buyer.
The low probability of apprehending such criminals has to be offset by very stiff sentences in order to maintain deterrence.
Yet if potential criminals have high discount rates, an increase in sentence length may have little incremental deterrent effect because the increase is tacked on at the end of the sentence.
The present disutility of an increase in sentence length from 20 to 30 years may, given discounting, be trivial.
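The effect of discounting on the marginal deterrent value of longer sentences can be made concrete with a back-of-the-envelope calculation. The 10 percent annual discount rate below is an assumed figure for a high-discounting offender, not one taken from the text.

```python
# Present disutility of an N-year prison sentence when each year served
# counts as one unit of disutility, discounted at a constant annual rate.
# The 10 percent rate is an illustrative assumption.
def present_disutility(years: int, rate: float) -> float:
    return sum(1 / (1 + rate) ** t for t in range(years))

pv20 = present_disutility(20, 0.10)
pv30 = present_disutility(30, 0.10)
# The extra decade is 50 percent more time served, but at this discount
# rate it raises present disutility by only about 11 percent.
print(pv20, pv30, (pv30 - pv20) / pv20)
```

The higher the offender's discount rate, the smaller the increment, which is why tacking years onto the end of an already long sentence buys little additional deterrence.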
Still another consideration is that if the principal effect of illegal drugs is to impair the health and productivity of the consumer of the drugs, then it is just another species of self-destructive behavior and we normally allow people to engage in such behavior if they want; it is an aspect of liberty.
Drug crimes are often thought to be inherently violent because of their association with guns, gangs, turf wars, and fatal overdoses.
Those characteristics are, however, merely artifacts of the fact that the sale of the drugs in question has been criminalized, so that the suppliers cannot use the usual, peaceable means of enforcing property rights and contracts and are not regulated in the interest of consumer safety, as legal drugs are.
To determine the full social effect of the war on drugs, we would have to know precisely how drug users respond to higher prices of drugs, since, from a consumer standpoint, higher prices are what the war on drugs achieves.
One possibility is that the user spends the same amount of money on drugs, but, because the price is higher, consumes less.
Another possibility is that he reduces his consumption so much that he has money left over, and he uses that to buy a harmless product.
A third possibility, however, is that he reduces his consumption enough to have money left over but he uses it to buy a legal mind-altering drug, such as liquor.
This seems in fact the likeliest response of someone who desires a certain level of mood alteration and faces a higher price for his drug of choice; he switches to a substitute that now costs him less because it is not burdened by costs imposed by law enforcement.
If that is the principal consequence of the war on drugs, it is hard to see what is gained even if one embraces the paternalistic rationale of the war.
The political source of the war on drugs is mysterious if, as I am inclined to believe, there is a legal substitute for every one of the illegal drugs: selective serotonin reuptake inhibitors (e.g., Prozac, Paxil, Zoloft) and other antidepressive drugs for cocaine, liquor and tranquillizers for heroin, cigarettes for marijuana, caffeine and steroids for uppers. Obviously these are not perfect substitutes; and some of the illegal drugs may be more potent or addictive or physically or psychologically injurious than the legal ones.
But it is apparent that our society has no general policy against the consumption of mind-altering substances, and there seems to be a certain arbitrariness in the choice of the subset to prohibit.
If these drugs were regulated instead of being prohibited, their content could be made less potent and addictive and consumers could be warned more systematically about their dangers, as they are about the dangers of cigarettes and prescription drugs.
As a judge sworn to enforce the law, I will continue as I always have to adjudicate drug cases without any hesitations based on my reservations about the wisdom of the war on drugs.
That is a legislative issue.
Oddly, one of the strongest cases for prohibiting drugs is the use of steroids by athletes.
The reason is the arms-race character of such use, or in economic terms the existence of an externality.
Ordinarily if a person uses a drug that injures his health, he bears the full costs, or at least most of the costs, of the injury.
But if an athlete uses steroids to increase his competitive performance, he imposes a cost on his competitors, which in turn may induce them to follow suit and use steroids themselves, provided the expected costs, including health costs, are lower than the expected benefits of being able to compete more effectively.
There is no offsetting social benefit from an across-the-board increase in athletes' strength.
Football games are no more exciting when linemen weigh 500 pounds than when they weigh 200 pounds; and baseball would be totally unmanageable if every player could hit every other pitch 1000 feet.
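The arms-race logic is a prisoner's dilemma, which can be laid out with a toy payoff function. The payoff numbers here are invented purely for illustration; nothing in the text supplies them.

```python
# Two athletes decide independently whether to use steroids. Using confers
# a relative edge over a clean rival but carries an expected health cost.
# EDGE and HEALTH_COST are hypothetical units chosen for illustration.
EDGE = 2
HEALTH_COST = 1

def payoff(me_use: bool, rival_use: bool) -> int:
    relative_edge = EDGE * (int(me_use) - int(rival_use))
    return relative_edge - (HEALTH_COST if me_use else 0)

# Using dominates: it is the better reply whether or not the rival uses...
for rival in (False, True):
    assert payoff(True, rival) > payoff(False, rival)
# ...yet mutual use leaves both worse off than mutual abstention.
assert payoff(True, True) < payoff(False, False)
```

This is the externality in miniature: each athlete's dominant strategy is to use, but the equilibrium in which all use delivers the same relative standings at a health cost to everyone, which is why a credible prohibition can make the athletes themselves better off.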
The fact that the price of illegal drugs is not only low but still falling is often treated as compelling evidence that the war on drugs has failed.
That is true if the war metaphor is taken literally.
But if the war is redescribed realistically as a campaign to reduce the consumption of illegal drugs, it could be thought at least a partial success even if the price of illegal drugs is extremely low.
The reason lies in the distinction that economists draw between the full price of a good and its nominal price.
The nominal price is the dollar amount charged by the seller; the full price includes any additional costs borne by the buyer, such as search costs (the costs involved in finding and negotiating with the seller--in other words, shopping costs) and any health risks associated with the consumption of the good.
The war on drugs has had a significant effect on these additional costs.
As a result of the drugs' illegality, it takes some effort to find a seller, there is a risk of arrest and prosecution, and there is a risk of an accidental overdose resulting from lack of quality control in the manufacture of the product.
(There is also a stigma to using illegal drugs, but this might remain if the drugs were legalized; heavy drinking, though legal, is stigmatized.) These costs would be eliminated if drugs were legal.
It might seem that with the drugs worth more to consumers, price would rise, but this is unlikely; price would be constrained to cost by competition, and the additional benefits of the drugs--that is, the benefits generated by removing the costs resulting from criminalization--would be realized by consumers as consumer surplus (the difference between what a consumer would pay for a good and the price of the good).
With the good more valuable to consumers but the nominal price no higher, consumption would increase.
This point is potentially very important empirically, because the effect of criminalizing drugs on the full price of the drugs may be much greater than the effect on the nominal price.
Suppose criminalization raises the nominal price of a dose of cocaine from $1 to $1.10, a 10 percent increase; then using Becker's elasticity estimate, legalizing cocaine would result in a 5 percent increase in demand.
But now suppose that the war on drugs has increased the full price of cocaine from $1 a dose to $2.10 a dose (the 10-cent increase in nominal price plus a $1 increase in other costs of consumption); then legalizing cocaine could be expected to have a much more dramatic effect on consumption.
However, as Becker points out, this effect could be offset by a tax (in the example, a $1.10 tax), though some incentive to smuggling would be created by so stiff a tax, as in the case of cigarettes.
The important thing is that because of the difference between full and nominal price, the tax might have to be very stiff.
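The cocaine arithmetic can be laid out explicitly. This sketch assumes a constant price elasticity of demand of 0.5, the figure implied by the text's "10 percent price increase, 5 percent demand response"; the dollar amounts are the ones from the example above.

```python
# Percentage change in consumption when price changes, under a constant
# price elasticity of demand (demand proportional to price^-elasticity).
# The 0.5 elasticity is the figure implied by the text's example.
def consumption_change(price_before: float, price_after: float,
                       elasticity: float = 0.5) -> float:
    return (price_after / price_before) ** -elasticity - 1

# Legalization lowers the nominal price from $1.10 back to $1.00:
print(consumption_change(1.10, 1.00))  # roughly a 5 percent increase
# But if it lowers the full price from $2.10 to $1.00:
print(consumption_change(2.10, 1.00))  # roughly a 45 percent increase
```

The gap between the two numbers is the point: a tax calibrated only to the nominal-price wedge would leave most of the full-price reduction, and hence most of the consumption increase, in place.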
Regarding performance-enhancing drugs, such as steroids, one comment points out that sports fans appreciate better performance, and notes that professional football is more popular than college football (alumni loyalties to one side).
But there is a difference between skill and strength; if the principal effect of steroids is to increase strength rather than skill, it is not clear that entertainment value is enhanced.
But suppose it is.
Then what must be considered is the tradeoff between the increased income that steroid-consuming athletes can expect to obtain and the risks to their health.
The tradeoff is complicated because some athletes will prefer the higher income and others will prefer to have better health and, being thus at a competitive disadvantage, will drop out of the sport.
It is unclear whether there will be a net increase in performance, since some skilled athletes will be lost to the sport, though those that remain will be better performers.
Let me make clear that I have no ethical objection to performance-enhancing drugs.
Suppose there's a drug that adds 10 IQ points to everyone who takes it, and it has no adverse health consequences.
Once some people start taking the drug, this will put pressure on others to follow suit.
But I don't see any difference between this effect and that resulting from an effort by a young business person to gain a competitive edge by getting an MBA, which will place pressure on his competitors to do likewise.
That kind of competition improves economic welfare.
A number of good comments, as usual.
I will respond very briefly.
One comment points out that since only 5 percent of incoming containers are inspected, the real danger of lethal cargo is created in the foreign ports (operated by foreign companies) in which the containers are loaded into ships bound for the United States.
However, U.S.
port security can be thought of as a second line of defense should dangerous cargo not be detected in the originating port.
A second, very pessimistic comment considers port security basically hopeless--if terrorists obtain weapons of mass destruction, they will find a way to slip them into the United States.
That is certainly true with respect to  bioweaponry: smallpox virus sprayed in any international airport would create an epidemic in the United States.
But probably the greater danger is a nuclear or radioactive ("dirty") bomb, which would probably come in by ship.
I would like to know how much it would cost to inspect 100 percent of the cargoes that enter the United States.
I don't agree, by the way, that it is "protectionist" to trade off foreign investment benefits against security costs.
I don't think it would be protectionist to forbid Iran to buy Boeing or provide janitorial services at the Pentagon.
Finally, I do think that an excellent point made in one of the comments is that had the deal gone through, Dubai would have had a real stake in enhancing port security, since if a terrorist attack occurred at a port operated by Dubai Ports World, it would be a disaster for Dubai.
In retrospect, the Administration could have made a better case for the deal.
But I suspect that the political opposition would still have precluded acceptance of the deal.
One of the major questions that I asked in my book   Catastrophe: Risk and Response   (Oxford University Press, 2004) is what can be salvaged of cost-benefit analysis in situations of enormous uncertainty.
I think a lot, and refer the reader of this blog to chapter 3 of my book for explanation.
The war in Iraq (not discussed in my book) provides a test case for this proposition.
Apparently the Administration did not conduct a cost-benefit analysis before deciding for war.
Maybe it thought the benefits so obviously great that no reasonable estimate of cost would exceed them.
I believe that the Administration's only public estimate was that the war would cost no more than $60 billion and that some of this expense would be defrayed (as in the 1991 war with Iraq) by other countries.
The estimate seems to have assumed that the probability of a short, cheap (i.e., $60 billion maximum), victorious war was 1.
A responsible cost-benefit analysis would have costed alternative scenarios (such as short-victorious war, long-victorious war, long-losing war, and long-breakeven war), attached a probability or, more plausibly, a range of probabilities to each, and summed the expected costs generated by multiplying each cost estimate by its associated probability or range of probabilities.
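The scenario-weighting procedure just described can be sketched in a few lines. The scenario costs (in billions of dollars) and probabilities below are purely hypothetical placeholders, not estimates from any actual analysis:

```python
# A sketch of the scenario-weighted expected-cost calculation described
# above. The costs (in $ billions) and probabilities are hypothetical
# placeholders, not estimates from any actual analysis.

scenarios = {
    "short, victorious war": (60,   0.50),
    "long, victorious war":  (400,  0.30),
    "long, break-even war":  (700,  0.15),
    "long, losing war":      (1000, 0.05),
}

# Expected cost = sum over scenarios of (cost of scenario x its probability)
expected_cost = sum(cost * prob for cost, prob in scenarios.values())
print(f"expected cost: ${expected_cost:.0f} billion")  # expected cost: $305 billion
```

Even with these made-up numbers the lesson is visible: assigning any substantial probability to the long-war scenarios pushes the expected cost far above the short-war figure that the Administration treated as certain.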
Benefits to be valued would include (1) elimination of Iraq's weapons of mass destruction, (2) a demonstration of U.S.
military prowess that would intimidate hostile nations such as Iran and North Korea, (3) cost savings from eliminating the containment regime (the no-fly zones and sanctions enforcement designed to box in Saddam Hussein), and (4) improvement in our military capabilities as a result of wartime experience.
(1), (3), and (4) seem susceptible of quantification, though (1) would have been overestimated by virtually everyone because of the widespread and highly plausible, but erroneous, belief that Iraq had an active WMD program.
(2) could probably be ignored on the ground that it was likely to be offset by adverse reactions to our embracing a doctrine of preventive war.
I would have given no weight to the Wolfowitz project of promoting democracy in the Arab region, as it is completely uncertain whether democracy in that region is in the interests of the United States.
We have certainly not been pleased with the result of the democratic election in Palestine that has brought Hamas to power.
We would not like to see the Muslim Brotherhood take power in Egypt, though it may be the most popular political group there.
We were distinctly displeased with the result of the Iranian presidential election.
I would also ignore the effect of the Iraq war on our struggle against international terrorism.
I imagine the effect is negative, but there is too much uncertainty to try to quantify it.
In a paper first published in March 2003, very shortly before the war began, the economists Steven Davis, Kevin Murphy, and Robert Topel conducted a limited cost-benefit analysis.
It was basically just a comparison between the cost of going to war and the cost of continuing the containment policy.
They estimated the former as $125 billion maximum and the latter as between $380 billion and $630 billion.
The gravest weakness of their analysis was the failure to consider war alternatives to the short, cheap, victorious war that the Administration assumed.
They recently updated their paper and raised their estimate of the cost of the war to $323 billion, while allowing (no doubt chastened by their original underestimate) for the possibility that it might go higher.
This seems too low since the budgetary cost of the war is already $250 billion and increasing at the rate of $6 billion a month.
The costs can of course be capped at any time by U.S.
withdrawal from Iraq, but then the benefits of the war would have to be written down to zero except for the important and curiously ignored benefit that consists of having one's armed forces engaged in a recent war.
The lessons of war cannot be duplicated by peacetime training, planning, and analysis.
Linda Bilmes and Joseph Stiglitz in a recent paper estimate the cost of the Iraq war as being between $1 trillion and $1.2 trillion.
As Becker points out, the estimate is based in part on entirely speculative estimates concerning the impact of the war on the price of oil.
My own view, moreover, is that higher oil prices are a very good thing from the standpoint of combating global warming, though I would prefer to see them brought about by high taxes on fossil fuels, which would have the additional benefit of reducing the wealth of oil-producing nations.
The Bilmes and Stiglitz paper usefully emphasizes, however, the costs resulting from the unexpectedly long deployments of our troops.
Apparently, as they point out, these deployments were not anticipated and thus were not impounded in military salaries and benefits, and as a result the nation is having to incur increased recruitment and other personnel costs in order to maintain the armed forces at the desired level.
With the dubious (as Becker notes) cost items subtracted from the Bilmes-Stiglitz estimate, the total is still a sizable $840 billion, which as Becker points out approaches the high end of the Davis-Murphy-Topel current estimate.
I have two disagreements with Becker.
First, I do not think that a comparison of U.S.
military deaths in Iraq and Vietnam is meaningful.
Partly because of increased media coverage, there is much greater sensitivity to casualties today than there was in the Vietnam era (or think back to the Civil War--twice as many deaths as in World War II, in a population less than one-fourth as large).
Apparently the Administration has decided that it is imperative to reduce the number of U.S.
military deaths in Iraq, even though the total for 2005 was only 846, compared to 14,000 in 1968, the critical year of the Vietnam war.
Second, I would not count the welfare of Iraqis in a cost-benefit analysis of U.S.
warmaking.
I do not think most Americans want to sacrifice American lives and resources for the sake of foreigners.
There is some American altruism toward Iraqis, and to that extent increasing the welfare of Iraqis is a benefit to Americans, but, in my view, only to that extent.
And I think it is quite slight.
All this said, I do not think a decision to go to war should be based on cost-benefit analysis.
It would terrify the world if powerful nations conducted cost-benefit analyses of whether to go to war.
There are 192 nations besides the United States; should we ask the Defense Department to advise us which ones we should invade because the expected benefits would exceed the expected costs? Might a conquest of Canada produce net benefits for the United States? Rather, our policy should be to wage only defensive wars, though that would include aiding allies that have been attacked, which was a reasonable basis for our entry into the Vietnam war, though the results were deeply disappointing.
I also do not think a nation threatened with attack should base a decision whether to defend or surrender on cost-benefit analysis.
Rather, it should commit itself to fight regardless, as such a commitment will in most instances greatly increase the expected cost of the attack.
That is the economic logic of revenge and the basis of our policy of massive retaliation during the Cold War.
I said that the Administration did not conduct a cost-benefit analysis of the war in Iraq, and I have also said that I do not think a decision to go to war should be based on such an analysis.
But in the case of a war that though in a broad sense defensive is also optional because there is no immediate threat of attack by the enemy, cost-benefit analysis has an important role to play.
After 9/11, the danger to be anticipated from Saddam Hussein's possessing weapons of mass destruction, though uncertain, had to be reckoned greater than before.
And by virtue of the no-fly zones and the sanctions, the United States was already in a quasi-war with Iraq.
Against this background, the decision of the Administration to obtain a United Nations resolution demanding that Iraq re-admit the inspectors whom it had ousted in 1998 was reasonable and had the support of most nations.
Enforcing the demand required the United States to station large forces in Kuwait and elsewhere in attack range of Iraq.
In March 2003 the United States had the choice of permitting Saddam's cat-and-mouse game with the inspectors to continue, or invading.
That was the point at which a careful cost-benefit analysis might have indicated the desirability of holding off on invading for a month or two, although holding off would have given Saddam more time to prepare, and having to fight in the hot months would have impeded the invasion to a degree.
In addition, once the decision for war was taken, cost-benefit analysis of alternative scenarios--in particular of the possibility of a long war that we would lose or draw--might have indicated net benefits from committing more troops to the invasion and its immediate aftermath in order to prevent the rise of an insurgency.
There were as usual a number of interesting comments.
I respond to just a few.
First, on the numbers front, Steven Davis notes that I was wrong to say that the Davis, Murphy, and Topel updated study estimates the cost of the war at $323 billion; their updated estimate is $480-$630 billion.
Leigh, Wolfers, and Zitzewitz remind me that they made a prewar estimate of $1.1 trillion.
Second, I was too cryptic in saying that 9/11 increased the risk to be anticipated from Saddam Hussein's possessing weapons of mass destruction.
The thinking behind the statement was partly that 9/11 demonstrated a degree of danger to the United States from the Arab world that had not been fully understood, and partly that Saddam Hussein might feel "one-upped" by the demonstration of al Qaeda's ability to hit the United States hard, something Saddam had never succeeded in doing.
He might have been spurred by the example to  more aggressive action or even to cooperating with al Qaeda.
Third, I never suggested that the United States feared a direct attack by Saddam Hussein on the continental United States.
The danger to the United States would be that Saddam Hussein's possession of atom bombs or other weapons of mass destruction would give him a freer hand in the Middle East, where of course the United States has significant economic and other interests.
A re-invasion of Kuwait by Iraq would not have been out of the question had Saddam been allowed to obtain nuclear weapons and the missiles to deliver them long distances.
There is the further question of what would happen to Iraq after Saddam Hussein died or became incapacitated or was assassinated or otherwise overthrown.
If Iraq at that point had weapons of mass destruction, they might well fall into terrorist hands.
Last August, Becker and I blogged about the effort of a Chinese oil company to buy the American oil company Unocal.
That effort failed because of fears that China would use control of the company to the detriment of the United States.
Becker and I thought those fears chimerical.
In effect, by purchasing Unocal, the Chinese would have been giving us a hostage.
Moreover, if as feared (groundlessly) they ordered Unocal to sell all its oil to China, the only consequence would be that whatever supplier of oil to China Unocal would be replacing would now have unsold oil to sell to the United States.
Our oil supply, and oil prices, would not be affected.
Defenders of the thwarted transaction by which Dubai Ports World, a company owned by the government of Dubai, one of the United Arab Emirates, would (through its purchase of the British port operating company Peninsula & Oriental Steam Navigation Company (P&O)) have obtained control over loading and unloading ships at a half-dozen major U.S.
ports argued that the opposition to the transaction was as blatant and misguided a display of xenophobia as the opposition to China's acquiring Unocal.
But the two cases are not symmetrical.
The opposition to the Dubai deal had nothing to do with fears that Dubai Ports World would deprive the United States of access to an essential raw material or otherwise harm the United States economically.
Nor, I think, had the opposition much to do with fears of foreign investment in general, since the transaction was between two foreign companies (P&O and DP World).
Protectionism is not a compelling explanation for opposition to the sale of assets by one foreign company to another even if some of the assets are located in the United States.
Toyota has factories in the United States, but if BMW bought Toyota, who would care? The defeat of the DP World transaction was the result of a groundswell of American popular opinion (though egged on by politicians and the media) that is more plausibly interpreted as anti-Muslim and anti-terrorist than as protectionist.
Dubai is an Arab nation with pre-9/11 links to al Qaeda, and U.S.
port security is notoriously lax.
No serious person thinks that Dubai would actually connive in a terrorist attack on the United States.
The fear is rather that an Arab company is more easily penetrated by Islamic terrorists than a non-Arab one.
The fear may be exaggerated, but many of the arguments made in defense of the Dubai deal seem dubious.
For example, it was argued that since British, Singapore, Chinese, and other foreign companies already control some port operations in the United States (P&O being one), the cat is out of the bag.
But the companies of non-Islamic nations are no more likely to be penetrated by terrorists than a U.S.
company is.
It is not, as some defenders argue, "racist" for Americans to differentiate Islamic nations and peoples from other nations and peoples with regard to security precautions.
It is merely realistic.
(Many of those defenders in fact support racial and ethnic profiling as a rational police and counterterrorism practice.) Nor is it a complete answer to the opposition to point out that the employees at the ports that Dubai Ports World would have operated are Americans and would be unlikely to be replaced by Arabs or other foreigners, or that the security of U.S.
ports is the responsibility of the Coast Guard and of U.S.
Customs and Border Protection (both components of the Department of Homeland Security) rather than of the port operator.
The local managers of DP World doubtless provide a stream of information to the company's headquarters in Dubai concerning various aspects of their port operations (including personnel), and this information might be valuable to anyone contemplating a terrorist attack on the United States who might have access to the company's files.
And doubtless personnel from headquarters visit the ports from time to time on business.
Apparently the investigation made by the U.S.
government before approving the transaction was superficial.
I would have to know a lot more than I do about port security to be able to evaluate the risk that allowing DP World to control U.S.
port operations would create to national security.
The risk may be slight; but even so, running a slight risk of catastrophic loss is worthwhile only if the benefits of the risk-taking are considerable.
The benefits would have been slight had the government handled the matter more adroitly.
Apparently DP World, in buying P&O, was not particularly interested in the latter's U.S.
operations.
So when the proposed acquisition was first submitted to the U.S.
government for approval, our government might quietly have suggested (maybe it did, though there is no indication of this) that DP World spin off the U.S.
operations that it was acquiring to an American company.
Because this was not done, and DP World is being depicted as having backed off from acquiring the U.S.
port operations because of fierce grass-roots American opposition, concern has been expressed that we are poisoning our relations with the Islamic world in general and Dubai in particular, and even discouraging foreign investment in the United States by non-Islamic countries.
This concern seems overblown.
Take the last point first.
Because of our large budget deficit, the world is awash in dollars.
The companies holding those dollars are avid to invest in the United States, either by helping to finance our deficit or by acquiring U.S.
assets.
They are not likely to be deterred by a security concern focused on an Islamic nation.
Some Islamic nations may be angry with us; but those nations do not invest heavily in the United States.
Tiny Dubai can hardly afford to retaliate against the United States, to which it looks for protection against the stronger nations in its region, such as Iran.
Nor for that matter can Saudi Arabia.
Between us and the Islamic world (outside of frankly hostile nations such as Iran and Syria) the economic and security dependence is mutual.
Moreover, the ruling and business classes in these nations understand that the United States is a democracy, that our government must therefore be responsive to public opinion, and that Islamic terrorism and fanaticism, the French riots, and the riots over the Danish cartoons, have increased the hostility of Western populations to Muslims.
Demonstrations of the indignation of the American people over Muslim misconduct may even cause some Muslim leaders to rein in their followers.
Western hostility to radical Islam was one of several factors that made the government's defense of the DP World acquisition a distinctly uphill struggle quite apart from legitimate security concerns.
Another such "extraneous" factor was our government's surprising neglect of port security, which has made our ports seem the weakest link in our defense against terrorist attack by means of weapons of mass destruction, weapons that could probably be brought to the United States only by sea.
Another factor was the poor reputation of the Department of Homeland Security, which is responsible for port security; the Department's assurances that the DP World acquisition would not undermine national security were bound to fall on deaf ears.
Then too there is suspicion, based on our lucrative commerce with Dubai and on our failure to require strong counterterrorist measures by our chemical industry, that our government trades off terrorist risks against business interests on terms too favorable to the latter.
I have expressed concern before in this blog and in other media about what seems a crisis of competence in U.S.
government.
For reasons probably rooted in the sheer complexity of modern society, to which our governmental structure may not be well adapted, we have experienced in recent years a series of policy fiascoes, many of which seem to reflect an inability to plan ahead.
In the case of the DP World deal, this inability was expressed in the failure to foresee the public reaction to the deal.
There is a considerable irony in the latest French riots, which are mainly by high school, college, and university students protesting a new law that allows employers to fire employees (without cause) during their first two years of employment, if the employee is under 26 years of age.
The law, which has not yet gone into effect and will not if the government caves in to the rioters and their supporters (including public-employee unions, especially in transportation), is a response in part to a previous round of more serious riots, by French Muslims of mainly North African origin protesting their economic situation, which includes an astronomical unemployment rate particularly among the young.
(Becker and I blogged about those riots on November 13 of last year.) The overall unemployment rate of the under-26 population in France is in excess of 20 percent, which is greater than that of the adult population as a whole (the corresponding rate for the United States is about 10 percent).
The reason is that once hired, an employee can be fired only with great difficulty.
Employers are naturally reluctant to hire people, mainly young, many of whom haven't worked before (or if they have, it can only have been for a short time), because the likelihood that they will do a good job is difficult to assess unless they have a record of prior employment.
The youth unemployment rate is even higher among Muslims in France, and they are among the rioters, which makes no sense in terms of their economic self-interest.
That ethnic French youth should be rioting against a law that would help the Muslims is perhaps surprising given the liberal ideology of most young people in France, yet may be, I will suggest, an expression of rational self-interest.
The youth unemployment rate is largely an artefact of French law.
If employers were free to fire employees without cause, as under "employment at will," the most common form of employment contract in the U.S.
private sector, they would be much more willing to take a chance on hiring workers without a record of satisfactory performance.
Tenuring just-hired workers may be good for those people lucky enough to land a job (though average wages will decline because the expected productivity of a worker will be lower than if he could be fired easily), but like other labor protections it is bad for the marginal workers, such as the Muslims who rioted the last time, and for the economy as a whole.
It is part of a complex of unwise laws in Europe that are contributing to Europe's economic stagnation.
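The mechanism can be illustrated with a minimal hiring rule: an employer takes a chance on an untested worker only if the expected surplus from the hire exceeds the expected cost of ending a bad match. All the numbers below are hypothetical.

```python
# A minimal hiring rule illustrating why firing costs deter hiring of
# untested workers. All numbers are hypothetical illustrations.

def hire(expected_productivity, wage, p_bad_match, firing_cost):
    """Hire only if expected surplus exceeds the expected cost of a bad match."""
    return expected_productivity - wage - p_bad_match * firing_cost > 0

# Under employment at will, firing a bad match is cheap, so the employer
# takes a chance on a worker with no employment record:
print(hire(expected_productivity=30, wage=25, p_bad_match=0.4, firing_cost=2))   # True

# Under strict job protection, the expected firing cost swamps the surplus,
# and the untested worker is never hired in the first place:
print(hire(expected_productivity=30, wage=25, p_bad_match=0.4, firing_cost=20))  # False
```

The same surplus and the same worker; only the cost of correcting a hiring mistake differs, which is why job protection falls hardest on those with no employment record.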
The rioting by the non-Muslim students may be rational.
For if they are the most likely to land jobs under a legal regime in which a newly hired worker cannot be fired in his or her first two years of employment, they may be harmed by the new law.
This cost-benefit analysis of rioting assumes that the cost to the students of rioting is low, and it does appear to be low, since apparently they are not being expelled or suspended from school.
Even the widespread public support of the students may be rational, if that support is concentrated among workers who have tenured jobs and fear that if the new law, though limited to the under-26 work force, goes into effect and is successful in reducing unemployment, it will be the start of a slippery slope leading eventually to free labor markets on the U.S.
and British model.
What is particularly difficult to explain from a rational-choice perspective is the widespread public condonation of riots and strikes as methods of forestalling legislative changes.
If the public strongly opposes a law, it is much more efficient for that opposition to be expressed in a parliamentary vote to rescind the law than in riots and work stoppages that cause widespread inconvenience and other costs.
The inference, assuming the French people are as rational and well informed as other European peoples--which seems the sensible assumption when one considers the high level of education in France, the nation's wealth, and the many French contributions to science and culture--is that their political system is not functioning properly; and indeed that seems to be the case.
Although the new law is, according to public opinion polls, opposed by 68 percent of the population, it was, of course, duly enacted by the French legislature.
Although representative democracy does not automatically translate popular majorities into laws, because of the operation of interest groups and the fact that intensity of preference or aversion is not captured in simple majoritarianism, it would be unusual in the United States for a law opposed by more than two-thirds of the population to pass, though a counterexample is the impeachment of President Clinton by a Republican-dominated House of Representatives.
It was opposed by about two-thirds of the U.S.
population--but of course Clinton was acquitted by the Senate.
A great country can have a lousy government.
(Our government is not doing so well these days.) The design of the French government may be unsound.
Ordinarily in a parliamentary system, the head of the government is a member of parliament, that is, an elected official; in a presidential system, too, the head of the government is an elected official.
But in France, the president, who is elected, appoints the prime minister.
The current prime minister, de Villepin, has never held elective office, and this is a considerable weakness from the standpoint of ability to gauge public opinion and assuage public anxieties, and more broadly from the standpoint of perceived legitimacy in a democratic society.
France has a long history of rioting, but so do many other countries (including the United States), which have outgrown it as their governments stabilized.
It seems more likely that the French propensity to riot is rooted in problems of government design than in a peculiarly French proclivity for rioting.
But this is a tentative suggestion.
For there do appear to be French cultural peculiarities, such as the effort to prevent changes in the French language and resistance to the use of English at academic and other conferences, and to foreign takeovers of French companies, that may be related not to the riots as such but to the intensity with which the French resist globalization and its concomitants, which include competition.
The new law that has provoked the riots is designed to make labor markets slightly more competitive.
This past January 1, in his year-end report to Congress on the federal judiciary, which he heads, Chief Justice Roberts urged Congress to raise federal judicial salaries.
They have not been increased (except for cost-of-living increases in some years) since a very large raise in 1991--from $89,500 to $125,100 for district judges (trial judges), and from $95,000 to $132,700 for circuit (appellate) judges.
The current salaries are $165,200 and $175,100, respectively.
The chief justice's report is not analytical.
It points out that federal judicial salaries have fallen in real (i.e., inflation-adjusted) terms since 1969, but this is misleading because judicial salaries (cost-of-living increases to one side) are raised infrequently--and when they are raised, they are raised by a goodly amount.
1969, the base year picked by Chief Justice Roberts, was the year of a big raise (from $33,000 to $42,500 for circuit judges), and afterwards inflation ate away at the salary in real terms; and likewise after the next big raise, in 1991.
As a result, in most years since 1969, federal judges' salaries have been lower in real terms than their current salaries.
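The base-year point can be checked by deflating the circuit-judge salaries quoted above to a common price level. The CPI levels below are approximate and purely illustrative, and the current salary is assigned to 2006 as an approximation:

```python
# Deflating the circuit-judge salaries quoted in the post to a common
# price level. The CPI values are approximate and purely illustrative;
# the current ($175,100) salary is assigned to 2006 as an approximation.

salaries = {1969: 42_500, 1991: 132_700, 2006: 175_100}  # nominal, from the post
cpi = {1969: 36.7, 1991: 136.2, 2006: 201.6}             # approximate CPI-U levels

def real_salary(year, base=2006):
    """A year's salary expressed in base-year dollars."""
    return salaries[year] * cpi[base] / cpi[year]

# Because 1969 and 1991 were big-raise years, they are the most flattering
# possible bases for the claim that salaries have declined in real terms:
for year in sorted(salaries):
    print(year, round(real_salary(year)))
```

On these rough numbers the 1969 and 1991 salaries both exceed the current salary in real terms, which is exactly why picking a big-raise year as the base makes the decline look steepest; in the trough years between raises, real salaries were lower than they are today.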
What is true, however, as also pointed out in the report, is that federal judicial salaries are now well behind those of deans and professors at leading law schools, whereas they used to be comparable.
And of course they are far behind the salaries of successful practicing lawyers.
That has always been true, but a novel twist is that judicial salaries are now lower than first-year associates' salaries at New York law firms, when the associates' bonuses are included.
The chief justice's report states that the federal judiciary is facing a crisis because of the salary lag.
It notes that 38 federal judges have left the bench in the last six years, and that 60 percent of newly appointed judges come from the public sector rather than from private practice, whereas the figure used to be only 35 percent.
To say that the wages in some job category are "too low" doesn't make much economic sense when one is talking about a job in the private sector.
An employer who has trouble finding workers of the requisite skill and experience at the wage that he's offering will raise the wage.
Even if there is an unanticipated demand for workers of a particular type, there will not be a "shortage"; the limited supply of workers will be allocated to the most urgent demanders, and other employers will substitute other inputs (including workers with less skill or experience) or curtail their output.
In the public sector, however, there is no automatic mechanism for equilibrating the supply of and demand for workers, so there may be shortages in particular jobs, and the existence of a shortage would be a signal that the legislature should raise the wages for those jobs.
No such signal is being emitted in the judicial sector.
There is not a shortage of applicants for federal judgeships, but instead an excess of applicants, as is true in many other government jobs (look how many people are running for President).
But because there are no very definite criteria for appointment to a federal judgeship, there is a possibility that the queue for the jobs is dominated by low-quality applicants.
Let us consider whether there is evidence of that.
Increased turnover could be a sign of job dissatisfaction due to a low wage.
But has turnover increased? The Roberts report gets to the figure 38 by lumping in retirements with resignations.
Federal judges (as I'm about to note) can when they reach retirement age (between 65 and 70 depending on their years of judicial service) either remain as senior judges, working part time, or retire, in which event they can take another job.
The decision to retire is less likely to be motivated by dissatisfaction with salary than the decision to resign, and it also has less impact, since the alternative is continued service as a senior judge, normally part time.
Resignations remain rare.
Since the beginning of 2000, only 12 judges have resigned, out of a total of some 800.
In the comparable six-year period 1969 to 1974, when there were only about 60 percent as many federal judges as there are now, 10 of them resigned, a higher percentage than in the last six years.
Resignations of circuit judges are especially rare; there have been only 8 since 1981.
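The resignation-rate comparison is simple arithmetic on the figures just given, taking "about 60 percent as many judges" literally:

```python
# The resignation-rate comparison from the figures in the post, taking
# "about 60 percent as many judges" literally.

recent  = 12 / 800           # 12 resignations among ~800 judges, 2000-2005
earlier = 10 / (0.60 * 800)  # 10 resignations among ~480 judges, 1969-1974

print(f"{recent:.2%} (2000-05) vs {earlier:.2%} (1969-74)")  # 1.50% (2000-05) vs 2.08% (1969-74)
```

The resignation rate was thus appreciably higher in the earlier period, undercutting the claim of a salary-driven exodus.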
The most serious omission in Chief Justice Roberts's report is the other compensation that judges receive besides their salaries.
Most judges who want to can teach a course or a seminar at a law school and receive another $25,000 in pay (the ceiling on outside income, apart from investment income and royalties--a very low ceiling given current law school salaries, though one that benefits judges, since the ceiling is an ever-diminishing percentage of a professor's salary and so they need teach less and less to reach it).
The federal judicial pension is extremely generous--a judge can retire at age 65 with only 15 years of judicial service (or at 70 with 10 years), and receive his full salary for life; nor does he make any contribution to funding the pension.
The health benefits are also good.
Above all, a judgeship confers very substantial nonpecuniary benefits.
The job is less taxing than practicing law, more interesting (though this is partly a matter of taste), and highly prestigious.
Judges exercise considerable power, not only over the litigants in the cases before them but also in shaping the law for the future, and power is a highly valued form of compensation for many people.
Judges are public figures, even if only locally, to a degree that few even very successful lawyers are.
And judges are not at the beck and call of impatient and demanding clients, as even the most successful lawyers are.
I do not mean to suggest that every successful practitioner would exchange his $1 million or $2 million (or greater) annual income for a judge's salary.
But enough, out of a national population of a million lawyers, are willing to do so to enable the filling of vacancies in the federal courts, especially practitioners in their fifties who have built up a nice nest egg.
So I do not think that the increased draw of new judges from the public sector is a function of salary lag.
Partly it is due to the fact that the federal docket, especially at the district court level, is increasingly dominated by criminal, prisoner, and employment-discrimination cases, none of which are case categories particularly congenial to lawyers who have a commercial practice.
Partly it is due to the fact that many highly competent lawyers prefer to work for government, for example as career prosecutors, rather than engage in private practice, so that competition from public-sector lawyers for judgeships is greater than it once was.
And partly it is that ideology figures increasingly as a factor in federal judicial appointments, and both academics and career government lawyers are likely to have emitted clearer signs of ideological orientation than commercial practitioners.
Raising salaries would not do a great deal to attract commercial lawyers to judgeships.
The lawyer who doesn't want to exchange a $1 million income for a $175,000 income is unlikely to exchange it for a $225,000 income--Roberts doesn't name a figure to which he thinks judicial salaries should be raised, but he can hardly expect Congress to raise salaries by more than 30 percent, and that only intermittently, so that inflation will eat away at the salary until the next jump.
And one effect of raising judicial salaries would be to make the job a bigger patronage plum for ex-Congressmen, friends of Senators, and others with political connections, so that the average quality of the applicant pool might actually fall.
The best argument for raising judicial salaries, though not an argument that reflects well on the character of judges (but after all they're only people), is that people who have a great deal of discretion in their job, yet feel underpaid, may take their revenge by underperforming.
If a judge works 2000 hours a year, so that his hourly fee is less than $90, and he feels indignant at being paid so little, he may decide to work fewer hours, delegating more work to staff, or to work the same number of hours but with less concentration, or to increase his nonpecuniary compensation by bullying the lawyers who appear before him.
But this argument for raising judicial salaries is unlikely to receive a warm welcome from Congress.
There is, however, one compensation measure that is long overdue and could be effectuated at minimum cost to the federal fisc.
That would be to introduce a cost of living differential.
The cost of living differs very widely among different communities in the United States.
Boston's cost of living is 40 percent above the average for the nation; the cost of living in Kankakee, Illinois, is 12 percent below the average; and these are not the extremes.
Modest cost-of-living differentials for federal judges--raises limited to high cost-of-living areas--would go some distance toward remedying any perceived problem of judicial undercompensation.
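A rough arithmetic sketch of how such a differential might work, using the cost-of-living figures cited above; the $175,000 base salary is the current judicial salary mentioned in this discussion, and the city index values are illustrative assumptions:

```python
# Sketch of a cost-of-living differential for judicial salaries.
# Index values are illustrative (Boston roughly 40% above the national
# average, Kankakee roughly 12% below); base salary is assumed $175,000.

BASE_SALARY = 175_000

col_index = {  # cost of living relative to a national average of 1.0
    "Boston, MA": 1.40,
    "Kankakee, IL": 0.88,
}

def adjusted_salary(city):
    # Raises are limited to high-cost areas: the salary is never
    # adjusted below the national base.
    multiplier = max(col_index.get(city, 1.0), 1.0)
    return round(BASE_SALARY * multiplier)

print(adjusted_salary("Boston, MA"))    # 245000
print(adjusted_salary("Kankakee, IL"))  # 175000
```

The one-sided adjustment (a premium above the base, never a cut below it) matches the proposal in the text: raises limited to high cost-of-living areas, at minimum cost to the federal fisc.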
One would have to know a great deal more about China than I do to be able to evaluate the law that the Chinese legislature has just approved ("Property Rights Law of the People's Republic of China," March 16, 2007, available in English translation at http://www.lehmanlaw.com/fileadmin/lehmanlaw_com/Laws___Regulations/Propoerty_Rights_Law_of_the_PRC__LLX__03162007_.pdf) codifying private (and also public) property rights.
Law on the books often differs from law in action (the Soviet Constitution of 1936 is a famous example), and so the new law may turn out to have rather limited significance--or may not.
If property rights are understood in practical terms, then socialist and even communist countries invariably recognize and enforce some private property rights (as well as of course the property rights of public entities).
For a property right is simply a right to exclude other people from the use of some thing of value.
So a tenant has a property right, and even in a communist country, if someone enters without your permission the apartment you've rented from the state, you can get the police to eject him.
Firms buy factories in China without worrying, or at least without worrying much, that other firms might hire thugs to seize or burn down the factories; the police would prevent that kind of private expropriation.
Even in its heyday, socialism (as distinct from communism) connoted merely redistributive taxation and public ownership of a handful of major industries; most property was privately owned and the owners had the full panoply of legal protections of those rights.
A socialist country such as the United Kingdom once was (though it was a distinctly watered-down socialism, despite the pretensions of the British Labour Party) might provide greater practical protection to rights of private property than a disordered capitalist state that had incompetent or corrupt judges and police.
The problem is less socialism versus capitalism than statism versus private ordering.
The threat to private property in a statist country is that the government will expropriate it.
Apparently a good deal of that goes on in China, with local Chinese governments taking farmers' land and selling or leasing it for industrial or urban development.
A major aim of the new property law appears to be to curb this practice.
But whether the aim will be achieved will depend on implementation "on the ground," as it were.
As Oliver Wendell Holmes argued in his famous article "The Path of the Law," from the standpoint of a lawyer and his client the law is merely a prediction of what government will do to the client if he does some act.
That the act may appear to violate a law is just the beginning of the predictive inquiry.
If because judges and police are corrupt or incompetent or inaccessible nothing very bad will happen to the client if he does an act that may be illegal, he is likely to go ahead and do it.
So maybe local governments in China will continue seizing farmers' property.
In a country of more than a billion people that despite its rapid development is still poor, has a weak legal infrastructure, and is rife with corruption, it must be difficult to implement national laws at the local level.
The new law may turn out to be largely aspirational.
But there is more to property law, including the new Chinese law, than limiting governmental expropriation of private property.
Becker rightly emphasizes the importance of a well-functioning system of property rights to the growth of developed economies.
In an underdeveloped economy, with economic activity largely local, family ties and reputational concerns may be such effective substitutes for legal enforcement of formal rights that the costs of such enforcement may exceed the benefits.
Some economic activities do not require investment, such as hunting and the gathering of wild fruits, nuts, or berries, and so the function of a property-rights system of encouraging investment may be unimportant.
And a country that consumes but does not produce intellectual property may be better off refusing to enforce intellectual-property rights.
And finally a poor country may not be able to afford the kind of legal infrastructure required to enforce complex property rights.
This can create a chicken and egg problem, if the absence of such rights keeps a nation so poor that it cannot afford the necessary machinery of enforcement.
A notable feature of the new Chinese law (which occupies 45 pages in the English translation that I cited) is its detailed provisions regarding secured lending.
Enforceable security interests enable lower interest rates, facilitating borrowing and lending, essential activities in a modern economy.
These and other provisions of the new law should reduce transaction costs and--to the extent enforced, a key and open question--enable China to continue its rapid economic growth.
I agree with Becker that marriage should not be subsidized.
The primary concern motivating proposals for a marriage subsidy is that children do better if they are raised in a household in which there are two parents.
(It is an open question whether it makes a big difference whether the two parents are of the same or different sexes.
My guess is that only if having parents of the same sex leads the child to be ridiculed by other children are children raised in homosexual households highly likely to suffer, and the more common such households become, the less ridicule there will be.) I assume it is true that children benefit from being raised in a household with two parents, but this point argues not for subsidizing marriages, many of which are childless (or the children are grown), as Becker notes, but for penalizing divorce or (if the parents are unmarried) separation (including deliberate single-parenthood).
Penalizing divorce, presumably limited to cases in which the divorcing couple has minor children, could operate as either a tax on or a subsidy of marriage: a tax because it would increase the cost of exit, but a subsidy because by increasing the cost of exit it would provide more security to each spouse.
It is unclear which effect would predominate, and therefore it is unclear whether the amount of cohabitation would rise or fall relative to marriage, whether or not there was also a penalty for dissolving a cohabitation when there were minor children.
I do not think there should be either a marriage tax or a marriage subsidy.
We are speaking here of Pigouvian taxes--that is, taxes designed to alter behavior rather than to raise revenue for government.
The principal effect of a tax on or subsidy of marriage is likely to be to induce substitution of cohabitation for marriage, in the case of the tax, or of marriage for cohabitation, in the case of the subsidy.
When an activity has a close substitute, the principal effect of a tax on the activity is to induce substitution away from the taxed activity, and the principal effect of a subsidy is to induce substitution toward the subsidized activity.
It seems unlikely that the decision to have children as a couple or as a single parent, or to stay together with the other parent after children are born and until they become adults, is strongly affected by the precise legal form of the relationship.
Given no-fault divorce and the declining stigma of nonmarital sex, the practical difference between marriage and purely contractual forms of family relationship has shrunk to a point at which tinkering with the marriage rate through taxes or subsidies seems unlikely to produce social gains.
Of course, a heavy tax on cohabitation (perhaps in the form of a heavy separation tax) would drive couples to marry--but an effect of taxing both divorce and separation might be to reduce the birthrate.
This is not certain, however, because each spouse would have greater assurance that the other spouse would remain part of the household, to help take care of the children either through personal services or financially; and this assurance would increase the willingness to have children.
Of course even if there were an exact contractual substitute for marriage, as in domestic partnership laws in force in some states and some foreign countries, many people would have strong religious, moral, or sentimental reasons to prefer marriage to the contractual substitute, and a few people would have strong moral or political reasons to prefer the contractual substitute.
These preferences should be honored.
But it is not clear why the legal, including tax and subsidy, consequences of the choice should differ.
A serious social problem is created by the practice of some poor women of having children with no expectation that the father will participate in the support or upbringing of the children, but instead with the expectation that the government will support them.
The practice--in which the role of government becomes that of financial father of children born out of wedlock--nurtures criminality and perpetuates poverty.
Subsidizing the production of children by persons who because they are poor single parents lack the resources to support their children properly is highly dubious social policy.
Welfare reform has reduced the problem but not eliminated it.
Whatever the solution, it is unlikely to be a marriage subsidy.
A man who does not want to be married and support children will marry if marriage is subsidized but will divorce or abandon his wife after pocketing the subsidy.
To prevent this gaming of the marriage subsidy would require costly and probably futile enforcement efforts by the government.
David Cameron, the Tory leader, whom Becker mentions, bases his pro-marriage policy on the following sentiment: "There's something special about marriage.
It's not about religion.
It's not about morality.
It's about commitment.
When you stand up there, in front of your friends and your family, in front of the world, whether it's in a church or anywhere else, what you're doing really means something.
Pledging yourself to another means doing something brave and important.
You are making a commitment.
You are publicly saying: it's not just about me, me, me anymore.
It is about we--together, the two of us, through thick and thin.
That really matters." But more than 40 percent of British marriages end in divorce, suggesting that the public commitment involved in a wedding ceremony doesn't have much sticking power.
True, the number of cohabitations that end in separation is surely much higher, but many of them are entered into with no expectation of permanence.
So far as I am aware, those cohabitations that are entered into with such an expectation are no more (or perhaps not much more) likely to end in separation than marriages are to end in divorce.
The choice of a college or a professional or graduate school to attend is of course an important one, and also a difficult one because of the great differences across colleges and universities in prestige, programs, facilities, faculty, amenities, location, and expense.
Most of these differences translate into differences in the value of attendance at a particular school to the student.
It is easy enough to determine whether the school has nice facilities and a charming location, but difficult to determine what contribution attending it will make to one's human capital, which is the principal product of education.
As a result, education, including higher education, is what economists call a "credence" product, in the sense that its value cannot be determined by inspection or other reliable means before purchase, but must, in a broad sense, be taken on faith in the producer.
One might think that because most colleges and universities (for the sake of brevity, I'll generally use "college" to refer to any institution of higher education) are nonprofit institutions, they can be trusted to be candid in their marketing, but that notion is naïve.
Institutions of higher education are highly competitive, and if anything less scrupulous in their marketing than commercial sellers, because less subject to legal sanctions for misleading advertising (it is harder to prove that one's college experience did not "work" than that the camera one bought didn't work) and because of the illusion of moral and intellectual superiority to which college faculty and administrators can easily succumb.
Concern with reputation cannot be relied upon to keep colleges from making exaggerated claims of their "value added," because it is very difficult for the graduates to determine, even after a lifetime, how much of their human capital is due to their college experience.
There is some market control, however.
In particular, colleges that depend very heavily on alumni donations have stronger incentives than colleges that do not to avoid exaggerated claims that may cause disillusionment on the part of students after they graduate.
The combination of credence goods and unreliable sellers (in the sense of sellers not adequately deterred by legal or reputational concerns from engaging in misleading marketing efforts) produces a demand for third-party evaluations, on the model of Consumer Reports.
In the case of higher education, the traditional evaluations provided by high-school guidance counselors (and by college professors and college guidance counselors with respect to professional and graduate schools) have now been supplemented by the rankings published annually by U.S. News & World Report since 1983.
These rankings are at once influential and (among academics and academic administrators) controversial.
The rankings raise several interesting economic questions: the effect of rankings on information costs, in general and with particular reference to higher education; the manipulability of rankings by the colleges themselves; the effect of the rankings on education; and why U.S. News & World Report's annual rankings, though fiercely criticized by prominent universities (such as Stanford), face little competition.
(There are, however, some competing ranking systems, particularly for business schools.)
There is a tradeoff in communications between information content and what I'll call absorption cost.
Ranking does very well on the latter score--a ranking conveys an evaluation with great economy to the recipient; it gives the recipient an evaluation of multiple alternatives (in this case, alternative schools) at a glance.
But a ranking's information content often is small, because a ranking does not reveal the size of the value differences between the ranks.
One reason that disclosing the ranks of students has lost favor at elite colleges is that meritocratic standards for admission from a large applicant pool tend to create a student body most of which is rather homogeneous with respect to quality.
The quality difference between number 1 and number 2, or between the top 10 and the bottom 10, may be very great, but the quality difference between number 100 and number 200 may be small, at least relative to the appearance created by such a large rank-order difference.
The information content of college rankings, as in the case of U.S. News & World Report's rankings, is particularly low because these are composite rankings.
That is, different attributes are ranked, and the ranks then combined (often with weighting) to produce a final ranking.
Ordinarily the weighting (even if every subordinate ranking is given the same weight) is arbitrary, which makes the final rank arbitrary.
U.S. News & World Report ranks 15 separate indicators of quality to create its composite ranking of colleges.
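The arbitrariness can be seen in a toy calculation--the schools, indicators, ranks, and weights below are all invented for illustration. The same per-indicator ranks yield different final orderings under two equally defensible weightings:

```python
# Toy composite ranking: three hypothetical schools, each ranked on
# three indicators (1 = best). All names and numbers are invented.

schools = {
    "Alpha": {"selectivity": 1, "resources": 3, "reputation": 2},
    "Beta":  {"selectivity": 2, "resources": 1, "reputation": 3},
    "Gamma": {"selectivity": 3, "resources": 2, "reputation": 1},
}

def composite_rank(weights):
    # A lower weighted rank-sum means a better overall rank.
    scores = {
        name: sum(weights[k] * rank for k, rank in indicators.items())
        for name, indicators in schools.items()
    }
    return sorted(scores, key=scores.get)

# Two equally defensible weightings produce different final orderings.
order_a = composite_rank({"selectivity": 0.5, "resources": 0.25, "reputation": 0.25})
order_b = composite_rank({"selectivity": 0.2, "resources": 0.2, "reputation": 0.6})
print(order_a)  # ['Alpha', 'Beta', 'Gamma']
print(order_b)  # ['Gamma', 'Alpha', 'Beta']
```

Nothing about the underlying data changed between the two runs; only the weights did. That is the sense in which an arbitrary weighting makes the final rank arbitrary.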
The rankings, moreover, are manipulable by the schools, depending on the attributes that are ranked.
A common attribute is the ratio of applications to acceptances.
Both components of the ratio are manipulable--the number of applications by injecting a random element into acceptances, so that students who do not meet the normal admission criteria nevertheless have a chance of admission, which may motivate them to apply; and the number of acceptances by rejecting high-quality applicants who seem almost certain to be admitted by (and to accept) a higher-ranking school.
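A toy calculation (all numbers invented) shows how both manipulations lower the reported acceptance rate without changing who actually enrolls:

```python
# Toy illustration of manipulating the applications-to-acceptances
# ratio. All figures are invented for illustration.

def acceptance_rate(applications, acceptances):
    return acceptances / applications

# Baseline: 10,000 applications, 2,000 acceptances.
baseline = acceptance_rate(10_000, 2_000)   # 0.20

# Tactic 1: a lottery-like chance of admission draws 2,000 extra
# long-shot applications, inflating the denominator.
more_apps = acceptance_rate(12_000, 2_000)  # ~0.167

# Tactic 2: additionally reject 400 high-quality applicants who would
# almost certainly enroll at a higher-ranked school anyway, shrinking
# the numerator without shrinking the entering class.
both = acceptance_rate(12_000, 1_600)       # ~0.133

print(baseline, more_apps, both)
```

The school looks markedly more selective (13.3 percent versus 20 percent acceptance) even though the set of students who matriculate is essentially unchanged.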
The effect of college ranking on the education industry is unclear, but my guess is that it is negative.
The principal information conferred, given the information limitations of ranking in general and composite ranking in particular, is simply the rank of the college.
But that is important to students (and their parents).
And rightly so.
Given the high costs of actually evaluating colleges, employers and even the admissions committees of professional and graduate schools are likely to give weight to a school's rank, and this will give applicants an incentive to apply to the highest-ranking school that they have a chance of being admitted to (if they can afford it).
The result will be to increase the school's rank, because SAT scores and other measures of the quality of admitted students are an important factor in a college's ranking.
That increase in turn will attract still better applicants, which may result in a further boost in the school's rank.
The result may be that a school will attract a quality of student, and attain a rank, that is disproportionate to the quality of its teaching program.
As a result, the value added by the college experience may be smaller than if rank were based solely on the quality of the college's programs, and so the students are getting less for their money than they could elsewhere.
However, this conclusion must be qualified in the following important respect: the clustering of the best students at a handful of highly ranked schools may, regardless of the quality of the schools' programs, contribute to the human capital formation of these students by exposing them to other smart kids and embedding them in a valuable social network of future leaders.
This may be a significant social as well as private benefit.
A final question is why, given the imperfection of U.S. News & World Report's college ranking system and the boost that publishing its rankings has given the magazine's circulation, no significant competitor has appeared on the scene, at least for the magazine's college and law school rankings (the latter are particularly influential).
I conjecture that the market for a second commercial ranking system for colleges would be weak, because the publisher of a new system could not make a convincing case that the new system was better than the established one.
It could not do that because the quality of a ranking system is even more difficult to evaluate than the quality of the education provided by a given college.
College applicants and their parents would thus have little incentive to consult the second system.
Becker makes two principal points in his interesting post: that free enterprise encourages people to take responsibility for their actions and thereby make better decisions; and that there is "a strong trend toward shifting responsibility to others."
I would qualify these points as follows.
Free enterprise requires individuals to make a variety of decisions, concerning both production and consumption, that in a socialist system are the responsibility of government officials.
It does not follow that people in free-enterprise societies "take responsibility," in some psychological sense, for their actions.
The tendency to blame others when things go wrong is deeply rooted in human nature and I imagine no less common in America than in any other country.
In fact, in a free-market system, competition places significant limitations on the freedom of choice of consumers, investors, and workers.
But has the tendency toward shifting responsibility for our actions to other people perhaps become more common over time? Maybe so, with the erosion of belief in free will.
In the traditional sense of that concept, a sense most highly developed (so far as I know) in Christian theology, uncoerced decisions, such as a decision to commit or refrain from committing a crime, are deemed to be uncaused.
They are deemed the "free" choice of the person making them, so that if he makes the wrong choice he has no one to blame but himself.
(There is an odd exception: some Christians believe that a person can be "possessed" by the devil, in which event he is not responsible for his actions until the devil is exorcised.) I find it hard--maybe for lack of imagination--to believe that decisions have no cause.
I assume that they are determined by the balance of advantages and disadvantages as it appears to the decider, though he may not be fully conscious (or conscious at all) of the considerations that are moving him.
Those considerations are influenced by background, intelligence, experiences, and other factors most of which are not, in any meaningful sense, within a person's "control."
On this view, to call a person "responsible" for a decision (such as the decision to take out a no-down-payment mortgage with an adjustable interest rate) is just to say that his process of weighing the pros and cons of the decision was not overborne by force or fraud or thwarted by a mental deficiency.
The decision may not have been blameworthy in any very deep sense; it may have been foreordained by psychological factors.
Becker mentions "greed." Why are some people greedy? Because they choose to be bad? Or because their psychology, which they are not responsible for, has produced in them an abnormal demand for money? All "freedom" means is not being subject to certain kinds of coercion.
Freedom so understood expands the opportunities open to people, but how they exploit their opportunities is the product of the interaction of their genetic and financial endowments, their upbringing and other environmental factors, and their good and bad luck.
Moral hazard is thus not a defect of the will, but a rational response to one's opportunity set.
If one has medical insurance without deductibles or copayments, the marginal cost of medical care will be low (even zero), so one will consume more of it.
If one is confident that in the event of a flood or an earthquake there will be a government bailout, one will buy less or no flood or earthquake insurance.
The government's bailing out of investment companies, banks, and mortgagors will induce those entities to take more investment risks in the future than they otherwise would, and so will increase the risk of future housing bubbles and credit crunches.
This has, I think, always been so.
That is, there was never a time when, because people were averse to taking advantage of opportunities to shift costs to other people, moral hazard was not a social problem.
Criminals will sometimes try to place the blame for their crimes on a bad upbringing.
That is nothing new.
A criminal (or his lawyer) will make any argument that might reduce his sentence; he would be irrational not to do so.
And it is plausible that a bad upbringing, along with a low IQ, increases the likelihood that a person will become a criminal, by reducing his alternative legal opportunities.
But as Becker points out, most people with a bad upbringing (and equally most people with low IQs) do not become criminals.
This has, to my mind, a practical rather than a moral significance.
It suggests that the threat of punishment can deter even a person who has had a bad upbringing.
So by adding that threat to the considerations that a person will weigh in deciding whether to commit a crime, society can reduce the crime rate.
We may even want to punish the criminals with the bad upbringings more heavily than other criminals, in the belief that they can be deterred only by a threat of heavier punishment.
On this approach to crime and punishment, we punish criminals not because they "freely" chose to do bad things, but because by punishing them we can at tolerable cost reduce the prevalence of activities that generate net negative social costs.
We make people do the "right" thing not by appealing to the exercise of their free will but by increasing the cost to them of doing the wrong thing.
Fortunately, few judges, whether or not they believe in a strong sense of free will, allow the excuse of a bad upbringing to mitigate punishment.
As for the people who took out risky mortgages in the expectation that house prices would continue to rise, they should not be bailed out by government (that is the moral hazard problem) even, I think, if they were victims of fraud.
But if they were victims of fraud, they should have legal remedies against the people who defrauded them.
Of course, if there were no legal remedies against fraud, people would be more careful--but they would be too careful; they would incur high costs of self-protection.
It is cheaper to punish fraud, just as it is cheaper to punish burglary than to tell people to fortify their houses.
Sports doping--the use of anabolic steroids and other drugs to increase athletic performance, as Barry Bonds, Roger Clemens, and other prominent professional athletes have been accused of doing--is intensely controversial.
A recent article in Nature--Barbara Sahakian and Sharon Morein-Zamir, "Professor's Little Helper," Dec. 2007--discusses the parallel phenomenon of "intelligence doping." The term refers to the use of drugs to enhance cognitive performance.
These are drugs like Adderall and modafinil (sold as Provigil) that are used to treat genuine disorders, such as attention deficit disorder in the case of the former and narcolepsy in the case of the latter.
But they can also be used by normal people, including students and academics, to improve cognitive functioning by increasing concentration, memory, wakefulness, and mental energy generally.
Coffee has many of the same effects, but they are much weaker.
As in the case of sports doping, there is concern that the use of these drugs may have long-term adverse effects on the health of the user.
There is even less evidence of this than in the case of sports doping, however.
But this may be because these drugs are newer--which means that they are just the first wave of cognition-enhancing drugs and that the subsequent waves will be more effective.
Becker and I blogged about sports doping on August 27, 2006.
We pointed to the arms-race character of the practice.
Because of the importance attached to winning an athletic event, anything that increases an athlete's performance, such as taking steroids, places pressure on other athletes to do likewise.
The result is expense, and also possible ill health, without any certain improvement in the quality of athletic competition as perceived by fans.
That is not necessarily a compelling argument for trying to ban sports doping; indeed I consider the argument weak because of the difficulty and hence cost of monitoring drug use, especially the newer enhancement practice of "gene doping," and because of the existence of borderline enhancement practices (borderline between "natural" and "artificial"), such as training at a high altitude in order to increase one's production of red blood cells, which in turn enables a greater absorption of oxygen, or undergoing eye surgery to increase visual acuity.
If fans object for whatever reason to sports doping, then sports leagues and team owners will have an incentive to ban the practice; the argument for criminalizing the practice would then depend on whether purely private sanctions could achieve an adequate level of deterrence.
Suppose teams, leagues, and players all want to ban sports doping whether because of health concerns or fans' preferences, but that detection is extremely difficult, so that the probability of catching an athlete doing sports doping is very low.
Then the optimal punishment may be more severe than the team or league could impose.
The argument is the same as for why embezzlement is a crime, rather than the government's leaving it to the bank to punish the embezzler by firing him or suing him for the money he stole.
Fans appear to be ambivalent about banning sports doping, because they are concerned with absolute rather than just relative performance, and so enjoy the additional spectacle created by "bionic" athletes.
In fact neither the teams (and leagues) nor the players' unions seem enthusiastic about banning the practice, which suggests that it does not decrease--it may actually increase--the incomes of the teams and (on average) the players.
The case for banning intelligence doping is even weaker than the case for banning sports doping.
One reason is that there is a strong positive externality from increased cognitive functioning, since smart people usually cannot capture the entire social product of their work in the form of a higher income.
As with other producers, part of the benefit that their production creates inures to consumers as consumer surplus.
An example is patentable inventions.
Because patents are limited in duration, usually to 20 years, any benefit that a patented invention generates after the patent expires inures to persons other than the patentee.
Even if there were no positive externality--even if the user of an intelligence-enhancing drug captured the entire incremental income generated by that use--there would be a social benefit, since the user is part of society, and hence no economic argument for banning.
What is a possible source of concern is that because there is competition based on intelligence--for example, to get into good schools, win academic prizes, or achieve success in commercial fields such as finance that place a premium on intellectual acuity--the availability of intelligence-enhancing drugs places pressure on persons who would prefer not to use them, because of concerns over their possible negative health consequences, to use them anyway.
There is also a danger that such drugs produce only very short-term effects, for example on exam performance, that may exaggerate a person's long-term ability.
(This is one of the reasons for objecting to exam coaching.) But against this is the fact that it is even more difficult than in the case of sports doping to draw a line between permitted and forbidden uses of cognition-enhancing drugs.
It is hard to define "normal" cognitive functioning in a meaningful sense.
Should people with an IQ above 100, the average, be forbidden to use such drugs, but people below that level permitted to use them until their IQ reaches 100? That would be absurd.
The person with an IQ of 120 would argue compellingly that he should be allowed to take intelligence-enhancing drugs in order to be able to compete for good school placements and jobs with people having an IQ of 130.
And so on up.
Of course the naturally gifted will object to any "artificial" enhancements that enable others to compete with them.
But it is not obvious why their objections should be given weight from a public policy standpoint.
It is not as if allowing such enhancements would be likely to discourage the naturally gifted from developing and using their gifts (it might have the opposite effect, by creating greater competition for them), let alone discourage bright people from seeking out other people to marry and produce children with.
It is no surprise that female enrollment in college has increased over the last half century.
The later age of marriage and childbearing and the greatly increased job opportunities of women explain the trend.
Another factor, stressed by Becker in his pathbreaking economic analysis of the family, is increased emphasis on quality rather than quantity of children; parental education is an important factor in the quality of children.
The fact that women tend on average to get better grades in college helps to explain their lower dropout rate, but this is nothing new; even in the era when women dropped out of college to marry and have children, they had higher grades than men.
That women are better students than men is pretty much a constant--and a puzzle.
When one observes members of one group outperforming another in a competitive environment in which, therefore, substitution of inputs is possible, a possible explanation is discrimination against the members of the superior group.
If a college wants to have the best students it can attract, and the women attending the college have better grades than the men who attend it, something is wrong--the school could increase the quality of its student body by admitting more women and fewer men.
That it does not do so may be because it values other gender-dependent factors--for example, female students may prefer a lower ratio of female to male students than a purely meritocratic admissions policy would produce, and this preference may influence the college's admissions decisions.
But this is unlikely to be a good explanation for the superior female academic performance today.
The incentive to discriminate against female college applicants was much stronger in the old days, yet the female-male performance gap has not (so far as I can discover) diminished.
Women might outperform men academically because they worked harder, and they might work harder because they had more to gain from completing college successfully and doing so with high grades.
But as Becker points out, since male participation in the labor force continues (and probably will continue) to exceed that of women, and since there is a large wage premium for college graduates, men actually have more to gain from completing college than women do.
Yet not only do men drop out at a higher rate, but male college enrollment has not increased nearly as rapidly as female college enrollment has.
Women are not just catching up with men on the educational front; they are becoming better educated than men.
So there are two puzzles: why women get better grades than men, and why men have a lower elasticity of response to the effect of education on earnings than women do.
At this stage of our knowledge, the answers to these questions must be highly speculative; what follows, then, is guesswork.
The first question is, though, I think, a little easier than the second.
From the standpoint of most teachers, right up to and including the level of teachers of college undergraduates, the ideal student is well behaved, unaggressive, docile, patient, meticulous, and empathetic in the sense of intuiting the response to the teacher that is most likely to please the teacher.
Those are traits less characteristic of boys than of girls.
Moreover, there is more variance in IQ among boys than girls--to exaggerate, more morons and more geniuses--and both the morons and the geniuses are difficult for most teachers, the morons for obvious reasons, the geniuses because they are easily bored in a class geared to the comprehension of the average student.
So girls are easier to teach, and so are "rewarded" (not deliberately) with higher average grades.
Nothing in the suggested answers to the first question, however, can explain why males should be less responsive to the growing value of a college education than females.
One possibility is that there is nothing more that men can do to improve their academic performance, given genetic limitations.
Notice the curious fact that the more men in the lower tail of the male IQ distribution drop out at some stage in their academic career, the higher the average grades of the men who remain in school should be; the "genius" tail pulls up the average, while the "moron" tail, being depleted by dropouts, pulls it down less than it would if the students in that tail did not drop out disproportionately and thus cease to figure in the determination of grades.
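The truncation effect described in the preceding sentence can be illustrated with a small simulation. All the numbers here are illustrative assumptions of mine (a normal distribution of "grades," a bottom-20-percent dropout rate), not data from the post; the point is only that removing the lower tail of any distribution raises the mean of those who remain.

```python
import random
import statistics

random.seed(0)

# Hypothetical "grade" scores for a large cohort; the distribution and
# its parameters are illustrative assumptions, not real data.
scores = [random.gauss(100, 15) for _ in range(100_000)]
mean_all = statistics.mean(scores)

# Suppose students in the bottom 20 percent disproportionately drop out
# and so cease to figure in the computation of average grades.
cutoff = sorted(scores)[len(scores) // 5]
remaining = [s for s in scores if s >= cutoff]
mean_remaining = statistics.mean(remaining)

print(f"mean of all students:       {mean_all:.1f}")
print(f"mean after lower-tail exit: {mean_remaining:.1f}")
# The truncated mean necessarily exceeds the full-sample mean.
assert mean_remaining > mean_all
```

The same mechanism is why, as the text notes, dropout patterns complicate inferences from the average grades of those who stay enrolled.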
Maybe the "genius" tail, because of the publicity that its members attract, has obscured the fact that women may on average be more intelligent, or at least innately have a suite of qualities more supportive of academic performance, than men.
The key is "innately." If aggressiveness and other psychological or cognitive qualities that inhibit male academic performance are innate, men may have maxed out long ago, while women did not reach their peak until recently because of factors extraneous to ability, such as the former lack of demand for women in high-skilled jobs.
Another possibility is that the decline of the conventional "patriarchal" family since the 1960s has been harder on boys than on girls.
Because of rampant divorce and illegitimacy, a boy's biological father is less likely to be a continuous presence during the boy's formative years, and this is only one factor in what appears to be a decline in the disciplining of children.
If docility is as I have suggested a factor in academic performance, a decline in discipline is more likely to harm the academic performance of boys than of girls because the former need more discipline to instill docility in them.
It is difficult to test this hypothesis empirically, however, because grade inflation bedevils any effort to use changes in average grades over time as a measure of the trend in academic performance.
But, to repeat, these suggested answers to the puzzle of the gender education gap are highly speculative--a stimulus (I hope) to further thought, not the end of the inquiry.
The death on February 27 of William Buckley provoked a surprising outpouring of praise, not limited to conservatives.
The praise was mixed with hyperbole.
He was credited with having created modern American conservatism, with having united free-market economists with social and other noneconomic conservatives, with being the person without whom there would never have been a Reagan presidency, and with being a formidable intellectual.
I doubt that any of those things is quite true.
He was colorful, rich, good-natured, a skillful polemicist and influential "public intellectual" (in my book Public Intellectuals: A Study of Decline [2001], he ranked number 20 in "media mentions" for the period 1995 to 2000--long past his period of greatest influence), a bricoleur, defined by Wikipedia as "a person who creates things from scratch, is creative and resourceful: a person who collects information and things and then puts them together in a way that they were not originally designed to do." What he put together were conservative Catholicism; McCarthyism; belligerent, even militaristic anticommunism (roll back the Iron Curtain rather than contain the Soviet Union)--a position related, like his McCarthyism, to his religiosity, which made communism particularly odious to him; defense of the southern states' resistance to racial integration; hostility to big government; individualism (the basis of his hostility to the "nanny state"), as expressed for example in his advocacy of legalizing marijuana and other mind-altering drugs (though I don't know when he began advocating legalization); and entrepreneurship.
All but repealing the drug laws were ingredients of an American conservatism of the 1950s that was outside the mainstream of the Republican Party of the time, though it stopped short of the John Birch Society.
Apart from his libertarian streak, Buckley's policy positions were not, for the most part, sound.
Joseph McCarthy appeared on the scene after the communist penetration (which was considerable) of the government had been eliminated by the Truman Administration.
The southern states' rights movement was disreputable.
Containment was probably the most sensible response to Soviet expansionism.
And religion is not, in my opinion anyway, a good basis for public policy.
Moreover, Buckley was a journalist, working under deadlines that resulted in most of his opinions being merely asserted rather than well supported.
His policy positions were not fully coherent: His enthusiasm for rolling back the Iron Curtain did not sort well with his dislike of big government, since wars and heavy defense expenditures increase the size of government, as President Eisenhower was well aware.
The suggestion in the obituaries that he united free-market economists with other conservatives is especially misleading.
Free-market economists have always been on a different track from the kind of political and social conservative that Buckley exemplified.
He was a friend of free markets, but on moral grounds rather than because he thought the market a more efficient method of allocating resources than the government, though he thought that also.
The conservative economic movement has had two major streams, which are convergent.
One is the Austrian school, whose best-known exemplar was Friedrich Hayek.
Hayek argued powerfully that socialism doesn't work, because it does not enable the aggregation of the information required to operate a modern economy; for that, the price system is necessary, because prices impound and transmit information far more effectively than a centralized economic controller can do.
Hayek's insight was vindicated by the collapse of the communist system.
But his influence has been mainly in Europe, where it has been, however, considerable, especially in the nations transitioning from communism.
The other stream, largely independent of the Austrian, originated with maverick economists, such as Milton Friedman, Aaron Director, and George Stigler, who at the height of the 1930s depression, when free-market economics was in the dog house and the Soviet Union's collectivist economy was widely admired including among economists, had the temerity (like Hayek) to argue that collectivist regulation of the economy was inferior to leaving the regulation of economic activity to the market.
The school expanded slowly after World War II; Ronald Coase, a brilliant English economist who moved to the United States, was an influential critic of regulation.
While Director and Stigler mounted a strong challenge to conventional views of antitrust, Stigler and especially Friedman challenged a wide range of governmental policies.
Other economists, and even a few economics-minded law professors, joined the free-market movement.
But the movement received virtually no hearing during the 1960s, the era of the "Great Society" programs of Lyndon Johnson.
However, the stagflation of the 1970s exposed the failure of conventional "liberal" (in the welfare-state sense) policies, promoted increased acceptance of free-market economics, and stimulated the deregulation and privatization movements, which began in the Carter Administration, expanded in the Reagan and (first) Bush Administrations, and continued into the Clinton Administration, notably with welfare reform.
All this had nothing to do with William Buckley.
Most of the causes dearest to his heart were unrelated to economic policy, such as his belief about the proper strategies for defending against the Soviet Union, expelling Soviet agents from the federal government, or defeating our current enemies.
Buckley was a strong opponent of abortion, whereas economists, while they can tote up the costs of forbidding or permitting abortion, do not, as economists, have any position on whether a fetus should have the same legal status as a newborn child.
Economists might think that particular religious beliefs, such as Calvinism, with its emphasis on frugality and saving, promote social welfare, but they have no position on the truth of religion.
They value markets because markets are efficient, not because people have a moral entitlement (as John Stuart Mill believed) to engage in any and all conduct that does not create a palpable harm to other people ("my rights end where your nose begins").
Markets to an economist are just instruments, and for solving particular problems there are sometimes better instruments.
What is true is that a political movement based solely on free-market economics could not have achieved political power under conditions of modern American democracy.
Modern conservatism, to the extent that it is a coherent movement, combines free-market economics (to a degree) with political and social conservatism (tough on crime, strong on national defense, friendly to religion, critical of liberal social values, hostile to trial lawyers and judicial activism).
It was not a movement created by Buckley, able journalist and polemicist though he was.
The Federal Reserve's unsound monetary policy in the early 2000s pushed down interest rates excessively, resulting in asset-price inflation, particularly in houses because they are bought primarily with debt.
Eventually the bubble burst and house prices fell precipitately; they are still falling.
Becker's interesting post argues that the boom and bust in housing have not had as large an effect on consumption (and hence, this implies, on the nonfinancial economy) as the size of the price fluctuations might suggest.
He illustrates with the example of a homeowner who has a bequest motive: if people intend to leave their house to their kids, changes in the value of the house will affect the size of the bequest rather than current spending by the owner-parent.
More generally, an increase in home values increases the cost of housing by the same amount.
If all house prices double (and assume no other prices change), but the owner is not intending to downsize, he cannot "spend" the increased value of the house.
However, although increased home values are unlikely to be translated into equal increases in consumption spending, those increased values are likely to have a strongly positive effect on consumption.
To begin with, a significant amount of borrowing during the bubble involved the refinancing of existing mortgages rather than the financing of home purchases, and often the incentive for the refinancing was to obtain cash for consumption.
Furthermore, some people may downsize, or even become renters, because they want to increase their consumption expenditures, as they can do if they cash out some of the increased market value of their house.
And if people feel wealthier because the market value of their savings (which includes the value of a house) is rising, they may reallocate more of their income to consumption, which they can do without impairing the expected value of their savings if their houses are worth more.
In other words, a house is a "store of value" (as economists say) rather than just a home, and even if one has no intention of downsizing, the fact that one has a valuable house that could be sold if one needed cash may persuade one that one can afford to consume more.
As a form of savings, a house is illiquid and risky relative to cash, but it is still savings, in the sense of an asset that one could turn into cash if necessary, to increase consumption in the future.
And if one's savings shoot up because of an increase in the price of one of one's assets, one may decide to allocate a portion of the increased savings to current consumption.
These considerations persuade me that the run-up in housing prices probably did increase consumption significantly.
More important, the collapse of those prices, together with the fall in the stock market, has almost certainly had a significant negative impact on current consumption expenditures.
Because of rising house and stock prices, the market value of people's savings rose during the housing and stock bubbles and as a result people reduced their savings rate, to the point where it actually was negative for a period during the early 2000s and was no higher than about 1 percent before the crash last September.
(This is related to the "store of value" point.) The personal savings rate has since risen to more than 4 percent.
The fall in house and stock prices, combined with increased unemployment and fear of unemployment, convinced many people that they didn't have enough precautionary (safe) savings, and so their current savings are heavily weighted toward cash and other riskless, or very low-risk, forms of savings.
The reallocation of income from consumption expenditures to very safe forms of savings reduces current consumption without increasing productive investment significantly, and so contributes to the depression.
Matching is a form of search in which the object is to create a relationship, such as marriage or employment (marriage could be regarded as a form of employment).
Matching is more complex and often more protracted than most searching for goods or services because the stakes tend to be greater.
The costs of exit may be high and often there are high opportunity costs as well.
The higher these costs of mistaken matching, the more it pays to invest in the search for a good match.
But a protracted search is very costly; for example, if one spends 10 years searching for the perfect mate, one has lost 10 years of benefits of marriage.
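The trade-off in the paragraph above--longer search yields a better match but forfeits years of the benefits of being matched--can be sketched numerically. The functional form and every parameter below are my own illustrative assumptions (the post gives only the ten-year example), but the shape of the result is the point: the benefit-maximizing search length is interior, well short of a decade-long hunt for the perfect mate.

```python
import math

# Toy model: searching t years yields match quality q(t) with
# diminishing returns, but leaves only (T - t) years to enjoy the match.
# All numbers are assumptions for illustration, not from the post.
T = 40.0  # assumed years of married life available if matched at t = 0

def match_quality(t):
    # assumed functional form: diminishing returns to search
    return 1.0 - math.exp(-0.3 * t)

def total_benefit(t):
    # annual benefit proportional to match quality, enjoyed for T - t years
    return (T - t) * match_quality(t)

# Grid-search the best stopping time between 0 and 20 years.
best_t = max((t / 10 for t in range(0, 201)), key=total_benefit)
print(f"benefit-maximizing search length: about {best_t:.1f} years")

# Searching the full 10 years mentioned in the text does worse:
assert total_benefit(best_t) > total_benefit(10.0)
```

Under these assumptions the optimum is around eight years; raising the cost of a mistaken match (a lower discount on quality, higher exit costs) lengthens the optimal search, which is the comparative-static claim in the text.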
If one divides the process of searching for a match into two parts, which I'll call "screening" and "meeting," one can see more easily the precise benefits of Internet matching services.
The universes of potential mates, workers, and jobs are immense, and while very few members of these universes are suitable candidates for a particular match, it is very difficult to determine in advance who those few might be.
So the first task in the search for a match is to screen out the vast number of unsuitable candidates, and this is done much more quickly, completely, and accurately (and therefore at lower cost and with greater benefit) by an Internet search than by such traditional alternatives as mixer dances, marriage brokers, personal columns in magazines, want ads, and job fairs.
Once the candidates for a match have been reduced to a manageable number, the more time-intensive, face-to-face meeting phase of the matching process takes over; an efficient preliminary screening greatly reduces the aggregate costs of the expensive "meeting" phase of matching.
In job search, the meeting phase is the interviewing of the most promising candidates.
However, two forms of non-Internet marriage-matching screening should be mentioned, as they are also relatively new (and increasingly common) and highly efficient.
They are coeducational higher education and gender integration of the workplace.
In effect, both the college, which wants a homogeneous student body, and the employer, who wants a homogeneous work force within each job category, do the prescreening at no cost to the searcher, and the fact of taking classes with, engaging in extracurricular activities with, or working side by side with someone of the other sex reduces the incremental costs of the "meeting" phase of the search: you don't have to go out of your way to meet someone you work with or go to class with.
My guess would be, therefore, that the demand for Internet marriage screening would be less among students and among young workers in workplaces that have a fairly even balance of men and women, and greater among persons who are not in such advantageous situations and among persons who have high opportunity costs of time, like successful businessmen and professionals.
It should also be greater among persons who have idiosyncratic or minority tastes.
For example, since homosexuals are a relatively small fraction of the population, I would expect their demand for Internet match-screening services to be proportionately greater than that of heterosexuals.
Personal ads in magazines that have a specialized readership are another form of non-Internet preliminary screening.
The advantages of Internet job matching over newspaper want ads are particularly great, and this for four reasons.
First, in any community in which there is more than one newspaper, the job searcher (whether looking to be hired or to hire) has to buy and read both newspapers (or however many there are), even though he may derive no value whatsoever from anything in the second through nth papers except the want ads.
Second, a job search is often regional or national rather than local, and it is infeasible to do a regional or national job search by means of local newspapers.
Third, the costs of paper greatly limit the number of jobs that can be advertised in a newspaper and the amount of information that can be conveyed about each job or job hunter.
And fourth (though related to the first point), a newspaper is a bundled commodity, and people searching for jobs or workers may derive very little value from the other sticks in the bundle.
Beginning in the 1970s, the banking industry was extensively deregulated.
Other financial intermediaries, such as broker-dealers, hedge funds, and money-market funds, were permitted to offer close substitutes for services provided by commercial banks, and restrictions on banking were loosened so that the banks could fight back against their new competitors.
The deregulation program was complete by 1999, when the Glass-Steagall Act, separating investment and commercial banking, was repealed.
However, the Bush Administration, as part of its general free-market philosophy, instituted a regime of regulatory laxity that included bank and securities regulation.
This laxity, along with the Federal Reserve's error in depressing interest rates in the early 2000s, contributed to precipitating the banking collapse and the ensuing depression in which we find ourselves.
A natural response is to tighten up regulation.
In the case of commercial banks, this would not require new legislation.
The bank regulators have virtually plenary control over banks--hence the crack: What does a bank say when a regulator tells it to jump? "How high?" Burned by the banking collapse and employed by an Administration less complacent about the self-correcting character of financial competition than the Bush Administration was, current regulators will not allow banks to take risks, though, paradoxically, they may compel banks to do so in order to increase the amount of money in circulation and thus stimulate economic activity.
The banks, being undercapitalized, are afraid to make risky loans; they thus do not have to be prevented from making them, at least in the near term.
The current complaint about the banks is that they are hoarding  cash--that they are excessively risk averse--and thus are failing to provide the credit that the economy needs in order to recover.
To tighten regulation of banks at this point would thus not only be a case of closing the barn door after the horses have escaped, but also would undermine the government's policy of encouraging banks to lend.
The most challenging issue of financial re-regulation is bringing the nonbank financial intermediaries under the regulatory umbrella, in order to prevent effective bank regulation from simply shifting ever more financial intermediation to firms not shackled by regulation.
One can imagine imposing capital requirements, leverage limitations, or even reserve requirements, on nonbank financial intermediaries.
But this would require an elaborate regulatory apparatus that would cost a lot and, more important, might be ineffectual because of the complexity of modern finance and the heterogeneity of the nonbank intermediaries.
I would prefer to see, at least as an initial step, requiring greater regulation of specific financial instruments, in particular credit-default swaps, which are at present unregulated credit-insurance undertakings often with no backing in the form of either reserves or collateral.
Financial intermediaries find themselves both issuers and purchasers of such swaps (that is, both insurers and insureds), and because the swaps are not traded on an exchange, are not standardized, and are not regulated to assure that the issuer can honor his undertaking, they have been a source of debilitating uncertainty in the present crisis.
We would not be in the fix we're in were it not for the Federal Reserve's having pushed interest rates down too far and kept them down too long, thereby setting the stage for a credit binge (including the housing bubble), and, relatedly, were it not for the very low personal savings rate of Americans and their investing almost all their savings in risky assets such as houses and common stocks.
The problem of excessive borrowing can be addressed both by the Federal Reserve, which exercises a high degree of control over interest rates, and by the government's placing limits on credit-card and mortgage debt, for example by repealing the deductibility of mortgage interest from taxable income.
But the most important point I would make is that there should be no new regulatory measures until the depression reaches bottom and recovery begins (not that there can be certainty about when that point has been reached--there were several false bottoms in the 1930s depression).
Any regulatory initiatives at this time will simply increase the already great uncertainty in which the financial industry is operating; and as Keynes pointed out, anything that increases uncertainty in a depression causes hoarding, which can in turn precipitate a deflation likely to deepen and protract an economic downturn.
Of some $300 billion in annual American charitable giving, about 5 percent is spent abroad; almost 40 percent of that 5 percent is donated by the Bill & Melinda Gates Foundation, mainly for trying to alleviate Third World health problems (such as malaria and AIDS) and provide assistance to Third World agriculture.
Total American charitable giving abroad is more than half as great as total U.S.
governmental foreign aid, of which almost a third goes to Israel and Egypt.
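The figures just quoted can be checked with simple arithmetic. The inputs are the rounded numbers from the text; the ceiling on government foreign aid is merely what "more than half as great" implies, not a figure the post states.

```python
# Rounded figures from the text.
total_giving = 300e9          # annual U.S. charitable giving, dollars
abroad_share = 0.05           # about 5 percent is spent abroad
gates_share_of_abroad = 0.40  # almost 40 percent of that 5 percent

giving_abroad = total_giving * abroad_share
gates_giving = giving_abroad * gates_share_of_abroad

print(f"giving abroad:  ${giving_abroad / 1e9:.0f} billion")  # $15 billion
print(f"of which Gates: ${gates_giving / 1e9:.0f} billion")   # $6 billion

# "More than half as great as total U.S. governmental foreign aid"
# implies government foreign aid is under twice the private figure:
implied_ceiling_on_gov_aid = 2 * giving_abroad
print(f"implied government foreign aid: under "
      f"${implied_ceiling_on_gov_aid / 1e9:.0f} billion")     # under $30 billion
```

So private giving abroad runs to roughly $15 billion a year, about $6 billion of it from the Gates Foundation, against government foreign aid of somewhat under $30 billion.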
Americans do not receive an income tax deduction for giving to foreign charities, but they do for giving to domestic charities that donate abroad (provided they don't just donate to foreign charities).
Should they? I am inclined to think they should not.
This is not, however, because I think that government foreign aid is a more efficient method of increasing Third World welfare.
Little U.S.
foreign aid goes to the Third World, and the efficacy of the aid that does go there is undermined by the requirement that the aid be used to purchase U.S.
goods and services.
Nor do I question the economic rationale for providing a tax exemption for charitable donations.
The argument is that charitable giving provides an external benefit; that is, if I want to increase the welfare of people in Bangladesh, I will benefit from a charitable donation for that purpose made by someone else and will therefore be inclined to give less.
Knowing this, that donor will donate less because the value of his donation is diminished by my free riding on it.
There is even an argument that placing any restriction on how a person uses his money reduces incentives to earn, but it is hard to believe that this would be a big effect of eliminating the tax exemption for charitable donations to foreign countries.
Although charitable donations to foreign recipients will thus diminish if the tax deduction is repealed, there is likely to be some substitution in favor of domestic recipients, which will increase welfare in the United States; and that seems to me a good thing, especially in a depression.
(Of course, the depression may induce donors to reallocate grants to the United States because the wealth of Americans is less; but perhaps not, because the decline in wealth in the Third World is probably as great or greater.) Also, to the extent that total charitable donations fall, tax revenues will rise, which is also a good thing, given the enormous budget deficits that we face.
I suspect, moreover, that charitable donations to Americans create more utility than charitable donations to people in poor countries, because the latter donations reduce pressures for desperately needed political, economic, and social reforms.
I doubt that Mugabe would still be ruling Zimbabwe if the West did not provide extensive food aid to its starving population.
I have not seen any attempt at a rigorous analysis of the net benefits of charitable contributions to Third World nations.
There is also some danger that charitable donations to foreign countries undermine U.S. foreign policy.
But I do not put much weight on this factor, because the government can forbid donations to countries with which we are at odds, or to other countries where we think aid would undermine our foreign or security objectives.
An offset, moreover, is that donations from the United States, even if made by private individuals and foundations rather than by the government, build good will toward Americans.
I’ll describe the crisis briefly, then address two questions: whether the nations of the European Union, such as Germany, should try to bail out Greece; and what the Greek crisis tells us about what is in store for the United States.
In the easy-money years of the early 2000s—for which we have Alan Greenspan, other central bankers, and President Bush and his foreign counterparts to thank—the Greek government borrowed a great deal of money from banks, mainly in Europe, to fund its huge public sector.
Greece has chronic difficulty in funding its government expenditures out of tax revenues because of rampant tax evasion.
And its bureaucracy appears to be corrupt, incompetent, or (probably) both; as a result its published financial data are inaccurate and have misled, and continue to mislead, lenders.
The global downturn, which has driven up unemployment in Greece (to 10 percent) as elsewhere, has weakened the Greek economy further, but what has precipitated the country into de facto bankruptcy is the realization by lenders that Greece, like so many other countries, is dangerously overindebted.
Its national debt of some $400 billion, most of it owed to foreigners, is greater than its Gross Domestic Product, and its current annual budget deficit is almost 13 percent of GDP, which means that its indebtedness is growing rapidly.
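A short sketch of the debt arithmetic just described. The debt and deficit figures are the approximate ones cited above; the GDP level and the zero-growth assumption are purely illustrative, not estimates.

```python
# Rough sketch of the Greek debt dynamics described in the text.
# Debt (~$400B) and the 13%-of-GDP deficit are the figures cited;
# the GDP level and zero growth are illustrative assumptions.
gdp = 380e9           # assumed GDP, slightly below the debt level
debt = 400e9          # national debt, mostly owed to foreigners
deficit_share = 0.13  # annual budget deficit as a fraction of GDP
growth = 0.0          # assume no nominal growth, for illustration

for year in range(1, 4):
    debt += deficit_share * gdp  # each year's deficit adds to the stock of debt
    gdp *= 1 + growth
    print(f"year {year}: debt/GDP = {debt / gdp:.2f}")
```

Even over three years, the debt-to-GDP ratio climbs sharply, which is the sense in which the indebtedness is "growing rapidly."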
Greece, like other borrowers, has to roll over its debt continuously.
In 2010 it will have to replace some $65 billion in public debt, and fear of default has driven up the interest rate on new Greek government debt to 6 percent.
The Greek government has taken drastic-seeming measures to reduce its deficit.
It has imposed new excise taxes and increased existing ones, reduced wages and pensions of government employees and increased their retirement age, and reduced public services.
Greece has a huge public sector—40 percent of GDP is generated by the public sector, and 25 percent of Greek workers are public employees—and so the government can effectuate big reductions in public spending virtually by a stroke of the pen, though not without inciting riots.
Despite the measures taken by the government, it is desperately seeking financial aid from EU countries or failing that the International Monetary Fund: that is, it wants to borrow more money, and at lower interest rates than are available from private lenders, in order to avoid defaulting on its public debt or, alternatively, reducing government spending even more sharply than it is doing, with potentially serious political consequences.
Assuming that the Greek government, without foreign assistance, cannot avoid defaulting on its public debt because it has reached the limits of what the Greek people will accept in the way of austerity measures imposed by their government, there is not much difference between a default on the one hand and borrowing—whether from EU countries or from the IMF on what undoubtedly would be onerous terms—on the other hand.
Either way, Greece will be broke.
Default would be the cleaner and simpler solution.
A cascade effect from a Greek default can be avoided by EU nations’ bailing out any creditors of Greece whose failure, because of a Greek default, would have macroeconomic significance.
Default would be a wake-up call for the Greek nation and put it on the path to competent economic management.
A bailout of Greece would be administratively complex.
The Greek government would try, for compelling domestic political reasons, to substitute bailout money for cuts in spending and tax increases, and the bailers-out, whether EU nations or the IMF, would struggle to prevent the substitution.
Comparisons are being drawn between Greece and the United States.
Our national debt of $12.5 trillion is approaching our GDP of about $14.5 trillion and will probably exceed it within the next couple of years, as the debt seems likely to grow by about $1.5 trillion for several years to come, especially with the enactment of the health care bill and the spur that that enactment will impart to other spending programs.
Our annual federal budget deficit is now more than 10 percent of GDP.
We do not have the same tax-evasion problem as Greece, but we have low taxes and intense resistance to either raising them or reforming the tax system to obtain more revenue with less economic distortion.
Our public finances are transparent, so we will not slide into national bankruptcy inadvertently.
But we seem incapable either of cutting existing public spending or avoiding costly new public-spending programs.
What we have sustaining us is the status of the U.S. dollar as the major international reserve currency (plus the fact that, since our debt is in dollars, we can reduce it by inflation, though not without cost; Greece can’t do that because it doesn’t have its own currency).
Many international transactions are in dollars even when the transacting parties have no American connection.
(There are other reserve currencies, mainly the Euro and the Yen, but the U.S. dollar accounts for about two-thirds of the world’s total reserve currency.) If a Saudi Arabian oil company sells oil to Singapore, the sale will be in dollars, and this will require the central bank of Singapore to hold dollar reserves that it can exchange with Singaporean merchants for the local currency to enable those merchants to make purchases in dollars.
With the world’s central banks awash with dollars—and for the further reason that many foreign countries, such as China, Japan, Germany, and the oil-producing countries of the Middle East, export much more to the United States than they import from us and as a result accumulate large dollar balances—the United States can easily borrow to finance its public debt.
Greece doesn’t have its own currency, and so is in the approximate position of a private borrower.
This happy situation will enable us to avoid defaulting on our enormous public debt for the foreseeable future.
But it will perpetuate our fiscal improvidence.
I have little to add to Becker’s persuasive analysis.
Among the many reasons to regard the current economic situation as dire is the high incidence of long-term unemployment.
More than 40 percent of the unemployed have been unemployed for more than six months, and there are reasons to expect long-term unemployment to remain at or near its present level (or even rise) for some time to come.
One reason is shifts in consumption.
If consumption patterns remain unchanged throughout a downturn, or if the downturn is so short that when it’s over consumers resume their previous consumption pattern, then most of the unemployed can expect to be rehired in their old jobs, for which they are trained and in which they may have firm-specific human capital, making them more valuable to their old employer than they would be to a new one.
But if the downturn is protracted and consumers as a result make durable changes in consumption, many old jobs will disappear (“job destruction,” economists call this phenomenon).
The longer the downturn, moreover, the more young people will be competing for jobs when it ends, and this makes it difficult for the long-term unemployed to find jobs.
It is hard for older workers to compete with younger ones for new jobs in which the older workers have no specific human capital.
Other things being equal, employers generally prefer younger workers because they have less interest in unions (younger workers are less likely to remain in the same job, and unions therefore cater to older workers) and lower salary expectations, and are healthier and so do not cost as much in health benefits as older workers do (not that all employers offer health benefits, of course).
Older workers are more likely, moreover, to own their home and to have ties to their community.
With house prices severely depressed, homeowners are reluctant to sell because there is not enough equity in their house to enable them to buy another house.
This makes them all the more reluctant to search for a job in a different community.
The longer a worker is out of work, the less likely he is to get another job comparable to the one he held.
This is not only because skills atrophy if unused, and the worker ages during his period of unemployment, but also because employers may take the worker’s long layoff as evidence that he lacks commitment to working or that other prospective employers found something lacking in him, which is why he has gone so long without finding another job.
The long-term unemployed exert political pressure for extension of unemployment benefits, and their plight—they are likely to have used up their savings—triggers an altruistic response that makes their political pressure more effective.
But unemployment benefits delay re-employment by reducing the cost of search for a new job.
And the cost of unemployment benefits contributes significantly to our soaring federal deficits—unemployment benefits are expected to cost the government about $250 billion this year.
What can be done about the problem of long-term unemployment? Nothing that is politically feasible.
A job-subsidy bill is wending its way through Congress.
It is hard to see how it can have any effect.
Apart from the reason Becker gives, a job subsidy is likely to have a very indirect and limited effect on demand for goods and services, and without an increase in demand firms have no incentive to add workers even if a new worker’s wage is subsidized.
Suppose a firm is producing 1,000 units of output a year with a work force of 30, and it adds a 31st employee and thereby qualifies for a $5,000 tax credit.
The firm’s total costs will have risen by the wages and benefits that it pays the new employee minus $5,000, unless the new, cheaper worker enables the firm to obtain a greater saving because the worker substitutes for some capital input.
Unless that happens, the firm’s sales will not have risen, so participating in the job-subsidy program will reduce its profits (revenue minus cost).
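The profit arithmetic in this example can be made concrete. The $5,000 tax credit is the figure used in the text; the wage-and-benefits figure for the 31st employee is an assumption for illustration only.

```python
# Illustrative arithmetic for the job-subsidy example above.
# The $5,000 tax credit is from the text; the $40,000 annual
# wage-and-benefits cost is an assumed figure for illustration.
wage_and_benefits = 40_000  # assumed annual cost of the 31st employee
tax_credit = 5_000          # subsidy for adding the worker
extra_revenue = 0           # output and sales unchanged, as the text assumes

# Change in profit = change in revenue minus change in (net) cost.
change_in_profit = extra_revenue - (wage_and_benefits - tax_credit)
print(change_in_profit)  # negative: hiring reduces profit unless the
                         # worker substitutes for some other input cost
```

Unless the subsidized worker displaces some larger cost, any positive wage above the credit leaves the firm worse off, which is the point of the example.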
The only solution to the problem of long-term unemployment that would not impair the operation of the labor market would be, as Becker points out, rapid economic growth, which would increase the demand for labor by more than the annual increase in the number of persons in the labor force.
But it does not appear that any significant measures to accelerate economic growth are politically feasible.
Measures that might be effective, such as reforming the immigration laws, reforming the tax laws, reducing the power of unions, reducing the minimum wage, eliminating agricultural subsidies, fighting protectionism, and reducing government subsidies of pensions and health care, are off the political radar screen for now and the foreseeable future.
The filibuster is usually thought a peculiar institution unique to the U.S. Senate.
Actually it originated in the ancient Roman Senate, has a long history in the British Parliament, is found in the legislatures of other English-origin nations as well, and was at one time employed by members of our House of Representatives.
Until quite recently it consisted simply of a legislator, or group of legislators, refusing to yield the floor, thus preventing the legislature from conducting other business.
Strom Thurmond appears to hold the record, having spoken on the floor of the Senate for 24 hours uninterruptedly.
Before there was a rule of “cloture” (a vote to limit debate), filibusters could be defeated only by the majority’s remaining in session, ready to vote on the bill being filibustered, until the filibustering senators gave up, exhausted.
Since 1975, a vote of 60 senators (previously it had been 67) can limit debate and thus end a filibuster.
Filibusters have become increasingly common (and therefore cloture votes as well), and this is usually ascribed to growing political polarization.
But a simpler explanation is that because the Senate is busier than it used to be, the announcement of a filibuster is generally enough to impel a cloture vote—the majority doesn’t want to take the time to try to wear out the filibusterers.
If there are enough votes for cloture, the filibuster never takes place; if there aren’t enough votes, the majority gives up and abandons the bill that was to be filibustered.
Hence the cost of filibustering has plummeted.
The filibuster, especially in its present streamlined form, creates a supermajority requirement to enact federal legislation.
Supermajoritarianism is not unknown to the U.S. Constitution, which requires a two-thirds majority to overcome a presidential veto and a two-thirds vote to send a constitutional amendment to the states for ratification, with ratification by three-fourths of the states required for the amendment to be adopted.
But there is no supermajority requirement to enact ordinary legislation that the President does not veto, though the framers of the Constitution may have known that there were filibusters in the House of Commons and if so may have realized there could be filibusters in the Senate.
The Senate could abolish the filibuster by changing its rules to allow a simple majority to end debate on a bill.
It is true that Senate rules require a two-thirds vote to change a Senate rule, but it is possible that the two-thirds rule could be changed by a simple majority.
There is no pressure in the Senate itself to abolish the filibuster.
The reason is that it benefits all Senators, not just those who expect to be in a minority, because it arms every Senator to demand concessions in exchange for voting for cloture.
Several Senators exacted what seemed exorbitant concessions to induce them to vote for the health reform bill.
The usual criticism of the filibuster is that it is undemocratic, but this is imprecise, quite apart from the fact that the Constitution is riddled with undemocratic features (such as the amendment provision that I mentioned and the rules for the appointment and tenure of federal judges, not to mention the Electoral College and the entitlement of every state to two Senate seats regardless of population).
A supermajority requirement for the enactment of legislation should just increase the “price” that the committed majority must “pay” for the votes of the uncommitted or strategic holdouts.
If 49 Senators oppose or pretend to oppose some bill and threaten a filibuster, the majority needs to pry only nine of the opponents away from the opposition bloc to defeat the filibuster threat.
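The vote arithmetic in the last sentence can be laid out explicitly; the figures are those in the text (100 senators, a 60-vote cloture threshold, 49 opponents).

```python
# The filibuster vote arithmetic sketched in the text.
senators = 100
cloture_threshold = 60  # votes needed since 1975 to limit debate
opponents = 49          # senators who oppose, or pretend to oppose, the bill

supporters = senators - opponents                 # 51 committed votes
defectors_needed = cloture_threshold - supporters # opponents to pry away
print(defectors_needed)
```

The majority is only nine defectors short of cloture, which is why a 49-member bloc has to hold together perfectly for a filibuster threat to stick.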
The majority can offer concessions quite unrelated to the bill; alternatively, rather than “paying off” prospective filibusterers, it may be able to threaten to withhold support from them on issues more important to them than defeating the bill favored by the majority.
If the holdouts are members of the majority party, the leadership may be able to coerce them by threatening to deny them choice committee assignments.
And in fact historically the filibuster has rarely resulted in paralyzing the federal legislative process.
The usual example of where it did paralyze it is the filibustering of civil rights legislation by Southern Senators such as Thurmond and Byrd in the 1950s.
But it has been argued that the filibuster would have been overcome had not many Northern Senators been only lukewarm in their support of civil rights; and it does seem unlikely that the civil rights revolution could have come much earlier than it did.
What has awakened controversy over the filibuster is of course the election of Scott Brown as Senator from Massachusetts in place of the deceased Ted Kennedy.
It is assumed that since the Senate and House had each voted a health reform bill—in the Senate, over an attempted filibuster—and the Democrats retain a strong (59 to 41) Senate majority even after Brown’s election, were it not for the filibuster a health reform bill would now be law.
This is far from certain.
The Senate and House bills were different in a number of respects, and the differences would have had to be ironed out in conference and a single draft then have had to be approved by a majority in both the House and the Senate.
With the vote on the House bill having been excruciatingly close, and a majority of the general public being opposed to either bill, an attempt to enact a compromise bill might have foundered.
Conversely, a single bill may still be enacted, despite Brown, with the aid of the “reconciliation” procedure for thwarting filibusters by a simple majority vote.
That procedure is intended for bills designed to reduce federal debt, but has sometimes been used outside its intended scope—and by both parties.
The Administration appears to be desperate to obtain passage of a health law by hook or crook, as otherwise the capacity of the President and the Democratic Party to govern will be called into question.
There is an argument for the filibuster, and hence for a supermajority requirement, in the case of the health care program being pushed by the Administration.
Because the program is unpopular among the general public, its enactment by a simple majority in both Houses would raise a valid question about the representative character of Congress.
Not that a legislature should always bow to popular opinion.
The theory of American government is representative rather than direct democracy (the latter illustrated by the referendums in California and other states—the Constitution makes no provision for federal referendums), and the representative is intended to season his constituents’ opinions with his own judgment rather than act simply as a transmission belt.
But the health care program has been kicking around in Congress for a year, and the inability of its supporters to convince the public of the program’s wisdom, coupled with the program’s enormous cost and its potentially disruptive consequences for the health care industry—the largest in the United States, accounting for a sixth of our $14 trillion Gross Domestic Product—and indeed the entire economy, may make people question the democratic legitimacy of enacting the program with just a simple majority in the House and Senate.
Becker is certainly right that it would be a mistake to enact a further stimulus program.
With half the existing $787 billion stimulus package (which has grown to $862 billion while no one was looking—and that’s the figure I’ll use henceforth) that Congress enacted in February of last year still unspent, and given the sluggishness with which our government moves, a new stimulus program would probably not come on line until 2011 or 2012, by which time its principal effect might be to increase our already staggering public debt.
Anyway the question of a new stimulus package is thoroughly academic because of the extreme unpopularity of the existing one, which only 6 percent of Americans believe has had any positive impact on employment.
I think they’re wrong, and that the original stimulus was, on balance, a justified measure.
But I can well understand its unpopularity, and I share many of the reservations voiced by critics.
Because it is being financed by federal borrowing rather than by taxes, by the time the stimulus is fully implemented (probably early in 2011) it will have injected $862 billion into the economy: roughly $400 billion in 2009 and the same amount in 2010.
Each figure is a little less than 3 percent of GDP.
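The share-of-GDP figure in the last two sentences can be checked directly; the totals are those cited in the text, and the even split across 2009 and 2010 follows the "roughly $400 billion" in each year stated above.

```python
# Checking the "a little less than 3 percent of GDP" figure in the text.
stimulus_total = 862e9                   # total stimulus package
injection_per_year = stimulus_total / 2  # roughly half in 2009, half in 2010
gdp = 14.5e12                            # approximate U.S. GDP cited later

share = injection_per_year / gdp
print(f"{share:.4f}")  # just under 0.03, i.e. a little less than 3 percent
```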
The economic effect of such an injection depends on what is done with the money.
Suppose all the recipients used it to buy Treasury bonds.
Then its economic effect would be zero: the government would be disbursing the money and then borrowing it back from the recipients.
Obviously some, and probably much, of the money has been spent rather than saved, though no one knows how much.
Since the personal savings rate is less than 5 percent and some personal savings finance private business activities, probably almost all of the stimulus money has been or will be spent.
This does not mean that it is or will be well spent, in the sense of financing activities that add more to economic welfare than the same amount used for private investment would do.
But the stimulus has not reduced private investment, as it might do if the borrowing to finance the stimulus raised interest rates.
Interest rates have been kept very low by the Federal Reserve.
Despite that, private investment has been anemic; net of depreciation it was negative in 2009.
Banks and consumers alike—heavily indebted and pessimistic about profit and income prospects—have trimmed their expenditures.
Banks continue to hoard some $1.2 trillion in excess reserves (lendable cash sitting in accounts in federal reserve banks rather than being lent or otherwise invested), and the personal savings rate, which before the current depression was only about 1 percent, has increased dramatically.
In effect, the Treasury has borrowed from Americans money that wasn’t being used productively, and from foreigners (but mainly from Americans) money that they preferred to lend than to spend, and has recirculated the money into the American economy.
Consumption expenditures rose in 2009 at the same time that incomes were falling and saving was increasing because stimulus money financed consumption that otherwise would not have materialized.
An increase in consumption stimulates an increase in production (or at least a more rapid drawdown of inventories, so that production recommences sooner), which in turn increases the demand for labor and so reduces unemployment.
No one knows how many people are employed who wouldn’t be were it not for the stimulus money.
There are almost 15 million unemployed Americans, and since the unemployment rate is almost 10 percent, this suggests that about 135 million are employed.
If the stimulus, which as I said is injecting about $400 billion a year into the economy, has increased the number of employed by 1 percent, that would reduce the number of unemployed by almost 10 percent.
Or stated differently, were it not for the stimulus, the unemployment rate might be almost 11 percent rather than almost 10 percent.
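The arithmetic in the preceding paragraph can be laid out explicitly. All figures are the approximate ones cited in the text; the 1 percent employment effect of the stimulus is the text's hypothetical, not an estimate.

```python
# The unemployment arithmetic sketched in the text.
unemployed = 15e6         # almost 15 million unemployed Americans
unemployment_rate = 0.10  # almost 10 percent

labor_force = unemployed / unemployment_rate  # about 150 million
employed = labor_force - unemployed           # about 135 million

# The text's hypothetical: the stimulus raised employment by 1 percent.
jobs_from_stimulus = 0.01 * employed          # about 1.35 million jobs

# Without those jobs, their holders would be counted among the unemployed.
rate_without_stimulus = (unemployed + jobs_from_stimulus) / labor_force
print(f"{rate_without_stimulus:.3f}")  # close to 11 percent
```

A 1 percent gain in employment thus removes almost 10 percent of the unemployed, and the counterfactual unemployment rate comes out near 11 percent, as the text states.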
An unemployment rate of almost 11 percent would cause something akin to panic among businessmen, consumers, and politicians, with very bad consequences for the country.
So one can think of the stimulus program as a kind of insurance policy against potential economic and political unrest.
The stimulus has not, not yet anyway, “crowded out” private investment because there is so little demand for such investment at present even though interest rates are extremely low.
The Barro-Ricardian Equivalence hypothesis implies that people are reducing their consumption in anticipation of having to pay increased taxes in the future to repay the money borrowed to finance the stimulus.
There may be something to this, but probably not much, because no one knows the form and incidence of taxes or other measures (inflation, devaluation, curtailment of government expenditures) that will be necessitated by the borrowing that is financing the stimulus.
Probably most people take the view that sufficient unto the day is the evil thereof, rather than curtailing spending in the light of some unknown future prospect of having to pay in some form for their present consumption.
The studies by Robert Barro and others that find evidence to support the Barro-Ricardian Equivalence hypothesis are considered unpersuasive by most economists.
The biggest objection to the stimulus is that it adds almost a trillion dollars to our enormous and rapidly growing federal deficit, in a political setting in which measures to reduce the deficit—whether tax increases, spending cuts, inflation, or measures to stimulate more rapid economic growth—seem either politically infeasible or economically undesirable, or both.
Yet to the extent that the stimulus has increased production, employment, and therefore incomes, it has, by increasing tax revenues, offset some of the increment to the deficit that the borrowing to finance it has added.
The stimulus was poorly designed.
A lot of it went to states, and the stimulus supporters brag that it saved hundreds of thousands of state public sector jobs.
But without the stimulus the states might have preserved the jobs, or most of them at any rate, by cutting inessential state expenditures.
Any such cut would reduce the amount of money in circulation and therefore consumption and therefore production and therefore jobs, but how many, and when, is entirely unclear.
Moreover, a stimulus that saves public employees’ jobs directly and private employees’ jobs at best indirectly creates resentment among private employees who have lost or fear losing their jobs.
They think the government is in effect paying itself, or taking care of its “own” ahead of the broader public, although many of the public jobs saved (policemen, firemen, teachers) may be essential.
Federal financing of state employees’ jobs also retards necessary reforms of the swollen public-employee sector.
No effort was made to target the stimulus on industries and areas of the country in which unemployment is greatest; it is those industries and those areas in which the employment effect of the stimulus would have been maximized.
Indeed, stimulus moneys spent in areas or industries of low unemployment may not directly reduce unemployment at all, but do so only indirectly through the stimulus that spending imparts to production and hence employment.
Becker’s argument that the stimulus reflects Democratic Party priorities rather than national priorities is compelling.
The stimulus was also poorly executed: its direction was placed in the hands of Vice President Biden, who has no management experience, and he allotted only 20 percent of his time to the task.
The Administration should have hired an experienced manager, as it did to supervise the auto bankruptcies (which went quite quickly and smoothly), to oversee and expedite the stimulus.
And it has been poorly defended.
The critical public relations botch was Christina Romer’s prediction in January 2009 that the stimulus the new Administration was planning would keep the unemployment rate, then 7.2 percent, from rising above 8 percent.
With the stimulus, of course, the unemployment rate rose to 10 percent, though it has now fallen back to 9.7 percent.
It’s hard to get people to understand that trying to predict the effect of the stimulus was a chump’s game and that without the stimulus the unemployment rate could well be 11 percent.
Although the President is articulate and intelligent, and Romer and the other members of his economic team are competent and in some cases outstanding (Lawrence Summers, for example), none of them seems able to explain the theory behind a stimulus in words that people who are not economists or financiers can understand.
It doesn’t help that neither the members of the President’s team, nor Fed Chairman Bernanke, are gifted communicators, and that Bernanke, Treasury Secretary Geithner, and National Economic Council Director Summers, are implicated (Geithner, and especially Bernanke, deeply) in errors of policy that bear primary responsibility for the economic crisis—complacency, unsound monetary policy, and regulatory laxity.
The fact that so few Americans believe that the stimulus saved any jobs suggests a profound failure of communication on the part of the Administration, not to mention financial journalists and public-intellectual economists.
The bill has now become law.
Its length alone (some 2,700 pages, including the modifications made in the “reconciliation” process) precludes a full analysis within reasonable length limitations.
Although on balance I think the new law is a mistake, there are three things that can be said in its favor.
The first is that it is a genuine social experiment, and we are bound to learn a lot from it—about the size and elasticity of demand for medical services, the reliability of cost estimates by the Office of Management and Budget and the Congressional Budget Office, the reliability of advice given by health economists, the relative perceptiveness of liberal and conservative commentators, the ability of the federal government to manage a vast and highly complex program of social welfare, and, a related point, the relative efficiency of a lightly regulated market, versus a government-controlled market, in providing health services, and perhaps goods and services more generally.
Second, the very cost of the health-care program, which is likely to be far higher than predicted by its sponsors and not nearly offset by tax hikes, spending cuts, or economies in the provision of health care, may act as a wake-up call for the need for fiscal reform.
Greece is making real reforms in its economic system, because it has to; it’s broke.
If the new health law, piled on top of all the other measures that are causing the federal deficit to explode, causes real damage to the United States, the stage will be set for real reform.
Third, and related, the health law does contain some economizing measures, though fewer than the sponsors pretend.
(The best may be the tax on “Cadillac” health-insurance plans, which I discuss at the end of this comment.) For example, the requirement that all chains with more than 20 stores must publish calorie information on menus and signs may contribute to reducing obesity, though the effect will be modest.
But a typical measure included in the new law that is touted as economizing—subsidizing preventive care—is unlikely to reduce overall health expenses.
The reason is not only that preventive care is often more costly than treatment, especially because it tends to be repetitive (annual tests for this and that, for example), but also that it is consumed by the healthy as well as by the sick, and there are more healthy than sick.
Even the sick, moreover, are potential users of preventive care—to prevent the illnesses they don’t yet suffer from.
More important than these tepid economizing measures, however, is the pressure the Administration will be under to prove that the law really will save money.
Responding to this pressure may produce significant economies, or at least raise the public consciousness concerning the need and opportunity for reducing health costs.
That’s the bright side of the new health law.
The dark side includes the timing of the measure: the uncertainty that the health law and the deliberations leading up to it have generated for business and consumers alike has probably retarded our economic recovery from the financial crisis.
But the law’s biggest negative is its costs.
The $100 billion or so of annual subsidies that the law mandates is just the beginning, but it is an ominous beginning.
It is true that these are transfer payments, rather than costs in an economic sense; but they are federal transfer payments and so increase the federal deficit, which even without them is growing by more than $1 trillion a year.
The subsidies will grow at the rate at which medical costs grow, which is between 5 and 10 percent a year—much greater than any plausible estimate of annual economic growth.
Indeed, as Greg Mankiw has argued, the health bill is likely to reduce the nation’s annual growth by increasing the income taxes on the well to do.
The biggest cost is likely to come from the law’s effect on the demand for health care.
One effect that can be expected is that, if one assumes (plausibly) that the supply curve of health care is upward-sloping, meaning that unit costs increase as demand increases, adding 30 million people to the health-insurance rolls (whether private or Medicaid) will increase overall health-care costs by more than the percentage increase in the number of persons insured.
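The claim that total spending rises by more than the percentage increase in the insured population follows directly from an upward-sloping supply curve: when quantity grows, the unit cost grows too, so total cost grows superlinearly. A minimal numeric sketch, with all parameters hypothetical rather than estimates of actual U.S. health-care costs:

```python
# Hypothetical linear supply curve: the unit cost of care rises as the
# quantity demanded rises. Parameters are illustrative, not estimates.
def unit_cost(quantity: float) -> float:
    return 100.0 + 0.5 * quantity

def total_cost(quantity: float) -> float:
    return quantity * unit_cost(quantity)

q_before = 270.0              # stand-in for the currently insured (millions)
q_after = q_before + 30.0     # 30 million newly insured

pct_more_insured = (q_after - q_before) / q_before
pct_higher_cost = (total_cost(q_after) - total_cost(q_before)) / total_cost(q_before)

print(round(pct_more_insured * 100, 1))  # 11.1 -- percent more people insured
print(round(pct_higher_cost * 100, 1))   # 18.2 -- percent higher total spending
```

With these made-up numbers an 11 percent increase in the insured population raises total spending by about 18 percent; any upward-sloping supply curve produces the same qualitative result.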
A second demand-related cost effect will result from the fact that insurance, even with deductibles and copayments, drives a wedge between the cost of a service and its price, and so increases demand.
(It’s like a restaurant with a buffet: the marginal cost of eating all you want is zero.) Persons who are uninsured are deterred from consuming medical services in quantity—because of cost (they are billed for such services at very high prices and may be forced into bankruptcy if unable to pay), because of difficulty of obtaining quality service from charity hospitals or other “free” providers, or simply because, though they can “afford” insurance, they prefer to gamble on remaining healthy.
These persons, when they become insured, will increase their utilization of medical services, because those services will now be cheaper to them.
Health insurance may even induce some people to take worse care of their health: the lower the expense of treatment, the less benefit one derives from prevention, including nonmedical preventives such as a healthy diet, exercise, and avoidance of dangerous activities.
The additional costs of health care are likely either to be defrayed by higher taxes on upper-income people or avoided by reductions in the quantity or quality of medical services.
The idea that the costs of our health-care system can be significantly reduced by eliminating “unnecessary” treatment is as quixotic as the idea that the Pentagon budget can be significantly reduced by eliminating the “fat” in it.
One person’s “unnecessary” medical treatment is another person’s last hope for survival.
Cutting medical costs means reducing treatment, which will impair outcomes.
The only durable and culturally acceptable ways of reducing the nation’s health costs—which are, by comparison with other wealthy countries, excessive—are by eliminating the tax deductibility of employer-provided health benefits (and thus decoupling health insurance from employment, reducing the cost of insurance to the taxpayer, and discouraging overconsumption of medical services because the tax treatment of health benefits encourages employers to substitute them for wage increases), increasing deductibles and copayments in health-insurance policies in order to give people a greater incentive to take care of themselves, and changing Medicare from an entitlement program to a means-tested welfare program.
Unfortunately, there is no political support for any of these measures—although the heavy tax on “Cadillac” employer-provided health insurance, which is to go into effect in 2018, is a step in the direction of reducing the tax deductibility of employer-provided health insurance.
In practical effect, it will eliminate the tax deductibility of expensive employer-provided plans.
A union is a workers’ cartel.
Its goal is, by threatening to shut down the employer’s production by the workers’ refusing to work, to increase the workers’ full income, where “full” signifies that a worker’s income in economic terms consists not just of wages and benefits but also of the length of the workday, how dangerous and strenuous the work is, and job security.
Unions in the private sector are most effective in industries in which competition is weak and in industries that do not produce a storable commodity.
A good example of the latter is the airline industry; an airline cannot continue operating by selling from inventory produced before its workers went on strike.
A union can sometimes be of benefit to an employer by providing a check on abuse of power by supervisors, but the net effect of unionization on most employers is negative: unionization raises labor costs both directly and, by reducing the employer’s control over working conditions and job tenure, indirectly.
Unionization reduces, in short, the efficiency of labor markets, and exists only because of political pressures.
Between 1945 and 2009, the rate of unionization in the private sector fell from 45 percent to 7 percent, which is telling evidence of the inefficiency of unionization.
Historically, government employees were not permitted to unionize.
No more than any other employer did governments want their employees to be unionized, especially because governments provide mainly services rather than goods and so are highly vulnerable to work stoppages, particularly where essential services such as police protection are concerned.
When government was small, its employees were not numerous enough to obtain through the political process the right to unionize, but as the government sector grew, public employees became a more powerful interest group (they are a key constituency of the Democratic Party) and were able to obtain in many states and cities, and in many parts of the federal government, the right to unionize.
From very low levels in the 1950s, public-sector unionization grew by 2011 to encompass 36 percent of all public workers in the United States, prominently including teachers, police officers, firefighters, and postal workers.
Some public employees, such as police officers and firefighters, do not have the right to strike, and some states forbid strikes by other public employees as well, such as teachers, although in states that recognize teachers’ unions teachers generally do have the right to strike.
Even when public employees have no right to strike, the employer is required to bargain over wages and other terms and working conditions with the public employees’ union if there is one, and the employer’s duty to bargain provides some leverage to the unions in extracting favorable terms.
The net effect of public employee unions is difficult to gauge, however, because most public employees have considerable economic leverage, even without unionization, simply as a result of their status as voters—imagine if the workers in a private company could vote in elections for the board of directors.
And, partly to minimize political considerations in government staffing, government workers have long enjoyed a high degree of job security—more than most private-sector unions are able to negotiate for their constituents.
Nevertheless the recent political turmoil in Indiana, Ohio, and Wisconsin concerning the rights of public-employee unions suggests that such unions do make a difference: that they are a factor in the extraordinarily generous health and pension benefits that public employees receive, which are placing an immense debt burden on states and cities.
Moreover, the combination of the virtual disappearance of unions in the private sector with their attaining in the public sector a rate of unionization comparable to that of the heyday of unionization during the New Deal and World War II poses a political threat to public-sector unionization: workers in the private sector, who are the vast majority of all workers, ask themselves why their taxes should be supporting unionized public workers in jobs that confer job security, health benefits, and pension benefits unavailable in most private-sector jobs.
There is no good answer to that question except the raw political power of public employees magnified by their union rights.
The public-sector unions have overreached in much the same way that the United Auto Workers overreached in its dealings with the Detroit automakers.
The UAW concentrated on negotiating for generous health and pension benefits, which the automakers preferred to big wage increases because the benefits were deferred.
When the Detroit auto industry got into serious economic trouble because of foreign competition and the economic crisis that began in 2008, its benefits obligations became unsustainable and crushed the industry—and the union, which has become a mere shadow of its former self.
History may be repeating itself in the public sector, as taxpayers wake up to the fact that state and local governments are raising taxes and reducing services to pay for union-exacted benefits for public employees.
The earthquake-triggered disaster that has engulfed Japan is a textbook example of the public policy dilemmas posed by catastrophic risk (on which see my 2004 book Catastrophe: Risk and Response).
A catastrophic risk, in the policy-relevant sense, is a very low (or unknown but believed to be low) probability of a very large loss.
If the probability of loss is high, strenuous efforts will be made to avert it or mitigate its consequences.
But if the probability is believed to be very low, the proper course of action will be difficult to determine and implement, both as a matter of sound policy and as a political matter (to which I return in the last paragraph of this comment).
The relevant cost is the catastrophic loss if it occurs discounted (multiplied) by the probability of its occurring.
If that probability is believed to be very low, the expected cost may be reckoned to be low even if, should the loss occur, it would be catastrophic.
And if the expected cost is low but the cost of prevention is high, then doing nothing to prevent the risk from materializing may be the optimal course of (in)action.
Furthermore, even if the expected cost of catastrophe is high, the scale of the catastrophe—the loss that it inflicts—may be reducible to moderate proportions by responsive measures.
For example, while one way to deal with the risk of an epidemic is to vaccinate everyone, another is to quarantine the persons first infected, so that the epidemic is limited.
With these thoughts in mind, one can attempt a preliminary analysis of the Japanese response to the risk of the kind of disaster that occurred.
Earthquakes cannot be prevented or predicted, though they can be detected as soon as they occur; if an earthquake occurs in the ocean floor and as a result triggers a tsunami (a tidal wave), people in the path of the tsunami can receive a warning of minutes or hours, depending on the distance between the earthquake and populated areas.
Unlike the countries that border the Indian Ocean, whose populations suffered more than 200,000 deaths from a tsunami in 2004, Japan appears to have had a good early-warning system, as a result of which the number of deaths from the recent tsunami is in the tens of thousands rather than the hundreds of thousands.
The catastrophic effects of a major tsunami cannot be prevented by building seawalls; nor, in all likelihood, could the vulnerability of nuclear reactors to catastrophic damage from an earthquake or tsunami be reduced by moving the reactors inland, because they require access to large bodies of water for their normal operation, and it appears that the only large bodies of water available to Japan are the seas surrounding it.
But catastrophic damage to nuclear reactors could be reduced by building stronger containment vessels for the reactors.
The loss of life and the radiation hazards from damaged reactors could be minimized by creating a rapid response capability that could be mobilized as soon as an earthquake or tsunami struck.
Japan failed to create such a capability.
At considerable but not astronomical cost, Japan could have reduced the losses caused by the tsunami substantially by strengthening the containment vessels for its reactors and by creating a rapid response capability.
I do not know why it did not do so, considering that it is a wealthy, technologically highly sophisticated, and risk-averse society.
One possibility is that the probability of an earthquake within several hundred miles of Japan as violent as the one that triggered the tsunami (which turned out to be 9.0 on the Richter scale) was thought too slight to justify the cost of the protective measures that would have been necessary to minimize the consequences of such an earthquake.
Suppose the loss from a 9.0 earthquake would be $100 billion, but there was believed to be only a 1 percent annual probability of its occurring; then the expected annual cost would be $1 billion, and preventive measures that cost $1 billion or more per year would not be cost justified.
The 1 percent figure is arbitrary; but considering that Japan has repeatedly experienced earthquake-triggered tsunamis comparable to the recent one, that the Indian Ocean tsunami occurred only seven years ago, that on average a 9.0—or greater—earthquake occurs somewhere in the world every 20 years, and that Japan is in a region that is highly prone to earthquakes and tsunamis, 1 percent seems as good a guess as to its probability as any.
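The cost-benefit arithmetic of this example can be made explicit. The figures below are the hypothetical ones from the text (a $100 billion loss, a 1 percent annual probability), not estimates of Japan's actual exposure:

```python
# Hypothetical figures from the example in the text.
loss = 100e9              # loss if the catastrophe occurs, in dollars
annual_probability = 0.01 # guessed annual probability of occurrence

# Expected annual cost: the loss discounted (multiplied) by its probability.
expected_annual_cost = annual_probability * loss  # $1 billion per year

def cost_justified(annual_prevention_cost: float) -> bool:
    """A preventive measure is cost-justified only if it costs less per
    year than the expected annual loss it averts."""
    return annual_prevention_cost < expected_annual_cost

print(expected_annual_cost)     # 1000000000.0, i.e. $1 billion per year
print(cost_justified(0.5e9))    # True: $500 million a year is justified
print(cost_justified(1.5e9))    # False: $1.5 billion a year is not
```

The same comparison explains why doing nothing can be the optimal course: if the only available preventive measures cost more per year than the expected annual loss, a rational planner declines them even though the loss, should it occur, would be catastrophic.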
Strengthening the ability of its nuclear reactors to withstand a tsunami triggered by a 9.0 earthquake (or greater? 9.0 is not the ceiling—the Indian Ocean earthquake was 9.1 on the Richter scale) would be very expensive—Japan has more than 50 nuclear power plants, all on its coasts.
But how much would it have cost to have built them slightly inland, connected to the seas by canals? And the creation of a rapid-reaction capability would probably not be very expensive, although I have not seen cost figures.
It would not be surprising, however, if, as seems to be the case, Japan failed to take cost-justified measures to minimize the damage from a 9.0 or greater earthquake.
Politicians have limited time horizons.
If the annual probability of some catastrophe is 1 percent, and a politician’s horizon is 5 years, he will be reluctant to support significant expenditures to reduce the likelihood or magnitude of the catastrophe, because to do so would involve supporting either higher taxes or a reallocation of government expenditures from services that provide immediate benefits to constituents.
In principle, it is true, politicians would take a long view if their constituents did out of concern for their children and grandchildren.
But considering how the elderly cling to their social benefits, paid for by the young, including their own young, I doubt the strength of that factor; although I do not know enough about Japanese politics to venture a guess on whether politicians’ truncated policy horizons were indeed a factor in Japan’s surprising lack of preparation for responding promptly and effectively to the kind of disaster that has occurred.
A final factor may be that Japan, unlike the United States, does not have an independent nuclear regulatory agency.
I agree with Becker that the current uprisings are a momentous development in world affairs, but I have no idea how they are going to turn out and thus what their ultimate effects will be.
This may be another 1848: a wave of revolts swept over Europe, yet when the dust cleared everything was pretty much as it had been.
A populist revolt in a country without democratic traditions, even if it is successful in overturning the existing autocratic government, may be succeeded, as in the French, Russian, and Iranian revolutions, by simply a new form of autocracy.
Suppose, though, that whether or not the uprisings bring to power stable democratic regimes, the new governments will be more democratic than the old ones; then the question will be the effect of quasi-democracy on a nation’s economy.
It is possible to have democracy without economic freedom: India, democratic from the outset of its independence, illustrated this combination for decades.
Britain after World War II was democratic and socialist.
Economic freedom varies widely across democratic nations, and across autocratic ones, producing some strange rankings of economic freedom.
The 2011 Index of Economic Freedom, jointly sponsored by the Heritage Foundation and the Wall Street Journal, ranks Singapore second and Bahrain tenth, Norway thirtieth, Oman ahead of South Korea, the United Arab Emirates ahead of France and Hungary, Tunisia and Egypt ahead of Serbia and India, and Syria ahead of Ukraine.
It’s a crazy quilt from the standpoint of correlating democracy with economic freedom.
The wealthy countries tend to be democratic (with the principal exception of some of the small oil-producing countries), but are they wealthy because they are democratic or democratic because they are wealthy? In wealthy countries in which there is a reasonable equality of incomes across persons, people are self-confident and independent and so don’t like to be told what to do or say by government, and so there is strong pressure for liberty and democracy.
Poor countries are much less likely to be democratic, though there are exceptions, of which the most important is India.
It is doubtful that the masses that have revolted in the Middle East will be content with democracy or understand the benefits of economic freedom.
Most likely there will be a strong demand for redistribution of wealth, which in turn will foment strife with business interests, impair economic progress, and undermine the new governments, democratic or otherwise, of these nations.
Discontent may grow and may result in the installation of Islamist governments that will curtail such economic freedom as these nations have.
Or maybe not—I may be too pessimistic; the future of these countries is highly uncertain.
However, given the enormous pressure these countries will be under to maximize their revenues, if forced to guess I would guess that they will continue to rely on the international oil companies to produce and market their oil rather than imitate Mexico’s example and try to manage their oil industry themselves.
The Mexican expropriation took place in 1938 and was a distinctly (and defiantly) socialistic measure, whereas the current Middle East revolutionaries do not appear to be old-fashioned socialists eager to nationalize the means of production.
But it is perilous to make predictions when we do not yet know what elements of these societies will become politically dominant.
The March 6 issue of the New York Times contains an interesting article by the economist Paul Krugman entitled “Degrees and Dollars,” available online at http://www.nytimes.com/2011/03/07/opinion/07krugman.html.
In it Krugman challenges the conventional view that we need to invest more in education because “everyone knows that the jobs of the future will require ever higher levels of skill.” Krugman argues that since about 1990 “the U.S.
job market has been characterized not by a general rise in the demand for skill, but by ‘hollowing out’: both high-wage and low-wage employment have grown rapidly, but medium-wage jobs—the kinds of jobs we count on to support a strong middle class—have lagged behind.” He expects the trend to continue, noting a recent newspaper article about the growing use of software to do legal research, potentially replacing hordes of lawyers and paralegals engaged in document review in big cases.
He argues that computers are good at doing both cognitive and manual work that can be performed effectively by following rules, whereas work involving a high degree of discretion or imagination, ranging from writing poetry to inventing a new social network to running a corporation (not his examples—he doesn’t give any examples of the type of nonmanual work that he thinks will resist automation successfully), along with some manual labor (he instances truck drivers and janitors, but could add waiters and retail sales personnel), cannot be.
He argues that demand will grow for jobs in the categories in which work cannot be automated, and will decline for jobs in the categories that can be.
Hence, Krugman concludes, it would be “wishful thinking” to believe that “putting more kids through college can restore the middle-class society we used to have.” Instead, the focus should be on “restor[ing] the bargaining power that labor has lost over the last 30 years” and “guarantee[ing] the essentials, above all health care, to every citizen.” These are pathetic prescriptions.
Unionization spurs automation by making labor more costly relative to capital, and providing health care to every citizen will increase the tax burden on persons with middle-class incomes, and not just on wealthy persons, because the costs of universal health care are too staggering to be borne entirely by the wealthy.
But the more interesting question is whether Krugman is right to be pessimistic about the future returns to a college education.
The market disagrees.
If the market agreed with him, college enrollments would be plummeting because college is expensive, not only in tuition but also in opportunity costs—the income forgone by being in school.
College enrollments continue to increase relative to population, and, more important from a market perspective, more and more high-school students express a desire to go to college, even though if Krugman is right their college education will not produce lifetime earnings increments sufficient to offset the cost of tuition and the cost of their forgone earnings during their college years.
(Of course a high unemployment rate, by reducing the opportunity costs of college, reduces overall college costs and so helps to sustain high enrollments, but this is not a major factor driving demand for college education.)
The market could be wrong, or (though this is unlikely) the nonpecuniary benefits of college could be increasing faster than the pecuniary benefits are falling.
A high school student, and his parents, are hardly in a good position to predict the structure of the labor market in ten or twenty years.
Most people base most of their expectations for the future on simple extrapolation from the recent past.
Often that’s the best one can do in the presence of profound uncertainty.
But the economic crash that began in 2008 has made us more alert for discontinuities in economic trends.
The future doesn't always repeat the past.
Krugman may be right that computers will replace many midlevel jobs—the current example is employment by bookstores, which is being killed by Amazon.com.
Income inequality has been growing in America (illustrating that there are negative as well as positive trends) and may continue to grow.
One can even imagine the emergence of a large servant class, as in nineteenth-century England.
Technological advance continues at a dizzying rate; no one can be sure where it is leading us.
One reason for skepticism, though, is that Krugman may have too limited a conception of the benefits of a college education.
He may think that what a college education does is impart knowledge that the graduate needs either for his career or for the next stage in his education, such as law school or medical school.
But it may be that the greater value of college lies elsewhere: in providing association with people of above-average intelligence, in imbuing young people with general work skills (discipline, working under supervision, following directions, being evaluated, basic writing and speaking skills, some foreign language proficiency), in fostering ambition, and in providing information to future employers about job applicants that enables better matching of workers with jobs.
It is plausible that these noninformational benefits of a college education are valuable across a vast range of jobs and will enable the holders of those jobs to obtain good middle-class incomes.
Police officers, prison guards, firemen, noncommissioned officers in the military, sales personnel, nannies, tour guides, secretaries, hotel receptionists, IT staff, medical technicians, store managers, auto mechanics, and many other workers who do not require a college education for their jobs may nevertheless be of significantly greater value to their employer if they have such an education and may be compensated accordingly.
Finally, a college education may not only increase incomes in ways different from building up a stock of knowledge but also may reduce people’s living costs by making them more adept at household management.
But even if this is right (and Krugman wrong), it doesn’t follow that there should be massive increases in public expenditures on education with a view toward increasing the number of people who obtain a college education.
The number of people who have the IQ and character traits that would enable them to benefit, in the ways described above, from a college education is inherently limited, and maybe all or most of them will obtain a college education under existing conditions for financing such an education.
Recent scandals involving charges of plagiarism by professors and other writers treat plagiarism as (1) a well-defined concept that (2) is unequivocally deserving of condemnation.
It is neither.
Take the second point first.
The idea that copying another person’s ideas or expression (the form of words in which the idea is encapsulated), without the person’s authorization and without explicit acknowledgment of the copying, is reprehensible is, in general, clearly false.
Think of the remarkable series of plagiarisms that links Ovid’s Pyramus and Thisbe with Shakespeare’s Romeo and Juliet and Leonard Bernstein’s West Side Story.
Think of James Joyce’s Ulysses and of contemporary parodies, which invariably copy extensively from the original; otherwise the reader or viewer would not recognize the parody as a parody.
Most judicial opinions nowadays are written by law clerks but signed by judges, without acknowledgment of the clerks’ authorship.
This is a general characteristic of government documents, CEOs’ speeches, and books by celebrities.
When unauthorized copying is not disapproved, it isn’t called plagiarism, which means that the word, rather than denoting a definite, well-recognized category of conduct, is a label attached to instances of unauthorized copying of which the society, or some influential group within it, disapproves.
In general, disapproval of such copying, and therefore of plagiarism, is reserved for cases of fraud.
The clearest example is a student’s buying an essay that he then submits for course credit.
By doing this he commits a fraud that harms competing students and prospective employers.
Another clear example is the professor, or other professional writer, who steals ideas or expression from another professor or writer, and by doing so obtains royalties or tenure or some other benefit that he would not have gotten were the truth known; again, a case of fraud.
It is less serious than the student fraud, however, because it is more likely to be caught.
A student essay is not published and so will not be widely read.
A published work is quite likely to be read or brought to the attention of the author of the purloined work.
The easier it is to detect a wrongful act, the smaller the punishment required to deter (most of) it; this may be why, to the outrage of students, plagiarism by faculty tends to be punished less severely than plagiarism by students.
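The deterrence logic here is the standard expected-sanction model: what deters is the expected punishment, the probability of detection multiplied by the severity of the sanction, so a higher detection probability permits a milder sanction to achieve the same deterrent effect. A minimal sketch with hypothetical numbers:

```python
def sanction_needed(target_expected_sanction: float,
                    detection_probability: float) -> float:
    """Severity of punishment required so that the expected punishment
    (detection probability x severity) reaches the deterrence target."""
    return target_expected_sanction / detection_probability

# Holding the deterrence target fixed at 10 (arbitrary units):
print(sanction_needed(10, 0.8))  # 12.5  -- easy-to-detect act: mild sanction suffices
print(sanction_needed(10, 0.1))  # 100.0 -- hard-to-detect act: much harsher sanction needed
```

On this account, published (easily detected) plagiarism by faculty and unpublished (hard-to-detect) plagiarism by students can both be deterred, but only the latter requires severe penalties.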
Moreover, whereas student plagiarism has absolutely no social value, plagiarism in a published work may have such value.
If what is plagiarized is a good idea, the plagiarism creates value by disseminating it further than the original author may have done.
Moreover, the plagiarist may add his own input to the plagiarized idea and as a result produce a superior work.
I lumped together copying a professor’s work and copying the work of another type of professional writer, say a writer of popular history.
In both cases, the copying will probably be a copyright infringement.
In both cases, too, the copying will be a form of fraud.
What will differ in the two cases is the injury that the fraud inflicts.
In the case of the popular writer, the injury will be a loss of royalties or other fees, and will usually be negligible, unless the plagiarist is trying to produce a substitute for the work, rather than just enhancing a noncompeting work with incidental material from another book.
The academic writer will usually suffer no loss of royalties even if the plagiarized work is a direct substitute, because few academic writings generate royalties (textbooks are the principal exception).
But he may suffer grievously nevertheless, because recognition of original contributions is the key currency of academic reward and that recognition is blurred when someone fails to acknowledge another’s priority.
The contrast in this regard with judicial opinions is very striking.
Far from flaunting their originality, judges try to conceal it.
They like to pretend that rather than making up new law, they are merely applying existing law made by others.
So they do not complain at all if another judge or a law professor steals novel ideas that they have managed to smuggle, without acknowledgment, into some of their opinions.
Perhaps the most difficult current question about plagiarism concerns the managed book, or more broadly the use of research assistants or other aides in the creation of a book.
The term refers to a book in which the nominal author is actually an editor (an assembler and maybe a reviser) of work done by persons whom he has hired.
He is much like a movie director.
He presides over the composition of the work rather than being the composer.
The phenomenon is not new; according to An Unfinished Life, Robert Dallek’s recent biography of John F. Kennedy (a biography highly favorable to its subject, but not uncritical), Profiles in Courage was a managed book (not Dallek’s term, though).
Many judicial opinions are of this character.
It seems likely that many multivolume treatises by (that is, nominally by) law professors are managed books in which most of the actual writing is done by student research assistants, though I am guessing; I have no actual evidence.
Let me say, as someone who has written a number of books, that the idea of writing a managed book is not to my personal taste.
I think that the person who writes a first draft largely controls the final product, even if it is carefully edited by the author of the managed book.
But the issue of plagiarism has nothing to do with the taste of particular writers.
It is an issue of fraud.
So the question regarding the managed book is whether failure to disclose that most of the actual writing was done by persons other than the nominal author misleads readers to their detriment.
That depends mainly on the conventions, and hence expectations, of a particular field.
A professional historian who authored a managed book without disclosure of the fact would be committing a fraud because his fellow historians would think he’d written it himself.
At the opposite extreme, few lawyers care whether a judicial opinion is written by a law clerk or by the judge, provided they think it’s the judge’s decision (the bottom line, the outcome), which it almost always is.
In between is the legal treatise (the American legal treatise, that is), for it has long been the norm in Germany and other European countries for academic law books to be written by the assistant to the professor under whose name the book will be published.
That is not the norm in the United States.
I believe without knowing that the delegation of the writing of extensive portions of such works is recent, and much of the profession, including the treatise author’s colleagues, may be unaware of the trend (if there is a trend, of which I am not certain).
It would be prudent, therefore, for such treatise writers to acknowledge the coauthorship or first-draft responsibility of their students, in order to avoid a charge of plagiarism.
These were interesting comments.
One that particularly struck me is that a small country may lack enough human capital to man key offices in government, universities, the professions, and businesses optimally.
Suppose that there is a threshold number required to manage even a small government, university, etc., and that the percentage of people qualified for these managerial positions is very small.
That is not a problem for a large country but can be one for a small one.
It might follow that outstanding institutions would be found only in large countries.
One answer is that small countries can frequently free ride on the institutions of the large countries--even in government, by joining a federation or confederation.
A question was raised concerning my statement that Pakistan had split in two--the questioner thought I might have meant India.
British India indeed split, in fact into four pieces--India, Pakistan, Burma, and Ceylon (Sri Lanka).
I was (as another comment notes) referring to the fact that Bangladesh used to be East Pakistan but became independent after a war between India and Pakistan.
I agree with Becker that the costs to nations of being small have declined and that this decline, along with the dismantling of the colonial empires (mainly those of Great Britain and France), is a factor in the growth in the number of nations since World War II.
I am going to focus, however, on the benefits side of nation size.
Even if the costs of being small decline, unless there are benefits to being small one would not expect the decline to affect the number of nations, especially if we assume as we should that there are significant transitional costs to splitting up a nation.
The question of what determines the size or scope of a nation has parallels concerning the size of business firms and other private and public organizations, and even the size of animals.
In the case of a firm, size is determined mainly by the relation of size to average cost.
When the firm is very small, an increase in its size, by permitting greater specialization of its workforce, is likely to reduce the average cost of the firms output and thus make it more competitive.
But beyond some point the gains from specialization will be exhausted and average costs will begin to rise because of increased costs of control.
Effective control of a huge firm may require multiple layers of hierarchy, slowing and distorting information flows, although decentralization, as in a multidivisional firm like General Motors or General Electric, may enable the number of layers of supervision, and the associated costs, to be minimized.
Economies and diseconomies of scale (or scope--roughly, cost as a function of the number of products a firm produces, as distinct from the quantity it produces of a single product) in the conventional economic sense also play a role in the determination of the size of countries.
But other factors play a role as well, such as the advantage of size in defending against other nations.
Here the analogy is to animals.
Large animals are less vulnerable to predators than small ones are, and as a result tend to survive longer.
(I am speaking of the individual animal, not the species.)
Historically, size has been enormously important to national survivorship.
The nations that have disappeared completely, such as Prussia, Burgundy, the Republic of Texas, and the countless small kingdoms and principalities in Italy and Germany before the unification of those nations in the second half of the nineteenth century, have generally been small countries, though Becker is correct to note the continued survival of tiny niche countries, such as Monaco; this suggests that there is no minimum efficient size of a country, as there is of a steel producer.
Large nations, however, have frequently fissured, such as Austria-Hungary and the Soviet Union, suggesting the existence of diseconomies of scale in the market for nations.
Pakistan, a large but noncontiguous state, split in two.
South Africa lost Namibia, Indonesia lost East Timor, Ethiopia gained and then lost Eritrea, and so on.
What is new is that smallish nations, like Yugoslavia and Czechoslovakia, have also split; nevertheless, the splitting of small nations remains an infrequent phenomenon.
As Becker explains, with free trade the gains in specialization to a nation from having a large internal market diminish.
And changes in military technology have reduced the military value of a large population, though not of a large GNP, which is a function in part of population.
Nevertheless, if one glances over the entire history of nation formation and dissolution since the middle ages, one sees that the decisive factor has been the rise of nationalism.
Nationalism is the belief that national boundaries should follow the contours of a nation in the sense of a population that has a common language, race or ethnicity, religion, historical origin, or culture, at least if that population lives in a contiguous area rather than being a diffuse minority in a larger polity, as some nations in the sense just defined, such as Jews and Armenians, are.
The territorial nations of Israel and Armenia are limited to the areas in which the members of the ethnographic nation inhabit a compact, contiguous geographical area.
The greater the differences--in values, skills, language, and so forth--between two nations that inhabit adjacent territories, the fewer their common interests, and this complicates governance if they are made parts of a single territorial nation, in much the same way that corporate governance would be complicated in a firm that sold life insurance, diamonds, and hubcaps.
The added costs may be offset, however--in the case of nations, by defense considerations as well as by economic ones.
If barriers to trade make large internal markets important for economic growth, then different nations in the ethnographic sense may share a single territorial nation. As those barriers recede and the military value of a large population declines, we can expect the nationalist principle to prevail.
But this need not necessarily result in smaller nations.
The mergers of the two Vietnams and of the two Germanies, and the reincorporation of Goa into India and Hong Kong into China, are examples of post-World War II boundary changes that have increased the size of nations.
(And in all likelihood someday the two Koreas will be united and Taiwan will be absorbed into China.) It may be an accident that the number of nations in the world has increased since World War II.
The number could have declined if more ethnographic nations had been divided up among different territorial nations rather than being combined in single territorial nations.
The merger of the two Germanies may have been an economic mistake, as Becker persuasively argues.
But the diseconomies of scale in a nationalistic state, that is, a state with a homogeneous population (despite its racial, religious, and cultural heterogeneity, the U.S.
population is homogeneous compared for example to Belgium, with its sharp regional division between French-speaking and Dutch-speaking populations, or even Switzerland), are small within a broad range.
For just as a business firm can minimize diseconomies of scale and scope by decentralization, so a nation can greatly reduce those diseconomies by federalism.
As a result, a large nation like the United States is able to compete economically with much smaller nations.
In addition, its population size and consequent aggregate wealth enable it to achieve great military power, which prosperous small nations cannot do.
The analysis is incomplete, however, because one observes that many adjacent nations having a common language and culture do not merge: the U.S.
and Canada, for example; Mexico and the other nations of Central America; the Spanish-speaking South American countries; Germany and the German-speaking Swiss cantons; and the Arab nations of the Middle East and North Africa.
The explanation offered by Adam Smith for the American Revolution may have general application: within each ethnographic nation there is a governing class that anticipates greater benefits from ruling its nation than from sharing power with other elites within a broader territorial union.
I have been terribly remiss in responding to comments these past few weeks.
Let me try to make limited amends:
National Cultures: One comment helps to dispel the mystery of French productivity, by pointing out that regulation has shrunk the service sector in France relative to the rest of the economy, and service sectors tend to have the lowest productivity because they are so labor-intensive.
Lobbying: Three good comments bear on the puzzle that expenditures on lobbying seem very small relative to the potential gains.
One, which helps to solve the puzzle, is that huge contributions would be too conspicuous, and thus boomerang--it would be obvious to the politicians' constituents that some group or industry was trying to buy favorable legislation.
Another comment, which also helps to solve the puzzle, is that much less than the entire federal budget is in play in any given year: quite apart from entitlements, which consume a large part of the federal budget but are not subject to fundamental change from year to year, much of the federal budget is committed and cannot be altered by lobbying.
Lobbyists work at the margin.
The third comment, which cuts the other way, is that there are huge potential rents from legislative changes that do not affect the federal budget, such as a law making environmental regulations more or less stringent.
Tax Simplification: Taxes must not be viewed as mere revenue generators.
They are also means of regulation, for example of externalities.
I would like to see heavy taxes on carbon dioxide emissions.
But regulatory taxes are generally not part of income tax, where the tax-preparation expenses that were the focus of Becker's and my posts are mainly incurred.
Equality: I had emphasized product improvements as a source of real though not pecuniary increases in income that have helped to reconcile people to the fact that, for many of them, their money incomes have not risen in recent years.
One comment points out insightfully that if income is defined in terms of the services that are yielded by the products (and services) that we buy, it is much more equal than if it is defined in money terms.
The comment compares a Camry to a Lexus.
The Lexus is a better car, but it costs three times as much; is it three times better? No.
The 18-year-old Macallan (a single-malt Scotch) costs about twice as much as the 12-year-old, but the difference in taste is very slight.
This seems a general characteristic of luxury goods.
This is the sense in which people of widely different incomes can all consider themselves middle-class without being delusional.
In the wake of the Jack Abramoff scandal, measures are under consideration in Congress to restrict lobbying more than at present by requiring more lobbyists to register (and thus provide more information on lobbying activities to the interested public), by requiring more public disclosure of existing lobbyists' activities, and by forbidding lobbyists to buy meals for members of Congress.
Citizens' groups want much tighter restrictions on lobbying than anything Congress is contemplating, arguing that lobbying skews government policy.
Extensive restrictions have been placed on contributions to political campaigns, which are analogous to lobbying.
Republicans, who used to oppose efforts to restrict campaign contributions by PACs (political action committees), are now seeking to place restrictions on a type of PAC called a "527," which can accept unlimited contributions to engage in political advocacy, provided the 527 avoids supporting a candidate explicitly.
The Democrats, who were in the forefront of advocating limits on PACs, are opposing limits on 527s, which are primarily liberal.
Lobbyists provide information to members of Congress and other officials, and campaign contributions are used to sponsor political advertising, efforts to register voters thought likely to support the candidate on whose behalf the efforts are made, and other political activities, most of which are broadly informational in the sense of seeking to familiarize the electorate with the candidate and his program.
Hence restricting lobbying and campaign contributions is likely to reduce the flow of information to government officials and to voters, and this might seem a substantial interference with the political marketplace.
The main concerns about lobbying and campaign contributions are first that they are wasteful and second that they are a form of quasi-bribery and distort legislation and policy.
They are indeed wasteful in an arms-race sense: if one candidate (or industry) spends heavily on advertising, his competitors have to do likewise lest they be drowned out; the incremental information furnished the official or the voter may be slight.
This is less of a problem with lobbying than with campaign contributions.
Members of Congress and their staffs are spread very thin and would find it difficult to function without the information provided by lobbyists.
Most voters, in contrast, have very little interest in political information, even in hard-fought presidential campaigns, in part because they know that their vote isn't going to swing the election.
As for the distorting effect of lobbying on policy, it probably is slight.
Of course there are many examples of special-interest legislation that reduce overall social welfare, but there would be much special-interest legislation without any lobbying, since in a democratic society legislators have to be attentive to the preferences of influential constituents.
Much such legislation is, moreover, quite inconsequential from an overall social-welfare standpoint.
Liberal activists denounce "corporate subsidies," many of which consist simply of tax breaks.
The usual effect of giving a tax break is merely to shift the incidence of taxation--if one taxpayer pays less in taxes, another will pay more--with uncertain and perhaps often trivial effects on resource allocation.
Most economists consider taxation of corporations inefficient because its effect is to tax investors twice, so tax breaks for corporations probably increase social welfare.
The aggregate effects of lobbying, moreover, may be rather trivial.
This is suggested by the fact that annual expenses on lobbying Congress are only about $1.5 billion, even though the total federal budget is more than $2.5 trillion, and the regulatory powers of Congress place much of our $12 trillion economy under congressional sway as well.
There are two possible inferences to be drawn from the disparity between the amount spent on lobbying and the total economic rents that Congress could confer on lobbyists' clients.
One is that the marginal cost of influencing a member of Congress by a given amount rises very steeply.
Perhaps the first nice meal you buy him increases by .001 the probability of his supporting your pet rent-seeking project but you would have to buy him 10 nice meals to increase the probability of his supporting you by another .001, and so on.
The second and complementary possibility is that most members of Congress are not bribable, and that all that most lobbyists get for their efforts is access that enables them to furnish information useful to the member.
There is (returning to the previous point) only so much that one can spend on generating information; moreover, because information is relatively cheap to obtain and communicate to a small number of people, even relatively unorganized and impecunious groups who oppose a proposed project can provide offsetting information to the members of Congress.
The lobbying market should therefore be competitive.
Campaign financing presents graver issues than lobbying does because a member of Congress cannot be reelected unless he spends a substantial amount of money on his campaign, and the people who contribute that money, many of them anyway, expect something in return.
Even so, the effect can be exaggerated.
If both parties have roughly equal levels of financial support, a candidate doesn't have to change his political stripes in order to raise money; and donors who share his political views will not be asking him for something he doesn't want to give them.
In the 2004 presidential and congressional elections, total campaign expenditures were approximately $3 billion, a figure that reformers consider shockingly high.
It is actually low relative to the stakes in choosing a President and a Congress; it is only about one four-thousandth of GDP.
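As a quick back-of-envelope check of these orders of magnitude (a sketch using only the round figures quoted in these posts):

```python
# Round figures quoted in the posts; all approximate.
lobbying = 1.5e9         # annual spending on lobbying Congress
federal_budget = 2.5e12  # total federal budget
campaign_2004 = 3e9      # total 2004 campaign expenditures
gdp = 12e12              # U.S. GDP

print(lobbying / federal_budget)  # lobbying is roughly 0.06% of the budget
print(campaign_2004 / gdp)        # 0.00025: one four-thousandth of GDP
```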
I am not a Pollyanna when it comes to evaluating the U.S.
government.
I believe that it may well be quite incompetent to deal with the problems that the nation is facing in an era of profound global political insecurity interacting with the breakneck pace of technological change.
But government incompetence is better illustrated by the congressional reaction to the Abramoff scandal than by Congress's failure to enact "meaningful" campaign and lobbying reforms.
An intelligent legislature, learning of a scandal, would first want to determine the likely frequency and consequences of such scandals and the adequacy of existing law to limit their recurrence.
This inquiry would quickly reveal that Abramoff had pleaded guilty to criminal activity along with two congressional aides, that other members of Congress were under criminal investigation, and that an immensely powerful member (Tom DeLay) had been forced by the scandal to resign from Congress.
The inquiry would further reveal that the scandal was actually an artifact of a surpassingly foolish law, namely the Indian casino law, which by conferring enormous rents randomly on Indian tribes had generated rampant rent-seeking, frequently shading into bribery.
(Becker and I blogged about the law on January 9 of this year.) What the inquiry would not reveal would be a good reason for amending the lobbying laws.
An article by the economists Edward Lazear (now chairman of the President's Council of Economic Advisers) and James Poterba published in The Economists' Voice last December estimates the annual costs of preparing federal tax returns at $100 billion and, like Becker, uses this high figure as the basis for arguing for simplification of the federal income tax.
A difficult project that, as far as I know, has not yet been undertaken would be to estimate the actual savings from simplification.
Unfortunately, they might turn out to be modest.
H&R Block obtains total revenues of almost $2 billion a year from preparing tax returns for almost 20 million taxpayers, most of rather modest means and, presumably, rather uncomplicated returns.
The average expense of tax preparation to these taxpayers is thus $100.
The total number of federal income tax returns filed this year will be almost 140 million.
If one assumes that the bedrock expense of preparing each of these returns is $100, then a simplified income tax system would cost $14 billion.
This would represent a considerable saving over the present system, but the $14 billion figure is undoubtedly a gross underestimate in two respects.
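The estimate above can be reproduced in a few lines (a sketch using the figures in the post; the $100-per-return floor is, as the post goes on to note, a gross underestimate):

```python
# H&R Block figures quoted in the post (approximate).
revenue = 2e9             # annual tax-preparation revenue
returns_prepared = 20e6   # taxpayers served

avg_cost = revenue / returns_prepared   # ~$100 per return
total_returns = 140e6                   # federal income tax returns filed
floor_total = avg_cost * total_returns  # ~$14 billion under simplification

print(avg_cost)     # 100.0
print(floor_total)  # 14000000000.0
```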
First, it ignores the time cost to the taxpayer (emphasized by Becker) of obtaining, and forwarding to the tax preparer, the information needed to complete a tax return.
Second, drastic simplification would impose significant social costs.
There are compelling economic justifications for allowing some deductions or credits, examples being charitable contributions, expenses for the production of income, and foreign and other duplicative taxes.
Computing these items often involves unavoidable complications, such as how to value charitable gifts that are made in kind rather than in cash and how to determine when business expenses are really expenses rather than disguised income.
To the economically efficient deductions and credits must be added certain sacred-cow deductions and credits that aren't going away, of which the most attractive is the earned-income credit.
Moreover, even if there were no deductions, there would be bound to be complications in computing tax due on nonsalary income.
And some income that escapes taxation at present, such as the imputed rental income of owned housing, should be taxed in order to avoid distortions, and an attempt to do so would impose additional tax-preparation costs.
All this is not to suggest that tax simplification is not a good idea and would not produce genuine cost savings, though probably only in the 10 percent range.
Two measures that would tend to produce savings without simplification would be, first, not allowing tax-preparation fees to be deducted from income tax and, second, reducing marginal tax rates, since the higher those rates, the greater the benefit of efforts to find tax loopholes and hence the more cost that will be incurred in such efforts.
Because the potential benefits from tax simplification are likely to be modest, perhaps greater political effort should be devoted to trying to make the tax system more efficient in the sense of maximizing the ratio of tax revenue to the distorting effects of taxation on the allocation of resources.
An ideal tax is a tax on a good or service or activity that is inelastic (Adam Smith's example was a tax on salt).
Such a tax will not induce many people to substitute some other good or service or activity for the taxed one, and such substitution both is inefficient and reduces the revenue collected by the tax.
The extraordinary congressional reaction to the recent increase in retail gasoline prices, though distressing to economists, is not surprising.
Since the latter part of last year, the average retail gasoline price has risen from slightly over $2 a gallon to $3 a gallon, largely as a result of increases in the price of crude oil.
The rapidity of the increase in gasoline prices has made it difficult for many consumers to adjust by altering the amount of their driving; demand tends to be inelastic in the short run.
Suppose you drive 10,000 miles a year and have a modest income, say $40,000.
You probably buy about 500 gallons of gasoline.
If the price per gallon rises by $1 and you are able to reduce the amount you drive by only 10 percent and so buy 450 gallons, your total expenditure on gasoline will rise from $1,000 (500 x $2) to $1,350 (450 x $3), an increase equal to almost 1 percent of your income.
For people of modest income, such an increase in expense is palpable.
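The arithmetic of the example can be laid out explicitly (a sketch; the income, mileage, and prices are the hypothetical figures given above):

```python
# Hypothetical driver from the example above.
income = 40_000
price_before, price_after = 2.0, 3.0   # dollars per gallon
gallons_before = 500                   # ~10,000 miles a year
gallons_after = gallons_before * 0.9   # driving cut by only 10 percent

spend_before = gallons_before * price_before  # $1,000
spend_after = gallons_after * price_after     # $1,350
share_of_income = (spend_after - spend_before) / income
print(share_of_income)  # 0.00875: almost 1 percent of income
```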
The fact that in inflation-adjusted dollars the price of gasoline is roughly the same as it was in 1949 and much lower than it was in 1982, or that retail gasoline prices are twice as high in the United Kingdom (and several other European countries) as in the United States, is no consolation to these people.
Moreover, because people buy gasoline frequently, they are very conscious of changes in its price.
And there is so much publicity about oil and so close an identification of the Bush Administration with the oil industry that people are primed to think of gasoline prices as having a special economic and political significance, and to suspect that increases in such prices are a result of malign influences.
In fact the cause of the price spike is primarily, as I said, the increase in crude oil prices, and that increase is in turn primarily the result of rapid growth in demand for oil by China (now the world's second-largest consumer of oil) and India, a growth that has outpaced supply.
The notion that this represents a crisis--that the world is running out of oil--is ridiculous.
In the short run, with demand rising faster than supply, price rises steeply, producing "obscene" profits since roughly the same quantity is being sold at higher prices.
In the longer run, consumption falls as consumers search out substitutes; supply rises as previously uneconomical sources of oil become economical; and so profits fall back to a normal level.
One of the principal measures being mulled by Congress to respond to the pseudo-crisis--a $100 income-tax rebate to all federal taxpayers, even if they don't own cars or other vehicles and have incomes of $150,000 ($219,000 in the case of a couple filing a joint return)--has the virtue of simplicity. And since it does not affect the price of gasoline, it will not discourage, at least directly, efforts by consumers to economize, though if people think it signals that the government will help them pay for gasoline, they will have less incentive to reduce the amount of driving, switch to more fuel-efficient cars, or use public transportation.
As a measure for alleviating hardship, the $100 rebate is absurd because it is at once trivial in amount and not limited to low-income taxpayers.
Other proposals being considered by Congress would if adopted reduce the price of gasoline, as by cutting gasoline taxes.
Such measures would have worse effects on demand and prices: by increasing demand, they would drive prices back up.
But allowing more drilling, which is also proposed, would increase supply, though not immediately.
On the demand side, requiring that new vehicles have better gas mileage is similar to hiking gasoline taxes, by making cars more expensive.
But the effects are delayed, and it is a measure inferior to a tax because it prescribes one method of adjusting to higher gasoline prices rather than allowing consumers to choose how best to adjust.
Many consumers would rather drive less (substitute public transportation, telecommute, car-pool, move closer to work, etc.) than buy a more expensive car that gets better gas mileage.
From the broad national standpoint, we should welcome high gasoline prices because it is in the national interest to reduce our consumption of gasoline, and high prices will do that, dramatically so in the long run when more substitution is possible.
The burning of gasoline in vehicles creates pollution and emits carbon dioxide that contributes significantly to global warming; and curtailing driving in order to reduce the consumption of gasoline would alleviate traffic congestion.
Furthermore, a large part of the world's oil supply comes from nations such as Venezuela, Nigeria, Iraq, Iran, Saudi Arabia, and Russia that are actually or potentially unstable, hostile to the United States, or both, and it would be prudent to reduce our dependence on such suppliers.
And in fact output has fallen recently in the first four nations in the list, which has contributed to the price spike.
But the best way to keep gasoline prices high may be through heavy taxes, which might actually reduce the cost of oil and hence the incomes of the oil-exporting nations (which is in the U.S.
national interest to the extent that those nations are indeed hostile, as Iran notably is).
If, by increasing the price of gasoline, taxes reduce consumption, the price of oil will decline because the average cost of oil increases with the quantity produced.
Just as an increase in demand will cause higher-cost oil to be produced--oil that would not have been economical to produce when the market price was lower--so a reduction in demand will cause that higher-cost oil to be withdrawn from the market and so the average price of oil will fall.
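The mechanism can be sketched with a toy rising-cost supply curve (the numbers are purely illustrative, not estimates of the actual oil market):

```python
def marginal_cost(quantity):
    # Cheapest fields are tapped first, so the cost of the last
    # barrel produced rises with total output (hypothetical linear curve).
    return 20 + 0.5 * quantity  # dollars per barrel

# Price is set by the cost of the marginal (highest-cost) barrel sold.
untaxed_demand = 80   # barrels, illustrative
taxed_demand = 60     # gasoline taxes reduce the quantity demanded

print(marginal_cost(untaxed_demand))  # 60.0: price before the tax
print(marginal_cost(taxed_demand))    # 50.0: high-cost wells exit, price falls
```

The point of the sketch is that the tax revenue captured by the consuming nation comes partly out of the producers' price, not entirely out of consumers' pockets.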
In effect, income of the producing nations will be transferred to the consuming nations in the form of gasoline taxes imposed by those nations.
As Becker points out in this comment, higher taxes will dampen the incentive of oil companies to invest in exploring for and developing new sources of oil, since their net revenue from selling oil produced from such sources will be reduced.
However, I am unenthusiastic about creating incentives for producing more oil, because of my concern about global warming.
(See my book   Catastrophe: Risk and Response   [Oxford University Press, 2004].) Stiff taxes will put pressure on the energy industry to achieve technological breakthroughs (such as sequestration of carbon dioxide) that will greatly reduce the use of fossil fuels.
Unfortunately, a population ignorant of economics and suspicious of the Administration's motives probably cannot be brought to understand the social benefits of high gasoline prices and heavy gasoline taxes.
Becker explains the rising income inequality in the United States persuasively; I would add only that as society becomes more competitive and more meritocratic, income inequality is likely to rise simply as a consequence of the underlying inequality--which is very great--between people that is due to differences in IQ, energy, health, social skills, character, ambition, physical attractiveness, talent, and luck.
Public policies designed to reduce income inequality, such as highly progressive income taxation and middle-class subsidies, are likely to reduce the aggregate wealth of society, and therefore should not be adopted unless rising income inequality is a social problem.
Is it? That depends, I think, on average income (and hence on the wealth of society as a whole), on whether incomes are rising (at all levels), and on the particular way in which the income distribution is skewed.
The higher the average income in a society, the less likely is inequality to cause envy or social unrest.
The reason is that, given diminishing marginal utility of income, people who are well off do not have a strong sense of deprivation by reason of their not having an even higher income.
If, moreover, their income is rising, they are more likely to derive satisfaction from a comparison of their present income to their former income than to be dissatisfied by the fact that some other people‚Äôs incomes have risen even more.
In my book   Frontiers of Legal Theory  , ch.
3 (2001), I present empirical evidence supporting a positive correlation between political stability, on the one hand, and average, and rising, income, on the other.
It is true that progressive taxation and other income-equalizing policies are found in rich rather than poor countries.
But that is partly because poor countries lack the governmental infrastructure for administering complex policies and partly because these societies have powerful social norms of equality.
Studies of peasant societies find that "black" envy is widespread in them--that is, if your neighbor has a nicer barn than yours, you'd rather burn it down than exert yourself to build an equally good barn.
White envy, in contrast, better described as emulation, promotes economic growth.
As for the way in which a society's income distribution is skewed, if, though average income is high and rising, there is a very small, very wealthy, upper class, a tiny middle class, and a huge lower class, the society is likely to be unstable.
Because the majority of the population will not be well off, and the upper and middle classes will be small, there will be few defenders of the existing distribution.
The United States has a high average income, incomes are rising for most groups in the population--though  more slowly than for the wealthiest--and most of the population is middle or upper class.
It is therefore not surprising that rising income inequality has not generated noticeable social unrest or calls for return of heavy progressive taxation.
Moreover, when nonpecuniary income is taken into account, there is less inequality than the income statistics suggest.
In a democratic and rights-oriented society such as the United States, all citizens have a bundle of equal political rights (to the vote, to the free exercise of religion, to be free from unreasonable searches and seizures, and so forth), which are a form of income, and equal political duties, which are a form of expense.
Rich people as well as ordinary and poor are prosecuted for crime, and, as in the recent spate of corporate scandals, often punished very heavily.
What is more, income statistics do not record the enormous secular  improvement in the quality of products and services, and hence in the utility that purchases confer on consumers.
Think only of the extraordinary improvements in the quality of automobiles, medical care, and electronic products.
Americans whose income has not increased faster than the rate of  inflation are nevertheless living far better than they used to live.
They know this and it is one reason they are not clamoring for income redistribution.
A cultural factor that reduces the social tensions that might otherwise arise from a sharp and rising inequality of Americans' incomes is that the United States, unlike the countries of Europe, has no aristocratic tradition.
There is no suite of tastes, accent, bearing, etc., that distinguishes the rich in America from the nonrich.
The rich have more and better goods, but they do not act as if they were a "superior" sort of person, refined, well bred, looking down on the average Joe.
The rich play golf, but so does the middle class.
The middle class follows sports, but so does the upper class.
Finally, rising income inequality in the United States is due in part to increased immigration, since immigrants, legal as well as illegal, tend to work for lower wages than citizens.
Immigrants do not, however, compare themselves with wealthy citizens, but rather with the much lower wages they could expect to earn in their countries of origin.
Rather than immigrants envying wealthy citizens, many citizens are hostile to poor immigrants!
The "problem" of income inequality should not be confused with the problem of poverty.
The first, I have argued, is, at least in the United States at present, a pseudo-problem.
Poverty is a genuine social problem, because by definition it signifies a lack of the resources necessary for a decent life.
It is only tenuously if at all related to income inequality, since one could have zero poverty in a society in which the gap between the incomes of the worst-off and best-off members of society was huge--imagine if the poorest person in America earned $100,000 a year and the wealthiest $1 billion.
The more competitive and meritocratic a society, the more intractable the problem of poverty.
The reason is that in such a society the poor tend to be people who are not productive because they simply do not have the abilities that are in demand by employers.
It is unlikely that everybody (other than the severely disabled) can be trained up to a level at which there is a demand for his or her labor, and so there is likely to be an irreducible amount of poverty even in a wealthy society such as ours, unless we provide generous welfare benefits--which will discourage work.
In my recent post on health care reform, I mistakenly suggested that all poor people are eligible for Medicaid.
In fact, only about a third of uninsured adults are eligible, as eligibility is largely limited to (poor) parents, children, and the elderly.
The Pacific Research Institute takes issue with my criticism of its calculation of the costs of the U.S. tort system, and has asked me to give the link to its point-by-point response, which is at http://www.pacificresearch.org/press/opd/2007/opd_07-04-23lm.html, and also to its concise response (which I believe is less informative), which is at http://www.pacificresearch.org/press/opd/2007/opd_07-04-20lm.html.
Becker marshals convincing evidence that people who have more education have on average higher earnings and that the spread has been growing.
But it is a bit of a leap to conclude that there are high (and increasing) returns to education.
Correlation is not causation.
Suppose what are increasing are not the returns to education but the returns to intelligence, and suppose that people with high IQs both enjoy education more than other people do and are more likely to be admitted to college or a graduate or professional school because teachers prefer teaching (and learning from!) them and because good students are more likely (because they are more intelligent, not because they are good students) to be affluent, and therefore generous, alumni.
Now if this is correct, one might expect many intelligent people to bypass college, because it is so costly; but few do.
However, colleges and graduate (including professional) schools provide a screening and certifying function.
Someone who graduates with good grades from a good college demonstrates intelligence more convincingly than if he simply tells a potential employer that he's smart; and he also demonstrates a degree of discipline and docility, valuable to employers, that a good performance on an IQ test would not demonstrate.
(This is an important point; if all colleges did was separate the smart from the less smart, college would be an inefficient alternative to simple testing.) An apprentice system would be a substitute (and there is evidence that in Germany it is a highly efficient substitute), but employers naturally prefer to shift a portion of the cost of screening potential employees to colleges and universities.
Because those institutions are supported by taxpayers and alumni as well as by students, employers do not bear the full cost of screening.
These points are consistent with higher education being a good private investment, but do not suggest that it is either a particularly good social investment (it does improve matching of employees to employers, but at great cost) or that its value has much to do with the institution's educational program.
Another good that higher (or for that matter lower) education provides is the creation of social networks, consisting of the students who get to know each other.
They learn something from associating with other intelligent kids and they form friendships with them that may carry over into adult life and become business or professional relationships that enhance the graduate's income.
Again that would be a benefit having little or no connection with the school's educational program.
There may be little value added by the program to the contribution that attending college makes to a person's income.
Orley Ashenfelter, Alan Krueger, and many other economists have worried about the possibility that the correlation between education and earnings is not causal and have tried a variety of ingenious methods for correcting for differences in ability, such as comparing the earnings of twins who have different amounts of education, on the theory that they have similar native ability, or comparing the earnings of people who have different years of schooling just as a function of the arbitrary age cutoffs that determine when one starts school, or seeing whether an increase in the age at which students are permitted to drop out is associated with an increase in the earnings of that cohort compared with its predecessors.
Some studies correct for performance on standardized tests assumed to measure intelligence rather than knowledge.
Most studies find that education has a substantial effect on earnings independent of native ability, and the convergence is impressive.
However, the studies are convincing mainly about the benefits of precollege education.
I am skeptical that it should be a national priority, or perhaps any concern at all, to increase the number of people who attend or graduate from college.
Presumably the college drop-outs, and the kids who don't go to college at all, do not expect further education to create benefits commensurate with the cost, including the foregone earnings from starting work earlier.
This would be an entirely rational decision for someone who was not particularly intelligent and who did not anticipate network benefits from continued schooling because the students with whom he would associate would not form a valuable network of which he would be a part, either because he could not get into a good school, in the sense of one populated by highly promising students, or because if he did get into a good school the other students in the school would not consider him worth networking with.
This assumes that enticing the unwilling or the unmotivated to attend or complete college would not confer social benefits in excess of the private benefits (which I suggested in the preceding paragraph would probably exceed the private costs).
But the marginal students are unlikely to be kids who, with a little more education, would make the kind of contribution to society that a worker is unable to capture in his wage.
Nor are these marginal students likely to be educated into an interest in political and societal matters that will make them more conscientious voters or otherwise better citizens.
A study published last month, and favorably summarized in an op-ed in the Wall Street Journal, estimates that the American tort law system costs the nation $865 billion a year.
The study, entitled Jackpot Justice: The Cost of America's Tort System, was written by Lawrence J. McQuillan and other members of the staff of the Pacific Research Institute, which published the study.
(The study can be downloaded at www.pacificresearch.org.) How did the authors arrive at that figure, and is it meaningful?
They begin by estimating that the nominal (that is, dollar expenditures as distinct from social costs) annual cost of the tort system, consisting mainly of attorneys' fees and other costs of administering the system plus the amount of money paid to tort claimants in judgments and settlements, is $279 billion, of which $128 billion is the amount paid out to claimants.
The estimate comes from a report (U.S. Tort Costs: 2003 Update) by Tillinghast-Towers Perrin, a consulting firm for the insurance industry, with the report's estimate updated to 2006.
It is impossible to determine from Tillinghast-Towers Perrin's report what the sources for most of its data are, and so the figures I have quoted must be taken with a grain of salt; indeed, so far as I can tell, they may be completely unreliable.
They are almost certainly exaggerated, given the financial connection between the firm and the insurance industry.
The authors of Jackpot Justice know the difference between a cost, which in economic terms is a reduction in the amount of valuable resources, and a transfer of wealth from one person to another that doesn't reduce the total amount of resources but merely redistributes them.
The $128 billion figure is a transfer, not a cost.
But as the authors point out, the opportunity to obtain a wealth transfer can generate a cost--a cost incurred to obtain the transfer (incurring costs to obtain a wealth transfer, when socially unproductive, economists call "rent-seeking").
They assume, without analysis or evidence, that the entire $128 billion is translated into a cost.
They further assume that 28 percent of the $128 billion transfer represents a deadweight cost, that is, a loss of value.
They base this assumption on a study which found that increasing the corporate tax rate by $1 generates 28 cents in deadweight costs.
The basis of that finding was that a tax, like a monopoly markup, causes the taxpayer, like a consumer, to substitute for the taxed item or activity something that may cost society more to provide but looks cheaper because it's untaxed, or taxed at a lower rate.
The authors of Jackpot Justice do not explain why a tort transfer would have the same effect.
Of course the threat of tort liability might well alter the behavior of potential injurers--indeed, it is intended to do so--but it might alter that behavior in the direction of greater efficiency, by making potential injurers internalize accident costs.
That is the objective of tort law, though imperfectly achieved.
Without tort liability, firms would have weak incentives to invest in safety measures to benefit potential victims of the firms' activities, unless the victims were either their employees or their customers.
With the addition of 28 percent of $128 billion ($36 billion) to $128 billion in assumed rent-seeking expenditures, the authors jack up their estimate of the annual social cost of the tort system to $164 billion.
To this they add another $36 billion, on the assumption (it seems--this part of the report is none too clear) that efforts to avoid the deadweight cost will cost that amount.
This appears to be double counting, based on the assumption that $36 billion is spent every year in a futile effort to avoid a $36 billion cost.
It is possible, however, that some--maybe considerable--costs are being incurred to prevent that deadweight loss from rising from $36 billion to some higher level--for example, legal-counseling costs (not included in the attorneys' fees incurred in actual litigation) or costs in curtailed new-product development (see below).
But there is no basis for supposing that the sum of such costs would equal $36 billion; nor do the authors make any effort to defend the figure or estimate the actual costs.
They add to their new total of $200 billion the $128 billion transfer, for a grand total of $328 billion.
The addition is improper, since the transfer is not a cost.
They are adding apples and oranges.
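The report's arithmetic, as I have described it, can be retraced in a few lines; this is a sketch of the report's own steps (with its assumptions, not mine), figures in billions of dollars:

```python
# Retracing the Jackpot Justice cost build-up described above.
# All figures in billions of dollars; the assumptions are the report's.

transfer = 128                       # payouts to tort claimants: a transfer, not a cost
rent_seeking = transfer              # report assumes the entire transfer is dissipated in rent-seeking
deadweight = round(0.28 * transfer)  # 28% deadweight rate borrowed from a corporate-tax study -> 36
avoidance = deadweight               # assumed spending to avoid the deadweight loss (apparent double count)

subtotal = rent_seeking + deadweight + avoidance  # 128 + 36 + 36 = 200
grand_total = subtotal + transfer                 # re-adds the $128 billion transfer -> 328

print(subtotal, grand_total)  # 200 328
```

Laying the steps out this way makes the double counting, and the improper re-addition of the transfer in the last step, easy to see.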
They borrow from another study an estimate that tort liability generates some 2,000 accidental deaths a year by discouraging the introduction of risk-reducing products and services.
But they fail to offset against that figure (and its monetization) the number of accidental deaths that are prevented by the deterrent effect of tort liability.
In the absence of liability, potential injurers would spend less on safety, at least with regard to potential victims with whom the injurers lack actual or potential contractual relations.
The authors of Jackpot Justice cite a respectable economic study by Daniel Kessler and Mark McClellan which finds that malpractice liability increases hospital costs by 5 to 9 percent; and they treat the entire amount as social waste.
But as I said earlier, the aim of liability is to induce potential injurers to spend more on safety, and so the fact that they do spend more cannot be adjudged a failure to improve social welfare.
Medical-malpractice law is in its administration rife with inefficiency, but it would be surprising if eliminating it entirely would be all social benefit and no social cost (nor in fact do the authors argue for eliminating it, as we'll see).
The authors argue that products-liability law is responsible for the loss of $359 billion in new products.
They base this on a study by the economists Kip Viscusi and Michael Moore which found that when the ratio of liability costs to sales revenues exceeds 5 percent, firms reduce their investment in R & D.
The authors of Jackpot Justice identify product markets such as power tools, fireworks, and cigarette lighters where they believe (relying however on the questionable data in the Tillinghast-Towers Perrin reports) that this ratio is exceeded.
Then, using Viscusi and Moore's estimate that 6 percent of the products of the industries that the two economists studied are new products, Jackpot Justice multiplies the output of these markets by 6 percent; with certain other adjustments, this calculation produces the estimate of $359 billion in lost sales.
This is not, however, as the authors believe, a social cost.
The social cost is the consumer surplus that the sales of the new products would have produced.
Suppose that a product costs $10 and is sold in a competitive market for $10, but that consumers would pay $12 for it if it were priced at $12.
Then if the product is not produced, there is a loss of consumer surplus of $2.
That, not the $10 in lost sales revenue, is the social cost of not producing the product.
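The distinction between lost revenue and lost consumer surplus in this example can be made concrete (the $10 and $12 figures are the ones used above):

```python
# The $10/$12 example: lost sales revenue versus lost consumer surplus.
willingness_to_pay = 12   # the most consumers would pay for the product
competitive_price = 10    # price equals cost in a competitive market

lost_revenue = competitive_price                        # what the report counts: 10
lost_surplus = willingness_to_pay - competitive_price   # the true social cost: 2

print(lost_revenue, lost_surplus)  # 10 2
```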
The sum of $328 billion and $359 billion is $687 billion, which is almost $200 billion short of the authors' grand total of $865 billion.
The excess malpractice costs and accidental-death costs they estimate at less than $50 billion, so there is still a big gap.
I can't figure out how they fill it.
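The size of the unexplained gap can be tallied directly from the figures given above (billions of dollars; the $50 billion is my reading of the report's "less than $50 billion" as an upper bound):

```python
# Tallying the report's announced components against its $865 billion headline.
# All figures in billions of dollars, taken from the discussion above.

improper_subtotal = 328      # direct costs plus the improperly re-added transfer
lost_new_products = 359      # forgone sales of new products (not consumer surplus)
other_costs_bound = 50       # excess malpractice and accidental-death costs: upper bound

accounted = improper_subtotal + lost_new_products   # 687
unexplained = 865 - accounted - other_costs_bound   # at least 128 billion unaccounted for

print(accounted, unexplained)  # 687 128
```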
So far in the report, there is nothing about the benefits of the tort system.
To estimate those benefits, the authors compare the percentage of U.S.
GDP that is accounted for by our tort system with the percentage of GDP accounted for by the tort systems of other developed countries.
They base all these percentages on the dubious Tillinghast-Towers Perrin report.
The U.S.
percentage is estimated at 2.2 percent, twice Germany's and roughly three times Japan's and the United Kingdom's.
The average for the foreign countries in the comparison is 0.9 percent, so the authors of Jackpot Justice conclude that the benefits of the U.S. tort system are equal to only 0.9 percent of our GDP.
The possibility that our more costly system might generate greater benefits (though not necessarily equal to the greater costs) is ignored.
But a more serious weakness is the implicit assumption that a tort system generates benefits exactly equal to its costs.
It might generate much greater benefits.
Politics, which the authors assume lead to greater than optimal liability, might instead lead, in the countries they compare to the United States, to less than optimal liability.
And even if investment in the U.S.
tort system has been carried to the point at which the last dollar spent on the system generates exactly one dollar (no more) in benefits, the total costs may be far below the total benefits because average cost may be much lower than marginal cost.
This is apart from the possibility that politics may have prevented our investing enough in the tort system to equate benefits and costs at the margin.
The authors' estimate of the benefits (= costs) of the average foreign tort system, when subtracted from the $865 billion "cost" of our system, results (with some further adjustments) in an estimate of an annual excess of costs over benefits of almost $600 billion.
The figure, however--the authors' estimate of the net social loss created by our tort system--is, as I have tried to show, fictitious.
I share Becker's concerns with the favorable tax treatment of employee stock ownership plans.
Such treatment would be justifiable only if such plans conferred benefits on society that could not be generated more cheaply by other means.
Proponents of the law that authorized ESOPs and conferred favorable tax treatment on them argued that ESOPs would unlock a new source of capital--namely workers, who contribute capital to the corporations that employ them when they take part of their compensation in the form of participation in an ESOP.
But there is no shortage of capital, so no justification for subsidizing investment in corporate stock.
If anything, ESOPs can be criticized from an overall social-welfare standpoint as an antitakeover device that we do not need: workers are unlikely to vote for a takeover, as it might jeopardize their jobs.
As Becker points out, abolishing the favorable tax treatment of ESOPs would permit a market test of this form of corporate governance.
(In confining my discussion to cases of governance, I focus on situations in which, as in United Air Lines before its bankruptcy, or the proposed reorganization of the Tribune Company, the ESOP owns all or a controlling amount of the common stock of the corporation.) I believe that it would usually flunk the market test.
Granted, the ESOP has an advantage over the conventional worker-owned firm: the value of a firm's capital stock is the discounted present value of its expected future earnings, so that a worker who owns ESOP shares has, at least in his role as part owner, the same horizon as the corporation itself, rather than the truncated horizon of the worker in a conventional worker-controlled firm (a cooperative), who cannot benefit from anything the corporation does after he retires and who consequently has no financial stake in maximizing the corporation's present value.
But this advantage of the ESOP over the conventional worker-controlled form will usually be modest.
A worker will trade off any long-term benefits to the corporation from a corporate action that would increase the value of his shares against whatever short-term benefits, in the form of a higher salary or greater fringe benefits or a lighter workload, an alternative course of action would confer on him; and usually the tradeoff will favor increased compensation for work over increased stock value.
It is true that to be entitled to the tax benefits of the ESOP form, the workers' shares must be placed in trust, and the trustee must vote them to maximize share value; he cannot trade a lower share value for higher employee compensation of the worker owners.
(And so he cannot oppose a takeover that would maximize share value, even if it would do so by laying off many of the workers.) If the favorable tax treatment of ESOPs were abolished, there would be no requirement of placing ESOP shares in trust.
Placing the shares in trust would nevertheless remain an attractive choice for the ESOP, in order to reduce the misalignment of incentives found in conventional worker-controlled firms.
But overcoming the problem of incentive incompatibility would not create an affirmative reason for a worker to own shares in the corporation that he happens to work for rather than in some other corporation, a mutual fund, etc.
Becker rightly rejects the notion that having an ownership interest closely aligns a worker's incentives with those of the corporation.
Unless the corporation is very small, which obviously is not the case with United Air Lines or the Tribune Company, the efforts of an individual worker will not have a significant effect on the market price of the corporation's shares and hence on the worker's wealth.
Of course, some workers may not realize this (they may exaggerate the contribution that their working harder would make to the firm's bottom line); or they may, by virtue of being "owners," become altruistic toward "their" company; but such workers would be likely to buy shares in the company voluntarily (or take part of their compensation in the form of shares), without all the workers having to do so.
The ESOP has one genuine advantage over the conventional corporate form, an advantage that played a role in the decision to convert the ownership of United Air Lines to an ESOP.
It can smooth labor relations by increasing the cost to workers of striking or otherwise pressuring the corporation to incur greater labor costs.
Even though, as I have suggested, workers' work-compensation gains will usually exceed the losses in share value that will result from the corporation's greater labor costs, their demands will be moderated by the cost to them in lower share value.
This depends however on the shares being held in trust, so that the workers' interest as workers is not reflected in how the shares are voted; otherwise workers may use control of management to increase rather than moderate their demands for employee compensation.
But as I have said, the trust format could be retained even if it were no longer required.
Against the possible (tax-independent) advantages of the ESOP form stands the powerful disadvantage of underdiversification.
The shares in their employer's ESOP are likely to be the principal financial asset of the workers.
If they are risk averse, they will be bearing uncompensated risk by holding an underdiversified portfolio.
The consequences were dramatically demonstrated by the United Air Lines bankruptcy.
The trustee was sued for not having sold United stock before the collapse, but because the purpose of an ESOP is to hold stock in one company, namely the employer of the participants in the ESOP, an ESOP trustee does not have the usual trustee's duty of diversification; what exactly his duty is to protect the participants against excessive risk is unclear.
A further complication is presented by employee turnover.
An employee who quits and goes to work for some other employer cannot remain a participant in his former employer's ESOP.
His shares must be redeemed--but at what price? If the ESOP owns all the common stock in the employer, the fixing of a redemption value will be awkward.
If it is too low, this will reduce the value of the shares to other employees who anticipate quitting at some future time; if too high, it will reduce the value of the shares by diminishing the corporation's assets, out of which the price to redeem departing employees' shares is paid.
Still another complication is reconciling the competing interests of different classes of ESOP shareholder, such as active and retired employees.
To summarize, were it not for the favorable tax treatment of ESOPs, one would not expect the device to be common except in small corporations (and perhaps not even there, since the partnership and the closely held corporation provide attractive alternative governance forms) and in some firms that have particularly troubled labor relations.
The early Presidential campaigning season is replete with complaints, discussions, and proposals concerning the health care system, widely regarded as broken.
Democratic candidate John Edwards has proposed a comprehensive reform that he calls "Universal Health Care through Shared Responsibility," http://johnedwards.com/about/issues/health-care-overview.pdf.
Although the Edwards plan is commendable for its detail and clarity, there is an element of fantasy in Presidential candidates' proposing detailed, specific reforms, on any complex issue, so far before the election; for the feasibility of reform depends on economic and political conditions, including the political makeup of Congress, when the President takes office.
But, passing that point, the current concern about the health system, which generates plans such as the Edwards plan, may be misplaced.
It is true that health costs are rising faster than the inflation rate.
But rising costs, even of "essential" products and services, such as food, health care, and national defense, do not necessarily demonstrate the existence of a problem.
Costs may be rising because quality is rising, which is true of health care (new and better therapies and diagnostic tools), or because demand is rising (and average cost is not flat or declining), which is also true; as people live longer, their demand for health care rises because more health care is required to keep people alive and healthy the older they are.
In addition, much health care is in fact discretionary (cosmetic surgery is only one example; others are treatment for mild depression and other mild emotional or cognitive problems and treatments designed to enhance athletic ability), and demand for it can be expected to rise if quality rises relative to price.
It is also true that Americans spend much more on health care on average than the people in other wealthy countries do, without greater longevity to show for these expenditures.
But health care does much more than extend life; it alleviates pain, discomfort, disfigurement, limited mobility, visual and hearing impairments, and mental suffering, and it is not clear that foreign health systems, which also involve considerable costs in queuing, do these things as well.
In addition, the better a nation's health care is, the riskier the population's life style is likely to be, because the cost of obesity and other risk factors for disease is less.
Also misplaced is concern that the United States is becoming less competitive because employers pay for their employees' health insurance, rather than the government.
Employers do not really pay for their employees' health insurance; employees (and the taxpayer, who subsidizes employee health insurance) do, because by raising labor costs employee health insurance reduces the wages that employers are willing to pay.
Employers who committed themselves to assuming open-ended obligations for employees' (including retired employees') health costs have only themselves to blame for having assumed a risk that has materialized because of the rapid growth in those costs.
Nor is it a scandal, or even a serious inefficiency, that many millions of workers do not have health insurance.
Their health care is paid for by Medicaid if they cannot afford to buy health insurance, whether directly or as part of an employee group health insurance plan.
Those who can afford health insurance but forgo it are either young and healthy or risk preferrers, gambling that they will avoid illnesses requiring expensive treatment.
If their gamble fails, they will have to pay out of their pockets, and when their pockets are empty (their assets depleted) go onto Medicaid, where their care will be subsidized.
Although there is no reason why people who can afford health insurance should be treated at the taxpayer's expense, the effect on the aggregate cost of health care may be neutral or even negative, because Medicaid patients receive less pricey treatment on average than patients who have health insurance.
If the young and healthy who today choose to go without health insurance were forced to buy health insurance, there would be two potentially offsetting effects: the average cost of health insurance would fall because the insurance pool was getting an influx of people with below-average expected health costs (though this assumes that the insurance companies wouldn't be permitted to charge lower rates on the basis of age and health, in which event the influx of young and healthy would not affect the rates charged the existing members of the insurance pool); at the same time the average cost of insurance would be rising because aggregate demand was rising, since being forced to have health insurance the newly insured would consume more health care in order to get their money's worth.
Some people are uninsured because of lack of "portability." Suppose a person insured under his employer's group policy leaves the job, and cannot find another one; or suppose that while working for that employer he had contracted a serious chronic illness.
In either case, he may find it impossible to buy health insurance at a rate that he can afford.
But the idea that such experiences demonstrate a market failure comes from a misunderstanding of the economic function of insurance.
The economic basis of insurance is declining marginal utility of income: the more money we have, the less utility (pleasure, happiness) an additional dollar would confer.
Suppose your life savings is $2 million, and you are asked to flip a coin: heads you win $3 million, tails you lose your $2 million.
The expected value of the bet is $500,000 ($3 million x .5 - $2 million x .5), and thus is positive; but most people would refuse the bet, because the pain of losing their entire savings of $2 million would exceed the pleasure of winning $3 million.
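The logic of refusing a positive-expected-value bet can be sketched with any concave utility function; here I use square-root utility purely as an illustrative form of declining marginal utility of income, not as a claim about actual preferences:

```python
import math

wealth = 2_000_000               # current life savings
win, lose = 3_000_000, -2_000_000

# Expected monetary value of the coin flip: positive ($500,000).
ev = 0.5 * win + 0.5 * lose
print(ev)  # 500000.0

# With square-root utility (a stand-in for declining marginal utility of income),
# the expected utility of taking the bet falls short of standing pat.
def u(w):
    return math.sqrt(w)

eu_bet = 0.5 * u(wealth + win) + 0.5 * u(wealth + lose)  # 0.5*sqrt(5e6) + 0.5*sqrt(0)
eu_pass = u(wealth)                                       # sqrt(2e6)
print(eu_bet < eu_pass)  # True: the risk-averse person refuses the bet
```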
The implication for health insurance is that such insurance should be limited to catastrophic long-term illnesses that would drain an individual's or a family's resources and thus be equivalent in the example to losing $2 million.
Such cases are rare, and therefore such insurance would be cheap, especially for young people, who would be unlikely to encounter such a catastrophe for many years, during which the insurance company would be earning interest on the premiums paid by the insured; so the premiums would be low.
Someone who purchased such insurance at the outset of adulthood would not have to worry much about the lack of portability of an employer's group health insurance because he would have his catastrophic coverage even if he switched jobs or had no job.
Nor finally should we think it a scandal that such a large percentage of the Gross Domestic Product (about $2 trillion out of a total of $14 trillion) goes for health care.
One reason it is so large is that other expenses of living, such as food and clothing, have become such a small fraction of overall spending.
The question should be not whether the percentage of spending that goes to health is increasing but whether there are better things to spend some of the $2 trillion on than health care.
Notice that liberals' concern with the increasing percentage of expenditures on health parallels conservatives' concern with the increasing percentage of GDP that is spent on government services.
In both cases the proper concern is not the percentage of overall spending that goes for a particular class of services but whether there are better uses for some of the money being spent for those services.
There are two ways to reduce the aggregate cost of health care, if this is considered a worthy objective, as I am inclined to doubt.
One would be to ration demand.
If the supply curve for health care is upward sloping, as undoubtedly it is, then capping demand would result in lower prices by forcing the market down the supply curve.
But rationing demand would be fiercely resisted by patients, for obvious reasons.
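With illustrative linear supply and demand curves (the numbers are my assumptions, not data from the post), capping quantity slides the market down the upward-sloping supply curve and lowers price:

```python
# Illustrative linear market for health care.
# Inverse supply: p = 10 + 0.5*q  (upward sloping)
# Inverse demand: p = 100 - 1.0*q

def supply_price(q):
    return 10 + 0.5 * q

def demand_price(q):
    return 100 - 1.0 * q

# Unconstrained equilibrium: 10 + 0.5q = 100 - q  ->  q = 60, p = 40.
q_star = 60
p_star = supply_price(q_star)

# Ration demand to q = 40: the price needed to call forth supply falls.
q_cap = 40
p_cap = supply_price(q_cap)

print(p_star, p_cap)  # 40.0 30.0: the cap lowers the market price
```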
The second way to reduce aggregate health costs would be to force down the price of treatment by exercise of the government's potential monopsony power.
Suppose all doctors were employed by the government.
Then their wages would be low because if you wanted to be a doctor, as many people do, you would not have any alternative to accepting the government's wage.
Of course the quality of care would decline.
Or suppose (and this is the direction in which some of the current proposals tend) that the government bought all the drugs that are produced, having forbidden the drug companies to sell to any other purchaser.
Then the price of drugs would be much lower than it is today, but so would be the quality, since the incentives for innovation would be diminished by the lower price.
We should be wary of proposals that if adopted would not reduce (and might increase) aggregate costs, but instead would shift the costs to another class of payers, such as taxpayers (the Edwards plan contemplates additional federal subsidies for health care, which are paid for out of taxes) or future consumers of drugs.
The war in Iraq is intensely unpopular, disfavored by a strong majority of Americans, and fiercely opposed by the far Left.
The President is also highly unpopular.
The situation thus resembles the situation with respect to the Vietnam war in 1968 after the Tet Offensive.
So why are there no violent protests, as there were in 1968 and indeed until the United States withdrew its troops from Vietnam? The obvious answer is that there is no longer a draft; all the U.S. soldiers in Iraq are volunteers.
But I do not consider that a sufficient answer, quite apart from the facts that only about a third of the persons drafted during the period of the Vietnam war served in Vietnam and, more important, that there were abundant escape hatches for persons of draft age who wanted to avoid military service altogether.
Most of these involved continued education, and protesters were drawn disproportionately from the educated class.
What is more, many of the protesters were either women or too old for the draft.
Still another source of doubt that the draft was solely responsible for the scale and virulence of the Vietnam protests is that, partly because all our soldiers today are volunteers, they are more popular than soldiers in the Vietnam era were, and casualties among them therefore arouse even greater sympathy.
Indeed the military as a whole is one of the most respected institutions in America today, which was not true in the 1960s.
Another puzzle is that although Lyndon Johnson was intensely unpopular with the Left, it was only on account of the war; he was a liberal in domestic policy.
George Bush is unpopular with the Left in all respects, not just the war, and so one might think him a more attractive target for protesters.
Another possible explanation for the difference in public reaction is that U.S. casualties in Iraq are far lower than they were in Vietnam.
Almost 15,000 U.S. troops were killed in action in Vietnam in 1968, whereas the death toll of U.S. troops in Iraq is currently only about 1,000 a year.
However, there is much greater sensitivity to casualties now than there was in the earlier era.
The very low U.S. death rates in the invasion of Afghanistan and in the two invasions of Iraq make the slightly more than 3,000 U.S. deaths in Iraq since the completion of the 2003 invasion seem shockingly high by comparison.
I believe that five factors are at least as important as the end of the draft or the lower casualties in explaining the absence of violent protests against the Iraq war.
The first is that the opponents of the war in Iraq have the support of one of the two political parties.
Lyndon Johnson was of course a Democrat, and the Republican Party did not oppose the war (the Democrats were divided).
The Left knows that violent protests against the war would weaken Democratic Party opposition and the likelihood of a Democratic President's being elected in 2008.
Moreover, they have less need to protest because they are aligned with a powerful political force.
Stated differently, protests would have a modest incremental effect on ending our military involvement in Iraq, and perhaps even a negative effect.
Second, the opportunity costs of time are higher today than they were in the 1960s and early 1970s for potential protesters.
This is partly because of higher wages, especially for educated people, and the fact that a higher percentage of women are employed.
The greater competitiveness of the economy discourages people from taking risks with their careers by protesting.
It discourages college students as well as the employed, because someone who gets the reputation in college of being a violent protester, or is suspended or simply gets very low grades because of the distraction of engaging in protest activities, will see his opportunities for a good job diminish.
Third, the great expansion of the electronic media, including the advent of blogs, gives people outlets to blow off steam that are much cheaper, in cost of time, than street demonstrations or acts of violence.
The electronic media enable a message to be communicated to far more people than street demonstrations do, and at lower cost, so one expects substitution in favor of the media.
Fourth is a learning factor.
The violent protests against the Vietnam war probably did not shorten the war, but instead helped Nixon become President.
Taken together, these four factors suggest that the costs of violent protests have risen, and the benefits fallen, since the 1960s; hence the lower level of protest today, despite the parallels between the protracted, seemingly stalemated, Iraq and Vietnam wars.
But there is a fifth factor, cultural rather than economic or easily expressed in economic terms: For many of the Vietnam war protesters, the war was a symbol of what they believed to be deeper and broader problems with the United States and the entire Western world.
They thought the "system" rotten and entertained Utopian hopes of overthrowing it and substituting a socialist or anarchist paradise.
This belief gave the war more resonance as a target.
Partly because of the collapse of communism, partly because of greater prosperity, few Americans are hostile to the American system.
Most blame the Iraq war on the incompetence of the Bush Administration rather than on some more pervasive social or political pathology.
This tempers their anger and their willingness to take career risks by engaging in protests against the war.
It used to be thought more widely than it is now that in a competitive market, the compensation of workers, on the assumption that it is left to the market, will be efficient.
That of course is a major assumption, given unions, minimum wage laws, laws against employment discrimination, and other regulations of employment.
But such regulations do not bear significantly on the employment of executives and professionals, and it is they with whose compensation I shall be concerned.
Shouldn't we expect that they at least--corporate executives, lawyers, and other elite workers--are efficiently compensated, provided their employers operate in a vigorously competitive market? Most markets nowadays are competitive, the exceptions being natural monopolies (that is, markets in which economies of scale are obtainable over the entire range of feasible output), and fewer and fewer markets are naturally monopolistic.
The answer should be yes, but increasingly it seems, as a matter both of theory and of evidence, that to implement efficient methods of compensating executives and professionals is extremely difficult, and maybe as a practical matter impossible for a free-market system to accomplish.
There is a long-standing concern that corporate executives are more risk averse than a corporation's shareholders, because the latter can eliminate firm-specific risk by holding a diversified portfolio, while the former cannot, because they have firm-specific human capital that they will lose if the firm tanks.
The solution to this problem was thought to consist in making stock options a large part of the executive's compensation, so that his incentives would be closely aligned with those of the shareholders.
True, because he would bear more risk, he would have to be paid more in total compensation than if he did not receive a large part of his compensation in the form of stock options.
But the cost to the corporation of the additional pay would presumably be offset by the gain to the shareholders from the executives' enhanced incentives to maximize shareholder wealth.
But we are beginning to realize that the grant of stock options may make corporate executives take more risks than the shareholders desire.
Suppose that instead of being compensated for bearing risk just by being paid a higher salary or given even more stock options, the executive is guaranteed generous retirement and severance benefits that are unaffected by the price of the corporation's stock.
Now he has a hedge against risk, and can take more risks in operating the corporation because his personal downside risk has been truncated.
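A toy Monte Carlo sketch of the incentive (all parameters are assumptions invented for illustration): an executive paid in stock options, with a severance floor insulating his downside, does better in expectation when the firm's strategy is riskier, because the option payoff rises with volatility while the floor caps his losses:

```python
import random

random.seed(0)

def expected_executive_payoff(volatility, n=100_000):
    """Toy model: pay = stock-option payoff max(S - K, 0) plus a fixed
    severance floor that is unaffected by the stock price."""
    strike, floor = 100.0, 20.0
    total = 0.0
    for _ in range(n):
        s = random.gauss(100.0, volatility)  # simulated terminal stock price
        total += max(s - strike, 0.0) + floor
    return total / n

low_risk = expected_executive_payoff(volatility=10.0)
high_risk = expected_executive_payoff(volatility=40.0)
print(high_risk > low_risk)  # True: the hedged, optioned executive gains from risk
```

Shareholders, by contrast, hold something closer to the stock itself, whose expected value here is unchanged by the added variance; the divergence in payoffs is the misalignment the text describes.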
Perhaps this was a factor in the recent stock market bubbles--the one that burst in 2000 with the crash of the high-tech stocks and the one that burst this year as a result of the collapse of the subprime mortgage market and the resulting credit crunch.
A bubble is both a repellent and a lure.
It is a lure because during the bubble values are rising steeply, so an investor who exits before the bubble has peaked may be leaving a good deal of money on the table.
He will be especially loath to do that if he is hedged against the consequences of the bubble's eventual bursting.
Boards of directors could devise compensation schemes that limited the attractiveness of risky undertakings, but they have little incentive to do so.
The boards tend to be dominated by CEOs and other high corporate executives of other firms, who have an interest in keeping executive compensation high. They are abetted by compensation consultants, who naturally recommend generous compensation packages to directors who are themselves recipients of generous compensation and therefore believe that the CEOs of the companies on whose boards they sit should be paid top dollar.
It is not clear what the free-market antidote to this tendency to ratchet up executive compensation is.
The compensation of the CEO and other high officials of a large corporation is usually only a small part of the corporation's costs, so shaving such compensation is unlikely to be a powerful competitive weapon.
But more important, what rival corporation would have the governance structure that would enable such shaving to be accomplished by overcoming the obstacles that I have discussed? The private-equity firm is a partial answer, because it has only a few shareholders and so need not delegate compensation to a board of directors that has other interests besides the welfare of the shareholders at heart.
The reason it is only a partial answer is that there are too few owners of capital who want, or have the ability and experience, to participate as actively in management as the private-equity entrepreneurs do, and there are too many efficiently large corporations for all of them to have the good fortune of being owned by a handful of entrepreneurial investors.
There is a vast pool of passive equity capital that can be put to work only in companies that are organized in the traditional board-governed corporate form.
Here is another though related example of a stubborn efficiency-in-compensation problem, also in a highly competitive sector of the economy: law-firm billing practices.
Major law firms, with few exceptions, base their bills to their clients on the number of hours that the firm's lawyers work on the client's case or other project.
In other words, they bill on the basis of inputs rather than outputs.
This is rational when output is difficult to evaluate, as is often the case with a law firm's output because of the uncertainty of litigation (in nonlitigation practice, because of legal and factual uncertainties).
The fact that a firm loses a case doesn't mean that it did a bad job; both the winner's firm and the loser's firm may have done equally good jobs--the lawyers don't control the outcome.
A law firm can give the client a pretty good idea of the quality of the lawyers it assigns to the client's case, because there are observable proxies for a lawyer's unobservable quality, proxies such as his educational and employment history.
What the client cannot readily judge is whether the law firm put in excessive hours on the case, and the result, according to persistent and cumulatively persuasive anecdotage, is a tendency for law firms to invest hours in a case beyond the point at which the marginal value of the additional hour is just equal to the marginal cost to the client.
Young lawyers often feel that they are being assigned work to do that has little value to the client but that will increase the firm's income because the firm bills its lawyers' time at a considerably higher rate than the cost of that time to the firm.
The very high turnover at many law firms is attributed in part to dissatisfaction of young lawyers with the amount of busywork that they are assigned, work that bores them and does not contribute to the development of their professional skills, yet may be very time-consuming.
The problem is compounded by the distorted incentives of corporate general counsels.
A general counsel wants to show his boss, the corporate CEO, that he monitors expenses carefully, and, since he knows that he is likely to lose at least some of his cases, he also wants to be able to avoid if possible being blamed by his boss for the loss.
Hourly billing serves both of these ends.
The law firm and the general counsel play a little game, in which the law firm prices its hours on the assumption that it will not be able to collect its billing rates on all of them, and the general counsel reduces the number of hours that he is willing to pay for.
He can then show his CEO that he squeezed the water out of the law firm's bills.
At the same time, by paying a prominent law firm by the hour, he can assure his CEO, in the event a case is lost, that he had told the firm to do as much work as was needed to maximize the likelihood of a favorable outcome, rather than paying a fixed rate agreed to at the outset that might have induced the law firm to skimp on the amount of work it put into the case.
One can imagine a law firm's adopting a different method of pricing, in which it would charge at the outset a fixed fee, subject to adjustments up or down at the end of the case based on outcome, amount of work, or some other performance measure or combination of such measures.
The conventional law firm billing system is a form of cost-plus pricing, which is considered wasteful.
But litigation is risky, and cost-plus pricing diminishes risk by eliminating a contractor's incentive to cut corners.
If the disutility of risk to a general counsel is great, he will prefer to "overpay" law firms rather than trying to explain to the CEO that the novel compensation deal that he worked out with the law firm that lost the case was not a factor in the loss; that he had not been penny wise and pound foolish.
Although the compensation practices that I have described seem inefficient, it does not follow that corrective measures would be appropriate.
They would be costly and the net benefits might well be negative.
It is efficient to live with a good deal of inefficiency.
Stated otherwise, the fact that competitive markets contain large pockets of inefficiency is not in itself inefficient.
For example, while cartel pricing is inefficient, if the cost of preventing cartelization exceeded the benefits one wouldn't want to prevent it.
Yet cartel pricing would still be inefficient in the sense of misallocating resources, relative to the allocation under competition.
We must live with a good deal of inefficiency, but it is still inefficiency.
Becker is right of course that a growing demand for food, resulting from world population growth, relative to supply cannot explain the very steep food-price increases that have occurred since 2006; world food prices are 75 percent higher than they were that year, and obviously world population has not grown by that percentage. But I do not take this to be a refutation of Malthus, whose insights retain relevance to the modern world.
Malthus argued that if a population living at the subsistence level increases geometrically (for example, a couple has three children, each of the three children eventually marries and produces three children, and so on) while food production increases only arithmetically, there will be more people than can be fed, and so population will decline through starvation, disease, or war until a new equilibrium is reached.
(Because the population is assumed to be living at the subsistence level, the equilibrium cannot be achieved through higher food prices.) Malthus did not foresee the technological advances that have resulted in a faster rate of increase in the food supply than in the population, or increases in wealth that enable food prices to rise to prevent shortages should demand outrun supply.
Nor did he foresee modern contraception technology, or China's one-child policy.
But given his assumptions, his analysis is sound and it gave Darwin the clue he needed to develop the theory of natural selection.
In Malthus's model people kill each other to avoid starvation, and those who do best in the desperate struggle survive--hence survival of the fittest as determined by a competitive process.
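The model's arithmetic can be sketched with assumed numbers: the number of children triples each generation, following the example above, while the invented food increment grows the supply by a fixed amount per generation. The collision is quick:

```python
# Malthus's arithmetic with illustrative numbers (all of them assumed):
# geometric population growth vs. arithmetic growth in food output.
population = 2        # a founding couple
food_capacity = 100   # people who can be fed in generation 0
food_increment = 50   # additional people fed per generation

generation = 0
while population <= food_capacity:
    generation += 1
    population *= 3           # tripling, per the text's three-children example
    food_capacity += food_increment

# generation at which population first outruns the food supply
print(generation, population, food_capacity)
```

However the constants are chosen, any geometric growth rate above 1 eventually overtakes any arithmetic increment; the constants only determine how soon.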
As Becker points out, Paul Ehrlich and others predicted in the 1970s (beginning with the first "Earth Day," in 1970) mass starvation as a result of continuing population growth.
They were wrong, in part by failing to predict the Green Revolution, which greatly reduced the cost of food production.
The situation today is different.
The demand for agricultural products has grown, though not as a result of population growth; instead as a result of increased demand for ethanol and other biofuels, and for food that requires more agricultural acreage to produce.
Today, besides people and pigs eating corn, our motor vehicles "eat" corn that has been converted into ethanol.
And in China and India, which together contain a third of the world's population, increased wealth has led to an increased demand for meat, in China for beef.
Cattle eat corn and other crops and are in turn eaten, but the amount of crops consumed in this process is several times greater than the amount that would be consumed if people ate the crops directly, rather than indirectly by eating vegetarian farm animals.
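A rough illustration of the multiplier, using an assumed feed-conversion ratio of 7 kilograms of grain per kilogram of grain-fed beef (commonly cited figures run from roughly 6 to 8; the exact number is not from the text):

```python
# Assumed feed-conversion ratio: kg of grain to produce 1 kg of beef.
FEED_CONVERSION = 7.0

def grain_required(kg_beef_demanded):
    """Grain consumed indirectly when people eat beef instead of grain."""
    return kg_beef_demanded * FEED_CONVERSION

direct = 100.0                    # kg of grain eaten directly
indirect = grain_required(100.0)  # grain behind an equal weight of beef
print(indirect / direct)  # 7.0: several times more crops consumed via meat
```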
China's consumption of beef, which has been growing rapidly for a number of years, is expected to grow 4 percent this year--yet it will still be only about 15 percent of U.S. beef consumption per capita.
Increased demand for agricultural products should lead to increased supply, but the supply response is limited by the higher price of gasoline, an important input into food production, and by the scarcity of good agricultural land (in part a result of population growth), which implies an upward-sloping supply curve for food.
The fact that increased demand for agricultural products, and resulting high prices, are due to factors other than growth of population does not make a demand-supply imbalance any the less serious.
We may be seeing the beginnings of an attenuated Malthusian response in Egypt, where there have been riots recently over food prices.
Egypt is a poor country, and to avoid violence the government has had to increase its food subsidies--making the country poorer and hence more vulnerable to political instability, which could result in an Islamic insurrection.
In poor countries today, as in ancient Rome, keeping the urban population happy is the foremost political imperative, because urban riots, especially in a nation's capital, can bring the government down.
Urban residents are not farmers, so rising food prices only hurt, and do not help, them.
But urban food subsidies immiserate the rural population, and limits on food exports, designed to control domestic food prices, disrupt the international agriculture market.
Our ethanol subsidies, and equivalent policies, such as the European Union's rejection of genetically modified foods, and the wealthy nations' (including the United States') tariffs on agricultural imports, could in principle be abandoned in order to increase the supply of food.
But domestic interest-group pressures (which in the United States include the disproportionate influence that Iowa exerts in presidential politics) make reform unlikely.
I no longer believe that deregulation has been a complete, an unqualified, success.
As I indicated in my posting of last week, deregulation of the airline industry appears to be a factor in the serious deterioration of service, which I believe has imposed substantial costs on travelers, particularly but not only business travelers; and the partial deregulation of electricity supply may have been a factor in the western energy crisis of 2000 to 2001 and the ensuing Enron debacle.
The deregulation of trucking, natural gas, and pipelines has, in contrast, probably been an unqualified success, and likewise the deregulation of the long-distance telecommunications and telecommunications terminal equipment markets, achieved by a combination of deregulatory moves by the Federal Communications Commission beginning in 1968 and the government antitrust suit that culminated in the breakup of AT&T in 1983.
Although one must be tentative in evaluating current events, I suspect that the deregulation (though again partial) of banking has been a factor in the current credit crisis.
The reason is related to Becker's very sensible suggestion that, given the moral hazard created by government bailouts of failing financial institutions, a tighter ceiling should be placed on the risks that banks are permitted to take.
Because of federal deposit insurance, banks are able to borrow at low rates and depositors (the lenders) have no incentive to monitor what the banks do with their money.
This encourages risk taking that is excessive from an overall social standpoint and was the major factor in the savings and loan collapse of the 1980s.
Deregulation, by removing a variety of restrictions on permitted banking activities, has allowed commercial banks to engage in riskier activities than they previously had been allowed to engage in, such as investing in derivatives and in subprime mortgages, and thus deregulation helped to bring on the current credit crunch.
At the same time, investment banks such as Bear Stearns have been allowed to engage in what is functionally commercial banking; their lenders do not have deposit insurance--but their lenders are banks that for the reason stated above are happy to make risky loans.
The Federal Deposit Insurance Reform Act of 2005 required the FDIC to base deposit insurance premiums on an assessment of the riskiness of each banking institution, and last year the FDIC issued regulations implementing the statutory directive.
But, as far as I can judge, the risk-assessed premiums vary within a very narrow band and are not based on an in-depth assessment of the individual bank's riskiness.
Now it is tempting to think that deregulation has nothing to do with this, that the problem is that the banks mistakenly believed that their lending was not risky.
I am skeptical.
I do not think that bubbles are primarily due to avoidable error.
I think they are due to inherent uncertainty about when the bubble will burst.
You don't want to sell (or lend, in the case of banks) when the bubble is still growing, because then you may be leaving a lot of money on the table.
There were warnings about an impending collapse of housing prices years ago, but anyone who heeded them lost a great deal of money before his ship came in.
(Remember how Warren Buffett was criticized in the late 1990s for missing out on the high-tech stock boom.) I suspect that the commercial and investment banks and hedge funds were engaged in rational risk taking, but that (except in the case of the smaller hedge funds--the largest, judging from the bailout of Long-Term Capital Management in 1998, are also considered by federal regulators too large to be permitted to go broke) they took excessive risks because of the moral hazard created by deposit insurance and bailout prospects.
Perhaps what the savings and loan and now the broader financial-industry crises reveal is the danger of partial deregulation.
Full deregulation would entail eliminating both government deposit insurance (especially insurance that is not experience-rated or otherwise proportioned to risk) and bailouts.
Partial deregulation can create the worst of all possible worlds, as the western energy crisis may also illustrate, by encouraging firms to take risks secure in the knowledge that the downside risk is truncated.
There has I think been a tendency of recent Administrations, both Republican and Democratic but especially the former, not to take regulation very seriously.
This tendency expresses itself in deep cuts in staff and in the appointment of regulatory administrators who are either political hacks or are ideologically opposed to regulation.
(I have long thought it troublesome that Alan Greenspan was a follower of Ayn Rand.) This would be fine if zero regulation were the social desideratum, but it is not.
The correct approach is to carve down regulation to the optimal level but then finance and staff and enforce the remaining regulatory duties competently and in good faith.
Judging by the number of scandals in recent years involving the regulation of health, safety, and the environment, this is not being done.
And to these examples should probably be added the weak regulation of questionable mortgage practices and of rating agencies' conflicts of interest and, more basically, a failure to appreciate the gravity of the moral hazard problem in the financial industry.
Airline delay has increased in the last five years, and the statistics understate the amount of delay because airlines have increased scheduled flight times--the flight from Chicago to Washington used to be scheduled for an hour and a half; now it is scheduled for two hours.
Flights are horribly crowded, food and beverage service has deteriorated in first class and virtually disappeared in coach, and the incidence of mislaid baggage has increased.
Delay is the main problem, and the one that I shall focus on.
Many culprits have been named--high fuel costs that have contributed to deferred maintenance that results in cancellations, the failure of the Federal Aviation Administration to upgrade the air traffic control system so that it can handle more traffic with less spacing between aircraft, more turbulent weather perhaps due to global warming, and crowded aircraft that result in delays in boarding and hence in departure.
But all these seem to me to miss the point.
Persistent delay is usually the result of a failure to use price to equate demand and supply.
When demand increases in advance of an increase in supply, failure to raise price results in buyers' incurring cost in the form of delay rather than in the form of a higher price.
The cost of delay is a deadweight loss, whereas a higher price would be merely a wealth transfer to the sellers and would finance an increase in supply.
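The distinction can be put in numbers (every figure below is assumed, purely for illustration): when demand exceeds capacity at the posted fare, rationing by delay burns travelers' time, which no one receives, while rationing by price transfers the same dollars to the airline, where they can finance added capacity:

```python
# Illustrative numbers, not data from the post.
capacity = 100           # seats available
value_of_time = 50.0     # assumed value of a traveler's hour, in dollars
avg_delay_hours = 1.5    # delay that rations excess demand away

# Rationing by delay: every seated traveler burns time; that value vanishes.
deadweight_loss = capacity * value_of_time * avg_delay_hours

# Rationing by price: the fare rises until demand equals capacity.
fare_increase = 90.0     # assumed market-clearing increment per ticket
transfer = capacity * fare_increase

print(deadweight_loss, transfer)
# Both cost travelers money, but only the delay destroys value outright.
```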
Some delay in the provision of services is unavoidable because of fluctuations in demand; it usually is wasteful to increase supply to the point at which every spike in demand can be accommodated without rationing (i.e., queuing, delay).
But the persistent delays that airline passengers have been encountering for many years now cannot be explained by demand uncertainty.
The delays impose enormous costs, particularly but not only on business travelers.
The value of Americans' time is high.
So why are airline prices so low? The answer may lie in the lumpiness of airline service.
(This was pointed out many years ago by the Chicago economist Lester Telser, and was repeated last week by Holman Jenkins in the Wall Street Journal.) The fixed costs of modern passenger aircraft are very high, but the marginal costs--the costs of carrying one more passenger if the plane is not full--are very low.
At any price above marginal cost, the airline is better off selling a ticket than flying with the seat empty.
Competition between airlines will therefore exert strong downward pressure on price.
Prices tend to be pushed down to a level at which the airlines find it difficult to finance the purchase of new planes.
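A sketch of the seat economics, with invented numbers: any fare above marginal cost beats flying the seat empty, yet a fare near marginal cost can leave even a full plane short of covering fixed costs:

```python
# Illustrative per-flight economics (all numbers assumed).
fixed_cost_per_flight = 30_000.0  # aircraft, crew, gate: incurred regardless
marginal_cost_per_seat = 15.0     # extra fuel burn, catering for one passenger
seats = 150

# Any fare above marginal cost beats flying the seat empty...
fare = 90.0
contribution = fare - marginal_cost_per_seat
print(contribution > 0)  # True: better to sell the ticket

# ...yet at that fare even a full plane fails to cover total costs.
revenue = fare * seats
total_cost = fixed_cost_per_flight + marginal_cost_per_seat * seats
print(revenue < total_cost)  # True: competition can push fares below viability
```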
As the existing planes age, equipment failures become more frequent, contributing to delays and cancellations.
Airlines prefer delays to cancellations, because they get to keep the fares, and they resist raising prices to reduce congestion because that will make it more difficult to fill the planes, and an empty seat is, as explained, very costly in revenue forgone.
Furthermore, airline service is quite uniform across airlines, which makes travelers more sensitive to airline prices than, say, to hotel prices, since hotels compete in many other dimensions besides price.
Another aspect of lumpiness that should be noted is the difficulty of adjusting prices to different passenger time costs.
Business travelers have higher time costs than leisure travelers, but there are not enough business travelers to fill a plane of efficient size, and even if there were, no one airline could significantly reduce the problem of delay, just as no one driver can affect traffic congestion by reducing the number of his trips.
I am not aware that the delay costs of airline service, and the costs of the other disamenities (the very crowded airplanes and slow boarding and deplaning in coach) in the current market, have been quantified, but assuming that they are, as I suspect, very substantial, the question arises what if anything should be done to alleviate the problem.
One possibility would be to allow the airlines to agree on minimum prices: in other words, to exempt the airlines from section 1 of the Sherman Act, which forbids competitors to agree on prices.
The problem is that the airlines would fix a profit-maximizing minimum price, and it probably would exceed the price necessary to reduce congestion to the optimal level.
Moreover, any increase in the price level would attract inefficient entry.
Another possibility would be to return to the regulatory system administered by the Civil Aeronautics Board before the deregulation of the airline industry in 1978.
The CAB did not regulate rates, but it controlled entry into city pairs and used that control to limit entry to the point that flights were frequent and uncrowded.
If a flight was canceled or delayed, it was usually easy to get a seat on another flight leaving soon.
But with entry tightly limited, prices were above the competitive level; planes were not just uncrowded, they flew nearly empty.
Prices have fallen sharply since deregulation.
Competition has also led the airlines to adopt a variety of cost-saving measures.
Pilots' wages are now much lower.
Before deregulation, the powerful pilots' union (powerful because of the enormous costs of a work stoppage to a company that cannot produce for inventory and thus make up some of the revenue that it loses from a strike) was able to extract some of the airlines' regulation-enabled cartel profits, in the form of supracompetitive wages for pilots.
Another option would be to encourage, or at least place no antitrust or other obstacles in the way of, mergers between airlines.
If there were only two airlines on every route, tacit collusion between them would probably keep prices high but not so high as if there were a single airline or an explicit price-fixing agreement.
But any increase in prices would attract entry, pushing prices back down.
Moreover, mergers often result in higher rather than lower costs.
A better alternative than any I have discussed thus far would be a heavy tax on airline transportation, with the tax rate varying according to the contribution of a particular route, time, or type of plane to congestion (for example, in general large planes would be taxed less heavily per passenger than small ones, because for a given number of passengers there are fewer big planes to clog the airways and runways than there would be small ones).
To the extent effective, the tax would eliminate the deadweight cost of congestion.
The Federal Reserve's unsound monetary policy in the early 2000s pushed down interest rates excessively, resulting in asset-price inflation, particularly in houses because they are bought primarily with debt.
Eventually the bubble burst and house prices fell precipitately; they are still falling.
Becker's interesting post argues that the boom and bust in housing have not had as large an effect on consumption (and hence, this implies, on the nonfinancial economy) as the size of the price fluctuations might suggest.
He illustrates with the example of a homeowner who has a bequest motive: if people intend to leave their house to their kids, changes in the value of the house will affect the size of the bequest rather than current spending by the owner-parent.
More generally, an increase in home values increases the cost of housing by the same amount.
If all house prices double (and assume no other prices change), but the owner is not intending to downsize, he cannot "spend" the increased value of the house.
However, although increased home values are unlikely to be translated into equal increases in consumption spending, those increased values are likely to have a strongly positive effect on consumption.
To begin with, a significant amount of borrowing during the bubble involved the refinancing of existing mortgages rather than the financing of home purchases, and often the incentive for the refinancing was to obtain cash for consumption.
Furthermore, some people may downsize, or even become renters, because they want to increase their consumption expenditures, as they can do if they cash out some of the increased market value of their house.
And if people feel wealthier because the market value of their savings (which includes the value of a house) is rising, they may reallocate more of their income to consumption, which they can do without impairing the expected value of their savings if their houses are worth more.
In other words, a house is a "store of value" (as economists say) rather than just a home, and even someone with no intention of downsizing may be persuaded, by the fact that he has a valuable house that could be sold if he needed cash, that he can afford to consume more.
As a form of savings, a house is illiquid and risky relative to cash, but it is still savings, in the sense of an asset that one could turn into cash if necessary, to increase consumption in the future.
And if one's savings shoot up because of an increase in the price of one of one's assets, one may decide to allocate a portion of the increased savings to current consumption.
These considerations persuade me that the run-up in housing prices probably did increase consumption significantly.
More important, the collapse of those prices, together with the fall in the stock market, has almost certainly had a significant negative impact on current consumption expenditures.
Because of rising house and stock prices, the market value of people's savings rose during the housing and stock bubbles and as a result people reduced their savings rate, to the point where it actually was negative for a period during the early 2000s and was no higher than about 1 percent before the crash last September.
(This is related to the "store of value" point.) The personal savings rate has since risen to more than 4 percent.
The fall in house and stock prices, combined with increased unemployment and fear of unemployment, convinced many people that they didn't have enough precautionary (safe) savings, and so their current savings are heavily weighted toward cash and other riskless, or very low-risk, forms of savings.
The reallocation of income from consumption expenditures to very safe forms of savings reduces current consumption without increasing productive investment significantly, and so contributes to the depression.
Even in a democracy, it is believed that certain government functions should be placed beyond the control of democratic politics.
The usual example is the judiciary (though most state judges in the United States are elected, this is a considerable anomaly).
But another example is the central bank, which in the case of the United States is the Federal Reserve.
A central bank has considerable, often decisive, influence over short-term interest rates, and, through them, over long-run interest rates as well.
Typically (and to oversimplify), a central bank reduces short-term interest rates by buying short-term government securities, which pumps cash into the economy when the cash is deposited in bank accounts and then withdrawn and spent.
Interest is the price that people or firms demand to part with cash--the more cash there is in the economy, the lower that price will be.
In addition, by increasing the demand for these securities, the purchase increases their price, which in turn reduces their yield--the interest that they command.
The central bank increases short-term interest rates by the reverse operation--selling short-term government securities, which sucks cash out of the economy, since the central bank can retire the cash rather than having to spend it.
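The inverse relation between a security's price and its yield, which drives the open-market operations described above, can be shown with a toy calculation. This is a hedged sketch: the prices are invented, and a one-year zero-coupon bill is assumed for simplicity.

```python
# Toy illustration of the price-yield relation for a one-year
# zero-coupon bill, which pays its face value at maturity.

def bill_yield(price: float, face: float = 100.0) -> float:
    """Annualized yield implied by buying at `price` and holding to maturity."""
    return face / price - 1.0

# When the central bank buys bills, the added demand pushes the price up...
y_before = bill_yield(98.0)  # ~2.04%
y_after = bill_yield(99.0)   # ~1.01%

# ...and the higher price means a lower yield.
assert y_after < y_before
print(f"price 98: yield {y_before:.2%}; price 99: yield {y_after:.2%}")
```

The same arithmetic runs in reverse when the central bank sells: added supply pushes the price down and the yield up.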
Long-term rates tend to follow the path of short-term rates, both because the two are substitutes and because the more cash the banks have to lend, the less they have to pay for the capital that they lend, and the lower the interest rates at which they will lend, including lending long term; competition will tend to keep the spread between the banks' borrowing and lending costs from widening just because their borrowing costs are falling.
The reason for making the central bank politically independent is that the bank's power over interest rates could be abused for political ends.
Suppose the economy, though not in recession, is somewhat sluggish, and the government, perhaps because an election is looming, wants to juice it up.
So it orders the central bank to reduce interest rates by buying government securities, thus pumping money into the economy.
Reduced interest rates will stimulate lending, borrowing, and therefore economic activity, but the increase in the money supply can (since the economy is merely sluggish, and not in recession) create inflation.
Very low interest rates in the early 2000s in the United States caused asset-price inflation, with destructive consequences, as we know.
Inflation can have other political objectives besides stimulating the economy in order to improve a government's popularity.
It is a method of taxation.
Suppliers are required by law to accept the official currency in payment of debts, so government can buy goods and services just by issuing money to its employees and other suppliers without having to raise the money by borrowing or by (explicit) taxation.
The suppliers will respond by raising prices, but if the government refuses to pay (for example, refuses to raise wages), then the suppliers, to the extent dependent on the government for business (or employment), will have to accept the cheapened money.
In addition, inflation can be used to benefit some groups in society at the expense of others.
Inflation benefits debtors, when debt is not indexed for inflation, and hurts creditors.
A strongly pro-creditor central bank might even engender deflation, which would mean that debtors would be repaying their debts in dollars worth more in purchasing power than when they took out their loans.
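The debtor-creditor point is simple arithmetic: a fixed nominal repayment is worth less in purchasing power after inflation and more after deflation. A minimal sketch, with invented figures:

```python
# Toy illustration: how inflation shifts wealth from creditors to debtors
# (and deflation the reverse) when debt is not indexed.

def real_value(nominal: float, inflation: float, years: int = 1) -> float:
    """Purchasing power of a fixed nominal sum after `years` of `inflation`."""
    return nominal / (1.0 + inflation) ** years

repayment = 1000.0  # fixed nominal debt due in one year

print(real_value(repayment, 0.05))   # 5% inflation: the debtor repays less in real terms
print(real_value(repayment, 0.00))   # stable prices: repayment unchanged
print(real_value(repayment, -0.05))  # 5% deflation: the debtor repays more in real terms
```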
A central bank might do that (reduce the money supply, so that the purchasing power of a given quantity of money increases) in order to strengthen its currency, which would enable the country to buy imports more cheaply and increase the return on its foreign investments.
(That was the ground on which Britain deflated by returning to the gold standard after having gone off it in World War I.
That was a government decision; there was no independent central bank.)
Since the harms of inflation are now widely recognized, a central bank that focuses on limiting inflation will be reasonably popular; and since the value of its being independent of political influences so that it will limit inflation (and deflation) will be recognized, its independence will not be challenged.
But the independence of the central bank in the United States, as in other countries, is not guaranteed by the Constitution, as the independence of the federal judiciary is.
It is a matter of statute, and Congress could eliminate or reduce the Federal Reserve's independence from the normal political process at any time.
Its independence is therefore legally precarious.
That is part of the reason why the modern Federal Reserve has focused on controlling inflation, and, specifically, why it did not prick the housing bubble of the early 2000s, as it could have done at any time by pushing up interest rates, until the bubble got completely out of hand in 2006 and 2007.
Had it pricked the bubble earlier, precipitating a fall in housing prices with consequent defaults and foreclosures, at a time when it was unclear that the run-up in housing prices was a bubble, it would have been blamed for causing a recession, because proof of a bubble is difficult.
But in retrospect the hit that the Federal Reserve would have taken by pricking the bubble would have done less damage to its prospects for continued independence than the current depression, and the Fed's response, may be doing.
Had the Fed merely pushed down interest rates when it became apparent last summer that the economy was sliding into a recession or worse, it would have been doing something that it was expected to do: the converse of raising interest rates to prevent inflation is lowering interest rates to prevent recession, and this is consistent with stabilization, which is part of the Fed's explicit statutory mandate.
The Fed did lower the federal funds (overnight bank lending) interest rate, which has become the conventional way in which it influences interest rates.
That rate is now virtually zero, yet the reduction has not done the trick.
The reason is that the impairment of the banks' capital (because of their heavy involvement in home mortgage lending) has discouraged the banks from lending, since lending is risky.
And so the fact that they can borrow from one another at essentially a zero rate of interest to meet loan demands has not incited them to lend in amounts necessary to maintain economic activity at a normal level.
The Fed in some desperation therefore began last fall lending substantial sums to banks in an effort to increase their safe capital to a point at which they would increase their lending by relaxing their credit standards and reducing interest rates on their loans.
The Fed also began buying up private debt (as distinct from government securities), for example credit card debt, in the hope that the sellers of the debt would use the cash they received for their debt from the Fed to issue more debt, that is, to lend more.
It even has begun buying long-term private and public (Treasury) debt.
The dangers to the Federal Reserve's independence that are created by such activities are twofold.
First, the scale of the Fed's intervention is so great as to create a serious risk of a future inflation, albeit a risk that, at present, the bond markets (judging from long-term interest rates) do not consider large.
The Fed in the last year has expanded the supply of money by about a trillion dollars, and is intending to expand it further.
In principle, it can reverse the expansion process by selling Treasury securities (and the other debt that it has bought) and retiring the cash it receives from the sale.
The problem is that a sudden large withdrawal of cash from the economy could cause interest rates to spike, bringing on a recession, as when the Fed reduced the money supply in 1979-1982 to break the 1970s inflation, which was getting out of hand (it reached 15 percent in 1979).
A gradual withdrawal might be too slow to prevent inflation.
It is true that when the Fed buys short-term debt, such as credit-card debt, the transaction unwinds naturally in a short time: the debt is paid by the debtors, and the cash received from them can be retired.
But this assumes that the debt is paid in full, which it may not be, and that the Fed does not immediately buy more short-term debt, and perhaps feel obliged to continue doing so, because the market has become dependent on its participation.
And the Fed as I said is buying long-term as well as short-term debt, and that does not unwind automatically in the short term; it can be sold but it might be sold at a loss, depleting the Fed's balance sheet and leaving excess cash in the economy to create inflation.
If the Fed's actions precipitate inflation or have other untoward consequences, there is likely to be a political backlash against the Fed.
We live at present in a blame culture, and really the Fed is lucky that so far most of the public's and the Congress's and the media's ire has been directed at the bankers rather than at Greenspan or Bernanke.
Second, and perhaps more ominous, the types of intervention that the Fed is now engaged in can create an impression of politicization of financial policy or even of impropriety.
If the Fed merely issues an offer to buy some specified quantity of Treasury bills, or an offer to sell some specified quantity of those bills, it is not picking and choosing among companies or industries.
But if it decides, or participates in deciding, whether Bank X should be allowed to fail while Bank Y receives a huge bailout, or when it uses its position as a bank's creditor to alter its management or influence its business decisions, it invites accusations of favoritism or worse.
(Or when it decides to buy one type of private debt rather than another.) The latest portent is the allegation that Bernanke, the Fed's chairman, participated with Henry Paulson, the then Secretary of the Treasury, in pressuring Bank of America last December not only to go through with its planned purchase of Merrill Lynch but also to conceal Merrill Lynch's immense losses from Bank of America's shareholders.
I have no idea whether the allegation is true; but that it should be made at all is an example of the political danger to the Federal Reserve if it becomes involved in the operation of individual banks.
I am not suggesting that the Federal Reserve is wrong to take radical measures to combat a depression.
The Fed's "easy money" monetary policy may have warded off a deflationary spiral, which would have been disastrous (there is still a mild deflation--the Consumer Price Index for example is below what it was a year ago--and it could still get worse).
And the Fed's bank bailouts may well have limited the decline in lending touched off by the near collapse of the banking industry last September.
I merely contend that such measures pose greater threats to the Fed's political independence than would early intervention to prick the housing bubble and by doing so perhaps have prevented the grave economic situation in which the nation finds itself.
The Dow Jones Industrial Average peaked at 14,200 on October 9, 2007, fell to 9,600 on November 4, 2008 (election day), kept falling, to 6,400 on March 6, 2009, and since then has risen sharply, to 8,100.
(I have rounded to the nearest hundred.
I use movements in the DJIA rather than in the S&P 500 because the DJIA is composed of heavily traded stocks and thus gives a clearer view of market-price changes.) What explains these gyrations? The housing bubble had already burst when the market peaked.
Yet stocks of financial firms heavily invested in housing were flying high, and have now lost much of their value.
The stock market was overpriced in October 2007, just as it had been at the peak of the dot-com bubble in the late 1990s, and on the eve of the stock market crash of October 1929, and at other times as well.
This raises the question whether and in what sense the stock market is an "efficient" market.
It was Mark Twain who first, more than a century ago, advised investors to put all their eggs in one basket and watch the basket.
His advice was picked up by businessmen like Andrew Carnegie and Bernard Baruch and became conventional investment wisdom.
Modern finance theory demolished that conventional wisdom by showing that it is virtually impossible, certainly for the vast majority of investors, including professionals such as mutual fund managers, Wall Street gurus, securities analysts, and finance professors, to beat the market, in the sense of consistently identifying overpriced stocks to sell and underpriced ones to buy.
(For a valuable collection of articles on this theme, see www.cxoadvisory.com/blog/internal/blog-analysts-experts/.) Much more sensible is a strategy of buying and holding a diversified portfolio of stocks (and other securities as well), thus minimizing trading costs and other transaction costs, along with variance, which investors who are risk averse, as most investors are, do not like.
Even if the expected value of a particular stock is equal to the expected value of a diversified portfolio, the risk of being wiped out is much less if one holds a diversified portfolio than if one owns a single stock.
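The point about diversification and the risk of being wiped out can be made concrete with a small simulation. This is a sketch under made-up assumptions: each stock independently either triples or goes to zero with equal probability, so every portfolio has the same expected value (1.5x), and only the risk differs.

```python
# Minimal sketch: same expected value, very different wipeout risk.
import random

random.seed(0)

def simulate(n_stocks: int, trials: int = 20_000) -> float:
    """Fraction of trials in which an equal-weighted portfolio of
    `n_stocks` independent stocks loses 80% or more of its value.
    Each stock triples or goes to zero with probability 1/2, so the
    expected portfolio value is 1.5x regardless of n_stocks."""
    wipeouts = 0
    for _ in range(trials):
        value = sum(3.0 if random.random() < 0.5 else 0.0
                    for _ in range(n_stocks)) / n_stocks
        if value <= 0.2:
            wipeouts += 1
    return wipeouts / trials

single = simulate(1)    # ~0.5: one bad draw destroys everything
spread = simulate(20)   # far smaller: the losers are offset by winners
assert spread < single
print(single, spread)
```

The single-stock holder is ruined roughly half the time; the twenty-stock holder almost never, though both expect the same return.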
Of course, some active traders (stock pickers or market timers) are lucky, just as some gamblers are, and earn supernormal returns from active trading.
Others obtain supernormal returns in up markets by investing with borrowed money (leverage)--and incur supernormal losses in down markets--because the cost of borrowed money is fixed: when the stocks bought with borrowed money rise, the entire gain, minus the fixed interest, accrues to the investor's own smaller stake, and when they fall the entire loss does too.
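The leverage arithmetic is worth spelling out. A hedged sketch, with an invented 5 percent borrowing rate and invented market returns:

```python
# Toy illustration: leverage magnifies both gains and losses on the
# investor's own equity, because the cost of the borrowed money is fixed.

def equity_return(market_return: float, leverage: float, rate: float = 0.05) -> float:
    """Return on the investor's own money. `leverage` is the ratio of the
    total position to equity (1.0 = no borrowing); `rate` is the fixed
    interest on the borrowed portion."""
    borrowed = leverage - 1.0
    return leverage * market_return - borrowed * rate

print(equity_return(0.10, 1.0))   # unleveraged, market +10%: equity +10%
print(equity_return(0.10, 3.0))   # 3x leverage, market +10%: supernormal gain (+20%)
print(equity_return(-0.10, 3.0))  # 3x leverage, market -10%: supernormal loss (-40%)
```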
More important, supernormal returns are possible for some investors as a matter of skill or sharp tactics when trading on private information is permitted (or done anyway), or when markets are illiquid or rigged, or when few analysts study the companies whose stock is traded.
The difficulty of beating the market other than by luck or leverage or the market deficiencies just mentioned, whether by actively trading particular stocks believed to be overpriced or underpriced by the market or by trying to time market turns, suggests that investors trading on public information--information that, by definition of "public," is equally accessible to all of them--will obtain only a normal profit.
That is one definition of an efficient market: a market in which competition is so effective that it squeezes out economic rents, which is to say returns in excess of costs.
There is good evidence that organized exchanges in mature economies are efficient in that sense, as most modern finance theorists believe.
But how can their belief be squared with the frequency of investment bubbles? Investors in October 2007 may have had equal access to all available public information about banks and other firms, but they seem not to have drawn a correct inference from that information.
Bubble behavior is exhibit number 1 to the claim by some behavioral economists that stock market investors often act irrationally.
For example, buying in a rising market or selling in a falling one (both illustrating what is called "serial momentum" or "momentum trading") is said to illustrate "herding" behavior.
I do not agree.
Nor do I think investors should be criticized for the behavior that has led to the stock market gyrations that I mentioned at the outset.
What is missing in the behavioral analysis is the distinction first made by the University of Chicago economist Frank Knight, in the 1920s, between calculable risk, that is, a risk to which an objective probability can be attached, and uncertainty, which is a risk to which such a probability cannot be attached.
Insurance is based on calculable risks; an objective, quantitative estimate of the risk of an accident or other insured event enables the fixing of an insurance premium, a price equal to (if one ignores administrative costs) the expected cost of the loss insured against.
The estimates of probable loss used to calculate insurance premiums are based primarily on past experience (frequencies), and if the future differs unpredictably the insurance company may incur windfall gains or losses.
So there is some Knightian uncertainty even in insurance markets, but it is generally much less than in the stock market.
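Knight's distinction can be illustrated by the premium calculation itself. Under calculable risk, the actuarially fair premium is just the expected loss; under Knightian uncertainty there is no objective probability to plug into the formula. The figures below are invented for illustration.

```python
# Minimal sketch of premium-setting under calculable risk: the fair
# premium equals the expected cost of the insured event, ignoring
# administrative costs (as the text does).

def fair_premium(loss_probability: float, loss_amount: float) -> float:
    """Expected loss: probability of the insured event times its cost."""
    return loss_probability * loss_amount

# e.g. a 1-in-200 annual chance of a $100,000 loss
print(fair_premium(1 / 200, 100_000))  # 500.0
```

Under uncertainty in Knight's sense, `loss_probability` simply cannot be estimated from past frequencies, and the calculation is unavailable.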
A vast number of decisions that people make, including investors, are decisions under uncertainty in Knight's sense.
When one has to choose between on the one hand marrying one's present girlfriend or boyfriend and on the other hand continuing to search for a "better" marriage partner, one cannot base the choice on a quantitative estimate of the probability that one choice will have better results than the other.
A businessman who has to decide whether to invest in a project that will not yield revenues for several years is likewise making a decision under uncertainty because he cannot estimate the probabilities of many of the contingencies that, if they materialize, will make the project profitable or unprofitable.
And an investor who decides to put more of his savings in the stock market, or shift some of his stock to an alternative investment, cannot estimate the probability that the price of the stock will rise or fall, and within what interval of time, and how far.
He knows, moreover, that what moves stock prices is not the best estimate of future corporate profits as such, but the behavior of the investing public, which is influenced by other things besides beliefs concerning the future course of such profits.
For example, when stock prices begin to fall, the market value of savings invested in the market falls and this may make cautious investors move their money into safer forms of saving to make sure they have enough protection against a rainy day--a decision that has little or nothing to do with predicting future stock prices.
This precautionary motive has almost certainly been a factor in the steep fall of stock prices in the current economic downturn.
The personal savings rate had plummeted in the early 2000s, and the housing collapse depleted the savings of many people, especially those whose principal investment was their house, so that when stock prices fell many of these people reduced their spending and increased their precautionary savings.
This pushed down economic output, increased the rate of unemployment, reduced corporate profits, and so caused the stock market to fall even farther.
But the impetus for the market decline, in this analysis, was not a judgment about corporate profits but a desire for safer savings.
But what about stock market bubbles? The explanation may lie in the fact that under Knightian uncertainty, often the best, though not a good, predictor of the future is the immediate past.
If there is no weather forecasting, probably the best guess as to tomorrow's weather is that it will be similar to today's.
If stock prices are rising, this suggests that something is happening to make people think that corporate profits will be greater in the foreseeable future.
One might counter by asking why, if investors are expecting stock prices to continue rising, prices don't immediately jump to their peak value.
But there is some inertia in trading, and, more important, no one can know the market peak in advance; for if everyone knew that, no one would sell at the current price or buy at the peak price, and trading would come to a halt.
So suppose that in 2007 you had money to invest.
You could buy a CD, a Treasury security, mutual-fund shares, etc.
Why would you think that the fact that stock prices had been rising made them a poor investment, so that rather than buy stocks you should sell them short?
Yet I believe that the Federal Reserve should have lanced the housing bubble no later than 2006 by raising short-term interest rates (which would have pushed up long-term rates as well by increasing the borrowing costs of banks and other financial intermediaries and thus the rates they would have to charge for lending their borrowed capital), and if this did not burst the stock market bubble (the bubble that reached its maximum expansion in October 2007) to lance that bubble as well, by increasing margin requirements.
But how can this suggestion be squared with my argument that buying stock (or, I would add, houses) in a bubble is rational behavior? The answer is that an individual investor in making an investment decision does not consider the effect of the decision on the economy as a whole; that is not his business, and anyway an individual investment decision is unlikely to have economy-wide effects.
Protecting the economy is the business of government.
Even if the Federal Reserve could not have spotted the housing or credit or stock market bubbles before they burst, it knew or should have known that these booms could be bubbles and that if so they would burst and when they burst they could bring down the economy.
This made the expected cost of the booms high, even though that cost could not be quantified (another example of Knightian uncertainty)--high enough to justify intervention, or, at the very least, the formulation of contingency plans to deal with worst-case scenarios.
Last fall the federal government lent hundreds of billions of dollars to major banks and other financial intermediaries pursuant to a program called the Troubled Asset Relief Program (TARP).
Some of the recipients, notably Goldman Sachs and JPMorgan Chase, want to repay the money.
Essentially that means buying back the preferred stock that the government received in exchange for the loans; the loans thus had no maturity date.
I do not know whether, as a matter of law, the government's consent to repayment is required, although it would not be surprising if it were, since, as I said, the government received preferred stock for the loans rather than just a promise to repay.
But maybe there's something in the loan contracts that entitles the banks to repay when they want to; I do not know, but I'll assume that the government's consent is required and consider whether that consent should be given.
The answer depends in part on an understanding of why the loans were made.
Last September it appeared that most of the nation's major banks and related financial intermediaries were either insolvent or in danger of becoming so; TARP was designed to save them.
Had the banks been allowed to go broke, the depression in which we now find ourselves would be even more severe than it is; think of the chaos that ensued when Lehman Brothers, one of the lesser financial intermediaries, was allowed to fail that month.
No private investor was willing to step in and save the tottering banks, so the federal government stepped in instead.
TARP proved highly unpopular with Congress and the general public.
The main reason was that the banks were seen to be "hoarding" the money they had received from the government, rather than lending it.
TARP had been sold to a public suspicious of "Wall Street" (an echo of age-old hostility to finance as "sterile," rather than "productive" like making a physical product) in part as a way of stimulating economic activity by enabling banks to increase their loans.
The idea was that the hundreds of billions of dollars fed the banks by the government would go out as loans to the banks' customers.
And loans do stimulate economic activity.
Many businesses rely on bank loans to bridge the gap between incurring costs of production or distribution and later receiving revenues from the sale of goods and services; and both businesses and consumers use borrowing to bring production and consumption, respectively, forward--borrowing to spend means consuming more today and less in the future (the eventual repayment of the loan will reduce the amount of money that the borrower has for spending on investment or consumption).
A depression is a severe contraction of output, and borrowing is a way of increasing output by increasing the amount of money that people and firms have for immediate spending.
The hoped-for expansion of lending did not take place.
Contrary to myth, at no point did banks cease to make loans, although they came close to doing so last September and October.
But they did reduce their lending, both by raising interest rates and by increasing credit standards and, in some cases, by refusing to lend to other than their best, established customers.
The money that the banks received from the government was mainly either hoarded quite literally, or used to buy bonds or other assets (including in some cases other banks).
By literal hoarding I mean keeping the money received from the government in cash or a cash equivalent, such as an account at a Federal Reserve Bank.
Banks are required to keep a modest percentage of their demand deposits in cash; these are their "required reserves." Any excess cash they have is called "excess reserves," and since cash does not earn interest, banks usually try to minimize their excess reserves.
In 2007 their excess reserves amounted to only about $2 billion; today, they are almost $800 billion.
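The required/excess split is a simple computation. A toy sketch, with an invented 10 percent reserve ratio and invented balance-sheet figures:

```python
# Toy sketch of required vs. excess reserves. The reserve ratio and the
# cash and deposit figures are invented for illustration.

def split_reserves(cash: float, demand_deposits: float, ratio: float = 0.10):
    """Return (required, excess) reserves given a reserve ratio."""
    required = ratio * demand_deposits
    return required, cash - required

required, excess = split_reserves(cash=150.0, demand_deposits=1000.0)
print(required, excess)  # 100.0 50.0
```

Anything a bank holds in cash beyond the required figure counts as excess, whatever its motive for holding it.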
When banks are worried, "excess" reserves are not really excess.
Why did the bankers not lend the money they received from the government? There are five reasons: (1) because banks were undercapitalized, as a result of being overinvested in assets that had lost much of their value, such as mortgage loans and interests in mortgage-backed securities; (2) because they anticipated big losses from their outstanding credit card and commercial real estate loans, and perhaps from other loans as well (this is related to the next point); (3) because lending in a depression is highly risky--the default risk is high, and if the lender tries to compensate for the risk by charging a very high interest rate this will increase the risk of default, because interest is a fixed cost of the borrower, that is, invariant to his revenues; (4) because as businesses reduced their output their need for borrowing fell and the risk of default (as I just mentioned) rose, making them reluctant to borrow; and (5) because consumer borrowing fell as a result of consumers' being overindebted as a consequence of the fall in house and stock values, the principal source of their savings.
Of course failing businesses and unemployed or otherwise necessitous consumers might want desperately to borrow, risky as their borrowing would be.
But they are unattractive customers for banks, especially when the banks, because they too are overindebted, are trying to reduce the riskiness of their loan portfolios.
These reasons for the banks' reluctance to use federal money to make loans are perfectly good reasons, and do not invalidate the TARP because without the additional capital that the program contributed the banks would have been even more chary about lending.
But the reasons why, despite TARP, bank lending continued to decline were never adequately explained to Congress or to the public, and as a result the banks' failure to lend more was and is seen as sinister.
And when it turned out that banks were continuing to pay high bonuses and other generous-seeming compensation to their employees, Congress and the public accused the banks of being a conduit between the federal taxpayer and the "stupid, greedy, reckless" bankers who had brought down the banking industry and, with it, the nonfinancial economy.
Again, no one explained that (1) bankers are smart, and the collapse of the industry last fall was due to a combination of dumb Federal Reserve interest-rate policy in the early 2000s and the excessive deregulation of financial intermediation, beginning in the 1970s; (2) bonuses are a more efficient form of compensation than salary, because they are more closely aligned with performance; and (3) the problem of overcompensation is a problem of senior management, because of its de facto control, in a large, publicly traded corporation, of the board of directors; most recipients of bonuses in financial intermediaries are not part of senior management.
Somehow Congress and much of the public got it into their heads that the banks had hired dopes and deliberately overpaid them, which would make no sense from the standpoint of senior management.
The uproar over the banking industry has led to legal restrictions on compensation, proposals for other highly intrusive forms of regulation, even threats to "nationalize" major banks (that is, confiscate them), congressional witch hunts, interference with banks' use of private aircraft and with promotional activity at resorts, adverse publicity, and other inroads into the autonomy and efficiency of financial intermediation.
So naturally the banks are scrambling to repay the TARP money as fast as they can, in the hope of getting the government, the public, the politicians, and the media out of their hair.
I can't see a good reason not to allow them to repay the loans.
Repayment will go some distance toward reducing the astronomically mounting federal deficit and will allay, to some extent at any rate, the public and legislative hostility to banks and bankers.
That hostility is counterproductive.
By increasing the uncertainty of the banks' business environment, the attacks on the banks increase their incentive to hoard cash or to make safe investments rather than to make loans.
(So who is being stupid and reckless?) There is, I suppose, a danger that some banks would repay prematurely, that is, before the risk of insolvency has been dispelled, but that is unlikely.
As Becker points out, a bank would be reluctant to repay its government loans if it anticipated a significant probability of having to return soon to the government, hat in hand, for a further loan because it has gotten into more financial difficulties.
I reply to a comment on the following passage in my posting last week on housing: "The reallocation of income from consumption expenditures to very safe forms of savings reduces current consumption without increasing productive investment significantly, and so contributes to the depression." The commenter states: "surely much of the conservative savings are going into FDIC-backed bank accounts where each $100,000 backs over a million bucks of lending capability.
So current consumption is reduced by one unit while lending is enhanced by 12 units?"
The error in the statement is the assumption that every dollar in an FDIC-insured bank account (or any other bank account, for that matter, according to the logic of the statement) is lent.
Banks do not lend all their capital, especially in a depression.
If a bank is undercapitalized, if it is worried that borrowers will default at an unpredictably higher rate (and that charging them a sky-high interest rate to compensate will increase the default rate), or if the demand for borrowing is down because many consumers and producers are overindebted and have to curtail their spending and avoid taking on more debt, then the bank will hoard most or even all of any cash it receives.
Excess bank reserves, which are cash or equivalent (balances in the bank's account with a federal reserve bank), rose from $2 billion in August 2008 to $725 billion in March of this year.
That is money the banks could lend (a bank is not permitted to lend its required reserves), but is not lending.
This is rational hoarding, but it means that people who are depositing their money in bank accounts are not doing much if anything to increase lending and thereby stimulate economic activity.
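The commenter's implicit model is the textbook deposit multiplier, in which every dollar a bank holds beyond required reserves is re-lent, re-deposited, and re-lent again. A short sketch (the reserve ratio and lending fractions are illustrative assumptions, not actual regulatory figures) shows how hoarding collapses that multiplier:

```python
# Textbook deposit-multiplier arithmetic vs. hoarding.
# Parameters below are illustrative assumptions, not actual Fed/FDIC figures.

def total_lending(deposit, reserve_ratio, lend_fraction):
    """Total new lending generated by a deposit when banks re-lend
    `lend_fraction` of each dollar not held as required reserves."""
    total = 0.0
    funds = deposit
    for _ in range(1000):  # iterate the re-deposit cycle to convergence
        loan = funds * (1 - reserve_ratio) * lend_fraction
        total += loan
        funds = loan  # loans return as new deposits and are partly re-lent
    return total

# Commenter's implicit assumption: banks lend everything beyond reserves.
full = total_lending(100_000, reserve_ratio=0.10, lend_fraction=1.0)
# Hoarding: banks re-lend only a quarter of their loanable funds.
hoard = total_lending(100_000, reserve_ratio=0.10, lend_fraction=0.25)
print(round(full))   # 900000 -- roughly the "$100,000 backs a million" story
print(round(hoard))  # 29032 -- the multiplier largely vanishes
```

The point of the sketch is the one in the text: the large multiplier holds only if banks actually lend; when they hoard, a dollar deposited generates little new lending.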
Between 1997 and 2008, median U.S. household income fell by 4 percent after adjustment for inflation.
It presumably did not rise in 2009, and may not in 2010 either.
A median is not an average; average income rose because the incomes of high earners rose, and so the effect was to increase the inequality of the income distribution.
Three factors appear to have contributed significantly to this trend.
One is the continuing increase in the returns to IQ and education as the United States shifts to a highly automated economy; another was and is the historically unprecedented revenue of the finance industry during this period, much of it received by financial executives in the form of very high incomes; and third is the steep increase in premiums for employer-provided health insurance: the increase was almost 80 percent between 2000 and 2009.
Much of this is nominally paid by the employer, but because it is a cost of labor it substitutes for wage increases and so holds wages down.
There is no reason to think these trends will not continue; and until unemployment falls to a normal level, it is hard to see what might work to overcome the trends if they do continue.
In considering the effect of wage stagnation and growing income inequality, it is important to distinguish between money income and standard of living.
As long as the quality of goods and services increases (largely because of technological innovation in a broad sense that includes new business methods as well as scientific and engineering progress) faster than their cost, the standard of living will rise even if incomes do not.
The quality of health care continues increasing rapidly, and part at least of the rapid rise in health insurance premiums is payment for that increased quality.
The quality-adjusted cost of consumer electronics has plummeted in the same period.
But even if the standard of living has increased for most people whose incomes have not risen, or have even fallen, this would not alleviate the growing political problems that wage stagnation and the resulting increase in economic inequality are likely to create, if they haven’t done so already.
People take for granted most improvements in goods and services, and do not consider the improvements to be full compensation for a flat or declining income.
Then too liquidity constraints may exclude people from access to many of the improvements; this is a problem for many people who cannot afford health insurance.
Economic anxiety arising from wage stagnation was masked until the fall of 2008 by the Federal Reserve’s low interest rate policies; people could borrow cheaply to maintain and even increase their consumption.
Now they realize they are overindebted and cannot continue to support consumption by borrowing.
Economic anxiety can produce dire economic consequences by increasing the demand for trade protection, for restrictions on immigration, for union protections, for other anticompetitive measures, and for government subsidies; it can also create class resentment, and thus lead to inefficient regulatory policies, as we may be seeing with proposals to “rein in” the “greedy” banks.
One reason I continue to believe that what we have gone through in the last two years is a depression and not a mere recession is that it has raised economic anxiety to a politically dangerous peak.
I regard the “tea party” movement as rooted in a widespread sense (not limited to those who identify with the movement) that something is seriously wrong with the country.
My analysis suggests that measures to reduce income inequality, especially measures that raise the median household income (as distinct from reducing inequality by leveling down the incomes of the well off, which would have serious disincentive effects), could increase economic efficiency by reducing political pressures for inefficient policies.
That was the rationale for “socialist measures,” beginning with Bismarck, designed to secure capitalism against communism and other radical political ideologies.
And the measures worked!
The problem is that the social safety net has become too expensive to be expanded further without jeopardizing the nation’s solvency, given our huge and growing public debt.
The only measures that would address wage stagnation without increasing our public indebtedness further would be subsidies that could be realistically defended as profitable investments in human capital (such as public subvention of college tuition), and essentially costless regulatory changes such as eliminating the tax subsidy for employer-provided health benefits, eliminating or at least reducing many other tax subsidies, instituting means testing for Medicare and Social Security benefits, relaxing certain safety and environmental laws to reduce costs to businesses, weakening teachers’ unions and other public employee unions and reducing the number of public employees, further privatization of public services, reducing tariff barriers, and allowing greater immigration of highly skilled workers.
In the type of VAT that is simplest to understand, the retailer pays the tax, which is a percentage of the difference between the price at which he sells his goods or services and the cost of his labor, materials, and other business expenses (such as rent and insurance).
In other words, he is paying tax on the value that he added to the inputs he bought.
His suppliers likewise pay the VAT on their sales revenues minus their costs—that is, on the value they added in the production-distribution process.
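The stage-by-stage mechanics can be sketched with hypothetical firms and prices (all figures invented for illustration); the key property is that the taxes collected at each stage telescope to the rate applied to the final retail price:

```python
# Sketch of a VAT levied on value added at successive production stages.
# Firms, prices, and the 5 percent rate are hypothetical illustrations.

VAT_RATE = 0.05

# (firm, cost of purchased inputs, sale price); value added = sale - inputs
stages = [
    ("farmer",     0, 100),
    ("miller",   100, 160),
    ("baker",    160, 220),
    ("retailer", 220, 300),
]

total_vat = 0.0
for firm, inputs, sale in stages:
    value_added = sale - inputs
    tax = VAT_RATE * value_added  # each firm pays tax only on its value added
    total_vat += tax
    print(f"{firm}: value added {value_added}, VAT {tax:.2f}")

# Because value added telescopes across the chain, the sum of the
# stage-by-stage taxes equals the rate times the final retail price.
print(f"{total_vat:.2f}")        # 15.00
print(f"{VAT_RATE * 300:.2f}")   # 15.00
```

This telescoping is why a VAT with a broad base can raise large revenues at a low rate: the whole retail value of output is reached, but no stage is taxed twice.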
Because (assuming no exemptions) the tax base for a VAT is so broad—all goods and services—a VAT can generate enormous tax revenues at a low tax rate, which reduces the distortionary effect of the tax.
Moreover, a tax proportioned to value added is a reasonable if crude proxy for the government services that a business firm (and hence its customers) receives.
The VAT also avoids the double taxation of savings under a corporate plus individual income tax system, further encourages savings by making consumption more costly, and reduces the disincentive effects of heavy income taxation.
The VAT is criticized as regressive, but if the tax rate is low the regressive effect is slight, and in any event could be offset by subsidies for the poor.
The VAT is also criticized as inflationary, because it causes retail prices and thus the consumer price index to increase, but the empirical evidence (most nations have a VAT, so there are abundant data for studying its effects) is that at worst the imposition of a VAT causes a one-time blip in the index.
Of course the benefits of the VAT are greatest if it is substituted for income taxes and other inefficient taxes rather than being added to the existing tax system to generate additional tax revenues.
(The inflationary effect would disappear if the VAT merely substituted for other taxes that are passed on to consumers.) Not only is such a substitution politically infeasible, but the cost of transitioning from a tax system based on income to one based on value added at the successive stages of production and distribution would be immense.
Becker’s main objection to the adoption of the VAT by the federal government, which is similar to the objection to taxes on Internet sales and indeed any new taxes that do not merely replace existing taxes, is that by increasing government revenues it will increase the size of government relative to the private economy, and if (as is doubtless true) government is less efficient, the result will be a reduction in economic welfare.
An efficient tax is less costly and so is likely to be set at a level that generates more tax revenue than an inefficient one; and, as Becker notes, because it is less costly, it is likely to grow more over time than an inefficient tax.
I agree, but on the other side of the issue is our awful fiscal situation.
Our public debt is soaring at a rate of more than $1 trillion a year, and for political reasons it is extremely unlikely that the debt will be brought under control by higher tax rates, spending cuts (or forbearance to adopt new spending programs), or a rate of economic growth faster than the rate of growth of the public debt.
The fact that the dollar remains the strongest major currency, which is why it remains the dominant international reserve currency, is enabling the Treasury to borrow at low rates.
But this will not last if we continue on the road of fiscal imprudence; and as interest rates on the public debt rise, compounding the deficit, we could find ourselves in the position that Greece is in.
Devaluation is a standard response to excessive public debt, but would not make sense for the United States: first because foreign trade is not a very large part of our economy, and second and more important because devaluation would greatly impair the international standing of the dollar.
Inflation is another standard response to excessive debt, but it would hurt our foreign trade, and the rate of inflation would have to be high because most of our public debt is short term, so that moderate inflation would be largely ineffectual because interest rates on the debt would adjust quickly.
But high rates of inflation have negative effects on the economy.
In light of the nation’s fiscal bind, the imposition of a federal VAT becomes a more attractive prospect.
One immediate beneficial effect, provided that the VAT was not entirely additive to existing taxes but was coupled with some reduction in corporate and payroll taxes, would be a reduction in export prices and therefore an increase in exports and hence a reduction in our trade deficit, which is a contributor to our public debt.
The General Agreement on Tariffs and Trade permits VAT to be rebated on exports, thus lowering the cost to the foreign buyers.
More important, the VAT would increase federal tax revenues with minimal distortion because it is an efficient tax.
To the extent (even if modest) that it replaced less efficient taxes, it would increase economic efficiency and thus increase the rate of economic growth.
Most important, by discouraging consumption in favor of savings, a VAT would reduce the interest rate on our public debt and the Treasury’s dependence on foreign lenders.
Marriage rates have declined steeply in the United States and other Western nations.
The number of marriages in the United States in 1950, per one thousand population, was 11; it is now 7—and the number is much lower in Western European nations.
Of Americans aged 25 to 44, 62 percent of women are married and 59 percent of men; 8 percent of women are cohabiting and 10 percent of men.
Divorce is frequent; only two-thirds of American marriages last more than 10 years.
Forty percent of children are born out of wedlock.
It is easier to explain the decline in marriage rates than to assess the significance of the decline for the health of the society.
Decline in infant mortality and increase in job opportunities for women (and hence increased opportunity cost of motherhood) have reduced the demand for children and thereby raised the average age of marriage, which leads to a reduced number of marriages.
And women, being better able to support themselves than in the past, are pickier about marriage, and that reduces the marriage rate.
Moreover, with many more women working in the market than only in the household, the gains from specialization in marriage have fallen.
In addition, both sexes have greater access to sex outside of marriage; marrying for sex is becoming a thing of the past as taboos against extramarital sex disappear.
And legal changes have reduced the difference between marriage and close substitutes like cohabitation: no longer are children born out of wedlock subjected to disadvantages associated with “illegitimacy,” such as being denied rights of inheritance; and no-fault divorce has lowered the cost of divorce.
In a pluralistic society, widespread practices tend to become normative.
The more unmarried people there are, the more the unmarried state seems normal.
Between 1930 and 1990, the percentage of U.S. households consisting of a married couple (with or without children) declined from 84 percent to 56 percent.
In the 1950s (and earlier)—an era of greater social conformity than at present—unmarried men in particular were suspected of being homosexual (at a time when homosexuality was strongly reprobated), selfish (incapable of commitment), or otherwise “lacking something.”
There is considerable evidence that married people are happier and healthier than unmarried, but the direction of causation is unclear.
Happier and healthier people are more desirable as marriage partners and also better able to cope with the strains that are inevitable in a close relationship with another person to whom one is not related.
The American population has over time become healthier and even somewhat happier, yet these trends have not reversed the decline in the marriage rate.
Will the marriage rate go all the way to zero, or close to it, and be replaced by a pure contractual regime (the triumph of freedom of contract)? I think not.
The reason is that even in an era of no-fault divorce and a high divorce rate, marriage signifies commitment in a way that no other adult relationship does—if only because marrying couples greatly exaggerate the likelihood of never divorcing.
Partly because of the exaggerated expectations that people bring to marriage, it is socially and emotionally much more difficult to terminate a marriage than a cohabitation (euphemistically but imprecisely termed nowadays a “committed relationship”).
And it is difficult to imagine satisfactory contractual substitutes--which would have to define marital obligations and their satisfactory performance and specify sanctions for breach--that would create a comparable commitment.
So as long as there is a demand for a really committed relationship, although the commitment cannot be nearly as strong as when divorce was difficult or even forbidden altogether (but when short life spans sharply limited the duration of most marriages), one can expect marriage to persist.
This is provided that commitment yields substantial expected benefits.
It does, as is most easily seen in a culture in which divorce is strongly disapproved or even forbidden (or remarriage forbidden).
For then each spouse has a strong incentive to invest in the marriage, as when the wife takes time off from work to have children and provide extensive child care and the husband invests in the children by providing material support.
When children are born out of wedlock, the entire burden of child care is likely to fall on the mother, to the detriment of the children.
A puzzle is why the marriage rate of college-educated women, which used to be substantially lower than that of other women, has risen, though it remains slightly lower than that of other women.
One might think that educated women's demand for marriage would fall as the labor-market demand for educated women rose. But at the same time their better income prospects attract men.
Also, the general rise in the age of marriage reduces the relative effect of education on age of marriage (women—men also—tend to postpone marriage until they complete their education).
There may also be an investment motive.
With increased returns to education and fewer children per family, educated people are motivated and able to invest in their children’s upbringing and schooling more than the uneducated have either the resources or felt need to do.
Men seek out intelligent women (and vice versa) as marriage partners in the hope of having children who will be more educable and successful.
Although I do not place any weight on long-term economic forecasts, there is no doubt that we face a growing burden of federal and state entitlement spending—“entitlement” signifying that expenditure levels are automatically financed, rather than having to be reauthorized every year as defense expenditures (and indeed all nonentitlement government expenditures) are.
Government entitlement spending is concentrated on pensions and elder health care.
Both forms of spending increase as a function of the growing percentage of elderly people in the population, and healthcare spending grows additionally because of increases in cost caused by new technologies plus the normal upward-sloping supply curve, implying that increases in demand for health care (because of the increasing number of elderly) cause a rise in average and therefore total costs.
Becker rightly adds to entitlement costs the cost of servicing government debt, since lenders to government have an entitlement to the repayment of their money with interest at the rate specified in the loan.
Costs of debt service can easily grow at a compound rate, because the more the government borrows, the higher the interest rate it is likely to have to pay; instead of borrowing $1 billion at 5 percent interest, for a total annual interest expense of $50 million, it might have to pay 7 percent interest to borrow $2 billion, for a total annual interest expense of $140 million—which is more than twice the interest expense on a loan that is twice as large.
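The arithmetic of the illustration is easy to verify with the figures given in the text:

```python
# Verifying the debt-service illustration: doubling the amount borrowed
# more than doubles interest expense when the larger borrowing pushes up
# the rate. Figures are those used in the text.

def annual_interest(principal, rate):
    return principal * rate

small = annual_interest(1_000_000_000, 0.05)  # $1 billion at 5 percent
large = annual_interest(2_000_000_000, 0.07)  # $2 billion at 7 percent

print(f"{small:,.0f}")       # 50,000,000
print(f"{large:,.0f}")       # 140,000,000
print(f"{large / small:.1f}")  # 2.8 -- the debt doubled, the expense nearly tripled
```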
Because of the United States’ huge public debt (the part of federal government debt that the government is contractually committed to repay), interest rates on the public debt are likely to rise, creating the compounding effect that I just illustrated.
I am less concerned about nonfederal government debt than about federal government debt.
The nonfederal debt problem centers on public workers’ pensions, and public workers are of course a minority of all workers.
Moreover, because states and cities cannot create money, and because they compete with other states and cities for businesses and people, they are compelled by market forces to restrict spending.
The federal government’s entitlement obligations extend to the entire elderly population of the United States.
And any commitment to federal fiscal prudence is diluted by knowledge that the United States can always inflate its way out of deficits or borrow at low rates as long as the dollar remains the principal international reserve currency, so that foreign nations have to buy dollars from us, which means have to lend to us.
Deficit projections are pretty worthless.
At the beginning of 2007 the Congressional Budget Office, which has an inflated reputation but is at least nonpartisan, projected the federal deficit for fiscal 2010 at $333 billion (it will be at least four times that)—and that was a short-term projection.
In 2001 it had predicted a 10-year budget surplus of more than $3 trillion.
Its forecasts are largely just extrapolations, which assume that the future will be just like the past.
All that can be said about future deficits with an approach to confidence is that if nothing is done they will grow, and that nothing is likely to be done until they grow to a point at which there is a palpable impact on the standard of living.
In 1983, Congress amended the Social Security Act to provide that, beginning with people born in 1938, the age of eligibility for full social security benefits would rise gradually from 65 to 67.
(Hence the first effects of the reform were not felt until 2003, when people born in 1938 reached the age of 65, and the full effect will not be felt until 2026, when people born in 1959 reach 67—it is the deferral of the hurt that made the program politically feasible.) It is a sad commentary on our political system that there is no movement today for a similar reform, which would raise the future age of entitlement to full social security benefits to 70 in recognition of continuing increases in longevity, health, and income.
We are in ostrich mode so far as dealing with our fiscal problems is concerned, even though the problems are far more serious than they were in 1983.
The basic problem is that our two political parties, although they pretend to be ideologically opposed and certainly do disagree on a number of details of public policy (many of which however are economically inconsequential), are agreed on the basics of fiscal policy: that taxes are bad and government spending is good.
The Democrats used to believe that since spending was good, taxes had to be heavy, and Republicans that since taxes are bad, spending had to be limited so that taxes could be low.
Eventually the parties discovered from election results that taxes are unpopular and spending popular, so Democrats stopped pushing for higher taxes (except on very high earners) and Republicans for lower spending.
Both parties have embraced fiscal irresponsibility.
Becker’s focus is the impact of food price inflation on the poor, and I have no disagreement with his analysis.
I want to discuss commodity price inflation in general, and its political consequences, but I want to begin my discussion with Becker’s point about the effect on food policy of the greater political clout of urban than of rural populations.
That greater political clout has been a factor in government policies since Rome provided its citizens with free bread.
The concentration of population in a city, especially in a nation’s capital, makes urban residents a potential threat to political stability.
It does this by facilitating large-scale demonstrations, riots, and other mob action (not only because there are many people to fill the streets, but also because information spreads very rapidly in a city, facilitating the coordination of a large group of people), with consequences—which can include bringing down the government—that we are seeing in the Middle East and North Africa today.
Hence it makes political sense for a government to provide more generous food subsidies to urban than to rural residents even though the latter are needier.
It appears that rapidly rising food prices have been a major factor in the recent and continuing unrest in the Arab countries.
As Becker emphasizes, food prices are a big part of the budget of families in poor countries, even of urban residents, with their higher incomes.
In fact the demonstrators who brought down the Tunisian and Egyptian governments were complaining vociferously about surging food prices, though they had other serious complaints as well.
During the Egyptian crisis Mubarak promised higher subsidies in an (unsuccessful) effort to quell unrest.
Other governments in what used to be the Third World will doubtless take the hint, and increase food subsidies particularly for city people, who are far more likely to bring down a regime than rural people; and so the inefficiencies and hardships that Becker points to as consequences of economically unsound food policies are likely to become even greater.
The broader point is that commodity price inflation has inescapable political consequences.
A characteristic of commodity prices is that they can and often do change very rapidly and steeply, and if the commodities are consumer commodities these changes are quickly observed and cause alarm.
A notable example is gasoline.
Prices of gasoline change rapidly, and, as with other consumer commodities, purchases are frequent, so that the consumer is kept continuously aware of price instability.
Prices for food and fuel are excluded from the usual measures of inflation because of their instability, but they contribute to measured inflation (if they continue rising) indirectly because they are inputs into other goods and services, as are many other commodities whose prices have also been rising.
The rise in these prices reflects worldwide increases in demand, especially in rapidly growing economies such as those of China, India, and Brazil, and concerns with inflation, against which investment in commodities is a hedge.
If food prices keep rising, we can expect more unrest in the Third World and slower economic growth.
The increased unrest, however, need not diminish long-run economic growth or long-run political stability, if it is the catalyst for the replacement of authoritarian regimes with more-democratic ones—though that can only be a hope, and not a prediction.
Rising food prices thus may—or may not—have a silver lining.
Last week the Department of Transportation announced a set of regulations that it calls “Enhancing Airline Passenger Protections II,” the “II” referring to the fact that similar “protections” were imposed more than a year ago and are now being expanded.
The new regulations are summarized in an article in the   New York Times   published on April 20.
Accompanying the new regulations is a study by a pair of economic-consulting firms (one of them the well-known Econometrica) that purports to quantify the financial benefits of the regulations and concludes that they are modest but positive.
Here is the study’s summary of the requirements imposed on airlines by the new regulations:
“1. Expansion of tarmac delay contingency plan requirements and extension of EAPP1 Final Rule requirements to cover foreign carriers.
2. Expanded tarmac delay reporting and application to foreign carriers.
3. Establishment of minimum standards for customer service plans (CSPs) and extension of EAPP1 Final Rule requirements to cover foreign carriers.
4. Extension of requirements to post contracts of carriage, tarmac delay contingency plans and CSPs on websites to foreign carriers.
5. Extension of EAPP1 Final Rule requirements for carriers to respond to consumer complaints to cover foreign carriers.
6. Changes in denied boarding compensation (DBC) policy.
7. Full-fare advertising and prohibition on opt-out provisions.
8. Expanded requirements for disclosure of baggage and other optional fees.
9. Prohibition on post-purchase price increases.
10. Prompt passenger notification of flight status changes.
11. Limitations on venue provisions in contracts of carriage.”
Do these regulations make economic sense? (They probably make political sense.)
I won’t try to discuss all of them, but will sample each of the three kinds of requirement that the new regulations impose: (1) requirements that alter the contract between the passenger and the airline, ostensibly to make it more favorable to the passenger; (2) requirements concerning fuller disclosure of the terms of the contract; and (related to both other categories) (3) requirements designed to prevent exploitation of consumers by trickery.
(1) includes requiring refunds for lost luggage and increasing the compensation required for passengers who are bumped from a flight because of overbooking; (2) includes such things as requiring that taxes be disclosed as part of the purchase price rather than separately and that baggage fees be more clearly disclosed to passengers in advance; (3) includes such things as post-purchase-price increases and limiting where a passenger can sue an airline (“venue”).
(1) doesn’t make any economic sense.
If airlines have to make refunds for lost baggage or increase overbooking compensation, they will have to raise prices to cover the higher costs resulting from these requirements.
If passengers preferred to pay the higher price necessary to obtain these benefits, one would expect the airlines to revise their contracts with passengers accordingly.
The same thing is true with regard to limiting the amount of time that an airline can keep a plane on the airport tarmac, awaiting clearance to take off, without giving the passengers a chance to leave the plane.
The new regulations, in merely extending to foreign airlines the tarmac-delay limitations already imposed on domestic ones, can be thought of as just closing a loophole; the economic question is why the matter of tarmac delay can’t be left to implicit negotiation between airlines and passengers, resulting in an optimal combination of ticket price, inconvenience from being stuck in a plane, and likelihood of flight cancellations if the combination is weighted in favor of compensating passengers inconvenienced by tarmac delay.
(2) is another questionable category of regulations.
Competitive firms don’t disclose negative features of their product (which can include price) unless there is a competitive advantage to doing so.
Assuming airline taxes don’t differentiate among airlines, disclosing the taxes as part of the ticket price rather than in fine print unlikely to be noticed is not going to give any airline a competitive advantage, and it may reduce demand slightly by revealing air travel to be somewhat more expensive relative to substitutes than passengers may have thought.
The effect is likely to be trivial, however.
As for baggage fees, if an airline decides to compete by charging lower fees than its competitors, presumably it will advertise the fact; it will not need government prodding.
(3) is really a variant of (2).
Take the venue limitations.
Venue (where the airline can be sued) is a term in the contract between the airline and the passengers, which the Department of Transportation is forcing the airlines to delete.
But many consumers are unaware of the meaning or significance of such a provision, and requiring disclosure of it would merely burden the passenger with information that he doesn’t want or need—for how likely is he to want to sue the airline? So I don’t regard these provisions of the regulations as particularly objectionable on economic grounds.
The issues become slightly more difficult when one takes account of the airline industry’s somewhat unusual cost structure.
It can be argued that the airline industry would not really be very price competitive if left to itself because the provision of airline service has a high ratio of fixed to variable costs.
Such a ratio makes price cutting a perilous method of trying to increase profits, since if prices spiral down to marginal cost the airlines are not able to cover their fixed costs.
And price changes are necessarily published in advance and airlines cannot feasibly “steal” passengers from each other with secret discounts.
So tacit collusion on price should be feasible, and one might expect therefore that the airlines would do anything to pack their planes, since the cost of adding a passenger to a plane that has empty seats is very slight—anything but offer a lower price than a competitor for a given quality of service.
But if tacit collusion on price, coupled with the cost structure of the industry, caused the airlines to substitute nonprice for price competition, one would expect the quality of service to be too high (that is, consumers would prefer a lower price at the cost of lower quality), as in the days when regulation by the Civil Aeronautics Board largely prevented price competition, resulting in extravagant nonprice competition in the form of ultra-frequent flights and lavish in-flight meals and even entertainment.
The quality of airline service has of course been steadily deteriorating.
Moreover, entry either of new airlines, or of existing airlines into routes they haven’t previously served, is easy, and must keep prices down.
And the ratio of fixed to variable costs in the provision of airline service has declined because of rising fuel costs, a variable cost that now accounts for 40 percent of an airline’s total costs.
Moreover, if airlines don’t want to compete on price, requiring fuller disclosure won’t have much, if any, effect.
But the cost structure not of the industry but of the individual airliner—the high ratio of fixed to variable costs in the operation of a particular airliner—is relevant to the possibility of “market failures” that might, in principle anyway (always a vital qualification), justify regulation.
I said that the airlines are desperate to pack their planes, because an additional passenger adds much more to revenue than to cost.
But packing planes, necessarily with passengers who have a variety of preferences, limits product differentiation.
There is some: think of first class, and the recent movement toward varying the leg room of coach seats.
But it isn’t feasible for airlines to offer passengers different waiting times on the tarmac when bad weather delays flight clearance; they’re all in the same boat.
And when stiff baggage fees result in delays in boarding and deboarding, and difficulty in finding bin space for one’s carry-on luggage, because the fees have caused many passengers to substitute carry-on for checked baggage, all passengers (in coach) are in the same boat.
The quality-sensitive passengers may incur a great deal of disutility from the degradation of quality that enables the airlines to attract a lot of quality-insensitive (in other words, price-sensitive) passengers, yet the former may not be willing to pay the very high price necessary to compensate the airlines for losing price-sensitive passengers deterred by the high price.
This is not a genuine market failure, however—technological constraints, rather than collusion or externalities, prevent the industry from providing the level of service desired by many passengers—but it helps to explain the public pressure for quality-enhancing regulations.
The Department of Transportation thinks or pretends to think that these regulations increase overall satisfaction with airline service, but this fails to consider that if passengers as a whole wanted better service at a higher price, the industry would provide it voluntarily.
Service can’t be decoupled from price.
The airline is providing a bundled or “one size fits all” product to most of its customers, and unbundling can increase demand by enabling a better matching of price to consumer preference.
But except for the tiny sliver of flyers who can afford private planes, the cost structure of the airline industry, rooted in airliner design, prevents significant quality differentiation.
I conclude that while regulations that forbid certain deceptive practices in the marketing of airline services may be justifiable, regulations that require more detailed disclosures, and in particular regulations that require quality improvements, are not justified, at least on economic grounds.
The consulting report is wrong to think they’ll increase aggregate welfare, although they will modestly increase the welfare of the complainers—illustrating that the squeaky wheel is indeed greased first.
The most common type of cartel is an agreement among competitors not to sell their product below a fixed price that will generate monopoly profits for the parties to the agreement.
But another type of cartel is monopsonistic rather than monopolistic (a monopolist is the sole seller in a market; a monopsonist, from the Greek words for “one” and “purchase of provisions,” is the sole buyer): an agreement among competitors not to pay more than a fixed price for a key input, such as labor.
By agreeing to pay less, the cartel purchases less of the input (and perhaps of lower quality), because less is supplied at the lower price (and suppliers may lower quality to compensate, by reducing their costs, for the lower price they receive).
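The quantity effect described above—less of the input supplied at the cartel's capped price—can be sketched with a toy linear supply curve. All numbers here are hypothetical, chosen only for illustration:

```python
# Toy illustration of the monopsony point: with an upward-sloping supply
# curve, capping the price paid for an input reduces the quantity supplied.
# The supply-curve parameters and wage figures are purely hypothetical.

def quantity_supplied(price, intercept=0.0, slope=2.0):
    # Linear supply: quantity offered rises with the price paid.
    return max(0.0, intercept + slope * price)

competitive_wage = 50.0
capped_wage = 30.0  # the cartel's agreed maximum payment

print(quantity_supplied(competitive_wage))  # 100.0 units supplied
print(quantity_supplied(capped_wage))       # 60.0 -> less is supplied
```

The sketch also suggests why cartel members are tempted to cheat: any one buyer who quietly offers a bit more than the capped price attracts more, and better, input.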
The National Collegiate Athletic Association behaves monopsonistically in forbidding its member colleges and universities to pay its athletes.
Although cartels, including monopsonistic ones, are generally deemed to be illegal per se under American antitrust law, the NCAA’s monopsonistic behavior has thus far not been successfully challenged.
The justification that the NCAA offers—that collegiate athletes are students and would be corrupted by being salaried—coupled with the fact that the members of the NCAA, and the NCAA itself, are formally not-for-profit institutions, has had sufficient appeal to enable the association to continue to impose and enforce its rule against paying student athletes, and a number of subsidiary rules designed to prevent the cheating by cartel members that plagues most cartels.
As Becker points out, were it not for the monopsonistic rule against paying student athletes, these athletes would be paid; the monopsony transfers wealth from them to their “employers,” the colleges.
A further consequence is that college teams are smaller and, more important, of lower quality than they would be if the student athletes were paid.
One might ask why colleges choose to collude on the student athlete dimension rather than on some other dimension, such as tuition—agreeing to minimum tuition levels, or maximum scholarships.
The answer I think lies in my earlier point—the “justification” (specious though it may be) that paying student athletes would corrupt the educational process, an argument that draws on a tradition of admiration for amateurism even in adult athletic competition, as in tennis until 1968.
Efforts to fix the price for a college education would encounter sharper antitrust challenges—and indeed the Ivy League schools were forced by antitrust litigation to drop their attempt to limit competition in scholarship aid, a form of price fixing—in effect colluding on tuition discounts, which is what a scholarship is.
College athletics would be less profitable for colleges if the student athlete market were competitive.
If permitted, colleges would continue to agree to limit recruitment of athletes who could not satisfy degree requirements and to require athletes to attend classes and thus be bona fide students, because otherwise competition for the best athletes would tend to eliminate the “student athlete”; college teams would be largely composed of athletes who had no interest in or capacity to obtain a college education; awarding them a degree would be meaningless.
The college would be engaged in a business unrelated to its academic mission and would thus have to pay taxes on its teams’ earnings.
Worse, alumni donations to their alma mater, which are stimulated by the success of the college’s teams, would wilt if the teams were composed of non-students.
If the University of Chicago bought the Chicago Bears, and renamed the team the University of Chicago Bears, would alumni of the University of Chicago write bigger checks to the University?
For similar reasons, I don’t think eliminating the rule against paying student athletes would result in their being paid actual salaries.
The concept of a student who is also a professional athlete would trouble alumni.
I expect that instead the student athletes would receive exceptionally generous scholarships—scholarships that would yield more than the full cost of tuition and living expenses.
But the sky would not be the limit, since, facing higher labor costs, college teams would be less lucrative.
A possible legal complication in repealing the rule against athlete salaries would be the salary disparity between male and female college athletes.
The only really lucrative college sports are football, a male sport, and men’s basketball.
Competitive salaries for college football and basketball players would vastly exceed those for other sports, including women’s sports.
Paying lower salaries to women athletes could invite challenges under Title IX of the Education Amendments of 1972, which among other things forbids sex discrimination in education that receives federal subsidies.
The strongest argument against eliminating the NCAA cartel is that it would make colleges and universities poorer, and this would be a social loss if one assumes (plausibly) that higher education creates external benefits.
Of course the government could replace the lost revenues with subsidies financed by taxes.
But while monopsony is inefficient, tax increases create distortions similar to those created by monopoly and monopsony.
Paul D. Ryan, the Republican Congressman who is the chairman of the House of Representatives’ Budget Committee, has proposed an ambitious plan for capping federal expenditures and eventually eliminating, or at least greatly reducing, the national debt.
The plan is detailed, and I will omit most of the details.
The significance of the plan lies not in its details, or indeed in any of its proposals, but rather in the willingness of a major politician to challenge entitlements spending.
This is only part of the plan but it has great symbolic significance, displays political courage, may open a productive dialogue, and challenges President Obama to propose his own plan for limiting such spending, which he has thus far been too timid (or politically realistic!) to do.
Ryan’s main proposals are as follows:
1. Repeal Obama’s health-care legislation, but reform health insurance by abolishing favorable tax treatment of employer-provided health insurance and giving everyone a tax credit to assist or enable the purchase of a private health insurance policy.
2. Replace Medicaid with a subsidy for purchasing private health insurance.
Also make it a block-grant program rather than a matching program: that is, each state would get a sum of money allocable to Medicaid beneficiaries, rather than, as at present, the federal government’s matching state expenditures for Medicaid, which reduces the cost to the states of expanding their Medicaid enrollments.
3. Replace Medicare with a subsidy for purchase of private health insurance.
But this provision of the plan would not take effect until 2021; so only persons not yet 55 years old would be affected by it.
4. Make the Bush tax cuts permanent, simplify the income tax, limit the maximum income tax rate for both individuals and corporations to 25 percent, and make up the loss in tax revenues by closing unspecified tax loopholes and imposing a form of VAT (value-added tax)—in effect a sales tax—on businesses.
5. Freeze discretionary federal spending at 2008 levels for five years.
Require the vote of a two-thirds majority in Congress to increase tax rates or impose new taxes, and enact automatic caps on increases in entitlement spending.
Health-care entitlements spending would be indexed to the average of the consumer price index and annual increases in the cost of health care.
Total federal spending could not exceed 20 percent of GDP, compared to today’s 25 percent, which President Obama hopes to reduce eventually (by unspecified means) to 22 or 23 percent.
The goal of Ryan’s plan is to reduce federal deficits by $4.4 trillion over the next ten years, and to do so entirely by reducing federal spending (by a total of $6.2 trillion).
Although there are good ideas in Ryan’s plan, what it really shows is that, barring a miracle, the fiscal condition of the federal government will continue to deteriorate.
Even if the plan were enacted in full—a political impossibility—it probably would make only a small contribution to reducing future deficits.
The annual federal deficit, and therefore the national debt, grows when federal revenues fall short of federal expenditures.
Revenues can be increased by higher taxes or by more rapid economic growth, so that the same tax rate produces rapidly increasing revenue; reducing the tax rate can, in some circumstances, actually increase total tax revenues, either by stimulating economic growth or by inducing more reporting of taxable income or the shifting of other income sources to taxable income.
For example, reducing the corporate income tax rate would induce American multinational corporations to repatriate more of their foreign profits.
Alternatively, federal spending can be reduced either directly, or indirectly by reducing demand for federal financial assistance—the hope of health-care reformers.
In theory the most promising route to narrowing the gap between revenue and expenditure is economic growth, because the economy is much larger than the government.
A 5 percent increase in GDP would amount to more than $700 billion; half of $700 billion would equal almost 10 percent of annual federal expenditures (currently about $3.5 trillion a year).
Compounding could make aggregate economic growth over a period of years greatly exceed increases in federal expenditures.
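The arithmetic in the two preceding sentences can be checked with a back-of-the-envelope calculation. The GDP figure below is an assumption (roughly the size of the U.S. economy at the time of writing); the spending figure and the growth rates are illustrative:

```python
# Back-of-the-envelope version of the growth arithmetic in the text.
gdp = 14.0e12          # assumed U.S. GDP of about $14 trillion
fed_spending = 3.5e12  # about $3.5 trillion a year, per the text

boost = 0.05 * gdp     # a 5 percent increase in GDP
print(boost / 1e9)                 # roughly 700 (billion dollars)
print((boost / 2) / fed_spending)  # half of that gain is ~10% of spending

# Compounding: even modest growth differentials widen over a decade.
# The 5% and 2% annual rates below are hypothetical.
gdp_after_10 = gdp * 1.05 ** 10
spending_after_10 = fed_spending * 1.02 ** 10
print(gdp_after_10 - gdp > spending_after_10 - fed_spending)  # True
```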
But the Ryan plan would be unlikely to have substantial effects on the rate of economic growth.
Judging from the effects of the Bush tax cuts, the Clinton tax increases, and the high rate of growth in the post-World War II decades, in which tax rates were much higher than at present, the modest changes in tax rates proposed in Ryan’s plan would not have a significant net effect on economic growth and hence on taxable revenues, bearing in mind that the stimulus effects of tax cuts are offset to a greater or lesser extent by the direct effects of the cuts in reducing tax revenues.
Of course, merely correlating past tax rates with past growth rates is crude empiricism, but my impression of the more sophisticated empirical studies of the effect of tax rates on economic growth is that they are inconclusive with respect to the effect of incremental changes from modest levels.
I am surprised that the Ryan plan does not propose the outright elimination of the corporate income tax, which might have dramatic effects on the repatriation of foreign earnings of American corporations.
The repeal of Obama’s health-care legislation would have a positive effect on economic growth by alleviating the concerns of small business, but probably the effect would be small.
The effect of the plan on economic growth might actually be negative, if, as is entirely possible, the plan if enacted would actually increase the federal deficit.
It would be especially likely to do so over the next decade, when Medicare would be unaffected because the reform of it that the plan proposes would not take effect until 2021.
Medicare expenditures would grow uncontrollably over that 10-year period.
There would be some Medicaid savings, but probably not large ones, because Medicaid is already starved for resources.
With entitlements largely unaffected over the next decade, to keep spending from rising at current rates would require drastic reductions in nondefense discretionary spending (defense spending cannot be reduced much, because of growing instability abroad).
Such reductions are not politically feasible.
So the deficit would keep rising at least until 2021, and the long-term fiscal health of the nation would thus be riding on the Medicare reform that would take effect in that year.
But ten years from now the percentage of the U.S. population that is 65 years old or older, now 13 percent, is expected to reach 16 percent.
That will not only increase the costs of Medicare (quite apart from cost increases owing to technological advances in medical treatment); it will also increase the average age of the elderly population, which will further increase the demand for health care, and—critically—increase the political power of the elderly.
On the basis of past voting behavior of elderly people, there seems little prospect that altruism toward their children and grandchildren will curtail narrowly self-interested (one might even say selfish) voting by elderly people to preserve their entitlements.
As I said at the outset, the fact that a major politician is willing to advocate concrete entitlements reform is promising, but the key compromise that Ryan has made with political realities—deferring his proposed Medicare reforms for a decade—renders the plan economically very questionable.
Perhaps some politician will be bold enough to advocate that all entitlements programs, including social security as well as Medicare, be means-tested, as Medicaid is.
There is no reason why people who can afford to provide for their retirement should be subsidized by the government, which is to say by the taxpayer.
But such a reform does not appear to be politically feasible.
I want to respond to a single comment because it reflects interesting misunderstandings of the blogging phenomenon.
The comment takes Becker and me to task for not responding to all the comments (on our postings) that are critical of us.
By thus not responding, we are said by the commenter to be shutting off debate.
We are not shutting off debate, for three reasons.
First, a failure to respond to a criticism may end a debate in the sense of leaving nothing more to be said, but it does not "stifle" debate or silence the critic--on the contrary, it leaves the critic with the last word.
Second, the comments are public--they are accessible to readers of the blog, at no cost, with a click.
We "enable" (as the blogging expression has it) all comments; we engage in no censorship.
Someone who reads a comment critical of a posting of mine or Becker's and observes that we have not responded to it will be more inclined to agree with the critic than if we did respond.
And third, and most important, bloggers have no "market power" that might enable them to limit debate.
Not only are there 10 million blogs, but because it is costless (except for opportunity costs of time--though these can of course be significant and are one reason why Becker and I do not respond to all the comments on our postings) to create or post to a blog, and as any posting is diffused throughout the "blogosphere" essentially instantaneously, there is no way in which inaccuracies in a blog can be insulated from prompt correction.
There is nothing to prevent the commenter from creating his own "anti-Becker-Posner" blog devoted to correcting our mistakes and omissions!
There is another point worth noting.
Inaccuracies in blogs are less pernicious than inaccuracies in the mainstream media even apart from the superior opportunity for prompt correction of bloggers' errors.
The reason is that bloggers are known not to employ fact checkers or editors; there is no pretense that they have the resources to eliminate all errors in their postings.
The mainstream media, in contrast, represent to their public that they endeavor assiduously to prevent errors from finding their way into articles and broadcasts.
They ask the public to repose trust in them.
Bloggers do not.
That is why serious errors by the mainstream media are played as scandals; they are not merely mistakes--they are breaches of trust.
The rise of the Internet has created a host of social, economic, political, and regulatory issues, a few of which we address in this week's postings.
(Another, at present under consideration by the Supreme Court in the Grokster case, is copyright infringement by means of file sharing.) One that is naturally near to our heart concerns proposals to regulate blogging, either formally or through voluntary adoption of ethical standards.
Many blogs are electronic counterparts of newspapers, magazines, and other advertiser-supported mainstream media, and the argument is that since the mainstream media have adopted ethical standards concerning such matters as reliance on anonymous sources and retraction of errors (with electronic media such as television stations subject to formal regulation), so should bloggers.
The idea of parity among media is attractive, since exempting the producer of a close substitute of a taxed or regulated product from taxation or regulation tends to promote inefficiency; the exemption operates as a subsidy.
The parity issue is starkly presented by the question whether to tax Internet transactions, discussed below.
With respect to blogs, the contention is that exempting them from ethical or other informal (or formal) regulation subsidizes their competition with the mainstream media.
Not that they are or would be totally exempt from controls over content; there is no legal exemption for a blog that defames someone, invades the person's right of privacy, exhibits child pornography, reveals classified information, infringes copyright, or otherwise violates generally applicable laws, though in many cases the bloggers will not have sufficient resources to make suing them for money damages an attractive course of action.
But there is no compulsion on bloggers to comply with the ethical standards applicable to the conventional media.
Moreover, they face less market pressure to comply with ethical standards than the conventional media, because they generally are not supported by advertising revenues (though this is changing) and thus don't have to worry about offending advertisers, or for that matter viewers, since bloggers do not charge for visiting their sites.
Nevertheless I think this exemption of blogging from the ethical standards applicable to the mainstream media makes good economic sense because of economic and technological differences between those media and the blogosphere. There are vastly more bloggers than there are newspapers, magazines, and radio and television stations.
In fact, there are some 10 million blogs, though most of them are personal rather than oriented to news and opinion; yet there are doubtless thousands of the latter.
The large number is related to the fact that there is no set-up or operating expense for blogging except telecommunications line charges and, of course, the opportunity cost of the blogger's time.
As a result not only of the large number of blogs but of the speed of transmission and the fact that many bloggers are far more specialized than journalists, the blogosphere pools information with extraordinary completeness and rapidity, in a speeded-up version of Friedrich Hayek's well-attested model of how the economic market pools information efficiently despite its decentralized character, its lack of a master coordinator.
The blogosphere is a larger and faster-paced network than even the global marketplace.
This means that errors in a blog, to the extent they concern matters of public interest and concern, are corrected almost instantaneously.
Not only by other bloggers, but by bloggers' readers, who post comments to the blog; the comments become a further part of the information network.
There is also greater political diversity in the blogosphere than in the mainstream media, because a conventional journalist's career appeals disproportionately to liberals.
The self-correcting machinery of the blogosphere is more efficient than the internal fact-checking departments of conventional media enterprises.
This is not only because many more people (not only the bloggers, but also, as I have just noted, their audience, which can communicate with them instantaneously by means of the comment feature that most blogs enable) are watching out for mistakes; it is also because corrections are disseminated virtually instantaneously throughout the network.
In contrast, even when the mainstream media catch mistakes, it may, especially in the case of the print media, take days or weeks to communicate a retraction to the public.
The process is especially deficient in the case of newspaper retractions, which are printed inconspicuously and, in all likelihood, rarely read.
Given these differences between blogging and the mainstream media, the case for imposing ethical standards on bloggers is weak.
Moreover, there is no way in which thousands or millions of bloggers could agree to adhere to a set of standards, whereas such (benign) collusion may be feasible in a highly concentrated media industry, such as the newspaper industry.
A criticism of blogging that has some merit is that there is less advance filtering than in the case of the mainstream media, which because of fear of offending advertisers (more precisely, the advertisers' customers and so derivatively the advertisers) engage in a degree of self-censorship, much of it desirable: censoring out hate speech, wild rumors, and fantastic conjectures and misinformation.
Blogging is less constrained and there is a valid concern that it encourages and reinforces antisocial tendencies.
The problem is not limited to blogging but includes other uses of the Internet, such as chat rooms.
It is not, however, a problem amenable to solution or even alleviation by a program of promoting the adoption of voluntary ethical standards, and there are practical as well as legal obstacles to official censorship.
There may also be an actual social value in allowing antisocial elements to blow off steam, and by doing so to identify themselves to law enforcement and intelligence agencies, which can monitor blogs and chat rooms for dangerous movements.
What is more, self-censorship motivated simply by a concern with avoiding offense may impair the marketplace of ideas by excluding heterodox ideas and perpetuating comfortable myths.
A genuine externality created by the Internet that may call for some kind of government regulation is the phenomenon of spam. Spam is advertising (broadly understood to include the various scams that seem to occupy a significant segment of spam space) that is emailed to people's computers.
Because the cost of spamming is very low, enormous volumes of spam are emailed despite a very low response rate, the explanation being that the lower the cost of advertising, the lower the break-even response rate is.
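The break-even logic stated above can be made explicit: a campaign pays for itself when expected revenue per message covers the cost per message, so the break-even response rate is simply cost per message divided by profit per response. All dollar figures below are hypothetical, chosen only for illustration:

```python
# Sketch of the break-even response rate for an advertising campaign:
# it breaks even when responses * profit_per_response >= messages * cost,
# i.e. at a response rate of cost_per_message / profit_per_response.
# The dollar figures used here are hypothetical.

def breakeven_response_rate(cost_per_message, profit_per_response):
    return cost_per_message / profit_per_response

# A 50-cent direct-mail piece vs. a near-costless spam email, each
# assumed to yield $25 of profit per response:
print(breakeven_response_rate(0.50, 25.0))    # 0.02 -> 2% must respond
print(breakeven_response_rate(0.0001, 25.0))  # 4e-06 -> 4 per million suffice
```

As the cost per message approaches zero, the break-even response rate does too, which is why enormous volumes of spam remain profitable despite minuscule response rates.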
Something like 75 percent of all email is spam, and this figure is expected to rise to 95 percent by next year in the absence of regulation.
Spam imposes costs (without offsetting benefits) of two kinds.
First, most of it is of no interest whatsoever to recipients and some of it is downright offensive; hence receipt imposes a cost.
(An earlier example of the first cost was faxed advertisements, which cost the recipient the paper the fax was printed on.) That in itself does not differentiate spam sharply from many other forms of advertising, such as junk mail, but the bother of discarding junk mail is trivial compared to having to pick through a flood of spam to find the emails one wants to read.
In the case of much other advertising, moreover, such as the advertising one finds in newspapers and magazines and on television, the recipient is compensated for having to encounter unwanted advertising, in the form of a lower price for a tied product that he wants to consume, such as free television (i.e., paid for by advertisers).
Second, the cost to the computer industry of filtering out spam (the demand for such filtering being further evidence that spam imposes net costs on most of the people who receive it), and of storing it in hard drives and servers, is already in the billions of dollars a year, for which the spammers don't pay.
Spam thus creates a negative externality, a form of pollution.
True, spam is not completely worthless: if (setting aside the scams) it generated no sales at all, it would be abandoned as an advertising medium.
But the worthwhile spam can be preserved by a system in which spam is allowed to be sent to people who subscribe to the receipt of either all spam or particular categories of spam.
A different kind of externality has been created by the federal law (the Internet Tax Freedom Act, enacted in 1998) that bars state and local taxation of sales made over the Internet.
The effect is to divert sales from conventional retail outlets to Internet sellers, such as Amazon.com.
The exemption of e-commerce from normal taxes operates as a subsidy of that commerce.
The most frequently heard arguments for the exemption are, first, that it is necessary to encourage the infant industry of e-commerce, and second that it is necessary to prevent a further growth of government.
I find neither argument convincing.
An infant-industry argument may make a little bit of sense for a country that has a promising industrial future but cannot finance the start-up costs of industry other than by preventing import competition, i.e., forcing consumers to finance those costs by paying supracompetitive prices.
(Just a little bit, at most, because the global financial market is huge and highly efficient and therefore should be able to finance any promising commercial venture.) But there has never been difficulty in financing Internet-related ventures, even after the bursting of the dot-com bubble.
And the "feeding the beast" argument is unconvincing because state and local taxation, unlike federal taxation, is effectively constrained by competition, since businesses and individuals can move with relative ease from high-tax to low-tax states.
If states obtained substantial revenues from taxing Internet sales, there would be pressure to reduce tax rates.
Banning state and local taxation of e-commerce seems a gratuitous blow to federalism.
In summary, I see no pressing need for imposing ethical standards on bloggers, but controls over spamming, and a repeal of the Internet Tax Freedom Act, deserve serious consideration.
My posting precipitated an interesting debate in the comments about the ethical, political, and economic issues presented by inequality of wealth, which has become very great in twenty-first century America.
These issues are important, but estate taxation is only peripherally related to them.
Without unforeseeable increases in estate tax rates, coupled with extremely stiff gift taxes, estate taxation will continue to have negligible effects on wealth inequality.
I agree with Becker, moreover, that even without explicit gifts wealthy people can transfer substantial wealth to their children by investing in the children's human capital (earning capacity)--not to mention the genetic endowment that high-IQ parents transfer to their children.
I believe that these transfers are increasingly important, for two reasons.
First, with the decline in the importance of strength and stamina as factors of production, the economic return to intelligence has risen.
And second, with the breakdown of traditional barriers, such as religion and ethnicity, to assortative mating (likes with likes), there is more matching of IQs in marriage and so a greater production of highly intelligent people.
I do not think there would be an ethical objection to efforts to reduce the inequality of wealth.
Even if one does not regard one's genetic endowment as a form of unearned luck (as I do not--"luck" to me refers to purely adventitious factors in one's success or failure in life), luck plays an enormous role in wealth; stated differently, the variance in wealth is much greater than the variance in intelligence, character, effort, or all these things combined.
And if wealth could be equalized costlessly, there would be a net gain in economic welfare because of the phenomenon of declining marginal utility of income--that last dollar is worth more to a poor person than to a rich, so transferring a dollar from the rich to the poor will increase aggregate utility, and this effect could continue until incomes were equalized.
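The transfer argument can be made concrete with a toy calculation. This is a hypothetical sketch, assuming logarithmic utility as a stand-in for declining marginal utility; nothing in the post specifies a particular utility function.

```python
import math

# Illustrative assumption: log utility, a standard stand-in for
# declining marginal utility of income.
def utility(income):
    return math.log(income)

rich, poor = 1_000_000, 10_000

# Aggregate utility before and after transferring one dollar
# from the rich person to the poor person.
before = utility(rich) + utility(poor)
after = utility(rich - 1) + utility(poor + 1)

# The dollar is worth more to the poor person, so the transfer
# raises aggregate utility.
assert after > before
```

The same comparison holds for any strictly concave utility function; the choice of logarithm and the income figures are merely illustrative.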
The objection to efforts to equalize wealth, including by drastic changes in estate and gift taxation, is that they are very costly.
They have adverse incentive effects on both rich and poor, and a variety of other negative consequences as well.
Paradoxically, equalizing wealth can increase envy, because one is more likely to envy someone who is slightly better off than one is than someone who is unimaginably better off.
Few people envy Bill Gates, because they cannot imagine what they would do with so much money; but they know very well what they would do with the additional income of their slightly wealthier next-door neighbor.
It is important to recognize, moreover (a point that Oliver Wendell Holmes, Jr.
stressed repeatedly), that personal wealth, no matter how great, is a part, and a constructive part, of aggregate social wealth.
The rich do not burn their money, or put it in boxes under their beds.
If saved, the money is invested; if consumed, it provides incomes to the people who produce the goods that the rich buy; if given to charity or to politicians, it affects, not necessarily for the worse, the social, cultural, and political character of the society.
The opportunity to amass wealth also channels the ambitions of aggressive people into relatively harmless channels, even if Samuel Johnson exaggerated when he said that people are rarely as innocently engaged as when they are making money.
I agree with Becker that inheritance taxes are preferable to estate taxes and that consumption taxes are preferable to income taxes.
However, I do not share his strong opposition to the federal estate tax.
The government needs revenues, and taxation is the principal means of obtaining those revenues.
An ideal tax from an economic standpoint is one that generates substantial revenue without distorting the allocation of resources.
These turn out to be linked concerns.
A narrow-based tax, such as a tax on sports cars, is undesirable from this standpoint: unless the tax rate is very low, in which event little revenue will be generated, the tax will deflect consumers to close substitutes that are not subject to it, and again little revenue will be generated.
A recent article in the New York Times calls the estate tax "perfect" on the ground that it does not distort the allocation of resources because people can't escape death.
Were this true, the estate tax would be analogous to a head tax--say, a tax of $1,000 a year on every U.S.
citizen.
Such a tax would be difficult to escape because substituting away from the taxed activity (or rather status) would require expatriation, which is very costly to an individual.
The tax would generate almost $300 billion a year in revenue.
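The revenue figure is simple arithmetic; a minimal sketch, assuming a citizen population of roughly 290 million (the population figure is my assumption, not from the post):

```python
# Hypothetical arithmetic behind the "almost $300 billion a year" figure:
# a $1,000 annual head tax times an assumed ~290 million U.S. citizens.
head_tax = 1_000
citizens = 290_000_000
revenue = head_tax * citizens  # $290 billion, just under $300 billion
```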
The estate tax is not nearly so perfect as a head tax, because contrary to the Times article it is easily avoidable, as a head tax is not.
I emphasize "easily," for ease of avoidance is the reason the problem with the estate tax is not, as it might seem to be, that it is a tax on savings that can be avoided only by consuming all one's wealth before death, and therefore one likely to distort people's consumption decisions, which would make it an inefficient tax.
People who have no bequest motive--that is, who are not interested in having a positive balance when they die--are not greatly affected by estate taxation because they already have every incentive to dissipate their wealth before they die.
People who do have a bequest motive are not much affected by the estate tax either, because they can transfer their assets to their children or to other intended heirs before death, reserving the income from the assets for their lifetime (the equivalent of an annuity, which expires on the annuitant's death, leaving nothing for distribution to heirs).
As a result of these incentives and opportunities, the estate tax does not generate much revenue for the government.
It collects some, however--and I do not consider 1 percent of total federal tax revenues a trivial amount--and its distortionary effects are probably modest.
Becker is correct that it is costly in effort expended by lawyers to minimize the impact of the tax on their clients (though he may exaggerate the cost by attributing the entire income of estate-planning lawyers to tax counseling, when even in the absence of estate taxation legal counseling would be required for the preparation of wills and other documents required for large estates).
But in that respect, it is no different from the income tax.
Were the estate tax abolished, the revenue it generates would have to be made up by some other tax, which is likely to be as distortionary as the estate tax and would invite its own avoidance efforts by tax lawyers.
The more interesting question to me, though I have no good answer to it, is whether the estate tax should be stiffened, or other measures used to limit bequests.
An article in last Friday's Wall Street Journal, echoing remarks that President Summers of Harvard made at a conference at the Kennedy School that same day, notes a possible, and possibly troublesome, decline in social mobility in the United States.
Wealthy people seem increasingly able to guarantee that their children and even grandchildren will remain in the upper income tier, leaving fewer places for the children and grandchildren of the poor to occupy.
Through legacy admissions (as at Harvard!), expensive private schooling and tutoring, including tutoring in taking college admission tests, as well as by means of direct transfers of wealth, wealthy people are able to purchase a secure place for their children and grandchildren in the upper class.
Even if, as Becker argues, social mobility has not actually declined in recent decades, the conditions for a decline seem to be in place.
But the normative implications are unclear.
For example, one concern with declining social mobility is a fear that rich kids won't work as hard as poor ones, so that economic growth will lag.
But if they don't work as hard, they will lose jobs to the poor.
They may continue to live quite well by clipping coupons, but the poor (or rather the former poor) will occupy the high-paying jobs.
And because rich kids can take financial risks that the poor cannot, and risk-taking is important to innovation and hence to economic growth, bequests may lead to an increase rather than a reduction in economic growth, though this depends on the balance between the drag on growth from rich kids working less hard and the spur to growth from their taking more financial risks.
Furthermore, the positive correlation between parents' and children's wealth may conceal the actual causality.
It may be that the parents are wealthy in part because of a genetic endowment that they pass on to their children, who because of that favorable endowment would become wealthy even if they didn't inherit any money.
But wealth probably does enhance the advantage of having such an endowment.
If lack of social mobility is a problem, nevertheless it is unlikely to be solved by trying to limit bequests, since wealthy people can transfer much of their wealth during their lifetime.
To have an impact on the transmission of wealth across generations, therefore, a stiff tax on bequests would have to be complemented by a stiff tax on gifts, and "gift" would have to be broadly defined to include such things as paying $50,000 a year for tuition and expected donations at a fancy private school in New York City.
And really stiff estate and gift taxation, even if feasible, would be undesirable because of the disincentive effect on the work effort of those people--and they are numerous--who, to a significant degree, are motivated to become rich by a desire to make their children and grandchildren better off than they would otherwise be.
There is a traditional concern with dynastic fortunes--that is, with accumulations of wealth so great that they confer disproportionate political power on a family.
The founder will usually be too busy making money to participate heavily in public affairs, though there are exceptions, such as Joseph P.
Kennedy, President Kennedy's father; Michael Bloomberg, New York's mayor; and George Soros, the billionaire backer of the Democratic Party.
The next generation, the generation of the inheritors rather than the creators of wealth, may decide to devote full time to public matters, for good or for ill.
Concern with accumulating political power over generations lies behind the esoteric rule against perpetuities, which forbids making a bequest that will not take effect until more than 21 years after the death of currently living persons--this to prevent transmitting wealth to one's remote unborn descendants.
As a curious tandem to the movement to abolish the federal estate tax, many states are allowing the rule against perpetuities (a rule of state rather than federal law) to be undone by the device of the well-named "dynasty trust," whereby a wealthy person places money in trust with instructions that the trustee invest the money for the benefit of specified beneficiaries, such as the descendants--however remote--of the creator of the trust, for as long as there are such descendants.
Depending on the amount doled out by the trustee in each generation, the trust might over time accumulate enormous assets simply by the operation of compound interest.
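The compounding point can be illustrated with a toy model. All of the numbers here (initial principal, rate of return, payout rate, horizon) are illustrative assumptions, not figures from the post:

```python
# Sketch of compounding in a dynasty trust: the principal grows at a
# net annual return, with a fraction paid out to beneficiaries each
# year. All parameters are hypothetical.
def trust_value(principal, annual_return, payout_rate, years):
    for _ in range(years):
        principal *= (1 + annual_return - payout_rate)
    return principal

# $10 million compounding for 150 years at a 6% return, with 2% of
# assets paid out annually, grows to roughly $3.6 billion.
v = trust_value(10_000_000, 0.06, 0.02, 150)
```

The point is only that a modest net return, compounded over several generations, multiplies the principal by a factor in the hundreds.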
The device is quite recent, yet already some $100 billion has been placed in dynasty trusts.
See the study by law professors Robert Sitkoff and Max Schanzenbach, forthcoming in the Yale Law Journal.
Should we worry about the dynasty trust? Probably a high degree of wealth inequality is potentially destabilizing politically.
But on the other hand the creation of centers of private power acts as an offset to growing governmental power and so may actually serve to preserve liberty.
Notice in this connection that abolishing the estate tax would reduce the incentive to make charitable bequests, which are tax exempt.
I should note finally two possibly illusory aspects of the proposed abolition of federal estate tax.
One is that states may respond by increasing their own estate taxes, which are less efficient than federal estate taxes because it is easier to evade a tax by moving from one state to another than by expatriating oneself, and so such taxes affect locational decisions more than the federal tax.
Second, the current estate tax gives heirs a stepped up basis in capital that they inherit.
That is, should they later sell a capital asset that they inherited, the cost basis for computing how much capital-gains tax they owe will be the value of the asset at the death of their testator rather than the cost that the testator incurred to acquire it originally.
So abolition of the federal estate tax would be offset to an unknown degree by increased capital-gains taxation of heirs, and also by increased administrative expense since it is often difficult to determine the original value of an asset that was acquired many years earlier.
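A hypothetical example of how the stepped-up basis works, with assumed dollar figures and an assumed 15 percent capital-gains rate (none of these numbers are from the post):

```python
# Assumed facts: the testator bought an asset for $100,000; it was
# worth $1,000,000 at his death; the heir later sells it for $1,100,000.
original_cost = 100_000
value_at_death = 1_000_000
sale_price = 1_100_000
rate = 0.15  # assumed capital-gains tax rate

# With the stepped-up basis, only post-death appreciation is taxed.
gain_with_step_up = sale_price - value_at_death   # 100,000
# Without it, all appreciation since the original purchase is taxed.
gain_without = sale_price - original_cost         # 1,000,000

tax_saved = (gain_without - gain_with_step_up) * rate  # 135,000
```

Abolishing the estate tax while also abolishing the step-up would, in this example, cost the heir $135,000 in additional capital-gains tax.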
There were as usual a number of very fine comments, some of which require me to extend my analysis in various ways.
For example, one comment points out that people don't merely conceal discreditable information about themselves; it is more accurate to say that they make a selective presentation of the facts about themselves to other people, and this selective presentation may be more informative about the person than an unscreened flood of information about him.
(The sociologist Erving Goffman wrote about this in a great book called The Presentation of Self in Everyday Life--it is an essentially theatrical conception of human social interactions: we create a self or selves to manage our interactions with other people.
I formulate his analysis in economic terms in Chapter 25 of my book Overcoming Law (1995).) This is related to but extends my point that requiring an SEC-regulated, prospectus-like disclosure of all material information about a person would flood society with trivial information.
It would not only be distracting, it would actually reduce our knowledge of people by preventing them from signaling through their self-presentation how their intentions, motivations, and purposes should be understood.
A point also made in the comments that is related but that I do not fully agree with is that people should be able to conceal information about themselves that would trigger ignorant or irrational prejudices on the part of other people.
I think that in general people should be allowed to make their own judgments about other people--should be able, thus, to decide for themselves what is material information.
But psychologists do correctly point out the operation of what they call the "availability heuristic," which is the tendency of the mind to be seized by features of a situation that are particularly arresting though not necessarily important from a rational standpoint.
So knowing that a person was HIV-positive or had had a sex-change operation or had (in one commenter's example) "anal warts"--an arresting condition that I had never heard of before, and would prefer not to have heard of--might overwhelm one's attention to all the other facts about the person that might be more important in relation to the particular transaction that one was contemplating having with the person, such as retaining him or her as one's accountant.
The problem with this insight is that it does not enable a sharp line to be drawn between concealment designed to take advantage of other people and concealment designed to prevent other people from reacting irrationally to one's offer of a transaction.
Another point I have reservations about is whether reducing the privacy of medical records would cause people to shun doctors and if so whether this would be altogether a bad thing.
People will not, merely to protect privacy, avoid seeking treatment for serious illness; and as for trivial illness, there is only a slight privacy stake and anyway it might be all to the good if people were less quick to seek medical treatment, including cosmetic surgery, for conditions that either are not serious or that are not medical conditions, in the sense of illness, at all.
A further problem with concealment of medical records is that it disrupts the efficient operation of insurance markets.
Low-risk people get pooled with high-risk people because the high-risk people conceal their riskiness.
The result is an arbitrary redistribution of wealth from the former to the latter.
One comment reminded me--as a public official, I should not have needed this reminder--that a frequent and proper motive for concealment of information about oneself is simply self-protection: you don't want criminals to know the details of your financial situation, where you bank, your address and phone number, your credit card number, etc.
That is, it is entirely proper to try to conceal information about yourself from people you   don't   want to transact with, unless you are the outlaw and the people you're trying to avoid are the law.
One interesting comment points out the distorted understanding that can arise from the fact that the rich have more privacy than the poor; privacy is what economists call a "superior good," meaning that one consumes relatively more of it as one's income rises.
If the poor have less privacy, so that more is known about their misbehavior, a false impression may be created that they misbehave more than the rich, whereas the amount of misbehavior may be the same and the only difference may be that the rich conceal it better.
Against this, however, is the fact that there is much more curiosity about the private lives of the rich--which may be one reason why privacy is a superior good.
The rich spend more to protect privacy, but the media, to gratify public interest in the lives of the rich, spend more on penetrating the privacy shield.
So maybe we know as much about the private lives of the rich as we do about the poor, or even more.
Let me pursue the "superior good" point a little further.
Because privacy is a superior good, the amount has grown as society has become wealthier.
Americans have far more privacy today than they did a hundred years ago.
This results in greater happiness on the one hand, but (qualifying the increase in happiness) greater opportunities for fraud in personal and commercial relations, and indeed for terrorism, which requires a high degree of secrecy in preparations, on the other hand.
Because of the serious downsides of privacy, the growth of privacy incites a responsive growth of methods of surveillance, now greatly facilitated by digitization, which in turn incites a countermovement for encryption and other methods of restoring privacy in the face of digitization.
So there is a kind of privacy arms race.
Most efforts to link terrorism and poverty are politically motivated, just like most efforts to link crime and poverty.
Liberals do not like either force or poverty, and so faced with crime or terrorism they prefer a solution that involves alleviating poverty rather than one that involves applying force.
They sometimes dress up this politically motivated preference by distinguishing between the "root causes" of terrorism or crime, on the one hand, and the "symptoms"--i.e., overt acts of terrorism or criminality--on the other hand, and arguing that a problem can be solved only by removing the root causes.
But this is incorrect.
It's the effects--the acts of terrorism, the criminal offenses--that we care about, and often the effects can be eliminated at lower cost than the causes.
In any event, there is little basis either theoretical or empirical for thinking that poverty causes terrorism.
In addressing the issue we need to distinguish between the individual level and the population level.
As Becker points out, the terrorists who commit the most significant acts of terrorism are unlikely to be at the bottom of the income/education distribution; indeed the leaders are likely to be even further from the bottom.
The ablest terrorists are the leaders, and the ablest members of a terrorist group are also the scarcest and so they rationally take the fewest risks.
Terrorism requires not only leadership (apart from "lone wolf" terrorism, which I discuss below) but also motivation, invariably political in at least a broad sense; and the motivation is often supplied, or at least articulated and enhanced, by intellectuals.
Think of the role of the Russian "intelligentsia" (intellectuals preoccupied with politics) in Russian terrorism culminating in the Bolshevik revolution.
At the population level, it is difficult to see why wealthy countries should have less terrorism than poor ones; and among the wealthy countries that have been afflicted by terrorism, one has only to think of the United Kingdom (plagued by the Irish Republican Army for so many years), Germany (the Baader-Meinhof gang), Italy (the Red Brigades), Japan (the Red Army and Aum Shinrikyo), Spain (the Basque separatists), Saudi Arabia (al Qaeda)--and the United States.
Think of "Bloody Kansas" in the 1850s, the Ku Klux Klan, the anarchists of the late nineteenth and early twentieth centuries such as Sacco and Vanzetti and the assassin of President McKinley, the Puerto Rican separatists who tried to kill President Truman and turned later to bombing buildings, the Weathermen, the Black Panthers, the Unabomber, Timothy McVeigh, and the unknown anthrax assailant of October 2001.
These examples make me skeptical that democracy and liberty are antidotes, even partial ones, to terrorism.
For one thing, democracy and liberty encourage independent thinking, which may nourish Utopian fantasies that can breed violence.
The kind of lone-wolf terrorism illustrated by the career of the Unabomber is particularly likely to flourish in a society that encourages people to think for themselves, because the diversity of individual thought is very great and one tail of the distribution consists of dangerous maniacs.
For another thing, democracy and liberty create expectations of privacy and free speech that make it difficult to repress subversive movements in their incipience.
As Becker points out, the direction of causation may be from absence of terrorism to democracy and liberty rather than vice versa, since, as civil libertarians warn us, fear of terrorism tilts the balance between security and liberty away from liberty.
What does seem to be true (or so at least I found in a study reported in Chapter 3 of my book Frontiers of Legal Theory [Harvard University Press, 2001]) is that a nation’s per capita income is positively related to political stability.
Wealthy nations can create the institutional framework required for political stability.
But political stability is entirely consistent with terrorist activity.
All the wealthy nations that have a terrorist problem are, with the possible exception of Saudi Arabia, politically stable.
The media are full of stories about the compensation of chief executive officers of American companies.
The theme of the stories is that CEOs are paid too much.
The economics of compensation are fascinating.
In the simplest economic model, a worker, right up to the level of CEO, is paid his marginal product--essentially, his contribution to the firm's net income.
But simple observation reveals numerous departures from the model.
For example, wages vary across the employees of the same rank in the same company by much less than differences in their contribution to the company, and employees who do satisfactory work can expect real (that is, inflation-adjusted) annual increases in their wages throughout their career with the firm, even though their contribution will not be increasing at the same rate, and eventually not at all.
Let us see what sense can be made of the curious pattern of CEO compensation.
American CEOs make much more on average than their counterparts in other countries--about twice as much.
You might think that this was because Americans at all levels earn more than their foreign counterparts, but this is not so; the difference between U.S.
and foreign wages is much smaller below the CEO level.
In other words, wages are more skewed in favor of CEOs in American companies.
The disparity is related to the fact that salaries are a much smaller fraction of American CEOs' incomes (less than a half) than of foreign CEOs' incomes, with the rest consisting of bonuses but mainly of stock options.
Both the fraction of CEO income that is nonsalary, and total CEO income, have been rising, dramatically in the United States, over recent decades.
But there is a recent tendency of foreign CEO compensation policies toward convergence with the American practice.
One can speculate about the causes of some of these differences.
Stock options and other incentive-based compensation methods impart risk (variance) to CEOs' incomes, which reinforces the risk inherent in the fact that a CEO's human capital (earning capacity) may be specific to his firm, so that if he lost his job because his company had been doing badly (perhaps for reasons beyond his control), he would take a double hit--lower pay as the company declined and lower pay in his next job.
Because business executives (as distinct from entrepreneurs) do not like risk, they demand a higher wage if the wage is going to have a substantial risky component.
This may explain some of the difference between American and foreign CEO compensation, but surely not all or even much of it--especially since job turnover at the CEO level is actually greater in Europe than in the United States.
Another possible difference is that stock ownership tends to be more concentrated abroad than in the United States.
The more concentrated it is, the more incentive shareholders have to monitor the performance of their firm's managers because they have more at stake.
The more effective that monitoring is, the less need there is to create incentive-based compensation schemes: the stick is substituted for the carrot.
Cultural factors may be important.
European countries in particular are more egalitarian than the United States, suggesting that envy is likely to play a bigger role in compensation there.
Astronomical ratios of CEO to blue-collar wages in the same company cause little resentment in the United States compared to what they would cause in Europe, though wide disparities between workers at the same level do engender resentment here even if the disparities track differences in productivity.
Envy might reduce average incomes at the same time that it reduced variance in incomes, if the more generous compensation of American CEOs merely reflects the greater contribution that they make to their companies' success.
But there are two reasons to doubt this, and thus to suspect that American CEO incomes are padded to some degree.
First, the most significant "incentive" component in these incomes--stock options--is not well correlated with the CEO's contribution to the value of his company and thus to the value of its stock.
Many things move a company's stock besides the decisions of its CEO.
To tie a CEO's income to the value of his company's stock is a bit like tying the salary of the President of the United States to the U.S.
GNP.
Second, the choice of stock options as the principal method of providing nonsalary compensation to CEOs seems related to the fact that traditionally the income generated by these options, unlike salary or bonus income, was not reported as a corporate expense.
Of course security analysts and stockholders large enough to follow closely the affairs of the companies in which they invest can calculate the expense of stock options, but the ordinary public cannot, and this is important because even in the United States envy is a factor that can influence policy and public opinion.
A spate of recent articles has explained the ingenious devices by which CEO compensation that would strike the average person as grossly excessive is concealed from the public, and these articles, along with well-publicized corporate scandals, may place some downward pressure on CEO compensation.
Companies cannot afford to ignore public opinion completely, because adverse public opinion can power legislative or regulatory measures harmful to a company or an industry.
It might seem that, provided the shareholders--the owners of the company--are made aware of the actual compensation received by the CEO, competition will drive that compensation down to approximately the level at which the CEO is just being paid his marginal product, with appropriate adjustment for risk.
But given the size of companies, the cost to a major company of even a grossly overpaid CEO is so slight when divided among the shareholders that no shareholder (assuming dispersed ownership) will have an incentive to do anything about that excess expense.
What about the board of directors? Their incentive to minimize what from the overall corporate standpoint is only a minor cost is also weak, and may be offset by rather minor economic and psychological factors.
The board is likely to be dominated by highly paid business executives, including CEOs, who have a personal economic interest in high corporate salaries and a natural psychological tendency to believe that such salaries accurately reflect the intrinsic worth of their recipients.
Becker in his comment on this post (below) cites an interesting paper by Gabaix and Landier which argues that the increase in CEO compensation is a function of the growth in the market value of firms.
The basic idea is that the CEO of a more valuable firm is more productive, since if he increases value by say 1 percent the increase in absolute value will be greater the more valuable the firm is.
If there are two equally skilled managers and one manages a grocery store and one manages IBM, the manager of IBM is probably creating greater value.
The theory is too new to evaluate with any confidence.
I am somewhat skeptical because rapid increases in CEO compensation should attract more talent to management, and the resulting greater competition for CEO positions should dampen the increase in compensation.
An alternative explanation for the correlation between firm value and CEO compensation, one that is consistent with the evidence that such compensation is often excessive from an efficiency standpoint, is that the greater the firm's market value, the easier it is to "hide" the compensation of the top executives.
Suppose that a 10 percent increase in value is associated with a 3 percent increase in CEO compensation; then the percentage of the firm's value that is going to the CEO will have fallen.
This may be one reason why many mergers fail to increase earnings per share, although the overall value of the enterprise will be greater after the merger (there will be more shares): the increase in overall value enables the CEO to increase his compensation regardless of whether he will be creating greater value as the manager of the larger enterprise.
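The arithmetic in that example, sketched with illustrative numbers (the firm value and pay figures are assumptions):

```python
# If firm value rises 10 percent while CEO pay rises 3 percent, the
# CEO's pay grows in absolute terms even as his share of firm value
# falls--which is what makes the larger pay easier to "hide".
firm_value, ceo_pay = 10_000_000_000, 10_000_000

share_before = ceo_pay / firm_value   # 0.10% of firm value

firm_value *= 1.10  # 10% increase in firm value
ceo_pay *= 1.03     # 3% increase in CEO compensation

share_after = ceo_pay / firm_value    # about 0.094% of firm value

assert ceo_pay > 10_000_000        # pay rose in absolute terms
assert share_after < share_before  # but its share of value fell
```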
The comments on my post indicate strong feelings and powerful disagreement, mirroring the strong feelings and powerful disagreement in Congress and in the nation as a whole.
It should, however, be possible for Congress to work out a compromise along the following lines:
1. By a combination of sticks and carrots, it should be possible to induce the vast majority of illegal immigrants in this country either to step forward, admit their illegal status, regularize it, and thus enter the path to eventual citizenship (without having to leave the country), or depart for good.
The only objections to this course that I can see are "unfairness" to would-be immigrants waiting patiently in the immigration queue--and I do not think the interests of foreigners should weigh heavily in U.S. public policy--and the injustice of "rewarding" illegality (the "amnesty" issue).
But illegal immigration is not so serious a crime as to demand obeisance to Kant's claim that even if a society were about to dissolve, justice would require that it execute any condemned criminals.
I take a more relaxed, pragmatic view of the dictates of legal justice.
2. By a combination of mandatory biometric ID for all people in the United States (a measure that would have independent value in crime control and terrorism prevention) and heavy penalties on employers of illegal immigrants, future illegal immigration could be largely halted without need to build an expensive Berlin Wall between the United States and Mexico.
3. Reform of immigration law and reorganization of the various agencies in the Department of Homeland Security that administer the law would shorten the queue for legal immigrants (and thus alleviate the "fairness" objection to "amnesty"), adjust the supply of immigrants to the demand of American employers, and switch preferential treatment from foreigners who have family connections in the United States to foreigners who have valuable skills.
There are reportedly 25,000 employees of private security firms in Iraq.
Some 80 percent are employed by U.S. firms.
It appears that most though not all of these employees are Americans, although I have not been able to locate a statistical breakdown.
These employees provide armed guards for U.S. diplomats, journalists, and businessmen that ordinarily would be provided by the military, as well as providing military services (guarding convoys, training Iraqi troops, supplying food, and interrogation) under contract to the Pentagon.
There is controversy over both the cost and discipline of these private security personnel.
Privatization is a perennial issue in economics, and it was part of the deregulation movement that began in the late 1970s.
The issue reflects the fact that there is no hard-and-fast line between the provision of services by government and by the private sector, and that private provision of services is generally more efficient than public provision because it is less subject to political interference.
Conventionally it is thought that only government can provide services that cannot be denied to people who refuse to pay for them, so that efficiency in a broader sense requires public provision of such services.
The classic example is national defense.
If I install an antimissile defense in my back yard, it will of necessity protect my neighbor as well even if he refuses to contribute to its cost.
Because of such free riding, the argument goes, national defense will be underprovided if it is left to the free market.
That is correct, but it does not entail the actual provision of the service by government.
The government must tax my neighbor to make him contribute to the national defense, but it can turn the tax revenues over to private companies to provide the actual service.
The government already contracts out the manufacture of military weaponry.
It could in principle contract out the operation of that weaponry as well.
Education is a source of nonexcludable external benefits (everyone benefits from an educated population), so it is properly supported by taxes, but it doesn't follow that we need public schools, and indeed there is a great deal of private education indirectly financed by taxes; maybe all of it could be.
And likewise there is a long history of mercenaries, including the Hessians whom the British employed against us in the Revolutionary War and the "soldiers of fortune" heavily employed in Africa's incessant civil wars.
The Pope's Swiss Guards are a mercenary force.
Swiss have been mercenaries since the Middle Ages.
The French Foreign Legion is a quasi-mercenary force.
Part of it rebelled against the de Gaulle government in the early 1960s over his decision to withdraw from Algeria, and disloyalty is a traditional concern about mercenaries, though surely not a concern about American private security personnel in Iraq.
Indeed, the term "mercenary" is usually reserved for foreigners; that is why members of the U.S. armed forces today are not referred to as mercenaries even though they are employed voluntarily rather than conscripted.
By the same token, however, non-Americans employed by private security companies in Iraq are mercenaries.
Since we have a volunteer army, why should there be any concern about contracting out some or even many of its tasks? Many employees of the Defense Department are civilian; soldiers in a voluntary army are employees rather than slaves; and, as I mentioned, the manufacture of the weaponry is contracted out.
Instead of just providing weapons and recruits, why not let the private market provide entire military formations? So Blackwater, one of the leading U.S. security contractors operating in Iraq, might be paid to furnish a tank battalion, complete with tanks and other equipment, officers, and enlisted personnel, to fight under U.S. command alongside army, marine, and national guard battalions.
But that would probably be inefficient, because military units that fight together have to be very closely coordinated, and that is difficult when they have different organizational cultures.
(The enlisted personnel of the French Foreign Legion are subject to full military discipline, and the officers are members of the regular French army; that is why I called the Legion only "quasi-mercenary.") The contract security personnel in Iraq do not fight alongside the U.S. military but instead operate in a service or supporting rather than combat role, though not without risk--hundreds of them have been killed and many others wounded.
At first glance it might seem redundant for the military to hire contractors who in turn hire, say, armed guards, rather than to hire the guards directly, as soldiers.
Soldiers are paid only between one-half and one-tenth as much as the security personnel furnished by contractors for service in Iraq, although the comparison is misleading because the soldiers tend to be less experienced (most of the private security personnel are veterans) and because pension, medical, housing, and other fringe benefits of soldiers are much more generous.
This in itself is odd because if the two classes of worker--soldiers and contract security personnel--are doing the same work, why isn't the structure of their compensation the same? One reason is that for many soldiers the military is their career, while most of the contract security personnel in Iraq are temporary workers.
Another is that there are nonpecuniary benefits to military service that are absent from its private substitute, including patriotic pride and the prestige that membership in our armed forces confers.
The difference between temporary and permanent workers is the basis for the principal economic rationale for the heavy use of contract security personnel in Iraq.
The military needs "temps." The need is not unique to the military, of course.
The private sector has many companies that provide temporary workers on a contract basis to firms that could hire permanent employees to do the work, thus cutting out the middleman.
But if the firm's demand for workers fluctuates, it may be cheaper to match supply to demand by contracting with companies that have arrangements with workers available for temporary jobs than to hire additional permanent employees but then lay them off when demand is slack, or to go hunting in the labor market, whenever there is a surge in demand, for qualified individuals who want to do temporary work.
In the past, the end of a war or other national emergency that had caused a surge in the number of military personnel has led to large reductions in those personnel, which made a military career economically insecure.
In order to place 20,000 additional soldiers on duty in Iraq, the military would probably have to hire a total of 60,000, since soldiers are rotated in and out of Iraq about every three years, and these soldiers might be surplus if the war ended or there was a large withdrawal of U.S. troops.
Such fluctuations can be avoided by the use of temps.
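The 20,000-versus-60,000 figure is simple multiplication; a sketch, where the three-soldiers-per-sustained-slot rotation factor is the assumption drawn from the text:

```python
# Sustaining one deployed slot across the rotation cycle requires
# roughly three soldiers (rotation factor assumed from the text).
additional_slots = 20_000
soldiers_per_slot = 3
total_new_hires = additional_slots * soldiers_per_slot
assert total_new_hires == 60_000
```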
But of course we have temps built into the existing, pre-contracting-out system.
They are the members of the National Guard and other reserve units.
They are part-time soldiers available for temporary duty in Iraq and other war zones.
So a proper cost-benefit analysis of the contracting-out program in Iraq (which has not to my knowledge been conducted) would compare the costs of the contracts with the cost of enlarging National Guard or other reserve formations to a point at which fewer or perhaps no contract security personnel would be needed.
The comparison might favor the contractors simply because the private provision of services tends as I said to be more efficient than the public.
There are, however, two residual concerns with the contract approach that should be considered.
Both are political.
The first is a suspicion that the use of the contractors is motivated not by cost considerations but rather by a political objective of concealing from the American public the extent of the U.S. commitment of troops to Iraq.
The U.S. has about 130,000 troops in Iraq at present.
The number would be about 150,000 if contract security personnel were replaced by U.S. soldiers; the number of casualties would also be higher.
Increases in either number would reduce political support for the war.
The second is that contract personnel are less restrained in their use of force than our soldiers because the U.S. military command is less concerned about misbehavior of contract personnel than misbehavior of soldiers.
The contract personnel are not in the chain of command; apparently they are also immune from prosecution by Iraqi authorities.
According to one U.S. general, "These guys run loose in this country and do stupid stuff. There's no authority over them, so you can't come down on them hard when they escalate force... They shoot people, and someone else has to deal with the aftermath. It happens all over the place." Yet the military is concerned with maintaining the goodwill of the Iraqi population, and that goodwill is impaired by excessive use of force by any foreign personnel.
One might think, therefore, that the contracts would subject the employees to full military discipline--but if this were done, it would be difficult to maintain the fiction that they are not really soldiers and so shouldn't be counted in the total of U.S. military personnel in Iraq.
Competition for these contracts should induce the contractors to screen the people they hire, but the screening is likely to be imperfect, and as a result the absence of a credible threat of criminal punishment, whether military or civilian, may indeed create a situation in which contract security personnel are less restrained in their use of force than our soldiers are.
I have been surprised at the virulence of the response to the President's proposals for dealing with the problem of illegal immigration.
I had not realized there was so much hostility to illegal immigrants, who are mainly from Mexico and Central America.
Many Americans seem to regard anything short of expelling the entire illegal-immigrant population, which may be as large as 12 million (though my guess is that it is much lower), as a form of "amnesty" that would be immoral because it would reward illegality.
Well, that is what amnesties do; they forgive crimes.
But they are a conventional policy tool, and should not be despised.
They are particularly common as a means of dealing with tax evasion.
Tax evasion is extremely common because it is so difficult to detect.
A tax amnesty in effect sells the tax evader immunity from punishment in exchange for payment of back taxes due.
The amnesty is attractive to the government because it raises revenue and to the tax evader because it enables him to buy his way out of the risk of being prosecuted should he be caught.
It is a mutually beneficial trade.
The objection to amnesties is that they increase the incentive to commit the amnestied crime in the future by holding out the prospect of future amnesties.
The objection is superficial.
The government will (if it is being sensible) trade off the gain in revenue from the amnesty against the future loss of tax revenues that is likely to be caused by the prospect of future amnesties, and so it will set the amnesty "price" at the level that maximizes the net gain in revenue.
For example, if it reckons that the prospect of future amnesties will lead to significantly more tax evasion in the future, it can condition the amnesty on the tax evader's paying not merely the back taxes he owes but a substantial penalty as well.
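This trade-off can be sketched as a toy optimization; every functional form and parameter below is invented for illustration, not estimated from anything:

```python
# Toy model: a higher amnesty "price" (penalty on top of back taxes)
# collects more per participant but induces fewer evaders to step
# forward, and participation also feeds expected future evasion.
def net_revenue(penalty, back_taxes=1.0, future_cost=0.5):
    participation = max(0.0, 1.0 - penalty)    # assumed linear take-up
    revenue_now = participation * (back_taxes + penalty)
    future_loss = future_cost * participation  # deterrence forgone later
    return revenue_now - future_loss

# Grid-search the penalty rate that maximizes the net gain in revenue.
best = max((i / 100 for i in range(101)), key=net_revenue)
assert abs(best - 0.25) < 1e-9  # interior optimum for these toy parameters
```

The point is only that the optimum is interior: a zero penalty forgoes revenue and encourages future evasion, while a confiscatory one drives participation to zero.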
It is the same with an immigration "amnesty," if one wants to describe the President's plan in those terms.
In exchange for not risking being deported, illegal immigrants can be required to pay not only back taxes due but also a fine greater than the $2,000 currently proposed.
Of course, the stiffer the penalty, the fewer illegal immigrants will step forward and acknowledge their status, and so the less effective the "amnesty" will be.
I would not favor a stiff penalty.
The Americans who for one reason or another are most concerned about illegal immigration are not much or maybe at all concerned about legal immigration, and so converting illegal to legal immigrants should be regarded by them as a highly beneficial step.
There is antipathy to "rewarding" illegal immigrants, who have jumped the queue of people trying to immigrate to the United States--a queue that can take many years to get to the head of legally.
But what is the alternative? It is not feasible to deport millions of people from the United States, and those who would like to do this should accept as a second-best solution regularizing the status of the illegal immigrants.
Nor is it clear that the people waiting patiently in the queue are "better" people or would be better Americans than the illegals.
Many of them may be in the queue not because they want to be Americans but because they want to preserve the option to relocate to the United States should conditions or opportunities in their home country worsen.
It would be desirable in principle to get control of our borders, but it probably is impossible.
Our border with Mexico is almost 2000 miles long, and that figure ignores our Pacific Ocean and Gulf of Mexico coastlines, which are proximate to Mexico and Central America.
Fencing and patrolling a border sound like straightforward measures, but in practice are extremely costly; imagine the interruptions in the extensive commerce between the United States and Mexico that would ensue from erecting a Berlin Wall, with checkpoints at which vehicles are carefully searched, between the two countries.
If all Americans were required to carry biometric identification, if any clandestine entry into the United States were punished as a serious crime, and if the employment of an illegal alien were made a federal felony with a mandatory minimum punishment of 10 years in prison, the problem of illegal immigration would be solved more or less overnight, and the millions of illegal immigrants would be on their way back to Mexico and Central America (and in lesser numbers to China and other poor countries that supply us with many illegal immigrants).
This exodus--this de facto deportation of the illegal immigrant population--would disrupt the economies both of the United States and of Mexico.
Once something is identified as a problem, Americans, not being fatalists, insist that there be a solution.
But there is only one worthwhile solution to this particular problem, and it is one over which Americans have little control.
The solution is for Mexico and the other poor countries from which illegal immigrants come to become rich.
As soon as per capita income in a country reaches about a third of the American level, immigration from that country dries up.
Emigration is very costly emotionally as well as financially, given language and other barriers to a smooth transition to a new country, and so is frequent only when there are enormous wealth disparities between one's homeland and a rich country like the United States.
The more one worries about illegal immigrants, the more one should favor policies designed to bring about greater global income equality.
The basic economic objection to crime is that a crime is a costly but sterile transaction.
It redistributes wealth, which doesn't increase the size of the social pie; and therefore the costs involved in crime--the time and other inputs of the criminal, and the defensive measures taken by potential victims--are a deadweight loss to society.
But notice that the economic definition of crime as a sterile transaction (or coerced transfer payment) does not correspond to the legal definition of crime; in law, a crime is anything that the government forbids on pain of criminal penalties.
Victimless crimes tend to be productive transactions, which make the parties better off (at least by their own lights).
Attempts to deter or prevent such transactions are therefore likely to reduce overall social welfare, like other interferences with the operation of free markets.
Of course there may be external costs, costs external to the parties to the drug transaction or other victimless crime, that in some cases justify punishment, but this is probably not true in general.
So the first question to consider in assessing the effects of crime on economic welfare is how much of what is criminalized should be.
Bribery of officials is, as Becker points out, an interesting mixed case.
It is a voluntary transaction with external costs, but sometimes the social benefits exceed those costs, as where the bribe results in circumvention of an inefficient restriction on commercial activity.
Robberies, kidnappings, and other coerced transfers involve a reallocation of resources from productive commercial activities to a zero-sum game of attack and defend.
But I am not sure that we should expect this type of criminality to be a simple function of poor prospects for legal employment.
If everyone has poor prospects, and is therefore poor, the gains from crime will be meager; both the benefits of crime and the opportunity costs of the criminal (mainly what he could earn in legal employment) will be depressed.
Steeply unequal incomes would seem a better explanation for a high crime rate in a poor country, since despite the overall poverty there would be attractive targets for criminals.
The high rate of kidnapping in Mexico City seems related to the fact that the city has many wealthy residents as well as many poor ones.
Another factor in high crime rates in poor countries is that it is difficult to finance an effective apparatus for fighting crime.
It requires police and judges who are paid enough not to be readily bribable by criminal gangs; in addition, the police must be sufficiently numerous and well-armed to be capable of protecting judges, witnesses, and criminal investigators.
So there is a chicken-and-egg problem: a poor country has difficulty affording the means of preventing crime rates from skyrocketing, and the high crime rates help to keep the country poor.
It is true that the aggregate expense of even well-paid police and judges is likely to be only a small percentage of GDP even in a poor country.
But it is difficult to set a wage scale for a class of workers that is grossly in excess of prevailing wages for work involving similar skills and education.
A possible response to resource limitations on crime fighting is very severe punishment of convicted criminals.
The threat of punishment has a deterrent effect, provided the probability of punishment is not negligible; and as Becker long ago pointed out in his famous 1968 article on crime and punishment, making the threat is a lot cheaper than hiring a huge police force.
In other words, the resources devoted to maintaining a high probability of apprehending and punishing criminals can be economized on, without loss of deterrence, by jacking up the punishment of those who are apprehended; the expected cost of punishment need be no higher.
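Becker's point reduces to arithmetic on the expected sanction, probability times severity; the numbers below are illustrative, not drawn from any study:

```python
# Expected sanction = probability of punishment * severity.
# Halving the apprehension probability (cheaper policing) while
# doubling the sentence leaves the expected cost to the offender
# unchanged, but the enforcement budget, driven mainly by the
# probability side, shrinks.
p_high, sentence_high = 0.50, 4   # years; well-funded enforcement
p_low, sentence_low = 0.25, 8     # half the policing, double the sentence

assert p_high * sentence_high == p_low * sentence_low == 2.0
```

The deterrent effect survives only so long as the probability stays non-negligible, which is the proviso in the text.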
Moreover, the probability of apprehension and conviction can be cheaply enhanced, too, by reducing the procedural rights of criminal defendants.
In addition, purging the statute books of victimless crimes, and eliminating foolish regulations that invite bribery to circumvent them, can reduce demands on the criminal justice system and permit refocusing it on the crimes that impose the greatest costs on the society.
The recent attempt by Rupert Murdoch to buy Dow Jones, the owner of the Wall Street Journal, and the vocal dissatisfaction of shareholders of the New York Times Company with the company's management, are reminders of the curious ownership structure of these and other media enterprises.
(The Washington Post is another example.) They are companies in which a family that has owned the newspaper or other media outlet for a long time continues to own a majority of the voting common stock of the company but has sold a majority of the common stock as a whole to outside investors.
In other words, the owners of the company do not control it.
There are two interrelated oddities to be explained and evaluated: why family ownership is so common in the media world (it is common elsewhere as well--in fact about a third of all Fortune 500 companies are family-owned--but seems to be more common among newspapers and magazines--think of the Chandlers, the Hearsts, the Sulzbergers, the Grahams, the Pulitzers, the Irvings, the Bancrofts, the Bradleys, the Peretzes), and why the family owners divide control from ownership, retaining the first.
A third question is why this model is under increasing pressure.
The reason the families give for the first two phenomena is that the ownership of a newspaper or other media organ (but for simplicity I confine my discussion to newspapers) is a "public trust" because of the role of the press in a democracy.
The idea is that if people unrelated to the founder (or some long-ago acquirer, as in the case of the Wall Street Journal and the New York Times) controlled the newspaper, they would manage it with the aim of maximizing profits and thus would give the consumer what he wanted rather than what he needed in order to be an informed citizen.
This is not a ridiculous argument, because most people read newspapers in order to be entertained, to read classified advertisements, and to have their opinions, prejudices, and so forth reinforced, rather than to be challenged.
People don't like to be challenged and are uncomfortable when they find themselves in a state of doubt.
So one can imagine a public-spirited (or simply an opinionated) newspaper owner deciding to reduce the price or increase the quality of his newspaper in order to lure people to read it and be challenged.
You might subscribe to the New York Times because it was cheap and had a lot of ads and had useful advice on health in "Science Times," but your eye would stray from time to time to the news articles and editorials and op-eds, and so you would become a better-informed citizen.
The effectiveness of this strategy depends, however, on the aggregative character of a newspaper--on the fact that it contains a hodge-podge of interleaved material, so that you get the editorials even if you just want the classified ads.
The rise of the Internet media has resulted in the disaggregation of media components.
You can get pretty much any media component (classified ads, health advice, celebrity gossip, sports, food tips, etc.) separately, without having to peruse the news or editorial pages.
Newspapers are no longer an effective medium for educating or edifying an uninterested public.
Furthermore, the fact that some ancestral figure, and one of his descendants (such as Mr. Sulzberger), want to answer what they consider the high public calling of controlling a newspaper doesn't mean that the other descendants--the rest of the family, who have nothing to do with the newspaper's editorial policy--derive satisfaction from trading their profits for the publisher's continued influence over the product.
Why should they? It is not their opinions that are being pushed on the public, but some distant cousin's.
With each new generation, the number of slices into which the profit pie is cut grows larger, and each slice thinner, and with the newspapers under tremendous financial pressure from the Internet, family members grow ever more restive.
The separation of ownership and control in large companies is an old story.
But it was a story about hired managers' being the imperfect agents of the shareholders (the owners) because the shareholders were too numerous, and their individual stakes in the company too small, to make them effective monitors of the managers, who would therefore have opportunities to pursue their private ends at the expense of the nominal owners.
Amputating common stockholders' voting rights is something else.
While it is true that the individual shareholder is unlikely to have any effective control over the enterprise, the shareholders as a whole, represented by the board of directors, have a degree of control--less than they are supposed to have, because boards of directors are rarely completely independent of the hired management, but still some.
If you take away the right of the majority of the shareholders to control the board of directors, you take away all their control, though they may still have influence, especially if they have some voting rights and can ally with dissident members of the control group (the family, in the case of the family-controlled corporation).
This stripping of control from the majority of the shareholders is awkward because owners of a corporation's common stock are the residual risk bearers.
They do not have a fixed return, like bondholders.
Their fortunes rise and fall with the corporation's profits, and so one might wonder why anyone would own stock in a corporation controlled by a group that justified its control as necessary to avoid maximizing the company's profits! The answer has to be that the company must offer its shares to the public at a discount to compensate them for their lack of control and diminished profit expectations.
The family corporation is a viable enterprise form for the same reason that families are viable social groupings: relations of trust based on intimate knowledge, altruism, reciprocity, and threat of ostracism can be good substitutes for relations based on contract and reputation.
The advantages of the family form have to be traded off, however, against the disadvantage, which it shares with hereditary monarchy, that a genetic connection is no guarantor of equality of aptitude or motivation.
As a family expands over generations and the founder's genetic endowment becomes increasingly diluted and bonds of altruism fray, the disadvantages of the family enterprise grow relative to the advantages.
And when on top of that the family-controlled enterprise is faced with sharp new challenges, as is happening today in the newspaper industry because of the rise of the Internet, the disadvantages of family control become disabling.
Probably, therefore, the days of the family-owned newspaper are numbered.
I find little to disagree with in Becker's post, except with regard to his disapproval of amnesty--but here our disagreement may be merely terminological, as I shall explain.
The path of reform, if one ignores the politics of immigration reform, seems obvious.
If there are indeed 12 million illegal immigrants in the United States, then since only a tiny fraction will ever be deported, the status of all of them ought to be regularized, which means put on the path to U.S. citizenship.
That is amnesty, which for some reason horrifies a lot of people.
Amnesties are a long-established device for dealing with social problems.
Tax amnesties are especially common.
An amnesty need not be, and in the case of tax amnesties is not, a get-out-of-jail-free card.
The taxpayer has to pay his back taxes in order to be spared criminal punishment.
He benefits because paying taxes is a less severe penalty than being imprisoned for nonpayment of taxes, and the government benefits by obtaining additional tax revenues that it would not have obtained had it not offered the amnesty but instead had tried to catch the tax cheats, because it would often fail.
Similarly, in the immigration case, the illegal immigrant is offered the chance to avoid deportation at the cost of having to pay a fine (or, in Becker's proposal, a fee for purchasing the right to remain in the United States as a lawful resident).
Hence the program currently before Congress is an amnesty program although the politicians carefully avoid the word.
If the fine is too stiff, however, many illegal immigrants will prefer to remain in that status since the probability of being caught and deported is for most of them slight.
I would not, save in exceptional circumstances, approve of amnesty for people who commit serious crimes.
However, an illegal immigrant whose only violation of U.S. law is entering the country without authorization or overstaying a tourist visa is, though of course subject to deportation ("removal," as it is now called), not treated as a criminal.
I agree with Becker that there is no sense in limiting amnesty to a subclass of illegal immigrants.
That would just leave several million illegal immigrants in the country, their status unchanged.
There is also the difficulty of determining how long an illegal immigrant has been in this country.
In the context of legal proceedings potentially involving 12 million persons, anything that requires difficult evidentiary determinations in even a small percentage of those proceedings would place enormous strain on the adjudicative machinery of the federal government.
Becker is correct that the downside of an amnesty is that it reduces deterrence by creating an expectation of a future amnesty.
But I do not consider that a substantial objection in the present instance.
The reason is that there are two parts to sensible immigration reform.
Amnesty is only one.
The other is sealing our borders against future illegal immigration.
If we do not seal our borders--which I do not mean literally, as that is impossible: I mean if we take effective measures to drastically reduce the flow of future illegal immigration--then in a few years we will be back where we are today, with once again millions of illegal immigrants.
If we do succeed in drastically reducing the inflow of illegal immigrants, we won't have to worry a great deal about the effect of the prospect of a future amnesty on illegal immigration.
For there are two methods of preventing illegal immigration.
One is to deter it by threat of sanctions, such as deportation or criminal punishment.
The other is physically to prevent the entry of an immigrant who is not authorized to enter the country.
It is a substitute for deterrence as a mode of prevention, and if it is effective the need for deterrence is reduced.
It is possible too that the inflow of illegal Mexican immigrants will slow drastically.
As I mentioned in one of my earlier posts on immigration reform, when a nation's average GDP reaches one-third the U.S. level, illegal immigration to the United States drops to a very low level.
Mexico is a potentially wealthy country, held back from realizing its potential by its political culture.
If there is anything we can do to help Mexico prosper, it will reduce our problem of illegal immigration.
But even without our help, Mexico may turn the corner to prosperity, as so many countries have done in recent years.
The problem with sealing the borders, apart from the cost of building, maintaining, and patrolling an immensely long fence on the Mexican border, as well as controlling our coastlines, is that virtually anyone can obtain a tourist visa to enter the United States, and once here can disappear.
However, there are other measures for reducing illegal immigration, including requiring all persons in the United States to carry biometric identification, imposing stiffer penalties on employers of illegal immigrants, and criminalizing first-time illegal entry rather than only repeat entries.
But what I think would be particularly promising would simply be to make legal immigration from Mexico and Central America much easier.
I am not enthusiastic about guest-worker programs.
Guest workers may disappear into the illegal-immigrant pool, and if they have children in the United States the children will be U.S.
citizens, so that sending the guest workers back to their country of origin may result in breaking up families.
I agree completely with Becker that we should allow a million highly skilled workers a year into the United States.
Everyone will benefit because workers are usually unable to capture in their wages their entire social product.
Even the highly skilled workers already in this country are likely to benefit in the long run, even if their wages are temporarily depressed by the surge in competition from the new immigrants.
The reason is that the increased number of highly skilled workers will increase the rate of technological progress in U.S.
industry, which in turn will increase the demand for highly skilled workers.
There were many very interesting comments.
Let me begin my response with a correction.
I should not have described the Vietnam War protests as "violent." There was some violence, but my subject was not protests that were violent, but rather protests that took the form of street demonstrations, picketing, and marches (sit-ins, disruptive though rarely violent, would be intermediate between violent and completely peaceful protests), for my analysis shows why we have not seen many such protests against the Iraq War.
I thank Lawrence Caroline for catching my mistake.
One comment raises the interesting question of the motivation to engage in a protest, given that the costs are borne by the individual protester, yet the benefits are diffuse.
But that is true of much expressive activity, as when a person applauds at a concert, though realizing that the musicians can't hear his applause.
Hence the more costly the expressive activity, the more effectively it communicates the depth of the protester's feeling.
That is why street demonstrations are more likely to influence public opinion than comments on a blog; it is so cheap (in time, etc.) to post such a comment that the decision to do so conveys no information about the intensity of the belief that motivated it.
Another comment points out quite plausibly that one reason for the lower temperature of the current protests is that there is no sympathy for the enemy.
In the Vietnam era a small but highly vocal number of Americans were sympathetic to communism, and a greater number mistakenly believed that Ho Chi Minh was not a real communist but rather an agrarian reformer.
Some Americans oppose the Iraq War because they consider preventive wars immoral, but most oppose it because they think it unwinnable--a waste of lives and money.
Also, one plank in the opposition platform is that the Administration went to war not realizing how difficult it would be to end it.
Well, it is very difficult to end it, so even opponents hesitate to press for an immediate withdrawal, as they would have done with respect to the Vietnam War.
I think too that there is some sense among opponents that President Bush will not withdraw from Iraq no matter what and that his successor will withdraw posthaste, so that the die is cast and protests will have no efficacy.
But I do not agree with the commenter who suggested that opponents are pulling their punches because they want the U.S. to remain in Iraq in order to increase the punishment of Bush and the military!
I was very interested in the comments that suggest that the Soviet Union fomented many of the Vietnam War protests both here and abroad.
That is a factor fortunately missing from the present situation.
The subprime mortgage debacle, efforts by New York City to ban trans fats in restaurants, the discovery of lead in toys manufactured in China, and concerns about safety inspections of airplanes and laxity in regulation of new drugs have brought to the fore the issue of the optimal scope and methods of regulation designed to protect consumers.
There are two reasons to think that consumers might need more protection than is provided by competition among sellers, even as backed up by court-enforced law.
Few opponents of regulation doubt the appropriateness of such judicially enforced rules as the implied warranty of fitness and safety that accompanies the sale of products.
The first reason for thinking that it might make economic sense to add a layer of regulation to competition plus court-enforced law is the high costs to consumers of obtaining information about products and services (but I will confine my attention to products).
The busier people are and hence the higher their costs of time, and the more complex that products are, the higher consumer information costs will be.
Product information could be thought a product in itself that a competitive market would generate in optimal quantities, but that is far from certain.
The problem is what might be called "fouling one's nest." If a cigarette producer advertises its cigarettes as "safer," it is implying that cigarettes are unsafe, and this could reduce consumption.
Now in fact everyone knows about the dangers of smoking, so that is not a serious problem; but it is a problem when the hazards of a product are not widely known.
A restaurant that advertises that its food contains less trans fat or less salt than other restaurants is telling consumers that there are bad things in restaurant food.
Moreover, and probably more important, it is very difficult for an advertiser to explain why trans fat or salt or butter is bad for one.
I believe that the obesity epidemic must be due in part to the ignorance of many consumers, especially the poorly educated, of the causes and consequences of obesity.
There are three possible responses to the problem created by consumer information costs.
The first is to require producers to provide more information; the second is to ban products on the basis of a judgment that if consumers knew the score they would not buy the product in question; and the third is to leave the burden of information on the consumer, thereby increasing the incentive of a consumer to inform himself about the products he buys.
Often the preferred ranking will be 2, 1, and 3.
Banning the product eliminates information costs, though to justify so drastic a measure requires a high degree of confidence that informed consumers would not buy the product if they knew the facts about it.
If, as I believe, trans fats have close and much more healthful substitutes that cost little more, the attempt to ban trans fats in New York City restaurants made sense.
Forcing sellers to provide more information to consumers can paradoxically raise consumer information costs by requiring consumers to sort through more warnings and interpret and evaluate them.
There is also a lulling effect: required warnings create the impression that the government is protecting consumers by regulating sellers, which it may not in fact be doing; or may create resentment because consumers feel overloaded with unnecessary warnings: a "crying wolf" problem.
A related problem is that consumers have very different stocks of information, making it difficult or even impossible to draft a warning that will provide a significant net increment in consumer knowledge.
Finally, encouraging consumers to become better informed about products on their own, in lieu of relying on government regulation, might be excessively costly.
It would force consumers with high time costs to reallocate high-value time to the study of consumer products, and the cost of this reallocation might exceed the cost of regulation.
Take the case of health inspections of restaurants.
My guess is that those inspections add little to the cost of restaurant food (I am assuming the inspections are financed by a restaurant tax).
In their absence a consumer could not just drop in on a new restaurant with confidence that he would not get sick because of unsanitary conditions.
(So such regulation may encourage entry into the restaurant industry.) No doubt services would spring up to rate the healthfulness of different restaurants, just as services like Zagat rate the quality of the food and service offered in different restaurants.
But the inspectors employed by a private service would not have the powers of public inspectors--to inspect without notice and shut down a restaurant found to have unhealthful conditions.
Perhaps some restaurants would consent to grant such powers to a private service, but then the consumer in evaluating the private inspection services might be faced with a formidable search cost to determine the best service.
Apart from the costs of obtaining information, there is the distinct problem of evaluating or processing information.
This is the domain of the cognitive quirks that have been illuminated by the recent literature (increasingly influential in economics) in cognitive psychology.
An example is the seeming inability of many consumers to appreciate the practical identity between an item priced at $9.99 and the identical item priced at $10.00.
Merchants' unquestionably sound conviction that consumers exaggerate the difference between these two prices is the only thing keeping the penny in circulation, as it costs more than a penny for the U.S.
Mint to produce a penny.
I do not think these quirks provide a compelling reason for additional regulation of consumer products and services.
Such regulation would amount to telling consumers that they can't think straight, and would reduce consumer utility, at least in the short run, by denying them $9.99 "bargains." I would however favor incorporating into the curricula of high schools, and perhaps even elementary schools, courses in cognitive psychology that would make students alert to the pitfalls that await them as a result of cognitive defects that, though hard-wired in the brain, are avoidable if one is alert to their existence.
The existence of cognitive deficiencies may have been a factor in the subprime mortgage debacle, though on the consumer rather than the producer side.
Many consumers may have been incapable of properly evaluating the risks of heavy borrowing; cognitive psychologists have found that average and even very intelligent people have difficulty handling probabilities.
On the producer side of markets, however, there are forces for minimizing the effect of cognitive deficiencies.
People who don't handle probabilities well are not going to thrive in the insurance business or other financial businesses.
They will be selected out by competition; the analogy is to natural selection in biological evolution.
I suspect that the housing bubble and ensuing credit crunch reflect, on the business side of the market, not so much irrational optimism as risk taking that was rational given asymmetries of loss and gain.
Generous severance benefits truncate downside risk for the top management of large companies, and speculation in the face of a known bubble can be rational because until the bubble bursts values are rising very rapidly; the trick is to jump off the hurtling train just before it crashes.
The Mexican and New York City programs are well described in Becker's post and in a recent article in the Financial Times by Christopher Grimes, "Do the Right Thing," May 24, 2008, www.ft.com/cms/s/0/a2f1b24a-292a-11dd-96ce-000077b07658.html?nclick_check=1.
I cannot comment on the Mexican program; nor do I oppose social experiments financed by private money, as in New York.
But I am skeptical about the New York program, and if I were a New Yorker I would be reluctant to support public financing of it.
Before Milton Friedman proposed to replace welfare programs with a negative income tax--that is, a cash grant with few if any strings attached--welfare programs were in part devices by which the government endeavored to buy good behavior from the poor.
Hence food stamps, but not food stamps that could be used to buy liquor.
Or money earmarked for health or education.
Friedman's criticism of such programs was that people have a better sense of their needs than government bureaucrats, so that if the government simply gave poor people money they would allocate it more efficiently than the welfare bureaucracy would do.
This philosophy was eventually adopted by the federal government in the form of the earned income tax credit.
The danger in giving the poor money (or anything else for that matter) is that it will reduce their incentive to work; this problem was addressed by the replacement of welfare by workfare at the state and later the federal level.
Friedman's analysis requires qualification, however, when the issue is the welfare of children.
The reason is that not all parents balance their own welfare with that of their children in an impartial manner.
That is why we have laws forbidding child neglect and abuse.
It is also why we have compulsory-schooling laws and forbid child labor.
These are paternalistic laws in a quite literal sense, but are justified to the extent that there is legitimate concern that not all parents are faithful agents of their children.
Nevertheless, as a general rule parents both know better than welfare officials what is good for their children and love their children more than the officials, however well meaning, do, so any proposal to expand the role of government in controlling children should be viewed with caution.
Public school is both free and compulsory, and schooling adds considerably to a child's lifetime income prospects, so we must ask why some parents do not compel their children to attend school regularly.
One reason might be that some of them do not value their children's welfare.
Another is that they cannot control their children.
And a third is that they do not think their children benefit significantly from regular attendance.
I would guess that the second and third reasons are more common than the first.
Paying children to go to school would probably have at least some effect in countering all three cases.
However, the benefits would be limited to children who, but for the payment, would attend school less frequently.
I do not know how those children could be identified in advance, which means that the program would confer windfalls on some, perhaps many, children.
(It would be odd to disqualify children on the basis of their good attendance!) In addition, there would be substantial costs, both direct and indirect, to the program.
The direct costs would consist of the costs of distributing the money to the kids, making sure that it is not appropriated by the parents, and monitoring the children's school attendance.
(So: more bureaucracy.) The indirect costs would include perverse incentive effects--some parents would spend less on their children to offset the payments that the children would be receiving for staying in school.
Also, giving children their own source of income would reduce parental control and by doing so weaken already weak families.
And some children contribute more to family welfare by occasional truancy than by consistent school attendance--for example, they may be older children helping to take care of younger siblings in households in which the parents work full time, or in which there is only one parent.
Also, how does one end such a program? If the payments are suddenly withdrawn, will the kids feel aggrieved and resume truancy with a vengeance?
The largest indirect cost, I would guess, would consist in relaxed pressure to improve the public schools or to allow them to be bypassed by means of voucher systems.
High rates of truancy may be due in significant part to low quality of schools.
Paying children to attend school will reduce truancy rates some but without improving school quality, and perhaps without improving the education of the children receiving the payments.
(School quality may actually decrease, with more crowded classrooms--crowded by kids who don't really want to be there.) Suppose that a school is in session 200 days a year, a student is truant 10 of those days, and if paid to attend would be truant only 5 days.
Then the effect of the payment would be to increase the number of days the child was in school by only 2.5 percent.
If it's a bad school, there might be zero benefit from this modest increase in attendance.
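The attendance arithmetic can be checked in a few lines, a minimal sketch using the hypothetical figures from the example above (a 200-day year, truancy falling from 10 to 5 days):

```python
# Hypothetical figures from the example above: a 200-day school year,
# 10 days of truancy without the payment, 5 days with it.
school_days = 200
truant_before, truant_after = 10, 5

attended_before = school_days - truant_before   # 190 days
attended_after = school_days - truant_after     # 195 days

# Extra attendance as a share of the school year...
share_of_year = (truant_before - truant_after) / school_days
# ...and as a proportional increase in days actually attended.
relative_gain = attended_after / attended_before - 1

print(f"{share_of_year:.1%}")   # 2.5%
print(f"{relative_gain:.2%}")   # 2.63%
```

Either way the figure is computed, the gain in time spent in school is modest.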
Granted, there are many children in New York who are truant for much longer periods.
An article by Harold O. Levy and Kimberly Henry, "Mistaking Attendance," New York Times, Sept. 2, 2007, www.nytimes.com/2007/09/02/opinion/02levy-1.html?_r=2&ex=1189396800&en=1d2692cb89c474d7&ei=5070&emc=eta1&oref=slogin&oref=slogin, estimates that 30 percent of New York public school students miss a month of school every year.
But they may be children who for mental or psychological reasons, or extreme family circumstances, cannot benefit significantly from additional schooling.
The beneficial effects of paying children to go to school are likely to be concentrated on the kids who are casual rather than extreme truants, and those benefits, as suggested by my numerical example, may be slight.
Another component of the program is paying children for performing well on standardized exams.
Such measures reward work more directly than paying for attendance, and also avoid the bad signal that is emitted by bribing people to do what the law requires them to do (i.e., attend school until 16 or 18, depending on the state), but they may largely reward intelligence rather than study.
Working hard in school is no guaranty of getting good grades.
Scholarships for promising students and awards for high performance have good effects, but the paid students are unlikely to qualify in competition with students who do not have to be paid to attend school.
Paying children to attend school is a band-aid approach at best.
Far better would be a voucher system that would create competition among the public schools to serve children better.
The President has expressed dissatisfaction with the proposed Farm Bill wending its way through Congress.
He wants farmers whose annual incomes exceed $200,000 to be denied subsidies; the present cutoff is $2.6 million and Congress will not go below $950,000.
The President's concern with farm subsidies cannot be taken very seriously, since in 2002 the Republican Congress with Administration connivance greatly increased these subsidies and at the same time repealed some of the modest reforms that the Clinton Administration had introduced in 1996.
The Administration's current proposals would, if enacted, be a step in the right direction, but they will not be enacted, and, judging from the 2002 legislation, they are intended I suspect merely to embarrass the Democratic Congress.
The deregulation movement passed agriculture by, leaving in place a series of government programs that lack any economic justification and at the same time are regressive.
They should offend liberals on the latter score and conservatives on the former; their firm entrenchment in American public policy illustrates the limitations of the American democratic system.
A million farmers receive subsidies in a variety of forms (direct crop subsidies, R&D, crop insurance, federal loans, ethanol tariffs, export subsidies, emergency relief, the food-stamp program, and more), which will cost in the aggregate, under the pending Farm Bill, some $50 billion a year, or $50,000 per farmer on average.
Farm subsidies account for about a sixth of total farm revenues.
So, not surprisingly, the income of the average farmer is actually above the average of all American incomes, and in any event 74 percent of the subsidies go to the largest 10 percent of farm enterprises.
The subsidies are regressive, especially during a recession coinciding with worldwide food shortages (i.e., high prices).
There is no justification for the Farm Bill in terms of social welfare.
The agriculture industry does not exhibit the symptoms, such as large fixed costs, that make unregulated competition problematic in some industries, such as the airline industry, about which Becker and I blogged recently.
It is true that crops are vulnerable to disease, drought, floods, and other natural disasters, but the global insurance industry insures against such disasters, and in addition large agricultural enterprises can reduce the risk of such disasters by diversifying crops and by owning farm land in different parts of the nation and the world.
If a farm enterprise grows soybeans in different regions, a soybean blight in one region, by reducing the supply of soybeans, will increase the price of soybeans, so the enterprise will be hedged, at least partially, against the risk of disaster.
Supply fluctuations due to natural disaster create instability in farm prices, but farmers can hedge against such instability by purchasing future or forward contracts.
There is no "market failure" problem that would justify regulating the farm industry.
All the subsidies should be repealed.
This of course will not happen, and that is a lesson in the limitations of democracy, at least as practiced in the United States at this time, though I doubt that it is peculiarities of American democracy that explain the farm programs, for their European counterparts are far more generous.
The small number of American farmers is, paradoxically, a factor that facilitates their obtaining transfer payments from taxpayers.
They are so few that they can organize effectively, and being few the average benefit they derive (the $50,000 a year) creates a strong incentive to contribute time and money to securing the subsidies.
The free-rider problem that plagues collective action is minimized when the benefit to the individual member of the collective group is great.
Then too many of the members of the farm community and hence recipients of the subsidies are wealthy, and the wealthy have great influence in Congress as a result of the lack of effective limitations on private financing of congressional campaigns and on lobbying generally.
In addition, the allocation of two senators to each state regardless of population enhances the political power of sparsely populated states, which tend to be disproportionately agricultural.
The key role of Iowa in the presidential electoral process is a further barrier to the abolition of farm subsidies, and the final factor is the alliance of urban with farm interests in support of the food-stamp program, itself inferior to a negative income tax, which would give the poor money but allow them to make their own consumption choices.
A puzzle about the farm programs is the heavy emphasis on money subsidies, since by reducing the cost of farming they encourage greater output, which results in lower prices for farm products, thus offsetting some, perhaps much, of the effect of the subsidies.
(The lower prices are not a social benefit, because as the result of subsidization they are below cost.) Acreage restrictions used to be the core of federal farm policy; they correspond to the type of entry-limiting regulations imposed on airlines, railroads, trucking, pipelines, long-distance telecommunications, banking, and the wholesale sale of electricity before the deregulation movement, and they are more efficient at raising farmers' incomes because they reduce output, in effect cartelizing agriculture.
Those restrictions have been reduced, but between them and export subsidies (which reduce the supply of agricultural products to American consumers) farm prices in America are higher than they would be without the farm programs, and this contributes to the regressive effects of the programs.
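The contrast between a per-unit subsidy and an acreage (output) restriction can be sketched with a toy linear supply-and-demand model; all numbers below are hypothetical, chosen only to show the direction of each effect:

```python
# Hypothetical linear market: demand P = 100 - Q, supply Q = P
# (quantities and prices in arbitrary units).

def equilibrium(subsidy=0.0):
    # With a per-unit subsidy s, producers receive P + s, so they
    # supply Q = P + s while consumers still face price P.
    # Setting demand equal to supply: 100 - Q = Q - s, so Q = (100 + s) / 2.
    q = (100 + subsidy) / 2
    p = 100 - q
    return q, p

q0, p0 = equilibrium()            # no intervention: Q = 50, P = 50
qs, ps = equilibrium(subsidy=10)  # subsidy: Q = 55, consumer price falls to 45

# An acreage restriction instead caps output below the free-market level.
q_cap = 40
p_cap = 100 - q_cap               # restricted output raises the price to 60

# A subsidy expands output and lowers price; a restriction contracts
# output and raises price -- the cartelizing effect described above.
assert qs > q0 and ps < p0
assert q_cap < q0 and p_cap > p0
```

The sketch shows why subsidies partly undo themselves through lower prices, while output restrictions raise farm incomes directly.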
As Becker explains, we cannot predict the future price of oil.
But it is unlikely to rise in the foreseeable future to $200 a barrel, especially if we think in inflation-adjusted terms.
Oil prices in real terms have fluctuated a great deal.
In December 2007 dollars the price of oil was below $20 in 1946, above $100 in 1979, and only about $10 as recently as 1998.
High prices affect both demand and supply; the recent price peaks have already reduced demand for gasoline in the United States and increased efforts to discover and exploit new oil fields.
The United States has large untapped oil reserves both offshore and in Alaska, and there are many other untapped reserves elsewhere in the world.
Supply is responding to the high price of oil and will respond more.
If Iraq ever stabilizes, its output of oil will increase.
Were the world price of oil to rise to a level close to $200, both demand and (with a lag) supply would respond.
Oil trapped in sand and shale--a potentially very large supply--would become economical.
In the longer run, very high oil prices will further stimulate the development of alternative fuels.
Major political or natural catastrophes could of course alter the picture.
Middle Eastern oil supplies are vulnerable to the ever-present threat of war in that region, and the oil industries of Venezuela, Nigeria, and possibly even Saudi Arabia are vulnerable to political unrest, civil war, or terrorism.
I would like to see the price of oil rise to $200, despite the worldwide recession that would probably result, provided that it rises as a result of heavy taxes on oil or (better) carbon emissions.
The taxes would jump start the development of clean fuels, and the financial impact on consumers could be buffered by returning a portion of the tax revenues in the form of income tax credits.
That would not reduce the effect of the taxes on the demand for oil or the incentives to develop alternative fuels, because the marginal cost (the production and distribution cost plus the tax) of oil to consumers would not be affected.
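The point about marginal cost can be illustrated with made-up numbers (the per-gallon cost, tax, and rebate below are hypothetical, chosen only for the sketch):

```python
# Hypothetical numbers: a $2.00 production-and-distribution cost per
# gallon plus a $1.00 tax, with part of the revenue returned to the
# consumer as a lump-sum income-tax credit.
base_cost = 2.00
tax = 1.00
price_per_gallon = base_cost + tax   # what each extra gallon costs: $3.00

gallons = 500
lump_sum_rebate = 300.00             # fixed credit, independent of gallons bought

total_outlay = gallons * price_per_gallon - lump_sum_rebate

# The rebate lowers the consumer's total outlay, but the cost of one
# *more* gallon is unchanged -- the margin, which drives the decision
# to conserve or switch fuels, is untouched.
marginal_cost = ((gallons + 1) * price_per_gallon - lump_sum_rebate) - total_outlay
assert marginal_cost == price_per_gallon
```

Because the rebate does not vary with consumption, it buffers incomes without blunting the incentive the tax creates.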
Higher oil prices are necessary to check global warming, reduce traffic congestion, and reduce dependence on foreign oil, so much of which is produced by countries that are either unstable or hostile to the United States.
Heavy taxes on oil would reduce not only the amount of oil we import but also the revenue per barrel of the oil exporting nations, so there would be a double negative effect on those countries' oil revenues: they would sell less oil and earn less per unit sold.
The reason for the latter effect is the upward-sloping supply curve for oil.
Suppose the first million barrels of oil can be produced at a cost of $1 per barrel and the second million at $2 per barrel.
If total demand is one million barrels, the suppliers break even: they have revenues of $1 million and costs of $1 million.
If total demand is two million barrels, the suppliers have revenues of $4 million (because the price of all barrels is determined by the price that the marginal purchaser is willing to pay) but costs of only $3 million ($1 million for the first million barrels, $2 million for the second).
The lower the price of oil received by the oil producers (that is, the price net of tax), the lower their net income.
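The two-tier example can be worked through directly; the drop in demand from two million barrels to one is a hypothetical tax response added for illustration:

```python
def producer_outcome(demand_millions):
    """Revenue and cost (in $ millions) for the two-tier supply example:
    the first million barrels cost $1 each to produce, the second million
    $2 each. The marginal barrel's cost sets the price for all barrels."""
    price = 1 if demand_millions <= 1 else 2
    revenue = price * demand_millions
    cost = min(demand_millions, 1) * 1 + max(demand_millions - 1, 0) * 2
    return revenue, cost

# Untaxed demand of two million barrels, as in the example:
rev, cost = producer_outcome(2)
assert (rev, cost) == (4, 3)   # $4m revenue, $3m cost: $1m net income

# If a tax cut demand back to one million barrels (hypothetical),
# producers would sell fewer barrels AND the price they receive would
# fall from $2 to $1 -- the double effect described above.
rev_taxed, cost_taxed = producer_outcome(1)
assert (rev_taxed, cost_taxed) == (1, 1)   # net income falls to zero
```

The upward-sloping supply curve is what makes the producers' net income fall faster than their sales volume.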
Unfortunately I cannot see a confluence of political forces that would make heavy taxes on oil feasible.
We seem to be experiencing a democratic failure, in which long-term problems simply cannot be addressed.
Articles in the New England Journal of Medicine on April 30, and in the New York Times on May 19, discuss a proposal now before Congress to impose a tax on sugar-sweetened sodas in order to reduce obesity.
Taxes are ordinarily intended to raise revenue, but some taxes, such as taxes on alcohol and tobacco--and on carbon emissions, should such a tax ever be passed--are designed not to raise revenue but to alter behavior, and the more they succeed in altering behavior the less revenue they generate.
Sugar-sweetened sodas are high in calories, are drunk in great quantity, and because they have little nutritional value don't substitute for other foods; they are a net addition to caloric intake.
The NEJM article estimates that consumption of such sodas adds an average of 125 to 150 calories per day to the average American's diet, and cites studies estimating the elasticity of demand for such products at about -1, so that a 10 percent soda tax could be expected to reduce consumption by about 10 percent, with the result, according to the author, of reducing the average person's weight by about 2 pounds a year.
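Taken at face value, the cited figures can be combined in a few lines; the 3,500-calorie-per-pound conversion is a common rule of thumb, not a figure from the article:

```python
# Figures cited from the NEJM article: sodas add 125-150 calories per
# day, and a demand elasticity of about -1 means a 10% tax cuts
# consumption by about 10%.
daily_calories = 150          # upper end of the cited range
reduction = 0.10

# Rule-of-thumb conversion (an assumption, not from the article):
# roughly 3,500 excess calories correspond to one pound of body weight.
calories_per_pound = 3500

pounds_per_year = daily_calories * reduction * 365 / calories_per_pound
print(round(pounds_per_year, 1))   # about 1.6 pounds a year
```

Even at the top of the calorie range, the implied effect is on the order of a pound or two a year.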
I am skeptical, because the author ignores the possibility of substituting untaxed sugar-sweetened foods or beverages.
People who crave sugar will find no dearth of substitutes for sugar-sweetened sodas.
Moreover, most consumers of these sodas are not and never will be obese.
They may well be overweight, but all that that means is that they are heavier than the "ideal" weight calculated by physicians; if they are only slightly or even moderately heavier, the consequences for health or social or professional success are apparently slight.
To the extent that a soda tax would cause substitution of equally sugared foods, it would not only have no effect on obesity; it would yield no revenue--a material consideration because supporters of the tax hope, albeit inconsistently, that it will both reduce obesity significantly and contribute significantly to financing the Administration's ambitious and very costly program of health-care reform.
There are many obese Americans, in the sense of ones who are grossly overweight (with some being morbidly obese), and we should consider whether society should be concerned with obesity if not with mere overweight.
Obesity impairs health and, in most segments of the population, diminishes social and professional success as well, so it can be regarded as self-destructive behavior.
Some of it is involuntary--there are people whose genes make it virtually impossible for them to avoid becoming obese--but most obesity could be avoided by careful diet and exercise.
The obese are people who by dietary choice and preference for a sedentary style of life have traded off the costs of obesity against the costs of being thin and have decided (at least in a "revealed preference" sense--they may not have consciously chosen a style of life that predisposes them to obesity) that the costs of thinness preponderate over the benefits.
And in general we do not try to prevent people from making such tradeoffs.
But there are two situations in which preventing people from choosing the style of life that maximizes their utility can be defended (provided certain assumptions are made about cost and efficacy) on economic grounds.
One is where consumers are unable to evaluate a product or to act upon their evaluation; another is where a voluntary transaction imposes costs on other people which the transactors do not take into account.
The first is a significant factor in the soda market.
The sellers advertise very heavily to children, who do not have the knowledge or the self-control that they would need to be able to resist such advertising.
In well-ordered households, the parents regulate children's access to television and the Internet and know they should limit the children's consumption of sugar-flavored drinks and do limit it.
But in many modern American households, especially but not only those in which there is only one parent, children's access to soda and soda advertising is not restricted.
The solution, though, is not a tax on sodas, as such a tax would have only a small effect.
A ban on advertising would be preferable; it would probably impose only slight costs on adult consumers of such drinks, because the advertising of such drinks contains little information.
It is true that such a ban would reduce new entry into the soda market and that this might lead to higher prices, but if so that would reinforce the effect on sales of the ban on advertising.
As to whether by increasing obesity the sale of sugar-flavored sodas imposes costs on other people besides the buyers, the evidence is mixed.
Obese people have more health problems than the non-obese and hence higher annual medical costs; they also lose more time at work because of illness.
Their poorer health increases the medical costs of other people in their insurance pools and reduces the productivity of their employers, assuming realistically that employers cannot selectively reduce the wages or health benefits of their obese employees.
Cutting the other way, obese people have a reduced life expectancy, and the shorter a person's life, the less an above-average annual cost of medical care translates into an above-average total (lifetime) cost.
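The offsetting effect of a shorter life expectancy can be put in simple arithmetic. Here is a minimal sketch; all dollar figures and life expectancies are hypothetical assumptions chosen only to illustrate the point, not estimates:

```python
# Illustrative sketch: a higher annual medical cost can be partly offset,
# in lifetime terms, by a shorter life expectancy. All figures hypothetical.

def lifetime_medical_cost(annual_cost, years_of_life):
    """Total (undiscounted) lifetime medical cost."""
    return annual_cost * years_of_life

# Suppose an obese person incurs $6,000 a year over a 70-year life and a
# non-obese person $4,500 a year over an 80-year life.
obese_total = lifetime_medical_cost(6_000, 70)      # $420,000
non_obese_total = lifetime_medical_cost(4_500, 80)  # $360,000

# The $1,500 annual gap translates into only a $60,000 lifetime gap; at
# equal 80-year life spans it would have been $120,000.
lifetime_gap = obese_total - non_obese_total
```
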
But assuming nevertheless that the net social costs of obesity are positive, that would be a ground for taxing obesity itself; such a tax, however, would be unacceptable as well as cruel.
The alternative of a soda tax would be unlikely to have much effect, for the reasons stated earlier.
Are there better ways of fighting obesity, assuming it is worth fighting? Probably not.
Education would probably have very little effect, because almost all people know that being fat has bad consequences and that eating foods rich in sugar and butter and not exercising increase the likelihood of becoming obese.
Obesity is concentrated in the lower middle class, which contains a high proportion of people who have very high discount rates, which prevents them from giving significant weight to the future consequences of present behavior.
Children may be ignorant about the costs of obesity and the effects on it of sugar, but because of lack of self-control and children's inability to imagine themselves as middle-aged adults, I doubt that trying to educate them in the dangers of drinking sugar-sweetened beverages would be effective.
A tax on calories, or on high-calorie foods or ingredients, would be difficult to design and administer and would impose welfare losses, without significant offsetting wealth gains, on thin people.
A further problem is that fattening foods, including sugar-flavored sodas, have fallen in price over time relative to fruits and vegetables and other healthful foods, so that a tax on calories would be highly regressive.
A modest measure would be to bar the sale or other provision of sugar-flavored sodas and other fattening foods in schools, and to substitute nutritious low-calorie school lunches for the present fare.
In addition, more school time could be allotted to physical education, which in recent years has diminished in most schools.
The cost of these measures would be modest and they would have some effect in reducing obesity.
My book A Failure of Capitalism: The Crisis of '08 and the Descent into Depression was published a couple of weeks ago, but it had been completed on February 2.
In order to bring the book up to date, reflecting events since then and also some fresh thinking and reading on my part, I have decided to do some update blogging of the book under the auspices of the Atlantic Monthly, which hosts a number of blogs.
The address of my Atlantic blog is http://correspondents.theatlantic.com/richard_posner/.
The entries are, as in the Becker-Posner blog, in reverse chronological order.
The first two entries have now been posted.
I will not be using any material from the Becker-Posner blog in the Atlantic blog (the name of which is A Failure of Capitalism), or vice versa.
My blogging with Professor Becker will therefore be unaffected.
My post last week on the decline of the conservative movement in the United States received more than 200 comments.
Many of them were very thoughtful, and many others were very shrill.
It is apparent that global warming, abortion, and guns, in approximately that order, arouse particular emotions among many passionate self-described conservatives.
About the first of these three issues, I wish to clarify my position briefly.
I do not think there is much doubt that carbon emissions generated by human activities increase the amount of carbon dioxide in the atmosphere and by doing so raise surface temperatures.
How much they raise them and with what consequences remain uncertain.
I merely think that the risk of catastrophic global warming is sufficiently great to warrant more vigorous remedial efforts than have been attempted thus far by the United States.
About abortion, my personal position is the same as Becker's.
I will add only that I think the legality of abortion should be determined by legislatures rather than by courts.
I think Roe v. Wade was a mistaken decision, though probably one that we shall have to live with.
Similarly, I think private gun ownership should be a matter for legislative determination, rather than judicial.
The Second Amendment is unclear about whether there is a right to own guns for personal self-defense or hunting, and I don't think delving into eighteenth-century documents argued to bear on the meaning of the amendment is a sensible way of doing constitutional law in the twenty-first century.
Some commenters seem to believe that because I am critical of the current conservative movement, I must be a liberal--maybe even a left-wing Democrat.
To those commenters, disbelief in global warming, in the regulation of gun ownership, and in the criminalization of early as well as late abortions is a litmus test of "true" conservatism.
There are, in fact, multiple conservatisms, as Becker and I have emphasized.
Like Becker, I believe in limited government and so do not support government activities that cannot be justified convincingly by reference to considerations of economic prosperity, basic individual liberties, or domestic or national security.
I do not favor the curtailment of individual liberties on the basis of religious beliefs, nostalgia for the "good old days," or traditional social beliefs (such as distaste for racial minorities or homosexuals) that cannot be related to economic, libertarian, or security values.
One of Reagan's great political achievements was to unite the diverse conservatisms in a single political movement that managed to gain the support of a majority of the American people.
That unity has now dissolved, and it will require skillful political entrepreneurship plus overreaching by liberal politicians (or the kind of left-wing extremism that marred the late 1960s and early 1970s) to restore it.
The ideological division within the conservative movement has been compounded by a decline in intellectual and managerial competence--a tendency to substitute will for intelligence ("I believe it so it must be so").
Some commenters note the intellectual and ethical failings of liberals, and they are right to do so.
But it is only on the Right, at present, that anti-intellectualism is embraced and extolled.
I am less bold than Becker, and so I will make no predictions about the future of the world economy.
I do have some reservations about treating Asia as a unit, however.
Even if one stops at the eastern border of Pakistan, the Asian countries are far from uniform in their economic prospects.
For they include such politically and economically challenged nations as Pakistan, Bangladesh, and Burma, along with Australia and New Zealand, which are not culturally or ethnically Asian; and Japan, which has a rapidly declining population and is economically stagnant, albeit at a high level.
The fact that there is such heterogeneity in the Asian world suggests that individual country factors predominate over factors that distinguish Asia as a whole from the other continents.
What is common to a number of the Asian countries is mercantilism, which is to say the policy of accumulating large cash balances (in the old days, it was gold) by devaluing the currency, so that exports are cheap and imports dear.
The result is an export surplus; and if a country sells more than it buys, it takes in more foreign currency than it spends in its own currency.
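The accounting behind this is simple enough to be put in a stylized sketch; the figures below are hypothetical, chosen only to illustrate how a persistent export surplus builds foreign-exchange reserves:

```python
# Stylized sketch: an export surplus means foreign-currency receipts exceed
# foreign-currency payments, so reserves accumulate period by period.
# All figures are hypothetical.

def reserves_after(initial_reserves, exports, imports):
    """Foreign-exchange reserves after one period of trade (same units)."""
    return initial_reserves + (exports - imports)

# A persistent surplus of 200 per period builds reserves steadily.
reserves = 0
for _ in range(5):
    reserves = reserves_after(reserves, 500, 300)
```
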
China, aggressively mercantilist, has accumulated almost two trillion U.S. dollars.
The mercantilist policy of China and other East Asian countries has been attributed to the financial trouble that a number of these countries got into in the late 1990s when their governments were pursuing the opposite policy--that of encouraging imports and, in particular, foreign investment in their countries.
As a result (much like the United States in the 2000s!) these countries accumulated large foreign exchange deficits, which ballooned when the investors shifted many of their investments to other parts of the world.
The deficits reached a level at which the countries had to push interest rates up to depression-causing levels in order to prevent the flight of capital from reaching a point at which the countries' credit systems would collapse.
Once burned, twice shy; the East Asian countries switched to an export-first policy, which, by enabling them to accumulate large dollar balances, has prevented a recurrence of capital flight.
I am calling it "mercantilist," but in part, perhaps in major part, it should be viewed as precautionary--intended to prevent a repetition of the economic crisis of the 1990s.
Yet China had already begun to emphasize exporting.
The reason may lie in John Maynard Keynes's analysis of mercantilism.
He argued that if domestic demand for goods and services is weak, perhaps because of a low propensity to consume, there is likely to be a lot of unemployment, as otherwise supply would exceed demand.
By devaluing the currency and thus making exports cheaper and so increasing the demand for exports, government can increase employment, because the higher output is, whether consumed domestically or abroad, the more workers are needed.
The Chinese population was (and is) poor, so domestic demand was weak, and overall demand and therefore output could be increased by pushing exports.
The success of such a policy would depend on the foreign demand for goods that Chinese industry was able to produce at reasonable cost, but that demand proved to be strong.
The large dollar balances accumulated as a consequence of the export-first policy were available for investment.
As a result, China is today the world's largest creditor.
Should the United States and other debtor nations reduce their foreign borrowing, China's (and other East Asian countries') mercantilist policies will become less attractive because interest rates will fall.
Moreover, as domestic demand in those countries grows, there will be pressure to make imports cheaper and to divert production from satisfying foreign demand to satisfying domestic demand.
On both counts, trade balances will become more even.
But how even? Japan, despite its very high standard of living, had, until the current economic downturn, a strongly positive balance of trade.
An unusually high propensity to save, coupled with an inefficient system for distributing consumer goods and services, keeps domestic demand down.
It remains to be seen whether, as China's economy grows, it will become more like Japan, or more like the United States.
I sense intellectual deterioration of the once-vital conservative movement in the United States.
As I shall explain, this may be a testament to its success.
Until the late 1960s (when I was in my late twenties), I was barely conscious of the existence of a conservative movement.
It was obscure and marginal, symbolized by figures like Barry Goldwater (slaughtered by Lyndon Johnson in the 1964 presidential election), Ayn Rand, Russell Kirk, and William Buckley--figures who had no appeal for me.
More powerful conservative thinkers, such as Milton Friedman and Friedrich Hayek, and other distinguished conservative economists, such as George Stigler, were on the scene, but were not well known outside the economics profession.
The domestic disorder of the late 1960s, the excesses of Johnson's "Great Society," significant advances in the economics of antitrust and regulation, the "stagflation" of the 1970s, and the belief (which turned out to be mistaken) that the Soviet Union was winning the Cold War--all these developments stimulated the growth of a varied and vibrant conservative movement, which finally achieved electoral success with the election of Ronald Reagan in 1980.
The movement included the free-market economics associated with the "Chicago School" (and therefore deregulation, privatization, monetarism, low taxes, and a rejection of Keynesian macroeconomics), "neoconservatism" in the sense of a strong military and a rejection of liberal internationalism, and cultural conservatism, involving respect for traditional values, resistance to feminism and affirmative action, and a tough line on crime.
The end of the Cold War, the collapse of the Soviet Union, the surge of prosperity worldwide that marked the global triumph of capitalism, the essentially conservative policies, especially in economics, of the Clinton administration, and finally the election and early years of the Bush Administration, marked the apogee of the conservative movement.
But there were signs that it had not only already peaked, but was beginning to decline.
Leading conservative intellectual figures grew old and died (Friedman, Hayek, Jeane Kirkpatrick, Buckley, etc.), and others as they aged became silent or less active (such as Robert Bork, Irving Kristol, and Gertrude Himmelfarb), and their successors lacked equivalent public prominence, as conservatism grew strident and populist.
By the end of the Clinton administration, I was content to celebrate the triumph of conservatism as I understood it, and had no desire for other than incremental changes in the economic and social structure of the United States.
I saw no need for the estate tax to be abolished, marginal personal-income tax rates further reduced, the government shrunk, pragmatism in constitutional law jettisoned in favor of "originalism," the rights of gun owners enlarged, our military posture strengthened, the rise of homosexual rights resisted, or the role of religion in the public sphere expanded.
All these became causes embraced by the new conservatism that crested with the reelection of Bush in 2004.
My theme is the intellectual decline of conservatism, and it is notable that the policies of the new conservatism are powered largely by emotion and religion and have for the most part weak intellectual groundings.
That the policies are weak in conception, have largely failed in execution, and are political flops is therefore unsurprising.
The major blows to conservatism, culminating in the election and programs of Obama, have been fourfold: the failure of military force to achieve U.S. foreign policy objectives; the inanity of trying to substitute will for intellect, as in the denial of global warming, the use of religious criteria in the selection of public officials, and the neglect of management and expertise in government; a continued preoccupation with abortion; and fiscal incontinence in the form of massive budget deficits, the Medicare drug plan, excessive foreign borrowing, and asset-price inflation.
By the fall of 2008, the face of the Republican Party had become Sarah Palin and Joe the Plumber.
Conservative intellectuals had no party.
And then came the financial crash last September and the ensuing depression.
These unanticipated and shocking events have exposed significant analytical weaknesses in core beliefs of conservative economists concerning the business cycle and the macroeconomy generally.
Friedmanite monetarism and the efficient-market theory of finance have taken some sharp hits, and there is renewed respect for the macroeconomic thought of John Maynard Keynes, a bête noire of conservatives.
There are signs and portents of liberal excess in the policies and plans of the new administration.
There will thus be plenty of targets for informed conservative critique.
At this writing, however, the conservative movement is at its lowest ebb since 1964.
But with this cardinal difference: the movement has succeeded in shifting the center of American politics and social thought so far that it can rest, for at least a little while, on its laurels.
Eugene Fama, a brilliant economist at the University of Chicago, is one of the principal founders of modern finance theory, and is an especially strong proponent of the “efficient markets” theory of asset pricing, whereby the prices of common stocks or other traded assets are assumed to impound the best available information about their value, including future value discounted to the present.
Fama is not a dogmatic proponent of the theory; some of his pioneering research has identified anomalies in stock pricing that seem to contradict the theory.
But he supports the theory in the main, and one consequence is that he is extremely skeptical of the existence of asset-price “bubbles.”
A bubble is an increase in the price of an asset that cannot be explained by changes in conditions of demand or supply that could be expected to alter the value of the asset relative to the value of other goods or services.
(More precisely, a bubble can be defined as a disequilibrium event involving a steep increase in price that persists for a significant time, cannot be explained by fundamentals, and, after peaking, quickly gives way to a steep decrease in price.) We experienced what is widely believed (and what I believe) to be a bubble in housing prices between 1996 and May 2006, when average housing prices plateaued and immediately began to decline.
Between 2002 and May 2006 the median price of a house rose by almost 50 percent.
Between then and 2010 it fell by a third, and it has since risen slightly.
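The arithmetic here is worth noticing: a rise of roughly one half followed by a fall of one third brings a price index back almost exactly to its starting point. A quick check, using the rounded figures just given:

```python
# A ~50% rise followed by a one-third fall returns an index to its start,
# since 1.5 * (2/3) = 1. Rounded figures from the text.
index = 100.0
index *= 1.5        # median price rises by almost 50% (2002 to May 2006)
index *= 1 - 1 / 3  # then falls by a third (May 2006 to 2010)
assert abs(index - 100.0) < 1e-9  # back (essentially) where it started
```
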
One can imagine factors that would explain a steep rise and later a steep fall in the price of housing.
They might include a sharp increase in incomes or wealth, or in population, or big changes in construction costs, zoning and building codes, commuting costs, family size, and so on, but such changes cannot account for the pattern of housing prices.
What seems to have happened was that the prosperity of the second half of the 1990s increased the demand for and hence (because the housing stock is very durable, so that increases in demand are not immediately reflected in increases in supply) the price of houses, and that the upward trend continued into the early 2000s because of the mistaken decision of the Federal Reserve to push short-term interest rates way down in 2001, to keep them down, to raise them only gradually, and to lower them again if necessary to prevent the market from dropping (the “Greenspan put”).
Because houses are bought mainly with debt (a mortgage), and because short-term interest rates influence long-term rates, such as mortgage interest rates—and in fact those rates fell in the wake of the Fed’s pushing down short-term rates—people found it cheaper to finance a house purchase and so demand continued to increase, driving up price.
So far, no bubble.
But even after the Fed started raising short-term interest rates, albeit very gradually, in 2004, and mortgage interest rates began to rise as well, house prices continued their rapid climb.
At this point, rising house prices became a bubble phenomenon.
Prices continued rising because prices were rising.
People who did not own a house watched prices rise and inferred that other people knew or thought that houses were underpriced—were a good value, a good investment.
Observing such behavior, and inferring that therefore prices might well continue rising, people who didn’t own a house began to think it was a good time to buy a house; indeed some people began buying houses as a speculation.
The buying frenzy was facilitated by the adjustable-rate mortgage, which enabled people to buy houses with little or no down payment and very low (“teaser”) interest rates for the first couple of years followed by a much higher “reset” rate.
If during that period house prices continued to rise, the buyer would have substantial equity in the house and would be able to refinance his house with a conventional 30-year mortgage at a low rate, and so would never have to pay the reset rate on his original, adjustable-rate mortgage.
If prices didn’t rise, he could abandon the house at the end of the two years, ordinarily at no cost except a moving cost.
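The truncated downside that such a buyer faces can be sketched as a simple payoff function; the moving cost below is a hypothetical figure, and real outcomes would of course be messier:

```python
# Hypothetical sketch of the adjustable-rate buyer's truncated downside:
# if prices rise, refinance and keep the equity gain; if not, walk away
# and lose only the moving cost. All figures are assumptions.

def arm_buyer_payoff(price_change, moving_cost=5_000):
    """Buyer's payoff after the teaser period, with zero down payment."""
    if price_change > 0:
        return price_change   # equity gained; refinance conventionally
    return -moving_cost       # abandon the house

assert arm_buyer_payoff(50_000) == 50_000    # upside is unbounded
assert arm_buyer_payoff(-80_000) == -5_000   # loss capped at moving cost
```
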
Buying a house or other asset because other people are doing so may seem an example of irrational “herd” behavior.
But herd behavior is not irrational.
If you are an antelope, and you see your fellow antelopes begin to stampede, you are well advised to join them, because they may be fleeing from a lion.
We commonly take our cues from people who we believe have desires and aversions similar to our own.
Fama believes that the housing “bubble” was not a bubble—that, rather, people rationally if mistakenly believed until May 2006 that houses were underpriced, and beginning then believed that they were overpriced.
But he acknowledges that he has not been able to identify the demand or supply factors that would have given rise to such beliefs.
Nothing much about the housing industry seems to have changed over the period of a few years in which housing prices rose by almost 50 percent and then plunged to nearly their previous level.
If rational (or at least not demonstrably irrational) herd behavior explains the bubble, what explains its bursting? Obviously the price climb must end well short of the point at which the entire Gross Domestic Product is being spent on housing.
Eventually everybody who wants a house and can afford it at the existing price level has bought it.
But why don’t prices level off when that happens, rather than fall? The reason is that the satiation point is not an equilibrium.
When prices stop rising, people who counted on continued price increases to enable them to refinance their mortgage, or who had bought houses as speculative investments, begin to abandon or sell the houses, and with the supply of housing rising but demand not rising, prices fall.
Now reverse bubble thinking sets in.
People infer from declining house prices that other people think houses are not a good investment after all.
They begin to worry that as more people abandon their houses or put them up for sale, prices will continue falling, so they decide to get out while the getting is good, and as more people do that prices fall faster and farther.
The run that brought down Lehman Brothers in September 2008 and threatened to bring down much of the rest of the banking industry was a similar phenomenon.
Most of Lehman’s capital was short term, and unlike deposits in commercial banks its capital was not federally insured.
When it was realized that Lehman was heavily invested in mortgage-backed securities, whose value was plummeting in the wake of the bursting of the housing bubble, the suppliers of Lehman’s short-term capital began withdrawing their capital from Lehman—less because they thought that Lehman’s assets no longer exceeded its liabilities than because they feared that other suppliers of Lehman’s capital thought Lehman was broke and therefore would withdraw their capital as fast as possible and that—a classic bank run—would break Lehman.
It was another example of rational herd behavior.
What brought down Lehman and threatened to bring down much of the rest of the banking industry was the bursting of another bubble—the housing-credit bubble.
The firms that finance the purchase of housing, whether they are mortgage lenders or purchasers of securitized mortgage debt, are essentially joint venturers in the housing market.
If they are financing a housing bubble and it bursts, they are losers along with the house owners.
The bursting of the housing bubble will precipitate defaults and reduce the value of the house as the collateral for the mortgage.
So why did the sophisticated finance industry finance a housing bubble whose bursting was bound to hurt the industry? There were plenty of warnings that there was a housing bubble; why did the industry ignore them? I think it was another though somewhat more complex example of rational herd behavior.
The major assets of a modern financial institution are short-term capital and talented staff, and both are highly mobile assets that the institution will lose if it is less profitable than its competitors, and it will be less profitable if it refuses to make risky mortgage loans.
Just as the adjustable-rate mortgagee’s downside risk is truncated by his ability to abandon the home if house prices don’t rise, so the financial institution’s downside risk is truncated by limited liability, which protects shareholders and managers from having to pay their company’s debts out of their own pockets.
Thus I don’t think bubble behavior is necessarily or even characteristically irrational.
Often, including in the case of the housing and housing-credit bubbles, it is a rational adaptation to uncertainty.
It is not efficient behavior in an overall social sense, and so efforts at detection and prevention of bubbles are probably worthwhile.
Given the abundant warning signs and explicit warnings of a housing bubble and a housing-credit bubble, the failure of the Federal Reserve under Greenspan and Bernanke, the federal housing authorities, other economic organs of government, and almost the entire economics profession to detect these bubbles cries out for an explanation.
Speculators have never been popular, and they have never been as unpopular as they are in the United States today.
Increasingly they are blamed for the economic crisis.
Probably they should be rewarded for making the crisis less grave than it would otherwise have been.
There is a wide range of speculative activities, but my focus will be on financial speculation, which I’ll define as a bet on the future price of some commodity or asset, which could be a house or a bond—to pick the two speculative assets centrally involved in the crisis.
(Mortgage-backed securities and collateralized debt obligations, the specific financial instruments at the center of the crisis, are essentially bonds or bond clusters—debt obligations or packages of debt obligations that pay a contractually fixed interest rate or rates.) In the 2000s, until the crash, there was a great deal of speculation in housing prices, including by people who bought a house with a mortgage that they could afford only if the value of the house increased.
They would buy the house with no down payment and very low (sometimes zero) interest rates usually for two years, after which the interest rate would be “reset” at a higher level—a level they could not afford unless their house appreciated significantly in value, in which event they would have equity in the house and could use it to refinance the house with a normal mortgage at a normal interest rate.
So they wouldn’t have to pay the reset rate.
At the other end of the market from the speculating home buyer was the speculating investor.
Buying MBSs (mortgage-backed securities) and CDOs (collateralized debt obligations, often an assemblage of the riskier slices of mortgage-backed securities) entailed speculating on future housing prices, because the direction of those prices—up or down—would affect the default rate on the mortgages in which the buyers of the securities were investing indirectly.
If the default rate rose because housing prices cratered, the securities might not pay the agreed-on interest rate, and so their value would fall.
Some very smart, very unconventional people, though they were only a tiny minority of the financial community, began thinking, some as early as 2005, that housing prices might well crash, that the housing boom was a bubble—house prices were rising because house prices were rising, convincing people that they would keep on rising.
The “contrarians”—the subject of Michael Lewis’s new book, The Big Short—wanted to put their money where their mouth was.
But while it is easy to bet on a rise in the future price of some asset, simply by buying the asset, it is not so easy to bet on a fall in that price.
If it is a stock (or other security, including a bond), you can borrow it and agree to sell the stock to someone at some specified date in the future at a specified price.
If, as you expect, the price falls, you can buy the stock you’ve agreed to sell at a price lower than the sale price, deliver the borrowed stock to the buyer and be paid the agreed-on price, pocket the difference, and return the cheap stock you just bought to the person you borrowed from, thus completing the transaction.
The process I have just described is selling short.
Selling short is risky, because the price of the stock may rise above the price specified in the short sale when you expected it to fall (which means you’ll have to buy, at a price higher than the price specified in the sale contract, the stock you need in order to return stock equivalent to what you borrowed), and costly, because you have to pay interest to the person you borrowed the stock from.
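The arithmetic of the short sale can be put in a few lines; the prices and the borrow fee below are hypothetical:

```python
# Minimal sketch of short-sale arithmetic: sell borrowed stock now, buy it
# back later, pay the lender a fee. All prices hypothetical.

def short_sale_profit(sale_price, repurchase_price, borrow_fee):
    """Profit from selling borrowed stock and buying it back later."""
    return sale_price - repurchase_price - borrow_fee

# The bet pays off: sold at 100, bought back at 70, fee of 2.
assert short_sale_profit(100, 70, 2) == 28
# The price rose instead: sold at 100, had to buy back at 120.
assert short_sale_profit(100, 120, 2) == -22
```
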
As an alternative to short selling, you can buy a credit default swap, which is a form of insurance on debt—not necessarily your own debt.
If there is a bond that you expect to go into default (it might be a bond backed by a collection of mortgages), you can buy insurance against the resulting loss in the bond’s value.
So if there is a default, the issuer of the credit default swap pays you, and so you gain just as the short seller gains when the price of the stock or bond that he’s shorted falls.
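A stylized payoff sketch makes the parallel to the short sale clear; the loss and premium figures are hypothetical, and real credit default swaps are of course far more intricate:

```python
# Hypothetical sketch of the protection buyer's payoff on a credit default
# swap: pay a premium, and be compensated if the insured bond defaults.

def cds_payoff(defaulted, loss_on_bond, premium_paid):
    """Net gain to the protection buyer (all figures hypothetical)."""
    compensation = loss_on_bond if defaulted else 0
    return compensation - premium_paid

assert cds_payoff(True, 100, 10) == 90    # default: issuer pays the loss
assert cds_payoff(False, 100, 10) == -10  # no default: premium is lost
```
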
Like other speculators, short sellers and buyers of credit default swaps that insure strangers’ debt are unpopular because they are trading on and therefore hoping for a future calamity.
When the price of an asset falls as a result of speculative activity, the speculators are blamed.
That’s like blaming a thermometer for a fall in temperature.
Provided the speculators do not spread false rumors about the assets they’re hoping to see fall in price, or engage in other fraud, their activity is socially beneficial.
It adds to the information in the market and by doing so tends to bring about a more rapid and complete alignment between prices and underlying values.
It’s hard to sell houses short, but one can speculate that housing prices will fall by selling mortgage-based bonds short, since as I said a housing crash will increase the mortgage default rate and thus reduce the value of bonds that are based on mortgages.
Had there been rampant short selling of such bonds in the early 2000s, the price of those bonds would have fallen because a high level of short selling would have been a signal of widespread doubt that housing prices would continue to rise.
When bond prices fall, yield rises, because the interest rate of a bond is a fixed percentage of the bond’s face value.
(So if the value of a bond that pays 2 percent interest falls in half, the interest rate to buyers of the bond rises to 4 percent.) With interest rates on mortgage bonds higher and housing prices therefore lower (because mortgage interest is a major cost of buying a house), we might have been spared the housing bubble whose bursting triggered the economic crisis that the nation and the world are still struggling to climb out of.
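The parenthetical 2-percent example follows from a line of arithmetic: the coupon is a fixed dollar amount, so the yield to a buyer varies inversely with the price paid. The face value below is an invented illustration.

```python
# Inverse price-yield relation: the coupon payment is fixed at
# coupon_rate * face_value, so a buyer's yield is that fixed payment
# divided by whatever the buyer actually pays for the bond.

def current_yield(coupon_rate, face_value, price):
    return coupon_rate * face_value / price

face = 1000.0
print(current_yield(0.02, face, face))      # 0.02 -- bought at par
print(current_yield(0.02, face, face / 2))  # 0.04 -- price halves, yield doubles
```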
The case for central bank independence from the political branches of the government is simple.
Central banks control the amount of money in the economy.
For example, by selling short-term government securities for cash, they reduce the amount of money in the economy and this drives up short-term interest rates, while by buying such securities for cash they increase the amount of money in the economy and that drives down short-term interest rates.
(Long-term interest rates are also affected, and in the same direction.) Politicians like the money supply to increase before elections, because a reduction in interest rates stimulates economic activity; consumers borrow more to consume, and businesses borrow more to invest in production.
In principle, consumers and businesses should anticipate inflation (if the money supply is increasing faster than the output of goods and services), resulting in higher long-term interest rates and various distortions in economic activity, and take preventive measures that will reduce the stimulative effect of the central bank’s low-interest-rate policy.
But we know from the reaction of consumers and producers to the very low interest rates of the early 2000s that the effects of very low rates on consumption and production are not fully and immediately offset by anticipation of future consequences.
Thus if a nation’s central bank is controlled by politicians, it can be expected to reduce short-term interest rates at particular phases in the electoral cycle, and this tendency, because unrelated to any economic reasons for low interest rates, can be expected to have an inflationary effect.
Moreover, inflation can easily get out of hand.
When inflation is anticipated, money circulates faster: people hold smaller cash balances because inflation erodes the value of cash, so each dollar changes hands more often. The more rapidly money circulates, the greater the flow of spending relative to output and therefore the higher the rate of inflation.
(Money that does not circulate—money that people keep under their mattresses, for example—is not really part of the money supply because it is not exchanged for goods or services.)
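One conventional way to formalize this relation is the quantity-theory identity MV = PQ (money stock times velocity equals price level times real output); the magnitudes below are made up for illustration only.

```python
# Quantity-theory identity MV = PQ, rearranged as P = M * V / Q.
# M = money stock, V = velocity (how fast money circulates),
# Q = real output. All numbers are invented for illustration.

def price_level(money, velocity, output):
    return money * velocity / output

baseline = price_level(money=1000, velocity=5, output=500)  # P = 10
faster = price_level(money=1000, velocity=6, output=500)    # P = 12
# A 20 percent rise in velocity alone, with money and output fixed,
# produces a 20 percent rise in the price level.
print((faster - baseline) / baseline)  # 0.2
```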
As inflation mounts, the cure—a sharp reduction in the money supply and concomitant increase in interest rates—becomes more painful.
When Paul Volcker, the chairman of the Federal Reserve, pushed short-term interest rates to 20 percent in August 1981 to break an inflation rate that had reached 15 percent, he precipitated a very sharp recession.
President Reagan was furious but Volcker stuck to his guns.
A politically dependent Federal Reserve probably would not have done so.
In fact the Federal Reserve is not completely independent from politics.
Unlike the Supreme Court, its independence is not dictated by the Constitution.
The United States did not have a central bank when the Constitution was promulgated, and the Constitution didn’t require the creation of one.
The Federal Reserve dates only from 1913, and before then experiments with central banking in the United States had been sporadic.
The Federal Reserve’s independence—which is a function of the long terms of the members of the Federal Reserve Board (14 years, though the chairman’s term is only four years, albeit renewable), the fact that they cannot be removed before the expiration of their terms, the fact that the Federal Reserve is self-financed rather than financed by annual congressional appropriations, and the fact that the members of the Open Market Committee (the organ of the Federal Reserve that controls the money supply) include presidents of the local federal reserve banks, who are chosen by private banks rather than by the President—is a gift of Congress; and what Congress has given, Congress can take back.
Hence Federal Reserve chairmen and members can’t just thumb their noses at Congress.
Particularly not in an economic crisis, such as hit the country and the world in September 2008.
Essentially the Federal Reserve recapitalized the banking industry by buying its mortgage-backed securities (and other bank debt as well), thus pouring cash into the banking system.
(As did the Treasury Department.) By greatly expanding the money supply, the Fed sowed the seeds of a future inflation—but in times of economic desperation the attitude is: let the future take care of itself.
The Supreme Court is the best example of a government institution that is outside political control.
The Justices can as a practical matter be removed from office only if they commit crimes, and their decisions on matters of constitutional law can be nullified only by the very cumbersome process of amending the Constitution.
Also, there is widespread public respect for the Supreme Court, and for courts and judges in general.
The Federal Reserve has neither constitutional standing nor the enthusiastic support of the people.
Its close links to the banking industry are noted, and very few people have even the slightest understanding of the Fed’s role and responsibilities. It performed ineptly in the run-up to the financial crisis and in refusing to bail out Lehman Brothers.
Bernanke’s reappointment drew sharp opposition in the Senate, and there is some indication that Senate Majority Leader Reid extracted from Bernanke during the confirmation process a quasi-promise not to raise short-term interest rates too soon, lest by doing so the Fed choke off an economic recovery.
So the Fed is best described as quasi-independent rather than independent.
A constitutionally independent Fed—an institution parallel to the Supreme Court—would create something close to a dictatorship over the business cycle, and this is too much power for a democratic society (perhaps any society) to cede to a bevy of economists and financiers.
But the quasi-independence of the Fed, by giving it a great deal of discretion over monetary policy (even if the discretion is not complete), worries some economists, who think the Fed apt to misuse it, whether because of unsound economic theories or in an effort to mollify the political branches.
Occasional proposals, however, such as Milton Friedman’s, to tie the Fed to a precise rule (in Friedman’s case, a fixed rate of growth of the money supply) seem too rigid, because a rule cannot prescribe the correct response to unpredictable shocks to the economy, such as the financial collapse of 2008.
I agree with Becker that the underemployment rate is a more meaningful measure of the health of the labor market than the unemployment rate, but they are closely correlated, and both have changed little since last fall.
Thus the recovery has been slow.
I would emphasize the economic as distinct from the political reasons why it has been slow, though the latter have played a role as well.
Producers responded to the economic crisis that crested with the collapse of Lehman Brothers in mid-September 2008 by slashing prices and costs.
Slashing prices tends to keep output up while slashing costs increases labor productivity (output per unit of labor) because it involves layoffs and (what has the same effect) outsourcing to foreign countries.
If all the recent productivity gains had taken the form of outsourcing production, there would be no reason to expect that lower prices would have reduced unemployment even though they would have tended to maintain output and therefore consumption.
If higher productivity were achieved by technological advances rather than by layoffs or outsourcing, we would expect it to herald rapid economic growth.
If it’s just the result of layoffs and outsourcing, however, it will fall when, faced with increased demand, producers do more hiring.
But how fast will demand increase? There is pent-up demand from the postponement of purchases (especially of durable goods, where postponing replacement is feasible) and from sales out of inventory rather than new production.
These probably are the most important factors in the recent increases in GDP.
But unless there are other factors pushing demand, GDP will plateau when the pent-up demand is satisfied and inventories are restocked to a normal level.
Such plateauing is possible because no fewer than six factors are weighing on the economy.
One is the continued tightness of credit.
Interest rates are low but credit standards have risen, partly under pressure from federal bank examiners trying to prevent further bank failures, and partly from the new credit card law.
The tighter credit is, the less production and consumption there is, because both producers and consumers depend heavily on credit to maintain their desired level of current production and consumption.
And just as outsourcing has been used to achieve cost reductions, speculating in foreign currencies (the “carry trade”) and other high-risk lending and investing have been the source of the enormous recent bank profits.
Federal Reserve policy has enabled the banks to borrow at very low rates, and they find it more profitable to invest the borrowed funds abroad (or in trading) than to lend into a depressed consumer and small-business market with high expected default rates.
A second factor in the sluggishness of the economic recovery is the housing market.
Housing prices remain very depressed, and because houses are the major component of consumer wealth, depressed housing prices spell reduced wealth for much of the population.
A reduction in personal wealth tends to lead to a reallocation between spending and savings, in favor of the latter, and this slows current economic activity.
Third is the European financial crisis sparked by the insolvency of Greece (no mere liquidity crisis in that country—it’s broke), which has driven down the value of the euro relative to the dollar.
The consequence is to make U.S. exports more costly, which retards production in the United States.
And Greek (or Portuguese, or Spanish, etc.) bond defaults will hurt U.S. banks that have large investments in those countries.
Fourth are the dreadful finances of U.S. states and cities (California, New York, and Illinois are close to being bankrupt), which will require Greece-like austerity measures that will slow economic activity.
Fifth is the huge and mounting public debt of the United States, which creates a likelihood of either tax increases and spending cuts, both of which measures would reduce economic activity in the short run, or of inflation designed to reduce the debt burden—a danger rendered more acute by the increase in the money supply engineered by the Federal Reserve.
And sixth are the uncertainties created by the Obama Administration’s economic policies.
In fact the Administration has not pursued a radical leftwing course, as some have feared.
Its policies have been, in general, continuous with those of the Bush Administration.
But the health care law has created enormous uncertainty for business, which cannot calculate the effects of the law on its costs, and for many individuals as well.
(There is also uncertainty concerning taxes and spending, as I said, but the federal debt cannot be blamed primarily on this Administration.
It is largely the result of the Bush Administration’s fiscal incontinence combined with the unavoidable effects of a very severe economic crisis.)
I agree with Becker that the Administration’s pro-union policy is no help, but it’s not clear that it will amount to much.
The long-term decline in unionization seems irreversible.
The exception of course is public employees’ unions, but the fiscal distress of state and local governments is likely to weaken the public employee unions, just as in Greece.
Uncertainty is an enormous retardant to economic recovery, and the Administration could do more to allay it by, for example, giving up on banker bashing.
For in the face of uncertainty—in the sense of risk that cannot be quantified and thus embedded in a cost-benefit analysis—both producers and consumers tend to freeze.
That is a rational response to uncertainty: one freezes both to protect oneself against unknown dangers and to gain time for learning more about them.
But the effect of such freezing is to depress economic activity by channeling investing and spending into inert savings, such as government securities (which does however tend to alleviate the deficit).
Political turmoil in the United States, two unresolved wars abroad, dangerous political instability in nuclear-armed countries such as Pakistan and North Korea (and soon perhaps Iran), the European economic crisis—all are sources of uncertainty that have economic effects just by virtue of their uncertainty, apart from any direct effects on the government, business, or consumers.
I agree with Becker that wealth creates the conditions for democracy, but I would suggest a slightly more complex causal sequence: wealth creates the preconditions for liberty (i.e., rights), and liberty the preconditions for effective democracy.
As John F. O. Bilson explained in a 1982 article (“Civil Liberty—An Econometric Investigation,” Kyklos, vol. 35, pp. 94, 103), “Almost any reasonable theory of freedom would predict a positive correlation between freedom and real income. On the demand side, freedom must be considered a luxury good so that the resources devoted to the attainment of individual freedom are likely to be greater when per capita income is high. On the supply side, it is undoubtedly more costly to repress a wealthy person than a poor person and the need to do so is probably less acute.” As people become wealthier and therefore more self-confident, as education (another “superior good” in the economist’s sense—what Bilson calls a “luxury good”—the demand for which is a positive function of income) becomes more widespread, as secure property rights become more highly valued, and as society becomes able to afford, with the growing demand for law and order, a sophisticated security apparatus (including an independent judiciary) that maintains law and order without creating destabilizing resentments, what Bilson calls “freedom” and I call “liberty” become established features of the society.
Pretty soon, however, people want more than “negative” liberty, the protection of personal security and property rights; they want a say in the choice of their rulers—they want the right to vote; it is an expansion, or at least the illusion of an expansion, in their liberty in the broad sense of having control over one’s destiny to the maximum feasible extent.
For this progression to work, the distribution of income and wealth mustn’t be too skewed—if the entire wealth of a country is concentrated in a tiny class, the demand for rights by the people as a whole, or at least a large swath of the people, will be weak, if Bilson and I are correct that liberty is a superior good.
It is no surprise, therefore, that democracy emerged in countries like Great Britain, the United States, France, and Germany after—though often long after—a considerable degree of liberty in the narrow sense that does not include the right to vote had been obtained by the citizens of these countries as a result of the rise of a substantial middle class.
(This leaves unexplained the democracy without liberty found in a few ancient polities, such as Athens.) Magna Carta and the English Declaration of Rights of 1689 long preceded English democracy, and when the U.S. Bill of Rights (the first ten amendments to the Constitution of 1787) was proposed in 1789 the Constitution provided a limited role for voting.
Apart from limitations on who could vote, the only federal officials for whom the people could vote directly were the members of the House of Representatives.
All other federal officials were either elected indirectly (Senators and the President and Vice President) or appointed (judges and executive branch officials).
Democracy without liberty—the ancient Athenian formula—is highly risky, since it is easy for the first elected officials to refuse to allow (or to rig) the next election.
The rarity of such polities suggests that such a democracy is not an equilibrium.
More important, while a country need not be wealthy to be democratic, democracy without liberty is an unsatisfactory form of government because of the instability, just noted, to which it conduces.
But liberty is expensive, so how realistic is it to suppose that a poor country can be effectively democratic? India is the principal exception (and its democracy was suspended during the 1975-1977 “state of emergency” rule by Indira Gandhi), but a misleading one, in light of India’s very long and successful colonial occupation by Great Britain that preceded independence, though democracy has been a flop in other former British dependencies, notably Pakistan, formerly a part of British India.
Latin America has a long history of unstable democracy.
The normal evolution is from autocracy to democracy with liberty the intermediate stage.
This has been the pattern (though not an unvarying pattern) not only in Europe, but also in East Asia.
Yet liberty and democracy sometimes arrive at the same time, as they did in the former Soviet sphere.
It will be interesting to see whether this happens in any of the North African and Middle Eastern countries in which people are rebelling against autocratic governments, or whether there will be an intermediate stage of non- or semi-democratic government combined with enlarged personal liberty.
Although these countries (with the exception of the small oil-rich countries) are poor by Western standards, they are not so poor (as many African countries are) that they cannot afford to provide their citizens with liberty, the precondition to stable, functioning democracy.
On May 3, the United Nations issued its   2010 Revision of World Population Projections  , which, according to the media, predicts that the world’s population, expected to reach 7 billion by the end of this year, will be 10.1 billion by the end of the century.
But the media reports have tended to be imprecise.
The UN report offers three predictions—a high, medium, and low—depending on different assumptions.
The high is almost 16 billion and the low 6.2 billion (which is actually lower than the current world population), and a cautious appraisal of the report is that it provides a plausible basis for thinking that the world population will probably be between 6 and 16 billion 89 years from now.
Of course much could happen between now and then to push the world population far outside the range, such as an asteroid collision that wiped out the entire human population.
World population depends on fertility (the birth rate) and mortality (the death rate).
In predicting the former, the UN divides countries into those with birth rates below the replacement level of 2.1 births per woman (42 percent of the world’s population lives in such countries), countries with birth rates slightly above replacement level (40 percent), and countries with very high birth rates (the remaining 18 percent).
The death rate in each group of countries is calculated, and then the expected population growth within each group is estimated using the current rate of population increase (birth rate minus death rate) as the baseline for each.
(For the world as a whole, the UN report expects life expectancy at birth to increase from 68 today to 86 in 2100.)
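The projection logic just described, growth compounded at each group's current excess of births over deaths, can be sketched as follows; the group size and rates below are invented, not taken from the UN report.

```python
# Minimal sketch of the UN-style projection baseline described above:
# each group of countries grows at its current net rate (birth rate
# minus death rate), compounded annually. Inputs are hypothetical.

def project(population, birth_rate, death_rate, years):
    growth = birth_rate - death_rate  # net annual growth rate
    return population * (1 + growth) ** years

# A 1-billion-person high-fertility group, 3% births and 1% deaths
# per year, compounded over the 89 years to 2100:
print(round(project(1.0, 0.03, 0.01, 89), 2))  # ~5.83 (billions)
```

The extreme sensitivity of compound growth to the assumed rates is one reason the UN's high and low variants diverge so widely by 2100.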
The fall in population in countries with birth rates below replacement levels is expected to level off, which seems plausible, but what mainly drives the 10.1 billion and 16 billion predictions of total population at the end of the century is the assumption that birth rates will continue to be very high in the countries (mostly in Africa, Asia, and South America) that currently have high birth rates.
How realistic is such an expectation? Much depends on changes in the status of women.
If employment and wage rates of women rise, thus increasing the opportunity costs of children, birth rates will decline.
These changes are likely to be correlated with increased wealth, which in turn will accelerate the fall of death rates in these countries.
But the decline in death rates is likely to be less than the decline in birth rates (notice that the UN report predicts a 26 percent increase in longevity, (86 − 68)/68, versus a 44 percent increase in population under the midrange forecast, (10.1 − 7)/7), and if so the rate of population increase will slow in the high-fertility countries.
Really no one knows what the world population will be in 2100.
The history of population projections is not promising.
An example is how the Bureau of the Census, failing to anticipate the post-World War II “baby boom,” underpredicted U.S. fertility rates, then overpredicted them by failing to predict the end of the baby boom.
And techniques of predicting fertility and mortality don’t seem to have improved much, judging by results.
And the longer-range the forecast, the larger the forecasting error.
But suppose world population will reach 10.1 billion by the end of this century.
Would that be a good or a bad thing? Arguably a good thing, on several grounds.
One is that it would enable greater specialization, which reduces costs.
Second is that it would increase the returns to innovation by increasing the size of markets, though an offset is that innovation can produce immensely destructive as well as constructive technology.
Third, the more people there will be, the more high-IQ people there will be, and hence the faster the growth of knowledge; a possible offset is that there will also be more evil geniuses and other monsters, though persons of great potential for evil, such as Hitler, Stalin, and Mao, presumably are rare.
Fourth, if the total subjective welfare of the 10.1 billion exceeds that of a smaller population, or (depending on one’s version of utilitarianism) the average welfare of the greater population is greater than that of the smaller one, the world will be a happier place in a utilitarian sense (the excess of pleasure over pain will be greater).
The downside of population growth is the pressure it places on the environment and natural resources, especially the former, since the price system provides efficient rationing of resource use.
Rapid population growth will increase the problem of global warming and the rate of extinctions and other biodiversity decline, as well as create congestion externalities, though these concerns have diminishing significance the longer the time span of concern.
Continued population growth could combine with an acceleration of global warming to precipitate a global catastrophe (perhaps a catastrophic water shortage) within the next few decades, but 89 years from now the march of technology may enable such problems to be solved.
Think of the technological advances of the last 89 years (that is, since 1922), and imagine a comparable rate of technological advance applied to the current level of technology, which is so much higher than that of 1922.
But the beneficent effects of population growth, like the estimates of that growth, are highly uncertain.
The risk averse among us might prefer a lower rate of population growth in order to reduce the downside risks of that growth, even though the upside potential would be reduced as well.
I want to consider what an investment bubble is, why it arises, whether it’s irrational, and whether the current valuations of social network enterprises such as LinkedIn and Facebook are a bubble phenomenon.
Many finance theorists regard the price of an asset, such as a corporation, as the discounted value of its predicted profits.
Assets thus are overvalued ex ante if the prediction is unrealistically optimistic and are overvalued ex post if, though it may have been the most sensible prediction given what was or could be known when it was made, it turned out to be exaggerated.
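The premise that price equals the discounted value of predicted profits can be written out directly; the discount rate and profit forecasts below are invented for illustration.

```python
# Present value of a predicted profit stream -- the finance-theory
# premise stated above. Forecast profits and the discount rate are
# hypothetical.

def present_value(profits, discount_rate):
    # Profit in year t is discounted by (1 + r) ** t.
    return sum(p / (1 + discount_rate) ** t
               for t, p in enumerate(profits, start=1))

print(round(present_value([100, 100, 100], 0.10), 2))  # 248.69
# An over-optimistic forecast inflates the price ex ante:
print(round(present_value([150, 150, 150], 0.10), 2))  # 373.03
```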
Because of uncertainty such disappointments are inevitable.
But the premise is that investors are driven by predictions of profits.
Finance theorists who think that all trading is guided by profit predictions exclude the possibility of bubbles, in which asset prices rise steeply and then collapse, seemingly without regard to estimations of future profits.
An extremely high price-earnings ratio is a symptom of a bubble, since a very rapid and steep increase in future earnings would be necessary to make the asset in question a “good” investment in a conventional sense, and such increases are rare.
Although a bubble thus violates the assumption that asset prices are driven by profit (or loss) expectations, that doesn’t make buying in a bubble, even by a speculator who thinks it’s a bubble, irrational.
As Keynes pointed out, a stock market speculator is not interested in the profitability of the firms whose stocks are traded in the market, as such.
He is interested in the behavior of the other traders.
If he thinks they will bid up the price of a stock he will see an opportunity for gains by buying the stock even if he thinks it’s a dog.
In other words, he buys not because he thinks the stock is undervalued but because others are buying it.
This is a rational strategy, although risky.
It is rational because if prices continue rising the speculator will make money, at least for a time.
It is risky because other investors may stop buying, and start selling, and prices will fall.
The bubble speculator can try to protect himself by moving in and out of the market rapidly, enabling him to lock in gains before deciding whether to try for a further profit.
Increased trading activity is in fact a symptom that a bubble is reaching its peak.
Bubbles flourish in periods of “new era” thinking.
The late 1920s were heralded as a new economic era because of the tremendous boom in automobile production and the advent of new methods of consumer credit, such as installment buying.
This set off a stock-buying frenzy that set the stage for the stock market collapse that began in October 1929.
The rise of dot-com commerce in the 1990s was thought to herald a “new era” in commerce, and again there was a stock bubble and eventual bust.
The advent of novel methods of financing home purchases, including adjustable-rate mortgages and mortgage securitization, gave rise to the housing bubble of the 2000s that ended abruptly in the housing crash of 2008.
These bubbles were based on calculation rather than irrational exuberance.
People who bought early in the bubble and sold before it burst did well.
There are plenty of suckers and fools, but that is a constant, and bubbles are only occasional.
Although rational, bubble trading creates external costs, and so should be discouraged, or at least limited.
The externality is macroeconomic.
A bubble causes asset-price inflation, which increases borrowing because asset owners have more to offer as collateral for loans.
The recent housing bubble caused an enormous increase in consumer debt.
When a bubble bursts and asset prices plummet, loan collateral falls in value, resulting in contraction of credit.
Debt cannot easily be rolled over, defaults skyrocket, and a downward spiral in buying and selling ensues.
A bubble in a single stock would not have macroeconomic consequences.
But a bubble is likely to involve all the stocks in a particular segment of an industry (or even the entire industry), because whatever is pushing up the price of one stock is likely to push up the stocks of companies that sell the same or similar goods or services.
The belief in a “new era” is unlikely to affect only one company.
The simplest way to try to control bubbles is to limit borrowing for stock purchases (that is, buying stock on margin).
But that is unlikely to have much effect on a bubble limited to one industry, such as online social networking, unless the industry is huge, like housing (the housing market is larger than the entire stock market).
The dot-com bubble of the 1990s, when it burst, caused a recession that would have been of little consequence had it not been for the Federal Reserve’s incompetent response (pushing interest rates way down and by doing so setting the stage for the housing bubble, because houses are bought largely with debt and therefore housing prices can soar when interest rates are very low).
Similarly, if the social-network stock-buying frenzy is a bubble, its bursting is unlikely to have a substantial effect on the economy as a whole.
The online social network industry is tiny—the entire industry employs only a few thousand people—and the collapse of its stock values, which is no sure thing—though the astronomical price-earnings ratio of LinkedIn is a bubble symptom—would not be an economic disaster, even in the current fragile state of the U.S. and world economy.
The Medicare program subsidizes medical care for the elderly so heavily as to create serious concern about the fiscal soundness of the federal government.
And, as longevity rises, the size of the subsidy rises, and the rise in cost is compounded by the increasing cost of medical technology.
Among possible measures that would reduce the rate at which the cost of Medicare is increasing would be means-testing and—the focus of this piece—shifting the balance of subsidized R&D so that more is spent on increasing the quality of life of elderly people and less on extending their (our) lives.
We need to recognize that the public subsidy of medical care for the elderly is not limited to the Medicare program (and Medicaid as well, which provides medigap insurance—insurance against the part of the cost of medical care not covered by Medicare—to millions of Medicare participants who can’t afford private medigap insurance), but includes much of the public expenditure on medical R&D, since the elderly are by far the principal beneficiaries of continued advances in medical knowledge and treatments.
Although federal expenditures on medical R&D are small relative to Medicare, they have a multiplier effect: every year of life added by advances in medical technology increases the size of the elderly population and hence the cost of the Medicare program.
Thus, life-extending medical research can aggravate the nation’s fiscal problems directly (as a major spending program in a political culture that abhors tax increases) and indirectly by increasing the Medicare (and elderly Medicaid) population.
Could there be an offset? In National Bureau of Economic Research working papers in 2007 and 2010, Becker and coauthors point out that even elderly, frail people value life extension.
The question is how much they value it, and how much we want to subsidize it.
A 2006 article by Kevin Murphy and Robert Topel in the Journal of Political Economy entitled "The Value of Health and Longevity" estimates enormous gains to real national income from extending life, but the estimates rest on "value of life" figures that are inappropriate for this purpose.
What is misleadingly called “value of life” in the economic literature is not that at all; it is a way of estimating optimal expenditures on precaution.
Suppose that by observing the behavior of persons engaged in activities or occupations that involve a somewhat elevated level of risk, say a 1 in 10,000 probability of a fatal accident per year, we learn that the average person facing such a risk demands compensation in some form (such as a wage premium) of a shade more than $500 a year, and therefore an expenditure of $500 on a measure that would prevent one such accident would be cost justified.
All the analysis tells us is that the value that a person places on avoiding a 1 in 10,000 fatal accident is $500, so if the accident can be prevented for less there is a social gain.
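The implied "value of a statistical life" behind this reasoning is simply the risk premium divided by the risk. A minimal sketch, using only the hypothetical numbers in the example above:

```python
# Illustrative arithmetic only, using the text's hypothetical figures:
# a 1-in-10,000 annual fatality risk compensated by a $500 wage premium.
annual_risk = 1 / 10_000
wage_premium = 500.0

# The most that is worth spending to eliminate one worker's
# 1-in-10,000 risk is the premium that worker demands to bear it.
max_spend_per_worker = wage_premium

# Across 10,000 such workers one statistical death is expected,
# so the implied "value of a statistical life" is:
implied_vsl = wage_premium / annual_risk
print(max_spend_per_worker)  # 500.0
print(implied_vsl)           # 5000000.0
```

The $5 million figure is exactly the linear extrapolation that, as the discussion below notes, cannot be taken as what anyone would accept for a certainty of death.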
But this tells us nothing about the utility that a 25-year-old would derive from knowing that his life expectancy had risen from 80 to 81 or that an 80-year-old would derive from learning that his life expectancy had risen from 83 to 84.
Hence it’s impossible to determine the optimal level of expenditures on medical care that principally adds years at advanced ages, as distinct from reducing mortality among people who are not yet elderly.
A complication remarked in the Becker paper (as earlier in my book Aging and Old Age) is the nonlinearity of "value of life." The fact that a person would accept a 1 in 10,000 annual probability of death in exchange for $500 doesn't imply that he would accept a 100 percent probability in exchange for $5 million on the theory that he values his life at $5 million and is therefore indifferent between the life and the cash.
Dead he would derive no utility from the cash unless he had an unusually generous bequest motive, and rarely would that be strong enough to make him indifferent between life and death.
If he has no bequest motive, he will spend all his money on extending his life even for a very short time (unless the additional life has negative utility to him because he expects it to involve great suffering), and let us say that he has $10 million and so will spend it all for a few additional months of life.
That does not mean that his value of life is greater than that of a much younger person who “values” his life at only $5 million.
And thanks to Medicare even elderly persons who have a very strong bequest motive have no incentive to economize on medical treatment (at least if they also have medigap insurance).
The opportunity cost of medical treatment is zero to them.
So we can’t have a clear idea of the welfare gains from extending the life of elderly people.
But we can say with reasonable confidence that the welfare of the elderly, and of altruistic members of their families, could be enhanced, without a significant increase in the longevity of the elderly, by redirecting medical research toward diseases or conditions that impair quality of life without necessarily shortening it, or at least without shortening it commensurately.
Dementia (which comes in many forms, but Alzheimer’s appears to be by far the most common) is the foremost example.
It does shorten life somewhat, but on the other hand, as it is largely a function of age, its prevalence is increased by increases in longevity; and dementia is not only psychologically very hard both on the demented and on their families, but also very costly in the amount of care that demented persons require.
Blindness, deafness, loss of mobility, and Parkinson’s Disease and related degenerative nerve diseases (life shortening, but often the effect on lifespan is less than the effect on quality of life) are other examples of diseases where investing in medical research might yield substantial increases in elderly utility without significantly increasing longevity.
Stroke is an example of a medical condition that both reduces longevity and has often dramatic negative effects on quality of life.
Yet the National Institutes of Health expect this year to spend only $18 million on dementia research, $154 million on Parkinson's research, and $337 million on stroke research, compared to more than $8 billion on cancer, heart disease, and diabetes research, even though the diabetes "epidemic," while real, is due largely to obesity and bad diet.
(Eye disease, however, seems generously funded at $817 million.) Of course the gravity of a disease is not the only factor determining the optimal level of research expenditures; another of at least equal importance is the likelihood that the research will be productive.
Optimal allocation of NIH money also requires consideration of the allocation of private research money across diseases.
But the enormous expenditures on cancer research have not been very productive, and very promising research programs in dementia (notably research on an Alzheimer’s vaccine) are greatly underfunded both publicly and privately.
In my view this is a regrettable imbalance.
My focus is somewhat narrower than Becker’s; it will be on subsidizing the U.S.
oil industry by means of tax breaks.
The current and I think healthy concern with the growing gap between federal revenue and federal spending has focused attention on all sorts of questionable fiscal arrangements.
One of these is tax subsidies.
Conservatives have managed to make tax increases seem un-American, yet it is obvious that the few politically feasible spending cuts, both present and future, that are under discussion cannot begin to close the revenue-expenditure gap.
Hence the attention to tax subsidies.
The term is misleading.
A tax subsidy is not an expenditure, but a selective tax reduction, as distinct from some general or uniform reduction.
Hence to eliminate a tax subsidy is to raise taxes.
But eliminating a tax “subsidy” sounds like reducing wasteful government spending rather than raising taxes, so it has more popular appeal than an explicit tax increase.
But that doesn’t mean that it’s any more feasible politically.
The American political system is not that democratic, or at least not that populist.
The fact that tax subsidies tend to be targeted on particular activities means that a proposal to eliminate a tax subsidy catalyzes interest-group opposition, often formidable since if the interest group were weak, the tax subsidy would not have been legislated in the first place.
Tax subsidies are eliminated from time to time, and it would be interesting to speculate on the conditions that make that possible, but I will not attempt that here.
Not all subsidies are bad; not all tax subsidies are bad, for there is no economic reason for thinking that all activities should be taxed at the same rate.
A subsidy is defensible on economic grounds if it encourages the production of benefits that would be underproduced from an overall social-welfare standpoint were it not for subsidy.
That is the argument for allowing expenditures for research and development to be written off (deducted from taxable income) on an accelerated schedule; R&D is underproduced from an overall social-welfare standpoint because even with a patent system one firm’s R&D is quite likely to confer benefits on other firms for which the firm conducting the R&D will not be compensated; note in this connection that one requirement for a patent is that the applicant disclose the invention, and that disclosure may convey valuable information to competitors even though they cannot practice the patented invention without the patentee’s authorization.
Is there a similar case for giving oil producers subsidies? The principal tax subsidies for the oil industry are as follows: a "domestic manufacturing deduction" that allows oil and gas companies to deduct an extra 6 percent of their taxable income; a deduction for "intangible costs," which are costs of investments in oil exploration or production that have no salvage value, such as clearing land to enable an oil well to be drilled (the oil companies are not required to amortize these costs over the entire expected life of the well); and, last, permission to deduct royalties paid to foreign governments, on the ground that royalties paid to a government are really a tax.
The aggregate value of these subsidies to the U.S.
oil industry is approximately $5 billion a year, almost as much as the industry pays in federal income tax ($5.7 billion).
The industry's total profits exceed $30 billion, so it would not be facing a crushing burden if the subsidies were to be eliminated; the Obama Administration proposes to eliminate only $2 billion of the subsidies.
The first two types of subsidy (the domestic manufacturing and intangible costs deductions) are likely to increase domestic oil production, and the industry argues that expanded domestic production creates external benefits (that is, benefits not reaped by the oil companies) by reducing our dependence on foreign oil, much of it produced by hostile or unstable countries.
(The third subsidy, treating royalties paid to foreign governments as deductible taxes, can't be defended on this ground--it encourages American oil companies to increase their production abroad.) The industry's external-benefits argument is true as far as it goes, but the effect is probably small, especially relative to imposing a stiff tariff on oil imports (as suggested by Becker).
The tariff would actually generate revenue for the federal government without being called a tax (though that is what a tariff is), reduce the income of foreign oil-producing countries, and increase domestic production by making foreign oil more costly.
In addition, as Becker also mentions, more U.S.
public lands, and more territorial waters of the Gulf, Atlantic, and Pacific coasts, could be opened to drilling for oil.
The advocates of eliminating the tax subsidies for the oil companies argue that the oil industry's profits are excessive in relation to the high prices of gasoline at the pump, but eliminating the subsidies would result in higher, rather than lower, gasoline prices because it would reduce overall production of oil.
But that wouldn’t be a bad thing! Our problems with oil are not limited to oil imports, but include the environmental damage (particularly the effect on climate) caused by the burning of oil, oil spills, and traffic congestion.
High prices for gasoline, which reduce demand and therefore consumption, are the equivalent of a pollution tax, and should be encouraged.
They would also reduce imports.
So both the industry's defense of the tax subsidies and the critics' usual argument for eliminating them are unsound; nevertheless the case for eliminating them is strong.
Oil is not our future, and the expansion of the industry should not be encouraged.
The oil companies even acknowledge, or at least pretend to acknowledge, a willingness to give up their subsidies if subsidies to other industries are likewise abolished.
This is not a good argument either, because those subsidies, though most of them are no more justifiable than the tax subsidies for the oil industry, do not impose costs on that industry.
The argument amounts to saying that since the world is imperfect, I should be free to cheat and steal.
It stands to reason that if retirement benefits are chintzy, people who reach retirement age, provided they are allowed to accept their retirement benefits while still working, will continue working.
That appears to be the main reason why, as Becker explains, although Japanese workers obtain their full retirement benefits at 60 the average retirement age is 69.
However, there is nothing paradoxical about the disjunction between the nominal retirement age, that is, the age at which one begins to receive one's full retirement benefits, and the actual retirement age.
Indeed, the earlier the official retirement age, the later the actual retirement age is likely to be because retirement benefits are always lower the earlier they are taken.
If one were entitled to full retirement benefits at the age of 30, the benefits obviously would be too low to support one's standard of living--indeed, so low that one might not be able to afford to retire ever!
The disjunction works in the opposite direction in the United States.
The "official" retirement age is higher than in Japan, at 65, but the actual retirement age is lower.
The explanation is the same: the higher the retirement age, the larger the retirement benefits, and so the smaller the incentive to work past that age.
A curious feature of the Japanese system is the tendency to demote workers when they reach the official retirement age of 60.
But this does not appear to be a consequence of the law.
In the U.S., many workers are entitled to take early retirement at reduced benefits, and if they do so their employer can rehire them at a lower wage.
(Thus U.S.
companies can adjust the age profile of their workforce by early-retirement programs, which create monetary inducements to early retirement.) But this is not, as far as I know, particularly common.
It seems that Japanese employers have devised a system of staged retirement--partial at 60, with an appropriately small pension because the employee is continuing to work, albeit at a reduced salary to reflect his reduced productivity, and full at (about) 69.
A staged system, by matching salary to productivity, seems more efficient; but, if so, one wonders why more American employers do not adopt it, as they could do by rehiring at a reduced wage employees who had taken early retirement at age 60.
One possibility is that Americans value leisure more than Japanese do and that this rather than differences in pension law or practices explains the earlier average retirement age in the United States.
The fact that private as well as public pension plans tend to be less generous in Japan is consistent with this conjecture.
The less people value leisure, the later they will want to retire and so the less money they will want to put aside for retirement.
For a detailed description and economic analysis of the Japanese retirement system, see Bernard H. Casey, "Reforming the Japanese Retirement Income System: A Special Case?" (Sept. 2004).
The Fifth Amendment permits the use of eminent domain, in which government takes private property without negotiation but must pay the owner the market value of the property, only if the taking is for a "public use." (The Fifth Amendment is applicable only to action by the federal government, but the Fourteenth Amendment, which applies to state and local government action, has been interpreted to incorporate the "public use" limitation on eminent domain.) In Kelo v. City of New London, which the Supreme Court decided on June 23, the city took private residential property as part of a redevelopment plan under which the property would be turned over to private developers for office space and parking.
Whether the case was "correctly" decided depends on one's theory of constitutional adjudication, which might in turn point one to the origins of the "public use" provision and to the Supreme Court's precedents.
I want to abstract from the legal questions and ask three practical questions: When if ever is eminent domain proper? Is it ever proper when the private property taken is going to be transferred to another private entity rather than being kept by the government for some governmental use, such as a post office or an army base? And is the power granted to local municipalities by the Kelo decision likely to be abused?
Generally, government should be required to buy the property it wants in the open market, like anyone else.
If it is allowed to confiscate property without paying the full price, it will be led to substitute property for other inputs that may cost less to society to produce but that are more costly to the government (a private rather than social cost) than land because the government has to pay the full price for them.
This assumes that government in its procurement decisions tries to minimize dollar costs rather than full social costs, but the assumption is realistic.
When the government does take property by eminent domain, it has to pay the owner the market value of the property, but that value will be less than the value the owner places on the property--otherwise he would sell it to the government at market value and there would be no need for the government to incur the cost of eminent domain proceedings.
Generally, property is worth more to the owner than the market price (which is why it's owned by him rather than by someone else), because it fits his tastes or needs best as a consequence of its location or improvements (which is why he bought it rather than some other piece of property) or because relocation costs would be high.
Real estate is a heterogeneous good and so a particular parcel in the hands of a particular owner will generally yield him an idiosyncratic value that is on top of the market value.
Eminent domain operates to tax away that value; if market value is $X and total value (including idiosyncratic) is $1.2X, then if the government takes it by eminent domain it pays for it in effect by spending $X out of the government's own coffers and $.2X out of the owner's pocket.
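The taxing-away of idiosyncratic value can be put in a couple of lines. A sketch using the hypothetical $X and $1.2X figures from the example, with $X normalized to 1:

```python
# Hypothetical numbers from the example: market value $X, total value $1.2X,
# with X normalized to 1 for illustration.
market_value = 1.0   # $X: what the government pays out of its own coffers
total_value = 1.2    # market value plus the owner's idiosyncratic value

compensation = market_value              # eminent domain pays market value only
owner_loss = total_value - compensation  # the "taxed away" idiosyncratic value
print(round(owner_loss, 1))  # 0.2 -> the $.2X borne by the owner
```

The point of the sketch is that the government perceives a price of $X while the social cost of the taking is $1.2X, which is the source of the misallocation described above.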
This is an arbitrary form of taxation and one that, as I said, creates the illusion that an input is cheap because its money price is less than its social cost, and as a result causes a misallocation of resources.
The only justification for eminent domain is that sometimes a landowner may be in a position to exercise holdout power, enabling him to obtain a monopoly rent in the absence of an eminent domain right.
The clearest example is that of a right of way company, such as a railroad or a pipeline, which to provide service between two points needs an easement from every single one of the intervening landowners.
Knowing this, each landowner has an incentive to hang back, refusing to sell to the right of way company except for an exorbitant price.
Each hopes to be the last holdout after the company has purchased an easement from every other landowner--easements that will be worthless if it doesn't obtain an easement from that last holdout.
Most right of way companies are private, which answers my second question: the rationale for eminent domain is unrelated to whether the party exercising the eminent domain power is the government or a private firm.
Right of way companies are not the only private enterprises that can make an argument for the use of the eminent domain power.
The argument is available in other cases in which a large number of separately owned contiguous parcels have to be acquired for a project that will create greater value than the parcels generate in their present use.
It is impossible to tell from the opinions in the Kelo case whether that was such a case.
Pfizer had decided to build a large research facility adjacent to a 90-acre stretch of downtown and waterfront property in New London and the City hoped that Pfizer's presence would attract other businesses to the neighborhood.
The plaintiffs' residential properties were on portions of the 90-acre tract earmarked for office space and parking, and it might have been impossible to develop these areas for those uses if the areas were spotted with houses (the plaintiffs owned 15 houses in all in the two areas).
The Court, however, did not discuss whether there was a holdout problem; it thought it enough to justify the taking that the City had a bona fide and reasonable belief that the planned redevelopment would generate net benefits for the City and its residents as a whole, although the plaintiffs of course would lose any idiosyncratic values that they obtained from their property.
However, in the absence of a holdout problem, there is no need for eminent domain--private developers will rush in without need for City assistance if indeed the property would be worth more in a different use from the present ones.
The Court was mindful of the possibility of abuse of the eminent domain power; it made clear that there would not be a public use if all a municipality did was take property from one person and give it to another, with no showing of an increase in overall value.
But the Court did not consider whether development plans such as New London's actually on average increase value for the municipality that undertakes them, or rather are usually the product of rent-seeking political deals.
Thus the actual impact of the Court's decision on economic welfare cannot readily be determined.
It is possible that what really motivated the Court was a simple unwillingness to become involved (or to involve the lower courts) in the details of urban redevelopment plans; a flat rule against takings in which the land ends up in the hands of private companies would, as I have explained, be unsound.
Another practical defense of the decision is that the more limitations are placed on the private development of condemned land, the more active the government itself will become in development, and that would be inefficient.
If the City of New London had built office space, parking, etc.
on land condemned from private owners, a challenge based on the "public use" limitation would be unlikely to succeed--unless the Court confined public use to holdout situations and was prepared to try to determine, case by case, whether a genuine holdout situation existed.
Wal-Mart, the nation's largest private employer, has become embroiled recently in a number of controversies.
One concerns health insurance.
Wal-Mart provides health insurance to fewer than half its employees (though, as some critics neglect to note, many of the others are covered by spouses' health insurance or by Medicare), and it charges those employees whom it does cover a significant fraction of the total insurance premiums.
Critics say, first, that Wal-Mart is being "miserly" toward its employees, who tend to be near the bottom of the economic ladder, and, second, that it is exporting medical costs that it should be defraying to publicly financed health systems, such as Medicaid, to which the uninsured who cannot afford to pay their medical expenses out of their own pocket turn.
Some of the critics want employers to be required by law to provide health insurance for all their employees.
Economic analysis suggests that these criticisms, especially the first, lack merit, and that employer-mandated health insurance is not a good idea.
This is not, however, because employee health insurance is likely to be more costly than individually purchased insurance, in which event it would be obvious why many employees would want to forgo it.
Actually, it's likely to be less costly.
Insurance is cheaper when all members of a group satisfying specified eligibility requirements are required to join the insurance plan, because without the compulsory feature those members having the lowest incidence of whatever risk is being insured against, such as the risk of incurring medical costs, would tend to drop out of the plan, since they would be subsidizing the higher-risk individuals in the plan; and the result of this dropping out would be an upward spiral in the cost of the insurance.
That is why individual policies are more costly than group policies.
Another, and quite arbitrary, attraction of employee group health insurance is that like many other fringe benefits, it is not taxable.
If an individual earns $50,000 and spends $5,000 to buy health insurance, he pays income tax on the full $50,000, and suppose the amount of the tax is $10,000 (20 percent).
Then after paying for the health insurance policy he has $35,000 left.
But if his employer pays him a salary of $45,000 (on which the income tax is, let us say, $9,000--which assumes the same 20 percent income-tax rate, though it might well be lower) and gives him a health insurance policy that costs the employer $5,000, the employee has $36,000 ($45,000 salary minus $9,000 tax) and so he is better off.
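The tax arithmetic in this example can be checked mechanically. A sketch assuming, as in the text, a flat 20 percent tax on cash wages, an untaxed employer-paid policy, and a $5,000 policy cost:

```python
# The text's hypothetical: 20 percent flat tax on cash wages,
# employer-paid health insurance untaxed, policy costing $5,000.
TAX_RATE = 0.20
POLICY_COST = 5_000

def cash_after_tax_and_premium(salary, buys_own_insurance):
    """Cash remaining after income tax and any self-purchased premium."""
    net = salary * (1 - TAX_RATE)
    return net - (POLICY_COST if buys_own_insurance else 0)

# Route 1: $50,000 salary, buy the policy yourself.
cash_route = cash_after_tax_and_premium(50_000, buys_own_insurance=True)
# Route 2: $45,000 salary plus the employer-paid policy.
fringe_route = cash_after_tax_and_premium(45_000, buys_own_insurance=False)
print(cash_route)    # 35000.0
print(fringe_route)  # 36000.0 -> $1,000 better off, as in the text
```

The $1,000 advantage is just the tax saved on the $5,000 of compensation that takes the form of an untaxed fringe benefit, which is why the advantage shrinks toward zero for employees who pay little income tax.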
But probably few Wal-Mart employees pay much income tax--which may be a partial explanation for why Wal-Mart does not offer health insurance to more of its employees.
It is entirely rational for a subset of employees, especially low-income employees, to prefer not to be covered by their employer's group health insurance policy even if they have no other health insurance.
The basic reason is that from the employer's standpoint, the cost of a fringe benefit is no different from the cost of a wage.
If the employer is prepared to pay an employee a salary of $45,000 and give him an insurance policy that costs the employer $5,000, then if the employee doesn't want the insurance the employer will be willing to pay him a salary of $50,000.
Suppose the employee has no significant assets--a realistic assumption if he is a low-income employee.
Then if he becomes ill he'll be able to obtain medical care free of charge under Medicaid, though it will be of lower quality than paid-for care.
Suppose the value of that lower-quality care is only $3,000.
Nevertheless the employee is better off without the insurance; his net income will be $53,000 ($50,000 in salary plus $3,000 in insurance value) versus $50,000 ($45,000 in salary plus an insurance policy worth $5,000) with the insurance.
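The comparison reduces to adding up the two compensation packages. A sketch with the hypothetical figures above:

```python
# Hypothetical figures from the text: the employer is indifferent between
# $45,000 plus a $5,000 policy and $50,000 in straight salary.
salary_with_insurance = 45_000
policy_value = 5_000        # the employer-paid policy, valued at its cost
salary_without = 50_000
medicaid_value = 3_000      # assumed value of free, lower-quality Medicaid care

package_with = salary_with_insurance + policy_value   # 50,000
package_without = salary_without + medicaid_value     # 53,000
print(package_without > package_with)  # True: better off uninsured
```

The free Medicaid backstop is doing the work here: it lets a low-income, low-asset employee pocket the full cash wage while still receiving (lower-quality) care if he falls ill.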
Even if the employee is paid only the minimum wage (which for simplicity I'll assume is $5 an hour), so that the employer, were he to provide health insurance, would be forbidden to make a compensating wage cut, the employee would be better off without the insurance.
Suppose the minimum wage, multiplied by 2000 (a 40-hour work week for 50 weeks), yields an annual wage of $10,000.
If that is all that the employee's work is worth to the employer, the employer will not offer the insurance.
If the employer does offer the insurance, say at a cost to him of $2,000, then he would be willing to pay the employee more than the minimum wage--an additional $1 an hour ($2,000 divided by 2000)--if the employee forwent the insurance and relied instead on Medicaid.
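The minimum-wage variant of the arithmetic, using the text's simplified figures of $5 an hour and 2,000 hours a year:

```python
# The text's simplified figures: $5/hour minimum wage, 2,000 hours a year.
hourly_min_wage = 5
annual_hours = 2_000
annual_wage = hourly_min_wage * annual_hours  # $10,000

insurance_cost = 2_000  # hypothetical cost of the policy to the employer
# If the employee forgoes the insurance, the employer can pay the
# freed-up $2,000 as wages instead: an extra $1 per hour.
extra_per_hour = insurance_cost / annual_hours
print(annual_wage)     # 10000
print(extra_per_hour)  # 1.0
```

The minimum wage prevents a compensating wage *cut* when insurance is added, but nothing prevents a wage *increase* when insurance is dropped, which is why the employee can still be made better off without it.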
The second criticism of Wal-Mart's refusal to provide health insurance to all its employees who do not have other coverage has somewhat greater merit.
There is an externality: employees who lack health insurance usually lack significant assets as well, so when they get sick the taxpayer pays their medical expenses.
These employees thus externalize the costs of their medical treatment.
This is true even though there is a sense in which a program like Medicaid does not eliminate the insurance principle but merely substitutes social for private insurance, with the taxes that pay for Medicaid corresponding to conventional insurance premiums.
But only the poor are eligible for Medicaid, and they do not pay their actuarially fair tax to support the program.
Otherwise there wouldn't have to be a Medicaid program.
But the externality cannot be fully eliminated by passing a law that would require Wal-Mart and other employers of low-income employees to insure all their employees.
This is clearest in the case of minimum-wage employees who at present are not insured.
Since the labor cost that an employer incurs is the sum of the wage he pays and the cost of any fringe benefits, forcing the employer to incur a total labor cost of $12,000 for an employee worth to the employer only $10,000 will simply cause that employee to be fired, with little prospect of obtaining another job; so he will lose his health insurance and be thrown back on Medicaid.
Suppose instead that the employer is willing to incur a total labor cost of $12,000 for this employee, but the latter prefers a cash wage in that amount and no insurance, and now suppose as before that the employer is forced to insure him.
The employer will reduce the employee's wage to $10,000, which may inflict significant hardship because the employee needs the cash more than he wants insurance (if he has no assets, he may well not need or want   any   health insurance).
Notice the perverse redistributive effect: the average taxpayer, who is indeed made better off because the employee is now paying for his own health care, is wealthier than the average low-income employee.
The analysis is slightly complicated by the fact that if low-income employees have equally good alternatives to working for their current employer, they may not have to accept a reduction in wage equal to the increased cost to their employer of being forced to provide health insurance.
Suppose in the example just given that although the health-insurance policy costs the employer $2,000, it is worth only $200 to the employee, so that he will perceive a reduction in his wage to $10,000 as a reduction in his full income from $12,000 to $10,200.
And suppose that when he took the job with Wal-Mart he turned down an equivalent job with another employer that would have paid him $11,500 and that this job is still open.
Then it might seem that Wal-Mart, to retain the employee, would have to pay him $11,300, since that amount, plus the $200 that is the value he derives from the health insurance policy, equals $11,500.
But this ignores the fact that the other employer, too, presumably is being subjected to the requirement of providing health insurance.
It too will see its labor costs soar and therefore it will not pay as high a wage as before the requirement was imposed.
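The competing-offer arithmetic can be laid out as a short sketch. The rival's post-mandate wage of $9,500 is my own assumed figure (a total willingness to pay of $11,500 minus the $2,000 policy cost); it merely illustrates the concluding point that the rival's cash offer falls too:

```python
# Hypothetical figures from the text's competing-offer example.
total_labor_cost = 12_000       # what Wal-Mart will spend on this worker
mandated_policy_cost = 2_000    # cost of the required insurance to an employer
policy_value_to_worker = 200    # what the policy is worth to the worker
rival_offer = 11_500            # rival's cash wage before the mandate

# Under the mandate, the wage falls by the policy's cost to the employer:
wage_under_mandate = total_labor_cost - mandated_policy_cost  # 10,000
full_income = wage_under_mandate + policy_value_to_worker     # 10,200

# Wage that would seem needed to match the rival's old cash offer:
matching_wage = rival_offer - policy_value_to_worker          # 11,300
# But the rival faces the same mandate (assumed figure):
rival_wage_under_mandate = rival_offer - mandated_policy_cost  # 9,500
print(full_income, matching_wage, rival_wage_under_mandate)
```

Because the mandate hits both employers, neither has to bid the wage back up, and the worker bears the $1,800 gap between the policy's cost and its value to him.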
I mentioned that Wal-Mart is also criticized for making its employees pick up a big portion of the health insurance tab.
But this may actually benefit the employees.
Suppose that if Wal-Mart paid the entire tab, the average cost to the company of health insurance would be $5,000 per employee per year.
If it charges the employee $1,000 a year in premiums, the cost to Wal-Mart will be only $4,000, so it will be willing to raise the employee's salary by $1,000.
This may seem a complete wash, but it is not.
For with the employee paying a big chunk of the premiums, the total cost is likely to be lower than $5,000, which would permit a net wage increase.
The reason it is likely to be lower is that employees will economize on their demand for medical care if they incur a positive marginal cost for that care.
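The premium-sharing arithmetic, with the moral-hazard saving represented by an assumed 10 percent reduction in total cost (the text gives no figure; the percentage is purely illustrative):

```python
# Hypothetical figures from the text's premium-sharing example.
full_cost_if_employer_pays_all = 5_000
employee_share = 1_000

employer_cost = full_cost_if_employer_pays_all - employee_share  # 4,000
offsetting_raise = employee_share  # a wash so far: $1,000 premium, $1,000 raise

# The gain comes from moral hazard: facing a positive marginal cost,
# employees economize on care. Assume (my figure) total cost falls 10 percent:
reduced_total_cost = full_cost_if_employer_pays_all * 0.90       # 4,500
net_saving = full_cost_if_employer_pays_all - reduced_total_cost # 500
print(employer_cost, net_saving)  # the $500 is available for a net wage increase
```

Any positive cost reduction from the employees' economizing turns the apparent wash into a net wage gain, which is the sense in which cost-sharing "may actually benefit the employees."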
A government subsidy of the production of a good is defensible if the good generates external benefits, i.e., benefits not captured by the producer, in which event the good will be underproduced if left entirely to the market.
This is not usually true of agricultural production.
But Switzerland may be an exception because of its heavy dependence on tourism and the undoubted contribution that Swiss farms make to the beauty of the Swiss countryside.
However, before determining how much of a subsidy to provide or indeed whether to provide any subsidy, one would have to determine how much and what kind of agriculture would be produced with no subsidy.
Perhaps the reduction in agricultural acreage would be too slight to make significant inroads into tourist revenues.
Assuming the effect could be measured, the proper method of financing the subsidy would be by a tax on the tourism industry.
As for the form of the subsidy, this should depend on the effect on touristic values.
The aim presumably would be to increase the amount of agricultural acreage, and perhaps dairy production because the cows with their bells are, to the tourist, particularly attractive adornments of Swiss farms.
Quite apart from tourism, Swiss people themselves may derive pleasure from their agricultural countryside.
That would constitute an additional external benefit that might justify subsidy, but it would be very hard to measure.
Contingent valuation surveys ask people what they would pay for various environmental amenities if such amenities were priced.
But the responses are not reliable.
People are being asked to put a price on goods that are not sold in markets, and they have no relevant experience with pricing such goods.
The surveys tend also to focus on a single amenity (as in my Swiss example), which produces exaggerated responses because the respondents are not being asked to allocate a limited budget among a range of possible subsidies.
Switzerland may be a special case; it is inconceivable that agricultural subsidies in general are justifiable in terms of positive externalities.
A country like France, for example, which receives a quarter of the European Union's generous allocation for such subsidies, has highly productive agriculture and its huge tourist industry is far less dependent on bucolic vistas than Swiss tourism is.
Agricultural subsidies generally reflect, as Becker points out, the operation of interest-group politics.
A related feature in the European context is job protection--it may be especially difficult for many farmers to find alternative employment outside the agriculture sector.
Our ethanol subsidy is a particularly disgraceful example of the genre, especially given the availability of much cheaper sugar-based Brazilian ethanol blocked by a high tariff from competing with the ethanol produced from our corn.
It is possible though unproven that ethanol as a fuel involves a net reduction in carbon dioxide emissions compared to gasoline and so may help to limit global warming.
I qualify with "unproven" because while ethanol is not a fossil fuel and so burning it does not emit carbon dioxide, its production requires fossil fuel.
Even if ethanol as a fuel has definite advantages from the standpoint of controlling global warming, this is a poor argument for a subsidy of it, as the subsidy can distort the efficient choice of inputs into the manufacture of fuel.
Better would be a tax on carbon dioxide emissions; this would give producers and consumers of fuels and of products utilizing fuels, such as cars and electricity, an incentive to search out the cheapest substitutes for fossil fuels, which might or might not include ethanol.
Although the percentage of farm revenues generated by subsidy in the United States is less than half what it is in the EU countries (16 percent versus 34 percent, according to the OECD study discussed by Becker), the efficient rate is probably zero.
The fact that it is positive may reflect not just the operation of interest-group politics but the skewed representation of states in the U.S. Senate. Because each state has two Senators regardless of population, thinly populated agricultural states have disproportionate influence, which they can use by means of logrolling to attract support for generous subsidies having no public-interest justification whatsoever.
I want to note one particularly acute comment--that awarding grants of federal money to localities on the basis of the "quality" of their grant proposals just rewards skillful grant writers.
I think that is probably true.
This is not like grant applications for money for scientific research, which are evaluated by distinguished scientists.
Counterterrorism is not a science, and the "peers" who reviewed the grant applications for the Department of Homeland Security appear to have been a miscellaneous assortment of persons engaged in emergency response and other security-related activities.
The room for subjective, political judgments must have been large.
I also agree with the commenter who criticized me for suggesting that DHS had experienced "political pain" by cutting the allocations for New York City and Washington, D.C.
DHS received criticism, but since both NYC and Washington are solidly Democratic, the political pain has been more than offset by the gratitude of cities in states that lean Republican or are toss-ups.
Now for all I know politics played no role in the allocations, but the lack of transparency in the "peer review" process makes it difficult to dispel suspicion of political motives.
Commenters debated whether cities or the federal government have better information about optimal counterterrorist measures for a given city.
Thinking further about that issue, I now incline to the view that a city has a comparative advantage only in measures for gathering information about residents who might be terrorist supporters and for patrolling local sites and facilities (like the New York subway system). These information-gathering and patrolman-on-the-beat activities, which incidentally are labor-intensive, are ones for which grants to cities make sense.
But when it comes to capital expenditures, such as for radiation and pathogen detectors, radiation shields, communications equipment, and decontamination facilities, the federal government probably has the comparative advantage.
Apart from being able to extract price concessions by buying in bulk, the federal government can assure compatibility across cities where needed (for example, in communications equipment), exploit economies of scale, base expenditures on a more sophisticated appreciation of threats and technology, and resist grantsmanship and local political pressures.
Thus, on reflection, I am inclined to change my mind and conclude that DHS has it backwards in emphasizing grants to cities for capital rather than for personnel expenditures.
There were a number of interesting comments; I limit myself to responding to three.
One is that the lessee overpaid; this is possible if the lessee is ambitious to acquire other U.S. highway systems and wants, and is willing to pay for, "first mover" advantages; other states will be more inclined to deal with a private highway operator who has a track record.
The second comment is that the private lessee of the Indiana Toll Road will have an incentive to skimp on maintenance in order to reduce costs and thus maximize profits.
Notice the tension with the first point: if the lessee is ambitious to acquire additional highway systems, it will want to create a good reputation for honoring its maintenance obligations under the lease of the Indiana Toll Road.
In Europe, which has a number of private operators of highway systems, maintenance has not been a problem, maybe because poor maintenance is quickly detected.
Third is the question of checks on abuses of monopoly power.
Some commenters fear the monopoly power of the lessee of the Indiana Toll Road, others argue that the state could offset it by building a parallel road.
Speaking from personal experience as a frequent user of the Indiana Toll Road, I can report that it has ample capacity even at existing toll rates. That makes the construction of a competing toll road unlikely, since it would create excess capacity, and competition under conditions of excess capacity is likely to result in prices that do not cover total costs.
One comment indicates that one term of the lease is a promise by the State of Indiana not to improve certain roads that run parallel to the Indiana Toll Road; that promise would tend to secure the lessee's monopoly and thus increase the price of the lease.
As I said in my post, while monopoly pricing has misallocative effects, so do taxes, for which the revenue from the lease may be a substitute.
The State of Indiana has just leased the Indiana Toll Road--a 157-mile-long highway in northern Indiana that connects Illinois to Ohio--to a Spanish-Australian consortium for 75 years for $3.8 billion, to be paid in a lump sum.
(The deal has been challenged in the Indiana state courts.) The lease is complex, imposing many duties on the lessee (such as to install electronic toll collection, in which Indiana has lagged).
A key provision is that the consortium will not be able to raise toll rates until 2016 (for passenger cars--2010 for trucks) and then only by the greatest of 2 percent a year, the consumer inflation rate (CPI), or the annual increase in GDP.
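The cap can be sketched as a simple rule: once the freeze ends, each year's permitted increase is the greatest of 2 percent, CPI inflation, or nominal GDP growth. The toll level and the inflation and GDP figures below are invented for illustration:

```python
# Sketch of the lease's toll-escalation cap as described above.
# All numbers are hypothetical.

def capped_increase(cpi: float, gdp_growth: float, floor: float = 0.02) -> float:
    """Maximum permitted annual toll increase: greatest of 2%, CPI, or GDP growth."""
    return max(floor, cpi, gdp_growth)

toll = 8.00  # hypothetical toll at the end of the freeze
# Two invented years: (CPI, GDP growth)
for cpi, gdp_growth in [(0.015, 0.045), (0.030, 0.025)]:
    toll *= 1 + capped_increase(cpi, gdp_growth)

print(round(toll, 2))  # 8.61
```

Because the binding term is a *maximum* of the three rates, the permitted increase can outpace the operator's actual cost growth, which is the source of the possible windfall discussed below.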
(On the eve of the lease, Indiana raised toll rates--which hadn't changed since 1985--significantly.) Two years ago Chicago made a similar lease of the Chicago Skyway, an 8-mile stretch that connects Chicago to the Indiana Toll Road, for $1.8 billion.
There is considerable interest in other states as well in leasing toll roads to private entities.
The idea of privatizing toll roads is an attractive one from an economic standpoint.
Private companies are more efficient than public ones, at least in the limited sense of economizing on costs.
I call this sense of efficiency "limited" because there are other dimensions of efficiency, for example the allocative; a monopolist might be very effective in limiting his costs, but by charging a monopoly price he would distort the allocation of resources.
Some of his customers would be induced by the high price to switch to substitutes that cost more to make than the monopolist's product but that, being priced at the competitive rather than the monopoly price, seemed cheaper to consumers.
(This is the standard economic objection to monopoly.) The reason for the superior ability of private companies to control costs is that they have both a strong financial incentive and competitive pressure to do so--factors that operate weakly or not at all in the case of public agencies--and that their pricing and purchasing decisions, including decisions regarding wages and labor relations, are not distorted by political pressures and corruption.
There is a long history of price-fixing in highway construction and maintenance, attributable in part to bidding rules that, in endeavoring to prevent corruption, facilitate bid rigging.
For example, if, to prevent corruption, contracts are always awarded to the low bidder, a bid-rigging conspiracy will always know when one of its members is cheating: if the contract goes to a bidder other than the one the conspiracy assigned to make the low bid, a member must have cheated.
If cheating on a conspiracy is readily detectable, cheating is less likely and therefore the conspiracy more effective.
The problem of allocative efficiency looms when, for example, there are externalities; but the solution to the problem rarely requires public ownership.
One significant externality associated with vehicular transportation is the congestion externality: no driver is likely to consider the effect of his driving on the convenience of other drivers, because there is no way in which he can exact compensation from other drivers for not driving, or driving less, and thereby improving their driving time.
That externality is internalized by a toll road, because congestion reduces the quality of the driving experience and so the amount each driver is willing to pay in tolls; the owner of the toll road will trade that willingness to pay off against the reduction in the number of drivers as a result of a higher toll.
Another externality, however, will not be internalized by the toll-road operators.
That is the contribution that driving makes to pollution and global warming.
But public ownership is not necessary in order to internalize this externality.
The government can force its internalizing by imposing a tax on driving.
There is, however, in the toll-road setting another source of allocative inefficiency, and that is monopoly, which I have mentioned already.
Drivers who do not have good alternatives to using the Indiana Toll Road can be made to pay tolls that exceed wear and tear, congestion effects, social costs of pollution, and other costs of the road, engendering inefficient substitutions by drivers unwilling to pay those tolls.
To an extent, the toll-road operator may be able to discourage substitution by price discrimination, but this is unlikely to be fully effective and indeed can actually increase the allocative inefficiency of the monopoly.
The monopoly issue raises the question: what exactly was Indiana selling when it leased the toll road for $3.8 billion? The higher the tolls and the greater the lessee's freedom to raise the tolls in the future, the higher the price that the state can command for the lease.
If the lease placed no limitations on tolls, the state would be selling an unregulated monopoly.
If the lease could constrain the lessee to charge tolls just equal to the cost of operating the toll road (including maintenance, repairs, snow removal, lighting, and the collection of the tolls), the market price of the lease would be significantly lower.
To the extent that the state wants to maximize its take from the lease, it will be creating allocative inefficiency by conferring monopoly power on the lessee.
It is difficult to determine whether the $3.8 billion price tag for the Indiana Toll Road is closer to the competitive or the monopoly price level.
On the one hand, the lessee cannot raise tolls until 2010 or 2016 (depending on the type of vehicle), and increases after that are capped.
On the other hand, the tolls were raised significantly just before the lease, and allowing the operator in 2010 to begin raising toll rates annually by the increase in GDP may confer windfall gains, since the cost of operating the toll road may not increase at so great a rate.
One would have to know a great deal more about the economics of operating a highway than I do to figure out whether the terms of the lease confer monopoly power on the lessee.
I do not regard the monopoly concern as a strong objection to the leasing of the toll road, however.
The reason is that most, maybe all, taxes have monopoly-like effects, in the sense of driving a wedge between cost and price.
Suppose the lease price would have been only $2 billion had the state imposed more stringent limitations on toll increases.
Then the state would have $1.8 billion less in revenue and would presumably make up the difference by increasing tax rates or imposing additional taxes, and these measures would have allocative effects similar to those of higher tolls charged by the lessee of the toll road.
If the monopoly issue is therefore considered a wash, the principal effect of the lease will be the positive one of reducing the quality-adjusted cost of operating the toll road, and the lease is clearly a good idea.
Toll roads are more attractive candidates for privatization than non-toll roads because it is easy to charge user fees; tolls are user fees.
It would be harder to charge for the use of city streets, though no longer impossible, given electronic technology for monitoring drivers.
Privatizing certain security services poses special problems as well, as Becker and I discussed in our May 28 posts about security contractors in Iraq.
But public services the cost of which is defrayed in whole or significant part by user fees are good candidates for privatization, including Amtrak, the Postal Service, building and restaurant inspections, veterans' hospitals, and federal, state, and local airports.
The privatization movement has a long way to go before achieving an optimal mixture of public and private service providers.
Against all this it will be argued--it is an argument emphasized by opponents of leasing the Indiana Toll Road--that privatization, at least when it takes the form of a sale or long-term lease of government property for a lump sum, beggars the future by depriving government of an income-producing asset.
The argument, at least in its simplest form, is unsound, because the state is not disposing of an asset but merely changing its form: from a highway to cash.
The subtler form of the argument is that, given the truncated horizons of elected officials, the state will not invest the cash wisely for the long term, but will squander it on short-term projects.
This is a danger--how great a one I do not know.
It would be an interesting study to trace the uses to which privatizing governments here and abroad have put the proceeds of sales of public assets.
The Department of Homeland Security will be distributing some $700 million this coming year to American cities for antiterrorism measures.
The amounts allocated to New York and Washington, which are generally regarded as the prime U.S. targets for a terrorist attack, are about 40 percent lower than the current year's allocations, and this has engendered indignation on the part of officials of those cities.
Other large cities have seen their allocations cut sharply as well.
In part the change in allocations is due to the fact that Congress cut the overall amount of money for this program, but in larger part it is because of a deliberate decision to shift money to smaller cities.
Michael Chertoff, the Secretary of Homeland Security, defends the shift on two grounds: that the money should be used to build physical capacity to respond to terrorism rather than to fund recurring expenses such as salaries of emergency-response personnel, and that New York, Washington, and a few other major cities have received the lion's share of the grants since the beginning of the program because they are the prime targets but their urgent needs have been attended to and it is now time to attend to the needs of the lesser targets.
The interesting policy questions are, first, should the federal government be making such grants to cities, and, second, what should be the basis for deciding how large a grant to make to each city? Taking the first question first, there is no doubt that the federal government and not just states and municipalities should spend money to protect the nation from terrorist attacks, since, as we know from the 9/11 attacks, an attack on a city (or on any other major target) has consequences far beyond the state in which the city is located.
But should the government finance defensive measures by the cities or should it spend the money itself? The argument usually heard for the grant approach is that the locals know better their vulnerabilities and how best to reduce them.
But the argument is weak because while the locals do know a great deal about the competence of their response personnel, they know little about terrorist threats--terrorist plans, methods, preferred targets, and so forth.
Moreover, when a pot of federal money has to be divided up among state or local governments, pork-barrel politics are bound to distort the allocation.
Concern with this problem led DHS to employ anonymous committees of local security and emergency-response officials to vet the grants, but partly because of their anonymity and partly because such officials are only quasi-professional, this version of peer review was not highly credible.
Furthermore, the locals may use the federal money simply to replace the expenditures they would otherwise have made on antiterror measures.
Suppose a city wants to spend $10 million on such measures and would spend it out of its own funds, but it gets a grant of $10 million from DHS.
Then it may simply reallocate the $10 million in its own funds that it would have spent on such measures to some unrelated program.
To the extent that such reallocations occur, the $700 million DHS program, with all its entailed paperwork, peer reviews, and political controversy, is not a security measure at all but just a general federal subsidy of local government.
Notice, moreover, that the less of its own money the city spends, the less secure it is against terrorist attacks, and it can use the lack of security to argue for an increased federal grant next year!
All this said, probably some sort of grant program makes sense simply because optimal antiterrorism measures require enlisting local facilities and personnel, and cities may underspend on these because the benefits will accrue in part, maybe major part, to other, perhaps far distant, cities; that is the externality point with which I began.
I am puzzled why the program should favor communications equipment, computers, emergency vehicles, pathogen detectors, containment shields, and other capital goods over salaries; effective antiterrorism measures tend in fact to be labor-intensive.
Becker, however, suggests an explanation in his comment.
Moving to the second question, how should the amount received by each city be determined, one encounters baffling problems of measurement.
Ideally, one would like the grant moneys to be allocated in such a way as to maximize the excess of benefits over costs.
The costs are relatively straightforward, but the benefits are not.
The benefits of an antiterrorism measure, for each potential target, depend on (1) the value of the target (not just in terms of financial loss, of course) to the United States, (2) the likelihood of its being attacked, (3) the likely damage to the target if it is attacked (which requires consideration of the range of possible attacks), and (4) the efficacy of a given measure to prevent the attack or reduce the damage caused by it.
(2) and (3) are probably the most difficult to estimate accurately, because to do so would require extensive knowledge of the plans, resources, number, location, and motivations of potential terrorists.
But (4) is very difficult too, because the effectiveness of increasing the number of policemen, or of installing surveillance cameras on every block, or of increasing the number of SWAT teams, or of taking other measures of prevention or response, is extremely difficult to assess in advance.
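Mechanically, the calculation is a product of the four factors. The sketch below uses entirely invented numbers; the essay's point is precisely that factors (2) and (3), and largely (4), cannot in practice be quantified:

```python
# Hedged illustration of the four-factor benefit calculation just described.
# Every figure is hypothetical.

def expected_benefit(value, p_attack, damage_fraction, efficacy):
    """Expected loss averted by a measure: (1) * (2) * (3) * (4)."""
    return value * p_attack * damage_fraction * efficacy

# name: (target value, attack probability, damage fraction, measure efficacy)
targets = {
    "large coastal city": (100.0, 0.05, 0.6, 0.3),
    "small inland city":  (10.0,  0.01, 0.4, 0.3),
}

for name, params in sorted(targets.items()):
    print(name, round(expected_benefit(*params), 3))
```

Under these invented inputs the large coastal city's expected benefit per measure dominates, which is the pattern the analysis below endorses; but the ranking is only as good as the unknowable probabilities fed into it.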
About all that can be said with any confidence is the following. Cities and other targets near the nation's borders (including coastlines) are probably more likely to be attacked than targets well inland. Larger cities are more likely to be attacked than smaller ones, because the larger the city the easier it is for a terrorist to hide and move about in it without being noticed, and attacks on large cities are likely to kill more people and do more property damage than attacks on small ones. Among coastal cities, New York and Washington probably are the prime targets because of their symbolic significance. But to neglect the defense of the small inland cities would simply make them the prime targets, and an attack on such a city might sow even greater fear nationwide than another attack on a large coastal city by making people feel that nowhere is safe.
But no numbers can be attached to these probabilities.
They belong to the realm of uncertainty rather than of risk, to borrow a useful distinction made by statisticians: risk can be quantified, uncertainty cannot be.
This analysis suggests that more antiterrorism resources should indeed be allocated to the large coastal cities than to other potential targets, but that is the pattern even after the recent cuts.
What seems indeterminate is the precise amount of money that should go to each city.
That makes one wonder why DHS was willing to incur the political pain of drastically altering the existing grant pattern.
The public is upset by the casualties that our soldiers are suffering in the Iraq war, and it might seem that their upset would cause no puzzlement even to an economist.
But there is an economic puzzle.
It is this.
Ours is an all-volunteer military.
No one is forced to join.
Everyone who does join realizes that he may find himself in a combat zone.
This is an expected cost of military employment and in a competitive labor market will be reflected in the wage.
That is, the wage rate in a competitive labor market will compensate a worker for any risks that the particular employment can be expected to create--a proposition that goes back to Adam Smith.
If the risk materializes, the employee has no cause to complain, provided it was the risk that he understood the job involved or should have understood it involved when he signed up for it, because he was compensated in advance.
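Smith's proposition can be written as a one-line formula: the competitive risky wage equals the safe wage plus the expected value of the risk borne. All figures in the sketch below are invented, and the later discussion argues that in wartime the probability term cannot actually be estimated:

```python
# Minimal sketch of the compensating-differential proposition,
# with hypothetical figures throughout.

safe_wage = 40_000             # annual wage in a comparable safe job (assumed)
p_loss = 0.01                  # annual probability of death or serious injury (assumed)
valuation_of_loss = 2_000_000  # the worker's implied valuation of that loss (assumed)

# Competitive labor market: the risky job must pay the safe wage
# plus the expected cost of the risk the worker bears.
compensating_differential = p_loss * valuation_of_loss
risky_wage = safe_wage + compensating_differential
print(risky_wage)  # 60000.0
```

The differential is ex ante compensation: it is paid whether or not the risk materializes, which is why its payment does not eliminate the loss when it does.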
Yet that is not how the public views our military casualties.
That is the economic puzzle which I address.
What is not puzzling is why the families and friends of a killed or injured soldier grieve.
Ex ante compensation for a loss does not wipe out the loss, even if it is a purely financial loss.
It just provides the inducement to bear the risk of incurring the loss.
One's spouse might consent to one's working at a very dangerous job, yet still grieve when one was killed at the job.
Nor is it a puzzle why, as in the recent search for the three American soldiers captured by the enemy in Iraq, immense resources are devoted to rescuing soldiers, rather than writing them off as having consented ex ante to their plight.
The compensating wage for bearing risk varies, obviously, with the risk, and the risk in turn depends on efforts that are and will be made to minimize the risk, including body armor, rescue, medical treatment, and so forth.
Knowing that one's fellow soldiers do not just abandon one when the cost of rescue would be disproportionate to any tactical value of the rescue reduces the wage that a volunteer army has to pay to attract soldiers of the quality it wants.
But the question remains how to explain the upset that the public feels at our mounting casualties in the Iraq war.
Is it just shock at seeing photographs of dead and badly injured Americans? But in fact such photographs are rarely shown.
Or is it perhaps that the risk of death and injury is greater than our soldiers had reason to expect when they signed up? Were this the concern, one would expect sympathy to be withdrawn from soldiers killed or injured who signed up within the last two years, for by two years ago it was clear that a great many recruits would be fighting in Iraq before the war ended.
The case of soldiers who joined the military before the September 11, 2001, terrorist attacks, which indicated that the United States could be expected to be involved in more military operations than previously anticipated, might be thought different.
But most of those soldiers completed their military obligations, and so were allowed to resign without penalty, years ago.
The situation of those who "re-upped" is no different from that of recent recruits.
Could there be a paternalistic concern--that recruits are not calculating the risk of death or injury accurately and as a result are not receiving an adequately compensatory wage differential over a safe job? This is unlikely.
One reason is that a great, and probably unobtainable, amount of information would be required in order to calculate that differential.
The risk of death or injury in combat is an example of what statisticians describe as "uncertainty" rather than "risk," reserving the latter term for situations in which a numerical probability can be estimated.
The incidence and length of wars, the probability of serving in a combat zone and for how long, and the amount and severity of the fighting in that zone are all imponderables.
The resulting uncertainty argues for an alternative to building ex ante compensation into the soldier's wage when he is hired.
Hence the practice of paying combat pay as a bonus to the soldier's ordinary wage.
At present, soldiers serving in combat zones, mainly Iraq and Afghanistan, receive $225 a month as combat pay on top of their regular wage.
The $7,000 bonus paid Marines who agree to be deployed to a combat zone for seven months is a similar response to the difficulty of fixing conventional ex ante compensation.
A further complication is illuminated by the economic concept of monopsony.
The term refers to a situation in which there is no competition on the buying side of the market, as distinct from no competition on the selling side (monopoly).
In a monopsonized market sellers receive less than they would in a competitive market because of their lack of alternatives.
Persons who join the military to obtain or exercise technical skills have civilian alternatives, so the military has to compete with civilian employers for the services of such persons.
But if you want to be a combat soldier, there is only one possible employer (if you are an American), and that is the U.S. government.
So the government can pay a low wage to persons desiring that employment--in fact it seems that it can pay a lower wage than it does to its military technicians (adjusting for the value of the technical training that the latter receive) even though the latter are less exposed to combat risks.
I suspect that the main reason for public distress at U.S. military casualties is altruism, which is stronger in a family setting but extends to strangers as well, as in charitable giving.
Most people are grateful to those who protect them, even if the protectors are well compensated.
But what of those Americans who believe that our involvement in Iraq is a mistake and that our soldiers, or at least most of them, should be withdrawn? Most of the critics of the war realize that the soldiers are trying to protect us, even if the soldiers are mistaken in believing that they are doing so.
If anything, critics feel sorrier for the troops than supporters of the war, because they think that the casualties represent sheer loss, so that the soldiers are deluded as well as endangered.
Here is a puzzle: effectiveness in senior leadership positions in government does not seem to be well correlated with intelligence.
Washington was a better President than Jefferson, though less able intellectually.
Franklin Roosevelt, Harry Truman, Dwight Eisenhower, and Ronald Reagan were not as bright as Herbert Hoover, Richard Nixon, Jimmy Carter, or Bill Clinton.
Lincoln, a brilliant lawyer, is an exception; Theodore Roosevelt perhaps another exception; and doubtless there are others.
But overall the correlation between intelligence and effectiveness in the Presidency may actually be negative.
Even more striking are the failures of Kennedy and Johnson's national security team in Vietnam and George W. Bush's national security team in Iraq.
McNamara and his whiz kids (such as Daniel Ellsberg, Harold Brown, and Alain Enthoven), the Bundys, Walt Rostow, George Ball--these were extremely able people, many of them (like McNamara and McGeorge Bundy) truly brilliant.
And Bush assembled an outstanding national security team--Cheney, Rumsfeld, Powell, Wolfowitz, Rice, Tenet (appointed by Clinton but held over by Bush).
Two members of the team--Cheney and Rumsfeld--were former secretaries of defense! And Powell was a former chairman of the Joint Chiefs of Staff.
It could just be bad luck, but I think not.
Economists distinguish between general and specific human capital, the first created by IQ and education and the second by training and experience in a particular job.
A person who has a large amount of general human capital is likely to find a job in which that capital, augmented by on-the-job training and experience, is highly productive.
The resulting success will make him an attractive candidate for a high-level government job.
The high-level jobs are filled generally by lateral entries from quite different jobs, rather than by civil servants.
Some of these high-level jobs are technical; an example is the chairmanship of the Federal Reserve Board.
Such jobs are relatively easy to fill with persons who can be predicted with reasonable confidence to do a good job.
But there is a tendency to exaggerate the versatility of the combined general-specific human capital that a lateral entrant brings to a high-level government job of a managerial or advisory rather than technical character.
There are several characteristics of such a job that actually militate against the prospects for the success of an extremely intelligent person.
First, these are "ensemble" jobs in the sense that many different skills or aptitudes are necessary to successful performance; if one of these, such as intelligence, is very highly developed, a person may neglect the others.
Second, it may not be possible to use step-by-step, logical reasoning to solve the problems laid at the feet of the occupant of a job like secretary of defense or secretary of state or national security adviser.
Such questions as what to do in Vietnam or what to do in Iraq do not lend themselves to rigorous analysis because there is not enough information to analyze.
Intelligence is not designed for coping with situations that are not so much complex as profoundly uncertain.
Having great information-processing skills is not worth a lot if you have no reliable information.
Third, leaders or managers should be more intelligent than their followers or subordinates, but not too much more intelligent.
If they are too much more intelligent, they will have difficulty assessing the capacities and limitations of their underlings and they will be tempted to substitute their intelligence for their underlings' knowledge.
Analysis and knowledge are, to an extent, substitutes.
You can multiply two numbers rapidly if you have good computational skills or if, though your computational skills are mediocre, you have memorized the multiplication table.
Knowledge in government resides in civil servants, and they tend on average to be less intelligent (also of course less powerful) than brilliant laterals.
So the latter are tempted to think that they can make decisions with minimal assistance from the civil servants.
The temptation is reinforced by a failure to distinguish between intuition and step-by-step reasoning.
Cognitive psychologists explain that the human unconscious contains more information than we can access at a conscious level.
As Herbert Simon (an economist and psychologist) explained, conscious attention is a severely limited faculty and must be carefully rationed.
Through intuition, however, we can access the larger repository of unconscious information.
Hence we speak of a person as having "experience" or "good judgment" or "common sense," as distinguished from being brilliant in the sense of being quick or having a good (conscious) memory.
So now imagine a confrontation between a brilliant person who has no knowledge about Vietnam or Iraq, and a career State Department officer who has spent his whole career working on conditions in one of those countries, who knows the language, has lived there, and is steeped in the country's history, culture, and politics.
Suppose he offers some advice to the brilliant senior official, and the latter asks him to explain and justify the advice.
He may be unable to do so because he may be drawing on a repository of information below the conscious level.
The brilliant official may be irritated at his inability to extract much more than a conclusion from the expert.
What is required at the top levels of government is not brilliance, but managerial skill, which is a different thing, and includes knowing when to defer to the superior knowledge of a more experienced but less mentally agile subordinate.
Moreover, so specialized is management as a job that success in managing a business may not translate at all into success in managing a government agency.
The firm-specific human capital that a person acquired in a career of management in a business firm may have no value for the management of a government agency, or for that matter a university, a private foundation, or an international organization.
Indeed, an experienced manager of a firm may falter and have to be fired if a change in the firm's environment requires a different type of management skill.
A striking example of the specialized character of leadership human capital is Larry Summers.
A truly brilliant person and successful secretary of the treasury, he failed as president of Harvard University though he seemed to many people (myself included) to be an outstanding choice.
I have the highest personal and professional regard for Summers and blame the failure of his presidency not on him but on the Harvard faculty of arts and sciences.
But the fact is that he failed, because he was not able to port his very considerable suite of intellectual and managerial assets to the management of an organization critically different from the Treasury Department.
The subprime-mortgage imbroglio is just the latest chapter in an age-old concern with the charging of interest, especially to individuals.
Medieval Christianity forbade the charging of interest on the ground that it was unnatural for money to increase (as by lending $100 at a 10 percent interest rate so that at the end of the year the $100 has grown to $110), because, unlike a living thing, an inanimate object such as money has no mechanism by which to reproduce itself.
Behind this superstition lay undoubtedly a hostility to commercial society, which persists today in some quarters of the Muslim world; Islam forbids charging interest although substitutes are tolerated.
The concern with lending has persisted into modernity even in Western societies.
Usury laws, which set a ceiling on interest rates, and the Truth in Lending Act, which requires detailed disclosure of annualized interest rates in consumer loans, are examples of this concern.
The relaxation of usury laws--a natural concomitant of the spread of free-market ideas in American society--allowed lenders to offer loans at very high interest rates to borrowers with poor credit ratings.
Payday loans, which charge astronomical interest rates to persons who need money to tide them over till their next paycheck, and subprime mortgage loans, sometimes at annual rates 4 or 5 percentage points higher than mortgage loans to borrowers who have good credit, were consequences of the relaxation.
I agree with Becker that credit is no different from any other commodity.
For government to place a ceiling on price prevents people from buying the commodity who would be willing to pay a higher price, and thus it prevents a mutually beneficial, and therefore value-maximizing, transaction.
The argument for the ceiling is that people who have a poor credit record have demonstrated their incompetence to borrow and so should for their own good be prevented from borrowing more.
That is not a compelling argument, apart from any general objections to government paternalism that one may have.
A person may have a poor credit record, yet know that he can pay a high interest rate and that he will be better off despite the cost.
As Becker notes, although the rate of default on subprime mortgage loans is high, still, the vast majority of those loans are repaid.
For many people they are the only route to home ownership, which is greatly valued by the owners but has also been thought (perhaps dubiously) to have social value; that at any rate is the rationale for the tax deductibility of mortgage interest.
I do believe there is reason to think that the subprime mortgage market is imperfect, though not reason enough to warrant government interference with that market.
The subprime mortgage lenders have engaged in aggressive marketing that may have deflected borrowers from shopping for better terms in the prime market.
There are of course many gullible consumers and many people who have difficulty understanding the cumulative costs of high interest.
There are also many people who like to speculate or otherwise gamble without a good appreciation of the odds.
Perhaps there is even something of a "bubble" aspect to the subprime market.
When housing prices were rising, borrowing to buy a house even at a high interest rate (interest rates generally were low until very recently, but high to subprime borrowers) was a leveraged investment, both on the borrowing side and on the lending side.
The borrowers expected to repay the high interest out of the rapid appreciation in the value of the house, and the lenders expected to be cushioned against the consequences of a high rate of defaults by those same rising prices: if they had to foreclose, the house would be worth enough more than the mortgage to enable the lender to recoup.
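The leverage on both sides can be made concrete with a numerical sketch; all of the figures below (house price, down payment, appreciation rate) are hypothetical, chosen only to illustrate the mechanism:

```python
price = 200_000        # hypothetical house price
down = 0.10 * price    # 10% down payment: the borrower's equity stake
loan = price - down    # the mortgage

# A 5% price rise accrues entirely to the borrower's equity,
# so his return is leveraged tenfold: 5% on the house, 50% on equity.
gain = 0.05 * price
print(gain / down)          # 0.5

# The lender's cushion: after the same appreciation, a foreclosure
# sale at the new price more than covers the loan balance.
print(price * 1.05 - loan)  # 30000.0

# But leverage cuts both ways: a 10% price decline wipes out the
# borrower's equity, and any further fall leaves the lender exposed.
print(price * 0.90 - loan)  # 0.0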
A bubble arises not because people fail to perceive that an asset is overvalued, but because they think the perception is not widespread and therefore the asset will maintain or increase its market value.
No one wants to sell an asset while its price is still rising, but if enough people think that way the price may rise to a point at which a slight perturbation in the market may cause a crash.
Given the riskiness of subprime mortgage loans, a modest decline in housing prices or rise in interest rates (many subprime mortgages were at floating rather than fixed rates) could precipitate enough unexpected defaults to create distress not only among subprime borrowers but also among the lenders.
Apparently that is what has happened.
Although the result is not a happy one, I do not perceive adequate grounds for government intervention.
Proposals for limiting subprime loans have the quality of closing the barn door after the horses have escaped.
The subprime "crash" has presumably educated both borrowers and lenders in the riskiness of the market, and if subprime lending persists it will not be because of ignorance of the risk.
Of course if subprime lenders have resorted or are resorting to fraud in inducing such loans, they should be punished, but for that no new laws are required.
I have very little to add to Becker's excellent discussion.
One puzzle that remains is why women have better college grades than men.
One possibility is that colleges discriminate against men in admissions.
For if colleges admitted blindly on the basis of academic prowess, they would keep admitting women until male and female grades were equal at the margin.
The average grades of women might still be higher than those of men.
But this would be surprising, unless most of the students in the applicant pool were women.
Discrimination against women in admission to college would not be irrational if male alumni are expected to be on average more generous donors, either because of higher average earnings or because, as Becker notes, men are likely to dominate the upper tail of the income distribution; alumni in the upper tail are likely to be disproportionately generous donors.
Another possibility, unrelated to current sex discrimination, but perhaps to historical discrimination against women, is "legacy" admissions.
If alumni children are favored by college admissions officers (largely for financial reasons--admitting alumni children increases expected donations by alumni), and the alumni parents are disproportionately male because men used to go to college in higher numbers than women, this could explain why males are being admitted who are expected to be poorer students than women who could have been admitted in their place.
However, given that alumni are likely to have an equal number of male and female children, this explanation would work only if alumni prefer their sons to be admitted to the same school.
Still another possible explanation for the higher average grades of female than male students is that men get as much out of college as women do even when male grades are lower, because there is more to college than academic performance and the "more" may be more valuable on average to men than to women.
Male sports and other male social activities in college may build teamwork, and networks, that create more valuable human capital for men than these activities would do for women, perhaps because men will have greater participation in the labor market, where teamwork and connections are vital assets.
On this view (proposed by Asher Meir in correspondence), male students substitute nonacademic for academic college activities, resulting in lower average grades that are, however, offset by the social human capital that they acquire from engaging in the nonacademic activities.
Whether the wage gap between men and women will continue to narrow because the ratio of male to female college students will continue to fall seems to me speculative.
The ratio may not fall at all if colleges see advantages in the current ratio, though this would leave unexplained why it has fallen as far as it has already.
If the ratio does not continue to fall, I do not see what would drive female wages up relative to male wages.
Rising prosperity may actually induce many women to substitute household for market work, because diminishing marginal utility of money income, combined with higher income tax rates at higher incomes, would tend to make untaxed household income more attractive.
There were some excellent comments on my posted comment of a couple of weeks ago, which I have been slow in responding to.
One commenter pointed out that a possible reason for colleges to favor male applicants is that there is greater variance in performance among men than among women; in the words of the commenter, "colleges, especially the good ones, tend to be risk-takers in admission, since it is disproportionately valuable for them to get the very top students." Another commenter pointed out that a college may want to admit a certain minimum number of men in order to provide more dating opportunities for women.
Maybe there is a tipping phenomenon at work as well--that if there are too few men, male applications drop because men don't want to be thought to be attending a "women's college."
A number of comments expressed puzzlement with the proposition that the higher average grades of women could signify discrimination in favor of men rather than of women.
The puzzlement is understandable because of a typo in the third line of my post, for which I apologize: "men" should be "women." The easiest way to understand the point is to imagine that the average woman's grade point average is an A and the average man's a D.
Then it would be evident that the college was discriminating in favor of men, because it was admitting D men in preference to A or B women (I say "or B" to allow for the possibility that the college has admitted all its A applicants).
Another comment pointed out that if there is discrimination in the job market, women will have a stronger incentive than men to get good grades in order to improve their job-market prospects.
Anti-Semitism has been thought a factor in pushing Jews to excel in their studies.
A newspaper is a bundled product.
A bundled product is one that combines a number of products the demands for which may be quite different--some consumers may want some of the products in the bundle, other consumers may want other products in the bundle.
(Another good example is the Windows operating system, a bundle of a number of different programs.) Bundling is efficient if the cost to the consumer of the bundled products that he doesn't want is less than the cost saving from bundling.
A particular newspaper reader might want just the sports section and the classified ads, but if for example delivery costs are high, the price of separate sports and classified-ad "newspapers" might exceed that of a newspaper that contained both those and other sections as well, even though this reader was not interested in the other sections.
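The delivery-cost example can be illustrated with a toy calculation; all of the figures (in cents) are hypothetical, chosen only to show how the efficiency condition can hold for a reader who discards half the bundle:

```python
# Hypothetical costs, in cents, for a reader who wants only the
# sports section and the classified ads.
delivery = 40   # fixed cost per physical delivery
printing = 5    # incremental cost per section printed

def price_separate(n_papers):
    # each stand-alone "newspaper" bears its own delivery cost
    return n_papers * (delivery + printing)

def price_bundle(n_sections):
    # one delivery covers every section in the bundle
    return delivery + n_sections * printing

# Two stand-alone papers vs. one four-section bundled newspaper:
print(price_separate(2))  # 90
print(price_bundle(4))    # 60
# The bundle is cheaper even though two of its sections go unread.
```

The comparison inverts once delivery becomes nearly free, which is the unbundling logic of the Web discussed below.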
Bundling also facilitates price discrimination by snagging consumers who place a high value on particular products in the bundle.
It also deters entry by single-product competitors, because the marginal cost to the consumer of any one component of the bundle is zero.
He gets the sports section for "free" (in the sense that the newspaper costs him no less if he throws the sports section away without reading it) but would have to pay a positive price for a free-standing sports newspaper.
Like other intellectual products, a newspaper has high fixed costs (the newsroom, etc.) but low marginal costs (the cost of printing and selling one more copy), and so there is a tendency to natural monopoly in local newspaper markets.
It is offset, however, to an extent, by differences in content, outlook, and so forth among different newspapers, which limits substitutability and therefore makes some degree of competition viable.
Nevertheless newspapers tend to be quite profitable (as recently as 2006, the average ratio of profit to revenue was 17 percent, which is high relative to industry as a whole), because competition is limited.
High newspaper profits sometimes are attributed to the fact that most information comes free from public sources and that newspapers deal directly with their customers and so economize on distribution costs.
But low costs are not a reason for high profits, since competition tends to push revenues down to costs.
High profits may seem inconsistent with declining revenues, but are not if the firm, seeing no future for itself, ceases investing in its future and instead cuts costs to the bone (thus treating the firm's product or service as a "cash cow").
Many newspapers are doing that.
Still, newspaper profits are plummeting, and with them the value of the companies.
The reason is declining ad revenues (an inflation-adjusted decline of 20 percent between 2000 and 2007, and a further decline this year).
This is a function in part of declining newspaper circulation but more profoundly of unbundling, as unbundling is the cause both of the declining ad revenues and of declining circulation.
The Web provides a virtually costless method of distributing the products that are bundled in a newspaper.
The distribution is not only cheaper, but better, because it avoids the time and space constraints of hard copy delivered on a daily (rather than instantaneous) basis and space-constrained by the cost of paper.
The unbundling goes deeper than the section level (classified ads, the sports section, etc.), for every section of a newspaper is itself a bundle.
The news section bundles a variety of news stories that different readers value differently; readers who have no interest in foreign policy nevertheless pay for a newspaper that may maintain costly foreign bureaus in order to produce good stories on foreign policy.
The Web provides a customized news service that enables the tastes of particular readers to be identified and then satisfied by instantaneous and often costless delivery of a product laser-focused on those tastes.
The bother associated with the physical bulk of the newspaper is also eliminated.
A study by comScore, Inc. in March of this year found that persons 65 and older are almost six times as likely to read a newspaper six days a week as persons aged 25 to 34 (and almost ten times as likely as those aged 18 to 24).
The principal reason for the difference is not, I think, that older people have more leisure, because people in the 45 to 54 year old bracket, who do not have more leisure than the young, are more than twice as likely to read a newspaper six days a week as the young cohort.
The reason, rather, is that younger people are much more comfortable getting information online than older people are; they have grown up in the electronic revolution.
This will not change as they get older.
It appears that the only hope for the newspapers is to go online, and they have done this and have attracted many viewers to their Web sites.
But they have not been able to charge for online ads anything like what they can charge for ads in their hard-copy editions.
The reason, I think, is that there is much more competition in online advertising than in print advertising, especially for advertising, such as classified advertising, that is primarily informational; for the information in the ads is often available online at no or nominal cost from other sources, such as Craigslist.
Moreover, the online newspaper is still a bundled product, and the Web provides close substitutes for all the sticks in the bundle.
The blogs are a big factor here; in the aggregate, they not only are nimbler, but contain a vastly greater body of specialized knowledge, than the newspapers or other conventional media (as Dan Rather learned to his sorrow).
Suppose, then, that the newspapers are doomed, or, more realistically, that they are likely to continue to shrink, eventually becoming a retirement service, like Elderhostel.
Are there social consequences that should trouble us? A common argument is that if news is customized to the tastes and interests of every individual in the society, people will not be exposed to conflicting views and as a result will become incapable of active civic engagement, for example as voters.
That is implausible.
It is important to distinguish between opinion and fact.
Most people do not want their opinions challenged.
So if they are liberal they read the New York Times and if they are conservative they read the Wall Street Journal.
But people are both interested in, and influenced by, facts, such as the fall of communism or the rise in gasoline prices, and they will learn these facts (and more quickly) on the Web even if they do not read newspapers.
The few people who actually read, compare, and take seriously opposing views on matters of public policy will continue to do so after they stop subscribing to print newspapers.
With the rise of the blogs, moreover, the amount of information and opinion reaching the public is far greater than in the heyday of the print newspapers.
A second concern, to which the rise of the blogs may be only a partial answer, is that Internet news services (such as Google News) are parasitic on the print newspapers' large staffs of reporters, so that if they drive the newspapers out of business the Internet news services will lose much of their content.
The copyright law cannot prevent this, because a newspaper can prevent the copying only of its articles--that is, of the verbal form in which the information in an article is expressed--and not the information itself.
And it cannot prevent a news service from simply sending the viewer to the newspaper article via a Web link.
The concern, in short, is that the Internet will kill the goose that lays the golden egg.
But this is unlikely.
If online viewers want the level of news and opinion that print reporters generate, the Internet news services will hire reporters, defraying the cost out of their online advertising revenues, which will be greater for an Internet news service that attracts additional viewers by offering them richer, newspaper-type fare.
Indeed, long after newspapers like the New York Times and the Washington Post have ceased print publication, their Web sites may be among the leading Internet news services.
The aggregate amount of news and opinion may be less, however, because unbundling will eliminate internal subsidies, for example of the news and op-ed pages by revenues from classified ads.
David Brooks is one of the most thoughtful newspaper columnists.
In a recent op-ed ("The Great Seduction," New York Times, June 10, 2008, p. A23), he argues that the founders of the nation "built a moral structure around money. The Puritan legacy inhibited luxury and self-indulgence. Benjamin Franklin spread a practical gospel that emphasized hard work, temperance and frugality ... For centuries, [the nation] remained industrious, ambitious and frugal." But, Brooks continues, over the past 30 years much of that legacy "has been shredded," while "the institutions that encourage debt and living for the moment have been strengthened." And here he mentions "an explosion of debt that inhibits social mobility and ruins lives," because of "people with little access to 401(k)'s or financial planning but plenty of access to payday lenders, credit cards and lottery agents." Among other "agents of destruction" are state lotteries--"a tax on stupidity," which tells people "they don't have to work to build for the future.
They can strike it rich for nothing." Other culprits are the astronomical interest rates charged by payday lenders; and the aggressive marketing of credit cards by banks and other financial institutions, as a result of which by the time college students are in their senior year more than half of them have at least four different credit cards.
The cures that Brooks offers include "rais[ing] consciousness about debt," encouraging foundations and churches to offer short-term loans in competition with payday lenders, strengthening usury laws, and taxing consumption rather than income, thus encouraging saving.
All this is very interesting, but is it correct? I have my doubts, except about the desirability of eliminating double taxation of savings, a problem with our income tax.
Max Weber argued convincingly in his famous book The Protestant Ethic and the Spirit of Capitalism that the frugality and industriousness promoted by the early Protestants in opposition to the opulence of the Roman Catholic Church were values conducive to and perhaps critical in the rise of commercial society.
Protestants who believed in predestination wanted to show by their modesty, austerity, and avoidance of lavish display that they were predestined for salvation.
But saving plays a less important role in economic progress today than it did in the sixteenth century.
Its role in powering economic growth has been taken over, to a large extent, by technology.
The great rise in standards of living worldwide is due far more to technological progress than to high rates of savings, that is, to deferring consumption.
At the same time, now that we have efficient debt instruments that in former times did not exist or were extremely costly, the role of personal debt (Brooks does not criticize corporate or government debt) in human welfare is more apparent than it was.
Apart from its role in solving short-term liquidity problems resulting from delay in the receipt of income, debt enables consumption to be smoothed over the life cycle.
Without debt, a family might have to wait 20 years before it could afford to buy a house.
Of course, debt creates risk for both lender and borrower, as the subprime mortgage crisis has dramatically illustrated.
But if the risks are understood, it is unclear why the assumption of them should be thought harmful to personal or social welfare.
At worst, debt leads to bankruptcy, but bankruptcy is not the end of the world either for the borrower or for the lender.
In situations of desperate poverty, one can expect a heavy debt load; but such a load can also be positively correlated with prosperity, which cushions the risks that debt creates.
It is especially odd to suggest as Brooks does that taking on debt is antithetical to hard work; on the contrary, it increases the incentive to work hard by making it easier for people to obtain the goods and services they want by borrowing the money they need to pay for them, yet at the same time increasing the risk of bankruptcy should they slack off on their work and so let their income fall.
The very high interest rates for payday loans tell us that many people will pay a very high premium to shift consumption from future to present.
As long as they understand what interest rates are and what interest rates they are paying, it is hard to see why their preference for present over future consumption, and hence for spending and borrowing rather than saving, should have social implications.
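What "understanding what interest rates they are paying" amounts to can be sketched with a worked example; the $15 fee on a $100 two-week loan is a hypothetical (though commonly cited) set of terms, not data from any particular lender:

```python
principal = 100.0   # hypothetical two-week payday loan
fee = 15.0          # flat fee charged for the two weeks
term_days = 14

# Simple (non-compounding) annualized rate, the kind of figure the
# Truth in Lending Act requires lenders to disclose as an APR:
apr = (fee / principal) * (365 / term_days)
print(round(apr * 100, 1))   # 391.1  (percent per year)

# If the borrower rolled the loan over repeatedly, compounding would
# push the effective annual rate to roughly 3,700 percent.
effective = (1 + fee / principal) ** (365 / term_days) - 1
```

A borrower who grasps that a 15 percent fee for two weeks annualizes to nearly 400 percent is revealing an extreme preference for present over future consumption, which is the point of the paragraph above.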
People who take out payday loans are unlikely to be potential savers (i.e., lenders); and by taking on heavy debt they force themselves to work very hard; and I have suggested that saving is not as important as it once was.
I particularly do not understand how, if high interest rates for payday loans are a problem, loans by foundations and churches are a solution.
If, as I assume Brooks must mean, these loans are to be made at lower interest rates than payday loans, the former payday borrowers will borrow more.
If to try to prevent this the charitable lenders ration their credit tightly, the payday borrowers will borrow what they can from those lenders and top off with a payday loan; their total debt burden is unlikely to fall.
As for the "tax on stupidity," it is of course irresistible to finance as much of government as possible by a system of voluntary taxation, which is what a state lottery is.
And I don't think "stupid" is the right word to describe all or even most of the people who buy lottery tickets.
I do think that some of them consider themselves "lucky" and so in effect recalculate the odds in their favor.
That is stupid; in a game of chance, "luck" is randomly distributed.
Some people, though, simply enjoy risk.
Others like to daydream, and a daydream is more realistic if there is some chance it may come true, even if a very small chance.
And finally and most interestingly, there are people whose marginal utility of income is U-shaped rather than everywhere declining.
Usually we think of it as declining: my second million dollars confers less utility on me than my first million, and that is why I would not pay a million dollars for a lottery ticket that gave me a 50.1 percent or probably even an 80 percent probability of winning $2 million.
But maybe I lead a rather drab life, and this might make such a gamble rational even if it were not actuarially fair.
Suppose that for a $2 lottery ticket I obtain a one in a million chance of winning $1 million.
It is not a fair gamble because the expected value of $1 million discounted by .000001 is $1, not $2.
But if having $1 million would transform my life, the expected utility of the gamble may exceed $2, and then it is rationally attractive.
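The arithmetic of the last three sentences can be checked directly. The utility function below is a hypothetical stand-in for the "transformative" windfall described above (I simply assume the $1 million is worth three times its face value to this buyer); only the expected-value calculation comes from the text:

```python
ticket_price = 2.0
prize = 1_000_000.0
p_win = 1e-6            # one-in-a-million chance of winning

# Expected monetary value: the gamble is actuarially unfair.
ev = p_win * prize
print(round(ev, 9))     # 1.0 -- half the ticket price

# Hypothetical "transformative" utility: assume the windfall is worth
# three times its face value to a buyer whose drab life it would change.
def utility(gain):
    return 3.0 * gain if gain >= 1_000_000 else gain

eu = p_win * utility(prize) + (1 - p_win) * utility(0.0)
print(round(eu, 9))     # 3.0 -- exceeds the $2 ticket price
```

In general the unfair gamble is rationally attractive whenever the utility of the prize exceeds the ticket price divided by the probability of winning, which a life-transforming prize can supply.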
Brooks complains that government sponsorship of lotteries sends an official and therefore authoritative message that a person can strike it rich for nothing.
But of course that is true, even when there are no lotteries.
(And he gives no indication of wanting to forbid private lotteries.) You can inherit great wealth.
More commonly, you may be able to leverage modest talents into great wealth by the luck of being in the right job at the right time.
Brooks himself complains in his op-ed about the message sent by the fact that hedge fund managers often make more money than people who "build a socially useful product." Only the latter, he believes, should earn fortunes.
But he doesn't propose an excess-profits tax on hedge fund managers; he accepts the legitimacy of their fortunes at the same time that he attributes those fortunes to luck.
There is also an echo of the traditional but erroneous suspicion of speculation as an activity that does not create social wealth but merely shifts it around.
That is incorrect.
Speculation aligns prices (whether commodity prices or the prices of companies) with values and so creates more accurate signals for production and investment.
It is a vital economic service.
That is not to say that speculators "deserve" higher incomes than ditch diggers.
Desert doesn't enter.
Incomes are determined by supply and demand.
What is true is that easy credit facilitates bubbles, such as the housing bubble and the related mortgage-financing bubble, and the bursting of a bubble can, as we have been relearning recently, cause economic dislocations.
This may require some regulatory adjustments; it does not require a return to Calvinism.
Although I worry more than Becker does about the environmental consequences of the production and consumption of oil, and although I want oil prices to remain high--indeed to continue rising--I largely agree with his analysis of the rival proposals for dealing with the present "crisis": allowing more drilling for oil on the outer continental shelf and in Alaska versus imposing an excess profits tax on the oil companies.
I agree with him that the former is a good idea and the latter a bad one.
But I will qualify my agreement by suggesting policy adjustments to minimize the adverse effects of allowing more drilling or of imposing an excess profits tax.
Expanded drilling in U.S.
territory (including our territorial waters) will reduce both U.S.
dependence on foreign oil and the wealth of foreign oil-producing countries, many of which are hostile or potentially hostile to the United States.
These are important benefits.
But there are also significant costs.
Any increase in the production of oil from the seabed and from the fragile Alaskan tundra will create environmental damage, both directly, because of the environmental damage caused by the drilling itself (such as, in the case of offshore drilling, the dumping into the ocean of "drill cuttings"--the solids that are brought to the surface in drilling an oil well), and indirectly, as a consequence of increased production of oil, because of oil spills by tankers, traffic congestion and highway wear and tear, and, most ominously, increased carbon emissions from the burning of oil as a fuel.
Becker notes correctly that the less oil we produce, the more that foreign nations will produce.
But given the high price of oil, increasing our oil production will increase total world production rather than just substitute for foreign production.
So there will be more tanker spills and more carbon emissions if offshore and Alaska drilling is allowed, since the supply of oil will be greater.
The problems created by an increased supply of oil can be minimized by an increase in the federal gasoline tax (better still would be imposing a tax on carbon emissions, since such a tax would create an incentive to reduce the amount of emissions per unit of gasoline consumed) calibrated to prevent gasoline prices from declining as a consequence of increased production of oil and hence increased supply.
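The calibration idea can be put in simple arithmetic terms. A minimal sketch, in which the prices and the pre-drilling benchmark are purely hypothetical illustrations, not estimates:

```python
def offsetting_tax(pre_drilling_price, post_drilling_price):
    """Per-gallon tax needed to keep the pump price from falling
    below its pre-drilling level; zero if the price did not fall."""
    return max(0.0, pre_drilling_price - post_drilling_price)

# Hypothetical numbers: suppose increased supply pushes the pretax
# price from $4.00 down to $3.60 a gallon.
tax = offsetting_tax(4.00, 3.60)   # tax absorbs the $0.40 decline
pump_price = 3.60 + tax            # the consumer still faces $4.00
```

The point of the calibration is only that the consumer-facing price never declines; if the market price rises instead, the offsetting tax is zero.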
Already the shock of $4 a gallon gasoline has caused a modest decline in U.S.
consumption of oil, yet $4 is little more than half the retail price of a gallon of gasoline in most European countries.
Distances are shorter in Europe, and so U.S.
gasoline prices would not have to double in order to make substantial inroads into our oil consumption.
But they should not be allowed to fall as a result of increased world supply due to offshore and Alaska drilling.
A gasoline or carbon-emissions tax must not be confused with a tax on the profits of oil companies, which, because of the uncertainties involved in exploring for oil, will, as Becker points out, reduce the incentive to find and exploit new domestic oil fields.
(In contrast, a heavy tax on gasoline will increase the incentive to find energy substitutes for oil.) In addition, imposing excess profits taxes sends a bad signal to the business community: that success will be penalized.
And there is a danger that the proceeds of the tax would be used to subsidize the purchase of gasoline in order to reduce gasoline prices.
The demand would rise without stimulating domestic production, so we would have the worst of all possible worlds: high consumption of oil and increased dependence on foreign production.
But in the unhappy event that an excess profits tax is imposed, at least it should be limited to profits from existing oil fields, to minimize the dampening effect on the incentive to develop new fields.
Because the environmental risks of offshore and Alaska drilling are greater than those of drilling for oil on land in the lower 48 states, an environmental excise tax should be placed on the oil produced from offshore and Alaska wells.
It is not enough to rely on the tort system to provide sanctions for oil spills.
Many of the environmental effects of drilling for oil are individually too small to invite tort suits, yet the cumulative effects can be very large.
That is true with respect to effects on fisheries and on the frequency of tanker spills.
The more oil that is transported by sea, the more spills there will be, but it will rarely if ever be possible to ascribe a particular spill to a particular producer of the oil that was spilled.
An environmental tax is therefore necessary to induce the oil companies to internalize the environmental costs that their activities impose.
The increased percentage of persons who go to college is not surprising.
Advances in technology have reduced the demand for brawn and increased the demand for brains.
But several significant questions (concerning college education in the United States, to which I confine this comment) remain.
The first is why female college enrollment has increased so much faster than male enrollment, and why female college students do much better, as measured by grades and graduation rate, than male students do.
If college is more valuable to a woman in the labor market than to a housewife, then as more women work relative to engaging in full-time household production, women's demand for a college education will rise; apparently this factor has dominated the effect of advances in technology on both sexes, for otherwise their rates of enrollment would be growing at the same rate.
Of course technology, in the form of labor-saving household appliances, more reliable contraception (including abortion), the higher ratio of light to heavy work, and reductions in infant mortality (a factor in limiting the size of families) may underlie the increase in women's participation in the labor market.
But only the increase in the ratio of light to heavy work is a change in the technology of work that favors women by reducing the demand for brawn and hence for male labor relative to female.
But why are proportionately more women going to college and, once there, outperforming the male students? One answer may be that they get more out of college than men do.
Maybe they gravitate to fields in which college learning is more valuable than it is in the fields that men gravitate to.
Suppose that men have a comparative advantage (as they probably do) in jobs that involve danger, disagreeable working conditions, upper-body strength (of course), and financial risk.
Those are jobs to which going to college, or in some instances (such as financial risk taking) concentrating once there on academic performance, may not contribute a great deal.
Another question is whether college attendance or graduation is the right variable for estimating the returns to education.
Suppose that high schools deteriorate; that would increase the demand for college, especially for community colleges that may offer a level of teaching no different from that of a good high school.
Most high schools are public and do not compete for students.
The college market is far more competitive.
A community college may offer a superior high school education.
And finally, how much more will college attendance increase? Will it go to 100 percent (currently, about 60 percent of high school graduates go on to college--of course many kids drop out of high school)? That depends on two factors: the brain/brawn tradeoff, and IQ (or some alternative measure of intellectual aptitude).
If the intellectual demands of work relative to the physical demands continue to increase, the demand for college will also increase.
IQ is, though, a limiting factor.
But it is less of a limiting factor than one might think.
The reason is that a frequent byproduct of technological advance is deskilling.
Fifty years ago, a driver had to know how to change a tire and put chains on a tire, how to check the engine's oil level and the water level in the radiator, and how to start a car in freezing weather.
These skills are no longer required.
Most cashiers no longer need to know how to make change; the cash register tells them how much change to give the customer.
Printers no longer need to know how to set type upside down.
With advances in neuroscience, artificial intelligence, computer science, robotics, and nanotechnology, many jobs that require a college education today will require little in the way of education tomorrow.
Many people may then defer college until retirement, in order to increase the returns to leisure by widening their cultural horizons.
The term "infrastructure" is much used but lacks a clear definition.
I shall use it to mean inputs, often provided by government rather than private enterprise, into a very large variety of products and services.
Good examples are transportation, communications (including the Internet), education, the environment (including water resources), public health (in the sense of prevention of communicable diseases), and law enforcement (including the judiciary).
Current concerns with our allegedly deteriorating infrastructure focus on road and air transportation and on primary and secondary education, and I shall confine my discussion to them.
I am going to bracket road transportation and education because the problems of both could probably be solved satisfactorily by privatization or, in the case of education, semi-privatization.
Air transportation I discuss separately because its problems almost certainly require a governmental solution.
America's roads (including bridges, which carry mainly roads) have been deteriorating, as is obvious to any user of the interstate highway system.
The reason for the deterioration is that the system is carrying vastly more traffic than it was designed for in the 1950s.
The result is not only rough surfaces which wear out tires and slow down traffic, but also delays due both to construction and to the sheer increase in traffic volume.
Congestion and therefore potholes and delay have also increased on local and commuter roads.
Wear and tear, and delay, are real costs, but the costs of road building and road improving to reduce the costs of wear and tear and of delay are real too, and the challenge is to spend only up to the point where the last dollar spent yields a dollar in benefit.
Government seems incapable of doing that.
Privatization should be able to.
This is obvious in the case of toll roads, where in fact a privatization movement is under way, as we have discussed in a prior post.
The toll can be set equal to the cost that a vehicle imposes on the road in congestion and wear and tear.
(That cost varies with the size and weight of the vehicle, but the toll can be made to vary with these factors as well.)
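A toll schedule of that kind can be sketched in miniature. The cost figures, the linear weight adjustment, and the peak-hour congestion charge below are all illustrative assumptions, not estimates of actual road costs:

```python
def marginal_cost_toll(weight_tons, peak_hour,
                       base_wear_cost=0.50, wear_per_ton=0.25,
                       congestion_charge=2.00):
    """Toll set equal to the cost a vehicle imposes on the road:
    wear and tear rising with vehicle weight, plus a congestion
    charge levied only at peak hours."""
    wear = base_wear_cost + wear_per_ton * weight_tons
    congestion = congestion_charge if peak_hour else 0.0
    return wear + congestion

car_offpeak = marginal_cost_toll(1.5, peak_hour=False)  # wear only
truck_peak = marginal_cost_toll(20.0, peak_hour=True)   # wear plus congestion
```

The design choice worth noting is that the two cost components are separable: a heavy truck off-peak pays for wear but not congestion, while a light car at rush hour pays the congestion charge but little for wear.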
There is a concern about monopoly; there are not always good alternatives to a particular route.
If the state or other public owner of the toll road auctions the road to the highest bidder, the winning bid will capitalize monopoly rents, and the tolls will therefore contain a monopoly markup.
That is inefficient, but the revenue that the state obtains from the auction is a tax substitute, and taxes have the same general misallocative effects as monopoly prices.
The state can if it wishes avoid the monopoly problem by specifying a minimum quality of service and then auctioning the road to the lowest bidder who agrees to provide that quality.
Not all roads are toll roads, but given modern technology all can be made toll roads.
Electronic toll systems are available that do not require vehicles to slow down, and these can be adapted even to local roads, so that the entire street system of a city or an entire metropolitan area could be privatized.
This may seem a bizarre suggestion, but it is no more bizarre than allowing a private company to own the telephone or cable grid of a city.
The street system is just another grid, and the potential monopoly problems can be dealt with in the same way that cities deal with telephone or cable monopolies, where auctioning franchises may again be the most efficient approach.
The American system of public education is heavily criticized.
Costs per pupil are high relative both to private education and to public education in foreign countries that turn out better students.
Many parents are voting with their feet, as it were, by putting their kids in private (including parochial) schools, home schooling the kids, or moving to communities that have better public schools.
These alternatives are costly to parents.
It is not clear why government is in the business of operating any educational facilities.
It is appropriate for government to require that children attend school up to a specified age, to fix minimum educational standards with regard both to curriculum and to performance, and to finance the costs of education for impecunious parents.
None of these things requires that government own and operate schools.
But public education is not about to be abandoned, and it could be semi-privatized by adoption of a voucher system, which by permitting parents to choose among public schools would force public schools to compete with each other.
Air transportation presents a baffling problem in infrastructure deterioration.
The deterioration in airline service in the last five years has been dramatic, involving as it does not only extraordinary delays but also horribly crowded airplanes and crummy airports.
The delays are masked by the fact that the airlines have increased the scheduled time of flights and by the difficulty of measuring delay resulting from canceled flights and missed connecting flights.
A study this past month by the majority staff of the Joint Economic Committee of Congress entitled "Your Flight Has Been Delayed Again" estimates the annual cost of airline delay at some $41 billion.
Of this amount $12 billion is attributed to traveler time costs.
That estimate strikes me as too low.
It includes the delay due to schedule changes, but excludes delays due to missed connecting flights and to canceled flights, and the lost time when, in order to avoid missing an appointment, one takes an earlier flight and hence, if that flight is not delayed, has dead time on arrival.
The uncertainties of air travel also cause some potential travelers to substitute, at some cost, another activity (maybe even another job, requiring less travel).
On the other hand, the $41 billion figure is misleading because it is the cost of total air transportation delay, not avoidable delay; optimal delay is not zero, because of unavoidable weather conditions and equipment failures.
The deterioration in airline service is puzzling because the high cost of aviation fuel has virtually bankrupted much of the industry, and so one would expect a contraction; the contraction is beginning, yet there is no expectation that flight delays will diminish.
The reason is that the airline industry has very heavy fixed costs, so that even when high fuel prices push up its marginal costs, each flight, provided the revenue from the flight exceeds the marginal cost of the flight, will contribute something to the airline's fixed costs, and so airlines are reluctant to reduce the number of their flights.
This explains, moreover, why airline service has deteriorated.
When demand grows, as it has done rapidly in recent years, the industry responds by adding flights.
No airline has an incentive to balance the revenue from additional flights against the costs in additional delay, because one airline's reducing the number of its flights would have little effect on delay, but a dramatic negative effect on its revenues.
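The flight-level logic can be illustrated with hypothetical numbers (all figures below are assumptions for the sake of the example, not airline data):

```python
def keep_flight(revenue, marginal_cost):
    """An airline keeps a flight whenever its revenue covers the
    marginal cost of operating it; any excess is a contribution to
    fixed costs, even if the flight loses money once fixed costs
    are allocated to it."""
    return revenue > marginal_cost

# A flight earning $90,000 against $70,000 in marginal cost
# contributes $20,000 to fixed costs, so it flies, even if the
# flight's allocated share of fixed costs is, say, $40,000 and the
# flight therefore "loses" $20,000 on a fully allocated basis.
flies = keep_flight(90_000, 70_000)
contribution = 90_000 - 70_000
```

This is why rising fuel prices, which raise marginal cost, do not quickly thin out schedules: the flight is canceled only when revenue falls all the way below marginal cost, not merely below average cost.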
One culprit in the deterioration of airline service is Congress, and another is the Administration; between them, they have failed to create a modern air traffic control system that would reduce delay by reducing the safety-required spacing of planes both on the runway and in the air.
But that would not solve the basic problem, which as I have said arises from the fact that no individual airline bears the full costs of the delays it creates.
The airways are like a highway of fixed size with no tolls, facing an increase in traffic.
Another apparent culprit is the airports, but they are in much the same position as the airlines.
They are locally owned and in principle (but not in practice) can control congestion by limiting the number of takeoffs and landings either directly or by fees.
If every city had two airports, if most airline traffic were between just two cities, and if two companies each owned one airport in each city, then the competitive situation would be identical to that of two competing toll roads, and, assuming a modern air traffic control system, the optimal amount of delay would be achieved.
But these conditions are not satisfied.
No single airport can optimize congestion, because it does not control the traffic to and from, and hence delay at, other airports, and that delay will in turn cause delay at its airport.
The best solution might be a federal airplane congestion tax that would vary from route to route depending on delays, which vary considerably across regions and specific airports.
The revenues could be used to update the air traffic control system.
An alternative might be to allow some limited collusion among the airlines, enabling each airline to reduce the number of flights without losing business to its competitors.
But that would be in essence a return to the system of airline regulation prior to the abolition of the Civil Aeronautics Board, and that was a thoroughly unsatisfactory system.
I have blogged at considerable length about the July 17 report, see http://correspondents.theatlantic.com/richard_posner/, and also written a short op-ed on the subject, published in the New York Times on July 25 ("Our Crisis of Regulation," p. 21).
I have emphasized both what seem to me fundamental failings in the report and weaknesses in particular proposals.
The fundamental failings include prematurity, one-sidedness, and overambitiousness, and let me dwell for just a moment on the first of these, or rather one aspect of the first, and that is the Administration's determination to revamp financial regulation in light of the financial crisis of last fall before the causes of that crisis have been determined.
In other words, first the sentence, then the trial to determine guilt, specifically the guilt of the finance industry (of "banking" in a broad sense that includes other financial intermediaries--the members of the "shadow banking" system, of which more shortly).
Without pointing to evidence, the report asserts that the financial crisis was the product of irrational decisions both by lenders and borrowers and of major gaps in the structure of financial regulation.
Ignored is the role of error and inattention by the regulators, notably including the Federal Reserve and the Securities and Exchange Commission; the deregulation movement in finance; lax enforcement of the remaining regulations; and failures of understanding by the economics profession.
And thus the role of the Fed in forcing interest rates too far down, and keeping them too far down for too long, during the early years of this decade, and in neglecting growing signs of housing and credit bubbles (caused by low interest rates), goes unmentioned.
Since senior economic officials in the Administration were implicated in these failures of regulation, and since the thrust of the report is that we need more regulation, it is not surprising that the report should give regulators a pass.
It should be a rule of regulatory reform that before the regulatory structure is changed, which is likely to be a time-consuming endeavor with at least some unanticipated consequences, the government make sure that the regulators are employing their existing powers to the full.
And indeed just last week the SEC announced that it is imposing reserve and capital requirements on money-market funds, requirements that, had they been in force last September, would have reduced the systemic consequences of Lehman Brothers' collapse (see below).
Had this rule been honored by the authors of the report, there would have been much less emphasis on structural reform, as in the proposed creation of new regulatory entities and the proposed expansion in the powers of the Federal Reserve.
The centerpiece of the Administration's proposal, and the only specific proposal in the report that I will discuss in this comment, is the proposal to authorize the Federal Reserve Board to regulate any financial enterprise that creates "systemic risk." The Fed would designate the enterprise a "Tier 1 Financial Holding Company," and having done so would have the same (perhaps even greater) powers that it has over commercial banks that are members of the Federal Reserve System.
Its focus would be on "macroprudential" regulation--that is, on assuring that a failure of the Tier 1 FHC would not imperil the financial system as a whole.
The Fed would be expected to limit the leverage of these firms (the debt-equity ratio in their capital structure) and take other measures to reduce the risk of failure, for example by forbidding them to engage in proprietary trading (that is, speculating with their assets).
To prevent the gaming of this new regulatory power by firms that would go up to the very edge of whatever line was chosen to separate Tier 1 FHCs from other nonbanks, the Fed would have a broad discretion in so classifying financial firms.
Financial firms that are not commercial banks are now significantly larger sources of credit than banks, and they can create systemic risk.
An example is (or rather was, because it is now defunct) Lehman Brothers, a broker-dealer.
Lehman, among its other activities, was a dealer in the commercial paper and money-market markets.
It would issue its own commercial paper (short-term promissory notes) to money-market funds and use the money it borrowed in this manner from the funds to buy commercial paper from (that is, lend to) nonfinancial firms with sterling credit records, such as Procter & Gamble, that finance their day-to-day operations by issuing commercial paper.
When, last September, Lehman Brothers became insolvent because of losses in other parts of its business, it could not repay its loans from the money-market funds or lend money to issuers of commercial paper.
The commercial-paper and money-market funds froze, contributing to the credit crisis.
Lehman was not among the largest nonbank financial enterprises, but because of its interdependence with other participants in the overall credit market its sudden collapse had serious repercussions.
Although the Federal Reserve claims that it lacked the legal authority to save Lehman from collapsing by lending it the money it would have needed to stave off bankruptcy, the claim is unpersuasive.
Section 13(3) of the Federal Reserve Act authorizes the Federal Reserve to lend money to a nonbank provided the loan is "secured to the satisfaction of the Federal reserve bank." Lehman did not have good security for the loan it needed, but, in the emergency circumstances of a collapsing global financial system, the Fed could, it seems to me, have been "satisfied" with whatever security Lehman could have offered.
If this interpretation seems a stretch, Congress could amend the statute easily enough to add "in the circumstances" or "in the sole discretion of the Federal Reserve Board," after "satisfaction," or it could delete the reference to security altogether.
But the fact that the Federal Reserve has, as it seems to me, all the power it needs to prevent a nonbank that poses systemic risk from failing, and in failing carrying part or all of the financial system down with it, is not a rebuttal of the Administration's proposal, because the government would like to be able to prevent the collapse of such enterprises rather than having to spend tens or hundreds of billions of dollars to save them.
The first question to ask, however (it is not addressed in the Administration's report), is whether these enterprises that are not banks but might create systemic risk are already regulated.
I mentioned money-market funds, which are regulated by the SEC, as are broker-dealers.
One might think that closer liaison between the SEC and the Fed would go far to minimize the "macroprudential risk" posed by broker-dealers.
Most important, if the Federal Reserve simply identified the firms that it believes pose systemic risk, a combination of market forces, public and legislative opinion, and the implicit risk of regulation would probably impel the firms to take steps to reduce the systemic risk that they pose.
This possibility should at least be explored before the Federal Reserve is given enhanced regulatory powers.
After all, the principal reason--or so at least I think--for the financial collapse last September was that the regulators were asleep at the switch.
They are now awake, indeed insomniac.
If the Federal Reserve needs some additional staff, and perhaps authority to require financial information from financial enterprises that it does not at present regulate in order to identify the firms that pose systemic risk to the financial system, and perhaps some minor tinkering with the Federal Reserve Act to clarify its existing authority to deal with nonbank banks, these modest reforms can be adopted without restructuring the entire system of financial regulation, as the report proposes.
It is understandable why there is widespread concern with the American system of health care.
The nation spends about 15 percent of its very large Gross Domestic Product on health care, which is almost twice as much per capita as the nations that we consider our peers spend, yet outcomes, at least as measured by longevity, are no better in the United States than in those other nations, or for that matter in many much less wealthy nations.
We provide much greater health care to elderly people at the end of their life than other nations do, though without much to show for it in increased longevity.
Some 45 million people--15 percent of the population--have no health insurance, either private or public.
They are either charity patients, or pay the full price of any medical treatment they receive--or at least are charged the full price, for a common sequel to an expensive medical procedure for an uninsured patient is the patient's declaring bankruptcy in order to wipe out his medical debt.
The Administration wants every American to have medical insurance.
The details are unclear, but the thrust of the Administration's plan is that those who can afford to buy medical insurance, either directly or through their employer, would be required to do so and that those who cannot would have their insurance subsidized.
The cost to the government alone of the Administration's program is estimated by the Administration itself to be $120 billion a year.
How it will be financed remains up in the air, along with many other crucial details.
Probably part of the cost will be defrayed by limiting the tax deductibility of employer-provided health insurance.
But most of it, at least in the short run, will simply be added to the government's huge budget deficit--so huge that amounts like $120 billion are beginning to seem like small change.
The Administration claims that in the long run the aggregate cost of health care will actually fall.
Indeed, the hope is that the $120 billion annual cost will not have to be funded at all, but instead will be offset by various reforms that the Administration proposes, including digitizing health records, allocating greater resources to preventive care, and evaluating the performance of hospitals and other medical providers more carefully to determine which medical procedures are really useful, with reimbursement to providers limited accordingly.
I don't think the program makes fiscal sense.
If enacted in anything like the form that the Administration is urging on Congress, it would be immensely costly and would thus add significantly to our national debt, which is already growing at a fast clip because of the decline of tax revenues as a result of the current depression and the immense government expenditures on trying to speed economic recovery.
Ignored in estimates of the cost of the health care program is the effect of insurance on the demand for medical services.
When people, because they lack health insurance, have to pay for medical services or encounter long queues in hospital emergency rooms, they have an incentive to economize on medical treatment.
If they have health insurance, the marginal cost of treatment in excellent medical facilities falls to the cost of a deductible or copayment; and it is the marginal cost that the insured consumer of medical services confronts--the cost of the health insurance premium itself is a fixed cost, which is not affected by how much treatment the insured receives.
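The marginal-cost point can be made concrete with a hypothetical deductible-plus-copayment policy (the dollar figures and rates are assumptions chosen for illustration):

```python
def out_of_pocket(treatment_price, deductible_remaining, copay_rate):
    """Marginal cost of a treatment to an insured patient: the unmet
    portion of the deductible plus a copayment share of the rest.
    The premium is a fixed cost and does not enter the calculation."""
    deductible_paid = min(treatment_price, deductible_remaining)
    copay = copay_rate * (treatment_price - deductible_paid)
    return deductible_paid + copay

# A $10,000 procedure with $500 of deductible remaining and a 10%
# copayment rate, versus the same procedure paid for out of pocket.
cost_insured = out_of_pocket(10_000, 500, 0.10)
cost_uninsured = 10_000.0
```

Since the insured patient faces roughly a seventh of the uninsured price at the margin in this example, the demand-expanding effect of broadened coverage follows directly.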
Because the supply of medical services is not highly elastic, an increase in the demand for those services will increase average as well as total cost.
I would not object if a program of universal health insurance could be financed by reducing or eliminating the tax deductibility of health insurance.
But only a modest reduction, if that, in its deductibility is politically feasible.
The reforms that the Administration contends will not only pay for the program but also reduce the aggregate costs of health care in the United States are probably pie in the sky.
Digitization of medical records does increase efficiency: it makes it easier to change doctors, track health histories, and coordinate medical services.
But the net savings are likely to be modest or even negative, because anything that lowers the average cost of a given quality of health care increases demand, just as broadening insurance coverage does.
Preventive care--another efficiency measure touted by health-care reformers--is potentially very costly, because by definition it provides health services to people who are not yet ill.
Advances in preventive care are not limited to telling people to exercise and eat healthful foods, but increasingly are dominated by massive and costly programs of screening and follow-up.
Such programs, and the treatments that ensue for persons found to have a treatable condition, may extend life, but often this means keeping alive very sick people who will require expensive care for the remainder of their prolonged life.
An effort to create a form of benchmark competition between hospitals and between doctors, by careful evaluation of outcomes and by using the results of the evaluation to calibrate reimbursement by insurers so that the best-performing health-care providers will be rewarded and the worst punished, is likely to founder on the difficulty of adjusting for differences in outcomes that are not attributable to the efficiency of the health-care provider.
In addition, efforts to limit treatment by limiting reimbursement, especially efforts by government to do so, are deeply unpalatable both to patients and to doctors and hospitals.
A patient convinced by his doctor that a particular treatment is his only hope for continued life will not be reassured to be told that in the opinion of the government's experts, the treatment would not be cost-justified because it is very costly and is unlikely to be successful.
Insurers, and employer health-benefits plans, try to do this kind of financial triage now, but their lack of success is reflected in the enormous annual cost of American health care.
A deep problem is the replacement, in the medical profession as in the legal profession, of a professional model of service with a business model.
In the professional model, the service provider is assured a good but not extravagant income by limitations on competition, and in exchange he is expected to avoid exploiting the ignorance of patients as he could do by performing unnecessary or low-value procedures.
In the business model, the service provider endeavors to maximize his net revenues.
In the case of medicine, the disparity of knowledge between provider and patient, coupled with the fear and desperation that serious illness (or just the possibility of it) engenders, enables the profit-maximizing provider often to convince the patient to undergo costly low-value treatments.
Certainly the profit-maximizing health-care provider will be very reluctant to refuse to provide a treatment that the patient insists upon, his insistence being made convincing by the fact that insurance will pay all or most of the cost.
Insurers do try to limit their costs by refusing to approve low-value procedures--but in the face of combined pressure by provider and patient, the insurer is often forced to back down.
To return to the initial puzzle of why our peer nations are able to provide what seems, judging by outcomes, a level of health equal or superior to that of Americans at far lower cost, the only convincing answer is that the health-care providers in those nations limit treatment.
I am not sure of the explanation, but the possibilities include: the professional model is more tenacious in societies less committed to free markets and a commercial culture than the United States; more of their hospitals are public and more of their doctors are public employees, who are therefore salaried rather than entrepreneurial; and Americans, being less fatalistic than most other peoples, have a more intense demand for life-extending procedures.
These are reasons why a national health plan modeled, as the Administration's appears to be, on the health plans of peer nations with much lower aggregate health costs is unlikely to work well, or at least to generate net cost savings.
Of course if people value extension of life very highly--and there is evidence that, in the United States at least, most people do--a very costly health care system may be cost-justified, in the sense that the benefits exceed the costs.
Yet the benefits seem rather illusory, since the extra money we spend on health care does not seem to produce better outcomes.
But international comparisons of health that are limited, as they largely are, to differences in longevity are crude.
They ignore health benefits unrelated to longevity, such as those conferred by cosmetic surgery, and the possibility that the additional costs of health care in the United States enable people to live more dangerous, strenuous, or self-indulgent lives and by doing so confer utility.
Warren Buffett, who is a wit as well as a multibillionaire, said with reference to the fact that Bernard Madoff's long-running Ponzi scheme came to light during the financial collapse of last fall that until the tide goes out, you don't know who's swimming naked.
A year ago Becker and I blogged about the decline of the newspaper industry.
A year later the decline has accelerated.
The economic crisis has hurt the newspaper industry as it has so many industries.
The question is whether it will recover (or at least rejoin its slower downward path of last year) when the economy as a whole recovers, or whether the economic crisis has merely revealed the terminal status of the industry.
I am pessimistic about a recovery by the newspapers.
One reason is the current economic situation.
A serious, protracted economic crisis can result in changes in consumer behavior that persist after the end of the crisis.
A change in consumption, even in some sense involuntary, can be a learning experience.
People make what they think will be merely temporary adjustments in their consumption behavior to reduce financial distress but may discover that they like elements of their new consumption pattern; so may businesses, which have drastically reduced their newspaper (and other print-media) ad expenditures.
They may never go back.
Newspaper ad revenues fell by almost 8 percent in 2007, a surprising drop in a non-recession year (the current economic downturn began in the late fall of that year), and by almost 23 percent the following year; the decline has accelerated this year.
In the first quarter of 2009 newspaper ad revenues fell 30 percent from their level in the first quarter of 2008.
This fall in revenue, amplified by drops in print circulation (about 5 percent last year, and running at 7 percent this year--and readership is declining in all age groups, not just the young), has precipitated bankruptcies of major newspaper companies and, more important, the disappearance of a number of newspapers, including major ones, such as the Rocky Mountain News and the Seattle Post-Intelligencer.
Falling revenues have led to layoffs of some 20,000 employees of the remaining newspapers.
Print journalism has come to be regarded as a dying profession.
Online viewership and revenues have grown but not nearly enough to offset the decline in ad revenues.
Even the most prestigious newspapers, such as the New York Times, the Wall Street Journal, the Washington Post, and USA Today, have experienced staggering losses.
News, as well as the other information found in newspapers, is available online for nothing, including at the websites of the newspapers themselves, which thus are giving away their content.
The fact that online viewing is rising as print circulation is falling indicates a shift of consumers from the paid to the free medium.
The economic downturn has doubtless accelerated the trend, but economic recovery is unlikely to reverse it.
To repeat my earlier point, many of the people who have switched under economic pressure to the free medium may find themselves as happy or happier and hence will not switch back when their financial condition improves.
Moreover, while in many industries a reduction in output need not entail any reduction in the quality of the product, in the newspaper industry it does.
Most of the costs of a newspaper are fixed costs, that is, costs invariant to output--mainly journalists' salaries.
A newspaper with shrinking revenues can shrink its costs only by reducing the number of reporters, columnists, and editors, and when it does that quality falls, and therefore demand, and falling demand means falling revenues and therefore increased pressure to economize--by cutting the journalist staff some more.
This vicious cycle, amplified by the economic downturn, may continue until very little of the newspaper industry is left.
So what will happen to news and information? Online news is free for two reasons.
First, in the case of a newspaper, the marginal cost of providing content online is virtually zero, since it is the same content (or a selection of the content) in a different medium.
Second, online providers of news who are not affiliated with a newspaper can provide links to newspaper websites and paraphrase articles in newspapers, in neither case being required to compensate the newspaper.
As newspaper revenues decline, newspaper content becomes thinner and thinner--but by the same token so does the linked or paraphrased newspaper content found in web sites that have no affiliation with a newspaper.
If eventually newspapers vanish, online providers will have higher advertising revenues (because newspaper advertising will have disappeared) and may decide to charge for access to their online news, and so the critical question is whether online advertising revenues will defray the costly news-gathering expenses incurred at this time by newspapers.
Imagine if the New York Times migrated entirely to the World Wide Web.
Could it support, out of advertising and subscriber revenues, as large a news-gathering apparatus as it does today? This seems unlikely, because it is much easier to create a web site and free ride on other sites than to create a print newspaper and free ride on other print newspapers, in part because of the lag in print publication; what is staler than last week's news?
Expanding copyright law to bar online access to copyrighted materials without the copyright holder's consent, or to bar linking to or paraphrasing copyrighted materials without the copyright holder's consent, might be necessary to keep free riding on content financed by online newspapers from so impairing the incentive to create costly news-gathering operations that news services like Reuters and the Associated Press would become the only professional, nongovernmental sources of news and opinion.
I agree with Professor Bebchuk of Harvard, and others, that there is a problem with the compensation of top executives at publicly held corporations (that is, corporations in which ownership is widely dispersed), so that control resides in the board of directors.
The problem is that the individual directors do not have strong incentives to limit the pay of the CEO and other top executives.
By limiting his and their pay, the board would narrow the field of selection, and if the company got into trouble they would be criticized for having been penny wise and pound foolish in resisting their "compensation consultant's" advice to pay top dollar.
In addition, the directors often owe their lucrative directorships, and their continuation in them, to the CEO.
The movement toward "independent" directors (as distinct from directors who are officers of the corporation) does not cure the incentive problems, but rather compounds them by making the board less knowledgeable about the corporation.
So there is a basis for concern with the compensation of top management in publicly held corporations, but it is not a momentous concern, and costly measures to allay it would not be justifiable.
Modest measures, such as making it easier for shareholders to replace directors than under the existing, Soviet-style system in which shareholders vote for or against the slate proposed by management, and requiring full disclosure and monetization of all forms of compensation paid to CEOs and other top executives, may be sensible; but nothing more should be attempted.
The solving of the overcompensation problem would have little if any effect on risk taking by bankers and other financiers, so probably any efforts to solve it should be postponed until the economy recovers from its present sickness.
A distinct problem is that of compensation of executives of firms that are owned or controlled by the federal government, such as General Motors, American International Group, Fannie Mae, and Freddie Mac, and (or) that are recipients of federal bailouts.
These are troubled firms, and the concern is that management may try to funnel the federal moneys that the firms have received into dividends and bonuses so that shareholders and executives will be protected should the company fail completely.
The danger in other words is that when a firm is teetering on the edge of bankruptcy, management may stiff the firm's creditors by funneling some of the firm's remaining assets to managers and shareholders.
The time to deal with this problem, however, is when the bailout is made; suitable conditions can be attached to it.
To instead appoint a "pay czar" to deal with executive salaries of bailout recipients on an ad hoc basis creates all the problems that Becker discusses.
These problems are especially grave with regard to General Motors and Chrysler, as these are fast-failing firms that need to be able to offer high salaries to attract able executives.
Between efforts by the "pay czar" to limit these companies' flexibility in compensation, and the efforts by Congress to limit the companies' ability to import vehicles and close plants and dealerships, the government is doing its best to minimize its chances of ever recovering its $60 billion investment in the two firms.
This is called shooting oneself in the foot, or, alternatively, politics as usual.
Still another distinct problem is that of compensation practices of banks and other financial intermediaries.
Here the problem is not the compensation of top management, but the compensation of traders and other investment officers at the operational level.
The concern is that compensating them on the basis of the profitability of the individual deals that they make motivates them to take excessive risks.
Suppose a deal has a positive expected value, but there is a 1 percent chance that it will fail in a way that imposes heavy costs on the corporation, and perhaps, because of the chain-reaction effect of the failure of a major bank (as we saw last September, when Lehman Brothers went broke), on the financial system as a whole.
The trader who makes the deal may not worry much about that risk, because a 1 percent annual risk of disaster is very unlikely to materialize in the short run; the probability that an annual risk of 1 percent will materialize in 10 years is only 10 percent (actually a shade less).
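The arithmetic behind that figure can be checked in a few lines; this is a minimal sketch, assuming (as the compound-probability formula requires) that the risk is independent from year to year:

```python
# Probability that a small annual risk of disaster materializes at
# least once within a given horizon, assuming independence across years.
def prob_at_least_once(annual_risk: float, years: int) -> float:
    # Complement of the event "no disaster in any year."
    return 1 - (1 - annual_risk) ** years

# A 1 percent annual risk over 10 years: "only 10 percent (actually a
# shade less)," as the post says.
print(f"{prob_at_least_once(0.01, 10):.4f}")  # 0.0956
```

The same function shows why the firm's exposure is so much larger than the individual trader's: a book of 100 such independent deals faces a roughly 63 percent chance that at least one blows up in a given year.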
Financial firms that worry, as they should, about such a catastrophic risk (since the firm makes many deals, which multiplies the risk of disaster) typically try to reduce it by employing "risk managers" who review proposed deals.
Because this method of limiting risk failed to avert the financial collapse of last September, there are suggestions that it be supplemented or replaced by rules limiting the cash bonuses paid to traders, instead compensating them in restricted stock of the corporation, which they cannot sell for a number of years, or authorizing the corporation to "claw back" any bonus they receive should the risk involved in one or more of their deals later materialize and reduce or eliminate the profit that the corporation made on the deals.
It might seem that top management would have all the incentive it needed to prevent its subordinates from taking risks that would jeopardize the solvency of the company.
But that is not true, because the private cost of bankruptcy is truncated by limited liability (the shareholders cannot be forced to pay the corporation's debts), but the social cost, as we have learned, can include a devastating global economic shock.
An external cost is a conventional justification for regulatory intervention--in principle.
But the specific suggestions for curbing risk taking by traders are problematic.
There are many influences on the value of a corporation's stock besides the outcome of a particular deal, and a claw-back possibility can greatly reduce the present value of a bonus, as well as complicating the recipient's tax and other financial planning.
I conclude that it is premature to start regulating compensation practices in the banking industry; there are other ways of reducing financial risk that are less problematic.
Notice that this problem has nothing to do with boards of directors' inability under existing rules to control the compensation of top executives, because traders are not top executives.
Management has no incentive to overpay its subordinates! Nor has this problem anything to do with government ownership or control, or a risk of insolvency that might induce top management to try to appropriate a firm's remaining assets.
Any monkeying by government with compensation practices, especially below the top level of management and especially in financial firms, will impair the ability of American firms to compete with foreign firms.
The banking business is thoroughly international, and unless all countries act in lock step with the United States in regulating compensation practices, many of our ablest financiers will be lured to foreign banks.
One can only hope that the appointment of a "pay czar" is merely a sop to ignorant public and congressional opinion, and that Mr. Feinberg will be suitably restrained in the exercise of his powers.
Secretary of the Treasury Geithner seems unenthusiastic about the government's imposing more than cosmetic changes on corporate compensation practices.
More power to him.
Sub-Saharan Africa became and remains the world’s poorest region in the post-colonial era.
This is generally attributed to bad governments and to foreign aid, the latter because it enables countries to defer necessary reforms and enriches (and thereby helps to entrench) the countries’ generally bad (inept, vicious, corrupt) governing class.
Beginning in the mid-1990s, however, economic growth rates in the sub-Saharan countries rose briskly, reaching 6 percent in the 2000s (and this after having dropped between 1980 and 2000).
This rate is not as impressive as it seems, because the sub-Saharan African population is growing by more than 2 percent a year.
The increase in per capita income (less than 4 percent) is a more meaningful indicator of economic development.
The economic growth rate dropped to 2 percent in 2009 as a result of the global economic crisis, which means that there was a decline in per capita income, but it is expected to reach 4 percent this year.
(On the economic improvement in Africa since the mid-1990s, see Jorge Arbache et al., “Is Africa’s Economy at a Turning Point?” (World Bank Feb. 2008), http://www-wds.worldbank.org/external/default/WDSContentServer/IW3P/IB/2008/04/14/000158349_20080414093531/Rendered/PDF/wps4519.pdf; on the impact of the current crisis on the economies of the sub-Saharan African nations, see “Sub-Saharan Africa: Back to High Growth” (IMF Apr. 2010), http://www.imf.org/external/pubs/ft/reo/2010/afr/eng/sreo0410.pdf.) Sub-Saharan Africa was not hit as hard by the economic crisis as the other regions of the world because its financial sector is small.
Most of the capital that flows into the region comes in the form of foreign aid or foreign direct investment, rather than loans, and so the region was less affected by the credit crunch that followed the banking crisis of September 2008 than the rest of the world was.
A major factor in the region’s increased growth rate since the mid-nineties has been increased demand for commodities, such as oil and gold, which are major African exports, by China, India, and other rapidly developing countries; the increased demand has resulted in higher prices for these commodities.
Many sub-Saharan African countries are net importers of commodities, and thus have been hurt by the higher prices.
The countries that are the major commodity exporters, such as Nigeria, Angola, and South Africa, have grown at faster rates on average than the other countries of the region.
Some of these other countries, however, have had fast rates of economic growth as well, and this may be due mainly to improvements in their governments in the areas of protection of property rights, curtailment of corruption, encouragement of private enterprise, and management of monetary and fiscal policy.
Will sub-Saharan Africa, having as I said weathered the global economic crisis rather well, take off economically and catch up with other regions of the world, such as East Asia, which a half century ago was poorer than sub-Saharan Africa? Perhaps so, but I have my doubts.
To begin with, I have no idea how accurate these countries’ economic statistics are; the Greek debacle has reminded us of the importance of determining the accuracy of economic statistics before opining on a country’s economic performance.
Moreover, the higher growth rates of the sub-Saharan African countries in recent years may be in part an artifact of the very substantial increase in foreign aid (roughly a tripling between 2000 and 2008) to those countries.
And exports of raw materials (other than farm products) are not a very promising route to prosperity.
Often they do not create a great many jobs in the exporting nation, in which event most of the income from the exports goes either to the owners, many of whom are foreign, or to the governments of the exporting nations—and that means, in the case of corrupt governments, into the pockets of government officials.
Commodity prices are volatile, moreover, and no one can know whether they’ll be higher in real terms a decade from now.
Levels of education and health are very low in sub-Saharan African countries; life expectancy is low and is actually declining; productivity is very low; fertility though declining remains very high; poverty of course is widespread; ethnic conflict (often violent) and political violence are common; corruption is endemic; opportunities for women are meager.
These impediments to economic growth will probably change very slowly because they are deeply rooted in African culture.
And the rest of the world will not stand still while they change.
Conservative economists believe that the basic regulatory mechanism of a capitalist economy should be antitrust law, designed to preserve competition; because as long as a market is competitive, the self-interest of producers and consumers should operate to maximize the value of the market’s output.
Of course if the activity of the market participants produces external costs or benefits (costs or benefits to nonparticipants, as in the case of pollution), competition will not optimize output.
Output will be too large from an overall social standpoint if the externality is a cost, as in the pollution example, and too small if the externality is a benefit, as in the case of education; hence the government’s subsidization of education.
A college that is operated for profit might seem a worthy object of subsidization.
That was the view of the Bush Administration and led to the relaxation of a number of restrictions on the subsidization of such colleges, and so the number of students enrolled in them soared; it is now almost one million.
Under what is known as the “Title IV” program, the government makes loans to college students to finance their education.
Students at for-profit colleges are eligible for these loans.
Title IV loans to such students have increased by 500 percent since 2000 and now amount to $26.5 billion a year, which is more than a quarter of all Title IV loans.
A for-profit college may not derive 90 percent or more of its revenue from such loans, and the default rate of its students may not exceed 25 percent for three consecutive years.
The for-profit colleges tend to keep just below the 90 percent and 25 percent ceilings, which means that the bulk of their revenue is derived from the federal government and that the rate of default of their students on the federal loans is very high.
Defaulting on a student loan is particularly painful to the defaulter, moreover, because the unpaid balance of a federal student loan can’t be discharged in bankruptcy.
Still, even the government can’t squeeze water out of a stone, so many of the defaulted loans are never repaid.
The default rate is so high because the dropout rate from for-profit colleges is so high; it probably exceeds 50 percent on average.
(The overall college dropout rate is high too—about a third—but considerably lower than the for-profit college dropout rate.) In 2007, students at for-profit colleges were only 7 percent of all students in higher education (they are now close to 10 percent), yet they were 44 percent of all students who defaulted on their federal loans.
Steven Eisman is a very able hedge fund manager who was one of the few financiers to spot (and profit from) the financial collapse that crested in September 2008.
He is one of the heroes depicted in Michael Lewis’s fine recent book, The Big Short.
Eisman believes that there is an uncanny resemblance between the financial situation of the for-profit colleges and that of the banks before the collapse.
See Steven Eisman, “Subprime Goes to College,” May 26, 2010, www.marketfolly.com/2010/05/steve-eisman-frontpoint-partners-ira.html.
In both cases loans—mortgage loans in the bank case, student loans in the for-profit college case—were made to people who were at high risk of defaulting, and in both cases “rating agencies” (credit-rating agencies in the case of the banks, college accreditation agencies in the case of colleges), were afflicted with a conflict of interest because they were paid by the institutions whose securities (in the case of the banks) or educational programs (in the case of the colleges) they were rating.
(For criticism of the accreditation agencies, see Melissa Korn, “New Scrutiny of Groups That Accredit Universities,” Wall Street Journal, June 7, 2010, p. A8, http://online.wsj.com/article/SB20001424052748703340904575285014094515910.html.)
Eisman thinks that the federal government is likely to lose $275 billion on its Title IV loans over the next decade.
These defaults will not have the macroeconomic consequences of the financial collapse, but they will slow our economic recovery and increase the federal deficit.
The government is concerned.
The Department of Education has proposed denying eligibility for federally financed student loans to students who cannot repay their loans within 10 years by annual payments of no more than 8 percent of their starting salary.
See Tamar Lewin, “Facing Cuts in Federal Aid, For-Profit Colleges Are in a Fight,” New York Times, June 6, 2010, p. A21, www.nytimes.com/2010/06/06/education/06gain.html?scp=1&sq=Eisman&st=cse.
This would mean, for example, that a student whose first job paid $18,000 a year could borrow no more than $10,000.
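The $10,000 figure can be roughly reproduced as the present value of the capped repayment stream; this is a sketch, assuming a 7 percent interest rate on the loan (the rate is my assumption, not given in the proposal):

```python
# Maximum loan consistent with the proposed rule: repayable within
# `years` years by annual payments of no more than `share` of starting
# salary. The maximum principal is the present value of that payment
# stream. The 7 percent rate is an illustrative assumption.
def max_loan(salary: float, share: float = 0.08, years: int = 10,
             rate: float = 0.07) -> float:
    payment = salary * share  # annual payment cap
    # Present value of an ordinary annuity of `payment` for `years`.
    return payment * (1 - (1 + rate) ** -years) / rate

# Starting salary of $18,000: about $10,000, as in the post.
print(round(max_loan(18_000)))  # 10114
```

A higher assumed interest rate would push the cap somewhat below $10,000; the post's round number is consistent with a rate in the neighborhood of 7 percent.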
The Department of Education has delayed action on the proposal, apparently in response to lobbying by the for-profit colleges.
The proposal may eventually be watered down, if not abandoned outright, because no interest group has a big stake in shrinking the industry.
If it were adopted, this “gainful employment” rule, as it is called, might reduce student enrollments at such colleges by a third, by driving a number of for-profit colleges out of business.
It would also lead to reductions in tuition for students in the surviving for-profit colleges by reducing the amount they could borrow from the government to pay college tuition.
Despite their high drop-out rates, these colleges charge high tuition (often higher than public colleges charge) because the students can borrow most of it.
The colleges are also very profitable, so most of them will be able to survive with lower tuition—which is a bit of a puzzle, since one expects competition to drive the average price of a product or service down to cost (including an allowance for profit, viewed as the cost of equity capital).
It is possible, however, that the industry faces a sharply rising average-cost curve, so that the costs of the efficient firms are lower than the market price.
In addition, demand for for-profit college education has been rising rapidly, and when demand for a product rises at a fast rate profits may rise because of delay in expanding supply.
The aggregate cost of the for-profit college industry is great.
The $275 billion default cost to the federal government anticipated by Eisman is not a cost in the economic sense, but a transfer; it is money that goes from the government to the students to the colleges and stops at the colleges rather than being repaid by the students.
But we cannot be insensitive to large government transfers, because they increase the federal deficit at a time when the national debt is growing at an alarming rate from an already very large base.
These transfers are not costs but they give rise to costs.
The (other) costs of the industry, consisting of the opportunity costs of the teachers (and other staff) and the students, the considerable marketing expenses that the colleges incur to build enrollment, and other expenses, are substantial, and the question is whether they generate commensurate benefits.
Assuming that almost 90 percent of the industry’s revenues are the federal student loans and that the industry’s total costs are 90 percent of its revenues—the rest being profits in excess of the cost of equity capital—the total annual costs of the industry are equal to the student loans: $26.5 billion.
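The cancellation in that back-of-the-envelope calculation (the two 90 percents offset each other) can be made explicit:

```python
# Back-of-the-envelope from the post: if federal student loans are
# about 90 percent of the for-profit college industry's revenues, and
# total costs are 90 percent of revenues (the rest being excess
# profits), then total costs approximately equal the loans themselves.
loans = 26.5e9            # annual Title IV loans to for-profit students
revenues = loans / 0.90   # loans are ~90 percent of revenues
costs = 0.90 * revenues   # costs are ~90 percent of revenues
print(f"${costs / 1e9:.1f} billion")  # $26.5 billion
```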
The benefits conferred by the for-profit college industry would consist in the first instance of any increased income of the graduates of these colleges (or of the drop-outs, assuming they attend for a significant time before dropping out) as a result of their having attended college, but also of the nonpecuniary benefits of their college experience and the benefits to society as a whole of a more educated population; the second and third types of benefit are impossible to measure, however.
It’s hard to believe that the dropouts obtain benefits commensurate with the costs, especially when we consider their opportunity costs—they might have been working and earning an income rather than attending college—and the interest and other costs to them of the loans, which remember they can’t shuck off by declaring bankruptcy.
Among the graduates there are many defaults as well, which suggests that they don’t gain a lot from college attendance so far as bolstering their incomes is concerned.
So it is quite likely that on balance the costs of the for-profit colleges exceed the benefits, and that costs and benefits would be brought into closer alignment if the new 10-year, 8 percent rule were adopted.
The big puzzle is why (to return to my opening point) the for-profit college market is not self-regulating—why, for example, for-profit colleges don’t emerge that set higher entrance standards and as a result can advertise truthfully that their students are less likely to drop out and therefore more likely to derive a net benefit from attending.
Stated differently, if the 10-year, 8 percent rule is optimal, why doesn’t competition drive the industry to that level without the government’s having to intervene? For remember that most for-profit colleges would survive under such a regime, and some surely would thrive—in fact would be able to charge higher tuition than they do now because they would be offering a better product.
Does anyone think that Harvard is hurting itself by having very high entrance standards? (The dropout rate at Harvard is 3 percent, and some of the dropouts, like Bill Gates and Mark Zuckerberg, go on to become billionaires.)
The solution of the puzzle may be, as Eisman argues, that the for-profit college industry, which is at a disadvantage in competing against nonprofit colleges because of the tax advantages, donor income, and direct state and federal support of nonprofit (including public) colleges, has targeted a class of people who cannot gain admission to those colleges because they do not meet their entrance standards.
There is evidence that just as in the case of the marketing of mortgage loans during the housing bubble of the early 2000s, the for-profit colleges use aggressive advertising to attract students from low-income families that lack financial sophistication and the ability to evaluate the benefits of attending a for-profit college.
These people—who may be the only people who would consider a for-profit college, because no other college would admit them—almost by definition have little information about higher education and are therefore prey to skillful marketing that even if literally truthful may create a misleading impression of the benefits of attendance at a for-profit college.
For-profit colleges often pay recruiters by the number of enrollments that a recruiter generates.
(The Department of Education is trying to prevent that with a new regulation.) Recruiters have been known to recruit at homeless shelters.
An alternative possibility, however, is that most of the people who attend a for-profit college understand the risk of failure but prefer to gamble on succeeding in obtaining a college degree and using the credential and what they have learned to obtain a much better job as a result—a job that will enable them to repay their loan and derive a net benefit from having borrowed it.
(This is likewise a theory of why during the housing boom so many people took out adjustable rate mortgage loans, or home equity loans, that they could not “afford”—they were gambling, many with their eyes open.) College graduates earn substantially higher salaries than less-educated workers, but it is doubtful whether, in the aggregate, graduates of for-profit colleges earn enough more to compensate for the costs and the dropout risk.
As Becker explains, advances in medical technology (which are very costly) and (related to those advances) increases in longevity, create the prospect of very great increases in social security and Medicare outlays in future years.
The prospect is increasingly worrisome because of the large annual federal deficits that the nation has been running and the resulting increase in the national debt.
Although rapid economic growth would (as Becker has emphasized in previous posts) make the debt manageable, we may very well be facing a longish period of below-average economic growth as a result of persistent high unemployment, economic uncertainty, and anti-growth public policies such as encouraging unionism.
Becker sets forth a program for reining in the growth of entitlement expenditures.
The question I wish to address is the political realism of that or alternative programs for limiting such growth.
It is extremely difficult to marshal political support to deal at present with future problems.
Future problems by definition are not felt in the present, and so it is difficult to mobilize public opinion in support of solutions unless they are costless.
And politicians have a short time horizon, which means that they will not benefit politically from measures that impose present costs but yield benefits to the voting public after the politician has left office.
If, however, the costs can be deferred to the future as well, the current public may not object to a measure that confers benefits only in the future.
That was the approach of the 1983 social security reform, which raised the age of full entitlement to social security benefits from 65 to 67 gradually for persons born in or after 1938—and the oldest of them were only 45 when the reform was enacted.
And only persons born in 1960 or later would have to wait until they were 67 to be entitled to full benefits; they were only 23 in 1983.
Both the costs (in reduced entitlements) and the benefits (in reduced entitlements expense) were pushed into the relatively distant future.
The discounted present effects were thus slight, and so the reform was relatively uncontroversial.
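The political logic here is simple discounting: a cost pushed 30 or 40 years into the future has a trivial present value to today's voters. A minimal sketch, using an illustrative 5 percent discount rate (my number, not the post's):

```python
def present_value(amount, years_deferred, rate):
    """Discounted present value of a single amount paid (or lost)
    years_deferred years from now, at the given annual discount rate."""
    return amount / (1 + rate) ** years_deferred

# A $1,000 benefit cut felt today versus the same cut deferred 40 years,
# as in the 1983 reform's treatment of workers then aged 23.
cost_today = present_value(1000, 0, 0.05)       # 1000.0
cost_deferred = present_value(1000, 40, 0.05)   # roughly 142
```

At a 5 percent rate the deferred cut is worth about a seventh of the immediate one, which helps explain why the 1983 reform was relatively uncontroversial.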
The problem with repeating such a measure now is that we probably can’t afford to defer entitlements reform for 30 or 40 years.
Suppose Congress increased the age of full entitlement to social security benefits to 70 for persons who are 23 years old today.
The reform would not take full effect until they reached 67, which would be 44 years from now, although there would be some savings earlier if, as with the 1983 reform, a gradual rise in the retirement age began for persons who are 45 years old today; they would first begin to feel the reform in 22 years, when they reached 67.
The same problem of delay would afflict a reform of Medicare designed to raise the eligibility age for Medicare in tandem with increasing the social security retirement age.
Moreover, while young people can view with relative equanimity the prospect of having to work a couple of years more for full social security benefits—which are anyway rather meager—I don’t think they’d feel the same way about losing Medicare in their late sixties.
A big difference is that there is no ceiling on Medicare benefits, and the expected benefit of receiving them is therefore much greater than the expected benefits of social security—and young people know it.
It seems to me that, in the short run, the only realistic measures for reining in social security and Medicare are a combination of higher payroll taxes and means testing.
In the long run, social security and Medicare benefits will be cut to affordable levels when the United States finds itself in the same desperate fiscal situation as Greece, but probably not before.
The only hopeful note is that the current widespread public concern with deficits may embolden some legislators and even the Administration to propose strong measures to head off a fiscal crisis exacerbated by entitlements spending.
The phenomenon of regulatory capture—the transformation of a regulatory agency into an anticompetitive tool of the regulated industry—is real, but I think Fannie Mae and Freddie Mac are more accurately regarded as examples, though no less unlovely, of something else: a capitalist-socialist hybrid.
They were not regulatory agencies; until they collapsed during the financial crisis of 2008 and were taken over by the federal government, they were private corporations that had been chartered by Congress to promote home ownership.
Their status as GSEs (government-sponsored enterprises) created an expectation that the government would guarantee their debts.
This expectation enabled them to borrow at lower interest rates than other private corporations.
They were supposed to promote home ownership by buying or guaranteeing home mortgages.
They did that; they also pioneered mortgage securitization—in effect turning mortgages into bonds, which are more liquid than mortgages and so could be sold all over the world, bringing more capital into the U.S.
residential real estate market, thus promoting home ownership, just as Congress wanted.
Because of the low interest rates they paid, Fannie and Freddie were immensely profitable until the financial crisis brought them down.
As Becker points out, Fannie and Freddie were effective in obtaining congressional and presidential assistance to ward off threats to their activities and their profits.
But I don’t think that that assistance, unseemly as it was, and perhaps corrupt as well, was the basic problem of Fannie and Freddie, or the cause of their collapse; nor do I think their collapse was of any great consequence for the nation.
I don’t think there was ever a good reason to promote home ownership over renting (so I would favor the abolition of the deductibility of mortgage interest from federal income tax).
It ties up a lot of the capital of individuals and reduces labor mobility.
Maybe it makes for more responsible citizens by giving people a property interest, but there must be better candidates for federal largesse.
And even if there were a good reason for government to promote home ownership, federal chartering of mortgage institutions would not be a sensible means of implementation.
Are the external benefits of home ownership, if any, so great that the mortgage-interest tax deduction is not subsidy enough? True, the lower the interest rates that Fannie and Freddie paid to borrow money, the riskier the mortgage loans they would agree to underwrite by buying or guaranteeing the loans, but home ownership is not promoted in any meaningful sense by the granting of mortgages to people likely to default.
Conservative critics led by Peter Wallison of the American Enterprise Institute lay on Fannie and Freddie a significant measure of blame for the housing bubble of the early 2000s and the ensuing financial crisis of September 2008.
But these critics have not persuaded me.
Private banks like Morgan Stanley and Goldman Sachs and Countrywide bought mortgages, securitized them, and sold interests in them (these firms also bought mortgage-backed securities created by other financial firms)—a sequence wholly separate from the activities of Fannie and Freddie.
It was an immensely profitable activity, so there is no reason to think that had there been no Fannie and Freddie the volume of mortgage-backed securities would have been less than it was.
Whether a market has X firms or X – 1 firm is unlikely to affect the volume of market activity.
I don’t think Fannie and Freddie took more risks than their competitors; the difference is that they were more deeply committed to the housing market (that was their mission) than most other firms, so less likely to survive a housing bubble.
The financial crisis might actually have been worse without Fannie and Freddie.
They collapsed and were simply taken over by the federal government.
Had their debts instead been debts of Morgan and Goldman and other private banks, those banks might have collapsed and been taken over by the federal government as well, providing daunting challenges to the government’s ability to run the banking system.
The cost to society of the government’s taking over Fannie and Freddie is hard to estimate.
The takeover resulted in a transfer payment to creditors of Fannie and Freddie from (ultimately) the federal taxpayer.
Had there been no Fannie or Freddie, other mortgage companies would have had more debt, and the owners of that debt would also have been bailed out by the federal government, in all likelihood.
Congress would do well to abolish Fannie and Freddie.
But it won’t.
The constellation of political forces that supports subsidizing home ownership is too strong.
But if Fannie and Freddie are not at the root of the financial collapse and ensuing depression (as I insist we should call it, eschewing the tepid euphemism “recession”), this is not to exonerate the government.
I would go so far as to contend that the government is entirely to blame for the crisis (see my book The Crisis of Capitalist Democracy [Harvard University Press, 2010]).
The housing bubble was enabled to expand to bursting point by unsound interest rate policies followed by the Federal Reserve in the early 2000s, under the chairmanship of Alan Greenspan.
Interest rates were kept too low, and because a house is a product bought mainly with debt, low interest rates reduce the cost of acquiring a house.
The result was a surge in demand for housing, forcing up price because the housing stock cannot be rapidly expanded.
(The Fed controls only short-term interest rates directly, but short-term interest rates influence long-term rates; moreover, interest rates on adjustable-rate mortgages are in fact short term, and those mortgages became immensely popular during the bubble, as they facilitated speculation on rising house prices.) House prices rose so steeply that the increases became self-sustaining (that’s the bubble phenomenon).
The Fed and the other banking agencies were oblivious to the bubble, and the Fed (by then under Bernanke) made a disastrous error by allowing Lehman Brothers to fail after having led the banking industry to believe that the Fed would not permit a major bank failure—whereupon the industry lost all confidence in government policy.
And finally financial deregulation had gone too far, enabling—and by virtue of competitive pressure compelling—banks and other financial firms to take risks that were excessive from a social though not private standpoint.
But the role of Fannie Mae and Freddie Mac in all this probably was minor.
A great many colleges give preference in admission to children of alumni.
(This includes public colleges because few colleges any more rely entirely on public funding.) It is widely believed, with some evidence, that alumni give more generously to their alma mater if their children are admitted to, and especially if, having been admitted, they attend, the college that they (the alumni) attended.
This is the passionate belief of admissions officers, and if it were false the practice would probably have been abandoned, because it attracts a good deal of criticism.
The “legacy” preference is not a mere tie breaker; as in the case of preferences for blacks, Hispanics, and athletes, it amounts in effect to pretending that the applicant had a substantially higher score on the Scholastic Aptitude Test than he or she actually had.
The preferences for blacks and Hispanics are defended on the ground (possibly spurious in most cases, reflecting pressure from government rather than educational theory) that they are better than their high-school records or SAT scores suggest, because of a disadvantaged upbringing; in fact the colleges, at least the elite ones, try to admit the nondisadvantaged members of these minorities because they are more likely to graduate.
But the athletic and legacy preferences have strictly financial motivations.
Alumni donations amount to some 28 percent of total college revenues, and both athletic prowess and admission of alumni offspring are considered important to alumni donations.
The practice of favoring alumni children seems to be particularly strong in elite colleges, though statistics are hard to come by because most colleges will not reveal the admission rate of alumni children relative to the admission rate for applicants who do not belong to preferred minorities, are not good athletes, and are not alumni children.
Not that such statistics would be meaningful by themselves, at least in the case of alumni children admitted to elite schools.
By definition one of the parents attended such a school, namely the one their child has applied to; and given assortative matching in marriage, there is a good chance that the other parent went to the same, or another, elite school, as well.
Hence their children are probably of above-average quality (unless their parents were second- or third-generation legacy admits), in which event they would be admitted at a higher rate than non-legacy applicants even without legacy preference.
What one needs to know is how great the preference for alumni children is and therefore how many are admitted who wouldn’t be without the preference.
We don’t know how great it is, but it must be significant; otherwise colleges wouldn’t bother to give the preference—most alumni children would be admitted without it.
Admitting an applicant to an elite college because one or both of his parents went there is offensive in a nation whose constitution forbids the granting of titles of nobility.
It confers an arbitrary advantage, based on ancestry rather than on merit or promise.
The advantage is not primarily that the preferentially admitted student receives a better education; he may not, because the fact that he would not have been admitted without a preference suggests that he may not be a motivated or talented student; and you cannot make silk purses out of sow’s ears.
But his degree from an elite college will open doors; prospective employers, even if they discover that an applicant is the child of an alumnus of the college he attended, will not know whether he was one of the alumni children who wouldn’t have been admitted had it not been for the connection.
The student will also derive social benefits (prestige, maybe some extra refinement) and valuable contacts from hobnobbing with the other students; elite colleges have a disproportionate number of elite students.
The same argument is made against allowing gifts and bequests by wealthy parents (or, more realistically, against light or no taxation of such transfers): that they confer an arbitrary advantage.
It is further argued that they blunt the children’s ambition.
But there are counterarguments, which cumulatively are compelling: heavy taxation of gifts and bequests increases consumption at the expense of investment; reduces parents’ utility and their incentive to work hard; weakens family bonds; and reduces constructive risk taking by young people by reducing the number of young people who have financial security.
Abandoning legacy admissions would have much smaller effects.
The major one would be a reduction in alumni giving, and no one knows how much of a reduction there would be, or the extent to which it could be offset by more vigorous solicitation of alumni donations.
But the benefits from abolishing legacy admissions would also be slight, if I am right that one of the principal benefits to the legatees is social—the benefits from rubbing shoulders with elite students.
For the students whom the alumni children displace at the elite schools still go to college.
They just will drop down a notch in the college pecking order; and their presence in colleges at that level will be a boon to the other students in those colleges.
Furthermore, if one asks who exactly will replace the alumni children if legacy admissions disappear, the answer—better qualified students—must itself be qualified by recognition that parents invest money to improve their children’s college admission prospects, and their investment may enable a somewhat less able applicant to be admitted in preference to an abler one.
The parents may have financed a highly expensive elementary and high school education for their children at an elite private school, hired tutors for them, and financed extracurricular activities of the kind that impress admissions officers.
The greater the inequality of income in the society—and it is very great in our society at present—the more that wealthy people will be able to gain access to elite colleges for their children, and by doing so underwrite the children’s future economic success and social prestige.
This point suggests that, as long as colleges are private (including de facto private, which is increasingly the case with public colleges, many of which, like the University of Michigan, receive only a small fraction of their money from the state), they will follow a business model, and a business model of education is inherently tilted in favor of applicants who may not be the most promising students, whether they are members of preferred minorities, athletes, children of very wealthy people even if neither parent is an alumnus (such children are favored by many colleges), children with an impressive resume attributable in significant part to their parents' investment in them, or alumni children.
Although most colleges—and all elite colleges—are nonprofit institutions, the only difference between most nonprofit institutions and for-profit institutions is that the former cannot distribute surplus revenue (profit) to the people who supply them with capital: there are no shareholders or dividends.
They are no less competitive than for-profit institutions.
One can imagine a system in which all education, from kindergarten to graduate and professional school, would be financed by the government, but few would argue that such a system would be an improvement over what we have, even though what we have has many unattractive features.
The ultimate question from an economic standpoint is whether legacy admissions create a harmful externality, which is to say impose a cost on society that the parties to the practice that causes the externality do not take into account.
The college and its legacy admits like the practice of legacy admissions and do not want to change it; should they be forced to because of the harm the practice causes to other members of society?
I think not.
The harms are of two kinds.
First, legacy admissions undermine the matching of the best students with the best colleges.
That kind of “assortative mating” is probably most efficient from the standpoint of maximizing the gains from education; and education creates benefits for society as a whole.
With legacy admissions, there are fewer places for the best students in the best colleges; some of those best are relegated to the next tier of colleges.
On the other hand, students learn from other students as well as from teachers, and those “best” who are knocked down a tier thus benefit their fellow students in the second-tier colleges to which they are admitted.
The net loss to educational efficiency caused by legacy may well be slight—it may even be negative if abolition of legacies would result in a significant reduction in alumni donations and therefore in college resources.
Second, legacy admissions increase inequality, or at least may increase it (this qualification is explained below).
Inequality, beyond a point we may have reached, is harmful to society as a whole; the enormous federal deficit is in part a result of efforts by the Bush Administration as well as the Obama Administration to offset wage stagnation and resulting increased inequality by heavy borrowing to fund entitlement programs.
But the increase in inequality brought about by legacy admissions is probably infinitesimal.
Moreover, legacy admissions may actually reduce inequality, because the abler person pushed down to a lower-tier school may have lower lifetime earnings as a result, and if as is likely he is upper middle class, the reduction in his lifetime earnings will reduce inequality.
I continue to find the practice distasteful.
But I don’t think public intervention (which might take the form of federal and state regulations conditioning financial assistance to colleges on the abandonment of the practice) is warranted.
Becker is right to emphasize the role of demand and supply elasticities (how price responds to changes in demand and supply) in the startling fluctuations in the price of oil and oil derivatives such as gasoline, to deemphasize the role of speculation in those fluctuations, and to point out the social utility of speculation.
If the responsiveness of supply to an increase in demand is sluggish, price will rise steeply in the short run, to ration the limited supply among the clamoring demanders—and it will rise especially steeply if the demanders would rather pay a high price than do without.
If demand falls, price will plummet if supply is sluggish, because supply will not fall proportionately to demand—there will be oversupply.
An exogenous increase in supply will cause price to drop sharply because inelastic demand implies that price must fall very far to attract purchasers for the additional supply, and thus clear the market.
Increased demand or reduced supply will also have a sharp effect, though in the opposite direction.
So the dramatic fluctuations in oil prices that Becker documents do not require the assumption of nefarious activities by speculators.
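The inelasticity point can be made concrete with the standard elasticity approximation, %ΔQ ≈ ε × %ΔP, rearranged to give the price change that clears the market after a supply shock. The elasticity value below is an illustrative assumption, not an estimate from the post:

```python
def clearing_price_change_pct(quantity_change_pct, demand_elasticity):
    """Percent price change needed to clear the market after a supply
    shock, assuming a constant-elasticity demand curve:
    %dQ = elasticity * %dP  =>  %dP = %dQ / elasticity."""
    return quantity_change_pct / demand_elasticity

# Short-run oil demand is highly inelastic; -0.1 is an assumed value.
# A 1 percent loss of supply then requires roughly a 10 percent price
# rise before demanders stop bidding for the missing barrels.
rise = clearing_price_change_pct(-1.0, -0.1)
```

The same mechanics run in reverse explain the plunge when demand falls against sluggish supply: small quantity changes translate into large price changes whenever the elasticity in the denominator is small.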
And, as Becker also points out, those activities are not, in general, nefarious.
Without speculation, the only information about values would be supplied by the actions of suppliers and consumers and others in the chain of distribution from producer to ultimate consumer.
With speculation, information is often supplied by persons and firms that buy or sell on the basis of their opinion of how prices will change.
Yet speculators can destabilize markets.
Speculators are interested in price changes, and know that price changes are driven in part by other speculators.
In an asset-price bubble, speculators (and others) buy in the expectation of rising prices, and may believe that the asset is actually overpriced yet keep buying because they think that other market participants believe (erroneously) that it is still underpriced.
The decision of the United States and other countries that have governmental oil reserves to release a total of 60 million barrels of oil (half of it from the Strategic Petroleum Reserve, which is what the U.S.
government’s reserve supply of oil is called) could be a shrewd speculation on oil being overvalued.
If it is overvalued, prices will fall, so by selling now the countries will receive more income than if they waited.
The sale itself, by increasing supply, will reduce the price of oil; these countries are net importers of oil and so would benefit from lower oil prices.
But this is implausible; there is no reason to think oil overvalued.
The current high prices reflect the loss of about a million barrels a day of Libyan oil because of the civil war in that country.
The reduction is small as a percentage of world output, but the low elasticity of supply enables a small reduction in supply to have a big impact on price.
By the same token, the modest release from the reserve—two million barrels a day for 30 days—may have a dramatic though short-term effect on oil prices, an effect that would be extended if we and other countries continued to release reserves.
Our Strategic Petroleum Reserve contains more than 700 million barrels, so we could release a million barrels a day for a year and still have almost half the current reserves.
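The reserve arithmetic in the last sentence is easy to check (the 700 million barrels is the post's figure; the rest is multiplication):

```python
reserve = 700_000_000        # barrels in the Strategic Petroleum Reserve
release_per_day = 1_000_000  # barrels released per day
days = 365

remaining = reserve - release_per_day * days
share_left = remaining / reserve  # about 0.48, i.e. "almost half"
```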
But what is the point of reducing our reserves? They have after all value as a hedge against catastrophic hits to oil supply, such as were caused by Hurricane Katrina and the BP Gulf of Mexico mishap.
One possibility is that it is another short-term stimulus measure, such as “cash for clunkers” and the various mortgage-relief programs.
The effects of these measures are slight, because businesses and consumers realize that they are short term and so merely alter the timing of purchases rather than increasing aggregate spending.
But with the 2012 elections approaching, the Administration has an incentive to create even short-term bursts of prosperity.
Consumers are extremely conscious of gas prices because of the frequency with which gasoline is purchased and the frequency with which the retail price changes, which increases consciousness of gas prices relative to other consumer products.
Hence a sudden, sharp drop in gas prices as a result of an increased supply of oil can create a feeling, but only a transient one, of improved economic prospects.
The U.S.
economy is stagnant; the proposition that we had a mere “recession” which “ended” two years ago, is, like the terminology of sovereign default (“debt restructuring”), just an exercise in euphemism.
Real (that is, inflation-adjusted) GDP per capita has declined by almost 3 percent since 2007.
(This is on the assumption that the first-quarter 2011 increase in GDP at an annual rate of 1.8 percent, not adjusted for inflation or population growth, will be the full-year real per capita increase—in fact, if unadjusted GDP grows by less than 3 percent for the year as a whole, the real per capita GDP will decline.) At the same time, the national debt has soared (it is currently $14.3 trillion, of which $9.7 trillion is “public debt”—that is, debt owed bondholders rather than social security annuitants and other entitlement holders), and unemployment exceeds 9 percent, with about half the unemployed not having worked for at least six months.
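The per-capita adjustment in the parenthetical works by compounding: nominal growth must outpace the combined effect of inflation and population growth for real per-capita GDP to rise. A sketch with illustrative rates (the 1.8 percent nominal figure is from the post; the inflation and population numbers are my assumptions):

```python
def real_per_capita_growth(nominal_growth, inflation, pop_growth):
    """Real per-capita growth rate implied by nominal GDP growth,
    inflation, and population growth: (1+g) / ((1+pi)*(1+n)) - 1."""
    return (1 + nominal_growth) / ((1 + inflation) * (1 + pop_growth)) - 1

# 1.8% nominal growth with an assumed ~2% inflation and ~0.9% population
# growth leaves real per-capita GDP slightly negative, consistent with
# the post's point that sub-3% unadjusted growth means per-capita decline.
g = real_per_capita_growth(0.018, 0.02, 0.009)
```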
The sharp and rapid decline of the economy that began with the financial crisis of September 2008 was expected to be followed by a sharp and rapid rise (making for a V-shaped economic cycle) when the crisis was resolved by the bank bailouts and other emergency measures taken by the Federal Reserve and the Treasury Department in the fall of 2008, and by the $878 billion stimulus enacted by Congress in February 2009.
The economy did pick up in the fall of 2009, but progress since has been fitful.
Causal analysis is the Achilles heel of business-cycle economics.
National economies are so complex, and so different from each other and over time (making cross-sectional and time-series analyses of business cycles inconclusive), that it is rare that a phase in the cycle can be explained satisfactorily, especially if an estimate of magnitude rather than just of direction is desired.
For what it's worth, I think the major impediment to economic growth at present is uncertainty on the part of the key economic actors, namely businessmen and consumers.
Businessmen are hesitant to hire and invest and consumers to spend, in both cases because of uncertainty about their economic prospects.
I use “uncertainty” in the sense in which the economists Frank Knight and John Maynard Keynes distinguished between risk and uncertainty.
Risk was a probability that could be estimated, uncertainty a risk that could not be estimated.
The distinction is unpopular among economists because a nonquantifiable risk greatly complicates statistical analysis of economic phenomena, but it seems to me a real and important distinction when one is dealing with the business cycle.
And there is a growing literature in economics on “ambiguity aversion,” by which is meant aversion to uncertainty in the Knight-Keynes sense.
When a businessman has to decide whether to invest in a new product or a new plant or other facility, with the success of his decision dependent on future revenues and costs, there is bound to be an irreducible element—and possibly a very large element—of uncertainty; and likewise when a consumer has to decide on a major purchase, such as a house, or whether to seek a new job, or marry, or move to another part of the country, or retire, or seek more education.
Any decision the success of which depends on future events is likely to involve uncertainty.
And a common and usually sensible response to uncertainty is simply to postpone the decision in the hope that the uncertainty will dissipate as new information becomes available.
But the more postponement there is of investment, hiring, purchasing, and other economic decisions, the lower the level of economic activity.
We observe this today with the enormous cash balances that large firms have accumulated and the drooping demand for houses.
The greater the uncertainty, the less forward-oriented economic activity there is likely to be, with adverse effects on investment, employment, and consumer spending.
At present the U.S.
economy is afflicted with at least five major sources of uncertainty.
One is the economy of the eurozone.
If Greece defaults on its public debt, which remains a possibility in the near future (a year or two years from now), this may have a domino effect.
The dominos are not just the other weak eurozone countries—Ireland, Spain, Portugal, and Italy—but also the French and other European banks that are heavily invested in those countries and the American banks and (especially) money-market funds that are heavily invested in European banks.
Second is uncertainty about whether and on what terms Congress will raise the U.S.
public-debt ceiling.
Default is unlikely, but no one knows what deal the Republicans and Democrats will strike to avert default.
Undoubtedly it will involve significant cuts in federal spending, and these cuts will hurt numerous businesses and individuals.
Third is uncertainty about federal regulation of the financial and health sectors.
The ambitious health-care and financial-regulation reform statutes enacted by Congress in 2010 are very long and complicated, but at the same time incomplete—completion of these regulatory edifices was delegated to regulatory agencies that have not come close to finishing their work.
No one can know how tightly banks and other financial institutions are going to be regulated or how the price of health care is going to be affected; and the cost and availability both of credit and of health care are of immense concern to businesses and individuals alike.
Fourth is a widespread suspicion in the business community that President Obama is in the pocket of the labor unions, is viscerally hostile to business, and is entirely focused on winning reelection.
The suspicion is (in my opinion) greatly exaggerated, but is real.
Fifth, there is a sense that politicians the world over, notably including the United States, are preoccupied with the very near term and are simply postponing the day of reckoning with the world’s economic problems that grew out of the financial crisis of September 2008 and the ensuing global economic crisis, which is still with us.
The Greek example is a good one.
Because Greece is stuck with the euro, it cannot climb out of its economic hole by devaluing its currency, a tried and true recipe for dealing with a severe economic downturn: devaluation increases exports and reduces imports, and both effects stimulate domestic employment.
Greece as a result is broke, and if it had defaulted last year the eurozone might by now be in tolerable economic shape.
Instead Greece seems about to receive a fiscal bandaid that will keep it going for a year or two, which means that any U.S.
firm that has a stake in the eurozone and a planning horizon of at least a couple of years must cope with the uncertainty of Greece’s economic future.
A parallel example is the efforts of our government to revive the housing market by providing relief to mortgagors; the efforts appear to have prolonged the depression of the housing market, which had it been allowed to hit bottom might be on the mend today.
Similarly, the federal fiscal imbalance and mounting deficit are unlikely to receive more than bandaid treatment until the November 2012 elections; businesses and consumers will be tempted to hold their breath until then.
Suppose that finally after those elections the government gets serious and closes the fiscal gap through some combination of higher taxes, reduced government spending, and programs designed sensibly or otherwise to stimulate economic growth.
Because the gap is so large, and will probably be larger 16 months from now, and because the combination that will close it is likely to have profound and uneven economic consequences, many businesses and consumers alike will want to put many of their own economic plans on hold until they have a clearer idea of the terms of the gap-closing combination, which they may not have for years.
I don’t want to give an exaggerated picture of the consequences of Knight-Keynes uncertainty.
It does not paralyze economic activity, but it slows it down and may be a large factor in the current sluggishness of the U.S.
economy.
These were fascinating comments, to which I cannot do full justice.
One issue that a number of comments raise is whether homosexuality is truly genetic or otherwise innate, or is a choice.
Some distinctions will help to focus the issue.
First, there is a fundamental difference between homosexual behavior and homosexual orientation.
Many heterosexuals engage in homosexual behavior when there is no heterosexual alternative open to them--e.g., men in prisons or on naval vessels (before they were integrated), teenage boys in all-male boarding schools, and young men in Mediterranean cultures where marriage is late and women are largely unavailable for sex outside of marriage.
I discuss this "opportunistic" homosexuality at length in my 1995 book Sex and Reason.
There is also opportunistic heterosexuality--notably, men and women who enter heterosexual marriages to have children and/or conceal their homosexuality.
I can't see a good reason for encouraging either kind of behavior.
Concerning homosexual orientation as distinct from behavior, there is an important distinction between genetic and innate determinants.
One could be born possessing a particular trait because of one's genes, or because of some accident during pregnancy.
In either case, the trait would not be "chosen." In the case of homosexual orientation, there is a fair amount of evidence that it is genetic; I discuss this in my book; also, a recent study reported in the New York Times actually found a gene that causes homosexual behavior in fruit flies.
The evidence that homosexual orientation is, if not genetic, certainly innate is more diffuse but cumulatively persuasive, and includes the fact that homosexuality seems to crop up in all human cultures, including ones that reprobate it, and the almost complete failure of efforts to "cure" homosexuality despite the strong incentives of those who submit to such treatments.
The "cure" when successful consists not of acquiring heterosexual orientation but of suppressing homosexual behavior.
If homosexual orientation were simply a "bad habit," it could be broken more easily than cigarette smoking, an addiction that has a physical as well as a psychological dimension; no one thinks it can be.
Once the innate character of homosexual orientation is accepted, the way is paved for analogizing, as a number of the comments do, the claim of equality for homosexuals to the claim of equality for members of racial minorities--the latter a claim almost universally accepted in this society.
I accept the analogy, and of course also accept that the Supreme Court in Loving v. Virginia held that it is unconstitutional for government to forbid interracial marriage.
My reasons for nevertheless opposing courts' ruling that it is unconstitutional to forbid gay marriage are twofold.
First, the courts would benefit from a period in which experience with gay marriage in one or more states and several foreign nations, together with growing experience with civil unions, would lay a solider empirical basis than exists now for assessing the consequences of gay marriage.
There is value in social experiments, and hence in not terminating them prematurely.
(Compare the bad practice of terminating clinical drug trials as soon as there is some, but often incomplete and inconclusive, evidence that the drug being tested does have some therapeutic value, which the placebo administered to the control group does not.)
Second, and at the risk of seeming to take a Realpolitik approach to constitutional law, I don't think it's the business of the courts to buck public opinion that is as strong as the current tide of public opinion running against gay marriage.
That is the lesson of the response to Roe v. Wade, even though there is far more public support for abortion rights than for gay marriage.
Because the basis in conventional legal materials for creating a constitutional right either to abortion or to gay marriage is extremely thin, opponents cannot be persuaded that the creation of these rights by courts is anything other than a political act by a tiny, unelected, unrepresentative, elite committee of lawyers.
It is true of course, as one of the comments pointed out, that Brown v. Board of Education, in declaring racial segregation in public schools illegal, outraged many Southerners, but this was understood to be the reaction of a national minority (Southern whites) selfishly motivated by a desire to maintain black people in a politically and economically subordinate, dependent position.
Brown would have been unthinkable--and in my pragmatic view unsound--had the case arisen in 1900 rather than the 1950s, because in 1900 the vast majority of the American population would have considered compelled racial integration of public schools improper.
Moreover, I do not think the opposition to gay marriage is primarily motivated by a desire, even an unconscious one, to subordinate homosexuals to heterosexuals, or by a fear of homosexual recruitment--though that is a factor, as I mentioned in my original posting.
I think the main basis for the opposition is religious and (responding to another comment) that such opposition is different from opposition based on a scientific error.
Religion is not scientific, but there is a difference between a belief that is demonstrably based on error and a belief based on a system of thought that science neither supports nor refutes.
(Science cannot refute the existence of the soul, for example, because there is no scientific test that could refute it.
Sophisticated religions are careful to place their key claims beyond the possibility of scientific falsification.) In a democratic society, one has to respect religious beliefs; and no reasonable theory of the meaning of the religion clauses of the First Amendment permits one to argue that religious belief cannot be permitted to influence secular law.
No one supposes that punishing murder is an establishment of religion just because the Ten Commandments--a religious code--states "Thou shalt not kill."
Although I don't think that courts should force gay marriage on the society, the arguments against gay marriage do not strike me as compelling.
For example, it is true that the institution of marriage is oriented toward the production and rearing of children, but there are so many childless marriages, including second or third marriages of divorcees or widows or widowers, that it would be arbitrary to forbid gay marriage on the ground that, on average, such marriages produce fewer children.
And indeed, if it is correct that most gay marriages will be between lesbians, the average number of children in gay marriages may not be significantly below the norm, since most lesbians, like other women, I imagine want to have children.
Nor am I aware of evidence that children raised in homosexual households are on average more maladjusted, unhappy, antisocial, etc. than the rest of us; the evidence I reviewed in Sex and Reason did not support such a hypothesis, though the book was published a decade ago and perhaps there is now some evidence to support it.
Finally, it is unclear to me that marriage is any longer heavily subsidized--there is after all the "marriage tax" to consider.
One last point.
One of the comments points out correctly that civil unions do not give homosexuals the full equivalent of marriage because the federal government refuses to recognize them as marriage equivalents for purposes of social security and other federal benefits.
But that is equally true of gay marriages contracted in Massachusetts: the federal government does not recognize these as "marriage" for federal-law purposes.
I believe that the prospects for federal recognition of civil unions would be greater if the homosexual-rights movement dropped gay marriage and focused entirely on civil unions.
Gay marriage is the red flag before the bull.
A long article by Robert S. Leiken, "Europe's Angry Muslims," in the July/August 2005 issue of Foreign Affairs, written before the recent London bombings, when read in conjunction with the economist-columnist Paul Krugman's column in the New York Times this past Friday (July 29), entitled "French Family Values," brings into focus important issues of immigration policy and, more fundamentally, of the different economic and cultural models of the United States and Western Europe.
Leiken points out the strong appeal of Islamic extremism to the large Muslim minorities in countries such as France, Spain, the United Kingdom, and the Netherlands (in France, the Muslim population is approaching 10 percent of the total population; in the Netherlands, 6 percent), including many second-generation Muslims--Muslims who were born in these countries but have not adopted their political or cultural values.
The widespread penetration of these nations by Islamic extremists lies behind political murder in the Netherlands, bombings in the United Kingdom and Spain, and widespread anti-Semitic vandalism in France.
The nations of Western Europe appear to be riddled with Islamist terrorist cells that also incubate plots to attack the United States.
The non-Muslim populations of Western Europe are increasingly and in some instances lethally hostile to their Muslim minorities.
In contrast, although there are several million Muslims in the United States (more than in the U.K., for example, though constituting a smaller percentage--about 1 percent versus almost 3 percent)--most of them, like their counterparts in Europe, of Middle Eastern or Central Asian heritage--the American Muslim community is well integrated.
It is prosperous (with a median income actually slightly above the national average), so far unthreatening (though security officials believe there are some terrorist cells, and heavy-handed tactics by the FBI since the 9/11 attacks have caused some disaffection among American Muslims), and not the object of significant hostility from non-Muslim Americans.
Krugman's column does not mention Europe's Muslims, but in defense of the French (more broadly, the European) model, argues that the French have made a good trade--their average incomes are significantly lower than those of Americans, but they work a good deal less.
This is partly because of a much higher unemployment rate than in the United States (Krugman's complacency about high unemployment is notable), but mainly because Western Europeans work fewer hours per week, take much longer vacations, and retire earlier.
In effect they trade material goods for leisure, a trade that Krugman regards as a sign of high civilization.
Krugman (here relying on a recent working paper by the economists Alberto Alesina, Edward Glaeser, and Bruce Sacerdote) recognizes that the greater leisure of the French and other Europeans is, as it were, forced, because it is the product of laws that restrict labor mobility and hence work opportunities, make it difficult to fire lazy workers, provide a variety of economic benefits uncoupled from work, and even restrict the number of hours a week a person can work.
But, further relying on the working paper, Krugman argues that without compulsion, workers could not get the amount of leisure they really want, because leisure is not worth as much if other people don't have it, assuming leisure has a strong social component--that you engage in leisure activities with other people, and therefore suffer a loss if they don't have leisure time to spend with you.
Krugman's failure to relate the European model to Europe's Muslim problem is telling.
To point to the upside of Europe's social model without mentioning the most serious downside is to provide bad advice to our own policymakers.
The assimilation of immigrants by the United States, compared to the inability of the European nations to assimilate them--with potentially catastrophic results for those nations--is not unrelated to the differences between economic regulation in the United States and Europe.
Because the U.S. does not have a generous safety net--because it is still a nation in which the risk of economic failure is significant--it tends to attract immigrants who have values conducive to upward economic mobility, including a willingness to conform to the customs and attitudes of their new country.
And because the U.S. does not have employment laws that discourage new hiring or restrict labor mobility (geographical or occupational), immigrants can compete for jobs on terms of substantial equality with the existing population.
Given the highly competitive character of the U.S. economy, in contrast to the economies of Europe, employers cannot afford to discriminate against able workers merely because they are foreign and perhaps do not yet have a good command of English.
By the second generation, most immigrant families are fully assimilated, whatever their religious beliefs or ethnic origins.
In contrast, even in a country such as France that has a declared policy of requiring all immigrants to assimilate, immigrants from alien cultures, such as that of the Islamic world, tend to be marginalized and isolated, even in the second and later generations.
European unfriendliness to immigrants might be thought a cultural rather than an economic phenomenon, but the paper by Alesina, Glaeser, and Sacerdote on which Krugman relies argues that the European preference for leisure, also supposedly cultural, rests on policy, specifically the employment laws.
So too in all likelihood is the difficulty European nations have in assimilating immigrants.
The less fluid, less competitive, less market-oriented, and indeed less materialistic (the only color important to businessmen is green) a national economy is, the less opportunity it will provide to alien entrants.
Advocates of the European model point to the pockets of poverty in the United States, but may not realize that poverty cannot be abolished without recourse to measures that produce the social pathologies that we observe in Europe.
Social mobility implies the opportunity to fail.
If society protects jobs, the employment opportunities of ambitious newcomers are reduced and they may end up at the embittered margin of society.
Thus, it is not poverty that breeds extremism; it is social policies intended in part to eradicate poverty that do so, by obstructing exit from minority subcultures.
If Muslims in European societies do not feel a part of those societies because public policy does not enable them to compete for the jobs held by non-Muslims--if instead, excluded from identifying with the culture of the nation in which they reside they perforce identify with the worldwide Muslim culture--some of them are bound to adopt the extremist views that are common in that culture.
The resulting danger to Europe and to the world is not offset by long vacations.
These were, as usual, very interesting comments.
The most common was that corporate CEOs are overpaid, which is argued to contradict the concerns expressed by Becker and me about corporations' conceiving their role as that of charitable, or "socially responsible," enterprises rather than as pure profit maximizers--if they are really maximizing profits, why are they overpaying their CEOs?
I don't know whether CEOs are in general overpaid, but let's assume they are.
This would imply that the shareholders, who are the nominal owners of the corporation, are incapable of controlling management--in other words, that there is a problem of what economists call "agency costs." (The term signifies the costs resulting from the principal's inability to constrain his agents to serve his goals exclusively.) If so, we certainly would not want the management to make gifts to charity, because that would exacerbate the problem of agency costs.
Managers who are not controlled by shareholders can't be presumed to make charitable gifts that will promote the shareholders' welfare, rather than the managers' own.
So, paradoxically, liberals who believe that CEOs are not honest agents of the shareholders should be more critical of corporate "social responsibility" than conservatives!
One comment that I am quite sympathetic to is that the social return to profit-maximizing activities may actually be higher than the social return to corporate philanthropy, when "corporate philanthropy" isn't just a fancy name for public relations.
As I argued in an earlier post, philanthropy directed at poor countries may actually reduce the welfare of those countries, and the same is probably true to an extent of purely domestic charity.
The general effect of charity is to postpone the making of difficult decisions.
For example, philanthropic gifts, private or public, to the arts retard serious efforts by artists and artistic organizations to create work for which there is a genuine interest on the part of the public, and philanthropic gifts to universities help to shield them from competitive pressures.
Also, the agency costs problem is particularly acute in the charitable arena, as donors to charitable enterprises have even less control over the enterprises than shareholders do over their corporations.
This is not to suggest that the net benefit of charitable giving is negative, but only to raise a question whether the net benefit of the expenditures devoted to charity might be greater if directed to commercial or other private ends instead.
Let me respond finally to two comments about the issue of criminal liability of corporations.
One comment suggested that as long as individual managers are liable for crimes committed by them on the corporation's behalf, there is no reason to impose liability on the corporation as well, and therefore no need to place a duty of law-abidingness on the corporation.
But imposing criminal liability on errant managers is insufficient.
If their crimes benefit the corporation (though imposing, as I assumed in my original posting, greater costs on society as a whole), then their salaries will be set by a rational corporation at a level that will compensate them for assuming the risk of criminal punishment.
The other comment pointed out correctly that corporate criminal liability is not an either-or thing.
Often it will be unclear whether a proposed course of action will violate criminal laws applicable to the corporation.
There is no reason the corporation should be thought under a duty to avoid all risk of legal liability; that would be paralyzing, and also would discourage socially worthwhile tests through litigation of the outer bounds of liability.
Rather, the duty should be to avoid conduct known to carry a very high probability of being deemed criminal.
With respect to many civil as distinct from criminal laws, it is unclear whether they should be thought to impose a duty of compliance, or merely to impose a price (in the form of an expected liability cost) for engaging in particular activity.
If the law imposes only damages liability for a particular unlawful act, and the damages are fully compensatory, then a corporation that commits the act is not imposing net social costs; it would not engage in the act unless the expected benefits exceeded the expected costs to any victim of the act, as measured by the damages that the victim would be entitled to recover.
The surprising decision of Spain, once the most Catholic country in Europe (except for Ireland), to recognize gay marriage--a decision that comes in the wake of a similar decision by Canada and, of course, by the Supreme Judicial Court of Massachusetts--presents an appropriate occasion on which to consider what light economic analysis might shed on the issue.
Economics focuses on the consequences of social action.
One clear negative consequence is the outrage felt by opponents of gay marriage and of homosexual rights in general.
Philosophers like John Stuart Mill would not consider that such outrage should figure in the social-welfare calculus; Mill famously argued in On Liberty that an individual has no valid interest in the activities of other people that don't affect him except psychologically.
(Mill had in mind the indignation felt by English people at Mormon polygamy occurring thousands of miles away in Utah.) But that is not a good economic argument because there is no difference from an economic standpoint between physical and emotional harm; either one lowers the utility of the harmed person.
The issue is more complicated to the extent that some of the outrage is based on fear that making homosexual relationships respectable by permitting homosexual marriage will encourage homosexuality.
Most people don't want their children to become homosexuals, and this aversion is a factor in the utility calculus.
However, they are probably mistaken in thinking that homosexuality is chosen; there is compelling evidence that sexual orientation is an innate (probably genetic) rather than acquired characteristic.
It is not clear what weight, if any, society should give to opinions formed on the basis of scientific error.
Obviously there are benefits to homosexual couples from marriage--otherwise there would be no pressure to extend marriage rights to them.
(Whether, given the alternative of civil unions, there are incremental benefits to marriage is a separate question that I discuss later.) Some of these benefits appear to impose no significant costs on others and thus are clear social gains: an example is that a married person does not have to have a will in order to bequeath his property at death to his spouse.
Unless "outrage" costs are high, such benefits would, in an economic analysis, warrant recognizing gay marriage.
However, other benefits to married couples impose costs on third parties; an example is social security spousal and survivor benefits, to the extent they are not (and usually they are not) fully financed by the social security taxes paid by the person bestowing or obtaining the benefits.
But such redistributive effects are equally imposed by heterosexual marriage, so they don't make a strong argument against homosexual marriage, especially since homosexual marriages are unlikely to be a significant fraction of all marriages.
Only 2 to 3 percent of the population is homosexual and, judging from experience thus far, lesbians, who are far outnumbered by male homosexuals, seem much more interested in homosexual marriage than men are.
Although I am not able to verify this figure, I believe that about two-thirds of gay marriages are lesbian, even though only about a third of homosexuals are lesbian.
If this pattern persists, the total number of gay marriages will probably be very small relative to the number of heterosexual marriages.
The more fundamental economic question is why marriage is a legal status.
One can imagine an approach whereby marriage would be a purely religious or ceremonial status having no legal consequences at all, so that couples, married or not, who wanted their relationship legally defined would make contracts on whatever terms they preferred.
There could be five-year marriages, "open" marriages, marriages that could be dissolved at will (like employment at will), marriages that couldn't be dissolved at all, and so forth, and alimony and property settlement would be freely negotiable as well.
The analogy would be to partnership law, which allows the partners to define the terms of their relationship, including the terms of dissolution.
As with all contracts, the law would impose limits to protect third-party interests, notably those of children.
If outrage costs are set to one side, a purely contractual approach to (or replacement for) marriage makes sense from an economic standpoint because it would permit people to define their legal relationships in accordance with their particular preferences and needs.
For those who did not want to bother to negotiate a "marriage" contract, the law could provide a default, one-size-fits-all solution--the conventional marital status embodied in state marriage statutes.
That would reduce transaction costs for those people content with the standard "form contract." The law would, however, have to decide what contractual relationships qualified for social security and other public benefits to which spouses are entitled under current law.
The contract approach to marriage may seem radical, but that is because of a lack of historical perspective.
Marriage has changed enormously over the course of history.
In many cultures, it has signified the purchase of a woman by a man's family.
In other cultures, instead of brideprice, there is dowry (an approximation to the purchase price for a husband, paid by the wife's family).
Arranged marriages, often of children, have been common.
Divorce at will by the man only has been common; likewise, of course, polygamous marriage (including in the Old Testament).
Trial marriages, defeasible if the wife fails to become pregnant, were a Scandinavian institution.
Shia law recognizes temporary marriages.
Companionate marriage, in which husband and wife are expected to be best friends, is a modern institution.
In short, marriage has changed greatly in history, and it would be foolish to think that the current marriage conventions will remain fixed for all time.
With the rise of no-fault divorce, the enforcement of prenuptial agreements, and the decline of alimony, marriage is evolving in the direction of contract.
That evolution has contributed to the movement for gay marriage.
For, as marriage becomes more like a contract, it becomes harder to see why homosexuals--who as I say are free to form other contracts--should be excluded from its benefits.
Under a contractual approach, gay marriage as an issue would disappear, because the state would not be asked to "recognize" gay marriage and by doing so offend people who are distressed by homosexuality.
No one thinks that homosexuals should be forbidden to make contracts, and marriage would be just a contract so far as any legal consequences were concerned.
It would be left to individual religious sects to decide whether to permit church marriages of homosexuals.
The most remarkable aspect of the current controversy is that it is mainly about a word--"marriage." The reason is that although most Americans still oppose civil unions (among American states, only Vermont and Connecticut authorize civil unions, though New Jersey authorizes a related arrangement called domestic partnership; a number of foreign nations now authorize civil unions, some under the name "registered partnership"), I imagine that if the homosexual-rights lobby dropped marriage from its agenda and put all its effort into lobbying for civil unions, many states would soon recognize them, and eventually the federal government would follow suit and grant parties to such unions the legal status of spouses for purposes of social security and other federal laws; when that happened, there would be no practical difference between civil unions and marriage.
Why so much passion is expended over the word "marriage" baffles me.
After all, even today, and even more so if civil unions were officially recognized, homosexual couples can call themselves "married" if they want to.
And this brings to the fore the disadvantage of treating marriage as a legal status.
Were it just a contract, government would have no role in deciding what word the parties could use to describe the relationship created by it.
Although personally I would not be upset if Illinois (where I live) or any other state decided to recognize homosexual marriage, I disagree with contentions that the Constitution should be interpreted to require state recognition of homosexual marriage on the ground that it is a violation of equal protection of the laws to discriminate against homosexuals by denying them that right.
Given civil unions, and contractual substitutes for marriage even short of civil unions, the discrimination involved in denying the right of homosexual marriage seems to me too slight (though I would not call it trivial) to warrant the courts in bucking strong public opinion; and here it should be noted that although the margin in the polls by which homosexual marriage is opposed is not great, the opponents tend to feel more strongly than the supporters.
Most supporters of homosexual marriage, apart from homosexuals themselves (not all of whom favor homosexual marriage, however), and some (not all) of their parents, support it out of a belief in tolerance rather than because of a strong personal stake, whereas many of the opponents are passionately opposed, some because they fear homosexual recruitment, contagion, etc., but more I think because they believe that official recognition of homosexual marriage would disvalue their own, heterosexual marriages.
Of course it is often the duty of courts to buck public opinion; many constitutional rights are designed for the protection of minorities.
But when, as in this case, there is no strong basis in the text or accepted meaning of the Constitution for the recognition of a new right, and that recognition would cause a powerful public backlash against the courts, the counsel of prudence is to withhold recognition.
Doing so would have the additional advantage of allowing a period of social experimentation from which we might learn more about the consequences of homosexual marriage.
One state, Massachusetts, already recognizes homosexual marriage, as do a small but growing number of foreign nations (Spain, Canada, Belgium, and the Netherlands).
Perhaps without judicial intervention gay marriage will in the relatively near future sweep the world--and if not it may be for reasons that reveal unexpected wisdom in the passionate public opposition to the measure.
I agree with almost everything that Becker says, but will suggest a few qualifications.
I can think of one situation in which "pure" charitable donations by corporations, i.e., donations that do not increase profitability, could benefit shareholders.
Assuming that most shareholders make some charitable donations, they might want the corporations they invest in to make modest charitable donations on the theory that a corporation will have more information about what are worthwhile charitable enterprises than an individual does.
For example, charities differ greatly in the amount of money that they spend on their own administration, including salaries and perquisites for the employees of the charity, relative to the amount they give to the actual objects of charity.
Presumably corporations are in a better position to determine which charities are efficient than individuals are; if so, then shareholders may impliedly consent to some amount of charitable giving by their corporations.
But not much.
The reason is that one person's charity is another person's deviltry: a shareholder who is opposed to abortion on religious grounds would be offended if his corporation contributed to Planned Parenthood.
The practical significance of this point is that corporations avoid controversial charities, so that the issue of implied consent becomes whether the shareholder would like his corporation to make a modest contribution to some set of uncontroversial charities.
For the reason suggested above, the answer may be "yes"--and for the additional reason that there is a tax angle.
If the shareholder receives a dividend, the corporation will have paid corporate income tax on the income from which the dividend is paid.
Suppose the corporation and the shareholder are both in the 20 percent bracket.
The corporation earns $10, pays $2 in tax, and gives the shareholder $8.
The shareholder gives the $8 to charity, which costs him $6.40, since he gets a 20 percent tax deduction.
If the shareholder wants the charity to have $10, it has to dig into his pockets for another $2, which costs him $1.60 (because of the 20 percent deduction), and so the total cost to him of giving the charity $10 is $8.
Now suppose that, instead, the corporation gives the $10 to charity, a deductible expense, at a cost to it therefore of $8.
Then the charity receives $10 rather than, as before, only $8.
The shareholder loses his $2 deduction, which means that the total cost to him of the transfer is, as before, $8.
But the corporation is better off to the tune of $2, since it avoids the corporate income tax on the $10 in income that it gave the charity.
And anything that benefits the corporation benefits the shareholder.
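The two routes can be checked with a few lines of arithmetic. The sketch below (Python) simply mirrors the illustrative numbers above--a flat 20 percent rate for both corporation and shareholder, $10 of earnings, and fully deductible gifts--and is not a model of actual tax law.

```python
RATE = 0.20  # illustrative: corporation and shareholder both in the 20 percent bracket

def dividend_route():
    """Corporation pays a dividend; the shareholder then donates so the charity nets $10."""
    earnings = 10.00
    corp_tax = RATE * earnings                # corporation pays $2 in tax
    dividend = earnings - corp_tax            # shareholder receives $8
    donation = dividend + 2.00                # shareholder adds $2 so the charity gets $10
    shareholder_cost = donation * (1 - RATE)  # the 20% deduction makes the net cost $8
    return donation, shareholder_cost, corp_tax

def corporate_route():
    """Corporation gives the $10 to charity directly; the gift is deductible to it."""
    donation = 10.00
    corp_tax = 0.00          # no corporate income tax on the income given away
    shareholder_cost = 8.00  # the foregone $8 dividend; no personal deduction to claim
    return donation, shareholder_cost, corp_tax

# In both routes the charity nets $10 and the shareholder is out $8,
# but the corporate route avoids the $2 of corporate income tax.
```

Comparing the two return values makes the point of the paragraph concrete: the shareholder's cost and the charity's receipt are identical, so the $2 of corporate tax avoided in the second route is the entire difference.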
Given product market as well as capital market competitive pressures, charitable spending that is not profit-maximizing because the cost exceeds the private benefits that Becker lists (public relations, advertising, government relations, and so forth) is unlikely to be significant.
Even if corporate managers are not effectively constrained to profit maximization by their shareholders, expenditures that do not reduce the cost or increase the quality of the corporation's products will place it at a competitive disadvantage with firms that do not make such expenditures.
A more difficult question has to do with a corporation's policy on obeying laws.
From a strict shareholder standpoint, it might seem that corporate managers should obey the law only when the expected costs of violating it would exceed the expected benefits, so that managers would have a duty to their shareholders to disobey the law, perhaps especially in countries where law enforcement is very weak--a country, for example, that has a law against child labor but is unable to enforce it.
This would be a case of a pure clash between ethical and profit-maximization duties.
My view is that, given external (i.e., social as distinct from private) benefits of compliance with law, the ethical argument should prevail, so that a shareholder would be precluded from complaining that corporate management, by failing to violate the law even when it could get away with it, was violating its fiduciary duty to shareholders.
Another argument based on an externality, an argument that lies behind the law that forbids U.S.
firms to engage in bribery abroad, even in countries where bribery is extremely common, is that reducing the amount of bribery in those countries will benefit U.S.
firms in the long run by making the markets in these countries more open, to the advantage of efficient firms.
The fact that it will sometimes be in the shareholder interest for management to violate the law provides, moreover, a ground for punishing corporate managers sufficiently severely for corporate crimes that the punishment is not offset by shareholder gains for which the managers could be expected to be rewarded.
Concern has been voiced in some quarters that Israel should not be punishing Lebanon for the acts of Hezbollah, because Lebanon's army has not attacked Israel and it is unclear whether Lebanon has the ability to disarm or otherwise restrain Hezbollah.
(There is also, however, doubt whether Lebanon has the will to do so.) In other words, Israel's conduct is being criticized as an exercise of collective punishment (likewise its military measures in Gaza), which involves punishing a collective for the act of an individual member, even if some or all of the other members of the collective bear no responsibility for the act.
Israel has responded that since Hezbollah is a part of the Lebanese government, its acts are the Lebanese government's acts.
That may be, but is to one side of the issue of the appropriateness of collective punishment.
Israel has also defended its actions as targeted exclusively on Hezbollah, with any harms to Lebanese who are not part of Hezbollah's armed wing being inevitable accidents of war.
Without taking sides, but assuming for the sake of argument that Israel is engaged, in part anyway, in the deliberate infliction of collective punishment, I want to discuss the economics of collective punishment, which is a conventional legal tool that is efficient in many of its applications.
An important modern example is the employer's liability for injuries resulting from acts by its employees within the scope of their duties.
The employer may have exercised due care in the selection, training, assigning, monitoring, and disciplining of the employee who caused the accident, but if the employee was at fault and therefore is liable to the victim, the employer is also liable no matter how faultless its behavior.
And usually it is the employer that ends up paying the entire judgment in the suit by the victim because the employee is more often than not judgment-proof.
The law allows the employer to seek indemnity from the employee for any judgment the employer is required to pay the victim of the employee's tort, because the employee is the primary wrongdoer.
But the judgment-proof problem renders the employer's right of indemnity of little or no value in most cases.
Another important example of collective punishment in law is the rule that all members of a conspiracy are criminally liable for the crimes committed by any member within the scope of the conspiracy, provided it was foreseeable.
So if one member of a drug gang beats up a defaulting customer, the other members are apt to be guilty of assault and battery as well even though they had nothing to do with the beating.
A related rule, the felony-murder rule, makes a criminal guilty of first-degree murder if a killing occurs in the course of his crime, even if the killing is by someone else and he did not authorize or even expect it--as in the case where a policeman in the course of trying to thwart the crime accidentally kills a bystander.
The theory behind these rules--the theory behind collective punishment in general--is that someone other than the actual perpetrator of a wrongful act may have more information than the government has that he could, if motivated, use to prevent the act.
The employer may have been faultless in the particular case, but knowing that it is liable anyway will give it a strong incentive to exert control over its employees to prevent accidents--even by such indirect measures as reducing its work force by substituting robots or other mechanical devices for fallible human workers.
Similarly, conspirators have an incentive to police their members to avoid getting themselves into unnecessary trouble; and the perpetrators of a bank robbery, for example, have an incentive to avoid being armed or provoking bank guards or police.
Collective punishment can properly be criticized when the cost of punishment to the innocent members of the collective is disproportionate to the benefits.
This would be true if the government executed the family members of murderers.
Such a measure would create powerful incentives for family members to monitor each other's behavior, and the murder rate would drop.
Or would it? The law would deter the formation of families; and it might even induce families to murder members whom they thought likely to commit murders, since the family might be better able to conceal a murder within the family than the family member who was murdered would have been able to conceal his own murders.
In addition, even if the family-responsibility law was effective in reducing the murder rate, the rate of killing might rise; for suppose there were 10 percent fewer murders but for every murder that did occur an average of two family members would be executed.
The example, while extreme, illustrates the essential point about collective punishment: that it is an extremely costly method of punishment, because several or many people are punished for the wrongful act of one.
For example, if the cost of punishment to a person punished is X, then if he is a member of a group of ten, all of whom are punished collectively for his act, the punishment cost is 10X rather than X.
So collective punishment is properly regarded as highly exceptional.
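The family-responsibility example can be made concrete with numbers; the 10 percent reduction and the two executions per murder are the hypotheticals from the discussion above, while the baseline of 100 murders is an assumed figure added for illustration.

```python
# Assumed baseline of 100 murders; the 10 percent reduction and the
# two-executions-per-murder figure are the hypotheticals from the text.
baseline_murders = 100
murders_under_law = baseline_murders * 0.90   # 10 percent fewer murders
executions = 2 * murders_under_law            # two family members per murder
total_killings = murders_under_law + executions
# The murder rate falls from 100 to 90, but total killings rise to 270.
```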
It is most likely to be optimal if either the collective punishment is very mild or the cost to the punisher of failing to prevent the wrongful act is very great, and in either case if in addition the alternative of individual punishment is inadequate.
The first case is illustrated by mild collective punishment of children.
It includes things like a parent's punishing both his squabbling children because he cannot figure out which one was at fault, or a teacher's keeping the entire class after school because he cannot determine which child threw a spitball at him.
These are easy cases because the innocent member or members of the collectively punished group have the necessary information, and ability to act effectively on it, for preventing the misbehavior; they can do so at much lower cost than the punisher because the punisher cannot readily obtain the information necessary to identify the actual wrongdoer; yet the costs to the group of the punishment are slight.
The second case--optimal collective punishment when the cost of failing to prevent the wrongful act is great--may be illustrated by Israel's policy of demolishing the houses of the families of suicide bombers.
The suicide bomber himself is not deterrable, the harm he does is great, and the punishment method, while severe, is mild relative to the harm that a successful suicide bomber can inflict.
Because warfare is inherently indiscriminate, innocent persons whose only connection to the fighting is that they live in the combat zone are unavoidably "punished," but this is not collective punishment as a deliberate policy.
For one thing, those persons will usually have no ability to restrain the combatants on their side.
As for the conflict in Lebanon, however, a nation is undoubtedly responsible for predatory acts committed against another nation by groups operating openly on the nation's territory.
That responsibility is an example of the kind of collective responsibility that warrants collective punishment for its breach, as in the somewhat parallel case of the employer's liability for the torts of its employees when they are committed within the scope of employment.
But how do you "punish" a nation? The nation is the collective of its citizens.
Punishing the nation means punishing its citizens even if there is nothing they can do or could ever have done to prevent the actions for which they are being held responsible.
Assessment of the reasonableness of the punisher's course of action would then depend on such factors as the alternatives open to the punisher, the amount of damage inflicted by the group that the collectively punished population failed to prevent, the amount of damage that collective punishment inflicts on that population, and the likelihood that the punishment will succeed in getting the punished nation to take effective steps to prevent similar attacks by the rogue group in the future.
The last point is vital because it is extremely difficult for one nation to prevent an attack mounted by a terrorist group from the territory of a nation that has acted as the group's willing or unwilling host.
That is why that nation is responsible for restraining the group and why, therefore, it may be a proper candidate for collective punishment.
A final point.
I said earlier that a law imposing capital punishment on family members who failed to prevent one of their members from committing a murder would discourage family formation.
In other words, collective punishment tends to cause defection from the group.
This may be in the punisher's interest: if Lebanese flee southern Lebanon so as not to be "collectively punished" for the acts of Hezbollah, Israel will have a freer hand in dealing with Hezbollah there.
Becker's comprehensive analysis leaves me with little to add, especially as I am not permitted to comment publicly on the constitutionality of the "big box" ordinance because (if it does go into effect) its constitutionality is likely to be challenged, and in my court to boot.
The first-order economic analysis of minimum wage laws shows that they reduce employment by raising the price of labor; the Law of Demand teaches that an increase in the price of a good reduces the quantity of it that is demanded.
A second-order analysis complicates the picture.
Price affects supply as well as demand.
An increase in the price of labor might attract into the labor force individuals who, at the existing price, prefer to go to school, engage in crime, work part time, or subsist on welfare.
If, moreover, there is a large sector exempt from the law, the law's main effect may be to shift workers to the exempt sector rather than to reduce overall employment.
The higher wages in the covered sector, by driving up employers' costs in that sector, will tend to reduce the demand for the products and services produced by those employers and to increase the demand for substitute products and services produced in the exempt sector, which in turn will increase the demand for labor in that sector.
What seems relatively clear, however, is that the brunt of the disemployment effect of the minimum wage will be felt by marginal workers.
For example, some teenagers whose marginal product (that is, their contribution to the employer's profits) was just at or only slightly above the minimum wage will, if the minimum wage is raised, be replaced by slightly more productive teenagers from affluent households who were not attracted to working when the wage was lower.
The smaller the sector covered by the minimum wage law (and the coverage of the "big box" ordinance is very limited), the more dramatic the disemployment effects of the law are likely to be.
The demand for labor as a whole is inelastic, but the demand for labor by an individual company or a small group of companies is likely to be quite elastic--not because the company can easily substitute capital for labor, but because it cannot pass on increased costs to its customers if it has many competitors who have lower labor costs by virtue of being exempt from the minimum wage.
Such a company, assuming it faces an upward-sloping average-cost curve (meaning that its average cost rises with its output--the normal assumption about a firm's cost structure in a market with many firms, because if its costs were invariant to its output it could expand indefinitely), can control its labor costs only by reducing its output and thus laying off workers.
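The contrast between market-wide and firm-level elasticity can be sketched with the standard point-elasticity approximation; the elasticity values below are assumptions chosen for illustration, not estimates.

```python
def employment_change(elasticity, wage_increase):
    """Point-elasticity approximation: the percent change in labor demanded
    is roughly -elasticity times the percent change in the wage."""
    return -elasticity * wage_increase

market_elasticity = 0.3   # assumed: demand for labor as a whole is inelastic
firm_elasticity = 2.0     # assumed: a covered firm facing exempt rivals is elastic
wage_increase = 0.10      # a 10 percent minimum-wage increase

market_drop = employment_change(market_elasticity, wage_increase)  # about -3%
firm_drop = employment_change(firm_elasticity, wage_increase)      # -20%
```

The same wage increase that barely dents market-wide employment forces a sharp cut in output, and hence in employment, at the covered firm.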
One especially draconian way of doing this is by relocating the firm's plants or other facilities from the jurisdiction imposing the high minimum wage to a jurisdiction that has a lower minimum wage.
Becker points out that this may be a consequence of the Chicago ordinance because it does not reach Chicago's suburbs.
It is a reason for believing that state minimum wages are likely to have fewer disemployment effects than local minimum wages, and the federal minimum wage fewer disemployment effects than state minimum wages.
At the current minimum wage in Illinois of $7.75 an hour, an employee who works 2000 hours a year (a 40-hour week with two weeks of annual vacation) and is paid the minimum wage earns only $15,500 a year.
This is a pittance, though if the minimum-wage employee's spouse is employed at a significantly higher wage, the family's income may not be at a hardship level.
Similarly, the minimum-wage employee may be an elderly person who receives social security and Medicare and may have a company pension in addition.
These possibilities show that minimum wage laws, even if they had no disemployment effects, would be a clumsy instrument for combating poverty.
A better approach than raising the minimum wage would be increasing the earned-income tax credit (negative income tax), which is a method of increasing the earnings of marginal workers without confronting their employer with a higher cost of labor and thus inducing the employer to discharge those workers whose marginal product is lower than the minimum wage.
But this would be difficult for an individual city or even state to do; it would require federal action.
Articles by Eric Lipton in the June 18 and 19 issues of the New York Times discussed the "revolving door" phenomenon with specific reference to the Department of Homeland Security.
According to Lipton, although the Department is only three and a half years old, already more than two-thirds of its senior executives have quit for jobs in the private sector, mostly working for companies that have or seek contracts with the Department, which has an annual budget of some $40 billion.
These executives, some of whom had come to the Department from the private sector for brief stints in government service, are paid multiples of their government salaries when they leave to join or rejoin the private sector.
Although departing government employees are forbidden to lobby their former government employer for a year, the prohibition is particularly porous in the case of the Department of Homeland Security because a former employee is permitted to lobby from the start any unit in the Department for which he did not work.
The Department is a conglomerate of 22 formerly separate agencies, with overlapping responsibilities, and there are subunits within each of the agencies.
Should the revolving door be stopped or slowed? Two considerations favor the revolving door.
First, people who have served in government have useful information about government's needs and procedures; that information can enable a better matching of government contractors with the agencies that purchase their services.
Second, the opportunity for lucrative private employment after a stint of public service reduces the cost to the government of obtaining able employees.
The compensation of government employees includes not only their government salaries but also the enhanced private earning capacity that they acquire by their government service.
But these points are persuasive only with regard to career government employees, in the sense of people who worked for the government--initially at least at a junior level--for many years.
They accrue valuable knowledge over the course of their employment and the prospect of eventual private-sector employment substantially increases in real terms their meager compensation as government employees.
Not that there isn't a loss; many of the ablest and most experienced government employees leave government well before normal retirement age, while the least able stay till or beyond that age because of the difficulty of firing government workers.
The system can also produce transitional crises, as illustrated by the hemorrhaging of government security personnel in the wake of the September 11, 2001, terrorist attacks.
The attacks caused a surge in private demand for security personnel, resulting in a sudden and substantial loss of experienced CIA, FBI, and other security officers to the private sector at greatly increased salaries.
The increased ratio of private to government salaries represented a windfall for these officers because it had not been anticipated.
In the long run, however, these windfalls become anticipations that will enable the government security services to hire abler people because they will foresee superior private-sector opportunities.
In other words, as a result of the continuing concern with terrorism, working for a government security agency confers more human capital on an employee than it did before 9/11.
Meanwhile, however, there is tremendous turnover in government security agencies, and a resulting decline in the quality of those agencies, as senior officers vacate their positions for the private sector and are replaced by inexperienced juniors.
The impact on quality is aggravated by the disruptive effect of rapid turnover in any organization.
The exodus of officials of the Department of Homeland Security for the private sector, about which Lipton wrote, is a distinct phenomenon.
Many of them are people who had come to work for Tom Ridge in the White House when he was the President's homeland security advisor and went with him to the Department when it was formed in March of 2003 and he became the first head of it.
Many did not have extensive or relevant government experience.
Moreover, the Department has from the outset been grossly mismanaged.
The fault lies mainly in the design and structure of the Department and in the haste with which it was created; but no one considers it, even given these constraints, a well-managed enterprise.
The companies that have hired these officials do not care, however, because they are not hiring DHS officials for their managerial expertise.
They are hiring them in the hope that it will facilitate the obtaining of contracts with the Department.
The Department needs contracts, of course, and its former officials doubtless have a good sense of how a contractor can make an attractive pitch to the Department; otherwise the contractors would not have hired these officials at high salaries.
But whether the officials are actually knowledgeable about the Department's needs is another matter.
Many of them were birds of passage, who never became real experts on security.
There is warranted suspicion that many of them got their high positions in the Department by reason of political contacts, and those contacts may enable them to land contracts for their new employers that are not in the government's best interest.
So the first reason I gave for why "revolving door" practices may serve the public interest is probably absent in the case of senior officials.
And likewise the second.
The prospect of subsequent reemployment by the private sector probably attracts few able nongovernment people to government jobs.
It is disruptive to give up one's private job to work for government for a short time with the aim of then returning to the private sector at a higher level--a level one might well have attained in the ordinary course of promotions and job changes had one remained in the private sector.
Moreover, there is what economists call a "last period" problem that is more serious in the "bird of passage" case than in the case of the career government employee.
An individual in the last period of his employment (or a company that is about to go out of business) is not restrained in his self-interested behavior by concern that his employer will fire him (or, in the case of the company, that its customers will desert it).
Any government employee who has decided to seek private employment may be tempted to make decisions that will make him more attractive to prospective private employers; the added problem with the "birds of passage" is that their entire government service is last period because they know they are going to return to the private sector soon.
All their decisions as government officials may be influenced by a desire to position themselves for as lucrative a reentry into the private sector as possible.
What might be done to alleviate the revolving-door problem? One possibility would be to restructure the civil service so that it paid better and, as important, reached higher in the government system.
In the United Kingdom, civil servants occupy the highest posts in government just below the ministerial level.
The opportunity to become a permanent undersecretary is an inducement to the ablest civil servants to remain in government service for their entire, or at least a very long, career.
In our government quite junior officials, such as assistant secretaries of departments and even many deputy assistant secretaries, are appointed from outside the ranks of the civil servants.
These are, many of them, the birds of passage; and the diminished promotion opportunities for civil servants make a civil service career much less attractive for able and ambitious people than it would otherwise be.
The major exception of course is the military, a branch (realistically) of the civil service in which one can rise to a very high rank, because there is no lateral entry into the uniformed service.
The CIA and FBI are other exceptions, since among their top officials ordinarily only the director himself is appointed from outside the agency staff.
Of course there would be costs in strengthening the civil service--one being that the able people it attracts might be more productive in the private sector.
But the challenges faced by the American government at present are so acute that we must take steps to improve governmental efficiency, and reform of civil service may be one of them.
I am in broad agreement with Becker's excellent analysis.
As discrimination declines, replaced by affirmative action, explanations for lagging achievement that are based on discrimination lose their plausibility.
They were never entirely plausible, given Jewish achievement in the face of fierce discrimination, though it is argued by Steven Pinker in a recent issue of the New Republic that discrimination against Jews in the Middle Ages, by forcing them into middleman occupations where intelligence is a more valued asset than in farming or soldiering, resulted in the more intelligent Jews having a higher birth rate (because they were better off) than the less intelligent Jews; and so, through the operation of natural selection, discrimination can be "credited" with some of the responsibility for the high average IQ of Jews today--even its genetic component.
(Hitler may have had something to do with this as well, as it is plausible that the most intelligent European Jews saw the handwriting on the wall earliest and left Europe in the 1930s before it was too late.)
As Becker points out, the mean performance of women in college and university is superior to that of the men, but the variance of male performance is greater and as a result there are more male geniuses.
There is no reason why the difference in variance should result in higher average male earnings; that higher average is probably the result of women's spending less time in the work force because of pregnancy and child care.
Women's greater proclivity for child care may well have a biological basis, as may the difference in variance that I mentioned.
In the "ancestral environment"--the term that anthropologists use to describe the prehistoric period in which human beings reached approximately their current biological state--women who were "steady" would have tended to have the maximum number of children, while natural selection might favor variance in male abilities because variance would produce some outstanding men who would tend to reproduce more than other men (including the "steadies") in the polygamous conditions of prehistoric society.
If the explanation based on evolutionary biology is correct, women will continue to be "underrepresented" in high-achievement positions in many fields; why anyone should care is beyond me.
But it doesn't follow that their average earnings will continue to be significantly lower than those of men.
Women's lesser commitment to the labor market may be balanced by their greater ability than men to perform most jobs, assuming academic performance is a good proxy for aptitude for today's desirable jobs.
With the decline in the importance of physical strength and stamina as a job qualification, women may be able to perform most jobs better than men on average, though men may continue to dominate the top--but also the bottom--tier of the labor market.
The achievement lag in black males is troublesome from a social standpoint, as it seems correlated with definite social pathologies, such as enormous overrepresentation in criminal activities.
Moreover, it is a matter of a lower mean rather than less variance.
If and to the extent that that lower mean is a result of lower IQ, not much can be done because IQ has a strong genetic component--and what is not genetic may still be innate rather than cultural (a product of conditions in the womb, for example).
The genetic and environmental influences on abilities interact, as Becker says, but in addition the genetic can influence the environmental: many low-IQ mothers may be unable to take care of themselves adequately in pregnancy, contributing to their children's having innate intellectual deficiencies due to poor maternal nutrition or health care.
Differences in the mean achievements of racial or gender groups must be kept in perspective.
General intelligence (IQ) follows a bell-shaped distribution, and two bell-shaped distributions that have different means will still overlap to a great extent unless the means are very far apart.
The differences will be greatest in the tails of the distributions.
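The point can be illustrated numerically. In this sketch the means, standard deviation, and cutoff are all invented for illustration: two normal curves with the same standard deviation of 15 and means only 5 points apart overlap almost entirely, yet the higher-mean group is roughly three times as numerous beyond a cutoff three standard deviations out.

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu_a, mu_b, sigma = 100.0, 95.0, 15.0   # invented means and standard deviation

# For equal-variance normals, the overlap is 2 * Phi(-|difference| / (2*sigma)).
overlap = 2 * norm_cdf(-abs(mu_a - mu_b) / 2, 0, sigma)   # about 0.87

# In the far right tail, the disparity is much larger.
cutoff = 145.0   # three standard deviations above the higher mean
tail_a = 1 - norm_cdf(cutoff, mu_a, sigma)
tail_b = 1 - norm_cdf(cutoff, mu_b, sigma)
ratio = tail_a / tail_b   # roughly 3: modest mean gap, large tail disparity
```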
The achievement lag of Hispanic males may be a transitional phenomenon; they may still be adjusting to an American male culture that is quite different from the "macho" culture of Latin America, which is not conducive to vocational achievement under modern American conditions.
Like Becker, I view affirmative action as a matter of choice for colleges and universities, at least when the institutions are private rather than public.
Higher education is highly competitive, and I am reluctant to have the government tell its institutions what policies are best.
Academic freedom implies a high degree of academic autonomy, including autonomy in the administration of the institutions of higher education.
Personally, however, I would like to see a few of the top colleges abolish all preferences unrelated to academic merit--no athletic scholarships, no affirmative action, no favoritism for the children of professors or of major donors, and no legacy admissions.
That would be a useful experiment in the benefits and perhaps costs of meritocracy.
It would have the incidental effect of giving us a better idea of the extent of real differences across race and gender in academic capability.
What can economics contribute to decisions on the further conduct of the war in Iraq? I set to one side all issues concerning the initiation of the war, the adequacy of intelligence and planning, the mistakes made in the conduct of the war since the invasion of Iraq in March 2003, and the costs that have already been incurred.
(See Becker's and my postings of March 19, 2006, concerning the costs and benefits of the war up to that date.) All those are bygones and should not be allowed to influence current decision making.
The correct perspective is an ex ante one.
Rational decision making has the general form of cost-benefit analysis.
That is, one compares alternatives and picks the one that offers the greatest surplus of anticipated benefits over anticipated costs.
This requires monetizing benefits and costs and discounting (multiplying) them by the probability that they would actually be realized if the particular alternative were chosen.
The challenge to the application of cost-benefit analysis to the question of what the United States should do in Iraq lies in the difficulty of monetizing many of the relevant costs and benefits and of estimating the probabilities that they will be realized by particular courses of action that should be considered and thus compared.
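The bare decision rule is simple to state even though the inputs are hard to estimate. Here is a minimal sketch; every figure is an invented placeholder, not an estimate of any actual cost or probability in Iraq.

```python
def expected_net_benefit(outcomes):
    """outcomes: list of (probability, monetized_benefit, monetized_cost)
    tuples covering the possible results of one course of action."""
    return sum(p * (benefit - cost) for p, benefit, cost in outcomes)

# Invented placeholder figures, for illustration only.
option_a = [(0.5, 300.0, 280.0), (0.5, 100.0, 280.0)]
option_b = [(0.7, 120.0, 50.0), (0.3, -200.0, 50.0)]

better = "A" if expected_net_benefit(option_a) > expected_net_benefit(option_b) else "B"
```

The hard part, as the discussion stresses, is supplying defensible probabilities and monetized values for each outcome, not performing the comparison.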
An initial distinction should be drawn between monetized and monetizable costs.
Our expenditures on military and civilian operations in Iraq are of course monetized, but not the deaths and injuries that our troops sustain.
They are, however, monetizable.
The greater the risk of death or injury, the higher the wage that is demanded to enlist or re-enlist.
That wage premium (as discussed in my post of June 4 of this year), to the extent that it has risen as a result of the Iraq war, provides a basis for estimating the cost of anticipated deaths and injuries to our troops from continued involvement in Iraq.
In effect, those costs are impounded in the wage premium.
The very slow pace at which the army is being expanded is widely considered a sign of inefficiency, but an alternative possibility is that the expansion is being deliberately slowed out of concern that the wage premium necessary for a rapid expansion would be staggering.
Another important monetizable cost of the war is, if experience with the war in Vietnam is a reliable guide, the tendency to conceal the full costs of an unpopular war by deferring maintenance and replacement of equipment, drawing down reserve stocks of equipment and supplies, and cannibalizing spare parts from equipment not in use.
There is also the present cost of long-run medical and disability benefits for the thousands of permanently injured veterans of the war.
When the readily monetizable costs of the war are added to the monetized costs now running at some $140 billion a year, the total monetized and monetizable costs could be twice that amount.
There are also nonmonetizable costs, of course, such as the contribution that continuing the war makes to recruitment and training of Muslim extremists who may want to attack the United States either directly, or indirectly by destroying regimes friendly to the United States or by disrupting the production or transportation of Middle Eastern oil.
The presence of U.S. troops anywhere in the Middle East apparently acts as a provocation to many Muslims.
Against this it is argued by the Bush Administration that if we withdraw from Iraq, the terrorists who are attacking our troops there will as it were follow us to the United States.
That is possible, but there are two contrary arguments.
The first is that al Qaeda is Sunni, and if we leave Iraq the Sunnis there will find themselves hard pressed by the Shiites, who control the government; so al Qaeda may continue to be preoccupied with Iraq for years.
Second, if our presence in Iraq endangers us by fostering recruitment and training of Islamic terrorists, it seems contradictory to claim that our absence would act as a similar provocation.
Another possible nonmonetizable cost is the boost to the terrorists that would be given by our acknowledging defeat in Iraq.
Terrorist recruiters would argue that Islamic extremism was winning its global struggle with the West and that this was proof that God is on the side of the extremists.
There is also a natural attraction to being on the winning team--the winning side in history.
Again, though, there is an element of paradox in arguing that our invading Iraq was a provocation and that our withdrawing from Iraq would be an equal or (the position of the Administration) a greater provocation.
The cost most emphasized by the Administration is the possibility of chaos in Iraq if we leave--an intensified civil war with interventions by Iran, Saudi Arabia, Turkey, and possibly other countries as well.
Interventions by foreign countries in civil wars are common.
Another possibility would be a partition of Iraq into Shiite, Sunni, and Kurdish states, on the model of the Yugoslav breakup, which was accompanied by great violence.
The United States would be blamed, and this might well increase Muslim hostility to the United States.
Against this it can be argued, first, that withdrawal of U.S. troops might induce the contending factions in Iraq to settle their differences rather than inviting intervention by the neighboring countries, and, second, that whenever we leave, there will be anarchy in our wake because we are unprepared to commit the forces that would be required to pacify such a populous, violent, and fissiparous nation as Iraq.
If the nonmonetizable costs of continuing the war are ignored, either on the ground that the best guess is that they are likely to be a wash or that they are unquantifiable because no one can predict the consequences of our withdrawal, then the case for withdrawal becomes compelling: on one side would be costs probably in excess of $200 billion a year and on the other side no calculable or even probable benefits.
Moreover, there are nonmonetizable costs to our continued involvement in Iraq, in particular the distraction of our government from other foreign policy problems and perhaps domestic problems as well.
The benefits of our staying in Iraq seem in current thinking to be limited to averting the costs I have mentioned.
There is little expectation of a victory that would transform Iraq and the Middle East and weaken the terrorist threat to the United States.
An intermediate approach to valuing our continued involvement in Iraq would exploit the notion of option value, an important concept of decision theory.
An option is a device for deferring a transaction until more is known about its value.
We can think of the many billions of dollars that the United States is currently spending on the war in Iraq as the purchase of an option to delay a decision on whether to leave until we have more information about the likely consequences of leaving.
That is a prudent course when potentially very large consequences cannot be evaluated at present but may be evaluatable in six months or a year.
That seems to be the thinking of the Administration.
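The option logic can be sketched numerically. All the costs below (in billions) are entirely made up; the point is only the structure of the calculation: waiting is worth its price exactly when the expected gain from choosing with better information exceeds the cost of the delay.

```python
# Stylized numbers, not estimates: staying six more months framed as
# purchasing an option to decide after the uncertainty resolves.
p = 0.5                                       # probability chaos is severe
chaos_cost = {"severe": 500, "mild": 100}     # cost of leaving, by state
stay_cost_total = 280                         # hypothetical cost of staying on
wait_cost = 70                                # cost of six more months in Iraq

# Decide now, before learning the state: pick the lower expected cost.
expected_leave = p * chaos_cost["severe"] + (1 - p) * chaos_cost["mild"]  # 300
decide_now = min(expected_leave, stay_cost_total)                          # 280

# Pay to wait, learn the state, then choose the cheaper course in each state.
decide_later = wait_cost + (
    p * min(chaos_cost["severe"], stay_cost_total)
    + (1 - p) * min(chaos_cost["mild"], stay_cost_total)
)  # 70 + 0.5*280 + 0.5*100 = 260

option_value = decide_now - decide_later   # 20: waiting is worth its cost here
print(option_value)
```

With these made-up numbers waiting pays; but if the six months produce no information--the objection pressed below--the expected cost of deciding later collapses to the cost of deciding now plus the waiting cost, and the option is worthless.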
The objection is that there is no indication that waiting is producing any information.
The optimal strategy for the strongest Iraqi factions, which is to say the Shiites and the Kurds, is simply to lie low until the United States withdraws.
The Sunnis have less ground for optimism concerning their position when the U.S. withdraws, and so they are showing signs of willingness to cooperate with us.
But it is unclear how that willingness translates into a forecast of what the future holds for Iraq whether we withdraw in the near term or persist indefinitely.
The Yugoslav precedent suggests that when the lid on a cauldron of smoldering ethnic hatred is lifted, civil war ensues.
That process is already well under way in Iraq.
A critical variable that receives insufficient attention by the media is the condition of the Iraq armed forces and police.
Is it improving? At what rate? What is the desertion rate from the armed forces (a very good measure of effectiveness)? The great failure in Vietnam was the failure to create a South Vietnamese security structure that could stand up against the North Vietnamese without U.S. aid.
No matter how successful the United States is in suppressing violence in Iraq, our departure will be followed by collapse if we leave a security vacuum.
Since the current emphasis appears to be on quelling violence rather than on creating a viable Iraqi security structure (which may be impossible), the option value of our continued involvement seems slight.
I agree with Becker's excellent piece and have little to add.
He is certainly correct that the political saleability of a carbon tax aimed at reducing carbon dioxide emissions that contribute significantly to global warming would be greatly enhanced by emphasizing the national-security benefits.
That way both environmentalists, who tend to be liberals, and national-security hawks, who are almost always conservatives, might be induced to join in supporting such a tax.
An important detail is that a carbon tax and a tax on gasoline or other fuels are not identical.
A tax on gasoline would have a direct effect in reducing demand for oil, thus reducing, as Becker points out, the oil revenues of oil-producing nations.
The tax would reduce demand; and since oil is produced along the usual upward-sloping supply curve, the price of oil equals the supply cost of the marginal output, so that price generates enormous revenues for low-cost producers--revenues that would shrink as demand fell.
But a gasoline tax would be inferior to a carbon tax from the standpoint of limiting global warming, because producers of oil, refiners of gasoline, and producers of cars and other products that burn fossil fuels would have no incentive to adopt processes that would reduce the amount of carbon dioxide emissions per barrel of oil, gallon of gasoline, etc.
A carbon tax would create such an incentive and would also have a strong indirect negative effect on the demand for fossil fuels.
I would put greater weight on the environmental issue than on the national-security benefits of reduced demand for oil.
I would thus be disinclined to encourage the substitution of coal for oil, as that would do nothing to reduce carbon-dioxide emissions though it would reduce our dependence on oil.
If as I believe our greatest national-security threat is from terrorism, the benefits of reducing the world's demand for oil would be modest.
The expense of terrorist attacks is small relative to the aggregate resources of countries that finance or permit their citizens to finance terrorism, and would still be small relative to those resources after the wealth of some of those countries declined somewhat as a result of a reduced worldwide demand for oil.
A separate concern is worldwide (and hence our) dependence on unstable sources of oil in countries like Venezuela, Iraq, Nigeria, Iran, and potentially Saudi Arabia.
Coupled with growing demand for oil by China, India, and other developing countries, an uncertain supply could cause the price of oil to spike.
That would not be altogether a bad thing because it would limit demand and thus reduce carbon dioxide emissions.
Moreover, the spike would be a politically appealing occasion for the imposition of a stiff carbon tax that would reduce the revenues of the producing countries and (to the extent the tax did not reduce demand in the United States) transfer some of those revenues to the U.S. Treasury.
The tax might not increase the price of oil to consumers significantly, because laid on top of the price spike it might so reduce the demand for oil that the cost of production fell steeply, assuming an inelastic supply.
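The incidence claim can be illustrated with a linear supply-and-demand sketch. The parameter values below are invented, chosen only so that the supply curve is steep (inelastic); with such a curve most of a per-unit tax is borne by producers rather than consumers.

```python
# Linear model (all numbers hypothetical): demand P = a - b*Q,
# supply P = c + d*Q. A large d means a steep, inelastic supply curve.
a, b = 200.0, 1.0
c, d = 20.0, 9.0
t = 30.0            # per-unit tax

q0 = (a - c) / (b + d)          # pre-tax quantity: 18
p0 = a - b * q0                 # pre-tax price: 182

q1 = (a - c - t) / (b + d)      # post-tax quantity: 15
p_consumer = a - b * q1         # 185: consumers pay only 3 more
p_producer = p_consumer - t     # 155: producers receive 27 less

# Consumer share of the tax is b / (b + d) = 1/10 with these slopes.
consumer_share = (p_consumer - p0) / t
print(p0, p_consumer, p_producer, consumer_share)
```

The general result the sketch instantiates: the side of the market with the less elastic curve bears the larger share of a tax, which is why, against inelastic supply, a stiff carbon tax can leave the consumer price nearly unchanged while producer revenues fall steeply.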
A point Becker does not touch on is the importance of international cooperation to deal with environmental problems.
Although the United States is about a quarter of the world's economy, even a 10 percent decline (at present unforeseeable) in our carbon-dioxide emissions and our burning of fossil fuels would have only a modest effect on global warming and on the overall demand for oil.
Indeed the entire effect might be offset by soaring demand (and concomitant increases in carbon dioxide emissions) by China and other developing countries.
There are very serious free-rider problems involved in reining in the use of fossil fuels by developing countries.
Yet in other areas of global conflict, such as intellectual property (consuming countries in the developing world do not want to pay royalties to the producers of intellectual property), it has proved possible to overcome free-rider problems to a considerable extent through aggressive efforts to achieve international cooperation.
The Administration seems not to have exerted such efforts with respect to environmental matters, and in national security too has generally preferred to go it alone.
The limits of unilateralism were underscored in an article in the Wall Street Journal on July 20 explaining how smog in San Francisco and Los Angeles is being exacerbated by enormous plumes of polluted air ("rivers of polluted air," the author called them) blown eastward over the Pacific Ocean from China.
Of course we cannot order China to stop polluting.
But there is much that China wants from us that we should be able to give the Chinese at relatively low cost in exchange for better environmental controls.
One would like to see the Administration more active in this area, but one of the casualties of the war in Iraq is distraction from other urgent global problems.
The economist Robert H. Frank, in an article in the New York Times on July 5 entitled "A Career in Hedge Funds and the Price of Overcrowding," argues that the immense incomes of the most successful hedge-fund managers and private-equity entrepreneurs are drawing excessive resources into those activities.
I believe that this is possible, but much less certain than Frank suggests.
An English economist named Arnold Plant argued long ago that patent and copyright laws could have the effect of attracting excessive resources into the production of patented or copyrighted products.
The reason was that patent and copyright protection, by excluding competition, enables the patentee and the copyright holder to obtain monopoly profits.
Equally productive activities in competitive markets would not generate such profits, and therefore resources would flow from them into the monopolized markets until the profits were equalized in the two sectors.
From an overall efficiency standpoint resources would be flowing to a less socially valuable use; they would be socially more valuable in the competitive markets.
This problem is real (though it might of course be offset by the role of patent and copyright protection in enabling external benefits to be internalized) and is dramatized by the phenomenon of the "patent race." Suppose that for an investment of $1 million a product having a commercial value of $4 million can be invented and brought to market in three years, but that for an investment of $2 million it can be invented and brought to market in two years and eleven months.
The extra month of output would be unlikely to have a value to society equal to or greater than the extra $1 million spent to get it to market a month sooner, yet if that investment would enable the investor to obtain a patent because he was the first to invent, it would yield him a net of $2 million ($4 million minus $2 million).
The problem is not that the successful inventor obtains a return in excess of his cost; this is essential to motivate invention because of the risk of failure.
The problem is that he may carry his investment beyond the point at which an additional dollar in investment would yield a dollar in additional value to society.
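The arithmetic of the patent-race example can be laid out explicitly. The $100,000 social value of the extra month is an assumed figure for illustration; the other numbers come from the example above.

```python
# Numbers from the patent-race example; the winner-take-all prize makes
# a socially wasteful extra investment privately profitable.
slow_cost, fast_cost = 1_000_000, 2_000_000   # invent in 3 years vs. 2y11m
prize = 4_000_000                             # commercial value of the patent
extra_month_social_value = 100_000            # assumed, for illustration

# Private calculus: spending the extra $1M secures the whole $4M prize.
slow_profit = prize - slow_cost   # 3,000,000 -- but only if no rival beats you
fast_profit = prize - fast_cost   # 2,000,000 -- guaranteed by inventing first

# Social calculus: the extra $1M buys society only one extra month of output.
marginal_social_return = extra_month_social_value - (fast_cost - slow_cost)
print(fast_profit, marginal_social_return)
```

The private return to accelerating is $2 million while the social return to the same dollar is negative: that divergence, not the existence of profit, is the inefficiency.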
I am skeptical that the situation in the financial management market is the same.
No doubt, as Frank argues, there are diminishing returns to financial management because there are only so many underexplored financial opportunities.
But suppose, plausibly, that there is enormous uncertainty concerning the design and implementation of investment strategies.
The higher the rewards for success, the more people (as Frank emphasizes) will be attracted to a career in financial management, and the likelier therefore that stars will emerge.
If these winners create enormous social values, this may "pay" for the losers, who were lured by the prospect of becoming winners from alternative career prospects in which their social product would have been greater.
It is not like a race for buried treasure or to exhaust a coal mine or an oil field, because there is no fixed quantity of financial opportunities.
New ones keep opening up all the time.
So it seems that Frank has really posed an empirical question rather than being able to offer (as he thinks he has done) a theoretical answer.
One empirical dimension is the actual social value added of star financial managers.
Here one might be tempted to distinguish between hedge funds, which invest but do not manage, and private equity firms, which restructure the companies they acquire in order to increase the companies' value.
It is easier to see the contribution of restructuring to social value, and harder to see the contribution of trading in securities.
But to the extent that hedge funds invest in new enterprises or buy stock or other securities issued by enterprises, they contribute directly to production.
And even when just buying securities owned by investors rather than issued by companies to raise capital, hedge funds and other investment companies contribute to a more accurate valuation of securities, which plays a vital role in directing economic resources to their most valuable uses and users.
A company whose stock price rises because investors have correctly determined it to be undervalued can raise capital at lower cost and thus attract resources to an activity in which the resources will be worth more than they are worth in their present use.
But there is no economic law that says that the reward of a financial manager is always equal to the contribution that his management makes to the efficiency of the economy.
It may be much greater.
This is most easily seen by supposing that luck plays a large role in investment success.
Then a career in financial management might attract substantial resources (in the form mainly of the opportunity costs of the time of the financial managers) that produced private rather than social value--private value in the form of large rewards that were the product of luck rather than skill.
That would support Frank's conclusion.
Frank points to overconfidence bias as a factor in attracting people to the hedge funds and private equity firms irrespective of the social value of such careers.
That bias has been well documented, but so has a force that tugs in the opposite direction--risk aversion.
Kenneth Arrow long ago argued that because of risk aversion, there is underinvestment in risky but socially productive activities; his example was innovation.
Overconfidence bias, to the extent it offsets risk aversion, may actually improve economic efficiency, a possibility that Frank ignores.
Outsourcing is a form of vertical "de-integration." "Vertical integration" refers to the form of business structure in which a firm owns a supplier or distributor rather than buying (from the seller) and selling (to the distributor).
Hierarchical direction within an organization is substituted for contracting for output.
The tradeoff is between the agency costs involved in directing people's work and the transaction costs involved in arm's-length contracts.
As markets grow, enabling greater specialization, there is a tendency to de-integration; vertical integration is attractive when the market for an input is so limited (maybe to just a single firm) that the supplier faces monopsony, which integration overcomes.
Outsourcing is famously illustrated by IBM's decision to outsource the production of the operating systems for its computers to Microsoft; previously IBM made the operating systems itself.
As this example shows, there is nothing in the definition of outsourcing to connect it to foreign commerce.
But the current anxiety about outsourcing focuses on the outsourcing of software development and other high-tech services to foreign nations, particularly India, and on the hardship to skilled American workers whose jobs are outsourced.
Oddly, Americans who are opposed to free trade don't mind as much when Americans buy from foreigners as when they hire them, though the effect is the same.
If Microsoft purchases software from an Indian company, the effect on American jobs is no different than if it hires Indian software engineers to work for Microsoft in India--or, for that matter, in the United States.
If the latter arrangement is preferred, it makes no sense for Congress to make it difficult for American companies to hire highly skilled foreigners to work in the United States.
In any event, the harder it is to obtain visas for highly skilled foreigners, the greater the incentive to outsource production to those highly skilled foreigners in their native lands.
So restricting visas seems a futile measure for trying to protect American high-tech jobs.
The obvious difference between outsourcing and importing labor is that the foreign immigrants would command higher wages in the United States than in their native country.
But they would also be more productive, because they would be working side by side with their American colleagues.
Despite sophisticated video conferencing, face-to-face interaction is still considered highly important to productivity.
It is the low wages in countries like India that make outsourcing so attractive, but as Becker points out, U.S. and other foreign (foreign to India, that is) demand results in bidding up the wages of highly skilled Indians in India, which acts as a brake on outsourcing.
It could be argued that outsourcing high-tech jobs creates human capital in the outsource nations, like India, and that the result may be greater competition from the high-tech sectors in those nations in the future.
A U.S. company might not take this effect of its outsourcing into account in deciding whether or how much to outsource, because of free-riding problems.
Its forbearance to outsource would benefit its competitors, but it would not be compensated by them for conferring the benefit.
If this is a concern, it argues for relaxing visa restrictions on high-tech foreigners, since once established in the United States they are unlikely to take the skills they learn here back to their native country.
Some will, but most will rapidly become assimilated Americans.
However, there would be no externality if the foreign workers in their outsource jobs pay for their own training in the form of accepting lower wages.
The costs of outsourcing are concentrated on Americans who lose their jobs or are paid less because of outsourcing to foreign countries, and the benefits, though they probably exceed the costs, are diffuse.
The benefits take the form of consumer surplus and greater labor demand (though not necessarily in the jobs that are outsourced) in the United States, resulting from the reduction in costs that the outsourcing firms experience; if there were no net reduction in quality-adjusted costs, there would be no outsourcing.
When the costs of a policy or practice are concentrated but the benefits diffuse, there is an asymmetry of political pressure, and this may explain the visa restrictions.
But from a neutral standpoint of aggregate (and average) economic welfare, there is no compelling case for limiting outsourcing--or for stinginess in granting work permits to highly skilled foreigners.
I agree with Becker that it would be good if universities (if everybody) were permitted to impose a mandatory retirement age on their employees.
As a matter of theory, however, the removal in 1994 of the professorial exemption from the Age Discrimination in Employment Act's ban on mandatory retirement ages need not have affected the average age of retirement of professors.
In general, a law that affects only one term in a contract should have little effect on behavior, because its effect can usually be nullified by a change in another term.
Eliminating mandatory retirement age is a good example.
If a university wants professors to retire at, say, age 65, it can pay them to do so; that is, it can buy out their tenure contract.
In the long run, the professoriat itself will pay for the buyouts, at least in part, because the opportunity for a buyout is a valuable option for which professors will be willing to pay by accepting a somewhat lower wage.
(See the discussion of mandatory retirement in chapter 13 of my 1995 book Aging and Old Age.)
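A back-of-the-envelope sketch may clarify how a buyout could be financed by a somewhat lower wage. All figures are hypothetical; the point is only that the present value of a small annual wage reduction over a career can equal the present value of a lump-sum buyout at retirement.

```python
# Hypothetical numbers: a university offers a buyout at age 65 and recoups
# its present value through a lower salary over the professor's career.
def present_value(payment, rate, years_until):
    """Discount a single future payment back to today."""
    return payment / (1 + rate) ** years_until

buyout = 500_000      # assumed lump sum offered at retirement
rate = 0.05           # assumed discount rate
career_years = 30     # assumed years from hiring to the buyout

pv_buyout = present_value(buyout, rate, career_years)   # about $116,000

# Annual wage cut whose present value over the career equals pv_buyout,
# via the ordinary annuity factor (1 - (1+r)^-n) / r.
annuity_factor = (1 - (1 + rate) ** -career_years) / rate
wage_cut = pv_buyout / annuity_factor                   # about $7,500 a year

print(f"PV of buyout: ${pv_buyout:,.0f}; implied annual wage cut: ${wage_cut:,.0f}")
```

On these assumed figures, a wage roughly $7,500 a year lower fully funds a $500,000 buyout, which is why the abolition of mandatory retirement need not raise universities' total labor costs in the long run.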
Even if the result of abolishing mandatory retirement age is higher costs for universities, to the extent that all competing universities are affected, they will be able to shift most of the cost to students in the form of higher tuition.
And to the extent that even generous buyouts are refused, universities can offset the effect by increased hiring of young faculty, albeit at increased cost.
For just as higher energy costs need not alter the age mix of the faculty, neither need the abolition of mandatory retirement do so.
Of course, this assumes that universities want a youthful faculty.
As Becker points out, and as I discuss below, there is a good reason for universities to want a youthful faculty: young faculty tend to be more innovative.
The average age of professors has increased, but the increase may largely have resulted from factors unrelated to the abolition of mandatory retirement ages: namely, continued rather dramatic increases in the health and energy--the youthfulness--of the elderly (which may narrow the productivity gap with young faculty); lighter workloads in elite universities; and delegation of teaching to teaching assistants and non-tenure-track teachers, reducing the demand for tenure-track faculty and hence increasing the average age of tenured faculty.
The political divergence between old and young faculty (the older being more leftwing) is at first glance odd.
If the adoption of a political ideology is driven by information, then since the information available to young and old is the same there should be no age-related difference in ideology.
It is plausible that the young would be drawn to more extreme positions, whether left or right, on the political spectrum because lack of experience would make them more susceptible to radical schemes.
But in academia it seems that Marxist and other extreme positions are more commonly embraced by the old than by the young.
I doubt that the adoption of a political ideology is normally a result of a rational weighing of information.
I think it is more commonly a matter of temperament interacting with aspects of personal identity (such as race and sex), life experiences, and nonrational beliefs, such as religious beliefs.
(I argue this in my recently published book How Judges Think.) This makes ideology resistant to change based on new information.
The expansion of the universities in the 1960s, together with the waning of antisemitism in university admissions and faculty appointments, resulted in a large influx of Jews, and Jews, for reasons never adequately explained, are disproportionately left-leaning.
In addition, the expansion must have lowered the age of faculty, and for the further reason that teaching provided a refuge from the draft during the Vietnam War.
The extreme to which the youth of the 1960s was drawn was leftist, and the left in the 1960s was farther to the left than today's left.
If, therefore, ideology is largely resistant to information, there will be a tendency for a person's ideological identity to persist notwithstanding events, such as the collapse of the Soviet Union and the rise of free-market ideology, that might be expected to move a "rational" ideologue rightward.
Becker rightly points to the danger that an increased age of university faculty members will result in reduced innovation.
But this cannot be seen as an automatic or inevitable consequence of the age discrimination law even apart from the theoretical argument that I began with, because, assuming an inverse correlation between productivity and age, universities can lower the age profile of their faculty without violating the law and probably without even having to expand the faculty.
The age-discrimination law applies equally to private businesses, but one does not hear it argued that there are too many old employees in private firms.
Universities could abandon tenure and adopt performance-based compensation schemes.
In addition, they could reduce the possibly too methodologically conservative influence of older faculty by reducing the role of faculty in appointing new faculty members.
Last week New York City began enforcing an ordinance that requires fast-food chains to post on menus and menu boards the number of calories in each menu item, in the same type size as the item itself.
(The ordinance is rather complicated; see www.nyc.gov/html/doh/downloads/pdf/cdp/calorie_compliance_guide.pdf, visited July 24, 2008. My summary is a simplification.) The stated purpose of the ordinance is to reduce obesity.
The ordinance will be criticized as being at once unnecessary, because information about calorie content can be conveyed without requiring that it be printed in large type on the menu (an alternative would be publication on the chain's website, or the posting of a separate notice in the restaurant), and paternalistic, because people concerned about their weight have the incentive and ability to inform themselves about the number of calories that they consume.
The ordinance may also be ineffectual, because most people eat most of their food at home rather than in fast-food outlets; anticompetitive, because small chains will incur the same costs as large ones to certify the caloric content of their offerings; blind to the effect of competition in forcing retail firms, including restaurants, to disclose whatever information will give them an advantage in competing for calorie-conscious consumers; unhelpful, because it will contribute to information overload on consumers bombarded with all sorts of warnings; and not based on a responsible cost-benefit analysis.
These are legitimate criticisms, but they may not be conclusive.
A law aimed at reducing obesity would be paternalistic if obesity did not produce external costs, but it does, because obese people consume a disproportionate amount of medical resources, and there is extensive public and private subsidization of medical expenses (private through insurance pools that are unable or forbidden to identify and reject high-risk insureds).
However, the size of the externality is in question, because obese people die on average at a younger age than thin people, and so consume medical resources for fewer years on average than thin people do.
While some obesity has strictly physical causes, most is due to poor eating habits and lack of exercise and is therefore treatable by changes in behavior.
If the necessary changes can be induced by low-cost informational warnings, the result is likely to be a reduction in the external costs of obesity.
However, government programs designed to educate consumers in the causes and consequences of obesity have not been effective.
Fast food is one of the factors that is responsible for the obesity "epidemic" in the United States and other wealthy countries.
Economic studies find that weight rises as the relative prices of meals at fast-food and full-service restaurants fall and as such restaurants become more widely available--both of which lower the full price of eating at them.
Partly because some of the costs of obesity are external, competition among restaurants or other food providers cannot be counted upon to optimize caloric intake.
An obese person will not eat less in order to reduce the social costs of medical subsidies.
It is not even clear that competition will produce the caloric intake desired by consumers for purely selfish reasons of health, medical expense, and appearance.
Firms are reluctant to advertise relative safety, because it alerts the consumer to the existence of danger.
Cigarette and auto companies were traditionally reluctant to advertise safer cigarettes and safer cars, as that might get consumers thinking and as a result induce substitution away from the product.
Prominent display of calorie numbers might persuade consumers to avoid fast-food chains rather than to look for the chain with the lowest calorie numbers.
This is especially likely because the high-calorie items on the menu tend to be the tastiest.
Inexpensive food rich in butter, cream, sugar, and egg yolk generally tastes better than inexpensive food low in those ingredients; low-calorie foods that taste good tend to use expensive ingredients.
For people who want to be thin, there is an abundance of information that enables them to adopt a healthful diet.
Neither ignorance nor externalities seem to be the important forces in the growth of obesity.
More important may be exploitation by food sellers of people's addictive tendencies, which have biological roots.
In the "ancestral environment," to which human beings are biologically adapted, a taste for high-calorie foods had great survival value.
As Becker has emphasized in academic work, the choice of an addictive life style may be freely chosen and the life style itself may be socially productive and personally satisfying; Becker and I, for example, are addicted to work.
But many obese persons became addicted to high-calorie foods as children, and a child's choice of an addictive life style is not an authentic choice, to which society need defer.
Nor can parents be assumed to be the perfect agents of their children, protecting them from unwise choices; it takes a lot of parental work to keep children physically active in the era of the video game, and away from rich foods.
So there is a case to be made for public efforts to reduce obesity.
The significance of the New York City ordinance lies in its requiring that calorie numbers be printed next to the food items on menus and menu boards and in large type.
The purpose is less to inform than to frighten.
Psychologists have shown (what is anyway pretty obvious) that people respond more to information that is presented to them in a dramatic, memorable form than to information that is presented as an abstraction or is merely remembered rather than being pushed in one's face; that is the theory behind requiring reckless drivers to watch videotapes of accidents and requiring cigarette ads to contain fearsome warnings.
It is one thing to know that a Big Mac has a lot of calories, and another thing to have the number emblazoned on the menu board, next to a mouth-watering picture.
The warnings--for that is what the display of high calorie numbers amounts to--may create fear of high-calorie foods, not only in fast-food chains but generally.
If so, and if as a result there is less obesity, there will be a reduction in medical expense and possibly a gain in happiness if, as one suspects, thin people are on average happier than fat people.
No one can know in advance the net effects of the ordinance.
Its effect on obesity may be small, and it will impose costs of compliance on the fast-food chains subject to it and as a result cause the price of fast food to rise, though perhaps by a trivial amount--and the increase in price will contribute, albeit modestly, to efforts to reduce obesity.
An increase in general education may have a greater effect than the ordinance in checking obesity: education tends to reduce people's discount rates, and the ill effects of obesity are greater in the long term than in the short term.
The argument for the New York City ordinance thus comes down to the argument for social experimentation generally: that it will yield valuable information about the effects of public interventions designed to alter life styles.
I therefore favor the ordinance, though without great optimism that it will contribute significantly to a reduction in obesity.
Eyebrows were raised when Leona Helmsley left $12 million to her dog in her will, and they were raised even further when it was learned recently that she had signed a "mission statement" indicating her wish that the charitable trust created by her will, which has an estimated $5 to $8 billion in net assets, be devoted to the welfare of dogs.
The judge supervising the implementation of her will cut the bequest to her dog from $12 to $2 million, and it is uncertain how much of the charitable trust will actually be devoted to dogs rather than to other objects of charity, since the mission statement is (according to news reports) not binding on the trustees who will be administering the trust.
Section 408 of the Uniform Trust Act makes trusts for pets enforceable (historically they were not--such trusts were called "honorary trusts" and it was up to the trustee to decide whether to enforce the trust even if commanded to do so by the document creating it, as was not the case with the Helmsley charitable trust), but only up to the amount actually required to maintain the pet in comfortable circumstances.
This would not necessarily limit the amount left to dogs as a class; there are so many dogs that even $8 billion could be spent on them without any individual dog receiving more than necessary for its maintenance in comfortable circumstances.
The possibility that dogs will receive billions of dollars from a bequest presents three interesting questions: why would a person leave so much money to dogs; should such bequests be permitted by law; and should charitable bequests be subject to estate tax rather than, as they are now, exempt from it?
Some pets are kept for essentially practical purposes, such as mousing in the case of cats and home protection in the case of dogs.
But increasingly pets are child substitutes or personal companions and, as such, love objects, and hence natural objects of bequests, particularly for childless or wealthy people.
And it is natural to extend one's affection to the entire species, just as, if you love any persons, even if just members of your family, it is natural to have at least an attenuated regard for the welfare of other people, even for the human species as a whole; and so with dogs and cats if you have a pet of one of those species.
Therefore, odd as it may seem, Mrs.
Helmsley's desire to spend billions of dollars on dogs is more easily understood than her desire to give her dog $12 million, since above a far lower amount (probably far below the $2 million allowed by the probate judge) it is inconceivable that the money could increase the dog's welfare; hence the size of the gift makes no sense as an altruistic measure.
 This may explain why the Uniform Trust Act authorizes the judge to cut down the amount of the bequest to a pet to the pet's maximum comfort level.
What makes a trust of $5 billion to $8 billion for dogs seem eccentric is that so much is spent on them already.
A bequest of that amount for endangered animal species or other animal-protective purposes would be easy to understand as an environmental measure, but not a bequest for dogs or cats.
However, a fundamental premise of normative economics is the subjectivity of values: value is determined by personal preference, and the preferences of adults who are compos mentis and back their preferences with money are not to be questioned by others unless the expression of those preferences would cause uncompensated harms to unconsenting third parties.
Moreover, bequests will decline if judges pick and choose which to enforce; and to the extent that bequest motives are a significant force in motivating people to earn money, people may not work as hard to accumulate an estate if judges will not honor their bequests, or, even if they do work as hard, they may save less because consumption becomes more attractive relative to saving when the objects for which people save are not fully chosen by them.
As I said, a bequest for a specified animal that greatly exceeds any conceivable estimate of what the animal needs to be as happy as it can be cannot be rationally altruistic, so perhaps the authority that the Uniform Trust Act confers on trustees to cut back such bequests to reasonable limits is justifiable--and for the additional reason that excessive wealth actually endangers an animal, since once it dies the money will go to residuary legatees; and killing an animal is not considered murder (though it can be a lesser crime) and is easier to arrange and conceal than killing a human being.
Expensive security precautions have in fact been taken for the protection of Mrs.
Helmsley's dog.
These concerns do not attend a bequest for a large class of animals.
The size of the Helmsley trust does suggest that it might be sensible to impose a ceiling on the charitable exemption from estate tax.
For example, the law might exempt the first $1 billion of a person's charitable gifts (whether made during his or her lifetime or at death), but above that level such gifts would be taxed at the ordinary gift and estate tax rates.
It is hard to believe that such a change in law would significantly affect work incentives, and it would therefore be an efficient tax.
If it did not reduce people's effort level, it would not reduce aggregate personal income, but (because it would reduce the size of bequests and other charitable gifts) it would merely spread that income about somewhat differently.
Given that much charitable spending is wasteful because of the weak incentives for efficiency of the staffs of charitable enterprises, economic efficiency might be increased if there were fewer and smaller charitable trusts.
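The proposed ceiling can be sketched as a simple tax schedule. The $1 billion exemption comes from the example in the text; the 40 percent rate is a placeholder assumption standing in for "ordinary gift and estate tax rates," not a figure from the post.

```python
def estate_tax_on_charity(charitable_gifts, exemption=1e9, rate=0.40):
    """Tax due on charitable gifts under a hypothetical exemption ceiling.

    Gifts up to the exemption remain untaxed; only the excess is taxed
    at the ordinary rate (the 40% here is an assumed placeholder).
    """
    return max(0.0, charitable_gifts - exemption) * rate

# A Helmsley-sized charitable trust of $8 billion would owe tax on $7 billion.
print(estate_tax_on_charity(8e9))
# A $500 million bequest stays entirely within the exemption.
print(estate_tax_on_charity(5e8))
```

The `max` guard keeps gifts below the ceiling entirely tax-free, mirroring the current exemption for the first $1 billion.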
The economic study that Becker discusses treats gasoline taxes as a form of regulatory taxation, that is, taxation aimed at altering behavior rather than at collecting revenue.
A gasoline tax is an excise tax, and excise taxes are a common method of raising revenue to pay for government.
The best excise tax from a revenue-raising standpoint is one that causes minimum substitution against the taxed good or service, since (in the absence of externalities) such substitution distorts the efficient allocation of resources and reduces the revenues that the tax was supposed to generate.
A regulatory tax aims at substitution because of the externalities caused by the taxed good or service, but complete substitution is rarely achieved (and indeed would usually be inefficient), and so a regulatory tax raises revenue as well as altering behavior.
My guess is that the very high gasoline taxes in Europe, which are primarily responsible for the fact that the price of gasoline in Europe is on average almost twice the U.S.
price, are intended and effective as revenue-raising devices, since those taxes antedate the current concerns with global warming, dependence on oil supplies from hostile or unstable nations, pollution, and acute traffic congestion.
Whether from a revenue standpoint a stiff gasoline tax is an efficient tax, I do not know.
But my guess is that it is.
Since distances are shorter in Europe and public transportation is far more extensive, Europeans can substitute against gasoline more easily than Americans can; nevertheless the very high price of gasoline in Europe, which for years has been higher than U.S. prices are now, has not prevented demand for gasoline from growing, though in part the growth is due to extensive European construction of new non-toll highways and roads.
An excise tax on a single commodity will not generate a great deal of revenue, because of its narrow base, but can be justified as part of a comprehensive system of excise taxes.
It is likely, judging from U.S.
consumers' reaction to the recent increase in the price of gasoline, that a steep hike in the gasoline tax (I am treating the state and federal gasoline taxes as a single tax) would cause a further reduction in demand.
Consumers would drive less (some of them by moving closer to work--and telecommuting would increase) and would switch at a higher rate to vehicles with better gas mileage.
At some point, however, the fall in demand might cause the price of oil to decline.
The reason is that the supply curve for oil is upward-sloping, meaning that a reduction in demand, and hence in the quantity supplied, will reduce price.
I say "might" cause the price of oil to decline because world demand for oil might continue to rise even if U.S.
demand fell, in which event the world price would not decline.
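The price logic of the preceding paragraphs can be illustrated with a toy linear market. All of these numbers are invented for illustration; the point is only that, with an upward-sloping supply curve, shifting the demand curve down lowers the equilibrium price.

```python
def equilibrium(d_intercept, d_slope, s_intercept, s_slope):
    """Equilibrium quantity and price for linear demand and supply curves.

    demand: p = d_intercept - d_slope * q
    supply: p = s_intercept + s_slope * q   (upward-sloping)
    """
    q = (d_intercept - s_intercept) / (d_slope + s_slope)
    p = s_intercept + s_slope * q
    return q, p

q0, p0 = equilibrium(100, 1.0, 10, 0.5)  # before the tax-induced demand shift
q1, p1 = equilibrium(85, 1.0, 10, 0.5)   # U.S. demand shifted down by the tax
print(p0, p1)  # the equilibrium price falls along the supply curve
```

If rising world demand shifted the demand intercept back up at the same time, the equilibrium price would not fall, which is the "might" in the text.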
I wonder, too, whether the recent decline in U.S.
gasoline consumption doesn't represent to some degree an irrational panic reaction.
To take a huge loss on the sale of your SUV in a market that is depressed because so many other people are doing the same thing at the same time is unlikely to be justified by the gains from the improved gas mileage of the car you buy with the modest proceeds of the sale.
Likewise, driving a substantial distance to save a few cents a gallon on the gas you buy is unlikely to be worthwhile.
A recent article suggests that people fixate on the price of gasoline because unlike most regularly purchased items, such as food, gasoline is purchased separately from other items so that its price is not buried in a bill for multiple items.
The economic study that Becker cites finds only modest externalities from gasoline consumption, and this argues for keeping our gasoline taxes low if we think of such taxes as primarily regulatory rather than revenue-raising.
But except for its effect in reducing highway accidents by reducing the amount of driving, a gasoline tax is not an efficient regulatory tax.
Congestion should be taxed directly, since people who travel on uncongested roads do not contribute to congestion.
And the carbon emissions from the burning of fossil fuels (including gasoline) should be taxed, not gasoline, because a tax on gasoline does not create an incentive to produce lower emissions per gallon.
Furthermore, taxing gasoline but not aviation fuel will increase the demand for air transportation, a potent source of both congestion and carbon emissions.
Even the conventional pollutants produced by the internal-combustion engine do not argue strongly for a regulatory gasoline tax, because these pollutants, in the form of smog for example, reduce global warming by blocking sunlight.
And from the standpoint of reducing our dangerous dependence on foreign oil, the proper tax is one on oil, rather than one on just one oil product.
Hence the case for higher gasoline taxes should rest primarily on the efficiency of such taxes as revenue-raising devices.
Even if, as I suspect, they are efficient revenue-raising taxes, the time to raise the gasoline tax is when gasoline prices fall, not now, when consumers are screaming.
Once people adjust to a price of $4.50 per gallon of gasoline, any fall in that price can be offset by an increase in gasoline taxes.
A complication is that a tax on carbon emissions will, depending on how stiff it is, retard any natural, market-driven reduction in the price of gasoline.
A further complication is that the calculation of an optimal carbon-emissions tax is impossible because the costs of global warming and the benefits (in reducing those costs) from a tax on carbon emissions cannot at present be estimated with even minimal confidence.
Professor Robert Sitkoff of Harvard Law School, an expert on trusts and estates, points out two errors in my post and also suggests a further point about trust governance.
He writes that the uniform act is styled the "Uniform Trust Code," not "Act," and that section 408(c) authorizes the court--not the trustee, as stated in the second to last paragraph of the post--to reduce a bequest for the care of an animal.
Limiting the power to reduce the gift to the court is critical especially when the trustee is the remainder beneficiary, as it is easier to reallocate a bequest to oneself than to undertake the distasteful act of killing the animal.
But notice the governance problem posed by a trust for a pet animal.
Normally a trust must be for the benefit of an ascertainable beneficiary.
This rule, which the English call the "beneficiary principle," ensures that there is someone with an economic incentive to police the trustee's conduct.
Contrast the world of charitable trusts, where the absence of such a person leaves supervision (such as it is) in the hands of the distracted (at best) state attorneys general.
For a pet trust, the UTC addresses the enforcement problem by authorizing the donor or the court to name an enforcer.
In functional terms, therefore, the Code treats dogs and other pet animals as if they were children.
Both children and pets are permissible beneficiaries, but both require an alternate enforcement mechanism (albeit one that creates another agency relationship) because neither can bring suit themselves.
Last December, the McKinsey consulting firm published a report which states that despite the much higher per capita spending on health care in the U.S.
compared with peer countries, the longevity of Americans (even if only that of white Americans is considered) is lower than the average of the comparison countries.
This is true, according to McKinsey, even though the prevalence of disease is less in the United States than in those countries (with the principal exception of diabetes, a consequence of Americans' obesity).
Because Americans smoke less than the people in those countries, the prevalence of smoking-related diseases is actually lower in the United States.
The report attributes the higher cost of health care in the United States to higher physician incomes, physicians' control over the number of medical procedures and their ownership of testing and other facilities, which drives up utilization, much higher prices for procedures, higher drug prices, and other factors.
To which should be added the exemption of employer-provided health benefits from employees' income tax and the very high overhead costs of health insurers.
Against this, the Preston-Ho article that Becker summarizes points out that the more extensive screening and aggressive treatment of selected cancers, notably breast cancer and prostate cancer, in the United States result in lower mortality from those cancers than in the peer countries.
That is an important point, but it does not establish the superiority of our health-care system.
To establish (or refute) that superiority would require conducting a cost-benefit analysis.
I have my doubts that such an analysis would vindicate the U.S.
system.
We spend some $2.5 trillion a year on health care.
Our peer countries spend about 60 percent as much per capita on health care, which implies that if we spent at the same per capita level as they do, our annual health-care expenditures would be $1.5 trillion.
The question, therefore, is what benefits we are obtaining for the additional $1 trillion that we are spending. Suppose the additional screening for and treatment of cancer that we do, compared to what the peer countries do, costs $100 billion a year (I have not been able to find an estimate of that cost); that would leave $900 billion in "excess" health-care expenditures to explain.
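The arithmetic in this paragraph can be laid out explicitly. The $2.5 trillion and 60 percent figures come from the text; the extra cancer-care figure is a placeholder assumption ($100 billion a year), since, as noted, no estimate of that cost is available.

```python
us_spending = 2.5e12   # annual U.S. health-care spending (from the text)
peer_ratio = 0.60      # peer countries spend about 60% as much per capita

peer_equivalent = us_spending * peer_ratio  # U.S. spending at peer levels
excess = us_spending - peer_equivalent      # the "extra" spending to explain

extra_cancer_care = 1e11  # assumed extra screening/treatment cost (placeholder)
unexplained = excess - extra_cancer_care

print(f"spending at peer levels: ${peer_equivalent / 1e12:.1f} trillion")
print(f"excess spending:         ${excess / 1e12:.1f} trillion")
print(f"left to explain:         ${unexplained / 1e9:.0f} billion")
```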
A related point is that the causes of the lesser emphasis in the peer countries on cancer screening and treatment have not been explained.
Is it simply a lack of money? Or is it a medical judgment? There is some skepticism in medical circles concerning the overall efficacy both of mammography and of screening for and treatments of prostate cancer.
Treatments for prostate cancer are expensive in dollar terms but more so in side effects, which often are permanent.
Different people, and perhaps different populations, make different tradeoffs among the various factors that affect a decision on screening and treatment.
I also question the Preston-Ho suggestion that the shorter average life span in the United States compared to that in the peer countries should be treated as a completely exogenous factor.
Treating it as such results from an artificial distinction between medical care and public health.
Obesity is not a disease, but it is a serious public health problem.
A rational allocation of health-care resources might require a shift in resources from end-of-life medical treatments to preventing obesity.
Such a shift might increase longevity much more cheaply and effectively than more screening for cancer.
So might greater efforts to reduce the murder rate, improve prenatal and infant medical care, reduce speed limits, reduce unsafe sex, increase liquor and cigarette taxes, improve education, reduce poverty, and prohibit motorcycles.
It might be argued that the additional costs of health care that are created by obesity have an offsetting benefit: they reduce the cost of being obese and so increase the net benefits of heavy eating.
But the higher health costs of the obese are externalized, in part anyway, to the taxpayer (also to the other members of their insurance risk pool, if health insurance companies aren't allowed to discriminate).
I doubt, moreover, that the obese gain more in enjoyment of food than they lose in the health and other costs of being obese.
Much obesity is a result of ignorance (both of calories and of the health effects of obesity), bad habits picked up from parents and peers, negligent parenting, and poor impulse control (i.e., very high discount rates).
And speaking of obesity, its prevalence in the United States undermines studies that find that people attach great value to small improvements in quality and quantity of life.
The fact that so many Americans eat badly, don't exercise, drink (or "text") when they drive, and otherwise endanger their life and health implies, since one can eat well, drive sober, exercise, and so forth at relatively low cost, that people don't value small improvements in quality and quantity of life very much--unless the improvements are paid for by someone else!
Even if we are receiving $1 trillion in benefits from the "extra" $1 trillion that we are paying for medical care, it doesn't follow that the $1 trillion in extra costs isn't too much.
The reason is that we face, in my opinion, a fiscal crisis; something will have to give and maybe it should be some medical care.
The national debt this year will almost equal the Gross Domestic Product (true, the "public" debt--debt owed to entities outside the federal government--is lower than the overall national debt, but the debt owed the social security trust fund, for example, is a real measure of likely future fiscal obligations), and it will continue to soar at least until the economy, and with it federal tax revenues, recover.
But it will probably soar beyond that, because the Bush Administration established a precedent of $500 billion annual federal budget deficits that the Obama Administration will follow and probably exceed.
The health-care reform wending its way through Congress will expand benefits without, it now appears, controlling costs.
It is a misfortune that Congress didn't begin with trying to control costs, and then consider whether the nation can afford to expand benefits.
The energy bill just passed by the House of Representatives and awaiting action in the Senate is extremely long and complex, and cap and trade is only one part of it; but like Becker I will confine my remarks to that part, and thus treat the cap and trade component as if it were a separate bill.
I will also assume that it will be enacted, and in much its present form.
Becker's analysis of the political realities surrounding the bill is persuasive; and he is probably also right that revenues generated by a tax (in lieu of a quota) approach to carbon emissions would be dissipated on other government programs, such as health-care reform, rather than used to pay off some of the nation's mounting public debt.
Moreover, imposing heavy new taxes in the midst of a depression would retard economic recovery.
In principle, a stiff tax on carbon emissions (and other greenhouse gases, but for the moment I'll confine myself to carbon) is, it seems to me, superior to the quota (cap and trade) approach. (I develop this argument in my book Catastrophe: Risk and Response [2004].) The tax is superior not because it would necessarily reduce carbon emissions more than a quota approach would, but because it would stimulate research into ways of solving the global-warming problem technologically.
The higher costs of energy to energy producers would create strong incentives to develop technologies that would solve the problem, including technologies for removing carbon dioxide and other greenhouse gases from the atmosphere, which may well be a more promising approach than trying to induce the substitution of "clean" energy sources for fossil fuels.
(To create incentives for developing technologies for taking carbon dioxide out of the atmosphere, a carbon-emissions tax would have to be complemented by a negative tax--a bounty--for carbon dioxide removed from the atmosphere.)
The incentive effect of the cap and trade bill is weaker.
Each energy producer will receive a quota, and many of the producers will be within their quota or able to meet it at low cost.
Those producers that cannot comply with their quota may be able to purchase the rights of other producers (that is the "trade" component of cap and trade) at modest cost; for the aggregate reduction in carbon emissions required by the bill--a 17 percent reduction over the 2005 level, phased in between 2012 and 2020--is modest.
Moreover, they may be able to buy additional allowances from the federal government (which is holding some allowances in inventory, as it were) at a modest price (depending on what the government decides to charge), or--what looks like a potentially huge loophole--"offsets" to the emissions that they cause.
This means undertaking or financing projects to reduce emissions from other sources, such as afforestation projects designed to increase the absorption of carbon dioxide (trees consume more carbon dioxide than they produce).
The evaluation of such projects will be extremely difficult.
Assuming the "cap" component of the bill will reduce the output of energy generated by burning fossil fuels, energy prices will rise and consumption therefore fall.
The reduction in output may increase the profits of the energy companies, just as when competitors form a cartel to increase prices and profits by reducing output.
True, a quota that reduced the output of an energy producer by the same amount as a tax would create an opportunity cost equal to the tax: that is, the same innovation that would reduce the tax to zero by eliminating the producer's carbon generation would increase the producer's output to its former level.
But there is a difference in the likely efficacy of the two methods, quite apart from the fact that the tax (to the extent not reduced to zero as a result of an innovation), but not the quota, would generate government revenues.
As a matter of practice though not of theory, firms often do not react to opportunity cost as quickly as they do to an out-of-pocket cost.
The out-of-pocket cost shows up right away on the balance sheet (some of it at any rate--some part will be passed forward in the form of a higher price) and is likely to affect the price of the corporation's stock more quickly than a failure to take advantage of an opportunity to eliminate the cost by innovating.
The effect of the cap and trade bill on the amount of carbon dioxide and other greenhouse gases in the atmosphere is thus likely to be slight, and the administrative costs of the program will be great.
Emissions will continue to increase, probably at no lower rate than at present; for the modest effect of the bill will be offset by the growing emissions by China, India, and other rapidly developing countries.
Conceivably the bill will provide some impetus to effective international cooperation to limit global warming, but one cannot be very optimistic.
Hence the importance of a technological fix, which does not require international cooperation to be effective.
If technology were developed for removing carbon dioxide from the atmosphere, Third World countries could emit carbon dioxide to their hearts' content.
We may indeed already have the technological fix, though mysteriously it receives little attention.
Sulphur dioxide, the cause of acid rain and the poster child for cap and trade--because the cap and trade program for sulphur dioxide has been a big success--is the opposite of a greenhouse gas: it cools the atmosphere by reducing the amount of sunlight that reaches the earth's surface.
Injecting relatively small quantities of sulphur dioxide into the atmosphere would offset the effect of atmospheric carbon dioxide in heating the earth's surface.
The opposition of environmentalists to using a pollutant to combat global warming and therefore seeming to approve of pollution, and concern with the bad effects of increasing the amount of sulphur dioxide in the atmosphere (effects that might not be limited to a modest increase in the amount of acid rain), have thus far kept this option from serious consideration in political circles.
Becker raises the interesting question of the implications of option theory (though he does not use the term) for dealing with the global-warming problem.
Since there is considerable uncertainty concerning the gravity of the problem, there is an argument for moving slowly (as in the cap and trade bill) while gathering additional information in an effort to dispel the uncertainty.
But option theory can be run in reverse and be appealed to as a ground for taking early preventive measures.
Suppose that at time t there is some nonquantifiable but nontrivial probability of a disaster of immense proportions at time t + 2.
Suppose further that at time t + 1 we will learn the probability of the disaster at t + 2, but that by then the cost of effective preventive measures will be immensely higher.
Although the tradeoffs are uncertain, it may make sense to incur the much lower cost of preventive measures at time t.
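The reverse-option argument can be made concrete with invented numbers. None of these figures is an estimate; they only exhibit the structure of the comparison between acting at time t and waiting until the probability is revealed at t + 1.

```python
# All values are illustrative, not estimates.
cost_now = 1.0      # cost of preventive measures taken at time t
cost_later = 20.0   # cost of equally effective measures taken at t + 1
p_disaster = 0.10   # probability, learned only at t + 1, that disaster looms

# Strategy A: prevent now, before the probability is known.
expected_cost_act_now = cost_now

# Strategy B: wait and see; pay the much higher late-prevention cost
# only if the news at t + 1 turns out to be bad.
expected_cost_wait = p_disaster * cost_later

print(expected_cost_act_now, expected_cost_wait)
```

With these numbers, acting early is cheaper in expectation even though the disaster is improbable, because the cost of late prevention is so much higher; with a lower probability or a smaller cost gap, waiting would win.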
This is the tendency of current thinking about a future financial crisis such as the one last September that has brought about the depression we find ourselves in.
No one can estimate the probability of a future such crisis, but it is widely agreed that preventive measures should be taken against the possibility of one.
The analogy to the global-warming problem lies in the fact that both economic depression and climate change are disequilibrium events involving adverse feedbacks.
In the case of an economic depression, the adverse feedback is a deflationary spiral, in which falling demand results in falling employment and prices, producing hoarding, which in turn reduces demand further and therefore output and therefore employment, and so on down.
In the case of global warming the possibility of dangerous adverse feedbacks is illustrated by the melting of the arctic tundra in Alaska and Siberia.
Much methane, a potent greenhouse gas though one found in only small amounts in the atmosphere, is trapped in arctic tundra.
As surface temperatures rise, tundra melts, releasing methane into the atmosphere, which in turn causes surface temperatures to rise more, releasing more methane, and so on.
The possibility of serious adverse feedbacks makes both economic depressions and climate change extremely dangerous events, warranting emphasis on preventive measures taken well in advance.
That is an argument for more aggressive measures than contemplated by the cap and trade bill.
But the power of special interests and our soaring national debt make the argument academic.
But may we at least have a decade before the danger of global warming becomes acute? Probably, though no one can say for sure.
Still, a wait and see approach for a decade is certainly a defensible option.
College and university endowments have taken a big hit from the drop in the stock market and other asset markets.
A drop of 20 to 30 percent is common, and there is suspicion that endowments that contain a significant proportion of assets that are not traded on organized markets, such as real estate, have dropped even more, but without "marking to market" their nonfinancial assets.
The effect of a drop in the market value of endowment on a college's or university's finances depends on a variety of factors.
I will give a hypothetical example that may help in understanding the issue.
Suppose X College has an endowment that before the crash had a market value of $1 billion.
In normal times X cashes 5 percent of the endowment each year to contribute to X's budget, 5 percent being a widely used estimate of the average real return of a typical university investment portfolio.
Suppose X's total annual budget is $200 million, with a quarter contributed by the income on the endowment (5 percent of $1 billion = $50 million), a quarter by alumni gifts (apart from gifts intended to become part of the endowment), and a half by tuition.
In the economic downturn, I'm assuming, the endowment has fallen by 30 percent, to $700 million; tuition net of financial aid has dropped by 5 percent (because of inability of parents to pay tuition, as a result of declines in their income and wealth); and alumni gifts have (for the same reason) fallen by 10 percent.
Then College X's income will have declined by 12.5 percent, from $200 million to $175 million, assuming the college continues to treat 5 percent of the (shrunken) endowment as income.
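The College X arithmetic works out as follows, using only the figures from the hypothetical above.

```python
endowment = 1_000_000_000  # pre-crash endowment
budget = 200_000_000       # normal annual budget
payout = 0.05              # share of endowment cashed each year

# Normal-year revenue mix: 1/4 endowment payout, 1/4 alumni gifts, 1/2 tuition.
gifts = 0.25 * budget
tuition = 0.50 * budget

# Downturn: endowment -30%, gifts -10%, net tuition -5%.
new_income = (payout * endowment * 0.70   # $35 million from the shrunken endowment
              + gifts * 0.90              # $45 million in gifts
              + tuition * 0.95)           # $95 million in net tuition

decline = 1 - new_income / budget
print(f"post-crash income: ${new_income / 1e6:.0f} million, a {decline:.1%} decline")
```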
What should X do (other than expelling its suddenly impecunious students and replacing them with affluent ones of less academic promise)? The typical response has been to cut spending by the amount of the drop in income: in my example, that would require X to reduce its spending by 12.5 percent, which it can do in various ways, the usual ones being laying off staff, freezing hiring of faculty and staff, delaying construction, deferring maintenance, reducing staff salaries, and curtailing extracurricular activities.
At first glance, this seems a puzzling response.
Why all this dislocation, instead of either spending capital (that is, taking more than 5 percent out of the endowment) or borrowing? Take borrowing first.
Unless a dollar is worth less to College X this year than it will be next year (or whenever its income returns to its normal level), why should it spend less when it can spend the same, with modest effect on future spending, by borrowing (in my example, $25 million)? Lending and borrowing are methods by which the marginal utility of income can be equalized across time when total income varies from year to year.
Harvard is borrowing more than $2.5 billion, but it is still cutting its budget by 10 percent.
Similarly, although "spending capital" is a familiar example of improvidence, it can make perfectly good sense.
In my example, College X is short only $25 million, and if it takes that out of the endowment, the endowment will fall only from $700 million to $675 million.
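On rough numbers, the two routes are comparable in their effect on future budgets; the 5 percent borrowing rate below is my assumption, not a figure from the example:

```python
shortfall = 25_000_000

# Option 1: draw the shortfall out of the endowment. Future annual income
# falls by the payout rate times the amount withdrawn.
payout_rate = 0.05
income_lost_per_year = payout_rate * shortfall    # $1.25 million per year

# Option 2: borrow the shortfall. Annual interest at a hypothetical 5% rate.
borrowing_rate = 0.05                             # assumed, not from the text
interest_per_year = borrowing_rate * shortfall    # also $1.25 million per year

print(income_lost_per_year, interest_per_year)
```

Either way the annual cost is on the order of $1.25 million, well under 1 percent of even the shrunken $175 million budget, which is what makes deep spending cuts puzzling.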
A problem may be that donors of endowment money may have placed limitations on spending, fearing that a gift that they intended to be perpetual would be eaten up if X dipped into the principal of the gift.
But not all endowment money is restricted in this way and trustees of trust funds usually have authority to dip into principal in the event of an emergency.
However, X may fear that if it does that it will discourage future contributions to the endowment.
Given that last concern, borrowing seems the superior alternative to spending capital, and yet most colleges and universities seem reluctant to use borrowing to fill the entire gap between income and expenditure; otherwise they wouldn't be cutting spending.
Granted, credit remains tight, but the elite universities, at least, are fiscally sound and have alumni in positions of influence in the credit industry.
Maybe the reason the colleges and universities are not borrowing more is that they do not expect their income to recover in the foreseeable future.
Nonelite colleges depend heavily on state aid, which is likely to be meager for a number of years--state budgets are in terrible shape.
And they draw students from the segment of the population that has been hardest hit by the economic downturn.
Elite colleges and universities depend heavily on federal research grants, which may diminish as the government tries desperately to control the rapidly mounting national debt, and on donations by wealthy alumni.
Many of those alumni have suffered a permanent reduction in their wealth, and many others are facing the increasingly likely prospect of having to pay higher income taxes, which will make it more difficult for them to pay their kids' tuition.
Elite universities may have to limit tuition if they want to attract the best students.
A protracted economic crisis has different effects on different industries, because different industries have different vulnerabilities.
The current crisis is speeding newspapers, and print media more broadly, to an early grave, and may yet destroy all three Detroit automobile manufacturers.
It will not do in the colleges and universities, but it may have a lingering adverse effect on them that may explain and justify immediate measures to reduce expenditures.
The U.S. Senate is a very peculiar institution.
It was not when it started.
It was created to be a check on the popularly elected House of Representatives, but also on the President, through its "advise and consent" power--the President's nominees for officials had to be confirmed by the Senate.
Senators were not to be elected, but to be appointed by the state legislatures.
The assumption was that the legislatures would want to appoint a person of distinction to represent the state.
There were only 13 states (though more were envisioned), so there would be only (at the outset) 26 Senators.
Their long terms (six years) would encourage expertise and a greater independence from popular passions than the members of the House of Representatives, elected for only two-year terms, could be expected to have.
The Senate would, in short, be an elite, deliberative, and only indirectly democratic body.
The change in the character of the Senate since the Constitution of 1787 has been profound.
Senators, as a result of a constitutional amendment, are now directly elected, and there are 100 of them.
The combination of the time Senators must spend raising money and tending to constituents, the immensely greater populations of most states compared with the eighteenth century, and the enormously greater size and complexity of the federal government has resulted in Senators' being underspecialized, despite their large staffs.
The filibuster (a creature of Senate rule rather than of the Constitution or a statute) creates a requirement of a supermajority to pass legislation to which there is substantial senatorial opposition, and rules or customs of senatorial "courtesy" give individual Senators considerable blocking power, for example power to delay confirmation hearings.
The result is that the Senate is an extremely inefficient institution compared to the House of Representatives, in which the majority is in firm command.
And because there are so many more House members (435), they have fewer committee assignments and thus can develop greater expertise than Senators; in addition, although they run for office three times as often, they run in much smaller districts and often with little competition and on both accounts don't have to raise as much money in campaign donations as Senators do.
Since the Senate is very large and Senators are directly elected, it is unclear why there is a Senate--that is, why the federal legislature is bicameral.
Bicameralism increases the transaction costs of enacting legislation, which can be good or bad (it is bad in national emergencies, as in the financial crisis of last September). It also increases the cost of repeal, which on balance probably is bad; arbitrarily enhances the political power of sparsely populated states; results in many unprincipled and confusing legislative compromises; and diffuses responsibility for legislation.
It is not clear that on balance we are better off with the bicameral system.
The filibuster is an incomprehensible device of government.
A supermajority rule, whether it is the rule of unanimity in criminal jury trials or the supermajority rules for amending the Constitution, makes sense when the cost of a false positive (convicting an innocent person, or making an unsound amendment to the Constitution) substantially exceeds the cost of a false negative.
But it is hard to see the applicability of that principle to Senate voting, given the other barriers to enacting legislation.
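The asymmetric-cost logic behind supermajority rules can be put numerically; the error rates and costs below are purely illustrative assumptions:

```python
# When a false positive is much costlier than a false negative, a rule that
# trades more false negatives for fewer false positives lowers expected cost.
COST_FP = 10.0   # e.g., convicting an innocent person (illustrative units)
COST_FN = 1.0    # e.g., acquitting a guilty one

def expected_cost(p_fp, p_fn):
    return COST_FP * p_fp + COST_FN * p_fn

majority = expected_cost(p_fp=0.10, p_fn=0.10)       # 1.1
supermajority = expected_cost(p_fp=0.02, p_fn=0.30)  # 0.5

print(majority, supermajority)  # the stricter rule wins under these assumptions
```

Whether ordinary legislation exhibits such an asymmetry, given the other barriers to enactment, is exactly what is in question.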
These reflections on the filibuster are prompted by the Democratic Party's recent achievement of a filibuster-proof (60-40) majority in the Senate.
It might seem that since President Obama and Vice President Biden made such a strenuous effort to convert Senator Arlen Specter from the Republican to the Democratic Party, which put the latter within one vote of a filibuster-proof majority, they must think that having such a majority is a political asset.
I am not so sure they do think that.
It is easy to see why the conversion was in the Democratic Party's interest regardless of its potential effect on the filibuster; by eliminating one of the Republican Party's most prominent moderates, it contributed to the growing marginality of that party, as it becomes increasingly identified with a rather shrunken right wing.
I am not sure the filibuster-proof majority is a boon to the Democratic Party and program.
For if the Administration's legislative program now fails of passage, or is mutilated in the course of passage, it will not be possible to blame an obstructive minority consisting of filibustering Republican windbags.
Furthermore, the new voting alignment increases the power of every Democratic Senator, who by threatening to align himself with the 40 Republicans in a crucial legislative showdown can thwart the Administration's program or, more realistically, insist on what may be costly compensation in the form of an amendment favoring an interest group that is important to his electoral prospects.
Indeed, each of the Democratic Senators now has an incentive to play the hold-out in order to extract concessions, in any situation in which Republicans need only one or a very few Democratic defectors in order to defeat an Administration bill.
The Bush Administration, especially in the person of Vice President Cheney, had an expansive view of presidential authority.
It was articulated as an interpretation of the Constitution, in particular Article II, which is about the presidency.
Truman similarly took an expansive view of presidential authority when he seized the steel industry during the Korean War, but the seizure was overturned by the Supreme Court.
(The Bush Administration had a mixed record in the Supreme Court in defending its expansive view of presidential authority, which centered on antiterrorist policy.) Clinton used administrative regulation to try to get around the Republican Congress with which he had to deal after the 1994 election.
Other Presidents, notably Lincoln, were prepared in emergency circumstances to violate the Constitution, as when Lincoln suspended habeas corpus at the outset of the Civil War; it is reasonably clear that the Constitution authorizes only Congress to suspend habeas corpus.
There is a third type of questionable exercise of presidential power, which consists of publicly demanding that a private firm or industry or other entity conform to the President’s desire, without pretending that the President has the legal authority to require such conformity.
(This overlaps with but is distinct from the concept of the presidential “bully pulpit”—the President’s power to appeal directly to the people for support of his policies.) In April 1962, for example, President Kennedy publicly denounced U.S. Steel and the other major steel companies for announcing a stiff price increase to offset the cost of a collective bargaining agreement that it had signed with the United Steelworkers union.
He backed up his denunciation by threatening an antitrust investigation and made the threat credible, or at least frightening, by having his brother (the Attorney General) dispatch FBI agents to “interview” the top steel executives.
The Administration had encouraged the collective bargaining agreement and was incensed at U.S. Steel’s attempt to offload the cost of it on consumers.
A price increase is a normal response to higher labor costs, but the President considered it a slap in the face—his face.
His public denunciation of the steel industry worked; the industry backed down and the antitrust investigation was called off.
President Obama has used this device of extra-legal presidential intimidation more frequently, probably, than any President.
In the spring of last year he told General Motors to fire its chief executive officer, Rick Wagoner.
He had no authority to do that, and didn’t pretend that he did.
Wagoner went.
Last month the President ordered British Petroleum to put billions of dollars into an escrow account for payment of claims for losses caused by the BP oil leak in the Gulf of Mexico.
He did not pretend to have any legal authority to order this, but BP quickly complied—as it did with the President’s insistence that it cut its dividend in order to be sure of having enough money to pay all the claims that might be made against it and the fines that might be imposed on it.
And the President’s criticisms of Wall Street bonuses may have been decisive in the decision of Goldman Sachs to scale down the bonuses it was intending to award for the firm’s highly profitable 2009.
Should a President use the prestige (one might even call it the “moral authority”) of the office, and his ability to command public attention, to obtain compliance with demands made by him on the business community that are not backed by law? I think not, apart from any distaste one may have for bullying.
It makes business subject to two regulatory regimes.
One is a legal regime, created by Congress and by the regulatory agencies to which Congress delegates a portion of its own constitutional regulatory power.
The other is a kind of “people’s democracy” regime, in which government stirs up public anger to force businesses to comply with extra-legal government demands.
This second regulatory regime operates without rules, and so subjects business to potentially debilitating uncertainty in the sense of a risk that cannot be quantified.
We know from Keynes and other students of uncertainty that a common and often the sensible response to uncertainty is to freeze, in the hope that the uncertainty will dissipate over time, or to take active steps to reduce the uncertainty.
Both are options for business faced with the threat of presidential wrath.
A business can hire less, invest less, and build up its cash balances as a hedge against adversity.
It can also redouble its lobbying and other influence activities in an effort to neutralize or deflect threats of extra-legal regulation.
Neither is a healthy response; the first is downright pernicious, especially in a depression or recession, or the early stages of economic recovery.
Both are responses that the threat of presidential bullying encourages.
Many of the President’s legislative initiatives, in particular the health reform law, the just-enacted financial regulatory reform law, and the credit card law of last year, have increased the uncertainty of the economic environment for business.
These laws really haven’t settled anything; it will take years of regulatory implementation before their full impact can be determined.
But in addition business has to deal with the unpredictable exercise by the President of an uncanalized extra-legal authority to bend business to his wishes.
It is no wonder that the economic recovery appears to be progressing so slowly.
I do not favor extending unemployment benefits.
The bill signed into law by the President last Wednesday would extend unemployment benefits by up to 13 weeks (depending on a state’s unemployment rate) for persons, estimated to number some 2.3 million, who have been unemployed for six months or longer, up to a maximum of 99 weeks (almost two years), at a total cost to the government estimated at $34 billion.
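A back-of-envelope check of the bill's own figures gives the average cost per recipient:

```python
total_cost = 34_000_000_000   # estimated total cost of the extension
recipients = 2_300_000        # estimated number of long-term unemployed covered

per_recipient = total_cost / recipients
print(round(per_recipient))   # roughly $14,783 on average per recipient
```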
I would not object to the government’s giving the states some fraction of $34 billion to enable them to make additional welfare payments to persons experiencing serious economic hardship.
But extending unemployment benefits beyond the standard six-month limit is a bad idea.
As Becker points out, the total amount of the additional benefits is too small to have a significant positive effect on employment; moreover, a transfer payment cannot be assumed to have a positive effect on consumption or investment.
The effect will depend on what the recipient of the transfer does with it.
If he saves it, at least in an inert form such as cash or a federally insured demand deposit that the bank in which the money is deposited uses to buy a Treasury security, then it will not stimulate production and hence employment.
Even if he uses it to buy something, it may turn out that the seller is selling from inventory and doesn’t plan to restock, in which event the effect on production will depend on what the seller does with the money he receives for the sale—maybe he will save it in some inert form!
Far from being effective as stimulus, the extension of unemployment benefits will have two negative effects on employment.
First, it will increase the opportunity cost of the recipient’s rejoining the labor force.
Unemployment benefits are set lower than earnings to reduce the moral hazard that Becker discusses, but the gap between benefits and earnings is narrowed by the costs of work (such as commuting, and any disutility associated with work, such as fatigue and boredom) and by the benefits of household production and of leisure—and those benefits, unlike earnings, are not taxed.
For many unemployed people the gap is so small that, studies show, they do not begin a serious job hunt until their unemployment benefits are about to expire.
So extending or otherwise enhancing unemployment benefits, far from stimulating employment, is likely to reduce employment and so slow the pace of economic recovery.
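The narrowing of the gap described above can be illustrated with hypothetical weekly figures (all of the numbers below are my assumptions, not data from the text):

```python
# Net weekly return to working versus staying on benefits (stylized figures).
wage = 600.0
after_tax_wage = wage * (1 - 0.25)   # earnings are taxed; assume a 25% rate
work_costs = 60.0                    # commuting and similar costs of working

benefit = 300.0                      # weekly unemployment benefit
home_time_value = 80.0               # untaxed value of household production/leisure

gain_from_working = after_tax_wage - work_costs - (benefit + home_time_value)
print(gain_from_working)             # only $10 a week in this sketch
```

With so small an effective gap, delaying the job search until benefits run out is nearly costless.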
Second, extending unemployment benefits has a negative long-term effect on employment.
The longer a person is unemployed, the less likely he is ever to return to the labor force, at least in a job comparable to the one he held before becoming unemployed.
Apart from erosion of skills and of the habit of working, persons unemployed for a long time are unattractive hires because employers are suspicious of these persons’ attachment to or aptitude for work.
Thus the net effect of the just-enacted extension of unemployment benefits is likely to be to reduce employment and output.
And while the amount of the benefits involved in the extension is not large, the negative effect on the economy may be large if a substantial number of the 2.3 million persons who will be receiving the benefits are unemployed for a longer time, as a result, than they would otherwise be.
So the extension has to be defended if at all as a welfare measure.
It is a bad measure because it is not means-tested.
To be counted as unemployed you have to be looking for a job, so one can assume that the unemployed are involuntarily unemployed and so would prefer to work.
But that doesn’t mean that they necessarily face economic hardship as a result of being unemployed.
They may be unemployed because their attachment to the labor force is actually quite weak; they do not try very hard to meet their employer’s expectations and so are quick to be laid off in bad times, or their search for employment is lackluster.
They may have large savings or a high-income spouse.
No doubt the bulk of the long-term unemployed are hardship cases, but it is only hardship cases that make a strong claim on a welfare program.
The fact that not all persons who have been unemployed for a substantial period of time are hardship cases reinforces my concern that extending unemployment benefits will cause disemployment.
The persons who can afford, as it were, to be unemployed are likely to pocket any additional unemployment benefits that they receive and slow their search for a job.
I agree with Becker’s criticisms of the new law (not quite a law yet—it has not been passed by the Senate, but I am guessing it will be, because an ignorant public demands action).
It’s a monstrosity, and a gratuitous one, as there is no urgency about legislating financial regulatory reform.
The financial regulatory agencies have ample, indeed essentially plenary, authority over the financial industry; and because they were asleep at the switch when disaster struck, they are now hyper-alert to prevent a repetition of it.
Indeed, bank examiners have become so fearful of condoning risky banking practices that they are making it difficult for banks to lend to small businesses and consumers and thus are retarding the economic recovery.
The principal factors in the financial and larger economic collapse appear to have been: (1) Incompetent monetary policy under Alan Greenspan and his successor Ben Bernanke, which enabled the housing bubble.
The bubble’s bursting brought down the financial industry, which was heavily invested in both residential and commercial real estate.
(2) The inattention of the Federal Reserve and the Securities and Exchange Commission, which did not understand the changing nature of the banking industry, particularly the rise of “nonbank banks” dependent on short-term, uninsured capital.
Solvency regulation of these banks was overly lax.
(3) The overindebtedness of the American people and government, which has hampered the restoration of credit.
And (4) the failure of the Treasury Department under Henry Paulson, and the Federal Reserve under Bernanke, to rescue Lehman Brothers; they didn’t realize that Lehman’s bankruptcy would trigger a run on the banking industry, causing a global credit freeze.
Obama’s principal economic officials—Bernanke, Timothy Geithner, and Lawrence Summers—were implicated in the regulatory oversights that precipitated the crisis, as were key legislative officials, such as Christopher Dodd and Barney Frank.
None of them wants to shoulder blame for the crisis.
They want to change the subject.
So instead of blaming government, they blame the banking industry.
The industry did take risks that were excessive from an overall social standpoint, but industry will always take the risks that government permits it to take, if the risk taking is highly profitable and losses, should the risks materialize, will fall mainly on others—which is what happened.
Some banks took a hit, but the big ones are doing well.
The government saved them from bankruptcy and has allowed them to borrow from the Federal Reserve at interest rates close to zero, thus enabling them to return to profitability without doing much lending, which banks are reluctant to do during a deep economic downturn, when default risk soars.
But for government officials to say "we blew it—we had the powers we needed to prevent the crash but failed to use them because we were complacent and inattentive" would not be a politically satisfactory response to the economic debacle.
Just as politics requires that the President be seen to “do something” about the oil leak in the Gulf of Mexico, though there is nothing he can do, so politics requires that Congress “do something” to prevent a repetition of the economic disaster, though there is nothing it needs to do.
Much that the 2,300-page “Dodd-Frank Wall Street Reform and Consumer Protection Act” ordains is within the existing powers of the financial regulatory agencies to effectuate.
For example, although the Act creates a Financial Stability Oversight Council (consisting mainly of the chairman of the Fed, the Secretary of the Treasury, and the chairmen of the Federal Deposit Insurance Corporation and the SEC) to advise the President and Congress on systemic risk, these officials don’t need legislation to hold regular meetings if that would be useful.
The reason for creating such a Council is purely political: a governmental reorganization is a favorite response to a governmental failure because it is visible, easy to explain, usually cheap (it involves mainly just moving the boxes in a table of organization), and can be designed in such a way as to avoid ruffling too many interest-group and government-bureaucracy feathers.
It also buys time, since no one expects a reorganization to be effective immediately.
In like vein the Act in several hundred pages directs the creation of a consumer protection bureau to be lodged in the Federal Reserve Board.
The Fed already has such a bureau; it is ineffectual because the Fed cares about the solvency of banks, not the solvency of their customers.
Congress proposes to correct this skew by making the head of the Fed’s consumer protection bureau (renamed the Consumer Financial Protection Bureau—renaming being the least arduous and hence an irresistible form of reorganization) a Presidential appointee—so he can squabble with the Fed’s chairman yet not be fired.
The Federal Trade Commission, which is not protective of banks’ solvency, has extensive experience in protecting consumers, including consumers of financial products (the Commission enforces the Truth in Lending Act, for example), and could be given additional resources to police unfair and deceptive practices in mortgage and other consumer lending.
While subprime lending contributed to the financial crisis, most subprime borrowers were not deceived, abused, intimidated, etc.
Adjustable rate mortgages, strongly encouraged by Alan Greenspan, enabled people who could not afford the down payment on a house to buy a house anyway, gambling that continued housing price increases would give them an equity in the house that they could use to refinance with a conventional 30-year mortgage.
If the gamble failed, they would go back to renting.
The new law increases the amount of equity capital that banks must hold, relative to their total capital, in order to reduce bankruptcy risk.
But the Federal Reserve, in the case of commercial banks, and the SEC, in the case of nonbank banks (or the nonbank subsidiaries of commercial banks, such as Merrill Lynch, now a part of Bank of America), already have the authority to decide how much equity capital, relative to debt, the firms they regulate must hold.
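The mechanics of an equity requirement are simple; the 4 and 6 percent figures below are hypothetical, chosen only to show the leverage effect:

```python
assets = 1_000.0  # a stylized bank balance sheet, in arbitrary units

# At a (hypothetical) 4% equity requirement the bank is levered 25-to-1,
# so a 4% decline in asset values wipes out its equity.
equity_low = 0.04 * assets
leverage_low = assets / equity_low       # 25.0

# Raising required equity to a (hypothetical) 6% cuts leverage to about 16.7,
# giving a larger cushion against the same decline in asset values.
equity_high = 0.06 * assets
leverage_high = assets / equity_high

print(leverage_low, round(leverage_high, 1))   # 25.0 and 16.7
```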
The legislation requires that most credit-default swaps (a form of credit insurance, but also a device for speculating on bond prices and defaults) be traded through clearinghouses and on public exchanges, as publicly traded stocks are.
Uncertainty about the liabilities and solvency of issuers of credit-default swaps did contribute to the financial panic of 2008, but can be dispelled by requiring fuller public disclosure of firms’ off-balance-sheet contingent liabilities, including not only credit-default swaps but also the “structured investment vehicles” in which banks parked their mortgage-backed securities.
Requiring such disclosure is again within the existing authority of the financial regulatory agencies.
It may be argued in defense of the new law’s apparent redundancy that the agencies didn’t use their authority to avert the crisis (which is true), so they must be ordered to use it to avert future crises--which doesn’t follow.
It is a mistake for Congress to instruct regulatory agencies on the details of how to regulate.
The idea behind administrative regulation is that agencies hire experts to deal with technical issues that Congress has neither the competence nor the time to resolve.
Legislation once adopted is difficult to change, and can squeeze a regulatory agency into a straitjacket.
This is a particularly serious concern in the case of finance, an industry that experiences continuous change.
Besides trying to micromanage the regulatory process in some respects, the new law in other provisions takes a different tack and directs the regulatory agencies merely to study particular problems.
This is a waste of ink.
All the senior financial regulators are appointees of the Obama Administration.
If there are areas of financial regulation that would benefit from further study—and there are—the White House can tell its appointees to do so.
There is no need for a congressional prod.
There are little nuggets here and there, such as the abolition of the fainéant Office of Thrift Supervision, but on the whole, so far as I can judge, the new law is a political measure in the worst sense.
British Petroleum's drilling accident in the Gulf of Mexico this past April is the latest of several recent disastrous events for which the country, or the world, was unprepared. Setting aside terrorist attacks, where the element of surprise is part of the plan, that still leaves the Indian Ocean tsunami of 2004, Hurricane Katrina in 2005, the global economic crisis that began in 2008 (and has been aggravated by Greece's recent financial collapse), and the earthquake in Haiti last year.
The reaction to the latest accident has been surprising. Oil spills and underwater drilling accidents are common, and despite the media hype it is too soon to tell whether this one will prove to be the biggest yet. The amount of oil leaked so far is substantially less than the amount spilled or leaked in previous accidents, including at least one in the Gulf of Mexico.
It is also surprising that so much criticism has been directed at the Obama Administration, and indeed against Obama personally. Most of the criticism is absurd—his failure to react emotionally, and his inability to “just plug the hole,” are not personal or professional failings. The Minerals Management Service in the Department of the Interior does seem to have been asleep at the switch, but Obama, unlike his immediate predecessor, cannot be criticized as being hostile to regulation—if anything, he has too much faith in it. MMS is a small and obscure agency far below the horizon of a president’s supervision. No president can eliminate all pockets of incompetence in the vast federal government.
It is possible that the number of recent disasters has created a public sense that something is wrong with government: that it ought to be able to prevent all disasters. But this is an unrealistic expectation. Everything conspires against a government’s being able to protect its people against disasters, whether natural or man-made. A factor that retards prevention of man-made disasters is the rapid and relentless advance of technology. Regulation lags innovation. The Federal Reserve, Treasury Department, and SEC were no more able to keep abreast of advances in financial engineering than MMS was to keep abreast of advances in drilling for oil at very great depths under water. Slack regulation encourages private companies to adopt a high-risk business model. Risk and return tend to be positively correlated, in finance because risky loans command higher interest rates and in underwater drilling because risk abatement is costly. Business is particularly reluctant to take preventive measures against unlikely disasters because they do not pose a serious near-term threat. If there is a 1 percent annual probability of a disastrous drilling accident or financial collapse, the probability that the disaster will occur at some time in the next 10 years is only about 10 percent. Business managers have finite planning horizons, just like politicians.
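The 10 percent figure is the simple sum of the annual probabilities; the exact figure, assuming the years are independent, is slightly lower:

```python
p_annual = 0.01   # the 1% annual probability of disaster assumed in the text
years = 10

# Probability of at least one disaster in a decade, assuming independent years.
p_decade = 1 - (1 - p_annual) ** years
print(round(p_decade, 4))   # 0.0956 — just under the 10% approximation
```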
Of course, if the consequences of a disaster would be very grave, the fact that the risk is low is not a good reason to ignore it. But there is a natural tendency to postpone taking costly preventive action against dangers that are only likely to occur at some uncertain point in the future (“sufficient unto the day is the evil thereof,” as the Bible says), especially if prevention is expensive, if the probability and often the consequences of disaster cannot be estimated with any confidence, if remediation after the fact seems like a feasible alternative to preventing disaster in the first place—and because there is so much else to do in the here and now than worry about remote eventualities. The overcautious business will lose profits, investors, and staff to its bolder competitors; the overcautious regulator will be harassed by politicians pressured by business, labor, and other interest groups.
All the factors that I’ve identified came together to enable the economic crisis, despite abundant warnings from reputable sources, including economists and financial journalists. Risky financial practices were highly profitable, and giving them up would have been costly to financial firms and their executives and shareholders. The Federal Reserve and most academic economists believed incorrectly that in the event of a crash, remedial measures—such as cutting interest rates—would suffice to jump-start the economy. Meanwhile, depending on how they were compensated, many financial executives had a limited horizon; they were not worried about a collapse years down the road because they expected to be securely wealthy by then. Similarly, elected officials have short time horizons for policy; with the risk of a financial collapse believed to be low, and therefore a meltdown unlikely in the immediate future, they had little incentive to push for costly preventive measures, and this in turn discouraged the appointed officials of the Federal Reserve and other regulatory agencies from taking such measures. Finally, with no reliable probability estimate of a financial collapse available, it seemed prudent to wait and see, hoping that with the passage of time at least some of the uncertainty about risks to the economy would dissipate.
The BP oil leak reveals a similar pattern, though not an identical one. One difference is that the companies involved must have known that in the event of an accident on a deepwater rig prompt and effective remedies for an oil leak would be unlikely—meaning that there was no reliable alternative to preventing an accident. But the risk of such an accident could not be quantified, and it was believed to be low because there hadn't been many serious accidents involved in deepwater drilling. (No one knew how low; the claim by BP chief executive Tony Hayward that the chance of such an accident was "one in a million" was simply a shorthand way of saying that the company assumed the risk was very small.)
But other causal factors were similar in the leak and the financial crisis. If deepwater oil drilling had been forbidden or greatly curtailed, the sacrifice of corporate profits and of consumer welfare (which is dependent on low gasoline prices) would have been great. Elected representatives did not want to shut down deepwater drilling over an uncertain risk of a disastrous spill, and this reluctance doubtless influenced the response (or lack of it) of the civil servants who do the regulating.
The horizon of the private actors was foreshortened as well. Stockholders often don't worry about the risks taken by the firms in which they invest, because by holding a diversified portfolio of stocks and other financial assets an investor can largely insulate himself from the risks taken by any particular firm. Managers worry more about the fate of their company because they often are heavily invested in it, in terms both of human capital and of reputation. But they rarely are held personally liable for the debts of the firms they oversee and, more important, the danger to their own livelihood posed by seemingly small risks is not enough to discourage risk-taking.
Two final problems illuminate the nation’s vulnerability to disasters. First, it is very hard for anyone to get credit for preventing a low-probability disaster. Because such a disaster was unlikely to occur, the benefits of taking action beforehand could not be assessed unless the preventive action took the form of a dramatic last-minute save. Had the Federal Reserve raised interest rates in the early 2000s rather than lowering them, it might have averted the financial collapse in 2008 and the ensuing global economic crisis. But we wouldn't have known that. All that people would have seen was a recession brought on by high interest rates. Officials bear the political costs of preventive measures but do not usually reap the rewards.
The second problem is that there are so many risks of disaster that they can't all be addressed without bankrupting the world many times over. In fact, they can’t even be anticipated. In my 2004 book Catastrophe: Risk and Response, I discussed a number of disaster possibilities. Yet I did not consider volcanic eruptions, earthquakes, or financial bubbles, simply because none of those seemed likely to precipitate catastrophes.
It would be nice to be able to draw up a complete list of disaster possibilities, rank them by expected cost, decide how much we want to spend on preventing each one, and proceed down the list until the total cost of prevention equals the total expected cost averted. But that isn't feasible. Many of the probabilities are unknown. The consequences are unknown. The costs of prevention and remediation are unknown. And anyway, governments won't focus on remote possibilities, however ominous in expected-cost terms. A politician who proposed a campaign of preventing asteroid collisions with Earth, for example, would be ridiculed and probably voted out of office.
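The triage just described (rank risks by expected cost, then fund prevention down the list until total spending would exceed the total expected cost averted) can be sketched as a toy calculation. Every probability and cost below is invented purely for illustration; none is an estimate from real data.

```python
# Toy sketch of the disaster triage described in the text.
# Every probability and cost here is invented for illustration.

risks = [
    # (name, annual probability, damage if it occurs, cost of prevention)
    ("financial collapse", 0.04, 10_000, 50),
    ("deepwater oil spill", 0.01, 500, 20),
    ("asteroid collision", 0.000001, 1_000_000, 500),
    ("major earthquake", 0.005, 2_000, 100),
]

# Rank by expected cost (probability times damage), highest first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

funded = []
spent = averted = 0.0
for name, p, damage, prevention in ranked:
    # Stop once total prevention spending would exceed the
    # total expected cost averted.
    if spent + prevention > averted + p * damage:
        break
    spent += prevention
    averted += p * damage
    funded.append(name)
```

Even in this toy version the ranking is only as good as the probability estimates, which, as just noted, are mostly unknown; the asteroid risk drops out not because it is harmless but because its expected cost is tiny relative to its prevention cost.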
It’s become a cliché that the United States has long had a shortage of primary-care physicians (general internists, pediatricians, family physicians, general practitioners) and that this is a factor in the disarray and expense of our health care system.
Yet the very idea of a protracted shortage is an anomaly in a capitalist society.
A temporary imbalance of demand and supply can produce a shortage, but the shortage should not persist: price will rise to ration the existing supply, and this will both dampen demand and stimulate supply, and the combination will erase the shortage.
And this would be true of primary-care physicians in an unregulated market for health care.
If there were a shortage of such physicians, their fees would rise, and this would reduce the demand for their services; at the same time, their incomes would be higher because of the higher fees, and this would induce more medical students to become primary-care physicians.
So the shortage would end.
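The self-correcting mechanism just described can be illustrated with a stylized linear market. The demand and supply schedules and all the numbers below are invented for illustration; they are not estimates of the actual market for primary care.

```python
# Stylized sketch of price erasing a shortage in an unregulated market.
# The linear schedules and all numbers are invented for illustration.

def visits_demanded(fee):
    return 100 - 0.5 * fee   # demand falls as the fee rises

def visits_supplied(fee):
    return 40 + 0.25 * fee   # supply rises as the fee rises

fee = 60.0   # fee initially held below the market-clearing level
initial_shortage = visits_demanded(fee) - visits_supplied(fee)  # queues form

# Let the fee drift upward whenever demand exceeds supply
# (a simple tatonnement process).
for _ in range(10_000):
    fee += 0.01 * (visits_demanded(fee) - visits_supplied(fee))

final_shortage = visits_demanded(fee) - visits_supplied(fee)
# the fee converges to the market-clearing level and the shortage vanishes
```

In this toy market the initial shortage of 15 visits disappears as the fee rises to its clearing level of 80; the persistence of the real-world shortage is what makes the regulated market anomalous.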
There was bound to be a relative decline in primary-care physicians because advances in medical technology increased the value of specialized medicine and so the demand for specialists (surgeons, radiologists, oncologists, cardiologists, urologists, gastroenterologists, neurologists, etc.) relative to primary-care physicians, who are generalists.
But those very advances, by increasing the number of possible treatments and also increasing, in part through better treatments, longevity, increased the demand for primary-care physicians, who “specialize” in diagnosing and treating common ailments, which are the most frequent and become more common as people age.
(So primary-care physicians are both substitutes for, and complements to, specialized physicians.)
Yet instead there seems to be (though reliable statistics are hard to come by) a persisting shortage, now of long standing, of such physicians.
A symptom of a shortage is queuing—it indicates that the market price is not clearing the market.
There is a great deal of “involuntary” queuing in primary-care medicine, in the form of long unwanted delays in getting an appointment with a primary-care physician and refusals of these physicians to take on new patients.
Of course if there is no felt urgency about seeing a doctor, there is no reason not to make appointments well in advance.
But apparently the difficulty of being seen promptly by a primary-care physician drives many patients to hospital emergency rooms, which are very expensive.
While the fees charged by primary-care physicians have increased, their income, as well as prestige, relative to specialists has declined (even after adjustment for the fact that the specialists have to undergo longer residencies before they begin earning real money).
As a result, medical students are increasingly attracted to specialties, especially ones such as dermatology, ophthalmology, and urology, which allow for a more comfortable life style because they do not involve frequent medical emergencies.
The attraction of specialization is particularly great for male medical students, who tend to have higher earnings goals and a greater desire for prestige than women, so women are becoming an increasing percentage of family-care physicians—and many of them work part time because they want to have children, and time to spend with their children.
This reduces the supply of primary medical care.
The concern with the decline of primary-care medicine has become acute because of the recently enacted health care reform law.
By a combination of requiring persons who do not have health insurance to buy it if they can afford to, subsidizing health insurance for people who can’t afford it, and expanding Medicaid eligibility (public health insurance for the poor), the reform is expected within a few years to increase the number of people who have public or private health insurance by more than 30 million, roughly a 20 percent increase in the number of insured.
At the same time, a higher proportion of the population will be elderly.
So the demand for health care will increase very substantially.
Most of that increase could in principle be accommodated by expanding the number of primary-care physicians, especially because a large fraction (no one knows how large) of the currently uninsured population are young and healthy.
Young and healthy people get sick, but mostly with ailments that do not require the care of specialists.
What young and healthy people mainly need is diagnosis of conditions such as high blood pressure and obesity that are health time bombs, and preventive care and counseling, and both the diagnosis and the care and counseling are services that primary-care physicians provide.
The need of children of poor families, and their parents, for pediatric counseling, and of the children themselves for pediatric care, is acute; and most pediatricians are primary-care physicians.
Moreover, the extensive follow-up care that people with serious diseases often require can usually be provided by primary-care physicians.
The health care reform legislation recognizes that the shortage of primary-care physicians will get worse, and that this will reduce the quality and increase the cost of medical care generally, but it doesn’t do much about it.
The main thing it does is increase Medicare reimbursement for primary-physician care by 10 percent.
The rest (so far as I can judge from the immensely complex legislation and its as yet incomplete regulatory implementation) is subsidizing gimmicks, such as the “medical home,” which is a euphemism for delegating some of the primary care now provided by doctors to nurses.
The underlying causes of the shortage of primary-care physicians are licensure and third-party payment.
I do not think it is a mistake to require that physicians be licensed, rather than allowing anyone to provide medical care, as we allow anyone to dig ditches, wait on tables, or for that matter start a new online business.
Patients are in a poor position to evaluate the quality of medical care, and without licensure of physicians would doubtless be highly vulnerable to quacks.
But licensure inevitably reduces supply.
Primary-care physicians have to spend four years in medical school and then three years as a resident paid little more than a subsistence wage.
The number of medical schools is limited, as is the number of residency programs; it has been argued (whether rightly or wrongly I don’t know) that specialists control the approval process for residency programs and use that control to throttle the expansion of primary-care medicine by limiting the number of new residency programs in primary-care medicine.
Many U.S. physicians are foreigners trained abroad, which is fine, but we make them jump through hoops to be licensed to practice medicine in the United States; the hoops may be justified to ensure that foreign-trained physicians are competent, but they make it difficult to make up a physician shortage by recruiting foreign-trained physicians.
Third-party payment is a pervasive feature of American medicine.
Why anyone should want health insurance other than “major medical”—that is, insurance against catastrophic medical bills—is a great mystery, as is the fact that Medicare subsidizes routine health care of upper-middle-class people.
Since disease and injury tend to be unpredictable, health insurance smooths costs over time, which is efficient, but a person could achieve that smoothing simply by saving the money that he now pays in health-insurance premiums and investing it to create a fund out of which to pay future health expenses as they occur.
But we are stuck with third-party payment, and it systematically favors specialists over primary-care physicians, because specialists tend to provide discrete procedures, which are easier for the insurers, whether they are private insurance companies or government, to cost.
The care provided by primary-care physicians has, to an extent, an elastic and discretionary quality.
If a hypochondriac constantly pesters his primary-care physician with imaginary symptoms, how much of the physician’s time dealing with the pest should be compensated by insurance and at what rate? How long should an annual physical exam take? How much time should the physician spend urging his patients to give up smoking? Wear car seatbelts? Avoid fast foods?
I wish I had some answers, but I don’t, given the fundamental structure of the American health care system, which is unlikely to change in the foreseeable future.
If the shortage of primary-care physicians persists, queues will lengthen, and perhaps care will be rationed in other ways as well.
In 2008 there were believed to be 7.03 million unauthorized Mexican immigrants in the United States, and by 2010 that number had fallen to 6.64 million, a drop of 390,000, or about 5.5 percent.
(I am dubious that these statistics are accurate, but doubtless there has been a significant decline in illegal Mexican immigration.) There are three possible reasons for the decline: reduction in employment opportunities in the United States, as a consequence of the severe economic downturn, involving heavy unemployment, that began with the financial crisis of September 2008; increased efforts at border control and apprehension of illegal immigrants to prevent unauthorized immigration, especially from Mexico (which is believed to be the source of nearly 60 percent of all illegal immigration to the United States); and improved employment opportunities in Mexico.
The first two factors are probably the most important—especially the first.
Many unauthorized Mexican immigrants were employed in the construction industry, and the economic downturn caused a tremendous surge of unemployment in that industry—a layoff of something like 600,000 construction workers, in all.
But the second factor has played a role.
There has been toughened border enforcement, which has pushed up smugglers’ fees, which are paid for by the unauthorized immigrants; and so an increase in those fees discourages immigration.
I attach less weight to the third factor—Mexico’s improved economic situation—because the improvement of the Mexican economy has been a gradual process, beginning in the 1990s.
Mexico’s per capita GDP is still a third lower than that of the United States.
Approximately 18 percent of Mexicans live in extreme poverty, and another 47 percent in less extreme poverty, so that a total of 65 percent of the Mexican population is poor.
And Mexico has a total population of 113 million.
So there is an enormous pool of potential immigrants to the United States, where wages are much higher than in Mexico.
The growing hostility to Mexican immigrants (and to immigrants in general) is understandable, though ill informed.
It mainly results from the belief that immigrants “take away” jobs from Americans; a more precise formulation would be that an increase in the supply of labor, if more than proportionate to an increase in demand, will push down wages; and American workers who refuse to accept a reduction in their wages will lose their jobs to immigrants.
In addition, immigrants place pressure on U.S. public services, such as public schools and emergency rooms, though unlawful immigrants are not entitled to Medicare or social security, or to many other public benefits.
Unemployment in the United States is very high, and rising, but it is doubtful that restricting immigration would have a positive effect.
Immigrant workers spend much of their income in the United States (some of it, however, they remit to relatives in their country of origin), and so increase demand for goods and services, and indirectly employment; and by reducing wage levels they reduce the cost of goods and services, a reduction that also stimulates consumption and hence production and employment, although the net effect on the economy must be small—there are not that many illegal Mexican immigrants.
Weakness in consumption is a major factor in the nation’s current economic weakness, however, and there is no good reason to weaken it further by expelling or preventing entry of worker-consumers.
And efforts to curtail illegal immigration are costly, without doing much for employment.
Probably, therefore, restricting immigration is not a sensible policy from the standpoint of stimulating the U.S. economy.
A better policy would be to increase the lawful Mexican immigration quota, since lawful immigrants are likely to be more productive workers with better educational backgrounds and to place less strain on U.S. public services.
I have a slightly kindlier view of affirmative action than Becker does, though I agree with most of his points: specifically, that affirmative action harms the ablest beneficiaries of it by casting doubt on their ability; that it places many of them in situations in which they are bound to fail or cluster at the bottom because they have been admitted to a school or given a job that is above their level; and that remedial education is unlikely to be effective after early childhood.
But there are three areas in which preferences that are often (though not necessarily correctly) described as affirmative action seem to me defensible:
1. Situations in which race, sex, ethnicity, etc. is a legitimate job qualification.
An obvious case is casting a black man as Othello and a white woman as Desdemona in a performance of Othello.
A subtler case is making sure that in a prison or jail the vast majority of whose inmates are black there are some black correctional officers in supervisory positions; this is important for alleviating racial tensions.
(See my opinion in Wittmer v. Peters, 87 F.3d 916 (7th Cir. 1996).) I would go further and say that if an all-male prison wanted to hire just male guards, or an all-female prison just female guards, I would permit this, although the courts disagree.
2. Situations in which a private firm or other private entity practices affirmative action in response to customer preference or to ward off adverse legal action.
A firm that sells primarily to blacks might want to give a preference to black applicants for sales positions or insist that its advertising agencies include black models in their advertisements for the firm's products.
Similarly, if a firm fears that it will be sued for discriminating against blacks, it should be allowed to favor blacks in hiring in order to reduce its legal risks.
These are situations in which affirmative action is prima facie efficient because it is being adopted voluntarily by a private, competitive institution, presumably as a profit-maximizing, cost-minimizing response to competitive pressures.
(Of course not all private institutions are commercial, but the noncommercial ones, universities for example, do face competitive pressures and do need to economize.) I don't think government should interfere with such choices.
This by the way is not to say that firms controlled by blacks, say, should be permitted to discriminate against whites.
That would not be affirmative action, which refers instead to discrimination against the group that controls the discriminator.
3. Situations in which the only beneficiaries of affirmative action are black.
Most of my examples of affirmative action have involved blacks rather than women, Hispanics, etc.
My reason for that choice is that realism requires recognition that blacks are, for whatever reason or combination of reasons, in by far the worst position, with respect to health, prosperity, educational achievement, intermarriage, and other measures of success and integration, of any major group in American society.
Women, Jews, Asians, and other traditional victims of discrimination or newcomers or outsiders have all advanced to positions of essential parity with male WASPs, but blacks have lagged badly in relative terms.
A situation in which 12 percent of the population is lagging badly behind the rest of the population is not healthy.
I don't think affirmative action for blacks does much to promote their integration and sense of belonging in this society, but it probably does a little (notwithstanding Becker's correct point about the negative effect of affirmative action on self-esteem).
Without affirmative action, elite educational institutions and other elite institutions (probably including the officer corps of the military) would have virtually no blacks, and this would underscore the gulf in achievement in a dramatic way that would be potentially harmful to social peace.
Colin Powell was a beneficiary of affirmative action and his well-deserved public success and prominence is probably good for black morale.
Category 3 is perhaps the only "real" affirmative-action category.
Such a classification would be consistent with Becker's treatment.
What is unfortunate is that although the only real case for affirmative action (outside my first two categories) concerns blacks, naturally other groups, seeing the potential benefits of discrimination in their favor, have climbed on the affirmative action bandwagon, often with ludicrous results, notably in the case of white, well-to-do, accentless, fully integrated Americans of Hispanic ancestry.
I well remember a conference I attended at which a law professor named Delgado advocated affirmative action for persons of color, among whom he counted himself.
However, as is often true of persons of Spanish ancestry, his black hair crowned a very pale face.
He was in fact the whitest person in the room, had no foreign accent, and by his presence had in effect converted "people of color" into a purely political category.
So I would like to see affirmative action confined and diminished, but I would not press for its abolition.
I especially would not favor judicial abolition in the name of equal protection of the laws.
Not that a powerful legal case can't be made; but it seems odd that the courts should strain to intervene on the side of   majority   rights, since the majority should be able to protect its interests in the democratic political process, without having to run to the courts.
In addition, it is doubtful that the courts could effectuate a ban on affirmative action.
As Becker points out, states have tried and failed, since such a ban can be circumvented in a variety of ways, such as by reducing or eliminating the weight given to meritocratic criteria in selecting for colleges or jobs.
One final point.
Given my category 2 above, the problematic area of affirmative action is largely confined to government employment, public universities, public contracting, and other government activities.
In many of those activities, personnel practices are not meritocratic, but political, nepotistic, or simply slack and inefficient because of lack of economic incentives.
If criteria are not meritocratic to begin with, injecting affirmative action may not reduce efficiency.
I infer that the aggregate social costs of affirmative action probably are not great--though neither are the benefits.
The word "corruption" is extraordinarily vague, and, in part for that reason, ubiquitous.
Charges of corruption are everywhere.
Notably, economically booming China is nevertheless said to be seething because of the corruption of local officials, and Chicago's mayor is being questioned by federal investigators about corruption in his otherwise very successful administration.
The problem with the word is twofold.
First, identical practices sometimes are "corruption," sometimes not.
Second, despite the pejorative connotations of the word, the normative significance of corruption is not always clear.
Fifty years ago it was common in nightclubs in New York (maybe it still is--I haven't been in a nightclub in New York in 48 years!) to have to give the headwaiter a tip in order to get a table, even if there were many empty tables.
This was a form of bribery, but accepted as proper.
Management knew about the practice and condoned it.
The headwaiters were doubtless paid less than if they had been forbidden to accept these bribes, but that of course was not a clear gain to the nightclub; the nightclub presumably charged customers less because the full cost of the entertainment to them was greater by the amount of the bribe.
So far, a wash; but if the bribes induced the headwaiter to be friendlier and more helpful to the clientele, the nightclub was better off.
Likewise with tips for waiters and waitresses, despite the possibility that a generous tipper will get better service at the expense of other customers, to the harm of the latter.
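The "wash" argument can be made concrete with invented numbers; the wage, cover charge, and tip below are hypothetical, chosen only to show the offsetting adjustments.

```python
# Invented numbers illustrating the "wash": condoned tipping lowers the
# headwaiter's wage and (via competition) the cover charge by the same
# amount, leaving both the customer and the club where they started.

bribe = 5                       # tip paid to the headwaiter per party

wage_no_bribe = 30              # headwaiter's wage if tipping were forbidden
cover_no_bribe = 50             # cover charge if tipping were forbidden

# With tipping condoned, the wage falls by the tip income, and
# competition among clubs passes the saving on as a lower cover charge.
wage_with_bribe = wage_no_bribe - bribe
cover_with_bribe = cover_no_bribe - bribe

customer_cost = cover_with_bribe + bribe          # customer's all-in cost
club_margin = cover_with_bribe - wage_with_bribe  # club's net per party
# customer_cost equals the old cover charge; the club's margin is unchanged
```

Any gain then comes only from the incentive effect: if the tip makes the headwaiter friendlier and more helpful, the club comes out ahead of the no-tipping regime.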
So what is wrong with bribing public officials to obtain public services, provided the practice is known and wages are adjusted accordingly? In effect, bribes shift the financing of public services from taxes to a combination of taxes and fees for service.
By injecting a market element into public services, bribes can actually improve efficiency when used to get around rigid or inefficient rules.
To recur to the 1950s in New York, municipal ordinances forbade contractors doing construction work to obstruct sidewalks and streets, but often it was impossible to do such work without creating at least minor obstruction and so contractors bribed police to look the other way.
The net effect on social welfare was probably positive.
But there are several problems that together make bribery of public officials on balance inefficient--and thus "corrupt" in an unequivocally bad sense.
First, not all rules are inefficient, and bribes are bad from an economic standpoint when they subvert an efficient rule, as when a building inspector accepts a bribe to overlook a serious fire hazard.
Second, without competition among bribe takers (in the New York nightclubs this was secured by the competition among the nightclubs themselves, which limited the amount of bribes that management permitted its headwaiters to receive), the bribe will exceed the cost of the public service being purchased with it, distorting the allocation of resources (though the higher taxes that would be required to compensate public employees who did not have bribe income would also have distortionary effects--all feasible taxes do).
Third, delay and uncertainty are created when multiple officials must be bribed.
And fourth, a bribery culture reduces pressure to repeal inefficient laws--in fact, it creates in public officials a vested interest in preserving such laws.
In that respect, it is a protection racket.
Since public corruption seems on balance inefficient, the question arises why it is so common.
The answer is that corruption flourishes where the economy is heavily regulated but the legal framework is weak.
The more heavily regulated the economy, the more irksome restrictions there are that will create a demand for methods of avoiding compliance with them, and bribery of the enforcers of the restrictions is one such method.
The weaker the legal framework, the more difficult it will be for the government to prevent bribery, a classic "victimless" crime because bribery is a voluntary transaction; and it requires a sophisticated legal machinery to detect and punish such crimes.
There is another and subtler effect of the legal framework.
Unless there is an effective machinery for the impartial enforcement of contracts, people will be reluctant to do business with strangers.
Economic activity will tend rather to be organized on the basis of familial and other personal relationships.
In such a culture it will seem perfectly natural for public officials to exhibit favoritism toward friends and relatives, including persons who purchase their friendship with a generous bribe.
Nepotism, clientelism, and bribery become substitutes for contract when the enforcement of contracts is undependable.
In contrast, corruption should be rare in a free-market system with courts that enforce contracts honestly and dependably.
So how to explain public corruption in America's big cities today? It seems less common than a half century ago, and perhaps that is because there is somewhat less economic regulation and also a somewhat greater professionalism in civil services, police, and the judiciary.
Another factor is that most big cities have Democratic mayors, and the Presidency has been in Republican hands for almost two-thirds of the period since 1969; Republican attorneys-general are more likely to investigate and prosecute public corruption in Democratic-controlled cities than Democratic attorneys-general are.
Becker discusses other causes of the decline in U.S. corruption in his comment.
The persistence of corruption in some of our big cities may reflect the presence of immigrant communities in these cities, in which barter and other forms of reciprocal dealing based on (and constructing) relations of trust, extended family relationships, clan ties, and the like continue to organize significant economic activity and make it natural to think of public officials as "selling" public services to their friends and relatives.
The problem of corruption underscores the importance of the legal framework to economic development.
An honest, incorruptible police, criminal law enforcement machinery, and judiciary can increase economic efficiency by greatly reducing the amount of corruption (as well as in other ways), though it is equally important to have a commitment to free markets and a workable legislative and regulatory machinery to prevent economic activity from becoming encrusted with inefficient restrictions.
The principal criticism of my posting is that "merit," understood as doing well on exams (especially timed exams), is too narrow a basis for admission to college or law school and that affirmative action is a way of rectifying the mistakes caused by the overemphasis on that too-narrow criterion.
My view is that reference to "merit" and "meritocracy" is misleading.
A person is not "better" because he's a better exam-taker; for that matter, he's not "better," more "meritorious," because he has a higher IQ than someone else.
The issue regarding standardized testing is whether it's a good predictor of college or graduate school performance.
If it is, then people who do badly on the test, but are admitted anyway because of affirmative action (or because they're good athletes), are going to do poorly in college or graduate school and cluster at the bottom of the class.
Now maybe though they cluster at the bottom of the class, they do well professionally because grades are not a good predictor of performance in the "real world." So the argument would be that blacks from poor families do badly on the SAT and in college and law school but nevertheless do well professionally, because SATs and LSATs and the rest of the educational testing apparatus are poor predictors of professional success.
Now it would be odd if race were the explanatory variable here.
That is, if you took two people otherwise identical in upbringing, parents' occupations, etc., but one happened to be white and one black, on what theory would standardized tests underpredict the black's professional success relative to the white's? Presumably the relevant variable in explaining black-white test differences would be not race as such but such factors as parents' education, household income, early schooling, etc.--factors that might well be correlated with race, but that would not be identical with race.
If parental income or some other such variable is thought to cause students who have in fact great professional talent and prospects to underperform in standard tests, then that would be an argument not for affirmative action on the basis of race, sex, ethnicity, etc., but for affirmative action on the basis of parental income or the other nonracial factor that was causing the difference in test scores.
This is resisted because the colleges don't care about students from poor families, etc.; they just want a certain percentage of blacks.
There is a special factor at work in law, the profession with which I'm most familiar, that casts particular doubt on the wisdom of racial affirmative action.
That is the fact that to become a practicing lawyer, you have to pass the bar exam--another standardized timed test but one for which you can't substitute a take-home exam or a term paper.
The black pass rate on the bar exam is shockingly low--something like 15 percent, compared to more than 60 percent for nonblack exam takers.
I cannot see the sense of bending law school admissions standards in favor of applicants who are unlikely to be able to enter the profession after spending $100,000 or more for three years of law school tuition.
A number of comments mentioned "diversity" as a valid ground for affirmative action in admissions to college and law school.
I agree that one benefit of college education is meeting a more diverse group of young people than one might have encountered growing up in one's particular community, which might be a lily-white suburb.
But the relevant diversity is not in the color of one's skin, but in attributes which, to repeat, while they may be correlated with race, are not identical to it.
There are black people who really aren't different from white people, and it is unclear how their presence increases the diversity of a student body.
Several comments from the right side of the political spectrum of our readership accused me of having a double standard--favoring or at least being willing to tolerate some discrimination against whites (i.e., some affirmative action) but not willing to tolerate discrimination against blacks.
I plead guilty to the double standard.
I do not think discrimination against blacks by whites, and discrimination against whites by whites, are symmetrical phenomena.
A dominant group may discriminate some against its own members--that's what affirmative action is--but it's not going to go too far, whereas discrimination by the majority against a minority is likely to be far worse and more injurious.
I am always interested and pleased when the comments go in unexpected directions, focusing on what I had thought distinctly peripheral aspects of my posting.
I had said that I thought a theatrical producer should be permitted to refuse to hire a white actor to play Othello, or a black actress to play Desdemona.
Several comments pointed out that there have been theatrical productions in which a white played a black, a woman a man, etc., and they noted that in Shakespeare's time, because women weren't permitted to appear on stage at all, female roles were played by adolescent boys--and since there were virtually no blacks in England and almost certainly no black actors, Othello was played by a white.
So a white male was playing opposite another white male, so why shouldn't that be permitted today? Well, certainly it should be permitted, but the question is whether the producer should be deprived of choice in the matter.
It is further true, as one comment points out, that while sex can be a "bona fide occupational qualification" under federal antidiscrimination law--and so a producer can insist that Desdemona be played by a woman, whatever Shakespeare might have thought of that--there is no BFOQ for race.
I consider this rule of law mistaken.
It seems to me that, at least if one is speaking of producers in the private sector, "discrimination" in the form of matching an actor's race, etc. to that of the character he or she is playing should be permitted.
The impact on vocational opportunities for members of racial and other minorities is likely to be small.
Of course there are not as many black characters in drama as there are white ones, but then there are not as many blacks in this country as there are whites.
And, on the other side, matching the physical appearance of the actor with that of the character he's playing is important to an audience's understanding and enjoyment of a play.
In my view, that benefit, together with the principle that producers and other creative persons should have maximum freedom from government restrictions in deciding what to present to their audience, outweighs the cost to those minority actors who may occasionally lose an opportunity to play someone of a different race.
Indeed, it seems to me that freedom of expression requires no less.
It is not easy to respond to 160 comments; I can only discuss a handful, concentrating on the recurrent ones.
But I must begin with an apology to sports fans for confusing "Texas Cowboys" with "Dallas Cowboys." The monument on the Texas State Capitol grounds is to the state's cowboys, not to the football team.
My profound ignorance of sports stands exposed, and in some quarters my Americanism will now be questioned.
While I am being defensive, let me respond to the comment about "one of the quirkier Posner opinions of all time. Have you ever wondered what water skiing in Hawaii had to do with the establishment clause?" The opinion, Metzl v. Leininger, 57 F.3d 618 (7th Cir. 1995), is actually quite straightforward.
The issue was whether Illinois had violated the establishment clause by making Good Friday a public school holiday.
Christmas is of course a public school holiday, so the issue narrowed to whether there is a difference.
The difference, which is important to the Supreme Court and so has to be to me as a judge whatever my personal views, is that Christmas has become so far secularized that making it a holiday is not widely interpreted as signifying governmental endorsement of religion or Christianity.
Good Friday, it turns out, has not become secularized--except maybe in Hawaii, where it kicks off a spring holiday weekend.
I emphasized in my posting that I was going to discuss the economics of the establishment clause, not the legalities, so I'm a little surprised at Professor Rubin's accusing me of trying to impose an economic understanding on the clause.
There is a lively debate in the comments over the issue of "incorporation"--was the due process clause of the Fourteenth Amendment (ratified in 1868) intended to incorporate the Bill of Rights in the sense of making them applicable to the states? The historical evidence is conflicting, but the proposition seems so implausible on its face that I would require a much clearer showing of the historical understanding to be convinced.
There is, first, the textual objection: the Bill of Rights itself includes a due process clause, so what literal sense can it make to say that the due process clause of the Fourteenth Amendment incorporated that clause and everything else in the Bill of Rights? But worse is the assumption that everything the Bill of Rights forbids Congress to do also makes sense to prohibit every state, city, and village from doing.
That is so mechanical, so insensitive to different responsibilities of different levels of government.
But all that is water under the bridge, given that only Thomas among the current Justices questions the incorporation doctrine.
The bulk of the comments concern two issues that were remarked on only in passing in my posting.
One of these concerns my allegedly "snide" reference to "Intelligent Design," the anti-evolution theory now gaining traction in the nation's schools (there was an article on this in the New York Times this morning).
I said it was a thinly disguised version of Biblical inerrancy.
That statement was inaccurate because, as was pointed out in one of the comments, not all adherents to ID believe that God created the universe, man, etc. in the manner described in the Bible.
However, it clearly is a religious conception, because "intelligent" design implies a designer, and what would you call such a designer but "God"? But even if it is not a form of fundamentalist religion, it doesn't, in my view, belong in school.
It is one thing to note problems with Darwinism, and to discuss the interesting question whether any theory can be truly scientific if it cannot be supported or falsified by actual observations, but it is another to teach, as the IDers want to do, that there are these competing theories, evolution and ID.
ID does not have the structure of a scientific theory, there is no evidence for it, and there is no way to obtain evidence for (or for that matter against) it.
The other issue, peripheral to my posting but obviously not to the commenters, concerns school vouchers.
There were many interesting comments, and it is an issue to which Becker and I should probably devote a future posting.
The place to begin in thinking about the issue is with the difference between the state's mandating and subsidizing a service, on the one hand, and providing the service itself, on the other hand.
The government can require that children be vaccinated and pay for their vaccination without manufacturing vaccines.
Similarly, it can require that children attend school and pay for their schooling without operating schools, something it doesn't seem to be particularly good at; politics and teachers' unions drive up costs and drive down quality.
The government would have to impose minimum standards on all voucher-supported schools, as it does now on private schools, but that is different from ownership and control.
The government used to regulate railroad rates, but, unlike the practice in many other countries, it did not own the railroads.
A voucher system is a first step toward privatizing education.
Means-tested voucher entitlements would enable parents to select a school even if they had no private means.
Many rich people would continue to send their kids to fancier schools than vouchers would pay for, but that would be no different than under the current system of public and private education.
To return to the subject of my posting, I think it would be a great mistake to confine vouchers to secular schools, whether public or private.
Catholic schools in this country have a good record and provide a type of education that is highly suitable for some children.
Most of the education provided in Catholic schools is secular, and the amount of the voucher could be limited to the secular component.
In two much-anticipated decisions rendered by the Supreme Court just before it recessed for the summer--Van Orden v. Perry and McCreary County v. American Civil Liberties Union of Kentucky--the Court was asked to decide whether the display of the Ten Commandments on public property is a forbidden "establishment" of religion.
The First Amendment forbids Congress to make any law respecting an establishment of religion--that is, it may not create an established church, such as the Church of England, or the Roman Catholic Church in Italy.
The displays at issue in the Court's two cases were on state, not federal, property; but the Fourteenth Amendment has been interpreted, questionably but conclusively, to make most of the provisions of the Bill of Rights, including the establishment clause of the First Amendment, applicable to state and local action.
In the Van Orden case, the Ten Commandments were inscribed on a monument on the grounds of the Texas State Capitol.
The grounds were sprinkled with monuments of diverse character, including monuments dedicated to the Texas Rangers, the Texas Cowboys (the football team), the Heroes of the Alamo, Volunteer Firemen, and Confederate Veterans.
The Ten Commandments monument had been given to the state 40 years earlier by the Fraternal Order of Eagles, at the suggestion of Cecil B. DeMille, who was promoting his movie The Ten Commandments; and during this long interval, no one had complained about the monument until Van Orden.
The Court held that the display did not violate the establishment clause.
But in the other case, McCreary, where the Ten Commandments were displayed in a Kentucky courthouse, a differently composed majority of the Court held that the display did violate the clause.
I want to begin by considering, from the ground up as it were, as a speculative exercise unrelated to the legalities, why a legislature should be forbidden to establish a church.
That is, suppose a large majority of citizens belong to a particular sect which they naturally believe has the truest understanding of religion.
What more natural than that they should try to embody their belief in law by pressing for legislation that will "establish" their sect as the "official" religion of the state or nation by imposing a tax to finance it? Of course the people who do not belong to the sect will not want to pay such a tax, but many government expenditures offend numerous citizens--think of all the people who oppose the war in Iraq; they nevertheless are taxed to support it.
It might be argued that being forced to support a religion one doesn't believe in is peculiarly offensive.
But, if so, a law to establish that religion would be unlikely to be enacted.
Minorities with strong feelings about an issue regularly prevail in legislative battles--think of all the laws that are passed forbidding discrimination against various minorities.
In fact, there is such religious pluralism in the United States that probably in no state except Utah could a law be passed establishing a particular religious sect even if the establishment clause had never been held applicable to the states.
Almost all establishment-clause cases involve efforts to "establish" religion in general (versus nonbelief), monotheism, Judeo-Christian monotheism, or Christianity.
These efforts take such forms as making time for voluntary prayer in public schools, encouraging public school instruction in "intelligent design," providing public funds for secular education in religious (mainly Catholic) schools or for the display of the creche during Christmas, or, as in the two recent cases, displaying religious materials on public property, usually without cost to the public--it is easy enough to obtain donations of such materials, as in the case of the Ten Commandments monument given Texas by the Fraternal Order of Eagles at the suggestion of DeMille.
Some of these efforts are held to violate the establishment clause, others not; there is no discernible pattern or crisp legal standard.
From a purely economic standpoint, it seems to me that the case for permitting such "establishments" should turn on whether the likely effect is merely to offset some subsidy for secular activities.
Obviously the fact that the public schools are "free" to the parents, being supported out of taxes, places religious and other private schools at an arbitrary disadvantage, so there is nothing wrong (remember I am speaking only of the economics of the question) with providing a comparable subsidy so that parental choice will not be distorted.
The subsidy of secular activities is more subtle in the case of public display, but it is nonetheless present.
Suppose that at Christmas time the public grounds display only secular aspects of Christmas, such as Santa Claus, and refuse to display a creche; then religious Christians are denied the same free opportunity to advertise, and enjoy seeing, their version of Christmas.
Similarly, suppose the Texas State Capitol welcomed a large variety of secular displays (as indeed it does) on its capacious grounds, but refused to permit a religious display; this would give a cost advantage to secular displays because they would be free both to the sponsors and to the viewers.
Some people are offended by any religious display; but given the nation's religiosity, probably more people are offended by the banning of all religious displays from public property, which they interpret as sending a message of hostility to religion in general or to the dominant Judeo-Christian monotheism in particular.
The case against requiring the teaching of "intelligent design," a thinly disguised version of Biblical inerrancy, is stronger because it confuses religion with science and weakens Americans' already dangerously weak scientific understanding.
An individual is entitled to reject science, but he should be taught it, and the teaching of science is impaired if religious dogma is treated as a form of science.
If secular activities are not being subsidized, I don't think there is a strong economic case for religious subsidies any more than for other private goods.
It is possible to argue, however, that subsidizing displays of the Ten Commandments does create value in an uncontroversial sense, because they are primarily understood nowadays as an ethical rather than religious statement.
The government is permitted to "propagandize" on behalf of uncontroversial moral principles, and the Ten Commandments contain arresting statements of some of those principles, such as "Thou shalt not kill." The complication is that some of the commandments are sectarian, such as the injunction to worship only one God.
Although atheists are in the forefront of litigation against alleged establishments of religion, there is a powerful argument first made by David Hume and seemingly illustrated by the state of religion in Western Europe that an established church weakens rather than strengthens religious belief, and, a closely related point, that rather than fomenting religious strife (a concern of the framers of the Constitution) it induces religious apathy.
Hume thought that religious officials paid by government would act like other civil servants, a group not known for zealotry, because they would have no pecuniary incentive to make converts or maximize church attendance.
That is a good economic argument: if you are paid a salary that is independent of your output, you will not be motivated to work beyond the minimum requirements of the job.
A less obvious point is that a public subsidy of a particular church will make it harder for other churches to compete.
The result will be less religious variety than if the competitive playing field were equal.
A reduction in product variety (with no reduction in cost) will reduce demand for the product.
This point is less compelling than Hume's, because of offsetting considerations.
The subsidy may stimulate demand for the established church by reducing the quality-adjusted cost of attending it--suppose the subsidy is used to build magnificent cathedrals or hire outstanding organists and choirs.
The increased demand for the services of the established church may offset the lack of religious variety.
Moreover, if the subsidy causes the officials of the established church to become indolent, this may offset its cost advantage and facilitate the competition of other sects.
Empirically, however, it does seem that established churches do not increase, and, judging from the experience of most though not all European countries (Poland is a major exception), probably diminish religiosity, consistent with Hume's analysis.
However, his analysis is probably inapplicable to the attenuated forms of establishment that are all that are feasible in a religiously pluralistic society such as that of the United States (of course it may be pluralistic in part for Hume's reason).
A public display of the Ten Commandments is a far cry from a state-salaried minister, so far as the impact of public support of religion on proselytizing is concerned.
Becker rightly stresses relative as distinct from absolute performance norms.
I would add that this is a well-nigh universal phenomenon rather than one confined to athletics.
The reason is that there are very few absolute standards in nonempirical fields of human endeavor.
We form a judgment about the quality of a musical or literary work, an artist, a musical performer, and so forth by comparison with other works, other artists, performers, etc.
So it is natural for the writer, the artist, etc. to do whatever he can to increase his performance relative to his peers.
The reason empirical fields are different is that in them success can be measured in absolute terms; a contribution to knowledge can be deemed important on the basis of the value of the knowledge alone.
Why then the objection to permitting athletes to use steroids and other drugs to enhance their performance? (The objection to permitting some athletes to cheat by using these drugs sub rosa is too obvious for discussion: that really is unfair competition.
The objection would disappear if the ban were lifted.) One valid objection, which seems however minor, is that it complicates comparison with earlier athletes, who didn't have access to performance-enhancing drugs.
But in many sports, such as baseball, they had an advantage denied to current athletes: black and Hispanic athletes were excluded from the competition.
Other changes that complicate comparison between baseball players of this generation and those of earlier generations include the advent of night baseball, natural gains in height and weight because of better nutrition, improved vision correction, longer seasons, better equipment, better orthopedic surgery, more sophisticated techniques for managing a team, and better health care generally.
As Becker points out, no objections are raised to athletes' improving their performance by better training, more exercise, more practice, or abstention from alcohol and cigarettes.
So maybe the root of the objection to the performance-enhancing drugs is that they have long-term deleterious effects on the health of the user.
This in turn gives rise to an externality, since use by some athletes depresses the relative performance of non-users.
Yet I do not think that serious objections would be raised to self-destructive behavior in pursuit of athletic distinction as long as the behavior did not involve drug use.
A football lineman will not be criticized for blowing himself up into a 400-pound freak if he does it without the aid of drugs, even though the long-term effects on his health of the added weight are very bad and even though his weight gain may place pressure on other linemen to match it.
Nor do we criticize poets and other artists who deliberately lead unhealthy lives, either in search of experiences that they can incorporate into their work or out of sheer irresponsibility or mental derangement, even though they might be thought to be competing unfairly with the normals.
Some associates at large law firms work much too hard for the good of their health in order to steal a march on their competitors, but they are not criticized either.
So is the ban on doping athletes just a mindless reaction against novelty and science, a Luddite reaction? Or does it just reflect a confusion between cheating when drugs are banned and lifting the ban? I think not.
There are two valid reasons for the ban.
One is the pure "arms race" character of the doping; there is no improvement in the entertainment quality of football if 400-pound linemen confront each other rather than 200-pound linemen.
In contrast, the overworking law firm associates increase their firm's output.
The other justification for the ban is that it is a rational means of protecting children.
Because successful athletes earn high salaries, because success as an athlete does not require a high order of intelligence, and because an athletic career to be successful must begin in high school (in the case of tennis, perhaps even earlier), there is enormous competition by minors to achieve athletic success.
If performance-enhancing drugs were legal, their use by teenagers would be pervasive, and teenagers lack sufficient maturity to trade off the benefits of an athletic career (discounted by the very low probability that any given teenage athlete will have a really successful athletic career) against the long-term damage to their health.
Of course adult athletes could be permitted to use such drugs but minors forbidden to do so, but such a legal regime would be difficult to enforce, especially given the "role model" status of adult athletes in the eyes of minors.
The lifting of the ban would remove all stigma from the use of such drugs.
Their legal and widespread use by star athletes would validate the drugs in the eyes of impressionable youth.
An article in the New York Times of July 25 describes the efforts of the federal government to prevent Americans from gambling online in Internet casinos located outside the United States.
The article reports that 8 million Americans engage in Internet gambling, spending a total of $6 billion a year.
Gambling outside specific, authorized venues, such as Nevada (the state that has the fewest restrictions on gambling), Indian casinos, riverboat casinos, parimutuel betting at racetracks, and state lotteries, is illegal.
Illegal gambling is a standard example of a radically underenforced, "victimless" crime ("victimless" in the sense of being a voluntary transaction, as distinct from a coerced transaction such as theft).
The gambling laws are underenforced largely because gambling is victimless, which makes detection difficult and also reduces the public's willingness to devote resources to preventing it.
The argument for criminalization is that gambling is an unproductive and often an addictive activity that, by virtue of its addictive character, drives the gambling addicts to bankruptcy.
In fact gambling is productive in an economic sense because it increases the expected utility of the gamblers; otherwise there wouldn't be gambling.
It is as productive as any other leisure-time activity that does not involve the acquisition of useful skills or knowledge.
Granted, the attraction of gambling is a little mysterious from a rational-choice perspective.
Because of the need of the casino or other gambling establishment to cover its costs, the gambles that are offered are bad in the sense that the net expected monetary payoff is negative.
The state lotteries derive significant revenues (an average of 2.3 percent in the 40 states that have state lotteries) from the sale of lottery tickets precisely by offering particularly bad odds: on average, of every $1 in revenue from the sale of lottery tickets, only 50 cents is paid out in winnings.
So only risk preferrers should derive net expected utility from gambling.
Yet most gamblers probably have health, homeowners', and other forms of insurance and thus demonstrate risk aversion.
For just as only a risk preferrer will accept bad gambles, so only a risk averter would buy insurance, since the insurance company‚Äôs loading fee makes the net expected monetary payoff from insurance negative.
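The symmetry between the bad gamble and the insurance policy is easy to see in a worked calculation. The following sketch uses hypothetical numbers chosen only to mirror the 50-cents-on-the-dollar lottery payout and a typical insurance loading fee; it is an illustration, not data:

```python
# Illustration: both a lottery ticket and an insurance policy have
# negative expected *monetary* value once the house's take (or the
# insurer's loading fee) is included. All numbers are hypothetical.

def expected_value(outcomes):
    """Sum of probability * net payoff over all outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# A $1 lottery ticket that returns 50 cents per dollar of sales overall:
# here, a 1-in-1,000,000 chance of a $500,000 prize.
ticket_price = 1.00
lottery = [(1 / 1_000_000, 500_000 - ticket_price),  # win: prize minus ticket
           (1 - 1 / 1_000_000, -ticket_price)]       # lose: ticket forfeited
print(expected_value(lottery))    # about -0.50: half the ticket price lost, on average

# Insurance: a 1% chance of a $10,000 loss, fully insured for a $120
# premium (a 20% loading fee over the $100 actuarially fair premium).
premium = 120.0
insurance = [(0.01, 10_000 - premium),  # loss occurs: reimbursed, net of premium
             (0.99, -premium)]          # no loss: premium is gone
print(expected_value(insurance))  # about -20.0: the loading fee
```

In purely monetary terms both transactions are losers, which is why accepting the first suggests risk preference while buying the second suggests risk aversion.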
Some people believe irrationally that they are inherently "lucky," not realizing that "luck" is something observed ex post; no one has an asset called "luck" that enables him to beat the odds.
Other people are so desperate or miserable that their marginal utility of money is very low, which truncates the downside risk of a gamble.
Suppose, to take an extreme case, that you have only $1 left in the world.
There isn't much you can do with $1, so, even if you were risk averse, your most sensible move might be to buy a lottery ticket, on the theory that it is really costless.
(Thus, welfare programs encourage gambling by reducing the cost of gambling away one's financial resources.) The principle that this example illustrates is that if marginal utility is increasing in income, the benefit of winning a bet and thus increasing one's income will confer more utility than an equal loss will confer disutility.
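The "last dollar" principle can be made concrete with a stylized utility function. The convex function below (u(w) = w squared) and the odds are purely hypothetical, chosen to show how increasing marginal utility can make even a monetarily bad bet utility-increasing:

```python
# Illustration of the "last dollar" argument: with increasing marginal
# utility of money (a convex utility function, here u(w) = w**2, chosen
# purely for illustration), a gamble with negative expected monetary
# value can still raise expected utility.

def u(wealth):
    # Convex utility: each extra dollar is worth more than the last.
    return wealth ** 2

# You hold $1. Option A: keep it.
utility_keep = u(1.0)

# Option B: spend the $1 on a ticket with a 1-in-2000 chance of $1,000.
# Expected monetary value is only $0.50 -- a bad bet in dollar terms.
p_win = 1 / 2000
expected_money = p_win * 1000
expected_utility = p_win * u(1000) + (1 - p_win) * u(0)

print(expected_money < 1.0)             # True: the bet loses money on average
print(expected_utility > utility_keep)  # True: yet it raises expected utility
```

The downside is truncated because losing the last dollar costs almost no utility, while winning moves the gambler up the steep part of the (convex) utility curve.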
And finally, there is an inherent human fascination with uncertainty and randomness, and these features of our environment and experience can be observed with particular clarity in gambling.
In this respect, gambling is a consumption good rather than an investment good.
It is true that some people become addicted to gambling and go broke.
A 2000 study by the economists John Barron, Michael Staten, and Stephanie Wilshusen estimated that an abolition of casino gambling would reduce personal bankruptcies by 1.4 percent nationwide and by 8 percent in counties in which or near which casinos were located.
However, given the enormous number of people who gamble, the percentage who go broke as a consequence of their gambling must be very small.
This raises a serious question whether the harmless activity of a vast number of people should be curtailed to protect the small fraction who become addicted to it and as a result engage in self-destructive behavior.
If gambling addiction is considered a genuine mental disorder rather than a preference, it perhaps could be controlled by "suitability rules" (a weak "perhaps"--the costs of enforcement might well be prohibitive) that would limit the percentage of a person's income or assets that he could spend on gambling.
This would be a counterpart to the suitability rules that forbid securities brokers to buy highly risky securities for people for whom such investments are "unsuitable" by virtue of their financial situation.
Addiction to gambling is more costly the more difficult it is to declare bankruptcy and thus wipe out one's debts.
The Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 did just that: made it more difficult to declare nonbusiness bankruptcies.
It will be interesting to see whether the Act reduces the amount of gambling and gambling-related bankruptcies.
The addiction argument is pressed in the legislative arena by the casinos and the other legal gambling establishments in support of restricting other gambling.
The establishments argue that they try to prevent their customers from bankrupting themselves.
But as far as I know, this is true only in the sense that they make sure that their customers can pay their losses.
Internet gambling poses a strong competitive threat to the conventional legal gambling establishments, including state lotteries.
Those establishments, with the exception of the state lotteries, have high overhead expenses--large staffs, expensive equipment (one-armed bandits, casino buildings, casino boats); but the states, as I have noted, depend upon the lotteries as a revenue source, in lieu of taxes, which are unpopular.
(The purchase of a state lottery ticket is a voluntary tax payment of one half the ticket price.) Moreover, except for the lotteries, legal gambling imposes substantial time costs on the gamblers, who have to travel to the casino or the racetrack to place a bet.
It is because of the overhead expenses and the states' revenue hunger that the odds offered the gambler are so bad (they are even worse when the time cost of the gambler is added).
There may also be monopoly rents further worsening the odds, since competition in the gambling industry is so restricted in most states, a factor in the recent Abramoff scandal involving efforts to prevent competition with Indian casinos.
Internet gambling establishments have very low expenses, enabling them to offer an approximation to fair odds, and do not require any travel by the gambler.
One would think fair odds an enormously attractive feature of Internet gambling to gamblers.
At fair odds, which is to say with no loading expense (a gamblers' nirvana that Internet gambling, if allowed to operate without threat of criminal prosecution--which obviously drives up its costs--might approximate), the net utility of gambling soars because there is no longer a net expected financial loss.
So the legal casinos are correct that they offer a measure of control over gambling: by offering only bad odds, they reduce the demand for their product.
(The analogy is to the monopolistic provision of a product, which by reducing demand reduces the amount of pollution generated by the manufacture of the product.) It is doubtful, however, that this effect justifies the elaborate legal restrictions on the gambling industry.
There were a number of interesting comments.
I will reply to a few.
Several comments oppose federal support of medical or other research.
That is a legitimate position, but it is not directly relevant to the stem cell issue.
The reason is that banning federal support of stem cell research does not entail a reduction in the total federal funding of research, but merely a reallocation from stem cell research to other research.
In support of federal funding of basic research in general, as distinct from relying on state and private donations, David points out that "Almost every lab in a reputable academic institution in this country pursues multiple projects at once. Thus, scientists from those labs would have to create entirely new labs, devoid of federal funding, to perform even one experiment using stem cells."
Federal funding of research is not ideal, because of political interference--the ban on stem cell research is only one example.
More important is the overinvestment in research on AIDS, relative to the number of lives at risk, and the disproportionate investment in research on breast cancer, compared for example to research on prostate cancer.
In general, though, the federal peer review process assures that NIH grants (for example) go to high-quality projects.
I do not believe there is the same kind of politicized geographical dispersion that one finds in less objective areas of federal largesse, such as grants by the National Endowment for the Humanities.
A number of the comments debate what seem to me purely metaphysical questions concerning when life begins, whether five-day embryos should be treated as full-fledged human beings, etc.
By "metaphysical" I mean questions that can't be resolved by reference to logic or evidence.
They are matters of opinion and endless contestation, strongly influenced by religious views that cannot be verified or refuted (modern religions are careful to avoid proposing falsifiable hypotheses, such as that the world will end on September 1, 2006).
I get no nourishment from such debates.
I believe that upbringing, temperament, experience, emotion, and certain brute facts determine one's answers to such questions, not truth or falsity.
If stem cell research fulfills its promise, I believe that the moral objections will be swept aside, because even religious Americans are pragmatists.
I do not agree that if you think it's okay to harvest stem cells from a five-day-old embryo, you've got no grounds for condemning the murder of children and adults or even the killing of a three-month-old fetus.
All societies draw lines in these matters; none I think considers a decision to be celibate the equivalent of murder because the decision results in extinguishing potential life.
Where the lines are drawn depends ultimately, I have suggested, in our society at least, on practical considerations.
Let me respond briefly to some of the comments.
I do not know on what basis Mr. Fallows in the   Atlantic Monthly   believes that we have broken al Qaeda's operational capacity.
Granted, we have seized or killed many of bin Laden's henchmen, and his sanctuary in Pakistan is less secure than his pre-9/11 sanctuary in Afghanistan, so it is fair to surmise that we have weakened al Qaeda.
But the Heathrow plot suggests (though does not prove) that al Qaeda can still orchestrate a devastating, though fortunately foiled (well, al Qaeda's 1995 plot to blow up airliners over the Pacific was also foiled, and that didn't prove that al Qaeda had been broken), attack on the United States.
It makes no difference whether al Qaeda employs British Muslims or Saudi Arabians to carry out the attacks that it plans.
I disagree with the comment that says that we should spend less on antiterrorism because terrorism kills fewer people than ordinary crimes.
First, it is harder to limit terrorism than to limit ordinary crime; the terrorists are more determined and less deterrable.
More important, the potential threat posed by terrorists in an era of proliferation is much greater than the potential threat posed by ordinary criminals.
When I said that our current expansive conception of civil liberties dates from a time we felt safer, I didn't mean to disparage the fear of nuclear war during the Cold War.
After communist subversion in the United States was defeated in the late 1940s and early 1950s, we felt pretty safe from domestic threats, the kind of threats that put pressure on civil liberties.
We no longer have that feeling of safety.
Finally, I was asked about profiling.
I am not an enthusiast for profiling.
Apart from the resentment it causes on the part of people (American Muslims) whom we very much want to keep loyal to the United States, it can be circumvented by recruitment of terrorists who do not fit the profile.
More and more "white" Europeans are being converted to Islam and some of them may become terrorists.
On the other hand, some limited, discreet profiling is efficient and I very much agree with the commenter who said there should be a "pass" from security checks for people who have security clearances or are otherwise certifiable as safe.
I agree with Becker's analysis, but I draw a few additional lessons from the recent foiled plot to bring down airliners with liquid bombs, and let me explain them.
The first lesson  is the shrewdness of al Qaeda and its affiliates in continuing to focus their destructive efforts on civil aviation.
Death in a plane crash is one of the "dreaded" forms of death that psychologists remind us arouse far more fear than forms of death that are much more probable; this dread explains the extraordinary safety of air travel compared to, say, gas heaters, which kill with a much higher probability yet arouse little fear.
The concern with air safety, coupled with the fact that protection against terrorist attacks on aviation can be strengthened, though only at great cost in inconvenience to travelers, makes the recently foiled plot a merely partial failure for the terrorists.
The revelation of the plot will significantly increase the costs of air travel--costs that are no less real or substantial for being largely nonpecuniary (fear, and loss of time--which, ironically, will result in some substitution of less safe forms of travel, namely automobile travel).
The plot has also revealed the importance of counterterrorist intelligence.
A defense against terrorists as against other enemies of the nation must be multilayered to have a reasonable chance of being effective.
One of the outer defenses is intelligence, designed to detect plots in advance so that they can be thwarted.
One of the inner defenses is preventing an attack at the last minute, as by airport security screening for weapons.
The inner defense would have failed in the recent episode because the equipment for scanning hand luggage does not detect liquid explosives.
The outer defense succeeded.
This is fortunate because airport security remains in disarray.
The liquid-bomb threat had been known since a similar al Qaeda plot was foiled in 1995, but virtually nothing had been done to counter it.
This is a failure of our Department of Homeland Security but also of the corresponding agencies in other countries, such as Britain's Home Office.
If intelligence had failed, the attack would have succeeded.
Intelligence succeeded in thwarting the attack in part because of the work of MI5, Britain's domestic intelligence agency.
The United States does not have a counterpart to MI5.
That seems to me a very serious gap in our defenses.
I have criticized it in a series of recent writings, including my book   Uncertain Shield: The U.S. Intelligence System in the Throes of Reform  , ch. 4 (Hoover Institution and Rowman & Littlefield Publishers, Inc., 2005).
Perhaps now these criticisms will receive a more sympathetic hearing.
Primary responsibility for national-security intelligence has been confided to the FBI, a criminal-investigation agency oriented toward arrest and prosecution rather than toward patient gathering of intelligence with a view toward understanding and penetrating a terrorist network.
The title of an article on the front page of today's   New York Times   says it all: "Tracing Terror Plots, British Watch, Then Pounce: Experts See Different Tactics in U.S., Which Moves in Quickly." The Bureau's tendency, consistent with its culture of arrest and prosecution, is to continue an investigation into a terrorist plot only for as long a time as is required to obtain sufficient evidence to arrest and prosecute a respectable number of plotters.
Under this approach, the small fry are easily caught but any big shots who might have associated with them quickly scatter.
The arrests and prosecutions warn terrorists concerning the methods of the FBI.
Bureaucratic risk aversion also plays a part; prompt arrests assure that members of the group won't escape the FBI's grasp and commit terrorist attacks.
But without some risk taking, the prospect of defeating terrorism is slight.
MI5, in contrast to the FBI (and to Scotland Yard's Special Branch, with which MI5 works), has no arrest powers and no responsibilities for criminal investigation, and it has none of the institutional hangups that go with such responsibilities.
Had the British authorities proceeded in the same manner in which the FBI would have been likely to proceed, rather than continuing their investigation until almost the last minute and as a result being able to roll up (with Pakistan's help) more than 40 plotters, most of the plotters might still be at large and the exact nature of and danger posed by the plot might not have been discovered.
The   Times   article says that the British could wait until the last minute because they have more legal scope for detaining suspects than we do.
I don't think this is correct, but if it is, it is one more sign that we still do not take the threat of terrorism seriously enough to reexamine a commitment to civil liberties formed in a different and safer era.
Which brings me finally to a silver lining.
It is not the fact that the plot was foiled; it was, as I said at the outset, merely a partial failure.
The silver lining is that this close call may shake us out of some of our complacency.
Because we have not been attacked since 2001, we are (or were until last week) beginning to feel safe.
We were ostriches.
An article in the current   Atlantic Monthly   by the usually astute journalist James Fallows proclaims victory over al Qaeda.
Fallows argues that by depriving bin Laden of his Afghanistan sanctuary we defeated al Qaeda, and the only danger now is that we will overreact to a diminished terrorist threat.
Bin Laden was indeed deprived of his Afghanistan sanctuary, but he promptly found another one, in Pakistan.
Though the plotters of the liquid-bomb attack are British citizens, the plot in its scope and objective has al Qaeda written all over it.
Al Qaeda is the high-end terrorist group.
It is not content with bombing merely a subway or a train.
Its hallmark is the spectacular attack, and the recent airliner plot had it succeeded would have rivaled the 9/11 attack in its impact.
Our ostrich brigade may retreat to the claim that "our" Muslims, unlike the British and Canadian Muslims, are fully integrated into American society and so pose no threat.
That is false.
The percentage of American Muslims who are potential terrorists is undoubtedly smaller than the corresponding percentages in either Britain or Canada.
But as there are many more American Muslims than there are British or Canadian ones, and as (we now know) British (and presumably Canadian) Muslim extremists want to attack us and not just their own host nations, we cannot afford to assume that we are safe.
Perhaps we shall no longer indulge that dangerous assumption.
Stem cells are "general purpose" cells out of which the cells specialized to particular organs develop.
Stem cells could, in principle, be used to "grow" human organs for transplant purposes.
The therapeutic potential of stem cells is considered enormous, but, for the most part, not imminent; stem-cell research is at the basic- rather than applied-science stage.
For research purposes, embryonic stem cells, found in fertilized ova a few days old, are greatly superior to adult stem cells.
The usual source of embryonic stem cells is embryos created for use in in vitro fertilization.
More are created than are used to produce a fetus, and the surplus embryos are stored for future use or destroyed.
Stem cells can be extracted from these "excess" embryos, but, thus far, not without destroying the embryo.
In a thoughtful speech in August 2001, President Bush laid out the pros and cons of continued stem cell research.
The principal pro is obvious: the therapeutic potential of stem cells.
(Later I'll give some additional reasons why we shouldn't want to discourage such research.) The cons are ethical in nature.
Many religious people believe that a fertilized human ovum is a human being and they therefore regard the extraction of stem cells from the embryo, when it causes the embryo's destruction, to be murder, just like therapeutic (as distinct from spontaneous) abortions.
Miscarriages are the best-known form of spontaneous abortion, but about half of all pregnancies are terminated by spontaneous abortions, most occurring before the woman realizes she's pregnant.
Some people who do not consider an embryo a human being nevertheless oppose stem cell research because they believe that the use of stem cells for therapeutic purposes would be the equivalent of cloning a human being in order to create spare parts to replace a person's organs as those organs become incurably diseased or wear out.
They imagine a time when, if permitted, parents will clone their child at birth and use the clone as a source of replacement organs for the child.
Other people oppose embryonic stem cell research because they oppose in vitro fertilization as tampering with the natural order of things.
It is not easy to deal analytically with arguments that are based on religion or emotion rather than on pragmatic considerations.
Given the number of spontaneous (not to mention deliberate) abortions and the fact that in vitro fertilization, which produces excess embryos, is lawful, it is a little mysterious what exactly is objectionable about using some of these excess embryos, which would otherwise either be destroyed or stored indefinitely with dim prospects of ever being used to produce more in vitro children, unless the objector opposes all nonspontaneous abortion.
And that is an opposition founded on religious belief.
Some secular people oppose abortion as encouraging promiscuity, but that concern is inapplicable to the use of embryos as a source of stem cells.
The idea of cloning a child in order to have a spare source of organs--the idea that the clone's organs would be harvested, as needed, for transplantation into the child--is fanciful.
To create organs from stem cells would not require creating an entire person.
Moreover, whether a person originates as a clone (which is, for example, what an identical twin is), as a product of caesarean section, or in any other nonstandard fashion, it is still a human person with all the rights that persons have, including the right not to be killed just because someone else would like his organs.
(I am of course assuming that an embryo is not a person for legal purposes.)
Bush concluded his August 2001 speech by announcing that he would oppose lifting the existing ban on federal support of stem cell research except with regard to existing stem cell lines, of which there were then 60, in laboratories scattered around the globe.
Many of these lines turned out to be unusable for research purposes; today only 22 are left, which are too few to satisfy research needs.
In response to this deficit Congress passed, but the President has now vetoed, a statute that would have lifted the ban.
Many countries, such as the United Kingdom and Singapore, not only do not share our qualms about stem cell research but want to make such research a major focus of their thriving biotech industries.
Singapore recently lured leading American stem cell researchers to its major biological research center.
There are several economic points that spring to mind about the U.S. ban.
The first is its futility, and this for two reasons.
Since the researchers are not tied to any particular country, the maximum effect of the U.S. ban would simply be to shift all stem cell research to other countries; it would not stop the research and save the embryos.
In addition, however, U.S. law does not ban stem cell research, but only the use of federal funds for that research.
The main therapeutic applications of stem cell research lie too far in the future and are too uncertain to attract much private investment, given the high discount rates that most businesses use to evaluate projects.
But there is plenty of state and especially private charitable spending on medical research, and so the ban on federal funding of this one area of medical research should merely cause a reallocation of research funds.
More state and private money will go to stem cell research and more federal money to areas of research that will be receiving less state and private money because more of that money will be used for stem cell research.
But if the federal ban is not affecting the amount of financial support for stem cell research, why are many of our researchers going abroad to conduct that research? Why do countries like the U.K. and Singapore think they can steal a march on us? The answer may be that the U.S. research community does not think that opposition to stem cell research will express itself only in a ban on federal support for such research.
Although the Supreme Court has recognized a constitutional right to abortion, it is unlikely to recognize a constitutional right to conduct stem cell research, even if the objections to such research are the same as the objections to abortion.
The fact that the objections are primarily a product of religious belief would not invalidate them, because banning stem cell research does not infringe anyone's free exercise of religion or constitute an establishment of religion.
Many moral precepts embodied in laws that no one supposes unconstitutional are the product of sectarian beliefs that secular people (or indeed religious people belonging to sects that are less influential in this country) reject.
However, most of the precepts themselves, such as the taboo against murder, are shared by people of different, and of no, religious faiths; you don't have to believe that Moses brought the Ten Commandments down from Mount Sinai (you don't have to believe there   was   a Moses) to condemn murder.
In contrast, opposition to abortion and stem cell research is not widely shared by people who do not belong to a particular subset of religious sects.
The loss of leading-edge biological researchers to other countries could be costly to the United States, especially if there are complementarities between stem cell research and other areas of biological and medical research.
We may wake up some day to find that foreign institutions have obtained patent protection for highly lucrative medical therapies that our population will demand the government subsidize.
I predict, however, that generous state and private funding of stem cell research will stem the reverse brain drain.
(And if researchers are easily lured abroad, they are easily lured back.) Moreover, as therapeutic applications of stem cell research become more imminent, the pressure to relax the ban on federal funding is bound to give way.
If a person's assets grow in value, he can borrow more against them, or expect a lower interest rate if he does not increase his borrowing (for then the lenders have more security).
This is true of houses as of other assets.
In a growing economy, with the amount of land available for housing more or less fixed, the value of residential property can be expected to increase--over the long run.
But in the short run, asset prices may stagnate or even decline.
In recent years, homebuyers have been willing to take on historically unprecedented risk in the form of 100 percent mortgages (on the subprime bubble, see my posting of June 24) and floating interest rates.
As a result, if housing prices fall, a buyer can find himself with negative equity (that is, owing more than his house is worth) and paying a much higher interest rate than the rate prevailing when he bought the house.
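The negative-equity point is simple arithmetic, which the following sketch illustrates with hypothetical numbers (the prices and the 10 percent decline are assumptions for illustration, not data from any actual market):

```python
# Illustrative sketch (hypothetical numbers): equity position of a buyer
# who financed 100 percent of the purchase price.

def equity(house_value: float, loan_balance: float) -> float:
    """Owner's equity: what the house is worth minus what is owed on it."""
    return house_value - loan_balance

price = 300_000.0          # purchase price, fully mortgaged (no down payment)
balance = 300_000.0        # loan balance at purchase
after_drop = price * 0.90  # house value after a 10 percent fall in prices

print(equity(price, balance))       # 0.0 -- no cushion at purchase
print(equity(after_drop, balance))  # -30000.0 -- negative equity
```

With any positive down payment, the same price decline would first eat into the buyer's own stake before leaving him owing more than the house is worth.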
Although a floating interest rate shifts risk from lenders to borrowers, lending without requiring a significant (or sometimes any) down payment imposes substantial risk on lender as well as borrower, since they have in effect a joint interest in the property that secures the loan.
Moreover, the costs of foreclosure and resale are considerable and amplify the loss of value when housing prices fall and precipitate defaults.
Back in 2005, both the   Economist   magazine and the Federal Deposit Insurance Corporation, along with many others, warned that American housing prices were growing at unsustainable rates; the FDIC noted that in the five years ending in 2004, U.S. home prices had risen by 50 percent.
There is a long history of housing busts following housing booms, and although generally in this country the booms and busts have been local rather than nationwide, Japan famously experienced an extremely severe nationwide drop in housing prices in 1990.
One might have expected concerns with the possibility of a bust, given the housing bubble and risky lending, to drive up interest rates, but this did not happen, because lenders were willing to assume a very high level of risk.
In part this was because the initial lenders could sell loan packages to hedge funds and other specialists in risk bearing.
The bubble burst, defaults ensued, interest rates rose--precipitating more defaults--and some lenders were wiped out.
Finally the Federal Reserve Board stepped in and eased interest rates by providing additional capital to the banking industry.
The only justification for bailing out risk takers is to avoid a depression (or, as it is politely called nowadays, a "recession"), but, oddly, the worse the macroeconomic consequences of a speculative boom and bust, the stronger the argument for punishing the risk takers (which include both borrowers and lenders) by not bailing them out.
The punishment should fit the crime (I use "crime" in a figurative sense); the worse the crime, the heavier the optimal punishment, setting aside issues of detectability.
If the government relieves risk takers of the consequences of their risks, there is a divergence between social and private risk.
An example is subsidized flood insurance, which leads to excessive building in floodplains.
There seems a particular perversity in making credit cheaper, since cheap credit fed the boom.
Lower interest rates encourage borrowing and hence spending and also increase the price of imports by making the dollar worth less relative to other currencies.
Moreover, government intervention to help lenders and borrowers invites further government regulation--for example limits on subprime lending.
There is no more reason to discourage risk taking than to bail out the risk takers when the risks they have voluntarily assumed materialize.
The losses sustained by hedge funds in the bursting of the subprime bubble lend a note of irony to the arguments of those who oppose taxing hedge funds comparably to other investment companies.
They argue that hedge funds play an essential role in bringing market values into phase with the underlying real economic values.
It now seems that a number of hedge funds were caught up in a speculative frenzy, and that far from bringing about convergence between market and real values they enlarged the wedge between them.
Studies in cognitive and social psychology have identified deep causes for the overoptimism, wishful thinking, herd behavior, short memory, complacency, and naive extrapolation that generate speculative bubbles--and that require heavy doses of reality to hold in check.
Any efforts to soften the blow will set the stage for future bubbles.
This summer has seen a significant degradation in the quality of airline transportation in the United States compared with the recent past--substantial increases in delayed and canceled flights, in missed connections, in waiting time to the next flight to one's destination if one's original flight is canceled, in crowding in planes, in poor in-flight service, and in lost luggage.
The delays have actually been masked by the airlines' practice of increasing scheduled times--for example, flights from Chicago to Washington, D.C. used to be scheduled to take an hour and a half, but now are scheduled to take almost two hours, yet still are late more often than when the schedule called for a faster trip.
Much finger-pointing has accompanied the degradation in service, and there is a movement afoot, well discussed in an article in the August 8   Wall Street Journal  , for legislative intervention.
My guess (and that's all it is) is that the principal culprit is the difference between marginal and inframarginal consumers of a product or service that has heavy fixed costs (lumpiness).
Let me explain what is actually a simple point.
Competition compresses price to the intersection between demand and supply; think of the standard, simple demand-supply graph in which a falling demand curve intersects a rising supply curve.
To the left of the intersection, the demand curve is above the price.
The space between the price and the demand curve denotes the existence of inframarginal customers (or quantities, but I'll disregard that detail), which is to say customers who would continue to buy the product or service even if its price were higher.
They would do that because they value the product more than the marginal purchaser does--the purchaser who would not buy it if the price were any higher than competition has constrained it to be, since the marginal purchaser buys at a price just equal to his demand, that is, to the value he attaches to having the product.
The difference between what the inframarginal purchasers pay (the market price, the same price paid by the marginal purchaser) and what they would pay (the schedule of prices traced by the demand curve above its intersection with the price) is referred to as "consumer surplus".
In effect, it is value given away by the sellers to the purchasers.
The sellers derive no profit from it.
The only way they can increase their profits is to reduce their price, as that will attract more marginal customers; specifically, they will be customers for whom the value of the product is less than its current price but for whom the value of the product would exceed price if the price fell.
When price is at its competitive level, the sellers (unless they collude, and barring government intervention) can reduce price further only by increasing the quality of their product or reducing its cost.
Reducing cost may require reducing quality, which will reduce consumer surplus.
But the sellers' object is not to maximize consumer surplus, because they do not profit from it.
So if reducing price by reducing cost (and therefore quality) attracts new customers because they are not as concerned with quality as the inframarginal customers are, the sellers may be better off even though their customers as a whole may be worse off.
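The consumer-surplus concept can be made concrete with a small numerical sketch. The linear demand curve and the prices below are assumptions chosen for illustration only:

```python
# Illustrative sketch (hypothetical linear demand): consumer surplus is the
# area between the demand curve and the market price, up to the quantity sold.

def demand_price(q: float) -> float:
    """Willingness to pay of the q-th purchaser: P = 100 - q (assumed)."""
    return 100.0 - q

def consumer_surplus(price: float) -> float:
    """Triangle between the linear demand curve and the market price."""
    q_sold = 100.0 - price  # quantity demanded at the market price
    return 0.5 * q_sold * (demand_price(0.0) - price)

# At a market price of 40, sixty units sell; every buyer to the left of the
# marginal one pays 40 but values the good more, and the giveaway totals:
print(consumer_surplus(40.0))  # 1800.0
```

The sellers capture none of that 1,800; it is value given away to the inframarginal buyers, which is why maximizing it is not their objective.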
What makes this a particularly attractive strategy for airlines to follow is that a large proportion of their costs are fixed, that is, are invariant to quantity of output.
If a plane can carry 100 passengers, the cost savings from carrying a smaller number is trivial, unlike the cost savings to a retailer from selling fewer toothbrushes.
(The analogy to the airplane is to intellectual property--a book, say.
The fixed costs of the book will be very high relative to the cost of printing and distributing an additional copy, i.e., its marginal cost.)  Even a very low price to passengers, if it fills the plane, may be profitable, because almost all the revenue goes to pay the heavy fixed costs of the plane, as in the case of the book but not the toothbrush.
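The arithmetic of heavy fixed costs can be sketched as follows; the cost and fare figures are hypothetical, chosen only to show the structure of the airline's choice:

```python
# Illustrative sketch (hypothetical costs): with heavy fixed costs, a lower
# fare that fills the plane can beat a higher fare that leaves seats empty,
# because the marginal cost of one more passenger is tiny.

FIXED_COST = 20_000.0   # cost of flying the plane at all (assumed)
MARGINAL_COST = 10.0    # extra cost per passenger carried (assumed)

def profit(fare: float, passengers: int) -> float:
    """Revenue minus fixed cost and the small per-passenger cost."""
    return fare * passengers - (FIXED_COST + MARGINAL_COST * passengers)

print(profit(300.0, 80))   # 3200.0 -- higher fare, plane 80 percent full
print(profit(250.0, 100))  # 4000.0 -- lower fare, but the plane is full
```

A toothbrush retailer faces nothing like this: his costs are mostly marginal, so selling fewer toothbrushes saves him almost proportionately.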
To the extent that an airline can price discriminate, it will, offering better service to the customers that are willing to pay for it.
Hence first class and business class versus coach.
(The analogy in intellectual property is to hardback versus paperback books, or to first-run versus subsequent-run movies.) However, there are limitations.
If flights are canceled or delayed, all the passengers are harmed; if first-class seats are filled (for they are especially profitable to the airlines), it will be harder for first-class passengers to find a first-class seat on the next flight if their original flight is canceled; and so forth.
The inframarginal customers (I am one of them) are furious.
Some of them are substituting other modes of transportation, such as car or train, for short flights, but that substitution is limited by the fact that fuel costs per mile, an increasingly high cost of driving, are actually lower for planes than for cars.
At the top end of the income distribution, some airline customers are buying shares in private planes.
In the middle, many are complaining to their Congressmen.
In a curious way, this last response could be thought an effort to obtain legislative rectification of a market failure.
For it is possible, if my analysis is correct, that aggregate economic welfare, in the form of the total combined consumer and producer surplus of airline transportation, has declined as a result of the airlines' competition for the marginal customer.
However, it is extremely unlikely that such a market failure could be rectified by legislation at a cost equal to or greater than the benefits.
It might be asked why the quality of airline service has been falling recently, rather than having always been low (at least since deregulation; regulation, by limiting entry and price competition, had encouraged airlines to compete by providing better service).
The answer is that the costs of air transportation have been rising recently as a result of sharply higher fuel costs.
So it is not that the airlines are actually reducing fares, as I assumed for purposes of simplifying my analysis--in fact they are raising them.
But they are not raising them to the level necessary to maintain the previous quality of service, because if they did that they would lose their marginal customers.
Most markets adapt to differences in consumer preference by offering different qualities of product at different prices.
But except at the very high end, where as I said some airline customers are switching to private planes and private charter services, this is not happening in the airline industry.
The impediments include the network character of the industry, the fixed costs of airplane transportation, and the spillover effects of airline delay--the inability of airlines to adhere to their schedules complicates air traffic control.
Speaking of which, the airlines argue that the air traffic control system is antiquated and that this is contributing to air-traffic delay.
I find this implausible, because the system, though operated by the government, is 90 percent financed by taxes on aviation fuel.
If the airlines want a better system, they should support rather than oppose higher taxes.
An excellent comment points out that obesity increases with urbanization and female employment outside the home, and therefore is global.
It increases with urbanization because urban work tends to be less physically strenuous than rural, and with female employment because work outside the home increases the time cost of home-cooked meals and thus the demand for restaurant, fast food, and junk food fare, all of which tend to be fattening because the purveyors compete to provide food that is at once cheap, tasty, and filling.
In these respects, professional food preparers probably outcompete home cooking on average.
Several of the comments criticize my introductory remarks about homosexuality.
I said that "in a heterogeneous society, practice tends to be normative.
That is why homosexual activists greatly exaggerate the prevalence of homosexuality--asserting, on the basis of a misreading of Kinsey's famous studies, that 10 percent of the population is homosexual, whereas the true figure is probably at most 2 percent.
The more homosexuals there are, the stronger their claim to be normal, a claim that would fail in a society that had a strict moral code condemning homosexuality.
Similarly, the more fat people there are, the more being fat is seen as normal." One comment predicts that I will offer "profuse apologies" for these remarks and particularly for having used the term "homosexual" rather than the politically correct "gay," and another ascribes my political incorrectness to my age.
The word "homosexual" is not pejorative; nor has it yet been displaced by "gay"; until it is, I have no inclination to switch merely in order to make a political statement.
I should however have made clear that (as one of the comments notes) in saying that only 2 percent of the population is homosexual, I did not mean to suggest that only 2 percent have had a homosexual experience.
In my book   Sex and Reason   I point out that "opportunistic" homosexuality (homosexual acts by persons whose orientation is heterosexual) is common in situations in which persons of the other sex are not available as sexual partners, as in prisons and (traditionally) on naval vessels.
Ancient Greek homosexuality was primarily opportunistic, and today we have the curious phenomenon of "LUGS"--lesbians until graduation.
There were a number of interesting comments on these two posts, and it was especially interesting to hear from a number of pilots and air controllers with regard to the problem of deteriorating airline service.
One controller straightened me out about the financing of the air control system--it is mainly by ticket taxes rather than airline fuel taxes.
This is unfortunate, because the number of tickets sold is a very crude proxy for the scheduling practices that cause congestion, which in turn increases the costs of the air traffic control system.
Compare two airlines, flying the same number of passengers (hence selling the same number of tickets), but one flies large planes infrequently and the other small planes frequently.
The second will cause more congestion but will not pay higher ticket taxes.
I did not focus on solutions to the problem of air traffic congestion, but the logical solution would be some form of congestion pricing--for example a tax based on number of flights, which would create an incentive for fewer flights (with larger planes).
This would be an efficient solution, however, only if consumer surplus would be greater with less delay but also less frequent flights, and that is a difficult calculation which as far as I know has not been made.
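The point about ticket taxes versus a per-flight tax can be made concrete with a toy comparison. The figures below (tax rates, fleet sizes, passenger counts) are assumed for illustration, not drawn from actual aviation data:

```python
# Two hypothetical airlines carrying the same total passengers.
# A per-ticket tax charges them identically; a per-flight tax charges
# the frequent-small-plane airline more, in proportion to the extra
# congestion its scheduling imposes.

per_ticket_tax = 5.0    # assumed dollars per ticket
per_flight_tax = 500.0  # assumed dollars per departure

airlines = {
    "large_infrequent": {"flights": 10, "passengers_per_flight": 200},
    "small_frequent":   {"flights": 40, "passengers_per_flight": 50},
}

for name, a in airlines.items():
    passengers = a["flights"] * a["passengers_per_flight"]  # 2000 for each
    ticket_based = passengers * per_ticket_tax              # identical burden
    flight_based = a["flights"] * per_flight_tax            # 4x difference
    print(name, ticket_based, flight_based)
```

Under the per-ticket tax both airlines pay the same, despite causing very different amounts of congestion; the per-flight tax shifts the burden toward the airline flying many small planes.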
One commenter pointed out that my observation that the per-vehicle cost of gasoline is higher than the per-airline-passenger cost of aviation fuel, and that this retards the substitution of auto for air transportation in response to air traffic congestion, is incomplete.
The proper comparison is per-vehicle-  passenger   gasoline cost, since if a vehicle has more than one passenger the gasoline cost per passenger falls.
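The commenter's arithmetic is simple but worth spelling out. With illustrative numbers (a hypothetical 300-mile trip, assumed mileage and gasoline price), the per-passenger fuel cost of driving falls with each additional occupant:

```python
# Per-passenger gasoline cost of a car trip: total fuel cost divided
# by the number of occupants. All inputs are assumed for illustration.

def car_fuel_cost_per_passenger(miles, mpg, price_per_gallon, occupants):
    gallons = miles / mpg
    return gallons * price_per_gallon / occupants

# A 300-mile trip in a 30-mpg car at $3.00 per gallon:
print(car_fuel_cost_per_passenger(300, 30, 3.0, 1))  # 10.0 gallons -> $30 solo
print(car_fuel_cost_per_passenger(300, 30, 3.0, 3))  # same $30 split three ways
```

A solo driver bears the full fuel cost, while three travelers sharing the car each pay a third of it, which is why vehicle occupancy matters for the air-versus-auto comparison.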
Turning now and very briefly to the credit-crunch posting, one commenter complains that as a result of other people's reckless borrowing (and lending), he--a conservative borrower and investor--is experiencing a loss of market value of his home and of his stock portfolio; and so wouldn't it be appropriate for the Federal Reserve Board, if it could, to do something to alleviate his loss, for which he is blameless.
I think not.
Everyone's investments are at risk from unforeseen external shocks, and it would be infeasible for government to compensate everyone who experienced a loss as a result of those shocks.
Even when average house prices are rising, the price of a number of houses will be falling; and the stock market gyrates quite apart from the occasional bubble.
In a heterogeneous society, practice tends to be normative.
That is why homosexual activists greatly exaggerate the prevalence of homosexuality--asserting, on the basis of a misreading of Kinsey's famous studies, that 10 percent of the population is homosexual, whereas the true figure is probably at most 2 percent.
The more homosexuals there are, the stronger their claim to be normal, a claim that would fail in a society that had a strict moral code condemning homosexuality.
Similarly, the more fat people there are, the more being fat is seen as normal.
A half century ago, when obesity and overweight were relatively rare in this country, fat people were regularly ridiculed by entertainers, and this ridicule helped to keep people thin.
As more and more people become fat, fatness becomes more normal-seeming, and the ridicule ceases (though another factor is the march of "political correctness," which discourages criticism of people's weaknesses).
It makes sense, as the recent article in the   New England Journal of Medicine   finds, that friends' fatness would have an influence distinct from that of the culture as a whole.
We each inhabit a subcommunity or subcommunities within the national (and world) community as a whole, and we are more likely to take our cues from these subgroups than from the broader community.
In my own ingroup of 16 judges (11 active members of my court, 4 senior members, and 1 nominee, who will replace an active member who will be taking senior status), only 2 are overweight (12.5 percent), compared to a nationwide average of 66 percent.
Among my other friends, judicial and otherwise, the percentage who are overweight is probably no greater than 12.5 percent.
But separating out common causes from social influence is difficult.
My social network consists almost entirely of affluent, educated people who are knowledgeable about calories and about the health effects of overweight, who can easily afford both expensive alternatives to junk food and membership in health clubs or ownership of exercise equipment, and who (this of course is related to their affluence and education) have a low discount rate and thus do not neglect long-term consequences of current behavior.
One expects these people to be thin even if they are uninfluenced by the weight of the other people in their network.
And likewise at the other extreme, with networks composed of people who are poor and badly educated and have high discount rates, all of which are correlated with obesity.
Still, it is plausible that there would be some social influence within these networks.
The reason is that there is no clear notion of an optimal weight.
Nobody bothers to compute his body mass index, which requires translating one's weight from pounds to kilograms and then dividing by the square of one's height in meters (the normal range for the body mass index so computed is 18.5 to 24.9), simple as this computation is.
A 2006 survey by the Pew Research Center finds that while people are acutely aware of the weight problem, they tend to regard themselves as of normal weight even when they are overweight, and this tendency to self-deception can be expected to be greater the heavier the people they associate with are.
If you weigh 180 pounds when you should weigh only 150, but your friends weigh 200 pounds, you will tend to think of yourself as thin.
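The body mass index computation described above can be sketched in a few lines. The helper function and the sample figures (a 70-inch person weighing 180 pounds) are hypothetical illustrations; the unit conversions are standard:

```python
# Body mass index: weight in kilograms divided by the square of
# height in meters. Inputs here are in the familiar US units.

def bmi(weight_lb: float, height_in: float) -> float:
    kg = weight_lb * 0.45359237   # pounds -> kilograms
    m = height_in * 0.0254        # inches -> meters
    return kg / (m ** 2)

# The 180-pound person from the example, at 5'10" (70 inches):
print(round(bmi(180, 70), 1))  # about 25.8 -- just above the normal range
```

So the 180-pound person in the example is in fact overweight by the index, whatever his 200-pound friends lead him to believe.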
As Becker explains through the concept of a social multiplier, when you weighed 150 pounds your thinness may have constrained your friends, but when you move up to 180 you exercise a lesser constraint.
The social multiplier effect can of course operate in either direction.
Among young professional women, there is a cult of thinness that seems to illustrate the effect.
But it would be difficult to initiate such a downward cycle among the currently overweight.
Overweight may also be related to a decline in social conformity.
There is more variety in people's dress, hair length, etc., than there was fifty years ago.
Maybe it has become easier to make judgments about people without relying on crude signals, such as physical appearance.
Then the costs of a nonnormal appearance would fall, and this would help to explain the increase in overweight.
A further point has to do with sedentary life style.
That is rightly regarded as a risk factor for obesity.
But in addition, being sedentary reduces the cost of being obese, since the less active one is, the less one is impeded by being obese.
Fatness tends to creep up on one, and the less costly it is, the less incentive one has to lose weight, which is difficult.
The August 1 collapse of a bridge that carries Interstate 35W across a river in Minneapolis, which killed 13 people, has led to loud calls for greater expenditure on maintenance of America's hundreds of thousands of bridges and millions of miles of interstate highways.
The federal highway system--the "Interstates," like I-35W--has a total length of about 50,000 miles, and though that is only 1 percent of the total highway mileage in the United States, it carries almost a quarter of the nation's total road traffic, amounting to some trillion person-trips a year, and half its truck traffic.
Concern with inadequate maintenance has focused on the federal highway system.
It should be noted, however, that although the system is federal, its components--the individual interstate highways--are owned mostly by the states, with each state owning the portion of the federal highway system that falls within it.
The system is financed in part by federal and state gasoline taxes, in part by the general federal budget, and, though most of the Interstates are freeways, in part by tolls levied by the states.
The shock caused by the Minneapolis accident may seem excessive in light of these figures.
A highway system that carries a trillion drivers and passengers a year is bound to experience occasional fatalities due to defects in highway design or maintenance as distinct from accidents due to driver negligence, vehicular defects, fog, and other conditions for which the highway system generally could not reasonably be thought responsible.
Nor is it yet clear that the Minneapolis accident was due to deferred maintenance or other neglect.
Current speculation is that it may have been due either to the installation of defective de-icing equipment or to the placement of construction materials used in a project to resurface the bridge's roadway.
Although the accident may turn out to have had nothing to do with neglect of the transportation infrastructure (mainly highways, though of course there are local roads as well, and some bridges carry railways rather than highways), there has been a long-standing, though until the Minneapolis accident low-visibility, concern with what the concerned call "America's decaying physical infrastructure." What could this mean?
A highway, like any other product, should be maintained at a level of quality at which marginal costs are equated to marginal benefits, including safety benefits.
There is such a thing as making a product too safe.
At the same time, there is a tendency to neglect the delay and wear-and-tear costs that are created by highways that have poor surfaces or that experience frequent lane closures due to the need to repair the road more often than if it were maintained properly.
So the question is whether the nation is spending too little on highway maintenance as judged by a cost-benefit analysis along the lines just suggested with proper emphasis on delay costs and a realistic assessment of expected accident costs.
The American Society of Civil Engineers has rated more than a quarter of the nation's bridges as "structurally deficient or functionally obsolete," and has called for an expenditure of $1.3 trillion to repair or replace them as well as to deal with similar problems of roads and other infrastructure.
But an engineering optimum is not the same thing as an economic optimum.
There is no indication that the ASCE has conducted a cost-benefit analysis of its proposal.
Considering the infrequency with which bridges collapse, the idea that more than a quarter of them are "structurally deficient or functionally obsolete" in a sense relevant to serious concerns about safety is implausible.
Not that it would be a surprise to find that the nation   is   spending less than the economically optimal amount on maintaining the highway system, bridges, and other infrastructure.
Enormous recent growth in the total miles driven on the interstate system has not been matched by expansion of the system, resulting in substantial increases in delay and also in wear and tear.
The highway system has difficulty responding to a need for increased expenditures to preserve road quality and minimize delay because it is publicly owned, and is financed largely by taxes.
Politicians have trouble raising taxes to pay for projects the benefits of which will largely be realized after the politicians' current term of office expires.
Since accidents that are due to the collapse of a bridge or a highway segment, or some equally dramatic demonstration of a flaw in the highway system itself rather than a mistake by users, are rare, the likelihood of such an accident occurring within a politician's political time horizon is low.
So there is an incentive to defer maintenance and thus live (slightly) dangerously rather than raise taxes, the effect of which will be felt by the taxpayer immediately.
By the same token, rather than raise taxes to pay for road repair or rebuilding that would avert any need for further maintenance for many years, politicians have an incentive to make frequent cheap repairs, even though the cumulative delay and accident costs of that strategy (discounted to present value) may be greater.
Furthermore, while state taxes are paid by state residents, only part of the delay costs resulting from inadequate maintenance are borne by them because many users of the interstate highway system are nonresidents of the state that maintains its segment of the system poorly.
In short, there are systemic reasons to expect the interstate highway system to be undermaintained, although there is also a reason to expect it to be overmaintained: the interest of road builders, like civil engineers, in overengineering the highway system in order to increase their revenues.
Yet it is possible that they maximize their revenues by frequent small repair and rebuilding projects rather than by infrequent but costlier (because more extensive) ones.
My guess is that the interstate highway system is undermaintained, but it is just a guess, and I would be grateful if some of the readers of this post have better information on the question.
Supposing that the maintenance of the system is not optimal, what might be done to bring it closer to the optimum? One possibility would be to privatize the system.
Until the 1970s, it was believed that infrastructure services such as air transportation, rail and barge transportation, trucking, pipeline transportation, and even taxi service could be privately owned but had to be heavily regulated by government as "common carrier" services, with price and entry controlled by regulatory agencies.
We now know better.
Yet we treat highways, which are just another part of the transportation infrastructure along with airlines, railroads, and pipelines, as requiring not only tight regulation but public ownership.
In fact limited-access highways are easily financed by user fees--tolls--and the advent of electronic toll collection (EZ Pass and similar services) has reduced, and will soon largely eliminate, the delay caused by having to stop and pay a toll.
States have taken steps toward privatizing highways, including components of the interstate highway system, such as the Indiana Tollway.
(See our posts of June 20, 2006.)
If the interstate highway system were privatized, there would still be a need to ensure uniform safety standards, signage, and so forth, although the responsibility for achieving the requisite commonality could largely be delegated to the owners themselves, since it would be very much in their interest that the interstate system be from the user's standpoint uniform.
There would be a potential problem of monopoly, since for many drivers there is no good alternative to using the interstate system.
But that problem could be minimized by the terms on which states leased the operation of their stretches of the interstate system to private companies.
Private operators who skimped on maintenance would be subject to being sued in tort if accidents resulted and to having their leases terminated; and to the extent that deferred maintenance caused unnecessary delays and thus reduced the value of the use of a highway, the operator would not be able to charge as high a toll, and so there would be a market penalty for suboptimal maintenance.
When a sport or other game is played all over the world (chess for example, or soccer), it is natural that there should be international competition.
The oddity of the Olympics is that they are presented as athletic competitions between nations, rather than between teams each of which presumably would have a permanent residence in one nation yet might recruit team members from other nations as well.
Nations in the grip of nationalist emotion or wanting to advertise their power to the world (nations such as Hitler's Germany, which made the 1936 summer Olympics, held in Berlin, a major propaganda event; East Germany and other communist countries; and now China) invest heavily in training their Olympic athletes.
China is estimated to have spent as much as half a billion dollars to train its athletes for the Olympic games now underway in Beijing.
The heavy investments made by nations that regard Olympic competition as a propaganda opportunity in turn spur other nations to invest heavily in training their own Olympic athletes.
The nationalistic fervor and great-power aspirations that Olympic competition stimulates seem to me a negative externality.
In addition, some unknown but doubtless large fraction of the expenditures on training athletes have no social product, but are in the nature of "arms race" expenditures.
If one nation spends very heavily on training its Olympic athletes, other nations, if they want to win a respectable number of medals, have to spend heavily as well.
The expenditures are offsetting to the extent that the objective of competition is to win rather than to produce an intrinsically better performance.
Economic competition produces better products at lower quality-adjusted prices, and this effect dominates the costs of competition in duplication of facilities and offsetting advertising.
The balance in athletic competition is different, because the main product (as in war) is winning, and it makes little difference to the consumer whether the winner ran a mile in 3.05 minutes or in 3.01 minutes.
Moreover, Olympic competition is inherently lopsided since, as Becker explains, success is largely determined by a nation's population, per capita income, and (in the winter Olympics) climate.
Why should Americans feel good if an American team beats a team from Costa Rica?
Since the United States is acknowledged to be the world's most powerful nation, it has nothing to prove by doing well in the Olympics, and so we are sensible not to allot any tax revenues to financing the training of our Olympic athletes.
Doubtless we would were it not for the private donations that generously support the United States Olympic Committee.
Since other countries do not have the same tradition of charitable giving as the United States, and so rely on tax revenues to finance activities that in the United States are financed by private charity, our charitable support of Olympic competition actually places pressure on other nations to support their Olympic teams out of tax revenues.
Becker raises an interesting point by asking whether Olympic competition creates a positive externality that might warrant public subsidy, though he recommends against subsidization.
The Olympic games are immensely popular, but, given advertising-supported television, it is apparently impossible to finance them (and in particular the training of the Olympic athletes) out of television-advertising revenues.
There are, however, as he notes, other (private) sources of revenue of Olympic participants, such as endorsements by champion athletes.
Moreover, were there no public subsidies of Olympic competition, this would not doom the Olympic games; it would just reduce the amount of training that Olympic athletes received (the arms-race effect).
This would reduce the number of new world records set, and marginally reduce the quality of play and hence the pleasure that the audience for the Olympic games derives, but would actually tend to sharpen Olympic competition by reducing the effect of a nation's per capita income on its Olympic prospects.
I agree that there is no reason to expect the rate of growth of per capita income in the United States to decline in the foreseeable future.
Of course it   may   decline; the future is uncertain; a particular uncertainty concerns the ever-present possibility of catastrophe (see my book   Catastrophe: Risk and Response   [2004]).
Abrupt global warming, nuclear terrorism, a pandemic, an asteroid strike--all are possible events that could have cataclysmic effects on economic growth.
Also, it is important to distinguish between monetary income and economic welfare.
Increases in leisure and in the quality and variety of products and services can increase welfare without increasing per capita income; conversely, expenditures on security, while they may be cost-justified because of the risk of terrorist or other attacks, reduce consumption; and service deteriorations, for example due to congestion, can reduce welfare; but in neither case would the welfare loss show up in lower per capita incomes.
A related example is wasteful expenditures on health care, all of which show up as income to providers, though it is possible that as much as a third of all expenditures on health care in the United States either yield no benefits in greater longevity or better health or exceed what it would cost to achieve the same benefits more cheaply (for example, by exercise and healthy eating).
I do not share Becker's pessimism about the rise of regulation.
The deregulation and privatization movements have, since their beginning in the late 1970s, freed large parts of the economy from government control; income tax rates have fallen; unions have continued to decline; and the courts have become more conservative with respect to economic issues.
(The Supreme Court's "liberal" Justices are liberal mainly concerning issues, such as abortion, capital punishment, and homosexual rights, that have little economic significance.) There will now be some re-regulation, but I would be surprised if it went far, given the political power of business.
Environmental regulation has increased, but it deals with real externalities.
The increased regulation of labor markets, however, mainly as a result of antidiscrimination laws, is difficult to justify on economic grounds, though its economic effects may be largely offset by the decline of unions.
Even after the recent increase in the federal minimum wage, that wage in real (i.e., inflation-adjusted) terms is no higher than it was in 1960.
Social conservatives believe that the nation is in free fall because of the decline of traditional social values, a decline reflected in low marriage and high divorce rates, a high rate of births out of wedlock, increases in pornography and vulgarity, the flaunting of homosexual relations, and abortion on demand.
Becker does not cite any of these factors as inimical to economic growth; nor would I.
But there is a crucial ambiguity in the word "decline" when applied to a nation, and I will devote the rest of my comment to that.
To begin with, the word might denote not a reduction in the rate of growth of per capita income but a reduction in that rate relative to the rate in other countries.
Small differences in growth rates cumulate over time, like compound interest.
Some nations will grow faster than the United States, but I do not see the growth rate of the United States dropping below the world average.
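The compounding point can be illustrated with assumed numbers: two economies starting at the same income level, one growing 2 percent a year and the other 3 percent. The rates and the 50-year horizon are hypothetical, chosen only to show how a one-point difference cumulates:

```python
# Compound growth: a small annual rate difference produces a large
# gap over decades, just as with compound interest.

def grow(income: float, rate: float, years: int) -> float:
    return income * (1 + rate) ** years

base = 100.0
slow = grow(base, 0.02, 50)  # roughly 269 after 50 years at 2%
fast = grow(base, 0.03, 50)  # roughly 438 after 50 years at 3%
print(round(slow), round(fast))
```

After half a century the faster-growing economy is more than 60 percent richer, although its annual edge was only one percentage point.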
The idea of national decline might even refer to a decline in a nation's share of world income.
The U.S. share peaked in 1951 at 28 percent, fell to 21 percent by 1975, and is about 20 percent today.
The percentage will continue to fall as incomes in China, India, Brazil, and other rapidly developing countries rise.
This almost certain "decline" has, however, no significance for the welfare of Americans--except insofar as a nation's share of world income is correlated with the nation's political (and ultimately military) power--"geopolitical power." And when one speaks of a nation in "decline," it usually is to the nation's geopolitical power that one is referring.
Although China's military expenditures are far smaller than those of the United States, they are increasing more rapidly and eventually may surpass ours; and their increase is driving Japan to become once again a major world military power.
Russia's military expenditures are increasing as well.
India's too.
And these are all countries that have potential enemies and so take military preparedness seriously (unlike Western Europe).
What is more, the power of large countries such as the United States (and before that, notably, Great Britain) to coerce small ones has declined.
When early in World War II Iraq and Iran began leaning toward the Axis powers, Britain (aided in Iran by the Soviet Union) quickly intervened and, more or less effortlessly, changed the governments in those countries.
Britain of course for centuries controlled a vast empire with slight military forces.
Tiny Holland ruled what is now Indonesia.
France ruled what is now Vietnam, Cambodia, and Laos.
Japan ruled Korea and Taiwan.
The Western nations, including the United States, are vastly less powerful than they were half a century ago.
The U.S., despite a military budget roughly equal to that of all other nations combined, has its hands full trying to control two militarily third-rate countries, Iraq and Afghanistan, and is incapable of preventing Iran from becoming a nuclear power.
From a political rather than an economic standpoint, the United States today may be in a position comparable to that of the Roman Empire in the fourth century A.D.
or the British Empire in the 1930s: the world's leading "empire" (in the sense not of having colonies, but of having the most influence over other countries), but, as an empire, in decline.
In 1989, Denmark began allowing homosexual couples to form "registered partnerships," which gives a couple most of the legal rights of married persons; the other Scandinavian countries followed suit.
In 2001 the Netherlands began allowing homosexual couples to marry.
Spain followed, despite the fierce opposition of the Catholic Church.
Canada too, and it is plain that most North Atlantic nations will soon recognize gay marriage.
In the United States, Massachusetts and California, by virtue of state court constitutional rulings, now allow gay marriage.
Several other states recognize "civil unions" or "domestic partnerships," the equivalent of Denmark's registered partnerships.
(All these laws, except the Scandinavian ones, are recent, and there have been few studies of their effects.) The federal Defense of Marriage Act (1996), however, not only denies the federal benefits of marriage to gay marriages, but also empowers states to refuse to recognize such marriages, and a majority of states have enacted laws (usually as part of the state's constitution) refusing to recognize such marriages.
The gay-marriage movement raises a number of interesting questions, which I approach from an economic perspective: why do homosexuals want to marry? What are the consequences of gay marriage likely to be? Why is there opposition to gay marriage?
All sentimental and religious considerations to one side, marriage is a source of benefits.
One, which is a genuine social benefit, is a saving of transaction costs.
If you want to leave money to someone to whom you are not married, you will need a will; but if you are married, upon your death your spouse (if you have no will) will automatically receive a share of your estate.
Nor do you have to have a contract specifying the financial or other consequences of an abandonment or other dissolution of the relationship; the law of divorce supplies the necessary machinery.
In other words, the law provides a kind of standard contract that enables the costs of negotiating and drafting a private contract to be avoided.
In this respect it is very much like partnership law, so that the term "domestic partnership" to describe a marriage-like law for homosexuals is apt.
There are also private benefits (in the sense that there does not seem to be an efficiency justification for them), such as survivors' social security benefits and rights under the Family and Medical Leave Act, but most of these benefits (offset, but for most couples not fully, by the "marriage tax"--the higher incomes taxes paid by a couple each of whom has a good income than they would have to pay if they were not married) are federal, and the Defense of Marriage Act denies federal marriage benefits to the parties to gay marriage.
There are also, however, state-law evidentiary privileges that enable one to prevent his or her spouse (or sometimes even ex-spouse) from testifying against one in a criminal case.
Employers often provide health and other benefits to the spouses of their employees as well as to the employees themselves, but there is nothing to prevent employers from offering those benefits to a same-sex partner of the employee, whether or not married.
Given the Defense of Marriage Act (and no-fault divorce, which enables each spouse to dissolve the marriage unilaterally), the net benefits of gay marriage to homosexuals are small, and indeed no greater than those conferred by domestic-partnership laws.
Nevertheless, most homosexuals are very strong supporters of gay marriage even on the assumption that the Defense of Marriage Act will not be repealed and even though it appears that relatively few homosexuals have taken advantage of the Massachusetts law, though this may be because of its newness; it was created by the state's highest court by an interpretation of the state's constitution in 2004, and there was initial uncertainty whether it would stand, or be nullified by constitutional amendment.
But probably the main reason for homosexuals' support of gay marriage is simply their desire to be treated equally with heterosexuals, which probably is also the principal reason for homosexuals' opposition to the armed forces' discrimination against them, rather than a great desire for either marriage or military service.
What are likely to be the consequences of gay marriage? If few homosexual couples take advantage of the right to undertake such a marriage, the consequences, at least in the short run, will be slight, especially since the right will be recognized in only a few states for the foreseeable future.
But even if all states recognized gay marriage and the Defense of Marriage Act were repealed, the consequences would be small simply because the homosexual population is small and many homosexual couples will not bother to marry; many heterosexual couples nowadays do not bother to marry, especially if they don't plan to have children, and a higher percentage of heterosexual than homosexual couples do not plan to have children.
The much-bandied-about figure that 10 percent of the population is homosexual is false; it is based on a misinterpretation of Kinsey's data.
The true figure is about 2 to 3 percent for men and 1 percent for women.
My qualification "in the short run" was intended to leave open the question whether widespread recognition of gay marriage, and thus the legitimating of homosexual relationships, might either increase the number of homosexuals or undermine heterosexual marriage.
I do not think either consequence is likely.
Sexual preference seems pretty clearly to be genetic or otherwise innate rather than chosen on the basis of social attitudes toward particular sexual practices.
Despite greatly increased tolerance of homosexual behavior in many countries (including the United States) in recent decades, there is no evidence that I am aware of that the number of people who prefer homosexual to heterosexual sex has grown.
Homosexuals are more open about their sexual identity and this creates an impression that there is more homosexuality than there used to be--and there may indeed be more homosexual   behavior  .
But the preference appears to be unchanged.
So parents probably need not worry that recognizing gay marriage will increase the likelihood of their child's turning out to be homosexual.
Although some of the opposition to gay marriage is religiously motivated, I believe the main opposition comes from the feeling of many (heterosexually) married people that allowing gay marriage degrades or depreciates the concept of marriage, much as if polygamous marriage were permitted or people were permitted to marry their dogs or their automobiles.
Apart from the weight that widespread public opinion is entitled to be given in a democratic society, there is the danger that if people respect the institution of marriage less, the marriage rate, already low, will fall still lower, with adverse social consequences.
Again, the danger seems small.
The people who worry about the effect of gay marriage on the institution of marriage are those most committed to the institution, and they are unlikely to desert it.
And if they did? If marriage is at bottom simply a standard contract, it is not obvious that its decline, and replacement by private contracts (in other words, the privatization of marriage), would have serious social consequences, given the ease with which under modern law marriages can be dissolved by either party.
Marriage retains and will probably long retain tremendous symbolic significance in our society as a symbol of love and commitment (that is why cheating on a spouse attracts greater opprobrium than cheating on a person with whom one has a long-term, but not marital, sexual relationship), and it is likely to retain that significance even as gay marriage becomes more widespread, as it seems bound to do.
A recent article in the Washington Times by Amy Fagan, entitled "Hollywood's Conservative Underground," www.washingtontimes.com/news/2008/jul/23/hollywoods-conservative-underground/ (visited Aug. 23, 2008), is a reminder of the curious domination of the American film industry by left liberals.
The industry's left-wing slant drives the Right crazy (if you Google "Hollywood Liberals," you'll encounter an endless number of fierce, often paranoid, denunciations by conservative bloggers and journalists of Hollywood's control by the Left).
Fagan's article depicts Hollywood conservatives as an embattled minority, forced to meet in secret lest the revelation of their political views lead to their being blacklisted by the industry.
The conservatives' complaint is an ironic echo of the 1950s, when communists and fellow travelers in Hollywood--who were numerous--were blacklisted by the movie studios.
We need to distinguish between actors, actresses, set designers, scriptwriters, directors, and other "creative" (that is, artistic) film personnel, on the one hand, and the business executives and shareholders of the film studios, on the other hand.
(Producers are closer to the second, the business, echelon than to the creative echelon.) The creative workers, I think, are not so much magnetized by left-wing politics as drawn to political extremes--for there have been a number of extremely conservative Hollywood actors, such as Ronald Reagan, John Wayne, Charlton Heston, Mel Gibson, and Jon Voight; Voight recently wrote a fiercely conservative op-ed in the Washington Times, where Fagan's article was published.
The left end of the political spectrum in this country is still somewhat more respectable than the right end, and so if one finds a class of persons who are drawn to political polarization, more will end up at the far liberal end of the political spectrum than at the far conservative end, yet it will be polarization rather than leftism as such that explains the imbalance.
No one has a good word for Stalin and Mao nowadays, but socialism is not a dirty word, as fascism is.
But why should actors and other creative workers in the Hollywood film industry, and indeed "cultural workers" more generally, be drawn to political extremes? The nature of their work--irregular employment, high variance in income, engagement with imaginative rather than realistic concepts, noninvolvement in the production of "useful" goods or services, and, traditionally, a bohemian style of living (a consequence of the other factors I have mentioned)--distances them from the ordinary, everyday world of work and family in a basically rather conservative, philistine, and emphatically commercial society, which is the society of the United States today.
The choice of a political ideology, which is to say of a general orientation that guides a person's response to a variety of specific political and ethical issues, is less a matter of conscious choice or weighing of evidence than of a feeling of comfort with the advocates and adherents of the ideology.
An ideology attractive to solid bourgeois types is unlikely to be attractive to cultural workers as I have described them.
So we should not expect those workers to subscribe to the conventional political values, and apparently a disproportionate number of them do not.
Moreover, though most actors and other creative film workers are not particularly intellectual, as cultural producers much in the public eye they have a natural affinity with public intellectuals, whom I found in my book Public Intellectuals: A Study of Decline (2001) to be split about two-thirds liberal, one-third conservative.
The situation of Hollywood's business executives, including investors in the film business, is different.
They are not cultural workers, and one expects their focus to be firmly on the bottom line.
It is true that the Hollywood film industry was founded largely by Jews and has always been very heavily Jewish, and that Jews of all income levels are disproportionately liberal.
But if Hollywood based its selection of movies to produce and sell on the political views of the studios' owners and managers, that would be commercial suicide, as competitors would rush in to cater to audiences' desires.
The idea that Hollywood is a propaganda machine for the Left is not only improbable as theory but empirically unsupported.
Hollywood produces antiwar movies during unpopular wars and pro-war movies during popular ones (as during World War II), movies that ridicule minorities when minorities are unpopular and movies that flatter them when discrimination becomes unfashionable, movies that steer away from frank presentation of sex when society is strait-laced and movies that revel in sex when society, or at least the part of it that consumes films avidly, turns libertine.
The Hollywood film industry follows taste rather than creating taste, as one expects business firms to do.
What troubles conservatives about Hollywood is less the promotion in movies of left-liberal policies than the breakdown of the old taboos.
Those taboos were codified in the Hays Code, which was in force between 1934 and 1968 with the backing of the Catholic Church.
The code forbade disrespect of religion and marriage, obscene and scatological language, sexual innuendo, and nudity.
The code was abandoned because of changing mores in society rather than because leftwingers suddenly took over Hollywood.
If conservatives bought the studios and reinstituted the Hays Code they would soon be out of business.
But what is true is that when movie audiences demand vulgar fare, then given that conservatives are more disturbed by vulgarity than liberals are, the film industry becomes less attractive to conservatives as a place to work in.
This may be an additional reason for the left-liberal slant of the industry.
But as long as the industry is an unregulated competitive industry, market forces will prevent studio heads and owners from imposing their own values on audiences; they must instead create movies that are in sync with audiences' values.
I agree with Becker that the "cash for clunkers" program is a silly one.
Like the bailout of the auto companies, the program had dual environmental and economic-recovery goals.
The environmental goal, to reduce carbon emissions, was trivial; the aggregate improvement in gas mileage from the program is certain to be minuscule.
The contribution to economic recovery was probably very small as well--possibly negligible.
The program was one of transfer payments, not government investment.
The distinction is important to Keynesian deficit spending (what is now referred to as "stimulus") as a method of fighting a depression.
The idea behind such programs is to replace deficient private investment with public investment, for example, the construction of a new highway.
The government hires a contractor who hires workers and by doing so increases employment, which raises incomes and therefore spending.
A transfer payment does not do that, at least immediately.
It is true that people who participated in the "cash for clunkers" program couldn't pocket rather than spend the money they received from the government, as they could with the other transfer payments included in the stimulus program; they had to use it to help them buy a new car.
But that is different from paying a road contractor to build a new highway.
The contractor as I said has to go out and hire people to build it, so unemployment falls (on the assumption, correct with regard to construction, that there is a high rate of unemployment in the industry).
The purchase of a new car merely reduces a dealer's inventory, and whether the reduction leads to new production will depend on estimates of future demand.
Those estimates are likely to be inverse to the success of the "cash for clunkers" program.
For, as Becker notes, the program may to a large extent merely have caused people to accelerate a previously determined intention to trade in their old car.
Timing is important; had the program been put into effect in the winter, the buying spurt that it induced might have had a bracing effect on consumer confidence.
But by August the economy had sufficiently improved that the need for confidence-boosting measures that had no other effect on economic activity had waned.
Unlike Becker, I do not conclude from this unhappy episode that the Keynesian approach to fighting depression is misconceived.
The problem with the $787 billion stimulus package that Congress enacted in February, to which the "cash for clunkers" program was a belated addition, was that it was poorly designed and has been lackadaisically executed.
Roughly two-thirds of the program consists of transfer payments rather than public works, and because the Administration has failed to push the public-works components (it should have appointed an expediter to try to cut the red tape that smothers public projects), virtually all the stimulus disbursements to date have consisted of transfer payments (including tax reductions, which are not really transfer payments and do not put cash in people's pockets until they are reflected in reduced withholding or estimated tax payments, or in larger refunds when one files one's year-end return on April 15).
Keynesians recognize that timing is key to the success of a stimulus program in fighting an economic collapse.
The stimulus program should have been enacted last fall and heavily weighted in favor of public works concentrated in areas and industries of high unemployment, with provisions for cutting red tape even at the risk of a higher incidence of fraud and waste, which are constants in government programs.
The Treasury Department in its "white paper" of June 17 recommended the creation of a "Consumer Financial Protection Agency," and later followed up with a detailed legislative proposal for a "Consumer Financial Protection Agency Act of 2009." The proposal is pending in Congress.
Although the Agency's remit would not be limited to mortgages, risky mortgage lending is the Act's principal target.
The supporters of the Act maintain that quite apart from instances of fraud, which are punishable under existing law, many consumers were unable to deal sensibly with the terms of the mortgages that were offered to them during the housing boom of the early 2000s, which peaked in March 2006 and then deflated, bringing down much of the rest of the economy with it, as we know.
The mortgage bankers and other sellers of residential mortgages often did not require that prospective buyers demonstrate that they had the financial wherewithal to be able to repay the mortgages; mortgages that required no down payment were sold, often to people of quite limited financial means; prepayment penalties were common, which make it costly to refinance a mortgage if interest rates fall; and many mortgages were "ARMs"--adjustable-rate mortgages, which specified low "teaser" rates for the first few years followed by higher rates when at the end of the teaser period the rates were "reset."
A recent, and very thorough, article, by Oren Bar-Gill, "The Law, Economics and Psychology of Subprime Mortgage Contracts," 94   Cornell Law Review   1073 (2009), argues that many consumers made themselves worse off by taking out mortgages during the boom (in fact the bubble) period because they could not respond rationally to the offers by the sellers of mortgages.
Many of them could not compare the terms of alternative mortgages (say a conventional 30-year mortgage and an ARM) because the terms were not stated in an intelligible fashion.
In addition many consumers were afflicted by "myopia" and "optimism." "Myopia" in this context means inability to give proper weight to future costs--for example, higher interest rates when the mortgage resets; they do not look behind the "teaser" rates even though the reset rates are disclosed.
Optimism in this context refers to exaggerating one's future economic prospects--unrealistically believing that either one's income will increase or housing prices will continue rising and by doing so enable one to refinance the mortgage on attractive terms--one's equity will have increased because the amount of the mortgage is fixed.
Bar-Gill's concern with inadequate disclosure of the annual percentage interest rate of mortgages does not present a novel regulatory issue.
The Truth in Lending Act requires disclosure of the annual percentage interest rate of a mortgage or other consumer loan (APR), and if the requirements are inadequate (Bar-Gill believes that the APR is not required to be disclosed early enough in the negotiations over the mortgage), or violations not punished severely enough to deter, the Act can be amended.
But neither the Truth in Lending Act nor any other statute or regulation, so far as I know, requires that mortgage offers be designed to discourage choices based on myopia or optimism.
Bar-Gill himself recommends only requiring earlier and clearer disclosure of APR, though he describes this as a first step in purging the mortgage market of irrationality, rather than a complete solution to the problems he sees.
His analysis is based, as he explains, on findings by behavioral economists, who investigate departures from rationality in economic decision making.
But like them, he does not make clear what he means by "rationality." It cannot mean full information, or the ability to process information flawlessly, because these conditions are rarely met in any area of human activity.
It does, however, imply consistency and the avoidance of fallacies that cause serious harm, financial or otherwise, to people who harbor them.
It is unclear that either myopia or optimism in the sense in which Bar-Gill uses these terms is irrational.
It might seem that if the discounted annualized present cost of an ARM is higher than that of a fixed-rate mortgage, anyone who prefers the former is irrational: he is paying more than he has to.
But that conclusion depends critically on the discount rate, which differs from person to person.
Some people have very low discount rates; they save a lot of money, or they incur substantial costs in an education that will yield a commensurate increase in earnings only after many years.
Other people have high discount rates; they live for the present.
These people are not irrational.
The difference between them and people with low discount rates is a matter of personality rather than of cognition.
If you have a high discount rate, the low teaser rate in an adjustable-rate mortgage may be a good deal more attractive than the high reset rates.
You are "irrational" only from the perspective of low-discount-rate persons, such as Professor Bar-Gill, who has two doctorates, two masters degrees, and a total of 13 years of education after high school.
Optimism is also a personality trait, and, as it happens, one essential to human progress.
As I have argued elsewhere with reference to our current economic situation, what Keynes called "animal spirits" and, alternatively, confidence or optimism are essential to entrepreneurship because of the great uncertainty of a business environment.
Someone who invests in building a factory that will not produce anything for years is taking a big risk of failure, and because it is a risk that cannot be reliably quantified he is taking a leap of faith, and he will not do that unless he happens to have an optimistic outlook.
It is not that rationality implies such an outlook, but that rationality is not inconsistent with it.
Optimists are often disappointed, but sometimes are richly rewarded for the risks they take; and as long as the prospect for such rewards confers on them greater ex ante utility than more cautious, pessimistic decisions would do, they are not behaving irrationally.
Nothing ventured, nothing gained is the credo of the optimist and the terror of the pessimist, but neither reaction is irrational.
The optimist and the pessimist just have different personalities.
Bar-Gill has made a value judgment rather than an economic judgment.
Now it is possible that the kind of wet-blanket regulation that he might favor if he thought it feasible--which is the kind of regulation that the sponsors of the Consumer Financial Protection Agency Act very much do favor--could be defended on macroeconomic grounds, as conducing to economic stability.
Had there not been in the early 2000s a strong market for risky mortgages, there would have been fewer defaults when the housing bubble burst and therefore less damage to the solvency of the banking industry.
But whether the proposed Act would do anything to limit risky mortgage lending is unclear.
It would authorize the Consumer Financial Protection Agency to require that a prospective mortgagor be shown, and entitled to choose, a "plain vanilla" mortgage that would be very short and easy to read and would alert the mortgagor to the various risks created by different mortgage terms.
But if people have high discount rates and (or) are highly optimistic, disclosure of alternatives will not affect their choice.
So the stability issue narrows to how many mortgagors there are who, if only the alternatives to a risky mortgage were presented clearly to them, would forgo the risky option.
Doubtless there are some; Bar-Gill cites persuasive evidence of that.
But enough to prevent another housing bubble? That seems unlikely, but is in any event unproven.
The focus of the Administration's health-care plan, and of its campaign to enlist public support for the plan, is dissatisfaction with health insurance.
To see the problem--or whether there is a problem--compare health insurance to fire insurance.
Almost everyone has fire insurance (even if he doesn't want it, invariably it is required by the mortgagee, if there is one).
The reason is that a fire can wipe out a big part of most people's wealth, and, given declining marginal utility of income, which makes most people prefer a certainty of obtaining a million dollars to a 50 percent chance of obtaining $2 million (and a 50 percent chance of nothing), the cost of fire insurance is a good investment.
The insurance company knows how much it may have to pay if there is a fire because the insurance policy has a dollar limit.
If someone is convinced that his house is fireproof and therefore fire insurance would be of no value to him, and therefore refuses to buy it, the insurance premiums charged the buyers of fire insurance will be slightly higher (because his being in the pool would have reduced the expected cost to the insurance company).
But no one is concerned with this, because very few people opt out of fire insurance.
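The declining-marginal-utility argument above can be sketched numerically. The square-root utility function below is an illustrative assumption standing in for any concave utility, not a claim about actual preferences:

```python
import math

def expected_utility(lottery, u):
    """Expected utility of a lottery given as (probability, wealth) pairs."""
    return sum(p * u(w) for p, w in lottery)

u = math.sqrt  # any concave function exhibits declining marginal utility

sure_thing = [(1.0, 1_000_000)]
gamble = [(0.5, 2_000_000), (0.5, 0)]

# Same expected wealth ($1 million) in both cases, but the certain outcome
# yields higher expected utility, which is why insurance is a good investment.
print(expected_utility(sure_thing, u))  # 1000.0
print(expected_utility(gamble, u))      # about 707.1
```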
Health insurance is different superficially because of the extreme variance in costs of medical treatment; some people have medical conditions that cost literally millions of dollars to treat.
But this is a problem in other forms of insurance as well, such as liability insurance in which the insurer undertakes to pay the insured's legal expenses, which can be astronomical; and insurers deal with such difficult-to-estimate risks through reinsurance and large deductibles.
Health insurers, if left to themselves, generally refuse to insure the cost of treating pre-existing conditions; but that is no different from a life insurer that refuses to issue a policy (or charges more for it) to someone whom a medical exam reveals to have a short life expectancy.
Prudent people buy life insurance when they're young and in good health.
Health insurers often cancel an insurance contract, or refuse to renew it, after discovering that the insured is in bad shape and likely to cost the company a great deal in the future.
Fire insurers and automobile insurers often do the same thing.
If people want to have lifetime protection, they have to pay higher premiums but it is hard to see why health insurers would refuse to offer such contracts; in fact some people do have such health insurance.
There are several puzzling aspects to health insurance, one of which, however, is rather easily solved, and that is the fact that a significant fraction of the population has no private health insurance.
If your house burns down and is uninsured, tough luck.
But if you get sick and have no insurance and no money, you can still get treatment at the nearest hospital emergency room.
(You will be billed, and if you have enough money you will have to pay the bill.) If you have no money, you're a free rider, but the amount of free riding is kept down by the cost that emergency rooms impose on patients by making them wait--and a queuing cost is a real cost to the people forced to stand in the queue.
Many of the uninsured are young and healthy; they are like the person with the fireproof house.
If they were forced to insure, therefore, premiums for health insurance might fall, though this is highly uncertain.
Many of the uninsured, rather than being young and healthy, are uninsured because of pre-existing medical conditions that imply that these people will incur abnormally high costs of treatment in the future.
Medicaid, charity treatment in emergency rooms of hospitals, and Medicare when utilized by indigent people constitute a form of poor relief.
There is no reason why Medicare shouldn't be means-tested; people who can afford medical care should pay for it themselves.
The fact that, because of tax subsidy, most health insurance is offered as an employment benefit screws up the health-insurance system considerably.
Not only does the subsidy result in giving people more medical benefits than they would want if they had to pay the full, unsubsidized price.
They lose the insurance if they lose their job or if the employer cancels the group insurance policy, and when they seek new insurance they may find themselves turned down, or made to pay a very high price, because of their age or because they now have a pre-existing condition.
If people were willing to pay high premiums, and accept high deductibles and copayments, they could buy health insurance policies that would give them lifetime protection against all major medical problems they might encounter.
But people are not willing to pay high premiums or (mysteriously) to accept high deductibles and substantial copayments.
They prefer to take a chance on their employer-supplied health insurance and on making it to 65 (Medicare eligibility age) without going broke as a result of a medical condition for which they are not adequately insured.
And if they have no employer-supplied health insurance they may decide to do without and hope for the best even if they could afford to buy an expensive individual policy.
Repealing the deductibility of employer-supplied medical benefits from federal income tax, and instituting a means test for Medicare, would reduce the demand for, and therefore the total cost of, medical services and reduce the federal deficit as well, since Medicare costs the federal government more than $300 billion a year.
Since Medicare would cover fewer people, there would be less need to institute procedures designed to limit expense by limiting treatments--something people fear, whether rationally or not.
The biggest problem besetting the Administration's program of health reform is how to pay for it.
The heart of the program is extending insurance coverage to tens of millions of people who at present are not insured.
This will cost more than $100 billion a year just in subsidies, but the total cost will be higher because demand for medical services will rise.
At present, people who are not insured are billed directly for medical services.
Often they cannot pay, but then their credit takes a hit, or they are forced into bankruptcy.
And emergency rooms use queuing to increase the cost of their services to the indigent.
When the uninsured become insured, the marginal cost of medical services to them falls to the copayment or deductible that they are charged; the total price (pecuniary plus nonpecuniary) is now much lower, so more service is demanded, and prices to all consumers of medical services rise because supply is inelastic.
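A toy supply-and-demand calculation illustrates the mechanism. The linear demand curve and every parameter value below are assumptions for illustration only:

```python
def equilibrium_price(a, b, supply, copay_share):
    """Price at which quantity demanded, a - b * (out-of-pocket price),
    equals a fixed (perfectly inelastic) supply, when insured patients
    pay only copay_share of the market price out of pocket."""
    # a - b * copay_share * P = supply  =>  P = (a - supply) / (b * copay_share)
    return (a - supply) / (b * copay_share)

a, b, supply = 100.0, 1.0, 40.0  # assumed demand parameters and fixed supply

p_uninsured = equilibrium_price(a, b, supply, copay_share=1.0)  # full price paid
p_insured = equilibrium_price(a, b, supply, copay_share=0.2)    # 20% copayment

print(p_uninsured, p_insured)  # the market price paid to providers rises
```

With supply fixed, cutting the out-of-pocket share to 20 percent does not increase the quantity consumed at all; it simply bids the market price up until demand is choked back to the fixed supply, which is the sense in which prices rise for all consumers.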
Some advocates of extending coverage argue that it will reduce aggregate medical costs.
They point out that people may defer preventive care that might ward off an illness, or a worsening condition, that might cost more to treat than preventive care would have cost.
The other side of this coin is that preventive care may keep alive people who would have died, thus ending their demand for medical care.
But everyone dies eventually, and a very high fraction of total medical costs are incurred in the last few months of life.
Moreover, because of technological progress and the high value that people place on extending their life, medical expenses are growing far more rapidly than per capita income, and, as a result, postponing death imposes disproportionately greater costs on the next generation.
A partial offset, however, may be that greater and therefore more costly efforts may be undertaken to postpone death the younger the dying person is.
Preventive care can also be very costly, especially when it takes the form of expensive screening: screening costs are incurred by the healthy as well as the sick.
The most attractive form of preventive care, at least from a government budgetary standpoint (disregarding for a moment nonpecuniary benefits and costs, to which I'll return), is behavioral change: for example, safe sex as an AIDS preventive--or losing weight, or, more realistically, not gaining excessive weight in the first place, to prevent obesity.
Obesity has increased rapidly in the United States, to the point where, at present, more than half the adult population is overweight and 25 percent is obese.
A recent study estimates that the average obese person incurs annual medical expenses that exceed by 42 percent the average annual medical expenses of the non-obese; the aggregate excess cost is almost $150 billion a year.
Average expense is potentially misleading because of the shorter lifespan of unhealthy people.
However, I believe that except in cases of extreme obesity, the effect on lifespan is less than the effect in creating medically treatable conditions such as diabetes, joint problems, complications from surgery, and cardiovascular disease.
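The aggregate figure can be checked with back-of-the-envelope arithmetic. The 42 percent excess is from the study cited above, but the baseline annual spending level and the count of obese adults below are my own illustrative round numbers, not the study's inputs:

```python
def aggregate_excess_cost(baseline_spend, excess_fraction, n_obese):
    """Total excess medical spending attributable to obesity."""
    return excess_fraction * baseline_spend * n_obese

# 42% excess is from the cited study; the baseline annual spend and the
# number of obese adults are hypothetical round numbers for illustration.
total = aggregate_excess_cost(baseline_spend=6_200, excess_fraction=0.42,
                              n_obese=57_000_000)
print(f"${total / 1e9:.0f} billion per year")  # on the order of the $150 billion cited
```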
The economist Tomas Philipson and I have written about the economics of obesity.
We have pointed out that the decline in the price of fatty foods, along with the rise in the opportunity cost of physical activity (work is more sedentary than it used to be, so one has to invest extra time to get exercise, and television and video games have increased the utility that people derive from sedentary leisure pursuits), explains the dramatic long-term increase in the percentage of Americans who are seriously overweight.
It might seem that if people derive greater utility from consuming fatty foods in large quantity than the costs in illness and medical care, the increase in obesity actually is optimal from an economic standpoint.
But there are three reasons to doubt this.
The first is that the obese externalize part and probably most of the excess medical costs that their condition imposes, because health insurers (including Medicare) generally do not discriminate on the basis of weight.
The second reason to doubt that we have the optimal amount of obesity is that high and rising aggregate health costs, because financed to a large extent by government, are contributing to the serious fiscal problems of the United States: the United States has a soaring national debt that may have very grave long-term consequences for America's prosperity.
Obesity thus has potential macroeconomic significance.
Third, there is reason to doubt that the obese actually gain more utility from the behaviors that contribute to their obesity than the costs of obesity, which are not limited to medical costs but include discomfort, loss of mobility, discrimination by employers, and social ostracism by people who consider obesity repulsive or believe it signals lack of self-control, gluttony, or low IQ (or all three characteristics).
Obesity is inversely correlated with education.
Highly educated people are much more likely to be thin than people who are not highly educated.
This is partly but not only because highly educated people have on average higher incomes than other people.
They can afford more expensive foods, which tend to be lower in calories, and the cost of exercise, which can be considerable, as it may require joining a gym or hiring a personal trainer.
But income is not a complete explanation, because highly educated people in low-paying jobs, as many teaching (including college teaching) jobs are, tend to be thin.
But is this because one needs education to realize that eating fatty foods makes one fat and that fat people have medical and other problems that thin people do not? Surely not.
It is rather that educated people have better impulse control, or, in economic terms, a lower discount rate (the rate at which a future cost or benefit is equated to a present cost or benefit), than uneducated people do, on average at any rate.
To get an education means incurring present costs for future benefits, and that is less attractive the higher one's discount rate.
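The role of the discount rate can be made concrete with a simple present-value sketch; the cost, benefit, horizon, and rates below are purely hypothetical:

```python
def present_value(amount, rate, years):
    """Discount a future amount back to the present at a given annual rate."""
    return amount / (1 + rate) ** years

# Hypothetical numbers: education costs 50 units today and pays 200 units in 10 years.
cost_today = 50.0
future_benefit = 200.0

pv_patient = present_value(future_benefit, 0.03, 10)    # low discount rate: ~148.8
pv_impatient = present_value(future_benefit, 0.20, 10)  # high discount rate: ~32.3

# The patient person finds the investment worthwhile (148.8 > 50);
# the impatient person does not (32.3 < 50).
```

The same comparison applies to dieting: forgoing fatty foods is a present cost incurred for a distant health benefit, and a high discount rate shrinks that distant benefit toward zero.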
Moreover, intelligent people derive greater benefits from education in terms of present enjoyment and future income than unintelligent people do, and intelligence implies lower costs of foreseeing the consequences of one's actions: it is easier for an intelligent person than for an unintelligent one to realize the consequences of indulging a taste for fatty foods, given that obesity is not an immediate consequence of eating such foods.
Low-IQ people (and many high-IQ ones as well) may also fail to realize how much more difficult it is to lose weight than to avoid gaining weight in the first place.
A further problem with people of low intelligence and (what goes with it) low income is poor parenting, as a result of which children grow up with bad eating habits, including excessive consumption of fatty foods; these habits may be difficult to break in adulthood.
If the unintelligent face greater costs of foreseeing the consequences of eating fatty foods, that is an argument for providing them with greater information about those consequences, to offset their deficit in understanding.
Maybe with full knowledge the unintelligent would be willing to incur the costs, in somewhat more expensive food and in fewer sedentary leisure pursuits, of avoiding becoming obese.
So aggregate utility might actually be increased, as well as aggregate medical costs reduced, by an effective campaign of warning people about the consequences of eating fatty foods.
I do not think that government should regulate behavior on the premise that it knows better what makes people happy than people themselves do; but controlling external costs is or should be an uncontroversial governmental function.
Such an educational campaign as I have suggested would be a cheap form of preventive care, but would it be effective? The evidence is mixed, but a 2008 review article by Lisa Harnack and Simone French in the International Journal of Behavioral Nutrition and Physical Activity finds that labeling restaurant menus with calorie information does reduce consumption of high-calorie foods.
Conjoined with reduced calories in school lunches, elementary- and high-school courses in nutrition, and warnings in food advertising and labeling similar to the warnings in cigarette advertising and labeling, the prevalence of obesity might be reduced at slight cost--possibly to the benefit of almost everyone except the sellers of fatty foods.
One of the health-care-reform bills pending in the Senate would relax legal limitations on "discrimination" by private group-health insurers; that is a step in the right direction, as are growing efforts by employers to encourage their workers to control weight (the motive is to reduce the cost of health insurance to the employer).
Medicare could be modified to charge thin people lower fees.
In addition, a calorie-based food tax (which would, for example, fall heavily on sugar-flavored soft drinks) would reduce obesity at negative cost to the public fisc.
Such a tax may seem "unfair" to people who consume such foods but are thin, but this is just to say that the tax would be at once a regulatory and a revenue tax, and in the latter aspect would be subject to criticism only if it were an inefficient tax relative to alternative methods of taxation.
I am distrustful of international economic comparisons, especially when only two countries are compared, in this instance the United States and Germany.
Countries differ in a large number of respects that bear on economic performance; in a large sample, many of those differences may cancel, but a sample of two is tiny.
Becker provides a plausible explanation of Germany's lower unemployment rate, but is it correct? According to the Bureau of Labor Statistics, Germany's is not the lowest of the current unemployment rates among the ten major countries that the BLS has compared after adjusting those countries' rates to correspond to U.S. concepts of unemployment.
France has the highest unemployment rate—9.6 percent—followed by the United States at 9.5 percent.
The German rate is 7.5 percent.
Australia, Canada, Japan, and the Netherlands have substantially lower unemployment rates.
The U.K. unemployment rate, surprisingly, is only slightly above the German rate.
Sweden and Italy complete the picture, with unemployment rates roughly halfway between the German and the American.
Given European labor laws, it is surprising that the unemployment rate in France is only trivially higher than the U.S. rate and that the unemployment rates in Italy, the Netherlands, and Sweden are significantly below the U.S. rate, though Swedish labor laws, at least, are fairly loose, and the Italian unemployment rate is believed to be underestimated.
But why should Canada and Japan have unemployment rates substantially below the U.S. level?
I venture the following guess.
All these countries except Australia had, before the economic crisis, a higher personal savings rate than the United States.
And all without exception derived a much higher fraction of their national income from exports than the United States.
Because of our very low personal savings rate, and the (related) fact that most savings were in the form of home equity and (directly or indirectly) common stock, the crash of 2008 ushered in a protracted period of stagnant consumption spending as frightened American consumers increased their personal savings rate from 1.7 percent three years ago to 6.4 percent today.
Producers and distributors in the third quarter of 2008 and in 2009 could foresee a sustained period of subpar demand, and so laid off many workers and have been slow to rehire them, anticipating that demand for goods and services will not increase substantially for some time.
In other advanced economies, higher personal savings and greater reliance on export earnings created expectations of more rapid economic recovery.
Consumers would have less incentive to increase their savings further and thus reduce consumption; so businessmen could expect demand to revive sooner than in the United States.
They could also expect their export markets to rebound soon, knowing that major importers of technologically advanced manufactured goods, such as China, India, and Brazil and other Latin American countries, had been hit less hard by the economic crisis.
The quicker a solid economic recovery is expected to occur, the less prone employers are to lay off workers, because laying off and rehiring are costly, and those costs may well exceed the cost of carrying employees through a short period in which they are not busy because demand for the employer's product is weak.
I agree with Becker that the Obama Administration’s overly ambitious legislative program and intermittent outbursts of hostility to business are hurting our economic recovery.
But I would be inclined to give greater weight, in explaining our stubbornly high unemployment, to the other factors that I have mentioned.
A company or other organization, or an individual, is insolvent when its liabilities (what it owes) exceed the market value of its assets.
Bankruptcy is a legal mechanism for liquidating or reorganizing an insolvent entity in a way that maximizes value for the creditors.
When a firm is insolvent, each of its creditors is eager to be repaid what he is owed, out of the firm’s assets.
By definition those assets are insufficient to satisfy all the creditors’ claims, so the creditors race to obtain judgments, which are then satisfied by sale of the firm’s assets, perhaps at fire-sale prices because of the race.
Even if the firm could be saved as a going concern by eliminating some of its debt burden (its liabilities), transaction costs will make it difficult for the creditors to agree on how far their respective claims will be written down—how in short to share the grief.
In a bankruptcy proceeding, the creditors are barred from suit and the judge supervises an orderly disposition of assets (whether by liquidation or by placing them in a reorganized entity) designed to maximize their value and hence the creditors’ ultimate return.
Bankruptcy is not limited to individuals and business firms; under U.S. law, even a city can be declared bankrupt; and this happens occasionally.
In one Illinois town, the bankruptcy judge ordered the sale of city hall to satisfy creditors' claims.
U.S. states cannot be subjected to bankruptcy proceedings, and neither can the federal government, or the governments of other nations.
But that doesn’t mean that a state or nation can’t be insolvent.
Insolvency is the condition; bankruptcy is a method of treating the condition.
A nation has creditors in both a narrow and a broad sense.
In the case of our federal government, they are of four types: owners of federal securities (Treasury bonds, and shorter-term securities called Treasury notes and Treasury bills); other persons or firms that have contracts with the federal government, for example for sale of goods or services to it; holders of federal entitlements, such as social security, Medicare, Medicaid, and the pensions of retired federal employees; and beneficiaries of government services (as distinct from transfer payments), such as drivers on interstate highways and visitors to national parks, as well as the population at large, which is protected from crime and foreign aggression by federal police and military forces.
The first two categories of holders of government “debt” in a broad sense—owners of government bonds and holders of government contracts—correspond closely to the creditors of private companies.
The third does not because federal entitlements can always be cut with impunity, from a legal standpoint; and the fourth are not entitlements, but services that are funded by annual congressional appropriations and so can be altered without being thought to disrupt settled expectations; they are the domain of “discretionary” government spending, though in a legal sense entitlements are discretionary also rather than being fixed and legally enforceable obligations.
But remember that insolvency is the condition, bankruptcy merely a treatment for the condition; and a condition is not less grave just because the best treatment for it is unavailable—in fact the condition is more serious in that case.
These reflections are suggested by the first issue (August 25) of a new publication by Morgan Stanley called Sovereign Subjects.
The first issue is captioned “Ask Not Whether Governments Will Default, but How.” It is a criticism of the conventional method of evaluating a nation’s economic condition, which is to compare public debt (government bonds) to Gross Domestic Product.
In the case of the United States, that ratio in percentage terms is 53 percent, which is high by historical standards but lower than that of a number of European nations, which are listed in the report (France, Germany, Greece, Ireland, Italy, Portugal, Spain, and the U.K.).
But as the report points out, this is not a proper way to determine solvency.
The proper way is to compare assets and liabilities.
The major asset of any government is its taxing power, which of course cannot be equated to the entire GDP, or the entire value of the nation’s human and physical capital; for both economic and political reasons, the taxing power is limited to a percentage of GDP well below 100 percent.
And, realistically, the liability side of the national ledger includes not only government bonds but also other contractual obligations, entitlements, and at least strongly anchored expectations concerning government services (we’re not about to eliminate our armed forces).
In addition, as especially emphasized in the Morgan Stanley report, the national balance sheet must be reckoned in dynamic terms, with due regard for likely increases both in GDP and in liabilities, especially increases in entitlement spending that are likely to result from the continued ageing of the population.
Because American tax rates are low by international standards and resistance to increasing them is fierce, Morgan Stanley’s report estimates that the ratio of current U.S. public debt to realistically realizable tax revenues is 3.58 to 1, the highest by a large margin among the countries on the report’s list; only Greece comes close (3.12 to 1).
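The reported ratio can be illustrated with back-of-the-envelope arithmetic; the GDP figure and the realizable-revenue share below are hypothetical stand-ins chosen to be consistent with the numbers quoted above, not figures taken from the report:

```python
gdp = 14.5e12               # hypothetical U.S. GDP, in dollars
public_debt = 0.53 * gdp    # debt at 53 percent of GDP, the figure cited above
tax_revenue = 0.148 * gdp   # a revenue share consistent with the reported ratio

ratio = public_debt / tax_revenue
print(round(ratio, 2))  # → 3.58
```

On these assumptions, a 3.58-to-1 ratio with debt at 53 percent of GDP would imply that only about 15 percent of GDP was judged realistically collectible in taxes.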
But America has certain advantages, such as a younger population and a more rapid rate of economic growth, and as a result its ratio of net worth to GDP is in the middle of Morgan Stanley’s list of countries—but it is strongly negative, as are the ratios of all the countries in the list (Italy, surprisingly, being at the top, and Greece, unsurprisingly, at the bottom).
According to Morgan Stanley’s calculation (which obviously is merely suggestive, as the report emphasizes, because of the uncertainty of the future), America’s net worth is negative, and that negative net worth is eight times our GDP.
This means that the net present value of the government’s liabilities, minus assets, is approximately $120 trillion.
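The two figures quoted above are mutually consistent only on a particular GDP assumption, which a line of arithmetic makes explicit:

```python
net_worth_multiple = 8      # net worth ≈ -8 × GDP, per the Morgan Stanley estimate
net_liabilities = 120e12    # ≈ $120 trillion of liabilities net of assets

implied_gdp = net_liabilities / net_worth_multiple
print(implied_gdp / 1e12)  # → 15.0 (trillion dollars, roughly U.S. GDP at the time)
```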
What does a firm or an individual do when it is broke and there is no bankruptcy regime? It defaults.
Nations do occasionally default on their bonds or other contractual obligations; or, if the bonds or other obligations are denominated in the local currency, they inflate the currency and so repay their obligations in cheaper money, which is the equivalent of a partial default.
The U.S. government is unlikely to do either of these things and therefore, in conventional parlance, is not insolvent.
But as the Morgan Stanley report insightfully emphasizes, the government has other “stakeholders,” and they are creditors in a loose but illuminating sense.
A cut in social security, Medicare, or other entitlements amounts to breaking the government’s promises to the holders of these entitlements, promises on which the holders have relied.
So does reducing government services (highways, police, etc.) or increasing taxes, which reduces the net value of those services.
The deeper the financial hole that the government has dug for itself by incompetent economic management—and our government has dug itself a very deep hole, largely because of the mismanagement of monetary policy and financial regulation by the Federal Reserve under Greenspan and Bernanke and by other government agencies—the more difficult it is to climb out of the hole on the backs of holders of entitlements and recipients of government services.
The political resistance is too intense.
It’s at that point that the bondholders, and holders of other contractual rights against the government, have to start worrying about the prospects for outright default or default through inflation.
These are possibilities in our future, just as in the future of Greece.
In my first book on the economic crisis (A Failure of Capitalism: The Crisis of ’08 and the Descent into Depression), which was completed in February 2009, I argued that the crisis should be called a depression rather than a recession, in part because of the enormous debts that the government was assuming in an effort to overcome the crisis.
In my second book (The Crisis of Capitalist Democracy), completed in January of this year, I further emphasized the potential long-term adverse consequences of the crisis, and argued that a depression or recession should not be considered over until GDP rejoins its growth path.
GDP in real terms is essentially unchanged from what it was two and a half years ago (2007), which means it’s roughly 7.5 percent below the growth path (which assumes 3 percent real growth annually), and suggests that it will be years before the economy gets back on it.
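The 7.5 percent figure can be checked with a short calculation; depending on whether one measures the shortfall as a fraction of the trend path or as the trend's excess over actual output, the answer is roughly 7.1 to 7.7 percent, consistent with the rounded figure in the text:

```python
trend_growth = 0.03   # assumed annual real growth along the trend path
years_flat = 2.5      # years over which real GDP has been essentially flat

trend_level = (1 + trend_growth) ** years_flat  # trend relative to flat GDP, ~1.077
gap_below_trend = 1 - 1 / trend_level           # shortfall as a fraction of trend, ~7.1%
excess_of_trend = trend_level - 1               # trend's excess over actual, ~7.7%
```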
I continue to insist that this is the proper way to evaluate an economic crisis.
Most journalists and many economists believe that the “recession” as they like to call it (or “Great Recession”—indicative of a mindless proliferation of labels) ended in the third quarter of 2009, when GDP began to increase, after having been flat in 2008 (though falling sharply in the last quarter) and falling in the first half of 2009.
But the current performance of the economy, and the likely political and long-term economic consequences, convince me that we are in the midst of a depression, much as we were in 1936 (before a sharp drop in 1937–1938), even though the economy had grown rapidly since the bottom of the depression in 1933.
Why is the economy so sluggish at present? The basic reasons are, I think, first, the reduction in household wealth, due to the fall in housing and common stock values (with the fall in housing values precipitating many foreclosures); second, the high rate of unemployment, underemployment, and reductions in wages and benefits; and third the continued weakness of the banks.
The reduction in household wealth increased the amount of leverage (debt-equity ratio) in consumers’ personal finances, and consumers have been deleveraging by increasing their personal savings rate (which has increased from 1.7 percent three years ago to 6.4 percent today), leaving them with less money for consumption.
One might think that today’s very low interest rates would discourage savings, but the other side of this coin is that savers must increase the amount of their savings in order to obtain the interest income they obtained when interest rates were higher.
With less consumption, there is less production and hence less private investment.
These effects are being compounded by the weakness of the labor market from the perspective of workers, which reduces incomes and, by increasing insecurity, increases the propensity to save.
And while banks are making good profits because of the very low interest rates at which they can borrow, they continue to hold many sick assets (mainly investments in home and commercial mortgages) on their books, making them reluctant to lend.
And anyway loan demand is way down.
Most borrowers from banks are either small businesses or consumers (large businesses tend to borrow by issuing bonds or commercial paper rather than by taking out bank loans), and neither group is in the mood for increasing its indebtedness.
One spur to recovery from a depression is the need to rebuild inventories and replace durable goods that have worn out.
This need may explain, along with the stimulus program enacted in February 2009, the growth in GDP that began in the third quarter of 2009 and seems now to be fading.
As long as demand for consumption goods is weak, sales will slow after inventories are restocked and worn-out durable goods are replaced.
Modern products tend to be highly durable and inventories tend to be much smaller than they used to be, so one cannot expect these standard spurs to economic recovery to have much staying power.
The uncertainties and long-term debt created by the Obama Administration’s excessively ambitious domestic programs (notably health care and financial regulatory reform), on top of the deficit spending of the Bush years, the plunge in federal tax revenues resulting from the depression-induced decline in taxable income, and uncertainty about which Bush tax cuts will be allowed to expire in the coming year, have impeded private investment.
They have done this by exacerbating concerns about what the economic picture will look like both in general and for individual businesses and consumers in the next several years, and perhaps much longer.
A further worry is the volatility of the stock market.
The tendency is to view the market’s gyrations as reflections of changing estimates of future corporate earnings and of efforts by investors in the market to guess what other investors are likely to do.
But when as at present a large fraction of the population has a significant part of its savings invested in the stock market, market volatility increases economic anxieties and thus dampens spending.
As long as private investment and interest rates remain very low, there is a case for further stimulus (deficit spending), especially since federal stimulus spending has been offset to a degree by reductions in state and local government spending.
Cautious or fearful consumers save in safe forms, such as insured bank deposits, Treasury bills, or cash.
Such savings are inert; under present conditions they do not get translated into productive investment, since banks are reluctant to lend but instead keep most of the deposits they receive in either cash or government securities.
The government, to which no one is afraid to lend, could put all these inert savings to work on infrastructure and other projects that would employ the unemployed.
That is true in principle, but because the Obama Administration botched the design, execution, and public relations of the $862 billion stimulus program (President Obama, despite his undoubted eloquence and intelligence, has proved to be a poor explainer of his economic policies), and because of the soaring public debt (to which the stimulus contributed), there is no political stomach for a further stimulus of any consequence.
What is to be done? With Congress in recess and the mid-term elections looming, probably very little.
The best hope may be that the President’s bipartisan deficit commission (the National Commission on Fiscal Responsibility and Reform) will issue a first-rate report.
(Its report is due December 1.) If the commission, chaired by President Clinton’s chief of staff, Erskine Bowles, and former Republican Senator Alan Simpson, produces an economically sound and politically palatable program for restoring the nation’s long-term fiscal soundness—a program to which far-reaching tax reform will be central—this may alleviate economic uncertainty and encourage more consumption and private investment in the near term.
I don’t put much weight on public opinion polls that show a drop in Americans’ optimism about the economic future of the country.
Optimism and pessimism are personality traits that condition people’s reaction to uncertainty, but they are also influenced by uncertainty.
This was one of Keynes’s insights.
When a sharp economic downturn creates the kind of uncertainty about economic prospects that we’re now observing, people’s “animal spirits” (his term for optimism) droop; hoarding increases and entrepreneurship flags.
These are rational responses to uncertainty, but they do not predict a nation’s future economic performance.
There are, however, objective reasons to worry that the nation’s economic growth may be stunted for years to come.
One reason is the high level of international political instability threatening U.S. interests (think of Pakistan, Iran, Afghanistan, Iraq, Lebanon, Yemen, and North Korea, and of potentially tense relations between the United States, on the one hand, and Russia and especially China, on the other), which will require the United States to continue to spend disproportionately on national security.
A second reason for concern is the looming fiscal crisis in the United States, as a result of diminished tax receipts, huge and growing federal public debt, uncontrolled entitlement spending, and deterioration in state and local government finances.
And third is our government’s inability to address in a serious fashion, let alone solve, a host of serious structural problems.
We badly need tax reform, education reform, health reform, immigration reform, environmental reform (with emphasis on climate control and preserving biodiversity), repeal of the drug laws, a reduction in economic inequality (with particular emphasis on improving the lot of the black underclass), a better transportation network, and a slimmed-down public sector.
Our political system seems incapable of achieving improvements in any of these areas.
So we need reform of the political system, but that is blocked by a combination of incumbent self-interest and a constitutional structure optimized for eighteenth-century conditions.
In assessing the likely impact of these factors on our economic future, we need to distinguish between per capita income and a more inclusive measure of social welfare—call it happiness or utility (though, depending on how “happiness” is defined, it might better be regarded as a component of utility; see Becker’s comment on a well-known paper by Stevenson and Wolfers, which together with the paper can be found at http://bpp.wharton.upenn.edu/jwolfers/Papers/EasterlinParadox.pdf, analyzing the correlation across countries of happiness with income).
Per capita income is a proxy for and input into happiness, but it is not a synonym for it.
If there is diminishing marginal utility from increased income, then an increase in income inequality, which we have been experiencing of late and which may continue, may offset the effect of an increase in average income on utility.
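A toy calculation with logarithmic utility (a standard stand-in for diminishing marginal utility; the incomes are hypothetical) shows how rising inequality can swamp rising average income:

```python
import math

equal = [50_000, 50_000]     # mean income 50,000, no inequality
unequal = [100_000, 10_000]  # mean income 55,000, high inequality

def total_utility(incomes):
    # Log utility: each extra dollar is worth less the richer one already is.
    return sum(math.log(y) for y in incomes)

# Average income is 10 percent higher in the unequal economy,
# yet total utility is lower.
print(total_utility(equal) > total_utility(unequal))  # → True
```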
So also an increase in the average age of retirement as a result of increasing the age of eligibility for full social security benefits (one reform that is at least conceivable), or other increase in hours worked, are offsets to increased per capita income.
But against this is the seemingly continuous increase in the quality and variety of most products and services.
(Not all: airline service has deteriorated markedly since the 1990s and traffic congestion has increased.) Increased longevity, and the greater youthfulness of most middle-aged and elderly people, are only the most dramatic signs of improvements in welfare that are related to but not adequately proxied by increases in average personal income.
Innovation is costly, so if the rate at which per capita income rises declines, the rate of improvement in the quality and variety of goods and services will slow.
Nevertheless that improvement will continue even if per capita incomes stagnate.
While, as Becker points out, per capita income has been growing in the United States for a long time at an average annual rate of about 2 percent, happiness has been growing at a much lower rate, if at all.
People in 1810 were not sad at the thought of how much better people would be living in 2010; and people in 2010 take for granted improvements in product quality before they were born.
One has to be a certain age to appreciate these improvements; the first car I drove was a 1953 Pontiac.
The issue of increased per capita income over time is often discussed in terms of parental preferences: parents want their children to be better off than they.
Here I think the distinction between income and welfare is critical.
I believe that most parents want their children to be happy (though some want them to be successful even at some modest cost in happiness) rather than to be financially better off than the parents themselves.
The farther down one goes in the income distribution, however, the closer the correlation between income and happiness; this is an implication of diminishing marginal utility of income.
So poor people, or people who are not poor but are beset by anxieties, do want their children to be better off financially than themselves; but I do not think people who are well off do, or at least should, want their children to have higher incomes than they.
Parental altruism implies concern for children’s welfare, rather than for children’s incomes per se; and the higher a family’s standard of living, the less likely an increase in that standard in the next generation is to increase happiness.
A frequent and I think sound criticism of the Obama Administration’s economic policy is that it supports both programs that promote economic recovery from the severe recession that began in December 2007 and programs that retard that recovery, thus sending a mixed signal that unsettles the business environment and retards recovery.
This mixed message is no surprise, however, because a President is a politician.
Has there ever been a President who consistently pursued sound economic theory? Criticism of politicians must be tempered by recognition of political reality.
You don’t have to be a conservative to think it a bad idea to promote unionism in an economy struggling to climb out of a deep economic hole; you can be a Keynesian.
A principal goal of unions, of course, is to raise wages, though in doing so they cause employment to fall by raising the cost of labor relative to the cost of capital.
Keynes emphasized (though the point was not original with him) that workers strongly resist cuts in their nominal wages, where “nominal” means the dollar amount of the wages and is contrasted with “real,” which means the purchasing power of the wage.
In an economic downturn, an employer who thinks it infeasible to reduce the nominal wages of his employees will have to lay off workers so that his costs of production are not excessive in relation to the diminished demand for his product.
Therefore the higher the nominal wages of employees, the more unemployment will be generated by an economic downturn.
Keynes thought moderate inflation should be part of a strategy of recovery from an economic downturn, because nominal wages tend to be sticky on the upside as well as on the downside (though less so), and so the employer’s real wage bill would fall in an inflation, though only initially—until the workers caught on to the fall in the purchasing power of their wages and demanded a raise.
(This theory of “profit inflation” is no longer accepted by most economists.)
Against all this it might be argued that if employers cut wages rather than laying off workers, the reduction in wages would reduce the amount of money that the workers could spend on consumption; and economic recovery depends on increased spending on consumption (plus investment).
But the stickiness of wages, which makes employers prefer layoffs (with the result in the Great Depression, which involved severe deflation, that many workers who retained their jobs experienced a substantial increase in their real wage), is not primarily a result of unionism, or even of worker pressures.
Employers don’t like to make steep across-the-board wage reductions in an economic downturn, because the reductions frighten and distract the workers.
Layoffs, in contrast, enable reductions in overhead as well as in wages and concentrate unhappiness on former workers (those who are laid off), whereas an across-the-board wage reduction demoralizes current workers.
Collective bargaining agreements, which are contracts between a union and an employer, usually forbid the employer to reduce wages during the contract’s term (usually three years).
But the main reason why unions make it more difficult to recover from a recession or depression is that by raising an employer’s labor costs they cause the employer to lay off more workers in an economic downturn than if those costs were lower.
Think of how the United Auto Workers had swollen the labor costs of the Detroit automakers, so that if the government had not stepped in and bailed out General Motors and Chrysler in the spring of 2009 the loss of employment as a result of the economic downturn would have been catastrophic.
It is important to realize, moreover, that higher wages are often not the major source of higher labor costs in unionized firms.
Benefits (such as health insurance) are very important—critically so in the collapse of GM and Chrysler.
And often the major source of higher labor costs in unionized firms is union-negotiated limitations on the employer’s control over his labor force: the employer’s freedom to assign tasks, to discipline unsatisfactory workers, and to determine the order of layoffs.
Unions bureaucratize labor relations.
Almost at the bottom of the economic plunge—July 2009—the federal minimum wage rose, pursuant to legislation passed by the Democratic-controlled Congress in 2007, to $7.25 an hour.
That is just the sort of thing one doesn’t want to happen in a recession.
Unions strongly support the minimum wage in order to reduce competition from nonunion workers, but raising the wage retards recovery by increasing sellers’ labor costs.
Unions are weak in the private sector; only about 7 percent of private workers are unionized.
But unions are powerful in the public sector—about 30 percent of public employees are unionized—and have contributed to the high wages of such employees.
By swelling the labor costs of cities and states, these high wages have forced them to raise taxes and cut benefits in the midst of the most severe economic downturn since the Great Depression.
Even in the private sector, though unions are weak, employers are concerned that the pro-union policies of the Obama Administration will result in greater unionization and hence higher labor costs.
This concern is a source of uncertainty, which slows economic activity.
Under uncertainty consumers increase their savings (much of which may not get invested productively, at least without a considerable lag) and producers increase their cash balances.
The Administration is not uniformly pro-union.
It rightly cares more about education than about unionism, and as a result has clashed with the teachers’ unions; this is one of the Administration’s most laudable endeavors.
It has not pushed hard for (though it supports) enactment of the Employee Free Choice Act, which by abolishing the secret ballot in elections for collective bargaining representative and instituting compulsory arbitration of union-management disputes would significantly shift the balance of power from management to labor.
It did condition the bailout of the Detroit automakers on preserving substantial, and completely unjustified, union benefits.
The overall terms of the bailout weakened the union, but not as much as if the bankruptcy of the automakers had been allowed to proceed without government intervention.
The Administration has, however, under union pressure dragged its feet on signing free-trade agreements that have been negotiated with South Korea and other countries.
This would not retard our economic recovery if the net effect were to increase our exports relative to our imports, for exports increase domestic production and hence employment, while imports tend to reduce it.
But because of retaliation by foreign countries that want to increase their own exports and reduce imports, the effect of the Administration’s foot dragging is simply to reduce the efficiency of the U.S. economy.
Also allegedly under union pressure, the Administration delayed suspending (as it is empowered to do in an emergency) the Jones Act, which protects the U.S. maritime industry from foreign competition, to enable foreign vessels to assist in combating the oil leak in the Gulf of Mexico.
Worse, the Administration has required that all projects funded by the $787 (now $862) billion stimulus enacted in February 2009 comply with the Davis-Bacon Act, which requires payment of union wages.
Recently the President signed an executive order requesting all federal agencies to consider requiring all federal construction contractors to sign labor agreements.
And he has said silly things like “labor is not part of the problem. Labor is part of the solution.” These are just words, but they worry business by creating the impression that the President is hostile to it, and they increase the uncertainty of an already uncertain business environment.
The pro-union policies of the Roosevelt Administration, notably the National Labor Relations Act (the Wagner Act), are generally believed to have made the Great Depression worse than it would have been without those policies.
The Obama Administration’s pro-union policies will in all likelihood worsen our current economic situation.
An article in the New York Times of September 20 by Louise Story, entitled "Many Women at Elite Colleges Set Career Path to Motherhood," reports the results of surveys and interviews concerning career plans of women at the nation's most prestigious colleges, law schools, and business schools.
Although not rigorously empirical, the article confirms--what everyone associated with such institutions has long known--that a vastly higher percentage of female than of male students will drop out of the work force to take care of their children.
Some will resume full-time work at some point in the children's maturation; some will work part time; some will not work at all after their children are born, instead devoting their time to family and to civic activities.
One survey of Yale alumni found that 90 percent of the male alumni in their 40s were still working, but only 56 percent of the female.
A survey of Harvard Business School alumni found that 31 percent of the women who had graduated between 10 and 20 years earlier were no longer working at all, and another 31 percent were working part time.
What appears to be new is that these earlier vintages did not expect to drop out of the workforce at such a high rate (though they did), whereas current students do expect this.
That is not surprising, since the current students observe the career paths of their predecessors.
So, contrary to the implication of the article, there is no evidence that the drop-out rate will rise.
The article does not discuss the interesting policy issues presented by the disproportionate rate of exit of elite women from the workforce.
Nor does it have much to say about why women drop out at the rate they do.
The answer to the latter question seems pretty straightforward, however.
Since like tend to marry like ("assortative mating"), women who attend elite educational institutions tend to marry men who attend such institutions (and for the further reason that marital search costs are at their minimum when the search is conducted within the same, coeducational institution).
Those men have on average high expected incomes, probably higher than the expected incomes even of equally able women who have a full working career.
Given diminishing marginal utility of income, a second, smaller income will often increase the welfare of a couple less than will the added household production if the person with the smaller income allocates all or most of her time to household production, freeing up more time for her spouse to work in the market.
The reason that in most cases it is indeed the wife (hence my choice of pronoun) rather than the husband who gives up full-time work in favor of household production is not only that the husband is likely to have the higher expected earnings; it is also because, for reasons probably both biological and social, women on average have a greater taste and aptitude for taking care of children, and indeed for nonmarket activities generally, than men do.
But it is at this point that policy questions arise.
Even at the current very high tuition rates, there is excess demand for places at the elite colleges and professional schools, as shown by the high ratio of applications to acceptances at those schools.
Demand is excess--supply and demand are not in balance--because the colleges and professional schools do not raise tuition to the market-clearing level but instead ration places in their entering classes on the basis (largely) of ability, as proxied by grades, performance on standardized tests, and extracurricular activities.
Since women do as well on these measures as men, the student body of an elite educational institution is usually about 50 percent female.
Suppose for simplicity that in an entering class at an elite law school of 100 students, split evenly among men and women, 45 of the men but only 30 of the women will have full-time careers in law.
Then 5 of the men and 20 of the women will be taking places that would otherwise be occupied by men (and a few women) who would have more productive careers, assuming realistically that the difference in ability between those admitted and those just below the cutoff for admission is small.
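The arithmetic of this hypothetical can be laid out in a few lines; the figures are the ones stipulated in the example above, not data:

```python
# Stipulated hypothetical: an entering class of 100 at an elite law school,
# split evenly by sex; 45 of the men but only 30 of the women will have
# full-time legal careers. All numbers come from the example in the text.
class_size = 100
men = women = class_size // 2          # 50 men, 50 women

men_full_time = 45
women_full_time = 30

# Places held by students who will not have full-time careers:
men_not_full_time = men - men_full_time        # 5
women_not_full_time = women - women_full_time  # 20

print(men_not_full_time, women_not_full_time)  # 5 20
```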
While well-educated mothers contribute more to the human capital of their offspring than mothers who are not well educated, it is doubtful that a woman who graduates from Harvard College and goes on to get a law degree from Yale will be a better mother than one who stopped after graduating from Harvard.
But I have to try to be precise about the meaning of "more productive" in this context.
I mean only that if a man and woman of similar ability were competing for a place in the entering class of an elite professional school, the man would (on average) pay more for the place than the woman would; admission would create more "value added" for him than for her.
The principal effect of professional education of women who are not going to have full working careers is to reduce the contribution of professional schools to the output of professional services.
Not that the professional education the women who drop out of the workforce receive is worthless; if it were, such women would not enroll.
Whether the benefit such a woman derives consists of satisfying her intellectual curiosity, reducing marital search costs, obtaining an expected income from part-time work, or obtaining a hedge against divorce or other economic misfortune, it will on average be smaller than the benefit that the person (usually a man) whose place she took, and who would have had a full working career, would have obtained from the same education.
The professional schools worry about this phenomenon because the lower the aggregate lifetime incomes of their graduates, the lower the level of alumni donations the schools can expect to receive.
(This is one reason medical schools are reluctant to admit applicants who are in their 40s or 50s.) The colleges worry for the same reason.
But these particular worries have no significance for the welfare of society as a whole.
In contrast, the fact that a significant percentage of places in the best professional schools are being occupied by individuals who are not going to obtain the maximum possible value from such an education is troubling from an overall economic standpoint.
Education tends to confer external benefits, that is, benefits that the recipient of the education cannot fully capture in the higher income that the education enables him to obtain after graduation.
This is true even of professional education, for while successful lawyers and businessmen command high incomes, those incomes often fall short of the contribution to economic welfare that such professionals make.
This is clearest when the lawyer or businessman is an innovator, because producers of intellectual property are rarely able to appropriate the entire social gain from their production.
Yet even noninnovative lawyers and businessmen, if successful--perhaps by virtue of the education they received at a top-flight professional school--do not capture their full social product in their income, at least if the income taxes they pay exceed the benefits they receive from government.
Suppose a professional school wanted to correct the labor-market distortion that I have been discussing.
(For I am not suggesting that the distortion is so serious as to warrant government intervention.) It would be unlawful discrimination to refuse admission to these schools to all women, for many women will have full working careers and some men will not.
It would be rational but impracticable to impose a monetary penalty on the drop-outs (regardless of gender)--making them pay, say, additional tuition retroactively at the very moment that they were giving up a market income.
It would also be infeasible to base admission on an individualized determination of whether the applicant was likely to have a full working career.
A better idea, though counterintuitive, might be to raise tuition to all students but couple the raise with a program of rebates for graduates who work full time.
For example, they might be rebated 1 percent of their tuition for each year they worked full time.
Probably the graduates working full time at good jobs would not take the rebate but instead would convert it into a donation.
The real significance of the plan would be the higher tuition, which would discourage applicants who were not planning to have full working careers (including applicants of advanced age and professional graduate students).
This would open up places to applicants who will use their professional education more productively; they are the more deserving applicants.
Although women continue to complain about discrimination, sometimes quite justly, the gender-neutral policies that govern admission to the elite professional schools illustrate discrimination in favor of women.
Were admission to such schools based on a prediction of the social value of the education offered, fewer women would be admitted.
Now that the immediate crisis is over, the question arises of the amount and form of compensation of the victims.
My answer to the question is twofold.
First, there should be no compensation to affluent people who could have insured against their loss, whether or not they actually bought insurance.
Second, in determining compensation for uninsurable losses (or losses by people who cannot afford insurance), the amount should be determined by reference to the practices of insurance companies.
Just because a person loses his house in a flood that destroys hundreds of thousands of other houses, rather than in a fire that destroys just his house, is no reason for the taxpayer to reimburse him for the loss.
The fact that most people do not buy flood insurance, just like the fact that most Californians don't buy earthquake insurance, is no reason for me to insure them.
Only if they can't afford insurance, or if the insurance industry refuses to insure against a particular risk (generally these are cases in which either the risk is impossible to quantify, so that an insurance premium cannot be calculated, or the aggregate risk is so great that the entire insurance industry does not have the resources to insure it), is there a compelling case for government intervention.
Flood insurance is federally subsidized, and as a result the annual premium is quite low.
The average premium in 2000 was $353, and it started at only $112, though coverage is limited to $250,000 per house plus $100,000 for the contents of the house.
It's a puzzle why so few people buy flood insurance even in areas of the country that are prone to flooding.
People may feel--and they may be right!--that the government will pick up the tab if there is serious flooding.
If so, that is a compelling reason why federal disaster relief should be limited to people who can't afford insurance.
(Most people too poor to afford flood insurance don't own homes or have many possessions, but they need compensation for the loss of what they do have.)
I grant that there may be a considerable degree of readily understandable thoughtlessness in failing to buy flood insurance; it is not part of the standard homeowner's policy, so people may just overlook it.
But this negligence is difficult to understand in flood-prone regions such as southern Louisiana.
Nor should we be subsidizing carelessness.
It appears that total losses from Hurricane Katrina may reach $200 billion, of which insurance is expected to cover only 10 to 25 percent.
Obviously the other 75 to 90 percent of the losses are not losses suffered by individuals too poor to afford flood insurance.
Hard questions need to be asked before the taxpayer is asked to pay the difference between insurable losses and losses actually insured.
It might seem that flood insurance would not cover the indirect costs of a flood, such as having to find another place to live while the flood damage is being repaired.
But such costs are routinely covered by fire-insurance policies and I assume (without knowing) by flood-insurance policies as well.
I have no objection to government's compensating the losses of those too poor or otherwise unable to obtain insurance (to repeat, not all losses are insured by the private insurance industry), including life insurance for persons killed in the New Orleans flood.
Social insurance is a legitimate utilitarian device.
But the form and limits of this compensation should be similar to those of the insurance industry.
People do not buy insurance against the emotional distress caused when their house or other possessions are destroyed by fire, and neither, therefore, should the government "insure" against such losses by means of its disaster-relief programs.
There have been suggestions to create a victims' compensation fund that would be similar to the fund created for the victims of the 9/11 attacks--a fund that, unlike insurance, would pay large amounts to cover the human suffering inflicted by the disaster, for example paying the survivors of people killed in the New Orleans flood amounts vastly greater than the typical life insurance policy would pay--in fact amounts calculated the way damages are calculated in personal injury and other tort suits.
Such funds make no economic sense even though a harmful action, whether of man or by nature, can inflict a loss greatly in excess of any insurance that the victim may have had.
A person who has no family may see no point in buying life insurance, but that doesn't mean he doesn't value his life; current estimates by economists of the value of the life of an average American, as I mentioned in last week's posting, are in the neighborhood of $7 million.
So if the victims of the 9/11 attacks could sue Osama bin Laden, they would be entitled to claim their full losses, irrespective of insurance (insurance just shifts part of the loss to the insurance company--it doesn't reduce the loss).
By making the full losses a cost to the injurer, the law charges a "price" for the harmful activity that operates as a deterrent.
This rationale for full compensation has no application to social insurance, which is intended as a substitute for private insurance rather than as a substitute for the tort system.
To the extent that losses caused by nature or the public enemy are aggravated by venal or incompetent officials, those officials can in some instances be sued (and the full losses traceable to their misconduct recovered as damages) and in others punished by humiliation or loss of office.
Let me in closing give some examples to illustrate how my proposal would operate.
In case number 1, an affluent couple loses its house to the flood; the house was not insured against flood damage.
The couple would receive no government compensation.
In case number 2, the same thing happens, except because the couple had all its money tied up in the house, the loss of the house, without insurance, renders the couple destitute.
The couple would be eligible for Medicaid and other welfare benefits, as well as for private charity, including assistance from family members, but would not (under my proposal) be entitled to any special government compensation.
In case number 3, a poor family, which already receives welfare benefits, owns a modest home, which is destroyed in the flood and, again, is not insured.
I would favor the government's compensating the family for the value of the home.
In case number 4, an affluent couple would like to buy flood insurance, but it is not available.
Whether compensation for the loss of their home should be paid by the government should depend on why the insurance is unavailable.
If it is unavailable simply because the risk of a flood is so great that there is an insufficient market for insurance to interest any insurance company, compensation should be denied so that people aren't encouraged to build in flood-prone areas.
But if insurance is unavailable because of a genuine market failure, I would favor government compensation (i.e., social insurance); an example would be if no insurance company offered such insurance because the industry incorrectly believed that there was zero probability of a flood in the area of the couple's home and concluded that therefore there would be no demand for insurance.
The first comment I would like to respond to is Professor Becker's.
It flags an issue that I did not discuss adequately in my posting.
He argues that the fact that an individual loses property or for that matter life in conjunction with similar losses of other individuals in an event reasonably classified as a disaster or catastrophe is no reason to stretch the social safety net beyond its usual dimensions; the individual should be treated identically to a person who sustained the identical loss in a noncatastrophic setting--say as a result of a pipe's bursting while the owner of the house was away for several weeks and when he returned the house was so flooded as to be completely ruined.
I think Becker is basically right, and indeed I made a similar point in my posting but then wandered away from it.
But I also think that there are some differences between the disaster and nondisaster settings.
In my hypothetical bursting-pipe case, there would be no occasion to evacuate the owner to another city.
As in this illustration, the disaster setting is likely to involve costs that would not be incurred in the nondisaster case, and the social safety net may not have been designed with that possibility in mind.
Of course it would be better to alter the net to take account of the possibility, rather than, as we are doing in the aftermath of Katrina, responding ad hoc, excessively, and probably wastefully.
My posting distinguished between insurance against loss and damages for a wrongful act; damages are more generous, including for example monetary compensation for pain and suffering.
Several comments question the distinction.
Let me offer two responses.
First, if there was culpable negligence in the handling of the catastrophe by officials or public employees, it is conceivable--no stronger word is possible--that victims of the hurricane and of the ensuing flood may have a legal claim.
I say "conceivable" rather than "possible" because there are a number of limitations on suits against public officials, especially suits based on their discretionary acts or their policies, as distinct from execution (usually by lower-level officers such as policemen).
Second, the demand for insurance and the (social) demand for legal liability are quite different.
The demand for insurance is based on risk aversion, which is based in turn on the declining marginal utility of money.
Ask yourself whether you would pay $10 to avoid a one in ten thousand chance of having to pay $100,000.
These are actuarial equivalents ($10 = $100,000 × .0001).
So if you would prefer the riskless alternative (pay $10), you're risk averse and a potential customer for insurance.
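The actuarial-equivalence point can be made concrete with a toy calculation. The square-root utility function and the $200,000 wealth figure are illustrative assumptions of mine, chosen only to model diminishing marginal utility of money; nothing in the post commits to them:

```python
# A sure $10 premium versus a 1-in-10,000 chance of a $100,000 loss.
import math

wealth = 200_000.0        # hypothetical starting wealth (an assumption)
premium = 10.0
loss = 100_000.0
p_loss = 1e-4

# The two options have (essentially) the same expected cost...
assert abs(premium - loss * p_loss) < 1e-9

def utility(w):
    # Concave utility: marginal utility declines as wealth rises.
    return math.sqrt(w)

eu_insured = utility(wealth - premium)
eu_bare = p_loss * utility(wealth - loss) + (1 - p_loss) * utility(wealth)

# ...but a risk-averse agent prefers paying the sure premium.
print(eu_insured > eu_bare)  # True
```

Any strictly concave utility function yields the same ranking; the numbers only change how large the gap is.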
Insurance transfers money to the state of the world in which a loss occurs; that transfer is worthwhile only for losses that reduce your wealth and hence raise your marginal utility of money. So if, as in the usual case, pain and suffering would not reduce your money income, there would be no point in your buying insurance.
But since pain and suffering are a real loss, optimal deterrence of acts that cause such a loss require the courts to try to put a price tag on the loss and include it in damages.
I was intrigued by the suggestion in several comments to make the purchase of flood insurance mandatory in areas where the risk of flooding is significant.
It sounds like a good idea, provided those areas can be defined (the alternative of making flood insurance mandatory for everyone, even in deserts where the premium presumably would be close to zero, is objectionable as being administratively cumbersome) and that the insurance is not subsidized by the government.
If it is not subsidized and therefore represents the insurance industry's best estimate of the expected financial losses from flooding, then requiring it will discourage construction in flood-prone areas (by making it more costly to live in such areas) and by doing so will reduce losses from floods and reduce demands for government bailouts--demands that appear to be leading to extravagant federal programs for compensating victims of Katrina.
Compulsory insurance is a second-best solution from an economic standpoint; first best would be letting people choose whether to insure or "go bare" but not compensating them if they chose not to insure and then suffered a big loss.
But it is unrealistic to expect government to take such a hard line in the disaster setting, where the number of victims is so large as to place the government under irresistible pressure to compensate--unless the victims are privately compensated through insurance.
I am more pessimistic than Becker that the world in general or the United States in particular can sustain its current rate of economic growth even when economic welfare is defined to include, as I agree it should be, utility or well-being.
My pessimism is not rooted in any concern about running out of fossil fuels, however.
As the quantity of reserves of such fuels (mainly coal, oil, and natural gas) falls or the cost of extraction rises (or, more likely, both), prices of the fuels will rise, and the rise will both moderate demand and accelerate the search for substitutes.
There will be effects on the distribution of income (the owners of the reserves will be enriched at the expense of many consumers), but this will not affect average per capita income worldwide.
Indeed, I think average income ("full" income, including nonpecuniary components, consistent with my earlier remark about the definition of economic welfare) will rise as a result of increased prices of fossil fuels, because of the negative externalities associated with the use of fossil (i.e., carbon-based) resources for generating energy.
These externalities include traffic congestion and, what is much more serious, increased atmospheric concentration of carbon dioxide, a major factor in global warming--which I take very seriously.
(The New Orleans flood may be the first disaster to which global warming has contributed; it is unlikely to be the last.) The higher the price of coal, oil, and natural gas, the better, as far as I am concerned.
However, a distinction should be made between long-run and short-run effects.
A very large unforeseen change in the price of an important input such as energy could precipitate a national or global recession because the economy could not adapt instantaneously to such a change.
My reason for pessimism about the future is connected to Becker's reason for being optimistic! I fear population growth.
The combination of increased longevity as a result of medical advances and healthier life styles, reduced infant mortality, and a continued high demand for large families in much of the world seems likely to overcome the "demographic transition," that is, the well-documented negative effect on birth rates of increases in average income to middle-class levels.
World population, currently somewhat more than 6 billion, may well rise to 10 billion by 2050.
If average output rises as well, the total amount of economic activity several decades from now may be a significant multiple of the present level.
That higher level portends a big increase in carbon dioxide emissions even if fossil-fuel prices rise sharply, and an ominous reduction in biodiversity (with potentially very harmful effects on agriculture) as a result of more land being cleared for human habitation.
It may well be possible to offset these effects by investments in various ameliorative technologies, but investments that merely offset the bad effects of population growth do not increase net well-being.
Supporters of population growth point out correctly that given a more or less fixed percentage of geniuses, the greater the aggregate population the more geniuses there are, and geniuses can confer benefits on society as a whole that greatly exceed what they take out of society in their own consumption.
A related point is that the larger the market for a good, the lower its price is likely to be because the fixed costs of producing it are spread over a larger output.
But this effect may be offset by the higher prices of scarce inputs as demand increases.
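The fixed-cost-spreading point is just average-cost arithmetic; the fixed and marginal cost figures below are made-up illustrative numbers:

```python
# Average cost falls as output grows because the fixed cost is spread
# over more units. F and c are hypothetical values for illustration.
F = 1_000_000.0   # fixed cost of production
c = 5.0           # constant marginal cost per unit

def average_cost(q):
    return F / q + c

# A tenfold-larger market cuts average cost from $15 to $6 per unit.
print(average_cost(100_000), average_cost(1_000_000))  # 15.0 6.0
```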
More important, if there is a fixed percentage of geniuses, there may also be a fixed percentage of evil geniuses, including potential terrorists.
In the age of weapons of mass destruction--which are becoming ever cheaper, more accessible, and (in the case of bioweaponry) more lethal--the harm that a terrorist can do may outweigh the good that a benign genius can do.
I am also concerned about negative externalities that result from an increased percentage of elderly people in a nation's population.
Judging by Medicare, the elderly are already able to use their voting power to extract vast subsidies for their medical care that would be more productive in other uses.
This misallocation is likely to grow as the elderly become a larger and larger fraction of the voting population.
Even if net well-being is likely to decrease rather than increase in the years ahead, it can be argued that the effect on total well-being will be offset by population growth.
Suppose average utility for 6 billion people is 2, and for 10 billion is 1.5; then total utility is greater in the second stage (15 billion versus 12 billion).
But very few people think that total well-being is a proper maximand, as such a view would lead to grotesque results; if population grew enough, total utility might increase even if average utility fell to Third World levels.
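With the post's stipulated numbers, the total-versus-average comparison is simply:

```python
# Population-ethics arithmetic from the example above: average utility
# falls from 2.0 to 1.5 while population rises from 6 to 10 billion,
# yet total utility rises. All numbers are the post's stipulations.
scenarios = {
    "today": (6e9, 2.0),   # (population, average utility)
    "2050": (10e9, 1.5),
}
totals = {k: pop * avg for k, (pop, avg) in scenarios.items()}
print(totals["today"], totals["2050"])  # 12000000000.0 15000000000.0
assert totals["2050"] > totals["today"]  # total rises as average falls
```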
A few brief responses (Becker and I are planning to discuss population issues further next week) to a characteristically interesting set of comments.
The most frequent comment is that I am worrying too much about population growth because the vast population growth that the world has experienced in past centuries has not resulted in a net diminution of human welfare.
But we do not live in history; we live in the present and the future.
To suppose that an established trend is bound to continue is to be guilty of naïve extrapolation.
I do wish to emphasize, however, in light of one of the comments, that I have never suggested and do not believe that the world is going to run out of food any more than it is going to run out of energy sources.
Refusal to recognize developments that may make the future differ from the past is illustrated by a comment which states that only the United States has the technology necessary to create devastatingly effective bioweaponry.
That is a dangerous error.
Several years ago, Australian plant scientists, by injecting mousepox virus with commercially available genetic material, increased the lethality of the virus and at the same time made it immune to the mousepox vaccine.
Mousepox is biologically similar to smallpox.
Those same scientists could if they wanted to, and if they could get hold of smallpox virus, make the virus immune to existing vaccines and even more lethal than it is in nature, where the death rate is 30 percent.
Because smallpox is highly contagious even before symptoms appear, and its initial symptoms are ambiguous, hundreds of millions of people could be infected before the epidemic was even discovered, and there would be no vaccinated health workers or security personnel to enforce a quarantine.
Although all smallpox virus is supposed to be under lock and key in two laboratories, one in the United States and one in Russia, this is not certain and in any event it is expected that the smallpox virus will be synthesized within five years; the polio virus has already been synthesized.
That is our future.
One comment accuses me of putting environmental welfare ahead of human welfare and even of "deifying" the environment.
That is not a correct characterization of my view.
I am not a Green.
Environmental and human welfare are interrelated; otherwise there would be no antipollution policies.
Global warming is a profound danger to human welfare.
Granted, there is still some scientific debate over global warming, but increasingly it resembles the scientific debate over the health consequences of cigarette smoking.
There is never complete certainty in scientific matters, but the efforts of a minority of scientists to debunk global warming are beginning to resemble the efforts of a minority of scientists to debunk evolution.
For further discussion of the matters touched on in this response, see my book   Catastrophe: Risk and Response   (Oxford University Press, 2004).
I am a strong environmentalist, and support the ban on using DDT as an agricultural insecticide.
Although Rachel Carson's belief that DDT causes cancer has not been substantiated, there seems little doubt that its widespread agricultural use, if continued, would have caused a significant reduction in biodiversity because of its lethal effect on many fish and bird species.
In my book Catastrophe: Risk and Response 63 (2004), I quote a responsible estimate that the combined effect of human population growth (and the resulting contraction in animal habitats), pesticide use, global warming, and other factors is causing 10,000 species to become extinct every year.
Of course, there have always been extinctions--without which there would be no room for new species to evolve--but the fossil record suggests that the background (i.e., pre-human) average annual number of extinctions is only one.
Even the fierce environmental skeptic Bjørn Lomborg estimates that the current annual extinction rate is 1,500 times the background rate.
And these figures greatly understate the loss of genetic diversity, because much genetic diversity is intraspecies (e.g., birds of the same species but a different color; or imagine if there were only one breed each of dogs and cats).
That diversity has been plummeting as well, in part because of selective breeding, which reduces the number of strains of each crop to the best, the others being abandoned.
The decline in genetic diversity--to which spraying crops with DDT would be contributing significantly if it were permitted--is alarming even from a purely selfish anthropocentric perspective because such diversity, like other forms of diversification, performs an important insurance function.
This is most obvious when one considers plant diversity; if there were only one strain of wheat, predator evolution would concentrate on it and once the strain was eliminated a significant part of the human food supply would be destroyed.
But with animals too, the elimination of a species (or even a breed) can have a ramifying effect throughout the food chain, as when the extinct species was the major food source of another species, which in turn was a major food source of still another species, and so on.
All this said, the quantities of DDT used in indoor spraying of houses in sub-Saharan Africa (where 90 percent of malaria deaths occur) are so minute that the environmental effects are inconsequential.
The Stockholm Convention on Persistent Organic Pollutants (2001) bans DDT but with an exception for its use against malaria, and the puzzle is why the exception is so rarely invoked, South Africa being a notable exception.
An even greater puzzle is why the Bill and Melinda Gates Foundation, which is the world's largest foundation and has made the eradication of malaria a priority, is spending hundreds of millions of dollars searching for a vaccine against malaria but nothing (as far as I know) to encourage indoor spraying with DDT.
Of course, spraying can't eradicate malaria, because it just kills malaria-bearing mosquitoes that happen to get inside a house, but it appears to be extremely effective in minimizing malaria infection,  as well as being cheap.
So it is difficult to understand why the Gates Foundation doesn't divert some of its resources to promoting and if necessary financing the spraying, pending the discovery of a vaccine.
As we know in the case of AIDS, the search for a vaccine against a particular disease can be protracted.
Not that eliminating childhood deaths from malaria (I have seen an estimate that 80 percent of malaria deaths are of children) would be a completely unalloyed boon for Africa, which suffers from overpopulation.
But on balance the case for eradicating malaria in Africa, as for eradicating AIDS (an even bigger killer) in Africa, is compelling.
Malaria is a chronic, debilitating disease afflicting many more people than die of it, and the consequence is a significant reduction in economic productivity.
Considering how much cheaper and easier it would be to (largely) eliminate malaria than to eliminate AIDS (which would require behavioral changes to which there is strong cultural resistance in Africa), the failure of the African countries, the World Health Organization, the World Bank, and private foundations and other nongovernmental organizations to eliminate most malaria by means of indoor spraying with DDT is a remarkable political failure.
Identity theft (or identity fraud) refers to fraud effectuated by stealing personal identifying data, such as a credit card number or a social security number, often by means of computer hacking or by emails in which the sender impersonates an individual, firm, or agency that has a legitimate need for the identifying data.
Identity theft has become extremely common and is estimated to be defrauding Americans of a total of more than $50 billion a year.
This is an understatement of the social costs of identity theft because victims often must spend hundreds of hours restoring their credit.
Maximum punishments are severe, but the garden-variety identity theft is not heavily punished relative to potential gains.
For example, an identity thief who steals $1 million and has no previous criminal record is (if prosecuted for violating federal fraud law) likely to receive a prison sentence of less than five years, except that as a result of the recently enacted Identity Theft Penalty Enhancement Act another two years will be tacked on.
The economic theory of punishment teaches that, at least as a first approximation, the expected cost of the fine or other punishment for a crime should just exceed the expected gain to the criminal from committing it, in order to make the crime unprofitable to him.
If we are not completely confident about what the gain from the crime is likely to be (or if we think some crimes, like breaking into an unoccupied vacation house in a snowstorm, should not be deterred), we may want to base the sentence on the victim's loss instead of the perpetrator's gain.
In the usual case of identity theft, the loss to the victim will exceed the gain to the thief, because the time costs to the victim are not recouped in any form by the thief.
Oddly to a noneconomist, those costs, together with the costs of efforts by potential victims of identity theft to avoid becoming actual victims and the costs incurred by the identity thieves themselves to accomplish their thefts, are the real social costs of identity theft.
The mere transfer of wealth from victim to thief does not reduce the social product, but merely rearranges it.
The word "expected" which I used in the preceding paragraph is intended to distinguish between a certain value and a probabilistic one.
The expected value of a 100 percent probability of incurring a cost of $100 is $100, but so is the expected value of a 1 percent probability of incurring a cost of $10,000 ($100 = .01 X $10,000).
If the probability of apprehending and punishing an identity thief is very low, the punishment will have to be jacked up very high in order to deter.
Suppose an identity thief who sends out 100,000 "phishing" emails (impersonating persons or firms who would have a legitimate need for access to the recipient's personal identifying information) anticipates a $10,000 profit.
If the probability that he will be caught and punished for his fraud is 1 percent, then a fine slightly in excess of $1 million would be necessary to deter him.
Probably he could not pay such a fine, and so a prison sentence would have to be substituted, designed to impose the equivalent disutility on him.
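The deterrence arithmetic of the preceding paragraphs can be sketched directly (figures are those given in the text):

```python
# Expected-punishment arithmetic for identity theft (figures from the text).
gain = 10_000    # the phishing thief's anticipated profit
p_catch = 0.01   # probability of apprehension and punishment

# Both of these expected values are $100, illustrating the certain-vs.-
# probabilistic point: a sure $100 loss equals a 1% chance of losing $10,000.
assert 1.00 * 100 == 0.01 * 10_000

# To deter, the expected punishment must just exceed the gain:
# p_catch * fine > gain, so the fine must exceed gain / p_catch.
deterrent_fine = gain / p_catch
print(deterrent_fine)  # 1000000.0 -- hence a fine slightly above $1 million
```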
My guess is that very few identity thieves are caught, and also that many of them make a lot more than $10,000 per fraud, given such techniques as phishing that enable a fraudulent solicitation to be disseminated essentially without cost to an immense number of potential victims; if even a minute percentage of the recipients are hooked, the identity thief can make a killing.
If this analysis is correct, the optimal punishment for identity theft is extremely heavy; it might well be life in prison.
Any proposal for punishment that strict would encounter a variety of objections--all superficial.
The first is that punishment should be proportional to the gravity of the crime, in the sense of the cost that the crime imposes on the victim.
By this criterion, bank robbery is a more serious crime than identity theft (and in fact is punished much more severely) because it frightens and sometimes endangers the bank's employees (only sometimes, because most bank robberies nowadays are "robbery by note"--the robber gives the teller a note saying that he is armed, but he isn't).
But bank robbery is actually a sucker's crime; almost all bank robbers are caught because of a combination of surveillance cameras and the money packs that tellers are instructed to give robbers, which explode after a few minutes, covering the robber with indelible ink.
Moreover, it is only because crimes that create a risk of physical injury are treated as categorically more serious than white-collar crimes that bank robbery is deemed a more serious offense than identity theft.
Probably identity theft is a greater social problem (the average bank robber's take per robbery is only $7,000), and, even if it is not, almost certainly it should be punished more severely because the probability of apprehension and punishment is much lower than in the case of bank robbery.
Nor would we have to worry, as we do with many crimes, that making the punishment for a particular crime very severe may increase the incidence of a more severe crime--may, in other words, impair "marginal deterrence," the policy of imposing heavier punishments for more serious crimes not because the punishment must fit the crime in a retributive (eye-for-an-eye) sense, but in order to deter the substitution of more serious crimes for less serious ones.
Were robbery punished as heavily as murder, robbers would have a greater incentive to murder their victims because that would reduce the probability of punishment (by eliminating witnesses) without increasing its severity.
It is difficult to imagine identity thieves substituting more serious crimes for identity theft.
A further argument against severe punishment is that identity theft is easily prevented by potential victims, and it is less costly to society for them to take their own precautions than for the taxpayer to pay for more prisons.
There are two fallacies here.
The first is the assumption that increasing the length of prison sentences increases the number of prisoners.
That depends on the responsiveness of potential criminals to a higher expected cost of punishment.
If it is high, then an increase in punishment may reduce the number of prisoners by increasing deterrence by a greater percentage than the added length of the sentence.
(Below I argue that it is likely to be high in the case of identity theft.)
The second fallacy is to disregard the heavy   aggregate   costs of self-protection against identity theft.
Everyone who has a credit card or social security number or other personal identifying information (which is to say everyone), and in addition has some financial resources, is a potential victim of identity theft.
Among this large group of people, all who are cautious will take some steps to prevent identity theft, as will the custodians of their personal identifying information.
These costs, which would be avoided if identity theft could be stamped out, must be compared with the costs of increasing the punishment of identity thieves.
Those costs might be slight if the threat of heavier punishment had such a strong deterrent effect that the threat rarely had to be carried out.
A reason to expect a more than average responsiveness of crime to punishment in the case of identity theft is that identity thieves tend to be educated people, or at least to have pretty good technical skills.
Educated people tend to have low discount rates because education entails deferral of earning.
And people with low discount rates are more responsive to increased prison terms, which involve adding years at the end of the existing term.
A person with a very high discount rate might not be deterred when his expected sentence for committing some crime increased from 20 years to 25 years, but a person with a low discount rate might consider that extra five years a significant present cost (that is, after discounting to present value--the lower the discount rate, the higher the present value of a future stream of costs or earnings).
Moreover, an educated person is likely to have superior legitimate alternatives to crime than an uneducated person; and the closer a substitute a legitimate earning opportunity is for earnings for crime, the less the expected earnings from crime need be reduced by increased punishment in order to induce the substitution.
This is the other side of my earlier point about marginal deterrence.
Two of the comments make the excellent point that much, maybe most, identity theft consists in a friend or relative stealing personal identifying information and that  such "retail" theft should not be punished nearly as heavily as the kind of professional identity theft that my post focused on.
The way to deal with this problem, however, as in the case of most crimes that embrace acts of widely varying gravity, is to set a broad statutory sentencing range--say from fine and probation at the bottom to 25 years in prison at the top--and within the range to promulgate sentencing guidelines based on the magnitude of the particular defendant's conduct and other relevant factors, such as his criminal history.
A recurrent issue in criminal law enforcement is how much responsibility for crime prevention to place on potential victims.
In principle, it is always cheaper to deter crime by threat of punishment than to require victims to incur expenses to protect themselves from crime, because maintaining the credibility of the threat is likely to be much cheaper than victim self-protection, which requires every potential victim to incur costs to avoid being a victim.
(It's the difference between penning dangerous animals in zoos and leaving it to every homeowner to fence out dangerous animals.)
But this is in general rather than in every case.
The more costly it is for the state to apprehend and prosecute and punish criminals, the more likely it is for victim self-protection (or some combination of public enforcement and victim self-protection) to be optimal.
In this vein, some comments express concern that banks and other vendors are not doing enough to prevent identity theft of their customers because they hope to shift the loss to the customers.
I doubt that this is a serious problem, but if it is it may argue for requiring protective measures by the banks and vendors.
One comment puzzles over the fact that bank robbery should be a sucker's crime--that is, that though the expected gain is slight relative to expected punishment costs, the crime is still common.
It is a puzzle.
But my impression is that bank robberies nowadays are committed mainly by stupid or mentally unstable people, who are tempted by what seems the simplicity of giving a bank teller a threatening note.
When as perhaps in that instance deterrence fails, the alternative is incapacitation--long sentences to prevent the bank robber from repeating his crime if it is thought unlikely that he will be deterred by the threat of a longer prison sentence the next time (recidivists get longer sentences, having shown by their first crime that they are less deterrable than the average person).
I greatly enjoyed the comment that pointed out that the first and most consequential identity theft was that of the serpent in the Garden of Eden, by Satan.
This is not in Genesis, where Satan is not mentioned, but in later versions of the Fall of Man, notably Milton's Paradise Lost, where Satan takes over the serpent while the latter is sleeping, and convinces Eve that she should eat the forbidden fruit because he, the serpent, did and it did him no harm--indeed, it enabled him to learn to talk!
The Programme for International Student Assessment (PISA), the source of Becker's statistics, does triennial international comparative studies of 15-year-olds' educational achievements.
The latest results to be reported are those of the 2003 survey.
The United States came out in the middle of the European pack in reading literacy, but in math proficiency we were below the European average, as well as below several Asian countries in the sample.
There are three questions to ask about these results: (1) Are they meaningful, in the sense of providing an accurate picture of relative math proficiency? (2) If so, are there economic or other real-world consequences? (3) If there are, what if anything should be done to improve our rank?
(1) The answer to the first question appears to be "yes," partly because of consistency with other studies.
There is a good article on this issue by Paul E. Peterson, available on the Web, called "Ticket to Nowhere." See also an article by Mariann Lemke and colleagues in a recent issue of the Educational Statistics Quarterly, also available on the Web.
Math skills appear to be deteriorating in the United States, as well as to be inferior to the average of the 40 countries in the PISA sample; and this is true even if blacks and Hispanics, who on average do poorly (especially blacks) on these tests, are excluded from this comparison, although such exclusion does increase our international rank somewhat (but a proper comparison would require similar exclusions from some other countries in the sample).
I agree with Becker that it would be interesting to see what difference it would make if college rather than high school students were tested.
But I would not expect much difference, because little emphasis is placed nowadays on math in American colleges.
(2) But so what? Here I agree with Becker, perhaps even more emphatically than he, that better education in mathematics would not have substantial effects on social and economic welfare.
Very few jobs nowadays require even simple math skills ("deskilling" is the story of modernity); almost all computation is automated.
Even fewer jobs require advanced math skills--and kids cannot acquire those skills by education; they are innate.
What would be socially and even economically useful would be to instruct high school students in the rudiments of statistical theory.
That would help them learn  to think straight about a range of public policy issues, as well as to avoid certain recurrent mistakes in everyday life.
People are terrible at handling probabilities.
For example, most people, including otherwise quite intelligent and well educated people, don't understand that randomness is not regular alternation--that a typical random pattern is 1000110110001, not 101010101010.
And this mistake leads them, for example, to give undue weight to the recent performance of a mutual fund (e.g., 1101).
But whether to teach statistical theory in high school is an issue of educational policy rather than a matter of raising the scores on math tests.
It would also be helpful to the United States, mainly from a public policy standpoint, if more of our people were scientifically literate; and it would help them to be so if they knew some math, because modern science is heavily mathematical.
In my book Catastrophe: Risk and Response (2004), I examined the issue of scientific literacy briefly, pointing out that only a third of American adults (adults, not 15-year-olds) know what a molecule is, that 39 percent believe that astrology is scientific, that 46 percent deny that human beings evolved from earlier animal species, and that almost 50 percent do not know that it takes a year for the earth to revolve around the sun (many do not know that the earth revolves around the sun).
These are amazing statistics, and yet, according to the materials I consulted, the scientific literacy of the U.S. population actually exceeds that of the European Union, Japan, and Canada.
In the age of the computer and the Internet, school education probably has rather limited effect on job performance, marital stability, happiness, or other measures of welfare, except perhaps for elite people--but they attend elite educational institutions and probably get a better education than is available in any other country; our best colleges and universities are the envy of the world.
And because of compulsory schooling, a very bright child almost always will be spotted even if he or she comes from a poor or educationally deprived home, and will be shunted onto the elite track.
So I do not think that  the low quality of public education matters a great deal from an overall social standpoint, except that our public schools seem needlessly costly, and also unresponsive to the special needs of very poor students.
These are reasons why I strongly support school voucher programs.
(3) If this is wrong and our poor international standing in math proficiency is hurting the United States, the solution is to teach more math and less of something else.
It should not be to drill the kids in the 2003 PISA math test so that they can do better on the next one.
It is always possible to improve scores on standardized tests by orienting instruction to the tests, by tutoring, or, if worse comes to worst, by withholding the test from the weakest students!
Two weeks ago the New York Times published an article on pollution in China: "As China Roars, Pollution Reaches Deadly Extremes," Aug. 26, 2007, section 1, page 1. The point of interest is this: "Sulfur dioxide and nitrogen oxides spewed by China's coal-fired power plants fall as acid rain on Seoul, South Korea, and Tokyo. Much of the particulate pollution over Los Angeles originates in China" (p. 6).
These effects are separate from China's growing contribution to global warming: it is possible that by the end of this year China will surpass the United States as the leading emitter of carbon dioxide into the atmosphere.
Although China is making some efforts to curb pollution, its efforts are more likely to reduce the rate of growth of pollution than to reduce it from its current level, because of the continued rapid expansion of the Chinese economy, which includes a rapid growth in the number of vehicles using China's roads.
Global warming affects the entire earth, though unequally, but Chinese air pollution is "exported" chiefly to a few nations: Korea, Japan, and the western United States.
Other differences between the carbon-emission and conventional air-pollution phenomena are that there is far more uncertainty about the magnitude of the threat posed by global warming, and far greater costs to arresting global warming, than in the case of China's external air pollution, and this enables one to see the problem of international control of air pollution in rather clearer terms than that of controlling carbon emissions.
It is a problem of externalities.
The costs of Chinese air pollution to Koreans, Japanese, and Americans are not costs to China, and the benefits of abating this external pollution would not be benefits to China.
But this description of the problem ignores the Coase theorem, one version of which is that if transaction costs are low, the market itself will internalize externalities and thus solve the externalities problem.
We might think of the present legal regime as one in which China has a property right in the activities that give rise to pollution, or stated more precisely that its ownership of coal-fired power plants, gasoline-powered vehicles, and so forth carries with it a right to pollute.
If so, then Korea, Japan, and the United States (assuming they are the only countries seriously affected by Chinese pollution) could persuade China to reduce its pollution by paying China an amount of money just slightly above what it would cost China to reduce its pollution "exports" to these countries to the level desired by the "victim" nations.
This assumes that the cost of the negotiations, both among the victim nations and with China, would not be so great as to prevent a deal that made all the parties involved better off; but it is not clear why those costs should be particularly high.
Nor is there a serious danger that China would increase its polluting activities in order to extort more money from the other nations, since pollution hurts the people of China far more than it hurts any other population (the pollution described in the   Times   article is grotesque in its magnitude and lethality).
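The Coasean bargain described here can be sketched with invented numbers (all dollar figures below are hypothetical illustrations, not estimates):

```python
# Hypothetical Coase-theorem bargain over Chinese pollution "exports".
# All dollar figures are invented purely for illustration.

abatement_cost = 8.0  # China's cost of cutting its exported pollution ($bn/yr)
damages = {"Korea": 4.0, "Japan": 5.0, "United States": 3.0}  # victims' harm ($bn/yr)

total_damage = sum(damages.values())  # 12.0 -- what the victim nations would avoid

# A mutually beneficial deal exists whenever the victims' combined damage
# exceeds China's abatement cost; any payment between the two figures leaves
# every party better off.
if total_damage > abatement_cost:
    payment = abatement_cost * 1.05  # "just slightly above" China's cost
    for nation, harm in damages.items():
        share = payment * harm / total_damage  # cost-sharing in proportion to harm
        print(f"{nation} pays {share:.2f}, avoiding {harm:.2f} in damage")
```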
The transaction would be efficient, but it would also bring about a transfer of wealth from what I am calling the victim nations to China.
But this is a common kind of market event.
A real estate developer who wanted to create a residential community on land adjacent to a funeral home, and feared that the funeral home's presence would depress house values by giving the occupants of the houses an unwelcome reminder of their mortality, could pay the funeral home to relocate.
And if buying off a polluter seems crass--"Greens" would denounce it for conveying the message that pollution is a legitimate byproduct of economic activity (a "commodity" for the victims of air pollution to buy from the polluter)--there are other means of inducing China to reduce air pollution.
There are things that China wants from Korea, Japan, and the United States, and these countries can give China some of those things in barter for China's strengthening its enforcement of its existing pollution controls or adopting and enforcing newer, more stringent ones.
An alternative would be to negotiate an international agreement by which China and all other nations surrendered control over their pollution to an international environmental protection agency.
But the transaction costs would be prohibitive, in part because of extreme uncertainty about the policies that the agency would adopt.
Nations do not surrender their sovereignty lightly.
There were a number of good comments.
I respond to a few here.
One pointed out that the interstate highway system was largely built in the 1950s and 1960s, rather than continuously.
As a result, it presents a "bloc obsolescence" problem necessitating heavy expenditures on maintenance and rebuilding--costs exacerbated by the unanticipated wear and tear resulting from the vast increase in usage of the system.
This makes the problem of financing the necessary expenditures an urgent one.
Another comment points out perceptively that imposing tolls on users of the interstate highways could create a negative externality by deflecting users to non-toll roads, thus increasing congestion and wear and tear on those roads.
That is an argument for tolling all roads--to which two objections are raised in comments.
One is that tolls will be prohibitive on roads that are lightly traveled, assuming that their light traffic doesn't reduce maintenance costs to trivial levels (as it would not).
What is true is that tolls will be kept down in order to avoid deflecting users of these roads to more congested roads, adding to the congestion on them.
The second objection is that it is infeasible to impose tolls on busy commuter roads such as the Minneapolis bridge that collapsed, because it would slow down traffic too much.
But this is a short-term objection.
Technology is rapidly coming on line, at falling cost, to enable tolls to be charged (and varied with time of day to minimize congestion) without need for toll booths, through a system of sensors in the pavement and cameras overhead.
I am not competent to offer an opinion on macroeconomic policy.
But I can, with some confidence, say this about Becker's proposal that the Federal Reserve Board adopt a rule approach to adjusting the money supply to limit inflations and recessions: there is no way in the existing legal regime to make such a rule enforceable.
The Board is a creature of Congress.
If it resists strong political pressures, Congress can retaliate.
Unlike the Supreme Court, the Board has no constitutional standing.
And even the Supreme Court, which Congress could not abolish without a constitutional amendment, is not immune from political pressures, because Congress can limit the Court's jurisdiction and controls the Court's budget.
Moreover, political pressures influence who is appointed to the Court--and to the Fed as well.
And Congress can pass laws that would impede or even nullify Fed policy, as by raising or lowering taxes or running deficits or surpluses in the federal budget.
Therefore, it might well be a mistake for the Fed to surrender discretion in favor of a rule approach.
Discretion enables the Fed to bend in the face of political pressure; rigid adherence to rules might cause the Fed to break in the face of particularly intense such pressure.
I understand that some central banks do follow a rule approach, but they may operate in a political context different from ours--I do not know.
The larger question is whether any public official can be totally nonpolitical in a democratic (perhaps in any) system.
I suspect not.
On the broader issue of rules versus discretion, I doubt that generalization is possible.
Rules have great virtues, but they are limited because they are necessarily based on information possessed by the rulemaker when the rule was made.
No rulemaker is omniscient.
After the rule is promulgated, unforeseen circumstances are likely to arise to which the rule will be maladapted.
The inflexibility of rules has to be traded off against the benefits in simplicity, clarity, and ease of compliance and application that rules confer.
The tradeoff will not always favor rules.
There are three alternatives to rules.
One is standards.
A fixed speed limit is a rule; negligence is a standard.
A standard is less definite than a rule, which is a minus, but it is more flexible, which is a plus.
It would be impossible to anticipate every possible cause of an accident (driving above 60 m.p.h. at night, in snow, in heavy traffic, on a divided highway, or in an SUV, etc.) and make a rule that would declare each cause to be either culpable or excusable.
The negligence standard enables a court to determine liability as cases arise, on the basis of a weighing of the costs and benefits of measures that would have avoided the particular accident.
One way to state the difference between rules and standards is that standards enable information obtained after promulgation to be incorporated into the law without need for formal rulemaking.
When for example Congress passes a vague statute, thus leaving it to the judges enforcing the statute to fill in the details, in effect the judges are enlisted in the legislative process.
An example is the per se rules of antitrust laws, which are judge-made rules supplementing the general directives in the antitrust statutes.
Another alternative to rules is discretion, which differs from a standard in not being enforceable in court.
For example, prosecutors in our system have discretion whether or not to prosecute a particular person for a particular crime.
Unless a prosecutor bases an exercise of his discretion on an invidious ground, such as race or religion, a court will not review that exercise.
Discretion in the criminal law is a way of introducing needed flexibility without creating loopholes through which criminals could escape the law.
Criminal laws tend as a result to be overinclusive.
People are constantly cutting corners of various sorts, often not realizing that by doing so they actually are violating a criminal statute.
Prosecutors are, quite wisely, not given sufficient resources to prosecute every crime, but instead are given discretion to allocate their limited resources as they see fit.
They can overlook the minor crimes without worrying that the criminal law contains loopholes that may enable a major criminal to elude justice.
Similarly, no driver actually obeys all the driving laws, but police rarely bother to ticket anyone who exceeds the posted speed limit by less than 5 or 10 m.p.h.
Suppose the posted speed limit is 60 m.p.h., but the de facto speed limit is 70.
If the posted speed limit were raised to the "realistic" speed limit of 70, many drivers would drive faster because the police would not have enough resources to catch all the speeders, so the de facto speed limit would rise above 70.
Moreover, police would lack flexibility to ticket the occasional driver who was driving above the posted but below the de facto speed limit but who the police believed should be ticketed anyway because he was driving erratically--weaving, tailgating, etc.
Of course they could ticket him for these activities, but it would be harder to determine and prove them, compared to detecting speed with a radar gun.
The third alternative is presumptions or guidelines, which have the structure of rules--that is, are simple and definite--but which allow discretion in enforcement.
An example is the federal sentencing guidelines, which enable a defendant, a judge, and probation officers to determine a definite range for a sentence for a particular crime committed by a person having particular characteristics (such as criminal history), but allow the judge to sentence outside the range if he can give a good reason for doing so based on sentencing factors set forth by Congress.
This approach makes sense when there is a need for guidance but strict rules would be too inflexible.
The merger guidelines used by the Justice Department and the Federal Trade Commission are a further illustration of the presumption/guideline approach.
They enable firms contemplating mergers to have a pretty good idea in advance of whether the merger will provoke a challenge from one of the enforcement agencies.
The firms can go to the agency assigned to their proposed merger before the merger is consummated and get a definitive determination of whether it will be challenged.
This procedure enables the lawyers and economists at the agency to decide whether the proposal is consistent with the spirit as well as the letter of the guideline.
There have always been communes, such as the Israeli kibbutzim, but they have usually failed, mainly because of free-rider problems.
If wages are uniform, shirkers flourish; also, incentives to undergo training that would increase the value of one's output are blunted.
Thus we read in the New York Times article of August 27 mentioned by Becker that "Mr. Varol was born on a kibbutz in the far north, but he left at 18. He is at peace in his new home, but bitter about the past. 'My parents worked all their lives, carrying at least 10 parasites on their backs,' he said. 'If they'd worked that hard in the city for as many years, I'd have had quite an inheritance coming to me by now.'" Yet, curiously, Varol's "new home" is one of the 30 percent of Israel's 250 or so kibbutzim that, according to the Times article, remain genuine communes--that is, with collective ownership and equal wages, though the collective raising of children has probably been abandoned.
(Incidentally, Varol's kibbutz--Ein Ha-Shofet--was named in honor of Justice Brandeis, shofet being the Hebrew word for judge.) In trying to weaken the bond between parents and children, the founders of the kibbutz movement, echoing Plato, who in his sketch of an ideal communist state in the Republic had advocated the communal rearing of children, were acknowledging that parents' instinctual desire to advance their children was inconsistent with communal equality.
The kibbutz movement is almost a century old, and it is more remarkable that 30 percent of the kibbutzim are still communist than that 70 percent are not, although the Times article does not indicate the percentage of the total kibbutz population that lives in those "classic" communist kibbutzim or how complete their commitment to the kibbutz ideal actually is.
An even more durable example of voluntary collectivist living that comes to mind is found in the Catholic monasteries and convents--and notice that it too is founded on a realization that family ties are inimical to communal ordering.
A kind of private quasi-collectivism persists in poor, disordered, or anarchic societies in which tribes and clans exercise functions that governments perform in wealthy societies.
The kibbutzim were founded as collective farms in a pre-mechanized agricultural economy.
Because they were small, because the skill and effort of each of the members of a kibbutz could be readily observed and evaluated, because (really the same point) there was little specialization, because the danger posed by the surrounding Arab population created a strong sense of mutual dependence among kibbutz members, because many immigrants to what is now Israel did not have good employment options, and because of the mysterious Jewish enthusiasm for communist and socialist movements, free-rider problems could be contained, and so the "classic" kibbutzim, unlike most voluntary communes, and without the religious backing of Catholic monasticism, flourished for generations.
Even so, kibbutzniks were never more than 7 percent of the Jewish population of Palestine.
As the extraordinarily favorable conditions for voluntary collectivism waned, it was inevitable that the classic kibbutz system would fade.
The kibbutz in its original collectivist form gives us a glimpse of pre-political human society.
In a society in which there is no effective central government, as was essentially the situation of the Jewish community in Palestine until Israel gained statehood in 1948 (and even afterwards, in frontier regions exposed to Arab terrorism), smaller groups will form for self-defense and the provision of other public goods, such as social insurance.
Collective ownership and wage equality are ways of protecting each member of the collective from economic and other vicissitudes; "from each according to his ability, to each according to his needs"--the communist slogan actualized in the classic kibbutz--is then a method of social insurance.
The difference between the uniform wage that every worker receives and the below-average value of a particular worker's output because of age, infirmity, wounds, or sheer inability is in effect a transfer, in the nature of insurance proceeds, from the other workers.
But it is less efficient than the forms of private and social insurance that arise when there is a government that can enforce property rights and thus enable industries such as the insurance industry to function, and that can provide social insurance out of tax revenues.
Probably, then, human beings have both collectivism and individualism in the genes, enabling us to adapt both to environments in which collectivism is welfare-promoting and environments in which individualism is welfare-promoting.
There are perennial calls for drafting all 18 year olds to serve in either the military or some civilian alternative.
Congressman Charles Rangel has repeatedly introduced bills in Congress (the "Universal National Service Act") that would do this.
The bills have never come close to passage, and are unlikely to in the future even with Democratic control of both houses of Congress.
But universal national service is one of those seductive ideas that refuse to die completely, and perhaps therefore it deserves a serious analysis.
It is analytically interesting and can serve as an example of the utility of a cost-benefit approach to public programs.
Roughly 4 million Americans reach the age of 18 every year.
There are only 1.4 million active-duty military personnel, so only a small fraction of each vintage of 18 year olds could be assigned to the military.
At their present size, our active-duty armed forces require only about 150,000 new recruits each year.
So any universal national service obligation would have to be primarily an obligation to do civilian work.
Civilian national service (in the United States--thus excluding the Peace Corps, and the missionary work that young Mormon men are required to perform for two years without compensation) funded by the federal government exists already.
The "AmeriCorps" program provides federal grants to a large number of  service organizations, both public and private.
Although these organizations pay only the living expenses of their volunteers plus a modest education grant, the federal contribution amounts to some $27,000 per volunteer.
The number of volunteers supported by AmeriCorps grants is small--well under 100,000.
But of course total volunteer activity is much greater than that, and by no means limited to young persons--an affiliate of AmeriCorps is the "Senior Corps." A survey by the U.S. Department of Labor found that there were some 60 million Americans engaged in volunteer activities in 2006 and that the median number of hours that the volunteers devoted to such activities was about 50 hours a year.
Thus, assuming that the average is not much different from the median and that a full-time job is 2000 hours a year, there were the equivalent of 1.5 million full-time volunteers (50/2000 x 60 million).
That number is important because a universal national service obligation would have a substitution effect: someone required by law to provide a year of national service would be likely to reduce the amount of volunteer service that he would provide in the future.
If, for example, there were a two-thirds reduction in volunteering, from 1.5 million full-time equivalents to 500,000, and thus a loss of 1 million full-time-equivalent volunteers, universal national service would augment volunteer activities by only 3 million full-time equivalents a year (4 million - 1 million).
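The full-time-equivalent arithmetic can be checked with a short sketch (all figures are the rough estimates given in the text):

```python
VOLUNTEERS = 60_000_000      # Americans volunteering in 2006 (Labor Dept. survey)
HOURS_EACH = 50              # median annual hours, taken here as the average
FULL_TIME = 2000             # hours in a full-time work year

fte_volunteers = VOLUNTEERS * HOURS_EACH // FULL_TIME
assert fte_volunteers == 1_500_000   # 1.5 million full-time equivalents

CONSCRIPTS = 4_000_000       # 18 year olds reaching draft age each year
LOST_FTE = 1_000_000         # assumed two-thirds substitution: 1.5m falls to 0.5m

net_gain = CONSCRIPTS - LOST_FTE
assert net_gain == 3_000_000         # net gain of 3 million full-time equivalents
```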
Granted, this number would rise if universal national service had a complementary effect on volunteer service rather than or, more plausibly, as well as a substitution effect--if, that is, the year of obligatory service created a taste for such service.
I find this implausible.
If 4 million persons were conscripted for one year's national service, at an annual expense of $27,000 per person, the program would cost more than $100 billion a year--probably much more, because the $27,000 figure excludes the overhead expenses of the service organizations that receive the per capita grants.
The $100 billion (or whatever the correct figure is) would be a transfer payment, but it would generate costs of two types.
The first would be the deadweight costs that the taxes required to fund the payment would impose.
The second and doubtless greater cost would be the difference between the value of the conscripts' national service work and the value of their output in whatever jobs they would have had were it not for their national service obligations.
About half the 18 year olds would (but for their national service obligation) be in college rather than working, and so the effect of universal national service on them would be to postpone their entry into the job market by a year.
Their lost wages in their first job would be a rough estimate of the value of their work in that job.
The starting salary for college graduates is more than $40,000, other than for liberal-arts majors, and this is about twice the starting salary for high school graduates.
That is some evidence that a universal national service program would be inefficient: it would in effect reallocate a year of a college graduate's working life from after college to before college, when he would be less productive.
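The cost arithmetic above can be made explicit (all numbers are the post's rough figures, and the overhead of the service organizations is deliberately excluded, as in the text):

```python
CONSCRIPTS = 4_000_000        # 18 year olds conscripted each year
GRANT_PER_PERSON = 27_000     # AmeriCorps federal cost per volunteer, pre-overhead

transfer = CONSCRIPTS * GRANT_PER_PERSON
assert transfer == 108_000_000_000    # over $100 billion a year, before overhead

# The opportunity cost for the college-bound half: each conscript's working
# year is shifted from after college (starting salary roughly $40,000) to
# before college (roughly half that), i.e., to a less productive year.
COLLEGE_START = 40_000
HIGH_SCHOOL_START = 20_000
productivity_gap_per_year = COLLEGE_START - HIGH_SCHOOL_START   # $20,000
```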
Against this it could be argued that the national service work that the 18 year olds would perform would have a social value in excess of its private value.
But this seems unlikely for most jobs that these teenagers would perform, such as helping out in hospitals and nursing homes and picking up litter on roadsides and in parks.
A possible exception is tutoring children, since education produces significant social benefits.
But only a small fraction of the 4 million national service conscripts could usefully be employed in that activity.
Universal national service would also have peculiar effects on the distribution of income.
The unpaid national service workers would replace low-paid service workers, pushing many of them into poverty.
Proponents argue that, all narrowly "economic" issues to one side, universal national service would confer intangible social benefits in the form of increased solidarity, as all Americans would share in the experience of working for the overall social good without compensation beyond modest living expenses.
But given the heterogeneity of the jobs that the national service workers would be performing, the solidarity-enhancing effect would surely be quite limited.
It would be different if the 4 million were all drafted into the armed forces for a year, but that is infeasible.
In a candid moment proponents of universal national service might respond that its real purpose is to take rich kids down a peg by forcing them to work for a year with minimal compensation.
The hope would be that the experience would make the rich empathize more with the poor and therefore treat them more generously.
This seems unlikely, though the issue is worth studying.
A person's attitude toward issues of distributive justice is shaped by a variety of factors, including temperament, parental values--and personal experiences not limited to a year's working without pay.
Becker points to India as an example of a society in which competition has been more effective than law in reducing discrimination in employment.
As with most analyses of historical phenomena, determining causation is rife with uncertainty.
Had the Indian government not abolished the caste system, would discrimination against untouchables have declined as much as it has?
The question is of more than academic interest from an American standpoint because we have laws against so many forms of employment discrimination--discrimination on racial grounds, of course, but also on grounds of ethnicity, religion, sex, disability, and age.
We also had a caste system in the South until relatively recently.
So do we need discrimination laws, or can competition be relied on to eliminate discrimination?
The answer I would give is that competition cannot be relied upon to eliminate discrimination (nor has Becker ever argued that it can be), but that, even so, laws against discrimination may not be desirable on balance, at least from the standpoint of economic efficiency, as distinct from making a political or moral statement.
They may also not be very effective.
I will confine my analysis largely to employment discrimination.
If an important class of customers does not want to be served by, say, black employees, or if an important class of workers does not want to work with black employees, then the tendency in the absence of a discrimination law, as Becker explains, will be segregation of the workforce: the market will be served by a combination of all-white and all-black firms.
If, however, segregation raises employers' costs by more than the increase in wages that they would have to pay their white employees to induce them to work side by side with blacks, plus the loss of net revenues from white customers who do not want to be served by black employees, there will be competitive pressure on the employers to integrate their work forces.
The pressure will depend in part on how strong the whites' aversion to working with or dealing with blacks is.
There is no reason for competition to affect that aversion, other than by bringing the costs of it home to employers and through them to their white workers and customers.
Although law can try to eliminate employment discrimination, it is unlikely to be very effective and if it is effective it may not be efficient.
Take the second point first.
Suppose white employees have a strong aversion to working with blacks.
Then forbidding discrimination will impose a heavy cost on the white employees.
If there are more of them than there are blacks, the cost to the white employees may exceed the benefits to the black employees.
Of course, an antidiscrimination law may rest on a political or moral judgment that costs imposed by thwarting a taste for discrimination should not count in the social calculus, but that is a judgment outside of economics.
Now as to the efficacy of such laws: it is bound to be limited unless enforced by savage penalties, which our discrimination laws are not.
There are three reasons for their limited efficacy.
The first is that an employer who wants to continue discriminating against blacks can (within limits) reconfigure his work force to reduce his demand for skills likely to be possessed by black applicants for employment, can substitute capital for labor, and can relocate to areas in which the applicant pool contains few blacks.
Second, felt legal pressure to hire blacks results in "affirmative action," which both creates resentment among whites and casts some doubt on the average quality of black employees and so in effect stigmatizes the entire class.
And third, because a discrimination law makes it more difficult to fire a member of the class protected by the law, it increases the cost of hiring members of the class and so increases the incentive to discriminate in hiring.
There is some evidence that the passage of the Americans with Disabilities Act, forbidding discrimination against the disabled, led to an actual decline in the number of disabled persons employed.
Although an employment discrimination law is thus apt to be of limited (though not zero) efficacy, other bodies of law can play a large role--larger even than market forces--in reducing employment discrimination.
Much employment is public, and public bodies can decide to incur the costs of eliminating discrimination in their work forces and hire many blacks.
In addition, laws that reinforce a caste system, such as the Jim Crow laws in the southern states that persisted into the 1950s, can reduce employment opportunities for blacks beyond what private discrimination would do, for example by limiting their educational opportunities.
The repeal or invalidation of such laws can thus indirectly increase black employment opportunities.
Deregulation is a minor but interesting legal change that tends to reduce discrimination.
A regulated monopoly is constrained in the amount of monetary profit that it can obtain, but unconstrained in nonmonetary perks, including indulging a taste for discrimination.
Neither legal nor market forces have brought employment parity between whites and blacks in the United States.
Parallel with the struggle of blacks for parity, Jews, East Asians, and immigrants generally, have made rapid economic progress and indeed (at least in the case of Jews and East Asians) largely overcome discrimination, yet without significant help from the law.
An open economy provides opportunities even to victims of discrimination, especially if the victim group is large enough to achieve economies of scale in trade within the group.
As members of the group grow modestly affluent and thus achieve a standard of living that enables them to assimilate to the larger culture, as by consuming similar goods and services and sending their children to good schools, discrimination against them declines because they cease to seem "different" from the majority.
When members of a minority group talk and think and act like the majority and have the same tastes and in short share the same culture, the fact that they may have a different physical appearance ceases to count greatly against them, as indicated by high rates of intermarriage in the groups I have mentioned.
Assimilation to the dominant culture, as yet incomplete for a great many blacks, may thus be the major force in reducing discrimination, with competition and law playing lesser roles.
The forthcoming presidential election has drawn attention to online predictions markets.
The first, and one of the best known, is the Iowa Electronic Market (IEM), started in 1988 to bet on presidential elections.
Participants can bet up to $500.
The odds and hence the price of a contract are set by the bidders themselves, as in a stock market, rather than by the "house," as in casino gambling.
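The way a contract price encodes a prediction can be sketched as follows. The sketch assumes a winner-take-all contract that pays a fixed amount (say $1) if the event occurs, a common design in these markets:

```python
# In a winner-take-all prediction market, a contract pays a fixed amount if
# the event occurs and nothing otherwise. Its market price can then be read
# as the bettors' aggregate probability estimate for the event.
def implied_probability(price: float, payoff: float = 1.0) -> float:
    return price / payoff

# A hypothetical "Candidate X wins" contract trading at 62 cents implies
# that the market collectively puts X's chance of winning at about 62 percent.
p = implied_probability(0.62)
assert abs(p - 0.62) < 1e-9
```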
A number of other prediction markets, some using virtual (i.e., play) rather than real money, have emerged, including TradeSports.com, the Foresight Exchange Market, Newsfutures, Intrade, and the Hollywood Stock Exchange.
IEM, on which I'll focus, has correctly predicted the outcome of every presidential election since 1988, and its predictions have been consistently more accurate than the polls.
An interesting comparison between the Gallup Poll and the Iowa market in the 1996 presidential campaign (www.biz.uiowa.edu/iem/media/96Pres_VS.html) reveals that throughout the entire campaign the Iowa market's predicted outcome was much closer (in margin of victory) to the actual outcome than the Gallup Poll was.
Studies have found that prediction markets beat polls and other prediction tools even when a prediction market uses play rather than real money.
The Pentagon planned to create a prediction market in which participants could bet on the likelihood of terrorist attacks, assassinations, and coups.
The plan caused outrage and was abandoned.
There was a serious objection to the plan: people planning terrorist attacks, assassinations, and coups have inside information which they could use to make a killing (pun intended) in the prediction market.
The success of prediction markets is related to though distinct from the success of the "blogosphere" in ferreting out information that eludes the mass media.
Both the blogosphere and prediction markets aggregate greater amounts of information than any centralized information gatherer can obtain.
In the case of the blogosphere, it is easy to see why this is so.
It is virtually costless (except in time) to become a blogger, and among the millions of people drawn to blogging are people with all sorts of pockets of specialized information, which the internet enables to be pooled rapidly.
This pooling resembles the economic market, in which vast amounts of information, encapsulated in prices, are pooled (the basic insight of Friedrich Hayek, and the secret of capitalism's superiority to socialism as a means of optimizing economic activity).
Prediction markets provide an even closer analogy to the market, since they (or rather some of them, for others permit betting only with play money) provide financial rewards for correct information (as blogging rarely does), in this resembling ordinary commercial speculation.
Someone who thinks he has superior insight into political processes will have an incentive to place a bet in IEM or some other political prediction market.
This method of aggregating information--call it expert aggregation--is different from public opinion polling, which is based on randomness.
The political pollsters ask a random sample of likely voters how they expect to vote; they do not ask them for an opinion on how other people will vote, a matter on which randomly selected respondents cannot be expected to have an expert opinion.
The idea behind the prediction market is that the opportunity to make money or just the fun of betting on one's insights or hunches (the only reward that the virtual prediction markets offer participants) will elicit expert opinions--more so, certainly, than random polling, which anyway, as I have said, does not ask respondents for an opinion about anyone's voting except their own.
I don't think the success of prediction markets is due to a "wisdom of crowds" phenomenon--the idea that somehow large groups of seemingly nonexpert people are bound to "get it right." The "wisdom of crowds" is really just a matter of reducing sampling error.
Suppose 100 people guess the weight of a person.
Some will guess too low, some too high, but the average guess will be close to the true weight.
If, however, just one person is asked to guess, the chances are great that his guess will be either too high or too low.
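The error-reduction effect of averaging can be simulated with a toy model (the true weight, the error range, and the crowd size are all made up for illustration):

```python
import random

random.seed(0)
TRUE_WEIGHT = 80.0   # kg; the quantity the crowd is guessing

def guess():
    # A toy error model: each guesser errs independently by up to +/- 20 kg.
    return TRUE_WEIGHT + random.uniform(-20, 20)

TRIALS, CROWD = 2000, 100
crowd_err = sum(
    abs(sum(guess() for _ in range(CROWD)) / CROWD - TRUE_WEIGHT)
    for _ in range(TRIALS)
) / TRIALS
solo_err = sum(abs(guess() - TRUE_WEIGHT) for _ in range(TRIALS)) / TRIALS

# Averaging 100 independent guesses shrinks the typical error by roughly
# a factor of ten (the square root of the crowd size).
```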
One problem with prediction markets, a problem that occurred on the day of the 2004 presidential election, is that a market can swing on the basis of unreliable information until the information is corrected.
(That happened last week when the price of stock in United Airlines plummeted on a mistaken report that the airline was about to declare bankruptcy.) Exit polls showed Kerry winning a disproportionate number of the votes cast early in the morning, and immediately the prediction markets predicted that he would win the election; and of course he lost.
Another potential problem with the prediction-market model is that the limits on the bets that can be placed, illustrated by the Iowa market's $500 limit, are so low.
One understands why there are limits: otherwise there would be a danger of market manipulation.
Expenditures on the current presidential election campaign will exceed a billion dollars.
It must be that the prediction markets attract people who derive nonpecuniary satisfaction from successful bets and that among those people are likely to be a number who really do have insight into the issues bet on in the market, since their bets are more likely to be correct and therefore they are more likely to derive the satisfaction that comes from successful betting.
Probably most people who bet on horse racing think they know something about horses, and probably most people who bet on the outcome of a political campaign know something about politics.
It may seem odd, though, that strangers betting in a market would have a better sense of how people will vote than the people in a random sample have of how they themselves will vote.
But only about half of all eligible voters actually vote in a presidential election, many people refuse to talk to pollsters, some people do not make up their minds until the last minute (but may be hesitant to reveal their indecision to a pollster), some respondents will tell the pollster what they think he wants to (or will be impressed to) hear, and the number of persons sampled is never large enough to permit a confident prediction of a point outcome, as distinct from a range (say a 95 percent probability that one candidate's vote percentage will be between 47 and 50 percent and the other's between 49 and 52 percent).
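The point-versus-range limitation can be illustrated with the standard normal-approximation interval for a poll proportion (the 48 percent result and the 1,000-person sample are made up for illustration):

```python
import math

# 95 percent confidence interval for a poll proportion (normal approximation).
def poll_interval(p_hat: float, n: int, z: float = 1.96):
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

low, high = poll_interval(0.48, 1000)   # a 48 percent result, 1,000 respondents
# The margin of error is roughly +/- 3 points: wide enough that a close
# race cannot be "called" as a point outcome, only bracketed as a range.
```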
There is an interesting question whether prediction markets should be thought of as "gambling" and perhaps prohibited.
As a matter of policy, that would be a mistake, even if one thinks that gambling should be prohibited.
The prediction markets are markets for speculation, rather than for game-playing or risk-taking.
Slot machines, card-playing, roulette wheels, and other conventional forms of gambling do not generate socially valuable information.
Speculation does.
Commercial speculation serves to hedge commercial risks and bring prices into closer phase with value.
Political, cultural, and other prediction markets also yield socially valuable information.
The outcome of elections is important to companies and even individuals for whom particular public policies are important; they may wish to make adjustments to avert or exploit looming political change.
Politicians too need to have as sharp a sense as possible about the effects on the electorate of their and their opponents' strategies.
Apparently they can get more accurate information from the prediction markets than from the public opinion pollsters.
There has been such a flood of media coverage of the financial crisis that it is best to begin with some very simple, basic points.
Banks (broadly defined to include investment banks and the many other lenders) borrow--bank deposits, for example, are loans to banks--and then lend out what they have borrowed.
As a result, their loans are much larger than their capital assets (cash, a building, etc.).
If their capital shrinks in value, they have less protection against the possibility that the loans they make will not be repaid in full.
If a bank's capital is 10, and it borrows 100 and lends 100, and the persons or firms it lends to repay only 90, its net worth will fall to zero (10 [its capital] + 90 [the value of its loans] - 100 [the amount it owes its depositors] = 0).
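The balance-sheet arithmetic in this example can be laid out explicitly:

```python
capital = 10     # the bank's own capital
borrowed = 100   # deposits, owed back to depositors in full
lent = 100       # loans made with the borrowed funds
repaid = 90      # borrowers default on 10 of the 100 lent

net_worth = capital + repaid - borrowed
assert net_worth == 0

# Because the bank is levered 10:1, a mere 10 percent loss on its loan
# book wipes out its entire capital.
leverage = lent / capital
assert leverage == 10.0
```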
Banks in recent years have increased the ratio of their loans to their capital because borrowing costs were low and financial experts thought they had discovered ways of reducing the risk of leverage (that is, of borrowing).
Many of the loans were mortgage loans, and the value of those loans fell when the housing bubble burst.
(Risky, and in some cases deceptive, mortgage practices had contributed to the bubble.) What made the situation worse was that rather than retaining the mortgages that they originated, banks (especially the major ones) sold the mortgages in exchange for securities backed by the mortgages.
Those securities became a part of a bank's capital.
The value of the securities depended on the value of the mortgages that the entity issuing the securities had bought; those mortgages were the entity's assets.
As that value fell, the bank's capital fell.
The mortgage-backed securities achieved geographical diversification of mortgage risk.
But the housing bubble, though not geographically uniform, was sufficiently widespread that geographical diversification did not reduce the risk of mortgage defaults sufficiently to avert the fall in the value of mortgage-backed securities.
A complicating factor was that the value of those securities was and is very difficult to determine, because each security represents a share in pieces of many different mortgages.
The bank that owns the security cannot readily determine the value of all those different mortgages, since it has no direct relationship with the mortgagor, having sold the mortgage to the entity that issued the mortgage-backed securities.
Because the banking industry (and remember that I am defining "banking" very broadly, basically as all lending) was highly leveraged, and because much of its capital consisted of securities very difficult to value, the bursting of the housing bubble reduced the capital of the banks, but by an unknown amount.
The reduction in capital, and the uncertainty about its amount, have curtailed lending by shrinking the capital cushion that a bank needs to keep at an acceptable level the risk that some of its loans will not be repaid.
That is the "credit crunch," and it is painful because so many individuals and businesses borrow to finance their activities.
Ordinarily one would expect a credit crunch to be self-correcting.
As lending dropped because of the fall in bank capital, interest rates would rise and this would attract more capital to the financial markets.
We have seen this process at work in Warren Buffett's $5 billion investment in Goldman Sachs.
Buffett has capital, Goldman needs it, so Buffett gives it to Goldman in exchange for preferred stock (which is really a type of bond but one that does not have a term--it is never repaid) paying a handsome interest rate.
But Goldman is pretty healthy.
Many lenders have so much of their capital tied up in mortgage-backed securities or other novel forms of capital that are difficult to value that they cannot attract new capital at a price that would enable the lender to continue in business.
The sale of the securities would just expose their lack of value.
The federal government, however, has essentially unlimited capital because of its taxing power.
It is prepared at this writing to contribute perhaps as much as a trillion dollars to rebuild the capital of the banking industry.
The Treasury wants to make this contribution in the form of buying the dubious securities, but that seems to be a mistake, unless pressure of time allows for no alternative.
If the Treasury pays the actual value (if anyone can determine what that is) of the securities, it will not be injecting new capital into the banking industry, but merely swapping one form of capital for another.
If the Treasury pays more than the securities are worth, then it is contributing capital to the industry all right, but it is also enriching the owners and managers of the banks, which creates the familiar moral hazard problem as well as upsetting people by rewarding careless management practices.
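The difference between the two purchase prices can be made concrete with a stylized balance sheet (the figures here are hypothetical, chosen only for illustration):

```python
# A bank holds other capital plus troubled securities with some true value.
# Selling the securities to the Treasury swaps them for cash at the price paid.
def capital_after_sale(other_capital, price_paid):
    # The securities leave the balance sheet; the cash received replaces them.
    return other_capital + price_paid

other_capital = 40
true_value = 50          # what the securities are actually worth (if knowable)
print(other_capital + true_value)        # capital before the sale: 90

print(capital_after_sale(other_capital, 50))  # pay true value: 90, no new capital
print(capital_after_sale(other_capital, 70))  # overpay by 20: 110, a 20 windfall
```

Paying true value merely swaps one form of capital for another; only the overpayment injects new capital, and every dollar of it is a transfer from taxpayers to the bank's owners.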
The more it overpays, the more costly the bailout plan is to the taxpayer.
A more palatable approach would be for the government to drive a Warren Buffett style hard bargain, in which, rather than buying anything from banks, the government would invest in them in a form, such as purchase of newly issued preferred stock, or bonds with a long maturity, that would augment the banks' capital and thus enable banks to make more loans.
That would avoid conferring a windfall on the banks by overpaying them for their bad securities; no one thinks Buffett is conferring a windfall on Goldman Sachs.
After the industry was back on its feet, the government could sell the bank stocks or bonds that it had acquired.
I agree with Becker that capitalism will survive the current financial crisis, even if it leads to a major depression (which it may not).
It will survive because there is no alternative that hasn't been thoroughly discredited.
The Soviet, Maoist, "corporatist" (fascist Italy), Cuban, Venezuelan, and other alternatives are unappealing, to say the least.
But capitalism may survive only in damaged, in compromised, form--think of the spur that the Great Depression gave to collectivism.
The New Deal, spawned in the depression, ushered in a long era of heavy government regulation; and likewise today there is both advocacy and the actuality of renewed regulation.
I would like to examine the possibility that government is responsible for the current crisis; for if it is, this would be a powerful intellectual argument against re-regulation, though not an argument likely to have any political traction.
I do not think that the government does bear much responsibility for the crisis.
I fear that the responsibility falls almost entirely on the private sector.
The people running financial institutions, along with financial analysts, academics, and other knowledgeable insiders, believed incorrectly (or accepted the beliefs of others) that by means of highly complex financial instruments they could greatly reduce the risk of borrowing and by doing so increase leverage (the ratio of debt to equity).
Leverage enables greatly increased profits in a rising market, especially when interest rates are low, as they were in the early 2000s as a result of a global surplus of capital.
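How leverage magnifies both gains and losses can be shown with a hypothetical return-on-equity calculation (the rates and balance-sheet figures are illustrative, not taken from any actual bank):

```python
# Return on equity for a firm that funds assets with equity plus debt.
# A small spread between asset returns and borrowing costs is magnified by debt.
def roe(equity, debt, asset_return, borrow_rate):
    assets = equity + debt
    profit = assets * asset_return - debt * borrow_rate
    return profit / equity

# 6% return on assets, 3% cost of borrowing:
print(roe(10, 0, 0.06, 0.03))    # unlevered: 6% return on equity
print(roe(10, 90, 0.06, 0.03))   # 9x leverage: 33% return on equity
# The same leverage turns a 6% asset loss into an 87% loss of equity:
print(roe(10, 90, -0.06, 0.03))
```

The asymmetry is the point: in a rising market with cheap borrowing, leverage looks like free money, but the downside scales just as fast.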
The mistake was to think that if the market for housing and other assets weakened (not that that was expected to happen), the lenders would be adequately protected against the downside of the risk that their heavy borrowing had created.
The crisis erupted when, because of the complexity of the financial instruments that were supposed to limit risk, the financial industry could not determine how much risk it was facing and creditors panicked.
Compensation schemes that tie executive compensation to the stock prices of the executives' companies but cushion them against a decline in those prices (as when executives are offered generous severance pay or stock options are repriced following the fall of the stock price) further encouraged risk taking.
Moreover, even when businesses sense that they are riding a bubble, they are reluctant to get off while the bubble is still expanding, since by doing so they may be leaving a lot of money on the table.
Finally, if a firm's competitors are taking big risks and as a result making huge profits in a rising market, a firm is reluctant to adopt a safe strategy.
For that would require convincing skeptical shareholders and analysts that the firm's below-average profits, resulting from its conservative strategy, were really above-average in a long-run perspective.
It should be noted that because of the enormous rewards available to successful financiers, the financial industry attracted enormously able people.
It was not a deficiency in IQ that produced the crisis.
Becker makes incisive criticisms of the government's responses to the crisis.
He points out that those responses create moral hazard, specifically a bias toward financing enterprise by bonds rather than by stock because the government's bailouts are limited to the bondholders and other creditors; create additional moral hazard because the responses include extending government insurance of deposits to money market funds; impede hedge funds by forbidding short selling, which enables the funds to hedge their risks; reduce information about stock values (another consequence of forbidding short selling); increase regulation of financial markets, which will carry with it the usual heavy costs of heavy-handed regulation; blur the role of the Federal Reserve Board by increasing its powers and duties; and increase the federal deficit.
But here is a remarkable thing about these responses.
To a great extent they are not responses by government, really, but by the private sector.
Bernanke and Paulson are neither politicians nor civil servants; Bernanke is an economics professor and Paulson an investment banker.
Their principal advisers are investment bankers rather than Fed and Treasury employees.
Even the prohibition of short selling, which seems like a product of the kind of mindless hostility to speculation that one expects from politicians, has been strongly urged by Wall Streeters, including the CEO of Morgan Stanley.
The White House, the Congress, and even the SEC have been only bit players in the response to the crisis.
In effect, the government's power to repair the crisis that Wall Street created has been delegated to Wall Street.
It is true that the top financial officials of our government have usually come from the financial industry or academia.
The difference is how recently Bernanke and especially Paulson were appointed, how heavily they are relying on financial experts from the private sector rather than on civil servants, and how small a role the politicians in Congress and the White House have played in shaping the response to the crisis.
I do not criticize the delegation of the handling of the crisis to (in effect) the finance industry.
I imagine that Bernanke and Paulson and their private-sector advisers are the ablest crisis managers whom one could find.
I merely want to emphasize that the financial crisis is indeed a "crisis of capitalism" rather than a failure of government, though it will not and should not lead to the displacement of free-market capitalism by an alternative system of economic management.
But it is already shifting the boundary between the free market and the government toward the latter.
Vice President Cheney is reported (I don't know whether accurately or not) to have said that "deficits don't matter." Certainly the Bush Administration ran big ones, as a result of which the public debt (which is the national debt less federal liabilities to Americans created by the social security and other entitlement programs) doubled.
It has continued mounting as the deficit continues growing, and has now reached $7.5 trillion, which is more than half the (annual) Gross Domestic Product.
It will continue to grow rapidly, because of the fall in federal tax revenues as a result of the economic downturn, because of the aging of the population which along with the continued acquisition of advanced medical technology is causing a continuing rapid increase in Medicare costs, because of the reluctance of Congress to raise taxes or cut any spending programs, and because of the likely cost of the ambitious new programs of the Obama Administration.
How much money will actually be appropriated for the programs, such as health-care reform and climate control, is, at this writing, unclear.
The public debt is funded by Treasury borrowing (actually a bit of it is being funded as an emergency measure by the Federal Reserve, but I will ignore that), of which more than 40 percent comes from foreign governments and other foreign entities and the rest from Americans, including banks and other financial institutions.
Much of this borrowing is in the form of 10-year Treasury bonds, which are now commanding an interest rate of about three and a third percent.
The government is having no difficulty at present in borrowing at moderate interest rates to fund the public debt, large as it is.
The reason is partly that Treasury securities are safe in the sense that there is no risk of default and the current global economic downturn has increased the demand for safe investments, and partly that the world is awash in dollars because of the policy of a number of major nations, such as China (but not only China--others include Germany, Japan, and the oil-exporting small nations of the Middle East), of running very large current-account surpluses (i.e., trade surpluses).
These surpluses are largely in dollars because the dollar is the principal international reserve currency, which is to say a currency used in international transactions in preference to using local currencies that fluctuate more than the dollar does.
As the world's principal source of international reserve currency, the United States in effect sells dollars to the rest of the world to provide liquidity in international trade, and many of the dollars come back to the United States in the form of investments in Treasury securities, especially from countries that have large dollar reserves because they export much more than they import.
A country that supplies a major international reserve currency must run a current-account deficit because otherwise the rest of the world wouldn't have enough of the currency for their transactions.
The fact that foreign countries need large dollar reserves for this purpose means that there are a lot of foreign dollars available for the purchase of U.S. securities, quite apart from current-account surpluses.
This makes it easy for the United States to borrow at reasonable interest rates to fund its public debt, even if Americans, unlike Japanese, are not big savers.
(The fact that Japanese are big savers enables Japan to fund a public debt that is proportionately much greater than ours, without much difficulty.) Americans are saving much more nowadays than they were a year ago, but this may change as the economy recovers.
As long as Americans are saving a lot, and wanting their savings to be safe, and foreigners as well, and as long as nations like China are running huge current-account surpluses, we can fund our public debt at reasonable interest rates.
But that is provided it doesn't grow too fast, and it is growing very fast and there are no signs of its slowing.
As the economy recovers, federal tax revenues will rise, but federal expenditures will be rising too, and rising all the faster if a significant part of the Administration's ambitious program is authorized by Congress, because there don't seem to be any serious efforts at either increasing any taxes (even by reducing deductions) or cutting any spending programs.
The perfection of interest-group politics seems to have created a situation in which taxes can't be increased, spending programs can't be cut, and new spending is irresistible.
Judging by the Bush Administration's profligacy and its impact on the public debt, the situation is bipartisan.
At some point the wheels may start coming off the chassis.
Assume that the public debt continues its rapid growth because government spending increases rapidly but Congress refuses to authorize significant increases in taxation.
The Treasury will have to borrow more and more, yet at a time when recovering economies need investment capital, forcing interest rates up and hence deepening our deficits.
We already pay more than $400 billion a year in interest on the public debt, and that amount will rise rapidly as both the size of the public debt and interest rates rise.
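The interest burden grows multiplicatively, since both the stock of debt and the average rate paid on it can rise together; a rough sketch (the second scenario's figures are hypothetical):

```python
# Annual interest cost on the public debt, in trillions of dollars.
def interest_cost(debt_trillions, avg_rate):
    return debt_trillions * avg_rate

# Roughly $7.5 trillion of debt at an average rate near 5.3% implies
# the ~$400 billion annual interest bill mentioned above.
print(interest_cost(7.5, 0.053))   # ~0.40 trillion per year

# Hypothetically, if the debt doubled and average rates rose two points:
print(interest_cost(15.0, 0.073))  # ~1.1 trillion per year
```

Because the two factors multiply, a debt that doubles while rates rise modestly produces an interest bill that nearly triples.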
Assume further that political pressures prevent the Federal Reserve from raising interest rates in order to head off inflation caused by the banks' finally deciding to lend (as the economy recovers) the huge excess reserves that they have accumulated as a result of the Fed's open-market operations during the current economic crisis.
Fear of inflation will push up long-term interest rates, including rates paid by the Treasury to fund the growing public debt.
Fear of inflation will also make foreign countries worry about the value of their dollar reserves, and wonder whether the dollar should continue to be the predominant international reserve currency.
As the dollar falls in value, however, the public debt will become cheaper to repay, the demand for U.S. exports will grow, and our demand for imports will fall.
The increase in the ratio of exports to imports will reduce the current-account deficit and thus reduce the rate of increase of the public debt.
But increasing exports relative to imports, by tending to reverse the long-term decline in U.S. manufacturing relative to services, may be a painful and protracted process.
We have grown accustomed to financing our consumption by borrowing heavily abroad to pay for manufactured imports and for our elaborate systems for distributing goods and providing other services.
Our economic productivity has become heavily dependent on the immigration of high-IQ professionals, but one casualty of the current economic crisis has been that openness, as restrictions on immigration designed to protect Americans' jobs have tightened.
Moreover, even with a reduced current-account deficit, U.S. public debt will be rising because of increasing unfunded expenditures on medical care and other social programs, and for all one knows on military activity as well, since the United States remains the world's policeman.
And lenders will charge higher interest rates to continue to fund our public debt if they think the dollar is losing value because of inflation.
If inflation persists, then given that there are other international reserve currencies, namely the euro and the yen, and in time the renminbi (the Chinese currency), the dollar will decline as an international reserve currency, and, with the demand for dollars thus reduced, its value will fall further.
As real interest rates rise as a consequence of the growing public debt and the decline in demand for the U.S. dollar as an international reserve currency, U.S. savings rates will rise, and by reducing consumption expenditures this will slow economic activity.
Economic growth may also fall as more and more resources are poured into keeping elderly people, most of whom are not highly productive members of society from an economic standpoint, alive.
The United States may find itself in the kind of downward economic spiral in which "developing" countries often find themselves.
As an economic power we may go the way of the British Empire, which occupied approximately the same position in the world economy in the early twentieth century as the United States does today.
Becker is correct that the tariff appears to be a pay back to the unions for their strong support of Obama in the 2008 election.
The significance of that support was amplified by a questionable feature of our political system.
All but two states award all their electoral votes to the candidate who wins a plurality of the popular vote in the state.
This makes winning, however narrowly, the popular vote in states that have a lot of electoral votes disproportionately important to a successful strategy for a presidential candidate, and in turn amplifies the effect of interest groups in those states.
States in which unions, despite their modest fraction of the labor force overall, are electorally powerful include major swing states, such as Ohio.
Industry-wide unions are labor cartels, but the aggregate economic effects of unions in the American economy are probably slight.
Many unions are unaggressive, being chiefly interested in union dues.
Others operate primarily to protect workers against arbitrary supervisors and unsafe working conditions, and these unions may generate net benefits--may even improve labor relations by increasing employee trust.
The United Auto Workers is a dying remnant of the dinosaur era of industrial unions; the UAW has done much to bring down the U.S.-owned domestic auto industry but it could not have succeeded had it faced competent management.
Unions reduce management flexibility, and that is particularly harmful in a depression or recession.
Union contracts usually require that layoffs be strictly inverse to seniority, so the employer cannot use a depression-generated need to lay off workers to get rid of dead wood; for that reason and because of wage inflexibility created by union contracts, unionized firms cannot cut their costs in response to a fall in demand for their products as rapidly or deeply as nonunionized firms can, and this retards the restoration of economic equilibrium.
The effect is particularly pronounced in a deflation, which in fact we are experiencing.
In a deflation, constant nominal wages result in increased real wages, thus raising producers' costs and spurring layoffs.
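The mechanism is simple division: a nominal wage fixed by contract buys more as the price level falls, so the employer's real labor cost rises. A sketch with hypothetical numbers:

```python
# Real wage = nominal wage deflated by the price level.
def real_wage(nominal_wage, price_index):
    return nominal_wage / price_index

w = 20.0                    # nominal hourly wage, fixed by a union contract
print(real_wage(w, 1.00))   # price level at the contract date: real wage 20.0
print(real_wage(w, 0.97))   # after 3% deflation: real wage rises to ~20.62
```

With union contracts blocking nominal wage cuts, the employer's only margin of adjustment is layoffs, which is why deflation is particularly damaging to unionized firms.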
Because unions are quite weak in the nation as a whole, the importance of union support to presidential candidates doesn't necessarily translate into strong support in Congress for pro-union policies.
Congress doesn't kowtow to presidents even when the political party to which the president belongs dominates Congress.
The tariff on Chinese tires is a unilateral presidential gift to a union.
One hopes it is just a matter of throwing a small bone to the union movement, because to launch a trade war against China would be playing with fire.
There was much more at stake for unions in the auto bankruptcies, but I remain sympathetic to the government's bailouts, especially the initial bailouts of last December.
Because of the credit crunch, it is unlikely that bankrupt auto companies could have attracted "DIP" financing--that is, financing the operations of a company that is in bankruptcy but is continuing to operate (such a bankrupt is called a "debtor in possession" (DIP)).
Without DIP financing, GM and Chrysler would have had to liquidate (that is, shut down), because they could not operate without credit.
Their liquidation would have thrown hundreds of thousands of workers, perhaps more than a million, out of work at a time when the economy was in a steep downward spiral; the effect could have been catastrophic.
By the time the auto companies were allowed to declare bankruptcy, in May, they had undergone a gradual partial liquidation and the panic phase of the economic downturn had passed, so the economy could take the bankruptcy in stride.
Still, without government assistance, there was a danger of liquidation and so a case for the further bailouts--though not for the government's becoming the actual owner of an auto company, namely General Motors.
I do not approve of the "Buy America" provisions of the $787 billion stimulus, which appear to be another bone thrown to the unions, because of the danger that they may provoke retaliation.
However, they are at least understandable as a stimulus provision.
The purpose of a stimulus program is to increase employment, and to the extent that stimulus moneys are used to buy imported goods, the purpose is thwarted--the stimulus then stimulates a foreign economy, not the U.S. economy.
Much better than including "Buy America" provisions in the stimulus law, however, because much less likely to provoke retaliation, would be targeting the stimulus expenditures more carefully on the production of goods and services that involve minimum use of imported components, road building being the obvious example though another is military equipment, as emphasized by Martin Feldstein.
The stimulus program was in my opinion an essential measure for fighting the looming depression, but it was poorly designed.
I am not bold enough to make forecasts about economic recovery, given the unusual economic situation that the country is in.
The recovery may be fast or slow, shallow or steep, continuous or interrupted--and if fast and steep may set the stage for inflation and other economic troubles.
So I am neither an optimist nor a pessimist.
I am uncomfortable with the way in which modern economists discuss economic downturns.
Before the 1930s depression, economic busts were called "depressions." As far as one can judge from the incomplete nineteenth-century economic statistics, that depression was of unprecedented severity, and hence came to be called the "Great Depression." Which is fine.
But thereafter, for reasons I can't fathom, economic busts, instead of being called "depressions" (though of course not "Great Depressions," because they were much less severe), came to be called "recessions." The current downturn, because it is the worst since the Great Depression, is now being called the "Great Recession." I find this lexical nitpicking distracting and unhelpful.
Why not just say, we're in a depression, severe by postwar standards but mild compared to the Great Depression?
I also question the convention that says that a depression (or recession, if one insists on retaining that word) ends when GDP growth resumes.
Actually, that is not the official (National Bureau of Economic Research) position; its business-cycle committee looks at other factors as well, such as employment.
It would be a nonsensical convention applied to the Great Depression; it would imply that the Great Depression ended in March 1933, when output and employment began to rise after falling by one-third and one-quarter, respectively, from their 1929 levels.
I would prefer to say that a depression ends either when economic output returns to its pre-depression level or, better, when it returns to the GDP trendline of average annual growth, which is about 3 percent in real terms.
So this depression has not ended.
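The trendline criterion proposed above can be made concrete with a hypothetical GDP path and the 3 percent real trend rate mentioned in the text:

```python
# Where GDP "should" be, on a 3% annual real-growth trend from the peak.
def trend_gdp(base, years, growth=0.03):
    return base * (1 + growth) ** years

# On the trendline definition, the depression ends only when actual
# output regains the trend, not merely when growth resumes.
def depression_over(actual_gdp, base, years):
    return actual_gdp >= trend_gdp(base, years)

# GDP was 100 at the peak; two years later it has climbed back to 101.
print(trend_gdp(100, 2))             # trend output: ~106.09
print(depression_over(101, 100, 2))  # False: growth has resumed, but the
                                     # gap to trend has not been closed
```

The point of the stricter definition is that an economy can be growing again while still far below where it would have been without the downturn.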
This depression was never likely to be as severe as the Great Depression.
One reason is the automatic stabilizers, such as unemployment benefits and other social-welfare programs, and progressive income tax.
Another reason is changes in the composition of the workforce.
Manufacturing and construction, two of the industries most likely to respond to a fall in demand by laying off workers, account for a much lower percentage of the U.S. workforce today than in the 1930s; and services (which had a low unemployment rate even in the Great Depression) account for a much higher percentage.
In addition, there is federal deposit insurance, and a clearer understanding that in a depression the government should try to increase the money supply, and indeed should try to create at least a mild inflation.
The Roosevelt Administration did both things as soon as it took office and they probably were responsible for the rapid improvement in the economy that began soon after his inauguration, though it was later interrupted by the economic dive in 1937 and 1938--what has been called the "second depression."
Nevertheless this depression resembles the Great Depression in one respect that makes forecasting particularly chancy--it has been accompanied and made worse by a financial crisis.
The normal depression comes about either from something that happens in the nonfinancial economy, such as a big increase in productivity which causes unemployment, or by the action of a nation's central bank in raising interest rates to stop or head off inflation.
In both cases, as shown in research by Christina Romer and others, the depression can be effectively treated by the central bank's reducing interest rates, which stimulates economic activity by increasing lending.
But we are in a depression in which interest rates are very low.
Indeed, the Federal Reserve is maintaining the federal funds rate at just a shade over zero percent and has been for many months.
There are other interest rates, and the effect of the federal funds rates on them is complex, but nevertheless there is nothing further the Fed can do, or at least that it wants to do (because it's beginning to worry about a future inflation), to lower interest rates, though credit remains very tight because the banks remain undercapitalized and demand for loans is weak because people and many businesses are overindebted.
It's because monetary policy, though in combination with bailouts it has saved the banking industry from bankruptcy, cannot do anything to stimulate economic activity that we have the $787 billion stimulus program and other programs, such as the federal subsidies that are keeping GM and Chrysler in business.
These programs may be responsible for the recent improvement in the economy, at least in part, though there is no good evidence.
The consumer price index is lower than it was a year ago, which means we're in a deflation.
It's a mild deflation, but any deflation increases the burden of debt, which in turn reduces personal consumption expenditures and investment.
The unemployment rate is high and rising, and the underemployment rate, 16.8 percent in August, is very high.
Housing prices remain very low, which increases indebtedness because a house is the principal asset of most people, and mortgage debt obviously does not fall when the value of the mortgaged property falls.
The fall in housing prices, by wiping out the housing equity of millions of people, exacerbates unemployment by making it more difficult for the unemployed to seek jobs in different parts of the country--they can't afford the down payment on a house if their existing house is worth less than the unpaid balance of their mortgage.
Another factor retarding recovery is the reluctance of older workers to retire, because their retirement savings are impaired.
Employers are reluctant to lay them off for fear of being accused of age discrimination, which is illegal.
With fewer workers exiting the work force, there is less room for the thousands of people who each day are looking for a job.
There are factors pushing in the opposite direction--toward a rapid recovery.
As manufacturers work off their inventories, production restarts; as people's incomes fall, they divert more income to consumption and less to savings; when their incomes fall really far, they start spending their existing savings; and as durables wear out, the demand for durables increases.
(It's the fact that the purchase of durable goods is postponable that leads to such drastic falls in manufacturing in a depression, compared to services.) And as economic conditions improve in other countries, U.S. exports will rise, which will stimulate U.S. output.
I don't know how these factors balance out, and I suspect no one knows.
Because economists and businessmen alike were caught by surprise by the housing and credit bubbles and the ensuing financial crisis, all macroeconomic forecasts should be treated with a measure of skepticism.
Rapid advances in technology are nothing new; what is new in recent decades is rapid advances in the kinds of technology that require intellectual skills.
This has increased the returns to people who have high IQs and good technical education.
In addition, fulfilling Max Weber’s prediction that modernity would see ever more activities regulated by rational methods rather than by authority or personality, even low-tech activities, like management and marketing, have become ever more scientific, requiring a high level of intelligence.
High IQ and technical education are complements.
But people who have modest IQs also benefit from education, as does society.
Education even at its lowest levels helps to instill good work habits, respect for knowledge, simple communication and analytical skills, social skills, and civic values.
Every country, therefore, can benefit from having a good educational system, including pre-collegiate, collegiate, and postgraduate education.
How to organize such a system and what the optimal level of resources to allocate to it is are of course difficult questions.
There probably are diminishing returns to providing higher education, because IQ provides a ceiling beyond which educational effort is wasted on students.
The United States may be in that position today.
Many colleges offer what amounts to a remedial high school education, postponing the students’ entry into the work force.
If we had better high schools, we might have fewer colleges (or more—if better high schools improved intellectual motivation and performance).
With ever-increasing specialization of the workforce, there is an argument for making education increasingly vocational.
Developing countries face a chicken and egg problem in education policy.
Until there is a highly educated stratum in a nation’s population, there will not be an adequate pool from which to draw teachers of technical subjects.
But without such teachers the nation will be unable to create such a stratum—internally.
Presumably the way for a developing country to proceed, therefore, is to send its brightest young people abroad for advanced education.
Some will remain abroad but those who return to their native country will supply the elite teachers of the next generation.
This, by the way, suggests that the teaching of English should be a priority for pre-collegiate education in developing countries, since the best universities are in English-speaking countries.
Developing countries, wisely from their perspective, are notoriously casual about enforcing intellectual property rights.
They don’t produce much intellectual property themselves, and of course consumers of intellectual property would prefer not to pay for it as long as it continues to be produced in quantity despite their free riding.
(Producers of intellectual property can actually benefit from free riding if IP "thieves" become a paying market for follow-on and complementary products, as often happens.) Appropriation of intellectual property becomes a means by which technical workers in developing countries can enlarge their technical knowledge and set the stage for innovation.
So the priorities should be: first education and imitation, second innovation.
I would particularly stress the importance in developing countries of education in instilling civic values in a country’s youth, values that include honesty, respect for knowledge, tolerance, and, of perhaps greatest importance, loyalty to national institutions.
Intense family and clan loyalties increase the cost and reduce the efficacy of government, foster nepotism and other forms of corruption, reduce social mobility, and undermine commercial values, which depend on impersonal markets.
Modernity, which centrally entails a weakening of family and clan ties, is a precondition of economic progress, and education an important factor in promoting a modern outlook, quite apart from education’s role in developing technical skills.
Many developing countries are authoritarian, and their rulers may worry that education will loosen their grip on the population.
On the contrary, it is more likely to strengthen it, by weakening family and clan loyalties that compete with loyalty to the regime.
Prussia, the Soviet Union, Communist Cuba, and now China are examples of authoritarian regimes that emphasized education for the masses without undermining their authority.
There were many excellent comments.
One asked who actually benefited from the shortage of gasoline resulting from Katrina and Rita--the gasoline dealers or the refiners? I had mentioned the first, Becker the second.
Both gained, but the refiners more.
The reason is, as one commenter noted, that once the gasoline dealer has sold all the gasoline stored on his premises and has to buy more from a refiner, he will have to pay the high price necessary to ration the limited output of the refiners, unless he has negotiated a fixed-price supply contract.
I was surprised by how many comments took issue with the basic proposition that price controls are inefficient.
A number of questionable propositions, theoretical and empirical, were offered.
For example, it was suggested that charging the market-clearing price in a shortage is inefficient because it is not the "equilibrium" price.
I think what the commenter meant is that if the shortage is temporary, the price that clears the market will soon fall.
But the point is that it is the market-clearing price.
If a lower price is charged, demand will exceed supply, and the limited supply will have to be allocated by some nonprice method, such as queuing.
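The arithmetic of a price ceiling can be sketched with a toy linear supply-and-demand model (all numbers are hypothetical, chosen only to make the mechanics visible):

```python
# Toy linear supply/demand sketch (hypothetical numbers) showing why a price
# held below the market-clearing level leaves excess demand that some
# nonprice mechanism, such as queuing, must then ration.

def demand(p):
    # gallons demanded per day at price p (hypothetical schedule)
    return max(0.0, 1000 - 100 * p)

def supply(p):
    # gallons supplied per day at price p (hypothetical schedule)
    return max(0.0, -200 + 200 * p)

# Scan a price grid for the price where quantity demanded equals quantity supplied.
clearing_price = min((p / 100 for p in range(0, 2001)),
                     key=lambda p: abs(demand(p) - supply(p)))

capped_price = 2.00  # a "price gouging" ceiling set below the clearing price
shortage = demand(capped_price) - supply(capped_price)

print(f"market-clearing price: ${clearing_price:.2f}")            # $4.00
print(f"shortage at the ${capped_price:.2f} cap: {shortage:.0f} gallons/day")  # 600
```

At the clearing price the queue disappears; at any capped price below it, the gap between the two schedules is exactly the excess demand that must be rationed by waiting.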
It is quite right, as I had suggested in my post, that for people with low time costs, queuing may be preferable to paying a high (money) price.
But the poorest people don't have cars, so they are not affected by a gasoline shortage.
Above them are people of modest incomes, who can afford cars but are highly sensitive to gasoline prices; nevertheless, the number of people who would incur extreme hardship from having to pay an extra $2 or $3 a gallon for a few weeks is, I would guess, small (I would invite submission of evidence that it is large).
Moreover, while their time costs may be low, they will not be zero.
Queuing in a shortage situation can become extreme because, not knowing whether supply will be available later, people tend to queue even when their need is not urgent--for example, filling a half-full gas tank now because they are unsure they will be able to do so later, when the tank is almost empty.
Fear of shortages also makes shortages worse and queuing longer by increasing hoarding.
The worse that shortages are expected to be because of price controls, the more hoarding the expectation of shortages will induce--and so the shortages will be worse.
As I said in my post, in situations of extreme hardship, which I illustrated with the case of a shortage of human growth hormone and that one of the comments illustrated with the case of scarcity of organs for transplantation, the welfare effects of rationing by price may justify either nonprice rationing or a government subsidy to enable people of limited means to obtain a product much more likely to increase their welfare than that of affluent purchasers.
It should be noted also that some of the effects of shortages on the distribution of income and wealth are automatically corrected by the tax system: windfall profits of gasoline dealers and retailers are taxable as income, and so a part of them is in effect returned to the general public.
More would be returned if an "excess profits" tax were imposed, as in World War II, in recognition of the enormous profits that shortages created by the nation's all-out war effort had generated for defense contractors.
One comment blamed the refiners for not having hardened their Gulf Coast refineries against catastrophic hurricanes.
If they were negligent in failing to do so, this might conceivably support a tort suit to recover the refiners' windfall profits.
(They would not be negligent if the expected benefits from such hardening did not exceed the costs.) For then there would be a sense in which the shortage was artificially induced.
But the point does not support a general policy of imposing price controls during shortages.
One comment reported a rumor that Starbucks had charged $10 per bottle of water to firemen and police officers who responded to the 9/11 attacks in New York City, even though--the commenter said--there was no shortage of water.
But the rumor, if true, describes a situation similar to that in the admiralty salvage case.
The responders appear unexpectedly and need water; they don't have time to shop for it and anyway there are many more of them than are usually trying to buy water from a Starbucks store, and so its supply would quickly be depleted if it charged its normal price--so there would have been a shortage at that price.
And, as one comment pointed out, price controls in shortage situations, for example in the form of fines for "price gouging," discourage merchants from stocking up with extra supplies for future emergencies.
Keeping an inventory of items that will be demanded only in emergencies is extremely costly, and may be cost justifiable only if the merchant knows that should there be an emergency the items can be sold at a higher than normal price.
This is an objection to a general windfall-profits tax.
Some of the comments challenge the most basic assumptions of a free-market society, such as that markets generally yield much more satisfactory allocative results than bureaucrats.
One would think that the experience of communism would have disabused people of belief in the superior efficiency of "central planning." The issue is not philosophical--whether a market system of resource allocation is "just" or whether democracy should be used to allocate resources instead of markets because it is more--democratic.
It is whether you like the consequences that "price gouging" laws would produce.
The major consequences would be shortages, leading to nonprice rationing that would impose enormous costs and thus augment and shift, rather than reduce, the effective price of the item in short supply.
I think the experience of queuing would change the minds of most intellectuals who think that resources should be allocated by nonprice methods.
Let me try finally to be more precise about the nature of the market failure in the admiralty salvage case.
One comment suggested that the problem there is not monopoly at all, in the sense of an artificial scarcity, but transaction costs.
In fact both monopoly and high transaction costs are present and they are related.
The would-be rescuer creates an artificial scarcity by threatening to withhold supply (that is, refuse to rescue) unless the ship in distress agrees to the rescuer's exorbitant price--a monopoly price because it is based on the purchaser's lack of alternatives.
However, the situation is actually one of bilateral monopoly because while the purchaser lacks alternatives, so does the seller; he has (at present) no other market for his rescue services than this particular ship in distress.
Because price under bilateral monopoly is indeterminate within a broad range, negotiation (transaction) costs are high, which creates a particularly acute problem in a rescue situation in which time is of the essence.
The case is completely different from that of the hurricane-induced gasoline shortage.
Hurricane Katrina has produced a mass of interesting revelations.
One is that more than half the states have laws forbidding "price gouging," often defined with unpardonable vagueness as charging "unconscionably" high prices.
These laws are rarely enforced.
But the sharp runup in gasoline prices as a result of Katrina (and also Hurricane Rita, which followed almost immediately), impeding imports of crude oil and causing a number of refineries in the path of the hurricanes to shut down temporarily, prompted a flurry of enforcement threats and even a few fines.
It also prompted denunciation by politicians of greedy refiners and gasoline dealers, and proposals for federal legislation prohibiting "unconscionably excessive" gasoline price increases.
What prompts such reactions besides sheer ignorance of basic economics (a failure of our educational system) and demagogic appeals by politicians to that ignorance is the fact that an unanticipated curtailment of supply is likely to produce abnormal profits.
The curtailment reduces output, which results in an increase in price as consumers bid against each other for the reduced output.
In addition, the reduction in output is likely to reduce the sellers' unit costs; the reason is that sellers normally sell in a region in which their costs are increasing--if they were decreasing, the sellers would have an incentive to expand output further.
With both price rising and cost falling, profits are likely to zoom upward.
(Some gas stations are reported to have seen their profits increase by 400 percent shortly after Hurricane Katrina struck.) In times of catastrophe, with consumers hurting, the spectacle of sellers benefiting from consumers' distress, while (it seems) deepening that distress by charging them high prices, is a source of profound resentment, and in a democratic society profound resentments trigger government intervention.
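The profit dynamics can be made concrete with a toy margin calculation (the cents-per-gallon figures are hypothetical, chosen to match the roughly 400 percent figure reported): a modest price increase combined with a small fall in unit cost multiplies the per-unit profit severalfold.

```python
# Hypothetical per-gallon figures (in cents, to avoid floating-point noise)
# illustrating how price rising while unit cost falls can multiply profit.

price_before, cost_before = 300, 290   # pre-hurricane: thin 10-cent margin
price_after,  cost_after  = 330, 280   # post-hurricane: price up, unit cost down

margin_before = price_before - cost_before   # 10 cents per gallon
margin_after  = price_after - cost_after     # 50 cents per gallon

pct_increase = 100 * (margin_after - margin_before) // margin_before
print(f"per-gallon profit up {pct_increase} percent")  # up 400 percent
```

A 10 percent price rise and a 3 percent cost fall thus translate into a 400 percent profit increase, because profit is the thin difference between two large numbers.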
Such intervention is nevertheless a profound mistake, and not only from some narrow "economic" perspective that disregards human suffering and distributive justice.
If "price gouging" laws or even merely public opinion deters refiners and dealers from charging the high prices necessary to equilibrate demand and (reduced) supply, there will be shortages.
Consumers will still be paying a higher price than before the shortage, but they will be paying the higher "price" in the cost of time spent waiting on line at gasoline stations, or (if they drive less because of the shortage) in the form of restricted mobility.
And those who need the gasoline the most, not being able to express their need by outbidding other consumers for the limited supply, will suffer the most from the shortages.
The only beneficiaries will be people with low costs of time and nonurgent demand.
But here is an interesting wrinkle.
Admiralty law and common law (both are systems of judge-made law, but they are classified separately by lawyers because they used to be administered by separate courts) alike forbid certain practices that might be described as "price gouging." Suppose a ship is sinking, and another ship comes along in time to save the cargo and passengers of the first.
The second ship demands, as its price for saving the cargo and passengers of the first ship, that the owner give it the ship and two-thirds of the rescued cargo, and the captain of the first ship, acting on behalf of the owner and being desperate, agrees.
The contract would not be legally enforceable; under the admiralty doctrine of "salvage," the second ship would be entitled to a "fair" price for rescuing the first, but to no more.
In a parallel case, also maritime but governed by common law rather than admiralty law (the Alaska Packers case, well known to law students), seamen on board a ship that was fishing for salmon in Alaska waters went on strike, demanding higher wages.
The captain of the ship agreed because, the fishing season in these waters being very short, he could not have hired a replacement crew in time to make his quota.
Again, however, the court refused to enforce the contract, in essence because it had been obtained under duress.
These cases, it turns out, are subtly but critically different from the "price gouging" alleged in the wake of Katrina and Rita.
The refiners and dealers who raised prices after the hurricanes disrupted gasoline refining had not created the situation that resulted in a reduction in supply.
If they had, say by agreeing to increase price above the existing level, they would have been punishable for violating the antitrust laws.
(There were some accusations of price fixing, but as far as I know they have not been substantiated.) Similarly, in the salvage case, the rescue ship is not being asked to ration a limited supply by raising price; there is no one else competing for the rescue service--there is just the one ship in distress.
And in Alaska Packers, there was no labor shortage, which would have justified seamen in demanding higher wages; the seamen created the shortage by refusing to work.
From an economic standpoint, their workers' cartel was symmetrical with my hypothetical refiners' or dealers' cartel.
Both are examples of opportunistic behavior--behavior designed to take advantage of an unforeseen opportunity to charge a monopoly price by threatening to withhold output.
The hurricane-induced scarcity of gasoline that pushed up prices was not an artificial scarcity, but a natural one.
The price increases generated by a natural scarcity (or indeed any scarcity not created by the person or firm imposing the increase), while they may generate "windfall profits," are unavoidable in a way that price increases due to a shortage created by a cartel are not.
A further exception, some would argue, to the rule against responding to a natural scarcity by imposing price controls is the rare situation in which the consequence of price rationing would be an intolerable gap between wealth and welfare.
Suppose there is a highly limited supply of human growth hormone, so that if price is allowed to ration demand, all the hormone will be purchased by rich people who would want their sons and daughters of average height to be taller, and no hormone will be purchasable by poor people, or even people of average income, who have children who will be dwarfs unless they get the hormone; they simply are outbid by the rich.
In such a case, there may well be a compelling moral argument for allocation of the limited supply on a basis other than price, presumably some utilitarian concept of welfare: aggregate happiness would be promoted by allocating the hormone on the basis of need rather than ability to pay.
This was not a factor in the market's response to the incipient gasoline shortage caused by the hurricane.
Not only are the duress and welfare objections to price allocation inapplicable to the run up in gasoline prices but higher prices for gasoline are a source of substantial external benefits (that is, benefits not conferred on the parties to the transaction, so that the parties do not have an incentive to consider them in deciding on the price and other terms of the contract).
By reducing the amount of driving and (if the higher prices persist) a switch to more fuel-efficient cars, higher gasoline prices cause a reduction in the amount of carbon dioxide emitted into the atmosphere--a major cause of global warming--while also reducing more conventional forms of automobile air pollution.
A reduction in driving also reduces traffic congestion, which imposes costs in the form of delay on all drivers in congested areas.
Finally, a reduction in the amount of oil consumed in the United States would make the nation more secure by reducing the wealth and economic leverage of the vulnerable, unstable, or hostile nations, such as Saudi Arabia, Iran, and Venezuela, that control so much of the world's oil supply.
In short, the social benefits of gasoline "price gouging" appear to exceed the social costs by a large margin.
I agree wholeheartedly with Becker about the desirability of our accepting more skilled immigrants for permanent residence.
Of course, the more that are accepted, the lower the average quality.
The qualities of skilled immigrants that Becker rightly praises are a function of the size of the skilled-immigrant quotas; the lower the quotas, the more outstanding the successful applicant is likely to be.
But it is a safe bet that the quotas would have to be much higher before a discernible fall in average quality was detectable.
The most difficult issue relates to the security concerns that Becker touches on, which I discuss at the end of this comment.
I think there is a simple answer to the "brain drain" problem.
For concreteness, consider immigration to the United States of Indian software engineers.
The more who immigrate, reducing the supply of Indian software engineers to Indian software producers, the higher the wages of those engineers in India.
This will tend both to reduce the numbers immigrating to the United States and to elicit a greater supply of engineers for the Indian market.
I am puzzled by the political opposition to increasing the quotas for highly skilled immigrants.
The average worker is benefited by immigrants who have skills much greater than his own, because they increase U.S.
productivity (so the average worker benefits as a consumer and he may even benefit as a worker if his employer's greater productivity increases the employer's demand for workers) and he does not compete with them; they are in different job categories.
And as Becker points out, restricting immigration of highly skilled workers increases the incentive of U.S.
firms to outsource production to countries containing such workers; and outsourcing, by exporting jobs, harms the employees of those firms.
So I would not expect unions, or average Americans, to oppose the immigration of the highly skilled.
Maybe firms that compete with employers that utilize skilled immigrant workers most efficiently oppose such immigration; maybe universities as well, because, as Becker mentions, the more skilled immigrants there are, the weaker the demand of nonimmigrant Americans for scientific and technical education.
Hiring skilled immigrants is a way of outsourcing such education.
One way to reduce opposition to such immigration would be to insist on a somewhat higher skill level of applicants for skilled-worker visas.
The higher the required level, the fewer nonimmigrant Americans will be affected by the immigration of skilled workers.
There are three principal employment-based immigration categories for green card applicants (as opposed to the H1B program for temporary employment).
The top two, EB1 and 2, set very high standards, but the third, EB3, is very loose: it "is for aliens with bachelor's degrees, but who do not qualify for the EB2 category, skilled workers with at least two years of training or experience, and unskilled workers.
An alien in the bachelor's degrees category must demonstrate that he or she has a bachelor's degree or equivalent, that a bachelor's degree is required for the position, and that he or she is a member of the profession." Oddly the same annual quota--40,000 visas--is fixed for each of the three categories.
If the first two quotas were increased, and the criteria for the third tightened up, perhaps by specifying particular industries, such as high-tech, in which the applicant could work, the impact on nonimmigrant American workers would be reduced.
I would not describe as "profiling" a system of screening would-be immigrants that, without fixing quotas on a national basis, screens more carefully applicants from nations that are breeding grounds of terrorists.
The efficacy of such screening is another matter; the less effective it is, the stronger is the argument for reducing skilled-worker immigration from countries in which terrorists are admired and recruited.
Besides terrorists, we have to worry about spies from potentially hostile nations; this implies a need for careful screening of Chinese immigrants as well.
As usual, a number of excellent comments.
Several express concern that an Indian "brain drain" will hurt India by depleting its supply of high-IQ individuals.
This is unlikely.
As I said in my post, the "drain" is self-correcting because a reduction in the supply of (say) software engineers in India will result in higher wages for those workers there, which will not only slow the drain but also increase the supply by improving the job prospects.
Since India has a population in excess of 1 billion, the number of high-IQ individuals is undoubtedly so great that a limited brain drain will not significantly weaken the nation's long-term prospects.
Several comments discuss my puzzlement concerning the political opposition to larger quotas for skilled immigrants.
I can understand why the skilled American workers with whom those immigrants would be competing might favor keeping the quotas low, but my impression is that for the most part they do not, perhaps because in knowledge industries like software skilled workers add more value than they capture in their wages, creating a virtuous cycle that benefits the entire workforce in the industry.
(This would be consistent with opposition to expanding quotas for women and minorities--such quotas favor the less skilled and so do not produce value that benefits competing workers.) The main opposition to relaxing the quotas seems to come from unions; maybe they fear that any relaxation would spread to less skilled workers.
The universities, moreover, have an additional stake in limiting the quotas besides the one I mentioned in my post; as one commenter points out, foreign students, forbidden by their student visas to work at regular jobs, provide valuable research assistance to university faculty.
A comment about the cultural benefits conferred by immigrants suggests a partial answer to the security concerns that I expressed in my post.
Our intelligence system is in desperate need of more people fluent in Asian (including Middle Eastern) languages and intimately familiar with the cultures of those regions, and the need cannot be met by training Americans.
Several comments emphasize quite properly the defective structure and administration of the quotas.
One comment points out that as the temporary skilled-worker visas (H1B) expire, the holders often join the queue for permanent-residence visas, and so the queue lengthens--it is now several years for India and China.
(The quotas are on a country basis--another mistake.) Visas are granted in the order in which they are applied for, and thus to the people who have been in the queue the longest.
They tend to be the less qualified workers; the more qualified will have had better opportunities in their home country and the longer the queue, the more incentive they will have to exploit those opportunities rather than wait in immigration limbo.
In addition, we lose some of the best immigration prospects by delay in the processing of visa applications, which is due to a shortage of visa personnel against a background of increased scrutiny of those applications for security reasons.
The relatively low costs of expanding the number of such personnel would probably generate more than offsetting benefits in a higher quality and quantity of highly skilled immigrants.
I agree with almost everything that Becker says in his post.
For example, I agree that whether or not the government should subsidize day care, it should not provide day care facilities; the subsidy should take the form of vouchers, so that the private sector would provide the facilities.
Subsidization and provision should normally be separated, since the government is an inefficient service provider compared to private firms.
I also agree that when the purpose of subsidizing child care and work leave is to increase the birth rate, the emphasis should be on subsidizing the second and third child (to prevent population from declining, there must be an average of 2.1 births per woman), since, as Becker points out, most couples have at least one child.
And I agree that there is no good reason to encourage a higher birth rate in the United States.
Not because our fertility rate is higher than that of the other rich (and even many poor) countries (the point emphasized by Becker), though it is, but rather because it is close to the replacement rate, and, more important, because the United States is uniquely attractive to high-quality, easily assimilated immigrants, who are good substitutes for native-born citizens.
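The arithmetic of the replacement rate can be sketched with a crude generational projection (the starting population is arbitrary, and migration and changes in mortality are ignored): each generation is roughly TFR/2.1 times the size of the one before it.

```python
# Crude back-of-the-envelope projection: with replacement fertility of about
# 2.1 births per woman, a cohort reproduces itself; below that, each
# generation shrinks by the factor tfr / 2.1 (migration and mortality
# changes are ignored in this sketch).

REPLACEMENT = 2.1

def project(pop, tfr, generations):
    """Population after the given number of generations at fertility tfr."""
    for _ in range(generations):
        pop *= tfr / REPLACEMENT
    return pop

start = 1_000_000
print(round(project(start, 2.1, 3)))  # 1000000: at replacement, stable
print(round(project(start, 1.3, 3)))  # roughly 237,000 after three generations
```

At a fertility rate of 1.3--near the levels of several European countries and Japan--the population falls to under a quarter of its starting size within three generations, which is why the assimilation of immigrants becomes so pressing a question.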
One thing that puzzles me is the suggestion that child care and work leave subsidies are intended to encourage women to stay home and take care of their children.
I had thought the opposite--that the purpose of child care subsidies, when they take the form of subsidies for day care, and compensated work leaves, was to encourage women who want to have children to work.
After all, women who don't work don't need day care or work leave.
From an economic standpoint, women should not be encouraged to enter the labor market unless the social value of their output in that market is greater than the social value of their household production, importantly including their contribution as mothers to their children's human capital (broadly defined).
I do not know whether it is.
Of course if women want to work and do not receive any child care or work leave benefits, they may decide to have no or fewer children, but as Becker points out, if a higher birth rate is the goal, child care and work leave subsidies are not effective means to it.
I do not agree that if women are better mothers if they stay home with their children, the government should require one parent (who as Becker points out will usually be the mother) to stay home.
That would put some women to an unnecessarily hard, and socially suboptimal, choice--women who would be far more productive in the labor market, but who also would be, on balance, superior mothers (maybe just because of a superior genetic endowment!).
I do think fertility has to be a major concern for countries, such as most European countries plus Japan, that have at once a birth rate far below the replacement rate and a difficulty in assimilating immigrants.
The ideal solution might be simply for these countries to grow smaller in population terms; the problem is that this would place unbearable strains on social services, and so countries faced with a declining population are under irresistible political pressure to admit immigrants, whether or not they can be assimilated.
Europe has a large and growing Muslim immigrant population that is not only poorly assimilated, but, to some extent, a danger to their host countries and the world; and that population is growing rapidly not only through immigration but also because of early marriage and large families.
These nations might be well advised to pay women to have a second and third child.
It is doubtful that subsidization, however well designed, will raise the birth rate of the European nations that have birth rates far below the replacement level to that level.
Children in such nations are extremely expensive, especially in opportunity costs of parents' time (which is why birth rates are so low), and the tax rates necessary to offset those costs enough to have a significant effect on the birth rate might be infeasible.
(Another factor in declining birth rates I believe is reduced gains from marriage, the reduction being related to women's opportunity costs of time but to other factors as well.) However, modest subsidies that reduced the rate of population decline might be worthwhile in moderating the demand for immigrants.
The award earlier this month of the Nobel Peace Prize to Muhammad Yunus and his Grameen Bank of Bangladesh has directed attention to the phenomenon of microfinance, which Yunus and his bank have pioneered.
The term refers to the making of tiny loans to poor people in underdeveloped countries, like Bangladesh.
The amounts are sometimes only tens of dollars, the borrowers are small farmers, shopkeepers, artisans, and other minute commercial enterprises--overwhelmingly female (97 percent)--and the interest rates, which are designed to compensate the lenders fully, are high--sometimes as high as 20 percent a day.
Although Yunus's motivation is not primarily commercial, the high interest rates and a relatively low default rate, said to be only 1 percent (in part because groups of women related to or friends with the borrower often agree to guaranty repayment of the loan), enable the Grameen Bank and its imitators (collectively referred to as "MFIs"--microfinance institutions) to cover their costs.
The MFIs provide other services to poor entrepreneurs as well, but the loans ("microcredit") are the most interesting feature of this experiment in helping poor countries throw off their poverty.
Microfinance began with the Grameen Bank in the 1980s, and to date the bank has disbursed almost $6 billion in loans to some 6 million people.
The total number of borrowers from all microfinance institutions is expected to reach 100 million by next year.
The aggregate value of these loans is a drop in the bucket so far as alleviating Third World poverty is concerned, but the award of the Nobel Prize is a vote of confidence that may encourage continued growth of the program.
What exactly microfinance has to do with "peace" is obscure.
The causes of war are complex, and it is by no means clear that poverty is a major one.
In any event the actual contribution of microfinance to peace must be slight and speculative.
So the award of a Nobel Peace Prize to Yunus was questionable, but that is not to criticize Yunus's project.
The experiment is a worthy one, though its success has yet to be demonstrated despite glowing appraisals by Kofi Annan and others.
It may simply be the latest development fad.
It does however seem superior to philanthropy in the sense of handouts, which in this case would mean giving grants (or heavily subsidized loans) to small entrepreneurs on the basis of competitive applications.
For that is a competition in rhetoric.
Middlemen would spring up to assist the applicant in writing a persuasive application, and the fees charged by the middlemen would be a good example of how the prospect of obtaining economic rents (crudely, something for nothing) channels the expected rents into costs.
And the grants would frequently be misallocated.
The high interest rates that the microfinanciers charge induce self-selection by the borrowers: a borrower has to have confidence in the project for which he is seeking microcredit in order to be willing to assume the burden of servicing his debt.
Of course such confidence is sometimes, and perhaps among the poor often, misplaced.
An obvious question is why, if microfinance is remunerative, commercial banks and other commercial lenders did not enter the market long ago; for as I said, microfinance began in the 1980s.
One possibility is that regulations designed to protect the solvency of banks limit their ability to make risky loans.
Usury laws may be an obstacle too, if they are differentially applied to ordinary lenders as distinct from microfinanciers--yet the Grameen Bank seems to be an ordinary stock corporation, not a nonprofit.
More important may be the existence of a close substitute for microfinance in the form of informal loans by relatives and clan members, a method of financing that is feasible (and extremely common) in societies in which the clan and the extended family can discipline members by threat of ostracism and other informal sanctions.
The total capital possessed by the family or clan might be slight by usual commercial standards, yet if only one or a few members have any real entrepreneurial prospects, the limited capital may be sufficient to finance their tiny projects.
So microfinance is perhaps best understood as a device for easing the transition from an economy based on trust to a normal commercial society.
As a substitute for trust, microfinance has obvious drawbacks.
Extremely high interest rates, though justified not only by the risk of default (and by the opportunity cost of money, that is, the riskless interest rate) but also by the very high transaction costs of a tiny loan (those costs are largely fixed rather than varying with the size of the loan), burden the borrower with very heavy fixed costs, since he must repay the loan regardless of the success of his enterprise.
The higher a producer's fixed costs relative to his total costs, the riskier his enterprise, since if demand for his product falls or his marginal costs rise he will find it extremely difficult to adjust by cutting output; the cut will reduce the revenue out of which he has to pay principal and interest on the loan.
Borrowing at astronomical interest rates seems an unlikely formula for commercial success--and the more unlikely the poorer the borrower.
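The fixed-cost point can be made concrete with a toy calculation (all the numbers below are hypothetical): a fixed debt-service obligation makes profit far more sensitive to a demand shortfall than the same enterprise would be without it.

```python
# Hypothetical illustration: fixed debt service amplifies the effect of a
# revenue shortfall on profit, which is why a heavily indebted tiny
# enterprise is riskier than an otherwise identical unleveraged one.
def profit(revenue, variable_cost_share, fixed_debt_service):
    """Profit after variable costs and a fixed loan payment."""
    return revenue * (1 - variable_cost_share) - fixed_debt_service

# Same enterprise, normal demand (revenue 100) vs. a 30% shortfall (70);
# one version owes a fixed 30 in principal and interest, the other nothing.
for rev in (100, 70):
    print(rev, profit(rev, 0.5, 30), profit(rev, 0.5, 0))
# With the loan, profit falls from 20 to 5 (a 75% drop); without it,
# from 50 to 35 (a 30% drop).
```

The same revenue shock, in other words, is more than twice as damaging to the leveraged borrower.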
In the family or clan alternative, trust may provide an extremely low-cost substitute for the transaction costs involved in microfinance.
Perhaps then microfinance will occupy a narrow niche in capital markets between family and clan resource pooling at one end and commercial lending at the other.
Indeed, the fact that the overwhelming majority of microfinance borrowers are women suggests that the particular market failure that microfinance corrects is discrimination against women in the family and clan capital markets.
An alternative form of microfinancing would be equity rather than debt financing, on the model of private equity firms like Blackstone and the Carlyle Group.
Of course these multibillion-dollar firms have no interest in making $100 loans in Bangladesh.
But the Grameen Bank could presumably furnish equity in lieu of loans to its customers, thus sharing the risk with them and so reducing the risk to them; and it is a superior risk sharer because of size and diversification.
But maybe the bank would find it too difficult to evaluate projects, or would fear being inundated by applications from the impecunious.
I end on a skeptical note.
The evidence for the efficacy of microfinance in stimulating production and alleviating poverty is so far anecdotal rather than systematic.
The idea of borrowing one's way out of poverty is passing strange.
And I am unaware of any historical examples of nations that climbed out of poverty on the backs of small entrepreneurs financed by credit.
Also, recall that Grameen Bank has lent almost $6 billion to some 6 million persons.
This implies an average loan of almost $1,000, which in a country like Bangladesh is not chicken feed and makes one wonder how much of the Grameen Bank's loan portfolio is actually microfinance.
(Yet the bank's financial statement indicates that the average loan balance in 2005 was only $85; I don't understand how this squares with the aggregate figures I gave above, which are also published by the bank.) Then too, the bank has been in operation since 1983, more than 20 years, which indicates that the average number of borrowers is only 300,000 a year, with presumably many repeat borrowers.
Bangladesh has a population of almost 150 million people.
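The arithmetic behind these figures is easy to check; the inputs below are simply the numbers quoted above, and the calculation only confirms what they imply (it does not resolve the tension with the $85 average loan balance).

```python
# Back-of-the-envelope check of the Grameen Bank figures cited in the text.
total_disbursed = 6e9   # almost $6 billion disbursed to date
total_borrowers = 6e6   # some 6 million borrowers
years = 20              # in operation since 1983, i.e., more than 20 years

avg_loan = total_disbursed / total_borrowers  # cumulative disbursement per borrower
borrowers_per_year = total_borrowers / years

print(avg_loan)            # 1000.0 -- roughly $1,000 per borrower
print(borrowers_per_year)  # 300000.0 -- about 300,000 borrowers a year
```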
It is true that the microfinance movement is growing--and as it grows we may see default rates rising and the microfinanciers adjusting, as the Grameen Bank may already have done, by greatly increasing the minimum size of loans.
Think back to that low default rate for the Grameen Bank.
The bank does not have written loan agreements and does not sue defaulters or invoke other legal remedies against them.
The natural inference to draw is that the bank is extremely selective in its choice of persons to whom it is willing to lend, and such selectivity, if imitated by other microfinanciers, must greatly limit the scope and impact of microfinance.
I suggest, albeit tentatively, that there may be a good deal less to microfinance than its boosters claim.
Becker has posed an intriguing question: if a woman thinks she would be better off as a second or third (or nth) wife rather than as a first and only wife, or not married at all, why should government intervene and prohibit the arrangement? From an economic standpoint, a contract that makes no one worse off increases social welfare, since it must make both of the contracting parties better off; otherwise they would not both agree to the contract.
The question has achieved a certain topicality because of the movement to legalize homosexual marriage.
One of the standard objections to such marriage is that if homosexual marriage is permitted, why not polygamous marriage? The basic argument for homosexual marriage is that it promotes the welfare of homosexual couples without hurting anybody else.
That seems to be equally the case for polygamous marriage.
But is it? My view is that polygamy would impose substantial social costs in a modern Western-type society that probably would not be offset by the benefits to the parties to polygamous marriages.
(For elaboration, see my book Sex and Reason (1992), particularly Chapter 9.) Especially given the large disparities in wealth in the United States, legalizing polygamy would enable wealthy men to have multiple wives, even harems, which would reduce the supply of women to men of lower incomes and thus aggravate inequality.
The resulting shortage of women would lead to queuing, and thus to a high age of marriage for men, which in turn would increase the demand for prostitution.
Moreover, intense competition for women would lower the age of marriage for women, which would be likely to result in less investment by them in education (because household production is a substitute for market production) and therefore reduce women's market output.
Of course, forbidding the wealthy to buy a particular commodity is usually inferior to taxation as a method of reducing inequality.
Yet we do forbid the buying of votes, which could be thought a parallel device to forbidding the buying of wives: "one vote, one wife."
We think that vote buying would have undesirable political consequences.
So might polygamy.
In societies in which polygamy is permitted without any limitation on the number of wives, wealthy households become clans, since all the children of a polygamous household are related through having the same father, no matter how many different mothers they have.
These clans can become so powerful as to threaten the state's monopoly of political power; this is one of the historical reasons for the abolition of polygamy, though it would be unlikely to pose a serious danger to the stability of American government.
In polygamous households, the father invests less time in the upbringing of his children, because there are more of them.
There is also less reciprocal affection between husband and wife, because they spend less time together.
Household governance under polygamy is bound to be more hierarchical than in monogamous marriage, because the household is larger and the ties of affection weaker; as a result, "agency costs" are higher and so the principal (the husband, as head of the household) has to devise and implement means of supervision that would be unnecessary in a monogamous household.
(An additional factor is that women in a polygamous household have a greater incentive to commit adultery since they have less frequent sex with, and affection for, their husband, so the husband has to watch them more carefully to prevent their straying.) This managerial responsibility deflects the husband from more socially productive activities.
A woman who wanted a monogamous marriage could presumably negotiate a marital contract that would forbid the husband to take additional wives without her consent.
However, she would have to buy this concession from the husband, which would make her worse off than if he were denied the right (in the absence of a contractual waiver of it) to take additional wives.
Allowing polygamy would thus alter the distribution of wealth among women as well as among men.
Against all this it can be argued that polygamy would be uncommon in a society such as that of the twenty-first-century United States.
But the less common it is, the fewer the benefits to be anticipated from legalizing it.
And I am not sure that it would be all that uncommon.
Although few American couples want to have more than two or three children, a polygamous union is not a couple.
If a couple has three children, the ratio of adults to children is 2:3.
In a polygamous household consisting of a husband, two wives, and four children, the ratio of adults to children is higher: 3:4.
So the per-parent burden is less, even though there are more children.
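The household arithmetic can be stated exactly: the per-parent burden is the number of children per adult, the inverse of the adult-to-child ratio.

```python
from fractions import Fraction

# Adults per child in the two households described above.
adults_per_child_couple = Fraction(2, 3)      # 2 adults, 3 children
adults_per_child_polygamous = Fraction(3, 4)  # husband + 2 wives, 4 children

# More adults per child in the polygamous household, despite more children:
print(adults_per_child_couple < adults_per_child_polygamous)  # True

# Equivalently, children per adult: 3/2 = 1.5 vs. 4/3 (about 1.33)
print(Fraction(3, 2) > Fraction(4, 3))  # True
```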
Because polygamy is illegal everywhere in the United States, few Americans think of it as an option.
If it were made respectable by being legalized, who knows? There are 400 American billionaires, and several million Americans with a net worth of at least $6 million.
Nor, with most women working, is it obvious that a man would have to be wealthy in order to attract multiple wives, though presumably men who wanted to be  polygamists would have to be able to offer some financial inducements, since most women would prefer to be a man's only wife.
As more and more men attempted to become polygamists, the "price" they would have to pay for a wife would rise, so polygamy would be a distinctly minority institution.
But it would not necessarily be trivial in size or harmless in its social consequences, which would be likely to exceed those of homosexual marriage.
Polygamy is banned in most advanced societies and flourishes chiefly in backward ones, particularly in Africa.
This is some evidence against legalizing it.
This posting is stimulated by comments made about my passing reference to overpopulation in Subsaharan Africa in the recent blog on DDT and by an article in the Wall Street Journal last week called "The Coming Crunch" that notes with concern a prediction that the population of the United States will reach 400 million in 35 years.
Concerns about overpopulation are ridiculed by conservatives because of the mistaken predictions of Paul Ehrlich (not to mention Thomas Malthus!) in his book The Population Bomb and by other anticapitalists since the first Earth Day (1970); the ridicule has spread to liberals because the only way to slow or stop the growth of the U.S. population is by curtailing immigration (e.g., the "fence").
Although I have been strongly critical of the shoddy arguments of Ehrlich and other doomsters (in my book Public Intellectuals), I believe that overpopulation is a serious issue and deserves dispassionate analysis.
Just because the problem of overpopulation has been exaggerated in the past doesn't mean it is not a problem today.
The future may not resemble the past.
The belief that the mistakes of Malthus, Ehrlich, and other past prophets of doom show that current concerns with overpopulation are unfounded is on a par with the belief that we shouldn't worry about terrorism because many fewer Americans have been killed by terrorists than in automobile accidents.
Such arguments confuse frequencies (the past) with probabilities (the future).
Economists stress the "demographic transition," that is, the tendency of the birth rate to decline steeply as a nation becomes wealthier.
But apart from the fact that not all nations experience significant economic growth, such growth tends, other than in Europe and Japan, not to make the rate of population growth zero or negative.
Most demographers forecast that world population, currently somewhat more than 6 billion, will rise to between 9 and 14 billion by mid-century.
I shall address the following questions: what are the costs of population increase (1) to the country in which the increase occurs, (2) if that country is the United States, and (3) to other countries; what are the benefits (4) to the country in which the increase occurs and (5) to other countries; and (6) when the costs exceed the benefits, what if anything should be done to slow or arrest population growth?
1
If the arable and otherwise inhabitable parts of a poor country are densely populated, increased population will result in significantly  higher costs of food and other agricultural products by requiring more intensive cultivation, or cultivation of poor soil.
It will also increase the cost of water, and time spent in commuting and other transportation.
This seems to be the situation in India and much of Africa.
And notice that China, though it is en route to becoming a wealthy country, has not abandoned its "one child" policy.
That policy is an inefficient method of limiting population growth, but is evidence that China does have a problem of overpopulation.
Surely India does as well, though like China its economic output is growing rapidly.
2
The United States is not densely populated, but that is only when density is computed  on a nationwide basis, i.e., if the total area of the country is divided by the population.
Particular areas, mainly coastal (including the Great Lakes coasts), are densely populated, and further population increases in those areas would increase commuting times, which have lengthened in recent years, and in some of these areas (such as California and Arizona) would place strains on the water supply.
In principle, however, these problems can be solved by pricing, including greater use of toll roads.
Increased commutes impose environmental costs, but tolls could be based on those costs.
3
The greatest costs of further population increases are likely to be costs external to individual countries and therefore extremely difficult to control by taxation or other methods of pricing "bads," because most of the benefits of these measures would be reaped by other countries.
These are environmental costs, mainly global warming and loss of biodiversity, about which I have written at length in my book Catastrophe: Risk and Response (Oxford University Press, 2004).
Of course, population growth per se does not increase global warming, but the burning of forests and, most important, of fossil fuels does, and these activities are positively correlated with population.
Not only is it now the scientific consensus that global warming is a serious problem, but its adverse effects are appearing sooner than expected; it is by no means certain that a technological fix will be devised and implemented before the effects of global warming become catastrophic.
4
Population growth in productive societies increases the society's total output and hence its geopolitical power.
It also has a positive effect on innovation by increasing the size of markets.
Innovation involves a high ratio of fixed to variable costs (it costs hundreds of millions of dollars to develop a new drug, yet once it is developed, the drug may be very cheap to produce), so the larger the market for the innovative product or process the likelier are the fixed costs of invention to be recouped in sales revenues.
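A stylized break-even calculation makes the market-size point; the figures below are hypothetical, not drawn from actual drug-development data.

```python
# Break-even market size for an innovation with high fixed costs and a
# small per-unit margin (all numbers hypothetical).
def breakeven_units(fixed_cost, price, marginal_cost):
    """Units that must be sold before fixed development costs are recouped."""
    return fixed_cost / (price - marginal_cost)

# A drug costing $500 million to develop, earning a $5 margin per unit,
# must sell 100 million units -- feasible only in a large market.
print(breakeven_units(500e6, 6.0, 1.0))  # 100000000.0
```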
Some people also believe that the larger the population, the more innovators there will be, assuming that a fixed percentage of the population consists of innovators, whatever the size of the population.
This is a questionable argument for population growth, as it ignores the fact that a fixed percentage of the population presumably also consists of potential Hitlers and Stalins and Pol Pots, and thus the absolute number of these monsters grows with population growth.
Moreover, a population increase that is due to a higher birth rate (as distinct from immigration) increases the number of young people in a society, who are impressionable and therefore more likely than older people to be drawn to extremist politics, including terrorism.
In addition, greater competition among innovators may reduce the potential returns to each innovator by increasing the number of simultaneous innovations, and may thus reduce the incentives to innovate.
The relationship between aggregate population and creativity seems in any event very loose.
The citizen population of Athens in the fifth and fourth centuries B.C. was roughly 25,000, yet Athens produced intellectual and artistic works that dwarf those of entire continents.
Furthermore, technological growth currently favors destructive over beneficial technologies.
The increasing lethality and availability of weapons of mass destruction--the proliferation problem--has a greater short-term downside than benign inventions have an upside, especially since much innovative activity is focused on increasing longevity, and thus population.
Policies that accelerate the rate of technological advance are dangerous unless the advance can somehow be channeled into productive forms.
It cannot be.
A dubious benefit of population growth is that it lowers the average age of the population and therefore the burden of the elderly.
That is a Ponzi scheme rationale for encouraging growth of population, since as soon as the growth ceases, the average age will shoot up--especially if it is correct that population growth increases the rate of medical innovation and thus the life span!
5
An increase in one nation's power reduces the power of other nations; so there is again a negative externality.
The increase in the world's Muslim population is a negative externality for non-Muslim nations, especially the European nations, with their shrinking or about-to-start-shrinking populations.
But by the same token an increase in the non-Muslim population of Europe would probably be a boon for the European nations.
And an increase in the rate of innovation in one nation will benefit other nations unless intellectual-property laws are extremely strict (which would have its own negative economic effects).
6
If, apart from poor countries, the major costs of population growth are external to the particular nations in which population is growing, there is very little that can be done, given the weakness of international institutions, which is due in turn to the number and diversity of nations that have to be coordinated for effective action against global problems.
Moreover, limits on immigration do not reduce global population growth and thus do not respond to the global-warming problem.
Rich countries, however, can aid poor countries to reduce their rate of population increase by encouraging family planning and, in particular, female education, since educated women have higher opportunity costs of fertility, and hence fewer children, than uneducated ones.
Where, as in the United States, the costs of population increase are concentrated in particular areas (whether in geographical areas or along highways), the costs can be neutralized by taxation or other methods of pricing negative externalities, with the prices tied to density.
I share much of Becker's skepticism about a "fat tax" (see my article with Tomas J.
Philipson, "The Long-Run Growth in Obesity as a Function of Technological Change," Perspectives in Biology and Medicine, Summer 2003 Supplement, p.
S87), though I would look favorably on a tax on soft drinks; I would even consider a ban on the sale of soft drinks to children, as I explain later.
The case for a fat tax, as an economist would be inclined to view it, is that a high-calorie diet contributes to obesity, which contributes to bad health, which imposes costs that are borne in part by thin people (thin taxpayers, in particular).
I do think, despite skepticism in some circles, that obesity, even mild obesity, has negative health consequences, including diabetes, high blood pressure, joint problems, and certain cancers, and that much of the cost of medical treatment is externalized.
But as Philipson and I emphasized in our article and Becker emphasizes too, lack of exercise is also an important factor in obesity.
Moreover, the significance of an externality lies in its effect on behavior, and I am dubious that people would consume fewer calories if they had to pay all their own medical costs rather than being able to unload many of those costs on Medicaid, Medicare, or the healthy members of private insurance pools.
Indeed, if as I believe obesity is positively correlated with poverty, reducing transfer payments to people of limited income might result in more obesity.
Indeed, high-caloric "junk food" might conceivably though improbably turn out to be the first real-world example of a "Giffen good," a good the demand for which rises when the price rises because the income effect dominates the substitution effect.
A heavy tax on high-caloric food might so reduce the disposable income of the poor that they substituted such food for healthful food, since fatty foods tend to be very cheap and satisfying, and often nutritious as well.
However, this is unlikely because food constitutes only a small percentage (no more than 20 percent) of even a poor family's budget.
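A rough calculation shows why the income effect is too small for a Giffen result. The 20 percent budget share is from the text; the tax rate and the junk-food share of food spending are assumptions made for illustration.

```python
# How much would a heavy junk-food tax reduce a poor family's real income?
budget = 1.0              # normalize the household budget to 1
food_share = 0.20         # food: no more than ~20% of the budget (from the text)
junk_share_of_food = 0.5  # assume half of food spending is on junk food
tax_rate = 0.25           # assume a steep 25% tax

extra_cost = budget * food_share * junk_share_of_food * tax_rate
print(extra_cost)  # 0.025 -- only a 2.5% hit to the budget, so the income
                   # effect is small and the substitution effect should dominate
```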
A fat tax would not only be regressive; to the extent it induced the substitution of more healthful foods (as opposed to the Giffen effect), it would as Becker notes reduce the utility (pleasure) of the people who love junk food.
This assumes that the junk-food lovers are rational and reasonably well informed, so that they trade off the pleasure gains of eating such food against the health costs.
Here I begin to have doubts.
I don't think the fact that obesity is correlated with poverty is due entirely to the fact that fatty foods tend to be cheap as well as tasty and satisfying.
I suspect that many of the people who become obese as a result of what they eat do not understand how, for example, something as innocuous as a soft drink can produce obesity.
I also suspect that producers of soft drinks and other junk foods are ingenious in setting biological traps--designing foods that trigger intense pleasure reactions caused by brain structures formed in our ancestral environment (the prehistoric environment in which human beings attained approximately their current biological structure), when a taste for fatty and sugary foods had significant survival value.
(The producers of soft drinks and other junk food  also place vending machines in schools, when permitted.) I am doubtful, however, that much can be done about this problem.
I do not think, for example, that a campaign of public education would be effective, because it could be neutralized by industry advertising (which, however, would have the indirect effect of a tax--it would increase the food producers' marginal costs) and because the people who most need the education are probably the least able to absorb it.
However, the consumption by children of soft drinks that contain sugar presents a distinct and perhaps soluble social problem.
Soft drinks have virtually no nutritional content (unlike foods rich in cream or butter), and recent studies indicate that they are a significant factor in obesity, as well as a source of caffeine dependence and dental problems.
They also have good substitutes in the form of drinks sweetened artificially rather than by sugar.
And while generally parents know better than government what is good for their children, many parents who permit their children to drink soft drinks do not.
Banning the sale of soft drinks to children could not have a Giffen effect and would not be much more costly to enforce than the ban on the sale of cigarettes to children, and might well be a justifiable policy measure.
Now any measure for improving public health has the following limitation: if people are healthier and live longer, this does not necessarily reduce their lifetime expenditures on health care.
 Most of those expenditures are incurred in the last six months of life, and no matter how long people live, they will eventually enter that terminal phase.
However, the longer their healthier lives, the lower their average  lifetime health-care expenditures and the greater their productivity, as well as the greater their utility since poor health reduces utility.
(Besides its health effects, obesity reduces physical comfort and attractiveness.) I would therefore expect a ban on sale of soft drinks to children to yield a modest net increase in social welfare.
Beyond Bias and Barriers: Fulfilling the Potential of Women in Academic Science and Engineering is a book-length study published last month by the National Academy of Sciences.
The study was conducted by a committee appointed by the NAS (along with the National Academy of Engineering), and it concludes that women's underperformance in academic science and engineering relative to men is caused not by any innate differences between men and women but by subtle biases, and by barriers in the form of refusing to make science jobs more "woman friendly." The study is available online at http://darwin.nap.edu/books/0309100429/html/R7.html.
The study will, one hopes, be carefully dissected by experts, but I will be surprised if it stands up to expert scrutiny.
Of the 18 members of the authorial committee, only one was a man, and only five were members of the National Academy of Sciences and only one was a member of the National Academy of Engineering.
The one man, Robert J.
Birgeneau, although a distinguished physicist, happens to be the Chancellor of the University of California, Berkeley; for him to have dissented from the report would have condemned him to the same fate as Lawrence Summers, and swiftly too.
The composition of the committee shows remarkable insensitivity.
The theme of the report is the importance of unconscious bias with respect to issues of gender; did it not occur to the members and to the NAS and NAE that women might have unconscious biases regarding the reasons for the underperformance of women in science and engineering relative to men?
Economists, foremost among them Gary Becker, have done a great deal of work on issues of sex discrimination and women's career choices.
The only economist on the committee, however, was Alice Rivlin, a specialist in the federal budget.
Her Brookings website lists works such as "Restoring Fiscal Sanity," but lists no book or paper relating to gender issues.
The problem of the committee's biased makeup would be less serious if the report itself were transparent, but it is not.
Although it cites a great many academic studies, it does not give the reader enough information about them (the methods used, the robustness of the findings, the quality of the journal in which the study was published, the professional standing of the authors, the reception of the study in the relevant professional community, etc.) to enable an evaluation.
Some of the observations in the report suggest a distinct lack of academic rigor, as when it reports that Japanese schoolgirls do better on math tests than American schoolboys.
Since there is much more job discrimination against women in Japan than in the United States (see, e.g., http://www.pbs.org/nbr/site/research/educators/060106_04c/), one would expect Beyond Bias and Barriers to predict that Japanese girls would do very poorly on math exams.
The report expresses particular concern with underperformance of black women in science and engineering, who underperform not only white men and women but also black men, even though black women generally outperform black men in educational attainment.
This suggests that maleness rather than race explains differential performance in science.
Other obvious objections to findings favored by this biased report are ignored.
For example, there is a large difference in the average research output of male and female scientists.
However, that difference is greatly diminished when the comparison is between male and female scientists in leading research universities; the obvious but unmentioned reason is that these universities are not discriminating in favor of women but merely applying the same high standards to both sexes.
No one thinks that no female scientists are comparable to excellent male scientists; the issue is why there are so few female scientists in those top-tier universities.
Another example: from the fact that the gender gap in science has diminished in recent decades one cannot reason, as the report does, that there are no genetic or otherwise innate differences in preferences or aptitudes for a scientific career.
If a gender or racial gap is due partly to discrimination and partly to innate factors, then eliminating discrimination will narrow the gap, but will not eliminate it.
The study is notably deficient in comparisons between women in science and in other demanding occupations.
Women do better, relative to men, in academic law than they do in academic science, mathematics, and engineering, yet law is a highly demanding field.
And how to explain their domination of primatology, a scientific field? The problems that women in science face in combining family and career, particularly in highly mathematized fields such as physics, seem no different from the problems they face in other fields inside and outside of science.
If the report's ambitious program of making science woman-friendly, for example by more financial aid, day care, and the stretching out of degree programs, were extended--and why shouldn't it be?--to other demanding fields, there would be no basis that I can find in the study for predicting that more women would enter science rather than the fields that they appear to prefer.
I want to reply to some of the comments on both my last posting, which was on the NAS report on women in science, and also the previous one, on DDT.
Women in Science.
I notice that the comments in defense of the NAS  report tend to be--defensive; and also emotional.
One comment suggests that if a committee that is 17/18 female is likely to be biased, any male who comments on the report is likely to be biased too.
But I did not suggest that the committee should have been composed primarily of men, only that it should have been more balanced, and that the fact that the only man on the committee could not, because of his position, dissent from the report made his inclusion as the lone man entirely unprofessional.
Another commenter vigorously denies that there is any difference between men and women, then states that he prefers female doctors because they are more caring!
A number of comments point to the range of differences between men and women, encompassing behaviors (crime, sports), preferences, test results, psychology, and much else besides, including the tendency of women in science to prefer the less mathematical fields (I gave the example of primatology).
These differences could I suppose all be the product of discrimination, but that seems highly unlikely.
One comment states that the underrepresentation of women in science may be a result of path dependency (where you start may determine where you end up)--the fewness of women in science in past times.
This is not persuasive, because there were virtually no women in academic law when I was a law student in the 1950s, but now about half of all law professors are women.
One last point: a good test for whether there is discrimination against or in favor of a group is its average performance in the profession alleged to be a site of discrimination relative to that of the majority.
If women were discriminated against in science, one would expect the average woman in science to outperform the average man in publications, awards, etc., simply because only women who were better than men could overleap the discrimination hurdle.
But if there is discrimination in favor of women in science, then the average man should outperform the average woman, because then it is the men who have to overcome the discrimination barrier.
(If there is no difference in average performance of men and women in a given field, the inference is that there is no sex discrimination in that field--employers and other performance evaluators regard sex as irrelevant.) Since men outperform women in science rather than vice versa, the inference is that there is discrimination in favor of women.
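The selection logic behind this test can be sketched with a small simulation. The hurdle height, sample size, and standard-normal ability scale below are arbitrary choices of mine, made only to show the direction of the effect, not an empirical model:

```python
import random

random.seed(0)

def entrants(hurdle, n=100_000):
    """Draw ability scores from a standard normal distribution;
    only candidates whose ability clears `hurdle` enter the field."""
    return [a for a in (random.gauss(0, 1) for _ in range(n)) if a > hurdle]

def mean(xs):
    return sum(xs) / len(xs)

# The majority faces no hurdle; the disfavored group must clear +0.5
# standard deviations (an arbitrary, illustrative threshold).
majority = entrants(hurdle=float("-inf"))
disfavored = entrants(hurdle=0.5)

# Selection raises the entrants' average: the group that must clear
# the hurdle outperforms the unfiltered group, as the test predicts.
print(round(mean(majority), 2))
print(round(mean(disfavored), 2))
```

The point is simply that a hurdle truncates the lower tail of the hurdled group, so its surviving members have a higher average than an unfiltered group drawn from the same distribution.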
DDT and Overpopulation

I repeat my abject apology for calling DDT a herbicide rather than a pesticide.
Some comments suggest that the mistake reveals my complete incompetence to discuss environmental issues.
That seems a bit harsh.
The reason for the mistake was simply that herbicides play a particularly important role in diminution of genetic diversity--thanks in part to the ban on DDT--so I was thinking about herbicides when I was considering the effects of DDT.
Some comments point out, correctly, that interior spraying won't eliminate mosquitoes, and therefore won't eliminate malaria.
But complete eradication may not be cost justified.
Costs and benefits must be compared at the margin.
If 99 percent of deaths from malaria can be eliminated by interior spraying, it may not be worthwhile to spend billions of dollars developing and producing a vaccine.
That is why I find the Gates Foundation's campaign to eradicate malaria puzzling.
(Actually, I don't think it's very puzzling.
There is often a strong political and public-relations dimension to foundation giving, even foundation giving for activities thought nonpolitical, such as saving lives.
Somehow giving money to spray the interior of houses with DDT lacks pizzazz and could even be thought politically incorrect.)
Most of the comments fasten on the following paragraph in my posting: "Not that eliminating childhood deaths from malaria (I have seen an estimate that 80 percent of malaria deaths are of children) would be a completely unalloyed boon for Africa, which suffers from overpopulation.
But on balance the case for eradicating malaria in Africa, as for eradicating AIDS (an even bigger killer) in Africa, is compelling.
Malaria is a chronic, debilitating disease afflicting many more people than die of it, and the consequence is a significant reduction in economic productivity." Many commenters regard "unalloyed boon" as a particularly callous characterization.
I think some of the commenters don't understand the meaning of the word "unalloyed." I did not say it was a good thing that children die of malaria; I said only that eliminating those deaths would not be just a good thing, if the deaths reduce population.
Now, they may not, as one comment explains, because a family that loses a child to malaria may decide to have another child in its place; indeed, if the family is risk averse, it may end up having more children than it would if there were no such risk, because of the high probability of losing one or more of them to malaria.
That is an interesting empirical question.
I suspect that on balance there will be fewer children surviving to adulthood, simply because of the cost of additional children.
I continue to insist that overpopulation, including in sub-Saharan Africa, is a real problem.
It is true but absurdly irrelevant that New York City has a greater population density than Africa.
Overpopulation is not a simple matter of dividing people by square miles.
In an agricultural society, population density tends to be negatively correlated with wealth, simply because the land must be worked harder to obtain food.
Good land is not the only resource that is in limited supply--so are fresh water, forest products, game, and mineral resources.
Scarcities in these resources can be overcome, but only at a cost.
It is true as several comments point out that as a society grows wealthier, the birthrate tends to drop (the "demographic transition"), but Africa seems to be trapped by extreme poverty exacerbated by overpopulation.
Is it foolish for China to try to limit its population? If not, the case for limiting the African population is much stronger, because Africa has a far less productive population.
And so far I have been speaking only of the effects of population on the populous country.
There are external effects as well.
Population growth, through the destruction of forests and the increased demand for electricity and cars, is a major contributor to global warming.
Thomas Malthus, though like the rest of us not very good at predicting the future, was a brilliant economist.
He was wrong that the human population would increase geometrically (he did not consider contraception as a means of voluntarily limiting population) and the supply of food only arithmetically (he did not foresee advances in the technology of food production).
But he was right that achieving an equilibrium between population and food could require starvation, war, or other unattractive methods of limiting population.
In this he foreshadowed natural selection, as Darwin acknowledged.
Rising food prices are doubtless causing malnutrition and even starvation in some backward countries today, and if they continue to rise, more people will starve.
Becker is correct that sensible policies can moderate the price increases, and perhaps restore the trend toward lower food prices, but who can be confident about the adoption of sensible policies?
An important factor in recent food price increases is the ethanol subsidies.
Ethanol is a "clean fuel" in the sense that unlike gasoline its burning as a fuel does not produce the conventional pollutants, including carbon monoxide.
It does produce carbon dioxide, the principal culprit in global warming, but this effect is said to be offset by the fact that the corn from which ethanol is manufactured absorbs carbon dioxide, as trees do.
However, the manufacture of ethanol requires a great deal of energy (more energy, some critics believe, than the ethanol itself produces), and in China for example that energy is supplied mainly by coal-burning plants, a fertile generator of carbon dioxide.
Moreover, deforestation by fire, common in the Third World, is increasing in order to provide more cropland for the production of ethanol, and deforestation by fire is a major source of atmospheric carbon dioxide.
So it is doubtful that ethanol is a significant part of the solution to the problem of global warming--indeed it may be part of the problem--and in any event the subsidy is more often defended as an answer neither to conventional air pollution nor to global warming, but instead as a means toward making the United States self-sufficient in energy.
The federal subsidy alone is currently running at a level of $7 or $8 billion a year.
There are state subsidies as well, and, more important than either type of direct subsidy, there are indirect subsidies in the form of legal requirements that gasoline producers purchase a specified amount of ethanol to mix in with their gasoline.
A federal law enacted in 2005 doubled those requirements and is believed to have been a big factor in the ethanol boom and resulting recent increase in corn prices.
Ethanol could be bought cheaply from Brazil, but high tariffs prevent the Brazilian and other foreign producers from competing with our farmers and producers.
We could not achieve energy self-sufficiency from our own production of ethanol.
Even if all the corn produced in the United States were used to produce ethanol, which is unthinkable, the amount of gasoline consumed would fall by only 12 percent.
(This is a little misleading; an enormous increase in the demand for ethanol would lead to more cropland being switched to corn from other crops.
But that could result in much higher food prices.) Moreover, the amount of other fossil fuels consumed would rise because of the energy requirements for the production of ethanol.
We could as I said increase the percentage of our total fuel consumption that is supplied by ethanol by buying ethanol from abroad, and while that would make us dependent on other countries for an important part of our fuel supply, it would not be dependence on other oil-producing countries.
That would be a benefit.
Because of the instability of many of those countries (such as Iraq and Nigeria), and the hostility to the United States of some of them (such as Iran and Venezuela), there would be value in achieving energy independence, or at least a good deal more independence than we have today.
But we cannot achieve it through the ethanol subsidy.
We can achieve it (at least insofar as ethanol can contribute to the solution) only by relaxing the tariff on imported ethanol.
But this sensible measure seems blocked by one of the absurdities of our political system--the Iowa caucuses, which extract pledges from all plausible presidential candidates to preserve and indeed expand our home-grown ethanol industry--and, more broadly, by the excessive influence of our tiny farm population on U.S. policy.
As a result of these factors, ethanol subsidies are bipartisan.
Most ethanol is manufactured from corn.
The United States is the world's largest exporter of grains, and exports of our corn account for one-fourth of total worldwide grain exports.
As a result of the increasing diversion of U.S. corn to the production of ethanol, food prices in the United States and the world have soared.
It is estimated that by the end of this year, food prices in the United States will have grown in real terms by almost 5 percent (a 7.5 percent nominal increase in price minus a 2.6 percent inflation rate).
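The "almost 5 percent" figure subtracts inflation from nominal growth, which is a first-order approximation; as a quick sketch using only the two numbers from the text, the exact real change divides the growth factors and comes out slightly smaller:

```python
# Nominal food-price growth minus inflation approximates the real
# change; the exact figure divides the two growth factors.
nominal = 0.075    # 7.5% nominal food-price increase (from the post)
inflation = 0.026  # 2.6% inflation rate (from the post)

approx_real = nominal - inflation
exact_real = (1 + nominal) / (1 + inflation) - 1

print(f"approximate real increase: {approx_real:.3%}")  # 4.900%
print(f"exact real increase:       {exact_real:.3%}")
```

For small rates the two answers are close, which is why the simple subtraction is the common shorthand.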
Technology is more likely to bail us out before our political system does.
What is called cellulosic ethanol, as distinct from corn ethanol--ethanol produced from a variety of plants other than corn--holds promise for enabling ethanol to be produced without forcing up the price of corn, but it is not yet commercially feasible.
I have no view on SCHIP; the commenter who assumes I oppose it because I am a reactionary beast does not have an accurate fix on my political views.
I am not familiar with the particulars of SCHIP, but I would be inclined to favor free health insurance for all children (up to age 18), financed by means-testing Medicare and social security, because careful attention to the health of children will reduce their health problems and health expense in later life.
Government spends much too much on the elderly relative to the young, presumably because children don't vote.
Also, my blog post did not mention Rumsfeld or the Hoover Institution, which has appointed him to a temporary visiting lectureship.
I am far more critical of Rumsfeld than my fellow blogger Becker is.
The Iraq war has been a fiasco, and much of the responsibility must be borne by Rumsfeld as Secretary of Defense throughout most of the war.
But should that disqualify him from a quasi-academic appointment, if he is otherwise qualified, as he surely is? I shouldn't think so.
I accept correction for having described Larry Summers's controversial talk as stating that female IQs are "flatter" than male IQs.
What I meant and I hope the context made clearer than the term I used is that there is more variance in male IQs than in female ones--the distribution of male IQs has longer tails than the distribution of female IQs.
So assuming the same mean IQ for the two genders, there are more male geniuses and male morons than there are female geniuses and female morons.
That is an arguable proposition with some support in evolutionary biology.
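The longer-tails point is purely a property of variance, which a quick numerical sketch can show. The standard deviations below (16 vs. 13) are hypothetical numbers of mine, chosen only to illustrate the shape of the argument, not empirical estimates:

```python
from statistics import NormalDist

# Same mean, different spread: the sigmas are illustrative only.
male = NormalDist(mu=100, sigma=16)
female = NormalDist(mu=100, sigma=13)

threshold = 145  # an arbitrary "genius" cutoff

male_share = 1 - male.cdf(threshold)
female_share = 1 - female.cdf(threshold)

# With equal means, the higher-variance distribution puts more mass
# in both tails, so the shares far above (or below) the mean differ
# by a large factor even though the distributions largely overlap.
print(f"share above {threshold}: {male_share:.5f} vs {female_share:.5f}")
print(f"ratio: {male_share / female_share:.1f}")
```

By symmetry the same multiple appears in the lower tail, which is the "more geniuses and more morons" observation.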
It is important though highly controversial to explore the genetic causes of differences in human achievement or behavior in order to avoid an inaccurate sense of how much discrimination is responsible for differences across races, genders, etc., in behavior and achievement.
For example, the female crime rate is grossly lower than the male crime rate.
Is it plausible that the difference is wholly unrelated to genetic differences between men and women?
One commenter asks: could it not be that the reason that university faculties are disproportionately left leaning is that leftist policies are more intelligent than conservative policies, so that university faculty, being of above-average intelligence, are naturally more likely to support leftist policies? There are two objections to this suggestion.
The first is that political opinion in faculties is not uniform across disciplines.
Economists, for example, are more conservative on average than teachers in the humanities, but they are not less intelligent.
Second, while today there is a widespread feeling that conservatives have lost their way, in the past the left has frequently supported policies that we know in retrospect were mistaken, such as communism, socialism, highly progressive taxation, urban renewal, rent control, populist theories of antitrust, heavy-handed public utility and common carrier regulation, progressive education, unilateral disarmament, pacifism, syndicalism, and anarchism.
Both Left and Right have much to be embarrassed about.
At about the same time that Iran's president, Mahmoud Ahmadinejad, was speaking at Columbia University and being insulted by his introducer, the university's president, liberal law professor Lee Bollinger, former Harvard president and former Secretary of the Treasury under Clinton, Larry Summers, was being disinvited to address the University of California Board of Regents after being denounced by University of California faculty as a symbol of gender and racial prejudice, and Erwin Chemerinsky, a left-leaning constitutional law professor, was being reinvited to be dean of the law school of the University of California at Irvine after being disinvited, apparently because of concern about his politics.
What can we learn about American universities today from this confluence of bizarre events? We can learn that the nation's elite universities are well to the left of the population as a whole.
Not that "left" is quite the precise term for Ahmadinejad, a Holocaust denier who would like to see Israel wiped off the face of the earth and may well be seeking nuclear weaponry in violation of the Nuclear Non-Proliferation Treaty of which Iran is a signatory.
But his status as an enemy of the United States and a leader of a revolutionary Third World state that overthrew a monarch (the Shah of Iran) allied with the United States makes him more acceptable to the left than the Democratic Jewish ex-president of Harvard who dared to raise the question whether there might be a genetic explanation for the fact that the female distribution of IQ is flatter than the male, although the means are the same, the distributions largely overlap, and thus there are plenty of women in the scientific and other professions who are more brilliant than many of their male colleagues.
Throughout most of the history of higher education in the United States, colleges and universities were on the same political wavelength as the nation as a whole.
When I was a student at Yale College in the 1950s, the politics of the student body was revealed by the fact that when during the 1956 presidential campaign Adlai Stevenson gave a talk at Yale (which I attended), he was roundly booed, though not by me, and of course Eisenhower went on to win a landslide reelection.
The faculty and students of the colleges and universities moved to the left in the 1960s, along with much of the intelligentsia--journalists, pundits, young lawyers, teachers, congressional staffers, and the like.
But, curiously, when beginning in the late 1970s, and accelerating with the election of Reagan in 1980, the country as a whole moved right, the colleges and universities stayed where they were--not the students, who had moved with the times, but the faculty and the administrators.
And there they remain, not all of them of course; but the humanities, and the social sciences except economics, and the law schools (but of course not the business schools), and the admissions offices (with their zeal for affirmative action), are well to the left of the population as a whole, and to the left of their students.
This is especially true of the elite private and public universities and the leading liberal arts colleges, apart from Catholic institutions.
The reasons are mysterious.
One may be the attraction--also mysterious--of Jews for left-wing causes, an attraction that the embourgeoisement of American Jews and the virtual disappearance in America of Christian anti-Semitism has not eliminated.
Jews occupy a disproportionate number of faculty positions at elite colleges and universities.
Probably another reason for the left's influence in higher education is that Americans who came of age during the late 1960s, a portion of whom were radicalized then, are today in senior positions in many faculties.
(A man or woman who was 18 in 1968 is 57 today.) A third reason may be the dearth of other outlets, besides faculty politics, for political activism today.
There is no serious left-wing movement in the United States.
There is a strident Republican right influential in the Republican Party, but the strident Democratic left exerts little influence on the Democratic Party.
You can post an angry comment on MoveOn.org, but that cannot be a very satisfactory mode of political expression compared to frightening the University of California's Board of Regents into embarrassing itself by disinviting a Democrat of Larry Summers's stature and distinction, or épater-ing the bourgeoisie by inviting Ahmadinejad to thunder against Bush and the West from a perch on Morningside Heights.
An ironic counterpoint to university leftism is the increasing, and increasingly successful, imitation of business firms by America's colleges and universities.
The leading universities are becoming giant corporations with multi-hundred-million dollar (or even billion dollar) budgets.
As they grow, they need and so they hire professional management.
Professional university management, in turn, takes its cues from its peers in the business sector.
So we have universities deeply involved in hedge funds, greedy for supracompetitive investment returns, engaged in the commercialization of scientific research, angling for applications for admission by the children of the rich, manipulating their statistics in order to move up in U.S. News & World Report's college rankings (for example by fuzzing up their admissions criteria, so that they get more applicants and therefore turn down more and so appear more selective), exaggerating the job prospects of their advanced-degree graduates, bidding for academic stars by offering high salaries and low teaching loads, and, related to the bidding wars, creating a two-tier employment system with tenured and tenure-track faculty on top and tenure-less, benefit-less graduate students and temporaries on the bottom to do the bulk of the teaching.
And so the modern American university system allows its faculty and administrators to live right, while thinking left.
Becker has accurately summarized the International Monetary Fund's recent report on the effect of globalization (meaning increased integration of the world's economy) on inequality.
(It is chapter 4 of the IMF's "World Economic Outlook" published this month and available online at http://www.imf.org/external/pubs/ft/weo/2007/02/pdf/c4.pdf.) In essence, the report, while acknowledging serious data limitations, finds that average incomes have increased significantly in most nations in recent decades, but that income inequality has also increased in most nations, mainly because of disproportionate increases in the incomes of the top fifth of the populations.
The incomes of the other quintiles have increased too, but not as fast, so that overall the gap between rich and poor has increased, although the poor are better off--just not by as much.
Both the increase in average incomes, and especially the increase in inequality, are driven mainly, the report finds, by increased utilization of advanced technology, which increases the returns to high-skilled workers relative to the returns to low-skilled or unskilled ones.
The report suggests that greater investment in education would tend to reduce inequality by increasing the proportion of high-skilled workers.
I want to question three assumptions of the IMF report.
The first is that increased income inequality is a bad thing, the second is that an increase in world average incomes is a good thing, and the third is that greater investments in education are bound to reduce inequality.
I do not think that increased income inequality is bad (regrettable, unfortunate, deplorable, etc.), in general (an important qualification, relaxed below), when it does not involve any reduction in the incomes of a substantial fraction of the population.
Suppose that over some period the average income of people in the bottom four quintiles of a nation's income distribution increases by 2 percent and the average income of people in the top quintile increases by 10 percent.
The result is increased income inequality, but so what? Everyone is better off, and why should the fact that the rich are better off by a larger percentage concern anyone? What is true is that if the baseline is extreme inequality and many people are below the poverty level, a further increase in inequality can be politically destabilizing.
Suppose 99 percent of a nation's people live in poverty and the other 1 percent are rich and over some period the average income of the 99 percent rises barely at all, lifting few above the poverty level, while the average income of the 1 percent who are already rich doubles.
Such a pattern would exacerbate what would doubtless already be a high degree of social unrest.
I argued in my blog post of December 10, 2006, that the continuing enrichment of the already superrich stratum of the American population is a potential source of political problems too.
But concern with the impact of particular forms and degrees of inequality in particular countries at particular junctures in their history does not justify concern with a rise in inequality in the world as a whole. That approach, while natural for the IMF, treats the entire world as if it were a single nation, abstracting from the particular circumstances of particular nations--though it is the particulars that determine whether inequality is a serious problem.
It might be argued that, given diminishing marginal utility of income, average and total human happiness would be increased if the incomes of the poor grew more rapidly than those of the rich, because presumably an extra dollar confers less utility on a rich person than on a poor one.
But this observation would be pertinent only if rising inequality were a product of unsound policies, whereas the IMF report attributes it to economic factors, such as technological progress and absence of barriers to foreign investment, that are vital to continued growth in average incomes.
The poor, unless consumed by envy, are not made better off by policies that leave them as poor (or make them even poorer) but reduce the incomes of the rich.
Concern with inequality, it should be noted, is distinct from concern with poverty.
It would be possible to alleviate poverty without reducing the share of income going to the wealthiest quintile of the population.
Focusing on quintiles tends to break the link between equality and welfare.
Suppose some adjustment in the tax code resulted in reducing the average income of persons earning $100,000 a year by 2 percent and increasing the average income of persons earning $50,000 a year by 1 percent (the difference reflecting the much larger number of persons in the lower income bracket and the deadweight cost of the tax increase on the higher-income taxpayers); would that increase average happiness? I doubt it.
My second proposition is that, while again it is natural for an international organization like the IMF to consider increased global wealth a very good thing, there is no reason for any given individual to think that.
None of us is a citizen of the world.
We are citizens of particular countries, and our personal welfare is bound up with the welfare of our country rather than with that of the world as a whole.
Do Americans benefit from the rapidly increasing wealth of China? Some do, of course, both as consumers and as suppliers.
But there are many losers (besides the obvious ones--those who make products that compete with imports to the United States from China), since China's rapid growth has increased the price of commodities such as oil, severely aggravated the problem of global warming, and contributed to the rapid growth of Chinese military power, which is a potential danger to the United States.
Russia's increasing wealth has made Russia more bellicose and less friendly to the United States; and, in general, nations such as Russia that are rich in natural resources, especially oil, are not dependable allies of the United States--and they are all growing richer.
And the technological progress that is such a big factor in increased world wealth makes international terrorism more dangerous than it would otherwise be.
Where would terrorists be without cellphones, the internet and web, and cheap international air fares?
Third, it is not certain that increased investments in education would result in less inequality.
There is the cost of such investments to consider, and who within a society would bear that cost.
(Taxpayer-subsidized tuition for students at Berkeley does not increase income equality in the United States.) One must also consider who would benefit the most from education.
Suppose everyone in a nation had the identical opportunity to obtain as much education as he or she could benefit from.
The abler students would receive a better education than the less able, and the preexisting inequality of human capital might persist or even increase.
For notice that in the United States income inequality has been growing even though educational opportunities are abundant, with more than a third of the population obtaining some college education; most of the rest could obtain it as well if they thought they would benefit from it.
Presumably, then, the countries that ought to be considering greater investment in education for the sake of reducing income inequality are those in which that inequality is greater than it is in the United States.
In countries in which it is less, a greater investment in education would increase average incomes but might leave inequality unchanged--or even increase it to the U.S. level.
I want to comment on Becker's post, of course, but I will also take the opportunity to respond to one of the themes in the very interesting comments that readers of our blog made on my post of last week.
I agree completely with Becker that the government should not in general have an ownership interest in private companies.
The "in general" qualification is intended in part to approve of allowing the government to acquire such an interest temporarily, as part of the current bailout (for reasons I explain below), and in part to leave open the question whether the Social Security Administration should be permitted to invest some of its funds in the stock market. If the investment were spread over the entire market, so that SSA had only a very small stake in any given firm, the influence of government on firm management would be small.
I would worry, however, that it would grow and turn out to be an entering wedge for socialism, but that is a story for another day.
I also agree that caps on the salaries of the executives of banks that participate in the bailout are dumb.
Not only are such caps bound to be evaded, but if they were not evaded they would have the curious effect of subsidizing mediocrity.
Capping the salaries of the executives in one industry will drive out (and deter from entering) some of the ablest executives, creating a space that will be filled by mediocrities.
The allocation of talent across industries will be distorted and the recovery of the financial sector retarded.
Where I differ from Becker is with respect to the question whether the government should demand common stock in the banks it buys assets from.
I think it should (as it is authorized to do by the bailout law just enacted).
The reason goes to the heart of the justification for the bailout.
The banks are holding assets of dubious value.
This makes them reluctant to lend money, because, as I explained in my last post, what banks do is borrow (for example, from depositors) and then lend the borrowed money, and they need a capital cushion against the possibility that the people they lend to will default.
The smaller the cushion, the more conservative a bank's lending policy must be.
If the government in executing the bailout buys the bank's bad assets at prices equal to their true, low value, the bailout will have no effect (with a qualification, concerning liquidity, noted below).
A bank will be exchanging an asset worth say $1 million for $1 million; its capital will be no greater, and so neither will its willingness to lend be any greater.
The bailout will work only if the government overpays.
Suppose it pays $2 million for an asset worth only $1 million.
Then it has added $1 million to the bank's capital.
That capital is owned by the bank's shareholders.
The government's purchase of the asset will therefore have enriched the shareholders.
Moral-hazard issues to one side, why should the taxpayer be enriching shareholders? The alternative is for the government to say to the bank in my example: we will pay $2 million for your lousy asset but in exchange we want you to issue us $900,000 worth of stock.
(Not $1 million worth of stock, for then the bank might have no incentive to make the sale--or it might, since the capital infusion could help it to stave off bankruptcy.)
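The arithmetic of the proposal can be sketched with the post's own hypothetical numbers; the function and its names below are mine, purely for illustration:

```python
# A toy version of the balance-sheet arithmetic: the government
# overpays for a bad asset, and the overpayment is split between
# new bank capital and the stake the government takes in return.

def bailout_deal(true_value, price_paid, stock_to_government):
    """Return the capital added to the bank and the net gain left
    to the bank's existing shareholders."""
    capital_added = price_paid - true_value       # the overpayment
    shareholder_gain = capital_added - stock_to_government
    return capital_added, shareholder_gain

# The post's numbers: a $1 million asset bought for $2 million,
# with $900,000 in stock demanded in exchange.
capital_added, shareholder_gain = bailout_deal(
    true_value=1_000_000, price_paid=2_000_000, stock_to_government=900_000)

print(capital_added)     # 1000000 -- the bank's capital cushion grows
print(shareholder_gain)  # 100000 -- the incentive left for the bank to participate
```

Demanding slightly less stock than the full overpayment is what leaves the bank a positive reason to sell while returning most of the taxpayer's subsidy.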
I anticipate the following objections: (1) The banks will not participate.
But why not? They would not only be making money on the deal; as I just mentioned, by strengthening their capital base they would also be reducing the likelihood of bankruptcy.
(2) Government should not have an ownership interest in private companies.
I agree, but this would be a   temporary   interest; the government would sell its interest as soon as it could find a private purchaser.
(That was what happened in Sweden after it bailed out its banks from a crash similar to ours in the 1990s.
See Joellen Perry, "Swedish Solution: A Bank-Crisis Plan That Worked," Wall St. J., Apr. 7, 2008, p. A2.) (3) The taxpayer can recoup completely without the government's taking an ownership interest, because the problem is not so much that the "bad" assets are bad as that they are illiquid; the bailout will restore liquidity without adding to bank capital.
The third point is the most important, and let me pause on it.
The idea behind it is that the value of the "bad" assets that the banks hold is unnaturally depressed by the panic that has seized the financial industry.
The bailout will dispel the panic and so restore the "bad" assets to their true, "good" value.
The government will need only to hold the assets until their maturity and it will be able to sell them then at a price equal to or even higher than the "excess" price that it will have paid for them during the bailout.
The objection to this analysis is that if the situation is as depicted, there should be more private buying of bad bank assets than we are observing.
Buffett should be investing not only in Goldman Sachs but also in hundreds of other financial institutions.
There is plenty of global capital; why isn't more of it going to the purchase of bank assets whose true value is greater than their current market value? The bailout makes most sense if hundreds or even thousands of banks (there are more than 8,000 banks in the United States) really are broke or nearly broke, so that credit will dry up unless there is a massive infusion of capital into the banking industry.
The fact that the required infusion is coming from the U.S. government suggests that the global capital markets are not confident that they could recoup investments in buying bank assets.
But this objection is not conclusive.
It is possible that the banks' problem is not, or at least not only, undercapitalization because of the decline in the value of their assets, but lack of liquidity, which is different.
Suppose you have a very valuable asset but all of a sudden the government decrees that money is no longer legal tender--that all transactions henceforth must be in bamboo shoots.
Now, though your asset was valuable before the decree and will again be valuable when the decree is lifted, at the moment there is no market for it.
If you do not know when the decree will be lifted, you will be very reluctant to make loans, because you will not know whether, if a loan goes sour, you can sell or borrow against your assets in order to cushion the loss and avoid bankruptcy.
If that is the problem, the bailout may restore liquidity and thereby enable banks to sell or borrow against assets on the basis of their true value, and eventually the government will recoup the cost of the bailout, because it will own those assets and can sell them, when markets return to normal, for at least what it paid for them.
But probably the banks' problem is a combination of undercapitalization and illiquidity.
Their assets include assets whose value is tied to mortgages, and the value of mortgages has declined because of increased risk of default as a result of the bursting of the housing bubble.
Insofar as the bailout helps banks to overcome undercapitalization as well as illiquidity, it will be enriching the banks' owners--unless it demands common stock in partial compensation for its buying the banks' questionable assets for more than they are worth.
The theme in the readers' comments to which I would like to respond, and it is also a theme in the Wall Street Journal's editorial comments on the financial crisis, is that government policy, rather than the free market, is responsible for the crisis--government policy in the form of encouragements spurred in part by Congress to home ownership through the government-chartered though private Fannie Mae and Freddie Mac home-mortgage companies, low interest rates imposed by the Federal Reserve Board, and lax supervision by the Securities and Exchange Commission and other regulators.
I wish it were true.
And what is true is that the government, including Congress, the Federal Reserve Board, and the SEC, was complicit in contributing to or creating some of the preconditions for the crisis--cheap credit and lax regulation.
But there is a difference between creating and merely exacerbating a crisis.
Moreover, it is a paradox to exonerate the market on the ground that the government did not do enough to regulate it!
I believe that the basic causes of the crisis were six factors internal to the market system.
The first was abundant and therefore cheap global capital--the result of private economic activity--and, consequently, low interest rates, which encouraged borrowing.
The second factor was a housing bubble caused in part by those low interest rates and in part by aggressive marketing of mortgages.
The third was new financial instruments that businessmen believed reduced borrowing risks and so increased optimal leverage.
The fourth was the difficulty of "selling" a conservative business strategy to shareholders in a bubble environment.
Borrowing more and more at low interest rates while home or other asset values are rising enables financial institutions to make higher profits; a firm that refuses to jump on the bandwagon will as a result experience lower profits, and it will have difficulty convincing shareholders that they really are better off because the higher profits of the competing firms are unsustainable.
The fifth factor was sheer uncertainty--was it a bubble? If so, when would it end? Would the new financial instruments assure a safe landing if it was a bubble and it burst? And the sixth factor was that the downside risk to highly leveraged financial institutions was truncated by generous severance provisions for their executives, authorized by boards of directors that were not effective monitors of executive decisions.
Cycles of boom and bust are intrinsic to capitalism.
Government can make them more serious, and sometimes less serious, but if you take away government you will still have periodic economic crises.
I agree with Becker that the effect of the financial crisis on capitalism will depend on the severity of the crisis.
Very few people are committed in an emotional sense to a free-market ideology; if the free market seems not to be working, the population and its political representatives will cast about for an alternative.
In this longish comment, I respond briefly to some of the readers' comments on my last week's post, bring my discussion of the financial crisis up to date, and in closing return to the question whether capitalism has "failed."
1
Several comments note that there were a number of other prophets of doom besides Nouriel Roubini.
Here is one: "When the downturn in house prices occurs, many homeowners will have mortgages that exceed the value of their homes, a situation that is virtually certain to send default rates soaring.
This will put lenders that hold large amounts of mortgage debt at risk, and possibly jeopardize the solvency of Fannie Mae and Freddie Mac, since they guarantee much of this debt.
If these mortgage giants faced collapse, a government bailout (similar to the S&L bailout), involving hundreds of billions of dollars, would be virtually inevitable." Dean Baker, "The Menace of an Unchecked Housing Bubble," The Economists' Voice, vol. 3, issue 4, article 1.
Given the multitude of warnings from respectable sources, it is a puzzle why the warnings did not stimulate a serious effort to evaluate the health of the financial services industry and the adequacy of regulation.
Part of the answer may lie in a perceptive comment by reader Jamison Davies.
He reminds us that "Important to [Roberta] Wohlstetter's argument [about why the Japanese attack on Pearl Harbor achieved surprise] is the concept of the 'signal-to-noise' ratio, i.e. the amount of useful information being taken in compared to the information that is false, misleading, or irrelevant.
It turns out that earlier concerns about inflationary spikes may have just turned out to be background 'noise'...as well as other economic issues, but ex ante it is extremely difficult to tell what data will be predictively useful and what is just noise." Davies adds that "the difficulty in early warning, among other things, is that if you give correct warning and act in response to that warning, the attack will likely not materialize (i.e. if the US knew Japan was about to attack Pearl Harbor our defensive preparations would prevent Japan from following through).
This means that successful warnings are undercounted because the catastrophe never emerges.
This tends to weaken early warning systems as they are perceived to be ineffective even though they may have averted serious problems." Davies points out that "the economic analogy is regulation.
Regulations were seen as unnecessary and dismantled because there had been no crises, but policymakers failed to consider that there may have been no crises precisely because of the regulation."
Another comment quotes economist Thomas Sowell as saying: "Failure is an important part of the success of the capitalistic system." The commenter adds that in "the free market system, companies that are seriously mismanaged in one way or another will fail, and these failures make room for the ones that are well managed." All true, but in the current crisis many seriously mismanaged firms will be saved by the government, and many firms that are not mismanaged will fail because of the effect of the mismanagement of other firms on consumer demand and the credit market.
2
In earlier posts Becker and I have discussed whether the financial crisis is a liquidity crisis, a solvency crisis, or both.
At this writing it seems that it is more a solvency crisis than a liquidity crisis.
The initial bailout plan--to buy the sick assets of banks, such as their mortgage-backed securities--was premised on the assumption that the crisis was one of inadequate liquidity: uncertainty or perhaps even unreasoning fear was preventing the sale of bank assets at prices that reflected their "true" value.
If this was incorrect--if the problem was not that the banks' sick assets were frozen but that the banks were undercapitalized--the plan would be unsound: either the government would pay the actual, low value of the assets, in which event the banks would have no more capital than before, or it would overpay and thus be giving the banks a gift at taxpayers' expense.
The plan was quickly altered (the U.S., somewhat embarrassedly, taking its cues from the prime minister of Britain) from a purchase of assets to a contribution of capital in which the government would receive interest-bearing preferred stock in exchange.
A disturbing note is Secretary of the Treasury Paulson's plea to the banks who have received the capital contribution to lend it out rather than hoard it.
What is disturbing is that since banks are in the business of lending and do not receive a return on money that they hoard, they don't need prodding to make loans unless the risks are too great.
The risks remain too great unless the capital infusion ($250 billion split among nine banks) is large enough to make the banks adequately capitalized.
With the recession/depression spreading and deepening, the risks of lending are growing and so the banks need a bigger capital cushion than when the economy was booming.
It will not be prudent for them to lend unless either they have that cushion or the government guarantees the repayment of the loans they make.
3
The severity of the recession/depression precipitated by the financial crisis cannot yet be gauged accurately.
One reader amusingly cites the prediction of "Scholars of Astrology" that the economy will recover in seven months.
If so, the crisis will not provoke a serious rethinking of the nation's commitment to a market economy.
But if the recovery takes substantially longer--if, as seems possible, we are in the midst of the most serious depression since the Great Contraction of 1929 to 1933 (and why has the word "depression" become unmentionable? Why does everyone except me prefer the anodyne euphemism "recession"?)--then that commitment will come under fire.
Should it?
There are three basic types of economy (with many intermediate possibilities, of course): a pure free-market economy; a regulated market economy; and socialism.
In the first, all economic ordering is left to private action: money is private, contracts are enforced not by legal means but by concern with reputation and threats of retaliation, caveat emptor prevails, and the role of government is limited to providing internal and external security against violence.
In such a world there are, for example, no restaurant inspectors, and if you get ill eating in a restaurant you have no legal recourse; but restaurants might form voluntary associations that would conduct inspections, and careful consumers would patronize only members of such reputable associations.
Very few economists support so lean a system of government.
Virtually all support a regulated market system in which, for example, victims of food poisoning have tort remedies but systems of restaurant inspection are also instituted, to back up those remedies in recognition that most incidents of food poisoning are not serious enough to warrant the expense of bringing a lawsuit and that many restaurants operate on a shoestring budget and could not pay a substantial tort judgment.
An alternative to inspectors might be requiring anyone entering the restaurant business to post a substantial bond and allowing the successful plaintiff in a tort suit against a restaurant to recover his attorneys' fees.
But these are simply alternative methods of regulation rather than a reversion to a pure free-market economy.
Given the history of economic failure under socialism, we should exhaust the possibilities for adopting more effective regulations of the financial-services industry before jettisoning our regulated market system in favor of a socialist one.
That is so obvious as not to require argument.
What is less obvious is why so many people think that the financial crisis is proof that a market economy does not work and thus we need fundamental change rather than merely incremental regulatory reform.
The answer lies in what conservative economists used to call the "Nirvana fallacy." This is the idea that any failure of the economy to attain optimality is a "market failure" that warrants government intervention.
Conservative economists pointed out that the proper comparison is never between the operations of the actual market and an unattainable theoretical perfection, but between market-directed and government-directed or -regulated allocations of resources in particular economic settings.
Market failures are ubiquitous, as the current crisis demonstrates.
The crisis is not primarily a result of government actions.
The quasi-governmental status of Fannie Mae and Freddie Mac and the pressures exerted on them by Congress to facilitate home ownership by insuring risky mortgages were contributing factors to the crisis, but the basic causes were misassessment by the industry of the risks associated with extremely high levels of borrowing, misunderstanding of risk by home buyers encouraged by real estate brokers, mortgage brokers, and banks, conflicts of interest by rating agencies, corporate compensation policies that truncated downside but not upside risk, and the private costs of disinvesting in an industry undergoing a bubble (the housing industry) before the bubble bursts, since until that moment the profits from riding with the bubble will be increasing.
An additional factor was government inaction, but the failure of government to intervene in a market that is failing obviously presupposes rather than illustrates market failure.
In contrast, gratuitous government intervention when there is no market failure is a genuine example of government failure.
So a confluence of market failures has created an economic crisis, and the challenge is to develop regulatory responses that reduce the cost (net of the direct and indirect costs of the regulations themselves) of such failures.
Complacency on the part of some economists and politicians about the efficiency of the market system, and specifically an exaggerated belief in the robustness of financial markets, have created the impression that the current crisis is a crisis of capitalism rather than just another demonstration of the radical imperfection of human institutions--including the market.
Milton Friedman was one of the twentieth century's most distinguished economists, and one of the century's three economists (the other two being John Maynard Keynes and Friedrich Hayek) who had the greatest political influence--and he was the only American in the group.
Friedman spent most of his career at the University of Chicago, so it is natural that the University should name a major new component of the University, devoted to economic research, after him.
The Institute is essentially a joint venture of the University's economics department, graduate school of business, and law school.
The use of his name will help the University raise the funds required for the new Institute.
The decision, announced five months ago, has generated controversy on the University campus, sharpened by the current economic crisis that is thought in some circles to have damaged Friedman's legacy (it has certainly damaged Alan Greenspan's legacy).
Some 170 faculty members have signed a petition circulated by a Committee for Open Research on Economy and Society--which opposes the decision naming the new institute after Friedman--asking that a meeting of the University Senate (which consists of some University administrators and all faculty members who have been on the faculty for more than a year) be convened to discuss the decision.
The stated ground of opposition is that naming the Institute after Friedman would constitute the University's endorsement of his political views and would bias the research conducted by the Institute in favor of the free-market ideology that Friedman promoted so strongly.
But the opposition is also and probably primarily powered by distaste for Friedman's political and policy views and for his willingness to provide economic advice to the Chilean dictator Augusto Pinochet.
Friedman's association with policies that are either liberal or politically neutral, such as the volunteer army, the earned income tax credit (the negative income tax), the repeal of the laws against marijuana and other mind-altering drugs, and even affirmative action, is overlooked.
I don't think anyone would quarrel with the idea of an institute devoted to the support of academic research on economic issues, even though many of the issues that economists examine have political implications.
The name is the focus of the controversy.
Friedman was an advocate of politically controversial policies with which a number of University faculty do not want the University to be associated.
When buildings, classrooms, institutes, schools, etc. in universities are named after someone, it is usually a donor.
Especially when an institute, which is likely to be a special-purpose organization, is named after a public figure, it is natural to associate the mission of the organization with the name of that figure: the Hoover Institution of Stanford University was named after Herbert Hoover and is indeed conservative, though it is noteworthy that the Institution's conservative reputation has not extended to Stanford University as a whole, and no more would one expect the University of Chicago to be branded as conservative merely because it contains an institute named after a conservative economist.
The University of Chicago is not a conservative institution, though it is not as monolithically liberal as its peer institutions.
The purpose of naming the new institute after Friedman was presumably to encourage fund-raising; one economics professor at the University has been quoted as saying that Friedman's name would "resonate with the donors." So a further worry is that most of the donors will be conservatives who support Friedman's political views (that is to say, his conservative political views, as many of his views were not conservative), and that the new Institute will perhaps unconsciously bias hiring and promotion in favor of economists who support those views.
The Institute might (again, whether consciously or unconsciously), it is feared, conceive its mission as being to promote the ideas of the "Chicago School of Economics," of which Friedman was perhaps the leading (though not the founding), and certainly the most influential, member.
But that is unlikely.
Economics is a highly competitive academic field, and piety toward distinguished predecessors is not the path to academic success.
It is odd that the opponents of the Friedman naming should think that economists, of all people, would subordinate career motives to loyalty to Friedman's memory or the "Chicago School" (especially young economists for whom Friedman is just a name).
If the religion professor who is leading the movement against the naming is right that "Friedman's over"--that the current economic crisis has consigned Friedman, along with Greenspan, to the dustbin of economic history--he should have no fear that the new Institute will be biased in favor of Friedman's views.
If a physics institute were named after Albert Einstein, would the institute's researchers reject quantum theory?
It might seem that the controversy could be easily resolved by simply changing the name of the Institute.
But that would be costly to the University in several respects.
First, it would doubtless offend many donors, and probably leave the Institute in worse financial shape than had it not been named after Friedman in the first place.
Second, it would weaken the University administration and encourage the encroachment by faculty on administration prerogatives.
There is a whiff of the 1960s in the effort by faculty (joined by a number of students) to move the University of Chicago leftward.
Even if the original naming of the Institute after Friedman was a mistake, there is now too much at stake for the University administration to back down.
When Becker and I blogged on the financial crisis last Sunday, the bailout had just been announced.
The reaction of the stock markets and of senior government officials here and abroad suggests that the premise of the bailout--that the financial crisis is a liquidity crisis that can be resolved by the government's buying the assets of troubled banks at prices equal to the value the assets would have if there were a market for them (that is, if there were adequate liquidity to enable transactions)--was mistaken.
The crisis appears to be one of solvency rather than (or perhaps along with) one of liquidity; banks, along with insurers of bonds and other securities, are undercapitalized and so, as I suggested last week, require a capital infusion rather than just a purchase of frozen assets.
All of which merely underscores the enormous cloud of uncertainty that has enveloped the crisis and left economists struggling to understand the causes, magnitude, future course, and cures of what is shaping up as the biggest economic bust since the Great Depression of 1929 to 1933.
Last week's stock market crash may also reflect doubts about the government's competence to deal effectively with the crisis.
There is a sense that its reluctance to take an equity stake in the banks reflects a doctrinaire hostility to public ownership.
But here is the biggest mystery of all: why was the crisis not foreseen? An article on the front page of the business section of yesterday's New York Times attributes that blindness to "insanity," more precisely to a psychological inability to give proper weight to past events, so that if there is prosperity currently it is assumed that it will last forever.
This explanation is implausible--often people fail to adjust to change because they expect the future to repeat the past--and unhelpful, especially when one remembers that the academic specialty of Federal Reserve Board chairman Bernanke is the Great Depression.
We can get more help in answering the question of unpreparedness, or neglect of warning signs, from the literature on surprise attacks, notably Roberta Wohlstetter's great book Pearl Harbor: Warning and Decision (1962).
As she explains, there were many warnings in 1941 that Japan was going to attack Western possessions in Southeast Asia, such as the Dutch East Indies (now Indonesia); and an attack on the U.S. fleet in Hawaii, known to be within range of Japan's large carrier fleet, would be a logical measure for protecting the eastern flank of a Japanese attack on the Dutch East Indies, Burma, or Malaya.
Among the factors that caused the warnings to be disregarded are factors that may also have been decisive in the neglect of the advance warnings of the financial crisis now upon us: priors (preconceptions), the cost and difficulty of taking effective defensive measures against an uncertain danger, and the absence of a mechanism for aggregating and analyzing warning information from many sources.
Most informed observers in 1941 thought that Japan would not attack the United States because it was too weak to have a reasonable chance of prevailing; they did not understand Japanese culture, which placed a higher value on honor than on national survival.
Securing all possible targets of Japanese aggression against attack would have been immensely costly and a big diversion from our preparations for war against Germany, deemed inevitable.
And there was no Central Intelligence Agency or other institution for aggregating and analyzing attack warnings.
Much the same is true of the warning signs of the current financial crisis.
Reputable business leaders and economists had been warning for years that our financial institutions were excessively leveraged.
In mid-August of this year the New York Times Magazine published an article foolishly entitled "Dr. Doom" about a perfectly reputable academic economist, a professor at New York University named Nouriel Roubini, who for years had been predicting with uncanny accuracy what has happened.
In September of 2006--two years ago--he had "announced that a crisis was brewing.
In the coming months and years, he warned, the United States was likely to face a once-in-a-lifetime housing bust, an oil shock, sharply declining consumer confidence and, ultimately, a deep recession.
He laid out a bleak sequence of events: homeowners defaulting on mortgages, trillions of dollars of mortgage-backed securities unraveling worldwide and the global financial system shuddering to a halt.
These developments, he went on, could cripple or destroy hedge funds, investment banks and other major financial institutions like Fannie Mae and Freddie Mac." By August of this year, when the Times article was published, Roubini's predictions had come true, yet he continued to be ignored.
Until mid-September, the magnitude of the crisis was greatly underestimated by government, the business community, and the economics profession, including specialists in financial economics.
Bernanke had repeatedly stated that it was unlikely that the mortgage defaults that accelerated after the housing bubble burst in mid-2006 would spill over to the financial system or the broader, nonfinancial economy.
In May of 2007, for example, he said: "Importantly, we see no serious broader spillover to banks or thrift institutions from the problems in the subprime market." It has been more than two years since the housing bubble burst.
One might have thought that that was enough time to enable the experts to discover that our financial system was in serious trouble.
Why were the warnings ignored rather than investigated? First, preconceptions played a role.
Many economists and political leaders are heavily invested in a free market ideology which teaches that markets are robust and self-regulating.
The experience with deregulation, privatization, and the many economic success stories that followed the collapse of communism supported belief in the free market.
The belief was reinforced, in the case of the financial system, by advances in financial economics, and relatedly by the development of new financial instruments that were believed to have increased the resilience of the financial system to shocks.
Borrowing and then lending the borrowed funds is inherently risky, because you have fixed liabilities but (unless you invest in risk-free assets such as short-term Treasury Bills) risky assets.
But it was believed that the risks of borrowing had been reduced and therefore that leverage (the ratio of borrowing to capital) could be increased without increasing risk.
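A toy calculation makes the stakes of that belief concrete (the numbers are my own illustrations, not anything in the text): because liabilities are fixed, the higher the leverage, the smaller the fall in asset values needed to wipe out a firm's capital.

```python
def equity_after_shock(assets, leverage, decline_pct):
    """Toy illustration of leverage (the ratio of borrowing to capital).
    Liabilities do not shrink when asset values fall, so the whole loss
    comes out of the equity cushion.  Illustrative figures only."""
    equity = assets / (leverage + 1)                  # capital cushion
    debt = assets - equity                            # fixed liabilities
    assets_after = assets * (100 - decline_pct) / 100
    return assets_after - debt                        # equity left after the shock

print(equity_after_shock(100, 1, 10))  # 1:1 leverage, 10% decline -> 40.0 of 50 survives
print(equity_after_shock(100, 9, 10))  # 9:1 leverage, same decline -> 0.0, wiped out
```

The same 10 percent decline that merely dents a conservatively financed firm renders a highly leveraged one insolvent, which is why the belief that the new instruments had tamed borrowing risk mattered so much.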
Bayesian decision theory teaches that when evidence bearing on a decision is weak, prior beliefs will influence the decision maker's ultimate decision.
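The Bayesian point can be sketched numerically. Writing Bayes' rule in odds form, a weak signal (likelihood ratio near 1) leaves the posterior close to the prior; the probabilities below are illustrative assumptions of mine, not figures from the text.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood
    ratio.  A minimal sketch of the point in the text: weakly diagnostic
    evidence barely moves a strong prior.  Illustrative numbers only."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A decision maker with a 5% prior on systemic crisis, seeing warnings
# only modestly more likely under "crisis" than under "noise":
print(round(posterior(0.05, 1.5), 3))   # 0.073 -- barely budges
print(round(posterior(0.05, 20.0), 3))  # 0.513 -- decisive evidence would be needed
```

This is the mechanism by which free-market priors could dominate: the warning signs, being hard to distinguish from noise, carried likelihood ratios too close to 1 to overcome them.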
Second, doing something to reduce the risks warned against would have been costly.
Had banks been required to increase their reserves, this would have reduced the amount they could lend, and interest rates would have risen, which would have accelerated the bursting of the housing bubble--and then Congress or the Administration would have been blamed for the fall in home values and the increase in defaults and foreclosures.
In addition, it is very difficult to receive praise, and indeed to avoid criticism, for preventing a bad thing from happening unless the probability of the bad thing is known.
For if something unlikely to happen doesn't happen (as by definition will usually be the outcome), no one is impressed; but people are impressed by the costs of preventing that thing that probably wouldn't have happened anyway.
This is why Cassandras--prophets of doom--are so disliked.
It usually is infeasible as a practical matter to respond to their warnings--but if the prophesied disaster hits, those who could have taken but did not take preventive action in response to the warnings are blamed for the disaster even if their forbearance was the right decision on the basis of what they knew.
The deeper problem is that it is difficult and indeed often impossible to do responsible cost-benefit analysis of measures to prevent a contingency from materializing if the probability of that happening is unknown.
The cost of a disaster has to be discounted (multiplied) by the probability that it will occur in order to decide how much money should be devoted to reducing that probability.
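The discounting logic can be stated in one line of Python; the dollar figures and probabilities below are purely illustrative assumptions of mine, since, as just noted, the key input was unknown.

```python
def justified_prevention_spend(disaster_cost, p_before, p_after):
    """Prevention is worth at most the reduction in expected loss it buys:
    cost of disaster times the cut in its probability.  Illustrative
    figures only -- the crisis probability was in fact unknown, which is
    why this calculation could not be done responsibly ex ante."""
    return disaster_cost * (p_before - p_after)

# If a measure cut the probability of a $1 trillion loss from 2% to 1%,
# up to $10 billion of preventive spending would be cost-justified.
print(justified_prevention_spend(1_000_000_000_000, 0.02, 0.01))
```

With the probability unknown, every entry on the right-hand side of that calculation except the cost of prevention itself was a guess, which is the text's point.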
No one knew the probability of a financial crisis such as we are experiencing.
Even Roubini did not (as far as I know) attempt to quantify that probability.
Which brings me to the last and most important reason for the neglect of the warning signs, because it suggests the possibility of responding in timely fashion to future risks of financial disaster.
That is the absence of a machinery (other than the market itself) for aggregating and analyzing information bearing on large-scale economic risk.
Little bits of knowledge about the shakiness of the U.S. and global financial systems were widely dispersed among the staffs of banks and other financial institutions and of regulatory bodies, and among academic economists, financial consultants, accountants, actuaries, rating agencies, and business journalists.
But there was no financial counterpart to the CIA to aggregate and analyze the information--to assemble a meaningful mosaic from the scattered pieces.
Much of the relevant information was proprietary, and even regulatory agencies lacked access to it.
Companies do not like to broadcast bad news, and speculators planning to sell a company's stock short do not announce their intentions, as that would drive the stock price down, prematurely from their standpoint.
In any event, no effort to determine the probability of financial disaster was made and no contingency plans for dealing with such an event were drawn up.
The failure to foresee and prevent the 9/11 terrorist attacks led to efforts to improve national-security intelligence; the failure to foresee and prevent the current financial crisis should lead to efforts to improve financial intelligence.
Of all the puzzles about the failure to foresee the financial crisis, the biggest is the failure of foresight of professors of finance and of macroeconomics, with a few exceptions such as Roubini.
Some of the media commentary has attributed this to economics professors' being overly reliant on abstract mathematical models of the economy.
In fact professors of finance, who are found mainly in business schools rather than in economics departments, tend to be deeply involved in the real world of financial markets.
They are not armchair theoreticians.
They are involved in the financial markets as consultants, investors, and sometimes money managers.
Their students typically have worked in business for several years before starting business school, and they therefore bring with them to the business school up-to-date knowledge of business practices.
So why weren't there more Roubinis? I do not know.
And why, if not more Roubinis, not more financial economists who took the warning signs sufficiently seriously to investigate the soundness of the financial system? I do not know that either.
Limiting the compensation of a handful of employees at a handful of firms can't have any effect except to benefit the firms' competitors by making them more attractive places to work.
The limitations are a form of scapegoating designed to appease public anger over the high incomes of financiers who precipitated an economic collapse that has caused widespread suffering, much of it to people who, unlike financiers, bumbling or inattentive government regulators, macroeconomists, members of Congress, and improvident homebuyers and home-equity borrowers, bear no share of blame for the collapse.
There is a slightly better, though still unconvincing, case for regulating the compensation structure, as distinct from the level of compensation, of all financial institutions.
Since the market for financiers is global (in part because even a very small country can become a major banking center, given the mobility of capital and of financial personnel and the absence of any need for elaborate infrastructure, physical resources, or a large domestic market), effective regulation of compensation structures would require agreement among all major and many minor nations.
If that obstacle to effective regulation could be surmounted, the case for regulation would come down to the fact that front-loaded compensation of financial executives can increase macroeconomic risk.
To explain, the risk of the kind of financial collapse that occurred in 2008 was reasonably perceived as small; had it been perceived as large, the banking industry would have reduced its leverage and other sources of risk.
That small-seeming risk was produced by individual risky transactions, and the object of compensation reform is to discourage such transactions.
Suppose the transactions were purchases of triple-A tranches of mortgage-backed securities at attractive prices, but ones that carried a correlated annual risk of 1 percent that the investments would turn out to be worthless and bring down the firm.
A financial executive paid salary or bonus based on the expected profit of such a deal would have an incentive to make it despite the slight chance that it would blow up eventually.
Merely requiring, say, that a portion of his salary or bonus be placed in escrow for a few years would not deter him; the reduction in his expected compensation would be too small.
Suppose 50 percent of the bonus he received on the deal was placed in escrow and the duration of the escrow was five years.
Then he would face a 5 percent chance of losing half his bonus.
That would be too small an expected penalty to dissuade him from making the deal.
The penalty could not be made sufficiently heavy to dissuade him without depriving him of most of his current income.
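The escrow arithmetic in the preceding paragraphs can be checked in a few lines. The parameters are those given in the text (1 percent annual risk, 50 percent of the bonus escrowed for five years); the "5 percent chance" the text cites is the approximation of the compounded figure.

```python
# The escrow example from the text: a deal carries a 1% annual risk of
# blowing up, and 50% of the bonus is placed in escrow for five years.

annual_risk = 0.01
years_in_escrow = 5
escrow_share = 0.50

# Probability the deal blows up at some point during the escrow period
# (the text rounds this to 5 percent).
p_blowup = 1 - (1 - annual_risk) ** years_in_escrow   # ~4.9%

# Expected fraction of the bonus forfeited -- roughly 2.5 cents per
# dollar of bonus, too small a penalty to deter the deal.
expected_penalty = p_blowup * escrow_share

print(f"chance of losing the escrowed half: {p_blowup:.1%}")
print(f"expected penalty per dollar of bonus: {expected_penalty:.3f}")
```

The calculation supports the post's conclusion: even a fairly aggressive escrow clips only a few percent off expected compensation.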
So I think regulating financial compensation is a mistake.
At the same time I think financial executives probably are overpaid from a social perspective.
The reason is that their high incomes are generated mainly by speculative trading of stocks and bonds and other financial assets.
Speculative profits are not net additions to economic welfare, because they are offset by the losses of the speculators on the other side of successful speculators' trades.
That is not to say that speculation has no social value.
It generates great social value by bringing about improved matching of prices to values, which encourages investment in productive activities.
But the amount of profit that a speculator makes is not the measure of the social value of a successful speculation.
The increase in social value is probably only a small fraction of the speculator's profits.
If financial speculation involved a lot of career risk, in the same way that becoming an actor does, then the high incomes of successful speculators, like those of successful film actors, would be compensation for the risk of failure.
But financial executives, while they do sometimes lose their jobs because of bad trades, generally experience a soft landing because their training and experience equip them for a variety of good jobs in business, government, or academia.
Recipients of Harvard Ph.D.'s in physics are said to have two career tracks open to them: academia and Wall Street.
No doubt many are attracted to Wall Street by the much higher incomes they can expect there.
Yet their social value might well be greater in academia.
Higher marginal income-tax rates, or a stiff tax on financial transactions, might go a slight distance toward correcting the financial brain drain, but probably it is a problem that we shall just have to live with.
The New York Times published an article last Thursday on the Swiss health care system, which can be viewed here: www.nytimes.com/2009/10/01/health/policy/01swiss.html?_r=1&em.
The system is simple.
There is no "public option," that is, there is no government health insurance program, such as Medicare or Medicaid.
There is very little employer-provided health insurance, presumably because employee health benefits are not tax exempt; almost all health insurance is therefore bought by the insured.
Everyone is required to buy a health insurance policy that provides a specified minimum of benefits (they can buy more expensive policies if they want), but there are subsidies for people for whom the expense would be a hardship; about 30 percent of the population receives a subsidy.
Because of the heavy subsidization, the prices charged by the insurance companies are limited by government, but at a high level.
(The limits therefore limit doctors' fees and incomes, and doctors are less well paid in Switzerland, relative to average wages, than in the United States.) There are many insurance companies, and people can switch freely among them.
Copayments and deductibles are larger than in the United States, and as a result the average out-of-pocket cost of health care is higher in Switzerland than in the United States--an average of $1,350 per year, versus $890 in the United States.
But the aggregate cost of health care is much lower in Switzerland--11 percent of GDP versus our 16 percent--though higher than in any other country besides the United States.
There is, as I said, no special program for the elderly, corresponding to Medicare--which may be why male life expectancy at age 65 is higher in the United States than in Switzerland, although female life expectancy at age 65 is higher in Switzerland and life expectancy at birth is substantially higher in Switzerland, in part because infant mortality is only about half as great as here.
The quality of medical care does not appear to be inferior in Switzerland to that in the United States, and there appears to be no problem of queuing, as in Britain and Canada.
Indeed the Swiss have significantly more doctors, nurses, and hospital beds per capita than the United States, which suggests that there may be less queuing there than here; and there is general satisfaction among the Swiss with their system, although there is some grumbling over the high cost of medical care.
Of course one must not put too much weight on a single article, but the information in the Times piece appears to be corroborated, at least the statistical data; and some of my description of the Swiss system is drawn from other sources.
If the United States could reduce its medical costs from 16 percent of GDP to 11 percent, the savings would be $700 billion a year; and if the reduction did not reduce the health or longevity of the American population or create queuing costs, there would be no offsetting cost; the $700 billion in savings would be net.
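The $700 billion figure above is easy to verify; the only input it requires is U.S. GDP, which I take here to be roughly $14 trillion (approximately its 2009 level, an assumption rather than a number given in the text).

```python
# Back-of-the-envelope check of the $700 billion savings figure.

us_gdp = 14e12        # assumption: roughly 2009 U.S. GDP, not from the text
us_share = 0.16       # U.S. health spending as a share of GDP
swiss_share = 0.11    # Swiss share

annual_savings = (us_share - swiss_share) * us_gdp
print(f"annual savings: ${annual_savings / 1e9:,.0f} billion")  # ~$700 billion
```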
But while the Swiss health-care system may be great for the Swiss, comparing the health-care systems of two countries, even if they are broadly similar (both the United States and Switzerland are wealthy, modern, Western, democratic, capitalist nations), is treacherous, because beneath the broad similarities are potentially important differences.
Two of particular importance in the present context are, first, that the Swiss are probably healthier than Americans, on average, apart from any superiority of Swiss health care, and, second, that the Swiss probably have lower expectations of health care than Americans.
The Swiss do not have a large "underclass" (corresponding to the residents of our inner cities) that is poor and has a very high murder rate and high infant mortality and a high incidence of AIDS and other diseases.
In addition, the Swiss do not have America's obesity problem, which is a source of abnormally high medical costs because of the treatment costs of diabetes and other diseases to which obese people are disproportionately prone.
And the Swiss people in all likelihood do not expect as much medical intervention as Americans do.
Europeans tend to be more fatalistic than Americans.
They do not share our preoccupation with extending the longevity of very old people, or our exaggerated faith in medical science that leads some of us to describe the death even of a nonagenarian relative as a "medical failure." Nor do they have as great a propensity as we to insist (after researching a disease on the Internet) on receiving medical care beyond what a doctor's professional judgment thinks warranted.
Our expectations regarding medical treatment are connected to our poor health: Americans want both to indulge in an unhealthy but enjoyable life style and live forever, and they try to square the circle by demanding extravagant (by international standards) health care.
(I am exaggerating, of course; some of our poor health is due to ignorance rather than to a deliberate choice to substitute medical treatment for healthful living.)
So we might adopt the Swiss system and discover that our aggregate costs of health care had declined little from their current 16 percent of GDP.
Indeed, because of increased coverage, it might increase (see below).
The proper use to be made of the experiences of other nations with health care is not advocacy for our adopting the health-care system of a nation broadly comparable to ours that spends a lower fraction of GDP on health care than we do.
It is to note the methods used by foreign countries whose health-care systems are well regarded by the local population and see whether any of them could work well here, bearing in mind the dangers of piecemeal adoption of foreign methods.
(An example of those dangers is the adoption by the Detroit auto companies some years ago of the "quality circles" used by Japanese auto companies to increase productivity by encouraging their workers to suggest productivity-enhancing innovations.
The quality circles failed in Detroit because the auto companies did not realize that what made the quality circles work in Japan was the practice of lifetime employment; our workers were reluctant to suggest productivity improvements because they knew it might well result in a smaller workforce and therefore in layoffs.)
The features of the Swiss health-care system that seem well adapted to American conditions (though whether their adoption would be politically feasible is a separate question--to which the answer is "no," at least at present) are, first, repealing the tax exemption for employer-furnished health benefits, since the exemption both creates an artificial incentive for employers rather than employees to buy health insurance and disguises the cost of the benefits to the employees (in lower wages); second, making everyone buy health insurance, in order to prevent adverse selection (that is, excess demand by the unhealthy), the problem to which group (normally employer-group) insurance is a second-best solution; third, requiring significant copayments or deductibles so that the marginal cost of health care to the insured is not so low as to induce the overuse of medical resources; and fourth, providing no special program for the elderly, but instead requiring them to buy insurance like everyone else, with the cost subsidized only if they cannot afford the cost of the insurance rather than just because they are old.
Such reforms would probably produce a net savings in aggregate U.S. health-care costs, though this is not certain, because of the subsidies and because any extension of coverage--which would be considerable because everyone would be required to have health insurance and the number of uninsured in the United States exceeds 40 million persons--is likely to increase the demand for health care.
The subsidies are transfer payments rather than costs in the sense of consuming real resources, but worrisome nevertheless because of the potential long-term harm to the economy from our soaring public debt.
But the aggregate transfers and (real) costs would probably be less under a version of the Swiss approach than under the approach urged by the Administration, which does not have credible cost-saving measures built into it.
Oliver Williamson, an economist who won half a Nobel prize last week, has made important contributions to a field of economics that is not as well known as it should be: "organization economics." This is a field, closely related to a branch of sociology called organization theory, to which pioneering contributions were made by Alfred Chandler, Herbert Simon, and Ronald Coase, as well as Williamson; more recent contributors of note include Jacques Crémer, Bengt Holmström, Luis Garicano, Canice Prendergast, Jean Tirole, and others.
I have used organization economics in my academic work on the structure of our national intelligence system; Garicano and I have published an organization-economic study of the FBI's domestic intelligence branch in the Journal of Economic Perspectives, and I have written a review essay on organization economics for a forthcoming issue of the Journal of Institutional Economics.
Oddly, an interest in organizations is a latecomer to economics, even though most economic activity is conducted through organizations.
The standard economic model is of trade between individuals, or firms assumed to behave as individuals.
For many purposes the model, despite its extreme simplification, is adequate.
If one wants to know how cigarette producers will respond to a rise in cigarette taxes, it is enough to assume unrealistically that a cigarette producer is one person rather than a complex organization.
But for other questions the assumption is inadequate--most obviously if the question is why some business firms have steeply hierarchical structures and others rather flat ("M-shaped"--"M" standing for multidivisional) ones (this distinction has been a particular emphasis in Williamson's work).
Or why compensation practices within firms (or government agencies) take the form they do.
Or--most fundamentally--why there are firms at all--why all economic activity isn't carried on by contracts among individuals.
Ronald Coase asked that question in a paper entitled "The Nature of the Firm," published in the 1930s.
His answer was that a producer has a choice between contracting with independent contractors for the output of the various inputs into the production of the finished product, and contracting with individual workers--employees--not for their output but for the right to direct their work--and that the employer would choose between the two forms of contract--the contract with independent producers or the employment contract--on the basis of which was more efficient, given the nature of his business.
Neither form of organizing production is perfect.
The arm's-length contract form requires detailed specifications that create inflexibility.
The command form--the employer directing the work of employees rather than contracting for their output--creates the well-known principal-agent problem (the problem economists call "agency costs")--the employee is supposed to be working to maximize the firm's profits, but what he wants to maximize is his own utility, so the employer has a control problem.
The modern literature emphasizes the principal-agent problem but also moves beyond it by emphasizing another aspect of control within an organization: the creation, transmission, processing, coordination, and use of information.
Because the span of supervision by one person is limited, the more employees a firm has, the more supervisors it requires; and the more supervisors it has, the more supervisors of supervisors it requires because the span of control is limited at every tier of the hierarchy.
So as an organization expands, the layers of supervisors multiply, and the consequences are delay in executing orders, loss of information, attenuation of the directions emanating from the top, and in short a weakening of control and coherence.
The larger the organization, moreover, the more difficult it will be to correlate the work of a particular employee with the value of the organization's output, and so the employee's incentives will fall further out of alignment with those of the firm.
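The multiplication of supervisory layers described above can be sketched numerically. The function and its parameters are hypothetical, for illustration only: it assumes a fixed span of control and simply stacks tiers of supervisors until a single person sits at the top.

```python
# Illustrative sketch (parameters hypothetical): with a fixed span of
# control, supervisors of supervisors are needed at every tier, so the
# number of tiers grows with firm size.
import math

def hierarchy(n_workers: int, span: int) -> tuple[int, int]:
    """Return (total supervisors, number of supervisory tiers) for
    n_workers front-line workers when each supervisor oversees at most
    `span` people."""
    supervisors = 0
    tiers = 0
    level = n_workers
    while level > 1:
        level = math.ceil(level / span)  # supervisors needed above this tier
        supervisors += level
        tiers += 1
    return supervisors, tiers

# A hundredfold expansion lengthens the chain of command, it does not
# merely add supervisors:
print(hierarchy(100, 8))     # (16 supervisors, 3 tiers)
print(hierarchy(10_000, 8))  # (1431 supervisors, 5 tiers)
```

Each added tier is another link at which orders are delayed and information is lost, which is the mechanism the paragraph describes.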
A partial alternative to hierarchy is to decentralize the organization in imitation of the market, by delegating authority to division heads and requiring them to compete with one another for allocations of capital from central management.
That is the essence of the "M-form" of corporate organization ("M" standing for multidivisional).
Organization economics emphasizes the variety of agency costs that flourish in complex organizations, such as "influence activities," by which agents try to influence the decisions of their principals, for example by flattery, by being a "yes man" and not "rocking the boat," by doing personal favors, by making alliances with coworkers, by jockeying for promotion, and by hoarding information to make oneself indispensable and reduce the output of one's competitors in the organization.
The challenge to organizations is to generate cooperation without use of the price system, since the employer does not buy the output of his employees.
Instead organizations rely on common norms, understandings, customs, and perspectives that substitute for explicit contracting and thus enable cooperation on dimensions of performance that cannot be prescribed by formal directives.
This set of informal binding elements (the organization's "culture") includes codes and other shared specific human capital that facilitates communication and coordination among agents.
Unfortunately, an organizational culture that is optimal in its current environment may become suboptimal when the environment changes, yet adaptation to the new environment may be difficult because once information channels and other organizing elements are created, an investment has been sunk that will constrain the organization's reaction to a new environment.
Change is especially hard because an organization's culture is diffused throughout the organization rather than concentrated in one place (an employment manual, for example) where it could be changed at a stroke.
The result is organizational conservatism or inertia, and explains why innovations tend to come from new firms rather than from existing ones.
An important aspect of organizational culture, one that I have emphasized in my academic work, is the awkwardness of combining different cultures in the same organization.
An example is the combination of criminal-investigation and security-intelligence functions in the FBI.
The former lend themselves to what are called "high-powered" incentives, which are systems of compensation and promotion that are based on objective performance criteria.
In the case of criminal investigation these are number of arrests weighted by convictions and sentence.
Intelligence work does not lend itself to such performance criteria, because the effect of surveillance and other intelligence activities in preventing terrorism or subversion is usually very difficult to assess.
Hence motivation takes the form of creating a "high commitment" environment in which the organization's leaders try to elicit good performance by getting staff to internalize the organization's goals.
The problem is that the absence of objective criteria of performance opens the door to "influence activities" by which members of the organization jockey for advancement.
If both types of task are combined in the same organization--those that can be directed by high-powered incentives and those that require high commitment as their motivator--the best employees will tend to gravitate toward the first type of task because they will be confident that they will do well if their performance is judged according to objective criteria.
They will be much less certain how well they will do in a job in which influence activities play a large role in determining success.
The problem of culture clash in an organization is further illustrated by the financial collapse of last year.
Banks had traditionally been conservative organizations emphasizing risk avoidance, modest compensation, gradual promotion, and secure tenure.
When in the deregulation era they were permitted to expand into riskier and (therefore) more lucrative forms of financial intermediation, they attracted a different kind of employee--smarter, more willing to take career as well as financial risks, more independent, and demanding higher pay.
Because they were generating more profits for the bank, their influence grew and placed pressure on the traditional bankers to take more risks in order to hold their own in the struggle to control the organization.
So one proposal for preventing a recurrence of the financial crisis, since the crisis was due in part to highly risky lending by banks, is to restore the separation codified in the Glass-Steagall Act of conventional banking from high-risk forms of financial intermediation.
The financial collapse illustrated another facet of organization economics as well.
The banking industry expanded very rapidly in the low-interest-rate environment created by Greenspan's monetary policy in the early 2000s, and the expansion took the form largely of the expansion of existing firms rather than the creation of new ones.
When an organization expands rapidly, there is a danger of loss of control over subordinate employees.
The danger in the case of the banking industry's expansion was increased by the fact that many of the new hires consisted of young risk takers whose attitudes and skills were often quite different from those of the higher tiers of management.
Senior managers had difficulty in assessing and limiting the highly risky deals engineered by the young hot shots.
Becker is right to point out the difference in supply conditions between oil (and other minerals, but I will limit my discussion to oil) and agricultural products: it is cheaper to expand output of the latter than of the former.
Hence as demand for oil and for food rises as a function of population growth (an important qualification, as I'll explain: population growth is not the only driver of increased demand for food), oil prices will rise faster than food prices.
This is fortunate because while there are substitutes for oil, there are no substitutes for food.
A continued increase in world population will increase the demand for both oil and food, and historical experience suggests, as Becker explains, that the increased demand for food can be met at only modestly increased cost even if the world's population expands greatly. This depends in part, however, on how rapid the expansion is: the more rapid it is, and hence the steeper the increase in the demand for food, the higher the cost of meeting that demand will be, since it is easier to increase production in the long run than in the short run.
Moreover, a sizable expansion in population would raise the price of farmland by increasing its opportunity cost.
As the world grows wealthier, the rate of expansion of population should, if historical experience is a guide, slow.
But even if population stopped growing altogether, the demand for food would continue to rise because more people (perhaps billions more) would be able to afford the rich diet that people in wealthy countries consume.
Supplying that rich diet is very costly in agricultural resources, for one of the major components of the diet is meat and the production of meat requires more agricultural output than the production of cereals and vegetables, since the animals that people eat are big consumers of food.
Technological innovations may hold down increases in the price of food that are due to the increased demand for a rich diet as multiplied by increase in population.
But those innovations may create substantial externalities even if they do not push up prices (indeed, the less the increase in prices, the greater the output of agricultural commodities and hence the greater the externalities).
As more and more countries adopt the most efficient methods of agricultural production, and thus for example converge on the optimally genetically modified variants of crops, genetic diversity will decline, which will increase the potential damage from blights.
(It is not only stock portfolios that benefit from diversification.) Agriculture is a heavy user of water, moreover, and global warming appears to be reducing the supply of water usable for irrigation by reducing the size of glaciers.
The runoff from the seasonal melting of glaciers provides a more usable supply of water than rainfall, because the water from a melting glacier is channeled, while rain that falls outside a river or other body of water is difficult to store for use in irrigation.
I am one of those timid souls who worry about the downside of technological advance and economic growth.
I find the prospect of continued increases in population and income, and of the technological innovations necessary to cope with those trends, unsettling.
Becker makes the essential point about the difference between the quality of government under autocracy (dictatorship, or nonconstitutional monarchy) and under representative democracy: there is more quality variance under the former.
Dictatorship tends to extremes of bad and good, representative democracy to mediocrity because the institutions of representative democracy are designed to diffuse rather than to concentrate political power.
Dictatorship and democracy are not dichotomies; they are points on a spectrum.
At the left end of the spectrum is direct democracy, in which the people vote on policies rather than for representatives, as in a California referendum or in ancient Athens; it is susceptible to being destabilized by sharp swings in public opinion.
Moving a step to the right we come to modern American democracy: public opinion is mediated by popularly elected representatives and the power of government is limited by an independent judiciary.
A step to the right of that brings us to the original U.S. constitutional democracy, that of 1787, in which, of the five major arms of government—the Presidency, the Senate, the House of Representatives, the federal judiciary, and the federal bureaucracy (consisting of all executive branch officials and employees except the President and Vice President)—only one was popularly elected, the House.
The President and the Senate were indirectly elected (via the Electoral College in the case of the President and the state legislatures in the case of the Senate), and the judges and bureaucrats were not elected at all.
Today the Senators are elected directly, and likewise (in effect) the President.
Take a big step to the right, and we encounter quasi-democratic polities, such as England in the eighteenth century and the German Empire (1871 to 1918).
Take another big step to the right and we encounter oligarchy—the rule of the few—illustrated by the collective dictatorships of the Soviet Union after the death of Stalin, and of China today.
A final step to the right brings us to the one-man dictatorships, for example of Hitler, Stalin, Mao, and Castro.
This spectrum has a parallel in the variety of managerial structures found in business.
At the right end is the highly centralized, “U” (unitary) form of organization.
At the left end is the decentralized, “M” (multidivision) form of organization illustrated by the high-tech firms of Silicon Valley.
The centralized form minimizes redundancy, maximizes top-down control, and is optimal for relatively simple, mature production processes that do not require constant innovation and flexible adaptation to a rapidly changing economic environment.
The decentralized form of management accepts redundancy and loss of tight control as the price for fostering innovation and adaptation by granting a considerable measure of autonomy to employees.
The Soviet Union’s command and control economy worked well in World War II because it enabled vast quantities of a limited number of weapons to be rapidly produced and distributed and millions of soldiers to be rapidly trained and deployed, but the system proved to be fatally maladapted to the complexities of a civilian economy.
In general, then, the simpler the economy (all-out low-tech war is the limiting case: there is only one demander, and for a limited range of goods and services, thus making supply simple), the more adaptive a dictatorial political system; the more complex the economy, the more adaptive democracy is.
A dictatorship is apt to limit information flows and business autonomy, and by doing so to reduce flexibility and innovation, fearing the private sector as a potential power rival to the dictator.
At the same time, the dictatorship wants the population to be content, for then it is more easily controlled.
The competing aims of limiting private freedoms and producing contentment may lead the dictator to relax control over the economy as increasing complexity makes a command and control economy increasingly inefficient.
As that happens and people become wealthier, they also become more self-confident and assertive, creating pressure for self-government and therefore democracy.
Dictatorship will often be optimal for very poor countries.
Such countries tend not only to have simple economies but also to lack the cultural and institutional preconditions to democracy.
Dictatorship is much less likely to be optimal for advanced economies.
This pattern seems to be broadly observed.
China and India present an interesting contrast and case study of theories of politico-economic development.
Both are huge Asian nations with rich cultures, limited natural resources, serious minority problems, perceived military challenges, great poverty, many very intelligent people (mainly as a function of their large populations), a belated and incomplete but meaningful embrace of free-market economics, and (related to the previous point) a booming modern sector.
So they have much in common.
But India is a democracy and China a collective dictatorship.
What does that portend for their future relative economic growth?
I am very reluctant to make predictions.
I shall go no further than to say that China seems to me to have more serious problems than India, problems rooted in part, though only in part, in dictatorship.
China is historically unstable and extremely corrupt, nationalistic, militaristic, and aggressive; like Wilhelmine Germany, which it resembles in other ways as well, it has a paranoid fear of encirclement by hostile powers (Japan, Taiwan, the United States, Vietnam, India).
It is also fearful of and hostile toward its numerous minorities, reactions that feed and are fed by nationalism.
Nationalism fosters loyalty to the state and thus serves the political purpose of the dictatorship and so is encouraged by it.
The dictatorship has further augmented its power by not only retaining but also expanding public ownership and control of much of the nation’s industry.
In this and other ways China has an unbalanced economy.
It has pursued a policy, sensible for a poor country with low wage rates, of producing mainly for export.
But as incomes rise, the pressure to move toward a consumer economy will increase, and it may be difficult for China to achieve this transition in a timely fashion because it is just beginning to develop a robust consumer infrastructure—a strong retail sector, product warranties, effective regulation of food and drugs, anti-fraud policies, health insurance, etc.—in short a modern service sector, which cannot be created overnight.
Perception of China’s problems has been obscured by its spectacular growth statistics (though it’s unclear how accurate they are) and by the nimbleness of its response to the global economic crisis, compared to most democratic nations, including the United States.
That is not an accident.
An economic emergency is like a war: it requires a rapid and concentrated response, which is easier to organize and implement in a dictatorship than in a democracy.
Keynes pointed this out in his foreword to the German translation of the General Theory, published in Germany during the Nazi period.
He wrote that “the theory of aggregated production, which is the point of the General Theory, …can be much easier adapted to the conditions of a totalitarian state [eines totalen Staates] than the theory of production and distribution of a given production put forth under conditions of free competition and a large degree of laissez-faire.” By “aggregated production,” he seems to have meant private plus government production, the latter being particularly important in a depression to take up the slack created by the drop in private demand for goods and services.
All true; but we know what happened to Nazi Germany.
The statistics on education and earnings presented by Becker are dramatic, but also puzzling, at least superficially.
Why should the sex ratio of either education or earnings change over a relatively short period of time (30 or 40 years)? It is fairly easy to explain the growth over this period in the percentage of women who work full time in the market rather than in the household. Improvements in contraception, a fall in the marriage rate (though that is in part a function of women’s higher market earnings potential), a reduction in the demand for children (also in part a function of that higher potential), the shift from a manufacturing to a service economy (and the growing automation of manufacturing), and the growth of household labor-saving appliances have all contributed to increased female participation in the labor force. But it is not obvious that any of these things would increase the ratio of full-time female earnings to full-time male earnings.
It’s not as if there’s been a relative increase in the number of jobs for which women are better suited than men.
Women are not as well suited to perform jobs requiring upper-body strength as men are, but men can perform virtually all service jobs as well as women can.
So an increase in demand for service workers should draw men as well as women into such jobs, leaving the gender wage ratio unchanged.
Similarly, one would expect an increase in the returns to education to affect men the same way it would affect women, so that relative graduation rates would not change and therefore would not affect relative earnings.
One factor in the increased ratio of female to male earnings is undoubtedly that until quite recently most women who worked full time were unable, unless they were unmarried, or married but childless, to spend as many years working full time as they are able to do today.
They would have to take years off from full-time employment to take care of their children, and so would be investing less in their human capital than male workers and therefore earning less.
And they would tend to cluster in full-time jobs that involve short work days, notably teaching, and so more of their compensation (relative to men’s) would take the form of leisure, as distinct from pecuniary income, than men’s compensation would.
Law, and especially medicine—fields that require protracted education and long hours, and are compensated accordingly—would be unattractive professions for most women.
Another factor is discrimination.
Women were largely excluded from the major professions until the 1960s (in part at least because of expectations that they would not remain full-time practitioners), and their educational opportunities were limited until then as well—many elite colleges and professional schools did not admit women.
Beginning in the 1970s, antidiscrimination laws corrected but also overcorrected sex discrimination, by placing pressure on employers to hire and compensate women at higher rates than justified by labor costs.
For, in order to avoid accusations of discrimination, employers began bending over backwards to hire and retain women, even ones who were slightly less qualified than men.
And the laws forbade employers to charge higher health insurance or life insurance premiums to female employees, even though they tend to use more medical services than men, and live longer, and so cost more to health and life insurers.
But all this leaves unexplained why women would be graduating at higher rates from colleges and from graduate and professional schools than men.
One possibility is differences between men and women in variance in IQ—the issue that got Larry Summers into trouble when he was president of Harvard.
Suppose, as he conjectured (with some evidence), that men and women have the same average IQ but that the distribution of male IQs is flatter than that of women—a higher proportion of men than of women have very high and very low IQs.
As graduation from most colleges and most graduate or professional programs requires a normal or high but not very high IQ, the greater male variance in IQ would tend to truncate male but not female graduation (and hence enrollment) rates: low-IQ males would be underrepresented in higher education, but high-IQ males would be overrepresented in just a few programs, such as high-energy physics, and so would not balance the males who were not admitted or who dropped out at high rates.
Males would continue to be overrepresented in jobs involving upper-body strength, but these tend not to require a high level of education.
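The truncation argument can be illustrated numerically. Here is a minimal sketch assuming hypothetical normal IQ distributions with the same mean but a flatter male distribution; the specific parameters (mean 100, standard deviations 15 and 13, a "normal to high" band of 100 to 130) are illustrative assumptions, not figures from the text.

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal distribution with mean mu and std. dev. sigma."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def share_in_band(lo, hi, mu, sigma):
    """Fraction of the distribution falling between lo and hi."""
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

# Hypothetical parameters: same mean IQ, flatter (higher-variance) male distribution.
MU = 100
SIGMA_M, SIGMA_F = 15, 13

# Suppose graduation requires an IQ in a "normal or high but not very high" band.
men = share_in_band(100, 130, MU, SIGMA_M)
women = share_in_band(100, 130, MU, SIGMA_F)
print(f"share of men in band:   {men:.3f}")
print(f"share of women in band: {women:.3f}")
```

Under these assumptions a larger share of women than of men falls inside the band: the extra men in the far right tail do not compensate for the extra men in the left tail.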
Cultural factors may also be at work, especially in the black community, where academic performance is disparaged among young men but not young women.
For example, 42 percent of black women who graduate from high school go on to college, compared to only 37 percent of black males; and just 35 percent of black male college students graduate within six years, compared to 45 percent of black female college students.
This implies that 19 percent of black women who graduate from high school are graduating from college within six years compared to only 13 percent of black males.
The overall situation is actually worse, because only 48 percent of black males graduate from high school, compared to 59 percent of black females (implying that the college graduation rate for black females is almost twice that for black males); and the disparity is almost as great for Hispanics.
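The implied figures in the two preceding paragraphs follow from multiplying the conditional rates together; a quick check using the numbers quoted in the text:

```python
# Conditional rates quoted in the text.
f_hs, m_hs = 0.59, 0.48            # black female / male high-school graduation rates
f_college, m_college = 0.42, 0.37  # of high-school graduates, share going on to college
f_grad, m_grad = 0.45, 0.35        # of college students, share graduating within six years

# Share of high-school graduates who finish college within six years.
f_hs_to_degree = f_college * f_grad
m_hs_to_degree = m_college * m_grad

# Overall share of the cohort earning a college degree.
f_overall = f_hs * f_hs_to_degree
m_overall = m_hs * m_hs_to_degree

print(f"female HS grads finishing college: {f_hs_to_degree:.2f}")  # ~0.19
print(f"male HS grads finishing college:   {m_hs_to_degree:.2f}")  # ~0.13
print(f"overall female/male degree ratio:  {f_overall / m_overall:.2f}")
```

The overall ratio comes out to roughly 1.8, consistent with the claim that the college graduation rate for black females is almost twice that for black males.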
Blacks and Hispanics constitute a sizable fraction of the U.S. population.
Nevertheless, there is a gap between white male and white female graduation rates as well; the high school graduation rate for white males, for example, is 74 percent, compared to 79 percent for white females, and the college graduation rate is 43 percent for white males compared to 57 percent for white females.
Recently, as an aspect of growing hostility to immigrants to the United States fueled by our continuing economic crisis, questions have been raised concerning the desirability of what is called “birthright citizenship.” The term refers to awarding citizenship to everyone born in the United States (with a few very minor exceptions, such as the children of accredited foreign diplomats and of foreign heads of state on official visits to the U.S.), including the children of illegal immigrants whose sole motive in immigrating may have been to confer U.S. citizenship on their as yet unborn children.
This rule—though thought by some (not by all) to be compelled by section 1 of the Fourteenth Amendment, which provides that “all persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside,” but which in any event is codified in a federal statute which provides that “the following shall be nationals and citizens of the United States at birth: (a) a person born in the United States, and subject to the jurisdiction thereof”—can indeed be criticized.
“The Federation for American Immigration Reform estimates that 165,000 babies are born each year in the United States to illegal immigrants and others who come here to give birth so their children will be American citizens.” Kelley Bouchard, “An Open-Door Refugee Policy Has Its Critics,” Maine Sunday Telegram, June 30, 2002, p. 11A.
There is said to be “a huge and growing industry in Asia that arranges tourist visas for pregnant women so they can fly to the United States and give birth to an American. Obviously, this was not the intent of the 14th Amendment; it makes a mockery of citizenship.” John McCaslin, “Inside the Beltway: Rotund Tourists,” Wash. Times, Aug. 27, 2002, p. A7.
We should not be encouraging foreigners to come to the United States solely to enable them to confer U.S. citizenship on their future children.
That abuse provides an argument for abolishing birthright citizenship.
A constitutional amendment may be required to change the rule, though maybe not (see Peter H. Schuck & Rogers M. Smith, Citizenship Without Consent: Illegal Aliens in the American Polity 116–17 (1985); Dan Stein & John Bauer, “Interpreting the 14th Amendment: Automatic Citizenship for Children of Illegal Immigrants,” 7 Stanford L. & Policy Rev. 127, 130 (1996)), since the purpose of the rule was to grant citizenship to the recently freed slaves, and the exception for children of foreign diplomats and heads of state shows that Congress does not read the citizenship clause of the Fourteenth Amendment literally.
If birthright citizenship is not commanded by the Constitution, it can be eliminated by amending the statutory provision that I mentioned.
But closing the loophole that encourages foreigners to come to the United States solely to make their future children U.S. citizens would not address the larger question of birthright citizenship.
For undoubtedly most children born in the United States to illegal immigrants are not born to persons whose motive for immigrating was based in whole or significant part on a desire to have U.S. citizen children.
Most countries outside the Western Hemisphere do not recognize birthright citizenship; instead they base citizenship of children on the citizenship of their parents or other lawful connections between the parents and the country (ethnicity or religion, for example).
Should we adopt that approach, by constitutional amendment if necessary? (It may not be necessary, as I have suggested, but I take no position on that question.) The problem is that though it would discourage people from coming to the United States for the sole or main purpose of having children who would be U.S. citizens, it would probably on balance increase the size of the illegal immigrant population.
The United States, at least when the economy is healthy, is a magnet for illegal immigrants, most of whom manage to avoid being deported.
Their children born here are U.S. citizens despite the parents’ illegal status.
If birthright citizenship were abolished, these children would not be U.S.
citizens, at least not automatically as at present.
Nor would their children.
One would have, as in some European countries, generations of illegals—persons who had never lived anywhere else, who could not feasibly be deported (and to where?—the country of origin of their grandparents, a country with which they had no connection, and the language of which they did not speak?).
If many illegal immigrants do not become well assimilated in the United States (many do, however), at least their children born here probably do.
And that is a considerable benefit to the nation that eliminating birthright citizenship would undermine, perhaps eliminate.
If most illegal immigrants were deported within a few years of their arrival in the United States, the problems created by birthright citizenship would largely disappear. Any children born here would still be young when their parents were deported; the parents would bring them back home, and most of these children, grown to adults, would not come to live in the United States even though entitled to do so because they were U.S. citizens.
But illegal immigration to the United States has overwhelmed the resources that the political process is willing to allocate to exclusion and deportation (given demand by U.S. employers for immigrant labor, legal or not), and as a result many illegal immigrants have children who grow to maturity in the United States and know no other country.
For these children to be illegals would create what in Europe is called a “helot” population, helots in ancient Greece having had an ill-defined intermediate status between slaves and free citizens.
Concern with birthright citizenship is probably misplaced, because the most serious problem of U.S.
immigration policy is not who should be excluded but who should be admitted, and that problem should be tackled first.
We are handicapping our growth by refusing to allow easy admission of those immigrants who are most likely to foster economic growth by virtue of their IQ, skills, or wealth.
Instead we continue to emphasize lotteries and family-reuniting as the principal criteria of lawful immigration.
The new coalition government in England is embarking on an ambitious austerity program.
One goal is to eliminate 490,000 jobs in the public sector; a related goal is to slash government expenditures by 19 percent over the next four years.
Administrative budgets—overhead—of government agencies are to be slashed by 34 percent.
(The U.S. population is five times as great as the U.K.’s—so imagine our eliminating 2,450,000 public sector jobs!) By a combination of tax increases and spending cuts, the U.K. government hopes to eliminate the annual national budget deficit, currently more than 11 percent of GDP (similar to ours), by 2015.
Among popular programs to be trimmed or eliminated are housing subsidies for middle-class persons.
Defense expenditures are to be reduced by 8 percent, education spending by 3.6 percent, and the teaching budgets of public universities by 40 percent.
Since spending on technical subjects is to be protected, other disciplines, such as the humanities, will face especially steep cuts and tuition will rise.
England can make dramatic changes in policy overnight because its government is extremely centralized.
The Cabinet dominates the House of Commons, thus fusing the executive and legislative branches (the legislative branch is effectively unicameral—the House of Lords counts for little), and except in Scotland and Wales, there is no regional autonomy, as there is in the United States by virtue of our federal system.
Except in wartime, we cannot make major policy changes in a hurry, though the Obama Administration managed in two years to make more changes than any President, in a comparable period, since Lyndon Johnson (and much of the groundwork for his “Great Society” legislation had been laid in the Kennedy Administration and gained momentum from Kennedy’s assassination).
England’s economic crisis is a great deal like ours; its response has been dramatically different.
England, Keynes’s homeland, has rejected the Keynesian solution to depression or recession, which is to stimulate consumption by deficit spending aimed at increasing the incomes and confidence of the public; the Obama Administration has embraced the Keynesian approach (which Americans call “stimulus”), though its implementation has been erratic, in part because of deficiencies of timing, design, and execution, and in part because of political resistance.
But the two nations’ diametrically opposed approaches have this in common: both proceed, in part, from a desire to use an economic crisis as an occasion for long-term economic reform.
Both administrations are looking beyond the crisis.
Both believe that the crisis has made the public receptive to reforms that they would resist in times of prosperity.
The English government, being dominantly although not entirely conservative (for it is a coalition government and the Liberal Democratic Party is the junior member of the coalition), wants reform in the form of a leaner government.
Our government, being dominated by liberals (at this writing, though perhaps not for long, both Houses of Congress, as well as the Presidency, are Democratic), wants a larger government, which will expand public subsidies of health care, increase regulation (and not only of financial institutions), and combat global warming with taxes and subsidies.
Both approaches risk making the economic crisis, which as I said is similar in both countries, worse in the short run, though long-term benefits of the attempted reforms may outweigh present costs.
England is slashing public payrolls and subsidies at a time of high unemployment and economic anxiety, and this may reduce economic growth in the near term.
The U.S. government is frightening business by its regulatory measures, the health reforms that may well increase the labor costs of business, the prospect of higher taxes, anti-business rhetoric, and a general ambition to make government larger and more aggressive, which is unsettling the politico-economic environment of business.
There is a sense too that in their impatience to bring down the unemployment rate, the Federal Reserve and the Treasury may soon embrace policies that sow the seeds of future inflation.
Uncertainty causes businesses and consumers alike to tend to freeze—to save (in “safe” forms that don’t promote productive investment) rather than spend—and the Obama Administration has increased the level of economic uncertainty in the United States.
Unfortunately, as we have learned in the last two years, the economics of the business cycle, and in particular of the business cycle when it enters a deep trough, are not well understood.
It is possible to tell a “story” about how the English program will stimulate investment and consumption by convincing businesses and consumers that a leaner government will conduce to faster economic growth.
But there is an equally plausible “story” about how the English program will reduce growth by reducing employment and incomes, and that what both England and America need is further deficit spending to increase incomes and employment.
There is too much uncertainty, compounded by the dramatic economic changes occurring in major trading partners of England and the United States, to be confident that one “story” is true and the other false.
What may make the prospects for England brighter than those for the United States, regardless of which story is true, is that the English people are famously patient and stoic, and the English government, as I said, is unusually centralized; this combination may give the government flexibility in adjusting its policies to changing economic conditions.
If anti-Keynesian policies don’t work, the government can switch to Keynesian policies.
We don’t have that flexibility.
Not only is our government highly decentralized, but our politics appear to be approaching gridlock, in which neither political party will raise taxes significantly, cut public spending significantly, or even resist new spending programs.
The Keynesian approach has been discredited, whether rightly or (as I think) wrongly in this country, but fiscal rectitude (as one might call the English approach) seems unattainable here as a political matter.
There is widespread concern that elementary and secondary school education in the United States is deteriorating and is now inferior to that of many other countries, as measured for example by high-school graduation rates and college attendance rates.
Proposals for reform fall into two main classes: increasing competition in the provision of educational services; improving the quality of public school teachers.
A reform aimed at improving the quality of public school teachers that has received a good deal of attention lately is the “value added” method of evaluating teachers’ performance, now being used in the public schools of Los Angeles.
See “Grading the Teachers: Value-Added Analysis,” www.latimes.com/news/local/teachers-investigation/.
The evaluation begins by determining the average improvement of students, say between the end of third grade and the end of fourth grade, and then compares each student’s actual improvement with that average, all as measured by performance on standardized tests.
If the improvement of a teacher’s students is above the average, the teacher is rewarded, for example with a bonus; if it is below, the teacher can be counseled in an effort to improve his or her performance.
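In outline, the computation just described might look like the following sketch. The data and the simple all-student average baseline are hypothetical illustrations, not the Los Angeles district’s actual statistical model (which, as discussed below, would need to control for many confounding factors).

```python
def value_added(improvements_by_teacher):
    """For each teacher, compare the mean test-score improvement of that
    teacher's students with the average improvement across all students."""
    all_gains = [g for gains in improvements_by_teacher.values() for g in gains]
    baseline = sum(all_gains) / len(all_gains)
    return {t: sum(g) / len(g) - baseline
            for t, g in improvements_by_teacher.items()}

# Hypothetical year-over-year score gains for three teachers' students.
gains = {
    "teacher_a": [8, 10, 12, 9],
    "teacher_b": [4, 5, 6, 5],
    "teacher_c": [7, 8, 6, 9],
}
for teacher, va in value_added(gains).items():
    print(f"{teacher}: {va:+.2f}")  # positive = above-average value added
```

A positive score would trigger a reward such as a bonus, a negative one counseling; the objections in the next paragraphs concern everything this naive baseline leaves out.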
The Secretary of Education, Arne Duncan, supports value-added teacher evaluations (and informing parents of the teachers’ scores); the teachers’ unions oppose them as a step toward merit-based pay of public school teachers or even elimination of tenure.
The main objection to the program is that the value added by a teacher can’t really be measured.
The reason is that much may influence a student’s performance (including his year-to-year improvement) besides the teacher, including fellow students, conditions at home, and the student’s intelligence and application.
These other factors could in principle be controlled for, but actually to do so would probably strain the ability of public school bureaucracies to devise and administer sophisticated statistical measurements.
The alternative would be to assume that differences across students average out.
But unless classes are very large and students are assigned to teachers randomly, differences in average performance are unlikely to be statistically robust.
The value-added methodology is, moreover, very difficult to apply beyond elementary school.
When students have more than one teacher at a time their progression from year to year is the result of a team effort, and it is difficult to identify the contribution of each teacher.
Even if improvement (or lack thereof) is measured on a subject-by-subject basis, the existence of complementarities between subjects (math and science, for example, or history and social studies) means that a teacher in one subject can influence student performance in another.
And the complementarities can be subtle: an excellent English teacher may inspire her students with enthusiasm for school in general, stimulating them to improve their academic performance in unrelated courses.
Even with all these difficulties acknowledged, the granting of bonuses to teachers who receive above-average value-added evaluations would have some good effect on teachers’ motivation.
But of course the money has to come from somewhere, and the benefit may not equal the cost.
It is doubtful, moreover, that value-added evaluations, even when publicized (as the Los Angeles Times did recently with the L.A. public schoolteacher evaluations, causing a good deal of commotion and, it seems, the suicide of one teacher), have much effect on bad teachers, either by causing them to improve or by easing them out of the system.
The methodology is too crude (and likely to remain so) to provide a solid basis for censure, self-criticism, instituting a system of merit pay, or ending teacher tenure.
Tenure of course has bad effects, whether it is tenure in public or private schools, in public or private universities, in the federal judiciary, or in unionized workplaces: it encourages slacking off, selects for people who have a high degree of leisure preference, and leads to retention of poor performers.
At the same time, however, it is a form of compensation valued by many; were it eliminated in public schools, the schools would have to pay higher, maybe much higher, salaries, which hardly seems feasible in today’s economic climate (which is likely to be tomorrow’s too).
So value-added evaluation of public schoolteachers, while ingenious (despite its limitations) and growing in popularity, does not seem to be the answer, or even a major part of the answer, to dissatisfaction with American education.
Competition is more promising.
Two forms should be distinguished: charter schools, which are public schools (that is, publicly financed and tuition free) that are however managed outside the normal public school system in order to enable and encourage experimentation; and means-tested vouchers, which are scholarships that a student can use to attend a private (including parochial) school.
About a million students are enrolled in charter schools, and 200,000 other students receive vouchers enabling them to attend private schools.
Home schooling is another alternative to public schools, and an important form of competitive education, but it is not feasible for students from poor homes because their parents (often just a mother) rarely have enough education to be able to teach their children.
The charter schools have turned out to be a mixed bag.
There is excess demand for them, which is some evidence that they are superior to public schools.
But studies of drop-out rates and other measures of quality indicate that while some charter schools are indeed better than public schools, many are worse; there is as yet no convincing evidence that on balance they are superior to public schools.
There is more evidence that vouchers improve educational performance, though it is not conclusive.
See “Is School Choice Enough?” www.city-journal.org/2008/forum0124.html.
Vouchers enable poor students to attend established schools with a proven record of quality (many of these are Catholic parochial schools), so it is not surprising that they are more effective in improving academic performance than conversion of existing public schools to quasi-private status, which is the character of the charter-school movement.
Teachers’ unions are more fiercely opposed to vouchers than to charter schools, which is a vote in favor of vouchers! Private schools have greater freedom from regulation and are less likely to be unionized than charter schools (although charter schools, too, are generally not required to bargain collectively with their teachers), and they are numerous and established and can expand to accommodate increased demand.
I favor vouchers, but they are no panacea.
Obviously, basic education is an important social good.
But even bad schools provide that.
How much value good schools can add to the skills and knowledge of students who now attend bad schools is uncertain.
Maybe most students who attend bad schools have limited aptitude and motivation because of low IQ, poor physical or mental health, peer-group pressures, a bad family environment, or effects of popular culture.
How far such impediments to academic performance can be remedied by teachers, however skilled, and at what cost, is unclear to me.
A rich set of comments.
Several suggest that the solution to the "soft bribery" problem is to require that all campaign contributions be anonymous; then no one could prove that he had contributed to a particular candidate.
The problem is that, since "soft bribery" is an important motive for contributions, the total amount of contributions, and hence of political advertising, will fall, and so there will be reduced dissemination of political information.
That is a loss.
I do not know whether it would exceed the gain from reducing the amount of soft bribery, but it might well.
The brunt would be borne by new entrants, who need to advertise more in order to make a dent in the "brand recognition" of incumbents.
In addition, the wealthy, who are the big donors, are not a monolith; they have competing interests and therefore provide virtual representation for many ordinary people, such as the employees of the big corporations.
Also the wealthy do not have the votes; their political advertisements are aimed at average people.
Furthermore, if some candidates court the wealthy, this will drive others to raise money from the nonwealthy, something that the Internet has made easier to do, as we learned in the 2004 presidential election.
The nonwealthy give less per capita, of course, but there are vastly more of them.
Still another point is that even the wealthy do not care solely about policies likely to benefit them.
They also care about leadership, always a major focus in a presidential election.
I agree with the comment which suggests that increased political advertising could reduce turnout.
The politicians are not interested in maximizing turnout, but in winning, and a winning strategy may be to depress turnout if higher turnout would produce more votes for your opponent.
Negative advertising might provoke counter advertising also negative, the net effect of which was to reduce turnout but to the advantage of the candidate who had initiated the negative campaign.
I do not agree, however, that advertising in commercial markets is likely to depress output (the analog of turnout in the electoral market).
The comment that argues this points out that in a cartelized market, that is, in a market in which the sellers have agreed not to compete in price, there is a tendency for nonprice competition, including advertising, to increase, as sellers vie to engross the largest possible share of the profits generated by the cartel price.
I don't see how forbidding advertising in such a setting would result in higher output; it would simply increase the sellers' profits at the cartel price.
On the contrary, by reducing the erosion of cartel profits through nonprice competition, the advertising ban would tend to make the cartel last longer.
But in any event there is no price competition in the political market because politicians can't buy votes directly.
Advertising (broadly defined) is the only permitted method of competition.
Finally, I disagree with the suggestion, common though it is, that unlimited campaign spending impairs democracy by giving political power to the wealthy, or more precisely to any individuals or groups able and willing to spend disproportionately to support particular candidates or policies.
The suggestion confuses democracy with equality.
Democracy is the political system in which the principal officials are forced to stand for election at short intervals.
The identity and policies of the officials may well be influenced by the underlying distribution of income and wealth in society, but that does not make the society less democratic.
There is concern about the possibility of a flu pandemic that would be as lethal as, or more lethal than, the 1918-1919 Spanish flu pandemic, which may have killed as many as 50 million people worldwide; 500,000 died in the United States.
A strain of avian flu first detected in 1997 has infected some 150 million birds, including chickens, ducks, and geese, mainly but not only in eastern Asia.
More than 100 human beings have been infected, of whom about half have died.
The victims were infected by contact with diseased birds rather than by contact with infected humans.
As long as the only transmission is from birds to humans rather than from humans to humans, there will be no human pandemic.
But the flu virus is notoriously mutable; if the current strain of avian flu mutated into a form that made it transmissible from one infected person to another, it might spread rapidly through the human population.
Stocks of vaccine for immunizing people against the avian-flu strain, and of drugs (mainly Tamiflu) for treating already infected people, appear to be inadequate.
The Swiss pharmaceutical manufacturer Roche, the only producer of Tamiflu, has been reluctant to license its production to other manufacturers.
The probability of a pandemic is unknown, but probably significant because of the vast number of infected birds and the increasing number of infected human beings, in whom the virus might mutate into a form in which it was transmissible to other human beings.
Flu pandemics have been frequent.
There were two in the twentieth century besides the Spanish flu pandemic.
They occurred in 1957-1958 and 1968-1969, and each killed more than a million people worldwide.
All three twentieth-century pandemics involved strains of avian flu.
There was also the swine-flu pandemic scare in 1976; the failure of a pandemic to materialize has engendered some skepticism concerning the likelihood of an avian flu pandemic.
One of the most foolish forms of commentary on issues of public safety is to note the number of false alarms and infer from that number--entirely illegitimately--that there is nothing to fear.
The world in general and the United States in particular are unprepared for a flu pandemic.
Although the current strain of avian flu was discovered eight years ago, vaccine development and production are just beginning, along with stockpiling of Tamiflu.
Apparently there is at present only enough vaccine for 1 percent of the U.S. population.
Roche has only a limited capacity for producing Tamiflu and, as mentioned, is reluctant to license other pharmaceutical firms to produce the drug.
The President recently announced a $7.1 billion program for improving the nation's defenses against flu pandemics, but it will take years for the program to yield substantial protection.
So we are seeing basically a repetition of the planning failures that resulted in the Hurricane Katrina debacle.
The history of flu pandemics should have indicated the necessity for measures to assure an adequate response to any new pandemic, but until an unprecedented number of birds had been infected and human beings were dying from the disease, very little was done.
The causes are the familiar ones.
People, including policymakers, have grave difficulty taking measures to respond to risks of small or unknown probability.
This is partly because there are so many such risks that it is difficult to assess them all, and the lack of solid probability estimates makes prioritizing the risks inescapably arbitrary; and partly because politicians have truncated horizons that lead them to focus on immediate threats to the neglect of more remote ones that may be more serious.
("Remote" in the sense that, if the annual probability of some untoward event is low, the event, though it could occur at any time, would be unlikely to occur before most current senior officials leave office.) But by the time a threat becomes immediate, it may be too late to take effective response measures.
There is also a psychological or cognitive impediment--an "imagination cost"--to thinking seriously about risks with which there is little recent experience.
Wishful thinking plays a role too.
There is the inverse Chicken Little problem: the illogical reaction that because the swine-flu pandemic never materialized, no flu pandemic will ever materialize.
Another example of wishful thinking is the argument that most people afflicted by the Spanish flu in the 1918-1919 pandemic died not of flu, but of bacterial diseases such as pneumonia that the flu made them more vulnerable to.
But, first, it is far from clear that "most" died of such diseases, and, second, the current strain of avian flu appears to be more lethal than the Spanish flu.
Only about 1 percent of Spanish flu victims died, whereas 50 percent of known human victims of the current avian flu have died.
That percentage is probably an overestimate because many of the milder cases may not have been reported or may have been misdiagnosed; but it is unlikely that the true fatality rate is only one-fiftieth of the current reported rate.
It is estimated that even a "medium-level" flu pandemic could cause up to 200,000 U.S. deaths and a purely economic impact (that is, ignoring the nonpecuniary cost of death and illness) of more than $150 billion.
A specific problem with respect to preventing flu pandemics is the difficult economics of flu vaccines.
Because of the frequent mutations of the virus, a vaccine may be effective for only one season, in which event the manufacturer must recover his entire investment in the vaccine in just a few months.
The expected cost of the vaccine to the manufacturer is increased by his legal liability (a form of products liability) for injuries due to the side effects of the vaccine.
If a large population is vaccinated, a percentage of the population, amounting to a very large number of people, will in the normal course experience illness in the months following the vaccination.
Many of them will be tempted to sue, and uncertainty about the causation of an illness may enable a number of persons to recover damages who would have become ill anyway.
This problem can be solved in a variety of ways: by requiring proof of negligence rather than imposing strict liability for side effects of vaccination; by increasing the burden of proving causation in vaccination suits; or by the government's undertaking to indemnify the producers for damages attributed to the vaccine.
Even if such steps were taken, there would be a strong case for the government's financing vaccine development and procuring large quantities of vaccines for distribution as needed.
Measures along these lines are now being taken; and the government's agreeing to indemnify manufacturers for damages resulting from vaccine side effects would be a natural evolution from the National Vaccine Injury Compensation Program, created in 1986, which provides relatively modest "no fault" compensation for injuries caused by vaccination but does not preclude lawsuits against the manufacturers of the vaccine.
However, measures not begun until the threat of a pandemic is imminent may be too little, too late.
A difficult question is compulsory licensing of patented or other proprietary flu vaccines.
On the one hand, compulsory licensing would speed the production of vaccine; on the other hand, it would reduce the incentive of firms to develop new vaccines in the first place.
The answer may be to combine compulsory licensing with generous research subsidies.
Hurricane Katrina and now the danger of an avian flu pandemic--one an actual, the other a potential, catastrophe for which the nation failed or is failing to prepare adequately--underscore the need for institutional reforms that will overcome policy myopia based on inability to plan seriously for responding to catastrophes of slight or unknown probability but huge potential harm.
An amazing number of comments, some of them, however, needlessly uncivil.
A very interesting comment suggests that, if seven years of exclusivity are enough to induce substantial expenditures on developing orphan drugs despite their small market, we should reexamine the need for the 20-year patent term.
An even more radical possibility would be to jettison patent protection in favor of some variant of the Orphan Drug approach, a form of intellectual property protection that is much simpler than patent protection.
But I recognize the force of the criticisms of the Act in the excellent comment by "SteveSC."
Another comment asks whether the alternative uses to which resources would be put if there were no Orphan Drug Act would contribute as much to social welfare, or more; the commenter says that "it does not seem that a drug like Viagra is nearly as useful as say, one treating cancer." This proposition has great intuitive appeal, but it is (speaking of useful) useful to distinguish between the utilitarian and economic perspectives.
Economists generally measure the welfare effects of a new product by willingness to pay rather than by subjective satisfaction (pleasure, happiness, freedom from pain, etc.).
From that standpoint, a drug like Viagra that has a huge potential market might be more "valuable" than a drug that treated a cancer from which only a tiny number of people suffer.
I am not suggesting that the economic criterion of welfare should be the only one employed by government.
But I insist on the relevance of the economic perspective--and here I quote the commenter who said "It is in fact appropriate to ask--ad absurdum--whether an Act resulting in pharmaceutical companies spending billions to find a cure for a disease whose only victim were Bill Gates, instead of spending them on research that might benefit thousands of even millions of Americans, would in fact have negative benefits [for] the rest of us." No offense intended to Mr. Gates; it is nevertheless a worthwhile question.
A pair of excellent articles by Geeta Anand on the front page of the Wall Street Journal for November 15 and 16 discusses the little-known but very costly Orphan Drug Act of 1983.
The Act is designed, mainly by providing expanded intellectual-property protection (there are also tax incentives and research subsidies, but they are considered less important), to encourage the creation of drugs for the treatment of rare diseases, defined as diseases that afflict no more than 200,000 Americans at any given time.
Partly because different cancers are classified as different diseases, an estimated 25 million Americans have a rare disease as defined by the Act.
A company that is first to obtain the Food and Drug Administration's approval to sell such a drug has the exclusive right to sell it for seven years.
Although this is shorter than the term of a pharmaceutical patent (normally 20 years), establishing patent eligibility is a far more difficult and protracted undertaking and a patent once obtained is subject to court challenges that often succeed in invalidating it.
The expansion in intellectual-property rights brought about by the Orphan Drug Act makes the following economic sense: The incentive to create an intellectual work is a function of the size of the potential market for it.
The reason is that, by definition, the principal costs of such a work are fixed costs, incurred before the first sale is made; in the case of orphan drugs, they are the cost of R & D plus (what is often greater) the cost of clinical testing, and they greatly exceed the costs of actually producing the drug.
The larger the market, the lower the fixed costs per sale, and so the less the seller has to charge in order to recover those costs.
If fixed costs are 100 and variable cost (the cost of producing one unit of the product) is 1, then if there are 10 customers the producer must charge each at least 11 (100 divided by 10, plus 1) to break even, but if there are 100 customers he can break even at a price of 2 (100 divided by 100 plus 1).
Hence the rarer a disease, and thus the smaller the potential market for a drug to treat it, the higher the price that the producer must charge in order to break even.
His ability to charge that high price will depend on his ability to exclude competition; a producer allowed to duplicate the new drug could undercut the price charged by the original producer yet make a large profit because he would not have borne any R & D costs.
The higher the break-even price and therefore the greater the profit opportunity for a competitor, the likelier that competition will quickly erode the price and prevent the original producer from recovering his fixed costs.
Giving the original producer more than the usual protection against competition that the law provides to creators of intellectual property is thus a method of increasing the incentive to create drugs that have only a small potential market because relatively few people suffer from the diseases that the drugs treat.
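The break-even arithmetic sketched above can be written out in a few lines (a toy illustration using the hypothetical figures from the text--fixed costs of 100, variable cost of 1--not actual drug-cost data):

```python
# A minimal sketch of the break-even pricing described in the text.
# All figures are the text's hypothetical numbers, not industry data.

def break_even_price(fixed_cost, variable_cost, customers):
    """Lowest price at which total revenue covers fixed plus variable costs."""
    return fixed_cost / customers + variable_cost

# With 10 customers, each must pay at least 100/10 + 1 = 11.
print(break_even_price(100, 1, 10))   # 11.0

# With 100 customers, the producer breaks even at 100/100 + 1 = 2.
print(break_even_price(100, 1, 100))  # 2.0
```

The smaller the market, the larger each buyer's share of the fixed costs, which is why a rarer disease implies a higher break-even price.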
This is not just a theoretical point.
The fixed costs of a new drug are indeed high, even if the industry-sponsored figure of $800 million is, as I believe, an exaggeration.
This means, moreover, that even without a threat of competition, the incentive to develop a new drug that would have very few buyers would often be insufficient to induce that development.
Suppose a drug cost $500 million to develop and had only 50 potential customers.
Then each would have to pay (over his lifetime) $10 million (actually more, because of discounting to present value) to enable the producer to cover its fixed costs.
Health insurers might be unwilling to pick up such a tab.
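The same arithmetic, using the text's hypothetical $500 million development cost and 50 customers, can also illustrate the parenthetical point about discounting; the 5 percent discount rate and 25-year payment horizon below are my own illustrative assumptions, not figures from the text:

```python
# Hypothetical numbers from the text: $500M development cost, 50 patients.
fixed_cost = 500_000_000
patients = 50
per_patient = fixed_cost / patients
print(per_patient)  # 10000000.0 -> $10 million per patient in present value

# Why "actually more, because of discounting": if each patient pays an
# equal annual amount over, say, 25 years and the producer discounts
# future receipts at 5% (both assumed figures), the nominal lifetime
# total must exceed $10 million for its present value to reach $10 million.
r, years = 0.05, 25                      # assumed discount rate and horizon
annuity = (1 - (1 + r) ** -years) / r    # present value of $1 paid yearly
annual_payment = per_patient / annuity   # yearly payment whose PV is $10M
print(round(annual_payment * years))     # nominal lifetime total, > $10M
```

Under these assumptions the nominal lifetime payment comes to roughly $17-18 million per patient, which sharpens the point that health insurers might balk at such a tab.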
The success of the Orphan Drug Act in encouraging the creation of orphan drugs (more than 200 such drugs have been approved since the Act was passed, compared to only 10 in the preceding decade), which in 2003 had total worldwide sales estimated at roughly $28 billion, confirms the economic analysis and shows that intellectual-property protection can have important incentive effects.
But has the Act produced a net gain in economic welfare? That is less certain.
Of course many people have benefited from the drugs.
But the costs per benefited person are frequently astronomical; that is implicit in the rationale for giving producers of such drugs increased protection against competition.
The costs are especially high for those orphan drugs, apparently the majority, that alleviate symptoms or prolong life but do not cure the disease, so that the patient has to take them for the rest of his or her life.
The Wall Street Journal articles give an example of a woman who suffers from Gaucher disease and spends (or rather her health insurer spends) $601,000 a year for the drug, Ceredase, and its administration.
Because by definition the percentage of people who suffer from rare diseases is small, it is feasible for health insurance to cover such extraordinary expenses, provided the insurance pool is large.
And Ceredase is at the high end of orphan drug expense.
Resources for medical research are finite.
The Orphan Drug Act sucks large research expenditures into creating treatments for rare diseases.
Without the Act, those resources would be channeled by the market into other investments that might produce a higher social return.
The English economist Arnold Plant pointed out many years ago that if the law protects some monopolies, as by granting patents or equivalent intellectual-property protection, the profit opportunities that such protection creates (Ceredase generates an estimated 25 percent annual rate of return on investment for its producer, Genzyme Corp.), which are not generally available in the economy, may attract into the monopoly markets resources that would produce greater consumer welfare if invested in production in competitive markets.
As a result of competition, the price of television sets is much less than the price that people would be willing to pay if the sale of television sets were monopolized; the difference is "consumer surplus" and is a measure of the net value that the industry creates.
For all one knows, the consumer surplus that would be generated if the resources now devoted to developing orphan drugs were channeled into competitive markets would exceed the net benefits of those drugs, bearing in mind that there are few beneficiaries.
The number of people who take orphan drugs is far fewer than the total number of people with rare diseases.
Indeed, apparently only 200,000 Americans are taking such drugs.
Assuming that most global expenditures on orphan drugs are for Americans (I'm just guessing--I do not have U.S. figures), this would be an average expenditure of $100,000 ($100,000 times 200,000 equals $20 billion).
Few people would be willing, if only because few people would be able, to spend anywhere near this much on drugs.
As the economist Tomas Philipson points out, however, if people who do not suffer from rare diseases derive a benefit from orphan drugs--whether because they are altruistic or because they fear that they or members of their families might develop such a disease--then the total social surplus created by the Orphan Drug Act may exceed the consumer surplus.
Yet if the R & D expenditures induced by the Act were channeled instead into developing drugs for equally serious but much more common diseases, this might well be preferred by most people.
It is tempting to attribute the recent riots to the failure of the French (more broadly the Continental) economic model, in particular job protections (mandated fringe benefits, minimum wage, and tenure) that make employers reluctant to hire (because labor costs are so high and bad workers so difficult to fire).
The least productive workers are hurt worst by such a system--hence the enormous unemployment rate among French of African (mainly Algerian) origin--20 percent or higher.
But the United States, with its much more open economy, has its own history of race riots.
The riot in 1965 in the Watts district of Los Angeles resulted in 34 deaths.
Race riots in Detroit and Newark in 1967 resulted in another 70 or so deaths.
The race riots that broke out in April 1968 after Martin Luther King, Jr. was assassinated spread to 110 cities, the worst hit being Washington, D.C.
And in 1992 the beating by police of Rodney King led to another major race riot in Los Angeles.
The recent French riots, however, have been more widespread even than those of April 1968, though they have involved remarkably few deaths (one, at this writing) and apparently very little looting.
Riots either of the American race-riot variety or the recent French ethnic-riot variety (most Algerians are white rather than black) are mysterious phenomena.
They are not concerted, and so, in contrast to political riots such as the one that occurred at the 1968 Democratic Convention in Chicago, they are difficult to understand in instrumental terms, as efforts to extract concessions from the government.
Although the April 1968 race riots involved looting, the net economic effect, according to a study by economists William Collins and Robert Margo mentioned by Becker, was to depress the value of black-owned property.
Undoubtedly insurance rates for stores in black neighborhoods rose as well and were passed on in part to consumers.
Other things being equal, one would expect unemployment to increase the likelihood and scope of a race riot, because the unemployed have lower opportunity costs both of rioting and of being jailed.
Becker, however, cites a study that finds that the likelihood of a race riot in the United States is not correlated with black unemployment.
Residential segregation can be expected to increase the likelihood of rioting, because it produces a concentration of people having similar propensities.
The rioters don't have to assemble far from their homes in order to form a critical mass; were they dispersed, the need to agree on a time and place at which to assemble would reduce the likelihood of a spontaneous riot.
Poor information, which allows inflammatory rumors to spread, is still another plausible causal factor in riots; likewise youth, because young people have less aversion to risk and violence than mature people; and of course anger, which may be induced or aggravated by discrimination and inequality.
But of the economic differences between France and America that can be traced to our more open labor markets, probably the only one significant for the likelihood of riots is the much higher French unemployment rate, and even its significance is somewhat doubtful, in view of the lack of correlation between riot propensity and black unemployment in the U.S. history of race riots.
Several other differences between France and the United States may be as important as or more important than the difference in unemployment rates.
One is that the French appear to have a much greater propensity to riot, or to engage in other riot-like direct action, than the citizens of other countries.
French truckers and farmers are notorious for direct action, as in blocking roads, in order to enforce their demands.
In 2003, a plan to reduce civil servants' pensions provoked wildcat strikes by tens of thousands of civil servants.
Why the French have this propensity I don't know (it probably is not French economic policies, which are similar to those of most European countries), but it suggests a lower riot threshold than in the United States.
Another relevant consideration is that the French, like most Europeans, are much less welcoming to foreigners than Americans are.
This is one reason that we have not experienced and are unlikely to experience riots by Muslims, even though there are several million of them in the United States.
Direct comparison with France is difficult, however, because Muslims are a far higher proportion of the French population (roughly 10 percent, versus roughly 1 percent here).
There is little discrimination against American Muslims, in part because most of them are solidly middle class.
No doubt our free labor markets have enabled them to achieve middle class incomes.
But it is possible that even if the French had free labor markets, French insularity would result in discrimination.
After all, that was the U.S. experience with blacks: our race riots invariably occurred in northern states, in which blacks had the same legal access to jobs and education as whites but nevertheless were still being subjected to serious private discrimination in the prime riot era of the 1960s.
Another factor in the recent French riots may be the French refusal to engage in affirmative action.
The French are reluctant even to collect statistics on the number of people in France of various ethnicities, their incomes, and their unemployment rates.
No effort is made to encourage discrimination in favor of restive minorities (as distinct from women, who are beneficiaries of affirmative action in France) and as a result there are very few African-origin French in prominent positions in commerce, the media, or the government.
Affirmative action in the United States took off at approximately the same time as the 1967 and 1968 race riots, and is interpretable (so far as affirmative action for blacks is concerned) as a device for reducing black unemployment, creating opportunities for the ablest blacks to rise, promoting at least the appearance of racial equality, and in all these ways reducing the economic and emotional precipitants of race riots.
Of particular importance, affirmative action was used to greatly increase the fraction of police that are black, while the "community policing" movement improved relations between the police and the residents of black communities.
French police, traditionally brutal, have by all accounts very bad relations with the inhabitants of the Muslim slums.
The French riots are a reminder that affirmative action, although offensive to meritocratic principles, may have redeeming social value in particular historical circumstances.
I have been in Mexico only twice, in 1969 and in 2002, and the increased fear of crime in my second visit was palpable.
On my first visit, I felt as safe as in the United States; on my second, although I did not feel quite as constrained as Becker describes feeling on his recent visit (maybe the crime situation has worsened in the last four years), there were constant reminders of the crime menace (such as the metal detectors at the entrance to the Four Seasons Hotel in Mexico City), and my hosts insisted on driving my wife and me everywhere.
As Becker points out, despite the absence of good statistics there is no doubt that the Mexican crime rate is very high.
Moreover, an increase in a crime rate greatly understates the increase in the demand for crime that generates the increased rate.
The reason is that a rising crime rate induces defensive measures (alarm systems, private guards, gated communities, etc.) that, though they nominally reduce crime, actually just transform the costs of crime to crime victims into costs of crime avoidance by potential crime victims.
The increase in the Mexican crime rate is mysterious.
It is true that income inequality in Mexico has increased, in major part it seems because of liberal economic policies that, as in the United States, have increased the demand for skilled workers by subjecting employers to greater competition, both foreign and domestic.
And increased income inequality, even when it is not associated with an increase in poverty, increases the gains from crime by increasing the number and wealth of people worth robbing, kidnapping, and defrauding.
Yet it seems unlikely that the increase in income inequality in Mexico has been great enough to explain the increase in crime, especially since poverty in Mexico has declined dramatically since the mid-1990s, though the crime rate seems to have continued to grow--at least it has not declined.
Becker rightly stresses the corruption and incompetence of the Mexican police as an important factor in the crime rate.
This too is mysterious.
Mexico is not a poor country by international standards.
Its per capita GDP calculated on a purchasing power parity basis is slightly over $10,000, which places it in the upper third of the world's nations and ahead of such countries as Bulgaria, Romania, Turkey, and China, and within spitting distance of placid Costa Rica.
Furthermore, because Mexico has a large population, that per capita figure translates into a total Gross Domestic Product of more than $750 billion.
Mexico can afford an efficient police force in its capital!
In 2003 Rudolph Giuliani was hired by Mexico City to advise on how to reduce the crime rate.
He made 146 recommendations, based in part on his successful campaign to reduce crime in New York City when he had been mayor and in part on the obvious need to increase the salaries of Mexico City's police.
Mexico City's police chief announced that he had accepted all of Giuliani's recommendations.
But there was no implementation.
I find that baffling.
Although the underlying cause of Mexico's astronomical crime rate may be income inequality, it doesn't follow that reducing inequality is the most efficient way to reduce the crime rate.
It is a fallacy to think that the only sound solutions to social problems are those that remove the underlying causes of the problems.
If crime can be repressed by improved law enforcement at lower cost than by attacking inequality, then improving law enforcement is the superior strategy.
Efforts to reduce Mexican income inequality would probably either be totally ineffectual or stifle economic growth.
Certainly the reform of law enforcement is the place to start.
So the task for the think tanks is not to study the Mexican (or broader Latin American) crime problem as such; for the problem is obvious, and so is the solution (better law enforcement).
What they should study is why the Mexican government and other Latin American governments are incapable of implementing the obvious solution.
I knew Milton Friedman, but not well; and I am not competent to express an informed opinion on his major academic work, which was in macroeconomics.
The economists of his generation with whom I principally associated were George Stigler, Ronald Coase, and Aaron Director (Friedman's brother-in-law)--microeconomists who had a major impact on the law and economics movement.
I did, however, read a few of Friedman's essays.
Two in particular struck me around the time I came to Chicago.
One was his essay on the methodology of positive economics, in which he argued that the way to test a theory was not by assessing the realism of its assumptions, but by assessing the accuracy of its predictions.
Economics makes heavy use of unrealistic assumptions, primarily concerning rationality, and yet the predictions generated by models based on those assumptions are often accurate.
Where they are inaccurate, this is a spur to reexamining the assumptions and perhaps modifying them, as is occurring in such fields as finance, where assuming a more complex human psychology than finance theorists traditionally assumed has helped to explain anomalies (from a rational-choice perspective) in the behavior of financial markets.
The emphasis on predictions connects Friedman's essay to Karl Popper's philosophy of science, in which the scientific method is viewed as a matter of making bold hypotheses, confronting them with data, and ascribing tentative (always tentative) validity to the hypotheses that survive the confrontation.
Popper's methodology of fallibilism has strong affinities with Friedman's methodology.
Both are strongly empiricist.
Stigler in conversation merged these two closely related approaches, and I was very struck by the melded approach.
The other essay of Friedman's that struck me was an essay on taxation in which he argued, contrary to the conventional view at the time (though I gather the argument was not original with him), that there was no theoretical reason for supposing income taxes superior in point of efficient resource allocation to excise taxes.
An excise tax--say, a 10 percent tax on yachts--drives a wedge between cost and price and so deflects buyers to substitutes that may cost more to produce but look cheaper because they are not taxed at so high a rate.
(The effect is the same as monopoly pricing.) But Friedman argued that income taxes have the same effect, by driving a wedge between the cost of work and the wage (price) received by the worker, thus deflecting him to untaxed substitutes, such as leisure, or to jobs that generate untaxed benefits, including leisure in the case of teaching (for example), but also prestige, amenities, tax-favored fringe benefits, and job security.
This idea of the parity of excise and income taxes has wide-ranging implications for public policy, since the tendency (still) is to neglect the misallocative effects of income taxation--a neglect of which I think even Friedman was sometimes guilty, as I am about to argue.
Perhaps his most important general contribution to economic policy was the simple, but when he first propounded it largely ignored or rejected, point that people have a better sense of their interests than third parties, including government officials, do.
Friedman argued this point with reference to a host of issues, including the choice between a volunteer and a conscript army.
With conscription, government officials determine the most productive use of an individual: should he be a soldier, or a worker in an essential industry, or a student, and if a soldier should he be an infantryman, a medic, etc.? In a volunteer army, in contrast, the determination is made by the individual--he chooses whether to be a soldier or not, and (within limits) if he decides to be a soldier what branch, specialty, etc., to work in.
A volunteer army should provide a better matching of person to job than conscription, and in addition should create a more efficient balance between labor and capital inputs into military activity by pricing labor at its civilian opportunity costs.
But this is in general rather than in every case.
The smaller the armed forces and the less risk of death or serious injury in military service, the more efficient a volunteer army is relative to a conscript one.
These conditions are not satisfied in a general war in which a significant fraction of the young adult population is needed for the proper conduct of the war and the risk of death or serious injury is substantial--the situation in World War II.
For then the government's heavy demand for military labor, coupled with the high cost of military service to soldiers at significant risk, would drive the market wage rate for such service through the roof.
Very heavy taxes would be required to defray the expense of a volunteer army in these circumstances and those taxes would have misallocative effects that might well exceed the misallocative effects of conscription.
I mention this example because I find slightly off-putting what I sensed to be a dogmatic streak in Milton Friedman.
I think his belief in the superior efficiency of free markets to government as a means of resource allocation, though fruitful and largely correct, was embraced by him as an article of faith and not merely as a hypothesis.
I think he considered it almost a personal affront that the Scandinavian nations, particularly Sweden, could achieve and maintain very high levels of economic output despite very high rates of taxation, an enormous public sector, and extensive wealth redistribution resulting in much greater economic equality than in the United States.
I don't think his analytic apparatus could explain such an anomaly.
I also think that Friedman, again more as a matter of faith than of science, exaggerated the correlation between economic and political freedom.
A country can be highly productive though it has an authoritarian political system, as in China, or democratic and impoverished, as was true for the first half century or so of India's democracy and remains true to a considerable extent, since India remains extremely poor though it has a large and thriving middle class--an expanding island in the sea of misery.
What is true is that commercial values are in tension with aristocratic and militaristic values that support authoritarian government, and also that as people become economically independent they are less subservient, and so less willing to submit to control by politicians; and also that they become more concerned with the protection of property rights, which authoritarian government threatens.
But Friedman seemed to share Friedrich Hayek's extreme and inaccurate view that socialism of the sort that Britain embraced under the old Labour Party was incompatible with democracy, and I don't think that there is a good theoretical or empirical basis for that view.
  The Road to Serfdom   flunks the test of accuracy of prediction!
I imagine that without the element of faith that I have been stressing, Friedman might have lacked the moral courage to propound his libertarian views in the chilly intellectual and political climate in which he first advanced them.
So it should probably be reckoned on balance a good thing, though not to my personal taste.
His advocacy of school vouchers, the volunteer army (in the era in which he advocated it--which we are still in), and the negative income tax demonstrates the fruitfulness of his master microeconomic insight that, in general, people know better than government how to manage their lives.
But perhaps not always.
Increasing the federal minimum wage, currently $5.15 an hour, is a priority of the new Democratic Congress.
Democratic leaders want to raise it by 40 percent, to $7.25 an hour.
From an economic standpoint, even from an egalitarian standpoint, raising the minimum wage, especially by such a large amount (roughly 10 percent of the American workforce makes less than $7.25 an hour, which is double the percentage of the workforce that is paid the current minimum wage), would be a grave mistake.
As a matter of economic theory, increasing the price of an input into production, such as labor, has two effects: an increase in the price of the product, because the producer's costs have risen (provided the increased input cost affects his competitors as well) and a reduction in the demand for the input, both because the higher price of the product reduces demand for it and because substitute inputs will now be more attractive.
Any such substitution will be inefficient because it is motivated not by an increase in the real cost of labor but by a government-mandated increase in the price of the input, which has the same misallocative effect as monopoly pricing.
If the input is labor, forcing employers to pay employees an above-market wage will result in (1) higher prices for the goods or services produced by the employers, which will have the same effect as a tax on the consumers of those goods or services, (2) higher wages for those minimum-wage employees whose employers decide to retain them and pay the mandated new wage, and (3) less employment of marginal workers, that is, of workers paid less than the imposed minimum.
Any interference with the market-determined wage level is prima facie inefficient; and to the extent that marginal workers are poorer than workers unaffected by a minimum wage, and the consumers of goods and services produced by employers of marginal workers are also below average in income, a minimum-wage law is inegalitarian as well as inefficient.
Its effect on income equality, however, depends not only on the relative incomes of the groups affected by the law but also on the balance between the effect on employment and the effect on the wages of those who are retained.
The lower the percentage drop in employment relative to the size of the minimum wage, the less likely the net effect of the minimum-wage law will be to make marginal workers worse off.
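The tradeoff just described--a wage gain for retained workers against job losses for others--can be sketched with a back-of-the-envelope calculation. The demand-elasticity figures below are purely hypothetical, chosen only to show how the net effect on marginal workers' aggregate earnings turns on that parameter:

```python
def earnings_change(wage_increase, demand_elasticity):
    """Approximate proportional change in the aggregate earnings of
    minimum-wage workers after a mandated wage increase.

    wage_increase: proportional rise in the wage (0.40 for the proposed
        jump from $5.15 to $7.25, roughly a 40 percent increase).
    demand_elasticity: elasticity of demand for minimum-wage labor
        (a hypothetical negative number).
    """
    employment_change = demand_elasticity * wage_increase
    # New aggregate earnings = (new wage) x (new employment), relative to old.
    return (1 + wage_increase) * (1 + employment_change) - 1

# With a modest elasticity of -0.2, aggregate earnings rise despite layoffs:
print(round(earnings_change(0.40, -0.2), 3))   # 0.288, i.e. +28.8 percent
# With unit elasticity, the disemployment effect dominates:
print(round(earnings_change(0.40, -1.0), 3))   # -0.16, i.e. -16 percent
```

Even in the first case, of course, the aggregate gain conceals the concentrated losses of the workers who are laid off; the calculation says nothing about the distribution of the gains and losses.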
Some economists, notably David Card and Alan Krueger, deny that the minimum wage has any disemployment effect.
See their book   Myth and Measurement: The New Economics of the Minimum Wage   (1995), but their work has been heavily criticized.
See, e.g., David Neumark & William Wascher, "Minimum Wages and Employment: A Case Study of the Fast Food Industry in New Jersey and Pennsylvania: Comment," 90   Am. Econ. Rev.   1362 (2000), and Richard V. Burkhauser, Kenneth A. Couch & David C. Wittenburg, "A Reassessment of the New Economics of the Minimum Wage Literature with Monthly Data from the Current Population Survey," 18   J. Lab. Econ.   653 (2000).
It is unlikely that a 40 percent increase in the minimum wage would have no effect on employment.
Although working full time at $5.15 an hour yields an annual income (slightly more than $10,000) barely above the poverty line, most minimum-wage workers are part time, and for the majority of them their minimum-wage employment supplements an income derived from other sources.
Examples of such workers are retirees living on social security or private pensions who want to get out of the house part of the day and earn some pin money, stay-at-home spouses who want to supplement their full-time spouse's earnings, teenagers working after school, and other students.
An increase in the minimum wage--depending critically of course on how great the increase is--will provide a windfall to some minimum-wage workers, many of whom are not poor, and disemploy some others, also not poor.
The effect on wage equality is likely to be slight, but consumer prices will be higher (which may reduce overall equality) and the efficiency with which goods and services are produced by low-wage workers will be reduced.
As a means of raising people from poverty or near poverty, the minimum wage is distinctly inferior to the Earned Income Tax Credit, which compensates for low wages without interfering with the labor market.
EITC is of course not devoid of allocative effect, because like any other government spending it is defrayed out of taxes; but it is probably a less inefficient tax than the minimum wage.
And it is a more efficient device for spreading the wealth, since many, perhaps most, minimum-wage workers are not poor.
So why are the Democrats pushing to increase the minimum wage rather than to make EITC more generous? Three reasons can be conjectured.
First, unions, which are an important part of the Democratic Party's coalition, favor the minimum wage because it reduces competition from low-wage workers and thus enhances the unions' bargaining power and so their appeal to workers.
This would not be as serious a problem for unions if minimum-wage workers were organized.
But the fact that most minimum-wage workers are part time makes them uninterested in joining unions.
Second, increasing the EITC would mean an increase in government spending and hence in pressure to increase taxes, and the Democrats wish to avoid being labeled tax-and-spend liberals.
And third, genuinely poor people vote little.
The number of nonpoor who would be benefited by an increase in the minimum wage, when combined with the number of nonpoor workers whose incomes will rise as a result of reducing competition from minimum-wage workers, probably exceeds the number of nonpoor who will be laid off as a result of an increase in the minimum wage.
Teenagers, moreover, will be among the groups hardest hit, and most of them do not vote.
Critics of the American electoral process have long complained that the process was poisoned as the combined result of (1) gerrymandering, (2) inadequate limitations on donations to political campaigns, (3) barriers to third parties, (4) barriers to voting (such as registration requirements and conducting elections on workdays rather than weekends or holidays), (5) public ignorance of policy issues and the consequent ability of political advisers, consultants, media specialists, pollsters, etc., to manipulate the public's voting behavior, and (6) mistake-prone voting equipment, such as the notorious punchcards that cast a shadow over the 2000 presidential election in Florida.
But the outcome of Tuesday's midterm election suggests that these problems are less serious than the critics believe.
A   Newsweek   poll taken days after the election reported that 51 percent of those polled thought the Democrats' election victory a good thing; the election gave the Democrats approximately 51 percent of the seats in both the House of Representatives and the Senate (counting the two independent Senators as Democrats, though technically they are independents).
Gerrymandering poses the issue of the democratic legitimacy of our electoral system in its starkest form, as the avowed purpose is to reduce the number of legislators elected by the party not in control of the gerrymandering process.
But this, it turns out, is easier said than done.
For one thing, there is an inherent tension between incumbents and challengers in the same party running in different districts.
An incumbent wants his district so configured that it will be dominated by members of his own party.
But challengers belonging to the same party do not want the incumbents' districts to be packed with members of the party because that reduces the number of party members in the challengers' districts.
So if incumbents prevail in districting, and in the next election the electorate proves to be hostile to incumbents, the gerrymander may boomerang, because challengers from the same party to incumbents of the other party will have fewer supporters in their districts.
The idea that without strict limitations on campaign finance the wealthy will dominate campaigns and tug the nation rightward also turns out to be questionable.
There are many wealthy liberals and the Internet has made it much easier than it used to be to obtain modest campaign donations from the nonwealthy.
Moreover, the effect of political advertising (which is where most campaign donations go) is diluted by the fact that voters are exposed to vast amounts of information and opinion that derive from sources other than advertising--not only the mainstream media but, increasingly, blogs and other informal media.
As usual, Republicans outspent Democrats in this election, but they were badly beaten anyway.
The states are permitted by the courts to establish barriers to third parties, mainly by requiring that a party have a large number of signatures from registered voters in order to get a place on the ballot.
Yet despite this requirement, third parties are on the ballot in many states. The fact that they usually obtain only a handful of votes (though sometimes they play a spoiler role, as the Reform Party did in both the 1992 and 2000 elections--and in last week's election Joseph Lieberman was reelected to the Senate, running as an independent) is due more to the inherent difficulty that third parties face in a presidential (as distinct from parliamentary) political system (for example, because a third party is so unlikely to produce a president, it has difficulty attracting ambitious candidates) than to the barrier to entry that ballot-access rules create.
Although voter turnout is lower in the United States than in other countries, the consequences again are slight because those persons who are eligible to vote but don't bother to do so tend to have the same political opinions as those who do vote.
Nor is it obviously wrong as a matter of democratic theory to discourage from voting those people whose interest in the political process is so attenuated that they are unwilling to incur the modest inconvenience that the American system imposes on would-be voters.
The poor voting equipment in many precincts throughout the country undoubtedly disfranchises a number of voters; but as with other barriers to voting this one affects outcomes only if the people disfranchised have systematically different political preferences from others.
This is rare although it may have happened in Florida in the 2000 election.
It wasn't a factor in the recent election.
Finally, although surveys reveal that most Americans are indeed political ignoramuses, even the significance of this fact for the healthy functioning of the democratic process can be doubted.
Issues of public policy, especially at the federal level, and issues of the competence and leadership qualities of officials at that level, are so difficult for outsiders to government to assess that it is unrealistic to think that the electorate could become well informed--unless the American population reallocated a substantial amount of its time from work, family, and cultural and other leisure activities to the study of politics and policy.
That might not be an efficient reallocation of time, especially if its principal product was confusion.
If the electorate can be expected to focus only on highly salient issues of policy and leadership, it may not need to be well informed.
Maybe all it needs to know is that things are going badly or well and that the party in power bears some responsibility for the situation.
Expert commentators on the recent election results, regardless of their politics, are virtually unanimous in the view that the Republicans deserved the severe rebuke that they received from the electorate.
These experts may be right or wrong--a question on which I would be uncomfortable, being a judge, to offer an  opinion.
What is pertinent to the present discussion is only that if the franchise were confined to experts, the results of the election would have been the same or very similar.
If the electorate comes to the same conclusion as the experts, the implication is that democracy can work quite well even when the electorate lacks expertise.
An article in the business section of the   New York Times   last Sunday (November 11) by economist Austan Goolsbee summarized an academic paper by an MIT economist named Michael Greenstone that uses Iraqi government bond prices to estimate the bond market's response to the Bush Administration's "surge." Greenstone's paper, available from the Social Science Research Network, is dated September 18, which is two months ago, and perhaps developments since would alter his conclusions.
To estimate default risk from the bond's current trading price, and in particular to estimate the effect of the surge on the default risk, is not straightforward.
For example, Greenstone adjusts for the probability that a Democrat will be elected President next year (regarded as increasing the likelihood that Iraq will default on the bonds); that probability may have changed since September 18.
The worldwide credit crunch has worsened since then, and that might have an effect, independent of the surge, on the price of the Iraqi bonds.
The bonds ($3 billion worth issued in January 2006), which mature in 2028 and until then pay 2.9 percent on their face value twice a year, so almost 6 percent per annum, are trading at a steep discount (currently about a 40 percent discount, which jacks up the yield to almost 10 percent--almost $6 for a bond that costs $60).
This means that purchasers of the bonds (which are actively traded) are demanding compensation for bearing a substantial risk of default.
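The arithmetic behind those yield figures can be checked with a simple current-yield calculation. This is just the coupon-over-price figure used above; it ignores yield to maturity and the pull to par as 2028 approaches:

```python
def current_yield(semiannual_coupon_rate, discount, face=100.0):
    """Current yield of a bond trading below face value.

    semiannual_coupon_rate: coupon paid twice a year as a fraction of
        face value (0.029 for the Iraqi bonds).
    discount: fractional discount from face value (0.40 for a bond
        trading at 60 percent of face).
    """
    annual_coupon = 2 * semiannual_coupon_rate * face   # $5.80 per $100 face
    price = face * (1 - discount)                       # $60 per $100 face
    return annual_coupon / price

# 5.8 / 60 is a bit under 10 percent, matching the figures in the text:
print(round(current_yield(0.029, 0.40), 4))   # 0.0967
```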
The most interesting conclusion in Greenstone's study is that, after correction for other factors, the surge is correlated with a 40 percent increase in the bond market's estimate of default.
It seems unlikely that the surge itself would increase the risk of default, though it might, by enabling both the Sunnis and the Shiites to rest and augment their forces for the eventual showdown, taking advantage of a kind of truce imposed by the additional American troops.
More likely, the bond traders see the surge as a desperate last gamble by the United States; as a prelude to U.S.
withdrawal and specifically as a sign that the United States will withdraw soon after the next Presidential election, whoever wins the election; as a political gimmick; and as a failure in the aim of the surge of promoting progress toward a political settlement that will enable Iraq to be a functioning nation when we leave.
If, as the bond traders fear, Iraq is likely to be divided well before 2028 into three separate nations (Kurdish, Sunni, and Shiite), a default is likely.
There are two general questions that Greenstone's interesting study raises.
The first is the relation between default risk and U.S.
failure.
For comparison, consider another recent study, by Kim Oosterlinck and Marc Weidenmier, this one of the price of Confederate bonds in the Amsterdam market during the American Civil War.
Initially the bonds traded at a discount indicating that the Confederacy had a 42 percent chance of winning the war and therefore presumably of repaying the bonds; but with the crushing twin defeats of the Confederacy at Gettysburg and Vicksburg in the summer of 1863, the bond market quickly downgraded the Confederacy's chances of winning the war to only 15 percent, and the estimate kept falling till the end of the war.
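The implied-probability reasoning in a study of this kind can be illustrated with a deliberately crude risk-neutral model: if a bond pays its full value only when the issuer survives (wins the war) and nothing otherwise, and discounting is ignored, the price as a fraction of full repayment value is itself the market's survival probability. The sketch below simply inverts that relationship; the actual studies control for recovery values, interest rates, and much else:

```python
def implied_survival_probability(price, value_if_survives):
    """Crude risk-neutral estimate.

    Assumes price = p * value_if_survives + (1 - p) * 0, so
    p = price / value_if_survives. Ignores recovery and discounting.
    """
    return price / value_if_survives

# A Confederate bond priced at 42 when full repayment would be worth 100
# implies a 42 percent chance of Confederate victory:
print(implied_survival_probability(42.0, 100.0))   # 0.42

# After Gettysburg and Vicksburg, a price of 15 implies a 15 percent chance:
print(implied_survival_probability(15.0, 100.0))   # 0.15
```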
The difference between that case (and other examples of using bond prices to predict the course of a war) and the Iraq case is that the risk of the Confederacy's defaulting on its bonds depended essentially on just one event, namely whether the Confederacy--the issuer of the bonds--lost the war.
In the case of Iraq, the relation of the risk of default to the outcome of the war is obscure.
No one thinks that the United States can actually be defeated by Sunni insurgents, al Qaeda in Mesopotamia, Shiite militias, Iranian infiltrators, or any other armed groups in Iraq or the surrounding areas.
At the same time, few believe that the United States can win the war in the sense of eliminating widespread violence and coupling withdrawal with handing over control of the country to a functioning government, as the United States was able to do several years after the end of World War II in Germany and Japan.
Moreover, if we wanted to avoid a default, we could do so simply by buying up the bonds (at the current discount, they would cost only $1.8 billion, provided they were bought up surreptitiously so as not to force up the price significantly) and then forgiving the debt.
And finally, one can imagine a scenario in which American policy is an utter failure, but the utter failure actually reduces default risk.
Suppose we pull out of Iraq and Iran takes over.
Iran might decide to pay off the bonds in full (the amount of money is small) in order to increase its credit standing.
So really the only (though major) significance of Greenstone's bond market study, so far as our situation in Iraq is concerned, is that it is evidence that the surge, while it has reduced the number of deaths in Iraq, has not increased the viability of the Iraqi state, but instead has revealed (and possibly even contributed to the prospect) that the attainment of viability is increasingly unlikely.
The second general question raised by Greenstone's paper is whether financial markets are better predictors of the outcome of wars and other political crises than experts are, including the experts who staff intelligence agencies.
One might think that experts would be better predictors because they had specialized knowledge that bond traders would lack and that experts who work for intelligence services would have not only expert knowledge but knowledge that bond traders could not obtain by consulting experts, because it would be classified knowledge.
A careful historical study (and access to classified information, at least for recent crises) would be required to answer the question which predictor is better.
But it would not be surprising if the financial markets turned out to do better than the experts, including national security personnel.
Financial markets aggregate the opinions of a vast number of investors, and those investors who at least think that they have real insight tend to be the ones who determine the prices in those markets.
Friedrich Hayek's great legacy to economics was to show that the price system can aggregate vast amounts of information much more efficiently than a centralized bureaucracy can do.
And intelligence agencies are centralized bureaucracies.
The innumerable mistakes that the United States has made in Iraq suggest that our government does not have good means of obtaining and evaluating information concerning that country, possibly because of a combination of bureaucratic inefficiency and the vastness of the quantity of relevant data.
The people who trade Iraq government bonds do so not because they are told to study Iraq or paid a salary to do so or have an academic or journalistic interest in the country, but because they hope to make money.
Presumably therefore they are self-selected for knowing a lot about Iraq--and for thinking they know enough to put their money where their mouth is.
They may be right.
There were a number of interesting comments.
I cannot reply to all of them, but I will reply to a few.
One comment was that "wealthy private donors often take an interest in the results they are purchasing with their donations.
Few would argue that government is equally demanding with its funds.
Thus, an increase in government spending to replace charitable donations seems counter productive." I do not agree.
The reason is that charitable foundations are perpetual, and are controlled by self-perpetuating boards of trustees.
As a result, the original donors do not control a foundation.
There is thus less monitoring of foundations than there is of government programs.
Another comment questions how eliminating charitable deductions from the estate tax would generate much tax revenue.
The commenter argues that people will simply give whatever money they had otherwise intended for charity while they are still alive, since they don't need tax deductions to do so. This might be largely true if there were no gift tax (it would not be completely true because, not knowing when they will die or what their future expenses will be, people are reluctant to give away almost all their money before they die).
There is and must be a gift tax to back up the estate tax.
The same commenter said that I am "radically underestimating the incentive effects of giving to one's offspring.
Assume you can close all the 'loopholes' permanently and it is impossible to give one's wealth to one‚Äôs children (what a hideous thought) after death or while alive, then I assure you an awful lot of our most prolific job creators would either retire or stop being workaholics and instead take time out to smell the roses." This comment conflates closing loopholes with confiscatory tax rates.
Even if there were no deductions, an estate tax of less than 100 percent would allow people who accumulated an estate to pass some of the money in the estate to their children or others, including charities.
Another commenter makes the following good point in defense of the estate tax: "Most of the wealth of these billionaires is not from income and has not been taxed, e.g.
Balmer has not sold and rebought his Microsoft stock on which to pay capital gains, but has held it and his wealth is from appreciation that has not been taxed.
So the estate taxes, for the most part, are taxes on money as it passed to heirs that has not yet been taxed." When the heirs sell the stock, moreover, they pay income tax only on the difference between the value of the stock at the time of the donor's death and its current value, so that the appreciation in stock value during the donor's life is never taxed.
Finally, one unrelated and very strange comment: "I take great offense at your recent statements that all Muslims in America should be under surveillance." Neither recently nor ever have I made such a suggestion.
Becker presents persuasive evidence that the amount of tax evasion varies, as one would expect in a rational-choice model of taxpaying, with variance in the private costs and private benefits of evasion.
I am inclined to believe that the private costs are higher than he suggests, which if true would mean that more tax compliance can be attributed to rational fear of punishment than he suggests and less to taxpayers' feeling a moral duty to pay taxes.
For example, the civil penalties for tax evasion are quite severe (the fraud penalty is 100 percent of the amount of taxes evaded), and anyone charged with civil or criminal tax evasion will incur heavy legal and accounting expenses in defending against the charge.
Although the audit rate is low, it is not random, but rather is higher for those taxpayers who are in the best position to evade taxes without being caught or whose tax returns raise a red flag because of unusually high deductions or other suspicious circumstances.
And once one has been caught evading taxes, one can expect the rate of future audits of one's returns to be high.
While it is true that underpayment of taxes is rarely prosecuted criminally, even when deliberate, criminal prosecution is likely if the tax evader takes steps to conceal the evasion, as by never filing a tax return, keeping phony books, or forging evidence of deductions.
Moreover, the government does occasionally prosecute even small fry.
Thus far I have focused only on punishment costs.
But a neglected point in the economics of crime is the information costs of committing a crime.
Evading taxes requires more knowledge than stealing a bike.
Most taxpayers probably don't have a clue as to how to evade taxes without being caught.
It might seem awfully simple--just list your cat as one of your dependents.
But to know whether this would work, you would have to know whether the government has any independent source of information about the number of a person's dependents.
You can't just go to a lawyer and ask him what the best way of evading taxes is.
Most people comply with most laws most of the time.
I believe that in most cases they do this not because they feel any moral duty to comply with law, but because the potential payoff does not seem to exceed the costs, including the information costs that I have emphasized.
The reason I doubt that there is much of a felt moral duty to comply with tax law is that there is a vast amount of illegal behavior by normally law-abiding citizens.
The flouting of the traffic laws, the theft of employer property, the nonpayment of social security taxes on household help, illegal gambling, and the employment (both personal and commercial) of illegal immigrants are only the most obvious examples.
These are cases in which law enforcement is so lax that the expected punishment costs for most violations hover close to zero, and there are distinct benefits from violation.
Still, Becker is unquestionably correct that there is a good deal of tax evasion apart from the social security example.
It could be greatly reduced by stiffer penalties and a greater investment of resources in law enforcement.
Every dollar spent by the Internal Revenue Service on enforcement brings in several dollars in additional tax revenue, suggesting that an expansion in the IRS's budget would be necessary to equate the marginal benefits of tax enforcement to its marginal costs.
But this suggestion ignores the fact that the benefits are, as a first approximation, merely income transfers, whereas the marginal costs of tax enforcement are social costs.
If taxes are evaded, the resulting shortfall in tax revenues is made up by increasing the tax rate, and there is no social loss unless the increase has worse misallocative effects than the evaded taxes would have had, had they not been evaded.
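The point that recovered revenue is largely a transfer while enforcement spending is a real social cost can be made concrete. In the hypothetical sketch below, the only social benefit counted for a marginal enforcement dollar is the deadweight loss avoided because tax rates need not rise to replace the evaded revenue; the excess-burden parameter is invented purely for illustration:

```python
def net_social_gain(enforcement_cost, revenue_recovered, deadweight_per_dollar):
    """Social (not Treasury) return on marginal enforcement spending.

    The recovered revenue itself is a transfer, so it is not counted;
    only the avoided deadweight loss of higher tax rates is a benefit.
    deadweight_per_dollar: hypothetical marginal excess burden per dollar
        of revenue that would otherwise be raised by higher rates.
    """
    return revenue_recovered * deadweight_per_dollar - enforcement_cost

# A 3:1 revenue return on enforcement can still be a social loss...
print(round(net_social_gain(1.0, 3.0, 0.2), 2))   # -0.4
# ...unless the excess burden of the alternative (higher rates) is large:
print(round(net_social_gain(1.0, 3.0, 0.5), 2))   # 0.5
```

On this accounting, the Treasury's 3:1 return is not by itself evidence that more enforcement would be socially worthwhile.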
One reason, therefore, that tax evasion is widespread is that it may be cheaper from an overall social standpoint to have slightly higher tax rates than to devote additional resources to law enforcement, though the first-best solution might be stiffer penalties, especially monetary penalties.
Deliberately lax enforcement would then explain the amount of evasion.
The general question that Becker raises of the moral costs of committing crime is a fascinating one.
I would be inclined to search as hard as possible for nonmoral costs before concluding that morality is a major motivator of behavior, especially with regard to crimes, like tax evasion, that do not have an identifiable victim.
In the case of many crimes, the benefits to most people of perpetrating them would be so slight (and often zero or even negative) that sanctions play only a small role in bringing about compliance; enforcement costs needn't be high in order to deter when nonenforcement benefits are low.
Some examples: the demand for crack cocaine among white people (including cocaine addicts) appears to be very small.
Both altruism and fear deter most people from attempting crimes of violence, quite apart from expected punishment costs.
The vast majority of men do not have a sexual interest in prepubescent children.
Well-to-do people often have excellent substitutes for crime: any person of means can procure legal substitutes for illegal drugs (for example, Prozac for cocaine, Valium for heroin).
Fear of injury deters most people from driving recklessly or while drunk.
People who have no taxable income are incapable of evading income tax.
People who do have taxable income can obtain benefits from evading it, but the costs of evasion are, as I have emphasized, nonnegligible, so there is widespread compliance along with a good deal of evasion.
I would therefore expect differences across countries in tax evasion to be related more to differences in penalties, collection methods, and so forth than to differences in morality.
Americans may exhibit higher tax compliance than Italians, but Americans are not a more moral people than Italians.
This is not to deny the independent behavioral effect of social (including moral) norms, but on reflection one can see that these norms are enforced, even if not by law.
When the cost of compliance with a norm is low, as in the case of picking up after one's dog, dirty looks alone may impose a cost greater than that of compliance.
But taxes are not paid in public, so shaming is not a feasible alternative penalty to the legal sanctions for tax evasion.
The Forbes and Financial Times articles to which Becker refers present an astonishing portrait of immense personal fortunes.
I shall limit my comment to the Americans on the list.
Becker is correct to note that the very largest fortunes are made rather than inherited.
The reason probably is that most of them are the result of recent advances in digital technology and the increased globalization of financial and other markets.
At a guess (because I don't recognize all the names in the Forbes list), more than half of the 20 largest American fortunes are due to those recent developments and so could not be the product of inheritance.
As one moves down the Forbes list, inherited wealth appears to account for an increasing number of the American fortunes.
The American fortunes are overwhelmingly due to lawful entrepreneurial efforts rather than to politics or illegality.
It is true that Microsoft lost a major antitrust case (and a number of minor ones), owing to its attempt (or so the courts found) to smother Netscape.
But it is highly unlikely that Microsoft's campaign against Netscape accounts for any of the Gates, Ballmer, or Allen fortunes, as in retrospect it is apparent that Netscape lacked the business acumen to mount a successful challenge to Microsoft's dominance of the personal-computer operating-system market, as Microsoft feared.
I also agree with Becker that the benefits to consumers from the entrepreneurial efforts that produced Microsoft, Google, Apple, eBay, Amazon.com, Wal-Mart, private-equity firms, hedge funds, and other commercial successes that have generated large personal fortunes are much larger than the personal fortunes garnered by the founders and principals of such companies.
It does not follow, however, that these billionaires "deserve" their fortunes and therefore should be as lightly taxed as they are.
As the economist Sherwin Rosen showed in a famous article, in certain circumstances a very small difference in ability can translate into an enormous difference in reward.
The key is the reproducibility of a product or service or innovation.
If one pianist is slightly better than any other, his recordings may capture the entire market for recordings of the kind of pieces he plays best because the consumer has no reason to buy his rivals' slightly inferior recordings, provided prices are comparable.
As transportation costs and tariff barriers fall and foreign countries become richer, the markets for the best American products expand, increasing the profit potential for producers with the lowest quality-adjusted costs.
The greater output of the superior producer confers real value, but there is only a loose relation between that value and the reward to the producer.
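Rosen's winner-take-all mechanism can be sketched in a few lines of code. This is my own toy illustration, not Rosen's formal model: when a product is costlessly reproducible and prices are comparable, every consumer buys from whichever producer is best, so a 1 percent quality edge can capture 100 percent of the sales.

```python
# Toy sketch of the "superstar" mechanism (illustrative assumption:
# consumers observe quality perfectly and prices are identical).
def market_shares(qualities, consumers=1_000_000):
    """Each consumer buys from the highest-quality producer; ties split evenly."""
    best = max(qualities)
    winners = [i for i, q in enumerate(qualities) if q == best]
    # Producers below the top quality sell nothing, however close they come.
    return [consumers // len(winners) if i in winners else 0
            for i in range(len(qualities))]

# Producer 0 is only 1% better than producer 1, yet takes every sale.
shares = market_shares([1.01, 1.00])
print(shares)  # the slightly better producer captures the whole market
```

The point of the sketch is that the ratio of rewards (everything to nothing) bears no resemblance to the ratio of abilities (1.01 to 1.00), which is the "loose relation" between value and reward noted above.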
Bill Gates is extremely able, but not a thousand times abler than pikers worth a mere $50 million.
But of course we must not kill the goose that lays the golden eggs, through a level of taxation that discourages entrepreneurship, a risky activity.
We might want to clip the goose's wings if we thought that huge fortunes were politically destabilizing, but this is not a danger in the United States, however much the left rails against Richard Scaife and the right against George Soros.
There may well be a legitimate concern with the influence of campaign contributions on public policy (as illustrated by the opposition of New York's Democratic establishment to taxing hedge funds more heavily), but that concern argues for placing limits on contributions rather than on huge fortunes.
Yet even without thinking these fortunes dangerous, or the product of anything more sinister than skill and luck, we might, as Becker suggests, see in them an attractive source of tax revenues.
The ideal tax is a tax that produces large revenues but has minimal allocative effects.
A uniform head tax, avoidable only by emigration, would have minimal effects on people's behavior but would generate only modest revenues, because if genuinely uniform the tax would have to be set at a level that the poorest person could pay.
A highly progressive income tax, without loopholes, would produce a great deal of revenue but probably would generate significant misallocative effects by causing people to substitute leisure for work and riskless jobs and investments for risky ones.
In these respects the estate tax is somewhere in between the head tax and the highly progressive income tax.
Death cannot be averted, and in that respect an estate tax resembles a head tax.
But the potential revenues are much greater, especially in an era of large fortunes.
Adding up the fortunes listed in the Forbes article for just the 10 wealthiest Americans yields a total of almost $600 billion.
The estate tax has as many holes as a very large Swiss cheese, but they could be closed.
There are two objections, however, to a stiff estate tax on large fortunes.
The first is that it would encourage the wealthy to spend rather than invest, in order to reduce their taxable estates.
But this is a more serious concern for the taxation of modest or even large estates than of immense ones, simply because of the limits of personal consumption.
How much of a $3 billion annual income can a person spend on consumption rather than investment? The estate tax is likely to have a significantly smaller misallocative effect than an income tax that would produce the same revenue.
The second concern with stiffening the estate tax is that it will reduce gifts to charity.
It will, because one of the biggest loopholes in the estate tax is the charitable deduction, though, as Becker points out, some very wealthy people, such as Andrew Carnegie and John D. Rockefeller, made large charitable donations before there was an estate tax (first introduced in 1916).
To the extent that charitable expenditures substitute for government expenditures in areas such as education and medical research (not, however, religion--the largest beneficiary of charitable expenditures--because government is not permitted to subsidize religion), a reduction in charitable giving is tantamount to a reduction in tax revenues. But the reduction cannot be dollar for dollar--otherwise there would be no incentive to make charitable gifts to any activity that government also funds.
The reduction in charitable giving from repealing the charitable exemption might not be great, moreover, because if one person reduces his contribution to a charity, this increases the incremental effect of another person's contribution.
I worry, too, about charitable gifts overseas on the scale of the Gates Foundation; my post on January 1 of this year questioned the appropriateness of compelling U.S. taxpayers to fund (through the charitable deduction from income tax as well as from estate tax) contributions to foreign nations or their populations.
So although a stiffer estate tax on large fortunes (which would not require an increase in the tax rate but merely a closing of loopholes) would probably impose some cost in loss of charitable donations, which could in turn increase demand for public spending, I believe the revenue potential of such a tax would offset the costs.
The tax increase could be made revenue-neutral, enabling a less efficient tax, such as the personal or corporate income tax, to be reduced.
Here is a puzzle.
With the recent deterioration of airline service, airlines' posted flight schedules have become uninformative.
On time is defined as within 15 minutes of scheduled arrival, and a large percentage of flights are not "on time" even as so generously construed.
Flights often are delayed by hours, and sometimes canceled, in which event the delay is the interval between the scheduled arrival of the canceled flight and the arrival of the later flight that one is booked on.
A truthful airline schedule would list the mean length of a flight on a given route together with some indication of variance--perhaps the standard deviation, or what the media call the "margin of error," which is two standard deviations, from the mean.
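Such a schedule entry is easy to compute. A minimal sketch, with invented flight times (I have no actual airline data at hand), showing the mean, a two-standard-deviation band, and the share of flights "on time" under the 15-minute convention:

```python
import statistics

# Hypothetical gate-to-gate times in minutes for one route
# (invented numbers, for illustration only).
times = [112, 118, 125, 131, 140, 156, 119, 122, 178, 115, 127, 133]

mean = statistics.mean(times)
sd = statistics.stdev(times)  # sample standard deviation

# A "truthful" schedule entry: mean flight time plus a
# two-standard-deviation band, as described above.
print(f"scheduled (mean) time: {mean:.0f} min")
print(f"band (2 SD): +/- {2 * sd:.0f} min")

# Share of flights "on time" (within 15 minutes), measured against
# the honest mean rather than a padded schedule.
on_time = sum(t <= mean + 15 for t in times) / len(times)
print(f"on-time share vs. mean schedule: {on_time:.0%}")
```

Padding the schedule, by contrast, simply raises the benchmark so that the same distribution of actual flight times produces a better-looking "on time" percentage while conveying less information.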
So why don't airlines publish accurate, informative schedules? Instead they have surreptitiously adjusted their schedules to enlarge scheduled flight times slightly in order to reduce measured delay.
Some airlines have better on-time arrival records than others.
Why don't they publish accurate schedules, or at least advertise their better on-time record? The problem, it might seem, is that such disclosures have a two-edged quality.
They draw the consumer's attention to the seller's own faults as well as to his competitors' faults.
The disclosures say "I'm bad, but he's worse." But this is not an adequate explanation.
Airline travelers know that airline schedules are grossly inaccurate.
All they would learn from truthful comparative advertising would be which airlines' schedules are least inaccurate, and one might think that that would be both valuable information to consumers and effective advertising for the better airlines.
At the same time that sellers forgo much product disclosure that would seem advantageous both to them and to their customers, they make disclosures that have no information value and should not persuade any rational consumer, such as implausible, self-serving, and empty claims that their product is better, or super. These claims are often wrapped in clever, funny pictures or anecdotes that are designed to seize the attention of the viewer but that convey no information.
The purpose of the empty claims is easier to understand than the dearth of the type of negative comparative advertising that one might expect the better airlines to publish.
The informationally empty claims convey to the reader the name of a product (so they are not really completely empty) and the sense that it must be a dependable product in some sense to be the subject of such classy advertising.
It may be oversubtle to suggest, as some economists have, that media advertising signals a commitment to quality because if consumers are disappointed with the product the heavy investment that the seller made in advertising will be wiped out.
It is enough that the glossy ads convey that this is a product that consumers ought to have in mind the next time they are shopping for that class of products, so that when they are scanning a shelf in a grocery store or drugstore or other retail outlet they will recall the brand name and give it a careful look before passing on to the next brand on the shelf.
The brand name incidentally serves the important function of providing an assurance of at least approximately uniform quality, since that is required to enable the seller to retain trademark protection and thus prevent other sellers from selling their products under the same brand name.
But the reluctance of sellers to engage in the type of comparative advertising that would reveal shortcomings in the advertiser's product remains mysterious, since, as in the airline example, these shortcomings are usually quite well known to the persons at whom the advertising would be aimed.
Automobile manufacturers were reluctant to install, let alone advertise, safety features such as seatbelts until the government required the installation of them and partly as a consequence of this consumers became safety conscious.
But people always knew that automobile accidents were frequent and that tens of thousands of Americans were killed every year in such accidents.
Some cars were safer than others; why didn't the manufacturers of those cars advertise their safety features? Why were auto manufacturers reluctant to open a new front, namely that of relative safety, in their competitive war with each other?
Evidently people have an aversion to being reminded of bad things that they know.
To know something does not require that one be thinking about it.
Everyone knows that he is going to die some day, and that that day may come very soon, but we do not like to dwell on such things.
Advertisements for life insurance intimate mortality, of course, but very obliquely.
They do not say: you had better buy our insurance today because tomorrow you could be dead from an aneurysm, a terrorist attack, or a broken neck from slipping on a banana peel.
If you fly, you know about and dread long delays, but you do not want to be told: "our planes have never been stuck on runways because of thunderstorms for more than four hours, but X Airlines' planes have been stuck for as long as eight hours, with overflowing toilets, etc."
Then too there may be a sense that the part of such dismal comparative advertising that unavoidably disparages one's own product will be thought more credible than the part that disparages one's competitors' products.
The former is in the nature of a confession, the latter of an accusation; and confessions are highly credible.
The reader will know that all airlines experience delays, but if only one--the advertising airline--admits to this, it becomes associated in the reader's mind with delay.
The idea that products can pick up unwanted associations that are harmful, though it might seem from a strictly rational standpoint that they should not be, underlies the legal concept of trademark "dilution." Suppose you sell roasted chestnuts on a street corner in Manhattan and you call your stand "Rolls Royce." No one will think that the manufacturer of Rolls Royce is also engaged in the retail sale of nuts.
So there would be no "passing off," in trademark jargon.
But Rolls Royce could sue you for trademark dilution.
The reason is that the next time a person is shopping for a luxury car, and trying to choose between a Rolls Royce and an Aston Martin, he may find himself involuntarily associating the Rolls with the nut stand. (An alternative theory of such a suit would be that you are appropriating the aura that Rolls Royce has acquired as a result of the investment in quality by its manufacturer, and that aura should be treated as a property right of the manufacturer that you are infringing.)
An even clearer example is the legal concept of trademark "tarnishment," illustrated by a case in which a seller of T-shirts stenciled "I Like Cocaine" on them in the style of the Coca-Cola company's advertising slogan "I Like Coke." In rational-choice terms, we might try to explain the dilution and tarnishment cases by suggesting that the trade name or advertising alleged to infringe trademark creates a distracting mental association that requires a mental exertion to overcome, and thus imposes a cost on the trademark owner that may exceed the benefit to the alleged infringer.
I suspect that Internet advertising is more informative than media advertising, and that the gap will grow.
The reason is that the Internet can pinpoint ads to particular tastes of the viewer (see the recent articles on Facebook's advertising plans, which will enable better matching than Google advertising) and thus give the viewer pertinent information.
This is difficult with media advertising because the audience that reads or views the advertising is heterogeneous.
Becker has laid out the case for refusing to bail out GM, Ford, and Chrysler.
It is a powerful case, and if the drop in auto sales that is driving these companies toward insolvency had occurred two years ago, there would be in my view no case, other than a political one, for a bailout.
But in the current financial crisis, I believe a bailout is warranted, provided that the shareholders and managers of the companies are not allowed to profit from it.
There are two types of corporate bankruptcy: liquidation and reorganization (Chapter 7 and Chapter 11 of the Bankruptcy Code, respectively).
In a liquidation the bankrupt company closes down, lays off all its workers, and sells all its assets.
That probably would not be the efficient solution to the problems of the Detroit automakers.
They are still producing millions of motor vehicles per year, and if they suddenly ceased production entirely there would be a big shortage even though demand is way down.
To put this another way, although at present the companies are probably losing money on virtually every vehicle they sell, at a lower level of production the price at which they sold their vehicles would exceed marginal cost.
The alternative to liquidation--reorganization--can work well in normal times, as in the United Air Lines bankruptcy that Becker mentions.
The reorganized business is able to borrow money because its post-bankruptcy borrowings ("debtor in possession" loans, as they are called) are given priority over its pre-bankruptcy debts, which are usually written down in bankruptcy, reducing the reorganized firm's debt costs and thereby enabling it to recover solvency.
The debts that get written down can include health and pension benefits, which in the case of the auto companies continue to be a big drag on profitability.
The major problems with allowing the automakers to be forced into bankruptcy within the next few months are three, all arising from the depression that the nation appears to be rapidly sinking into.
The first problem is that the companies might have to liquidate, because they might be unable to attract the substantial post-bankruptcy loans that they would need to enable them to remain in business.
The credit crunch--less politely the near insolvency of much of the banking industry--has made that industry unable or unwilling to make risky loans, and loans to the auto companies after they declared bankruptcy would be risky.
Second, not only the size of the automakers, but peculiarities of the industry, would cause bankruptcy to greatly exacerbate the nation's already dire economic condition.
In the very short term, the automakers would probably stop paying their suppliers, which would precipitate a number of the latter--already in perilous straits because of the plunge in the number of motor vehicles being produced--into bankruptcy.
Many of the suppliers would probably liquidate, generating many layoffs.
At the other end of the supply-distribution chain, consumers would be reluctant to buy cars or other motor vehicles manufactured by a bankrupt company because they would worry that the manufacturer's warranties would be unenforceable.
So more dealerships would close, producing more bankruptcies, liquidations, and layoffs.
With the demand for the vehicles made by the Detroit automakers further depressed and the supply-distribution chain in disarray, the liquidation of those companies would begin to loom as a real and imminent possibility.
Liquidation of the automakers would produce an enormous number of layoffs up and down the chain of supply and distribution.
Such prospects reinforce the unlikelihood that a reorganized industry could survive on debtor in possession loans.
The likely psychological impact of a bankruptcy of the U.S.-owned auto industry should not be underestimated.
Already consumers, rendered fearful by repeated misinformation from government officials concerning the gravity of the economic situation (including their reluctance to acknowledge that the nation was even in a "recession," long after it was obvious to the man in the street that we were in something worse), are reducing their buying, precipitating big layoffs in the retail industry, which in turn reduce buying power, which in turn spurs more layoffs.
This vicious cycle would be accelerated by the laying off of hundreds of thousands of workers in the automobile industry, including employees of suppliers and dealers as well as of the manufacturers.
The U.S.-owned auto industry may be doomed; it may simply be unable to compete with foreign manufacturers (including foreign manufacturers that have factories in the U.S.); or a reorganization in bankruptcy may be the industry's eventual salvation.
But the automakers should be kept out of the bankruptcy court until the depression bottoms out and the economy begins to grow again.
(Recall that the government bailed out the airlines after 9/11, allowing United Air Lines to have an orderly bankruptcy reorganization beginning the following year and ending in 2006.) Any bailout, however, should come with strict conditions, to minimize the inevitable moral hazard effects of government bailouts of sick companies.
The government should insist on being compensated by receipt of preferred stock in the companies, on the companies' ceasing to pay dividends, and on caps on executive compensation, including severance pay.
A possible alternative would be for the government to refuse to bail out the industry but agree to provide the necessary debtor in possession loans to keep the auto companies from liquidating after they declare bankruptcy.
But this would be a kind of bailout, and probably would not be sufficient to avert the shock effects that I have described.
The costs of a depression in lost output, reduced incomes, and anxiety almost certainly exceed the benefits, and can have disastrous long-run consequences--had it not been for the Great Depression, it is unlikely that Hitler would have become chancellor of Germany.
But that is not to deny that there can be some benefits, as our current depression illustrates.
(The use of the word "recession" to describe any contraction less severe than the Great Depression is a triumph of euphemism over clarity.)
A depression is an essential backup to efforts to moderate the business cycle.
The housing bubble could not expand indefinitely; leverage could not keep growing indefinitely.
The government was doing nothing to prick the bubble or to limit leverage.
The longer the world economy went without a depression, the worse the collapse would be when it finally, inevitably, came.
The saving grace of catastrophes is averting worse catastrophes: imagine if, instead of attacking the United States with commandeered airliners, al Qaeda had waited a few years and attacked with suitcase nuclear bombs.
We would not have been on guard, as we are now because of the 9/11 attacks.
A depression increases the efficiency with which both labor and capital inputs are used by business, because it creates an occasion for reducing slack. One might think that a firm that has slack in good times will have as much incentive to reduce it as it would in bad times; slack (failing to maximize profits) is an opportunity cost, which in economics has the same motivational effect as an out-of-pocket expense.
But firms are organizations, and organizations experience agency costs, which are more difficult to control in good times than in bad.
If a firm's profits are growing, it is easier for the firm's executives to skim some of the profits, pocketing them in the form of excessive compensation or perquisites, than when the firm is shrinking.
In the former case, stockholders will be doing well, so the pressure they exert through the board of directors to minimize the extraction of rents by executives and other employees will be less intense than when the firm is at risk of collapse.
When the depression ends, the firm will have lower average costs, though they will drift upwards as the firm re-grows.
Government is rife with agency costs as well.
The depression will induce states, cities, and the federal government, all of which will be experiencing sharply reduced tax revenues, to provide public services more efficiently.
It will accelerate the very desirable trend toward privatization of government services such as toll roads and airports.
By increasing unemployment, a depression increases the demand for education by reducing the opportunity cost of it (forgone income is the largest cost of higher education); and education produces positive externalities.
It might seem that the depression would also reduce the income gains from being educated; but those gains accrue over a lifetime and so are little affected by a depression during a person's school years.
A depression is a learning experience.
The banking industry has certainly learned a great deal from the current financial crisis about the risks of leverage and the downside of complex financial instruments intended to diversify risk more effectively than by traditional means such as retaining highly safe liquid reserves to buffer any unexpected decline in the bank's loan revenues.
The current depression has depressed commodity prices.
Of particular importance has been its dramatic effect on the price of oil, which has fallen by about 40 percent in the last six months.
The price spike of last spring seems to have been due primarily to a shortage of supply; the industry could not expand production fast enough to keep pace with surging demand, particularly in China and India.
The fall in price seems to have been due primarily to a worldwide reduction in demand for oil caused by the global depression.
The combination of low prices with low demand is optimal from the standpoint of U.S. (and probably world) welfare.
The low demand reduces the amount of carbon emissions, thus alleviating (though only to a slight extent) the problem of global warming.
The fall in the price of oil has reduced the wealth of the oil-producing nations--a goal that should be central to U.S. foreign policy because of the hostility to us (Russia, Iran, Venezuela), or the political instability (Iraq, Nigeria, Algeria), of so many major oil-producing nations.
By undermining faith in free markets, the depression opens the door to more government intervention in the economy and eventually to higher taxes (though probably not until the economy improves).
These are not necessarily bad things.
Obviously neither the optimal amount of government intervention nor the optimal level of taxation is zero.
There are compelling arguments for greater government intervention to deal with the threat of global warming, to improve transportation and other infrastructure, to reduce traffic congestion, and to protect biodiversity.
Though in principle the money needed for such programs could be obtained from cutting wasteful government programs, that is politically infeasible.
So taxes will have to rise.
Federal taxes as a percentage of Gross Domestic Product are no higher today than they were in the 1940s, 1950s, and 1960s--periods of healthy economic growth.
The marginal income tax rate reached 94 percent in 1945 and did not decline to 70 percent until 1964 (it is 35 percent today).
A modest increase in marginal rates from their present low level would increase tax revenues substantially, probably with little offset due to the distortions that any tax increase is bound to produce.
Taxes should not be increased during a depression, but as we come out of it they can be raised modestly to finance infrastructure investments and other investments in public goods, such as reducing carbon emissions.
The anxiety, reduced consumption, and reduced incomes during a depression are real costs and very heavy ones, but on the other hand the excessive borrowing that precipitated the depression enabled, for a period of years, higher consumption than the nation could actually afford.
Thus the current drop in consumption is in part an offset to the abnormal level of consumption earlier.
Indeed, since people loaded up with cars, fancy dresses, etc., while times were good (illusorily good because the nation was living beyond its means), the current reduction in the purchase of durables, while hard on sellers, may not be a great hardship to consumers.
(Nevertheless, people quickly get habituated to a high level of consumption, and a decline from that level is very painful.)
A related point is that the experience of a depression will induce greater thrift, increasing the formation of investment capital after the depression abates.
Finally, the depression will stimulate fresh thinking by the economics profession.
The profession's embarrassing failure to foresee the depression, and the failure of the Federal Reserve Board, of deposit insurance, and of other regulatory institutions and requirements to avert the near collapse of the banking industry, will stimulate fresh thinking about and research in macroeconomics and financial economics; and the regulatory responses initiated by the Bush Administration and those that will be undertaken by the Obama Administration will generate valuable data about the effects of economic regulation.
Economists will learn from the bad policies adopted in response to the depression (and some are bound to be bad) as well as from the good ones.
The essays commissioned by the John Templeton Foundation and available at www.templeton.org/market/ offer a variety of answers to the question whether free markets corrode moral character.
Becker's posting offers an interestingly different answer, and I shall offer a different answer as well.
Different cultures and, within cultures, different occupations both select for different character traits and shape character traits.
Let me start with culture.
One can distinguish between a culture built on notions of honor, military prowess, and status within a hierarchy often based on birth, on the one hand, and a commercial culture on the other.
English history is a case study of the transition from the first to the second, the second having been realized in the United States earlier and more fully than in the mother country.
The two types of culture select for and inculcate quite different character traits--reckless physical courage, a fierce concern with personal honor, identification with a group (family, dynasty, or nation), and hierarchic control in the former; cooperativeness, empathy, tact, politeness, intelligence, individualism, self-interest, prudence, and deferral of satisfactions (i.e., a low discount rate) in the latter.
Aggressiveness and a willingness to deceive are constants, although deception is more skillfully deployed in a commercial society.
Politicians possess and cultivate the traits associated with whatever culture they operate in.
Honor-based societies attract charismatic leaders, often warriors; democratic societies model their politics on the economic market.
As Schumpeter explained in his unfortunately rather neglected economic theory of democracy (sometimes called "competitive democracy"), democratic politicians, constituting the members of a governing class much like the business community in the economic domain, compete for the support of "consumers" (= voters) who "pay" (vote) for the competitor whose product (a package of policies, values, and leadership traits) they prefer.
People in a commercial society are probably more self-interested than people in an honor-based society, because the latter are more likely to identify with leaders or causes than to behave as separate individuals with individual tastes and goals.
Although commercial society selects for and encourages traits that we are apt to think "good," such as cooperativeness, intelligence, and empathy, in fact these qualities are morally neutral.
Intelligent and cooperative businessmen, whose empathetic qualities enable them to manipulate consumers' emotions and intellectual limits, will be prone to collude with their competitors and defraud their consumers, as well as to ignore pollution and other externalities that economic activity produces.
That is why even libertarians, with the exception of anarcho-capitalist extremists, believe that antitrust and antifraud laws are necessary controls over commercial activity.
Even without such laws, it is true, not all markets would be riven by collusion and fraud.
Collusion invites free riding, since a seller can increase its profits by slightly undercutting the cartel price; and the reputation concerns stressed by Becker will often deter fraud.
But without any regulation, cartel agreements would be legally enforceable, which would discourage free riding, though they would be eroded by new entry--but often the new entrants, attracted by supracompetitive prices, would be less efficient than the incumbent firms.
Reputation concerns will not deter deceptive advertising concerning traits shared by all products in the market in question.
A cigarette advertiser who advertises that his cigarettes are "safer" than competitors' cigarettes is reminding consumers that smoking is in fact unsafe.
The cigarette companies (also the automobile manufacturers) tried for decades to conceal the dangers inherent in their products, since trumpeting those dangers would have reduced demand.
Businessmen also have an incentive to manipulate the regulatory process, seek tax loopholes, and the like.
Although we tend to blame politicians and bureaucrats for bad policies, often they are merely brokering interest-group deals.
In a democratic society, it is legitimate (in fact inevitable) for policy to yield to the demands of interest groups.
We should not blame politicians who are honest agents of politically powerful forces.
Politicians who do not yield to those forces are ineffectual.
Of course politicians lie a great deal, but so does anyone who depends on the goodwill of others.
Max Weber in a famous essay on politics as a vocation distinguished between private and public morality.
Anyone in a public position--and this includes business and academic leaders as well as politicians--cannot indulge a taste for candor or altruism and expect to be successful at his job.
For the same reason, good business leaders drive hard bargains with their suppliers, play off subordinates against one another, lay off workers by the thousands, receive huge compensation packages, and often relocate plants overseas when foreign wages and taxes are lower.
The difference between public and private morality shows that even honesty is a morally neutral quality.
Often the regulations imposed on business are mindless and crippling and to survive a businessman must violate them; in doing so he promotes both his own welfare and that of society as a whole.
History teaches that a commercial society is bound to be more prosperous and peaceful than an honor-based traditional society.
The commercial culture creates incentives and constraints that, provided that economic activity is effectively regulated (an important qualification), maximize the values that are important to most people.
This doesn't mean that people in a commercial society are "better" than people in other types of society.
The human race is genetically uniform, and our "moral" genes are not much different from the corresponding genes in chimpanzees.
The success of commercial societies just illustrates that different institutional structures produce different human behavior.
The defeat of the Republican Party in the November election is widely thought to signal the decline of conservatism in the United States.
But it is important to distinguish between the Republican Party and conservatism rather than to equate them.
In a two-party system, political parties are opportunistic coalitions and hence lack ideological homogeneity, especially in a culturally heterogeneous nation, such as the United States.
Apart from the many Republicans and Democrats who vote for a party out of habit or nostalgia or family tradition or attachment to a particular issue or a personal liking or loathing for the other people who vote for the party, there are ideological voters.
In the Republican Party these fall into three main groups: (1) believers in free markets, low taxes, and small government; (2) believers in tough criminal laws and a strong foreign policy; and (3) social (mainly religious) conservatives, who are hostile to abortion, gay marriage, pornography, and gun control.
Groups (2) and (3) converge on hostility to illegal immigrants.
Groups (1) and (2) are in some tension because a national security state requires big government and therefore high taxes.
Group (1) is in tension with (3) because (1) is libertarian and (3) is regulatory.
All three groups have been hurt by recent events, and the three are moving apart because of the blows the others have taken.
The financial crisis has hit economic libertarians in the solar plexus, because the crisis is largely a consequence of innate weaknesses in free markets and of excessive deregulation of banking and finance, rather than of government interference in the market.
Believers in a strong foreign policy have been hurt by the protracted and seemingly purposeless war in Iraq (the main effects of which seem to have been discord between the United States and its allies, increased recruitment of Islamic terrorists, and the strengthening of Iran and of the Taliban in Afghanistan and of al Qaeda in Pakistan) and the Bush Administration's lack of success in dealing with Iran, North Korea, Afghanistan, Pakistan, and the Arab-Israeli conflict.
And social conservatives have been hurt by the stridency of some of their most prominent advocates, who all too often give the appearance of being mean-spirited, out-of-touch, know-nothing deniers of science (e.g., evolution, climate change).
The efficiency gap between the competing presidential campaigns created the appearance of a competence gap between the parties.
As the campaigns progressed, a surprising number of conservatives switched their support to Obama.
Thoughtful conservatives, already disturbed by the accumulation of blunders of the current Administration (the Iraq WMD, Katrina, the Justice Department scandals), culminating in its uncertain response to the financial crisis, were appalled at the iconic status that Joe the Plumber attained in the Republican campaign, the wild rumors spread by the conservative bloggers and talk-radio hosts, and the intellectual vacuity of many Republican candidates and advocates.
The Republican Party seemed to have descended to anti-intellectualism--to deriding highly educated people who speak in complete sentences as "elitists," as compared to the down-to-earth ignorance of Joe and his ilk--which sorts badly with the strong intellectual tradition of conservatism.
It is a self-defeating strategy of conservatives to argue that "all" intellectuals are liberal and therefore conservatives should think with their guts rather than their brains.
For myself, I would be happy to see conservatism exit from the political scene--provided it takes liberalism with it.
I would like to see us enter a post-ideological era in which policies are based on pragmatic considerations rather than on conformity to a set of preconceptions rooted in a rapidly vanishing past.
We have accumulated a substantial history of liberal and conservative failures.
The liberal failures include underestimating the cost of egalitarianism and of social engineering by judges (the Warren Court, Roe v. Wade, the near abolition of capital punishment), and the benefits of discipline, of punishment, of enforcing principles of personal responsibility, and of military force.
The conservative failures include overestimating the efficiency of unregulated markets, the efficacy of military force, and the beneficent effects of religiosity.
Liberals are wrong to promote unions (described by one wag, albeit with some exaggeration, as the parasites that kill their hosts) and conservatives to promote abstinence as a substitute for condoms in preventing teenage pregnancy.
Now I know that it isn't really possible to think without preconceptions.
As Bayesian decision theory teaches, a rational decision maker starts with a prior probability of some uncertain event (that a credit crunch will turn into a major depression, for example), but adjusts that probability as new evidence comes to his attention--which means that his prior belief, his preconception, may, depending on the strength and direction of the evidence, affect his ultimate decision, which will be based on his posterior probability that the event will occur.
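The Bayesian updating just described can be made concrete with a small calculation (all probabilities here are illustrative, not estimates):

```python
# Hypothetical Bayesian update: how a prior belief combines with new
# evidence to yield a posterior probability (all numbers illustrative).

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Prior: a 10% chance that the credit crunch turns into a major depression.
prior = 0.10
# Suppose some new evidence (say, a sharp fall in industrial output) is
# three times as likely if a depression is coming (0.6) as if not (0.2).
posterior = bayes_update(prior, 0.6, 0.2)
print(round(posterior, 3))  # 0.25
```

The decision maker's preconception (the prior) still matters, but strong evidence in either direction can swamp it, which is the difference between a rational preconception and a dogmatic one.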
Nor do I mean to deny the value of theory, in particular economic theory, in guiding policy.
But there is a difference between rational preconceptions, based on theory and experience, and rigid emotional preconceptions, such as dogmatic libertarianism or egalitarianism or ungrounded hopeful beliefs such as that everybody in the world is yearning for and ready for democracy, that tell one more about the thinker's personality than about the quality of his thought and that may be impervious to reconsideration in the light of new evidence.
We should be skeptical of world views rooted in emotion that insulate people against inquiry into the foundations of their beliefs.
Concretely, there is a range of perfectly respectable economic theorizing, at one end (the interventionist) typified by Paul Samuelson and at the other end (the libertarian) by Milton Friedman, but it would be a mistake to commit to one or the other end since neither can be proved to be correct.
The libertarian end of the range failed to grasp the danger of deregulation of financial markets and underestimated the risk and depth of the current economic crisis--an economic shock that appears to be severe enough to trigger a genuine depression.
But the point I particularly want to stress is that the recent failures of conservatism are not a vindication of liberalism.
Both can fail, and as long as the failures are recognized, the United States can do fine.
Becker's analysis is impressive, but I hesitate to state with confidence that China would be better off to revalue its currency.
As Becker points out, China has pegged its currency to the dollar at a rate of exchange that greatly undervalues its currency relative to ours.
As a result China sells goods to U.S. producers and consumers at very low dollar prices and buys goods from U.S. producers at very high prices. In consequence it exports a lot to the U.S. (and to other countries as well, for its currency is undervalued relative to other currencies besides just the dollar, notably the euro) and imports little.
Since it receives more dollars than it pays, it has accumulated huge dollar reserves--accumulated them rather than giving them to its people.
It has more than $2 trillion in foreign reserves, mostly U.S. dollars.
The dollar has been falling lately, and the value of China's dollar reserves with it.
Could China have sensible reasons for such an odd, old-fashioned policy ("mercantilism"--the maximization of a nation's cash or cash-equivalent reserves--famously attacked by Adam Smith more than two hundred years ago)? It could.
The immense exports that China's skewed exchange policy has fostered provide employment for a large number of Chinese.
Their wages are low, but at least they have jobs.
Of course many might still have jobs even if the dollar were cheaper relative to the Chinese currency.
China would import more and export less.
It would manufacture less, because many workers would be required for the expanded system of domestic distribution that would be necessary if domestic consumption (both of Chinese manufactures diverted from export to internal markets and of imported goods) grew.
It would manufacture a different mixture of goods, because of competition from imported goods, but above all it would need a much more elaborate system of wholesale and retail distribution, and perhaps a different commercial culture.
The transition to a modern consumer society with its credit cards and product warranties and malls and the rest would be difficult.
In the interim there might be widespread unemployment; shifting employees from manufacturing to distribution, or from one type of manufacturing to another, doesn't happen overnight.
And China doesn't have the kind of social safety net that we do, to catch the unemployed before they reach the bottom.
Because of the limitations of domestic consumption, Chinese are great savers, and this relieves the pressure the government would otherwise feel to provide social services.
That provision might strain the government's administrative abilities.
China has a long history of political instability, and there is tension between its dictatorial communist government and its largely free-enterprise economy.
It is naturally reluctant to take chances on changing its economy from one of producing manufactured goods for export to one of manufacture and distribution primarily for domestic consumption.
And there is value to China in those trillions in foreign reserves that it has accumulated.
They magnify its global power.
China is our major creditor.
It finances our deficit.
Like any dependent debtor, we must be very careful not to offend our major creditor.
It is true that our relation with China is one of bilateral monopoly: if we devalue the dollar (which we may be doing) in order to lighten our debt burden, we hurt China; but if China in retaliation stops buying our Treasury bonds, we are badly hurt.
For all these reasons, while China is likely to abandon mercantilism in the long run, it probably is sensible for it to do so gradually.
Would we benefit from China's abandoning mercantilism? As Becker points out, our consumers benefit from the artificially low prices at which Chinese goods are sold in this country.
At the same time, our dependence on China's financing our public debt weakens our ability to influence Chinese policy on issues of urgent concern to us, such as the threat of nuclear proliferation posed by North Korea, Iran, and Pakistan, and the need to take effective steps to limit global warming.
Then too it seems that the only way in which we can buy those cheap goods from China is to borrow from China.
We buy more from China than we sell to it and so China accumulates dollars to bridge the gap, dollars that it then lends to the U.S. Treasury.
The effect is to reduce pressure on our government to pay down our immense and growing public debt either by raising taxes or by cutting spending.
We cannot continue along the path of ever-growing debt unless our economy grows very rapidly, which is not assured.
So I am not sure that I agree with Becker that China's policy is good for us and bad for it.
The reverse may be truer.
In October, the President announced that $13 billion (some commentators believe a more accurate estimate is $14 billion) of the $787 billion stimulus package enacted this past February would be used to pay every social security annuitant $250 in 2010, ostensibly to "compensate" for the fact that there will be no cost of living (inflation) increase in social security benefits.
The social security COLA for year t is based on the increase in the Consumer Price Index between the end of the third quarter of year t - 2 and the end of the third quarter of year t - 1.
(Here t is 2010, t - 1 is 2009, and t - 2 is 2008.) There will be no cost of living increase in 2010 for the excellent reason that as of the end of the third quarter of this year (September 30, 2009), the cost of living had fallen 1.3 percent from the end of the third quarter of 2008.
Social security has a ratchet: benefits increase when the cost of living increases but do not decrease when the cost of living decreases.
There is thus nothing to "compensate" social security annuitants for; on the contrary, they will be receiving a windfall in 2010 by virtue of the increase in their real (as distinct from nominal) benefits: their 2010 benefits will buy more.
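The ratchet can be sketched in a few lines; the benefit amount below is hypothetical, while the CPI changes are the 5.8 percent rise and 1.3 percent fall discussed in this essay:

```python
# Sketch of the social security COLA "ratchet": benefits rise with the
# CPI but never fall when it declines. The starting benefit is a
# hypothetical figure; the CPI changes are from the text.

def apply_cola(benefit, cpi_change):
    """Raise the benefit by the CPI increase; ignore CPI decreases."""
    return benefit * (1 + cpi_change) if cpi_change > 0 else benefit

benefit = 1000.0                       # hypothetical monthly benefit
benefit = apply_cola(benefit, 0.058)   # CPI rose 5.8%: benefit rises
benefit = apply_cola(benefit, -0.013)  # CPI fell 1.3%: benefit unchanged
print(round(benefit, 2))  # 1058.0
```

The unchanged nominal benefit in the deflation year is the "windfall": the same dollars buy about 1.3 percent more.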
Transfer payments, moreover, are a poor device for fiscal stimulus.
The idea of a fiscal stimulus as an anti-depression device is to increase employment and by doing so restore business and consumer confidence; we are seeing today how high and rising unemployment is sapping that confidence and retarding recovery from the current depression (and it is a depression, not the "Great Recession" as some are calling it, though that's an issue for another day).
Transfer payments are at two removes from putting unemployed people to work.
The amount of the transfer that is saved by the recipient in a savings account or other safe haven is (by definition) not spent, and so does not increase demand and therefore supply and therefore employment.
And the amount of the transfer that is spent is spent at a store or other retail outlet to purchase a good that has already been produced.
It is buying from inventory.
Only when the store's inventory falls to a level at which the store has to order a new supply of goods from the manufacturer is there any stimulation of production, and thus of hiring; and of course the stimulation may not be of production by an industry, or in an area, of high unemployment.
The dive that the economy took in the wake of the September 2008 financial collapse was unanticipated, and as a result sellers found themselves with excess inventories; until they were worked down, production would remain depressed.
In sum, the effect of a transfer payment on employment may be nil.
Apart from its inefficiency as a contribution to the recovery, largesse for the elderly--whose medical expenses, paid for largely by the taxpayer under the Medicare program, are threatening to bankrupt the country--sends the wrong signal: the signal of fiscal profligacy.
Lawrence Summers, the brilliant economist who heads the National Economic Council in the White House, has publicly endorsed the $250 gift to social security recipients.
He claims that it corrects an "anomaly." The anomaly he points to is that social security recipients received only one $250 stimulus check this year and will receive no cost of living increase next year, whereas the tax benefits in the stimulus plan will be paid next year as well as this year.
But social security annuitants received a 5.8 percent cost of living increase this year, whereas few workers received as large a wage increase; and they will be receiving a real as distinct from nominal increase in benefits next year.
The only "anomaly" in the picture is the cynical provision of a windfall to a group that has suffered less from the depression than persons of working age, a group whose only claim to a $250 Christmas gift paid for by the federal taxpayer is that it votes more heavily than the young.
What's $13 billion at a time when trillions are spent casually? The real significance of the measure is the insight it gives into the Administration's apparent indifference to fiscal prudence.
And not just the Administration.
The political parties play leapfrog when it comes to spending--each trying to outdo the other in generosity to powerful voting blocs, and specifically to the elderly--the recipients of enormous social security and Medicare benefits, courtesy of the federal taxpayer.
The costs of both the Medicare and social security programs are increasing rapidly as the population ages, and as the population ages the voting power of the elderly increases, placing additional pressure on a budget already disproportionately devoted to supporting the least economically productive members of society.
(As a septuagenarian, I claim the right to make politically incorrect remarks about the elderly.
Moreover, I am speaking of the average; many elderly people are hard-working and productive.)
Becker is certainly right that growth in productivity is an important driver of economic growth.
But we must consider the source of the growth in productivity in order to understand the conjunction in the last two quarters of rapidly rising productivity with rapidly rising unemployment.
If productivity growth is the result of technological innovation (and "technology" in this context need not be limited to engineering--it could include innovations in management, marketing, inventory control, and so forth), then the effect of greater productivity on economic growth will indeed be positive.
But it is unlikely that the productivity spurts in the second and third quarters of this year have been due to innovation.
More likely they have been due to old-fashioned cost cutting spurred not by technological advances but by economic distress.
The only explanations I have seen offered for the productivity surge are cutting wages and working the remaining workers harder.
I have found no suggestion of any technological change that might be responsible for such a large, sudden surge in productivity.
Facing declining demand and a frightened work force, a firm is under pressure to reduce its costs and it can do that in a variety of ways, including laying off workers, pushing its remaining workers to work harder, reducing wages and benefits, buying cheaper inputs, slowing delivery, paying its bills more slowly, and responding more slowly to customer complaints.
Some cost reductions will not increase productivity, as they will be proportional to reductions in output.
But others will, such as laying off the least productive workers, or reducing quality in ways that do not show up in statistics on productivity (as they should--a reduction in quality is a reduction in the value of output).
Productivity gains that are based merely on adaptations to temporarily depressed economic conditions will be lost when conditions improve.
As labor markets tighten, a firm will perforce hire workers who are less productive than the workers it had retained in a slimmed-down workforce during the depression; and so productivity will decline.
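A toy calculation (all figures hypothetical) shows how shedding the least productive workers raises measured average productivity without any innovation at all, and why the gain reverses when hiring resumes:

```python
# Toy illustration with hypothetical figures: laying off the least
# productive workers raises average output per worker even though no
# worker becomes any more productive.

workers_output = [120, 110, 100, 90, 60, 50]  # units of output per worker

avg_before = sum(workers_output) / len(workers_output)

# The firm lays off its two least productive workers.
retained = sorted(workers_output, reverse=True)[:-2]
avg_after = sum(retained) / len(retained)

print(round(avg_before, 1), "->", avg_after)  # 88.3 -> 105.0
```

Rehiring comparably weak workers when demand recovers would push the average back down, which is why such productivity gains are transient.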
The productivity gains in the last two quarters could actually signal pessimism about the pace of the recovery.
There are costs to reorganizing one's business in order to adapt to a reduction in demand.
The shorter the expected reduction, therefore, the less reorganizing a firm will do.
Indeed, often during a recession or depression there is the phenomenon of "labor hoarding": if a restoration of normal demand is expected in the near future, a firm may be better off with a workforce larger than it needs to meet the current demand than it would be laying off workers and having to incur the expense of rehiring them, or hiring new workers, when the downturn ends.
There has been less labor hoarding in the current downturn than in previous ones, and this may be because employers do not anticipate an early return to normal demand.
Their pessimism would be consistent with predictions that unemployment will continue to rise for some months, and thereafter will decline only slowly.
For with such a high rate of unemployment (and underemployment--10.2 percent and 17.5 percent at this writing, respectively), demand for goods and services is likely to remain at a low level.
On December 3 the President will convene a "jobs summit" to consider what if anything to do about the dismal employment picture.
And dismal it is.
The figure of 10.2 percent unemployment in October understates the problem because people who have given up on seeking a job, or who are involuntarily working part-time rather than full-time, are not counted as unemployed.
They are, however, included with the unemployed in the statistics of underemployment, and the underemployment rate has reached 17.5 percent.
These rates may continue to rise.
And more than in previous downturns, employers have been cutting wages and benefits, which from a worker's standpoint is a form of quasi- or partial unemployment.
At the end of the summer there was some hope for a rapid economic recovery, but that has faded.
Recovery from a recession or depression precipitated by a collapse of the banking industry secondary to a housing collapse tends to be slow.
Weakened banks are hesitant to lend, and because housing is a big part of household wealth a collapse of housing prices tends to inhibit spending, or alter spending patterns, and especially to inhibit borrowing: debt is a fixed cost, so when household wealth declines people find themselves overindebted.
With the supply of and demand for credit weak, economic activity slows.
The banks' reluctance to lend, which expresses itself in stricter credit standards, is especially hard on small business, which depends on bank loans for credit; small businesses unlike big cannot finance themselves by issuing bonds or commercial paper or using retained earnings in lieu of credit.
And small businesses in the aggregate are big employers.
The Administration's ambitious health-care reform is inhibiting hiring by small business by creating uncertainty about the health-insurance costs that employers will bear.
Mounting concern with our rapidly growing national debt is a further damper on investment and hence employment.
There is even concern that we may be in a trap in which rising unemployment feeds on itself.
Credit defaults are highly correlated with the unemployment rate, so as unemployment rises, defaults rise, and defaults impair bank capital, causing a further tightening of credit, which by hurting small business pushes unemployment up.
All this is speculation and for all I know the unemployment rate will start falling soon and rapidly.
But most forecasters think not, and so it is understandable that the Administration would like to do more than it is doing to curb unemployment.
But what is there to do? In part because of mistakes in the design, implementation, and explanation of the $787 billion stimulus program enacted last February, and in part because of concern with the rapidly growing federal deficit, the stimulus has become extremely (I think undeservedly) unpopular, and Congress will not enact another stimulus program as urged by left-wing economists.
What then can be done? One possibility, which has been tried in Europe recently, apparently with some success, is to pay employers, through tax credits or otherwise, to hire workers.
This is fiscal stimulus--Keynesian deficit financing--by another name.
It is like the government's paying a construction company to build a highway, which will require the company to enlarge its workforce.
All that might seem to distinguish the job subsidy is that the link between funding and jobs is more direct, which increases its political appeal.
A common objection is that it will encourage fraud--employers will fire workers and then rehire them, to obtain the subsidy.
Or, less transparently, it will fire workers and hire replacements, again in order to obtain the subsidy.
But a bigger objection, which is also an objection to the original stimulus program, is that it's not targeted on industries or areas of above-average unemployment.
Even in an area of low unemployment, an employer will have an incentive to hire workers in order to obtain the subsidy, but he may do this by hiring workers who already have a job, and the net effect on unemployment will therefore depend on what the hired worker's former employer does--maybe just pay him to stay.
There are other ways of stimulating employment, at lower cost and probably with greater impact.
One would be to reduce the federal minimum wage, which over a three-year period beginning in 2007 will have risen from $5.15 to $7.25 an hour--a 40 percent increase.
As time passes, unemployment becomes less a matter of layoffs and more a matter of failing to provide jobs for new entrants to the workforce, and a reduction in minimum wage would make these new entrants--inexperienced workers with modest wage expectations--far more employable.
Another way to reduce unemployment would be to amend the stimulus law to redirect the remaining unspent funds to areas and industries of high unemployment.
Another would be to reduce payroll taxes, including the unemployment-insurance tax and the employer's share of the social security tax; for payroll taxes are part of the cost of labor.
The effect on the employer would be similar to that of a wage cut, and would increase the demand for labor.
Since social security and unemployment benefits (as opposed to taxes) would be unaffected, the reduction in the taxes would not reduce the employees' full wages and so induce a demand for higher wages.
So the employer's net labor cost would fall and his demand for labor rise.
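The arithmetic is simple. A sketch with hypothetical figures (a $30,000 wage, and the employer's roughly 7.65 percent share of social security and Medicare payroll taxes, halved here purely for illustration):

```python
# Hypothetical arithmetic: the employer's cost of labor is the wage plus
# the employer's share of payroll taxes, so a payroll-tax cut works like
# a wage cut for the employer without reducing the worker's pay.

wage = 30000.0          # hypothetical annual wage
employer_tax = 0.0765   # approximate employer share of payroll taxes

cost_before = wage * (1 + employer_tax)       # cost with full tax
cost_after = wage * (1 + employer_tax / 2)    # cost if the rate is halved

print(round(cost_before, 2), "->", round(cost_after, 2))  # 32295.0 -> 31147.5
```

The worker's wage and benefits are untouched; only the employer's net cost of labor falls, which is what increases the demand for labor.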
The problem is that the government's deficit would increase, but that would also be true of a subsidy for hiring, though it would not be true of a reduction in the minimum wage.
Japan spent the 1990s unsuccessfully trying to recover from a collapse of the Japanese banking industry caused by the bursting of a housing bubble, despite aggressive monetary and fiscal policies.
As a result of those policies, Japanese national debt soared, but was financed mainly internally because of the very high Japanese personal savings rate.
With its large surplus of exports over imports, moreover, Japan accumulated dollars (and other currencies), which also reduced the debt burden.
Interest rates remained very low, in part because of chronic deflation.
The low interest rates stimulated the "carry trade": investors would exchange Japanese yen for local currencies in countries that had high interest rates.
This is a form of arbitrage, but tends not to erase international interest-rate differences, as one might expect arbitrage to do.
Japan was hard hit by the current economic crisis, in part because of its dependence on exports.
It responded with aggressive monetary and fiscal measures, as before--with what appear to be potentially disastrous results, if one may judge from data in a recent article in the Wall Street Journal (Richard Barley, "Japan: The Land of the Rising CDS," Nov. 11, 2009, p. C20).
The International Monetary Fund predicts that next year Japan's ratio of national debt to GDP will be an astronomical 2.27, forcing Japan to continue borrowing heavily abroad.
Interest rates remain very low, in part because Japan is again experiencing deflation.
Rating agencies have reduced Japan's bond rating to AA-, yet the government, lulled by low interest rates, apparently has no sense of urgency about the country's mounting debt burden, a burden aggravated by the rapid aging of Japan's population.
International financial markets believe that there is some probability that Japan will default on its debt.
The "CDS spread" (the annual percentage of a debt's face value that someone desiring insurance against the debt's defaulting must pay for the insurance) on Japanese government debt is almost 1 percent (0.75 percent).
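In dollar terms, a 0.75 percent spread translates into an annual insurance premium as follows (the notional amount insured is hypothetical):

```python
# What a 0.75% (75 basis point) CDS spread means in dollars: the annual
# premium for insuring a given face amount of debt against default.
# The $10 million notional is a hypothetical illustration.

def annual_cds_premium(notional, spread):
    """Annual cost of default insurance at the quoted CDS spread."""
    return notional * spread

premium = annual_cds_premium(10_000_000, 0.0075)
print(round(premium, 2))  # 75000.0
```

A rising spread is the market's way of pricing a rising perceived probability of default.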
The United States differs in many respects from Japan, but is beginning to look more and more like it.
We too experienced a banking collapse in the wake of the bursting of a housing bubble, and our monetary and fiscal responses, though aggressive, may not have been highly effective.
The fiscal stimulus--the $787 billion federal spending program enacted in February--was enacted late and is poorly designed, and some think too small.
And there is concern that, like Japan, we are babying our weak banks by allowing them to overvalue the assets and underestimate the liabilities on their balance sheets.
The stress tests conducted last spring, for example, both underestimated the stress the banks were under (by assuming an unemployment rate--unemployment being highly correlated with bank-loan defaults--substantially lower than it became within a couple of months after the tests) and disregarded likely defaults of bank loans that will mature after 2010.
Our government, too, is lulled by low interest rates into believing that we can continue to run huge deficits without raising taxes or cutting spending significantly, simply by borrowing.
Our public debt (the amount of federal government debt that is contractually obligated, as distinct from accounting reserves for entitlement programs such as Medicare and social security), of which almost half is owned by foreign governments and other foreign investors, has reached $7.5 trillion, which is more than half our GDP, and is on course to increase by at least a trillion dollars a year for the indefinite future.
Like Japan, we have an aging population, which is pushing up entitlement costs.
Our government seems not to have any economically realistic or politically feasible plans either to raise revenue or cut spending, but instead plans ambitious new spending programs (notably but not only on revamping health insurance).
Proposed economies seem tokens.
There is an air of complacency about deficit spending and public debt--again like Japan.
Because of our low inflation rate (it is close to zero) and the Federal Reserve's "easy money" policy (as a result of which our banks are holding a total of $1 trillion in excess reserves), the dollar has now become a favorite currency for the carry trade: dollars exchanged for local currencies earn interest more or less effortlessly, though not without risk.
The carry trade may be a factor in the recent rises in commodity prices; indeed there is fear of additional asset-price inflation (bubbles) as a result of all the dollars sloshing around in the world economy.
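The arithmetic of the carry trade, and its exposure to exchange-rate risk, can be sketched in a few lines; all the rates and the exchange-rate move below are hypothetical illustrations, not market data.

```python
# Minimal carry-trade sketch (all rates hypothetical, not market data).
def carry_return(borrow_rate, invest_rate, fx_change):
    """One-period net return per borrowed dollar.

    borrow_rate: interest owed on the borrowed dollars (0.005 = 0.5%)
    invest_rate: interest earned in the target currency
    fx_change:   change in the target currency's dollar value over the
                 period (negative = the dollar appreciated against it)
    """
    # Convert $1 into the target currency, earn interest there, convert
    # back at the new exchange rate, repay principal plus interest.
    return (1 + invest_rate) * (1 + fx_change) - (1 + borrow_rate)

# With near-zero dollar rates the interest spread is pocketed...
print(round(carry_return(0.005, 0.045, 0.0), 4))   # -> 0.04
# ...but a 5% currency move against the trade more than erases it.
print(carry_return(0.005, 0.045, -0.05) < 0)       # -> True
```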
Should the U.S. economy grow more rapidly than the public debt, we'll be okay.
But the government's focus appears to be not on economic growth, but on redistribution (the major goal of health reform) and on creating at least an aura of prosperity, at whatever cost in deficit spending and future inflation, in time for the November 2010 congressional elections.
It is always difficult to decide whether a religious tenet of a hierarchical religion, such as Roman Catholicism, reflects religious belief or institutional strategy.
The Roman Catholic Church is a huge “corporation,” one that reached its present size, wealth, and influence in a competitive environment, where it had first to confront paganism and Judaism, and later Protestantism and secularism.
The Church has long been hostile to contraception, but the nature of its hostility has changed, and may be changing yet again with the Pope’s recent acknowledgment that the use of condoms may sometimes be justified as a way of preventing the spread of AIDS.
I want to consider the institutional as distinct from doctrinal considerations that might explain the history of orthodox Catholic views of contraception.
In the early years of Christianity, the Church had to steer a middle course between Christian extremists who thought sex a form of purely animal behavior that Christians should eschew, and pagans, who had a notably relaxed attitude toward sex, including masturbation, homosexual and other nonmarital sex, and contraception in the form of coitus interruptus and abortion.
Rejecting sex altogether was not a viable policy for an ambitious Church (think where rejection of sex got the Shakers), but accepting the pagan view would have resulted in a failure to differentiate Christianity from paganism, and perhaps in a reduction of Christianity’s appeal to women.
The compromise position that the Church adopted was that sex was proper as long as it was oriented toward its proper function, which, the Church held, was procreation within marriage.
But it had to qualify this view to avoid condemning sex by married people who turned out to be sterile, for example because the wife had reached menopause.
So the Church allowed that a secondary lawful purpose of sex was to reinforce the marital bond.
Many centuries later the “demographic transition”—the tendency for the birthrate to fall when a nation achieves a certain level of prosperity—placed the Catholic condemnation of contraception under pressure.
Married couples wanted to have sex, but didn’t want to have the number of children that an active sex life would produce without contraception.
And contraceptive methods improved.
Eventually the Church achieved a partial accommodation by authorizing the “rhythm” method of contraception, since that was a form of abstinence and abstinence was consistent with Catholic doctrine—indeed it was enjoined on priests and nuns.
But few married couples found it satisfactory.
Greatly improved contraception (notably the pill), improved treatments of venereal diseases, increased privacy, relaxation of parental controls, continued declines in family size, and increased divorce rates (in part a consequence of lower birthrates and women’s greater access to the job market)—all factors that reduced procreative relative to nonprocreative sex (in part by increasing the prevalence of nonmarital sex)—put irresistible pressure on the Catholic prohibition of contraception, to the point where today in the United States and most European nations, even such traditionally strongly Catholic nations as Ireland and Italy, Catholics use contraception at the same rate as non-Catholics.
With the Church unable to resist the sexual revolution, efforts to prevent contraception were seen as likely to have perverse consequences.
True, if contraception were unavailable, there would be less promiscuity; but there would be some promiscuity, and probably a good deal, and a higher fraction of sex acts would result in unintended pregnancy, and therefore in an increased number of births to unwed teenagers and an increased number of abortions.
The net effect would be unclear, but could well be worse from the standpoint of overall Catholic doctrine.
The Church finds itself today in a quandary: its proscription of contraception is so widely ignored, and so anachronistic given today’s sexual mores, as to invite derision—to make the Church seem “out of it.” This might not matter a great deal if Roman Catholicism were a fringe faith, as Christianity was at its inception.
It is, as I said, a vast “corporation.” It has hundreds of millions of “customers.” It has been losing customers in the Western world, but gaining them in Africa—but Africans, ravaged by the AIDS epidemic, are pressing for a relaxation of the Church’s ban against contraception because condoms are a cheap and effective method of preventing infection with the AIDS virus.
It is therefore not surprising, from an institutional perspective, that the Pope should take a first, albeit hesitant, step back from the proscription of contraception by acknowledging publicly that condoms might be justifiable as a method of reducing the incidence of AIDS.
Apparently he gave the example of a male prostitute’s using condoms (although there is some question whether the male sex of the prostitute might just have been a mistake in translation), and this puzzled people because the traditional objection to contraception is that it prevents procreative sex, and male prostitutes service homosexuals and homosexual sex is not procreative.
But homosexual sex in Catholic teaching is a mortal sin, so anything that facilitates it, like condoms in the presence of AIDS, is morally questionable.
The biggest problem that the Church faces in backing off its traditional condemnation of contraception is a potential loss of religious authority, which is no small matter in a hierarchical church.
In 1930, responding to the Anglican Church’s rescission of its prohibition of contraception, Pope Pius XI made an “infallible” declaration unequivocally reiterating the Catholic Church’s age-old prohibition of the practice, and his declaration was repeated by subsequent popes well into the 1990s.
Were the Church now to repudiate that doctrine, it would undermine papal authority.
Infallible papal pronouncements would be seen as tentative, revisable, like Supreme Court decisions, which have the force of precedents but can be and occasionally are overruled.
Moreover, the ban on contraception is bound up with broader views of sex that are held by the Church.
Contraception facilitates nonmarital sex, and one method of contraception—the condom—because of its dual use as a preventive of venereal infection, facilitates homosexual sex.
The Church condemns nonmarital sex, including homosexual sex, more strongly than it condemns contraception, but relaxing the ban on contraception would undermine its other policies toward sex.
Moreover, the efficacy of contraception in preventing teenage births is bound up with sex education in schools, and sex education has the inevitable consequence of “normalizing” teenage sex.
Concern with the loss of religious authority may explain another peculiar feature (to an outsider, at least) of Catholic doctrine, which is the ban on priests’ marrying and on women becoming priests.
The problem of priests’ sexually molesting boys would be solved if priests were allowed to marry and if women could be priests, because then the priesthood would attract fewer homosexuals.
The current shortage of priests and nuns (a shortage due in part to the reduction in the average size of Catholic families—a reduction that in turn is due in part to contraception) would also be greatly alleviated if priests could marry and women could become priests.
But the solution would represent such a dramatic reversal of age-old Catholic doctrine as to undermine any pretense of papal infallibility.
An intermediate position for the Church to take—and the most likely position for it to take in the short run—would be to relax the ban on contraception only with respect to condoms, viewed as an essential preventive of AIDS.
Yet even that might be a problematic solution, because it would be seen as an acknowledgment that people cannot control their sex drives, yet that control is basic to the most distinctive features of Catholic doctrine, such as the ban on sexual activity and marriage of priests and nuns, on divorce, and on nonmarital and “unnatural” sex (homosexual sex, masturbation, oral and anal intercourse, etc.).
Why sex plays such a large role in Catholic doctrine is a deep puzzle, but precisely because it plays such a large role, an attempt to backtrack from it could prove destabilizing.
The Pope may thus have opened Pandora’s Box.
But he may have had no choice, from the institutional perspective that I have been emphasizing.
The increase in food prices in recent years has been dramatic.
World food prices roughly doubled between 2004 and 2008, then declined sharply as a consequence of the global economic crisis, and now are rising rapidly again and may soon reach or even exceed their 2008 peak.
The price of a commodity will rise if demand increases in the face of an upward-sloping supply curve (unit cost increasing with amount produced) or if, with demand unchanged, the supply curve shifts upward, or if both changes occur.
(There are other conditions, such as cartelization, that can drive up prices, but I’ll ignore them.) The supply curve of food is upward sloping, and demand for food has been increasing because of increases in population and in income in poor countries (with higher incomes, people eat more and also eat more costly food, such as meat as compared to bread or potatoes), and also because of demand for biofuels, such as ethanol, that often are manufactured from crops.
The increase in demand moves output out along the supply curve, increasing cost and therefore price.
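The movement out along an upward-sloping supply curve can be made concrete with a linear sketch; the intercepts and slopes below are arbitrary illustrations, not estimates for any actual food market.

```python
# Linear supply-and-demand sketch (numbers are arbitrary illustrations).
# Supply: p = a + b*q (upward sloping); demand: p = c - d*q.
def equilibrium(a, b, c, d):
    """Solve a + b*q = c - d*q for the market-clearing quantity and price."""
    q = (c - a) / (b + d)
    p = a + b * q
    return q, p

q0, p0 = equilibrium(a=10, b=2, c=100, d=1)  # initial demand
q1, p1 = equilibrium(a=10, b=2, c=130, d=1)  # demand curve shifts out
# Output moves out along the supply curve, and price rises with it.
print((q0, p0), (q1, p1))  # -> (30.0, 70.0) (40.0, 90.0)
```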
In addition, the supply curve itself has been moving upward because of the increased price of oil (a major input into fertilizer and into the operation of farm machinery and transportation of agricultural output to processing plants and markets) and the output-restricting measures taken by Russia and China recently, discussed by Becker.
The increased food prices are calamitous for very poor countries, and for poor people in other countries (in the absence of subsidy, such as the U.S. food-stamp program), but the world as a whole can take them in stride because they are self-correcting.
High prices reduce demand and thus move output down the supply curve, resulting in a new, lower-price equilibrium.
High food prices might even be considered a good thing from the standpoint of overall global economic efficiency.
Farmers are a small minority in wealthy countries such as the United States and France, but the small size of the agricultural sector actually augments its political power, because a compact interest group can enrich its members by obtaining protective legislation that raises the costs of the rest of the population by only a small fraction.
The result is widespread restrictions on food imports that reduce agricultural output in countries that produce agricultural products for export.
A notable restriction is the U.S. tariff on ethanol produced by Brazil, which manufactures ethanol from waste agricultural byproducts, such as cornstalks, rather than, as the U.S. does, from a valuable crop—corn.
The “green revolution” greatly expanded agricultural production and caused farm prices to fall.
Whether there will be comparable innovations in the years ahead cannot be predicted with any confidence, but they seem unlikely to offset negative factors, which are numerous.
Global warming will increase droughts and interfere with irrigation.
Glaciers are melting rapidly, which causes flooding (as recently in Pakistan) but decreases the amount of water in rivers or lakes.
The reason for that decrease is that while glacier evaporation increases rainfall, rain falls randomly and so does not significantly replenish the rivers and lakes that are major sources of irrigation, both natural and artificial; normal glacier melting, in contrast, feeds rivers and lakes directly.
There is in fact a worldwide water shortage; it can be alleviated by a variety of means, but most of them are costly, such as desalination.
World population and incomes can be expected to grow for a number of years; so, as agricultural output moves out along a rising supply curve, agricultural prices may continue to rise steeply for many years, with, eventually, calamitous consequences in poor countries.
And agriculture is increasingly vulnerable to blight because there is less agricultural diversity; farmers the world over use the same (“best”) seeds for each crop, reducing genetic diversity and so increasing the vulnerability of crops to disease.
The most promising technological innovations in agriculture involve genetic modification of crop seeds.
Limitations on the consumption of genetically modified crops may seem an example of legal measures intended for the protection of domestic agricultural production.
Certainly that is a motive and certainly the concern that eating genetically modified crops is dangerous to one’s health is spurious.
Nevertheless such crops pose potentially serious environmental dangers.
They are engineered to be like weeds—hardier than natural vegetation—and we all know what weeds can do to the vegetation they come into contact with.
Genetically modified crops thus may destroy large areas of natural vegetation.
Moreover, simply expanding the area of the earth’s surface that is under cultivation is likely to cause deforestation, which is environmentally harmful.
Moreover, as less fertile or accessible land is brought under cultivation, the marginal cost of food production rises.
The combination of rising population and incomes, stimulating demand, and increased costs of agricultural supply, spells continued steep increases in food prices.
(Some of the costs are external, however, and hence do not directly affect food prices.) The tendency may be reinforced by the kind of inefficient responses to high food prices that we are witnessing in Russia and China.
At some point increases in food prices may induce reform measures that would moderate the increase, such as elimination of tariff barriers.
And even before that, rising food prices may induce private adjustments, such as lower birthrates and less meat consumption, that reduce the demand for and the cost of food, and hence food prices.
Of course the future is uncertain, and food prices famously volatile.
Nevertheless the probability of continued increases in food prices cannot be reckoned slight.
“Quantitative easing” is a pompous, uninformative term for a central bank’s buying debt (bonds, mortgages, commercial paper, etc.) in quantity in an effort to depress interest rates in order to stimulate economic activity.
Recently the Federal Reserve began buying $600 billion (for starters) worth of long-term Treasury bonds.
It is buying them with money that it creates by a computer stroke.
That money will expand the money supply relative to the output of the economy and thus (depending however on how rapidly the money circulates) increase inflation, which in turn will reduce the burden of fixed debt and, it is hoped, thereby encourage people to spend more.
In addition, by increasing the demand for bonds, the program will increase their price, which in turn will reduce their return; bonds are fixed-income debt, so as the price of a bond rises, the interest it yields, being a fixed amount, becomes a smaller percentage of the price.
So interest rates will fall, stimulating (it is hoped) borrowing and hence spending.
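The inverse relation between bond prices and yields described above is simple division; the coupon and prices in this sketch are hypothetical.

```python
# A bond's coupon payment is fixed, so bidding up the price lowers the
# yield (figures hypothetical).
def current_yield(annual_coupon, price):
    return annual_coupon / price

print(current_yield(50, 1000))  # -> 0.05 (5% at a price of $1,000)
print(current_yield(50, 1250))  # -> 0.04 (4% after the price is bid up)
```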
Finally, by increasing the world supply of dollars, the purchase of bonds with cash newly created by the Fed will reduce the value of dollars relative to other currencies, thus making exports of American goods and services cheaper and imports of foreign goods more expensive.
As Becker points out, anticipation of inflation leads owners of dollar-denominated assets to sell those assets, which further increases the amount of dollars in the world economy relative to other currencies.
The first and third effects are probably more important than the second, the effect on long-term interest rates.
Those rates are low in part because short-term rates are very low and there is considerable substitution between short- and long-term loans.
Moreover, the modest incremental effect on long-term interest rates of increasing the demand for and thus price of long-term bonds may be offset by the effect of enlarging the money supply in causing inflation expectations to rise, which in turn increases interest rates.
The first (inflation) and third (devaluation) effects of the new program are not emphasized by the Fed because of their sensitivity.
Since the financial collapse of September 2008, the Fed has been pouring money into the economy, and as a result its total assets (mainly bonds of various sorts) have soared to $2.3 trillion.
The new quantitative-easing program may push the total well beyond $3 trillion (remember that the $600 billion is just the initial implementation of the program).
This will not necessarily cause an immediate increase in inflation, because much of the money supply is as a practical matter frozen because of uncertainty about the economic environment.
The banks are sitting on $1 trillion in excess reserves (in effect, lendable cash); large corporations have large cash hoards as well; and the personal savings rate has increased severalfold in the last two years.
Money that is hoarded rather than spent does not increase inflation.
If the Fed creates $1 in money and the private sector responds by increasing the amount of saving by $1, there is no effect on inflation because there is no increase in the number of dollars that are chasing the goods and services produced by the economy.
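That offsetting effect is the quantity theory of money in miniature: with M * V = P * Y, new money that is hoarded (so velocity falls in proportion) leaves the price level unchanged. The magnitudes below are purely illustrative.

```python
# Quantity-theory sketch: M * V = P * Y (all magnitudes illustrative).
def price_level(money, velocity, real_output):
    return money * velocity / real_output

p_before = price_level(money=100, velocity=2.0, real_output=200)
# The Fed adds 10% to the money stock, but every new dollar is hoarded,
# so total spending M * V is unchanged and so is the price level.
p_hoarded = price_level(money=110, velocity=2.0 * 100 / 110, real_output=200)
# If instead the new money circulated at the old velocity, prices rise 10%.
p_spent = price_level(money=110, velocity=2.0, real_output=200)
print(round(p_before, 6), round(p_hoarded, 6), round(p_spent, 6))
```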
Even if there is reluctance on the part of the private sector to spend the new money pumped into the private economy by the Fed’s new program, there probably will be at least a small uptick in inflation because of expectations of a future increase in spending and hence in inflation.
This can be a good thing by lightening fixed debt, such as mortgages that carry a fixed rather than adjustable interest rate.
The less debt people have, the less they will save and so the more they will spend.
Increased consumption will lead in turn to increased production and hence increased employment, resulting in higher incomes which in turn will spur more consumption.
Similarly, the devaluation of the dollar will increase the demand for U.S. exports, which in turn will spur production, and there will be a further effect of increasing production because of the reduction in imports; some of that reduction, moreover, will induce increased domestic production aimed at satisfying demand formerly supplied by imports.
So “quantitative easing” is a rational response to a depressed economy with stubbornly high unemployment and very low inflation.
But that doesn’t mean it’s a sensible response.
There are three principal objections to the new program.
The first is that the inflation that it aims to increase by a slight to moderate amount may get out of hand.
Suppose businesses and consumers increase their spending, and the banks lend the $1 trillion they’re holding in excess reserves (accounts at Federal Reserve banks, equivalent to cash).
The ratio of money in circulation to goods and services will rise, and inflation will tick upward, perhaps more than desired.
The Fed can reduce the money in circulation by selling some of its huge inventory of bonds, but by doing so it will raise interest rates (just as increasing the demand for bonds lowers interest rates, increasing the supply raises them), which may choke off the economic recovery.
If it hesitates to sell bonds and retire the cash it receives from the sales, expectations of inflation may soar, and inflation rise to a dangerous level; and to bring it down the Fed will then have to sell bonds after all, draining money out of the private economy at a rate that brings on the kind of very sharp recession that the nation experienced in the early 1980s.
No one knows or can know whether the Federal Reserve can walk such a tightrope.
Even if it can do so as a technical matter, political pressures may cause it to fall off the tightrope.
The Fed will be subject to greater political pressures, beginning in the near future, as a result of the financial regulatory reform legislation passed earlier this year, which by giving the Fed regulatory authority over financial institutions that are not commercial banks is increasing its political exposure.
The second objection to the new program concerns its effect on the role of the United States in the global economy.
Nations such as China, Germany, and Japan that are large exporters are irate at our devaluing our currency by increasing the world supply of U.S. dollars.
They are capable of retaliating, and if, as a result, our trade balance does not improve significantly, the program of “quantitative easing” may end up having no beneficial effect other than to increase inflation, which, as I said, may get out of hand.
Moreover, the U.S. dollar is the major international reserve currency. That is, it is the currency in which many international transactions, not limited to transactions with U.S. firms, are denominated because of the stability of the dollar.
The status of the dollar as the international reserve currency requires foreign central banks to buy dollars in quantity so that firms in foreign countries can buy dollars for their international transactions.
The dollars accumulated by central banks in turn are available to be lent back to the U.S. Treasury (by purchase of Treasury bonds from it) to help finance our huge national debt.
If we manipulate the value of the dollar to improve our trade balance, we undermine confidence in the dollar’s stability, and the demand for dollars as a reserve currency may fall.
The third and perhaps biggest objection to the program of quantitative easing is that it relaxes the pressure on our politicians to address urgent issues of economic reform.
The politicians are sitting back and letting the Fed try to hoist us out of our current economic hole.
The pressure to respond to two urgent needs is being blunted: the need to put the health reform and financial regulatory reform programs on hold because of the debilitating uncertainty that they have injected into the business environment, and the need to take effective but politically painful steps (including entitlements reform) toward increasing the rate of economic growth and reducing the rate of increase of the national debt.
These objections might recede in significance if “quantitative easing” could be expected to stimulate the economy.
But that seems unlikely.
Banks and corporations are awash with money.
Their reluctance to lend because of the uncertainty of the business environment is unlikely to be overcome by a further and probably modest reduction in long-term interest rates—modest because of the substitution effect I mentioned earlier and because the bond-buying program will increase expectations of future inflation, which in turn will push up long-term interest rates.
Becker makes the important point that growth in the deficit, because of an increased gap between government spending and tax revenues, is tolerable if GDP grows faster; for it is not the absolute size of the deficit, but its relation to the size of the economy, that is important.
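Becker's point reduces to the arithmetic of the debt-to-GDP ratio, which a short sketch makes explicit; the starting values and growth rates below are hypothetical, not projections.

```python
# Debt-to-GDP arithmetic (starting values and growth rates hypothetical):
# even a debt that grows every year shrinks relative to the economy
# whenever GDP grows faster.
def debt_to_gdp_path(debt, gdp, debt_growth, gdp_growth, years):
    ratios = []
    for _ in range(years):
        debt *= 1 + debt_growth
        gdp *= 1 + gdp_growth
        ratios.append(debt / gdp)
    return ratios

path = debt_to_gdp_path(debt=100, gdp=200, debt_growth=0.02,
                        gdp_growth=0.04, years=10)
# The debt rises in absolute terms, yet the ratio falls year after year.
print(all(a > b for a, b in zip(path, path[1:])))  # -> True
```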
And there are, as he says, a number of reforms that would result in faster economic growth, including tax reforms and a rational immigration policy.
And spending could be cut—just placing social security and Medicare on a means-tested basis would do wonders.
But there are two questions to ponder.
One is whether reforms aimed at increasing economic growth (rather than reducing spending) would be likely to increase that growth by a large enough margin to make a growing deficit shrink as a percentage of GDP.
I am skeptical.
Americans are lightly taxed by international standards and the rate of formation of new businesses normally is very high.
Small businesses are currently having trouble borrowing but that is a consequence of the continuing weakness of the banking system rather than of anything to do with the tax system, and the financial system is likely to revive before tax reform could be implemented.
The higher tax rates of the Clinton years seem not to have inhibited economic growth.
It would be great if the immigration laws were changed to encourage more immigration by high-IQ foreigners and also by wealthy ones, but, again because of the continuing weakness of the U.S. economy, the demand for skilled immigrants is at present weak.
There may also be a practical ceiling on the rate of economic growth of a mature, highly complex economy.
Maybe at a growth rate above 3 percent, labor and materials shortages create bottlenecks and inflation that make it prudent for the Federal Reserve to push up interest rates in order to slow down growth.
If this is right and if taxes are cut and spending rises, it is hard to see how the annual deficit can be kept from rising by more than 3 percent.
It is illuminating to compare the increase in the national debt during the Presidency of George W. Bush—a period in which Congress (until 2007) and the Presidency were highly pro-business—with the increase in GDP during that period.
In 2002, the debt increased by 5.5 percent and GDP by 1.3 percent.
The corresponding figures for 2003 were 6.2 percent and 1.4 percent; for 2004, 5.7 percent and 3.4 percent; for 2005, 3.7 percent and 2.6 percent; for 2006, 3.4 percent and 2.9 percent; for 2007, 3.6 percent and 2.8 percent; for 2008, 5.0 percent and 2.0 percent; and for 2009, 5.5 percent and 2.6 percent.
Since then of course the gap has widened, but that is because of the economic crisis.
We would feel great if we were back in the Bush economy! Yet in every year of Bush’s Presidency, the national debt grew faster than GDP.
That may be the “new normal.”
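The year-by-year comparison quoted above can be checked mechanically; the figures are simply those given in the text.

```python
# Annual growth of the national debt vs. GDP, in percent, as quoted above.
growth = {                 # year: (debt growth, GDP growth)
    2002: (5.5, 1.3), 2003: (6.2, 1.4), 2004: (5.7, 3.4),
    2005: (3.7, 2.6), 2006: (3.4, 2.9), 2007: (3.6, 2.8),
    2008: (5.0, 2.0), 2009: (5.5, 2.6),
}
# In every year of the period, the debt grew faster than the economy.
print(all(debt > gdp for debt, gdp in growth.values()))  # -> True
```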
The second question (which relates to the hypothesis of a new normal) is the political realism of economic reforms that would increase the growth rate or reduce the deficit.
As I have argued previously, both political parties seem to have converged on a policy of high spending and low taxes.
The Democrats want even higher spending than the Republicans do, and the Republicans want even lower taxes than the Democrats do, but these differences should not blind us to the realization that neither party has serious plans for reducing the annual increases in the deficit.
I do not see this changing in the new Congress.
If the Republicans had won control of the Senate, they would be under pressure to produce legislation that the President would sign, lest the new Congress be accused of being a “do-nothing” Congress—the accusation that won Truman the Presidency in 1948.
But since the Republicans do not control Congress, an oppositional stance is attractive.
And it is more difficult for Obama to compromise with Republicans than it was for Clinton, because Obama is more liberal than Clinton was and the Republicans are more conservative today than in the 1990s.
The status of the dollar as the international reserve currency, and the mercantilist policies of countries like China and Germany, will enable us to finance our growing deficit, and thus postpone the day of reckoning, for some time.
But at some point the wheels may start coming off the chassis.
Still, life is full of surprises.
The prospects for the United States looked grim in the 1970s and bright for Japan.
Then Reagan was elected and the sky cleared here, and then the Japanese housing and banking bubbles burst and Japan entered the long period of economic stagnation in which it still finds itself.
Maybe we’ll get lucky again.
I agree with everything Becker says, but will add a few points.
Not only would banning television advertising of fattening foods on programs oriented to children and teenagers not reduce obesity, but it might increase it.
To the extent that, as Becker suggests, such advertising has a much greater effect on brand shares than on aggregate demand for the products, the advertisers as a whole might be better off if forbidden to advertise.
With higher profits, and an important form of nonprice competition eliminated, advertisers might compete more on price, resulting in lower prices to consumers and therefore greater consumption.
A more effective measure to reduce youthful obesity might be to ban the sale or service of soft drinks and other high-calorie foods in schools, or even to tax such foods heavily.
Of course such measures are from an economic standpoint justifiable only if the growth in obesity represents a market failure (and even then, only if the costs of the measures are lower than the benefits in correcting such a failure). "Obesity" is a loaded term; it is the name we give to being too fat.
It is possible that being fatter than doctors think healthy is optimal, just as it is possible that eating a diet that deviates from what doctors would prescribe for someone who aspires to live to be a hundred is optimal.
People trade off health costs for benefits in other currencies; food high in calories tends to be both delicious and cheap.
The health effects of overweight are highly publicized.
In addition, in our society fat people are generally considered much less attractive than thin people, and there is a considerable premium in the job market for attractive people, partly because coworkers and supervisors obtain utility from associating with attractive people, partly because being attractive enhances self-confidence, self-esteem, and social skills.
In addition, thin people should have a significant advantage in competing for jobs involving trust, since thinness signifies self-control and in turn a low discount rate, which should make a worker more concerned with his reputation and therefore more trustworthy, although a countervailing factor is that employers may distrust the commitment to work of employees who look as if they spend most of their day in the gym!
Given all the negatives of overweight, it is difficult to believe that obese people have underestimated the costs of being overweight.
But the huge diet industry, and the growing resort of the obese to dangerous abdominal surgery (gastric-bypass or bariatric surgery--"stomach stapling") are contrary evidence.
It is much easier to avoid gaining weight than to lose weight, and while some people have an unfortunate biology that creates irresistible cravings for excessive amounts of fat, the obesity problem seems much more widespread.
If the cause were biological, the well-documented increase in obesity over the last several decades would be inexplicable.
A factor that the economist Tomas Philipson and I have emphasized is the increasingly sedentary character of activity in both work and the home, as a result of the shift from manufacturing to services and the growth of labor-saving devices in both the workplace and the home.
In the old days the average individual, male or female, was in effect "paid" to expend calories, the payment taking the form of pecuniary income for strenuous work in the workplace or nonpecuniary income from household work.
Today one has to pay to expend calories by joining a gym or otherwise taking time from work or leisure to exercise.
As Becker points out, the trend has affected children and teenagers because of the growing substitution of sedentary leisure activities for athletics.
Strikingly, because of concerns over liability, many schools no longer make physical education mandatory.
Still another factor may be that as more and more people become overweight, the stigma of obesity diminishes.
When I was a kid, fat kids were rare, and were teased.
The more fat kids there are, the more "normative" their appearance becomes.
In addition, if parents are fat, the credibility of their lecturing their children on the importance of remaining thin is undermined; "Do as I say, not as I do" is not a very effective means of persuasion.
Political correctness may even be a factor.
Jokes at the expense of fat people used to be a staple of comedy (remember Abbott and Costello?).
No more.
Political correctness has reduced the use of ridicule to enforce social norms.
All this said, the case for public intervention to reduce obesity is uncertain.
The main costs of obesity, in increased illness and disability, are borne by the obese themselves, which greatly weakens the economic case for intervention.
True, the obese are able to shift some of their medical and disability costs to others through Medicaid, Medicare, and the social security disability program--subsidized programs that do not limit benefits to the obese even though the obese experience increased illness and disability as a consequence of their obesity.
Yet the benefits of preventive health can be exaggerated.
Preventive care increases the percentage of the elderly in the population, and the elderly are very heavy demanders of expensive--and subsidized--health care and pensions.
Becker has presented in his post today a compelling restatement of the economic case for capital punishment.
I have a few minor disagreements and qualifications, and I will first mention them and then respond to some of the very large number of comments that my last week's posting elicited.
I do not consider revenge an impermissible ground for capital punishment.
Revenge has very deep roots in the human psyche.
As I have long argued, basing the argument on work by evolutionary psychologists such as Robert Trivers, the threat of revenge must have played an essential role in maintaining order in the "ancestral environment." That is a term that evolutionary biologists use to describe the prehistoric era in which human beings evolved to approximately their current biological state.
In that era there were no laws, police, etc., so the indignation that would incite a person to revenge himself upon an aggressor must have had substantial survival value and become "hard wired" in our brains.
The wiring remains, and explains some of the indignation that people feel, especially but not only the friends and family members of murder victims, toward the murderer.
It seems plausible to me (here modifying what I said in my original posting) that the net increment in utility that they derive from the execution (versus life imprisonment) of the murderer exceeds the net increment in disutility that the murderer derives from being executed rather than imprisoned for life.
The strong support for capital punishment in public opinion polls provides limited support for this conjecture.
I do not favor public executions; nor do I favor dismemberment or other horrific modes of execution.
The incremental deterrent effect might well be nontrivial, but would be outweighed by public revulsion.
There is also the danger of brutalization.
As Friedrich Nietzsche pointed out, making people squeamish is one of the projects of modernity, and may explain the banning of blood sports as well as the movement away from public and gruesome executions.
The idea is that if people become unaccustomed to bloody sights they will be less likely to employ violence in their relations with other people.
Still another objection to public and gruesome executions is that they offer murderers an opportunity to die as heroes by showing fortitude.
I agree that marginal deterrence is important and that it generally argues for reserving the heaviest sentences for the most serious crimes.
But there are two important qualifications.
First, a very heavy sentence may be necessary to deter a minor crime because the likelihood of apprehension is very low.
The expected punishment cost of crime is, as a first approximation (ignoring attitude toward risk), the punishment if imposed multiplied by the probability of imposition, so if the probability is very low a compensating increase in punishment is indicated.
This does not impair marginal deterrence as long as the crimes are not close substitutes: a heavy fine for litterers will not increase the robbery rate, whereas capital punishment for robbers would increase the murder rate (of robbers' victims)--were it not for my second qualification.
Even if murder and robbery were both capital crimes, there would be marginal deterrence because the police would search much harder for a robber who murdered his victim; the more extensive search would compensate, in part anyway, for the loss of the information that the victim could have given the police to identify the robber.
Moreover, capital punishment is merely a ceiling; even if robbery were a capital crime, judges and juries would be much less likely to impose the death sentence on a robber who had not killed his victim than on one who had.
Marginal-deterrence theory provides, however, a compelling reason to execute prisoners sentenced to life without parole who murder in prison; the threat of a sentence of imprisonment can have no deterrent effect on them.
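The first-approximation formula for the expected punishment cost can be replayed in a few lines of Python (the figures below are purely illustrative, not drawn from any of the studies discussed here):

```python
# Expected punishment cost, ignoring attitude toward risk:
# the punishment if imposed multiplied by the probability of imposition.
def expected_punishment(probability: float, punishment: float) -> float:
    return probability * punishment

# When the probability of apprehension falls tenfold, a compensating
# tenfold increase in the punishment restores the same expected cost.
assert expected_punishment(0.10, 100_000) == expected_punishment(0.01, 1_000_000)
print(expected_punishment(0.01, 1_000_000))  # 10000.0
```

This is why a very heavy sentence for a rarely detected crime need not impair marginal deterrence, so long as the crimes involved are not close substitutes.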
Becker mentions the possibility of racial discrimination in execution.
Studies done some years ago--I do not know whether they would be descriptive of current practice--revealed the following pattern: murderers of black people were less likely to be executed than murderers of white people.
Since blacks were more likely to murder other blacks than to murder whites, this meant that blacks were less rather than more likely to be executed than whites, relative to the respective murder rates of the two races.
(Blacks commit murders at a much higher rate than whites.) The explanation offered was that judges and juries tended to set a lower value on black victims of murder than on white ones.
From this some observers inferred that capital punishment discriminates against blacks.
The inference is incorrect.
The proper inference is that murderers of blacks are underpunished.
I turn now to the comments on my posting.
A long comment by "ohwilleke" makes a number of interesting points, but they do not support his opposition to capital punishment.
He notes first of all that many factors influence the murder rate besides the probability of execution.
That is true, but it does not, as he suggests, make it "insanely difficult to make any econometric estimate" that is not "meaningless." Econometrics, which is to say the set of statistical methods used by economists to try to tease out causal factors, enables the particular factor of interest, in this case the probability of execution, to be isolated.
The methods are not entirely reliable, which is why neither Becker nor I claim that economists have proved that capital punishment deters; we merely claim that there is significant evidence that it does.
Many commenters remark, correctly, that murder rates are higher in the South, where most executions occur, than in other regions.
But that is not an argument that executions do not deter.
The higher the background rate of murder, the more severe one expects punishments to be.
A high murder rate implies a high expected benefit from murder, and so the expected cost of punishment has to be jacked up to offset that greater benefit.
Ohwilleke's comment claims that the "error rate" in capital punishment is 10 percent.
This is incorrect.
Not a single person among the 119 that he contends were erroneously sentenced to death was executed.
That is a zero error rate.
One commenter asks whether capital punishment "really deter[s] the type of person who actually does the murdering?" Of course, if someone in a state that has capital punishment commits the kind of murder that puts him at risk of such punishment, the threat of capital punishment has not deterred him.
This is the usual situation with criminal punishment.
People who commit crimes are people for whom the expected cost of punishment, combined with the other costs of crime, is less than the expected benefit of the crime.
The purpose of punishing these people is not to deter them--by committing the crime in the face of threatened punishment they have shown themselves to be undeterrable--but to deter people who, were it not for the expected punishment cost, would commit the crime because its other costs were lower than its expected benefit.
Finally, several comments usefully point out that capital punishment has a secondary deterrent effect: it induces murderers to plead guilty and receive a life sentence.
My ignorance of popular culture has once more been exposed by alert readers of this blog.
I had never heard of Chris Farley.
I have now looked him up on Google Images.
I get it.
Several comments suggest that I ignored the biological basis of obesity.
Let me clarify.
Of course obesity is a biological phenomenon (though also a social one to the extent that in our society "obesity" carries negative connotations).
I meant only that, since human biology changes only very slowly, changes in human biology can't explain the recent increases in obesity.
But the biological foundations of obesity do require more emphasis than I gave them.
In what evolutionary biologists call the "ancestral environment," a period ending some 20,000 years ago when our biological development reached approximately its current level, a genetic propensity to eat as much high-calorie food as possible had great survival value because the food supply was uncertain, and high-calorie food converts to fat that people can live off for a time if they have no food.
When the food supply becomes assured and people become sedentary, they continue wanting to eat high-calorie foods because that is a genetic predisposition.
They can avoid becoming fat by eating less than their genes tell them to, as it were, but this--fighting the genes--requires great self-discipline.
It is much easier to control one's weight if one is physically active, in effect recreating the conditions of the ancestral environment in the gym or equivalent.
But that is costly, especially in time.
Biology plays a further role.
Differences in biology between people make it much easier for some people than for others to control their weight, sometimes without any exercise.
This blurs the value of thinness as a signal of trustworthiness as a result of having a low discount rate.
Notice, as a curious historical note, that as late as the nineteenth century obesity was taken as a signal of prosperity and attractiveness, and thinness (including of women) was taken as a signal of poverty and ill health.
This was because poor people tended to be undernourished and hard working, and tuberculosis and other wasting diseases were disproportionately diseases of the poor.
Since food was expensive and leisure a privilege of wealth, being fat was a sign of success and valued accordingly.
I may have been precipitate in suggesting that reducing obesity would not affect aggregate medical costs.
What I had in mind is that because on average a very high percentage of one's total lifetime medical costs are incurred in the last few months of life, and because the older one is, the greater on average one's medical needs, the principal financial effect of improving health in youth and middle age may be to increase the elderly population, and of course death can only be postponed, not eliminated.
But I am painting with too broad a brush; careful study is required to assess the costs of lifestyle changes that might improve health.
I regret having failed to respond to comments on my last week's posting, on truancy.
I recognize the irony; I have been truant.
Let me belatedly respond to two particularly important comments.
The first is that the "carrot" approach, as in the Progresa program that Becker mentions, may work much better in underdeveloped countries, for example countries in which many girls are kept out of school by their parents.
To take an extreme example, if 50 percent of children are truant, then paying all parents to send their kids to school will have a chance of affecting the attendance of half the children; if only 1 percent are truant, then 99 percent of parents receive a payment that will not affect their behavior because their children are not truant.
Second, a point as important as it is obvious, getting kids into school will confer few if any social benefits unless schooling improves their life prospects, particularly their employment prospects.
If there are no jobs for them when they get out, the only effect of giving them schooling may be to radicalize them.
It is thus ironic that the French have a program for forcing kids into school, since the job opportunities for graduates are so limited by the country's employment laws.
The recent execution by the State of California of the multiple murderer Stanley "Tookie" Williams has brought renewed controversy to the practice of capital punishment, which has been abolished in about a third of the states and in most of the nations that the United States considers its peers; the European Union will not admit to membership a nation that retains capital punishment.
From an economic standpoint, the principal considerations in evaluating the issue of retaining capital punishment are the incremental deterrent effect of executing murderers, the rate of false positives (that is, execution of the innocent), the cost of capital punishment relative to life imprisonment without parole (the usual alternative nowadays), the utility that retributivists and the friends and family members of the murderer's victim (or in Williams's case victims) derive from execution, and the disutility that fervent opponents of capital punishment, along with relatives and friends of the defendant, experience.
The utility comparison seems a standoff, and I will ignore it, although the fact that almost two-thirds of the U.S.
population supports the death penalty is some, albeit weak (because it does not measure intensity of preference), evidence bearing on the comparison.
Early empirical analysis by Isaac Ehrlich found a substantial incremental deterrent effect of capital punishment, a finding that coincides with the common sense of the situation: it is exceedingly rare for a defendant who has a choice to prefer being executed to being imprisoned for life.
Ehrlich's work was criticized by some economists, but more recent work by economists Hashem Dezhbakhsh, Paul Rubin, and Joanna Shepherd provides strong support for Ehrlich's thesis; these authors found, in a careful econometric analysis, that one execution deters 18 murders.
Although this ratio may seem implausible given that the probability of being executed for committing a murder is less than 1 percent (most executions are in southern states--50 of the 59 in 2004--which that year had a total of almost 7,000 murders), the probability is misleading because only a subset of murderers are eligible for execution.
Moreover, even a 1 percent or one-half of 1 percent probability of death is hardly trivial; most people would pay a substantial amount of money to eliminate such a probability.
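The "less than 1 percent" probability can be checked directly from the figures in the preceding paragraph (a rough sketch; it treats the southern-state numbers as representative of the whole):

```python
# Rough probability of execution per murder, using the 2004 figures in the
# text: 50 executions in southern states, which had almost 7,000 murders.
executions_south_2004 = 50
murders_south_2004 = 7_000

p_execution = executions_south_2004 / murders_south_2004
print(f"{p_execution:.4f}")  # 0.0071 -- well under 1 percent
```

As the text notes, this ratio understates the risk faced by the subset of murderers actually eligible for execution.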
As for the risk of executing an innocent person, this is exceedingly slight, especially when a distinction is made between legal and factual innocence.
Some murderers are executed by mistake in the sense that they might have had a good legal defense to being sentenced to death, such as having been prevented from offering evidence in mitigation of their crime--for example, evidence of having grown up in terrible circumstances that made it difficult for them to resist the temptations of a life of crime.
But they are not innocent of murder.
The number of people who are executed for a murder they did not commit appears to be vanishingly small.
It is so small, however, in part because of the enormous protraction of capital litigation.
The average amount of time that a defendant spends on death row before being executed is about 10 years.
If the defendant is innocent, the error is highly likely to be discovered within that period.
It would be different if execution followed the appeal of the defendant's sentence by a week.
But the delay in execution not only reduces the deterrent effect of execution (though probably only slightly) but also makes capital punishment quite costly, since there is a substantial imprisonment cost on top of the heavy litigation costs of capital cases, with their endless rounds of appellate and postconviction proceedings.
Although it may seem heartless to say so, the concern with mistaken execution seems exaggerated.
The number of people executed in all of 2004 was, as I noted, only 59.
(The annual number has not exceeded 98 since 1951.) Suppose that were it not for the enormous delays in execution, the number would have been 60, and the additional person executed would have been factually innocent.
The number of Americans who die each year in accidents exceeds 100,000; many of these deaths are more painful than death by lethal injection, though they are not as humiliating and usually they are not anticipated, which adds a particular dread to execution.
Moreover, for what appears to be a psychological reason (the "availability heuristic"), the death of a single, identified person tends to have greater salience than the death of a much larger number of anonymous persons.
As Stalin is reported to have quipped, a single death is a tragedy, a million deaths is a statistic.
But that's psychology; there is an economic argument for speeding up the imposition of the death penalty on convicted murderers eligible for the penalty; the gain in deterrence and reduction in cost are likely to exceed the increase in the very slight probability of executing a factually innocent person.
What is more, by allocating more resources to the litigation of capital cases, the error rate could be kept at its present very low level even though delay in execution was reduced.
However, even with the existing, excessive, delay, the recent evidence concerning the deterrent effect of capital punishment provides strong support for resisting the abolition movement.
A final consideration returns me to the case of "Tookie" Williams.
The major argument made for clemency was that he had reformed in prison and, more important, had become an influential critic of the type of gang violence in which he had engaged.
Should the argument have prevailed? On the one hand, if murderers know that by "reforming" on death row they will have a good shot at clemency, the deterrent effect of the death penalty will be reduced.
On the other hand, the type of advocacy in which Williams engaged probably had some social value, and the more likely the advocacy is to earn clemency, the more such advocacy there will be; clemency is the currency in which such activities are compensated and therefore encouraged.
Presumably grants of clemency on such a basis should be rare, since there probably are rapidly diminishing social returns to death-row advocacy, along with diminished deterrence as a result of fewer executions.
For the more murderers under sentence of death there are who publicly denounce murder and other criminality, the less credibility the denunciations have.
I agree with Becker that the great strength of charitable foundations, and the principal justification for their tax exemption, is that they bring about a decentralization of charitable giving, breaking what would otherwise be a governmental monopoly and thus reducing the play of politics in charity. (A secondary justification for the exemption is to offset the free-rider problem in charitable giving: if you give to my favorite charity, I benefit, and so the more you give the less I will be inclined to give.)
In addition, however, to the extent that charitable giving substitutes for government spending, such giving (minus the tax benefits to the giver) represents a form of voluntary taxation, like state lotteries.
Given the enormous skewness of incomes in today's United States, it is good to encourage voluntary taxation of the wealthy.
But I would not place much weight on competition by universities and other recipients of charitable giving for foundation grants, since the recipients will compete whatever the source; universities compete for government grants just as they do for private grants.
A perpetual charitable foundation, however, is a completely irresponsible institution, answerable to nobody.
It competes neither in capital markets nor in product markets (in both respects differing from universities), and, unlike a hereditary monarch whom such a foundation otherwise resembles, it is subject to no political controls either.
It is not even subject to benchmark competition--that is, evaluation by comparison with similar enterprises--except with regard to the percentage of its expenditures that go to administration (staff salaries and the like) rather than to donees.
The puzzle for economics is why these foundations are not total scandals.
The solution to the puzzle seems to me twofold: the foundations are controlled by trustees, whose prestige is invested in the success of the foundation; and foundations are constrained by law, as well as by the limited benchmark measure that I mentioned, to give away most of their income, and this limits the ability of staff to appropriate the foundation's income for their personal benefit.
A deeper puzzle relates to the leftward drift in foundation policies that Becker discusses, a drift enabled by the perpetual character of a foundation.
(I agree that foundation staff work is attractive to liberals and that the children of the founders tend to be more liberal than their fathers.
In both cases the main reason is probably that while the creators of the major foundations invariably are successful businessmen, and business values are conservative, foundation staff are not businesspeople and many children of wealthy businesspeople do not go into business either.) The puzzle is why conservatives establish perpetual foundations.
Don't they realize what is likely to happen down the road? The answer may be that the desire to perpetuate their name is greater than their desire to support conservative causes.
In any event, a rule forbidding perpetual foundations would be paternalistic.
If rich people want to squander their money on feckless foundations, that should be their privilege.
Moreover, to the extent that foundation spending substitutes for government spending, the comparison is of two inefficient forms of enterprise, and the foundations may be the less inefficient form.
I agree with Becker that the fact that a person like Bill Gates or Warren Buffet is a great businessman doesn't give him any comparative advantage in doing good.
I also question the appropriateness of American foundations' spending money abroad.
A foreign aid program is an instrument of U.S.
foreign policy that can be undermined by private expenditures in the amount now being spent abroad by the Gates Foundation.
And I have trouble understanding why American taxpayers should (via the tax breaks for charitable giving) help finance foundations' contributions to foreign countries.
At the same time, critics of the small percentage of U.S.
GDP that the United States devotes to foreign aid ought in fairness to add in the foreign giving by our foundations in calculating that percentage.
There is a further question, given that Gates and Buffet remain active in business, how much of their charitable giving is actually in support of their businesses.
This is a particular concern with regard to Gates because Microsoft operates worldwide and is a controversial company.
The Gates Foundation helps to polish Microsoft's image.
There is nothing wrong with corporate image building, but there is no reason to favor it with tax breaks.
This topic could be thought a continuation of our last week's topic, the trans fat ban in New York City.
We are again dealing with safety regulation.
The case for punishing drunk drivers may seem clearer than the case for banning trans fats in restaurant meals because the externality is more pronounced and consumer competence is not in issue, but the appearance is deceptive.
Becker's proposal for heavier penalties for drunk driving could be criticized as paternalistic, because it regulates an input rather than an output.
If there are 1.4 million annual arrests for drunk driving, and if we assume realistically that this is only a fraction of the actual incidents of drunk driving, yet only 2,000 innocent people are killed by drunk drivers, then it follows that most drunk driving is harmless.
Why then punish it with arrests and severe penalties? Why not just punish those drunk drivers who cause deaths or injuries to nonpassengers? In fact we do punish such drivers, under such rubrics as reckless homicide (if the victim dies) or reckless infliction of bodily injury.
And the punishments are severe.
Why punish the 99+ percent of drunk driving that is harmless? Indeed, if the penalties for reckless homicide are optimal, the implication is that the number of deaths from drunk driving, 17,000 a year, is also optimal.
This is actually a plausible inference.
If there are only 2,000 nonpassenger deaths (other than that of the drunk driver himself) caused by drunk driving every year (and how many of the accidents in which a drunk driver is involved are actually caused by the drinking?), then the probability of being killed by a drunk driver is very small, and the value of life estimate that I used in my post on the trans-fat ban should be usable here as well to conduct a cost-benefit analysis of drunk driving.
The probability of a drunk driver's killing someone must also be small, given the number of drunk drivers implied by the arrest statistics.
Suppose the annual probability that a drunk driver will kill a nonpassenger is .001 (as it would be, given the 2,000 victim figure, if there are 2 million drunk drivers, which is a very modest extrapolation from the arrest figure, since many drunk drivers are not caught).
Then the expected injury cost from drunk driving is $7,000 (.001 x $7 million).
(This corresponds to Becker's $10,000 figure, which seems to me too high, as it disregards the drunk drivers who are not arrested.
Notice that if only a third of drunk drivers are arrested each year, the expected-cost figure drops to $3,333.) This implies that a driver who derives at least $7,000 in utility per year from drinking while driving (more commonly, shortly before driving) is behaving optimally and should not be punished at all.
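The expected-cost arithmetic in this paragraph can be reproduced exactly (a sketch using only the figures given in the text; the function name is mine):

```python
# Expected injury cost per drunk driver: the annual probability of killing
# a nonpassenger times the value of life ($7 million in the text).
VALUE_OF_LIFE = 7_000_000
NONPASSENGER_DEATHS = 2_000  # annual nonpassenger deaths from drunk driving

def expected_injury_cost(num_drunk_drivers: int) -> float:
    p_kill = NONPASSENGER_DEATHS / num_drunk_drivers
    return p_kill * VALUE_OF_LIFE

print(round(expected_injury_cost(2_000_000)))      # 7000: the $7,000 figure
# If the 1.4 million annual arrests catch only a third of drunk drivers:
print(round(expected_injury_cost(3 * 1_400_000)))  # 3333: the $3,333 figure
```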
The larger issue that the drunk-driving question raises is the choice between ex ante and ex post regulation.
Health inspections of restaurants and, yes, a ban on trans fats are examples of ex ante regulation.
Such regulation prevents dangerous activity rather than waiting for the danger to materialize and using the legal system to punish the injurer.
The tort system is an example of ex post regulation.
If you drive recklessly but don't injure anybody, you have not committed a tort.
Tort law comes into play only when an injury occurs.
The theory is that an optimal tort penalty for the injury deters tortious conduct, not perfectly--or there would be no tort cases--but well enough.
Criminal law is a mixed bag.
Crimes that result in injury are punished, usually quite severely, but much preparatory conduct--attempts and conspiracies--is punished as well, even when no harm results (the failed attempt, the abandoned conspiracy).
Arrests for speeding--and for drunk driving--are examples of ex ante regulation.
The economic argument for ex ante regulation is that ex post regulation is often inadequate.
This is obvious in the trans-fat situation--it would be impossible to figure out which victims of heart disease owed the disease, and to what extent, to which restaurants.
In the case of reckless homicide, the answer is less clear.
Suppose drunk driving is inefficient--the drunk driver derives less utility than the expected accident cost--and so we want to deter it by punishing the drunk driver who kills or injures a  nonpassenger.
Suppose the value of life is $7 million and 10 percent of the drivers are not apprehended.
Then the optimal penalty would be a fine of $7,780,000 ($7 million ÷ .9).
Few drivers could pay that, so the trick would be to impose an equivalent disutility on them by nonpecuniary means, such as imprisonment.
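The grossing-up calculation behind the $7,780,000 figure is simple enough to sketch (the function name is mine, purely illustrative):

```python
# Optimal ex post fine when some injurers escape apprehension: divide the
# harm by the probability of being caught, so that the injurer's expected
# liability equals the full harm.
def optimal_fine(harm: float, p_apprehension: float) -> float:
    return harm / p_apprehension

# $7 million value of life, 10 percent of drivers not apprehended:
print(round(optimal_fine(7_000_000, 0.9)))  # 7777778, about $7.78 million
```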
This is not to suggest that punishing drunk drivers who are arrested, the method that Becker endorses, can't achieve the correct deterrence.
But punishing just the ones who kill might be more efficient--there wouldn't be as much need for policemen, there would be fewer trials and prison terms, and probably many drunk drivers are quite harmless, for it is unlikely that everyone who drives while drunk has an equal probability of causing an accident.
In general, heavy punishment of fewer people is cheaper than light punishment of more people.
Thus, only if ex post punishment failed to deter optimally would there be a strong case for punishing drunk drivers who are not involved in accidents with nonpassengers (assuming drunk driving is inefficient, the assumption questioned earlier in this comment)--or at least a strong case based on the simple model of rational choice that underlies my analysis.
Maybe drunk drivers systematically underestimate the effect of drinking on the likelihood of an accident, or believe that they can fully compensate for the danger by driving more slowly, or exaggerate the degree to which they can hold their liquor, or exaggerate their driving skill.
Maybe their drunk-driving behavior is addictive, and they do not realize, before starting to drink, that they won't be able to avoid drinking before they drive.
These might be grounds for ex ante regulation--even for regulations anterior to arrest, such as stiff alcohol taxes.
Economic inequality is growing in the United States and other developed countries, and also in rapidly developing countries, notably China and India.
Becker and I blogged about economic inequality on April 23, almost eight months ago, but indications that inequality is surging at the very top of the income distribution merit a further look, as does the recent study of world income inequality that is the focus of Becker's comment.
Recent reports in the media document phenomenal returns to hedge-fund operators, private-equity investors, and other finance specialists, astronomical CEO salaries, enormous returns to software entrepreneurs, a stampede of lawyers and doctors to Wall Street, $200,000 law-firm signing bonuses for 27-year-olds who have clerked for the Supreme Court, enormous philanthropic gifts ($100 million gifts to colleges and universities by alumni are no longer unusual), and soaring demand for products bought only by superwealthy people, such as full-sized passenger airliners converted at great expense to private airplanes, $40 million homes, paintings costing tens of millions of dollars, and automobiles costing several hundred thousand dollars.
There are now almost 800 billionaires in the United States and countless millionaires, and one out of every 500 U.S.
households has an annual income of at least $1 million.
Now this is to look only at the top of the income distribution.
It is not to consider the income distribution as a whole, let alone poverty.
In the more conventional focus on earnings by quintiles, one sees little change in recent years.
But since 1980 the percentage of total personal income going to the top 1 percent of earners has risen from 8 percent to 16 percent.
It is the top of the distribution on which I'll be focusing.
What are the causes, and what are the effects, of this trend in the income (and of course wealth) of the highest-earning segment of the distribution? Part of the explanation is reduced marginal tax rates, because high marginal tax rates discourage risk-taking.
Consider two individuals: one is a salaried worker with an annual income of $100,000 and good job security, and the other is an entrepreneur with a 10 percent chance of earning $1 million in a given year and a 90 percent chance of earning nothing that year.
Their average annual incomes are the same, but a highly progressive tax will make the entrepreneur's expected after-tax income much lower than the salaried worker's.
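To make the arithmetic concrete, here is a minimal Python sketch; the two-bracket schedule is a hypothetical illustration, not an actual tax code:

```python
# Hypothetical two-bracket progressive tax, for illustration only:
# 20% on income up to $100,000, 50% on the excess.
def after_tax(income):
    if income <= 100_000:
        return income * (1 - 0.20)
    return 100_000 * (1 - 0.20) + (income - 100_000) * (1 - 0.50)

# Salaried worker: $100,000 with certainty.
salaried = after_tax(100_000)

# Entrepreneur: 10% chance of $1,000,000, 90% chance of nothing.
entrepreneur = 0.10 * after_tax(1_000_000) + 0.90 * after_tax(0)

# Pre-tax expected incomes are identical ($100,000), but after tax:
print(salaried)      # 80000.0
print(entrepreneur)  # 53000.0
```

The steeper the progression, the wider the gap between the two after-tax expectations, which is the disincentive to risk-taking the text describes.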
Many of the people at the top of the income distribution are risk takers who turned out to be lucky; the unlucky risk takers fell into a lower part of the distribution.
It is rich people as a class who are growing relatively richer, not necessarily individual rich persons.
Marginal income tax rates on the wealthy have not declined much in recent years, however, though the income tax rate cuts since 2001 have favored the wealthy.
Another and more important factor in the recent wealth surge is a growing return to high IQs; outstanding success in highly complex fields such as finance and software is highly correlated with high levels of intelligence.
And increased size of markets as a consequence of increased international trade provides greater returns to successful innovations.
I am more interested in the effects of the increasing incomes of the rich--though one might ask: are there any effects, other than those that are perfectly benign? Even though the federal income tax is increasingly a proportional rather than a progressive tax (though it is still somewhat progressive: the average tax rate for the top 1 percent of earners, 24 percent, is roughly twice that for all federal taxpayers), the more skewed the distribution of income, the higher the proportion of taxes that is paid by the rich.
And in fact the top 1 percent of earners pay more than one-third of all federal income taxes today, which is a boon to the rest of the population.
Very wealthy people also provide patronage for the arts, funds for high-risk ventures (actually, art is one of those ventures), and money for philanthropic enterprises.
And there is very little envy of the rich on the part of other Americans, in part perhaps because of the much-derided but very real "trickle down" effect.
This is due partly to philanthropy but more to the enormous consumer surplus generated by products such as Microsoft Windows, the brainchild of persons who are now billionaires.
It is also due in part to the fact that, given diminishing marginal utility of income, income increases at lower levels in the income hierarchy increase personal welfare more than increases at higher levels do.
Moreover, real wealth is a function of improvements in the quality and variety of products and services, and these improvements benefit all classes of the population.
All this is not to say that the existence of a stratum of exceedingly wealthy people is altogether to the good.
There are three potentially bad consequences for our society:
1. The existence of enormous financial returns to IQ deflects high-IQ people from entering careers in which the social returns may greatly exceed the private returns: government service, basic science, and teaching.
The quality of both the civil service and the public schools appears to be falling.
2. Massive philanthropy directed abroad can interfere with a coherent foreign policy. Major philanthropies such as the Gates Foundation do not coordinate their spending decisions with U.S. national goals.
3. Huge personal wealth may play a disproportionate role in political competition.
Personal wealth confers an enormous advantage on a candidate, but also permits a person who does not want to be a candidate to exert an influence on candidates and policies, as in the case of Richard Mellon Scaife and George Soros.
The fact that a person is a highly intelligent speculator, such as Soros, is no guarantor of political insight or wisdom; and the fact that a person has inherited a vast fortune, such as Scaife, is no guarantor of ability of any sort.
More important, however, heavy campaign spending by the wealthy forces nonwealthy candidates to spend increased time and effort on fund raising, which makes a political career less attractive to nonwealthy persons and makes nonwealthy politicians less well informed about policy and more dependent on interest groups than if campaign spending were lower.
Are these consequences serious enough to warrant remedial action? I think not, except that they may provide some grounds for wanting to retain, perhaps even to strengthen, the estate tax.
The disincentive effects of taxing estates are much less than those of income taxation.
New York City's Board of Health has decided to ban trans fats in food sold in restaurants (also in food sold by catering and meal services), the ban to become fully effective in mid-2008.
The ban raises a fundamental issue of economic policy.
Trans fats are largely synthetic fats widely used in fried foods and baked goods.
There is substantial medical evidence that they are significant contributors to heart disease (perhaps increasing the incidence of heart disease by as much as 6 percent) because they both raise the cholesterol that is bad for you (LDL) and lower the cholesterol that helps to protect your arteries against the effects of the bad cholesterol (HDL).
About half of New York City's 20,000 restaurants use trans fats in their cooking; and roughly a third of the caloric intake of New Yorkers comes from restaurant meals.
A strict Chicago School economic analysis of the ban would deem it inefficient.
The restaurant industry in New York is highly competitive, and so if consumers are willing to pay a higher price for meals that do not contain trans fats, the industry will oblige them; to force them to shell out more money, rather than leaving it to their decision, is thus paternalistic, indeed gratuitous.
Restaurants catering to health-conscious eaters will advertise that they do not use any trans fats in their meal preparations, or will state on the menu the amount of trans fats in each item.
Other restaurants will cater to diners who prefer a cheaper meal to a healthier one.
The ban thus forces people who want to eat in restaurants to pay higher prices even if they would prefer to pay less and take the risk of an increased likelihood of heart disease.
Some of these would be people who eat in restaurants rarely, and avoid trans fats when they cook at home, so that the health risk to them of a restaurant meal containing trans fats is small.
Others would be people who disbelieve the medical opinion--and such opinion often is wrong--or think that trans fats improve the taste of food or that the ban is the result of political pressure from producers of substitutes for trans fats, such as corn oil, or from the restaurants that have voluntarily abandoned the use of trans fats and don't want to be put at a competitive disadvantage by restaurants that have lower costs because they do use trans fats.
Moreover, the enforcement of the ban will increase the costs of New York City government, resulting in higher taxes on an already heavily taxed population.
The fact that half the restaurants in New York City continue to use trans fats suggests that a majority of consumers would not support the ban.
What is missing in this analysis is a cost that, ironically, a great Chicago economist, George Stigler, did more than any other economist to make a part of mainstream economic analysis: the cost of information.
It might seem, however, that the cost of informing consumers about trans fats would be trivial--a restaurant would tell its customers whether or not it used trans fats, if that is what they're interested in, and if it lied it would invite class action suits for fraud.
But there is a crucial difference between the cost of disseminating information and the cost of absorbing it.
If gasoline stations in the same neighborhood charge slightly different prices for the same grade of gasoline, the reason may be that the price difference is smaller than the time (and gasoline!) cost to the consumer of driving to the different stations to see which has the lowest price.
But if the consumer did bother to conduct that search, he would have no difficulty in understanding the information that he obtained.
It is different with trans fats.
Many people have never heard of them; many who have don't know that they are (very probably) harmful to health; and, above all, almost no one outside the medical and nutrition communities knows how harmful trans fats are, and in what quantity.
That is, they do not know what a dangerous level of trans fats is, what their own consumption of trans fats is relative to that level, and how much their restaurant-going increases the total amount of trans fats that they consume.
They have, in short, no idea of the benefit of avoiding trans fats in restaurants.
And except for a few hypochondriacs and people who already have heart disease, no one wants his restaurant experience poisoned by having to read a menu that lists beside each item the number of grams of trans fats it contains and indicates (perhaps with a skull and crossbones) the danger created by consuming the item.
Actually the danger would be impossible to explain to diners, because it would depend on the diner's average daily consumption of trans fats, which neither the diner nor the restaurant knows.
In such a situation, even those of us who distrust government regulation of the economy should be open to the possibility that the ban on trans fats would produce a net improvement in the welfare of New Yorkers by satisfying a preference that most of them would have if the cost of absorbing information about the good in question were not prohibitive.
A very crude cost-benefit analysis suggests that this possibility is real.
Proponents of the ban estimate that it will reduce the annual number of heart attack deaths in New York City by 500.
That can be taken as an upper-bound estimate.
It seems high to me, as the total annual number of deaths from heart disease in New York City is only 25,000, and it seems unlikely that removing trans fats from restaurant meals alone would cause a 2 percent drop in the heart disease death rate.
If that 500 figure holds up, then if one uses the consensus economic estimate of the value of an American life (an estimate based on behavior toward risk, behavior that reveals the cost that the average American is willing to pay to reduce the risk of death), which is $7 million, a saving of 500 lives confers a benefit of $3.5 billion.
(This figure is too high, but I will adjust it later.) On the cost side, although the restaurant industry is up in arms about the ban, and although the ban's proponents cannot be correct that the industry would incur no cost at all to substitute other fats for trans fats--for if there were no cost, the substitution would have been made years ago, when trans fats began to be implicated in heart disease--I have not seen evidence that the cost would be great.
Remember that half the restaurants in New York City have already phased out trans fats, without anyone noticing a big jump in restaurant prices.
And the manufacturing cost of the substitutes for trans fats does not appear to be higher--the only advantage of trans fats is that they increase the shelf life of foods somewhat.
This is important to restaurants, by enabling them to economize on spoilage costs, but surely not critical.
The New York City restaurant industry has annual sales of $9.5 billion.
I do not know what percentage of those sales is accounted for by the restaurants that have already phased out trans fats, so let me assume, conservatively, that the restaurants that have not done so account for $6 billion of the $9.5 billion.
Suppose the ban would increase their costs by 1 percent--which seems too high, however, since the major costs of a restaurant are wages, which would be unaffected, and the cost of food, which would be affected only slightly (the shorter the shelf life, the more food must be bought relative to the amount that can be sold).
Apparently the substitutes for trans fats do not affect the taste of food.
One percent of $6 billion is $60 million.
My $3.5 billion benefit figure is obviously much greater than my $60 million cost figure, and probably it is too great.
Many of the 500 deaths may be of people who have advanced heart disease and thus a truncated life expectancy and impaired value of life, quite apart from trans fats.
Most of the deaths are of elderly people (only about 12 percent of deaths from heart disease in New York City are of people below the age of 65), whose value of life may be below average, though most elderly people cling pretty tenaciously to life, consistent with studies that find that elderly people are on average actually happier than young people.
I suspect too that the figure of 500 deaths due to trans fats in restaurant food is too high.
But suppose I slash it to 100 and assume that the average value of life in this group is only $1 million; this still yields a benefit figure, $100 million, that exceeds the cost figure comfortably--comfortably enough to cover the cost of enforcing the ban as well.
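The whole back-of-the-envelope comparison can be laid out in a few lines, using only the figures given above:

```python
# Figures taken directly from the text.
VALUE_OF_LIFE = 7_000_000                    # consensus estimate, in dollars

benefit_upper = 500 * VALUE_OF_LIFE          # upper bound: 500 deaths averted
cost = 6_000_000_000 * 0.01                  # 1% cost increase on $6B in sales
benefit_lower = 100 * 1_000_000              # slashed: 100 deaths at $1M each

print(benefit_upper)   # 3500000000 -- $3.5 billion
print(cost)            # 60000000.0 -- $60 million
print(benefit_lower)   # 100000000 -- $100 million
```

Even after slashing the death toll by a factor of five and the value of life by a factor of seven, the benefit remains well above the cost.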
Moreover, the benefit figure excludes the benefit to people who have heart disease but do not die of it (or have not yet died of it).
Heart disease causes suffering even when it does not kill the sufferer.
I have also excluded from the benefit figure any external benefit, that is, a benefit to people who do not have heart disease (or perhaps never eat in restaurants), but subsidize the medical expenses of those who do, through Medicare, Medicaid, and risk pooling by private insurance companies.
I exclude it because I'm not sure it's a net external benefit.
Even a total elimination of heart disease might not significantly reduce aggregate expenditures on health care, because it would result in an increase in illness and death caused by other diseases, such as cancer.
(Diseases in effect compete with each other; if a person is saved from one disease, this increases the "market" for another disease.) It would also increase the average age of the population, which might result in greater transfer payments and hence heavier taxes.
My cost-benefit analysis is, necessarily, highly tentative.
However, it inclines me to a sympathetic view of the trans-fats ban.
I anticipate strong opposition from libertarians.
Professor Becker is traveling, and as a result will not be able to post his comment on the trans-fats issue until mid-week.
My piece provoked as I expected a number of interesting comments.
Some take a strong libertarian line, which can be traced back to John Stuart Mill: government has no right to regulate private activity that does not have adverse effects on nonconsenting third parties.
But I consider myself a Millian, and if I am correct that it is very difficult for people to absorb information about the dangers of trans fats, and if I am further correct that the cost of ill health from heart disease will be shifted in part through programs like Medicaid and Medicare to the as it were nonconsenting taxpayer, then the ban is Millian.
I appreciate the concern that adoption of a sound regulation of restaurant meals will encourage further, less justifiable, interferences with consumer autonomy.
But one can never expect government to get things just right, given the play of politics.
There are always going to be silly regulations, but that is not a compelling argument for having no regulations at all.
The commenters who denounce the "nanny state" do not indicate what if any regulations they approve of.
Do they think there should be no inspections of restaurants by health inspectors? No regulation at all of food or drug safety by the Food and Drug Administration?
Some commenters think that people should be encouraged to study the dangers of trans fats and make their own judgments about what to eat.
But people have limited time to do research on such matters.
It makes sense to delegate the research to a central authority, so that instead of 300 million people trying to learn about trans fats and every other lurking menace, a handful of experts conducts the research and, when it is reasonably obvious how we would react if we were informed of its results, implements the proper response.
Surely our capacity to absorb information is quite limited and we must rely on the research of others for most of what we know and the knowledge of others for our protection.
Some of the comments reflect a (natural) misunderstanding of the concept of "value of life," pointing out correctly that people do not sell their lives for the calculated value.
All value of life means is this: if the average person would demand $7,000 to assume a .001 (one in a thousand) risk of immediate death, there would be a net increase in social welfare if the risk could be averted at a cost of less than $7,000.
Suppose 10,000 people are exposed to the risk.
Then the total cost that the risk imposes is $70 million ($7,000 x 10,000), and net social welfare will be increased by a measure that eliminates the risk at a cost of less than $70 million.
Another way to put this, with identical implications, is that a 1 in 1,000 risk of death will result in 10 deaths in a population of 10,000, and the $70 million loss figure amounts to valuing each life at $7 million.
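A short sketch of the same arithmetic:

```python
# Value-of-life arithmetic from the text.
compensation = 7_000        # dollars demanded to bear the risk
risk = 1 / 1_000            # chance of immediate death
population = 10_000

total_cost = compensation * population   # what the risk costs the group
deaths = risk * population               # statistical deaths expected

print(total_cost)            # 70000000 -- $70 million
print(deaths)                # 10.0
print(total_cost / deaths)   # 7000000.0 -- $7 million per statistical life
```

Both framings give the same $7 million figure: it is an exchange rate between money and small risks, not a price anyone would accept for certain death.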
Some comments take issue with the details of the cost-benefit analysis.
That is fair, but notice how I scaled down the benefit figure radically in order to allow a generous margin of error, and how I excluded a major benefit--eliminating the suffering, as distinct from the deaths, caused by heart disease.
I doubt that any plausible adjustments could reverse my conclusion that the benefits of the ban exceed the costs.
I do think it worth emphasizing that trans fats seem exceptionally dangerous--almost in the category of poisons.
The article in the New England Journal of Medicine that Professor Becker cites estimates a 6 to 19 percent reduction of what the authors call heart disease "events" from eliminating trans fats, and the research backing it seems solid.
Incidentally, I did not see in the article anything about trans fats doing more for the taste of foods than substitute oils--in fact the article discusses a Danish study that finds no quality difference.
I agree with Becker that many young people who are clogging their arteries by eating restaurant meals rich in trans fats will be saved by better cholesterol drugs that we can expect in the future.
However, those drugs will doubtless be paid for in large measure by taxpayers through the Medicare and Medicaid programs.
This means that the cost of trans fats will be shifted, in part at least, from those who consume them to those who do not--a classic externality, which justifies public intervention (depending on its cost and efficacy) even to Millian liberals such as myself.
A number of firms, such as TerraPass, sell "carbon offsets" to consumers worried about global warming.
You give TerraPass information about your driving, flying, and the size of your house, and TerraPass computes your annual carbon dioxide emissions and offers for a price to offset some or all of them by investing the proceeds from your purchase in projects (for example, wind farms) for reducing carbon emissions.
In principle, if you purchase offsets for your entire carbon emissions, your net contribution to global warming is zero.
The carbon-offset movement is an echo of the "cap and trade" approach to pollution control, which is used for example to limit emissions of sulfur dioxide.
(The Kyoto Protocol creates such a system for carbon emissions, but the United States is not a signatory to the Protocol and has no cap and trade program for carbon.) In cap and trade, each polluter is given a permit to emit a certain quantity of a pollutant.
The total amount permitted to all polluters will be less than the total pollution, because the aim is to reduce pollution.
The key point is that the cost of compliance varies across polluters.
Consider two polluters.
One can eliminate a ton of emissions at a cost of $10, the other at a cost of $50.
At any price between $10 and $50, both polluters are better off if number one sells the right to emit a ton of emissions to number two; society too is better off, because the trade frees up $40 to invest in other goods.
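A minimal sketch of the trade, using the text's figures (the permit price of $30 is an arbitrary choice within the $10-$50 range):

```python
# The two polluters from the text, each required to cut one ton:
low_cost, high_cost = 10, 50          # abatement cost per ton, in dollars

# Without trade, each abates its own ton:
no_trade = low_cost + high_cost       # $60 to abate two tons

# With trade, the low-cost polluter abates both tons and sells
# its permit to the high-cost polluter:
with_trade = 2 * low_cost             # $20 for the same two tons

savings = no_trade - with_trade       # $40 freed for other goods
print(savings)                        # 40

# At any permit price p between $10 and $50 both sides gain:
p = 30
seller_gain = p - low_cost            # low-cost polluter nets $20
buyer_gain = high_cost - p            # high-cost polluter saves $20
print(seller_gain + buyer_gain)       # 40 -- the gains sum to the savings
```

Whatever price the two negotiate, the total gain from trade is the $40 cost difference; the price only determines how that gain is divided between them.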
The problem with carbon offsets is that they are purely voluntary.
You do not obtain a monetary benefit by reducing carbon emissions, as you would if you had an emissions permit that you could sell to big emitters, or if you would be punished for exceeding a permitted level of emissions.
When you buy a carbon offset, you are making a charitable contribution to fighting global warming.
Since charitable motivation is weak compared to self-interested motivation, carbon offsets are a poor substitute for a cap and trade system, quite apart from the doubts that have been raised about the efficacy of the projects in which the firms offering carbon offsets invest.
At best, moreover, carbon-offset programs are severely limited because consumers are not the only emitters of carbon dioxide.
A further problem is that the investments by the carbon-offset firms in reducing carbon emissions may to a great extent simply replace existing investments.
(An estimate of the replacement effect should be reflected in the price that TerraPass charges for offsets.) There is commercial and governmental investment in wind and nuclear energy, reforestation, climate research, fossil-fuel efficiency, and so forth, and if now consumers through carbon-offset programs invest in such projects, the commercial and governmental investors may scale back.
But the most serious drawback of the carbon-offsets movement lies elsewhere--though not, as environmental radicals would have it, because it makes emitting carbon dioxide into the atmosphere respectable, whereas it ought to be thought sinful, like littering, or driving without a catalytic converter.
Although carbon emissions pose a much greater danger to the environment than other pollutants, they differ because they confer benefits as well as impose costs, and indeed reducing them to zero would be a disaster because atmospheric carbon dioxide is essential to maintaining a temperate climate.
There is nothing wrong with emitting carbon dioxide.
The wrong lies in the quantity being emitted, which is excessive.
The most serious drawback of the carbon-offsets movement is that it is likely to make the problem of excessive carbon emissions more rather than less serious, and this for three reasons.
The first is that it creates the impression that modest reductions in the rate of annual increases in carbon emissions make a meaningful contribution to the fight against global warming.
They do not.
Given the limitations of the carbon-offsets movement that I have noted (its purely voluntary nature and the fact that only consumer emissions are affected), plus the fact that any reductions attributable to the movement are more than offset by continuing rapid increases in emissions by China, India, and other rapidly developing economies, the movement can at best limit only very slightly the rate of annual increase in carbon emissions, whereas the need is to reduce the level of those emissions.
The reason is that, because atmospheric carbon dioxide is absorbed by the oceans only very gradually (and the ability of the ocean to act as a "carbon sink" apparently is declining), a high annual level of carbon emissions tends to have a cumulative effect, so that even if that level were steady (rather than increasing, as it is), the atmospheric concentration would rise.
Second, the movement encourages the belief that anyone who reduces his carbon "footprint" (that is, the emissions of carbon dioxide that he causes) to zero has done his bit to combat global warming.
My wife and I have two cars, two houses, and fly a certain amount, but according to TerraPass's calculation, we can reduce our carbon footprint (roughly 32 tons of carbon dioxide a year) to zero at a cost of $282 a year.
Then I will feel good about myself.
But if a million American families having similar carbon footprints eliminate them at this rather modest price, the result--a reduction of 32 million tons of carbon dioxide emitted per year--will be microscopic, as the worldwide hourly emission of carbon dioxide is 16 million tons.
A million American families would be roughly 1 percent of the U.S. population.
Suppose the carbon-offsets movement, which is recent and is getting a boost from the increasingly ominous evidence of global warming, grows beyond my expectations, to a point at which 10 percent of the U.S. population is paying TerraPass or other carbon-offset providers to offset an average of 32 tons per family.
The effect would be to reduce annual worldwide carbon emissions by 20 hours' worth, or about one-quarter of 1 percent, and the reduction would be greatly offset by the worldwide growth of emissions, currently running at about 3 percent a year.
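The arithmetic behind these figures can be checked in a few lines, using only the numbers given in the text:

```python
family_footprint = 32            # tons of CO2 per family per year
world_hourly = 16_000_000        # tons of CO2 emitted worldwide per hour

# One million families (roughly 1 percent of the U.S. population):
cut = 1_000_000 * family_footprint       # 32 million tons per year
print(cut / world_hourly)                # 2.0 -- two hours' worth

# Ten percent of the population:
cut_10pct = 10_000_000 * family_footprint
world_annual = world_hourly * 24 * 365   # ~140 billion tons per year
print(cut_10pct / world_hourly)          # 20.0 -- the "20 hours' worth"
print(cut_10pct / world_annual)          # about a quarter of 1 percent
```

Even the optimistic 10 percent scenario offsets only about 20 hours of global emissions a year, a fraction smaller than a single year's 3 percent growth in emissions.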
Third, and most serious, the carbon-offset movement, combined with well-publicized projects by Google and other companies to reduce carbon emissions, creates the false impression that global warming can be tamed by voluntary efforts, just as cleaning up after dogs has been achieved by voluntary efforts, without need for legal compulsion.
Global warming cannot be tamed by voluntary efforts, because the costs of significantly reducing carbon emissions in order to reduce the atmospheric concentration of carbon dioxide (or at least stop it from increasing) are enormous.
If people believe that voluntary efforts will suffice, there will be no political pressure to incur the heavy costs that will be necessary to avert the risk of catastrophic climate change.
Against this it can be argued that the carbon-offset movement is increasing the public awareness of the global warming problem, which may lead to other voluntary efforts to reduce carbon emissions, such as switching from SUVs to more fuel-efficient vehicles, or may exert pressure on politicians to support the regulation of carbon emissions.
I am skeptical.
I think very few Americans are prepared to incur substantial costs to deal with a problem that is so afflicted by uncertainty about its imminence and magnitude as global warming.
They will avoid cognitive dissonance by exaggerating the practical efficacy of largely symbolic gestures, such as purchasing carbon offsets.
I shall focus my comment on the consequences of the sovereign-wealth funds for the United States; Becker's post focuses on the consequences for the nations that have such funds.
Owned mainly by major oil-exporting nations, sovereign-wealth funds have today in the aggregate some $2.5 trillion in assets, and if oil prices remain sky-high this figure may grow to more than $20 trillion in a relatively short time.
At that point the funds will be among the world's most important sources of investment capital.
(The total debt and equity capital in the world is about $110 trillion, though of course it will be greater when the sovereign-wealth funds reach $20 trillion, if they ever do.)
The rise of the sovereign-wealth funds may well be a positive development for the rest of the world, assuming that the alternative would be for these countries to increase domestic consumption or to invest in domestic infrastructure.
The decision to invest on a global basis increases the global supply of capital, including therefore the supply of capital for investment in the United States.
As Becker and I argued in our August 7, 2005, postings concerning opposition to the proposed acquisition of Unocal by an oil company owned by the Chinese government, the purchase of assets by foreign nations, even when they are hostile or potentially hostile to us, does not threaten U.S. welfare or security.
The purchase of a company from its owners places money in the hands of those owners that they can invest for a higher return--if they did not think they could do this, they would not sell the company.
So such a purchase is wealth-enhancing.
It does not undermine our national security just because the purchaser is a foreign government, but on the contrary enhances our security because the investment is a hostage.
It's as if to guarantee China's good behavior the president of China sent his family to live in the United States.
But it is different if the purchase could create a security risk, as was argued to be the case with the proposed purchase by Dubai of a British company that serviced a number of U.S. ports (see my posting of March 13, 2006).
The concern (which may have been overblown, however) was that Arabs employed by the Dubai company would obtain in the ordinary course of business information about the ports and might pass it on to Islamic terrorists.
Notice that this was a concern about foreign companies whether or not government-owned.
One of the motivations for the creation of the sovereign-wealth funds is the concern of the oil-exporting nations with the value of their huge dollar surpluses; China has the same concern, though in its case its trade imbalance with the United States is not due to oil exports.
As Becker points out, at least in the case of the oil-producing nations these surpluses are due to the fact that the governments of the nations are the producers, so they receive the export revenues; if the producers were private companies, the revenues would not go into government coffers.
For political reasons, however, the governments are the recipients of the oil revenues, they are paid in dollars, and they want to put their dollars to work rather than just accumulate them or distribute them to their citizens.
And rather than just purchase U.S. Treasury notes or other safe securities--which would not make economic sense, since as Becker points out the amount of money in the funds greatly exceeds the nations' liquidity needs--the governments are in effect operating giant hedge funds, investing in diverse assets all over the world.
By doing this they are giving hostages to the nations in which they invest.
We should welcome the fact that these investments are less liquid than the short-term securities in which governments conventionally invest their reserves.
The less liquid an asset, the better a hostage it is; it can't be withdrawn as rapidly.
In addition, excess liquidity in the world's financial system can lead to financial instability.
The concern being expressed in some quarters in this country about the rise of the sovereign-wealth funds is ironic in view of the fact that our government's policies have contributed significantly to the growth of these funds.
Those policies include failure to exploit our Alaskan and offshore oil resources more vigorously, because of the opposition of environmentalists; our low tax rates, which facilitate consumption, including consumption of foreign goods, which in turn shifts dollars abroad; and, in particular, our very low taxes on oil and on oil products, such as gasoline and aviation fuel.
A stiff tax on imported oil, by reducing consumption, would reduce the wealth of the oil-exporting nations and hence the size of their sovereign-wealth funds.
Such a tax would have the not incidental further benefit of reducing emissions of carbon dioxide (though from that perspective a tax on carbon emissions is superior to a tax on oil and oil products) and of stimulating the search for alternatives to fossil fuels, a major culprit in global warming.
It is no secret that professors at American colleges and universities are much more liberal on average than the American people as a whole.
A recent paper by two sociology professors contains a useful history of scholarship on the issue and, more important, reports the results of the most careful survey yet conducted of the ideology of American academics.
(See Neal Gross and Solon Simmons, "The Social and Political Views of American Professors," Sept. 24, 2007, available at http://www.wjh.harvard.edu/~ngross/lounsbery_9-25.pdf (visited Dec. 29, 2007); and for a useful summary, with comments, including some by Larry Summers, see "The Liberal (and Moderating) Professoriate,"   Inside Higher Ed  , Oct. 8, 2007, available at www.insidehighered.com/news/2007/10/08/politics (visited Dec. 29, 2007).) More than 1,400 full-time professors at a wide variety of institutions of higher education, including community colleges, responded to the survey, representing a 51 percent response rate; and analysis of non-responders indicates that the responders were not a biased sample of the professors surveyed.
In the sample as a whole, 44 percent of professors are liberal, 46 percent moderate or centrist, and only 9 percent conservative.
(These are self-descriptions.) The corresponding figures for the American population as a whole, according to public opinion polls, are 18 percent, 49 percent, and 33 percent, suggesting that professors are on average more than twice as liberal, and barely a quarter as conservative, as the average American.
There are interesting differences within the professoriat, however.
The most liberal disciplines are the humanities and the social sciences; only 6 percent of the social-science professors and 15 percent of the humanities professors in the survey voted for Bush in 2004.
In contrast, business, medicine and other health sciences, and engineering are much less liberal, and the natural sciences somewhat less so, but they are still more liberal than the nation as a whole; only 32 percent of the business professors voted for Bush--though 52 percent of the health-sciences professors did.
In the entire sample, 78 percent voted for Kerry and only 20 percent for Bush.
Liberal-arts colleges and elite universities are even more liberal than other types of institution of higher education.
In liberal-arts colleges, the percentages liberal, conservative, and moderate are 62 percent, 4 percent, and 35 percent, respectively; and in elite universities the figures are 44 percent, 4 percent, and 52 percent.
Professors in the 26-to-35 age range are less liberal and more moderate (though not more conservative) than older professors, which I attribute to those youngsters' having reached maturity after the collapse of communism.
It is thus no surprise that only 1 percent of the young professors describe themselves as "left radicals" or "left activists," compared to 17 percent of those aged 50 or older.
The summary in the Gross-Simmons paper of the previous literature on professorial political leanings finds that, at least since the 1950s, American college and university faculties have been more liberal than the nation as a whole, but that the liberal skew is more extreme today than it was in the 1950s.
This is my experience.
Between 1955 and 1962 I was a student at Yale College in the humanities and then at the Harvard Law School, and neither the humanities faculty at Yale nor the Harvard Law School faculty was noticeably liberal (the former was actually rather conservative), and I mean by the standards of that era, not by today's standards.
Today both institutions are notably liberal, though the present dean of the Harvard Law School has been attempting with considerable success to make her faculty politically more diverse.
The Gross-Simmons study notes that the liberal skew is not limited to the United States, but is found in Canada, Britain, and much of Continental Europe, as well.
The survey results raise two questions: What is the explanation for the results? And what are the consequences? I address only the first question.
There is nothing mysterious about the fact that the members of a particular occupational group should have a different political profile from that of the population as a whole.
A 1999 survey of U.S. military officers found that 64 percent were Republican, 8 percent Democratic, and 17 percent independent.
In contrast, a 2002 study found that 40 percent of journalists are liberal and 25 percent conservative--a breakdown similar to but much less extreme than that of professors.
The conservatism of military officers is easy to understand--conservatives are much more favorable to the use of military force, and to the values of honor, personal courage, discipline, hardiness, and obedience, which are highly prized by the military, than liberals are.
And the liberalism of journalists probably reflects the tastes of their readers; in my 2001 book   Public Intellectuals: A Study of Decline  , I found that the liberal-conservative split among public intellectuals (roughly 2 to 1) corresponded to the ratio of the circulation of liberal newspapers and magazines to the circulation of conservative ones.
It is tempting to conclude that the liberal bias of journalists and professors (especially in the humanities and social sciences) is the same phenomenon--the liberalism of the "intelligentsia," usefully defined by the   Merriam-Webster Online Dictionary   as "intellectuals who form an artistic, social, or political vanguard or elite." But that just pushes the question back one step: why should an intelligentsia be liberal? Because intellectuals are naturally critical of their society, which in the case of the United States is rather conservative, or at least not "liberal" as academic liberals understand the word? That is not a satisfactory explanation, because a society can be attacked from the Right just as easily as from the Left.
Some of the most distinguished intellectuals of the twentieth century attacked social, cultural, political, or economic features of their societies from the Right--think of Martin Heidegger, William Butler Yeats, T. S. Eliot, Friedrich Hayek, and Milton Friedman.
Today, in fields such as law, political theory, and economics, there is a vibrant conservative movement--the puzzle is why it is so distinctly a minority movement in the university world.
Moreover, our college and university professors, especially those whose interests and background overlap most closely with those of the majority of journalists, appear to be markedly more liberal than journalists, the other major division of the intelligentsia.
One explanatory factor may be that colleges and universities select for people who are comfortable in a quasi-socialistic working environment.
Virtually all colleges and universities in the United States are either public or nonprofit, there is usually salary compression within fields, tenure shields professors from the rigors of labor-market competition, and professorial compensation substitutes fringe benefits (such as tenure), leisure, and other nonpecuniary income for high salaries.
The ablest academics generally have the highest opportunity costs--the brilliant chemist could get a high-paying job in the private sector, the brilliant law professor could make a lot of money as a practicing lawyer, and so forth--which suggests that the ablest academics attach especially great value to nonpecuniary relative to pecuniary income and hence are likely to feel especially alienated from a capitalist economy.
This may be one reason why elite universities are more liberal than nonelite ones.
(The greater liberalism of liberal-arts colleges may just reflect the fact that such colleges employ fewer scientists and engineers, who are less liberal on average than professors in the humanities and the social sciences.) In addition, there is the curious but well-documented fact that Jews are far more liberal than their socio-economic standing would predict; they are also disproportionately found in the faculties of elite colleges and universities.
Furthermore, conservatism is associated in many people's minds with religiosity, and faculty in nontechnical fields in elite universities are rarely religious.
Catholics and evangelical Christians are underrepresented in such universities.
Professors who are conservative in matters of economics, crime control, and national security but liberal with regard to social issues such as abortion rights, homosexual marriage, and separation of church and state would hesitate to describe themselves as conservatives, and many would not vote Republican.
Another factor that may explain the liberal skew in the academy is political discrimination.
Academics pick their colleagues, so once a department or school is dominated by liberals, it may discriminate against conservatives and thus increase the percentage of liberals.
There is a good deal of anecdotal evidence of such discrimination, but the best test (though hard to "grade" in soft fields) would be whether conservative academics are abler on average than liberal ones.
If conservatives are disfavored, they need to be better than liberals to be hired.
Political discrimination is less likely to be prevalent in fields in which there are objective performance criteria, which may be why there is a smaller preponderance of liberals in scientific and technical fields.
Related to discrimination is herd behavior, or conformism.
Despite their formal commitment to open debate, academics, like other people, do not like to be criticized or otherwise challenged.
The sciences, well aware of this tendency, have institutionalized practices, such as peer review, insistence that findings be replicated, and high standards of logical and empirical rigor, that are designed to foster healthy disagreement.
These practices are much less common in the humanities and the soft social sciences.
One response to discrimination or herd behavior favoring liberals in academia has been the formation of conservative think tanks; if their professional staffs were added to college and university faculties, the liberal skew would be less extreme, though the difference would not be great.
A further point also related to both discrimination and conformity bias is that once a field acquires a political cast, it will tend henceforth to attract as graduate students and thus as future professors students who share its politics, as otherwise (as Louis Menand pointed out in a comment on the Gross-Simmons study) the students may have difficulty surviving graduate school, obtaining a good starting job, and finally obtaining tenure.
My last point is what might be called the institutionalization of liberal skew by virtue of affirmative action in college admissions.
Affirmative action brings in its train political correctness, sensitivity training, multiculturalism, and other attitudes or practices that make a college an uncongenial environment for many conservatives.
For all these reasons, although the weakening of left extremism in college and university faculties can be expected to continue, the liberal skew is unlikely to disappear in the foreseeable future.
The turmoil in the housing finance market raises fascinating questions.
I shall offer some brief thoughts on the principal ones.
1.   Surprise  .
I have been preoccupied in recent years with the subject of intelligence failures, about which I have written several books.
The subprime mortgage "crisis" follows a classic pattern that should help us to understand the inevitability of intelligence failures (Pearl Harbor, the Tet Offensive, the Egyptian-Syrian surprise attack on Israel in October 1973, 9/11, and so on   ad nauseam  ).
These failures typically are not due to lack of essential information or absence of warning signs or signals, but to lack of precise information concerning time and place, without which effective response is impossible except at prohibitive cost.
Alarms over risky mortgage practices had been sounded for years, and ignored for years.
Someone, whether a home buyer or an investment bank buying home mortgages, who had heeded the warnings when they were first made, or indeed until years later, would have left a good deal of money on the table.
2.   Bubbles  .
There were two bubbles: a housing bubble, and an investment bubble.
The bubble phenomenon is related analytically to the phenomenon of surprise just discussed.
A bubble begins when prices, in this case of housing, begin rising at a rate that seems inexplicable in relation to demand.
No one knows how high they will rise.
In conditions of uncertainty, there is a tendency to base expectations on simple extrapolation: if prices are rising, they are expected to continue to rise--for a time, but no one knows for how long a time.
There is a reluctance to act as if they will not continue rising, for by doing so one is leaving money on the table.
As a bubble expands, the rational response is to reduce risk, without forgoing profit, by getting in and out of the market as quickly as possible.
The increased trading may keep the bubble expanding.
3.   Ignorance  .
It has been argued that the people who took out subprime mortgages with adjustable interest rates did not understand the risks they were assuming, and that the banks that bought mortgage-backed securities did not understand the risks they were assuming.
I am skeptical.
Suppose you have a low income, you'd like to own a house, and a mortgage broker offers to arrange a mortgage that will cover 100 percent of the price of the house.
What do you have to lose by accepting such a deal? Since you haven't put up any capital, you have no capital to lose if you lose the house because you cannot make your monthly mortgage payments.
As for the banks, they rode the bubble for too long; but, to repeat, had they got out too early, they would have left a lot of money on the table.
4.   Asymmetry of Risk  .
Bubbles are more likely to occur when downside risk is less than upside risk.
My example of the asset-less home buyer illustrates the point.
The savings and loan "crisis" of the 1980s was exacerbated, or perhaps even created, by the fact that federal deposit insurance was not experience-rated: savings and loan associations paid the same rates regardless of the riskiness of the loans that they made with the depositors' money.
Since the potential loss to depositors was truncated, there was an incentive to take excessive risks.
CEOs of banks, as of other large, publicly owned firms, face asymmetrical risk too.
If the bank is profitable, the CEO's compensation soars.
If his investment gambles fail, he may be fired, but he will be consoled by receiving tens or even hundreds of millions of dollars in deferred income, stock options, or severance pay.
This may have been a factor in the decision of many banks to try to ride the bubble to the top.
5.   Psychology  .
Economists have become increasingly sensitive to the findings of cognitive psychology, which teaches that emotions and cognitive quirks afflict all of us and lead to behavior that often deviates from simple models of rational choice, important as those models are.
Among the psychological tendencies that are relevant to an understanding of bubbles are the following: optimism bias, and a related belief in luck (there is no such thing--some people are lucky, but that is a product of randomness, not of a thing--"luck"--that people possess in different proportions); herd behavior; excessive discounting of future costs; and difficulty in thinking sensibly about probabilities.
6.   What Is to Be Done?   In my opinion, nothing.
There have always been bubbles.
There will always be bubbles because of the factors that I have been discussing.
The Federal Reserve Board, though ably led and staffed, missed the mortgage bubble just as it missed the tech-stock bubble that exploded in 2000.
The proposals now on the table for resolving the subprime mortgage "crisis" or preventing future such fiascos include, first, requiring that more information be given to prospective borrowers and, second, freezing mortgage interest adjustments or taking other measures to reduce foreclosures.
Information is not the problem, as I have argued; and bailing out the borrowers, which is to say truncating downside risk, will set the stage for a future housing bubble.
Nor is it a good excuse for the second class of measures that we must at all costs avoid a recession.
A major depression, such as we last experienced in the 1930s, imposes immense social costs in the form of lost output.
A recession involving some temporary unemployment may impose lower social costs than governmental interventions designed to head it off.
When nations are ranked by gross national income per capita, the United States comes in sixth, after Luxembourg, Norway, Switzerland, Denmark, and Iceland, confirming one's general impression that the United States is the wealthiest large country; none of the countries ranked ahead of the U.S. has more than a fortieth of the U.S. population (Switzerland, the most populous of the group, has a population of 7.5 million).
But when countries are ranked by the United Nations' Human Development Index, which rates 177 of the world's 193 countries, the United States falls to 12th, Denmark to 14th, and Luxembourg to 18th; and among the nations promoted above the United States are Australia, Canada, Sweden, Japan, the Netherlands, France, Finland, and Spain (in that order).
The composition of the Index reflects dissatisfaction with income as a measure of well-being.
And of course it is a limited measure; income is not the only argument in a person's utility function.
The Human Development Index is an attempt to develop a better measure of well-being.
It is a composite of three indexes: GDP per capita (computed on a purchasing power parity basis, to correct for distortions introduced by using currency exchange rates); life expectancy at birth; and a combination of the adult literacy rate and the combined primary, secondary, and college/university enrollment rate, with the adult literacy rate being weighted twice as heavily as the enrollment rate.
For each component index, the value of 0 is assigned to the minimum level of the development indicator (income, life expectancy, and enrollment) and 1 to the maximum, and each country's score is the percentage of the maximum level that it achieves.
A country's Human Development score is the simple average of its scores on the three indexes.
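The computation just described can be sketched in a few lines of code. The scaling function and the equal-weight average follow the text; the raw indicator values and the minimum/maximum "goalposts" below are made-up placeholders, not the UN's actual figures.

```python
# Sketch of the HDI computation described above (illustrative numbers only).

def component_index(value, minimum, maximum):
    """Scale a raw indicator to [0, 1]: the share of the maximum achieved."""
    return (value - minimum) / (maximum - minimum)

def education_index(literacy_idx, enrollment_idx):
    """Adult literacy is weighted twice as heavily as enrollment."""
    return (2 * literacy_idx + enrollment_idx) / 3

def hdi(income_idx, life_idx, education_idx):
    """A country's score is the simple average of the three indexes."""
    return (income_idx + life_idx + education_idx) / 3

# Hypothetical country: goalposts and values are placeholders.
income = component_index(34000, 100, 40000)   # GDP per capita (PPP)
life = component_index(78, 25, 85)            # life expectancy at birth
edu = education_index(0.99, 0.93)             # literacy and enrollment rates
print(round(hdi(income, life, edu), 3))
```

The equal weighting of the three components, criticized below, is visible in the last function: nothing in the arithmetic justifies treating a point of income index as equivalent to a point of life-expectancy index.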
I cannot myself see the value of the Human Development Index.
Not that per capita income, life expectancy at birth, and level of education as proxied by adult literacy and school enrollments are unimportant; a ranking of each of these aspects of human development might be a good first step in identifying areas of weakness that a society might wish to devote additional resources to improving.
It is the combining of the indexes and announcing that the combination offers a ranking of nations by the degree of their "human" as distinct from narrowly defined "economic" development that strikes me as dubious, and indeed as senseless.
The obvious objection is to the equal weighting of the three indexes, and to the omission of a host of other important dimensions of development, such as housing quality, pollution, tax rates, adult life expectancy, crime rates, unemployment, inflation, quality and variety of goods and services, economic growth, and quality of education--though including them would exacerbate the weighting problem, and some involve serious measurement problems.
A less obvious objection, but a general problem with rankings, is that from a sensible evaluative standpoint the distance between ranks is more important than the number of ranks that separate two countries.
The wealthiest nation has a per capita income twice as great as that of the 20th wealthiest nation.
That is a big difference.
But now consider life expectancy at birth.
Japan is number 3 with a life expectancy at birth of 82 years; the United States is only number 44, with a life expectancy at birth of 78.
A four-year difference in life expectancy is not trivial by any means; but compare it to the difference in per capita income between the third richest country, Switzerland, and the 44th, Palau: the Swiss income per capita is almost eight times as great as the per capita income of Palau.
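The point that the distance between ranks matters more than the number of ranks can be made numerically. The life-expectancy figures are those cited above; the income figures are hypothetical round numbers chosen only to match the roughly eightfold Switzerland-Palau gap.

```python
# Same rank gap, very different real gaps (income figures are hypothetical).
life_japan, life_us = 82, 78          # life expectancy: ranks 3 and 44
income_swiss, income_palau = 40000, 5000  # per capita income: ranks 3 and 44

rank_gap = 44 - 3                     # identical in both comparisons
print(rank_gap)                       # 41 ranks apart in both cases
print(life_japan / life_us)           # ~1.05: about a 5 percent gap
print(income_swiss / income_palau)    # 8.0: an eightfold gap
```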
If a country devotes resources to improving life expectancy, it has to give up some other good.
It is hard to say that the United States is making a mistake in not spending more resources on extending life expectancy; many Americans think that we spend too much on health care already.
One reason (though by no means the only one) that the United States ranks only 44th in life expectancy is that our large black population has an abnormally high death rate; the average life expectancy of black male Americans is only 69.
This shockingly high death rate reflects deep-seated problems of American blacks that would probably cost an enormous amount of money to solve.
The political will to expend those resources does not exist.
This may be a misfortune, a tragedy, or even a sin, but to use it to push the United States down in an index of human development is a political judgment, rather than anything determined by neutral social science.
The Human Development Index is an example of ranking mania that has the United States tightly in its grip, so maybe Americans shouldn't complain about the Index.
One cannot generalize about the value of rankings.
There are pluses and minuses.
The major plus is that a ranking is an economical method of presenting information.
The related minus is that it often presents it in a misleading way--that is my earlier point that the distance between ranks is more important than the number of ranks that separates the persons (nations, etc.) being ranked.
The more compressed a distribution--of ability, health, income, etc.--the less meaningful rank ordering is.
But the more serious problem with rank ordering is the arbitrariness of weighting different quality measures to come up with a composite ranking.
It is well illustrated by the college, law school, and business school rankings done annually by   U.S. News & World Report  . Unlike the UN, the editors of that magazine do not weight the different measures they use (such as SAT or LSAT scores and ratio of applicants to admits) equally; but the weightings are just as arbitrary.
They are worse in one respect than the Human Development Index: they are manipulable.
A school can (and many schools do) increase its ratio of applicants to admits by blurring its admission criteria or reducing its application fee, thus increasing the number of applicants without increasing the number of admits.
It is unlikely that a nation would try to improve its ranking in the Human Development Index by reallocating resources to activities that influence the rankings.
I must be cautious in discussing the Madoff scandal because as a judge I am forbidden to make a public comment on pending or impending litigation.
Madoff himself of course has been arrested, and already lawsuits have been filed against some of the "funds of funds" that steered investors' money to him.
I shall proceed on the assumption that the media are correct in describing Madoff as the author of a Ponzi scheme--indeed he is reputed to have described it that way himself--but I shall treat it strictly as an assumption, a hypothesis, and not as established fact, which is for a court to determine.
And I will not comment at all on the suits against the funds of funds.
It is unsurprising that a Ponzi scheme should come to light during a stock market crash.
As Warren Buffett is reputed to have said, one doesn't know who is swimming naked until the tide goes out.
The stock market crash would have reduced any remaining assets in Madoff's investment account at the same time that liquidity problems caused by the depression would have increased the rate of redemptions.
Madoff's scheme, as described in the media (and remember that I am not taking a position on the truth of any of the allegations that have been made against him), is not a classic Ponzi scheme.
The classic scheme is a "con" in the sense of a fraud perpetrated against greedy dopes.
A skillful con man uses his gift of salesmanship to inveigle people by such ludicrous pitches that only the least sophisticated, or those most blinded by greed, are conned.
A typical Ponzi scheme might offer a 10 percent monthly return on investment--the very improbability that such an offer could be genuine assures that only suckers will invest and they are least likely to discover that they have been conned until the con man has made a bundle.
They may never discover that they have been conned--they may be convinced by the con man that they lost their money because of a legitimate business failure.
Or they may be embarrassed to complain, or even afraid to complain because they suspect that they've been involved with a criminal enterprise--what but a criminal enterprise could generate a 10 percent monthly return on one's investment? It is possible therefore that many Ponzi schemes are never reported to the authorities and hence never detected.
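The implausibility of the classic pitch is a matter of simple compounding, which can be checked in two lines. The 10 percent monthly figure is the hypothetical from the text.

```python
# Why a "10 percent per month" pitch is implausible on its face:
# compounded over a year, it implies a return no legitimate fund sustains.
monthly_rate = 0.10
annual_multiple = (1 + monthly_rate) ** 12
print(round(annual_multiple, 2))           # money roughly triples each year
print(round((annual_multiple - 1) * 100))  # an annual return over 200 percent
```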
The strategy that has been attributed to Madoff is the opposite of that of the typical Ponzi schemer: it is to obtain investments from well-off people far more financially sophisticated than the average Ponzi victim, including genuine financial experts such as hedge fund managers and bank officials.
And therefore it requires different tactics from that of the ordinary Ponzi scheme, such as offering returns only moderately above average, satisfying redemption requests promptly, turning down some would-be investors (it would be interesting to know whether there was a tendency to turn down investors who might prove nosy or suspicious), and trading on a reputation earned in a legitimate business (Madoff's business of market making).
Madoff is alleged to have preyed primarily on his fellow Jews; such "affinity" frauds are common, because people are likely to be more trusting of members of their own ethnic or religious group than of outsiders and because a con man may be abler to identify and exploit the weaknesses of members of his own group than of others.
The most interesting question raised by the scandal is why though it apparently continued for decades it was never detected by the Securities and Exchange Commission, even though beginning eight years ago a money manager named Harry Markopolos began bombarding the Commission with letters accusing Madoff of operating a Ponzi scheme.
(The fact that Madoff did not sue Markopolos for libel should have been another warning sign.) There are two hypotheses.
One is that regulation is hopelessly inefficient, and that it should be up to investors to protect themselves as best they can against securities frauds.
The SEC's budget was increased substantially in 2004 in reaction to its failure to have detected the Enron, WorldCom, and other financial scandals that erupted in the early years of the new century, yet it still failed to detect Madoff's scheme.
The other hypothesis is that under Chairman Christopher Cox (as under the first chairman appointed by President Bush, Harvey Pitt), the SEC has been too trusting of the securities industry, as part of a general philosophy of deregulation, small government, and laissez-faire that has characterized the Bush Administration.
The SEC does seem to have been asleep at the switch quite a bit of late.
Just days before the collapse of Bear Stearns marked the beginning of the banking crisis, Chairman Cox said that "We have a good deal of comfort about the capital cushions at these firms at the moment." In fact most of the firms about which he was speaking--the investment banks--were teetering at the brink, and in some cases over the brink, of insolvency.
Cox's reaction to the Madoff scandal has been to blame his subordinates in the Commission, rather than to take responsibility himself.
That is not an endearing reaction.
The standard governmental response to a major governmental failure is reorganization.
The government wants to prove that it is doing something to prevent a repetition of the failure, and the cheapest yet most visible and dramatic way to show that it has "gotten the message" and is going to "do something" is to reorganize.
Hence the creation of the Department of Homeland Security and the Directorate of National Intelligence in the wake of the 9/11 attacks.
It is beginning to seem likely that there will be an ambitious reorganization of the financial regulatory system.
In the course of that reorganization, the SEC may be abolished.
If so, Bernard Madoff and Christopher Cox can share the credit.
One of the reasons for the insolvency of the Detroit automakers (General Motors, Chrysler--and Ford, which appears to be insolvent too, despite its denials) is that their workers are paid higher wages, and receive much more generous benefits paid for by their employer, than the workers employed at the automobile plants, mainly in the South, owned by Toyota, Honda, and other foreign manufacturers.
The total wage and benefit bill for the Detroit automakers is about $55 per hour, compared to $45 for workers in the foreign-owned plants, and the difference is plausibly ascribed to the fact that the Detroit automakers are unionized and the "foreign transplants" not.
(The comparison excludes retiree benefits, a very large cost of the Detroit companies, but not an hourly labor cost.) This difference may seem small, considering that labor is only about 10 percent of the cost of making a car, but many of the workers at the companies that supply parts to the automakers (and the parts represent about 60 percent of the total cost of manufacturing the vehicle) are also represented by the United Auto Workers.
Anyway, since the foreign transplants have other competitive advantages over the Detroit automakers, the latter can hardly afford to have even slightly higher labor costs.
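A back-of-the-envelope calculation shows how the seemingly small hourly gap propagates. The $55 and $45 wages, the 10 percent labor share, and the 60 percent parts share are the text's figures; the assumption that suppliers' labor share and union premium match the assemblers' is illustrative only.

```python
# Rough cost-disadvantage arithmetic (assumptions noted in the lead-in).
detroit_wage, transplant_wage = 55, 45
premium = (detroit_wage - transplant_wage) / transplant_wage  # ~22% wage premium

assembly_labor_share = 0.10   # labor is ~10% of the cost of making a car
parts_share = 0.60            # parts are ~60% of total manufacturing cost
supplier_labor_share = 0.10   # hypothetical: suppliers' labor share of parts cost

direct = premium * assembly_labor_share
with_suppliers = direct + premium * parts_share * supplier_labor_share
print(f"{direct:.1%}")          # disadvantage from assembly labor alone
print(f"{with_suppliers:.1%}")  # once unionized parts suppliers are included
```

Even on these assumptions the total disadvantage is only a few percent of cost, which is consistent with the text's point: a firm already behind on other margins can ill afford even a small labor-cost handicap.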
When the auto bailout bill was being debated in Congress in November (ultimately it was voted down), Senator Corker said that he would support the bill if it conditioned the bailout on the Detroit automakers' reducing their workers' wages and benefits (to which the union would have to agree) to the level at the foreign-owned plants, as well as conform work rules to the work rules in those plants.
The significance of the work rules must not be underestimated.
As is common in unionized firms, the United Auto Workers has successfully negotiated not only for wages and benefits for the workers they represent but also for rules governing what tasks the workers can and cannot perform, how many workers must be assigned to a particular task, the order in which workers are to be laid off (usually it is in reverse order of seniority, because older workers tend to be stronger supporters of unionization than younger ones because the latter have better alternative employment prospects and so don't worry as much about job security) in the event of a reduction in demand for the firm's products, methods of discipline, and so forth.
These work rules, collectively "featherbedding," make it difficult for a firm to optimize its use of labor, and, like the higher wages and benefits that unions obtain, add to the firm's labor costs relative to those of its nonunion competitors.
A December 16 blog by Rand Simberg, http://pajamasmedia.com/blog/detroits-downturn-its-the-productivity-stupid/, presents a shocking picture of how work rules impair productivity at automobile plants at which the workers are represented by the United Auto Workers.
The goal of unions is to redistribute wealth from the owners and managers of firms, and from workers willing to work for very low wages, to the unionized workers and the union's officers.
Unions do this by organizing (or threatening) strikes that impose costs on employers.
For employers are rationally willing to avoid those costs at a cost (provided it is smaller) of higher wages and benefits and restrictive work rules.
Because the added cost to the employer of a unionized work force is a marginal cost (a cost that varies with the output of the firm), unionization results in reduced output by the unionized firm and, in consequence, benefits nonunionized competitors.
Unless those competitors are too few or too small to be able to expand output at a cost no higher than the cost to the unionized firms, unionization will gradually drive the unionized firms out of business.
Unions, in other words, are worker cartels.
Workers threaten to withhold their labor unless paid more than a competitive wage (including benefits and work rules), but unless their union is able to organize all the major competitors in a market, the cartel will be eroded by the entry of nonunionized firms, which by virtue of not being unionized will have lower labor costs.
The parallel to producer cartels is exact--workers are producers.
We are seeing this process of erosion of labor monopolies at work in the automobile industry.
The market share of the Detroit automakers has shrunk steadily relative to that of the foreign "transplants" and with it the number of unionized auto workers--they are fewer by a third or more than they were in 1970.
If the Detroit automakers are forced to liquidate unless they can bring their labor costs down to the level of the foreign transplants, the UAW will be out of business either way: because the Detroit automakers liquidate, or because, as a result of union concessions, the workers will no longer be getting anything in exchange for the dues they pay the union.
I don't think there's much to be said on behalf of unions, at least under current economic conditions.
The redistribution of wealth that they bring about is not only fragile, for the reason just suggested, but also capricious, as it is an accident whether conditions in a particular industry are favorable or unfavorable to unionization.
By driving up employers' costs, unions cause prices to increase, which harms consumers, who are not on average any better off than unionized workers are.
Unions push hard for minimum wage laws and for tariffs, both being devices for reducing competition from workers, here or abroad, willing to work for lower wages.
Current union hostility to immigrant workers is of a piece with the unions' former hostility to blacks and women--which is to say, to workers willing to work for a wage below the union wage.
And by raising labor costs, unions accelerate the substitution of capital for labor, further depressing the demand for labor and hence average wages.
Union workers, in effect, exploit nonunion workers, as well as reducing the overall efficiency of the economy.
The United Auto Workers has done its part to place the Detroit auto industry on the road to ruin.
There is also a long history of union corruption (though not in the UAW).
And some union activity (though again not that of the UAW) is extortionate: the union and the employer tacitly agree that as long as the employer gives the workers a wage increase slightly above the union dues, the union will leave the employer alone.
There may be, I grant, cases in which unionization reduces an employer's labor costs.
If there is deep mutual antipathy between workers and employers, perhaps breaking out in violence--with strikebreakers beating up strikers and strikers beating up scabs and sit-down strikers destroying company property--there may be benefits from interposing an organization independent of the employer between employer and workers, and from creating (as the National Labor Relations Act has done) a civilized mode of resolving labor disputes.
But in cases in which union organization is mutually beneficial, the employer will invite the union to organize its workers.
I am sure the Detroit automakers would very much like to disinvite the United Auto Workers.
Unions do provide some services that are valuable to employers, such as grievance procedures that check arbitrary actions by supervisory employees; and union-negotiated protection of senior workers can benefit their employer by encouraging them to share their know-how with new workers, without having to fear that by doing so they will be sharing themselves out of a job.
But these are measures that an employer who thinks they will reduce his labor costs can take without the presence of a union.
Mickey Kaus, another blogger who is an expert on the automobile industry, attributes much of the problem with the UAW to the procedures that govern labor relations in unionized plants.
The problem...is the American adversarial labor-management negotiating system, in which reasonable people doing what the system tells them they should do wind up producing undesirable results.
Just as negotiating over work assignments means factories adjust too slowly to generate continuous efficiency improvements (which often involve constantly changing work assignments), negotiating ponderous 3 year contracts (in which Gettelfinger [the UAW's president] must extract every possible concession to please the members who elected him) means contracts adjust too slowly to save the companies from failure if market conditions change...[T]he $14 wage scale for new hires [to which the UAW agreed several years ago] hasn't had an impact because nobody new is being hired by the UAW's employers, who are shrinking, not growing.
The obvious alternative to cutting the pay of nonexistent future workers would be to cut the pay of existing current workers--but they are the people the system tells Gettelfinger he needs to please. www.slate.com/blogs/blogs/kausfiles/ (Dec. 26, 2008).
The unions strongly supported the Democrats in the last election and are looking for payback.
I do think that there are good economic reasons for keeping the Detroit automakers out of bankruptcy until the current depression hits bottom and a recovery begins--until then the shock to the economy would be too great (see my post of November 16)--and that will keep the UAW alive for a while.
But if it resists making substantial concessions to the automakers, hoping that the President and Congress will force the automakers' bondholders to make the necessary concessions or that the taxpayer will be forced to subsidize the automakers indefinitely, the union will be playing a game of chicken that may end in its destruction rather than merely in its continued shrinkage as the industry shrinks.
The auto bailout is deeply unpopular with the public and the UAW's stubbornness may reinforce the impression that unions are dinosaurs slouching toward extinction.
Macroeconomic Policy and the Current Depression--Posner.
I am not a macroeconomist, but given the strange, perhaps embarrassed silence of so many macroeconomists, mentioned by Becker, I feel less daunted by my lack of expertise than I ordinarily would be.
As Becker explains, the focus of central banks, such as the Federal Reserve Board, has been on maintaining price stability by reducing interest rates when economic growth is too sluggish and raising them when it is too fast.
The first response encourages economic activity when needed and the second limits inflation.
But control of interest rates cannot prevent depressions, including severe depressions.
Nor can fiscal policy--government spending and taxing.
There appear to be three types of depression (why that word has been displaced by "recession" eludes me--who is supposed to be fooled by such a euphemism?).
In one, the least interesting and usually the least serious, some unanticipated shock, external to the ordinary workings of the market, disrupts the market equilibrium; the oil-price surges of the early and then the late 1970s are illustrative.
The second, illustrated by the depression of the early 1980s, in which unemployment exceeded 10 percent for a time during 1982, is the induced depression: the Federal Reserve Board broke what was becoming a chronic high rate of inflation by an unexpectedly steep increase in interest rates, which shocked the economy.
In neither type of depression is anyone at fault, and the second was downright beneficial to the economy.
In the third and most interesting type of depression, illustrated by both the depression of the 1930s and the current depression, the cause is the bursting of an investment bubble.
There was a stock market bubble in the 1920s fueled by buying stock with money loaned by banks.
That was risky lending and as a result the bursting of the stock market bubble in 1929 resulted in bank insolvencies.
The severity of the depression may have been due to the Federal Reserve Board's failure to bail out the banks, but the depression itself was due to the stock bubble's bursting and precipitating bank insolvencies.
There was a lesser stock market bubble, in stocks of high-tech companies, in the late 1990s, but its bursting had a small effect on the economy as a whole.
The current depression is similarly the consequence, but a very grave one, of the bursting of a bubble.
The bubble started in housing, but extended to commercial real estate and other sectors of the economy as well.
Very low interest rates, imaginative marketing of houses (and of mortgages on houses) and other goods, and the deregulation of the banking industry spurred highly speculative investing; and the eventual bursting of the bubble, as in 1929, precipitated widespread bank insolvencies and a rapid and steep decline in the stock market, though this time the insolvencies preceded and precipitated the stock decline, rather than vice versa.
An article by Massimo Guidolin and Elizabeth A. La Jeunesse published a year ago in the Review of the St. Louis Federal Reserve Bank noted that the personal savings rate of Americans had actually turned negative, meaning that people were spending more than they were earning.
And now such savings as people had, being heavily invested in the stock market, have become depleted by the drop in the stock market.
As a result of their inadequate savings, people who lose their jobs or cannot sell the houses they no longer can afford are limited in their ability to reallocate savings to consumption, as they had done in previous, milder depressions.
So consumption has fallen steeply, precipitating layoffs that have further reduced consumption (because the unemployed have lower incomes), creating the downward spiral that the economy finds itself in at this writing.
And the timing could not be worse: during a presidential transition, with the lame-duck President seeming uninterested in and uninformed about economic matters, with economic officials whose stumbling responses to the gathering financial crisis have undermined their credibility, and with the crisis accelerating during the Christmas shopping season, which normally accounts for as much as 40 percent of annual retail sales.
The buying binge financed by the heavy borrowing during the bubble has left consumers awash in consumer durables, so it is easy for them to postpone buying.
Moreover, consumer durables are more durable than they used to be, so that replacement can be deferred longer than used to be possible.
If this diagnosis is correct, then the public-works expenditure program that President-elect Obama is proposing, though anathema to economic libertarians, resisted by the Bush Administration, and bound to be wasteful, as all such programs are, may be the most sensible response to the depression and one clearly superior to a tax cut.
A tax cut or rebate, like the bank bailout, is unlikely, unless very large or credibly promised to be permanent, to stimulate consumption greatly; most of the money is likely to be used to rebuild savings or, in the case of the banks, to rebuild their equity cushion so that they can make loans, bound to be risky in a depressed economy, without courting bankruptcy.
In other words, to stimulate economic activity the government will have to step in and "consume" in lieu of reluctant or impoverished consumers, by spending money on road repair and other public goods.
A critical variable, however, is the length of time it will take for public-works projects actually to be begun.
American government tends to be extremely sluggish.
We blogged on November 18 about whether the government should provide money to the U.S. auto manufacturers to keep them alive.
(I was for; Becker was against.) In the short period since then, there have been important developments bearing on the issue, culminating this past Friday in the blocking by Senate Republicans of the Democrats' modest ($15 billion) auto bailout bill, and the announcement by the Bush Administration that it might, after all, agree to use part of the $700 billion financial-sector bailout to keep the U.S. auto manufacturers going until President-elect Obama takes office.
So Becker and I have decided to return to the issue.
The issue has a political and an economic dimension.
From a political standpoint, the current position--no bailout legislation, but possible allocation of part of the financial-sector bailout money to the domestic auto manufacturers--represents, unusually, a victory for both political parties.
The Republican Senators have stood up for principle--that freedom to fail is basic to capitalism, that wages and benefits should be set by free labor markets rather than by powerful unions, which are worker cartels, that government should not manage businesses, and that government expenditures should be minimized--and for the interests of Toyota and the other foreign manufacturers that have plants in the United States; for those plants are mainly in the South, which is the stronghold of the Republican Party.
By opposing an auto bailout the Republican Senators have also distanced themselves from the Bush Administration, which is at once unpopular and believed by many Republicans to have betrayed Republican small-government principles.
There is a grave risk that, as I argued in my November 18 posting, a collapse of the domestic auto industry could have serious adverse consequences for the U.S. economy as a whole, which would expose the Republican Senators to criticism.
But that risk is buffered by the Administration's apparent willingness to bail out the auto industry without new legislation.
The Democrats (including the incoming administration) have scored points among their constituencies by standing up for union workers, for the "greening" of the automobile industry, for states in which the domestic auto industry is centered that voted Democratic in the November election (Michigan, Ohio, and Indiana), for the principle of active government, and for trying to avert a deepening of the current depression.
The bailout bill was a mess, but a harmless one, if I am right that the domestic producers should not be allowed to collapse at a time of profound and, it appears, worsening economic distress.
The bill was a mess because of the conditions that it would have imposed on the industry--conditions that earned the justified ire of the Republican Senators by failing to lean hard on the collective bargaining agreements negotiated by the United Auto Workers--because of the divided control of the industry that the bill, if enacted, would have brought about (divided among the manufacturers, a federal "car czar," and intrusive congressional oversight), and because of the considerable element of fantasy in the idea that Congress plus the President can revitalize the domestic auto industry.
Nowhere is it written that the United States, let alone the Midwest, where the domestic auto manufacturers are centered, has a comparative advantage over other countries, or other regions of the United States, in manufacturing motor vehicles.
Evidently it does not, and Congress and the President cannot change that, as Japan learned from the failure of its "industrial policy" administered by Japan's once-admired Ministry of International Trade and Industry.
For the problem of the Detroit manufacturers is not just a matter of higher wages, to be solved by renegotiation of their collective bargaining agreements.
The wage difference (actually the benefits difference--the hourly wages of the auto workers employed by the domestic manufacturers are only slightly higher than the wages of the workers employed in the U.S. plants of Toyota and other foreign manufacturers) is an important but not the decisive factor in the decline of the domestic auto industry.
The difference in the wage and benefits package between employees of the domestic manufacturers and of the foreign ones in the United States has been exaggerated by treating as a part of that package the annual payments to retired workers divided by the number of hours worked annually by current workers.
The money owed the retirees is a fixed cost, like any other debt.
Eliminating those payments, like reducing the industry's bond debt, would improve the industry's balance sheet by reducing its fixed costs, but would not reduce the cost of making cars, or increase their quality.
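The arithmetic of the exaggeration can be sketched as follows. All the numbers here are invented for illustration, not actual industry figures; the point is only that folding a fixed legacy obligation into an hourly figure inflates the apparent cost of labor without changing the marginal cost of building a car:

```python
# Hypothetical figures (not actual UAW or industry data) illustrating how
# dividing retiree obligations by current hours worked inflates the
# apparent hourly labor cost, even though the marginal cost of an
# additional hour of labor is unchanged.

wage_and_benefits_per_hour = 55.0              # current workers' compensation
retiree_payments_per_year = 7_000_000_000.0    # fixed legacy obligation
hours_worked_per_year = 250_000_000.0          # hours worked by current workers

# The misleading "all-in" hourly figure treats a fixed cost as variable:
apparent_hourly_cost = (wage_and_benefits_per_hour
                        + retiree_payments_per_year / hours_worked_per_year)
print(apparent_hourly_cost)   # 83.0, i.e. 55 + 28

# The true marginal cost of one more hour of labor is just the wage and
# benefits of a current worker; the retiree debt is owed regardless:
print(wage_and_benefits_per_hour)   # 55.0
```

Cutting the legacy payments, on these numbers, improves the balance sheet by a fixed amount per year but leaves the cost of producing each additional car exactly where it was.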
Merely wiping out existing debt, the main consequence of reorganization in bankruptcy, does not improve the efficiency or competitive position of the reorganized firm, which is why most reorganizations end in liquidation.
What would improve the efficiency of the domestic auto manufacturers, besides reducing wages and current workers' benefits, would be jettisoning union-imposed work rules; that was part of Republican Senator Corker's ingenious proposal (of course rejected by the union) to condition a bailout on the union's agreeing to a reduction in the wages and benefits of the Detroit automakers' workers to the level prevailing in the southern automobile plants of the foreign auto companies.
The adoption of his proposal would have been tantamount to putting the United Auto Workers out of business--if unionized workers have the identical wages, benefits, and working conditions as nonunionized ones, why would anyone pay union dues?
I doubt that anyone in Congress or in either the outgoing or the incoming Administration really thinks that a bailout bill will place the domestic industry on the path to salvation.
The conditions imposed to achieve the "reform" of the industry are window dressing.
All three domestic manufacturers (yes, Ford included) are insolvent, though they are unlikely to close down and liquidate completely if forced into bankruptcy--Americans will probably buy 10 million motor vehicles in 2009, and they are unlikely all to be made by foreign companies (the foreign share of the U.S. car market, including both imports and cars manufactured in the U.S. plants of foreign companies, is about 50 percent, though the foreign companies could take up some of the slack created by a collapse of the Detroit manufacturers, since their sales are down too).
Even with an infusion of federal money, there will be many plant closings and layoffs and many bankruptcies and liquidations of auto parts suppliers and auto dealers.
But formal declarations of bankruptcy by the domestic manufacturers would, I believe (as I argued in my November 18 posting), have a substantial added negative effect on the economy.
Consumers are markedly reducing their purchases of durable goods because their savings are so depleted that they cannot, as in previous economic downturns, reallocate savings to consumption.
Instead they are reallocating income from consumption to savings.
The result is a downward spiral: consumers spend less, so output drops, resulting in layoffs that result in further reductions in consumption and in turn in output.
The spiral will eventually bottom out, but it will bottom out at a lower level if hundreds of thousands of employees of auto manufacturers, auto parts suppliers, and auto dealers are terminated more or less all at once and consumers planning to buy a car in 2009 are scared off by the uncertainties associated with bankruptcy.
(Will warranties be honored? Will parts be available? Will the dealership from which one bought a car survive? Will service standards slip? What about the car's resale value? And should one believe the soothing assurances that bankruptcy is no big deal for the customers of the bankrupt firm, as long as it does not liquidate, when all the other soothing assurances by the government have proved unfounded?) Because motor vehicles are highly durable, it is easy to be prudent and defer replacing one's existing vehicle until one's economic situation clarifies.
Granted, with General Motors having publicly acknowledged hiring a leading bankruptcy lawyer to counsel it and announced that it will be shutting much of its North American operations for a period of months, there is increasing public recognition that the Detroit automobile industry is bankrupt in all but name.
But I still fear the psychological effect of a formal declaration of bankruptcy at a time when many--probably most--Americans are anxious about their economic situation.
Individually, consumer prudence is wise; collectively, it will exacerbate the depression.
The realistic goal of an auto-industry bailout is not to reform, revitalize, or restructure the domestic industry; it is merely to postpone its bankruptcy for a year or two, until the end of the depression is at least in sight and consumer confidence is restored to the point at which the bankruptcy of the domestic manufacturers can be taken in stride.
To attain this goal does not require imposing conditions on the use that the auto manufacturers make of the bailout moneys.
The conditions that the bill would have imposed and that any other form of government funding will impose are not an economic but a political necessity because of widespread anger at the incompetence of the industry; a majority of Americans oppose any bailout of the Detroit manufacturers.
At the very least, the Obama administration should be allowed to decide the fate of the companies; that argues for a modest government loan that will keep them out of bankruptcy until, say, February.
I agree with Becker; no matter how badly the Fed performed in the run-up to the financial collapse of September 2008, stripping it of its political independence could only make things worse.
The Fed indeed performed badly, and Bernanke himself--not just Greenspan--must (though he is refusing to) shoulder a significant share of the blame.
Bernanke approved of, and may even have played the key role in advocating, Greenspan’s policy of pushing interest rates way down beginning late in 2001, keeping them there, and promising that when the Fed started raising rates it would do so gradually and would use monetary policy to prevent asset prices from diving in consequence of higher rates (that is, it would push down interest rates, which would firm up housing prices because houses are a product bought mainly with credit).
These actions by the Fed nourished the housing bubble.
The Fed not only mismanaged monetary policy, but was notably lax in regulating the commercial banks and bank holding companies, ignored warning signs of a coming financial collapse, and prepared no contingency plans to deal with such a crisis even after Bear Stearns’ collapse signaled the existence of potential solvency problems in a wide range of banks and “shadow banks” (such as Bear Stearns, a broker-dealer rather than a commercial bank).
The Fed was blindsided by the collapse of the other dominoes in September 2008 and blundered gravely in failing to bail out Lehman Brothers, a failure that precipitated a run on the other shadow banks.
These failures have shaken confidence in the Fed, and provide I think a strong argument against giving the Fed additional powers.
But the failures do not provide an argument for reducing the Fed’s political independence.
The critical reason is Becker’s: looking back at the behavior of Congress in the years and months (and weeks and days) preceding the financial collapse of September 2008, there is absolutely no reason to think that, had Congress exercised more control over the Fed, the financial collapse would have been averted or its gravity diminished.
The reason is related to the principal argument for the independence of a nation’s central bank: that without it there would be much more inflation.
Politicians want low interest rates because they stimulate economic activity and thus create at least the illusion of prosperity, for which politicians want to take credit.
But if inflation is already high or expected to be high, low interest rates create a serious risk of more inflation.
The reason is that the way a central bank reduces interest rates (to simplify) is by buying Treasury securities; the cash it pays for them increases bank balances and therefore reduces interest rates and stimulates lending.
The more lending, the more spending, and the more spending the more money there is in circulation relative to output and therefore the more inflation there is, since inflation is determined by the ratio between the amount of money in circulation and the amount of goods and services available for purchase.
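The ratio described above can be put in a toy calculation. This is a sketch of the textbook equation of exchange (M x V = P x Q) with velocity held fixed, using invented round numbers, not a model of actual Fed operations:

```python
# Toy sketch of the quantity-theory relation the text describes:
# price level = money in circulation * velocity / output (M*V = P*Q),
# with velocity assumed constant. All numbers are invented.

def price_level(money_supply, output, velocity=1.0):
    """Price level implied by the equation of exchange."""
    return money_supply * velocity / output

p_before = price_level(money_supply=100.0, output=100.0)   # 1.0

# The central bank buys Treasury securities; bank balances, lending,
# and spending rise, so money in circulation grows 10 percent while
# output is unchanged:
p_after = price_level(money_supply=110.0, output=100.0)    # 1.1

inflation = p_after / p_before - 1.0
print(round(inflation, 3))   # 0.1, i.e. 10 percent inflation
```

The sketch shows why low rates are inflationary only relative to output: if output grew 10 percent along with the money supply, the price level would be unchanged.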
Congress would undoubtedly have wanted interest rates to stay low throughout the past decade and thus would have fought the Fed had the latter heeded warnings of a housing bubble and raised interest rates.
In fact there was inflation--asset-price inflation, the assets being houses and common stock, and at times oil and other commodities.
The Fed should have raised interest rates higher and earlier, and probably would have done so if it had realized that there was asset-price inflation.
But had it done so, it would have caused a slowdown in economic activity, and Congress would have intervened had it not been for the Fed’s independence.
Elected officials have short time horizons.
The Fed isn’t really that independent because, unlike the independence of the Supreme Court from the other branches of government, the Fed’s independence is not based on the Constitution but merely on statute, and Congress can change a federal statute at any time.
Because of this the Fed cannot ignore political pressures entirely.
But if the “reformers” get their way, the political pressures will operate directly, and inflation will be an even more serious problem.
The hostility to the Fed is in part a hostility to “Wall Street,” especially to firms like Goldman Sachs which have made enormous profits this year while most of the country was suffering from the economic downturn.
The combination of the government’s having bailed out Goldman and the other big banks, thereby signaling that it will not allow such firms to fail, and the Fed’s current interest-rate policy, which has pushed short-term interest rates down almost to zero, has enabled Goldman to borrow capital at very low rates and lend or otherwise invest it very profitably because the distress of other banks has reduced aggregate private lending and investing.
Because private banks elect two-thirds of the members of the board of directors of each of the 12 regional federal reserve banks, and five of the presidents of those banks serve on a rotating basis on the Federal Open Market Committee--the body within the Fed that sets monetary policy and thus interest rates--there is suspicion that the Fed is a tool of Wall Street.
That impression would be dissipated, to a degree anyway, by cutting the private banks out of any role in determining the governance of the federal reserve banks.
Such a change might reduce the political pressure to reduce the Fed’s independence.
One of the controversies swirling around the movement for health-care reform concerns the use of “buying power” by the federal government to reduce drug prices; the government is a huge indirect purchaser of drugs since it finances the Medicare and (with the states) Medicaid programs, in addition to providing medical care to military personnel and veterans and medical insurance to the government’s civilian employees.
A related issue (which I do not discuss) is whether the government should finance the importation of drugs from countries in which drug prices are lower.
The technical issue is the use of monopsony power.
Monopoly power in economics refers to restricting output in order to push price above the competitive level, which is the level at which a further increase in output would cause price to fall below the cost of producing that additional output.
A price above that level will deflect some buyers to substitute products that cost more to produce but are cheaper because they are sold at a competitive price.
For example, if the marginal cost of product A is 5 and its price is equal to marginal cost, most buyers will prefer it to B, a similar product that costs 6 and is sold at 6.
If now A is monopolized and its price rises to say, 7, some buyers of A will switch to B, and therefore will be buying a product that actually costs society more to produce.
This deflection is a social cost of monopoly, as is also any costs that sellers incur to obtain monopoly power, such as costs of collusion or of obtaining government protection from competition.
In addition of course there is a transfer of wealth from consumers to producers if products are sold at monopoly prices, though this does not reduce economic welfare unless consumers derive greater utility from a marginal dollar than producers (their shareholders, etc.) do.
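The deflection cost in the A-and-B example above can be restated numerically. This just recomputes the figures already given in the text (A's marginal cost of 5, B's of 6, a monopoly price of 7 for A):

```python
# Numeric restatement of the deflection example: product A costs society
# 5 per unit, B costs 6. Competitively priced, A sells at 5 and buyers
# prefer it. Monopolized, A sells at 7; buyers who switch to B (priced
# at 6) now consume a product that costs society 1 more to produce.

cost_a, cost_b = 5.0, 6.0
monopoly_price_a, price_b = 7.0, 6.0

buyers_switch = price_b < monopoly_price_a   # B now looks cheaper to buyers
extra_social_cost_per_unit = cost_b - cost_a if buyers_switch else 0.0
print(extra_social_cost_per_unit)   # 1.0 per switched unit
```

The 1.0 per unit is pure waste: buyers are no better off than at A's competitive price, yet society spends more resources producing each unit they consume.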
Monopsony power is parallel.
It refers to a situation in which a buyer reduces its purchases of an input in order to reduce its costs.
Most products have an upward-sloping supply curve, meaning that the larger the quantity that is produced, the higher the unit cost of production, because the producer or producers have to bid more resources away from other producers.
So one way a company can increase its profits is by buying less of an input, though this will work only if the company buys a large fraction of the quantity of the input that is produced, as otherwise its reduction in quantity purchased will not have a substantial effect on the quantity of the input that is produced and therefore on its price.
Monopsony is inefficient, like monopoly, because it reduces the output of the monopsonized product below the competitive level; the monopsonist produces below that level in order to reduce his input costs.
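The logic of buying less to pay less can be sketched with a hypothetical linear supply curve (the numbers and functional form are assumptions for illustration only):

```python
# Minimal sketch of monopsony with a hypothetical linear supply curve
# w(q) = a + b*q: buying more bids up the input's price. The buyer
# values each unit of the input at a constant v.

a, b, v = 2.0, 1.0, 10.0

# Competitive buyers purchase until value equals the supply price:
#   v = a + b*q  ->  q = (v - a) / b
q_competitive = (v - a) / b               # 8.0

# The monopsonist's marginal outlay for one more unit is a + 2*b*q,
# because the higher price applies to every unit it buys, so it stops
# sooner:  v = a + 2*b*q  ->  q = (v - a) / (2*b)
q_monopsony = (v - a) / (2 * b)           # 4.0

price_competitive = a + b * q_competitive  # 10.0
price_monopsony = a + b * q_monopsony      # 6.0: input price pushed down

print(q_monopsony < q_competitive)         # True: output restricted
print(price_monopsony < price_competitive) # True: cost savings captured
```

Units between 4 and 8 are worth more to the buyer (10 each) than they cost to supply, yet go unproduced; that forgone surplus is the inefficiency the text describes.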
In the drug setting, the government, which purchases or (more commonly, acting in effect as an intermediate buyer) controls the purchase of such a large fraction of total drug sales, could reduce by some arbitrary amount the drug prices that it is willing to reimburse doctors, pharmacies, and hospitals for, or to permit them to be reimbursed for.
The drug companies in turn would reduce their output, though not immediately.
The reason is the structure of drug costs.
The cost of developing and obtaining FDA approval of a new drug, all of which cost is incurred before the drug is sold, is a large percentage of total costs.
Once the drug is on the market, the cost of actual production is very low.
The sale price of the drug is dominated by the presale development costs.
The effect of the government’s pushing down the sale price would therefore be to reduce the development of new drugs rather than the production of existing drugs.
(As Becker points out in his comment, this analysis does not apply to generic—unpatented—drugs, which generally are sold at a price close to marginal cost.
To those drugs, the normal monopsony analysis would apply: the government would pay a price that forced the producer to move down his supply curve, reducing output.)
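The cost structure just described can be made concrete with invented numbers. The sketch assumes a large sunk development cost and a trivial per-unit production cost, as the text posits; the specific figures are illustrative only:

```python
# Illustrative sketch (invented figures) of the patented-drug cost
# structure: development cost is sunk before any sale, production cost
# is tiny, so a government price cap mainly determines whether *future*
# drugs clear the development hurdle, not whether existing ones are made.

development_cost = 800.0   # incurred before the first pill is sold
marginal_cost = 1.0        # per-pill production cost
expected_units = 1000.0

def lifetime_profit(price):
    return (price - marginal_cost) * expected_units - development_cost

# At the uncapped price the project is worth developing:
print(lifetime_profit(2.0) > 0)    # True: (2-1)*1000 - 800 = 200

# Under a price cap the same project would never be started...
print(lifetime_profit(1.5) > 0)    # False: (1.5-1)*1000 - 800 = -300

# ...but a drug already developed keeps being produced, since the capped
# price still exceeds the marginal cost of production:
print(1.5 > marginal_cost)         # True
```

This is why the output effect of the monopsony falls on drugs not yet developed: the cap leaves existing drugs profitable to produce while making marginal development projects unprofitable to begin.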
Slowing the development of new drugs would be politically attractive because the reduction of government-financed drug costs would be experienced immediately but the cost in slower development of new drugs (because they would be less profitable) would be deferred, and indeed would be invisible because no one would know how much faster the development of new drugs would have been had it not been for the government’s exercise of its monopsony power.
In addition, the cost savings to the government from reducing the price of drugs could be used to reduce the deficit that the health-care reform program soon to be adopted by Congress will create.
Complicating analysis is the fact that expenditures on medical care are not well aligned with social benefits.
Because beneficiaries of government health-care subsidies do not pay the full cost of health care, including the full cost of drugs, health care is overproduced from the standpoint of economic efficiency.
This overproduction is exacerbated by the immense marketing expenses of the drug industry, which appear to exploit the ignorance and desperation of the sick.
Particularly objectionable in my opinion is television advertising of prescription drugs, which is designed to bypass physicians’ exercise of professional judgment by appealing over the heads of the physicians to the patients, who pester their physicians to prescribe drugs that the patients have seen advertised on television.
There is no doubt that many people derive great subjective utility even from new drugs that do not do much for them—that are not significantly superior to old drugs and produce only slight extensions of life.
But if they are misinformed or, more likely, do not bear the cost of the new drugs, there is no presumption that the provision of these drugs is utility-maximizing in an economic sense.
If I am correct that government subsidies and patients’ information costs have resulted in an overproduction of drugs from an efficiency standpoint, there is an argument for the government’s using its monopsony power to reduce the price of drugs.
If drug production is above its optimal level, the economic objection to monopsony—that it reduces output below the optimal level—withers.
An objection likely to block any such measure is that it amounts to government’s rationing medical care.
It does.
But if government is to be the (indirect) provider of care, it has to ration it; otherwise the costs of medical care will strangle the economy.
This past September, the “International Commission on the Measurement of Economic Performance and Social Progress,” which had been appointed by French president Sarkozy and was chaired by the well-known economist Joseph Stiglitz, issued a report of almost 300 pages criticizing Gross Domestic Product as a measure of social welfare and proposing a variety of alterations and alternatives.
Such criticisms are nothing new, but Stiglitz relates them to the deficiencies of GDP as a measure of our current economic situation, and this makes the report especially timely.
For U.S. GDP (and that of many other countries) grew in the third quarter of this year, leading a number of economists and journalists to declare the end of the recession that began in December 2007 and that dramatically intensified in the financial collapse of September 2008.
There are three types of objection to the GDP as a measure of welfare.
The first is that it is defective even from the narrowest economic perspective.
GDP is the market value of all goods and services produced in a year.
It thus explicitly excludes nonmarket values.
But its treatment even of market values is defective because it excludes depreciation.
Suppose the calamitous effects of Hurricane Katrina on New Orleans and other parts of the Gulf Coast had caused (it probably did cause) a surge in the output of various services such as emergency relief, building repairs, and construction.
The market value of those services would be counted as part of GDP, without subtraction for the depreciation of the value of property that the flooding triggered by the hurricane caused.
Another objection to GDP as a measure of economic welfare, also a criticism based on economics though now viewed a little more broadly, is failure to adjust for monetizable, but not monetized, economic values.
An obvious example is household production, which can be valued in money terms by estimating what the household producer would earn in the market; that is only a lower-bound estimate, but it is better than nothing.
Leisure, which is also a value, can be monetized similarly.
Quality is another economic dimension that can be monetized, as is recognized in calculating the consumer price index; without an adjustment for quality change, the rate of inflation would be greatly overstated.
More to the point, it would be wrong to conclude that if the cost of making a product declines and competition forces the producers to cut price, there has been a loss of value.
A great deal of the modern increase in the standard of living is due to improvements in product quality that do not result in cost increases commensurate with the improvements, and often result in lower costs.
A dramatic example is modern dentistry.
A possible further example is increased longevity, which can be monetized; the problem is relating increased longevity to the enormous expenditures on health care.
Turning to bads, economists can and do estimate the costs imposed by crime, pollution, and traffic congestion, but these costs are not subtracted, in the calculation of GDP, from the costs of police and prisons, costs of pollution control, and costs of dealing with congestion—all those costs (except a loss of production from congestion delays) are included in GDP.
Put differently, the monetizable value of investments in police, pollution control, and reducing congestion does not enter into GDP.
Even with all the suggested corrections made, GDP would be an imperfect measure of economic output, because government provides many services that are difficult to value: expenditures on the military and on foreign and domestic intelligence and counterintelligence are conspicuous examples.
It would be odd to say that a bomber that costs $100 million to build and is sold only to the U.S. Air Force has a “market value” equal to the purchase price. Its market value in a meaningful sense would depend on its contribution to reducing the expected costs of foreign threats to U.S. security.
Which brings me to the third and broadest problem with GDP as a measure of welfare--that even if improved along the lines I have just suggested it would not really measure happiness or well-being.
Market value is a function mainly of cost.
The value that people derive from goods and services is better measured by what they would pay for them if competition did not reduce their price to or near the cost of production; but that value (“consumer surplus”) is difficult to estimate.
Or consider—coming closer to current events that have sharpened traditional concerns with GDP’s adequacy as a measure of welfare—the anxiety that people who are involuntarily unemployed experience.
The second in command at the international commission was the economist Amartya Sen, a pioneer (along with the philosopher Martha Nussbaum) in attempting to develop measures of human “capabilities” and ranking countries according to their ability to equip their citizens with such capabilities (long life, adequate nutrition, education, etc.).
The United Nations’ Human Development Index attempts such a ranking, and some might think it a candidate for replacing GDP.
So should GDP be changed or abandoned or supplemented or supplanted? I think not, for three reasons.
The first is that even the adjustments that every economist would favor in principle, such as subtracting depreciation from market value, involve contestable judgments (there is a measure called Net Domestic Product that subtracts normal depreciation from GDP).
If private economists want to make such adjustments and offer up their own estimates of the economy's output, fine; but GDP is an official government statistic, and to avoid the suspicion and perhaps reality of political interference it is essential that government statistics be calculated in a thoroughly objective, uncontroversial manner.
An extreme example is the U.S. census, which would be a lot cheaper to conduct if it used sampling, rather than trying to count every household and individual in the United States.
But the loss of credibility would be damaging.
The objection to adjusting GDP would grow with every adjustment.
And adjustments that were not monetizable would require controversial decisions on weighting.
That is a standard problem with multicriteria rankings.
Second, crude as it is, unadjusted GDP is at least roughly correlated with adjusted measures of welfare.
In the international commission’s report, adjustments for leisure and other nonmonetized but monetizable values boost France’s GDP from being 66 percent of America’s to 87 percent.
In other words, France is poorer than America, though not by as much as the unadjusted GDP figures suggested.
Anyone who knows something about France knows that it’s a wealthy country, though not so wealthy as the United States, and that the French don’t work as hard as Americans.
Third, except at extremes (Norway versus Zimbabwe, say), the significance of GDP lies not in its use as a method of ranking nations, but in its use as a method of measuring the business cycle in an individual nation.
A chart of U.S. GDP oscillates around a trend line of about 3 percent per annum; there is a big dip in the Great Depression of the 1930s and a smaller though still significant dip since 2007.
The oscillations in GDP since 2007 provide a rough but serviceable starting point in appraising the performance of the economy—a sharp drop of GDP in the last quarter of 2008 and the first quarter of 2009, a smaller drop in the second quarter, an increase (of 2.8 percent) in the third quarter, which still leaves GDP well below its trend line.
But it is necessary to emphasize that it is just a starting point.
I disagree with economists who say the “recession” ended in the third quarter.
The depression (as I think we should call it if only because of its enormous potential political consequences) has caused massive unemployment with all the associated anxieties and hardships, has greatly reduced household wealth, has caused private investment to turn negative, has cost the government trillions of dollars in lost tax revenues and recovery expenditures (TARP, the fiscal stimulus, the mortgage-relief programs, the auto bailouts, etc.), has undermined belief in free markets and altered the line between government and business in favor of government, and is threatening a future inflation while deepening our dependence on foreign lenders.
To view a change in GDP from negative to positive as signifying the end of a depression (by which criterion the Great Depression ended in 1933 and again in 1938) is to misunderstand the utility of GDP as a measure of economic activity.
I agree with Becker that the     Copenhagen     conference on global warming was an embarrassing and total flop.
As he explains, there is no prospect in the foreseeable future for an international treaty that will limit emissions of carbon dioxide.
A different approach is required.
And desperately required; for I believe that the threat of global warming is very serious, and that it is not merely a long-term threat.
(If it were, there would be no urgency about taking measures to slow it, for normal technological progress will eventually solve the problem at low cost.) The particular danger which concerns me, and which I emphasized in my book Catastrophe: Risk and Response (2004), is that of abrupt global warming.
The climate equilibrium (like the economic equilibrium, as we have discovered) is unstable, in part because of feedback effects.
For example, as the Alaskan and Siberian permafrost melts, methane, a potent greenhouse gas, is released into the atmosphere, causing surface temperatures to rise, which in turn accelerates the release of methane.
And as surface temperatures rise, the amount of water vapor in the atmosphere increases; water vapor is itself a potent greenhouse gas, so this too accelerates the warming trend.
The ocean’s capacity to absorb atmospheric carbon dioxide is limited, so that as emissions increase, more remain in the atmosphere longer.
Similarly, the destruction of forests to make way for agriculture increases net carbon emissions because trees absorb more carbon dioxide (during the day, in photosynthesis) than they emit (at night, when they are breathing oxygen and exhaling carbon dioxide).
The effects of rapidly rising concentrations of greenhouse gases in the atmosphere in heating the earth’s surface could produce such catastrophes as the melting of the Greenland and Antarctic ice caps, which would raise the level of the oceans by several feet, and the melting of the Arctic ice cap, which by diluting the salt in the North Atlantic could shift the Gulf Stream from northeast to north, which would give Europe, because of its northern latitude, a Siberian climate.
These are catastrophes that could occur within the next one or two decades.
No probabilities can be attached to abrupt global warming, so no expected cost can be calculated that would enable a cost-benefit analysis of preventive measures.
But when the likelihood of an immense disaster cannot be estimated, yet does not seem negligible, there is an argument for taking preventive measures, at least if they are not prohibitively costly.
The free-rider problem is defeating efforts to limit carbon emissions.
Emissions limitations even by a major emitter (the two biggest emitters are China and the United States) would have only a slight effect on the concentration of carbon dioxide in the atmosphere.
It would not reduce the concentration; it would not keep it from increasing; it would just slow the rate of increase slightly.
The major effect would be a transfer of wealth to any country that did not limit its emissions; that country would have a competitive advantage because it would not bear the cost of reducing emissions.
As other countries reduced production because of higher costs, the free-riding countries would increase their production (because they would have a comparative advantage)--and with it their carbon emissions.
The free-rider problem would not be serious if the cost of reducing emissions by significant amounts were low; but it is high.
Given existing technology, it requires a substantial reduction (whether brought about by emissions taxes or by quotas) in the use of motor vehicles, in the generation of electricity (other than in nuclear reactors), and in the clearing of forests for agricultural and other uses.
What is needed to make a solution to global warming feasible is cheap technological fixes.
One is actually at hand, though strongly resisted by environmentalists (I am tempted to put “short-sighted” in front of “environmentalists”), and that is to inject sulphur dioxide, a potent sun-screen gas, into the atmosphere.
This solution (called “geoengineering”) is resisted because sulphur dioxide in the atmosphere causes acid rain; but acid rain is a far less serious problem than global warming.
Other possible technological solutions include injecting carbon emissions from electrical power generation underground, cheap battery-driven motor vehicles, and artificial “bacteria” that would devour carbon dioxide in the atmosphere.
All of these solutions would require large investments in research and development to achieve feasibility at low cost.
The benefits, however, would include not only reduced carbon emissions but also reduced dependence by the United States, Japan, and many other countries on foreign producers of oil and natural gas, which include unstable countries and countries hostile to the United States and its allies.
The United States and other nations are financing the development of carbon-limiting technologies, but at low levels relative to the need.
I favor carbon taxes to increase the incentives for private R & D by producers who would be affected by such taxes and by firms interested in supplying emission-reducing technology to such producers.
Private innovation is anyway likely to be more efficient than government-sponsored R & D.
Emission taxes would have painful allocative effects in the short run, and their imposition should be deferred until the economy improves significantly.
But when imposed they would generate tax revenue as well as affecting R & D incentives, and we shall need additional tax revenue to control our immense and growing public debt.
I wish to correct a pair of errors, and a misunderstanding, concerning my post on contraception and Catholicism.
The errors: The Pope who made the 1930 anticontraception pronouncement was not Pius VI, but Pius XI.
And there is some doubt whether his pronouncement should be taken as “infallible.” I am advised by a Catholic layman, Professor Stephen M. Bainbridge, that the 1930 pronouncement was not intended to be infallible. But he adds: “Although Pius XI’s 1930 encyclical is generally regarded as anti-contraception, it was Paul VI’s 1968 encyclical Humanae Vitae that is properly regarded as the clearest statement of the Magisterium with respect to contraception. Note that nothing in Humanae Vitae purports to be an infallible definition of doctrine. Because it has been widely accepted by the bishops and subsequent Popes, however, I have no doubt that it qualifies for infallibility as part of the ordinary universal Magisterium” (emphasis mine).
A homosexual student group at the University of Chicago claims that I wrote that homosexuals are more likely to molest children sexually than heterosexuals.
They misunderstood what I said.
They fasten on the following sentence: "The problem of priests’ sexually molesting   boys   would be solved if priests were allowed to marry and if women could be priests, because then the priesthood would attract fewer homosexuals" (emphasis added).
I didn't say that homosexuals molest children more than heterosexuals do, a subject on which I'm uninformed.
I said that the problem of priests molesting   boys   would be solved (more precisely, would be alleviated, since there would still be some homosexual priests and some of them would be child molesters--necessarily of boys if they're homosexual) if priests could marry and women could be ordained.
The priesthood attracts homosexuals, for obvious reasons, and homosexual child molesters are molesters of boys.
Publicity concerning molestation of children by priests has focused on boys, which is why I suggested that an obvious response, though difficult for the Church because of its long-established doctrine, would be to allow priests to marry and women to be priests.
Women, by the way, are much less likely to molest children of either sex than men are.
This means that if some (or many) priests were women, there would be less sexual molestation by priests of either boys or girls.
A recent monograph published by the libertarian Cato Institute—Jeffrey A. Miron and Katherine Waldock, The Budgetary Impact of Ending Drug Prohibition (2010), available at www.cato.org/pubs/wtpapers/DrugProhibitionWP.pdf—offers an estimate of the budgetary cost to the U.S. government (federal, state, and local) of the federal and state legal prohibitions against the sale and use of marijuana, cocaine, heroin, and other mind-altering drugs.
The lead author, Jeffrey Miron, has an economics Ph.D. from MIT and lectures in economics at Harvard; he has published extensively on the economics of drug prohibition.
His coauthor is a doctoral candidate at NYU’s business school.
I will summarize the monograph and then offer some thoughts of my own on the question of legalizing the illegal drugs.
The authors estimate that legalizing these drugs (which would require repealing both federal and state prohibitions) would reduce government expenditures by $41.3 billion per year, with about two-thirds of the savings accruing to state and local government.
The savings would involve reductions in police expenditures, in prosecutorial and judicial expenditures, and in jail and prison expenditures.
The authors estimate the reductions by multiplying the various expenditure categories by the percentage of arrests, prosecutions, and prison terms that are attributable to drug offenses.
This is a crude method of estimation, because different types of criminal offense involve different amounts of police, prosecutorial and judicial, and prison resources; for example, the length of imprisonment for a particular type of offense is the best estimator of the prison costs for that offense, and the length varies across types of offense.
A further problem with the method of estimation is its disregard of fixed costs.
Given fixed costs, a reduction in output will not reduce total costs by the same percentage as the reduction.
At least not immediately; in the long run, a reduction in output should reduce total costs proportionately or nearly so, because in the long run all or at least most costs are variable.
So the $41.3 billion figure has to be taken with a grain of salt, but since it could be larger or smaller, it is a legitimate starting point for analysis.
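The fixed-cost point can be illustrated with a stub calculation; the cost figures below are hypothetical, chosen only to show the shape of the effect:

```python
# In the short run fixed costs don't shrink with output, so halving
# the caseload cuts total cost by less than half. Hypothetical numbers.

def total_cost(fixed, variable_per_case, cases):
    return fixed + variable_per_case * cases

fixed, per_case = 40.0, 1.0
before = total_cost(fixed, per_case, cases=60)   # 100.0
after = total_cost(fixed, per_case, cases=30)    # 70.0 after a 50% cut in cases
savings_share = 1 - after / before               # ~0.30: only a 30% cost reduction
assert savings_share < 0.5
```

Only as courthouses, prisons, and staffing adjust in the long run does the savings share approach the share of cases eliminated.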
In addition to reducing expenditures on law enforcement, legalizing the illegal drugs would, the authors argue, increase tax revenues (federal, state, and possibly local as well); they estimate the increase at $46.7 billion a year.
They assume first, conservatively, that the demand for the drugs would not increase; that is, at a given price, the amount of drugs purchased would not increase.
This is a counterintuitive assumption, since the illegality of the drugs discourages their purchase; but the authors point out that increased consumption of drugs might come largely at the expense of consumption of alcohol, tobacco, and other goods that are taxed as drugs would be.
However, the price of drugs would fall; although legalizing the drugs would result in the sellers having to pay taxes and incur regulatory expenses, and would result in advertising expenditures that might shift sales among sellers rather than increase demand, the authors plausibly assume that these effects would be offset by the elimination of the heavy costs that prohibition imposes on sellers, notably the threat of punishment and of gang violence.
The decline in price would have two effects, however, which might be largely offsetting from a tax-revenue standpoint: the amount of drugs sold would rise, but tax revenues per sale would fall because the tax rate would be based on the sale price.
The net effect would depend on the elasticity of demand for the drugs.
If demand is inelastic, this means that a fall in price will not generate a proportionate increase in amount purchased; since revenue is price times quantity, total revenue will fall when price falls.
The authors note that the demand for drugs is believed to be inelastic, and if so then total revenue will fall along with price if drugs are legalized, which will reduce the amount of tax collected relative to a good the demand for which is elastic.
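A quick sketch of the elasticity argument, using a constant-elasticity demand curve and made-up parameter values (nothing here comes from the monograph):

```python
# Constant-elasticity demand Q = A * P**(-e). With an ad valorem tax,
# receipts are t * P * Q, so when demand is inelastic (e < 1) a price
# drop shrinks the tax base. All parameter values are hypothetical.

def tax_revenue(price, elasticity, tax_rate, scale=100.0):
    quantity = scale * price ** (-elasticity)
    return tax_rate * price * quantity

inelastic = 0.5
high = tax_revenue(price=10.0, elasticity=inelastic, tax_rate=0.2)
low = tax_revenue(price=5.0, elasticity=inelastic, tax_rate=0.2)
assert low < high       # with inelastic demand, the price fall cuts receipts

elastic = 2.0
high_e = tax_revenue(price=10.0, elasticity=elastic, tax_rate=0.2)
low_e = tax_revenue(price=5.0, elasticity=elastic, tax_rate=0.2)
assert low_e > high_e   # with elastic demand, receipts rise instead
```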
Additional tax revenues, however, would be generated by income tax paid by the sellers; drug gangsters do not pay income tax on their income from criminal activity.
From estimates of drug consumption and prices—estimates that must however be taken with a grain of salt, once again, because there are no reliable records of illegal transactions—the authors derive the $46.7 billion figure for increased tax revenues from legalization; this assumes tax rates similar to those on close substitutes, such as alcoholic beverages.
The sum of public expenditures that would be avoided and additional tax revenues that would be generated is, according to the authors’ estimate, $88 billion.
Their estimates are broken down by drugs.
Of particular significance is their estimate that the total budgetary improvement from legalizing marijuana, the least controversial of the illegal drugs, would be only about $17.4 billion, of which only $8.7 billion would come from reduced expenditures on law enforcement (computed from tables 3 and 4 in the monograph), the rest representing increased tax revenues.
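The headline figures can be cross-checked with simple arithmetic (dollar amounts in billions, taken from the monograph as summarized above):

```python
# Rechecking the monograph's headline arithmetic ($billions,
# as reported by Miron and Waldock).

expenditure_savings = 41.3     # reduced law-enforcement spending
added_tax_revenue = 46.7       # new excise and income tax receipts
total = expenditure_savings + added_tax_revenue
assert round(total) == 88      # the ~$88 billion combined figure

marijuana_total = 17.4         # marijuana's share of the combined figure
marijuana_enforcement = 8.7    # of which: law-enforcement savings
marijuana_taxes = marijuana_total - marijuana_enforcement  # ~8.7 in taxes
```

Marijuana thus accounts for only about a fifth of the estimated budgetary improvement; the bulk comes from the harder drugs.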
And as we know from the recent decision by the Justice Department to continue enforcing the federal law against marijuana in California, despite that state’s repeal of its state prohibition, both federal and state prohibitions must be repealed for the legalization of illegal drugs to be effective.
Although I think the authors’ estimates are good enough to be a valid starting point for evaluation of the budgetary benefits of legalizing drugs, it is important to note that their monograph is not a cost-benefit analysis in the usual sense.
True, the costs of police, judges, prisons, etc. are social costs; that is, they are resources that have opportunity costs.
But tax revenues are a transfer payment rather than an increase in overall national wealth.
The authors do not attempt to estimate whether taxing drugs is efficient relative to other taxes, though it is true that taxing drugs is cheaper than prohibiting them, because collection costs are lower than the law enforcement costs that prohibition imposes.
Most important, the authors also do not consider the possible social benefits of prohibition.
Prohibition reduces the consumption of mind-altering drugs.
Of course there are mind-altering drugs that are not prohibited, and many of these are close substitutes.
These include the numerous prescription drugs that have mind-altering effects very similar to those of the illegal drugs, and of course there are alcohol and cigarettes.
Moreover, a tax on legalized drugs would raise the price to the consumer and thus moderate the effect of legalization on consumption.
But if the tax is too high, it will result in reviving the illegal industry.
And the authors probably underestimate the increased consumption that would result from a lower price or even the same price (brought about by a particularly stiff excise tax) because they don’t mention concerns with impurities and with the stigma of being a “drug addict” that are created by the prohibition and would be substantially reduced by its repeal.
The question would then be whether the external costs of increased consumption of mind-altering drugs would exceed the savings in law enforcement costs from legalization.
It seems doubtful that marijuana consumption generates significant social costs, but legalizing it would generate only modest cost savings--$8.7 billion a year, according to the authors’ estimates.
But cocaine, especially the crack form, along with heroin, ecstasy, LSD, methamphetamines, and perhaps others, may induce behavioral changes that cause social damage.
Most leaders of black communities believe that rampant drug usage is highly destructive to their communities, and not only because of the gang activity that prohibition induces.
Drug gangs would disappear with legalization and that would reduce the violence in those communities, but the effect might be more than offset by the effects of greater drug use.
Concern with the huge budget deficits of our federal, state, and local governments may gain the authors a more sympathetic reading than advocacy of repealing the drug laws usually does.
From a budgetary standpoint, the authors are estimating an annual savings of almost $90 billion.
But without an estimate of the social costs of increased drug usage, the path to repeal is blocked.
It would be a step in the right direction if the Justice Department would take the position that it will not enforce a federal drug law in any state that repeals its parallel prohibition of that drug; that way we might obtain experimental evidence of the social costs of illegal drugs.
In my post of August 29 of this year—“Is the Federal Government Broke?”—I pointed out that a realistic assessment of the federal government’s finances, conducted in the first issue of a Morgan Stanley newsletter called   Sovereign Subjects   (published on August 25), and modeled on the kind of assessment required of private companies, concluded that the federal government probably is insolvent.
A firm is insolvent when its liabilities exceed its assets.
The major asset of the federal government is its taxing power, which is some fraction of Gross Domestic Product.
Its liabilities include bonds and other contractual debt, entitlements such as social security and Medicare, and government services such as maintenance of highways and national defense.
The so-called “entitlements” are not really contractual obligations because Congress can reduce or eliminate them, and likewise government services; but the political resistance to reducing let alone eliminating them could be as strong as the resistance to “restructuring” (a euphemism for defaulting on) government debt.
And the taxing power is limited, not only by political opposition but also by the negative effect of heavy taxation on incentives to work and to invest, and by other economic distortions that heavy taxation creates.
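The Morgan Stanley-style solvency test described above can be sketched as a perpetuity comparison: capitalize a feasible tax take as the asset and compare it with the present value of debt, entitlements, and services. Every number below is a hypothetical placeholder of mine, not an estimate from the newsletter.

```python
# A balance-sheet sketch: the government is "insolvent" on this test
# when the capitalized value of feasible taxation falls short of the
# present value of its commitments. All figures are made up.

def present_value(annual_flow, discount_rate):
    # value of a level perpetuity
    return annual_flow / discount_rate

gdp = 15_000.0             # annual GDP in $billions, illustrative
feasible_tax_share = 0.19  # politically/economically feasible take
asset = present_value(gdp * feasible_tax_share, discount_rate=0.04)

# annual debt service + entitlements + services, illustrative
liabilities = present_value(3_200.0, discount_rate=0.04)

solvent = asset >= liabilities
# here asset = 71,250 < 80,000 = liabilities: insolvent on this test
assert not solvent
```

The point of the exercise is not the particular numbers but that both sides of the balance sheet are bounded: the tax share by politics and incentive effects, the liabilities by how far entitlements and services can realistically be cut.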
Not that the federal government could be forced to declare bankruptcy.
In a bankruptcy, the assets of the bankrupt firm are sold and the proceeds distributed among the creditors, or the creditors’ claims are converted to equity and the creditors thus become the firm’s owners, the shareholders being wiped out.
Neither of these things can happen to the federal government.
If it refuses to pay its contractual creditors—the individuals, firms and other private institutions, and foreign countries that own Treasury securities or other federal debt—or if it inflates the currency in order to reduce its real debt burden, it will find it difficult to borrow in the future without having to pay a very high interest rate, which will further deepen the federal government’s insolvency.
Inflation and default are only short-term measures for staving off financial disaster.
The federal government has at present a very large and growing deficit, and few ideas for reducing it that command widespread support.
Many of our state and local governments also have huge deficits—Illinois, California, and New York are insolvent by the methodology of the Morgan Stanley study, and doubtless other states and a number of cities as well—but their fiscal situation is, I believe, less dire than that of the federal government.
Start with cities.
Cities, unlike states or the federal government, can be forced to declare bankruptcy, and their assets can be sold to satisfy creditors.
Cities (also towns, villages, and other municipal entities) often have income-producing assets attractive to creditors in bankruptcy.
But it would be such a blow to municipal pride for a city to go broke and see its most valuable assets seized to pay its creditors, and would wreak such havoc on its ability to borrow in the future at reasonable interest rates, that city governments will do almost anything to avoid bankruptcy; and there is much they can do without encountering insuperable political opposition.
Cities do not provide many entitlements.
Rather, their costs are heavily concentrated in salaries and benefits of city employees, and these costs can be cut by salary reductions, reductions in pension and other benefit contributions, and layoffs.
Infrastructure costs, such as road maintenance, can usually be reduced as well (by what is euphemistically referred to as “deferred maintenance”); and sales and property taxes can be raised to help close a gap between revenue and expenditures.
Cities suffer but they cope.
The situation of the states differs in one crucial respect from that of municipalities: they cannot declare bankruptcy.
And unlike the federal government, they cannot use inflation or devaluation to get out of a fiscal hole because they are not permitted to issue their own currency.
In fact their situation is very similar to that of the nations of the eurozone.
And we have seen that the eurozone nations that have been hardest hit by the economic crisis, notably Greece, Ireland, Portugal, and Spain, have taken effective and in some instances draconian measures to cut their costs, including entitlement costs.
They have no choice, because they can’t borrow money at affordable interest rates if creditors consider them insolvent or verging on insolvency.
Our states are in a similar but better position because, unlike the eurozone countries, they do not provide elaborate entitlements.
Social Security and Medicare, the two biggest U.S. entitlement programs, are entirely federal.
Medicaid is shared, but a state could, if it wanted, abandon Medicaid.
State expenditures are heavily concentrated on roads, higher education, and prisons, and expenditures on such services can be reduced, and state taxes raised, with limited political resistance because of the absence of alternatives.
Nor do the states or cities bear the heavy costs of national security borne by the federal government, with its $700 billion annual defense budget.
The economic downturn caused a sharp decline in state tax revenues, which have been running about 12 percent below their pre-downturn levels.
The result has been an aggregate state budget gap for fiscal 2011 of $130 billion and an estimated gap of $140 billion for fiscal 2012.
About a third of these gaps will be filled by federal stimulus money.
(These statistics are from Elizabeth McNicholl et al., “States Continue to Feel Recession’s Impact,” http://www.cbpp.org/cms/?fa=view&id=711.) That is not an ideal solution because it merely shifts debt from the state to the federal ledger.
(See John B. Taylor, “A Zero Stimulus Impact,” Dec. 9, 2010, http://misunderstoodfinance.blogspot.com/2010/12/zero-stimulus-impact.html.) And it still leaves big gaps in state budgets.
But rather than running up huge deficits, as the federal government has been doing, the states have managed to close the budget gap for 2011 by a combination of raising taxes and reducing expenditures, and they will do so for 2012 as well—they have no choice.
They are lucky not to have one! Because the federal government is not expected to default, it can borrow both from Americans and from foreigners at low interest rates, and so can continue to postpone the day of reckoning at which it will have to cut its expenditures.
There is no similar confidence in state or city bonds.
States and cities cannot long postpone the day of reckoning.
The analogy of Greece and Ireland is compelling.
When times are good, low tax rates generate large public revenues, which politicians spend to make themselves popular.
The result is waste, which can be cut in an economic downturn without inflicting unbearable political pain.
Taxes can be raised as well, because the politicians can tell the people—and the people will believe them—that unlike the federal government they cannot use deficit spending to enable tax and expenditure rates to remain unchanged.
What is true and disturbing is that the weakness of government accounting standards relative to private corporate standards (the need for reform of government accounting is acute) fosters profligacy and makes adjustments to fiscal reality unnecessarily abrupt and painful.
An illustrative problem for both states and cities is what might be called “indirect bankruptcy.” I refer by that term to the sale of assets to obtain money for current expenditures.
It is illustrated by the sale in the last few years by Chicago of two major income-producing City-owned assets: the Chicago Skyway and the system of parking fees and fines.
The assets were sold for their market value, which is to say the present value of the earnings that they are anticipated to generate over their useful life.
The City should have invested the proceeds of the sales in assets expected to generate a similar (ideally a higher) net income stream.
Instead the City dissipated the proceeds on current expenses, and so it continues to run huge deficits while having less future income with which to pay them down.
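The present-value point can be made concrete with a sketch (the cash flow, discount rate, and term below are hypothetical illustrations, not the actual Skyway or parking-concession figures): an asset that throws off a level annual cash flow for its useful life is worth the discounted sum of those flows, which is roughly what a rational buyer pays.

```python
def present_value(cash_flow, rate, years):
    """Discounted value of a level annual cash flow received for `years` years."""
    return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical asset: $50 million a year for 75 years, discounted at 6 percent.
price = present_value(50e6, 0.06, 75)
# Selling at this price and spending the proceeds on current expenses leaves
# the city no richer: it has merely traded future income for current cash.
```

Because the sale price just capitalizes the lost income stream, the transaction improves the current budget only by worsening every future one.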
Another example of bad government accounting at the state level is excessive pensions and other benefits for public employees.
By assuming unrealistically that the value of a pension fund will increase at an annual rate of 8 percent, states are able to “fund” generous public pensions at low annual expenditure, since a contribution that compounds at 8 percent grows very rapidly.
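The effect of the 8 percent assumption can be illustrated with a hypothetical sinking-fund calculation (the $1 million obligation, 30-year horizon, and 5 percent alternative are my illustrative assumptions, not any state's actual figures):

```python
def annual_contribution(target, rate, years):
    """Level annual payment that compounds to `target` at the assumed return."""
    return target * rate / ((1 + rate) ** years - 1)

# Funding a $1 million pension obligation coming due in 30 years:
at_8 = annual_contribution(1_000_000, 0.08, 30)  # optimistic return assumption
at_5 = annual_contribution(1_000_000, 0.05, 30)  # more conservative assumption
# The 8 percent assumption cuts the stated annual cost by roughly 40 percent,
# making a generous pension look cheap on the state's books.
```

If actual returns fall short of the assumed 8 percent, the shortfall becomes an unfunded liability that some future budget must absorb.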
Still, public pensions can be renegotiated, or even defaulted, and the political pain is limited because public employees are not a dominant political bloc—in part because their high wages and especially their generous pension and medical benefits have made them unpopular in a period in which private employees are struggling—and because the generous federal entitlements buffer the pain to individuals of reduction in state and local expenditures.
At the local level, there is concern that many cities and other municipalities leave many of their liabilities off their books, and this concern has led to a fall in the price of municipal bonds (and hence an increase in the interest rates that municipalities have to pay on new bond issues).
The financial situation of the municipalities may be more dire than that of the states, and the result may be a wave of municipal bankruptcies.
The fiscal situation of state and local government is serious from a macroeconomic standpoint because it adds to the nation’s overall debt load, a load shared among individuals, families, cities, states, troubled businesses, and the federal government.
This heavy debt load, by limiting the money available for consumption and production, is retarding the economic recovery and thus contributing to the continued high rate of unemployment.
Nevertheless the fiscal situation of state and local government seems more manageable than that of the federal government.
The Tax Relief, Unemployment Insurance Reauthorization and Job Creation Act of 2010, which President Obama signed into law on December 17, is being described as an $858 billion stimulus bill.
But this is imprecise.
The immensely complex Act is well summarized in “CCH Tax Briefing: The 2010 Tax Relief / Job Creation Act,” Dec. 20, 2010, http://tax.cchgroup.com/downloads/files/pdfs/legislation/bush-taxcuts.pdf (visited Dec. 25, 2010), which provides the following breakdown of the costs of the two-year program.
Individual Tax Cuts ...................... $186 billion
AMT Relief ............................... $136 billion
Payroll Tax Deduction .................... $111 billion
Estate/Gift Tax Relief ................... $68 billion
Capital Gains/Dividend Cuts .............. $53 billion
Bonus Depreciation/179 Expensing ......... $21 billion
Other .................................... $226 billion
In addition to this $801 billion in tax relief, the Act authorizes a $57 billion extension of unemployment insurance benefits for a maximum of 99 weeks.
All these are just estimates, since the amount of tax relief depends on incomes, the size of estates, and so forth, and the amount of unemployment benefits depends on the number of claims made.
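The breakdown sums to the Act's headline figure, as a quick check confirms:

```python
# Cost components from the CCH briefing, in billions of dollars.
tax_relief = {
    "Individual Tax Cuts": 186,
    "AMT Relief": 136,
    "Payroll Tax Deduction": 111,
    "Estate/Gift Tax Relief": 68,
    "Capital Gains/Dividend Cuts": 53,
    "Bonus Depreciation/179 Expensing": 21,
    "Other": 226,
}
total_relief = sum(tax_relief.values())  # $801 billion in tax relief
total_cost = total_relief + 57           # plus unemployment benefits: $858 billion
```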
Most of the “costs” represent simply lost tax revenues; the Act extends tax cuts, made a decade ago, that would have expired at the end of this year—but the payroll (i.e., social security) tax deduction and extension of unemployment benefits, along with some of the other tax provisions, are new tax breaks.
The benefits extension will probably expire at the end of the two years in which the new Act will be in effect, as unemployment falls.
The other provisions of the Act are likely to be continued, given Republican hostility to any tax increases—and refusing to extend a temporary tax cut is rightly perceived as a tax increase.
The Act illustrates the nature of compromise between the Democratic and Republican parties in the current political climate: each party seeks tax reductions for its constituents (the payroll tax deduction being the Democrats’ preferred form of tax break) and the Democrats in addition seek to increase spending (hence the extension of unemployment benefits).
The aggregate federal debt is about $14 trillion, and is increasing at a rate of about $400 billion a year.
All other things unchanged, the Act, assuming permanence, will almost double the annual increase in the deficit, putting the country farther along the road to bankruptcy—unless, as the Administration argues, the Act will operate as a stimulus (Keynesian deficit spending in a depression or recession) that will bring down the deficit by accelerating economic growth.
This would be unlikely if the tax breaks were new rather than for the most part merely continuations of the Bush tax cuts of a decade ago (I explain the significance of this qualification below).
The normal (not recessionary) rate of the nation’s economic growth is about 3 percent a year, but is usually much higher in the recovery period following a depression or recession.
The economy is expected to grow by only about 3 to 4 percent in 2011; if the new Act added 2 percent to the higher figure, which I don’t think anyone expects, so that the Gross Domestic Product grew by about $800 billion (6 percent of our $14 trillion GDP), this would, it is true, reduce the rate of growth of the deficit.
Suppose the additional $800 billion in GDP yielded $160 billion in additional federal tax revenues (20 percent).
Then the 2011 deficit, instead of being roughly $800 billion, would be “only” $640 billion—we would still be on the road to bankruptcy.
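The arithmetic behind that conclusion, using the round numbers in the text (the 20 percent revenue share of added GDP is the assumption stated above, not an official estimate):

```python
# Round figures from the discussion above, in billions of dollars.
extra_gdp = 800           # hypothetical added output if growth reached 6 percent
revenue_share = 0.20      # assumed share of added GDP captured as federal taxes
extra_revenue = extra_gdp * revenue_share        # 160
baseline_deficit = 800
reduced_deficit = baseline_deficit - extra_revenue  # 640: still deep in the red
```

Even on these optimistic growth assumptions, the deficit shrinks by only a fifth, which is why faster growth alone cannot close the fiscal gap.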
Moreover, the benefits in deficit reduction from the Act would soon peter out; there is no basis for thinking that the level of taxation in the Act, compared to the higher tax level in the 1990s, will produce a higher rate of economic growth than the normal 3 percent.
It is nevertheless possible to defend the new Act on two grounds.
The first, and I think less important, is that it indicates the possibility of compromise between the Obama Administration and the resurgent, and increasingly conservative and assertive, Republican Party, though compromise will be more difficult come January when the Republicans take control of the House of Representatives and significantly increase their representation in the Senate.
Second, and more important, estimates that GDP would grow by at least 3 percent in 2011 were premised on the expectation that the bulk, at least, of the Bush tax cuts would be continued.
Given the weakness of the economy, a sudden tax increase in 2011, which would have been the effect of allowing those cuts to expire, could easily have knocked one or two percentage points off the GDP growth rate.
It would have been better to have just continued the tax breaks and not have cut the payroll tax and extended unemployment benefits.
A small, and possibly (though not probably) temporary, tax cut is unlikely to stimulate much spending, and extending unemployment benefits can actually increase unemployment by making unemployed workers more picky in their search for a new job.
These provisions of the Act are simply the Democratic quid pro quo for the tax breaks for the wealthy, favored by the Republicans.
As Becker points out, the real significance of the increase in the unemployment rate from 9.6 percent in October to 9.8 percent in November is not the 0.2-percentage-point increase, which is within the margin of error, but that it signals the depth of the economic hole that the country has fallen into.
It is now three years since the depression—and it is a depression, not a “recession” or even a “Great Recession”—those are euphemisms—began, and it did not end last summer when GDP stopped falling (by that measure, the Great Depression ended in 1933).
The proper comparison is between actual GDP and the GDP trend line (3 percent a year)—until actual GDP rejoins the trend line, the economy is in depression.
Real (inflation-adjusted) GDP is roughly the same today that it was three years ago; it “should” be 9 percent higher—which would make GDP almost $1.5 trillion greater today than it is.
The economy cannot rejoin the trend line with unemployment as high as it is.
A rise in the unemployment rate can actually be a recovery signal.
The reason is that the rate is based on a definition of the unemployed that excludes people who have not been looking for work in the previous four weeks.
As an economy recovers and demand for labor grows, people who had been discouraged from looking for a job because demand was so weak begin looking—and until they find a job, they are counted as unemployed.
But that is a case in which both employment and unemployment are rising, while the increase in the November unemployment rate reflected, rather, weakening demand for labor, though probably that is a random event rather than the beginning of a trend.
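The mechanism is easy to see from the definition of the unemployment rate, which counts in the labor force only job-holders and active job-seekers (the figures below are illustrative round numbers):

```python
def unemployment_rate(employed, unemployed):
    """Unemployed share of the labor force (job-holders plus active seekers)."""
    return unemployed / (employed + unemployed)

base = unemployment_rate(140, 15)       # about 9.7 percent (figures in millions)
# One million discouraged workers resume searching but have not yet found jobs:
searching = unemployment_rate(140, 16)  # about 10.3 percent: the rate rises
# even though no one has lost a job -- in that case a sign of recovery.
```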
Besides the 15 million unemployed, another 2.5 million Americans would like to work but have not searched for work in the last four weeks and a further 9 million are involuntarily working only part time.
Many full-time workers have taken steep pay cuts, though the costs of goods and services have not fallen.
(True, average hourly earnings have not declined, but often the pay cuts take the form of furloughing workers or cutting shifts, so that the worker is working fewer hours per week and therefore taking home less pay.
The worker may or may not be able to find a part-time job to fill the gap.) The total number of employed persons in the United States is only 140 million; when you subtract from that number the workers who are involuntarily working part time and the (unknown) number of workers worried about losing their jobs or experiencing hardship because their wages have been cut, and then add the unemployed and the discouraged workers, it is apparent that the employment picture is very dire.
The fall in income and rise in anxiety that mark (and mar) the current labor situation cause a reduction in consumption and hence in production, and so reduce the incentive of both consumers and businesses to borrow, and of banks to lend (they fear high default rates—and their balance sheets are probably weaker than they appear to be).
Anxiety increases not only the savings rate, but also hoarding—banks and other businesses accumulate cash, and individuals invest their savings in low-risk forms that do not fuel business investment (an example is a federally insured savings account, with the bank investing its deposits in Treasury securities).
With money circulating slowly and inflation as a result negligible, long-term fixed-interest-rate debtors, such as mortgagors, obtain no relief from their debt burdens.
The huge state and local debt, together with the enormous federal debt, has created economic uncertainty, compounded by what appears to be political paralysis in dealing with public debt and by the hard-to-predict impact of the health reform law and other Administration initiatives on business.
Together these factors may be creating a high-unemployment, low-growth equilibrium that could persist indefinitely.
What is to be done, if anything? The ideal solution—which is unattainable—would be to combine a short-term stimulus with long-term fiscal and regulatory reform aimed at reducing governmental deficits and increasing economic growth.
With interest rates very low and much savings in inert forms (such as the $1 trillion in excess reserves held by the banks), there is an argument for the government’s borrowing those savings and putting them to work on projects that would require labor, and thus reduce unemployment.
But not only is a further stimulus politically impossible (in part because of the poor design, execution, and explanation of the large stimulus program enacted in February 2009); it would take too long to put into effect to avoid a risk of its crowding out private investment when the private economy begins to grow more rapidly.
