In Hoc Anno Domini - WSJ.com

In Hoc Anno Domini

When Saul of Tarsus set out on his journey to Damascus the whole of the known world lay in bondage. There was one state, and it was Rome. There was one master for it all, and he was Tiberius Caesar.

Everywhere there was civil order, for the arm of the Roman law was long. Everywhere there was stability, in government and in society, for the centurions saw that it was so.

But everywhere there was something else, too. There was oppression—for those who were not the friends of Tiberius Caesar. There was the tax gatherer to take the grain from the fields and the flax from the spindle to feed the legions or to fill the hungry treasury from which divine Caesar gave largess to the people. There was the impressor to find recruits for the circuses. There were executioners to quiet those whom the Emperor proscribed. What was a man for but to serve Caesar?

There was the persecution of men who dared think differently, who heard strange voices or read strange manuscripts. There was enslavement of men whose tribes came not from Rome, disdain for those who did not have the familiar visage. And most of all, there was everywhere a contempt for human life. What, to the strong, was one man more or less in a crowded world?

Then, of a sudden, there was a light in the world, and a man from Galilee saying, Render unto Caesar the things which are Caesar's and unto God the things that are God's.

And the voice from Galilee, which would defy Caesar, offered a new Kingdom in which each man could walk upright and bow to none but his God. Inasmuch as ye have done it unto one of the least of these my brethren, ye have done it unto me. And he sent this gospel of the Kingdom of Man into the uttermost ends of the earth.

So the light came into the world and the men who lived in darkness were afraid, and they tried to lower a curtain so that man would still believe salvation lay with the leaders.

But it came to pass for a while in divers places that the truth did set man free, although the men of darkness were offended and they tried to put out the light. The voice said, Haste ye. Walk while you have the light, lest darkness come upon you, for he that walketh in darkness knoweth not whither he goeth.

Along the road to Damascus the light shone brightly. But afterward Paul of Tarsus, too, was sore afraid. He feared that other Caesars, other prophets, might one day persuade men that man was nothing save a servant unto them, that men might yield up their birthright from God for pottage and walk no more in freedom.

Then might it come to pass that darkness would settle again over the lands and there would be a burning of books and men would think only of what they should eat and what they should wear, and would give heed only to new Caesars and to false prophets. Then might it come to pass that men would not look upward to see even a winter's star in the East, and once more, there would be no light at all in the darkness.

And so Paul, the apostle of the Son of Man, spoke to his brethren, the Galatians, the words he would have us remember afterward in each of the years of his Lord:

Stand fast therefore in the liberty wherewith Christ has made us free and be not entangled again with the yoke of bondage.

This editorial was written in 1949 by the late Vermont Royster and has been published annually since.

Printed in The Wall Street Journal, page A14

The Sidney Awards, Part II - NYTimes.com

The Sidney Awards, Part II

Book tours are lonely, yet after spending four months promoting his novel “Freedom,” Jonathan Franzen went to an island 500 miles off the coast of Chile to be alone. He got at least one thing out of it, a profound essay in The New Yorker called “Farther Away,” the winner of another of this year’s Sidney Awards.

Franzen’s theme is solitude. He writes about Robinson Crusoe, the emergence of the novel, the potentially isolating effect of the Internet, and the suicide of his friend, the writer David Foster Wallace.

“He was a lifelong prisoner on the island of himself,” Franzen writes of his friend. “To prove once and for all that he truly didn’t deserve to be loved, it was necessary to betray as hideously as possible those who loved him best, by killing himself at home and making them firsthand witnesses to his act.”

Wallace emerges as a person who defined the extreme end of the isolation spectrum. Franzen is a bit down the scale, which explains what is best in his writing (his incredible powers of observation) and what is worst (his coolness toward his own characters). Many people with writerly personalities share these traits. You can also find a few of them, oddly, in politics.

Many of the best public-policy essays of the year tackled the interconnected subjects of inequality, wage stagnation and the loss of economic dynamism. If anybody wants a deeper understanding of these issues, I’d recommend a diverse mélange of articles: “The Broken Contract” by George Packer in Foreign Affairs; “The Inequality That Matters” by Tyler Cowen in The American Interest; “The Rise of the New Global Elite” by Chrystia Freeland in The Atlantic; and “Beyond the Welfare State” in National Affairs by Yuval Levin.

Each essay has insights that complicate the familiar partisan story lines. Cowen, for example, notes that income inequality is on the way up while the inequality of personal well-being is on the way down. One hundred years ago, John D. Rockefeller lived a very different life than the average wage earner, who worked six days a week, never took vacations and had no access to the world’s culture. Today, both you and Bill Gates enjoy the Internet, important new pharmaceuticals and good cheap food.

Anybody who is on antidepressants, or knows somebody who is, should read Marcia Angell’s series “The Epidemic of Mental Illness: Why?” from The New York Review of Books. Many of us have been taught that depression arises, in part, from chemical imbalances in the brain. Apparently, there is no evidence to support that.

Many of us thought that antidepressants work. Apparently, there is meager evidence to support that, too. They may work slightly better than placebos, Angell argues, but only under certain circumstances. They may also be permanently altering people’s brains and unintentionally fueling the plague of mental illness by causing episodes of mania, for example. I wouldn’t consider Angell the last word on this, but it’s certainly a viewpoint worth learning about.

Speaking of medicine gone wrong, Ethan Gutmann had a chilling piece in The Weekly Standard called “The Xinjiang Procedure” about organ harvesting in China. Prisoners are executed by firing squads and then, as they are slowly dying, doctors are rushed in to harvest livers and kidneys. Gutmann spoke with doctors compelled to perform this procedure:

“Even as Enver stitched the man back up — not internally, there was no point to that anymore, just so the body might look presentable — he sensed the man was still alive. ‘I am a killer,’ Enver screamed inwardly. He did not dare to look at the man’s face again.”

GQ magazine had a very good year with several fine articles. One of them was “The Movie Set That Ate Itself” by Michael Idov. It is about the movie director Ilya Khrzhanovsky, who set out to make a film about Stalinism. He took over a Ukrainian city, amassed a cast of thousands and had them live in his own totalitarian city. They were forbidden to utter words or use technologies that did not exist in 1952. He redid the plumbing pipes so the toilets would sound like toilets from 1952. Actors and technicians had to answer to his every whim.

Hundreds left or were purged from the movie project, but many more were sucked in by the totalitarian mind-set, snitching on confederates, living in fear. Idov ends up denouncing his own photographer, after Khrzhanovsky turns against him.

Every year there are more outstanding essays than I have space to mention, but this year’s selection process has been the hardest. The Internet is everywhere, but this is a golden age of long-form journalism, and I could have chosen 50 pieces as good as the ones above. Click on The Browser, Longform.org and Arts & Letters Daily for links to more. Tweets are fun, but essays you’ll remember.

Gladwell: The Order of Things -- What College Rankings Really Tell Us : The New Yorker

THE ORDER OF THINGS

Last summer, the editors of Car and Driver conducted a comparison test of three sports cars, the Lotus Evora, the Chevrolet Corvette Grand Sport, and the Porsche Cayman S. The cars were taken on an extended run through mountain passes in Southern California, and from there to a race track north of Los Angeles, for precise measurements of performance and handling. The results of the road tests were then tabulated according to a twenty-one-variable, two-hundred-and-thirty-five-point rating system, based on four categories: vehicle (driver comfort, styling, fit and finish, etc.); power train (transmission, engine, and fuel economy); chassis (steering, brakes, ride, and handling); and “fun to drive.” The magazine concluded, “The range of these three cars’ driving personalities is as various as the pajama sizes of Papa Bear, Mama Bear, and Baby Bear, but a clear winner emerged nonetheless.” This was the final tally:


1. Porsche Cayman 193
2. Chevrolet Corvette 186
3. Lotus Evora 182

Car and Driver is one of the most influential editorial voices in the automotive world. When it says that it likes one car better than another, consumers and carmakers take notice. Yet when you inspect the magazine’s tabulations it is hard to figure out why Car and Driver was so sure that the Cayman is better than the Corvette and the Evora. The trouble starts with the fact that the ranking methodology Car and Driver used was essentially the same one it uses for all the vehicles it tests—from S.U.V.s to economy sedans. It’s not set up for sports cars. Exterior styling, for example, counts for four per cent of the total score. Has anyone buying a sports car ever placed so little value on how it looks? Similarly, the categories of “fun to drive” and “chassis”—which cover the subjective experience of driving the car—count for only eighty-five points out of the total of two hundred and thirty-five. That may make sense for S.U.V. buyers. But, for people interested in Porsches and Corvettes and Lotuses, the subjective experience of driving is surely what matters most. In other words, in trying to come up with a ranking that is heterogeneous—a methodology that is broad enough to cover all vehicles—Car and Driver ended up with a system that is absurdly ill-suited to some vehicles.

Suppose that Car and Driver decided to tailor its grading system just to sports cars. Clearly, styling and the driving experience ought to count for much more. So let’s make exterior styling worth twenty-five per cent, the driving experience worth fifty per cent, and the balance of the criteria worth twenty-five per cent. The final tally now looks like this:


1. Lotus Evora 205
2. Porsche Cayman 198
3. Chevrolet Corvette 192

There’s another thing funny about the Car and Driver system. Price counts only for twenty points, less than ten per cent of the total. There’s no secret why: Car and Driver is edited by auto enthusiasts. To them, the choice of a car is as important as the choice of a home or a spouse, and only a philistine would let a few dollars stand between him and the car he wants. (They leave penny-pinching to their frumpy counterparts at Consumer Reports.) But for most of us price matters, especially in a case like this, where the Corvette, as tested, costs $67,565—thirteen thousand dollars less than the Porsche, and eighteen thousand dollars less than the Lotus. Even to a car nut, that’s a lot of money. So let’s imagine that Car and Driver revised its ranking system again, giving a third of the weight to price, a third to the driving experience, and a third split equally between exterior styling and vehicle characteristics. The tally would now be:


1. Chevrolet Corvette 205
2. Lotus Evora 195
3. Porsche Cayman 195

So which is the best car?

Car and Driver’s ambition to grade every car in the world according to the same methodology would be fine if it limited itself to a single dimension. A heterogeneous ranking system works if it focusses just on, say, how much fun a car is to drive, or how good-looking it is, or how beautifully it handles. The magazine’s ambition to create a comprehensive ranking system—one that considered cars along twenty-one variables, each weighted according to a secret sauce cooked up by the editors—would also be fine, as long as the cars being compared were truly similar. It’s only when one car is thirteen thousand dollars more than another that juggling twenty-one variables starts to break down, because you’re faced with the impossible task of deciding how much a difference of that degree ought to matter. A ranking can be heterogeneous, in other words, as long as it doesn’t try to be too comprehensive. And it can be comprehensive as long as it doesn’t try to measure things that are heterogeneous. But it’s an act of real audacity when a ranking system tries to be comprehensive and heterogeneous—which is the first thing to keep in mind in any consideration of U.S. News & World Report’s annual “Best Colleges” guide.
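Before turning to the colleges, it may help to make the weighting arithmetic concrete. Below is a minimal sketch, in Python, of the kind of weighted scoring Car and Driver uses. The per-category subscores are invented for illustration (the magazine publishes only its totals), so the numbers will not match its tallies; the point is simply that the same three cars can finish in three different orders depending on how the categories are weighted.

```python
# A minimal sketch of weighted ranking. The per-category subscores below
# are hypothetical (each on a 0-100 scale); Car and Driver publishes only
# the totals, not a breakdown like this.
subscores = {
    "Porsche Cayman":     {"styling": 85, "driving": 90, "price": 50, "other": 90},
    "Chevrolet Corvette": {"styling": 78, "driving": 88, "price": 85, "other": 84},
    "Lotus Evora":        {"styling": 95, "driving": 94, "price": 40, "other": 75},
}

# Three weighting schemes, echoing the three tallies above: an all-vehicles
# scheme that nearly ignores styling and price, a sports-car scheme, and a
# price-conscious scheme.
weightings = {
    "all-vehicles":    {"styling": 0.04, "driving": 0.36, "price": 0.10, "other": 0.50},
    "sports-car":      {"styling": 0.25, "driving": 0.50, "price": 0.00, "other": 0.25},
    "price-conscious": {"styling": 1/6,  "driving": 1/3,  "price": 1/3,  "other": 1/6},
}

def rank(cars, weights):
    """Sort cars by their weighted composite score, best first."""
    totals = {
        car: sum(weights[category] * score for category, score in cats.items())
        for car, cats in cars.items()
    }
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

for scheme, weights in weightings.items():
    print(scheme, "->", [car for car, total in rank(subscores, weights)])
```

With these made-up subscores, the Porsche wins under the first scheme, the Lotus under the second, and the Corvette under the third: the same instability the three tallies above display.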

The U.S. News rankings are run by Robert Morse, whose six-person team operates out of a small red brick office building in the Georgetown neighborhood of Washington, D.C. Morse is a middle-aged man with gray hair who looks like the prototypical Beltway wonk: rumpled, self-effacing, mildly preppy and sensibly shoed. His office is piled high with the statistical detritus of more than two decades of data collection. When he took on his current job, in the mid-nineteen-eighties, the college guide was little more than an item of service journalism tucked away inside U.S. News magazine. Now the weekly print magazine is defunct, but the rankings have taken on a life of their own. In the month that the 2011 rankings came out, the U.S. News Web site recorded more than ten million visitors. U.S. News has added rankings of graduate programs, law schools, business schools, medical schools, and hospitals—and Morse has become the dean of a burgeoning international rankings industry.

“In the early years, the thing that’s happening now would not have been imaginable,” Morse says. “This idea of using the rankings as a benchmark, college presidents setting a goal of ‘We’re going to rise in the U.S. News ranking,’ as proof of their management, or as proof that they’re a better school, that they’re a good president. That wasn’t on anybody’s radar. It was just for consumers.”

Over the years, Morse’s methodology has steadily evolved. In its current form, it relies on seven weighted variables:


1. Undergraduate academic reputation, 22.5 per cent
2. Graduation and freshman retention rates, 20 per cent
3. Faculty resources, 20 per cent
4. Student selectivity, 15 per cent
5. Financial resources, 10 per cent
6. Graduation rate performance, 7.5 per cent
7. Alumni giving, 5 per cent
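Mechanically, a composite like this is a weighted average of sub-scores that have been put on a common scale, then rescaled so that the top school lands at 100. The sketch below is only an illustration of that arithmetic, using the weights listed above with two invented schools; it is not the magazine’s actual statistical model.

```python
# Illustration of a weighted composite using the weights listed above.
# The schools and their sub-scores (assumed already normalized to a
# 0-100 scale) are invented for the example.
WEIGHTS = {
    "reputation": 0.225,
    "retention": 0.20,
    "faculty_resources": 0.20,
    "selectivity": 0.15,
    "financial_resources": 0.10,
    "grad_rate_performance": 0.075,
    "alumni_giving": 0.05,
}

schools = {
    "Hypothetical Ivy": {
        "reputation": 95, "retention": 97, "faculty_resources": 90,
        "selectivity": 98, "financial_resources": 92,
        "grad_rate_performance": 60, "alumni_giving": 80,
    },
    "Hypothetical State Flagship": {
        "reputation": 75, "retention": 88, "faculty_resources": 70,
        "selectivity": 72, "financial_resources": 55,
        "grad_rate_performance": 85, "alumni_giving": 40,
    },
}

def composite(subs):
    """Weighted average of the normalized sub-scores."""
    return sum(WEIGHTS[name] * value for name, value in subs.items())

raw = {school: composite(subs) for school, subs in schools.items()}
best = max(raw.values())
# Rescale so the highest-scoring school is a 100, as in the published table.
final = {school: round(100 * score / best) for school, score in raw.items()}
print(final)
```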

From these variables, U.S. News generates a score for each institution on a scale of 1 to 100, where Harvard is a 100 and the University of North Carolina-Greensboro is a 22. Here is a list of the schools that finished in positions forty-one through fifty in the 2011 “National University” category:


41. Case Western Reserve, 60
41. Rensselaer Polytechnic Institute, 60
41. University of California-Irvine, 60
41. University of Washington, 60
45. University of Texas-Austin, 59
45. University of Wisconsin-Madison, 59
47. Penn State University-University Park, 58
47. University of Illinois, Urbana-Champaign, 58
47. University of Miami, 58
50. Yeshiva University, 57

This ranking system looks a great deal like the Car and Driver methodology. It is heterogeneous. It doesn’t just compare U.C. Irvine, the University of Washington, the University of Texas-Austin, the University of Wisconsin-Madison, Penn State, and the University of Illinois, Urbana-Champaign—all public institutions of roughly the same size. It aims to compare Penn State—a very large, public, land-grant university with a low tuition and an economically diverse student body, set in a rural valley in central Pennsylvania and famous for its football team—with Yeshiva University, a small, expensive, private Jewish university whose undergraduate program is set on two campuses in Manhattan (one in midtown, for the women, and one far uptown, for the men) and is definitely not famous for its football team.

The system is also comprehensive. It doesn’t simply compare schools along one dimension—the test scores of incoming freshmen, say, or academic reputation. An algorithm takes a slate of statistics on each college and transforms them into a single score: it tells us that Penn State is a better school than Yeshiva by one point. It is easy to see why the U.S. News rankings are so popular. A single score allows us to judge between entities (like Yeshiva and Penn State) that otherwise would be impossible to compare. At no point, however, do the college guides acknowledge the extraordinary difficulty of the task they have set themselves. A comprehensive, heterogeneous ranking system was a stretch for Car and Driver—and all it did was rank inanimate objects operated by a single person. The Penn State campus at University Park is a complex institution with dozens of schools and departments, four thousand faculty members, and forty-five thousand students. How on earth does anyone propose to assign a number to something like that?

The first difficulty with rankings is that it can be surprisingly hard to measure the variable you want to rank—even in cases where that variable seems perfectly objective. Consider an extreme example: suicide. Here is a ranking of suicides per hundred thousand people, by country:


1. Belarus, 35.1
2. Lithuania, 31.5
3. South Korea, 31.0
4. Kazakhstan, 26.9
5. Russia, 26.5
6. Japan, 24.4
7. Guyana, 22.9
8. Ukraine, 22.6
9. Hungary, 21.8
10. Sri Lanka, 21.6

This list looks straightforward. Yet no self-respecting epidemiologist would look at it and conclude that Belarus has the worst suicide rate in the world, and that Hungary belongs in the top ten. Measuring suicide is just too tricky. It requires someone to make a surmise about the intentions of the deceased at the time of death. In some cases, that’s easy. Maybe the victim jumped off the Golden Gate Bridge, or left a note. In most cases, though, there’s ambiguity, and different coroners and different cultures vary widely in the way they choose to interpret that ambiguity. In certain places, cause of death is determined by the police, who some believe are more likely to call an ambiguous suicide an accident. In other places, the decision is made by a physician, who may be less likely to do so. In some cultures, suicide is considered so shameful that coroners shy away from that determination, even when it’s obvious. A suicide might be called a suicide, a homicide, an accident, or left undetermined. David Phillips, a sociologist at the University of California-San Diego, has argued persuasively that a significant percentage of single-car crashes are probably suicides, and criminologists suggest that a good percentage of civilians killed by police officers are actually cases of “suicide by cop”—instances where someone deliberately provoked deadly force. The reported suicide rate, then, is almost certainly less than the actual suicide rate. But no one knows whether the relationship between those two numbers is the same in every country. And no one knows whether the proxies that we use to estimate the real suicide rate are any good.

“Many, many people who commit suicide by poison have something else wrong with them—let’s say the person has cancer—and the death of this person might be listed as primarily associated with cancer, rather than with deliberate poisoning,” Phillips says. “Any suicides in that category would be undetectable. Or it is frequently noted that Orthodox Jews have a low recorded suicide rate, as do Catholics. Well, it could be because they have this very solid community and proscriptions against suicide, or because they are unusually embarrassed by suicide and more willing to hide it. The simple answer is nobody knows whether suicide rankings are real.”

The U.S. News rankings suffer from a serious case of the suicide problem. There’s no direct way to measure the quality of an institution—how well a college manages to inform, inspire, and challenge its students. So the U.S. News algorithm relies instead on proxies for quality—and the proxies for educational quality turn out to be flimsy at best.

Take the category of “faculty resources,” which counts for twenty per cent of an institution’s score. “Research shows that the more satisfied students are about their contact with professors,” the College Guide’s explanation of the category begins, “the more they will learn and the more likely it is they will graduate.” That’s true. According to educational researchers, arguably the most important variable in a successful college education is a vague but crucial concept called student “engagement”—that is, the extent to which students immerse themselves in the intellectual and social life of their college—and a major component of engagement is the quality of a student’s contacts with faculty. As with suicide, the disagreement isn’t about what we want to measure. So what proxies does U.S. News use to measure this elusive dimension of engagement? The explanation goes on:


We use six factors from the 2009-10 academic year to assess a school’s commitment to instruction. Class size has two components, the proportion of classes with fewer than 20 students (30 percent of the faculty resources score) and the proportion with 50 or more students (10 percent of the score). Faculty salary (35 percent) is the average faculty pay, plus benefits, during the 2008-09 and 2009-10 academic years, adjusted for regional differences in the cost of living. . . . We also weigh the proportion of professors with the highest degree in their fields (15 percent), the student-faculty ratio (5 percent), and the proportion of faculty who are full time (5 percent).

This is a puzzling list. Do professors who get paid more money really take their teaching roles more seriously? And why does it matter whether a professor has the highest degree in his or her field? Salaries and degree attainment are known to be predictors of research productivity. But studies show that being oriented toward research has very little to do with being good at teaching. Almost none of the U.S. News variables, in fact, seem to be particularly effective proxies for engagement. As the educational researchers Patrick Terenzini and Ernest Pascarella concluded after analyzing twenty-six hundred reports on the effects of college on students:


After taking into account the characteristics, abilities, and backgrounds students bring with them to college, we found that how much students grow or change has only inconsistent and, perhaps in a practical sense, trivial relationships with such traditional measures of institutional “quality” as educational expenditures per student, student/faculty ratios, faculty salaries, percentage of faculty with the highest degree in their field, faculty research productivity, size of the library, [or] admissions selectivity.

The reputation score that serves as the most important variable in the U.S. News methodology—accounting for 22.5 per cent of a college’s final score—isn’t any better. Every year, the magazine sends a survey to the country’s university and college presidents, provosts, and admissions deans (along with a sampling of high-school guidance counsellors) asking them to grade all the schools in their category on a scale of one to five. Those at national universities, for example, are asked to rank all two hundred and sixty-one other national universities—and Morse says that the typical respondent grades about half of the schools in his or her category. But it’s far from clear how any one individual could have insight into that many institutions. In an article published recently in the Annals of Internal Medicine, Ashwini Sehgal analyzed U.S. News’s “Best Hospitals” rankings, which also rely heavily on reputation ratings generated by professional peers. Sehgal put together a list of objective criteria of performance—such as a hospital’s mortality rates for various surgical procedures, patient-safety rates, nursing-staffing levels, and key technologies. Then he checked to see how well those measures of performance matched each hospital’s reputation rating. The answer, he discovered, was that they didn’t. Having good outcomes doesn’t translate into being admired by other doctors. Why, after all, should a gastroenterologist at the Ochsner Medical Center, in New Orleans, have any specific insight into the performance of the gastroenterology department at Mass General, in Boston, or even, for that matter, have anything more than an anecdotal impression of the gastroenterology department down the road at some hospital in Baton Rouge?

Some years ago, similarly, a former chief justice of the Michigan supreme court, Thomas Brennan, sent a questionnaire to a hundred or so of his fellow-lawyers, asking them to rank a list of ten law schools in order of quality. “They included a good sample of the big names. Harvard. Yale. University of Michigan. And some lesser-known schools. John Marshall. Thomas Cooley,” Brennan wrote. “As I recall, they ranked Penn State’s law school right about in the middle of the pack. Maybe fifth among the ten schools listed. Of course, Penn State doesn’t have a law school.”

Those lawyers put Penn State in the middle of the pack, even though every fact they thought they knew about Penn State’s law school was an illusion, because in their minds Penn State is a middle-of-the-pack brand. (Penn State does have a law school today, by the way.) Sound judgments of educational quality have to be based on specific, hard-to-observe features. But reputational ratings are simply inferences from broad, readily observable features of an institution’s identity, such as its history, its prominence in the media, or the elegance of its architecture. They are prejudices.

And where do these kinds of reputational prejudices come from? According to Michael Bastedo, an educational sociologist at the University of Michigan who has published widely on the U.S. News methodology, “rankings drive reputation.” In other words, when U.S. News asks a university president to perform the impossible task of assessing the relative merits of dozens of institutions he knows nothing about, he relies on the only source of detailed information at his disposal that assesses the relative merits of dozens of institutions he knows nothing about: U.S. News. A school like Penn State, then, can do little to improve its position. To go higher than forty-seventh, it needs a better reputation score, and to get a better reputation score it needs to be higher than forty-seventh. The U.S. News ratings are a self-fulfilling prophecy.

Bastedo, incidentally, says that reputation ratings can sometimes work very well. It makes sense, for example, to ask professors within a field to rate others in their field: they read one another’s work, attend the same conferences, and hire one another’s graduate students, so they have real knowledge on which to base an opinion. Reputation scores can work for one-dimensional rankings, created by people with specialized knowledge. For instance, the Wall Street Journal has ranked colleges according to the opinions of corporate recruiters. Those opinions are more than a proxy. To the extent that people choose one college over another to enhance their prospects in the corporate job market, the reputation rankings of corporate recruiters are of direct relevance. The No. 1 school in the Wall Street Journal’s corporate-recruiter ranking, by the way, is Penn State.

For several years, Jeffrey Stake, a professor at the Indiana University law school, has run a Web site called the Ranking Game. It contains a spreadsheet loaded with statistics on every law school in the country, and allows users to pick their own criteria, assign their own weights, and construct any ranking system they want.

Stake’s intention is to demonstrate just how subjective rankings are, to show how determinations of “quality” turn on relatively arbitrary judgments about how much different variables should be weighted. For example, his site makes it easy to mimic the U.S. News rankings. All you have to do is give equal weight to “academic reputation,” “LSAT scores at the 75th percentile,” “student-faculty ratio,” and “faculty law-review publishing,” and you get a list of élite schools which looks similar to the U.S. News law-school rankings:


1. University of Chicago
2. Yale University
3. Harvard University
4. Stanford University
5. Columbia University
6. Northwestern University
7. Cornell University
8. University of Pennsylvania
9. New York University
10. University of California, Berkeley

There’s something missing from that list of variables, of course: it doesn’t include price. That is one of the most distinctive features of the U.S. News methodology. Both its college rankings and its law-school rankings reward schools for devoting lots of financial resources to educating their students, but not for being affordable. Why? Morse admitted that there was no formal reason for that position. It was just a feeling. “We’re not saying that we’re measuring educational outcomes,” he explained. “We’re not saying we’re social scientists, or we’re subjecting our rankings to some peer-review process. We’re just saying we’ve made this judgment. We’re saying we’ve interviewed a lot of experts, we’ve developed these academic indicators, and we think these measures measure quality schools.”

As answers go, that’s up there with the parental “Because I said so.” But Morse is simply being honest. If we don’t understand what the right proxies for college quality are, let alone how to represent those proxies in a comprehensive, heterogeneous grading system, then our rankings are inherently arbitrary. All Morse was saying was that, on the question of price, he comes down on the Car and Driver side of things, not on the Consumer Reports side. U.S. News thinks that schools that spend a lot of money on their students are nicer than those that don’t, and that this niceness ought to be factored into the equation of desirability. Plenty of Americans agree: the campus of Vanderbilt University or Williams College is filled with students whose families are largely indifferent to the price their school charges but keenly interested in the flower beds and the spacious suites and the architecturally distinguished lecture halls those high prices make possible.

Of course, given that the rising cost of college has become a significant social problem in the United States in recent years, you can make a strong case that a school ought to be rewarded for being affordable. So suppose we go back to Stake’s ranking game, and re-rank law schools based on student-faculty ratio, L.S.A.T. scores at the seventy-fifth percentile, faculty publishing, and price, all weighted equally. The list now looks like this:


1. University of Chicago
2. Yale University
3. Harvard University
4. Stanford University
5. Northwestern University
6. Brigham Young University
7. Cornell University
8. University of Colorado
9. University of Pennsylvania
10. Columbia University

The revised ranking tells us that there are schools—like B.Y.U. and Colorado—that provide a good legal education at a decent price, and that, by choosing not to include tuition as a variable, U.S. News has effectively penalized those schools for trying to provide value for the tuition dollar. But that’s a very subtle tweak. Let’s say that value for the dollar is something we really care about. And so what we want is a three-factor ranking, counting value for the dollar at forty per cent, L.S.A.T. scores at forty per cent of the total, and faculty publishing at twenty per cent. Look at how the top ten changes:


1. University of Chicago
2. Brigham Young University
3. Harvard University
4. Yale University
5. University of Texas
6. University of Virginia
7. University of Colorado
8. University of Alabama
9. Stanford University
10. University of Pennsylvania

Welcome to the big time, Alabama!

The U.S. News rankings turn out to be full of these kinds of implicit ideological choices. One common statistic used to evaluate colleges, for example, is called “graduation rate performance,” which compares a school’s actual graduation rate with its predicted graduation rate given the socioeconomic status and the test scores of its incoming freshman class. It is a measure of the school’s efficacy: it quantifies the impact of a school’s culture and teachers and institutional support mechanisms. Tulane, given the qualifications of the students that it admits, ought to have a graduation rate of eighty-seven per cent; its actual 2009 graduation rate was seventy-three per cent. That shortfall suggests that something is amiss at Tulane.

Another common statistic for measuring college quality is “student selectivity.” This reflects variables such as how many of a college’s freshmen were in the top ten per cent of their high-school class, how high their S.A.T. scores were, and what percentage of applicants a college admits. Selectivity quantifies how accomplished students are when they first arrive on campus.

Each of these statistics matters, but for very different reasons. As a society, we probably care more about efficacy: America’s future depends on colleges that make sure the students they admit leave with an education and a degree. If you are a bright high-school senior and you’re thinking about your own future, though, you may well care more about selectivity, because that relates to the prestige of your degree.

But no institution can excel at both. The national university that ranks No. 1 in selectivity is Yale. A crucial part of what it considers its educational function is to assemble the most gifted group of freshmen it can. Because it maximizes selectivity, though, Yale will never do well on an efficacy scale. Its freshmen are so accomplished that they have a predicted graduation rate of ninety-six per cent: the highest Yale’s efficacy score could be is plus four. (It’s actually plus two.) Of the top fifty national universities in the “Best Colleges” ranking, the least selective school is Penn State. Penn State sees its educational function as serving a wide range of students. That gives it the opportunity to excel at efficacy—and it does so brilliantly. Penn State’s freshmen have an expected graduation rate of seventy-three per cent and an actual graduation rate of eighty-five per cent, for a score of plus twelve: no other school in the U.S. News top fifty comes close.
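In arithmetic terms, the efficacy score is simply the actual graduation rate minus the predicted one, which is why a school whose predicted rate is already close to one hundred has almost no room to post a big number. A minimal sketch, using the figures quoted above (Yale’s actual rate of ninety-eight per cent is implied by its predicted ninety-six and its score of plus two):

```python
# Efficacy as described above: actual graduation rate minus the rate
# predicted from the incoming class. Figures (in per cent) are the ones
# quoted in the text; Yale's actual rate is inferred from its plus-two score.
def efficacy(predicted, actual):
    return actual - predicted

def ceiling(predicted):
    # No school can graduate more than 100 per cent of its students,
    # so a high predicted rate leaves little room to outperform.
    return 100 - predicted

print("Yale:       ", efficacy(96, 98), "out of a possible", ceiling(96))
print("Penn State: ", efficacy(73, 85), "out of a possible", ceiling(73))
```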

There is no right answer to how much weight a ranking system should give to these two competing values. It’s a matter of which educational model you value more—and here, once again, U.S. News makes its position clear. It gives twice as much weight to selectivity as it does to efficacy. It favors the Yale model over the Penn State model, which means that the Yales of the world will always succeed at the U.S. News rankings because the U.S. News system is designed to reward Yale-ness. By contrast, to the extent that Penn State succeeds at doing a better job of being Penn State—of attracting a diverse group of students and educating them capably—it will only do worse. Rankings are not benign. They enshrine very particular ideologies, and, at a time when American higher education is facing a crisis of accessibility and affordability, we have adopted a de-facto standard of college quality that is uninterested in both of those factors. And why? Because a group of magazine analysts in an office building in Washington, D.C., decided twenty years ago to value selectivity over efficacy, to use proxies that scarcely relate to what they’re meant to be proxies for, and to pretend that they can compare a large, diverse, low-cost land-grant university in rural Pennsylvania with a small, expensive, private Jewish university on two campuses in Manhattan.

“If you look at the top twenty schools every year, forever, they are all wealthy private universities,” Graham Spanier, the president of Penn State, told me. “Do you mean that even the most prestigious public universities in the United States, and you can take your pick of what you think they are—Berkeley, U.C.L.A., University of Michigan, University of Wisconsin, Illinois, Penn State, U.N.C.—do you mean to say that not one of those is in the top tier of institutions? It doesn’t really make sense, until you drill down into the rankings, and what do you find? What I find more than anything else is a measure of wealth: institutional wealth, how big is your endowment, what percentage of alumni are donating each year, what are your faculty salaries, how much are you spending per student. Penn State may very well be the most popular university in America—we get a hundred and fifteen thousand applications a year for admission. We serve a lot of people. Nearly a third of them are the first people in their entire family network to come to college. We have seventy-six per cent of our students receiving financial aid. There is no possibility that we could do anything here at this university to get ourselves into the top ten or twenty or thirty—except if some donor gave us billions of dollars.”

In the fall of 1913, the prominent American geographer Ellsworth Huntington sent a letter to two hundred and thirteen scholars from twenty-seven countries. “May I ask your cooperation in the preparation of a map showing the distribution of the higher elements of civilization throughout the world?” Huntington began, and he continued:


My purpose is to prepare a map which shall show the distribution of those characteristics which are generally recognized as of the highest value. I mean by this the power of initiative, the capacity for formulating new ideas and for carrying them into effect, the power of self-control, high standards of honesty and morality, the power to lead and to control other races, the capacity for disseminating ideas, and other similar qualities which will readily suggest themselves.

Each contributor was given a list of a hundred and eighty-five of the world’s regions—ranging from the Amur district of Siberia to the Kalahari Desert—with instructions to give each region a score of one to ten. The scores would then be summed and converted to a scale of one to a hundred. The rules were strict. The past could not be considered: Greece could not be given credit for its ancient glories. “If two races inhabit a given region,” Huntington specified further, “both must be considered, and the rank of the region must depend upon the average of the two.” The reputation of immigrants could be used toward the score of their country of origin, but only those of the first generation. And size and commercial significance should be held constant: the Scots should not suffer relative to, say, the English, just because they were less populous. Huntington’s respondents took on the task with the utmost seriousness. “One appreciates what a big world this is and how little one knows about it when he attempts such a task as you have set,” a respondent wrote back to Huntington. “It is a most excellent means of taking the conceit out of one.” England and Wales and the North Atlantic states of America scored a perfect hundred, with central and northwestern Germany and New England coming in at ninety-nine.

Huntington then requested from the twenty-five of his correspondents who were Americans an in-depth ranking of the constituent regions of the United States. This time, he proposed a six-point scale. Southern Alaska, in this second reckoning, was last, at 1.5, followed by Arizona and New Mexico, at 1.6. The winners: Massachusetts, at 6.0, followed by Connecticut, Rhode Island, and New York, at 5.8. The citadel of American civilization was New England and New York, Huntington concluded, in his magisterial 1915 work “Civilization and Climate.”

In case you are wondering, Ellsworth Huntington was a professor of geography at Yale, in New Haven, Connecticut. “Civilization and Climate” was published by Yale University Press, and the book’s appendix contains a list of Huntington’s American correspondents, of which the following bear special mention:


J. Barrell, geologist, New Haven, Conn.
P. Bigelow, traveler and author, Malden, N.Y.
I. Bowman, geographer, New York City
W. M. Brown, geographer, Providence, R.I.
A. C. Coolidge, historian, Cambridge, Mass.
S. W. Cushing, geographer, Salem, Mass.
L. Farrand, anthropologist, New York City
C. W. Furlong, traveler and author, Boston, Mass.
E. W. Griffis, traveler and author, Ithaca, N.Y.
A. G. Keller, anthropologist, New Haven, Conn.
E. F. Merriam, editor, Boston, Mass.
J. R. Smith, economic geographer, Philadelphia, Pa.
Anonymous, New York City

“In spite of several attempts I was unable to obtain any contributor in the states west of Minnesota or south of the Ohio River,” Huntington explains, as if it were a side issue. It isn’t, of course—not then and not now. Who comes out on top, in any ranking system, is really about who is doing the ranking.


Gladwell: Xerox PARC, Apple, and the Creation of the Mouse : The New Yorker

Xerox PARC, Apple, and the truth about innovation.

In late 1979, a twenty-four-year-old entrepreneur paid a visit to a research center in Silicon Valley called Xerox PARC. He was the co-founder of a small computer startup down the road, in Cupertino. His name was Steve Jobs.

Xerox PARC was the innovation arm of the Xerox Corporation. It was, and remains, on Coyote Hill Road, in Palo Alto, nestled in the foothills on the edge of town, in a long, low concrete building, with enormous terraces looking out over the jewels of Silicon Valley. To the northwest was Stanford University’s Hoover Tower. To the north was Hewlett-Packard’s sprawling campus. All around were scores of the other chip designers, software firms, venture capitalists, and hardware-makers. A visitor to PARC, taking in that view, could easily imagine that it was the computer world’s castle, lording over the valley below—and, at the time, this wasn’t far from the truth. In 1970, Xerox had assembled the world’s greatest computer engineers and programmers, and for the next ten years they had an unparalleled run of innovation and invention. If you were obsessed with the future in the seventies, you were obsessed with Xerox PARC—which was why the young Steve Jobs had driven to Coyote Hill Road.

Apple was already one of the hottest tech firms in the country. Everyone in the Valley wanted a piece of it. So Jobs proposed a deal: he would allow Xerox to buy a hundred thousand shares of his company for a million dollars—its highly anticipated I.P.O. was just a year away—if PARC would “open its kimono.” A lot of haggling ensued. Jobs was the fox, after all, and PARC was the henhouse. What would he be allowed to see? What wouldn’t he be allowed to see? Some at PARC thought that the whole idea was lunacy, but, in the end, Xerox went ahead with it. One PARC scientist recalls Jobs as “rambunctious”—a fresh-cheeked, caffeinated version of today’s austere digital emperor. He was given a couple of tours, and he ended up standing in front of a Xerox Alto, PARC’s prized personal computer.

An engineer named Larry Tesler conducted the demonstration. He moved the cursor across the screen with the aid of a “mouse.” Directing a conventional computer, in those days, meant typing in a command on the keyboard. Tesler just clicked on one of the icons on the screen. He opened and closed “windows,” deftly moving from one task to another. He wrote on an elegant word-processing program, and exchanged e-mails with other people at PARC, on the world’s first Ethernet network. Jobs had come with one of his software engineers, Bill Atkinson, and Atkinson moved in as close as he could, his nose almost touching the screen. “Jobs was pacing around the room, acting up the whole time,” Tesler recalled. “He was very excited. Then, when he began seeing the things I could do onscreen, he watched for about a minute and started jumping around the room, shouting, ‘Why aren’t you doing anything with this? This is the greatest thing. This is revolutionary!’ ”

Xerox began selling a successor to the Alto in 1981. It was slow and underpowered—and Xerox ultimately withdrew from personal computers altogether. Jobs, meanwhile, raced back to Apple, and demanded that the team working on the company’s next generation of personal computers change course. He wanted menus on the screen. He wanted windows. He wanted a mouse. The result was the Macintosh, perhaps the most famous product in the history of Silicon Valley.

“If Xerox had known what it had and had taken advantage of its real opportunities,” Jobs said, years later, “it could have been as big as I.B.M. plus Microsoft plus Xerox combined—and the largest high-technology company in the world.”

This is the legend of Xerox PARC. Jobs is the Biblical Jacob and Xerox is Esau, squandering his birthright for a pittance. In the past thirty years, the legend has been vindicated by history. Xerox, once the darling of the American high-technology community, slipped from its former dominance. Apple is now ascendant, and the demonstration in that room in Palo Alto has come to symbolize the vision and ruthlessness that separate true innovators from also-rans. As with all legends, however, the truth is a bit more complicated.

After Jobs returned from PARC, he met with a man named Dean Hovey, who was one of the founders of the industrial-design firm that would become known as IDEO. “Jobs went to Xerox PARC on a Wednesday or a Thursday, and I saw him on the Friday afternoon,” Hovey recalled. “I had a series of ideas that I wanted to bounce off him, and I barely got two words out of my mouth when he said, ‘No, no, no, you’ve got to do a mouse.’ I was, like, ‘What’s a mouse?’ I didn’t have a clue. So he explains it, and he says, ‘You know, [the Xerox mouse] is a mouse that cost three hundred dollars to build and it breaks within two weeks. Here’s your design spec: Our mouse needs to be manufacturable for less than fifteen bucks. It needs to not fail for a couple of years, and I want to be able to use it on Formica and my bluejeans.’ From that meeting, I went to Walgreens, which is still there, at the corner of Grant and El Camino in Mountain View, and I wandered around and bought all the underarm deodorants that I could find, because they had that ball in them. I bought a butter dish. That was the beginnings of the mouse.”

I spoke with Hovey in a ramshackle building in downtown Palo Alto, where his firm had started out. He had asked the current tenant if he could borrow his old office for the morning, just for the fun of telling the story of the Apple mouse in the place where it was invented. The room was the size of someone’s bedroom. It looked as if it had last been painted in the Coolidge Administration. Hovey, who is lean and healthy in a Northern California yoga-and-yogurt sort of way, sat uncomfortably at a rickety desk in a corner of the room. “Our first machine shop was literally out on the roof,” he said, pointing out the window to a little narrow strip of rooftop, covered in green outdoor carpeting. “We didn’t tell the planning commission. We went and got that clear corrugated stuff and put it across the top for a roof. We got out through the window.”

He had brought a big plastic bag full of the artifacts of that moment: diagrams scribbled on lined paper, dozens of differently sized plastic mouse shells, a spool of guitar wire, a tiny set of wheels from a toy train set, and the metal lid from a jar of Ralph’s preserves. He turned the lid over. It was filled with a waxlike substance, the middle of which had a round indentation, in the shape of a small ball. “It’s epoxy casting resin,” he said. “You pour it, and then I put Vaseline on a smooth steel ball, and set it in the resin, and it hardens around it.” He tucked the steel ball underneath the lid and rolled it around the tabletop. “It’s a kind of mouse.”

The hard part was that the roller ball needed to be connected to the housing of the mouse, so that it didn’t fall out, and so that it could transmit information about its movements to the cursor on the screen. But if the friction created by those connections was greater than the friction between the tabletop and the roller ball, the mouse would skip. And the more the mouse was used the more dust it would pick up off the tabletop, and the more it would skip. The Xerox PARC mouse was an elaborate affair, with an array of ball bearings supporting the roller ball. But there was too much friction on the top of the ball, and it couldn’t deal with dust and grime.

At first, Hovey set to work with various arrangements of ball bearings, but nothing quite worked. “This was the ‘aha’ moment,” Hovey said, placing his fingers loosely around the sides of the ball, so that they barely touched its surface. “So the ball’s sitting here. And it rolls. I attribute that not to the table but to the oldness of the building. The floor’s not level. So I started playing with it, and that’s when I realized: I want it to roll. I don’t want it to be supported by all kinds of ball bearings. I want to just barely touch it.”

The trick was to connect the ball to the rest of the mouse at the two points where there was the least friction—right where his fingertips had been, dead center on either side of the ball. “If it’s right at midpoint, there’s no force causing it to rotate. So it rolls.”

Hovey estimated their consulting fee at thirty-five dollars an hour; the whole project cost perhaps a hundred thousand dollars. “I originally pitched Apple on doing this mostly for royalties, as opposed to a consulting job,” he recalled. “I said, ‘I’m thinking fifty cents apiece,’ because I was thinking that they’d sell fifty thousand, maybe a hundred thousand of them.” He burst out laughing, because of how far off his estimates ended up being. “Steve’s pretty savvy. He said no. Maybe if I’d asked for a nickel, I would have been fine.”

Here is the first complicating fact about the Jobs visit. In the legend of Xerox PARC, Jobs stole the personal computer from Xerox. But the striking thing about Jobs’s instructions to Hovey is that he didn’t want to reproduce what he saw at PARC. “You know, there were disputes around the number of buttons—three buttons, two buttons, one-button mouse,” Hovey went on. “The mouse at Xerox had three buttons. But we came around to the fact that learning to mouse is a feat in and of itself, and to make it as simple as possible, with just one button, was pretty important.”

So was what Jobs took from Xerox the idea of the mouse? Not quite, because Xerox never owned the idea of the mouse. The PARC researchers got it from the computer scientist Douglas Engelbart, at Stanford Research Institute, fifteen minutes away on the other side of the university campus. Engelbart dreamed up the idea of moving the cursor around the screen with a stand-alone mechanical “animal” back in the mid-nineteen-sixties. His mouse was a bulky, rectangular affair, with what looked like steel roller-skate wheels. If you lined up Engelbart’s mouse, Xerox’s mouse, and Apple’s mouse, you would not see the serial reproduction of an object. You would see the evolution of a concept.

The same is true of the graphical user interface that so captured Jobs’s imagination. Xerox PARC’s innovation had been to replace the traditional computer command line with onscreen icons. But when you clicked on an icon you got a pop-up menu: this was the intermediary between the user’s intention and the computer’s response. Jobs’s software team took the graphical interface a giant step further. It emphasized “direct manipulation.” If you wanted to make a window bigger, you just pulled on its corner and made it bigger; if you wanted to move a window across the screen, you just grabbed it and moved it. The Apple designers also invented the menu bar, the pull-down menu, and the trash can—all features that radically simplified the original Xerox PARC idea.

The difference between direct and indirect manipulation—between three buttons and one button, three hundred dollars and fifteen dollars, and a roller ball supported by ball bearings and a free-rolling ball—is not trivial. It is the difference between something intended for experts, which is what Xerox PARC had in mind, and something that’s appropriate for a mass audience, which is what Apple had in mind. PARC was building a personal computer. Apple wanted to build a popular computer.

In a recent study, “The Culture of Military Innovation,” the military scholar Dima Adamsky makes a similar argument about the so-called Revolution in Military Affairs. R.M.A. refers to the way armies have transformed themselves with the tools of the digital age—such as precision-guided missiles, surveillance drones, and real-time command, control, and communications technologies—and Adamsky begins with the simple observation that it is impossible to determine who invented R.M.A. The first people to imagine how digital technology would transform warfare were a cadre of senior military intellectuals in the Soviet Union, during the nineteen-seventies. The first country to come up with these high-tech systems was the United States. And the first country to use them was Israel, in its 1982 clash with the Syrian Air Force in Lebanon’s Bekaa Valley, a battle commonly referred to as “the Bekaa Valley turkey shoot.” Israel coördinated all the major innovations of R.M.A. in a manner so devastating that it destroyed nineteen surface-to-air batteries and eighty-seven Syrian aircraft while losing only a handful of its own planes.

That’s three revolutions, not one, and Adamsky’s point is that each of these strands is necessarily distinct, drawing on separate skills and circumstances. The Soviets had a strong, centralized military bureaucracy, with a long tradition of theoretical analysis. It made sense that they were the first to understand the military implications of new information systems. But they didn’t do anything with it, because centralized military bureaucracies with strong intellectual traditions aren’t very good at connecting word and deed.

The United States, by contrast, has a decentralized, bottom-up entrepreneurial culture, which has historically had a strong orientation toward technological solutions. The military’s close ties to the country’s high-tech community made it unsurprising that the U.S. would be the first to invent precision-guidance and next-generation command-and-control communications. But those assets also meant that Soviet-style systemic analysis wasn’t going to be a priority. As for the Israelis, their military culture grew out of a background of resource constraint and constant threat. In response, they became brilliantly improvisational and creative. But, as Adamsky points out, a military built around urgent, short-term “fire extinguishing” is not going to be distinguished by reflective theory. No one stole the revolution. Each party viewed the problem from a different perspective, and carved off a different piece of the puzzle.

In the history of the mouse, Engelbart was the Soviet Union. He was the visionary, who saw the mouse before anyone else did. But visionaries are limited by their visions. “Engelbart’s self-defined mission was not to produce a product, or even a prototype; it was an open-ended search for knowledge,” Michael Hiltzik writes, in “Dealers of Lightning” (1999), his wonderful history of Xerox PARC. “Consequently, no project in his lab ever seemed to come to an end.” Xerox PARC was the United States: it was a place where things got made. “Xerox created this perfect environment,” recalled Bob Metcalfe, who worked there through much of the nineteen-seventies, before leaving to found the networking company 3Com. “There wasn’t any hierarchy. We built out our own tools. When we needed to publish papers, we built a printer. When we needed to edit the papers, we built a computer. When we needed to connect computers, we figured out how to connect them. We had big budgets. Unlike many of our brethren, we didn’t have to teach. We could just research. It was heaven.”

But heaven is not a good place to commercialize a product. “We built a computer and it was a beautiful thing,” Metcalfe went on. “We developed our computer language, our own display, our own language. It was a gold-plated product. But it cost sixteen thousand dollars, and it needed to cost three thousand dollars.” For an actual product, you need threat and constraint—and the improvisation and creativity necessary to turn a gold-plated three-hundred-dollar mouse into something that works on Formica and costs fifteen dollars. Apple was Israel.

Xerox couldn’t have been I.B.M. and Microsoft combined, in other words. “You can be one of the most successful makers of enterprise technology products the world has ever known, but that doesn’t mean your instincts will carry over to the consumer market,” the tech writer Harry McCracken recently wrote. “They’re really different, and few companies have ever been successful in both.” He was talking about the decision by the networking giant Cisco Systems, this spring, to shut down its Flip camera business, at a cost of many hundreds of millions of dollars. But he could just as easily have been talking about the Xerox of forty years ago, which was one of the most successful makers of enterprise technology the world has ever known. The fair question is whether Xerox, through its research arm in Palo Alto, found a better way to be Xerox—and the answer is that it did, although that story doesn’t get told nearly as often.

One of the people at Xerox PARC when Steve Jobs visited was an optical engineer named Gary Starkweather. He is a solid and irrepressibly cheerful man, with large, practical hands and the engineer’s gift of pretending that what is impossibly difficult is actually pretty easy, once you shave off a bit here, and remember some of your high-school calculus, and realize that the thing that you thought should go in left to right should actually go in right to left. Once, before the palatial Coyote Hill Road building was constructed, a group that Starkweather had to be connected to was moved to another building, across the Foothill Expressway, half a mile away. There was no way to run a cable under the highway. So Starkweather fired a laser through the air between the two buildings, an improvised communications system that meant that, if you were driving down the Foothill Expressway on a foggy night and happened to look up, you might see a mysterious red beam streaking across the sky. When a motorist drove into the median ditch, “we had to turn it down,” Starkweather recalled, with a mischievous smile.

Lasers were Starkweather’s specialty. He started at Xerox’s East Coast research facility in Webster, New York, outside Rochester. Xerox built machines that scanned a printed page of type using a photographic lens, and then printed a duplicate. Starkweather’s idea was to skip the first step—to run a document from a computer directly into a photocopier, by means of a laser, and turn the Xerox machine into a printer. It was a radical idea. The printer, since Gutenberg, had been limited to the function of re-creation: if you wanted to print a specific image or letter, you had to have a physical character or mark corresponding to that image or letter. What Starkweather wanted to do was take the array of bits and bytes, ones and zeros that constitute digital images, and transfer them straight into the guts of a copier. That meant, at least in theory, that he could print anything.

“One morning, I woke up and I thought, Why don’t we just print something out directly?” Starkweather said. “But when I flew that past my boss he thought it was the most brain-dead idea he had ever heard. He basically told me to find something else to do. The feeling was that lasers were too expensive. They didn’t work that well. Nobody wants to do this, computers aren’t powerful enough. And I guess, in my naïveté, I kept thinking, He’s just not right—there’s something about this I really like. It got to be a frustrating situation. He and I came to loggerheads over the thing, about late 1969, early 1970. I was running my experiments in the back room behind a black curtain. I played with them when I could. He threatened to lay off my people if I didn’t stop. I was having to make a decision: do I abandon this, or do I try and go up the ladder with it?”

Then Starkweather heard that Xerox was opening a research center in Palo Alto, three thousand miles away from its New York headquarters. He went to a senior vice-president of Xerox, threatening to leave for I.B.M. if he didn’t get a transfer. In January of 1971, his wish was granted, and, within ten months, he had a prototype up and running.

Starkweather is retired now, and lives in a gated community just north of Orlando, Florida. When we spoke, he was sitting at a picnic table, inside a screened-in porch in his back yard. Behind him, golfers whirred by in carts. He was wearing white chinos and a shiny black short-sleeved shirt, decorated with fluorescent images of vintage hot rods. He had brought out two large plastic bins filled with the artifacts of his research, and he spread the contents on the table: a metal octagonal disk, sketches on lab paper, a black plastic laser housing that served as the innards for one of his printers.

“There was still a tremendous amount of opposition from the Webster group, who saw no future in computer printing,” he went on. “They said, ‘I.B.M. is doing that. Why do we need to do that?’ and so forth. Also, there were two or three competing projects, which I guess I have the luxury of calling ridiculous. One group had fifty people and another had twenty. I had two.” Starkweather picked up a picture of one of his in-house competitors, something called an “optical carriage printer.” It was the size of one of those modular Italian kitchen units that you see advertised in fancy design magazines. “It was an unbelievable device,” he said, with a rueful chuckle. “It had a ten-inch drum, which turned at five thousand r.p.m., like a super washing machine. It had characters printed on its surface. I think they only ever sold ten of them. The problem was that it was spinning so fast that the drum would blow out and the characters would fly off. And there was only this one lady in Troy, New York, who knew how to put the characters on so that they would stay.

“So we finally decided to have what I called a fly-off. There was a full page of text—where some of them were non-serif characters, Helvetica, stuff like that—and then a page of graph paper with grid lines, and pages with pictures and some other complex stuff—and everybody had to print all six pages. Well, once we decided on those six pages, I knew I’d won, because I knew there wasn’t anything I couldn’t print. Are you kidding? If you can translate it into bits, I can print it. Some of these other machines had to go through hoops just to print a curve. A week after the fly-off, they folded those other projects. I was the only game in town.” The project turned into the Xerox 9700, the first high-speed, cut-paper laser printer in the world.

In one sense, the Starkweather story is of a piece with the Steve Jobs visit. It is an example of the imaginative poverty of Xerox management. Starkweather had to hide his laser behind a curtain. He had to fight for his transfer to PARC. He had to endure the indignity of the fly-off, and even then Xerox management remained skeptical. The founder of PARC, Jack Goldman, had to bring in a team from Rochester for a personal demonstration. After that, Starkweather and Goldman had an idea for getting the laser printer to market quickly: graft a laser onto a Xerox copier called the 7000. The 7000 was an older model, and Xerox had lots of 7000s sitting around that had just come off lease. Goldman even had a customer ready: the Lawrence Livermore laboratory was prepared to buy a whole slate of the machines. Xerox said no. Then Starkweather wanted to make what he called a photo-typesetter, which produced camera-ready copy right on your desk. Xerox said no. “I wanted to work on higher-performance scanners,” Starkweather continued. “In other words, what if we print something other than documents? For example, I made a high-resolution scanner and you could print on glass plates.” He rummaged in one of the boxes on the picnic table and came out with a sheet of glass, roughly six inches square, on which a photograph of a child’s face appeared. The same idea, he said, could have been used to make “masks” for the semiconductor industry—the densely patterned screens used to etch the designs on computer chips. “No one would ever follow through, because Xerox said, ‘Now you’re in Intel’s market, what are you doing that for?’ They just could not seem to see that they were in the information business. This”—he lifted up the plate with the little girl’s face on it—“is a copy. It’s just not a copy of an office document.” But he got nowhere. “Xerox had been infested by a bunch of spreadsheet experts who thought you could decide every product based on metrics. Unfortunately, creativity wasn’t on a metric.”

A few days after that afternoon in his back yard, however, Starkweather e-mailed an addendum to his discussion of his experiences at PARC. “Despite all the hassles and risks that happened in getting the laser printer going, in retrospect the journey was that much more exciting,” he wrote. “Often difficulties are just opportunities in disguise.” Perhaps he felt that he had painted too negative a picture of his time at Xerox, or suffered a pang of guilt about what it must have been like to be one of those Xerox executives on the other side of the table. The truth is that Starkweather was a difficult employee. It went hand in hand with what made him such an extraordinary innovator. When his boss told him to quit working on lasers, he continued in secret. He was disruptive and stubborn and independent-minded—and he had a thousand ideas, and sorting out the good ideas from the bad wasn’t always easy. Should Xerox have put out a special order of laser printers for Lawrence Livermore, based on the old 7000 copier? In “Fumbling the Future: How Xerox Invented, Then Ignored, the First Personal Computer” (1988)—a book dedicated to the idea that Xerox was run by the blind—Douglas Smith and Robert Alexander admit that the proposal was hopelessly impractical: “The scanty Livermore proposal could not justify the investment required to start a laser printing business. . . . How and where would Xerox manufacture the laser printers? Who would sell and service them? Who would buy them and why?” Starkweather, and his compatriots at Xerox PARC, weren’t the source of disciplined strategic insights. They were wild geysers of creative energy.

The psychologist Dean Simonton argues that this fecundity is often at the heart of what distinguishes the truly gifted. The difference between Bach and his forgotten peers isn’t necessarily that he had a better ratio of hits to misses. The difference is that the mediocre might have a dozen ideas, while Bach, in his lifetime, created more than a thousand full-fledged musical compositions. A genius is a genius, Simonton maintains, because he can put together such a staggering number of insights, ideas, theories, random observations, and unexpected connections that he almost inevitably ends up with something great. “Quality,” Simonton writes, is “a probabilistic function of quantity.”

Simonton’s point is that there is nothing neat and efficient about creativity. “The more successes there are,” he says, “the more failures there are as well”—meaning that the person who had far more ideas than the rest of us will have far more bad ideas than the rest of us, too. This is why managing the creative process is so difficult. The making of the classic Rolling Stones album “Exile on Main Street” was an ordeal, Keith Richards writes in his new memoir, because the band had too many ideas. It had to fight from under an avalanche of mediocrity: “Head in the Toilet Blues,” “Leather Jackets,” “Windmill,” “I Was Just a Country Boy,” “Bent Green Needles,” “Labour Pains,” and “Pommes de Terre”—the last of which Richards explains with the apologetic, “Well, we were in France at the time.”

At one point, Richards quotes a friend, Jim Dickinson, remembering the origins of the song “Brown Sugar”:


I watched Mick write the lyrics. . . . He wrote it down as fast as he could move his hand. I’d never seen anything like it. He had one of those yellow legal pads, and he’d write a verse a page, just write a verse and then turn the page, and when he had three pages filled, they started to cut it. It was amazing.

Richards goes on to marvel, “It’s unbelievable how prolific he was.” Then he writes, “Sometimes you’d wonder how to turn the fucking tap off. The odd times he would come out with so many lyrics, you’re crowding the airwaves, boy.” Richards clearly saw himself as the creative steward of the Rolling Stones (only in a rock-and-roll band, by the way, can someone like Keith Richards perceive himself as the responsible one), and he came to understand that one of the hardest and most crucial parts of his job was to “turn the fucking tap off,” to rein in Mick Jagger’s incredible creative energy.

The more Starkweather talked, the more apparent it became that his entire career had been a version of this problem. Someone was always trying to turn his tap off. But someone had to turn his tap off: the interests of the innovator aren’t perfectly aligned with the interests of the corporation. Starkweather saw ideas on their own merits. Xerox was a multinational corporation, with shareholders, a huge sales force, and a vast corporate customer base, and it needed to consider every new idea within the context of what it already had.

Xerox’s managers didn’t always make the right decisions when they said no to Starkweather. But he got to PARC, didn’t he? And Xerox, to its great credit, had a PARC—a place where, a continent away from the top managers, an engineer could sit and dream, and get every purchase order approved, and fire a laser across the Foothill Expressway if he was so inclined. Yes, he had to pit his laser printer against lesser ideas in the contest. But he won the contest. And, the instant he did, Xerox cancelled the competing projects and gave him the green light.

“I flew out there and gave a presentation to them on what I was looking at,” Starkweather said of his first visit to PARC. “They really liked it, because at the time they were building a personal computer, and they were beside themselves figuring out how they were going to get whatever was on the screen onto a sheet of paper. And when I showed them how I was going to put prints on a sheet of paper it was a marriage made in heaven.” The reason Xerox invented the laser printer, in other words, is that it invented the personal computer. Without the big idea, it would never have seen the value of the small idea. If you consider innovation to be efficient and ideas precious, that is a tragedy: you give the crown jewels away to Steve Jobs, and all you’re left with is a printer. But in the real, messy world of creativity, giving away the thing you don’t really understand for the thing that you do is an inevitable tradeoff.

“When you have a bunch of smart people with a broad enough charter, you will always get something good out of it,” Nathan Myhrvold, formerly a senior executive at Microsoft, argues. “It’s one of the best investments you could possibly make—but only if you chose to value it in terms of successes. If you chose to evaluate it in terms of how many times you failed, or times you could have succeeded and didn’t, then you are bound to be unhappy. Innovation is an unruly thing. There will be some ideas that don’t get caught in your cup. But that’s not what the game is about. The game is what you catch, not what you spill.”

In the nineteen-nineties, Myhrvold created a research laboratory at Microsoft modelled in part on what Xerox had done in Palo Alto in the nineteen-seventies, because he considered PARC a triumph, not a failure. “Xerox did research outside their business model, and when you do that you should not be surprised that you have a hard time dealing with it—any more than if some bright guy at Pfizer wrote a word processor. Good luck to Pfizer getting into the word-processing business. Meanwhile, the thing that they invented that was similar to their own business—a really big machine that spit paper out—they made a lot of money on it.” And so they did. Gary Starkweather’s laser printer made billions for Xerox. It paid for every other single project at Xerox PARC, many times over.

In 1988, Starkweather got a call from the head of one of Xerox’s competitors, trying to lure him away. It was someone whom he had met years ago. “The decision was painful,” he said. “I was a year from being a twenty-five-year veteran of the company. I mean, I’d done enough for Xerox that unless I burned the building down they would never fire me. But that wasn’t the issue. It’s about having ideas that are constantly squashed. So I said, ‘Enough of this,’ and I left.”

He had a good many years at his new company, he said. It was an extraordinarily creative place. He was part of decision-making at the highest level. “Every employee from technician to manager was hot for the new, exciting stuff,” he went on. “So, as far as buzz and daily environment, it was far and away the most fun I’ve ever had.” But it wasn’t perfect. “I remember I called in the head marketing guy and I said, ‘I want you to give me all the information you can come up with on when people buy one of our products—what software do they buy, what business are they in—so I can see the model of how people are using the machines.’ He looked at me and said, ‘I have no idea about that.’ ” Where was the rigor? Then Starkweather had a scheme for hooking up a high-resolution display to one of his new company’s computers. “I got it running and brought it into management and said, ‘Why don’t we show this at the tech expo in San Francisco? You’ll be able to rule the world.’ They said, ‘I don’t know. We don’t have room for it.’ It was that sort of thing. It was like me saying I’ve discovered a gold mine and you saying we can’t afford a shovel.”

He shrugged a little wearily. It was ever thus. The innovator says go. The company says stop—and maybe the only lesson of the legend of Xerox PARC is that what happened there happens, in one way or another, everywhere. By the way, the man who hired Gary Starkweather away to the company that couldn’t afford a shovel? His name was Steve Jobs.


Chicago's Plan to Match Education With Jobs

December 19, 2011

 In The Wall Street Journal, Mayor Rahm Emanuel writes that AAR Corp., an aviation-parts manufacturer in the Chicago area, has 600 openings for welders and mechanics but can't find skilled workers to fill them.

 By Rahm Emanuel

 

The Chicago area has nearly 10% unemployment, but more than 100,000 unfilled jobs. Like the rest of the country, Chicago suffers from a skills gap that undermines our economic competitiveness and threatens our future prosperity.

Despite stubborn unemployment, we have companies offering well-paying jobs that have to go begging for skilled applicants. This is because our community college system, which was a worker's ticket into employment and the middle class during the postwar boom, has failed to keep pace with today's competitive jobs market. Consequently, in a 21st-century economy, our workers still have 20th-century skills.

For example, AAR Corp., an aviation-parts manufacturer in the Chicago area, has 600 job openings for welders and mechanics but can't find skilled workers to fill them. As mayor of one of America's largest cities, I find it unacceptable that at a time of high unemployment, more than 80% of manufacturers say they can't find skilled workers to hire.

This situation will only get worse. In the next 10 years, the Chicago area will need 9,000 additional computer-science workers, 20,000 new transportation workers and 43,000 new health-care workers, including 15,000 nurses.

To fill these jobs, we need to modernize our community colleges so that Americans no longer regard them as a last-ditch option for remedial education, but as their first choice for high-skill job training.

Right now, too many of our community colleges lack credibility in the eyes of CEOs and job seekers. Recently I met a young student at a public-transit stop who was commuting from Harold Washington Community College, where he goes to school, to his night job at a department-store warehouse. Riding from downtown to the South Side, studying along the way, that student, like millions of Americans, is doing his part to ensure he has a shot at a good job. But those of us in government have not been doing our part to meet him halfway. We need to guarantee that the diploma he earns has economic value. I want that student to worry only about doing well in his classes, not about whether the skills he gains in those classes will earn him a job.

So, last week I announced a series of partnerships between our community colleges and our top employers that will draw on their expertise to develop curricula and set industry standards for job training in high-growth sectors like health care, high-tech manufacturing, information technology and professional services.

This program, "Colleges to Careers," will team AAR Corp. with Chicago's Olive-Harvey College to design a curriculum for avionics and mechanics careers. It will partner companies like Allscripts and Northwestern Memorial Hospital with Malcolm X College to design job training in health-care information technology and nursing.

These partnerships will align workers' training with the expectations of employers so that community college students will not have to worry about whether they have the right skills for their chosen field. They will have the confidence of knowing that the company they want to work for has helped design their curriculum specifically so that they can be hired and be successful. Employers won't need to search for the skilled workers they need to invest and expand. They will have confidence in their future work force because they were a partner in shaping it.

Chicago already enjoys a dynamic work force, not only because we have some of the world's best universities, but also because we're a magnet for the brightest students from across the Big Ten states. By modernizing our community college system, we are matching that dynamism at every level of the jobs market. Whatever skill level employers need, from the boardroom to the shop floor, they can have confidence that Chicago's work force has the skill and depth they need to start a business and expand.

AAR Corp. will have a pipeline of trained workers to fill those 600 open jobs. Allscripts will have a reliable talent pool to fill the 300 jobs it is adding in Chicago. Northwestern Memorial Hospital and Rush University Medical Center will have the specialists they need for their large expansion projects.

I hope that cities across the country will follow Chicago's model. If we revive and modernize our training programs to match the needs of our high-growth industries, our community college system can catapult millions of people into employment and into the middle class, as it has done for generations of Americans.

Mr. Emanuel is mayor of Chicago and a former White House chief of staff.



Buckley on Christopher Hitchens, 1949-2011 : The New Yorker

Postscript: Christopher Hitchens, 1949-2011

We were friends for more than thirty years, which is a long time but, now that he is gone, seems not nearly long enough. I was rather nervous when I first met him, one night in London in 1977, along with his great friend Martin Amis. I had read his journalism and was already in awe of his brilliance and wit and couldn’t think what on earth I could bring to his table. I don’t know if he sensed the diffidence on my part—no, of course he did; he never missed anything—but he set me instantly at ease, and so began one of the great friendships and benisons of my life. It occurs to me that “benison” is a word I first learned from Christopher, along with so much else.

A few years later, we found ourselves living in the same city, Washington. I had come to work in an Administration; he had come to undo that Administration. Thirty years later, I was voting for Obama and Christopher had become one of the most forceful, and persuasive, advocates for George W. Bush’s war in Iraq. How did that happen?

In those days, Christopher was a roaring, if not raving, Balliol Bolshevik. Oh dear, the things he said about Reagan! The things—come to think of it—he said about my father. How did we become such friends? I only once stopped speaking to him, because of a throwaway half-sentence about my father-in-law in one of his Harper’s essays. I missed his company during that six-month froideur (another Christopher mot). It was about this time that he discovered that he was in fact Jewish, which somewhat complicated his fierce anti-Israel stance. When we embraced, at the bar mitzvah of Sidney Blumenthal’s son, the word “Shalom” sprang naturally from my lips.

A few days ago, when I was visiting him at the M. D. Anderson Cancer Center, in Houston, for what I knew would be the last time, his wife, Carol, mentioned to me that Sidney had recently written to Christopher. I was surprised but very pleased to hear this. Christopher had caused Sidney great legal and financial grief during the Götterdämmerung of the Clinton impeachment. But now Sidney, a cancer experiencer himself, was reaching out to his old friend with words of tenderness and comfort and implicit forgiveness. This was the act of a mensch. But then Christopher was like that—it was hard, perhaps impossible, to stay mad at him, though I doubt Henry Kissinger or Bill Clinton or any member of the British Royal Family will be among the eulogists at his memorial service.

I first saw his J’accuse in The Nation against—oh, Christopher!—Mother Teresa when my father mailed me a Xerox of it. He had scrawled a note across the top, an instruction to the producer of his TV show “Firing Line”: “I never want to lay eyes on this guy again.” W.F.B. had provided Christopher with his first appearances on U.S. television. The rest is history—the time would soon come when you couldn’t turn on a television without seeing Christopher railing against Kissinger, Mother (presumptive saint) T., Princess Diana, or Jerry Falwell.

But even W.F.B., who tolerated pretty much anything except attacks on his beloved Catholic Church and its professors, couldn’t help but forgive. “Did you see the piece on Chirac by your friend Hitchens in the Journal today?” he said one day, with a smile and an admiring sideways shake of the head. “Absolutely devastating!”

When we all gathered at St. Patrick’s Cathedral, a few years later, to see W.F.B. off to the celestial choir, Christopher was present, having flown in from a speech in the American hinterland. (Alert: if you are reading this, Richard Dawkins, you may want to skip ahead to the next paragraph.) There he was in the pew, belting out Bunyan’s “He Who Would Valiant Be.” Christopher recused himself when Henry Kissinger took the lectern to give his eulogy, going out onto rain-swept Fifth Avenue to smoke one of his ultimately consequential cigarettes.

“It’s the fags that’ll get me in the end, I know it,” he said once, at one of our lunches, tossing his pack of Rothmans onto the table with an air of contempt. This was back when you could smoke at a restaurant. As the Nanny State and Mayor Bloomberg extended their ruler-bearing, knuckle-rapping hand across the landscape, Christopher’s smoking became an act of guerrilla warfare. Much as I wish he had never inhaled, it made for great spectator sport.

David Bradley, the owner of The Atlantic Monthly, to which Christopher contributed many sparkling essays, once took him out to lunch at the Four Seasons Hotel in Georgetown. It was—I think—February and the smoking ban had gone into effect. Christopher suggested that they eat outside, on the terrace. David Bradley is a game soul, but even he expressed trepidation about dining al fresco in forty-degree weather. Christopher merrily countered, “Why not? It will be bracing.”

Lunch—dinner, drinks, any occasion—with Christopher always was. One of our lunches, at Café Milano, the Rick’s Café of Washington, began at 1 P.M., and ended at 11:30 P.M. At about nine o’clock (though my memory is somewhat hazy), he said, “Should we order more food?” I somehow crawled home, where I remained under medical supervision for several weeks, packed in ice with a morphine drip. Christopher probably went home that night and wrote a biography of Orwell. His stamina was as epic as his erudition and wit.

When we made a date for a meal over the phone, he’d say, “It will be a feast of reason and a flow of soul.” I never doubted that this rococo phraseology was an original coinage, until I chanced on it, one day, in the pages of P. G. Wodehouse, the writer Christopher perhaps esteemed above all others. Wodehouse was the Master. When we met for another lunch, one that lasted only five hours, he was all a-grin with pride as he handed me a newly minted paperback reissue of Wodehouse with “Introduction by Christopher Hitchens.” “Doesn’t get much better than that,” he said, and who could not agree?

The other author that he and I seemed to spend most time discussing was Oscar Wilde. I remember Christopher’s thrill at having adduced a key connection between Wilde and Wodehouse. It struck me as a breakthrough insight; namely, that the first two lines of “The Importance of Being Earnest” contain within them the entire universe of Bertie Wooster and Jeeves.

Algernon plays the piano while his butler arranges flowers. Algy asks, “Did you hear what I was playing, Lane?” Lane replies, “I didn’t think it polite to listen, sir.” And there you have it.

Christopher remained perplexed at the lack of any reference to Wilde in the Wodehousian oeuvre. Then, some time later, he extolled in his Vanity Fair column the discovery, by one of his graduate students at the New School, of a mention of “The Importance” somewhere in the Master’s ninety-odd books.

During the last hour I spent with Christopher, in the Critical Care Unit at M. D. Anderson, he struggled to read a thick volume of P. G. Wodehouse letters. He scribbled some notes on a blank page in spidery handwriting. He wrote “Pelham Grenville” and asked me, in a faint, raspy voice, “Name. What was the name?” At first I didn’t quite understand, but then, recalling P.G.’s nickname, suggested “Plum?” Christopher nodded yes, and wrote it down.

I took comfort that, during our last time together, I was able to provide him with at least that. Intellectually, ours was largely a teacher-student relationship, and let me tell you—Christopher was one tough grader. Oy. No matter how much he loved you, he did not shy from giving it to you with the bark off if you had disappointed.

I once participated with him on a panel at the Folger Theatre on the subject of “Henry V.” The other panelists were Dame Judi Dench, Arianna Huffington, Chris Matthews, Ken Adelman, and David Brooks; the moderator was Walter Isaacson. Having little original insight into “Henry V,” or into any Shakespeare play, for that matter, I prepared a comic riff on a notional Henry the Fifteenth. Get it? O.K., maybe you had to be there, but it sort of brought down the house. Nevertheless, when Christopher and I met for lunch a few days later, he gave me a tsk-tsk-y stare and sour wince and chided me for “indulging in crowd-pleasing nonsense.”

I got off lightly. When Martin Amis, his closest friend on earth, published a book in which he took Christopher to task for what he viewed as inappropriate laughter at the expense of Stalin’s victims, Christopher responded with a seven-thousand-word rebuttal in The Atlantic that will probably have Martin thinking twice before attempting another work of historical nonfiction. But Christopher’s takedown of his chum must be viewed alongside thousands of warm and affectionate words he wrote about Martin, particularly in his memoir, “Hitch-22,” which appeared ironically—or perhaps with exquisite timing—simultaneously with the presentation of his mortal illness.

The jacket of his next book, a collection of breathtaking essays, perfectly titled “Arguably,” contains some glowing words of praise, including my own (humble but earnest) asseveration that he is—was—“the greatest living essayist in the English language.” One or two reviewers demurred, calling my effusion “forgivable exaggeration.” To them I say: O.K., name a better one. I would alter only one word in that blurb now.

Over the course of his heroic, uncomplaining eighteen-month battle with the cancer, I found myself rehearsing what I might say to an obituary writer, should one ring after the news of Christopher’s death. I thought to say something along the lines of: the air of Byron, the steel pen of Orwell, and the wit of Wilde.

A bit forced, perhaps, but you get the idea. Christopher may not, as Byron did, write poetry, but he could recite staves, cantos, yards of it. As for Byronic aura, there were the curly locks, the unbuttoned shirt revealing a wealth—verily, a woolly mastodon—of pectoral hair, as well as the roguish, raffish je ne sais quoi good looks. (Somewhere in “Hitch-22,” he notes that he had now reached the age when “only women wanted to go to bed with me.”)

Like Byron, Christopher put himself in harm’s way in “contested territory,” again and again. Here’s another bit from “Hitch-22,” a chilling moment when he found himself alone in a remote and very scary town in Afghanistan,

in a goons’ rodeo duel between two local homicidal potentates (the journalistic euphemism for this type is “warlord”; the image of the goons’ rodeo I have annexed from Saul Bellow). On me was not enough money, not enough food, not enough documentation, not enough medication, not enough bottled water to withstand even a two-day siege. I did not have a cell phone. Nobody in the world, I abruptly realized, knew where I was. I knew nobody in the town and nobody in the town knew (perhaps a good thing) who I was, either…. As all this started to register with me, the square began to fill with those least alluring of all types: strident but illiterate young men with religious headgear, high-velocity weapons and modern jeeps.

His journalism, in which he championed the victims of tyranny and stupidity and “Islamofascism” (his coinage), takes its rightful place on the shelf along with that of his paradigm, Orwell.

As for the wit … one day we were talking about Stalin. I observed that Stalin, eventual murderer of twenty, thirty—forty?—million, had trained as a priest. Not skipping a beat, Christopher remarked, “Indeed, was he not among the more promising of the Tbilisi ordinands?”

I thought—as I did perhaps one thousand times over the course of our three-decade-long tutorial—Wow.

A few days later, at a dinner, the subject of Stalin having come up, I ventured to my dinner partner, “Indeed, was he not among the more promising of the Tbilisi ordinands?” The lady to whom I had proffered this thieved aperçu stopped chewing her salmon, repeated the line I had so casually tossed off, and said with frank admiration, “That’s brilliant.” I was tempted, but couldn’t quite bear to continue the imposture, and told her that the author of this nacreous witticism was in fact none other than Christopher. She laughed and said, “Well, everything he says is brilliant.”

Yes, everything he said was brilliant. It was a feast of reason and a flow of soul, and, if the author of “God Is Not Great” did not himself believe in the concept of soul, he sure had one, and it was a great soul.

Two fragments come to mind. The first is from “Brideshead Revisited,” a book Christopher loved and which he could practically quote in its entirety. Anthony Blanche, the exotic, outrageous aesthete, is sent down from Oxford. Charles Ryder, the book’s narrator, mourns: “Anthony Blanche had taken something away with him when he went; he had locked a door and hung the key on his chain; and all his friends, among whom he had been a stranger, needed him now.”

Christopher was never a “stranger to his friends”—ça va sans dire, as he would say. Among his prodigal talents, perhaps his greatest was his gift of friendship. Christopher’s inner circle, Martin, Ian McEwan, Salman Rushdie, James Fenton, Julian Barnes, comprises more or less the greatest writers in the English language. That’s some posse.

But in leaving them—and the rest of us—for “the undiscovered country” (he could recite more or less all of “Hamlet,” too) Christopher has taken something away with him, and his friends, in whose company I am so very grateful to have been, will need him now. We are now, finally, without a Hitch.

The other bit is from Housman, and though it’s from a poem that Christopher and I recited back and forth at each other across the tables at Café Milano, I hesitate to quote it here. I see him wincing at my deplorable propensity for “crowd-pleasing.” But I’m going to quote it anyway, doubting as I do that he would chafe at my trying to mine what consolation I can over the loss of my beloved athlete, who died so young.

Smart lad to slip betimes away
From fields where glory does not stay,
And early though the laurel grows
It withers quicker than the rose.



Manifesto for Sustainable Capitalism - WSJ.com

Not sure Milton Friedman would concur, but I was able to look past my distaste for Mr. Gore's previous political history and appreciate the sensible approach here.

A Manifesto for Sustainable Capitalism

By AL GORE AND DAVID BLOOD

In the immediate aftermath of World War II, when the United States was preparing its visionary plan for nurturing democratic capitalism abroad, Gen. Omar Bradley said, "It is time to steer by the stars, and not by the lights of each passing ship." Today, more than 60 years later, that means abandoning short-term economic thinking for "sustainable capitalism."

We are once again facing one of those rare turning points in history when dangerous challenges and limitless opportunities cry out for clear, long-term thinking. The disruptive threats now facing the planet are extraordinary: climate change, water scarcity, poverty, disease, growing income inequality, urbanization, massive economic volatility and more. Businesses cannot be asked to do the job of governments, but companies and investors will ultimately mobilize most of the capital needed to overcome the unprecedented challenges we now face.

Before the crisis and since, we and others have called for a more responsible form of capitalism, what we call sustainable capitalism: a framework that seeks to maximize long-term economic value by reforming markets to address real needs while integrating environmental, social and governance (ESG) metrics throughout the decision-making process.

Such sustainable capitalism applies to the entire investment value chain—from entrepreneurial ventures to large public companies, seed-capital providers to institutional investors, employees to CEOs, activists to policy makers. It transcends borders, industries, asset classes and stakeholders.

Those who advocate sustainable capitalism are often challenged to spell out why sustainability adds value. Yet the question that should be asked instead is: "Why does an absence of sustainability not damage companies, investors and society at large?" From BP to Lehman Brothers, there is a long list of examples proving that it does.


Moreover, companies and investors that integrate sustainability into their business practices are finding that it enhances profitability over the longer term. Experience and research show that embracing sustainable capitalism yields four kinds of important benefits for companies:

• Developing sustainable products and services can increase a company's profits, enhance its brand, and improve its competitive positioning, as the market increasingly rewards this behavior.

• Sustainable capitalism can also help companies save money by reducing waste and increasing energy efficiency in the supply chain, and by improving human-capital practices so that retention rates rise and the costs of training new employees decline.

• Third, focusing on ESG metrics allows companies to achieve higher compliance standards and better manage risk since they have a more holistic understanding of the material issues affecting their business.

• Researchers (including Rob Bauer and Daniel Hann of Maastricht University, and Beiting Cheng, Ioannis Ioannou and George Serafeim of Harvard) have found that sustainable businesses realize financial benefits such as lower cost of debt and lower capital constraints.

Sustainable capitalism is also important for investors. Mr. Serafeim and his colleague Robert G. Eccles have shown that sustainable companies outperform their unsustainable peers in the long term. Therefore, investors who identify companies that embed sustainability into their strategies can earn substantial returns, while experiencing low volatility.

Because ESG metrics directly affect companies' long-term value, pension funds, sovereign wealth funds, foundations and the like—investors with long-term liabilities—should include these metrics as an essential aspect of valuation and investment strategy. Sustainable capitalism requires investors to be good investors, to fully understand the companies they invest in and to believe in their long-term value and potential.

We recommend five key actions for immediate adoption by companies, investors and others to accelerate the current incremental pace of change to one that matches the urgency of the situation:

• Identify and incorporate risk from stranded assets. "Stranded assets" are those whose value would dramatically change, either positively or negatively, when large externalities are taken into account—for example, by attributing a reasonable price to carbon or water. So long as their true value is ignored, stranded assets have the potential to trigger significant reductions in the long-term value of not just particular companies but entire sectors.

That's exactly what occurred when the true value of subprime mortgages was belatedly recognized and mortgage-backed assets were suddenly repriced. Until there are policies requiring the establishment of a fair price on widely understood externalities, academics and financial professionals should strive to quantify the impact of stranded assets and analyze the subsequent implications for investment opportunities.

• Mandate integrated reporting. Despite an increase in the volume and frequency of information made available by companies, access to more data for public equity investors has not necessarily translated into more comprehensive insight into companies. Integrated reporting addresses this problem by encouraging companies to integrate both their financial and ESG performance into one report that includes only the most salient or material metrics.

This enables companies and investors to make better resource-allocation decisions by seeing how ESG performance contributes to sustainable, long-term value creation. While voluntary integrated reporting is gaining momentum, it must be mandated by appropriate agencies such as stock exchanges and securities regulators in order to ensure swift and broad adoption.

• End the default practice of issuing quarterly earnings guidance. The quarterly calendar frequently incentivizes executives to manage for the short-term. It also encourages some investors to overemphasize the significance of these measures at the expense of longer-term, more meaningful measures of sustainable value creation. Ending this practice in favor of companies' issuing guidance only as they deem appropriate (if at all) would encourage a longer-term view of the business.

• Align compensation structures with long-term sustainable performance. Most existing compensation schemes emphasize short-term actions and fail to hold asset managers and corporate executives accountable for the ramifications of their decisions over the long-term. Instead, financial rewards should be paid out over the period during which these results are realized and compensation should be linked to fundamental drivers of long-term value, employing rolling multiyear milestones for performance evaluation.

• Incentivize long-term investing with loyalty-driven securities. The dominance of short-termism in the market fosters general market instability and undermines the efforts of executives seeking long-term value creation. The common argument that more liquidity is always better for markets is based on long-discredited elements of the now-obsolete "standard model" of economics, including the illusion of perfect information and the assumption that markets tend toward equilibrium.

To push against this short-termism, companies could issue securities that offer investors financial rewards for holding onto shares for a certain number of years. This would attract long-term investors with patient capital and would facilitate both long-term value creation in companies and stability in financial markets.

Ben Franklin famously said, "You may delay, but time will not, and lost time is never found again." Today we have an opportunity to steer by the stars and once again rebuild for the long-term. Sustainable capitalism will create opportunities and rewards, but it will also mean challenging the pernicious orthodoxy of short-termism. As we face an inflection point in the global economy and the global environment, the imperative for change has never been greater.

Mr. Gore, chairman of Generation Investment Management, is a former vice president of the United States. Mr. Blood is managing partner of Generation Investment Management.


confusion between Circles and Google Contacts Groups

Dear Google, I'd like to take a moment and express a few joys and displeasures at some of your recent changes to Google Reader, Gmail, Google+, Google Voice, and Picasa relative to Google Contacts Groups and Circles.  They are inextricably linked, and I'll do my best to show the linkages and why they are causing such grief.

Since, of course, the extensive portfolio of services you provide is generally free, I'm your product.  And since I willingly give up some of my privacy to use your products, which generate advertising revenue for you, I hope you'll listen to my pain points and attempt to remedy them.  I'll submit these via your approved feedback channels, but web forms are too constricting in terms of length and HTML formatting, so you'll read them here.

How an individual manages one's Contacts and Circles across your various applications is incredibly confusing.  I realize that you wish to push the Google+ social networking capability to compete effectively with Facebook.  However, let me count the ways you're making it incredibly difficult, and confusing, to manage my contacts in both Groups and Circles.

Gmail Contacts:
There are both Groups and Circles.  However, I cannot manage Circles within the Contacts information, only individuals and Groups.  This is important, as I have extensive filters set up to apply Labels based upon incoming criteria.  Google/Gmail has taught us for many years to "label, don't file".  But I can't apply labels or filters to Circles.  And I can't manage Google Voice settings from within Google Contacts.  For me to manage Circles, I have to go to Google+.

I can't send a message to my Google Circles, the way I can a Group.

From the Department of Redundancy Department, there is significant overlap and extra work to manage both Google Circles and Google Contact Groups.  

Google Voice:
I've ported my mobile number to Google Voice and use it extensively.  However, Google Voice uses Contacts Groups for call forwarding, greetings, and call screening.  Google Voice does nothing with Circles.

Google Reader:
Oh, the changes you made here have caused me great distress, as the carefully built-up lists of trusted curators are now gone.  And my ability to post via the Google Reader widget has been removed, which was important for selectively sharing content behind the paywall with my Google Reader friends.  I have some workarounds now with Safari Reader and Posterous, but they add another step to a clumsy process.  I realize that you wish to push the Google+ social networking capability to compete effectively with Facebook.  But now I can only share with Google+ Circles, NOT Google Contacts Groups.  Lastly, the Google Reader widget allowed me to share whatever I could select within my browser.  Now I'm restricted to sites that have implemented the Google +1 button, and that's annoying, as a good majority of the content I wish to share with my Contacts is behind paywalls which I pay for but my colleagues perhaps do not.  I realize that you wish to push the Google+ social networking capability to compete effectively with Facebook.

Picasa:
Sharing photos appears to work the same way as Google Reader.  I can still share via email, but I realize that you wish to push the Google+ social networking capability to compete effectively with Facebook.  

Google+
Your permissions model is based upon Circles.  But your email filters/labels are based upon Groups.  

From the Department of Redundancy Department, there is significant overlap and extra work to manage both Google Circles and Google Contact Groups.  

The Broken Contract | George Packer for Foreign Affairs

The Broken Contract | Foreign Affairs

Iraq was one of those wars where people actually put on pounds. A few years ago, I was eating lunch with another reporter at an American-style greasy spoon in Baghdad's Green Zone. At a nearby table, a couple of American contractors were finishing off their burgers and fries. They were wearing the contractor's uniform: khakis, polo shirts, baseball caps, and Department of Defense identity badges in plastic pouches hanging from nylon lanyards around their necks. The man who had served their food might have been the only Iraqi they spoke with all day. The Green Zone was set up to make you feel that Iraq was a hallucination and you were actually in Normal, Illinois. This narcotizing effect seeped into the consciousness of every American who hunkered down and worked and partied behind its blast walls -- the soldier and the civilian, the diplomat and the journalist, the important and the obscure. Hardly anyone stayed longer than a year; almost everyone went home with a collection of exaggerated war stories, making an effort to forget that they were leaving behind shoddy, unfinished projects and a country spiraling downward into civil war. As the two contractors got up and ambled out of the restaurant, my friend looked at me and said, "We're just not that good anymore."

The Iraq war was a kind of stress test applied to the American body politic. And every major system and organ failed the test: the executive and legislative branches, the military, the intelligence world, the for-profits, the nonprofits, the media. It turned out that we were not in good shape at all -- without even realizing it. Americans just hadn't tried anything this hard in around half a century. It is easy, and completely justified, to blame certain individuals for the Iraq tragedy. But over the years, I've become more concerned with failures that went beyond individuals, and beyond Iraq -- concerned with the growing arteriosclerosis of American institutions. Iraq was not an exceptional case. It was a vivid symptom of a long-term trend, one that worsens year by year. The same ailments that led to the disastrous occupation were on full display in Washington this past summer, during the debt-ceiling debacle: ideological rigidity bordering on fanaticism, an indifference to facts, an inability to think beyond the short term, the dissolution of national interest into partisan advantage.

Was it ever any different? Is it really true that we're just not that good anymore? As a thought experiment, compare your life today with that of someone like you in 1978. Think of an educated, reasonably comfortable couple perched somewhere within the vast American middle class of that year. And think how much less pleasant their lives are than yours. The man is wearing a brown and gold polyester print shirt with a flared collar and oversize tortoiseshell glasses; she's got on a high-waisted, V-neck rayon dress and platform clogs. Their morning coffee is Maxwell House filter drip. They drive an AMC Pacer hatchback, with a nonfunctioning air conditioner and a tape deck that keeps eating their eight-tracks. When she wants to make something a little daring for dinner, she puts together a pasta primavera. They type their letters on an IBM Selectric, the new model with the corrective ribbon. There is only antenna television, and the biggest thing on is Laverne and Shirley. Long-distance phone calls cost a dollar a minute on weekends; air travel is prohibitively expensive. The city they live near is no longer a place where they spend much time: trash on the sidewalks, junkies on the corner, vandalized pay phones, half-deserted subway cars covered in graffiti.

By contemporary standards, life in 1978 was inconvenient, constrained, and ugly. Things were badly made and didn't work very well. Highly regulated industries, such as telecommunications and airlines, were costly and offered few choices. The industrial landscape was decaying, but the sleek information revolution had not yet emerged to take its place. Life before the Android, the Apple Store, FedEx, HBO, Twitter feeds, Whole Foods, Lipitor, air bags, the Emerging Markets Index Fund, and the pre-K Gifted and Talented Program prep course is not a world to which many of us would willingly return.

The surface of life has greatly improved, at least for educated, reasonably comfortable people -- say, the top 20 percent, socioeconomically. Yet the deeper structures, the institutions that underpin a healthy democratic society, have fallen into a state of decadence. We have all the information in the universe at our fingertips, while our most basic problems go unsolved year after year: climate change, income inequality, wage stagnation, national debt, immigration, falling educational achievement, deteriorating infrastructure, declining news standards. All around, we see dazzling technological change, but no progress. Last year, a Wall Street company that few people have ever heard of dug an 800-mile trench under farms, rivers, and mountains between Chicago and New York and laid fiber-optic cable connecting the Chicago Mercantile Exchange and the New York Stock Exchange. This feat of infrastructure building, which cost $300 million, shaves three milliseconds off high-speed, high-volume automated trades -- a big competitive advantage. But passenger trains between Chicago and New York run barely faster than they did in 1950, and the country no longer seems capable, at least politically, of building faster ones. Just ask people in Florida, Ohio, and Wisconsin, whose governors recently refused federal money for high-speed rail projects.

We can upgrade our iPhones, but we can't fix our roads and bridges. We invented broadband, but we can't extend it to 35 percent of the public. We can get 300 television channels on the iPad, but in the past decade 20 newspapers closed down all their foreign bureaus. We have touch-screen voting machines, but last year just 40 percent of registered voters turned out, and our political system is more polarized, more choked with its own bile, than at any time since the Civil War. There is nothing today like the personal destruction of the McCarthy era or the street fights of the 1960s. But in those periods, institutional forces still existed in politics, business, and the media that could hold the center together. It used to be called the establishment, and it no longer exists. Solving fundamental problems with a can-do practicality -- the very thing the world used to associate with America, and that redeemed us from our vulgarity and arrogance -- now seems beyond our reach.

THE UNWRITTEN CONTRACT

Why and how did this happen? Those are hard questions. A roundabout way of answering them is to first ask, when did this start to happen? Any time frame has an element of arbitrariness, and also contains the beginning of a theory. Mine goes back to that shabby, forgettable year of 1978. It is surprising to say that in or around 1978, American life changed -- and changed dramatically. It was, like this moment, a time of widespread pessimism -- high inflation, high unemployment, high gas prices. And the country reacted to its sense of decline by moving away from the social arrangement that had been in place since the 1930s and 1940s.

What was that arrangement? It is sometimes called "the mixed economy"; the term I prefer is "middle-class democracy." It was an unwritten social contract among labor, business, and government -- between the elites and the masses. It guaranteed that the benefits of the economic growth following World War II were distributed more widely, and with more shared prosperity, than at any time in human history. In the 1970s, corporate executives earned 40 times as much as their lowest-paid employees. (By 2007, the ratio was over 400 to 1.) Labor law and government policy kept the balance of power between workers and owners on an even keel, leading to a virtuous circle of higher wages and more economic stimulus. The tax code restricted the amount of wealth that could be accumulated in private hands and passed on from one generation to the next, thereby preventing the formation of an inherited plutocracy. The regulatory agencies were strong enough to prevent the kind of speculative bubbles that now occur every five years or so: between the Great Depression and the Reagan era there was not a single systemwide financial crisis, which is why recessions during those decades were far milder than they have since become. Commercial banking was a stable, boring business. (In movies from the 1940s and 1950s, bankers are dull, solid pillars of the community.) Investment banking, cordoned off by the iron wall of the Glass-Steagall Act, was a closed world of private partnerships in which rich men carefully weighed their risks because they were playing with their own money. Partly as a result of this shared prosperity, political participation reached an all-time high during the postwar years (with the exception of those, such as black Americans in the South, who were still denied access to the ballot box).

At the same time, the country's elites were playing a role that today is almost unrecognizable. They actually saw themselves as custodians of national institutions and interests. The heads of banks, corporations, universities, law firms, foundations, and media companies were neither more nor less venal, meretricious, and greedy than their counterparts today. But they rose to the top in a culture that put a brake on these traits and certainly did not glorify them. Organizations such as the Council on Foreign Relations, the Committee for Economic Development, and the Ford Foundation did not act on behalf of a single, highly privileged point of view -- that of the rich. Rather, they rose above the country's conflicting interests and tried to unite them into an overarching idea of the national interest. Business leaders who had fought the New Deal as vehemently as the U.S. Chamber of Commerce is now fighting health-care and financial reform later came to accept Social Security and labor unions, did not stand in the way of Medicare, and supported other pieces of Lyndon Johnson's Great Society. They saw this legislation as contributing to the social peace that ensured a productive economy. In 1964, Johnson created the National Commission on Technology, Automation, and Economic Progress to study the effects of these coming changes on the work force. The commission included two labor leaders, two corporate leaders, the civil rights activist Whitney Young, and the sociologist Daniel Bell. Two years later, they came out with their recommendations: a guaranteed annual income and a massive job-training program. This is how elites once behaved: as if they had actual responsibilities.

Of course, the consensus of the postwar years contained plenty of injustice. If you were black or female, it made very little room for you. It could be stifling and conformist, authoritarian and intrusive. Yet those years also offered the means of redressing the very wrongs they contained: for example, strong government, enlightened business, and activist labor were important bulwarks of the civil rights movement. Nostalgia is a useless emotion. Like any era, the postwar years had their costs. But from where we stand in 2011, they look pretty good.

THE RISE OF ORGANIZED MONEY

Two things happened to this social arrangement. The first was the 1960s. The story is familiar: youth rebellion and revolution, a ferocious backlash now known as the culture wars, and a permanent change in American manners and morals. Far more than political utopia, the legacy of the 1960s was personal liberation. Some conservatives argue that the social revolution of the 1960s and 1970s prepared the way for the economic revolution of the 1980s, that Abbie Hoffman and Ronald Reagan were both about freedom. But Woodstock was not enough to blow apart the middle-class democracy that had benefited tens of millions of Americans. The Nixon and Ford presidencies actually extended it. In his 2001 book, The Paradox of American Democracy, John Judis notes that in the three decades between 1933 and 1966, the federal government created 11 regulatory agencies to protect consumers, workers, and investors. In the five years between 1970 and 1975, it established another 12, including the Environmental Protection Agency, the Occupational Safety and Health Administration, and the Consumer Product Safety Commission. Richard Nixon was a closet liberal, and today he would be to the left of Senator Olympia Snowe, the moderate Republican.

The second thing that happened was the economic slowdown of the 1970s, brought on by "stagflation" and the oil shock. It eroded Americans' paychecks and what was left of their confidence in the federal government after Vietnam, Watergate, and the disorder of the 1960s. It also alarmed the country's business leaders, and they turned their alarm into action. They became convinced that capitalism itself was under attack by the likes of Rachel Carson and Ralph Nader, and they organized themselves into lobbying groups and think tanks that quickly became familiar and powerful players in U.S. politics: the Business Roundtable, the Heritage Foundation, and others. Their budgets and influence soon rivaled those of the older, consensus-minded groups, such as the Brookings Institution. By the mid-1970s, chief executives had stopped believing that they had an obligation to act as disinterested stewards of the national economy. They became a special interest; the interest they represented was their own. The neoconservative writer Irving Kristol played a key role in focusing executives' minds on this narrower and more urgent agenda. He told them, "Corporate philanthropy should not be, and cannot be, disinterested."

Among the non-disinterested spending that corporations began to engage in, none was more interested than lobbying. Lobbying has existed since the beginning of the republic, but it was a sleepy, bourbon-and-cigars practice until the mid- to late 1970s. In 1971, there were only 145 businesses represented by registered lobbyists in Washington; by 1982, there were 2,445. In 1974, there were just over 600 registered political action committees, which raised $12.5 million that year; in 1982, there were 3,371, which raised $83 million. In 1974, a total of $77 million was spent on the midterm elections; in 1982, it was $343 million. Not all this lobbying and campaign spending was done by corporations, but they did more and did it better than anyone else. And they got results.

These changes were wrought not only by conservative thinkers and their allies in the business class. Among those responsible were the high-minded liberals, the McGovernites and Watergate reformers, who created the open primary, clean election laws, and "outsider" political campaigns that relied heavily on television advertising. In theory, those reforms opened up the political system to previously disenfranchised voters by getting rid of the smoke-filled room, the party caucus, and the urban boss -- exchanging Richard Daley for Jesse Jackson. In practice, what replaced the old politics was not a more egalitarian new politics. Instead, as the parties lost their coherence and authority, they were overtaken by grass-roots politics of a new type, driven by direct mail, beholden to special interest groups, and funded by lobbyists. The electorate was transformed from coalitions of different blocs -- labor, small business, the farm vote -- to an atomized nation of television watchers. Politicians began to focus their energies on big dollars for big ad buys. As things turned out, this did not set them free to do the people's work: as Senator Tom Harkin, the Iowa Democrat, once told me, he and his colleagues spend half their free time raising money.

This is a story about the perverse effects of democratization. Getting rid of elites, or watching them surrender their moral authority, did not necessarily empower ordinary people. Once Walter Reuther of the United Auto Workers and Walter Wriston of Citicorp stopped sitting together on Commissions to Make the World a Better Place and started paying lobbyists to fight for their separate interests in Congress, the balance of power tilted heavily toward business. Thirty years later, who has done better by the government -- the United Auto Workers or Citicorp?

In 1978, all these trends came to a head. That year, three reform bills were brought up for a vote in Congress. One of the bills was to establish a new office of consumer representation, giving the public a consumer advocate in the federal bureaucracy. A second bill proposed modestly increasing the capital gains tax and getting rid of the three-Martini-lunch deduction. A third sought to make it harder for employers to circumvent labor laws and block union organizing. These bills had bipartisan backing in Congress; they were introduced at the very end of the era when bipartisanship was routine, when necessary and important legislation had support from both parties. The Democrats controlled the White House and both houses of Congress, and the bills were popular with the public. And yet, one by one, each bill went down in defeat. (Eventually, the tax bill passed, but only after it was changed; instead of raising the capital gains tax rate, the final bill cut it nearly in half.)

How and why this happened are explored in Jacob Hacker and Paul Pierson's recent book, Winner-Take-All Politics. Their explanation, in two words, is organized money. Business groups launched a lobbying assault the likes of which Washington had never seen, and when it was all over, the next era in American life had begun. At the end of the year, the midterm elections saw the Republicans gain 15 seats in the House and three in the Senate. The numbers were less impressive than the character of the new members who came to Washington. They were not politicians looking to get along with colleagues and solve problems by passing legislation. Rather, they were movement conservatives who were hostile to the very idea of government. Among them was a history professor from Georgia named Newt Gingrich. The Reagan revolution began in 1978.

Organized money did not foist these far-reaching changes on an unsuspecting public. In the late 1970s, popular anger at government was running high, and President Jimmy Carter was a perfect target. This was not a case of false consciousness; it was a case of a fed-up public. Two years later, Reagan came to power in a landslide. The public wanted him.

But that archetypal 1978 couple with the AMC Pacer was not voting to see its share of the economic pie drastically reduced over the next 30 years. They were not fed up with how little of the national income went to the top one percent or how unfairly progressive the tax code was. They did not want to dismantle government programs such as Social Security and Medicare, which had brought economic security to the middle class. They were not voting to weaken government itself, as long as it defended their interests. But for the next three decades, the dominant political faction pursued these goals as though they were what most Americans wanted. Organized money and the conservative movement seized that moment back in 1978 to begin a massive, generation-long transfer of wealth to the richest Americans. The transfer continued in good economic times and bad, under Democratic presidents and Republican, when Democrats controlled Congress and when Republicans did. For the Democrats, too, went begging to Wall Street and corporate America, because that's where the money was. They accepted the perfectly legal bribes just as eagerly as Republicans, and when the moment came, some of them voted almost as obediently. In 2007, when Congress was considering closing a loophole in the law that allowed hedge fund managers to pay a tax rate of 15 percent on most of their earnings -- considerably less than their secretaries -- it was New York's Democratic senator Charles Schumer who rushed to their defense and made sure it did not happen. As Bob Dole, then a Republican senator, said back in 1982, "Poor people don't make campaign contributions."

MOCKING THE AMERICAN PROMISE

This inequality is the ill that underlies all the others. Like an odorless gas, it pervades every corner of the United States and saps the strength of the country's democracy. But it seems impossible to find the source and shut it off. For years, certain politicians and pundits denied that it even existed. But the evidence became overwhelming. Between 1979 and 2006, middle-class Americans saw their annual incomes after taxes increase by 21 percent (adjusted for inflation). The poorest Americans saw their incomes rise by only 11 percent. The top one percent, meanwhile, saw their incomes increase by 256 percent. This almost tripled their share of the national income, up to 23 percent, the highest level since 1928. The graph that shows their share over time looks almost flat under Kennedy, Johnson, Nixon, Ford, and Carter, followed by continual spikes under Reagan, the elder Bush, Clinton, and the younger Bush.

Some argue that this inequality was an unavoidable result of deeper shifts: global competition, cheap goods made in China, technological changes. Although those factors played a part, they have not been decisive. In Europe, where the same changes took place, inequality has remained much lower than in the United States. The decisive factor has been politics and public policy: tax rates, spending choices, labor laws, regulations, campaign finance rules. Book after book by economists and other scholars over the past few years has presented an airtight case: over the past three decades, the government has consistently favored the rich. This is the source of the problem: our leaders, our institutions.

But even more fundamental than public policy is the long-term transformation of the manners and morals of American elites -- what they became willing to do that they would not have done, or even thought about doing, before. Political changes precipitated, and in turn were aided by, deeper changes in norms of responsibility and self-restraint. In 1978, it might have been economically feasible and perfectly legal for an executive to award himself a multimillion-dollar bonus while shedding 40 percent of his work force and requiring the survivors to take annual furloughs without pay. But no executive would have wanted the shame and outrage that would have followed -- any more than an executive today would want to be quoted using a racial slur or photographed with a paid escort. These days, it is hard to open a newspaper without reading stories about grotesque overcompensation at the top and widespread hardship below. Getting rid of a taboo is easier than establishing one, and once a prohibition erodes, it can never be restored in quite the same way. As Leo Tolstoy wrote, "There are no conditions of life to which a man cannot get accustomed, especially if he sees them accepted by everyone around him."

The persistence of this trend toward greater inequality over the past 30 years suggests a kind of feedback loop that cannot be broken by the usual political means. The more wealth accumulates in a few hands at the top, the more influence and favor the well-connected rich acquire, which makes it easier for them and their political allies to cast off restraint without paying a social price. That, in turn, frees them up to amass more money, until cause and effect become impossible to distinguish. Nothing seems to slow this process down -- not wars, not technology, not a recession, not a historic election. Perhaps, out of a well-founded fear that the country is coming apart at the seams, the wealthy and their political allies will finally have to rein themselves in, and, for example, start thinking about their taxes less like Stephen Schwarzman and more like Warren Buffett.

In the meantime, inequality will continue to mock the American promise of opportunity for all. Inequality creates a lopsided economy, which leaves the rich with so much money that they can binge on speculation, and leaves the middle class without enough money to buy the things they think they deserve, which leads them to borrow and go into debt. These were among the long-term causes of the financial crisis and the Great Recession. Inequality hardens society into a class system, imprisoning people in the circumstances of their birth -- a rebuke to the very idea of the American dream. Inequality divides us from one another in schools, in neighborhoods, at work, on airplanes, in hospitals, in what we eat, in the condition of our bodies, in what we think, in our children's futures, in how we die. Inequality makes it harder to imagine the lives of others -- which is one reason why the fate of over 14 million more or less permanently unemployed Americans leaves so little impression in the country's political and media capitals. Inequality corrodes trust among fellow citizens, making it seem as if the game is rigged. Inequality provokes a generalized anger that finds targets where it can -- immigrants, foreign countries, American elites, government in all forms -- and it rewards demagogues while discrediting reformers. Inequality saps the will to conceive of ambitious solutions to large collective problems, because those problems no longer seem very collective. Inequality undermines democracy.


How Washington Orthodoxy Fails the Middle Class - Business - The Atlantic

How Washington Orthodoxy Fails the Middle Class

Revitalizing the American middle class in a transformed global economy is a staggeringly complex task. And neither Democratic nor Republican orthodoxy alone is the answer.


On Tuesday, Barack Obama declared the debate over how to restore growth, balance, and fairness to the American economy the "defining issue of our time." "This is a make-or-break moment for the middle class," he said in a Kansas speech, "and for all those who are fighting to get into the middle class."

The following day, Republican front-runner Newt Gingrich said Mr. Obama "represents a hard left radicalism" and is "opposed to capitalism and everything that made America great." The best way to help the middle class, the former House Speaker argued, was slashing the size of the federal government and cutting taxes.

The arrival of the middle class at the center of the American political debate is a long overdue step forward, but Obama and Gingrich steered clear of an ugly truth. Revitalizing the American middle class in a transformed global economy is a staggeringly complex task. And neither Democratic nor Republican orthodoxy alone is the answer.


A recent study by MIT professors Frank Levy and Thomas Kochan lays out the staggering task that revitalizing the middle class represents. Rising blue-collar employment after World War II allowed the United States to create what Obama called "the largest middle class and the strongest economy that the world has ever known." Now that those factories have moved en masse overseas, the U.S. faces a far more arduous undertaking. Levy and Kochan argue for a new "social compact" that includes a public-private partnership in which the United States' unparalleled venture capital and research university systems create high-end design, production, marketing and distribution jobs. Reforming profit sharing, unions, higher education, on-the-job training and tax law would create higher-skilled American workers who benefit from company performance along with senior executives. They cite the training, innovation and profit-sharing practices of Wegmans, Cisco and Google as examples.

By contrast, Obama's most specific legislative proposal in his speech was a payroll tax cut funded by a surtax on millionaires. Economists say the cut is a helpful short-term stimulus, but the key to strengthening the middle class over the long term is the difficult task of creating stable, well-paying jobs.

The United States is not alone. Developed economies around the world are experiencing the same income disparity and stagnation in middle-class wages. The reasons for the change - and the potential solutions to America's economic woes - lie in the American middle class reinventing its place in a rapidly changing global economy. Sweeping technological innovations over the last twenty years have altered traditional economic dynamics. The Internet has created network effects in the extreme, with hundreds of millions of users worldwide making Amazon, Facebook and other companies extraordinarily valuable in extremely short periods. At the same time, global, computer-driven financial markets produce staggering profits and losses at unprecedented speed.

A study released Monday by the Organization for Economic Cooperation and Development found that the primary cause of income disparity in the United States and the organization's 33 other member countries was technological change. A historic integration of financial and trade markets, fueled by technology, created an unprecedented worldwide demand for highly skilled workers in those fields. As a result, a select group of CEOs, traders and others - the so-called one percent - became fabulously rich fantastically quickly.

At the same time, technology is dividing the middle class. A November study by researchers at Stanford and Brown found that the number of middle class neighborhoods in the United States has dwindled as income disparity has widened.

To the dismay of the middle class, technological innovation is sending jobs overseas but not reducing costs at home. Education and health expenses in the United States, for example, continue to rise steeply. As The Economist recently noted, the middle class is squeezed from two sides, with wages dropping and living costs rising.

Our tired, polarized politics have not caught up with these changes. The Democratic Party's failure to dramatically reform Medicare and Social Security, for example, undermines its argument that government can be lean and effective. At the same time, the global elite's prosperity is not magically trickling down as supply-side Republicans predict.

Finding a way forward is not easy. No one, including me, knows how to reinvent the American middle class. The workings of a rapidly evolving globalized economy remain poorly understood. And the challenges American society faces are generational.

Obama's goals and vision for the middle class, in general, are far more realistic and inventive than those of conservative Republicans. The Republican right, oddly enough, has become more doctrinaire, utopian and out-of-touch with global realities than the "Marxist" Obama administration. Criticisms that the president glosses over the country's staggering fiscal problems, twists figures, and issues vague proposals are legitimate, but the conservative right too often offers simplistic, naive and ideological answers to enormously complex dynamics.

Over time, the American middle class can innovate, moderate and educate its way back to prosperity. Public-private partnerships can create high-quality schools and jobs. American-made high-end goods and services can be exported to China and other growing economies.

Americans should not fear technological change or increasing global competition. Instead, we must forge a new politics at home and a new place in a transformed world economy.

This article is available online at:

http://www.theatlantic.com/business/archive/2011/12/how-washington-orthodoxy-fails-the-middle-class/249709/
