
August 30, 2006

Agnotology: The study of ignorance — 'It is often safer not to know'


Isn't that the understatement of the still-young century?

How do we protect the President?

Simple: we don't tell him so he can deny any knowledge of it.

Karen Arenson, in an August 22 New York Times story about the recent SAT scoring debacle, expanded her focus to the larger question of how it happens that Gaston Caperton, the president of the College Board, which administers the test, could say with a straight face, as he recently did, that finding the specific cause of the disaster "did not really matter."

Like you, my first reaction was, "Huh?"

I guess when you're king you can get away with stuff like that.

I mean, stuff happens, right?

Long story short: Arenson interviewed Robert N. Proctor, a Stanford University professor "who is shaping a new field, the study of ignorance, which he calls agnotology."

Stop right there.

I hereby declare bookofjoe the Official Website For the Advancement of Agnotology™ (OWFAA).

Because who's better equipped to host anything having to do with ignorance than an ignorant person?

Enoch Choi?

But I digress.

Here's the Times story.

    What Organizations Don’t Want to Know Can Hurt

    Call in Miss Marple, perhaps, or Sherlock Holmes.

    Maybe they could solve the mystery of how some of the College Board’s SAT exams became wet last October, resulting in mistaken scores on more than 5,000 tests.

    The College Board’s president, Gaston Caperton, a former insurance executive and governor of West Virginia, said that finding the specific cause “did not really matter.” What was important, he said, was to ensure that improved controls caught future problems.

    But is that really enough? Should the College Board want to know whether someone merely spilled coffee on the answer sheets or whether they were exposed to rain? Was there negligence or a breakdown in security? The answer could help prevent similar incidents in the future.

    The leaders of corporations and other institutions, it turns out, are not always hungry for more information. Investigations can be costly. They can assign blame. They can uncover things that might give ammunition for lawsuits. They may delve deep into assumptions made when a system was put together, which may be outdated or expensive to change.

    Some technology watchers, like Robert N. Proctor, a Stanford professor who specializes in the history of science and technology, say they believe that there is increasing resistance to investigating, even as instances that warrant digging seem to be climbing.

    “There is a lot more protectiveness than there used to be,” said Dr. Proctor, who is shaping a new field, the study of ignorance, which he calls agnotology. “It is often safer not to know.”

    A desire not to learn too much may have guided Kenneth L. Lay, former chief executive of Enron, who died of a heart attack in July, six weeks after his conviction on fraud charges.

    Sherron S. Watkins, an Enron vice president, warned him of numerous financial improprieties, including accounting problems, months before the company’s collapse in late 2001. Mr. Lay did commission a review of her charges. But instead of hiring people with no relation to the company and a reputation for tough investigations, he employed the company’s longtime law firm and requested a narrow inquiry.

    “For one reason or another, whether it was sloppiness, or whether Enron signaled what it wished to happen, the problems were papered over,” said Stephen Gillers, a New York University law professor and ethics expert, who said that the result was a kind of “sweetheart investigation.”

    There seems to be no shortage of other examples in the corporate world. Merck, the third-largest American drug maker, was faced in 2000 with clinical trial results showing that patients taking Vioxx, a painkiller, had five times the number of heart attacks as those taking an older drug.

    But the company refused to accept the most obvious explanation — that Vioxx might be dangerous to the heart. Merck continued to sell Vioxx for four more years, until another trial confirmed Vioxx’s dangers, before withdrawing the drug.

    Now, Merck faces 14,000 lawsuits, covering almost 30,000 plaintiffs, contending the drug caused heart attacks and strokes. The company says it did nothing wrong and plans to defend every lawsuit.

    The desire to avoid blame and litigation can work as a powerful incentive to keep details from surfacing.

    “People managing those systems don’t want people peeking behind the curtain to look at the systems,” said John M. Staudenmaier, editor of the journal Technology and Culture.

    “It’s not that all the men behind the screens are bad guys,” he added. “This is not about conspiracy. It is about the nature of very complex systems. You can’t watch everything.”

    Fields like aviation and medicine have found ways to overcome this.

    When a plane crashes, a specially designated agency, the National Transportation Safety Board, is there, with the mandate to investigate in detail. The emphasis is on finding cause, not fault. And the agency is not responsible for bringing enforcement actions for errors, which it says adds to its impartiality.

    Surgery, too, has a standardized mechanism to learn from errors. In contrast to aviation, the analysis is not done by a large investigative body, but in individual hospitals, where surgical departments routinely hold “mortality and morbidity” conferences to analyze mistakes.

    Atul Gawande, a surgeon at Brigham and Women’s Hospital in Boston, who described the process in his book “Complications: A Surgeon’s Notes on an Imperfect Science” (Metropolitan Books, 2002), said in an interview that surgery had moved from a field that lost many patients to one where problems arise less than 3 percent of the time “only by being willing to examine failures and to benefit from them.”

    The model is a simple one, he said. But it depends on confidentiality and an internal culture that does not assign blame. “It is a culture,” he said, “that accepts a sort of contradictory proposition that error is unacceptable, but that it is going to happen.”

    Of course, plane crashes and surgical errors involve the potential loss of life, which raises the pressure to avoid mistakes. But other types of organizations also have cultures and procedures to foster fact-finding and open discussion of mistakes.

    David A. Garvin, a Harvard Business School professor who studies how organizations learn and improve, said one useful model was the Army’s “after-action reviews,” developed in the 1970’s to learn more from combat training exercises.

    The model has spread in the Army, and corporations like Microsoft have employed similar types of reviews, said Dr. Garvin, who wrote about the process in “Learning in Action: A Guide to Putting the Learning Organization to Work” (Harvard Business School Press, 2000).

    In educational testing, which has become a big business, there are no health, safety or national security issues to compel an inquiry.

    That is why Kenneth P. LaValle, a New York State senator who heads the Senate Higher Education Committee, has proposed creating an oversight agency for the College Board and other testing organizations. The College Board’s failure to discover and disclose what really happened, he said, reinforced his belief that “at the core of their psyche is the feeling that they don’t have to share information with anyone.”

    The College Board quickly asked Pearson Educational Measurement, the company that scores the tests for the board, to explain how it had misscored thousands of exams. Pearson found that answer sheets swollen by dampness had been misread by scanning machines. The College Board suggested that the dampness may have come from heavy rains in the Northeast on the day of the test.

    The biggest numbers of mistaken scores were indeed in New York (1,215), New Jersey (996) and Massachusetts (591), according to a list from the board. But that explanation did not account for mistakes on exams taken in dozens of other locations, including Alaska, Hawaii, Nigeria and Bahrain. Nor did it explain why tests taken in the morning were affected, but not the subject tests taken in the afternoon.

    David Hakensen, a spokesman for Pearson, said his company could not comment because of litigation over the scoring errors.

    At Booz Allen Hamilton, the consulting firm the College Board hired to review its scoring practices after the problems occurred, executives said that dissecting past problems was not necessary either to ensure that the scoring system worked better in the future or to confirm that there was a good safety net to catch errors.

    “You can do both without knowing whether it was rain that made the papers wet, or whether someone spilled a cup of coffee,” said Gil I. Irwin, who led the study, along with Patrick W. Houston.

    Mr. Houston added that learning about problems in detail was not always worthwhile because the next problem might well be different.

    “If we tried to brainstorm everything that could possibly go wrong,” he said, “we’d be here for years — for a lifetime. But if controls are in place to identify problems, and rescore tests that were misscored, that’s what you’re really looking for.”

    Some testing experts disagree. “A common principle in troubleshooting in any realm is to isolate the source of the problem,” said Walter M. Haney, a testing expert at Boston College and its Center for the Study of Testing, Evaluation and Educational Policy.

    (Dr. Haney, who called the decision not to investigate further “irresponsible,” submitted an affidavit in the lawsuit against the board in federal court in Minnesota, saying there should be an outside inquiry.)

    He said in an interview that as educational testing has increased, there has been “a real upsurge in errors,” and that he saw little evidence that the College Board or other testing organizations were learning from their mistakes.

    Mr. Caperton, the board’s president, said the perception that the board was not being forthcoming was wrong. “We were never hiding anything from the first,” he said. “We want to be honest and open.”


I hereby declare the Zombies the Official Rock Group of OWFAA.

August 30, 2006 at 10:01 AM | Permalink




this is important to consider in medicine, where we are encouraged to ask for particular details in the medical history... if we're not told, we're not responsible... sometimes with catastrophic consequences... it'd be better to have multiple rechecks with the patient about their history, possibly using an internet-based personal health record.

Posted by: enoch choi | Aug 31, 2006 12:15:59 PM

In the absence of facts, comes speculation: SATs are paper-driven, probly scored by hand, probly by overworked grad students or interns, some of whom took them home. Probly.

A spilled cuppa joe probly could not destroy 5,000 SATs, but a full-bladdered pet could. Not a yap dog, probly, but a Great Dane alpha male would certainly have the capacity.

Hence, a twist on the time honored "My dog ate my homework."

Which could elicit the bureaucratic face-saving response "it doesn't really matter" how the SATs got wet.

I sense a cover up.

Posted by: Ron Nelson | Aug 30, 2006 8:02:19 PM
