Wednesday, March 08, 2006

I saw an interesting article that discusses what Daniel Yankelovich sees as a "great divide" between 'scientists' and 'society.'

While I don't necessarily agree completely with the points he raises or the solution he recommends, many of his observations are really thought-provoking in the larger context of society (not just with regard to science). Here is a short section from the article that I thought might raise some interesting questions in your minds. Any thoughts?

"Winning Greater Influence for Science"
by Daniel Yankelovich
Issues in Science and Technology
(published by the National Academy of Sciences)
Summer 2003

.....
The unfortunate reality is that scientists and the rest of society operate out of vastly different worldviews, especially in relation to assumptions about what constitutes knowledge and how to deal with it. Scientists share a worldview that presupposes rationality, lawfulness, and orderliness. They believe that answers to most empirical problems are ultimately obtainable if one poses the right questions and approaches them scientifically. They are comfortable with measurement and quantification, and they take the long view. They believe in sharing information, and their orientation is internationalist because they know that discoveries transcend borders.

The nonscientific world of everyday life in the United States marches to a different drummer. Public life is shot through with irrationality, discontinuity, and disorder. Decisionmakers rarely have the luxury of waiting for verifiable answers to their questions, and when they do, they almost never go to the trouble and cost of developing them. Average Americans are uncomfortable with probabilities, especially in relation to risk assessment, and their time horizon is short. Policymakers are apprehensive about sharing information and are more at home with national interests than with internationalism. Most problems are experienced with an urgency and immediacy that make people impatient for answers; policymakers must deal with issues as they arise and not in terms of their accessibility to rational methods of solution.

This profound difference in worldview manifests itself in many forms, some superficial, some moderately serious, and some that cry out for urgent attention. Here are three relatively superficial symptoms of the divide:

Semantic misunderstandings about the word "theory." To the public, calling something a "theory" means that it is not supported by tested, proven evidence. Whereas a scientist understands a theory to be a well-grounded explanation for a given phenomenon, the general public understands it as "just a theory," no more valid than any other opinion on the matter. (Evolutionary "theory" and creationist "theory" are, in this sense, both seen as untested and unproven "theories" and therefore enjoy equivalent truth value.)

Media insistence on presenting "both sides." When this confusion over "theory" bumps up against media imperatives, the result is often a distorting effort to tell "both sides" of the story. In practice, this means that even when there is overwhelming consensus in the scientific community (as in the case of global warming), experts all too often find themselves pitted in the media against some contrarian, crank, or shill who is on hand to provide "proper balance" (and verbal fireworks). The resulting arguments actively hinder people's ability to reach sound understanding: Not only do they muddy the public's already shaky grasp of scientific fundamentals, they leave people confused and disoriented.

Science's assumption that scientific illiteracy is the major obstacle. When faced with the gap between science and society, scientists assume that the solution is to make the public more science-literate--to do a better job at science education and so bring nonscientists around to a more scientific mindset. This assumption conveniently absolves science of the need to examine the way in which its own practices contribute to the gap and allows science to maintain its position of intellectual and moral superiority. In addition, on a purely practical level a superficial smattering of scientific knowledge might cause more problems than it solves. Two other manifestations of the divide are less superficial and more serious:

The craving for certainty about risk and threat. The public and policymakers crave a level of certainty that the language and metrics of science cannot provide. For example, when the public is alarmed by something like the anthrax scare or some future act of small-scale biological or chemical terrorism, science will assess the threat in the language of probabilities. But this metric neither reassures the public nor permits it to make realistic comparisons to other threats, such as nuclear terrorism. Science's frame of reference does not communicate well to the public.

Divergent timetables. The timetables of science (which operates in a framework of decades or longer) are completely out of synch with the timetables of public policy (which operates in a framework of months and years). It has taken nearly 30 years for the National Academy of Sciences to complete its study of the consequences of oil drilling in Alaska's North Slope; in that time, a great deal of environmental damage has been done, and political pressure for further exploration in the Arctic National Wildlife Refuge has gained momentum. At this stage, the academy's scientific report stands to become little more than a political football. Vaccine research is another example: Political demands for prompt action on high-profile diseases do not jibe well with the painstaking process of research and trial. Political pressures push resources toward popular or expedient solutions, not necessarily those with the greatest chance for long-term success.

Two more manifestations of the divide are particularly troublesome:

The accelerating requirement that knowledge be "scientific." In both the academic community and Congress, the assumption is growing that only knowledge verified by scientific means (such as random assignment experiments) can be considered "real knowledge." Unfortunately, only a minuscule number of policy decisions can ever hope to be based on verified scientific knowledge. Most public policy decisions must rely on ways of knowing--including judgment, insight, experience, history, scholarship, and analogies--that do not meet the gold standard of scientific verification. Our society lacks a clear understanding of the strengths and limitations of nonscientific ways of knowing, how to discriminate among them, and how they are best used in conjunction with scientific knowledge. Since the time of the ancient Greeks, our civilization has presupposed a hierarchy of knowledge, but never before have forms of nonscientific knowledge been so problematic and devalued, even though they remain the mainstay of policy and of everyday life.

Colliding political and scientific realities. Although the scientific framework demands that scientists maintain objectivity and neutrality, political leaders pressure scientists to produce the "correct" answers from a political point of view. When political and scientific imperatives collide, science is usually the loser. President Reagan's science advisors on antiballistic missile systems found themselves marginalized when they didn't produce the answers the administration wanted. Scientists do not have a lot of experience in dealing with political pressures in a way that permits them to maintain both their integrity and their influence. Arguably, this has been the greatest single factor in science's declining influence in policy decisions.

Nor are these the only symptoms. A host of other elements exacerbate the divide between the two worlds: unresolved collisions with religious beliefs, difficulty in assessing the relative importance of threats, the growing number and complexity of issues, and the wide array of cultural and political differences in society.
.....
