Polls as gossip

Early in my career as one of those "university academics," I experienced a number of intellectual revelations – at least, revelations for a fat boy who had grown up mostly in Cleburne, Texas and (earlier) in Winslow, Arizona.  Some of these came when I began the courses in marketing and statistics that eventually led to a B.S. in economics, an MBA, and a Ph.D. in economics.  One of the first assignments I remember from my first statistics course was to read a book titled How to Lie with Statistics.

The book was assigned not to teach us what we were supposed to do when collecting data and using statistical analysis to interpret those data.  It was assigned to teach us what we were not supposed to do.  To cloak statistical analysis in such forbidden procedures was not science at all: that type of research was revealed as merely an elaborate sham to promote whatever outcome the user held at the outset, an outcome or determination with no scientific "truth" about it.  Much of what we are sold as "scientific" results is really only propaganda.

Also included with this assignment were some lectures on associated considerations – delivered by our milquetoast old professor, one Vernon Clover: bad dandruff, small stature, and (unlike the evolved faculty of today) not a political agenda in sight.  These observations might be grouped in a file folder titled "The Scientific Method."  The lectures focused on principles valuable to researchers striving to separate chicken salad from the other chicken stuff.

One principle is that one can never prove anything; we can never know all the data.  The best we can hope to do is try to disprove our hypothesis and fail.  Formally, a study tests a null hypothesis against an alternative: if the data allow us to reject the null, then the scientific conclusion is that we tentatively accept the alternative, pending further investigation; if they do not, we have proven nothing either way.  Inescapably, therefore, science is never settled.  Another principle is "garbage in, garbage out," referring to the design of the data-collection experiment: exclude noise, and include all the relevant data possible within budget and time constraints.
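To see that logic in code, here is a minimal sketch in Python (the made-up samples, the poll scenario, and the conventional 0.05 significance cutoff are all illustrative assumptions, not anything from Clover's lectures).  It runs a standard two-sample t-test, and notice that neither branch ever says "proven."

    # A minimal sketch of "fail to disprove" in practice, assuming Python
    # with NumPy and SciPy installed.  All numbers here are made up.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=0)

    # Two hypothetical samples -- say, poll responses from two towns.
    town_a = rng.normal(loc=50.0, scale=10.0, size=100)
    town_b = rng.normal(loc=52.0, scale=10.0, size=100)

    # Null hypothesis: the town means are equal.  Alternative: they differ.
    t_stat, p_value = stats.ttest_ind(town_a, town_b)

    alpha = 0.05  # the conventional cutoff -- itself just a convention
    if p_value < alpha:
        # Reject the null; tentatively accept the alternative --
        # tentatively, because new data could still overturn it.
        print(f"Reject the null (p = {p_value:.3f}): accept the alternative, for now.")
    else:
        # We have NOT proven the null; we have merely failed to disprove it.
        print(f"Fail to reject the null (p = {p_value:.3f}): nothing is proven.")

Either way, the question stays open to new data, which is exactly the sense in which science is never settled.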

Simon Kuznets, back in the 1930s, headed the federal effort first charged with developing our national economic databases.  A quote attributed to him goes (paraphrased): "The government can collect all the data it wants, and its researchers and statisticians can massage that data, run transformations on it, and do all sorts of things to it in the name of accurate and meaningful results.  But we must never forget that in the first instance, the data were written down by the town watchman, who writes down what he damned pleases."

Another principle is to avoid any personal bias, held at the outset, as to what the conclusions of the study should be.  Otherwise, the experiment will not yield chicken salad.  Of course, all humans are biased at some level, and this bias creeps into all science and skews conclusions.  At the least, there is cultural bias.  Muslims, for example, will not research certain tenets of their faith.  Nor will we research ours.  Or racial differences in I.Q.  Or gender.  Except to "prove" some predetermined value judgment.  No chicken salad there.

Finally, we were cautioned about the "engineer mentality": science seeks to operate in the realm of what is and of what can be done.  Science has absolutely nothing to say about whether we ought to do something.  That is a personal opinion – one no more valid than any other.  And that point is where the marketing comes in.  Say "goodnight," Al Gore.