Saturday, July 17, 2010

On Skepticism, Statistics, And The Social Sciences

Though the subject matter and the glowing reviews incline me to add this book to my summer reading list, I haven't yet read it. Judging from the précis that Slate presents, however, its argument is timely and important, and one I'm inclined to credit. More than ever before, we rely on the social sciences as a tool to aid us in making important economic, educational, and political decisions, and rightly so: individual human cognition being the flawed process that it is, models that at least attempt to assemble a coherent, methodical account of what really happens in society are a vital check against errors in reasoning. But the thought has sometimes struck me that in recent years the social sciences may have become just advanced enough to lead us seriously astray in some cases, and a healthy skepticism about statistics may be what we need to prevent that.

Data is the currency that gives any scientific hypothesis its value, but even in the so-called hard sciences, where a controlled and self-contained laboratory environment is possible, it can be difficult to gather reliably. In the social sciences, this problem is orders of magnitude worse. Human society and behavior are complex phenomena, riddled with confounding factors that make it difficult even to spot relationships with real explanatory value. Economics, sociology, anthropology, and the rest have given us tools to filter some valuable information out of the vast and murky pool of data that is observable human behavior, but we get ahead of ourselves when we mistake valuable information for knowledge. More often, it is merely a clue that points toward the possibility of knowledge: a magnetic reading that may indicate the presence of something like a needle somewhere in the haystack. Even trained social scientists working from sound data sometimes fall into faulty reasoning. And once we admit that our metal detector can malfunction, that the data itself may be incomplete, unrepresentative, or otherwise imperfect, we must also admit that what we've found may not be anything like a needle at all.
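The confounding problem above is easy to demonstrate with a toy simulation (my own illustration, not anything from the book or the Slate piece): two variables that never influence each other can still appear strongly correlated if both are driven by a shared hidden factor, and the apparent relationship largely vanishes once that factor is controlled for.

```python
import math
import random

random.seed(0)

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]   # hidden confounder
x = [zi + random.gauss(0, 1) for zi in z]    # x depends only on z
y = [zi + random.gauss(0, 1) for zi in z]    # y depends only on z

r_xy = pearson(x, y)
r_xz = pearson(x, z)
r_yz = pearson(y, z)

# Partial correlation of x and y controlling for z: the apparent
# x-y relationship mostly disappears once the confounder is removed.
partial = (r_xy - r_xz * r_yz) / math.sqrt(
    (1 - r_xz ** 2) * (1 - r_yz ** 2)
)

print(f"raw correlation of x and y:  {r_xy:.2f}")    # near 0.5
print(f"partial correlation given z: {partial:.2f}")  # near 0.0
```

A naive observer of x and y alone would report a solid correlation; only by measuring the confounder z, which real-world data often does not let us do, does the illusion dissolve.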

This problem, along with the fact that scientists are not immune to the demonstrable human frailties of ideological and cognitive bias, is why I adhere to a sort of Burkean, small-c conservatism on questions of social policy. Flawed as our long-standing social institutions may be, they have served us well enough to endure, and we cannot dismiss the possibility that a sort of unconscious ancestral wisdom is embedded in them. New findings in social science serve a vital function by forcing us to periodically re-examine our assumptions and by giving us ideas with which to experiment in improving our institutions, but they are not a good argument for upending entire long-standing systems. When the data, flawed, messy, and problematic as it is, suggests that a certain policy change may improve things, we should try it, but we should not commit to it until our collective experience has confirmed that the data's suggestion was correct.
