Sampling error is a useful tool in the hands of anyone who routinely gathers data, which is pretty much everyone. It has great value for two-year-olds, who spend nearly all of their waking hours collecting data about the world. It is equally valuable to adult scientists, whose working hours are generally spent recording data for a living. Simply put, when the data humans gather doesn't agree with their preconceived view of the world, the convenient explanation is "sampling error."
Sampling error is the natural consequence of the inability to examine everything. If you ask everyone on the planet whether they prefer red wine or white, you will get a very accurate idea of which is the most popular (Note: those who don’t drink wine are outliers, and can be eliminated). If, on the other hand, you ask this question of ten people in line at the fish market, you might get all whites. Since this is completely unreasonable, you can invoke sampling error, and get on with your Pinot Noir.
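The fish-market scenario can be simulated. The sketch below is a toy model with made-up numbers: it assumes a hypothetical population in which 55% prefer red, then compares how much the measured preference bounces around for samples of ten people versus samples of ten thousand, and computes the (small but nonzero) chance that all ten people in line say "white."

```python
import random

# Hypothetical population: 55% prefer red, 45% prefer white (invented figures).
P_RED = 0.55

def sample_preferences(n, seed=None):
    """Ask n random people; return the fraction who answer 'red'."""
    rng = random.Random(seed)
    answers = [rng.random() < P_RED for _ in range(n)]
    return sum(answers) / n

# A tiny sample of 10 swings far more than a sample of 10,000.
small = [sample_preferences(10, seed=s) for s in range(100)]
large = [sample_preferences(10_000, seed=s) for s in range(100)]

spread_small = max(small) - min(small)
spread_large = max(large) - min(large)
print(f"spread across 100 samples of 10:     {spread_small:.2f}")
print(f"spread across 100 samples of 10,000: {spread_large:.3f}")

# The chance that every one of ten people says "white": 0.45 ** 10,
# rare, but it needs no dark explanation beyond ordinary sampling error.
p_all_white = (1 - P_RED) ** 10
print(f"P(all ten say white) = {p_all_white:.5f}")
```

The point of the toy model: getting ten whites in a row is unlikely but entirely consistent with an honest sample, so "sampling error" is sometimes the sober explanation rather than the convenient one.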
As handy as sampling error can be in some situations, it also has a dark side. Back in the 1960s, cognitive psychologist Peter Cathcart Wason was busy trying to understand why people make certain predictable mistakes in reasoning. He concluded that folks tend to favor information that supports their own personal beliefs and preconceived notions, regardless of whether or not that info is true. Pete's term for this partiality? Confirmation bias. In the case of the wine example, it's clear that those who answered "white" either aren't serious wine drinkers, or else they didn't understand the question. These too are "outliers" to be ignored.