By Jason Gillikin | November 12, 2011
Last week an all-too-typical situation crossed my desk: A manager, under pressure to explain declining departmental performance, leaned on my data guys to come up with “new metrics,” then supplied a laundry list of requirements that included a potentially misleading aggregation of certain trends.
If only scenarios like this were rare. Depending on the industry (and health care is a huge offender here), the “talk” and the “walk” of data-driven process improvement are out of sync. To be sure, in some companies and industries, data-driven process improvement is an art form. But in too many enterprises, the push for numbers muddies the waters instead of clarifying them. Some thoughts:
- The push for “metrics” is, in practice, more lip service than a genuine opportunity for operational improvement. High-level leaders seem to want “dashboards” that provide too little detail to be actionable. Mid-level managers, trying to plug the gap, demand more data than they know how to process and try to benchmark unlike processes against one another. And in any case, few people actually look at the data to understand what’s going on; requested reports may earn a cursory glance but are otherwise left to collect dust. We demand, but we do not consume.
- Many leaders lack basic statistical competence. In manufacturing management, you can’t get away without knowing how to read a run chart. But as a rule, statistical competence is rare in large corporations. Most people don’t know the difference between a mean and a median. How many managers in a typical business know what a t-test is, or how to parse a correlation coefficient? (A short sketch after this list illustrates all three.) Yet the most meaningful opportunities for reform are hidden in the statistical signals. If a conclusion is so obvious from the data that a simple bar chart will reveal it, odds are good that everyone already knows about it anyway.
- People reject what they don’t understand. If a conclusion requires intermediate-level statistics (or, worse yet, sampling), some leaders find it harder to fully internalize and support, since they don’t really understand how the sausage was made. In some cases, it’s easier to dismiss or suppress data findings than to use the data to push difficult reforms.
- Data people often rank low on the org chart and lack the institutional authority to enforce methodologically sound practices. Data analysis is like moral philosophy: everyone thinks they’re an expert, even absent any meaningful training. Managers with nearly zero statistical competence nevertheless seem quite open to dictating methodology, assumptions, or even the data-collection process to subordinates who have much deeper subject-matter knowledge than they do. In this sort of environment, competent analysts are left to suck up the ethical challenges that daily business brings their way, and opportunities for real improvement sometimes get left in the dust.
- People usually argue about the data instead of taking ownership of failures. If I had a nickel for every time someone suggested that my reports needed to be “tweaked” or delivered in a certain way before they’d be useful, I could buy a new Ferrari. Yet the churn over reports masks what’s really going on: managers blame the data and thereby find ways to avoid accepting responsibility for their poor performance. It’s far easier to demand that someone else endlessly modify a report (all the while telling the upline that “the reports aren’t right”) than to own up to one’s own weaknesses as a leader.
- Some leaders think data comes from the ether. Well, in a sense. Few things are as frustrating as being asked to produce a dashboard for a product or service that has yet to be deployed, and for which no clear business benefit or ROI was ever articulated during project conceptualization.
- In new product or service deployments, reporting infrastructure is often an afterthought. Companies install new systems without bringing in the data guys until things are too far along to reverse, at which point the data guys announce that the deployment can’t actually provide the data people were hoping for.
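To make that statistical floor concrete, here’s a minimal sketch in Python. The language, the scenario, and every number are my own invention for illustration; nothing here comes from the shops described above. It shows the three items the second bullet names: a skewed series where the mean and median diverge, a two-sample t-test, and a correlation coefficient.

```python
# Hypothetical numbers throughout, invented purely for illustration.
from statistics import mean, median

from scipy import stats

# A right-skewed series, e.g. days-to-resolution for service requests.
# One outlier drags the mean upward; the median barely notices.
baseline = [2, 2, 3, 3, 3, 4, 4, 5, 6, 40]
print(mean(baseline))    # 7.2 -- inflated by the single 40-day case
print(median(baseline))  # 3.5 -- the typical request

# Did a process change actually help, or is the difference just noise?
# A two-sample (Welch's) t-test puts a p-value on the question.
after_change = [2, 2, 2, 3, 3, 3, 3, 4, 4, 5]
t_stat, p_value = stats.ttest_ind(baseline, after_change, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A correlation coefficient: staffing level vs. requests closed per day.
staffing = [3, 4, 5, 6, 7, 8]
closed = [11, 14, 19, 21, 26, 28]
r, _ = stats.pearsonr(staffing, closed)
print(f"r = {r:.2f}")  # strongly positive here, by construction
```

None of this is graduate-level work. It’s roughly the floor of literacy at which the interesting signals start to become visible.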
These trends aren’t universal, of course. Some managers have excellent statistical reasoning skills, and some companies or industries require advanced statistical competence as a prerequisite to entry. Yet for people trapped in work assignments wherein the inmates seem to be running the asylum, hope need not be lost.
First, a mismatch in statistical competence between analysts and leaders shouldn’t be a deal-breaker. Building trust helps both sides see where the other is coming from, and one of the best ways to keep everyone on track is to start early with measurement-systems design. Analysts should work with managers at the start of any new deployment or process-improvement effort to identify the expected return on investment and the business imperative behind the work. Agree on what will be measured before measurement actually begins.
Second, analysts ought to use their analysis documents as teaching moments, explaining the what, how, and why behind their conclusions. Help people understand what’s being done without condescending to them as if the audience were made up of 10-year-olds.
Third, bring in extra help. If a customer wants information that isn’t supported by the data, or that presents a misleading picture of what the data indicate, push back, even if you’re a low-level data monkey. Cite external authorities, or even the American Statistical Association’s Ethical Guidelines for Statistical Practice. Make it clear that an ethical boundary is being crossed, and escalate the issue within the organization.
Data-driven operational improvement, if done well, brings value to all stakeholders. The trick? Do it well. And sometimes getting to that point requires skills more political than statistical.