
Performance Measures, Games, and Unintended Consequences

Game theory in public administration assumes that sometimes human beings will act in ways that are contrary to the organization’s or society’s best interests. The prisoner’s dilemma and the tragedy of the commons, for example, caution us about unintended consequences.
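The prisoner's dilemma can be made concrete with a tiny payoff table. This is an illustrative sketch with standard textbook payoffs (not from the article): each player gains by defecting no matter what the other does, yet mutual defection leaves both worse off than mutual cooperation.

```python
# Payoff matrix: (my_payoff, their_payoff) indexed by (my_move, their_move).
# Payoff values are the conventional textbook numbers, chosen for illustration.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """Return the move that maximizes my own payoff, given the other's move."""
    return max(["cooperate", "defect"],
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defecting is individually rational no matter what the other side does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet the resulting outcome (1, 1) is worse for everyone than (3, 3).
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])
```

The same structure shows up in metric gaming: each employee is individually better off padding the numbers, even though everyone is worse off once the numbers stop meaning anything.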

A recent article in Governing argued that performance management is susceptible to the same pitfalls: everyone from Wells Fargo employees to schoolteachers may game their performance measures to show better results on paper, even as the needs of customers or students go unmet.

Obviously we face a choice. On one hand, we can give up and say—to paraphrase Lord Acton and confirm the article’s pessimism—that performance metrics corrupt, perhaps absolutely. On the other, we can recommit to their value. Here are a few considerations to keep in mind:

  • Quotas. Yes, quotas are dangerous. Whether you’re looking at charter school attendance, speeding ticket revenues, or real estate sales, placing too high a value on the final numbers can lead to all manner of unethical approaches to bump up the numbers. For most jurisdictions, however, the pressures to meet quotas can be offset very easily by tracking more than one performance metric. For example, schools might look not only at daily attendance, but also at standardized test scores, graduation rates, college acceptances, or even alumni giving. Is your IT department graded on its processing time for help desk tickets? Perhaps you should track not only how long each ticket was open within IT, but also whether it was simply forwarded to another department unresolved, and whether employees are satisfied with IT services. You might also consider whether all help desk tickets are created equal, or whether some higher priority requests (e.g., dispatch systems offline) should be assessed separately.
  • Achievable results. Stretch goals can be a great motivator, but they run the risk of setting unrealistic expectations. If you’re adopting concrete goals for staff, be sure to work with them to agree on what’s achievable, both from their own experience and from their research on benchmark jurisdictions or private industry. If the targets are not achieved, be sure to analyze the possible causes and put an action plan in place to improve on that performance going forward.
  • Actionable intelligence. When the reason for measuring is a sense that it’s “the right thing to do,” or that workload statistics need to be presented to justify a budget request, it’s easy to fall into the trap of providing that data as part of the budgetary “rite of spring” and then forgetting it. Yes, it’s good to know how many work orders a department completed, and whether that total went up or down over the past year, but if you’re not planning to make any decisions based on the data, then maybe you’re wasting your time reporting it. Instead, think about the metrics that could inform a decision. Are you collecting those? And are you reporting on those on a routine basis (quarterly, monthly, or weekly) that would enable timely action? If your school system had a truancy problem, would you want the administrators to wait until June to address it? Of course not. So if your fire/EMS managers note trends in cardiac health, space heater fires, or drug overdoses, shouldn’t those be discussed and addressed as they arise?
  • Transparency. Some cynical local residents might assume that government workers are malingerers, relatives of politicians, or short-timers running out the clock, protected by civil service rules and seniority. But while the machine politics of old contributed to some of those stereotypes, performance data can help dispel that image. Not only can governments tell you at year end how many potholes were filled and how quickly, but through CRM/311 apps, they can pinpoint the specific locations of the work and the response times. With searchable databases and GIS, it’s also much easier to demonstrate the equity of service among neighborhoods.
  • Collaboration. Where the results to be achieved cross departmental or agency boundaries, there can be incentives for all sides to share information and work toward better understanding and more precise measurement. Yes, fire/EMS managers might be noticing more drug overdoses, but those same or underlying issues are undoubtedly being tracked by police, public health, social service, code enforcement, school districts, and other agencies. By sharing data from those disparate sources, there is a greater likelihood of identifying problem areas early and mitigating the contributing factors. And it doesn’t matter which department took the lead. Which brings us to . . .
  • Credit and blame. For performance management and a culture of continuous improvement to take hold, the concepts of credit and blame need to be laid aside. Unless there is obvious malfeasance, the goal of performance stat meetings or the like is not to reward individuals or berate those who did not achieve their goals. It’s to discover what’s working and what’s not and build on that information. If you’re in the early stages of performance management implementation, think about ways in which you can reward your teams for their commitment to the process, transparency, and collaboration, rather than just focusing on the quotas.
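The “more than one performance metric” idea from the quotas point above can be sketched in a few lines. This is a hypothetical example using the help desk scenario; the field names, survey scale, and sample values are assumptions for illustration, not a standard scorecard.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    hours_open: float           # time from open to close
    forwarded_unresolved: bool  # bounced to another department unresolved?
    satisfaction: int           # employee survey score, 1-5 (assumed scale)
    high_priority: bool         # e.g., dispatch systems offline

def scorecard(tickets):
    """Report several complementary measures instead of one quota-friendly number."""
    n = len(tickets)
    high = [t for t in tickets if t.high_priority]
    return {
        "avg_hours_open": sum(t.hours_open for t in tickets) / n,
        "pct_forwarded_unresolved": 100 * sum(t.forwarded_unresolved for t in tickets) / n,
        "avg_satisfaction": sum(t.satisfaction for t in tickets) / n,
        # High-priority requests assessed separately, as the quotas point suggests.
        "avg_hours_open_high_priority": (
            sum(t.hours_open for t in high) / len(high) if high else None
        ),
    }

tickets = [
    Ticket(2.0, False, 5, True),
    Ticket(0.5, True, 2, False),   # fast on paper, but bounced and unpopular
    Ticket(8.0, False, 4, False),
]
print(scorecard(tickets))
```

The second ticket is the gaming case: judged on processing time alone it looks best, but the forwarded-unresolved rate and satisfaction score expose it, which is exactly why a single quota metric is easy to beat and a small basket of metrics is not.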

So while a certain degree of housecleaning is in order, the problems with performance management are no reason to drop grading, surveying, efficiency metrics, and quality controls altogether. Rather, once the measures have been thoughtfully reviewed, those that remain should inform an ongoing discussion about strategic goals, effective management, and customer service.


Thomas Miller

Well written and important not to ignore. The gaming of evaluations of all kinds can undermine not only the trust of consumers but also the trust of the honest folks who measure fairly.
