Checklists for more systematic knowledge work

The Checklist Manifesto: How to Get Things Right, by Atul Gawande

The idea of using a simple checklist to raise the quality of a routine practice seems innocuous enough. Yet it rankles those with extensive education and experience, who see it as an unnecessary intrusion on their autonomy.

The canonical example is the story of the effort at Johns Hopkins Hospital to reduce central line infections in critical care settings. A central line is a catheter inserted into a major vein, typically the jugular, to deliver medications. It’s a routine step for many patients in a critical care unit. It’s also a primary source of infection for patients in hospitals. While inserting a central line is straightforward for someone with the proper training, medical professionals skip steps in the hustle and bustle of a busy unit. Peter Pronovost, a critical care specialist at Hopkins, developed a five-point checklist of the steps necessary to avoid central line infections.

There’s absolutely nothing on the list that practitioners aren’t already trained to do, and nothing controversial about the steps it calls for. Many of those professionals considered it an insult to have the obvious pointed out to them in written form. Yet when the checklist was deployed at Hopkins, central line infections dropped from 11% of patients to zero. Comparable results have been routinely achieved elsewhere.

Gawande first reported these results in an article in The New Yorker. In this book, he expands on that story to look at:

  • the origins of the modern checklist in 1930s aviation (the crash of Boeing’s Model 299, the prototype of the B-17)
  • multiple examples of checklists deployed in other health care settings
  • the challenges inherent in developing checklists that work well in complicated environments
  • the difficulties in gaining meaningful acceptance of checklists among highly autonomous professionals

We live in an increasingly complicated and faster-paced world, but our memories are limited and fallible. The right piece of paper in the right place can compensate for those limitations and increase our capacity to deal with that world. The first balancing act is to design a checklist that increases our capacity to handle a situation significantly more than it adds to the load on our limited memories. Pronovost’s checklist touched only on the five items most critical to preventing infections. It made no attempt to spell out every possible step in the process.

A checklist shouldn’t be confused with a procedure manual. Avoiding that confusion is an essential element in making organizational acceptance of checklists possible. Checklists are intended to improve and systematize the performance of those who are already proficient. In themselves, they are poor tools for developing proficiency in those still learning their craft.

This confusion between checklist and procedure is at the root of most resistance to deploying checklists in suitable settings. Unfortunately, Gawande contributes to the confusion himself when he conflates checklists with project plans. Both are useful documents, but they serve different purposes and are constructed differently. I’d suggest skipping the chapter "The End of the Master Builder" on a first reading; doing so makes the core argument clearer.

Even when a checklist is properly designed and targeted as a relevant aid for the proficient, deploying it to support more effective practice still presents a change management and leadership challenge. While Gawande offers a number of excellent stories and examples of implementing checklists in various settings, he isn’t looking for, or tuned into, the relevant details of organizational change. This book provides excellent insight into why checklists work and what to think about when constructing them; expect to look elsewhere for comparable advice on managing the associated change, and expect to need it.

As compelling as the rational evidence for checklists may be, orchestrating their adoption into the work practices of professionals presents a large hurdle. The hurdle, of course, is emotional. A checklist can be viewed as diminishing one’s expertise rather than as reinforcing it. Reversing that perception for both the expert and the rest of the organization is the key.

Chromakey and knowledge work

I came across a YouTube video the other day while checking out Boing Boing (one of my favorite sources of interesting and provocative stuff).

Fascinating in its own right, but I keep coming back to it and thinking about what it has to say about the world we work in. Some thoughts:

  • Don’t let the scenery distract you from the action
  • Focus on a powerful story to lead your audience’s attention where you want it to go
  • Anchor your stories in people and their interaction

What do you think?

The problem of incentives in knowledge work

I’m struggling with the issue of incentives in organizations trying to promote improved knowledge management and more effective use of new collaboration tools such as blogs, wikis, and the like. Invariably, after an early spurt of activity and experimentation with the new systems, usage plateaus and talk turns to devising incentive systems to promote more participation. Behind the talk is the assumption that we can treat knowledge workers as rational economic actors and that the proper incentives will produce the desired behaviors.

The problem is the raft of research demonstrating that we are anything but rational economic actors. Spend any time digesting the insights in such work as Dan Ariely’s Predictably Irrational or Daniel Pink’s Drive: The Surprising Truth About What Motivates Us, to pick two recent examples, and you conclude that most organizational incentive systems are naively designed at best and actively harmful at worst. While carrots and sticks might be marginally useful if you need to crank out widgets or insurance claims, they are useless for work that requires significant creativity or discretion. Yet we keep devising simple reward systems and wondering why they fail.

The underlying issue is that designing incentives feels safer and easier than the hard managerial work of sitting down one-on-one with individuals and planning how to integrate these new tools into the day-to-day execution of knowledge work. As Tom Davenport put it so pithily in Thinking for a Living, the default managerial approach to knowledge workers is to "hire smart people and leave them alone." If the quality of knowledge work done by an organization is, in fact, a key differentiator in overall success, then this laissez-faire approach to managing knowledge work isn’t likely to be sustainable.

Behavioral complexities of knowledge work

There are actually two problems to be solved. The first is to get a handle on the behaviors that contribute to more effective knowledge work. The second is to understand what kinds of feedback will influence whether knowledge workers engage in the desired behaviors.

Consider the kinds of behaviors that you might see in an organization using its existing knowledge more effectively than average. Activities you might expect to see include:

  • Seeking out and finding experts elsewhere in the organization who can answer your questions
  • Experts in the organization making time to respond to questions they receive
  • Experts recognizing when repeated questions signal an opportunity for a new service or a deeper problem to address
  • Project teams experimenting with and adopting new practices such as After Action Reviews as part of their standard project plans
  • Individual knowledge workers revising their work practices to more easily find and incorporate previous work into new work

Multiplying examples would only reinforce the point that these behaviors are significantly more subtle and complex than those that find their way into typical incentive systems.

Rewarding something because it happens to be measurable isn’t going to help, even if that is the all too common response in organizations that have fallen victim to empty dictums like "you manage what you measure." You manage what you talk about. If that conversation can be boiled down to where the needle is pointing on one or two dials, then you live in a much simpler world than I do and I envy you.

In my world, there is a complicated and often mysterious relationship between what people do and what happens sometime later. You invest in getting to know the key people at a small software vendor. They get an email inquiry from a company interested in updating their approach to knowledge management that the software vendor forwards to you. You reply to the email, have a brief phone conversation, develop and submit a proposal over the weekend, and, three days later, land a substantial contract with someone you still haven’t met face-to-face. How do you map that into a performance measurement system?

Consider another example. A consulting firm is encouraging experts to submit their best work to a central document repository. Your call center expert responds and contributes an Excel spreadsheet used to analyze operating performance in an outbound call center. One of your smartest consultants (with an Ivy League Ph.D. in Applied Mathematics) grabs the spreadsheet for another call center project. Unfortunately, the Ph.D. mathematician doesn’t have time to discuss the document with the resident expert and proceeds to employ it incorrectly. Client damage control ensues. Is this a design flaw in the knowledge management system? A training problem? A developmental opportunity? Was it a staffing problem when our Ph.D. was originally assigned to the project? What measurement system would signal this problem before it occurred? What measurement system would reveal the problem after the fact?

Focus on better feedback systems instead of incentives

You certainly want feedback systems that provide a picture of how knowledge workers in your organization are interacting with the tools and information you make available to them. Better yet, these feedback systems ought to let you detect and deconstruct patterns of practice over time. What you can’t get is a manageably small set of measures that you can reliably link to performance. You can’t operate on autopilot.

Two approaches come to mind. Both assume that individual knowledge workers have primary responsibility for figuring out how they contribute to creating value for the organization. Secondary responsibility for coaching knowledge workers through this effort lies with their immediate supervisors.

The first approach is to look for successful patterns of use within the existing knowledge sharing system. Use After Action Reviews or other techniques to examine and evaluate how a particular knowledge sharing opportunity played out.

The second approach is to add some basic instrumentation to the knowledge sharing system. Make it simple to count things like blog posts made, comments made, documents contributed, documents consulted, and pointers shared. Use that data to distill and identify patterns of practice worth emulating. For example, some knowledge workers might be adding value by connecting and integrating materials in the system to create new knowledge. Others might be helping by weeding out obsolete information or adding important caveats. There won’t be a single pattern of successful usage that all should emulate; it is much more likely that there will be multiple patterns. The managerial task is to help knowledge workers identify the patterns they are most adept at, to help them refine those patterns over time, and to monitor the system as a whole to ensure a good balance among usage patterns.
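To make the instrumentation idea concrete, here is a minimal sketch in Python of what that counting and pattern-spotting might look like. Everything in it is hypothetical: the UsageEvent record, the action names, and the rules in label_pattern are illustrative stand-ins, not the API of any actual knowledge management system, and real patterns would be distilled from observation rather than hard-coded.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical event record: one row per action taken in the knowledge sharing system.
@dataclass
class UsageEvent:
    user: str
    action: str  # e.g. "post", "comment", "contribute_doc", "consult_doc", "share_pointer"

def summarize_usage(events: list[UsageEvent]) -> dict[str, Counter]:
    """Count actions per user -- the raw material for spotting patterns of practice."""
    summary: dict[str, Counter] = {}
    for event in events:
        summary.setdefault(event.user, Counter())[event.action] += 1
    return summary

def label_pattern(counts: Counter) -> str:
    """Illustrative, hand-tuned labels for a few plausible usage patterns.

    In practice these rules would come from observing real behavior,
    not from guesses written down in advance.
    """
    if counts["contribute_doc"] > counts["consult_doc"]:
        return "creator: adds more material than they draw on"
    if counts["comment"] + counts["share_pointer"] > counts["post"]:
        return "connector: annotates and links others' work"
    return "reader: primarily consumes existing material"

# Example: three workers using the same system in visibly different ways.
events = [
    UsageEvent("ana", "contribute_doc"), UsageEvent("ana", "contribute_doc"),
    UsageEvent("ana", "consult_doc"),
    UsageEvent("ben", "comment"), UsageEvent("ben", "share_pointer"),
    UsageEvent("cam", "consult_doc"), UsageEvent("cam", "consult_doc"),
]

for user, counts in summarize_usage(events).items():
    print(user, "->", label_pattern(counts))
```

The useful output is a per-person profile that a manager can sit down and discuss with each knowledge worker, not a single score to rank people by. That distinction is the difference between a feedback system and an incentive system.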

This is clearly a more complex and judgmental task than simply rewarding everyone for contributing more content. But it feels more suited to the actual complexities of doing and managing knowledge work in today’s environment.