The problem of incentives in knowledge work

Image: WFEE09 Knowledge Wall/Gallery (The Value Web Photo Gallery via Flickr)

I’m struggling with the issue of incentives in organizations trying to promote improved knowledge management and more effective use of new collaboration tools such as blogs, wikis, and the like. Invariably, after an early spurt of activity and experimentation with the new systems, usage plateaus and talk turns to devising incentive systems to promote more participation. Behind the talk is the assumption that we can treat knowledge workers as rational economic actors and that the proper incentives will produce the desired behaviors.

The problem is the raft of research demonstrating that we are anything but rational economic actors. Spend any time digesting the insights in works such as Dan Ariely’s Predictably Irrational or Daniel Pink’s Drive: The Surprising Truth About What Motivates Us, to pick two recent examples, and you conclude that most organizational incentive systems are naively designed at best and actively harmful at worst. While carrots and sticks might be marginally useful if you need to crank out widgets or insurance claims, they are little use for any work requiring significant creativity or discretion. Yet we keep trying to devise simple reward systems and wondering why they fail.

The underlying issue is that designing incentives feels safer and easier than the hard managerial work of sitting down one-on-one with individuals and planning how to integrate these new tools into the day-to-day execution of knowledge work tasks. As Tom Davenport put it so pithily in Thinking for a Living, the default managerial approach to knowledge workers is to “hire smart people and leave them alone.” If the quality of knowledge work done by an organization is, in fact, a key differentiator in overall success, then this laissez-faire approach to managing knowledge work isn’t likely to be sustainable.

Behavioral complexities of knowledge work

There are actually two problems to be solved. The first is to get a handle on the behaviors that contribute to more effective knowledge work. The second is to understand what kinds of feedback will influence whether knowledge workers engage in the desired behaviors.

Consider the kinds of behaviors you might find in an organization that uses its existing knowledge more effectively than average. You would expect to see activities such as:

  • Seeking out and finding experts elsewhere in the organization who can answer your questions
  • Experts in the organization making time to respond to questions they receive
  • Experts recognizing when repeated questions signal an opportunity for a new service or a deeper problem to address
  • Project teams experimenting with and adopting new practices such as After Action Reviews as part of their standard project plans
  • Individual knowledge workers revising their work practices to more easily find and incorporate previous work into new work

Multiplying examples would only reinforce the point that these behaviors are significantly more subtle and complex than those that find their way into typical incentive systems.

Rewarding something because it happens to be measurable isn’t going to help, even if that is the all-too-common response in organizations that have fallen hostage to the empty dictum that “you manage what you measure.” You manage what you talk about. If that conversation can be boiled down to where the needle is pointing on one or two dials, then you live in a much simpler world than I do and I envy you.

In my world, there is a complicated and often mysterious relationship between what people do and what happens sometime later. You invest in getting to know the key people at a small software vendor. The vendor gets an email inquiry from a company interested in updating its approach to knowledge management and forwards it to you. You reply to the email, have a brief phone conversation, develop and submit a proposal over the weekend, and, three days later, land a substantial contract with someone you still haven’t met face-to-face. How do you map that into a performance measurement system?

Consider another example. A consulting firm is encouraging experts to submit their best work to a central document repository. Your call center expert responds and contributes an Excel spreadsheet used to analyze operating performance in an outbound call center. One of your smartest consultants (with an Ivy League Ph.D. in Applied Mathematics) grabs the spreadsheet for another call center project. Unfortunately, the Ph.D. mathematician doesn’t have time to discuss the document with the resident expert and proceeds to employ it incorrectly. Client damage control ensues. Is this a design flaw in the knowledge management system? A training problem? A developmental opportunity? Was it a staffing problem when our Ph.D. was originally assigned to the project? What measurement system would signal this problem before it occurred? What measurement system would reveal the problem after the fact?

Focus on better feedback systems instead of incentives

You certainly want feedback systems that provide a picture of how knowledge workers in your organization are interacting with the tools and information you make available to them. Better yet, these feedback systems ought to let you detect and deconstruct patterns of practice over time. What you can’t get is a manageably small set of measures that you can reliably link to performance. You can’t operate on autopilot.

Two approaches come to mind. Both assume that individual knowledge workers have primary responsibility for figuring out how they contribute to creating value for the organization. Secondary responsibility for coaching knowledge workers through this effort lies with their immediate supervisors.

The first approach is to look for successful patterns of use within the existing knowledge sharing system. Use After Action Reviews or other techniques to examine and evaluate how a particular knowledge sharing opportunity played out.

The second approach is to add some basic instrumentation to the knowledge sharing system. Make it simple to count things like blog posts made, comments made, documents contributed, documents consulted, and pointers shared. Use that data to distill and identify patterns of practice worth emulating. For example, some knowledge workers might be adding value by connecting and integrating materials in the system to create new knowledge. Others might be helping by weeding out obsolete information or adding important caveats. There won’t be a single pattern of successful usage that all should emulate; it is much more likely that there will be multiple patterns. The managerial task is to help knowledge workers identify the patterns they are most adept at, to help them refine those patterns over time, and to monitor the system as a whole to ensure a good balance among usage patterns.
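To make the instrumentation idea concrete, here is a minimal sketch in Python, assuming the knowledge sharing system can export a simple activity log of (worker, action) events. The event names, the pattern labels, and the crude "most frequent action" rule are all hypothetical illustrations, not a prescription; a real system would need richer events and human judgment to decide which patterns are actually worth emulating.

from collections import Counter, defaultdict

# Hypothetical activity log exported from the knowledge sharing system.
# Each entry is (worker, action); the action names are illustrative only.
activity_log = [
    ("alice", "post"), ("alice", "comment"), ("alice", "link_shared"),
    ("bob", "document_contributed"), ("bob", "document_edited"),
    ("carol", "document_consulted"), ("carol", "comment"),
    ("carol", "comment"), ("carol", "link_shared"),
]

def summarize_usage(log):
    """Count each worker's actions by type."""
    per_worker = defaultdict(Counter)
    for worker, action in log:
        per_worker[worker][action] += 1
    return per_worker

def dominant_pattern(counts):
    """Very rough label based on a worker's most frequent action type."""
    action, _ = counts.most_common(1)[0]
    labels = {
        "post": "author",
        "comment": "commenter",
        "link_shared": "connector",
        "document_contributed": "contributor",
        "document_edited": "curator",
        "document_consulted": "reader",
    }
    return labels.get(action, "other")

if __name__ == "__main__":
    for worker, counts in summarize_usage(activity_log).items():
        print(worker, dict(counts), "->", dominant_pattern(counts))

The point of even this toy version is that the counts are a starting place for conversation, not a scoreboard: the labels surface different ways of contributing rather than ranking people on a single metric.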

This is clearly a more complex and judgmental task than simply rewarding everyone for contributing more content. But it feels more suited to the actual complexities of doing and managing knowledge work in today’s environment.

3 thoughts on “The problem of incentives in knowledge work”

  1. Your point about feedback is important. I agree.

    But I also think employers will fail to get the best from knowledge workers until they get to share in some of the riches they create. Stock options may not be the best incentive, but linking bonuses to overall company or department performance can make sense. This aggregates all the small improvements into one big metric.

  2. You raise an important point about ensuring that the feedback and rewards for good knowledge work reflect long-term perspectives and broader issues than the immediate surroundings of the knowledge worker.
