Free and cheap technology is killing organizational effectiveness

Technologies supporting knowledge work are deceptive, especially for knowledge work shared among groups and teams. The ease of getting started obscures the challenge of learning to be effective. We focus on the details of particular features and functions at the expense of the cognitive challenges of deep thought and collaborative work.

I’ve been participating in a Slack team with a loose group of colleagues scattered across two continents. I was off the grid for about two weeks and found myself lost when I returned to the conversation that had continued in my absence. My first hypothesis was that Slack was the culprit and that some magically better UX would eliminate the problem. Slack, in and of itself, isn’t the problem, but it is emblematic of a deeper issue that should be tackled.

In trades and crafts, the most experienced and effective practitioners would never invest in cheap tools or materials. Learning to use those tools and materials effectively is the work of years of deliberate practice. The strategy shouldn’t be any different if you are manipulating ideas than if you were manipulating clay. But the marketing and deployment of software rejects these hard-won lessons. Software fame and fortune is built on promises of simplicity and ease of use, where ease of use has been interpreted as ease of getting started and becoming minimally productive. We’ve all become facile with learning the first 5% of new tools and services. We’ve been led to believe, or we pretend, that this is enough. Few among us are prepared to invest in pushing further. Fewer still belong to organizations willing to support this investment.

The payoff from even this 5% has long been sufficient in terms of personal and organizational impact. But we’re reaching the limits of the return from this minimalist strategy, and those limits are even more acute when we shift focus from individual knowledge workers to teams and groups.

To go beyond the 5% we need to modify our expectations and approaches about how we blend powerful tools with powerful practices. We need to adopt the attitudes of those who think in terms of craft and expert practice. Organizationally, we need to provide the time, space, and support to design and invent this new craft.

My hypothesis is that there are models to look to and borrow from. In particular, I believe that the world of software development has the longest and richest experience of dealing with the individual and group production of the thought products of the knowledge economy. Further, there are individual expert knowledge work craftspeople in various other fields; their tools and practices are also worth understanding and reverse engineering.

I don’t have this all figured out yet. Nonetheless, I’d like to get a new conversation going about how to improve on this train of thought. Where are good places to look?

Repeatable Processes and Magic Boxes

There is a trap hidden in most efforts to create repeatable processes and systems that try to guarantee predictable results in doing knowledge work. To avoid that trap, you must learn to recognize and manage the magic box.

The promise of repeatable processes is to identify, design, and sequence all of the activities that go into producing a specified output. It is the core of industrial logic. For industrial logic to turn out uniform results with predictable quality, craft must be transformed into proven systems and repeatable processes. Variation must be rooted out and eliminated. To do this, all of the design thinking necessary to produce the product must be extracted from the process and completed before any further work can be done. You get the prototype product done right and then you churn out a million replicas. For this strategy to work, all of the magic must take place before the first process step ever occurs.

Knowledge work cannot be forced into an industrial straitjacket. The essence of knowledge work is to produce a bespoke result. I have no interest in the strategy McKinsey developed for its last client; I want the strategy that applies to my unique situation. I do not need the accounting system tailored to GM’s business; I need an accounting system matched to my organization. And therein lies the rub. I may want a bespoke accounting system, but I’m not willing to pay for it. Forty years ago, in fact, I had to pay for one anyway because off-the-shelf accounting software didn’t exist. All software development was custom, as was all consulting work.

Consultants and software developers are not stupid people. They could see that, despite the necessity of producing a unique product at the end, much of their work had elements of the routine and predictable. To increase the quality of their work, to train new staff, to improve their economics, and to better market their cumulative experience, knowledge work organizations worked to transform their practices into repeatable processes and methods. New industries were created as software developers redesigned their code to segregate what needed to be customized from what constituted a common core.

In this effort to apply industrial logic to what was fundamentally creative work, most organizations were sloppy or short-sighted about managing what was a fundamental tension between industry and craft. What was big, and shiny, and marketable was the packaging of cumulative experience into a consulting methodology, or a software development process, or a customizable software product.

What could not be eliminated, however, was the essential craft work necessary to employ the methodology or to customize the product. What happened was that this craft work was pushed into a box somewhere on the process map or into a line item in the standard workplan.

This is the “magic box.” Whatever its name or label, it is the step where the necessary creative work takes place. This is work that cannot be done until the moment arrives.

Why does it matter to identify which boxes in the process require magic? Because they determine the quality of the final result. The other boxes only matter to the extent that they set you up to succeed in the magic box.

How do you recognize that you are dealing with a magic box? What are the clues that differentiate it from all the other boxes? Sometimes you must approach this by process of elimination; many boxes—“conduct field interviews”, for example—are more easily identified as not possibly containing magic. As for positive identifying features, look for language that suggests design thinking steps or analysis that isn’t tightly specified. “Develop market segmentation approach” or “design chart of accounts” are examples of possible magic boxes.

Understanding which boxes are which in a process is essential to managing the process effectively. Regular boxes can be estimated and managed more tightly than magic boxes. Data collection, for example, is usually straightforward; analyzing it requires more flexibility and adaptability. Collapsing these related, but distinct, activities into a single step would be a poor project management decision. Just because creative work cannot be controlled in the same predictable way as industrial work does not relieve managers of their responsibility to make effective use of limited resources. Clearly isolating the magic boxes from the ordinary ones is essential to making those resource deployment decisions.

Repeatable processes are often marketed and sold as “proven approaches” that eliminate the trial and error that the uninitiated risk if they strike out on their own. This has enough truth to be dangerous. Traveling in new terrain is safer with an experienced guide. The guide may help you get to where the underlying geology is promising, but cannot guarantee that you will strike gold. Honest guides will emphasize the distinction. But it is a distinction that is only meaningful to those who can hear it.

Trial and error is an unavoidable feature of creation. A serendipitous error is often the seed of a creative solution. Understanding where the magic needs to occur helps you distinguish between unproductive and potentially productive error. Unproductive error should be carefully reined in and treated as an opportunity for learning and process improvement. Potentially productive error must be permitted and encouraged.


Project management for the rest of us

We operate in a world of projects, yet few of us are trained in how to think about or manage them. Management education focuses on designing for the routine and predictable. Today’s environment is neither. Projects remain foreign to the bulk of managers in organizations who are accustomed to running ongoing operations. What differentiates success from failure in projects bears little resemblance to what drives success in operations.

Although projects are ubiquitous, project management professionals build their reputations by dealing with the largest and most complex efforts. Lost in this quest to push back the edges of project management is the need to equip mainstream managers in organizations to operate in a project-based world. While expert project managers think about work breakdown structures, scope creep, critical-path mapping, and earned value analysis, the rest of us would like some help learning and understanding the essential 20% of project planning and management that applies to projects of any scale. How do we become reasonably competent amateur project managers?

The end is where to begin

Until you understand what the end result needs to look like, you have no basis for mapping the effort it will take to create it. Imagine what you need to deliver in reasonable detail, however, and you can work backwards to the sequence of tasks that will bring it into being. How clearly you can visualize the desired end product, in fact, sets your horizon. A clear picture of the end result supports a clear plan of the entire path to that result. A fuzzy picture only allows you to move far enough to generate a sharper picture.

The trick is to visualize the end result as a concrete deliverable that you can hand over. Perhaps it is a slip of paper saying “the answer is 42.” Perhaps it is a working software application, or a marketing strategy. Thinking of it in concrete terms helps in two ways. First, it forces you to be clear about who is to receive this deliverable. If you can’t be clear about who the intended recipient is, you can’t be clear on design, structure, or format of the deliverable. Second, a concrete picture of a deliverable makes it easier to imagine a conversation between you and your audience. The more fully you can imagine that conversation, the easier it will be to imagine the path to creating it. If you can’t visualize a deliverable, you can’t specify the path to create it.

Identifying and connecting the dots

Peter Drucker separates knowledge workers from production workers by noting that the first responsibility of a knowledge worker is to ask “what is the task?” This responsibility flows from the need to define deliverables. In production work, there is no need to define deliverables; they are baked into the design of all repeatable processes. In knowledge work, nothing can happen until a deliverable is specified; understanding the shape of the deliverable defines the shape of the task.

If you think I am being too clever by half, consider the following thought experiment. In a production process, I am quite happy to take whatever output of the process rolls off the line next—one BMW 730i had better be indistinguishable from the next. On the other hand, if I am in the market for a new strategy, I will not accept a copy of the last strategy report McKinsey turned out.

Again, Drucker had things figured out before the rest of us. Knowledge work depends on creating and delivering answers that are unique to the situation at hand (see, for example, Balancing Uniqueness and Uniformity in Knowledge Work).

How do I break down the single task of “produce the necessary unique outcome” into a sequence of manageable tasks that I can string together? How do I specify the dots and how do I thread them together into a path that will get me to my destination? For starters, I had better have some meaningful knowledge of the problem domain. Assuming that knowledge base, there are several heuristics for using that knowledge to specify and sequence appropriate tasks. Keep the following phrases in mind as you take your understanding of the deliverables and the domain and translate that into a possible plan.

  • Break things into small chunks (inch pebbles are easier to manage than milestones)
  • Do first things first
  • Ask what comes next
  • Group like things together
  • Errors and rework are essential to creative work

If you can see how to get from A to B in a single step, you’re likely looking at a potential task. “Small chunks” is a reminder that the only way to eat an elephant is in small bites. Somewhere between a day and a week’s worth of work for an individual is one useful marker of tasks that belong in a project plan. If you can break your deliverables into components, the components may constitute project tasks. Drafting a section of a report or analyzing one element of a budget are the kinds of chunks that make reasonable tasks on a project list.

An initial list of chunks or potential tasks forms the input to the next three heuristics: do first things first, ask what comes next, group like things together. There is a back and forth interaction among these three that is much of the art of good project planning. The art lies in being clever and insightful about sequencing and clustering activities. Suppose the chunk you are considering is “analyze the Midwest sales region.” What happens next? Do we combine that analysis with the outputs from analyzing other sales regions? Do we know how many sales regions exist? Have we included a step to learn how many regions exist? How about a step to get our hands on the pertinent data? Do we have the knowledge and skills to get the data? Is there someone we need to talk to in order to make sense of the incoming data? Is that a big enough question to warrant its own task in the project plan? Raising and answering these questions calls for a good mix of domain knowledge and project insight.
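These sequencing questions amount to building a dependency graph over the chunks and then ordering it. As a toy sketch, assuming Python 3.9+ (for the standard library’s `graphlib`) and with task names and dependencies invented from the sales-region example above:

```python
# Toy sketch: represent chunks and their prerequisites, then derive a
# workable sequence. Task names and dependencies are invented for
# illustration; real plans need domain knowledge to fill them in.
from graphlib import TopologicalSorter

# Each chunk maps to the set of chunks that must finish before it starts.
tasks = {
    "list the sales regions": set(),
    "obtain regional sales data": {"list the sales regions"},
    "analyze the Midwest sales region": {"obtain regional sales data"},
    "analyze the other sales regions": {"obtain regional sales data"},
    "combine the regional analyses": {
        "analyze the Midwest sales region",
        "analyze the other sales regions",
    },
}

# static_order() yields the chunks in an order that respects every
# prerequisite: "do first things first" and "ask what comes next".
order = list(TopologicalSorter(tasks).static_order())
```

Grouping like things together then becomes a matter of clustering adjacent chunks that share data, skills, or people; no tool can do that part of the thinking for you.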

All project managers learn that errors and rework are an essential element of creative work. For routine production work, the goal is to eliminate errors and rework. For project work, the goal is to know that they are inevitable and build time and tasks into the plan to deal with them when they occur. Dwight Eisenhower captures the essence of this point in his observation that “plans are useless, planning is everything.” As the Supreme Commander of the Allied Forces in World War II responsible for planning and leading the D-Day invasion, his words carry weight.

Essential tools

The growing list of chunks can rapidly become overwhelming. There is a substantial industry of vendors and tools promising to bring this complexity under control. As helpful, and possibly necessary, as these tools might be during the execution phase, they are more hindrance than help during planning. Two simple tools—a messy outline and a calendar—help you navigate the planning step.

An outline captures the essential need to order and cluster tasks. It offers enough structure over a simple to-do list to add value without getting lost in the intricacies of a more complex software tool. It helps you discover similar tasks, deliverables, or resources that can be grouped together in your plans. It can highlight where preceding or subsequent tasks might be missing. Throughout the iterative process of developing and refining the task outline, a calendar keeps you tuned both to external time constraints and internal deadlines.

There are many good software tools available for working with outlines. If you’re going to be doing project planning on a regular basis—and you will be—it’s well worth adding one to your software toolkit. If you’re so inclined, you might also take advantage of mindmapping software, which typically has an outlining mode. You can use a spreadsheet program in a pinch, but spreadsheets are not as well suited to the dynamic demands of thinking through alternate approaches to a project. Here’s an example of a project plan built using an outlining tool. It has everything you need to plan 80% of the projects you will encounter.

At the outset, we’re focused on the value of simply thinking through what needs to be done in what order before leaping to the first task that appears. That’s why an outline is a more useful tool at this point than Microsoft Project or any other full-bore project management software. For projects of sufficient scale and complexity, more powerful tools can be necessary. For most projects, simple tools are all that you will need. For those that ultimately need full-featured project management software, these same simple tools are often a better place to start.

All knowledge work is project work

Drucker’s observation that knowledge work begins with defining the task implies that knowledge work is essentially project work, the economic engine of today and tomorrow. Yet projects remain foreign to most managers in organizations who are accustomed to running ongoing operations. What separates success from failure in projects bears little resemblance to what drives success in operations.

Why bother increasing project management capabilities at the base instead of the leading edge? All of us must develop a basic level of knowledge and skill in planning and leading projects if we wish to be competent leaders in today’s organizations. Project management is not only a job for professionals. As knowledge workers, we’re all called on to participate in project planning, and often we must lead projects without benefit of formal education in project management.

Staying in the question – shifting from problem identification to framing

“Don’t come to me with a problem, unless you also have a solution.” I got a version of that advice early in my career and I’ve dispensed it as well. It’s become bad advice.

The good part of the advice is that simply pointing at a problem isn’t terribly helpful. The trickier part is that seeing an apparent problem going unaddressed can’t be taken as evidence that everyone around you is stupid. More likely than not, it’s a clue that obvious solutions won’t work for reasons that you are too ignorant or inexperienced to grasp. The unstated premise is that experience consists of developing a repertoire of problem recognition patterns and corresponding solutions.

As our inventory of known problems and matching solutions grows, the remaining problems are more complex and demand a matching level of complexity in their solutions. This is the realm of design thinking. This rising complexity also changes the problem-solving process: what was a process of problem identification and solution selection becomes a process of problem framing and solution design.

Problem framing is qualitatively different from problem identification. Identification is a diagnostic process; a collection of presenting symptoms points to a finite set of possible diagnoses listed in order of likelihood. Framing is a more exploratory and interactive process; powerful questions turn into experimental probes to discover the boundaries of the problem space and the efficacy of possible interventions.

At the heart of this framing process is the patience to “stay in the question” long enough to map the boundaries and to calibrate the power and precision of interventions. Insisting on rapid problem identification and solution selection limits the opportunities to discover and design breakthrough innovations.

The hardest aspect of “staying in the question” is managing the pressure to get on with it; the bias for action that characterizes the best organizations. Taking time out to think is not typically valued or rewarded. Analysis paralysis is a real risk. Staying in the question is not about the depth or precision of answers; it is about asking better questions to illuminate choice points, mark out new directions, and identify more options.

Review – Only Humans Need Apply


Only Humans Need Apply: Winners and Losers in the Age of Smart Machines
Thomas H. Davenport, Julia Kirby

In his most recent book, Tom Davenport, along with co-author Julia Kirby, provides an excellent entry point and framework for understanding the evolving relationship between smart people and smart machines. There’s a great deal of hand-wringing over technology encroaching on jobs of all sorts. This is hand-wringing that arises with every new technology innovation stretching back long before the days of Ned Ludd. Davenport and Kirby avoid the hand-wringing and take a close look at how today’s technologies—artificial intelligence, machine learning, etc.—are changing the way jobs are designed and structured.

They articulate their goal as

“to persuade you, our knowledge worker reader, that you remain in charge of your destiny. You should be feeling a sense of agency and making decisions for yourself as to how you will deal with advancing automation.”

In large part, they succeed. They do so by digging into a series of case histories of how specific jobs are re-partitioned, task by task, between human and machine. It’s this dive into the task-level detail that allows them to tell a more interesting and more nuanced story than the simplistic “robots are coming for our jobs” version that populates too many articles and blog posts.

Central to this analysis is the distinction between automation and augmentation, which they explain as

“Augmentation means starting with what minds and machines do individually today and figuring out how that work could be deepened rather than diminished by a collaboration between the two. The intent is never to have less work for those expensive, high-maintenance humans. It is always to allow them to do more valuable work.”

They give appropriate acknowledgement to Doug Engelbart’s work, although the nerd in me would have preferred a deeper dive. They know their audience, however, and offer a more approachable and actionable framework. They frame their analysis and recommendations in terms of the alternate approaches that we as knowledge workers can adopt to negotiate effective partnerships between ourselves and the machines around us. The catalog of approaches consists of:

  • Stepping Up—for a big picture perspective and role
  • Stepping Aside—to non-decision-oriented, people-centric work
  • Stepping In—to partnership with machines to monitor and improve the decision making
  • Stepping Narrowly—into specialty work where automation isn’t economic
  • Stepping Forward—to join the systems design and building work itself

Perhaps a little cute for my tastes, but it does nicely articulate the range of possibilities.

There’s a lot of rich material, rich analysis, and rich insight in this book. Well worth the time and worth revisiting.

Lowering the costs of context switching

Context switching is expensive yet inevitable in our multi-tasking world. If you are a knowledge worker, lowering the costs of context switching may be one of the highest payoff investments you can make. How should we go about thinking about the problem of context switching to reduce those costs?

Switching contexts vs. switching tasks

A context switch is bigger than a task switch. Moving from answering one email to answering another is a task switch within a single context. Switching between answering email and drafting a client presentation or facilitating a planning meeting constitutes a context switch. Basic productivity advice encourages you to group like tasks precisely to avoid unnecessary context switches.

How can we organize our mental models, intellectual scaffolding, and the supporting environment to shorten the time it takes to shift gears from one context to the next? How do we reduce the time and effort it takes to get back in the zone and focus effectively on the task at hand?

It’s helpful to break the context switch process into three stages. For any one context there is a setup stage of getting all the elements for performing one class of tasks up and running. Second, when switching from one context to another, there is a teardown stage of clearing out all the elements of the first context to make room for the next. Finally, it helps to think of the space between two contexts as a third stage where we can do things to simplify and streamline the switch from Context A to Context B.
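The three-stage shape can be made concrete with a minimal sketch; the stage names and the example contexts here are purely illustrative:

```python
# Toy model of a context switch: tear down the old context, pause in the
# space between contexts, then set up the new one. A real teardown and
# setup would close and open documents, applications, and physical cues.
def switch_context(old, new, log):
    log.append(f"teardown: {old}")  # clear out the elements of the first context
    log.append("pause")             # the deliberate space between contexts
    log.append(f"setup: {new}")     # stand up the elements of the next context
    return log

steps = switch_context("email", "writing", [])
```

The point of the sketch is the ordering: the pause is a first-class step, not dead time to be optimized away.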

The first step is to make a particular context as standard, distinctive, and evocative as possible. We want to set up a context to trigger the mental state we want to achieve. This is why writers often have a separate space dedicated to writing tasks. Research materials on the left, reference books on the shelf to the right, coffee mug in its familiar place, and word processor open to where you left off the last time. Identify all the cues that evoke the mental state you seek and arrange them in their proper place. Other examples of physical contexts that prepare you mentally for the work at hand include a cook’s kitchen or a craftsperson’s workshop.

Setting up a digital context

Increasingly, our contexts are largely or exclusively digital. Taking some cues from physical contexts we can call to mind, we might think of setting up our computer for a writing or a programming session in terms of opening a collection of applications and documents arrayed across our monitor(s) in exactly the same way every time we write or code. Few of us put that much thought into this setup process, but cognitive science suggests that we have something to gain if we do.

If you are a programmer, this is part of the logic behind encouraging the use of Integrated Development Environments (IDEs); all of the tools and data you need for a coding session are available at once and each is in the same location on your physical screen. Over time, this means that the pattern of screens, data, and tools in front of your eyes maps directly to a comparable array in your mind’s eye. Using the physical patterns to evoke and trigger the mental patterns speeds your transition into programming mode.
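Outside an IDE, you can approximate the same discipline by scripting the setup stage. A hedged sketch follows; the application names, file paths, and the macOS-style `open -a` launch command are all hypothetical placeholders you would replace with your own:

```python
# Toy sketch: a digital context as a fixed, ordered list of (tool, document)
# pairs, so every session opens the same things in the same arrangement.
WRITING_CONTEXT = [
    ("Editor", "drafts/current-post.md"),
    ("Browser", "research/notes.html"),
    ("Outliner", "plans/blog-plan.opml"),
]

def launch_commands(context):
    """Return the shell commands that would recreate this context, in order."""
    return [f"open -a {app} {doc}" for app, doc in context]

commands = launch_commands(WRITING_CONTEXT)
```

Running the same script at the start of every writing session is the digital equivalent of the writer’s room with everything in its familiar place.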

By way of contrast, consider how else you might begin to work on a programming task. First, you open your favorite text editor, close whatever document might be left over from the last programming session, open today’s code module and begin to read. Launch a test machine and run the code until the code encounters an error. Then, load a debugger to examine the errant code. Then, open a document containing the program spec. Possibly, fire up a browser—or switch to a browser that is already open and pointing to a random site; search for a discussion thread relevant to the error code you are looking at.

Which scenario feels likely to be more productive and effective? Which is more common in your experience?

Breaking down a context

Whether analog or digital, switching from one context to another starts, ideally, by wiping the slate clean. In the analog world, leaving a context behind can be as simple as leaving a room and closing the door.

The digital world is a bit trickier. Leaving random elements of the preceding context cluttering the landscape means your mental landscape starts out comparably cluttered. Better to break down a digital context by closing all open documents and shutting down open applications.

The space between contexts

Switching contexts in the analog world might entail a short walk from an office to a conference room. Although brief, that physical transition serves an important function of easing the necessary shift of mental gears.

Shifting digital contexts can occur more or less instantly with a handful of keystrokes; that may not be a good idea. You want to give your mind time to complete its shift as well. This is one of the benefits of the Pomodoro technique, which deliberately builds in breaks between mental sprints. The breaks are the ideal spot to locate context shifts in your work.

There are some other things you can do to ease shifting from one digital context to another. For example, to the extent you can control it, order context switches such that adjacent contexts are as distinct as possible. For example, switch from working on a spreadsheet analysis to writing a report rather than switch between two reports. Better yet, switch between writing a report and debugging computer code if your collection of tasks is sufficiently broad.

Make different contexts as visually distinct as possible to engage more of the senses in making the switch. If you are using the Pomodoro technique, contemplate clearing your monitors to some standard, neutral display as a resting step between adjacent contexts.

Context switching and team work

We’ve been focused on context switching as an individual cognitive task. Most of us do large portions of our work in team settings. How does context switching apply to team environments? On the one hand, moving between teams and team venues probably provides more than sufficient triggers to switch smoothly. On the other hand, the proliferation of virtual teams and the all too common experience of days of one conference call after another may be making the context switching problem worse. Can we extend our thinking about context switching to the team and virtual team level or is that simply a bridge too far?

Technology in the Classroom

There’s a nice piece over at Studypool talking to a dozen “experts” about effective use of technology in the classroom. I put experts in quotes mostly because I was deemed one of those experts. Kidding aside, the end result is a nice overview of productive ways to think about incorporating technology into teaching and learning environments. Better yet, it offers pointers to a diverse group of folks thinking about this problem.

This advice is more broadly relevant when you consider that, as knowledge workers, we are all tasked with learning on an ongoing basis. From that perspective, we need to have more effective strategies to incorporate technology into our learning whether the classroom is in an ivy-covered hall, an office conference room, or somewhere in cyberspace. None of us have the luxury of working in stable environments. We all must operate on the assumption that we need to assimilate new ideas and techniques into our work practices and do so on technology platforms that are also evolving.

The costs of context switching

Multi-tasking doesn’t work, but our lives demand it anyway. This leaves us with the problem of how to compensate for the productivity and quality losses generated by work environments that demand parallel processing our brains can’t handle.

Why can’t our brains multi-task and what happens when we try? Left brain/right brain discussions aside, we only have one brain and that brain is single-threaded; it’s built to work on one cognitive problem at a time. Most of us can manage to walk and chew gum at the same time, but we can’t read the paper and discuss changes in the day’s schedule with our spouse simultaneously.

The bottleneck is attention. When we pretend to multi-task, what we are doing is cycling focus among the tasks competing for our attention. Each time we switch focus, we have to re-establish where we were in our work when we left off before we can begin moving forward. We also have to set aside the work we were doing along with whatever supporting materials we were using.

This process of redirecting focus is a context switch. Context switching is expensive because complex tasks—writing a blog post, debugging code, analyzing sales data—depend on equally complex mental scaffolding. When writing a blog post, for example, that scaffolding can include notes on the points to be made, memory of relevant previous posts, ideas about upcoming blog posts, links and open browser tabs to supporting research, and so on. That scaffolding might be spread across multiple computer screens and program windows. It might also include handwritten notes or paper copies of relevant supporting articles. All of that supporting scaffolding, along with the current draft of the blog post, helps you build up the mental structures that eventually lead to a finished draft of your post.

Suppose now that I need to put aside the blog post in progress to take an incoming phone call from my boss. It’s a call about a proposal we are putting together for a client. It might be just a simple call to confirm a detail in the proposal document, or it might be a more complex discussion about whether to rethink and reorganize the entire proposal. Regardless, I need to set aside the work on the blog post and flush my mind of all the details. I then need to call to mind the salient details of the client and the draft proposal as the call unfolds. In the first moments of this call, I’m not likely to be terribly articulate or smart. As the call progresses, I may need to call up various supporting materials and gradually fill in an entirely new context to contribute to the conversation.

Switching tasks means that you also have to break down one context and stand up a new one before you can begin any meaningful work. When the call is complete, you need to reverse the process to resume work on your blog post. Will you recall the insight that was just coming into focus when you were interrupted by that call from your boss? Or is it lost forever?

How expensive is this context switch? Research from the world of software development (see Jeff Sutherland’s Scrum: The Art of Doing Twice the Work in Half the Time) suggests that switching between two projects can result in productivity losses of 20%. Add a third project to your list and the costs rise to 40%. This means that each project gets no more than 20% of your attention and focus. Is it any wonder then that professionals work the hours that they do?
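The arithmetic behind those figures can be sketched in a few lines. The loss percentages below are the estimates cited from Sutherland, not a derived formula; the sketch simply divides whatever attention survives the switching losses evenly across projects:

```python
# Back-of-the-envelope sketch of the context-switching figures cited above.
# The loss fractions come from the source text; dividing the remainder
# across projects is simple arithmetic, not a model of cognition.

SWITCHING_LOSS = {1: 0.00, 2: 0.20, 3: 0.40}  # fraction of time lost to switching

def attention_per_project(n_projects):
    """Share of total working attention each project actually receives."""
    remaining = 1.0 - SWITCHING_LOSS[n_projects]
    return remaining / n_projects

for n in sorted(SWITCHING_LOSS):
    print(f"{n} project(s): {attention_per_project(n):.0%} each")
```

With two projects each gets 40% of your attention; with three, only 20%, matching the figure in the text.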

Step one in solving any problem is recognizing it. Limiting the number of projects you are working on and carving out big blocks of time to focus exclusively on each project helps. This is the core advice of most time management gurus. Few of us, however, have that much control over our responsibilities. A more attractive target then is to think about ways to lower the costs of context switching. We’ll come back to that in the next post.

Review – The Art of Procrastination: A Guide to Effective Dawdling, Lollygagging and Postponing

The Art of Procrastination: A Guide to Effective Dawdling, Lollygagging and Postponing
John Perry

I began my formal relationship with the notion of procrastination at the age of 12. I was in my room working on a model plane instead of my homework. My dad came in and asked me to look up the word “procrastination”; without lifting my head or skipping a beat, I answered “sure, I’ll do it later.” The model parts were pushed aside and the dictionary landed on my desk. I was invited to read the definition aloud.

I read this slim volume in 2013, which confirms that I am part of its target audience. Written by Stanford philosopher John Perry, The Art of Procrastination is largely tongue-in-cheek but contains some of the most useful advice for living with procrastination that I’ve encountered since my abrupt introduction in my youth. Perry recommends a basic coping strategy that accepts the reality of putting things off and leverages a small bit of self-deception to actually manage to get things done. He labels his strategy “structured procrastination” and argues that procrastinators “can be motivated to do difficult, timely, and important tasks… as long as these tasks are a way of not doing something more important.”

This will seem counter-intuitive to those who don’t routinely procrastinate. For those of us who do, however, this finally makes sense out of how we’ve ever managed to get anything done.

Perry offers other useful insights as well. For example, he differentiates between horizontal and vertical organizers: those whose work needs to be spread out where they can see it versus those who can effectively file things away in cabinets when they aren’t pertinent to the task at hand. He also offers an intriguing take on perfectionism and procrastination. The impact of perfectionist thinking isn’t in soaking up time in endless cycles of tweaks and revisions. The impact comes from fantasies of producing the perfect deliverable, which prevent the procrastinator from starting the prosaic work of turning out a serviceable product. That was an insight that hit a little too close to home.

If Perry’s notion of structured procrastination rings true for you, add this slim volume to your pile and read it as a way to avoid working on some other pressing task.

Making knowledge work visible

Invisibility is an accidental and troublesome characteristic of knowledge work in a digital world. What makes it invisible? Why does it matter? What can you do about it?

How did knowledge work become invisible?

As a knowledge worker, I get paid for what happens inside my head, but not until I get the work outside where it can be seen. Before the advent of a more or less ubiquitous digital environment, that head work generated multiple markers and visible manifestations. There were handwritten notes from interviews. A presentation might start with rough mockups of slides scribbled on a pad of paper. Flip charts would document the outcomes of a group brainstorming session. A consulting report would start as an outline on a legal pad that would be rearranged by literally cutting and pasting the paper into a new order and organization. Computer code started as forms to be filled out and forwarded to a separate department to transcribe onto punch cards.

No one would want to return to that world of knowledge work.

Digital tools—text editors, word processors, spreadsheets, presentation software, email—have eliminated multiple manual, error-prone steps. They’ve made many low-value roles obsolete—sometimes by unintentionally handing their tasks back to high-cost knowledge workers.

These same tools also reduce the physical variety of knowledge work to a deceptively uniform collection of keystrokes stored as bits in digital files hiding behind obscure file names and equally uninformative icons. A laptop screen offers few clues about the knowledge work process compared to an office full of papers and books. A file directory listing appears pretty thin in terms of useful knowledge content compared to rows of books on shelves.

Why does the visibility of knowledge work matter?

If you can’t see it, you can’t manage or improve it. This is true as an individual knowledge worker and as a team or organization.

Noticing that digital work is invisible reminds us of the benefits of analog work that weren’t obvious. Among those non-obvious benefits:

  • Different physical representations (handwritten notes, typed drafts, 35mm slides) establish how baked a particular idea is
  • Multiple stacks of work in progress make it easier to gauge progress and see connections between disparate elements of work
  • Physically shared work spaces support incidental social interactions that enrich deliverables and contribute to the learning and development of multiple individuals connected to the effort

Consider how developing a presentation has changed over time. Before the advent of PowerPoint, presentations began with a pad of paper and a pencil. The team might rough out a set of potential slides huddled around a table in a conference room. Simply by looking at the roughed-out set of slides you knew that it was a draft; erasures, cross outs, and arrows made that more obvious.

A junior-level staffer was then dispatched with the draft to the graphics department, where they were chastised for how little lead time they had provided. A commercial artist tackled the incomprehensible draft, spending several days hand-lettering text and building the graphs and charts.

The completed draft was returned from the graphics department, starting an iterative process of correcting and amending the presentation. The team might discover a hidden and more compelling story line by rearranging slides on a table or on the conference room wall or floor. Copies were circulated and marked up by the team and various higher-ups. Eventually, the client got to see it and you hoped you’d gotten things right.

The work was visible throughout this old-style process. That visibility was a simple side effect of the work’s physicality. Contributors could assess their inputs in context. Junior staff could observe the process and witness the product’s evolution. Knowledge sharing was simultaneously a free and valuable side effect of processes that were naturally visible.

Putting knowledge work on the radar screen

The serendipitous benefits of doing knowledge work physically now must be explicitly considered and designed for when knowledge work becomes digital. The obvious productivity benefits of digital tools can obscure a variety of process losses. As individuals, teams, and organizations we now must think about how we obtain these benefits without incurring offsetting losses in the switch from physical to digital.

Improving knowledge work visibility has to start at the individual level. This might start with something as mundane as how you name and organize your digital files. You might also develop more systematic rules of thumb for managing versions of your work products as they evolve. Later, you might give thought to how you map software tools to particular stages in your thinking or your work on particular kinds of projects. For example, I use mind-mapping software when I am in the early stages of thinking about a new problem. For writing projects, I use Scrivener as a tool to collect and organize all of the moving pieces of notes, outlines, research links, drafts, etc. The specific answers aren’t important; giving thought to the visibility of your own digital work is.
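As one concrete illustration of those mundane naming rules (an invented convention of my own, not one prescribed by any tool), encoding date, project, stage, and version in every file name lets a plain directory listing show where each piece of work stands:

```python
# A hypothetical file-naming convention for personal knowledge work:
# date + project + stage + version, so a directory listing reveals
# what each file is and how far along it is.
from datetime import date

STAGES = ("notes", "outline", "draft", "final")  # illustrative stages only

def work_file_name(project, stage, version, ext="md", day=None):
    """Build a name like '2016-03-14_client-proposal_draft_v2.md'."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage!r}")
    day = day or date.today()
    return f"{day:%Y-%m-%d}_{project}_{stage}_v{version}.{ext}"

print(work_file_name("client-proposal", "draft", 2, day=date(2016, 3, 14)))
```

Sorting such a directory by name then doubles as sorting the work by date, which is part of the point: the convention makes the state of the work visible without opening a single file.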

Teams should take a look at the world of software development. Software development teams have given more thought than most to how to see and track what is going on with the complex knowledge work products they develop and maintain. Software developers have carefully thought-out tools and practices for version management, for example. Good teams also have practices and tools for monitoring and tracking everything from the tasks they are doing to the software bugs and issues they are working to eliminate. These are all ideas worth adapting to the broader range of knowledge work.
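One way a team might adapt that developer discipline, sketched here as a minimal and entirely hypothetical example rather than any existing tool, is to give every piece of knowledge work an explicit, visible state, the way an issue tracker does:

```python
# Minimal, hypothetical sketch of issue-tracker-style visibility applied
# to general knowledge work: every work item carries an explicit state.
from dataclasses import dataclass

STATES = ("todo", "in-progress", "review", "done")

@dataclass
class WorkItem:
    title: str
    state: str = "todo"

    def advance(self):
        """Move this item one step along the pipeline (no-op at 'done')."""
        i = STATES.index(self.state)
        if i < len(STATES) - 1:
            self.state = STATES[i + 1]

board = [WorkItem("Draft client proposal"), WorkItem("Q3 sales analysis")]
board[0].advance()  # proposal drafting begins
for item in board:
    print(f"[{item.state:>11}] {item.title}")
```

Printing the board is the digital analog of walking past a colleague’s desk piled with drafts: anyone can see, at a glance, what is in flight and what is stalled.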

Organizations might best adopt an initial strategy of benign neglect. I’m not sure we understand knowledge work in today’s world well enough to support it effectively at the organizational level. Knowledge management efforts might seem relevant, but my initial hypothesis is that knowledge management is hampered, if not trapped, by clinging to industrial age thinking. We’re likely to see more progress by individual knowledge workers and local teams if we can persuade organizations to simply let the experiments occur.