Review: Planning for Everything: The Design of Paths and Goals

Planning for Everything: The Design of Paths and Goals. Peter Morville

Conversations about project management often invoke Dwight Eisenhower’s dictum that “plans are useless, planning is essential.” We’re agreed that it is the process that matters. Peter Morville’s Planning for Everything offers an extended and illuminating reflection on the nature of that process.

It’s an effective interweaving of vignettes and case examples illustrating how planning principles and practices play out. That actually makes it more actionable and adaptable than generalities or detailed processes and checklists. It certainly doesn’t hurt that Morville is an excellent writer.

The title captures a perspective on planning that squares with where my thinking has been evolving – that effective planning is more about design than about execution and that the design process revolves around the interaction between paths and goals.

The book is organized around six central chapters:

  • Framing
  • Imagining
  • Narrowing
  • Deciding
  • Executing
  • Reflecting

that lay out the essence of planning processes. These chapters are bracketed by chapters that step back to adopt a wider perspective; some may find those a bit too removed from the pragmatic, but don’t let that stop you from working through the core.

There are nuggets of insight throughout the book. I’ve recently been teaching courses on project management and requirements analysis; the combination has me thinking about the conflicts between agile approaches and management’s need for predictability and control. Morville observes that “since no amount of subsequent planning can solve a problem insufficiently understood, problem framing is the most important step in planning.” That reminds me that you can only plan as far ahead as you understand. The debate shouldn’t be about agile or waterfall or scrum; it needs to be about how to generate the best understanding of the problem at hand. Not a stunning insight, but something that is too easily forgotten.

This is a book worth re-reading, or at least taking a pass through, whenever you face a significant planning task.

Managerial alternatives to leaving smart people alone

I’ve often quoted Tom Davenport’s observation that the default HR strategy for knowledge-intensive organizations is to “hire smart people and leave them alone.” While there’s wisdom in that perspective, I fear it is no longer a desirable strategy.

I once had the office next door to Tom’s when we both worked at the Center for Information Technology and Strategy in Boston. The Center was an experiment by Ernst & Young to build better connections between academic research and business practice. It was an embodiment of Tom’s observation, grounded in our experiences in organizations that had pursued other approaches.

What does it mean to “leave them alone?” People don’t become managers to leave things alone. Managers set direction, they marshal resources, they evaluate and interpret results. Why should smart people be exempt from these eminently sensible actions?

They’re not.

But managers of smart people often aren’t qualified to do these essential management tasks. Setting direction, marshaling resources, and evaluating results depend on understanding practice. This is where managers struggle. In a world of manual tasks and procedural paperwork, managers could be expected to have a good grasp of how the work was and should be done. Managers understood practice. Thus, they were qualified to manage it.

We no longer live in that world.

In a world of knowledge work, it is knowledge workers—the smart people—who best understand practice. In this world, Tom’s strategy is a safe and responsible one; if you don’t know how to do what you manage, you’re well advised to resist the urge to meddle.

“First, do no harm” is commendable but not the same advice as “do nothing.” Setting direction, marshaling resources, and evaluating results depend on understanding practice. But understanding practice tells us nothing about whether that practice is advancing the goals of the organization. Practice is anchored in where we are; we also need to know where we would like to go.

Blending these perspectives implies that managing smart people requires a collaborative effort. Smart people provide sense-making of where we have been. Managers provide insights on desirable places to go. Smart people and managers jointly develop the maps and plans that connect the two.

It is not yet clear to me how this collaboration should play out in practice.

Smart people must be able to articulate what they do for those who can’t be expected to know as much as the smart people do about their domains. This places a responsibility on smart people to be able to educate others—particularly managers—about the point and promise of their expertise. Smart people who can’t, or won’t, explain how their expertise matters do a disservice to their organizations.

When smart people were a rare phenomenon in organizations, smart people and pretenders could both get away with jargon and faux-complexity. The numbers were small enough that a few wise gatekeepers could contain the downside risk and the upside of actual smart ideas was worth the trouble.

There’s an old, likely apocryphal, story told of Tom Watson in the early days of IBM. Seems an engineer had made a 10 million dollar design mistake. When asked why the engineer hadn’t been fired, Watson’s response was “I spent too much money learning from that mistake to get rid of the one person who won’t make it again.”

The calculus changes when smart people represent a significant proportion of your work force. Tolerating a single 10 million dollar mistake is reasonable; managing a hundred simultaneous potential mistakes—and upsides—becomes an existential problem. There is too much at stake to leave the process to chance.

The organization needs a view into the mix of potential smart ideas. Further, the organization needs to actively shape the mix to align with the goals of the organization. This becomes a conversation about “command intent” and how that interacts with “ground truth.”

Leaving smart people alone makes sense when the alternative is to give orders that are ignorant of context and possibility. Far better to combine the perspectives of smart people and forward-looking managers and increase the smarts applied.

Creating your knowledge workshop

 

Vendors and too many managers continue to promote and search for the One True Tool. This is a clear indicator that someone is trapped in an industrial mindset irrelevant to the actual world of knowledge work we inhabit. If your work can be accomplished with one tool, then you are little different from, and no better off than, the average wrench-turner on an assembly line. You are a replaceable component in a rigid system.

To build a body of work as a knowledge worker you need at least a well-equipped toolkit; ideally, you will learn to operate within a proper knowledge workshop.

For simple projects, Swiss Army knives and Leatherman tools are the answer. But no one serious about their craft works with a single tool. Good craftspeople depend on a collection of tools that work together and co-exist in a workshop where they can be found and used as tasks require.

We are at a point in carrying out knowledge work where we would be well-advised to set aside the quest for the one true tool and turn toward the problem of creating and equipping a knowledge workshop suited to our needs.

What makes a workshop?

A workshop is

  • a collection of tools, each suited to particular tasks and projects. Some tools are old, some new; some are general purpose, some specialized; some are used every day, others less frequently
  • organized and arranged so the right tool is available whenever needed
  • stocked with an inventory of common parts and useful raw materials, assembled just in case
  • equipped with a scrap bin full of fragments and discards sitting in the corner. These are handy for testing new tools or creating quick jigs and fixtures that might be helpful in constructing a final product.

These typical features of physical workshops offer guidance about how to create a knowledge workshop suited to our needs.

For a few specialized forms of knowledge work, the nature of a knowledge workshop is already reasonably well understood. Software developers have rich choices for their development environments. Bond traders and other investment specialists can have very sophisticated custom work environments built and maintained in the quest for a few more basis points.

Those of us doing more general knowledge work need a strategy for getting from concept to the creation of our own knowledge workshop. That plan consists of three phases: setting the workshop up, learning to use it effectively, and dealing with the roadblocks that a craft-centered strategy will inevitably provoke in the typical organization.

Setting Up

The exact details of setting up a knowledge workshop will vary by the particular form of knowledge work you do. Are you extracting insight from numbers? Are you designing new organizations? Are you writing research reports? The specific form your knowledge work takes will guide you to the particular tools relevant to the deliverables you create.

There are some general guidelines that apply regardless of the specific area of knowledge work. First, you are building a workshop, not searching for the perfect tool. Pay attention to whether tools you are considering play nicely with one another. Second, be conscious of how the tool mix is developing. Is there a balance between big tools and little specialty tools? Do the specialty tools bridge the gaps between what the big tools handle? Do the specialty tools get used often enough to be worth keeping, or do they exact greater demands on your memory than they return in improved effectiveness?

While selecting, assembling, and (eventually) integrating a random collection of tools into something more useful, consider how you will assemble relevant supporting materials. If you are a wordsmith, do you want an online dictionary available? Do you want more than one? If you perform market analysis, are there general statistical tables or reports that you draw on repeatedly (e.g., the Statistical Abstract of the United States)? Are the tools and materials arranged and organized to make your work easier, or are they a long list of random entries or icons on your desktop?

Learning

Once your workshop is set up, you can begin the never-ending task of learning to use it effectively. Set aside time to play with your tools and discover their features and limits. If you want to take advantage of pivot tables in Excel, waiting until they are essential to the product you must deliver by the end of the week is a mistake. You first need to discover that pivot tables exist at all.
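The idea behind a pivot table (summing a value grouped by row and column labels) is the kind of capability worth discovering during play. A minimal sketch in Python, with toy sales data of my own invention standing in for a spreadsheet:

```python
from collections import defaultdict

# Hypothetical sales records: (region, product, amount).
rows = [
    ("East", "Widget", 120),
    ("East", "Gadget", 80),
    ("West", "Widget", 200),
    ("West", "Widget", 50),
    ("West", "Gadget", 70),
]

def pivot(records):
    """Sum the amount column, grouped by (region, product) -- a bare-bones pivot table."""
    table = defaultdict(int)
    for region, product, amount in records:
        table[(region, product)] += amount
    return dict(table)

summary = pivot(rows)
print(summary[("West", "Widget")])  # 250
```

Ten minutes of this kind of tinkering, before a deadline looms, is exactly the “productive play” the next paragraph describes.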

This is all in the nature of “productive play,” of learning what is possible from the workshop you are designing.

Overcoming Resistance

“Productive play” may be essential to doing better knowledge work, but it is also a notion certain to trigger corporate antibodies in most organizations. You will encounter resistance, so you must have a plan for addressing it. Your most potent weapon: your ability to deliver better quality knowledge work.

Before you can do this, identify and enlist allies in your efforts and co-opt or counteract the most dangerous sources of resistance. The specifics will vary by organization, but expect to run afoul of your IT group and whoever ended up with oversight of Sarbanes-Oxley compliance for starters.

Step zero of any knowledge workshop strategy then becomes: “Take your CIO to lunch or befriend the folks staffing the help desk.” Their policy roles make them potential enemies, but their natural predispositions also make them potential allies.

Getting Started

The monoculture of office suites and corporate Web portals is rooted in outmoded assumptions about the nature of work as an industrial task.

Knowledge work is not factory work; factory strategies will not help knowledge workers. Tools are what you give to someone filling a well-defined role on the assembly line. A knowledge worker—you—needs to go further. Build your custom workshop now and see your work prosper.

Building a body of knowledge work

I spent a year writing case studies before I began my doctoral program. More accurately, I was required to spend a year as a case writer to demonstrate my qualifications and commitment to the program before the admissions committee would accept me. My academic transcripts showed a bit more variance than the committee was accustomed to seeing, and this was the compromise between the advisor who believed in me and the committee.

The first case I wrote dealt with Gillette and their efforts to figure out how to manage electronic data interchange with their customers. At the time this was a leading-edge issue for IT organizations. I went with my advisor to our first set of interviews on the South Side of Boston. He took three-quarters of a page of notes at most; I was scribbling furiously to keep up.

The next day, we met to review what we had learned and my advisor’s first question was “where is your trip report?” My blank expression would not have been an encouraging sight to the admissions committee; my advisor was more forgiving.

What he expected and explained to me was to see my semi-legible and partial notes transformed into a coherent reaction to the previous day’s interview. If I was to eventually create a case study that would work in the classroom or extend our understanding of this issue, I needed to get my thinking out of my head and available for inspection, by myself first and foremost.

HBS believes deeply in the value of learning by doing. The case method immerses you in management and decision situations and asks you to figure out what to do in the middle of the same mess and confusion you will later work in. You learn to write cases in the same way—in the mess and confusion. The challenge is to discover the appropriate threads and themes without overwhelming what is there with your biases and preconceptions. Richard Feynman captured this challenge most succinctly: “the first principle is that you must not fool yourself — and you are the easiest person to fool.” The discipline and practice of transforming raw interview notes into a trip report turns out to be a simple and useful technique for avoiding that trap.

After a good stretch of this learning by doing, I did discover that this approach exists in its own rich context, as does any fundamentally useful technique. Anthropologist Clifford Geertz called it “thick description”; sociologists Barney Glaser and Anselm Strauss called it “grounded theory.”

I thought I was developing my skills to do organizational research. What I was also doing was developing a set of transferable knowledge work skills. I was laying the foundations of my personal knowledge management practices.

I’ve written before about the challenge of solving for pattern, which is a core requirement of knowledge work. This demands a respect for the richness of what is going on out in the world. We are pattern-seeking, pattern-recognizing creatures; our early survival depended on our ability to notice, extract, and react to patterns quickly. If we mistake a stick for a snake, there is little penalty; if we reach for a stick that turns out to be a snake, we die. Those instincts can be a problem in less threatening environments. In modern settings, imposing a bad pattern on data means missed opportunities, not death.

Our modern task is to get better at noticing what is interesting. We need to temper our instincts to instantly match a pattern and strive to remain grounded in the details that make up the phenomenon we wish to understand. What was once an essential research task is now a day-to-day requirement for the average knowledge worker.

What makes this tricky is that we don’t often know, at the outset, what constitutes the phenomena we are interested in. There is no easy way to separate the phenomena you are interested in from the surrounding environment and context. We may also not be able to easily differentiate between objective phenomena and our subjective reactions. Rather than pursue an unobtainable objective stance, we simply acknowledge and include our subjective responses as part of the package of data.

More often than not, this package of data—interview notes, trip report, exhibits—is only interesting to the individual knowledge worker; it is not yet a final deliverable for sharing. But the collection is worth keeping and organizing for two reasons. First, it provides an audit trail and supporting materials for whatever final deliverable does get produced. Second, it becomes a resource for future work.

As knowledge workers our value is built on a body of work that accumulates over time. We can make that body of work more valuable to ourselves and, therefore, to our organizations by becoming more systematic in how we create, assemble, and manage it.

Review: Managing the Unexpected. Karl Weick and Kathleen Sutcliffe

Managing the Unexpected: Sustained Performance in a Complex World. Third Edition. Karl Weick and Kathleen Sutcliffe

Conventional wisdom has it that the job of management is to “plan the work and work the plan.” Wall Street loves to see steady growth in reported earnings and managers learned to give Wall Street what it wanted. Sadly, the world is more complicated than Wall Street analysts would like to believe.

Weick and Sutcliffe take an intriguing route in this book—now in its third edition. They ask what lessons might be found in the experiences and practices of high-reliability organizations. What’s an HRO? Flight-deck operations on an aircraft carrier. Nuclear power plant control room. Fire fighters. Cockpit operations on a 757. Common to all of these is a tension between routine operations and potential disaster. All face the problem of how to take ordinary, fallible, human beings and create organizations that work; organizations that operate reliably day in and day out, avoiding disasters for the most part and coping effectively when they do occur.

While studying HROs is fascinating in its own right, Weick and Sutcliffe successfully connect lessons from HROs to the challenges of running more mundane organizations. The world is throwing more change and complexity at all of us. The problem is that most organizations, by design, are focused on the routine and the predictable. They effectively deny and eliminate the unexpected and the unpredictable, which works well in a stable environment. Less so in today’s world.

The core of the argument is that high-reliability organizations know how to operate mindfully as well as mindlessly (which is the default for most organizations). Mindfulness in this context breaks down into five characteristics focused toward two objectives.

The first objective is anticipating the unexpected. Three characteristics contribute to that:
1. preoccupation with failure,
2. reluctance to simplify interpretations, and
3. sensitivity to operations.

Each of these is a way to detect or amplify weak signals soon enough to do something useful. As Weick points out “unexpected” implies something that has already happened that wasn’t anticipated. You want to figure out that something relevant has happened as soon as possible. The problem is that stuff is always happening and we couldn’t get through an average day without ignoring most of it. The challenge is to differentiate between signal and noise.

One way of separating signal from noise is ignoring the routine. That’s why we call it routine. The trick is to avoid getting caught up with expanding the definition of routine so we can ignore more of it. Take a look back at the Challenger launch failure. Before the catastrophic failure, there had been a series of smaller failures of the O-rings. Each of these “failures” was explained away in the sense that the post-flight review processes concluded that the “minor” failures were actually evidence that the system was working as designed.

The issue is attitudinal. Most organizations, NASA included, treat earlier minor failures as “close calls” and ultimately interpret them as evidence of success. An HRO takes the same data but treats it as a “near miss.” Then the analysis focuses on how to avoid even a near miss the next time round. Small failures (weak signals) are sought out and treated as opportunities to learn instead of anomalies to be explained away.

If anticipating and recognizing the unexpected is the first objective, containing the unexpected is the second. Here the relevant characteristics are a commitment to resilience and a deference to expertise.

Resilience is the term of choice for Weick and Sutcliffe because it highlights key aspects of organizations that typically are denied or glossed over. It acknowledges that human fallibility is unavoidable, that error is pervasive, and it reminds us that the unexpected has already happened. A strategy of resilience focuses on accepting that some small error has already occurred and on working to contain the consequences of that error while they are still small and manageable. To be resilient requires an organization to be serious about such practices as not shooting messengers.

Weick and Sutcliffe cite one example from carrier operations where operations were shut down when a junior member of the crew reported a missing tool. Instead of punishing this person for losing the tool, the captain rewarded them even though operations were suspended while the missing tool was found. Dealing with the small problem was rewarded because everyone recognized the greater risk of ignoring it. The same issues exist in all organizations, although the responses are generally quite different. The result, of course, is that problems are ignored until they are too big both to ignore and, typically, to deal with.

The second dimension to containing problems while they are small and tractable is knowing how to defer to expertise. Expertise can correlate with experience (as long as the experience is relevant). It does not generally correlate with hierarchical rank. Successfully seeking out and benefitting from expertise takes two things. Those up the chain of command must be ready to set examples. Those on the lines need to be assertive about the expertise they have to offer, which includes developing a clearer sense for the expertise that they have.

While the world that Weick and Sutcliffe describe is quite different from the organizations we are accustomed to, it does not require wholesale organizational change programs to get there. The mindfulness that they describe, of anticipating and containing the unexpected, can be practiced at both the individual and small-group level. If their analyses and recommendations are sound (they are), then those who practice this mindfulness will gradually take over on the basis of improved performance.

Fuzzy organizational boundaries; accepting complexity


We all start off in simple organizations. The first organization I ever ran was a Junior Achievement company that made battery jumper cables. We fit inside a single workroom at the local JA operations. I suppose that constituted my first time inside an incubator. Our clever insight was to sell in bulk to local police departments and car dealerships.

When we encounter organizations in fiction, they are often equally simple. A single factory or shop. A clever employee in the mailroom working his way up to the top floor executive washroom. Bankers offering mortgages to the residents of Bedford Falls. In economics we learn of Adam Smith’s pin factory.

It can be a long time, if ever, before we see the complexities of real organizations in a real economy.

In all of our examples, it is a simple task to separate the organization from its environment. What is inside the organization and what is outside seems clear. That apparent simplicity leads us astray in real organizations; more importantly, our myopic view is less true now than it has ever been. The simple images of organization that were baked into our assumptions at an early age blind us to realities about today’s organizations and their environments that are essential to making good decisions.

When you look beyond the simplistic examples of a single factory or retail shop, organizational boundaries become a curious notion. We talk about organizations as if they were clearly identifiable and bounded entities, yet they are no such thing. I’m writing this on a MacBook Pro; Apple now has a market cap of over $1 trillion. How would you draw a picture of what is Apple vs. what is not Apple? How do you characterize the Apple quality engineer sitting inside FoxConn’s assembly operation in China?

Suppose I hack into Target’s point of sale systems from a van parked on the public street outside their Clark Street store in Chicago? I’m not trespassing on Target’s location, yet I’ve breached a metaphorical firewall. Talking about firewalls perpetuates an illusion that there is a boundary between the organization and its environment that we can manage as if it were a border. Our language and mental models haven’t kept up with Target’s organizational reality.

While the boundaries of organizations have always been fuzzier than we might think, over the past three decades they have become porous to the point of invisibility. We need to invent better ways to think about what distinguishes one organization from another and to discern how and when that matters.

We must abandon the notion that we have full control over the design or execution of business activities or processes. As individual knowledge workers and as knowledge organizations we operate in complex webs of interdependencies. Our ability to operate effectively depends on smooth interactions among multiple participants. When we pretend that the boundaries are sharp and clear, we will be surprised in very unpleasant ways.

Review: Making Work Visible

 

Making Work Visible: Exposing Time Theft to Optimize Work & Flow. Dominica Degrandis

While drawn largely from the realm of software design and development, Making Work Visible offers advice that applies to all forms of knowledge work. We’re all familiar with the problems: too many demands, arbitrary deadlines, constant interruptions. Degrandis offers practical advice on two levels. First, she lays out simple practices that anyone can use to see the work they are being asked to do and use that visibility to get more of the most important things done. Second, she offers a deeper perspective on knowledge work, countering our current bias toward treating it as a form of factory work.

Obviously, I was drawn to this book given my own interest in the challenges created by the invisible nature of knowledge work. We all know that we should be working on the highest value tasks on our lists, that we should carve out the necessary time to focus on those tasks, and that we are lying to ourselves when we pretend that we can multitask. It isn’t the knowing that’s hard, though, it’s the doing.

Degrandis offers simple methods to accomplish that, anchored in the theory and practice of kanban: make the work to be done visible, limit work-in-process, and focus on managing flow. I’ve claimed that Degrandis offers insight into the limitations of viewing knowledge work as factory work. Is it a contradiction that the solution is drawn from the Toyota Production System? Not if you understand why kanban differs from our myths about factory work.

The purpose of a kanban system is to make the flow of work visible, then focus on making and keeping that flow smooth. You search for and eliminate spots where the flow slows down. You focus on the rhythm and cadence of the system as a whole. You learn that you cannot run such a system at 100% capacity utilization. As with a highway system, 100% capacity utilization equals gridlock.
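The pull rule at the heart of kanban can be sketched in a few lines. This is a minimal illustration of my own, not from the book; the task names and the WIP limit of 2 are hypothetical:

```python
from collections import deque

class KanbanBoard:
    """Minimal board: work is pulled into 'doing' only while under the WIP limit."""

    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.todo = deque()
        self.doing = []
        self.done = []

    def add(self, task):
        self.todo.append(task)

    def pull(self):
        # The core kanban rule: never exceed the work-in-process limit.
        while self.todo and len(self.doing) < self.wip_limit:
            self.doing.append(self.todo.popleft())

    def finish(self, task):
        self.doing.remove(task)
        self.done.append(task)
        self.pull()  # finishing work frees capacity, which pulls the next item

board = KanbanBoard(wip_limit=2)
for t in ["spec", "draft", "review", "ship"]:
    board.add(t)
board.pull()
print(board.doing)  # ['spec', 'draft'] -- 'review' and 'ship' wait their turn
```

Note that new work enters only when something finishes; the board never fills to gridlock, which is precisely the point about capacity utilization above.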

What makes this book worth your time is that Degrandis keeps it simple without being simplistic. She offers a good blend of both “why to” and “how to.” That’s particularly important because you will need the whys to address the resistance you will encounter.

Can you make a mistake around here?

I wrote my first book with Larry Prusak 25 years ago, while we were both working for Ernst & Young. In the intervening years he turned out another 8 or 10 books while I’ve only managed one more so far. I think he’s done writing books for now, so there’s some chance I may yet catch up.

When I was teaching knowledge management at Kellogg, I invited Larry as a guest speaker. He’s an excellent storyteller, so my students benefitted that afternoon. He opened with a wonderful diagnostic question for organizations: “Can you make a mistake around here?”

Organizations spend a great deal of energy designing systems and processes to be reliable and not make mistakes. This is as it should be. No one wants to fly in a plane that you can’t trust to be reliable.

But what can we learn about organizations from how they respond to mistakes? Do they recognize and acknowledge the fundamental unreliability of people? Or, do they lie to themselves and pretend that they can staff themselves with people who won’t make mistakes?

If you can’t make a mistake, you can’t learn. If you can’t learn, you can’t innovate. You can extend the logic from there.

Getting better at the craft of knowledge work

Had lunch with my friend Buzz Bruggeman, CEO of ActiveWords, this week. Got a chance to look at some of the improvements in the pipeline. Not quite enough to persuade me to move back to Windows, but I do wish there were something as clever and powerful for OS X.

It led me to thinking about what makes some people more effective leveraging their tools and environment. Most of the advice about personal technology seems to focus on micro-productivity; how to tweak the settings of some application or how to clean up your inbox.

ActiveWords, for example, sits between your keyboard and the rest of your system. The simple use case is text expansion; type “sy” and get an email signature. Micro-productivity. If you’re a particularly slow or inaccurate typist and send enough email, perhaps the savings add up to justify the cost and the learning curve.
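The simple use case amounts to a lookup table of triggers and expansions. A toy sketch in Python (the abbreviations and signature are hypothetical examples of mine; a real tool like ActiveWords does far more, including firing commands rather than just text):

```python
# Hypothetical abbreviation table: trigger word -> expansion.
expansions = {
    "sy": "Best regards,\nJim",
    "addr": "123 Main St, Chicago, IL",
}

def expand(text, table):
    """Replace any whole-word trigger in the text with its expansion."""
    return " ".join(table.get(word, word) for word in text.split(" "))

print(expand("sy", expansions))  # prints the two-line signature
```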

Watching an expert user like Buzz is more interesting. With a handful of keystrokes, he fired up a browser, loaded the New York Times website, grabbed an article, started a new email to me, dropped in the link, and sent it off. Nothing that you couldn’t do with some mouse clicks and menu choices, so what’s the big deal? I’m a pretty fair touch typist; how much time can you really expect to save with this kind of tool? Isn’t this just a little bit more micro-productivity?

There’s something deeper going on here. What Buzz has done is transform his computer from a collection of individual power tools into a workshop for doing better knowledge work. It’s less about the tools and more about how you apply them collectively to accomplish the work at hand.

How do you study knowledge work with an eye toward turning out better end results?

We know how to do this for repetitive, essentially clerical, work. That’s the stuff of the systems analysis practices that built payroll systems, airline reservation systems, and inventory control systems. Building newer systems to take orders for books, electronics, and groceries still falls into the realm of routine systems analysis for routine work.

Most of this, however, isn’t really knowledge work; it’s factory work where the raw material happens to be data rather than steel. So the lessons and practices of industrial engineering apply.

What differentiates knowledge work from other work is that knowledge work seeks to create a unique result of acceptable quality. It is the logic of craft. One differentiator of craft is skill in employing the tools of the craft. Watching Buzz work was a reminder that craft skill is about how well you employ the tools at your disposal.

How do we bring that craft sensibility into our digital workshops? How do we create an environment that encourages and enables us to create quality work?

The way that Buzz employs ActiveWords smooths transitions and interactions between bigger tools. It also shifts attention away from the specifics of individual tools and towards the work product being created.

Consider email, a constant thorn for most of us. You can treat each email as a unique entity worthy of a unique response. Or you can perform an 80/20 analysis on your incoming email flow, build a half dozen boilerplate responses, program a bot to filter your inbox, and hope that your filters don’t send the wrong boilerplate to your boss.

Or, there is a third way. You can perform that 80/20 analysis at a more granular level to discover that 95% of your emails are best treated as a hybrid mix of pure boilerplate, formulaic paragraphs that combine boilerplate and a bit of personalization, and a sprinkling of pure custom response. Then you can craft a mini-flow of tools and data to turn out those emails and reduce your ongoing workload.
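One way to make that 80/20 analysis concrete is to tally how a batch of incoming messages would map onto response tiers. The sketch below is hypothetical; the categories match the three tiers described above, but the keyword rules are invented for illustration:

```python
from collections import Counter

# Hypothetical triage: sort each message, by subject line, into one of
# the three response tiers. Real rules would come from studying your
# own inbox, not from hard-coded keywords like these.
def classify(subject: str) -> str:
    s = subject.lower()
    if "newsletter" in s or "unsubscribe" in s:
        return "pure boilerplate"
    if "meeting" in s or "schedule" in s:
        return "formulaic + personalization"
    return "custom response"

def tally(subjects):
    """Count how many messages fall into each response tier."""
    return Counter(classify(s) for s in subjects)
```

The point of the exercise isn’t the code; it’s that a rough count of your own traffic tells you where boilerplate and formulaic responses will actually pay off.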

I can visualize how this might work. The tools are an element, but I’m more intrigued by how to be more systematic about exploring and examining work practices and crafting effective support for knowledge work.

Have others been contemplating similar questions? Who’s doing interesting things worth exploring?

Review: Filters Against Folly

Filters Against Folly: How To Survive Despite Economists, Ecologists, and the Merely Eloquent. Garrett Hardin

You never know which books and ideas are going to stick with you. I first read Filters Against Folly in the early 1990s. Once a month, the group I was with met for lunch and discussed a book we thought might be interesting. I wish I could remember how this book got on the list. I’ve given away multiple copies and continue to find its approach relevant.

Some of the specific examples are dated and I think Hardin goes too far in some of his later arguments. What has stuck with me, however, is the value of the perspective Hardin adopts and the process he advocates.

We live in a world that depends on experts and expertise. At the same time, whatever expertise we possess, we are ignorant and un-expert about far more. Today, we seem to be operating in the worst stages of what Isaac Asimov described in the following observation:

There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge’.

Hardin offers a practical way out of this dilemma. We need not simply defer to expertise, nor reject it out of hand. Rather than focus on the experts, Hardin shifts our attention to the arguments that experts make and three basic filters anyone can apply to evaluate those arguments.

Hardin’s fundamental insight is that as lay persons our responsibility is to serve as a counterweight to expert advocacy; the expert argues for “why” while the rest of us argue for “why not?” It is our role to “think it possible you may be mistaken.”

The filters are organized around three deceptively simple questions:

  • What are the words?
  • What are the numbers?
  • And then what?

When looking at the language in advocacy arguments, the key trick is to watch for language designed to cut off discussion or analysis. Of course, in today’s environment, it might seem that most language is deployed to cut off thinking rather than promote it. Hardin offers up a provocative array of examples of thought-stopping rather than thought-provoking language.

Shifting to numbers, Hardin does not expect us all to become statisticians or data analysts, but he does think we’re all capable of enough basic numeracy to recognize the more obvious traps hidden in expert numbers. That includes numerate traps laid inside expert language. In Hardin’s estimation, “the numerate temperament is one that habitually looks for approximate dimensions, ratios, proportions, and rates of change in trying to grasp what is going on in the world.” Both zero and infinity hide inside literate arguments that ought to be numerate.

The Delaney Amendment, for example, forbids any substance in the human food supply if that substance can be shown to cause cancer at any level. That’s a literate argument hiding zero where it causes problems. The numerate perspective recognizes that our ability to measure improves over time; what was undetectable in 1958 when the Delaney Amendment was passed is routinely measurable today. The question ought to be what dosage of a substance represents a risk, and whether that risk is reasonable or unreasonable to take on.

Hardin’s final question “and then what?” is an ecological or systems filter. In systems terms we can never do merely one thing. Whatever intervention we make in a system will have a series of effects, some intended, some not. The responsible thing to do is to make the effort to identify potentially consequential effects and evaluate them collectively.

To be effective in holding experts to account, we must learn to apply all three of these filters in parallel. For example, labeling something as an “externality” in economics is an attempt to use language to treat an effect as a variable with a value of zero in the analysis.

For a small book, Filters Against Folly offers a wealth of insight into how each of us might be a better citizen. The questions we face are too important to be left in the hands of experts, no matter how expert.