Building a body of knowledge work

I spent a year writing case studies before I began my doctoral program. More accurately, I was required to spend a year as a case writer to demonstrate my qualifications and commitment to the program before the admissions committee would accept me. My academic transcripts showed a bit more variance than the committee was accustomed to seeing, and this was the compromise between the advisor who believed in me and the committee.

The first case I wrote dealt with Gillette and their efforts to figure out how to manage electronic data interchange with their customers. At the time, this was a leading-edge issue for IT organizations. I went with my advisor to our first set of interviews on the South Side of Boston. He took three quarters of a page of notes at most; I was scribbling furiously to keep up.

The next day, we met to review what we had learned and my advisor’s first question was “where is your trip report?” My blank expression would not have been an encouraging sight to the admissions committee; my advisor was more forgiving.

What he expected, as he explained to me, was to see my semi-legible and partial notes transformed into a coherent reaction to the previous day’s interview. If I was eventually to create a case study that would work in the classroom or extend our understanding of this issue, I needed to get my thinking out of my head and available for inspection, by myself first and foremost.

HBS believes deeply in the value of learning by doing. The case method immerses you in management and decision situations and you figure out what to do in the middle of the same mess and confusion you will later work in. You learn to write cases in the same way—in the mess and confusion. The challenge is to discover the appropriate threads and themes without overwhelming what is there with your biases and preconceptions. Richard Feynman captured this challenge most succinctly: “the first principle is that you must not fool yourself — and you are the easiest person to fool.” The discipline and practice of transforming raw interview notes into a trip report turns out to be a simple and useful technique for avoiding that trap.

After a good stretch of this learning by doing, I did discover that this approach exists in its own rich context, as does any fundamentally useful technique. Anthropologist Clifford Geertz called it “thick description”; sociologists Barney Glaser and Anselm Strauss called it “grounded theory.”

I thought I was developing my skills to do organizational research. What I was also doing was developing a set of transferable knowledge work skills. I was laying the foundations of my personal knowledge management practices.

I’ve written before about the challenge of solving for pattern, which is a core requirement of knowledge work. This demands a respect for the richness of what is going on out in the world. We are pattern-seeking, pattern-recognizing creatures; our early survival depended on our ability to notice, extract, and react to patterns quickly. If we mistake a stick for a snake, there is little penalty; if we reach for a stick that turns out to be a snake, we die. Those instincts can be a problem in less threatening environments. In modern settings, imposing a bad pattern on data means missed opportunities, not death.

Our modern task is to get better at noticing what is interesting. We need to temper our instincts to instantly match a pattern and strive to remain grounded in the details that make up the phenomenon we wish to understand. What was once an essential research task is now a day-to-day requirement for the average knowledge worker.

What makes this tricky is that we don’t often know, at the outset, what constitutes the phenomenon we are interested in. There is no easy way to separate that phenomenon from its surrounding environment and context. We may also not be able to easily differentiate between objective phenomena and our subjective reactions. Rather than pursue an unobtainable objective stance, we simply acknowledge and include our subjective responses as part of the package of data.

More often than not, this package of data—interview notes, trip report, exhibits—is only interesting to the individual knowledge worker; it is not yet a final deliverable for sharing. But the collection is worth keeping and organizing for two reasons. First, it provides an audit trail and supporting materials for whatever final deliverable does get produced. Second, it becomes a resource for future work.

As knowledge workers our value is built on a body of work that accumulates over time. We can make that body of work more valuable to ourselves and, therefore, to our organizations by becoming more systematic in how we create, assemble, and manage it.

Review: Managing the Unexpected

Managing the Unexpected: Sustained Performance in a Complex World, Third Edition. Karl Weick and Kathleen Sutcliffe.

Conventional wisdom has it that the job of management is to “plan the work and work the plan.” Wall Street loves to see steady growth in reported earnings, and managers learned to give Wall Street what it wanted. Sadly, the world is more complicated than Wall Street analysts would like to believe.

Weick and Sutcliffe take an intriguing route in this book—now in its third edition. They ask what lessons might be found in the experiences and practices of high-reliability organizations. What’s an HRO? Flight-deck operations on an aircraft carrier. Nuclear power plant control room. Firefighters. Cockpit operations on a 757. Common to all of these is a tension between routine operations and potential disaster. All face the problem of how to take ordinary, fallible human beings and create organizations that work; organizations that operate reliably day in and day out, avoiding disasters for the most part, and coping effectively when disasters do occur.

While studying HROs is fascinating in its own right, Weick and Sutcliffe successfully connect lessons from HROs to the challenges of running more mundane organizations. The world is throwing more change and complexity at all of us. The problem is that most organizations, by design, are focused on the routine and the predictable. They effectively deny and eliminate the unexpected and the unpredictable, which works well in a stable environment. Less so in today’s world.

The core of the argument is that high-reliability organizations know how to operate mindfully as well as mindlessly (which is the default for most organizations). Mindfulness in this context breaks down into five characteristics focused toward two objectives.

The first objective is anticipating the unexpected. Three characteristics contribute to that:
1. preoccupation with failure,
2. reluctance to simplify interpretations, and
3. sensitivity to operations.

Each of these is a way to detect or amplify weak signals soon enough to do something useful. As Weick points out, “unexpected” implies something that has already happened that wasn’t anticipated. You want to figure out that something relevant has happened as soon as possible. The problem is that stuff is always happening and we couldn’t get through an average day without ignoring most of it. The challenge is to differentiate between signal and noise.

One way of separating signal from noise is ignoring the routine. That’s why we call it routine. The trick is to avoid getting caught up with expanding the definition of routine so we can ignore more of it. Take a look back at the Challenger launch failure. Before the catastrophic failure, there had been a series of smaller failures of the O-rings. Each of these “failures” was explained away in the sense that the post-flight review processes concluded that the “minor” failures were actually evidence that the system was working as designed.

The issue is attitudinal. Most organizations, NASA included, treat earlier minor failures as “close calls” and ultimately interpret them as evidence of success. An HRO takes the same data but treats it as a “near miss.” Then the analysis focuses on how to avoid even a near miss the next time round. Small failures (weak signals) are sought out and treated as opportunities to learn instead of anomalies to be explained away.

If anticipating and recognizing the unexpected is the first objective, containing the unexpected is the second. Here the relevant characteristics are a commitment to resilience and a deference to expertise.

Resilience is the term of choice for Weick and Sutcliffe because it highlights key aspects of organizations that typically are denied or glossed over. It acknowledges that human fallibility is unavoidable, that error is pervasive, and it reminds us that the unexpected has already happened. A strategy of resilience focuses on accepting that some small error has already occurred and on working to contain the consequences of that error while they are still small and manageable. To be resilient requires an organization to be serious about such practices as not shooting messengers.

Weick and Sutcliffe cite one example from carrier operations in which flight operations were shut down when a junior member of the crew reported a missing tool. Instead of punishing this person for losing the tool, the captain rewarded them, even though operations were suspended while the missing tool was found. Dealing with the small problem was rewarded because everyone recognized the greater risk of ignoring it. The same issues exist in all organizations, although the responses are generally quite different. The result, of course, is that problems are ignored until they are too big both to ignore and, typically, to deal with.

The second dimension to containing problems while they are small and tractable is knowing how to defer to expertise. Expertise can correlate with experience (as long as the experience is relevant). It does not generally correlate with hierarchical rank. Successfully seeking out and benefitting from expertise takes two things. Those up the chain of command must be ready to set the example. Those on the line need to be assertive about the expertise they have to offer, which includes developing a clearer sense of what that expertise is.

While the world that Weick and Sutcliffe describe is quite different from the organizations we are accustomed to, it does not require wholesale organizational change programs to get there. The mindfulness that they describe–of anticipating and containing the unexpected–can be practiced at both the individual and small group level. If their analyses and recommendations are sound (they are), then those who practice this mindfulness will gradually take over on the basis of improved performance.

Fuzzy organizational boundaries; accepting complexity


We all start off in simple organizations. The first organization I ever ran was a Junior Achievement company that made battery jumper cables. We fit inside a single workroom at the local JA operations. I suppose that constituted my first time inside an incubator. Our clever insight was to sell in bulk to local police departments and car dealerships.

When we encounter organizations in fiction, they are often equally simple. A single factory or shop. A clever employee in the mailroom working his way up to the top floor executive washroom. Bankers offering mortgages to the residents of Bedford Falls. In economics we learn of Adam Smith’s pin factory.

It can be a long time, if ever, before we see the complexities of real organizations in a real economy.

In all of our examples, it is a simple task to separate the organization from its environment. What is inside the organization and what is outside seems clear. That apparent simplicity leads us astray in real organizations; more importantly, it is less true now than it has ever been. The simple images of organization that were baked into our assumptions at an early age blind us to realities about today’s organizations and their environments that are essential to making good decisions.

When you look beyond the simplistic examples of a single factory or retail shop, organizational boundaries become a curious notion. We talk about organizations as if they were clearly identifiable and bounded entities, yet they are no such thing. I’m writing this on a MacBook Pro; Apple now has a market cap of over $1 trillion. How would you draw a picture of what is Apple vs. what is not Apple? How do you characterize the Apple quality engineer sitting inside FoxConn’s assembly operation in China?

Suppose I hack into Target’s point-of-sale systems from a van parked on the public street outside their Clark Street store in Chicago. I’m not trespassing on Target’s premises, yet I’ve breached a metaphorical firewall. Talking about firewalls perpetuates an illusion that there is a boundary between the organization and its environment that we can manage as if it were a border. Our language and mental models haven’t kept up with Target’s organizational reality.

While the boundaries of organizations have always been fuzzier than we might think, over the past three decades they have become porous to the point of invisibility. We need to invent better ways to think about what distinguishes one organization from another and to discern how and when that matters.

We must abandon the notion that we have full control over the design or execution of business activities or processes. As individual knowledge workers and as knowledge organizations we operate in complex webs of interdependencies. Our ability to operate effectively depends on smooth interactions among multiple participants. When we pretend that the boundaries are sharp and clear, we will be surprised in very unpleasant ways.

Review: Making Work Visible


Making Work Visible: Exposing Time Theft to Optimize Work & Flow. Dominica Degrandis.

While drawn largely from the realm of software design and development, Making Work Visible offers advice that applies to all forms of knowledge work. We’re all familiar with the problems: too many demands, arbitrary deadlines, constant interruptions. Degrandis offers practical advice on two levels. First, she lays out simple practices that anyone can use to see the work they are being asked to do and use that visibility to get more of the most important things done. Second, she offers a deeper look at knowledge work that challenges our current bias toward treating it as a form of factory work.

Obviously, I was drawn to this book given my own interest in the challenges created by the invisible nature of knowledge work. We all know that we should be working on the highest value tasks on our lists, that we should carve out the necessary time to focus on those tasks, and that we are lying to ourselves when we pretend that we can multitask. It isn’t the knowing that’s hard, though, it’s the doing.

Degrandis offers simple methods to accomplish that, anchored in the theory and practice of kanban: make the work to be done visible, limit work-in-process, and focus on managing flow. I’ve claimed that Degrandis offers insight into the limitations of viewing knowledge work as factory work. Is it a contradiction that the solution is drawn from the Toyota Production System? Not if you understand why kanban differs from our myths about factory work.

The purpose of a kanban system is to make the flow of work visible, then focus on making and keeping that flow smooth. You search for and eliminate spots where the flow slows down. You focus on the rhythm and cadence of the system as a whole. You learn that you cannot run such a system at 100% capacity utilization. As with a highway system, 100% capacity utilization equals gridlock.
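To make that last point concrete, here is a back-of-the-envelope sketch (mine, not Degrandis’s) using the textbook single-server queueing formula with invented rates. Watch what happens to the time a task spends in the system as utilization creeps toward 100%.

```python
# Back-of-the-envelope illustration (not from the book): average time a task
# spends in a single-server queue (M/M/1) as utilization approaches 100%.
# The service rate below is invented purely to show the shape of the curve.

def avg_time_in_system(arrival_rate: float, service_rate: float) -> float:
    """Average time waiting plus being worked on, per the M/M/1 formula."""
    if arrival_rate >= service_rate:
        return float("inf")  # demand meets or exceeds capacity: gridlock
    return 1.0 / (service_rate - arrival_rate)

service_rate = 10.0  # hypothetical: tasks the system can finish per day
for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    arrival_rate = utilization * service_rate
    days = avg_time_in_system(arrival_rate, service_rate)
    print(f"{utilization:.0%} busy -> {days:.2f} days per task")
```

At 50% utilization a task clears in a fifth of a day; at 99% it takes ten days. Same work, same people, very different flow.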

What makes this book worth your time is that Degrandis keeps it simple without being simplistic. She offers a good blend of both “why to” and “how to.” That’s particularly important because you will need the whys to address the resistance you will encounter.

Can you make a mistake around here?

I wrote my first book with Larry Prusak 25 years ago, while we were both working for Ernst & Young. In the intervening years he turned out another 8 or 10 books while I’ve only managed one more so far. I think he’s done writing books for now, so there’s some chance I may yet catch up.

When I was teaching knowledge management at Kellogg, I invited Larry as a guest speaker. He’s an excellent storyteller, so my students benefitted that afternoon. He opened with a wonderful diagnostic question for organizations: “Can you make a mistake around here?”

Organizations spend a great deal of energy designing systems and processes to be reliable and not make mistakes. This is as it should be. No one wants to fly in a plane that you can’t trust to be reliable.

But what can we learn about organizations from how they respond to mistakes? Do they recognize and acknowledge the fundamental unreliability of people? Or, do they lie to themselves and pretend that they can staff themselves with people who won’t make mistakes?

If you can’t make a mistake, you can’t learn. If you can’t learn, you can’t innovate. You can extend the logic from there.

Getting better at the craft of knowledge work

Had lunch with my friend Buzz Bruggeman, CEO of ActiveWords, this week. Got a chance to look at some of the improvements in the pipeline. Not quite enough to persuade me to move back to Windows, but I do wish there were something as clever and powerful for OS X.

It led me to thinking about what makes some people more effective at leveraging their tools and environment. Most of the advice about personal technology seems to focus on micro-productivity: how to tweak the settings of some application or how to clean up your inbox.

ActiveWords, for example, sits between your keyboard and the rest of your system. The simple use case is text expansion; type “sy” and get an email signature. Micro-productivity. If you’re a particularly slow or inaccurate typist and send enough email, perhaps the savings add up to justify the cost and the learning curve.
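The core mechanism is simple enough to sketch. Here is a toy version of the text-expansion idea in Python; it is nothing like ActiveWords’s actual implementation, and the abbreviations are invented.

```python
# Toy sketch of the text-expansion idea -- not ActiveWords's implementation.
# Abbreviations and expansions are invented examples.
EXPANSIONS = {
    "sy": "Best regards,\n<your signature block>",
    "nyt": "https://www.nytimes.com",
}

def expand(text: str) -> str:
    """Replace any whole word that matches a stored abbreviation."""
    return " ".join(EXPANSIONS.get(word, word) for word in text.split())

print(expand("sy"))              # prints the signature block
print(expand("article at nyt"))  # expands just the abbreviation
```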

Watching an expert user like Buzz is more interesting. With a handful of keystrokes, he fired up a browser, loaded the New York Times website, grabbed an article, started a new email to me, dropped in the link, and sent it off. Nothing that you couldn’t do with some mouse clicks and menu choices, so what’s the big deal? I’m a pretty fair touch typist; how much time can you really expect to save with this kind of tool? Isn’t this just a little bit more micro-productivity?

There’s something deeper going on here. What Buzz has done is transform his computer from a collection of individual power tools into a workshop for doing better knowledge work. It’s less about the tools and more about how you apply them collectively to accomplish the work at hand.

How do you study knowledge work with an eye toward turning out better end results?

We know how to do this for repetitive, essentially clerical, work. That’s the stuff of the systems analysis practices that built payroll systems, airline reservation systems, and inventory control systems. Building newer systems to take orders for books, electronics, and groceries still falls into the realm of routine systems analysis for routine work.

Most of this, however, isn’t really knowledge work; it’s factory work where the raw material happens to be data rather than steel. So the lessons and practices of industrial engineering apply.

What differentiates knowledge work from other work is that knowledge work seeks to create a unique result of acceptable quality. It is the logic of craft. One differentiator of craft is skill in employing the tools of the craft. Watching Buzz work was a reminder that craft skill is about how well you employ the tools at your disposal.

How do we bring that craft sensibility into our digital workshops? How do we create an environment that encourages and enables us to create quality work?

The way that Buzz employs ActiveWords smooths transitions and interactions between bigger tools. It also shifts attention away from the specifics of individual tools and towards the work product being created.

Consider email–a constant thorn for most of us. You can treat each email as a unique entity worthy of a unique response. You can perform an 80/20 analysis on your incoming email flow, build a half dozen boilerplate responses, program a bot to filter your inbox, and hope that your filters don’t send the wrong boilerplate to your boss.

Or, there is a third way. You can perform that 80/20 analysis at a more granular level to discover that 95% of your emails are best treated as a hybrid mix of pure boilerplate, formulaic paragraphs that combine boilerplate and a bit of personalization, and a sprinkling of pure custom response. Then you can craft a mini-flow of tools and data to turn out those emails and reduce your ongoing workload.
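To make the idea concrete, here is a rough sketch of what that mini-flow might look like. The template names, wording, and the split between boilerplate and personalization are all invented for illustration.

```python
# Rough sketch of the hybrid approach described above: stored boilerplate,
# a formulaic frame with personalized slots, and room for custom text.
# Templates and wording are invented for illustration.
from string import Template

BOILERPLATE = {
    "thanks": "Thanks for getting in touch. I'll review this and follow up shortly.",
    "decline": "Thanks for thinking of me, but I can't take this on right now.",
}

FRAME = Template("Hi $name,\n\n$opening\n\n$custom\n\nBest,\n<signature>")

def draft_reply(name: str, opening_key: str, custom: str = "") -> str:
    """Assemble a reply from boilerplate plus a bit of personalization."""
    return FRAME.substitute(name=name, opening=BOILERPLATE[opening_key], custom=custom)

print(draft_reply("Alex", "thanks", custom="The draft looks solid; more by Friday."))
```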

I can visualize how this might work. The tools are an element, but I’m more intrigued by how to be more systematic about exploring and examining work practices and crafting effective support for knowledge work.

Have others been contemplating similar questions? Who’s doing interesting things worth exploring?

Review: Filters Against Folly

Filters Against Folly: How To Survive Despite Economists, Ecologists, and the Merely Eloquent. Garrett Hardin.

You never know which books and ideas are going to stick with you. I first read Filters Against Folly in the early 1990s. Once a month, the group I was with met for lunch and discussed a book we thought might be interesting. I wish I could remember how this book got on the list. I’ve given away multiple copies and continue to find its approach relevant.

Some of the specific examples are dated and I think Hardin goes too far in some of his later arguments. What has stuck with me, however, is the value of the perspective Hardin adopts and the process he advocates.

We live in a world that depends on experts and expertise. At the same time, whatever expertise we possess, we are ignorant and un-expert about far more. Today, we seem to be operating in the worst stages of what Isaac Asimov described in the following observation:

There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge’.

Hardin offers a practical way out of this dilemma. We need not simply defer to expertise, nor reject it out of hand. Rather than focus on the experts, Hardin shifts our attention to the arguments that experts make and three basic filters anyone can apply to evaluate those arguments.

Hardin’s fundamental insight is that as lay persons our responsibility is to serve as a counterweight to expert advocacy; the expert argues for “why” while the rest of us argue for “why not?” It is our role to “think it possible you may be mistaken.”

The filters are organized around three deceptively simple questions:

  • What are the words?
  • What are the numbers?
  • And then what?

When looking at the language in advocacy arguments, the key trick is to look for language designed to end or cut off discussion or analysis. Of course, in today’s environment, it might seem that most language is deployed to cut off thinking rather than promote it. Hardin offers up a provocative array of examples of thought-stopping rather than thought-provoking language.

Shifting to numbers, Hardin does not expect us all to become statisticians or data analysts, but he does think we’re all capable of enough basic facility to recognize the more obvious traps hidden in expert numbers. That includes numerate traps laid inside expert language. In Hardin’s estimation, “the numerate temperament is one that habitually looks for approximate dimensions, ratios, proportions, and rates of change in trying to grasp what is going on in the world.” Both zero and infinity hide inside literate arguments that ought to be numerate.

The Delaney Amendment, for example, forbids any substance in the human food supply if that substance can be shown to cause cancer at any level. That’s a literate argument hiding a zero where it causes problems. The numerate perspective recognizes that our ability to measure improves over time; what was undetectable in 1958 when the Delaney Amendment was passed is routinely measurable today. The question ought to be what dose of a substance represents a risk, and whether that risk is a reasonable or unreasonable one to take on.

Hardin’s final question “and then what?” is an ecological or systems filter. In systems terms we can never do merely one thing. Whatever intervention we make in a system will have a series of effects, some intended, some not. The responsible thing to do is to make the effort to identify potentially consequential effects and evaluate them collectively.

To be effective in holding experts to account, we must learn to apply all three of these filters in parallel. For example, labeling something as an “externality” in economics is an attempt to use language to treat an effect as a variable with a value of zero in the analysis.

For a small book, Filters Against Folly offers a wealth of insight into how each of us might be a better citizen. The questions we face are too important to be left in the hands of experts, no matter how expert.

A closer look at integration across organizations: thinking about coupling

When people ask me why I did something so strange as to leave a perfectly good career and get a Ph.D., the story I tell is this.

I designed and built information systems meant to improve the processes or the decision making of large organizations. I was troubled that organizational staff and managers routinely ignored the systems I created and continued running their departments and organizations pretty much as they always had. Either my designs were flawed or users were stupid (I’ll leave it to you to guess which hypothesis I favored).

I talked my way into a program—which involved explaining away aspects of my transcripts—and began hanging out with smarter people who were exploring similar questions about how organizations, systems, and technology fit together. This is the beauty of doctoral study; no one pretends to have the answers, everyone is trying to figure stuff out and, mostly, everyone wants to help you make progress.

This smart group led me toward the branch of organization theory and development that treated organizations as complex, designed systems in their own right. The early days of organizational behavior and design as a discipline sought the “one best way” to organize. Paul Lawrence and Jay Lorsch of the Harvard Business School opened a different path; organizations should be designed to fit into and take advantage of the environments they operated within. In their seminal 1967 work, Organization and Environment, they made the case that effective organizations struck and maintained a balance between differentiation and integration. Where did you carve the organization into pieces and how did you fit the pieces together? Management’s responsibility was to make those decisions and to keep an eye on the environment to ensure that the balance points still made sense.

Two things make that managerial balancing responsibility far more difficult. One, the rate of change in the environment. Moderate pendulum swings have been replaced with what can feel like life inside a pinball machine. Two, the role of technology as an integrating mechanism that now spans internal and organizational boundaries.

Set the rate of change issue to the side; it’s well known even if not well addressed.

The technology links knitting organizations together were not something carefully contemplated in Lawrence and Lorsch’s work. Integration, in their formulation, was the task of managers in conversation with one another to identify and reconcile the nature of the work to be done. It was not something buried in the algorithms and data structures of the systems built to coordinate the activities of different functional departments—logistics, production, distribution, marketing, sales, and their kin—comprising the organization as a whole. Change in one function must now be carefully implemented and orchestrated with changes in all the other functions in the chain.

Electronic commerce further complicates this integration challenge. Now the information systems in discrete organizations must learn to talk to one another. The managers at the boundaries who could once negotiate and smooth working relationships have been displaced by code. Points of friction between organizational processes that could be resolved with a phone call now require coordinating modifications to multiple information systems. That couples organizations more tightly to one another and makes change slower and more difficult to execute, regardless of the willingness and commitment of the parties involved.

An example of how this coupling comes into play surfaced in the early days of electronic data interchange. A grocery chain in the Southwestern United States agreed to connect their inventory and purchasing systems with Procter & Gamble’s sales and distribution systems. P&G could monitor the grocery chain’s inventory of Pampers and automatically send a replenishment order. To make those systems talk to one another, P&G was issued a reserved block of purchase order numbers in the chain’s purchasing systems. Otherwise, replenishment orders from P&G were rejected at the chain’s distribution center receiving docks because they didn’t have valid purchase order numbers in the chain’s systems.
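A schematic sketch of that receiving-dock check might look like the following. This is my own simplified illustration, not the actual P&G or grocery-chain systems; the reserved number block and the validation rule are invented.

```python
# Simplified illustration of the coupling described above -- not the actual
# P&G or grocery-chain systems. The reserved purchase-order block is invented.

RESERVED_PO_RANGE = range(700000, 710000)  # hypothetical block issued to the trading partner

def accept_at_receiving_dock(po_number: int, issued_pos: set) -> bool:
    """Accept an inbound shipment only if its PO number was issued by the
    chain's own purchasing system or falls inside the partner's reserved block."""
    return po_number in issued_pos or po_number in RESERVED_PO_RANGE

chain_issued = {120001, 120002}  # POs the chain's own buyers created
print(accept_at_receiving_dock(700042, chain_issued))  # True: partner-generated replenishment order
print(accept_at_receiving_dock(555555, chain_issued))  # False: no valid PO, rejected at the dock
```

The moment the chain changes its purchase order number format, that reserved range and every system that checks it has to change in step.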

Now, these information systems in two separate organizations are intertwined. If the grocery chain upgrades to a new purchasing system and changes the format of their purchase order numbers, P&G’s sales department and IT department are both affected. Multiply that by all of your trading partners and even the simplest change becomes complex. Decisions about strategic relationships stumble over incompatibilities between coding systems.

We devote so much attention to the differentiation side of the equation that we overlook the importance of integration. Half a century ago, we had insights into why that was ill-advised. Maybe we’re overdue to take a closer look at integration.

Crumbling pyramids; knowledge work, leverage, and technology

The consulting pyramid model needs to take its place alongside the monuments that gave it its name as a pretty but now obsolete structure. Making your living selling expertise by the hour is inherently self-limiting; you have to find a source of leverage other than the number of hours you can work or the hourly rate you can charge.

The default strategy in the professional services world—consulting, lawyering, auditing, and the like—has been to collect a set of apprentices and junior staff who will trade a portion of their hourly rates for the privilege of learning from you. It’s a reasonable tradeoff, a nice racket, and has supported the lifestyles of many a senior partner.

The last 25 years of technology development have eroded the foundational assumptions about how productive and effective knowledge work gets done. In the process, the balance between learning and performing that gave the leverage model its economic logic for both professional services firms and their clients has been upended. The failure to recognize this shift means that firms, their staffs, and their clients are all working harder and realizing less value than they could.

There are two elements to this erosion. The first is that today’s technologically mediated work environment makes knowledge work difficult to observe. I’ve written about this problem of observable work elsewhere. In professional services, much of the apprenticeship activity is predicated on the ability of the more junior staff to watch and learn. If it’s hard to watch, then it’s hard to learn.

The second element is the increased productivity of the individual knowledge worker that technology enables. This may seem paradoxical. Why should the level of productivity be a challenge to the basic leverage model? Because leverage depends on being able to identify and carve out meaningful chunks of work that can be delegated to an apprentice.

It’s my hypothesis that changes in individual productivity clash with finding appropriate chunks to delegate. Often, the junior apprentice work was a mixture of necessary busy work with time and opportunity to inspect and understand what was going on and offer suggestions for options and improvements.

If technology eliminates or greatly reduces the necessary busy work, then the apprenticeship tasks begin to look a great deal more like training and development. The more training-like the task appears, the more difficult it becomes to charge for that person’s time and the more difficult it becomes to place them in the field where they must be to obtain the knowledge and experience they need.

The old cliche in professional services work is that the pyramid consists of “finders, minders, and grinders.” Built into this cliche is a set of assumptions about work processes anchored in a world of paper and manual analyses. That world is long gone, but we still haven’t updated our assumptions.

Data, insight, action; missing models

A common formulation in analytics circles is that data yields insights, which provoke action. Stripped to the core, this is the marketing pitch for every vendor of analytics or information management tools. This pitch works because people drawn to management prefer the action end of that cycle and are inclined to do no more analysis than necessary to justify an action. “Don’t just stand there, do something!” is a quintessential managerial command (and the exclamation point is required).

We have collections of advice and commentary supporting this stance from Tom Peters’ “bias for action” to the specter of “analysis paralysis.” Mostly, this is good. Why bother with analysis or look for insights if they will not inform a decision?

Despite the claims of vendors and consultants, this data -> insights -> action chain is incomplete. What’s overlooked in this cycle is the central role of underlying models. Analytics efforts are rife with models, so what am I talking about?

There’s a quote that is typically attributed to the late science fiction author, Isaac Asimov. Like most good quotes, it’s hard to pin down the source but being the good managerial sort we are, we’ll skip over that little detail. Asimov observed that

“The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’”

Noticing that a data point or a modeling result is odd depends on having an expectation of what the result ought to be. That expectation, in science or in organizations, is built on an implicit model of the situation at hand. So, for example, McDonald’s knew that milkshakes go with hamburgers and fries. When sales data was mapped against time, however, morning drive time turned out to be a key component of McDonald’s shake sales. Definitely a “that’s funny” moment.

That surprise isn’t visible until you have a point of sale system that puts a timestamp on every item sold and someone decides to play with the new numbers to see what they show. But you still can’t see it unless you have an expectation in your head about what ought to be happening. For a chocolate shake, the anomaly stands out to most anyone. Other “that’s funny” moments will depend on making the effort to tease out our model of what we think should be happening before we dive into details of what is happening.
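A toy example makes the point. The numbers below are invented, not McDonald’s data; what matters is that the expectation is written down so the surprise has something to stand out against.

```python
# Toy illustration with invented numbers -- not McDonald's data. The point is
# that the anomaly is only visible against an explicit expectation.

expected_share = {    # what our implicit model says each daypart "should" contribute
    "morning": 0.10,
    "lunch": 0.40,
    "afternoon": 0.20,
    "dinner": 0.30,
}

observed_units = {    # hypothetical timestamped point-of-sale data, rolled up by daypart
    "morning": 620,
    "lunch": 700,
    "afternoon": 330,
    "dinner": 450,
}

total = sum(observed_units.values())
for daypart, units in observed_units.items():
    observed = units / total
    expected = expected_share[daypart]
    if abs(observed - expected) > 0.10:   # crude "that's funny" threshold
        print(f"That's funny: {daypart} is {observed:.0%} of sales; we expected {expected:.0%}")
```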