Fuzzy organizational boundaries; accepting complexity

Camp Pendleton

We all start off in simple organizations. The first organization I ever ran was a Junior Achievement company that made battery jumper cables. We fit inside a single workroom at the local JA operations. I suppose that constituted my first time inside an incubator. Our clever insight was to sell in bulk to local police departments and car dealerships.

When we encounter organizations in fiction, they are often equally simple. A single factory or shop. A clever employee in the mailroom working his way up to the top floor executive washroom. Bankers offering mortgages to the residents of Bedford Falls. In economics we learn of Adam Smith’s pin factory.

It can be a long time, if ever, before we see the complexities of real organizations in a real economy.

In all of our examples, it is a simple task to separate the organization from its environment. What is inside the organization and what is outside seems clear. That apparent simplicity leads us astray in real organizations; more importantly, that simple picture is further from reality now than it has ever been. The images of organization baked into our assumptions at an early age blind us to realities about today’s organizations and their environments that are essential to making good decisions.

When you look beyond the simplistic examples of a single factory or retail shop, organizational boundaries become a curious notion. We talk about organizations as if they were clearly identifiable and bounded entities, yet they are no such thing. I’m writing this on a MacBook Pro; Apple now has a market cap of over $1 trillion. How would you draw a picture of what is Apple vs. what is not Apple? How do you characterize the Apple quality engineer sitting inside Foxconn’s assembly operation in China?

Suppose I hack into Target’s point-of-sale systems from a van parked on the public street outside their Clark Street store in Chicago. I’m not trespassing on Target’s property, yet I’ve breached a metaphorical firewall. Talking about firewalls perpetuates an illusion that there is a boundary between the organization and its environment that we can manage as if it were a border. Our language and mental models haven’t kept up with Target’s organizational reality.

While the boundaries of organizations have always been fuzzier than we might think, over the past three decades they have become porous to the point of invisibility. We need to invent better ways to think about what distinguishes one organization from another and to discern how and when that matters.

We must abandon the notion that we have full control over the design or execution of business activities or processes. As individual knowledge workers and as knowledge organizations we operate in complex webs of interdependencies. Our ability to operate effectively depends on smooth interactions among multiple participants. When we pretend that the boundaries are sharp and clear, we will be surprised in very unpleasant ways.

Review: Making Work Visible


Making Work Visible: Exposing Time Theft to Optimize Work & Flow, Dominica DeGrandis.

While drawn largely from the realm of software design and development, Making Work Visible offers advice that applies to all forms of knowledge work. We’re all familiar with the problems: too many demands, arbitrary deadlines, constant interruptions. DeGrandis offers practical advice on two levels. First, she lays out simple practices that anyone can use to see the work they are being asked to do and use that visibility to get more of the most important things done. Second, she offers a deeper look at a better way to think about knowledge work than our current bias toward treating it as a form of factory work.

Obviously, I was drawn to this book given my own interest in the challenges created by the invisible nature of knowledge work. We all know that we should be working on the highest value tasks on our lists, that we should carve out the necessary time to focus on those tasks, and that we are lying to ourselves when we pretend that we can multitask. It isn’t the knowing that’s hard, though; it’s the doing.

DeGrandis offers simple methods to accomplish that, anchored in the theory and practice of kanban: make the work to be done visible, limit work-in-process, and focus on managing flow. I’ve claimed that DeGrandis offers insight into the limitations of viewing knowledge work as factory work. Is it a contradiction that the solution is drawn from the Toyota Production System? Not if you understand why kanban differs from our myths about factory work.

The purpose of a kanban system is to make the flow of work visible, then focus on making and keeping that flow smooth. You search for and eliminate spots where the flow slows down. You focus on the rhythm and cadence of the system as a whole. You learn that you cannot run such a system at 100% capacity utilization. As with a highway system, 100% capacity utilization equals gridlock.
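To make the mechanics concrete, here is a minimal sketch of those two moves in code: visible columns of work and a work-in-process limit that refuses new work until something finishes. The column names and limits are mine, not DeGrandis’s; it’s an illustration of the idea, not anything from the book.

```python
# A toy kanban board: columns hold cards, and each column can carry a WIP limit.
# Column names and limits are illustrative, not taken from the book.

class KanbanBoard:
    def __init__(self, wip_limits):
        # e.g. {"todo": None, "doing": 2, "done": None}; None means unlimited
        self.wip_limits = wip_limits
        self.columns = {name: [] for name in wip_limits}

    def add(self, card, column="todo"):
        self._check_limit(column)
        self.columns[column].append(card)

    def pull(self, card, from_column, to_column):
        """Move a card only if the destination column has capacity."""
        self._check_limit(to_column)
        self.columns[from_column].remove(card)
        self.columns[to_column].append(card)

    def _check_limit(self, column):
        limit = self.wip_limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            # The refusal is the point: finish something before starting more.
            raise RuntimeError(f"WIP limit reached in '{column}': finish work before pulling more")


board = KanbanBoard({"todo": None, "doing": 2, "done": None})
board.add("write report")
board.add("review budget")
board.pull("write report", "todo", "doing")
board.pull("review budget", "todo", "doing")
# A third pull into "doing" would raise an error, making the overload visible.
```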

What makes this book worth your time is that DeGrandis keeps it simple without being simplistic. She offers a good blend of both “why to” and “how to.” That’s particularly important because you will need the whys to address the resistance you will encounter.

Can you make a mistake around here?

I wrote my first book with Larry Prusak 25 years ago, while we were both working for Ernst & Young. In the intervening years he turned out another 8 or 10 books while I’ve only managed one more so far. I think he’s done writing books for now, so there’s some chance I may yet catch up.

When I was teaching knowledge management at Kellogg, I invited Larry as a guest speaker. He’s an excellent storyteller, so my students benefitted that afternoon. He opened with a wonderful diagnostic question for organizations: “Can you make a mistake around here?”

Organizations spend a great deal of energy designing systems and processes to be reliable and not make mistakes. This is as it should be. No one wants to fly in a plane that you can’t trust to be reliable.

But what can we learn about organizations from how they respond to mistakes? Do they recognize and acknowledge the fundamental unreliability of people? Or, do they lie to themselves and pretend that they can staff themselves with people who won’t make mistakes?

If you can’t make a mistake, you can’t learn. If you can’t learn, you can’t innovate. You can extend the logic from there.

Getting better at the craft of knowledge work

Had lunch with my friend Buzz Bruggeman, CEO of ActiveWords, this week. Got a chance to look at some of the improvements in the pipeline. Not quite enough to persuade me to move back to Windows, but I do wish there were something as clever and powerful for OS X.

It led me to thinking about what makes some people more effective at leveraging their tools and environment. Most of the advice about personal technology seems to focus on micro-productivity: how to tweak the settings of some application or how to clean up your inbox.

ActiveWords, for example, sits between your keyboard and the rest of your system. The simple use case is text expansion: type “sy” and get an email signature. Micro-productivity. If you’re a particularly slow or inaccurate typist and send enough email, perhaps the savings add up to justify the cost and the learning curve.
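If you’re curious what’s under the hood, the text-expansion idea itself is almost trivially simple. Here’s a toy sketch in Python; the triggers and expansions are invented, and this is not how ActiveWords itself is built.

```python
# Toy text expansion: map short triggers to longer text.
# The abbreviations and expansions here are invented examples.

EXPANSIONS = {
    "sy": "Best regards,\nYour Name\nyour.email@example.com",
    "addr": "123 Example Street, Chicago, IL",
}

def expand(token: str) -> str:
    """Return the expansion for a trigger, or the token unchanged."""
    return EXPANSIONS.get(token, token)

print(expand("sy"))     # prints the signature block
print(expand("hello"))  # passes ordinary text through untouched
```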

Watching an expert user like Buzz is more interesting. With a handful of keystrokes, he fired up a browser, loaded the New York Times website, grabbed an article, started a new email to me, dropped in the link, and sent it off. Nothing that you couldn’t do with a few mouse clicks and menu choices, so what’s the big deal? I’m a pretty fair touch typist; how much time can you really expect to save with this kind of tool? Isn’t this just a little bit more micro-productivity?

There’s something deeper going on here. What Buzz has done is transform his computer from a collection of individual power tools into a workshop for doing better knowledge work. It’s less about the tools and more about how you apply them collectively to accomplish the work at hand.

How do you study knowledge work with an eye toward turning out better end results?

We know how to do this for repetitive, essentially clerical, work. That’s the stuff of the systems analysis practices that built payroll systems, airline reservation systems, and inventory control systems. Building newer systems to take orders for books, electronics, and groceries still falls into the realm of routine systems analysis for routine work.

Most of this, however, isn’t really knowledge work; it’s factory work where the raw material happens to be data rather than steel. So the lessons and practices of industrial engineering apply.

What differentiates knowledge work from other work is that knowledge work seeks to create a unique result of acceptable quality. It is the logic of craft. One differentiator of craft is skill in employing the tools of the craft. Watching Buzz work was a reminder that craft skill is about how well you employ the tools at your disposal.

How do we bring that craft sensibility into our digital workshops? How do we create an environment that encourages and enables us to create quality work?

The way that Buzz employs ActiveWords smooths transitions and interactions between bigger tools. It also shifts attention away from the specifics of individual tools and towards the work product being created.

Consider email, a constant thorn for most of us. You can treat each email as a unique entity worthy of a unique response. You can perform an 80/20 analysis on your incoming email flow, build a half dozen boilerplate responses, program a bot to filter your inbox, and hope that your filters don’t send the wrong boilerplate to your boss.

Or, there is a third way. You can perform that 80/20 analysis at a more granular level to discover that 95% of your emails are best treated as a hybrid mix of pure boilerplate, formulaic paragraphs that combine boilerplate and a bit of personalization, and a sprinkling of pure custom response. Then you can craft a mini-flow of tools and data to turn out those emails and reduce your ongoing workload.
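Here’s a rough sketch of what such a mini-flow might look like. The categories, templates, and field names are invented; the point is the shape of the flow, not any particular tool.

```python
# Toy email mini-flow: route each message to pure boilerplate, a template
# with personalized slots, or a flag for a fully custom reply.
# Categories, templates, and field names are all invented for illustration.

BOILERPLATE = {
    "meeting_request": "Thanks for the invitation. Please grab any open slot on my calendar.",
    "receipt": "Received, thanks. I'll follow up if I have questions.",
}

TEMPLATES = {
    "status_inquiry": "Hi {name},\n\nThe {project} work is on track; I'll send a fuller update by Friday.",
}

def draft_reply(message):
    """Return a draft reply, or None for the messages that deserve a custom response."""
    category = message["category"]  # assume an upstream triage step tagged the message
    if category in BOILERPLATE:
        return BOILERPLATE[category]
    if category in TEMPLATES:
        return TEMPLATES[category].format(**message["fields"])
    return None  # the small remainder that gets real attention


msg = {"category": "status_inquiry", "fields": {"name": "Dana", "project": "migration"}}
print(draft_reply(msg))
```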

I can visualize how this might work. The tools are an element, but I’m more intrigued by how to be more systematic about exploring and examining work practices and crafting effective support for knowledge work.

Have others been contemplating similar questions? Who’s doing interesting things worth exploring?

Review: Filters Against Folly

Filters Against Folly: How To Survive Despite Economists, Ecologists, and the Merely Eloquent, Garrett Hardin

You never know which books and ideas are going to stick with you. I first read Filters Against Folly in the early 1990s. Once a month, the group I was with met for lunch and discussed a book we thought might be interesting. I wish I could remember how this book got on the list. I’ve given away multiple copies and continue to find its approach relevant.

Some of the specific examples are dated and I think Hardin goes too far in some of his later arguments. What has stuck with me, however, is the value of the perspective Hardin adopts and the process he advocates.

We live in a world that depends on experts and expertise. At the same time, whatever expertise we possess, we are ignorant and un-expert about far more. Today, we seem to be operating in the worst stages of what Isaac Asimov described in the following observation:

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge’.

Hardin offers a practical way out of this dilemma. We need not simply defer to expertise, nor reject it out of hand. Rather than focus on the experts, Hardin shifts our attention to the arguments that experts make and three basic filters anyone can apply to evaluate those arguments.

Hardin’s fundamental insight is that as lay persons our responsibility is to serve as a counterweight to expert advocacy; the expert argues for “why” while the rest of us argue for “why not?” It is our role to “think it possible you may be mistaken.”

The filters are organized around three deceptively simple questions:

  • What are the words?
  • What are the numbers?
  • And then what?

When looking at the language in advocacy arguments, the key trick is to look for language designed to end or cut off discussion or analysis. Of course, in today’s environment, it might seem that most language is deployed to cut off thinking rather than promote it. Hardin offers up a provocative array of examples of thought-stopping rather than thought-provoking language.

Shifting to numbers, Hardin does not expect us all to become statisticians or data analysts, but he does think we’re all capable of some basic facility for recognizing the more obvious traps hidden in expert numbers. That includes numerate traps laid inside expert language. In Hardin’s estimation, “the numerate temperament is one that habitually looks for approximate dimensions, ratios, proportions, and rates of change in trying to grasp what is going on in the world.” Both zero and infinity hide inside literate arguments that ought to be numerate.

The Delaney Amendment, for example, forbids any substance in the human food supply if that substance can be shown to cause cancer at any level. That’s a literate argument hiding zero where it causes problems. The numerate perspective recognizes that our ability to measure improves over time; what was undetectable in 1958 when the Delaney Amendment was passed is routinely measurable today. The question ought to be what dosage of a substance represents a risk and whether that risk is a reasonable or unreasonable one to take on.
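A toy calculation makes the hidden zero visible. The detection limits and the risk threshold below are invented numbers; the point is that a zero-tolerance rule silently tightens every time measurement improves, while a dose-based rule stays put.

```python
# Hypothetical detection limits for some substance, in parts per billion.
# These figures are invented to illustrate the argument, not real data.
detection_limit_ppb = {1958: 1000, 1985: 10, 2020: 0.01}

risk_threshold_ppb = 50  # a hypothetical dose below which the risk is judged acceptable

for year, limit in detection_limit_ppb.items():
    # Under a zero-tolerance rule, anything detectable is banned,
    # so the effective standard is whatever we can currently measure.
    zero_rule = f"banned above {limit} ppb (whatever we can detect)"
    dose_rule = f"banned above {risk_threshold_ppb} ppb (fixed, risk-based)"
    print(f"{year}: zero-tolerance -> {zero_rule}; numerate rule -> {dose_rule}")
```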

Hardin’s final question “and then what?” is an ecological or systems filter. In systems terms we can never do merely one thing. Whatever intervention we make in a system will have a series of effects, some intended, some not. The responsible thing to do is to make the effort to identify potentially consequential effects and evaluate them collectively.

To be effective in holding experts to account, we must learn to apply all three of these filters in parallel. For example, labeling something as an “externality” in economics is an attempt to use language to treat an effect as a variable with a value of zero in the analysis.

For a small book, Filters Against Folly offers a wealth of insight into how each of us might be a better citizen. The questions we face are too important to be left in the hands of experts, no matter how expert.

A closer look at integration across organizations: thinking about coupling

When people ask me why I did something so strange as to leave a perfectly good career and get a Ph.D., the story I tell is this.

I designed and built information systems meant to improve the processes or the decision making of large organizations. I was troubled that organizational staff and managers routinely ignored the systems I created and continued running their departments and organizations pretty much as they always had. Either my designs were flawed or users were stupid (I’ll leave it to you to guess which hypothesis I favored).

I talked my way into a program—which involved explaining away aspects of my transcripts—and began hanging out with smarter people who were exploring similar questions about how organizations, systems, and technology fit together. This is the beauty of doctoral study; no one pretends to have the answers, everyone is trying to figure stuff out and, mostly, everyone wants to help you make progress.

This smart group led me toward the branch of organization theory and development that treated organizations as complex, designed systems in their own right. The early days of organizational behavior and design as a discipline sought the “one best way” to organize. Paul Lawrence and Jay Lorsch of the Harvard Business School opened a different path: organizations should be designed to fit into and take advantage of the environments they operated within. In their seminal 1967 work, Organization and Environment, they made the case that effective organizations struck and maintained a balance between differentiation and integration. Where did you carve the organization into pieces and how did you fit the pieces together? Management’s responsibility was to make those decisions and to keep an eye on the environment to ensure that the balance points still made sense.

Two things make that managerial balancing responsibility far more difficult. One, the rate of change in the environment. Moderate pendulum swings have been replaced with what can feel like life inside a pinball machine. Two, the role of technology as an integrating mechanism that now spans internal and organizational boundaries.

Set the rate of change issue to the side; it’s well known even if not well addressed.

The technology links knitting organizations together were not something carefully contemplated in Lawrence and Lorsch’s work. Integration, in their formulation, was the task of managers in conversation with one another to identify and reconcile the nature of the work to be done. It was not something buried in the algorithms and data structures of the systems built to coordinate the activities of different functional departments—logistics, production, distribution, marketing, sales, and their kin—comprising the organization as a whole. Change in one function must now be carefully implemented and orchestrated with changes in all the other functions in the chain.

Electronic commerce further complicates this integration challenge. Now the information systems in discrete organizations must learn to talk to one another. The managers at the boundaries who could once negotiate and smooth working relationships have been displaced by code. Points of friction between organizational processes that could be resolved with a phone call now require coordinating modifications to multiple information systems. That couples organizations more tightly to one another and makes change slower and more difficult to execute, regardless of the willingness and commitment of the parties involved.

An example of how this coupling comes into play surfaced in the early days of electronic data interchange. A grocery chain in the Southwestern United States agreed to connect their inventory and purchasing systems with Procter & Gamble’s sales and distribution systems. P&G could monitor the grocery chain’s inventory of Pampers and automatically send a replenishment order. In order to make those systems talk to one another, P&G was issued a reserved block of purchase order numbers in the chain’s purchasing systems. Otherwise, replenishment orders from P&G would have been rejected at the chain’s distribution center receiving docks because they didn’t have valid purchase order numbers in the chain’s systems.
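A toy sketch of the receiving-dock check suggests how tight that coupling is. The vendor names, number ranges, and field names are invented, and the real systems were mainframe-era and nowhere near this tidy.

```python
# Toy version of the receiving-dock check: a replenishment order is valid
# only if its PO number falls inside the block reserved for that vendor.
# Vendor names, ranges, and field names are invented for illustration.

RESERVED_PO_BLOCKS = {
    "P&G": range(9_000_000, 9_100_000),
}

def accept_shipment(vendor: str, po_number: int) -> bool:
    """Accept only shipments whose PO number sits in the vendor's reserved block."""
    block = RESERVED_PO_BLOCKS.get(vendor)
    return block is not None and po_number in block

print(accept_shipment("P&G", 9_000_042))  # True: inside the reserved block
print(accept_shipment("P&G", 1_234_567))  # False: rejected at the dock

# The coupling: if the chain reformats its PO numbers, this table and P&G's
# order-generation logic on the other side both have to change in lockstep.
```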

Now, these information systems in two separate organizations are intertwined. If the grocery chain upgrades to a new purchasing system and changes the format of their purchase order numbers, P&G’s sales department and IT department are both affected. Multiply that by all of your trading partners and even the simplest change becomes complex. Decisions about strategic relationships stumble over incompatibilities between coding systems.

We devote so much attention to the differentiation side of the equation that we overlook the importance of integration. Half a century ago, we had insights into why that was ill-advised. Maybe we’re overdue to take a closer look at integration.

Crumbling pyramids; knowledge work, leverage, and technology

The consulting pyramid model needs to take its place alongside the monuments that gave it its name as a pretty but now obsolete structure. Making your living selling expertise by the hour is inherently self-limiting; you have to find a source of leverage other than the number of hours you can work or the hourly rate you can charge.

The default strategy in the professional services world—consulting, lawyering, auditing, and the like—has been to collect a set of apprentices and junior staff who will trade a portion of their hourly rates for the privilege of learning from you. It’s a reasonable tradeoff, a nice racket, and has supported the lifestyles of many a senior partner.

The last 25 years of technology development have eroded the foundational assumptions about how productive and effective knowledge work is best done. In the process, the balance between learning and performing that made the leverage model economically sensible for both professional services firms and their clients has been upended. The failure to recognize this shift means that firms, their staffs, and their clients are all working harder and realizing less value than they could.

There are two elements to this erosion. The first is the challenge imposed by today’s technologically mediated work environment: it makes knowledge work difficult to observe. I’ve written about this problem of observable work elsewhere. In professional services, much of the apprenticeship activity is predicated on the ability of the more junior staff to watch and learn. If it’s hard to watch, then it’s hard to learn.

The second element is the increased productivity of the individual knowledge worker that technology enables. This may seem paradoxical. Why should the level of productivity be a challenge to the basic leverage model? Because leverage depends on being able to identify and carve out meaningful chunks of work that can be delegated to an apprentice.

It’s my hypothesis that changes in individual productivity clash with finding appropriate chunks to delegate. Often, the junior apprentice work was a mixture of necessary busy work with time and opportunity to inspect and understand what was going on and offer suggestions for options and improvements.

If technology eliminates or greatly reduces the necessary busy work, then the apprenticeship tasks begin to look a great deal more like training and development. The more training-like the task appears, the more difficult it becomes to charge for that person’s time and the more difficult it becomes to place them in the field where they must be to obtain the knowledge and experience they need.

The old cliche in professional services work is that the pyramid consists of “finders, minders, and grinders.” Built into this cliche is a set of assumptions about work processes anchored in a world of paper and manual analyses. That world is long gone, but we still haven’t updated our assumptions.

Data, insight, action; missing models

A common formulation in analytics circles is data yields insights which provoke action. Stripped to the core, this is the marketing pitch for every vendor of analytics or information management tools. This pitch works because people drawn to management prefer the action end of that cycle and are inclined to do no more analysis than necessary to justify an action. “Don’t just stand there, do something!” is a quintessential managerial command (and the exclamation point is required).

We have collections of advice and commentary supporting this stance from Tom Peters’ “bias for action” to the specter of “analysis paralysis.” Mostly, this is good. Why bother with analysis or look for insights if they will not inform a decision?

Despite the claims of vendors and consultants, this data -> insights -> action chain is incomplete. What’s overlooked in this cycle is the central role of underlying models. Analytics efforts are rife with models, so what am I talking about?

There’s a quote that is typically attributed to the late science fiction author, Isaac Asimov. Like most good quotes, it’s hard to pin down the source but being the good managerial sort we are, we’ll skip over that little detail. Asimov observed that

“The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’”

Noticing that a data point or a modeling result is odd depends on having an expectation of what the result ought to be. That expectation, in science or in organizations, is built on an implicit model of the situation at hand. So, for example, McDonald’s knew that milkshakes go with hamburgers and fries. When sales data was mapped against time, however, morning drive time turned out to be a key component of sales for McDonald’s shakes. Definitely a “that’s funny” moment.

That surprise isn’t visible until you have a point of sale system that puts a timestamp on every item sold and someone decides to play with the new numbers to see what they show. But you still can’t see it unless you have an expectation in your head about what ought to be happening. For a chocolate shake, the anomaly stands out to most anyone. Other “that’s funny” moments will depend on making the effort to tease out our model of what we think should be happening before we dive into details of what is happening.
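As a sketch of the mechanics, the surprise only exists as the gap between an explicit expectation and what the data shows. The numbers below are invented; the shape of the comparison is the point.

```python
# "That's funny..." as code: compare observed sales shares against an
# expected model and flag the periods where the gap is large.
# Both distributions below are invented for illustration.

expected_share = {"morning": 0.05, "lunch": 0.45, "dinner": 0.50}  # our mental model
observed_share = {"morning": 0.35, "lunch": 0.35, "dinner": 0.30}  # what the POS data shows

THRESHOLD = 0.15  # how big a gap counts as surprising

for period in expected_share:
    gap = observed_share[period] - expected_share[period]
    if abs(gap) > THRESHOLD:
        print(f"That's funny... {period}: expected {expected_share[period]:.0%}, observed {observed_share[period]:.0%}")

# Without the explicit expected_share there is nothing to compare against,
# which is exactly the role the missing model plays.
```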

Keep Moving, Stay in Touch, Head for Higher Ground

We are imitative creatures. We are happier copying someone else’s approach than inventing solutions based on the actual problem at hand. How often do you encounter situations where change is called for and the first response to any suggestion is “where has this been done before?” For all the talk of innovation, I often wonder how anything new ever happens.

Leaders and change agents have been fond of looking to the military for design ideas to emulate in their organizations. One all too persistent model was best captured by novelist Herman Wouk:

The Navy is a master plan designed by geniuses for execution by idiots. If you are not an idiot, but find yourself in the Navy, you can only operate well by pretending to be one. All the shortcuts and economies and common-sense changes that your native intelligence suggests to you are mistakes. Learn to quash them. Constantly ask yourself, “How would I do this if I were a fool?” Throttle down your mind to a crawl. Then you will never go wrong.

– Herman Wouk, The Caine Mutiny

This is a compelling image, and in 1951 it likely represented a pretty accurate description of the Navy. Too many organizational leaders operate as if it were still 1951 and that military was an exemplar of good organizational practice. If you also believe that militaries organize to fight the last war, what war are organizations fighting when they copy their designs from that military?

If we insist on copying ideas, perhaps we should at least update our mental models. If you want to draw lessons from the military, do it intelligently. If you look in the right places, the modern military has been more innovative about organization and leadership than most other institutions, because the consequences can mean the difference between life and death, something just a bit more compelling than making your quarterly budgets.

While business organizations remain fond of command and control, the military has had to cope with the disconnects between the view from headquarters and ground truth. Regardless of what TV and the movies suggest, orders do not flow from the Situation Room to the troops on the ground. Commanders spend their time articulating “commander’s intent,” not on developing detailed orders; they worry about making sure that all concerned understand “why” and what finished looks like first.

If the job at the top is to articulate why, what does that enable at the front lines? It grants those with first hand knowledge of the situation the freedom to exercise initiative and adapt action to the circumstances. It also eliminates “I was just following orders” as an excuse for failure.

This model places much greater responsibilities on the front lines, so you had better staff those lines accordingly. To put it a better way, you must trust that they are capable of what they may be called on to do. Wouk was reminding us that those on the line only behave like idiots if you treat them so.

If you make the transition to starting from commander’s intent and trusting that you’ve put capable people in place, then orders become much simpler to devise. In the volatile world that constitutes today’s environment for most of us, you can offer a default set of orders to cover the unexpected; keep moving, stay in touch, head for higher ground.

Data plus models produce insight: a review of “Factfulness”

Factfulness: Ten Reasons We’re Wrong About the World–and Why Things Are Better Than You Think, Hans Rosling, Anna Rosling Rönnlund, Ola Rosling. 2018

The best data available is useless if you filter it through the wrong mental model; the cleverest model is stupid with bad data. The late Hans Rosling explores this connection by looking at how flawed thinking—bad models—leads to poor judgments despite the availability of quite good data. Rosling’s field is population demographics and public health. He devotes his attention to how and why our intuitions are rooted in dated models. His arguments matter to all of us as citizens of an interconnected global economy. For those of us who worry about how data, analysis, and decisions fit together, Rosling’s insights offer important cautions and guidance.

Rosling’s fundamental argument is that the data, interpreted with the appropriate models, tells us that the world is improving along all of the dimensions that matter. He does not offer this as an excuse to stop working on making the world better, nor is he so naive as to claim that problems don’t exist. What he wants us to do is understand what the data is telling us and use that understanding to design and execute actions that will make a difference.

Take global population and income, for example. Most of us in the West filter the scraps of data we encounter through an old model of “developed” and “developing” world. There are a few of “us” and a lot of “them.” That split is long gone and Rosling offers a better lens to interpret the data. Instead of an over-simplified—and incorrect—model of rich vs. poor, Rosling offers the following richer breakdown, which groups population into four levels of daily income.

Global Population and income

What is important about this model is what it tells us about the middle. Level 4 is us; we can afford cars and smart phones and family vacations. Level 1 is the extreme poverty that most of us think of when we invoke the classic “us vs. them” perspective. Levels 2 and 3 represent people who can think about tomorrow and beyond; the car may be used but it runs, the phone is just as smart as yours, and a vacation is in the cards.
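If you want the levels as a rule of thumb in code, here is a small sketch. The cut points (roughly $2, $8, and $32 a day) are the approximate boundaries Gapminder uses, quoted from memory, so treat them as indicative rather than exact.

```python
# Rosling/Gapminder's four income levels, by approximate daily income in dollars.
# Cut points ($2, $8, $32) are quoted from memory as rough boundaries.

def income_level(dollars_per_day: float) -> int:
    if dollars_per_day < 2:
        return 1   # extreme poverty
    if dollars_per_day < 8:
        return 2
    if dollars_per_day < 32:
        return 3
    return 4       # roughly "us": cars, smartphones, family vacations

for income in (1, 4, 16, 64):
    print(f"${income}/day -> Level {income_level(income)}")
```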

Rosling offers good advice about how to compensate for our dated models and the thinking habits that cause us to misinterpret the data. He organizes those bad thinking habits into ten instincts, ranging from our preference for bad news to our tendency to think in straight lines to our search for someone to blame. For each of those instincts he demonstrates why they feel so good and lead us so far astray. He offers a wealth of mini-cases and stories to teach us how to tease better insights out of the numbers by being smarter about the models we put to use.

A good number of my consulting colleagues pride themselves on their ability to “torture the data until it confesses.” Rosling shows us that seduction is a much smarter strategy if you want deep insight. He shows how to combine the data with knowledge of the world to derive insights that wouldn’t otherwise be easily found. For example, the data about the rates of vaccination around the world tells us about investment opportunities:

Vaccines must be kept cold all the way from the factory to the arm of the child. They are shipped in refrigerated containers to harbors around the world, where they get loaded into refrigerated trucks. These trucks take them to local health clinics, where they are stored in refrigerators. These logistic distribution paths are called cool chains. For cool chains to work, you need all the basic infrastructure for transport, electricity, education, and health care to be in place. This is exactly the same infrastructure needed to establish new factories. (Here’s a video where Rosling and Gates help visualize the process)

Factfulness is a master class in data interpretation and storytelling with data. Along the way, you’ll unlearn some bad models you’ve been carrying around and replace them with more accurate and effective ones. It doesn’t hurt that Rosling views the world from a systems perspective, is comfortable with “it’s complicated” as a response, and still demands that we take responsibility for our actions.

Rosling passed away in 2017 but his work continues with this book and with his children. When you’re done with the book, plan to spend some more time exploring these ideas at Gapminder.org, where you’ll find more data, some software tools, and more to learn.