A closer look at integration across organizations: thinking about coupling

When people ask me why I did something so strange as to leave a perfectly good career and get a Ph.D., the story I tell is this.

I designed and built information systems meant to improve the processes or the decision making of large organizations. I was troubled that organizational staff and managers routinely ignored the systems I created and continued running their departments and organizations pretty much as they always had. Either my designs were flawed or users were stupid (I’ll leave it to you to guess which hypothesis I favored).

I talked my way into a program—which involved explaining away aspects of my transcripts—and began hanging out with smarter people who were exploring similar questions about how organizations, systems, and technology fit together. This is the beauty of doctoral study; no one pretends to have the answers, everyone is trying to figure stuff out and, mostly, everyone wants to help you make progress.

This smart group led me toward the branch of organization theory and development that treated organizations as complex, designed systems in their own right. The early days of organizational behavior and design as a discipline sought the “one best way” to organize. Paul Lawrence and Jay Lorsch of the Harvard Business School opened a different path: organizations should be designed to fit into and take advantage of the environments they operated within. In their seminal 1967 work, Organization and Environment, they made the case that effective organizations struck and maintained a balance between differentiation and integration. Where did you carve the organization into pieces and how did you fit the pieces together? Management’s responsibility was to make those decisions and to keep an eye on the environment to ensure that the balance points still made sense.

Two things make that managerial balancing responsibility far more difficult today. One, the rate of change in the environment. Moderate pendulum swings have been replaced with what can feel like life inside a pinball machine. Two, the role of technology as an integrating mechanism that now spans boundaries both within and between organizations.

Set the rate of change issue to the side; it’s well known even if not well addressed.

The technology links that knit organizations together were not something Lawrence and Lorsch carefully contemplated. Integration, in their formulation, was the task of managers in conversation with one another to identify and reconcile the nature of the work to be done. It was not something buried in the algorithms and data structures of the systems built to coordinate the activities of different functional departments—logistics, production, distribution, marketing, sales, and their kin—comprising the organization as a whole. Change in one function must now be carefully implemented and orchestrated with changes in all the other functions in the chain.

Electronic commerce further complicates this integration challenge. Now the information systems in discrete organizations must learn to talk to one another. The managers at the boundaries who could once negotiate and smooth working relationships have been displaced by code. Points of friction between organizational processes that could be resolved with a phone call now require coordinating modifications to multiple information systems. That couples organizations more tightly to one another and makes change slower and more difficult to execute, regardless of the willingness and commitment of the parties involved.

An example of how this coupling comes into play surfaced in the early days of electronic data interchange. A grocery chain in the Southwestern United States agreed to connect their inventory and purchasing systems with Procter & Gamble’s sales and distribution systems. P&G could monitor the grocery chain’s inventory of Pampers and automatically send a replenishment order. To make those systems talk to one another, P&G was issued a reserved block of purchase order numbers in the chain’s purchasing systems. Otherwise, replenishment orders from P&G were rejected at the chain’s distribution center receiving docks because they didn’t carry valid purchase order numbers in the chain’s systems.

Now, these information systems in two separate organizations are intertwined. If the grocery chain upgrades to a new purchasing system and changes the format of their purchase order numbers, P&G’s sales department and IT department are both affected. Multiply that by all of your trading partners and even the simplest change becomes complex. Decisions about strategic relationships stumble over incompatibilities between coding systems.
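To make the coupling concrete, here is a minimal sketch of the kind of validation that ties the two systems together. The purchase order format and the reserved number block are hypothetical, invented purely for illustration; the real systems were far more involved.

    # A sketch of the validation that couples the two systems. The purchase
    # order format and the reserved block are hypothetical.
    RESERVED_PG_BLOCK = range(700000, 710000)  # block issued to P&G (invented)

    def is_valid_po(po_number: str) -> bool:
        """Accept a replenishment order only if its PO number parses and falls
        in the block reserved for this trading partner."""
        if not po_number.isdigit() or len(po_number) != 6:
            return False  # any upstream format change fails here
        return int(po_number) in RESERVED_PG_BLOCK

    # If the chain's new purchasing system emits "PO-0700123" instead of
    # "700123", every automated P&G order bounces at the receiving dock until
    # both sides coordinate a change.
    print(is_valid_po("700123"))      # True
    print(is_valid_po("PO-0700123"))  # False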

We devote so much attention to the differentiation side of the equation that we overlook the importance of integration. Half a century ago, we had insights into why that was ill-advised. Maybe we’re overdue to take a closer look at integration.

Crumbling pyramids; knowledge work, leverage, and technology

The consulting pyramid model needs to take its place alongside the monuments that gave it its name as a pretty but now obsolete structure. Making your living selling expertise by the hour is inherently self-limiting; you have to find a source of leverage other than the number of hours you can work or the hourly rate you can charge.

The default strategy in the professional services world—consulting, lawyering, auditing, and the like—has been to collect a set of apprentices and junior staff who will trade a portion of their hourly rates for the privilege of learning from you. It’s a reasonable tradeoff, a nice racket, and has supported the lifestyles of many a senior partner.

The last 25 years of technology development have eroded the foundational assumptions about how productive and effective knowledge work is best done. In the process, the balance between learning and performing that made the leverage model economically sensible for both professional services firms and their clients has been upended. The failure to recognize this shift means that firms, their staffs, and their clients are all working harder and realizing less value than they could.

There are two elements to this erosion. The first is that today’s technologically mediated work environment makes knowledge work difficult to observe. I’ve written about this problem of observable work elsewhere. In professional services, much of the apprenticeship activity is predicated on the ability of the more junior staff to watch and learn. If it’s hard to watch, then it’s hard to learn.

The second element is the increased productivity of the individual knowledge worker that technology enables. This may seem paradoxical. Why should the level of productivity be a challenge to the basic leverage model? Because leverage depends on being able to identify and carve out meaningful chunks of work that can be delegated to an apprentice.

It’s my hypothesis that changes in individual productivity clash with finding appropriate chunks to delegate. Often, junior apprentice work was a mixture of necessary busywork and the time and opportunity to inspect and understand what was going on and to offer suggestions for options and improvements.

If technology eliminates or greatly reduces the necessary busywork, then the apprenticeship tasks begin to look a great deal more like training and development. The more training-like the task appears, the more difficult it becomes to charge for that person’s time and the more difficult it becomes to place them in the field where they must be to obtain the knowledge and experience they need.

The old cliché in professional services work is that the pyramid consists of “finders, minders, and grinders.” Built into this cliché is a set of assumptions about work processes anchored in a world of paper and manual analyses. That world is long gone, but we still haven’t updated our assumptions.

Data, insight, action; missing models

A common formulation in analytics circles is that data yields insights, which provoke action. Stripped to the core, this is the marketing pitch for every vendor of analytics or information management tools. This pitch works because people drawn to management prefer the action end of that cycle and are inclined to do no more analysis than necessary to justify an action. “Don’t just stand there, do something!” is a quintessential managerial command (and the exclamation point is required).

We have collections of advice and commentary supporting this stance from Tom Peters’ “bias for action” to the specter of “analysis paralysis.” Mostly, this is good. Why bother with analysis or look for insights if they will not inform a decision?

Despite the claims of vendors and consultants, this data -> insights -> action chain is incomplete. What’s overlooked in this cycle is the central role of underlying models. Analytics efforts are rife with models, so what am I talking about?

There’s a quote typically attributed to the late science fiction author Isaac Asimov. Like most good quotes, it’s hard to pin down the source, but being the good managerial sorts we are, we’ll skip over that little detail. Asimov observed that

“The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’”

Noticing that a data point or a modeling result is odd depends on having an expectation of what the result ought to be. That expectation, in science or in organizations, is built on an implicit model of the situation at hand. So, for example, McDonald’s knew that milkshakes go with hamburgers and fries. When sales data was mapped against time, however, morning drive time turned out to be a key component of McDonald’s shake sales. Definitely a “that’s funny” moment.

That surprise isn’t visible until you have a point of sale system that puts a timestamp on every item sold and someone decides to play with the new numbers to see what they show. But you still can’t see it unless you have an expectation in your head about what ought to be happening. For a chocolate shake, the anomaly stands out to most anyone. Other “that’s funny” moments will depend on making the effort to tease out our model of what we think should be happening before we dive into details of what is happening.
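A minimal sketch of that logic might look like the following. The hourly figures and the expected shares are invented placeholders, not real sales data; the point is that the anomaly only appears when an explicit expectation sits next to the observation.

    # An anomaly is only visible relative to an expectation (a model).
    # The numbers below are invented placeholders for illustration.
    hourly_share_observed = {7: 0.12, 12: 0.30, 18: 0.25}  # share of shake sales by hour
    hourly_share_expected = {7: 0.02, 12: 0.35, 18: 0.30}  # what our mental model predicts

    for hour, observed in hourly_share_observed.items():
        expected = hourly_share_expected[hour]
        if observed > 3 * expected:  # crude "that's funny" threshold
            print(f"{hour}:00 - observed {observed:.0%} vs expected {expected:.0%}: that's funny...")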

Keep Moving, Stay in Touch, Head for Higher Ground

We are imitative creatures. We are happier copying someone else’s approach than inventing solutions based on the actual problem at hand. How often do you encounter situations where change is called for and the first response to any suggestion is “where has this been done before?” For all the talk of innovation, I often wonder how anything new ever happens.

Leaders and change agents have been fond of looking to the military for design ideas to emulate in their organizations. One all too persistent model was best captured by novelist Herman Wouk:

The Navy is a master plan designed by geniuses for execution by idiots. If you are not an idiot, but find yourself in the Navy, you can only operate well by pretending to be one. All the shortcuts and economies and common-sense changes that your native intelligence suggests to you are mistakes. Learn to quash them. Constantly ask yourself, “How would I do this if I were a fool?” Throttle down your mind to a crawl. Then you will never go wrong.

– Herman Wouk, The Caine Mutiny

This is a compelling image, and in 1951 it likely represented a pretty accurate description of the Navy. Too many organizational leaders operate as if it were still 1951 and that military were an exemplar of good organizational practice. If you also believe that militaries organize to fight the last war, what war are organizations fighting when they copy their designs from that military?

If we insist on copying ideas, perhaps we should at least update our mental models. If you want to draw lessons from the military, do it intelligently. If you look in the right places, the modern military has been more innovative about organization and leadership than other institutions because the consequences can mean the difference between life and death – something just a bit more compelling than making your quarterly budgets.

While business organizations remain fond of command and control, the military has had to cope with the disconnects between the view from headquarters and ground truth. Regardless of what TV and the movies suggest, orders do not flow from the Situation Room to the troops on the ground. Commanders spend their time articulating “commander’s intent,” not developing detailed orders; they worry first about making sure that all concerned understand “why” and what finished looks like.

If the job at the top is to articulate why, what does that enable at the front lines? It grants those with first hand knowledge of the situation the freedom to exercise initiative and adapt action to the circumstances. It also eliminates “I was just following orders” as an excuse for failure.

This model places much greater responsibilities on the front lines, so you had better staff those lines accordingly. Perhaps a better way to put it: you must trust that they are capable of what they may be called on to do. Wouk was reminding us that those on the line only behave like idiots if you treat them so.

If you make the transition to starting from commander’s intent and trusting that you’ve put capable people in place, then orders become much simpler to devise. In the volatile world that constitutes today’s environment for most of us, you can offer a default set of orders to cover the unexpected: keep moving, stay in touch, head for higher ground.

Data plus models produce insight: a review of “Factfulness”

Factfulness: Ten Reasons We’re Wrong About the World–and Why Things Are Better Than You Think, Hans Rosling, Anna Rosling Ronnlund, Ola Rosling. 2018

The best data available is useless if you filter it through the wrong mental model; the cleverest model is stupid with bad data. The late Hans Rosling explores this connection by looking at how flawed thinking—bad models—leads to poor judgments despite the availability of quite good data. Rosling’s field is population demographics and public health. He devotes his attention to how and why our intuitions are rooted in dated models. As citizens of an interconnected global economy, we all have a stake in his arguments. For those of us who worry about how data, analysis, and decisions fit together, Rosling’s insights offer important cautions and guidance.

Rosling’s fundamental argument is that the data, interpreted with the appropriate models, tells us that the world is improving along all of the dimensions that matter. He does not offer this as an excuse to stop working on making the world better, nor is he so naive as to claim that problems don’t exist. What he wants us to do is understand what the data is telling us and use that understanding to design and execute actions that will make a difference.

Take global population and income, for example. Most of us in the West filter the scraps of data we encounter through an old model of “developed” and “developing” world. There are a few of “us” and a lot of “them.” That split is long gone and Rosling offers a better lens to interpret the data. Instead of an over-simplified—and incorrect—model of rich vs. poor, Rosling offers the following richer breakdown, which groups population into four levels of daily income.

(Figure: global population and income, grouped into four levels of daily income)

What is important about this model is what it tells us about the middle. Level 4 is us; we can afford cars and smart phones and family vacations. Level 1 is the extreme poverty that most of us think of when we invoke the classic “us vs. them” perspective. Levels 2 and 3 represent people who can think about tomorrow and beyond; the car may be used but it runs, the phone is just as smart as yours, and a vacation is in the cards.

Rosling offers good advice about how to compensate for our dated models and the thinking habits that cause us to misinterpret the data. He organizes those bad thinking habits into ten instincts, ranging from our preference for bad news to our tendency to think in straight lines to our search for someone to blame. For each of those instincts he demonstrates why they feel so good and lead us so far astray. He offers a wealth of mini-cases and stories to teach us how to tease better insights out of the numbers by being smarter about the models we put to use.

A good number of my consulting colleagues pride themselves on their ability to “torture the data until it confesses.” Rosling shows us that seduction is a much smarter strategy if you want deep insight. He shows how to combine the data with knowledge of the world to derive insights that wouldn’t otherwise be easily found. For example, the data about the rates of vaccination around the world tells us about investment opportunities:

Vaccines must be kept cold all the way from the factory to the arm of the child. They are shipped in refrigerated containers to harbors around the world, where they get loaded into refrigerated trucks. These trucks take them to local health clinics, where they are stored in refrigerators. These logistic distribution paths are called cool chains. For cool chains to work, you need all the basic infrastructure for transport, electricity, education, and health care to be in place. This is exactly the same infrastructure needed to establish new factories. (Here’s a video where Rosling and Gates help visualize the process)

Factfulness is a master class in data interpretation and storytelling with data. Along the way, you’ll unlearn some bad models you’ve been carrying around and replace them with more accurate and effective ones. It doesn’t hurt that Rosling views the world from a systems perspective, is comfortable with “it’s complicated” as a response, and still demands that we take responsibility for our actions.

Rosling passed away in 2017 but his work continues with this book and with his children. When you’re done with the book, plan to spend some more time exploring these ideas at Gapminder.org, where you’ll find more data, some software tools, and more to learn.

A Swiss Army Knife for Project Management

I’ve carried a pocket knife since my days as a stage manager/techie in college. A handful of useful tools in hand beats the perfect tool back in the shop or office. Courtesy of the TSA, I have to remember to leave it behind when I fly or surrender it to the gods of security theater, but every other day it’s in my pocket. There is, in fact, an entire subculture devoted to discussions of what constitutes an appropriate EDC—Every Day Carry—for various occupations and environments.

I’ve been thinking about what might constitute the equivalent EDC or Swiss Army Knife for the demands of project planning and management. We live in a project-based world but fail to equip managers with an appropriate set of essential project management tools.

Like all areas of expertise, project management professionals build their reputations by dealing with ever more complex and challenging situations. The Project Management Institute certifies project management professionals. The PMBOK Guide, the Project Management Body of Knowledge, has reached its sixth edition and runs to several hundred pages.

The complexities of large-scale project management push much training and education into the weeds of work breakdown structures, scope creep, critical-path mapping, and more. The message that project management is a job for professionals and the amateur need not apply is painfully clear. But we’re all expected to participate in project planning, and often we must lead projects without benefit of formal training in project management.

Recently, I looked at the need for project design before project management. The essential problem is to use a picture of where you want to end up to lay out a map of how to get there from wherever you are now.

The end is where to begin. Until you can conjure a picture of where you want to go, you have no basis to map the effort it will take to create it. Imagine what you need to deliver in reasonable detail and you can work backwards to the steps that will bring it into being. If you have a clear sense of where you are, you can also identify the next few steps forward.

Working out the steps that will take you from where you are to where you want to go can be done with two tools and three rules.

Tool #1: A calendar. If you can do it all without looking at one, you aren’t talking about a project.

Tool #2: A messy outline. An outline because it captures the essential features of ordering steps and clustering them. Messy because you can’t and won’t get it right the first time and the neat outlines you were introduced to in middle school interfere with that. (Personally, I’m partial to mind maps over pure outlines, but that is a topic for another time.)

Three rules generate the substance of the outline:

  1. Small chunks
  2. First things first
  3. Like things together

“Small chunks” is a reminder that the only way to eat an elephant is in small bites. There are a variety of heuristics about recognizing what constitutes an appropriate small chunk of a project. Somewhere between a day and a week’s worth of work for one person isn’t a bad starting point. Writing a blog post is a reasonable chunk; launching a new blog isn’t.

Generating an initial list of small chunks is the fuel that feeds an iterative process of putting first things first, grouping like things together, and cycling back to revise the list of small chunks.

The art in project management lies in being clever and insightful about sequencing and clustering activities. Here, we’re focused on the value of thinking through what needs to be done in what order before leaping to the first task that appears. That’s why an outline is a more useful tool at this point than Gantt charts or Microsoft Project. An outline adds enough structure over a simple to-do list to be valuable without getting lost in the intricacies of a complex software tool. It helps you organize your work and discover similar tasks, deliverables, or resources that can be grouped together in your plans. An outline gives you order and clustering. The calendar links the outline to time. For many projects that will be enough. For the rest, it is the right place to start.
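For the concrete-minded, here is a minimal sketch of the messy outline expressed as plain data. The tasks, clusters, and dates are invented for illustration; what matters is the structure the three rules produce: small chunks, first things first, like things together.

    # The "messy outline" as plain data. Tasks and dates are invented
    # placeholders; the structure is the point.
    plan = {
        "Platform": [               # first things first: hosting before content
            "Pick hosting",         # small chunks: roughly a day to a week each
            "Set up theme",
        ],
        "Content": [                # like things together
            "Draft launch post",
            "Draft 'about' page",
        ],
        "Launch": [
            "Announce to mailing list",
        ],
    }

    # The calendar is the second tool: attach rough dates and revise both
    # the dates and the outline as the plan evolves.
    calendar = {"Platform": "week 1", "Content": "weeks 2-3", "Launch": "week 4"}
    for cluster, tasks in plan.items():
        print(f"{cluster} ({calendar[cluster]}): " + "; ".join(tasks))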

The point of a project plan is not the plan itself, but the structure it brings to running the project. The execution value of widespread project planning capability in the organization is twofold. First, it adds capacity where it is needed: at the grassroots level. Second, it improves the inputs to those situations where sophisticated project-management techniques are appropriate.

 

Trust, Verify, and Triangulate

“Trust but verify” is no longer an effective strategy in an information-saturated world. When Ronald Reagan quoted the Russian proverb in 1985, it seemed clever enough; today it sounds hopelessly naive. If we are reasonably diligent executives or citizens, we understand and seek to avoid confirmation bias when important decisions are at hand. What can we do to compensate for the forces working against us?

It’s a cliché that we live in a world of information abundance. That cliché, however, has not led to the changes in information behaviors that it implies. We still operate as though information were a scarce commodity, believing that anyone who holds relevant data is automatically in a position of power and that information seekers depend on the holder’s munificence.

As data becomes abundantly and seemingly easily available, the problem for the information seeker changes. (It changes for the information holder as well, but that is a topic for another time.) The problem transforms from “Can I get the data?” to “What data exists? How quickly can I get it? How do I know I can trust it? How do I evaluate conflicting reports?” It is no longer simply a question of getting an answer but of getting an answer whose strengths and limits you understand and can account for.

The temptation is to fall into a trust trap, abdicating responsibility to someone else. “4 out of 5 dentists recommend…” “According to the Wall Street Journal…” “The pipeline predicts we’ll close $100 million…”

There was a time when we could at least pretend to seek out and rely on trustworthy sources. We counted on the staffs at the New York Times or Wall Street Journal to do fact checking for us. Today we argue that the fact checkers are biased and no one is to be trusted.

Information is never neutral.

Whatever source is collecting, packaging, and disseminating information is doing so with its own interests in mind. Those interests must be factored into any analysis. For example, even data as seemingly impartial as flight schedules must be viewed with a skeptical eye. In recent years, airlines improved their on-time performance as much by adjusting schedules as by any operational changes. You can interpret new flight schedules as an acknowledgment of operational realities or as padding to enable the reporting of better on-time performance.

If we are a bit more sophisticated, we invest time in understanding how those trusted sources gather and process their information, verifying that their processes are sound, and accept their reports as reliable inputs. Unfortunately, “trust but verify” is no longer a sufficient strategy. The indicators we once used to assess trust have become too easy to imitate. Our sources of information are too numerous and too distributed to contemplate meaningful verification.

Are we doomed? Is our only response to abandon belief in objective truth and cling to whatever source best caters to our own biases? Fortunately, triangulation is a strategy well matched to the characteristics of today’s information environment.

In navigation, you determine your location in relation to known locations. You need at least three locations to fix your current position. Moreover, you need to account for the limits of your measurement tools to understand the precision of your fix. In today’s world of GPS navigation, that precision might well be very high but still imperfect.
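As a minimal sketch of the navigation version, consider fixing a position from noisy distance measurements to three known reference points. The coordinates and measurements below are invented; the point is that three or more sightings pin down a position, and the leftover error tells you how precise the fix is.

    # Position fix from noisy distances to three known reference points.
    # Coordinates and measurements are invented for illustration.
    import numpy as np
    from scipy.optimize import least_squares

    beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known locations
    measured = np.array([7.2, 7.1, 7.3])                        # noisy distance sightings

    def residuals(pos):
        # difference between predicted and measured distances
        return np.linalg.norm(beacons - pos, axis=1) - measured

    fix = least_squares(residuals, x0=[5.0, 5.0])
    print("estimated position:", fix.x)
    print("worst residual (a measure of the fix's precision):", abs(fix.fun).max())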

In organizational (and other) settings where you are attempting to make sense of—or draw useful inferences from—a multitude of noisy and conflicting sources, the principles of triangulation offer a workable strategy for developing useful insights in a finite and manageable amount of time.

In navigation, the more widely and evenly dispersed your sightings, the more precisely you can fix your position. Focus your data collection on identifying and targeting multiple sources of input that represent divergent, and possibly conflicting, perspectives. Within an organization, for example, work with supporters and opponents, both active and passive, of a proposed reorganization or systems deployment to develop an implementation strategy. When evaluating and selecting a new application, seek out a wider assortment of potential references, vendors, and analysts.

Triangulation also helps counteract the simplistic notion of balance that undermines too many narratives. Triangulation is based on living in a three-dimensional world; it cannot tell you where you are with only two fixes. We should be at least as diligent when mapping more complex phenomena.

A data collection effort organized around seeking multiple perspectives and guidance risks spinning out of control. Analyzing data in parallel with collection manages that risk. Process and integrate data as it is collected. Look for emerging themes and issues and work toward creating a coherent picture of the current state. Such parallel analysis/collection efforts will identify new sources of relevant input and insight to be added to the collection process.

Monitoring the emergence of themes, issues, and insights will signal when to close out the collection process. In the early stages of analysis, the learning curve will be steep, but it will begin to flatten out over time. As the analysis begins to converge and the rate of new information and insight begins to drop sharply, the end of the data collection effort will be near.

The goal of fact finding and research is to make better decisions. You can’t set a course until you know where you are. How carefully you need to fix your position or assess the context for your decision depends on where you hope to go. Thinking in terms of triangulation—how widely you distribute your input base and what level of precision you need—offers a data collection and analysis strategy that is more effective and efficient than the approaches we grew accustomed to in simpler times.

Chaos players: knowledge work as performance art


All the world’s a stage, And all the men and women merely players; They have their exits and their entrances, And one man in his time plays many parts
As You Like It, Act II, Scene VII. William Shakespeare

I’ve been thinking about the role of mental models for sense-making. While we do this all the time, I think there is significant incremental value in making those models more explicit and then playing with them to tease out their implications. Organization as machine is a familiar example, one that I believe is largely obsolete. Organization as ecosystem or complex adaptive system has grown in popularity. It has the advantage of being richer and more sensitive to the complexities of modern organizations and their environments. On the other hand, that mental model is a bit too appealing to academic and consulting desires to sound simultaneously profound and unintelligible. It fails to provide useful guidance through the day-to-day challenges of competing and surviving.

Organization as performance art or theatrical production offers a middle ground between simplistic and over-engineered. It appeals to me personally, given a long history of staging and producing. It’s my hypothesis that most of us have enough nodding familiarity with the theater to take advantage of the metaphor and model without so much knowledge that the little details interfere with the deeper value.

The goal of theater is to produce an experience for an audience. That experience must always be grounded in the practical art of the possible. This gives us something to work with.

Let’s work backwards from the performance. We have the players on the stage and an audience with expectations about what they are about to experience. If that is all we have, then we are in the realm of storytelling. Storytelling demands both the tellers of the tale and the creator of the tale itself. Our playwright starts with an idea and crafts a script to bring that idea to life and connect it to all of the other stories and ideas the audience will bring to the experience. We now have a story, its author, storytellers, and an audience with their expectations.

Theater takes us a step farther and asks us to think about production values that contribute to and enhance the experience we hope to create. Stage and sets and lighting and sound can all be drawn into service of the story. Each calls for different expertise to design, create, and execute. We now have multiple experts who must collaborate and we have processes to be managed. Each must contribute to the experience being created. More importantly, those contributions must all be coordinated and integrated into the intended experience.

This feels like a potentially fruitful line of inquiry. It seems to align well with an environment that depends on creativity and innovation as much as or more than simple execution. How deeply should it be developed?

Going behind the screen: mental models and more effective software leverage

I’ve been writing at a keyboard now for five decades. As it’s Mother’s Day, it is fitting that my mother was the one who encouraged me to learn to type. Early in that process, I was also encouraged to learn to think at the keyboard and skip the handwritten drafts. That was made easier by my inability to read my own handwriting after a few hours.

I first started text editing as a programmer writing Fortran on a Xerox SDS Sigma computer. I started writing consulting reports on Wang Word Processors. When PCs hit the market, I made my way through a variety of word processors including WordStar, WordPerfect, and Microsoft Word. I also experimented with an eclectic mix of other writing tools such as ThinkTank, More, Grandview, Ecco Pro, OmniOutliner, and MindManager. Today, I do the bulk of my long-form writing using Scrivener on a Mac together with a suite of other tools.

The point is not that I am a sucker for bright, shiny objects—I am—or that I am still in search of the “one, true tool.” This parade of tools over years and multiple technology platforms leads me to the observation that we would be wise to devote much more attention to our mental models of software and the thinking processes they support.

That’s a problem because we are much more comfortable with the concrete than the abstract. You pick up a shovel or a hammer and what you can do is pretty clear. Sit down at a typewriter with a stack of paper and, again, you can muddle through on your own. Replace the typewriter and paper with a keyboard and a blank screen and life grows more complicated.

Fortunately, we are clever apes and as good disciples of Yogi Berra we “can observe a lot just by watching.” The field of user interface design exists to smooth the path to making our abstract tools concrete enough to learn.

UI design falls short, however, by focusing principally on the point where our senses—sight, sound, and touch—meet the surface of our abstract software tools. It’s as if we taught people how to read words and sentences but never taught them how to understand and follow arguments. We recognize that a book is a kind of interface between the mind of the author and the mind of the reader. A book’s interface can be done well or done badly, but the ultimate test is whether we find a window into the thoughts and reasoning of another.

We understand that there is something deeper than the words on the page. Our goal is to get behind the words and into the thinking of the author. The same goal exists in software; we need to go behind the interface we see on the screen to grasp the programmer’s thinking.

We’ll come back to working with words in a moment. First, let’s look at the spreadsheet. I remember seeing Visicalc for the first time—one year too late to get me through first year Finance. What was visible on the screen mirrored the paper spreadsheets I used to prepare financial analyses and budgets. The things I understood from paper were there on the screen; things that I wished I could do with paper were now easy in software. They were already possible and available by way of much more complex and inscrutable software tools but Visicalc’s interface created a link between my mind and Dan Bricklin’s that opened up the possibilities of the software. I was able to get behind the screen and that gave me new power. That same mental model can also be a hindrance if it ends up limiting your perception of new possibilities.

Word processors also represent an interface between writers and the programmer’s model of how writing works. That model can be more difficult to discern. If writer and programmer have compatible models, the tools can make the process smoother. If the models are at odds, then the writer will struggle and not understand why.

Consider the first stand-alone word processors like the Wang. These were expensive, single-function machines. The target market was organizations with separate departments dedicated to the production of documents: insurance policies, user manuals, formal reports, and the like. The users were clerical staff—generally women—whose job was to transform handwritten drafts into finished products. The software was built to support that business process, and the process was reflected in the design and operation of the software. Functions and features of the software supported revising copy, enforcing formatting standards, and other requirements of the business.

The economics that drove the personal computer revolution changed the potential market for software. While Wang targeted organizations and word processing as an organizational function, software programmers could now target individual writers. This led to a proliferation of word processing tools in the 1980s and 1990s reflecting multiple models of the writing process. For example, should the process of creating a draft be separate from the process of laying out the text on the page? Should the instructions for laying out the text be embedded in the text of the document or stored separately? Is a long-form product such as a book a single computer file, a collection of multiple files, or something else?

Those decisions influence the writer’s process. If your process meshes with the programmer’s, then life is good. If they clash, the tool will get in the way of good work.
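As a minimal sketch of one of those decisions, here are two hypothetical ways a tool might represent the same draft: formatting embedded in the text itself versus content kept separate from a style map that a later production step applies. The formats are invented for illustration, not drawn from any particular product.

    # Two hypothetical representations of the same draft.

    # Model A: layout instructions live inside the text itself.
    embedded = "*Chapter 1*\n\nIt was a _dark and stormy_ night."

    # Model B: the draft is plain content; layout lives in a separate style map.
    content = {"title": "Chapter 1", "body": "It was a dark and stormy night."}
    styles = {"title": {"weight": "bold"}, "body": {"emphasis": ["dark and stormy"]}}

    # A writer who thinks in one model will fight a tool built around the other;
    # the clash shows up as friction in the work rather than as an error message.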

If you don’t recognize the issue, then your success or failure with a specific tool can feel capricious. If you select a tool without thinking about this fit, then you might blame yourself for problems and limitations that are caused by using a tool that clashes with your process.

Suppose we recognize that this issue of mental models exists. How do we take advantage of that perspective to become more effective in leveraging available tools? A starting point is to reflect on your existing work practices and look for the models you may be using. Are there patterns in the way you approach the work? Do you start a writing project by collecting a group of interesting examples? Do you develop an explicit hypothesis and search out matching evidence? Do you dive into a period of research into what others have written and look for holes?

Can you draw a picture of your process? Identify the assumptions driving your process? Map your software tools against your process?

These are the kinds of questions that designers ask and answer about organizational processes. When we work inside organizations, we accept the processes as a given. In today’s environment for knowledge work, we have the capacity to operate effectively at a level that can match what organizations managed not that long ago. Given that level of potential productivity and effectiveness, we now need to apply the same level of explicit thought and design to our personal work practices.

Design projects before worrying about managing them

 I’m gearing up to teach project management again over the summer. There’s a notion lurking in the back of my mind that warrants development.

We would do a better job at managing projects if we spent more time designing them first.

We turn something into a project when we encounter a problem that we haven’t tackled before and can’t see exactly how to solve in one step. The mistake we make is to spend too much time thinking about the word “management” at the expense of the word “project.”

Suppose our problem is to improve the way we bring new hires on board in a rapidly growing organization. There’s an obligatory amount of HR and administrative paperwork to complete, but the goal is to integrate new people into the organization. Do you send them off immediately to new assignments? Do you design a formal training program? There’s a host of questions and a host of possible ideas so you now have a potential project.

Given the pressures in organizations to “get on with it,” the temptation is to grab a half-formed idea, sketch a plausible plan, guess at a budget, pick a deadline, and start. If we have some project management practices and discipline, we’ll flesh out the plan and develop something that looks like a work breakdown structure and a timeline.

What we don’t give sufficient attention to is the design thinking that might transform this project idea into something that adds meaningful and unexpected value to the organization. We don’t give that attention because we don’t view a project as something worthy of design.

What would it mean to design a project? Two ideas come to mind. The first would be to generate ways to increase the value of the effort. In our on-boarding example, one objective was to offer new employees practice in preparing client presentations. Rather than develop a generic presentation skills module, the project team came up with a better idea. We asked teams in the field for research and competitive intelligence tasks on their to do lists and transformed those into assignments for the on-boarding program. This allowed us to accomplish the presentation skills training, contribute to actual client work, and build bridges between new employees and their future colleagues.

A second design thinking aspect would be a more deliberate focus on generating multiple concepts and approaches that would evolve into a work breakdown structure. My conjecture is that more projects are closer to one-off efforts than to executing the nth iteration of a programming or consulting project. Project management personalities are tempted to standardize and control prematurely. But standard work plans and templates aren’t relevant until there are enough projects of a certain category to warrant the investment. Doing this kind of design work calls for expanding the toolset beyond the conventional project management toolkit that is focused on tracking and monitoring the sequence of tasks.

This is a notion in development. It isn’t fully baked but my instinct is that it is worth fleshing out. Some questions I’m curious to get reactions to include:

  • Is making a distinction between project design and project execution worth the trouble?
  • How do standard project management tools—Gantt charts, project management software such as Microsoft Project or Primavera, or spreadsheets—interfere with early stage idea generation and creativity?
  • How might you/do you expand the toolset to support better design thinking?