Keep Moving, Stay in Touch, Head for Higher Ground

We are imitative creatures. We are happier copying someone else’s approach than inventing solutions based on the actual problem at hand. How often do you encounter situations where change is called for and the first response to any suggestion is “where has this been done before?” For all the talk of innovation, I often wonder how anything new ever happens.

Leaders and change agents have been fond of looking to the military for design ideas to emulate in their organizations. One all too persistent model was best captured by novelist Herman Wouk:

The Navy is a master plan designed by geniuses for execution by idiots. If you are not an idiot, but find yourself in the Navy, you can only operate well by pretending to be one. All the shortcuts and economies and common-sense changes that your native intelligence suggests to you are mistakes. Learn to quash them. Constantly ask yourself, “How would I do this if I were a fool?” Throttle down your mind to a crawl. Then you will never go wrong.

– Herman Wouk, The Caine Mutiny

This is a compelling image, and in 1951 it likely represented a pretty accurate description of the Navy. Too many organizational leaders operate as if it were still 1951 and that military were an exemplar of good organizational practice. If you also believe that the military organizes to fight the last war, what war are organizations fighting when they copy their designs from that military?

If we insist on copying ideas, perhaps we should at least update our mental models. If you want to draw lessons from the military, do it intelligently. If you look in the right places, the modern military has been more innovative about organization and leadership than other institutions, because the consequences of getting it wrong can mean the difference between life and death – something just a bit more compelling than making your quarterly budgets.

While business organizations remain fond of command and control, the military has had to cope with the disconnects between the view from headquarters and ground truth. Regardless of what TV and the movies suggest, orders do not flow from the Situation Room to the troops on the ground. Commanders spend their time articulating “commander’s intent,” not developing detailed orders; they worry first about making sure that everyone concerned understands “why” and what finished looks like.

If the job at the top is to articulate why, what does that enable at the front lines? It grants those with first-hand knowledge of the situation the freedom to exercise initiative and adapt action to the circumstances. It also eliminates “I was just following orders” as an excuse for failure.

This model places much greater responsibilities on the front lines, so you had better staff those lines accordingly. Put a better way, you must trust that they are capable of what they may be called on to do. Wouk was reminding us that those on the line only behave like idiots if you treat them as such.

If you make the transition to starting from commander’s intent and trusting that you’ve put capable people in place, then orders become much simpler to devise. In the volatile world that constitutes today’s environment for most of us, you can offer a default set of orders to cover the unexpected: keep moving, stay in touch, head for higher ground.

Data plus models produce insight: a review of “Factfulness”

Factfulness: Ten Reasons We’re Wrong About the World–and Why Things Are Better Than You Think, Hans Rosling, Anna Rosling Ronnlund, Ola Rosling. 2018

The best data available is useless if you filter it through the wrong mental model; the cleverest model is stupid with bad data. The late Hans Rosling explores this connection by looking at how flawed thinking—bad models—leads to poor judgments despite the availability of quite good data. Rosling’s field is population demographics and public health. He devotes his attention to how and why our intuitions are rooted in dated models. As citizens of an interconnected global economy, we all have a stake in his arguments. For those of us who worry about how data, analysis, and decisions fit together, Rosling’s insights offer important cautions and guidance.

Rosling’s fundamental argument is that the data, interpreted with the appropriate models, tells us that the world is improving along all of the dimensions that matter. He does not offer this as an excuse to stop working on making the world better, nor is he so naive as to claim that problems don’t exist. What he wants us to do is understand what the data is telling us and use that understanding to design and execute actions that will make a difference.

Take global population and income, for example. Most of us in the West filter the scraps of data we encounter through an old model of “developed” and “developing” world. There are a few of “us” and a lot of “them.” That split is long gone and Rosling offers a better lens to interpret the data. Instead of an over-simplified—and incorrect—model of rich vs. poor, Rosling offers the following richer breakdown, which groups population into four levels of daily income.

Global Population and income

What is important about this model is what it tells us about the middle. Level 4 is us; we can afford cars and smart phones and family vacations. Level 1 is the extreme poverty that most of us think of when we invoke the classic “us vs. them” perspective. Levels 2 and 3 represent people who can think about tomorrow and beyond; the car may be used but it runs, the phone is just as smart as yours, and a vacation is in the cards.
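For readers who like to see a model made concrete, here is a minimal sketch of the four-level framing in Python. The dollar thresholds (roughly $2, $8, and $32 per person per day) are the approximate boundaries Rosling and Gapminder describe; treat them, and the example incomes, as illustrative rather than definitive.

```python
# A minimal sketch of Rosling's four-level income model.
# Thresholds are approximate (about $2, $8, and $32 per person per day)
# and are offered here only to make the model concrete.

def income_level(dollars_per_day: float) -> int:
    """Map daily income per person to one of the four Factfulness levels."""
    if dollars_per_day < 2:
        return 1   # extreme poverty: bare subsistence
    elif dollars_per_day < 8:
        return 2   # basic needs met; able to think beyond today
    elif dollars_per_day < 32:
        return 3   # steady work, savings, the used car that runs
    else:
        return 4   # "us": cars, smartphones, family vacations

if __name__ == "__main__":
    for income in (1.5, 4, 16, 64):   # hypothetical daily incomes
        print(f"${income}/day -> Level {income_level(income)}")
```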

Rosling offers good advice about how to compensate for our dated models and the thinking habits that cause us to misinterpret the data. He organizes those bad thinking habits into ten instincts, ranging from our preference for bad news to our tendency to think in straight lines to our search for someone to blame. For each of those instincts he demonstrates why they feel so good and lead us so far astray. He offers a wealth of mini-cases and stories to teach us how to tease better insights out of the numbers by being smarter about the models we put to use.

A good number of my consulting colleagues pride themselves on their ability to “torture the data until it confesses.” Rosling shows us that seduction is a much smarter strategy if you want deep insight. He shows how to combine the data with knowledge of the world to derive insights that wouldn’t otherwise be easily found. For example, the data about the rates of vaccination around the world tells us about investment opportunities:

Vaccines must be kept cold all the way from the factory to the arm of the child. They are shipped in refrigerated containers to harbors around the world, where they get loaded into refrigerated trucks. These trucks take them to local health clinics, where they are stored in refrigerators. These logistic distribution paths are called cool chains. For cool chains to work, you need all the basic infrastructure for transport, electricity, education, and health care to be in place. This is exactly the same infrastructure needed to establish new factories. (Here’s a video where Rosling and Gates help visualize the process)

Factfulness is a master class in data interpretation and storytelling with data. Along the way, you’ll unlearn some bad models you’ve been carrying around and replace them with more accurate and effective ones. It doesn’t hurt that Rosling views the world from a systems perspective, is comfortable with “it’s complicated” as a response, and still demands that we take responsibility for our actions.

Rosling passed away in 2017 but his work continues with this book and with his children. When you’re done with the book, plan to spend some more time exploring these ideas at Gapminder.org, where you’ll find more data, some software tools, and more to learn.

Communications Divides Within the Organization; Look to Homer

Have you ever wondered what’s behind the conflict between geeks and suits? Sure, they think differently, but what, exactly, does that mean? A Jesuit priest who passed away in 2003 at the age of 90 may hold one clue.

Walter Ong published a slim volume in 1982 titled “Orality and Literacy: The Technologizing of the Word” that explored what the differences between oral and literate cultures suggest about how we think.

Remember Homer, the blind epic poet credited with “The Iliad” and “The Odyssey”? If we remember anything, it’s something along the lines of someone who managed to memorize and then flawlessly recite book-length poems for his supper.

The real story, which Ong details, is more interesting and more relevant to our organizational world than you might suspect. Homer sits at the boundary between oral culture and the first literate cultures.

In an oral culture, what you can think is limited to what you can remember and tell—without visual aids. Ong’s work shows that oral thinking is linear, additive, redundant, situational, engaged, and conservative. The invention of writing and the emergence of literate cultures allowed a new kind of thinking to develop: literate thinking is subordinate, analytic, objectively distanced, and abstract. It’s the underlying engine of science and the industrial revolution.

While this may be interesting for a college bull session, it’s particularly relevant to organizations. For all their dependence on the industrial revolution, organizations are human institutions first. Management is fundamentally an oral culture and is most comfortable with thought organized that way. Historically, leadership in organizations went to those most facile with the spoken word.

At the opposite extreme, information technology is a quintessentially literate activity with a literate mode of thought. In fact, IT cannot exist without the objective, rational, analytical thinking that literate culture enables.

How does the nature of this divide complicate conversations between IT and management? Can understanding the differing natures of oral and literate thought help us bridge that divide?

IT professionals have long struggled with getting a complex message across to management. In our honest and unguarded moments, we talk of “dumbing it down for the suits.” But the challenge is more subtle than that. We need to repackage the argument to work within the frame of oral thought. The easy part of that is oratorical and rhetorical technique. The more important challenge is to deal with the deeper elements of oral culture: being situational, engaged, and conservative. The right abstract answer can’t be understood until it is placed carefully within its context.

What management recognizes in its fundamentally oral mind is that organizations and their inhabitants spend most of their time in oral modes of thought. The oral mind is focused on tradition and stability because of how long it takes to embed a new idea. The techniques of change management that seem so obtuse to the literate, engineering mind are not irrational; they are oral. They are the necessary steps to embed new ideas and practices in oral minds.

Repeating a calculation or an analysis is nonsense in a literate culture. Management objections to an analytical proposal rarely turn on objections to the analysis. Walking through the analysis again at a deeper level of detail will not help. What needs to be done is to craft the oral culture story that will carry the analytical tale. It’s not about dumbing down an argument, it’s about repackaging it to match the fundamental thought processes of the target audience.

That might mean finding the telling anecdote or designating an appropriate hero or champion. Suppose, for example, that your analysis concludes it’s time to move toward document management to manage the files littering a shared drive somewhere or buried as attachments to three-year-old e-mails. Analytical statistics on improved productivity won’t do it. A scenario of the “day in the life” of a field sales rep would be better. Best would be a story of the sales manager who can’t find the marked-up copy of the last version of the contract.

These human stories are much more than the tricks of the trade of consultants and sales reps. They are recognition that what gets dressed up as the techniques of change management are really a bridge to the oral thinking needed to provoke action.

Seen in this light, what is typically labeled resistance to change is better understood as the necessary time and repetition to embed ideas in an oral mind.

Management understands something that those rooted in literate thinking may not. Knowing the right answer analytically has little or nothing to do with whether you can get the organization to accept that answer. What literate thinkers dismiss as “politics” is the essential work of translating and packaging an idea for acceptance and consumption in an oral culture.

The critical step in translating from a literate answer to an oral plan of action is finding a story to hang the answer on. The analysis only engages the mind; moving analysis to action must engage the whole person. Revealing this truth to the analytical minded can be discomforting. It’s equivalent to explaining to an accountant that the key to a Capital Expenditure proposal is theater, not economics. You might want to check out Steve Denning’s book, “The Springboard: How Storytelling Ignites Action in Knowledge-Era Organizations,” for some good insights into how to craft effective stories inside organizations.

In addition to helping the analytically inclined see the value of creating a compelling story, you need to help them see how and why story works differently than analysis. The best stories to drive change are not complex literary novels. They are epic poetry: tapping into archetypes and cliché, acknowledging tradition, grounded in the particular. You need to bring them to an understanding of why repetition and “staying on message” are key to shifting an oral culture’s course, not an evil invention of marketing.

Assume you teach the literate types in your IT organization how to repackage their analyses for consumption. They’ve now learned how to pitch their ideas in ways that will stick in the organization. What might you learn from their literate approach to thought? Is there an opportunity if you can get more of your organization and more of your management operating with literate modes of thinking?

Being able to write things down permits you to develop an argument that is more complex and sophisticated. On the plus side, this makes rocket science possible. On the negative side, you get lawyers.

On the other hand, if you are operating in an environment whose complexity demands a corresponding complexity in your organizational responses, then encouraging more literate thinking by more members of the organization is a good strategy.

What would such an organization look like compared to today’s dominant oral design? The mere presence of e-mail and an intranet is insufficient. E-mail tends to mirror oral modes of thought, particularly among more senior executives. Intranets tend to be over-controlled and, to the extent they contain examples of literate thinking, are rooted in an organizational culture that strives to confine the literate mind to the role of well-pigeonholed expert. The presence of particular tools, then, isn’t likely to be a good predictor, although their absence might be.

What of possible case examples? A few knowledge management success stories hold hints. Buckman Labs used discussion groups successfully to get greater leverage out of its staff’s knowledge and expertise. Whether this success built on literate modes of thought or simply on better distribution of oral stories is less clear. The successes of some widely distributed software development teams are worth looking at from this perspective.

Although it’s a bit too early to tell, the take-up of blogs and wikis inside organizations may be a harbinger of management based on literate thinking skills. They offer an interesting bridge between the oral and the literate by providing a way to capture conversation in a form that makes it visible and, hence, analyzable. As a class of tools, they begin to move institutional memory out of the purely oral and into the realm of the literate.

A Swiss Army Knife for Project Management

I’ve carried a pocket knife since my days as a stage manager/techie in college. A handful of useful tools in hand beats the perfect tool back in the shop or office. Courtesy of the TSA, I have to remember to leave it behind when I fly or surrender it to the gods of security theater, but every other day it’s in my pocket. There is, in fact, an entire subculture devoted to discussions of what constitutes an appropriate EDC—Every Day Carry—for various occupations and environments.

I’ve been thinking about what might constitute the equivalent EDC or Swiss Army Knife for the demands of project planning and management. We live in a project-based world but fail to equip managers with an appropriate set of essential project management tools.

Like experts in any field, project management professionals build their reputations by dealing with more complex and challenging situations. The Project Management Institute certifies project management professionals. PMBOK, the Project Management Body of Knowledge, has reached its sixth edition and runs to several hundred pages.

The complexities of large-scale project management push much training and education into the weeds of work breakdown structures, scope creep, critical-path mapping, and more. The message that project management is a job for professionals and the amateur need not apply is painfully clear. But we’re all expected to participate in project planning, and often we must lead projects without benefit of formal training in project management.

Recently, I looked at the need for project design before project management. The essential problem is to use a picture of where you want to end up to lay out a map of how to get there from wherever you are now.

The end is where to begin. Until you can conjure a picture of where you want to go, you have no basis to map the effort it will take to create it. Imagine what you need to deliver in reasonable detail and you can work backwards to the steps that will bring it into being. If you have a clear sense of where you are, you can also identify the next few steps forward.

Working out the steps that will take you from where you are to where you want to go can be done with two tools and three rules.

Tool #1: A calendar. If you can do it all without looking at one, you aren’t talking about a project.

Tool #2: A messy outline. An outline because it captures the essential features of ordering steps and clustering them. Messy because you can’t and won’t get it right the first time and the neat outlines you were introduced to in middle school interfere with that. (Personally, I’m partial to mind maps over pure outlines, but that is a topic for another time.)

Three rules generate the substance of the outline:

  1. Small chunks
  2. First things first
  3. Like things together

“Small chunks” is a reminder that the only way to eat an elephant is in small bites. There are a variety of heuristics about recognizing what constitutes an appropriate small chunk of a project. Somewhere between a day and a week’s worth of work for one person isn’t a bad starting point. Writing a blog post is a reasonable chunk; launching a new blog isn’t.

Generating an initial list of small chunks is the fuel that feeds an iterative process of putting first things first, grouping like things together, and cycling back to revise the list of small chunks.

The art in project management lies in being clever and insightful about sequencing and clustering activities. Here, we’re focused on the value of thinking through what needs to be done in what order before leaping to the first task that appears. That’s why an outline is a more useful tool at this point than Gantt charts or Microsoft Project. An outline adds enough structure over a simple to-do list to be valuable without getting lost in the intricacies of a complex software tool. It helps you discover similar tasks, deliverables, or resources that can be grouped together in your plans. An outline gives you order and clustering; the calendar links the outline to time. For many projects that will be enough. For the rest, it is the right place to start.
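To make the two tools concrete, here is a minimal sketch in Python of a messy outline linked to a calendar. The project, clusters, chunks, dates, and pacing are all hypothetical; the point is only that ordering (first things first) and clustering (like things together), plus rough dates, are enough structure to start with.

```python
# A minimal sketch of the "two tools": a messy outline plus a calendar.
# Every name, duration, and date below is made up for illustration.

from datetime import date, timedelta

outline = {
    "Launch new blog": {                      # the project, not a chunk
        "Set up platform": [                  # like things together
            "Choose hosting",
            "Install and configure theme",
        ],
        "Seed content": [
            "Draft first three posts",        # each chunk is roughly a day to a week
            "Collect images and links",
        ],
        "Go live": [
            "Final review",
            "Announce to mailing list",
        ],
    }
}

# The calendar: walk the outline in order and pencil in rough start dates.
start = date(2024, 6, 3)                      # hypothetical kickoff
for cluster, chunks in outline["Launch new blog"].items():
    for chunk in chunks:
        print(f"{start}  {cluster}: {chunk}")
        start += timedelta(days=3)            # crude one-chunk-at-a-time pacing
```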

The point of a project plan is not the plan itself, but the structure it brings to running the project. The execution value of widespread project planning capability in the organization is twofold. First, it adds capacity where it is needed: at the grassroots level. Second, it improves the inputs to those situations where sophisticated project-management techniques are appropriate.

 

Trust, Verify, and Triangulate

“Trust but verify” is no longer an effective strategy in an information-saturated world. When Ronald Reagan quoted the Russian proverb in 1985, it seemed clever enough; today it sounds hopelessly naive. If we are reasonably diligent executives or citizens, we understand and seek to avoid confirmation bias when important decisions are at hand. What can we do to compensate for the forces working against us?

It’s a cliché that we live in a world of information abundance. That cliché, however, has not led to the changes in information behaviors that it implies. We still operate as though information were a scarce commodity, believing that anyone who holds relevant data is automatically in a position of power and that information seekers depend on the holder’s munificence.

As data becomes abundantly and seemingly easily available, the problem for the information seeker changes. (It changes for the information holder as well, but that is a topic for another time.) The problem transforms from “Can I get the data?” to “What data exists? How quickly can I get it? How do I know I can trust it? How do I evaluate conflicting reports?” It is no longer simply a question of getting an answer but of getting an answer whose strengths and limits you understand and can account for.

The temptation is to fall into a trust trap, abdicating responsibility to someone else. “4 out of 5 dentists recommend…” “According to the Wall Street Journal…” “The pipeline predicts we’ll close $100 million…”

There was a time when we could at least pretend to seek out and rely on trustworthy sources. We counted on the staffs at the New York Times or Wall Street Journal to do fact checking for us. Today we argue that the fact checkers are biased and no one is to be trusted.

Information is never neutral.

Whatever source is collecting, packaging, and disseminating information is doing so with its own interests in mind. Those interests must be factored into any analysis. For example, even data as seemingly impartial as flight schedules must be viewed with a skeptical eye. In recent years, airlines improved their on-time performance as much by adjusting schedules as by any operational changes. You can interpret new flight schedules as an acknowledgment of operational realities or as padding to enable the reporting of better on-time performance.

If we are a bit more sophisticated, we invest time in understanding how those trusted sources gather and process their information, verifying that their processes are sound, and accept their reports as reliable inputs. Unfortunately, “trust but verify” is no longer a sufficient strategy. The indicators we once used to assess trust have become too easy to imitate. Our sources of information are too numerous and too distributed to contemplate meaningful verification.

Are we doomed? Is our only response to abandon belief in objective truth and cling to whatever source best caters to our own bias? Fortunately, triangulation is a strategy well matched to the characteristics of today’s information environment.

In navigation, you determine your position by taking sightings on known landmarks. You need at least three to fix your current position. Moreover, you need to account for the limits of your measurement tools to understand the precision of your fix. In today’s world of GPS navigation, that precision might well be very high but still imperfect.
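As a toy illustration of that navigation analogy, the sketch below recovers a position from distances to known reference points. The landmark coordinates and distances are invented, and real navigation also has to model measurement error, which is exactly why more widely spread sightings produce a tighter fix.

```python
# A toy position fix from distances to known landmarks (all values invented).
# Least squares absorbs small inconsistencies among noisy sightings.

import numpy as np

def fix_position(landmarks, distances):
    """Estimate (x, y) from distances to three or more known landmarks."""
    (x_ref, y_ref), r_ref = landmarks[-1], distances[-1]
    rows, rhs = [], []
    for (x_i, y_i), r_i in zip(landmarks[:-1], distances[:-1]):
        # Subtracting the reference circle equation linearizes the problem.
        rows.append([2 * (x_ref - x_i), 2 * (y_ref - y_i)])
        rhs.append(r_i**2 - r_ref**2 + x_ref**2 - x_i**2 + y_ref**2 - y_i**2)
    solution, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return tuple(solution)

landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]    # hypothetical known points
true_pos = np.array([3.0, 4.0])
distances = [np.hypot(*(true_pos - np.array(p))) for p in landmarks]
print(fix_position(landmarks, distances))              # approximately (3.0, 4.0)
```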

In organizational (and other) settings where you are attempting to make sense of—or draw useful inferences from—a multitude of noisy and conflicting sources, the principles of triangulation offer a workable strategy for developing useful insights in a finite and manageable amount of time.

In navigation, the more widely and evenly dispersed your sightings, the more precisely you can fix your position. Focus your data collection on identifying and targeting multiple sources of input that represent divergent, and possibly conflicting, perspectives. Within an organization, for example, work with supporters and opponents, both active and passive, of a proposed reorganization or systems deployment to develop an implementation strategy. When evaluating and selecting a new application, seek out a wider assortment of potential references, vendors, and analysts.

Triangulation also helps counteract the simplistic notion of balance that undermines too many narratives. Triangulation is based on living in a three-dimensional world; it cannot tell you where you are with only two fixes. We should be at least as diligent when mapping more complex phenomena.

A data collection effort organized around seeking multiple perspectives and guidance risks spinning out of control. Analyzing data in parallel with collection manages that risk. Process and integrate data as it is collected. Look for emerging themes and issues and work toward creating a coherent picture of the current state. Such parallel analysis/collection efforts will identify new sources of relevant input and insight to be added to the collection process.

Monitoring the emergence of themes, issues, and insights will signal when to close out the collection process. In the early stages of analysis, the learning curve will be steep, but it will begin to flatten out over time. As the analysis begins to converge and the rate of new information and insight begins to drop sharply, the end of the data collection effort will be near.
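One way to make that stopping signal concrete is to track how many genuinely new themes each batch of input adds, a crude saturation check. The sketch below is illustrative only; the batches, theme labels, and threshold are invented.

```python
# A minimal sketch of monitoring convergence while collecting and analyzing
# in parallel. Collection winds down when a batch adds little that is new.

def should_stop(batches, new_theme_threshold=1):
    """Return the batch index at which new themes per batch dropped to the
    threshold or below, signaling that data collection can wind down."""
    seen = set()
    for index, batch in enumerate(batches, start=1):
        new_themes = set(batch) - seen
        seen |= set(batch)
        print(f"Batch {index}: {len(new_themes)} new theme(s), {len(seen)} total")
        if len(new_themes) <= new_theme_threshold:
            return index
    return None   # still learning; keep collecting

batches = [
    {"cost", "training", "morale"},            # early batches: steep learning curve
    {"training", "vendor lock-in", "timing"},
    {"morale", "timing"},                      # little that is new: converging
]
print("Stop after batch:", should_stop(batches))
```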

The goal of fact finding and research is to make better decisions. You can’t set a course until you know where you are. How carefully you need to fix your position or assess the context for your decision depends on where you hope to go. Thinking in terms of triangulation—how widely you distribute your input base and what level of precision you need—offers a data collection and analysis strategy that is more effective and efficient than the approaches we grew accustomed to in simpler times.

Chaos players: knowledge work as performance art

All the world’s a stage, And all the men and women merely players; They have their exits and their entrances, And one man in his time plays many parts
As You Like It, Act II, Scene VII. William Shakespeare

I’ve been thinking about the role of mental models for sense-making. While we do this all the time, I think there is significant incremental value in making those models more explicit and then playing with them to tease out their implications. Organization as machine is a familiar example, one that I believe is largely obsolete. Organization as ecosystem or complex adaptive system has grown in popularity. It has the advantage of being richer and more sensitive to the complexities of modern organizations and their environments. On the other hand, that mental model is a bit too appealing to academic and consulting desires to sound simultaneously profound and unintelligible. It fails to provide useful guidance through the day-to-day challenges of competing and surviving.

Organization as performance art or theatrical production offers a middle ground between simplistic and over-engineered. It appeals to me personally, given my own long history of staging and producing. It’s my hypothesis that most of us have enough nodding familiarity with the theater to take advantage of the metaphor and model without so much knowledge that the little details interfere with the deeper value.

The goal of theater is to produce an experience for an audience. That experience must always be grounded in the practical art of the possible. This gives us something to work with.

Let’s work backwards from the performance. We have the players on the stage and an audience with expectations about what they are about to experience. If that is all we have, then we are in the realm of storytelling. Storytelling demands both the tellers of the tale and the creator of the tale itself. Our playwright starts with an idea and crafts a script to bring that idea to life and connect it to all of the other stories and ideas the audience will bring to the experience. We now have a story, its author, storytellers, and an audience with their expectations.

Theater takes us a step farther and asks us to think about production values that contribute to and enhance the experience we hope to create. Stage and sets and lighting and sound can all be drawn into service of the story. Each calls for different expertise to design, create, and execute. We now have multiple experts who must collaborate and we have processes to be managed. Each must contribute to the experience being created. More importantly, those contributions must all be coordinated and integrated into the intended experience.

This feels like a potentially fruitful line of inquiry. It seems to align well with an environment that depends on creativity and innovation as much as or more than simple execution. How deeply should it be developed?

Going behind the screen: mental models and more effective software leverage

I’ve been writing at a keyboard now for five decades. As it’s Mother’s Day, it is fitting that my mother was the one who encouraged me to learn to type. Early in that process, I was also encouraged to learn to think at the keyboard and skip the handwritten drafts. That was made easier by my inability to read my own handwriting after a few hours.

I first started text editing as a programmer writing Fortran on a Xerox SDS Sigma computer. I started writing consulting reports on Wang Word Processors. When PCs hit the market, I made my way through a variety of word processors including WordStar, WordPerfect, and Microsoft Word. I also experimented with an eclectic mix of other writing tools such as ThinkTank, More, Grandview, Ecco Pro, OmniOutliner, and MindManager. Today, I do the bulk of my long-form writing using Scrivener on a Mac together with a suite of other tools.

The point is not that I am a sucker for bright, shiny objects—I am—or that I am still in search of the “one, true tool.” This parade of tools over years and multiple technology platforms leads me to the observation that we would be wise to pay much more attention to our mental models of software and the thinking processes they support.

That’s a problem because we are much more comfortable with the concrete than the abstract. You pick up a shovel or a hammer and what you can do is pretty clear. Sit down at a typewriter with a stack of paper and, again, you can muddle through on your own. Replace the typewriter and paper with a keyboard and a blank screen and life grows more complicated.

Fortunately, we are clever apes and as good disciples of Yogi Berra we “can observe a lot just by watching.” The field of user interface design exists to smooth the path to making our abstract tools concrete enough to learn.

UI design falls short, however, by focusing principally on the point where our senses—sight, sound, and touch—meet the surface of our abstract software tools. It’s as if we taught people how to read words and sentences but never taught them how to understand and follow arguments. We recognize that a book is a kind of interface between the mind of the author and the mind of the reader. A book’s interface can be done well or done badly, but the ultimate test is whether we find a window into the thoughts and reasoning of another.

We understand that there is something deeper than the words on the page. Our goal is to get behind the words and into the thinking of the author. This same goal exists in software; we need to go behind the interface we see on the screen to grasp the programmer’s thinking.

We’ll come back to working with words in a moment. First, let’s look at the spreadsheet. I remember seeing Visicalc for the first time—one year too late to get me through first year Finance. What was visible on the screen mirrored the paper spreadsheets I used to prepare financial analyses and budgets. The things I understood from paper were there on the screen; things that I wished I could do with paper were now easy in software. They were already possible and available by way of much more complex and inscrutable software tools but Visicalc’s interface created a link between my mind and Dan Bricklin’s that opened up the possibilities of the software. I was able to get behind the screen and that gave me new power. That same mental model can also be a hindrance if it ends up limiting your perception of new possibilities.

Word processors also represent an interface between writers and the programmer’s model of how writing works. That model can be more difficult to discern. If writer and programmer have compatible models, the tools can make the process smoother. If the models are at odds, then the writer will struggle and not understand why.

Consider the first stand-alone word processors like the Wang. These were expensive, single-function machines. The target market was organizations with separate departments dedicated to the production of documents: insurance policies, user manuals, formal reports, and the like. The users were clerical staff—generally women—whose job was to transform handwritten drafts into finished product. The software was built to support that business process, and the process was reflected in the design and operation of the software. Functions and features of the software supported revising copy, enforcing formatting standards, and other requirements of the business.

The economics that drove the personal computer revolution changed the potential market for software. While Wang targeted organizations and word processing as an organizational function, software programmers could now target individual writers. This led to a proliferation of word processing tools in the 1980s and 1990s reflecting multiple models of the writing process. For example, should the process of creating a draft be separate from the process of laying out the text on the page? Should the instructions for laying out the text be embedded in the text of the document or stored separately? Is a long-form product such as a book a single computer file, a collection of multiple files, or something else?
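To make one of those design questions tangible, here is a small illustrative sketch of formatting embedded in the text versus stored as a separate layer. Both representations and the toy render step are invented for illustration and are not drawn from any particular product.

```python
# Two invented models of the same short passage.

# Model A: layout instructions live inside the text itself (markup style).
embedded = "The *quarterly report* is due **Friday**."

# Model B: the draft stays plain; formatting is a separate layer keyed to spans.
plain_text = "The quarterly report is due Friday."
styles = [
    {"start": 4, "end": 20, "style": "italic"},   # "quarterly report"
    {"start": 28, "end": 34, "style": "bold"},    # "Friday"
]

def render_separate(text, style_runs):
    """Apply out-of-band styles; the writer's draft itself never changes."""
    marks = {"italic": "*", "bold": "**"}
    out, cursor = [], 0
    for run in sorted(style_runs, key=lambda r: r["start"]):
        mark = marks[run["style"]]
        out.append(text[cursor:run["start"]])
        out.append(mark + text[run["start"]:run["end"]] + mark)
        cursor = run["end"]
    out.append(text[cursor:])
    return "".join(out)

print(render_separate(plain_text, styles) == embedded)   # True: same output, different models
```

The writer’s experience differs between the two: in the first model every edit touches the markup, while in the second the draft stays clean and formatting can change without rewriting a word.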

Those decisions influence the writer’s process. If your process meshes with the programmer’s, then life is good. If they clash, the tool will get in the way of good work.

If you don’t recognize the issue, then your success or failure with a specific tool can feel capricious. If you select a tool without thinking about this fit, then you might blame yourself for problems and limitations that are caused by using a tool that clashes with your process.

Suppose we recognize that this issue of mental models exists. How do we take advantage of that perspective to become more effective in leveraging available tools? A starting point is to reflect on your existing work practices and look for the models you may be using. Are there patterns in the way you approach the work? Do you start a writing project by collecting a group of interesting examples? Do you develop an explicit hypothesis and search out matching evidence? Do you dive into a period of research into what others have written and look for holes?

Can you draw a picture of your process? Identify the assumptions driving your process? Map your software tools against your process?

These are the kinds of questions that designers ask and answer about organizational processes. When we work inside organizations, we accept the processes as a given. In today’s environment for knowledge work, we have the capacity to operate effectively at a level that can match what organizations managed not that long ago. Given that level of potential productivity and effectiveness, we now need to apply the same level of explicit thought and design to our personal work practices.

Design projects before worrying about managing them

 I’m gearing up to teach project management again over the summer. There’s a notion lurking in the back of my mind that warrants development.

We would do a better job at managing projects if we spent more time designing them first.

We turn something into a project when we encounter a problem that we haven’t tackled before and can’t see exactly how to solve in one step. The mistake we make is to spend too much time thinking about the word “management” at the expense of the word “project.”

Suppose our problem is to improve the way we bring new hires on board in a rapidly growing organization. There’s an obligatory amount of HR and administrative paperwork to complete, but the goal is to integrate new people into the organization. Do you send them off immediately to new assignments? Do you design a formal training program? There’s a host of questions and a host of possible ideas so you now have a potential project.

Given the pressures in organizations to “get on with it,” the temptation is to grab a half-formed idea, sketch a plausible plan, guess at a budget, pick a deadline, and start. If we have some project management practices and discipline, we’ll flesh out the plan and develop something that looks like a work breakdown structure and a timeline.

What we don’t give sufficient attention to is the design thinking that could transform this project idea into something that adds meaningful and unexpected value to the organization. We don’t give that attention because we don’t view a project as something worthy of design.

What would it mean to design a project? Two ideas come to mind. The first would be to generate ways to increase the value of the effort. In our on-boarding example, one objective was to offer new employees practice in preparing client presentations. Rather than develop a generic presentation skills module, the project team came up with a better idea. We asked teams in the field for research and competitive intelligence tasks on their to do lists and transformed those into assignments for the on-boarding program. This allowed us to accomplish the presentation skills training, contribute to actual client work, and build bridges between new employees and their future colleagues.

A second design thinking aspect would be a more deliberate focus on generating multiple concepts and approaches that would evolve into a work breakdown structure. My conjecture is that more projects are closer to one-off efforts than to executing the nth iteration of a programming or consulting project. Project management personalities are tempted to standardize and control prematurely. But standard work plans and templates aren’t relevant until there are enough projects of a certain category to warrant the investment. Doing this kind of design work calls for expanding the toolset beyond the conventional project management toolkit that is focused on tracking and monitoring the sequence of tasks.

This is a notion in development. It isn’t fully baked but my instinct is that it is worth fleshing out. Some questions I’m curious to get reactions to include:

  • Is making a distinction between project design and project execution worth the trouble?
  • How do standard project management tools—Gantt charts, project management software such as Microsoft Project or Primavera, or spreadsheets—interfere with early stage idea generation and creativity?
  • How might you/do you expand the toolset to support better design thinking?

Review – Scrum: The Art of Doing Twice the Work in Half the Time

 Scrum: The Art of Doing Twice the Work in Half the Time. Jeff Sutherland and JJ Sutherland

There’s a rule of thumb in software development circles that the best programmers can be ten times as productive as average programmers. This is the underlying argument for why organizations seek to find and hire the best people. There is research to support this disparity in productivity. There is similar research on the relative productivity of teams. There, the gap between the best teams and the rest is far larger; by some accounts as much as 1,000 times.

Jeff Sutherland makes the case that the practices that collectively make up “Scrum” are one strategy for realizing those kinds of payoffs.

Sutherland is a former fighter pilot, a software developer, one of the original signatories of the Agile Manifesto, and the inventor of Scrum. This is his story of how Scrum came to be, what it is, and why it’s worthwhile. The particular value of the book is its focus on why Scrum is designed the way it is and why that matters.

Scrum’s origins and primary applications have been in the realm of software development. Sutherland builds an argument that Scrum’s principles and methods apply more broadly. Although he doesn’t make this argument directly, this wider applicability flows from evolutionary changes in the organizational environment. Changes in the pace and complexity of organizational work simultaneously make conventional approaches less effective and Scrum more so.

Scrum is a collection of simple ideas; it’s a point of view about effective problem solving more than a formalized methodology. That’s important to keep in mind because, like too many solid ideas, the essence can get lost in the broader rush to capitalize on those ideas. There appears to be an unlimited supply of training courses, consultants, and the usual paraphernalia of a trendy business idea; you’re better off spending time reading and thinking about what Sutherland has to say first. That may be all you need if you’re then willing to make the effort to put those ideas into practice.

Sutherland traces the roots of Scrum to the thinking underlying the Toyota Production System. He also draws interesting links to John Boyd’s work on strategy embodied in the OODA Loop and to the martial arts. Scrum is built on shortening the feedback loop between plans and action. It is a systematic way of feeling your way forward and adapting to the terrain as you travel over it.

Sutherland draws a sharp contrast with more traditional management techniques such as Waterfall project management approaches and their well-worn trappings such as Gantt charts and voluminous unread and unreadable requirements documents.

Understanding the managerial appeal and limitations of these trappings is key to grasping the contrasting benefits of Scrum. Waterfalls and Gantt charts appeal to managers because they promise certainty and control. They can’t deliver on that promise in today’s environment. In the software development world, they never could and in today’s general organizational environment they also come up short.

Understanding that appeal and why it is misplaced clarifies the strengths of Scrum. There was a time when managers came from the ranks of the managed. They had done the work they were now responsible for overseeing and were, therefore, qualified to provide the direction and feedback needed to pick a path and follow it. Management was primarily about execution and not about innovation.

The illusion in waterfall and other planning exercises is that what we are doing next is a repeat of what we have done before. If we have built 100 houses, we can be confident of what it will take to build the 101st. If we are building a new road or a new bridge, then what we have learned from the previous roads and bridges we’ve built can provide a fairly precise estimate for the next.

This breaks down, however, when we are building in new terrain or experimenting with new designs. The insights and experience of those who’ve built in the past don’t transfer cleanly to this more dynamic environment. The world of software has always been new territory and we are always experimenting. The terrain is always in flux even when the technology is temporarily stable. Now, it is those who are doing the work who are best positioned to plan and manage as we move into new territories and terrain.

Scrum comes into play when we are moving into territory where there are no roads and no maps. If you are moving into new territory, you can only plan as far ahead as you can see. Sutherland puts it thus:

Scrum embraces uncertainty and creativity. It places a structure around the learning process, enabling teams to assess both what they’ve created and, just as important, how they created it. The Scrum framework harnesses how teams actually work and gives them the tools to self-organize and rapidly improve both speed and quality of work.

There’s a terminology and a set of techniques that make up Scrum. Sutherland covers the basics of such notions as scrum masters, product owners, backlog, sprints, retrospectives, communication saturation, continuous improvement, and stand up meetings. But he’s no fan of turning these into dogma.

Scrum runs the risk of being viewed as no more than the latest management fad. Sutherland is a true believer and has evidence to support his belief. There are lots of true believers but only a few are willing to bring substantive evidence to back up that belief. That earns Sutherland the right to offer his own closing argument:

What Scrum does is alter the very way you think about time. After engaging for a while in Sprints and Stand-ups, you stop seeing time as a linear arrow into the future but, rather, as something that is fundamentally cyclical. Each Sprint is an opportunity to do something totally new; each day, a chance to improve. Scrum encourages a holistic worldview. The person who commits to it will value each moment as a returning cycle of breath and life.

The heart of Scrum is rhythm. Rhythm is deeply important to human beings. Its beat is heard in the thrumming of our blood and rooted in some of the deepest recesses of our brains. We’re pattern seekers, driven to seek out rhythm in all aspects of our lives.

What Scrum does is create a different kind of pattern. It accepts that we’re habit-driven creatures, seekers of rhythm, somewhat predictable, but also somewhat magical and capable of greatness.

When I created Scrum, I thought, What if I can take human patterns and make them positive rather than negative? What if I can design a virtuous, self-reinforcing cycle that encourages the best parts of ourselves and diminishes the worst? In giving Scrum a daily and weekly rhythm, I guess what I was striving for was to offer people the chance to like the person they see in the mirror.

Learning from cases; getting smarter by design

One of my enduring memories from my first days in business school is a video interview of a second year student offering advice on surviving the case method. While I didn’t fully appreciate it at the time, it was totally appropriate that his advice was a case study in its own right.

He began with a story of the business challenge of “sexing chickens.” In the chicken business, it turns out that it is important to separate roosters and hens when both are no more than puffs of feathers. The clues are subtle and not reducible to a checklist. Novice chicken sexers can’t be taught to do their job but they can learn.

Prospective chicken sexers go through an apprenticeship. They sit down with a batch of chicks, pick one up, flip it over, inspect, and guess. Sitting next to our prospect is a veteran chicken sexer. The veteran’s job is to give the prospect feedback: either a “yes” for a correct guess or a smack in the head for a mistake. After a hundred or so guesses, the prospect’s error rate will drop close to zero. Neither the prospect nor the veteran will be able to offer an explicit theory of what they are doing, yet they are effective.

Learning by the case method is a similar process of guessing and rapid, memorable, feedback. As an aside, recognize that this also describes the essence of machine learning. It is experiential learning at its purest.
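Since the aside about machine learning is doing real work here, a toy sketch may help. The code below mimics the apprenticeship: guess, get a correction, adjust. The “cue” features, the hidden rule, and the perceptron-style update are all invented for illustration and are not drawn from the original text.

```python
# A toy version of the chicken-sexer's apprenticeship: guess, get a yes/no
# correction, and nudge internal weights. Plain perceptron updates on made-up
# "subtle cue" features; the point is learning purely from feedback,
# without ever articulating an explicit theory.

import random

random.seed(0)
weights = [0.0, 0.0, 0.0]   # no theory to start with, just adjustable hunches

def guess(cues):
    score = sum(w * c for w, c in zip(weights, cues))
    return 1 if score > 0 else 0

def feedback(cues, answer):
    """The veteran's smack: shift weights only when the guess was wrong."""
    error = answer - guess(cues)
    for i, c in enumerate(cues):
        weights[i] += 0.1 * error * c

def hidden_rule(cues):
    """The 'right answer' the learner never sees stated explicitly."""
    return 1 if cues[0] + 0.5 * cues[1] - cues[2] > 0 else 0

# A couple hundred corrected guesses, analogous to the apprenticeship.
for _ in range(200):
    cues = [random.uniform(-1, 1) for _ in range(3)]
    feedback(cues, hidden_rule(cues))

trials = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(100)]
accuracy = sum(guess(c) == hidden_rule(c) for c in trials) / len(trials)
print(f"accuracy after apprenticeship: {accuracy:.0%}")
```

Like the chicken sexer, the learner ends up making reliable judgments with no articulated theory; whatever “theory” exists lives entirely in the adjusted weights.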

The craft in designing case method learning lies in selecting and sequencing cases so that the lessons can be delivered more rapidly and reliably than the random accumulation of experience permits. The assumption here is that there is an order that can be exploited to guide action. There must be an underlying pattern that one can solve for.

If you subscribe to the value of the case method as a learning strategy, you are making a claim that there is a balance between theory and practice to be managed. It is a claim that the particulars matter; that experience or theory can only go so far in crafting a response; and that you have a responsibility to design a response that acknowledges every situation is a mix of old and new, predictable and unpredictable.

There’s a notion here that I am struggling with that has to do with the rate of change. I think that case-based learning potentially makes this issue more visible.

In a slow-changing world, experience matters greatly. Recognize how this situation maps to what we’ve seen and responded to before, and the right action is clear. As the rate of change increases, the value of experience changes. Prior experiences suggest actions and responses that can serve as the basis for designing a modified response that blends old and new.

Pushing the rate of change still higher means that effective response demands more design and less “here’s what we’ve done before.” What does that imply for learning effectively?

My hypothesis is that we need to make the experiential learning process visible, explicit, and deliberate. In a conventional case-based learning environment, there is a separation between those who are learning and those who are facilitating the learning—which is not the same thing as teaching. The goal is for those learning to build a robust, internal, theory of action. The facilitators have strong ideas about what that theory should look like and cases are sequenced to force the learners to develop an internal theory that matches the target theory.

What’s happening in higher-paced environments is that our learners and facilitators are becoming harder to distinguish. In a sense, we are back to a world of pure learning from experience. What changes is that as learners we now must be responsible for building our theories dynamically.

While this can still be described as a form of reflective practice, that reflection now must operate at several levels of abstraction. We can’t rely simply on organic processes to slowly and unpredictably get smarter over time. We need to get smarter on purpose and by design.