It’s all a product of design

During my senior year in high school, several of my classmates and I decided that we wanted to put on a play. That wasn’t something baked into the flow of the year at that point. Some years, there was a production; some years, not. We made the case to the powers that be and got permission to proceed.

We were an all-boys school, so the first order of business was to partner with one of the nearby girls’ schools. The ecosystem of private Catholic high schools in St. Louis was no different from any other universe of institutions; there were clear pecking orders of who was worthy to associate with whom. At that time, our school was at the top of the natural order of things.

As one of the producers, I successfully pushed the decision in the direction of a school that was seen by many as beneath several other options. I used to joke that this was a clever ruse to gain access to an environment where we would face less competition. Regardless, it was an early example of a decision that called for incorporating a collection of factors beyond simple technical considerations. Not that I recognized that at the time.

The other curious aspect of this production was the play we chose to stage: The Absence of a Cello by Ira Wallach. I had forgotten most of the details of this comedy until I started thinking about this piece. It is yet another example of seeds that grow in unexpected ways.

Wallach’s play is a comedy about a chemist who is seeking to leave his academic life for a corporate job. He and his family conclude that success depends on concealing all of the interests and eccentricities that define them in order to conform to their expectation of what corporate uniformity demands.

It was a cautionary tale that brains and creativity had no place in the business world. That seemed like a perfectly plausible hypothesis to an eighteen-year-old nerd. But this particular nerd has also never had a good record of accepting conventional wisdom at face value. More a record of running experiments on things–including organizational systems–to discover what did or didn’t work.

Thinking about The Absence of a Cello reminds me of some of my other reading preferences. I’ve mentioned my love of science fiction, for example. As I think of the kinds of stories that draw and hold my interest, I can discern a long history of stories about subverting systems rather than tearing them down.

One interesting side effect of reading without close oversight or supervision is that you encounter all sorts of ideas before you are theoretically mature enough to understand them. One central idea that I encountered without someone to argue that it was naive was that all systems–including organizations, economies, and cultures–grow out of design choices made by someone.

Design choices then spawn a history of subsequent events. While it can be tempting to view the stream of events as inevitable or a product of natural law or divine intervention, there was always a design decision at the outset.

This is a liberating perspective. Knowledge and experience teach you that change can be slow and frustrating. But from the stance of design, change is always possible. If there was human agency in the original design decisions, then human agency can always make a new design decision and trigger a new stream of events.

Going behind the screen: mental models and more effective software leverage

Osborne 1 Luggable PC

I’ve been writing at a keyboard now for five decades. As it’s Mother’s Day, it is fitting that my mother was the one who encouraged me to learn to type. Early in that process, I was also encouraged to learn to think at the keyboard and skip the handwritten drafts. That was made easier by my inability to read my own handwriting after a few hours.

I first started text editing as a programmer writing Fortran on a Xerox SDS Sigma computer. I started writing consulting reports on Wang Word Processors. When PCs hit the market, I made my way through a variety of word processors including WordStar, WordPerfect, and Microsoft Word. I also experimented with an eclectic mix of other writing tools such as ThinkTank, More, Grandview, Ecco Pro, OmniOutliner, and MindManager. Today, I do the bulk of my long-form writing using Scrivener on a Mac together with a suite of other tools.

The point is not that I am a sucker for bright, shiny objects—I am—or that I am still in search of the “one, true tool.” This parade of tools across years and multiple technology platforms leads me to the observation that we would be wise to pay much more attention to our mental models of software and the thinking processes they support.

That’s a problem because we are much more comfortable with the concrete than the abstract. You pick up a shovel or a hammer and what you can do is pretty clear. Sit down at a typewriter with a stack of paper and, again, you can muddle through on your own. Replace the typewriter and paper with a keyboard and a blank screen and life grows more complicated.

Fortunately, we are clever apes and, as good disciples of Yogi Berra, we “can observe a lot just by watching.” The field of user interface design exists to smooth the path to making our abstract tools concrete enough to learn.

UI design falls short, however, by focusing principally on the point where our senses—sight, sound, and touch—meet the surface of our abstract software tools. It’s as if we taught people how to read words and sentences but never taught them how to understand and follow arguments. We recognize that a book is a kind of interface between the mind of the author and the mind of the reader. A book’s interface can be done well or done badly, but the ultimate test is whether we find a window into the thoughts and reasoning of another.

We understand that there is something deeper than the words on the page. Our goal is to get behind the words and into the thinking of the author. The same goal exists in software; we need to go behind the interface we see on the screen to grasp the programmer’s thinking.

We’ll come back to working with words in a moment. First, let’s look at the spreadsheet. I remember seeing VisiCalc for the first time—one year too late to get me through first-year Finance. What was visible on the screen mirrored the paper spreadsheets I used to prepare financial analyses and budgets. The things I understood from paper were there on the screen; things that I wished I could do with paper were now easy in software. They were already possible and available by way of much more complex and inscrutable software tools, but VisiCalc’s interface created a link between my mind and Dan Bricklin’s that opened up the possibilities of the software. I was able to get behind the screen, and that gave me new power. The same mental model can also be a hindrance if it ends up limiting your perception of new possibilities.

Word processors also represent an interface between writers and the programmer’s model of how writing works. That model can be more difficult to discern. If writer and programmer have compatible models, the tools can make the process smoother. If the models are at odds, then the writer will struggle and not understand why.

Consider the first stand-alone word processors like the Wang. These were expensive, single-function machines. The target market was organizations with separate departments dedicated to the production of documents: insurance policies, user manuals, formal reports, and the like. The users were clerical staff—generally women—whose job was to transform handwritten drafts into finished products. The software was built to support that business process, and the process was reflected in the design and operation of the software. Functions and features of the software supported revising copy, enforcing formatting standards, and other requirements of the business.

The economics that drove the personal computer revolution changed the potential market for software. While Wang targeted organizations and word processing as an organizational function, software programmers could now target individual writers. This led to a proliferation of word processing tools in the 1980s and 1990s reflecting multiple models of the writing process. For example, should the process of creating a draft be separate from the process of laying out the text on the page? Should the instructions for laying out the text be embedded in the text of the document or stored separately? Is a long-form product such as a book a single computer file, a collection of multiple files, or something else?
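To make those design choices concrete, here is a small, purely hypothetical sketch in Python. The markup tags, field names, and file names are invented for illustration and are not modeled on any particular product; the point is only to show three different answers a programmer might give to the questions above.

```python
# Hypothetical document models; the markup, fields, and file names are invented
# to illustrate design choices, not taken from any real word processor.

# Model A: layout instructions embedded directly in the text stream.
embedded = "@title{The Absence of a Cello}@para{Wallach's play is a comedy about a chemist...}"

# Model B: content and presentation kept separate, so either can change alone.
content = {
    "title": "The Absence of a Cello",
    "paragraphs": ["Wallach's play is a comedy about a chemist..."],
}
stylesheet = {
    "title": {"font": "Garamond", "size": 18, "bold": True},
    "paragraph": {"font": "Garamond", "size": 11},
}

# Model C: a book as a project of many small files rather than one long document.
project = [
    "chapters/01-the-offer.txt",
    "chapters/02-the-dinner-party.txt",
    "notes/research.txt",
]
```

Each representation makes some writerly moves easy and others awkward, which is exactly where the fit, or clash, between the writer’s model and the programmer’s shows up.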

Those decisions influence the writer’s process. If your process meshes with the programmer’s, then life is good. If they clash, the tool will get in the way of good work.

If you don’t recognize the issue, then your success or failure with a specific tool can feel capricious. If you select a tool without thinking about this fit, then you might blame yourself for problems and limitations that are caused by using a tool that clashes with your process.

Suppose we recognize that this issue of mental models exists. How do we take advantage of that perspective to become more effective in leveraging available tools? A starting point is to reflect on your existing work practices and look for the models you may be using. Are there patterns in the way you approach the work? Do you start a writing project by collecting a group of interesting examples? Do you develop an explicit hypothesis and search out matching evidence? Do you dive into a period of research into what others have written and look for holes?

Can you draw a picture of your process? Identify the assumptions driving your process? Map your software tools against your process?

These are the kinds of questions that designers ask and answer about organizational processes. When we work inside organizations, we accept the processes as a given. In today’s environment for knowledge work, we have the capacity to operate effectively at a level that can match what organizations managed not that long ago. Given that level of potential productivity and effectiveness, we now need to apply the same level of explicit thought and design to our personal work practices.

Collaboration, games, and the real world

I’ve been thinking a lot about hard problems that need multiple people collaborating to solve. There’s no shortage of them to choose from.

This TED video from Jane McGonigal makes a persuasive case that I need to invest some more time looking at the world of online gaming for insight. Watch the video and see if you don’t come to a similar conclusion.

Can you design business models? A review of "Seizing the White Space."

[cross-posted at FASTforward blog]

Seizing the White Space: Business Model Innovation for Growth and Renewal, by Mark W. Johnson

What is a "business model" and can you create a new one in a systematic and disciplined way? That’s the question that Mark Johnson, chairman of the consulting firm Innosight, sets for himself in Seizing the White Space.

The term entered the popular business lexicon during the dotcom boom in the late 1990s. There wasn’t any particular definition behind the term at the outset. Effectively, it was shorthand for the answer to question zero about any business – "How are we planning to make money?" Before the dotcom boom, nine times out of ten, the answer was "we’ll copy what Company X is doing and execute better than they do." During the boom, the answer seemed to be "we have absolutely no idea, but it’s going to be great." Now we recognize that both of those answers are weak and that we need some theory to design answers that are likely to be successful.

Over the last decade and a half, there’s been a steady stream of excellent thought and research focused on building that theory. One of the major tributaries in that stream has been the work of Clay Christensen on disruptive innovation. Christensen and his colleagues, including Johnson, have been engaged in a multi-year action research program working out the details and practical implications of the theory of disruptive innovation. Seizing the White Space is the latest installment in this effort and is best understood if you’ve already invested in understanding what has come before.

Johnson starts with a definition of white space as

the range of potential activities not defined or addressed by the company’s current business model, that is, the opportunities outside its core and beyond its adjacencies that require a different business model to exploit

(p. 7)

Why do organizations need to worry about white space? Even with success at exploiting their current business model and serving existing customers, organizations reach a point where they can’t meet their growth goals. Many an ill-considered acquisition has been pursued to plug this growth gap. Haphazard efforts at innovation to create new products or services or enter new markets get their share of the action.

Johnson combines an examination of white space and business models in an effort to bring more order and discipline to the challenge of filling those growth gaps. One implication of this approach is that the primary audience for his advice is existing organizations with existing successful business models. He is less interested in how disruptive innovation processes apply in start-up situations.

Johnson’s model of business models is deceptively simple. He illustrates it with the following diagram:

[Diagram: Johnson’s four-box business model]

Johnson expands on the next level of detail for each of these elements. Most of that is straightforward. More importantly, the model emphasizes balancing each of these elements against the others.

In the middle third of the book, Johnson takes a deeper look at white space, dividing it into white space within, beyond, and between, which correspond to transforming existing markets, creating new markets, and dealing with industry discontinuity. It’s a bit clever for my tastes, but it does provide Johnson with the opportunity to examine a series of illuminating cases including Dow Corning’s Xiameter, Hilti’s tool management and leasing program, Hindustan Unilever’s Shakti Initiative, and Better Place’s attempt to reconceptualize electric vehicles. While the organization of the stories is a bit too clever, it does serve a useful purpose: it takes a potentially skeptical reader from the familiar to the unfamiliar as they wrap their heads around Johnson’s ideas.

With a basic model and a collection of concrete examples in hand, the last third of the book lays out an approach to making business model innovation a repeatable process. This process starts from what has evolved into a core element of Christensen’s theories – the notion of "jobs to be done." This is an update on Ted Levitt’s old marketing saw that a customer isn’t in the store to buy a drill but to make a hole. The problem is that most established marketers forget Levitt’s point shortly after they leave business school and get wrapped up instead in pushing the products and services that already exist. "Jobs to be done" is an effort to persuade organizations to go back to the necessary open-ended research about customer behavior and needs that leads to deep insight about potential new products and services.

With insight into potential jobs to be done, Johnson’s four-box model provides the structure to design a business model to accomplish the job to be done. In his exposition, he works his way through each of the four boxes, offering up suggestions and examples at each point. With a potentially viable design in hand, he shifts to considerations of implementation and, here, emphasizes that the early stages of implementation need to focus on testing, tuning, and revising the assumptions built into the prospective business model.

Johnson clearly understands that creating a new business model is a design effort, not an execution effort. Seizing the White Space puts shape and structure underneath this design process. All books represent compromises. The compromise that Johnson has made is to make this design process appear more linear and structured than it can ever be in practice. He knows that it isn’t; his emphasis on the need to balance the elements of a business model and to learn during the early stages of implementation makes that clear. There’s a reason that the arrows in his four-box model flow both ways. I’m not sure every reader will pick up on that nuance.

He also clearly points out the role of learning from failures as well as successes during implementation. But the demands of fitting the story into a finite space again undercut this central lesson. The models here will go a long way toward making business model design more manageable, but they can’t make it neat and orderly.

This review is part of a "blogger book tour" that Renee Hopkins, editor of Strategy and Innovation and Innoblog, arranged.

Previous stops on the tour:

Upcoming stops

If you’re interested in digging deeper into the work of Clay Christensen and his posse, here are some previous posts where I’ve pulled together some reviews and pointers. I hope you find them helpful.

Applying End-to-End Design Principles in Social Networks

Partial map of the Internet (image via Wikipedia)

In a recent blog post from the Communications Futures Program at MIT, Andy Lippman of the MIT Media Lab offers provocative examples of learning how to think in network terms when designing services. At the very heart of the Internet’s design is a notion called the end-to-end principle (pdf). The best network is one that treats all nodes in the network identically and pushes responsibility for decisions out to the nodes. Creating special nodes in the network and centralizing decisions in those nodes makes the network as a whole work less well.

In this essay, Lippman explores that notion by looking at examples of existing and potential telecommunications services that could be improved by trusting the end-to-end principle more fully. He takes a look at emergency services such as 911 calls in the US. As currently designed, these services allow individuals to reach a centralized dispatch center in the event of an emergency.

Emergencies are no longer solely about getting help for a fire or heart attack. Nor are they purely personal affairs, directed at or for a single individual. Consider the recent attempted attack on a Detroit-bound airplane where passengers provided the service (saving the plane). Early reports portrayed this as a fine solution. Indeed, there is discussion that the best result of increased airline security is that it has made people aware of the fact that they all have to pitch in to help when it is needed; they can no longer just rely on a remote entity to solve the problem for them.

End-to-End Social Networks
Andy Lippman
Fri, 01 Jan 2010 21:10:36 GMT

Lippman makes the point that we can benefit from thinking about ways to mobilize the network as a whole as an alternative to using it to direct messages to some centralized authority. Continuing to impose hierarchical notions on top of network designs risks missing other, potentially more powerful, options. We have a set of powerful new tools and ideas that we have yet to fully exploit.
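As a toy illustration of the contrast Lippman is drawing, here is my own sketch, not his design, with an invented emergency scenario and node names. It compares routing an alert through one special dispatcher with handing the same alert to every node and letting each decide for itself.

```python
# A deliberately simplified contrast between centralized and end-to-end routing.
# The scenario and node names are invented for illustration.

class Node:
    def __init__(self, name, can_help):
        self.name = name
        self.can_help = can_help

    def receive(self, alert):
        # End-to-end style: the decision about responding lives at the edge.
        if self.can_help:
            print(f"{self.name} responds to: {alert}")

def centralized(alert, pick_responder, nodes):
    """Hierarchical model: one special node decides who gets the message."""
    pick_responder(nodes).receive(alert)

def end_to_end(alert, nodes):
    """End-to-end model: every node sees the message and decides for itself."""
    for node in nodes:
        node.receive(alert)

neighbors = [
    Node("ambulance dispatch", True),
    Node("off-duty nurse next door", True),
    Node("commuter two blocks away", False),
]

centralized("cardiac arrest at 5th & Main", lambda nodes: nodes[0], neighbors)
end_to_end("cardiac arrest at 5th & Main", neighbors)
```

The centralized version reaches exactly one responder; the end-to-end version mobilizes whoever happens to be nearby and able, which is the possibility Lippman argues we keep designing away.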

The design reasoning that underlies the engineering of the Internet is applicable in organizational settings as well. Lippman’s examples are a good place to start in thinking about how to apply these ideas effectively.


What is an Oreo?

Alan Matsumura and I had an excellent conversation earlier this month about the work he is starting up at SilverTrain. Part of the discussion centered on the unexpected problems that you run into when doing BI/information analytics work.

Suppose you work for Kraft. You’d like to know how many Oreos you sold last quarter. An innocent enough question and, seemingly, a simple one. That simply shows how little you’ve thought about the problems of data management.

Start with recipes. At the very least, Kraft is likely to have a standard recipe and a kosher recipe (they do business in Israel). Are there other recipe variations, perhaps substituting high-fructose corn syrup for sugar? Do we add up all the variations, or do we keep track by recipe?

How about packaging variations? I’ve seen Oreos packaged in the classic three-column package, in packages of six, and in packages of two. I’ve seen them bundled as part of a Lunchables package. I’m sure other variations exist. Do we count the number of packages and multiply by the appropriate number of Oreos per package? Is there some system where we can count the number of Oreos we produced before they went into packages? If we can manage to count how many Oreos we made, how does that map to how many we will manage to sell?

That may get us through standard Oreos. How do we count the Oreos with orange-colored centers sold at Halloween in the US? Green-colored ones sold for St. Patrick’s Day? Double Stuf Oreos? Double Stuf Oreos with orange-colored centers? Mini bite-size snak paks? Or my personal favorite: chocolate fudge covered Oreos. I just checked the official Oreo website at Nabisco. They identify 46 different versions of the Oreo and don’t appear to count Oreos packaged within another product (the Lunchables question).

That covers most of the relevant business reasons that make counting Oreos tricky. There are likely additional, technical reasons that will make the problem harder, not easier. The various systems that track production, distribution, and sales have likely been implemented at different times and may have slight variations in how and when they count things. Those differences need to be identified and then reconciled. Someone will have to discover and reconcile the different codes and identifiers used to identify Oreos in each discrete system. And so on.
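To make the reconciliation problem concrete, here is a minimal sketch. Every code, package size, and count below is invented; the point is only that answering “how many Oreos?” requires a mapping layer that translates each system’s identifiers and units into a common one before anything can be added up.

```python
# Invented identifiers and quantities; no real Kraft data here.
# Each source system speaks its own language: its own codes and its own units.

sales_by_package = [                             # the sales system counts packages shipped
    {"sku": "4400000123", "packages": 20_000},   # classic package
    {"sku": "4400000456", "packages": 55_000},   # six-cookie snack pack
    {"sku": "4400000789", "packages": 10_000},   # two-cookie single serve
]

# The reconciliation work someone has to do by hand: map every system-specific
# code to a canonical product and record how many cookies one unit represents.
sku_to_product = {
    "4400000123": ("Oreo Classic", 36),
    "4400000456": ("Oreo Snack Pack", 6),
    "4400000789": ("Oreo Single Serve", 2),
}

def cookies_sold(rows, mapping):
    """Roll package-level sales up to cookie counts per canonical product."""
    totals = {}
    for row in rows:
        product, cookies_per_package = mapping[row["sku"]]
        totals[product] = totals.get(product, 0) + row["packages"] * cookies_per_package
    return totals

print(cookies_sold(sales_by_package, sku_to_product))
# {'Oreo Classic': 720000, 'Oreo Snack Pack': 330000, 'Oreo Single Serve': 20000}
```

Multiply that mapping effort across 46 varieties, several recipe variants, the Lunchables question, and more than one source system, and the scale of the “simple” question becomes clear.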

By the way, according to Wikipedia, over 490 billion Oreos have been sold since their debut in 1912. As for how many were sold last quarter, it depends.

Designing with “harmless failures” in mind

Ed Felten at Freedom to Tinker has some interesting points to add to Bruce Schneier’s piece on “Security Mindset” that I posted about yesterday. Felten focuses on the notion of “harmless failures.” It provides still more reason to approach all systems design problems with an eye firmly fixed on the social context in which your technology will operate.

The Security Mindset and “Harmless Failures”

…Not all “harmless failures” lead to big trouble, but it’s surprising how often a clever adversary can pile up a stack of seemingly harmless failures into a dangerous tower of trouble. Harmless failures are bad hygiene. We try to stamp them out when we can.

To see why, consider the donotreply.com email story that hit the press recently. When companies send out commercial email (e.g., an airline notifying a passenger of a flight delay) and they don’t want the recipient to reply to the email, they often put in a bogus From address like donotreply@donotreply.com. A clever guy registered the domain donotreply.com, thereby receiving all email addressed to donotreply.com. This included “bounce” replies to misaddressed emails, some of which contained copies of the original email, with information such as bank account statements, site information about military bases in Iraq, and so on. Misdirected ants might not be too dangerous, but misdirected email can cause no end of trouble.

The people who put donotreply.com email addresses into their outgoing email must have known that they didn’t control the donotreply.com domain, so they must have thought of any reply messages directed there as harmless failures. Having gotten that far, there are two ways to avoid trouble. The first way is to think carefully about the traffic that might go to donotreply.com, and realize that some of it is actually dangerous. The second way is to think, “This looks like a harmless failure, but we should avoid it anyway. No good can come of this.” The first way protects you if you’re clever; the second way always protects you.

Which illustrates yet another part of the security mindset: Don’t rely too much on your own cleverness, because somebody out there is surely more clever and more motivated than you are.

The Procrastinator’s Clock – User-centered design at its best

Now this is the kind of tool that demonstrates a deep understanding of its target users. Probably wouldn’t help me, as being late to scheduled events isn’t my particular procrastination issue, but I appreciate the design insight.

The Procrastinator’s Clock


If you’re a procrastinator, you don’t need a mathematical formula, you know who you are. Worse, the people who work with you know, too. I’ve tried the “set the clock ahead 10 minutes” trick, but it never works because I know that I really have that extra 10 minutes. If you’re nodding, then perhaps you need David Seah’s Procrastinator’s Clock.

It’s guaranteed to be up to 15 minutes fast. However, it also speeds up and slows down in an unpredictable manner so you can’t be sure how fast it really is. Furthermore, the clock is guaranteed not to be slow, assuming your computer clock is sync’d with NTP; many computers running Windows and Mac OS X with persistent Internet connections already are.
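The quoted description specifies behavior rather than an implementation, but it is easy to sketch one way the trick could work. This is my own guess, not David Seah’s actual code: add a drifting offset that always stays between zero and fifteen minutes, so the display is never slow and never more than fifteen minutes fast.

```python
import random
import time
from datetime import datetime

# One possible implementation of the described behavior; not the real clock's code.
MAX_OFFSET_SECONDS = 15 * 60   # never more than 15 minutes fast
SEGMENT_SECONDS = 300          # pick a new drift target every 5 minutes

def _target_offset(segment_index):
    """A deterministic but hard-to-guess offset target for each time segment."""
    rng = random.Random(segment_index)           # seeded per segment
    return rng.uniform(0, MAX_OFFSET_SECONDS)    # always >= 0, so never slow

def procrastinators_time(now=None):
    """Real time plus a wandering 0-15 minute lead, interpolated between targets."""
    now = time.time() if now is None else now
    segment, elapsed = divmod(now, SEGMENT_SECONDS)
    a = _target_offset(int(segment))
    b = _target_offset(int(segment) + 1)
    offset = a + (b - a) * (elapsed / SEGMENT_SECONDS)   # smooth, unpredictable drift
    return datetime.fromtimestamp(now + offset)

if __name__ == "__main__":
    print("real time:     ", datetime.now().strftime("%H:%M:%S"))
    print("displayed time:", procrastinators_time().strftime("%H:%M:%S"))
```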

Designing spaces for doing knowledge-based work

This book contains an extensive series of case studies of designing spaces for learning and doing knowledge work in schools and universities. If you accept the premise that much of the work that will take place in Enterprise 2.0 organizations will be knowledge work, then you may find these cases a source of ideas and insights.

Learning Spaces

Diana Oblinger (of Educating the Net Generation fame) has edited/released a new book: Learning Spaces (not sure how long it has been available, but it has been referenced by several edubloggers over the last week). I love this quote: “Spaces are themselves agents for change. Changed spaces will change practice”. The bulk of the book consists of case studies of learning space design in different organizations.