Cognitive surplus and organizational slack

[Cross-posted at FASTforward]

Clay Shirky has a new talk and he’s taking it on the road. It’s stimulating a good bit of thoughtful discussion around the web, and a video version of the talk is available online.

Shirky has also posted a transcript of the talk on his site, if you’d prefer to read instead of watch. The talk is a riff on one of the themes of his new book, Here Comes Everybody: The Power of Organizing Without Organizations. I’ll post a complete review of that shortly; it’s well worth making the time to read.

One of the stories Shirky hangs his argument on is an interchange with a TV producer about the creation and growth of Wikipedia. Here’s how he tells it:

I started telling her about the Wikipedia article on Pluto. You may remember that Pluto got kicked out of the planet club a couple of years ago, so all of a sudden there was all of this activity on Wikipedia. The talk pages light up, people are editing the article like mad, and the whole community is in a ruckus–“How should we characterize this change in Pluto’s status?” And a little bit at a time they move the article–fighting offstage all the while–from, “Pluto is the ninth planet,” to “Pluto is an odd-shaped rock with an odd-shaped orbit at the edge of the solar system.”

So I tell her all this stuff, and I think, “Okay, we’re going to have a conversation about authority or social construction or whatever.” That wasn’t her question. She heard this story and she shook her head and said, “Where do people find the time?” That was her question. And I just kind of snapped. And I said, “No one who works in TV gets to ask that question. You know where the time comes from. It comes from the cognitive surplus you’ve been masking for 50 years.”

So how big is that surplus? So if you take Wikipedia as a kind of unit, all of Wikipedia, the whole project–every page, every edit, every talk page, every line of code, in every language that Wikipedia exists in–that represents something like the cumulation of 100 million hours of human thought. I worked this out with Martin Wattenberg at IBM; it’s a back-of-the-envelope calculation, but it’s the right order of magnitude, about 100 million hours of thought. And television watching? Two hundred billion hours, in the U.S. alone, every year. Put another way, now that we have a unit, that’s 2,000 Wikipedia projects a year spent watching television. Or put still another way, in the U.S., we spend 100 million hours every weekend, just watching the ads. This is a pretty big surplus. People asking, “Where do they find the time?” when they’re looking at things like Wikipedia don’t understand how tiny that entire project is, as a carve-out of this asset that’s finally being dragged into what Tim calls an architecture of participation. [Gin, Television, and Social Surplus]
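Just to make Shirky’s back-of-the-envelope arithmetic explicit, here is the division as a trivial sketch (the two figures are his estimates, not mine):

```python
# Shirky's estimates: ~100 million hours for all of Wikipedia,
# ~200 billion hours of U.S. television watching per year.
wikipedia_hours = 100_000_000
us_tv_hours_per_year = 200_000_000_000

wikipedias_per_year = us_tv_hours_per_year / wikipedia_hours
print(wikipedias_per_year)  # 2000.0, i.e. about 2,000 Wikipedia-sized projects a year
```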

The notion of “cognitive surplus” is a clever and useful way to frame the issue. Now, Shirky is primarily interested in the societal-level impacts of new technologies. Big numbers help his argument tremendously, but they are a little bit like the arguments for why you might want to target your new consumer product at China (“if we only get one person in a hundred to drink our new sport drink, we’ll sell millions!”), or the dot-com-era arguments for capturing eyeballs. I don’t think that Shirky falls into this trap himself. Here, and in his book, he explicitly talks about how the design and architecture of systems such as Wikipedia leverage cognitive surplus in granular ways to exploit these large numbers.

My primary interests are inside organizations. How can we translate and adapt these insights into those environments? Organizational theorists, not being as clever or market-oriented as Shirky, never coined a notion as attractive as “cognitive surplus.” Instead, they talk about “organizational slack,” which was, in hindsight, a very poor choice of words. For the last two decades or more, organizations have been rooting out “slack” wherever they could find it. When the goal is efficiency, that is an appropriate strategy; however, it leaves no capacity for innovation and adaptation. The few organizations that explicitly preserve this capacity, such as Google with its 20% rule, are deemed notable and newsworthy.

The first order of business for business is to immediately appropriate Shirky’s term. Organizations that care about innovation and adaptive capacity should begin talking about “cognitive surplus.” Look for ways to measure it, if only crudely, and increase it.

The second task is to better understand and appreciate how various new technologies and tools let organizations derive benefit from smaller grains of cognitive surplus. Google’s 20% rule is a product of a time largely before blogs and wikis. Can an organization combine those tools with a one-hour or ten-minute rule? Can we get value out of an hour a week, or even ten minutes, contributed to an internal wiki? Clearly, we will need to design some thoughtful support and encouragement processes around the tools in order to take advantage of a different scale of participation.
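As a crude illustration of what a modest rule could yield, here is a back-of-the-envelope sketch; the headcount and participation figures are entirely hypothetical:

```python
# Hypothetical numbers: a mid-sized firm adopting a "10-minute rule"
# for contributing to an internal wiki.
employees = 5_000
minutes_per_week = 10
weeks_per_year = 48

surplus_hours = employees * minutes_per_week * weeks_per_year / 60
print(f"{surplus_hours:,.0f} hours/year")  # 40,000 hours/year
# At ~2,000 working hours per person per year, that is roughly
# 20 full-time people's worth of attention, captured in tiny grains.
```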

The third task is to monitor how well the large-number phenomena that operate outside the enterprise translate inside it. We may discover critical-mass issues: efforts below a certain scale are doomed to fail, while slightly larger efforts will need an extensive “life-support” system to survive. Still other efforts may need support scaffolding at first but can become self-sustaining. Today, we have far more questions than answers. Shirky has provided us with some good new notions to start finding them, and the thoughtful discussions of his talk appearing around the web are worth seeking out as well.

What is an Oreo?

Alan Matsumura and I had an excellent conversation earlier this month about the work he is starting up at SilverTrain. Part of the discussion centered on the unexpected problems that you run into when doing BI/information analytics work.

Suppose you work for Kraft. You’d like to know how many Oreos you sold last quarter. It’s an innocent enough question and, seemingly, a simple one. If it seems simple, that only shows how little you’ve thought about the problems of data management.

Start with recipes. At the very least, Kraft is likely to have a standard recipe and a kosher recipe (they do business in Israel). Are there other recipe variations, perhaps substituting high-fructose corn syrup for sugar? Do we add all the variations together, or do we keep track by recipe?

How about packaging variations? I’ve seen Oreos packaged in the classic three-column package, in packages of six, and in packages of two. I’ve seen them bundled as part of a Lunchables package. I’m sure other variations exist. Do we count the number of packages and multiply by the appropriate number of Oreos per package? Is there some system where we can count the number of Oreos we produced before they went into packages? And if we can manage to count how many Oreos we made, how does that map to how many we will manage to sell?
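To see why even the packaging question needs an explicit data model, here is a toy sketch; the package sizes and sales figures are invented:

```python
# Invented numbers: converting package-level sales into cookie counts
# requires an explicit package-to-cookie mapping.
cookies_per_package = {"classic_three_column": 36, "six_pack": 6, "two_pack": 2}
packages_sold = {"classic_three_column": 1_000_000, "six_pack": 250_000, "two_pack": 4_000_000}

total_cookies = sum(qty * cookies_per_package[pkg] for pkg, qty in packages_sold.items())
print(f"{total_cookies:,}")  # 45,500,000 cookies, and that's before the Lunchables question
```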

That may get us through standard Oreos. How do we count the Oreos with orange-colored centers sold at Halloween in the US? Green-colored ones sold for St. Patrick’s Day? Double Stuf Oreos? Double Stuf Oreos with orange-colored centers? Mini bite-size snak paks? Or my personal favorite: chocolate-fudge-covered Oreos. I just checked the official Oreo website at Nabisco. It identifies 46 different versions of the Oreo and doesn’t appear to count Oreos packaged within another product (the Lunchables question).

That covers most of the relevant business reasons that make counting Oreos tricky. There are likely additional, technical reasons that will make the problem harder, not easier. The various systems that track production, distribution, and sales have likely been implemented at different times and may have slight variations in how and when they count things. Those differences need to be identified and then reconciled. Someone will have to discover and reconcile the different codes and identifiers used to identify Oreos in each discrete system. And so on.
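A minimal sketch of what that reconciliation looks like in practice; the system codes, counts, and the canonical mapping are all hypothetical:

```python
from collections import defaultdict

# Hypothetical codes and counts: three systems, three identifiers, one product.
production = {"PRD-0017": 500_000}             # plant system's code for classic Oreos
distribution = {"DIST/OREO/CLASSIC": 480_000}  # distributor's own identifier
point_of_sale = {"UPC-044000": 470_000}        # retail scans a UPC

# Someone has to build and maintain this mapping, by hand or via a master-data system.
canonical = {
    "PRD-0017": "OREO_CLASSIC",
    "DIST/OREO/CLASSIC": "OREO_CLASSIC",
    "UPC-044000": "OREO_CLASSIC",
}

totals = defaultdict(dict)
for stage, counts in (("produced", production), ("shipped", distribution), ("scanned", point_of_sale)):
    for code, qty in counts.items():
        totals[canonical[code]][stage] = qty

print(dict(totals))
# {'OREO_CLASSIC': {'produced': 500000, 'shipped': 480000, 'scanned': 470000}}
# The three systems disagree; explaining why is where the real analytic work begins.
```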

By the way, according to Wikipedia, over 490 billion Oreos have been sold since their debut in 1912. As for how many were sold last quarter, it depends.

David Maister on getting from strategy to execution

Strategy and the Fat Smoker: Doing What’s Obvious But Not Easy, by David Maister

David Maister has spent years advising professional service firms on the particular challenges of running their businesses. I first met David during my MBA days when I was a student in his course on the Management of Service Operations. I’ve come to trust his insights and perspectives about the professional world I occupy. More recently, I’ve come to see that his perspective is more generally relevant as more and more of us do work that is effectively professional, even if we are not inside actual professional services organizations. There is a substantial overlap between professional work and knowledge work, which makes Maister more relevant than ever.

Strategy and the Fat Smoker is David’s most recent effort to share his insights. In it, he turns his attention to the particular challenge of bridging from knowing what to do to actually managing to do it. In fact, David starts with the observation that “real strategy lies not in figuring out what to do, but in devising ways to ensure that, compared to others, we actually do more of what everybody knows they should do.”

Structurally, Maister builds his argument by working through what constitutes strategy from this perspective, the central importance of client relationships, and how those shape the kinds of management practices most likely to be effective.

For Maister, strategy is primarily a problem of organizational design and management, the soft stuff that always turns out to be hard. It is particularly hard, however, when the organization in question is populated with professionals/knowledge workers who must produce and deliver services to clients. You cannot succeed by designing systems and processes to compel behavior, because you have a workforce that can’t simultaneously be forced to comply with a system and exercise its independent and autonomous judgment. Maister explores this issue by focusing on two dimensions that characterize a professional: to what degree do they prefer to work solo vs. collaborate within a team, and to what extent do they prefer immediate rewards vs. being willing to invest now for future payoffs? The point, of course, is not that one set of answers is better than another, but that trying to mix people with different answers in the same organizational environment is probably not a terribly good idea.

David also presents a provocative discussion of the importance of organizational purpose. While he acknowledges that shared purpose can be a very powerful tool within an organization, he argues that the power only comes when there are clear “consequences for non-compliance.” Until and unless you can translate generalities about purpose into clearly stated and observed rules of performance, there’s no point in worrying about purpose. Put more positively, the test of strategy comes in working out, and then operating within, the day-to-day rules of performance that make sense for your strategy.

In one sense, Maister doesn’t break any extraordinary new ground. What he does is challenge you on how willing you are to drive grand ideas deep into how you choose to do your work on a day-to-day basis. And he offers lots of good, concrete advice on how to make that transition.

George Carlin as strategy consultant?

Espen, my concern with Dr. Carlin as a potential consultant is that he has a reputation for calling it as he sees it, despite what the following might suggest. Would any large consulting firm be willing to take that risk with its clients?

Dr. GC floors’em

One of my academic colleagues suggested we hire Dr. G. Carlin as a faculty member in strategy based on the following test lecture – but in my view he would fit equally well in a consulting company. Perhaps a shared appointment?

Designing with “harmless failures” in mind

Ed Felten at Freedom to Tinker has some interesting points to add to Bruce Schneier’s piece on “Security Mindset” that I posted about yesterday. Felten focuses on the notion of “harmless failures.” It provides still more reason to approach all systems design problems with an eye firmly fixed on the social context in which your technology will operate.

The Security Mindset and “Harmless Failures”

…Not all “harmless failures” lead to big trouble, but it’s surprising how often a clever adversary can pile up a stack of seemingly harmless failures into a dangerous tower of trouble. Harmless failures are bad hygiene. We try to stamp them out when we can.

To see why, consider the donotreply.com email story that hit the press recently. When companies send out commercial email (e.g., an airline notifying a passenger of a flight delay) and they don’t want the recipient to reply to the email, they often put in a bogus From address like donotreply@donotreply.com. A clever guy registered the domain donotreply.com, thereby receiving all email addressed to donotreply.com. This included “bounce” replies to misaddressed emails, some of which contained copies of the original email, with information such as bank account statements, site information about military bases in Iraq, and so on. Misdirected ants might not be too dangerous, but misdirected email can cause no end of trouble.

The people who put donotreply.com email addresses into their outgoing email must have known that they didn’t control the donotreply.com domain, so they must have thought of any reply messages directed there as harmless failures. Having gotten that far, there are two ways to avoid trouble. The first way is to think carefully about the traffic that might go to donotreply.com, and realize that some of it is actually dangerous. The second way is to think, “This looks like a harmless failure, but we should avoid it anyway. No good can come of this.” The first way protects you if you’re clever; the second way always protects you.

Which illustrates yet another part of the security mindset: Don’t rely too much on your own cleverness, because somebody out there is surely more clever and more motivated than you are.
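Felten’s second rule can even be made partly mechanical. Here’s a minimal sketch of an outbound-mail check; the list of controlled domains is hypothetical:

```python
# Refuse to send mail whose From address uses a domain we don't control.
# CONTROLLED_DOMAINS is hypothetical; populate it with domains you actually own.
CONTROLLED_DOMAINS = {"example.com", "mail.example.com"}

def safe_from_address(address: str) -> bool:
    """True only if the From address's domain is one we control."""
    _, _, domain = address.rpartition("@")
    return domain.lower() in CONTROLLED_DOMAINS

assert safe_from_address("alerts@example.com")
assert not safe_from_address("donotreply@donotreply.com")  # replies go to a stranger
```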

Rockwell’s Retro Encabulator

Geek/nerd humor. I’ve seen this before somewhere and I’m glad to see it again. I’ve certainly sat through my share of presentations like this. You wonder how many takes it took to get it right.

Funny: Rockwell’s Retro Encabulator

The yin to Common Craft’s yang.

I wish I knew more about this; the YouTube pages offer little info. Thanks to Paul Ingram and Ryan Turner for the pointers.

Updated: Here it says, “This is a hoax video produced by Rockwell for a sales meeting. See also: Turboencabulator.” Thanks, Bill.


Designing with failure in mind

Bruce Schneier is high on my list of smart people to pay attention to. His blog, Schneier on Security, always provides useful insights into the interplay between technology and people. Yesterday, he offered an interesting observation about what he labels “the security mindset.”

Schneier on Security: The Security Mindset.

….

Security requires a particular mindset. Security professionals — at least the good ones — see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t use a computer without wondering about the security vulnerabilities. They can’t vote without trying to figure out how to vote twice. They just can’t help it.

SmartWater is a liquid with a unique identifier linked to a particular owner. “The idea is for me to paint this stuff on my valuables as proof of ownership,” I wrote when I first learned about the idea. “I think a better idea would be for me to paint it on your valuables, and then call the police.”

Really, we can’t help it.

This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.

I’d push his observations a bit further. When you are designing and building systems that incorporate people and technology, you had better think both about how to make things work and about how they might fail.

Human systems are interesting and effective because they are resilient. Good designers allow for the reality of human strengths and weaknesses and factor both into their designs. Too many poor or lazy designers ignore or gloss over failure modes. How many project plans have you seen, for example, that assume no one on the project team will ever be out sick? And then management complains when the project fails to meet its deadlines.
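As a crude illustration (all of the numbers are hypothetical), budgeting for ordinary absence is a one-line correction that many plans never make:

```python
# Hypothetical numbers: a plan that assumes perfect attendance vs. one
# that budgets for ordinary sick days, holidays, and the like.
planned_hours = 1_000
team_size = 5
hours_per_week = 40
absence_rate = 0.05  # ~5% of capacity lost in a typical stretch

ideal_weeks = planned_hours / (team_size * hours_per_week)
realistic_weeks = planned_hours / (team_size * hours_per_week * (1 - absence_rate))
print(f"ideal: {ideal_weeks:.1f} weeks, with absences: {realistic_weeks:.1f} weeks")
# ideal: 5.0 weeks, with absences: 5.3 weeks, and that's before anything goes wrong
```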

There’s actually quite a lot of good material on failure in human/technology systems and on how to compensate for reality; it’s well worth seeking out as a starting point.


Ray Sims is collecting definitions of knowledge management

Combine two slippery but important words and it’s little wonder that you can find such a proliferation of definitions. For some reason, this reminds me of Danny Kaye’s Choreography number from White Christmas.

43 knowledge management definitions – and counting

Before I really get going on my day, here is an entertaining (or sobering) list of 43 knowledge management definitions – and counting from Ray Sims, who is heading back into the world of being an official KM’er as I head out to do product management.  It might have been funnier had he stopped at 37.

For many years I’ve been saying that I didn’t like the term “knowledge management” as (a) it was fundamentally an oxymoron, (b) there was no consensus within the industry as to what the term meant, and (c) in many companies the term carries negative connotations due to a perceived lack of value from earlier so-called knowledge management efforts and/or belief that knowledge management was a fad that we have moved on past or has been absorbed into other disciplines.

I like a number of these and have used variations of them in the past. As someone on the Act-KM mailing list noted, there are easily as many definitions of knowledge. Ray, or another enterprising individual, might want to sort these definitions into buckets according to how “knowledge” is perceived by the people using each definition. Process-centric definitions look at knowledge-as-verb. Storage-centric definitions treat knowledge as a thing to be controlled. People-connection definitions see knowledge as emerging through interaction, and so on. Ray has already created a couple of tag clouds of the definitions.
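For what it’s worth, even a crude keyword pass would give Ray a first cut at those buckets. A toy sketch, with made-up keyword lists and a made-up sample definition:

```python
# Toy sketch: bucket KM definitions by how they treat "knowledge".
# The keyword lists and the sample definition are made up for illustration.
BUCKETS = {
    "process-centric": ("process", "flow", "activity", "practice"),
    "storage-centric": ("repository", "asset", "document", "content"),
    "people-centric": ("people", "conversation", "community", "interaction"),
}

def bucket_for(definition: str) -> str:
    text = definition.lower()
    scores = {name: sum(kw in text for kw in kws) for name, kws in BUCKETS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] else "unclassified"

sample = "KM treats knowledge as an asset to be organized in a corporate repository"
print(bucket_for(sample))  # storage-centric (two keyword hits)
```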

Another great TED talk to watch – Jill Bolte Taylor’s Stroke of Insight

What a great way to start off St. Patrick’s Day. This is certainly worth 20 minutes of your life. As someone inclined to spend entirely too much time inside the left hemisphere of my brain, I found this especially affecting.

Stroke of insight: Jill Bolte Taylor on TED.com

Neuroanatomist Jill Bolte Taylor had an opportunity few brain scientists would wish for: One morning, she realized she was having a massive stroke. As it happened — as she felt her brain functions slip away one by one, speech, movement, understanding — she studied and remembered every moment. This is a powerful story of recovery and awareness — of how our brains define us and connect us to the world and to one another. (Recorded February 2008 in Monterey, California. Duration: 18:44.)

Watch Jill Bolte Taylor’s talk on TED.com, where you can download it, rate it, comment on it and find other talks and performances.

Read more about Jill Bolte Taylor on TED.com.
