Design Insight Continued

Synchronicity is real.

It’s always rewarding to discover that your thinking is in the same general vicinity as someone smart. Turns out Nancy Dixon has also been thinking about how to learn from experience. Her predictably excellent post offers pragmatic guidance on what it takes for an organization to learn from its experience, together with some very practical techniques for doing so. Central to her analysis are the role of continuing conversation among peers and the importance of generative questions in those conversations.

Nancy’s advice is neutral with respect to what those conversations should be about, which was the thread I was trying to work out in my previous post.

The accelerating rate of change expands the focus of what you seek to extract from experience. We’ve grown accustomed to a simple equation that treats experience as templates or patterns to be copied onto new situations with minor tweaks and modifications. This is the world of startups seeking to be the “Uber of X” or the “Amazon of Y.” It is the world of countless copycats of Henry Ford, all seeking to do what Ford did, only better, faster, or cheaper.

The glib analysis of accelerating change is that experience is irrelevant to innovation. The Ubers and Amazons and Googles of the world all spring forth independently of prior experience. Stated baldly, this is nonsense. But it does leave us with the task of figuring out how experience might play a role, which brings us back to peer-to-peer conversations and generative questions.

My conjecture is that experience must be unpacked and fed into design processes that will lead to new processes and practices. The concrete particulars of experience will need to be actively abstracted into design principles and patterns. Experience then becomes the fuel that drives new innovations and practices.

Connecting the dots encourages bad thinking

Connecting the dots is one of those common metaphors that we don’t think about very much. When we do invoke it, however, it leads us astray more often than it helps.

The premise is that the elementary school activity of revealing a hidden picture by drawing lines from one numbered dot to the next teaches us something transferable about problem solving. The pleasure of reading mysteries and detective stories reinforces the same message: there is a series of clues lying about waiting to be found; rearrange and connect them in the right order to reveal the culprit or foil the terrorist.

When the bad guys succeed, we criticize the good guys because they failed to arrange the obvious dots into the now obvious order. Hindsight is 20/20, of course, but that obscures a more important critique of the method and its premises.

Connecting the dots makes one of two assumptions about the end of the process. Either there is a picture to be perceived when the dots are connected in the right sequence, or there is an answer to the story or riddle when the facts are arranged in order. Both of these assumptions lead us astray in the real world. First, they presume a single correct answer; this works for puzzles and simple problems in math, nowhere else. Second, they presume a single correct ordering of the dots.

At best, which isn’t very good, connecting the dots is a label for retrospective sense-making masquerading as a problem-solving strategy. Possibly useful if your goal is to divert blame. Useless as a thinking strategy.

Interested in being part of a unique problem solving team?

For the past several years we’ve been working to create the world’s largest high-performance team for problem-solving. This two-minute video captures the essence of what we are trying to accomplish:

We’ve been actively recruiting for the next stage in our development, which will be a beta test that will run over the next six months. We expect the time commitment for this phase will be 2-3 hours per week. If you think this is something you’d be interested in, drop us a line at info@cminds.net and we’ll be in touch.

How to read and understand a scientific paper: a guide for non-scientists « Violent metaphors

I only wish I had been this organized and diligent when I was doing the research for my dissertation. Or that I had had this kind of excellent advice available when I did. 

How to read and understand a scientific paper: a guide for non-scientists « Violent metaphors: “What constitutes enough proof? Obviously everyone has a different answer to that question. But to form a truly educated opinion on a scientific subject, you need to become familiar with current research in that field.  And to do that, you have to read the ‘primary research literature’ (often just called ‘the literature’). You might have tried to read scientific papers before and been frustrated by the dense, stilted writing and the unfamiliar jargon. I remember feeling this way!  Reading and understanding research papers is a skill which every single doctor and scientist has had to learn during graduate school.  You can learn it too, but like any skill it takes patience and practice.

I want to help people become more scientifically literate, so I wrote this guide for how a layperson can approach reading and understanding a scientific research paper. It’s appropriate for someone who has no background whatsoever in science or medicine, and based on the assumption that he or she is doing this for the purpose of getting a basic understanding of a paper and deciding whether or not it’s a reputable study.”

While excellent advice in its own right, this blog post is also a reminder that knowledge work requires some pretty sophisticated skills and those skills require practice to develop and maintain.

Never let “being realistic” get in the way of real problem solving

The good folks at xkcd always have something useful to say. Too many problem solving efforts are sabotaged when someone decides to redirect the conversation towards “being realistic.” 

xkcd: Realistic Criteria

I’m planning on posting this little gem somewhere close by to remind me to view these requests more skeptically. 

When is a request to “be realistic” an honest effort to advance a problem solving effort and when is it a covert effort to derail or delay?

Poking effectively on complex systems

Here is a brief clip from an interview Steve Jobs did in 1995 while he was at NeXT. It neatly captures an important attitude about dealing with complex systems:

Whenever you poke at a system, the system pokes back. If you grew up with siblings, you learned this at a visceral level. 

Too many of us take a limited lesson from those experiences; we come to believe that we are powerless in the face of a larger, more powerful system (or sibling). The better lesson, which Jobs embodied in his life, is to seek places and directions to poke where your impact can be amplified.

Adam Savage on the path from simple ideas to discovery

Here’s a relatively short talk by Adam Savage of MythBusters at TED-Ed on how curiosity and simple ideas lead to discovery. Savage is an excellent storyteller, and these are stories worth hearing.

I’m particularly taken with his observation that it can sometimes take years before you understand why your brain holds on to a particular idea.

A hat tip to the always excellent FlowingData for pointing this one out.

Rethinking data and decisions – Big Data’s Impact in the World – NYTimes.com

The New York Times recently ran an excellent overview of the evolving state of data analytics.

Big Data’s Impact in the World – NYTimes.com: “The story is similar in fields as varied as science and sports, advertising and public health – a drift toward data-driven discovery and decision-making. “It’s a revolution,” says Gary King, director of Harvard’s Institute for Quantitative Social Science. “We’re really just getting under way. But the march of quantification, made possible by enormous new sources of data, will sweep through academia, business and government. There is no area that is going to be untouched.”

Welcome to the Age of Big Data. The new megarich of Silicon Valley, first at Google and now Facebook, are masters at harnessing the data of the Web – online searches, posts and messages – with Internet advertising. At the World Economic Forum last month in Davos, Switzerland, Big Data was a marquee topic. A report by the forum, “Big Data, Big Impact,” declared data a new class of economic asset, like currency or gold.”

This is the latest iteration in the ongoing interplay between judgment and evidence in decision making. Which means it’s worth considering how this argument has been evolving over time and how new discoveries, technologies, and techniques could change the issues or cause lasting change in how we go about making decisions.

Probability theory traces its roots to an exchange between Blaise Pascal and Pierre de Fermat over how to divide the pot in a card game that couldn’t be played to completion. Could you estimate each player’s chances of winning from the current state of the game and use those estimates to distribute the pot fairly? In other words, how can you use the evidence at hand to make a better decision?
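
To make that concrete, here’s a minimal sketch of the problem of points in Python. Everything specific is invented for illustration, not drawn from the Pascal–Fermat correspondence: a first-to-ten game, stopped with the score at 8–7, a pot of 100, and fair 50/50 rounds.

```python
import random

def fair_split(wins_a, wins_b, target=10, pot=100.0, trials=100_000):
    """Estimate a fair division of the pot for a game stopped early.

    Simulates playing the game out many times from the current score,
    assuming each remaining round is a fair coin flip. All the specific
    numbers here are illustrative, not historical.
    """
    a_victories = 0
    for _ in range(trials):
        a, b = wins_a, wins_b
        while a < target and b < target:
            if random.random() < 0.5:
                a += 1
            else:
                b += 1
        if a == target:
            a_victories += 1
    p_a = a_victories / trials  # estimated chance player A wins from here
    return p_a * pot, (1 - p_a) * pot

share_a, share_b = fair_split(wins_a=8, wins_b=7)
print(f"Player A: {share_a:.0f}  Player B: {share_b:.0f}")
```

With these numbers the exact answer is 11/16 of the pot to the leader: he needs two wins before his opponent collects three, and 11 of the 16 equally likely outcomes of four more fair rounds do that. The simulation converges on that 69/31 split.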

When I was getting an undergraduate degree in probability and statistics (a long time ago), the core issues centered on what inferences you could draw about the real world from limited samples. What kinds of errors and mistakes did you need to protect yourself from? What precautions were appropriate to keep you from going beyond the data? The tools would always find some pattern in the data and we were repeatedly cautioned to take care not to see things that weren’t there.

A few years later, in graduate school, I revisited the topic in various required methods and analytical tools courses. The software tools were more powerful and were still capable of finding the slightest hint of a pattern in the noise. The faculty offered their obligatory cautions, but I watched plenty of students wreaking intellectual havoc with their new power tools, spinning conclusions from the thinnest threads of pattern in the data. For every hundred MBAs who learned to run a multivariable regression, one might read Darrell Huff’s How to Lie with Statistics.

Today, the tools continue to become more and more powerful at teasing out patterns from the data. At the same time, the exponential growth in available data means that we aren’t sampling so much as we are searching for patterns in the population as a whole. What is the lag between the power of our analytical tools and our capacity to apply sound judgment to the results?
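
Here’s a toy demonstration of that lag, my own sketch rather than anything from the article: generate pure noise, test enough candidate relationships, and “significant” patterns arrive right on schedule.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Pure noise: 1,000 observations of 200 unrelated "variables",
# plus an outcome that is also pure noise.
n_obs, n_vars = 1_000, 200
data = rng.standard_normal((n_obs, n_vars))
outcome = rng.standard_normal(n_obs)

# Correlate every variable with the outcome.
correlations = np.array(
    [np.corrcoef(data[:, j], outcome)[0, 1] for j in range(n_vars)]
)

# Under the null hypothesis, |r| > 1.96/sqrt(n) is "significant" at the
# 5% level -- so we should expect about 5% of 200 = 10 false discoveries.
threshold = 1.96 / np.sqrt(n_obs)
hits = np.flatnonzero(np.abs(correlations) > threshold)
print(f"'Significant' correlations found in pure noise: {len(hits)}")
```

Roughly ten of the two hundred noise variables clear the bar, and nothing in the data connects any of them to the outcome. The tools deliver patterns whether or not there is anything there; judgment has to do the rest.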

Here are some of the questions I am beginning to explore:

  1. How does statistical inference change as we move from small, representative samples to all, or most, of the population of interest? (A toy sketch of this follows the list.)
  2. How do we distinguish patterns in the data that are spurious from patterns that reveal important underlying drivers?
  3. When is an arbitrary or spurious correlation good enough to support a business course of action? (Amazon doesn’t, and probably shouldn’t, care why “other people who bought title X also bought title Y.” Calling my attention to title Y leads to the incremental sales; who needs a causal model?)
  4. How does our deepening understanding of the limits and biases of human decision making connect to the opportunities presented in “Big Data”? Here, I’m thinking of the work of Dan Ariely on behavioral economics and Daniel Kahneman on decision making.
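
On the first question, here’s a toy sketch with invented numbers (not anything from the Times piece): fix a trivially small difference between two groups and watch a standard significance test as the sample grows toward population scale.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A difference too small to matter in practice:
# one percent of a standard deviation.
true_difference = 0.01

for n in (100, 10_000, 1_000_000):
    a = rng.standard_normal(n)
    b = rng.standard_normal(n) + true_difference
    # Two-sample z-statistic, computed by hand.
    std_err = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    z = (b.mean() - a.mean()) / std_err
    print(f"n = {n:>9,}: z = {z:6.2f}")
```

The underlying difference never changes, but the z-statistic grows without bound as n does. At population scale nearly everything is “statistically significant,” so significance stops doing useful work; the judgment question becomes whether one percent of a standard deviation matters for the decision at hand.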

I would value pointers and suggestions on where to look next for answers or insight.

Richard Feynman On The Folly Of Crafting Precise Definitions

I’ve often struggled with the notion of definitions when working in organizations. On the one hand, too many of us hide our ignorance and uncertainty behind a wall of jargon and terminology. Terms fall in and out of favor and their relationship to the underlying real world is often less important than their value from a marketing perspective.

On the other hand, new terms and language can help us point to and see new ideas and new opportunities for action. Here’s a recent post from Bob Sutton that sheds light on these challenges and is worth thinking about.

One of my best friends in graduate school was a former physics major named Larry Ford. When behavioral scientists started pushing for precise definitions of concepts like effectiveness and leadership, he would sometimes confuse them (even though Larry is a very precise thinker) by arguing “there is a negative relationship between precision and accuracy.” I just ran into a quote from the amazing Nobel winner Richard Feynman that makes a similar point in a lovely way:

We can’t define anything precisely. If we attempt to, we get into that paralysis of thought that comes to philosophers, one saying to the other: “you don’t know what you are talking about!” The second one says: “what do you mean by talking? What do you mean by you? What do you mean by know?”

Feynman’s quote reminded me of the opening pages of the 1958 classic “Organizations” by James March (quite possibly the most prestigious living organizational theorist, and certainly, one of the most charming academics on the planet) and Herbert Simon (another Nobel winner). They open the book with a great quote that sometimes drives doctoral students and other scholars just crazy. They kick-off by saying:

“This is a book about a theory of formal organizations. It is easier, and probably more useful, to give examples of formal organizations than to define them.”

After listing a bunch of examples of organizations including the Red Cross and New York State Highway Department, they note in words that would have pleased Feynman:

“But for the present purposes we need not trouble ourselves with the precise boundaries to be drawn around an organization or the exact distinction between an “organization” and a “non-organization.” We are dealing with empirical phenomena, and the world has an uncomfortable way of not permitting itself to be fitted into clean classifications.”

I must report, however, that for the second edition of the book, published over 20 years later, the authors elected to insert a short definition in the introduction:

“Organizations are systems of coordinated action among individuals and groups whose preferences, information, interests, or knowledge differ.”

When I read this, I find myself doing what Feynman complained about. I think of things they left out: What about norms? What about emotions? I think of situations where it might not apply: Doesn’t a business owned and operated by one person count as an organization? I think of the possible overemphasis on differences: What about all the times and ways that people and groups in organizations have similar preferences, information, interests, and knowledge? Isn’t that part of what an organization is as well? I could go on and on.

I actually think it is a pretty good definition, but my bias is still that I like the original approach, as they did such a nice job of arguing, essentially, that if they tried to get more precise, they would sacrifice accuracy. Nonetheless, I confess that I still love trying to define things and believe that trying to do so can help clarify your thinking. You could argue that while the outcome, in the end, will always be flawed and imprecise, the process is usually helpful and there are many times when it is useful to pretend that you have a precise and accurate definition even if you don’t (such as when you are developing metrics).

Richard Feynman On The Folly Of Crafting Precise Definitions – Bob Sutton

Understanding the world around you – more insights from Richard Feynman

Another gem from Richard Feynman. In this clip he uses the game of chess to illustrate how scientists go from making observations about the world to better and better theories that account for the observations. There’s a lot of depth in this simple analogy and it’s well worth dedicating some of your own brain cycles to following Feynman’s reasoning.

Your morning dose of Feynman – Boing Boing (by Maggie Koerth-Baker): “Richard Feynman, God of Perfect Analogies, explains why it’s not a failure or a scandal when scientists adapt and change their understanding of the world. This is a really important point, applicable in a lot of public debates over science, especially those focused on evolution and climate change. Science isn’t about writing things on tablets of stone. It’s about taking a theory and constantly digging deeper into it, adding layers of nuance, finding stuff that doesn’t make sense, and using both to build a more complete picture. Even if the big idea is right, the details will change. That’s how science is supposed to work.”

Via W. Younes