Interested in being part of a unique problem solving team?

For the past several years we’ve been working to create the world’s largest high-performance team for problem-solving. This two-minute video captures the essence of what we are trying to accomplish:

We’ve been actively recruiting for the next stage in our development, which will be a beta test that will run over the next six months. We expect the time commitment for this phase will be 2-3 hours per week. If you think this is something you’d be interested in, drop us a line at info@cminds.net and we’ll be in touch.

How to read and understand a scientific paper: a guide for non-scientists « Violent metaphors

I only wish I had been this organized and diligent when I was doing the research for my dissertation. Or that I had had this kind of excellent advice available at the time.

How to read and understand a scientific paper: a guide for non-scientists « Violent metaphors: “What constitutes enough proof? Obviously everyone has a different answer to that question. But to form a truly educated opinion on a scientific subject, you need to become familiar with current research in that field.  And to do that, you have to read the ‘primary research literature’ (often just called ‘the literature’). You might have tried to read scientific papers before and been frustrated by the dense, stilted writing and the unfamiliar jargon. I remember feeling this way!  Reading and understanding research papers is a skill which every single doctor and scientist has had to learn during graduate school.  You can learn it too, but like any skill it takes patience and practice.

I want to help people become more scientifically literate, so I wrote this guide for how a layperson can approach reading and understanding a scientific research paper. It’s appropriate for someone who has no background whatsoever in science or medicine, and based on the assumption that he or she is doing this for the purpose of getting a basic understanding of a paper and deciding whether or not it’s a reputable study.”

While excellent advice in its own right, this blog post is also a reminder that knowledge work requires some pretty sophisticated skills and those skills require practice to develop and maintain.

Never let “being realistic” get in the way of real problem solving

The good folks at xkcd always have something useful to say. Too many problem solving efforts are sabotaged when someone decides to redirect the conversation towards “being realistic.” 

xkcd: Realistic Criteria

I’m planning on posting this little gem somewhere close by to remind me to view these requests more skeptically. 

When is a request to “be realistic” an honest effort to advance a problem solving effort and when is it a covert effort to derail or delay?

Poking effectively on complex systems

Here is a brief clip from an interview Steve Jobs did in 1995 while he was at NeXT. It neatly captures an important attitude about dealing with complex systems:

Whenever you poke at a system, the system pokes back. If you grew up with siblings, you learned this at a visceral level. 

Too many of us take a limited lesson from those experiences; we come to believe that we are powerless in the face of a larger, more powerful system (or sibling). The better lesson, which Jobs embodied in his life, is to seek places and directions to poke where your impact can be amplified.

Adam Savage on the path from simple ideas to discovery

Here’s a relatively short talk by Adam Savage of Mythbusters at TED-Ed on how curiosity and simple ideas lead to discovery. Savage is an excellent storyteller and these are stories worth hearing.

I’m particularly taken with his observation that it can sometimes take years before you understand why your brain holds on to a particular idea.

A hat tip to the always excellent FlowingData for pointing this one out.

Rethinking data and decisions – Big Data’s Impact in the World – NYTimes.com

The New York Times recently ran an excellent overview of the evolving state of data analytics.

Big Data’s Impact in the World – NYTimes.com: “The story is similar in fields as varied as science and sports, advertising and public health – a drift toward data-driven discovery and decision-making. “It’s a revolution,” says Gary King, director of Harvard’s Institute for Quantitative Social Science. “We’re really just getting under way. But the march of quantification, made possible by enormous new sources of data, will sweep through academia, business and government. There is no area that is going to be untouched.”

Welcome to the Age of Big Data. The new megarich of Silicon Valley, first at Google and now Facebook, are masters at harnessing the data of the Web – online searches, posts and messages – with Internet advertising. At the World Economic Forum last month in Davos, Switzerland, Big Data was a marquee topic. A report by the forum, “Big Data, Big Impact,” declared data a new class of economic asset, like currency or gold.”

This is the latest iteration in the ongoing interplay between judgment and evidence in decision making, which means it’s worth considering how this argument has evolved over time and how new discoveries, technologies, and techniques could change the issues or lastingly alter how we go about making decisions.

Probability theory traces its roots to a conversation between Blaise Pascal and Pierre de Fermat over how to divide the pot in a game of chance that had to be abandoned before it could be finished. Could you estimate the relative chances of each player winning based on the current state of the game and use those estimates to fairly distribute the pot? In other words, how can you use the evidence at hand to make a better decision?
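
To make their question concrete, here is a minimal sketch of the classic “problem of points” they debated. The function name and the assumption that each remaining round is a fair 50/50 are mine, not from the historical correspondence:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_first_player_wins(a: int, b: int) -> float:
    """Probability the first player wins the match, given they still
    need `a` round wins, the opponent needs `b`, and each remaining
    round is a fair 50/50."""
    if a == 0:
        return 1.0  # first player has already won
    if b == 0:
        return 0.0  # opponent has already won
    return 0.5 * (p_first_player_wins(a - 1, b) +
                  p_first_player_wins(a, b - 1))

# The classic interrupted game: first to 3 round wins, stopped at 2-1.
# The leader needs 1 more win, the trailer needs 2.
print(f"Leader's fair share: {p_first_player_wins(1, 2):.0%}")  # 75%
```

The recursion just averages over the two ways the next round could go; for a first-to-three game interrupted at 2-1, it reproduces the 3:1 split Pascal and Fermat arrived at. That is the move from raw evidence (the current score) to a better decision (the fair division).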

When I was getting an undergraduate degree in probability and statistics (a long time ago), the core issues centered on what inferences you could draw about the real world from limited samples. What kinds of errors and mistakes did you need to protect yourself from? What precautions were appropriate to keep you from going beyond the data? The tools would always find some pattern in the data and we were repeatedly cautioned to take care not to see things that weren’t there.

A few years later, in graduate school, I revisited the topic in various required methods and analytical tools courses. The software tools were more powerful and were still capable of finding the slightest hint of a pattern in the noise. The faculty offered their obligatory cautions, but I watched plenty of students wreaking intellectual havoc with their new power tools, spinning conclusions from the thinnest threads of pattern in the data. For every hundred MBAs who learned to run a multivariable regression, one might read Darrell Huff’s How to Lie with Statistics.

Today, the tools continue to become more and more powerful at teasing out patterns from the data. At the same time, the exponential growth in available data means that we aren’t sampling so much as we are searching for patterns in the population as a whole. What is the lag between the power of our analytical tools and our capacity to apply sound judgment to the results?
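
As a sketch of that lag in miniature (my own illustration, using only NumPy, with all names and numbers invented for the example): generate a purely random outcome and a thousand purely random predictors, and the tooling will happily report dozens of “significant” correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_predictors = 100, 1000

# Pure noise: neither the predictors nor the outcome contain any signal.
X = rng.standard_normal((n_samples, n_predictors))
y = rng.standard_normal(n_samples)

# Correlate every predictor with the outcome.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_predictors)])

# For n = 100, |r| > ~0.197 is "significant" at the usual 5% level.
# With 1,000 shots at that threshold, dozens of predictors clear it
# by chance alone.
threshold = 0.197
print(f"Strongest correlation found: {np.abs(corrs).max():.3f}")
print(f"'Significant' predictors: {(np.abs(corrs) > threshold).sum()} of {n_predictors}")
```

Roughly five percent of the predictors clear the nominal threshold by chance alone, which is exactly the multiple-comparisons trap those classroom cautions were aimed at.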

Here are some of the questions I am beginning to explore:

  1. How does statistical inference change as we move from small, representative samples to all, or most, of the population of interest?
  2. How do we distinguish between patterns in the data that are spurious and patterns that reveal important underlying drivers?
  3. When is an arbitrary or spurious correlation good enough to support a business course of action? (Amazon doesn’t, and probably shouldn’t, care why “other people who bought title X also bought title Y.” Calling my attention to title Y leads to incremental sales; who needs a causal model?)
  4. How does our deepening understanding of the limits and biases of human decision making connect to the opportunities presented in “Big Data”? Here, I’m thinking of the work of Dan Ariely on behavioral economics and Daniel Kahneman on decision making.

I would value pointers and suggestions on where to look next for answers or insight.

Richard Feynman On The Folly Of Crafting Precise Definitions

I’ve often struggled with the notion of definitions when working in organizations. On the one hand, too many of us hide our ignorance and uncertainty behind a wall of jargon and terminology. Terms fall in and out of favor and their relationship to the underlying real world is often less important than their value from a marketing perspective.

On the other hand, new terms and language can help us point to and see new ideas and new opportunities for action. Here’s a recent post from Bob Sutton that sheds light on these challenges and is worth thinking about.

One of my best friends in graduate school was a former physics major named Larry Ford. When behavioral scientists started pushing for precise definitions of concepts like effectiveness and leadership, he would sometimes confuse them (even though Larry is a very precise thinker) by arguing “there is a negative relationship between precision and accuracy.” I just ran into a quote from the amazing Nobel winner Richard Feynman that makes a similar point in a lovely way:

We can’t define anything precisely. If we attempt to, we get into that paralysis of thought that comes to philosophers, one saying to the other: “you don’t know what you are talking about!” The second one says: “what do you mean by talking? What do you mean by you? What do you mean by know?”

Feynman’s quote reminded me of the opening pages of the 1958 classic “Organizations” by James March (quite possibly the most prestigious living organizational theorist, and certainly one of the most charming academics on the planet) and Herbert Simon (another Nobel winner). They open the book with a great quote that sometimes drives doctoral students and other scholars just crazy. They kick off by saying:

“This is a book about a theory of formal organizations. It is easier, and probably more useful, to give examples of formal organizations than to define them.”

After listing a bunch of examples of organizations including the Red Cross and New York State Highway Department, they note in words that would have pleased Feynman:

“But for the present purposes we need not trouble ourselves with the precise boundaries to be drawn around an organization or the exact distinction between an “organization” and a “non-organization.” We are dealing with empirical phenomena, and the world has an uncomfortable way of not permitting itself to be fitted into clean classifications.”

I must report, however, that for the second edition of the book, published over 20 years later, the authors elected to insert a short definition in the introduction:

“Organizations are systems of coordinated action among individuals and groups whose preferences, information, interests, or knowledge differ.”

When I read this, I find myself doing what Feynman complained about. I think of things they left out: What about norms? What about emotions? I think of situations where it might not apply: Doesn’t a business owned and operated by one person count as an organization? I think of the possible overemphasis on differences: What about all the times and ways that people and groups in organizations have similar preferences, information, interests, and knowledge? Isn’t that part of what an organization is as well? I could go on and on.

I actually think it is a pretty good definition, but my bias is still that I like the original approach, as they did such a nice job of arguing, essentially, that if they tried to get more precise, they would sacrifice accuracy. Nonetheless, I confess that I still love trying to define things and believe that trying to do so can help clarify your thinking. You could argue that while the outcome, in the end, will always be flawed and imprecise, the process is usually helpful and there are many times when it is useful to pretend that you have a precise and accurate definition even if you don’t (such as when you are developing metrics).

Richard Feynman On The Folly Of Crafting Precise Definitions – Bob Sutton

Understanding the world around you – more insights from Richard Feynman

Another gem from Richard Feynman. In this clip he uses the game of chess to illustrate how scientists go from making observations about the world to better and better theories that account for the observations. There’s a lot of depth in this simple analogy and it’s well worth dedicating some of your own brain cycles to following Feynman’s reasoning.

Your morning dose of Feynman – Boing Boing, by Maggie Koerth-Baker: “Richard Feynman, God of Perfect Analogies, explains why it’s not a failure or a scandal when scientists adapt and change their understanding of the world. This is a really important point, applicable in a lot of public debates over science, especially those focused on evolution and climate change. Science isn’t about writing things on tablets of stone. It’s about taking a theory and constantly digging deeper into it, adding layers of nuance, finding stuff that doesn’t make sense, and using both to build a more complete picture. Even if the big idea is right, the details will change. That’s how science is supposed to work.”

Via W. Younes

Collaborating Minds

Some details about what my partner in collaboration, David Friedman, and I have been up to lately.

For the past few months, my colleague Jim McGee and I have been hard at work on a project we’ve named Collaborating Minds. It will be an online problem-solving community — with a unique membership recruiting strategy. The goal is to create a resource that will be able to assist organizations with hard problems by providing rich insights and multiple perspectives. It’s a marriage of some of the ideas of crowdsourcing with the principles that make for high-performance teams. It’s an example of getting more people to work together better, a topic I wrote about a while back.

Collaborating Minds’ main assets will be:

  1. Its network of 500-700 part-time participants
  2. Its approach to community building and structured problem-solving, and
  3. Its software platform that supports and enables the community building and structured problem-solving. 

The people will be recruited and selected based on their interest and ability to work together in the community in just the way the software platform allows. They will include people from a very diverse set of backgrounds. We’ll have scientists of various stripes, engineers of various types, humanists, consultants, experts in all kinds of fields. So in that respect it will be like crowdsourcing.

The community and the problem-solving will be actively managed, and the members will be expected to get to know at least some of the other community members outside the context of the specific problems we are working on. Community members will help each other on their own issues and challenges, and can use the problem-solving tools provided to do so if they like.

The software platform includes a social network of a particular kind, and a structured problem-solving process and spaces for the problem-solving to occur. The problem-solving method will combine structured asynchronous elements and structured synchronous elements (online meetings). We also will have an alternative free-form option for members to use when the structure isn’t right for the problem at hand.

There’s a lot more info available now at the Collaborating Minds site. We are almost finished with the alpha version of the software platform and are starting to talk with people about recruiting and membership. We have a lot of unanswered questions (e.g., precise target markets, compensation, and governance) and probably some wrong answers to others. One of the best things about this idea, though, is that we can aim our group at ourselves; if this sort of group can generate insightful and powerful solutions to hard problems (which I believe it can), then it can help us solve the issues that remain ahead.

Collaborating Minds
David Friedman
Fri, 24 Jun 2011 19:07:11 GMT

Choosing to be productively stupid

Finally had a chance to read a very interesting essay in the Journal of Cell Science titled "The Importance of Stupidity in Scientific Research." In the wondrous ways of the web, this little gem from 2008 found its way into my life by way of a blog post by Matthew Cornell in January of this year. Here’s the key notion, but the whole thing is worth the time to read and to consider:

Productive stupidity means being ignorant by choice. Focusing on important questions puts us in the awkward position of being ignorant. One of the beautiful things about science is that it allows us to bumble along, getting it wrong time after time, and feel perfectly fine as long as we learn something each time. No doubt, this can be difficult for students who are accustomed to getting the answers right. No doubt, reasonable levels of confidence and emotional resilience help, but I think scientific education might do more to ease what is a very big transition: from learning what other people once discovered to making your own discoveries. The more comfortable we become with being stupid, the deeper we will wade into the unknown and the more likely we are to make big discoveries.

From The importance of stupidity in scientific research

This willingness to move forward without knowing has made for much of the progress we’ve seen and benefited from in the science and technology realm. I wish I saw more of that same willingness manifest in business, education, and elsewhere. Maybe we’d learn to be more comfortable listening to people with provocative and productive questions and less willing to pay attention to people peddling the illusion of right answers.