David Reed on What The Internet Is, and Should Continue To Be – #ccourses

David Reed is one of the creators of the underlying protocols and design that is the Internet. Here is a lengthy reflection from David on why those design decisions worked so well and why we ought to be very cautious about messing with them. It’s long and a bit dense. Read it anyway. The better you understand what David is saying here, the better you will be able to navigate and leverage the world he helped create. This is the world we live in; understand it.

The Internet must be fit to be the best medium of discourse and intercourse [not just one of many media, and not just limited to democratic discourse among humans]. It must be fit to be the best medium for commercial intercourse as well, though that might be subsumed as a proper subset of discourse and intercourse.

Which implies interoperability and non-balkanization of the medium, of course. But it also implies flexibility and evolvability – which *must* be permissionless and as capable as possible of adapting to as-yet-unforeseen uses and incorporating as-yet-unforeseen technologies.

I’ve used the notion of a major language of inter-cultural interaction, like English, Chinese, or Arabic, as an explicit predecessor and model for the Internet’s elements – its protocols and subject matter, its mechanism of self-extension, and its role as a “universal solvent”.

We create English or Chinese or Arabic merely by using it well. We build laws in those frameworks, protocols of all sorts in those frameworks, etc.

But those frameworks are inadequate to include all subjects and practices of discourse and intercourse in our modern digital world. So we invented the Internet – a set of protocols that are extraordinarily simple and extraordinarily independent of medium, while extensible and infinitely complex. Natural languages are mature, but they have run into a limit: they cannot serve as a framework for all forms of digital information. One cannot encode a photograph for transmission in English, yet one can in the framework we have built beginning with the Internet’s IP datagrams, addressing scheme, and agreed-upon mechanics.

The Internet and its protocols are sufficient to support an evolving and ultimately ramifying set of protocols and intercourse forms – ones that have *real* impact beyond jurisdiction or “standards body”.

The key is that the Internet is created by its users, because its users are free to create it. There is no “governor” who has the power to say “no” – you cannot technically communicate that way or about that. 

And the other key is that we (the ones who began it, and the ones who now add to it every day, making it better) have proven that we don’t need a system that draws boundaries, says no, and proscribes evolution in order to have a system that flourishes. 

It just works.

This is a shock to those who seem to think that one needs to hand all the keys to a powerful company like the old AT&T or to a powerful central “coordinating body” like the ITU, in order for it not to fall apart.

The Internet has proven that the “Tower of Babel” is not inevitable (and it never was), because communications is an increasing returns system – you can’t opt out and hope to improve your lot. Also because “assembly” (that is, group-forming) is an increasing returns system. Whether economically or culturally, the joint creation of systems of discourse and intercourse *by the users* of those systems creates coherence while also supporting innovation.
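Reed’s point that group-forming is an increasing-returns system is often formalized as “Reed’s law.” As a rough illustration (my own sketch, not part of the original post), compare three classic models of how a network’s potential value grows with its size n: broadcast reach grows linearly (Sarnoff), pairwise connections grow as roughly n² (Metcalfe), and the number of possible subgroups grows as 2^n (Reed):

```python
# Illustrative sketch of three network-value scaling laws. The function
# names are my own labels for the conventional models, not code from
# the post being discussed.

def sarnoff(n: int) -> int:
    # Broadcast value: proportional to audience size.
    return n

def metcalfe(n: int) -> int:
    # Pairwise-connection value: n * (n - 1) / 2 possible links.
    return n * (n - 1) // 2

def reed(n: int) -> int:
    # Group-forming value: 2^n - n - 1 non-trivial subgroups
    # (all subsets minus the singletons and the empty set).
    return 2**n - n - 1

for n in (5, 10, 20):
    print(n, sarnoff(n), metcalfe(n), reed(n))
```

Even at modest n, the group-forming term dwarfs the other two, which is one way to see why opting out of a shared communications system only worsens your lot.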

The problem (if we have any) is those who are either blind to that, or willfully reject what has been shown now for at least 30 years – that the Internet works.

There is also too much (mis)use of the Fallacy of Composition, which has allowed the Internet to be represented as merely what happens when you have packets rather than circuits, or merely what happens when you choose to adopt certain formats and bit layouts. That’s what the “OSI model” is often taken to mean: a specific design document that sits sterile on a shelf, ignoring the dynamic and actual phenomenon of the Internet. A thing is not what it is, at the moment, made of. A river is not the water molecules that currently sit in the river. This is why neither the owners of the fibers and switches nor the IETF can make the Internet safe or secure – that idea is just another Fallacy of Composition. [footnote: many instances of the “end-to-end argument” are arguments based on a Fallacy of Composition]

The Internet is not the wires. It’s not the wires and the fibers. It’s never been the same thing as “Broadband”, though there has been an active effort to confuse the two. It’s not the packets. It’s not the W3C standards document or the IETF’s meetings. It’s NONE of these things – because those things are merely epiphenomena that enable the Internet itself. 

The Internet is an abstract noun, not a physical thing. It is not a frequency band or a “service” that should be regulated by one of the service-specific offices of the FCC. It is not a “product” that is “provided” by a provider.

But the Internet is itself, and it includes and is defined by those who have used it, those who are using it and those who will use it.

[dpr: What the Internet Is, and Should Continue to Be]

Rethinking organizational functions and components in a freelance economy

A story on NPR this morning about Grind, a new co-working start-up, raises some intriguing questions about where organizations may be evolving in an increasingly freelance economy.



JaegerSloan: Workers share office space at Grind, a co-working company in New York City. Those who want to use Grind’s facilities are vetted through a competitive application process.

April 10, 2012

The recession brought widespread unemployment across the U.S., but it also prompted a spike in the number of freelance or independent workers.

More than 30 percent of the nation’s workers now work on their own, and the research firm IDC projects the number of nontraditional office workers — telecommuters, freelancers and contractors — will reach 1.3 billion worldwide by 2015.

Typically, freelancers get to choose when and where they work. Many opt to set up shop in “co-working” arrangements, where they can rent a cubicle and other office resources by the day or the month.

It was once a relatively simple process to sign up with a co-working site.

But now, more companies are adopting a selective approach known as “curated co-working.” One such company, New York City’s Grind, requires an application — and you have to be accepted to get started.

That means some would-be co-workers will find they don’t make the cut…

“For Freelancers, Landing A Workspace Gets Harder,” NPR, by Kaomi Goetz

(Someday I will produce a rant about the overuse of the word “curated.”)

Two interesting questions come to mind:

  1. How will the application and profile process evolve? We are all social animals. We also have a pretty solid understanding of what differentiates successful groups and successful teams. As freelancers and as potential co-workers, will we become more mindful about how we manage our associations?
  2. Grind is testing the hypothesis that there is value in filtering the freelancers who will have access to their space. Is this a leading indicator that the physical, social, psychological, and economic functions of the organization can be effectively decomposed and rearranged in new formats?

It’s certainly time to reread Ronald Coase’s The Nature of the Firm. I might also take a look at Jay Galbraith’s Designing Organizations and Bob Keidel’s Seeing Organizational Patterns.

Where IS Health Care Going? Technology Leader’s Presentation

Last week, JoAnn Becker and I ran an interactive discussion at the monthly TLA Manager’s breakfast meeting here in Chicago. We had a lively and excellent debate among a group of technology executives, health care executives, and other smart people about the real challenges of successfully deploying information technology to improve productivity and quality in delivering health care in this country.

That, of course, is an immense issue, and we could barely scratch the surface in the hour we had. For those who are interested, we’ve uploaded our slides to Slideshare.


We used two recent TV ads from GE and IBM to kick off the discussion. On the surface, each provides a sense for the promise of information technology to make health care more effective:

GE TV ad – Doctors
IBM TV Ad – “Data Baby”

In the tradition of all good technology vendor advertising, both also completely gloss over the complex organizational adaptation and evolution necessary to bring these hypothetical worlds into being. They also gloss over the existing institutional and industry complexity that needs to be understood and addressed through a combination of design, leadership, and management.

Fred Brooks, professor of computer science at UNC and author of The Mythical Man-Month: Essays on Software Engineering, draws a critical distinction between accidental and essential complexity in the book’s final chapter, “No Silver Bullet.” His point is that software is so difficult to design and develop because it must successfully model the essential complexity of the domain it addresses. Technology and software efforts can stumble on a variety of barriers and roadblocks, but failing to understand and address essential complexity is the worst.

Health care provides its own mix of accidental and essential complexity. If the decision makers aren’t careful to draw distinctions between accidental and essential, then a great deal of time and effort will be expended without corresponding returns. On the one hand, we may simply succeed in "speeding up the mess," as my friend Benn Konsynski liked to put it. Or we may obliterate essential complexities in a quest for uniformity and productivity that is blind to those complexities. Or, finally, we may invest the appropriate level of design time and talent in systems that account for essential complexity and eliminate accidental complexity.


We drew on a variety of excellent resources in preparing for this talk and wanted to make them more easily available here.

Here are several books that provide useful context and background

Here are pointers to a variety of health care related web resources worth paying attention to:

Fred Brooks on the Design of Design

The Design of Design: Essays from a Computer Scientist, Brooks, Frederick P.

Currently a professor of computer science at the University of North Carolina, Fred Brooks led the development of IBM’s System/360 and its operating system. He’s the author of The Mythical Man-Month : Essays on Software Engineering, which remains one of the best books on project management in the real world. In The Design of Design,  Brooks reflects on what he has learned about the problems of design over the course of his long and distinguished career. He combines his reflections with case studies drawn from multiple design efforts. Here is his justification for adding one more volume to the growing literature about design:

the design process has evolved very rapidly since World War II, and the set of changes has rarely been discussed. Team design is increasingly the norm for complex artifacts. Teams are often geographically dispersed. Designers are increasingly divorced from both use and implementation — typically they no longer can build with their own hands the things they design. All kinds of designs are now captured in computer models instead of drawings. Formal design processes are increasingly taught, and they are often mandated by employers.

I believe a "science of design" to be an impossible and indeed misleading goal. This liberating skepticism gives license to speak from intuition and experience — including the experience of other designers who have graciously shared their insights with me.  [The Design of Design, pp.xi-xii]

Brooks begins with a look at various rational, engineering-centric, models of the design process including Herbert Simon’s view of design as a search process and various waterfall models of software development. His take, and mine, is that these models bear only a passing resemblance to how real designers actually do design. Whatever value they might have as reminders to experienced designers is outweighed by the risks they pose in the hands of those without the necessary experience base to appreciate their limitations.

Brooks frames the design process problem this way:

  • If the Rational model is really wrong,
  • If having a wrong model really matters, and
  • If there are deep reasons for the long persistence of the wrong model,

then what are better models that

  • Emphasize the progressive discovery and evolution of design requirements,
  • Are memorably visualized so that they can be readily taught and readily understood by team and stakeholders, and
  • Still facilitate contracting among fallen humans? [p. 52]

Brooks thinks that something along the lines of Barry Boehm’s Spiral Model of software development will best meet these criteria.

In the middle section of his book, Brooks explores a variety of topics and issues relating to design, including:

  • when collaboration is useful vs. when it is not
  • conceptual integrity
  • identifying the core budgeted constraint (rarely money)
  • finding and developing great designers

In the final section, Brooks examines several cases in depth.

As a series of essays and reflections, this book is most valuable to those who have wrestled with design problems of their own. Given the frequency with which all of us are presented with design problems, Brooks’ reflections on real design problems offer many useful insights. Among the insights that I will be mulling over:

  • The boldest design decisions, whoever made them, have accounted for a high fraction of the goodness of the outcome.
  • Great designs have conceptual integrity – unity, economy, clarity. They not only work, they delight.
  • An articulated guess beats an unspoken assumption.
  • Wrong explicit assumptions are much better than vague ones.
  • If a design, particularly a team design, is to have conceptual integrity, one should name the scarce resource explicitly, track it publicly, and control it firmly.

Can you design business models? A review of "Seizing the White Space."

[cross posted at FASTforward blog]

Seizing the White Space: Business Model Innovation for Growth and Renewal, Johnson, Mark W.

What is a "business model" and can you create a new one in a systematic and disciplined way? That’s the question that Mark Johnson, chairman of the consulting firm Innosight, sets for himself in Seizing the White Space.

The term entered the popular business lexicon during the dotcom boom in the late 1990s. There wasn’t any particular definition behind the term at the outset. Effectively, it was shorthand for the answer to question zero about any business – "How are we planning to make money?" Before the dotcom boom, nine times out of ten, the answer was "we’ll copy what Company X is doing and execute better than they do." During the boom, the answer seemed to be "we have absolutely no idea, but it’s going to be great." Now we recognize that both of those answers are weak and that we need some theory to design answers that are likely to be successful.

Over the last decade and a half, there’s been a steady stream of excellent thought and research focused on building that theory. One of the major tributaries in that stream has been the work of Clay Christensen on disruptive innovation. Christensen and his colleagues, including Johnson, have been engaged in a multi-year action research program working out the details and practical implications of the theory of disruptive innovation. Seizing the White Space is the latest installment in this effort and is best understood if you’ve already invested in understanding what has come before.

Johnson starts with a definition of white space as

the range of potential activities not defined or addressed by the company’s current business model, that is, the opportunities outside its core and beyond its adjacencies that require a different business model to exploit


Why do organizations need to worry about white space? Even with success at exploiting their current business model and serving existing customers, organizations reach a point where they can’t meet their growth goals. Many an ill-considered acquisition has been pursued to plug this growth gap. Haphazard efforts at innovations to create new products or services or enter new markets get their share of the action.

Johnson combines an examination of white space and business models in an effort to bring more order and discipline to the challenge of filling those growth gaps. One implication of this approach is that the primary audience for his advice is existing organizations with existing successful business models. He is less interested in how disruptive innovation processes apply in start-up situations.

Johnson’s model of business models is deceptively simple. He illustrates it with a four-box diagram: the customer value proposition, the profit formula, key resources, and key processes.


Johnson expands the next level of detail for each of these elements. Most of that is straightforward. More importantly, this model places its emphasis on the importance of balancing each of these elements against the others.

In the middle third of the book, Johnson takes a deeper look at white space, dividing it into white space within, beyond, and between, which correspond to transforming existing markets, creating new markets, and dealing with industry discontinuity. The scheme is a bit clever for my tastes, but it does provide Johnson with the opportunity to examine a series of illuminating cases, including Dow Corning’s Xiameter, Hilti’s tool management and leasing program, Hindustan Unilever’s Shakti Initiative, and Better Place’s attempt to reconceptualize electric vehicles. While the organization of the stories is a bit too clever, it does serve a useful purpose: it takes a potentially skeptical reader from the familiar to the unfamiliar as they wrap their heads around Johnson’s ideas.

With a basic model and a collection of concrete examples in hand, the last third of the book lays out an approach to making business model innovation a repeatable process. This process starts from what has evolved into a core element of Christensen’s theories – the notion of "jobs to be done." This is an update on Ted Levitt’s old marketing saw that a customer isn’t in the store to buy a drill but to make a hole. The problem is that most established marketers forget Levitt’s point shortly after they leave business school and get wrapped up instead in pushing the products and services that already exist. "Jobs to be done" is an effort to persuade organizations to go back to the necessary open-ended research about customer behavior and needs that leads to deep insight about potential new products and services.

With insight into potential jobs to be done, Johnson’s four-box model provides the structure to design a business model to accomplish the job to be done. In his exposition, he works his way through each of the four boxes, offering up suggestions and examples at each point. With a potentially viable design in hand, he shifts to considerations of implementation and, here, emphasizes that the early stages of implementation need to focus on testing, tuning, and revising the assumptions built into the prospective business model.

Johnson clearly understands that creating a new business model is a design effort, not an execution effort. Seizing the White Space puts shape and structure underneath this design process. All books represent compromises. The compromise that Johnson has made is to make this design process appear more linear and structured than it can ever be in practice. He knows that it isn’t, as shown by his emphasis on the need to balance the elements of a business model and to learn during the early stages of implementation. There’s a reason that the arrows in his four-box model flow both ways. I’m not sure every reader will pick up on that nuance.

He also clearly points out the role of learning from failures as well as successes during implementation. But the demands of fitting the story into a finite space again undercut this central lesson. The models here will go a long way toward making business model design more manageable, but they can’t make it neat and orderly.

This review is part of a "blogger book tour" that Renee Hopkins, editor of Strategy and Innovation and Innoblog, arranged.

Previous stops on the tour:

Upcoming stops

If you’re interested in digging deeper into the work of Clay Christensen and his posse, here are some previous posts where I’ve pulled together some reviews and pointers. I hope you find them helpful.

Applying End-to-End Design Principles in Social Networks

Partial map of the Internet based on the Janua... (image via Wikipedia)

Andy Lippman of MIT’s Media Lab offers provocative examples of learning how to think in network terms when designing services, in a recent blog post from the Communications Futures Program at MIT. At the very heart of the Internet’s design is a notion called the end-to-end principle. The best network is one that treats all nodes in the network identically and pushes responsibility for decisions out to the nodes. Creating special nodes in the network and centralizing decisions in those nodes makes the network as a whole work less well.
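The end-to-end idea can be made concrete with a toy sketch (my own illustration, with hypothetical function names, not code from Lippman or the original end-to-end paper): the channel in between is deliberately “dumb,” and all of the reliability logic lives at the endpoints.

```python
# Toy sketch of endpoint-implemented integrity checking. The network in
# between just carries bytes and may corrupt or drop them; it needs no
# knowledge of the check the endpoints agree on.
import zlib

def endpoint_send(payload: bytes) -> bytes:
    # Sender endpoint appends its own CRC-32 integrity check.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def endpoint_receive(frame: bytes):
    # Receiver endpoint verifies the check and rejects damaged frames,
    # returning None to signal that the sender should retransmit.
    payload, check = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != check:
        return None
    return payload

frame = endpoint_send(b"hello")
print(endpoint_receive(frame))              # intact frame is accepted
print(endpoint_receive(b"x" + frame[1:]))   # corrupted frame is rejected
```

Because the check travels with the data from one end to the other, no special node inside the network has to be trusted to get it right, which is the heart of the argument.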

In this essay, Lippman explores that notion by looking at examples of existing and potential services in telecommunications networks that could be improved by trusting the end-to-end principle more fully. Lippman takes a look at emergency services such as 911 calls in the US. As currently designed, these services allow individuals to reach a centralized dispatch center in the event of an emergency.

Emergencies are no longer solely about getting help for a fire or heart attack. Nor are they purely personal affairs, directed at or for a single individual. Consider the recent attempted attack on a Detroit-bound airplane where passengers provided the service (saving the plane). Early reports portrayed this as a fine solution. Indeed, there is discussion that the best result of increased airline security is that it has made people aware of the fact that they all have to pitch in to help when it is needed; they can no longer just rely on a remote entity to solve the problem for them.

End-to-End Social Networks
Andy Lippman
Fri, 01 Jan 2010 21:10:36 GMT

Lippman makes the point that we can benefit from thinking about ways to mobilize the network as a whole as an alternative to using it to direct messages to some centralized authority. Continuing to impose hierarchical notions on top of network designs risks missing other, potentially more powerful, options. We have a set of powerful new tools and ideas that we have yet to fully exploit.

The design reasoning that underlies the engineering of the Internet is applicable in organizational settings as well. Lippman’s examples are a good place to start in thinking how to apply them effectively.


Emergent behavior and unintended consequences in social systems

One of the defining characteristics of Enterprise 2.0 implementation efforts, according to Andy McAfee among others, is the presence of emergent behaviors in the organization as participants interact with and adapt to new technology functions and features. The notion of ‘emergent behavior’ is well established in the study of complex systems. Yet it still seems to trouble many executives, particularly those with strong project management and operations backgrounds.

I was pondering this over the weekend and I think I’ve found a way to explain it in a more satisfying way.

Emergent behaviors are unintended consequences that make you happy.

We are social animals that have evolved to operate optimally in small groups (check out Dunbar’s number). As social systems get larger, they exceed our capacity to make accurate inferences and predictions. Complex organizations and political entities represent design solutions that compensate for these limits and allow us to take on tasks and efforts beyond the grasp of small groups. Technology adds to the complexity and increases the capacity of the system at the expense of making the system still more difficult to predict.

‘Unintended consequences’ is a consulting term for ‘oops.’ It’s a belated admission that it’s difficult to predict all the ways in which a system will react to its environment. A typical response is to work more diligently to lock things down, usually by squeezing out opportunities for human judgment and adaptability. This leads to the TSA and to zero-tolerance policies that suspend six-year-olds.

A better response is to stop treating people like interchangeable components in a machine and start designing with an eye toward integrating human limits and human creativity into our systems. Assume that the new system will produce unexpected results. Focus your design effort more on swinging the balance toward pleasant surprises and less on eliminating surprises altogether.

Thinking in Systems: A Primer

Thinking in Systems: A Primer, Meadows, Donella

From time to time, I recommend Meadows’ article, Places to Intervene in a System. It’s a succinct summary of her long experience at finding leverage points for effective change in complex human and organizational systems. In this slim volume, she provides an accessible and understandable introduction to systems thinking in general, and "Places to Intervene" takes its place as the penultimate chapter.

We spend our days surrounded by and embedded in multiple, complex, interacting systems: transportation, education, health care, our employers, our customers, our suppliers. The systems we encounter are those that by design and by adaptation have found stable ways to operate and to survive.

Thinking in Systems explains why systems work the way they do and why our intuitions about them are so often wrong. Feedback loops drive system behavior. Positive feedback loops give us population explosions and Internet billionaires; negative feedback loops let us steer cars or regulate the temperature in our offices. Unrecognized feedback loops and lag times between action and response lead to most of the surprises we encounter with systems in the real world. What Meadows does here is make that all understandable and accessible with apt examples and clear explanations.
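The two loop types Meadows describes can be sketched in a few lines of toy simulation (my own illustration, not from the book): a positive feedback loop compounds on itself, while a negative feedback loop keeps correcting toward a goal.

```python
# Toy simulation of the two basic feedback-loop structures.

def positive_loop(stock: float = 1.0, rate: float = 0.1, steps: int = 50) -> float:
    # Each step adds a fraction of the current stock back to the stock:
    # exponential growth (population explosions, compounding fortunes).
    for _ in range(steps):
        stock += rate * stock
    return stock

def negative_loop(temp: float = 10.0, goal: float = 20.0,
                  gain: float = 0.3, steps: int = 50) -> float:
    # Each step closes a fraction of the gap to the goal:
    # a thermostat (or a driver steering back toward the lane center).
    for _ in range(steps):
        temp += gain * (goal - temp)
    return temp

print(positive_loop())   # grows without bound as steps increase
print(negative_loop())   # settles near the goal of 20.0
```

Adding a lag between sensing the gap and correcting it is exactly what produces the overshoot and oscillation Meadows identifies as the source of so many real-world surprises.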

Gary Hamel and innovations in management

The Future of Management, Hamel, Gary


Gary Hamel has been an astute observer of organizations and management for several decades now. For all the reasons that seemed to make sense at the time, this book sat on my shelf for a while before I got to it. Based on the current state of the economy, I suspect a number of executives who could have benefitted from Hamel’s insights also failed to get them in a timely fashion. Hamel’s central thesis is that management is a mature technology and is ripe for disruptive innovation. Although he makes only passing reference to Clay Christensen’s work, there are important points of linkage between these two management thinkers.

The underlying rationale behind management philosophy and practices was largely laid down in the early decades of the twentieth century during the growth and ascendancy of the large multi-divisional industrial organization. In other words, most managers continue to operate with the mindset and practices originally developed to handle the problems encountered by the railroads, GM, IBM, and the other organizations making up the Dow Jones average between 1930 and 1960. While we’ve experienced multiple innovations in products, technologies, services, and strategies, the basics of management have changed little. Here’s how Hamel puts it:

While a suddenly resurrected 1960s-era CEO would undoubtedly be amazed by the flexibility of today’s real-time supply chains, and the ability to provide 24/7 customer service, he or she would find a great many of today’s management rituals little changed from those that governed corporate life a generation or two ago. Hierarchies may have gotten flatter, but they haven’t disappeared. Frontline employees may be smarter and better trained, but they’re still expected to line up obediently behind executive decisions. Lower-level managers are still appointed by more senior managers. Strategy still gets set at the top. And the big calls are still made by people with big titles and even bigger salaries. There may be fewer middle managers on the payroll, but those that remain are doing what managers have always done–setting budgets, assigning tasks, reviewing performance, and cajoling their subordinates to do better. (p. 4)

Hamel sets out to explore what innovation in the practice of management would look like and how organizations and managers might tackle the problems of developing and deploying those innovations. I don’t think he gets all the way there, but the effort is worth following.

The first section of the book lays out the case for management innovation as compared to other forms. The second examines three organizations that Hamel considers worthy exemplars: Whole Foods, W.L. Gore, and Google. The last two sections build a framework for how you might start doing managerial innovation within your own organization.

Hamel does a good job of extracting useful insights from the case examples he presents. Hamel’s own preference is for a managerial future that is less hierarchical and less mechanical. At the same time, he wants each of us to commit to doing managerial innovation for ourselves. This leaves him in a bit of a bind. I suspect that Hamel would like to be more prescriptive, but his position forces him to leave the prescription as an exercise for the reader. While I agree with Hamel that both individuals and organizations need to be formulating their own theories of management and experimenting on their own, this is not likely to happen in most organizations, particularly in the current economic climate. Necessity is not the mother of invention; rather, it forces us to cling to the safe and familiar. We need a degree of safety and a degree of slack to do the kinds of thinking and experimenting that will produce meaningful managerial innovations. I fear that may be hard to come by in the current environment, no matter how relevant or necessary.

What you can do in the interim is the research and reflection needed to discover or define opportunities for possible managerial innovations. This book is one excellent starting point, but insufficient on its own.

Is this an agenda worth pursuing? What else would you recommend to move forward?