Andy Kessler is a former Wall Street investment analyst turned author. He learned his trade following Silicon Valley and its successful, long-term obsession with Moore's Law. In that world, as technology scales, costs fall predictably and new markets emerge. In The End of Medicine, Kessler turns to the world of health care and medicine to discover how and where that underlying investment model might apply. It's an interesting premise and, despite some annoying stylistic quirks, Kessler delivers real value. The book doesn't get to anything remotely like an answer, but it collects and organizes a lot of useful information that might help us get closer to one.
Kessler opts for a highly anecdotal style, presumably to put a more human face on a large, complex subject. For me, he overshoots the mark and loses the big picture. The color commentary overwhelms the underlying story line, which was my primary interest. But there is a good story line here that is worth finding and holding on to.
Medicine's roots are in making the sick and injured better. Triage is baked into the system at all levels: observe symptoms, diagnose the problem, apply treatment, repeat. Over time we've increased our capacity to observe symptoms and have gotten more sophisticated in the treatments we can apply, but the underlying logic is still based on pathology. Over that same time, a collection of industries has evolved around this core logic, and these industries have grown in particularly organic and unsystematic ways.
Kessler runs into these roots and this logic throughout his journey. However, coming from the semiconductor and computer industries, as he does, he doesn’t fully pick up on their relevance. As industries, computers and semiconductors are infants compared to medicine and health care. Not only do Kessler’s industries operate according to Moore’s Law, but they are structurally designed around it. His analysis of health care identifies a number of crucial pieces, but he stops short of assembling a picture of the puzzle.
Kessler focuses much of his attention on developments in imaging and diagnostics. Both areas have seen tremendous advances and hold out the promise of continued technological development similar to what we've seen in semiconductors.
Imaging is computationally intensive and benefits directly from advances in computing technology. What is far less clear is whether the current structure of the health care industry can absorb advances in imaging technologies at a pace that lets Moore's Law play out in full force.
There is a second problem with imaging technologies, one that applies equally to other diagnostic improvement efforts. As we get better and better at capturing detail, we run into the problem of correctly distinguishing normal from pathological. While we may know what a tumor looks like on a mammogram, what we really want to know is whether that fuzzy patch is an early warning sign of a future tumor or something we can safely ignore. The better we get at detecting and resolving the details of smaller and smaller fuzzy patches, the more we run into the problem of false positives: indicators of what might be a tumor that turn out, on closer inspection, to be false alarms. Our health care system is organized around pathologies; we fix things that are broken. Because of that, the data samples we work with are skewed: we have a much fuzzier picture of what normal looks like than of what broken looks like.
This is the underlying conceptual problem that efforts to improve diagnostics and early detection have to tackle. Kessler devotes much of his later stories to this problem. He profiles the work of Don Listwin, a successful Silicon Valley entrepreneur, and his Canary Fund efforts. Here's the conundrum. If you detect cancers early, treatment is generally straightforward and highly successful. If you catch them late, treatment is difficult and success is uncertain. Figuring out how to reliably detect cancer early has a huge potential payoff.
The kicker is the word "reliably" and the problem of false positives, especially as you begin screening larger and larger populations. If you have a test that is 99% accurate, then for every 100 people you screen you will get the answer wrong for one person. The test will either report a false positive – that you have cancer when, in fact, you don't – or a false negative – that you are cancer-free when you aren't. As you pursue early detection, false positives become the bigger problem: because almost everyone in a screened population is healthy, most of the errors fall on healthy people. Screen a million people and you will have 10,000 mistakes to deal with, the vast majority of which will be false positives. That represents a lot of worry and a lot of unnecessary expense to get to the right answer.
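The screening arithmetic above can be made concrete with a quick back-of-the-envelope calculation. This sketch assumes an illustrative prevalence of 0.5% (5 cases per 1,000 people screened) and treats "99% accurate" as 99% sensitivity and 99% specificity; none of these specific figures come from the book.

```python
# Back-of-the-envelope screening math for a "99% accurate" test.
# Prevalence of 0.5% is an assumed, illustrative figure.
population = 1_000_000
prevalence = 0.005     # assumed fraction who actually have cancer
sensitivity = 0.99     # P(positive result | cancer)
specificity = 0.99     # P(negative result | no cancer)

sick = population * prevalence          # 5,000 people
healthy = population - sick             # 995,000 people

true_positives = sick * sensitivity
false_negatives = sick - true_positives             # missed cancers
false_positives = healthy * (1 - specificity)       # healthy people flagged

total_errors = false_negatives + false_positives
# Chance that a positive result actually means cancer:
ppv = true_positives / (true_positives + false_positives)

print(f"false positives: {false_positives:,.0f}")
print(f"false negatives: {false_negatives:,.0f}")
print(f"total errors:    {total_errors:,.0f}")
print(f"positive result is real: {ppv:.1%}")
```

Under these assumptions the million-person screen produces roughly 10,000 errors, about 9,950 of them false positives, and only about one in three positive results is a true cancer. That is the "lot of worry and unnecessary expense" the paragraph describes.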
Kessler brings us to this point but doesn’t push through to a satisfactory analysis of the implications. Implicitly, he leaves it as an exercise for the reader. His suggestion is that this transition will present an opportunity for the scaling laws he is familiar with to operate. I think that shows an insufficient appreciation for the complexities of industry structure in health care. Nonetheless, Kessler’s book is worth your time in spite of its flaws.