Three Bottlenecks in Healthcare Delivery
Why healthcare at escape velocity might not work at certain points.
If you had to characterize the modern mindset around AI in one word, it would be abundance. The going narrative is all about how much we'll be able to do, and how fast.
In fact, when you listen to those making these tools there’s a manic sense that we’re approaching a kind of escape velocity. Our shiniest, best days are just around the corner.
In healthcare the promise of the inflection point is that technology and computation will finally fix our problems. This solutionist mindset imagines not only making the system smoother, but smashing it to smithereens for a fresh start.
James Bridle, in New Dark Age: Technology and the End of the Future, labeled this accelerationism:
Acceleration itself is one of the bywords of the age. In the last couple of decades, a variety of theorists have put forward versions of accelerationist thought, advocating that technological processes perceived to be damaging society should not be opposed, but should be sped up - either to be commandeered and repurposed for socially beneficial ends, or simply to destroy the current order.
Last week Graham Walker dropped a great essay on the realities of abundance and the challenges it will lay bare in patient care. Technology, he suggests, will get faster and faster, but the touch points that involve judgment and synthesis will get choked.
I got to thinking about these non-negotiable choke points in human care. These are sometimes called bottlenecks — a kind of pejorative term for the things that throttle Silicon Valley’s Promethean vision of escape velocity.
Physical stuff
Let’s start with the most basic reality: Care for humans will always be grounded in the body itself (and mind). And no matter what the longevity folks sell us, or how well we optimize the system around the body, we will always heal at a neanderthal rate. Flesh is the ultimate bottleneck.
Beyond our ability to heal ourselves, there are things that need to be done physically — in person, on a gurney, in an OR, or wherever. Think IVs, infusions, operations, and the like. A 10-day course of oral antibiotics is physical, and it definitely doesn't scale.
Telemedicine showed us that when it comes to human bodies with lacerations and palpable masses, there’s a limit to what can be delivered virtually. So as long as we’re in the business of caring for humans, we’re restricted by the physical embodiment of the person.
Judgment
Given the wild variability of the human mind there will always need to be some level of orchestration that involves judgment.
Judgment is contextual and relational. It requires knowing this patient, not the distribution of patients who look like this patient. It takes what’s unspoken (grief, fear, ambivalence) and weighs it against what’s been measured. It considers that the patient’s stated preference and their deeper interests are often not the same thing.
I’m not sure where this bottleneck gets resolved. Probably during some kind of synchronous human-to-human exchange at select points in the patient journey. Or as we used to say, a visit.
Trust and legitimacy
Knowledge being nearly free and accessible doesn’t mean everyone’s going to believe it. And when it comes to health technology we bring all kinds of baggage with us. Part of this baggage is fear, doubt, and suspicion.
The diagnosis of a simple problem by an LLM, for example, may be spot on but still not leave someone empowered to act. The bottleneck may be less about information and more about the credentialed background and baked-in human approval that makes a recommendation actionable for a patient. This isn’t judgment as much as faith in the machine.
It’s worth noting that people can distrust an algorithm even when it outperforms a doctor. This has been called algorithm aversion: a single error can blemish a solid track record of accuracy. What it means is that human adoption of AI health tools may not follow performance in a straight line. I suspect credentialed doctors will carry the diagnostic burden well after it’s technically necessary.
—————
I can imagine a near-future entry point to the healthcare system that takes place entirely through a reliable, legitimized AI ... something that intercepts and treats the simple things and escalates the sticky stuff to some kind of interpretive core where the important decisions are negotiated. Or places where physical intervention is done.
In this scenario the human moments become exceptions rather than the rule. Which is okay. It offloads the swipe-right things to the machine while allowing us to leverage our greatest skill as guides in the healthcare journey.
All of this is part of an imagined system with unlimited access to medical knowledge, perhaps diagnostics, but rationed access to human judgment and wisdom.
If anyone’s interested, I’ve been doing my illustrations by hand on Procreate. I’m having as much fun sketching as I am writing….