We are standing at one of the most important crossroads in the history of healthcare. The decisions we make in the next few years about how we implement artificial intelligence will shape medicine for generations. This isn't hyperbole. It's simply the reality of what happens when the most powerful optimization tool humanity has ever created meets the system that determines whether people live or die.
How We Got Here
American healthcare is in the state it's in because, over decades, it has been systematically optimized for corporate profits rather than patient care. The roots of this go back to the 1970s and perhaps earlier - policy decisions, regulatory capture, the rise of for-profit hospital systems, the insurance industry's consolidation of power.
But here's the thing: the history is largely irrelevant now. We can't undo those decisions. We can't un-build the systems that were built. What matters is where we go from here.
And where we go from here depends entirely on what we choose to optimize AI for.
AI Can Help - But Only If We're Honest
I understand why so many people are looking for something - anything - to fix healthcare. Patients navigate a maze of paperwork and gatekeepers just to get basic care. Families watch bills pile up while feeling like the system treats them as transactions, not people. And physicians? We're drowning in administrative burden, spending more time with screens than with the people we trained to help. It makes sense to hope AI might finally break through where everything else has failed.
AI can help - but only if we're honest about what's actually broken.
The problem isn't a shortage of talent or dedication. Healthcare is full of brilliant, committed people working themselves to exhaustion. The problem is that trust has collapsed and the incentives point in the wrong direction. Patients know that the system prioritizes revenue over their wellbeing. Physicians feel they can't practice medicine without fighting through layers of administrative interference. And the painful truth is that both groups are right.
Right now, we're on a path where technology risks reinforcing that damage - making the extraction more efficient, the barriers more automated, the inhumanity more scalable. I'm arguing for a course correction while that choice is still ours to make.
The Power of Optimization
AI is, at its core, an optimization engine. You give it an objective, and it finds ways to achieve that objective that no human would ever discover. It sees patterns across millions of data points. It works tirelessly, without fatigue, without bias toward "how things have always been done."
AI can optimize better than any tool humanity has ever had - by margins that are difficult to comprehend.
This is both the promise and the peril. Because AI will optimize for whatever objective it's given. If you tell it to maximize revenue, it will find ways to maximize revenue that would never occur to a human administrator - and it won't care whether those methods serve patients or harm them. If you tell it to minimize costs, it will minimize costs with ruthless efficiency, regardless of the human consequences.
But if you tell it to optimize for patient outcomes? For genuine health? For the quality of the doctor-patient relationship? It will do that too - with the same relentless effectiveness.
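The point that the same optimizer produces radically different behavior depending on its objective can be made concrete with a toy sketch. Everything here is invented for illustration - the intervention names, the numbers, the helper function - but the mechanism is real: the optimizer doesn't change, only the objective does.

```python
# Hypothetical interventions with made-up scores (illustration only).
interventions = [
    {"name": "preventive screening", "revenue": 200, "health_gain": 9},
    {"name": "repeat imaging",       "revenue": 900, "health_gain": 2},
    {"name": "care coordination",    "revenue": 100, "health_gain": 8},
]

def optimize(options, objective):
    """A generic optimizer: pick whatever maximizes the given objective."""
    return max(options, key=objective)

# Same optimizer, two different objective functions:
best_for_revenue = optimize(interventions, lambda x: x["revenue"])
best_for_health = optimize(interventions, lambda x: x["health_gain"])

print(best_for_revenue["name"])  # -> repeat imaging
print(best_for_health["name"])   # -> preventive screening
```

The code that does the optimizing is identical in both calls. The only thing that changed is the objective function - which is exactly the choice being made, at vastly greater scale, in healthcare AI today.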
The Choice We Face
This is the crossroads. As a society, as a healthcare system, we have to decide what we will optimize AI for.
Right now, the default trajectory is clear. Large healthcare corporations and insurance companies are already deploying AI. And they're optimizing it for what they've always optimized for: shareholder returns. Cost reduction. Claim denial. Administrative efficiency that serves the institution, not the patient.
This isn't speculation. We're already seeing AI systems that:
- Automatically deny insurance claims at scale, creating barriers to care
- Generate prior authorization requirements designed to exhaust patients and providers
- Optimize staffing levels to the bare minimum, burning out healthcare workers
- Find legal loopholes to maximize billing while minimizing actual care delivered
And here's the insidious part: AI gives corporations a convenient place to displace blame. When your prior authorization is denied, when costs seem inexplicably inflated, when the system feels designed to frustrate you into giving up - they can shrug and say "it's not us, it's the AI optimizing resource allocation." But they're the ones who trained the AI. They chose the objective function. They optimized for quarterly earnings, not for the betterment of human health. The AI is just doing exactly what it was built to do.
This is what happens when AI is optimized for profits. And because AI is so much more effective than previous tools, the extraction will be so much more efficient, the barriers to care so much more impenetrable, and the accountability so much easier to obscure.
A Different Path
But it doesn't have to be this way. AI can be optimized for whatever we choose.
Imagine AI that's optimized for:
- Patient outcomes - identifying the interventions most likely to improve health, not just generate revenue
- Early detection - finding diseases when they're still treatable, rather than when they're profitable to treat
- Provider wellbeing - reducing the documentation burden so doctors can focus on patients
- Access to care - making healthcare more affordable and available, not more restricted
- The doctor-patient relationship - freeing physicians to be present with patients instead of trapped in their EHR
This is the promise of AI tools built by physicians, for physicians. Tools designed not to extract value from the healthcare system, but to add value to patient care.
Why Patients and Physicians Must Lead
The corporations deploying AI in healthcare have billions of dollars and armies of engineers. They're moving fast. They're building the systems that will shape medicine for decades.
It is critical that patients and physicians work together to ensure AI is optimized for health, not for extraction.
Why patients and physicians together? Because their interests are aligned in a way that corporate interests never will be. Patients want to get better. Physicians want to help them get better. That's it. That's the whole equation.
When you strip away the billing codes and prior authorizations and administrative complexity, healthcare is fundamentally about one human being helping another. AI should serve that relationship, not exploit it.
The Democratization of Healthcare Technology
There's hope in this story, and it comes from an unexpected place: the democratization of software development.
For decades, building healthcare software required massive teams, enormous budgets, and years of development time. Only large corporations could afford to play. And they built what served their interests.
AI has changed this equation. Today, a physician with deep domain knowledge can build production-quality healthcare software. I know this because I've done it. The barriers to entry have collapsed.
At the same time, open-source AI tools are ensuring that this technology doesn't remain locked behind corporate walls. Anyone can access powerful AI models. Anyone can build on them. Anyone can create healthcare tools that serve different objectives than shareholder returns.
This democratization is critical. It means that the future of healthcare AI doesn't have to be written by corporations alone. Physicians, patients, advocates, and independent developers can all participate in building the healthcare technology future we want to see.
What We're Building at Grail Health
This is why I built Grail Health. Not to extract value from an already-strained healthcare system, but to give something back to it.
Our AI scribe isn't optimized for billing codes or claim denials. It's optimized for helping physicians provide better care with less administrative burden. It's optimized for documentation that reflects the actual quality of care being delivered. It's optimized for giving doctors back the time they need to be present with their patients.
This is what physician-led, patient-centered AI looks like. And we need more of it.
The Stakes
The decisions we make now will compound. AI systems build on themselves. The optimization objectives we embed today will shape the systems that, in turn, shape medicine for decades to come.
If we let corporate interests define those objectives unopposed, we'll get a healthcare system that's more efficient at extraction, more effective at denial, more optimized for everything except actual health.
But if patients and physicians work together - if we build and advocate for AI that's optimized for health - we have a chance to create something better. A healthcare system that uses the most powerful optimization tool in human history to actually optimize for what matters: people getting better.
The Time Is Now
This crossroads won't last forever. The systems being built now will become entrenched. The optimization objectives being encoded now will become increasingly difficult to change. The window for shaping the future of healthcare AI is open, but it won't stay open indefinitely.
The most important choice we'll make about healthcare in our lifetimes is happening right now. And we all have a role to play in making sure we choose wisely.