How Four Experts Think AI Will (And Won’t) Transform Healthcare

It’s no secret that our sector is captivated by artificial intelligence (AI) right now. The numbers speak for themselves: venture capital deal activity in AI for healthcare has skyrocketed over the past five years, growing twice as fast as the tech industry overall. In fact, one in four healthcare investment dollars goes to a startup leveraging AI, with the majority (60%) of these companies focused on administrative and clinical applications.

The momentum shows no signs of slowing down. In 2024 so far, AI healthcare companies have already raised $2.8 billion in venture capital, and Silicon Valley Bank projects we’ll see $11.1 billion in investments by year's end, a figure not seen since 2021.

But what does this surge in VC dollars really mean for the future of healthcare? Will AI really help lower costs, streamline operations, and improve outcomes, or will it struggle to deliver on its hype?

To get to the bottom of this, I reached out to four experts in the field: Daneshjou, Kohane, Wallach, and Kaushik.

I asked them their thoughts on how AI will change healthcare in both the near and long term. The theme across all their answers? Cautious optimism. While each expert brought a unique perspective to the table, they all agreed that AI has great potential for our sector, but only if we approach it thoughtfully and strategically. Let’s dive in.

Where AI will have the greatest impact in five years 

When asked where they see AI having the most tangible impact on healthcare in the near term, all four brought up opportunities for AI to improve clinical practice. 

Daneshjou believes that FDA-approved image-assistance tools (like AI software that improves the precision and efficiency of lung cancer biopsies) will be increasingly integrated into clinical practice. She also suspects we'll see widespread use of AI scribes for note-taking and improvements in medical text summarization.

Wallach is excited about pattern recognition tasks where the technology already performs better than humans, such as risk identification in hospitalized patients.

Kohane also emphasized the potential for AI to improve clinical practice, and predicted AI will also serve as an increasingly useful tool for patients and paraprofessionals as primary care continues to face challenges.

Kaushik brought up streamlining practice management and reducing administrative burdens on healthcare providers by automating tasks like scheduling, documentation, and preparing data for coding and billing. He also expects "co-pilots" will support physicians by analyzing large volumes of patient data and medical literature, summarizing past events, making labs and images easier to find and interpret, and highlighting any information gaps.

The 20-year outlook 

Looking ahead to 2043, our experts envision a healthcare system that looks quite different from today. 

Kaushik believes that AI will be mostly invisible, embedded in products without users even thinking about it. He predicts that the most visible AI will be virtual health assistants serving as first points of contact for most patients, understanding concerns, providing education, and guiding patients to the most appropriate level of care. 

Kaushik also sees the potential for precision medicine to reach its full potential, with AI integrating multimodal data to tailor treatment plans, drug dosing, and preventive care recommendations based on comprehensive patient profiles. He envisions predictive analytics fed by wearables and sensors allowing early detection of deviations from baseline health.

Kohane predicts a splitting of hospital-based functions and direct patient support, driven by companies resembling concierge services. He also anticipates more explicit customization of care. 

Wallach hopes for greater centralization of information systems, protocols, and knowledge, combined with greater decentralization of actual care delivery. He believes this shift away from the current centralized model is only possible with AI enabling a more franchised approach to high-quality, evidence-based care.

Daneshjou can envision a future where AI has helped improve patient care, but she also worries about one where AI increases disparities and worsens outcomes for vulnerable populations.

Potential pitfalls and how to avoid them 

While bullish on the potential, our experts also highlighted some risks to watch out for. Kaushik emphasized the importance of ensuring that AI doesn't perpetuate or exacerbate existing healthcare inequities, but instead is used to mitigate them. He stressed the need to proactively audit training data, ensure underrepresented populations are properly represented and treated, and continuously monitor AI systems for unwanted biases and behaviors.

Kaushik also raised concerns about protecting patient privacy and data security, especially as generative models can be prompted to reveal their training data. He called for data governance frameworks based on privacy and safety risks to avoid cases where patient data is exposed through chatbots.

Daneshjou again warned about bias and worsening inequities if AI models aren't trained on diverse data. "Equity and fairness cannot be an afterthought; it must be top of mind during every step of the development and testing process," she stressed. She also cautioned against the common mistake of not designing and testing AI in the intended deployment scenario, noting that models performing well on retrospective data may fail in a real clinical environment.


Wallach cautioned against the temptation to focus AI tools on improving financial efficiency for providers and payers without adding value for patients. He suggested increased regulation on providers could help standardize quality and reduce care variance. Kohane warned that if AI is not aligned to maximize patient health but rather the utilities of a third party, it could upend the residual trust in healthcare.

Advice for founders 

For entrepreneurs looking to build AI solutions in healthcare, our experts offered a few pieces of advice. 

Kaushik, who sold his company ScienceIO to Veradigm, emphasized deeply engaging with all healthcare stakeholders, especially frontline clinicians and patients, to understand their needs, workflows, and pain points. The stakes are real: this year, nurses protested the use of AI at Kaiser. Kaushik advises founders to put ethics, bias mitigation, transparency, and data protection at the center of the development process from day one, not as an afterthought. "Earning trust is essential for adoption of AI in the high-stakes healthcare domain," he noted. Above all, he stressed never losing sight of the goal of improving patient outcomes and experience.

Daneshjou echoed the importance of involving domain experts (physicians) and stakeholders (patients, administrators) in every step of the process. "If you don't have domain experts and stakeholders involved in every step, it's highly likely you will fail," she warned. 

Kohane similarly emphasized that success requires both engineering and social engineering expertise. "Make sure you have techniques to succeed in both disciplines," he said.

Wallach, who has backed healthcare AI companies like Iterative Health, advised founders to focus on building solutions that can demonstrate substantial, incremental benefits to patient outcomes in a reasonable timeframe: “Don't focus on making our horrible system slightly better. That is like moving deck chairs around on the Titanic.”

The path forward 

These four experts share a sense of cautious optimism about AI's potential to transform healthcare. Yes, there are significant challenges and risks to navigate, from ensuring equitable access and combating bias to protecting privacy and maintaining trust. But if we approach these hurdles thoughtfully and strategically, AI can be one of the most exciting advances we’ve seen in medicine.

As VC activity in healthcare AI continues to surge, we should steer that capital toward solutions that will have the greatest impact on patient outcomes. That means supporting founders committed to responsible development, rigorous testing, and close collaboration with stakeholders.

We also need to be realistic about the limitations of AI and the importance of human judgment in healthcare. AI should be seen not as a replacement for clinicians, but as a tool that enables them to do their best work more efficiently. This means we should ensure that providers are properly trained on how to use AI effectively and ethically, and that patients understand how their data is being used.

Ultimately, the success of AI in healthcare will depend on building trust: trust between technologists and clinicians, between providers and patients, and between the public and the healthcare system as a whole. This will require transparency, accountability, and a relentless focus on improving patient outcomes.

The challenges are huge, but so is the potential. If we can put AI to work responsibly and ethically, we have the opportunity to bend the cost curve, free up clinicians to do what they do best, deliver care that is more personalized, and ultimately create a better, more equitable healthcare system for all. 
