Serial entrepreneur Munjal Shah sees potential in using large language models (LLMs), specifically in healthcare applications, provided they stay within their capabilities. While advancements in AI like ChatGPT showcase improved conversational abilities, Shah believes deploying them for medical diagnosis crosses an ethical line and risks patient harm. However, he has identified promising use cases in supplemental health services that don’t require diagnosis. This thinking led him to launch the startup Hippocratic AI, which uses an LLM for nondiagnostic support across areas like nursing, dietetics, and patient navigation.
Shah calls LLMs like ChatGPT and GPT-4 “a true breakthrough,” saying generative AI stands apart from past innovations. Unlike classifier AI optimized for categorization tasks, generative models produce entirely new content by learning and mimicking patterns in data. This ability enables the more human-like conversation critical for patient engagement. Hippocratic AI’s LLM is trained on dialogues between medical professionals and patients and adopts a knowledgeable, empathetic communication style. Shah sees properly designed language as vital, contrasting it with ChatGPT’s blog-like tone, a product of its training on written texts. With the right reinforcement learning from doctor feedback, he believes LLMs can speak conversationally to patients in a way classifier AI cannot.
While hype surrounds generative AI, Shah thinks its capabilities are still underestimated relative to its transformative potential. He sees today’s technology finally delivering on promises he has followed since first exploring neural networks in 1992. And in healthcare, chronic understaffing amid ballooning patient populations highlights the need for alternative solutions. An estimated global shortage of 15 million healthcare workers shows why new ideas must complement overburdened human care teams. This is where “super staffing” through cost-effective, infinitely scalable AI could make a difference.
Given enough time, human nurses could guide every chronic care patient, but limits on time and staffing make this impossible. So, rather than diagnosis, Shah envisions LLMs handling time-intensive nursing duties like treatment adherence monitoring, appointment coordination, social service referrals, and patient education. By automating tedious but meaningful responsibilities, AI would free human healthcare staff to focus on the advanced duties only they can perform. With staffing shortages expected to worsen in nursing, supplementing overworked teams with conversational AI support allows more patients to receive regular, high-quality care.
While risks exist in applying generative models too broadly, Shah believes careful implementation targeting the right complementary tasks unlocks significant upsides. Thoughtful regulation can help balance innovation with ethical AI principles in medicine. By confronting issues of trust and potential harm up front, health systems could deploy LLMs to alleviate strained resources exactly where they are needed. With shortages driving worse patient outcomes and higher costs, AI’s benefits may outweigh its risks in specific nondiagnostic use cases. If generative models responsibly assist nurses, social workers, dietitians, and care coordinators as “virtual team members,” Shah thinks the healthcare crisis plaguing providers and communities may find some relief.