Munjal Shah Bets on Health Care Applications for Large Language Models

Serial entrepreneur Munjal Shah sees potential in using large language models (LLMs) for certain health care applications, but argues they should not be used for diagnosis. Shah, who has a background in AI and machine learning, believes LLMs excel at engaging in conversations and quickly assimilating knowledge that would take humans years to build up. However, as LLMs can sometimes present false information as fact, using them to directly diagnose patients could prove dangerous.

Shah sees great promise in LLMs for non-diagnostic health care services facing massive staffing shortages, such as chronic care nursing. With their conversational abilities, empathetic communication style, and near-limitless capacity to interact with patients, LLMs could help fill gaps in care. Shah’s new startup, Hippocratic AI, aims to do just that: provide supplemental health services, not diagnoses.

Hailing LLMs as a “true breakthrough,” Shah says generative AI has actually been “underhyped,” given that it creates new content rather than merely categorizing data as earlier AI systems did. By training LLMs on how human health workers communicate and gathering feedback from medical professionals, Hippocratic AI aims to create agents that engage with and help patients in meaningful ways.

The vision is to use this technology for “super staffing,” with LLMs providing supportive care at scale thanks to their cost-effectiveness and tirelessness. This could help overburdened health systems better meet patient needs. Rather than replacing health workers, Shah sees the technology as supplementing and expanding care.

While acknowledging that “you’re crazy” if you think LLMs can act as doctors and directly diagnose patients today, Shah sees great promise in using them for other supporting roles in health care. With proper training and application, he believes they could help patients navigate chronic conditions, understand treatment plans, access services, and more. Rather than overreaching, Shah wants to home in on exactly what generative AI currently does well in order to create systems patients can safely engage with and actually want to use.

By demonstrating these collaborative applications, Shah hopes to show the technology’s practical promise for health care while debunking overblown expectations about diagnosis. With responsible development and deployment, he sees a path for LLMs to become valued, supplemental members of health care teams, expanding access and better meeting patient needs. But placing life-and-death diagnostic decisions solely in the hands of AI remains dangerously premature, in Shah’s view. Overall, he seeks to walk a thoughtful line: tapping into LLMs’ potential while respecting their limits, at least for now.