At the recent HLTH 2023 conference, Hippocratic AI CEO Munjal Shah shared his ambitious vision for using AI to alleviate the worsening healthcare staffing crisis. Shah sees AI-enabled “super-staffing” as the key to overcoming shortages and expanding access to care.
The 2023 HLTH event in Las Vegas convened health industry leaders to discuss innovation and technology, with much focus on generative AI’s potential. During the “There’s No ‘AI’ in Team” panel, Munjal Shah spotlighted staffing as a prime area for AI application.
Shah explained that while diagnostic AI merits caution, tools like large language models (LLMs) could safely assist with non-clinical roles facing drastic understaffing. He proposed AI “super-staffing” – massively scaling access by lowering costs in overwhelmed but non-clinical roles.
The Dire Healthcare Staffing Crisis
Munjal Shah spotlighted the unfolding global health worker deficit as creating an imperative for change. Citing World Health Organization projections of over 10 million additional health workers needed by 2030, Shah noted strained systems and underserved communities are already impacted. He argued generative AI represents a powerful solution if focused on the staffing crisis.
Shah pointed to non-clinical yet vital roles – from patient navigation to chronic care nursing – where need vastly exceeds supply. With such services costing close to $100 per hour, millions go without access; Munjal Shah sees AI as the key to affordable, scalable support.
The “super-staffing” concept means leveraging AI tools to provide personalized services impossible given financial and human limitations. This could encompass explaining billing, delivering test results, diet advice, appointment reminders, and post-visit follow-ups.
Crucially, the objective is not to replace human effort but to multiply it. AI would take on supplementary responsibilities, allowing people to focus their specialized skills where they are irreplaceable.
Trust Through Health System Partnerships
The panel discussion revolved around AI’s role within a “centaur” model fusing human and machine strengths. Speakers agreed fully automated solutions are unrealistic in healthcare – oversight and governance are imperative.
Munjal Shah explained that Hippocratic AI’s approach centers on partnerships and feedback. He noted that safely fielding AI demands the involvement of health systems in catching potential errors. Hippocratic AI has established a safety council with clients, emphasizing external guidance in design.
Vetting through end-users enables vital tuning before any patient impact. Shah explained this human-in-the-loop refinement helped shift the company’s focus towards conversational AI for patient-facing uses.
They found that generative models’ personable interactivity makes them uniquely suited to previously unscalable staffing needs. Once trust in responsible application is earned, enormous expansions in access become viable.
“Overtraining” LLMs to Mirror Human Expertise
Munjal Shah stressed that responsibly employing AI – especially with personal health data – demands exceeding usual safety standards. He detailed Hippocratic AI’s “overtraining” method, which helps ensure recommendations align with clinical guidelines.
This technique leverages the core competency of systems like ChatGPT: synthesizing responses from a given dataset. Hippocratic AI has hired over 11,000 medical professionals to curate an expert knowledge base for “overtraining” client tools.
Nurses and doctors prepare ideal, evidence-backed responses for various scenarios. The LLM then repeatedly practices matching their logic, rerunning until it is satisfactorily consistent. Shah explained that this instills healthcare-specific reasoning missing in generic chatbots.
Experts additionally serve as human validators – flagging any concerning model mistakes for further correction. This reinforcement continually tightens performance before clients pilot applications.
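The loop described above – tune against expert-written answers, check consistency, route flagged mistakes back into training – can be sketched in a few lines. This is a hypothetical illustration, not Hippocratic AI’s actual pipeline: the names (`ExpertExample`, `overtrain`, `similarity`) and the word-overlap scorer are stand-ins for a real fine-tuning step and semantic similarity metric.

```python
# Hypothetical sketch of an "overtraining" loop: the model is repeatedly
# tuned against expert reference answers until its outputs are consistent
# with them, with the worst-scoring example "flagged" each round the way
# a human validator would flag a concerning mistake.
from dataclasses import dataclass


@dataclass
class ExpertExample:
    prompt: str
    reference_answer: str  # evidence-backed response written by a nurse or doctor


def similarity(a: str, b: str) -> float:
    """Toy stand-in for a semantic similarity score (Jaccard word overlap)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)


def overtrain(answer_fn, tune_step, examples, threshold=0.9, max_rounds=50):
    """Rerun tuning until the model's answers match the experts' on every scenario.

    answer_fn: callable prompt -> answer (the current model).
    tune_step: callable (answer_fn, flagged_example) -> improved answer_fn.
    Returns the number of tuning rounds that were needed.
    """
    for round_num in range(max_rounds):
        scores = [similarity(answer_fn(ex.prompt), ex.reference_answer)
                  for ex in examples]
        if min(scores) >= threshold:  # satisfactorily consistent everywhere
            return round_num
        # Validator step: flag the worst example and feed it back into tuning.
        flagged = examples[scores.index(min(scores))]
        answer_fn = tune_step(answer_fn, flagged)
    return max_rounds
```

In a real system `tune_step` would be a gradient update or preference-tuning pass; for a runnable demo it can simply teach the model the flagged reference answer, after which `overtrain` converges in one round per corrected example.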
Shah compared overtraining to learning driving etiquette: the LLM shouldn’t just memorize rules but reflexively adopt human-level courtesy and safety habits. This helps instill reliable reasoning augmenting clinical judgement rather than undermining it.
Research indicates the approach is working. A recent JAMA study saw participants view LLM health advice as superior to doctors’ in both quality and empathy. Shah believes overtraining helps align AI tone and content with patient-centric clinical standards.
The Future of AI “Super-Staffing” in Healthcare
Panelists agreed integrating AI demands a nuanced, goal-oriented approach versus crude automation. Shah sees conversational generosity as key in healthcare – an empathetic manner and willingness to explain unfamiliar concepts.
He noted that while AI struggles to match specialized human skills, its versatility across document comprehension and interpersonal tasks creates huge potential value. Generative tools’ unique strengths center on personable, detailed, and tireless communication.
Rather than pursuing elusive perfection in isolated applications, Shah advocates employing collaborative AI to multiply reach. No model can replace custom human services, but even good-enough automation can enhance millions of lives otherwise lacking access.
Shah foresees a coming revolution in using AI staffing to facilitate universal care. Today, 68 million Americans battle multiple chronic illnesses without regular nursing – a service with demonstrated outcome benefits but one prohibitively expensive to scale. He envisions chatbots making constant check-ins feasible across entire impacted populations.
Munjal Shah closed by arguing that responsibly harnessing AI’s exponential reach could allow titanic expansions in holistic care. He believes clinical judgement should remain distinctly human, but supplemental tasks could be supported by artificial assistants powered by overtraining. This hybrid approach will define the patient experience as legacy staffing models give way to the era of “super-staffing.”