In a significant advancement for the medical field, particularly oncology, the European Society for Medical Oncology (ESMO) has introduced the ESMO Guidance on the Use of Large Language Models in Clinical Practice (ELCAP). This set of recommendations seeks to integrate artificial intelligence (AI) language models into oncology in a manner that prioritizes patient safety and clinical efficacy. The publication coincided with the ESMO Congress 2025 in Berlin, where AI's transformative role in cancer care has become a central topic of discussion within the oncology community.
The rise of large language models represents not just a technological leap, but also a paradigm shift in how healthcare professionals will interact with vast amounts of medical knowledge. ESMO President Fabrice André emphasized the organization's commitment to ensuring that innovation in this area translates into tangible benefits for patients while offering workable solutions for healthcare providers. The ELCAP framework allows for a nuanced approach, acknowledging the varied contexts in which AI language models might be applied, whether aimed at patients, clinicians, or healthcare institutions.
Fundamentally, ELCAP is structured around three distinct categories that cater to user-specific needs and contexts. The first, Type 1, is tailored for patient-facing applications. These include chatbots designed for education and symptom management, which are intended to complement traditional clinical care. However, they operate under a stringent supervision protocol, ensuring that there is a clear pathway for escalation in serious cases. This careful balancing act aims to protect patients and their data while providing them with immediate access to information tailored to their needs.
Type 2 addresses tools intended for healthcare professionals, focusing on decision support, clinical documentation, and translation tasks. The recommendations stipulate that these instruments undergo formal validation to ensure their reliability in clinical settings. Moreover, transparency about the limitations of these models is critical, establishing a framework where human accountability remains at the forefront of clinical decision-making.
The third category, Type 3, pertains to institutional systems integrated with electronic health records. These systems can streamline data extraction, generate automated summaries, and facilitate matching patients with clinical trials. ELCAP emphasizes that these systems must not only be tested prior to deployment but also continuously monitored for bias and performance shifts. The guidance highlights the importance of institutional governance, stressing that any change in data source or process necessitates re-validation to ensure ongoing compliance with safety protocols.
As ELCAP points out, the quality of the output produced by these AI systems is fundamentally linked to the quality of the input data. Incomplete clinical documentation or vague patient queries could result in erroneous or misleading responses, reinforcing the necessity for vigilant supervision and clear escalation routes for addressing issues that arise. The guidance acts as both a roadmap for navigating potential pitfalls and a springboard for the responsible application of AI tools within the healthcare setting.
Miriam Koopman, who chairs ESMO’s Real World Data & Digital Health Task Force and contributed to the paper, reinforced that the effectiveness of language models is highly dependent on the context of their use. By categorizing applications based on their audience—patients, clinicians, and institutions—expectations can be appropriately aligned. This structured approach is designed to protect patients, ensure validated tools for clinicians, and maintain governance in institutional settings.
ELCAP emphasizes the role of assistive large language models, which are meant to support clinicians rather than supplant their expertise. By providing essential information or drafting preliminary content, these systems are set to enhance clinical workflows and decision-making processes. The Task Force's Deputy Chair, Jakob N. Kather, also a co-author of the paper, noted that while current models offer promising enhancements to patient care, guidance must evolve to address autonomous AI models capable of initiating actions without direct human input, as these present unique safety, regulatory, and ethical challenges.
Looking forward, the foundation of trust in AI-driven cancer care hinges not just on the technology itself, but also on the establishment of shared standards across varying applications. André’s concluding remarks stressed that the integration of algorithms in oncology must go hand-in-hand with maintaining trust in clinical judgment. ELCAP serves as a crucial step in outlining how language models can be harnessed to improve the quality, equity, and efficiency of cancer care, all while safeguarding the integrity of medical decisions.
The development of ELCAP was an extensive process, undertaken by a diverse international panel composed of experts in oncology, AI, biostatistics, digital health, ethics, and patient advocacy. This collaborative effort took place between November 2024 and February 2025 and exemplifies ESMO's commitment to an interdisciplinary approach in addressing the complexities of AI integration into healthcare practice.
In summary, ESMO's ELCAP framework signals a pioneering approach to the adoption of AI in oncology, setting the stage for future innovations while embedding essential safeguards to uphold patient welfare and clinical integrity. As this guidance takes root in clinical practice, it is poised to facilitate a transformative evolution in the delivery of cancer care, solidifying the place of AI as a valuable ally in the ongoing battle against cancer.
Subject of Research: Integration of Large Language Models in Oncology
Article Title: ESMO Guidance on the Use of Large Language Models in Clinical Practice (ELCAP)
News Publication Date: 20 October 2025
Web References: https://www.annalsofoncology.org/article/S0923-7534(25)04698-8/fulltext
References: E.Y.T. Wong et al. Annals of Oncology. doi: 10.1016/j.annonc.2025.09.001
Keywords
AI, Oncology, Large Language Models, Clinical Practice, Patient Safety, Clinical Decision-Making
Tags: artificial intelligence in cancer care, ELCAP framework for clinical practice, enhancing medical knowledge access with AI, ESMO Congress 2025 highlights, ESMO guidelines for AI in oncology, ethical considerations in AI healthcare integration, healthcare innovation and patient benefits, large language models in medical applications, patient safety in oncology technology, safe integration of language models in healthcare, tailored AI solutions for clinicians, transforming oncology with AI