Studies

AI in Healthcare: Current Uses, Shared Challenges and Future Stakeholder Opportunities

Written by Trilliant Health | Feb 10, 2026 3:56:26 PM

As excitement around AI’s healthcare applications grows, understanding how stakeholders across the health economy are actually using AI—along with the challenges they face and the opportunities ahead—is critical. Explore how different stakeholder groups are applying AI today and what it means for the future of the health economy.

 

In 2026, nothing seems to be generating more attention than AI. Every industry, including the healthcare sector, is exploring opportunities to integrate AI into daily operations to enhance productivity, automate routine tasks and create new products. All health economy stakeholders – providers, payers and life sciences companies – are actively pursuing options for AI integration. As the U.S. health economy grapples with rising costs, workforce shortages and declining population health status, stakeholders are increasingly exploring how AI might help address these problems.

However, as with any emerging technology, stakeholders must critically evaluate AI’s practical applications and distinguish between anticipated potential and current capabilities. AI’s capabilities are often misunderstood – even by those charged with implementing it – and the pace of innovation can outstrip the sector’s ability to assess, regulate and adopt it effectively. Monitoring how each stakeholder group evaluates and responds to AI will be essential to understanding the extent to which it will, or will not, transform the health economy.

Background

AI refers to the application of technology to mimic intelligent human behavior, including learning from experience, identifying patterns and making decisions based on data.1 In healthcare, AI has three primary areas of application: administration (e.g., revenue cycle management automation or clinical documentation transcription), care delivery (e.g., AI-assisted clinical decision-making or surgical technology) and research (e.g., clinical candidate identification powered by algorithmic models).

While terms like AI, machine learning (ML) and deep learning (DL) are often used interchangeably by health economy stakeholders – few clearly distinguish between them when discussing use cases – they refer to different technologies. Broadly, AI is the overarching field, encompassing any system that simulates human intelligence.2 ML is a subset of AI focused on enabling systems to learn from data and improve over time without explicit reprogramming (e.g., predicting patient deterioration from vital signs). DL, a subset of ML, uses neural networks to automatically identify patterns in large, often unstructured datasets like medical images or clinical notes, enabling more complex tasks such as tumor detection or imaging interpretation.

However, not everything labeled “AI” meets the definition. Some advanced tools in healthcare (e.g., electronic health record systems that generate alerts based on preset clinical rules, automated appointment reminders, keyword-based tools that flag potential billing errors) are often mistaken for AI despite lacking learning or reasoning capabilities. As interest in AI accelerates, a shared understanding of its components and limitations will be critical for making informed decisions about adoption and investment.

The application of AI in healthcare is not a recent development, but its scope and sophistication have evolved significantly in recent years. As early as 1971, the INTERNIST-1 system was developed to assist physicians in making diagnoses (Figure 1).3 In 2014, Pfizer started using AI to categorize and analyze reports of adverse drug events. In 2016, HCA Healthcare introduced Cancer Patient ID, an AI-enabled system that automatically reviewed patient records with the goal of detecting potentially undiagnosed cancers.4 In 2017, Olive launched its AI-powered solution to optimize the healthcare revenue cycle, finance, IT and supply chain operations.5 In 2019, Amazon released Transcribe Medical, an AI tool designed to support real-time clinical transcription, and the U.S. Food and Drug Administration (FDA) approved the first AI-enabled devices for cancer diagnostics.6,7


While there is no shortage of excitement around AI’s potential to transform healthcare, much of the enthusiasm still outpaces real-world impact. For example, since 2018, the American Medical Association has assigned at least 16 AI-related Current Procedural Terminology (CPT) codes to support medical coding and reimbursement. However, actual use has been limited: total patient volume across all AI CPT codes reached just 201.7K between 2018 and 2023, with usage largely confined to cardiac conditions like coronary artery disease and cardiac dysfunction (Figure 2). A similar pattern appears in drug development. Although the number of AI-discovered molecules entering clinical trials jumped from one in 2015 to 67 in 2023, more than two-thirds (67.2%) are still in early Phase I trials, and only one molecule has made it to market (Figure 3).8

As interest in AI has ramped up across the health economy, there has been growing pressure for policymakers to regulate AI use, given that the healthcare system generates sensitive and protected data and AI can pose patient safety risks. At the Federal level, efforts have been inconsistent. The Biden Administration initiated a voluntary AI agreement with major healthcare organizations in 2023 and issued Executive Order (EO) 14110 to guide national AI governance.9,10 In the EO, President Biden directed the Department of Health and Human Services to create an AI Task Force, develop quality control frameworks for AI-enabled technologies, document AI-related safety incidents and advance health equity through responsible AI deployment.11 In parallel, 28 healthcare providers and payers signed a voluntary agreement to align their AI efforts with the “FAVES” principles – Fair, Appropriate, Valid, Effective and Safe – committing to responsible AI use, transparency when content is AI-generated and risk management practices to mitigate potential harms.12 However, the Trump Administration revoked this EO in early 2025, favoring a looser Federal regulatory environment and calling for a new AI action plan.13 Although the Trump Administration has yet to issue updated guidance, the FDA is moving ahead with AI regulation, proposing new guidance in 2025 for managing AI-enabled medical devices across their life cycle, emphasizing transparency, cybersecurity and bias mitigation.14

In parallel, state policymakers have proposed laws to regulate AI in healthcare, mainly in relation to how AI is used by payers. In 2024, California enacted SB 1120 to bar insurance companies from relying solely on AI in insurance coverage decisions and utilization management, while Colorado passed a law requiring developers of “high-risk” AI systems, including in healthcare, to prevent algorithmic discrimination and meet reporting requirements.15,16 Additionally, legislators in Georgia, Massachusetts and Illinois have introduced legislation that would prohibit payers from basing coverage decisions solely on results generated by AI-enabled software.17,18,19 Notably, the Georgia bill would apply broadly to all healthcare decisions, including those related to patient care and health insurance coverage. Overall, the U.S. faces a patchwork of Federal and state regulations that could create long-term challenges for AI developers and healthcare organizations operating across multiple jurisdictions.

Analytic Approach

To understand how AI is being adopted and viewed across the health economy, the primary stakeholder groups were identified: payers, providers, life sciences companies and patients. A review of more than 70 sources – including surveys, peer-reviewed studies, white papers, news coverage and executive interviews – was conducted to capture the range of stakeholder perspectives on AI in healthcare. Sentiments were categorized into current applications, challenges and future opportunities.

Current Healthcare AI Landscape

Across providers, payers, life sciences organizations and patients, the application of AI converges around four primary domains: clinical decision support, patient engagement, administrative automation and research innovation. These shared priorities demonstrate broad recognition of AI’s potential to enhance both care delivery and operational efficiency. However, each stakeholder group is focused on different applications within these domains (Figure 4).

Providers are leveraging AI to transform clinical decision support systems and diagnosis, ease administrative burden and improve patient engagement and satisfaction:

  • AI models are improving diagnostic accuracy across specialties such as radiology, dermatology and oncology, identifying conditions earlier and with greater precision.20,21,22,23,24,25,26
  • Wearable devices and remote patient monitoring tools allow for continuous tracking of vital signs, enabling personalized, proactive care.27,28,29
  • Ambient scribe technologies are being used to reduce documentation burden, improve clinician well-being, increase interaction time between providers and patients and develop personalized patient education materials (Figure 5).30,31,32,33,34,35,36,37,38


Payers are using AI to streamline claims processing and enhance fraud detection, improving accuracy and efficiency while reducing manual workload:

  • AI is used in utilization management, helping payers to assess and process prior authorization requests faster by organizing medical data and automating decisions for routine cases.39,40,41,42
  • In customer service, AI helps payers send beneficiaries personalized communications and automates routine interactions.43,44,45
  • Predictive analytics enable payers to identify high-risk individuals early, refine underwriting practices and offer targeted preventive interventions, ultimately aiming to control costs and improve population health outcomes.46,47,48,49,50,51,52


“AI is going to enable us to free up some of those tasks that could be routine and could be automated to allow us to be more personal.”
– Jessica Brooks-Woods, CEO of the National Association of Benefits and Insurance Professionals

 


Patients are using AI-enabled tools for personalized health guidance and improved access to healthcare resources:

  • Personalized treatment recommendations, powered by AI’s ability to integrate genomic, clinical and lifestyle data, are offering patients more customized care pathways (Figure 6).53,54,55,56,57
  • Administrative tools like AI-based appointment scheduling and reminders are enhancing convenience and compliance.58,59


Life sciences manufacturers are using AI to accelerate drug discovery by screening chemical compounds, predicting side effects and identifying new uses for existing drugs:

  • AI is being used to automate patient recruitment for trials, predict trial outcomes to optimize design, automate adverse event reporting and monitor results in real time.60,61,62,63,64,65,66,67
  • Manufacturing and supply chains are becoming more efficient with AI-driven process optimization, demand forecasting and quality control.68,69,70,71,72,73
  • Life science manufacturers are using AI to analyze genomics and create more personalized medicine.74,75,76,77,78,79


“AI-powered manufacturing processes are increasing throughput by 20%, enabling us to deliver more medicines to patients faster.”

– Albert Bourla, Chairman and CEO of Pfizer


Primary Challenges with AI in Healthcare

While AI is already in use across the health economy, common barriers to adoption are surfacing for all stakeholders. Providers, payers, patients and life sciences organizations are all grappling with challenges around workflow integration, data privacy and security, bias and equity risks and regulatory uncertainty (Figure 5). A fundamental barrier cutting across all groups is the difficulty of measuring AI’s effectiveness and determining whether these tools actually deliver better outcomes. While each stakeholder group will need to respond differently, all must balance AI’s potential benefits with the need for responsible, transparent and equitable implementation.

Providers are facing obstacles integrating AI into clinical practice while maintaining data security, equity and regulatory compliance:

  • Workflow integration challenges make it difficult for AI tools to fit seamlessly into clinical routines, often requiring significant training and system upgrades that can be expensive.80,81,82 For example, one case study examined a health system’s adoption of an AI-powered radiology imaging tool. The total initial investment was $950K: $500K to license the software, $200K to upgrade hardware, $100K to train staff and $150K to integrate the tool with the health system’s existing technology.83 Although AI tools can eventually lead to substantial savings, the initial investment may be burdensome for some health systems.
  • Provider concerns about data privacy and security are heightened by the need to safeguard patient data privacy when integrating AI, with regulatory compliance adding complexity.84,85,86
  • Bias in AI models and concerns about AI “hallucinations” – when AI systems generate incorrect or misleading information that seems plausible but is not true – erode clinician trust and risk clinical misjudgment, especially when AI outputs conflict with physician judgment.87,88,89,90
  • Legal and regulatory uncertainty raises questions about liability when AI-enabled errors occur, discouraging full adoption.91,92,93,94,95

“Safeguards need to be put in place before we will ever realize a true improvement in our overall medical errors. Over-reliance on AI to correct mistakes could potentially result in different types of errors.”
– Dr. Donald Rodriguez, Professor of Medical Education and Program Director of MD/MS in AI Degree Program at the University of Texas Health Sciences Center


Payers seeking to implement AI face barriers around bias, data privacy and an unclear regulatory environment:

  • Algorithmic bias threatens to create inequitable coverage decisions, and lack of transparency in AI systems complicates auditing and appeals processes, drawing scrutiny for payers.96,97,98
  • Regulatory frameworks for AI in insurance are lagging, creating uncertainty around the ethical use of AI in utilization management and claims handling.99,100,101,102,103
  • Managing data privacy is critical for payers as AI processes sensitive personal and clinical information at scale.104,105,106
  • Concerns about workforce disruption arise as AI automates administrative tasks, creating tension between efficiency gains and potential job losses.107,108,109


“‘Good’ AI governance not only requires companies to be aware of what they are doing and what models they are using; they must also have a regular assessment to ensure models behave appropriately.”
– Health Plan Executive


Life sciences manufacturers face challenges scaling AI solutions responsibly and compliantly:

  • Data limitations and bias, especially for rare diseases and underrepresented populations, undermine the robustness and trustworthiness of AI-driven insights for manufacturers using AI in drug development.110,111,112
  • Lack of transparency and explainability in AI models hinders regulatory approvals and trust from clinicians and patients.113,114
  • Integrating AI into existing research and development and manufacturing workflows is complicated by infrastructure gaps and lack of internal expertise.115,116,117,118


“I worry that we’re becoming more efficient at making medicines that fail in the clinic. The big challenge [with AI] is still the translation challenge.”

– Steve Crossan, Founder of Dayhoff Labs and AlphaFold 1


Patients are cautious about AI in healthcare, driven by concerns about data use, bias in AI tools and the erosion of human connection:

  • Privacy and data security fears remain high as AI tools collect and process sensitive health information. Moreover, trust in AI is undermined by worries about bias, opaque decision-making and the potential for healthcare disparities to worsen as AI scales across healthcare (Figure 8).119,120,121,122,123,124,125,126
  • Concerns about the patient-provider relationship are widespread, with fears that AI could replace critical human interactions in healthcare.127,128,129,130,131
  • Confusion about AI’s role in healthcare, coupled with distrust of healthcare institutions overall, is fueling patient resistance to widespread AI adoption.132,133,134
  • Uncertainty about liability and regulation leaves patients unsure of who is responsible if AI-guided care leads to harm.135,136

Primary Opportunities for AI Integration in Healthcare

While stakeholders face challenges when it comes to implementing or accepting AI in healthcare, AI also presents numerous opportunities, from predicting patient health events before they occur, to fully automating administrative tasks for providers and payers, to creating accessible chatbots that can improve patient access to care in areas with provider shortages. In the coming years, patients, providers, payers and life sciences organizations will aim to harness AI to drive operational efficiency, compete more effectively, deliver highly personalized care and improve health outcomes (Figure 6). Over the next five years, most AI applications will likely continue to enhance existing workflows (e.g., reducing administrative burden, supporting clinical decisions), but broader transformation – where AI becomes foundational to care delivery, coordination and personalization – may take far longer to fully materialize. Eventually, however, AI is expected to become embedded in core processes and to function independently (e.g., interpreting patient data to make diagnoses, fully automating claims adjudication, designing clinical trials to optimize outcomes), rather than working on the margins. Such a shift would reshape how care is delivered, financed and experienced.

Providers aim to evolve from point solutions to a fully AI-augmented clinical environment:

  • AI could continuously monitor patient data streams to predict deterioration or complications in real time, enabling earlier, targeted clinical interventions.137,138,139,140,141
  • AI models that integrate imaging, labs, patient-level genetic data and clinical notes could enhance diagnostic decision-making.142,143,144
  • Predictive models of patient demand and acuity could optimize staffing and resource allocation, building more resilient, efficient care teams.145,146,147
  • AI could enable the diagnosis of rare diseases by rapidly analyzing complex datasets and surfacing similar cases, leading to faster treatment decisions and improved outcomes.148,149,150,151


“Rather than replace physicians, AI would help in diagnosis and then aid in what would be the most effective treatment for individuals. More personalized medicine… The idea that we will be able to identify patients before they get sick, to treat a patient before they become a patient.”

– Dr. Tony Hebden, Former Global Vice President of Health Economics and Outcomes Research of AbbVie


Payers are looking to AI to evolve operations from manual processing to predictive, personalized health management:

  • AI could help insurers proactively identify members at high risk of costly health events, enabling earlier, targeted interventions that lower costs and improve outcomes.152,153,154,155
  • Automated claims adjudication and prior authorization decisions could free up human resources for more complex cases, while reducing errors and improving transparency for providers and patients.156,157,158,159


“Think of the power of generative AI to take out a bunch of manual work that we do on both the payer side and the provider side. It feels like we are on the precipice of unlocking a ton of savings and value.”

– Martha Wofford, President and CEO of Blue Cross and Blue Shield of Rhode Island


Life sciences organizations hope AI will drive faster, smarter innovation across research and development and commercialization:

  • In the future, life sciences manufacturers envision AI enabling a more precise understanding of how therapies perform across diverse populations, supporting better patient outcomes.160,161,162
  • Successful first movers in integrating AI could achieve lasting competitive differentiation by setting new standards for speed, efficiency and patient-centric innovation.163,164


“Ultimately, AI in drug manufacturing can lead to faster production times, lower costs, higher-quality products, reduced waste and potentially accelerate the delivery of life-saving medications to patients.”

– Dan Sheeran, General Manager of Healthcare and Life Sciences at Amazon Web Services


Despite their concerns, patients hope that AI will lead to more accessible, personalized and trustworthy healthcare experiences:

  • Remote monitoring and personalized alerts driven by AI could allow patients to proactively manage their health, supporting earlier detection of health conditions and reducing preventable emergency interventions.165,166,167,168
  • AI-powered health navigation tools could offer 24/7 support, improving access to care for underserved populations and reducing delays in accessing treatment.169,170,171,172
  • Patients hope AI can tailor prevention and treatment plans to their unique genetic, behavioral and environmental profiles, moving beyond one-size-fits-all approaches.173,174,175,176,177


“Maybe AI could help keep track of patient records… AI could get an overview fast and see that this patient has now shown these symptoms for the fifth time, so maybe it is time to look into that instead of the GP missing it.”

– Patient response to survey

Conclusion

The AI industry is quickly evolving, with new use cases and capabilities emerging almost daily. This synthesis of provider, payer, life sciences and patient perspectives reveals a stark reality: while all stakeholders acknowledge AI’s potential to radically change care delivery, reimbursement, research and administrative processes, crucial barriers persist. Some stakeholders see AI as a tool that can be implemented widely across the health economy and serve as a substitute for human labor in many circumstances, while others remain concerned about AI being used in place of, rather than as a complement to, human capital. The potential future state – where AI could alleviate physician burden and improve access to care – remains elusive. Achieving these outcomes will require all stakeholders to overcome substantial hurdles, notably widespread mistrust and skepticism of AI among providers and patients and concerns that algorithmic biases could exacerbate health inequities. Currently, the full magnitude of AI’s clinical implications, financial considerations and societal consequences remains largely theoretical rather than empirically validated. The evolving nature of AI necessitates constant monitoring and decisive action from all stakeholders as they develop strategic approaches to harness – and control – the broad range of AI-enabled technology.

In early 2026, this rapid evolution became especially visible with the launch of consumer-facing health platforms by leading AI developers. In January, OpenAI and Anthropic announced ChatGPT Health and Claude Health, respectively – tools designed to allow users to upload medical records, connect wellness applications and receive AI-generated health guidance.178,179 Both companies emphasized enhanced privacy protections and included disclaimers that these tools are not intended for diagnosis or treatment. However, neither platform incorporates systematic clinical oversight, and neither relies on a dedicated, health-specific model. With more than 40M daily health-related queries already directed to ChatGPT as of January 2026, these tools are rapidly becoming a source of medical information for many patients.180 While proponents argue that such platforms could help address access barriers amid provider shortages and rising costs, experts have raised concerns about inaccurate or overly confident recommendations, limited accountability and the potential for patient harm. These developments underscore the growing tension between innovation and governance, highlighting the need for clear standards to ensure that AI augments, rather than replaces, clinical judgment.