In 2026, nothing seems to be generating more attention than AI. Every industry, including the healthcare sector, is exploring opportunities to integrate AI into daily operations to enhance productivity, automate routine tasks and create new products. All health economy stakeholders – providers, payers and life sciences companies – are actively pursuing options for AI integration. As the U.S. health economy grapples with rising costs, workforce shortages and declining population health status, stakeholders are increasingly exploring how AI might help address these problems.
However, as with any emerging technology, stakeholders must critically evaluate AI’s practical applications and distinguish between anticipated potential and current capabilities. AI’s capabilities are often misunderstood – even by those charged with implementing it – and the pace of innovation can outstrip the sector’s ability to assess, regulate and adopt it effectively. Monitoring how each stakeholder group evaluates and responds to AI will be essential to understanding the extent to which it will, or will not, transform the health economy.
AI refers to the application of technology to mimic intelligent human behavior, including learning from experience, identifying patterns and making decisions based on data.1 In healthcare, AI has three primary areas of application: administration (e.g., revenue cycle management automation or clinical documentation transcription), care delivery (e.g., AI-assisted clinical decision-making or surgical technology) and research (e.g., clinical candidate identification powered by algorithmic models).
While terms like AI, machine learning (ML) and deep learning (DL) are often used interchangeably by health economy stakeholders, they refer to different technologies. Broadly, AI is the overarching field, encompassing any system that simulates human intelligence.2 ML is a subset of AI focused on enabling systems to learn from data and improve over time without explicit reprogramming (e.g., predicting patient deterioration based on vital signs). DL, a subset of ML, uses neural networks to automatically identify patterns in large, often unstructured datasets like medical images or clinical notes, enabling more complex tasks such as tumor detection and imaging interpretation.
However, not everything labeled “AI” meets the definition. Some advanced tools in healthcare (e.g., electronic health record systems that trigger alerts based on preset clinical rules, automated appointment reminders, keyword-based tools that flag potential billing errors) are often mistaken for AI despite lacking learning or reasoning capabilities. As interest in AI accelerates, a shared understanding of its components and limitations will be critical for making informed decisions about adoption and investment.
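The distinction above can be made concrete with a minimal sketch. The code below contrasts a preset clinical rule (automation, not AI) with a toy system that derives its decision threshold from labeled data (the defining trait of ML). All function names, vital-sign values and the learning heuristic are illustrative assumptions, not a real clinical algorithm:

```python
def rule_based_alert(heart_rate: float) -> bool:
    """Preset clinical rule: flag any heart rate above a fixed threshold.
    Nothing is learned from data -- this is rule-based automation, not AI."""
    return heart_rate > 120


def learn_threshold(history: list[tuple[float, bool]]) -> float:
    """Toy 'machine learning': derive a decision threshold from labeled
    examples of (heart_rate, deteriorated) by taking the midpoint between
    the mean stable reading and the mean deteriorated reading."""
    stable = [hr for hr, deteriorated in history if not deteriorated]
    unstable = [hr for hr, deteriorated in history if deteriorated]
    return (sum(stable) / len(stable) + sum(unstable) / len(unstable)) / 2


# Hypothetical labeled history: (heart rate, did the patient deteriorate?)
history = [(80, False), (90, False), (130, True), (140, True)]
threshold = learn_threshold(history)  # midpoint of 85 and 135 -> 110.0


def learned_alert(heart_rate: float) -> bool:
    """The 'learned' rule: its threshold came from data, so it would shift
    if the history changed -- unlike the hard-coded rule above."""
    return heart_rate > threshold
```

The practical difference: updating the rule-based alert requires a human to edit the threshold, whereas the learned alert adapts automatically as new labeled examples accumulate, which is what separates ML from the preset-rule tools often mislabeled as AI.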
The application of AI in healthcare is not a recent development, but its scope and sophistication have evolved significantly in recent years. As early as 1971, the INTERNIST-1 system was developed to assist physicians in making diagnoses (Figure 1).3 In 2014, Pfizer started using AI to categorize and analyze reports of adverse drug events. In 2016, HCA Healthcare introduced Cancer Patient ID, an AI-enabled system that automatically reviewed patient records with the goal of detecting potentially undiagnosed cancers.4 In 2017, Olive launched its AI-powered solution to optimize the healthcare revenue cycle, finance, IT and supply chain operations.5 In 2019, Amazon released Transcribe Medical, an AI tool designed to support real-time clinical transcription, and the U.S. Food and Drug Administration (FDA) approved the first AI-enabled devices for cancer diagnostics.6,7
While there is no shortage of excitement around AI’s potential to transform healthcare, much of the enthusiasm still outpaces real-world impact. For example, since 2018, the American Medical Association has assigned at least 16 AI-related Current Procedural Terminology (CPT) codes to support medical coding and reimbursement. However, actual use has been limited: total patient volume across all AI CPT codes reached just 201.7K between 2018 and 2023, with usage largely confined to cardiac conditions like coronary artery disease and cardiac dysfunction (Figure 2). A similar pattern appears in drug development. Although the number of AI-discovered molecules entering clinical trials jumped from one in 2015 to 67 in 2023, more than two-thirds (67.2%) are still in early Phase I trials, and only one molecule has made it to market (Figure 3).8
As interest in AI has ramped up across the health economy, there has been growing pressure for policymakers to regulate AI use, given that the healthcare system generates sensitive and protected data and AI can pose patient safety risks. At the Federal level, efforts have been inconsistent. The Biden Administration initiated a voluntary AI agreement with major healthcare organizations in 2023 and issued Executive Order (EO) 14110 to guide national AI governance.9,10 In the EO, President Biden directed the Department of Health and Human Services to create an AI Task Force, develop quality control frameworks for AI-enabled technologies, document AI-related safety incidents and advance health equity through responsible AI deployment.11 In parallel, 28 healthcare providers and payers signed a voluntary agreement to align their AI efforts with the “FAVES” principles – Fair, Appropriate, Valid, Effective and Safe – committing to responsible AI use, transparency when content is AI-generated and risk management practices to mitigate potential harms.12 However, the Trump Administration revoked this EO in early 2025, favoring a looser Federal regulatory environment and calling for a new AI action plan.13 Although the Trump Administration has yet to issue updated guidance, the FDA is moving ahead with AI regulation, proposing new guidance in 2025 for managing AI-enabled medical devices across their life cycle, emphasizing transparency, cybersecurity and bias mitigation.14
In parallel, state policymakers have proposed laws to regulate AI in healthcare, mainly in relation to how AI is used by payers. In 2024, California enacted SB1120 to bar insurance companies from solely using AI in insurance coverage decisions and utilization management, while Colorado passed a law requiring AI developers for “high-risk” systems, including in healthcare, to prevent algorithmic discrimination and meet reporting requirements.15,16 Additionally, legislators in Georgia, Massachusetts and Illinois have introduced legislation that would prohibit payers from basing coverage decisions solely on results generated from AI-enabled software.17,18,19 Notably, the Georgia bill would broadly apply to all healthcare decisions, including those related to patient care and health insurance coverage. Overall, the U.S. faces a patchwork of Federal and state regulations that could create long-term challenges for AI developers and healthcare organizations operating across multiple jurisdictions.
To understand how AI is being adopted and viewed across the health economy, the primary stakeholder groups were identified: payers, providers, life sciences companies and patients. A review of more than 70 sources – including surveys, peer-reviewed studies, white papers, news coverage and executive interviews – was conducted to capture the range of stakeholder perspectives on AI in healthcare. Sentiments were categorized into current applications, challenges and future opportunities.
Across providers, payers, life science organizations and patients, the application of AI converges around four primary domains: clinical decision support, patient engagement, administrative automation and research innovation. These shared priorities demonstrate broad recognition of AI’s potential to enhance both care delivery and operational efficiency. However, each stakeholder is focused on different applications within these domains (Figure 4).
Providers are leveraging AI to transform clinical decision support systems and diagnosis, ease administrative burden and improve patient engagement and satisfaction:
Payers are using AI to streamline claims processing and enhance fraud detection, improving accuracy and efficiency while reducing manual workload:
AI is being used to provide patients with personalized health guidance and improve access to healthcare resources:
Life sciences manufacturers are using AI to accelerate drug discovery by screening chemical compounds, predicting side effects and identifying new uses for existing drugs:
While AI is already in use across the health economy, common barriers to adoption are surfacing for all stakeholders. Across providers, payers, patients and life sciences, organizations are grappling with challenges around workflow integration, data privacy and security, bias and equity risks, and regulatory uncertainty (Figure 5). A fundamental barrier cutting across all groups is the difficulty of measuring AI’s effectiveness and determining whether these tools actually deliver better outcomes. While each stakeholder group will need to respond to challenges differently, all are grappling with how to balance AI’s potential benefits with the need for responsible, transparent and equitable implementation.
Providers are facing obstacles integrating AI into clinical practice while maintaining data security, equity and regulatory compliance:
Payers seeking to implement AI face barriers around bias, data privacy and an unclear regulatory environment:
Life sciences manufacturers face challenges scaling AI solutions responsibly and compliantly:
Patients are cautious about AI in healthcare, driven by concerns about data use, bias in AI tools and the erosion of human connection:
While stakeholders face challenges when it comes to implementing or accepting AI in healthcare, AI also presents numerous opportunities, from predicting patient health events before they occur, to fully automating administrative tasks for providers and payers, to creating accessible chatbots that can improve patient access to care in areas with provider shortages. In the coming years, patients, providers, payers and life sciences organizations will aim to harness AI to drive operational efficiency, compete more effectively, deliver highly personalized care and improve health outcomes (Figure 6). Over the next five years, most AI applications will likely continue to enhance existing workflows (e.g., reducing administrative burden, supporting clinical decisions), but broader transformation – where AI becomes foundational to care delivery, coordination and personalization – may take far longer to fully materialize. Eventually, however, AI is expected to become embedded in core processes and function independently (e.g., interpreting patient data to make diagnoses, fully automating claims adjudication, designing clinical trials for new medications to optimize outcomes), rather than working on the margins. Such a shift is set to reshape how care is delivered, financed and experienced.
Providers aim to evolve from point solutions to a fully AI-augmented clinical environment:
Payers are looking to AI to evolve operations from manual processing to predictive, personalized health management:
Life sciences organizations hope AI will drive faster, smarter innovation across research and development and commercialization:
Despite concerns, patients have hopes that AI will lead to more accessible, personalized and trustworthy healthcare experiences:
The AI industry is quickly evolving, with new use cases and capabilities emerging almost daily. This synthesis of provider, payer, life sciences and patient perspectives reveals a stark reality: while all stakeholders acknowledge AI’s potential to radically change care delivery, reimbursement, research and administrative processes, crucial barriers persist. Some stakeholders see AI as a tool that can be implemented widely across the health economy and serve as a substitute for human labor in many circumstances, while others still have concerns about AI use in place of, rather than as a complement to, human capital. The potential future state – where AI could alleviate physician burden and improve access to care – remains elusive. Achieving these outcomes will require all stakeholders to overcome substantial hurdles, notably widespread mistrust and skepticism of AI from providers and patients and concerns related to algorithmic biases that could exacerbate health inequities. Currently, the full magnitude of AI's clinical implications, financial considerations and societal consequences remains largely theoretical rather than empirically validated. The evolving nature of AI necessitates constant monitoring and decisive action from all stakeholders as they develop strategic approaches to harness – and control – the broad range of AI-enabled technology.
In early 2026, this rapid evolution became especially visible with the launch of consumer-facing health platforms by leading AI developers. In January, OpenAI and Anthropic announced ChatGPT Health and Claude Health, respectively – tools designed to allow users to upload medical records, connect wellness applications and receive AI-generated health guidance.178,179 Both companies emphasized enhanced privacy protections and included disclaimers that these tools are not intended for diagnosis or treatment. However, neither platform incorporates systematic clinical oversight, and neither relies on a dedicated, health-specific model. With more than 40M daily health-related queries already directed to ChatGPT as of January 2026, these tools are rapidly becoming a source of medical information for many patients.180 While proponents argue that such platforms could help address access barriers amid provider shortages and rising costs, experts have raised concerns about inaccurate or overly confident recommendations, limited accountability and the potential for patient harm. These developments underscore the growing tension between innovation and governance, highlighting the need for clear standards to ensure that AI augments, rather than replaces, clinical judgment.