
Implementing Safe AI Practices in State Medicaid Programs

Artificial intelligence (AI) is rapidly reshaping healthcare operations, clinical decision support, and administrative processes across Medicaid programs. States are increasingly deploying AI for eligibility determinations, utilization management, care coordination, and member engagement. While these technologies offer meaningful opportunities to improve efficiency and access to care, they also introduce significant risks related to equity, safety, and accountability if not properly governed.


In 2025 alone, 47 states introduced legislation addressing healthcare AI, signaling heightened regulatory scrutiny. For Medicaid programs—serving populations with complex health needs, social risk factors, and historical inequities—the stakes are particularly high. AI tools developed using commercial insurance data often fail to accurately reflect Medicaid populations, increasing the risk of biased predictions, inappropriate service denials, and unequal outcomes.
Real-world experience underscores these risks. In Arkansas, an algorithm used to determine home- and community-based service hours significantly underestimated beneficiary needs, resulting in widespread service reductions and legal challenges. Separately, national research has demonstrated that widely used healthcare risk algorithms underestimated the needs of Black patients by relying on prior healthcare spending as a proxy for illness severity—reinforcing historical underutilization rather than clinical need.


Regulators are responding. In December 2024, the Centers for Medicare & Medicaid Services (CMS) proposed guardrails for Medicare Advantage plans that prohibit sole reliance on AI for medical necessity determinations and require meaningful human review. Although these requirements apply directly to Medicare Advantage plans, they establish clear expectations that are likely to influence Medicaid managed care plan oversight and state procurement standards.
States are already moving in this direction. California now requires disclosure and consent for certain healthcare AI uses. Illinois prohibits utilization review decisions based solely on automated systems. New York requires insurers to submit algorithms for certification to assess discriminatory impact. Collectively, these actions reflect a shift toward targeted regulation of high-risk AI use cases (particularly mental health chatbots, payor algorithms, and high-risk clinical tools) rather than broad, principles-only governance.

Best Practices for Medicaid AI Governance


To safely deploy AI while protecting beneficiaries, Medicaid agencies and their partners should adopt the following practices:


  • Validate before deployment
    Test AI models using Medicaid-representative data and evaluate performance across demographic and socioeconomic subgroups.

  • Preserve human authority
    Use AI to support—not replace—clinical judgment and coverage decision-making. Require documented human review for eligibility, medical necessity, and service determinations.

  • Ensure transparency and disclosure
    Inform beneficiaries when AI meaningfully influences coverage or care decisions and provide clear escalation pathways.

  • Monitor continuously
    Track real-world performance, identify bias or drift, and implement remediation processes.

  • Establish formal governance
    Create AI oversight structures, conduct periodic fairness audits, and integrate AI risk management into existing compliance and audit frameworks.
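To make the first and fourth practices concrete, a pre-deployment subgroup validation might compare a model's approval rates and accuracy across demographic groups before it influences real eligibility decisions. The sketch below is purely illustrative: the group labels, the record schema, and the disparity threshold are assumptions for the example, not drawn from any actual Medicaid program or CMS requirement.

```python
# Illustrative subgroup validation check (hypothetical schema and threshold).
# Flags any group whose approval rate diverges from the overall rate by
# more than a chosen disparity threshold.

from collections import defaultdict

def subgroup_report(records, disparity_threshold=0.1):
    """records: list of dicts with 'group', 'predicted', 'actual' (0/1).

    Returns per-group approval rate and accuracy, flagging groups whose
    approval rate differs from the overall rate by more than
    disparity_threshold.
    """
    by_group = defaultdict(list)
    for r in records:
        by_group[r["group"]].append(r)

    overall_rate = sum(r["predicted"] for r in records) / len(records)
    report = {}
    for group, rows in by_group.items():
        approval_rate = sum(r["predicted"] for r in rows) / len(rows)
        accuracy = sum(r["predicted"] == r["actual"] for r in rows) / len(rows)
        report[group] = {
            "approval_rate": approval_rate,
            "accuracy": accuracy,
            "flagged": abs(approval_rate - overall_rate) > disparity_threshold,
        }
    return report
```

Running a check like this on Medicaid-representative holdout data, and re-running it periodically on production decisions, gives the continuous-monitoring bullet a measurable artifact that can feed fairness audits and remediation workflows.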

Policy Recommendation Summary


When governed responsibly, the integration of AI in Medicaid programs can strengthen care delivery, reduce disparities, and measurably improve the lives of Medicaid beneficiaries nationwide.


Medicaid programs should implement structured AI governance frameworks aligned with emerging CMS expectations. These frameworks should require pre-deployment validation, prohibit sole reliance on AI for high-impact decisions, mandate transparency to beneficiaries, and establish ongoing monitoring and accountability mechanisms. With appropriate safeguards, AI can enhance efficiency and access while advancing—not undermining—Medicaid’s equity and coverage goals.

Citations
 

Sidley Austin LLP. (2024). CMS proposes artificial intelligence limits and utilization management guardrails for Medicare Advantage. Retrieved from https://www.sidley.com


Hopkins Bloomberg Public Health Magazine. (2023). Rooting out AI's biases. Retrieved from https://magazine.publichealth.jhu.edu


Becker's Hospital Review. (2025). 47 states introduced healthcare AI bills in 2025. Retrieved from https://www.beckershospitalreview.com


About Cyquent
Published by Cyquent, a public sector firm with hands-on experience implementing AI in Medicaid and other government programs, focused on aligning innovation with public policy objectives. 
