Why this year demands a smarter approach to payment integrity
Rising medical costs, tightening margins and a fragile operations backbone have converged to make accuracy before payment mission-critical. PwC’s Health Research Institute projects that commercial medical cost trend will remain elevated in 2025, at approximately 8% for the group market and 7.5% for the individual market, pressured by drug costs, utilization and inflationary dynamics.
At the same time, U.S. health spending jumped 7.5% to $4.9 trillion in 2023 and is projected to outpace GDP growth over the next decade, meaning spend discipline must come from smarter, earlier interventions.
Operationally, the last year reminded us how brittle the revenue cycle can be. The Change Healthcare cyberattack created nationwide disruptions, forcing emergency advances and exposing single-point-of-failure risks across claims and payments.
Meanwhile, preventable administrative friction still drains resources: the 2024 CAQH Index finds a $20B annual savings opportunity from automation and workflow modernization across core transactions.
And providers are feeling it: denial pressure keeps climbing, with surveys reporting rejection rates that commonly hit 10–15% and a 16% rise in denials since 2018.
Bottom line: Payment integrity must move left — toward prepayment — and it must be precise, transparent and fast.
AI is an enabler, not the decider*
As a physician, I’m often asked, “How far should we let AI go?” Our stance is clear: AI is an assistant that accelerates detection and routing; people make the decisions.
In our program, the “machine” never determines payment or medical appropriateness — experts remain accountable for every conclusion.
This “human-in-the-loop” governance isn’t just philosophy; it’s risk management. We frame AI as AI-assisted, not “AI-driven,” and we avoid over-automated reviews that can create provider abrasion or compliance exposure.
The real unlock: Prepayment coding detection and intelligent routing
The fastest path to sustainable savings is getting the coding right the first time and ensuring each claim is routed to the best review path, not just any review path. Analyst perspectives show the market shifting away from post-pay recovery toward proactive pre-pay accuracy, using AI to spot risky claims and check them in real time.
That requires three capabilities working together:
- Predictive identification of error-prone claims: Models scan for patterns (bundling conflicts, modifiers, high-cost drug anomalies, DRG outliers) and prioritize which submissions warrant additional scrutiny. This lets clean claims flow to payment, while the right subset gets elevated.
- Intelligent claims routing: Once risk is flagged, routing logic sends each flagged claim to the optimal channel (e.g., claims editing vs. expert clinical review), whether that means coding experts or clinical validation reviewers, aligned to plan policies, contracts and prompt-pay obligations.
- Explainable outcomes: Every recommendation must map to transparent policy, coding guidance or contract rationale so plan teams can stand behind the decision with providers.
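To make the flag-and-route pattern concrete, here is a minimal sketch in Python. The thresholds, flag names and channel labels are hypothetical illustrations, not Zelis’s actual logic; in practice the error probability would come from a predictive model and the thresholds from plan policy.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    billed_amount: float
    flags: list = field(default_factory=list)  # e.g. ["bundling_conflict", "drg_outlier"]
    error_probability: float = 0.0             # output of a predictive model (hypothetical)

# Hypothetical: flags that warrant clinical judgment rather than coding edits.
CLINICAL_FLAGS = {"drg_outlier", "high_cost_drug_anomaly"}

def route(claim: Claim) -> str:
    """Let clean claims flow to payment; send risky claims to the best review path."""
    if claim.error_probability < 0.10:          # illustrative "clean" threshold
        return "auto_pay"
    if CLINICAL_FLAGS & set(claim.flags):
        return "clinical_validation"            # expert clinical review
    return "claims_editing"                     # coding-level edits

claims = [
    Claim("A1", 1200.0, [], 0.02),
    Claim("B2", 54000.0, ["drg_outlier"], 0.81),
    Claim("C3", 300.0, ["bundling_conflict"], 0.35),
]
routes = [route(c) for c in claims]
# routes == ["auto_pay", "clinical_validation", "claims_editing"]
```

The design choice worth noting: the model only prioritizes and routes; the review channels it feeds are staffed by coders and clinicians who make the actual determination.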
With this design, AI lowers noise and lifts signal: fewer unnecessary record requests, fewer low-yield reviews and a focus on the claims that truly matter — before dollars leave the door.

Keeping abrasion low by design
Provider relationships are strategic assets. AI should reduce abrasion by being selective and transparent. Our approach: don’t ask for documentation on every claim. Ask when the probability of error is high and the rationale is clear. Then have expert reviewers validate findings and communicate precisely what changed and why. That’s how you speed resolution and maintain trust.
This matters because denial volume has real downstream effects on A/R, staffing and patient experience. By applying AI to triage which claims to review prepay — and using clinicians and coders to validate the edits — you can minimize disputes and provider abrasion while complying with prompt-pay timelines.
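The selectivity rule above — request documentation only when error risk is high and the rationale is shareable — can be sketched in a few lines. The threshold and rationale strings are hypothetical placeholders for plan-defined policy.

```python
from typing import Optional

def should_request_records(error_probability: float,
                           rationale: Optional[str],
                           threshold: float = 0.6) -> bool:
    # Hypothetical policy: ask for documentation only when the modeled error
    # risk clears a plan-defined threshold AND there is a concrete rationale
    # (policy, coding guidance or contract term) to share with the provider.
    return error_probability >= threshold and bool(rationale)

decisions = [
    should_request_records(0.82, "bundling edit per plan coding policy"),  # high risk, clear rationale
    should_request_records(0.82, None),                                    # high risk, nothing explainable
    should_request_records(0.30, "modifier mismatch"),                     # low risk
]
# decisions == [True, False, False]
```

Note the second case: even a high-risk claim generates no record request unless reviewers can articulate why, which is what keeps the process transparent to providers.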
Guardrails that make AI safe — and valuable
To keep AI helpful and compliant, we operate under guardrails we recommend for every plan:
- Human governance: The machine never makes determinations. Experts do.
- Explainability over mystery: No black boxes. Tie every recommendation to clinical coding policy, plan rules and contract terms.
- Security & privacy first: Adopt enterprise controls and certifications; 2024–2025 cyber events demand nothing less.
- Policy currency: Keep AI tuned to the latest CMS/AMA updates and plan policies; governance beats guesswork.
These are the differences between AI you can audit and AI you have to apologize for.
A clinician’s closing thought
AI is here to stay — but in healthcare finance, how we use it matters more than if we use it. Use AI to surface the right claims and route them to the right review. Keep experts in the loop to validate, explain and improve the system. That’s how plans lower cost trends, reduce abrasion and meet their obligations to members and providers with integrity.
*Zelis exclusively employs closed-source artificial intelligence (AI) platforms to enrich its product suite, ensuring adherence to legal, ethical and industry-leading standards. All outcomes are rigorously and continuously reviewed by Zelis personnel to ensure quality and reliability.