Care Management Quality Metrics and Performance Measures

Quality metrics and performance measures in care management form the quantitative and qualitative backbone through which health plans, provider organizations, and regulatory agencies evaluate whether care coordination programs achieve clinical, operational, and patient experience goals. This page covers the definitional scope of care management metrics, the structural frameworks used to classify and deploy them, the causal drivers that shape measurement priorities, and the practical tensions that arise when competing measures produce conflicting incentives. The regulatory context spans Centers for Medicare & Medicaid Services (CMS) requirements, National Committee for Quality Assurance (NCQA) HEDIS standards, and The Joint Commission accreditation criteria.


Definition and scope

Care management quality metrics are structured, reproducible indicators used to assess whether a care management program delivers measurable benefit to defined patient populations. They differ from general clinical quality measures in that they evaluate the coordination infrastructure itself — not only discrete clinical events — including care plan completion rates, follow-up contact adherence, medication reconciliation rates, and transition timeliness.

The scope of measurement in care management spans three domains recognized by CMS: process measures (what was done), outcome measures (what resulted), and patient experience measures (how care was perceived). These domains align closely with the Donabedian framework of structure, process, and outcome, which the Agency for Healthcare Research and Quality (AHRQ) has adopted as a foundational model for healthcare quality assessment (AHRQ, National Quality Measures Clearinghouse conceptual framework).

Care management performance measures specifically apply to programs such as Chronic Care Management (CCM), Transitional Care Management (TCM), and Principal Care Management (PCM), all of which carry CMS billing codes (CPT 99490, 99495, 99496, 99424–99427) and corresponding documentation requirements. Programs operating under Medicare care management programs must meet defined time thresholds and documentation standards before billing, making metrics both a compliance instrument and a quality tool simultaneously.

The National Quality Forum (NQF) maintains an endorsement process for care coordination measures. Within the NQF's care coordination portfolio, endorsed measures include transitions of care documentation, medication reconciliation post-discharge, and care plan content standards — each with explicit technical specifications covering denominator definitions, numerator criteria, and exclusion logic.
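The numerator/denominator/exclusion pattern can be illustrated with a minimal sketch. The field names (`discharged`, `hospice`, `med_rec_done`) are invented for illustration; this is not an NQF or HEDIS specification:

```python
# Sketch of a measure-rate calculation following the generic
# denominator / numerator / exclusion pattern. Field names are
# illustrative, not drawn from any endorsed measure specification.

def measure_rate(patients):
    """Rate for a medication-reconciliation-style process measure.

    Denominator: patients with a qualifying discharge, minus exclusions.
    Numerator:   denominator patients whose reconciliation was completed.
    """
    # Denominator: qualifying event occurred and no exclusion applies
    denominator = [p for p in patients
                   if p["discharged"] and not p["hospice"]]
    # Numerator must be a subset of the denominator
    numerator = [p for p in denominator if p["med_rec_done"]]
    if not denominator:
        return None  # rate is undefined when the denominator is empty
    return len(numerator) / len(denominator)

patients = [
    {"discharged": True,  "hospice": False, "med_rec_done": True},
    {"discharged": True,  "hospice": False, "med_rec_done": False},
    {"discharged": True,  "hospice": True,  "med_rec_done": True},   # excluded (hospice)
    {"discharged": False, "hospice": False, "med_rec_done": False},  # no qualifying event
]
print(measure_rate(patients))  # 0.5
```

Real specifications add measurement-period windows and far more elaborate exclusion logic, but the denominator-then-numerator ordering shown here is the common skeleton.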


Core mechanics or structure

Quality metrics in care management operate through a measure lifecycle: specification, data collection, aggregation, reporting, and performance improvement response. Each stage introduces sources of variance that affect measure validity.

Measure specification defines numerator, denominator, exclusions, and measurement period. The HEDIS measurement framework, published annually by NCQA, specifies measures such as Transitions of Care (TRC), which tracks four sub-measures: notification of inpatient admission, receipt of discharge information, patient engagement after discharge, and medication reconciliation post-discharge. Each sub-measure carries its own rate calculation.

Data collection draws from administrative claims, electronic health record (EHR) abstraction, or hybrid sources. Hybrid collection, which combines claims with medical record review, is the most resource-intensive but typically produces the most complete picture, particularly for care coordination and care management programs whose clinical documentation varies across settings.

Aggregation occurs at the patient, provider, program, and plan level. Star Ratings under CMS Medicare Advantage aggregate individual measure scores into a composite rating system in which each measure is weighted. As of CMS's 2024 Star Ratings technical notes, the overall Star Rating incorporates measures across five domains, with patient experience, complaints, and access measures weighted at 4x compared with process measures weighted at 1x (CMS Medicare Star Ratings).
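A weighted composite of this kind can be sketched as follows. The measure names, scores, and weights are illustrative only; the actual CMS methodology also applies cut points, improvement measures, and other adjustments not shown here:

```python
# Illustrative weighted composite in the spirit of the Star Ratings
# aggregation described above. All inputs are invented.

def weighted_composite(measures):
    """Weighted mean of measure scores, with higher-weighted
    patient-experience measures and 1x process measures."""
    total_weight = sum(w for _, w in measures.values())
    weighted_sum = sum(score * w for score, w in measures.values())
    return weighted_sum / total_weight

measures = {
    # measure name: (star score, weight) -- hypothetical values
    "getting_needed_care":    (4.0, 4.0),  # patient experience, weighted up
    "med_rec_post_discharge": (3.0, 1.0),  # process measure
    "care_plan_completion":   (5.0, 1.0),  # process measure
}
print(weighted_composite(measures))  # 4.0
```

The design point the sketch makes explicit: a heavily weighted experience measure can move the composite more than two process measures combined.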

Reporting timelines vary: HEDIS data are typically submitted annually in mid-June for the prior measurement year, while CMS quality reporting programs such as the Merit-based Incentive Payment System (MIPS) use calendar-year measurement windows with submission deadlines the following March.


Causal relationships or drivers

The selection and weighting of care management metrics are not arbitrary; they reflect specific policy priorities, payer incentive structures, and epidemiological patterns that shape what gets measured and rewarded.

Value-based contracting is the primary structural driver. As value-based care and care management models have expanded, payers increasingly link reimbursement directly to metric performance. Accountable Care Organizations (ACOs) participating in the Medicare Shared Savings Program (MSSP) are evaluated on a defined quality measure set (reported through the APM Performance Pathway as of the 2023 program year, per CMS MSSP reporting requirements) covering preventive care, chronic disease management, and patient experience.

Risk stratification logic determines which patients enter measurement denominators. High-complexity patients generate more touchpoints — and therefore more opportunities to succeed or fail on process metrics — which means risk stratification in care management methodology directly influences apparent program performance.
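The denominator effect of stratification can be sketched with hypothetical tier thresholds (the cutoff values and field names below are invented, not a standard methodology):

```python
# Minimal sketch of risk-tier assignment and its effect on measure
# denominators. Thresholds are hypothetical.

def risk_tier(score):
    """Map a normalized risk score to a stratification tier."""
    if score >= 0.8:
        return "high"
    if score >= 0.4:
        return "moderate"
    return "low"

def denominator_by_tier(patients):
    """Count patients per tier: the stratification logic itself
    determines who enters each measurement denominator."""
    counts = {"high": 0, "moderate": 0, "low": 0}
    for p in patients:
        counts[risk_tier(p["risk_score"])] += 1
    return counts

patients = [{"risk_score": s} for s in (0.95, 0.85, 0.5, 0.3, 0.1)]
print(denominator_by_tier(patients))  # {'high': 2, 'moderate': 1, 'low': 2}
```

Shifting the 0.8 cutoff to 0.9 would move a patient out of the high-complexity denominator entirely, which is exactly how stratification methodology changes apparent program performance without any change in care delivered.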

Regulatory mandates establish floor-level requirements. CMS Conditions of Participation for hospitals (42 CFR §482) require discharge planning processes, creating mandatory measurement points around transition planning. Similarly, state Medicaid managed care rules (42 CFR §438.330) require quality assessment and performance improvement (QAPI) programs with measurable outcomes targets.

Technology infrastructure acts as an enabling or constraining variable. Programs with interoperable EHR systems can capture real-time care plan completion data; programs relying on fax-based communication cannot. The health information technology infrastructure available to a care management program directly affects which measures can be practically collected and reported.


Classification boundaries

Care management quality measures fall into four primary classification categories, each with distinct technical characteristics:

  1. Process measures — Track whether a defined action occurred within a specified timeframe (e.g., medication reconciliation within 30 days of discharge). Process measures are generally more controllable by care management staff but do not directly confirm patient benefit.

  2. Outcome measures — Track clinical or utilization results (e.g., all-cause readmission rate, hemoglobin A1c control rate in diabetes care management). Outcome measures are subject to case-mix confounding; risk adjustment is frequently required for valid comparison.

  3. Patient experience measures — Derived from standardized survey instruments such as the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey. CMS requires CAHPS Health Plan Survey administration for Medicare Advantage plans, with results feeding into Star Ratings.

  4. Structural measures — Assess the presence of systems, staff, or policies required to deliver quality (e.g., 24/7 nurse line availability, care manager-to-patient ratios). Structural measures are collected via attestation and are less frequently tied to pay-for-performance arrangements.

The NQF's taxonomy further subdivides measures by measurement level (individual patient vs. population), data source (claims vs. EHR vs. survey), and attribution methodology (episode-based vs. prospective enrollment-based).
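The risk adjustment noted for outcome measures is often expressed as an observed-to-expected (O/E) ratio. A minimal sketch, assuming hypothetical model-predicted probabilities (a real implementation would draw these from a fitted risk model):

```python
# Sketch of observed-to-expected (O/E) risk adjustment for an outcome
# measure such as a readmission rate. Inputs are hypothetical.

def oe_ratio(outcomes, expected_probs):
    """Observed event count divided by the sum of model-expected
    probabilities; a ratio below 1.0 means better than expected
    for this case mix."""
    observed = sum(outcomes)        # actual events (0/1 per patient)
    expected = sum(expected_probs)  # model-predicted event count
    return observed / expected

outcomes = [1, 0, 0, 1, 0]                   # readmitted yes/no
expected_probs = [0.6, 0.2, 0.3, 0.7, 0.2]   # model-predicted risk
print(oe_ratio(outcomes, expected_probs))    # 1.0
```

Two programs with identical raw readmission rates can have very different O/E ratios once expected risk is summed, which is the case-mix confounding point made above.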


Tradeoffs and tensions

Specificity vs. comprehensiveness. Highly specific measures (e.g., 7-day follow-up after psychiatric hospitalization) capture clinically meaningful events but miss broad program performance. Composite or summary measures reduce administrative burden but can obscure areas of underperformance within subpopulations.

Process compliance vs. outcome achievement. A care management program can achieve 100% care plan completion rates while patients continue to experience preventable hospitalizations. Process measure performance does not guarantee outcome improvement, yet payers frequently reward process compliance because it is measurable and attributable.

Attribution ambiguity. When a patient touches multiple providers, health plans, and care management programs within a measurement period, assigning credit or responsibility for an outcome becomes contested. The MSSP's Beneficiary Attribution methodology (prospective vs. retrospective) has been revised multiple times precisely because attribution directly determines which ACO receives shared savings or incurs shared losses.

Equity and disparity masking. Aggregate metrics can obscure differential performance across racial, ethnic, and socioeconomic subgroups. NCQA's 2023 Health Equity Accreditation framework introduced requirements for stratified reporting of HEDIS measures, recognizing that aggregate performance can mask structural disparities. Programs evaluated through population health management lenses increasingly face pressure to report stratified measure rates.


Common misconceptions

Misconception: Higher HEDIS scores automatically indicate better patient outcomes. HEDIS measures are process-oriented or intermediate-outcome-oriented. A health plan may achieve a high rate on Controlling High Blood Pressure while patient populations in specific ZIP codes remain undertreated — the aggregate masks the distributional reality.

Misconception: NQF endorsement makes a measure suitable for all care management contexts. NQF endorsement confirms that a measure meets psychometric criteria for validity and reliability under specific data conditions. A measure endorsed for inpatient hospital use may not translate to outpatient or community-based care management without re-specification. The NQF explicitly publishes measure applicability criteria within each endorsed measure's technical specifications.

Misconception: CMS Star Ratings reflect only clinical quality. Of the five Star Rating domains under Medicare Advantage, two directly measure administrative or operational performance (Plan Administration and Customer Service). Star Rating movement can occur without any change in clinical care delivery.

Misconception: Care management programs can control their readmission metrics independently. Thirty-day all-cause readmission rates are influenced by post-acute care availability, housing stability, social determinants, and primary care access, all of which extend beyond a care manager's scope. The literature on social determinants of health in care management consistently identifies non-clinical variables as significant readmission predictors.


Checklist or steps (non-advisory)

The following sequence describes the standard components present in a care management quality measurement program, drawn from CMS QAPI requirements (42 CFR §438.330) and NCQA Health Plan Accreditation standards:

  1. Define the measurement population. Specify inclusion criteria (enrollment, diagnosis, risk score threshold), exclusion criteria (hospice enrollment, age limits), and denominator logic for each measure.

  2. Select measure specifications. Identify source specification documents (HEDIS Technical Specifications, NQF measure steward documentation, CMS Quality Measure specifications from the Measures Management System).

  3. Establish data collection methodology. Determine whether data will be extracted from claims, EHR abstraction, care management platform fields, or hybrid sources. Document data validation procedures.

  4. Set baseline performance rates. Calculate current-year performance before implementing interventions, establishing a denominator-adjusted baseline rate for each measure.

  5. Identify stratification variables. Determine how results will be disaggregated — by age cohort, chronic condition, geographic region, or race/ethnicity — consistent with NCQA Health Equity Accreditation requirements.

  6. Implement measurement period tracking. Align data collection windows with reporting program timelines (HEDIS measurement year, MIPS performance period, state Medicaid contract year).

  7. Aggregate and validate results. Run audit logic to identify data completeness gaps, duplicate records, and denominator misclassifications before generating final rates.

  8. Report to required entities. Submit HEDIS data to NCQA via IDSS (Interactive Data Submission System); submit MIPS data via CMS's Quality Payment Program portal; submit state Medicaid QAPI reports per contract specifications.

  9. Document performance improvement response. For measures below benchmark thresholds, document a formal Performance Improvement Project (PIP) with root cause analysis, intervention plan, and re-measurement timeline — consistent with 42 CFR §438.330(d).
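The validation pass in step 7 can be sketched as follows (field names such as `member_id` and `med_rec_date` are illustrative, not drawn from any submission specification):

```python
# Sketch of pre-aggregation validation: de-duplicate member records
# and flag completeness gaps before rates are calculated.

def validate_records(records, required_fields):
    """Drop duplicate member records (keeping the first seen) and
    report rows missing required fields."""
    seen, clean, gaps = set(), [], []
    for r in records:
        key = r.get("member_id")
        if key in seen:
            continue                     # duplicate record: skip
        seen.add(key)
        missing = [f for f in required_fields if r.get(f) is None]
        if missing:
            gaps.append((key, missing))  # completeness gap to resolve
        else:
            clean.append(r)
    return clean, gaps

records = [
    {"member_id": "A1", "discharge_date": "2024-03-01", "med_rec_date": "2024-03-10"},
    {"member_id": "A1", "discharge_date": "2024-03-01", "med_rec_date": "2024-03-10"},  # duplicate
    {"member_id": "B2", "discharge_date": "2024-04-02", "med_rec_date": None},          # gap
]
clean, gaps = validate_records(records, ["discharge_date", "med_rec_date"])
print(len(clean), gaps)  # 1 [('B2', ['med_rec_date'])]
```

Records flagged in `gaps` would go back for chart review or supplemental data pull rather than silently deflating the numerator.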


Reference table or matrix

| Measure Category | Example Measure | Source Specification | Primary Data Source | Attribution Level | Regulatory Context |
|---|---|---|---|---|---|
| Process | Medication Reconciliation Post-Discharge | HEDIS TRC (NCQA) | Medical record / claims | Health plan | Medicare Advantage Star Ratings |
| Process | Care Plan Documented within 30 Days | CMS CCM documentation requirements | EHR / care management platform | Provider / program | CCM billing (CPT 99490) |
| Outcome | 30-Day All-Cause Readmission Rate | CMS Hospital Readmissions Reduction Program | Claims | Hospital / ACO | MSSP, HRRP (42 USC §1395ww) |
| Outcome | Hemoglobin A1c Poor Control (>9%) | HEDIS CDC (NCQA) | Claims / lab data | Health plan | Medicare Advantage, Medicaid |
| Patient Experience | Getting Needed Care | CAHPS Health Plan Survey (CMS/AHRQ) | Member survey | Health plan | Medicare Advantage Star Ratings |
| Patient Experience | Care Coordination (CG-CAHPS) | CAHPS Clinician & Group Survey | Patient survey | Provider / group | MIPS Quality Category |
| Structural | 24/7 Nurse Advice Line Availability | NCQA Health Plan Accreditation standards | Attestation | Health plan | NCQA Accreditation |
| Structural | Care Manager Caseload Ratio | CMSA Standards of Practice | Organizational documentation | Program / employer | CMSA voluntary standards |

CMSA = Case Management Society of America. HEDIS = Healthcare Effectiveness Data and Information Set. CAHPS = Consumer Assessment of Healthcare Providers and Systems. CCM = Chronic Care Management. MSSP = Medicare Shared Savings Program. HRRP = Hospital Readmissions Reduction Program.

The performance benchmarks applicable to each measure are published annually by their respective stewards: NCQA releases HEDIS national percentile benchmarks each fall, CMS publishes MIPS performance thresholds by specialty via the Quality Payment Program website, and state Medicaid agencies publish contract-specific benchmarks in their Managed Care Quality Strategy documents as required under 42 CFR §438.340.

Programs measuring care management outcomes across complex patient populations should align measure selection with the clinical priorities documented in the organization's population health management strategy and with the requirements of the applicable accreditation body, whether NCQA, URAC, or The Joint Commission. Care managers seeking case management certification through bodies such as the Commission for Case Manager Certification (CCMC) will encounter overlapping but distinct competency frameworks that influence how they document and report their contributions to organizational metrics.

