
    A manager discovers someone used a consumer chatbot to rewrite a patient complaint — it has a helpful tone, polished phrasing, and pasted snippets of the actual encounter. Or a physician wants to try an ambient scribe that “writes the whole note.” Or the revenue cycle team is excited about a “denial predictor,” but no one can explain whether it’s a rules engine, a machine-learning (ML) model, or a black box fed by payer behavior. 

    Meanwhile, your EHR and PM vendors roll out upgrades promising AI summaries and draft responses, and the front desk is experimenting with Claude to handle scheduling scripts. AI is showing up in your practice — can you say, clearly and consistently, who is accountable, what is allowed, what data can be used, and how you will catch problems? 

    MGMA Stat poll - January 20, 2026 - AI governance and formal policies


    A Jan. 20, 2026, MGMA Stat poll finds that 42% of medical group leaders say their organization either has AI governance or a formal policy on AI use (20%) or is working on developing one (22%), while a majority (56%) say “no” and another 2% are unsure. The poll had 328 applicable responses. 

    This points to a sizable shift toward oversight since MGMA’s joint research with Humana in fall/winter 2024, which found that 73% of surveyed organizations did not have a formal governance structure for AI use, even as adoption accelerated and the range of use cases broadened. That “governance lag” is where preventable risk lives. 

    What you told us 

    Most practice leaders responding to this week’s poll review their AI governance or formal AI‑use policies on a regular cadence, with annual reviews being the most common, followed by quarterly, monthly, or biannual cycles in some organizations. A few note that oversight is handled by committees or corporate offices, and several groups review policies more frequently when AI tools are newly implemented or as situations require. 

    Leaders who are actively developing AI policies are primarily focused on defining a clear scope of AI use, along with establishing data rules, privacy safeguards, and approval processes. Many also emphasize supporting areas such as training, clinical use cases, and compliance with emerging regulations. 

    While some groups without AI policies simply aren’t using AI yet, many are experimenting with a wide spectrum of AI tools without governance; by mid-2025, 71% of practice leaders reported some use of AI for patient visits. Among this week’s respondents using AI without policies, the most common applications focused on documentation efficiency (e.g., scribing/ambient note capture, dictation, charting support), revenue cycle tasks (e.g., prior authorizations), and operational support/administrative functions like meeting notes, online scheduling, SOP writing, call‑center assistance, and general ChatGPT‑style support. 

    Why AI governance is harder than other tech governance 

    Most practices already know how to govern “traditional” technology: you assess security, negotiate a contract, train users, and support go-live. AI strains that familiar rhythm in three ways: 

    1. AI changes behavior, not just tools. It can influence how clinicians document, how staff communicate, and how leaders make decisions. 
    2. AI can span the entire practice, not a single department. Your highest-risk use cases may be clinical (diagnostic support, risk prediction, triage), but some of the most common failure points are operational: patient messaging, HR content, denial appeals, marketing copy, call scripts, and internal analytics. 
    3. The pace is faster than governance instincts. As one AMA leader put it: “Technology is moving very, very quickly … so setting up an appropriate governance structure now is more important than it’s ever been.” 

    So what does “good governance” look like in a practice, without adding unnecessary layers? 

    Governance failures: A primer 

    Early AI failures can be hard to spot and expensive to unwind: 

    • Shadow AI becomes the default: People use what’s easiest — usually consumer tools — because they’re fast and free. As Chris Bevil cautioned on the MGMA Insights podcast: “When using tools like ChatGPT or other generative AI platforms, you don’t always know where the data is going.” If your policy doesn’t create a safe, approved path, staff will create their own. 
    • Data leakage happens through ordinary work: It isn’t malicious when someone pastes an email thread, a phone message, or a portion of a note into a chatbot to “clean it up.” If your practice hasn’t defined what data is allowed in which tools, you’re relying on personal judgment. 
    • Automation bias creeps in: Outputs look confident, and people stop double-checking. That’s true for clinical suggestions and for administrative decisions (like prioritizing denials or flagging “high-risk” patients). The Federation of State Medical Boards (FSMB) emphasizes that guardrails and understanding are necessary so these tools don’t introduce risk in clinical practice, and that governance should be anchored in ethical principles with human accountability at the center. 
    • “AI quality” is a moving target: Models get updated, prompts change, performance shifts, and workflows evolve. 

    Govern by use cases, not hype 

    A workable approach for practices is to govern AI the way you govern clinical risk: categorize, set thresholds, require evidence, and monitor. MGMA’s AI issue brief points toward two concepts that translate well into practice policy: transparency and alignment with FAVES principles (fair, appropriate, valid, effective, safe): 

    • Fair: Does it behave differently across patient populations, and do we know? 
    • Appropriate: Is this use case suited to AI assistance, or does it require human judgment end-to-end? 
    • Valid/Effective: What evidence do we have it works in our setting and specialty mix? 
    • Safe: What is the failure mode, and how will we catch it quickly? 

    Even if your practice is not building certified health IT, these themes shape vendor expectations, contracting conversations, and what responsible deployment looks like. 

    Build governance that fits your practice 

    Governance should focus on making decisions predictable (and auditable) when the next AI tool appears. A clean structure usually includes: 

    1. An executive owner 

    If AI touches clinical care, this cannot sit only with IT. It needs joint ownership between a clinical leader and an operational leader, with compliance and security embedded. 

    The AMA’s “Governance for Augmented Intelligence” toolkit lays out an eight-step path that begins with executive accountability and governance structure, then moves through working group formation, assessment of current state, policy development, vendor evaluation, implementation processes, oversight/monitoring, and organizational readiness. That sequencing helps prevent writing policy before you understand where AI is already used. 

    2. A small AI governance working group 

    Think of this as a cross between a formulary committee and a change-management committee: one physician champion, nursing/clinical operations, compliance/privacy, security/IT, revenue cycle, patient experience, and HR. MedPro’s risk guidance recommends an AI governance committee with diverse representation — clinicians, IT, legal/ethical expertise, risk managers, and even patient representatives — because responsibility includes safety and quality in deployment. 

    3. A lightweight intake process 

    If someone can buy or enable an AI tool without review, you don’t have governance. At minimum, require that any new AI tool (or any major new AI feature in existing platforms) triggers a short use-case description; a data flow description (what goes in, what comes out, where it’s stored); a pilot plan and success metrics; and an owner for monitoring and issue escalation. 
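
    As a minimal sketch of what that intake record could capture if tracked in a shared tool (the field names below are illustrative assumptions, not an MGMA-prescribed schema), in Python:

        # Hypothetical AI intake record; field names are illustrative, not an
        # MGMA-prescribed schema.
        from dataclasses import dataclass, field
        from datetime import date

        @dataclass
        class AIIntakeRequest:
            tool_name: str
            requested_by: str
            use_case: str                  # short description of the intended use
            data_inputs: list[str]         # what goes in (e.g., "de-identified note text")
            data_outputs: list[str]        # what comes out and where it is stored
            pilot_plan: str                # scope, duration, and success metrics
            monitoring_owner: str          # who monitors performance and escalates issues
            date_submitted: date = field(default_factory=date.today)
            approved: bool = False

    Even a spreadsheet with these columns beats no intake at all; the point is that every new tool or major new AI feature produces a record that someone owns.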

    Write policies people will follow 

    Policies work best when they answer frontline questions in plain language: 

    1. Start with one umbrella policy that defines “AI” in your office 

    Your policy should define categories you will govern differently, such as: 

    • Consumer generative AI (ChatGPT/Claude-like tools used through public interfaces) 
    • Enterprise generative AI (contracted tools with defined data handling terms) 
    • Embedded AI in the EHR/RCM platform (summaries, scribing, coding suggestions) 
    • Predictive/ML tools (risk stratification, propensity models, decision support) 
    • Patient-facing AI (chatbots, intake tools, symptom navigation, messaging assistants) 

    The ONC’s definition of predictive tools is helpful for specificity: many things that staff informally call “analytics” may function like predictive decision support. 

    2. Make data rules unmistakable 

    If your policy requires interpretation, it will be ignored. Spell out: 

    • What cannot be entered into consumer tools (PHI, patient identifiers, staff HR details, sensitive contract terms, anything you wouldn’t post publicly). 
    • What can be entered (de-identified scenarios, generic templates, publicly available education content). 
    • What “de-identified” means in practice (e.g., “I removed the name” is not de-identification if the story still identifies the patient). 
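
    One way to make those data rules unambiguous is to publish them as a simple lookup that staff or an intranet form can check before anything is pasted into a tool. A hypothetical Python sketch, using the tool categories from the umbrella policy and data classes your privacy officer would define:

        # Hypothetical mapping of AI tool categories (from the umbrella policy) to
        # the data classes permitted in them; the categories, class names, and
        # entries are assumptions for illustration, not a compliance determination.
        ALLOWED_DATA = {
            "consumer_genai":   {"public_content", "generic_templates", "deidentified_scenarios"},
            "enterprise_genai": {"public_content", "generic_templates", "deidentified_scenarios",
                                 "internal_operational"},
            "embedded_ehr_ai":  {"phi"},  # governed by the EHR vendor contract and BAA
        }

        def is_allowed(tool_category: str, data_class: str) -> bool:
            """Return True only if the policy explicitly permits this combination."""
            return data_class in ALLOWED_DATA.get(tool_category, set())

        # Example: pasting PHI into a consumer chatbot should always come back False.
        assert not is_allowed("consumer_genai", "phi")

    Defaulting to an empty set for unknown categories means anything not explicitly listed is disallowed, which matches the spirit of “if it requires interpretation, it will be ignored.”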

    3. Require human review and define what “review” means 

    “Human in the loop” is too vague. Define expected behaviors: 

    • Clinicians remain responsible for final clinical content and decisions (consistent with FSMB’s emphasis on human accountability). 
    • For ambient scribing, define who must validate the note, what must be re-checked (medications, allergies, assessment/plan), and whether AI-generated text must be labeled internally for QA purposes. 
    • For patient messaging drafts, require a staff member to verify clinical appropriateness, tone, and privacy. 

    4. Figure out patient transparency 

    Practices differ on whether and how to disclose AI involvement in documentation, triage, or patient messaging, but your governance group should decide. MedPro calls out ethical standards for disclosure and informed consent, including telling patients when AI is involved in their care and for what purposes. Even if you decide that some uses (e.g., internal drafting assistance) do not require explicit patient disclosure, you should still define which uses require disclosure; what the script/language is; and how patients can ask questions or opt out (where feasible). 

    5. Build vendor requirements into policy, not just contracts 

    A strong AI governance policy informs procurement processes. Require, at minimum: 

    • Clear statements on data ownership and reuse (e.g., whether your data can be used to train models). 
    • Security controls and breach notification expectations. 
    • Update/change controls (how model updates are communicated; how you evaluate material changes). 
    • Evidence of performance, limitations, and known failure modes relevant to your specialty and patient mix. 
    • Ongoing monitoring support. 

    This aligns with the NIST generative AI profile’s emphasis on governing AI risk across the lifecycle. 
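
    For procurement consistency, those requirements can also be turned into a go/no-go checklist; a hypothetical sketch, where the question wording and the all-items-must-pass rule are assumptions rather than contract language:

        # Hypothetical vendor-review checklist derived from the bullets above.
        VENDOR_CHECKLIST = [
            "Data ownership and reuse terms are stated (including model-training reuse)",
            "Security controls and breach notification expectations are documented",
            "Model update/change notification and evaluation process is defined",
            "Performance evidence covers our specialty and patient mix, with known failure modes",
            "Ongoing monitoring support is included",
        ]

        def vendor_ready(answers: dict[str, bool]) -> bool:
            """A vendor clears review only if every checklist item is satisfied."""
            return all(answers.get(item, False) for item in VENDOR_CHECKLIST)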

    6. Give staff a clear way to report problems 

    Create an “AI issue reporting” channel that feels as normal as reporting a safety event: 

    • Hallucinated content in a note, 
    • Inaccurate patient instruction draft, 
    • Suspected bias in risk scoring, 
    • Privacy concerns about data entry or output exposure, 
    • Workflows where staff are bypassing safeguards. 
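
    If those reports route through a ticketing or incident system, a hypothetical report record might capture just enough to triage and trend issues; the category names mirror the list above, and everything else is an assumption:

        # Hypothetical AI issue report for a ticketing or incident-reporting
        # workflow; categories mirror the bulleted list above.
        from dataclasses import dataclass, field
        from datetime import datetime
        from enum import Enum

        class AIIssueCategory(Enum):
            HALLUCINATED_NOTE_CONTENT = "hallucinated content in a note"
            INACCURATE_PATIENT_INSTRUCTIONS = "inaccurate patient instruction draft"
            SUSPECTED_BIAS = "suspected bias in risk scoring"
            PRIVACY_CONCERN = "privacy concern about data entry or output exposure"
            SAFEGUARD_BYPASS = "workflow bypassing safeguards"

        @dataclass
        class AIIssueReport:
            tool_name: str
            category: AIIssueCategory
            description: str              # what happened, in the reporter's words
            reached_patient_or_chart: bool
            reported_by: str
            reported_at: datetime = field(default_factory=datetime.now)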

    Written By

    Chris Harrop

    Chris Harrop is Senior Editor on MGMA's Training and Development team, leading Strategy, Growth & Governance content and helping turn data complexity into practical advice for medical group leaders. He previously led MGMA's publications as Senior Editorial Manager, managing MGMA Connection magazine, the MGMA Insights newsletter, MGMA Stat, and MGMA summary data reports. Before joining MGMA, he was a journalist and newsroom leader at several Denver-area news organizations.

