    MGMA Stat
    Chris Harrop

    Almost every strategy and ongoing challenge in healthcare administration discussed at the 2023 Leaders Conference in Nashville this week came back to a single question: Is this something that artificial intelligence (AI) can or will revolutionize?

    Similar conversations are happening among physicians today: Bernard S. Chang, MD, MMSc, dean for medical education at Harvard Medical School, recently spoke to JAMA about how AI could accelerate medical education. A new article from DHI outlines several new uses of ChatGPT for patient education tasks, such as answering questions about gastrointestinal diseases, myopia and other health conditions.

    MGMA Stat poll results - Oct. 24, 2023 - 80% of medical group leaders say use of AI will become an essential skill.

     

    To that end, an Oct. 24, 2023, MGMA Stat poll asked healthcare leaders whether the use of AI will become an essential skill for excelling in the profession: 83% of respondents said that using AI will become an essential skill (80%) or that it already is one (3%), compared to 17% who responded “no.” The poll had 494 responses.

    Thankfully, the just-concluded conference was loaded with preeminent thinkers and frontline experts showing where the opportunities and limits of AI in healthcare stand today and in the near future.

    Scott Cullen, MD, chief clinical officer of AVIA Healthcare, delivered a mainstage keynote to attendees on Monday and separately took questions from members of the CEO Summit in Nashville on why administrative leaders should be at the forefront of the next chapter of AI’s expansion in healthcare.

    “Lots of folks who are nonclinical seem to think that the most important thing we can do with AI is fine-tune and improve on clinical decision-making, but the real opportunity … at the front end is to use generative AI to improve our processes to the point where we can increase access,” Cullen said. 

    Cullen said the expanded use of GPT-4 — vastly more powerful than the GPT-3.5 version that many people use, as one might expect (see Moore’s law) — will be “really remarkable” compared to what individuals have seen from AI in the past 20 years, even beyond the recent major advances in radiology and imaging.

    “What might have taken three to five years five years ago may take less time than that now, but the wildcard is what we do with it,” Cullen said. “All the technology in the world won't have any impact until we've transformed the people and the processes, as well. As a former big tech consultant, our mantra was always, ‘people, processes and technology,’ in that order. I think the rate-limiting step now is not the technology — it's us." 

    As petabytes of data are used for training new predictive models, Cullen expects the healthcare applications to grow; for example: “the potential for having a digital twin of your institution or your environment, and then digital twins of individual patients,” Cullen said. “By digital twins, what I mean is, all the data from the clinical perspective, from the socioeconomic perspective, from the physical perspective — everything that's occurring in that environment at any given time. A model like that could then begin to start drawing inferences about the interactions between the individual patients and the environment, so imagine the power of that.” 
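    To make the “digital twin” idea more concrete, here is a minimal sketch (my illustration, not anything described by Cullen or AVIA) of how a patient-level twin and an institution-level twin might be represented so a model can reason over clinical, socioeconomic and environmental signals together; all field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PatientDigitalTwin:
    """Hypothetical sketch: one patient's clinical, socioeconomic and environmental signals."""
    patient_id: str
    clinical: Dict[str, float] = field(default_factory=dict)      # e.g., labs, vitals
    socioeconomic: Dict[str, str] = field(default_factory=dict)   # e.g., housing, transportation
    environment: Dict[str, float] = field(default_factory=dict)   # e.g., unit census, wait times

@dataclass
class FacilityDigitalTwin:
    """Hypothetical sketch: an institution-level twin holding its patient twins."""
    facility_id: str
    patients: List[PatientDigitalTwin] = field(default_factory=list)

    def patients_in_crowded_units(self, census_threshold: float) -> List[str]:
        """Toy 'inference' about patient/environment interaction: flag patients
        whose current unit census exceeds a threshold."""
        return [
            p.patient_id
            for p in self.patients
            if p.environment.get("unit_census", 0.0) > census_threshold
        ]

# Example usage with made-up values
twin = FacilityDigitalTwin(
    facility_id="hospital-A",
    patients=[PatientDigitalTwin("P1", environment={"unit_census": 0.95})],
)
print(twin.patients_in_crowded_units(census_threshold=0.90))  # ['P1']
```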

    Cullen pointed to the power of predictive modeling for patient throughput in hospitals — for clinical staffing needs, environmental services efficiency and more — as something that could significantly reduce the need for time-sensitive, people-sensitive huddles. "If we only improved our decision-making and our efficiency by 50%, we would still be way ahead of where we are, meeting at 8 in the morning to guess about what's going to come in the door and what's going to go out.”
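    As a hedged sketch of what replacing the 8 a.m. guess with a model could look like at its very simplest (an assumption on my part; real throughput models use far richer inputs such as ED arrivals, acuity mix and seasonality), even a naive baseline forecast beats guessing:

```python
import numpy as np

def forecast_next_day_admissions(daily_admissions: list[float], window: int = 7) -> float:
    """Naive baseline: average of the most recent `window` days of admissions."""
    history = np.asarray(daily_admissions, dtype=float)
    return float(history[-window:].mean())

# Two weeks of made-up admission counts
recent = [42, 38, 51, 47, 44, 30, 28, 45, 41, 49, 52, 46, 33, 31]
print(f"Forecast for tomorrow: {forecast_next_day_admissions(recent):.1f} admissions")
```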

    Even as so many healthcare leaders look to AI to help them automate and solve problems that have been exacerbated by staffing shortages, it’s not easy to hire the kinds of data scientists who can build AI models from the ground up. “It’s really hard to find data scientists, even if you’re OpenAI, Microsoft or any of the big consulting firms,” Cullen said. “We’re at a disadvantage in the healthcare provider space because it’s not been our area of expertise, and it’s not something that we really understand well enough yet to know exactly what we need, under what circumstances.”

    Even more intriguing for healthcare is what lies beyond predictive modeling: generative AI (such as ChatGPT), though Cullen noted that many of the large language models (LLMs) that power these tools are still prone to hallucination (nonsensical or inaccurate outputs, such as incorrect citations). "The problem of hallucination is going to still require that there's a human in the loop for quite a long time,” Cullen added, but generative AI still holds fantastic potential for provider organizations to automate certain tasks and reduce the staff resources they require.
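    Operationally, “a human in the loop” can be as simple as routing generated drafts into a review queue rather than sending them directly. The sketch below is a hypothetical illustration of that pattern, not a description of any specific product; the confidence score and threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """A generated draft (e.g., a patient message or an appeal letter) awaiting disposition."""
    text: str
    model_confidence: float  # hypothetical score supplied by the generating system

def route_draft(draft: Draft, auto_send_threshold: float = 0.99) -> str:
    """Keep a human in the loop: anything below a very high confidence bar
    goes to a reviewer; a stricter policy could route everything to review."""
    if draft.model_confidence >= auto_send_threshold:
        return "auto-send"
    return "queue for human review"

print(route_draft(Draft("Your colonoscopy prep instructions are ...", 0.82)))
# -> queue for human review
```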

    At the same time, payers and providers are somewhere between a “healthy friction” and “an arms race” in their own applications of AI. On the payer side, there is plenty of data to train AI models to “look at the clinical data and adjudicate the validity of coding,” Cullen said, but more medical groups are starting to match them with tools that determine which claims to prioritize based on predictive modeling of denials that could likely be appealed successfully. The use of AI algorithms in claims review already spurred one notable lawsuit earlier this summer.
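    To illustrate the provider-side idea, a work queue might rank denied claims by expected recoverable dollars (the modeled probability of a successful appeal multiplied by the claim amount). The sketch below uses made-up numbers and a hard-coded probability where a trained model's output would go.

```python
from dataclasses import dataclass

@dataclass
class DeniedClaim:
    claim_id: str
    amount: float               # billed dollars at stake
    appeal_success_prob: float  # would come from a trained model; hard-coded here

def prioritize_appeals(claims: list[DeniedClaim]) -> list[DeniedClaim]:
    """Rank denials by expected recoverable dollars (probability x amount)."""
    return sorted(claims, key=lambda c: c.appeal_success_prob * c.amount, reverse=True)

queue = prioritize_appeals([
    DeniedClaim("A-100", 1200.0, 0.30),  # expected value 360.00
    DeniedClaim("A-101", 450.0, 0.85),   # expected value 382.50
    DeniedClaim("A-102", 9800.0, 0.10),  # expected value 980.00
])
print([c.claim_id for c in queue])  # ['A-102', 'A-101', 'A-100']
```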

    Amid all this talk about what AI in healthcare can do, CEO Summit attendees and Cullen also discussed the underlying work to ensure that the data collected from patients can legally and appropriately be fed into AI models without patients feeling as though their rights or privacy have been infringed.

    "There's a lot of discussion about this data rights piece,” Cullen said of the question of patients opting in or out of use of data, which often is still being addressed by having patients sign blanket waivers.  Soon, it’s possible to see more patients decide to be protective of their data rights and actively choose to opt out.  

    "Some systems are asking patients already,” Cullen said. “They're saying, ‘we have tools that involve artificial intelligence that we use to do a better job of making decisions about your care; if you don't want us to use the tools, would you like to opt out?’ But that's not necessarily the standard yet." 

    Beyond that, there’s the question of how machine learning (ML) model training now relies on “synthetic data” created through statistical modeling of real data to produce new values based on the original’s properties. “All the downstream secondary product of that real data, the chain of ownership becomes murkier and murkier as you go down that road,” Cullen cautioned. The attendees also explored how the realm of liability insurance may continue to evolve as carriers expand offerings to include (or exclude) coverage for burgeoning use (or misuse) of AI.
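    A toy illustration of what “statistical modeling of real data to produce new values based on the original’s properties” can mean in practice (real synthetic-data tools preserve far more structure and add formal privacy safeguards; the numbers below are made up):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Made-up "real" data: inpatient lengths of stay in days
real_los = np.array([2.0, 3.5, 1.0, 4.0, 2.5, 6.0, 3.0, 2.0])

# Fit a simple statistical model of the original data (here, just mean and std)...
mu, sigma = real_los.mean(), real_los.std(ddof=1)

# ...then sample new, synthetic values that mimic the original's properties.
synthetic_los = rng.normal(mu, sigma, size=100).clip(min=0.25)

print(f"real mean={mu:.2f}  synthetic mean={synthetic_los.mean():.2f}")
```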

    Cullen finished his CEO Summit appearance with a succinct answer to how much medical group leaders need to understand about AI to be successful in the profession going forward: “We know that this is a field in which it's not going to be possible for us as leaders to necessarily understand every single minute detail of how this technology works,” Cullen said. “But we do need to understand the implications, and we need to have trusted allies and advisers in this.”


    Written By

    Chris Harrop

    A veteran journalist, Chris Harrop serves as managing editor of MGMA Connection magazine, MGMA Insights newsletter, MGMA Stat and several other publications across MGMA.

