AI is here to help, not replace – Integrated Practitioner


Written by Erin Yeh

As artificial intelligence (AI) continues to enter clinical fields, there is understandable concern about AI overtaking the role and skills of the physician. But as patients use AI themselves and bring its outputs to their appointments, healthcare professionals need to adapt and develop the skills to use these new tools.

At last month’s Integrative Health Symposium, a panel discussed how AI is redefining core competencies and how it can enhance both the patient and physician experience. Tom Blue, Co-Founder of OvationLab and Senior Vice President of Healthcare at AndHealth; Sonja Schweig, MD, founder and president of the California Clinic for Functional Medicine; and Lexi Gonzalez, ND, MS, IFMCP, senior clinical implementation specialist and AI specialist at OvationLab, discussed how AI can be used in practical ways to improve workflow efficiency, gather meaningful insights, and develop personalized, accurate care plans for patients.

“(Patients) want a doctor who uses AI,” Blue said. “However, if the doctor is only using AI, it is a cheap experience.” What lies in between is the skill set that can take technology and turn it into something of value for both the patient and the doctor.

Three basic transformations

“I think one of the biggest misconceptions is that it’s difficult or somewhat scary” to use artificial intelligence, Schweig said. He noted that this misconception is rooted in the “tsunami of information” surrounding AI, which can make it difficult and intimidating to keep up. The panel covered three key shifts that could help doctors “surf the waves” and develop the skills they need for AI.

  1. Friction to flow
    Identifying friction in practice is key to developing a smoother workflow. This process, called “friction mapping,” is a “thinking tool”: the doctor walks through a workflow, the factors that go into it, and how long each part takes to complete. For example, preparing for a visit requires all kinds of preparatory work: questions, labs, notes, and so on. The doctor considers how long it takes to process that data and translate it into notes. This is a common point of friction for many doctors, as it can be a time-consuming and tedious process. As the doctor goes about the workday, he can identify where those impediments sit in the workflow. Once these checkpoints are factored in, the doctor begins to develop an “intuitive eye” for sorting things out, according to Schweig.

    “I think it’s a good exercise to just think about…your clinical day and start to see where those friction areas are,” Gonzalez added.
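The friction-mapping exercise can be sketched in a few lines of code. This is a minimal illustration, not anything the panel showed; the step names and time estimates are made up.

```python
# A minimal sketch of "friction mapping": list each step of a clinical
# workflow with a rough time estimate, then sort to surface the biggest
# points of friction. Step names and minutes are illustrative only.
visit_prep_minutes = {
    "review intake questionnaire": 10,
    "pull and read prior labs": 20,
    "translate data into chart notes": 35,
    "draft visit agenda": 5,
}

# Rank steps from most to least time-consuming.
ranked = sorted(visit_prep_minutes.items(), key=lambda kv: kv[1], reverse=True)
for step, minutes in ranked:
    print(f"{minutes:3d} min  {step}")
```

Even this crude ranking makes the biggest impediment (here, translating data into chart notes) obvious at a glance, which is where an AI tool is most likely to pay off.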

  2. Data to insight
    Gonzalez mentioned “pajama time,” the hours doctors spend typing chart notes on their computers outside of work. This is where artificial intelligence can step in by organizing ideas and keeping a record of the work completed. Doctors can de-identify the data, then use a template or prompt to automatically generate a complete summary of the patient’s medical history, tests performed, and treatments provided, all compiled into a condensed, concise report that is then shared with the patient.

    Schweig also pointed out the gap between consumer AI and medical AI. Consumer platforms, including ChatGPT and Claude, are not HIPAA compliant. It’s important to keep this in mind when choosing which platform to use.

    It’s not just brief reports and pajama-time notes. Schweig described an at-home sleep study that produced five different reports from participants. Using a custom GPT, they de-identified the data, uploaded each PDF, and had the model compare the files, put the values into a table that highlighted anything abnormal, showed trends over time, and provided some analysis. All of this information was then charted and shared with the patient.

    Insight amplification also applies to patients, according to Gonzalez. “They interact with ChatGPT… (and) try to validate our recommendations based on those extracted from ChatGPT.”
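The “highlight anything abnormal” step from Schweig’s sleep-study example can be sketched simply, assuming the values have already been extracted and de-identified. The metric names and reference ranges below are invented for illustration and are not clinical reference values.

```python
# Sketch of flagging abnormal values for a report table: compare
# extracted, de-identified sleep-study metrics against reference
# ranges. Metrics and ranges are illustrative, not clinical guidance.
REFERENCE_RANGES = {
    "sleep_efficiency_pct": (85.0, 100.0),
    "ahi_events_per_hr": (0.0, 5.0),   # apnea-hypopnea index
    "min_spo2_pct": (90.0, 100.0),
}

def flag_abnormal(results):
    """Return (metric, value, status) rows for a simple report table."""
    rows = []
    for metric, value in results.items():
        low, high = REFERENCE_RANGES[metric]
        status = "normal" if low <= value <= high else "ABNORMAL"
        rows.append((metric, value, status))
    return rows

night_1 = {"sleep_efficiency_pct": 78.0,
           "ahi_events_per_hr": 3.2,
           "min_spo2_pct": 91.0}
for metric, value, status in flag_abnormal(night_1):
    print(f"{metric:22} {value:6.1f}  {status}")
```

A deterministic check like this is also a useful cross-check on whatever table the model produces, since the flagging logic itself leaves nothing to a probabilistic system.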

  3. Generic to personalized
    AI tools can also be used to hyper-personalize care plans. Gonzalez recalled a patient who had signs of cardiovascular disease and diabetes. She recommended the Mediterranean diet as a starting point. However, the patient worked as a long-distance truck driver and was unable to cook and obtain fresh food due to being on the road for long periods of time.

    By uploading the Mediterranean food plan to ChatGPT, Gonzalez had the AI model generate a list of foods that could be purchased at gas stations and fast-food restaurants. She created a cheat sheet the patient could refer to to find foods compatible with his diet at the stores and restaurants along his route.

    “It offered an element of trust because I wasn’t saying no to him for saying no to my nutrition plan. I was meeting him where he was at,” Gonzalez said. She described it as a rewarding experience that added great value to their relationship and empowered the patient by giving him a plan he could actually follow.
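A workflow like Gonzalez’s cheat sheet can be reduced to a reusable prompt builder. This is a sketch under assumptions: the function name, plan text, and constraint wording are invented here, and the resulting prompt would still be sent to a model such as ChatGPT.

```python
# Sketch of building a constraint-aware prompt: combine a base food
# plan with a patient's real-world constraints. The plan text and
# constraints below are illustrative stand-ins.
def build_cheat_sheet_prompt(plan, constraints):
    """Assemble a prompt asking for a plan-compatible, constraint-aware food list."""
    bullet_list = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Here is a food plan:\n{plan}\n\n"
        f"The patient has these constraints:\n{bullet_list}\n\n"
        "Generate a one-page cheat sheet of plan-compatible foods the "
        "patient can actually buy under these constraints."
    )

prompt = build_cheat_sheet_prompt(
    "Mediterranean diet: vegetables, whole grains, fish, olive oil...",
    ["long-haul truck driver", "no cooking facilities",
     "shops at gas stations and fast-food restaurants"],
)
print(prompt)
```

Keeping the constraints as explicit bullets makes it easy to reuse the same template for the next patient whose life does not fit the standard plan.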

Beware of over-reliance

Completely trusting what AI generates is never a good idea. Models are still prone to hallucinations and require a human eye to catch errors. Additionally, Gonzalez noted that AI can be “very convincing” and will offer evidence to support its answers. In reality, the model pulls out keywords and strings them together to produce an answer that merely looks convincing and accurate.

“I rarely accept the first response from the AI,” Gonzalez said. She encouraged pushing the model for a better answer, correcting any errors, and then entering a second prompt to see whether a more accurate answer comes back.
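Gonzalez’s habit of never accepting the first response can be framed as a simple loop: ask, then challenge the model to verify and correct itself. The sketch below is one assumed way to structure that; `ask_model` is a placeholder stub, not a real API.

```python
# Sketch of a "don't accept the first answer" loop: ask once, then send
# a follow-up that pushes the model to verify and correct itself.
# ask_model is a stand-in; a real version would call a chat API.
def ask_model(history):
    # Placeholder: echo the latest prompt instead of calling an LLM.
    return f"(model reply to: {history[-1]!r})"

VERIFY_PROMPT = ("Double-check your answer above. Correct any errors and "
                 "cite the specific evidence for each claim.")

def refine(question, rounds=2):
    """Ask a question, then push back (rounds - 1) times."""
    history = [question]
    answer = ask_model(history)
    for _ in range(rounds - 1):
        history += [answer, VERIFY_PROMPT]
        answer = ask_model(history)
    return answer

print(refine("Does this supplement interact with warfarin?"))
```

The point of the structure is that the challenge prompt is fixed and always sent, so skepticism is built into the workflow rather than left to memory.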

“The way the AI expresses things sounds really nice, but a lot of times, when you sit down and try to figure out what it’s saying, it doesn’t make sense,” she added.

Schweig also recounted a conversation he had with an OpenAI employee at the J.P. Morgan Healthcare Conference in January. Schweig told the employee that his recent experience using ChatGPT was “worse” than before. The employee confirmed the company is aware of the issue: while ChatGPT has gotten better at complex programming and mathematics, it has gotten worse at simple, everyday responses. The OpenAI team is working to investigate these issues.

Schweig explained that one reason AI models have these problems is that they are probabilistic rather than deterministic: they predict likely word sequences based on patterns in text from the internet. “To be honest, the people who build these models don’t quite understand it. They’re also surprised.”

The panel stressed vigilance when using AI tools because of how common hallucinations are. They acknowledged that hallucinations are easy to overlook given how much attention multitasking consumes, and stressed the importance of slowing down and carefully reading the output the AI generates. Schweig also mentioned asking administrative staff to read the output and catch errors.

Clinical triad

Using AI in the clinic can feel stressful and intimidating, but Schweig stressed that the tools are intuitive and require little practice. “Just try a thing or two,” Schweig said. “And your mind will adapt to it.” Modern healthcare practice is no longer just doctor and patient. It is doctor, patient, and artificial intelligence: the clinical triad.

Multitasking is a skill set doctors already possess. During a patient visit, a physician may have an electronic health record open for notes, an AI transcription tool running, and AI tools such as ChatGPT or Claude open. A patient might ask about something, and the doctor can pose that question to the AI model and ask it to conduct research during the visit. By the end of the visit, the doctor can complete a chart note and produce an action plan and research summary ready for review. This is called parallel thought processing: doing multiple tasks at once instead of saving them all for after the visit.

The key is to keep one’s awareness and attention from fragmenting while doing all of this at once. As Gonzalez says, it takes some practice. But doctors already have the basic skills needed to adapt to AI and use it to its full potential – and in it they have a capable assistant that can ease some of the burden and allow more focus on the patient.


