By Jane Ehrhardt
“That’s one of the things that most people don’t recognize—AI is nothing more than a prediction machine,” says Donald Monistere, CEO of General Informatics, who created his own AI personal assistant.
The foundation of AI lies in its algorithms, which are essentially rules or instructions for solving a specific problem or performing a specific task. When joined together with other systems, artificial intelligence expands into decision-making, problem-solving, and natural language processing. “Algorithms can be very complex,” Monistere said. “But they’re built for something specific, whereas AI can adapt and evolve based on the data that you’re putting in it.”
Smart algorithms currently enable AIs to analyze diagnostic videos and images, including X-rays and MRI scans, for abnormalities. “There is already information suggesting skin cancer can be recognized by taking a picture on your phone and having an artificial intelligence model look at it. They’re saying it’s starting to get as good as, if not better than, your dermatologist,” Monistere said.
That high level of precision has led some radiologists to let AI perform the initial read on every scan. If the AI finds nothing abnormal, the image never sees human eyes; the rest are passed on to a radiologist for review. “Because we have a radiologist shortage, imagine if we can let artificial intelligence rule out the known negatives, or for that matter known positives,” Monistere said.
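As a rough, hypothetical sketch of the triage pattern Monistere describes, the Python snippet below shows how scans a model clears with high confidence could skip straight to the archive while everything else is queued for a radiologist. The model output, the Finding structure, and the threshold value are all illustrative assumptions, not any vendor’s actual product or API.

```python
# Hypothetical triage sketch: the Finding structure and threshold are
# stand-ins for illustration only, not a real radiology product or API.
from dataclasses import dataclass

@dataclass
class Finding:
    scan_id: str
    abnormal_probability: float  # model's estimate that the image is abnormal

RULE_OUT_THRESHOLD = 0.02  # below this, the scan is treated as a known negative

def triage(findings: list[Finding]) -> tuple[list[str], list[str]]:
    """Split scans into auto-cleared negatives and cases for radiologist review."""
    cleared, needs_review = [], []
    for f in findings:
        if f.abnormal_probability < RULE_OUT_THRESHOLD:
            cleared.append(f.scan_id)        # never sees human eyes
        else:
            needs_review.append(f.scan_id)   # passed on to a radiologist
    return cleared, needs_review
```

In practice the threshold would have to be validated against clinical outcomes, since it decides which studies are never reviewed by a human.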
Remote access to AI and its faster diagnoses could make a notable difference in patient care, especially in rural areas. “You’re going to see those areas have access to specialists that they may not otherwise have had, because AI is taking over the busy work,” Monistere said.
With the right algorithms, artificial intelligence moves beyond diagnosis to identifying risk factors, predicting the progression of an illness, and presenting tailored treatment plans based on a sizable pool of patient data, lab results, and clinical outcomes.
That versatility and rapid assessment of vast amounts of data make AI all the more alluring to healthcare. A Stanford study found 18 percent of healthcare providers say they already use open source AI for education and learning about specific ailments or diseases. Almost 80 percent reported using AI for patient care. “And that’s a little scary to me,” Monistere said. “Even using the latest large language models such as ChatGPT 4.0, I would estimate a 20 percent chance of the information being inaccurate or not sourced appropriately. And there are no standards in healthcare today for using AI in patient care. Nobody’s training the doctors or nurses on how to fact-check the information that they might be getting out of an LLM.”
The good news about OpenAI’s models is that they can show their sources, allowing providers to verify the origin of the information. “I’ve seen the 3.0 version of ChatGPT reference a source that didn’t exist,” Monistere said. “Fortunately, 4.0 does that a lot less.
“While ChatGPT and other large language models are prediction machines, pulling together and then presenting information, they are close to reasoning. Right now, they take all of the information they have and presume to know what answer you want, as opposed to what the answer actually is.”
Providers should check the source of any data from AI, and then verify that source. “Does the article exist, who wrote it, and are they reputable in that field?” Monistere said. “A lot of people are checking multiple large language models. Ask the same question of several LLMs, like ChatGPT and Llama, and if the answers are consistent among all of them, then the chances of that data being complete are good.”
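A minimal sketch of that cross-checking habit might look like the following. The ask() function is a placeholder for however each model is actually queried, whether through a web interface, an API, or a local install, and the crude string comparison stands in for a human judging whether the answers truly agree.

```python
# Hypothetical cross-check sketch: ask() is a placeholder, not a real API call,
# and the normalization/comparison is deliberately simple; a clinician should
# still read the answers and judge agreement and sourcing for themselves.
from collections import Counter

def ask(model_name: str, question: str) -> str:
    """Stand-in for querying a specific LLM; replace with a real query method."""
    raise NotImplementedError

def cross_check(question: str, models: list[str]) -> bool:
    """Return True only if every model gives the same (normalized) answer."""
    answers = [ask(m, question).strip().lower() for m in models]
    top_answer, agreement = Counter(answers).most_common(1)[0]
    print(f"{agreement} of {len(models)} models agree on: {top_answer!r}")
    return agreement == len(models)

# Example (hypothetical): cross_check("First-line treatment for condition X?",
#                                     ["ChatGPT", "Llama"])
```

Agreement across models raises confidence but does not replace checking the underlying sources, since multiple models can repeat the same popular error.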
Monistere also recommends searching for the source online; if it ties back to the topic, the data should be sound. “I’m very careful to know what the source’s expertise is,” he said. “Because you don’t have to be qualified to write a blog.”