What will medicine look like in the era of artificial intelligence?

Nature featured an article recently, weighing in on what the future of medicine might look like now that artificial intelligence is taking off. Before discussing this article, I would like to review where things stand with artificial intelligence and machine learning. The development most significant to the public, and the one that has garnered the most media coverage, has got to be generative chat A.I. It's hard to believe that back in November 2022, ChatGPT had no users; within a week of its launch, it had a million.
(Source: https://research.aimultiple.com/chatgpt/)

Since then, a plenitude of other models have made use of the transformer architecture to develop similar natural language processing (NLP) interfaces, and many people now use chat software routinely in their daily lives. Advances in machine learning and data science have been less visible; they have been applied mainly in the corporate world, to give companies a better handle on their business operations and planning. Computer vision advances have enabled things like autonomous vehicles, but even in non-autonomous cars, computer vision apps display speed limits to drivers, along with warnings about school zones and other road events. So I was eager to read about how the world of medicine has benefited from all this, especially for the practicing clinician.

The trees of artificial intelligence and machine learning have borne fruit mainly in the fields of computer vision (specifically, imaging assistance for the radiologist) and sensor technology (interpretation of rhythms, heart sounds, and hemodynamic data).
(Source: https://healthexec.com/topics/artificial-intelligence/fda-has-now-cleared-more-500-healthcare-ai-algorithms)

Other areas where deep learning has made inroads are clinical decision support and the interpretation of sensor data generated by neurologic and hematologic lab devices. Natural language processing has had little visible impact so far. It has the potential to turn unstructured physician notes, with all their jargon, ungrammatical shorthand, and barely-readable handwriting, into searchable and indexable vector databases for analysis and retrieval. This could unlock a large corpus of data previously resistant to easy parsing. Doctors could use NLP apps to transcribe their notes more accurately, right into the EHR, without having to dictate them for transcription later. An NLP-driven chat interface could help them retrieve information, not only from the Internet, but from their own notes and their own private libraries of PDF articles saved on their computers.
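The idea of indexing free-text notes for semantic retrieval can be sketched very roughly. Everything below is a toy stand-in: the note snippets are invented, the "embedding" is just a bag-of-words vector, and the index is a plain list. A real system would use transformer-based embeddings and a dedicated vector database, but the retrieval loop has the same shape:

```python
# Toy sketch: turning free-text clinical notes into vectors for similarity
# search. Bag-of-words cosine similarity stands in for real embeddings.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Very rough 'embedding': a term-frequency vector over lowercase words."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented, jargon-heavy note snippets of the kind NLP would have to parse.
notes = [
    "pt c/o chest pain radiating to left arm, troponin pending",
    "follow up visit for type 2 diabetes, a1c improved",
    "chest xray clear, cough resolving, no fever",
]
index = [(note, embed(note)) for note in notes]

def search(query: str, top_k: int = 1) -> list[str]:
    """Return the top_k notes most similar to the query."""
    qv = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(qv, pair[1]), reverse=True)
    return [note for note, _ in ranked[:top_k]]

print(search("chest pain"))
```

The point of the sketch is only that once notes live in a vector index, a clinician's free-text question can be matched against them directly, without anyone having structured the notes first.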

So what did the Nature reporter have to say? Once again, the dominant inroad has been in radiology and the interpretation of imaging. Interestingly, some radiologists have soured on A.I. assistance, claiming that it did not really help them and sometimes cost them more time. In their view, there were too few cases where the algorithm detected a significant finding that the human radiologist had missed. We also still aren't clear on the consequences of an incorrect A.I.-generated diagnosis. Who gets the blame? How does a radiologist defend his or her opinion after incorrectly deferring to the algorithm?

The article glosses over other imaging applications, then moves on to eye (retinal) imaging, which is essentially the same concept: using machine learning to interpret findings on ophthalmologic images.

I would like to believe that there is more to the future of medicine that can be enhanced by artificial intelligence than just help with the interpretation of medical imaging. Hospitals are using it to help with billing and logistics. Insurance companies are using data science techniques to examine practice patterns and identify cost-saving interventions. But efforts to help the poor clinician do his or her work more efficiently and more quickly are still lagging.

The chat advances that could provide natural, quick retrieval over large datasets still elude the clinician. The main reasons are that the interfaces do not yet have the precision and recall needed for mission-critical situations, and chat models still hallucinate. Such drawbacks could lead to a wrong and harmful clinical decision. But I feel that much of the needed information also resides behind proprietary digital walls, such as journal paywalls and even the National Library of Medicine itself. Having an NLP interface, as Bing has with its search engine, would be interesting to experiment with, but one doesn't want to take a legal risk with wrong or incomplete searches in the medical world. With each medical query, physicians still need to peruse a collection of articles to determine whether a clinical study applies to their patient, by scanning the study methodology and patient population, as well as the statistics and limitations. At this time, no chat model will produce all of this information.

Another greatly-needed application is clinical trial search. It is truly heartbreaking that patients need special skills and contacts to locate appropriate and potentially life-saving clinical trials. This is something that might benefit from a combination of vector databases and natural language processing to make trial searching easier and faster. These are some of the areas where I had hoped to see breakthroughs.
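As a very rough illustration of what easier trial matching could look like, here is a toy sketch. The trial records, IDs, and field names are entirely invented, and the scoring is naive keyword overlap; a real system would query a registry such as ClinicalTrials.gov and rank with learned embeddings rather than word counts. The shape of the task, matching a patient's free-text summary against trial conditions and eligibility criteria, is the same either way:

```python
# Hypothetical sketch of clinical trial matching by keyword overlap.
# Trial data below is invented for illustration only.
import re

trials = [
    {"id": "T-001", "condition": "metastatic breast cancer",
     "criteria": "her2 positive, prior chemotherapy allowed"},
    {"id": "T-002", "condition": "type 2 diabetes",
     "criteria": "a1c above 8, no insulin use"},
]

def words(text: str) -> set[str]:
    """Lowercase alphanumeric tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def match_trials(patient_summary: str) -> list[str]:
    """Rank trial IDs by how many words they share with the patient summary."""
    patient_words = words(patient_summary)
    scored = []
    for t in trials:
        overlap = len(patient_words & words(t["condition"] + " " + t["criteria"]))
        if overlap:
            scored.append((overlap, t["id"]))
    return [tid for _, tid in sorted(scored, reverse=True)]

print(match_trials("her2 positive metastatic breast cancer"))
```

Even this crude matching shows why the combination matters: the patient's side of the query is free text, so some NLP layer has to bridge it to the structured eligibility criteria on the registry's side.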