Could a ChatGPT-type app be the indispensable physician's assistant?

In late December 2022, Google reportedly raised a Code Red alarm regarding the successes seen with the ChatGPT app developed by OpenAI.  

ChatGPT was tested against Google in fielding various queries.  The results were surprisingly good; the weaknesses were primarily in retrieving images, videos, and tweets.  But when it was accurate, the testers were satisfied with getting relevant answers that provided the information they were looking for, instead of a list of links that they had to investigate themselves.  CNBC also found ChatGPT to be superior to Google.  These are just a few signs that Google's business model might fall apart very soon, considering the velocity at which technology like this develops. 

It has been noted that ChatGPT seems to have a better understanding of the semantics of a query than the Google search engine.  Semantics is one of the challenges of retrieving accurate and relevant medical information, where there are different ways of referring to medical conditions, different abbreviations, and so on.  How many times have I had to manually sort through a list of search results for "small cell lung cancer," weeding out "non-small cell lung cancer" results, because the search engine couldn't understand the query?  Even PubMed, with its MeSH term system, still fails to distinguish between the two. 
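The "small cell" versus "non-small cell" confusion is a classic keyword-matching failure: the first phrase is a literal substring of the second, so any search engine doing naive text matching returns both.  A minimal sketch of the problem, using made-up document titles and a crude regex workaround:

```python
import re

# Hypothetical document titles illustrating the problem.
docs = [
    "Chemotherapy outcomes in small cell lung cancer",
    "Immunotherapy for non small cell lung cancer",
    "Targeted therapy in non-small cell lung cancer patients",
]

query = "small cell lung cancer"

# Naive substring search: every NSCLC title also contains the
# SCLC phrase, so all three documents come back.
naive_hits = [d for d in docs if query in d.lower()]

# A crude fix: reject matches preceded by "non " or "non-".
# This only patches one known collision; a semantic search
# engine would not need such hand-written exclusions.
pattern = re.compile(r"(?<!non )(?<!non-)small cell lung cancer")
filtered_hits = [d for d in docs if pattern.search(d.lower())]
```

The point of the sketch is that keyword systems need brittle, hand-maintained exclusion rules for every such collision, while a model that understands the query's meaning would separate the two diseases on its own.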

Andrew Ng of DeepLearning.AI points out that search based on querying Large Language Models (LLMs) is, at this time, limited by memory requirements.  The amount of memory needed to store the billions of parameters the model needs to perform could be as much as 800 GB, which is far greater than something like Wikipedia, which takes up only about 150 GB.  (You can even download Wikipedia if you want to.)  But the body of available medical and scientific knowledge is likely to be far larger.  At this time, it would be too much to ask ChatGPT to be the dream medical assistant.  It also has a bad habit of hallucinating knowledge it does not have, and doesn't seem to be aware that it is doing so.  Doing this in a medical context would be devastating.
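The memory figure follows from simple arithmetic: parameter count times bytes per parameter.  A back-of-envelope sketch, assuming a 175-billion-parameter model (GPT-3 scale) as a stand-in for ChatGPT's underlying model:

```python
# Rough memory footprint of model weights alone (no activations,
# optimizer state, or serving overhead), for an assumed
# 175-billion-parameter model.
params = 175e9

# Bytes per parameter at common numeric precisions.
bytes_per_param = {"fp32": 4, "fp16": 2, "int8": 1}

for dtype, nbytes in bytes_per_param.items():
    gb = params * nbytes / 1e9
    print(f"{dtype}: {gb:.0f} GB")
```

At full 32-bit precision the weights alone come to roughly 700 GB, which is in the ballpark of the 800 GB figure above once serving overhead is included; lower-precision formats shrink this, but the footprint still dwarfs the ~150 GB of a Wikipedia dump.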

Still, I will be watching this space closely, and hope to get involved in a project that will spur this along.  

Update:
Some are already impressed with the potential of OpenAI's models in the diagnostician space.  So am I, although this is just one example, and the disturbing tendency of ChatGPT to confabulate is a legitimate concern.  Efforts are already underway to remedy this.