A recent study published in JAMA Pediatrics shows that artificial intelligence chatbots such as ChatGPT are unreliable for diagnosing medical conditions in children.
The research found that the AI system diagnosed childhood diseases correctly only 17% of the time, a strikingly low figure.
According to the researchers, this shows that the experience of pediatricians remains irreplaceable and underscores the importance of their clinical knowledge. Even so, many health experts believe that the integration of AI into medical care may be imminent.
Artificial intelligence is growing rapidly in the healthcare sector and is used for a wide range of applications.
These include analyzing large amounts of medical data to identify patterns that can help prevent and treat disease, developing algorithms for more accurate diagnosis, personalizing treatment for patients, improving the efficiency of health services, and automating administrative tasks.
However, a recent study from Cohen Children’s Medical Center in New York found that the latest version of ChatGPT is not yet ready to diagnose diseases in children. Children differ from adults: they change considerably as they grow, and they often cannot clearly describe what is happening to them.
To test ChatGPT, the scientists gave the system texts from 100 real pediatric cases and asked it to identify which disease each child had. Two expert physicians then judged whether the AI’s answers were correct, incorrect, or related to the true diagnosis but not precise enough.
Sometimes ChatGPT proposed a related but incorrect condition simply because it is very common. For example, ChatGPT concluded that a child had an ordinary kind of lump in the neck, when the child actually had a genetic disease that also affects the ears and kidneys and can cause such lumps to appear.
Of the 100 cases it examined, ChatGPT got the answer right in only 17. In 11 cases its answer was related to the true diagnosis but not complete, and in 72 it was completely wrong. Furthermore, of the 83 answers that were not fully correct, in 47 the reported disease involved the correct body part, but the diagnosis was still wrong.
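The reported figures fit together as follows; this is a quick arithmetic sanity check using only the numbers stated above:

```python
# Sanity check on the study's reported figures (as cited in the article).
total_cases = 100
correct = 17       # fully correct diagnoses
incomplete = 11    # related to the true diagnosis but not complete
wrong = 72         # completely wrong

# The three outcome groups account for all 100 cases.
assert correct + incomplete + wrong == total_cases

accuracy = correct / total_cases * 100
print(f"Accuracy: {accuracy:.0f}%")  # 17%

# Of the 83 answers that were not fully correct, 47 at least
# named a disease involving the correct body part.
not_fully_correct = incomplete + wrong  # 83
same_body_part = 47
share = same_body_part / not_fully_correct
print(f"Correct body part: {same_body_part}/{not_fully_correct} ({share:.0%})")
```

So while the chatbot was in the right anatomical neighborhood more than half the time it erred, its overall diagnostic accuracy was still only 17%.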
The researchers noted that the AI was poor at recognizing connections that experienced doctors know well. For example, it failed to consider that a child with autism might develop scurvy from not eating enough vitamin C.
This matters because some people with autism eat a restricted range of foods and become deficient in vitamins. Doctors know to watch for these vitamin problems even in countries where children generally eat well. But the chatbot missed the connection and concluded the child had a different, much rarer disease.
The chatbot performed poorly in this test, but the researchers said it could improve if it were trained on specialized medical literature rather than on the sometimes inaccurate information found on the internet.
They said that if the chatbot could draw on up-to-date medical data, it would make better diagnoses. They call this “tuning” the system so that it works in a more optimized way.
The physician authors of the study concluded, “This presents an opportunity for researchers to test whether specific training and tuning on medical data helps improve the diagnostic accuracy of chatbots based on large language models.”