Study found AI outperformed doctors in suggesting diagnoses for emergency patients, identifying the correct diagnosis 97% of the time.
In another AI versus doctor showdown, AI has scored another victory.
European researchers found that the artificial intelligence chatbot ChatGPT performed as well as – and in some respects better than – trained doctors in suggesting likely diagnoses for patients being assessed in emergency departments.
The results were published in the Annals of Emergency Medicine.
The researchers noted that more work is needed, but said the results suggest AI may one day be able to support doctors working in emergency medicine, which could lead to shorter waiting times for patients.
The study used anonymized details of 30 patients treated in an emergency department last year. ChatGPT was given physicians’ notes on patients’ signs, symptoms, and physical examinations, as well as laboratory results. The researchers then compared the shortlist of likely diagnoses generated by the chatbot with the shortlist made by emergency medicine doctors, and both with each patient’s correct diagnosis.
The results showed an overlap of about 60% between the doctors’ lists and ChatGPT’s. Doctors had the correct diagnosis within their top five likely diagnoses in 87% of cases, compared to 97% for ChatGPT. Interestingly, the free version, ChatGPT 3.5, scored better than the subscription version, which matched the doctors’ success rate of 87%.
The researchers say the AI may be most useful as a support tool for inexperienced doctors or in spotting rare diseases.