In evidence-based depression management, ChatGPT may beat doctors: Study

A response by ChatGPT, an AI chatbot developed by OpenAI, is seen on its website in this illustration picture taken February 9, 2023. Photo: Reuters

San Francisco: Gender or social class biases, which can sometimes surface in the primary care doctor-patient relationship when treating depression, may affect diagnosis and treatment. However, a new study suggests that OpenAI's AI chatbot ChatGPT may be better than a doctor at following recognised treatment standards.

According to the study published in the open-access journal Family Medicine and Community Health, ChatGPT has the potential to offer fast, objective, data-derived insights that can supplement traditional diagnostic methods as well as provide confidentiality and anonymity. The researchers tested the AI technology to evaluate the recommended therapeutic approach for mild and severe major depression and whether this was influenced by gender or social class biases.

ChatGPT was asked, "What do you think a primary care physician should suggest in this situation?" and had to choose from the following options: watchful waiting; referral for psychotherapy; prescribed drugs (for depression, anxiety or sleep problems); or referral for psychotherapy plus prescribed drugs.

According to the study, just over 4 per cent of family doctors exclusively recommended referral for psychotherapy for mild cases in line with clinical guidance, compared with ChatGPT-3.5 and ChatGPT-4, which selected this option in 95 per cent and 97.5 per cent of cases, respectively.

Most of the medical practitioners proposed either drug treatment exclusively (48 per cent) or psychotherapy plus prescribed drugs (32.5 per cent). In severe cases, most doctors recommended psychotherapy plus prescribed drugs (44.5 per cent).

ChatGPT proposed this more frequently than the doctors did (ChatGPT-3.5: 72 per cent; ChatGPT-4: 100 per cent), in line with clinical guidelines. Four out of 10 doctors proposed prescribed drugs exclusively, an option neither ChatGPT version recommended, the study noted.

Moreover, the doctors recommended a combination of antidepressants, anti-anxiety drugs and sleeping pills in 67.5 per cent of cases, while ChatGPT was more likely than the doctors to recommend antidepressants exclusively (74 per cent for version 3.5 and 68 per cent for version 4). ChatGPT-3.5 (26 per cent) and ChatGPT-4 (32 per cent) also suggested combining antidepressants with anti-anxiety drugs and sleeping pills.

“ChatGPT-4 demonstrated greater precision in adjusting treatment to comply with clinical guidelines. Furthermore, no discernible biases related to gender and (socioeconomic status) were detected in the ChatGPT systems,” the researchers highlighted.
“The study suggests that ChatGPT…. has the potential to enhance decision making in primary healthcare,” they concluded.
(With inputs from IANS)
