Chief & Presenting Author: Dr. Srawani Sarkar
Co-Author(s): Dr. Lakshmi S, Dr. Nilanjan Kaushik Thakur
Abstract
Purpose: Several recent studies have shown that large language models approach expert-level clinical knowledge and reasoning in ophthalmology. This study aims to compare the performance of ChatGPT and Gemini in diagnosing and reasoning about routine eye diseases.
Method: Clinical case scenarios of eye diseases were presented to both applications; the diagnoses provided, along with the reasoning and treatment, were analyzed and compared by two independent ophthalmologists. The responses of ChatGPT and Gemini were qualitatively evaluated for ease of understanding, conciseness, accuracy, completeness, and relevance.
Results: Both AI applications produced similar diagnostic responses to common clinical scenarios. However, ChatGPT proved superior in accuracy and relevance in a few scenarios.
Conclusion: Both AI applications performed well in providing reasonable differential diagnoses for routine eye diseases. ChatGPT's answers were more relevant and accurate than Gemini's.
