Use my search web suite to scan PubMed, PubMed Central, journal hosts, and journal archives for full text.
Send your search term to multiple engines at once.
A dictionary built from aggregated review articles in nephrology, medicine, and the life sciences.
Your one-stop pathway from a search term straight to the PDF of peer-reviewed, on-topic knowledge.

Fetch abstract from NCBI


10.1186/s12872-025-05431-y

http://scihub22266oqcxt.onion/10.1186/s12872-025-05431-y
Fetch PDF from Google Scholar
PMID 41366313

Note: the NCBI E-utilities elink request for LinkOut data (https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=41366313&cmd=llinks) failed with HTTP 429 Too Many Requests (rate limit).
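The 429 above comes from NCBI's rate limiter; E-utilities allows roughly three requests per second without an API key. Below is a minimal, hypothetical Python sketch (not the site's pget.php code) of the same elink LinkOut call with exponential back-off on 429:

# Hypothetical sketch only (this is not the site's pget.php): fetch the LinkOut
# providers for one PubMed ID via NCBI E-utilities elink, retrying with
# exponential back-off whenever the service answers HTTP 429.
import time
import urllib.error
import urllib.parse
import urllib.request

EUTILS_ELINK = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

def fetch_llinks(pmid: str, max_retries: int = 5) -> str:
    """Return the elink 'llinks' XML for a PubMed ID, backing off on HTTP 429."""
    query = urllib.parse.urlencode({"dbfrom": "pubmed", "id": pmid, "cmd": "llinks"})
    url = f"{EUTILS_ELINK}?{query}"
    for attempt in range(max_retries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode("utf-8")
        except urllib.error.HTTPError as err:
            if err.code == 429:              # rate limited: wait, then retry
                time.sleep(2 ** attempt)     # 1 s, 2 s, 4 s, ...
                continue
            raise                            # any other HTTP error is fatal
    raise RuntimeError(f"elink still rate-limited after {max_retries} attempts")

if __name__ == "__main__":
    print(fetch_llinks("41366313")[:500])    # first 500 characters of the XML

Registering an NCBI API key and passing it as the api_key parameter raises the allowance to about ten requests per second.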


PMID 41366313    BMC Cardiovasc Disord 2025; ?(?): ?

  • Comparative study of the performance of ChatGPT-4, Claude, Gemini, Mistral, and perplexity on multiple-choice questions in cardiology #MMPMID41366313
  • Nacanabo MW; Bayala YLT; Seghda AAT; Tall/Thiam A; Yameogo AR; Yameogo NV; Samadoulougou AK; Zabsonre P
  • BMC Cardiovasc Disord 2025 Dec; ?(?): ?  PMID: 41366313
  • BACKGROUND: Artificial intelligence, particularly Large Language Models (LLMs), has revolutionized the field of medicine. Their ability to understand and answer medical questions is generating growing interest, especially in cardiology, where diagnostic and therapeutic accuracy is essential. OBJECTIVE: The objective of our study was to assess and compare the performance of five LLMs on multiple-choice questions (MCQs) in cardiology. MATERIALS AND METHODS: This was a comparative study conducted in the cardiology department of the Bogodogo University Hospital, Ouagadougou, involving 83 MCQs derived from the 2020 French national cardiology curriculum. The questions were submitted to ChatGPT-4, Claude, Gemini, Mistral, and Perplexity. Performance was evaluated based on overall and thematic accuracy, as well as the number of discordances. Agreement between the LLMs was assessed using the Kruskal-Wallis test. RESULTS: Claude achieved the highest overall accuracy (78.31%), followed by ChatGPT-4 and Gemini (75.90%), then Mistral (72.29%) and Perplexity (68.67%). Each LLM demonstrated a distinct performance profile by topic, with Claude excelling in heart failure (100%) and arrhythmias (90.9%), and ChatGPT-4 in diagnostic investigations (87.5%). The analysis of discordances showed a slightly higher precision for ChatGPT-4. The Kruskal-Wallis test with effect size revealed statistically significant differences in performance between the LLMs, both globally and by topic (p < 0.05), with generally large effect sizes. CONCLUSION: Despite variations in their performance profiles, the five LLMs studied have relatively similar capabilities for answering well-structured cardiology multiple-choice questions. They could therefore be valuable tools in medical education in our resource-limited context.
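For orientation only, here is a hedged Python sketch of the kind of analysis the abstract describes: per-model accuracy over a set of MCQs, a Kruskal-Wallis test across the five models, and an epsilon-squared effect size. The 0/1 scores are simulated to roughly match the reported overall accuracies; they are not the study's data, and this is not the authors' code.

# Illustrative only: simulated per-question scores, not the study's data.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
n_questions = 83  # the study used 83 MCQs from the 2020 French cardiology curriculum

# Hypothetical 0/1 correctness vectors, one per model, drawn to roughly match
# the reported overall accuracies.
reported_accuracy = {
    "Claude": 0.7831,
    "ChatGPT-4": 0.7590,
    "Gemini": 0.7590,
    "Mistral": 0.7229,
    "Perplexity": 0.6867,
}
scores = {m: rng.binomial(1, p, n_questions) for m, p in reported_accuracy.items()}

for model, s in scores.items():
    print(f"{model:>10}: accuracy = {s.mean():.2%}")

# Kruskal-Wallis H-test across the five models' score distributions.
h_stat, p_value = kruskal(*scores.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")

# Epsilon-squared effect size for Kruskal-Wallis: eps^2 = H / (n_total - 1),
# where n_total is the number of observations pooled across all groups.
n_total = n_questions * len(scores)
print(f"epsilon-squared = {h_stat / (n_total - 1):.3f}")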

