Use my Search Websuite to scan PubMed, PubMed Central, journal hosts and journal archives for full text.
Send your search term to multiple engines at once - kick your query now!
A dictionary built from aggregated review articles in nephrology, medicine and the life sciences.
Your one-stop pathway from a search term to the PDF of peer-reviewed, on-topic knowledge.
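The PubMed leg of such a multi-engine search can be reproduced with NCBI's public E-utilities. The following is a minimal sketch, not this site's own code; the example query and result limit are assumptions.

```python
import requests

# Minimal sketch: send a search term to PubMed via NCBI E-utilities (esearch).
# The example term and retmax are placeholders, not taken from this page.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(term, retmax=20):
    """Return a list of PMIDs matching a PubMed search term."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    r = requests.get(ESEARCH, params=params, timeout=30)
    r.raise_for_status()
    return r.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    print(search_pubmed("medical ethics large language models"))
```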

Fetch abstract from NCBI
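Given a PMID, the abstract itself can be pulled from the efetch endpoint. A minimal sketch, assuming the requests library; NCBI throttles anonymous E-utilities clients, hence the short pause between calls.

```python
import time
import requests

# Minimal sketch: fetch the plain-text abstract of one PubMed record via efetch.
EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fetch_abstract(pmid):
    """Return the abstract of a PubMed record as plain text."""
    params = {"db": "pubmed", "id": pmid, "rettype": "abstract", "retmode": "text"}
    r = requests.get(EFETCH, params=params, timeout=30)
    r.raise_for_status()
    time.sleep(0.4)  # stay well under NCBI's rate limit for clients without an API key
    return r.text

if __name__ == "__main__":
    print(fetch_abstract("41366422"))  # PMID of the record shown below
```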


DOI: 10.1186/s12910-025-01316-z

http://scihub22266oqcxt.onion/10.1186/s12910-025-01316-z
Fetch PDF via Google Scholar
PMID 41366422
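The DOI above can also be resolved to its publisher landing page through the standard doi.org resolver; the sketch below is one illustrative way to do that, not a description of how this page builds its links.

```python
import requests

# Minimal sketch: resolve a DOI to the publisher's landing page via doi.org.
DOI = "10.1186/s12910-025-01316-z"  # DOI of the record on this page

def resolve_doi(doi):
    """Follow doi.org redirects and return the final landing-page URL."""
    r = requests.get(f"https://doi.org/{doi}", allow_redirects=True, timeout=30)
    r.raise_for_status()
    return r.url

if __name__ == "__main__":
    print(resolve_doi(DOI))
```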



PMID 41366422 | BMC Med Ethics 2025; 26(1): 168



  • Performance of large language models in non-English medical ethics-related multiple choice questions: comparison of ChatGPT performance across versions and languages
  • Kim Y; Shin S; Yoo SH
  • BMC Med Ethics 2025 Dec; 26(1): 168. PMID 41366422
  • BACKGROUND: As large language models (LLMs) evolve, assessing their competence in ethically sensitive domains such as medical ethics has become increasingly important. Since medical ethics is a universal component of medical education, disparities in AI performance across languages may result in unequal benefits for learners. It is therefore essential to examine performance in non-English contexts. While previous studies have evaluated the performance of the Chat Generative Pre-trained Transformer (ChatGPT) on English-language multiple-choice questions (MCQs) in medical ethics, none have examined version-based improvements in non-English contexts. This study therefore evaluated ChatGPT versions 3.5, 4.0, and 4.5 on MCQs about Korean medical ethics and their English translations, with a focus on performance trends across versions and languages. METHODS: We selected 36 MCQs from the Korean National Medical Licensing Examination and the Comprehensive Clinical Medicine Evaluation databases. Each question was entered ten times per ChatGPT version (3.5, 4.0, 4.5) and language (Korean, English), for a total of 60 trials per question. Additionally, to assess the model's capacity to identify the ethical core without relying on the options provided, 31 of the 36 questions were modified by masking the correct choice. Accuracy was analyzed using independent-sample t-tests and the Mann-Whitney U test, and consistency was assessed using Krippendorff's alpha. RESULTS: Overall, the accuracy and consistency of ChatGPT improved with each version. Version 4.5 achieved near-perfect scores and high reliability in both languages, while version 3.5 showed limited performance, particularly on the Korean test. Performance gaps between languages decreased with model upgrades but remained statistically significant in version 4.5 for some questions. In the masked-answer condition, all versions showed notable drops in accuracy and consistency, with version 4.5 still outperforming earlier versions. However, performance remained below 50%, indicating limitations in the model's autonomous ethical reasoning. CONCLUSIONS: ChatGPT demonstrated substantial improvements in medical-ethics MCQ performance across versions, particularly in consistency and accuracy. However, performance disparities between languages and reduced accuracy under masked-answer conditions highlight ongoing limitations in non-English ethical reasoning and context recognition. These findings emphasize the need for further research on language-sensitive fine-tuning and the evaluation of LLMs in specialized ethical domains. They suggest that advanced LLMs may serve as valuable supplementary tools in medical education and clinical ethics training; at the same time, the observed language disparities call for context-sensitive adaptations to prevent inequities in practice. (A hedged sketch of the accuracy and consistency analysis described here follows the MeSH terms below.)
  • |*Educational Measurement/methods[MESH]
  • |*Ethics, Medical/education[MESH]
  • |*Language[MESH]
  • |Generative Artificial Intelligence[MESH]
  • |Humans[MESH]
  • |Large Language Models[MESH]
  • |Republic of Korea[MESH]
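The analysis the abstract describes (per-language accuracy over repeated trials, independent-sample t-tests and Mann-Whitney U comparisons, and Krippendorff's alpha for consistency) can be approximated as below. This is a minimal sketch, not the authors' code: the trial outcomes and answer matrix are invented placeholders, and the third-party krippendorff package is assumed for the alpha computation.

```python
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind
import krippendorff  # third-party package: pip install krippendorff

# Hypothetical results for ONE question: 10 trials per language for one model version.
# 1 = correct option chosen, 0 = incorrect (placeholder data, not the study's results).
korean_trials  = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
english_trials = np.array([1, 1, 1, 1, 1, 0, 1, 1, 1, 1])

# Accuracy per language.
print(f"accuracy Korean={korean_trials.mean():.2f}  English={english_trials.mean():.2f}")

# Compare per-trial correctness between languages (the abstract reports both
# independent-sample t-tests and Mann-Whitney U tests).
t_stat, t_p = ttest_ind(korean_trials, english_trials)
u_stat, u_p = mannwhitneyu(korean_trials, english_trials)
print(f"t-test p={t_p:.3f}  Mann-Whitney U p={u_p:.3f}")

# Consistency across repeated trials: treat each trial as a "rater" answering the
# same set of questions; answers are the chosen option labels (nominal data).
# Rows = trials (raters), columns = questions; placeholder option choices.
answers = np.array([
    ["A", "B", "A", "C"],
    ["A", "B", "A", "C"],
    ["A", "B", "D", "C"],
])
# krippendorff.alpha expects numeric codes, so map option letters to integers.
codes = np.vectorize("ABCDE".index)(answers).astype(float)
alpha = krippendorff.alpha(reliability_data=codes, level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.3f}")
```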


    Linkout box
  • DeepDyve
  • Pubget Overpricing
  • Fetch abstract from NCBI