Use my Search Websuite to scan PubMed, PubMed Central, journal hosts and journal archives for full text.
Send your search term to multiple engines and submit your query now!
A dictionary built from aggregated review articles in nephrology, medicine and the life sciences.
Your one-stop path from a search word to the immediate PDF of peer-reviewed, on-topic knowledge.

DOI: 10.1016/j.cmpb.2021.106288
Full text: http://scihub22266oqcxt.onion/10.1016/j.cmpb.2021.106288
PMID: 34352688
Abstract via NCBI; PDF via Google Scholar
PMID: 34352688   Comput Methods Programs Biomed 2021; 208: 106288
Nephropedia Template TP



  • The importance of being external. Methodological insights for the external validation of machine learning models in medicine #MMPMID34352688
  • Cabitza F; Campagner A; Soares F; Garcia de Guadiana-Romualdo L; Challa F; Sulejmani A; Seghezzi M; Carobene A
  • Comput Methods Programs Biomed 2021 Sep; 208: 106288. PMID: 34352688
  • BACKGROUND AND OBJECTIVE: Medical machine learning (ML) models tend to perform better on data from the same cohort than on new data, often because of overfitting or covariate shift. For these reasons, external validation (EV) is a necessary practice in the evaluation of medical ML. However, there is still a gap in the literature on how to interpret EV results and hence assess the robustness of ML models. METHODS: We fill this gap by proposing a meta-validation method to assess the soundness of EV procedures. In doing so, we complement the usual way of assessing EV by considering both dataset cardinality and the similarity of the EV dataset to the training set. We then investigate how the notions of cardinality and similarity can be used to inform on the reliability of a validation procedure, by integrating them into two summative data visualizations. RESULTS: We illustrate our methodology by applying it to the validation of a state-of-the-art COVID-19 diagnostic model on 8 EV sets, collected across 3 different continents. Model performance was moderately impacted by data similarity (Pearson rho = 0.38, p < 0.001). In the EV, the validated model reported good AUC (average: 0.84), acceptable calibration (average: 0.17) and utility (average: 0.50). The validation datasets were adequate in terms of dataset cardinality and similarity, thus suggesting the soundness of the results. We also provide a qualitative guideline to evaluate the reliability of validation procedures, and we discuss the importance of proper external validation in light of the obtained results. CONCLUSIONS: In this paper, we propose a novel, lean methodology to: 1) study how the similarity between training and validation sets impacts the generalizability of an ML model; 2) assess the soundness of EV evaluations along three complementary performance dimensions: discrimination, utility and calibration; 3) draw conclusions on the robustness of the model under validation. We applied this methodology to a state-of-the-art model for the diagnosis of COVID-19 from routine blood tests, and showed how to interpret the results in light of the presented framework. (An illustrative code sketch of this similarity-versus-performance analysis follows the MeSH terms below.)
  • |*COVID-19[MESH]
  • |Cohort Studies[MESH]
  • |Humans[MESH]
  • |Machine Learning[MESH]
  • |Reproducibility of Results[MESH]
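
Below is a minimal Python sketch of the kind of meta-validation bookkeeping the abstract describes: scoring an already-fitted classifier on several external validation sets, recording each set's cardinality, its similarity to the training data, its discrimination (AUC) and calibration (Brier score), and correlating similarity with AUC across sets (cf. the reported Pearson rho). The similarity measure used here (an inverted mean per-feature Wasserstein distance), the helper names similarity and evaluate_external_sets, and the data layout are illustrative assumptions, not the authors' published implementation; the paper's utility dimension and qualitative guidelines are not reproduced.

    # Illustrative sketch only: the similarity measure and helper names are
    # assumptions, not the method published in the paper.
    import numpy as np
    from scipy.stats import pearsonr, wasserstein_distance
    from sklearn.metrics import roc_auc_score, brier_score_loss

    def similarity(train_X, ev_X):
        """Crude distributional similarity between training and external features
        (1 = identical marginals, tending toward 0 as the sets diverge)."""
        dists = [wasserstein_distance(train_X[:, j], ev_X[:, j])
                 for j in range(train_X.shape[1])]
        return 1.0 / (1.0 + float(np.mean(dists)))

    def evaluate_external_sets(model, train_X, ev_sets):
        """Score a fitted binary classifier on each external set and correlate
        train/EV similarity with AUC across sets."""
        rows = []
        for name, (X, y) in ev_sets.items():
            p = model.predict_proba(X)[:, 1]           # probability of the positive class
            rows.append({
                "set": name,
                "n": len(y),                           # dataset cardinality
                "similarity": similarity(train_X, X),  # similarity to the training data
                "auc": roc_auc_score(y, p),            # discrimination
                "brier": brier_score_loss(y, p),       # calibration
            })
        sims = [r["similarity"] for r in rows]
        aucs = [r["auc"] for r in rows]
        rho, pval = pearsonr(sims, aucs)               # cf. the rho = 0.38 reported in the abstract
        return rows, rho, pval

With, say, eight external sets passed as ev_sets = {"site_A": (X_a, y_a), ...}, the returned per-set table plus (rho, pval) give the cardinality and similarity context in which the per-set AUC and calibration figures should be read.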

