Use my Search Websuite to scan PubMed, PubMed Central, journal hosts, and journal archives for full text.
Send your search term to multiple engines at once.
A dictionary built from aggregated review articles in nephrology, medicine, and the life sciences.
Your one-stop pathway from a search term straight to the PDF of peer-reviewed, on-topic literature.

Abstract from NCBI
DOI: 10.1186/s12859-016-1228-x
http://scihub22266oqcxt.onion/10.1186/s12859-016-1228-x
PDF via Google Scholar
PMC5009551 · PMID 27586051
Free PDF from Europe PMC (PMID 27586051)
PDF from PMC (free)
HTML from PMC (free)
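
If you prefer to pull the abstract programmatically rather than through the links above, a minimal sketch against NCBI's public E-utilities efetch endpoint might look like the following. The plain-text return mode and the absence of an API key are assumptions for this illustration; heavier use should add an API key and rate limiting.

```python
# Minimal sketch: fetch the abstract for PMID 27586051 from NCBI E-utilities.
# Only the public efetch endpoint is assumed; no API key is used here.
from urllib.parse import urlencode
from urllib.request import urlopen

PMID = "27586051"
params = urlencode({
    "db": "pubmed",        # PubMed database
    "id": PMID,            # the article's PubMed ID
    "rettype": "abstract", # return the abstract record
    "retmode": "text",     # plain text instead of XML
})
url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?{params}"

with urlopen(url) as response:
    print(response.read().decode("utf-8"))
```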



  • The parameter sensitivity of random forests
  • Huang BF; Boutros PC
  • BMC Bioinformatics 2016; 17(1). PMID 27586051
  • Background: The Random Forest (RF) algorithm for supervised machine learning is an ensemble learning method widely used in science and many other fields. Its popularity has been increasing, but relatively few studies address the parameter selection process: a critical step in model fitting. Due to numerous assertions regarding the performance reliability of the default parameters, many RF models are fit using these values. However, there has not yet been a thorough examination of the parameter sensitivity of RFs in computational genomic studies. We address this gap here. Results: We examined the effects of parameter selection on classification performance using the RF machine learning algorithm on two biological datasets with distinct p/n ratios: sequencing summary statistics (low p/n) and microarray-derived data (high p/n). Here, p refers to the number of variables and n to the number of samples. Our findings demonstrate that parameterization is highly correlated with prediction accuracy and variable importance measures (VIMs). Further, we demonstrate that different parameters are critical in tuning different datasets, and that parameter optimization significantly improves upon the default parameters. Conclusions: Parameter performance demonstrated wide variability on both low and high p/n data. Therefore, there is significant benefit to be gained by tuning RFs away from their default parameter settings. Electronic supplementary material: The online version of this article (doi:10.1186/s12859-016-1228-x) contains supplementary material, which is available to authorized users.
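
To make the tuning argument of the abstract concrete, here is a minimal sketch that contrasts a default-parameter random forest with a small grid search on synthetic high p/n data. Everything in it is an assumption for illustration: the paper evaluates the R randomForest implementation on real genomic datasets, whereas this uses scikit-learn and generated data, and max_features, n_estimators, and min_samples_leaf only loosely correspond to that package's mtry, ntree, and nodesize.

```python
# Minimal sketch of the tuning idea from the abstract: compare default random-forest
# hyperparameters against a small grid search. scikit-learn and synthetic data are
# used purely for illustration; this is not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic "high p/n" style data: many more features than samples would comfortably support.
X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           random_state=0)

# Baseline: the default parameters, evaluated by 5-fold cross-validation.
default_rf = RandomForestClassifier(random_state=0)
default_score = cross_val_score(default_rf, X, y, cv=5, scoring="roc_auc").mean()

# Small grid over the parameters most analogous to mtry / ntree / nodesize.
grid = {
    "max_features": ["sqrt", 0.1, 0.3],   # roughly analogous to mtry
    "n_estimators": [100, 500],           # roughly analogous to ntree
    "min_samples_leaf": [1, 5],           # roughly analogous to nodesize
}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid,
                      cv=5, scoring="roc_auc")
search.fit(X, y)

print(f"default AUC: {default_score:.3f}")
print(f"tuned   AUC: {search.best_score_:.3f}  with {search.best_params_}")
```

Because the default configuration is itself one point in this grid, the tuned score can only match or exceed the default here; what the paper quantifies is how large and how dataset-dependent that gap is on real low and high p/n data.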


Linkout box
  • DeepDyve
  • Pubget (overpriced)
  • Abstract from NCBI