Use my Search Websuite to scan PubMed, PubMed Central, journal hosts, and journal archives for full text.
Send your search term to multiple engines and submit your query now!
A dictionary built from aggregated review articles in nephrology, medicine, and the life sciences.
Your one-stop pathway from a search word to the immediate PDF of peer-reviewed, on-topic knowledge.

Fetch abstract from NCBI


10.1093/jamia/ocu048

http://scihub22266oqcxt.onion/10.1093/jamia/ocu048
Fetch PDF from Google Scholar
Unlimited free PDF from Europe PMC (PMID 26063745): free
PDF from PMC: free
HTML from PMC: free

PMID 26063745    J Am Med Inform Assoc 2015; 22(5): 967-79

  • Domain adaptation for semantic role labeling of clinical text
  • Zhang Y; Tang B; Jiang M; Wang J; Xu H
  • J Am Med Inform Assoc 2015 Sep; 22(5): 967-79. PMID: 26063745
  • Objective: Semantic role labeling (SRL), which extracts a shallow semantic relation representation from different surface textual forms of free-text sentences, is important for understanding natural language. Few studies in SRL have been conducted in the medical domain, primarily due to a lack of annotated clinical SRL corpora, which are time-consuming and costly to build. The goal of this study is to investigate domain adaptation techniques for clinical SRL that leverage resources built from newswire and biomedical literature to improve performance and save annotation costs. Materials and Methods: Multisource Integrated Platform for Answering Clinical Questions (MiPACQ), a manually annotated clinical SRL corpus, was used as the target domain dataset. PropBank and NomBank from newswire and BioProp from the biomedical literature were used as source domain datasets. Three state-of-the-art domain adaptation algorithms were employed: instance pruning, transfer self-training, and feature augmentation. SRL performance under the different domain adaptation algorithms was evaluated by 10-fold cross-validation on the MiPACQ corpus, and learning curves were generated to assess the effect of sample size. Results and Conclusion: When all three source domain corpora were used, the feature augmentation algorithm achieved a statistically significantly higher F-measure (83.18%) than the baseline trained on the MiPACQ dataset alone (F-measure, 81.53%), indicating that domain adaptation algorithms may improve SRL performance on clinical text. To achieve performance comparable to the baseline method trained on 90% of the MiPACQ training samples, the feature augmentation algorithm required <50% of the MiPACQ training samples, demonstrating that the annotation cost of clinical SRL can be reduced significantly by leveraging existing SRL resources from other domains.
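The abstract reports feature augmentation as the best-performing of the three domain adaptation algorithms. The Python sketch below illustrates the general feature augmentation idea (duplicating every feature into a shared copy plus a domain-specific copy) on toy token features; the feature names, toy examples, domain labels, and scikit-learn classifier are illustrative assumptions, not the authors' actual SRL system or corpora.

# Minimal sketch of feature augmentation for domain adaptation (assumed setup,
# not the paper's implementation): each feature is copied into a "general"
# version shared by all domains and a version that fires only in its own domain.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def augment(features, domain):
    """Duplicate every feature into a shared copy and a domain-specific copy."""
    out = {}
    for name, value in features.items():
        out[f"general::{name}"] = value   # shared across all domains
        out[f"{domain}::{name}"] = value  # active only for this domain
    return out

# Toy examples standing in for SRL features (predicate lemma, POS, etc.).
source_data = [({"lemma": "give", "pos": "VB"}, "A0"),        # newswire-style source
               ({"lemma": "take", "pos": "VB"}, "A1")]
target_data = [({"lemma": "administer", "pos": "VB"}, "A0"),  # clinical-style target
               ({"lemma": "discontinue", "pos": "VB"}, "A1")]

X = [augment(f, "newswire") for f, _ in source_data] + \
    [augment(f, "clinical") for f, _ in target_data]
y = [label for _, label in source_data] + [label for _, label in target_data]

vec = DictVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)

# Clinical test instances are augmented with the "clinical" domain tag, so the
# model can rely on the shared copies where domains agree and on the
# clinical-specific copies where they differ.
test = augment({"lemma": "administer", "pos": "VB"}, "clinical")
print(clf.predict(vec.transform([test])))

Because the shared copies let source-domain examples contribute wherever newswire and clinical usage agree, the target domain needs fewer annotated samples, which is consistent with the abstract's finding that <50% of the MiPACQ training data sufficed to match the 90% baseline.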

