An overview of the BIOASQ large-scale biomedical semantic indexing and question
answering competition
Tsatsaronis G; Balikas G; Malakasiotis P; Partalas I; Zschunke M; Alvers MR; Weissenborn D; Krithara A; Petridis S; Polychronopoulos D; Almirantis Y; Pavlopoulos J; Baskiotis N; Gallinari P; Artières T; Ngomo AC; Heino N; Gaussier E; Barrio-Alvers L; Schroeder M; Androutsopoulos I; Paliouras G
BMC Bioinformatics 2015 Apr; 16: 138
PMID: 25925131
BACKGROUND: This article provides an overview of the first BIOASQ challenge, a competition on large-scale biomedical semantic indexing and question answering (QA), which took place between March and September 2013. BIOASQ assesses the ability of systems to semantically index very large numbers of biomedical scientific articles, and to return concise, user-understandable answers to natural language questions by combining information from biomedical articles and ontologies.

RESULTS: The 2013 BIOASQ competition comprised two tasks, Task 1a and Task 1b. In Task 1a participants were asked to automatically annotate new PUBMED documents with MESH headings. Twelve teams participated in Task 1a, submitting a total of 46 system runs; one of the teams performed consistently better than the MTI indexer used by NLM to suggest MESH headings to curators. Task 1b used benchmark datasets containing 29 development and 282 test English questions, along with gold standard (reference) answers prepared by a team of biomedical experts from around Europe; participants had to produce answers automatically. Three teams participated in Task 1b, with 11 system runs. The BIOASQ infrastructure, including benchmark datasets, evaluation mechanisms, and the results of the participants and baseline methods, is publicly available.

CONCLUSIONS: A publicly available evaluation infrastructure for biomedical semantic indexing and QA has been developed. It includes benchmark datasets and can be used to evaluate systems that: assign MESH headings to published articles or to English questions; retrieve relevant RDF triples from ontologies, and relevant articles and snippets from PUBMED Central; and produce "exact" and paragraph-sized "ideal" answers (summaries). The results of the systems that participated in the 2013 BIOASQ competition are promising. In Task 1a one of the systems performed consistently better than NLM's MTI indexer. In Task 1b the systems received high scores in the manual evaluation of the "ideal" answers; hence, they produced high-quality summaries as answers. Overall, BIOASQ helped obtain a unified view of how techniques from text classification, semantic indexing, document and passage retrieval, question answering, and text summarization can be combined to allow biomedical experts to obtain concise, user-understandable answers to questions reflecting their real information needs.
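The "evaluation mechanisms" mentioned above can be illustrated with a minimal sketch of how a flat multi-label indexing score might be computed for Task 1a-style MESH heading assignment. This is a generic micro-averaged precision/recall/F1 calculation, not the official BIOASQ scorer (which also included hierarchical measures not shown here); the function name and sample labels are illustrative.

```python
def micro_f1(gold, pred):
    """Micro-averaged precision, recall, and F1 for a multi-label task.

    gold, pred: lists of sets of labels (e.g. MESH headings), one set
    per document. Counts are pooled across all documents before the
    ratios are taken, so frequent labels dominate the score.
    Illustrative sketch only, not the official BIOASQ evaluation code.
    """
    tp = sum(len(g & p) for g, p in zip(gold, pred))  # correct labels
    fp = sum(len(p - g) for g, p in zip(gold, pred))  # spurious labels
    fn = sum(len(g - p) for g, p in zip(gold, pred))  # missed labels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: gold vs. system-predicted headings for two documents
gold = [{"Humans", "Neoplasms"}, {"Humans", "Kidney"}]
pred = [{"Humans"}, {"Humans", "Kidney", "Liver"}]
print(micro_f1(gold, pred))  # → (0.75, 0.75, 0.75)
```

Micro-averaging pools true/false positives over the whole test set before computing the ratios, which matters for MESH indexing because the label distribution is highly skewed.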