Setting health research priorities using the CHNRI method: V Quantitative
properties of human collective knowledge
Rudan I; Yoshida S; Wazny K; Chan KY; Cousens S
J Glob Health. 2016 Jun;6(1):010502. PMID: 27350873
INTRODUCTION: The CHNRI method for setting health research priorities has
crowdsourcing as the major component. It uses the collective opinion of a group
of experts to generate, assess, and prioritize many competing health research
ideas. It is difficult to compare the accuracy of human individual and
collective opinions in predicting uncertain future outcomes before the outcomes
are known. However, this limitation does not apply to existing knowledge, which
is an important component underlying opinion. In this paper, we report several
experiments to explore the quantitative properties of human collective knowledge
and discuss their relevance to the CHNRI method. METHODS: We conducted a series
of experiments in groups of about 160 (range: 122-175) undergraduate Year 2
medical students to compare their collective knowledge to their individual
knowledge. We asked them to answer 10 questions on each of the following: (i) an
area in which they have a degree of expertise (undergraduate Year 1 medical
curriculum); (ii) an area in which they likely have some knowledge (general
knowledge); and (iii) an area in which they are not expected to have any
knowledge (astronomy). We also presented them with 20 pairs of well-known
celebrities and asked them to identify the older person of the pair. In all these
experiments our goal was to examine how the collective answer compares to the
distribution of students' individual answers. RESULTS: When answering the
questions in their own area of expertise, the collective answer (the median) was
in the top 20.83% of the most accurate individual responses; in general
knowledge, it was in the top 11.93%; and in an area with no expertise, the group
answer was in the top 7.02%. However, the collective answer based on mean values
fared much worse, ranging from the top 75.60% to the top 95.91%. Also, when asked
to guess the older of the two celebrities, the collective response was
correct in 18/20 cases (90%), while the 8 most successful individuals among the
students had 19/20 correct answers (95%). However, under a scoring system in
which students who were unsure of the correct answer could either accept half a
point for each such instance or withdraw from responding, in order to improve
the collective score, the collective was correct in 19/20 cases (95%), while
the 3 most successful individuals were correct in 17/20 cases (85%).
CONCLUSIONS: Our experiments showed that the collective knowledge of a
group with expertise in the subject should always be very close to the true
value. In most cases and under most assumptions, the collective knowledge will be
more accurate than the knowledge of an "average" individual, but there always
seems to be a small group of individuals who manage to outperform the
collective. The accuracy of collective prediction may be enhanced by allowing the
individuals with low confidence in their answer to withdraw from answering.
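The aggregation schemes compared in the results above (median vs. mean for numeric guesses, and majority voting in which low-confidence respondents may withdraw) can be sketched as follows. This is an illustrative sketch, not the authors' code; the data and function names are hypothetical:

```python
import statistics

def collective_rank(guesses, truth, aggregate):
    """Fraction of individuals at least as accurate as the collective
    answer produced by `aggregate` (smaller = collective ranks higher)."""
    err = abs(aggregate(guesses) - truth)
    return sum(abs(g - truth) <= err for g in guesses) / len(guesses)

def majority_vote(votes):
    """Majority answer over 'A'/'B' votes; None means the respondent
    withdrew (the low-confidence option described in the abstract)."""
    counted = [v for v in votes if v is not None]
    return max(set(counted), key=counted.count)

# Hypothetical data: nine students estimate a quantity whose true value is 100.
guesses = [80, 90, 95, 98, 100, 105, 110, 150, 400]

median_rank = collective_rank(guesses, 100, statistics.median)  # top ~11%
mean_rank = collective_rank(guesses, 100, statistics.mean)      # top ~78%
# The median resists the outlier (400); the mean is dragged toward it,
# mirroring the finding that mean-based collective answers fared far worse.

# Letting an unsure voter withdraw simply removes a likely-wrong vote:
assert majority_vote(['A', 'A', 'B', None]) == 'A'
```

The contrast between the two rank values shows why the paper's median-based collective answer landed in the top few percent of individuals while the mean-based answer did not: a single extreme guess shifts the mean but leaves the median untouched.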
MeSH: *Knowledge; Achievement; Crowdsourcing/methods; Curriculum; Education, Medical, Undergraduate/*statistics & numerical data