Use the Search Websuite to scan PubMed, PubMed Central, journal hosts, and journal archives for full text.
Send your search term to multiple engines at once.
A dictionary of aggregated review articles on nephrology, medicine, and the life sciences.
Your one-stop pathway from a search word straight to the PDF of peer-reviewed, on-topic knowledge.

Fetch abstract from NCBI


10.2196/publichealth.3953

http://scihub22266oqcxt.onion/10.2196/publichealth.3953
Fetch PDF from Google Scholar
PMCID: PMC4763717 | PMID: 26925459
Unlimited free PDF from Europe PMC (PMID 26925459)    free
PDF from PMC    free
HTML from PMC    free

PMID: 26925459      JMIR Public Health Surveill 2015; 1(1)

  • Using Twitter to Measure Public Discussion of Diseases: A Case Study #MMPMID26925459
  • Burton S; Salathe M; Weeg C; Schwartz HA; Hill S; Merchant RM; Arango C; Ungar L
  • JMIR Public Health Surveill 2015 Jan; 1(1). PMID: 26925459
  • Background: Twitter is increasingly used to estimate disease prevalence, but such measurements can be biased, due to both biased sampling and inherent ambiguity of natural language. Objective: We characterized the extent of these biases and how they vary with disease. Methods: We correlated self-reported prevalence rates for 22 diseases from Experian's Simmons National Consumer Study (n=12,305) with the number of times these diseases were mentioned on Twitter during the same period (2012). We also identified and corrected for two types of bias present in Twitter data: (1) demographic variance between US Twitter users and the general US population; and (2) natural language ambiguity, which creates the possibility that mention of a disease name may not actually refer to the disease (eg, "heart attack" on Twitter often does not refer to myocardial infarction). We measured the correlation between disease prevalence and Twitter disease mentions both with and without bias correction. This allowed us to quantify each disease's overrepresentation or underrepresentation on Twitter, relative to its prevalence. Results: Our sample included 80,680,449 tweets. Adjusting disease prevalence to correct for Twitter demographics more than doubles the correlation between Twitter disease mentions and disease prevalence in the general population (from .113 to .258, P<.001). In addition, diseases varied widely in how often mentions of their names on Twitter actually referred to the diseases, from 14.89% (3827/25,704) of instances (for stroke) to 99.92% (5044/5048) of instances (for arthritis). Applying ambiguity correction to our Twitter corpus achieves a correlation between disease mentions and prevalence of .208 (P<.001). Simultaneously applying correction for both demographics and ambiguity more than triples the baseline correlation to .366 (P<.001). Compared with prevalence rates, cancer appeared most overrepresented in Twitter, whereas high cholesterol appeared most underrepresented. Conclusions: Twitter is a potentially useful tool to measure public interest in and concerns about different diseases, but when comparing diseases, improvements can be made by adjusting for population demographics and word ambiguity.
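The abstract above describes two corrections applied before correlating Twitter mentions with survey prevalence: reweighting prevalence for Twitter demographics and discounting ambiguous mentions. A minimal sketch of that idea follows; it is not the authors' pipeline, and except for the stroke and arthritis figures quoted in the abstract, all counts, ambiguity rates, and prevalence values are invented placeholders.

    # Sketch only: correlate ambiguity-corrected Twitter mention counts with
    # demographically adjusted disease prevalence. Values for "diabetes" and all
    # prevalence figures are hypothetical placeholders, not study data.
    from scipy.stats import pearsonr

    # Raw mention counts (stroke and arthritis totals taken from the abstract).
    raw_mentions = {"stroke": 25704, "arthritis": 5048, "diabetes": 18000}

    # Estimated fraction of mentions that truly refer to the disease
    # (eg, "stroke" or "heart attack" is often figurative on Twitter).
    true_mention_rate = {"stroke": 0.1489, "arthritis": 0.9992, "diabetes": 0.80}

    # Self-reported prevalence (%) after reweighting to Twitter demographics;
    # these numbers are illustrative only.
    adjusted_prevalence = {"stroke": 2.7, "arthritis": 22.5, "diabetes": 9.1}

    diseases = sorted(raw_mentions)
    corrected = [raw_mentions[d] * true_mention_rate[d] for d in diseases]
    prevalence = [adjusted_prevalence[d] for d in diseases]

    r, p = pearsonr(corrected, prevalence)
    print(f"Pearson r = {r:.3f} (P = {p:.3f})")

With real data, the ambiguity rate would come from labeling a sample of tweets per disease, and the demographic adjustment from reweighting survey respondents to match the Twitter user population.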


  • DeepDyve
  • Pubget (overpriced)
  • Fetch abstract from NCBI

    Linkout box