Use this search suite to scan PubMed, PubMed Central, journal hosts, and journal archives for full text.
Send your search term to multiple engines at once (a sketch of the fan-out follows below).
A dictionary built from aggregated review articles in nephrology, medicine, and the life sciences:
your one-stop pathway from a search term straight to the PDF of peer-reviewed, on-topic knowledge.
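A minimal sketch of how one search term could fan out to several engines at once, assuming the public NCBI E-utilities esearch endpoint for the PubMed and PMC databases; the helper name search_ids and the chosen databases are illustrative assumptions, not this site's actual code.

    # Fan one search term out to several NCBI databases via E-utilities esearch.
    # Assumption: plain urllib against the documented esearch JSON interface.
    import json
    import urllib.parse
    import urllib.request

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def search_ids(term: str, db: str, retmax: int = 20) -> list:
        """Return up to retmax record IDs matching term in one NCBI database."""
        query = urllib.parse.urlencode(
            {"db": db, "term": term, "retmax": retmax, "retmode": "json"}
        )
        with urllib.request.urlopen(f"{EUTILS}?{query}") as resp:
            data = json.load(resp)
        return data["esearchresult"]["idlist"]

    if __name__ == "__main__":
        term = "youtube medical education review"
        for db in ("pubmed", "pmc"):  # one query, multiple engines
            print(db, search_ids(term, db))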

Fetch abstract from NCBI

DOI: 10.2196/mededu.8527
Sci-Hub: http://scihub22266oqcxt.onion/10.2196/mededu.8527
Fetch PDF from Google Scholar
PMC5826977 · PMID 29434018
Free PDF from Europe PMC (PMID 29434018)
PDF from PMC (free)
HTML from PMC (free; a sketch of resolving the DOI to these PMC links follows below)
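A minimal sketch of how the DOI above could be resolved to the free PMC links listed here, assuming NCBI's public PMC ID Converter service; the PDF and HTML URL patterns are the conventional PMC ones and are assumptions, not this site's implementation.

    # Resolve a DOI to its PMID/PMCID with the PMC ID Converter, then build
    # the conventional PMC article URLs. Unresolvable IDs come back with a
    # "status": "error" entry instead of a pmcid, so check before using it.
    import json
    import urllib.parse
    import urllib.request

    IDCONV = "https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/"

    def doi_to_ids(doi: str) -> dict:
        """Map a DOI to a record with pmid/pmcid keys when resolvable."""
        query = urllib.parse.urlencode({"ids": doi, "format": "json"})
        with urllib.request.urlopen(f"{IDCONV}?{query}") as resp:
            return json.load(resp)["records"][0]

    if __name__ == "__main__":
        ids = doi_to_ids("10.2196/mededu.8527")
        pmcid = ids["pmcid"]  # e.g. "PMC5826977"
        print("HTML:", f"https://www.ncbi.nlm.nih.gov/pmc/articles/{pmcid}/")
        print("PDF: ", f"https://www.ncbi.nlm.nih.gov/pmc/articles/{pmcid}/pdf/")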

PMID 29434018 · JMIR Med Educ 2018;4(1)


  • Medical YouTube Videos and Methods of Evaluation: Literature Review
  • Basch C; Hassona Y; Drozd B; Couvillon E; Suarez A
  • JMIR Med Educ 2018 Jan;4(1). PMID: 29434018
  • Background: Online medical education has relevance to public health literacy and physician efficacy, yet it requires a certain standard of reliability. While the internet has the potential to be a viable medical education tool, the viewer must be able to discern which information is reliable. Objective: Our aim was to perform a literature review to determine and compare the various methods used when analyzing YouTube videos for patient education efficacy, information accuracy, and quality. Methods: In November 2016, a comprehensive search within PubMed and Embase resulted in 37 included studies. Results: The review revealed that each video evaluation study first established search terms, exclusion criteria, and methods to analyze the videos in a consistent manner. The majority of the evaluators devised a scoring system, but variations were innumerable within each study's methods. Conclusions: In comparing the 37 studies, we found that overall, common steps were taken to evaluate the content. However, a concrete set of methods did not exist. This is notable since many patients turn to the internet for medical information yet lack the tools to evaluate the advice being given. There was, however, a common aim of discovering what health-related content the public is accessing, and how credible that material is.


Linkout box (a sketch of building this box from LinkOut data follows below)
  • DeepDyve
  • Pubget (overpriced)
  • Fetch abstract from NCBI
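A minimal sketch of how a Linkout box like this could be populated, assuming NCBI's E-utilities elink endpoint with cmd=llinks, which lists the LinkOut providers registered for a PMID; the exact JSON layout is an assumption here, so the parsing is deliberately defensive.

    # List LinkOut providers (full-text hosts, etc.) for one PubMed record.
    # Assumption: elink's cmd=llinks JSON nests linksets -> idurllist -> objurls;
    # every lookup below uses .get() in case a level is absent.
    import json
    import urllib.parse
    import urllib.request

    ELINK = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

    def linkout_urls(pmid: str) -> list:
        """Return (provider, url) pairs that LinkOut advertises for one PMID."""
        query = urllib.parse.urlencode(
            {"dbfrom": "pubmed", "id": pmid, "cmd": "llinks", "retmode": "json"}
        )
        with urllib.request.urlopen(f"{ELINK}?{query}") as resp:
            data = json.load(resp)
        pairs = []
        for linkset in data.get("linksets", []):
            for idurl in linkset.get("idurllist", []):
                for objurl in idurl.get("objurls", []):
                    provider = objurl.get("provider", {}).get("name", "?")
                    url = objurl.get("url", {}).get("value", "?")
                    pairs.append((provider, url))
        return pairs

    if __name__ == "__main__":
        for provider, url in linkout_urls("29434018"):
            print(f"{provider}: {url}")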