From 4708d783844f09aba7649ad240bc7186cea9c3fd Mon Sep 17 00:00:00 2001
From: "Anna M. Wilke"
Date: Tue, 31 May 2022 11:57:48 -0400
Subject: [PATCH 1/3] adding a missing citation

---
 meta-analysis/meta-analysis.Rmd  |  4 +++-
 meta-analysis/meta-analysis.html | 21 ++++++++++++++-------
 2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/meta-analysis/meta-analysis.Rmd b/meta-analysis/meta-analysis.Rmd
index 3e334d9..9c5b7cd 100644
--- a/meta-analysis/meta-analysis.Rmd
+++ b/meta-analysis/meta-analysis.Rmd
@@ -64,7 +64,9 @@
 Beware of meta-analyses that combine experimental and observational estimates.
 ==
 Because meta-analyses draw their data from reported results, publication bias presents a serious threat to the interpretability of meta-analytic results. If the only results that see the light of day are splashy or statistically significant, meta-analysis may simply amplify publication bias. Methodological guidance to meta-analytic researchers therefore places special emphasis on conducting and carefully documenting a broad-ranging search for relevant studies, whether published or not, including languages other than English. This task is, in principle, aided by pre-registration of studies in public archives; unfortunately, pre-registration in the social sciences is not sufficiently comprehensive to make this a dependable approach on its own.
-When assembling a meta-analysis, it is often impossible to know whether one has missed relevant studies. Some statistical methods have been developed in order to detect publication bias, but these tests tend to have low power and therefore may give more reassurance than is warranted. For example, one common approach is to construct a scatterplot to assess the relationship between study size (whether measured by the N of subjects or the standard error of the estimated treatment effect) and effect size. A telltale symptom of publication bias is a tendency for smaller studies to produce larger effects (as would be the case if studies were published only if they showed statistically significant results; to reach the significance bar, small studies (with large standard errors) would need to generate larger effect estimates. Unfortunately, this test often produces ambiguous results (CITE), and methods to correct publication bias in the wake of such diagnostic tests (e.g., the trim-and-fill method) may do little to reduce bias. Given growing criticism of statistical tests for publication bias and accompanying statistical correctives, there is an increasing sense that the quality of a meta-analysis hinges on whether research reports in a given domain can be assembled in a comprehensive manner.
+When assembling a meta-analysis, it is often impossible to know whether one has missed relevant studies. Some statistical methods have been developed in order to detect publication bias, but these tests tend to have low power and therefore may give more reassurance than is warranted. For example, one common approach is to construct a scatterplot to assess the relationship between study size (whether measured by the N of subjects or the standard error of the estimated treatment effect) and effect size. A telltale symptom of publication bias is a tendency for smaller studies to produce larger effects (as would be the case if studies were published only if they showed statistically significant results; to reach the significance bar, small studies (with large standard errors) would need to generate larger effect estimates). Unfortunately, this test often produces ambiguous results (Bürkner and Doebler 2014),[^7] and methods to correct publication bias in the wake of such diagnostic tests (e.g., the trim-and-fill method) may do little to reduce bias. Given growing criticism of statistical tests for publication bias and accompanying statistical correctives, there is an increasing sense that the quality of a meta-analysis hinges on whether research reports in a given domain can be assembled in a comprehensive manner.
+
+[^7]: Bürkner, P. C., & Doebler, P. (2014). Testing for publication bias in diagnostic meta-analysis: A simulation study. *Statistics in Medicine, 33*(18), 3061-3077.
 
 9. Modeling inter-study heterogeneity using meta-regression
 ==
diff --git a/meta-analysis/meta-analysis.html b/meta-analysis/meta-analysis.html
index c93fa28..955cfac 100644
--- a/meta-analysis/meta-analysis.html
+++ b/meta-analysis/meta-analysis.html
@@ -266,6 +266,7 @@
 
 
 
+
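As an aside to the patched passage: the sketch below shows, in R, what the funnel-plot diagnostic and trim-and-fill correction described in the new text can look like in practice. It is a minimal illustration, not part of the patch; the `metafor` package and its bundled `dat.bcg` example data are assumptions made here for demonstration, since the guide itself does not prescribe an implementation.

```r
# Minimal sketch of the publication-bias diagnostics discussed above,
# assuming the 'metafor' package. dat.bcg (an example dataset shipped
# with metafor) stands in for a real collection of study reports.
library(metafor)

# Compute study-level log relative risks (yi) and sampling variances (vi)
dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
              ci = cpos, di = cneg, data = dat.bcg)

# Pool the estimates with a random-effects model
res <- rma(yi, vi, data = dat)

# Funnel plot: effect size against standard error. Asymmetry, with small
# studies clustering at large effects, is the telltale symptom noted in
# the text.
funnel(res)

# Regression test for funnel-plot asymmetry (low power in small
# meta-analyses, as the text cautions)
regtest(res)

# Trim-and-fill: imputes presumed "missing" studies and re-estimates the
# pooled effect
trimfill(res)
```

If the trim-and-fill estimate differs markedly from `res`, that is a warning sign, though, as the patched text notes, such corrections may do little to remove the underlying bias.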