Merge pull request #64 from egap/Conducting-a-Meta-Analysis
Conducting a meta analysis
jwbowers authored Jun 2, 2022
2 parents 27e2b8e + f9f9ca8 commit dd86265
Showing 2 changed files with 32 additions and 14 deletions.
10 changes: 7 additions & 3 deletions meta-analysis/meta-analysis.Rmd
@@ -64,19 +64,23 @@ Beware of meta-analyses that combine experimental and observational estimates.
==
Because meta-analyses draw their data from reported results, publication bias presents a serious threat to the interpretability of meta-analytic results. If the only results that see the light of day are splashy or statistically significant, meta-analysis may simply amplify publication bias. Methodological guidance to meta-analytic researchers therefore places special emphasis on conducting and carefully documenting a broad-ranging search for relevant studies, whether published or not, including studies in languages other than English. This task is, in principle, aided by pre-registration of studies in public archives; unfortunately, pre-registration in the social sciences is not sufficiently comprehensive to make this a dependable approach on its own.

When assembling a meta-analysis, it is often impossible to know whether one has missed relevant studies. Some statistical methods have been developed to detect publication bias, but these tests tend to have low power and therefore may give more reassurance than is warranted. For example, one common approach is to construct a scatterplot to assess the relationship between study size (whether measured by the N of subjects or the standard error of the estimated treatment effect) and effect size. A telltale symptom of publication bias is a tendency for smaller studies to produce larger effects (as would be the case if studies were published only if they showed statistically significant results; to reach the significance bar, small studies (with large standard errors) would need to generate larger effect estimates). Unfortunately, this test often produces ambiguous results (CITE), and methods to correct publication bias in the wake of such diagnostic tests (e.g., the trim-and-fill method) may do little to reduce bias. Given growing criticism of statistical tests for publication bias and accompanying statistical correctives, there is an increasing sense that the quality of a meta-analysis hinges on whether research reports in a given domain can be assembled in a comprehensive manner.
When assembling a meta-analysis, it is often impossible to know whether one has missed relevant studies. Some statistical methods have been developed to detect publication bias, but these tests tend to have low power and therefore may give more reassurance than is warranted. For example, one common approach is to construct a scatterplot to assess the relationship between study size (whether measured by the N of subjects or the standard error of the estimated treatment effect) and effect size. A telltale symptom of publication bias is a tendency for smaller studies to produce larger effects (as would be the case if studies were published only if they showed statistically significant results; to reach the significance bar, small studies (with large standard errors) would need to generate larger effect estimates). Unfortunately, this test often produces ambiguous results (Bürkner and Doebler 2014),[^7] and methods to correct publication bias in the wake of such diagnostic tests (e.g., the trim-and-fill method) may do little to reduce bias. Given growing criticism of statistical tests for publication bias and accompanying statistical correctives, there is an increasing sense that the quality of a meta-analysis hinges on whether research reports in a given domain can be assembled in a comprehensive manner.

[^7]: Bürkner, P. C., & Doebler, P. (2014). Testing for publication bias in diagnostic meta‐analysis: a simulation study. *Statistics in Medicine, 33(18)*, 3061-3077.
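
To make these diagnostics concrete, here is a minimal sketch in R using the `metafor` package. The choice of package and the effect estimates below are illustrative assumptions on our part, not data from any of the studies discussed:

```r
# Sketch: common publication-bias diagnostics via the metafor package.
# yi holds hypothetical per-study effect estimates; vi their sampling variances.
library(metafor)

dat <- data.frame(
  yi = c(0.45, 0.30, 0.12, 0.60, 0.08, 0.25),  # hypothetical effect estimates
  vi = c(0.10, 0.05, 0.01, 0.12, 0.02, 0.04)   # hypothetical sampling variances
)

res <- rma(yi, vi, data = dat, method = "REML")  # random-effects meta-analysis

funnel(res)    # scatterplot of estimates against standard errors
regtest(res)   # Egger-type regression test for funnel-plot asymmetry
trimfill(res)  # trim-and-fill adjusted estimate
```

As the text cautions, a symmetric funnel plot or a non-significant asymmetry test is weak reassurance, and the trim-and-fill estimate may do little to remove bias; none of these diagnostics substitutes for a comprehensive search.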

9. Modeling inter-study heterogeneity using meta-regression
==
Researchers often seek to investigate systematic sources of treatment effect heterogeneity. These systematic sources may reflect differences among subjects (Do certain drugs work especially well for men or women?), contexts (Do lab studies of exposure to mass media produce stronger effects than field studies?), outcomes (Are treatment effects especially large when outcomes are measured via opinion surveys as opposed to direct observation of behavior?), or treatments (Are partisan messages more effective at mobilizing voters than nonpartisan messages?). Quite often, these questions are best studied directly, via an experimental design. For example, variation in treatment may be studied by randomly assigning different treatment arms. Variation in effects associated with different outcome measures may also be studied in the context of a given experiment by gathering data on more than one outcome or by randomly assigning how outcomes are measured.

A second-best approach is to compare studies that differ on one or more of these dimensions (subjects, treatments, context, or outcomes). The drawback of this approach is that it is essentially descriptive rather than causal – the researcher is characterizing the features of studies that contribute to especially large or small effect sizes. That said, this exercise can be conducted via meta-regression: the estimated effect size is the dependent variable, while study attributes (e.g., whether outcomes were measured through direct observation or via survey self-reports) constitute the independent variables. Note that meta-regression is a generalization of random effects meta-analysis, with measured predictors of effect sizes as well as unmeasured sources of heterogeneity.
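
As a sketch of what such a meta-regression looks like in R, again using the `metafor` package with hypothetical data (the moderator coding below is ours, purely for illustration):

```r
# Sketch: meta-regression with one study-level moderator, using metafor.
# self_report codes whether outcomes were measured via survey self-reports (1)
# or direct observation of behavior (0); all values are hypothetical.
library(metafor)

dat <- data.frame(
  yi          = c(0.45, 0.30, 0.12, 0.60, 0.08, 0.25),  # effect estimates
  vi          = c(0.10, 0.05, 0.01, 0.12, 0.02, 0.04),  # sampling variances
  self_report = c(1, 1, 0, 1, 0, 0)                     # study attribute
)

# Mixed-effects model: a measured moderator plus residual (unmeasured) heterogeneity
res <- rma(yi, vi, mods = ~ self_report, data = dat, method = "REML")
summary(res)
```

Consistent with the caveat above, the coefficient on `self_report` describes how effect sizes covary with this study attribute; it does not identify a causal effect of the measurement mode.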

Since meta-analysis is a technique for combining information across different studies, we do not here discuss the detection or modeling of heterogeneous treatment effects within any single study.
Since meta-analysis is a technique for combining information across different studies, we do not here discuss the detection or modeling of heterogeneous treatment effects within any single study. See our guide [10 Things to Know About Heterogeneous Treatment Effects](https://egap.org/resource/10-things-to-know-about-heterogeneous-treatment-effects/) for more on this topic.

10. Methods for assessing the accuracy of meta-analytic results
==
A skeptic might ask whether meta-analysis improves our understanding of cause and effect in any practical way. Do we learn anything from pooling existing studies via a weighted average versus presenting the studies one at a time and leaving the synthesis to the reader? To address this question, EGAP conducted an experiment among the academics and policy experts attending a conference held to reveal the results of the first round of EGAP’s Metaketa Initiative, which focused on conducting a coordinated meta-analysis of the impact of [information and accountability programs on electoral outcomes](https://egap.org/our-work-0/the-metaketa-initiative/round1-information-accountability/). The round consisted of six studies testing the same causal mechanism.

To test the idea that accumulated knowledge (in the form of meta-analysis) allows for better inferences about the effect of a given program, the Metaketa committee randomized the audience to hear presentations of the meta-analysis, each component study, a placebo, and an external study of a similar intervention that was not part of the Metaketa round or the subsequent meta-analysis. Each group of participants was left unexposed to one of these studies and was then asked to predict the findings of the one study it had not seen, allowing the committee to measure the effect of each study type on attendees’ predictive abilities. The resulting analysis found that exposure to the meta-analysis led to greater accuracy in predicting the effect in the left-out study in comparison to the external study (which, as a reminder, was not part of the meta-analysis in any way). For more on this Metaketa round, along with a more substantial discussion of this “evidence summit,” look for the upcoming book Information, Accountability, and Cumulative Learning: Lessons from Metaketa I.
To test the idea that accumulated knowledge (in the form of meta-analysis) allows for better inferences about the effect of a given program, the Metaketa committee randomized the audience to hear presentations of the meta-analysis, each component study, a placebo, and an external study of a similar intervention that was not part of the Metaketa round or the subsequent meta-analysis. Each group of participants was left unexposed to one of these studies and was then asked to predict the findings of the one study it had not seen, allowing the committee to measure the effect of each study type on attendees’ predictive abilities. The resulting analysis found that exposure to the meta-analysis led to greater accuracy in predicting the effect in the left-out study in comparison to the external study (which, as a reminder, was not part of the meta-analysis in any way). For more on this Metaketa round, along with a more substantial discussion of this “evidence summit,” see the book Information, Accountability, and Cumulative Learning: Lessons from Metaketa I.[^8]

[^8]: Dunning, T., Grossman, G., Humphreys, M., Hyde, S. D., McIntosh, C., & Nellis, G. (Eds.). (2019). *Information, accountability, and cumulative learning: Lessons from Metaketa I.* Cambridge: Cambridge University Press.
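
The logic of judging pooled evidence by its out-of-sample predictions can be sketched in code as well. The leave-one-out exercise below is our own illustration with hypothetical estimates, not the analysis conducted at the evidence summit:

```r
# Sketch: leave-one-out prediction from a random-effects meta-analysis.
# For each study k, pool the remaining studies and record how far the pooled
# estimate falls from study k's observed effect. All values are hypothetical.
library(metafor)

yi <- c(0.45, 0.30, 0.12, 0.60, 0.08, 0.25)  # hypothetical study estimates
vi <- c(0.10, 0.05, 0.01, 0.12, 0.02, 0.04)  # hypothetical sampling variances

errors <- sapply(seq_along(yi), function(k) {
  pooled <- rma(yi[-k], vi[-k], method = "REML")  # meta-analysis without study k
  abs(as.numeric(coef(pooled)) - yi[k])           # absolute prediction error
})
round(errors, 3)

# metafor's leave1out(rma(yi, vi)) returns the same leave-one-out pooled
# estimates directly.
```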

36 changes: 25 additions & 11 deletions meta-analysis/meta-analysis.html
@@ -266,6 +266,7 @@




<style type="text/css">
.main-container {
max-width: 940px;
@@ -287,6 +288,9 @@
summary {
display: list-item;
}
details > summary > p:only-child {
display: inline;
}
pre code {
padding: 0;
}
@@ -600,13 +604,13 @@ <h1>8. Publication bias as a threat to meta-analysis</h1>
statistically significant results; to reach the significance bar, small
studies (with large standard errors) would need to generate larger
effect estimates). Unfortunately, this test often produces ambiguous
results (CITE), and methods to correct publication bias in the wake of
such diagnostic tests (e.g., the trim-and-fill method) may do little to
reduce bias. Given growing criticism of statistical tests for
publication bias and accompanying statistical correctives, there is an
increasing sense that the quality of a meta-analysis hinges on whether
research reports in a given domain can be assembled in a comprehensive
manner.</p>
results (Bürkner and Doebler 2014),<a href="#fn7" class="footnote-ref" id="fnref7"><sup>7</sup></a> and methods to correct publication bias in
the wake of such diagnostic tests (e.g., the trim-and-fill method) may
do little to reduce bias. Given growing criticism of statistical tests
for publication bias and accompanying statistical correctives, there is
an increasing sense that the quality of a meta-analysis hinges on
whether research reports in a given domain can be assembled in a
comprehensive manner.</p>
</div>
<div id="modeling-inter-study-heterogeneity-using-meta-regression" class="section level1">
<h1>9. Modeling inter-study heterogeneity using meta-regression</h1>
@@ -638,7 +642,10 @@ <h1>9. Modeling inter-study heterogeneity using meta-regression</h1>
as unmeasured sources of heterogeneity.</p>
<p>Since meta-analysis is a technique for combining information across
different studies, we do not here discuss the detection or modeling of
heterogeneous treatment effects within any single study.</p>
heterogeneous treatment effects within any single study. See our guide
<a href="https://egap.org/resource/10-things-to-know-about-heterogeneous-treatment-effects/">10
Things to Know About Heterogeneous Treatment Effects</a> for more on
this topic.</p>
</div>
<div id="methods-for-assessing-the-accuracy-of-meta-analytic-results" class="section level1">
<h1>10. Methods for assessing the accuracy of meta-analytic results</h1>
@@ -668,9 +675,9 @@ <h1>10. Methods for assessing the accuracy of meta-analytic results</h1>
greater accuracy in predicting the effect in the left-out study in
comparison to the external study (which, as a reminder, was not part of
the meta-analysis in any way). For more on this Metaketa round, along
with a more substantial discussion of this “evidence summit,” look for
the upcoming book Information, Accountability, and Cumulative Learning:
Lessons from Metaketa I.</p>
with a more substantial discussion of this “evidence summit,” see the
book Information, Accountability, and Cumulative Learning: Lessons from
Metaketa I.<a href="#fn8" class="footnote-ref" id="fnref8"><sup>8</sup></a></p>
</div>
<div class="footnotes footnotes-end-of-document">
<hr />
@@ -691,6 +698,13 @@ <h1>10. Methods for assessing the accuracy of meta-analytic results</h1>
<li id="fn6"><p>Lau, R.R., Sigelman, L., &amp; Rovner, I.B. (2007). The
Effects of Negative Political Campaigns: A Meta‐Analytic Reassessment.
<em>The Journal of Politics, 69(4)</em>, 1176-1209.<a href="#fnref6" class="footnote-back">↩︎</a></p></li>
<li id="fn7"><p>Bürkner, P. C., &amp; Doebler, P. (2014). Testing for
publication bias in diagnostic meta‐analysis: a simulation study.
<em>Statistics in Medicine, 33(18)</em>, 3061-3077.<a href="#fnref7" class="footnote-back">↩︎</a></p></li>
<li id="fn8"><p>Dunning, T., Grossman, G., Humphreys, M., Hyde, S. D.,
McIntosh, C., &amp; Nellis, G. (Eds.). (2019). <em>Information,
accountability, and cumulative learning: Lessons from Metaketa I.</em>
Cambridge: Cambridge University Press.<a href="#fnref8" class="footnote-back">↩︎</a></p></li>
</ol>
</div>

