Updating information on metaketa 1
amwilk committed May 31, 2022
1 parent 2bcd185 commit f9f9ca8
Showing 2 changed files with 10 additions and 4 deletions.
4 changes: 3 additions & 1 deletion meta-analysis/meta-analysis.Rmd
@@ -80,5 +80,7 @@ Since meta-analysis is a technique for combining information across different st
==
A skeptic might ask whether meta-analysis improves our understanding of cause-and-effect in any practical way. Do we learn anything from pooling existing studies via a weighted average, versus presenting the studies one at a time and leaving the synthesis to the reader? To address this question, EGAP conducted an experiment among the academics and policy experts attending a conference convened to reveal the results of the first round of EGAP’s Metaketa Initiative, which focused on conducting a coordinated meta-analysis on the impact of [information and accountability programs on electoral outcomes](https://egap.org/our-work-0/the-metaketa-initiative/round1-information-accountability/). The round consisted of six studies measuring the impact of the same causal mechanism.

To test the idea that accumulated knowledge (in the form of meta-analysis) allows for better inferences about the effect of a given program, the Metaketa committee randomized the audience to hear a presentation of the meta-analysis, each component study, a placebo, and an external study of a similar intervention that was not part of the Metaketa round or the subsequent meta-analysis. Each group of participants was not exposed to one of the above group of studies. And the participants were asked to predict the results of the left out study. This allowed the committee to measure the effect of each study type on attendees’ predictive abilities. The event attendees were then asked to predict the findings of the one study they had not yet seen. The resulting analysis found that exposure to the meta-analysis led to greater accuracy in predicting the effect in the left-out study in comparison to the external study (which, as a reminder, was not part of the meta-analysis in any way). For more on this Metaketa round, along with a more substantial discussion of this “evidence summit” look for the upcoming book Information, Accountability, and Cumulative Learning: Lessons from Metaketa I.
To test the idea that accumulated knowledge (in the form of meta-analysis) allows for better inferences about the effect of a given program, the Metaketa committee randomized the audience to hear a presentation of the meta-analysis, each component study, a placebo, and an external study of a similar intervention that was not part of the Metaketa round or the subsequent meta-analysis. Each group of participants was exposed to all but one of these studies and was then asked to predict the results of the left-out study, allowing the committee to measure the effect of each study type on attendees’ predictive accuracy. The resulting analysis found that exposure to the meta-analysis led to greater accuracy in predicting the effect in the left-out study in comparison to the external study (which, as a reminder, was not part of the meta-analysis in any way). For more on this Metaketa round, along with a more substantial discussion of this “evidence summit,” see the book Information, Accountability, and Cumulative Learning: Lessons from Metaketa I.[^8]

[^8]: Dunning, T., Grossman, G., Humphreys, M., Hyde, S. D., McIntosh, C., & Nellis, G. (Eds.). (2019). *Information, accountability, and cumulative learning: Lessons from Metaketa I.* Cambridge: Cambridge University Press.
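The “weighted average” pooling mentioned above is, in its simplest fixed-effect form, an inverse-variance weighted mean of the study estimates: studies with smaller standard errors get larger weights. A minimal sketch in Python; the six effect sizes and standard errors below are hypothetical illustrations, not the Metaketa estimates:

```python
import math

def fixed_effect_meta(effects, ses):
    """Fixed-effect (inverse-variance) pooled estimate and its standard error.

    effects: per-study effect estimates
    ses:     per-study standard errors
    """
    weights = [1.0 / se**2 for se in ses]          # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))      # SE of the pooled estimate
    return pooled, pooled_se

# Hypothetical results from six studies of the same intervention
effects = [0.10, 0.25, -0.05, 0.02, 0.12, 0.08]
ses     = [0.05, 0.10,  0.08, 0.06, 0.09, 0.07]
est, se = fixed_effect_meta(effects, ses)
print(f"pooled estimate = {est:.3f}, SE = {se:.3f}")
```

Note that the pooled standard error is smaller than any single study’s, which is the statistical sense in which pooling “accumulates” knowledge; a random-effects model would additionally widen the weights to account for between-study heterogeneity.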

10 changes: 7 additions & 3 deletions meta-analysis/meta-analysis.html
@@ -675,9 +675,9 @@ <h1>10. Methods for assessing the accuracy of meta-analytic results</h1>
greater accuracy in predicting the effect in the left-out study in
comparison to the external study (which, as a reminder, was not part of
the meta-analysis in any way). For more on this Metaketa round, along
with a more substantial discussion of this “evidence summit” look for
the upcoming book Information, Accountability, and Cumulative Learning:
Lessons from Metaketa I.</p>
with a more substantial discussion of this “evidence summit,” see the
book Information, Accountability, and Cumulative Learning: Lessons from
Metaketa I.<a href="#fn8" class="footnote-ref" id="fnref8"><sup>8</sup></a></p>
</div>
<div class="footnotes footnotes-end-of-document">
<hr />
@@ -701,6 +701,10 @@ <h1>10. Methods for assessing the accuracy of meta-analytic results</h1>
<li id="fn7"><p>Bürkner, P. C., &amp; Doebler, P. (2014). Testing for
publication bias in diagnostic meta‐analysis: a simulation study.
<em>Statistics in Medicine, 33(18)</em>, 3061-3077.<a href="#fnref7" class="footnote-back">↩︎</a></p></li>
<li id="fn8"><p>Dunning, T., Grossman, G., Humphreys, M., Hyde, S. D.,
McIntosh, C., &amp; Nellis, G. (Eds.). (2019). <em>Information,
accountability, and cumulative learning: Lessons from Metaketa I.</em>
Cambridge: Cambridge University Press.<a href="#fnref8" class="footnote-back">↩︎</a></p></li>
</ol>
</div>

