---
title: "Behavioral and Data Science Insights Teams Principles [DRAFT]"
author: Jake Bowers
date: 4 Nov 2016
---

We are scholars working in government. We believe that the explanations, findings, and methods from the social, behavioral, and statistical sciences can improve the work of governments. In order to best serve and empower the public and our agency partners, we pledge to uphold the following principles:^[This document is inspired by the EGAP Research Principles.]

The general principle underlying these specific commitments is that the long-term success of efforts to improve public policy will depend on the scientific integrity and transparency of the work we do.^[This document is currently hosted at http://github.com/jwbowers/publicscience.]

DRAFT NOTE: These ideas are currently aspirational. Comments appreciated!

  1. We serve and protect the public

Our first responsibility is to the people who constitute and are served by the government. Protection of the human subjects of our research is our primary responsibility. This can include gaining approval for our research from Institutional Review Boards when appropriate, and it certainly involves following the standards for protection of human subjects established by the government in which we work or by the US Federal Government Common Rule, whichever affords the public the most protection.

  1. We empower governmental actors

Every project in government requires collaboration. No lasting change can occur if the people within the bureaucracies do not have a say in the process or do not learn some of the basic principles of policy evaluation, let alone appreciate some of the basic aspects of the study of human behavior from which we generate concrete policy recommendations. We recognize this and commit to nurturing this process. We are not government contractors. We are not management consultants. We want to give away the tools for assessment and policy creation to the people working in the agencies. And we want to include the ideas of those people in our testing.^[It turns out that new ideas from governmental actors create new scientific questions, too.]

  1. We test our proposals

The scholarly consensus rests on a foundation of data and studies from outside the governmental context. Although we believe that the theories that have received support from studies in academia ought to form the basis for our policy advice, we also recognize that our ideas may not translate easily or directly into policies affecting the lives of millions. Thus, we commit to testing our proposals. Whenever possible we will test our ideas using randomized field experiments. Randomization (1) allows for impersonal assessment of what works, (2) yields transparent, easy-to-convey explanations that admit little debate about methodology (the statistics of experiments are particularly clear and easy to explain), and (3) provides compellingly interpretable comparisons (borrowing from Kinder and Palfrey 1993 on how experiments make it easier to learn about theory than observational comparisons do).

  1. We publish all findings

If we do not learn from our past efforts, then the public loses. Therefore, we (a) keep a public record of all experiments fielded, (b) publicize our findings, whether they are null or whether they support or oppose our own prior expectations or hopes, or those of government leaders, and (c) submit our work for publication via peer review.^[Peer review holds us to a different set of standards than might be applied within the government or our town or social networks.]

  1. We work in a transparent manner to enhance learning and credibility

If we are to change public policy, we must produce credible results. Publicized results from randomized experiments arising from close and equal collaborations between scholars and governmental actors help with credibility. We also must do our work at the highest standards of our sciences: this includes pre-registering analyses and writing and releasing code under the assumption that others will want to reproduce our work, or at least learn from our mistakes, successes, and strategies.

Our work is high stakes. So we may also pursue other strategies, such as relying only on publicly available open-source tools, instituting secondary code-blind re-analysis of results either within teams or using outside volunteers or research assistants, subjecting our work to peer review, ...

  1. We follow a public standard operating procedure

We publicize our process. In addition to releasing our code, we will make public our general guidelines for the decisions we make during analysis and design. That is, we try to explain why we make the choices that we do (or did).^[We might further keep this SOP in a publicly version-controlled location so that our past decisions and rationales are recorded and available for scrutiny.] This enables us to learn from criticism and commentary, and it further enhances the credibility of our work as arising from fair and impartial processes. Beyond our work as policy advisors, our job as evaluators is to tell the truth. This truth-telling role (as opposed to a role seeking to bolster a particular point of view) is what gives us the power to help the public relate better to government and for government to better serve the people.^[The idea of an SOP is inspired by The Don Green Lab SOP by Winston Lin, Don Green, and Alex Coppock.]

  1. We work in the open
  - Public designs, methods, and results serve the public: other towns, states, or countries might implement a new policy that we tested in the federal government.
  - Openness teaches easily.^[For a model see our collaborators at 18F: https://pages.18f.gov/partnership-playbook/1-build-in-the-open/ and https://18f.gsa.gov/2014/07/31/working-in-public-from-day-1/]
  - If we model openness, then we have the opportunity to learn from new groups and from people not directly involved in our team or the broader multidisciplinary effort (e.g., "pull requests" on GitHub).
  - Sharing all of our designs, results, and materials as a norm, and doing research using impersonal methods (like randomized trials), makes it difficult for political agents to lie about or hide our results or process. So openness enhances our reputation and thus our impact on public policy.