Emma DeAngeli and Michael A. Livermore

About the Authors

Emma DeAngeli is a senior research analyst at Resources for the Future (RFF). She earned her Master of Environmental Management degree with a specialization in Environmental Economics and Policy from Duke University in 2022, focusing on community-based environmental management and climate change policy.

Michael A. Livermore is a professor of law at the University of Virginia School of Law. His scholarly work focuses on administrative regulation, environmental law, and the integration of artificial intelligence into legal analysis and decision-making. His research has been published in premier law reviews, including the Yale Law Journal and the University of Chicago Law Review, as well as in leading peer-reviewed journals spanning the legal, scientific, and social science disciplines. Prior to his appointment at the University of Virginia, Professor Livermore served as the founding Executive Director of the Institute for Policy Integrity at New York University School of Law, a think tank focused on enhancing the quality of government decision-making.

Acknowledgements

We gratefully acknowledge financial support from the Alfred P. Sloan Foundation (Grant G-2022019372) and the RFF Retrospective Studies of Regulatory Performance grant. All errors are our own.

About RFF

Resources for the Future (RFF) is an independent, nonprofit research institution in Washington, DC. Its mission is to improve environmental, energy, and natural resource decisions through impartial economic research and policy engagement. RFF is committed to being the most widely trusted source of research insights and policy solutions leading to a healthy environment and a thriving economy.

Working papers are research materials circulated by their authors for purposes of information and discussion.
They have not necessarily undergone formal peer review. The views expressed here are those of the individual authors and may differ from those of other RFF experts, its officers, or its directors.

Sharing Our Work

Our work is available for sharing and adaptation under an Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license. You can copy and redistribute our material in any medium or format; you must give appropriate credit, provide a link to the license, and indicate if changes were made, and you may not apply additional restrictions. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not use the material for commercial purposes. If you remix, transform, or build upon the material, you may not distribute the modified material. For more information, visit https://creativecommons.org/licenses/by-nc-nd/4.0/.

Executive Summary

In principle, there is a long-standing and broad bipartisan consensus in favor of robust ex-post evaluation of federal environmental policy. Proponents of ex-post evaluation note its many possible benefits, including identifying (and eliminating) ineffective programs, improving the quality of ex-ante analysis, and facilitating regulatory experimentation. In reality, a number of political, institutional, and cultural barriers have blocked the emergence of any consistent practice of ex-post review. These barriers have persisted for decades, through administrations of both political parties and during periods of both united and divided government. They arise from consistent features of the administrative state, rather than the temporary politics of any particular moment. Overcoming them will likely require sustained reform efforts.
Based in part on a series of interviews with senior government officials with decades of experience in environmental policymaking, this report identifies a set of ten interrelated barriers that have inhibited the development of a robust practice of environmental policy evaluation at federal agencies. These barriers are partially the result of political dynamics, but they also arise from agency institutional constraints, incentives, and cultures. Our research identifies the following leading barriers to ex-post evaluation:

1. Many agencies lack dedicated funding to carry out ex-post evaluation.
2. Agencies have highly constrained agenda space, making it difficult to prioritize ex-post evaluation.
3. Agencies face adverse political incentives to engage in self-scrutiny that may open them to political attacks.
4. Although agency personnel often have essential domain knowledge, they may lack the expertise to carry out rigorous ex-post analyses.
5. Agencies may lack (or believe they lack) legislative authority to carry out certain types of ex-post review.
6. It is often not clear how agencies should integrate the results of ex-post evaluation into decisionmaking, given that major regulatory decisions are rare.
7. The self-evaluations of agencies, even if carried out in good faith, may not be perceived as trustworthy by relevant stakeholders.
8. Ex-post evaluation often raises difficult questions of causal inference