
Government AI Hire, Use, Buy (HUB) Roundtable Series – Roundtable 4: Capstone

CSET | March 21, 2025

Introduction

The U.S. government has embraced the potential of artificial intelligence, but there are not yet sufficiently comprehensive strategies and guardrails across federal agencies to maximize AI public good applications while mitigating risks. In light of this gap, Georgetown's Government AI Hire, Use, Buy (HUB) Roundtable Series brought together leading voices to grapple with the legal liability questions that AI poses, examine AI's potential to transform government services, and consider how the government can better attract and use AI talent.

The federal government has taken action on AI over the last two administrations, with activity particularly ramping up in the last few years. Following the release of several advanced large language models, President Joe Biden signed the groundbreaking Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in October 2023.1 In March 2024, the Office of Management and Budget (OMB) released its government-wide policy, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, which complements the recently finalized European Union AI Act.2 Additionally, the Government Accountability Office conducted its audit of current AI uses by federal agencies.3 Much of this work built on early efforts by the first Trump administration, such as its Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which started the AI inventory process. However, many questions on the U.S. government's use, procurement, and hiring of AI tools and talent remain unaddressed.

Georgetown University's Center for Security and Emerging Technology and Beeck Center for Social Impact and Innovation, together with the Georgetown Law Institute for Technology Law and Policy, led a series of private, invitation-only roundtables over the course of 2024 to discuss these pressing questions. The resulting report was authored in 2024, following those discussions.

These events culminated in a capstone discussion on December 16, which aimed to synthesize the three prior roundtable findings into potential policy recommendations and a final set of key challenges for government agencies to consider as they continue to adopt and deploy AI. This roundtable, like the three prior discussions, was conducted under the Chatham House Rule and included participants from government, industry, civil society, and academia. All names, affiliations, and other identifying information have been omitted from this report.

Overview of Discussion

The capstone roundtable was initially framed around three key themes drawn from the previous discussions:

1. The need for greater agency flexibility and agility.
2. A suite of efforts, such as use case inventories and data organization, that agencies need to improve in order to lay better groundwork for adopting AI.
3. The importance of providing the government workforce with training and education to maximize human-machine teams.

While these themes served as the framework for the day's discussions, the conversation ultimately evolved organically, touching on these topics and many others raised by the group.

The pending transition in U.S. presidential administrations also sat as a key backdrop to much of the discussion. Participants considered areas where government AI activity might carry on just as before and ways in which the government's approach to AI might shift.
Key takeaways and recommendations voiced by participants included:

1. Common efforts to advance AI in government are likely to persist in the next administration, even if described differently.
2. Investment in time and personnel is necessary to maximize efficiency and the benefits that AI can offer to the government.
3. Transparency is key in order to maintain public trust and secure the social and political capital to apply AI fully.
4. Perspectives on AI as a tool for automation versus as an enabler for humans may vary, but the role of the human-machine team is likely to remain central in government.

Key Takeaways and Recommendations

"Similar in Substance, if Not Terminology"

A central topic of the day's discussion was the effect that the second Trump administration would have on government use of AI, particularly given Elon Musk's potentially formal and informal roles.

While the consensus was that it is too early to definitively say how things might differ, many agreed that there are a number of areas of AI policy where common cause could be identified between the Biden and Trump administrations.

One participant noted that agency AI inventory efforts began under President Trump's first term and that much of the language in current OMB guidance reflects that of the first Trump administration. Others pointed to different areas in which the Trump and Biden administrations' AI policies mirrored one another, even if their respective proponents would describe the work differently. One participant noted the similarities that are often evident in sectoral-based or other approaches that are pr