
Survey: Most Americans Believe Tech Companies Should Be Allowed to Set AI Limits

Information Technology · 2026-02-26 · ITIF

As the Department of War levels threats and ultimatums against Anthropic, Morning Consult conducted a nationally representative survey of 1,976 U.S. adults to better understand attitudes around the use of artificial intelligence (AI) in military actions, whether technology companies have a responsibility to set limits on their products, and how Americans view mass surveillance. The findings from the survey are as follows:

Americans Want Humans in Control of AI

The Big Picture: Americans are deeply skeptical of AI in military operations. Nearly 8 in 10 (79%) say a human being should always make the final decision before any use of lethal force, a view held equally by Democrats (81%) and Republicans (81%). Three-quarters (75%) say AI technology is not yet reliable enough to be trusted with life-or-death military decisions without human oversight (Dem: 77%, Rep: 73%). Concerns are intense and bipartisan.

Autonomous Weapons: Research, Don't Deploy

The public draws a clear line between understanding the technology and putting it on the battlefield. Only 21% support developing and deploying AI-controlled weapons (Dem: 16%, Rep: 35%). The plurality position (49%) is to research but not deploy (Dem: 54%, Rep: 44%), with another 13% opposing any research at all. 71% agree the U.S. should still research and develop AI-controlled weapons to understand the technology and defend against enemies who might use them against us, even if we choose not to deploy them (Dem: 72%, Rep: 79%). Republicans are notably divided: while 48% say the U.S. must develop these weapons to stay ahead of adversaries, 34% say they should be banned because they are too dangerous and unethical.

Surveillance: Americans Want Legal Process, Not Blank Checks

A majority (54%) say AI-powered mass surveillance is too dangerous and violates privacy and civil liberties (Dem: 63%, Rep: 45%), versus 30% who see it as necessary for safety.
Even Republicans are more likely to say mass surveillance is too dangerous (45%) than to call it necessary (40%). But the public isn't reflexively anti-security: 46% say the government should only be able to use AI surveillance on specific targets with a court-issued warrant (Dem: 45%, Rep: 51%). The constitutional principle is clear: 70% agree that using AI to monitor Americans without a court-issued warrant violates the Fourth Amendment's protection against unreasonable searches (Dem: 74%, Rep: 71%).

Americans Back Companies Setting Limits

Two-thirds (67%) believe private technology companies have a responsibility to set limits on how their products can be used, even if the government wants to use them differently (Dem: 73%, Rep: 65%). When the trade-off is explicit, 53% say private AI companies should be allowed to restrict how their technology is used, including banning its use for domestic surveillance or autonomous weapons (Dem: 58%, Rep: 43%), versus just 29% who say companies should be required to provide the military with full access to ensure national security.

On the Anthropic dispute specifically, half (50%) of those who are aware of the dispute view penalizing the company as government overreach that sets a dangerous precedent (Dem: 57%, Rep: 39%), while 35% call it necessary for national security. Among Republicans who are aware of the dispute, opinion is closely split and many are undecided: 44% say it's necessary, 39% call it overreach, and 16% are unsure.

Important Context: A Public Still Forming Its Views

Most Americans haven't engaged deeply with these issues yet. 56% have heard "not much" or "nothing at all" about the Anthropic–Department of War dispute; only 12% have heard "a lot." Opinion on specific policy tools remains unsettled: 30% are unsure about supply chain risk designations, and 20% are unsure about using emergency laws to compel company compliance.

The trust landscape is fragmented. No institution commands majority confidence on AI decisions.
The most trusted entity is an independent scientific or ethics review board (22%), followed by the military and AI companies (14% each). A quarter of Americans (25%) say they're simply not sure who to trust. Notably, 45% oppose using emergency laws to force AI company compliance (Dem: 57%, Rep: 29%), versus 35% who support it (Dem: 28%, Rep: 54%), but these numbers could shift as awareness grows.

Key Stats

Top-line findings for quick reference:

• 79% say a human should always make the final decision before any use of lethal force
• 75% say AI is not yet reliable enough to be trusted with life-or-death military decisions without human oversight
• 54% say AI-powered mass surveillance is too dangerous and violates privacy and civil liberties
• 70% agree that using AI to monitor Americans without a court-issued warrant violates the Fourth Amendment
• 67% say private tech companies have a responsibility to set limits on how their products can be used, even if the government disagrees
• 53% say AI companies should be allowed to restrict their technology from uses like domestic surveillance or autonomous weapons, vs. just 29% who say companies should be required to provide the military with full access