Responsible AI Impact Report
Urgent risks, emerging safeguards, and public-interest solutions impacting society

Introduction

How do we ensure that the rapid development of AI is more considerate of

In our inaugural Responsible AI Impact Report, All Tech Is Human (ATIH) aims to reveal our most urgent risks, emerging safeguards, and public-interest solutions, and provide a roadmap for how we will shape how AI impacts society in the year ahead. We examine the state of Responsible AI (RAI) throughout 2025 and highlight what we consider to be some of the most impactful resources.

We believe the Responsible AI field can only thrive if we effectively tackle the complex challenges at the intersection of technology and society. When we refer to “Responsible AI,” we mean AI that is well-regulated and guard-railed, governed and assured (documented, standardized, and

As we outlined in our recent Responsible Tech Guide (2025), our organization believes in a human-centered future that values our agency in desired outcomes and rejects tech determinism. As such, we are focused on elevating AI models that do as little harm as possible, for use cases in which risks have been carefully considered and meaningfully mitigated; and ethically deployed AI.

This Responsible AI Impact Report highlights the growing focus on Public Interest AI that is of, by, for, and in service to the people. This Public Interest AI should be applied to humanity’s most pressing challenges and enable us to reimagine what a better tech future entails. This report also explores a future in which Public Interest AI is developed on public infrastructures for an AI-literate

At the heart of the years ahead lies a defining question: who determines the purpose of AI and the kinds of lives it will shape?
“An AI-enabled future grounded in human dignity depends on institutions that can govern powerful technologies with a commitment to the public good. When societies build this kind of civic architecture, people gain the ability to direct technological development rather than be shaped by it. This report highlights how that architecture is emerging through civil society’s work: accountable standards, rigorous evaluations,

Vilas Dhar
President, The Patrick J. McGovern Foundation

Acknowledgements & Contributions

Sherine Kazim
Sandra Khalil
Leah Ferentinos
Merve Hickok
Michelle Shevin
Dr. Nathan C. Walker
Theodora Skeadas

Featured Organizations

MITRE
MLCommons
Montreal AI Ethics Institute
Mozilla Foundation
The OpenFold Consortium
Partnership on AI
The Patrick J. McGovern Foundation
Project Liberty

Executive Summary

This report examines the Responsible AI ecosystem in 2025, highlighting the field’s most impactful resources and tracing its contributions toward developing concrete governance, assurance, and

Five Key Takeaways

1. Responsible AI is shifting from principles to practice, with civil society leading the development of concrete governance tools. This includes standards, benchmarks, audits, and red-team methods that turn abstract values into verifiable, lifecycle-based

2. AI risks are intensifying across safety, security, privacy, fairness, and information integrity, driven by increasingly capable frontier and agentic systems that create new failure modes and exacerbate existing harms. The report examines synthetic media

3. Public AI is emerging as a powerful alternative to proprietary AI, emphasizing shared compute, community-governed datasets, open safety tools, and public-interest institutions so that universities, nonprofits, governments, and communities,

4. Societal impacts such as labor displacement, climate burdens, economic concentration, and threats to democracy require whole-of-society governance, including stronger rights-anchored regulations, global standards alignment, and

5. A central challenge for 2026 is determining who
decides what AI is for, with our report calling for expanded RAI literacy, stronger information-integrity defenses, clearer safeguards for high-risk uses (e.g., AI companions, synthetic media), and a

The Audience for the Responsible AI Impact Report

This report has been written for anyone committed to ensuring AI is well-regulated, less harmful, publicly accountable, and aligned with societal needs. Similar to every All Tech Is Human initiative, we have designed it for a multistakeholder, cross-sector audience. Intended audiences include:

AI governance practitioners in industry
Civil society organizations
Researchers and academics
Government regulators and policymakers
Public-interest technologists
Educators, students, and RAI newcomers
Philanthropic funders supporting Responsible Tech

Civil society organizations make the case that long-term AI leadership requires advancing people-first alternatives centered on public well-being, shared prosperity, and democratic accountability rather than frontier model exceptionalism. In parallel, new work on model risks, safety, security, privacy, bias and fairness, human rights, labor, climate, and economic concentration reveals that frontier and agentic systems are amplifying existing harms while creating new ones, from scaled