Generative AI in K–12 Education: Challenges and Opportunities
Updated August 2024

Table of Contents
Introduction
How to read this paper
Background
Responsible AI
Education-specific considerations
Examples of principles and guidelines for responsible AI
Cautionary notes
How Americans view AI
General attitudes
Awareness and knowledge
Usage
AI literacy among teachers
The state of school policy
Policy approaches in the U.S., across the Atlantic, and beyond
United States
European Union
United Kingdom and Australia
Governance models
Opportunities
Student-supporting
Adaptive learning
Creativity
Project-based learning and collaboration
Teacher-supporting
Lesson planning and content differentiation
Coaching and feedback
Grading and assessments
System-supporting
Data interoperability
Administration and logistics
Parent engagement
Seckinger High: All In on AI
Risks, harms, and challenges
Inaccuracies and hallucinations
Bias and inequity
Lack of representation
Information censorship
Cheating and plagiarism
Overreliance on technology and loss of critical thinking
Data security
Consent and privacy
Human replacement
What students think
Recommendations
Conclusion
Acknowledgments
APPENDIX
Resources, links, and information
Educator snapshot: how one social studies teacher is navigating AI
Educator snapshot: how one fourth grade teacher uses Kyron Learning
Educator snapshot: how one fourth grade math teacher uses TeachFX
About

Background

Ask 10 people the definition of artificial intelligence and you'll get 20 answers. Merriam-Webster defines AI as "the capability of computer systems or algorithms to imitate intelligent human behavior."5 That sums it up succinctly, but other, more technically accurate definitions refrain from any reference to humans, given the risks of anthropomorphizing the technology. The European Commission's High-Level Expert Group on Artificial Intelligence, for example, defines AI as "systems that display intelligent behavior by analyzing their environment and taking actions—with some degree of autonomy—to achieve specific goals."6 And the Google Developers glossary uses this definition: "a non-human program or model that can solve sophisticated tasks."7 AI has been around for decades, from recommendation engines on Netflix and voice assistants such as Siri to customer service chatbots, search engines, and monitoring systems that predict when factory machines will require maintenance. This earlier form of AI performs discrete tasks and responds to a particular set of inputs.8 These systems can learn from data and make predictions based on that data, but they're not producing original content.

Generative AI, on the other hand, can generate content, such as text, images, audio, and video, that is complex, coherent, and original, in response to a submitted prompt or query, by learning from large reference sets of existing data.9 Alexa setting a timer for your turkey while your hands are covered in raw poultry goop is convenient, but ChatGPT generating an original sonnet about your second cousin in the style of Petrarch is pretty mind-blowing. (A short illustrative sketch below makes this difference concrete.)

The spectrum of reactions to generative AI we encountered as we researched this report is similar to the spectrum in the AI industry itself, though not as polarized. On one end, "doomers" advocate for a slower approach to AI development due to concerns about the speculative risks that some believe it poses to humankind's future survival.10 On the other end, "boomers" believe the positive benefits far outweigh any risks, and that development can't and shouldn't be curbed.
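The sketch below is a minimal, illustrative contrast between the two kinds of systems described above: a tiny scikit-learn classifier stands in for "earlier" predictive AI, which maps a fixed input to a discrete output, while a call to a hosted chat model through the `openai` Python SDK stands in for generative AI, which produces new content from a prompt. The toy data, the model name, and the prompt are assumptions chosen only for illustration, and the generative call requires a valid API key.

```python
from sklearn.linear_model import LogisticRegression

# "Earlier" AI: a model that learns from labeled examples and predicts a
# discrete outcome for a new input. It classifies; it does not create.
hours_studied = [[1.0], [2.0], [3.5], [5.0], [6.5], [8.0]]  # toy data (assumed)
passed_exam = [0, 0, 0, 1, 1, 1]

clf = LogisticRegression()
clf.fit(hours_studied, passed_exam)
print(clf.predict([[4.0]]))  # -> a single label, e.g. [1]

# Generative AI: given a natural-language prompt, the model returns new,
# coherent content. This part assumes the `openai` SDK and an API key;
# the model name and prompt are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[
        {"role": "user",
         "content": "Write a Petrarchan sonnet about my second cousin."}
    ],
)
print(reply.choices[0].message.content)  # an original sonnet, different each run
```

The point of the contrast is structural: the classifier can only return one of the labels it was trained on, whereas the generative model composes output that did not exist anywhere in its training data.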
But while the extremes get most of the attention, neither addresses the complicated reality and myriad concerns of current usage. In the middle, there's a growing effort to develop responsible, trustworthy AI that works for all people in the here and now. The vast majority of people in general also lie somewhere in the middle in how they're thinking about AI in their own lives: skeptical, but curious. And they're looking for guidance.

There's one big problem with the emphasis on preparing for a possible future world where AI rules supreme: That vision obscures present-day challenges that need to be addressed. Longtermism—the focus on ensuring that humankind will survive millions of years from now11—is terrible for AI, says Tracy Pizzo Frey, a former Google AI executive who led the development of Common Sense Media's AI principles. "Some people are talking about colonizing other planets and not thinking about climate change and systemic racism. It's a sleight of hand to talk about existential risk—it covers up what is happening now."

In education, Pizzo Frey sees similarities to the rise of edtech over the past 20 years, and its acceleration during and since the pandemic. "What you have at a high level is a pretty bifurcated experience: people who are really, really excited about AI and new technology and want to implement as fast as possible, and people who are really skeptical, especially frontline educators who were already skeptical of technology in education and are also skeptical