[Personal Information Protection Commission of Korea] AI Privacy Risk Management Model for an AI Era Trusted by the Public


December 2024

【Preliminary Notes】

◆ Purpose of publication
- This model has been prepared to present the direction and principles of AI privacy risk management; developers and providers of AI models and systems may apply it in line with their individual circumstances.
- This model is based on discussions of the "AI Privacy Public-Private Policy Council" organized and operated by the Personal Information Protection Commission, and may be continuously revised and supplemented as laws, institutions, and technology evolve.

◆ Main audience
- The main intended readership is companies and institutions that wish to establish, formalize, or improve internal privacy management systems while adopting and applying AI technology.
- This model describes a risk management framework covering the entire AI lifecycle, including pretraining and additional training of AI models and the development and provision of AI systems; guidance specialized for particular audiences and domains, such as small organizations and startups, will be developed further in the future.

◆ Contact
- Inquiries about the content: AI Privacy Team (☎ 02-2100-3073, 3078)
- Innovation Support One-Stop Service: dedicated officer (☎ 02-2100-3045 / onestoppipc@korea.kr)

※ What is the PIPC's "Innovation Support One-Stop Service"?
▸ A unit directly under the Chairperson dedicated to supporting businesses, providing practical solutions to difficulties raised in the field quickly and safely, across departmental boundaries.
▸ In principle, applicant companies receive an answer within 5 working days; where additional support procedures* are needed, a first answer on the direction of support is provided, followed by sector-specific review.
* △ regulatory sandbox, △ prior adequacy review, △ personal data safe zones, △ proactive interpretation of statutes, etc.

Table of Contents

Ⅰ. Overview
  Background of the discussion
  Scope of application
  Relationship with the Personal Information Protection Act and other guidance documents
Ⅱ. Procedure for AI privacy risk management
  Identifying the type and use case of the AI
  Identifying the risks corresponding to the use case
  Measuring the risks
  Reviewing and adopting risk mitigation measures
  [Annex 1] Global discussions on AI risk
  [Annex 2] Principles of the risk management framework to be pursued
  [Annex 3] Standards related to AI risk management
Ⅲ. Identification of risks
  Planning and development stage
  [Annex 4] Examples of lawful bases for collecting and using AI training data
  Service provision stage
Ⅳ. Mitigation of risks
  Managerial measures
  Technical measures
Ⅴ. AI privacy risk management framework
  [Appendix 1] Self-assessment items for AI personal data risks
  [Appendix 2] Types and effects of privacy risk mitigation techniques for large language models (LLMs)
  [Appendix 3] Schematic of AI privacy risk types and mitigation measures

Ⅰ. Overview

1 Background of the discussion

1. The relationship between artificial intelligence and privacy risk
□ The development of artificial intelligence (AI) rests on large-scale processing of data that includes personal data, so AI and privacy risk are inseparable.
○ The fundamental changes in data processing required by AI technology intensify typical privacy risks such as the leakage or exposure of personal data, and further give rise to new types of risk that were not foreseen in existing data processing environments.
○ Appropriate management and mitigation of privacy risks, which evolve in complex ways as AI technology advances, is a precondition for sustainable AI development.

[Reference] How AI technology affects privacy risk1)
▪ It expands and aggravates typical privacy risks under the Personal Information Protection Act
 - The growing scale and scope of personal data required for AI training aggravate the risks of indiscriminate collection and use of personal data without a lawful basis, and of tracking and surveillance.
 - The complex value chain of AI data processing widens security vulnerabilities.
▪ AI technology gives rise to new types of privacy risk
 - Linking trivial or non-identifying fragments of information about an individual to predict behavior or intentions not contained in the training data (e.g., predicting the risk of crime)
 - Inferring an individual's personality or emotions from physical attributes (e.g., facial photographs)
 - The risk that AI exposes memorized personal data verbatim (e.g., names, home addresses)
 - Generating fake images, audio, and the like by appropriating an individual's identity

2. The need for a risk-based approach
□ A systematic understanding of AI privacy risk serves as a safeguard that supports socially necessary AI development and innovation, rather than regulating the advancement of AI technology in an a priori, uniform manner.
☞ In the highly uncertain AI domain, a "risk-based approach" is needed to manage and mitigate the negative effects of AI while maximizing the benefits of AI technology.

3. Purpose and limitations
□ This model has been prepared to systematize AI privacy risk factors and to present the direction and principles of risk management so that AI, a core competitive asset of the digital era, can be used in a privacy-friendly manner.
○ It reflects the characteristics of AI data processing, risk types, international interoperability, and other matters currently discussed in academia, government, and elsewhere.
○ Compliance with this model is voluntary; AI companies and others may establish their own concrete AI privacy risk management frameworks according to their individual circumstances.
□ This model is also intended to increase the likelihood that AI companies and institutions comply with the Personal Information Protection Act and to provide an incentive to put safety management systems in place.
○ Where a company or institution has made its best efforts to secure safety as set out in this model, this may be recognized as evidence of compliance with the Personal Information Protection Act or considered a mitigating factor in administrative sanctions.
□ However, AI privacy risk management frameworks are still at an early stage worldwide, and this model will be revised and supplemented as technology, policy, and standards develop.

【Examples of how to apply this guidance】
▸ Developers and providers of AI models and systems may additionally reflect the contents of this model in the privacy impact assessments or personal data protection checklists they already use,
 - and, from a Privacy by Design (PbD) perspective, may consult this model in planning from the initial conception stage of an AI system.
▸ When governments, research institutions, and others formulate policies on the risk management of AI systems, they may consult this model for matters related to personal data protection.

2 Scope of application

□ (Who it applies to) Developers and providers of AI models and systems, etc.
○ (Models and systems2)) An AI system encompasses the AI model together with the various components that perform data processing, such as data collection, storage, transmission, and access control.
 - Some take the view that an AI model, being a set of parameters representing correlations among feature values, does not itself contain or process personal data;
 - however, personal data may be inferred from parameter values, and an AI model may operate as part of an AI system that processes personal data, so developers and providers of AI models may also consult this guidance.

[Reference] Conceptual structure of an AI system3)
■ An AI system consists of three main elements: ① sensors that collect input data from the external environment, ② operational logic (such as an AI model) that interprets the data and produces outputs, and ③ actuators that change the environment according to those outputs.
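For illustration only, the three-element structure above can be pictured as the minimal Python sketch below. The names `Sensor`, `Actuator`, `AISystem`, and `step` are assumptions made for this example and do not come from the guidance itself.

```python
from dataclasses import dataclass
from typing import Callable, Protocol


class Sensor(Protocol):
    def read(self) -> dict:
        """Collect input data from the external environment."""
        ...


class Actuator(Protocol):
    def act(self, output: dict) -> None:
        """Change the environment according to the output."""
        ...


@dataclass
class AISystem:
    sensor: Sensor
    logic: Callable[[dict], dict]  # operational logic, e.g. an AI model's inference call
    actuator: Actuator

    def step(self) -> None:
        # Personal data may enter through the sensor, be interpreted by the
        # operational logic, and affect the environment via the actuator --
        # which is why the guidance looks at the whole system rather than
        # the model parameters alone.
        observation = self.sensor.read()
        output = self.logic(observation)
        self.actuator.act(output)
```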
2) There are recent views that distinguish between AI models and AI systems and assess differently whether personal data is processed and the level of privacy risk; global discussion on this point is expected to continue.

Positions on whether LLMs can memorize and store personal data:
 - Denied: Hamburg data protection authority, Germany (HmbBfDI)
 - Affirmed: California CCPA amendment (US), academic research

※ (Opinion of the Hamburg data protection authority (HmbBfDI), 15 July 2024) Because no personal data is stored in an LLM, merely storing an LLM does not constitute processing of personal data under the GDPR; however, insofar as an AI system built on an LLM processes personal data through queries, outputs, and so on, that processing must comply with the requirements of the GDPR.
※ (Amendment to the California Consumer Privacy Act (CCPA), AB-1008, effective 1 January 2025) Premised on the risk that AI systems memorize and leak or expose personal data, it includes "AI systems capable of outputting personal information" among the abstract digital formats in which personal information may exist.
※ (Research trends) Studies suggest that large language models (LLMs) can act as high-performance lossless compressors that compress and reconstruct data without loss (Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, et al. Language modeling is compression. arXiv preprint arXiv:2309.10668v2, 2024.)
3) OECD, "How artificial intelligence works", https://oecd.ai/en/inside-artificial-intelligence

○ (Developers4)) Developers of AI models and systems exercise influence over decisions on the purpose, scope, and means of AI data processing*, including model architecture and parameter settings,
 - and bear a degree of responsibility for anticipating and controlling foreseeable risks that may arise downstream after development, and should therefore consult this model.
* This covers not only the large-scale training data fed in to pretrain an AI model, but also data additionally supplied during (1) additional training such as fine-tuning and domain-adaptive learning (DAL), (2) in-context learning such as few-shot learning, (3) human alignment, and (4) retrieval-augmented generation (RAG); a schematic sketch of these data-entry points follows at the end of this section.
○ (Providers) Providers of AI models and systems bear a degree of responsibility for anticipating and controlling risks at the stage where the completed AI interacts with end users through queries and the like,
 - and outputs decisions, inferences, and so on that affect the rights and obligations of data subjects, and should therefore consult this model.

□ (Scope of risks) Among the various risks that may arise in the course of developing and providing AI models and systems, this model addresses those that are being discussed intensively at home and abroad from the privacy perspective.
○ It focuses on risks of infringement of data subjects' rights and of violations of the Personal Information Protection Act that newly emerge or are intensified by the distinctive characteristics, functions, and data requirements of AI technology, as identified through literature review, interviews with companies, and the like.
※ (Example) ▲ Synthetic content from generative AI ➔ new risks such as infringement of personality rights through deepfakes
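As a purely illustrative aid, the sketch below enumerates the data-entry points named in the developer note above (pretraining, additional training, in-context learning, alignment, RAG) as entries in a simple inventory. The `PersonalDataInventory` class, its fields, and the example sources and lawful bases are assumptions made for this sketch, not requirements of the model.

```python
from dataclasses import dataclass, field


@dataclass
class PersonalDataInventory:
    """Illustrative log of points where personal data may enter AI data processing."""
    entries: list = field(default_factory=list)

    def record(self, stage: str, source: str, lawful_basis: str) -> None:
        # Each data-entry point is logged with its claimed lawful basis, mirroring
        # the guidance's point that developers should anticipate downstream risks
        # for every stage at which data is supplied.
        self.entries.append(
            {"stage": stage, "source": source, "lawful_basis": lawful_basis}
        )


inventory = PersonalDataInventory()
# (0) large-scale corpus used to pretrain the model
inventory.record("pretraining", "web crawl corpus", "legitimate interest (to be assessed)")
# (1) additional training: fine-tuning / domain-adaptive learning (DAL)
inventory.record("fine-tuning", "customer support transcripts", "consent")
# (2) in-context learning: few-shot examples placed in prompts
inventory.record("in-context learning", "few-shot examples in prompts", "contract")
# (3) human alignment feedback, e.g. preference labels
inventory.record("alignment", "human feedback labels", "consent")
# (4) documents retrieved at inference time for retrieval-augmented generation
inventory.record("RAG", "internal document store", "legitimate interest (to be assessed)")

for e in inventory.entries:
    print(f"{e['stage']:<20} source={e['source']:<35} basis={e['lawful_basis']}")
```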