2024 State of AI Security Report
Unveiling the numbers and insights behind the prevalence of AI risks in the cloud

Inside This Report

About the Orca Research Pod
Executive summary
Key findings
1. AI usage
   1. General AI usage
   2. Usage by AI service
   3. Usage by AI model
   4. Usage by AI package
2. Vulnerabilities in AI packages
3. Exposed AI models
   1. Introduction
   2. Default Amazon SageMaker bucket names
4. Insecure access
   1. Exposed access keys
   2. Exposed keys in commit history
   3. Roles and permissions
5. Misconfigurations
   1. Session authentication (IMDSv2)
   2. Root access
   3. Private endpoints
6. Encryption
7. Conclusion
   Challenges in AI security
   Key recommendations
   AI Goat
   How can Orca help?
About Orca Security

Foreword

AI usage is exploding. Gartner predicts that the AI software market will grow 19.1% annually, reaching $298 billion by 2027. In many ways, AI is now in a stage reminiscent of where cloud computing was over a decade ago.

At that time, speed of innovation was the focus, and it came at the expense of security. One such example: storage buckets were spun up at the speed of the cloud, but were left exposed to the Internet without considering the security implications.

Fast forward to today, and we are witnessing signs that history may repeat itself. Many AI services default to wide access and full permissions, focusing on speed of delivery while sacrificing security measures.

Yet unlike a decade ago, we are now more prepared to secure emerging AI technologies and models. Awareness and education play a key role in achieving this goal, which is why we are releasing this inaugural report.

We hope the report will help developers, CISOs, and security professionals better understand how to secure their AI models while not slowing down innovation.

Thank you for reading our research.

Gil Geron
CEO and Co-Founder of Orca Security

About the Orca Research Pod

The Orca Research Pod is a group of cloud security researchers that discover and analyze cloud risks and vulnerabilities to strengthen the Orca Cloud Security Platform and promote cloud security best practices.

2024
+ 25+ vulnerabilities discovered on AWS, Azure, and Google Cloud
+ System:authenticated default Google Kubernetes Engine (GKE) group
+ LeakyCLI in AWS and Google Cloud

2023
+ Azure Digital Twins SSRF
+ Azure Functions App SSRF
+ Azure API Management SSRF
+ Azure Machine Learning SSRF
+ Azure Storage Account Keys Exploitation
+ Azure Super FabriXss
+ Two Azure PostMessage IFrame Vulnerabilities
+ Bad.Build Supply Chain Risk in GCP
+ 8 Cross-Site Scripting (XSS) vulnerabilities on Azure HDInsight
+ Unauthenticated Access Risk to GCP Dataproc

2022
+ AWS BreakingFormation
+ AWS Superglue
+ Databricks
+ Azure AutoWarp
+ Azure SynLapse
+ Azure FabriXss
+ Azure CosMiss

Research Methodology

This report focuses on the security of deployed AI models in cloud services and environments. It was compiled by analyzing data captured from billions of cloud assets on AWS, Azure, Google Cloud, Oracle Cloud, and Alibaba Cloud scanned by the Orca Cloud Security Platform.

Report Data Set:
● Cloud workload and configuration data
● Billions of real-world production cloud assets
● Data referenced in this report was collected from January - August 2024
● AWS, Azure, Google Cloud, Oracle Cloud, and Alibaba Cloud environments

Executive summary

This report harnesses unique insights from scans performed by the Orca Cloud Security Platform, and uncovers key AI security risks and considerations for CISOs, developers, and security professionals. The AI security risks discussed in this report are mapped to each of the OWASP Top 10 Machine Learning Risks.

Our three primary findings are as follows:

1. More than half of organizations are deploying their own AI models
We found that 56% of organizations have adopted AI to build custom applications. Azure OpenAI is currently the front runner among cloud provider AI services, used by 39% of organizations with Azure. Scikit-learn is the most used AI package (43%), and GPT-3.5 is the most popular AI model, with 79% of organizations using it in their cloud.

2. Default AI settings are often accepted without regard for security
The default settings of AI services tend to favor development speed rather than security, which results in most organizations using insecure default settings. For example, 45% of Amazon SageMaker buckets use non-randomized default bucket names, and 98% of organizations have not disabled the default root access for Amazon SageMaker notebook instances (see the sketch after these findings).

3. Most vulnerabilities in AI models are low to medium risk - for now
62% of organizations have deployed an AI package with at least one CVE. Most of these vulnerabilities are low to medium risk, with an average CVSS score of 6.9, and only 0.2% of the vulnerabilities have a public exploit (compared to the 2.5% average).
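To make the second finding concrete, below is a minimal sketch of opting out of the two SageMaker defaults it describes, using the SageMaker Python SDK and boto3. This is not remediation code from the report; the bucket, notebook, and role names are hypothetical placeholders, and it assumes the boto3 and sagemaker packages are installed with AWS credentials configured.

```python
# Minimal sketch (not from the report): opting out of two SageMaker defaults.
# All resource names below are hypothetical placeholders.
import boto3
import sagemaker

# Default SageMaker buckets follow the predictable pattern
# sagemaker-<region>-<account-id>. Supplying your own bucket avoids
# relying on that guessable name.
session = sagemaker.Session(default_bucket="example-ml-artifacts-bucket")

# Notebook instances default to RootAccess="Enabled". Creating the
# instance with root access disabled removes root from the notebook
# environment.
sm = boto3.client("sagemaker")
sm.create_notebook_instance(
    NotebookInstanceName="example-notebook",
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",  # placeholder
    RootAccess="Disabled",
)
```

Existing instances can be switched the same way with update_notebook_instance and RootAccess="Disabled" once the instance is stopped.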