AI & Speed in Cyber

Table of Contents

Cyber Criminal LLMs vs. Agentic AI
WormGPT
Xanthorox AI
AI Threat 1 – Phishing Attacks
AI Threat 2 – Malware Infections
AI Threat 3 – Vulnerability Exploitation
Recommendations and Conclusions
What's Next?
Appendix
Contact Us

Cyber Criminal LLMs vs. Agentic AI

Cyber criminals have been using LLMs for some time now. Prompt injection makes it relatively easy to bypass the security guardrails of popular LLMs (Claude, ChatGPT, etc.). There are also several outright malicious LLMs, though abuse of popular LLMs is likely far more common than use of these purpose-built tools. One of the first malicious LLMs to emerge was WormGPT, and one of the more recent competitors is Xanthorox AI.

WormGPT

WormGPT was an AI module, based on the GPT-J language model, marketed as a "blackhat" alternative to legitimate GPT models. Designed for malicious activities, it offered features such as unlimited characters, chat history retention, and code formatting, and was reportedly trained on malware-related data. Although it was sold through a dedicated website with subscription plans, its creators, linked to a Russian hacking forum, announced an end to its operations and updates, suggesting that its functionality could be replicated by others.

WormGPT is part of a growing trend of offensive AI tools developed for nefarious purposes, alongside examples like BurpGPT, PentestGPT, FraudGPT, and PassGPT. Threat actors continue to advance such models, with one recently announcing plans for an "Evil-GPT-Web3" model. This highlights the ongoing development of AI tools specifically engineered to facilitate cyber crime.

AI is revolutionizing cyber security by bringing speed, accuracy, and automation to a field that is constantly battling evolving threats.

Xanthorox AI

Xanthorox AI is one of the latest entrants: an advanced, autonomous malicious AI platform that emerged in Q1 2025, touted as superior to previous tools like WormGPT and EvilGPT.
Unlike those tools, Xanthorox is an independent, modular system built from scratch, operating entirely offline on private infrastructure for enhanced anonymity and resilience.

Designed for offensive cyber operations, it offers a self-contained, customizable, and untraceable toolkit with five specialized AI models:

Xanthorox Coder: generates malicious code, automates exploit development, and analyzes vulnerabilities.
V4 Model: creates phishing content and chatbot responses, and processes text for social engineering.
Xanthorox Vision: analyzes images and screenshots for sensitive data and visual information.
Xanthorox Reasoner Advanced: simulates human-like reasoning for adaptive decision-making and persuasive attack narratives.
Fifth Coordination Module: orchestrates tasks and manages interactions across all the other modules.

Its features include full offline operation, live internet search scraping across more than 50 engines, voice and image integration for control and data analysis, and file analysis for various document types.

Xanthorox AI represents a significant shift toward resilient, self-hosted AI tools in cyber crime, posing a substantial threat due to its ability to automate malware development, reconnaissance, social engineering, and coordinated attacks without external oversight.

Xanthorox AI is clearly designed as an agentic AI, not just a large language model, and it represents a logical shift in which cyber criminals are utilizing all available tools to coordinate successful cyber attacks.

Brand protection has taken a major hit from cyber criminal AI developments, in particular phishing attacks using AI analysis, AI-generated emails, and deepfakes.

AI Threat 1 – Phishing Attacks

As noted previously in this report, threat actors are already using prompt injection techniques to trick LLMs like ChatGPT into providing text that can be used in phishing emails, as well as using the aforementioned
malicious LLMs and agents.

At the same time, because threat actors no longer need to spend time writing this content themselves, it is safe to assume that phishing attacks will be launched faster and more frequently. According to Security Brief, 67.4% of phishing incidents globally in 2024 involved AI tactics, with finance among the most targeted industries. This threat will continue to be a serious challenge for organizations across all industries and regions.

Threat actors leverage artificial intelligence to craft highly personalized and convincing campaigns, including spear-phishing, deepfakes, and advanced social engineering techniques.

Emails

Figure 1: Example of BEC phishing email

It is reasonable to assume that this will make phishing emails more convincing, as they will be free from the grammatical errors and spelling mistakes that often indicate an email is suspicious.

LLMs can also significantly augment the speed of creation and personalization, enabling phishing email attacks to be launched and scaled more rapidly. Research from the Singapore G