OpenAI has expanded access to its cybersecurity-focused language model, GPT-5.4-Cyber, making it more readily available to defenders. This move follows Anthropic's recent reveal of their own AI model, Mythos, in the cybersecurity space. The fine-tuned model aims to lower barriers for legitimate cybersecurity professionals.
A new cybercrime platform named ATHR has been discovered that leverages AI-powered voice agents to conduct fully automated vishing attacks. These attacks aim to harvest user credentials by combining AI and human operators for social engineering.
Many "AI SOCs" are currently limited to accelerating alert triage rather than truly automating and reducing the workload of security analysts. True progress in AI for security operations centers (SOCs) requires end-to-end workflows that can take action across systems, not just summarize incoming alerts.
Researchers have discovered a method to trick Anthropic's Claude AI into approving malicious code changes in Git repositories. By forging Git commit metadata, attackers can make the AI believe that harmful modifications originate from a trusted developer, bypassing security reviews.
Artemis, a cybersecurity startup, has secured $70 million in funding. The company is focusing on using AI to defend against AI-powered cyberattacks that target applications, users, machines, and cloud environments.
Despite security overhauls, Microsoft's Windows Recall feature can still allow malware to silently extract all captured data without administrator privileges. A cybersecurity researcher demonstrated this vulnerability with a proof-of-concept tool, highlighting that decrypted data handled by unprotected processes remains accessible.
VulnCheck reports that Anthropic's Project Glasswing, a controlled access program for their AI model Mythos, has only one confirmed CVE publicly attributable to its efforts. While Anthropic researchers are contributing to vulnerability discovery, the specific impact of Glasswing itself remains limited based on current public data.
Microsoft's Zero Day Quest 2026 hacking contest awarded over $2.3 million to researchers who discovered more than 80 high-impact vulnerabilities. The event focused on cloud and AI security, with a total prize pool of $5 million.
Major insurance carriers are increasingly hesitant to provide cybersecurity and errors & omissions coverage for companies utilizing AI in their internal processes. This reluctance stems from the inability to trace the AI's decision-making, leading some insurers to exclude AI-generated outputs from policies or significantly increase premiums.
Research indicates that humans expect rationality and cooperation from LLM opponents in strategic games, leading them to choose significantly lower numbers and favor 'zero' Nash-equilibrium choices when playing against LLMs compared to human opponents. This behavior is particularly pronounced among subjects with high strategic reasoning ability, who rationalize their strategies by attributing reasoning ability and even cooperation to LLMs.
A researcher has detailed a new AI attack method dubbed 'Comment and Control' which exploits prompt injection vulnerabilities in AI tools. This attack targets Claude Code, Gemini CLI, and GitHub Copilot Agents by leveraging comments to manipulate their behavior.
Privacy consultant Alexander Hanff claims that Google Chrome, despite its marketing, lacks protection against browser fingerprinting. This technique tracks users online by collecting specific technical details about their browser, and Hanff asserts that Chrome is vulnerable to this common tracking method.
Anthropic's Project Glasswing allows over 50 organizations to test its Mythos LLM for security vulnerabilities in their own products. However, the exact number of vulnerabilities discovered remains undisclosed, mirroring the situation with other companies participating in similar initiatives.
A critical vulnerability, dubbed 'MCPwn' and identified as CVE-2026-33032, has been discovered in the nginx UI web server configuration tool. This flaw allows unauthenticated attackers to gain full control of web servers by injecting malicious configurations, with active exploitation noted since March.
A security researcher has developed a tool called "TotalRecall Reloaded" that can access the data stored by Windows 11's controversial Recall feature, even when encryption is enabled. This tool bypasses the intended security measures by exploiting a vulnerability in how the data is stored, allowing unauthorized access.
EPIC (Electronic Privacy Information Center) is supporting two bills in South Carolina aimed at regulating chatbot harms. One bill, S. 896, is modeled after EPIC's People-First Chatbot Bill, indicating a focus on protecting individuals from potential negative impacts of AI chatbots.
Asia's digital supply chain faces unique security risks due to varying regulatory landscapes, highly interconnected digital ecosystems, and the increasing adoption of AI. These factors create a complex environment that organizations in the region must navigate to ensure security.
Microsoft's Zero Day Quest hacking contest concluded with $2.3 million awarded to researchers for identifying nearly 700 vulnerabilities. The program incentivized the discovery of flaws in Microsoft's cloud and AI products.
Quantum computers pose a future threat to current encryption methods, with experts warning that achieving 'quantum-safe' systems could take years. This necessitates proactive quantum risk management to prepare for the eventual obsolescence of widely used cryptographic algorithms.
Capsule Security, an Israeli startup, has secured $7 million in funding to develop solutions for securing AI agents at runtime. The company's approach focuses on continuous monitoring of AI agent behavior to prevent unsafe actions.
Researchers have identified a design flaw in Anthropic's Model Context Protocol (MCP) that allows for the silent execution of unsanitized commands. This vulnerability could be exploited to compromise entire AI systems and facilitate widespread AI supply chain attacks.
Sophos CISO Ross McKerchar discusses leadership challenges in scaling security operations, the importance of talent retention, and the evolving threat landscape, particularly concerning AI-enabled attacks. He also highlights a growing trust deficit within the cybersecurity industry.
Security researchers have discovered prompt-injection vulnerabilities in Microsoft Copilot Studio and Salesforce Agentforce, allowing attackers to exfiltrate sensitive data by tricking the AI agents into executing malicious instructions. These flaws exploit the way AI agents process user input, blurring the lines between trusted commands and untrusted data, leading to potential theft of PII and business information.
Microsoft Copilot and Salesforce Agentforce have been patched to address prompt injection vulnerabilities. These flaws could have allowed external attackers to access and leak sensitive data from the AI agents.
The article discusses the rapid adoption of AI across industries, with leadership teams and boards pushing for its integration into operational and security functions. Pentera's AI Security and Exposure Report 2026 indicates this momentum, with all surveyed CISOs acknowledging the trend.
Deepfake technology has advanced to the point where it can convincingly fool individuals and bypass traditional security heuristics, posing a significant risk to organizations. A Gartner survey indicates a substantial increase in audio and video deepfake incidents experienced by cybersecurity leaders.
Security researchers discovered a new prompt injection attack targeting AI agents integrated with GitHub Actions. This attack allows them to steal API keys and access tokens, with vendors like Anthropic, Google, and Microsoft failing to disclose the vulnerabilities to users.
Mallory has launched an AI-native threat intelligence platform designed to provide actionable insights for enterprise security teams. The platform analyzes global threat data, contextualizes it against an organization's specific attack surface, and prioritizes threats for proactive defense. It aims to move beyond traditional alert systems by offering answers to critical security questions.
Researchers have identified malicious Large Language Model (LLM) proxy routers being used in the wild. These routers are designed to facilitate malicious activities by leveraging LLMs.
OpenAI has announced GPT-5.4-Cyber, a specialized version of its GPT-5.4 model designed to assist cybersecurity professionals. This new model aims to enhance defenders' capabilities in identifying and resolving security issues, following a trend of AI companies developing tailored solutions for the cybersecurity sector.
Curity is introducing Access Intelligence, an extension to its IAM platform, to address the unique security challenges posed by autonomous AI agents. Traditional IAM tools are insufficient for securing these agents due to their complex and dynamic access needs.
Starting March 10, 2026, DShield sensors began detecting probes targeting various AI models like Claude, OpenClaw, and Hugging Face. This activity has been consistently observed in the DShield database since its inception.
Microsoft has announced a $10 billion investment in Japan over the next two years, focusing on AI adoption and cybersecurity development. This strategic move is intended to bolster Japan's digital infrastructure, train its workforce in AI technologies, and foster new cybersecurity partnerships, aligning with global trends in sovereign AI and data center development.
Commvault has introduced AI Protect, a new software designed to discover and monitor AI agents operating within AWS, Azure, and GCP. The software also offers the capability to revert actions taken by these AI agents if issues arise, effectively providing a 'Ctrl+Z' function for AI operations.
The N-able and Futurum Report highlights how AI is transforming cybersecurity, acting as both a tool for attackers and a crucial defense mechanism. It emphasizes a shift from traditional perimeter security to continuous cyber resilience, focusing on the ability to withstand, adapt to, and recover from threats in real-time.
While grassroots opposition to renewing FISA Section 702 is growing, fueled by concerns over AI's role in data surveillance, Democratic leaders are not actively campaigning against its extension. This suggests a potential lack of robust political pushback despite public anxieties.
The UK government's Mythos AI system has successfully completed a challenging multi-step infiltration challenge, demonstrating its capabilities in cybersecurity threat assessment. This marks the first AI system to achieve such a feat, suggesting a growing potential for AI in analyzing and understanding complex cyber threats. The tests aim to distinguish genuine cybersecurity risks from exaggerated claims.
European regulators have been largely denied early access to Anthropic's new AI model, Mythos, which is designed for cybersecurity use cases and capable of identifying and exploiting vulnerabilities. This limited access, primarily granted to US tech giants like Apple, Microsoft, and Amazon, raises concerns among experts about private companies dictating the distribution of powerful AI technology over independent authorities.
A wargame exercise named "Capture the Narrative" simulated social media manipulation by having students create bots to influence a fictional election. This exercise aimed to educate participants on how influence operations can be carried out in real-world political contexts.
Bruce Schneier has announced his upcoming speaking engagements for early 2026. These include appearances at DemocracyXChange 2026, the SANS AI Cybersecurity Summit 2026, Nemertes [Next] Virtual Conference Spring 2026, and RightsCon 2026.
This article emphasizes the need for organizations to build cyber defenses based on real-world attack patterns rather than solely relying on vendor guidance and theoretical frameworks. It highlights that attackers adapt faster than defensive programs and exploit predictable gaps, advocating for a shift towards continuous adaptation and mitigation of human error.
Organizations are rapidly adopting AI agents, creating significant security blind spots as traditional identity and access management (IAM) frameworks are inadequate for managing these autonomous systems. These agents can gain system-level access and operate at high speeds, posing risks of breaches and compliance failures. Addressing this requires treating AI agents as a distinct identity class with policy-as-code, dynamic authorization, and full observability.
Google has integrated a new Rust-based DNS parser into the modem firmware for Pixel devices. This move aims to enhance device security by mitigating a class of vulnerabilities often found in critical network parsing components.
A new ad fraud scheme is using AI-generated content and SEO poisoning to spread scareware and financial scams via Google Discover. The campaign manipulates search results to push deceptive news stories, tricking users into enabling browser notifications that lead to malicious outcomes.
This article discusses the increasing legal complexities faced by cybersecurity professionals due to geopolitical uncertainty and evolving regulations. It highlights growing personal liability, including criminal prosecution, and reviews key legal trends in AI and privacy legislation across the US and EU.
The Cloud Security Alliance (CSA) is urging CISOs to prepare for an accelerated threat landscape due to advancements in AI models like Mythos. These models are rapidly shortening the time between identifying vulnerabilities and exploiting them, leading to a new era of faster cyberattacks.
A study analyzing cybercrime forum conversations reveals how cybercriminals perceive and discuss the exploitation of AI. While expressing curiosity about AI's criminal applications, they also harbor doubts about its effectiveness and impact on their operations, with documented attempts to misuse legitimate AI tools and develop bespoke criminal models.
Google has incorporated a DNS parser written in Rust into Pixel phones, aiming to enhance security by addressing memory safety bugs common in lower-level programming environments. This move is intended to mitigate an entire class of vulnerabilities.
OX Security's analysis of 216 million security findings from 250 organizations revealed a 52% year-over-year increase in raw security alerts. More significantly, the prioritized critical risk saw a nearly 400% surge, indicating a growing density of high-impact vulnerabilities.
Artificial intelligence is significantly enhancing threat detection by enabling security teams to analyze vast amounts of data, identify subtle malicious activities, and detect potential attacks faster than traditional methods. Gartner predicts that by 2028, 50% of threat detection, investigation, and response (TDIR) platforms will incorporate agentic AI capabilities, up from less than 10% in 2024.