OpenAI Widens Access to Cybersecurity Model After Anthropic’s Mythos Reveal

OpenAI has expanded access to its cybersecurity-focused language model, GPT-5.4-Cyber, making it more readily available to defenders. This move follows Anthropic's recent reveal of their own AI model, Mythos, in the cybersecurity space. The fine-tuned model aims to lower barriers for legitimate cybersecurity professionals.

New ATHR vishing platform uses AI voice agents for automated attacks

A new cybercrime platform named ATHR has been discovered that leverages AI-powered voice agents to conduct fully automated vishing attacks. These attacks aim to harvest user credentials by combining AI and human operators for social engineering.

Most "AI SOCs" Are Just Faster Triage. That's Not Enough.

Many "AI SOCs" are currently limited to accelerating alert triage rather than truly automating and reducing the workload of security analysts. True progress in AI for security operations centers (SOCs) requires end-to-end workflows that can take action across systems, not just summarize incoming alerts.

Git identity spoof fools Claude into giving bad code the nod

Researchers have discovered a method to trick Anthropic's Claude AI into approving malicious code changes in Git repositories. By forging Git commit metadata, attackers can make the AI believe that harmful modifications originate from a trusted developer, bypassing security reviews.

Artemis Emerges From Stealth With $70 Million in Funding

Artemis, a cybersecurity startup, has secured $70 million in funding. The company is focusing on using AI to defend against AI-powered cyberattacks that target applications, users, machines, and cloud environments.

Microsoft’s Windows Recall still allows silent data extraction

Despite security overhauls, Microsoft's Windows Recall feature can still allow malware to silently extract all captured data without administrator privileges. A cybersecurity researcher demonstrated this vulnerability with a proof-of-concept tool, highlighting that decrypted data handled by unprotected processes remains accessible.

Behind the Mythos hype, Glasswing has just one confirmed CVE

VulnCheck reports that Anthropic's Project Glasswing, a controlled access program for their AI model Mythos, has only one confirmed CVE publicly attributable to its efforts. While Anthropic researchers are contributing to vulnerability discovery, the specific impact of Glasswing itself remains limited based on current public data.

Insurance carriers quietly back away from covering AI outputs

Major insurance carriers are increasingly hesitant to provide cybersecurity and errors & omissions coverage for companies utilizing AI in their internal processes. This reluctance stems from the inability to trace the AI's decision-making, leading some insurers to exclude AI-generated outputs from policies or significantly increase premiums.

Human Trust of AI Agents

Research indicates that humans expect rationality and cooperation from LLM opponents in strategic games, leading them to choose significantly lower numbers and favor 'zero' Nash-equilibrium choices when playing against LLMs compared to human opponents. This behavior is particularly pronounced among subjects with high strategic reasoning ability, who rationalize their strategies by attributing reasoning ability and even cooperation to LLMs.

Anthropic's Project Glasswing CVE tally is still anyone's guess

Anthropic's Project Glasswing allows over 50 organizations to test its Mythos LLM for security vulnerabilities in their own products. However, the exact number of vulnerabilities discovered remains undisclosed, mirroring the situation with other companies participating in similar initiatives.

Critical nginx UI tool vulnerability opens web servers to full compromise

A critical vulnerability, dubbed 'MCPwn' and identified as CVE-2026-33032, has been discovered in the nginx UI web server configuration tool. This flaw allows unauthenticated attackers to gain full control of web servers by injecting malicious configurations, with active exploitation noted since March.

"TotalRecall Reloaded" tool finds a side entrance to Windows 11's Recall database

A security researcher has developed a tool called "TotalRecall Reloaded" that can access the data stored by Windows 11's controversial Recall feature, even when encryption is enabled. This tool bypasses the intended security measures by exploiting a vulnerability in how the data is stored, allowing unauthorized access.

EPIC Supports South Carolina Bills to Rein in Chatbot Harms

EPIC (Electronic Privacy Information Center) is supporting two bills in South Carolina aimed at regulating chatbot harms. One bill, S. 896, is modeled after EPIC's People-First Chatbot Bill, indicating a focus on protecting individuals from potential negative impacts of AI chatbots.

Navigating the Unique Security Risks of Asia's Digital Supply Chain

Asia's digital supply chain faces unique security risks due to varying regulatory landscapes, highly interconnected digital ecosystems, and the increasing adoption of AI. These factors create a complex environment that organizations in the region must navigate to ensure security.

Microsoft pays $2.3M for cloud and AI flaws at Zero Day Quest

Microsoft's Zero Day Quest hacking contest concluded with $2.3 million awarded to researchers for identifying nearly 700 vulnerabilities. The program incentivized the discovery of flaws in Microsoft's cloud and AI products.

Prepping for 'Q-Day': Why Quantum Risk Management Should Start Now

Quantum computers pose a future threat to current encryption methods, with experts warning that achieving 'quantum-safe' systems could take years. This necessitates proactive quantum risk management to prepare for the eventual obsolescence of widely used cryptographic algorithms.

Capsule Security Emerges From Stealth With $7 Million in Funding

Capsule Security, an Israeli startup, has secured $7 million in funding to develop solutions for securing AI agents at runtime. The company's approach focuses on continuous monitoring of AI agent behavior to prevent unsafe actions.

CISO Conversations: Ross McKerchar, CISO at Sophos

Sophos CISO Ross McKerchar discusses leadership challenges in scaling security operations, the importance of talent retention, and the evolving threat landscape, particularly concerning AI-enabled attacks. He also highlights a growing trust deficit within the cybersecurity industry.

Copilot and Agentforce fall to form-based prompt injection tricks

Security researchers have discovered prompt-injection vulnerabilities in Microsoft Copilot Studio and Salesforce Agentforce, allowing attackers to exfiltrate sensitive data by tricking the AI agents into executing malicious instructions. These flaws exploit the way AI agents process user input, blurring the lines between trusted commands and untrusted data, leading to potential theft of PII and business information.

Microsoft, Salesforce Patch AI Agent Data Leak Flaws

Microsoft Copilot and Salesforce Agentforce have been patched to address prompt injection vulnerabilities. These flaws could have allowed external attackers to access and leak sensitive data from the AI agents.

Deterministic + Agentic AI: The Architecture Exposure Validation Requires

The article discusses the rapid adoption of AI across industries, with leadership teams and boards pushing for its integration into operational and security functions. Pentera's AI Security and Exposure Report 2026 indicates this momentum, with all surveyed CISOs acknowledging the trend.

The deepfake dilemma: From financial fraud to reputational crisis

Deepfake technology has advanced to the point where it can convincingly fool individuals and bypass traditional security heuristics, posing a significant risk to organizations. A Gartner survey indicates a substantial increase in audio and video deepfake incidents experienced by cybersecurity leaders.

Mallory Launches AI-Native Threat Intelligence Platform, Turning Global Threat Data Into Prioritized Action

Mallory has launched an AI-native threat intelligence platform designed to provide actionable insights for enterprise security teams. The platform analyzes global threat data, contextualizes it against an organization's specific attack surface, and prioritizes threats for proactive defense. It aims to move beyond traditional alert systems by offering answers to critical security questions.

OpenAI Launches GPT-5.4-Cyber with Expanded Access for Security Teams

OpenAI has announced GPT-5.4-Cyber, a specialized version of its GPT-5.4 model designed to assist cybersecurity professionals. This new model aims to enhance defenders' capabilities in identifying and resolving security issues, following a trend of AI companies developing tailored solutions for the cybersecurity sector.

Curity looks to reinvent IAM with runtime authorization for AI agents

Curity is introducing Access Intelligence, an extension to its IAM platform, to address the unique security challenges posed by autonomous AI agents. Traditional IAM tools are insufficient for securing these agents due to their complex and dynamic access needs.

Scanning for AI Models, (Tue, Apr 14th)

Starting March 10, 2026, DShield sensors began detecting probes targeting various AI models like Claude, OpenClaw, and Hugging Face. This activity has been consistently observed in the DShield database since its inception.

Microsoft Bets $10 Billion to Boost Japan's AI, Cybersecurity

Microsoft has announced a $10 billion investment in Japan over the next two years, focusing on AI adoption and cybersecurity development. This strategic move is intended to bolster Japan's digital infrastructure, train its workforce in AI technologies, and foster new cybersecurity partnerships, aligning with global trends in sovereign AI and data center development.

Commvault has a Ctrl+Z for rogue AI agents

Commvault has introduced AI Protect, a new software designed to discover and monitor AI agents operating within AWS, Azure, and GCP. The software also offers the capability to revert actions taken by these AI agents if issues arise, effectively providing a 'Ctrl+Z' function for AI operations.

5 trends defining the future of AI-powered cybersecurity

The N-able and Futurum Report highlights how AI is transforming cybersecurity, acting as both a tool for attackers and a crucial defense mechanism. It emphasizes a shift from traditional perimeter security to continuous cyber resilience, focusing on the ability to withstand, adapt to, and recover from threats in real-time.

UK gov's Mythos AI tests help separate cybersecurity threat from hype

The UK government's Mythos AI system has successfully completed a challenging multi-step infiltration challenge, demonstrating its capabilities in cybersecurity threat assessment. This marks the first AI system to achieve such a feat, suggesting a growing potential for AI in analyzing and understanding complex cyber threats. The tests aim to distinguish genuine cybersecurity risks from exaggerated claims.

EU regulators largely denied access to Anthropic Mythos

European regulators have been largely denied early access to Anthropic's new AI model, Mythos, which is designed for cybersecurity use cases and capable of identifying and exploiting vulnerabilities. This limited access, primarily granted to US tech giants like Apple, Microsoft, and Amazon, raises concerns among experts about private companies dictating the distribution of powerful AI technology over independent authorities.

Wargame Exercise Demonstrates How Social Media Manipulation Works

A wargame exercise named "Capture the Narrative" simulated social media manipulation by having students create bots to influence a fictional election. This exercise aimed to educate participants on how influence operations can be carried out in real-world political contexts.

Upcoming Speaking Engagements

Bruce Schneier has announced his upcoming speaking engagements for early 2026. These include appearances at DemocracyXChange 2026, the SANS AI Cybersecurity Summit 2026, Nemertes [Next] Virtual Conference Spring 2026, and RightsCon 2026.

Learning from Mistakes: Hard Lessons in Building Cyber Defenses

This article emphasizes the need for organizations to build cyber defenses based on real-world attack patterns rather than solely relying on vendor guidance and theoretical frameworks. It highlights that attackers adapt faster than defensive programs and exploit predictable gaps, advocating for a shift towards continuous adaptation and mitigation of human error.

AI Agents Unleashed: Governing the Invisible Workforce

Organizations are rapidly adopting AI agents, creating significant security blind spots as traditional identity and access management (IAM) frameworks are inadequate for managing these autonomous systems. These agents can gain system-level access and operate at high speeds, posing risks of breaches and compliance failures. Addressing this requires treating AI agents as a distinct identity class with policy-as-code, dynamic authorization, and full observability.

The Pitfalls of Cybersecurity, Privacy and AI Law in 2026

This article discusses the increasing legal complexities faced by cybersecurity professionals due to geopolitical uncertainty and evolving regulations. It highlights growing personal liability, including criminal prosecution, and reviews key legal trends in AI and privacy legislation across the US and EU.

How Hackers Are Thinking About AI

A study analyzing cybercrime forum conversations reveals how cybercriminals perceive and discuss the exploitation of AI. While expressing curiosity about AI's criminal applications, they also harbor doubts about its effectiveness and impact on their operations, with documented attempts to misuse legitimate AI tools and develop bespoke criminal models.

Google Adds Rust DNS Parser to Pixel Phones for Better Security

Google has incorporated a DNS parser written in Rust into Pixel phones, aiming to enhance security by addressing memory safety bugs common in lower-level programming environments. This move is intended to mitigate an entire class of vulnerabilities.

How AI is transforming threat detection

Artificial intelligence is significantly enhancing threat detection by enabling security teams to analyze vast amounts of data, identify subtle malicious activities, and detect potential attacks faster than traditional methods. Gartner predicts that by 2028, 50% of threat detection, investigation, and response (TDIR) platforms will incorporate agentic AI capabilities, up from less than 10% in 2024.