Human Trust of AI Agents

Research indicates that humans expect rationality and cooperation from LLM opponents in strategic games, leading them to choose significantly lower numbers and favor 'zero' Nash-equilibrium choices when playing against LLMs compared to human opponents. This behavior is particularly pronounced among subjects with high strategic reasoning ability, who rationalize their strategies by attributing reasoning ability and even cooperation to LLMs.

Anthropic's Project Glasswing CVE tally is still anyone's guess

Anthropic's Project Glasswing allows over 50 organizations to test its Mythos LLM for security vulnerabilities in their own products. However, the exact number of vulnerabilities discovered remains undisclosed, mirroring the situation with other companies participating in similar initiatives.

Critical nginx UI tool vulnerability opens web servers to full compromise

A critical vulnerability, dubbed 'MCPwn' and identified as CVE-2026-33032, has been discovered in the nginx UI web server configuration tool. This flaw allows unauthenticated attackers to gain full control of web servers by injecting malicious configurations, with active exploitation noted since March.

"TotalRecall Reloaded" tool finds a side entrance to Windows 11's Recall database

A security researcher has developed a tool called "TotalRecall Reloaded" that can access the data stored by Windows 11's controversial Recall feature, even when encryption is enabled. This tool bypasses the intended security measures by exploiting a vulnerability in how the data is stored, allowing unauthorized access.

EPIC Supports South Carolina Bills to Rein in Chatbot Harms

EPIC (Electronic Privacy Information Center) is supporting two bills in South Carolina aimed at regulating chatbot harms. One bill, S. 896, is modeled after EPIC's People-First Chatbot Bill, indicating a focus on protecting individuals from potential negative impacts of AI chatbots.

Navigating the Unique Security Risks of Asia's Digital Supply Chain

Asia's digital supply chain faces unique security risks due to varying regulatory landscapes, highly interconnected digital ecosystems, and the increasing adoption of AI. These factors create a complex environment that organizations in the region must navigate to ensure security.

Microsoft pays $2.3M for cloud and AI flaws at Zero Day Quest

Microsoft's Zero Day Quest hacking contest concluded with $2.3 million awarded to researchers for identifying nearly 700 vulnerabilities. The program incentivized the discovery of flaws in Microsoft's cloud and AI products.

Prepping for 'Q-Day': Why Quantum Risk Management Should Start Now

Quantum computers pose a future threat to current encryption methods, with experts warning that achieving 'quantum-safe' systems could take years. This necessitates proactive quantum risk management to prepare for the eventual obsolescence of widely used cryptographic algorithms.

Capsule Security Emerges From Stealth With $7 Million in Funding

Capsule Security, an Israeli startup, has secured $7 million in funding to develop solutions for securing AI agents at runtime. The company's approach focuses on continuous monitoring of AI agent behavior to prevent unsafe actions.

CISO Conversations: Ross McKerchar, CISO at Sophos

Sophos CISO Ross McKerchar discusses leadership challenges in scaling security operations, the importance of talent retention, and the evolving threat landscape, particularly concerning AI-enabled attacks. He also highlights a growing trust deficit within the cybersecurity industry.

Copilot and Agentforce fall to form-based prompt injection tricks

Security researchers have discovered prompt-injection vulnerabilities in Microsoft Copilot Studio and Salesforce Agentforce, allowing attackers to exfiltrate sensitive data by tricking the AI agents into executing malicious instructions. These flaws exploit the way AI agents process user input, blurring the lines between trusted commands and untrusted data, leading to potential theft of PII and business information.

Microsoft, Salesforce Patch AI Agent Data Leak Flaws

Microsoft Copilot and Salesforce Agentforce have been patched to address prompt injection vulnerabilities. These flaws could have allowed external attackers to access and leak sensitive data from the AI agents.

Deterministic + Agentic AI: The Architecture Exposure Validation Requires

The article discusses the rapid adoption of AI across industries, with leadership teams and boards pushing for its integration into operational and security functions. Pentera's AI Security and Exposure Report 2026 indicates this momentum, with all surveyed CISOs acknowledging the trend.

The deepfake dilemma: From financial fraud to reputational crisis

Deepfake technology has advanced to the point where it can convincingly fool individuals and bypass traditional security heuristics, posing a significant risk to organizations. A Gartner survey indicates a substantial increase in audio and video deepfake incidents experienced by cybersecurity leaders.

Mallory Launches AI-Native Threat Intelligence Platform, Turning Global Threat Data Into Prioritized Action

Mallory has launched an AI-native threat intelligence platform designed to provide actionable insights for enterprise security teams. The platform analyzes global threat data, contextualizes it against an organization's specific attack surface, and prioritizes threats for proactive defense. It aims to move beyond traditional alert systems by offering answers to critical security questions.

OpenAI Launches GPT-5.4-Cyber with Expanded Access for Security Teams

OpenAI has announced GPT-5.4-Cyber, a specialized version of its GPT-5.4 model designed to assist cybersecurity professionals. This new model aims to enhance defenders' capabilities in identifying and resolving security issues, following a trend of AI companies developing tailored solutions for the cybersecurity sector.

Curity looks to reinvent IAM with runtime authorization for AI agents

Curity is introducing Access Intelligence, an extension to its IAM platform, to address the unique security challenges posed by autonomous AI agents. Traditional IAM tools are insufficient for securing these agents due to their complex and dynamic access needs.

Scanning for AI Models, (Tue, Apr 14th)

Starting March 10, 2026, DShield sensors began detecting probes targeting various AI models like Claude, OpenClaw, and Hugging Face. This activity has been consistently observed in the DShield database since its inception.

Microsoft Bets $10 Billion to Boost Japan's AI, Cybersecurity

Microsoft has announced a $10 billion investment in Japan over the next two years, focusing on AI adoption and cybersecurity development. This strategic move is intended to bolster Japan's digital infrastructure, train its workforce in AI technologies, and foster new cybersecurity partnerships, aligning with global trends in sovereign AI and data center development.

Commvault has a Ctrl+Z for rogue AI agents

Commvault has introduced AI Protect, a new software designed to discover and monitor AI agents operating within AWS, Azure, and GCP. The software also offers the capability to revert actions taken by these AI agents if issues arise, effectively providing a 'Ctrl+Z' function for AI operations.

5 trends defining the future of AI-powered cybersecurity

The N-able and Futurum Report highlights how AI is transforming cybersecurity, acting as both a tool for attackers and a crucial defense mechanism. It emphasizes a shift from traditional perimeter security to continuous cyber resilience, focusing on the ability to withstand, adapt to, and recover from threats in real-time.

UK gov's Mythos AI tests help separate cybersecurity threat from hype

The UK government's Mythos AI system has successfully completed a challenging multi-step infiltration challenge, demonstrating its capabilities in cybersecurity threat assessment. This marks the first AI system to achieve such a feat, suggesting a growing potential for AI in analyzing and understanding complex cyber threats. The tests aim to distinguish genuine cybersecurity risks from exaggerated claims.

EU regulators largely denied access to Anthropic Mythos

European regulators have been largely denied early access to Anthropic's new AI model, Mythos, which is designed for cybersecurity use cases and capable of identifying and exploiting vulnerabilities. This limited access, primarily granted to US tech giants like Apple, Microsoft, and Amazon, raises concerns among experts about private companies dictating the distribution of powerful AI technology over independent authorities.

Wargame Exercise Demonstrates How Social Media Manipulation Works

A wargame exercise named "Capture the Narrative" simulated social media manipulation by having students create bots to influence a fictional election. This exercise aimed to educate participants on how influence operations can be carried out in real-world political contexts.

Upcoming Speaking Engagements

Bruce Schneier has announced his upcoming speaking engagements for early 2026. These include appearances at DemocracyXChange 2026, the SANS AI Cybersecurity Summit 2026, Nemertes [Next] Virtual Conference Spring 2026, and RightsCon 2026.

Learning from Mistakes: Hard Lessons in Building Cyber Defenses

This article emphasizes the need for organizations to build cyber defenses based on real-world attack patterns rather than solely relying on vendor guidance and theoretical frameworks. It highlights that attackers adapt faster than defensive programs and exploit predictable gaps, advocating for a shift towards continuous adaptation and mitigation of human error.

AI Agents Unleashed: Governing the Invisible Workforce

Organizations are rapidly adopting AI agents, creating significant security blind spots as traditional identity and access management (IAM) frameworks are inadequate for managing these autonomous systems. These agents can gain system-level access and operate at high speeds, posing risks of breaches and compliance failures. Addressing this requires treating AI agents as a distinct identity class with policy-as-code, dynamic authorization, and full observability.

The Pitfalls of Cybersecurity, Privacy and AI Law in 2026

This article discusses the increasing legal complexities faced by cybersecurity professionals due to geopolitical uncertainty and evolving regulations. It highlights growing personal liability, including criminal prosecution, and reviews key legal trends in AI and privacy legislation across the US and EU.

How Hackers Are Thinking About AI

A study analyzing cybercrime forum conversations reveals how cybercriminals perceive and discuss the exploitation of AI. While expressing curiosity about AI's criminal applications, they also harbor doubts about its effectiveness and impact on their operations, with documented attempts to misuse legitimate AI tools and develop bespoke criminal models.

Google Adds Rust DNS Parser to Pixel Phones for Better Security

Google has incorporated a DNS parser written in Rust into Pixel phones, aiming to enhance security by addressing memory safety bugs common in lower-level programming environments. This move is intended to mitigate an entire class of vulnerabilities.

How AI is transforming threat detection

Artificial intelligence is significantly enhancing threat detection by enabling security teams to analyze vast amounts of data, identify subtle malicious activities, and detect potential attacks faster than traditional methods. Gartner predicts that by 2028, 50% of threat detection, investigation, and response (TDIR) platforms will incorporate agentic AI capabilities, up from less than 10% in 2024.

The AI inflection point: What security leaders must do now

AI is rapidly moving from experimental phases to production in cybersecurity, fundamentally changing how security operations work. Security leaders are grappling with the accelerated threat landscape, where adversary activity has increased significantly, and the speed of attacks has decreased to minutes or even seconds, demanding a shift in defensive capabilities to match machine-speed threats.

Anthropic’s Mythos signals a structural cybersecurity shift

A new briefing by the Cloud Security Alliance (CSA) argues that Anthropic's Glasswing, an AI system capable of autonomously identifying and exploiting vulnerabilities, is not an outlier but an early indicator of a significant shift in cybersecurity. This AI capability dramatically accelerates the process of finding flaws and developing exploits, potentially overwhelming security teams with a surge of disclosures and autonomous attacks.

The Dangers of California’s Legislation to Censor 3D Printing

California's A.B. 2047 bill proposes mandating censorware on all 3D printers and criminalizing the use of open-source alternatives, aiming to restrict the printing of firearms. The EFF argues this legislation will stifle innovation, harm consumers through surveillance and platform lock-in, and is an ineffective approach to security.

CSA: CISOs Should Prepare for Post-Mythos Exploit Storm

The Cloud Security Alliance (CSA) is warning CISOs to prepare for a potential "AI vulnerability storm" following the introduction of Anthropic's Claude Mythos. This development suggests that new AI models could lead to an increase in exploitable vulnerabilities in AI systems.

Empty Attestations: OT Lacks the Tools for Cryptographic Readiness

Asset owners in Operational Technology (OT) environments are facing regulatory pressure to demonstrate their readiness for post-quantum cryptography. However, the lack of adequate tooling prevents them from genuinely assessing or achieving this readiness, leading to a situation where compliance efforts are merely symbolic rather than substantive security measures.

On Anthropic’s Mythos Preview and Project Glasswing

Anthropic has previewed its new AI model, Claude Mythos Preview, which possesses significant cyberattack capabilities. To proactively address these risks, Anthropic has launched Project Glasswing, an initiative to use the model to discover and patch software vulnerabilities before they can be exploited by malicious actors.

Speaking Freely: Dr. Jean Linis-Dinco

This article introduces Dr. Jean Linis-Dinco, an activist-researcher focused on human rights and technology, particularly in relation to cybersecurity. She has a PhD in Cybersecurity and works with the Manushya Foundation, advocating for digital rights and challenging policies that restrict online freedom of expression.