Research indicates that humans expect rationality and cooperation from LLM opponents in strategic games, leading them to choose significantly lower numbers and favor 'zero' Nash-equilibrium choices when playing against LLMs compared to human opponents. This behavior is particularly pronounced among subjects with high strategic reasoning ability, who rationalize their strategies by attributing reasoning ability and even cooperation to LLMs.
A researcher has detailed a new AI attack method dubbed 'Comment and Control' which exploits prompt injection vulnerabilities in AI tools. This attack targets Claude Code, Gemini CLI, and GitHub Copilot Agents by leveraging comments to manipulate their behavior.
Privacy consultant Alexander Hanff claims that Google Chrome, despite its marketing, lacks protection against browser fingerprinting. This technique tracks users online by collecting specific technical details about their browser, and Hanff asserts that Chrome is vulnerable to this common tracking method.
Anthropic's Project Glasswing allows over 50 organizations to test its Mythos LLM for security vulnerabilities in their own products. However, the exact number of vulnerabilities discovered remains undisclosed, mirroring the situation with other companies participating in similar initiatives.
A critical vulnerability, dubbed 'MCPwn' and identified as CVE-2026-33032, has been discovered in the nginx UI web server configuration tool. This flaw allows unauthenticated attackers to gain full control of web servers by injecting malicious configurations, with active exploitation noted since March.
A security researcher has developed a tool called "TotalRecall Reloaded" that can access the data stored by Windows 11's controversial Recall feature, even when encryption is enabled. This tool bypasses the intended security measures by exploiting a vulnerability in how the data is stored, allowing unauthorized access.
EPIC (Electronic Privacy Information Center) is supporting two bills in South Carolina aimed at regulating chatbot harms. One bill, S. 896, is modeled after EPIC's People-First Chatbot Bill, indicating a focus on protecting individuals from potential negative impacts of AI chatbots.
Asia's digital supply chain faces unique security risks due to varying regulatory landscapes, highly interconnected digital ecosystems, and the increasing adoption of AI. These factors create a complex environment that organizations in the region must navigate to ensure security.
Microsoft's Zero Day Quest hacking contest concluded with $2.3 million awarded to researchers for identifying nearly 700 vulnerabilities. The program incentivized the discovery of flaws in Microsoft's cloud and AI products.
Quantum computers pose a future threat to current encryption methods, with experts warning that achieving 'quantum-safe' systems could take years. This necessitates proactive quantum risk management to prepare for the eventual obsolescence of widely used cryptographic algorithms.
Capsule Security, an Israeli startup, has secured $7 million in funding to develop solutions for securing AI agents at runtime. The company's approach focuses on continuous monitoring of AI agent behavior to prevent unsafe actions.
Researchers have identified a design flaw in Anthropic's Model Context Protocol (MCP) that allows for the silent execution of unsanitized commands. This vulnerability could be exploited to compromise entire AI systems and facilitate widespread AI supply chain attacks.
Sophos CISO Ross McKerchar discusses leadership challenges in scaling security operations, the importance of talent retention, and the evolving threat landscape, particularly concerning AI-enabled attacks. He also highlights a growing trust deficit within the cybersecurity industry.
Security researchers have discovered prompt-injection vulnerabilities in Microsoft Copilot Studio and Salesforce Agentforce, allowing attackers to exfiltrate sensitive data by tricking the AI agents into executing malicious instructions. These flaws exploit the way AI agents process user input, blurring the lines between trusted commands and untrusted data, leading to potential theft of PII and business information.
Microsoft Copilot and Salesforce Agentforce have been patched to address prompt injection vulnerabilities. These flaws could have allowed external attackers to access and leak sensitive data from the AI agents.
The article discusses the rapid adoption of AI across industries, with leadership teams and boards pushing for its integration into operational and security functions. Pentera's AI Security and Exposure Report 2026 indicates this momentum, with all surveyed CISOs acknowledging the trend.
Deepfake technology has advanced to the point where it can convincingly fool individuals and bypass traditional security heuristics, posing a significant risk to organizations. A Gartner survey indicates a substantial increase in audio and video deepfake incidents experienced by cybersecurity leaders.
Security researchers discovered a new prompt injection attack targeting AI agents integrated with GitHub Actions. This attack allows them to steal API keys and access tokens, with vendors like Anthropic, Google, and Microsoft failing to disclose the vulnerabilities to users.
Mallory has launched an AI-native threat intelligence platform designed to provide actionable insights for enterprise security teams. The platform analyzes global threat data, contextualizes it against an organization's specific attack surface, and prioritizes threats for proactive defense. It aims to move beyond traditional alert systems by offering answers to critical security questions.
Researchers have identified malicious Large Language Model (LLM) proxy routers being used in the wild. These routers are designed to facilitate malicious activities by leveraging LLMs.
OpenAI has announced GPT-5.4-Cyber, a specialized version of its GPT-5.4 model designed to assist cybersecurity professionals. This new model aims to enhance defenders' capabilities in identifying and resolving security issues, following a trend of AI companies developing tailored solutions for the cybersecurity sector.
Curity is introducing Access Intelligence, an extension to its IAM platform, to address the unique security challenges posed by autonomous AI agents. Traditional IAM tools are insufficient for securing these agents due to their complex and dynamic access needs.
Starting March 10, 2026, DShield sensors began detecting probes targeting various AI models like Claude, OpenClaw, and Hugging Face. This activity has been consistently observed in the DShield database since its inception.
Microsoft has announced a $10 billion investment in Japan over the next two years, focusing on AI adoption and cybersecurity development. This strategic move is intended to bolster Japan's digital infrastructure, train its workforce in AI technologies, and foster new cybersecurity partnerships, aligning with global trends in sovereign AI and data center development.
Commvault has introduced AI Protect, a new software designed to discover and monitor AI agents operating within AWS, Azure, and GCP. The software also offers the capability to revert actions taken by these AI agents if issues arise, effectively providing a 'Ctrl+Z' function for AI operations.
The N-able and Futurum Report highlights how AI is transforming cybersecurity, acting as both a tool for attackers and a crucial defense mechanism. It emphasizes a shift from traditional perimeter security to continuous cyber resilience, focusing on the ability to withstand, adapt to, and recover from threats in real-time.
While grassroots opposition to renewing FISA Section 702 is growing, fueled by concerns over AI's role in data surveillance, Democratic leaders are not actively campaigning against its extension. This suggests a potential lack of robust political pushback despite public anxieties.
The UK government's Mythos AI system has successfully completed a challenging multi-step infiltration challenge, demonstrating its capabilities in cybersecurity threat assessment. This marks the first AI system to achieve such a feat, suggesting a growing potential for AI in analyzing and understanding complex cyber threats. The tests aim to distinguish genuine cybersecurity risks from exaggerated claims.
European regulators have been largely denied early access to Anthropic's new AI model, Mythos, which is designed for cybersecurity use cases and capable of identifying and exploiting vulnerabilities. This limited access, primarily granted to US tech giants like Apple, Microsoft, and Amazon, raises concerns among experts about private companies dictating the distribution of powerful AI technology over independent authorities.
A wargame exercise named "Capture the Narrative" simulated social media manipulation by having students create bots to influence a fictional election. This exercise aimed to educate participants on how influence operations can be carried out in real-world political contexts.
Bruce Schneier has announced his upcoming speaking engagements for early 2026. These include appearances at DemocracyXChange 2026, the SANS AI Cybersecurity Summit 2026, Nemertes [Next] Virtual Conference Spring 2026, and RightsCon 2026.
This article emphasizes the need for organizations to build cyber defenses based on real-world attack patterns rather than solely relying on vendor guidance and theoretical frameworks. It highlights that attackers adapt faster than defensive programs and exploit predictable gaps, advocating for a shift towards continuous adaptation and mitigation of human error.
Organizations are rapidly adopting AI agents, creating significant security blind spots as traditional identity and access management (IAM) frameworks are inadequate for managing these autonomous systems. These agents can gain system-level access and operate at high speeds, posing risks of breaches and compliance failures. Addressing this requires treating AI agents as a distinct identity class with policy-as-code, dynamic authorization, and full observability.
Google has integrated a new Rust-based DNS parser into the modem firmware for Pixel devices. This move aims to enhance device security by mitigating a class of vulnerabilities often found in critical network parsing components.
A new ad fraud scheme is using AI-generated content and SEO poisoning to spread scareware and financial scams via Google Discover. The campaign manipulates search results to push deceptive news stories, tricking users into enabling browser notifications that lead to malicious outcomes.
This article discusses the increasing legal complexities faced by cybersecurity professionals due to geopolitical uncertainty and evolving regulations. It highlights growing personal liability, including criminal prosecution, and reviews key legal trends in AI and privacy legislation across the US and EU.
The Cloud Security Alliance (CSA) is urging CISOs to prepare for an accelerated threat landscape due to advancements in AI models like Mythos. These models are rapidly shortening the time between identifying vulnerabilities and exploiting them, leading to a new era of faster cyberattacks.
A study analyzing cybercrime forum conversations reveals how cybercriminals perceive and discuss the exploitation of AI. While expressing curiosity about AI's criminal applications, they also harbor doubts about its effectiveness and impact on their operations, with documented attempts to misuse legitimate AI tools and develop bespoke criminal models.
Google has incorporated a DNS parser written in Rust into Pixel phones, aiming to enhance security by addressing memory safety bugs common in lower-level programming environments. This move is intended to mitigate an entire class of vulnerabilities.
OX Security's analysis of 216 million security findings from 250 organizations revealed a 52% year-over-year increase in raw security alerts. More significantly, the prioritized critical risk saw a nearly 400% surge, indicating a growing density of high-impact vulnerabilities.
Artificial intelligence is significantly enhancing threat detection by enabling security teams to analyze vast amounts of data, identify subtle malicious activities, and detect potential attacks faster than traditional methods. Gartner predicts that by 2028, 50% of threat detection, investigation, and response (TDIR) platforms will incorporate agentic AI capabilities, up from less than 10% in 2024.
AI is rapidly moving from experimental phases to production in cybersecurity, fundamentally changing how security operations work. Security leaders are grappling with the accelerated threat landscape, where adversary activity has increased significantly, and the speed of attacks has decreased to minutes or even seconds, demanding a shift in defensive capabilities to match machine-speed threats.
A new briefing by the Cloud Security Alliance (CSA) argues that Anthropic's Glasswing, an AI system capable of autonomously identifying and exploiting vulnerabilities, is not an outlier but an early indicator of a significant shift in cybersecurity. This AI capability dramatically accelerates the process of finding flaws and developing exploits, potentially overwhelming security teams with a surge of disclosures and autonomous attacks.
California's A.B. 2047 bill proposes mandating censorware on all 3D printers and criminalizing the use of open-source alternatives, aiming to restrict the printing of firearms. The EFF argues this legislation will stifle innovation, harm consumers through surveillance and platform lock-in, and is an ineffective approach to security.
The Cloud Security Alliance (CSA) is warning CISOs to prepare for a potential "AI vulnerability storm" following the introduction of Anthropic's Claude Mythos. This development suggests that new AI models could lead to an increase in exploitable vulnerabilities in AI systems.
Asset owners in Operational Technology (OT) environments are facing regulatory pressure to demonstrate their readiness for post-quantum cryptography. However, the lack of adequate tooling prevents them from genuinely assessing or achieving this readiness, leading to a situation where compliance efforts are merely symbolic rather than substantive security measures.
Anthropic has previewed its new AI model, Claude Mythos Preview, which possesses significant cyberattack capabilities. To proactively address these risks, Anthropic has launched Project Glasswing, an initiative to use the model to discover and patch software vulnerabilities before they can be exploited by malicious actors.
This article introduces Dr. Jean Linis-Dinco, an activist-researcher focused on human rights and technology, particularly in relation to cybersecurity. She has a PhD in Cybersecurity and works with the Manushya Foundation, advocating for digital rights and challenging policies that restrict online freedom of expression.
Claims that Microsoft is operating a massive corporate espionage operation through LinkedIn's browser extension are being examined by security researchers. Initial findings suggest the extension's probing activity may not constitute spying as alleged, potentially refuting broader espionage claims.
The article discusses how football stadiums are increasingly adopting facial recognition technology, ostensibly for security purposes. However, this expansion of surveillance capabilities raises significant privacy concerns regarding the scope and potential misuse of the collected data.