Methodology
Source Feeds
InfoSecRadar monitors 22 RSS feeds from trusted cybersecurity sources, organized into six tiers:
- Tier 1 — High-Volume News: Dark Reading, Bleeping Computer, The Hacker News, The Register, Ars Technica, SecurityWeek, CSO Online
- Tier 2 — Expert Analysis: Krebs on Security, Schneier on Security, Risky Business News, SANS Internet Storm Center
- Tier 3 — Government Advisories: CISA Alerts, NIST NVD
- Tier 4 — Vendor Research: Google Project Zero, Sophos News
- Tier 5 — Privacy & Civil Liberties: EFF Deeplinks, EPIC, The Intercept, Privacy International
- Tier 6 — Training & Research: BrightTALK InfoSec, NIST Cybersecurity Insights, Recorded Future Blog
Feeds are checked every 2 hours (12 times per day).
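The tier structure and polling cadence above can be sketched as a simple registry; the mapping and constant names are illustrative, not the production configuration (feed entries are taken from the list above):

```python
# Illustrative feed registry: tier number -> source names.
# Structure and names are a sketch, not InfoSecRadar's actual config.
from datetime import timedelta

POLL_INTERVAL = timedelta(hours=2)  # 24 h / 2 h = 12 checks per day

FEED_TIERS: dict[int, list[str]] = {
    1: ["Dark Reading", "Bleeping Computer", "The Hacker News", "The Register",
        "Ars Technica", "SecurityWeek", "CSO Online"],
    2: ["Krebs on Security", "Schneier on Security", "Risky Business News",
        "SANS Internet Storm Center"],
    3: ["CISA Alerts", "NIST NVD"],
    4: ["Google Project Zero", "Sophos News"],
    5: ["EFF Deeplinks", "EPIC", "The Intercept", "Privacy International"],
    6: ["BrightTALK InfoSec", "NIST Cybersecurity Insights", "Recorded Future Blog"],
}

# Sanity check: six tiers totaling 22 feeds, as stated above.
assert sum(len(sources) for sources in FEED_TIERS.values()) == 22
```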
Deduplication
When the same story appears across multiple sources, we keep the most detailed version from the most reputable source. Deduplication uses both exact URL matching and fuzzy title comparison to catch the same story reported under different headlines.
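A minimal sketch of that two-stage check, using Python's standard-library `difflib` for the fuzzy comparison; the URL normalization rules and the 0.85 similarity threshold are assumptions for illustration, not the production values:

```python
# Two-stage deduplication: exact URL match first, then fuzzy title match.
from difflib import SequenceMatcher
from urllib.parse import urlsplit

TITLE_SIMILARITY = 0.85  # assumed cutoff, not the production value

def normalize_url(url: str) -> str:
    """Strip scheme, query string, and trailing slash so mirrored
    links to the same article compare equal."""
    parts = urlsplit(url)
    return parts.netloc.lower() + parts.path.rstrip("/")

def is_duplicate(a: dict, b: dict) -> bool:
    """True if two stories share a URL or have near-identical titles."""
    if normalize_url(a["url"]) == normalize_url(b["url"]):
        return True
    ratio = SequenceMatcher(None, a["title"].lower(), b["title"].lower()).ratio()
    return ratio >= TITLE_SIMILARITY
```

The fuzzy stage is what catches the same story republished under a slightly different headline, which exact URL matching alone would miss.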
IFF Sentiment Classification
Every story is classified from a defender's perspective:
- Friend (positive) — Good news for defenders: patches released, threat actors arrested, security tools improved, vulnerabilities disclosed responsibly.
- Foe (negative) — Bad news for defenders: new vulnerabilities exploited, data breaches, ransomware attacks, new malware discovered.
The classification includes a one-sentence explanation of the reasoning.
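The shape of a classification record might look like the following; the field and class names are illustrative assumptions, not the actual schema:

```python
# Hypothetical shape of one IFF classification result.
from dataclasses import dataclass
from typing import Literal

@dataclass
class IFFResult:
    verdict: Literal["friend", "foe"]  # defender's-perspective call
    reasoning: str                     # one-sentence explanation

result = IFFResult(
    verdict="friend",
    reasoning="A patch was released for an actively exploited vulnerability.",
)
```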
Severity Assessment
Severity is expressed as a CVSS score (0.0–10.0):
- When a real CVSS score is published by the source or available from NVD, we use that score.
- When no official score exists, our AI estimates a score based on the article content. AI-estimated scores are always clearly marked with an asterisk (e.g., "CVSS 6.0*").
- CVSS scores are not assigned to articles about policy, breaches, industry news, AI strategy, or general AI security trends; only articles discussing a specific, identified vulnerability receive a score. For AI & Cybersecurity stories, a score appears only when a concrete CVE or technical flaw in an AI product or framework is central to the article.
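The three display rules above can be sketched as a small formatting function; the function name is hypothetical, but the asterisk convention and the 0.0–10.0 range follow the description:

```python
# Sketch of the CVSS display rule: official scores shown as-is,
# AI-estimated scores marked with a trailing asterisk, and
# non-vulnerability stories given no score at all.
from typing import Optional

def format_cvss(score: Optional[float], estimated: bool) -> Optional[str]:
    if score is None:
        return None  # policy/breach/industry stories carry no score
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    label = f"CVSS {score:.1f}"
    return label + "*" if estimated else label
```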
AI Disclosure
All summaries, classifications, and analyses are generated by large language models (LLMs). Our primary model is Google Gemini 2.0 Flash, with Anthropic Claude Haiku 4.5 as an automatic failover. AI-generated content is intended to provide quick context — always read the original article for full coverage.
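The automatic failover can be sketched as follows; the function names are hypothetical and the real pipeline's API calls are not shown:

```python
# Minimal primary/fallback failover sketch: try the primary model,
# and on any error fall back to the secondary automatically.
from typing import Callable

def summarize_with_failover(article: str,
                            primary: Callable[[str], str],
                            fallback: Callable[[str], str]) -> str:
    try:
        return primary(article)
    except Exception:
        return fallback(article)

def flaky_primary(text: str) -> str:
    """Stand-in for a primary model call that is currently failing."""
    raise RuntimeError("model unavailable")

summary = summarize_with_failover("CISA issues new alert",
                                  flaky_primary,
                                  lambda t: "summary of: " + t)
```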