# External Signal Layer: AI Security Framework Intelligence

A public-source framework intelligence layer for tracking AI security frameworks, their public assets, machine-readable coverage, and directional crosswalks across MITRE ATLAS, NIST AI RMF, OWASP LLM guidance, regulatory references, and public supply-chain guidance.
## Methodology Caveat
Framework Intelligence is a directional public-source signal. Crosswalks are heuristic analytical mappings, not official equivalence claims, certifications, compliance determinations, legal advice, or accusatory company-level findings.
## Framework Manifest

Public framework source coverage:
| Framework | Publisher | Status | Machine-readable items | Version / tag | Source |
|---|---|---|---|---|---|
| MITRE ATLAS (`adversary_tactics_techniques`) | MITRE | success | 77 | 8ee2c68 | canonical |
| NIST AI Risk Management Framework (`risk_management`) | NIST | success | 0 | — | canonical |
| OWASP Top 10 for Large Language Model Applications (`application_security_top10`) | OWASP | success | 77 | 0205957 | canonical |
| OWASP Generative AI Security Project (`generative_ai_security`) | OWASP | success | 0 | — | canonical |
| ISO/IEC AI Management and Security References (`standards_reference`) | ISO/IEC | metadata only | 0 | — | canonical |
| EU AI Act Official Implementation Resources (`regulatory_reference`) | European Union | success | 0 | — | canonical |
| CNCF AI and MLSecOps Public Guidance References (`cloud_native_supply_chain`) | CNCF and related public working groups | success | 76 | 5fb87f4 | canonical |
| CISA AI Security Resources (`public_sector_security_guidance`) | CISA | success | 0 | — | canonical |
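A consumer of the manifest can surface coverage gaps mechanically. The sketch below assumes a hypothetical manifest shape (field names like `machine_readable_items` are illustrative, not the actual `framework.manifest.v1.json` schema): any source that is not `success`, or that exposes zero machine-readable items, is flagged as a gap.

```python
import json

# Hypothetical manifest shape; the real framework.manifest.v1.json schema may differ.
MANIFEST = json.loads("""
{
  "version": "v1",
  "frameworks": [
    {"id": "mitre_atlas",  "status": "success",       "machine_readable_items": 77},
    {"id": "nist_ai_rmf",  "status": "success",       "machine_readable_items": 0},
    {"id": "iso_iec_refs", "status": "metadata_only", "machine_readable_items": 0}
  ]
}
""")

def coverage_gaps(manifest):
    """Framework ids with no machine-readable assets or a non-success status.

    A gap is a statement about our ingest coverage, not about the framework.
    """
    return [
        f["id"]
        for f in manifest["frameworks"]
        if f["status"] != "success" or f["machine_readable_items"] == 0
    ]

print(coverage_gaps(MANIFEST))  # ['nist_ai_rmf', 'iso_iec_refs']
```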
## Crosswalk Samples

High-confidence directional mappings. These rows are heuristic research aids, designed to help compare framework language and coverage, not to assert official equivalence between frameworks.
| Mapping | Names | Rationale | Confidence |
|---|---|---|---|
| `mitre_atlas:AML.T0020` → `owasp_llm_top10:LLM04` | Poison Training Data → Data and Model Poisoning | Poisoning training data directly aligns with data and model poisoning. | inferred |
| `mitre_atlas:AML.T0051` → `owasp_llm_top10:LLM01` | LLM Prompt Injection → Prompt Injection | Both address prompt injection against LLM applications, including untrusted instructions influencing model behavior. | inferred |
| `mitre_atlas:AML.T0046` → `owasp_llm_top10:LLM03` | ML Supply Chain Compromise → Supply Chain | ML supply-chain compromise aligns strongly with OWASP supply-chain risk. | inferred |
| `mitre_atlas:AML.T0056` → `owasp_llm_top10:LLM06` | LLM Plugin Compromise → Excessive Agency | Plugin compromise maps to excessive agency and unsafe tool execution in LLM applications. | inferred |
| `mitre_atlas:AML.T0054` → `owasp_llm_top10:LLM01` | LLM Jailbreak → Prompt Injection | Jailbreak behavior is a closely related prompt-injection and instruction-bypass pattern. | inferred |
| `nist_ai_rmf:MANAGE` → `owasp_llm_top10:LLM06` | Manage → Excessive Agency | Managing excessive agency requires constraints, approvals, and incident handling. | heuristic |
| `mitre_atlas:AML.T0018` → `owasp_llm_top10:LLM03` | Backdoor ML Model → Supply Chain | Backdoored models are a supply-chain integrity risk for AI components. | heuristic |
| `mitre_atlas:AML.T0024` → `owasp_llm_top10:LLM02` | Exfiltration via ML Inference API → Sensitive Information Disclosure | Inference API exfiltration can result in sensitive information disclosure. | heuristic |
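Because the crosswalks are directional and tiered by confidence, a natural representation is a flat list of records filtered at query time. This is a minimal sketch, not the layer's actual schema; the record fields and the `targets_for` helper are illustrative, and the two confidence tiers shown (`inferred`, `heuristic`) mirror the table above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Crosswalk:
    source: str       # e.g. "mitre_atlas:AML.T0051"
    target: str       # e.g. "owasp_llm_top10:LLM01"
    confidence: str   # "inferred" (stronger signal) or "heuristic" (weaker)

# A few of the sample rows from the table above.
ROWS = [
    Crosswalk("mitre_atlas:AML.T0051", "owasp_llm_top10:LLM01", "inferred"),
    Crosswalk("mitre_atlas:AML.T0054", "owasp_llm_top10:LLM01", "inferred"),
    Crosswalk("nist_ai_rmf:MANAGE",    "owasp_llm_top10:LLM06", "heuristic"),
]

def targets_for(rows, source_prefix, min_confidence="heuristic"):
    """Directional lookup: targets reachable from sources under a prefix.

    The mapping is one-way by design; no reverse edge is implied.
    """
    keep = {"inferred"} if min_confidence == "inferred" else {"inferred", "heuristic"}
    return sorted({r.target for r in rows
                   if r.source.startswith(source_prefix) and r.confidence in keep})

print(targets_for(ROWS, "mitre_atlas:", min_confidence="inferred"))
# ['owasp_llm_top10:LLM01']
```

Filtering at query time (rather than baking confidence into the edge set) keeps the weaker `heuristic` rows available for exploratory comparison without letting them masquerade as high-confidence mappings.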
## API + Data

Public-safe endpoints:

- `/api/external/framework-intel`
- `/api/external/framework-intel/metrics`
- `/api/chart-data/framework-intel`
- `/data/external/framework-intel/framework.manifest.v1.json`

### Freshness

Generated: 2026-05-09T07:23:27.082Z
Missing dates, metadata-only sources, and failed fetches are treated as coverage gaps, not as vendor or framework maturity claims.
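A consumer can enforce that rule when checking the `Generated` timestamp: a missing or unparsable date is treated as stale coverage on our side, never as a finding about the upstream source. A minimal sketch, assuming a 7-day freshness window (the window length is an illustrative choice, not part of the published contract):

```python
from datetime import datetime, timedelta, timezone

def is_stale(generated_iso, now, max_age=timedelta(days=7)):
    """True when the snapshot should be treated as a coverage gap.

    Missing or unparsable dates count as gaps, per the methodology caveat,
    not as claims about vendor or framework maturity.
    """
    if not generated_iso:
        return True
    try:
        # fromisoformat() before Python 3.11 rejects a trailing "Z".
        generated = datetime.fromisoformat(generated_iso.replace("Z", "+00:00"))
    except ValueError:
        return True
    return now - generated > max_age

now = datetime(2026, 5, 12, tzinfo=timezone.utc)
print(is_stale("2026-05-09T07:23:27.082Z", now))  # False: ~3 days old
print(is_stale(None, now))                        # True: missing date is a gap
```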