aisecurity.llc

Methodology and Quality

How job-description intelligence becomes public-safe, claim-aware, and commercially useful research.

Methodology

The report treats job descriptions as market artifacts; four principles govern how findings are produced and published.

Job descriptions are market artifacts

They show what companies publicly ask for, not definitive proof of internal maturity.

Scores are role-language signals

A higher score means stronger signal in the job-description language, not a company grade.

Claims require readiness labels

Every finding must carry one of four readiness labels: public-ready, public-with-caveat, internal-only, or do-not-claim.

Privacy and redaction come first

Raw job text, raw surveys, profile-derived personal data, and internal ABM outputs stay private.

Findings are based on analyzed job-description signals; they are not proof of any individual company's internal security maturity.

Signal layer methodology

Vulnerability Intelligence

We aggregate CVE data from three primary sources: NIST National Vulnerability Database (NVD) API 2.0, GitHub Security Advisory Database (GHSA), and OSV.dev. Records are classified as AI-relevant using a two-stage pipeline: first, a product/package name matcher against a dictionary of 35+ known AI/ML packages; second, a keyword-weighted scorer across 21 semantic buckets derived from the MITRE ATLAS taxonomy. Only records with classification confidence ≥ 0.5 are included in published metrics.
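The two-stage pipeline can be sketched roughly as follows. This is an illustrative reconstruction, not the production classifier: the package set, the bucket names, and the weights are hypothetical stand-ins for the 35+ package dictionary and the 21 ATLAS-derived buckets described above.

```python
# Illustrative sketch of the two-stage AI-relevance classifier.
# AI_PACKAGES and KEYWORD_BUCKETS are small hypothetical examples,
# not the production dictionary or taxonomy.

AI_PACKAGES = {"pytorch", "tensorflow", "transformers", "langchain", "onnx"}

# Keyword buckets loosely modeled on ATLAS-style semantic categories,
# each with an assumed weight.
KEYWORD_BUCKETS = {
    "model_poisoning": (["poisoning", "backdoor"], 0.4),
    "prompt_injection": (["prompt injection", "jailbreak"], 0.5),
    "inference_abuse": (["model extraction", "inversion"], 0.3),
}

def classify(record: dict, threshold: float = 0.5) -> tuple[bool, float]:
    """Return (is_ai_relevant, confidence) for a vulnerability record."""
    text = (record.get("description") or "").lower()
    packages = {p.lower() for p in record.get("affected_packages", [])}

    # Stage 1: direct product/package name match -> maximum confidence.
    if packages & AI_PACKAGES:
        return True, 1.0

    # Stage 2: keyword-weighted scoring across semantic buckets.
    score = 0.0
    for keywords, weight in KEYWORD_BUCKETS.values():
        if any(k in text for k in keywords):
            score += weight
    confidence = min(score, 1.0)
    return confidence >= threshold, confidence
```

Only records whose stage passes the 0.5 confidence floor would enter published metrics; everything below it is dropped rather than down-weighted.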

CISA Known Exploited Vulnerabilities (KEV) are cross-referenced to identify the exploited-in-the-wild subset. Monthly counts are computed from the published_at date. Severity uses the CVSSv3 base score where available, falling back to CVSSv2.
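The KEV cross-reference, the monthly bucketing, and the CVSS fallback are straightforward; a minimal sketch, assuming ISO-formatted published_at dates and record fields named cvss_v3_base / cvss_v2_base (hypothetical field names):

```python
from collections import Counter
from datetime import datetime

def severity(record: dict):
    """CVSSv3 base score where available, falling back to CVSSv2."""
    return record.get("cvss_v3_base") or record.get("cvss_v2_base")

def monthly_counts(records: list[dict], kev_ids: set[str]) -> dict:
    """Bucket records by published month and flag the exploited-in-the-wild subset."""
    totals, exploited = Counter(), Counter()
    for r in records:
        month = datetime.fromisoformat(r["published_at"]).strftime("%Y-%m")
        totals[month] += 1
        if r["id"] in kev_ids:  # cross-reference against the CISA KEV catalog
            exploited[month] += 1
    return {m: {"total": totals[m], "exploited": exploited[m]} for m in totals}
```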

Tools Intelligence

Tool metadata is sourced from vendor documentation, GitHub repository data, and practitioner survey responses. Each tool entry includes: category classification against our 14-category AI security taxonomy, pricing model, deployment model, and license type. Star counts and contributor metrics are fetched directly from the GitHub API at enrichment time and are point-in-time snapshots.
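The point-in-time GitHub snapshot can be illustrated as below. The endpoint and response fields (stargazers_count, forks_count) are the public GitHub REST API; the snapshot_fields shape and the fetched_at timestamp are assumptions about our storage format, not a documented schema.

```python
import json
import time
from urllib.request import urlopen

def snapshot_fields(repo_json: dict) -> dict:
    """Reduce a GitHub repo payload to the point-in-time metrics stored per tool."""
    return {
        "stars": repo_json["stargazers_count"],
        "forks": repo_json["forks_count"],
        "fetched_at": int(time.time()),  # snapshot timestamp taken at enrichment
    }

def fetch_repo_snapshot(owner: str, name: str) -> dict:
    """Fetch repository metadata from the public GitHub REST API (network call)."""
    with urlopen(f"https://api.github.com/repos/{owner}/{name}") as resp:
        return snapshot_fields(json.load(resp))
```

Because the numbers are captured at enrichment time, published star and contributor figures can lag the live repository; they describe the snapshot, not the present.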

Practitioner ratings (when available) represent aggregated responses from our survey cohort weighted by org size and role. Tools with fewer than 3 survey reviews are marked “insufficient data” and excluded from comparative rankings.
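The weighting and the three-review floor can be sketched as a weighted mean. The review shape (rating plus a precomputed weight derived from org size and role) is an assumption for illustration; the actual weighting scheme is not specified here.

```python
def aggregate_rating(reviews: list[dict], min_reviews: int = 3):
    """Weighted mean of survey ratings; None signals "insufficient data".

    Each review is assumed to carry a precomputed weight reflecting
    org size and role (hypothetical shape: {"rating": float, "weight": float}).
    """
    if len(reviews) < min_reviews:
        return None  # marked "insufficient data", excluded from rankings
    total_weight = sum(r["weight"] for r in reviews)
    return sum(r["rating"] * r["weight"] for r in reviews) / total_weight
```

Returning None rather than an unweighted average keeps thinly reviewed tools out of comparative rankings entirely, which matches the exclusion rule above.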

Pipeline status

Current execution health

Release status: conditional_go
SQL pipeline: ok
Blockers: 0