State of AI Security Engineering 2026
What this report is
This is a purpose-driven applied research report. We analyzed 2,847 job descriptions posted between Q3 2024 and Q2 2025, manually reviewed and coded by practitioners, to answer one question: what are companies actually asking for when they hire for AI security?
The answer is not what the certification vendors say. It is not what the framework writers claim. It is what shows up in the job postings, skills requirements, team structures, and governance documents of organizations building and deploying AI systems at scale.
Key findings
1. "AI Security Engineer" is not yet a stable title
The job title "AI Security Engineer" appears in fewer than 12% of relevant postings. Most AI security work is distributed across security architect, ML engineer, platform engineer, and risk/compliance roles — often without explicit AI labeling.
Organizations hiring for AI security capability should audit their job descriptions against the skills taxonomy in Chapter 3 — most are missing 40–60% of the relevant skills.
2. The skills split is 60/40 security/ML
Job postings that explicitly describe AI security responsibilities require approximately 60% traditional security skills (threat modeling, pen testing, SIEM, IAM) and 40% ML/AI-specific skills (model evaluation, prompt injection, data pipeline security, LLM architecture).
The 40% ML component is not optional — organizations that hire pure security practitioners without ML fluency consistently report friction in execution.
3. Governance is lagging hiring
Organizations that are actively hiring AI security talent are 2–3 governance maturity stages behind their hiring velocity. They hire practitioners before policies exist, and practitioners arrive to environments where governance is in draft or pre-draft state.
4. Prompt injection is the most-cited threat class
In postings that enumerate specific threat classes, prompt injection appears in 67% of them — higher than model exfiltration (43%), supply chain attacks on model weights (38%), or training data poisoning (31%).
Methodology
Job posting corpus
- Source: LinkedIn, Indeed, Greenhouse/Lever/Ashby ATS postings, direct company career pages
- Collection window: Q3 2024 – Q2 2025
- Initial corpus: 4,211 postings matching keyword filters
- After deduplication and filtering: 2,847 postings
- Manual review: 100% of corpus reviewed by at least one practitioner analyst
Skills taxonomy construction
We used a bottom-up approach: extract skills from postings, cluster by semantic similarity, validate clusters against practitioner review panels, then map to a hierarchical taxonomy.
The resulting taxonomy has 6 top-level domains and 87 leaf skills. It is available as structured data in the report appendix.
Governance corpus
120+ organizations, governance documents collected from public sources (NIST AI RMF implementations, EU AI Act compliance filings, published AI policies, SEC risk disclosures with AI-specific language).
The AI Security Engineering discipline
Definition
An AI Security Engineer is a practitioner who designs, implements, and validates security controls across the full lifecycle of AI systems — from data pipeline security through model development, deployment, inference, and monitoring.
The role requires fluency in both traditional security engineering and ML/AI systems architecture. It is not a specialization of either discipline alone — it is an integration.
Core competency areas
1. AI Threat Modeling Extending traditional threat modeling to cover model-specific attack surfaces: training data, model weights, inference endpoints, prompt handling, output pipelines.
2. LLM Security Prompt injection defense, output validation, context window security, system prompt hardening, jailbreak resistance evaluation, RAG pipeline security.
3. MLSecOps Secure model development pipelines, artifact signing and verification, model registry security, CI/CD integration for security validation, automated red teaming.
4. AI Governance Integration Translating governance requirements (risk classification, impact assessment, documentation requirements) into technical controls and audit artifacts.
5. Supply Chain Security for AI Third-party model evaluation, dependency analysis for ML libraries, data provenance verification, model card validation.
What's next
The 2027 edition will include:
- Longitudinal comparison of 2025 vs 2026 hiring trends
- Compensation band data (aggregated, anonymized)
- Skills validation benchmarks — moving from taxonomy to assessment
- Governance maturity scoring for specific industries
If you are building an AI security team or program and want to contribute data to the 2027 edition, contact us at research@davidwolf.org.