David Wolf · Portfolio Use Case
A flagship research report turning AI security job-market noise into evidence about roles, skills, control gaps, hiring signals, and the emerging AI security engineering discipline.
Designed and authored a flagship 2026 research report on AI security engineering. Drawing on a corpus of AI and security job descriptions, role analysis, market signals, practitioner framing, and AI governance research, the report explains how the AI Security Engineer role is emerging, where job descriptions are inflated or confused, which skills are actually required, and how organizations can turn vague hiring language into executable security capabilities.
Client
AI Security LLC / Independent Research
Engagement Type
Research Product
Period
2026
Role
Author / Research Lead / AI Security Engineer
Focus Areas
AI Security Engineering, AI Security Labor Market, Job Description Analysis
The Context
AI security engineering is becoming a visible hiring category before the industry has agreed on what the role actually means. Job descriptions often combine product security, AppSec, governance, model safety, detection engineering, privacy, cloud, and platform security into one overloaded specification.
The Challenge
The core challenge was to separate signal from noise. Some roles genuinely require new AI security capabilities; others are traditional security roles with AI branding. The research needed to expose that difference and explain what organizations actually need to build.
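The signal-versus-noise screen described above can be illustrated with a minimal keyword-based triage sketch. The term lists below are hypothetical placeholders, not the report's actual taxonomy or methodology; a real pipeline would use a curated skill ontology and human review.

```python
# Illustrative sketch: flag whether a job description contains substantive
# AI-security skill terms or only surface-level AI branding.
# All keyword lists are hypothetical examples, not the report's taxonomy.

SUBSTANTIVE_TERMS = {
    "prompt injection", "model evaluation", "red teaming",
    "model supply chain", "agent authorization", "eval harness",
}
BRANDING_TERMS = {"ai-powered", "gen ai", "cutting-edge ai", "ai-driven"}

def classify_posting(text: str) -> str:
    """Rough triage of a single job description."""
    t = text.lower()
    substantive = sum(term in t for term in SUBSTANTIVE_TERMS)
    branding = sum(term in t for term in BRANDING_TERMS)
    if substantive >= 2:
        return "signal: genuine AI security capability"
    if branding > 0 and substantive == 0:
        return "noise: AI-branded traditional role"
    return "ambiguous: needs manual review"

print(classify_posting(
    "Seeking AppSec engineer for our AI-powered platform; SAST/DAST required."
))
# -> noise: AI-branded traditional role
```

Even this crude screen makes the distinction operational: postings with multiple substantive AI-security skills are treated differently from postings that only carry AI branding on top of a conventional security spec.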
What I Did
The Outcome
The result is a flagship research asset for AI Security LLC. It gives the portfolio a sharper public thesis: AI security engineering is not a buzzword, but an emerging discipline with specific product, governance, evaluation, authorization, and evidence responsibilities.
Research
A planned corpus of roughly 10,000 AI and security job descriptions, with AI+Security roles benchmarked against Product Security and AppSec baselines
Format
An annual flagship report for 2026
Includes
Role-market concepts such as the Frankenstein Role, Chimera Spec, Unicorn Index, Skill Washing, Evidence Gap, Probability Pivot, Agentic Anarchy, Governance Evidence, and Signal over Noise
Audience
Includes CISOs, hiring managers, recruiters, candidates, founders, security leaders, and AI governance stakeholders
Extensions
Include CISO survey data, LinkedIn profile analysis, GitHub signal analysis, arXiv trend analysis, and media/news trend analysis
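Concepts like the Frankenstein Role can be made measurable by counting how many distinct security domains a single posting demands. The sketch below is one plausible way to compute such a sprawl score; the domain keyword lists are hypothetical placeholders, not the report's actual definitions or scoring.

```python
# Illustrative sketch of measuring role sprawl: count the distinct
# security domains one posting touches. Domain keyword lists are
# hypothetical placeholders, not the report's taxonomy.

DOMAIN_KEYWORDS = {
    "appsec": ["sast", "dast", "threat model"],
    "governance": ["nist", "iso 27001", "risk register"],
    "model safety": ["red teaming", "jailbreak", "model eval"],
    "detection": ["siem", "detection rule", "soc"],
    "cloud": ["aws", "iam", "kubernetes"],
    "privacy": ["gdpr", "pii", "data minimization"],
}

def domain_sprawl(posting: str) -> list[str]:
    """Return the list of domains the posting touches."""
    text = posting.lower()
    return [
        domain for domain, terms in DOMAIN_KEYWORDS.items()
        if any(term in text for term in terms)
    ]

jd = ("Own SAST/DAST, maintain the risk register per NIST, build SIEM "
      "detection rules, run model eval red teaming, and manage AWS IAM.")
hits = domain_sprawl(jd)
print(len(hits), hits)  # this single posting spans five domains
```

A posting that spans five or six domains is a candidate Frankenstein Role: several specialties stapled into one specification that no single hire realistically covers.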
Key Deliverables
Collaboration
The report was created as an independent research and advisory asset, designed to support conversations with CISOs, hiring managers, recruiters, sponsors, founders, and candidates. It connects security-market research to practical AI product-security capability building.
Public-Safe Caveat
This case study describes an independent research report and public-facing research product. Claims about corpus size, survey results, sponsors, or final findings should be updated once the final report data and publication package are complete.
David Wolf
AI Security · Product Security · Security Leadership
Based on analyzed public signals, not proof of any individual's or company's internal state.