David Wolf · Portfolio Use Case
Implementing practical AI control evidence for ISO 42001, NIST AI RMF, and AIMS: agent identities, permissions, red teaming, privacy, and output evaluation.
Designed a practical AI governance control layer using Garak, NeMo Guardrails, Microsoft Presidio, Promptfoo, agentic identities, permission scoping, evaluation gates, and evidence-generation workflows to support ISO 42001, NIST AI RMF, and AIMS-style control objectives for agentic AI systems.

Client
Confidential / Internal AI Governance Program
Engagement Type
Consulting / Research / Buildout
Period
2025–2026
Role
AI Product Security Architect / AI Governance Engineer
Focus Areas
AI Governance Engineering, ISO 42001 Control Implementation, NIST AI RMF Control Implementation
The Context
AI governance cannot remain a binder of policies once systems become agentic. When models can retrieve data, call tools, trigger workflows, and act across systems, governance needs enforceable controls: red-team tests, guardrails, privacy checks, permission boundaries, logs, scoring, and acceptance evidence.
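As one illustration of "enforceable": a permission boundary between an agent and its tools can be a deny-by-default check that logs every decision as evidence. The sketch below is hypothetical; the AgentIdentity, ToolCall, and PermissionBoundary names are illustrative, not from any specific framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Hypothetical scoped identity for one agent."""
    agent_id: str
    allowed_tools: frozenset     # tools this identity may invoke
    allowed_scopes: frozenset    # e.g. {"tickets:read"}

@dataclass(frozen=True)
class ToolCall:
    tool: str
    scope: str

class PermissionBoundary:
    """Deny-by-default check between an agent and its tools; every decision is logged."""

    def __init__(self):
        self.audit_log = []  # evidence that enforcement happened, not just policy text

    def authorize(self, agent, call):
        allowed = call.tool in agent.allowed_tools and call.scope in agent.allowed_scopes
        self.audit_log.append({"agent": agent.agent_id, "tool": call.tool,
                               "scope": call.scope, "allowed": allowed})
        return allowed

# Usage: a support agent may read tickets but is denied a payment tool.
boundary = PermissionBoundary()
agent = AgentIdentity("support-agent-01",
                      frozenset({"search_tickets"}), frozenset({"tickets:read"}))
assert boundary.authorize(agent, ToolCall("search_tickets", "tickets:read"))
assert not boundary.authorize(agent, ToolCall("issue_refund", "payments:write"))
```

The audit log is the point: a reviewer can inspect denied calls, not just read a policy that says denials should happen.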
The Challenge
Frameworks such as ISO 42001, NIST AI RMF, and AIMS-style AI management systems describe what responsible management should achieve, but they do not automatically produce engineering controls. The challenge was converting governance language into technical control points that could be tested and evidenced.
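One way to start that conversion is a literal mapping from a control objective to an executable check, with each run emitting an evidence artifact. A minimal sketch; the control IDs below are illustrative labels, not verbatim ISO 42001 clause numbers or NIST AI RMF subcategory IDs.

```python
import datetime, hashlib, json

# Hypothetical mapping from governance language to executable checks.
CONTROL_MAP = {
    "ISO42001-PII-OUTPUT": {
        "objective": "Model outputs must not expose personal data",
        "test": "pii_gate",
    },
    "AIRMF-MEASURE-INJECTION": {
        "objective": "System is probed for prompt-injection resilience",
        "test": "adversarial_probe_suite",
    },
}

def evidence_record(control_id, passed, details=""):
    """Turn one test result into an auditable, hash-stamped evidence artifact."""
    entry = CONTROL_MAP[control_id]
    record = {
        "control": control_id,
        "objective": entry["objective"],
        "test": entry["test"],
        "passed": passed,
        "details": details,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # Digest lets reviewers detect after-the-fact edits to stored evidence.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

print(json.dumps(evidence_record("ISO42001-PII-OUTPUT", passed=True), indent=2))
```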
What I Did
- Translated governance objectives across ISO 42001, NIST AI RMF, and AIMS-style control categories into implementable engineering controls
- Built a control stack combining Garak-style adversarial testing, NeMo Guardrails-style policy flows, Presidio-style PII detection, Promptfoo-style evaluation, and agentic identity/permission boundaries
- Created repeatable evaluation gates for prompts, workflows, model behavior, PII handling, guardrail behavior, and tool permissions
- Produced evidence artifacts suitable for governance review, security review, risk acceptance, and continuous control monitoring
The Outcome
The result was a practical AI governance engineering pattern. Instead of slowing delivery with abstract compliance language, the system created repeatable tests, evidence artifacts, scoring, and review gates. That made AI governance more useful to product teams, security teams, risk owners, and executives.
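To make one layer of that pattern concrete: a Presidio-style PII gate can sit between model output and anything downstream of it. Below is a minimal sketch using Microsoft Presidio's analyzer API; the pass/fail semantics and threshold are assumed policy choices, not part of the library.

```python
# pip install presidio-analyzer  (also requires a spaCy model, e.g. en_core_web_lg)
from presidio_analyzer import AnalyzerEngine

analyzer = AnalyzerEngine()

def pii_gate(text, score_threshold=0.5):
    """Fail the gate if any PII entity is detected above the confidence threshold."""
    findings = analyzer.analyze(text=text, language="en",
                                score_threshold=score_threshold)
    return len(findings) == 0, findings

passed, findings = pii_gate("Contact Jane Doe at jane.doe@example.com")
print(passed, [(f.entity_type, round(f.score, 2)) for f in findings])
# -> False, with findings such as PERSON and EMAIL_ADDRESS
```

Garak-style adversarial probes and Promptfoo-style assertion suites slot into the same gate shape: run, score, log, pass or fail.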
Collaboration
The work bridged security architecture, AI governance, privacy engineering, product security, and workflow automation. It translated high-level AI governance obligations into concrete controls that engineering teams could test, log, score, review, and improve continuously.
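In practice, "test, log, score, review" can converge on a single release gate. A hedged sketch, assuming a hypothetical convention where each control check reports a score in [0, 1]; the check names and thresholds are illustrative policy choices, not framework-mandated values.

```python
# Hypothetical release gate: every control check must clear its floor before ship.
THRESHOLDS = {
    "adversarial_resistance": 0.95,  # share of red-team probes resisted
    "guardrail_coverage": 1.00,      # share of policy flows firing as expected
    "pii_clean_rate": 1.00,          # share of sampled outputs with no PII findings
}

def release_gate(scores):
    """Return True only if all scores meet their thresholds; log failures for review."""
    failures = [k for k, floor in THRESHOLDS.items() if scores.get(k, 0.0) < floor]
    for k in failures:
        print(f"GATE FAIL: {k}={scores.get(k)} < required {THRESHOLDS[k]}")
    return not failures

assert release_gate({"adversarial_resistance": 0.97,
                     "guardrail_coverage": 1.00,
                     "pii_clean_rate": 1.00})
```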
Public-Safe Caveat
This case study describes a public-safe AI governance and control-engineering pattern. It generalizes private implementation details, sensitive prompts, internal test suites, client-specific control mappings, and proprietary workflow logic.