David Wolf · Portfolio Use Case
A product-security architecture for governing browser extensions, Tauri sidecars, MITM interception, local AI, schema normalization, agent authority, and audit-ready automation.
Designed a browser-native AI security control plane connecting Chrome extension automation, persistent offscreen workers, WebLLM, Transformers, Rust/WASM engines, Tauri sidecar processing, authorized MITM request/response capture, WebSocket streaming, schema normalization, agent permission boundaries, and audit-ready event trails into a governed product-security architecture.

Client
Confidential / Internal AI Automation Platform
Engagement Type
Consulting / Internal Buildout
Period
2025–2026
Role
AI Product Security Architect / Browser-Native Automation Engineer / Rust-WASM Systems Architect
Focus Areas
AI Product Security, Browser-Native Automation, Agentic Authority Boundaries
The Context
Browser-native AI automation sits at a dangerous intersection: web pages, extension APIs, local models, native sidecars, credentials, application traffic, WebSocket streams, and agents that can act. A pile of capabilities is not a secure product architecture.
The Challenge
The challenge was defining authority. Content scripts should not own persistent workers. Agents should not inherit unlimited tool reach. MITM capture should not be invisible. Local model output should not become action without review. Every boundary needed a control.
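The boundary idea above, that an agent holds only an explicit grant and that high-risk model-driven actions pass through a review gate before executing, can be sketched as follows. This is an illustrative sketch, not the platform's actual implementation; the tool names, `AgentGrant` shape, and `authorize` function are all assumptions.

```typescript
// Hypothetical authority boundary: each agent carries an explicit grant,
// and model output never becomes action without clearing this check.
type ToolName = "dom.read" | "dom.click" | "net.capture" | "fs.write";

interface AgentGrant {
  agentId: string;
  allowedTools: ReadonlySet<ToolName>;   // no inherited, unlimited reach
  requiresReview: ReadonlySet<ToolName>; // high-risk tools gated on a human
}

interface ToolRequest {
  agentId: string;
  tool: ToolName;
  reviewed: boolean; // true only after explicit human approval
}

function authorize(
  grant: AgentGrant,
  req: ToolRequest
): "allow" | "deny" | "needs-review" {
  if (req.agentId !== grant.agentId) return "deny";
  if (!grant.allowedTools.has(req.tool)) return "deny";
  if (grant.requiresReview.has(req.tool) && !req.reviewed) return "needs-review";
  return "allow";
}

const grant: AgentGrant = {
  agentId: "summarizer",
  allowedTools: new Set<ToolName>(["dom.read", "dom.click"]),
  requiresReview: new Set<ToolName>(["dom.click"]),
};

console.log(authorize(grant, { agentId: "summarizer", tool: "dom.read", reviewed: false }));  // "allow"
console.log(authorize(grant, { agentId: "summarizer", tool: "dom.click", reviewed: false })); // "needs-review"
console.log(authorize(grant, { agentId: "summarizer", tool: "fs.write", reviewed: true }));   // "deny"
```

The point of the three-valued result is that denial and review are distinct outcomes: a denied call fails closed, while a review-gated call is held, not silently executed.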
What I Did
The Outcome
The result was a reusable security architecture for high-authority AI systems operating across browser and native desktop surfaces. It shows how powerful agentic automation can be made inspectable, governable, and useful without pretending the risk disappears.
Concepts
- Applied across the Chrome extension runtime, persistent offscreen workers, WebLLM, Transformers, Rust/WASM engines, Tauri sidecar processing, MITM request/response capture, WebSocket listeners, and schema normalization
- Built around explicit authority boundaries between content scripts, workers, local AI, sidecar services, network capture, and downstream agents
- Designed for audit-ready event trails, typed records, normalized schemas, and reviewable agent actions
- Mapped directly to product-security risks such as prompt injection, excessive agency, tool overreach, credential exposure, data leakage, and unreviewed automation
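One way to picture the typed, audit-ready records and schema normalization mentioned above is a normalizer that coerces loosely shaped raw events into a single typed schema and rejects anything it cannot attribute. The field names and surface list here are illustrative assumptions, not the platform's real schema.

```typescript
// Hypothetical audit-ready event record with a normalized shape.
interface AuditEvent {
  ts: string;      // ISO-8601 timestamp
  actor: string;   // which agent or worker acted
  surface: "extension" | "offscreen-worker" | "sidecar" | "mitm";
  action: string;  // normalized action name, e.g. "request.captured"
  outcome: "allowed" | "denied" | "queued-for-review";
}

// Normalize a loosely shaped raw event into the typed schema; events that
// cannot be attributed to a known actor and surface are rejected outright
// rather than kept in a degraded form.
function normalize(raw: Record<string, unknown>): AuditEvent | null {
  const surfaces = ["extension", "offscreen-worker", "sidecar", "mitm"] as const;
  const surface = surfaces.find((s) => s === raw["surface"]);
  if (!surface || typeof raw["actor"] !== "string" || typeof raw["action"] !== "string") {
    return null;
  }
  const oc = raw["outcome"];
  const outcome: AuditEvent["outcome"] =
    oc === "denied" || oc === "queued-for-review" ? oc : "allowed";
  return {
    ts: typeof raw["ts"] === "string" ? raw["ts"] : new Date().toISOString(),
    actor: raw["actor"],
    surface,
    action: raw["action"],
    outcome,
  };
}
```

Rejecting unattributable events instead of logging them as-is is what keeps the trail audit-ready: every surviving record answers who acted, where, and with what outcome.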
Key Deliverables
Collaboration
The work bridged browser-extension engineering, Rust/Tauri native architecture, local model integration, MITM processing, schema engineering, agentic workflow design, and AI product-security control thinking. It was designed to make powerful automation reviewable, scoped, and governable.
At a Glance
Focus Areas
Tools & Technologies
Evidence & Artifacts
Public-Safe Caveat
This case study describes confidential consulting and internal platform work in public-safe terms. Client name, repository paths, credentials, private schemas, intercepted service details, raw traffic, internal prompts, and sensitive implementation logic are omitted.
David Wolf
AI Security · Product Security · Security Leadership
Based on analyzed public signals; not proof of any individual's or company's internal state.