aisecurity.llc

Trust Center

How we handle privacy, evidence, claims, and sponsor independence across the AI Security Engineering research platform.

Trust posture

Operating principles

Research independence

Sponsor support does not influence methodology, scoring, findings, chart outputs, or editorial conclusions.

Public-safety boundaries

We do not publish raw job descriptions, raw ATS payloads, raw survey answers, personal data, or secrets.

Claim language discipline

We treat job descriptions as public hiring signals and role-language evidence, not proof of company security maturity.

Governance-by-default

Public outputs are aggregate benchmarks with caveats and quality checks designed for executive and practitioner scrutiny.

Control statements

Platform commitments

  • Protect private data and avoid identity-level exposure.
  • Keep sponsor influence separate from research outputs.
  • Use aggregate benchmark framing for public claims.
  • Avoid accusatory company-level language.
  • Use psychometric outputs as role-language signals, not diagnosis.
  • Publish artifacts that are useful for CISOs, hiring leaders, practitioners, sponsors, and researchers.

All public claims are based on analyzed job-description signals, not on proof of any individual company's internal security maturity.

Legal execution

Contracts and signer-ready documents

The trust center now includes a dedicated contracts hub for sponsorship agreements, NDA workflows, a $0 services retainer, and commercial addenda.
