Trust Center

Security & Governance Model

Governed analytics infrastructure for regulated investment teams — built for Azure-tenant deployment, deterministic reproducibility, and full audit chains.

Last reviewed: April 2026

Data stays in your Azure region · Client-isolated storage · Deterministic reproducibility · API-level approval enforcement · Full run audit trail

Deployment & data residency

Where your data lives

Your fund data does not leave your designated Azure region. Data residency is confirmed in writing before any data is transferred — as a condition of onboarding, not a post-hoc configuration.

Storage

Client-isolated containers — no shared blobs across tenants

Inference

No shared inference endpoints for fund data

Deployment

Nexqion-managed environment or fully client-owned Azure tenant

Architecture and data flow diagrams available on request.

Governance & auditability

Every run produces a complete audit chain

The audit trail is structural — not optional. Every run records who initiated it, on which dataset, with which method version, who approved it, and what the output was. This record is durable and cannot be modified after the fact.

Deterministic reproducibility: same input + same method = same output, always

Approval gates enforced at API and worker level — not bypassable via the UI

Methodology elections configured per firm, applied automatically on every run

Run records include: inputs, method version, outputs, approval decisions with timestamp and user identity
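The run record described above can be sketched as an immutable data structure with a deterministic fingerprint. This is an illustrative sketch only: the field names, `RunRecord` class, and `content_hash` helper are hypothetical, not the production schema.

```python
import hashlib
import json
from dataclasses import dataclass, asdict


def content_hash(obj) -> str:
    """Canonical hash: the same input always yields the same digest."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


@dataclass(frozen=True)  # frozen: the record cannot be mutated after creation
class RunRecord:
    """One entry in the audit chain (hypothetical schema for illustration)."""
    run_id: str
    initiated_by: str
    dataset_id: str
    method_version: str
    approved_by: str
    approved_at: str   # ISO-8601 timestamp
    input_hash: str    # content hash of the exact inputs used
    output_hash: str   # content hash of the produced output

    def fingerprint(self) -> str:
        """Deterministic digest of the whole record; any tampering changes it."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```

Freezing the record and deriving a content-addressed fingerprint is one common way to make "durable and cannot be modified after the fact" concrete: identical inputs and method versions reproduce identical hashes, and any post-hoc change is detectable.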

Sample governed-report package available on request.

AI use & boundaries

The LLM role is clearly bounded

All computations are performed by deterministic, versioned methods. The LLM handles two tasks only: generating an analysis plan from your request, and narrating results from verified computed output. Every sentence in the narrative cites the metric it came from.

LLM: plan generation and narrative from verified numbers only

No metric or risk figure is derived from LLM output

Approval gates interrupt every run before a governed output is produced

Designed to support EU AI Act (Annex III), SR 11-7, and ESMA 2024 AI requirements — control mapping on request
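The narration boundary above can be sketched as a renderer that refuses any sentence not grounded in a computed metric. A minimal sketch under stated assumptions: the `narrate` function and the metric names are hypothetical, not the actual pipeline.

```python
import re


def narrate(template: str, metrics: dict) -> str:
    """Render a narrative sentence strictly from verified computed values.

    Every placeholder such as {var_95} must name a computed metric. A
    sentence that cites no metric, or references one that was never
    computed, is rejected rather than guessed at.
    """
    referenced = re.findall(r"\{(\w+)[^}]*\}", template)
    if not referenced:
        raise ValueError("narrative sentence cites no computed metric")
    missing = [k for k in referenced if k not in metrics]
    if missing:
        raise KeyError(f"uncomputed metrics referenced: {missing}")
    return template.format(**metrics)
```

Under this pattern the language model proposes templates, but no number reaches the reader unless it came from the deterministic computation layer.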

AI governance note available on request.

Security controls

Control domains

Data residency & isolation

Fund data is stored in your designated Azure region. Storage containers are client-isolated — no shared blobs, no shared inference endpoints.

Architecture diagrams available on request

Governance & audit trail

Every run produces an immutable record: who ran it, which dataset, which method version, who approved it, and when. Exportable per run.

Sample governed-report package available on request

Access control & identity

MFA enforced via Microsoft Entra External ID. RBAC applied at API level — not just the UI. Least-privilege access patterns throughout.
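Applying RBAC at the API level rather than the UI means the permission check lives on the handler itself. A minimal sketch, assuming a hypothetical role table (in the real system, roles would derive from Entra ID claims):

```python
from functools import wraps

# Hypothetical role-to-permission table for illustration only.
ROLE_PERMISSIONS = {
    "analyst": {"run:create", "run:read"},
    "approver": {"run:read", "run:approve"},
}


class Forbidden(Exception):
    pass


def require_permission(permission: str):
    """Enforce RBAC on the handler itself, so a client that bypasses the
    UI and calls the API directly is still checked."""
    def decorator(handler):
        @wraps(handler)
        def wrapped(user, *args, **kwargs):
            granted = ROLE_PERMISSIONS.get(user["role"], set())
            if permission not in granted:
                raise Forbidden(f"{user['id']} lacks {permission}")
            return handler(user, *args, **kwargs)
        return wrapped
    return decorator


@require_permission("run:approve")
def approve_run(user, run_id):
    return f"run {run_id} approved by {user['id']}"
```

Because the check wraps the handler, hiding a button in the UI is never the control; the API refuses the call regardless of how it arrives.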

Access control design available on request

Encryption & edge security

Traffic routed through Azure Front Door with WAF enforcement. TLS 1.2+ in transit. AES-256 encryption at rest across all storage layers.

Network boundary documentation available on request

Approval gate enforcement

Human approval checkpoints are enforced at the API and worker level. They cannot be bypassed via the UI. Approval decisions are logged per run with timestamp and user identity.
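Worker-level enforcement means the worker independently re-checks the approval record before producing output, so even a request that slipped past the API cannot yield a governed result. An illustrative sketch with hypothetical names (`APPROVALS`, `worker_produce_output`), not the production code:

```python
# In-memory stand-in for the durable approval log.
APPROVALS = {}  # run_id -> {"approved_by": ..., "approved_at": ...}


class ApprovalGateError(Exception):
    pass


def record_approval(run_id, approver, timestamp):
    """Log an approval decision with user identity and timestamp."""
    APPROVALS[run_id] = {"approved_by": approver, "approved_at": timestamp}


def worker_produce_output(run_id, compute):
    """Worker refuses to emit a governed output unless an approval exists."""
    approval = APPROVALS.get(run_id)
    if approval is None:
        raise ApprovalGateError(f"run {run_id} has no recorded approval")
    result = compute()
    # The approval decision travels with the output into the run record.
    return {"run_id": run_id, "output": result, "approval": approval}
```

The key design point is defence in depth: the gate exists in two layers (API and worker), so disabling or circumventing one layer still leaves the other enforcing the checkpoint.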

Enforcement design available on request

Operations & resilience

Operations, availability & incident management

Structured incident response process and documented BC/DR posture. System status is publicly available.

Incident response

Incidents are classified, escalated, and resolved with client notification within defined SLAs

Business continuity

Azure regional redundancy, documented recovery posture

Status

System status publicly available at nexqion.com/status

Incident response and BC/DR documentation available on request.

Compliance posture

Compliance status

Compliance status is presented transparently in three tiers: certified, implemented, and in preparation. "Implemented" means designed and documented controls, not independent certification.

Certified / Independently audited

No independently audited certifications are currently in place. This tier will be updated once the first external audit is complete.

Implemented / Available for review

GDPR: Data processing controls; DPA template available
EU AI Act (Annex III): Architecture and governance model aligned; control mapping on request
SR 11-7: Architecture and governance model aligned; control mapping on request
ESMA 2024 AI: Architecture aligned; control mapping on request

"Implemented" means designed, built, and documented controls. This does not constitute independent certification. Control mapping available on request under NDA.

Roadmap / In preparation

SOC 2 Type II: Target 2026/2027
ISO 27001: Target 2027

Roadmap items represent planned programmes. They are not current controls or certifications.

Security documentation

Available on request

Documentation supports each stage of review: initial conversation, NDA-based review, and full vendor diligence. The initial package is available without an NDA; the full package follows NDA execution.

Security overview (PDF)
Architecture & data flow diagrams
Privacy, DPA template & sub-processor list
Incident response summary
BC/DR summary
Questionnaire responses (SIG Lite / bespoke)
Compliance & assurance mapping (EU AI Act, SR 11-7, GDPR)
AI governance note
Onboarding & pilot ingestion note
Sample governed-report package (synthetic data)

Documents are released in stages based on review progress.

Request security documentation

Trust review

How a trust review works

Four steps from initial conversation to pilot approval.

01

Initial conversation

Clarify requirements, deployment model, and data boundaries. 30 minutes.

02

NDA if required

Standard NDA or client-specific NDA. Typically completed within 2 business days.

03

Security package & mapping

Security overview, architecture diagrams, control mapping, and Q&A with your IT/security team.

04

Pilot approval

Joint definition of data boundaries, deployment model, and pilot scope — then go.