Document ID: AN-SEC-ZTP-001
Version: 1.1
Classification: Public
Effective: Mar 22, 2026
Next Review: Sep 22, 2026
Reviewed By: CEO & Compliance Team

The Principle

Autonomous systems are coming. They are already here — in the AI models that process clinical notes, in the algorithms that triage immigration cases, in the language models that generate translations, in the content moderation systems that classify harmful speech. These systems are becoming more capable, more pervasive, and more consequential with each generation. And with each generation, the privacy questions become harder.

When an AI model processes a Dari-speaking patient’s medical history to generate a clinical summary, who controls the data? When an automated translation system processes an asylum seeker’s persecution narrative, is the narrative retained in the model’s training data? When a content moderation algorithm classifies a Pashto-language post as hate speech, did a human review the decision before the post was removed and the user was flagged?

Ariana Nexus exists at the intersection of autonomous systems and human judgment. Our position is unambiguous: autonomous systems augment human capability. They do not replace human accountability. No AI tool processes client data without enterprise-grade security, contractual authorization, and a disclosed legal basis. No AI-assisted output leaves Ariana Nexus without human review and approval. No client data is used to train any AI model — ever — unless the client has provided explicit, documented authorization in the Master Service Agreement.

This page documents how Ariana Nexus governs privacy in the context of autonomous and semi-autonomous systems — both the AI tools we use internally and the AI services we deliver to clients.

The HITL Principle: Nothing Ships Without Human Approval

Human-in-the-Loop (HITL) is not a feature of Ariana Nexus’s operations. It is a foundational principle. It applies to every service, every output, every deliverable, and every decision that affects a client, a data subject, or a vulnerable population.

What HITL means at Ariana Nexus:

Why this is non-negotiable:

In the domains where Ariana Nexus operates — healthcare, government, justice, and AI — the consequences of AI error are not abstract. A mistranslated medical instruction can kill a patient. A misclassified asylum narrative can result in deportation to a country where the applicant faces death. A hallucinated cultural assertion can corrupt an AI model that serves millions of Afghan-language users. A false content moderation decision can silence a legitimate voice or amplify a dangerous one.

Human review is not a quality assurance step. It is a safety mechanism. It is the difference between an AI-augmented institution and an AI-automated liability.

Internal AI Tool Governance

AI Tools Used by Ariana Nexus

Ariana Nexus uses AI tools in its internal operations to enhance productivity, quality, and analytical capability. The following governance framework applies to all AI tool usage:

Tier 1 — Enterprise-Grade AI (Authorized for Client Data):

Microsoft 365 Copilot (Microsoft). Agreements: BAA executed; Microsoft Online Services DPA. Authorization: client data processing as disclosed in the Master Service Agreement. Training: Microsoft does not use customer data to train foundation models (per Microsoft's data protection commitments).

Microsoft Azure AI Services (Microsoft). Agreements: BAA executed; Azure DPA. Authorization: engagement-specific AI processing as disclosed in the MSA. Training: the same Microsoft no-training commitment applies.

Tier 2 — Internal Operations AI (Not Authorized for Client Data):

Claude (Anthropic). Use: internal research, document drafting, strategic analysis, operational planning. Restriction: no client data; internal operations only.

ChatGPT (OpenAI). Use: internal research, brainstorming, process development. Restriction: no client data; internal operations only.

Other AI tools (various providers). Use: evaluated case by case per the AI Tool Assessment Protocol. Restriction: no client data without Privacy Review Gate approval, enterprise-grade security, and MSA disclosure.
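The Tier 1 / Tier 2 boundary can be expressed as a pre-flight check: only enterprise-grade, BAA-covered tools may ever receive client data, and unassessed tools are denied by default. This is a sketch under stated assumptions (the registry contents and `may_process` function are hypothetical illustrations of the policy, not a real system):

```python
# Hypothetical registry mirroring the Tier 1 / Tier 2 policy above.
TOOL_TIERS = {
    "microsoft_365_copilot": 1,  # BAA + DPA: authorized for client data
    "azure_ai_services": 1,      # BAA + DPA: engagement-specific processing
    "claude": 2,                 # internal operations only
    "chatgpt": 2,                # internal operations only
}

def may_process(tool: str, is_client_data: bool) -> bool:
    """Only Tier 1 (enterprise-grade, BAA-covered) tools may touch client data.

    Tools absent from the registry are treated as unassessed and denied,
    matching the Privacy Review Gate's default-deny posture.
    """
    tier = TOOL_TIERS.get(tool)
    if tier is None:
        return False
    return tier == 1 or not is_client_data

assert may_process("microsoft_365_copilot", is_client_data=True)
assert not may_process("claude", is_client_data=True)      # Tier 2: blocked
assert may_process("chatgpt", is_client_data=False)        # internal use: allowed
```

Default-deny for unregistered tools is the important property: a new tool cannot process client data simply by being new.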

The No-Training Rule

Ariana Nexus will never allow any AI provider to use client data for model training. This is an absolute prohibition with no exceptions:

AI Tool Assessment Protocol

Before any new AI tool is introduced into Ariana Nexus operations, the following assessment is conducted through the Privacy Review Gate:

Client-Facing AI Privacy Commitments

Master Service Agreement Disclosure

When Ariana Nexus uses AI tools in the delivery of client services, the following is disclosed in the Master Service Agreement or engagement-specific addendum:

AI Opt-Out Right

Clients have the right to request that their engagement be delivered entirely without AI tool assistance. When a client exercises this right:

Automated Decision-Making Safeguards

Ariana Nexus does not use automated decision-making that produces legal effects or similarly significant effects on individuals without human intervention. This commitment aligns with GDPR Article 22, CCPA/CPRA profiling provisions, and the EU AI Act’s human oversight requirements:

AI Data Factory Privacy Architecture

Privacy Controls in the AI Pipeline

The AI Data Factory — Ariana Nexus’s infrastructure for delivering AI validation, annotation, content moderation, and RLHF services — incorporates privacy controls at every stage:

Stage 1 — Data Ingestion:
- Client data is received through encrypted channels and classified immediately upon receipt.
- Data containing PII is classified as Restricted.
- Data provenance and consent chain are verified before processing begins.
- Client authorization for each processing activity is confirmed against the DPA.

Stage 2 — Data Preparation:
- PII detection algorithms (planned: Purview trainable classifiers) scan incoming data for personal identifiers.
- Where feasible, PII is pseudonymized or anonymized before annotation.
- Data is stored in engagement-specific SharePoint environments with named-individual access.
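One common pseudonymization approach at this stage is keyed, deterministic tokenization: the same identifier always maps to the same token (so annotation consistency is preserved), but the original value cannot be recovered without the key. A minimal sketch, assuming an engagement-scoped secret held outside the data store (the key value, regex, and function names here are illustrative, not Ariana Nexus's actual implementation):

```python
import hashlib
import hmac
import re

# Engagement-scoped secret; in practice drawn from a key vault and never
# stored alongside the data (hypothetical value for illustration).
PSEUDONYM_KEY = b"engagement-specific-secret"

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonym(value: str) -> str:
    """Keyed HMAC-SHA256 pseudonym: deterministic per identifier,
    irreversible without the key."""
    digest = hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"PII_{digest[:12]}"

def pseudonymize_emails(text: str) -> str:
    """Replace detected e-mail addresses before data reaches annotators."""
    return EMAIL_RE.sub(lambda m: pseudonym(m.group()), text)

masked = pseudonymize_emails("Contact patient at jane@example.com")
# The original address no longer appears in the annotator-facing text.
```

Determinism matters here: two mentions of the same person yield the same token, so inter-annotator agreement metrics still work without exposing the underlying identifier.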

Stage 3 — Human Annotation and Processing:
- Annotators access only the data assigned to their specific task.
- Annotators are trained on PHI handling, PII recognition, and sensitive population protections.
- Annotation outputs are classified at the same tier as the source data.
- Inter-annotator agreement metrics ensure consistency without exposing individual data subject information.

Stage 4 — Quality Assurance:
- QA reviewers access data within the same access-controlled environment.
- QA findings are documented without unnecessary reproduction of personal data.
- Anomalies that suggest data quality issues, bias, or privacy violations are escalated.

Stage 5 — Output Delivery:
- Deliverables (annotated datasets, validation reports, moderation decisions) undergo human review before delivery.
- Deliverables are encrypted in transit (TLS 1.2+, OME, or Sensitivity Label encryption).
- The client receives documented confirmation that HITL review was performed.

Stage 6 — Data Retention and Deletion:
- Client data is retained per the DPA retention schedule.
- Upon engagement completion, data is deleted through the documented engagement exit procedure with destruction certification.
- No client data from the AI Data Factory is retained for Ariana Nexus's own purposes without explicit client authorization.
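The retention step above reduces to a date comparison against the DPA schedule. A minimal sketch of the expiry check (the record shape, function names, and the omission of the destruction-certificate step are all simplifications for illustration):

```python
from datetime import date, timedelta

def retention_expired(received: date, retention_days: int, today: date) -> bool:
    """True once the DPA retention window for a dataset has elapsed."""
    return today > received + timedelta(days=retention_days)

def engagement_exit_candidates(records, retention_days: int, today: date):
    """Return dataset IDs due for deletion under the retention schedule.

    In the documented exit procedure, each deletion would also produce a
    destruction certificate; that step is omitted in this sketch.
    """
    return [r["id"] for r in records
            if retention_expired(r["received"], retention_days, today)]

records = [
    {"id": "DS-1", "received": date(2026, 1, 1)},
    {"id": "DS-2", "received": date(2026, 3, 1)},
]
due = engagement_exit_candidates(records, retention_days=30, today=date(2026, 2, 15))
# DS-1's 30-day window ended Jan 31, so only DS-1 is due for deletion.
```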

Cultural Intelligence API — Privacy Architecture (Planned)

The Cultural Intelligence API — Ariana Nexus’s planned software product providing programmatic access to cultural validation, translation assessment, dialect identification, and document verification — will incorporate the following privacy protections:
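A "stateless privacy architecture" for such an API typically means the service scores a request and returns a verdict without persisting the payload, logging only non-content metadata. The sketch below illustrates that shape under loose assumptions; the endpoint name, the toy scoring heuristic, and the response fields are hypothetical, since the API is still in planning:

```python
import logging

log = logging.getLogger("cultural_api")

def validate_translation(source: str, target: str) -> dict:
    """Stateless sketch: score a source/target pair, return the verdict,
    and persist nothing. Only request *shape* (lengths) is logged,
    never request *content*.

    The length-ratio score is a placeholder for a real validation model.
    """
    if not target.strip():
        score = 0.0
    else:
        score = min(1.0, len(target) / max(len(source), 1))
    log.info("validated pair: src_len=%d tgt_len=%d", len(source), len(target))
    return {"score": round(score, 2), "payload_retained": False}

result = validate_translation("Take one tablet daily.", "...")
```

The design choice to log lengths rather than text is what makes the statelessness auditable: even the service's own telemetry cannot leak a client's payload.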

Privacy by Design in API Architecture

GDPR Compliance for API

EU AI Act Compliance for API

Privacy in Emerging Autonomous Technologies

Ariana Nexus monitors and prepares for the privacy implications of emerging autonomous technologies that will affect its operational domains:

Autonomous Translation Systems

As machine translation systems approach human-level quality for high-resource languages, and as quality improves for low-resource Afghan languages:

Autonomous Content Moderation

As AI content moderation becomes more automated across platforms:

Autonomous Clinical Documentation

As AI systems increasingly generate clinical documentation, discharge summaries, and patient communications:

Agentic AI Systems

As AI agents become capable of autonomous multi-step actions — browsing, email, file manipulation, data retrieval, and decision execution — without human approval for each step:

Autonomous Biometric Systems

As biometric identification, voice recognition, and facial analysis become embedded in authentication and identity verification:

Alignment with AI Privacy Frameworks

GDPR (Article 22) — Right not to be subject to solely automated decision-making with legal effects. Aligned — no automated decisions without HITL; human review right guaranteed

GDPR (Article 25) — Data protection by design and default in automated systems. Aligned — Privacy by Design applied to all AI tools and the AI Data Factory

EU AI Act (Articles 13, 14, 26) — Transparency, human oversight, deployer obligations. Aligned — HITL mandatory; AI tool disclosure in MSA; client opt-out right

EU AI Act (Article 10) — Data governance for AI system training and validation. Aligned — no-training rule; data provenance; quality assurance

NIST AI RMF — Govern, Map, Measure, Manage for AI privacy risk. Aligned — AI Tool Assessment Protocol; HITL; no-training rule

NIST Privacy Framework — Control-P, Protect-P for automated data processing. Aligned — privacy controls embedded in AI pipeline

CCPA/CPRA — Right to opt-out of automated decision-making technology. Aligned — no automated decisions; client opt-out right

HIPAA Security Rule — Safeguards for automated ePHI processing. Compliant — Copilot under BAA; Tier 2 tools excluded from PHI

White House EO 14110 (revoked January 20, 2025) — AI safety, transparency, and accountability. Historical alignment maintained — HITL, MSA disclosure, and no-training commitment retained as institutional best practice; superseded by EO 14179 (January 23, 2025)

UNESCO AI Ethics (2021) — Human oversight, transparency, accountability in AI. Aligned — HITL principle; transparent tool disclosure

OECD AI Principles — Transparency, accountability, human-centred AI. Aligned — all principles reflected in AI governance

ISO/IEC 42001:2023 — AI Management System — privacy controls. Roadmap (2028) — certification planned

Illinois BIPA — Biometric information privacy. Monitoring — no biometric systems deployed; compliance planned if adopted

Council of Europe AI Convention (2024) — Human rights in autonomous systems. Aligned — HITL and privacy protections address convention requirements

What Privacy in Autonomous Systems Means for Our Clients and Partners

For healthcare clients: AI tools that touch your patient data are limited to BAA-covered Microsoft services. No PHI is ever processed through consumer-grade AI tools. Every AI-assisted clinical output undergoes human review by qualified medical linguists. Your data is never used to train any AI model.

For government clients: CUI is never processed through any AI tool other than Microsoft services within the BAA-covered environment. No automated decisions affect individuals in government programs without human review. NIST SP 800-171 and DFARS flow-down requirements apply to all AI tool usage.

For AI labs and Big Tech: Our AI Data Factory pipeline has privacy controls at every stage — ingestion through deletion. Your proprietary training data is isolated, classified as Restricted, and governed by a strict no-training, no-repurposing policy. HITL review is documented for every deliverable.

For all clients: You have the right to opt out of AI tool usage in your engagement entirely. If AI tools are used, you will be informed through the MSA, told which tools and for what purpose, and assured that human review occurs before any output reaches you. Your data will never train any AI model — this commitment has no exceptions and no expiration.

If your organization requires AI privacy documentation, tool governance evidence, or an autonomous systems privacy briefing, contact privacy@ariananexus.com or +1 (202) 771-0224.

Maturity Roadmap

Current (2026) — Status: Operational
- HITL mandatory for all AI-assisted outputs
- AI Tool Governance framework (Tier 1/Tier 2 classification)
- No-training rule enforced
- Copilot as primary enterprise AI under BAA
- Client data boundary enforcement
- MSA disclosure for AI tool usage
- Client AI opt-out right
- AI Tool Assessment Protocol
- AI Data Factory privacy pipeline

Hardening (Q3–Q4 2026) — Status: In Planning
- AI Tool Registry formalization
- Automated PII detection in AI Data Factory (Purview classifiers)
- Cultural Intelligence API privacy architecture design
- Agentic AI governance policy development
- Enhanced AI privacy training module

Product Launch (2027) — Status: Planned
- Cultural Intelligence API with stateless privacy architecture
- API DPA and EU AI Act compliance documentation
- SOC 2 Type II (AI privacy controls)
- Sentinel SIEM deployment for autonomous behavior detection

Certification (2028) — Status: Planned
- ISO 42001 AI Management System certification
- ISO 27701 Privacy Information Management certification
- EU AI Act conformity assessment for Cultural Intelligence API
- Biometric privacy impact assessment (if applicable)

Advanced (2029–2030) — Status: Planned
- Differential privacy in AI Data Factory
- Privacy-preserving computation for AI validation
- Autonomous system privacy monitoring dashboard
- Cross-client AI privacy compliance automation

Long-Horizon (2030+) — Status: Vision
- Privacy-aware autonomous orchestration engine
- Real-time privacy impact scoring for AI decisions
- Quantum-resistant privacy for AI systems
- Decentralized privacy governance for multi-agent AI environments
- Human oversight architecture maintained through 2080 horizon

Limitation of Liability and Disclaimers

AI Tool Performance. Ariana Nexus uses AI tools to augment human capability. AI tools may produce errors, inaccuracies, or inappropriate outputs. Ariana Nexus mitigates this risk through mandatory HITL review but does not warrant that every AI-assisted output will be free from error. Human reviewers apply professional judgment to AI-assisted outputs, and the final deliverable reflects human professional responsibility.

Third-Party AI Providers. Ariana Nexus relies on Microsoft, Anthropic, OpenAI, and other AI providers for the tools described on this page. Ariana Nexus does not control these providers’ operations, model behavior, or data handling beyond the commitments documented in applicable agreements (BAA, DPA, terms of service). Ariana Nexus disclaims liability for AI provider actions that violate their own stated commitments.

No-Training Rule Enforcement. The no-training rule depends on the contractual commitments of AI providers. Ariana Nexus selects providers with documented no-training commitments and monitors compliance through published transparency reports and contractual provisions. However, Ariana Nexus cannot independently verify the internal data handling practices of AI providers.

Emerging Technology. Sections of this page addressing emerging autonomous technologies (agentic AI, autonomous translation, biometric systems) describe Ariana Nexus’s current positions and planned governance approaches. These are forward-looking statements that will be updated as technologies and regulations evolve.

Limitation of Liability. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, ARIANA NEXUS’S TOTAL AGGREGATE LIABILITY FOR ALL CLAIMS ARISING OUT OF OR RELATED TO AI TOOL USAGE, AUTONOMOUS SYSTEMS, OR AUTOMATED PROCESSING SHALL NOT EXCEED THE AMOUNTS SET FORTH IN THE APPLICABLE ENGAGEMENT AGREEMENT, OR, WHERE NO ENGAGEMENT AGREEMENT EXISTS, ONE HUNDRED DOLLARS ($100). ARIANA NEXUS SHALL NOT BE LIABLE FOR ANY INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, PUNITIVE, OR EXEMPLARY DAMAGES ARISING FROM OR RELATED TO AI-ASSISTED OUTPUTS, AUTOMATED PROCESSING, OR AI TOOL PERFORMANCE. NOTHING IN THIS SECTION SHALL LIMIT OR EXCLUDE ARIANA NEXUS’S LIABILITY FOR: (A) FRAUD OR FRAUDULENT MISREPRESENTATION; (B) DEATH OR PERSONAL INJURY CAUSED BY NEGLIGENCE; OR (C) ANY OTHER LIABILITY THAT CANNOT BE EXCLUDED OR LIMITED BY APPLICABLE LAW, INCLUDING BUT NOT LIMITED TO LIABILITY UNDER THE UK UNFAIR CONTRACT TERMS ACT 1977, THE UK CONSUMER RIGHTS ACT 2015, OR GDPR.

Dispute Resolution. Any dispute arising out of or relating to this page shall be subject to the dispute resolution provisions in the Terms of Use, Section 18.

This page is provided for informational purposes and does not constitute legal advice, a warranty, a guarantee, or a binding commitment regarding Ariana Nexus's AI governance, autonomous systems privacy practices, or compliance posture. AI technologies and regulations are evolving rapidly, and capabilities described herein are subject to change. Nothing in this page shall be construed as a waiver of any right, defense, or immunity available to Ariana Nexus under applicable law.