Document ID: AN-SEC-ZTP-001
Version: 1.1
Classification: Public
Effective: Mar 22, 2026
Next Review: Sep 22, 2026
Reviewed By: CEO & Compliance Team

The Principle

Every AI system carries risk. A language model trained on imbalanced data will produce biased outputs. A clinical NLP system validated without Afghan-language test cases will fail when deployed in a hospital serving Afghan patients. A content moderation algorithm calibrated to Western cultural norms will misclassify Afghan political speech. A machine translation system optimized for Kabuli Dari will produce dangerous errors when processing Herati medical terminology.

The question is not whether AI systems carry risk. The question is whether the organization that validates, annotates, moderates, and governs those systems has a framework for identifying, measuring, treating, and monitoring that risk — systematically, transparently, and with the institutional courage to decline engagements that cross ethical lines.

Ariana Nexus maintains a formal AI Risk Governance Model that governs every AI-related activity the organization undertakes — both the AI tools it uses internally and the AI services it delivers to clients. The model is built on the NIST AI Risk Management Framework, aligned with the EU AI Act’s risk-tiered obligations, and enforced by the CEO and Compliance Team with authority to accept, condition, or reject any AI engagement based on its risk profile.

Governance Structure

Decision Authority

AI governance decisions at Ariana Nexus are made by the CEO and Compliance Team, who hold authority over:

Compliance Team Responsibilities

The Compliance Team supports the CEO in AI governance through:

Planned: AI Governance Advisory Board (2027)

As Ariana Nexus scales its AI service portfolio, a formal AI Governance Advisory Board is planned:

AI Risk Assessment Framework

Pre-Engagement AI Risk Assessment

Before accepting any AI service engagement — validation, annotation, content moderation, RLHF, or cultural advisory for AI systems — Ariana Nexus conducts a formal AI Risk Assessment. The assessment evaluates the engagement against five risk dimensions:

Dimension 1: AI System Risk Classification

The engagement is classified based on the risk level of the AI system the client is developing, deploying, or operating:

Unacceptable — Article 5 — Prohibited Practices. AI systems that manipulate, exploit, perform social scoring, or enable real-time mass biometric surveillance. Engagement categorically declined; no exceptions.

High-Risk — Annex III — High-Risk Systems. AI systems used in healthcare, biometrics, critical infrastructure, education, employment, law enforcement, immigration, and justice. Engagement accepted with enhanced governance: dedicated risk owner, elevated QA, ethics review, and client disclosure requirements.

Limited-Risk — Transparency Obligations. AI systems with transparency obligations (chatbots, emotion recognition, deepfake generation). Engagement accepted with standard governance plus transparency compliance verification.

Minimal-Risk — No Specific Obligations. AI systems with minimal regulatory burden (spam filters, AI-assisted search, recommendation systems). Engagement accepted with standard governance.

Dimension 2: Data Sensitivity Assessment

PHI (Protected Health Information) — Critical. BAA required; HIPAA controls; minimum necessary; healthcare-specific QA

CUI (Controlled Unclassified Information) — Critical. NIST 800-171 controls; U.S.-person access only; DFARS compliance

Sensitive Population Data (refugees, asylum seekers, religious minorities, atheists) — Critical. Enhanced protections per Sensitive Populations & Scholar Safety page; named-individual access; hostile-actor prevention

PII (Personally Identifiable Information) — High. Restricted classification; DPA required; GDPR/CCPA compliance

Proprietary/Trade Secret Data — High. NDA required; engagement-specific isolation; Confidential classification

General Content (no PII, no regulated data) — Standard. Standard controls; Internal or Confidential classification per content

Dimension 3: Harm Potential Assessment

The assessment evaluates the potential harm that could result from AI system failure, bias, or misuse in the context where Ariana Nexus’s services contribute:

Physical safety — Could AI failure cause physical harm to individuals? Medical AI, autonomous vehicles, industrial safety, military/defense applications

Psychological harm — Could AI outputs cause psychological distress, trauma, or manipulation? Mental health AI, child-facing systems, content moderation of violent/extremist material

Legal consequences — Could AI outputs affect legal rights, liberty, or due process? Immigration AI, criminal justice AI, legal document analysis, employment screening

Financial harm — Could AI failure cause significant financial loss? Financial AI, insurance AI, credit scoring

Discrimination — Could AI perpetuate or amplify bias against protected groups? Any AI processing demographic data, hiring AI, healthcare triage AI

Cultural harm — Could AI misrepresent, erase, or distort Afghan cultural, linguistic, or religious identity? Translation AI, content generation, educational AI, voice synthesis for Afghan languages

Reputational harm — Could the engagement damage Ariana Nexus’s reputation or mission? Controversial clients, surveillance applications, weapons systems

Sovereignty harm — Could AI undermine data sovereignty or enable foreign government targeting of vulnerable populations? Cross-border AI systems, government surveillance AI, biometric databases

Dimension 4: Client Due Diligence

The assessment evaluates the client’s own AI governance maturity:

Dimension 5: Regulatory and Compliance Assessment

Assessment Outcome

Each AI Risk Assessment produces one of three outcomes:

Accepted: The engagement meets all ethical, legal, and operational criteria. Standard governance applies.

Conditionally Accepted: The engagement presents manageable risks that require additional controls, contractual protections, or governance measures. Conditions are documented and must be satisfied before work begins.

Declined: The engagement conflicts with Ariana Nexus’s AI ethics principles, involves prohibited practices, presents unacceptable risk to vulnerable populations, or fails client due diligence. The engagement is formally declined with documented rationale.

Assessment results are retained in Ariana Nexus compliance records and are available for client audit upon request (with appropriate confidentiality protections for declined-engagement rationale).
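As an illustration only — not Ariana Nexus’s actual scoring logic — the three-outcome decision described above can be sketched as a simple rule function. The tier names, data-class labels, and rules here are assumptions made for the example:

```python
# Illustrative sketch of the three-outcome engagement decision.
# Tier and data-class labels follow the dimensions described above;
# the exact rules are hypothetical, not the firm's documented logic.

def assess_engagement(risk_tier: str, data_class: str,
                      passes_due_diligence: bool) -> str:
    """Return 'Declined', 'Conditionally Accepted', or 'Accepted'."""
    if risk_tier == "unacceptable" or not passes_due_diligence:
        # Prohibited practice (Dimension 1) or failed client due
        # diligence (Dimension 4): automatic decline.
        return "Declined"
    if risk_tier == "high" or data_class in {"PHI", "CUI", "sensitive-population"}:
        # Enhanced-governance conditions must be satisfied first.
        return "Conditionally Accepted"
    return "Accepted"  # standard governance applies

print(assess_engagement("high", "PII", True))  # Conditionally Accepted
```

In practice the real assessment weighs all five dimensions together; the point of the sketch is only that the outcome set is closed and the prohibited-practice check is absolute.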

NIST AI Risk Management Framework Alignment

Ariana Nexus’s AI Risk Governance Model is aligned with all four functions of the NIST AI Risk Management Framework (AI RMF 1.0):

GOVERN — Cultivate a Culture of Risk Management

GOVERN 1 — Policies and procedures — AI Governance Policy documented; six core principles; five prohibited practices; AI Tool Assessment Protocol

GOVERN 2 — Accountability structures — CEO and Compliance Team governance authority; named risk owners for high-risk engagements; AI Advisory Board planned (2027)

GOVERN 3 — Workforce diversity and competency — Afghan diaspora subject-matter experts; multilingual team; Cornell, Harvard, Oxford-educated professionals; AI ethics training

GOVERN 4 — Organizational culture — HITL as non-negotiable principle; ethical engagement decline authority; continuous regulatory monitoring

GOVERN 5 — Stakeholder engagement — Client MSA disclosure; AI opt-out right; published Trust Center; planned Advisory Board with external members

GOVERN 6 — Legal and regulatory compliance — EU AI Act alignment; NIST AI RMF; EO 14179 (replacing revoked EO 14110); sector-specific regulations tracked

MAP — Contextualize AI Risks

MAP 1 — Intended purposes and context — Pre-engagement risk assessment Dimension 1 (system classification) and Dimension 3 (harm potential)

MAP 2 — Interdependencies and downstream impacts — Assessment of how Ariana Nexus’s validation, annotation, or moderation affects the AI system’s downstream behavior

MAP 3 — Affected individuals and communities — Specific focus on Afghan diaspora populations, religious minorities, women, and other vulnerable groups as affected communities

MAP 4 — Measurement and benchmarking — Evaluation rubrics, quantitative metrics (accuracy, F1, BLEU, cultural appropriateness scores), and bias assessment frameworks

MAP 5 — Risks from third-party components — Assessment of client’s AI system, data sources, and third-party integrations that may introduce risk
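For concreteness, the quantitative metrics named in MAP 4 above include the precision/recall-derived F1 score. A minimal, self-contained computation (pure Python; the counts are illustrative, not real evaluation data) looks like:

```python
# Minimal F1 computation from confusion counts, of the kind used when
# benchmarking annotation or validation output (illustrative values only).

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 80 true positives, 10 false positives, 20 false negatives
print(round(f1_score(80, 10, 20), 3))  # 0.842
```

BLEU and cultural-appropriateness scores follow the same pattern: a documented formula applied to held-out test cases, so that a reported number can be independently recomputed.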

MEASURE — Analyze and Assess AI Risks

MEASURE 1 — Appropriate methods and metrics — Structured evaluation rubrics per service type; Afghan-language-specific metrics; cultural accuracy scoring

MEASURE 2 — AI system performance — Model accuracy, hallucination rates, bias metrics, cultural appropriateness scores documented in AI Validation Reports

MEASURE 3 — Risks and impacts — Bias detection, safety evaluation, cultural harm assessment, and sensitive population impact analysis

MEASURE 4 — Continual assessment — Ongoing re-evaluation for long-term engagements; performance drift monitoring; emerging risk identification

MANAGE — Treat and Monitor AI Risks

MANAGE 1 — Risk treatment plans — Findings documented in AI Validation Reports with remediation recommendations; conditional engagement requirements

MANAGE 2 — Risk response strategies — Client notification of identified risks; engagement suspension for critical findings; AI incident response procedures

MANAGE 3 — Documentation and accountability — All assessments, reports, and decisions documented; audit trail maintained; corrective action tracking

MANAGE 4 — Continuous monitoring — Re-evaluation cadence for ongoing engagements; regulatory landscape monitoring; threat intelligence for AI-specific risks

EU AI Act Risk Governance

Ariana Nexus as AI Service Provider

Under the EU AI Act, Ariana Nexus occupies a unique position in the AI value chain: it does not develop or deploy AI systems, but it provides services (validation, annotation, moderation, RLHF) that directly contribute to the safety, accuracy, and compliance of AI systems developed by others. This creates specific obligations:

Supporting Provider Compliance (Article 10 — Data Governance): When Ariana Nexus annotates training data, validates model outputs, or moderates AI-generated content, the quality of Ariana Nexus’s work directly affects the AI system’s compliance with Article 10 data governance requirements. Ariana Nexus’s QA processes, inter-annotator agreement metrics, bias detection, and data provenance tracking provide the evidence that AI providers need to demonstrate Article 10 compliance.
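Inter-annotator agreement, cited above as Article 10 evidence, is commonly reported as Cohen’s kappa. The following is a self-contained sketch of the statistic for two annotators — illustrative, not the firm’s documented QA code, and the labels are invented for the example:

```python
# Cohen's kappa for two annotators labeling the same items --
# a standard inter-annotator agreement statistic (illustrative sketch).
from collections import Counter

def cohens_kappa(a: list, b: list) -> float:
    assert len(a) == len(b) and a, "need two equal-length, non-empty label lists"
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n          # raw agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(a) | set(b)
    expected = sum(ca[l] * cb[l] for l in labels) / (n * n)   # chance agreement
    return (observed - expected) / (1 - expected)

ann1 = ["safe", "safe", "unsafe", "safe", "unsafe", "unsafe"]
ann2 = ["safe", "unsafe", "unsafe", "safe", "unsafe", "unsafe"]
print(round(cohens_kappa(ann1, ann2), 3))  # 0.667
```

Kappa corrects raw agreement for agreement expected by chance, which is why it is preferred over simple percent agreement when label distributions are skewed.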

Supporting Deployer Compliance (Article 26 — Deployer Obligations): When Ariana Nexus validates AI systems deployed by healthcare providers, government agencies, or justice system institutions, the validation findings support the deployer’s obligation to ensure appropriate use and ongoing monitoring of high-risk AI systems.

Transparency Contributions (Article 13): Ariana Nexus’s AI Validation Reports provide documented evidence of model performance, limitations, bias findings, and cultural accuracy assessments — directly supporting the transparency requirements of the EU AI Act.

Prohibited AI Practices (Article 5)

Ariana Nexus categorically refuses to provide services for AI systems that fall within the EU AI Act’s prohibited practice categories:

These prohibitions are incorporated into Ariana Nexus’s AI Risk Assessment and engagement acceptance criteria. An engagement that involves any prohibited practice is automatically declined regardless of financial value or client relationship.

Ethical Red Lines

Engagements Ariana Nexus Will Not Accept

Beyond the EU AI Act’s prohibited practices, Ariana Nexus maintains its own ethical red lines that define the boundary of engagement acceptance:

Absolute Prohibitions (no exception, no override):

AI systems designed for mass surveillance of civilian populations.

AI systems designed for social scoring or profiling based on ethnicity, religion, national origin, or political belief.

Autonomous weapons systems or lethal autonomous AI.

AI-generated content intended to deceive, manipulate, or defraud.

AI training on data obtained through exploitation, coercion, or violation of consent.

AI systems designed to identify, locate, or target individuals for persecution based on ethnicity, religion (including atheism), gender, political opinion, or sexual orientation.

AI systems designed to suppress, censor, or manipulate information in service of authoritarian governance.

AI used to create non-consensual intimate imagery or deepfakes of real individuals.

Conditional Prohibitions (engagement declined unless conditions are met):

AI systems used in immigration enforcement: Declined unless the engagement specifically supports due process, language access, or the rights of immigrants and asylum seekers — not enforcement actions targeting vulnerable populations.

AI systems used in law enforcement: Declined unless the engagement specifically supports language access, community engagement, or the rights of individuals — not surveillance, profiling, or predictive policing targeting ethnic or religious communities.

AI systems used in military/defense contexts: Declined unless the engagement involves language services, cultural advisory, or humanitarian support — not targeting, weapons guidance, or combat operations.

Ethical Decline Process

When an engagement is declined on ethical grounds:

AI Risk Taxonomy

Ariana Nexus maintains a structured taxonomy of AI risks relevant to its operational domains:

Technical Risks

Model hallucination — AI generates confident but factually incorrect outputs. HITL validation; Cultural Hallucination Controls (separate Trust Center page)

Bias and discrimination — AI produces systematically unfair outputs across demographic groups. Bias detection framework; Afghan cultural context expertise; representativeness assessment

Data poisoning — Training data is compromised by malicious or erroneous content. QA processes; inter-annotator agreement; anomaly detection; data provenance verification

Privacy leakage — AI memorizes and reproduces personal data from training sets. No-training rule for client data; PII detection; data minimization; pseudonymization

Model drift — AI performance degrades over time as deployment context changes. Periodic re-evaluation for ongoing engagements; drift monitoring

Adversarial manipulation — AI is tricked into producing harmful outputs through crafted inputs. Adversarial testing in validation methodology; red-teaming for Afghan-language attacks
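The model-drift mitigation above — periodic re-evaluation with drift monitoring — can be sketched as a simple baseline comparison. The window sizes, scores, and tolerance below are hypothetical, chosen only to show the shape of the check:

```python
# Simple drift check: flag when mean recent accuracy falls more than
# a tolerance below the baseline established at initial validation
# (illustrative sketch; thresholds are invented for the example).

def drift_alert(baseline_acc: float, recent_scores: list,
                tolerance: float = 0.05) -> bool:
    """True when mean recent accuracy drops past the tolerance."""
    recent_mean = sum(recent_scores) / len(recent_scores)
    return (baseline_acc - recent_mean) > tolerance

baseline = 0.92                       # accuracy at initial validation
recent = [0.85, 0.84, 0.86, 0.83]     # periodic re-evaluation results
print(drift_alert(baseline, recent))  # True -> trigger re-validation
```

A triggered alert would feed the MANAGE 2 response path described later: client notification and, for critical findings, engagement suspension pending remediation.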

Cultural Risks

Cultural erasure — AI treats Afghan languages and cultures as secondary or derivative. Coverage of all 24 Afghan languages; dialect-specific validation; cultural accuracy scoring

Dialect conflation — AI fails to distinguish between Afghan dialects with clinically or legally significant differences. Dialect-specific test cases; Herati vs. Kabuli Dari validation; Hazaragi-specific evaluation

Religious misrepresentation — AI generates content that misrepresents Islamic, Shia, Hindu, Sikh, or secular perspectives. Cultural Compliance Bureau review; religious sensitivity validation; multi-perspective testing

Gender bias amplification — AI perpetuates harmful gender stereotypes in Afghan cultural context. Gender-aware evaluation criteria; female subject-matter expert review

Historical distortion — AI generates inaccurate or politically biased accounts of Afghan history. Subject-matter expert validation; multiple-source verification

Operational Risks

Engagement scope creep — Client requests services beyond the assessed and authorized scope. Engagement-specific scope documentation; change control requiring re-assessment

Subcontractor quality variance — Annotator or validator quality varies across the Collective. QA framework; inter-annotator agreement; performance evaluation; training

Regulatory change — New AI regulations change the compliance requirements for ongoing engagements. Continuous regulatory monitoring by Compliance Team; engagement-specific compliance updates

Client misuse of deliverables — Client uses validation report or annotated data for purposes beyond the agreed scope. Engagement Agreement purpose limitation clauses; DPA terms; contractual restrictions

AI Governance Across Service Domains

Healthcare AI Governance

AI systems used in healthcare — clinical NLP, diagnostic support, patient communication, pharmaceutical research — carry the highest harm potential. Ariana Nexus applies enhanced governance:

Government AI Governance

AI systems deployed by government agencies — immigration processing, law enforcement, public benefits, national security — affect individual rights and liberties:

Justice System AI Governance

AI systems used in the justice system — case management, risk assessment, evidence analysis, translation — directly affect liberty and due process:

Alignment with AI Governance Frameworks

NIST AI RMF 1.0 — Four functions: Govern, Map, Measure, Manage. Aligned — all functions implemented with documented subcategory mapping

EU AI Act (Regulation (EU) 2024/1689) — Risk-tiered obligations; prohibited practices; high-risk requirements. Aligned — risk classification integrated into engagement assessment; prohibited practices enforced

White House EO 14110 (revoked January 20, 2025) — AI safety testing, red-teaming, transparency, bias prevention. Historical alignment maintained — validation, red-teaming, and bias detection practices retained as best practice; superseded by EO 14179 (January 23, 2025), which does not include comparable testing/transparency provisions

OMB M-24-10 (under review per EO 14179) — Federal agency AI governance. Historical alignment maintained — government AI validation services support federal AI governance principles; memorandum under review following EO 14110 revocation

UNESCO AI Ethics Recommendation (2021) — Ten ethical principles for AI. Aligned — principles integrated into AI Governance Policy

OECD AI Principles (2024) — Five principles for trustworthy AI. Aligned — all five principles reflected in governance model

Council of Europe AI Convention (2024) — Human rights in AI systems. Aligned — HITL, ethical red lines, and rights-based assessment

G7 Hiroshima AI Process (2023) — Voluntary code of conduct for advanced AI. Aligned — validation and red-teaming support code compliance

ISO/IEC 42001:2023 — AI Management System standard. Roadmap (2028) — governance model designed for ISO 42001 certification

ISO/IEC 23894:2023 — AI Risk Management guidance. Aligned — risk assessment follows ISO 23894 structure

NIST AI 100-2 — Adversarial machine learning taxonomy. Aligned — adversarial threats addressed in risk taxonomy

FDA AI/ML Guidance — Clinical AI regulatory framework. Aligned — healthcare AI governance follows FDA guidance

WHO AI Ethics for Health (2021) — Six principles for AI in healthcare. Aligned — healthcare AI governance integrates WHO principles

HIPAA Security Rule — Safeguards for AI processing of ePHI. Compliant — BAA-covered environment; PHI controls enforced

What AI Risk Governance Means for Our Clients and Partners

For AI labs (Anthropic, OpenAI, Google DeepMind): Ariana Nexus evaluates every AI engagement through a five-dimension risk assessment before acceptance. Our governance model is aligned with NIST AI RMF and the EU AI Act. We will decline engagements that conflict with our ethics principles — because an organization that will accept any engagement regardless of risk is an organization that cannot be trusted with the engagements that matter.

For Big Tech platforms (Microsoft, Google, Amazon, Meta): Our AI Risk Assessment evaluates your system’s risk classification, data sensitivity, harm potential, and regulatory exposure before we begin work. Our governance structure (CEO + Compliance Team, with AI Advisory Board planned) provides accountability at the highest organizational level.

For government agencies: Our AI governance model supports NIST AI RMF adoption and federal AI governance requirements. We evaluate civil rights impact, due process implications, and fairness for every government AI engagement. Note: EO 14110 was revoked January 20, 2025; OMB M-24-10 is under review pursuant to EO 14179. Ariana Nexus maintains its governance practices as institutional best practice independent of the current federal executive posture.

For healthcare systems: Patient safety is the overriding governance criterion. If our validation reveals safety concerns in your clinical AI system, we notify you immediately and recommend suspension pending remediation — because our governance obligation is to the patient, not the deployment timeline.

For all clients: Our willingness to decline engagements on ethical grounds is not a limitation — it is a credential. It means that when we accept your engagement, you know it has cleared a governance framework that takes AI risk seriously.

If your organization requires AI governance documentation, risk assessment methodology, or an AI governance briefing, contact trust@ariananexus.com or +1 (202) 771-0224.

Maturity Roadmap

Current (2026) — AI Governance Policy documented with six principles and prohibited practices; Formal pre-engagement AI Risk Assessment (five dimensions); CEO + Compliance Team governance authority; Engagement acceptance/decline framework; Continuous regulatory monitoring; NIST AI RMF four-function alignment; EU AI Act prohibited practices enforcement; Ethical red lines documented; AI Risk Taxonomy maintained. Status: Operational.

Hardening (Q3–Q4 2026) — Automated risk assessment scoring; AI engagement risk dashboard; Sector-specific risk assessment templates (healthcare, government, justice); AI ethics training program for all AI-facing personnel; Engagement portfolio risk reporting. Status: In Planning.

Advisory Board (2027) — AI Governance Advisory Board established with external advisors; Annual AI Governance Policy review cycle; Published AI risk governance methodology; SOC 2 Type II (AI governance controls); First public AI governance thought leadership. Status: Planned.

Certification (2028) — ISO/IEC 42001 AI Management System certification; EU AI Act conformity assessment participation; Formal third-party AI governance audit; AI risk maturity assessment (independent evaluation). Status: Planned.

Advanced (2029–2030) — AI risk quantification model; Automated regulatory compliance mapping; Cross-client AI risk benchmarking; AI governance API for client integration; Participation in international AI governance standard-setting. Status: Planned.

Long-Horizon (2030+) — Autonomous AI risk monitoring with human escalation; Predictive AI risk intelligence; Multi-jurisdictional AI compliance automation; AI governance architecture maintained through 2080 horizon. Status: Vision.

Limitation of Liability and Disclaimers

AI Risk Assessment Limitations. Ariana Nexus’s AI Risk Assessment evaluates engagements based on information available at the time of assessment. AI risks are inherently uncertain and may evolve during the engagement. Ariana Nexus does not guarantee that its risk assessment will identify all risks or that risk mitigation recommendations will eliminate all harm.

Ethical Decline Discretion. Ariana Nexus’s decision to accept or decline an engagement on ethical grounds is made at its sole discretion. The ethical red lines documented on this page represent current principles that may evolve as AI technologies and regulations develop.

Client AI Systems. Ariana Nexus provides AI services (validation, annotation, moderation) but does not develop, deploy, or operate client AI systems. Clients retain full responsibility for the design, deployment, use, and governance of their own AI systems.

Framework Alignment vs. Certification. Where “aligned” is stated, the governance model follows the framework but formal certification has not been obtained unless stated otherwise.

Roadmap Items. The maturity roadmap reflects current plans. Roadmap items are forward-looking statements, not binding commitments.

Limitation of Liability. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, ARIANA NEXUS’S TOTAL AGGREGATE LIABILITY FOR ALL CLAIMS ARISING OUT OF OR RELATED TO AI RISK GOVERNANCE SHALL NOT EXCEED THE AMOUNTS SET FORTH IN THE APPLICABLE ENGAGEMENT AGREEMENT, OR, WHERE NO ENGAGEMENT AGREEMENT EXISTS, ONE HUNDRED DOLLARS ($100). ARIANA NEXUS SHALL NOT BE LIABLE FOR ANY INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, PUNITIVE, OR EXEMPLARY DAMAGES ARISING FROM OR RELATED TO AI RISK ASSESSMENT, GOVERNANCE DECISIONS, OR ENGAGEMENT ACCEPTANCE OR DECLINE. NOTHING IN THIS SECTION SHALL LIMIT OR EXCLUDE ARIANA NEXUS’S LIABILITY FOR: (A) FRAUD OR FRAUDULENT MISREPRESENTATION; (B) DEATH OR PERSONAL INJURY CAUSED BY NEGLIGENCE; OR (C) ANY OTHER LIABILITY THAT CANNOT BE EXCLUDED OR LIMITED BY APPLICABLE LAW, INCLUDING BUT NOT LIMITED TO LIABILITY UNDER THE UK UNFAIR CONTRACT TERMS ACT 1977, THE UK CONSUMER RIGHTS ACT 2015, OR GDPR.

Dispute Resolution. Any dispute arising out of or relating to this page shall be subject to the dispute resolution provisions in the Terms of Use, Section 18.

This page is provided for informational purposes and does not constitute legal advice, a warranty, a guarantee, or a binding commitment regarding Ariana Nexus’s AI governance practices or compliance posture. AI technologies and regulations are evolving, and capabilities described herein are subject to change. Nothing in this page shall be construed as a waiver of any right, defense, or immunity available to Ariana Nexus under applicable law.