Responsible AI Policy

Praxis Library governs, monitors, and ensures the responsible use of artificial intelligence across every aspect of this website. AI safety and responsible use are at the heart of what we do.


A website that educates the public on responsible AI practices has an obligation to uphold those same standards within its own operations. This policy documents the principles, governance mechanisms, risk management procedures, regulatory alignment, and continuous monitoring processes that govern how Praxis Library develops, deploys, and maintains AI-assisted educational content.

This policy applies to all AI systems used in the creation, maintenance, and delivery of Praxis Library content and tooling. It is reviewed and updated with each significant change to our AI governance practices.

Responsible AI Principles

Six foundational commitments that govern every decision at Praxis Library

Principle | Commitment | Evidence
Transparency & Disclosure | Every use of AI in content creation is disclosed through commit attribution, page badges, and the site-wide ethics ticker. | AI Assisted Building
Human Oversight | No AI output is published without human review. All strategic, editorial, and architectural decisions are made by humans. | Collaboration Model
Privacy by Architecture | Zero cookies, analytics, tracking, or telemetry. All tools process data client-side only. Standard hosting infrastructure logs (IP addresses, browser headers) are managed by our hosting provider and are not accessed or retained by Praxis. | Data Retention Policy
Accessibility & Inclusivity | WCAG AA compliant, dedicated neurodivergent resources, no fees, no account required. AI literacy for everyone. | AI for Everybody
Verification & Accuracy | All claims verified against a 4-tier institutional authority model—government agencies, peer-reviewed research, primary company sources, and research-grade editorial—with a rolling 2-year freshness window (currently 2024–2026, advancing annually) and human verification of every external link. | AI Safety & Ethics
Open Source Governance | Complete source code, content, and governance documentation publicly available on GitHub for independent verification. | GitHub Repository

Evidence of Implementation

Documented practices across the website—not aspirational goals, but verified operational procedures

RAI Requirement | Implementation | Verification
AI Disclosure | Co-Authored-By: Claude attribution in every git commit | AI Assisted Building
User Safety Education | VERIFY framework, hallucination training, bias awareness, high-stakes domain warnings | AI Safety & Ethics
Data Privacy | Zero data collection, client-side processing, no cookies, no tracking | Data Retention Policy
Cybersecurity | CSP A+ rating, zero external dependencies, no inline scripts or styles | Security Policy
Continuous Ethics Awareness | 24 rotating ethics messages delivered via site-wide ticker on every page | Ethics ticker (visible on all pages)
Technique-Level Ethics | AI ethics reminder banner injected on all 177 technique & framework pages | Every page in Learn
Source Transparency | Complete source code, content, and governance documentation on GitHub | GitHub Repository
Inclusive Design | UD/UDL principles, accessibility dashboard, dedicated neurodivergent resources | Universal Design
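As an illustration of the Cybersecurity row above, a policy of "no inline scripts or styles and zero external dependencies" corresponds to a restrictive Content-Security-Policy header along these lines. The exact directive set shown here is our assumption, not the site's actual header:

```
Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self';
  img-src 'self'; font-src 'self'; base-uri 'self'; form-action 'none';
  frame-ancestors 'none'
```

Because `script-src` and `style-src` omit `'unsafe-inline'` and list only `'self'`, browsers refuse both inline code and third-party resources, which is what a CSP evaluator rewards with an A+ grade.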

AI Systems Documentation

Formal inventory of all AI systems used in the development and delivery of Praxis Library

System | Category | Purpose | Data & Privacy
Claude Code (Anthropic) | Development | Code generation, content drafting, and architecture implementation under direct human supervision. All output undergoes mandatory human review. | Local environment only. No user data processed. Subject to hallucination; all output is factually verified before publication.
Automated Site Audit | Quality Assurance | Deterministic Python script enforcing 12 audit categories: security, accessibility, citations, broken links, and content consistency. | Not an AI model. Rule-based checks with zero probabilistic output. Runs before every deployment.
Interactive Tools (7) | User-Facing | Prompt Analyzer, Technique Finder, Preflight Checklist, Prompt Builder, Persona Architect, Hallucination Spotter, and Readiness Quiz. | 100% client-side. Zero server communication. No data stored, transmitted, or retained.
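The audit row above emphasizes rule-based, non-probabilistic checks. A minimal sketch of what one such check might look like in Python follows; the function name and messages are hypothetical, and the real 12-category tool is far more extensive:

```python
# Illustrative rule-based audit check (no AI model involved): flag inline
# <script> blocks and inline style attributes in an HTML document.
import re

def check_inline_scripts(html: str) -> list[str]:
    """Return ERROR findings for inline scripts/styles; empty list if clean."""
    findings = []
    # A <script> tag with no src= attribute is an inline script.
    if re.search(r"<script(?![^>]*\bsrc=)[^>]*>", html, re.IGNORECASE):
        findings.append("ERROR: inline <script> block found")
    # An inline style attribute violates a no-inline-styles CSP policy.
    if re.search(r'\sstyle\s*=\s*"', html, re.IGNORECASE):
        findings.append("ERROR: inline style attribute found")
    return findings
```

Because checks like this are deterministic string and structure inspections, the same input always produces the same findings, which is what makes a pre-deployment gate reliable.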

No Additional AI Systems

Praxis Library does not use analytics AI, recommendation engines, content moderation systems, advertising algorithms, A/B testing platforms, or user profiling tools. The three systems documented above represent the complete AI inventory.

Risk Self-Assessment

Identified risks associated with Praxis Library’s use of AI, with documented mitigation strategies

Assessment Scope

This self-assessment evaluates risks arising from the use of AI in the development and maintenance of Praxis Library. It is reviewed with each significant change to AI governance practices and updated in the Transparency Changelog below.

Content Accuracy & Hallucination

Risk: AI-assisted content generation may introduce factual errors, fabricated citations, or hallucinated technical information into educational materials relied upon by the public.

Likelihood: Moderate (inherent to all large language model outputs).

Mitigation Controls:

  • Mandatory human review of all AI-generated content before publication
  • Citation sourcing standards requiring .gov and .edu domains with 2024–2026 publication dates
  • Automated site audit tool with citation freshness and domain validation checks
  • Internal application of the VERIFY framework to all factual claims
  • Historical context notices on all framework pages to flag evolving information
Residual Risk Level: Low — multiple overlapping controls reduce hallucination exposure to an operationally acceptable level.
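The citation controls above can be sketched as a small validator. The window bounds and preferred domain suffixes are taken from this policy; the function itself is illustrative and is not the production audit code:

```python
# Hedged sketch of a citation freshness + domain check. Names and severity
# labels are assumptions; the real audit tool's interface is not shown here.
from urllib.parse import urlparse

FRESHNESS_WINDOW = (2024, 2026)      # rolling 2-year window, advances annually
TRUSTED_SUFFIXES = (".gov", ".edu")  # preferred sourcing domains per this policy

def check_citation(url: str, pub_year: int) -> list[str]:
    """Return findings for a single citation; empty list means it passes."""
    findings = []
    host = urlparse(url).hostname or ""
    if not host.endswith(TRUSTED_SUFFIXES):
        findings.append(f"REVIEW: {host} outside preferred domains")
    if not FRESHNESS_WINDOW[0] <= pub_year <= FRESHNESS_WINDOW[1]:
        findings.append(f"ERROR: {pub_year} outside {FRESHNESS_WINDOW} window")
    return findings
```

A pre-2024 source would trip the ERROR branch here, mirroring the ERROR-level violation the audit tool raises for stale citations.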

Bias in AI-Generated Content

Risk: The AI model used for content generation may reflect biases present in its training data, potentially producing content that is exclusionary, culturally insensitive, or inequitable.

Likelihood: Moderate (systemic in all language models).

Mitigation Controls:

  • Automated bias scanning across all 235 HTML files using a 6-category detection system: exclusionary framing, gendered language, tech exclusionary terms, ableist language, profanity, and slurs
  • Term database (bias-terms.json) loaded at runtime—editable without code changes, aligned with Google, Microsoft, Apple, and APA style guides
  • Zero-tolerance policy: profanity and slurs flagged as WARNING-level audit violations; bias terms flagged for human editorial review
  • 100% of site content scanned on every audit run—no pages exempted
  • Human editorial review for inclusive language and diverse representation
  • Dedicated neurodivergent learning resources developed with ND-informed design principles
  • Universal Design (UD/UDL) framework applied across all content
  • Open-source codebase enabling community review and bias identification
  • Prompt examples designed to model equitable and responsible AI use
Residual Risk Level: Low — automated bias scanning of all site content supplemented by human editorial oversight and community review.
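The term-database scan described above can be sketched as follows. The category names mirror two of the six listed; the JSON shape and function are assumptions, since the real bias-terms.json schema is not shown here:

```python
# Hedged sketch of a runtime-loaded bias term scan. In practice the database
# would come from json.load(open("bias-terms.json")); a tiny inline stand-in
# is used here so the example is self-contained.
import re

def scan_text(text: str, term_db: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Return (category, term) hits for human editorial review."""
    hits = []
    for category, terms in term_db.items():
        for term in terms:
            # Whole-word, case-insensitive match to avoid substring false hits.
            if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
                hits.append((category, term))
    return hits

# Illustrative stand-in for the runtime-loaded term database.
term_db = {"gendered_language": ["chairman"], "tech_exclusionary": ["whitelist"]}
```

Keeping the terms in a data file rather than in code is what makes the list editable without code changes, as the policy notes.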

AI Provider Dependency

Risk: Praxis Library relies on Anthropic’s Claude Code for AI-assisted development, creating a single-provider dependency that could affect continuity of operations.

Likelihood: Low (Anthropic maintains stable commercial operations).

Mitigation Controls:

  • Praxis Library is a static website—no runtime AI dependency exists in production
  • All AI output is committed as plain HTML, CSS, and JavaScript files
  • The website would continue to function indefinitely without any AI provider
  • Development methodology is AI-model-agnostic and could transition to any capable code assistant
  • Policy change contingency: If Claude Code becomes unavailable or terms change materially, the codebase remains fully maintainable by human developers with no AI-specific dependencies
  • No proprietary AI formats, APIs, or vendor lock-in—all output is standard HTML, CSS, and JavaScript that any developer can maintain
  • Complete development methodology documented in public repository, enabling continuity independent of any single AI provider
Residual Risk Level: Minimal — static architecture eliminates runtime dependency entirely.

Content Currency & Obsolescence

Risk: AI models have training data cutoff dates. Content generated with AI assistance may become outdated as the AI field evolves rapidly.

Likelihood: Moderate (AI landscape changes continuously).

Mitigation Controls:

  • Rolling 2-year citation freshness window (currently 2024–2026), advancing annually to maintain currency as AI research evolves
  • Historical context notices on all framework pages acknowledging the evolving nature of techniques
  • Regular content review cycles integrated into the development workflow
  • Automated audit tool flags pre-2024 citations as ERROR-level violations
Residual Risk Level: Moderate — inherent to any educational resource covering a rapidly evolving field. Mitigated through active maintenance.

Independent Verification

Transparency about how our governance claims are validated

Self-Governed with Community Oversight

Praxis Library’s governance framework is currently self-governed and community-reviewed. All governance documentation, audit tooling, source code, and policy files are publicly available on GitHub for independent verification by any party.

Interim verification mechanisms:

  • Complete source code and governance documentation publicly auditable on GitHub
  • 12-category automated audit tool with publicly documented methodology
  • Human verification loop with screenshot proof for all external citations
  • Git commit history providing full attribution and change traceability

Governance-Grade Rating (9/10)

An independent governance assessment evaluated Praxis Library’s Responsible AI framework against NIST AI RMF, EU AI Act, GDPR, and COPPA requirements. Key findings:

  • Overall strength: 9/10 — governance-grade for a low-risk system
  • Assessment: “Structured governance” — not aspirational ethics but operational controls
  • Exceeds what most educational AI projects publish
  • Governance maturity comparable to mid-tier research labs and policy-aligned open projects
Verdict: Coherent, operationalized, framework-aligned, transparently documented, and proportionate to risk level.

Formal Third-Party Audit

Praxis is committed to pursuing formal independent audit certification as the project scales. Current priorities:

  • Complete human verification of all 180+ external citations through the Bas/AI Trust Loop
  • Evaluate formal audit frameworks (SOC 2, ISO 27001) appropriate for educational platforms
  • Engage independent auditor when user base and organizational scale warrant the investment
  • Continue strengthening automated governance controls and public documentation
Status: Planned — open-source transparency and community review serve as interim independent verification.

Regulatory Compliance Matrix

Alignment with recognized AI governance and data protection frameworks

Framework | Requirement | Status | Notes
NIST AI RMF | GOVERN: Policies and accountability structures | Aligned | This policy, open governance via GitHub, documented accountability chain
NIST AI RMF | MAP: Context and risk identification | Aligned | Risk self-assessment above; low-risk educational use case classification
NIST AI RMF | MEASURE: Analysis, monitoring, and metrics | Aligned | 12-category automated audit, citation freshness enforcement, security scanning
NIST AI RMF | MANAGE: Risk response and communication | Aligned | GitHub Issues for incident reporting, public transparency changelog
EU AI Act | Risk classification | Minimal Risk (Tier 1) | Educational website with no personal data processing and no high-risk AI deployment; voluntary transparency measures exceed tier requirements
EU AI Act | Transparency obligations for AI-generated content | Compliant | Full AI disclosure in commits, dedicated transparency page, site-wide ethics ticker
EU AI Act | General-purpose AI model provisions | N/A | Praxis uses AI tools; it does not provide, distribute, or deploy AI models or services
GDPR | Data minimization and purpose limitation | Compliant by Design | No intentional data collection by Praxis. Standard hosting infrastructure logs managed by provider under their own retention policies.
GDPR | Rights to access, rectification, and erasure | N/A | Praxis does not store personal data. Hosting provider infrastructure logs are subject to the provider’s own GDPR obligations.
GDPR | Data protection by design and default | Compliant by Design | Client-side-only architecture; CSP A+ security; no external data processors
COPPA | Children’s online privacy protection | Compliant | No data collection from any user regardless of age; no account creation required
US State AI Laws | AI transparency requirements (48 states) | Compliant | AI disclosure on every page via ethics ticker, dedicated AI-Assisted Building page

Compliance Posture

Praxis Library is an educational website that does not process personal data, make automated decisions about individuals, or deploy high-risk AI applications. This compliance matrix documents voluntary alignment with recognized governance frameworks. It does not constitute legal advice or a formal legal compliance certification. Organizations seeking regulatory compliance guidance should consult qualified legal counsel.

Continuous Monitoring & Evaluation

Systematic processes that verify ongoing compliance with the standards documented in this policy

Praxis Library verifies compliance through six systematic monitoring processes that run continuously:

  • Automated Site Audit — A 12-category Python tool performs 100+ checks across all HTML files before every deployment, covering security, accessibility, citations, and content consistency.
  • Citation Freshness Enforcement — All external citations are validated against a 4-tier institutional authority model (government agencies, research bodies, primary company sources, research-grade editorial) with publication dates between 2024 and 2026. Every external link requires human verification with screenshot proof. Pre-2024 sources trigger ERROR-level violations.
  • Security Scanning — CSP A+ rating verified continuously. Zero inline scripts, zero inline styles, and zero external dependencies enforced by audit tool and deployment architecture.
  • Accessibility Verification — WCAG AA compliance audited programmatically: language attributes, alt text, heading hierarchy, keyboard navigation, and 4.5:1 contrast ratios.
  • Content Consistency — Automated cross-referencing of page counts, glossary terms, tool inventories, and navigation links against documented values to detect drift.
  • Community Review — Open-source GitHub repository enables continuous public review through issues, pull requests, and discussions.
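The 4.5:1 contrast check in the accessibility bullet above follows the WCAG 2.x definitions of relative luminance and contrast ratio. A worked sketch of that calculation (the helper names are ours; the formula is from the WCAG specification):

```python
# WCAG 2.x contrast ratio: linearize sRGB channels, compute relative
# luminance, then ratio the lighter luminance against the darker one.
def _linearize(channel: int) -> float:
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white yields the maximum ratio of 21:1, while #767676 gray on white sits just above the 4.5:1 WCAG AA threshold for normal text, which is why an automated audit can enforce this check numerically.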

Incident Reporting & Response

Procedures for reporting concerns related to AI governance, content accuracy, bias, or safety

Praxis Library maintains clear pathways for reporting concerns. We take every report seriously and respond promptly, whether it involves a factual error, an outdated citation, biased language, or a governance question. Transparency and accountability guide every step of our response process.

Email Us (Primary)

Send your concern directly to thepraxislibrary@gmail.com. Include the page URL, a description of the issue, and any supporting details. We read every message and aim to acknowledge reports within 48 hours. This is the fastest way to reach us for any content, accuracy, or governance concern.


GitHub Issues

For public tracking, open an issue on the Praxis Library repository. Use the label rai-concern for responsible AI matters or content-accuracy for factual errors, hallucinated citations, outdated information, or biased content. All issues are publicly visible and tracked to resolution.

Pull Requests

See something you can fix? We welcome community contributions. Fork the repository, make your changes, and submit a pull request. Please include a clear description of what you changed and why, reference any related issue numbers, and ensure your changes follow our existing code notation standards. Every contribution is reviewed by a maintainer before merging.

Help Us Praxis What We Preach

Praxis Library is built on the principle that AI literacy should be accurate, inclusive, and continuously improving. We encourage readers, educators, researchers, and practitioners to review our content with a critical eye. If a citation looks stale, a definition feels incomplete, or an example could be more inclusive, let us know. The best educational resources are shaped by the communities they serve, and your feedback makes this library stronger for everyone.

Response Commitment

Praxis Library commits to acknowledging all reports within 48 hours via email or within 7 days on GitHub issue threads. For content accuracy concerns involving active misinformation, we prioritize same-day review. For urgent matters such as harmful content or security vulnerabilities, email us directly for the fastest response. All response actions are documented publicly when resolved through GitHub.

Transparency Changelog

Public record of significant changes to AI governance practices at Praxis Library

Date | Change | Impact
2026-02-11 | Responsible AI Policy published | Formal RAI governance framework established; compliance matrix, risk self-assessment, and incident reporting procedures documented
2026-02-11 | Site-wide number accuracy audit | All page counts, technique counts, and tool counts verified against filesystem and corrected across 157+ files
2026-02-10 | AI Benchmarks system launched | 9 company benchmark pages with verified performance data from official provider sources; 53 models documented
2026-02-09 | Citation sourcing standards formalized | 4-tier institutional authority model replacing domain-based filtering, 2024–2026 freshness policy, human verification workflow, and automated enforcement codified
2026-02-08 | Site audit tool expanded to 12 categories | Automated quality assurance coverage expanded to include documentation, citation validation, and content consistency
2026-02-01 | Ethics ticker expanded to 24 messages | Continuous responsible AI messaging increased from initial set to 24 rotating messages
2026-01-15 | AI ethics banner deployed on framework pages | Responsible AI reminder banner injected on all 177 technique & framework pages via DOM API
2025-12-01 | Data retention policy published | Formal documentation of zero-collection data architecture and user privacy rights