Policy on the Acceptable Use of Artificial Intelligence in Software Development
Purpose and Scope
This document establishes the formal governance, security, and operational framework for the use of Artificial Intelligence (AI) and autonomous agentic coding tools in the software development lifecycle. This policy specifically governs the relationship between two parties:
- The Product Company: The entity that owns the proprietary software, intellectual property (IP), and data.
- The Developer: The individual software engineer assigned to write, review, and commit code for the Product Company.
Section 1: Data Privacy, Confidentiality, and Sovereignty
The Risk: Public AI models may retain prompts and codebase inputs to train future models, potentially exposing the Product Company's trade secrets, proprietary algorithms, and Personally Identifiable Information (PII) to the public domain.
- The Developer's Responsibilities:
  - Must sanitize all inputs and must never enter PII, API keys, passwords, or sensitive business logic into any AI tool.
  - Must use strictly segregated, company-provisioned AI accounts; the use of personal AI accounts (e.g., consumer ChatGPT or Claude) for Product Company code is explicitly forbidden.
- The Product Company's Responsibilities:
  - Must provision and fund "Enterprise-tier" AI tools equipped with strict Zero Data Retention (ZDR) agreements, ensuring the AI vendor cannot view, log, or train models on the Product Company's codebase.
  - Must configure safe data boundaries, such as enforcing "Privacy Mode" centrally across all managed development environments.
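For illustration only, the input-sanitization requirement above could take the form of a pre-submission filter. The patterns and placeholder labels below are illustrative assumptions, not a mandated detection list; a production deployment would rely on a data-loss-prevention tool vetted by the Security team.

```python
import re

# Illustrative redaction patterns; a real deployment would use a vetted
# DLP library with patterns approved by the Security team.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|pk|api)[-_][A-Za-z0-9]{16,}\b"),
    "AWS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def sanitize_prompt(text: str) -> str:
    """Replace sensitive matches with typed placeholders before the
    prompt leaves the developer's machine."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text
```

A filter of this kind complements, but does not replace, the requirement to use ZDR-backed enterprise accounts: sanitization limits what leaves the workstation, while ZDR limits what the vendor may retain.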
Section 2: Code Security and Vulnerability Management
The Risk: AI assistants generate code based on public internet patterns, frequently introducing outdated libraries or critical vulnerabilities (e.g., OS command injections, improper input validation). Blind trust in AI ("vibe coding") without rigorous review bypasses traditional security gates.
- The Developer's Responsibilities:
  - Must adopt a "Zero Blind Trust" posture, treating all AI-generated code as if it were authored by an untrusted junior contributor.
  - Must manually trace AI logic, verify dependencies, and ensure comprehensive unit testing is applied before submitting a pull request.
- The Product Company's Responsibilities:
  - Must integrate automated Static and Dynamic Application Security Testing (SAST/DAST) into the CI/CD pipeline to block pull requests containing critical vulnerabilities or hardcoded secrets before deployment.
  - Must implement AI Security Posture Management (AI-SPM) to continuously monitor model behavior and block unsafe outputs in real time.
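The CI/CD gate described above can be sketched as a simple diff scan that blocks a merge when findings are present. The secret signatures shown are illustrative examples only; a real pipeline would use a maintained SAST or secret-scanning tool rather than a hand-rolled list.

```python
import re

# Illustrative secret signatures; not an exhaustive or mandated list.
SECRET_SIGNATURES = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),      # inline password literal
]

def gate_pull_request(diff: str) -> list[str]:
    """Return findings; CI blocks the merge when the list is non-empty."""
    findings = []
    for lineno, line in enumerate(diff.splitlines(), start=1):
        for sig in SECRET_SIGNATURES:
            if sig.search(line):
                findings.append(f"line {lineno}: possible hardcoded secret")
    return findings
```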
Section 3: Intellectual Property (IP) and Open-Source Licensing
The Risk: AI tools may inadvertently reproduce verbatim snippets of code protected by restrictive "copyleft" licenses (e.g., GPL, AGPL), which could legally force the Product Company to open-source its proprietary software. Furthermore, code generated entirely by AI without sufficient human creative input cannot be copyrighted, creating an "ownership void".
- The Developer's Responsibilities:
  - Must abstain from "Accept All" workflows. The Developer must actively modify, architect, and structurally refine AI outputs to guarantee a threshold of human creative contribution necessary for copyright protection.
- The Product Company's Responsibilities:
  - Must implement automated snippet-scanning tools to continuously audit the codebase for open-source license contamination.
  - Must exclusively procure AI vendor contracts that provide explicit IP indemnification, protecting the Product Company from third-party copyright infringement claims.
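The snippet-scanning requirement above can be sketched as a search for copyleft license markers. This is a deliberately simplified illustration: the marker strings are common indicators, but dedicated snippet-matching services compare code against indexed open-source corpora rather than searching for license text alone.

```python
# Illustrative copyleft markers that warrant legal review when found in
# vendored or AI-suggested code; not a complete list.
COPYLEFT_MARKERS = (
    "GNU General Public License",
    "GNU Affero General Public License",
    "SPDX-License-Identifier: GPL",
    "SPDX-License-Identifier: AGPL",
)

def flag_copyleft(source: str) -> bool:
    """True when the text carries a marker that warrants legal review."""
    return any(marker in source for marker in COPYLEFT_MARKERS)
```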
Section 4: Autonomous Agents and System Access
The Risk: Modern agentic tools are capable of autonomous reasoning, multi-file editing, executing terminal commands, and browsing the web. If improperly constrained, these agents pose severe risks of unauthorized system modification, prompt injection attacks, and the downloading of malicious external packages.
- The Developer's Responsibilities:
  - Must run autonomous agents strictly within sandboxed or isolated environments.
  - Must configure AI tools to mandate human approval ("Request Review") before executing any terminal commands or network fetches.
- The Product Company's Responsibilities:
  - Must establish technical guardrails, including explicitly denying high-risk terminal commands (e.g., curl, wget) and enforcing URL allow-lists to prevent agents from interacting with unauthorized web domains.
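The guardrails above can be sketched as a pre-execution policy check that an agent harness consults before running a command or fetching a URL. The deny-list and allow-list entries here are placeholders chosen for illustration; the authoritative lists are defined and maintained by the Product Company's security team.

```python
from urllib.parse import urlparse

# Placeholder policy; real entries are set centrally by the security team.
DENIED_COMMANDS = {"curl", "wget", "ssh", "scp"}
ALLOWED_DOMAINS = {"docs.internal.example", "pypi.org"}

def command_allowed(command: str) -> bool:
    """Reject any agent-issued shell command whose first token is denied."""
    tokens = command.strip().split()
    return bool(tokens) and tokens[0] not in DENIED_COMMANDS

def url_allowed(url: str) -> bool:
    """Permit network fetches only to allow-listed domains."""
    return urlparse(url).hostname in ALLOWED_DOMAINS
```

A deny-list on the command name alone is easy to bypass (e.g., via a shell wrapper), which is why the policy pairs it with sandboxing and mandatory human approval rather than relying on any single control.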
Section 5: Tool Governance and Shadow AI
The Risk: The use of unvetted, unauthorized AI tools by personnel ("Shadow AI") circumvents all established legal, security, and privacy protections, causing total loss of visibility and governance.
- The Developer's Responsibilities:
  - Must use only the AI coding assistants and platforms that the Product Company has officially vetted, approved, and licensed.
- The Product Company's Responsibilities:
  - Must maintain a centralized, clearly communicated inventory of approved AI tools.
  - Must establish an AI Governance Committee (comprising Legal, IT, and Security) to continuously monitor network traffic for shadow AI usage and vet new tooling requests.
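The monitoring duty above can be sketched as a check of egress logs against the approved-tool inventory. The domain names and keyword hints below are placeholders for illustration, not an actual approved list or a reliable detection heuristic; production monitoring would use the network security stack's own classification.

```python
# Placeholder inventory; the real list comes from the approved-tool registry.
APPROVED_AI_DOMAINS = {"api.approved-vendor.example"}

def detect_shadow_ai(egress_hosts: list[str]) -> list[str]:
    """Return destinations that look like unapproved AI endpoints."""
    ai_hints = ("openai", "anthropic", "gemini", "copilot")  # illustrative
    flagged = []
    for host in egress_hosts:
        if host in APPROVED_AI_DOMAINS:
            continue  # sanctioned tool; no action needed
        if any(hint in host for hint in ai_hints):
            flagged.append(host)
    return flagged
```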
Section 6: Compliance, Auditing, and Export Controls
The Risk: AI-assisted software development must remain auditable to satisfy strict regulatory frameworks (e.g., SOC 2, HIPAA, ISO 27001, EU AI Act) and international export control laws (EAR/ITAR) governing sensitive technologies.
- The Developer's Responsibilities:
  - Must transparently tag code commits or pull requests that were substantially generated by AI to ensure accurate provenance and traceability.
- The Product Company's Responsibilities:
  - Must mandate the generation of a Software Bill of Materials (SBOM) for all builds to track AI-generated components.
  - Must map AI risks against established frameworks, such as the NIST AI Risk Management Framework (AI RMF), to ensure demonstrable regulatory compliance.
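The tagging duty above can be sketched with a commit-message trailer that audit tooling reads back during compliance reviews. The trailer name "AI-Assisted" is a convention assumed here for illustration, not a standard mandated by this policy.

```python
from typing import Optional

# Illustrative trailer convention for marking AI-assisted commits.
AI_TRAILER = "AI-Assisted:"

def ai_provenance(message: str) -> Optional[bool]:
    """Return True/False for a labelled commit, None when the trailer is
    absent (i.e., the commit's provenance is unrecorded)."""
    for line in message.splitlines():
        if line.startswith(AI_TRAILER):
            return line.split(":", 1)[1].strip().lower() == "true"
    return None
```

Audit tooling can then treat unlabelled commits as a process gap to follow up on, rather than silently assuming human authorship.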