Key Takeaway
An acceptable use policy that provides clear examples of both permitted and prohibited uses achieves higher compliance than one that only lists restrictions.
When to Use This Template
Use this template to set clear expectations for how employees and contractors may use AI tools in their work. It covers both internally developed AI systems and third-party tools such as code assistants, chatbots, and image generators. Distribute the policy to all employees and include it in onboarding materials. Update it when new AI tools are adopted or when incidents reveal gaps in current guidance.
Policy Sections
Permitted Uses
List approved AI tools and their permitted use cases. Include specific examples: code assistants for writing and reviewing code (with human review before committing), chatbots for drafting internal communications (with human editing), translation tools for non-confidential content, and summarization tools for public information. Specify the data classification levels that can be used with each tool (e.g., public and internal data only, never confidential or restricted).
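The tool-to-classification mapping above can be kept in machine-readable form so it is easy to audit and to embed in tooling. A minimal sketch, assuming hypothetical tool names and classification labels (these are illustrative, not part of the policy text):

```python
# Hypothetical approved-tools registry: tool name -> data classifications
# it may be used with. Names and labels are illustrative only.
APPROVED_TOOLS = {
    "code-assistant": {"public", "internal"},
    "chatbot": {"public", "internal"},
    "translation": {"public"},
    "summarizer": {"public"},
}

def is_use_permitted(tool: str, data_classification: str) -> bool:
    """Return True if the tool is approved for the given data classification."""
    # Unregistered tools default to an empty set, so every use is denied.
    return data_classification in APPROVED_TOOLS.get(tool, set())

print(is_use_permitted("code-assistant", "internal"))   # True
print(is_use_permitted("translation", "confidential"))  # False
```

Defaulting unknown tools to "deny" mirrors the policy's stance that unregistered tools must not be used.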
Prohibited Uses
List specific prohibited uses with examples: inputting customer PII into third-party AI tools, using AI to make autonomous hiring or termination decisions, generating content that impersonates real people, using AI outputs without human review in customer-facing communications, and bypassing security controls with AI tools. Explain the rationale for each prohibition so employees understand the risk being mitigated.
Tool Approval Process
Define the process for requesting approval to use a new third-party AI tool: submit a registration form identifying the tool, use case, data types involved, and security review results. Maintain an approved tools list that is updated as tools are evaluated. Employees must not use unregistered AI tools for work purposes. This process prevents shadow AI usage that creates data protection and security risks.
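The registration form described above can be modeled as a simple record. A minimal sketch, assuming a hypothetical approval rule (security review passed and no restricted data types involved); the field names and rule are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ToolRegistration:
    """One AI tool registration request (illustrative fields)."""
    tool_name: str
    use_case: str
    data_types: list          # e.g. ["public", "internal"]
    security_review_passed: bool
    approved: bool = False

def review(reg: ToolRegistration) -> ToolRegistration:
    # Hypothetical rule: approve only if the security review passed
    # and no restricted data types are involved.
    reg.approved = reg.security_review_passed and "restricted" not in reg.data_types
    return reg

req = ToolRegistration("image-gen", "marketing drafts", ["public"], True)
print(review(req).approved)  # True
```

A lightweight, scriptable record like this keeps the process fast, which (as noted below) reduces the incentive to bypass it.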
Human Review and Enforcement
Define human review requirements: all AI-generated code must be reviewed before deployment, all AI-generated customer communications must be reviewed by a human before sending, and all AI-generated analysis must be verified against source data. Define enforcement: a first violation results in a conversation with the employee's manager; repeated violations result in escalation to HR and potential revocation of tool access.
Customization Guidance
Customize the permitted and prohibited uses lists for your organization's specific risk profile and industry requirements. Regulated industries (healthcare, finance, legal) will need more restrictive policies around AI-generated content and decision-making. Technology companies may have more permissive policies for internal tool usage. The key is to be specific enough that employees can make clear decisions without requesting approval for every use.
Review this policy when any new AI tool is widely adopted within the organization. Shadow AI usage (employees using unauthorized AI tools) is a significant data protection risk. Make the registration process fast and lightweight to reduce the incentive to bypass it.
Version History
1.0.0 · 2026-03-01
- Initial AI acceptable use policy template