This repository contains a sample AI Integration and Experimentation Policy. The policy outlines our approach to the ethical use of AI, promotion of experimentation, and protection against data breaches. It is designed to guide the integration of AI technologies, particularly Large Language Models (LLMs).

AI Integration and Experimentation Policy


Ethics and Transparency

  • Employees are expected to uphold the highest ethical standards in their use of AI, including ensuring fairness, transparency, and respect for privacy.
  • Any use of AI must comply with our organization's existing data privacy and confidentiality policies to ensure the protection of sensitive information.

Promotion of Experimentation

  • Employees are encouraged to experiment with AI technologies within the guidelines of our policies.
  • Organizational leaders are urged to promote a culture of innovation and openness, emphasizing the importance of calculated risk-taking in AI experimentation.
  • Feedback mechanisms should be established where employees can voice their experiences, challenges, and successes in AI experimentation.
  • Our organization encourages professional development and training in AI technologies; educational stipends or dedicated time for such activities may be provided.

Protection Against Data Breaches

  • Employees are strictly prohibited from disclosing sensitive information to AI technologies, particularly cloud-based LLMs. Examples include:
    • Campaign strategies or plans
    • Client or donor personal data, such as email addresses, phone numbers, or other personally identifiable information
    • Employee personal data
    • Non-public research data or findings
    • Software source code
    • Any information subject to non-disclosure agreements (NDAs)
  • Any significant use of AI technologies, especially for new projects or for integration into existing processes, should be reported to managers or supervisors.

Evaluating and Mitigating AI Risks

  • Our organization will conduct regular risk assessments of our AI use, taking into account potential threats to data security, privacy, and ethical standards.
  • Our organization will maintain a dedicated team or designate a person responsible for overseeing the ethical use of AI and addressing any issues that arise.

Respect for Industry Professionals

  • AI integration is intended to facilitate and enhance the work of industry professionals, not to replace them.
  • We encourage our employees to cultivate and refine their uniquely human capabilities, such as strategic thinking, creative problem-solving, and empathetic communication, which are indispensable as we increasingly integrate AI tools in our operations.
  • Professionals are advised not to rely excessively on AI for their work and to balance AI tools with their own skills and judgment.

Adaptability

  • This policy will be regularly reviewed and updated as necessary to adapt to the rapidly changing field of AI technology.
  • We will engage in an ongoing dialogue with peer organizations to stay informed about best practices and emerging issues in AI use.

Contributing

Contributions to this policy are welcome. If you have suggestions for improvements or additions, please follow these steps:

  1. Fork this repository.
  2. Create a new branch in your forked repository.
  3. Make your changes in the new branch.
  4. Submit a pull request detailing the changes you've made.

License

This policy is released under the MIT License. You are free to use, modify, and distribute this policy, provided that you include the original copyright notice and disclaimers.