AI Policy Document: Employee Guidelines for Cloud-Hosted AI Tools

1. Introduction

  • This AI policy outlines our organization's guidelines for employees who use or develop AI tools and services hosted outside [Company], such as OpenAI, Midjourney, and Anthropic (AI Services). Of particular interest is the protection of sensitive and proprietary information, such as personal information, personal health information, trade secrets, financial information, and intellectual property. [Company] intends to take a risk-based approach to the use and governance of AI and related systems. [Company] is committed to protecting our employees, partners, and customers from damaging or illegal actions perpetrated either knowingly or unknowingly. Effective security is the responsibility of all employees, contractors, consultants, temporary workers, and other workers at [Company] who work with such tools and services. Employees must understand how and when to use these tools responsibly to avoid any misuse or violations of this policy.

2. Definitions

  • AI Services: AI tools and services hosted outside [Company], such as OpenAI, Midjourney, and Anthropic
  • Company: This company and its affiliate companies
  • Data: Any non-proprietary and non-sensitive information
  • Sensitive Data: Any data which contains confidential or proprietary information, or any elements of PHI or PII as defined by HIPAA and related laws and regulations, including any Company, customer, or third-party proprietary or confidential information and any personal information

3. General Usage Guidelines for AI Tools

  • All employees and teams are approved to use AI Services for educational, research, experimental, and testing purposes, provided that no Data that could reasonably be considered Sensitive Data is used as input without security controls and a clear, written privacy statement or, where appropriate, a signed agreement.
  • Employees are expected to understand how these services collect and use any Data provided to the service and to take appropriate measures to identify, evaluate, and mitigate risks.
  • Employees should implement and use AI Services as intended. Employees are expected to monitor and guard against misuse or inappropriate use, vulnerabilities, and emerging risks, and to take appropriate actions to address them. Employees are expected to document and report identified risks and vulnerabilities and their respective mitigations.
  • To use any third-party generative AI Services with Sensitive Data, the vendor must be approved in accordance with our vendor process, including Legal, IT, and Information Security review.
  • Employees should not use AI Services in ways that harm or undermine individuals, the business, partners, or customers, that enable criminal misuse, or that pose substantial risks to safety and security.
  • Employees are prohibited from using generative AI Services to conduct business without human review of any materials resulting from such use. This includes using such services to generate computer code, customer communication, contracts, proposals, policies, documentation, or any other material used in connection with running the business.
  • Any output produced from [Company] trademark materials used as input must be reviewed and must comply with current policies and regulations governing [Company] trademark use.
  • Company business may be conducted using tools such as Grammarly, Copilot, or similar tools that explicitly do not retain or use the submitted data for training purposes. Employees may be asked to provide evidence of a service's privacy practices and are expected to obtain approval before using these tools for any business production-related tasks.
  • Employees are expected to comply with these guidelines when using AI tools within the company. Failure to comply may result in disciplinary action.
  • The use of third-party Artificial Intelligence applications or plug-ins requires the same consideration, review, and approval process required for any other software installation and use, as outlined in the Software Installation Policy and Software Usage Policy. This approval process includes, but is not limited to: approval to purchase licenses, justification for business use, and listing authorized users.
  • To prevent the unauthorized transfer of Sensitive Data into public databases, no Sensitive Data may be used in publicly accessible Artificial Intelligence interfaces or databases without review and authorization from [Company]’s Information Security Team (an illustrative screening sketch follows this list).
  • Access to any account enabling use of Artificial Intelligence software must comply with all provisions of the Password Policy (a separate, unique password; no shared use of passwords or accounts; mandatory Multi-Factor Authentication; passwords used in company systems must be created and stored in [Company]’s approved Password Manager software) and the Password Construction Guidelines (passwords must be properly constructed according to the guidelines to create secure passwords).
  • When the authorized use of Artificial Intelligence produces results such as business content, computer code, graphics and images, or data analysis, those results remain the property of [Company].
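
As a purely illustrative sketch of the kind of security control referenced above, the Python snippet below shows a hypothetical client-side check that flags obvious Sensitive Data patterns (for example, SSN-, email-, or card-number-shaped strings) before a prompt is submitted to an external AI Service. The pattern list, function name, and workflow are assumptions for illustration only; they do not define [Company]'s actual controls and do not replace Information Security review and authorization.

```python
import re

# Hypothetical screening helper (illustration only): flags likely Sensitive
# Data before a prompt is sent to an external AI Service. Real controls and
# pattern sets are defined by [Company]'s Information Security Team.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def flag_sensitive_data(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


if __name__ == "__main__":
    prompt = "Summarize the attached notes for jane.doe@example.com."
    findings = flag_sensitive_data(prompt)
    if findings:
        # Block the submission and route it for Information Security review.
        print(f"Blocked: possible Sensitive Data detected ({', '.join(findings)}).")
    else:
        print("No obvious Sensitive Data detected; submission may proceed per policy.")
```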

4. Compliance

  • The approved uses of Artificial Intelligence will be in accordance with all other security laws, regulations, and policies, including, but not limited to, prohibitions on: inappropriate conduct; malicious activities; and violation of the laws and regulations of the United States or of any other nation, state, city, province, or other local jurisdiction.
  • The approved use is expected to comply with: any applicable license agreements for the software; employee conduct provisions, including but not limited to intellectual property, confidentiality, dissemination of company information, standards of conduct, misuse of company resources, and information and data security; the use of authorized data storage locations; and the prevention of unauthorized access.
  • [Company] will conduct a periodic Data Impact Assessment to document the extent to which the Company’s processing of Protected Information presents a “heightened risk of harm” to [Company], partners, and customers.
  • Compliance Measurement: [Company]’s Information Security Team must verify compliance with this policy through various methods, which may include, but are not limited to, periodic questionnaires, walk-throughs, video monitoring, business tool reports, internal and external audits, and feedback to the policy owner.
  • Exceptions: Any exception to this Artificial Intelligence Acceptable Use Policy must be approved in advance and documented by [Company]’s Information Security Team.
  • Non-Compliance: An employee found to have violated this policy may be subject to disciplinary action, up to and including termination of employment.