Policy for Artificial Intelligence (AI)

Introduction

Artificial intelligence tools and various AI assistants (collectively referred to as AI tools or AI solutions) are rapidly becoming part of work life. Hypergene embraces the use and exploration of AI solutions with an open mindset. We want to stay ahead in development while maintaining a strong commitment to responsibility, ethics and compliance.

This policy primarily addresses the use of GitHub Copilot, Microsoft Copilot, and OpenAI ChatGPT products, and will be updated and supplemented as new tools are adopted within the company.

Information Security and Data Protection when using AI

AI tools must be used in accordance with the General Data Protection Regulation (GDPR) and ISO 27001, including the principles of data minimization, purpose limitation and appropriate security measures.

Intellectual property and regulatory risks must also be considered. AI-related legislation continues to evolve, and future changes may impact the permitted use of these tools. Legislative requirements are reviewed annually as part of our Information Security Management System review.

Additionally, all employees are responsible for reporting any potential risks arising from new or changing regulations. The same obligation applies to changes in the terms of use of the AI solutions they use.

Permitted Use

AI tools may be used to support tasks such as text generation, content suggestions, analysis and automation – provided the use complies with applicable laws, internal guidelines and ethical standards.

Employees who create and distribute content based on AI-generated output, including text, code or images, are responsible for the correctness of that content and for ensuring it does not infringe any intellectual property rights (IPR).

AI may be used for defined marketing purposes, such as customer segmentation, personalization, campaign optimization or content generation. If AI is used in marketing activities, such as automated recommendations or profiling, this should be clearly communicated. Users must be informed of how their personal data is processed and offered the option to opt out of AI-based personalization.

Restrictions

The information security settings of AI tools must be critically reviewed before use and regularly thereafter, as settings may change and evolve over time.

Inputs to AI tools must be carefully limited to ensure that no confidential customer information, personal data (PII), or information classified by the company as classified or highly classified is transferred to the tool.

Employees using AI tools must ensure that content such as images, code, audio or text used as input to the AI tool is permitted to be redistributed. Content may be copyrighted, or redistribution may be restricted by license agreements.

Business information must not be used in AI services unless it can be reliably ensured—such as through licensing agreements—that the information will not be used for training AI models.

Copyright of AI-generated output is transferred to the company when the result is modified to suit company needs. All AI-generated content must be reviewed and refined as if it were created manually. The user is responsible for the final output.

Approval of Policy

This AI-policy is approved by Jakob Melander, CTO, Hypergene.