AI Policy
Last Updated: 15th October 2025
​
1. Purpose
​
The purpose of this Artificial Intelligence (AI) Policy is to set out how CMS Institute (Creative Media Skills Group Ltd) (“we”, “our”, or “us”) uses AI technologies responsibly, ethically, and in compliance with applicable data protection and regulatory standards.
This policy ensures transparency in our use of AI systems and demonstrates our commitment to upholding privacy, fairness, accountability, and human oversight in all AI-driven activities.
​
2. Scope
​
This policy applies to:
​
- All staff, contractors, and partners who use or manage AI tools on behalf of CMS Institute.
- All AI-driven processes, software, or systems used in the delivery of our services, communications, marketing, or internal operations.
- Any external vendors or service providers supplying AI technologies to CMS Institute.
​​
3. Our Approach to Responsible AI
​
CMS Institute is committed to using AI systems responsibly and only where they support legitimate educational, creative, and operational objectives. Our approach is guided by the following principles:
​
- Transparency - We will clearly communicate when AI systems are used and what purpose they serve.
- Fairness & Non-Discrimination - We will monitor AI outputs to ensure they do not result in bias or unfair treatment.
- Privacy & Data Protection - We will ensure all AI use complies with UK GDPR, EU GDPR, and relevant privacy legislation.
- Human Oversight - AI tools will assist, not replace, human decision-making. All final decisions affecting individuals will be made by qualified staff.
- Accountability - Responsibility for AI use rests with our management team and designated data protection officers.
- Safety & Security - We will safeguard AI systems and the data they process from misuse or unauthorised access.
​
4. How We Use AI
​
We may use AI responsibly for the following purposes:
​
- Educational and creative enhancement: assisting with course development, content generation, and learning resource creation.
- Operational efficiency: automating routine administrative tasks such as scheduling or analytics.
- Marketing and communications: personalising outreach and improving audience engagement, subject to consent and privacy standards.
- Data insights: analysing anonymised data to improve our services and learner experience.
​​
All AI use is reviewed for accuracy, relevance, and fairness before being applied in practice.
​
5. Data Protection and Privacy
​
When AI systems process personal data, CMS Institute ensures compliance with all relevant privacy laws. This includes:
​
- Processing only the minimum necessary data;
- Using anonymisation or pseudonymisation wherever possible;
- Maintaining a lawful basis for data use under UK GDPR Article 6;
- Conducting Data Protection Impact Assessments (DPIAs) for any high-risk AI deployment;
- Ensuring data subjects retain their rights of access, correction, and erasure.
​
We will not use AI to make automated decisions that produce significant effects on individuals without meaningful human involvement.
​
6. Third-Party AI Tools and Providers
​
We may use AI systems provided by reputable third-party organisations (e.g., generative AI tools, analytics platforms, or learning assistants).
Before use, we will:
​
- Review the provider’s privacy and ethical policies;
- Ensure data is processed securely and lawfully;
- Enter into data processing agreements (DPAs) where required;
- Avoid uploading confidential, sensitive, or personally identifiable data into AI systems unless necessary and approved.
​​
7. Staff Use of AI Tools
​
Staff are encouraged to use AI responsibly to support their work, provided they adhere to the following rules:
​
- Transparency: disclose when AI tools have been used to create, summarise, or edit content.
- Confidentiality: do not input confidential or personal information into AI tools unless approved by management.
- Verification: fact-check and review all AI-generated outputs for accuracy and appropriateness.
- Copyright compliance: ensure AI use respects intellectual property rights and avoids plagiarism.

Improper use of AI systems may lead to disciplinary action under our internal policies.
​
8. Students and Learners
​
Where AI tools are offered as part of our learning experience:
​
- Students will be informed when and how AI is integrated into course materials.
- Guidance will be provided on ethical, critical, and transparent use of AI in creative work.
- AI-assisted outputs must always be clearly identified as such.
- CMS Institute will promote digital literacy and responsible AI use as part of its educational mission.
​​
9. Governance and Accountability
​
Overall responsibility for AI governance rests with the CMS Institute Senior Management Team, supported by:
​
- The Data Protection Officer (DPO), for privacy compliance.
- The Technology & Innovation Lead, for AI implementation oversight.
- Department heads, who ensure responsible AI use within their teams.
​
All AI projects are reviewed for ethical, legal, and reputational implications prior to deployment.
​
10. Monitoring and Review
​
We will regularly review:
​
- The accuracy, fairness, and impact of the AI tools we use;
- Updates to UK and EU AI regulatory frameworks;
- Emerging risks and ethical considerations.
​​
This policy will be reviewed annually or sooner if significant changes occur in AI technology or legislation.
​
11. Contact
​
For questions or concerns regarding our use of AI or this policy, please contact:
​
CMS Institute (Creative Media Skills Group Ltd)
Pinewood Studios, Pinewood Road,
Buckinghamshire, SL0 0NH, United Kingdom
info@creative-media-skills.com
01753 656168
​



