Privacy & Compliance
- Use approved tools (e.g., Microsoft Copilot, Teams transcription) and always review AI outputs before sharing.
- Minimise personal data and disclose AI use in academic work to maintain transparency and integrity.
- Complete a Data Protection Impact Assessment (DPIA) for high-risk AI projects, such as those involving sensitive data or automated decision-making.
- Assess and mitigate bias in AI outputs, keep clear records of decisions, and consult Information Compliance for novel or complex use cases.
Artificial Intelligence is embedded in many tools we use daily, often invisibly. Used well, it saves time and improves productivity — but it also introduces compliance risks, especially when personal data or decisions about people are involved. It’s essential to understand what you can do freely with approved tools, when to involve the Data Protection team (data-protection@bristol.ac.uk), and why awareness of “hidden AI” matters.
Everyday AI Use
You can use University-approved tools such as Microsoft Copilot and Teams transcription for routine tasks. These operate within the University’s secure Microsoft 365 environment, so prompts and outputs are protected under our tenancy. Examples of safe, everyday use include:
- Summarising documents
- Drafting content
- Generating ideas
- Transcribing meetings
Always review outputs before sharing — AI can misinterpret or invent details. Treat its work as a draft, not a final version.
Stay SAFE
- S – Stick to approved tools: Use University-approved platforms like Copilot and Teams transcription.
- A – Always review outputs: Check for errors, bias, and hallucinations before sharing.
- F – Follow academic integrity: Disclose AI use in teaching or research where relevant.
- E – Enter minimal data: Keep personal and confidential data to a minimum, even within approved tools. Anything entered into Copilot is subject to legal processes, including Freedom of Information (FOI) requests. A redaction sketch follows the list below.
Never share these with AI tools:
🚫 Personal data: Student records, staff information, health data
🚫 Confidential research: Unpublished findings, grant applications under review
🚫 Commercially sensitive: Partnership agreements, financial information
🚫 Legally privileged: Legal advice, disciplinary proceedings
🚫 Security sensitive: Passwords, system configurations, access credentials
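As a practical aid to "Enter minimal data", a pre-prompt redaction pass can strip the most obvious identifiers before text reaches an AI tool. The Python sketch below is illustrative only: the patterns, and especially the student-ID format, are assumptions rather than University formats, and pattern matching will always miss things, so it supplements human review rather than replacing it.

```python
import re

# Illustrative patterns only: personal data takes many more forms, so automated
# redaction supplements, never replaces, human judgement.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}"),  # rough UK phone shape
    "student_id": re.compile(r"\b[A-Z]{2}\d{5}\b"),      # hypothetical ID format
}

def redact(text: str) -> str:
    """Replace likely identifiers with placeholders before prompting an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Contact Jo Bloggs (jo.bloggs@bristol.ac.uk, AB12345) about the case."))
# Contact Jo Bloggs ([EMAIL REDACTED], [STUDENT_ID REDACTED]) about the case.
```

Note that the name "Jo Bloggs" survives the pass: names and other indirect identifiers still need a human check, or an approach like the pseudonymisation sketch under Data Protection Best Practices below.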
Privacy Risk Assessment
Not all AI use is low risk. If you plan to use a new tool or apply an approved tool in a novel way, contact the Data Protection team (data-protection@bristol.ac.uk) before you start.
You will need to complete a Data Protection Impact Assessment (DPIA) for high-risk processing, such as:
- Automated decision-making (even with human checking)
- Large-scale processing of sensitive data
- Combining datasets for profiling or prediction
Indicators that you need compliance checks (a screening sketch follows this list):
- AI processes significant personal or special category data
- AI influences decisions about people (grading, recruitment, disciplinary)
- You are combining multiple datasets or profiling individuals
- The project feels novel, complex, or high risk
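To make the indicators concrete, here is a minimal screening sketch in Python. The field names paraphrase the list above and are not official criteria; the point is simply that a single "yes" is enough to trigger a conversation with Information Compliance.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Screening questions mirroring the indicators above (names are illustrative)."""
    processes_personal_or_special_data: bool
    influences_decisions_about_people: bool
    combines_datasets_or_profiles: bool
    novel_complex_or_high_risk: bool

def needs_compliance_check(case: AIUseCase) -> bool:
    # A single 'yes' is enough: stop and consult Information Compliance first.
    return any(vars(case).values())

shortlisting = AIUseCase(
    processes_personal_or_special_data=True,
    influences_decisions_about_people=True,   # recruitment decisions
    combines_datasets_or_profiles=False,
    novel_complex_or_high_risk=True,
)
print(needs_compliance_check(shortlisting))  # True -> contact Information Compliance
```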
Under UK GDPR and ICO guidance, high-risk processing must be assessed for lawfulness, fairness, transparency, and accountability. A DPIA helps identify risks and document controls.
Key requirements for high-risk AI use:
- Provide meaningful human review for automated decisions
- Offer clear privacy information and opt-out options
- Explain AI logic and impact in plain English
- Assess and mitigate bias in training data and outputs
- Record roles, responsibilities, DPIAs, and decisions (an illustrative record structure follows this list)
- Consult the ICO if risks cannot be mitigated
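Record-keeping is easier when every AI-assisted decision is captured in a consistent structure. The sketch below is one possible shape, assuming a pseudonymous case reference and a DPIA register; the field names are assumptions for illustration, not a prescribed University schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One reviewed, documented AI-assisted decision (fields are assumptions)."""
    subject_ref: str        # pseudonymous reference, never raw personal data
    ai_recommendation: str
    reviewer: str           # the named human accountable for the outcome
    reviewer_decision: str  # meaningful review means this can differ from the AI
    rationale: str          # plain-English explanation of logic and impact
    dpia_ref: str           # pointer to the completed DPIA
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    subject_ref="case-0042",
    ai_recommendation="reject",
    reviewer="A. Example (Admissions Officer)",
    reviewer_decision="refer for interview",  # the human overruled the model
    rationale="Model under-weights non-standard qualifications.",
    dpia_ref="DPIA-0001",                     # hypothetical reference
)
print(record)
```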
Intellectual Property and Academic Integrity
AI-generated content raises questions about ownership and copyright. Under UK law, works created by AI without human authorship may not attract copyright protection. If you use AI to assist with creating materials, you remain responsible for:
- Rights to any source material used
- Avoiding infringement of third-party IP
- Acknowledging AI use where required by academic integrity policies
For research and teaching, treat AI outputs as supporting material, not original scholarship.
Checklists
Everyday AI Use
✅ Use University-approved tools (Copilot, Teams transcription)
✅ Review AI outputs before sharing
✅ Minimise personal data
✅ Be transparent—disclose AI use in academic work
✅ Watch for hidden AI features
You Need a DPIA
If you tick yes to any of the following, contact Information Compliance before starting:
- AI processes significant personal or special category data
- AI influences decisions about people (grading, recruitment, disciplinary)
- You are combining multiple datasets or profiling individuals
- The project feels novel, complex, or high risk
Scenarios
| Scenario | Category | Action Required |
|---|---|---|
| Using Copilot to draft lecture notes | Everyday use | Review outputs, treat as source material |
| Summarising published research with Copilot | Everyday use | Check accuracy |
| Teams transcription for meeting minutes | Everyday use | Review for errors before ratifying |
| Generating ideas for teaching materials | Everyday use | Treat as source material, check references |
| Uploading student records into AI tool | High-risk / Novel use | Consult Information Compliance |
| Using AI to shortlist job applicants | High-risk / Novel use | Consult Information Compliance |
| Combining datasets for profiling | High-risk / Novel use | Consult Information Compliance |
| AI making decisions with legal/significant effects | High-risk / Novel use | Consult Information Compliance |
| Processing special category data (health, ethnicity) | High-risk / Novel use | Consult Information Compliance |
| Developing a new AI model for research | High-risk / Novel use | Consult Information Compliance |
Data Privacy and Security
When you use AI tools for work-related tasks, whether through web interfaces, APIs, or integrated applications, your data typically:
- Travels over the internet to AI company servers
- Gets processed by AI systems you don’t control
- May be stored temporarily or permanently by the AI provider
- Could potentially be used for training future AI models
- Might be subject to different legal jurisdictions
Understanding this data flow is crucial for making informed decisions about what information you share with AI systems, especially when handling sensitive data or proprietary content. Always evaluate these risks and follow University guidance.
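To make that flow concrete, here is a sketch of a generic API call to a hypothetical provider. The endpoint, model name, and payload shape are invented for illustration; the numbered comments map each step to the risks listed above.

```python
import json
import urllib.request

# Hypothetical endpoint and payload; real providers differ.
prompt = {"model": "example-model", "input": "Summarise this PUBLIC press release: ..."}

request = urllib.request.Request(
    "https://api.example-provider.com/v1/generate",  # (1) travels over the internet
    data=json.dumps(prompt).encode("utf-8"),         # (2) processed by systems you don't control
    headers={"Authorization": "Bearer <API-KEY>"},   # (3) provider may store the request
    method="POST",
)
# (4) Whether the input is used to train future models, and (5) which
# jurisdiction's law applies, depend on the provider's terms, not yours.
# That is why only low-risk content belongs in `input`. The call itself is
# deliberately left commented out:
# response = urllib.request.urlopen(request)
```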
Low Risk ✅
- Public information already available online
- General knowledge questions
- Anonymous, aggregated data
- Published research you’re summarising
Medium Risk ⚠️
- Internal documents with no personal data
- Draft policies before approval
- Academic work in progress (with proper disclosure)
High Risk ❌
- Any personal or confidential information
- Unpublished research data
- Student or staff records
- Commercially sensitive material
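One way to operationalise these tiers is a pre-flight lookup that defaults to the most cautious answer when a category is unrecognised. The category strings below are taken from the lists above and are illustrative, not an official University data-classification scheme.

```python
# Example categories from the tiers above; not an official classification scheme.
RISK_TIERS = {
    "low": {"public information", "general knowledge",
            "anonymous aggregated data", "published research"},
    "medium": {"internal documents (no personal data)", "draft policies",
               "academic work in progress"},
    "high": {"personal or confidential information", "unpublished research data",
             "student or staff records", "commercially sensitive material"},
}

def tier_for(category: str) -> str:
    for tier, categories in RISK_TIERS.items():
        if category in categories:
            return tier
    return "high"  # default to the most cautious tier when unsure

assert tier_for("published research") == "low"
assert tier_for("student or staff records") == "high"
assert tier_for("something unrecognised") == "high"  # unknown data is high risk
```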
Data Protection Best Practices
Implementing proper data protection measures helps you minimise privacy and security risks. Consider the following (a pseudonymisation sketch follows the list):
- Use institutional accounts when available (better data protection)
- Anonymise data before sharing with AI tools
- Use placeholder data for testing and training
- Check privacy policies of AI tools you use
- Follow university guidelines on data classification
- Consider on-premises alternatives for sensitive work
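To illustrate the anonymisation and placeholder-data points, here is a minimal pseudonymisation sketch, assuming you already know which names appear in the text. Real anonymisation is much harder (indirect identifiers, context, rare attributes), so treat this as a starting point, not a guarantee.

```python
import itertools

def pseudonymise(text: str, names: list[str]) -> tuple[str, dict[str, str]]:
    """Swap known names for tokens; keep the mapping locally to restore later."""
    counter = itertools.count(1)
    mapping: dict[str, str] = {}
    for name in names:
        token = f"PERSON_{next(counter)}"
        mapping[token] = name
        text = text.replace(name, token)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert real names into AI output, locally, after the tool has replied."""
    for token, name in mapping.items():
        text = text.replace(token, name)
    return text

safe_text, key = pseudonymise("Jo Bloggs missed the deadline; Sam Patel approved it.",
                              ["Jo Bloggs", "Sam Patel"])
print(safe_text)  # PERSON_1 missed the deadline; PERSON_2 approved it.
# Only `safe_text` goes to the AI tool; `key` never leaves your machine.
```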