Privacy & Compliance
- Follow University guidelines - Ensure academic integrity, maintain transparency, disclose AI use appropriately
- Never share sensitive data - No personal records, unpublished research, financial info, or security credentials
- Assess privacy risk first - Low risk (public info), medium risk (internal docs), high risk (personal/confidential)
- Use best practices - Anonymize data, use institutional accounts when available, check privacy policies
We must be mindful of “environmental impacts, risks of bias and stereotyping, and ethical concerns about data privacy and security” when using AI tools.
Compliance
Adhering to University of Bristol guidelines is essential for maintaining the University’s legal compliance, protecting individuals’ privacy rights, and preserving our institutional reputation. Violations of these data protection principles can result in serious consequences.
The University’s approach to AI is built on these foundational principles:
- Educational Excellence: AI should enhance, not replace, learning and critical thinking
- Academic Integrity: Transparency and honesty in all AI usage
- Ethical Responsibility: Consideration of bias, privacy, and societal impact
- Environmental Awareness: Sustainable and responsible technology use
- Inclusive Innovation: Ensuring AI benefits all members of our community
Never share these with AI tools:
🚫 Personal data: Student records, staff information, health data
🚫 Confidential research: Unpublished findings, grant applications under review
🚫 Commercially sensitive: Partnership agreements, financial information
🚫 Legally privileged: Legal advice, disciplinary proceedings
🚫 Security sensitive: Passwords, system configurations, access credentials
Data Privacy and Security
When you use AI tools for work-related tasks, whether through web interfaces, APIs, or integrated applications, your data typically:
- Travels over the internet to AI company servers
- Gets processed by AI systems you don’t control
- May be stored temporarily or permanently by the AI provider
- Could potentially be used for training future AI models
- Might be subject to different legal jurisdictions
Understanding this data flow is crucial for making informed decisions about what information you share with AI systems, especially when handling sensitive data or proprietary content. Always evaluate these risks and follow University guidance.
Privacy Risk Assessment
Before using AI tools, ask:
Low Risk ✅
- Public information already available online
- General knowledge questions
- Anonymous, aggregated data
- Published research you’re summarizing
Medium Risk ⚠️
- Internal documents with no personal data
- Draft policies before approval
- Academic work in progress (with proper disclosure)
High Risk ❌
- Any personal or confidential information
- Unpublished research data
- Student or staff records
- Commercially sensitive material
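The three tiers above can be captured in a small lookup helper. This is a minimal sketch, not an official taxonomy: the category strings and the `assess_risk` function name are illustrative assumptions, and the safe default is to treat anything unrecognised as high risk.

```python
# Risk tiers mirroring the checklist above. The category strings are
# illustrative assumptions, not a University-approved classification.
RISK_TIERS = {
    "public information": "low",
    "general knowledge": "low",
    "anonymous aggregated data": "low",
    "internal documents (no personal data)": "medium",
    "draft policies": "medium",
    "academic work in progress (disclosed)": "medium",
    "personal or confidential information": "high",
    "unpublished research data": "high",
    "student or staff records": "high",
    "commercially sensitive material": "high",
}

def assess_risk(category: str) -> str:
    """Return the risk tier for a data category.

    Unknown categories deliberately default to "high": if you cannot
    classify the data, do not share it with an AI tool.
    """
    return RISK_TIERS.get(category.strip().lower(), "high")
```

Defaulting unknown inputs to `"high"` encodes the fail-safe stance of the checklist: uncertainty about a data category is itself a reason to withhold it.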
Data Protection Best Practices
Implementing proper data protection measures helps you minimize privacy and security risks. Consider:
- Use institutional accounts when available (better data protection)
- Anonymize data before sharing with AI tools
- Use placeholder data for testing and training
- Check privacy policies of AI tools you use
- Follow University guidelines on data classification
- Consider on-premises alternatives for sensitive work
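The "anonymize data before sharing" step can be partly automated with simple pattern substitution. The sketch below is an illustrative assumption, not a complete or University-approved solution: the regular expressions and placeholder labels are examples only, and pattern-based redaction can miss identifiers (names, addresses, free-text context), so it should supplement, not replace, manual review.

```python
import re

# Illustrative patterns only -- adapt to the identifiers in your own data.
# The ID pattern assumes a hypothetical "two letters + digits" format.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "[PHONE]": re.compile(r"\+?\d[\d \-]{8,}\d"),
    "[ID]": re.compile(r"\b[A-Za-z]{2}\d{5,}\b"),
}

def anonymize(text: str) -> str:
    """Replace likely identifiers with placeholders before sharing text
    with an external AI tool. Regex redaction is best-effort: always
    review the output before pasting it anywhere."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text
```

For example, `anonymize("Contact jo.bloggs@example.ac.uk")` returns `"Contact [EMAIL]"`. The same idea extends to the "use placeholder data" practice: generate synthetic records in the same shape as the real ones and share only those.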