Context matters
- Context is all the information you give LLMs to understand your request (role, purpose, audience, requirements)
- Be specific, not vague - Clear context gets relevant responses; vague prompts get generic ones
- Refine iteratively - Start basic, review output, add details, repeat until satisfied
- Never share sensitive data - No personal records, unpublished research, or confidential information
Large Language Models work by navigating vast embedding spaces: multidimensional representations of knowledge and concepts. Vague or poorly defined context can lead the model to explore irrelevant areas of this space, producing generic or off-target responses. Well-crafted context acts as a precise navigation instruction, guiding the model to the most relevant knowledge areas and producing outputs that match your specific needs.
Why Context Matters for LLMs
Context is everything you provide to an LLM to help it understand your request and generate appropriate responses during your chatbot session. Without proper context, LLMs often produce generic, inaccurate, or inappropriate outputs.
Because LLMs have no memory between separate sessions, they have no inherent knowledge about who you are, your organisation's policies, or the purpose of your request.
Example of poor context
Write me a report about productivity
Example of better context
I am a senior manager at the University of Bristol. I have to write a 2-page executive summary about remote work productivity for the University's department heads, focusing on evidence-based strategies and including practical implementation steps.
Note that while LLMs have no memory between separate chat sessions and don't retain information from previous conversations, the data you share within each individual session may still be stored by the service provider. Always follow the University's data protection policies when sharing any information.
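This statelessness is easiest to see when an LLM is used through a programming interface rather than a chat window. The short sketch below is illustrative only: it assumes the openai Python package and an OpenAI-style chat API, and the model name is a placeholder. The point is that every request must carry its own context, because nothing from earlier sessions is remembered.

```python
# Illustrative sketch only: assumes the `openai` Python package and an
# OpenAI-style chat API; the model name is a placeholder. Because the model
# is stateless, every request must include all of the context it needs.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

context = (
    "I am a senior manager at the University of Bristol. "
    "I have to write a 2-page executive summary about remote work "
    "productivity for the University's department heads."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": context},  # explicit context, resent on every call
        {"role": "user", "content": "Suggest an outline with practical implementation steps."},
    ],
)
print(response.choices[0].message.content)
```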
When writing your context, never share the following with AI tools:
🚫 Personal data: Student records, staff information, health data
🚫 Confidential research: Unpublished findings, grant applications under review
🚫 Commercially sensitive: Partnership agreements, financial information
🚫 Legally privileged: Legal advice, disciplinary proceedings
🚫 Security sensitive: Passwords, system configurations, access credentials
Types of Context
Explicit Context. Information you directly provide to the LLM, for example:
- Your role and organization
- The purpose of the task
- Target audience
Implicit Context. Assumptions the LLM makes based on your prompt, for example:
- Cultural assumptions
- Educational level expectations
- Language formality
Instead of assuming the LLM will understand your context, state it clearly:
Instead of
Help me write a proposal on sustainable materials
Try
I’m a researcher at the University of Bristol. Help me write a 3-page research funding proposal, targeting the EPSRC, for a project on sustainable materials in engineering
Working Within Context Limits
LLMs have context windows: limits on how much text they can process at once, including the conversation history that accumulates over multiple turns. Some practical strategies for working within this limit are listed below (a short sketch for estimating the length of background text follows the list):
- Summarize lengthy background information and prioritize the most important context
- Break complex tasks into smaller parts
- Use previous outputs as context for follow-up requests
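If you are unsure whether a long piece of background text will fit, you can estimate its length in tokens before pasting it into a prompt. The sketch below shows one possible approach, assuming the tiktoken Python package; the encoding name, the file name, and the token budget are illustrative assumptions, and the real limit depends on the model and service you use.

```python
# Illustrative sketch: estimate the token count of background text before
# pasting it into a prompt. Assumes the `tiktoken` package; the encoding
# name, file name, and token budget are assumptions, not fixed values.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

with open("remote_work_notes.txt", encoding="utf-8") as f:  # hypothetical file
    background = f.read()

n_tokens = len(encoding.encode(background))

TOKEN_BUDGET = 8_000  # hypothetical budget for one prompt's background material
if n_tokens > TOKEN_BUDGET:
    print(f"Background is {n_tokens} tokens: summarise it or split the task.")
else:
    print(f"Background is {n_tokens} tokens: it should fit in one prompt.")
```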
Iterative Context Building
We will not always get what we want on the first attempt. A useful strategy is to start with basic context and refine it step by step (a rough code sketch of this loop follows the list):
- Initial request: Provide core context
- Review output: Identify what’s missing or wrong
- Refine context: Add specific details or corrections
- Iterate: Repeat until satisfactory
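In a chat interface you do this simply by replying in the same conversation. If you work through a programming interface instead, the same loop can be written by carrying the conversation history forward and appending a correction at each step. The sketch below is a rough illustration, again assuming the openai package and an OpenAI-style chat API; the model name and the refinements are made up for the example.

```python
# Rough illustration of iterative context building (assumes the `openai`
# package and an OpenAI-style chat API; model name and refinements are
# made-up examples).
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "system", "content": "I'm a researcher at the University of Bristol."},
    {"role": "user", "content": "Help me outline a funding proposal on sustainable materials."},
]

refinements = [
    "Target the EPSRC and keep the outline to three pages.",  # add specific details
    "Add a section on applications in engineering.",          # correct what was missing
]

for refinement in refinements:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(reply.choices[0].message.content)  # review the output
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": refinement})  # refine the context

final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(final.choices[0].message.content)  # iterate until satisfactory
```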
Exercise 2: Context Writing
Transform a vague prompt into an effective, contextualised request. Use an AI tool of your choice to observe the difference that context makes to the response.
Scenario: You need to create a policy document about flexible working arrangements for your department.
Initial prompt:
Write a 1-page flexible working policy