What is a prompt?
A prompt is an input into an AI model to obtain an output or response, often in the form of an instruction or a question.
Crafting effective prompts significantly impacts the quality of the response from an AI model. Poor prompts can lead to increased hallucinations, irrelevant outputs, and wasted time.
What is Prompt Engineering?
Prompt engineering is the practice of designing prompts to optimise the response from an AI model. Well-designed prompts lead to more accurate and relevant answers, reduce the chance of biased or hallucinatory responses, and give you more control over the final output.
One of the major advantages of LLMs is their adaptability, allowing them to create a vast array of outputs. However, this ability makes it crucial that the prompt guides the model to the desired output. Without clear guidance the generated outputs can be poor quality, leading to frustration. Setting roles, adding constraints, specifying format or giving examples will all help to optimise the output and maximise the benefit of using AI models.
Successful prompt engineering often requires a good deal of critical and strategic thinking about the elements that make up an instruction or question. Although AI chatbots are often cited as time-savers, it can take significant time and thought to craft and refine adequate prompts.
Prompt Limitations
Even the best prompts cannot compensate for all flaws in the output generated by AI models. These flaws can result from limitations in the training data, such as biases, gaps in representation, or outdated information. This makes it vital to evaluate all outputs critically. For more information on critiquing AI-generated material, see the AI Literacy tab.
1. Vague or Ambiguous Prompts
Mistake: “Tell me about history.”
Why it’s a problem: Too broad—AI doesn’t know what time period, region, or aspect to focus on.
Fix: “Summarize the economic causes of the French Revolution in under 400 words.”
2. Overloading the Prompt
Mistake: “Explain quantum mechanics, list famous physicists, and write a poem about atoms.”
Why it’s a problem: Too many tasks at once can confuse the model or dilute the response quality.
Fix: Break it into separate prompts for each task.
3. Lack of Context
Mistake: “Write a summary.”
Why it’s a problem: The AI doesn’t know what to summarize or for whom.
Fix: “Summarize the following article for an undergraduate engineering audience…”
4. Ignoring the Role of the AI
Mistake: “Fix this code.”
Why it’s a problem: No clarity on what kind of help is expected—debugging, explaining, or rewriting?
Fix: “You are a Python tutor. Explain what’s wrong with this code and suggest a fix.”
5. Using Biased or Leading Language
Mistake: “Why is X the best solution?”
Why it’s a problem: Assumes a conclusion and may lead to biased outputs.
Fix: “Compare the pros and cons of X and Y as potential solutions.”
6. Not Iterating
Mistake: Using the first prompt and accepting the first result.
Why it’s a problem: AI responses often improve with refinement.
Fix: Review, tweak, and re-prompt based on the initial output.
| Prompting Technique | Description | Examples | Used For |
|---|---|---|---|
| Zero-shot prompting | The model is given a task with no examples. It relies entirely on its pre-trained knowledge to generate a response. | "Translate this sentence into French: 'I love libraries.'" | Quick tasks where the model is expected to generalize from prior training. |
| Few-shot prompting | The model is given a few examples of the task within the prompt to guide its response. | "Translate the following: 1. 'Hello' → 'Bonjour' 2. 'Goodbye' → 'Au revoir' Now translate: 'Thank you'" | Tasks where context or format needs to be demonstrated. |
| Chain-of-thought prompting | The model is encouraged to reason step-by-step before arriving at an answer. | "If there are 3 apples and you buy 2 more, how many apples do you have? Let's think step by step." | Complex reasoning, math problems, or logic-based tasks. |
| Iterative prompting | The user refines the prompt or builds on previous outputs to improve results. | Initial prompt: 'Summarize this article.' Follow-up: 'Now make it more concise and suitable for a newsletter.' | Refining outputs, creative writing, or improving accuracy over multiple turns. |
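As a rough sketch, the few-shot technique in the table above amounts to assembling worked examples into a single prompt string before it is sent to any chat model. The helper function and formatting below are hypothetical illustrations, not part of any particular AI library or API:

```python
def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: task instruction, worked examples, then the new input."""
    lines = [task]
    for source, target in examples:
        lines.append(f"{source} -> {target}")
    # Leave the final answer blank so the model completes the pattern
    lines.append(f"{query} ->")
    return "\n".join(lines)

# Hypothetical usage mirroring the table's translation example
prompt = build_few_shot_prompt(
    "Translate the following English words into French:",
    [("Hello", "Bonjour"), ("Goodbye", "Au revoir")],
    "Thank you",
)
print(prompt)
```

The point of the sketch is that the examples demonstrate both the task and the expected output format, so the model can generalise the pattern to the final, unanswered item.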
To support the prompt engineering process, a variety of prompting frameworks have been developed. These frameworks offer structured approaches to help you design effective prompts. While well-designed prompts can significantly improve the quality of output, they do not guarantee accuracy. It's still essential to critically evaluate the information you receive.
To get better results from generative-AI chatbots, write CAREful prompts. Include context, what you’re asking the system to do, rules for how to do it, and examples of what you want.
Include these four key components in your prompts:
Context: Describe the situation
Ask: Request specific action
Rules: Provide constraints
Examples: Demonstrate what you want
While CARE is a helpful mnemonic for remembering the components, you don’t always have to write prompts in this exact order.
You also will not need such detailed prompts for every interaction with generative AI. General information-seeking tasks may not require giving the AI this much information. However, when looking for complex or specific outputs, provide more details to ensure you get results from AI that meet your expectations.
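As a minimal sketch of how the four CARE components combine into one prompt, the helper below simply labels and joins them. The function name, parameters, and example wording are hypothetical, chosen only for illustration:

```python
def build_care_prompt(context: str, ask: str, rules: str, examples: str) -> str:
    """Combine the four CARE components (Context, Ask, Rules, Examples) into one prompt."""
    return "\n".join([
        f"Context: {context}",
        f"Ask: {ask}",
        f"Rules: {rules}",
        f"Examples: {examples}",
    ])

# Hypothetical usage for a study-support task
prompt = build_care_prompt(
    context="I am a first-year engineering student revising for an exam.",
    ask="Summarise the attached lecture notes.",
    rules="Keep it under 300 words and use plain language.",
    examples="Format the summary as bullet points, e.g. '- Key idea: ...'",
)
print(prompt)
```

The labels are not required by any model; they simply make each component explicit, which is the purpose of the framework.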
Read more about CAREful prompting.
Download the CAREful framework.
TU Dublin Library Services. This work is licensed under CC BY-NC-SA 4.0.
Here are the five components of the CLEAR framework:
Concise: Keep prompts brief and focused
Logical: Structure prompts in a coherent, ordered way
Explicit: State clearly what output you want
Adaptive: Adjust and customise prompts to the task
Reflective: Evaluate the results and refine your approach
Read the article.
Watch the author explain the framework.
Goal: State what you want the AI to do
Context: Explain why you need it and who is involved
Source(s): Identify which information sources or examples to use
Expectations: Describe how the response should be formatted and delivered
This is the suggested framework for designing prompts for the Copilot AI chatbot from Microsoft.
Read more guidance on the framework here.