Prompt Engineering
Mostly notes on the whitepaper from Google
- Prompt engineering is an iterative process
- Document various prompt attempts
- When zero-shot does not work, try one-shot and few-shot prompting, i.e., a single example or multiple examples. A rule of thumb is 3-5 examples. Include edge cases if the input could be diverse (see the sketch after this list)
- Prompting for output in JSON format forces the model to create a structure and limits hallucinations.
- Growing research suggests that focusing on positive instructions in prompting can be more effective than relying heavily on constraints. This approach aligns with how humans prefer positive instructions over lists of what not to do.
- Placing the detailed Context before the specific Task/Ask is often preferred as it helps the LLM build a complete understanding before acting.
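As a rough illustration of the few-shot point above, here is a minimal Python sketch that assembles a few-shot classification prompt; the example reviews and labels are made up for illustration, not taken from the whitepaper.

```python
# Minimal sketch: assembling a few-shot sentiment-classification prompt.
# The example reviews and labels below are illustrative placeholders.

FEW_SHOT_EXAMPLES = [
    ("The plot was gripping from start to finish.", "positive"),
    ("It was fine, nothing memorable.", "neutral"),
    ("I walked out halfway through.", "negative"),  # include edge cases if inputs are diverse
]

def build_few_shot_prompt(review: str) -> str:
    lines = ["Classify the movie review as positive, neutral or negative.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines += [f"Review: {text}", f"Sentiment: {label}", ""]
    lines += [f"Review: {review}", "Sentiment:"]
    return "\n".join(lines)

print(build_few_shot_prompt("A visually stunning but hollow sequel."))
```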
System prompting
- sets the overall context and purpose for the language model. It defines the ‘big picture’ of what the model should be doing, like translating a language, classifying a review etc.
- The name ‘system prompt’ actually stands for ‘providing an additional task to the system’
Classify movie reviews as positive, neutral or negative. Return valid JSON.
Review:...
Schema:...
JSON Response:...
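To make the JSON instruction pay off downstream, parse and validate the model's reply. A minimal sketch, assuming a hypothetical call_llm() stub in place of a real client and a one-field schema invented for illustration:

```python
import json

ALLOWED_SENTIMENTS = {"positive", "neutral", "negative"}

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned reply so the sketch runs."""
    return '{"sentiment": "positive"}'

def classify_review(review: str) -> dict:
    prompt = (
        "Classify movie reviews as positive, neutral or negative. Return valid JSON.\n"
        f"Review: {review}\n"
        'Schema: {"sentiment": "positive" | "neutral" | "negative"}\n'
        "JSON Response:"
    )
    result = json.loads(call_llm(prompt))  # raises ValueError if the reply is not valid JSON
    if result.get("sentiment") not in ALLOWED_SENTIMENTS:
        raise ValueError(f"Unexpected model output: {result!r}")
    return result

print(classify_review("A visually stunning but hollow sequel."))
```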
Contextual prompting
Provides immediate, task-specific information to guide the response. It’s highly specific to the current task or input, which is dynamic.
Context: You are writing for a blog about retro 80's arcade video games.
Suggest 3 topics to write an article about with a few lines of description of what this article should contain.
Role prompting
There can be considerable overlap between system, contextual, and role prompting. E.g., a prompt that assigns a role to the system can also include context.
I want you to act as a travel guide.
I will write to you about my location and you will suggest 3 places to visit near me.
In some cases, I will also give you the type of places I will visit.
My suggestion: "I am in Amsterdam and I want to visit only museums."
Travel Suggestions:
Step-back prompting
Prompting the LLM to first consider a general question related to the specific task at hand, and then feeding the answer to that general question into a subsequent prompt for the specific task. This ‘step back’ allows the LLM to activate relevant background knowledge and reasoning processes before attempting to solve the specific problem.
Based on popular first-person shooter action games, what are 5 fictional key settings that contribute to a challenging and engaging level storyline in a first-person shooter video game?
Context: 5 engaging themes for a first person shooter video game:
...
Take one of the themes and write a one paragraph storyline for a new level of a first-person shooter video game that is challenging and engaging.
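A rough sketch of chaining the two prompts above in Python; call_llm() is a stand-in stub with a canned reply so the example runs, not a real client API.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned reply so the sketch runs."""
    return "1. An abandoned orbital shipyard drifting toward a dying star ..."

def step_back(step_back_question: str, specific_task: str) -> str:
    # Step 1: ask the broader question to surface relevant background knowledge.
    background = call_llm(step_back_question)
    # Step 2: feed that answer back in as context for the specific task.
    return call_llm(f"Context: {background}\n\n{specific_task}")

print(step_back(
    step_back_question=(
        "Based on popular first-person shooter action games, what are 5 fictional key "
        "settings that contribute to a challenging and engaging level storyline?"
    ),
    specific_task=(
        "Take one of the themes and write a one paragraph storyline for a new level of "
        "a first-person shooter video game that is challenging and engaging."
    ),
))
```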
Chain of Thought (CoT)
- Generally, any task that can be solved by ‘talking through’ is a good candidate for a chain of thought. If you can explain the steps to solve the problem, try chain of thought.
When I was 3 years old, my partner was 3 times my age. Now, I am 20 years old. How old is my partner? Let's think step by step.
- You can combine CoT with few-shot prompting to get better results on more complex tasks that require reasoning before responding, since that is a challenge for zero-shot chain of thought.
Q: When my brother was 2 years old, I was double his age. Now I am 40 years old. How old is my brother? Let's think step by step.
A: When my brother was 2 years, I was 2 * 2 = 4 years old. That's an age difference of 2 years and I am older. Now I am 40 years old, so my brother is 40 - 2 = 38 years old. The answer is 38.
Q: When I was 3 years old, my partner was 3 times my age. Now, I am 20 years old. How old is my partner? Let's think step by step.
A:
Self-consistency
- Generate diverse reasoning paths: send the LLM the same prompt multiple times. A high temperature setting encourages the model to generate different reasoning paths and perspectives on the problem.
- Extract the answer from each generated response.
- Choose the most common answer (as sketched below).
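A minimal sketch of the majority vote, assuming a call_llm() stub in place of a real client and the "The answer is N" convention from the CoT examples above:

```python
import re
from collections import Counter

def call_llm(prompt: str, temperature: float = 1.0) -> str:
    """Stand-in for a real model call; returns a canned CoT reply so the sketch runs."""
    return ("When I was 3 my partner was 9, an age difference of 6 years. "
            "Now I am 20, so my partner is 26. The answer is 26.")

def extract_answer(response: str) -> str | None:
    match = re.search(r"The answer is (\d+)", response)
    return match.group(1) if match else None

def self_consistent_answer(prompt: str, samples: int = 5) -> str:
    answers = []
    for _ in range(samples):
        # High temperature encourages a different reasoning path on each sample.
        answer = extract_answer(call_llm(prompt, temperature=1.0))
        if answer is not None:
            answers.append(answer)
    # The most common answer across the sampled reasoning paths wins.
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer(
    "When I was 3 years old, my partner was 3 times my age. "
    "Now, I am 20 years old. How old is my partner? Let's think step by step."
))
```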
Prompts for explaining code
Explain to me the below Bash code:
{code in markdown}
Prompts for debugging and reviewing code
The below Python code gives an error:
{log in markdown}
Debug what's wrong and explain how I can improve the code.
{code in markdown}
Use variables in prompts
VARIABLES
{city} = "Amsterdam"
PROMPT
You are a travel guide. Tell me a fact about {city}
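A minimal sketch of substituting the variable with Python string formatting (an f-string works the same way):

```python
# Fill the {city} placeholder in the prompt template at call time.
PROMPT_TEMPLATE = "You are a travel guide. Tell me a fact about {city}"

prompt = PROMPT_TEMPLATE.format(city="Amsterdam")
print(prompt)  # You are a travel guide. Tell me a fact about Amsterdam
```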