Notes from ChatGPT Prompt Engineering for Developers

Notes from ChatGPT Prompt Engineering for Developers

This post consists of my notes from the ChatGPT Prompt Engineering for Developers course that can be found here.

Prompting Principles

1. Write Clear Specific Instructions

  1. Provide as clear and specific prompts to guide the model to the desired output and prevent irrelevant and/or incorrect responses.
  2. Clear =/= Short. Longer prompts provide more clarity and context for the model.
  3. There should be clarity in the prompt so much so that the model should be able to pick up the essence of what is being asked.

Tactic 1: Use delimiters to clearly indicate distinct parts of the input.

  1. This means put your user input in delimiters that your prompt knows how to interpret. Delimiters can be anything like: ```, """, < >, .
    1. This is helpful because it prevents exploitative prompts or prompt injections.
    2. We make it extremely clear what the user input is and augment the input based on the what we want to achieve.
    3. Example Prompt: "Summarize within the triple backticks: {prompt}".

Tactic 2: Ask for a structured output

  1. JSON, HTML etc., could be all examples out structured output.
  2. Example: prompt = f"""Generate a list of three made-up book titles along with their authors and genres. Provide them in JSON format with the following keys:
    book_id, title, author, genre."""

Tactic 3: Ask the model to check whether conditions are satisfied.

  1. This tactic validates the user input by making sure what's requested is correct.
  2. Your prompt should include the validation steps and reporting those.

Tactic 4: "Few-shot" prompting

  1. "Few-shot" implies providing successful examples of completing tasks then ask the model to perform the task.

2. Give the Model Time To Think

  1. If the model is making reasoning errors by rushing to an incorrect conclusion, you should try to reframe the query to request a chain or series of relevant reasoning before the model provides its final answer.
    1. In other words, if the task is to complex to do in a short amount of time or a small number of words, it may make up a guess that's likely to be incorrect.
    2. In these situations, make the model think longer about the problem to get to a more well-thought out final answer.

Tactic 1: Specify the steps required to complete a task

  1. Explicitly state the steps involved before you expect final answer.
  2. Prompt example with some given text:
Perform the following actions: 
1 - Summarize the following text delimited by triple \
backticks with 1 sentence.
2 - Translate the summary into French.
3 - List each name in the French summary.
4 - Output a json object that contains the following \
keys: french_summary, num_names.

Separate your answers with line breaks.

Tactic 2: Instruct the model to work out its own solution before rushing to a conclusion

prompt = f"""
Your task is to determine if the student's solution \
is correct or not.
To solve the problem do the following:
- First, work out your own solution to the problem. 
- Then compare your solution to the student's solution \ 
and evaluate if the student's solution is correct or not. 
Don't decide if the student's solution is correct until 
you have done the problem yourself.

Use the following format:
Question:

question here

Student's solution:

student's solution here

Actual solution:

steps to work out the solution and your solution here

Is the student's solution the same as actual solution \
just calculated:

yes or no

Student grade:

correct or incorrect

Model Limitations: Hallucinations

The model might know the boundaries of what it knows very well - the implication here is that it may try to answer questions about obscure topics and make statements that sound plausible but aren't true.
To reduce hallucinations:

  1. First find relevant information such quotes from the source it was trained on.
  2. Then use the quotes to answer the question based on the source document.

Iterative Prompt Development

  1. Try a prompt.
  2. Analyze if the prompt gives you what you want.
  3. Clarify instructions.
  4. Refine prompts with a batch of examples.

Usages

Usage Example
Summarizing Your task is to generate a short summary of a product review from an ecommerce site. Summarize the review below, delimited by triple backticks, in at most 30 words. Review: {prod_review}" (You could replace summarizing with extraction).
Inferring Identify the following items from the review text: 1. Sentiment (positive or negative) 2. Is the reviewer expressing anger? (true or false) 3. Item purchased by reviewer 4. Company that made the item The review is delimited with triple backticks. Format your response as a JSON object with "Sentiment", "Anger", "Item" and "Brand" as the keys. If the information isn't present, use "unknown" as the value. Make your response as short as possible. Format the Anger value as a boolean. Review text: '''{lamp_review}'''
Transforming Proofread and correct the following text and rewrite the corrected version. If you don't find and errors, just say "No errors found". Don't use any punctuation around the text:{t}
Expanding You are a customer service AI assistant. Your task is to send an email reply to a valued customer. Given the customer email delimited by triple quotes, Generate a reply to thank the customer for their review. If the sentiment is positive or neutral, thank them for their review. If the sentiment is negative, apologize and suggest that they can reach out to customer service. Make sure to use specific details from the review. Write in a concise and professional tone. Sign the email as "AI customer agent". Customer review: {review} Review sentiment: {sentiment}

Temperature As An Input

For higher temperatures, the models are more random, less predictable and more creative. Lower temperatures should be used for more reliable and predictable outcomes and the output doesn't change.

System, Assistant and User Messages

  • The assistant is the model that the user interacts with.
  • The system message sets the behavior of the assistant or in other words describes how the assistant should structure it's response to the user.
  • The benefit of the system message is that YOU, the developer, can frame the conversation to your liking without making the request itself part of the conversation.
  • The data that's sent to the input of a chat bot is a context which is a list of dictionaries consisting of a role which, is either system, user or assistance and the prompt.
  • Based on this context we get the results and the append to this context for the next set of results.

Code Setup

# Import the libraries.
import openai
import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())

openai.api_key  = os.getenv('OPENAI_API_KEY')


# Helper method.
def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0, # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]