AI In Plain English, For Normal People. PART 5.c

6 Current Practices to Improve Your LLM Output

We continue with the best guide to understanding AI so you don't fall behind in the future!

Last week, we talked about why it is important to have a basic knowledge of how to prompt effectively and improve our communication skills. In this one, we will go through:

  • Current practices and prompting tricks to improve LLM output

  • Helpful tools

And as always, at the bottom, you have a selection of the news of the week to stay updated and spark your curiosity, as well as cool tools and lessons that can enhance your life.

That’s damn right!

Here, I am presenting a summary based on the prompt engineering portion of the 2.5-hour “Application Development Using Large Language Models” tutorial given by Andrew Ng and OpenAI’s Isa Fulford at NeurIPS (Dec 11, 2023). (Big shout-out to Sarah Chieng for putting it together, link here.)

  1. Write clear and specific instructions

  2. Adopt a persona

  3. Guide the model

  4. Break down the task or prompt

  5. Prompt multiple times

  6. Use external tools and don’t rely 100% on the model

How do you talk to it?

Think like you are providing instructions to an employee of your business.

  • Be direct. Use affirmative directives like do, don’t, your task is, you must, I am providing you, etc., and why not put please at the end of it? It doesn't hurt.

  • Don’t ask leading questions. The model is eager to please, so guide it but leave prompts open-ended (avoid questions that force a yes-or-no answer).

  • Having expertise in the specific subject area or domain you want the model to help you with is going to help immensely.

  • Another thing: be polite to them; they learn by example. This is called emotional prompting. You never know how mad robots will be in the future. ;)

Let’s start.

1- Write clear and specific instructions

  • The longer the prompt or instructions, the more room there is for wrong or inaccurate responses.

  • Be super specific in providing the instructions on what you want.

  • Give detailed context for the problem you need help with; don’t assume the model knows what you are talking about.

  • Reducing ambiguity reduces the likelihood of irrelevant or incorrect outputs.

2- Adopt a persona

  • One method is to ask the model to pretend to be a specialist in a certain domain, a known character, etc. For example:

“Hello, ChatGPT.

Pretend that you are Dave Chappelle, one of the best comedians ever to exist, and write in his comedy style.

I need you to write a congratulations letter to my friend Tony. Can you make a funny letter in the form of a poem that is very short and to the point? (We just introduced the style, form, and length as instructions.)

He is 40 years old and just had a promotion at his job. He works as a clothes seller in a big store downtown in Chicago. Tony is a responsible, funny guy who loves jokes.” (We gave more context about who Tony is.)

Besides pretending to be someone, another thing that we can do is:

  • Integrate the intended audience. For example:

When you have an idea about the audience you are writing to or if you want to understand complex concepts, you can ask, “Make it so an 8-year-old can understand it.”

“You are an experienced AI scientist specializing in teaching kids around 8 years old. I want you to write an introduction to AI systems, making sure you write for a young audience. Provide the content in the form of bullet points and respond in roughly two sentences.”

3- Guide the model

Models make more reasoning errors when they respond immediately. It is better for them to think incrementally.

  • You can ask for a “chain of thought” or specify the exact steps. “Think step by step” works well. You can use it to solve a math or logic problem, for example.

  • Provide examples. It would go like this:

    1. First example (first “shot”): Give an example of a prompt and the corresponding output or response.

    2. Second example (second “shot”): Give a second example of a prompt and output.

    3. Give your actual prompt. Now, your model can follow the pattern established by the first two examples.

  • You can require the model to ask you clarifying questions. Requesting explanations, drawing comparisons, etc. For example:

    “Clarify if you have understood my prompt, and request any explanation you need.”

  • Help it self-correct. If the model starts incorrectly, it’s hard for it to self-correct. “I’ve received an explanation about… from you; are you sure about your answer? Can you review it and provide a corrected explanation, starting with…?”
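The few-shot pattern above (two example “shots,” then your real prompt) can be sketched in code. This is a minimal sketch assuming the common chat-message format (role/content pairs used by OpenAI-style APIs); the sentiment task, the example reviews, and the `build_few_shot_prompt` helper are made up for illustration:

```python
# Sketch of few-shot prompting: two example "shots" before the real prompt.
# The role/content dict format follows common OpenAI-style chat APIs;
# the sentiment examples here are invented for illustration.

def build_few_shot_prompt(examples, actual_prompt):
    """Assemble a message list where each (prompt, response) pair is one 'shot'."""
    messages = [{"role": "system",
                 "content": "Classify the sentiment of each review as positive or negative."}]
    for prompt, response in examples:
        messages.append({"role": "user", "content": prompt})         # example prompt
        messages.append({"role": "assistant", "content": response})  # example output
    messages.append({"role": "user", "content": actual_prompt})      # your actual prompt
    return messages

shots = [
    ("The food was amazing and the staff were friendly.", "positive"),
    ("Cold coffee and a forty-minute wait.", "negative"),
]
messages = build_few_shot_prompt(shots, "Great view, but the room smelled of smoke.")
# The model can now follow the pattern established by the two shots.
```

You would pass this `messages` list to your chat model of choice; the two worked examples anchor the format and tone of the answer.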

4- Break down the task or prompt

Divide and conquer.

  • If the document is too long, the model might stop reading early. You can guide the model to process long documents piecewise and construct a full summary recursively.

  • Break down complex tasks into multiple, simple tasks in an interactive conversation. This helps because complex tasks have higher error rates than simple tasks.

  • You can use intent classification to identify the most relevant instructions, then combine the responses to create a cohesive output. For example:

"I'm planning a camping trip in the Rocky Mountains for a week. I need advice on essential gear, wildlife safety, and the best trails for hiking."

Breaking Down:

  • Intent 1: Recommendations for essential camping gear for the Rocky Mountains.

  • Intent 2: Tips on wildlife safety in the Rocky Mountains.

  • Intent 3: Suggestions for the best hiking trails in the Rocky Mountains.

The model then addresses each intent individually, offering detailed suggestions for each before combining these into a well-rounded guide for the camping trip.
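A minimal sketch of this divide-and-conquer idea, where each intent becomes its own simple prompt and the answers are combined at the end. Here `ask_model` is a hypothetical stand-in for a real LLM call:

```python
# Sketch of "divide and conquer": answer each intent separately, then combine.
# ask_model is a hypothetical placeholder, not a real API.

def ask_model(prompt):
    # In practice, this would call your LLM of choice.
    return f"[model's answer to: {prompt}]"

intents = [
    "Recommend essential camping gear for the Rocky Mountains.",
    "Give tips on wildlife safety in the Rocky Mountains.",
    "Suggest the best hiking trails in the Rocky Mountains.",
]

# Each simple sub-task has a lower error rate than one big, complex prompt.
answers = [ask_model(intent) for intent in intents]
guide = "\n\n".join(answers)  # combine the responses into one cohesive guide
```

The same loop structure also works for long documents: summarize them piecewise, then summarize the summaries.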

5- Prompt multiple times. Iterate

This is a game of iterations; what you are looking for is not going to come on your first try. Sometimes, you will have to iterate (have a conversation) to find the best output.

  • You can ask the model to answer a question multiple times and determine the best answer.

  • Use the temperature. It regulates the randomness or creativity of the LLM’s response. Higher temperatures give more varied, creative responses; lower temperatures give more conservative, predictable responses. Values typically range from 0 to 2.0, and the setting properly lives in the API or playground rather than the chat itself, though asking for it in the prompt can nudge the style. For example, “Using a temperature of 0.2, write an introduction for a blog post about the importance of healthy nutrition. Now try it using a temperature of 1.5.”
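Under the hood, temperature works by dividing the model’s raw scores (logits) before they are turned into probabilities, so higher values flatten the distribution and make unlikely words more likely to be picked. A minimal sketch of that mechanism (the logit values are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into probabilities.
    Dividing by the temperature before the softmax is how
    sampling randomness is controlled under the hood."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate words
low = softmax_with_temperature(logits, 0.2)   # sharp: the top word dominates
high = softmax_with_temperature(logits, 1.5)  # flat: more varied choices
```

With temperature 0.2 the first word gets nearly all the probability; with 1.5 the three words are much closer together, which is exactly the “more creative” behavior you observe.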

6- Use external tools and don't rely 100% on it

As a rule of thumb, if a task can be done more reliably or efficiently by a tool rather than a language model, offload it to get the best of both.
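For instance, exact arithmetic is a classic case: a calculator never hallucinates, a language model sometimes does. This is a minimal sketch of that offloading idea with a toy routing rule (real systems use function calling or plugins); `ask_llm` is a hypothetical placeholder:

```python
import re

# Sketch: route exact arithmetic to code instead of trusting the model.
# The routing rule is a toy assumption for illustration only.

def ask_llm(question):
    # Hypothetical stand-in for a real LLM call.
    return f"[model's answer to: {question}]"

def answer(question):
    match = re.fullmatch(r"\s*(\d+)\s*\*\s*(\d+)\s*", question)
    if match:
        a, b = int(match.group(1)), int(match.group(2))
        return str(a * b)  # done reliably by the calculator, not the LLM
    return ask_llm(question)  # everything else goes to the model

print(answer("1234 * 5678"))  # → 7006652, exact, no hallucination risk
```

You get the best of both worlds: the tool handles what it does reliably, and the model handles the open-ended language work.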

I suppose this is going to change in the future, as multimodal models become far more capable.

This brings us to the concept of hallucinations, which are responses generated by an AI that contain false or misleading information presented as fact.

Watch this video to see these practices in action:

Helpful tools to guide you while creating your prompts and to learn prompt engineering in more depth:

Next week, we will talk about the new startup that may replace searching for information with Google. It did for me! You just have to be patient!

Thanks for your time.

Stay Kind

News Picks

  • Augmented reality is coming!

Tools

  • NotesGPT seamlessly converts your voice notes into organized summaries and clear action items using AI. Or, try Flipner AI, a similar AI app, and compare.

  • GitMind generates mind maps, flowcharts, and more.

  • Roast My Website is a GPT that will give you improvement tips with no compassion. I love this one.

  • Glif remixes any image on the web.

Educational

And that’s all. I hope these insights, news, and tools help you prepare for the future!

Have a really nice week.

Stay kind.

Rafa TV