Three Ways to Improve Prompts in ChatGPT & Other LLMs

By Matthew Edgar · Last Updated: August 16, 2023

If you haven’t already started to play with ChatGPT and other generative AI tools, you should. As you begin working with basic prompts, you will quickly realize that you can get so much more out of generative AI tools by writing more thoughtful and detailed prompts. Here are some tips and example prompts to help you improve your work with ChatGPT, Google Bard, Bing’s conversational search and other generative AI tools.

Tip #1: Clear, Specific and Unambiguous

Prompts need to be clear and specific. Anything ambiguous in the prompt will cause ChatGPT, Bard, or other LLMs to generate irrelevant or misleading responses. This includes avoiding grammatical errors in prompts or writing poorly structured sentences.

All of this matters because LLMs are trained to detect patterns in language and use that training to predict the next word based on the context of the preceding words. The clearer and more specific your prompt, the more context the LLM has and the more accurately it can predict how to generate a result. A prompt that is ambiguous or poorly worded, however, will cause the LLM to misread the context and generate an off-target response.

Example: Bad Prompt

“Check the following code for issues. [paste HTML code here]”

This prompt is ambiguous. What issues should be prioritized? Are you checking if the code is valid, if there are ways to improve speed, if it won’t load in certain browsers, if bots can interpret the code, if it will pass accessibility guidelines, or something else? When you paste HTML code, there will be CSS and JavaScript code included, so should the LLM check that as well? You’ll get a response to this prompt but it likely won’t be very helpful.

Example: Good Prompt

“I need your help analyzing my CSS code for potential performance bottlenecks. The CSS code is pasted below. Look for ways the CSS could be written to produce the same output but with less code. Check for CSS conflicts, ways to improve organization, and ways to avoid style overrides. Please provide a list summarizing the specific steps to take to optimize the CSS file. Include specific examples of code a front-end developer should fix. You do not need to rewrite all the code. [paste CSS code here]”

This prompt is very specific. It asks the LLM to look at CSS code for ways to improve performance and provides specific examples of things that should be analyzed. It then asks for two specific types of output: a list of the issues to address and examples of code that should be fixed. The prompt also notes that the intended audience of the generated response will be a front-end developer. All of this gives the LLM (Bard in the example below) more context, making it easier for the LLM to generate a useful response.

[Screenshot: Google Bard's response to the code analysis prompt]
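The ingredients of the good prompt above — task, specific checks, output format, and audience — can also be assembled programmatically, which is handy if you run the same analysis often. A minimal sketch (the function name, parameters, and template here are my own illustration, not part of any library):

```python
def build_prompt(task, checks, output_format, audience, code):
    """Assemble a clear, specific prompt from labeled parts.

    Each argument supplies one piece of context the LLM needs:
    the task, the specific things to check, the desired output
    format, the intended audience, and the code to analyze.
    """
    checks_text = "\n".join(f"- {c}" for c in checks)
    return (
        f"{task}\n\n"
        f"Check for the following:\n{checks_text}\n\n"
        f"Output: {output_format} "
        f"The audience is {audience}.\n\n"
        f"Code:\n{code}"
    )

prompt = build_prompt(
    task="Analyze my CSS code for potential performance bottlenecks.",
    checks=[
        "ways to produce the same output with less code",
        "CSS conflicts",
        "ways to improve organization",
        "ways to avoid style overrides",
    ],
    output_format="A list of specific optimization steps with code examples.",
    audience="a front-end developer",
    code="[paste CSS code here]",
)
print(prompt)
```

Writing the prompt as named parts like this makes it harder to accidentally omit the context — the output format or the audience — that the tip above argues for.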

Tip #2: Use Examples and Demonstrations

It is also helpful to use examples or demonstrations in the prompt, especially when you are asking the LLM to complete a particular task. This relates to being clear and specific because examples can help the model understand context and more effectively determine what response should be generated.

Also, the training data ChatGPT, Bard, and others rely on contains a wide array of writing styles, language usage, and output structures. By providing examples, you steer the model toward the style, language, and structure you want.

Example: Bad Prompt

“Here are the URLs and headers for five pages on my website. Write title tags for these pages that will target relevant keywords and entice people to click through from a search result listing. [paste list of URLs and the page’s H1 tag]”

This prompt is clear and specific, and it will generate a decent response. However, there are many different styles and approaches to writing title tags, and those approaches change by industry. As well, a brand may have a distinctive voice that needs to be reflected in the title tag. In some cases, title tags should be more professional and in other cases more casual. Some titles work better as questions, others as quick statements. There are also different characters you can use as separators. Given all of this, it can be helpful to provide an example of the types of title tags that already work for your website.

Example: Good Prompt

“Here are the URLs and headers for three pages on my website. A good title tag on my website is [insert two to three adjectives to describe the better title tags]. An example of our best title tag would be: [paste good title tag]. With those examples in mind, please write title tags for these pages that will target relevant keywords and entice people to click through from a search result listing. [paste list of URLs and the page’s H1 tag]”

By adding in a summary of what makes a title tag better for your specific website along with an example of a good title tag (or multiple examples, if you’d prefer), the LLM will have an easier time writing title tags that are appropriate for your website.

[Screenshot: ChatGPT writing title tags from the example prompt]
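This example-driven approach is often called few-shot prompting: the examples you include act as demonstrations the model imitates. A sketch of assembling such a prompt (the function, adjectives, and sample data below are hypothetical, invented for illustration):

```python
def few_shot_prompt(style_adjectives, examples, instruction, pages):
    """Build a few-shot prompt: show the desired style through
    adjectives and concrete example title tags before the task."""
    example_text = "\n".join(f"- {e}" for e in examples)
    page_text = "\n".join(f"{url} | {h1}" for url, h1 in pages)
    return (
        f"A good title tag on my website is {', '.join(style_adjectives)}.\n"
        f"Examples of our best title tags:\n{example_text}\n\n"
        f"{instruction}\n\n"
        f"URL | H1:\n{page_text}"
    )

prompt = few_shot_prompt(
    style_adjectives=["concise", "benefit-driven", "conversational"],
    examples=["Find & Fix Crawl Errors Fast | Example Co"],
    instruction=(
        "With those examples in mind, please write title tags for these "
        "pages that will target relevant keywords and entice people to "
        "click through from a search result listing."
    ),
    pages=[("https://example.com/services", "Our Services")],
)
print(prompt)
```

The point is the ordering: style description first, demonstrations second, task last — so the model has absorbed the desired voice before it reads the instruction.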

Tip #3: Employ Multi-turn Conversations

We typically work with computers in single-turn interactions: we input a request and get a response. Think about a traditional search engine: you input your query and get a list of results. If you want a different set of results, you input a different query. For the most part, the second query is independent of the first; the search engine isn’t retaining any history between queries (Google does personalize search results to an extent, but search queries are largely single-turn interactions).

With newer LLMs, we can move beyond single-turn interactions. This means your prompt can turn into a conversation with the model. This is helpful when you need to break down complex queries into sequential steps. It can also help the LLM better comprehend your intent and context because you can build that context through a conversation.

Example Prompt: Analyze Code

There are a lot of examples of multi-turn conversations. One of the more helpful multi-turn conversations for technical SEO is using this type of interaction to understand and analyze a website’s code. For example, in this exchange, I asked Bing’s AI to help me analyze a website’s .htaccess file. After pasting in the code from the file, I was able to ask Bing several follow-up questions.

[Screenshots: Bing conversational AI multi-turn conversation, parts 1 and 2]

For any multi-turn conversation, you want to begin the prompt by asking a general question. The goal is to let the LLM know you are starting the conversation. This sets a multi-turn conversation prompt apart from task-based prompts like the examples provided earlier in this document.
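Under the hood, chat-style LLM interfaces typically represent a multi-turn conversation as a list of role-tagged messages, and the whole history is resent with each request so every reply can draw on earlier turns. A minimal sketch of that data structure (no real API call here; the `ask` helper and the sample replies are invented for illustration):

```python
def ask(history, user_message, reply):
    """Append one turn to the conversation history.

    In a real chat API, the entire history list would be sent
    with each request; `reply` stands in for the model's answer.
    """
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": reply})
    return history

# Start with a general, scene-setting message, as the tip above suggests.
history = [{"role": "system",
            "content": "You are helping me analyze my site's .htaccess file."}]

ask(history, "Here is my .htaccess file: [paste file]",
    "This file sets up redirects and caching rules...")
ask(history, "Which of those redirect rules could cause a loop?",
    "The second rule could loop because...")

# Every turn is retained, so each follow-up question has full context.
print(len(history))  # 5 messages: 1 system + 2 user + 2 assistant
```

Because the history grows with every turn, long conversations eventually hit the model’s context limit — another reason to keep each turn clear and specific.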

Final Thoughts

Learning how to interact with ChatGPT or other language models is like learning other new tools. With time, patience and practice, your prompts will get better and the responses you receive will improve. As those improve you will start to see new ways to incorporate ChatGPT into your work and research. If you need help using these tools as part of your SEO work or finding the best prompts, please contact me.
