ChatGPT Secret Memory Trick Prompt (Double Prompting)


Have you ever had a prompt where you just did not have enough room? A prompt where the token limit (about 2,700 words max) was reached? It might have happened after you pasted in a long article to get a summary, and ChatGPT gave you the infernal "message too long" error.

Did you just give up? Well, you shouldn’t have! There is a way out! So what is it?

ChatGPT’s context vector saves information from your input, allowing it to be referenced in the next prompt. So when you ask it to ‘remember’ something, it’s stored for later use.

The most important thing to make this work: after you input the data for the next prompt, you want ChatGPT to write out as little as possible, because its reply is also added to the context vector.

To do this we will be using a simple prompt.


ChatGPT Remember This Prompt

Use the prompt below on ChatGPT to store information to use in the next prompt you put in.

Please remember the data below and say “Ok”

“Insert about 2,500 words”

  • To break your text up into 2,500-word blocks, I suggest using https://wordcounter.net/ (it’s free)
  • Remember to keep it at or under 2,500 words
  • You will have about 2,500 words in the next prompt to work with as well.
  • Your data will be stored in the context vector but will gradually be lost as you use more prompts.
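The chunking step above can also be automated. Here is a minimal Python sketch; the 2,500-word limit and the “Please remember…” wrapper come from the prompt in this article, while the function names are my own:

```python
def chunk_words(text, max_words=2500):
    """Split text into blocks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def memory_prompts(text):
    """Wrap each block in the 'remember this' prompt from the article."""
    return ['Please remember the data below and say "Ok"\n\n' + block
            for block in chunk_words(text)]
```

You would paste each item of `memory_prompts(...)` into ChatGPT one at a time, waiting for the “Ok” before sending the next.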

If that sounds really confusing, I suggest that you read at least one of my other articles:

How ChatGPT memory works, what it stores, and when it forgets it.

How to apply this technique of Double Prompting (using 2 prompts for a big piece of text).

You may also be interested in what a token is. I give a short definition below, and there is also a full article about how tokens apply to ChatGPT, its token limit, and roughly how tokens convert into words.

Definitions For ChatGPT Terms

Tell me and I forget. Teach me and I remember. Involve me and I learn. — Benjamin Franklin

Like everything else somewhat technical, machine learning has its own language. Hopefully you already know it, but if not, here are a few definitions to help you out.

Context Vector

Think of a ‘context vector’ as the brain of AI systems like ChatGPT, storing a summary of your conversation. It’s like a note-taker, writing down the essence of your words in a special number language only the AI understands.

This ‘summary’ then helps ChatGPT figure out the best response. But just like how you might forget an older detail in a long chat, the AI can also miss things if the conversation gets too lengthy. Why? Because it has a limit to how much it can remember at once, just like how a notebook can only hold so many pages.

The challenge we’re facing is finding a way to make this ‘notebook’ bigger or more efficient, so our AI can keep track of more of the conversation. That way, ChatGPT can ‘remember’ more and give even better, more relevant responses.

ChatGPT’s current notebook is only about 2,700 words and that is all that it can remember at once.
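The “notebook fills up and old pages fall out” behavior can be pictured as a sliding window over the conversation. This is a toy sketch of the idea, not ChatGPT’s actual implementation (the real model works on tokens, not whole words):

```python
from collections import deque

def sliding_context(max_words=2700):
    """A toy 'notebook' that keeps only the most recent max_words words."""
    window = deque()

    def add(text):
        window.extend(text.split())
        while len(window) > max_words:
            window.popleft()  # the oldest words fall out first
        return " ".join(window)

    return add
```

Once the window is full, every new word you add pushes the oldest remembered word out, which is why early details in a long chat get “forgotten.”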

Machine Learning Tokens

In the context of language models like GPT, a “token” typically refers to a piece of text that the model reads as a single unit. A token can be as short as one character, or as long as one word, or sometimes even longer.

Here’s how it works: When you give an AI model a piece of text, it breaks down, or “tokenizes,” the text into these smaller units. This is an important step in the process of understanding and generating language. The AI doesn’t read whole sentences at a time like humans typically do. Instead, it reads these smaller tokens one after the other and predicts what token comes next.

When we talk about the “context window” of a model like GPT-3.5 being 4096 tokens, we mean that the model can consider up to 4096 of these units at a time when it’s generating a response. Anything beyond this limit may not be taken into account, which is why the AI might seem to “forget” information from earlier in the conversation.
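A common rule of thumb, which is how 4,096 tokens works out to roughly 2,700–3,000 English words, is that one token is about three-quarters of a word. The exact count depends on the model’s tokenizer, so treat this sketch as an approximation only:

```python
def estimate_tokens(text):
    """Very rough token estimate: ~4/3 tokens per English word."""
    return int(len(text.split()) * 4 / 3)

def fits_context(text, context_tokens=4096):
    """Check whether text likely fits a 4,096-token context window."""
    return estimate_tokens(text) <= context_tokens
```

A 2,500-word block estimates to about 3,333 tokens, which leaves headroom under the 4,096-token window for the model’s reply.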

Double Prompting

Double prompting is simply using data from the previous prompt in the next prompt. This happens naturally when you are having a conversation with ChatGPT: it remembers what you were talking about.

  • What you asked
  • What it answered
  • Both are stored temporarily in the context vector

I’m thinking about writing a comedy show. What are some main characters that have been popular in TV shows (Jerry Seinfeld, etc.)?


Creating a well-loved character for a comedy show requires a blend of relatability, uniqueness, and humor. Let’s take a look at some popular characters from comedy TV shows for inspiration:

  1. Jerry Seinfeld from “Seinfeld”: A stand-up comedian trying to navigate life’s absurdities, Jerry’s observational humor and laid-back demeanor became an iconic feature of the show.
  2. Michael Scott from “The Office”: An awkward yet well-meaning boss, his lack of self-awareness and constant desire to be liked leads to cringeworthy yet hilarious situations.
  3. Sheldon Cooper from “The Big Bang Theory”: A genius with a low emotional quotient, Sheldon’s literal understanding of social conventions and strict adherence to routine create a wealth of comedic moments.
  4. Lucille Bluth from “Arrested Development”: An affluent, self-involved matriarch, Lucille’s obliviousness to ordinary life and sharp wit provide a unique brand of comedy.
  5. Leslie Knope from “Parks and Recreation”: An overly enthusiastic and dedicated public servant, Leslie’s optimism and determination lead to endearing and funny scenarios.

What kind of personality does Cooper have? What would his Myers-Briggs code be?

Dr. Sheldon Cooper is a character from “The Big Bang Theory,” known for his highly intellectual and often eccentric personality. He’s logical to a fault, lacks understanding of social cues, is very direct and honest (often to the point of being unintentionally rude), and is incredibly regimented, with a need for strict routines and orderliness.

If we were to attempt to fit Sheldon into a Myers-Briggs Type Indicator (MBTI) category, he would likely be classified as an INTJ, which stands for Introverted, Intuitive, Thinking, and Judging…

From this example, we can see that ChatGPT knew which Cooper I was talking about even though I did not specify Sheldon Cooper. That is because both my original question and its response are stored in the context vector.

When Should You Use Double Prompting?

Double prompting is a useful technique when dealing with data that’s too lengthy for a single prompt. In other words, when your information exceeds the model’s token limit. By breaking up your data into multiple prompts, you ensure that the model considers all the necessary information when generating a response.

For instance, you might use double prompting when having a complex or prolonged conversation with an AI like ChatGPT. Let’s say you’re feeding it a lengthy piece of text or a series of detailed instructions. Given the token limit of the model, some of the information might not fit in the prompt window.

When that happens, you get the dreaded “too long” message. By using double prompting, you can feed the data to the AI in smaller, manageable chunks.

Double prompting can also be useful when you want the AI to reference specific pieces of information later. By deliberately storing information in an earlier prompt, you can ensure the AI takes this into account in subsequent prompts. This can be helpful when you want the AI to maintain context over a series of interactions.
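If you talk to the model through an API rather than the chat interface, the same store-then-ask pattern shows up as the shape of the message list. This sketch assumes the OpenAI-style chat message format (`role`/`content` dictionaries); the actual API call is omitted, and the helper name is my own:

```python
def double_prompt_messages(chunks, task):
    """Build a chat message list that stores each chunk first
    ('remember and say Ok'), then asks the real question."""
    messages = []
    for chunk in chunks:
        messages.append({
            "role": "user",
            "content": 'Please remember the data below and say "Ok"\n\n' + chunk,
        })
        # The expected short acknowledgment, keeping the context small.
        messages.append({"role": "assistant", "content": "Ok"})
    messages.append({"role": "user", "content": task})
    return messages
```

Each chunk/“Ok” pair mirrors one round of the manual technique: the data goes into the context, the reply stays tiny, and the final message asks the real question against everything stored so far.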

Overall, the use of double prompting largely depends on your specific use case and how much data you’re working with. By understanding the limitations and capabilities of the AI, you can use techniques like double prompting effectively.

Conclusions

Navigating the capabilities of ChatGPT can sometimes feel like a balancing act. The key is understanding the balance between the AI’s memory, represented by its context vector, and its token limitations.

Double prompting is an excellent technique to utilize when your data exceeds the model’s token limit. By dividing your text into smaller, more digestible blocks, the AI is more capable of processing all the essential information.

Remember, interacting with an AI model like ChatGPT is similar to a conversation. Both parties remember what has been said and build upon that. So, don’t worry when you encounter a token limit. Break down your input, and let the AI follow the conversation.

Mastering terms like context vector, tokens, and double prompting is crucial. These terms will help you interact with ChatGPT in a more productive and streamlined manner. The next time you face a token limit, don’t lose heart. Break your information down and let the AI handle it.

Getting better results from your interaction with ChatGPT involves understanding, patience, and practice. With these tools, you can direct your conversation effectively and achieve the results you’re looking for.


Chris

Chris Chenault trained as a physicist at NMSU and did his doctoral work in biophysics at Emory. After studying medicine but deciding not to pursue an MD at Emory medical school Chris started a successful online business. In the past 10 years Chris's interests and studies have been focused on AI as applied to search engines, and LLM models. He has spent more than a thousand hours studying ChatGPT, GPT 3.5, and GPT4. He is currently working on a research paper on AI hallucinations and reducing their effects in large language models.
