Jul 12, 2024
3 min read

The Words We Recite

We, like LLMs, recite words to ourselves before we act.

It’s fascinating to ponder the similarities between how ChatGPT thinks and how the human mind works.

My favorite insight so far is that humans have system prompts too, and we often decide how well or how badly something will go before we even start.

If you don’t know what a system prompt is, it’s a set of directions that ChatGPT and other LLMs tell themselves about their own identity/personality, how they should behave, and the rules they should follow. An LLM reads its system prompt BEFORE it ever sees any user input, and it always recites those words to itself before responding. “You’re a helpful assistant” is a common system prompt, for example.

I could change its system prompt to “you are the best at everything” or “you’re pathetic and stupid.” You, the user, could say the same thing to both of these equally talented AIs, but you’d get very different responses.
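To make that concrete, here’s a minimal sketch of what swapping a system prompt looks like in code, using the OpenAI Python SDK. The model name, prompts, and question below are illustrative, not a recommendation.

```python
# Minimal sketch: same user message, two different system prompts.
# Assumes OPENAI_API_KEY is set in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(system_prompt: str, user_message: str) -> str:
    # The system prompt is sent ahead of the user's message,
    # so the model "recites" it to itself before responding.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

question = "Can you help me plan a difficult project?"
print(ask("You are the best at everything.", question))
print(ask("You're pathetic and stupid.", question))
```

Same question, same underlying model; only the words it tells itself before answering have changed.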

We, as people, have collections of system prompts we tell ourselves. We often decide how well or how badly something will go before we even start.

Some of these words we tell ourselves are situational. “I hate math.” *puts even less effort into their 3rd grade homework.

Some of these are about character traits. “Everyone likes me. I’m fun to talk to.” *confidently and warmly approaches people.

I’ve seen firsthand people who have thrived or died because of their system prompts. There are people I think of who, quite honestly, I didn’t find to be the brightest or particularly talented, but who have done well for themselves because they really, really, really believed in themselves. And I’ve seen the opposite too: people who seem so gifted in so many ways, yet seemingly unable to believe in themselves in certain vital skills or settings.

I wish my takeaway from all this were simply “I need to think positively,” but that is too simple. Because, to name just one reason, I think of those who, despite many negative signals, still tell themselves positive things when maybe they should change course. In some settings you should take external negative signals seriously, and in others power through them, right?

I think part of the difficulty is deciding whether the thoughts are the chicken or the egg. Do I think this because of what I’ve experienced? Or do I experience this because I think this? My guess is that most things are partially both.

Still, I’ve found value in being aware of my own system prompts. I like to think I can rewrite them as easily as I can for ChatGPT. I’m the developer for myself. I can write a new one anytime I want.