Subject
The mechanics every prompt rests on: system vs. user messages, context windows, zero- and few-shot patterns, templates, and how to give a model what it needs to answer well.
Prompting is the part of working with an LLM that everyone has to do and almost no one writes about clearly. Most guides either treat it as magic ("just be specific!") or as a craft you have to learn over years. Neither is useful when you are sitting at a workbench and need an answer.
This page is the long version of the quick answer above. It is the article we wish we had read first.
Every prompt is doing four jobs, and it does them through four levers: the instruction (what you want done), the context (the material the model needs to do it), the examples (what a good answer looks like, and what does not count), and the output spec (the shape the answer must take). When a prompt fails, it is almost always because one of these four is missing, not because of phrasing.
Pick a prompt that is failing and run it through this list. Nine times out of ten, one lever is empty.
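If it helps to see the checklist as code, here is a minimal sketch in Python of a template that assembles the four levers into one prompt string. The function name and section labels are ours, purely for illustration; nothing here depends on any particular library.

```python
def build_prompt(instruction: str, context: str, examples: str = "", output_spec: str = "") -> str:
    """Assemble the four levers into a single prompt string.

    Any lever left empty is simply omitted, which makes it obvious at a
    glance which ones a failing prompt never had in the first place.
    """
    sections = [
        ("Instruction", instruction),
        ("Examples", examples),
        ("Output format", output_spec),
        ("Context", context),
    ]
    return "\n\n".join(f"{label}:\n{body}" for label, body in sections if body.strip())


prompt = build_prompt(
    instruction="Extract action items from the meeting transcript below.",
    examples=('"Maria will draft the new pricing page by Friday" is an action item; '
              '"we should think about pricing" is not.'),
    output_spec="Return ONLY a JSON array of {owner, task, deadline} objects, no prose.",
    context="[transcript]",
)
```

The ordering mirrors the worked example below: instruction, examples, and output spec first, the long context last, so the material never buries the ask.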
You want the model to extract action items from a meeting transcript. Here is the bad version:
Summarize this meeting and tell me what to do next.
[transcript]
This will produce something. It will not produce what you want. The instruction is fuzzy ("summarize" and "tell me what to do" are different jobs), there are no examples of what counts as an action item, and the output format is unspecified. Only one lever, the context, is loaded.
The same prompt with all four levers:
You are extracting action items from a meeting transcript.
An action item is a specific task assigned to a specific person, with an implied or explicit deadline. "We should think about pricing" is not an action item. "Maria will draft the new pricing page by Friday" is.
Example transcript:
"... so I'll talk to legal about the new clauses next week, and Sam will pull the metrics ..."
Example output:
[
  { "owner": "speaker", "task": "talk to legal about new clauses", "deadline": "next week" },
  { "owner": "Sam", "task": "pull the metrics", "deadline": null }
]
Now extract action items from the transcript below. Return ONLY a JSON array, no prose.
Transcript:
[transcript]
This works because every lever is loaded. The instruction is specific. The context is the transcript. The example shows the model what counts and what does not. The output spec rules out the chatty prose the model would otherwise volunteer.
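If you are calling the model from code rather than a chat window, the same split maps cleanly onto system and user messages. A minimal sketch follows, assuming the OpenAI Python SDK (openai>=1.0); the model name is a placeholder and any chat-completions-style client works the same way. The few-shot example from the prompt above belongs in the system message too; it is elided here to keep the sketch short.

```python
import json

from openai import OpenAI  # assumes the OpenAI Python SDK (openai>=1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You are extracting action items from a meeting transcript. "
    "An action item is a specific task assigned to a specific person, "
    "with an implied or explicit deadline. "
    "Return ONLY a JSON array of {owner, task, deadline} objects, no prose."
)

def extract_action_items(transcript: str, model: str = "gpt-4o-mini") -> list[dict]:
    # Stable instructions and the output spec live in the system message;
    # the per-call context (the transcript) goes in the user message.
    response = client.chat.completions.create(
        model=model,  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": f"Transcript:\n{transcript}"},
        ],
        temperature=0,
    )
    raw = response.choices[0].message.content
    try:
        return json.loads(raw)  # enforce the output spec: not-JSON is a failure, not a shrug
    except json.JSONDecodeError as err:
        raise ValueError(f"Model did not return valid JSON: {raw[:200]}") from err
```

Keeping the instruction, definition, and output spec in the system message and only the transcript in the user message means the reusable part of the prompt lives in one place, and the JSON parse at the end turns the output spec into something you can actually enforce.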
A lot of prompting advice is theatre. It does not hurt your output, but it does not help it either. You can safely skip:
What you do need to do is write clearly. The frontier models read English the way you would: ambiguity in your prompt produces ambiguity in the answer.
Some problems are not prompting problems. If the model lacks the information, no prompt fixes it. Three signs you have left prompting territory:
The rest of this hub covers those cases. Start with the four levers, and graduate to the deeper techniques when you have a problem the four levers do not solve.
A short editorial reading list. Pick whichever fits how you like to learn.