Prompting is the part of working with an LLM that everyone has to do and almost no one writes about clearly. Most guides either treat it as magic ("just be specific!") or as a craft you have to learn over years. Neither is useful when you are sitting at a workbench and need an answer.
This page is the long version of the quick answer above. It is the article we wish we had read first.
The four levers
Every prompt is doing four jobs. When a prompt fails, it is almost always because one of these four is missing, not because of phrasing.
- Instruction. What is the task, in one sentence? Not "help me with this" but "rewrite this paragraph as a 100-word summary for a non-technical reader."
- Context. What does the model need to know that it does not already know? The document, the user, the constraints, the prior decisions. If you are asking about your codebase, paste the relevant code.
- Examples. What does a good answer look like? One or two worked examples shift the model from generic output to the shape you want.
- Output spec. How should the answer be structured? Markdown headings? JSON with these keys? Bullets, max five? If you do not say, you get the model's default, and the default rarely matches what you want.
Pick a prompt that is failing and run it through this list. Nine times out of ten, one lever is empty.
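The four levers can be made mechanical. Here is a minimal sketch of a prompt builder that forces you to fill in each one; the function and parameter names are illustrative, not part of any API:

```python
def build_prompt(instruction: str, context: str, examples: list[str], output_spec: str) -> str:
    """Assemble a prompt with all four levers filled in.

    Raising on an empty lever is the point: it makes the missing
    lever visible before the prompt ever reaches a model.
    """
    for name, value in [("instruction", instruction), ("context", context),
                        ("output_spec", output_spec)]:
        if not value.strip():
            raise ValueError(f"empty lever: {name}")
    parts = [instruction]
    if examples:  # examples are optional, but usually worth one or two
        parts.append("Examples:")
        parts.extend(examples)
    parts.append(output_spec)
    parts.append("Input:")
    parts.append(context)
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="Rewrite this paragraph as a 100-word summary for a non-technical reader.",
    context="[paragraph goes here]",
    examples=["Example input: ...\nExample output: ..."],
    output_spec="Return plain prose, no headings, at most 100 words.",
)
```

The order is a choice, not a rule: instruction first so the model knows the task before it reads the material, context last so the input sits next to where generation begins.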
A worked example
You want the model to extract action items from a meeting transcript. Here is the bad version:
Summarize this meeting and tell me what to do next.
[transcript]
This will produce something. It will not produce what you want. The instruction is fuzzy ("summarize" and "tell me what to do" are different jobs), there are no examples of an action item, and the output is unspecified.
The same prompt with all four levers:
You are extracting action items from a meeting transcript.
An action item is a specific task assigned to a specific person, with an implied or explicit deadline. "We should think about pricing" is not an action item. "Maria will draft the new pricing page by Friday" is.
Example transcript:
"... so I'll talk to legal about the new clauses next week, and Sam will pull the metrics ..."
Example output:
[
  { "owner": "speaker", "task": "talk to legal about new clauses", "deadline": "next week" },
  { "owner": "Sam", "task": "pull the metrics", "deadline": null }
]
Now extract action items from the transcript below. Return ONLY a JSON array, no prose.
Transcript:
[transcript]
This works because every lever is loaded. The instruction is specific. The context is the transcript. The example shows the model what counts and what does not. The output spec rules out the chatty prose the model would otherwise volunteer.
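An output spec is only as useful as the check you run against it. Here is a sketch of validating the model's reply against the spec above; the `reply` string is a stand-in for whatever your API call actually returns:

```python
import json

# Stand-in for the model's reply; in practice this comes from your API call.
reply = """[
  { "owner": "speaker", "task": "talk to legal about new clauses", "deadline": "next week" },
  { "owner": "Sam", "task": "pull the metrics", "deadline": null }
]"""

def parse_action_items(text: str) -> list[dict]:
    """Parse the reply and verify it matches the promised shape."""
    items = json.loads(text)  # raises if the model added prose around the JSON
    if not isinstance(items, list):
        raise ValueError("expected a JSON array")
    for item in items:
        missing = {"owner", "task", "deadline"} - item.keys()
        if missing:
            raise ValueError(f"action item missing keys: {missing}")
    return items

items = parse_action_items(reply)
```

When `json.loads` fails, that failure is signal: the model ignored the "ONLY a JSON array, no prose" line, and the fix is usually the output spec, not the parser.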
What you do not need to do
A lot of prompting advice is theatre. It does not hurt your output, but it does not help it either. You can safely skip:
- Telling the model it is "a world-class expert." Roles only help when they imply a method.
- Saying "please" and "thank you." Models do not care.
- Threatening to fire the model, tipping the model, or pretending lives depend on the answer. These tricks worked on some early models and were promptly trained out.
- Adding "be concise" and "be thorough" in the same prompt. Pick one.
What you do need to do is write clearly. Frontier models read English the way you would: ambiguity in your prompt produces ambiguity in the answer.
When you need more than a prompt
Some problems are not prompting problems. If the model lacks the information, no prompt fixes it. Three signs you have left prompting territory:
- The model is confidently wrong about a domain fact. You need retrieval, not a better prompt.
- The model keeps drifting from your format. You need structured output (a JSON schema), not stricter wording.
- The model is slow or expensive at the scale you need. You need a smaller or local model, not a cleverer prompt.
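On the second sign: most provider APIs accept a JSON Schema for structured output, though the parameter name varies by provider. A sketch of what the action-item schema from the worked example would look like, written as a plain Python dict:

```python
# Standard JSON Schema describing the action-item array. How you pass it
# to a model (response_format, tool definition, etc.) depends on the API.
action_item_schema = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "owner": {"type": "string"},
            "task": {"type": "string"},
            "deadline": {"type": ["string", "null"]},  # null when no deadline was stated
        },
        "required": ["owner", "task", "deadline"],
        "additionalProperties": False,
    },
}
```

The schema does the enforcement the prompt could only request: the model cannot drift from a format it is constrained to emit.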
The rest of this hub covers those cases. Start with the four levers, and graduate to the deeper techniques when you have a problem the four levers do not solve.