What to learn first
Prompting Fundamentals
The mechanics every prompt rests on: system vs. user messages, context windows, zero- and few-shot patterns, templates, and how to give a model what it needs to answer well.
Take me to the Fundamentals hub
Prompting Patterns
Reusable techniques like role prompting, tree-of-thoughts, ReAct, prompt chaining, self-consistency, and negative prompting, with examples you can paste and modify.
Take me to the Patterns hub
Prompting By Use Case
Long-form, specific guides for the actual things people prompt for: code, writing, summarization, data extraction, classification, and analysis.
Take me to the Use Cases hub
Local Models
Running LLMs on your own hardware: how the stack works, which runtimes to pick, what quantization actually changes, and which open-weight models are genuinely usable right now.
Take me to the Local hub
Model Benchmarks
Honest head-to-heads between frontier and open-weight models. We disclose the prompts, the temperature, the seed, and the limits — every comparison is timestamped.
Take me to the Bench hub
Release Radar
A dated, sourced tracker of new and rumored model releases. Every claim is tagged Confirmed, Strong signal, or Speculation, with a link back to the primary source.
Take me to the Radar hub