Prompt Engineering
Getting the Best from LLMs
What Is a Prompt?
A prompt is the input text you provide to an LLM to get a desired output. It's the interface between human intent and model behavior — the only lever you have (besides fine-tuning) to control what the model generates.
A prompt is just tokens. The model doesn't "understand" your intent — it processes your tokens and generates the most likely continuation. This means how you phrase your prompt dramatically affects the output quality.
Prompt components:
- System prompt: Sets the model's role and behavior (e.g., "You are a helpful coding assistant")
- User message: The actual request or question
- Context: Any background information the model needs
- Examples: Optional demonstrations of desired input/output format
- Constraints: Explicit rules ("Respond in JSON", "Keep under 100 words")
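The components above map naturally onto the chat-message format most LLM APIs accept. A minimal sketch, assuming an OpenAI-style `system`/`user`/`assistant` role convention (the helper name and its parameters are illustrative, not any particular library's API):

```python
def build_messages(system, instruction, context=None, examples=None, constraints=None):
    """Assemble a chat-style message list from prompt components.

    `examples` is a list of (input, output) pairs rendered as few-shot
    user/assistant turns; `constraints` are appended to the system prompt.
    """
    sys_text = system
    if constraints:
        sys_text += "\n\nConstraints:\n" + "\n".join(f"- {c}" for c in constraints)
    messages = [{"role": "system", "content": sys_text}]
    for ex_in, ex_out in (examples or []):
        messages.append({"role": "user", "content": ex_in})
        messages.append({"role": "assistant", "content": ex_out})
    # Context, if any, is prepended to the actual instruction.
    user_text = f"{context}\n\n{instruction}" if context else instruction
    messages.append({"role": "user", "content": user_text})
    return messages

msgs = build_messages(
    system="You are a helpful coding assistant",
    instruction="Find and fix the bug in this function",
    context="Given the following code: def add(a, b): return a - b",
    examples=[("Input: 2 + 2", "Output: 4")],
    constraints=["Respond in JSON", "Keep under 100 words"],
)
```

Keeping the assembly in one place like this makes it easy to vary a single component (say, the constraints) while holding the rest of the prompt fixed.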
The art of prompt engineering is crafting inputs that reliably steer the model toward the output you want. It's not magic — it's applied understanding of how LLMs process and generate text.
Key mental model: The model is a text completion engine. Your prompt sets up the context, and the model generates the most natural continuation of that context. A well-crafted prompt makes the desired output the most natural continuation.
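The completion-engine framing can be made concrete with few-shot formatting: if the prompt ends exactly where the answer should begin, the desired output becomes the most natural continuation. A small illustrative sketch (the function name and pattern are this document's own example, not a standard API):

```python
def completion_prompt(examples, query):
    """Render few-shot (input, output) pairs so the desired answer is
    the natural continuation: the prompt ends right where the model's
    output should begin."""
    lines = [f"Input: {i}\nOutput: {o}" for i, o in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

prompt = completion_prompt([("cat", "CAT"), ("dog", "DOG")], "bird")
# The prompt ends with "Input: bird\nOutput:" — the continuation most
# consistent with the established pattern is the uppercased word.
```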
Prompt + Completion Flow
Anatomy of a Prompt
| Prompt Component | Purpose | Example |
|---|---|---|
| System prompt | Define role and behavior | "You are an expert Python developer" |
| Context | Provide background information | "Given the following code: ..." |
| Instruction | State the specific task | "Find and fix the bug in this function" |
| Examples | Demonstrate desired format | "Input: X → Output: Y" |
| Constraints | Set boundaries on output | "Respond in JSON format, max 200 words" |
| Output primer | Start the response format | { "answer": (forces a JSON continuation) |
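The output-primer row deserves a note: because the model only generates a continuation, a primer like `{ "answer":` is part of the prompt, not the response, so you must prepend it back before parsing. A minimal sketch (the completion string is a hypothetical model output, not a real API call):

```python
import json

def with_primer(prompt, primer='{ "answer":'):
    """Append an output primer so the model must continue a JSON object."""
    return prompt + "\n" + primer

primer = '{ "answer":'
completion = ' "42" }'  # hypothetical model continuation
# Reattach the primer to the continuation to recover the full object.
parsed = json.loads(primer + completion)
```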