Prompt Engineering

Getting the Best from LLMs

Difficulty
Beginner
Duration
10-12 min
Prerequisites
What is an LLM
Step 1/7

What Is a Prompt?

A prompt is the input text you provide to an LLM to get a desired output. It's the interface between human intent and model behavior — the only lever you have (besides fine-tuning) to control what the model generates.

A prompt is just tokens. The model doesn't "understand" your intent — it processes your tokens and generates the most likely continuation. This means how you phrase your prompt dramatically affects the output quality.

Prompt components:

  • System prompt: Sets the model's role and behavior (e.g., "You are a helpful coding assistant")
  • User message: The actual request or question
  • Context: Any background information the model needs
  • Examples: Optional demonstrations of desired input/output format
  • Constraints: Explicit rules ("Respond in JSON", "Keep under 100 words")
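These components can be assembled into the chat-message format that most LLM APIs accept. A minimal sketch (the strings and the endpoint call are illustrative, not a specific provider's API):

```python
# Illustrative sketch: assembling prompt components into a chat-message
# list. All content strings here are hypothetical examples.
system_prompt = "You are a helpful coding assistant."

context = "The user is working in Python 3.12."
examples = "Input: [3, 1, 2] -> Output: [1, 2, 3]"
constraints = "Respond in JSON. Keep the answer under 100 words."
question = "How do I sort a list of dictionaries by a key?"

# Context, examples, and constraints are commonly folded into the
# user message; the system prompt stays in its own message.
user_message = "\n\n".join([context, examples, constraints, question])

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_message},
]

# `messages` would then be sent to a chat-completion endpoint.
```

Keeping each component in its own variable makes it easy to vary one piece (say, the constraints) while holding the rest fixed when testing prompts.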

The art of prompt engineering is crafting inputs that reliably steer the model toward the output you want. It's not magic — it's applied understanding of how LLMs process and generate text.

Key mental model: The model is a text completion engine. Your prompt sets up the context, and the model generates the most natural continuation of that context. A well-crafted prompt makes the desired output the most natural continuation.
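A toy illustration of this framing, using a classic few-shot translation pattern (the example strings are hypothetical):

```python
# The prompt sets up a pattern; the desired output is then simply the
# most natural continuation of that pattern.
prompt = (
    "Translate English to French.\n"
    "sea otter -> loutre de mer\n"
    "cheese -> "
)
# A completion model is likely to continue with "fromage", because the
# prompt has made the translation the most natural next tokens.
```

Note that nothing in the prompt "asks" for a translation of "cheese"; the pattern itself steers the model.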

Prompt + Completion Flow

[System]    You are a helpful assistant.
[User]      What is Python?
[Assistant] Python is a programming language...

Anatomy of a Prompt

Component     | Purpose                        | Example
System prompt | Define role and behavior       | "You are an expert Python developer"
Context       | Provide background information | "Given the following code: ..."
Instruction   | State the specific task        | "Find and fix the bug in this function"
Examples      | Demonstrate desired format     | "Input: X → Output: Y"
Constraints   | Set boundaries on output       | "Respond in JSON format, max 200 words"
Output primer | Start the response format      | "{ \"answer\":" (forces JSON)
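The output primer deserves a quick sketch: by pre-filling the start of the assistant's reply, you force the model to continue in that format. The continuation string below is an assumed example, not real model output:

```python
import json

# Output primer: the start of the assistant reply is supplied by us,
# so the model can only continue the JSON object.
primer = '{ "answer":'

# Hypothetical continuation the model might generate:
model_continuation = ' "Paris", "confidence": 0.98 }'

# Primer + continuation parses as valid JSON.
full_output = primer + model_continuation
parsed = json.loads(full_output)
```

Because the primer already opens the JSON object, the model cannot preface its answer with chatty prose like "Sure, here's the answer:".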