Type hints in Python don’t enforce anything at runtime. That surprises people who come from typed languages. You can annotate a function as returning str and return an integer, and Python won’t complain when you run it. The value of type hints comes from static analysis — your editor, mypy, or pyright catches the mistake before the code runs. That’s a real benefit, but only if you set it up and actually use it.
:::note[TL;DR]
- Type hints are static — they don't run at runtime unless you use a tool like Pydantic or dataclasses
- Start with function signatures; don't annotate every local variable
- `Optional[X]` is shorthand for `X | None` — prefer the union syntax in Python 3.10+
- `TypedDict` for dicts with known shape; `dataclass` or Pydantic for anything you want validation on
- Use `pyright` (faster, VSCode native) or `mypy` (more configurable); pick one and stick with it
- `Any` leaks — one `Any` in a call chain disables type checking for everything downstream
:::
## Prerequisites

- Python 3.10+ (union syntax `X | None` instead of `Optional[X]`, cleaner type error messages)
- A type checker installed: `pip install mypy` or `pip install pyright`
- Basic familiarity with Python functions and classes
## How do you annotate basic functions?

```python
def greet(name: str) -> str:
    return f"Hello, {name}"

def process(items: list[str], max_count: int = 10) -> dict[str, int]:
    return {item: len(item) for item in items[:max_count]}
```
Annotate parameters and return types. Skip annotating local variables unless the type isn’t obvious — the checker infers them.
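One exception worth knowing: an empty collection gives the checker nothing to infer from, so a local annotation helps there. A minimal sketch (the function name is hypothetical); mypy reports "Need type annotation" without the annotation, while pyright can often infer the type from later usage:

```python
def collect_names() -> list[str]:
    # The empty literal alone doesn't tell the checker the element type
    names: list[str] = []
    names.append("Alice")
    return names
```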
For `None` returns, be explicit:

```python
def log_event(event: str) -> None:
    print(f"[LOG] {event}")
```
## How do you handle optional values?
Before Python 3.10, Optional[str] was the idiom. It means “str or None”:
```python
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    ...
```
In Python 3.10+, use the union syntax — it’s cleaner:
```python
def find_user(user_id: int) -> str | None:
    ...
```
The runtime behavior is identical. The union syntax works anywhere you’d use Optional.
A common mistake: forgetting to handle the `None` case after annotating something as `T | None`:

```python
def greet_user(user_id: int) -> str:
    name = find_user(user_id)  # str | None
    return f"Hello, {name.upper()}"  # ERROR: name might be None
```
The type checker catches this. The fix:
```python
def greet_user(user_id: int) -> str:
    name = find_user(user_id)
    if name is None:
        return "Hello, stranger"
    return f"Hello, {name.upper()}"
```
## How do you annotate collections and generics?
Modern Python (3.9+) lets you use built-in types directly:
```python
# Python 3.9+
def process(items: list[str]) -> dict[str, int]:
    ...

# Older style (still works, but verbose)
from typing import List, Dict

def process(items: List[str]) -> Dict[str, int]:
    ...
```
For tuples with fixed structure:
```python
def get_coordinates() -> tuple[float, float]:
    return 37.7749, -122.4194

# Variable-length tuple of one type
def get_scores() -> tuple[int, ...]:
    ...
```
For callables:
```python
from typing import Callable

def apply(fn: Callable[[int, int], int], a: int, b: int) -> int:
    return fn(a, b)
```
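A quick usage sketch: anything matching the signature, including a lambda, satisfies the `Callable` parameter.

```python
from typing import Callable

def apply(fn: Callable[[int, int], int], a: int, b: int) -> int:
    return fn(a, b)

print(apply(lambda a, b: a + b, 2, 3))  # 5
```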
## When do you use TypedDict vs dataclass vs Pydantic?
This is the question that matters most in practice.
**TypedDict** — for dicts with a known shape, especially when you're working with JSON responses or existing code that passes dicts around:
```python
from typing import TypedDict

class UserRecord(TypedDict):
    id: int
    name: str
    email: str
    active: bool

def get_user() -> UserRecord:
    return {"id": 1, "name": "Alice", "email": "alice@example.com", "active": True}
```
TypedDict gives you type checking but no runtime validation. If the dict has wrong types at runtime, Python won’t catch it unless you use a validator.
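To see the gap concretely, here is a wrongly typed dict that runs without complaint; a static checker would flag the assignment, but the interpreter never does:

```python
from typing import TypedDict

class UserRecord(TypedDict):
    id: int
    name: str

# A checker flags this line; at runtime it's just an ordinary dict
bad: UserRecord = {"id": "oops", "name": 42}  # type: ignore
print(type(bad["id"]))  # <class 'str'>
```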
**dataclass** — for structured data you create in code, not parse from external input:
```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str
    email: str
    active: bool = True
```
Dataclasses give you __init__, __repr__, and __eq__ for free. Still no runtime validation — if you pass a string where an int is expected, Python stores the string.
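The same point in miniature: the generated `__init__` never checks the annotations.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

u = User(id="not-an-int", name="Alice")  # runs fine; nothing validates
print(type(u.id))  # <class 'str'>
```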
**Pydantic** — for data you parse from external sources (API requests, config files, environment variables) where you need runtime validation:
```python
from pydantic import BaseModel, EmailStr

class User(BaseModel):
    id: int
    name: str
    email: EmailStr
    active: bool = True

# Raises ValidationError if types are wrong
user = User(id="not-an-int", name="Alice", email="alice@example.com")
```
Use Pydantic when the data comes from outside your codebase and correctness matters. Use dataclasses for internal data structures. Use TypedDict when you need type checking on existing dict-based code without refactoring.
## What’s the difference between mypy and pyright?
Both are static type checkers, but they make different tradeoffs.
mypy is the original Python type checker, developed by the Python core team. It’s highly configurable, has a large plugin ecosystem (mypy-django, sqlalchemy-stubs, etc.), and is what most CI setups use. It can be slow on large codebases.
pyright (Microsoft) is what VSCode’s Pylance extension uses. It’s significantly faster than mypy, has better inference in many cases, and gives inline errors in the editor in real time. It’s stricter by default about some things mypy lets through.
In practice: use pyright in your editor for real-time feedback (via Pylance or the pyright CLI), and use mypy in CI if you need its plugin ecosystem. If you don’t have plugins, pyright alone is fine for CI too.
Run mypy with strict mode to get real value:
```bash
mypy --strict src/
```
--strict enables the checks that catch actual bugs — without it, mypy’s defaults are permissive enough to miss most issues.
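If you want strict mode applied consistently rather than remembered per invocation, it can be pinned in configuration. A minimal sketch in `pyproject.toml` (the `legacy.*` module pattern is a hypothetical example of relaxing older code):

```toml
[tool.mypy]
strict = true

# Relax specific legacy packages while keeping new code strict
[[tool.mypy.overrides]]
module = "legacy.*"
ignore_errors = true
```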
## What are the common gotchas?
**`Any` leaks.** `Any` is the escape hatch — it disables type checking for that value. But it spreads: if a function returns `Any`, every variable that receives its return value is also `Any`, and every function those variables are passed into loses its type safety. One untyped library import can poison a call chain.
```python
import some_untyped_lib  # returns Any

result = some_untyped_lib.get_data()  # Any
user_id = result["id"]                # Any
process_user(user_id)                 # parameter type ignored — Any bypasses checking
```
Fix: add explicit type annotations at the boundary:
```python
result = some_untyped_lib.get_data()
user_id: int = result["id"]  # narrow the type here
```
**Mutable default arguments.** Python's mutable default argument pitfall isn't caught by type checkers because the code is type-correct — just wrong:
```python
def add_item(item: str, items: list[str] = []) -> list[str]:  # bug
    items.append(item)
    return items
```
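Calling the buggy version twice makes the problem visible: the default list is created once, at function definition, and shared across calls.

```python
def add_item(item: str, items: list[str] = []) -> list[str]:  # buggy
    items.append(item)
    return items

first = add_item("a")
second = add_item("b")
print(second)  # ['a', 'b'], because both calls mutate the same list
```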
Use `None` as the default and initialize inside:

```python
def add_item(item: str, items: list[str] | None = None) -> list[str]:
    if items is None:
        items = []
    items.append(item)
    return items
```
**Type narrowing with `isinstance`.** Type checkers understand `isinstance` checks:
```python
def process(value: str | int) -> str:
    if isinstance(value, str):
        return value.upper()  # value is str here
    return str(value)  # value is int here
```
Without the isinstance check, calling .upper() on str | int would be a type error — int has no .upper().
**`cast` is a lie.** `cast(T, x)` tells the type checker "trust me, this is `T`" without any runtime check. Use it sparingly and only when you genuinely know better than the checker.
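A sketch of the tradeoff: `cast` changes what the checker believes, not what the value is.

```python
from typing import cast

data: object = "hello"
s = cast(str, data)  # checker now treats s as str; nothing is checked at runtime
n = cast(int, data)  # equally accepted, even though data is not an int
print(type(n))       # <class 'str'> — cast never converted or verified anything
```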
## Summary

- Annotate function signatures first; skip local variables unless inference fails
- Use `X | None` instead of `Optional[X]` in Python 3.10+
- TypedDict for dict shapes, dataclass for internal structures, Pydantic for external data with validation
- Run mypy with `--strict` or pyright in strict mode — defaults catch almost nothing
- `Any` disables type checking everywhere it flows; narrow it at external boundaries
## FAQ

### Do type hints affect performance?
No. Type annotations are ignored at runtime unless you explicitly use typing.get_type_hints() or a framework like Pydantic that reads them. There’s a negligible import cost for the typing module, which is irrelevant in practice.
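The exception mentioned above can be made concrete: `typing.get_type_hints` reads annotations at runtime, which is the mechanism frameworks like Pydantic build on.

```python
from typing import get_type_hints

def greet(name: str) -> str:
    return f"Hello, {name}"

print(get_type_hints(greet))  # {'name': <class 'str'>, 'return': <class 'str'>}
```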
### Should I annotate every file, or just new code?
Start with new code and gradually annotate existing code as you touch it. Annotating an entire legacy codebase at once is rarely worth the effort. Running mypy with --ignore-missing-imports on legacy code while being strict on new code is a reasonable middle ground. Some teams add type: ignore comments to silence errors in legacy files they haven’t gotten to yet.
### What’s the `Protocol` type for?
Protocol is Python’s way to define structural subtyping — what Go calls interfaces and what TypeScript calls structural types. Instead of requiring a class to inherit from a base class, you define what methods it needs to have:
```python
from typing import Protocol

class Serializable(Protocol):
    def to_json(self) -> str: ...

def save(obj: Serializable) -> None:
    data = obj.to_json()
    ...
```
Any class with a to_json(self) -> str method satisfies Serializable, regardless of its inheritance chain. This is particularly useful for writing functions that work with third-party objects you can’t modify.
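A quick sketch of the structural match (the `Report` class is hypothetical); adding `@runtime_checkable` also lets `isinstance` verify the protocol at runtime:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Serializable(Protocol):
    def to_json(self) -> str: ...

class Report:  # note: no inheritance from Serializable anywhere
    def to_json(self) -> str:
        return '{"pages": 3}'

def save(obj: Serializable) -> None:
    print(obj.to_json())

save(Report())  # accepted by the type checker via structural matching
print(isinstance(Report(), Serializable))  # True, thanks to @runtime_checkable
```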
## What to read next
- Python Async/Await: The Complete Guide — type annotations for async functions follow the same patterns
- Python Virtual Environments — setting up an environment where mypy and pyright are installed correctly
- PostgreSQL Performance Tuning — Rachel’s other recent article on backend production patterns