Python Testing Guide: How to Test Your Code with pytest
Testing isn't as hard as it seems. With AI-generated code becoming common, tests matter even more: they build confidence and catch regressions when you make changes. Here's your complete guide to testing Python code with pytest.
Quick Start / TL;DR
- Install pytest: uv add pytest
- Test functions start with test_ and use assert statements
- Run tests: pytest (all tests) or pytest test_file.py (specific file)
- Test exceptions: use with pytest.raises(ExceptionType):
- Use fixtures for setup/teardown and shared test data
- Parametrize tests with @pytest.mark.parametrize for multiple inputs
- Test files, APIs, and databases with temporary resources and mocks
Common Questions
How do I get started with Python testing?
Start by installing pytest and writing your first test. Testing becomes easier once you establish the basic pattern (a tiny end-to-end sketch follows these steps):
- Install pytest: uv add pytest
- Create a function to test
- Write a test function whose name starts with test_
- Use assert to check expected outcomes
- Run with pytest
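For example, the entire pattern fits in a few lines. This is only a sketch with a made-up greet function, kept in the test file for brevity:

# test_greet.py -- a minimal, hypothetical example of the pattern
def greet(name):
    """Return a greeting for the given name."""
    return f"Hello, {name}!"

def test_greet():
    assert greet("World") == "Hello, World!"

Running pytest test_greet.py should report one passing test.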
What should I test in my Python code?
Focus on testing these areas:
- Public function behavior - inputs and expected outputs
- Edge cases - empty inputs, None values, boundary conditions
- Error conditions - invalid inputs that should raise exceptions
- Integration points - database connections, API calls, file operations
- Business logic - the core functionality users depend on
Don't test:
- Private implementation details
- Third-party library functionality
- Simple getters/setters without logic
When should I write tests?
Test-Driven Development (TDD) approach:
- Write a failing test first
- Write minimal code to make it pass
- Refactor and improve
- Repeat
Practical approach:
- Before fixing bugs (write a test that reproduces the bug)
- When adding new features
- When refactoring existing code
- When you find yourself manually testing the same thing repeatedly
How do I install and set up pytest?
Install pytest using uv (see Dev Environment for more on using uv).
# Using uv
uv add pytest
# Alternative: using pip
pip install pytest
That's it: pytest is ready to go. No configuration is needed for basic usage.
How do I write my first test?
Let's start with something simple. I'll create a basic function and then test it.
Create a file called calculator.py:
def add(a, b):
    """Add two numbers together."""
    return a + b

def multiply(a, b):
    """Multiply two numbers."""
    return a * b

def divide(a, b):
    """Divide two numbers, raising ValueError if dividing by zero."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
Now create a test file called test_calculator.py:
from calculator import add, multiply, divide
import pytest

def test_add():
    """Test addition function with various inputs."""
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0
    assert add(2.5, 1.5) == 4.0

def test_multiply():
    """Test multiplication function."""
    assert multiply(3, 4) == 12
    assert multiply(-2, 5) == -10
    assert multiply(0, 100) == 0

def test_divide():
    """Test division function."""
    assert divide(10, 2) == 5
    assert divide(7, 2) == 3.5

def test_divide_by_zero():
    """Test that division by zero raises appropriate exception."""
    with pytest.raises(ValueError, match="Cannot divide by zero"):
        divide(10, 0)
Run the tests:
pytest test_calculator.py
You'll see output like this:
========================= test session starts =========================
collected 4 items
test_calculator.py .... [100%]
========================= 4 passed in 0.02s =========================
How do I use assertions effectively?
The assert statement is your main tool for testing. It lets you verify that your code works as expected by checking if a condition is True. If the condition is False, the test fails and raises an AssertionError.
import pytest

def test_assertions():
    """Examples of different assertion patterns."""
    # Basic equality
    assert 2 + 2 == 4

    # Not equal
    assert 5 != 3

    # Greater than, less than
    assert 10 > 5
    assert 3 < 7
    assert 10 >= 10
    assert 5 <= 10

    # In / not in
    assert 'hello' in 'hello world'
    assert 'x' not in 'hello'
    assert 2 in [1, 2, 3]

    # Boolean checks
    assert True
    assert not False
    assert bool([1, 2, 3])  # Non-empty list is truthy
    assert not bool([])     # Empty list is falsy

    # None checks
    result = None
    assert result is None
    assert result is not False  # None is not False

    # Type checks
    assert isinstance([1, 2, 3], list)
    assert isinstance("hello", str)
    assert isinstance(42, int)

    # Length and size checks
    assert len([1, 2, 3]) == 3
    assert len("hello") == 5

    # Approximate floating point comparisons
    assert abs(0.1 + 0.2 - 0.3) < 1e-10
    # Or use pytest.approx for cleaner float testing
    assert 0.1 + 0.2 == pytest.approx(0.3)
How do I test exceptions and error conditions?
When you expect your code to raise an exception, use pytest.raises():
import pytest

def validate_age(age: int) -> int:
    """Validate age input, raising ValueError for invalid ages."""
    if age < 0:
        raise ValueError("Age cannot be negative")
    if age > 150:
        raise ValueError("Age seems unrealistic")
    return age

def test_age_validation():
    """Test age validation with valid and invalid inputs."""
    # These should work fine
    assert validate_age(25) == 25
    assert validate_age(0) == 0
    assert validate_age(150) == 150

    # These should raise exceptions
    with pytest.raises(ValueError):
        validate_age(-1)
    with pytest.raises(ValueError):
        validate_age(200)

    # Test specific exception messages
    with pytest.raises(ValueError, match="Age cannot be negative"):
        validate_age(-5)
    with pytest.raises(ValueError, match="Age seems unrealistic"):
        validate_age(300)
How do I run tests in different ways?
Here are the different ways to run tests with various options:
# Run all tests in current directory and subdirectories
pytest
# Run tests in a specific file
pytest test_calculator.py
# Run a specific test function
pytest test_calculator.py::test_add
# Run tests with more verbose output (shows test names)
pytest -v
# Run tests and stop at first failure
pytest -x
# Run only tests that failed last time
pytest --lf
# Run tests that match a pattern
pytest -k "test_add"
# Show the slowest 10 tests
pytest --durations=10
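If you find yourself typing the same flags on every run, pytest can read default options from a configuration file. A minimal sketch, assuming you want verbose output by default:

# pytest.ini
[pytest]
addopts = -v

Anything listed in addopts is added to every pytest invocation, so keep it to flags you always want.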
How do I use fixtures for test setup?
Fixtures in pytest are helper functions that run before your test, setting up data or resources your test needs. Think of them as a way to prepare your test environment.
A Basic Fixture
Let's start with the simplest possible fixture - one that provides test data:
import pytest

@pytest.fixture
def sample_numbers():
    """Provide a list of numbers for testing."""
    return [1, 2, 3, 4, 5]

def test_sum(sample_numbers):
    """Test that we can sum the numbers."""
    assert sum(sample_numbers) == 15

def test_length(sample_numbers):
    """Test that we have 5 numbers."""
    assert len(sample_numbers) == 5
How It Works
- The @pytest.fixture decorator tells pytest this is a fixture
- When a test function has a parameter with the same name as the fixture (sample_numbers), pytest automatically runs the fixture first
- The fixture's return value gets passed to your test function
When to use fixtures
A few common examples of when you might use fixtures in tests (a setup/teardown sketch follows this list):
- Authentication: provide a logged‑in user, API token, or session context.
- Configuration: set environment variables, feature flags, or config objects.
- Temporary workspace: create and clean up temp directories/files.
- Database setup: create or connect to test db, run migrations, seed sample rows.
- File-based test data: load data to ensure they’re available per test.
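Fixtures can also clean up after themselves: code before the yield runs as setup, code after it runs as teardown. A minimal sketch using an environment variable as the shared resource (the API_TOKEN name is just an example):

import os
import pytest

@pytest.fixture
def api_token():
    """Set a fake API token in the environment, then remove it afterwards."""
    os.environ["API_TOKEN"] = "test-token"  # setup
    yield "test-token"                      # the test runs at this point
    del os.environ["API_TOKEN"]             # teardown: runs even if the test fails

def test_token_is_available(api_token):
    assert os.environ["API_TOKEN"] == api_token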
How do I test the same function with multiple inputs?
When you want to test the same function with different inputs, use @pytest.mark.parametrize:
import pytest

def is_even(n: int) -> bool:
    """Check if a number is even."""
    return n % 2 == 0

@pytest.mark.parametrize("number,expected", [
    (2, True),
    (3, False),
    (4, True),
    (5, False),
    (0, True),
    (-2, True),
    (-3, False),
])
def test_is_even(number: int, expected: bool):
    """Test is_even function with multiple inputs."""
    assert is_even(number) == expected
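Parametrization also combines well with exception testing. A short sketch, assuming the validate_age function from the earlier example is importable:

import pytest
# from your_module import validate_age  # hypothetical location of the earlier example

@pytest.mark.parametrize("bad_age", [-1, -100, 151, 300])
def test_validate_age_rejects_invalid_values(bad_age: int):
    """Every invalid age should raise ValueError."""
    with pytest.raises(ValueError):
        validate_age(bad_age)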
How do I test file operations safely?
When testing code that works with files, use pytest's tmp_path fixture: it creates a temporary directory so nothing touches your real file system. The following example app writes a list of lines out to a file.
# app.py
from pathlib import Path
from typing import Iterable

def write_lines(path: Path, lines: Iterable[str]) -> None:
    """Write one item per line to the given file path."""
    with path.open("w", encoding="utf-8") as f:
        for line in lines:
            f.write(f"{line}\n")
The test code:
# test_app.py
from pathlib import Path

from app import write_lines

def test_write_lines(tmp_path: Path):
    # tmp_path is a temporary directory that pytest cleans up automatically.
    temp_file = tmp_path / "data.txt"
    items = ["apple", "banana", "cherry"]

    write_lines(temp_file, items)

    # Verify the file contents without needing a separate "load" function.
    contents = temp_file.read_text(encoding="utf-8").splitlines()
    assert contents == items
How do I test external dependencies with mocks?
Sometimes your code depends on external things like APIs, databases, or file systems. Mocks let you fake these dependencies so your tests are fast and reliable.
What is a mock? A mock is a fake object that pretends to be the real thing. Instead of making actual API calls, you create a mock that returns the data you want for testing.
Use Python's built-in unittest.mock:
from unittest.mock import patch

import pytest
import requests

def get_weather(city):
    """Get weather from an API."""
    response = requests.get(f"https://api.weather.com/current?city={city}")
    data = response.json()
    return data['temperature']

def test_get_weather():
    """Test weather function with mocked API call."""
    # Mock the requests.get call
    with patch('requests.get') as mock_get:
        # Set up the fake response
        mock_get.return_value.json.return_value = {'temperature': 75}

        # Test your function
        temp = get_weather("New York")

        # Check it worked
        assert temp == 75

        # Verify the API was called correctly
        mock_get.assert_called_once_with("https://api.weather.com/current?city=New York")

def test_get_weather_error():
    """Test what happens when the API fails."""
    with patch('requests.get') as mock_get:
        # Make the mock raise an exception
        mock_get.side_effect = requests.RequestException("Network error")

        # Your function should handle the error gracefully
        # (You might need to modify your function to catch exceptions).
        # As written, get_weather lets the exception propagate, so the test
        # can simply assert that it is raised:
        with pytest.raises(requests.RequestException):
            get_weather("New York")
Key concepts:
- patch() replaces the real function with a fake one during the test
- return_value sets what the mock should return
- side_effect makes the mock raise an exception
- assert_called_once_with() checks the mock was called with the right parameters
Troubleshooting Common Testing Issues
Why are my tests not being discovered?
Problem: Running pytest shows "collected 0 items"
Solutions:
- Ensure test files start with test_ or end with _test.py
- Ensure test functions start with test_
- Check that test files are in the correct directory
- Use pytest --collect-only to see what pytest discovers
# Debug test discovery
pytest --collect-only
# Run with verbose output
pytest -v
# Specify test directory explicitly
pytest tests/
How do I fix import errors in tests?
Problem: ModuleNotFoundError when importing your code
Solutions:
# Option 1: Add project root to Python path
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent))
# Option 2: Use relative imports (if tests are in package)
from ..mymodule import myfunction
# Option 3: Install your package in development mode
# pip install -e .
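On pytest 7.0 and newer there is also a fourth option: the pythonpath ini setting, which adds directories to sys.path for the test session. A sketch, assuming your code lives at the project root:

# pytest.ini (pytest 7.0+)
[pytest]
pythonpath = .

The same setting can live under [tool.pytest.ini_options] in pyproject.toml.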
Why do my tests pass individually but fail together?
Problem: Tests pass when run alone but fail in a suite
Common causes:
- Shared state: Tests modifying global variables or class attributes
- File system: Tests leaving files or directories behind
- Database: Tests not cleaning up database changes
- Network: Tests interfering with each other's network mocks
Solutions:
import os

import pytest

# Use fixtures for cleanup
global_variable = None

@pytest.fixture(autouse=True)
def reset_global_state():
    """Reset global state before and after each test."""
    global global_variable
    global_variable = None
    yield
    global_variable = None

# Use fresh instances
@pytest.fixture
def calculator():
    """Provide a fresh calculator instance for each test."""
    return Calculator()  # Calculator is your own class under test

# Clean up files in fixtures
@pytest.fixture
def temp_file():
    file_path = "test_file.txt"
    yield file_path
    if os.path.exists(file_path):
        os.remove(file_path)
How do I debug failing tests?
Use pytest debugging options:
# Stop on first failure and show local variables
pytest -x -l
# Show print statements (pytest captures them by default)
pytest -s
# Show why tests were skipped
pytest -rs
# Run only failed tests from last run
pytest --lf
Add debug output in tests:
def test_complex_calculation():
    data = [1, 2, 3, 4, 5]
    result = complex_calculation(data)

    # Add debug output
    print(f"Input data: {data}")
    print(f"Calculation result: {result}")
    print(f"Expected: {15}")

    assert result == 15
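When print statements aren't enough, pytest can also hand a failing test over to the standard Python debugger (pdb):

# Drop into pdb automatically at the point of failure
pytest --pdb

# Combine with -x to stop and debug only the first failure
pytest -x --pdb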
How do I handle slow tests?
Mark slow tests:
@pytest.mark.slow
def test_large_dataset_processing():
    """This test takes a long time."""
    # ... slow test code

# Run only fast tests
pytest -m "not slow"

# Run only slow tests
pytest -m slow
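Custom marks like slow should be registered so pytest doesn't warn about unknown markers. A sketch of the registration in pytest.ini:

# pytest.ini
[pytest]
markers =
    slow: marks tests as slow (deselect with -m "not slow")

The same list can go under [tool.pytest.ini_options] in pyproject.toml.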
Use fixtures to optimize setup:
@pytest.fixture(scope="session")
def expensive_setup():
"""Run expensive setup once per test session."""
# This runs once for all tests
return setup_expensive_resource()
@pytest.fixture(scope="module")
def database_setup():
"""Setup database once per test module."""
# This runs once per test file
return create_test_database()
Advanced Testing Strategies
How do I implement Test-Driven Development (TDD)?
TDD Cycle: Red → Green → Refactor
- Red: Write a failing test
- Green: Write minimal code to pass
- Refactor: Improve the code while keeping tests green
Example TDD workflow:
Using the first test example from the beginning of the article, you would create the test first, probably just for the add function.
=> Create a test file called test_calculator.py:
from calculator import add
import pytest

def test_add():
    """Test addition function with various inputs."""
    assert add(2, 3) == 5
If you run this test, it fails because there is no calculator module and no function called add. So you then create what is needed for that test to pass.
=> Create the app file called calculator.py:
def add(a, b):
    return 5
Now you could run the test and it would pass; however, it really isn't doing addition. So you add more tests to the test script. You could update it to be:
from calculator import add
import pytest

def test_add():
    """Test addition function with various inputs."""
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0
    assert add(2.5, 1.5) == 4.0
Now you have to really implement addition to get the test to pass.
You repeat this cycle: write a test that checks the function does what you want, then write the function to make it pass. The reason TDD is popular is that it forces you to think about what success means before you start.
How do I set up tests for CI/CD?
Create .github/workflows/test.yml for GitHub Actions:
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Quote the versions so YAML does not read 3.10 as 3.1
        python-version: ["3.9", "3.10", "3.11", "3.12"]

    steps:
    - uses: actions/checkout@v3
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v3
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install pytest
        pip install -r requirements.txt
    - name: Run tests
      run: |
        pytest
Tips for Better Testing
Testing Strategy Guidelines
- Test behavior, not implementation - Focus on what the function does, not how it does it
- Write descriptive test names - test_user_login_with_invalid_password_returns_error is better than test_login
- Test the happy path first, then edge cases and error conditions
- Keep tests independent - Each test should run in isolation
- Use the AAA pattern: Arrange (setup), Act (execute), Assert (verify)
Test code organized using the AAA pattern:
def test_user_registration():
    """Test user registration with valid data."""
    # Arrange
    user_data = {
        "name": "John Doe",
        "email": "john@example.com",
        "password": "secure123"
    }
    user_service = UserService()

    # Act
    result = user_service.register_user(user_data)

    # Assert
    assert result.success is True
    assert result.user_id is not None
    assert "@" in result.user.email
Practice Challenge
Challenge: Fix this buggy function by writing tests first:
def calculate_discount(price, discount_percent):
    """Calculate discounted price."""
    if discount_percent > 100:
        return 0
    return price - (price * discount_percent)
Your task:
- Write comprehensive tests that would catch the bugs (one possible starting point is sketched after this list)
- Run tests to see failures
- Fix the function to make tests pass
- Add edge case tests (negative prices, invalid discounts)
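As one possible starting point (only a sketch; the expected value assumes the usual meaning of a percentage discount), a first test might look like this:

from discounts import calculate_discount  # hypothetical module name

def test_twenty_percent_discount():
    """A 20% discount on a price of 100 should leave 80."""
    assert calculate_discount(100, 20) == 80

This test fails against the buggy implementation above, which is exactly the point: now you can fix the function until it passes.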
This is test-driven development in action! Testing saves you hours of debugging and gives you confidence that your code works correctly.