
Measure execution time of a function or endpoint in FastAPI

Serhii Hrekov
software engineer, creator, artist, programmer, projects founder

The most precise and least intrusive way to measure the execution time of a function or endpoint in FastAPI is with a custom decorator or middleware. Either approach wraps your functions with timing logic without modifying the function's code itself, keeping the measurement clean, reusable, and maintainable.

Using a Decorator for a Specific Function

A decorator is an excellent choice for measuring the execution time of a single function or a few specific functions. It's concise and easy to apply.

The Decorator

import time
import functools

def measure_execution_time(func):
    """A decorator to measure and print the execution time of a function."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        start_time = time.perf_counter()
        result = await func(*args, **kwargs)
        end_time = time.perf_counter()
        execution_time_ms = (end_time - start_time) * 1000
        print(f"'{func.__name__}' executed in {execution_time_ms:.4f} ms")
        return result
    return wrapper

We use time.perf_counter() because it provides the most precise timer available for performance measurement, as it's not affected by system clock changes. The @functools.wraps decorator is crucial for preserving the original function's metadata (like __name__ and docstrings).
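
As a quick sanity check, the sketch below applies the decorator to a hypothetical slow_add coroutine (the name and the 50 ms delay are made up for illustration) and shows that functools.wraps keeps the original name and docstring intact:

import asyncio

@measure_execution_time
async def slow_add(a: int, b: int) -> int:
    """Add two numbers after a short delay."""
    await asyncio.sleep(0.05)
    return a + b

print(slow_add.__name__)             # "slow_add", not "wrapper", thanks to functools.wraps
print(slow_add.__doc__)              # the original docstring is preserved
print(asyncio.run(slow_add(2, 3)))   # prints the timing line, then 5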

Applying the Decorator to an Endpoint

You can now apply this decorator to any of your FastAPI endpoints.

import asyncio

from fastapi import FastAPI

app = FastAPI()

@app.get("/items/{item_id}")
@measure_execution_time
async def read_item(item_id: int):
    # Simulate a long-running operation
    await asyncio.sleep(0.1)
    return {"item_id": item_id}

Using Middleware for All Endpoints

Middleware is the preferred solution when you want to measure the execution time of all requests to your API. It wraps every request and response, centralizing the timing logic and providing a consistent metric across your application.

The Middleware

You can add middleware to your FastAPI application with the @app.middleware("http") decorator.

import time

from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def add_process_time_header(request: Request, call_next):
    start_time = time.perf_counter()
    response = await call_next(request)
    end_time = time.perf_counter()
    execution_time_ms = (end_time - start_time) * 1000

    # Add the execution time to the response headers for clients to see
    response.headers["X-Process-Time-Ms"] = str(execution_time_ms)

    # You can also log the time
    print(f"Request to '{request.url.path}' took {execution_time_ms:.4f} ms")

    return response

This middleware will be executed for every incoming HTTP request. call_next(request) handles the request by passing it to the relevant endpoint function. The code before this line runs before the endpoint, and the code after it runs after the response has been generated.

By adding a custom header like X-Process-Time-Ms, you also provide valuable information to clients and monitoring systems.
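
On the client side, any HTTP library can read that header. A minimal sketch with httpx, assuming the API is running locally on port 8000:

import httpx

response = httpx.get("http://localhost:8000/items/42")
print(response.headers.get("X-Process-Time-Ms"))  # e.g. "101.2345"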


Summary of Approaches

Method     | Pros                                               | Cons                                               | Best For
Decorator  | Fine-grained control, targets specific functions.  | Requires manual application to each function.      | Measuring specific, critical endpoints.
Middleware | Centralized, measures all requests automatically.  | Less granular, might include middleware overhead.  | Measuring overall API performance.

For precise, global timing, middleware is the most robust solution. For micro-optimizing a few specific functions, a decorator is the best choice.
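
The two approaches also combine well: the middleware gives you a baseline for every request, while the decorator is reserved for the endpoints you are actively optimizing. Below is a minimal sketch of that setup; the /health route and the exact paths are illustrative assumptions, and the decorator is repeated so the snippet is self-contained:

import asyncio
import functools
import time

from fastapi import FastAPI, Request

app = FastAPI()

def measure_execution_time(func):
    # Same decorator as in the first section, repeated here for completeness.
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = await func(*args, **kwargs)
        print(f"'{func.__name__}' executed in {(time.perf_counter() - start) * 1000:.4f} ms")
        return result
    return wrapper

@app.middleware("http")
async def add_process_time_header(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    response.headers["X-Process-Time-Ms"] = str((time.perf_counter() - start) * 1000)
    return response

@app.get("/items/{item_id}")
@measure_execution_time          # fine-grained timing for this specific endpoint
async def read_item(item_id: int):
    await asyncio.sleep(0.1)
    return {"item_id": item_id}

@app.get("/health")
async def health():
    # Not decorated: only the middleware times this request.
    return {"status": "ok"}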