Python Logging Best Practices: The Expert's Handbook
Logging is not just about tracing execution; it's about creating an intelligent, observable, and auditable application. A truly experienced software engineer approaches Python logging with a strategic mindset, treating log data as a primary source of truth for operations and debugging.
This guide outlines the critical best practices, complete with detailed code examples, to elevate your Python logging from simple print statements to a powerful operational asset.
1. Never Log Directly via the Root Logger
The root logger is what basicConfig() configures, and relying on it directly makes configuration fragile in a large application. Always use named loggers so you can manage the configuration hierarchy and compartmentalize log sources.
❌ Avoid
```python
# In module_a.py
import logging

# Directly uses the root logger
logging.info("Using the root logger is bad practice.")
```
✅ Best Practice: Use Module-Level Named Loggers
By convention, name the logger after the current module using __name__. This allows you to selectively filter or handle logs from specific files via configuration.
```python
# In my_app/data_processor.py
import logging

# Logger name will be 'my_app.data_processor'
logger = logging.getLogger(__name__)

def process_data(data):
    logger.info("Starting data processing for %s items.", len(data))
    # ... logic ...
    logger.debug("Intermediate result calculated.")
```
2. Defer Formatting: Pass Arguments, Not f-strings
Log messages should be formatted lazily, i.e., only when the log level is enabled. Building the message with an f-string or string concatenation happens before the logging system checks the level, wasting CPU cycles on formatting that may never be emitted.
❌ Avoid (Performance Hit)
```python
logger.debug(f"Input is {expensive_calculation(user_data)}")
# The f-string is built (and expensive_calculation runs) even if DEBUG is disabled.
```
✅ Best Practice: Use Placeholder Arguments
Pass variables as arguments to the logging method. The logging module uses % formatting internally, but it only executes the formatting if the message is actually going to be processed.
```python
# The string formatting is deferred -- but note that expensive_calculation()
# itself still runs eagerly; see the isEnabledFor() guard below.
logger.debug("Input is %s", expensive_calculation(user_data))

# For multiple variables:
logger.info("User %s accessed resource %s.", user_id, resource_id)
```
3. Separate Runtime Errors from System Errors
Reserve ERROR and CRITICAL levels for true system failures (e.g., database connection loss, disk full). For predictable application flow failures (e.g., failed user login, invalid API input), use WARNING or a custom level (like AUDIT or SECURITY).
| Level | Use Case | Example |
|---|---|---|
| ERROR/CRITICAL | Application is fundamentally broken or unstable. | logger.error("Database connection dropped unexpectedly.") |
| WARNING | Normal flow deviation, but code handled it. | logger.warning("User %s login attempt failed (bad password).", user_id) |
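If you do introduce a custom level such as AUDIT, a minimal sketch using logging.addLevelName (the numeric value 25, between INFO and WARNING, is an assumption):

```python
import logging

# Hypothetical AUDIT level sitting between INFO (20) and WARNING (30)
AUDIT = 25
logging.addLevelName(AUDIT, "AUDIT")

logger = logging.getLogger(__name__)
logger.log(AUDIT, "User %s updated billing details.", "user_42")
```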
4. Use exception() for Handling Errors
When catching an exception, use the dedicated logger.exception() method instead of logger.error(). The exception() method is syntactic sugar for calling error() with exc_info=True set automatically, so the record is logged at ERROR level with the full traceback attached. It should only be called from within an exception handler.
❌ Avoid (Missing Traceback)
```python
try:
    result = 10 / 0
except ZeroDivisionError as e:
    # Traceback is missing unless you manually add exc_info
    logger.error("A division error occurred: %s", e)
```
✅ Best Practice (Full Traceback Included)
```python
try:
    result = 10 / 0
except ZeroDivisionError:
    # Automatically captures the current exception traceback
    logger.exception("A critical calculation failed.")
```
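Since exception() always logs at ERROR, pass exc_info=True yourself when the traceback belongs at a different level:

```python
try:
    result = 10 / 0
except ZeroDivisionError:
    # Same traceback capture, but logged at WARNING instead of ERROR
    logger.warning("Recoverable calculation failure.", exc_info=True)
```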
5. Centralize Configuration via dictConfig
Manually attaching handlers, setting formatters, and setting levels in code is error-prone and hard to maintain in production. Use logging.config.dictConfig() to apply the entire logging setup from a single dictionary at startup, typically deserialized from a configuration file (YAML, JSON, or TOML).
```python
# In app_setup.py (executed once at startup)
import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,  # Important!
    'formatters': {
        'standard': {'format': '[%(asctime)s] %(name)s %(levelname)s: %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'standard',
            'level': 'INFO',
        },
        'file_rotation': {
            'class': 'logging.handlers.TimedRotatingFileHandler',
            'filename': 'app_audit.log',
            'when': 'midnight',
            'backupCount': 7,
            'formatter': 'standard',
        },
    },
    'loggers': {
        # Root logger (fallback for everything else)
        '': {'handlers': ['console'], 'level': 'WARNING'},
        # Specific logger (e.g., our data processor module)
        'my_app.data_processor': {'handlers': ['file_rotation'], 'level': 'DEBUG', 'propagate': False},
    },
}

logging.config.dictConfig(LOGGING_CONFIG)

# Now, any loggers created in modules will obey this central config
logger = logging.getLogger('my_app.data_processor')
logger.debug("This DEBUG message goes ONLY to the file.")
```
6. Enrich Logs with Contextual Data
Log lines are far more useful when they contain data beyond the default fields. Use extra dictionaries or logging.Filter to add request IDs, user IDs, or custom environment tags to every log record.
Mechanism: The extra Parameter
```python
import logging
import uuid

logger = logging.getLogger("MyApp")
logger.setLevel(logging.INFO)
# Assumes a handler has been set up...

def handle_request(user_id):
    # Add a unique ID for this request flow
    request_id = str(uuid.uuid4())

    # Pass the context via the 'extra' keyword
    context = {'user_id': user_id, 'request_id': request_id}
    logger.info("Starting request.", extra=context)
    # ... logic ...
    logger.warning("Request finished with a non-critical error.", extra=context)
```
To make the custom fields (user_id, request_id) appear in the output, you must update your formatter:
```python
# Updated formatter string: note the custom keys added
FORMAT = '%(asctime)s %(request_id)s %(user_id)s [%(levelname)s] %(message)s'
```
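One caveat: any record that reaches this formatter without a user_id and request_id attribute will fail to format. A logging.Filter that injects defaults guards against that; a minimal sketch (the '-' placeholder values are assumptions):

```python
import logging

FORMAT = '%(asctime)s %(request_id)s %(user_id)s [%(levelname)s] %(message)s'

class ContextDefaultsFilter(logging.Filter):
    """Inject default context fields so formatting never fails."""
    def filter(self, record):
        if not hasattr(record, 'request_id'):
            record.request_id = '-'
        if not hasattr(record, 'user_id'):
            record.user_id = '-'
        return True  # keep every record

handler = logging.StreamHandler()
handler.addFilter(ContextDefaultsFilter())
handler.setFormatter(logging.Formatter(FORMAT))
logging.getLogger("MyApp").addHandler(handler)
```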
7. Use NOTSET for Library Loggers
If you are writing a reusable library, do not configure a level or attach a handler to your logger. Leave the level at the default (NOTSET or 0).
- Why? This ensures that your library's logs propagate up to the root logger, allowing the consuming application to decide where and how to handle them. Setting a level (e.g., logging.INFO) might accidentally silence important messages for the consumer.
```python
# In my_library/api.py
import logging

# ✅ Best Practice: Let the consumer decide the level and destination
library_logger = logging.getLogger(__name__)

def connect():
    library_logger.debug("Attempting connection...")  # The consuming app's root logger handles this.
    # ...
```
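One standard addition, recommended by the logging documentation itself, is a NullHandler on the library's top-level logger, so that consumers who configure nothing don't see "no handlers could be found" warnings:

```python
# In my_library/__init__.py
import logging

# Swallow records silently when the consuming app configures no handlers
logging.getLogger(__name__).addHandler(logging.NullHandler())
```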
