
# Introduction
Most Python developers treat logging as an afterthought. They scatter print() statements throughout development, maybe switch to basic logging later, and assume that's enough. But when issues arise in production, they learn they're missing the context needed to diagnose problems efficiently.
Proper logging gives you visibility into application behavior, performance patterns, and error conditions. With the right approach, you can trace user actions, identify bottlenecks, and debug issues without reproducing them locally. Good logging turns debugging from guesswork into systematic problem-solving.
This article covers essential logging patterns that Python developers can use. You'll learn how to structure log messages for searchability, handle exceptions without losing context, and configure logging for different environments. We'll start with the basics and work our way up to more advanced logging techniques that you can use in projects right away. We'll be using only the standard library's logging module.
You can find the code on GitHub.
# Setting Up Your First Logger
Instead of jumping straight to complex configurations, let's understand what a logger actually does. We'll create a basic logger that writes to both the console and a file.
import logging

logger = logging.getLogger('my_app')
logger.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)

formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)

logger.addHandler(console_handler)
logger.addHandler(file_handler)

logger.debug('This is a debug message')
logger.info('Application started')
logger.warning('Disk space running low')
logger.error('Failed to connect to database')
logger.critical('System shutting down')
Here's what each piece of the code does.
The getLogger() function creates a named logger instance. Think of it as creating a channel for your logs. The name 'my_app' helps you identify where logs come from in larger applications.
We set the logger level to DEBUG, which means it will process all messages. Then we create two handlers: one for console output and one for file output. Handlers control where logs go.
The console handler only shows INFO level and above, while the file handler captures everything, including DEBUG messages. This is useful because you want detailed logs in files but cleaner output on screen.
The formatter determines how your log messages look. The format string uses placeholders like %(asctime)s for the timestamp and %(levelname)s for severity.
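The Formatter accepts other LogRecord attributes as well. As a small illustrative variant (not part of the setup above), this format string adds the module, function name, and line number, which can be handy in file logs:
formatter = logging.Formatter(
    '%(asctime)s [%(levelname)s] %(name)s %(module)s.%(funcName)s:%(lineno)d - %(message)s'
)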
# Understanding Log Levels and When to Use Each
Python's logging module has five standard levels, and knowing when to use each one is important for useful logs.
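Under the hood, each level is just an integer, and a logger or handler only processes records at or above its configured level. A quick check in the interpreter confirms the ordering:
import logging

print(logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
# 10 20 30 40 50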
Here is an example:
logger = logging.getLogger('payment_processor')
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(handler)

def process_payment(user_id, amount):
    logger.debug(f'Starting payment processing for user {user_id}')

    if amount <= 0:
        logger.error(f'Invalid payment amount: {amount}')
        return False

    logger.info(f'Processing ${amount} payment for user {user_id}')

    if amount > 10000:
        logger.warning(f'Large transaction detected: ${amount}')

    try:
        # Simulate payment processing
        success = charge_card(user_id, amount)
        if success:
            logger.info(f'Payment successful for user {user_id}')
            return True
        else:
            logger.error(f'Payment failed for user {user_id}')
            return False
    except Exception as e:
        logger.critical(f'Payment system crashed: {e}', exc_info=True)
        return False

def charge_card(user_id, amount):
    # Simulated payment logic
    return True

process_payment(12345, 150.00)
process_payment(12345, 15000.00)
Let's break down when to use each level:
- DEBUG is for detailed information useful during development. You'd use it for variable values, loop iterations, or step-by-step execution traces. These are usually disabled in production.
- INFO marks normal operations that you want to record. Starting a server, completing a task, or successful transactions go here. These confirm your application is working as expected.
- WARNING signals something unexpected but not breaking. This includes low disk space, deprecated API usage, or unusual but handled situations. The application keeps running, but someone should investigate.
- ERROR means something failed but the application can continue. Failed database queries, validation errors, or network timeouts belong here. The specific operation failed, but the app keeps running.
- CRITICAL indicates serious problems that might cause the application to crash or lose data. Use this sparingly, for catastrophic failures that need immediate attention.
When you run the above code, you'll get:
DEBUG: Starting payment processing for user 12345
DEBUG:payment_processor:Starting payment processing for user 12345
INFO: Processing $150.0 payment for user 12345
INFO:payment_processor:Processing $150.0 payment for user 12345
INFO: Payment successful for user 12345
INFO:payment_processor:Payment successful for user 12345
DEBUG: Starting payment processing for user 12345
DEBUG:payment_processor:Starting payment processing for user 12345
INFO: Processing $15000.0 payment for user 12345
INFO:payment_processor:Processing $15000.0 payment for user 12345
WARNING: Large transaction detected: $15000.0
WARNING:payment_processor:Large transaction detected: $15000.0
INFO: Payment successful for user 12345
INFO:payment_processor:Payment successful for user 12345
True
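You may notice each message appears twice above: once in our handler's format and once as LEVEL:name:message. That usually means the environment the code ran in (a notebook, for example) had already attached a handler to the root logger, and records propagate up to it. If that happens in your setup, one option is to stop propagation on your logger:
logger.propagate = False  # keep records from bubbling up to the root logger's handlers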
Next, let's move on to learn more about logging exceptions.
# Logging Exceptions Correctly
When exceptions occur, you need more than just the error message; you need the full stack trace. Here is how to capture exceptions effectively.
import json
import logging

logger = logging.getLogger('api_handler')
logger.setLevel(logging.DEBUG)
handler = logging.FileHandler('errors.log')
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
handler.setFormatter(formatter)
logger.addHandler(handler)

def fetch_user_data(user_id):
    logger.info(f'Fetching data for user {user_id}')
    try:
        # Simulate API call
        response = call_external_api(user_id)
        data = json.loads(response)
        logger.debug(f'Received data: {data}')
        return data
    except json.JSONDecodeError as e:
        logger.error(
            f'Failed to parse JSON for user {user_id}: {e}',
            exc_info=True
        )
        return None
    except ConnectionError:
        logger.error(
            f'Network error while fetching user {user_id}',
            exc_info=True
        )
        return None
    except Exception as e:
        logger.critical(
            f'Unexpected error in fetch_user_data: {e}',
            exc_info=True
        )
        raise

def call_external_api(user_id):
    # Simulated API response
    return '{"id": ' + str(user_id) + ', "name": "John"}'

fetch_user_data(123)
The key here is the exc_info=True parameter. It tells the logger to include the full exception traceback in your logs. Without it, you only get the error message, which often isn't enough to debug the problem.
Notice how we catch specific exceptions first, then fall back to a general Exception handler. The specific handlers let us provide context-appropriate error messages. The last handler catches anything unexpected and re-raises it because we don't know how to handle it safely.
Also notice we log at ERROR for expected exceptions (like network errors) but CRITICAL for unexpected ones. This distinction helps you prioritize when reviewing logs.
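As a shorthand, the logging module also provides logger.exception(), which logs at ERROR level and attaches the traceback automatically; it is meant to be called from inside an except block. A minimal sketch:
try:
    json.loads('{not valid json')
except json.JSONDecodeError:
    logger.exception('Failed to parse JSON')  # logged at ERROR with the full traceback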
# Creating a Reusable Logger Configuration
Copying logger setup code across files is tedious and error-prone. Let's create a configuration function you can import anywhere in your project.
# logger_config.py
import logging
import os
from datetime import datetime

def setup_logger(name, log_dir="logs", level=logging.INFO):
    """
    Create a configured logger instance

    Args:
        name: Logger name (usually __name__ from the calling module)
        log_dir: Directory to store log files
        level: Minimum logging level

    Returns:
        Configured logger instance
    """
    # Create the logs directory if it doesn't exist
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)

    logger = logging.getLogger(name)

    # Avoid adding handlers multiple times
    if logger.handlers:
        return logger

    logger.setLevel(level)

    # Console handler - INFO and above
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    console_format = logging.Formatter("%(levelname)s - %(name)s - %(message)s")
    console_handler.setFormatter(console_format)

    # File handler - everything
    log_filename = os.path.join(
        log_dir, f"{name.replace('.', '_')}_{datetime.now().strftime('%Y%m%d')}.log"
    )
    file_handler = logging.FileHandler(log_filename)
    file_handler.setLevel(logging.DEBUG)
    file_format = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"
    )
    file_handler.setFormatter(file_format)

    logger.addHandler(console_handler)
    logger.addHandler(file_handler)

    return logger
Now that you have logger_config set up, you can use it in your Python scripts like so:
from logger_config import setup_logger

logger = setup_logger(__name__)

def calculate_discount(price, discount_percent):
    logger.debug(f'Calculating discount: {price} * {discount_percent}%')

    if discount_percent < 0 or discount_percent > 100:
        logger.warning(f'Invalid discount percentage: {discount_percent}')
        discount_percent = max(0, min(100, discount_percent))

    discount = price * (discount_percent / 100)
    final_price = price - discount

    logger.info(f'Applied {discount_percent}% discount: ${price} -> ${final_price}')
    return final_price

calculate_discount(100, 20)
calculate_discount(100, 150)
This setup function handles several important things. First, it creates the logs directory if needed, preventing crashes from missing directories.
The function checks whether handlers already exist before adding new ones. Without this check, calling setup_logger multiple times would create duplicate log entries.
We generate dated log filenames automatically. This prevents log files from growing indefinitely and makes it easy to find logs from specific dates.
The file handler includes more detail than the console handler, including function names and line numbers. This is invaluable when debugging but would clutter console output.
Using __name__ as the logger name creates a hierarchy that matches your module structure. This lets you control logging for specific parts of your application independently.
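Logger names are dot-separated, so a logger named 'myapp.database' is a child of 'myapp'. The names below are hypothetical, but they show the idea: a child with no level of its own inherits the level you set on the parent:
import logging

logging.getLogger('myapp').setLevel(logging.WARNING)   # parent logger for the whole app
db_logger = logging.getLogger('myapp.database')        # child logger, no level set explicitly
db_logger.debug('connection pool stats')               # filtered out: effective level is WARNING
db_logger.warning('slow query detected')               # passes the check and propagates to 'myapp' handlers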
# Structuring Logs with Context
Plain text logs are fine for simple applications, but structured logs with context make debugging much easier. Let's add contextual information to our logs.
import json
import logging
from datetime import datetime, timezone

class ContextLogger:
    """Logger wrapper that adds contextual information to all log messages"""

    def __init__(self, name, context=None):
        self.logger = logging.getLogger(name)
        self.context = context or {}

        handler = logging.StreamHandler()
        formatter = logging.Formatter('%(message)s')
        handler.setFormatter(formatter)

        # Check if a matching handler already exists to avoid duplicate handlers
        if not any(isinstance(h, logging.StreamHandler) and h.formatter is not None and h.formatter._fmt == '%(message)s' for h in self.logger.handlers):
            self.logger.addHandler(handler)
        self.logger.setLevel(logging.DEBUG)

    def _format_message(self, message, level, extra_context=None):
        """Format the message with its context as JSON"""
        log_data = {
            'timestamp': datetime.now(timezone.utc).isoformat(),
            'level': level,
            'message': message,
            'context': {**self.context, **(extra_context or {})}
        }
        return json.dumps(log_data)

    def debug(self, message, **kwargs):
        self.logger.debug(self._format_message(message, 'DEBUG', kwargs))

    def info(self, message, **kwargs):
        self.logger.info(self._format_message(message, 'INFO', kwargs))

    def warning(self, message, **kwargs):
        self.logger.warning(self._format_message(message, 'WARNING', kwargs))

    def error(self, message, **kwargs):
        self.logger.error(self._format_message(message, 'ERROR', kwargs))
You can use the ContextLogger like so:
def process_order(order_id, user_id):
    logger = ContextLogger(__name__, context={
        'order_id': order_id,
        'user_id': user_id
    })

    logger.info('Order processing started')

    try:
        items = fetch_order_items(order_id)
        logger.info('Items fetched', item_count=len(items))

        total = calculate_total(items)
        logger.info('Total calculated', total=total)

        if total > 1000:
            logger.warning('High value order', total=total, flagged=True)

        return True
    except Exception as e:
        logger.error('Order processing failed', error=str(e))
        return False

def fetch_order_items(order_id):
    return [{'id': 1, 'price': 50}, {'id': 2, 'price': 75}]

def calculate_total(items):
    return sum(item['price'] for item in items)

process_order('ORD-12345', 'USER-789')
This ContextLogger wrapper does something useful: it automatically includes context in every log message. The order_id and user_id get added to all logs without repeating them in every logging call.
The JSON format makes these logs easy to parse and search.
The **kwargs in each logging method lets you add extra context to specific log messages. This combines global context (order_id, user_id) with local context (item_count, total) automatically.
This pattern is especially useful in web applications where you want request IDs, user IDs, or session IDs in every log message from a request.
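If you would rather stay closer to the standard library than write a wrapper class, logging.LoggerAdapter gives a similar effect: it binds fixed context to a logger and injects it into every record. A minimal sketch with made-up field names:
import logging

logging.basicConfig(format='%(levelname)s %(message)s [order_id=%(order_id)s]', level=logging.INFO)
base_logger = logging.getLogger('orders')
adapter = logging.LoggerAdapter(base_logger, {'order_id': 'ORD-12345'})
adapter.info('Order processing started')  # the bound dict is attached to the record as extra fields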
# Rotating Log Files to Prevent Disk Space Issues
Log files grow quickly in production. Without rotation, they will eventually fill your disk. Here is how to implement automatic log rotation.
import logging
from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler

def setup_rotating_logger(name):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    # Size-based rotation: rotate when the file reaches 10MB
    size_handler = RotatingFileHandler(
        'app_size_rotation.log',
        maxBytes=10 * 1024 * 1024,  # 10 MB
        backupCount=5  # Keep 5 old files
    )
    size_handler.setLevel(logging.DEBUG)

    # Time-based rotation: rotate daily at midnight
    time_handler = TimedRotatingFileHandler(
        'app_time_rotation.log',
        when='midnight',
        interval=1,
        backupCount=7  # Keep 7 days
    )
    time_handler.setLevel(logging.INFO)

    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )
    size_handler.setFormatter(formatter)
    time_handler.setFormatter(formatter)

    logger.addHandler(size_handler)
    logger.addHandler(time_handler)

    return logger

logger = setup_rotating_logger('rotating_app')
Let's now try out log file rotation:
for i in range(1000):
    logger.info(f'Processing record {i}')
    logger.debug(f'Record {i} details: completed in {i * 0.1}ms')
RotatingFileHandler manages logs based on file size. When the log file reaches 10MB (specified in bytes), it gets renamed to app_size_rotation.log.1, and a new app_size_rotation.log starts. The backupCount of 5 means you keep five old log files before the oldest gets deleted.
TimedRotatingFileHandler rotates based on time intervals. The 'midnight' parameter means it creates a new log file every day at midnight. You could also use 'H' for hourly, 'D' for daily (at any time), or 'W0' for weekly on Monday.
The interval parameter works together with the when parameter. With when='H' and interval=6, logs would rotate every 6 hours, as sketched below.
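For instance, a handler that rotates every six hours and keeps roughly a week of history might look like this (the filename and backup count are illustrative):
six_hour_handler = TimedRotatingFileHandler(
    'app_6h_rotation.log',
    when='H',         # rotate on an hourly schedule...
    interval=6,       # ...every 6 hours
    backupCount=28    # 7 days x 4 files per day
)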
These handlers are essential for production environments. Without them, your application could crash when the disk fills up with logs.
# Logging in Different Environments
Your logging needs differ between development, staging, and production. Here is how to configure logging that adapts to each environment.
import logging
import logging.handlers
import os

def configure_environment_logger(app_name):
    """Configure the logger based on the environment"""
    environment = os.getenv('APP_ENV', 'development')

    logger = logging.getLogger(app_name)

    # Clear existing handlers
    logger.handlers = []

    if environment == 'development':
        # Development: verbose console output
        logger.setLevel(logging.DEBUG)
        handler = logging.StreamHandler()
        handler.setLevel(logging.DEBUG)
        formatter = logging.Formatter(
            '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'
        )
        handler.setFormatter(formatter)
        logger.addHandler(handler)

    elif environment == 'staging':
        # Staging: detailed file logs + important console messages
        logger.setLevel(logging.DEBUG)

        file_handler = logging.FileHandler('staging.log')
        file_handler.setLevel(logging.DEBUG)
        file_formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(funcName)s - %(message)s'
        )
        file_handler.setFormatter(file_formatter)

        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.WARNING)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)

        logger.addHandler(file_handler)
        logger.addHandler(console_handler)

    elif environment == 'production':
        # Production: structured logs, errors only on the console
        logger.setLevel(logging.INFO)

        file_handler = logging.handlers.RotatingFileHandler(
            'production.log',
            maxBytes=50 * 1024 * 1024,  # 50 MB
            backupCount=10
        )
        file_handler.setLevel(logging.INFO)
        file_formatter = logging.Formatter(
            '{"timestamp": "%(asctime)s", "level": "%(levelname)s", '
            '"logger": "%(name)s", "message": "%(message)s"}'
        )
        file_handler.setFormatter(file_formatter)

        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.ERROR)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)

        logger.addHandler(file_handler)
        logger.addHandler(console_handler)

    return logger
This environment-based configuration handles each stage differently. Development shows everything on the console with detailed information, including function names and line numbers. This makes debugging fast.
Staging balances development and production. It writes detailed logs to files for investigation but only shows warnings and errors on the console to avoid noise.
Production focuses on performance and structure. It only logs INFO level and above to files, uses JSON formatting for easy parsing, and implements log rotation to manage disk space. Console output is limited to errors only.
# Set the environment variable (typically done by the deployment system)
os.environ['APP_ENV'] = 'production'

logger = configure_environment_logger('my_application')
logger.debug("This debug message won't appear in production")
logger.info('User logged in successfully')
logger.error('Failed to process payment')
The environment is determined by the APP_ENV environment variable. Your deployment system (Docker, Kubernetes, or another cloud platform) sets this variable automatically.
Notice how we clear existing handlers before configuring. This prevents duplicate handlers if the function is called multiple times during the application lifecycle.
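As an alternative to building handlers by hand, the same per-environment setup can be expressed declaratively with logging.config.dictConfig, which keeps the whole configuration in one data structure. This sketch covers only the development case and is not a drop-in replacement for the function above:
import logging.config

DEV_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {'format': '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'},
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'level': 'DEBUG', 'formatter': 'verbose'},
    },
    'loggers': {
        'my_application': {'handlers': ['console'], 'level': 'DEBUG'},
    },
}

logging.config.dictConfig(DEV_LOGGING)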
# Wrapping Up
Good logging makes the difference between quickly diagnosing issues and spending hours guessing what went wrong. Start with basic logging using appropriate severity levels, add structured context to make logs searchable, and configure rotation to prevent disk space problems.
The patterns shown here work for applications of any size. Start simple with basic logging, then add structured logging when you need better searchability, and implement environment-specific configuration when you deploy to production.
Happy logging!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.
