
Logging in Python: A Step-by-Step Tutorial

Logging is a crucial part of any software application, helping developers monitor, debug, and maintain code more efficiently. In Python, the built-in logging module provides a powerful and flexible framework for adding log messages to your programs. Whether you are building a small script or a large-scale application, understanding how to implement logging correctly can save countless hours of troubleshooting.

This step-by-step tutorial will walk you through the basics of Python logging, from setting up your first log message to configuring advanced loggers, handlers, and formatters. We will learn how to capture useful information, direct logs to files or consoles, and control the verbosity of messages through logging levels. By the end, you'll be equipped with the knowledge to create robust logging systems that improve visibility into your application's behavior.

Table of Contents  

  • Introduction to Logging in Python
    • What is Logging?
    • Why Logging is Essential in Modern Development
    • Logging vs Print: What's the Difference?
  • The Logging Module in Python
    • Logging Levels in Python
    • Basic Syntax and How It Works
    • Anatomy of a Log: Loggers, Handlers, Formatters, Filters
  • Five Standard Logging Levels
    • DEBUG, INFO, WARNING, ERROR, CRITICAL Explained
  • Python method to log to console and file 
    • Logging to Console
    • Logging to a File
    • Changing Format and Timestamp
  • Logging Best Practices
    • Avoid the Root Logger
    • Don't Log Sensitive Information
    • Use Structured Logs for Machines and Text for Humans
    • Avoid Logging in Tight Loops
    • Include Context in Logs (function name, user ID, etc.)
  • Structured Logging with JSON
    • What is Structured Logging?
    • Output Logs in JSON Format
  • Rotating Logs and Archiving
    • FileHandler vs RotatingFileHandler
    • Setup Daily/Size-Based Rotation
    • Archiving Old Logs Automatically
  • Logging Configuration Techniques
    • basicConfig() vs dictConfig()
    • Using Configuration Files (YAML/INI)
    • Environment-Based Config (Dev vs Prod)
  • Logging in Real-World Projects
    • Logging in Flask/Django Applications
    • Logging in Microservices
    • Logging in CLI Tools and Daemons
  • Logging in DevOps and SRE
    • Centralized Logging with ELK/Loki
    • Monitoring Logs for Alerting (Promtail + Grafana)
    • Logging in CI/CD Pipelines
    • Handling Logs in Containers (Docker/K8s)
  • Contextual and Correlation Logging
    • Adding Request IDs, Trace IDs
    • Logging Per User or Session
    • Correlating Logs Across Services
  • Testing and Validating Logs
    • Unit Testing Log Output
    • Mocking Loggers in Tests
    • Validating Format and Destination
  • Common Pitfalls and How to Avoid Them
    • Logging Infinite Loops or Sensitive Data
    • Reconfiguring Loggers Multiple Times
    • Memory Leaks with File Handlers
    • Log Flooding in Production
  • Logging Tools and Libraries
    • Loguru
    • Structlog
    • Python-json-logger
    • Logging with Sentry/NewRelic/Datadog
  • Frequently Asked Questions
    • How to log in JSON format?
    • How to configure logs in production?
    • What’s the best logger for microservices?
  • Summary Table of Logging Levels and Use Cases
  • Conclusion
    • The Importance of Logging Discipline
    • What You’ve Learned
    • Next Steps (Monitoring, Alerting, etc.)

Introduction to Logging in Python

Let's start with the basics: logging is a foundational practice that enables developers to monitor, debug, and maintain their applications effectively.


What is Logging?

Logging refers to the systematic recording of events, messages, and data points generated by software during its execution. These messages, known as logs, provide insight into the internal workings of an application and can include information such as:

  • Errors and exceptions
  • Warnings and potential issues
  • Informational messages about system state or progress
  • Debugging details for developers


Key Components of Logging

  • Log Message: The actual text or data recorded.
  • Log Level: The severity or importance of the message (e.g., DEBUG, INFO, WARNING, ERROR, CRITICAL).
  • Logger: The object responsible for creating log messages.
  • Handler: Determines where the log messages go (console, file, remote server, etc.).
  • Formatter: Controls the structure and content of the log message output.


Basic Logging in Python


import logging

logging.basicConfig(level=logging.INFO)
logging.info("Application started")
logging.warning("Low disk space")
logging.error("An error occurred")


Why Logging is Essential in Modern Development

Logging is not just a debugging tool—it is a core part of application observability, operational excellence, and security. Here’s why logging is indispensable in today’s development environments:

1. Troubleshooting and Debugging

  • Logs provide a chronological record of events, making it easier to trace the root cause of bugs and failures.
  • Detailed logs allow developers to reproduce and fix issues that might only occur in production environments.

2. Monitoring and Alerting

  • Logs are the backbone of monitoring systems that track application health, performance, and usage.
  • Automated tools can parse logs for specific patterns (e.g., repeated errors) and trigger alerts for rapid incident response.

3. Audit and Compliance

  • In regulated industries, logs serve as an audit trail, showing who did what and when.
  • Proper logging supports compliance with standards like GDPR, HIPAA, and PCI DSS.

4. Security

  • Logs help detect suspicious activities, unauthorized access, and potential breaches.
  • Security teams rely on logs to investigate incidents and enforce policies.

5. Performance Analysis

  • By logging metrics and timings, developers can identify bottlenecks and optimize system performance.


Logging vs Print: What’s the Difference ?

Although both print() statements and logging output information, they serve fundamentally different purposes and should not be used interchangeably in production code.

Feature             | print()                         | logging Module
--------------------+---------------------------------+------------------------------------------------
Purpose             | Simple output to console        | Structured, configurable event recording
Control             | No built-in control over output | Fine-grained control (levels, handlers, format)
Severity Levels     | None                            | Yes (DEBUG, INFO, WARNING, ERROR, CRITICAL)
Output Destinations | Console only                    | Console, files, remote servers, email, etc.
Configurability     | Minimal                         | Highly configurable
Performance         | Can slow down apps if overused  | Can be tuned and filtered efficiently
Production Use      | Not recommended                 | Industry standard
Thread Safety       | No                              | Yes


Why Not Use Print for Logging?

  • Lack of Context: print() provides no information about time, severity, or source.
  • No Filtering: Cannot filter messages by importance or disable output in production.
  • No Flexibility: Cannot easily redirect output to files, monitoring systems, or external services.
  • Hard to Maintain: Scattered print() statements make code harder to read and maintain.

Logging vs Print Example

Using print:


print("Connecting to database...")
print("Error: Connection failed!")

Using logging:


import logging

logging.basicConfig(level=logging.INFO)
logging.info("Connecting to database...")
logging.error("Connection failed!")

With logging, you can later change the log level, redirect output, or add more context without modifying every statement.

The Logging Module in Python

Python’s built-in logging module is a powerful and flexible system for tracking events that happen during the execution of your applications. It supports everything from simple console logging to complex, multi-destination, and multi-format event recording.


Logging Levels in Python

Python's logging module uses predefined levels to indicate the severity of events. These levels help categorize messages by importance, making it easier to filter and focus on critical issues. For example, DEBUG is used for deep internal messages during development, while CRITICAL flags fatal errors that may cause system failure. You can set a logger’s level to control which messages get processed. For instance, setting it to WARNING will ignore INFO and DEBUG messages. Proper use of levels ensures cleaner logs, better visibility, and faster debugging. It's a vital practice in both development and production environments.


Level Name | Numeric Value | Description                                             | Priority
-----------+---------------+---------------------------------------------------------+---------
CRITICAL   | 50            | Very serious error; application may crash               | Highest
ERROR      | 40            | Serious problem; prevents part of program from running  |
WARNING    | 30            | Something unexpected; not an error, but worth attention |
INFO       | 20            | General information about program execution             |
DEBUG      | 10            | Detailed information for debugging                      |
NOTSET     | 0             | Default level; no specific filtering                    | Lowest
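The filtering behavior described above can be checked directly with Logger.isEnabledFor(); a minimal sketch (the logger name is arbitrary):

```python
import logging

logger = logging.getLogger("level_demo")  # arbitrary name for this sketch
logger.setLevel(logging.WARNING)          # process WARNING and above only

# isEnabledFor() reports whether a message at that level would be handled
print(logger.isEnabledFor(logging.DEBUG))    # False
print(logger.isEnabledFor(logging.INFO))     # False
print(logger.isEnabledFor(logging.WARNING))  # True
print(logger.isEnabledFor(logging.ERROR))    # True
```

Because WARNING (30) is the threshold, calls to logger.debug() and logger.info() would be discarded before any handler runs.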


Basic Syntax and How It Works

The logging module is easy to get started with, but also supports advanced configuration for complex needs.

Minimal Example


import logging

logging.basicConfig(level=logging.INFO)
logging.info("Application started")
logging.warning("Low disk space")
logging.error("An error occurred")

  • basicConfig(): Sets up the default logging configuration (level, format, output).
  • Log Level: Controls the minimum severity of messages to capture.
  • Logging Functions: Use logging.debug(), logging.info(), logging.warning(), logging.error(), logging.critical() for different severities.


Customizing Output

You can specify the log format and output file:


logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
    filename="app.log",
    filemode="w"
)

  • format: Controls the structure of each log message.
  • filename and filemode: Direct logs to a file instead of the console. Note that filemode="w" overwrites the file on every run; the default, "a", appends.


Anatomy of a Log: Loggers, Handlers, Formatters, Filters

The logging module is built around four main components, each serving a distinct role:

1. Loggers

  • Definition: The interface the code uses to create log messages.
  • Hierarchy: Loggers are organized in a hierarchy by name (e.g., myapp, myapp.db).
  • Usage: Call logging.getLogger(name) to create or retrieve a logger.


logger = logging.getLogger("myapp")
logger.info("Starting application")
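The dotted-name hierarchy can be observed directly; a short sketch:

```python
import logging

parent = logging.getLogger("myapp")
child = logging.getLogger("myapp.db")

# The dotted name makes "myapp.db" a child of "myapp" in the logger tree
print(child.parent is parent)  # True

# By default, records logged on the child also propagate to the parent's handlers
print(child.propagate)         # True
```

This propagation is why configuring handlers on a top-level application logger is usually enough for all of its submodules.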

2. Handlers

  • Definition: Determine where log messages go (console, file, email, HTTP, etc.).
  • Multiple Handlers: A logger can have multiple handlers to send the same message to different places.

Example: Adding a file handler


file_handler = logging.FileHandler("app.log")
logger.addHandler(file_handler)


Handlers determine where log messages go:

  • StreamHandler() sends logs to the console (standard output)
  • FileHandler("app.log") sends logs to a file named app.log

3. Formatters

  • Definition: Control the layout and content of log messages.
  • Customization: Define timestamp, log level, logger name, message, etc.

Example: Custom formatter


formatter = logging.Formatter('%(asctime)s [%(levelname)s] %(message)s')
file_handler.setFormatter(formatter)

4. Filters

  • Definition: Provide fine-grained control over which log records are processed.
  • Use Cases: Filter by logger name, severity, or custom logic.

Example: Custom filter


class OnlyErrorsFilter(logging.Filter):
    def filter(self, record):
        return record.levelno == logging.ERROR

file_handler.addFilter(OnlyErrorsFilter())


Putting It All Together: Custom Logging Setup in Python


import logging

# Create logger
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# Create handlers
console_handler = logging.StreamHandler()
file_handler = logging.FileHandler("app.log")

# Create formatter and add to handlers
formatter = logging.Formatter('%(asctime)s [%(levelname)s] %(name)s: %(message)s')
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)

# Add handlers to logger
logger.addHandler(console_handler)
logger.addHandler(file_handler)

# Log messages
logger.info("App started")
logger.error("An error occurred")

Five Standard Logging Levels


DEBUG

  • Use for: Internal state, variable values, and detailed flow information during development.
  • Not for: Production environments (too verbose and may expose sensitive data).

INFO

  • Use for: High-level events showing normal operation, such as successful startup, shutdown, or completion of tasks.
  • Not for: Troubleshooting or error reporting.

WARNING

  • Use for: Unexpected events that aren’t errors but may require attention (like deprecated API usage, missing optional files).
  • Not for: Routine events or critical failures.

ERROR

  • Use for: Failures that impact functionality, such as exceptions or failed operations that the program can recover from.
  • Not for: Non-critical issues or recoverable warnings.

CRITICAL

  • Use for: Catastrophic failures - data loss, security breaches, or situations where the application must shut down.
  • Not for: Recoverable errors or warnings.

Python method to log to console and file

To log messages to both the console and a file in Python, you configure the logging system with multiple handlers—one for each output destination. This approach ensures real-time visibility in the console and persistent storage in a log file, with customizable formats and timestamps.


Logging to Console

To log to the console, use a StreamHandler. By default, this outputs to stderr, but you can direct it to stdout if needed.


import logging
import sys

console_handler = logging.StreamHandler(sys.stdout)  # or omit sys.stdout for stderr
console_handler.setLevel(logging.INFO)
console_formatter = logging.Formatter('%(levelname)s: %(message)s')
console_handler.setFormatter(console_formatter)


Logging to a File

To log to a file, use a FileHandler. You can specify the log file path and the minimum log level.


file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)
file_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
file_handler.setFormatter(file_formatter)


Changing Format and Timestamp

Each handler can have its own format. Common fields include:

  • %(asctime)s – Timestamp
  • %(levelname)s – Log level (INFO, ERROR, etc.)
  • %(message)s – The log message
  • %(name)s, %(filename)s, %(lineno)d – Logger name, source file, line number


Complete Example: Logging to Both Console and File


import logging
import sys

# Create a logger
logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)

# File handler (logs everything, includes timestamp)
file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)
file_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
file_handler.setFormatter(file_formatter)

# Console handler (INFO and higher, simpler format)
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)
console_formatter = logging.Formatter('%(levelname)s: %(message)s')
console_handler.setFormatter(console_formatter)

# Add handlers to the logger
logger.addHandler(file_handler)
logger.addHandler(console_handler)

# Example log messages
logger.debug('Debug message (file only)')
logger.info('Info message (console and file)')
logger.warning('Warning message (console and file)')
logger.error('Error message (console and file)')


  • DEBUG logs appear only in the file.
  • INFO and higher appear in both file and console, with formats and timestamps as configured.

Logging Best Practices in Python


1. Avoid the Root Logger

What & Why:

The root logger (logging.debug(), logging.info(), etc. without a named logger) is shared globally. Relying on it can lead to unpredictable log output, especially in multi-module or large applications.

Best Practice:

Always create and configure loggers using logging.getLogger(__name__) at the module or component level. This enables fine-grained control, easier debugging, and avoids accidental log pollution from third-party libraries.

Example:


import logging

logger = logging.getLogger(__name__)
logger.info("Module-specific log message")

2. Don't Log Sensitive Information

What & Why:

Logging secrets, passwords, API keys, or personally identifiable information (PII) is a major security risk. Such data can end up in log files, monitoring systems, or even in public repositories, potentially exposing your organization to breaches and compliance violations.

Best Practice:

  • Carefully review log messages for accidental leaks.
  • Mask or redact sensitive fields before logging.
  • Use logging filters or custom formatters to automatically remove or obfuscate sensitive data.

Example:


logger.info("User login attempt", extra={"user_id": user_id})

# Avoid: logger.info(f"User password: {password}")
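One way to automate redaction is a logging.Filter that scrubs the final message before any handler writes it. The class, pattern, and field names below are illustrative, not a standard recipe:

```python
import logging
import re

class RedactSecretsFilter(logging.Filter):
    """Mask values of password=, token=, api_key= in the formatted message."""
    PATTERN = re.compile(r"(password|token|api_key)=\S+", re.IGNORECASE)

    def filter(self, record):
        # Replace the record's message with a redacted copy; keep the record
        record.msg = self.PATTERN.sub(r"\1=***", record.getMessage())
        record.args = None  # the message is already fully formatted
        return True

handler = logging.StreamHandler()
handler.addFilter(RedactSecretsFilter())
logger = logging.getLogger("redact_demo")  # name is arbitrary
logger.addHandler(handler)

logger.warning("login failed, password=hunter2")  # emitted with password=***
```

Because the filter mutates the record itself, attach it as early as possible (or on every handler) so no destination receives the unredacted message.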

3. Use Structured Logs for Machines and Text for Humans

What & Why:

  • Structured logs (e.g., JSON) are easily parsed by log aggregation and monitoring tools, supporting advanced search and analytics.
  • Text logs are more readable for humans during local development or debugging.

Best Practice:

  • Use structured logs (JSON, key-value pairs) in production for machine processing.
  • Use human-friendly formats (simple text, colorized output) in development.
  • Many logging libraries and formatters (including Python’s logging and third-party tools like structlog) support both modes.

Example: Structured Logging


import json

log_record = {
    "timestamp": "2025-07-07T12:00:00Z",
    "level": "INFO",
    "user_id": 123,
    "action": "login"
}
logger.info(json.dumps(log_record))

4. Avoid Logging in Tight Loops

What & Why:

Logging inside loops that execute frequently (e.g., per request, per iteration) can:

  • Flood log files, making it hard to find important events.
  • Cause significant performance degradation.
  • Overwhelm log processing infrastructure.

Best Practice:

  • Only log loop events when necessary (e.g., on error or at intervals).
  • Use conditional logging, counters, or throttling.

Example:


for i in range(1000000):
    if i % 10000 == 0:
        logger.info(f"Processed {i} records")

5. Include Context in Logs (function name, user ID, etc.)

What & Why:

Logs are far more valuable when they include context: who did what, where, and when. This helps with debugging, auditing, and monitoring.

Best Practice:

  • Use contextual fields (user ID, request ID, function name, etc.).
  • Use the extra parameter or custom formatters to inject context.
  • Include timestamps and log levels in every message for traceability.

Example:


logger.info("User authenticated", extra={"user_id": user_id, "endpoint": "/api/data"})

Or with a custom formatter:


formatter = logging.Formatter(
    '%(asctime)s | %(levelname)s | %(name)s | %(funcName)s | %(user_id)s | %(message)s'
)
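Note that a format string containing %(user_id)s only renders if every record actually carries a user_id attribute (normally supplied via extra). A LoggerAdapter can inject such context automatically; the names and values below are illustrative:

```python
import logging

# For the demo only: every record through this format must carry user_id
logging.basicConfig(format="%(levelname)s | %(user_id)s | %(message)s")

base = logging.getLogger("adapter_demo")  # logger name is arbitrary
# The adapter merges this dict into every call as `extra`
adapter = logging.LoggerAdapter(base, {"user_id": "u-123"})

adapter.warning("quota exceeded")  # rendered as: WARNING | u-123 | quota exceeded
```

This keeps call sites clean: code logs through the adapter without repeating the context on every message.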


Structured Logging with JSON in Python


What is Structured Logging?

Structured logging is the practice of recording log events as well-defined, machine-readable data rather than plain text. Instead of writing free-form strings, structured logs use key-value pairs—often in JSON format—to capture contextual information such as timestamps, log levels, user IDs, and more.

Key Benefits

  • Machine-readability: Logs are easy to parse, search, and analyze.
  • Contextual richness: Each log entry can include arbitrary metadata (e.g., request IDs, user info).
  • Compatibility: Structured logs integrate seamlessly with modern log aggregation and analysis tools.

Example:

  • Unstructured log:
    2025-07-07 12:00:00 ERROR User authentication failed for user123
  • Structured log (JSON):
    {"timestamp": "2025-07-07T12:00:00Z", "level": "ERROR", "message": "User authentication failed", "username": "user123"}

Structured logging enables powerful querying, filtering, and alerting in log management systems.

Output Logs in JSON Format

Python offers several ways to emit logs in JSON, making them suitable for structured logging.
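One dependency-free option is a custom Formatter subclass that renders each record as a JSON object; a minimal sketch (field names are a design choice, not a standard):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""
    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%SZ"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("json_demo")  # name is arbitrary
logger.addHandler(handler)

logger.warning("disk space low")
# emits one line such as:
# {"timestamp": "...", "level": "WARNING", "logger": "json_demo", "message": "disk space low"}
```

For anything beyond this (exception info, extra fields, guaranteed UTC timestamps), the packages below handle the edge cases for you.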

Using python-json-logger

One of the most popular ways to format logs as JSON is with the python-json-logger package, which extends the standard logging module:


import logging
from pythonjsonlogger import jsonlogger

logger = logging.getLogger("my_json_app")
logger.setLevel(logging.DEBUG)
logger.handlers.clear()

formatter = jsonlogger.JsonFormatter(
    fmt="%(asctime)s %(levelname)s %(name)s %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%SZ"
)
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)
logger.addHandler(handler)

logger.info("User logged in", extra={"user_id": 123, "ip": "192.168.1.1"})

Output:

{
  "asctime": "2025-07-07T12:00:00Z",
  "levelname": "INFO",
  "name": "my_json_app",
  "message": "User logged in",
  "user_id": 123,
  "ip": "192.168.1.1"
}

  • Each log entry is a JSON object with consistent fields, making it easy for downstream tools to process.


Using structlog for Advanced Structured Logging

For more advanced scenarios, the structlog library offers a flexible, high-performance approach:


import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.add_log_level,
        structlog.processors.JSONRenderer()
    ]
)

log = structlog.get_logger()
log.info("user_logged_in", user="Vinay", ip="192.168.1.1")

Output:

{
  "timestamp": "2025-07-07T12:00:00Z",
  "level": "info",
  "event": "user_logged_in",
  "user": "Vinay",
  "ip": "192.168.1.1"
}

structlog allows you to add global or dynamic context (e.g., request IDs) and customize processors for advanced use cases.
