Configuration Management in Python
In Python for DevOps, deploying applications across multiple environments, such as development, staging, and production, requires environment-specific settings like database credentials, logging levels, and debug flags. Embedding these settings directly into the source code introduces maintainability challenges and inhibits scalability. Configuration management addresses this issue by decoupling environment-specific data from application logic.
Configuration management provides a structured solution by externalizing these environment-specific settings. In Python, this means using standardized files and libraries to load, override, and validate configuration data at runtime.
This guide provides a detailed overview of:
- Supported configuration formats in Python, including JSON, YAML, and INI
- Key tools and libraries such as configparser, Pydantic, Hydra, Dynaconf, and .env file management
- Techniques for validating, overriding, and securing configuration data
Table of Contents
- Introduction to Configuration Management in Python
- What is Configuration Management?
- Why Config Management Matters in DevOps, SRE, and Automation
- Common Problems Without Proper Configuration Handling
- Understanding Configuration Files
- Reading .ini Configuration Files in Python with configparser
- Reading JSON Files in Python with json Module
- Reading YAML Files in Python with PyYAML Module
- Reading TOML Files in Python with tomllib Module
- Reading Environment Variables in Python with python-dotenv Module
- Reading Environment Variables in Python with os.environ
- How to Write Configuration Files in Python?
- Environment-Specific Configuration
- Dev, Staging, Production Config Separation
- .env Files and the python-dotenv Library
- Using Environment Variables as Overrides
- How to Handle Secrets in Python? (passwords, tokens, etc.)
- Working with Cloud Secrets Securely
- Using AWS Secrets Manager with Python
- Integrating Secrets into Config Classes
- Frequently Asked Questions (FAQs)
Introduction to Configuration Management in Python
Configuration management is the backbone that keeps complex systems running smoothly. Whether we are building a microservice, deploying a machine learning pipeline, or running a large-scale web application, the project's success depends on how well you manage its configuration.
In DevOps, Site Reliability Engineering (SRE), and automation, robust configuration management has become more important than ever. Python, with its simplicity and vast tooling, empowers teams to centralize, automate, and secure their configurations, making deployments safer and troubleshooting faster. But what happens when configuration is neglected? Teams face outages, security breaches, and the "it worked on my machine" syndrome. So let's understand all of this in detail.
What is Configuration Management?
Configuration management is the systematic process of establishing, maintaining, and controlling the configuration of software systems, infrastructure, and environments over time. In practice, it involves tracking, updating, and documenting all the parameters, such as software versions, environment variables, credentials, and network settings, that define how the system operates.
Key aspects include:
- Consistency: Ensuring all environments (dev, test, prod) are configured identically to avoid unexpected behavior
- Traceability: Documenting changes so you know who changed what, when, and why
- Recoverability: Quickly reverting to a previous, stable state if a new configuration causes issues
- Automation: Applying configuration changes across systems without manual intervention
- Security: Protecting sensitive data (like API keys and passwords) from exposure
"Configuration management is the process of ensuring that the configurations of a system's servers, applications, and other environments remain known, consistent, and trusted over time. It establishes and maintains the consistency of a system's performance and its physical and functional attributes."
(Source: AWS)
Why Config Management Matters in DevOps, SRE, and Automation
In the era of DevOps and SRE, configuration management is a critical pillar for operational excellence. Here's why:
1. Enables Reliable Automation
Automation is at the heart of DevOps and SRE practices. Automated deployments, CI/CD pipelines, and infrastructure as code all rely on configuration data to define how systems should behave. Without centralized configuration, automation scripts become brittle and error-prone.
- Example: A CI/CD pipeline that pulls environment-specific settings from a configuration file can deploy the same codebase to staging and production with confidence.
2. Reduces Human Error and Configuration Drift
Manual configuration is error-prone. Even small inconsistencies between environments can lead to outages, security vulnerabilities, or failed deployments. Configuration driftâwhere environments gradually diverge due to undocumented changesâis a common source of bugs and downtime.
- DevOps Impact: Automated, version-controlled configuration ensures that every environment is predictable and reproducible.
3. Improves Security and Compliance
Misconfiguration is a leading cause of data breaches and service outages. By centralizing and tracking configuration, organizations can:
- Audit changes and enforce compliance.
- Protect sensitive information using environment variables or secret management tools
- Quickly identify and remediate risky settings.
4. Accelerates Recovery and Debugging
When incidents occur, configuration management allows teams to quickly revert to a known-good state or recreate environments for troubleshooting. This minimizes downtime and speeds up root cause analysis.
5. Supports Scalability and Collaboration
As organizations grow, so does the complexity of their systems. Configuration management tools scale with your infrastructure, enabling seamless onboarding of new servers, services, or team members.
- SRE Perspective: Consistent configuration is essential for service reliability, rapid incident response, and efficient on-call rotations.
Common Problems Without Proper Configuration Handling
Neglecting configuration management leads to a host of operational and business risks. Here are the most frequent issues:
1. Configuration Drift
- Definition: Over time, manual tweaks and undocumented changes cause environments to diverge from their intended state.
- Impact: Bugs that only appear in production, inconsistent behavior, and failed deployments.
2. Increased Downtime and Outages
- Root Cause: Misconfigured systems are more likely to fail, and recovery is slower without a clear baseline.
- Example: A missing environment variable causes a critical service to crash during a deployment.
3. Security Breaches
- Risk: Hardcoded credentials or public exposure of sensitive configuration can lead to data leaks and unauthorized access.
- Example: Accidentally committing an API key to a public repository.
4. Difficult Debugging and Poor Auditability
- Problem: Without a record of configuration changes, it's hard to trace the source of issues or roll back to a safe state.
- Consequence: Longer incident resolution times and increased operational costs.
5. Inefficient Change Management
- Challenge: Making configuration changes manually across many systems is slow and error-prone.
- Result: Delayed releases and increased risk of accidental misconfiguration.
Understanding Configuration Files
Choosing the right configuration format is important. The most widely used formats are INI, JSON, YAML, TOML, and .env files.
Reading .ini Configuration Files in Python with configparser
Overview:
INI files are among the oldest and simplest configuration formats, widely used for basic settings. They are structured as sections, each containing key-value pairs. Python's built-in configparser module makes INI files easy to read and write.
Example: config.ini
[server]
host = 127.0.0.1
port = 8080

[database]
user = admin
password = secret
Reading the INI file in Python:
from configparser import ConfigParser

config = ConfigParser()
config.read('config.ini')

host = config['server']['host']
port = config['server'].getint('port')
Pros:
- Human-readable and easy to edit.
- Supported natively in Python.
- Good for simple, flat configurations.
Cons:
- Limited to two-level hierarchy (section/key).
- All values are strings; no native support for lists or nested structures.
- Not ideal for complex or deeply nested configs.
Use Case:
Best for small projects, legacy systems, or when you need a quick, dependency-free solution.
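Beyond the basic lookup, configparser also offers typed getters and per-call fallbacks, which matter because every raw INI value is a string. A small sketch (the section, keys, and values are illustrative):

```python
from configparser import ConfigParser

config = ConfigParser()
# read_string lets us load INI content without a file on disk
config.read_string("""
[server]
host = 127.0.0.1
port = 8080
debug = true
""")

# Typed getters convert the raw strings for us
port = config.getint('server', 'port')        # int, not '8080'
debug = config.getboolean('server', 'debug')  # True

# fallback= supplies a default when the key is missing
timeout = config.getint('server', 'timeout', fallback=30)
```

getfloat() is also available, and fallback= works with all of the getters, so missing optional keys never raise an exception.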
Reading JSON Files in Python with json Module
Overview:
JSON (JavaScript Object Notation) is a lightweight, language-agnostic format that supports nested structures. It's widely used for data interchange and configuration, and is supported by Python's built-in json module.
Example: config.json
{
  "server": {
    "host": "127.0.0.1",
    "port": 8080
  },
  "database": {
    "user": "admin",
    "password": "secret"
  }
}
Reading the JSON file in Python:
import json

with open('config.json') as f:
    config = json.load(f)

host = config['server']['host']
port = config['server']['port']
Pros:
- Supports complex, nested data structures.
- Universally supported and easy to parse.
- Good for configs that need to be shared across languages.
Cons:
- No comments allowed (can hinder documentation).
- Less human-friendly for large or deeply nested files.
- All keys and string values must be in double quotes.
Use Case:
Ideal for web APIs, cross-language projects, or when configs are generated programmatically.
Reading YAML Files in Python with PyYAML Module
Overview:
YAML (YAML Ain't Markup Language) is designed for human readability and supports complex, nested data structures. It is widely used in DevOps and cloud-native environments. Python's PyYAML library is commonly used to handle YAML files.
Example: config.yaml
server:
  host: 127.0.0.1
  port: 8080
database:
  user: admin
  password: secret
Reading the YAML file in Python:
import yaml

with open('config.yaml') as f:
    config = yaml.safe_load(f)

host = config['server']['host']
port = config['server']['port']
Pros:
- Highly readable and writable by humans.
- Supports comments, lists, and dictionaries.
- Flexible for complex and nested configurations.
Cons:
- Indentation-sensitive; formatting errors can cause subtle bugs.
- Multiple ways to represent the same data, which can lead to inconsistencies.
- Requires an external library (PyYAML).
Use Case:
Preferred for DevOps, cloud-native apps, and complex configuration scenarios where readability matters.
Reading TOML Files in Python with tomllib Module
Overview:
TOML (Tom's Obvious, Minimal Language) is a modern configuration format designed for clarity and minimalism. It is the standard for Python's pyproject.toml and is gaining popularity for application configs.
Example: config.toml
[server]
host = "127.0.0.1"
port = 8080

[database]
user = "admin"
password = "secret"
Reading the TOML file in Python:
import tomllib  # built into Python 3.11+; on older versions, use the tomli package

with open('config.toml', 'rb') as f:  # tomllib requires binary mode
    config = tomllib.load(f)

host = config['server']['host']
port = config['server']['port']
Pros:
- Clear syntax, easy to read and write.
- Supports nested tables and arrays.
Cons:
- Less common outside the Python ecosystem.
- tomllib (built in since Python 3.11) is read-only; older versions need the external tomli package, and writing TOML requires a separate library such as tomli-w.
Use Case:
Great for modern Python projects, especially when using tools that already rely on TOML.
Reading Environment Variables in Python with python-dotenv Module
Overview:
Managing sensitive configuration data like API keys, database credentials, and environment-specific settings is a critical part of any Python project. Instead of hardcoding these values, developers often use .env files to store them securely and load them into the application at runtime. In Python, the python-dotenv library makes this process simple and reliable.
Load a .env file into your Python project using load_dotenv(), and access values with os.getenv().
Example (.env file):
SERVER_HOST=127.0.0.1
SERVER_PORT=8080
DATABASE_USER=admin
DATABASE_PASSWORD=secret
Reading the .env file in Python:
from dotenv import load_dotenv
import os

load_dotenv()  # Loads variables from .env into the environment

host = os.getenv('SERVER_HOST', 'localhost')
port = int(os.getenv('SERVER_PORT', 8080))
Explanation:
The line host = os.getenv('SERVER_HOST', 'localhost') attempts to retrieve the value of the environment variable SERVER_HOST. If this variable is not set, it defaults to the string 'localhost'. This means the application will connect to the local machine by default unless another host is specified externally.
Similarly, port = int(os.getenv('SERVER_PORT', 8080)) fetches the SERVER_PORT environment variable as a string, then converts it to an integer. If SERVER_PORT is not defined, it uses 8080 as the default port number. This ensures the application uses a valid port number even when the environment variable is missing.
Pros:
- Keeps secrets and environment-specific settings out of code and version control.
- Easy to override per deployment environment.
- Supported by many deployment tools and platforms.
Cons:
- Flat structure; no native support for nested data.
- Requires careful management to avoid leaking secrets or misconfigurations.
Use Case:
Essential for cloud deployments, containerized apps, and any scenario where security and portability are priorities.
Reading Environment Variables in Python with os.environ
Overview:
Environment variables are key-value pairs maintained by the operating system and are commonly used to store configuration data outside of your source code. In Python, the built-in os module provides access to these variables through os.environ, making it easy to retrieve secrets, API keys, and deployment-specific settings directly from the environment.
Example:
import os

db_host = os.environ['DB_HOST']
db_port = os.environ.get('DB_PORT', '5432')
Reading env with Python:
Python's os.environ is a mapping object (like a dictionary) that gives you direct access to all environment variables available to your program. You can read variables using:
- os.environ['VAR_NAME']: raises KeyError if the variable is not found
- os.environ.get('VAR_NAME', default): returns default if the variable is missing
Explanation:
- os.environ['DB_HOST'] retrieves the value of DB_HOST. If it's not set, it raises a KeyError.
- os.environ.get('DB_PORT', '5432') safely returns '5432' if DB_PORT isn't defined.
- All values are returned as strings and may need type conversion (e.g., int(), bool()).
Pros:
- Built-in: No extra libraries needed.
- Runtime flexible: Can change values per environment (dev, staging, prod).
Cons:
- Requires setup: environment variables must be defined in the OS or shell.
- No default file loading: can't read .env files directly without python-dotenv.
- Type conversion: manual casting is required for non-string values.
Use Case:
- Ideal for storing secrets, config paths, and flags without hardcoding.
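Because os.environ only ever stores strings, it can help to centralize the casting logic in small helpers. A minimal sketch under that assumption (the variable names MAX_RETRIES, DEBUG, and APP_TIMEOUT_DEMO are illustrative):

```python
import os

def env_int(name, default):
    """Read an environment variable as an int, falling back to default."""
    value = os.environ.get(name)
    return int(value) if value is not None else default

def env_bool(name, default=False):
    """Interpret common truthy strings ('1', 'true', 'yes', 'on') as True."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in ('1', 'true', 'yes', 'on')

# Simulate variables that would normally be set by the shell
os.environ['MAX_RETRIES'] = '5'
os.environ['DEBUG'] = 'true'

print(env_int('MAX_RETRIES', 3))      # 5
print(env_bool('DEBUG'))              # True
print(env_int('APP_TIMEOUT_DEMO', 30))  # 30 (not set, default used)
```

Note that bool('false') is True in Python, which is exactly why a helper like env_bool is worth writing instead of casting directly.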
So each configuration format serves a different need.
- INI is excellent for simple, flat configs and legacy compatibility.
- JSON is ideal for structured, programmatic, and cross-platform configs.
- YAML shines in readability and complex, nested scenarios.
- TOML offers a modern, clear syntax, especially for Python projects.
- .env files and environment variables are indispensable for secure, environment-specific configuration.
How to Write Configuration Files in Python: INI, JSON, YAML, and TOML Explained
Here is how the same settings look in each configuration format used in automation and development.
# ini config file
[database]
host = localhost
port = 5432
# json config file
{
  "database": {
    "host": "localhost",
    "port": 5432
  }
}
# yaml config file
database:
  host: localhost
  port: 5432
# toml config file
[database]
host = "localhost"
port = 5432
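The file contents above can also be produced from Python itself. A sketch writing the INI and JSON variants with the standard library (the file names config.ini and config.json are illustrative):

```python
import configparser
import json

# Write the INI variant with configparser
ini = configparser.ConfigParser()
ini['database'] = {'host': 'localhost', 'port': '5432'}  # configparser stores strings
with open('config.ini', 'w') as f:
    ini.write(f)

# Write the JSON variant with json.dump
with open('config.json', 'w') as f:
    json.dump({'database': {'host': 'localhost', 'port': 5432}}, f, indent=2)

# Read both back to confirm the round trip
check = configparser.ConfigParser()
check.read('config.ini')
with open('config.json') as f:
    loaded = json.load(f)
print(check['database']['port'], loaded['database']['port'])  # '5432' vs 5432
```

Notice the asymmetry: the INI round trip yields the string '5432', while JSON preserves the integer, which is one reason typed formats are preferred for complex configs.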
Managing Configuration for Dev, Staging, and Production Environments in Python
Python applications rarely run in a single static environment. Instead, they move between development, staging, and production, each requiring distinct settings for security, performance, and integration. Environment-specific configuration is essential for maintaining reliability, security, and agility across these stages.
Dev, Staging, Production Config Separation
Why separate configurations?
- Development: Prioritizes debugging, uses local resources, and may enable verbose logging.
- Staging: Mirrors production as closely as possible for final testing, but may use test credentials or endpoints.
- Production: Focuses on security, performance, and stability, using real credentials and endpoints.
How to use Environment-Specific Configuration Files?
- Multiple config files: Use naming conventions like config.dev.ini, config.staging.ini, and config.prod.ini, or directories such as /configs/dev/ and /configs/prod/.
- Dynamic loading: Determine the environment at runtime (e.g., via an APP_ENV variable or platform detection) and load the corresponding file.
Example: OS-based config loading
import configparser
import platform

os_version = platform.system()

if os_version == 'Windows':
    config_file = 'config_windows.ini'
elif os_version == 'Darwin':
    config_file = 'config_mac.ini'
elif os_version == 'Linux':
    config_file = 'config_linux.ini'
else:
    config_file = 'config.ini'  # Default

config = configparser.ConfigParser()
config.read(config_file)
This approach ensures each environment gets the right settings, minimizing risk of accidental misconfiguration.
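The same runtime-selection idea works with the APP_ENV variable mentioned above instead of OS detection. A sketch, which creates two illustrative files (config.dev.ini, config.prod.ini) so it runs standalone:

```python
import configparser
import os

# For the sketch, create two environment-specific files up front
for env, level in [('dev', 'DEBUG'), ('prod', 'WARNING')]:
    cp = configparser.ConfigParser()
    cp['app'] = {'log_level': level}
    with open(f'config.{env}.ini', 'w') as f:
        cp.write(f)

# Normally APP_ENV is set by the shell or deployment platform
os.environ['APP_ENV'] = 'prod'

app_env = os.environ.get('APP_ENV', 'dev')  # default to dev when unset
config = configparser.ConfigParser()
config.read(f'config.{app_env}.ini')
print(config['app']['log_level'])  # WARNING
```

Defaulting to dev when APP_ENV is unset is a common convention: a developer's machine works out of the box, while production is always configured explicitly.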
.env Files and the python-dotenv Library
.env files are a popular way to manage environment variables for local development. They are simple text files containing key-value pairs, typically excluded from version control to protect secrets.
Sample .env file:
APP_ENV=development
DATABASE_URL=postgres://user:pass@localhost/db
SECRET_KEY=dev_secret
Loading .env files in Python:
from dotenv import load_dotenv
import os

load_dotenv()  # Loads variables from .env into the environment

db_url = os.getenv('DATABASE_URL')
secret = os.getenv('SECRET_KEY')
- Best practice: Never commit .env files with real secrets to your repository; use .gitignore to exclude them.
- Consistency: Ensures all developers and CI systems use the same environment variables during development.
How to use environment variables as overrides:
- Set at deployment: In Docker, Kubernetes, or your hosting platform, set environment variables for secrets and environment-specific settings.
- Override precedence: In your config manager, always check environment variables first, then fall back to config files.
Example: Dockerfile environment variables
FROM python:3.9-slim

ENV APP_ENV=production
ENV SECRET_KEY=super_secret_value
ENV DATABASE_URL=postgres://user:password@dbserver/dbname
- This keeps sensitive data out of your codebase and supports production-specific configuration.
Python pattern for overrides:
import os
import json

def get_config_value(key, default=None):
    # Environment variable takes precedence
    env_value = os.getenv(key.upper())
    if env_value is not None:
        return env_value
    # Fall back to the config file
    with open('config.json') as f:
        config = json.load(f)
    return config.get(key, default)
Best Practices:
- Avoid hardcoding sensitive information in code or config files; use environment variables for secrets
- Use descriptive names for variables (e.g., DATABASE_URL, SECRET_KEY)
- Provide default values to make your app resilient to missing variables
- Document and version your configuration files, but never commit real secrets
How to Handle Secrets in Python?
Handling sensitive information like API keys, database passwords, and access tokens is a critical aspect of writing secure Python applications. Hardcoding secrets into source files exposes them to version control, accidental leaks, or unauthorized access, especially in shared or cloud environments.
Connecting to a PostgreSQL Database Using Secrets
Imagine you're building a Flask API that connects to a PostgreSQL database. You need to protect your database credentials (DB_USER, DB_PASSWORD, etc.) without placing them in your code.
Step 1: Create a .env File (Never Commit This)
DB_HOST=localhost
DB_PORT=5432
DB_NAME=inventory
DB_USER=admin
DB_PASSWORD=supersecret123
Step 2: Load Secrets Using python-dotenv
from dotenv import load_dotenv
import os
import psycopg2
from psycopg2.extras import RealDictCursor

# Load secrets from the appropriate .env file - defaults to .env in the current dir
load_dotenv()

# Retrieve secret values from environment variables
db_config = {
    'host': os.getenv('DB_HOST'),
    'port': int(os.getenv('DB_PORT', 5432)),
    'dbname': os.getenv('DB_NAME'),
    'user': os.getenv('DB_USER'),
    'password': os.getenv('DB_PASSWORD'),
}

conn = None
try:
    conn = psycopg2.connect(**db_config, cursor_factory=RealDictCursor)
    with conn.cursor() as cursor:
        cursor.execute("SELECT id, name, email FROM users WHERE is_active = TRUE;")
        active_users = cursor.fetchall()
    print("Connected to database and fetched active users:")
    for user in active_users:
        print(f" - {user['name']} ({user['email']})")
except psycopg2.Error as e:
    print("Database connection failed!")
    print(f"Error: {e}")
finally:
    if conn is not None:  # guard: connect() may have failed before conn was assigned
        conn.close()
The **db_config syntax unpacks this dictionary, passing each key-value pair as named arguments to the connect() function.
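Dictionary unpacking with ** is plain Python, not psycopg2-specific. A minimal illustration with a stand-in connect function (the function and its return value are purely illustrative):

```python
def connect(host, port, dbname, user, password):
    """Stand-in for psycopg2.connect, just echoing what it received."""
    return f"{user}@{host}:{port}/{dbname}"

db_config = {
    'host': 'localhost',
    'port': 5432,
    'dbname': 'inventory',
    'user': 'admin',
    'password': 'supersecret123',
}

# **db_config expands to connect(host='localhost', port=5432, ...)
print(connect(**db_config))  # admin@localhost:5432/inventory
```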
Managing AWS Secrets Securely in Python: From .env Files to AWS Secrets Manager
Managing secrets securely is a top priority for any production-grade Python application. Relying on environment variables or .env files is suitable for development, but in real-world deployments, especially in cloud or regulated environments, a dedicated secret management service like AWS Secrets Manager is the better fit.
Why Use AWS Secrets Manager?
- Centralized secret storage: Manage all secrets in one place.
- Access control: Fine-grained permissions for users and services.
- Auditability: Track secret usage and changes.
- Automatic rotation: Rotate secrets without code changes or redeploys.
- Reduced risk: Secrets are never hardcoded or stored in plaintext on disk.
Using AWS Secrets Manager with Python
Prerequisites:
- AWS account and IAM permissions (secretsmanager:GetSecretValue)
- Python environment with boto3 installed
pip install boto3
Retrieve a Secret from AWS Secrets Manager
Suppose you have a secret named prod/db-credentials containing a JSON object:
{
  "username": "prod_user",
  "password": "supersecret"
}
Fetching a Secrets Manager secret value using Python
import boto3
from botocore.exceptions import ClientError
import json

def get_secret(secret_name, region_name="us-east-1"):
    session = boto3.session.Session()
    client = session.client(service_name='secretsmanager', region_name=region_name)
    try:
        response = client.get_secret_value(SecretId=secret_name)
    except ClientError as e:
        # Handle errors (e.g., secret not found, permission denied)
        raise e
    else:
        if 'SecretString' in response:
            return json.loads(response['SecretString'])
        else:
            # For binary secrets
            return response['SecretBinary']

# Usage
db_secrets = get_secret("prod/db-credentials")
print("DB User:", db_secrets["username"])
print("DB Password:", db_secrets["password"])
Best practice: Never print secrets in production logs; this is for demonstration only.
Production tip: For high-throughput apps, use client-side caching to reduce API calls and latency.
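One way to add that caching is functools.lru_cache around the fetch function (AWS also publishes a dedicated aws-secretsmanager-caching library). A stdlib-only sketch, where fetch_from_aws is a stub standing in for the boto3 call above:

```python
import functools
import json

CALL_COUNT = {'n': 0}  # track how often the "API" is actually hit

def fetch_from_aws(secret_name):
    """Stub standing in for client.get_secret_value from the example above."""
    CALL_COUNT['n'] += 1
    return json.dumps({'username': 'prod_user', 'password': 'supersecret'})

@functools.lru_cache(maxsize=32)
def get_secret_cached(secret_name):
    # Only the first call per secret name reaches the (stubbed) API
    return json.loads(fetch_from_aws(secret_name))

get_secret_cached('prod/db-credentials')
get_secret_cached('prod/db-credentials')  # served from cache
print(CALL_COUNT['n'])  # 1
```

One caveat of this simple approach: lru_cache never expires entries, so rotated secrets are not picked up until the process restarts; a production cache should add a TTL.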
Integrating Secrets into Config Classes
For maintainable, scalable applications, integrate secrets directly into your configuration classes. This lets your app load secrets transparently, regardless of the source. The example below reuses the get_secret function and imports from the previous section.
Dynamic Config Class with AWS Secrets Manager
class AppConfig:
    def __init__(self, env):
        self.env = env
        self.db_secrets = None
        self.load_secrets()

    def load_secrets(self):
        if self.env == "production":
            # Fetch from AWS Secrets Manager
            self.db_secrets = get_secret("prod/db-credentials")
        elif self.env == "staging":
            self.db_secrets = get_secret("staging/db-credentials")
        else:
            # Local dev: fall back to environment variables or a .env file
            import os
            self.db_secrets = {
                "username": os.getenv("DB_USER", "dev_user"),
                "password": os.getenv("DB_PASS", "dev_pass")
            }

    @property
    def db_username(self):
        return self.db_secrets["username"]

    @property
    def db_password(self):
        return self.db_secrets["password"]

# Usage
config = AppConfig(env="production")
print(config.db_username)
- Pattern: The config class abstracts secret retrieval, so the rest of your codebase accesses secrets in a unified way.
- Flexibility: Easily swap secret sources based on environment.
Frequently Asked Questions on Configuration Management in Python
1. What is the best way to manage configuration files in Python projects?
The best way to manage configuration files in Python depends on your project's complexity and environment. Common formats include YAML, JSON, TOML, and INI. For structured configuration with validation, libraries like Pydantic, Dynaconf, and Hydra are highly recommended. These tools support environment-based settings, type checking, and multi-source loading, making them ideal for scalable Python applications.
2. How do I manage environment-specific settings (dev, staging, production) in Python?
Use environment-based config loading by structuring your configs like config.dev.yaml, config.prod.yaml, etc., or using libraries like Dynaconf or Hydra. You can also load variables from .env files or environment variables using os.environ or python-dotenv.
In production, avoid hardcoded secrets; use tools like AWS Secrets Manager or HashiCorp Vault.
3. How do I securely manage secrets (API keys, DB passwords) in Python applications?
To securely manage secrets in Python:
- Use AWS Secrets Manager, Azure Key Vault, or Vault by HashiCorp for encrypted secret storage.
- Avoid committing secrets to version control.
- Use .env files only for development and keep them excluded via .gitignore.
- Access secrets at runtime via os.environ.
4. How do I validate configuration in Python?
Use Pydantic or Cerberus to define and validate schema-based configuration in Python. These libraries allow you to enforce types, required fields, and default values. For example:
from pydantic import BaseSettings  # in Pydantic v2, import from the pydantic-settings package instead

class Config(BaseSettings):
    db_user: str
    db_pass: str

config = Config()
This ensures your configuration is safe and structured, reducing runtime errors.
5. Which config format is best for Python apps?
The best configuration format for Python depends on readability and tool support:
- YAML: Human-friendly, widely used (especially with PyYAML, Hydra, Ansible)
- TOML: Used by pyproject.toml (great for tool configurations)
- JSON: Good for APIs, but harder to read/edit manually
- INI: Lightweight and native via configparser, but lacks nested structure
For modern apps, YAML or TOML is often preferred.
6. What are the pros and cons of using .env files with python-dotenv?
Pros:
- Easy to use
- Great for local development
- Compatible with os.environ
Cons:
- Not encrypted
- Should never be used in production
- Can be mistakenly committed to Git
Use .env with python-dotenv for local settings only, and manage production secrets separately.
7. What's the difference between configparser, pydantic, and dynaconf?
- configparser: Built-in, supports INI files, no type safety.
- pydantic: Type-safe and schema-based, great for structured validation.
- dynaconf: Multi-layer config loader supporting YAML, TOML, ENV, etc.
Use pydantic for validation and dynaconf for flexibility in complex apps.
8. How can I dynamically reload config without restarting the Python app?
You can implement dynamic config reloading using:
- File watchers (watchdog or watchfiles)
- Custom signal handlers (e.g., SIGHUP)
- Framework support (e.g., Flask with reloader=True)
- Polling mechanisms
Useful in long-running apps or microservices needing runtime updates.
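The polling approach can be sketched with the standard library alone, reloading a JSON config whenever its modification time changes (watchdog offers event-driven watching instead; the file name app.json is illustrative):

```python
import json
import os

class ReloadingConfig:
    """Reload a JSON config file whenever its mtime changes."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._data = {}

    def get(self, key, default=None):
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:  # file changed since the last read
            with open(self.path) as f:
                self._data = json.load(f)
            self._mtime = mtime
        return self._data.get(key, default)

# Demo: write, read, rewrite, read again
with open('app.json', 'w') as f:
    json.dump({'log_level': 'INFO'}, f)
cfg = ReloadingConfig('app.json')
first = cfg.get('log_level')   # 'INFO'

with open('app.json', 'w') as f:
    json.dump({'log_level': 'DEBUG'}, f)
os.utime('app.json', (1_000_000_000, 1_000_000_000))  # force a distinct mtime for the demo
second = cfg.get('log_level')  # 'DEBUG'
print(first, second)
```

In a real service, get() would typically only poll every few seconds rather than on every access, to keep stat calls off the hot path.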
9. How do I override configuration values via CLI arguments or environment variables?
You can override config in Python by:
- Using argparse or click for CLI args
- Reading os.environ for environment variables
- Combining them with default configs using libraries like dynaconf or omegaconf
Example:
import argparse, os
parser = argparse.ArgumentParser()
parser.add_argument("--port", default=os.getenv("PORT", 8000))
args = parser.parse_args()
10. How can I test Python code that depends on configuration values?
Use test-specific config files or mock the configuration using libraries like unittest.mock or pytest fixtures. You can also temporarily override os.environ or use monkeypatch.
def test_with_mock_config(monkeypatch):
    monkeypatch.setenv("DB_USER", "test_user")
11. How do large Python applications organize and structure configuration files?
Best practices include:
- Grouping configs into folders like config/dev.yaml, config/prod.yaml
- Using one file per concern (e.g., db.yaml, logging.yaml)
- Keeping secrets in separate encrypted sources
- Using environment variables for runtime overrides
This modular approach makes configs scalable and maintainable.
12. How do I load configuration from multiple sources (files, env vars, CLI args, secrets)?
Use layered config libraries like Dynaconf, Omegaconf, or Hydra, which support:
- Base config files
- Override by environment variables
- Final override by CLI
This layered approach enables flexible deployments and overrides.
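For small apps the layering can also be hand-rolled. A sketch merging a base file, prefixed environment variables, and CLI-style overrides, where later layers win (the file name base.json, the APP_ prefix, and all keys are illustrative):

```python
import json
import os

def layered_config(base_file, env_prefix='APP_', cli_overrides=None):
    """Merge config layers: file < environment variables < CLI overrides."""
    with open(base_file) as f:
        config = json.load(f)
    # Environment variables like APP_PORT override the file's 'port'
    for key, value in os.environ.items():
        if key.startswith(env_prefix):
            config[key[len(env_prefix):].lower()] = value
    # CLI arguments override everything
    config.update(cli_overrides or {})
    return config

# Demo: file says port 8000, the environment bumps it, the CLI wins on host
with open('base.json', 'w') as f:
    json.dump({'host': 'localhost', 'port': '8000'}, f)
os.environ['APP_PORT'] = '9000'
merged = layered_config('base.json', cli_overrides={'host': '0.0.0.0'})
print(merged['host'], merged['port'])  # 0.0.0.0 9000
```

This is essentially what Dynaconf and OmegaConf do, with many refinements on top (nested keys, type casting, multiple files).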
13. What's the best practice for handling configurations in Dockerized or Kubernetes Python apps?
In containerized environments:
- Use environment variables or mounted volume configs
- Store secrets securely in Kubernetes Secrets
- Use ConfigMaps for non-sensitive values
- Avoid using .env in containers
Combine with os.environ or tools like python-decouple, dynaconf.