Restrict IPs to Kubernetes Clusters
Managed Databases do not yet support security groups. As an alternative, you can run this automation to continuously synchronize your database IP filters with SKS cluster node IPs. The automation monitors node changes (scaling, replacements) and automatically updates the ip_filter to restrict database access to your cluster nodes only.
Note: All connections to Managed Databases are always encrypted with TLS. The IP filter provides an additional layer of access control by restricting which source IPs can connect.
How it works: The automation fetches node IPs from all configured Kubernetes clusters and updates the IP filter for every configured database. To create separate cluster-to-database pairings, deploy multiple instances (e.g., in different Kubernetes namespaces).
Deployment options:
- Kubernetes deployment (covered in this guide) - Using pre-built container images
- Docker or VM/local machine - See the GitHub repository README for alternative deployment methods
Key capabilities:
- Automatically discovers node IPs from one or more SKS clusters
- Updates IP filters for multiple databases
- Detects node changes at configurable intervals (default: 10 seconds)
- Minimal IAM permissions required
- Supports multiple deployments for different cluster-to-database pairings or multiple accounts
Prerequisites
- One or more Exoscale SKS clusters
- One or more DBaaS services created
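If you want to confirm both prerequisites from your workstation first, the Exoscale CLI can list them (assuming exo is installed and configured with credentials that can read these services):
exo compute sks list   # clusters the automation can monitor
exo dbaas list         # databases whose IP filters it can manage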
Step 1: Create IAM API Credentials
You can do this step either in the Portal or via the CLI; the steps below use the CLI.
Create a policy file dbaas-filter-policy.json with least-privilege permissions:
{
  "default-service-strategy": "deny",
  "services": {
    "dbaas": {
      "type": "rules",
      "rules": [
        {
          "expression": "operation in ['list-dbaas-services', 'get-dbaas-service-pg', 'get-dbaas-service-mysql', 'get-dbaas-service-kafka', 'get-dbaas-service-opensearch', 'get-dbaas-service-valkey', 'get-dbaas-service-grafana', 'get-dbaas-settings-pg', 'get-dbaas-settings-mysql', 'get-dbaas-settings-kafka', 'get-dbaas-settings-opensearch', 'get-dbaas-settings-valkey', 'get-dbaas-settings-grafana']",
          "action": "allow"
        },
        {
          "expression": "operation in ['update-dbaas-service-pg', 'update-dbaas-service-mysql', 'update-dbaas-service-kafka', 'update-dbaas-service-opensearch', 'update-dbaas-service-valkey', 'update-dbaas-service-grafana'] && parameters.has('ip_filter') && int(parameters.size()) == 2",
          "action": "allow"
        }
      ]
    },
    "compute": {
      "type": "rules",
      "rules": [
        {
          "expression": "operation in ['list-zones', 'list-sks-clusters', 'get-sks-cluster', 'get-instance-pool', 'get-instance']",
          "action": "allow"
        }
      ]
    }
  }
}

This policy only allows:
- Compute: 5 GET/LIST operations to query SKS cluster nodes
- DBaaS: Reading database info and updating only the ip_filter property
Create the IAM role and API key:
exo iam role create dbaas-filter-role \
--description "DBaaS IP filter automation" \
--policy file://dbaas-filter-policy.json
exo iam api-key create dbaas-filter-key --role dbaas-filter-role

Save the API key and secret from the output.
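Optionally, sanity-check the new key with a read-only call before deploying anything. This is a minimal sketch assuming the exo CLI picks up the EXOSCALE_API_KEY and EXOSCALE_API_SECRET environment variables (the same check referenced in the Troubleshooting section); substitute the values printed above:
export EXOSCALE_API_KEY='EXOxxxxxxxxxxxxxxxxxxxxxxxx'            # key from the previous command
export EXOSCALE_API_SECRET='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' # matching secret
exo dbaas list   # should list your databases without an authentication error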
Step 2: Download the Deployment Manifests
You need the Kubernetes deployment manifests from the GitHub repository.
Option 1: Clone the repository
git clone https://github.com/exoscale-labs/sks-sample-manifests.git
cd sks-sample-manifests/exo-k8s-dbaas-filter

Option 2: Download just the deployment files
Download these two files:
- deployment.yaml - Contains the namespace, ConfigMap, and deployment
- kustomization.yaml - Kustomize configuration
curl -O https://raw.githubusercontent.com/exoscale-labs/sks-sample-manifests/main/exo-k8s-dbaas-filter/deployment.yaml
curl -O https://raw.githubusercontent.com/exoscale-labs/sks-sample-manifests/main/exo-k8s-dbaas-filter/kustomization.yaml

Step 3: Deploy to Kubernetes
Create the Secret
Create a namespace and secret with your API credentials:
kubectl create namespace exoscale-automation
kubectl -n exoscale-automation create secret generic exoscale-api-credentials \
--from-literal=api-key='EXOxxxxxxxxxxxxxxxxxxxxxxxx' \
--from-literal=api-secret='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

Configure the Deployment
Edit the ConfigMap in deployment.yaml to configure your clusters and databases:
data:
# SKS Clusters to monitor (format: "cluster-name:zone,cluster-name:zone")
sks-clusters: "my-cluster:ch-gva-2"
# DBaaS services to update (format: "db-name:zone:type,db-name:zone:type")
# Supported types: pg, mysql, kafka, opensearch, valkey, grafana
dbaas-services: "my-postgres-db:ch-gva-2:pg"
# Optional: Static IPs to always include (CIDR format)
static-ips: "" # Example: "192.168.1.1/32,10.0.0.0/24"
# Check interval in seconds (default: 10)
check-interval: "60"
# Log level: DEBUG, INFO, WARNING, ERROR (default: INFO)
log-level: "INFO"

Configuration examples:
Single cluster with multiple databases:
sks-clusters: "prod-cluster:ch-gva-2"
dbaas-services: "prod-postgres:ch-gva-2:pg,prod-mysql:ch-gva-2:mysql"

Multiple clusters with multiple databases:
sks-clusters: "prod-cluster:ch-gva-2,staging-cluster:de-fra-1"
dbaas-services: "prod-postgres:ch-gva-2:pg,prod-kafka:de-fra-1:kafka,staging-mysql:de-fra-1:mysql"

Deploy
Deploy using kustomize:
kubectl apply -k .

Monitor the Automation
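Before looking at the logs, confirm the pod is running (the app=exo-dbaas-filter label is the same one used by the log command below):
kubectl -n exoscale-automation get pods -l app=exo-dbaas-filter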
Check the logs to verify it’s working:
kubectl logs -n exoscale-automation -l app=exo-dbaas-filter -f

Expected output:
[2025-11-17 10:30:15 UTC] Starting DBaaS IP filter automation
[2025-11-17 10:30:15 UTC] Monitoring 1 SKS cluster(s)
[2025-11-17 10:30:15 UTC] Managing IP filters for 1 DBaaS service(s)
[2025-11-17 10:30:15 UTC] Checking for IP changes...
[2025-11-17 10:30:15 UTC] Gathering IPs from all clusters...
[2025-11-17 10:30:16 UTC] Querying cluster: my-cluster (zone: ch-gva-2)
[2025-11-17 10:30:18 UTC] Found IP: 194.182.169.155 (instance: pool-abc-12345)
[2025-11-17 10:30:19 UTC] Found IP: 89.145.162.50 (instance: pool-abc-67890)
[2025-11-17 10:30:20 UTC] IP change detected!
[2025-11-17 10:30:20 UTC] New IP list: 194.182.169.155/32, 89.145.162.50/32
[2025-11-17 10:30:20 UTC] Updating DBaaS IP filters...
[2025-11-17 10:30:21 UTC] Updating pg database: my-postgres-db (zone: ch-gva-2)
[2025-11-17 10:30:23 UTC] Update complete.

Step 4: Verify the IP Filter
Verify the IP filter was applied to your database:
exo dbaas show my-postgres-db -z ch-gva-2

The output should show your cluster node IPs in the IP filter field.
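To check only that field, you can filter the output; the exact field label may vary slightly between CLI versions, so adjust the pattern if nothing matches:
exo dbaas show my-postgres-db -z ch-gva-2 | grep -i "ip filter"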
Configuration Reference
All configuration is done via environment variables in the ConfigMap:
| Variable | Required | Description | Example |
|---|---|---|---|
| sks-clusters | Yes | SKS clusters to monitor (comma-separated) | prod:ch-gva-2,staging:de-fra-1 |
| dbaas-services | Yes | DBaaS services to update (comma-separated) | prod-pg:ch-gva-2:pg,prod-mysql:de-fra-1:mysql |
| static-ips | No | Additional static IPs to include (comma-separated) | 203.0.113.10/32,198.51.100.0/24 |
| check-interval | No | Check interval in seconds (default: 10) | 60 |
| log-level | No | Logging level (default: INFO) | DEBUG |
Supported database types: pg, mysql, kafka, opensearch, valkey, grafana
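Because these settings reach the container as environment variables, a running pod may not pick up ConfigMap edits on its own. If a change does not take effect after reapplying the manifests, restart the deployment; the selector below assumes the deployment shares the app=exo-dbaas-filter label used by its pods:
kubectl apply -k .
kubectl -n exoscale-automation rollout restart deployment -l app=exo-dbaas-filter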
Troubleshooting
Pod not starting
- Check the pod events:
kubectl -n exoscale-automation describe pod -l app=exo-dbaas-filter
- Verify the secret exists:
kubectl -n exoscale-automation get secret exoscale-api-credentials
- Check that the ConfigMap values are correct (see the command below)
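To dump the ConfigMap data for review (the ConfigMap itself is defined in deployment.yaml, so its name may vary):
kubectl -n exoscale-automation get configmaps -o yaml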
Authentication errors
- Verify API credentials are correct in the secret
- Check IAM role has all required permissions (see Step 1)
- Test credentials manually:
exo dbaas list (after setting EXOSCALE_API_KEY and EXOSCALE_API_SECRET)
Database not updating
- Verify database name, zone, and type are correct in ConfigMap
- Check logs for errors: Look for “Failed to update” messages
No IPs detected
- Verify cluster name and zone are correct in ConfigMap
- Check SKS cluster has running nodes:
exo compute sks nodepool list my-cluster -z ZONE
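You can also cross-check the public IPs the automation should discover directly against the cluster's nodes:
kubectl get nodes -o wide   # the EXTERNAL-IP column shows each node's public address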
Enable debug logging:
Edit the deployment.yaml ConfigMap:
log-level: "DEBUG"

Then reapply:
kubectl apply -k .

Multiple Deployments
Deploy multiple instances of this automation for different cluster-to-database pairings:
Example: Separate production and staging environments
The automation updates all configured databases with IPs from all configured clusters. For selective pairing (e.g., production cluster → production databases only), deploy separate instances in different namespaces:
# Production environment (prod cluster → prod databases)
kubectl create namespace exo-automation-prod
kubectl -n exo-automation-prod create secret generic exoscale-api-credentials \
--from-literal=api-key='EXO...' \
--from-literal=api-secret='...'
# Edit deployment.yaml ConfigMap: sks-clusters="prod-cluster:ch-gva-2", dbaas-services="prod-db:ch-gva-2:pg"
kubectl apply -f deployment.yaml -n exo-automation-prod
# Staging environment (staging cluster → staging databases)
kubectl create namespace exo-automation-staging
kubectl -n exo-automation-staging create secret generic exoscale-api-credentials \
--from-literal=api-key='EXO...' \
--from-literal=api-secret='...'
# Edit deployment.yaml ConfigMap: sks-clusters="staging-cluster:de-fra-1", dbaas-services="staging-db:de-fra-1:pg"
kubectl apply -f deployment.yaml -n exo-automation-staging

Cleanup
To remove the automation:
kubectl delete namespace exoscale-automation

To revoke the IAM credentials:
exo iam api-key revoke dbaas-filter-key
exo iam role delete dbaas-filter-role