Managed PostgreSQL specifics
Specific options when creating or updating a PostgreSQL service
You can list all the PostgreSQL-specific options by using the CLI help:
exo dbaas create --help-pg
--pg-admin-password custom password for admin user
--pg-admin-username custom username for admin user
--pg-backup-schedule automated backup schedule (format: HH:MM)
--pg-bouncer-settings PgBouncer configuration settings (JSON format)
--pg-fork-from name of a Database Service to fork from
--pg-ip-filter allow incoming connections from CIDR address block
--pg-lookout-settings pglookout configuration settings (JSON format)
--pg-recovery-backup-time the timestamp of the backup to restore when forking from a Database Service
--pg-settings PostgreSQL configuration settings (JSON format)
--pg-version PostgreSQL major version
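As an illustration, a new service combining several of these flags could look like the following sketch (the service name my-pg is a placeholder, and the plan, zone, version, and filter values should be adapted to your setup):
exo dbaas create pg startup-4 my-pg -z de-fra-1 --pg-version 14 --pg-backup-schedule 02:00 --pg-ip-filter 203.0.113.10/32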
Restricting connections from the internet
By default, Exoscale DBaaS services are not accessible from the internet. Data is never transmitted in clear over the network: connections are SSL-encrypted and authenticated by default.
To allow incoming connections to your database service, you can add a filter allowing:
- just one IP address,
- a network range,
- or a combination of IP addresses and network ranges.
To do this, update your service or create it with the IP filter, which is a comma-separated list of CIDRs:
exo dbaas update -z de-fra-1 test-pg --pg-ip-filter=a.b.c.d/32,w.x.y.z/24
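For instance, to allow a single host plus a /24 network (the addresses below are illustrative documentation ranges):
exo dbaas update -z de-fra-1 test-pg --pg-ip-filter=203.0.113.10/32,198.51.100.0/24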
Managing JSON configuration settings
Here we will show an example with pg-settings, but the same process is valid for pg-bouncer-settings or pg-lookout-settings by using --settings pgbouncer or --settings pglookout respectively.
You can list all settings available for the database service by using the command:
exo dbaas type show pg --settings pg
┼─────────────────────────────────────┼──────────┼─────────────────────────────────────────────────────────┼
│ KEY │ TYPE │ DESCRIPTION │
┼─────────────────────────────────────┼──────────┼─────────────────────────────────────────────────────────┼
│ max_wal_senders │ integer │ PostgreSQL maximum WAL senders │
│ │ │ * Minimum: 8 / Maximum: 64 │
│ autovacuum_analyze_scale_factor │ number │ Specifies a fraction of the table size to add to │
│ │ │ autovacuum_analyze_threshold when deciding whether │
│ │ │ to trigger an ANALYZE. The default is 0.2 (20% of │
│ │ │ table size) │
│ │ │ * Minimum: 0 / Maximum: 1 │
│ temp_file_limit │ integer │ PostgreSQL temporary file limit in KiB, -1 for │
│ │ │ unlimited │
│ │ │ * Minimum: -1 / Maximum: 2.147483647e+09 │
│ │ │ * Example: 5e+06 │
│ bgwriter_lru_maxpages │ integer │ In each round, no more than this many buffers will │
│ │ │ be written by the background writer. Setting this │
│ │ │ to zero disables background writing. Default is │
│ │ │ 100. │
│ │ │ * Minimum: 0 / Maximum: 1.073741823e+09 │
│ │ │ * Example: 100 │
│ jit │ boolean │ Controls system-wide use of Just-in-Time │
│ │ │ Compilation (JIT). │
│ │ │ * Example: true │
│ track_io_timing │ string │ Enables timing of database I/O calls. This │
│ │ │ parameter is off by default, because it will │
│ │ │ repeatedly query the operating system for the │
│ │ │ current time, which may cause significant overhead │
│ │ │ on some platforms. │
│ │ │ * Supported values: │
│ │ │ - off │
│ │ │ - on │
│ │ │ * Example: off │
│ autovacuum_vacuum_threshold │ integer │ Specifies the minimum number of updated or deleted │
│ │ │ tuples needed to trigger a VACUUM in any one │
│ │ │ table. The default is 50 tuples │
│ │ │ * Minimum: 0 / Maximum: 2.147483647e+09 │
│ track_functions │ string │ Enables tracking of function call counts and time │
│ │ │ used. │
│ │ │ * Supported values: │
│ │ │ - all │
│ │ │ - pl │
│ │ │ - none │
│ autovacuum_freeze_max_age │ integer │ Specifies the maximum age (in transactions) that a │
│ │ │ table's pg_class.relfrozenxid field can attain │
│ │ │ before a VACUUM operation is forced to prevent │
│ │ │ transaction ID wraparound within the table. Note │
│ │ │ that the system will launch autovacuum processes │
│ │ │ to prevent wraparound even when autovacuum is │
│ │ │ otherwise disabled. This parameter will cause the │
│ │ │ server to be restarted. │
│ │ │ * Minimum: 2e+08 / Maximum: 1.5e+09 │
│ │ │ * Example: 2e+08 │
│ autovacuum_max_workers │ integer │ Specifies the maximum number of autovacuum │
│ │ │ processes (other than the autovacuum launcher) │
│ │ │ that may be running at any one time. The default │
│ │ │ is three. This parameter can only be set at server │
│ │ │ start. │
│ │ │ * Minimum: 1 / Maximum: 20 │
│ autovacuum_vacuum_cost_delay │ integer │ Specifies the cost delay value that will be used │
│ │ │ in automatic VACUUM operations. If -1 is │
│ │ │ specified, the regular vacuum_cost_delay value │
│ │ │ will be used. The default value is 20 milliseconds │
│ │ │ * Minimum: -1 / Maximum: 100 │
│ max_logical_replication_workers │ integer │ PostgreSQL maximum logical replication workers │
│ │ │ (taken from the pool of max_parallel_workers) │
│ │ │ * Minimum: 4 / Maximum: 64 │
│ max_replication_slots │ integer │ PostgreSQL maximum replication slots │
│ │ │ * Minimum: 8 / Maximum: 64 │
│ max_stack_depth │ integer │ Maximum depth of the stack in bytes │
│ │ │ * Minimum: 2.097152e+06 / Maximum: 6.291456e+06 │
│ max_standby_archive_delay │ integer │ Max standby archive delay in milliseconds │
│ │ │ * Minimum: 1 / Maximum: 4.32e+07 │
│ autovacuum_analyze_threshold │ integer │ Specifies the minimum number of inserted, updated │
│ │ │ or deleted tuples needed to trigger an ANALYZE in │
│ │ │ any one table. The default is 50 tuples. │
│ │ │ * Minimum: 0 / Maximum: 2.147483647e+09 │
│ pg_partman_bgw.interval │ integer │ Sets the time interval to run pg_partman's │
│ │ │ scheduled tasks │
│ │ │ * Minimum: 3600 / Maximum: 604800 │
│ │ │ * Example: 3600 │
│ wal_writer_delay │ integer │ WAL flush interval in milliseconds. Note that │
│ │ │ setting this value to lower than the default 200ms │
│ │ │ may negatively impact performance │
│ │ │ * Minimum: 10 / Maximum: 200 │
│ │ │ * Example: 50 │
│ max_parallel_workers │ integer │ Sets the maximum number of workers that the system │
│ │ │ can support for parallel queries │
│ │ │ * Minimum: 0 / Maximum: 96 │
│ max_pred_locks_per_transaction │ integer │ PostgreSQL maximum predicate locks per transaction │
│ │ │ * Minimum: 64 / Maximum: 640 │
│ bgwriter_lru_multiplier │ number │ The average recent need for new buffers is │
│ │ │ multiplied by bgwriter_lru_multiplier to arrive at │
│ │ │ an estimate of the number that will be needed │
│ │ │ during the next round, (up to │
│ │ │ bgwriter_lru_maxpages). 1.0 represents a “just in │
│ │ │ time” policy of writing exactly the number of │
│ │ │ buffers predicted to be needed. Larger values │
│ │ │ provide some cushion against spikes in demand, │
│ │ │ while smaller values intentionally leave writes to │
│ │ │ be done by server processes. The default is 2.0. │
│ │ │ * Minimum: 0 / Maximum: 10 │
│ │ │ * Example: 2 │
│ autovacuum_vacuum_scale_factor │ number │ Specifies a fraction of the table size to add to │
│ │ │ autovacuum_vacuum_threshold when deciding whether │
│ │ │ to trigger a VACUUM. The default is 0.2 (20% of │
│ │ │ table size) │
│ │ │ * Minimum: 0 / Maximum: 1 │
│ track_commit_timestamp │ string │ Record commit time of transactions. │
│ │ │ * Supported values: │
│ │ │ - off │
│ │ │ - on │
│ │ │ * Example: off │
│ max_worker_processes │ integer │ Sets the maximum number of background processes │
│ │ │ that the system can support │
│ │ │ * Minimum: 8 / Maximum: 96 │
│ bgwriter_delay │ integer │ Specifies the delay between activity rounds for │
│ │ │ the background writer in milliseconds. Default is │
│ │ │ 200. │
│ │ │ * Minimum: 10 / Maximum: 10000 │
│ │ │ * Example: 200 │
│ bgwriter_flush_after │ integer │ Whenever more than bgwriter_flush_after bytes have │
│ │ │ been written by the background writer, attempt to │
│ │ │ force the OS to issue these writes to the │
│ │ │ underlying storage. Specified in kilobytes, │
│ │ │ default is 512. Setting of 0 disables forced │
│ │ │ writeback. │
│ │ │ * Minimum: 0 / Maximum: 2048 │
│ │ │ * Example: 512 │
│ deadlock_timeout │ integer │ This is the amount of time, in milliseconds, to │
│ │ │ wait on a lock before checking to see if there is │
│ │ │ a deadlock condition. │
│ │ │ * Minimum: 500 / Maximum: 1.8e+06 │
│ │ │ * Example: 1000 │
│ max_locks_per_transaction │ integer │ PostgreSQL maximum locks per transaction │
│ │ │ * Minimum: 64 / Maximum: 6400 │
│ wal_sender_timeout │ integer │ Terminate replication connections that are │
│ │ │ inactive for longer than this amount of time, in │
│ │ │ milliseconds. Setting this value to zero disables │
│ │ │ the timeout. │
│ │ │ * Example: 60000 │
│ max_standby_streaming_delay │ integer │ Max standby streaming delay in milliseconds │
│ │ │ * Minimum: 1 / Maximum: 4.32e+07 │
│ track_activity_query_size │ integer │ Specifies the number of bytes reserved to track │
│ │ │ the currently executing command for each active │
│ │ │ session. │
│ │ │ * Minimum: 1024 / Maximum: 10240 │
│ │ │ * Example: 1024 │
│ autovacuum_vacuum_cost_limit │ integer │ Specifies the cost limit value that will be used │
│ │ │ in automatic VACUUM operations. If -1 is specified │
│ │ │ (which is the default), the regular │
│ │ │ vacuum_cost_limit value will be used. │
│ │ │ * Minimum: -1 / Maximum: 10000 │
│ max_parallel_workers_per_gather │ integer │ Sets the maximum number of workers that can be │
│ │ │ started by a single Gather or Gather Merge node │
│ │ │ * Minimum: 0 / Maximum: 96 │
│ idle_in_transaction_session_timeout │ integer │ Time out sessions with open transactions after │
│ │ │ this number of milliseconds │
│ │ │ * Minimum: 0 / Maximum: 6.048e+08 │
│ log_line_prefix │ string │ Choose from one of the available log-formats. │
│ │ │ These can support popular log analyzers like │
│ │ │ pgbadger, pganalyze etc. │
│ │ │ * Supported values: │
│ │ │ - '%m [%p] %q[user=%u,db=%d,app=%a] ' │
│ │ │ - 'pid=%p,user=%u,db=%d,app=%a,client=%h ' │
│ │ │ - '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h ' │
│ log_error_verbosity │ string │ Controls the amount of detail written in the │
│ │ │ server log for each message that is logged. │
│ │ │ * Supported values: │
│ │ │ - TERSE │
│ │ │ - DEFAULT │
│ │ │ - VERBOSE │
│ log_min_duration_statement │ integer │ Log statements that take more than this number of │
│ │ │ milliseconds to run, -1 disables │
│ │ │ * Minimum: -1 / Maximum: 8.64e+07 │
│ log_autovacuum_min_duration │ integer │ Causes each action executed by autovacuum to be │
│ │ │ logged if it ran for at least the specified number │
│ │ │ of milliseconds. Setting this to zero logs all │
│ │ │ autovacuum actions. Minus-one (the default) │
│ │ │ disables logging autovacuum actions. │
│ │ │ * Minimum: -1 / Maximum: 2.147483647e+09 │
│ timezone │ string │ PostgreSQL service timezone │
│ │ │ * Example: Europe/Helsinki │
│ max_files_per_process │ integer │ PostgreSQL maximum number of files that can be │
│ │ │ open per process │
│ │ │ * Minimum: 1000 / Maximum: 4096 │
│ pg_stat_statements.track │ [string] │ Controls which statements are counted. Specify top │
│ │ │ to track top-level statements (those issued │
│ │ │ directly by clients), all to also track nested │
│ │ │ statements (such as statements invoked within │
│ │ │ functions), or none to disable statement │
│ │ │ statistics collection. The default value is top. │
│ │ │ * Supported values: │
│ │ │ - all │
│ │ │ - top │
│ │ │ - none │
│ autovacuum_naptime │ integer │ Specifies the minimum delay between autovacuum │
│ │ │ runs on any given database. The delay is measured │
│ │ │ in seconds, and the default is one minute │
│ │ │ * Minimum: 0 / Maximum: 86400 │
│ pg_partman_bgw.role │ string │ Controls which role to use for pg_partman's │
│ │ │ scheduled background tasks. │
│ │ │ * Example: myrolename │
│ max_prepared_transactions │ integer │ PostgreSQL maximum prepared transactions │
│ │ │ * Minimum: 0 / Maximum: 10000 │
┼─────────────────────────────────────┼──────────┼─────────────────────────────────────────────────────────┼
Update and display settings
To change a setting to a value other than the default, you can use:
exo dbaas update test-pg -z de-fra-1 --pg-settings='{"deadlock_timeout": 1002}'
Updating Database Service "test-pg"...
┼───────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼
│ DATABASE SERVICE │ │
┼───────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼
│ Zone │ de-fra-1 │
│ Name │ test-pg │
│ Type │ pg │
│ Plan │ startup-4 │
│ Disk Size │ 80 GiB │
│ State │ running │
│ Creation Date │ 2021-10-21 08:42:40 +0000 UTC │
│ Update Date │ 2021-11-01 14:17:01 +0000 UTC │
│ Nodes │ 1 │
│ Node CPUs │ 2 │
│ Node Memory │ 4.0 GiB │
│ Termination Protected │ true │
│ Maintenance │ tuesday (10:10:10) │
│ Version │ 13.4 │
│ Backup Schedule │ 20:28 │
│ URI │ postgres://avnadmin:xxxxx@test-pg-exoscale-08b0165e-ef03-47ec-926f-f01163d557ed.aivencloud.com:21699/defaultdb?sslmode=require │
│ IP Filter │ 0.0.0.0/0 │
│ Components │ │
│ │ pg test-pg-exoscale-08b0165e-ef03-47ec-926f-f01163d557ed.aivencloud.com:21699 route:dynamic usage:primary │
│ │ pgbouncer test-pg-exoscale-08b0165e-ef03-47ec-926f-f01163d557ed.aivencloud.com:21700 route:dynamic usage:primary │
│ │ │
│ Users │ avnadmin (primary) │
┼───────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼
You can check that the setting has been configured with the show command and the correct setting namespace. Here our deadlock timeout is a general pg namespace setting:
exo dbaas show test-pg -z de-fra-1 --settings=pg
{
"deadlock_timeout": 1002
}
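The same workflow applies to the PgBouncer layer. As a sketch, assuming min_pool_size is among the supported keys (list the available keys first, as they may differ):
exo dbaas type show pg --settings pgbouncer
exo dbaas update test-pg -z de-fra-1 --pg-bouncer-settings='{"min_pool_size": 10}'
exo dbaas show test-pg -z de-fra-1 --settings=pgbouncer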
Using a JSON file and Bash
If you are using Bash as your shell, you can also craft your settings as a separate JSON file and pass it as a parameter using the structure below. Assume config.json is the name of your configuration file:
exo dbaas update test-pg -z de-fra-1 --pg-settings "$(<config.json)"
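As an illustration, config.json could bundle several of the settings listed earlier (the keys come from the table above, the values are examples only):
cat > config.json <<'EOF'
{
  "deadlock_timeout": 1002,
  "jit": true,
  "track_io_timing": "on"
}
EOF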
Connect with psql to your PostgreSQL
To connect to your DB instance with psql, you need to retrieve your credentials:
exo dbaas -z de-fra-1 show test-pg --uri
postgres://avnadmin:xxxxx@test-pg-exoscale-980a712f-5c76-4ec4-a3bc-4e4e442deef2.aivencloud.com:21699/defaultdb?sslmode=require
The status of your DB instance should have changed from rebuilding (just after the create command) to running. To check:
exo dbaas show test-pg -z de-fra-1
┼───────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼
│ DATABASE SERVICE │ │
┼───────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼
│ Zone │ de-fra-1 │
│ Name │ test-pg │
│ Type │ pg │
│ Plan │ hobbyist-2 │
│ Disk Size │ 8.0 GiB │
│ State │ rebuilding │
│ Creation Date │ 2022-06-09 08:26:13 +0000 UTC │
│ Update Date │ 2022-06-09 08:27:25 +0000 UTC │
│ Nodes │ 1 │
│ Node CPUs │ 2 │
│ Node Memory │ 2.0 GiB │
│ Termination Protected │ true │
│ Maintenance │ sunday (11:52:43) │
│ Version │ 14 │
│ Backup Schedule │ 08:19 │
│ URI │ postgres://avnadmin:xxxxx@test-pg-exoscale-980a712f-5c76-4ec4-a3bc-4e4e442deef2.aivencloud.com:21699/defaultdb?sslmode=require │
│ IP Filter │ 1.2.3.4/32 │
│ Components │ │
│ │ pg test-pg-exoscale-980a712f-5c76-4ec4-a3bc-4e4e442deef2.aivencloud.com:21699 route:dynamic usage:primary │
│ │ pgbouncer test-pg-exoscale-980a712f-5c76-4ec4-a3bc-4e4e442deef2.aivencloud.com:21700 route:dynamic usage:primary │
│ │ │
│ Users │ n/a │
┼───────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼
You can now connect to the database engine:
psql postgres://avnadmin:xxxxx@test-pg-exoscale-980a712f-5c76-4ec4-a3bc-4e4e442deef2.aivencloud.com:21699/defaultdb?sslmode=require
psql (9.2.24, server 14.3)
WARNING: psql version 9.2, server version 14.0.
Some psql features might not work.
SSL connection (cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256)
Type "help" for help.
defaultdb=> \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
_aiven | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =T/postgres +
| | | | | postgres=CTc/postgres
defaultdb | avnadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
defaultdb=>
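You can also feed the URI reported by the CLI straight into psql. A minimal sketch, assuming --uri prints only the connection string:
PG_URI="$(exo dbaas -z de-fra-1 show test-pg --uri)"
psql "$PG_URI"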
Migrating to Managed PostgreSQL
Note
Because migrations always involve interaction with third-party services running outside of Exoscale, only best-effort support is provided for all migrations.
Migrate your PostgreSQL database to our Managed PostgreSQL service. Two methods are supported: Replication and pg_dump.
The migration first attempts to use the replication method, and if it fails, it uses pg_dump.
The replication method is preferred for production databases because it allows migrating data with minimal downtime. The pg_dump method is slower and will migrate one database at a time.
Prerequisites
- A target PostgreSQL service: this can be a new or an existing service.
- The hostname, port, username, and password of the source PostgreSQL service.
- The source PostgreSQL service can be secured with SSL/TLS.
- A publicly accessible source PostgreSQL service. You might need to bind PostgreSQL to a publicly reachable network interface in your configuration, and any firewalls must allow traffic from the source to the target servers on TCP port 5432 (if using the default). Also pay attention to other protections that can restrict connections: SELinux should be disabled or set to permissive mode on the source server for the duration of the migration.
- Migration using replication will not work for PostgreSQL databases whose names contain capital letters. If you cannot rename such a database, refer to Aiven's documentation on setting up logical replication to replicate your data.
- Access to a superuser role on the source cluster.
- The wal_level setting must be set to logical on the source cluster: this ensures that the information required for logical replication, such as individual data modifications and the associated metadata, is recorded in the WAL, allowing accurate replication of the data to the target cluster during the migration (see the sketch after this list).
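A quick way to verify and, if needed, change wal_level on the source cluster is sketched below (host, user, and service name are placeholders; changing wal_level requires a restart):
# Check the current setting on the source cluster as a superuser
psql -h source.example.com -U postgres -c "SHOW wal_level;"
# If it is not 'logical', change it and restart PostgreSQL
psql -h source.example.com -U postgres -c "ALTER SYSTEM SET wal_level = logical;"
sudo systemctl restart postgresql    # the service name depends on your distribution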
Performing data migration
To perform the migration, you can use either the Portal or the Exoscale CLI, depending on when the action is performed and whether the service receiving the migration already exists.
Run the following command to create a new service and start the migration immediately:
exo dbaas create pg business-4 target-postgresql-service-name --pg-ip-filter 0.0.0.0/0 --pg-migration-host x.x.x.x --pg-migration-port 5432 --pg-migration-username xxxxx --pg-migration-password xxxxx --pg-migration-method replication --pg-migration-ssl false --pg-migration-dbname xxxxx
- target-postgresql-service-name: the target DBaaS service name
- --pg-migration-host: the IP address or hostname of your source PostgreSQL server
- --pg-migration-port: the port on which your source PostgreSQL server is reachable
- --pg-migration-username: the username on your source PostgreSQL server
- --pg-migration-password: the password on your source PostgreSQL server
- --pg-migration-method: the migration method, either replication or dump
If the PostgreSQL service already exists, you can run the command below:
exo dbaas update --pg-migration-host x.x.x.x --pg-migration-port 5432 --pg-migration-username xxxxx --pg-migration-password xxxxx --pg-migration-method dump target-postgresql-service-name
To check the status of the migration, run:
exo dbaas migration status target-postgresql-service-name
To stop a running migration:
exo dbaas migration stop target-postgresql-service-name
Note
Ensure you are using at least version 1.53.0 of the Exoscale CLI.
Disabling all constraints/triggers using aiven_extras
Note
If you are migrating data, before deciding on the fully manual approach using aiven_extras, consider whether logical replication better suits your use case: the approach outlined below is performed automatically via logical replication.
Access to super-user functionality is restricted in order to keep a clear demarcation line for service availability, and disabling triggers is one such restricted operation. Fortunately, an alternative is available using the aiven_extras extension.
Step 1: In the database shell, install the aiven_extras extension with:
CREATE EXTENSION aiven_extras CASCADE;
Step 2: Disable constraints with:
SELECT aiven_extras.session_replication_role('replica');
Step 3: Import your data or perform any other task that requires disabled constraints
Step 4: Re-enable constraints with:
SELECT aiven_extras.session_replication_role('origin');
Keep in mind that while all constraints are disabled, there is a risk of creating orphaned records or inconsistent data.
Changes made without constraints can affect other transactions in progress, leading to data anomalies, so this should ideally be done when users are not inserting data into the database.
Re-enabling constraints might trigger a check of all rows, which can be resource-intensive for large tables.
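Since the replication role set by aiven_extras applies per session (as the function name suggests), the disable, import, and re-enable steps should run over the same connection. A minimal sketch, assuming a plain SQL dump in dump.sql and the service URI in $PG_URI (both placeholders):
psql "$PG_URI" <<'SQL'
CREATE EXTENSION IF NOT EXISTS aiven_extras CASCADE;
SELECT aiven_extras.session_replication_role('replica');
\i dump.sql
SELECT aiven_extras.session_replication_role('origin');
SQL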
PostgreSQL Read Replicas
PostgreSQL read replicas provide improved performance by distributing read queries, thus reducing the load on the primary database. They also enhance high availability, scalability, and disaster recovery. Additionally, replicas can be placed in different geographical zones to further improve availability and reduce latency.
Creating a Read Replica
To create a read replica for an existing PostgreSQL database in another zone, you can use the following command:
exo api --zone at-vie-1 create-dbaas-service-pg rocket <<EOF
{
"name": "rocket",
"plan": "startup-8",
"integrations": [{"type": "read_replica", "source-service": "launch-site"}]
}
EOF
This creates an instance called rocket in zone at-vie-1 as a read replica of the existing database service launch-site.
Note
Do not forget to configure the IP Filter to access the new database.
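For example, reusing the IP filter flag shown earlier (the address below is an illustrative placeholder):
exo dbaas update -z at-vie-1 rocket --pg-ip-filter=203.0.113.10/32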
Verifying the Replica
To monitor the replication status in PostgreSQL, you can use the pg_stat_replication system view. It offers insight into the replication status on a primary server, displaying details for each replication connection, such as the state, sent and written positions, and any lag.
To check the replication status, execute the following SQL query on your primary server:
\x
SELECT * FROM pg_stat_replication;
This query will return columns like pid, application_name, client_addr, state, sent_lsn, and write_lsn.
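To focus on replication lag specifically, you can compare the sent and replayed positions on the primary; a sketch, with the connection string as a placeholder:
psql "$PRIMARY_URI" -x -c "SELECT application_name, state, pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes FROM pg_stat_replication;"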
Promoting the Replica to Master
Step 1: Get details of the replica service to find the integration ID
exo api get-dbaas-service-pg -z at-vie-1 rocket
Step 2: Check Integration Status
exo api get-dbaas-integration <integration_id>
Step 3: Promote the replica to a standalone primary database (remove the replication integration)
exo api delete-dbaas-integration <integration_id>
Note
- Customers can create a read replica on a startup plan, regardless of the main cluster’s plan.
- The read replica node is vertically scalable.
- It can be scaled out by promoting it to a primary, at which point it becomes a standalone service detached from the main cluster.