Replication
Exoscale’s Simple Object Storage (SOS) supports live bucket replication: an automatic, asynchronous copy of objects from one bucket to one or more buckets in the same or a different zone.
This lets you maintain copies of a bucket, in the same or a different zone, to cover needs such as disaster recovery and multi-zone storage synchronization.
Requirements
To enable bucket replication, the source and destination bucket(s) must belong to the same Exoscale organization, and all of them must have object versioning enabled.
If the source bucket has Object Locking enabled, the destination bucket must also have Object Locking enabled for replication to work correctly.
Configuration Structure
The SOS REST API accepts the bucket replication configuration as XML; the Exoscale Portal and the Exoscale CLI offer a convenient way to define it in JSON.
A replication configuration is composed of the unique identifier (UUID) of an Exoscale IAM role and one or more rules. The replication process assumes this IAM role to perform operations on your behalf on both the source and destination buckets. For example:
<ReplicationConfiguration>
  <Role>role-uuid</Role>
  <Rule> ... </Rule>
  <Rule> ... </Rule>
  ...
</ReplicationConfiguration>
The IAM role must have the following permissions:
- `get-object` on the source bucket
- `put-object`, `put-object-*` on the destination bucket
Rule configuration
Each rule is configured with the following fields:
- `ID`: a unique name
- `Priority`: an integer used to determine precedence between conflicting rules with the same destination bucket. The highest priority wins.
- `Status`: `Enabled` or `Disabled`, to enable or disable a given rule while keeping its configuration
- `Filter`: section to filter the elements to be replicated
  - `Prefix`: ensures the rule applies only to objects matching the configured prefix
- `DeleteMarkerReplication`: section to configure the replication of delete markers
  - `Status`: `Enabled` or `Disabled`, to enable or disable the replication of delete markers
- `Destination`: section to configure the destination of the rule
  - `Bucket`: the target bucket name
You can use the following minimal configuration as an example:
<Rule>
  <ID>My-Rule-0</ID>
  <Priority>0</Priority>
  <Status>Enabled</Status>
  <Filter>
    <Prefix></Prefix>
  </Filter>
  <DeleteMarkerReplication>
    <Status>Enabled</Status>
  </DeleteMarkerReplication>
  <Destination>
    <Bucket>my-exo-destination-bucket</Bucket>
  </Destination>
</Rule>
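If you drive the REST API directly, you may want to generate this XML programmatically. The following is a minimal Python sketch; `role-uuid` and `my-exo-destination-bucket` are the placeholders from the examples above, not real values:

```python
import xml.etree.ElementTree as ET

def build_replication_config(role_uuid, rules):
    """Build a <ReplicationConfiguration> document from a role UUID
    and a list of rule dicts matching the fields described above."""
    root = ET.Element("ReplicationConfiguration")
    ET.SubElement(root, "Role").text = role_uuid
    for r in rules:
        rule = ET.SubElement(root, "Rule")
        ET.SubElement(rule, "ID").text = r["ID"]
        ET.SubElement(rule, "Priority").text = str(r["Priority"])
        ET.SubElement(rule, "Status").text = r["Status"]
        flt = ET.SubElement(rule, "Filter")
        ET.SubElement(flt, "Prefix").text = r.get("Prefix", "")
        dmr = ET.SubElement(rule, "DeleteMarkerReplication")
        ET.SubElement(dmr, "Status").text = r["DeleteMarkerReplication"]
        dest = ET.SubElement(rule, "Destination")
        ET.SubElement(dest, "Bucket").text = r["Bucket"]
    return ET.tostring(root, encoding="unicode")

xml_doc = build_replication_config(
    "role-uuid",
    [{"ID": "My-Rule-0", "Priority": 0, "Status": "Enabled",
      "DeleteMarkerReplication": "Enabled",
      "Bucket": "my-exo-destination-bucket"}],
)
print(xml_doc)
```

Rules are appended in order, producing the same structure as the minimal `<Rule>` shown above.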
Examples
Simple Leader-Follower Setup
Using the CLI
In this example, we will build a simple configuration with one leader bucket replicating into a follower bucket. Writes to the leader will become visible in the follower.
- Using the following IAM configuration, create an IAM role for the replication:
# policy.json
{
  "default-service-strategy": "deny",
  "services": {
    "sos": {
      "type": "rules",
      "rules": [
        {
          "action": "allow",
          "expression": "parameters.bucket == 'my-source' && operation == 'get-object'"
        },
        {
          "action": "allow",
          "expression": "parameters.bucket == 'my-destination' && (operation.startsWith('put-object') || operation.startsWith('delete-object'))"
        }
      ]
    }
  }
}
$ cat policy.json | exo iam role create --editable=true --description 'sos bucket replication between my-source and my-destination' --policy - replication-my-source-my-destination
┼─────────────┼─────────────────────────────────────────────────────────────┼
│ ID │ 6f00a21e-a535-4286-86c5-e81eeab3b4ff │
│ Name │ replication-my-source-my-destination │
│ Description │ sos bucket replication between my-source and my-destination │
│ Editable │ true │
│ Labels │ n/a │
│ Permissions │ n/a │
┼─────────────┼─────────────────────────────────────────────────────────────┼
- Create the source and destination buckets, with the correct configuration:
$ exo storage create --zone ch-dk-2 my-source
┼──────────────────┼────────────────────────────────────────────┼
│ STORAGE │ │
┼──────────────────┼────────────────────────────────────────────┼
│ Name │ my-source │
│ Zone │ ch-dk-2 │
│ ACL │ │
│ │ Read - │
│ │ Write - │
│ │ Read ACP - │
│ │ Write ACP - │
│ │ Full Control xxxxxxxxxxxxxxxxxxxxxxx │
│ │ │
│ CORS │ │
│ Object Ownership │ BucketOwnerEnforced │
┼──────────────────┼────────────────────────────────────────────┼
$ exo storage create --zone at-vie-1 my-destination
┼──────────────────┼────────────────────────────────────────────┼
│ STORAGE │ │
┼──────────────────┼────────────────────────────────────────────┼
│ Name │ my-destination │
│ Zone │ at-vie-1 │
│ ACL │ │
│ │ Read - │
│ │ Write - │
│ │ Read ACP - │
│ │ Write ACP - │
│ │ Full Control xxxxxxxxxxxxxxxxxxxxxxx │
│ │ │
│ CORS │ │
│ Object Ownership │ BucketOwnerEnforced │
┼──────────────────┼────────────────────────────────────────────┼
$ exo storage bucket versioning enable --zone ch-dk-2 my-source
$ exo storage bucket versioning enable --zone at-vie-1 my-destination
- Using the following replication configuration, enable the replication between the two buckets.
# config.json
{
  "Role": "6f00a21e-a535-4286-86c5-e81eeab3b4ff",
  "Rules": [{
    "ID": "vie1-follower",
    "Priority": 1,
    "Filter": {
      "Prefix": ""
    },
    "Status": "Enabled",
    "DeleteMarkerReplication": {
      "Status": "Enabled"
    },
    "Destination": {
      "Bucket": "my-destination"
    }
  }]
}
$ exo storage bucket replication set --zone ch-dk-2 sos://my-source ./config.json
- Confirm it works as expected:
$ exo storage upload -r ./ sos://my-source/
config.json [===========================================================================] 372.00 b / 372.00 b | 0s
policy.json [===========================================================================] 418.00 b / 418.00 b | 0s
$ exo storage list sos://my-source/
2025-02-20 15:50:12 UTC 372 B config.json
2025-02-20 15:50:13 UTC 418 B policy.json
$ exo storage list sos://my-destination/
2025-02-20 15:50:12 UTC 372 B config.json
2025-02-20 15:50:13 UTC 418 B policy.json
# You can also check the Replication Status of each object individually
$ exo storage show sos://my-source/config.json
[...]
│ Path │ config.json │
│ Bucket │ my-source │
│ Replication Status │ COMPLETED │
$ exo storage show sos://my-destination/config.json
[...]
│ Path │ config.json │
│ Bucket │ my-destination │
│ Replication Status │ REPLICA │
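SOS exposes an S3-compatible API, so the same replication configuration can also be managed with a generic S3 SDK such as boto3. This is a sketch under assumptions: the endpoint URL and credential handling depend on your setup, and the role UUID and bucket names simply reuse the example values above.

```python
# The replication configuration, in the same JSON shape as config.json above.
replication_config = {
    "Role": "6f00a21e-a535-4286-86c5-e81eeab3b4ff",  # IAM role UUID from step 1
    "Rules": [{
        "ID": "vie1-follower",
        "Priority": 1,
        "Filter": {"Prefix": ""},
        "Status": "Enabled",
        "DeleteMarkerReplication": {"Status": "Enabled"},
        "Destination": {"Bucket": "my-destination"},
    }],
}

def apply_replication(endpoint_url, access_key, secret_key):
    """Apply the configuration to the source bucket via the S3 API.

    endpoint_url must point at the source bucket's zone; the credentials
    are an Exoscale API key pair with access to the bucket (assumptions).
    """
    import boto3  # imported lazily: only needed when actually applying
    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    s3.put_bucket_replication(
        Bucket="my-source",
        ReplicationConfiguration=replication_config,
    )
```

The per-object replication state shown by `exo storage show` is likewise available through the S3 API: a `head_object` response carries a `ReplicationStatus` field.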
Using the Portal
- In `IAM` -> `Roles` -> `Add`, create a role with the following configuration:
  - name: `replication-my-source-my-destination`
  - description: `sos bucket replication between my-source and my-destination`
  - Editable Policy: true
  - Policy: using the `Advanced mode`, submit the following configuration:
{
  "default-service-strategy": "deny",
  "services": {
    "sos": {
      "type": "rules",
      "rules": [
        {
          "action": "allow",
          "expression": "parameters.bucket == 'my-source' && operation == 'get-object'"
        },
        {
          "action": "allow",
          "expression": "parameters.bucket == 'my-destination' && (operation.startsWith('put-object') || operation.startsWith('delete-object'))"
        }
      ]
    }
  }
}
- In `Storage` -> `Add`, create the two buckets.
- In the list of buckets, for each of your two buckets, click on `...` and open the `details` page. Enable `Versioning` for both buckets.
- For the source bucket, open the `Replication` tab and write the following configuration:
{
  "Role": "6f00a21e-a535-4286-86c5-e81eeab3b4ff",
  "Rules": [{
    "ID": "vie1-follower",
    "Priority": 1,
    "Filter": {
      "Prefix": ""
    },
    "Status": "Enabled",
    "DeleteMarkerReplication": {
      "Status": "Enabled"
    },
    "Destination": {
      "Bucket": "my-destination"
    }
  }]
}
- Finally, we can upload objects into our source bucket and confirm they eventually appear in the destination bucket.
Bi-Directional Replication
You can also synchronize two buckets in both directions. This can be used, for example, to create an eventually-consistent multi-active system, or to allow easy failover and rollback of your application in case of datacenter faults. Writes to either of the two buckets will eventually be replicated to the other.
Writes are only replicated once, so this does not create infinite loops in the replication process.
- Using the following IAM configuration, create an IAM role for the replication:
# a-b.json
{
  "default-service-strategy": "deny",
  "services": {
    "sos": {
      "type": "rules",
      "rules": [
        {
          "action": "allow",
          "expression": "parameters.bucket == 'bucket-a' && operation == 'get-object'"
        },
        {
          "action": "allow",
          "expression": "parameters.bucket == 'bucket-b' && (operation.startsWith('put-object') || operation.startsWith('delete-object'))"
        }
      ]
    }
  }
}
# b-a.json
{
  "default-service-strategy": "deny",
  "services": {
    "sos": {
      "type": "rules",
      "rules": [
        {
          "action": "allow",
          "expression": "parameters.bucket == 'bucket-b' && operation == 'get-object'"
        },
        {
          "action": "allow",
          "expression": "parameters.bucket == 'bucket-a' && (operation.startsWith('put-object') || operation.startsWith('delete-object'))"
        }
      ]
    }
  }
}
$ cat a-b.json | exo iam role create --editable=true --description 'sos bucket replication between bucket-a and bucket-b' --policy - replication-bucket-a-bucket-b
┼─────────────┼──────────────────────────────────────────────────────┼
│ ID │ 91390068-c386-4d9d-b0f1-951e2baa818e │
│ Name │ replication-bucket-a-bucket-b │
│ Description │ sos bucket replication between bucket-a and bucket-b │
│ Editable │ true │
│ Labels │ n/a │
│ Permissions │ n/a │
┼─────────────┼──────────────────────────────────────────────────────┼
$ cat b-a.json | exo iam role create --editable=true --description 'sos bucket replication between bucket-b and bucket-a' --policy - replication-bucket-b-bucket-a
┼─────────────┼──────────────────────────────────────────────────────┼
│ ID │ e23c886b-8091-4adc-b567-7bf796bed37b │
│ Name │ replication-bucket-b-bucket-a │
│ Description │ sos bucket replication between bucket-b and bucket-a │
│ Editable │ true │
│ Labels │ n/a │
│ Permissions │ n/a │
┼─────────────┼──────────────────────────────────────────────────────┼
- Create the A and B buckets, with the correct configuration:
$ exo storage create --zone ch-dk-2 bucket-a
[...]
$ exo storage create --zone at-vie-1 bucket-b
[...]
$ exo storage bucket versioning enable --zone ch-dk-2 bucket-a
$ exo storage bucket versioning enable --zone at-vie-1 bucket-b
- Using the following replication configuration, enable the replication between the two buckets.
# conf-a.json
{
  "Role": "91390068-c386-4d9d-b0f1-951e2baa818e",
  "Rules": [{
    "ID": "to-bucket-b",
    "Priority": 1,
    "Filter": {
      "Prefix": ""
    },
    "Status": "Enabled",
    "DeleteMarkerReplication": {
      "Status": "Enabled"
    },
    "Destination": {
      "Bucket": "bucket-b"
    }
  }]
}
# conf-b.json
{
  "Role": "e23c886b-8091-4adc-b567-7bf796bed37b",
  "Rules": [{
    "ID": "to-bucket-a",
    "Priority": 1,
    "Filter": {
      "Prefix": ""
    },
    "Status": "Enabled",
    "DeleteMarkerReplication": {
      "Status": "Enabled"
    },
    "Destination": {
      "Bucket": "bucket-a"
    }
  }]
}
$ exo storage bucket replication set --zone ch-dk-2 sos://bucket-a ./conf-a.json
$ exo storage bucket replication set --zone at-vie-1 sos://bucket-b ./conf-b.json
- Confirm it works as expected:
$ exo storage upload *a.json sos://bucket-a
b-a.json [==============================================================================] 410.00 b / 410.00 b | 0s
conf-a.json [==============================================================================] 364.00 b / 364.00 b | 0s
$ exo storage upload *b.json sos://bucket-b
a-b.json [==============================================================================] 410.00 b / 410.00 b | 0s
conf-b.json [==============================================================================] 364.00 b / 364.00 b | 0s
$ exo storage list sos://bucket-a
2025-02-24 14:06:34 UTC 410 B a-b.json
2025-02-24 14:06:28 UTC 410 B b-a.json
2025-02-24 14:06:28 UTC 364 B conf-a.json
2025-02-24 14:06:34 UTC 364 B conf-b.json
$ exo storage list sos://bucket-b
2025-02-24 14:06:34 UTC 410 B a-b.json
2025-02-24 14:06:28 UTC 410 B b-a.json
2025-02-24 14:06:28 UTC 364 B conf-a.json
2025-02-24 14:06:34 UTC 364 B conf-b.json
Cost Impact
Using bucket replication is free, but replicated objects count towards your total storage usage. If you replicate an entire bucket, whether within the same zone or across zones, your storage consumption doubles.
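This accounting reduces to a simple rule of thumb (an illustration of storage usage only, not a billing formula): each destination of a full-bucket replication holds a complete copy of the data.

```python
def total_stored_gib(bucket_size_gib, n_destinations):
    """Storage counted against your usage when an entire bucket is
    replicated to n_destinations other buckets."""
    return bucket_size_gib * (1 + n_destinations)

# A 100 GiB bucket replicated to a single destination doubles usage:
print(total_stored_gib(100, 1))  # → 200
```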
Limitations
- SOS doesn’t support Batch Replication. Only new data mutations will be replicated.
- Objects encrypted with SSE-C will not be replicated.
- As SOS doesn’t support Object Tagging, filters referencing tags will not be evaluated.
- Source and destination buckets must be within the same organization.