Guided Exercise: Configuring a Multisite Object Storage Deployment

In this exercise, you will configure the RADOS Gateway with multisite support and verify the configuration.

Outcomes

You should be able to deploy Ceph RADOS Gateways and configure multisite replication, using serverc in the primary cluster as the us-east-1 site and serverf in the secondary cluster as the us-east-2 site.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

[student@workstation ~]$ lab start object-multisite

This command confirms that the hosts required for this exercise are accessible.

Procedure 8.2. Instructions

  1. Open two terminals and log in to both serverc and serverf as the admin user. Verify that both clusters are reachable and have a HEALTH_OK status.

    1. Open a terminal window. Log in to serverc as the admin user and use sudo to run the cephadm shell. Verify that the primary cluster is in a healthy state.

      [student@workstation ~]$ ssh admin@serverc
      ...output omitted...
      [admin@serverc ~]$ sudo cephadm shell
      [ceph: root@serverc /]# ceph health
      HEALTH_OK
    2. Open another terminal window. Log in to serverf as the admin user and use sudo to run the cephadm shell. Verify that the secondary cluster is in a healthy state.

      [student@workstation ~]$ ssh admin@serverf
      ...output omitted...
      [admin@serverf ~]$ sudo cephadm shell
      [ceph: root@serverf /]# ceph health
      HEALTH_OK
  2. On the serverc node, configure the us-east-1 site. Create a realm, zone group, zone, and a replication user. Set the realm and zone as defaults for the site. Commit the configuration and review the period id. Use the names provided in the table:

    Option        Name
    --realm       cl260
    --zonegroup   classroom
    --zone        us-east-1
    --uid         repl.user
    1. Create a realm called cl260. Set the realm as the default.

      [ceph: root@serverc /]# radosgw-admin realm create --rgw-realm=cl260 --default
      {
          "id": "9eef2ff2-5fb1-4398-a69b-eeb3d9610638",
          "name": "cl260",
          "current_period": "031c3bfb-6626-47ef-a523-30b4313499d9",
          "epoch": 1
      }
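
      Optionally, verify that the realm exists and is set as the default; the default_info field holds the id of the cl260 realm:

      [ceph: root@serverc /]# radosgw-admin realm list
      {
          "default_info": "9eef2ff2-5fb1-4398-a69b-eeb3d9610638",
          "realms": [
              "cl260"
          ]
      }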
    2. Create a zone group called classroom. Configure the classroom zone group with an endpoint pointing to the RADOS Gateway running on the serverc node. Set the classroom zone group as the default.

      [ceph: root@serverc /]# radosgw-admin zonegroup create --rgw-zonegroup=classroom \
        --endpoints=http://serverc:80 --master --default
      {
          "id": "d3524ffb-8a3c-45f1-ac18-23db1bc99071",
          "name": "classroom",
          "api_name": "classroom",
          "is_master": "true",
          "endpoints": [
              "http://serverc:80"
          ],
      ...output omitted...
          "realm_id": "9eef2ff2-5fb1-4398-a69b-eeb3d9610638",
       ...output omitted...
      }
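
      Optionally, list the zone groups to confirm that classroom was created and is the default:

      [ceph: root@serverc /]# radosgw-admin zonegroup list
      ...output omitted...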
    3. Create a master zone called us-east-1. Configure the us-east-1 zone with an endpoint pointing to http://serverc:80. Use replication as the access key and secret as the secret key. Set the us-east-1 zone as the default.

      [ceph: root@serverc /]# radosgw-admin zone create --rgw-zonegroup=classroom \
        --rgw-zone=us-east-1 --endpoints=http://serverc:80 --master --default \
        --access-key=replication --secret=secret
      {
          "id": "4f1863ca-1fca-4c2d-a7b0-f693ddd14882",
          "name": "us-east-1",
          "domain_root": "us-east-1.rgw.meta:root",
          "control_pool": "us-east-1.rgw.control",
      ...output omitted...
          "system_key": {
              "access_key": "replication",
              "secret_key": "secret"
          },
          "placement_pools": [
              {
                  "key": "default-placement",
                  "val": {
                      "index_pool": "us-east-1.rgw.buckets.index",
                      "storage_classes": {
                          "STANDARD": {
                              "data_pool": "us-east-1.rgw.buckets.data"
                          }
                      },
                      "data_extra_pool": "us-east-1.rgw.buckets.non-ec",
                      "index_type": 0
                  }
              }
          ],
          "realm_id": "9eef2ff2-5fb1-4398-a69b-eeb3d9610638",
          "notif_pool": "us-east-1.rgw.log:notif"
      }
    4. Create a system user called repl.user to access the zone pools. The keys for the repl.user user must match the keys configured for the zone.

      [ceph: root@serverc /]# radosgw-admin user create --uid="repl.user" --system \
        --display-name="Replication User" --secret=secret --access-key=replication
      {
          "user_id": "repl.user",
          "display_name": "Replication User",
          "email": "",
          "suspended": 0,
          "max_buckets": 1000,
          "subusers": [],
          "keys": [
              {
                  "user": "repl.user",
                  "access_key": "replication",
                  "secret_key": "secret"
              }
      ...output omitted...
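
      Optionally, confirm that the user's keys match the zone's system keys; the keys array in the output must show replication as the access key and secret as the secret key:

      [ceph: root@serverc /]# radosgw-admin user info --uid=repl.user
      ...output omitted...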
    5. Every realm has an associated current period, holding the current state of zone groups and storage policies. Commit the realm configuration changes to the period. Note the period id associated with the current configuration.

      [ceph: root@serverc /]# radosgw-admin period update --commit
      {
          "id": "7cdc83cf-69d8-478e-b625-d5250ac4435b",
          "epoch": 1,
          "predecessor_uuid": "031c3bfb-6626-47ef-a523-30b4313499d9",
          "sync_status": [],
          "period_map": {
              "id": "7cdc83cf-69d8-478e-b625-d5250ac4435b",
              "zonegroups": [
                  {
                      "id": "d3524ffb-8a3c-45f1-ac18-23db1bc99071",
                      "name": "classroom",
                      "api_name": "classroom",
                      "is_master": "true",
                      "endpoints": [
                          "http://serverc:80"
                      ],
      ...output omitted...
          "master_zonegroup": "d3524ffb-8a3c-45f1-ac18-23db1bc99071",
          "master_zone": "4f1863ca-1fca-4c2d-a7b0-f693ddd14882",
      ...output omitted...
          "realm_id": "9eef2ff2-5fb1-4398-a69b-eeb3d9610638",
          "realm_name": "cl260",
          "realm_epoch": 2
      }
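
      You can display the current period id at any time; the current_period value matches the id field from the commit output:

      [ceph: root@serverc /]# radosgw-admin period get-current
      {
          "current_period": "7cdc83cf-69d8-478e-b625-d5250ac4435b"
      }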
  3. Create a new RADOS Gateway service called cl260-1 in the cl260 realm and the us-east-1 zone, with a single RGW daemon on the serverc node. Verify that the RGW daemon is up and running. Update the zone name in the configuration database.

    1. Create a new RADOS Gateway service called cl260-1.

      [ceph: root@serverc /]# ceph orch apply rgw cl260-1 --realm=cl260 \
        --zone=us-east-1 --placement="1 serverc.lab.example.com"
      Scheduled rgw.cl260-1 update...
      [ceph: root@serverc /]# ceph orch ps --daemon-type rgw
      NAME                        HOST                     STATUS        REFRESHED  AGE  PORTS ...
      rgw.cl260-1.serverc.sxsntj  serverc.lab.example.com  running (6m)  6m ago     6m   *:80  ...
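
      The RGW daemon creates the us-east-1.rgw.* zone pools on demand when it starts to service the zone. Optionally, list the pools to confirm that they exist:

      [ceph: root@serverc /]# ceph osd pool ls
      ...output omitted...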
    2. Update the zone name in the configuration database.

      [ceph: root@serverc /]# ceph config set client.rgw rgw_zone us-east-1
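
      Optionally, read the option back to confirm the setting; the command should print us-east-1:

      [ceph: root@serverc /]# ceph config get client.rgw rgw_zone
      us-east-1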
  4. On the serverf node, pull the realm and period configuration from the serverc node. Use the credentials for repl.user to authenticate. Verify that the current period id matches the one on the serverc node.

    1. In the second terminal, pull the realm configuration from the serverc node.

      [ceph: root@serverf /]# radosgw-admin realm pull --url=http://serverc:80 \
        --access-key=replication --secret-key=secret
      {
          "id": "9eef2ff2-5fb1-4398-a69b-eeb3d9610638",
          "name": "cl260",
          "current_period": "7cdc83cf-69d8-478e-b625-d5250ac4435b",
          "epoch": 2
      }
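
      Optionally, list the realms on the serverf node to confirm that the pull stored the cl260 realm locally:

      [ceph: root@serverf /]# radosgw-admin realm list
      ...output omitted...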
    2. Pull the period configuration from the serverc node.

      [ceph: root@serverf /]# radosgw-admin period pull --url=http://serverc:80 \
        --access-key=replication --secret-key=secret
      {
          "id": "7cdc83cf-69d8-478e-b625-d5250ac4435b",
          "epoch": 1,
          "predecessor_uuid": "031c3bfb-6626-47ef-a523-30b4313499d9",
          "sync_status": [],
          "period_map": {
              "id": "7cdc83cf-69d8-478e-b625-d5250ac4435b",
              "zonegroups": [
                  {
                      "id": "d3524ffb-8a3c-45f1-ac18-23db1bc99071",
                      "name": "classroom",
                      "api_name": "classroom",
                      "is_master": "true",
                      "endpoints": [
                          "http://serverc:80"
                      ],
                      "hostnames": [],
                      "hostnames_s3website": [],
                      "master_zone": "4f1863ca-1fca-4c2d-a7b0-f693ddd14882",
                      "zones": [
                          {
                              "id": "4f1863ca-1fca-4c2d-a7b0-f693ddd14882",
                              "name": "us-east-1",
                              "endpoints": [
                                  "http://serverc:80"
                              ],
      ...output omitted...
          "master_zonegroup": "d3524ffb-8a3c-45f1-ac18-23db1bc99071",
          "master_zone": "4f1863ca-1fca-4c2d-a7b0-f693ddd14882",
      ...output omitted...
          "realm_id": "9eef2ff2-5fb1-4398-a69b-eeb3d9610638",
          "realm_name": "cl260",
          "realm_epoch": 2
      }
    3. View the current period id on the serverf node and verify that it matches the period id on the serverc node.

      [ceph: root@serverf /]# radosgw-admin period get-current
      {
          "current_period": "7cdc83cf-69d8-478e-b625-d5250ac4435b"
      }
  5. On the serverf node, configure the us-east-2 site. Set the cl260 realm and classroom zone group as the defaults and create the us-east-2 zone. Commit the site configuration and review the period id. Update the zone name in the configuration database.

    1. Set the cl260 realm and the classroom zone group as the defaults.

      [ceph: root@serverf /]# radosgw-admin realm default --rgw-realm=cl260
      [ceph: root@serverf /]# radosgw-admin zonegroup default --rgw-zonegroup=classroom
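
      Optionally, confirm the default realm; the command should print the id of the cl260 realm:

      [ceph: root@serverf /]# radosgw-admin realm get-default
      default realm: 9eef2ff2-5fb1-4398-a69b-eeb3d9610638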
    2. Create a zone called us-east-2. Configure the us-east-2 zone with an endpoint pointing to http://serverf:80.

      [ceph: root@serverf /]# radosgw-admin zone create --rgw-zonegroup=classroom \
        --rgw-zone=us-east-2 --endpoints=http://serverf:80 --access-key=replication \
        --secret-key=secret --default
      {
          "id": "3879a186-cc0c-4b42-8db1-7624d74951b0",
          "name": "us-east-2",
          "domain_root": "us-east-2.rgw.meta:root",
          "control_pool": "us-east-2.rgw.control",
      ...output omitted...
          "system_key": {
              "access_key": "replication",
              "secret_key": "secret"
          },
          "placement_pools": [
              {
                  "key": "default-placement",
                  "val": {
                      "index_pool": "us-east-2.rgw.buckets.index",
                      "storage_classes": {
                          "STANDARD": {
                              "data_pool": "us-east-2.rgw.buckets.data"
                          }
                      },
                      "data_extra_pool": "us-east-2.rgw.buckets.non-ec",
                      "index_type": 0
                  }
              }
          ],
          "realm_id": "9eef2ff2-5fb1-4398-a69b-eeb3d9610638",
          "notif_pool": "us-east-2.rgw.log:notif"
      }
    3. Commit the site configuration.

      [ceph: root@serverf /]# radosgw-admin period update --commit --rgw-zone=us-east-2
      {
          "id": "7cdc83cf-69d8-478e-b625-d5250ac4435b",
          "epoch": 2,
          "predecessor_uuid": "031c3bfb-6626-47ef-a523-30b4313499d9",
          "sync_status": [],
          "period_map": {
              "id": "7cdc83cf-69d8-478e-b625-d5250ac4435b",
              "zonegroups": [
                  {
                      "id": "d3524ffb-8a3c-45f1-ac18-23db1bc99071",
                      "name": "classroom",
                      "api_name": "classroom",
                      "is_master": "true",
                      "endpoints": [
                          "http://serverc:80"
                      ],
                      "hostnames": [],
                      "hostnames_s3website": [],
                      "master_zone": "4f1863ca-1fca-4c2d-a7b0-f693ddd14882",
                      "zones": [
                          {
                              "id": "3879a186-cc0c-4b42-8db1-7624d74951b0",
                              "name": "us-east-2",
                              "endpoints": [
                                  "http://serverf:80"
                              ],
      ...output omitted...
                          },
                          {
                              "id": "4f1863ca-1fca-4c2d-a7b0-f693ddd14882",
                              "name": "us-east-1",
                              "endpoints": [
                                  "http://serverc:80"
                              ],
      ...output omitted...
          "master_zonegroup": "d3524ffb-8a3c-45f1-ac18-23db1bc99071",
          "master_zone": "4f1863ca-1fca-4c2d-a7b0-f693ddd14882",
      ...output omitted...
          "realm_id": "9eef2ff2-5fb1-4398-a69b-eeb3d9610638",
          "realm_name": "cl260",
          "realm_epoch": 2
      }
    4. Update the zone name in the configuration database.

      [ceph: root@serverf /]# ceph config set client.rgw rgw_zone us-east-2
  6. Create a new RADOS Gateway service called cl260-2 in the cl260 realm and the us-east-2 zone, with a single RGW daemon on the serverf node. Verify that the RGW daemon is up and running. View the period associated with the current configuration. Verify the sync status of the site.

    1. Create the RADOS Gateway service on the serverf node.

      [ceph: root@serverf /]# ceph orch apply rgw cl260-2 --realm=cl260 \
        --zone=us-east-2 --placement="1 serverf.lab.example.com"
      Scheduled rgw.cl260-2 update...
      [ceph: root@serverf /]# ceph orch ps --daemon-type rgw
      NAME                        HOST                     STATUS         REFRESHED  AGE  PORTS  ...
      rgw.cl260-2.serverf.zgkgem  serverf.lab.example.com  running (37m)  6m ago     37m  *:80   ...
    2. View the period associated with the current configuration.

      [ceph: root@serverf /]# radosgw-admin period get-current
      {
          "current_period": "7cdc83cf-69d8-478e-b625-d5250ac4435b"
      }
    3. Verify the sync status.

      [ceph: root@serverf /]# radosgw-admin sync status
                realm 9eef2ff2-5fb1-4398-a69b-eeb3d9610638 (cl260)
            zonegroup d3524ffb-8a3c-45f1-ac18-23db1bc99071 (classroom)
                 zone 3879a186-cc0c-4b42-8db1-7624d74951b0 (us-east-2)
        metadata sync syncing
                      full sync: 0/64 shards
                      incremental sync: 64/64 shards
                      metadata is caught up with master
            data sync source: 4f1863ca-1fca-4c2d-a7b0-f693ddd14882 (us-east-1)
                              syncing
                              full sync: 0/128 shards
                              incremental sync: 128/128 shards
                              data is caught up with source
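
      Optionally, run radosgw-admin sync status in the serverc terminal as well; there, the zone line should show us-east-1 and the data sync source should be us-east-2:

      [ceph: root@serverc /]# radosgw-admin sync status
      ...output omitted...

      If the awscli package is installed on workstation (it is not provided by this exercise), you can also test replication end to end by creating a bucket in the master zone and listing it from the secondary zone:

      [student@workstation ~]$ aws configure set aws_access_key_id replication
      [student@workstation ~]$ aws configure set aws_secret_access_key secret
      [student@workstation ~]$ aws configure set region us-east-1
      [student@workstation ~]$ aws --endpoint-url=http://serverc:80 s3 mb s3://test-sync
      [student@workstation ~]$ aws --endpoint-url=http://serverf:80 s3 ls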
  7. Exit and close the second terminal. Return to workstation as the student user.

    [ceph: root@serverf /]# exit
    [admin@serverf ~]$ exit
    [student@workstation ~]$ exit
    [ceph: root@serverc /]# exit
    [admin@serverc ~]$ exit
    [student@workstation ~]$

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish object-multisite

This concludes the guided exercise.

Revision: cl260-5.0-29d2128