In this review, you will deploy and configure RADOS Gateways using specified requirements.
Outcomes
You should be able to:
Deploy RADOS Gateway services.
Configure multisite replication.
Create and manage users to access the RADOS Gateway.
Create buckets and store objects by using the Amazon S3 and Swift APIs.
If you did not reset your classroom virtual machines at the end of the last chapter, save any work you want to keep from earlier exercises on those machines and reset the classroom environment now.
Reset your environment before performing this exercise. All comprehensive review labs start with a clean, initial classroom environment that includes a pre-built, fully operational Ceph cluster. All remaining comprehensive reviews use the default Ceph cluster provided in the initial classroom environment.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start comprehensive-review5
This command ensures that all cluster hosts are reachable.
It also installs the AWS and Swift clients on the serverc and serverf nodes.
The primary Ceph cluster contains the serverc, serverd, and servere nodes. The secondary Ceph cluster contains the serverf node.
Specifications
Create a realm, zonegroup, zone, and a system user called Replication User on the primary Ceph cluster.
Configure each resource as the default.
Treat this zone as the primary zone.
Use the names provided in this table:
| Resource | Name |
|---|---|
| Realm | cl260 |
| Zonegroup | classroom |
| Zone | main |
| System User | repl.user |
| Access Key | replication |
| Secret Key | secret |
| Endpoint | http://serverc:80 |
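A recurring requirement in this setup is that the zone's system keys and the replication user's keys are identical; otherwise, multisite sync cannot authenticate. The following is a minimal, illustrative Python sketch (not part of the lab) that checks this against trimmed-down versions of the JSON that `radosgw-admin` prints in the steps below:

```python
import json

# Illustrative helper: confirm that a zone's system_key matches one of the
# replication user's S3 key pairs, as multisite sync requires.
def keys_match(zone_json: str, user_json: str) -> bool:
    zone = json.loads(zone_json)
    user = json.loads(user_json)
    zkey = zone["system_key"]
    pairs = {(k["access_key"], k["secret_key"]) for k in user["keys"]}
    return (zkey["access_key"], zkey["secret_key"]) in pairs

# Abbreviated versions of the zone and user JSON shown in this exercise:
zone_doc = '{"name": "main", "system_key": {"access_key": "replication", "secret_key": "secret"}}'
user_doc = ('{"user_id": "repl.user", "keys": '
            '[{"user": "repl.user", "access_key": "replication", "secret_key": "secret"}]}')

print(keys_match(zone_doc, user_doc))  # → True
```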
Deploy a RADOS Gateway service in the primary cluster called cl260-1 with one RGW instance on serverc.
Configure the primary zone name and disable dynamic bucket index resharding.
On the secondary Ceph cluster, configure a secondary zone called fallback for the classroom zonegroup.
Object resources created in the primary zone must replicate to the secondary zone.
Configure the endpoint of the secondary zone as http://serverf:80.
Deploy a RADOS Gateway service in the secondary cluster called cl260-2 with one RGW instance.
Configure the secondary zone name and disable dynamic bucket index resharding.
Create an Amazon S3 API user called S3 User with a uid of apiuser, an access key of review, and a secret key of securekey.
Create a Swift API subuser with secret key of secureospkey.
Grant full access to both the user and subuser.
Create a bucket called images by using the Amazon S3 API.
Upload the /etc/favicon.png file to the images container by using the Swift API.
The object must be available as favicon-image.
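For orientation, the bucket and object named in these specifications map to client-side identifiers roughly as follows. This is an illustrative sketch; the `/swift/v1` prefix is the RGW default Swift URL prefix and is an assumption, not something stated in the lab:

```python
# Hypothetical path-building helpers for the resources named above.
def s3_uri(bucket: str) -> str:
    # The AWS CLI addresses buckets as s3:// URIs.
    return f"s3://{bucket}"

def swift_object_path(container: str, obj: str) -> str:
    # /swift/v1 is the default RGW Swift URL prefix (assumption).
    return f"/swift/v1/{container}/{obj}"

print(s3_uri("images"))                              # → s3://images
print(swift_object_path("images", "favicon-image"))  # → /swift/v1/images/favicon-image
```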
Log in to serverc as the admin user.
Create a realm called cl260, a zonegroup called classroom, a zone called main, and a system user called Replication User.
Use the UID of repl.user, access key of replication, and secret key of secret for the user.
Set the zone endpoint to http://serverc:80.
Log in to serverc as the admin user and use sudo to run the cephadm shell.
[student@workstation ~]$ ssh admin@serverc
[admin@serverc ~]$ sudo cephadm shell
[ceph: root@serverc /]#
Create a realm called cl260.
Set the realm as default.
[ceph: root@serverc /]# radosgw-admin realm create --rgw-realm=cl260 --default
{
"id": "8ea5596f-e2bb-4ac5-8fc8-9122de311e26",
"name": "cl260",
"current_period": "75c34edd-428f-4c7f-a150-6236bf6102db",
"epoch": 1
}

Create a zonegroup called classroom.
Configure the classroom zonegroup with an endpoint on the serverc node.
Set the classroom zonegroup as the default.
[ceph: root@serverc /]# radosgw-admin zonegroup create --rgw-zonegroup=classroom \
--endpoints=http://serverc:80 --master --default
{
"id": "2b1495f8-5ac3-4ec5-897e-ae5e0923d0b9",
"name": "classroom",
"api_name": "classroom",
"is_master": "true",
"endpoints": [
"http://serverc:80"
],
...output omitted...

Create a master zone called main.
Configure the zone with an endpoint pointing to http://serverc:80.
Use replication as the access key and secret as the secret key.
Set the main zone as the default.
[ceph: root@serverc /]# radosgw-admin zone create --rgw-zonegroup=classroom \
--rgw-zone=main --endpoints=http://serverc:80 --access-key=replication \
--secret=secret --master --default
{
"id": "b50c6d11-6ab6-4a3e-9fb6-286798ba950d",
"name": "main",
...output omitted...
"system_key": {
"access_key": "replication",
"secret_key": "secret"
...output omitted...
"realm_id": "8ea5596f-e2bb-4ac5-8fc8-9122de311e26",
"notif_pool": "main.rgw.log:notif"
}

Create a system user called repl.user to access the zone pools.
The keys for the repl.user user must match the keys configured for the zone.
[ceph: root@serverc /]# radosgw-admin user create --uid="repl.user" \
--display-name="Replication User" --secret=secret --system \
--access-key=replication
{
"user_id": "repl.user",
"display_name": "Replication User",
...output omitted...
{
"user": "repl.user",
"access_key": "replication",
"secret_key": "secret"
...output omitted...

Commit the realm configuration structure changes to the period.
[ceph: root@serverc /]# radosgw-admin period update --commit
{
"id": "93a7f406-0bbd-43a5-a32a-c217386d534b",
"epoch": 1,
"predecessor_uuid": "75c34edd-428f-4c7f-a150-6236bf6102db",
...output omitted...
"id": "2b1495f8-5ac3-4ec5-897e-ae5e0923d0b9",
"name": "classroom",
"api_name": "classroom",
"is_master": "true",
"endpoints": [
"http://serverc:80"
],
...output omitted...
"id": "b50c6d11-6ab6-4a3e-9fb6-286798ba950d",
"name": "main",
"endpoints": [
"http://serverc:80"
...output omitted...
"realm_id": "8ea5596f-e2bb-4ac5-8fc8-9122de311e26",
"realm_name": "cl260",
"realm_epoch": 2
}

Create a RADOS Gateway service called cl260-1 with a single RGW daemon on serverc.
Verify that the RGW daemon is up and running.
Configure the zone name in the configuration database and disable dynamic bucket index resharding.
Create a RADOS gateway service called cl260-1 with a single RGW daemon on the serverc node.
[ceph: root@serverc /]# ceph orch apply rgw cl260-1 --realm=cl260 --zone=main \
--placement="1 serverc.lab.example.com"
Scheduled rgw.cl260-1 update...
[ceph: root@serverc /]# ceph orch ps --daemon-type rgw
NAME                        HOST                     STATUS         REFRESHED  AGE  PORTS  ...
rgw.cl260-1.serverc.iwsaop  serverc.lab.example.com  running (70s)  65s ago    70s  *:80   ...
Configure the zone name in the configuration database.
[ceph: root@serverc /]# ceph config set client.rgw rgw_zone main
[ceph: root@serverc /]# ceph config get client.rgw rgw_zone
main
Disable dynamic bucket index resharding.
[ceph: root@serverc /]# ceph config set client.rgw rgw_dynamic_resharding false
[ceph: root@serverc /]# ceph config get client.rgw rgw_dynamic_resharding
false
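The ceph orch apply command used above is shorthand for a cephadm service specification. An approximately equivalent spec file, shown here only as a sketch for reference (the lab does not use a spec file), would be:

```yaml
service_type: rgw
service_id: cl260-1
placement:
  count: 1
  hosts:
    - serverc.lab.example.com
spec:
  rgw_realm: cl260
  rgw_zone: main
```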
Log in to serverf as the admin user.
Pull the realm and period configuration from the serverc node.
Use the credentials for repl.user to authenticate.
Verify that the pulled realm and zonegroup are set as default for the secondary cluster.
Create a secondary zone called fallback for the classroom zonegroup.
In a second terminal, log in to serverf as the admin user and use sudo to run the cephadm shell.
[student@workstation ~]$ ssh admin@serverf
[admin@serverf ~]$ sudo cephadm shell
[ceph: root@serverf /]#
Pull the realm and period configuration from serverc.
[ceph: root@serverf /]# radosgw-admin realm pull --url=http://serverc:80 \
--access-key=replication --secret-key=secret
{
    "id": "8ea5596f-e2bb-4ac5-8fc8-9122de311e26",
    "name": "cl260",
    "current_period": "93a7f406-0bbd-43a5-a32a-c217386d534b",
    "epoch": 2
}
[ceph: root@serverf /]# radosgw-admin period pull --url=http://serverc:80 \
--access-key=replication --secret-key=secret
{
    "id": "93a7f406-0bbd-43a5-a32a-c217386d534b",
    "epoch": 1,
    "predecessor_uuid": "75c34edd-428f-4c7f-a150-6236bf6102db",
    "sync_status": [],
    "period_map": {
        "id": "93a7f406-0bbd-43a5-a32a-c217386d534b",
        "zonegroups": [
            {
                "id": "2b1495f8-5ac3-4ec5-897e-ae5e0923d0b9",
                "name": "classroom",
                "api_name": "classroom",
                "is_master": "true",
                "endpoints": [
                    "http://serverc:80"
...output omitted...
                "zones": [
                    {
                        "id": "b50c6d11-6ab6-4a3e-9fb6-286798ba950d",
                        "name": "main",
                        "endpoints": [
                            "http://serverc:80"
...output omitted...
    "master_zonegroup": "2b1495f8-5ac3-4ec5-897e-ae5e0923d0b9",
    "master_zone": "b50c6d11-6ab6-4a3e-9fb6-286798ba950d",
...output omitted...
    "realm_id": "8ea5596f-e2bb-4ac5-8fc8-9122de311e26",
    "realm_name": "cl260",
    "realm_epoch": 2
}
Set the cl260 realm and classroom zone group as default.
[ceph: root@serverf /]# radosgw-admin realm default --rgw-realm=cl260
[ceph: root@serverf /]# radosgw-admin zonegroup default --rgw-zonegroup=classroom
Create a zone called fallback.
Configure the fallback zone with the endpoint pointing to http://serverf:80.
[ceph: root@serverf /]# radosgw-admin zone create --rgw-zonegroup=classroom \
--rgw-zone=fallback --endpoints=http://serverf:80 --access-key=replication \
--secret-key=secret --default
{
"id": "fe105db9-fd00-4674-9f73-0d8e4e93c98c",
"name": "fallback",
...output omitted...
"system_key": {
"access_key": "replication",
"secret_key": "secret"
...output omitted...
"realm_id": "8ea5596f-e2bb-4ac5-8fc8-9122de311e26",
"notif_pool": "fallback.rgw.log:notif"
}

Commit the site configuration.
[ceph: root@serverf /]# radosgw-admin period update --commit --rgw-zone=fallback
{
"id": "93a7f406-0bbd-43a5-a32a-c217386d534b",
"epoch": 2,
"predecessor_uuid": "75c34edd-428f-4c7f-a150-6236bf6102db",
"sync_status": [],
"period_map": {
"id": "93a7f406-0bbd-43a5-a32a-c217386d534b",
"zonegroups": [
{
"id": "2b1495f8-5ac3-4ec5-897e-ae5e0923d0b9",
"name": "classroom",
"api_name": "classroom",
"is_master": "true",
"endpoints": [
"http://serverc:80"
],
...output omitted...
"zones": [
{
"id": "b50c6d11-6ab6-4a3e-9fb6-286798ba950d",
"name": "main",
"endpoints": [
"http://serverc:80"
],
...output omitted...
},
{
"id": "fe105db9-fd00-4674-9f73-0d8e4e93c98c",
"name": "fallback",
"endpoints": [
"http://serverf:80"
],
...output omitted...
"master_zonegroup": "2b1495f8-5ac3-4ec5-897e-ae5e0923d0b9",
"master_zone": "b50c6d11-6ab6-4a3e-9fb6-286798ba950d",
...output omitted...
"realm_id": "8ea5596f-e2bb-4ac5-8fc8-9122de311e26",
"realm_name": "cl260",
"realm_epoch": 2
}

Create a RADOS Gateway service called cl260-2 with a single RGW daemon on the serverf node.
Verify that the RGW daemon is up and running.
Configure the zone name in the configuration database and disable dynamic bucket index resharding.
Create a RADOS gateway service called cl260-2 with a single RGW daemon on serverf.
[ceph: root@serverf /]# ceph orch apply rgw cl260-2 --zone=fallback \
--placement="1 serverf.lab.example.com" --realm=cl260
Scheduled rgw.cl260-2 update...
[ceph: root@serverf /]# ceph orch ps --daemon-type rgw
NAME                        HOST                     STATUS         REFRESHED  AGE  PORTS  ...
rgw.cl260-2.serverf.lqcjui  serverf.lab.example.com  running (20s)  14s ago    20s  *:80   ...
Configure the zone name in the configuration database.
[ceph: root@serverf /]# ceph config set client.rgw rgw_zone fallback
[ceph: root@serverf /]# ceph config get client.rgw rgw_zone
fallback
Disable dynamic bucket index resharding.
[ceph: root@serverf /]# ceph config set client.rgw rgw_dynamic_resharding false
[ceph: root@serverf /]# ceph config get client.rgw rgw_dynamic_resharding
false
Verify the synchronization status.
[ceph: root@serverf /]# radosgw-admin sync status
          realm 8ea5596f-e2bb-4ac5-8fc8-9122de311e26 (cl260)
      zonegroup 2b1495f8-5ac3-4ec5-897e-ae5e0923d0b9 (classroom)
           zone fe105db9-fd00-4674-9f73-0d8e4e93c98c (fallback)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: b50c6d11-6ab6-4a3e-9fb6-286798ba950d (main)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
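The sync status output is line-oriented text, so the "caught up" markers are easy to check for in a script. The following is a minimal, illustrative Python sketch (not part of the lab) run against an abbreviated copy of the output above:

```python
# Scan `radosgw-admin sync status` text and report whether both the
# metadata and data sync report "caught up". Illustrative only.
def caught_up(status_text: str) -> bool:
    meta_ok = "metadata is caught up with master" in status_text
    data_ok = "data is caught up with source" in status_text
    return meta_ok and data_ok

# Abbreviated sample of the output shown above:
sample = """\
metadata sync syncing
              incremental sync: 64/64 shards
              metadata is caught up with master
data sync source: b50c6d11-6ab6-4a3e-9fb6-286798ba950d (main)
                  incremental sync: 128/128 shards
                  data is caught up with source
"""

print(caught_up(sample))  # → True
```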
On serverc, use the radosgw-admin command to create a user called apiuser for the Amazon S3 API and a subuser called apiuser:swift for the Swift API.
For the apiuser user, utilize the access key of review, secret key of securekey, and grant full access.
For the apiuser:swift subuser, utilize the secret of secureospkey and grant the subuser full access.
Create an Amazon S3 API user called S3 user with the UID of apiuser.
Assign an access key of review and a secret of securekey, and grant the user full access.
[ceph: root@serverc /]# radosgw-admin user create --display-name="S3 user" \
--uid="apiuser" --access="full" --access_key="review" --secret="securekey"
{
"user_id": "apiuser",
"display_name": "S3 user",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "apiuser",
"access_key": "review",
"secret_key": "securekey"
}
],
"swift_keys": [],
...output omitted...

Create a Swift subuser called apiuser:swift, set secureospkey as the subuser secret, and grant full access.
[ceph: root@serverc /]# radosgw-admin subuser create --uid="apiuser" \
--access="full" --subuser="apiuser:swift" --secret="secureospkey"
{
"user_id": "apiuser",
"display_name": "S3 user",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"subusers": [
{
"id": "apiuser:swift",
"permissions": "full-control"
}
],
"keys": [
{
"user": "apiuser",
"access_key": "review",
"secret_key": "securekey"
}
],
"swift_keys": [
{
"user": "apiuser:swift",
"secret_key": "secureospkey"
}
...output omitted...

On the serverc node, exit the cephadm shell.
Create a bucket called review.
Configure the AWS CLI tool to use the apiuser user credentials.
Use the swift upload command to upload the /etc/favicon.png file to the images bucket.
Exit the cephadm shell.
Configure the AWS CLI tool to use operator credentials.
Enter review as the access key and securekey as the secret key.
[ceph: root@serverc /]# exit
exit
[admin@serverc ~]$ aws configure --profile=ceph
AWS Access Key ID [None]: review
AWS Secret Access Key [None]: securekey
Default region name [None]: Enter
Default output format [None]: Enter
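The aws configure command stores these answers under the named profile in the admin user's home directory. The resulting ~/.aws/credentials file contains roughly the following fragment (shown for reference; the lab does not display it):

```ini
[ceph]
aws_access_key_id = review
aws_secret_access_key = securekey
```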
Create a bucket called images.
[admin@serverc ~]$ aws --profile=ceph --endpoint=http://serverc:80 s3 \
mb s3://images
make_bucket: images

Use the swift upload command to upload the /etc/favicon.png file to the images bucket.
The object must be available as favicon-image.
[admin@serverc ~]$ swift -V 1.0 -A http://serverc:80/auth/v1 -U apiuser:swift \
-K secureospkey upload images /etc/favicon.png --object-name favicon-image
favicon-image

Exit and close the second terminal.
Return to workstation as the student user.
[ceph: root@serverf /]# exit
exit
[admin@serverf ~]$ exit
[student@workstation ~]$ exit
[admin@serverc ~]$ exit
[student@workstation ~]$

This concludes the lab.