
Lab: Accessing Object Storage Using a REST API

In this lab, you will configure Ceph to provide object storage to clients, using both the Amazon S3 and OpenStack Swift APIs.

Outcomes

You should be able to:

  • Create buckets and containers using the Amazon S3 and OpenStack Swift APIs.

  • Upload and download objects using the Amazon S3 and OpenStack Swift APIs.

As the student user on the workstation machine, use the lab command to prepare your system for this lab.

This command ensures that the lab environment is created and ready for the lab exercise.

[student@workstation ~]$ lab start api-review

This command confirms that the hosts required for this exercise are accessible and configures a multisite RADOS Gateway service.

Procedure 9.3. Instructions

Important

This lab runs on a multisite RADOS Gateway deployment.

To ensure that metadata operations, such as user and bucket creation, occur on the master zone and are synced across the multisite service, perform all metadata operations on the serverc node. Other normal operations, such as uploading or downloading objects from the RADOS Gateway service, can be performed on any cluster node that has access to the service endpoint.
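If you want to confirm that the two zones are in sync before you begin, the radosgw-admin sync status command reports the replication state of the multisite service. This optional check is not part of the graded steps; for example, from an admin shell on serverc:

[admin@serverc ~]$ sudo cephadm shell -- radosgw-admin sync status
...output omitted...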

  1. On the serverc node, create a user for the S3 API and a subuser for the Swift API. Create the S3 user with the name S3 Operator, UID operator, access key 12345, and secret key 67890. Grant full access to the operator user.

    Create the Swift subuser of the operator user with the name operator:swift and the secret opswift. Grant full access to the subuser.

    1. Log in to serverc as the admin user.

      [student@workstation ~]$ ssh admin@serverc
      [admin@serverc ~]$
    2. Create an Amazon S3 API user called S3 Operator with the UID of operator. Assign an access key of 12345 and a secret of 67890, and grant the user full access.

      [admin@serverc ~]$ sudo cephadm shell -- radosgw-admin user create \
        --uid="operator" --access="full" --display-name="S3 Operator" \
        --access_key="12345" --secret="67890"
      ...output omitted...
    3. Create a Swift subuser called operator:swift. Set opswift as the subuser secret and grant full access.

      [admin@serverc ~]$ sudo cephadm shell -- radosgw-admin subuser create \
        --uid="operator" --subuser="operator:swift" --access="full" --secret="opswift"
      ...output omitted...
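    To confirm that both sets of credentials were stored, you can optionally inspect the user record; the keys, swift_keys, and subusers fields in the output should match the values that you just created.

      [admin@serverc ~]$ sudo cephadm shell -- radosgw-admin user info --uid="operator"
      ...output omitted...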
  2. Configure the AWS CLI tool to use the operator user credentials. Create a bucket called log-artifacts. The RADOS Gateway service is running on the default port on the serverc node.

    1. Configure the AWS CLI tool to use operator credentials. Enter 12345 as the access key and 67890 as the secret key.

      [admin@serverc ~]$ aws configure --profile=ceph
      AWS Access Key ID [None]: 12345
      AWS Secret Access Key [None]: 67890
      Default region name [None]: Enter
      Default output format [None]: Enter
    2. Create a bucket called log-artifacts.

      [admin@serverc ~]$ aws --profile=ceph --endpoint=http://serverc:80 s3 mb \
        s3://log-artifacts
      make_bucket: log-artifacts
    3. Verify that the bucket exists by listing the buckets with the AWS CLI.

      [admin@serverc ~]$ aws --profile=ceph --endpoint=http://serverc:80 s3 ls
      2021-11-03 06:00:39 log-artifacts
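    The aws configure command stores these keys in the [ceph] profile section of the ~/.aws/credentials file. If a later command fails with an access denied error, verify that file; a sketch of the expected contents, assuming the default AWS CLI file locations:

      [admin@serverc ~]$ cat ~/.aws/credentials
      [ceph]
      aws_access_key_id = 12345
      aws_secret_access_key = 67890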
  3. Using the Swift API, create a container called backup-artifacts. The RADOS Gateway service is on the default port on the serverc node.

    1. Create a Swift container called backup-artifacts.

      [admin@serverc ~]$ swift -V 1.0 -A http://serverc:80/auth/v1 -U operator:swift \
        -K opswift post backup-artifacts
    2. Verify that the container exists.

      [admin@serverc ~]$ swift -V 1.0 -A http://serverc:80/auth/v1 -U operator:swift \
        -K opswift list
      backup-artifacts
      log-artifacts
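    Behind the scenes, the swift client uses the Swift v1 authentication flow: it presents the subuser name and secret as headers, and the gateway returns an X-Auth-Token and an X-Storage-Url to use for subsequent requests. A minimal sketch with curl, assuming the same authentication endpoint as above:

      [admin@serverc ~]$ curl -i http://serverc:80/auth/v1 \
        -H "X-Auth-User: operator:swift" -H "X-Auth-Key: opswift"
      ...output omitted...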
  4. Create a 10 MB file called log-object-10MB.bin in the /tmp directory. Upload the log-object-10MB.bin file to the log-artifacts bucket. On the serverf node, download the log-object-10MB.bin file from the log-artifacts bucket.

    1. Create a 10 MB file called log-object-10MB.bin in the /tmp directory.

      [admin@serverc ~]$ dd if=/dev/zero of=/tmp/log-object-10MB.bin bs=1024K count=10
      10+0 records in
      10+0 records out
      10485760 bytes (10 MB, 10 MiB) copied, 0.00498114 s, 2.1 GB/s
    2. Upload the file log-object-10MB.bin to the log-artifacts bucket.

      [admin@serverc ~]$ aws --profile=ceph --endpoint=http://serverc:80 \
        --acl=public-read-write s3 cp /tmp/log-object-10MB.bin \
        s3://log-artifacts/log-object-10MB.bin
      ...output omitted...
    3. Log in to serverf as the admin user. Download the log-object-10MB.bin file from the log-artifacts bucket.

      [admin@serverc ~]$ ssh admin@serverf
      admin@serverf's password: redhat
      [admin@serverf ~]$ wget http://serverc:80/log-artifacts/log-object-10MB.bin
      --2021-10-20 21:28:58--  http://serverc/log-artifacts/log-object-10MB.bin
      Resolving serverc (serverc)... 172.25.250.12
      Connecting to serverc (serverc)|172.25.250.12|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 10485760 (10M) [application/octet-stream]
      Saving to: 'log-object-10MB.bin'
      
      log-object-10MB.bin     100%[==================================>]  10.00M  --.-KB/s    in 0.01s
      
      2021-10-20 21:28:58 (727 MB/s) - 'log-object-10MB.bin' saved [10485760/10485760]
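    The anonymous wget download works only because the object was uploaded with the public-read-write ACL. For a private object, you could instead generate a time-limited presigned URL from serverc, where the ceph profile is configured; a sketch, assuming the default expiry of one hour:

      [admin@serverc ~]$ aws --profile=ceph --endpoint=http://serverc:80 s3 presign \
        s3://log-artifacts/log-object-10MB.bin
      ...output omitted...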
  5. On the serverf node, create a 20 MB file called backup-object-20MB.bin in the /tmp directory. Upload the backup-object-20MB.bin file to the backup-artifacts container, using the service default port. View the status of the backup-artifacts container and verify that the Objects field has a value of 1.

    1. Create a 20 MB file called backup-object-20MB.bin in the /tmp directory.

      [admin@serverf ~]$ dd if=/dev/zero of=/tmp/backup-object-20MB.bin \
        bs=2048K count=10
      10+0 records in
      10+0 records out
      20971520 bytes (21 MB, 20 MiB) copied, 0.010515 s, 2.0 GB/s
    2. Upload the backup-object-20MB.bin file to the backup-artifacts container.

      [admin@serverf ~]$ swift -V 1.0 -A http://serverf:80/auth/v1 -U operator:swift \
        -K opswift upload backup-artifacts /tmp/backup-object-20MB.bin --object-name \
        backup-object-20MB.bin
      ...output omitted...
    3. View the statistics for the backup-artifacts container and verify that it contains the uploaded object.

      [admin@serverf ~]$ swift -V 1.0 -A http://serverf:80/auth/v1 -U operator:swift \
        -K opswift stat backup-artifacts
      ...output omitted...
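    For an object uploaded in a single segment, the ETag that Swift reports is the MD5 checksum of its contents, so you can optionally verify the upload by comparing the object statistics with a local checksum:

      [admin@serverf ~]$ swift -V 1.0 -A http://serverf:80/auth/v1 -U operator:swift \
        -K opswift stat backup-artifacts backup-object-20MB.bin
      ...output omitted...
      [admin@serverf ~]$ md5sum /tmp/backup-object-20MB.bin
      ...output omitted...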
  6. On the serverc node, download the backup-object-20MB.bin file to the /home/admin directory.

    [admin@serverf ~]$ exit
    Connection to serverf closed.
    [admin@serverc ~]$ swift -V 1.0 -A http://serverf:80/auth/v1 -U operator:swift \
      -K opswift download backup-artifacts backup-object-20MB.bin
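    The file is saved to the current working directory, /home/admin. You can optionally confirm that the full 20 MB object arrived intact:

    [admin@serverc ~]$ md5sum backup-object-20MB.bin
    ...output omitted...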
  7. Return to workstation as the student user.

    [admin@serverc ~]$ exit
    [student@workstation ~]$

Evaluation

Grade your work by running the lab grade api-review command from your workstation machine. Correct any reported failures and rerun the script until successful.

[student@workstation ~]$ lab grade api-review

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish api-review

This concludes the lab.
