
Lab: Coordinating Rolling Updates

In this lab, you modify a playbook to implement rolling updates.

Outcomes

  • Delegate tasks to other hosts.

  • Implement deployment batches with the serial keyword.

  • Limit failures with the max_fail_percentage keyword.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

This command initializes the remote Git repository at https://git.lab.example.com/student/update-review.git. The Git repository contains playbooks that configure a front-end load balancer and a pool of back-end web servers. You can push changes to this repository by using Student@123 as the Git password.

[student@workstation ~]$ lab start update-review

Procedure 8.4. Instructions

  1. Clone the https://git.lab.example.com/student/update-review.git repository into the /home/student/git-repos/update-review directory, and then create the exercise branch for this exercise. Run the site.yml playbook to deploy the web application infrastructure.

    1. From a terminal, create the directory /home/student/git-repos if it does not exist, and then change into it.

      [student@workstation ~]$ mkdir -p ~/git-repos/
      [student@workstation ~]$ cd ~/git-repos/
    2. Clone the repository and then change into the cloned repository.

      [student@workstation git-repos]$ git clone \
      > https://git.lab.example.com/student/update-review.git
      Cloning into 'update-review'...
      ...output omitted...
      [student@workstation git-repos]$ cd update-review
    3. Create the exercise branch.

      [student@workstation update-review]$ git checkout -b exercise
      Switched to a new branch 'exercise'
    4. Use the ansible-navigator run command to run the site.yml playbook.

      [student@workstation update-review]$ ansible-navigator run \
      > -m stdout site.yml
      
      PLAY [Gather web_server facts] *************************************************
      
      TASK [Gathering Facts] *********************************************************
      ok: [serverc.lab.example.com]
      ok: [serverf.lab.example.com]
      ok: [serverd.lab.example.com]
      ok: [servere.lab.example.com]
      ok: [serverb.lab.example.com]
      
      PLAY [Ensure HAProxy is deployed] **********************************************
      
      ...output omitted...
      
      PLAY RECAP *********************************************************************
      servera.lab.example.com    : ok=7    changed=6    unreachable=0    failed=0  ...
      serverb.lab.example.com    : ok=12   changed=8    unreachable=0    failed=0  ...
      serverc.lab.example.com    : ok=12   changed=8    unreachable=0    failed=0  ...
      serverd.lab.example.com    : ok=12   changed=8    unreachable=0    failed=0  ...
      servere.lab.example.com    : ok=12   changed=8    unreachable=0    failed=0  ...
      serverf.lab.example.com    : ok=12   changed=8    unreachable=0    failed=0  ...
  2. The update_webapp.yml playbook requires collections that are defined in the collections/requirements.yml file.

    Obtain a token from https://hub.lab.example.com by logging in as the student user with the redhat123 password. Use this token to update the token option in the ansible.cfg file. Use the ansible-galaxy command to install the collections. The collections must be available to the execution environment, so they must be installed in the /home/student/git-repos/update-review/collections/ directory.

    1. Log in to the private automation hub at https://hub.lab.example.com as the student user with redhat123 as the password.

    2. Navigate to Collections > API token management, and then click Load token. Copy the API token.

    3. Update both token lines in the ansible.cfg file by using the copied token. Your token is probably different from the one that is displayed in this example.

      ...output omitted...
      
      [galaxy_server.rh-certified_repo]
      url=https://hub.lab.example.com/api/galaxy/content/rh-certified
      token=f41f07130d6eb6ef2ded63a574c161b509c647dd
      
      [galaxy_server.community_repo]
      url=https://hub.lab.example.com/api/galaxy/content/community/
      token=f41f07130d6eb6ef2ded63a574c161b509c647dd
    4. Use the ansible-galaxy command to install the community.general content collection into the collections/ directory.

      [student@workstation update-review]$ ansible-galaxy collection install \
      > -r collections/requirements.yml -p collections/
      Starting galaxy collection install process
      ...output omitted...
      Installing 'community.general:6.1.0' to '/home/student/git-repos/update-review/collections/ansible_collections/community/general'
      community.general:6.1.0 was installed successfully
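The lab repository provides the collections/requirements.yml file; its exact contents are not shown here, but based on the install output above (only community.general:6.1.0 is installed), a minimal version of such a file might look like the following sketch. The actual lab file may differ:

```yaml
# collections/requirements.yml (sketch; the actual lab file may differ)
collections:
  - name: community.general
    version: 6.1.0
```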
  3. Add a pre_tasks section to the play in the update_webapp.yml playbook. Add a task to the section that disables the web servers on the HAProxy load balancer. Without this task, external clients might experience problems with the web application due to unforeseen deployment issues.

    Configure the new task as follows:

    • Use the community.general.haproxy module.

    • Disable the host in the app back end by using the inventory_hostname variable.

    • Delegate each action to the load balancer. The {{ groups['lb_servers'][0] }} Jinja2 expression provides the name of this load balancer.

    1. Edit the play in the update_webapp.yml playbook by adding a pre_tasks section containing a task that uses the community.general.haproxy module to disable the web server on the load balancer.

        pre_tasks:
          - name: Remove web server from service during the update
            community.general.haproxy:
              state: disabled
              backend: app
              host: "{{ inventory_hostname }}"
            delegate_to: "{{ groups['lb_servers'][0] }}"
  4. In the update_webapp.yml playbook, the first task in the post_tasks section is a "smoke test". That task verifies that the web application on each web server responds to requests that originate from that same server. This test is not realistic because requests to the web server normally originate from the load balancer, but it verifies that the web servers are operating.

    However, if you only test a web server's response from the web server itself, you do not test any network-related functions.

    Using delegation, modify the smoke test task so that each request to a web server originates from the load balancer. Use the inventory_hostname variable to connect to each web server.

    1. Add a delegate_to directive to the smoke test task that runs on the load balancer. Change the testing URL to point to the current value of the inventory_hostname variable:

          - name: Smoke Test - Ensure HTTP 200 OK
            ansible.builtin.uri:
              url: "http://{{ inventory_hostname }}:{{ apache_port }}"
              status_code: 200
            become: false
            delegate_to: "{{ groups['lb_servers'][0] }}"
  5. Add a task to the post_tasks section after the smoke test to re-enable each web server on the HAProxy load balancer. This task is similar in structure to the community.general.haproxy task in the pre_tasks section, except that it sets the state to enabled instead of disabled.

        - name: Enable healthy server in load balancers
          community.general.haproxy:
            state: enabled
            backend: app
            host: "{{ inventory_hostname }}"
          delegate_to: "{{ groups['lb_servers'][0] }}"
  6. Configure the play in the update_webapp.yml playbook to run through its tasks in batches to mitigate the effects of unforeseen deployment errors. Use no more than three batches to complete the upgrade of all web server hosts: set the first batch to consist of 5% of the hosts, the second batch to consist of 35% of the hosts, and the last batch to consist of all remaining hosts.

    Add an appropriate keyword to the play to ensure that playbook execution stops if any host fails a task during the upgrade.

    1. Add the max_fail_percentage directive and set the value to 0 to stop execution of the play on any failure. Add the serial directive and set its value to a list of three elements ("5%", "35%", and "100%") so that the final batch updates all remaining servers. The beginning of the play should appear as shown in the following example:

      - name: Upgrade Web Application
        hosts: web_servers
        become: true
        vars:
          webapp_version: v1.1
        max_fail_percentage: 0
        serial:
          - "5%"
          - "35%"
          - "100%"
    2. The completed update_webapp.yml playbook should appear as shown in the following example:

      ---
      - name: Upgrade Web Application
        hosts: web_servers
        become: true
        vars:
          webapp_version: v1.1
        max_fail_percentage: 0
        serial:
          - "5%"
          - "35%"
          - "100%"
      
        pre_tasks:
          - name: Remove web server from service during the update
            community.general.haproxy:
              state: disabled
              backend: app
              host: "{{ inventory_hostname }}"
            delegate_to: "{{ groups['lb_servers'][0] }}"
      
        roles:
          - role: webapp
      
        post_tasks:
          - name: Smoke Test - Ensure HTTP 200 OK
            ansible.builtin.uri:
              url: "http://{{ inventory_hostname }}:{{ apache_port }}"
              status_code: 200
            become: false
            delegate_to: "{{ groups['lb_servers'][0] }}"
      
          - name: Enable healthy server in load balancers
            community.general.haproxy:
              state: enabled
              backend: app
              host: "{{ inventory_hostname }}"
            delegate_to: "{{ groups['lb_servers'][0] }}"
  7. Run the update_webapp.yml playbook to perform the rolling update.

    1. Use the ansible-navigator command to run the update_webapp.yml playbook.

      [student@workstation update-review]$ ansible-navigator run \
      > -m stdout update_webapp.yml
      ...output omitted...
      
      PLAY RECAP *********************************************************************
      serverb.lab.example.com    : ok=6    changed=3    unreachable=0    failed=0  ...
      serverc.lab.example.com    : ok=6    changed=3    unreachable=0    failed=0  ...
      serverd.lab.example.com    : ok=6    changed=3    unreachable=0    failed=0  ...
      servere.lab.example.com    : ok=6    changed=3    unreachable=0    failed=0  ...
      serverf.lab.example.com    : ok=6    changed=3    unreachable=0    failed=0  ...
  8. Commit and push your changes to the remote Git repository.

    1. Add the changed files, commit the changes, and push them to the Git repository. If prompted, use Student@123 as the password.

      [student@workstation update-review]$ git add .
      [student@workstation update-review]$ git commit -m "Rolling updates"
      [exercise 8d38803] Rolling updates
       1 file changed, 26 insertions(+), 12 deletions(-)
      [student@workstation update-review]$ git push -u origin exercise
      Password for 'https://student@git.lab.example.com': Student@123
      Enumerating objects: 13, done.
      Counting objects: 100% (13/13), done.
      Delta compression using up to 4 threads.
      Compressing objects: 100% (6/6), done.
      Writing objects: 100% (7/7), 931 bytes | 931.00 KiB/s, done.
      Total 7 (delta 3), reused 0 (delta 0)
      To https://git.lab.example.com/student/update-review.git
         a36da15..5feb08e  exercise -> exercise

Evaluation

As the student user on the workstation machine, use the lab command to grade your work. Correct any reported failures and rerun the command until successful.

Note

Make sure to commit and push your changes to the Git repository before rerunning the script.

[student@workstation ~]$ lab grade update-review

Finish

On the workstation machine, change to the student user home directory and use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish update-review

This concludes the section.

Revision: do374-2.2-82dc0d7