Use filters and lookup plug-ins to get and format values for variables, modify a play to implement rolling updates, and optimize the execution of plays and tasks in a playbook.
Outcomes
Use filters and lookup plug-ins to manipulate variables in play and role tasks.
Delegate tasks to other hosts, run hosts through plays in batches with the serial keyword, and limit failures with the max_fail_percentage keyword.
Set privilege escalation per play or per task.
Add tags to target specific tasks.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
This command ensures that the remote https://git.lab.example.com/student/review-cr2.git Git repository is initialized.
The Git repository contains playbooks that configure a front-end load balancer and a pool of back-end web servers.
You can push changes to this repository using Student@123 as the Git password.
[student@workstation ~]$ lab start review-cr2
Specifications
Create the /home/student/git-repos directory if it does not exist.
Clone the https://git.lab.example.com/student/review-cr2.git Git repository into the /home/student/git-repos directory.
Create the exercise branch for this exercise to store your changes.
Configure the project's ansible.cfg file so that privilege escalation is not performed by default.
Edit the plays in the deploy_apache.yml, deploy_haproxy.yml, and deploy_webapp.yml playbooks so that privilege escalation is enabled at the play level.
Privilege escalation is not required for fact gathering tasks.
Edit the roles/firewall/tasks/main.yml tasks file so that it uses filters to set default values for variables in each of the three tasks if they are not set, as follows:
For the state option, if item['state'] is not set, then set it to enabled by default.
For the zone option, if item['zone'] is not set, then omit the zone option.
In the Ensure Firewall Port Configuration task, in the port option to the firewalld module, if item['protocol'] is not set, then set it to tcp.
Use the lower filter to ensure that its value is in lowercase.
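As a rough analogy (plain Python, not Ansible code), the chained Jinja2 filters `item['protocol'] | default('tcp') | lower` behave like a dictionary lookup with a fallback followed by case normalization:

```python
# Rough Python analogy for "{{ item['protocol'] | default('tcp') | lower }}":
# use the value when the key is set, fall back to 'tcp' otherwise, and
# normalize the result to lowercase. The rule dictionaries are hypothetical.
def protocol_of(item):
    return item.get("protocol", "tcp").lower()

rule = {"port": 8008, "protocol": "TCP"}   # 'protocol' set, but uppercase
bare_rule = {"port": 8008}                 # 'protocol' not set

print(protocol_of(rule))       # uppercase value is normalized
print(protocol_of(bare_rule))  # missing key falls back to the default
```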
You can test your changes to the roles/firewall/tasks/main.yml file by running the test_firewall_role.yml playbook using automation content navigator and the ee-supported-rhel8 automation execution environment.
Edit the deploy_apache.yml playbook.
Change the value of the firewall_rules variable to a Jinja2 expression that uses the template lookup plug-in to dynamically generate the variable's setting from the apache_firewall_rules.yml.j2 Jinja2 template.
Use the from_yaml filter in the Jinja2 expression to convert the resulting value from a text string into a YAML data structure that Ansible can interpret.
Edit the templates/apache_firewall_rules.yml.j2 template to replace the load_balancer_ip_addr host variable in the Jinja2 for loop with the hostvars[server]['ansible_facts']['default_ipv4']['address'] fact.
Refactor the most expensive task in the apache role to make it more efficient.
Enable the timer and profile_tasks callback plug-ins for the project.
Using the ee-supported-rhel8:latest automation execution environment, run the site.yml playbook and analyze the output to find the most time-expensive task.
Refactor that time-expensive task.
Add the apache_installer tag to the refactored task.
To verify your work, rerun the site.yml playbook but limit the execution to the task with the apache_installer tag.
The update_webapp.yml playbook performs a rolling update of the web content on the back-end web servers.
It is not yet functional.
Edit the update_webapp.yml playbook as follows:
Add a pre_tasks section to the play.
In that section, add a task that uses the community.general.haproxy module to disable the web servers on the HAProxy load balancer.
Disable the host by using the inventory_hostname variable in the app back end.
Delegate the task to the load balancer.
The {{ groups['lb_servers'][0] }} Jinja2 expression provides the name of this load balancer.
Add a task at the end of the post_tasks section to re-enable the web servers.
Configure the play in the playbook to run in batches. The first batch must contain 5% of the hosts, the second batch 35% of the hosts, and the final batch all remaining hosts in the play.
Set max_fail_percentage on the play with a value that ensures that playbook execution stops if any host fails a task during the execution of the play.
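A simplified model (an assumption about the semantics, not Ansible source code) of how max_fail_percentage stops a play: the play aborts for a batch when the percentage of failed hosts in that batch exceeds the threshold, so a value of 0 stops on any single failure:

```python
# Simplified model: a play aborts when the failed fraction of a batch
# strictly exceeds max_fail_percentage. With a threshold of 0, one
# failed host (any nonzero percentage) is enough to abort.
def play_aborts(batch_size, failed_hosts, max_fail_percentage):
    return (failed_hosts / batch_size) * 100 > max_fail_percentage

print(play_aborts(3, 1, 0))   # one failure out of three exceeds 0%
print(play_aborts(3, 0, 0))   # no failures, play continues
```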
To verify your work, run the update_webapp.yml playbook using automation content navigator and the ee-supported-rhel8 automation execution environment.
To verify the correct deployment of the load balancer and the web servers, run the curl servera command several times from the workstation machine.
The HAProxy server that you installed on the servera machine dispatches the requests between the back-end web servers that are installed on the serverb to serverf machines.
The command must return the name of a different back-end web server each time you run it.
[student@workstation review-cr2]$ curl servera
This is serverd. (version v1.0)
[student@workstation review-cr2]$ curl servera
This is servere. (version v1.0)
[student@workstation review-cr2]$ curl servera
This is serverf. (version v1.0)
Commit and push your changes to the exercise branch.
If prompted, use Student@123 as the password.
Clone the https://git.lab.example.com/student/review-cr2.git Git repository into the /home/student/git-repos directory and then create a branch for this exercise.
From a terminal, create the /home/student/git-repos directory if it does not exist, and then change into it.
[student@workstation ~]$ mkdir -p ~/git-repos/
[student@workstation ~]$ cd ~/git-repos/
Clone the https://git.lab.example.com/student/review-cr2.git repository and then change into the cloned repository.
[student@workstation git-repos]$ git clone \
> https://git.lab.example.com/student/review-cr2.git
Cloning into 'review-cr2'...
...output omitted...
[student@workstation git-repos]$ cd review-cr2
Create the exercise branch and switch to it.
[student@workstation review-cr2]$ git checkout -b exercise
Switched to a new branch 'exercise'

Modify the ansible.cfg file so that privilege escalation is not performed by default.
Because the tasks in the firewall, haproxy, apache, and webapp roles require privilege escalation, enable privilege escalation at the play level in the deploy_apache.yml, deploy_haproxy.yml, and deploy_webapp.yml playbooks.
Edit the ansible.cfg file to remove the become=true entry from the [privilege_escalation] section, or set become=false in that section.
Alternatively, remove the entire [privilege_escalation] section from the ansible.cfg file.
If you choose the last option, then the file contains the following content:
[defaults]
inventory=inventory
remote_user=devops
collections_paths=./collections:/usr/share/ansible/collections
Enable privilege escalation for the Ensure Apache is deployed play in the deploy_apache.yml playbook.
Add the become: true line.
---
- name: Ensure Apache is deployed
hosts: web_servers
force_handlers: true
gather_facts: false
become: true
roles:
# Use the apache_firewall_rules.yml.j2 template to
# generate the firewall rules.
- role: apache
firewall_rules: []

Enable privilege escalation for the play in the deploy_haproxy.yml playbook.
Add the become: true line.
---
- name: Gather web_server facts
hosts: web_servers
gather_facts: true
tasks: []
- name: Ensure HAProxy is deployed
hosts: lb_servers
force_handlers: true
become: true
roles:
# The "haproxy" role has a dependency on the "firewall"
# role. The "firewall" role requires a "firewall_rules"
# variable to be defined.
- role: haproxy
haproxy_backend_port: "{{ apache_port }}"
# all backend servers are active; none are disabled.
haproxy_backend_pool: "{{ groups['web_servers'] }}"
haproxy_active_backends: "{{ groups['web_servers'] }}"

Enable privilege escalation for the play in the deploy_webapp.yml playbook.
Add the become: true line.
---
- name: Ensure Web App is deployed
hosts: web_servers
become: true
roles:
- role: webapp

Edit each of the tasks in the roles/firewall/tasks/main.yml tasks file to use filters to set default values for specific variables if they do not have a value set, as follows:
For the state option, if item['state'] is not set, then set it to enabled by default.
For the zone option, if item['zone'] is not set, then omit the zone option.
In the Ensure Firewall Port Configuration task, in the port option to the firewalld module, if item['protocol'] is not set, then set it to tcp.
Use the lower filter to ensure that its value is in lowercase.
Edit the roles/firewall/tasks/main.yml file.
For the state option of all three firewalld tasks, add the default('enabled') filter to the item['state'] Jinja2 expression.
The updated file should consist of the following content:
- name: Ensure Firewall Sources Configuration
ansible.posix.firewalld:
source: "{{ item['source'] }}"
zone: "{{ item['zone'] }}"
permanent: true
state: "{{ item['state'] | default('enabled') }}"
loop: "{{ firewall_rules }}"
when: item['source'] is defined
notify: reload firewalld
- name: Ensure Firewall Service Configuration
ansible.posix.firewalld:
service: "{{ item['service'] }}"
zone: "{{ item['zone'] }}"
permanent: true
state: "{{ item['state'] | default('enabled') }}"
loop: "{{ firewall_rules }}"
when: item['service'] is defined
notify: reload firewalld
- name: Ensure Firewall Port Configuration
ansible.posix.firewalld:
port: "{{ item['port'] }}/{{ item['protocol'] }}"
zone: "{{ item['zone'] }}"
permanent: true
state: "{{ item['state'] | default('enabled') }}"
loop: "{{ firewall_rules }}"
when: item['port'] is defined
notify: reload firewalld

For the zone option of all three firewalld tasks, add the default(omit) filter to the item['zone'] Jinja2 expression.
The task file should now contain the following content:
- name: Ensure Firewall Sources Configuration
ansible.posix.firewalld:
source: "{{ item['source'] }}"
zone: "{{ item['zone'] | default(omit) }}"
permanent: true
state: "{{ item['state'] | default('enabled') }}"
loop: "{{ firewall_rules }}"
when: item['source'] is defined
notify: reload firewalld
- name: Ensure Firewall Service Configuration
ansible.posix.firewalld:
service: "{{ item['service'] }}"
zone: "{{ item['zone'] | default(omit) }}"
permanent: true
state: "{{ item['state'] | default('enabled') }}"
loop: "{{ firewall_rules }}"
when: item['service'] is defined
notify: reload firewalld
- name: Ensure Firewall Port Configuration
ansible.posix.firewalld:
port: "{{ item['port'] }}/{{ item['protocol'] }}"
zone: "{{ item['zone'] | default(omit) }}"
permanent: true
state: "{{ item['state'] | default('enabled') }}"
loop: "{{ firewall_rules }}"
when: item['port'] is defined
notify: reload firewalld

In the third firewalld task, Ensure Firewall Port Configuration, replace {{ item['protocol'] }} with {{ item['protocol'] | default('tcp') | lower }}.
Save your work.
The completed task file should contain the following content:
- name: Ensure Firewall Sources Configuration
ansible.posix.firewalld:
source: "{{ item['source'] }}"
zone: "{{ item['zone'] | default(omit) }}"
permanent: true
state: "{{ item['state'] | default('enabled') }}"
loop: "{{ firewall_rules }}"
when: item['source'] is defined
notify: reload firewalld
- name: Ensure Firewall Service Configuration
ansible.posix.firewalld:
service: "{{ item['service'] }}"
zone: "{{ item['zone'] | default(omit) }}"
permanent: true
state: "{{ item['state'] | default('enabled') }}"
loop: "{{ firewall_rules }}"
when: item['service'] is defined
notify: reload firewalld
- name: Ensure Firewall Port Configuration
ansible.posix.firewalld:
port: "{{ item['port'] }}/{{ item['protocol'] | default('tcp') | lower }}"
zone: "{{ item['zone'] | default(omit) }}"
permanent: true
state: "{{ item['state'] | default('enabled') }}"
loop: "{{ firewall_rules }}"
when: item['port'] is defined
notify: reload firewalld

Test the changes that you made to the roles/firewall/tasks/main.yml file by running the test_firewall_role.yml playbook.
If you completed the preceding step successfully, then the playbook runs without errors.
[student@workstation review-cr2]$ ansible-navigator run \
> -m stdout test_firewall_role.yml

PLAY [Test Firewall Role] ******************************************************
...output omitted...
PLAY RECAP *********************************************************************
serverf.lab.example.com    : ok=6    changed=5    unreachable=0    failed=0 ...
Examine the deploy_apache.yml playbook.
It calls the apache role to ensure that Apache HTTP Server is deployed, which itself calls the firewall role.
It also defines the firewall_rules variable that the firewall role uses to configure the firewalld service.
The 8008/tcp port for firewalld and the IPv4 address of the load balancer (172.25.250.10) must be enabled in the internal zone for firewalld.
Edit the playbook.
Change the value of the firewall_rules variable to a Jinja2 expression that uses the template lookup plug-in to dynamically generate the variable's setting from the apache_firewall_rules.yml.j2 Jinja2 template.
Use the from_yaml filter in the Jinja2 expression to convert the resulting value from a text string into a YAML data structure that Ansible can interpret.
One correct solution results in the following contents in the deploy_apache.yml Ansible Playbook:
- name: Ensure Apache is deployed
hosts: web_servers
force_handlers: true
gather_facts: false
become: true
roles:
# Use the apache_firewall_rules.yml.j2 template to
# generate the firewall rules.
- role: apache
firewall_rules: "{{ lookup('ansible.builtin.template', 'apache_firewall_rules.yml.j2') | from_yaml }}"

In the preceding example, make sure you enter the expression for the firewall_rules variable on a single line.
If you use linting rules that limit lines to 80 characters, you can split the Jinja2 expression across multiple lines by defining the firewall_rules variable with YAML's folded scalar syntax (a greater-than sign).
For example:
firewall_rules: >
{{ lookup('ansible.builtin.template', 'apache_firewall_rules.yml.j2')
| from_yaml }}

Examine the templates/apache_firewall_rules.yml.j2 template that your playbook now uses to set the firewall_rules variable.
The template contains a Jinja2 for loop that creates a rule for each host in the lb_servers group, setting the source IP address from the value of the load_balancer_ip_addr variable.
This solution is not ideal.
It requires you to set load_balancer_ip_addr as a host variable for each load balancer in the lb_servers group.
To remove this manual maintenance, use gathered facts from these hosts to set that value instead.
Edit the templates/apache_firewall_rules.yml.j2 file to replace the load_balancer_ip_addr host variable in the Jinja2 for loop with the hostvars[server]['ansible_facts']['default_ipv4']['address'] fact.
The completed templates/apache_firewall_rules.yml.j2 template should contain the following content:
- port: {{ apache_port }}
protocol: TCP
zone: internal
{% for server in groups['lb_servers'] %}
- zone: internal
source: "{{ hostvars[server]['ansible_facts']['default_ipv4']['address'] }}"
{% endfor %}

Save the template.
Enable the timer and profile_tasks callback plug-ins for the project.
The two plug-ins are part of the ansible.posix collection.
If desired, you can specify the FQCNs for the callback plug-ins.
Edit the ansible.cfg configuration file and add the plug-ins to the callbacks_enabled directive.
The modified file contains the following content:
[defaults]
inventory=inventory
remote_user=devops
collections_paths=./collections:/usr/share/ansible/collections
callbacks_enabled=ansible.posix.timer,ansible.posix.profile_tasks

Using the ee-supported-rhel8:latest automation execution environment, run the site.yml playbook and analyze the output to find the most time-expensive task.
Run the site.yml playbook using the stdout mode.
Your output should look similar to the following.
[student@workstation review-cr2]$ ansible-navigator run site.yml -m stdout
...output omitted...
PLAY RECAP ********************************************************************
servera.lab.example.com    : ok=7    changed=6    unreachable=0    failed=0 ...
serverb.lab.example.com    : ok=14   changed=9    unreachable=0    failed=0 ...
serverc.lab.example.com    : ok=14   changed=9    unreachable=0    failed=0 ...
serverd.lab.example.com    : ok=14   changed=9    unreachable=0    failed=0 ...
servere.lab.example.com    : ok=14   changed=9    unreachable=0    failed=0 ...
serverf.lab.example.com    : ok=14   changed=9    unreachable=0    failed=0 ...

Playbook run took 0 days, 0 hours, 1 minutes, 32 seconds
Friday 06 January 2023  20:10:46 +0000 (0:00:00.967)       0:01:32.206 ********
===============================================================================
apache : Ensure httpd packages are installed --------------------------- 36.38s
apache : Ensure SELinux allows httpd connections to a remote database -- 25.72s
haproxy : Ensure haproxy packages are present -------------------------- 10.91s
apache : restart httpd -------------------------------------------------- 1.86s
Gathering Facts --------------------------------------------------------- 1.63s
...output omitted...
The preceding output sorts the tasks by how long each took to complete. The following task took the most time:
apache : Ensure httpd packages are installed --------------------------- 36.38s
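A likely reason this task is slow: if it installs packages one at a time in a loop, each iteration runs a separate package-manager transaction. A looped yum task of roughly the following (hypothetical, reconstructed) shape illustrates the inefficient pattern:

```yaml
# Slow pattern (hypothetical original form): one yum transaction per package
- name: Ensure httpd packages are installed
  ansible.builtin.yum:
    name: "{{ item }}"
    state: present
  loop:
    - httpd
    - php
    - git
    - php-mysqlnd
```

Passing the full package list to a single yum task instead lets the package manager resolve and install everything in one transaction.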
Refactor the most expensive task to make it more efficient.
Add the apache_installer tag to the task.
Identify the file that contains the Ensure httpd packages are installed task.
[student@workstation review-cr2]$ grep -Rl 'Ensure httpd packages are installed'
roles/apache/tasks/main.yml
...output omitted...

Edit the roles/apache/tasks/main.yml task file to remove the loop from the yum task.
The modified file contains the following content:
---
# tasks file for apache
- name: Ensure httpd packages are installed
ansible.builtin.yum:
name:
- httpd
- php
- git
- php-mysqlnd
state: present
...output omitted...

Add the apache_installer tag to the yum task.
The modified file contains the following content:
---
# tasks file for apache
- name: Ensure httpd packages are installed
ansible.builtin.yum:
name:
- httpd
- php
- git
- php-mysqlnd
state: present
tags: apache_installer
- name: Ensure SELinux allows httpd connections to a remote database
ansible.posix.seboolean:
name: httpd_can_network_connect_db
state: true
persistent: true
- name: Ensure httpd service is started and enabled
ansible.builtin.service:
name: httpd
state: started
enabled: true
- name: Ensure configuration is deployed
ansible.builtin.template:
src: httpd.conf.j2
dest: /etc/httpd/conf/httpd.conf
owner: root
group: root
mode: 0644
setype: httpd_config_t
notify: restart httpd

Run the site.yml playbook with the ee-supported-rhel8:latest automation execution environment and the apache_installer tag.
Verify that the Ensure httpd packages are installed task takes less time to complete.
[student@workstation review-cr2]$ ansible-navigator run site.yml \
> -m stdout --tags apache_installer
...output omitted...
PLAY RECAP ********************************************************************
servera.lab.example.com    : ok=1    changed=0    unreachable=0    failed=0 ...
serverb.lab.example.com    : ok=3    changed=0    unreachable=0    failed=0 ...
serverc.lab.example.com    : ok=3    changed=0    unreachable=0    failed=0 ...
serverd.lab.example.com    : ok=3    changed=0    unreachable=0    failed=0 ...
servere.lab.example.com    : ok=3    changed=0    unreachable=0    failed=0 ...
serverf.lab.example.com    : ok=3    changed=0    unreachable=0    failed=0 ...

Playbook run took 0 days, 0 hours, 0 minutes, 7 seconds
Tuesday 10 January 2023  20:42:09 +0000 (0:00:01.036)       0:00:07.963 *******
===============================================================================
apache : Ensure httpd packages are installed ---------------------------- 4.02s
Gathering Facts --------------------------------------------------------- 1.52s
Gathering Facts --------------------------------------------------------- 1.37s
Gathering Facts --------------------------------------------------------- 1.04s
Add a pre_tasks section and task to the play in the update_webapp.yml playbook.
This task should disable the web servers on the HAProxy load balancer.
Without this task, external clients might experience problems due to unforeseen deployment issues with the web application.
Configure the new task as follows:
Use the community.general.haproxy module.
Disable the host by using the inventory_hostname variable in the app back end.
Delegate each action to the load balancer.
The {{ groups['lb_servers'][0] }} Jinja2 expression provides the name of this load balancer.
Edit the update_webapp.yml file, adding the pre_tasks task to disable the web server in the load balancer.
...output omitted...
  pre_tasks:
    - name: Remove web server from service during the update
      community.general.haproxy:
        state: disabled
        backend: app
        host: "{{ inventory_hostname }}"
      delegate_to: "{{ groups['lb_servers'][0] }}"
...output omitted...
In the play in the update_webapp.yml playbook, add a task to the post_tasks section after the smoke test to re-enable each web server in the HAProxy load balancer.
This task is similar in structure to the community.general.haproxy task in the pre_tasks section.
Add the task after the smoke test with the state: enabled directive.
...output omitted...
  post_tasks:
    - name: Smoke Test - Ensure HTTP 200 OK
      ansible.builtin.uri:
        url: "http://localhost:{{ apache_port }}"
        status_code: 200
      become: true

    - name: Enable healthy server in load balancers
      community.general.haproxy:
        state: enabled
        backend: app
        host: "{{ inventory_hostname }}"
      delegate_to: "{{ groups['lb_servers'][0] }}"
Configure the play in the update_webapp.yml playbook to run in batches.
This change mitigates the effects of unforeseen deployment errors.
Ensure that the playbook uses no more than three batches to complete the upgrade of all web server hosts.
Set the first batch to consist of 5% of the hosts in the play, the second batch 35% of the hosts, and the final batch to consist of all remaining hosts.
Add an appropriate setting for max_fail_percentage to the play to ensure that playbook execution stops if any host fails a task during the upgrade.
Add the max_fail_percentage directive to the play and set its value to 0, to stop execution on any failure.
Add the serial directive and set the value to a list of three elements, 5%, 35%, and 100%, to ensure that all servers are updated in the last batch.
The entire playbook should contain the following content:
---
- name: Upgrade Web Application
  hosts: web_servers
  become: true
  max_fail_percentage: 0
  serial:
    - "5%"
    - "35%"
    - "100%"

  pre_tasks:
    - name: Remove web server from service during the update
      community.general.haproxy:
        state: disabled
        backend: app
        host: "{{ inventory_hostname }}"
      delegate_to: "{{ groups['lb_servers'][0] }}"

  roles:
    - role: webapp

  post_tasks:
    - name: Smoke Test - Ensure HTTP 200 OK
      ansible.builtin.uri:
        url: "http://localhost:{{ apache_port }}"
        status_code: 200
      become: true

    - name: Enable healthy server in load balancers
      community.general.haproxy:
        state: enabled
        backend: app
        host: "{{ inventory_hostname }}"
      delegate_to: "{{ groups['lb_servers'][0] }}"
Use the ansible-navigator command to run the update_webapp.yml playbook.
The playbook performs a rolling update.
[student@workstation review-cr2]$ ansible-navigator run \
> -m stdout update_webapp.yml
...output omitted...
TASK [Remove web server from service during the update] ************************
Wednesday 04 January 2023  19:27:58 +0000 (0:00:01.568)       0:00:01.591 ******
changed: [serverb.lab.example.com -> servera.lab.example.com]
...output omitted...
TASK [Enable healthy server in load balancers] *********************************
Wednesday 04 January 2023  19:28:01 +0000 (0:00:00.705)       0:00:04.888 ******
changed: [serverb.lab.example.com -> servera.lab.example.com]
...output omitted...
TASK [Remove web server from service during the update] ************************
Wednesday 04 January 2023  19:28:03 +0000 (0:00:01.132)       0:00:06.517 ******
changed: [serverc.lab.example.com -> servera.lab.example.com]
...output omitted...
TASK [Enable healthy server in load balancers] *********************************
Wednesday 04 January 2023  19:28:05 +0000 (0:00:00.503)       0:00:08.753 ******
changed: [serverc.lab.example.com -> servera.lab.example.com]
...output omitted...
TASK [Remove web server from service during the update] ************************
Wednesday 04 January 2023  19:28:07 +0000 (0:00:01.316)       0:00:10.566 ******
changed: [serverd.lab.example.com -> servera.lab.example.com]
changed: [servere.lab.example.com -> servera.lab.example.com]
changed: [serverf.lab.example.com -> servera.lab.example.com]
...output omitted...
TASK [Enable healthy server in load balancers] *********************************
Wednesday 04 January 2023  19:28:10 +0000 (0:00:00.627)       0:00:13.911 ******
changed: [serverd.lab.example.com -> servera.lab.example.com]
changed: [servere.lab.example.com -> servera.lab.example.com]
changed: [serverf.lab.example.com -> servera.lab.example.com]
...output omitted...
Notice that Ansible first updates the serverb.lab.example.com machine, which is the only machine in the first batch (5% of 5 machines).
Then Ansible updates the serverc.lab.example.com machine, which is the only machine in the second batch (35% of 5 machines).
Finally, Ansible updates the remaining machines (100% of the machines in the last batch).
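The batch sizes above can be modeled roughly in Python (the rounding is an approximation of Ansible's behavior, which runs at least one host per percentage batch; this is not Ansible source code):

```python
# Rough model of translating serial percentages into batch sizes for a
# play with 5 web servers. Each percentage batch runs at least one host,
# and the final batch covers all remaining hosts.
def batch_sizes(total_hosts, percents):
    sizes, remaining = [], total_hosts
    for pct in percents:
        size = min(max(1, (total_hosts * pct) // 100), remaining)
        sizes.append(size)
        remaining -= size
    return sizes

print(batch_sizes(5, [5, 35, 100]))   # [1, 1, 3]
```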
Verify that HTTP requests from workstation to the load balancer on servera succeed.
[student@workstation review-cr2]$ curl servera
This is serverb. (version v1.0)
[student@workstation review-cr2]$ curl servera
This is serverc. (version v1.0)
Add the changed files, commit the changes, and push them to the Git repository.
If prompted, use Student@123 as the password.
[student@workstation review-cr2]$ git add .
[student@workstation review-cr2]$ git commit -m "Project updates"
[exercise ff11b64] Project updates
 6 files changed, 29 insertions(+), 14 deletions(-)
[student@workstation review-cr2]$ git push -u origin exercise
Password for 'https://student@git.lab.example.com': Student@123
...output omitted...