Lab: Automating Linux Administration Tasks

In this lab, you perform common Linux administrative tasks on your managed hosts, using techniques that were covered in this chapter.

Outcomes

  • Create playbooks for configuring a software repository, users and groups, logical volumes, cron jobs, and additional network interfaces on a managed host.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

This command prepares your environment and ensures that all required resources are available.

[student@workstation ~]$ lab start system-review

Procedure 9.6. Instructions

  1. Create a playbook named repo_playbook.yml to run on the webservers host group that configures those managed hosts to use the Yum internal repository located at http://materials.example.com/yum/repository, and then installs the rhelver package available from that repository on those managed hosts.

    The repository configuration must satisfy the following requirements:

    • The configuration is in the /etc/yum.repos.d/example.repo file.

    • The repository ID is example-internal.

    • The base URL is http://materials.example.com/yum/repository.

    • The repository is configured to check RPM GPG signatures.

    • The repository description is Example Inc. Internal YUM repo.

    All RPM packages in that repository are signed with an organizational GPG key pair. That GPG public key is available at http://materials.example.com/yum/repository/RPM-GPG-KEY-example. You need to make sure that the key is configured on all managed hosts in the webservers host group.

    Run the playbook. You should confirm that the playbook installed the package on your managed hosts after it runs.

    1. Change into the /home/student/system-review directory.

      [student@workstation ~]$ cd ~/system-review
      [student@workstation system-review]$
    2. Create the repo_playbook.yml playbook, which runs on the managed hosts in the webservers host group.

      Add a task that uses the ansible.builtin.yum_repository module to ensure the correct configuration of the internal Yum repository on the remote host.

      The configuration must satisfy the following requirements:

      • The configuration is stored in the /etc/yum.repos.d/example.repo file.

      • The repository ID is example-internal.

      • The base URL is http://materials.example.com/yum/repository.

      • The repository is configured to check RPM GPG signatures.

      • The repository description is Example Inc. Internal YUM repo.

      The playbook contains the following content:

      ---
      - name: Repository Configuration
        hosts: webservers
        tasks:
          - name: Ensure Example Repo exists
            ansible.builtin.yum_repository:
              name: example-internal
              description: Example Inc. Internal YUM repo
              file: example
              baseurl: http://materials.example.com/yum/repository/
              gpgcheck: yes
    3. Add a second task to the play that uses the ansible.builtin.rpm_key module to ensure that the repository public key is present on the remote host. The repository public key URL is http://materials.example.com/yum/repository/RPM-GPG-KEY-example.

      The second task contains the following content:

          - name: Ensure Repo RPM Key is Installed
            ansible.builtin.rpm_key:
              key: http://materials.example.com/yum/repository/RPM-GPG-KEY-example
              state: present
    4. Add a third task to install the rhelver package available in the Yum internal repository.

      The third task contains the following content:

          - name: Install rhelver package
            ansible.builtin.dnf:
              name: rhelver
              state: present
    5. Run the repo_playbook.yml playbook:

      [student@workstation system-review]$ ansible-navigator run \
      > -m stdout repo_playbook.yml
      
      PLAY [Repository Configuration] ************************************************
      
      TASK [Gathering Facts] *********************************************************
      ok: [serverb.lab.example.com]
      
      TASK [Ensure Example Repo exists] **********************************************
      changed: [serverb.lab.example.com]
      
      TASK [Ensure Repo RPM Key is Installed] ****************************************
      changed: [serverb.lab.example.com]
      
      TASK [Install rhelver package] *************************************************
      changed: [serverb.lab.example.com]
      
      PLAY RECAP *********************************************************************
      serverb.lab.example.com    : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
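      As a point of reference for verification, the ansible.builtin.yum_repository module writes an INI-style repository file on the managed hosts. Given the task parameters above, the /etc/yum.repos.d/example.repo file should look approximately like this (key order may vary):

      ```ini
      [example-internal]
      baseurl = http://materials.example.com/yum/repository/
      gpgcheck = 1
      name = Example Inc. Internal YUM repo
      ```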
  2. Create a playbook named users.yml to run on the webservers host group that creates the webadmin user group, adds the ops1 and ops2 users, and ensures that both users have webadmin as a supplementary group on those managed hosts.

    Run the playbook. You should confirm that the users exist on the managed hosts and have webadmin as a supplementary group after the playbook runs.

    1. Create a vars/users_vars.yml variable file, which defines two users, ops1 and ops2, which belong to the webadmin user group.

      You might need to create the vars subdirectory.

      [student@workstation system-review]$ mkdir vars
      [student@workstation system-review]$ vim vars/users_vars.yml
      ---
      users:
        - username: ops1
          groups: webadmin
        - username: ops2
          groups: webadmin
    2. Create the users.yml playbook. Define a single play in the playbook that targets the webservers host group.

      Add a vars_files clause that defines the location of the vars/users_vars.yml file.

      Add a task that uses the group module to create the webadmin user group on the remote host.

      The playbook contains the following content:

      ---
      - name: Create multiple local users
        hosts: webservers
        vars_files:
          - vars/users_vars.yml
      
        tasks:
          - name: Add webadmin group
            ansible.builtin.group:
              name: webadmin
              state: present
    3. Add a second task to the playbook that uses the ansible.builtin.user module to create the users.

      Add a loop: "{{ users }}" clause to the task to loop through the variable file for every username found in the vars/users_vars.yml file.

      Use the item['username'] variable for the name: value. The variable file might contain additional information useful for creating the users, such as the groups that the users should belong to.

      The second task contains the following content:

          - name: Create user accounts
            ansible.builtin.user:
              name: "{{ item['username'] }}"
              groups: webadmin
            loop: "{{ users }}"
    4. Run the users.yml playbook:

      [student@workstation system-review]$ ansible-navigator run \
      > -m stdout users.yml
      
      PLAY [Create multiple local users] *********************************************
      
      TASK [Gathering Facts] *********************************************************
      ok: [serverb.lab.example.com]
      
      TASK [Add webadmin group] ******************************************************
      changed: [serverb.lab.example.com]
      
      TASK [Create user accounts] ****************************************************
      changed: [serverb.lab.example.com] => (item={'username': 'ops1', 'groups': 'webadmin'})
      changed: [serverb.lab.example.com] => (item={'username': 'ops2', 'groups': 'webadmin'})
      
      PLAY RECAP *********************************************************************
      serverb.lab.example.com    : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
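      If you want to spot-check the result, an Ansible ad-hoc command (run from the project directory, assuming the lab's default inventory) can report each account and its groups; the output of id should list webadmin as a supplementary group:

      ```shell
      [student@workstation system-review]$ ansible webservers -m command -a 'id ops1'
      ```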
  3. Install the redhat-rhel_system_roles-1.19.3.tar.gz Ansible Content Collection provided in the project directory to the collections directory.

    Create a playbook named storage.yml to run on the webservers host group and configure those managed hosts by using the redhat.rhel_system_roles.storage system role, as follows:

    • Uses the /dev/vdb device as an LVM physical volume for the apache-vg volume group.

    • Creates the content-lv logical volume, 64 MB in size, in the apache-vg volume group.

    • Creates the logs-lv logical volume, 128 MB in size, in the apache-vg volume group.

    • Formats each logical volume with an XFS file system.

    • Mounts the content-lv logical volume on the /var/www directory.

    • Mounts the logs-lv logical volume on the /var/log/httpd directory.

    Run the playbook, and then confirm that it ran correctly.

    1. Install the redhat-rhel_system_roles collection from the ~/system-review/redhat-rhel_system_roles-1.19.3.tar.gz file into the ~/system-review/collections directory.

      [student@workstation system-review]$ ansible-galaxy collection \
      > install ./redhat-rhel_system_roles-1.19.3.tar.gz -p collections
    2. Create the group_vars/webservers subdirectory.

      [student@workstation system-review]$ mkdir -pv group_vars/webservers
      mkdir: created directory 'group_vars'
      mkdir: created directory 'group_vars/webservers'
    3. Create the ~/system-review/group_vars/webservers/storage_vars.yml variables file.

      In the variable file, define a storage_pools variable with the pool name apache-vg for the volume group on the /dev/vdb device, and with the type set to lvm.

      Within the apache-vg pool define two logical volumes:

      • Define the content-lv logical volume with a size of 64 MB formatted with the XFS file system, mounted at /var/www.

      • Define the logs-lv logical volume with a size of 128 MB formatted with the XFS file system, mounted at /var/log/httpd.

      When completed, the ~/system-review/group_vars/webservers/storage_vars.yml variables file should contain the following content:

      ---
      storage_pools:
        - name: apache-vg
          type: lvm
          disks:
            - /dev/vdb
          volumes:
            - name: content-lv
              size: 64m
              mount_point: "/var/www"
              fs_type: xfs
              state: present
            - name: logs-lv
              size: 128m
              mount_point: "/var/log/httpd"
              fs_type: xfs
              state: present
    4. Create the storage.yml playbook to apply the redhat.rhel_system_roles.storage role to the webservers host group.

      When completed, the playbook should contain the following content:

      ---
      - name: Configure storage on webservers
        hosts: webservers
      
        roles:
          - name: redhat.rhel_system_roles.storage
    5. Run the storage.yml playbook.

      [student@workstation system-review]$ ansible-navigator run \
      > -m stdout storage.yml
      
      PLAY [Configure storage on webservers] *****************************************
      
      ...output omitted...
      
      TASK [redhat.rhel_system_roles.storage : make sure blivet is available] ********
      changed: [serverb.lab.example.com]
      
      ...output omitted...
      
      TASK [redhat.rhel_system_roles.storage : manage the pools and volumes to match the specified state] ***
      changed: [serverb.lab.example.com]
      
      ...output omitted...
      
      TASK [redhat.rhel_system_roles.storage : set up new/current mounts] ************
      changed: [serverb.lab.example.com] => (item={'src': '/dev/mapper/apache--vg-content--lv', 'path': '/var/www', 'fstype': 'xfs', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted'})
      changed: [serverb.lab.example.com] => (item={'src': '/dev/mapper/apache--vg-logs--lv', 'path': '/var/log/httpd', 'fstype': 'xfs', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted'})
      
      ...output omitted...
      
      PLAY RECAP *********************************************************************
      serverb.lab.example.com    : ok=21   changed=3    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0
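      To spot-check the storage configuration, you can run an ad-hoc command (assuming the lab's default inventory) to show the new mounts; the output should list the two logical volumes mounted at /var/www and /var/log/httpd:

      ```shell
      [student@workstation system-review]$ ansible webservers -m command -a 'df -h /var/www /var/log/httpd'
      ```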
  4. Create a playbook named create_crontab_file.yml to run on the webservers host group that uses the ansible.builtin.cron module to create a system crontab file that schedules a recurring Cron job. Create the /etc/cron.d/disk_usage file as the system crontab file. Configure that file's system Cron job as follows:

    • It must run as the devops user.

    • It must run every two minutes from 9:00 to 16:59 on Monday through Friday.

    • It must run the command df >> /home/devops/disk_usage.

    Run the playbook. After it runs, confirm that the Cron job is set up correctly. You can verify that the /etc/cron.d/disk_usage file deployed correctly, or inspect the /var/log/cron log file as root to see whether the job is running.

    1. Create a new playbook named create_crontab_file.yml, and add the lines needed to start the play. The play should target the managed hosts in the webservers group and enable privilege escalation.

      ---
      - name: Recurring cron job
        hosts: webservers
        become: true
    2. Define a task that uses the ansible.builtin.cron module to schedule a recurring Cron job.

        tasks:
          - name: Crontab file exists
            ansible.builtin.cron:
              name: Add date and time to a file
    3. Configure the job to run every two minutes from 09:00 through 16:59 on Monday through Friday.

              minute: "*/2"
              hour: 9-16
              weekday: 1-5
    4. Use the cron_file parameter to use the /etc/cron.d/disk_usage system crontab file instead of an individual user's crontab file in /var/spool/cron/.

      Use a relative path to place the file in the /etc/cron.d directory.

      If the cron_file parameter is used, you must also specify the user parameter.

              user: devops
              job: df >> /home/devops/disk_usage
              cron_file: disk_usage
              state: present
    5. When completed, the playbook should contain the following content. Review the playbook for accuracy.

      ---
      - name: Recurring cron job
        hosts: webservers
        become: true
      
        tasks:
          - name: Crontab file exists
            ansible.builtin.cron:
              name: Add date and time to a file
              minute: "*/2"
              hour: 9-16
              weekday: 1-5
              user: devops
              job: df >> /home/devops/disk_usage
              cron_file: disk_usage
              state: present
    6. Run the create_crontab_file.yml playbook.

      [student@workstation system-review]$ ansible-navigator run \
      > -m stdout create_crontab_file.yml
      
      PLAY [Recurring cron job] ******************************************************
      
      TASK [Gathering Facts] *********************************************************
      ok: [serverb.lab.example.com]
      
      TASK [Crontab file exists] *****************************************************
      changed: [serverb.lab.example.com]
      
      PLAY RECAP *********************************************************************
      serverb.lab.example.com    : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
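      For reference, the ansible.builtin.cron module records the job name as a comment above the entry it manages, so the /etc/cron.d/disk_usage file on the managed hosts should contain approximately the following (system crontab files include the user field between the schedule and the command):

      ```
      #Ansible: Add date and time to a file
      */2 9-16 * * 1-5 devops df >> /home/devops/disk_usage
      ```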
  5. Create a playbook named network_playbook.yml to run on the webservers host group that uses the redhat.rhel_system_roles.network role to configure the network interface eth1 with the 172.25.250.40/24 IP address.

    Run the playbook. Confirm that the playbook ran correctly.

    1. Create a playbook named network_playbook.yml, with one play that targets the webservers host group.

      Include the redhat.rhel_system_roles.network role in the roles section of the play.

      ---
      - name: NIC Configuration
        hosts: webservers
      
        roles:
          - redhat.rhel_system_roles.network
    2. Create a new file named network.yml to define role variables.

      Because these variable values apply to the hosts on the webservers host group, you need to create that file in the group_vars/webservers directory.

      Add variable definitions to support the configuration of the eth1 network interface.

      The variable file contains the following content:

      [student@workstation system-review]$ vim group_vars/webservers/network.yml
      ---
      network_connections:
        - name: eth1
          type: ethernet
          ip:
            address:
              - 172.25.250.40/24
    3. Run the network_playbook.yml playbook to configure the eth1 network interface.

      [student@workstation system-review]$ ansible-navigator run \
      > -m stdout network_playbook.yml
      
      PLAY [NIC Configuration] *******************************************************
      
      TASK [Gathering Facts] *********************************************************
      ok: [serverb.lab.example.com]
      
      ...output omitted...
      
      TASK [redhat.rhel_system_roles.network : Configure networking connection profiles] ***
      changed: [serverb.lab.example.com]
      
      TASK [redhat.rhel_system_roles.network : Show stderr messages] *****************
      ok: [serverb.lab.example.com] => {
          "__network_connections_result.stderr_lines": [
              "[002] <info>  #0, state:None persistent_state:present, 'eth1': add connection eth1, b2332ada-021d-4f1b-a228-0e342034f95e"
          ]
      }
      
      ...output omitted...
      
      PLAY RECAP *********************************************************************
      serverb.lab.example.com    : ok=10   changed=1    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0
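      As a quick check (assuming the lab's default inventory), an ad-hoc command can display the interface configuration; the output should include an inet line with the 172.25.250.40/24 address:

      ```shell
      [student@workstation system-review]$ ansible webservers -m command -a 'ip addr show eth1'
      ```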

Evaluation

As the student user on the workstation machine, use the lab command to grade your work. Correct any reported failures and rerun the command until successful.

[student@workstation ~]$ lab grade system-review

Finish

On the workstation machine, change to the student user home directory and use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish system-review

This concludes the section.

Revision: rh294-9.0-c95c7de