Guided Exercise: Deploying Distributed Execution with Automation Mesh

  • Configure your automation controllers as control nodes that connect over automation mesh to three execution nodes, one of which is behind a hop node.

Outcomes

  • Configure an inventory file to support automation mesh.

  • Install automation mesh.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

This command downloads and extracts the Red Hat Ansible Automation Platform 2.2 bundled archive into the /home/student/aap2.2-bundle directory. It also downloads machine certificates and private keys into the /home/student/certs directory. Finally, it replaces the inventory file in the extracted bundle with the inventory file used in the chapter 1 installation guided exercise.

[student@workstation ~]$ lab start mesh-deploy

Procedure 11.1. Instructions

  1. Update the /home/student/aap2.2-bundle/inventory file to install a second automation controller; two execution nodes and a hop node that connect directly to the automation controllers over automation mesh; and a third execution node that connects to the controllers through the hop node.

    1. Change to the /home/student/aap2.2-bundle directory.

      [student@workstation ~]$ cd ~/aap2.2-bundle/
    2. Update the inventory file to add control2.lab.example.com to the [automationcontroller] section:

      [automationcontroller]
      controller.lab.example.com
      control2.lab.example.com
    3. Configure the [automationcontroller:vars] section to add the node_type, web_server_ssl_cert, and web_server_ssl_key variables. Remove the existing peers=execution_nodes line. The updated inventory file contains the following lines for the [automationcontroller:vars] section:

      [automationcontroller:vars]
      node_type=control
      web_server_ssl_cert=/home/student/certs/{{ inventory_hostname }}.crt
      web_server_ssl_key=/home/student/certs/{{ inventory_hostname }}.key
    4. Because the [automationcontroller:vars] section now configures unique web server SSL certificates and keys for each automation controller host, comment out the existing web_server_ssl_cert and web_server_ssl_key variables in the [all:vars] section. The existing lines change to the following:

      #web_server_ssl_cert=/home/student/certs/controller.lab.example.com.crt
      #web_server_ssl_key=/home/student/certs/controller.lab.example.com.key
    5. Update the inventory file to add hosts to the [execution_nodes] section.

      • Add the exec1.lab.example.com and exec2.lab.example.com hosts and specify that they peer with the automationcontroller group.

      • Add the exec3.lab.example.com host and specify that it peers with the hop1.lab.example.com host.

      • Add the hop1.lab.example.com host and specify that it peers with the automationcontroller group and that it is a hop node.

      The updated inventory file contains the following lines for the [execution_nodes] section:

      [execution_nodes]
      exec1.lab.example.com peers=automationcontroller
      exec2.lab.example.com peers=automationcontroller
      exec3.lab.example.com peers=hop1.lab.example.com
      hop1.lab.example.com peers=automationcontroller node_type=hop
    6. Use the diff command to compare your modified inventory file with the ~/mesh-deploy/inventory file. The diff command does not display any output if the files have the same content. The -B option ignores blank lines. Correct any mistakes before proceeding.

      [student@workstation aap2.2-bundle]$ diff -B inventory ../mesh-deploy/inventory
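
      If the files differ, diff prints the mismatched lines. As an optional aid, you could repeat the comparison with the -u option, which adds unified context around each difference and makes it easier to locate the exact line to correct:

      [student@workstation aap2.2-bundle]$ diff -B -u inventory ../mesh-deploy/inventory

      You could also sanity-check the syntax of the updated file with the ansible-inventory command. This is a minimal sketch: it assumes that the ansible-core package is installed on workstation, and it only prints the group and host structure without contacting any managed hosts.

      [student@workstation aap2.2-bundle]$ ansible-inventory -i inventory --graph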
  2. Generate and view the automation mesh topology file.

    1. Run the setup.sh script using the generate_dot_file tag.

      [student@workstation aap2.2-bundle]$ ./setup.sh -- --tags generate_dot_file
      ...output omitted...
      TASK [debug] *******************************************************************
      ok: [controller.lab.example.com] => {
          "msg": "Ansible Mesh topology graph created at 'mesh-topology.dot'. To render your dot graph, you could run: dot -Tjpg mesh-topology.dot -o graph-topology.jpg\n"
      }
      ...output omitted...
    2. Display the generated mesh-topology.dot topology file.

      [student@workstation aap2.2-bundle]$ cat mesh-topology.dot
      strict digraph "" {
          rankdir = TB
          node [shape=box];
          subgraph cluster_0 {
              graph [label="Control Nodes", type=solid];
              {
                  rank = same;
                  "controller.lab.example.com";
                  "control2.lab.example.com";
                  "controller.lab.example.com" -> "control2.lab.example.com";
              }
          }
      
          "exec1.lab.example.com";
          "exec2.lab.example.com";
          "exec3.lab.example.com";
          "hop1.lab.example.com";
          "exec1.lab.example.com" -> "control2.lab.example.com";
          "exec1.lab.example.com" -> "controller.lab.example.com";
          "exec2.lab.example.com" -> "control2.lab.example.com";
          "exec2.lab.example.com" -> "controller.lab.example.com";
          "exec3.lab.example.com" -> "hop1.lab.example.com";
          "hop1.lab.example.com" -> "control2.lab.example.com";
          "hop1.lab.example.com" -> "controller.lab.example.com";
      }
    3. Install the graphviz package.

      [student@workstation aap2.2-bundle]$ sudo dnf install graphviz
      [sudo] password for student: student
      ...output omitted...
    4. Render the generated topology file as a graphic.

      [student@workstation aap2.2-bundle]$ dot -Tjpg mesh-topology.dot \
      > -o graph-topology.jpg
      ...output omitted...
    5. Open the generated graphic file, graph-topology.jpg, in a web browser.
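
      As an optional convenience, you could open the image from the command line instead, or render the topology in a different format. This is a sketch: it assumes that a graphical desktop session is available on workstation and that the xdg-open utility is installed; -Tsvg and -Tpng are standard Graphviz output options.

      [student@workstation aap2.2-bundle]$ xdg-open graph-topology.jpg
      [student@workstation aap2.2-bundle]$ dot -Tsvg mesh-topology.dot \
      > -o graph-topology.svg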

  3. Install automation mesh by applying the changes made to the inventory file.

    1. Become the root user.

      [student@workstation aap2.2-bundle]$ sudo -i
      [sudo] password for student: student
      [root@workstation ~]#
    2. Change to the /home/student/aap2.2-bundle directory.

      [root@workstation ~]# cd /home/student/aap2.2-bundle/
    3. Run the setup.sh script with -e ignore_preflight_errors=true to ignore failures from the preflight checks that run before the installation starts. (The classroom systems have less RAM than is optimal for a production installation.) The installation takes approximately 15 minutes to complete.

      [root@workstation aap2.2-bundle]# ./setup.sh -e ignore_preflight_errors=true
      ...output omitted...
      PLAY RECAP *********************************************************************
      control2.lab.example.com   : ok=246  changed=129  ...  failed=0  ...  ignored=5
      controller.lab.example.com : ok=263  changed=54   ...  failed=0  ...  ignored=1
      db.lab.example.com         : ok=75   changed=16   ...  failed=0  ...  ignored=1
      exec1.lab.example.com      : ok=104  changed=51   ...  failed=0  ...  ignored=3
      exec2.lab.example.com      : ok=104  changed=51   ...  failed=0  ...  ignored=3
      exec3.lab.example.com      : ok=104  changed=51   ...  failed=0  ...  ignored=3
      hop1.lab.example.com       : ok=83   changed=36   ...  failed=0  ...  ignored=2
      hub.lab.example.com        : ok=195  changed=23   ...  failed=0  ...  ignored=1
      localhost                  : ok=3    changed=1    ...  failed=0  ...  ignored=0
      
      The setup process completed successfully.
      [warn] /var/log/tower does not exist. Setup log saved to setup.log.
    4. After the installer finishes successfully, exit from the root session.

      [root@workstation aap2.2-bundle]# exit
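
      As an optional command-line check before moving to the web UI, you could query one of the controllers for its view of the mesh. This is a minimal sketch: it assumes the automation controller /api/v2/ping/ API endpoint, which reports the registered instances and instance groups without authentication, and that workstation trusts the controller certificate (add the curl -k option if it does not).

      [student@workstation aap2.2-bundle]$ curl -s \
      > https://controller.lab.example.com/api/v2/ping/ | python3 -m json.tool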
  4. Access the https://controller.lab.example.com and https://control2.lab.example.com automation controllers.

    1. Navigate to https://controller.lab.example.com and log in as the admin user with redhat as the password.

    2. In a separate browser window or tab, navigate to https://control2.lab.example.com and log in as the admin user with redhat as the password.

  5. View the hosts in the controlplane and default instance groups.

    1. Navigate to Administration → Instance Groups.

    2. Click the link for the controlplane instance group and then click the Instances tab. Both the control2.lab.example.com and the controller.lab.example.com hosts display the Healthy status. If you click the link for each hostname, then the Instance details page displays that each host is the control node type.

    3. Navigate to Administration → Instance Groups and click the link for the default instance group.

    4. Click the Instances tab. The exec1.lab.example.com, exec2.lab.example.com, and exec3.lab.example.com hosts display the Healthy status. If you click the link for each hostname, then the Instance details page displays that each host is the execution node type.
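
      As an optional alternative to the web UI, you could list the same instances from the command line. This is a sketch: it assumes the authenticated automation controller /api/v2/instances/ API endpoint and reuses the admin user and redhat password from this exercise; the output includes the node_type of each instance.

      [student@workstation ~]$ curl -s -u admin:redhat \
      > https://controller.lab.example.com/api/v2/instances/ | python3 -m json.tool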

Finish

On the workstation machine, change to the student user home directory and use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish mesh-deploy

This concludes the section.
