Distribute execution of Ansible Playbooks from automation controller control or hybrid nodes to remote execution nodes, communicating with them using automation mesh.
You can configure automation mesh to separate execution nodes from your control nodes, providing resilience and scalability for your automation and moving execution of your automation closer to your managed hosts.
To configure Red Hat Ansible Automation Platform nodes that are connected by automation mesh, edit the inventory file that is used by the installation script.
The following example deploys one control node (controller.lab.example.com) and one execution node (exec1.lab.example.com).
[automationcontroller]
controller.lab.example.com

[execution_nodes]
exec1.lab.example.com

[automationcontroller:vars]
peers=execution_nodes
node_type=control
To add hybrid or control nodes, add an entry for each node in the [automationcontroller] group.
To add execution nodes or hop nodes, add an entry for each node in the [execution_nodes] group.
Use group variables to set up definitions, such as [automationcontroller:vars], to apply variables to all of the hosts in a group.
The peers=execution_nodes variable configures the control nodes to establish peer connections to all of the nodes in the [execution_nodes] group.
The node_type=control variable configures the nodes in the [automationcontroller] group as control nodes instead of the default hybrid nodes. You can also set node_type on individual host entries, as shown in the following fragment.
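The following fragment is a minimal sketch, with hypothetical host names, that keeps the default execution node type for exec1 and declares hop1 as a hop node directly on its host entry:

[execution_nodes]
exec1.lab.example.com
hop1.lab.example.com node_type=hop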
By default, automation mesh creates the controlplane instance group for nodes in the [automationcontroller] section of the inventory file, and the default instance group for execution nodes in the [execution_nodes] section of the inventory file.
You can configure the inventory to create additional instance groups. For example, you might want to create an instance group of execution nodes that are local to a particular data center. If you create instance groups based on data centers, then you can configure an inventory or a job template to run jobs for managed hosts that are in the same data center as the execution nodes.
The installer automatically creates instance groups for any group names in the installer’s inventory with the instance_group_ prefix. All of the hosts in the instance group must be execution nodes and cannot be hop nodes.
The following fragment of an inventory file creates two instance groups when you run setup.sh: local, which consists of the execution nodes exec1 and exec2, and remote, which consists of the execution node exec3.
[instance_group_local]
exec1.lab.example.com
exec2.lab.example.com

[instance_group_remote]
exec3.lab.example.com
You can also manage instance groups and assign execution nodes to them in the automation controller web UI.
To add an additional node, add the new node to the inventory file and run the installation script again.
To remove a node, append node_state=deprovision to the node entry in the inventory file and run the installation script again.
[automationcontroller]
controller1.lab.example.com node_type=control
controller2.lab.example.com
controller3.lab.example.com node_state=deprovision

[execution_nodes]
exec1.lab.example.com
exec2.lab.example.com node_state=deprovision
You cannot use the node_state=deprovision variable with the first entry in the [automationcontroller] section because at least one control node is required for an operational Ansible Automation Platform. The installer uses the first entry to launch the remaining installation and configuration. If you want to use the node_state=deprovision variable with the host listed in the first entry, then move that line to a different position in the [automationcontroller] section.
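For example, assuming that controller1.lab.example.com is currently the first entry and that controller2.lab.example.com remains in service, a sketch of the reordered section might look like the following:

[automationcontroller]
controller2.lab.example.com
controller1.lab.example.com node_state=deprovision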
You can also remove entire groups from your automation mesh. The installer removes all configuration files and logs attached to the nodes in the group.
[execution_nodes]
exec1.lab.example.com peers=exec2.lab.example.com
exec2.lab.example.com peers=exec3.lab.example.com
exec3.lab.example.com
[execution_nodes:vars]
node_state=deprovision

The Red Hat Ansible Automation Platform 2 installer provides a way to visualize the automation mesh topology defined in your installation inventory file. Using your inventory file, the installer can generate a text file that shows the relationships between the different nodes in your automation mesh. You can then use the dot command, provided by the graphviz package, to render the automation mesh topology text file as a graphic file.
Red Hat Ansible Automation Platform 2.2 also adds a topology viewer to the automation controller web UI.
The following steps create and visualize the mesh topology:
Use an existing inventory file or create a new one. This example uses an inventory with two hybrid nodes and one execution node.
[automationcontroller]
controller1.lab.example.com
control2.lab.example.com

[automationcontroller:vars]
peers=execution_nodes

[execution_nodes]
exec1.lab.example.com
Execute the setup.sh script to generate the mesh-topology.dot file.
[user@demo aap-bundle]$ ./setup.sh -- --tag generate_dot_file

You can run the ./setup.sh --help command to display command usage, including how to pass Ansible options.
Make sure that the graphviz RPM package is installed.
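If the package is not already present, one way to install it on a RHEL system is with dnf, assuming that the host has access to a repository that provides graphviz:

[user@demo aap-bundle]$ sudo dnf install graphviz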
Render the generated topology file, mesh-topology.dot, as a graphic.
[user@demo aap-bundle]$ dot -Tjpg mesh-topology.dot \
> -o graph-topology.jpg
...output omitted...
You can render the file in other formats by using the -T option to specify the output format, such as GIF, PNG, or SVG (-Tgif, -Tpng, or -Tsvg).
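For example, the following command, using the same hypothetical file names, produces an SVG file instead:

[user@demo aap-bundle]$ dot -Tsvg mesh-topology.dot -o graph-topology.svg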
Open the generated graph-topology.jpg file with a web browser or other image viewer.
Even though the preceding diagram shows arrows for the connections between the nodes in the automation mesh, these are peer connections, and the arrowheads can be misleading.
Automation mesh is flexible and you can adapt it to meet your needs. The following two examples provide starting points. You might expand from these examples as you add locations in different regions, data centers, or behind firewalls. The key to setting up an effective automation mesh is to know the function of your nodes and which ones can directly peer with each other over your network.
The minimal resilient automation mesh configuration provides resilience in the control and execution planes. The control nodes connect to each other and to each execution node. There is no single point of failure.
The inventory file contains the following content:
[automationcontroller]
controller.lab.example.com
control2.lab.example.com

[automationcontroller:vars]
node_type=control
peers=execution_nodes

[execution_nodes]
exec1.lab.example.com
exec2.lab.example.com
This example modifies the minimal resilient configuration to add an additional execution node that is reached through a hop node. It also sets up two instance groups, local (consisting of exec1 and exec2) and remote (consisting of exec3, which is behind the hop node). The instance groups are not shown on the following diagram.
This excerpt is from the inventory file that sets up this configuration:
[automationcontroller]
controller.lab.example.com
control2.lab.example.com

[automationcontroller:vars]
node_type=control
peers=instance_group_local

[execution_nodes]
exec1.lab.example.com
exec2.lab.example.com
exec3.lab.example.com
hop1.lab.example.com

[instance_group_local]
exec1.lab.example.com
exec2.lab.example.com

[instance_group_remote]
exec3.lab.example.com

[instance_group_remote:vars]
peers=hop

[hop]
hop1.lab.example.com

[hop:vars]
node_type=hop
peers=automationcontroller
All of the hosts in the [automationcontroller] group are control nodes, and they peer with the execution nodes in the instance_group_local group.
List all execution nodes in the [execution_nodes] group, regardless of the node type. You could define the node type for each node, but it might be easier to create groups and group variables as shown in this example.
The four hosts defined in the [execution_nodes] group are also organized into the instance_group_local, instance_group_remote, and hop groups.
Execution nodes in the instance_group_local group form the local instance group, and the control nodes peer directly with them.
The execution nodes in the instance_group_remote group form the remote instance group and peer with the hop node through the peers=hop group variable.
Creating a separate group for the hop nodes makes it easier to apply the node_type=hop variable and peer definitions to all of them by using group variables.
All of the hosts in the [hop] group are hop nodes and peer with the control nodes in the [automationcontroller] group.
Prior to installation, the installer script runs validation checks on your defined automation mesh configuration.
A host cannot belong to both the [automationcontroller] and [execution_nodes] groups.
A host cannot peer to a node that does not exist.
A host cannot peer to itself.
A host cannot have both an inbound and an outbound connection to the same node.
Execution nodes must have a path back to the control plane.
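For example, the following hypothetical inventory fragment would fail these validation checks, because exec1.lab.example.com is a member of both the [automationcontroller] and [execution_nodes] groups and is also configured to peer with itself:

[automationcontroller]
exec1.lab.example.com

[execution_nodes]
exec1.lab.example.com peers=exec1.lab.example.com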