Configure Ansible to use network telemetry to automatically respond to events and implement remediation or configuration changes.
Event-Driven Ansible (EDA) can help network administrators adapt to managing a growing number of network devices by automatically responding to changes in the network.
For example:
Reactivating an interface that goes down
Configuring an interface before bringing the interface up
Generating a notification when a configuration change occurs
Saving or reverting runtime configuration changes
Provisioning new network devices
Managing network devices can be different from managing Linux machines.
Although you can accomplish many tasks on Linux machines using modules in the ansible.builtin collection, configuring network devices often requires using vendor-specific collections.
The following are some vendor-specific content collections for networking:
The arista.eos collection can manage Arista EOS devices.
The cisco.ios collection can manage Cisco IOS devices.
The cisco.nxos collection can manage Cisco NX-OS devices.
The junipernetworks.junos collection can manage Juniper Networks Junos devices.
In addition to using specific networking modules, connecting to network devices typically requires defining variables. You might define these variables as inventory variables or as variables within plays:
ansible_connection
This variable is often set to the value of ansible.netcommon.network_cli.
However, many modules in the junipernetworks.junos collection require that this variable be set to ansible.netcommon.netconf.
ansible_network_os
This variable is specific to the type of network device.
Some example values include arista.eos.eos, cisco.ios.ios, and junipernetworks.junos.junos.
ansible_become_method
This variable is typically set to enable for network devices.
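For illustration, the following sketch shows how these connection variables might appear in a group_vars file for a hypothetical group of Cisco IOS devices (the group name and the user account are assumptions, not values from this course):

```yaml
# group_vars/ios_switches.yml (hypothetical inventory group)
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: cisco.ios.ios
ansible_user: admin              # hypothetical account
ansible_become: true
ansible_become_method: enable    # enter privileged EXEC mode
```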
EDA can run playbooks for your networking automation. You can run the playbooks from your local system or from an automation controller.
Because the run_playbook action does not work in EDA controller, any rulebooks that use the run_playbook action must be run from your local system.
You must configure your local system to be able to run the specified playbooks.
If the plays in your playbooks use modules from vendor-specific content collections, then you must install those collections on your system.
In addition to the collections, you might need to install dependencies for the collections, such as Python packages and RPM packages.
The run_playbook action does not provide the ability to run a playbook from within an automation execution environment.
Use the run_job_template and the run_workflow_template actions to run playbooks on an automation controller.
Recall that automation controller uses automation execution environments to run playbooks.
Both the ee-supported-rhel8 and the ee-supported-rhel9 automation execution environments contain several content collections as well as the dependencies needed to use those collections.
If your playbooks use content collections that are already included in an automation execution environment (such as the arista.eos or junipernetworks.junos collections), then you do not need to manually install those collections or their dependencies.
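As a sketch, a rulebook rule that launches playbooks through automation controller might use the run_job_template action like this (the condition, job template name, and organization are hypothetical placeholders):

```yaml
rules:
  - name: Launch remediation job
    condition: event.meta is defined      # placeholder condition
    action:
      run_job_template:
        name: Network remediation         # hypothetical job template
        organization: Default
```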
Streaming network telemetry provides the ability to gather real-time information about your network and devices. By using network telemetry, EDA can react to changes as they happen.
One way to configure network telemetry is to use the network.telemetry validated content collection, available from automation hub at https://console.redhat.com/ansible/automation-hub/.
This collection enables you to manage telemetry configuration on networking devices and to configure a Telegraf and Apache Kafka stack, which you can integrate with EDA.
Documentation for the network.telemetry.run role provides the following example of how to deploy a telemetry collector:
---
- name: Deploy telemetry collector
  hosts: collector01
  gather_facts: true
  tasks:
    - name: Run Telemetry Manager
      include_role:
        name: network.telemetry.run
      vars:
        operations:
          - name: deploy_collector
        kafka_external_listener: 203.0.113.100 # optional

The network.telemetry.run role can also collect configuration information and apply configuration changes in a vendor-agnostic manner by using the network.base validated content collection.
You can use gNMI (gRPC Network Management Interface) to configure a switch to send telemetry information. For example, you can configure an Arista switch to use gNMI by running the following commands:
arista1.lab.example.com>enable
arista1.lab.example.com#conf t
arista1.lab.example.com(config)#username student secret student
arista1.lab.example.com(config)#management api gnmi
arista1.lab.example.com(config-mgmt-api-gnmi)#transport grpc default
arista1.lab.example.com(config-gnmi-transport-default)#transport grpc eos
arista1.lab.example.com(config-gnmi-transport-eos)#provider eos-native
You can configure Telegraf by updating the /etc/telegraf/telegraf.conf configuration file.
The following example is part of a Telegraf configuration file used in one of the exercises.
In this example, Telegraf queries TCP port 6030 of the arista1.lab.example.com device.
The gNMI input plug-in is subscribed to receive events related to changes in the status of the Ethernet1 interface.
Telegraf sends matching events in JSON format to the Apache Kafka broker at 172.25.250.220 on TCP port 9092.
The events are saved in the network topic.
...output omitted...
[[inputs.gnmi]]
  addresses = ["arista1.lab.example.com:6030"]
  username = "student"
  password = "student"
  insecure_skip_verify = true

  [[inputs.gnmi.subscription]]
    name = "Ethernet1"
    origin = "openconfig"
    subscription_mode = "on_change"
    path = "/interfaces/interface[name=Ethernet1]/state/admin-status"
    sample_interval = "30s"
    suppress_redundant = true

[[outputs.kafka]]
  brokers = ["172.25.250.220:9092"]
  topic = "network"
  data_format = "json"

For more information about the settings used in the Telegraf configuration file, see https://github.com/influxdata/telegraf/blob/master/plugins/inputs/gnmi/README.md.
Apache Kafka stores records as topics, and if you have access to the Apache Kafka server, then you can manage topics.
In the previous example Telegraf configuration, Telegraf streams data to the network topic.
From the Apache Kafka server, you can locate the relevant Bash scripts and then list existing topics:
[root@host bin]# ./kafka-topics.sh --list --bootstrap-server localhost:9092

If necessary, you can create the network topic with the following command:
[root@host bin]# ./kafka-topics.sh --create --topic network \
--bootstrap-server localhost:9092
Created topic network.
You can use the kafka-console-consumer.sh script to query records for a particular topic.
A successful query indicates that records are being stored for a topic.
Press CTRL+C to exit the command.
[root@host bin]# ./kafka-console-consumer.sh --topic network --from-beginning \
--bootstrap-server localhost:9092
{"fields":{"admin_status":"UP"},"name":"Ethernet1","tags":{"host":"00fba543c8d5","name":"Ethernet1","source":"arista1.lab.example.com"},"timestamp":1711119501}
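The record stored in the topic is plain JSON, so you can inspect it with any JSON tooling. A minimal Python sketch, using the sample payload from the consumer output above:

```python
import json

# Sample record copied from the kafka-console-consumer output above
record = (
    '{"fields":{"admin_status":"UP"},"name":"Ethernet1",'
    '"tags":{"host":"00fba543c8d5","name":"Ethernet1",'
    '"source":"arista1.lab.example.com"},"timestamp":1711119501}'
)

event = json.loads(record)
print(event["fields"]["admin_status"])  # interface administrative status
print(event["tags"]["source"])          # device that produced the record
```

Exploring a sample record this way can help you identify the key paths to use later in rule conditions.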
After verifying that Apache Kafka is storing records for the expected topic, you can create a simple rulebook to display events.
The following rulebook uses the ansible.eda.kafka event source plug-in and prints all events. Because EDA adds the meta key to every event, the event.meta is defined condition matches every event.
---
- name: Display Apache Kafka events
  hosts: all
  sources:
    - name: Collect events from Apache Kafka
      ansible.eda.kafka:
        host: utility.lab.example.com
        port: 9092
        topic: network
  rules:
    - name: Print event
      condition: event.meta is defined
      action:
        print_event:
          pretty: true

During development, you might place a rule like this one at the bottom of your ruleset. Because this rule matches every event, the rule can help you identify events that other rules in your ruleset do not match.
By using the print_event action, you can display event details.
If you add the pretty argument with a value of true, then EDA indents the JSON keys in the output.
The indented output might make it easier to use event details to design rule conditions.
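After inspecting event details, you can write narrower conditions. The following sketch matches only events that report the interface as down; it assumes that the ansible.eda.kafka source nests the Telegraf JSON payload under the event.body key:

```yaml
rules:
  - name: Interface went down
    condition: event.body.fields.admin_status == "DOWN"
    action:
      print_event:
        pretty: true
```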
If you run the previous rulebook as a rulebook activation in EDA controller, then you might see an event displayed in the rulebook activation history similar to the following:
** 2024-03-22 16:15:14.400270 [event] ******************************************
{'body': {'fields': {'admin_status': 'UP'},
'name': 'Ethernet1',
'tags': {'host': '00fba543c8d5',
'name': 'Ethernet1',
'source': 'arista1.lab.example.com'},
'timestamp': 1711119501},
'meta': {'received_at': '2024-03-22T16:15:14.394353Z',
'source': {'name': 'Collect events from Apache Kafka',
'type': 'ansible.eda.kafka'},
'uuid': '0e3840eb-6448-4917-ae67-fa9ac1042167'}}
********************************************************************************
********************************************************************************

Many chat services, including Mattermost, provide the ability to define both incoming and outgoing webhooks. By integrating EDA with a chat service, you can automatically react to specific chat messages and you can send notifications to the chat service.
You can post messages to a chat service by creating and using an incoming webhook. By posting messages to chat applications, you can increase the visibility of events that are important to you.
You can also create outgoing webhooks that trigger when certain chat messages are posted. EDA can listen for these outgoing webhooks and then react to the content of chat messages.
A play might use an incoming webhook to post a chat message. For example, EDA might launch a playbook, and a play in that playbook might send a chat message that indicates something changed because of the playbook run.
An incoming webhook typically specifies the channel to which a new chat message is posted. Existing incoming webhooks have a URL associated with them and you can use this URL to send chat messages to the channel defined by the incoming webhook.
The following task in a play sends a message using an incoming webhook URL.
The task uses the message_title and message_text variables and these variables might be passed to the play when the playbook is launched.
The task is skipped unless both variables are defined.
- name: Send a message
  when:
    - message_title is defined
    - message_text is defined
  ansible.builtin.uri:
    url: http://mattermost.lab.example.com:8065/hooks/d4ox1a5jrf8q7ytzbbaafihzta
    method: POST
    headers:
      Content-Type: application/json
    body_format: json
    body:
      text: |
        # {{ message_title }}
        #### {{ message_text }}

An outgoing webhook can send a message if certain conditions are met. You can trigger an outgoing webhook by typing a specific message in a chat channel. For example, you might find it more efficient to launch a job template by typing a chat message rather than by launching the job template from an automation controller web UI.
An outgoing webhook must specify one or more words that trigger it. You might add multiple words and then configure rules within a ruleset to only act if the expected conditions are met. The webhook might be configured for a specific chat channel or the webhook might apply to all channels. The webhook also specifies one or more callback URLs to use when the webhook conditions are met.
After configuring a webhook, you can create a ruleset within a rulebook that listens for webhook events.
You can create rules that evaluate conditions by using the event.payload key.
---
- name: Capture POST events from Mattermost
  hosts: all
  sources:
    - name: Match events posted to port 5000
      ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
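A ruleset with this source might then add rules that inspect the webhook payload. The following sketch is hypothetical; the trigger text and the available payload keys depend on how the chat service formats its outgoing webhook:

```yaml
  rules:
    - name: React to a trigger message
      condition: event.payload.text is search("ansible")
      action:
        print_event:
          pretty: true
```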