Implementing Test Cases in Event-driven Applications

Objectives

After completing this section, you should be able to implement test cases in event-driven applications.

Describing Automated Testing

Automated tests are program specifications written as executable software. Developers define tests by writing the source code necessary to perform validations on their applications.

Just like synchronous applications, event-driven applications (EDAs) must include automated tests to ensure quality. Defining and developing an automated testing strategy requires a time investment. The resulting tests, however, save time in the long term. Automated tests reduce human error, are repeatable, and can run as part of automated validation processes, such as continuous integration pipelines.

To cover the different aspects of a software system, you must test your applications from different perspectives. For example, you might want to verify that your stream processing topology works as expected, and also validate that the application communicates successfully with the Kafka broker.

The Testing Pyramid

The testing pyramid is a representation of the most widely accepted testing methods. It illustrates how teams should distribute their testing efforts.

Figure 6.7: The testing pyramid

At the bottom of the pyramid, tests are faster, more specific, simpler, and easier to write. As you go up the pyramid, tests cover broader scopes but become more difficult to develop and maintain.

Unit Tests

Unit tests verify the application at the source code level by running very specific and fast validations on small code units in isolation. A unit is the smallest testable piece of code. Based on your own perspective of the code base, or your team's agreements, a unit can be a method, a class, or even an API.

To test units in isolation, developers normally use test doubles. This technique replaces the dependencies of a unit with fake or simplified versions. There are multiple types of test doubles, such as fakes, stubs, and mocks. For example, when implementing unit tests for a stream processing topology, you should use a test double for the Kafka dependency. In this way, tests are isolated from Kafka, and you do not have to set up a Kafka cluster each time you run the tests.
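
For example, a fake is a simplified but working implementation of a dependency. The following sketch shows one possible in-memory fake for the inventory dependency used later in this section. The Inventory interface and its methods are assumptions for illustration purposes.

import java.util.HashSet;
import java.util.Set;

// A hypothetical in-memory fake that replaces a real inventory service.
public class FakeInventory implements Inventory {

    private final Set<Book> books = new HashSet<>();

    // Adds a book to the simulated stock.
    public FakeInventory add(Book book) {
        books.add(book);
        return this;
    }

    // Answers from the in-memory set instead of querying a real system.
    @Override
    public boolean has(Book book) {
        return books.contains(book);
    }
}

Because the fake keeps its state in memory, tests that use it run quickly and do not require any external infrastructure.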

In EDAs, unit tests are typically focused on validating how the business rules produce and consume certain events, under particular scenarios.

Integration Tests

Integration tests verify that multiple units, or components, work together. They are harder to set up than unit tests, but they cover scenarios that unit tests cannot. Usually, you must set up each scenario with several components, which can be logical components or even external systems. Just like in unit tests, you can use test doubles in integration tests.

In EDAs, integration tests might verify the correct integration at different levels, such as between logical components, between microservices, or between the application and the event storage layer, such as Kafka.

Functional Tests

Functional tests evaluate the application from the perspective of the user. The terms functional tests, acceptance tests, system tests, and end-to-end tests all refer to the act of validating that an application meets its functional requirements.

Manual Tests

Manual tests are performed by humans, usually as a way to explore and research the behavior of the application. These tests have the highest cost, so you should have fewer of them.

Implementing Unit Tests in EDAs

When implementing unit tests, you should decouple them from the infrastructure details to keep tests fast, isolated, and reusable. In EDAs, the principal piece of infrastructure is the event broker, such as Kafka. Your unit tests should not know whether you use Kafka or another system as an event broker. They should also be able to execute and pass without a Kafka instance.

Unit Testing Publisher-Subscriber (Pub-Sub) Applications

In EDAs using a Pub-Sub architecture, you implement Kafka producers to publish events, and Kafka consumers to subscribe to events. Decoupling the business logic from these producers and consumers is vital to achieving isolated, easy-to-write unit tests that are independent of the Kafka infrastructure.

Note

You can achieve decoupled application architectures by applying techniques such as the SOLID principles or domain-driven design (DDD).

Decoupled business logic code should not depend on specific event messaging systems, such as Kafka. After decoupling, business logic code units should normally emit and listen to events. Consequently, your unit tests will generally test the following:

  • For a code unit that emits events, test that the expected event is emitted when particular conditions are met.

  • For a code unit that listens to incoming events, test that the code produces the expected results.

To write tests, you can use JUnit. JUnit is the de facto standard for unit tests in Java. In other languages, similar libraries exist. For example, consider the following Java code, which implements a simple mechanism to process book orders.

public class BookStore {

    ...omitted...

    public BookOrderCreated order(Book book, CustomerInfo customerInfo)
            throws OutOfStockException {

        if (inventory.has(book)) {
            Order order = new Order(book, customerInfo);
            ordersRegistry.register(order);

            return new BookOrderCreated(order);
        }

        throw new OutOfStockException(book);
    }
}

The order method returns a BookOrderCreated event if the book is available in the inventory. Note that the method's only responsibility is to verify the business rule and emit an event if the condition is met. The code does not include any Kafka dependencies or Kafka producer code.

You can implement a unit test for such a scenario as follows:

@Test
public void testProcessOrderReturnsBookOrderCreated() throws OutOfStockException {
    // Given a book, included in the inventory
    Book book = buildTestBook();
    CustomerInfo customerInfo = buildTestCustomerInfo();
    Inventory inventory = buildTestInventory().add(book);

    // When the store processes a book order
    BookOrderCreated event = store.order(book, customerInfo);

    // Then the produced event corresponds to the ordered book
    assertBookOrderCreatedForBook(event, book);
}

The unit test only covers the BookStore#order method. Both the BookStore class and the test are unaware of the Kafka infrastructure layer, which is implemented in a different application layer. Other dependencies, such as the inventory, are implemented as test doubles.
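
You can complement this test with the opposite scenario. The following sketch, which assumes JUnit 5, verifies that no event is emitted for a book that is missing from the inventory, and that the business rule surfaces as an exception instead:

@Test
public void testProcessOrderThrowsWhenBookIsOutOfStock() {
    // Given a book that is not included in the inventory
    Book book = buildTestBook();
    CustomerInfo customerInfo = buildTestCustomerInfo();

    // When the store processes the order,
    // then an OutOfStockException is thrown and no event is produced
    assertThrows(
        OutOfStockException.class,
        () -> store.order(book, customerInfo)
    );
}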

Unit Testing Kafka Streams Applications

If you develop a Kafka Streams application, then you can use the Kafka Streams test utils package to unit test topologies. The test utils package decouples tests from Kafka by providing methods to define test doubles for Kafka topics. By using test doubles to simulate streams, tests become easier to implement and do not require a running Kafka instance.

You can install Kafka Streams test utils by including the kafka-streams-test-utils artifact as a Maven dependency.
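
For example, a typical declaration in the pom.xml file looks like the following sketch. The version property is an assumption; use the version that matches your Kafka Streams dependency.

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams-test-utils</artifactId>
    <version>${kafka.version}</version>
    <scope>test</scope>
</dependency>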

To test a topology, pass the topology to the TopologyTestDriver constructor. The TopologyTestDriver object creates a test double of the Kafka infrastructure and executes the topology on top of this simulated infrastructure. Next, by using the TopologyTestDriver methods, you create test doubles for the input and output topics, pipe data into the topology, and verify the results.

For example, consider the following method, which builds a Kafka Streams topology.

public Topology getBookOrdersTopology() {
    KStream<String, BookOrderCreated> ordersStream = builder.stream(
        "book-orders-created"
    );

    ordersStream
        //...transformations omitted...
        .to("book-orders-confirmed");

    ordersStream
        //...grouping and transformations omitted...
        .count(Materialized.as("book-orders-counts"));

    return builder.build();
}

The getBookOrdersTopology method reads a stream of events from the book-orders-created topic, transforms the stream, and materializes results in the book-orders-confirmed topic and the book-orders-counts store.

You can unit test this topology as follows:

import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.test.TestRecord;
import org.apache.kafka.streams.TopologyTestDriver;

...class omitted...

@Test
public void testTopologyConfirmsBookOrders() {
    // Given 1
    TopologyTestDriver testDriver = new TopologyTestDriver(getBookOrdersTopology());
    TestInputTopic<String, BookOrderCreated> inputTopic = testDriver.createInputTopic(
        "book-orders-created",
        stringSerializer,
        bookOrderCreatedSerializer
    );
TestOutputTopic<String, BookOrderConfirmed> outputTopic = testDriver.createOutputTopic(
        "book-orders-confirmed",
        stringDeserializer,
        bookOrderConfirmedDeserializer
    );

    // When 2
    BookOrderCreated inputEvent = createTestOrder("book-1");
    inputTopic.pipeInput("order-42", inputEvent);

    // Then 3
    TestRecord<String, BookOrderConfirmed> record = outputTopic.readRecord();
    BookOrderConfirmed outputEvent = record.getValue();
assertThat(outputEvent.bookId, equalTo("book-1"));
}

1. Given a topology, an input topic, and an output topic. These topics are test doubles and do not exist in Kafka.

2. When there is a new BookOrderCreated event in the input topic. The pipeInput method simulates a new record in the topic.

3. Then verify that the output topic contains the correct BookOrderConfirmed event.

The TopologyTestDriver object also allows you to handle and verify stores.

import org.apache.kafka.streams.state.KeyValueStore;

...class omitted...

@Test
public void testTopologyCountsBookOrders() {
    // Given 1
    TopologyTestDriver testDriver = new TopologyTestDriver(getBookOrdersTopology());
    TestInputTopic<String, BookOrderCreated> inputTopic = testDriver.createInputTopic(
        "book-orders-created",
        stringSerializer,
        bookOrderCreatedSerializer
    );
    KeyValueStore<String, Long> bookCountStore = testDriver.getKeyValueStore(
        "book-orders-counts"
    );

    // When 2
    inputTopic.pipeInput("order-32", createTestOrder("book-1"));
    inputTopic.pipeInput("order-33", createTestOrder("book-1"));

    // Then 3
    assertThat(bookCountStore.get("book-1"), equalTo(2L));
}

1. Given a topology, an input topic, and a store. The store change log topic is a test double and does not exist in Kafka.

2. When two BookOrderCreated events occur for the same book.

3. Then verify that the count value in the store is 2.
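
The TopologyTestDriver object holds resources, such as state stores. Close the driver when each test finishes to release them, for example in a JUnit 5 @AfterEach method:

@AfterEach
public void tearDown() {
    // Releases the state stores and internal resources of the driver
    testDriver.close();
}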

Implementing Integration Tests in EDAs

In contrast to unit tests, where a test normally verifies single code units in isolation, the role of integration testing is to test a set of units together. A key difference with unit tests is the fact that integration tests normally include the infrastructure layers in test scenarios. Therefore, if you use Kafka, then you should implement integration tests to validate that your application integrates with the Kafka cluster under different conditions.

You should implement integration tests to ensure a correct integration from different perspectives and levels of abstractions, such as the following:

  • If you do not use the Kafka Streams library, then test that your business logic layer integrates with the Kafka producer/consumer layer.

  • Test that your microservices communicate with the Kafka cluster. You can either set up an actual Kafka cluster or use the Testcontainers library, which makes it easier to set up and tear down containerized Kafka clusters for your tests, as shown in the example after this list. Quarkus also provides the Dev Services for Kafka feature to automatically start Kafka-compatible brokers when running tests.

  • Perform load testing or disaster recovery testing. For example, you might want to test that your application handles network errors gracefully.

  • Test duplicated events and data loss scenarios.

  • Test that the Kafka Streams topologies integrate with your topic partitions.

You can use JUnit to implement integration tests.
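
For example, the following sketch uses the Testcontainers Kafka module to verify that the application can publish events to a real, containerized broker. The container image tag, the topic name, and the string serializers are assumptions for illustration.

import java.util.Map;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

@Testcontainers
public class KafkaIntegrationTest {

    // Testcontainers starts a disposable Kafka broker for the test run
    @Container
    static KafkaContainer kafka = new KafkaContainer(
        DockerImageName.parse("confluentinc/cp-kafka:7.4.0")
    );

    @Test
    public void testProducerPublishesToKafka() {
        // Configure a producer against the containerized broker
        Map<String, Object> config = Map.of(
            "bootstrap.servers", kafka.getBootstrapServers(),
            "key.serializer", StringSerializer.class.getName(),
            "value.serializer", StringSerializer.class.getName()
        );

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(config)) {
            // Verify that the application can publish an event
            assertDoesNotThrow(() ->
                producer.send(
                    new ProducerRecord<>("book-orders-created", "order-1", "payload")
                ).get()
            );
        }
    }
}

Testcontainers removes the broker container when the test run finishes, so each execution starts from a clean state.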

Implementing Functional Tests in EDAs

Functional and manual tests take the perspective of the user. Consequently, whether an application is event-driven or not is a detail hidden from functional tests. You might need to set up dedicated environments, however, which means you must deploy and configure the application and the entire infrastructure in a staging or a production-ready environment. This includes setting up Kafka with solutions such as Red Hat AMQ Streams or Red Hat OpenShift Streams for Apache Kafka.

To write functional tests in applications with a web user interface, you might want to use frameworks such as Selenium or Cypress. For other types of testing, such as API testing, you can use Quarkus HTTP testing or Microcks.

Note

EDAs are mainly eventually consistent. Eventual consistency means that after you request a change in the application state, the state might not be consistent immediately, but becomes consistent at some point in the future.

From a testing perspective, eventual consistency can make functional and integration testing more difficult.

For example, consider the use case to register a book in a book store. In a synchronous application, a functional test can consider the book as registered when the server sends a valid response. However, in an event-driven application, a valid response does not necessarily mean that the book is registered in the application state.
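
One common technique to deal with eventual consistency in tests is to poll the application state until it becomes consistent, failing after a timeout. The following sketch uses the Awaitility library; the registerBook and findBook helpers are hypothetical methods that request the change and query the application state.

import java.time.Duration;

import static org.awaitility.Awaitility.await;

...class omitted...

@Test
public void testBookIsEventuallyRegistered() {
    // Request the state change (hypothetical helper)
    registerBook("book-1");

    // Poll the application state until it converges, or fail after 10 seconds
    await()
        .atMost(Duration.ofSeconds(10))
        .untilAsserted(() ->
            // findBook is a hypothetical helper that queries the application state
            assertThat(findBook("book-1").isRegistered(), equalTo(true))
        );
}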
