After completing this section, you should be able to describe a single-site Red Hat OpenStack Platform overcloud architecture, including the purpose and layout of each of the default node roles.
A Red Hat OpenStack Platform deployment is often tailored to your organization's requirements. This section discusses a simple deployment that resides within a single physical location and uses predefined roles.
An OpenStack role is a collection of services that fulfill a given purpose. Each node in an OpenStack deployment has a role assigned, and several roles are predefined.
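One way to observe the result of role assignment is to query a running cloud for its service-to-host mapping. The following is a minimal sketch using the openstacksdk Python library; the cloud name "overcloud" is a hypothetical clouds.yaml entry, and admin credentials are assumed.

    import openstack

    # Connect using a clouds.yaml entry; "overcloud" is a hypothetical name.
    conn = openstack.connect(cloud="overcloud")

    # Each Compute service instance reports the host it runs on, which
    # reflects the role assigned to that node.
    for service in conn.compute.services():
        print(service.host, service.binary, service.status)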
A ceph-storage node operates as a member of a Ceph cluster, potentially providing storage for images managed by the Image service, instance disks, and instance shared storage.
A compute node is a hypervisor and runs all services in the compute role. Virtual machine workloads are run on compute nodes.
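Because every compute node is a hypervisor, the Compute service reports one hypervisor entry per compute node. The sketch below, again using openstacksdk with the hypothetical "overcloud" cloud entry and admin credentials, lists the hypervisors and the compute node hosting each instance.

    import openstack

    conn = openstack.connect(cloud="overcloud")  # hypothetical cloud name

    # One hypervisor entry per compute node.
    for hypervisor in conn.compute.hypervisors():
        print(hypervisor.name, hypervisor.status, hypervisor.state)

    # As an admin, list all instances together with the compute node
    # that hosts each one.
    for server in conn.compute.servers(all_projects=True):
        print(server.name, server.hypervisor_hostname)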
A controller node is the coordinating manager for the overcloud. All machines in an OpenStack cloud communicate with controller services using REST APIs. Individual service components communicate with each other using the Advanced Message Queuing Protocol (AMQP); Red Hat OpenStack Platform uses RabbitMQ as its AMQP message broker.
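The REST endpoints that clients use are registered in the Identity service catalog hosted on the controllers. As an illustration, this openstacksdk sketch (same hypothetical "overcloud" entry, admin credentials assumed) prints the catalog contents:

    import openstack

    conn = openstack.connect(cloud="overcloud")  # hypothetical cloud name

    # Service types registered in the catalog (compute, image, network, ...).
    for service in conn.identity.services():
        print(service.type, service.name)

    # The REST API endpoints exposed for those services.
    for endpoint in conn.identity.endpoints():
        print(endpoint.interface, endpoint.url)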
Director is the undercloud node used to build and manage the life cycle of the overcloud.
Network nodes provide network services. With OVN, network services are distributed across compute and controller nodes. If your OpenStack platform is very large or busy, you may want to move the networking services off the controllers and onto dedicated network nodes.
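You can confirm where networking services actually run by listing the network agents; with OVN, the agents appear on the compute and controller nodes rather than on dedicated network nodes. A minimal openstacksdk sketch, using the same hypothetical "overcloud" entry:

    import openstack

    conn = openstack.connect(cloud="overcloud")  # hypothetical cloud name

    # Each agent entry names the node it runs on; with OVN these are
    # spread across the compute and controller nodes.
    for agent in conn.network.agents():
        print(agent.host, agent.agent_type, agent.is_alive)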
The following diagram shows a simple single-site deployment using the predefined roles. The diagram also includes the director node used for deploying the environment.
The deployment above uses only the predefined roles.
It includes the recommended minimum of three clustered controller nodes, so that services remain highly available if a single node fails.
Calculate the minimum number of compute nodes from the expected load, then add one spare node so that all workloads can be restarted on the remaining nodes in the event of a single node failure.
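As a worked example of that N+1 sizing rule, the following sketch uses hypothetical capacity figures; substitute measurements from your own environment.

    import math

    expected_instances = 120   # hypothetical peak instance count
    instances_per_node = 25    # hypothetical capacity of one compute node

    # Nodes required to carry the load, plus one spare so that all
    # workloads can restart on the remaining nodes after a single
    # node failure.
    needed = math.ceil(expected_instances / instances_per_node)
    minimum_compute_nodes = needed + 1
    print(minimum_compute_nodes)   # ceil(120 / 25) + 1 = 6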
Instances that do not have their disks hosted on Ceph and that do not use shared storage are unaffected if the ceph-storage nodes become unavailable. This deployment uses the minimum of three ceph-storage nodes to provide redundancy in the case of a single node failure.
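To see why three nodes is the practical minimum, consider Ceph's common default of three data replicas: with the replicas spread across three nodes, two copies of every object survive a single node failure. The arithmetic below uses hypothetical capacity figures.

    nodes = 3                 # ceph-storage nodes in this deployment
    raw_per_node_tib = 10     # hypothetical raw capacity per node, in TiB
    replicas = 3              # common default replica count

    raw_total = nodes * raw_per_node_tib   # 30 TiB raw
    usable = raw_total / replicas          # roughly 10 TiB usable
    print(f"usable capacity: about {usable} TiB")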
The single-site architecture is common, but other layouts, such as Distributed Compute Node (DCN), also exist. In a DCN layout, groups of compute nodes are situated in physical locations separate from the controller nodes that manage them. This allows administrators to work from a central location while compute capacity is available in several locations closer to where it is needed. Distributed architectures are discussed in more detail in later chapters.
Further information is available in the Understanding the overcloud section of the Director Installation and Usage guide for Red Hat OpenStack Platform at https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/director_installation_and_usage/index#sect-Overcloud