Configuring the Placement Service

Objectives

After completing this section, you should be able to describe and manage the Placement service and the filters that are used to control where application instances launch.

Describing the Placement Service

Red Hat OpenStack Platform 10 introduced the Placement service within the Nova repository; in Red Hat OpenStack Platform 15, it was extracted into its own Placement repository. The Placement service offers a REST API stack and data model used to track resource provider inventories and usages. A database on the controllers gathers information from the infrastructure and stores attributes and locations, such as how much CPU is available and how much memory is used and free. The data model is used to find resources, for example, compute nodes, storage pools, or an IP allocation pool. The resources of each provider are tracked by the Placement service. Resources on compute nodes are RAM and CPU. Resources on storage nodes are disks. Consumed resources are tracked as classes.

When nodes are first created in the overcloud, the Placement service is populated with their resource information. This information is stored in the Placement database as resource records. Whenever an instance is deployed to a specific node, started, or stopped, the Placement service is updated with the changes to available resources on that specific node. In this way, the Placement service always has an accurate accounting of the load on each compute node and the available resources.

Therefore, when a new instance is about to be scheduled, the Nova scheduler does not need to poll the compute nodes to determine where to deploy the instance. The scheduler queries the Placement service for the most current information and makes a decision based on that. This approach might seem obvious, but it only became possible with the introduction of the Placement service. Previously, the scheduler tried to keep track of its own scheduling decisions, and it was not aware of important changes as they occurred on compute nodes.

You must install the python3-osc-placement package to query the Placement service.
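On a system with the OpenStack client already installed, the plug-in can be added as follows (a sketch; the package name assumes the Red Hat OpenStack Platform repositories are enabled):

```shell
# Install the Placement CLI plug-in for the openstack client
sudo dnf install -y python3-osc-placement

# Verify that the placement subcommands are now available
openstack resource provider list
```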

Tracking Resources

Cloud environments contain various kinds of resources. Resources provided by the compute nodes include RAM, CPU, PCI devices, and ephemeral disks. Other resources, for example shared storage, are provided by an external resource pool. The Placement service manages these resources, using the following terminology:

Resource providers

Resource providers are entities that provide an inventory of classes of resources, such as disks or RAM. A resource provider is generic; it is anything that has resources associated with it. For example, in Nova a resource provider is a compute host. Resource providers report consumption information to all consumers of the resources, and update the capacity and usage information.

Use the openstack resource provider command to manage resource providers.

[user@demo ~(admin)]$ openstack resource provider list -f json
[
{
  "uuid": "b923f2aa-7e69-4de5-8c05-632924ed7467",
  "name": "computehci0.overcloud.example.com",
  "generation": 8
},
{
  "uuid": "eef8c2ae-7245-4721-8eb5-25e33394e775",
  "name": "compute0.overcloud.example.com",
  "generation": 8
},
{
  "uuid": "a0febbac-01aa-4580-9e91-e4d12ebc50e0",
  "name": "compute1.overcloud.example.com",
  "generation": 6
}
]
Resource classes

Resource classes are entities that indicate standard or specific resources that can be provided by a resource provider, such as VCPUs, memory, and disks. Resource classes may include a unit of measurement in their name, for example, the MEMORY_MB class.

Use the openstack resource class command to manage resource classes.

[user@demo ~(admin)]$ openstack resource class list
+----------------------------+
| name                       |
+----------------------------+
| VCPU                       |
| MEMORY_MB                  |
| DISK_GB                    |
| PCI_DEVICE                 |
| SRIOV_NET_VF               |
| NUMA_SOCKET                |
| NUMA_CORE                  |
| NUMA_THREAD                |
| NUMA_MEMORY_MB             |
| IPV4_ADDRESS               |
| VGPU                       |
| VGPU_DISPLAY_HEAD          |
| NET_BW_EGR_KILOBIT_PER_SEC |
| NET_BW_IGR_KILOBIT_PER_SEC |
| PCPU                       |
| MEM_ENCRYPTION_CONTEXT     |
| FPGA                       |
| PGPU                       |
+----------------------------+
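Besides the standard classes above, administrators can define custom resource classes; custom names must start with the CUSTOM_ prefix. A minimal sketch (the class name here is hypothetical):

```shell
# Create a custom resource class, for example for gold-tier bare-metal nodes
openstack resource class create CUSTOM_BAREMETAL_GOLD

# Confirm that the new class appears in the list
openstack resource class list | grep CUSTOM_BAREMETAL_GOLD
```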
Inventory, allocation, and consumer

The resource provider, for example a compute node, has an inventory of those resource classes. An allocation represents those resources used by a consumer. A consumer of resources on a compute node is usually an instance.
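These relationships can be inspected with the osc-placement plug-in. For example, to view the inventory of a resource provider and the allocations held by a consumer (the provider UUID below is taken from the earlier listing; the instance UUID is a placeholder):

```shell
# List the inventory of resource classes on a resource provider
openstack resource provider inventory list eef8c2ae-7245-4721-8eb5-25e33394e775

# Show the allocations held by a consumer, such as an instance
openstack resource provider allocation show <instance-uuid>
```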

Resource provider aggregates

Provider aggregates are used for modeling relationships among resource providers, and to create groups of them. Provider aggregates can create anti-affinity or affinity relationships such as physical location. Provider aggregates can create groups of compute host providers corresponding to Nova host aggregates or availability zones.
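A sketch of managing placement aggregates with the CLI (the aggregate UUID is a placeholder, and newer osc-placement releases may also require the provider generation to be passed):

```shell
# Associate a resource provider with a placement aggregate
openstack resource provider aggregate set \
  --aggregate <aggregate-uuid> \
  eef8c2ae-7245-4721-8eb5-25e33394e775

# List the aggregates that a resource provider belongs to
openstack resource provider aggregate list eef8c2ae-7245-4721-8eb5-25e33394e775
```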

Qualitative Resources or Traits

The resource provider manages the quantitative aspects of a boot request. It has a collection of inventory and allocation objects to manage these quantitative requests. When an instance consumes resources from a resource provider, the allocations are subtracted from the inventories. However, the resource provider also needs to manage nonconsumable, or qualitative, resources.

For example, a user may request 80 GB of disk space for an instance. This resource is quantitative. However, the user may also request an SSD disk. This resource is qualitative.

A REST resource in the placement API manages qualitative information. There are standard traits and custom traits. Standard traits can be used across different cloud environments. Standard traits cannot be modified.

Custom traits are used by administrative users to define nonstandard qualitative information. This information is used by the resource providers. By managing the characteristics of the resource providers, the Placement service can help the scheduler make better placement decisions. The traits API is used to store and query the qualitative resources.

For example, an administrator can add a trait to an existing resource provider aggregate, tagging it as SSD storage. This standard trait is then read by the compute node resource provider. When a user creates a new instance, specifying that it must use an SSD drive, the scheduler sends that information to the Placement service. The Placement service uses the aggregate to return the relevant information to the scheduler.
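A custom trait for SSD-backed storage might be created and applied as follows (the trait name is illustrative; custom trait names must start with CUSTOM_):

```shell
# Create a custom trait
openstack trait create CUSTOM_STORAGE_SSD

# Apply the trait to a resource provider
openstack resource provider trait set \
  --trait CUSTOM_STORAGE_SSD \
  eef8c2ae-7245-4721-8eb5-25e33394e775
```

Note that the trait set subcommand replaces the provider's entire trait list, so any existing traits that must be kept should be re-specified with additional --trait options.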

In previous versions of OpenStack, resource provider aggregate metadata would have been used to manage this qualitative data. However, this practice is hard to manage and is not scalable. Further, aggregate metadata only works for compute node resources.

Eventually the use of traits will deprecate resource provider aggregate metadata, but aggregates will remain as they control much more than just compute node metadata.

Use the openstack trait list command to list all created traits. Use the openstack resource provider trait list command to list the traits of a resource provider. Use the openstack resource provider trait set command to associate traits with a resource provider.

[user@demo ~(admin)]$ openstack resource provider \
> trait list eef8c2ae-7245-4721-8eb5-25e33394e775
+---------------------------------------+
| name                                  |
+---------------------------------------+
| HW_CPU_X86_MMX                        |
| COMPUTE_IMAGE_TYPE_ARI                |
| COMPUTE_IMAGE_TYPE_AMI                |
| COMPUTE_VOLUME_ATTACH_WITH_TAG        |
| COMPUTE_TRUSTED_CERTS                 |
| HW_CPU_X86_SSE                        |
| COMPUTE_IMAGE_TYPE_ISO                |
| COMPUTE_VOLUME_MULTI_ATTACH           |
| COMPUTE_IMAGE_TYPE_QCOW2              |
| COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG |
| HW_CPU_X86_SVM                        |
| HW_CPU_X86_SSE2                       |
| COMPUTE_VOLUME_EXTEND                 |
| COMPUTE_IMAGE_TYPE_AKI                |
| COMPUTE_DEVICE_TAGGING                |
| COMPUTE_IMAGE_TYPE_RAW                |
| COMPUTE_NET_ATTACH_INTERFACE          |
+---------------------------------------+

Use the openstack allocation candidate list command to list the possible allocation candidates.

[user@demo ~(admin)]$ openstack allocation candidate \
> list --resource VCPU=2 -f json
[
  {
    "#": 1,
    "allocation": "VCPU=2",
    "resource provider": "eef8c2ae-7245-4721-8eb5-25e33394e775",
    "inventory used/capacity": "VCPU=0/32",
    "traits": "HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,...output omitted..."
  },
  ...output omitted...
]

As the domain operator, you can use Placement service traits to specify resource provider requirements in two ways.

Image metadata

To request a trait with image metadata, modify the image properties using the openstack image set command.

[user@demo ~(admin)]$ openstack image set \
> --property trait:HW_CPU_X86_AVX512BW=required \
> $IMAGE
Flavor extra specs

To request a trait with the flavor specifications, modify the flavor properties using the openstack flavor set command.

[user@demo ~(admin)]$ openstack flavor set \
> --property trait:HW_CPU_X86_AVX512BW=required \
> $FLAVOR

Host Aggregates

Host aggregates are a method to group hypervisor hosts based on configurable metadata. For example, hosts may be grouped based on hardware features, capabilities, or performance characteristics, such as CPU pinning. Host aggregates are not visible to users, but are used automatically for instance scheduling. A compute node can be included in multiple host aggregates. To specify the features required for an instance deployment, administrators can build a flavor with extra specifications that match the metadata of an available host aggregate. At deployment, the compute scheduler matches the request declared in the flavor by scheduling the provisioning on a compute host in a matching host aggregate.
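A sketch of that workflow, assuming an aggregate of SSD-backed hosts and the AggregateInstanceExtraSpecsFilter scheduler filter (the aggregate, property, and flavor names are illustrative):

```shell
# Create a host aggregate and tag it with metadata
openstack aggregate create --property ssd=true ssd-hosts

# Add a compute node to the aggregate
openstack aggregate add host ssd-hosts compute0.overcloud.example.com

# Give a flavor extra specs that match the aggregate metadata
openstack flavor set \
  --property aggregate_instance_extra_specs:ssd=true \
  m1.ssd
```

Instances launched with the m1.ssd flavor are then scheduled only on hosts in the ssd-hosts aggregate.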

Nova host aggregates are different from placement aggregates. The main differences are listed below.

  • In Nova, a host aggregate associates one Nova compute service with other Nova compute services. Placement aggregates are not specific to a Nova compute service. A resource provider in the Placement API is generic; placement aggregates are groups of generic resource providers. This is a significant difference, especially for Ironic, which, when used with Nova, has many Ironic bare-metal nodes attached to a single Nova compute service. In the Placement API, each Ironic bare-metal node is its own resource provider and can be associated with other Ironic bare-metal nodes through a placement aggregate association.

  • In Nova, a host aggregate may have metadata key-value pairs attached to it, and all services associated with a Nova host aggregate share the same metadata. Placement aggregates have no such metadata; instead, resource providers may have traits that provide qualitative information.

  • In Nova, a host aggregate dictates the availability zone within which Nova compute services reside. Even though placement aggregates may be used to model availability zones, they have no inherent concept of one.

 

References

Placement Service

Revision: cl110-16.1-4c76154