Rocky Release Highlights

Barbican - Key Manager service

To produce a secret storage and generation system capable of providing key management for services wishing to enable encryption features.

Notes:

  • A new crypto plugin was added to allow secrets to be stored and generated in HashiCorp Vault. The plugin uses the castellan vault plugin to access the Vault (see the sketch after this list).
  • The PKCS#11 plugin was augmented so that encryption and HMAC algorithms and key parameters can be customized, easing integration with Thales and ATOS HSMs. Changes were also made to barbican-manage to allow the key types and algorithms to be customized.
  • The simple crypto plugin was augmented to allow the generation of 512-bit AES keys for XTS mode.
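
As a minimal illustration of the user-facing side of the new Vault-backed secret store, the sketch below stores and retrieves a secret with python-barbicanclient. Which backend holds the secret (for example, the new Vault plugin) is chosen by the operator in barbican.conf, so the client code is the same either way; the endpoint and credentials here are placeholder assumptions.

    # Minimal sketch: store and retrieve a secret via Barbican. The backing
    # plugin (e.g. Vault) is server-side configuration; nothing here changes.
    from keystoneauth1 import identity, session
    from barbicanclient import client

    auth = identity.Password(auth_url='http://controller:5000/v3',  # assumed
                             username='demo', password='secret',
                             project_name='demo',
                             user_domain_id='default',
                             project_domain_id='default')
    sess = session.Session(auth=auth)
    barbican = client.Client(session=sess)

    secret = barbican.secrets.create(name='db-encryption-key',
                                     payload='correct horse battery staple')
    secret_ref = secret.store()              # persisted by the active plugin
    print(barbican.secrets.get(secret_ref).payload)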

Blazar - Resource reservation service

Blazar’s goal is to provide resource reservations in OpenStack clouds for different resource types, both virtual (instances, volumes, etc.) and physical (hosts, storage, etc.).

Notes:

  • Host and instance reservations support multiple availability zones.
  • Resource monitoring supports periodic healing in order to avoid unnecessary reservation healing.
  • A time margin for cleaning up resources between back-to-back leases was introduced.

Cinder - Block Storage service

To implement services and libraries to provide on-demand, self-service access to Block Storage resources via abstraction and automation on top of other block storage devices.

Notes:

  • Improved user experience when recovering from storage system failures.
  • Security improvements enabled when creating volumes from signed images.
  • Numerous improvements that give administrators greater control over the placement of volumes.
  • Improved backup functionality and performance.

Cloudkitty - Rating service

CloudKitty is a rating component for OpenStack. Its goal is to process data from different metric backends and implement rating rule creation. Its role is to fit in between the raw metrics from OpenStack and the billing system of a provider for chargeback purposes.

Notes:

  • Cloudkitty can now collect non-OpenStack metrics using a new Prometheus collector or the gnocchi collector.
  • Improvements to the metrics configuration, making collection easier to configure.
  • A new, enhanced client that now includes the report writer.
  • Storage evolution: the pure gnocchi and gnocchihybrid storage backends were removed, and a new, more scalable and faster v2 storage backend was added.

Congress - Governance service

To provide governance as a service across any collection of cloud services in order to monitor, enforce, and audit policy over dynamic infrastructure.

Notes:

  • Webhook integration: Congress can now accept webhook notifications from the Monasca monitoring service and the Vitrage root-cause analysis service to enable immediate fault-management responses according to operator policy.
  • Z3 Engine: The experimental rule engine based on Microsoft Research’s Z3-prover opens the way to efficient policy inferences at previously impractical scales.
  • Bug Fixes: As with every release, we continue to make Congress more robust and stable than ever with bug fixes and internal improvements.

Cyborg - Accelerator Life Cycle Management

To provide a general management framework for accelerators (FPGA, GPU, SoC, NVMe SSD, DPDK/SPDK, eBPF/XDP, …).

Notes:

  • Added a client and the os-acc library for attach/detach operations.
  • Improved interaction with Placement via agent provider tree support.
  • Standardized metadata among drivers for capability reporting.
  • Added FPGA programming support.

Glance - Image service

To provide services and associated libraries to store, browse, share, distribute and manage bootable disk images, other data closely associated with initializing compute resources, and metadata definitions.

Notes:

  • Secure Hash Algorithm support, which allows operators to configure a self-describing secure hash that image consumers can use to verify image data integrity (see the sketch after this list).
  • Introduction of “hidden” images, a popular operator request that enables operators to keep outdated public images out of the default image-list response yet still available for server rebuilds.
  • The glance-manage utility has been updated to address OpenStack Security Note OSSN-0075.
  • An implementation of multiple backend support, which allows operators to configure multiple stores and allows end users to direct image data to a specific store, is introduced as the EXPERIMENTAL Image Service API version 2.8.
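
As a minimal sketch of how an image consumer might use the new secure hash, the following verifies downloaded image data against the image’s os_hash_algo and os_hash_value properties with python-glanceclient and hashlib; the auth details and image ID are placeholder assumptions.

    # Minimal sketch: verify Rocky's multihash after downloading image data.
    import hashlib
    from glanceclient import Client
    from keystoneauth1 import identity, session

    auth = identity.Password(auth_url='http://controller:5000/v3',  # assumed
                             username='demo', password='secret',
                             project_name='demo', user_domain_id='default',
                             project_domain_id='default')
    glance = Client('2', session=session.Session(auth=auth))

    image = glance.images.get('11111111-2222-3333-4444-555555555555')
    hasher = hashlib.new(image.os_hash_algo)   # self-describing, e.g. 'sha512'
    for chunk in glance.images.data(image.id):
        hasher.update(chunk)

    assert hasher.hexdigest() == image.os_hash_value, 'image data corrupted'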

Horizon - Dashboard

To provide an extensible, unified, web-based user interface for all OpenStack services.

Notes:

  • Cinder Generic Groups are now supported
  • Added quota management for server groups and server group members
  • Angularized Users and Server Groups panels were added to provide a better user experience

Ironic - Bare Metal service

To produce an OpenStack service and associated libraries capable of managing and provisioning physical machines, and to do this in a security-aware and fault-tolerant manner.

Notes:

  • Added the ability to define groups of conductors and nodes, enabling operators to define specific failure or management domains.
  • Added the ability to manage BIOS settings, with driver support for iRMC and iLO hardware types.
  • Added automatic recovery from power faults, removing a long-standing headache for operators.
  • Added a ramdisk deployment interface, enabling high-performance computing deployments to have diskless nodes.

Keystone - Identity service

To facilitate API client authentication, service discovery, distributed multi-tenant authorization, and auditing.

Notes:

  • Support for a new hierarchical enforcement model has been implemented in addition to several improvements to the unified limits APIs.
  • Parts of keystone’s API have been converted from a custom WSGI implementation to flask and flask-restful. This may affect people using custom middleware or injecting custom paste pipelines.
  • The token provider API has been refactored to have cleaner interfaces, reducing technical debt. Deployments using custom token providers may be affected.
  • Keystone now creates two default roles (member and reader) in addition to the admin role upon installation or bootstrap. These roles will be incorporated across other service policies by default in the future as an effort to simplify RBAC across OpenStack. Note that this might have an impact on deployments given case-sensitivity issues with role naming; you can read about case sensitivity within keystone in the keystone documentation, and a minimal check appears after this list.
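
A minimal check of the new defaults, using openstacksdk (the cloud name is a placeholder assumption taken from clouds.yaml); because role names are case sensitive, the comparison is exact:

    # Minimal sketch: confirm the default roles created at bootstrap exist.
    import openstack

    conn = openstack.connect(cloud='mycloud')          # assumed cloud entry
    role_names = {role.name for role in conn.identity.roles()}
    assert {'admin', 'member', 'reader'} <= role_names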

Kolla

To provide production-ready containers and deployment tools for operating OpenStack clouds.

Notes:

  • Added new docker images in kolla for logstash, monasca, prometheus, ravd, neutron-infoblox-ipam-driver and apache storm.
  • Allow operators to set resource limits in kolla-ansible deployed containers.
  • Implement Ceph Bluestore deployment in kolla and kolla-ansible.
  • Support deployment of prometheus, kafka and zookeeper via kolla-ansible.

Kuryr

To bridge container framework networking and storage models to OpenStack networking and storage abstractions.

Notes:

  • Added support for High Availability kuryr-controller in an Active/Passive model, enabling quick and transparent recovery in case kuryr-controller is lost.
  • Added native route support enabling L7 routing via Octavia Amphorae instead of iptables, providing more direct routing for load balancers and services.
  • Added support for namespace isolation, letting users isolate pods and services in different namespaces, implemented through security groups.
  • Added support for health checks of the CNI daemon, letting users confirm the CNI daemon’s functionality, set limits on resources like memory, and have it marked as unhealthy if needed, improving both stability and performance.
  • Added support for multi-vif based on Kubernetes Network Custom Resource Definition De-facto Standard spec defined by the Network Plumbing Working Group, allowing multiple interfaces per pod.

Manila - Shared File Systems service

To provide a set of services for management of shared file systems in a multitenant cloud environment, similar to how OpenStack provides for block-based storage management through the Cinder project.

Notes:

  • Metadata can be added to share access rules as key=value pairs with API microversion 2.45 (see the sketch after this list).
  • Added the Inspur AS13000 driver, which supports snapshot operations as well as all minimum driver features.
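
A minimal sketch of the new access-rule metadata call, issued directly against the Shares API with the microversion header set; the endpoint, token, project ID, and access-rule ID are placeholder assumptions.

    # Minimal sketch: set key=value metadata on a share access rule (>= 2.45).
    import requests

    MANILA = 'http://controller:8786/v2/PROJECT_ID'    # assumed endpoint
    HEADERS = {
        'X-Auth-Token': 'TOKEN',                       # assumed token
        'X-OpenStack-Manila-API-Version': '2.45',      # metadata needs 2.45+
    }

    resp = requests.put(
        MANILA + '/share-access-rules/ACCESS_RULE_ID/metadata',
        headers=HEADERS,
        json={'metadata': {'owner': 'team-a', 'purpose': 'ci'}})
    resp.raise_for_status()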

Masakari - Instances High Availability Service

Provide a high availability service for instances in OpenStack clouds by automatically recovering instances from failures.

Notes:

  • Support Introspective Instance Monitoring through QEMU Guest Agent.
  • Operators can now customize taskflow workflows to process each type of failure notification.

Mistral - Workflow service

Provide a simple YAML-based language to write workflows (tasks and transition rules), and a service that lets users upload, modify, and run them at scale and in a highly available manner, and manage and monitor workflow execution state and the state of individual tasks. A minimal workflow sketch follows the notes below.

Notes:

  • Introduced the notifier service for sending Mistral events.
  • Added Mistral actions for Swift service methods, Vitrage, Zun, Qinling and Manila.
  • Executors now have a heartbeat check to verify they are still running. This helps resolve workflows that can become stuck in the RUNNING state when an executor dies.
  • Improved performance and reduced memory footprint.
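
To make the workflow language concrete, here is a minimal two-task workflow in the YAML DSL, written to a file so it could then be uploaded with, for example, the mistral CLI (`mistral workflow-create hello.yaml`); the file name and upload step are illustrative assumptions.

    # Minimal sketch: a two-task Mistral v2 workflow definition.
    import textwrap

    workflow = textwrap.dedent("""\
        version: '2.0'

        hello_workflow:
          tasks:
            say_hello:
              action: std.echo output="Hello"
              on-success: say_goodbye
            say_goodbye:
              action: std.echo output="Goodbye"
        """)

    with open('hello.yaml', 'w') as handle:
        handle.write(workflow)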

Neutron - Networking service

To implement services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction.

Notes:

  • Per-port TCP/UDP forwarding on floating IPs is supported, letting operators conserve the global IP addresses used for floating IPs (see the sketch after this list).
  • Multiple bindings for compute-owned ports are supported, for better server live migration support.
  • Filter parameters are now validated when listing resources. Previously, filtering behavior was unclear to API users; this release improves both the API behavior for resource filtering and its documentation in the Neutron API reference.
  • (fwaas) Logging on firewall events is supported. It is useful for operators to debug FWaaS.
  • (vpnaas) Newer versions of libreswan (3.19+) are supported so that operators can run neutron-vpnaas IPsec VPN on newer distributions.
  • (ovn) Support migration from an existing ML2OVS TripleO deployment to an ML2OVN TripleO deployment.
  • (bagpipe) bagpipe-bgp, a reference implementation of Neutron BGP VPN support, supports E-VPN with OVS.
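
A minimal sketch of the new floating IP port forwarding call, issued directly against the Neutron API; the endpoint, token, floating IP ID, internal port ID, and addresses are placeholder assumptions.

    # Minimal sketch: forward external TCP port 2222 on a floating IP to
    # port 22 of an internal port.
    import requests

    NEUTRON = 'http://controller:9696/v2.0'            # assumed endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN'}                # assumed token

    body = {'port_forwarding': {
        'protocol': 'tcp',
        'external_port': 2222,
        'internal_port': 22,
        'internal_ip_address': '10.0.0.12',
        'internal_port_id': 'INTERNAL_PORT_ID',
    }}
    resp = requests.post(NEUTRON + '/floatingips/FLOATING_IP_ID/port_forwardings',
                         headers=HEADERS, json=body)
    resp.raise_for_status()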

Nova - Compute service

To implement services and associated libraries to provide massively scalable, on demand, self service access to compute resources, including bare metal, virtual machines, and containers.

Notes:

  • Improvements were made to minimize network downtime during live migrations. In addition, the libvirt driver is now capable of live migrating between different types of networking backends, for example, linuxbridge => OVS.
  • Handling of boot from volume instances when the Nova compute host does not have enough local storage has been improved.
  • Operators can now disable a cell to make sure no new instances are scheduled there. This is useful for operators to introduce new cells to the deployment and for maintenance of existing cells.
  • Security enhancements were made when using Glance signed images with the libvirt compute driver.
  • A nova-manage db purge command is now available to help operators with maintenance and avoid bloat in their database.
  • The placement service now supports granular RBAC policy rules configuration. See the placement policy documentation for details.

Octavia - Load-balancer service

To provide scalable, on demand, self service access to load-balancer services, in a technology-agnostic manner.

Notes:

  • Octavia dashboard details pages now automatically refresh the load balancer status.
  • Octavia now supports provider drivers, allowing third party load balancing drivers to be integrated with the Octavia v2 API.
  • UDP protocol load balancing has been added to Octavia. This is useful for IoT use cases.
  • Pools can have backup members, also known as “sorry servers”, that respond when all of the members of a pool are unavailable.
  • Users can now configure load balancer timeouts per listener (see the sketch after this list).
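
A minimal sketch combining two of the new features, a UDP listener with per-listener timeouts, using openstacksdk; the cloud name and load balancer ID are placeholder assumptions, and this assumes an SDK recent enough to pass the Rocky timeout attributes through to the Octavia v2 API.

    # Minimal sketch: create a UDP listener with per-listener timeouts (ms).
    import openstack

    conn = openstack.connect(cloud='mycloud')          # assumed cloud entry
    listener = conn.load_balancer.create_listener(
        name='dns-listener',
        loadbalancer_id='LB_ID',                       # assumed LB ID
        protocol='UDP',
        protocol_port=53,
        timeout_client_data=50000,                     # Rocky per-listener
        timeout_member_data=50000)                     # timeout attributes
    print(listener.id)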

Qinling - Function as a Service

Provide a serverless platform for managing functions that can be executed in a scalable, highly-available manner.

Notes:

  • Support for function versioning, so that users can create different versions of the same function.
  • Support for function aliases: an alias is a pointer to a specific Qinling function version and can be updated to point to a different version.
  • Use a secure connection to the Kubernetes cluster in the default deployment.
  • Support for defining untrusted workloads for the runtime.
  • Node.js runtime support (experimental).
  • Python runtime security improvements.
  • Documentation improvements.

Sahara - Data Processing service

To provide a scalable data processing stack and associated management interfaces.

Notes:

  • Plugin versions were updated.
  • Added support for booting from volume.
  • Added the ability to use S3 as a job binary and data source backend.

Storlets - Compute inside Object Storage service

To enable a user-friendly, cost-effective, scalable, and secure way to execute storage-centric user-defined functions near the data within OpenStack Swift.

Notes:

  • Refactored the internal communication used to invoke Storlets, making it more stable.
  • Completed Python 3 integration for unit tests.

Swift - Object Storage service

Notes:

  • Added an S3 API compatibility layer, so existing S3 clients and tooling can talk to a Swift cluster (see the sketch after this list).
  • Added container sharding, an operator controlled feature that may be used to shard very large container databases into a number of smaller shard containers. This mitigates the issues with one large DB by distributing the data across multiple smaller databases throughout the cluster.
  • TempURLs now support IP range restrictions.
  • The trivial keymaster and the KMIP keymaster now support multiple root encryption secrets to enable key rotation.
  • Improved performance of many consistency daemon processes.
  • Added support for the HTTP PROXY protocol to allow for accurate client IP address logging when the connection is routed through external systems.
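
A minimal sketch of the S3 compatibility layer from the client side, using boto3; the proxy endpoint and the EC2-style credentials (typically issued through Keystone) are placeholder assumptions.

    # Minimal sketch: use a stock S3 client against a Swift cluster.
    import boto3

    s3 = boto3.client('s3',
                      endpoint_url='http://swift-proxy.example.com:8080',
                      aws_access_key_id='EC2_ACCESS_KEY',         # assumed
                      aws_secret_access_key='EC2_SECRET_KEY')     # assumed

    s3.create_bucket(Bucket='backups')
    s3.put_object(Bucket='backups', Key='db.dump', Body=b'...')
    listing = s3.list_objects_v2(Bucket='backups')
    print([obj['Key'] for obj in listing.get('Contents', [])])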

Tacker - NFV Orchestration service

To implement Network Function Virtualization (NFV) Orchestration services and libraries for end-to-end life-cycle management of Network Services and Virtual Network Functions (VNFs).

Notes:

  • VNF Forwarding Graph support for Network Service. Operators can describe VNFFGD inside NSD to define the network connections among all VNFs.
  • OpenStack Client support for Tacker.
  • Placement policy support. Operators can specify placement policies (like affinity, anti-affinity) for VDUs in a VNF.

Trove - Database service

To provide scalable and reliable Cloud Database as a Service functionality for both relational and non-relational database engines, and to continue to improve its fully-featured and extensible open source framework.

Notes:

  • Continued migrating to OpenStackClient
  • Fixed remaining Python 3 porting problems
  • Django 2.0 support
  • Improved Python 3 support; all Python 3 unit tests are now enabled
  • Removed support for creating volumes through Nova

Vitrage - RCA (Root Cause Analysis) service

To organize, analyze and visualize OpenStack alarms & events, yield insights regarding the root cause of problems and deduce their existence before they are directly detected.

Notes:

  • New API for alarm history
  • New UI for alarm history
  • Fast-failover of vitrage-graph for better High-Availability support
  • New Kubernetes datasource, for Kubernetes cluster as a workload on OpenStack
  • New Prometheus datasource, to handle alerts coming from Prometheus

Watcher - Infrastructure Optimization service

Watcher’s goal is to provide a flexible and scalable resource optimization service for multi-tenant OpenStack-based clouds.

Notes:

  • Watcher services can be launched in HA mode. From now on Watcher Decision Engine and Watcher Applier services may be deployed on different nodes to run in active-active or active-passive mode.
  • Added a host maintenance strategy to prepare compute node for maintenance by cleaning up instances via migration.
  • Added the ability to exclude instances from audit scope based on project_id.
  • Added the ability to ignore specified instances during a strategy’s run (a strategy being an optimization implementation in Watcher) while still considering the workload they produce.

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.