Rocky Release Highlights


Barbican

  • A new crypto plugin allows secrets to be stored and generated in HashiCorp Vault; it uses the castellan vault plugin to access the Vault.

  • The PKCS#11 plugin now lets encryption and HMAC algorithms and key parameters be customized, easing integration with Thales and ATOS HSMs. barbican-manage was also updated so that key types and algorithms can be customized.

  • The simple crypto plugin can now generate 512-bit AES keys for XTS mode.
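As a sketch of how the Vault plugin might be enabled, the fragment below shows the general shape of the configuration; the section and option names are assumptions based on the plugin's documentation, so verify them against your release's barbican.conf reference.

```ini
# barbican.conf -- illustrative fragment; section and option names are
# assumptions, not a definitive reference.
[secretstore]
enabled_secretstore_plugins = vault_plugin

[vault_plugin]
vault_url = https://vault.example.com:8200
root_token_id = <vault-root-token>
use_ssl = True
```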


Blazar

  • Host and instance reservations now support multiple availability zones.

  • Resource monitoring supports periodic healing in order to avoid unnecessary reservation healing.

  • A time margin for cleaning up resources between back-to-back leases was introduced.


Cinder

  • Improved user experience when recovering from storage system failures.

  • Security improvements enabled when creating volumes from signed images.

  • Numerous improvements that give administrators greater control over the placement of volumes.

  • Improved backup functionality and performance.


Cloudkitty

  • Cloudkitty can now collect non-OpenStack metrics using a new Prometheus collector or the gnocchi collector.

  • The metrics configuration was reworked to make configuration easier.

  • A new, enhanced client that now includes the writer.

  • Storage has evolved: the pure gnocchi and gnocchihybrid storage backends were removed, and a new, more scalable and faster storage backend (v2) was added.


Congress

  • Webhook integration: Congress can now accept webhook notifications from Monasca monitoring service and Vitrage root-cause analysis service to enable immediate fault-management responses according to operator policy.

  • Z3 Engine: The experimental rule engine based on Microsoft Research’s Z3-prover opens the way to efficient policy inferences at previously impractical scales.

  • Bug Fixes: As with every release, we continue to make Congress more robust and stable than ever with bug fixes and internal improvements.


Cyborg

  • Added a client and the os-acc library for attach/detach.

  • Improved interaction with Placement via agent provider tree support.

  • Standardized metadata among drivers for capability reporting.

  • Added FPGA programming support.


Glance

  • Secure hash algorithm support, which allows operators to configure a self-describing secure hash that image consumers can use to verify image data integrity.

  • Introduction of “hidden” images, a popular operator request that enables operators to keep outdated public images out of the default image-list response yet still available for server rebuilds.

  • The glance-manage utility has been updated to address OpenStack Security Note OSSN-0075.

  • An implementation of multiple backend support, which allows operators to configure multiple stores and lets end users direct image data to a specific store, is introduced as the EXPERIMENTAL Image Service API version 2.8.
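On the consumer side, verifying a downloaded image against the new secure hash can be sketched as follows. The `os_hash_algo` and `os_hash_value` image properties carry the algorithm name and digest (sha512 by default); the helper function below is purely illustrative.

```python
import hashlib

def verify_image(data: bytes, os_hash_algo: str, os_hash_value: str) -> bool:
    """Recompute the image's secure hash and compare it to the value
    reported via the os_hash_algo/os_hash_value image properties."""
    digest = hashlib.new(os_hash_algo)
    digest.update(data)
    return digest.hexdigest() == os_hash_value

# Example with the default algorithm, sha512:
data = b"example image payload"
expected = hashlib.sha512(data).hexdigest()
assert verify_image(data, "sha512", expected)
```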


Horizon

  • Cinder Generic Groups are now supported.

  • Added quota management for server groups and server group members.

  • Angularized Users and Server Groups panels were added to provide a better user experience.


Ironic

  • Added the ability to define groups of conductors and nodes, enabling operators to define specific failure or management domains.

  • Added ability to manage BIOS settings, with driver support for iRMC and iLO hardware types.

  • Added automatic recovery from power faults, removing a long-standing headache for operators.

  • Added a ramdisk deployment interface enabling high performance computing deployments to have diskless nodes.


Keystone

  • Support for a new hierarchical enforcement model has been implemented in addition to several improvements to the unified limits APIs.

  • Parts of keystone’s API have been converted from a custom WSGI implementation to Flask and flask-restful. This may affect people using custom middleware or injecting custom paste pipelines.

  • The token provider API has been refactored to have cleaner interfaces, reducing technical debt. Deployments using custom token providers may be affected.

  • Keystone now creates two default roles (member and reader) in addition to the admin role upon installation or bootstrap. These roles will be incorporated into other services’ default policies in the future as an effort to simplify RBAC across OpenStack. Note that this might affect deployments because of case-sensitivity issues with role naming; keystone’s documentation covers case sensitivity in more detail.
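To illustrate where this is headed, a service’s policy file could reference the new default roles in a check string such as the one below. This is a hypothetical rule written for illustration, not keystone’s shipped policy.

```yaml
# policy.yaml fragment (hypothetical rule): any of the three default
# roles created at bootstrap grants this read-only operation.
"identity:list_projects": "role:reader or role:member or role:admin"
```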


Kolla

  • Added new docker images in kolla for logstash, monasca, prometheus, ravd, neutron-infoblox-ipam-driver and apache storm.

  • Allow operators to set resource limits in kolla-ansible deployed containers.

  • Implement Ceph Bluestore deployment in kolla and kolla-ansible.

  • Support deployment of prometheus, kafka and zookeeper via kolla-ansible.


Kuryr

  • Added support for High Availability kuryr-controller in an Active/Passive model, enabling quick and transparent recovery in case kuryr-controller is lost.

  • Added native route support enabling L7 routing via Octavia Amphorae instead of iptables, providing a more direct routing for load balancers and services.

  • Added support for namespace isolation, letting users isolate pods and services in different namespaces, implemented through security groups.

  • Added support for health checks of the CNI daemon, letting users confirm the daemon’s functionality, cap its resource usage (for example, memory), and have it marked as unhealthy when needed, improving both stability and performance.

  • Added support for multi-vif based on Kubernetes Network Custom Resource Definition De-facto Standard spec defined by the Network Plumbing Working Group, allowing multiple interfaces per pod.


Manila

  • Metadata can be added to share access rules as key=value pairs with API microversion 2.45.

  • Added Inspur AS13000 driver which supports snapshot operation as well as all minimum driver features.
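A minimal sketch of the request the metadata feature adds, assuming the v2 share-access-rules metadata endpoint; the path and payload shape follow the Shared File Systems API reference, and the helper below is purely illustrative (send the result with any HTTP client after adding an X-Auth-Token header).

```python
import json

API_VERSION = "2.45"  # first microversion supporting access-rule metadata

def build_set_metadata_request(endpoint, access_rule_id, metadata):
    """Build (method, url, headers, body) for tagging a share access
    rule with key=value metadata."""
    url = f"{endpoint}/share-access-rules/{access_rule_id}/metadata"
    headers = {
        "Content-Type": "application/json",
        # Opt in to the microversion that introduced this feature.
        "X-OpenStack-Manila-API-Version": API_VERSION,
    }
    body = json.dumps({"metadata": metadata})
    return "PUT", url, headers, body

method, url, headers, body = build_set_metadata_request(
    "https://cloud.example/share/v2/PROJECT_ID",
    "ACCESS_RULE_ID",
    {"purpose": "nightly-backup"},
)
```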


Masakari

  • Support for introspective instance monitoring through the QEMU Guest Agent.

  • Operators can now customize taskflow workflows to process each type of failure notification.


Mistral

  • Introduced the notifier service for sending Mistral events.

  • Added Mistral actions for Swift service methods, Vitrage, Zun, Qinling and Manila.

  • Executors now have a heartbeat check to verify they are still running. This helps resolve workflows that can become stuck in the RUNNING state when an executor dies.

  • Improved performance and reduced memory footprint.


Neutron

  • Per TCP/UDP port forwarding on floating IPs is now supported, allowing operators to reduce the number of global IP addresses needed for floating IPs.

  • Multiple bindings for compute-owned ports are supported for better server live migration support.

  • Validation is now performed on filter parameters when listing resources. Previously, filter parameters were unclear for API users; this release improves the API behavior on resource filtering and significantly improves its documentation in the Neutron API reference.

  • (fwaas) Logging of firewall events is supported, which helps operators debug FWaaS.

  • (vpnaas) Newer versions of libreswan (3.19+) are supported so that operators can run neutron-vpnaas IPsec VPN on newer distributions.

  • (ovn) Support migration from an existing ML2OVS TripleO deployment to ML2OVN TripleO deployment.

  • (bagpipe) bagpipe-bgp, a reference implementation of Neutron BGP VPN support, supports E-VPN with OVS.
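Port forwarding is exposed as a sub-resource of the floating IP: a forwarding is created with `POST /v2.0/floatingips/{floatingip_id}/port_forwardings`. A request body might look like the fragment below (field names per the Neutron API reference; the UUID is a placeholder).

```json
{
  "port_forwarding": {
    "protocol": "tcp",
    "external_port": 2222,
    "internal_ip_address": "10.0.0.5",
    "internal_port": 22,
    "internal_port_id": "INTERNAL-PORT-UUID"
  }
}
```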


Nova

  • Improvements were made to minimize network downtime during live migrations. In addition, the libvirt driver is now capable of live migrating between different types of networking backends, for example, linuxbridge => OVS.

  • Handling of boot from volume instances when the Nova compute host does not have enough local storage has been improved.

  • Operators can now disable a cell to make sure no new instances are scheduled there. This is useful for operators to introduce new cells to the deployment and for maintenance of existing cells.

  • Security enhancements were made when using Glance signed images with the libvirt compute driver.

  • A nova-manage db purge command is now available to help operators with maintenance and avoid bloat in their database.

  • The placement service now supports granular RBAC policy rules configuration. See the placement policy documentation for details.


Octavia

  • Octavia dashboard details pages now automatically refresh the load balancer status.

  • Octavia now supports provider drivers, allowing third party load balancing drivers to be integrated with the Octavia v2 API.

  • UDP protocol load balancing has been added to Octavia. This is useful for IoT use cases.

  • Pools can have backup members, also known as “sorry servers”, that respond when all regular members of a pool are unavailable.

  • Users can now configure load balancer timeouts per listener.
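The per-listener timeouts are millisecond-valued fields on the listener object; an update body might look like the fragment below (field names per the Octavia v2 API, values chosen only as an example).

```json
{
  "listener": {
    "timeout_client_data": 50000,
    "timeout_member_connect": 5000,
    "timeout_member_data": 50000,
    "timeout_tcp_inspect": 0
  }
}
```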


Qinling

  • Support for function versioning, so users can create multiple versions of a function that serves the same purpose.

  • Support for function aliases: an alias points to a specific Qinling function version and can be updated to point to a different version.

  • The default deployment now uses a secure connection to the Kubernetes cluster.

  • Support for defining untrusted workloads for the runtime.

  • Node.js runtime support (experimental).

  • Python runtime security improvements.

  • Documentation improvements.


Sahara

  • Plugin versions were updated.

  • Added support for booting from volume.

  • Added the ability to use S3 as a job binary and data source backend.


Storlets

  • Refactored the internal communication used to invoke Storlets, making it more stable.

  • Completed Python 3 integration for the unit tests.


Swift

  • Added an S3 API compatibility layer, so existing S3 clients can talk to a Swift cluster.

  • Added container sharding, an operator controlled feature that may be used to shard very large container databases into a number of smaller shard containers. This mitigates the issues with one large DB by distributing the data across multiple smaller databases throughout the cluster.

  • TempURLs now support IP range restrictions.

  • The trivial keymaster and the KMIP keymaster now support multiple root encryption secrets to enable key rotation.

  • Improved performance of many consistency daemon processes.

  • Added support for the HTTP PROXY protocol to allow for accurate client IP address logging when the connection is routed through external systems.
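Enabling the S3 compatibility layer is a proxy-server configuration change; the fragment below is a sketch, and the exact pipeline ordering is an assumption — consult the Swift middleware documentation for the correct placement in your deployment.

```ini
# proxy-server.conf -- illustrative fragment
[pipeline:main]
# s3api must sit in the pipeline ahead of the auth middleware
pipeline = catch_errors proxy-logging cache s3api tempauth proxy-logging proxy-server

[filter:s3api]
use = egg:swift#s3api
```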


Tacker

  • VNF Forwarding Graph support for Network Services: operators can describe a VNFFGD inside an NSD to define the network connections among all VNFs.

  • OpenStack Client support for Tacker.

  • Placement policy support. Operators can specify placement policies (like affinity, anti-affinity) for VDU’s in a VNF.


Trove

  • Continued migrating to OpenStackClient.

  • Fixed remaining Python 3 porting problems.

  • Django 2.0 support.

  • Improved Python 3 support; all Python 3 unit tests are now enabled.

  • Removed support for creating volumes through Nova.


Vitrage

  • New API for alarm history

  • New UI for alarm history

  • Fast-failover of vitrage-graph for better High-Availability support

  • New Kubernetes datasource, for Kubernetes clusters running as workloads on OpenStack.

  • New Prometheus datasource, to handle alerts coming from Prometheus


Watcher

  • Watcher services can now be launched in HA mode: the Watcher Decision Engine and Watcher Applier services may be deployed on different nodes and run in active-active or active-passive mode.

  • Added a host maintenance strategy to prepare compute node for maintenance by cleaning up instances via migration.

  • Instances can now be excluded from the audit scope based on project_id.

  • Added the ability to ignore specified instances while a strategy (an implementation in Watcher) runs, but still consider the workload they produce.

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.