Wallaby Release Highlights


These are significant changes reported directly from the project teams and have not been processed in any way. Some highlights may be more significant than others. Please do not take this list as a definitive set of highlights for the release until the Open Infrastructure Foundation marketing staff have had a chance to compile a more accurate message out of these changes.



  • Introduced a framework for enforcing operator-defined limits on reservation usage.



  • Block Storage API microversions 3.63 and 3.64 add useful information to the volume detail response (volume_type_id and encryption_key_id, respectively).

  • Added new backend drivers: Ceph iSCSI, Dell EMC PowerVault ME, KIOXIA Kumoscale, Open-E JovianDSS, and TOYOU ACS5000. Additionally, many current drivers have added support for features exceeding the required driver functions, with revert to snapshot and backend QoS being particularly popular this cycle.

  • Added a new backup driver for S3-compatible storage.
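The microversion-gated fields in the first bullet can be sketched as follows. This is a minimal illustration of how a client's requested microversion determines which of the new volume-detail fields it sees; the `OpenStack-API-Version` header is the real negotiation mechanism, but the gating function itself is illustrative, not Cinder's implementation.

```python
# Which new volume-detail field each Block Storage microversion exposes,
# per the highlights above. The gating logic is illustrative; a real
# client sends the "OpenStack-API-Version" header on its requests.

def microversion_tuple(mv):
    """Parse '3.64' into (3, 64) for comparison."""
    major, minor = mv.split(".")
    return int(major), int(minor)

# Fields added to the volume detail response, keyed by the microversion
# that introduced them (from the release highlights).
NEW_FIELDS = {
    "volume_type_id": (3, 63),
    "encryption_key_id": (3, 64),
}

def visible_fields(requested_mv):
    """Return the new fields a client at `requested_mv` would see."""
    req = microversion_tuple(requested_mv)
    return {f for f, introduced in NEW_FIELDS.items() if req >= introduced}

headers = {"OpenStack-API-Version": "volume 3.64"}  # sent on the GET request
```

A client pinned at 3.63 would see `volume_type_id` but not `encryption_key_id`.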



  • Since the Ussuri release, users have been able to launch instances with accelerators managed by Cyborg; this release adds support for more operations, such as shelve and unshelve. See the accelerator operation guide for the full list of supported operations.

  • Cyborg introduces new accelerator drivers, such as the Intel NIC and Inspur NVMe SSD drivers, which allow users to boot a VM with such a device attached.

  • Cyborg now provides new configuration options for users to configure their devices; for example, users can indicate the vGPU type for their virtualized GPU, or the specific functions loaded onto their NIC.



  • Added the NS1 DNS backend driver for Designate.

  • Designate now supports the Keystone default roles and scoped tokens.



  • New API /v2/images/<image-id>/tasks to get the tasks associated with an image

  • Support for distributed image import

  • Secure RBAC - Experimental support for project personas

  • Cleanup of stale files in staging area upon service startup
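The new tasks endpoint in the first bullet above can be sketched as a URL builder. Only the path shape comes from the highlight; the endpoint host and the suggested use of an auth token are illustrative assumptions.

```python
# Minimal sketch of calling the new Glance tasks endpoint. Only the
# /v2/images/<image-id>/tasks path shape comes from the highlight above;
# the host name is a placeholder.

def image_tasks_url(endpoint, image_id):
    """Build the /v2/images/<image-id>/tasks URL for a Glance endpoint."""
    return f"{endpoint.rstrip('/')}/v2/images/{image_id}/tasks"

url = image_tasks_url("https://glance.example.org", "a1b2c3d4")
# A client would then GET this URL with a token, e.g. via
# requests.get(url, headers={"X-Auth-Token": token})
```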



  • Horizon now supports registered default policies. Operators no longer need to define all policies in the policy file; they only need to define the policies they want to override.

  • The Chinese locales zh-cn and zh-tw have been changed to zh-hans and zh-hant respectively, following the change in Django. The new locales decouple the written form from specific locations, as these scripts are also used outside of China.

  • Added volume backups support for the admin dashboard.
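The "override only what you need" behaviour described in the first bullet above can be sketched as a simple merge: the registered defaults apply unless the operator's policy file replaces a rule. The default rules shown here are illustrative, not Horizon's real policy set.

```python
# Minimal sketch of registered default policies plus operator overrides.
# The rule names and values below are illustrative placeholders.

REGISTERED_DEFAULTS = {
    "volume:get_all": "rule:admin_or_owner",
    "volume:create": "rule:admin_or_owner",
    "volume:delete": "rule:admin_or_owner",
}

def effective_policies(defaults, operator_overrides):
    """Start from the registered defaults, replacing only overridden rules."""
    merged = dict(defaults)
    merged.update(operator_overrides)
    return merged

# The operator file lists a single override; everything else keeps defaults.
policies = effective_policies(REGISTERED_DEFAULTS,
                              {"volume:delete": "rule:admin_only"})
```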



  • Redfish capability enhancements covering Out of Band hardware RAID configuration, and automatic Secure Boot setting enablement.

  • Deployment and Cleaning enhancements including UEFI Partition Image handling, NVMe Secure Erase, per-instance deployment driver interface overrides, deploy time “deploy_steps”, and file injection.

  • The System scoped RBAC model is now supported by Ironic along with the admin, member, and reader roles. This work has resulted in over 1500 new unit tests being added to Ironic.



  • Switched CentOS images to CentOS Stream 8.

  • Added support for Ubuntu in Kayobe.

  • Added support for the OpenID Connect authentication protocol in Keystone.

  • Added Docker healthchecks for several services.

  • Added support for Prometheus version 2.

  • Added support for multiple environments in a single Kayobe configuration.



  • Nested mode with worker-node VMs running in multiple subnets is now available. To use this functionality, a new option, [pod_vif_nested]worker_nodes_subnets, has been introduced that accepts multiple subnet IDs.

  • Kuryr now handles Services that do not define the .spec.selector, allowing the user to manually manage the Endpoints object.

  • Kuryr can now handle an egress Network Policy that allows traffic to Pods targeted by a Service without a selector.

  • Added support for SCTP.

  • Networks can now be created by relying on the default MTU defined in Neutron, regardless of the SDN used and without changing the default configuration value in Kuryr.
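The multi-subnet option from the first bullet above might look like the following in a Kuryr configuration file. The option name and section come from the highlight; the comma-separated formatting and the subnet IDs are assumptions for illustration, parsed here with the stdlib `configparser`.

```python
# Minimal sketch of the new [pod_vif_nested]worker_nodes_subnets option,
# which accepts multiple subnet IDs. The comma-separated value format and
# the subnet names are illustrative assumptions.
import configparser

SAMPLE_CONF = """
[pod_vif_nested]
worker_nodes_subnets = subnet-a, subnet-b
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE_CONF)
subnets = [s.strip() for s in
           parser["pod_vif_nested"]["worker_nodes_subnets"].split(",")]
```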





  • OSProfiler support has been added for tracing and observability.

  • Users may now add and update security services on share networks that are in use.

  • Operators may now set maximum and minimum share sizes as extra specifications on share types. It is also possible to limit the maximum size of shares via project and share type quotas.

  • The number and size of shares can be limited on share servers for load balancing.

  • The service's default RBAC policies for all API endpoints have been adjusted to accommodate system-scoped and project-scoped personas with admin, member, and reader roles where appropriate.

  • The service now supports a healthcheck middleware that is enabled by default.

  • Several driver improvements have been committed. The Container share driver now supports user defined LDAP security services that can be added to share networks or modified at any time. The NetApp driver supports setting up FPolicy events on shares. It also now allows users to add/update Kerberos, LDAP or Active Directory security services on their share networks at any time. The CephFS driver has been refactored to interact with the ceph manager daemon to create and manage shares. It also supports efficiently cloning snapshots into new shares.

  • A new share driver has been added for Zadara Cloud Storage and supports NFS and CIFS protocols.
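The per-share-type size bounds mentioned above can be sketched as a validation step. The extra-spec keys used here (`provisioning:min_share_size` and `provisioning:max_share_size`) are written from memory and should be verified against the Manila documentation; the check itself is illustrative, not Manila's implementation.

```python
# Minimal sketch of enforcing share size bounds from share-type extra
# specs. The key names are assumptions to be checked against the Manila
# docs; values are stored as strings, as extra specs typically are.

def check_share_size(size_gb, extra_specs):
    """Return True if size_gb satisfies the share type's size bounds."""
    min_size = int(extra_specs.get("provisioning:min_share_size", 1))
    max_size = extra_specs.get("provisioning:max_share_size")
    if size_gb < min_size:
        return False
    if max_size is not None and size_gb > int(max_size):
        return False
    return True

specs = {"provisioning:min_share_size": "10",
         "provisioning:max_share_size": "100"}
```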



  • Support for disabling and enabling failover segments. This allows operators to put whole segments into maintenance mode instead of having to do so for each individual host.

  • Support for smoothing out the decision about whether to consider a host down. Operators can configure host monitors to consider a chosen number of probes before sending the notification that a host is down.

  • Support for running host monitors in environments without systemd, such as app containers.

  • Support for using system-scoped tokens when contacting Nova.
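The probe smoothing described above can be sketched as a counter of consecutive failures: a host is only reported down after the configured number of failed probes in a row, so a single missed probe does not trigger a notification. The class and names are illustrative, not Masakari's implementation.

```python
# Minimal sketch of probe smoothing: report a host down only after N
# consecutive failed probes. Names and structure are illustrative.

class HostMonitor:
    def __init__(self, probes_before_down=3):
        self.probes_before_down = probes_before_down
        self.failed = 0

    def record_probe(self, host_responded):
        """Record one probe result; return True when the host should be
        reported as down."""
        if host_responded:
            self.failed = 0          # any success resets the streak
            return False
        self.failed += 1
        return self.failed >= self.probes_before_down

monitor = HostMonitor(probes_before_down=3)
# Two failures, a recovery, then three failures in a row:
results = [monitor.record_probe(ok)
           for ok in (False, False, True, False, False, False)]
```

Only the last probe, the third consecutive failure, reports the host down.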



  • New subnet type network:routed is now available. IPs of such subnets can be advertised with BGP over a provider network. This essentially achieves a BGP-to-the-rack feature, where L2 connectivity can be confined to a single rack and all external routing is done by the switches using BGP. In this mode, it is still possible to use VXLAN connectivity between the compute nodes, with only floating IPs and router gateways using BGP routing.

  • A port already bound with a QoS minimum_bandwidth rule can now be updated with a new QoS policy with a minimum_bandwidth rule. It will change the allocations in placement as well.

  • A new vnic type, vdpa, has been added to allow requesting ports that utilize a vHost-vDPA offload. It is currently supported by the ML2/OVS and ML2/OVN mechanism drivers.

  • Deletion of the ML2/OVN agents is now supported.

  • A new resource, address-groups, can be used in security group rules to add a group of IP addresses to a rule.

  • The OVN Octavia provider driver now supports Stream Control Transmission Protocol (SCTP) load balancing.

  • Better co-existence with floating IP port forwarding load balancers.

  • Fixed a number of bugs so that load balancer status is better reflected via the Octavia API.
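The address-groups resource mentioned above is used in two steps: create the group, then reference it from a security group rule. The request bodies below follow the Neutron API as I recall it (POST to /v2.0/address-groups, then a `remote_address_group_id` field on the rule) and should be verified against the Neutron API reference; all concrete values are placeholders.

```python
# Minimal sketch of the request bodies for using an address group in a
# security group rule. Field names are from memory; values are placeholders.

# Step 1: create the address group (POST /v2.0/address-groups).
address_group = {
    "address_group": {
        "name": "trusted-monitoring",
        "addresses": ["192.0.2.0/24", "198.51.100.7/32"],
    }
}

# Step 2: reference the returned group ID from a security group rule.
security_group_rule = {
    "security_group_rule": {
        "direction": "ingress",
        "protocol": "tcp",
        "port_range_min": 9100,
        "port_range_max": 9100,
        "remote_address_group_id": "ADDRESS_GROUP_ID",  # id from step 1
        "security_group_id": "SECURITY_GROUP_ID",
    }
}
```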





  • With the addition of ALPN and HTTP/2 support for backend pool members, Octavia now supports the gRPC protocol. gRPC enables bidirectional streaming of Protocol Buffer messages through the load balancer.

  • Octavia now supports Stream Control Transmission Protocol (SCTP) load balancing. The addition of SCTP enables new mobile, telephony, and multimedia use cases for Octavia.

  • Load balancers using the amphora provider will benefit from increased performance and scalability when using amphora images built with version 2.x of the HAProxy load balancing engine.

  • Amphora instances are now supported on AArch64/ARM64 based instances.

  • Octavia now supports the Keystone default roles and scoped tokens.
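The SCTP support above means a load balancer listener and pool can now use SCTP as their protocol value. The request bodies below are a sketch: "SCTP" as the protocol string matches the Octavia API, while the names, port, and IDs are illustrative placeholders.

```python
# Minimal sketch of Octavia API bodies for SCTP load balancing. The
# protocol value "SCTP" comes from the highlight; other values are
# placeholders for illustration.

listener = {
    "listener": {
        "name": "sctp-listener",
        "protocol": "SCTP",
        "protocol_port": 9999,
        "loadbalancer_id": "LB_ID",      # placeholder
    }
}

pool = {
    "pool": {
        "name": "sctp-pool",
        "protocol": "SCTP",
        "lb_algorithm": "ROUND_ROBIN",
        "listener_id": "LISTENER_ID",    # placeholder
    }
}
```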



  • Significantly improved the Zun role and moved it from experimental to stable status

  • Experimental support for CentOS Stream

  • Experimental support for Debian Bullseye

  • Self-signed SSL certificates are now generated and signed with a local Certificate Authority



  • Static large object segments can now be deleted asynchronously; multipart uploads deleted through the S3 API will always be deleted asynchronously.

  • Numerous sharding improvements, including the ability to cache shard ranges for listings and support for operator-driven shrinking.

  • Several part-power-increase improvements, which ensure small clusters are capable of growing into large clusters.
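The part-power bullet above rests on some simple arithmetic: a ring with part power p has 2**p partitions, and increasing the part power by one splits each partition into two children, doubling the partition count. The helper below sketches that relationship; it is an illustration of the scheme, not Swift's ring code.

```python
# Minimal sketch of the part-power arithmetic behind ring growth.

def partition_count(part_power):
    """A ring with part power p has 2**p partitions."""
    return 2 ** part_power

def child_partitions(partition):
    """After a +1 part-power increase, partition p splits into
    partitions 2p and 2p+1."""
    return (2 * partition, 2 * partition + 1)

before = partition_count(10)   # 1024 partitions
after = partition_count(11)    # 2048 partitions after a +1 increase
```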



  • Add APIs for scale, update, and rollback operations for VNF defined in ETSI NFV.

  • Add fundamental VNF lifecycle management support for subscriptions and notifications defined in ETSI NFV.

  • Implement the VNF package management interface to obtain VNF packages, and the grant interface to allow the VNFM to request grants authorizing VNF lifecycle operations, as defined in the ETSI NFV SOL003 specification, in order to cooperate with third-party NFVOs as a VNFM.

  • Add container-based VNF support with ETSI NFV-SOL003 v2.6.1 VNF Lifecycle Management. Users can create, instantiate, terminate, and delete VNFs on a Kubernetes VIM. Kubernetes resource files are available as VNFDs and are uploaded as part of a VNF Package.

  • Enable VNF vendors to customize configuration methods for applications via MgmtDriver. These customizations are specified by “interface” definition in ETSI NFV-SOL001 v2.6.1.



  • Moving network and network port creation out of the Heat stack and into the baremetal provisioning workflow.

  • Ceph version upgraded to Pacific. cephadm may be used to deploy/maintain a Ceph RBD cluster but not all Ceph services (e.g. RGW). ceph-ansible may still be used to deploy/maintain all Ceph services but will be replaced with cephadm in the next release. This work is described in the TripleO Ceph spec and the TripleO Ceph Client spec.

  • Removed Swift from the Undercloud services and removed the deployment ‘plan’ as described in the Excise swift spec.

  • Early (beta) support for deploying FRRouter in the Overcloud to support BGP routing as described in the FRRouter spec.

  • Moving away from using a dedicated Heat service on the Undercloud for the Overcloud deployment and instead using Ephemeral Heat.



  • Support image tags for the datastore version. When using image tags, Trove is able to get the image dynamically from Glance for creating instances.

  • Added custom container registry configuration for the Trove guest agent; it is now possible to use images from a private registry rather than Docker Hub.

  • Added a new field, operating_status, to the instance to show the actual operational status of the user's database.

  • In multi-region deployment with geo-replicated Swift, the user can restore a backup in one region by manually specifying the original backup data location created in another region.



  • Introduced the Python binding for interacting with the CRI runtime via gRPC

  • Introduced a CNI plugin for container networking

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.