Victoria Release Highlights

These are significant changes reported directly from the project teams and have not been processed in any way. Some highlights may be more significant than others. Please do not take this list as a definitive set of highlights for the release until the OpenStack Foundation marketing staff have had a chance to compile a more accurate message out of these changes.


Cinder
  • Improved handling of the configured default volume type, and added new Block Storage API calls (microversion 3.62) that enable setting a per-project default volume type.

  • Added some new backend drivers, and many current drivers have added support for more features. For example, the NFS driver now supports volume encryption.

  • Support was added to cinder backup to use the popular Zstandard compression algorithm.
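
The new project-level default can be set directly through the Block Storage API. The sketch below builds the request; the /v3/default-types/{project_id} endpoint and body shape are based on the microversion 3.62 API, and the endpoint URL, project ID, volume type name, and token are placeholders.

```python
# Sketch: set a project-level default volume type via the Block Storage API.
# The /v3/default-types/{project_id} endpoint and body shape follow the
# microversion 3.62 API; endpoint, project ID, and token are placeholders.

def build_default_type_request(endpoint, project_id, volume_type, token):
    """Build a PUT request description for the per-project default type."""
    return {
        "method": "PUT",
        "url": f"{endpoint}/v3/default-types/{project_id}",
        "headers": {
            "X-Auth-Token": token,
            # Opt in to the microversion that introduced per-project defaults.
            "OpenStack-API-Version": "volume 3.62",
            "Content-Type": "application/json",
        },
        "json": {"default_type": {"volume_type": volume_type}},
    }

req = build_default_type_request(
    "https://cloud.example.com/volume", "PROJECT_ID", "encrypted-nfs", "TOKEN")
print(req["url"])
```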


CloudKitty
  • After a period of inactivity, development has been resumed by a group of new contributors.

  • Introduced a Monasca fetcher to gather scopes to be rated from Monasca.
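
If the option names are as assumed, selecting the new fetcher is a one-line change in cloudkitty.conf; the `[fetcher] backend` option name below is an assumption based on CloudKitty's pluggable-fetcher configuration and should be verified against the CloudKitty documentation.

```ini
# cloudkitty.conf sketch -- select the Monasca fetcher.
# The [fetcher] backend option name is an assumption; verify it against
# the CloudKitty configuration reference for this release.
[fetcher]
backend = monasca
```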


Cyborg
  • Since the Ussuri release, users can launch instances with accelerators managed by Cyborg; this release adds support for two more operations, Rebuild and Evacuate. See the accelerator operation guide to find all supported operations.

  • Cyborg added new accelerator drivers (Intel QAT and Inspur FPGA) and reached an agreement that vendors who want to implement a new driver should at least provide a full driver test report. (Of course, providing third-party CI is even more welcome.) See the supported drivers documentation.

  • The program API (PATCH deployable) is now supported: users can program an FPGA given a pre-uploaded bitstream. API microversioning has also been improved for existing APIs such as the ARQ APIs.

  • In this release, the policy refresh (RBAC with scope) for Cyborg is partially implemented, covering the Device Profile APIs: new default rules were implemented in the base and device_profile policies, and a basic testing framework was added for all policies. For backward compatibility, the old rules are maintained as deprecated rules with the same defaults as today, so existing deployments keep working as-is. After all the features are implemented, operators will be given a two-cycle transition period. See policy default refresh.


Glance
  • Enhanced the multiple stores feature: administrators can now set a policy that allows users to copy images owned by other tenants.

  • Glance now allows configuring multiple Cinder stores.

  • The RBD and filesystem drivers of Glance now support sparse image upload.

  • Improved chunked image upload in the RBD driver.
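
The cross-tenant copy permission is controlled through Glance policy. The fragment below is a sketch of a policy.yaml entry, assuming the `copy_image` policy name used by the interoperable image import copy-image method; verify the name against this release's policy defaults.

```yaml
# policy.yaml sketch -- allow administrators to copy images owned by
# other tenants. The "copy_image" policy name is an assumption based on
# the image import copy-image method; verify against Glance's defaults.
"copy_image": "role:admin"
```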


Horizon
  • Error messages shown in Horizon now contain more detail. Previously, GUI users could not see the detailed reasons for failed operations; they can now check detailed information from the back-end service and address the causes.

  • Added a new tab that shows messages for volumes and volume snapshots. Users can now see detailed events for the corresponding volumes or snapshots.

  • Added support for extending in-use volumes; users can now extend in-use volumes via Horizon.


Ironic
  • The deploy steps work has decomposed the basic deployment operation into multiple steps, which can now also include steps from supported RAID and BIOS interfaces at deploy time.

  • An agent power interface enables provisioning operations without a Baseboard Management Controller.

  • Ironic can now be configured for HTTP Basic authentication without the need for additional services.

  • Added initial support for DHCP-less deployments with Redfish virtual media.
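
Standalone HTTP Basic authentication is enabled in ironic.conf. The sketch below assumes the `http_basic` auth strategy with an htpasswd-format credential file; check the Ironic installation guide for the exact option names in your deployment.

```ini
# ironic.conf sketch -- standalone HTTP Basic auth, no Keystone required.
# Option names assume the http_basic auth strategy added in this release;
# the credential file path is a placeholder.
[DEFAULT]
auth_strategy = http_basic
http_basic_auth_user_file = /etc/ironic/htpasswd
```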


Kolla
  • Added support for Ubuntu Focal 20.04.

  • Added support for automatic creation of resources for Octavia.

  • Added support for container healthchecks for core OpenStack services.

  • Improved TLS support, covering etcd, RabbitMQ, as well as the Ironic, Neutron, and Nova backends. Also added initial support for the ACME protocol, as used by Let's Encrypt.

  • Improved performance and scalability of Ansible playbooks.

  • Added support for integrating Neutron with Mellanox InfiniBand.


Kuryr
  • Kuryr will no longer use annotations to store data about OpenStack objects in the Kubernetes API. Instead, corresponding CRDs are created: KuryrPort, KuryrLoadBalancer, and KuryrNetworkPolicy.

  • Logs at the INFO level should be much cleaner now.

  • Added support for autodetection of VM bridging interface in nested setups.


Magnum
  • Kubernetes cluster owners can now rotate the CA certificate to regenerate the cluster CA, service account keys, and the certificates of all nodes.

  • The cinder_csi_enabled label now defaults to True.

  • The default storage driver has changed from devicemapper to overlay2.


Manila
  • Tenant-driven share replication, a self-service aid to data protection, disaster recovery, and high availability, is now generally available and fully supported. Starting with API version 2.56, the X-OpenStack-Manila-API-Experimental header is no longer required to create/promote/resync/delete share replicas.

  • Share server migration is now available as an experimental feature. Share servers provide hard multi-tenancy guarantees by isolating shared file systems in the network path. In this release, cloud administrators are able to move share servers to different backends or share networks.
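
The change in the replication API boils down to one header. The sketch below shows the request headers for share replica calls, assuming the standard Manila microversion header; the token value is a placeholder.

```python
# Sketch: request headers for share replica calls. Before API version 2.56
# the X-OpenStack-Manila-API-Experimental header was required; from 2.56 on,
# only the microversion header is needed. The token is a placeholder.

def replica_headers(token, api_version="2.56"):
    headers = {
        "X-Auth-Token": token,
        "X-OpenStack-Manila-API-Version": api_version,
        "Content-Type": "application/json",
    }
    # Older microversions still require the experimental opt-in.
    if tuple(int(p) for p in api_version.split(".")) < (2, 56):
        headers["X-OpenStack-Manila-API-Experimental"] = "True"
    return headers

print(sorted(replica_headers("TOKEN")))
```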


Masakari
  • Adds ability for operators to override, per failure type, the instance metadata key controlling the behaviour of Masakari towards the instance. This makes it possible to differentiate between instance- and host-level failures per instance.


Neutron
  • The metadata service is now available over IPv6. Users can now use the metadata service without a config drive in IPv6-only networks.

  • Support for flat networks has been added for Distributed Virtual Routers (DVR).

  • Support for Floating IP port forwarding has been added for the OVN backend. Users can now create port forwardings for Floating IPs when Neutron uses the OVN backend.

  • Added support for router availability zones in OVN. The OVN driver can now read the router's availability_zone_hints field and schedule router ports according to the given availability zones.
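
Inside an instance, the IPv6 metadata service is reached at the well-known link-local address fe80::a9fe:a9fe (the IPv6 counterpart of 169.254.169.254). The sketch below builds the request URL; the interface name is a placeholder.

```python
import ipaddress

# fe80::a9fe:a9fe is the link-local IPv6 metadata address (0xa9 = 169,
# 0xfe = 254). Being link-local, it must be scoped to an interface;
# "eth0" below is a placeholder interface name.
METADATA_V6 = "fe80::a9fe:a9fe"

def metadata_url(interface):
    addr = ipaddress.ip_address(METADATA_V6)
    assert addr.is_link_local  # sanity check: requires an interface scope
    # The "%" before the zone ID must be percent-encoded as %25 in a URL.
    return f"http://[{METADATA_V6}%25{interface}]/openstack/latest/meta_data.json"

print(metadata_url("eth0"))
```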



Octavia
  • Users can now specify the TLS versions accepted for listeners and pools. Operators also now have the ability to set a minimum TLS version acceptable for their deployment.

  • Octavia now supports HTTP/2 over TLS using the new Application Layer Protocol Negotiation (ALPN) configuration option for listeners.

  • Load balancer statistics can now be reported to multiple statistics drivers simultaneously, with support for delta metrics. This allows easier integration into external metrics systems, such as a time series database.

  • Octavia flavors for the amphora driver now support specifying the glance image tag as part of the flavor. This allows the operator to define Octavia flavors that boot alternate amphora images.

  • Load balancer pools now support version two of the PROXY protocol. This allows passing client information to member servers when using TCP protocols. PROXYV2 improves the performance of establishing new connections using the PROXY protocol to member servers, especially when the listener is using IPv6.
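
The TLS version and ALPN settings are plain listener attributes in the Octavia v2 API. The sketch below shows a listener create body; the `tls_versions` and `alpn_protocols` attribute names are assumptions based on this release's API additions, and the IDs are placeholders.

```python
# Sketch: Octavia v2 listener body restricting accepted TLS versions and
# enabling HTTP/2 via ALPN. The tls_versions and alpn_protocols attribute
# names are assumptions; the load balancer ID and secret ref are placeholders.
listener = {
    "listener": {
        "name": "tls-listener",
        "protocol": "TERMINATED_HTTPS",
        "protocol_port": 443,
        "loadbalancer_id": "LB_ID",          # placeholder
        "default_tls_container_ref": "REF",  # placeholder secret reference
        "tls_versions": ["TLSv1.2", "TLSv1.3"],
        # Prefer HTTP/2, fall back to HTTP/1.1.
        "alpn_protocols": ["h2", "http/1.1"],
    }
}
print(listener["listener"]["tls_versions"])
```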


OpenStack-Ansible
  • MariaDB upgraded to the 10.5 release.

  • Ansible bumped to the 2.10 release and switched to using collections.

  • Added the os_senlin role.

  • Added the os_adjutant role.


Swift
  • Improved time-to-first-byte latencies when reading erasure-coded data.

  • Increased isolation between background daemons and proxy-servers when running with a separate replication network.

  • We’re beginning to see non-trivial production clusters transition from running Swift under Python 2 to Python 3.


Tacker
  • Implemented ETSI NFV-SOL standard features (life-cycle management, scaling, VNF operation, etc.).

  • Added a Fenix plugin for rolling updates of VNFs with Fenix and Heat.

  • Expanded Kubernetes support.


Vitrage
  • Added a new datasource for TMF API 639.

  • Completed verification of the Vitrage API.

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.