Note
These are significant changes reported directly from the project teams and have not been processed in any way. Some highlights may be more significant than others. Please do not take this list as a definitive set of highlights for the release until the OpenStack Foundation marketing staff have had a chance to compile a more accurate message out of these changes.
Barbican
Notes:
Several enhancements were made to the Vault backend: it is now possible to specify a KV mountpoint and to use AppRoles for authentication (see the configuration sketch after this list).
We now run a Barbican-specific Octavia gate to verify the Octavia load balancing scenario.
The PKCS#11 plugin was modified to allow the hmac_keywrap_mechanism to be configured. With this change, Barbican can be deployed with Utimaco HSMs.
It is now possible to deploy Barbican with the PKCS#11 backend using either a Thales or an Atos HSM via TripleO.
Fixes were made to ensure that the barbican-manage commands for key rotation worked for the PKCS#11 plugin.
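A minimal barbican.conf sketch of the new Vault options (the section and option names follow the upstream Vault plugin documentation; the URL, mountpoint, and AppRole credentials are placeholders):

    [secretstore]
    enabled_secretstore_plugins = vault_plugin

    [vault_plugin]
    vault_url = https://vault.example.com:8200
    use_ssl = True
    kv_mountpoint = barbican
    approle_role_id = REPLACE_WITH_ROLE_ID
    approle_secret_id = REPLACE_WITH_SECRET_ID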
Blazar
Notes:
Introduced a new Resource Allocation API that lets operators query the reserved state of their cloud resources (see the query sketch after this list).
Added support for affinity and no-affinity policies for instance reservations; with affinity, multiple instances of the same reservation are scheduled to the same hypervisor.
Added a new plugin for reservation of floating IPs. This new feature is available as a preview and will be fully completed in the next release.
Integrated numerous bug fixes to improve reliability.
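For illustration, a hedged sketch of querying host allocations with curl, assuming the os-hosts allocations endpoint from the Stein API and Blazar's default port (the endpoint URL is a placeholder):

    TOKEN=$(openstack token issue -f value -c id)
    curl -s -H "X-Auth-Token: $TOKEN" \
        http://blazar.example.com:1234/v1/os-hosts/allocations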
Cinder
Notes:
Added multiattach and deferred deletion support to the RBD driver (see the multiattach sketch after this list).
Numerous bug fixes have been integrated to address stability and reliability.
User experience improvements around driver initialization, the data retained during volume transfers, and the information returned by commands.
Continued improvements in the backup service.
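To use the new multiattach support, volumes must be created from a multiattach-capable volume type; a short sketch using the standard multiattach extra spec (the type and volume names are placeholders):

    openstack volume type create multiattach-rbd
    openstack volume type set --property 'multiattach=<is> True' multiattach-rbd
    openstack volume create --type multiattach-rbd --size 10 shared-vol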
Congress
Notes:
Important new NFV fault management capabilities were added through multiple new features in Congress's integration with Nova, Tacker, and Monasca.
The new JGress framework unlocks whole new classes of policy use by making the state of the cloud, as given by JSON APIs, available for policy evaluation. By adopting a JSON query language for expressing policy directly over JSON API data, JGress enables deployers to plug in new data sources without being limited by the availability of integration drivers.
As with every release, we continue to make Congress more robust and stable than ever with bug fixes and internal improvements.
Cyborg
Notes:
Added FPGA programming support.
Added GPU drivers.
Reworked the database to align with the Nova Placement API strategy.
Designate
Notes:
Added the CAA recordset type, enabling CA authorization records for managed DNS zones (see the example after this list).
Added the NAPTR recordset type, for service chaining and SIP management.
Project IDs are now validated when updating quotas.
Added the designate-status upgrade check command to aid in upgrades.
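For example, a CAA recordset restricting which CA may issue certificates for a zone can be created with the standard recordset command (the zone and CA domain are placeholders):

    openstack recordset create example.com. example.com. \
        --type CAA --record '0 issue "ca.example.net"'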
Heat
Notes:
Heat now supports orchestrating stacks in remote OpenStack clouds, using credentials stored by the user in Barbican (see the template sketch after this list).
It is now easier to recover from accidentally trying to replace a resource with a version that conflicts with the existing resource.
New resource types in Heat add support for Neutron Layer 2 Gateways, Blazar, and Tap-as-a-Service.
Support for the Glance web-download import method in the image resource type, which allows Glance to fetch the image from a URL without pre-loading it outside of Glance.
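A minimal HOT sketch of the remote-stack feature, assuming the remote cloud's credentials have been stored in a Barbican secret (the secret ID, region, and child template name are placeholders):

    resources:
      remote_stack:
        type: OS::Heat::Stack
        properties:
          context:
            # Barbican secret holding the remote cloud's credentials
            credential_secret_id: REPLACE_WITH_SECRET_ID
            region_name: RegionTwo
          template: {get_file: remote_app.yaml}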
Horizon
Notes:
Cinder Generic Groups admin panels are now supported.
Added an option to mitigate BREACH attacks.
Added an upgrade_check management command.
Added support for custom templates for the clouds.yaml and openrc files.
Ironic
Notes:
Added additional interfaces for hardware management, including Redfish BIOS settings, an explicit iPXE boot interface option, and additional hardware support.
Increased capabilities and options for operators, including deployment templates, improved parallel conductor workers and disk erasure processes, deployed node protection and descriptions, and the use of local HTTP(S) servers for serving images.
Improved options for standalone users to request allocations of bare metal nodes (see the sketch after this list) and to submit configuration data rather than pre-formed configuration drives. Additionally, ironic can now be driven over JSON-RPC as opposed to an AMQP message bus.
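A hedged sketch of requesting a node through the new allocation API with the baremetal CLI (the resource class, trait, and name are placeholders):

    openstack baremetal allocation create \
        --resource-class baremetal \
        --trait CUSTOM_GPU \
        --name my-allocation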
Karbor
Notes:
Support for resetting a checkpoint to a specified state.
Support for cross-site backup and restore with the volume_glance_plugin.
Optimized checkpoint management across different bank scenarios.
Keystone
Notes:
This release introduced Multi-Factor Authentication Receipts, which facilitate a much more natural sequential authentication flow when using MFA.
The limits API now supports domains in addition to projects, so quota for resources can be allocated to top-level domains and distributed among child projects.
JSON Web Tokens were added as a new token format alongside fernet tokens, enabling support for an internet-standard format. JSON Web Tokens are asymmetrically signed, so synchronizing private keys across keystone servers is no longer required with this token format (see the configuration sketch after this list).
Multiple keystone APIs now support system scope as a policy target, which reduces the need for customized policies to prevent global access to users with an admin role on any project.
Multiple keystone APIs now use default reader, member, and admin roles instead of a catch-all role, which reduces the need for customized policies to create read-only access for certain users.
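A hedged sketch of opting in to JSON Web Tokens (the keystone-manage subcommand and option value follow the Stein documentation; key placement may differ by deployment):

    # generate an asymmetric signing key pair
    keystone-manage create_jws_keypair

    # keystone.conf
    [token]
    provider = jws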
Kolla
Notes:
Completed the addition of images and playbooks for the OpenStack Monitoring service, Monasca.
Added an image and playbooks for the OpenStack Placement service, which has been extracted from Nova into a separate project.
Added support for performing full and incremental backups of the MariaDB database (see the sketch after this list).
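For example, with kolla-ansible (the --incremental flag is per the Stein documentation; verify against your release):

    kolla-ansible mariadb_backup                # full backup (default)
    kolla-ansible mariadb_backup --incremental  # incremental backup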
Kuryr
Notes:
Added support for handling and reacting to Kubernetes Network Policy events, allowing Kuryr-Kubernetes to manage security group rules on the fly based on them (see the example policy after this list).
Added support for Kubernetes configured with CRI-O, the Open Container Initiative-based implementation of the Kubernetes Container Runtime Interface, as the container runtime.
Enhanced the readiness health checks to validate the quota available for handler resources, improving overall performance and stability and allowing handlers to be marked as unhealthy when needed.
Improved DPDK and SRIOV support.
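As an illustration, a standard Kubernetes NetworkPolicy of the kind Kuryr-Kubernetes now translates into security group rules (the namespace and labels are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: web
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  role: frontend
          ports:
            - protocol: TCP
              port: 80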
Manila
Notes:
Extended manage/unmanage support for shares and snapshots to DHSS=True mode, and added manage/unmanage support for share servers.
Neutron
Notes:
Support for strict minimum-bandwidth-based scheduling. With this feature, Nova instances are scheduled to compute hosts that will honor the minimum bandwidth requirements of the instance as defined by the QoS policies of its ports (see the sketch after this list).
Network Segment Range Management. This feature enables cloud administrators to manage network segment ranges dynamically via a new API extension, as opposed to the previous approach of editing configuration files. It targets StarlingX and edge use cases, where ease of management is paramount.
Sped up Neutron bulk port creation. This targets container/Kubernetes use cases, where ports are created in groups.
(FWaaS) FWaaS v1 has been removed. FWaaS v2 has been available since the Newton release and covers all features of FWaaS v1. A migration script is provided to convert existing FWaaS v1 objects into FWaaS v2 models.
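A short sketch of setting up a minimum-bandwidth guarantee on a port (names and rates are placeholders; in Stein this scheduling applies to ports whose VNIC type is supported, such as SR-IOV):

    openstack network qos policy create min-bw
    openstack network qos rule create --type minimum-bandwidth \
        --min-kbps 1000000 --egress min-bw
    openstack port create --network mynet sriov-port
    openstack port set --qos-policy min-bw sriov-port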
Nova
Notes:
It is now possible to run Nova with version 1.0.0 of the recently extracted placement service, hosted from its own repository. Note that install/upgrade of an extracted placement service is not yet fully implemented in all deployment tools. Operators should check with their particular deployment tool for support before proceeding. See the placement install and upgrade documentation for more details. In Stein, operators may choose to continue to run with the integrated placement service from the Nova repository, but should begin planning a migration to the extracted placement service by Train, as the removal of the integrated placement code from Nova is planned for the Train release.
Users can now specify a volume type when creating servers (see the request sketch after this list).
The compute API is now tolerant of transient conditions in a deployment like partial infrastructure failures, for example a cell not being reachable.
Users can now create servers with Neutron ports that have quality-of-service minimum bandwidth rules.
Operators can now set overcommit allocation ratios using Nova configuration files or the placement API.
Compute driver capabilities are now automatically exposed as traits in the placement API so they can be used for scheduling via flavor extra specs and/or image properties.
Live migration is now supported for the VMware driver.
The placement service was extracted from the Nova project and became a new official OpenStack project called Placement.
Added the ability to target a candidate resource provider, easing specifying a host for workload migration.
Increased API performance by 50% for common scheduling operations.
Simplified the code by removing unneeded complexity, easing future maintenance.
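For example, with compute API microversion 2.67 a server-create request can name a volume type in its block device mapping; a hedged JSON sketch (the flavor, image, and volume type are placeholders):

    {
        "server": {
            "name": "bfv-server",
            "flavorRef": "FLAVOR_ID",
            "networks": "auto",
            "block_device_mapping_v2": [{
                "boot_index": 0,
                "uuid": "IMAGE_ID",
                "source_type": "image",
                "destination_type": "volume",
                "volume_size": 10,
                "volume_type": "fast-ssd"
            }]
        }
    }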
Octavia
Notes:
Octavia now supports load balancer “flavors”. This allows an operator to create custom load balancer “flavors” that users can select when creating a load balancer (see the sketch after this list).
You can now enable TLS client authentication when using TERMINATED_HTTPS listeners.
Octavia now supports backend re-encryption of connections to member servers.
Metadata tags can now be assigned to the elements of an Octavia load balancer.
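A hedged sketch of the flavor workflow: the operator defines a flavor profile and exposes it as a flavor, which users can then select (names are placeholders; the flavor-data keys follow the amphora provider):

    openstack loadbalancer flavorprofile create --name fp-ha \
        --provider amphora \
        --flavor-data '{"loadbalancer_topology": "ACTIVE_STANDBY"}'
    openstack loadbalancer flavor create --name ha-lb \
        --flavorprofile fp-ha --enable
    openstack loadbalancer create --name lb1 \
        --vip-subnet-id my-subnet --flavor ha-lb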
OpenStack-Ansible
Notes:
Tooling optimizations have been made that result in faster and more reliable deployments.
Added Ubuntu Bionic support.
Added Mistral support.
Added Manila support.
Added Masakari support.
Oslo
Notes:
Added a Castellan config driver that allows secrets to be moved from on-disk config files to any Castellan-compatible keystore. This driver lives in the Castellan project, so Castellan must be installed in order to use it.
Added a config driver to read values from environment variables, which allows configuration of services in containers without needing to inject a file. This driver is enabled by default in oslo.config (see the sketch after this list).
Added a config validation tool, oslo-config-validator. It uses the oslo-config-generator data to find options in a config file that are not defined in the service.
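For example, the environment driver maps variables named OS_<GROUP>__<OPTION> onto configuration options; a hedged sketch:

    # equivalent to "debug = True" in the [DEFAULT] section
    export OS_DEFAULT__DEBUG=True
    # equivalent to "connection = ..." in the [database] section
    export OS_DATABASE__CONNECTION=mysql+pymysql://user:secret@db.example.com/nova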
Sahara
Notes:
Sahara plugins have been removed from the core code for easier maintenance and upgrades.
APIv2 has been released as stable.
Improvements to the boot-from-volume feature.
Searchlight
Notes:
Searchlight now works with Elasticsearch 5.x.
We have released a new vision to make Searchlight a multi-cloud application.
The functional test setup has been improved.
Searchlight can now work and be tested with Python 3.7.
Senlin
Notes:
Improved performance so that Senlin operations execute multiple orders of magnitude faster.
Health policy v1.1 now allows a user to specify multiple types of detection modes.
Senlin APIs now issue synchronous failures when a cluster or node is locked, a cooldown is in effect, or an action conflicts.
Operators can now remove completed actions using the action-purge subcommand of the senlin-manage tool. This is useful for long-running clusters that have accumulated a large number of actions in the database.
Qinling
Notes:
Support for a Python 3 runtime for user code.
Swift
Notes:
Numerous improvements to the S3 API compatibility layer.
Several fixes and improvements to the data-encryption middleware, including support for multiple keymaster middlewares, which allows migration from one key provider to another.
Operators have more control over account and container server background daemon I/O usage with the new databases_per_second config option (see the sketch after this list).
Erasure-coded data may now be rebuilt to handoff nodes. This improves data durability when disk failures go unremedied for extended periods.
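For example, to throttle replicator database I/O in account-server.conf and container-server.conf (the value is illustrative; the same option is honored by the other account and container background daemons):

    [account-replicator]
    databases_per_second = 20

    [container-replicator]
    databases_per_second = 20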
TripleO
Notes:
Added support for podman and buildah for containers and container images.
Open Virtual Network (OVN) is now the default network configuration.
Improved composable network support for creating L3 routed networks, and improved IPv6 network support.
Vitrage
Notes:
New and simplified template language! The new templates are shorter and much easier to understand and reuse.
Added a Trove datasource and a Zaqar notifier.
New APIs for querying Vitrage services and for resource counts.
Performance improvements and faster data retrieval. The memory footprint and processing runtime were significantly reduced.
Watcher
Notes:
Watcher now supports API microversions.
Watcher consumes Nova notifications to keep its internal Compute CDM (Cluster Data Model) up to date.
The Compute CDM is now built according to the audit scope.
Added start_time and end_time fields to CONTINUOUS audits.
Added a new config option, action_execution_rule.
Except where otherwise noted, this document is licensed under the Creative Commons Attribution 3.0 License.