Train Release Highlights

Note

These are significant changes reported directly from the project teams and have not been processed in any way. Some highlights may be more significant than others. Please do not take this list as a definitive set of highlights for the release until the OpenStack Foundation marketing staff have had a chance to compile a more accurate message out of these changes.

Blazar

Notes:

  • Added support for a global request ID which can be used to track requests across multiple OpenStack services.

  • Added support for microversions to the v1 API.

  • Completed the implementation and documentation of the floating IP reservation feature introduced as a preview in Stein.

Cinder

Notes:

  • A number of drivers have added support for newer features like multi-attach and consistency groups.

  • When uploading qcow2 images to Glance, the data can now be compressed.

  • The team focused on getting numerous bug fixes and usability improvements in place.

  • Cinder now has upgrade checks that can be run to detect possible compatibility issues before upgrading to Train (see the sketch after this list).
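
As a rough illustration of the new upgrade checks, the sketch below runs the check from Python and inspects the exit code; it assumes the cinder-status command is installed on the host running the Cinder services.

    # Minimal sketch: run Cinder's Train upgrade checks from Python.
    # Assumes the "cinder-status" command is available on this host.
    import subprocess

    result = subprocess.run(
        ["cinder-status", "upgrade", "check"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode == 0:
        print("No upgrade issues detected.")
    else:
        # A non-zero exit code indicates warnings or failures to review
        # before upgrading to Train.
        print("Upgrade checks reported issues; review the output above.")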

Cloudkitty

Notes:

  • A v2 API, along with five new endpoints, has been introduced. It is marked as EXPERIMENTAL for now. Its endpoints support timezones and aim to be more generic and efficient than the v1 endpoints.

  • A Prometheus scope fetcher has been added. It allows dynamic scope discovery from Prometheus and is intended to be used with the Prometheus collector.

  • Fault tolerance and performance of the processor have been improved. Each processor now spawns several workers, which are restarted in case of failure.

  • A v2 storage driver for Elasticsearch has been introduced. It is marked as EXPERIMENTAL for now.

Cyborg

Notes:

  • The Cyborg-Nova interaction spec was merged. This is the blueprint for the end goal, i.e. launching and managing VMs with accelerators. <https://github.com/openstack/nova-specs/blob/master/specs/train/approved/nova-cyborg-interaction.rst>

  • Updated Cyborg APIs to version 2, which includes support for Nova interaction. Using the v2 APIs, end users can create/delete device profiles and create/bind/unbind/delete accelerator requests (ARQs). In this release, only the Device Profiles and Accelerator Requests APIs have been introduced (see the sketch after this list).

  • Added new Cyborg driver (Ascend) and improved existing drivers (Intel FPGA, GPU).

  • Created tempest CI framework that can be used with a fake driver today and with real hardware in the future.

  • Enabled Python 3 testing and fixed issues in support of Train goals.
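
A minimal sketch of calling the new v2 Device Profiles API directly with the requests library; the endpoint URL and token are illustrative assumptions, so consult the Cyborg API reference for the exact paths in your deployment.

    # Sketch: list device profiles through Cyborg's new v2 API.
    # The endpoint URL and token are illustrative assumptions.
    import requests

    CYBORG_URL = "http://controller:6666/v2"   # assumed cyborg-api endpoint
    TOKEN = "gAAAA..."                          # a valid Keystone token

    resp = requests.get(
        f"{CYBORG_URL}/device_profiles",
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()
    # Each device profile groups the accelerator resources and traits that
    # an accelerator request (ARQ) will later be bound against.
    print(resp.json())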

Designate

Notes:

  • Removal of old deprecated code such as the Pool Manager and the old PowerDNS drivers, reducing complexity for operators. (95% of all deprecation warnings have been removed in Train.)

  • V1 API code removed (it had previously been disabled by default)

  • Full IPv6 support for the API control plane and for the DNS data plane

  • Audit of logging to ensure sane log messages and log quantity

  • 100s of tests added, and code coverage increased by 5-6%.

  • By far the most active cycle in recent releases. 363 files changed, 12894 insertions(+), 9564 deletions(-)

  • Cycle MVP - Erik Olof Gunnar Andersson <eandersson@blizzard.com> with 66 out of the 178 commits in the cycle.

  • Train is the last release with Python 2.7 support

Glance

Notes:

  • Images API v2.9 has been promoted to CURRENT, with 2.7 and 2.8 marked as SUPPORTED. 2.8 appears in the version list only when multi-store is configured (see the sketch after this list).

  • Glance multi-store feature has been deemed stable

  • glance-cache-manage no longer depends on glance-registry and communicates directly with glance-api

  • Cache prefetching is now done as a periodic task by glance-api, removing the need to configure it in cron.

  • Various bug fixes in glance, glance_store and python-glanceclient
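
The version promotion is visible in the API version document that glance-api advertises; a minimal sketch (the endpoint URL is an assumption for a typical deployment):

    # Minimal sketch: list the Images API versions advertised by glance-api.
    # The endpoint URL is an assumption for a typical deployment.
    import requests

    GLANCE_URL = "http://controller:9292"  # assumed glance-api endpoint

    versions = requests.get(f"{GLANCE_URL}/versions").json()["versions"]
    for version in versions:
        # With Train, v2.9 is reported as CURRENT; v2.7 and v2.8 appear as
        # SUPPORTED (v2.8 only when multiple stores are configured).
        print(version["id"], version["status"])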

Horizon

Notes:

  • Volume multi-attach is now supported

  • Horizon now supports the optional automatic generation of a Kubernetes configuration file

  • It is the last release with Python 2.7 and Django 1.11 support

Ironic

Notes:

  • Basic support for building software RAID (see the sketch after this list)

  • Virtual media boot for the Redfish hardware type.

  • Improvements in the sensor data collection.

  • New tool for building ramdisk images: ironic-python-agent-builder

  • Numerous fixes in the Ansible deploy interface.
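
For the new software RAID support, a node's target RAID configuration is expressed as a list of logical disks. The sketch below applies such a configuration with python-ironicclient; the authentication details and the disk values are illustrative assumptions.

    # Sketch: define a software RAID target for a node and apply it with
    # python-ironicclient. Auth values and disk sizes are assumptions.
    from ironicclient import client as ironic_client
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url="http://controller:5000/v3",   # assumed Keystone endpoint
        username="admin", password="secret",
        project_name="admin",
        user_domain_id="default", project_domain_id="default",
    )
    ironic = ironic_client.get_client(1, session=session.Session(auth=auth))

    # A single software RAID-1 logical disk holding the root volume;
    # the values are illustrative.
    target_raid_config = {
        "logical_disks": [
            {"size_gb": "MAX", "raid_level": "1", "controller": "software"},
        ]
    }
    ironic.node.set_target_raid_config("node-0", target_raid_config)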

Karbor

Notes:

  • Added events notifications for plan, checkpoint, restore, scheduled and trigger operations.

  • Allow users to back up image-booted servers together with newly added data located on the root disk.

Keystone

Notes:

  • All keystone APIs now use the default reader, member, and admin roles in their default policies. This means that it is now possible to create a user with finer-grained access to keystone APIs than was previously possible with the default policies. For example, it is possible to create an “auditor” user that can only access keystone’s GET APIs (see the sketch after this list). Please be aware that depending on the default and overridden policies of other OpenStack services, such a user may still be able to access creative or destructive APIs for other services.

  • All keystone APIs now support system scope as a policy target, where applicable. This means that it is now possible to set [oslo_policy]/enforce_scope to true in keystone.conf, which, with the default policies, will allow keystone to distinguish between project-specific requests and requests that operate on an entire deployment. This makes it safe to grant admin access to a specific keystone project without giving admin access to all of keystone’s APIs, but please be aware that depending on the default and overridden policies of other OpenStack services, a project admin may still have admin-level privileges outside of the project scope for other services.

  • Keystone domains can now be created with a user-provided ID, which allows for all IDs for users created within such a domain to be predictable. This makes scaling cloud deployments across multiple sites easier as domain and user IDs no longer need to be explicitly synced.

  • Application credentials now support access rules, a user-provided list of OpenStack API requests for which an application credential is permitted to be used. This level of access control is supplemental to traditional role-based access control managed through policy rules.

  • Keystone roles, projects, and domains may now be made immutable, so that certain important resources like the default roles or service projects cannot be accidentally modified or deleted. This is managed through resource options on roles, projects, and domains. The keystone-manage bootstrap command now allows the deployer to opt into creating the default roles as immutable at deployment time, which will become the default behavior in the future. Roles that existed prior to running keystone-manage bootstrap can be made immutable via resource update.
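
As a minimal sketch of the first point above, the snippet below uses openstacksdk to create a user and grant it only the default reader role on a project, giving it audit-style read-only access under keystone's default policies; the cloud name, user name, password, and project are illustrative assumptions.

    # Sketch: create an "auditor" user limited to the default reader role.
    # The cloud name, user name, password, and project are assumptions.
    import openstack

    conn = openstack.connect(cloud="mycloud")  # clouds.yaml entry (assumed)

    auditor = conn.identity.create_user(
        name="auditor",
        password="a-strong-password",
        domain_id="default",
    )
    project = conn.identity.find_project("service-project")  # assumed project
    reader = conn.identity.find_role("reader")

    # Granting only "reader" allows the GET APIs covered by keystone's
    # default policies; other services' policies may still grant more.
    conn.identity.assign_project_role_to_user(project, auditor, reader)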

Kolla

Notes:

  • Switched to Python 3 in Debian and Ubuntu source images.

  • Introduced images and playbooks for Masakari, which supports instance High Availability, and Qinling, which provides Functions as a Service.

  • Added support for deployment of multiple Nova cells.

  • Added support for control plane communication via IPv6.

Kuryr

Notes:

  • Stabilization of the support for Kubernetes Network Policy.

  • The Kuryr CNI plugin has been rewritten in Go to make deploying it easier.

  • Multiple enhancements made in support for DPDK and SR-IOV.

  • Support for tagging all the Neutron and Octavia resources created by Kuryr.

Magnum

Notes:

  • A Fedora CoreOS driver is now available, which we are pleased to offer since Fedora Atomic will reach end of life at the end of this November. We welcome users to test this driver and provide feedback.

  • Node groups now allow users to create clusters with groups of nodes with different specs, e.g. GPU node groups and high-memory node groups. Thanks to the CERN team and StackHPC for the great work.

  • Rolling upgrades are available for both the Kubernetes version and the node operating system, with minimal downtime.

  • Auto healing can be deployed on a Kubernetes cluster to monitor cluster health and replace broken nodes when a failure is detected.

  • Kubernetes clusters can now boot from volumes with configurable volume types. Users can even set the volume type for etcd volumes. This may be useful for cloud providers who want to use SSDs, NVMe drives, etc.

  • Private clusters can be created following security best practices, isolating Kubernetes clusters from Internet access. This may be a desirable feature for enterprise users, who now have the flexibility to keep a cluster fully private, expose only the API to the Internet, or make it fully accessible.

  • Added the cinder_csi_enabled label to support the out-of-tree Cinder CSI driver.

  • Support containerd as a container_runtime, as an alternative to host-docker.

  • A new health_polling_interval setting is supported to make the polling interval configurable or to disable polling completely (see the sketch after this list).
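
A minimal sketch of how the labels mentioned above might be combined when creating a cluster template through the Magnum API; the endpoint, token, image, flavor, network and label values are illustrative assumptions, so verify the label names against the Magnum documentation for your release.

    # Sketch: combine the Train-era labels discussed above in a cluster
    # template create call. Endpoint, token, image and label values are
    # illustrative assumptions.
    import requests

    MAGNUM_URL = "http://controller:9511/v1"    # assumed Magnum endpoint
    TOKEN = "gAAAA..."                           # a valid Keystone token

    template = {
        "name": "k8s-train",
        "coe": "kubernetes",
        "image_id": "fedora-coreos-30",          # assumed image name
        "external_network_id": "public",         # assumed network
        "flavor_id": "m1.medium",                # assumed flavor
        "labels": {
            "container_runtime": "containerd",   # alternative to host-docker
            "cinder_csi_enabled": "true",        # out-of-tree Cinder CSI
            "health_polling_interval": "60",     # seconds; see docs to disable
        },
    }

    resp = requests.post(
        f"{MAGNUM_URL}/clustertemplates",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        json=template,
    )
    resp.raise_for_status()
    print(resp.json()["uuid"])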

Manila

Notes:

  • Manila share networks can now be created with multiple subnets, which may be in different availability zones.

  • NetApp backend added support for replication when DHSS=True.

  • GlusterFS back end added support for extend/shrink for directory layout.

  • Added Infortrend driver with support for NFS and CIFS shares.

  • CephFS backend now supports IPv6 exports and access lists.

  • Added Inspur Instorage driver with support for NFS and CIFS shares.

  • Added support for modifying share type name, description and/or public access fields.

Mistral

Notes:

  • improved performance by roughly 40% (depending on the workflow)

  • improved event notification mechanism

  • finished leftovers on “advanced publishing”

  • workflow execution report now contains “retry_count” for tasks

  • completely ready for dropping py27

  • lots of bugfixes and small improvements (including dashboard)

Neutron

Notes:

  • OVN can now send ICMP “Fragmentation Needed” packets, allowing VMs on tenant networks using jumbo frames to access the external network without any extra routing configuration.

  • Event processing performance has been increased by better distributing how work is done in the controller. This helps significantly when doing bulk port bindings.

  • When different subnet pools participate in the same address scope, the constraints disallowing subnets to be allocated from different pools on the same network have been relaxed. As long as subnet pools participate in the same address scope, subnets can now be created from different subnet pools when multiple subnets are created on a network. When address scopes are not used, subnets with the same ip_version on the same network must still be allocated from the same subnet pool.

  • A new API extension, extraroute-atomic, has been implemented for Neutron routers. It enables users to add or delete individual entries in a router’s routing table, instead of having to update the entire table as a whole (see the sketch after this list).

  • Support for L3 conntrack helpers has been added. Users can now configure conntrack helper target rules to be set for a router. This is accomplished by associating a conntrack_helper sub-resource to a router.
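
A minimal sketch of an atomic route addition using the new extraroute-atomic extension via the REST API; the endpoint URL, token, router ID and route values are illustrative assumptions.

    # Sketch: atomically add a single route to a router with the
    # extraroute-atomic extension. URL, token and addresses are assumptions.
    import requests

    NEUTRON_URL = "http://controller:9696/v2.0"   # assumed endpoint
    TOKEN = "gAAAA..."                             # a valid Keystone token
    ROUTER_ID = "0123abcd-..."                     # target router (assumed)

    body = {
        "router": {
            "routes": [
                {"destination": "10.20.0.0/24", "nexthop": "192.168.1.254"},
            ]
        }
    }

    # PUT .../routers/{id}/add_extraroutes appends only the listed routes,
    # instead of replacing the whole routing table.
    resp = requests.put(
        f"{NEUTRON_URL}/routers/{ROUTER_ID}/add_extraroutes",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        json=body,
    )
    resp.raise_for_status()
    print(resp.json()["router"]["routes"])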

Nova

Notes:

  • Live migration support for servers with a NUMA topology, pinned CPUs and/or huge pages, when using the libvirt compute driver.

  • Live migration support for servers with SR-IOV ports attached when using the libvirt compute driver.

  • Support for cold migrating and resizing servers with bandwidth-aware Quality of Service ports attached.

  • Improvements to the scheduler for more intelligently filtering results from the Placement service.

  • Improved multi-cell resilience with the ability to count quota usage using the Placement service and API database.

  • A new framework supporting hardware-based encryption of guest memory to protect users against attackers or rogue administrators snooping on their workloads when using the libvirt compute driver. It currently has only basic support for AMD SEV (Secure Encrypted Virtualization); see the sketch after this list.

  • API improvements for both administrators/operators and end users.

  • Improved operational tooling for things like archiving the database and healing instance resource allocations in Placement.

  • Improved coordination with the baremetal service during external node power cycles.

  • Support for VPMEM (Virtual Persistent Memory) when using the libvirt compute driver. This provides data persistence across power cycles at a lower cost and with much larger capacities than DRAM, especially benefiting HPC and in-memory databases such as Redis, RocksDB, Oracle, SAP HANA, and Aerospike.

  • Train is the first cycle where Placement is available solely from its own project and must be installed separately from Nova.

  • Extensive benchmarking and profiling have led to massive performance enhancements in the placement service, especially in environments with large numbers of resource providers and high concurrency.

  • Added support for forbidden aggregates which allows groups of resource providers to only be used for specific purposes, such as reserving a group of compute nodes for licensed workloads.

  • Added a suite of features which, combined, enable targeting candidate providers that have complex trees modeling NUMA layouts, multiple devices, and networks where affinity between and grouping among the members of the tree are required. These features will help to enable NFV and other high performance workloads in the cloud.
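
For the memory-encryption framework noted above, SEV is requested through the hw:mem_encryption flavor extra spec (or an equivalent image property). The sketch below sets the extra spec through the flavor os-extra_specs API; the endpoint, token and flavor ID are illustrative assumptions.

    # Sketch: request AMD SEV memory encryption on a flavor via the
    # os-extra_specs API. Endpoint, token and flavor ID are assumptions.
    import requests

    NOVA_URL = "http://controller:8774/v2.1"   # assumed endpoint
    TOKEN = "gAAAA..."                          # a valid Keystone token
    FLAVOR_ID = "sev-flavor-id"                 # an existing flavor (assumed)

    resp = requests.post(
        f"{NOVA_URL}/flavors/{FLAVOR_ID}/os-extra_specs",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        json={"extra_specs": {"hw:mem_encryption": "True"}},
    )
    resp.raise_for_status()
    print(resp.json())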

Octavia

Notes:

  • An Access Control List (ACL) can now be applied to the load balancer listener. Each listener port can have a list of allowed source addresses (see the sketch after this list).

  • Octavia now supports Amphora log offloading. Operators can define syslog targets for the Amphora administrative logs and for the tenant load balancer connection logs.

  • Amphorae can now be booted using Cinder volumes.

  • The Amphora images have been optimized to reduce the image size and memory consumption.
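
A minimal sketch of the new listener ACL through the Octavia v2 API, restricting a listener to a single source CIDR; the endpoint, token, load balancer ID and CIDR are illustrative assumptions.

    # Sketch: create a listener restricted to a single source CIDR using the
    # new allowed_cidrs field. Endpoint, token and IDs are assumptions.
    import requests

    OCTAVIA_URL = "http://controller:9876/v2"   # assumed Octavia endpoint
    TOKEN = "gAAAA..."                           # a valid Keystone token

    body = {
        "listener": {
            "name": "https-restricted",
            "protocol": "TCP",
            "protocol_port": 443,
            "loadbalancer_id": "LB_UUID",        # an existing load balancer
            "allowed_cidrs": ["203.0.113.0/24"], # the new per-listener ACL
        }
    }

    resp = requests.post(
        f"{OCTAVIA_URL}/lbaas/listeners",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        json=body,
    )
    resp.raise_for_status()
    print(resp.json()["listener"]["allowed_cidrs"])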

Openstackansible

Notes:

  • Services’ virtualenvs now use Python 3 by default

  • Added murano support

  • Projects have become more reusable outside of a full OpenStack-Ansible deployment

  • Added uwsgi role to unify uWSGI service configuration across roles

  • Fully migrated to systemd-journald from rsyslog

  • Added support for metal multinode deployments

  • General reduction of technical debt

Senlin

Notes:

  • Added webhook v2 support. Previously, the webhook API introduced microversion 1.10 to allow callers to pass arbitrary data in the body along with the webhook call.

  • Admin users can now see the details of any cluster profile.

  • Allow cluster delete actions to detach policies and delete receivers for the cluster being deleted.

Storlets

Notes:

  • Python 3 support is moving forward

  • Various code improvements

Swift

Notes:

  • Swift can now be run under Python 3.

  • Log formats are now more configurable and include support for anonymization.

  • Swift-all-in-one Docker images are now built and published to https://hub.docker.com/r/openstackswift/saio

Tacker

Notes:

  • Added support for force-deleting VNF and Network Service instances.

  • Added partial support for VNF packages.

Trove

Notes:

  • Many improvements have been made to the Service Tenant deployment model, which is highly recommended for production environments. The cloud administrator can define the management resources for the Trove instance, such as the keypair, security group, network, etc.

  • Creating a Trove guest image is now much easier for cloud administrators and developers using the trovestack script.

  • Users can expose a Trove instance to the public, with the ability to limit which source IP addresses may access the database.

Vitrage

Notes:

  • Added a new datasource for Kapacitor.

  • Added a new datasource for Monasca.

  • New APIs for Vitrage status and template versions.

  • Support for database upgrades in Vitrage using the alembic tool.

Watcher

Notes:

  • Added a ‘force’ field to Audit. Users can set --force to enable the new option when launching an audit.

  • Grafana has been added as a datasource that can be used for collecting metrics.

  • Getting data from Placement to improve the Watcher compute data model.

  • Added show data model API.

  • Added node resource consolidation strategy.

Zun

Notes:

  • The Zun compute agent now reports local resources to the Placement API.

  • The Zun scheduler gets allocation candidates from the Placement API and claims container allocations.
