Ystia Orchestrator 4.0.0 released

Yorc release

The Ystia Orchestrator (Yorc) 4.0.0 has been released!

The Yorc dev team is proud to deliver this version, even in the very particular context of a general lockdown.

We want to thank all contributors who worked on this release. We especially want to highlight contributions from the T-Systems team, who implemented support for SSH bastion hosts on OpenStack, and from the Alien4Cloud team for our co-development efforts.

We hope you will appreciate the new features listed below. Please open an issue if you encounter a problem.

Enjoy and #StaySafe!


Download links

Documentation & Resources

You can find documentation in several places:

New and noteworthy

Alien4Cloud 2.2

Yorc is now officially the default orchestrator for Alien4Cloud. Consequently, a new Yorc Orchestrator provider is included in the Alien4Cloud distribution (it replaces the yorc-a4c-plugin previously developed for Ystia).

The configuration of a Yorc Orchestrator is described in the Alien4Cloud documentation.

When configuring the Yorc Orchestrator, it is necessary to consider how the locations configured in the orchestrator map to the locations configured in the Yorc server. By default, the mapping is based on location names. Specific mappings can be defined using Meta-properties.

Yorc Locations support

Yorc 3.x allowed deploying applications to one of the supported infrastructures (OpenStack, GCP, etc.), or to a pool of hosts (the HostsPool infrastructure).

With the new Locations support feature, users configure deployment locations instead of infrastructures. A location is defined by specifying a target infrastructure type and the specific configuration properties needed to connect to it. Consequently, configuring several locations of the same type is now allowed (for example, a user may configure two OpenStack locations).

This feature has a major impact on Yorc configuration. Locations are defined in a dedicated configuration file written in JSON or YAML. The path to this file can be provided in the Yorc configuration file, or using a specific environment variable or yorc command line option.
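As an illustration, here is a minimal sketch of what such a locations file could look like in YAML; the property names depend on the target infrastructure and the values below are placeholders, so refer to the Yorc documentation for the properties supported by each location type:

    # Locations configuration file (sketch): two OpenStack locations in the same Yorc server
    locations:
      - name: openstack-east
        type: openstack
        properties:
          auth_url: https://openstack-east.example.com:5000/v3
          user_name: deployer
          password: "<secret>"
          tenant_name: east-tenant
          region: RegionEast
          private_network_name: private-net
      - name: openstack-west
        type: openstack
        properties:
          auth_url: https://openstack-west.example.com:5000/v3
          user_name: deployer
          password: "<secret>"
          tenant_name: west-tenant
          region: RegionWest
          private_network_name: private-net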

The locations configuration is stored in the Consul KV database. It is initialized by reading the locations configuration file at the first Yorc server start-up. Afterwards, the locations configuration can be changed using the REST API or the Yorc CLI, which were both enriched for this purpose (new yorc locations commands are provided).

Note that HostsPool becomes a particular location type; the yorc hostspool command is maintained but adapted. Another side effect is a change in the Consul KV DB schema for the HostsPool configuration. Yorc automatically upgrades the Consul schema when upgrading from Yorc version 3 to version 4.

See more details about the breaking change itself in our changelog.

HostsPool support improved

Deployment to a HostsPool is enhanced with support for generic consumable resources. The aim is to allow allocating hosts to an application based on the hosts’ available resources and the application’s needs.

The initial amount of available resources is specified in the HostsPool location configuration using labels. The application’s resource requirements can be expressed in the Compute node definitions using the host capability properties (see the detailed section in our HostsPool documentation).
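To illustrate, here is a hedged sketch under the assumption that the usual HostsPool file format and the standard TOSCA host capability are used (exact label names should be checked against the HostsPool documentation): a host declared in the pool with its resources as labels, and a Compute node requesting part of those resources.

    # Host declared in the HostsPool location, with its available resources exposed as labels
    hosts:
      - name: host-1
        connection:
          host: 10.0.0.21
          user: ubuntu
          private_key: ~/.ssh/yorc.pem
        labels:
          host.num_cpus: "8"
          host.mem_size: "16 GB"
          host.disk_size: "100 GB"

    # Compute node requesting resources through its host capability (standard TOSCA)
    ComputeA:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
            disk_size: 20 GB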

This mechanism is provided in addition to the already existing labels and filters, available in Yorc 3.x for host allocation.

When an application is deployed, Yorc chooses hosts based on the application’s resource requirements and updates the available resources for each allocated host. Optionally, a generic resource can be marked as not consumable, so that it can be shared by several Compute nodes. If several hosts match the resource availability criteria, a placement policy is used to make the choice. The policy can be specified at deployment time using the Alien4Cloud deployment tool (see the Alien4Cloud documentation for detailed steps), as sketched below.
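As a sketch, such a placement policy would be attached to the Compute nodes targeting the hosts pool; the policy type name below is an assumption, so check the catalog shipped with Yorc/Alien4Cloud for the exact types available:

    # Placement policy sketch: the type name is assumed, not taken from the release notes
    topology_template:
      policies:
        - placement:
            type: yorc.policies.hostspool.BinPackingPlacement
            targets: [ ComputeA, ComputeB ]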

Workflows inputs/outputs

Yorc now supports workflow inputs and outputs as specified by TOSCA 1.3. The REST API as well as the related CLI commands were updated accordingly.
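Below is a minimal sketch of a TOSCA 1.3 workflow declaring an input (all names are illustrative); the input value can then be supplied through the REST API or the CLI when the workflow is launched.

    topology_template:
      workflows:
        backup:
          inputs:
            backup_dir:
              type: string
              default: /var/backups
          steps:
            backup_component:
              target: MyComponent
              activities:
                # calls the 'backup' operation of the node's 'custom' interface
                - call_operation: custom.backup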

Yorc Storage refurbished

In Yorc 3.x, all the managed artifacts (deployment topologies, logs and events) were stored in the Consul KV database. Mainly for optimization reasons, Yorc 4 introduces storage configuration. Storage support is based on three store types dedicated to the main artifact categories (Deployment, Log and Event stores), and three store implementations (Consul KV, fileCache, cipherFileCache).

Yorc users can configure one or several stores, depending on their particular needs, using a new entry named storage in the Yorc configuration file. However, storage configuration is not mandatory, as default settings are provided.

Storage can be reconfigured when Yorc restarts, using the reset configuration property. When a store is reconfigured by changing its implementation from Consul to file, the data can be migrated by setting the migrate_data_from_consul configuration property to true.
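For illustration, here is a hedged sketch of what a storage section could look like in the Yorc configuration file; the store name and the properties block (such as root_dir) are assumptions, so refer to the Storage configuration chapter for the exact settings:

    storage:
      reset: false
      stores:
        - name: myFileStore
          implementation: fileCache
          # migrate existing data from Consul when switching implementations
          migrate_data_from_consul: true
          types: [ Log, Event ]
          properties:
            root_dir: /var/yorc/store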

See the Yorc Server Configuration, Storage configuration Chapter for more details.

Yorc Telemetry upgraded

In order to improve the observability of Yorc executions, the namespace of collected metrics was modified to support labels. This allows metric trees to be exposed to monitoring tools such as Prometheus.
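As an example, and assuming the telemetry section of the Yorc configuration keeps its usual shape (the key names below, notably expose_prometheus_endpoint, are assumptions to double-check against the Yorc configuration documentation), exposing metrics to Prometheus could look like:

    # Telemetry configuration sketch; key names are assumptions
    telemetry:
      service_name: yorc
      expose_prometheus_endpoint: true
      disable_go_runtime_metrics: false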

Topologies Updates (Premium feature)

It is now possible to update a deployed topology by performing the following actions on the topology.

Add/remove/update workflows

This feature allows adding new workflows, and removing or modifying existing ones; these workflows can be run as custom workflows or during the application lifecycle workflows (install/start/stop/uninstall).

Add/remove/update monitoring policies

HTTP and TCP monitoring policies can be applied to an application in order to monitor the liveness of software components or Compute instances. See the Alien4Cloud documentation for more information.

With the Premium version, you can add new monitoring policies to a deployed application if you missed them when deploying the app. You can also modify or remove existing monitoring policies on a deployed application if your needs change. For instance, you can increase or decrease the monitoring time interval.
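For illustration, an HTTP monitoring policy attached to a component could look like the sketch below; the type and property names (time_interval, port, path) are assumptions based on the Yorc monitoring policy types, so check the catalog for the exact definitions:

    topology_template:
      policies:
        - http_liveness:
            type: yorc.policies.monitoring.HTTPMonitoring
            targets: [ Tomcat ]
            properties:
              time_interval: 5 s
              port: 8080
              path: /health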

Update TOSCA types

This feature allows updating imported TOSCA types, either in the same version or in a new version, in order to support new attributes, properties or even operations. For instance, combined with a new custom workflow, this allows executing new operations.
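As a sketch, updating an imported type could consist in adding a property and an operation to an existing node type (the names below are purely illustrative); the new operation can then be invoked from a new custom workflow:

    node_types:
      org.sample.MyComponent:
        derived_from: tosca.nodes.SoftwareComponent
        properties:
          log_level:
            type: string
            default: INFO       # property added by the type update
        interfaces:
          custom:
            rotate_logs:        # operation added by the type update
              implementation: scripts/rotate_logs.sh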

Add/remove nodes

This feature allows adding new node templates to a deployed topology, or removing existing ones. In this first implementation, it is not possible to mix additions and removals in the same update; you need to perform them in separate updates.

Infrastructure support improvements

OpenStack

  • Bastion hosts support allows deploying applications to hosts that are not directly accessible via SSH. See an example here.
  • OpenStack instances can now be created from a volume. See details here.

Kubernetes

  • StatefulSet deployments support
  • Persistent volumes support

Find details in section Configure a Kubernetes Location from chapter Configure a Yorc orchestrator and locations.

AWS

  • Volumes support

Find details in section Configure an AWS Location from chapter Configure a Yorc orchestrator and locations.

Ystia Forge

For this release we focused on containers support.

Some components were also updated; please refer to the changelog for details.

Release Notes

We also made a lot of bug fixes and improvements; you can check out the changelogs here: