Container support
Cloudify supports integrations with Docker and Docker-based container managers, including Docker Swarm, Docker Compose, Kubernetes, and Apache Mesos. Cloudify can both manage container infrastructure and orchestrate the services that run on container platforms. When orchestrating container orchestrators such as Kubernetes, Docker Swarm, and Mesos, Cloudify provides infrastructure management capabilities such as installation, auto-healing, and scaling. When orchestrating services on these platforms, Cloudify integrates seamlessly with native descriptors to not only support container cluster service deployment, but also to enable orchestrations that encompass systems beyond the edges of the container cluster.
Cloudify can be used to create, heal, scale, and tear down container clusters. This capability is key in providing a scalable and highly available infrastructure on which container managers can run.
Cloudify can also orchestrate related infrastructure on bare metal, virtualized, and cloud platforms. This can include networking and storage infrastructure, both virtual and physical.
Independently of infrastructure orchestration, Cloudify provides the ability to orchestrate heterogeneous services across platforms. By leveraging the strength of TOSCA modeling, Cloudify can manage the instantiation and configuration of service chains regardless of the target platform, which may range from containerized, to virtualized, to "bare metal" OS, to physical hardware.
The Docker plugin is a Cloudify plugin that defines a single type: cloudify.docker.Container. The plugin is compatible with Docker 1.0 (API version 1.12) and relies on the docker-py library. The plugin executes on a compute host that has Docker pre-installed.
- image: A dict describing a Docker image. To import an image from a tarball, use the src key; the value is an absolute path or URL. If pulling an image from Docker Hub, do not use src; use the repository key instead, whose value is the repository name. You may additionally specify the tag; if none is given, latest is assumed.
- name: The name of the Docker container. This will also be the hostname in the Docker host config.
- use_external_resource: A boolean indicating whether the container already exists or not.
The cloudify.interfaces.lifecycle interface is implemented and supports the following operation inputs:
- create inputs:
  - params: A dict of parameters allowed by docker-py to the create_container function.
- start inputs:
  - params: A dict of parameters allowed by docker-py to the start function.
  - processes_to_wait_for: A list of processes to verify are active on the container before completing the start operation. If not all processes are active, the operation is retried.
  - retry_interval: Before finishing, start checks that all processes on the container are ready. This is the interval between checks, in seconds.
- stop inputs:
  - params: A dict of parameters allowed by docker-py to the stop function.
  - retry_interval: If Exited is not in the container status, the plugin retries. This is the number of seconds between retries.
- delete inputs:
  - params: A dict of parameters allowed by docker-py to the remove_container function.
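As a sketch, a node template using this type and these inputs might look like the following (the host node, container name, image, and params are hypothetical; params pass straight through to docker-py):

```yaml
node_templates:
  docker_host:
    type: cloudify.nodes.Compute
    # a host with Docker 1.0 (API 1.12) pre-installed

  web_container:
    type: cloudify.docker.Container
    properties:
      name: web                # container name, also used as the Docker hostname
      image:
        repository: nginx      # pulled from Docker Hub; latest tag assumed
    interfaces:
      cloudify.interfaces.lifecycle:
        create:
          inputs:
            params:
              ports: [80]      # passed to docker-py create_container
        start:
          inputs:
            params:
              port_bindings: { 80: 8080 }   # passed to docker-py start
            processes_to_wait_for:
              - nginx
            retry_interval: 5
    relationships:
      - type: cloudify.relationships.contained_in
        target: docker_host
```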
The Docker Swarm blueprint creates and manages a Docker Swarm cluster on Openstack. There are three blueprints, with slightly different use cases:
- swarm-local-blueprint.yaml: a cfy local blueprint that orchestrates setup and teardown of the cluster without a manager
- swarm-openstack-blueprint.yaml: an Openstack blueprint that orchestrates setup and teardown of the cluster with a manager
- swarm-scale-blueprint.yaml: an Openstack blueprint that orchestrates setup, teardown, autohealing, and autoscaling of the cluster
These blueprints have only been tested against an Ubuntu 14.04 image with 2 GB of RAM. The image must be pre-installed with Docker 1.12, and should allow passwordless ssh and passwordless sudo, with requiretty set to false or commented out in sudoers. An Openstack cloud environment is also required; the blueprints were tested on Openstack Kilo.
The swarm-local blueprint is intended to be run using the cfy local CLI command, so no manager is necessary. The blueprint starts a two-node Swarm cluster and related networking infrastructure in Openstack.
- image: The Openstack image id. This image will be used for both master and worker nodes. The image must be prepared with Docker 1.12 and must support passwordless ssh, passwordless sudo, and passwordless sudo over ssh. Only Ubuntu 14.04 images have been tested.
- flavor: The Openstack flavor id. This flavor will be used for both master and worker nodes. Flavors with 2 GB RAM and 20 GB disk are adequate; flavor size will vary based on application needs.
- ssh_user: The ssh user name. This blueprint uses the Fabric plugin and so requires ssh credentials.
- ssh_keyname: The Openstack ssh key to attach to the compute nodes (both master and worker).
- ssh_keyfile: The path to the ssh private key. This blueprint uses the Fabric plugin and so requires ssh credentials.
The blueprint contains a dsl_definitions block to specify the Openstack credentials:
- username: The Openstack user name
- password: The Openstack password
- tenant_name: The Openstack tenant
- auth_url: The Openstack Keystone URL
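A minimal sketch of such a block, assuming the credentials are wired in as inputs and reused via a YAML anchor (the anchor name is illustrative):

```yaml
dsl_definitions:
  openstack_configuration: &openstack_configuration
    username: { get_input: username }
    password: { get_input: password }
    tenant_name: { get_input: tenant_name }
    auth_url: { get_input: auth_url }
```

With inputs and credentials in place, the cluster can be brought up without a manager, e.g. cfy local init -p swarm-local-blueprint.yaml -i inputs.yaml followed by cfy local execute -w install.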
The swarm-openstack-blueprint.yaml is a Cloudify manager hosted blueprint that starts a Swarm cluster and related networking infrastructure.
- image: The Openstack image id. This image will be used for both master and worker nodes. The image must be prepared with Docker 1.12 and must support passwordless ssh, passwordless sudo, and passwordless sudo over ssh. Only Ubuntu 14.04 images have been tested.
- flavor: The Openstack flavor id. This flavor will be used for both master and worker nodes. Flavors with 2 GB RAM and 20 GB disk are adequate; flavor size will vary based on application needs.
- ssh_user: The ssh user name. This blueprint uses the Fabric plugin and so requires ssh credentials.
- agent_user: The user for the image.
The blueprint defines a single output, swarm-info, a dict with two keys:
- manager_ip: the public IP address allocated to the Swarm manager
- manager_port: the port the manager listens on
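In blueprint terms, that output might be wired roughly as follows (the node name, attribute, and port are illustrative assumptions):

```yaml
outputs:
  swarm-info:
    value:
      manager_ip: { get_attribute: [ manager_floating_ip, floating_ip_address ] }
      manager_port: 2375
```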
The swarm-scale-blueprint.yaml is a Cloudify manager hosted blueprint that starts a Swarm cluster and related networking infrastructure. It installs metrics collectors on worker nodes, and defines scaling and healing groups for cluster high availability.
- image: The Openstack image id. This image will be used for both master and worker nodes. The image must be prepared with Docker 1.12 and must support passwordless ssh, passwordless sudo, and passwordless sudo over ssh. Only Ubuntu 14.04 images have been tested.
- flavor: The Openstack flavor id. This flavor will be used for both master and worker nodes. Flavors with 2 GB RAM and 20 GB disk are adequate; flavor size will vary based on application needs.
- ssh_user: The ssh user name. This blueprint uses the Fabric plugin and so requires ssh credentials.
- agent_user: The user for the image.
The blueprint defines a single output, swarm-info, a dict with two keys:
- manager_ip: the public IP address allocated to the Swarm manager
- manager_port: the port the manager listens on
The Docker Swarm Plugin provides support for deploying services onto Docker Swarm clusters, as well as support for Docker Compose.
The cloudify.swarm.Manager type represents a Swarm manager that is not managed by Cloudify. If a Cloudify managed manager is used, the Cloudify proxy plugin should be used instead.
- ip: The IPv4 address of the Swarm manager
- port: The port the manager REST API listens on (default: 2375)
- ssh_user: An ssh user for operations that require ssh (Docker Compose)
- ssh_keyfile: An ssh private key for operations that require ssh (Docker Compose)
The cloudify.swarm.Microservice type represents a Docker Swarm service. It can be configured with TOSCA-style properties, or it can point to an external Swarm yaml descriptor. Note that the source project has an example of usage.
- compose_file: The path to a Docker Compose descriptor file. If set, all other properties are ignored.
- All other properties are translated into the Docker REST service create API call. Properties in the blueprint are written with underscores between words (e.g. log_driver) and are converted internally to the camel case used in the REST API body (e.g. LogDriver). See the comments in plugin.yaml for an extensive example.
- cloudify.swarm.relationships.microservice_contained_in_manager: This relationship connects a Microservice to a manager. The implementation allows the target to be either a cloudify.swarm.Manager type or a cloudify.nodes.DeploymentProxy type.
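A hypothetical sketch combining the two types and the relationship (the address, key path, and compose file are illustrative):

```yaml
node_templates:
  swarm_manager:
    type: cloudify.swarm.Manager
    properties:
      ip: 10.0.0.5
      port: 2375
      ssh_user: ubuntu
      ssh_keyfile: ~/.ssh/swarm.pem

  my_microservice:
    type: cloudify.swarm.Microservice
    properties:
      compose_file: resources/docker-compose.yml   # external descriptor; other properties ignored
    relationships:
      - type: cloudify.swarm.relationships.microservice_contained_in_manager
        target: swarm_manager
```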
The Kubernetes Cluster Blueprint creates and manages a Kubernetes cluster on Openstack or Amazon EC2. It uses the containerized version of Kubernetes to create the cluster, and it also installs the Kubernetes dashboard and the kubectl utility on the master. By default, the blueprint is configured to install on AWS. To switch to Openstack, edit the blueprint file, comment out the line - imports/aws/blueprint.yaml, and uncomment the line below it, as sketched next.
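For example, the imports section would change roughly as follows (the Openstack import path is an assumption about the repository layout):

```yaml
imports:
  # - imports/aws/blueprint.yaml        # comment out when targeting Openstack
  - imports/openstack/blueprint.yaml    # assumed path; uncomment for Openstack
```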
- your_kubernetes_version: The version of Kubernetes to use. Default = 1.2.1.
- your_etcd_version: The version of Etcd to use. Default = 2.2.1.
- your_flannel_version: The version of Flannel to use. Default = 0.5.5.
- flannel_interface: The interface to bind flannel to. Default = eth0.
- flannel_ipmasq_flag: Whether flannel should use IP masquerading. Default = true.
- aws_access_key_id: The AWS access key.
- aws_secret_access_key: The AWS secret key.
- ec2_region_name: The EC2 region name. Default = us-east-1.
- ec2_region_endpoint: The EC2 region endpoint. Default = ec2.us-east-1.amazonaws.com.
- keystone_username: The Openstack user name.
- keystone_password: The Openstack password.
- keystone_tenant_name: The Openstack tenant.
- keystone_url: The Openstack authentication (Keystone) URL.
- region: The Openstack region (optional).
- nova_url: The Openstack Nova compute API URL (optional).
- neutron_url: The Openstack Neutron network API URL (optional).
- openstack_management_network_name: The Cloudify management network name (optional).
- A single output, Kubernetes_Dashboard, with a dict value containing a single key, url. The URL uses the allocated floating IP and points to the Kubernetes dashboard.
- A single output, kubernetes_info, with a dict value containing a single key, url. The URL uses the allocated floating IP and points to the Kubernetes dashboard.
To tweak the scaling behavior, edit the groups defined in the individual cloud-specific imports for AWS and Openstack. Both sub-blueprints refer to a custom scaling policy type, whose definition documents how the scaling parameters can be tweaked for the desired effects. The heal group uses the built-in host failure policy, which is triggered when named metrics expire (60 seconds). A representative sketch follows.
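As a sketch, a heal group of this kind, assuming the standard Cloudify host_failure policy and an illustrative member node and metric name:

```yaml
groups:
  heal_group:
    members: [kubernetes_node_host]        # illustrative worker node name
    policies:
      heal_policy:
        type: cloudify.policies.types.host_failure
        properties:
          service:
            - cpu.total.system             # heal fires when these named metrics expire
        triggers:
          heal_trigger:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: heal
              workflow_parameters:
                node_instance_id: { get_property: [SELF, node_id] }
```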
The Kubernetes Plugin provides support for deploying services on Kubernetes clusters.
Note: this plugin is deprecated.
The cloudify.kubernetes.MicroService type deploys and undeploys Kubernetes services to a Kubernetes cluster. It provides options for specifying the service configuration with TOSCA properties, or with embedded or external native Kubernetes service descriptors. A sketch follows the property lists below.
- name: The service name.
- image: The image name.
- port: The service listening port.
- target_port: The container port (default: port).
- protocol: TCP/UDP (default: TCP).
- replicas: The number of replicas (default: 1).
- run_overrides: JSON overrides for kubectl "run".
- expose_overrides: JSON overrides for kubectl "expose".
- config: A dict whose children can be native Kubernetes descriptor YAML.
- config_files: A dict with the keys:
  - file: A Kubernetes descriptor file (e.g. pod.yaml).
  - overrides: A list of substitutions to perform on the descriptor file (see below).
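For example, a Microservice configured purely with TOSCA-style properties might look like this sketch (the node name and image are hypothetical):

```yaml
nodecellar_service:
  type: cloudify.kubernetes.MicroService
  properties:
    name: nodecellar
    image: uric/nodecellar     # hypothetical image
    port: 8080
    target_port: 8080
    protocol: TCP
    replicas: 2
```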
When configuring with external files, the files require no changes to be used with Cloudify, but they can be modified by means of "overrides", which insert blueprint values dynamically. The target file is parsed into a Python data structure (a dict of dicts and lists). To understand how the substitutions work, consider this pod.yaml snippet:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nodecellar
spec:
  replicas: 2
```
Now imagine that, for some reason, you wish to change the number of replicas to 3. The "overrides" line in the blueprint would look like this:
['spec']['replicas']=3
Internally, the plugin simply evaluates this statement on the parsed data structure. After all substitutions are done, a new pod.yaml is written and used to perform the actual deployment on the master node via kubectl. The value type of a substitution line is a string, so standard intrinsics like concat and get_property can be used to insert properties from elsewhere in the blueprint.
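Putting it together, a config_files property carrying that override might look like the following sketch (the file path and the service_name input are illustrative, and the list-of-dicts shape is an assumption):

```yaml
nodecellar_microservice:
  type: cloudify.kubernetes.MicroService
  properties:
    config_files:
      - file: resources/pod.yaml
        overrides:
          - "['spec']['replicas']=3"
          # each override is a string, so intrinsics may be used:
          - { concat: ["['metadata']['name']=", { get_input: service_name }] }
```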
Sometimes it is desirable to inject runtime properties or information from the Cloudify context. To enable this, the plugin implements a special syntax.
To insert runtime properties as values of substitutions, use the @{} syntax. It takes two arguments: a node name and a property name. For example, to inject a dynamically discovered port from another node, you could use something like [some][path]=@{target_node,discovered_port}.
To insert values from the Cloudify context, use the %{} syntax. It takes a single argument: a path in the Cloudify node context object. For example, to insert the node id of the service, you could use something like [some][path]=%{node.id}. This is equivalent to evaluating ctx.node.id in plugin code.
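Continuing the sketch above, both syntaxes appear as ordinary override strings (the node and property names are hypothetical):

```yaml
overrides:
  - "['spec']['containers'][0]['ports'][0]['containerPort']=@{mongo_node,discovered_port}"
  - "['metadata']['name']=%{node.id}"
```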
The Mesos blueprint creates and manages Mesos clusters on Openstack. It is a Cloudify manager hosted blueprint that starts a Mesos cluster and related networking infrastructure. It installs metrics collectors on slave nodes, and defines scaling and healing groups for cluster high availability.
The Mesos blueprint includes a secondary blueprint to aid in the creation of Cloudify-compatible images on Openstack. The image preparation blueprint is located in the util directory. In util/imports/openstack/blueprint.yaml, fill in the inputs and the Openstack configuration. When done, run the create_image.sh script, as sketched below. When it completes, save a snapshot of the created image and use it as a base image for the Mesos blueprint. For more details, see the README.
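The steps amount to something like the following shell sketch (the script location is an assumption; see the README for the authoritative procedure):

```sh
# fill in the inputs and the Openstack configuration first
$EDITOR util/imports/openstack/blueprint.yaml
# run the image preparation script (location assumed)
./create_image.sh
# then snapshot the resulting server in Openstack and reuse it as the base image
```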
- image: The Openstack image id. Ideally this image is created by the image creation process described previously. If not, the image must be an Ubuntu 14.04 OS prepared to allow passwordless ssh, passwordless sudo, and passwordless sudo over ssh, with Docker and Mesos installed. You can also run the image creation script manually to prepare the image.
- flavor: The Openstack flavor id. This flavor will be used for all instances. Flavors with 2 GB RAM and 20 GB disk are adequate; flavor size will vary based on application needs.
- agent_user: The user for the image. Should be 'ubuntu'.
The blueprint defines two outputs:
- mesos_ip: the public IP of the master server
- mesos_ui: the URL of the Mesos dashboard