Provisioning Cloudify Services With Kubernetes
This post is a continuation of my previous post about integrating Cloudify via the Kubernetes service broker extension. That post laid out a plan to essentially proxy Cloudify blueprints from Kubernetes using the service brokerage mechanism. To recap, the service brokerage extension in Kubernetes enables the consumption of services external to Kubernetes in a native way. This is a natural fit for Cloudify, which is itself a service orchestrator. With this integration, Kubernetes users can enumerate and consume services described in Cloudify blueprints as though they were native Kubernetes services.
Some progress has been made since that time, and while not a complete implementation, the fundamental tasks of service catalog listing and service provisioning are working and worth a look. I'll summarize the progress first, and dig into the details later for those interested.
The first step in any service broker implementation is to provide the ability to list services (and plans) via a REST API per the Open Service Broker API spec. This request to list services returns data from a configured Cloudify server. I used Cloudify 4.2 for this effort. In order to provide reasonable performance, I don't proxy requests directly to the Cloudify REST API, but instead serve them from a small local database. Details later. In any case, when Kubernetes comes calling looking for a list of services, the contents of this local database are returned. On the Kubernetes side, the first step is the creation of a service broker resource. The one I made looks like this:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
name: cloudify-broker
spec:
url: http://192.168.33.10:5000
This associates the URL of my REST API with a service broker called cloudify-broker. In Kubernetes, running kubectl create -f broker.yaml is sufficient to hook up the broker. I can see what K8S thinks of it by running kubectl get clusterservicebrokers cloudify-broker -o yaml. The output looks like this:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
creationTimestamp: 2018-03-22T14:48:22Z
finalizers:
- kubernetes-incubator/service-catalog
generation: 1
name: cloudify-broker
resourceVersion: "1609"
selfLink: /apis/servicecatalog.k8s.io/v1beta1/clusterservicebrokers/cloudify-broker
uid: 11ae3bc6-2de0-11e8-936d-2ea0451a3d4c
spec:
relistBehavior: Duration
relistDuration: 15m0s
relistRequests: 0
url: http://192.168.33.10:5000
status:
conditions:
- lastTransitionTime: 2018-03-22T21:54:21Z
message: Successfully fetched catalog entries from broker.
reason: FetchedCatalog
status: "True"
type: Ready
lastCatalogRetrievalTime: 2018-03-22T21:54:21Z
reconciledGeneration: 1
Now Kubernetes has the list of blueprints on my targeted server. I can see what's available on Kubernetes by running kubectl get clusterserviceclasses -o=custom-columns=NAME:.metadata.name,EXTERNAL\ NAME:.spec.externalName and I get:
NAME      EXTERNAL NAME
1         drupal
2         mariadb
3         nodecellar
I use simple integers for service IDs because it makes less of a mess for this exercise, and in my little broker even the lowly integer will be unique without resorting to UUIDs. Now that I have my list of services, I'd like to provision one. If you recall, one of the concepts in the broker API is that of 'plans'. Plans represent fixed configurations of services, or perhaps classes of services. I ignore plans for now and just provide a default one.
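For reference, the broker has to present each blueprint in the shape the Open Service Broker API expects. A minimal sketch of what a catalog entry could look like is below; the descriptions and bindable flag are illustrative stand-ins, and only the overall shape and the integer IDs reflect what the broker actually does.

# Sketch of an OSB-style catalog payload (illustrative values).
# Each Cloudify blueprint becomes a service class, with a single
# "default" plan attached since plans are ignored for now.
catalog = {
    "services": [
        {
            "id": "3",                        # simple integer ID from the local database
            "name": "nodecellar",             # blueprint name on the Cloudify manager
            "description": "nodecellar blueprint",   # placeholder description
            "bindable": True,
            "plans": [
                {
                    "id": "3",                # one default plan per service
                    "name": "default",
                    "description": "default plan"
                }
            ]
        }
        # ...one entry each for drupal and mariadb
    ]
}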
To trigger service provisioning on K8S, a service instance resource is created. I described mine in a file with these contents:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
name: nodecellar-instance
namespace: test-ns
spec:
clusterServiceClassExternalName: nodecellar
clusterServicePlanExternalName: default
This file indicates I want a service instance called nodecellar-instance that is based on the nodecellar service from the catalog listing. Note that, although not depicted here, it is possible to have a section that describes parameters to pass to the blueprint on the Cloudify side. To provision the service, you just run kubectl create -f nodecellar.yaml, and Kubernetes instructs the service broker to create the instance. This results in the broker activating the Cloudify REST API, calling create deployment and start execution. This is an asynchronous operation, and the broker returns a success indicator that the process of service creation has started. Then K8S polls a REST endpoint periodically until the broker indicates success or failure. Since the Cloudify server has no way to distinguish separate service instances (deployments in Cloudify-speak), the broker creates a deployment name that is a UUID and associates it with the execution and blueprint in its local database.
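For the Cloudify side of that call, here's a rough sketch of what the create-deployment-plus-install step amounts to using the Python cloudify-rest-client (installed by pip install cloudify). The function name and structure are my own illustration rather than the broker's actual code, and error handling is omitted.

import uuid
from cloudify_rest_client import CloudifyClient

def provision_blueprint(client, blueprint_id, inputs=None):
    """Create a uniquely named deployment and start the install workflow.

    Returns the deployment and execution IDs so the broker can track
    progress in its local database.
    """
    deployment_id = str(uuid.uuid4())          # UUID keeps instances distinct
    client.deployments.create(blueprint_id, deployment_id,
                              inputs=inputs or {})
    execution = client.executions.start(deployment_id, 'install')
    return deployment_id, execution.id

# client = CloudifyClient(host='<manager ip>', username='admin',
#                         password='admin', tenant='default_tenant')
# provision_blueprint(client, 'nodecellar')

One wrinkle the sketch glosses over: the deployment environment creation that Cloudify runs after deployments.create has to finish before the install workflow can start, so real code needs to wait or retry before calling executions.start.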
On the Cloudify side, the deployment shows up in the manager UI under that UUID name. While it is coming up, K8S pings the broker to check whether it's done. When it's running, we can fetch the status of the instance with kubectl get serviceinstances -n test-ns nodecellar-instance -o yaml, yielding:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
creationTimestamp: 2018-03-22T22:36:18Z
finalizers:
- kubernetes-incubator/service-catalog
generation: 1
name: nodecellar-instance12
namespace: test-ns
resourceVersion: "1707"
selfLink: /apis/servicecatalog.k8s.io/v1beta1/namespaces/test-ns/serviceinstances/nodecellar-instance12
uid: 6febf2ce-2e21-11e8-936d-2ea0451a3d4c
spec:
clusterServiceClassExternalName: nodecellar
clusterServiceClassRef:
name: "3"
clusterServicePlanExternalName: default
clusterServicePlanRef:
name: "3"
externalID: 5c9ee825-bee8-4fd0-93b1-b6dfecf6dfee
updateRequests: 0
status:
asyncOpInProgress: false
conditions:
- lastTransitionTime: 2018-03-22T22:40:46Z
message: The instance was provisioned successfully
reason: ProvisionedSuccessfully
status: "True"
type: Ready
deprovisionStatus: Required
externalProperties:
clusterServicePlanExternalID: "3"
clusterServicePlanExternalName: default
orphanMitigationInProgress: false
reconciledGeneration: 1
At this point the service is up and running in Cloudify. I haven't gotten to binding yet, so I can't really interact with it, but that is left for another day.
Even though this is not intended for production, I wanted reasonable performance. That means not coupling the K8S-facing service broker REST API to the broker's Cloudify REST calls. Also recall that service/blueprint metadata in Cloudify does not match up with what a service broker must provide. This means I need another data store that contains information from Cloudify, plus information that the service broker needs. I opted for SQLite via the SQLAlchemy library. Another basic requirement is to provide a REST API to K8S, and so I used Flask. To decouple Cloudify from K8S, I spawn a background thread that syncs Cloudify with my local database, and serve read requests from the SQLite database.
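To give an idea of what that local store has to hold, here is a hypothetical SQLAlchemy sketch; the actual schema in db.py may well differ, but roughly speaking it needs a table of blueprint-backed services for the catalog and a table tracking provisioned instances.

# Hypothetical schema for the broker's local SQLite database (not the real db.py).
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Service(Base):
    __tablename__ = 'services'
    id = Column(Integer, primary_key=True)      # the simple integer service ID
    blueprint_id = Column(String)               # Cloudify blueprint name

class Instance(Base):
    __tablename__ = 'instances'
    instance_id = Column(String, primary_key=True)    # ID supplied by Kubernetes
    deployment_id = Column(String)                     # UUID deployment in Cloudify
    execution_id = Column(String)
    status = Column(String)                            # started / terminated / failed

engine = create_engine('sqlite:///cfy.db')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)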
There are 3 source files:
- db.py - A class that represents the database
- cfysync.py - The thread that syncs a Cloudify server with the database
- sbroker.py - The REST server and main driver
At this point, the REST server only implements 3 REST endpoints (a minimal sketch of these routes follows the list):
- /v2/catalog - GET delivers the service catalog to K8S. This path is called when the service broker resource is created. The URL is also called periodically by the broker controller to get updates, or to recover from broker communication failures.
- /v2/service_instances/<instance_id> - PUT requests an instance creation. This path is called when a service instance resource is created in Kubernetes. The resource config uses the service name from the results of the /v2/catalog call.
- /v2/service_instances/<instance_id>/last_operation - GET is how K8S checks instance status. This path is called by Kubernetes while waiting for the completion of service provisioning. Note that asynchronous provisioning is an optional operational mode for service brokers: the service creation call receives a URL parameter indicating whether the broker controller supports asynchronous operation, and the cloudify-broker mandates this parameter, returning the specified error code 422 if async isn't supported.
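To make the shape of those routes concrete, here is a minimal Flask sketch. The helper functions (build_catalog, provision_instance_record, get_instance_status) are placeholders for whatever sbroker.py actually does; only the route paths, methods, and status codes follow the broker API described above.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder helpers standing in for the real db.py / cfysync.py logic
def build_catalog():
    return {'services': []}

def provision_instance_record(instance_id, service_id, parameters):
    pass

def get_instance_status(instance_id):
    return 'in progress'

@app.route('/v2/catalog', methods=['GET'])
def catalog():
    # Serve the catalog from the local database, not from Cloudify directly
    return jsonify(build_catalog())

@app.route('/v2/service_instances/<instance_id>', methods=['PUT'])
def provision(instance_id):
    # The broker only supports asynchronous provisioning
    if request.args.get('accepts_incomplete') != 'true':
        return jsonify({'error': 'AsyncRequired'}), 422
    body = request.get_json()
    provision_instance_record(instance_id, body['service_id'],
                              body.get('parameters'))
    return jsonify({}), 202            # 202 = provisioning has started

@app.route('/v2/service_instances/<instance_id>/last_operation', methods=['GET'])
def last_operation(instance_id):
    # Report whatever state the sync thread has recorded
    return jsonify({'state': get_instance_status(instance_id)}), 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)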
There is a bit of a dance between the Cloudify sync thread and the polling of Kubernetes. The sync thread examines all executions running on Cloudify and updates their status (e.g. started, terminated, failed, etc.), while the polling thread from K8S runs queries on the database and returns the results.
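A rough sketch of what that sync thread could look like, reusing the hypothetical Instance model from the SQLAlchemy sketch above; the real cfysync.py will differ in its details.

import threading
import time

def sync_loop(client, session_factory, interval=10):
    """Periodically copy execution status from Cloudify into the local DB.

    `client` is a CloudifyClient, `session_factory` the SQLAlchemy Session.
    """
    while True:
        session = session_factory()
        for execution in client.executions.list():
            # Update the matching instance row, if the broker created one
            row = (session.query(Instance)
                   .filter_by(execution_id=execution.id)
                   .first())
            if row:
                row.status = execution.status    # e.g. started, terminated, failed
        session.commit()
        session.close()
        time.sleep(interval)

def start_sync(client, session_factory):
    t = threading.Thread(target=sync_loop, args=(client, session_factory))
    t.daemon = True      # let the Flask process exit without waiting on the thread
    t.start()
    return t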
If you want to attempt setting this up yourself, you can grab the latest code on GitHub: https://github.com/dfilppi/service-broker/tree/provisionable. You'll need some dependencies:
- Python 2.7
- Cloudify REST API, pip install cloudify
- Flask, pip install flask
- SQLAlchemy, pip install sqlalchemy
You'll need at least Kubernetes version 1.7. I used 1.9, so I can't vouch for earlier versions. My environment was Ubuntu Xenial on Virtualbox/Vagrant. I used kubeadm to install a single-node Kubernetes. If you want to do this, just follow the instructions to install the master, but you'll need to untaint the master so it can function as a node (kubectl taint nodes --all node-role.kubernetes.io/master-).
You'll also need a Cloudify master. I tested on both 4.2 and 4.3, available here. My testing was on OpenStack, but the cloud platform is irrelevant. Once running, you'll need to install at least one blueprint to test with. Some instructions are available here and here.
Once you have Cloudify and Kubernetes up, you can start deploying services. Start the broker with the command python sbroker.py --host <cloudify manager ip/name> --user <cloudify user> --password <cloudify password>. If you've just installed the manager, you can use admin for both user and password. The broker will start and begin listening on port 5000. This is hardcoded for now, and your Kubernetes deployment will have to be free to access it over the network.
The broker logs to a file in the same directory called sbroker.log. It can give you an idea of what is going on internally in the broker. A database file, called cfy.db, is also created. It holds the sync database from the Cloudify manager. You can use the sqlite3 tool to open it, run queries, and see what is being stored.
To see the broker in action, you'll need to introduce it to Kubernetes. In the resources directory of the distro is a sample broker.yaml file. Copy this to the K8S node, edit the URL, and run kubectl create -f broker.yaml. Note that the URL is from the perspective of Docker, not "localhost/127.0.0.x". The kubectl command will output some information, and you can check the details to make sure all is well with kubectl get clusterservicebrokers cloudify-broker -o yaml.
Now that the broker has been introduced to Kubernetes, you can invoke a service. For this you'll need another resource descriptor file. In the resources directory of the distro is a sample service definition file that you can change for your use case. The external name of the service must match the blueprint name in Cloudify. Use kubectl create -f service.yaml to provision the service. You can watch the Cloudify UI or the log file to see progress.
In this post I took the first step towards creating a usable Kubernetes service broker for Cloudify, including catalog listing and provisioning against a live Cloudify manager. Next time, we'll create a more useful service and actually bind and deprovision it. The source code is available here: https://github.com/dfilppi/service-broker. This is very much in flux, so if you want a known working distro, use the provisionable tag. As always, comments welcome.