Note: this tutorial requires enabling custom metrics, which is described in Monitor custom metrics in Dynatrace. Please make sure to finish that part first before continuing.
Kyma already ships a built-in Prometheus instance for easy access to pre-defined metrics such as typical Kubernetes and Istio metrics. Workloads can expose custom metrics on top to increase their observability. However, the bundled Prometheus of Kyma cannot collect custom metrics because the amount of data is unpredictable. It is recommended to either use a self-managed instance or export the data to an external tool like Dynatrace.
In this chapter we will show how to deploy a custom Prometheus instance that scrapes custom metrics and, at the same time, collects metrics from the built-in Prometheus. It consolidates metrics from different sources into a single Prometheus instance. In addition, we will forward the metrics from the custom Prometheus instance to external tooling, such as the Grafana Cloud service or a custom Grafana instance in your Kyma cluster.
Among other options, Prometheus can easily be installed using (1) the Prometheus operator (kube-prometheus-stack) or (2) a typical Kubernetes Deployment or StatefulSet. The operator is usually used for installations that need to be scalable and highly available. Keep in mind that Kyma uses the prometheus-operator internally: if you want to deploy your own instance of the prometheus-operator, deny the "kyma-system" namespace (refer to kyma-project/kyma#14379). For simplicity, a native deployment via Helm chart is used in this tutorial.
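If you do go with the operator-based installation instead, the kube-prometheus-stack chart can be told to stay away from the kyma-system namespace. The snippet below is only a sketch: the denyNamespaces key is our assumption about the upstream chart's values and should be verified against the chart version you install.

# Hypothetical values excerpt for kube-prometheus-stack (not used in this tutorial)
prometheusOperator:
  denyNamespaces:
    - kyma-system   # keep the custom operator away from Kyma's internal monitoring stack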
Follow the steps below to install a custom Prometheus instance in your Kyma cluster. In our case, it will be installed in the default namespace.
# First, make sure the default namespace has Istio sidecar injection enabled, see https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/smsh-01-istio-enable-sidecar-injection
kubectl label namespace default istio-injection=enabled
# add the Prometheus helm chart repo to your local helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
During installation we will disable the unnecessary components of the chart and add two scrape jobs to collect additional metrics. A prometheus_values.yaml file with custom values for Prometheus is provided for this purpose. You can take a look at it to understand what has been changed.
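For orientation, the sketch below illustrates what such a values file for the prometheus-community/prometheus chart could look like. The exact keys depend on the chart version, and the scrape targets (the day2-service endpoint and Kyma's built-in Prometheus service) are illustrative assumptions, not the literal content of prometheus_values.yaml.

# Illustrative sketch only - key names and targets are assumptions
alertmanager:
  enabled: false                 # alerting is not needed for this tutorial
prometheus-pushgateway:
  enabled: false                 # pushgateway is not used
extraScrapeConfigs: |
  # job 1: scrape the custom metrics endpoint of the workload (hypothetical target)
  - job_name: day2-service
    metrics_path: /metrics
    static_configs:
      - targets:
          - day2-service.default.svc.cluster.local:8080
  # job 2: federate metrics from Kyma's built-in Prometheus (assumed service name)
  - job_name: kyma-federation
    metrics_path: /federate
    params:
      match[]:
        - '{job!=""}'
    static_configs:
      - targets:
          - monitoring-prometheus.kyma-system.svc.cluster.local:9090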
Run the following command to install Prometheus with your custom value file.
helm install -f prometheus_values.yaml myprometheus prometheus-community/prometheus
You can then run the following command to access Prometheus at http://localhost:9091/. In the Prometheus UI, select Status > Targets; you should see that both jobs are running and healthy.
kubectl port-forward svc/myprometheus-server 9091:80
We will expose the metrics of the custom Prometheus to external tooling. In this example, two variants are demonstrated:
- Variant 1: In-cluster Grafana: a custom Grafana will be installed in the same Kyma cluster to enable access to the Prometheus metrics.
- Variant 2: Grafana Cloud: Grafana Cloud is a SaaS platform integrating metrics, traces, and logs with Grafana. This variant will help you understand how third-party monitoring tools outside of your cluster can access metrics of your custom workload in the Kyma cluster.
We will use Grafana Helm chart to install Grafana in the Kyma cluster.
# add Grafana helm chart repo to your local helm repo
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
We will add our custom Prometheus server as the default datasource for Grafana. A grafana_values.yaml file with custom values for Grafana is provided for this purpose. You can take a look at it to understand what has been changed.
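For orientation, a datasource definition in the Grafana chart values typically looks like the sketch below. Treat it as an illustration rather than the exact content of grafana_values.yaml; the service URL assumes the Prometheus release name and default namespace used in the previous step.

# Illustrative sketch only - mirrors the datasource provisioning structure of the Grafana chart
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://myprometheus-server.default.svc.cluster.local   # assumed release name and namespace from above
        access: proxy
        isDefault: true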
Run the following command from the code/day2-operations/deployment/k8s/ folder to install the Helm chart with your custom values.
helm install -f grafana_values.yaml mygrafana grafana/grafana
Run the command below and access your Grafana instance at http://localhost:3000:
kubectl port-forward svc/mygrafana 3000:80
For login credentials, the user is admin and the password can be retrieved with the following command:
kubectl get secret --namespace default mygrafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
Then, navigate to Explore from the left sidebar and see what metrics are available. The following screenshot shows an example of custom metric db_number_idle_connections_all_users from day2-service.
In addition, you can also navigate to Configuration to check your Prometheus datasource configuration.
To add the custom Prometheus instance in the Kyma cluster as a datasource in Grafana Cloud, we first need to enable external access to the myprometheus-server service.
NOTE: The following approach exposes the Prometheus API in an insecure way; please consider adding appropriate authentication.
- Set the environment variable DOMAIN:
export DOMAIN=$(kubectl config view --minify -o jsonpath='{.clusters[].cluster.server}' | sed -E 's_^https?://api.__')
- Run the following commands to expose the service:
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myprometheus-expose
spec:
  gateways:
    - kyma-system/kyma-gateway
  hosts:
    - myprometheus-server.$DOMAIN
  http:
    - match:
        - uri:
            regex: /.*
      route:
        - destination:
            host: myprometheus-server
            port:
              number: 80
EOF
- Show your Prometheus service URL with the command echo https://myprometheus-server.$DOMAIN. You will need the URL later to configure the datasource in Grafana.
- To create a free trial account on Grafana Cloud, see Grafana Labs and follow the onscreen process. Once the account is created, open the Grafana instance by choosing Launch.
- Inside the Grafana UI, navigate to the datasource settings.
- Choose Add data source of type Prometheus.
- Add a Prometheus datasource name and the Prometheus URL exposed earlier.
Now you should be able to explore the metrics in Grafana Cloud as usual. The following screenshot shows the custom metric db_max_pool_size, which is scraped from day2-service.