---
layout: docs
page_title: Upgrade An Existing Cluster to CRDs
description: Upgrade an existing cluster to use custom resources.
---

# Upgrade An Existing Cluster to CRDs

Upgrading to consul-helm versions >= 0.30.0 requires changes if you use any of the following configuration options:

## Central Config Enabled

If you were previously setting `centralConfig.enabled` to `false`:

```yaml
connectInject:
  centralConfig:
    enabled: false
```

Then you must instead use `client.extraConfig` and `server.extraConfig`:

```yaml
client:
  extraConfig: |
    {"enable_central_service_config": false}
server:
  extraConfig: |
    {"enable_central_service_config": false}
```

If you were previously setting it to `true`, no changes are required because it now defaults to `true`; you can remove the setting from your config if you wish.
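
For example, after updating your values file you would apply the change with a normal Helm upgrade. The release name `consul`, the chart `hashicorp/consul`, and the `values.yaml` filename below are assumptions; substitute your own:

```shell-session
$ helm upgrade consul hashicorp/consul -f values.yaml
```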

## Default Protocol

If you were previously setting:

```yaml
connectInject:
  centralConfig:
    defaultProtocol: 'http' # or any value
```

Now you must use custom resources to manage the protocol for new and existing services:

  1. To upgrade, first ensure you're running Consul >= 1.9.0. See Consul Version Upgrade for more information on how to upgrade Consul versions.

    This version is required to support custom resources.
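
    For example, you can check the version your servers are running before upgrading. The pod name `consul-server-0` matches the chart's default naming; adjust it to your release:

    ```shell-session
    $ kubectl exec consul-server-0 -- consul version
    ```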

  2. Next, modify your Helm values:

    1. Remove the `defaultProtocol` config. This won't affect existing services.
    2. Set:

       ```yaml
       controller:
         enabled: true
       ```

  3. Now you can upgrade your Helm chart to the latest version with the new Helm values.

  4. From now on, any new service will require a ServiceDefaults resource to set its protocol:

    ```yaml
    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ServiceDefaults
    metadata:
      name: my-service-name
    spec:
      protocol: 'http'
    ```

  5. Existing services will maintain their previously set protocol. If you wish to change that protocol, you must migrate that service's service-defaults config entry to a ServiceDefaults resource. See Migrating Config Entries.

-> Note: This setting was removed because it didn't support changing the protocol after a service was first run and because it didn't work in secondary datacenters.
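
Once the `ServiceDefaults` resource from step 4 is written to a file, it is applied like any other Kubernetes manifest; the filename below is arbitrary:

```shell-session
$ kubectl apply -f service-defaults.yaml
$ kubectl get servicedefaults my-service-name
```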

## Proxy Defaults

If you were previously setting:

```yaml
connectInject:
  centralConfig:
    proxyDefaults: |
      {
        "key": "value" // or any values
      }
```

You will need to perform the following steps to upgrade:

  1. You must remove the setting from your Helm values. This won't have any effect on your existing cluster because this config is only read when the cluster is first created.

  2. You can then upgrade the Helm chart.

  3. If you later wish to change any of the proxy defaults settings, you will need to follow the Migrating Config Entries instructions for your proxy-defaults config entry.

    This will require Consul >= 1.9.0.

-> Note: This setting was removed because it couldn't be changed after initial installation.

## Mesh Gateway Mode

If you were previously setting:

```yaml
meshGateway:
  globalMode: 'local' # or any value
```

You will need to perform the following steps to upgrade:

  1. You must remove the setting from your Helm values. This won't have any effect on your existing cluster because this config is only read when the cluster is first created.

  2. You can then upgrade the Helm chart.

  3. If you later wish to change the mode or any other setting in proxy-defaults, you will need to follow the Migrating Config Entries instructions to migrate your proxy-defaults config entry to a ProxyDefaults resource.

    This will require Consul >= 1.9.0.

-> Note: This setting was removed because it couldn't be changed after initial installation.
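
If you do later migrate, the resulting `ProxyDefaults` resource would look something like the sketch below, assuming the mode was `local`; the full procedure is described under Migrating Config Entries:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global
  annotations:
    'consul.hashicorp.com/migrate-entry': 'true'
spec:
  meshGateway:
    mode: local
```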

## `connect-service-protocol` Annotation

If any of your Connect services had the `consul.hashicorp.com/connect-service-protocol` annotation set, for example:

```yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    metadata:
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
        "consul.hashicorp.com/connect-service-protocol": "http"
  ...
```

You will need to perform the following steps to upgrade:

  1. Ensure you're running Consul >= 1.9.0. See Consul Version Upgrade for more information on how to upgrade Consul versions.

    This version is required to support custom resources.

  2. Next, remove this annotation from existing deployments. This will have no effect on the deployments because the annotation was only used when the service was first created.

  3. Modify your Helm values and add:

    ```yaml
    controller:
      enabled: true
    ```

  4. Now you can upgrade your Helm chart to the latest version.

  5. From now on, any new service will require a ServiceDefaults resource to set its protocol:

    ```yaml
    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ServiceDefaults
    metadata:
      name: my-service-name
    spec:
      protocol: 'http'
    ```

  6. Existing services will maintain their previously set protocol. If you wish to change that protocol, you must migrate that service's service-defaults config entry to a ServiceDefaults resource. See Migrating Config Entries.

-> Note: The annotation was removed because it didn't support changing the protocol and it wasn't supported in secondary datacenters.
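
After upgrading the chart with `controller.enabled: true`, you can do a quick sanity check that the Consul CRDs are installed before creating any `ServiceDefaults` resources:

```shell-session
$ kubectl get crd | grep consul.hashicorp.com
```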

## Migrating Config Entries

A config entry that already exists in Consul must be migrated into a Kubernetes custom resource in order to manage it from Kubernetes:

  1. Determine the kind and name of the config entry. For example, the protocol would be set by a config entry with kind `service-defaults` and a name equal to the name of the service.

    In another example, a `proxy-defaults` config has kind `proxy-defaults` and name `global`.
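
    If you're not sure which config entries exist, you can also list them by kind (run via `kubectl exec` as shown in the next step):

    ```shell-session
    $ kubectl exec consul-server-0 -- consul config list -kind service-defaults
    ```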

  2. Once you've determined the kind and name, query Consul to get its contents:

    ```shell-session
    $ consul config read -kind <kind> -name <name>
    ```

    This will require `kubectl exec`'ing into a Consul server or client pod. If you're using ACLs, you will also need an ACL token passed via the `-token` flag.

    For example:

    ```shell-session
    $ kubectl exec consul-server-0 -- consul config read -name foo -kind service-defaults
    {
        "Kind": "service-defaults",
        "Name": "foo",
        "Protocol": "http",
        "MeshGateway": {},
        "Expose": {},
        "CreateIndex": 60,
        "ModifyIndex": 60
    }
    ```
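
    If ACLs are enabled, a minimal sketch of passing a token looks like the following; the secret name `consul-bootstrap-acl-token` and its `token` key are assumptions, so substitute whatever secret holds a token with read access to config entries:

    ```shell-session
    $ TOKEN=$(kubectl get secret consul-bootstrap-acl-token -o jsonpath='{.data.token}' | base64 --decode)
    $ kubectl exec consul-server-0 -- consul config read -kind service-defaults -name foo -token "$TOKEN"
    ```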
  3. Now we're ready to construct a Kubernetes resource for the config entry.

    It will look something like:

    ```yaml
    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ServiceDefaults
    metadata:
      name: foo
      annotations:
        'consul.hashicorp.com/migrate-entry': 'true'
    spec:
      protocol: 'http'
    ```

    1. The `apiVersion` will always be `consul.hashicorp.com/v1alpha1`.

    2. The `kind` will be the CamelCase version of the Consul kind, e.g. `proxy-defaults` becomes `ProxyDefaults`.

    3. `metadata.name` will be the name of the config entry.

    4. `metadata.annotations` will contain the `"consul.hashicorp.com/migrate-entry": "true"` annotation.

    5. The namespace should be whatever namespace the service is deployed in. For `ProxyDefaults`, we recommend the namespace that Consul is deployed in.

    6. The contents of `spec` will be a transformation from JSON keys to YAML keys.

      The following keys can be ignored: `CreateIndex`, `ModifyIndex` and any key that has an empty object, e.g. `"Expose": {}`.

      For example:

      ```json
      {
        "Kind": "service-defaults",
        "Name": "foo",
        "Protocol": "http",
        "MeshGateway": {},
        "Expose": {},
        "CreateIndex": 60,
        "ModifyIndex": 60
      }
      ```

      Becomes:

      ```yaml
      apiVersion: consul.hashicorp.com/v1alpha1
      kind: ServiceDefaults
      metadata:
        name: foo
        annotations:
          'consul.hashicorp.com/migrate-entry': 'true'
      spec:
        protocol: 'http'
      ```

      And

      ```json
      {
        "Kind": "proxy-defaults",
        "Name": "global",
        "MeshGateway": {
          "Mode": "local"
        },
        "Config": {
          "local_connect_timeout_ms": 1000,
          "handshake_timeout_ms": 10000
        },
        "CreateIndex": 60,
        "ModifyIndex": 60
      }
      ```

      Becomes:

      ```yaml
      apiVersion: consul.hashicorp.com/v1alpha1
      kind: ProxyDefaults
      metadata:
        name: global
        annotations:
          'consul.hashicorp.com/migrate-entry': 'true'
      spec:
        meshGateway:
          mode: local
        config:
          # Note that anything under config for ProxyDefaults will use the exact
          # same keys.
          local_connect_timeout_ms: 1000
          handshake_timeout_ms: 10000
      ```

  4. Run `kubectl apply` to apply the Kubernetes resource.
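
    For example, assuming you saved the resource from step 3 to a file (the name `foo-service-defaults.yaml` is arbitrary):

    ```shell-session
    $ kubectl apply -f foo-service-defaults.yaml
    ```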

  5. Next, check that it synced successfully:

    ```shell-session
    $ kubectl get servicedefaults foo
    NAME              SYNCED   AGE
    foo               True     1s
    ```
  6. If its `SYNCED` status is `True`, the migration for this config entry was successful.

  7. If its `SYNCED` status is `False`, use `kubectl describe` to view the reason syncing failed:

    ```shell-session
    $ kubectl describe servicedefaults foo
    ...
    Status:
      Conditions:
        Last Transition Time:  2021-01-12T21:03:29Z
        Message:               migration failed: Kubernetes resource does not match existing Consul config entry: consul={...}, kube={...}
        Reason:                MigrationFailedError
        Status:                False
        Type:                  Synced
    ```

    The most likely reason is that the contents of the Kubernetes resource don't match the Consul resource. Make changes to the Kubernetes resource to match the Consul resource (ignoring the `CreateIndex`, `ModifyIndex` and `Meta` keys).

  8. Once the `SYNCED` status is `True`, you can make changes to the resource and they will be synced to Consul.
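
    For example, a later protocol change for the migrated `foo` service is just an edit to the resource followed by a check that it synced again; this is only a sketch of the workflow:

    ```shell-session
    $ kubectl edit servicedefaults foo   # change spec.protocol, then save
    $ kubectl get servicedefaults foo    # confirm SYNCED is still True
    ```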