
Birthday attacks against TLS ciphers with 64bit block size vulnerability (Sweet32) #9496

Closed
subudear opened this issue Jul 5, 2020 · 11 comments
Labels: lifecycle/rotten

subudear commented Jul 5, 2020

1. What kops version are you running? The command kops version will display this information.

version 1.16.0 (git-4b0e62b82)

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.10", GitCommit:"1bea6c00a7055edef03f1d4bb58b773fa8917f11", GitTreeState:"clean", BuildDate:"2020-02-11T20:05:26Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?

AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
Using cluster_template.yaml, I created the cluster with kops and added tlsCipherSuites and tlsMinVersion, but the Qualys scan still reports the Sweet32 vulnerability. Also, the command below can reach kube-apiserver, kube-proxy, kube-scheduler, kubelet, kube-controller and etcd-manager over TLSv1 and TLSv1.1, while Qualys requires that only TLSv1.2 be allowed:

openssl s_client -connect X.X.X.X:3996 -tls1

This should fail, but it does not (a probe over all the affected ports is sketched below).
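To check every affected port in one pass, here is a minimal shell sketch (X.X.X.X stands for the redacted host; the port list is taken from the report further down):

# Probe each control-plane port with TLS 1.0 and TLS 1.1.
# A hardened endpoint should reject both handshakes.
HOST=X.X.X.X
for port in 443 10250 10257 10259 3996 3997; do
  for proto in tls1 tls1_1; do
    if echo | openssl s_client -connect "${HOST}:${port}" -${proto} >/dev/null 2>&1; then
      echo "${HOST}:${port} -${proto}: handshake accepted (vulnerable)"
    else
      echo "${HOST}:${port} -${proto}: handshake rejected (ok)"
    fi
  done
done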


kubeScheduler:
  tlsMinVersion: VersionTLS12
  tlsCipherSuites:
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
  - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
  - TLS_RSA_WITH_AES_128_GCM_SHA256
  - TLS_RSA_WITH_AES_256_GCM_SHA384
  - TLS_RSA_WITH_AES_128_CBC_SHA
  - TLS_RSA_WITH_AES_256_CBC_SHA
kubeControllerManager:
  tlsMinVersion: VersionTLS12
  tlsCipherSuites:
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
  - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
  - TLS_RSA_WITH_AES_128_GCM_SHA256
  - TLS_RSA_WITH_AES_256_GCM_SHA384
  - TLS_RSA_WITH_AES_128_CBC_SHA
  - TLS_RSA_WITH_AES_256_CBC_SHA
kubeAPIServer:
  tlsMinVersion: VersionTLS12
  tlsCipherSuites:
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
  - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
  - TLS_RSA_WITH_AES_128_GCM_SHA256
  - TLS_RSA_WITH_AES_256_GCM_SHA384
  - TLS_RSA_WITH_AES_128_CBC_SHA
  - TLS_RSA_WITH_AES_256_CBC_SHA
kubelet:
  anonymousAuth: false
  authenticationTokenWebhook: true
  authorizationMode: Webhook
  tlsMinVersion: VersionTLS12
  tlsCipherSuites:
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
  - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
  - TLS_RSA_WITH_AES_128_GCM_SHA256
  - TLS_RSA_WITH_AES_256_GCM_SHA384
  - TLS_RSA_WITH_AES_128_CBC_SHA
  - TLS_RSA_WITH_AES_256_CBC_SHA
kubernetesApiAccess:
- x.x.x.x/16
kubernetesVersion: 1.15.10
masterPublicName: api.{{.cluster_name.value}}
networkCIDR: {{.vpc_cidr_block.value}}
networkID: {{.vpc_id.value}}
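For context, kops renders these fields into the corresponding component flags, so the generated manifests should end up with arguments along these lines (a sketch; the exact rendering can differ between kops versions):

# Expected kube-apiserver arguments produced from the spec above (sketch):
--tls-min-version=VersionTLS12
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA

Note that the four *_CBC_SHA suites in this list are also valid under TLS 1.0/1.1, so the cipher list alone does not force TLS 1.2; it is the tls-min-version setting that rejects older handshakes.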

5. What happened after the commands executed?

kube-apiserver.manifest shows that tlsMinVersion and tlsCipherSuites are set, but openssl can still use TLSv1 to connect to the Kubernetes services on their respective ports:
kube-scheduler - 10259
kubelet - 10250
kube-controller - 10257
kube-apiserver - 443
etcd-manager - 3996 and 3997

6. What did you expect to happen?

We want to fix the SWEET32 vulnerability detected by the Qualys scan.
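As background, Sweet32 specifically targets 64-bit block ciphers (3DES, Blowfish). A scan such as the following (assuming nmap with its standard ssl-enum-ciphers script is available) lists which suites and protocol versions each port actually offers, which helps confirm whether any DES/3DES suite is still negotiable:

nmap --script ssl-enum-ciphers -p 443,10250,10257,10259,3996,3997 X.X.X.X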

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  name: XXX
spec:
  api:
    loadBalancer:
      type: Internal
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://XXX
  dnsZone: xxx
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-ap-southeast-2a
      name: a
    - instanceGroup: master-ap-southeast-2b
      name: b
    - instanceGroup: master-ap-southeast-2c
      name: c
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-ap-southeast-2a
      name: a
    - instanceGroup: master-ap-southeast-2b
      name: b
    - instanceGroup: master-ap-southeast-2c
      name: c
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    tlsCipherSuites:
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
    - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
    - TLS_RSA_WITH_AES_128_GCM_SHA256
    - TLS_RSA_WITH_AES_256_GCM_SHA384
    - TLS_RSA_WITH_AES_128_CBC_SHA
    - TLS_RSA_WITH_AES_256_CBC_SHA
    tlsMinVersion: VersionTLS12
  kubeControllerManager:
    tlsCipherSuites:
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
    - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
    - TLS_RSA_WITH_AES_128_GCM_SHA256
    - TLS_RSA_WITH_AES_256_GCM_SHA384
    - TLS_RSA_WITH_AES_128_CBC_SHA
    - TLS_RSA_WITH_AES_256_CBC_SHA
    tlsMinVersion: VersionTLS12
  kubeDNS:
    provider: CoreDNS
  kubeScheduler: {}
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
    tlsCipherSuites:
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
    - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
    - TLS_RSA_WITH_AES_128_GCM_SHA256
    - TLS_RSA_WITH_AES_256_GCM_SHA384
    - TLS_RSA_WITH_AES_128_CBC_SHA
    - TLS_RSA_WITH_AES_256_CBC_SHA
    tlsMinVersion: VersionTLS12
  kubernetesApiAccess:
  - xxxxxx
  kubernetesVersion: 1.15.10
  masterPublicName: xxx
  networkCIDR: xxxxx
  networkID: xxxx
  networking:
    weave:
      mtu: 8912
  nonMasqueradeCIDR: xxx
  sshAccess:
  subnets:
  - egress: xxx
    id: xx
    name: xx
    type: Private
    zone: ap-southeast-2a
  - egress: xxf
    id: xx
    name: xx
    type: Private
    zone: ap-southeast-2b
  - egress: xx
    id: sxx
    name: xx
    type: Private
    zone: xx
  - id: sxx
    name: uxxx
    type: Utility
    zone: xx
  - id: sxx
    name: xx
    type: Utility
    zone: xx
  - id: sxx
    name: xx
    type: Utility
    zone: xxc
  topology:
    bastion:
      bastionPublicName: xxx
    dns:
      type: Private
    masters: private
    nodes: private

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-07-03T03:57:45Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: xxx
  name: bastions
spec:
  associatePublicIp: true
  cloudLabels:
    JiraTicket: N/A
  image: xx
  machineType: t2.small
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: bastions
  role: Bastion
  subnets:
  - utility.xx

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-07-03T03:57:45Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: xxx
  name: masterxxx
spec:
  image: kope.io/xxx
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: xx
  role: Master
  subnets:
  - prxxx

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-07-03T03:57:45Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: xx
  name: mastxxx
spec:
  image: kope.io/xxx
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: xx
  role: Master
  subnets:
  - pxxx

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-07-03T03:57:45Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: xxx
  name: mastxxx
spec:
  image: kope.io/xxx
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: xx
  role: Master
  subnets:
  - xxx

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-07-03T03:57:46Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: xxx
  name: nodes
spec:
  image: kope.io/xxxx
  machineType: t2.large
  maxSize: 3
  minSize: 3
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - xxxx

8. Please run the commands with the most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

openssl s_client -connect xxx.xxx.xxx.xxx:443 -tls1_1


CONNECTED(00000003)
139811536363584:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:../ssl/record/rec_layer_s3.c:1407:SSL alert number 70

no peer certificate available

No client certificate CA names sent

SSL handshake has read 7 bytes and written 102 bytes
Verification: OK

New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.1
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1593934012
Timeout : 7200 (sec)
Verify return code: 0 (ok)
Extended master secret: no


9. Anything else do we need to know?

We want to fix this Qualys-reported vulnerability. We checked a number of GitHub threads that discuss a fix, but it is not working for us:
#6470
#5715
kubernetes/kubernetes#81145 (comment)
https://github.com/rochacon/kops/blob/6532ecf3779c25ae7e77216154f83e60c8d64d86/pkg/apis/kops/cluster.go
k3s-io/k3s#1765
https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#feature-gates

@subudear (Author)

Do we have a kops patch to fix the SWEET32 issue? Any idea which version of kops will fix this vulnerability?

@subudear (Author)

kops does not provide an option to set the TLS minimum version and cipher suites for etcd-manager. Any idea when this will be made available?

@rifelpet (Member)

kops 1.18 will support specifying environment variables passed to etcd, and many etcd settings support environment variables, like ETCD_CIPHER_SUITES. Is that sufficient for your use case?
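Presumably that would look along these lines once 1.18 is out (a sketch against the kops etcd-manager spec; the env field is the new mechanism referred to above, and the suite list here is only an example):

etcdClusters:
- name: main
  manager:
    env:
    # ETCD_CIPHER_SUITES is read by etcd itself; listing only TLS 1.2
    # AES-GCM suites leaves nothing negotiable under TLS 1.0/1.1.
    - name: ETCD_CIPHER_SUITES
      value: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384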

subudear (Author) commented Aug 2, 2020

Thanks @rifelpet for the confirmation. Will it also allow setting the TLS minimum version? Right now an openssl connection can be made using TLSv1 or TLSv1.1.

Will it also allow kube-scheduler to set the TLS minimum version and cipher suites, similar to the settings available for kube-apiserver and kubelet?

Not sure whether anyone has raised it, but Qualys scans are failing for these services, showing vulnerabilities like SWEET32, triple handshake and TLSv1.0 usage. This hampers the PCI compliance requirements for production setups.

rifelpet (Member) commented Aug 2, 2020

I think this issue suggests you can force TLS 1.2 by specifying a specific list of ciphers, which means the ETCD_CIPHER_SUITES env var would be sufficient.
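That matches how Go's TLS stack behaves: the TLS 1.2 AES-GCM suites simply do not exist in TLS 1.0/1.1, so with only those suites permitted there is nothing left to negotiate on an older handshake. Once ETCD_CIPHER_SUITES is restricted that way, the original reproduction should fail (a quick check, host redacted as before):

# Both of these should now end in a handshake alert instead of a session:
echo | openssl s_client -connect X.X.X.X:3996 -tls1
echo | openssl s_client -connect X.X.X.X:3996 -tls1_1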

subudear (Author) commented Aug 3, 2020

If kops 1.18 is going to use etcd 3.4, then I believe this will help our use case and solve the Qualys-reported vulnerabilities. Thanks.

@olemarkus (Member)

I think a related question here is also what we should set by default. Should we be as relaxed as the etcd defaults here, or can we be stricter?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Nov 1, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Dec 1, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
