Goal: Leverage network policies to segment connections within the Kubernetes cluster and prevent known bad actors from accessing the workloads.
- Test connectivity between application components and across application stacks.

a. Add `curl` to the `loadgenerator` component to run test commands. This step is only needed if you're using a `boutiqueshop` version in which the `loadgenerator` component doesn't include the `curl` or `wget` utility. Note that packages added to a running pod are removed as soon as the pod is restarted.

```bash
# update package index in the loadgenerator pod
kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -- sh -c 'apt-get update'
# install curl utility
kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -- sh -c 'apt-get install -y curl'
```
b. Test connectivity between workloads within each namespace.

```bash
# test connectivity within dev namespace
kubectl -n dev exec -t centos -- sh -c 'curl -m3 -sI http://nginx-svc 2>/dev/null | grep -i http'
# test connectivity within default namespace
kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -- sh -c 'curl -m3 -sI frontend 2>/dev/null | grep -i http'
```
c. Test connectivity across namespaces.

```bash
# test connectivity from dev namespace to default namespace
kubectl -n dev exec -t centos -- sh -c 'curl -m3 -sI http://frontend.default 2>/dev/null | grep -i http'
# test connectivity from default namespace to dev namespace
kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -- sh -c 'curl -m3 -sI http://nginx-svc.dev 2>/dev/null | grep -i http'
```
d. Test connectivity from each namespace to the Internet.

```bash
# test connectivity from dev namespace to the Internet
kubectl -n dev exec -t centos -- sh -c 'curl -m3 -sI http://www.google.com 2>/dev/null | grep -i http'
# test connectivity from default namespace to the Internet
kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -- sh -c 'curl -m3 -sI www.google.com 2>/dev/null | grep -i http'
```
All of these tests should succeed if there are no policies in place to govern the traffic for the `dev` and `default` namespaces.

- Apply a staged `default-deny` policy.

A staged `default-deny` policy is a good way of catching any traffic that is not explicitly allowed by a policy, without actually blocking it.

```bash
kubectl apply -f demo/10-security-controls/staged.default-deny.yaml
```
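For reference, the exact contents of `staged.default-deny.yaml` are defined by the demo repo, but a staged global default-deny scoped to the two namespaces typically looks along these lines (the name, `order` value, and selector are illustrative and may differ from the actual manifest):

```yaml
apiVersion: projectcalico.org/v3
kind: StagedGlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  # low-precedence policy: evaluated after any explicit allow policies
  order: 2000
  # limit the deny to the workshop namespaces
  selector: "projectcalico.org/namespace in {'dev', 'default'}"
  types:
  - Ingress
  - Egress
```

Because no `Allow` rules are listed, any connection not matched by a higher-precedence policy would be denied once this policy is enforced.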
You should be able to view the potential effect of the staged `default-deny` policy if you navigate to the `Dashboard` view in the Enterprise Manager UI and look at the `Packets by Policy` histogram.

```bash
# make requests across namespaces and watch the Packets by Policy histogram
for i in {1..10}; do kubectl -n dev exec -t centos -- sh -c 'curl -m3 -sI http://frontend.default 2>/dev/null | grep -i http'; sleep 2; done
```
The staged policy does not affect the traffic directly but allows you to view the policy impact if it were to be enforced.
- Apply network policies to control East-West traffic.

```bash
# deploy dev policies
kubectl apply -f demo/dev/policies.yaml
# deploy boutiqueshop policies
kubectl apply -f demo/boutiqueshop/policies.yaml
```
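The contents of `demo/dev/policies.yaml` come from the demo repo, but to illustrate the kind of East-West rule involved: a Calico `NetworkPolicy` that lets only the `centos` pod reach the `nginx` pods in the `dev` namespace could be sketched as below (the policy name and pod labels are assumptions based on the connectivity tests above):

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: nginx
  namespace: dev
spec:
  # apply this policy to the nginx pods
  selector: app == "nginx"
  types:
  - Ingress
  ingress:
  # allow only the centos test pod to open connections to nginx
  - action: Allow
    protocol: TCP
    source:
      selector: app == "centos"
```

With rules like this in place for each component, the global default-deny can be enforced without breaking legitimate in-namespace traffic.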
Now that we have proper policies in place, we can enforce the `default-deny` policy, moving closer to a zero-trust security approach. You can either enforce the already deployed staged `default-deny` policy using the `Policies Board` view in the Enterprise Manager UI, or apply an enforcing `default-deny` policy manifest.

```bash
# apply enforcing default-deny policy manifest
kubectl apply -f demo/10-security-controls/default-deny.yaml
# you can now delete the staged default-deny policy
kubectl delete -f demo/10-security-controls/staged.default-deny.yaml
```
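The enforcing manifest is typically identical to the staged one except for the resource kind. A sketch, under the same illustrative names and values as above (the actual `default-deny.yaml` in the demo repo may differ):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  # same rules as the staged policy, now actually enforced
  order: 2000
  selector: "projectcalico.org/namespace in {'dev', 'default'}"
  types:
  - Ingress
  - Egress
```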
- Test connectivity with policies in place.

a. Only the connections between components within each namespace that are explicitly allowed by the policies should succeed.

```bash
# test connectivity within dev namespace
kubectl -n dev exec -t centos -- sh -c 'curl -m3 -sI http://nginx-svc 2>/dev/null | grep -i http'
# test connectivity within default namespace
kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -- sh -c 'curl -m3 -sI frontend 2>/dev/null | grep -i http'
```
b. The connections across the `dev` and `default` namespaces should be blocked by the global `default-deny` policy.

```bash
# test connectivity from dev namespace to default namespace
kubectl -n dev exec -t centos -- sh -c 'curl -m3 -sI http://frontend.default 2>/dev/null | grep -i http'
# test connectivity from default namespace to dev namespace
kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -- sh -c 'curl -m3 -sI http://nginx-svc.dev 2>/dev/null | grep -i http'
```
c. The connections to the Internet should be blocked by the configured policies.

```bash
# test connectivity from dev namespace to the Internet
kubectl -n dev exec -t centos -- sh -c 'curl -m3 -sI http://www.google.com 2>/dev/null | grep -i http'
# test connectivity from default namespace to the Internet
kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -- sh -c 'curl -m3 -sI www.google.com 2>/dev/null | grep -i http'
```
- Protect workloads from known bad actors.

Calico offers the `GlobalThreatFeed` resource to prevent known bad actors from accessing Kubernetes pods.

a. Deploy a threat feed and a policy for it.

```bash
# deploy feodo tracker threat feed
kubectl apply -f demo/10-security-controls/feodotracker.threatfeed.yaml
# deploy network policy that uses the threat feed
kubectl apply -f demo/10-security-controls/feodo-block-policy.yaml
```
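For orientation, a `GlobalThreatFeed` pulls a list of IPs over HTTP and publishes them as a labeled `GlobalNetworkSet`, which a policy can then select. A rough sketch of the two manifests (the label, policy name, `order` value, and selectors are illustrative; the actual manifests in the demo repo may differ):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalThreatFeed
metadata:
  name: feodo-tracker
spec:
  pull:
    http:
      # Feodo tracker botnet C2 IP blocklist
      url: https://feodotracker.abuse.ch/downloads/ipblocklist.txt
  globalNetworkSet:
    # label attached to the generated GlobalNetworkSet
    labels:
      threatfeed: feodo
---
# a global policy can then deny egress to any IP in that set
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: block-feodo
spec:
  order: 210
  selector: projectcalico.org/namespace != "kube-system"
  types:
  - Egress
  egress:
  - action: Deny
    destination:
      selector: threatfeed == "feodo"
  - action: Allow
```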
b. Simulate access to threat feed endpoints.

```bash
# try to ping one of the IPs from the feodo tracker list
IP=$(kubectl get globalnetworkset threatfeed.feodo-tracker -ojson | jq .spec.nets[0] | sed -e 's/^"//' -e 's/"$//' -e 's/\/32//')
kubectl -n dev exec -t centos -- sh -c "ping -c1 $IP"
```
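To see what the `IP=$(...)` extraction above does, here is the `sed` portion of the pipeline run against a literal value of the kind `jq .spec.nets[0]` prints, i.e. a quoted CIDR string (the sample address is made up):

```shell
# strip the surrounding quotes and the /32 suffix from a quoted CIDR
echo '"64.44.101.10/32"' | sed -e 's/^"//' -e 's/"$//' -e 's/\/32//'
# prints: 64.44.101.10
```

Using `jq -r .spec.nets[0]` instead would emit the string without quotes, making the first two `sed` expressions unnecessary.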
c. Review the results.

Navigate to the Calico Enterprise `Manager UI` -> `Alerts` view to see the results.

- [Bonus task] Monitor the use of Tor exits by pods.

Calico leverages the `GlobalThreatFeed` resource to monitor the use of Tor (The Onion Router) exits, which provide the means to establish anonymous connections. You can configure Tor-VPN feed types to capture any attempts from within your cluster to use such exits.

a. Configure the Tor bulk exit feed.
```bash
kubectl apply -f https://docs.tigera.io/manifests/threatdef/tor-exit-feed.yaml
```
b. Simulate an attempt to use a Tor exit.

```bash
# try to ping one of the IPs from the Tor bulk exit list
IP=$(kubectl get globalnetworkset threatfeed.tor-bulk-exit-list -ojson | jq .spec.nets[0] | sed -e 's/^"//' -e 's/"$//' -e 's/\/32//')
kubectl -n dev exec -t centos -- sh -c "ping -c1 $IP"
```
c. View the attempts to use Tor exits in the Kibana dashboard.

Navigate to `Kibana` -> `Dashboard` -> `Tigera Secure EE Tor-VPN Logs` dashboard to view the results.