Question 8: CiliumNetworkPolicy

Problem Statement

Solve this question on: ssh cks7262

In Namespace team-orange a Default-Allow strategy for all Namespace-internal traffic was chosen. There is an existing CiliumNetworkPolicy default-allow which ensures this and which should not be altered. That policy also allows cluster-internal DNS resolution.

Now it's time to deny and authenticate certain traffic. Create 3 CiliumNetworkPolicies in Namespace team-orange to implement the following requirements:

Create a Layer 3 policy named p1 to:
  • Deny outgoing traffic from Pods with label type=messenger to Pods behind Service database
Create a Layer 4 policy named p2 to:
  • Deny outgoing ICMP traffic from Deployment transmitter to Pods behind Service database
Create a Layer 3 policy named p3 to:
  • Enable Mutual Authentication for outgoing traffic from Pods with label type=database to Pods with label type=messenger
All Pods in the Namespace run plain Nginx images with port 80 open. This allows simple connectivity tests like: k -n team-orange exec POD_NAME -- curl database

Solution

Step 1: Overview

First, let's examine the existing resources in Namespace team-orange:

➜ ssh cks7262 

➜ candidate@cks7262:~$ k -n team-orange get pod --show-labels -owide
NAME                         ...     IP           ...   LABELS
database-0                   ...   10.244.2.13    ...   ...,type=database
messenger-57f557cd65-rhzd7   ...   10.244.1.126   ...   ...,type=messenger
messenger-57f557cd65-xcqwz   ...   10.244.2.70    ...   ...,type=messenger
transmitter-866696fc57-6ccgr ...   10.244.1.152   ...   ...,type=transmitter
transmitter-866696fc57-d8qk4 ...   10.244.2.214   ...   ...,type=transmitter

➜ candidate@cks7262:~$ k -n team-orange get svc,ep
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/database   ClusterIP   10.108.172.58   <none>        80/TCP    8m29s

NAME                 ENDPOINTS        AGE
endpoints/database   10.244.2.13:80   8m29s

This is the existing default-allow policy:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: default-allow
  namespace: team-orange
spec:
  endpointSelector:
    matchLabels: {}             # Apply this policy to all Pods in Namespace team-orange 
  egress:
  - toEndpoints:
    - {}                        # ALLOW egress to all Pods in Namespace team-orange
  - toEndpoints:              
      - matchLabels:
          io.kubernetes.pod.namespace: kube-system
          k8s-app: kube-dns
    toPorts:
      - ports:
          - port: "53"
            protocol: UDP
        rules:
          dns:
            - matchPattern: "*"
  ingress:
  - fromEndpoints:              # ALLOW ingress from all Pods in Namespace team-orange
    - {}

CiliumNetworkPolicies follow the same default-deny semantics as vanilla NetworkPolicies: as soon as one egress rule selects a Pod, all other egress from that Pod is forbidden. The same holds for egressDeny rules: once one exists, all other egress is also forbidden unless explicitly allowed by an egress rule. This is why a Default-Allow policy like the one above is necessary in this scenario. The same behaviour applies to ingress and ingressDeny rules.
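For comparison, the allow-all part of default-allow could be written as a vanilla NetworkPolicy (a sketch, not part of the solution; vanilla policies offer no deny or authentication rules, which is why the requirements of this question need CiliumNetworkPolicies):

```yaml
# Hypothetical vanilla equivalent of the Namespace-internal allow rules only;
# the kube-dns egress rule would additionally need a namespaceSelector.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-allow-vanilla   # hypothetical name
  namespace: team-orange
spec:
  podSelector: {}               # apply to all Pods in the Namespace
  policyTypes: [Ingress, Egress]
  ingress:
  - from:
    - podSelector: {}           # ALLOW ingress from all Pods in the Namespace
  egress:
  - to:
    - podSelector: {}           # ALLOW egress to all Pods in the Namespace
```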

Step 2: Policy 1 - Layer 3 Policy

First, let's check the current connectivity from a type=messenger Pod to the Service database:

➜ candidate@cks7262:~$ k -n team-orange exec messenger-57f557cd65-rhzd7 -- curl -m 2 database
...
<title>Welcome to nginx!</title>
...

Now, let's create the first policy to deny this traffic:

➜ candidate@cks7262:~$ vim 8_p1.yaml
# cks7262:~/8_p1.yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: p1
  namespace: team-orange
spec:
  endpointSelector:
    matchLabels:
      type: messenger
  egressDeny:
  - toEndpoints:
    - matchLabels:
        type: database  # we use the label of the Pods behind the Service "database"

➜ candidate@cks7262:~$ k -f 8_p1.yaml apply
ciliumnetworkpolicy.cilium.io/p1 created

Let's verify the policy works by testing connectivity:

➜ candidate@cks7262:~$ k -n team-orange exec messenger-57f557cd65-rhzd7 -- curl -m 2 --head database
curl: (28) Resolving timed out after 2002 milliseconds
command terminated with exit code 28
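Because p1 is a Layer 3 policy, it denies all protocols between the selected endpoints. Purely to illustrate the L3 vs L4 distinction (a hypothetical variant, not required by the task), the same deny narrowed to TCP port 80 would look like this:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: p1-l4-variant           # hypothetical name, not part of the solution
  namespace: team-orange
spec:
  endpointSelector:
    matchLabels:
      type: messenger
  egressDeny:
  - toEndpoints:
    - matchLabels:
        type: database
    toPorts:                    # restricts the deny to Layer 4
    - ports:
      - port: "80"
        protocol: TCP
```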

Step 3: Policy 2 - Layer 4 Policy

First, let's verify that ICMP currently works:

➜ candidate@cks7262:~$ k -n team-orange exec transmitter-866696fc57-6ccgr -- ping 10.244.2.13
PING 10.244.2.13 (10.244.2.13): 56 data bytes
64 bytes from 10.244.2.13: seq=0 ttl=63 time=2.555 ms
64 bytes from 10.244.2.13: seq=1 ttl=63 time=0.102 ms
...

Now, let's create the second policy to deny ICMP traffic:

➜ candidate@cks7262:~$ vim 8_p2.yaml
# cks7262:~/8_p2.yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: p2
  namespace: team-orange
spec:
  endpointSelector:
    matchLabels:
      type: transmitter
  egressDeny:
  - toEndpoints:
    - matchLabels:
        type: database
    icmps:
    - fields:
      - type: 8
        family: IPv4
      - type: EchoRequest
        family: IPv6

➜ candidate@cks7262:~$ k -f 8_p2.yaml apply
ciliumnetworkpolicy.cilium.io/p2 created

Let's verify the policy works:

➜ candidate@cks7262:~$ k -n team-orange exec transmitter-866696fc57-6ccgr -- ping -w 2 10.244.2.13
PING 10.244.2.13 (10.244.2.13): 56 data bytes

--- 10.244.2.13 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
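A note on the fields entries in p2: Cilium accepts both numeric ICMP types and a small set of names. ICMPv4 Echo Request is type 8 and ICMPv6 Echo Request is type 128, so the same rule could also be written fully numerically (a sketch with the same effect):

```yaml
icmps:
- fields:
  - type: 8            # ICMPv4 Echo Request
    family: IPv4
  - type: 128          # ICMPv6 Echo Request (numeric form of EchoRequest)
    family: IPv6
```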

Step 4: Policy 3 - Mutual Authentication

Let's create the final policy to enable mutual authentication:

➜ candidate@cks7262:~$ vim 8_p3.yaml
# cks7262:~/8_p3.yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: p3
  namespace: team-orange
spec:
  endpointSelector:
    matchLabels:
      type: database
  egress:
  - toEndpoints:
    - matchLabels:
        type: messenger
    authentication:
      mode: "required"     # Enable Mutual Authentication

➜ candidate@cks7262:~$ k -f 8_p3.yaml apply
ciliumnetworkpolicy.cilium.io/p3 created
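Note that mode: "required" only takes effect if mutual authentication is enabled in the Cilium installation itself. As a sketch (assuming a Helm-managed Cilium 1.14+, which the task does not state), the relevant Helm values would be:

```yaml
# Helm values sketch (assumption: Helm-managed Cilium); enables the
# mutual-auth handshake backed by a Cilium-managed SPIRE server.
authentication:
  mutual:
    spire:
      enabled: true
      install:
        enabled: true
```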

Let's verify all policies are in place:

➜ candidate@cks7262:~$ k -n team-orange get cnp
NAME            AGE
default-allow   126m
p1              11m
p2              11m
p3              8s

The policies have been successfully implemented:
  • Policy p1 denies outgoing traffic from messenger Pods to database Pods
  • Policy p2 denies ICMP traffic from transmitter Pods to database Pods
  • Policy p3 enables mutual authentication for traffic from database Pods to messenger Pods