Solving a Kubernetes CTF: K8s LAN Party

Arnav Tripathy
9 min read · Apr 13, 2024


Home page of the CTF website

Recently I came across a Kubernetes CTF named K8s LAN Party which grabbed my attention. Being a Kubernetes enthusiast, I decided to test my knowledge and started solving the challenges. In this blog, I will walk through how I solved each challenge, along with a recommendation on how each vulnerability could be prevented in real life. No shame in admitting I used many of the provided hints :p. Let's get started:

Challenge 1: Recon

Description: You have shell access to a compromised Kubernetes pod at the bottom of this page, and your next objective is to further compromise other internal services.

As a warmup, utilize DNS scanning to uncover hidden internal services and obtain the flag. We have loaded your machine with dnscan to ease this process for further challenges.

Solution: We need to find the IPs in use inside the cluster. We can find one using the env command, since Kubernetes injects service information into every pod's environment variables.

IP found: 10.100.0.1

Now that we found an IP, let's run dnscan to locate services in the cluster. I studied the code given, and before running the scan I had to locate the binary using find.

Binary location: /usr/bin/dnscan

Run a dnscan:

Subnet: 10.100.0.0/16

Why /16? From experience setting up clusters from scratch, that is usually the CIDR assigned to pods, so it was an educated guess.
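As a quick sanity check on that guess, a bit of shell arithmetic shows how large a /16 actually is, which is why it is a comfortable default for pod ranges:

```shell
# A /16 mask leaves 32 - 16 = 16 host bits, i.e. 2^16 addresses,
# a common default size for the pod CIDR in self-managed clusters.
echo $((1 << 16))   # prints 65536
```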

We have discovered a service named getflag-service. Using curl, I was able to extract the flag from the service as shown below:

Flag for challenge 1 :)

Security recommendation: Use Kubernetes network policies to ensure that no arbitrary pod can query any service in the cluster.
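As a minimal sketch (the namespace is taken from the CTF; any real deployment would use its own), a default-deny egress policy blocks this kind of blind service scanning; DNS and legitimate destinations then have to be re-allowed explicitly:

```yaml
# Hypothetical sketch: deny all egress from pods in the namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: k8s-lan-party
spec:
  podSelector: {}   # selects every pod in the namespace
  policyTypes:
  - Egress          # no egress rules listed, so all egress is denied
```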

Challenge 2: Finding Neighbours

Description: Sometimes, it seems we are the only ones around, but we should always be on guard against invisible sidecars reporting sensitive secrets.

Solution: I immediately knew this had something to do with a sidecar container that our pod is most likely a part of, but I couldn't think of anything beyond that. So I took a hint:

Hint #1 for challenge 2

While I was already aware of this, the hint highlights the shared network namespace, which suggests something to do with an internal service or internal networking.
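For context, sidecar containers live in the same pod and therefore share its network namespace. A minimal sketch of such a pod (names and images are arbitrary, not from the challenge):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo   # illustrative only
spec:
  containers:
  - name: app
    image: nginx:latest    # listens on port 80
  - name: sidecar
    image: busybox:latest
    # from this container, "wget -qO- 127.0.0.1:80" reaches the app container,
    # because both containers share one network namespace
    command: ["sh", "-c", "sleep 3600"]
```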

My first thought was to check whether any internal ports were open and curl them: since sidecar containers share the same network namespace and interfaces, they can exchange data over 127.0.0.1. I ran ps aux but couldn't find any new service inside the container. Feeling hopeless, I took the other hint:

Hint #2 for challenge 2

This was a proper facepalm moment for me :( . I knew that sidecars share network interfaces, but it never occurred to me that I should sniff traffic. Without wasting further time, I ran tcpdump to fetch the flag. Note: make sure to use the -A flag so the packet payloads are printed as ASCII.

Flag for Challenge 2 :)

Security recommendation: Ensure pod-to-pod communication is always encrypted. The simplest way to get started with encrypting pod communication is a service mesh.
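With Istio, for instance, mesh-wide mutual TLS can be enforced with a single PeerAuthentication resource (a minimal sketch; applying it in the root namespace makes it mesh-wide):

```yaml
# Sketch: require mTLS for all workload-to-workload traffic in the mesh,
# so sniffed packets carry ciphertext rather than plaintext secrets.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the mesh root namespace
spec:
  mtls:
    mode: STRICT            # plaintext connections are rejected
```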

Challenge 3: Data Leakage

Description: The targeted big corp utilizes outdated, yet cloud-supported technology for data storage in production. But oh my, this technology was introduced in an era when access control was only network-based 🤦‍️.

Solution: My immediate thought went to a cloud storage solution like an S3 bucket, so I spent time trying to find AWS creds along with an aws-cli binary.

After a while, I took a step back and read the question again. It says network-based, meaning I don't need to authenticate with credentials; access is granted on the basis of the network. My mind went to a remote NFS share, so I ran cat /proc/mounts to read all the mounts in the pod. I spotted our culprit: an AWS EFS share!

fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com is an AWS EFS share.

This share was mounted at the /efs path, which held the flag. When I tried to read the flag, permission restrictions stopped me, and I couldn't simply mount the share at a different path since I'm not root.

So close yet so far :(

How do we proceed here? It suddenly struck me that if I can't mount the share, perhaps I can access it remotely, so I started reading about how to read NFS files remotely. Why NFS if it's EFS we're dealing with? Because EFS is just a managed version of NFS. I tried to find a way to access it remotely, but I caved in and opened a hint.

Hint #1 for challenge 3

The first hint in the challenge pointed me to the nfs-cat and nfs-ls utilities along with a URL. These are CLI tools for reading an NFS server's data, and I needed to use the IP mapped to the EFS server to get at it. But the command was not working.

nfs-ls not working :(

Exhausted and out of ideas, I took the other hint, which told me to use additional parameters in the NFS URL string.

Hint #2 for challenge 3

I still couldn't figure it out properly. I had the URL constructed based on the documentation linked, but just couldn't tie it together. I gave up and found this guide, where I realized I was simply using the wrong IP 🤦‍♂.

I rectified the URL and fetched the flag as shown below. Gotta say I really struggled with this one, but I was happy I learnt something new :)

Flag for Challenge 3:)
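For anyone reproducing this, the nfs-ls and nfs-cat tools are built on libnfs, whose URLs accept the NFS version and the uid/gid presented to the server as query parameters. A sketch of the URL shape (the IP below is a placeholder, not the real mount target; resolve the EFS hostname from inside the pod to get yours):

```shell
# Placeholder address: resolve the EFS share's DNS name from inside the pod.
EFS_IP="192.168.124.98"
# version selects the NFS protocol version; uid/gid set the AUTH_SYS identity
# presented to the server, which is how the file-permission check is sidestepped.
echo "nfs-ls nfs://${EFS_IP}/?version=4&uid=0&gid=0"
```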

Security Recommendation: Consider using IAM to strictly guard NFS shares in the cloud. For example, in EFS, we can apply these permissions carefully based on requirements.

Challenge 4: Bypassing the Boundaries

Description: Apparently, new service mesh technologies hold unique appeal for ultra-elite users (root users). Don’t abuse this power; use it responsibly and with caution.

Attached was the policy shown below:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: istio-get-flag
  namespace: k8s-lan-party
spec:
  action: DENY
  selector:
    matchLabels:
      app: "{flag-pod-name}"
  rules:
  - from:
    - source:
        namespaces: ["k8s-lan-party"]
    to:
    - operation:
        methods: ["POST", "GET"]

Solution: Istio is a service mesh. We have briefly touched upon Istio before in this blog, so please check that out if you want to know all about service meshes. An Istio authorization policy has been applied whereby no pod in the k8s-lan-party namespace can send a POST or GET request to pods matching certain labels. We can assume this condition matches the pod we have a shell in. I ran a quick dnscan on the pod subnet.

Service discovered istio-protected-pod-service.k8s-lan-party.svc.cluster.local

I guessed that, as in the first challenge, we would need to curl this service to fetch our flag. Unlike the first challenge, though, the curl request did not return anything:

RBAC denied :(

This makes sense. We need to somehow bypass the policy, so I searched for known bypasses of Istio authorization policies and came across this one:

The bypass states that any user with UID 1337 skips the Istio proxy that enforces the authorization policy. Luckily, this time we are root in the system, meaning we can create another user with UID 1337.

Creating a user with UID 1337

Somehow that didn't work, so I checked the passwd file instead and realised there was already a user named istio with UID 1337. I switched to that user and fetched the flag :)

Flag for Challenge 4:)
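For reference, the UID lookup can be scripted with awk; the passwd line below is a sample so the snippet is self-contained, but in the pod you would point it at /etc/passwd and then `su` to whichever user it prints:

```shell
# Field 3 of a passwd entry is the numeric UID; print the name of any UID-1337 user.
# A sample passwd line is used here to keep the snippet self-contained.
printf 'istio:x:1337:1337::/home/istio:/bin/sh\n' > /tmp/passwd.sample
awk -F: '$3 == 1337 {print $1}' /tmp/passwd.sample   # prints: istio
```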

Security Recommendation: Ensure pods are locked down securely and never given root permissions. You can use security contexts to enforce that pods do not run as the root user.
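A minimal sketch of such a security context (the pod name, UID, and image are arbitrary, not from the challenge):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: locked-down-pod
spec:
  securityContext:
    runAsNonRoot: true    # kubelet refuses to start containers running as root
    runAsUser: 10001      # arbitrary unprivileged UID
  containers:
  - name: app
    image: nginx:latest   # illustrative; nginx needs extra config to run unprivileged
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```

With runAsNonRoot enforced, the root shell used to create or switch to a UID-1337 user would not have been available in the first place.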

Challenge 5: Lateral Movement

Description: Where pods are being mutated by a foreign regime, one could abuse its bureaucracy and leak sensitive information from the administrative services.

A Kyverno policy was attached to the question, as shown below:

apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: apply-flag-to-env
  namespace: sensitive-ns
spec:
  rules:
  - name: inject-env-vars
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          - name: "*"
            env:
            - name: FLAG
              value: "{flag}"

Solution: I first ran a dnscan to check which services are available

Services available

Before going further, let's talk about Kyverno:

Kyverno is a tool for Kubernetes that helps manage configurations and enforce policies across clusters. It ensures compliance and security by defining rules through custom resource definitions (CRDs), giving precise control over resource management and behavior.

The policy given in the challenge says that any pod created in the sensitive-ns namespace will have the flag injected into its FLAG environment variable.

Since I was aware of admission controllers and mutating webhooks, I immediately understood the expectation. Below is a diagram of how it would go.

Pod admission request

To craft a pod admission request, we can use a tool called kube-review (based on the hints). I saved a pod manifest to a YAML file with the code below:

apiVersion: v1
kind: Pod
metadata:
  name: sensitive-pod
  namespace: sensitive-ns
spec:
  containers:
  - name: nginx
    image: nginx:latest

To create the admission request, download kube-review locally and run the command below:

./kube-review-darwin-amd64 create pod.yaml

Paste the resulting JSON into a pastebin, transfer it to the shell, and store it in a file named pod.json. Once the file is transferred, use the command below to send an admission request to Kyverno, which will trigger the mutating webhook and return the patch that injects the env variable.

Output of admission request.

Adding the curl request here since it isn't clear in the screenshot.

curl -X POST -H "Content-Type: application/json" --data @pod.json https://kyverno-svc.kyverno/mutate -k

Apologies for the image quality; will upload a better one ASAP. In the patch field, we can see a base64 value, which is the FLAG env patch applied to the pod as per the policy. Let's decode the base64 value to fetch the flag, thereby solving the challenge :)

echo "W3sib3AiOiJhZGQiLCJwYXRoIjoiL3NwZWMvY29udGFpbmVycy8wL2VudiIsInZhbHVlIjpbeyJuYW1lIjoiRkxBRyIsInZhbHVlIjoid2l6X2s4c19sYW5fcGFydHl7eW91LWFyZS1rOHMtbmV0LW1hc3Rlci13aXRoLWdyZWF0LXBvd2VyLXRvLW11dGF0ZS15b3VyLXdheS10by12aWN0b3J5fSJ9XX0sIHsicGF0aCI6Ii9tZXRhZGF0YS9hbm5vdGF0aW9ucyIsIm9wIjoiYWRkIiwidmFsdWUiOnsicG9saWNpZXMua3l2ZXJuby5pby9sYXN0LWFwcGxpZWQtcGF0Y2hlcyI6ImluamVjdC1lbnYtdmFycy5hcHBseS1mbGFnLXRvLWVudi5reXZlcm5vLmlvOiBhZGRlZCAvc3BlYy9jb250YWluZXJzLzAvZW52XG4ifX1d" | base64 -d | jq
Output:
[
  {
    "op": "add",
    "path": "/spec/containers/0/env",
    "value": [
      {
        "name": "FLAG",
        "value": "wiz_k8s_lan_party{you-are-k8s-net-master-with-great-power-to-mutate-your-way-to-victory}"
      }
    ]
  },
  {
    "path": "/metadata/annotations",
    "op": "add",
    "value": {
      "policies.kyverno.io/last-applied-patches": "inject-env-vars.apply-flag-to-env.kyverno.io: added /spec/containers/0/env\n"
    }
  }
]

Security Recommendation: Network policies might have prevented this; the admission controller webhook should not be so easily reachable from arbitrary pods.
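As a sketch (the pod labels, API-server address, and port are assumptions, not taken from the challenge environment), an ingress policy on the Kyverno pods would limit who can reach the webhook endpoint:

```yaml
# Hypothetical sketch: only allow ingress to Kyverno's webhook from the
# control plane, not from arbitrary workload pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-kyverno-webhook
  namespace: kyverno
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: kyverno   # assumed label on the Kyverno pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.16.0.2/32             # placeholder API-server address
    ports:
    - protocol: TCP
      port: 9443                        # assumed webhook container port
```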

This was a fun CTF from which I learnt a lot! I'd like to extend my gratitude to the entire Wiz team for putting together such an amazing CTF!


Arnav Tripathy

Feline-powered security engineer. Follow me for a wide variety of topics in the field of cyber security and dev(sec)ops. Kubestronaut FTW!