Private Google Kubernetes Engine cluster tunneling setup

Victor Yeo
4 min read · May 6, 2021

In this article, we discuss how to set up a private GKE cluster so that its IP address is not exposed to the Internet. After the setup, we use local port forwarding to access the private GKE cluster.

Go to GCP -> Kubernetes Engine

Looking at the Cluster basics section:

The endpoint field shows the cluster's control-plane IP. Next, look at the Cluster networking section:

The VPC peering (gke-n3d….) is auto-created by GKE.

Go to GCP->VPC network

In this article, we click dev-default-k8s

The above shows the node IP range 10.50.0.0/23, the pod IP range 10.50.16.0/20, and the services IP range 10.50.2.0/23.

The node IP range contains the Google Compute Engine VM instance IP address (shown in the VM instance details screen below). Do not use this IP; it is not reachable because its ports are not open.
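To see which of the three ranges a given private address falls into, a quick membership check can be scripted locally. The snippet below is a sketch: only the three CIDR ranges come from this article, and the sample addresses (such as 10.50.0.5) are illustrative.

```shell
#!/usr/bin/env bash
# Check whether an IPv4 address falls inside a CIDR range.

ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

in_cidr() {
  local ip=$1 cidr=$2
  local net=${cidr%/*} bits=${cidr#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

# 10.50.0.5 and 10.50.16.9 are hypothetical addresses for illustration.
in_cidr 10.50.0.5  10.50.0.0/23  && echo "in node range"
in_cidr 10.50.16.9 10.50.16.0/20 && echo "in pod range"
in_cidr 10.50.16.9 10.50.2.0/23  || echo "not in services range"
```

Note that /23 covers only two adjacent /24s, so 10.50.0.0/23 spans 10.50.0.0 through 10.50.1.255 — the services range 10.50.2.0/23 starts immediately after it.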

Go to GCP->VPC network->VPC network peering

(VPC peering is a networking connection between two VPCs.)

In this article, we click the masterdata-prd-peer.

As shown below, the exported routes are the private IP ranges of the GKE pods and services.

As shown below, the imported routes are the IP ranges of the other VPC. The IP addresses are masked out in the picture below for privacy reasons.

In VPC peering, privately used public IP subnet routes are not exchanged automatically: a network must explicitly export them for other networks to use them, and a network must explicitly import them to receive them from a peer.
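These export/import settings can be flipped on an existing peering from the CLI. The command below is a sketch using the peering and network names that appear in this article; substitute your own, and note it only takes effect once both sides of the peering agree (it needs GCP credentials, so it is shown as a configuration step, not run here).

```shell
# Enable exchange of privately used public IP subnet routes on an
# existing peering (names from this article; adjust to your project).
gcloud compute networks peerings update masterdata-prd-peer \
  --network=mynetwork \
  --export-subnet-routes-with-public-ip \
  --import-subnet-routes-with-public-ip
```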

Go to GCP->network services->Cloud NAT

The default NAT gateway is attached to the network “mynetwork”, which GKE uses by default.

On your laptop:

Open a terminal, run the command:

gcloud container clusters get-credentials default-k8s --region asia-southeast1 --project my-software-dev

Edit the file ~/.kube/config

Change:

  • cluster

server: https://kubernetes:8765

Below is a screenshot that shows how to edit the “server” field.

  • context

name: just-name
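Put together, the edited entries in ~/.kube/config look roughly like the fragment below. This is a sketch: the long cluster and user names are whatever `get-credentials` generated, elided fields are left untouched, and the port in `server` must match the local end of the SSH tunnel (8765 in this article).

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: # …unchanged…
    server: https://kubernetes:8765   # was the private endpoint IP
  name: gke_my-software-dev_asia-southeast1_default-k8s
contexts:
- context:
    cluster: gke_my-software-dev_asia-southeast1_default-k8s
    user: gke_my-software-dev_asia-southeast1_default-k8s
  name: just-name                     # renamed for convenience
current-context: just-name
# users section unchanged
```

The hostname `kubernetes` is used (rather than 127.0.0.1) because the API server's TLS certificate includes `kubernetes` among its subject alternative names, so kubectl's certificate verification passes through the tunnel.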

After that, edit /etc/hosts

Add:

127.0.0.1 kubernetes

Then, run the command:

gcloud compute ssh bastion-vm --zone asia-southeast1-b --project my-software-dev -- -L 8765:10.50.11.2:443

(This sets up local port forwarding from localhost port 8765 to 10.50.11.2:443; keep this command running.)

The bastion-vm is a GCE VM instance with TCP port 22 open for incoming SSH.

Open another terminal:

# use context
kubectl config use-context just-name
# show context
kubectl config current-context

Then, you can deploy pods to the private GKE cluster.

For a quick test, run:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide

Update in December 2021:

For accessing pods in the private cluster, run:

gcloud compute ssh bastion-vm --zone asia-southeast1-b --project my-software-dev -- -L 8991:localhost:8888

The above command sets up an SSH tunnel to tinyproxy, which runs on the bastion VM and listens on port 8888. We use localhost because tinyproxy is configured to listen only on the loopback interface. When we set the proxy in Postman to localhost:8991, HTTP traffic is forwarded to port 8888 on the bastion (the tinyproxy service).
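For a quick check without Postman, the same proxy can be exercised with curl. This is a sketch that needs the tunnel above running; the target address and port are hypothetical — substitute a pod or service address that is reachable from the bastion.

```shell
# Send a request through the local end of the tunnel; tinyproxy on the
# bastion relays it into the VPC. 10.50.16.9:8080 is a hypothetical pod.
curl -x http://localhost:8991 http://10.50.16.9:8080/
```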

Congrats, you have reached the end of the article.
